
This is my local binary pattern function:

import cv2
import numpy as np
import torch
from skimage.feature import local_binary_pattern

def lbp(x):
    # convert to float32 and to grayscale before extracting the LBP
    imgUMat = np.float32(x)
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)

    radius = 2
    n_points = 8 * radius
    METHOD = 'uniform'

    lbp = local_binary_pattern(gray, n_points, radius, METHOD)
    lbp = torch.from_numpy(lbp).long()

    return lbp

Here is where I call the lbp function:

input_img = plt.imread(trn_fnames[31])
x = lbp(input_img)

When I check x.shape, it is:

torch.Size([600, 600])

Looks good!!!

But my problem is that when I use transforms.Lambda(lbp) inside a transform pipeline, my output image is torch.Size([600]):

tfms = transforms.Compose([
    transforms.Lambda(lbp)])

train_ds = datasets.ImageFolder(trn_dir, transform = tfms)

(train_ds[0][0][0]).shape
torch.Size([600])!!! >>>> my problem

I need torch.Size([600, 600]).
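(For reference, a minimal way to check where the dimension goes, using the train_ds defined below: train_ds[0] is an (image, label) pair, so each extra [0] indexes one level deeper into the image tensor.)

img, label = train_ds[0]
print(img.shape)     # shape of the full transform output
print(img[0].shape)  # the extra [0] selects the first row, giving torch.Size([600])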

I have also tried different variants, such as:

tfms = transforms.Compose([
    transforms.Lambda(lbp),
    transforms.ToPILImage(),
    transforms.Resize((sz, sz))])

and I got this error:

TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>.

I also added

transforms.ToTensor()])

but I still got the same error:

TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>.
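To narrow down which transform fails, the steps can be run one at a time outside Compose (a minimal sketch, assuming input_img as loaded above; note that lbp returns an int64 tensor, and in my understanding ToPILImage accepts float32 tensors but not int64 ones):

t = lbp(input_img)                # int64 tensor, shape [600, 600]
pil = transforms.ToPILImage()(t)  # if this line raises, the failure is isolated here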

I would appreciate your input! Thank you.


1 Answer

import cv2
import numpy as np
import torch
from skimage.feature import local_binary_pattern
from torchvision import transforms

def lbp_transform(x):
    radius = 2
    n_points = 8 * radius
    METHOD = 'uniform'
    imgUMat = np.float32(x)
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)
    lbp = local_binary_pattern(gray, n_points, radius, METHOD)
    # return float so the tensor can be handed to ToPILImage below
    lbp = torch.from_numpy(lbp).float()
    return lbp
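The one functional change from the question's version is the final cast: .float() instead of .long(). To my knowledge, ToPILImage can build an image from a float32 tensor but not from an int64 one, so the long tensor was a likely source of trouble in the composed pipeline.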


x = np.random.randn(600, 600, 3)
out = lbp_transform(x)
print(out.shape)
> torch.Size([600, 600])

tfms = transforms.Compose([
    transforms.Lambda(lbp_transform),
    transforms.ToPILImage(),
    transforms.Resize((300, 300)),
    transforms.ToTensor()
])

out = tfms(x)
print(out.shape)
> torch.Size([1, 300, 300])
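The same pipeline should then plug into ImageFolder (a sketch, assuming trn_dir is laid out the way datasets.ImageFolder expects; ImageFolder passes a PIL image to the transform, and np.float32(...) inside lbp_transform converts it to an array):

train_ds = datasets.ImageFolder(trn_dir, transform=tfms)
img, label = train_ds[0]
print(img.shape)
> torch.Size([1, 300, 300])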
answered Jul 24, 2020 at 17:44