
Hi, I have created a custom dataset using a dataset class in which both the input and the target are images, as in semantic segmentation and pix2pix. I load the data with ImageFolder and a custom collate function, and I am trying to feed my custom dataset to a neural network for training through a DataLoader, but an error occurs saying the input should be a tensor, not a list:

My collate function:

def my_collate(batch):
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    return [data, target]

The dataset class:

class bsds_dataset(Dataset):
    def __init__(self, ds_main, ds_energy):
        self.dataset1 = ds_main
        self.dataset2 = ds_energy

    def __getitem__(self, index):
        x1 = self.dataset1[index]
        x2 = self.dataset2[index]
        return x1, x2

    def __len__(self):
        return len(self.dataset1)

Loading the dataset:

generic_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.ToPILImage(),
    #transforms.CenterCrop(size=128),
    #transforms.Lambda(lambda x: myimresize(x, (128, 128))),
    transforms.ToTensor(),
    #transforms.Normalize((0., 0., 0.), (6, 6, 6))
])
original_imagefolder = './images/whole'
target_imagefolder = './results/whole'

original_ds = ImageFolder(original_imagefolder, transform=generic_transform)
energy_ds = ImageFolder(target_imagefolder, transform=generic_transform)

dataset = bsds_dataset(original_ds, energy_ds)
loader = DataLoader(dataset, batch_size=16, collate_fn=my_collate)

epochs = 2
model = UNet(1, depth=5, merge_mode='concat')
model.cuda()
loss = torch.nn.MSELoss()
criterion_pixelwise = torch.nn.L1Loss()

loss.cuda()
criterion_pixelwise.cuda()

optimizer = optim.SGD(model.parameters(), lr=0.001)

Tensor = torch.cuda.FloatTensor
# main loop
for epoch in range(epochs):
    for i, batch in enumerate(loader):
        original, target = batch
        out = model(original)

And this error occurred:

  TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

I am trying to forward each batch through the model. The batch produced in the loop is a list, but it should be a tensor, and I don't know how to convert it to a tensor or how to feed each instance of the batch to the model. Please help. Thanks very much.

Full traceback:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-147-d1dea1bc00f8> in <module>
     15     for i, batch in enumerate(loader):
     16         original, target = batch
---> 17         out = model(original)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

<ipython-input-7-5f743c3455c4> in forward(self, x)
     89         # encoder pathway, save outputs for merging
     90         for i, module in enumerate(self.down_convs):
---> 91             x, before_pool = module(x)
     92             encoder_outs.append(before_pool)
     93 

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

<ipython-input-5-26a0f7e21ea6> in forward(self, x)
     14 
     15     def forward(self, x):
---> 16         x = F.relu(self.conv1(x))
     17         x = F.relu(self.conv2(x))
     18         before_pool = x

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339 
    340 

TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

1 Answer


Your collate function is the problem. Even though the individual dataset samples are tensors, the batch it returns is a list containing two lists.

At a minimum, both data and target need to be tensors. See here for more information.
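
As a quick illustration (with made-up 3x128x128 image tensors, not your actual data), torch.stack is what turns a Python list of equally sized per-sample tensors into the single batched tensor that conv2d expects:

import torch

# four made-up 3x128x128 image tensors standing in for one batch
samples = [torch.rand(3, 128, 128) for _ in range(4)]

as_list = samples                # what your current collate returns: a plain Python list
as_batch = torch.stack(samples)  # what the model needs: one (N, C, H, W) tensor

print(type(as_list))   # <class 'list'>  -> this is what conv2d rejects
print(as_batch.shape)  # torch.Size([4, 3, 128, 128])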

Maybe something like this will work:

def my_collate(batch):
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    # stack the per-sample tensors into batched (N, C, H, W) tensors
    return torch.stack(data), torch.stack(target)
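
An untested sketch of how the rest of your loop might then look, reusing the names from your question; since the model was moved to the GPU with model.cuda(), the batched tensors also have to be moved there before the forward pass:

loader = DataLoader(dataset, batch_size=16, collate_fn=my_collate)

for epoch in range(epochs):
    for i, (original, target) in enumerate(loader):
        # my_collate now yields stacked tensors, so .cuda() works here
        original, target = original.cuda(), target.cuda()
        out = model(original)

Note that stacking only works if every image in a batch has the same spatial size, so you may need to re-enable one of the commented-out crop/resize transforms.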
answered 2021-02-02T09:02:44.257