
My dataset contains grayscale images of size 128 * 128 * 1, with a batch size of 10. I am using a CNN model, but BatchNorm2d raises this error:

expected 4D input (got 2D input)

Below is how I transform the images (grayscale -> tensor -> normalize) and split them into batches:

import os
import torch
import torch.utils.data
from torchvision import datasets, transforms

data_transforms = {
    'train': transforms.Compose([
        transforms.Grayscale(num_output_channels=1),
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5])
    ]),
    'val': transforms.Compose([
        transforms.Grayscale(num_output_channels=1),
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5])
    ]),
}


data_dir = '/content/drive/My Drive/Colab Notebooks/pytorch'
dsets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
         for x in ['train', 'val']}
dset_loaders = {x: torch.utils.data.DataLoader(dsets[x], batch_size=10,
                                               shuffle=True, num_workers=25)
                for x in ['train', 'val']}
dset_sizes = {x: len(dsets[x]) for x in ['train', 'val']}
dset_classes = dsets['train'].classes
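
As a quick sanity check (a minimal sketch assuming the dset_loaders defined above), each batch coming out of these loaders should be a 4D tensor of shape [10, 1, 128, 128]:

# Sketch: inspect one batch from the training loader defined above.
images, labels = next(iter(dset_loaders['train']))
print(images.shape)   # torch.Size([10, 1, 128, 128])  -> (batch, channels, H, W)
print(labels.shape)   # torch.Size([10])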

I used this model:

import torch.nn as nn

class HeartNet(nn.Module):
    def __init__(self, num_classes=7):
        
        super(HeartNet, self).__init__()

        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(64),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(64),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(128),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(128),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(256),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(256),
            nn.MaxPool2d(kernel_size=2, stride=2)
            )

        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(16*16*256, 2048),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(2048),
            nn.Linear(2048, num_classes)
            )

        nn.init.xavier_uniform_(self.classifier[1].weight)
        nn.init.xavier_uniform_(self.classifier[4].weight)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 16 * 16 * 256)
        x = self.classifier(x)
        return x
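
The 16 * 16 * 256 figure in the classifier comes from the three 2x2 max-pool layers halving the 128 x 128 input three times (128 -> 64 -> 32 -> 16), while the last convolution block outputs 256 channels. A minimal sketch (assuming the imports and the HeartNet class above) to confirm the shape that reaches the classifier:

# Sketch: trace a dummy batch through the feature extractor only.
# model.features runs fine; the error only occurs later, inside model.classifier.
model = HeartNet(num_classes=7)
dummy = torch.randn(10, 1, 128, 128)      # (batch, channels, H, W)
feats = model.features(dummy)
print(feats.shape)                         # torch.Size([10, 256, 16, 16])
flat = feats.view(feats.size(0), -1)
print(flat.shape)                          # torch.Size([10, 65536]) == 10 x 16*16*256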

How can I solve this problem?


1 Answer


The problem is with the batch-norm layer in your self.classifier sub-network: while your self.features sub-network is fully convolutional and requires BatchNorm2d (which expects a 4D input of shape batch x channels x height x width), the self.classifier sub-network is a fully-connected multi-layer perceptron (MLP) and is one-dimensional in nature, so its input is only 2D (batch x features). Note how the forward function removes the spatial dimensions from the feature map x before feeding it to the classifier.

Try replacing the BatchNorm2d in self.classifier with BatchNorm1d.
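
A sketch of the fix (a drop-in replacement for the self.classifier block in __init__, everything else unchanged):

self.classifier = nn.Sequential(
    nn.Dropout(0.5),
    nn.Linear(16 * 16 * 256, 2048),
    nn.ELU(inplace=True),
    nn.BatchNorm1d(2048),   # was nn.BatchNorm2d(2048); BatchNorm1d accepts the 2D (batch, features) input
    nn.Linear(2048, num_classes)
)

Since only the normalization layer type changes, the classifier[1] and classifier[4] indices used for the xavier_uniform_ initialization remain valid.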

answered 2020-07-20T08:21:21.173