
I am doing the Udacity Deep Learning Nanodegree and working on the autoencoder mini-project. I don't understand the solution, and I don't know how to check it myself, so this is really two questions.

We start from 28*28 images. These are fed through 3 convolutional layers, each with padding of 1, and each followed by max pooling that halves the spatial size. What I don't understand is the last step: two rounds of max pooling, (28/2)/2, give 7, so surely further max pooling shouldn't be possible, since it would produce an odd (non-integer) size. Can someone explain why this still works? The code to reproduce it is here:

import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms

# convert data to torch.FloatTensor
transform = transforms.ToTensor()

# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
                                   download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
                                  download=True, transform=transform)

# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20

# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)

import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class ConvDenoiser(nn.Module):
    def __init__(self):
        super(ConvDenoiser, self).__init__()
        ## encoder layers ##
        # conv layer (depth from 1 --> 32), 3x3 kernels
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)  
        # conv layer (depth from 32 --> 16), 3x3 kernels
        self.conv2 = nn.Conv2d(32, 16, 3, padding=1)
        # conv layer (depth from 16 --> 8), 3x3 kernels
        self.conv3 = nn.Conv2d(16, 8, 3, padding=1)
        # pooling layer to reduce x-y dims by two; kernel and stride of 2
        self.pool = nn.MaxPool2d(2, 2)

        ## decoder layers ##
        # transpose layer, a kernel of 2 and a stride of 2 will increase the spatial dims by 2
        self.t_conv1 = nn.ConvTranspose2d(8, 8, 3, stride=2)  # kernel_size=3 to get to a 7x7 image output
        # two more transpose layers with a kernel of 2
        self.t_conv2 = nn.ConvTranspose2d(8, 16, 2, stride=2)
        self.t_conv3 = nn.ConvTranspose2d(16, 32, 2, stride=2)
        # one, final, normal conv layer to decrease the depth
        self.conv_out = nn.Conv2d(32, 1, 3, padding=1)


    def forward(self, x):
        ## encode ##
        # add hidden layers with relu activation function
        # and maxpooling after
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        # add second hidden layer
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        # add third hidden layer
        x = F.relu(self.conv3(x))
        x = self.pool(x)  # compressed representation

        ## decode ##
        # add transpose conv layers, with relu activation function
        x = F.relu(self.t_conv1(x))
        x = F.relu(self.t_conv2(x))
        x = F.relu(self.t_conv3(x))
        # transpose again, output should have a sigmoid applied
        x = F.sigmoid(self.conv_out(x))

        return x

# initialize the NN
model = ConvDenoiser()
print(model)

I wanted to understand this by manually passing a single image through the layers and seeing what the result is, but that led to an error. Can someone explain to me how to see the shape as it passes through the layers? The code is a bit messy, but I left it in so you can see what I tried.

dataiter = iter(train_loader)
images, labels = dataiter.next()
# images = images.numpy()

# get one image from the batch
# img = np.squeeze(images[0])
img=images[0]

#create hidden layer
conv1 = nn.Conv2d(1, 32, 3, padding=1)  

# z=torch.from_numpy(images[0])
z1=conv1(img)

Thanks for any insight you can give me.
Thanks,
J


1 Answer


Regarding your first question:
You can read in the MaxPool2d documentation how the output shape of max pooling is computed. You can max-pool an odd-sized tensor with an even stride, with or without padding; you just need to be aware of the border, where some pixels may be dropped.
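
For concreteness, here is a minimal check (my own sketch, not part of the original answer) showing that nn.MaxPool2d with a kernel and stride of 2 simply floors the output size, so the 7x7 feature maps pool down to 3x3:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, 2)      # kernel and stride of 2, no padding
x = torch.randn(1, 8, 7, 7)    # the 7x7 feature maps left after the second pooling step
print(pool(x).shape)           # torch.Size([1, 8, 3, 3])

With dilation 1 the output height is floor((H_in + 2*padding - kernel_size) / stride + 1) = floor((7 - 2) / 2 + 1) = 3; the leftover last row and column of pixels are simply discarded.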


Regarding your second question:
Your model expects a 4D input: batch-channel-height-width.
By selecting only one image from the batch (img=images[0]), you remove the batch dimension and end up with a 3D tensor.
To fix this:

img=images[0:1, ...]  # select first image, but leave batch dimension as a singleton
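
With the batch dimension kept, you can then print the tensor shape after each encoder stage to see how 28x28 shrinks to 14x14, 7x7 and finally 3x3. This is only a sketch that reuses the ConvDenoiser, F and train_loader defined in the question:

dataiter = iter(train_loader)
images, labels = next(dataiter)
img = images[0:1, ...]                     # torch.Size([1, 1, 28, 28])

model = ConvDenoiser()
x = model.pool(F.relu(model.conv1(img)))
print(x.shape)                             # torch.Size([1, 32, 14, 14])
x = model.pool(F.relu(model.conv2(x)))
print(x.shape)                             # torch.Size([1, 16, 7, 7])
x = model.pool(F.relu(model.conv3(x)))
print(x.shape)                             # torch.Size([1, 8, 3, 3])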
answered 2020-05-24 at 10:01