TL;DR: Use the convolution from the functional toolbox, torch.nn.functional.conv2d, not torch.nn.Conv2d, and flip the filter around both the vertical and horizontal axes.
torch.nn.Conv2d is a convolutional layer for a network. Because its weights are learned, it does not matter whether it is implemented using cross-correlation: the network will simply learn a mirrored version of the kernel (thanks to @etarion for this clarification).
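To make the distinction concrete, here is a small sketch (my addition, not part of the original answer) showing that an nn.Conv2d layer whose weight is set by hand computes exactly the same cross-correlation as F.conv2d, since the layer is just a learnable wrapper around the functional form:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# The same vertical edge-detection kernel used later in this answer.
kernel = torch.tensor([[[[-1.0, 1.0]]]])

# A Conv2d layer with its learnable weight overwritten by our kernel.
layer = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)
with torch.no_grad():
    layer.weight.copy_(kernel)

# A test image of a square, as a 1x1x5x7 batch.
x = torch.zeros(1, 1, 5, 7)
x[0, 0, 1:4, 2:5] = 1.0

# The layer and the functional call produce identical results.
assert torch.equal(layer(x), F.conv2d(x, kernel))
```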
torch.nn.functional.conv2d performs the convolution with the input and weights supplied as arguments, similar to the tensorflow function in your example. I wrote a simple test to determine whether, like the tensorflow function, it actually performs cross-correlation, and whether the filter must be flipped to get correct convolution results.
import torch
import torch.nn.functional as F
import torch.autograd as autograd
import numpy as np
# A vertical edge detection filter.
# Because this filter is not symmetric, the filter must be flipped before
# element-wise multiplication to get a correct convolution.
filters = autograd.Variable(torch.FloatTensor([[[[-1, 1]]]]))
# A test image of a square
inputs = autograd.Variable(torch.FloatTensor([[[[0, 0, 0, 0, 0, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 0, 0, 0, 0, 0]]]]))
print(F.conv2d(inputs, filters))
This outputs:
Variable containing:
(0 ,0 ,.,.) =
0 0 0 0 0 0
0 1 0 0 -1 0
0 1 0 0 -1 0
0 1 0 0 -1 0
0 0 0 0 0 0
[torch.FloatTensor of size 1x1x5x6]
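As a sanity check outside PyTorch (my addition, assuming SciPy is available), scipy.signal.correlate2d with mode='valid' reproduces the same numbers, which confirms that the output above is cross-correlation rather than convolution:

```python
import numpy as np
from scipy.signal import correlate2d

# The same square image and vertical edge filter as plain numpy arrays.
img = np.zeros((5, 7), dtype=np.float32)
img[1:4, 2:5] = 1.0
kernel = np.array([[-1.0, 1.0]], dtype=np.float32)

# mode='valid' matches F.conv2d with no padding: the 1x1x5x6 output becomes 5x6.
out = correlate2d(img, kernel, mode='valid')
print(out)
```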
This output is the result of cross-correlation. Therefore, we need to flip the filter:
def flip_tensor(t):
    flipped = t.numpy().copy()
    for i in range(len(t.size())):   # was filters.size(); flip every dimension of t
        flipped = np.flip(flipped, i)  # reverse the given tensor on dimension i
    return torch.from_numpy(flipped.copy())
print(F.conv2d(inputs, autograd.Variable(flip_tensor(filters.data))))
The new output is the correct result of the convolution:
Variable containing:
(0 ,0 ,.,.) =
0 0 0 0 0 0
0 -1 0 0 1 0
0 -1 0 0 1 0
0 -1 0 0 1 0
0 0 0 0 0 0
[torch.FloatTensor of size 1x1x5x6]
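On newer PyTorch versions (0.4 and later, where Variable was merged into Tensor), the numpy round-trip can be avoided entirely: torch.flip reverses the kernel along its spatial dimensions directly. A minimal sketch of the same test (variable names are mine):

```python
import torch
import torch.nn.functional as F

# The same filter and square image as above, as plain tensors.
filters = torch.tensor([[[[-1.0, 1.0]]]])
inputs = torch.zeros(1, 1, 5, 7)
inputs[0, 0, 1:4, 2:5] = 1.0

# torch.flip reverses the kernel along the height and width dimensions,
# turning F.conv2d's cross-correlation into a true convolution.
true_conv = F.conv2d(inputs, torch.flip(filters, dims=(2, 3)))
cross_corr = F.conv2d(inputs, filters)

# For this antisymmetric kernel, flipping simply negates the result.
assert torch.equal(true_conv, -cross_corr)
print(true_conv)
```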