I am working on neural style transfer. I am trying to reconstruct the output of the convolutional layer conv4_2 of the VGG19 network.
def get_features(image, model):
    # map indices of the modules in vgg.features to readable layer names
    layers = {'0': 'conv1_1', '5': 'conv2_1', '10': 'conv3_1',
              '19': 'conv4_1', '21': 'conv4_2', '28': 'conv5_1'}
    x = image
    features = {}
    for name, layer in model._modules.items():
        x = layer(x)
        if name in layers:
            features[layers[name]] = x
    return features
content_img_features = get_features(content_img, vgg)
style_img_features = get_features(style_img, vgg)
target_content = content_img_features['conv4_2']
content_img_features is a dictionary containing the output of each of these layers. target_content is a tensor of shape torch.Size([1, 512, 50, 50]).
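For reference, this is roughly how I set up vgg and checked those shapes (a minimal sketch, assuming the feature extractor is torchvision's pretrained VGG19 convolutional part with frozen weights; the print loop is just for inspection):

import torch
from torchvision import models

vgg = models.vgg19(pretrained=True).features  # or weights=models.VGG19_Weights.DEFAULT on newer torchvision
for p in vgg.parameters():
    p.requires_grad_(False)

content_img_features = get_features(content_img, vgg)
for layer_name, feat in content_img_features.items():
    print(layer_name, tuple(feat.shape))
# for my 400x400 content image, conv4_2 prints (1, 512, 50, 50)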
This is the method I use to plot an image from a tensor. It works fine for the input images as well as for the final output.
def tensor_to_image(tensor):
    image = tensor.clone().detach()
    image = image.numpy().squeeze()
    image = image.transpose(1, 2, 0)
    image *= np.array((0.22, 0.22, 0.22))+ np.array((0.44, 0.44, 0.44))
    image = image.clip(0, 1)
    return image
image = tensor_to_image(target_content)
fig = plt.figure()
plt.imshow(image)
But this raises the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-188-a75a5f0743bb> in <module>()
1
----> 2 image = tensor_to_image(target_content)
3 fig = plt.figure()
4 plt.imshow(image)
<ipython-input-186-e9385dbc4a85> in tensor_to_image(tensor)
3 image = image.numpy().squeeze()
4 image = image.transpose(1, 2, 0)
----> 5 image *= np.array((0.22, 0.22, 0.22))+ np.array((0.44, 0.44, 0.44))
6 image = image.clip(0, 1)
7 return image
ValueError: operands could not be broadcast together with shapes (50,50,512) (3,) (50,50,512)
This is the initial transformation I apply to the images before passing them to the CNN layers:
def transformation(img):
    tasks = tf.Compose([tf.Resize(400), tf.ToTensor(),
                        tf.Normalize((0.44, 0.44, 0.44), (0.22, 0.22, 0.22))])
    img = tasks(img)[:3, :, :].unsqueeze(0)
    return img
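And this is how the images get loaded and transformed (a minimal sketch; the PIL loading step and the file name are placeholders rather than my exact code):

from PIL import Image
import torchvision.transforms as tf

content_img = Image.open('content.jpg').convert('RGB')  # placeholder file name
content_img = transformation(content_img)
print(content_img.shape)  # torch.Size([1, 3, 400, 400]) for a square source image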
How do I fix this? Is there another way to reconstruct an image from a convolutional layer?
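(For context on what I mean by "reconstruct": my understanding from style-transfer write-ups is that instead of plotting the 512-channel feature map directly, one optimizes a target image so that its own conv4_2 activations match target_content, roughly like the untested sketch below. I am not sure whether this is the right approach.)

target = content_img.clone().detach().requires_grad_(True)
optimizer = torch.optim.Adam([target], lr=0.01)

for step in range(300):
    optimizer.zero_grad()
    target_features = get_features(target, vgg)
    # content loss: mean squared difference of the conv4_2 activations
    content_loss = torch.mean((target_features['conv4_2'] - target_content.detach()) ** 2)
    content_loss.backward()
    optimizer.step()

plt.imshow(tensor_to_image(target))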