
We have imported a ResNet50 model pretrained on ImageNet, and we want to add a few deconvolution (transposed convolution) layers on top of it for semantic segmentation.

We are working in Google Colaboratory, using Keras with TensorFlow as the backend.

import keras
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, Activation, Conv2DTranspose, Reshape, UpSampling2D
from keras.models import Model
from keras.regularizers import l2
from keras import backend as K

height = 224  # dimensions of the input image
width = 224
channel = 3

# Importing the ResNet architecture pretrained on ImageNet
resnet_model = ResNet50(weights = 'imagenet', input_shape=(height, width, channel))
# Removing the classification layer and the final average pooling layer
resnet_model.layers.pop()
resnet_model.layers.pop()
#resnet_model.summary() 


# Upsampling
conv1 = Conv2DTranspose(28, (3, 3), strides=(2, 2), activation=None,
                        kernel_regularizer=l2(0.))(resnet_model.outputs)
model = Model(inputs=resnet_model.input, outputs=conv1)

We get the following error:

"ValueError: Input 0 is incompatible with layer conv2d_transpose_1: expected ndim=4, found ndim=2"

It looks as if the output of our ResNet model (with the last two layers removed) is still a one-dimensional vector per sample, whereas we need a three-dimensional feature map.

Here is the final part of the "resnet_model.summary()" output after popping:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_10 (InputLayer)           (None, 224, 224, 3)  0                                            
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)  0           input_10[0][0]                   
__________________________________________________________________________________________________
.
.
.
.
.          
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 7, 7, 512)    2048        res5c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_489 (Activation)     (None, 7, 7, 512)    0           bn5c_branch2b[0][0]              
__________________________________________________________________________________________________
res5c_branch2c (Conv2D)         (None, 7, 7, 2048)   1050624     activation_489[0][0]             
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 7, 7, 2048)   8192        res5c_branch2c[0][0]             
__________________________________________________________________________________________________
add_160 (Add)                   (None, 7, 7, 2048)   0           bn5c_branch2c[0][0]              
                                                                 activation_487[0][0]             
__________________________________________________________________________________________________
activation_490 (Activation)     (None, 7, 7, 2048)   0           add_160[0][0]                    
==================================================================================================
Total params: 23,587,712
Trainable params: 23,534,592
Non-trainable params: 53,120
__________________________________________________________________________________________________

How can we fix this?


1 Answer


Don't do this:

resnet_model.layers.pop()   

Pop is rather meaningless for functional models, because the layers are no longer sequential, especially for a ResNet with its residual connections. If you check after popping, summary() confirms the layers have been removed, but the computation graph still contains them:

>>> resnet_model.output
<tf.Tensor 'fc1000/Softmax:0' shape=(?, 1000) dtype=float32>
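
If you really do need to truncate an already-built functional model, the pattern is to rebuild a new Model from an intermediate tensor rather than popping layers. A minimal sketch, assuming the layer name 'activation_490' from the summary above (layer names are auto-generated and differ between instantiations, so check your own summary() first):

from keras.models import Model

# Grab the tensor produced by the last activation in the truncated summary above.
# 'activation_490' is taken from that summary; your instance may name it differently.
features = resnet_model.get_layer('activation_490').output  # shape (None, 7, 7, 2048)
backbone = Model(inputs=resnet_model.input, outputs=features)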

That said, the supported way to build the model without the classification layers is to pass include_top=False:

resnet_model = ResNet50(weights = 'imagenet', input_shape=(224,224,3), include_top=False)

You can confirm, by instantiating the model, that the output tensor now has the expected shape and semantics:

>>> resnet_model.output
<tf.Tensor 'activation_98/Relu:0' shape=(?, 7, 7, 2048) dtype=float32>

One more thing: prefer model.output over model.outputs, since this particular model has only one output.
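
Putting it together with the transposed convolution from the question, a minimal sketch (the 28 filters and the single upsampling step come straight from the question; a real segmentation head would typically keep upsampling until it reaches the 224x224 input resolution):

from keras.applications.resnet50 import ResNet50
from keras.layers import Conv2DTranspose
from keras.models import Model
from keras.regularizers import l2

resnet_model = ResNet50(weights='imagenet', input_shape=(224, 224, 3), include_top=False)

# Backbone output is (None, 7, 7, 2048); with valid padding and strides=(2, 2)
# the transposed convolution produces (None, 15, 15, 28).
conv1 = Conv2DTranspose(28, (3, 3), strides=(2, 2), activation=None,
                        kernel_regularizer=l2(0.))(resnet_model.output)
model = Model(inputs=resnet_model.input, outputs=conv1)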

answered 2019-07-19T09:35:43.680