
I cannot get this model to compile.

I am attempting to implement VGG16, but with a custom loss function. The target variable has shape (?, 14, 14, 9, 6). We only use binary cross-entropy on it, with Y_train[:,:,:,:,1] holding the actual binary labels and Y_train[:,:,:,:,0] acting as a switch that effectively turns the loss off where appropriate (a kind of mini-batching); the remaining variables will be used on a separate branch of the neural net. This branch is a binary classification problem, so I only want to output shape (?, 14, 14, 9, 1).
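
For context, here is a rough NumPy sketch of how I intend the switch to work (the arrays and values below are just illustrative dummies, not my actual data):

import numpy as np

# Dummy stand-ins for one mini-batch of targets and network outputs.
Y_dummy = np.random.randint(0, 2, size=(2, 14, 14, 9, 6)).astype('float32')
P_dummy = np.random.uniform(0.01, 0.99, size=(2, 14, 14, 9, 1)).astype('float32')

switch = Y_dummy[:, :, :, :, 0]   # 1 where an anchor contributes to the loss, 0 otherwise
labels = Y_dummy[:, :, :, :, 1]   # the binary class labels for this branch

# Element-wise binary cross-entropy, masked by the switch and normalised
# by the number of active anchors.
bce = -(labels * np.log(P_dummy[..., 0]) + (1 - labels) * np.log(1 - P_dummy[..., 0]))
loss = np.sum(switch * bce) / (1e-4 + np.sum(switch))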

I have listed my error below. Could you please explain, firstly, what has gone wrong and, secondly, how I can mitigate it?

Model code:

from keras.layers import Input, Conv2D, MaxPooling2D, Reshape
from keras.models import Model
from keras import backend as K

img_input = Input(shape=(224, 224, 3))

x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)

x = Conv2D(512, (3, 3), padding='same', activation='relu', kernel_initializer='normal', name='rpn_conv1')(x)

x_class = Conv2D(9, (1, 1), activation='sigmoid', kernel_initializer='uniform', name='rpn_out_class')(x)

x_class = Reshape((14,14,9,1))(x_class)
model = Model(inputs=img_input, outputs=x_class)
model.compile(loss=rpn_loss_cls(), optimizer='adam')

Loss function code:

def rpn_loss_cls(lambda_rpn_class=1.0, epsilon=1e-4):

    def rpn_loss_cls_fixed_num(y_true, y_pred):
        # Cross-entropy against the label channel, masked by the switch channel
        # and normalised by the number of active anchors.
        return lambda_rpn_class * K.sum(
            y_true[:, :, :, :, 0]
            * K.binary_crossentropy(y_pred[:, :, :, :, :], y_true[:, :, :, :, 1])
        ) / K.sum(epsilon + y_true[:, :, :, :, 0])

    return rpn_loss_cls_fixed_num

Error:

ValueError: logits and labels must have the same shape ((?, ?, ?, ?) vs (?, 14, 14, 9, 1))

Note: I have read multiple questions on this site with the same error, but none of the solutions allowed my model to compile.

Potential solution:

I kept fiddling with this and found that by adding

y_true = K.expand_dims(y_true, axis=-1)

I was able to compile the model. I am still doubtful whether this will actually work correctly.
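
A minimal check of what that extra axis does to the shapes (using a concrete dummy tensor in place of the real y_true, purely for illustration):

import tensorflow as tf
from keras import backend as K

y_true_dummy = tf.zeros((2, 14, 14, 9, 6))            # stand-in for one batch of targets
y_true_dummy = K.expand_dims(y_true_dummy, axis=-1)
print(y_true_dummy.shape)                             # (2, 14, 14, 9, 6, 1)
print(y_true_dummy[:, :, :, :, 1].shape)              # (2, 14, 14, 9, 1): same rank as y_pred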


1 Answer


Keras creates the y_true tensor for the loss itself, based on the model's output rather than on the (?, 14, 14, 9, 6) array you intend to feed, so your loss function ends up with a shape mismatch. You need to align the dimensions using expand_dims; this, however, has to be done with your model architecture, data and loss function in mind. The code below will compile.

import tensorflow as tf
from keras import backend as K

def rpn_loss_cls(lambda_rpn_class=1.0, epsilon=1e-4):

    def rpn_loss_cls_fixed_num(y_true, y_pred):
        # Add a trailing axis so that the slices of y_true below have the
        # same rank as y_pred.
        y_true = tf.keras.backend.expand_dims(y_true, -1)
        return lambda_rpn_class * K.sum(
            y_true[:, :, :, :, 0]
            * K.binary_crossentropy(y_pred[:, :, :, :, :], y_true[:, :, :, :, 1])
        ) / K.sum(epsilon + y_true[:, :, :, :, 0])

    return rpn_loss_cls_fixed_num
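
As a quick sanity check (purely illustrative: the tensors below are random dummies with the shapes from your question), you can evaluate the returned loss function directly and confirm that the slicing now lines up:

import numpy as np
import tensorflow as tf
from keras import backend as K

y_true_dummy = tf.constant(np.random.randint(0, 2, size=(2, 14, 14, 9, 6)).astype('float32'))
y_pred_dummy = tf.constant(np.random.uniform(0.01, 0.99, size=(2, 14, 14, 9, 1)).astype('float32'))

loss_fn = rpn_loss_cls()
print(K.eval(loss_fn(y_true_dummy, y_pred_dummy)))   # a scalar, so the shapes are compatible
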
answered 2019-04-09 at 16:43