
I have seen examples of building an encoder-decoder network with LSTM in Keras, but I want a ConvLSTM encoder-decoder. Since ConvLSTM2D does not accept any "initial_state" argument through which I could pass the encoder's state to the decoder, I tried using Keras's RNN layer and passing a ConvLSTM2D as its cell, but I get the following error:

ValueError: ('`cell` should have a `call` method. The RNN was passed:', <tf.Tensor 'encoder_1/TensorArrayReadV3:0' shape=(?, 62, 62, 32) dtype=float32>)

This is how I tried to define the RNN cell:

first_input = Input(shape=(None, 62, 62, 12))
encoder_convlstm2d = ConvLSTM2D(filters=32, kernel_size=(3, 3),
                                padding='same',
                                name='encoder' + str(1))(first_input)
encoder_outputs, state_h, state_c = keras.layers.RNN(cell=encoder_convlstm2d, return_sequences=False,
                                                     return_state=True, go_backwards=False,
                                                     stateful=False, unroll=False)
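
For context on the error itself: `keras.layers.RNN` expects a cell object (something with a `call` method, such as `keras.layers.LSTMCell`), whereas the snippet above passes it the output tensor of a ConvLSTM2D layer that has already been applied to `first_input`. A minimal sketch of the difference, using the ordinary `LSTMCell` and a toy shape purely for illustration:

from tensorflow.keras.layers import Input, RNN, LSTMCell

seq_input = Input(shape=(None, 16))   # (time, features), hypothetical toy shape

# correct: pass a cell *instance* to RNN; the cell provides the `call` method
outputs, state_h, state_c = RNN(LSTMCell(32), return_state=True)(seq_input)

# incorrect (what the snippet above does): applying a recurrent layer to an input
# yields a Tensor, and passing that Tensor as `cell=` raises
# "`cell` should have a `call` method."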

1 Answer


Here is how I implemented an encoder-decoder solution using ConvLSTM.

import numpy as np
from tensorflow.keras.layers import (Input, ConvLSTM2D, MaxPooling3D, UpSampling3D,
                                     UpSampling2D, BatchNormalization, Dropout, Conv2D)
from tensorflow.keras.models import Model


def convlstm(input_shape):
    # input_shape is (time, height, width, channels), without the batch dimension
    print(np.shape(input_shape))

    inpTensor = Input(input_shape)

    # encoder: three ConvLSTM2D blocks, each followed by 3D max-pooling
    # (halving the temporal and spatial dimensions), batch norm and dropout
    net1 = ConvLSTM2D(filters=32, kernel_size=3,
                      padding='same', return_sequences=True)(inpTensor)
    max_pool1 = MaxPooling3D(pool_size=(2, 2, 2), strides=2,
                             padding='same')(net1)
    bn1 = BatchNormalization(axis=1)(max_pool1)
    dp1 = Dropout(0.2)(bn1)

    net2 = ConvLSTM2D(filters=64, kernel_size=3,
                      padding='same', return_sequences=True)(dp1)
    max_pool2 = MaxPooling3D(pool_size=(2, 2, 2), strides=2,
                             padding='same')(net2)
    bn2 = BatchNormalization(axis=1)(max_pool2)
    dp2 = Dropout(0.2)(bn2)

    net3 = ConvLSTM2D(filters=128, kernel_size=3,
                      padding='same', return_sequences=True)(dp2)
    max_pool3 = MaxPooling3D(pool_size=(2, 2, 2), strides=2,
                             padding='same')(net3)
    bn3 = BatchNormalization(axis=1)(max_pool3)
    dp3 = Dropout(0.2)(bn3)

    # decoder: ConvLSTM2D blocks with 3D upsampling to restore the
    # temporal/spatial resolution, finishing with a 2D convolution
    net4 = ConvLSTM2D(filters=128, kernel_size=3,
                      padding='same', return_sequences=True)(dp3)
    up1 = UpSampling3D((2, 2, 2))(net4)

    net5 = ConvLSTM2D(filters=64, kernel_size=3,
                      padding='same', return_sequences=True)(up1)
    up2 = UpSampling3D((2, 2, 2))(net5)

    # the last ConvLSTM2D collapses the time dimension (return_sequences=False)
    net6 = ConvLSTM2D(filters=32, kernel_size=3,
                      padding='same', return_sequences=False)(up2)
    up3 = UpSampling2D((2, 2))(net6)

    out = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid',
                 padding='same', data_format='channels_last')(up3)

    # alternatively, return only `out`
    return Model(inpTensor, out)
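
A minimal usage sketch (the shape below is just an example; the temporal and spatial dimensions should be divisible by 8 so that the three pooling steps and the matching upsampling steps restore the original resolution):

# hypothetical input: 8 frames of 64x64 single-channel images
model = convlstm((8, 64, 64, 1))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()   # final output shape: (None, 64, 64, 1)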
answered 2021-06-11 at 11:59