I have a model implemented in Keras, but I need to implement the same model in TensorFlow. I would like to reimplement only the RNN layers of the model in TensorFlow and keep everything else the same, i.e. the predict method, model fitting, etc. all stay in Keras. Here is the code:

The Keras model:

from keras.layers import Input, LSTM, Dense, TimeDistributed
from keras.models import Model
from keras.optimizers import RMSprop

def emotion_model(max_seq_len, num_features, learning_rate, num_units_1, num_units_2, bidirectional, dropout, num_targets):
    # Input layer
    inputs = Input(shape=(max_seq_len, num_features))

    # 1st layer
    net = LSTM(num_units_1, return_sequences=True, dropout=dropout, recurrent_dropout=dropout)(inputs)

    # 2nd layer
    net = LSTM(num_units_2, return_sequences=True, dropout=dropout, recurrent_dropout=dropout)(net)

    # Output layer
    outputs = []
    out1 = TimeDistributed(Dense(1))(net)  # linear activation
    outputs.append(out1)
    if num_targets >= 2:
        out2 = TimeDistributed(Dense(1))(net)  # linear activation
        outputs.append(out2)
    if num_targets == 3:
        out3 = TimeDistributed(Dense(1))(net)  # linear activation
        outputs.append(out3)

    # Create and compile model
    rmsprop = RMSprop(lr=learning_rate)
    model   = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer=rmsprop, loss=ccc_loss)  # CCC-based loss function
    return model
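The ccc_loss function is not shown above. For context, a CCC-based (concordance correlation coefficient) loss is usually written as 1 - CCC; the following is a minimal sketch of that common formulation, not necessarily the exact function used here:

import keras.backend as K

def ccc_loss(y_true, y_pred):
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    true_mean = K.mean(y_true)
    pred_mean = K.mean(y_pred)
    covariance = K.mean((y_true - true_mean) * (y_pred - pred_mean))
    ccc = (2.0 * covariance) / (K.var(y_true) + K.var(y_pred)
                                + K.square(true_mean - pred_mean) + K.epsilon())
    return 1.0 - ccc  # minimize 1 - CCC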

Now, I want to replace the LSTM layers above with equivalent code in TensorFlow. So, in another module, I implemented the following:

import tensorflow as tf

def baseline_model(inputs, cell_Size1, cell_Size2, dropout):
    with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
        cell1 = tf.nn.rnn_cell.LSTMCell(cell_Size1)
        cell1 = tf.nn.rnn_cell.DropoutWrapper(cell1, input_keep_prob=1.0 - dropout, state_keep_prob=1.0 - dropout)

        cell2 = tf.nn.rnn_cell.LSTMCell(cell_Size2)
        cell2 = tf.nn.rnn_cell.DropoutWrapper(cell2, input_keep_prob=1.0 - dropout, state_keep_prob=1.0 - dropout)

        cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2], state_is_tuple=True)

        # output: shape=[batch_size, time_steps, cell_Size2], e.g. [1, time_steps, 32]
        output, new_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

        return output
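
For reference, this function can be exercised standalone with a quick shape check (the batch/time/feature sizes below are hypothetical):

import numpy as np

# Hypothetical sizes, just to run the graph standalone.
x = tf.placeholder(tf.float32, shape=[None, 100, 39])  # [batch, time_steps, num_features]
y = baseline_model(x, cell_Size1=64, cell_Size2=32, dropout=0.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, {x: np.zeros((1, 100, 39), np.float32)})
    print(out.shape)  # (1, 100, 32) -> [batch, time_steps, cell_Size2]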

I have already tried removing cell_Size1 and cell_Size2 from the parameters of baseline_model and wrapping the call as net = Lambda(partial(baseline_model, dropout))(net), but it did not work.
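
For illustration, here is a minimal sketch of how such a wrapper could be wired (the variable names come from emotion_model above). Note that partial(baseline_model, dropout) binds dropout to the first parameter, inputs, so the hyper-parameters would need to be bound by keyword for the incoming Keras tensor to land in the inputs slot:

from functools import partial
from keras.layers import Lambda

# Binding by keyword keeps the incoming Keras tensor as the first
# positional argument; partial(baseline_model, dropout) would instead
# pass dropout where `inputs` is expected.
tf_lstm = partial(baseline_model, cell_Size1=num_units_1,
                  cell_Size2=num_units_2, dropout=dropout)
net = Lambda(tf_lstm)(inputs)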

Secondly, I tried to drop the LSTM layers implemented in TensorFlow directly in place of the Keras LSTM layers above, but that does not solve my problem either.

Any help is greatly appreciated!!
