I am building an LSTM encoder-decoder network in Keras, following the code provided here: https://github.com/LukeTonin/keras-seq-2-seq-signal-prediction. The only change I made was to replace the GRUCell with an LSTMCell. Both the encoder and the decoder consist of 2 layers of 35 LSTMCells. The layers are stacked on top of (and combined with) each other using an RNN layer.
An LSTMCell returns 2 states, whereas a GRUCell returns 1. This is where I run into the error, because I don't know how to handle the 2 states returned by the LSTMCell.
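To illustrate the difference: an LSTM carries both a hidden state h and a cell state c, while a GRU carries only h. A minimal NumPy sketch of a single LSTM step (illustrative only, not the Keras implementation; weight names and shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
units, batch, features = 4, 2, 3

# Combined weights for the four LSTM gates (input, forget, candidate, output).
W = rng.normal(size=(features, 4 * units))  # input-to-gates weights
U = rng.normal(size=(units, 4 * units))     # hidden-to-gates weights
b = np.zeros(4 * units)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev):
    """One LSTM step: returns the output and BOTH carried states [h, c]."""
    z = x @ W + h_prev @ U + b
    i = sigmoid(z[:, :units])               # input gate
    f = sigmoid(z[:, units:2 * units])      # forget gate
    g = np.tanh(z[:, 2 * units:3 * units])  # candidate cell state
    o = sigmoid(z[:, 3 * units:])           # output gate
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, [h, c]                        # two states, unlike a GRU's one

x = rng.normal(size=(batch, features))
h0 = np.zeros((batch, units))
c0 = np.zeros((batch, units))
out, states = lstm_step(x, h0, c0)  # len(states) == 2
```

So every stacked LSTMCell in the decoder needs two initial-state tensors instead of one.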
I have created two models: first, an encoder-decoder model, and second, a prediction model. The encoder-decoder model works fine; the problem is in the decoder of the prediction model.
The error I get is:
ValueError: Layer rnn_4 expects 9 inputs, but it received 3 input tensors. Input received: [<tf.Tensor 'input_4:0' shape=(?, ?, 1) dtype=float32>, <tf.Tensor 'input_11:0' shape=(?, 35) dtype=float32>, <tf.Tensor 'input_12:0' shape=(?, 35) dtype=float32>]
The error occurs when the following lines are run in the prediction model:
decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)
The part of the code this fits into is:
encoder_predict_model = keras.models.Model(encoder_inputs,
                                           encoder_states)

decoder_states_inputs = []

# Read layers backwards to fit the format of initial_state
# For some reason, the states of the model are ordered backwards (state of the first layer at the end of the list)
# If instead of a GRU you were using an LSTM Cell, you would have to append two Input tensors since the LSTM has 2 states.
for hidden_neurons in layers[::-1]:
    # One state for GRU, but two states for LSTMCell
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))

decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)

decoder_outputs = decoder_outputs_and_states[0]
decoder_states = decoder_outputs_and_states[1:]

decoder_outputs = decoder_dense(decoder_outputs)

decoder_predict_model = keras.models.Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
Can someone help me with the for loop above, and with how I should then pass the initial states to the decoder?