
Refer to this post for the background of the problem: Does TensorFlow's embedding_attention_seq2seq method implement a bidirectional RNN encoder by default?

I am working on the same model and want to replace the unidirectional LSTM layer with a bidirectional layer. I realize I have to use static_bidirectional_rnn instead of static_rnn, but I am getting an error because of a mismatch in tensor shapes.

I replaced the following line:

encoder_outputs, encoder_state = core_rnn.static_rnn(encoder_cell, encoder_inputs, dtype=dtype)

with the line below:

encoder_outputs, encoder_state_fw, encoder_state_bw = core_rnn.static_bidirectional_rnn(encoder_cell, encoder_cell, encoder_inputs, dtype=dtype)

This gives me the following error:

InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5,1,256] vs. [16,1,1,256]
[[Node: gradients/model_with_buckets/embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/add_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](gradients/model_with_buckets/embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/add_grad/Shape, gradients/model_with_buckets/embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/add_grad/Shape_1)]]

I know that the outputs of the two methods are different, but I don't know how to modify the attention code to handle that. How do I feed both the forward and backward hidden states into the attention module? Do I concatenate the two hidden states?

1 Answer

From the error message I can see that the batch sizes of two tensors somewhere do not match: one is 32 and the other is 16. I suppose this is because the outputs of the bidirectional RNN are twice the size of the unidirectional ones (the forward and backward outputs are concatenated), and the downstream code is not adjusted accordingly.
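
For reference, here is a rough sketch (TF 1.x contrib API, with made-up sizes; the cell and placeholder names are only for illustration) of how the per-step output shapes differ between the two calls. The attention code downstream presumably still assumes the per-step output size is cell_size, which is where the mismatch surfaces.

    import tensorflow as tf

    batch_size, num_steps, input_dim, cell_size = 16, 5, 128, 128
    inputs = [tf.placeholder(tf.float32, [batch_size, input_dim]) for _ in range(num_steps)]

    # Unidirectional encoder: each element of uni_outputs has shape
    # [batch_size, cell_size] = [16, 128].
    uni_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)
    uni_outputs, uni_state = tf.contrib.rnn.static_rnn(
        uni_cell, inputs, dtype=tf.float32, scope="uni")

    # Bidirectional encoder: each element of bi_outputs concatenates the fw and bw
    # outputs, so its shape is [batch_size, 2 * cell_size] = [16, 256], and there
    # are two final states instead of one.
    fw_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)
    bw_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)
    bi_outputs, state_fw, state_bw = tf.contrib.rnn.static_bidirectional_rnn(
        fw_cell, bw_cell, inputs, dtype=tf.float32, scope="bi")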

"How do I feed both the forward and backward hidden states into the attention module? Do I concatenate the two hidden states?"

You can refer to this code:

  def _reduce_states(self, fw_st, bw_st):
    """Add to the graph a linear layer to reduce the encoder's final FW and BW state into a single initial state for the decoder. This is needed because the encoder is bidirectional but the decoder is not.
    Args:
      fw_st: LSTMStateTuple with hidden_dim units.
      bw_st: LSTMStateTuple with hidden_dim units.
    Returns:
      state: LSTMStateTuple with hidden_dim units.
    """
    hidden_dim = self._hps.hidden_dim
    with tf.variable_scope('reduce_final_st'):

      # Define weights and biases to reduce the cell and reduce the state
      w_reduce_c = tf.get_variable('w_reduce_c', [hidden_dim * 2, hidden_dim], dtype=tf.float32, initializer=self.trunc_norm_init)
      w_reduce_h = tf.get_variable('w_reduce_h', [hidden_dim * 2, hidden_dim], dtype=tf.float32, initializer=self.trunc_norm_init)
      bias_reduce_c = tf.get_variable('bias_reduce_c', [hidden_dim], dtype=tf.float32, initializer=self.trunc_norm_init)
      bias_reduce_h = tf.get_variable('bias_reduce_h', [hidden_dim], dtype=tf.float32, initializer=self.trunc_norm_init)

      # Apply linear layer
      old_c = tf.concat(axis=1, values=[fw_st.c, bw_st.c]) # Concatenation of fw and bw cell
      old_h = tf.concat(axis=1, values=[fw_st.h, bw_st.h]) # Concatenation of fw and bw state
      new_c = tf.nn.relu(tf.matmul(old_c, w_reduce_c) + bias_reduce_c) # Get new cell from old cell
      new_h = tf.nn.relu(tf.matmul(old_h, w_reduce_h) + bias_reduce_h) # Get new state from old state
      return tf.contrib.rnn.LSTMStateTuple(new_c, new_h) # Return new cell and state
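
If it helps, here is a hedged sketch of how this reduction could be wired into your encoder. It assumes the method above lives in the same model class (hence self._reduce_states), and that encoder_cell_fw, encoder_cell_bw, encoder_inputs, hidden_dim and dtype come from your own code; the attention setup downstream would also need to expect a memory size of 2 * hidden_dim.

    # Run the bidirectional encoder (same call as in the question, here with two cells).
    encoder_outputs, encoder_state_fw, encoder_state_bw = core_rnn.static_bidirectional_rnn(
        encoder_cell_fw, encoder_cell_bw, encoder_inputs, dtype=dtype)

    # Each encoder output is already the concatenation of the fw and bw outputs
    # ([batch, 2 * hidden_dim]), so it can serve directly as the attention memory,
    # as long as the attention size is set to 2 * hidden_dim instead of hidden_dim.
    top_states = [tf.reshape(o, [-1, 1, 2 * hidden_dim]) for o in encoder_outputs]
    attention_states = tf.concat(axis=1, values=top_states)

    # Reduce the two final LSTM states to a single one so the unidirectional
    # decoder gets an initial state of the size it expects.
    decoder_initial_state = self._reduce_states(encoder_state_fw, encoder_state_bw)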
Answered 2017-07-15T03:08:13.143