I am running into an error with the AdditiveAttention() layer (i.e. Bahdanau attention) in TensorFlow 2 that I don't fully understand. I want to train a chatbot with a seq2seq attention model on two datasets, Question and Answer.

The error I get when I try to add the attention layer to the model captures my problem. Here is my build function:

def build_model():
    import tensorflow as tf
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Embedding, LSTM, AdditiveAttention, Dense
    
    # Input: get char embeddings
    encoder_inputs = Input(shape=(200,), name='encoder_inputs')
    encoder_embedding = Embedding(60, 200, name='encoder_embedding')(encoder_inputs)
    
    # LSTM Encoder receives Question - returns states
    encoder_lstm = LSTM(units=64, return_state=True, name='encoder_lstm')
    encoder_outputs, h, c = encoder_lstm(encoder_embedding)
    encoder_states = [h, c]
    
    # Bahdanau Attention: this line raises the TypeError below
    context_vector, attention_weights = AdditiveAttention([h, encoder_outputs])
    
    # Decoder Embedding layer receives Answer as input (teacher forcing)
    decoder_inputs = Input(shape=(None,), name='decoder_inputs')
    decoder_embedding = Embedding(60, 200, name='decoder_embedding')(decoder_inputs)
    
    concat = tf.concat([tf.expand_dims(context_vector, 1), decoder_embedding], axis=-1)

    # Decoder LSTM layer is initialised with the encoder LSTM's final states
    decoder_lstm = LSTM(units=64, return_state=True, return_sequences=True, name='decoder_lstm')
    decoder_outputs, _, _ = decoder_lstm(concat, initial_state=encoder_states)
    
    decoder_dense = Dense(units=60, activation='softmax', name='decoder_dense')
    decoder_outputs = decoder_dense(decoder_outputs)

    chatbot = Model(inputs=[encoder_inputs, decoder_inputs], outputs=[decoder_outputs]) 
    return chatbot

When I run the function:

bot = build_model() 

I get the following error:

TypeError: 'AdditiveAttention' object is not iterable

Can someone help me understand the error and implement an attentional seq2seq model correctly?

1 Answer

I ran into the same problem this week. It seems that tf.keras's AdditiveAttention does not return the attention weights, only the context vector.

So you just need to drop 'attention_weights' from the assignment when calling AdditiveAttention(). Also note that the layer has to be instantiated first and then called on a [query, value] list; with both changes you should be fine.
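
For reference, below is a minimal sketch of a working build function. It assumes the same vocabulary size (60), embedding width (200), and LSTM width (64) as in the question, and the wiring (decoder outputs as the query over the encoder's output sequence, followed by a concatenation) is one common arrangement rather than the only correct one:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Embedding, LSTM,
                                     AdditiveAttention, Concatenate, Dense)

def build_model():
    # Encoder: return_sequences=True so attention has a full sequence
    # of encoder outputs to attend over, not just the final state
    encoder_inputs = Input(shape=(200,), name='encoder_inputs')
    encoder_embedding = Embedding(60, 200, name='encoder_embedding')(encoder_inputs)
    encoder_outputs, h, c = LSTM(64, return_sequences=True, return_state=True,
                                 name='encoder_lstm')(encoder_embedding)

    # Decoder: teacher forcing on the Answer tokens, initialised
    # with the encoder's final states
    decoder_inputs = Input(shape=(None,), name='decoder_inputs')
    decoder_embedding = Embedding(60, 200, name='decoder_embedding')(decoder_inputs)
    decoder_outputs, _, _ = LSTM(64, return_sequences=True, return_state=True,
                                 name='decoder_lstm')(decoder_embedding,
                                                      initial_state=[h, c])

    # Instantiate the layer, then call it on [query, value]; it returns
    # a single context tensor, not a (context, weights) tuple
    context = AdditiveAttention(name='attention')([decoder_outputs, encoder_outputs])

    # Merge the context vectors with the decoder outputs before the softmax
    merged = Concatenate(axis=-1)([decoder_outputs, context])
    decoder_outputs = Dense(units=60, activation='softmax',
                            name='decoder_dense')(merged)

    return Model(inputs=[encoder_inputs, decoder_inputs], outputs=[decoder_outputs])

With this wiring, bot = build_model() builds without errors. If you do want the attention weights, newer TensorFlow releases (2.4 and up) let you pass return_attention_scores=True when calling the layer, which makes it return the scores as a second output.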

answered 2020-07-12 07:20