I am building a word-level (embedding-based) seq2seq model for text summarization and I am running into a data shape problem. Any help is appreciated, thanks.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

# Encoder: integer token sequences -> embeddings -> final LSTM states
encoder_input = Input(shape=(max_encoder_seq_length,))
embed_layer = Embedding(num_encoder_tokens, 256, mask_zero=True)(encoder_input)
encoder = LSTM(256, return_state=True, return_sequences=False)
encoder_output, state_h, state_c = encoder(embed_layer)
encoder_state = [state_h, state_c]

# Decoder: conditioned on the encoder's final states
decoder_input = Input(shape=(max_decoder_seq_length,))
de_embed = Embedding(num_decoder_tokens, 256)(decoder_input)
decoder = LSTM(256, return_state=True, return_sequences=True)
decoder_output, _, _ = decoder(de_embed, initial_state=encoder_state)

decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_output = decoder_dense(decoder_output)

model = Model([encoder_input, decoder_input], decoder_output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
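As a sanity check (just a sketch with random dummy data, not my real preprocessing), the compiled model does accept integer token sequences of shape (batch, max_seq_length):

import numpy as np

# Dummy integer-encoded batches, only to confirm the input shapes the model expects.
dummy_enc = np.random.randint(0, num_encoder_tokens, size=(2, max_encoder_seq_length))
dummy_dec = np.random.randint(0, num_decoder_tokens, size=(2, max_decoder_seq_length))

preds = model.predict([dummy_enc, dummy_dec])
print(preds.shape)  # (2, max_decoder_seq_length, num_decoder_tokens)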
With my real data it errors at training time because of the shape of the inputs. Please help me reshape my data; the current shapes are:
Encoder data shape: (50, 1966, 7059)  Decoder data shape: (50, 69, 1183)  Decoder target shape: (50, 69, 1183)
The warnings and traceback from model.fit are below; my guess at a reshape is sketched after them.
Epoch 1/35
WARNING:tensorflow:Model was constructed with shape (None, 1966) for input Tensor("input_37:0", shape=(None, 1966), dtype=float32), but it was called on an input with incompatible shape (None, 1966, 7059).
WARNING:tensorflow:Model was constructed with shape (None, 69) for input Tensor("input_38:0", shape=(None, 69), dtype=float32), but it was called on an input with incompatible shape (None, 69, 1183).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-d02252f12e7f> in <module>()
1 model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
2 batch_size=16,
----> 3 epochs=35)
ValueError: Input 0 of layer lstm_35 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1966, 7059, 256]
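My suspicion is that encoder_input_data and decoder_input_data are one-hot encoded (hence the extra vocabulary dimension), while the Embedding layers expect integer token indices. If that is right, a conversion like the following should work (assuming the last axis of my arrays really is a one-hot vocabulary axis; the one-hot targets can stay as they are for categorical_crossentropy):

import numpy as np

# Collapse the assumed one-hot vocabulary axis into integer token indices.
encoder_input_ids = np.argmax(encoder_input_data, axis=-1)   # -> (50, 1966)
decoder_input_ids = np.argmax(decoder_input_data, axis=-1)   # -> (50, 69)

# decoder_target_data stays one-hot, matching the categorical_crossentropy loss.
model.fit([encoder_input_ids, decoder_input_ids], decoder_target_data,
          batch_size=16, epochs=35)

Is this the right way to reshape the data, or should the inputs be prepared as integer sequences from the start?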