
I am getting an error at the Embedding layer, the first layer after Input. Even though I explicitly specified the shape in Input(), Keras reports that it cannot find a tensor of shape (, 9). Can someone help me fix this?

Here is the code:

from keras.models import Model
from keras.layers import (Input, Embedding, LSTM, Dense, Flatten,
                          Activation, RepeatVector, Permute, TimeDistributed, dot)
from keras.utils import plot_model

def model_3(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):

    inputs = Input(shape=(src_timesteps,), dtype='int32')
    embedding = Embedding(input_dim=src_vocab, output_dim=n_units, input_length=src_timesteps, mask_zero=False)(inputs)
    activations = LSTM(n_units, return_sequences=True)(embedding)
    # Attention weights: one score per source timestep, softmax-normalized
    attention = Dense(1, activation='tanh')(activations)
    attention = Flatten()(attention)
    attention = Activation('softmax')(attention)
    attention = RepeatVector(tar_timesteps)(attention)
    # Permute to (batch, n_units, src_timesteps) so dot contracts the timestep axis
    activations = Permute([2, 1])(activations)
    sent_representation = dot([attention, activations], axes=-1)
    sent_representation = LSTM(n_units, return_sequences=True)(sent_representation)
    sent_representation = TimeDistributed(Dense(tar_vocab, activation='softmax'))(sent_representation)
    # Original code referenced an undefined `sent` and used the deprecated
    # `input=`/`output=` keywords; also added the missing return
    model = Model(inputs=inputs, outputs=sent_representation)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    print(model.summary())
    plot_model(model, to_file='model.png', show_shapes=True)
    return model
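For what it's worth, here is a minimal NumPy sketch of what the attention/dot step computes for a single sample, using hypothetical sizes (the 9 below matches the shape in the error, the others are made up). The RepeatVector output has shape (tar_timesteps, src_timesteps), the permuted activations (n_units, src_timesteps), and dot with axes=-1 contracts the shared last axis:

import numpy as np

# Hypothetical sizes for illustration only
src_timesteps, tar_timesteps, n_units = 9, 7, 4

attention = np.random.rand(tar_timesteps, src_timesteps)   # after RepeatVector
activations = np.random.rand(n_units, src_timesteps)       # after Permute([2, 1])

# Keras dot([attention, activations], axes=-1) contracts the last axis of both,
# which per sample is a matrix product against the transpose:
sent_representation = attention @ activations.T            # (tar_timesteps, n_units)
assert sent_representation.shape == (tar_timesteps, n_units)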
