I am trying to get a 4D TimeDistributed(LSTM(...)) working in Keras, but I am running into problems with the input/output shapes.
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

batch_size = 1

model = Sequential()
# First TimeDistributed LSTM layer, returning full sequences
model.add(TimeDistributed(
    LSTM(7, batch_input_shape=(batch_size, look_back, dataset.shape[1], dataset.shape[2]),
         stateful=True, return_sequences=True),
    batch_input_shape=(batch_size, look_back, dataset.shape[1], dataset.shape[2])))
# Second TimeDistributed LSTM layer, returning only the last output of each sub-sequence
model.add(TimeDistributed(
    LSTM(7, batch_input_shape=(batch_size, look_back, dataset.shape[1], dataset.shape[2]),
         stateful=True),
    batch_input_shape=(batch_size, look_back, dataset.shape[1], dataset.shape[2])))
# Final TimeDistributed Dense layer
model.add(TimeDistributed(
    Dense(7, input_shape=(batch_size, 1, look_back, dataset.shape[1], dataset.shape[2]))))

model.compile(loss='mean_squared_error', optimizer='adam')

# Train for 10 epochs, resetting the LSTM states after each epoch (stateful layers)
for i in range(10):
    model.fit(trainX, trainY, epochs=1, batch_size=batch_size,
              verbose=2, shuffle=False)
    model.reset_states()
The shapes of trainX, trainY and dataset are as follows:
trainX.shape = (63, 3, 34607, 7)
trainY.shape = (63, 34607, 7)
dataset.shape = (100, 34607, 7)
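For context, here is one way a split with these shapes could have been produced; this is my own assumption rather than code from the original setup (a sliding window of length look_back = 3 over the first 66 of the 100 timesteps of dataset gives exactly these shapes):

import numpy as np

# Assumed reconstruction of the train split (not shown in the original post)
look_back = 3
train_len = 66  # assumed roughly 2/3 split of the 100 timesteps
trainX = np.stack([dataset[i:i + look_back] for i in range(train_len - look_back)])
trainY = np.stack([dataset[i + look_back] for i in range(train_len - look_back)])
# trainX.shape == (63, 3, 34607, 7), trainY.shape == (63, 34607, 7)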
The error I am getting is:
Error when checking target: expected time_distributed_59 to have shape (1, 3, 7) but got array with shape (63, 34607, 7)
The layer mentioned above refers to the last TimeDistributed Dense layer.
This is the output when I print the input and output shapes of every layer:
(1, 3, 34607, 7) layer[0] - input
(1, 3, 34607, 7) layer[0] - output
(1, 3, 34607, 7) layer[1] - input
(1, 3, 7) layer[1] - output
(1, 3, 7) layer[2] - input
(1, 3, 7) layer[2] - output
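For reference, a listing like this can be produced with the standard Keras layer attributes; the snippet below is my reconstruction, since the actual printing code was not shown:

# Print the input/output shape of every layer (assumed to match the listing above)
for i, layer in enumerate(model.layers):
    print(layer.input_shape, 'layer[%d] - input' % i)
    print(layer.output_shape, 'layer[%d] - output' % i)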
However, the final output layer should yield predictions of shape (1, 1, 34607, 7) or (1, 34607, 7).
Thanks in advance for any suggestions!