
I'm using an RNN for binary classification. After reshaping my data, the model trains and my scores are high, but I don't know whether the input shape is correct, and I get a warning message at the start of training. The input shapes for this run are x_train=(466, 1, 1024) and y_train=(466, 1).

Epoch 1/300
WARNING:tensorflow:Model was constructed with shape (None, 466, 1024) for input Tensor("bidirectional_4_input:0", shape=(None, 466, 1024), dtype=float32), but it was called on an input with incompatible shape (None, 1, 1024).
1/8 [==>...........................] - ETA: 0s - loss: 0.6936 - binary_accuracy: 0.4375WARNING:tensorflow:Model was constructed with shape (None, 466, 1024) for input Tensor("bidirectional_4_input:0", shape=(None, 466, 1024), dtype=float32), but it was called on an input with incompatible shape (None, 1, 1024).
8/8 [==============================] - 1s 140ms/step - loss: 0.6942 - binary_accuracy: 0.4828 - val_loss: 0.6986 - val_binary_accuracy: 0.2301
Epoch 2/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6928 - binary_accuracy: 0.5021 - val_loss: 0.6911 - val_binary_accuracy: 0.7699
Epoch 3/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6927 - binary_accuracy: 0.5000 - val_loss: 0.6840 - val_binary_accuracy: 0.7699
Epoch 4/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6922 - binary_accuracy: 0.5515 - val_loss: 0.6892 - val_binary_accuracy: 0.8110
Epoch 5/300
8/8 [==============================] - 0s 7ms/step - loss: 0.6911 - binary_accuracy: 0.6116 - val_loss: 0.6854 - val_binary_accuracy: 0.8110
Epoch 6/300
8/8 [==============================] - 0s 7ms/step - loss: 0.6857 - binary_accuracy: 0.6352 - val_loss: 0.7034 - val_binary_accuracy: 0.2301
Epoch 7/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6798 - binary_accuracy: 0.5579 - val_loss: 0.6663 - val_binary_accuracy: 0.8110
Epoch 8/300
8/8 [==============================] - 0s 7ms/step - loss: 0.6688 - binary_accuracy: 0.5322 - val_loss: 0.5643 - val_binary_accuracy: 0.8110
Epoch 9/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6590 - binary_accuracy: 0.6180 - val_loss: 0.7223 - val_binary_accuracy: 0.2301
Epoch 10/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6281 - binary_accuracy: 0.5901 - val_loss: 0.5275 - val_binary_accuracy: 0.8110
Epoch 11/300
8/8 [==============================] - 0s 7ms/step - loss: 0.6184 - binary_accuracy: 0.5923 - val_loss: 0.5207 - val_binary_accuracy: 0.8110
Epoch 12/300
8/8 [==============================] - 0s 8ms/step - loss: 0.6253 - binary_accuracy: 0.6073 - val_loss: 0.6351 - val_binary_accuracy: 0.7726
Epoch 13/300
8/8 [==============================] - 0s 8ms/step - loss: 0.5943 - binary_accuracy: 0.6524 - val_loss: 0.5801 - val_binary_accuracy: 0.8137
Epoch 14/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5878 - binary_accuracy: 0.6867 - val_loss: 0.5733 - val_binary_accuracy: 0.8110
Epoch 15/300
8/8 [==============================] - 0s 8ms/step - loss: 0.5753 - binary_accuracy: 0.6760 - val_loss: 0.7999 - val_binary_accuracy: 0.2329
Epoch 16/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5593 - binary_accuracy: 0.6567 - val_loss: 0.6224 - val_binary_accuracy: 0.7315
Epoch 17/300
8/8 [==============================] - 0s 8ms/step - loss: 0.5366 - binary_accuracy: 0.7682 - val_loss: 0.5514 - val_binary_accuracy: 0.8110
Epoch 18/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5426 - binary_accuracy: 0.6931 - val_loss: 0.4470 - val_binary_accuracy: 0.8137
Epoch 19/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5837 - binary_accuracy: 0.6824 - val_loss: 0.4938 - val_binary_accuracy: 0.8027
Epoch 20/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5041 - binary_accuracy: 0.7639 - val_loss: 0.5890 - val_binary_accuracy: 0.7452
Epoch 21/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4826 - binary_accuracy: 0.7768 - val_loss: 0.4466 - val_binary_accuracy: 0.8082
Epoch 22/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4873 - binary_accuracy: 0.7682 - val_loss: 0.4667 - val_binary_accuracy: 0.8164
Epoch 23/300
8/8 [==============================] - 0s 8ms/step - loss: 0.5146 - binary_accuracy: 0.7425 - val_loss: 0.8445 - val_binary_accuracy: 0.3425
Epoch 24/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5146 - binary_accuracy: 0.7253 - val_loss: 0.4355 - val_binary_accuracy: 0.8055
Epoch 25/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5277 - binary_accuracy: 0.7039 - val_loss: 0.4367 - val_binary_accuracy: 0.8164
Epoch 26/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5107 - binary_accuracy: 0.7210 - val_loss: 0.4461 - val_binary_accuracy: 0.8082
Epoch 27/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4904 - binary_accuracy: 0.7704 - val_loss: 0.4660 - val_binary_accuracy: 0.8164
Epoch 28/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4995 - binary_accuracy: 0.7103 - val_loss: 0.4699 - val_binary_accuracy: 0.8164
Epoch 29/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5750 - binary_accuracy: 0.6803 - val_loss: 0.7111 - val_binary_accuracy: 0.3890
Epoch 30/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5079 - binary_accuracy: 0.7167 - val_loss: 0.6765 - val_binary_accuracy: 0.6411
Epoch 31/300
8/8 [==============================] - 0s 7ms/step - loss: 0.5094 - binary_accuracy: 0.7210 - val_loss: 0.4508 - val_binary_accuracy: 0.8164
Epoch 32/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4875 - binary_accuracy: 0.7897 - val_loss: 0.4815 - val_binary_accuracy: 0.8164
Epoch 33/300
8/8 [==============================] - 0s 7ms/step - loss: 0.4754 - binary_accuracy: 0.7897 - val_loss: 0.4455 - val_binary_accuracy: 0.8110

The input is 466 sequences, each of length 1024.
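For context, the reshape that produces the (466, 1, 1024) input can be sketched like this (random arrays stand in for my real features and labels, which I haven't shown):

```python
import numpy as np

# Stand-in for the real data: 466 samples, each a 1024-dim feature vector
x = np.random.rand(466, 1024).astype("float32")
y = np.random.randint(0, 2, size=(466, 1))

# Keras RNN layers expect (samples, timesteps, features),
# so each sample becomes a sequence with a single timestep
x_train = x.reshape(466, 1, 1024)
print(x_train.shape, y.shape)  # (466, 1, 1024) (466, 1)
```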

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, Bidirectional
data_dim = 1024
timesteps = 466
num_classes = 10  # note: unused below
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(Bidirectional(LSTM(50, return_sequences=True), input_shape=(timesteps, data_dim)))
model.add(Bidirectional(LSTM(50)))
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bidirectional_4 (Bidirection (None, 466, 100)          430000    
_________________________________________________________________
bidirectional_5 (Bidirection (None, 100)               60400     
_________________________________________________________________
dense_4 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 101       
=================================================================
Total params: 500,601
Trainable params: 500,601
Non-trainable params: 0
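A quick arithmetic check (plain Python, no TF needed; formulas are my own, not TF output) that the parameter counts in the summary follow from data_dim=1024. Note that none of them depend on timesteps, which is presumably why only the timestep dimension is flagged in the warning:

```python
def bilstm_params(units, input_dim):
    # One LSTM direction: 4 gates, each with input kernel, recurrent kernel, bias
    one_direction = 4 * (units * input_dim + units * units + units)
    return 2 * one_direction  # Bidirectional doubles it

def dense_params(units, input_dim):
    return units * input_dim + units  # weights + biases

print(bilstm_params(50, 1024))  # 430000, matches bidirectional_4
print(bilstm_params(50, 100))   # 60400,  matches bidirectional_5
print(dense_params(100, 100))   # 10100,  matches dense_4
print(dense_params(1, 100))     # 101,    matches dense_5
```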

When I try x_train=(1, 466, 1024) and y_train=(1, 466) instead (following a guide I saw), the model does not train and I get this error:

 ValueError: logits and labels must have the same shape ((None, 1) vs (None, 466))
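As far as I can tell, the error follows from the layer output shapes. Here is a plain-Python walk-through (my own sketch, assuming the Bidirectional/LSTM/Dense stack shown above, not TF output):

```python
# x_train=(1, 466, 1024): one sample, 466 timesteps, 1024 features
batch, timesteps, features = 1, 466, 1024

# Bidirectional(LSTM(50, return_sequences=True)): (1, 466, 100)
# Bidirectional(LSTM(50)) collapses the sequence: (1, 100)
# Dense(100):                                     (1, 100)
# Dense(1, activation='sigmoid'):                 (1, 1)
logits_shape = (batch, 1)
labels_shape = (batch, timesteps)  # y_train=(1, 466)
print(logits_shape, labels_shape)  # (1, 1) (1, 466) -> shapes differ, hence the ValueError
```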

Can someone help me fix this, or confirm whether the first training setup is correct?
