I've been struggling to understand the concepts of epochs and batch size. You can see the training results of my CNN below:
Epoch 160/170
32/32 [==============================] - 90s 3s/step - loss: 0.5461 - accuracy: 0.8200 - val_loss: 0.6561 - val_accuracy: 0.7882
Epoch 161/170
32/32 [==============================] - 92s 3s/step - loss: 0.5057 - accuracy: 0.8356 - val_loss: 0.6202 - val_accuracy: 0.7882
Epoch 162/170
32/32 [==============================] - 90s 3s/step - loss: 0.5178 - accuracy: 0.8521 - val_loss: 0.6652 - val_accuracy: 0.7774
Epoch 163/170
32/32 [==============================] - 94s 3s/step - loss: 0.5377 - accuracy: 0.8418 - val_loss: 0.6733 - val_accuracy: 0.7822
So there are 163 epochs and a batch size of 32. Since the batch size is the number of samples per epoch, that gives 163 * 32 = 5216 samples, but there are only 3459 samples in the dataset. So once it runs out, does it start taking images from the beginning of the dataset again?
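For context, here is a small arithmetic sketch of how I currently read the log. It assumes the Keras convention that the "32/32" in the progress bar counts batches (steps) per epoch, computed as ceil(samples / batch_size), rather than being the batch size itself; the variable names are mine, not from any library:

```python
import math

num_samples = 3459    # samples in my dataset (from the question)
steps_per_epoch = 32  # the "32/32" shown in the Keras progress bar

# If "32/32" is the number of batches per epoch, the implied batch size is:
batch_size = math.ceil(num_samples / steps_per_epoch)
print(batch_size)  # 109

# Sanity check: with that batch size, one epoch covers every sample once.
print(math.ceil(num_samples / batch_size))  # 32 steps per epoch
```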