Thanks to @Prune for the critical comments on my question.
I am trying to find the relationship between batch size and training time by using the MNIST dataset.
By reading a number of questions on Stack Overflow, such as this one: How does batch size impact time execution in a neural network?, people say that the training time will be reduced when I use a small batch size.
However, after trying both, I found that training with batch_size == 1 takes much more time than with batch_size == 60,000. I set the number of epochs to 10.
The MNIST dataset is split into 60k images for training and 10k for testing.
Below is my code and the results.
import torch
import torchvision
from torchvision import transforms
from timeit import default_timer as timer

mnist_trainset = torchvision.datasets.MNIST(root=root_dir, train=True,
                                            download=True,
                                            transform=transforms.Compose([transforms.ToTensor()]))
mnist_testset = torchvision.datasets.MNIST(root=root_dir,
                                           train=False,
                                           download=True,
                                           transform=transforms.Compose([transforms.ToTensor()]))

train_dataloader = torch.utils.data.DataLoader(mnist_trainset,
                                               batch_size=1,
                                               shuffle=True)
test_dataloader = torch.utils.data.DataLoader(mnist_testset,
                                              batch_size=50,
                                              shuffle=False)
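For reference, here is a small standalone sketch (with a hypothetical `steps_per_epoch` helper, not part of my program) of how many optimizer steps each configuration performs per epoch: the training loop does one `optimizer.step()` per batch, and a DataLoader without `drop_last` yields ceil(N / batch_size) batches.

```python
import math

def steps_per_epoch(n_samples, batch_size):
    # A DataLoader without drop_last yields ceil(N / batch_size) batches,
    # and the training loop performs one optimizer step per batch.
    return math.ceil(n_samples / batch_size)

print(steps_per_epoch(60000, 1))      # batch_size == 1     -> 60000 steps per epoch
print(steps_per_epoch(60000, 60000))  # batch_size == 60000 -> 1 step per epoch
```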
# Define the model
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear_1 = torch.nn.Linear(784, 256)
        self.linear_2 = torch.nn.Linear(256, 10)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = x.reshape(x.size(0), -1)
        x = self.linear_1(x)
        x = self.sigmoid(x)
        pred = self.linear_2(x)
        return pred
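As a quick sanity check on the shapes (a standalone sketch, so the same architecture is restated here), the network flattens a batch of 28×28 images and maps it to 10 logits:

```python
import torch

class Model(torch.nn.Module):  # same architecture as in my code above
    def __init__(self):
        super().__init__()
        self.linear_1 = torch.nn.Linear(784, 256)
        self.linear_2 = torch.nn.Linear(256, 10)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = x.reshape(x.size(0), -1)          # (B, 1, 28, 28) -> (B, 784)
        return self.linear_2(self.sigmoid(self.linear_1(x)))

model = Model()
out = model(torch.randn(4, 1, 28, 28))        # a fake batch of four MNIST-sized images
print(out.shape)  # torch.Size([4, 10])
```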
# trainer
no_epochs = 10
def my_trainer(optimizer, model):
    criterion = torch.nn.CrossEntropyLoss()
    train_loss = list()
    test_loss = list()
    test_acc = list()
    best_test_loss = 1

    for epoch in range(no_epochs):
        # timer starts
        start = timer()
        total_train_loss = 0
        total_test_loss = 0

        # training: set up training mode
        model.train()
        for itr, (image, label) in enumerate(train_dataloader):
            optimizer.zero_grad()
            pred = model(image)
            loss = criterion(pred, label)
            total_train_loss += loss.item()
            loss.backward()
            optimizer.step()
        total_train_loss = total_train_loss / (itr + 1)
        train_loss.append(total_train_loss)

        # testing: change to evaluation mode
        model.eval()
        total = 0
        for itr, (image, label) in enumerate(test_dataloader):
            pred = model(image)
            loss = criterion(pred, label)
            total_test_loss += loss.item()

            # we now need softmax because we are testing
            pred = torch.nn.functional.softmax(pred, dim=1)
            for i, p in enumerate(pred):
                if label[i] == torch.max(p.data, 0)[1]:
                    total = total + 1

        # calculate accuracy
        accuracy = total / len(mnist_testset)
        # append accuracy here
        test_acc.append(accuracy)
        # append test loss here
        total_test_loss = total_test_loss / (itr + 1)
        test_loss.append(total_test_loss)

        print('\nEpoch: {}/{}, Train Loss: {:.8f}, Test Loss: {:.8f}, Test Accuracy: {:.8f}'.format(epoch + 1, no_epochs, total_train_loss, total_test_loss, accuracy))

        if total_test_loss < best_test_loss:
            best_test_loss = total_test_loss
            print("Saving the model state dictionary for Epoch: {} with Test loss: {:.8f}".format(epoch + 1, total_test_loss))
            torch.save(model.state_dict(), "model.dth")

        # timer finishes
        end = timer()
        print(end - start)

    return no_epochs, test_acc, test_loss
model_sgd = Model()
optimizer_SGD = torch.optim.SGD(model_sgd.parameters(), lr=0.1)
sgd_no_epochs, sgd_test_acc, sgd_test_loss = my_trainer(optimizer=optimizer_SGD, model=model_sgd)
I measured how much time each epoch took. Here are the results.
Epoch: 1/10, Train Loss: 0.23193890, Test Loss: 0.12670580, Test Accuracy: 0.96230000
63.98903721500005 seconds
Epoch: 2/10, Train Loss: 0.10275097, Test Loss: 0.10111042, Test Accuracy: 0.96730000
63.97179028100004 seconds
Epoch: 3/10, Train Loss: 0.07269370, Test Loss: 0.09668248, Test Accuracy: 0.97150000
63.969843954 seconds
Epoch: 4/10, Train Loss: 0.05658571, Test Loss: 0.09841745, Test Accuracy: 0.97070000
64.24135530400008 seconds
Epoch: 5/10, Train Loss: 0.04183391, Test Loss: 0.09828428, Test Accuracy: 0.97230000
64.19695308500013 seconds
Epoch: 6/10, Train Loss: 0.03393899, Test Loss: 0.08982467, Test Accuracy: 0.97530000
63.96944059600014 seconds
Epoch: 7/10, Train Loss: 0.02808819, Test Loss: 0.08597597, Test Accuracy: 0.97700000
63.59837343000004 seconds
Epoch: 8/10, Train Loss: 0.01859330, Test Loss: 0.07529452, Test Accuracy: 0.97950000
63.591578820999985 seconds
Epoch: 9/10, Train Loss: 0.01383720, Test Loss: 0.08568452, Test Accuracy: 0.97820000
63.66664020100029 seconds
Epoch: 10/10, Train Loss: 0.00911216, Test Loss: 0.07377760, Test Accuracy: 0.98060000
63.92636473799985 seconds
After this, I changed the batch size to 60,000 and ran the same program again.
train_dataloader = torch.utils.data.DataLoader(mnist_trainset,
                                               batch_size=60000,
                                               shuffle=True)
test_dataloader = torch.utils.data.DataLoader(mnist_testset,
                                              batch_size=50,
                                              shuffle=False)
print("\n===== Entering SGD optimizer =====\n")
model_sgd = Model()
optimizer_SGD = torch.optim.SGD(model_sgd.parameters(), lr=0.1)
sgd_no_epochs, sgd_test_acc, sgd_test_loss = my_trainer(optimizer=optimizer_SGD, model=model_sgd)
These are the results I got with batch size == 60000.
Epoch: 1/10, Train Loss: 2.32325006, Test Loss: 2.30074144, Test Accuracy: 0.11740000
6.54154992299982 seconds
Epoch: 2/10, Train Loss: 2.30010080, Test Loss: 2.29524792, Test Accuracy: 0.11790000
6.341824101999919 seconds
Epoch: 3/10, Train Loss: 2.29514933, Test Loss: 2.29183527, Test Accuracy: 0.11410000
6.161918789000083 seconds
Epoch: 4/10, Train Loss: 2.29196787, Test Loss: 2.28874513, Test Accuracy: 0.11450000
6.180891567999879 seconds
Epoch: 5/10, Train Loss: 2.28899717, Test Loss: 2.28571669, Test Accuracy: 0.11570000
6.1449509030003355 seconds
Epoch: 6/10, Train Loss: 2.28604794, Test Loss: 2.28270152, Test Accuracy: 0.11780000
6.311743144000047 seconds
Epoch: 7/10, Train Loss: 2.28307867, Test Loss: 2.27968731, Test Accuracy: 0.12250000
6.060618773999977 seconds
Epoch: 8/10, Train Loss: 2.28014660, Test Loss: 2.27666961, Test Accuracy: 0.12890000
6.171511712999745 seconds
Epoch: 9/10, Train Loss: 2.27718973, Test Loss: 2.27364607, Test Accuracy: 0.13930000
6.164125173999764 seconds
Epoch: 10/10, Train Loss: 2.27423453, Test Loss: 2.27061504, Test Accuracy: 0.15350000
6.077817454000069 seconds
As you can see, each epoch clearly takes much more time when batch_size == 1, which contradicts what I had read.
Maybe I am confusing the training time per epoch with the training time until convergence? My intuition seems to be supported by this page: https://medium.com/deep-learning-experiments/effect-of-batch-size-on-neural-net-training-c5ae8516e57
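To make my confusion concrete, here is the toy cost model I have in mind (the constants are invented, purely for illustration): each optimizer step carries a roughly fixed overhead (Python loop, kernel launches, optimizer update) on top of the per-sample compute, so per-epoch time would scale with the number of steps rather than with the amount of data.

```python
def epoch_time_estimate(n_samples, batch_size,
                        overhead_per_step=1e-3,    # invented constant, seconds per step
                        compute_per_sample=1e-5):  # invented constant, seconds per sample
    # One optimizer step per batch; ceil division for the last partial batch.
    steps = -(-n_samples // batch_size)
    return steps * overhead_per_step + n_samples * compute_per_sample

# With batch_size == 1 the 60k per-step overheads dominate the epoch time:
print(epoch_time_estimate(60000, 1))      # about 60.6
print(epoch_time_estimate(60000, 60000))  # about 0.601
```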
Can someone explain what is happening?