I have a classification problem with 280 classes and about 278,000 images. I am fine-tuning GoogleNet (bvlc_googlenet in caffe) using quick_solver.txt. My solver is as follows:
test_iter: 1000
test_interval: 4000
test_initialization: false
display: 40
average_loss: 40
base_lr: 0.001
lr_policy: "poly"
power: 0.5
max_iter: 800000
momentum: 0.9
weight_decay: 0.0002
snapshot: 20000
During training I use a batch size of 32, and a test batch size of 32. I re-learn only the three classifier layers loss1/classifier, loss2/classifier and loss3/classifier from scratch, by renaming them. I set the global learning rate to 0.001, i.e. 10 times lower than the one used for training from scratch. However, the last three layers still get a learning rate of 0.01.
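For reference, the renaming plus the 10x learning rate on the classifier layers is usually done with `lr_mult` in the train prototxt. A minimal sketch of one of the three layers (the exact name suffix, fillers and bias multiplier here are assumptions, not taken from my actual prototxt):

```
layer {
  name: "loss3/classifier_retrain"   # renamed so the pretrained weights are not loaded
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier_retrain"
  param { lr_mult: 10 decay_mult: 1 }   # 10 x base_lr (0.001) -> effective 0.01 for weights
  param { lr_mult: 20 decay_mult: 0 }   # bias
  inner_product_param {
    num_output: 280                     # 280 classes
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" value: 0 }
  }
}
```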
Log file over the first iterations:
I0515 08:44:41.838122 1279 solver.cpp:228] Iteration 40, loss = 9.72169
I0515 08:44:41.838163 1279 solver.cpp:244] Train net output #0: loss1/loss1 = 5.7261 (* 0.3 = 1.71783 loss)
I0515 08:44:41.838170 1279 solver.cpp:244] Train net output #1: loss2/loss1 = 5.65961 (* 0.3 = 1.69788 loss)
I0515 08:44:41.838173 1279 solver.cpp:244] Train net output #2: loss3/loss3 = 5.46685 (* 1 = 5.46685 loss)
I0515 08:44:41.838179 1279 sgd_solver.cpp:106] Iteration 40, lr = 0.000999975
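The lr values printed in the log are consistent with Caffe's "poly" policy, lr = base_lr * (1 - iter/max_iter)^power. A quick check in Python (the helper name is mine):

```python
def poly_lr(base_lr, it, max_iter, power):
    """Caffe "poly" learning-rate policy: base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - float(it) / max_iter) ** power

# Both values match the solver log:
print(poly_lr(0.001, 40, 800000, 0.5))      # 0.000999975 (iteration 40)
print(poly_lr(0.001, 119040, 800000, 0.5))  # 0.000922605 (iteration 119040)
```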
Up to iteration 100,000, my network reaches 50% top-1 accuracy and ~80% top-5 accuracy:
I0515 13:45:59.789113 1279 solver.cpp:337] Iteration 100000, Testing net (#0)
I0515 13:46:53.914217 1279 solver.cpp:404] Test net output #0: loss1/loss1 = 2.08631 (* 0.3 = 0.625893 loss)
I0515 13:46:53.914274 1279 solver.cpp:404] Test net output #1: loss1/top-1 = 0.458375
I0515 13:46:53.914279 1279 solver.cpp:404] Test net output #2: loss1/top-5 = 0.768781
I0515 13:46:53.914284 1279 solver.cpp:404] Test net output #3: loss2/loss1 = 1.88489 (* 0.3 = 0.565468 loss)
I0515 13:46:53.914288 1279 solver.cpp:404] Test net output #4: loss2/top-1 = 0.494906
I0515 13:46:53.914290 1279 solver.cpp:404] Test net output #5: loss2/top-5 = 0.805906
I0515 13:46:53.914294 1279 solver.cpp:404] Test net output #6: loss3/loss3 = 1.77118 (* 1 = 1.77118 loss)
I0515 13:46:53.914297 1279 solver.cpp:404] Test net output #7: loss3/top-1 = 0.517719
I0515 13:46:53.914299 1279 solver.cpp:404] Test net output #8: loss3/top-5 = 0.827125
At iteration 119,000 everything is still fine:
I0515 14:43:38.669674 1279 solver.cpp:228] Iteration 119000, loss = 2.70265
I0515 14:43:38.669777 1279 solver.cpp:244] Train net output #0: loss1/loss1 = 2.41406 (* 0.3 = 0.724217 loss)
I0515 14:43:38.669783 1279 solver.cpp:244] Train net output #1: loss2/loss1 = 2.38374 (* 0.3 = 0.715123 loss)
I0515 14:43:38.669787 1279 solver.cpp:244] Train net output #2: loss3/loss3 = 1.92663 (* 1 = 1.92663 loss)
I0515 14:43:38.669798 1279 sgd_solver.cpp:106] Iteration 119000, lr = 0.000922632
Immediately afterwards, the loss suddenly jumps back up to its initial value (between 8 and 9):
I0515 14:43:45.377710 1279 solver.cpp:228] Iteration 119040, loss = 8.3068
I0515 14:43:45.377751 1279 solver.cpp:244] Train net output #0: loss1/loss1 = 5.77026 (* 0.3 = 1.73108 loss)
I0515 14:43:45.377758 1279 solver.cpp:244] Train net output #1: loss2/loss1 = 5.76971 (* 0.3 = 1.73091 loss)
I0515 14:43:45.377763 1279 solver.cpp:244] Train net output #2: loss3/loss3 = 5.70022 (* 1 = 5.70022 loss)
I0515 14:43:45.377768 1279 sgd_solver.cpp:106] Iteration 119040, lr = 0.000922605
Long after the sudden jump, the network is unable to reduce this loss:
I0515 16:51:10.485610 1279 solver.cpp:228] Iteration 161040, loss = 9.01994
I0515 16:51:10.485649 1279 solver.cpp:244] Train net output #0: loss1/loss1 = 5.63485 (* 0.3 = 1.69046 loss)
I0515 16:51:10.485656 1279 solver.cpp:244] Train net output #1: loss2/loss1 = 5.63484 (* 0.3 = 1.69045 loss)
I0515 16:51:10.485661 1279 solver.cpp:244] Train net output #2: loss3/loss3 = 5.62972 (* 1 = 5.62972 loss)
I0515 16:51:10.485666 1279 sgd_solver.cpp:106] Iteration 161040, lr = 0.0008937
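Note that after the jump, each individual loss settles around 5.63, which is exactly the cross-entropy of a uniform prediction over 280 classes, ln(280) ≈ 5.63. So the three classifiers seem to have collapsed to predicting all classes equally. A quick check:

```python
import math

# Softmax cross-entropy of a uniform prediction over N classes is ln(N).
# For N = 280 this matches the stuck per-classifier losses in the log above.
print(math.log(280))  # ~5.6348
```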
I re-ran the experiment twice, and it reproduces exactly at iteration 119,040. For more information, I shuffled the data when creating the LMDB database. I trained a VGG-16 on this same database (step learning-rate policy, 80k max iterations, step size 20k) without any problem; with VGG I obtained 55% top-1 accuracy.
Has anyone run into a problem similar to mine?