
I am trying to implement a simple FedAvg with TensorFlow Federated, using the CIFAR-10 dataset and ResNet-18. There is also a PyTorch implementation. Just like the trainable parameters, I aggregate the non-trainable batch-normalization parameters (the moving statistics) on the server and average them. I use 5 clients; the dataset is split randomly into 5 parts, so each client gets 50k/5 = 10k training samples and there is no serious label skew. After each round of training, I evaluate every client on the full 10k-sample test set, and I evaluate the server on the same set. The problem: after the first round of training, although every client reaches 20-25% accuracy, the server sits at 10% accuracy and makes essentially the same prediction for every input. This happens only in the first round; after that, the server almost always has better accuracy than any client in that round. For example:

Round 0 training loss: 3.0080783367156982
Round 0 client_id: 0 eval_score: 0.2287999987602234
Round 0 client_id: 1 eval_score: 0.2614000141620636
Round 0 client_id: 2 eval_score: 0.22040000557899475
Round 0 client_id: 3 eval_score: 0.24799999594688416
Round 0 client_id: 4 eval_score: 0.2565999925136566
Round 0 validation accuracy: 10.0
Round 1 training loss: 1.920640230178833
Round 1 client_id: 0 eval_score: 0.25220000743865967
Round 1 client_id: 1 eval_score: 0.32199999690055847
Round 1 client_id: 2 eval_score: 0.32580000162124634
Round 1 client_id: 3 eval_score: 0.3513000011444092
Round 1 client_id: 4 eval_score: 0.34689998626708984
Round 1 validation accuracy: 34.470001220703125
Round 2 training loss: 1.65810227394104
Round 2 client_id: 0 eval_score: 0.34369999170303345
Round 2 client_id: 1 eval_score: 0.3138999938964844
Round 2 client_id: 2 eval_score: 0.35580000281333923
Round 2 client_id: 3 eval_score: 0.39649999141693115
Round 2 client_id: 4 eval_score: 0.3917999863624573
Round 2 validation accuracy: 45.0
Round 3 training loss: 1.4956902265548706
Round 3 client_id: 0 eval_score: 0.46380001306533813
Round 3 client_id: 1 eval_score: 0.388700008392334
Round 3 client_id: 2 eval_score: 0.39239999651908875
Round 3 client_id: 3 eval_score: 0.43700000643730164
Round 3 client_id: 4 eval_score: 0.430400013923645
Round 3 validation accuracy: 50.62000274658203
Round 4 training loss: 1.3692104816436768
Round 4 client_id: 0 eval_score: 0.510200023651123
Round 4 client_id: 1 eval_score: 0.42739999294281006
Round 4 client_id: 2 eval_score: 0.4223000109195709
Round 4 client_id: 3 eval_score: 0.45080000162124634
Round 4 client_id: 4 eval_score: 0.45559999346733093
Round 4 validation accuracy: 54.83000183105469
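
The aggregation step described above — averaging each parameter tensor elementwise across clients, the batch-norm moving statistics included — can be sketched as follows. This is a minimal NumPy sketch, not the actual TFF code; the function name and the flat list-of-arrays weight format (the order `get_weights()` would return in Keras) are assumptions.

```python
import numpy as np

def fedavg(client_weight_lists):
    """Average each parameter tensor elementwise across clients.

    `client_weight_lists` is a list (one entry per client) of lists of
    NumPy arrays -- kernels, biases, and the batch-norm moving
    mean/variance alike, all averaged the same way.
    """
    num_clients = len(client_weight_lists)
    return [
        sum(tensors) / num_clients  # elementwise mean of one tensor across clients
        for tensors in zip(*client_weight_lists)
    ]

# Two toy "clients", each holding one kernel and one BN moving mean.
client_a = [np.array([1.0, 2.0]), np.array([0.0])]
client_b = [np.array([3.0, 4.0]), np.array([2.0])]
server = fedavg([client_a, client_b])
```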

To fix the first-round problem, I tried repeating the dataset, but it didn't help. After that, instead of creating 5 different 10k-sample datasets, I gave every client the entire CIFAR-10 training set of 50k samples:

Round 0 training loss: 1.9335068464279175
Round 0 client_id: 0 eval_score: 0.4571000039577484
Round 0 client_id: 1 eval_score: 0.4514000117778778
Round 0 client_id: 2 eval_score: 0.4738999903202057
Round 0 client_id: 3 eval_score: 0.4560000002384186
Round 0 client_id: 4 eval_score: 0.4697999954223633
Round 0 validation accuracy: 10.0
Round 1 training loss: 1.4404207468032837
Round 1 client_id: 0 eval_score: 0.5945000052452087
Round 1 client_id: 1 eval_score: 0.5909000039100647
Round 1 client_id: 2 eval_score: 0.5864999890327454
Round 1 client_id: 3 eval_score: 0.5871999859809875
Round 1 client_id: 4 eval_score: 0.5684000253677368
Round 1 validation accuracy: 59.57999801635742
Round 2 training loss: 1.0174440145492554
Round 2 client_id: 0 eval_score: 0.7002999782562256
Round 2 client_id: 1 eval_score: 0.6953999996185303
Round 2 client_id: 2 eval_score: 0.6830999851226807
Round 2 client_id: 3 eval_score: 0.6682999730110168
Round 2 client_id: 4 eval_score: 0.6754000186920166
Round 2 validation accuracy: 72.41999816894531
Round 3 training loss: 0.7608759999275208
Round 3 client_id: 0 eval_score: 0.7621999979019165
Round 3 client_id: 1 eval_score: 0.7608000040054321
Round 3 client_id: 2 eval_score: 0.7390000224113464
Round 3 client_id: 3 eval_score: 0.7301999926567078
Round 3 client_id: 4 eval_score: 0.7303000092506409
Round 3 validation accuracy: 78.33000183105469
Round 4 training loss: 0.5893330574035645
Round 4 client_id: 0 eval_score: 0.7814000248908997
Round 4 client_id: 1 eval_score: 0.7861999869346619
Round 4 client_id: 2 eval_score: 0.7804999947547913
Round 4 client_id: 3 eval_score: 0.7694000005722046
Round 4 client_id: 4 eval_score: 0.758400022983551
Round 4 validation accuracy: 81.30000305175781

The clients obviously have the same initialization — I suspect there are some tiny numerical differences from running on GPU — and each of them reaches 45+% accuracy. But as you can see, even this didn't help the first round. When using a simple CNN, such as the one available in `main.py`, with suitable parameters, this problem does not occur. Also, using

learning_rate=0.01 or momentum=0

instead of

learning_rate=0.1 and momentum=0.9

reduces the problem in the first round, but overall performance is worse, and I am trying to reproduce a paper that uses the latter parameters.
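
One plausible reason the latter setting hurts round 0 more: with heavy-ball momentum, the asymptotic step size is effectively lr / (1 - momentum), so lr=0.1 with momentum=0.9 pushes each client roughly 100x further from the shared initialization per update than lr=0.01 with momentum=0, which makes naive averaging of the resulting weights (and BN statistics) far less meaningful after a single round. A small illustration of that step-size blow-up (plain Python, constant gradient of 1; this is an illustration, not the exact Keras/PyTorch update rule):

```python
def distance_moved(lr, momentum, n=50, grad=1.0):
    """Run n heavy-ball momentum updates on a constant gradient and
    return how far the weight drifts from its initial value."""
    w, v = 0.0, 0.0
    for _ in range(n):
        v = momentum * v + grad  # velocity accumulates past gradients
        w -= lr * v              # SGD-with-momentum update
    return abs(w)

d_small = distance_moved(lr=0.01, momentum=0.0)  # 0.01 * 50 = 0.5
d_large = distance_moved(lr=0.1, momentum=0.9)   # roughly 80x further
```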

I tried the same thing with PyTorch and got very similar results (Colab for the PyTorch code); the results for both can be found on GitHub.

I am puzzled by this, especially in the case where every client uses the whole training set and reaches 45% accuracy. Also, why does averaging work well in the following rounds? What changes between the first round and the others? Each time, the clients have the same initialization, the same loss function, and the same optimizer with the same parameters. The only thing that changes between rounds is the actual initialization itself.

So is there a special initialization that solves this first-round problem, or am I missing something?

Edit:

When every client uses the entire CIFAR-10 training set, with the data repeated via dataset.repeat:

Pre-training validation accuracy: 9.029999732971191
Round 0 training loss: 1.6472676992416382
Round 0 client_id: 0 eval_score: 0.5931000113487244
Round 0 client_id: 1 eval_score: 0.5042999982833862
Round 0 client_id: 2 eval_score: 0.5083000063896179
Round 0 client_id: 3 eval_score: 0.5600000023841858
Round 0 client_id: 4 eval_score: 0.6104999780654907
Round 0 validation accuracy: 10.0

What catches my attention here is that the client accuracies are very similar to the second-round (Round 1) client accuracies of the earlier runs where the dataset was not repeated. So even though the server sits at 10% accuracy, that does not affect the next round's results much.

This is how it behaves with the simple CNN (defined in main.py on GitHub):

With the training set divided into 5
Pre-training validation accuracy: 9.489999771118164
Round 0 training loss: 2.1234841346740723
Round 0 client_id: 0 eval_score: 0.30250000953674316
Round 0 client_id: 1 eval_score: 0.2879999876022339
Round 0 client_id: 2 eval_score: 0.2533999979496002
Round 0 client_id: 3 eval_score: 0.25999999046325684
Round 0 client_id: 4 eval_score: 0.2897999882698059
Round 0 validation accuracy: 31.18000030517578

Entire training set for all the clients
Pre-training validation accuracy: 9.489999771118164
Round 0 training loss: 1.636365532875061
Round 0 client_id: 0 eval_score: 0.47850000858306885
Round 0 client_id: 1 eval_score: 0.49470001459121704
Round 0 client_id: 2 eval_score: 0.4918000102043152
Round 0 client_id: 3 eval_score: 0.492900013923645
Round 0 client_id: 4 eval_score: 0.4043000042438507
Round 0 validation accuracy: 50.62000274658203

As we can see, with the simple CNN the server accuracy is better than the best client accuracy, and certainly better than the average, from the very first round. I am trying to understand why ResNet fails to do this and makes the same prediction regardless of the input. After the first round, the predictions look like:

[[0.02677999 0.02175025 0.10807421 0.25275248 0.08478505 0.20601839
  0.16497472 0.09307405 0.01779539 0.02399557]
 [0.04087764 0.03603332 0.09987792 0.23636964 0.07425722 0.19982725
  0.13649824 0.09779423 0.03454168 0.04392283]
 [0.02448712 0.01900426 0.11061406 0.25295085 0.08886322 0.20792796
  0.17296027 0.08762561 0.01570844 0.01985822]
 [0.01790532 0.01536059 0.11237497 0.2519772  0.09357632 0.20954111
  0.18946911 0.08571784 0.01004946 0.01402805]
 [0.02116687 0.02263201 0.10294028 0.25523028 0.08544692 0.21299754
  0.17604835 0.088608   0.01438032 0.02054946]
 [0.01598492 0.01457187 0.10899033 0.25493488 0.09417254 0.20747423
  0.19798534 0.08387674 0.0089481  0.01306108]
 [0.01432306 0.01214803 0.11237216 0.25138852 0.09796435 0.2036258
  0.20656979 0.08344456 0.00726837 0.01089529]
 [0.01605278 0.0135905  0.11161591 0.25388476 0.09531546 0.20592561
  0.19932476 0.08305667 0.00873495 0.01249863]
 [0.02512863 0.0238647  0.10465285 0.24918261 0.08625458 0.21051233
  0.16839236 0.09075507 0.01765386 0.02360307]
 [0.05418856 0.05830322 0.09909651 0.20211859 0.07324574 0.18549475
  0.11666768 0.0990423  0.05081367 0.06102907]]

They all return the same label (index 3).
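
That collapse is easy to verify programmatically: taking the argmax of each row of the matrix above shows every input maps to class 3. A NumPy check on the first three rows of the logged output:

```python
import numpy as np

# First three rows of the server's post-round-0 predictions,
# copied from the log above.
preds = np.array([
    [0.02677999, 0.02175025, 0.10807421, 0.25275248, 0.08478505,
     0.20601839, 0.16497472, 0.09307405, 0.01779539, 0.02399557],
    [0.04087764, 0.03603332, 0.09987792, 0.23636964, 0.07425722,
     0.19982725, 0.13649824, 0.09779423, 0.03454168, 0.04392283],
    [0.02448712, 0.01900426, 0.11061406, 0.25295085, 0.08886322,
     0.20792796, 0.17296027, 0.08762561, 0.01570844, 0.01985822],
])
labels = preds.argmax(axis=1)  # every row peaks at column 3
```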


1 Answer


I think loading the model weights from a previously trained model would solve this. See How to initialize a model with certain weights? for how to set the model weights for the first round.
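
A minimal sketch of that suggestion, with weights represented as plain nested lists: `pretrained_weights` stands in for weights loaded from a previously trained model (e.g. a checkpoint from a few epochs of centralized training); the server broadcasts an independent copy to every client before round 0, so averaging starts from a model that already makes sensible predictions rather than from a random initialization.

```python
import copy

# Stand-in for weights loaded from a previously trained model;
# here just a toy list of parameter "tensors".
pretrained_weights = [[0.5, -0.5], [1.0]]

# Broadcast an independent copy to each client for round 0, so local
# training does not mutate the server's reference weights.
num_clients = 5
client_weights = [copy.deepcopy(pretrained_weights) for _ in range(num_clients)]
```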

answered 2021-10-11T18:13:26.153