How can I do asynchronous model training with the TFF framework?
I have looked over the iterative training process loop, but I am not sure how to know which client models have been received.
It is quite possible to simulate something like "asynchronous FL" in TFF. One way to think about this is to conceptually decouple simulation time from wall-clock time.
Sampling a different number of clients each round (rather than the K uniformly sampled clients that are typically used), perhaps with a distribution that weights clients by their expected training time, can simulate asynchronous FL. It is also possible to process only a portion of the selected clients first and apply the rest later; researchers are free to split the data/computation however they need. (A minimal sketch of such a time-based sampling function is included after the pseudocode below.)
Python-style pseudocode demonstrating the two techniques, varying client sampling and delayed gradient application:
from datetime import timedelta

# `fed_avg_iter_proc`, `get_next_clients`, and `NUM_ROUNDS` are assumed to be
# defined elsewhere.
state = fed_avg_iter_proc.initialize()
for round_num in range(NUM_ROUNDS):
  # Here we conceptualize a "round" as a block of time, rather than a synchronous
  # round. We have a function that determines which clients will "finish" within
  # our configured block of time. This might even return only a single client.
  participants = get_next_clients(time_window=timedelta(minutes=30))
  num_participants = len(participants)
  # Here we only process the first half of the clients, and then update the
  # global model (note the integer division `//` so the slice index is an int).
  state2, metrics = fed_avg_iter_proc.next(state, participants[:num_participants // 2])
  # Now process the second half of the selected clients.
  # Note: this applies the 'pseudo-gradient' that was computed on the clients
  # (the difference between the original `state` and their local training result)
  # to a model that has already taken one step (`state2`). This possibly has
  # undesirable effects on the optimisation process, or may be improved with
  # techniques that handle "stale" gradients.
  state3, metrics = fed_avg_iter_proc.next(state2, participants[num_participants // 2:])
  # Finally, update the state for the next iteration of the simulation loop.
  state = state3
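
For completeness, here is one hedged sketch of how the `fed_avg_iter_proc` used above might be constructed. The model, input spec, and optimizer below are placeholders, and the builder shown (`tff.learning.build_federated_averaging_process`) is the classic federated averaging API; newer TFF releases offer equivalent builders under `tff.learning.algorithms`. Note that `.next` expects a list of client datasets as its second argument, so `participants` in the pseudocode would hold `tf.data.Dataset`s for the sampled clients.

import collections
import tensorflow as tf
import tensorflow_federated as tff

# Placeholder input spec: 784-feature examples with integer labels.
input_spec = collections.OrderedDict(
    x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
    y=tf.TensorSpec(shape=[None], dtype=tf.int32),
)

def model_fn():
  # A small placeholder Keras model wrapped for TFF.
  keras_model = tf.keras.Sequential([
      tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,)),
  ])
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=input_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy())

fed_avg_iter_proc = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))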
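
And here is a minimal, hypothetical sketch of the `get_next_clients` helper, assuming each simulated client is assigned an expected training duration (drawn at random here). Clients whose simulated duration fits inside the time window are the ones that "arrive" during that block of simulated time; in a real simulation the returned ids would be mapped to their client datasets (e.g. via `tff.simulation.datasets.ClientData.create_tf_dataset_for_client`) before being passed to `.next`.

import random
from datetime import timedelta

# Hypothetical: assign each simulated client an expected training time in minutes,
# so slower clients rarely finish within a short time window.
EXPECTED_MINUTES = {f'client_{i}': random.uniform(5, 60) for i in range(100)}

def get_next_clients(time_window):
  """Returns ids of clients expected to finish within `time_window` this round."""
  finished = []
  for client_id, expected in EXPECTED_MINUTES.items():
    # Add per-round jitter so the set of arriving clients varies between rounds.
    this_round = timedelta(minutes=random.gauss(expected, 10))
    if timedelta(0) < this_round <= time_window:
      finished.append(client_id)
  # Shuffle so the first/second "halves" processed in the loop above differ
  # from round to round.
  random.shuffle(finished)
  return finished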