This is a good question, and it is closely related to Distribution Strategy.
After going through the TensorFlow documentation, the TPU strategy documentation, and this explanation of synchronous and asynchronous training, here is what I can say about the statement
> the optimizer computes 8 different steps on batches of size
> per_replica_batch_size, updating the weights of the model 8 different
> times
The explanation below from the TensorFlow documentation should clarify it:
> So, how should the loss be calculated when using a
> tf.distribute.Strategy?
>
> For an example, let's say you have 4 GPU's and a batch size of 64. One
> batch of input is distributed across the replicas (4 GPUs), each
> replica getting an input of size 16.
>
> The model on each replica does a forward pass with its respective
> input and calculates the loss. Now, instead of dividing the loss by
> the number of examples in its respective input (BATCH_SIZE_PER_REPLICA
> = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).
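To make that concrete, below is a minimal sketch of a custom training loop that scales the loss by the global batch size. The model, optimizer, and loss here are placeholders of my own choosing, not code from the quoted documentation:

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64  # e.g. 4 replicas x BATCH_SIZE_PER_REPLICA of 16

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # placeholder model
    optimizer = tf.keras.optimizers.SGD()
    # Reduction.NONE keeps one loss value per example so we control the scaling.
    loss_object = tf.keras.losses.BinaryCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    # Divide by GLOBAL_BATCH_SIZE (64), not BATCH_SIZE_PER_REPLICA (16).
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        features, labels = inputs
        with tf.GradientTape() as tape:
            logits = model(features, training=True)
            loss = compute_loss(labels, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    # Summing the per-replica losses recovers the global-batch average.
    return strategy.reduce(tf.distribute.ReduceOp.SUM,
                           per_replica_losses, axis=None)
```

Because each replica already divided by 64, summing the four per-replica losses gives exactly the average loss over the full batch of 64 examples.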
Below are the explanations from the other links, in case they stop working in the future.
The TPU strategy documentation states:
> In terms of distributed training architecture, `TPUStrategy` is the
> same as `MirroredStrategy` - it implements `synchronous` distributed
> training. `TPUs` provide their own implementation of efficient
> `all-reduce` and other collective operations across multiple `TPU`
> cores, which are used in `TPUStrategy`.
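For reference, a rough sketch of how a `TPUStrategy` is typically constructed; the empty `tpu=''` argument assumes a Colab-style TPU runtime, and on GCP you would pass the TPU name or address instead:

```python
import tensorflow as tf

# Assumes a Colab-style TPU runtime; pass the TPU name/address on GCP instead.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# With 8 TPU cores this prints 8 - the number of synchronous replicas.
print("Replicas:", strategy.num_replicas_in_sync)
```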
The explanation of synchronous and asynchronous training reads:
> `Synchronous vs asynchronous training`: These are two common ways of
> `distributing training` with `data parallelism`. In `sync training`, all
> `workers` train over different slices of input data in `sync`, and
> **`aggregating gradients`** at each step. In `async` training, all workers are
> independently training over the input data and updating variables
> `asynchronously`. Typically sync training is supported via all-reduce
> and `async` through parameter server architecture.
You can also learn about the concept of All_Reduce in detail through this MPI tutorial.
The screenshot below shows how All_Reduce works:
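In case the screenshot does not render, here is a toy plain-Python sketch (my own illustration, not from the tutorial) of what an all-reduce SUM produces, namely that every worker ends up holding the combined value:

```python
# Each worker contributes one gradient value.
worker_values = [1.0, 2.0, 3.0, 4.0]

# Reduce: combine the values (here with SUM)...
total = sum(worker_values)  # 10.0

# ...then broadcast, so every worker holds the same reduced result.
after_all_reduce = [total for _ in worker_values]
print(after_all_reduce)  # [10.0, 10.0, 10.0, 10.0]
```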