
This question was posted here before; I am reopening it to get more attention.

The main problem is that when testing in a normal float32 environment, TensorFlow returns reasonable gradients, but after I switch to float16 with mixed_precision.set_global_policy('mixed_float16'), the returned gradients are always 0.

Below is a toy example that reproduces the error.

System information

OS platform and distribution: Linux
TensorFlow version: 2.4.1

Code to reproduce


import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import mixed_precision
import numpy as np

# Let TF allocate GPU memory on demand instead of reserving it all up front.
gpus = tf.config.experimental.list_physical_devices('GPU')

for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)


# Run layer computations in float16 while keeping variables in float32.
mixed_precision.set_global_policy('mixed_float16')


def forward_conv(x, filters, kernels, name='forward', padding='same'):
    # Stack of Conv3D (no bias) + BatchNormalization blocks, one per filter/kernel pair.
    i = 0
    for flt, kernel in zip(filters, kernels):
        x = layers.Conv3D(flt, kernel, activation='relu', padding=padding, dilation_rate=(1, 1, 1),
                          use_bias=False, name=str(i) + '_' + name)(x)
        x = layers.BatchNormalization(name=str(i) + '_bn_' + name)(x)
        i += 1
    return x


def part_one(ipt):
    l1 = forward_conv(ipt, (4, 4), (3, 3), name='enc1')
    d2 = layers.MaxPool3D(pool_size=(2, 2, 2))(l1)
    l2 = forward_conv(d2, (4, 4), (3, 3), name='enc2')
    return l1, l2


def part_inner(ipt1, ipt2):
    l1 = forward_conv(ipt1, (4, 4), (3, 3), name='enc1')
    l2 = forward_conv(ipt2, (4, 4), (3, 3), name='enc2')
    return l1, l2


def part_two(ipt1, ipt2):
    l2 = forward_conv(ipt2, (4, 4), (3, 3), name='dec2')
    u1 = layers.UpSampling3D(size=(2, 2, 2))(l2)
    r1 = forward_conv(ipt1 + u1, (4, 4), (3, 3), name='dec1')
    return r1


# Input tensor created directly in float16 to match the mixed-precision layers.
initial = tf.ones([1, 256, 368, 368, 1], dtype=tf.float16)

tf.random.set_seed(1)

with tf.GradientTape() as g:
    g.watch(initial)
    l1_, l2_ = part_one(initial)
    for _ in range(2):
        l1_, l2_ = part_inner(l1_, l2_)
    opt_ = part_two(l1_, l2_)
    loss = tf.reduce_mean(l1_) + tf.reduce_mean(opt_)

# Differentiate with respect to the (watched) input tensor.
gd = g.gradient(loss, initial)
print('-' * 100)
print(f'loss is {loss} and grad is {np.sum(gd)}')

Describe the behavior

With tf.float32, the gradient result is reasonable, with values around 0.6; however, when switching to float16 with mixed precision, the gradient is always 0. Should we expect the computed gradients to agree between plain float32 mode and mixed-precision float16 mode? Thanks!
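A minimal sketch of what I suspect is the underlying underflow (my guess, not verified against the full model above): the per-element gradient of tf.reduce_mean over N elements is 1/N, and for this input shape 1/(256 * 368 * 368) ≈ 2.9e-8 is smaller than float16's smallest subnormal (about 6e-8), so it rounds to zero even without any conv layers involved.

import tensorflow as tf

# Suspected underflow mechanism: the per-element gradient of reduce_mean
# over N elements is 1/N, which float16 cannot represent for N this large.
x = tf.ones([1, 256, 368, 368, 1], dtype=tf.float16)
with tf.GradientTape() as g:
    g.watch(x)
    loss = tf.reduce_mean(x)
grad = g.gradient(loss, x)
print(grad.dtype, float(tf.reduce_sum(grad)))  # float16, sums to 0.0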


1 Answer


In the TensorFlow mixed_precision documentation, they talk about using loss scaling to solve this problem.

Since TensorFlow's documentation tends to go out of date, here is the gist of the suggested code:

loss_scale = 1024.0

with tf.GradientTape() as tape:
    loss = model(inputs)
    # Scale the loss up so small float16 gradients survive the backward pass.
    scaled_loss = loss * loss_scale

# Assume the gradients come back as float32. You do not want to divide float16 gradients.
scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
grads = [g / loss_scale for g in scaled_grads]
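One caveat for the toy code in the question: the gradient there is taken with respect to the float16 input itself, so gd comes back as float16; cast it to float32 before dividing by the loss scale, otherwise the unscaled values can underflow all over again.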

This should solve the problem.
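Alternatively, Keras can handle the scaling bookkeeping for you with tf.keras.mixed_precision.LossScaleOptimizer, which picks and updates the loss scale dynamically. A minimal sketch with a stand-in two-layer model (the model, data, and optimizer here are illustrative, not taken from the question):

import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy('mixed_float16')

# Stand-in model just for illustration; the last layer outputs float32.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, dtype='float32'),
])
# Wrap any optimizer; the wrapper manages a dynamic loss scale.
optimizer = mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())

x = tf.random.normal([8, 16])
y = tf.random.normal([8, 1])

with tf.GradientTape() as tape:
    pred = model(x)
    loss = tf.reduce_mean(tf.square(pred - y))
    # Scale the loss up before differentiating...
    scaled_loss = optimizer.get_scaled_loss(loss)

# ...and unscale the gradients before applying them.
scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
grads = optimizer.get_unscaled_gradients(scaled_grads)
optimizer.apply_gradients(zip(grads, model.trainable_variables))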

answered 2021-10-15T17:31:44.670