When using a gradient tape, you can compute the gradients with the following:

with tf.GradientTape() as tape:
    out = model(x, training=True)
    out = tf.reshape(out, (num_img, 1, 10))  # Resizing
    loss = tf.keras.losses.categorical_crossentropy(y, out)
    gradient = tape.gradient(loss, model.trainable_variables)

However, in the case of the CIFAR-10 inputs, this returns the gradients for the input images. Is there any way to access the gradients at intermediate steps, so that they have already been through "some" training?


1 Answer


Edit: Thanks to your comment, I now have a better picture of your question. The following code is far from ideal and does not take batch training etc. into account, but it may give you a good starting point. I wrote a custom training step that basically substitutes for the model.fit method. There may be better ways to do this, but it should let you quickly compare the gradients.

def custom_training(model, data):
    x, y = data
    # Training step: compute the gradients and apply them once
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)  # forward pass
        # compute the loss value (mse is hard-coded here for simplicity)
        loss = tf.keras.losses.mse(y, y_pred)

    trainable_vars = model.trainable_variables
    gradients = tape.gradient(loss, trainable_vars)
    # note: this creates a fresh Adam optimizer on every call; in a real
    # training loop you would reuse a single optimizer instance
    tf.keras.optimizers.Adam().apply_gradients(zip(gradients, trainable_vars))

    # compute the gradients again after the update, without applying them
    with tf.GradientTape() as tape:
        y_pred = model(x, training=False)  # forward pass
        loss = tf.keras.losses.mse(y, y_pred)
    gradients_plus = tape.gradient(loss, trainable_vars)

    return gradients, gradients_plus

Let's assume a very simple model:

import tensorflow as tf

train_data = tf.random.normal((1000, 32))
train_features = tf.random.normal((1000, 1))  # targets shaped to match the single model output

inputs = tf.keras.layers.Input(shape=(32,))
hidden_1 = tf.keras.layers.Dense(32)(inputs)
hidden_2 = tf.keras.layers.Dense(32)(hidden_1)
outputs = tf.keras.layers.Dense(1)(hidden_2)

model = tf.keras.Model(inputs, outputs)
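
With the toy model in place, a minimal (hypothetical) way to exercise the custom_training step from above and compare the two sets of gradients could look like this; comparing norms is just one convenient way to inspect them:

# Run one custom step on the toy model; train_data / train_features are the random tensors above.
gradients, gradients_plus = custom_training(model, (train_data, train_features))

# Compare, e.g., the gradient norms before and after the single optimizer update.
for g, g_plus in zip(gradients, gradients_plus):
    print(tf.norm(g).numpy(), tf.norm(g_plus).numpy())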

Say you want to compute the gradients of all layers with respect to the inputs. You can use the following:

with tf.GradientTape(persistent=True) as tape:
    inputs = train_data   # use the actual data tensor ...
    tape.watch(inputs)    # ... and watch it, otherwise the gradients below are None
    out_intermediate = []
    cargo = model.layers[0](inputs)  # layers[0] is the InputLayer and just passes the data through
    for layer in model.layers[1:]:
        cargo = layer(cargo)
        out_intermediate.append(cargo)

for x in out_intermediate:
    print(tape.gradient(x, inputs))
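
If, as in your question, you are after the gradients of a loss at the intermediate steps rather than of the raw layer outputs, the same persistent-tape idea works: the layers' variables are watched automatically, so every intermediate activation is on the tape and can be used as a gradient source. A minimal sketch with the toy model and an mse loss (both just assumptions for illustration):

with tf.GradientTape(persistent=True) as tape:
    cargo = train_data
    out_intermediate = []
    for layer in model.layers[1:]:  # skip the InputLayer
        cargo = layer(cargo)
        out_intermediate.append(cargo)
    # mse against the random targets, purely for illustration
    loss = tf.reduce_mean(tf.keras.losses.mse(train_features, cargo))

# gradient of the loss with respect to each intermediate activation
for activation in out_intermediate:
    print(tape.gradient(loss, activation))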

If you want to compute a custom loss, I would suggest customizing what happens in Model.fit.
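
That essentially boils down to subclassing tf.keras.Model and overriding train_step, which model.fit then calls for every batch. A minimal sketch of that pattern (loss and metric handling kept deliberately simple):

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            # the loss configured in compile(); swap in any custom loss here
            loss = self.compiled_loss(y, y_pred)
        # compute and apply the gradients with the compiled optimizer
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        # update the metrics configured in compile()
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Hypothetical usage, building the model like the functional toy model above:
# model = CustomModel(toy_inputs, toy_outputs)
# model.compile(optimizer="adam", loss="mse")
# model.fit(train_data, train_features, epochs=1)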

answered 2021-02-09 at 15:15