In tensorflow-2.0 I am trying to create a keras.layers.Layer that outputs the Kullback-Leibler (KL) divergence between two tensorflow_probability.distributions. I want to compute the gradient of that output (i.e. of the KL divergence) with respect to the mean of one of the distributions.
Unfortunately, in all of my attempts so far, the resulting gradients are 0.
I implemented the minimal example shown below. I suspect the problem may be related to the eager execution mode of tf 2, because I know that a similar approach works in tf 1, where eager execution is disabled by default.
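For reference, here is a stripped-down sanity check (independent of the Keras layer below) of whether tfp.distributions.kl_divergence is differentiable at all in eager mode when both distributions are constructed inside the GradientTape context. Analytically, the KL divergence between N(mean_W, 1) and N(0, 1) is 0.5 * mean_W**2, so the gradient with respect to mean_W should equal mean_W rather than 0:
import tensorflow as tf
import tensorflow_probability as tfp

mean_W = tf.Variable(0.1)

with tf.GradientTape() as tape:
    # both distributions are built inside the tape context
    p = tfp.distributions.MultivariateNormalDiag(loc=mean_W, scale_diag=(1.,))
    q = tfp.distributions.MultivariateNormalDiag(loc=mean_W * 0., scale_diag=(1.,))
    kl = tfp.distributions.kl_divergence(p, q)

grads = tape.gradient(kl, [mean_W])
print('KL:', kl)        # analytically 0.5 * mean_W**2
print('grads:', grads)  # analytically mean_W, i.e. should not be 0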
Here is the minimal example I tried:
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Layer, Input

# 1 Define Layer
class test_layer(Layer):

    def __init__(self, **kwargs):
        super(test_layer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.mean_W = self.add_weight('mean_W', trainable=True)
        self.kernel_dist = tfp.distributions.MultivariateNormalDiag(
            loc=self.mean_W,
            scale_diag=(1.,)
        )
        super(test_layer, self).build(input_shape)

    def call(self, x):
        return tfp.distributions.kl_divergence(
            self.kernel_dist,
            tfp.distributions.MultivariateNormalDiag(
                loc=self.mean_W * 0.,
                scale_diag=(1.,)
            )
        )

# 2 Create model
x = Input(shape=(3,))
fx = test_layer()(x)
test_model = Model(name='test_random', inputs=[x], outputs=[fx])

# 3 Calculate gradient
print('\n\n\nCalculating gradients: ')
# example data, only used as a dummy
x_data = np.random.rand(99, 3).astype(np.float32)
for x_now in np.split(x_data, 3):
    # print(x_now.shape)
    with tf.GradientTape() as tape:
        fx_now = test_model(x_now)
    grads = tape.gradient(
        fx_now,
        test_model.trainable_variables,
    )
    print('\nKL-Divergence: ', fx_now, '\nGradient: ', grads, '\n')

print(test_model.summary())
The output of the code above is
Calculating gradients:
KL-Divergence: tf.Tensor(0.0029436834, shape=(), dtype=float32)
Gradient: [<tf.Tensor: id=237, shape=(), dtype=float32, numpy=0.0>]
KL-Divergence: tf.Tensor(0.0029436834, shape=(), dtype=float32)
Gradient: [<tf.Tensor: id=358, shape=(), dtype=float32, numpy=0.0>]
KL-Divergence: tf.Tensor(0.0029436834, shape=(), dtype=float32)
Gradient: [<tf.Tensor: id=479, shape=(), dtype=float32, numpy=0.0>]
Model: "test_random"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 3)] 0
_________________________________________________________________
test_layer_3 (test_layer) () 1
=================================================================
Total params: 1
Trainable params: 1
Non-trainable params: 0
_________________________________________________________________
None
The KL divergence is computed correctly, but the resulting gradient is 0. What is the correct way to obtain the gradient?
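One variant I am wondering about (just a sketch, not verified): since build() only runs once, the distribution stored in self.kernel_dist is created outside of any GradientTape. Would constructing the distributions inside call() instead, as in the hypothetical layer below (using the same imports as above), be the intended pattern, or is there a more canonical way?
# hypothetical variant of the layer above, with the distributions
# constructed inside call() instead of build()
class test_layer_v2(Layer):

    def __init__(self, **kwargs):
        super(test_layer_v2, self).__init__(**kwargs)

    def build(self, input_shape):
        self.mean_W = self.add_weight('mean_W', trainable=True)
        super(test_layer_v2, self).build(input_shape)

    def call(self, x):
        kernel_dist = tfp.distributions.MultivariateNormalDiag(
            loc=self.mean_W,
            scale_diag=(1.,)
        )
        prior = tfp.distributions.MultivariateNormalDiag(
            loc=self.mean_W * 0.,
            scale_diag=(1.,)
        )
        return tfp.distributions.kl_divergence(kernel_dist, prior)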