Below is a code snippet that, given a state, generates an action from a state-dependent distribution (prob_policy). The weights of the graph are then updated with a loss equal to -1 times the probability of having selected that action. In the following example, both the mean (mu) and the covariance (sigma) of the MultivariateNormal are trainable/learned.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
# make the graph
state = tf.placeholder(tf.float32, (1, 2), name="state")

# mean of the action distribution, a learned function of the state
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)

# diagonal covariance of the action distribution, also a learned function of the state
sigma = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)

sigma = tf.squeeze(sigma)
mu = tf.squeeze(mu)

prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)
action = prob_policy.sample()
picked_action_prob = prob_policy.prob(action)
loss = -tf.log(picked_action_prob)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_input = np.expand_dims([0., 0.], 0)
    _, action_loss = sess.run([train_op, loss], {state: state_input})
    print(action_loss)
However, when I replace this line
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)
with the following line (and comment out the lines that create the sigma layer and squeeze it)
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1.,1.])
I get the following error:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'fully_connected/weights:0' shape=(2, 2) dtype=float32_ref>", "<tf.Variable 'fully_connected/biases:0' shape=(2,) dtype=float32_ref>"] and loss Tensor("Neg:0", shape=(), dtype=float32).
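In case it helps to reproduce, here is the full graph construction after that edit. It is identical to the snippet above except that the sigma layer and its squeeze are removed and the constant scale_diag is substituted:

state = tf.placeholder(tf.float32, (1, 2), name="state")

# mean of the action distribution, still a learned function of the state
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
mu = tf.squeeze(mu)

# covariance is now a fixed constant instead of a learned function of the state
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1., 1.])
action = prob_policy.sample()
picked_action_prob = prob_policy.prob(action)
loss = -tf.log(picked_action_prob)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)  # the ValueError is raised here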
I don't understand why this happens. Shouldn't it still be able to take gradients with respect to the weights in the mu layer? Why does making the distribution's covariance constant suddenly make it non-differentiable?
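One quick way to see where the gradient path breaks (a diagnostic sketch against the constant-scale_diag graph above; given the error, I'd expect it to print None for both of the mu layer's variables):

# diagnostic: show the gradient of the loss w.r.t. each trainable variable;
# a None here means no gradient path reaches that variable
grads = tf.gradients(loss, tf.trainable_variables())
for var, grad in zip(tf.trainable_variables(), grads):
    print(var.name, grad)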
System details:
- TensorFlow 1.13.1
- TensorFlow Probability 0.6.0
- Python 3.6.8
- macOS 10.13.6