
Below is a code snippet in which, given a state, an action is generated from a state-dependent distribution (prob_policy). The weights of the graph are then updated according to a loss that is -1 times the log-probability of selecting that action. In the following example, both the mean (mu) and the covariance (sigma) of the MultivariateNormal are trainable/learned.

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# make the graph
state = tf.placeholder(tf.float32, (1, 2), name="state")
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.squeeze(sigma)
mu = tf.squeeze(mu)
# policy: a diagonal Gaussian whose parameters are functions of the state
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)
action = prob_policy.sample()
picked_action_prob = prob_policy.prob(action)
loss = -tf.log(picked_action_prob)  # negative log-probability of the chosen action
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_input = np.expand_dims([0.,0.],0)
    _, action_loss = sess.run([train_op, loss], { state: state_input })
    print(action_loss)

However, when I replace this line

prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)

with the following line (and comment out the lines that generate the sigma layer and squeeze it)

prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1.,1.])

I get the following error

ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'fully_connected/weights:0' shape=(2, 2) dtype=float32_ref>", "<tf.Variable 'fully_connected/biases:0' shape=(2,) dtype=float32_ref>"] and loss Tensor("Neg:0", shape=(), dtype=float32).

I don't understand why this is happening. Shouldn't it still be able to take the gradient with respect to the weights in the mu layer? Why does making the covariance of the distribution constant suddenly make it non-differentiable?

System details:

  • TensorFlow 1.13.1
  • TensorFlow Probability 0.6.0
  • Python 3.6.8
  • macOS 10.13.6

2 Answers


The problem is caused by some caching we do for invertibility in MVNDiag (and other subclasses of TransformedDistribution).

If you do + 0 after .sample() (as a workaround), the gradients will work.

Also, I'd suggest using dist.log_prob(..) instead of tf.log(dist.prob(..)), for better numerics.
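
To make the numerics point concrete, here is a minimal standalone sketch (an illustration added here, not part of the original answer; 40. is just an arbitrary far-tail value): in float32, prob() underflows to zero long before log_prob() loses accuracy.

import tensorflow as tf
import tensorflow_probability as tfp

dist = tfp.distributions.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 1.])
x = tf.constant([40., 40.])  # arbitrary point far in the tail

with tf.Session() as sess:
    print(sess.run(tf.log(dist.prob(x))))  # -inf: prob() underflowed to 0
    print(sess.run(dist.log_prob(x)))      # approx. -1601.8: still finite

With both changes (the + 0 workaround and log_prob) applied, the full snippet: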

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# make the graph
state = tf.placeholder(tf.float32, (1, 2), name="state")
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.squeeze(sigma)
mu = tf.squeeze(mu)
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1.,1.])
action = prob_policy.sample() + 0  # "+ 0" yields a new tensor, so log_prob below can't hit the sample cache
loss = -prob_policy.log_prob(action)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_input = np.expand_dims([0.,0.],0)
    _, action_loss = sess.run([train_op, loss], { state: state_input })
    print(action_loss)
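
As a small diagnostic of what the workaround changes (a sketch that reuses mu and prob_policy from the script above, for illustration only): tf.gradients returns None where no path exists from the loss back to a tensor, and an all-None gradient list is exactly what minimize() turns into the ValueError above.

cached_action = prob_policy.sample()        # log_prob finds this tensor in the bijector cache
detached_action = prob_policy.sample() + 0  # new tensor, so the cache is missed

print(tf.gradients(-prob_policy.log_prob(cached_action), mu))    # [None]: no path to mu
print(tf.gradients(-prob_policy.log_prob(detached_action), mu))  # [<tf.Tensor ...>]: path to mu exists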
Answered 2019-05-31T17:43:53.853

I had to change this line

action = prob_policy.sample()

to this line

action = tf.stop_gradient(prob_policy.sample())

If someone could explain why learning the weights for the covariance makes the weights for loc differentiable in the loss, but making the covariance a constant does not, and how this line change fixes it, I would love the explanation! Thanks!
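
One pointer that may help (an aside, not a full answer): tf.stop_gradient passes its input through unchanged but is treated as a constant during backpropagation. Like the + 0 trick in the other answer, it also produces a new tensor, so the sample cache is missed and log_prob has to compute the inverse (which depends on mu) explicitly. A tiny standalone sketch of the semantics:

import tensorflow as tf

x = tf.constant(3.0)
full = x * x                    # d(full)/dx = 2x = 6
half = tf.stop_gradient(x) * x  # first factor is constant to backprop, so d(half)/dx = x = 3

with tf.Session() as sess:
    print(sess.run(tf.gradients(full, x)))  # [6.0]
    print(sess.run(tf.gradients(half, x)))  # [3.0]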

Answered 2019-05-30T22:56:48.713