
I am running into the following problem while trying to run an application with TensorFlow and Keras. I run

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

at the top of my script, because the original problem appeared when tensors were passed to the AdamOptimizer's get_updates() method, which raised an error saying they could not be converted to a numpy array.
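
For completeness, this is roughly how the top of the script is laid out. The model shown here is only a reconstruction from the summary further down (the input size of 15 is inferred from the 384 parameters of the first Dense layer, and the activations are assumptions), so treat it as a sketch rather than the actual code:

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()  # has to run before any graph, op or tensor is created

from keras.models import Sequential
from keras.layers import Dense

# Sketch of the policy network whose summary appears below:
# 24 * (15 + 1) = 384 parameters in the first Dense layer implies an input size of 15.
model = Sequential([
    Dense(24, activation='relu', input_dim=15),
    Dense(24, activation='relu'),
    Dense(5, activation='softmax'),
])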

The code snippet is as follows:

from keras import backend as K
from keras.optimizers import Adam

def optimizer(self):
    action = K.placeholder(dtype=float, shape=(None, 5))
    discounted_rewards = K.placeholder(shape=(None,))

    action_prob = K.sum(action * self.model.output, axis=1) 
    cross_entropy = K.log(action_prob) * discounted_rewards
    loss = -K.sum(cross_entropy)

    optimizer = Adam(lr=self.learning_rate)
    updates = optimizer.get_updates(self.model.trainable_weights, loss)
    train = K.function([self.model.input, action, discounted_rewards], [], updates=updates)

    return train

With this in place, I am now facing the following problem (see the stack trace below).

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 24)                384       
_________________________________________________________________
dense_1 (Dense)              (None, 24)                600       
_________________________________________________________________
dense_2 (Dense)              (None, 5)                 125       
=================================================================
Total params: 1,109
Trainable params: 1,109
Non-trainable params: 0
_________________________________________________________________
Traceback (most recent call last):
  File "reinforce_agent.py", line 95, in <module>
    agent = ReinforceAgent()
  File "reinforce_agent.py", line 28, in __init__
    self.optimizer = self.optimizer()
  File "reinforce_agent.py", line 55, in optimizer
    updates = optimizer.get_updates(self.model.trainable_weights, loss)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 727, in get_updates
    grads = self.get_gradients(loss, params)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 719, in get_gradients
    raise ValueError("Variable {} has `None` for gradient. "
ValueError: Variable Tensor("Neg:0", shape=(), dtype=float32) has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

I have tried various solutions, including using K.eval(loss) earlier, but that leads to other problems. My TensorFlow version is 2.4.1, my Keras version is 2.4.3, and my NumPy version is 1.19.5.

Is there any way to solve this?


1 Answer


The problem can be solved by making the following simple change to the optimizer method mentioned in the question:

def optimizer(self):
    action = K.placeholder(dtype=float, shape=(None, 5))
    discounted_rewards = K.placeholder(shape=(None,))

    # Calculate cross entropy error function
    action_prob = K.sum(action * self.model.output, axis=1) 
    cross_entropy = K.log(action_prob) * discounted_rewards
    loss = -K.sum(cross_entropy)

    # create training function
    optimizer = Adam(lr=self.learning_rate)
    updates = optimizer.get_updates(params=self.model.trainable_weights, loss=loss)
    train = K.function(inputs=[self.model.input, action, discounted_rewards], outputs=self.model.output, updates=updates)

    return train

See lines 12 and 13 of the function, i.e. the get_updates and K.function calls. get_updates is now called with explicit keyword arguments: its signature in optimizer_v2 is get_updates(loss, params), so the positional call in the question passed the trainable weights as the loss and the loss tensor as the params, which is why the error reports the Neg:0 tensor (the negated loss) as a variable with None for gradient. K.function is also given the model output as its outputs instead of an empty list.
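
With this change in place, the function returned by optimizer() can be called directly on numpy arrays. Below is a minimal usage sketch, assuming agent is the ReinforceAgent from the question; the batch of 3 dummy states, the state size of 15 (inferred from the model summary) and the chosen actions are all hypothetical:

import numpy as np

# Dummy batch: 3 timesteps, state size 15, one-hot actions over the 5 possible
# actions, and one discounted return per timestep.
states = np.zeros((3, 15), dtype=np.float32)
actions = np.zeros((3, 5), dtype=np.float32)
actions[np.arange(3), [0, 2, 4]] = 1.0  # hypothetical chosen actions, one-hot encoded
discounted_rewards = np.ones((3,), dtype=np.float32)

# __init__ in the question replaces agent.optimizer with the returned K.function,
# so calling it applies one Adam update and returns the model output for the batch.
outs = agent.optimizer([states, actions, discounted_rewards])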

Thanks!

answered 2021-04-04T05:19:27.300