So I've been following the DQN agent example/tutorial, and I set everything up just as in the example; the only difference is that I built my own custom Python environment and then wrapped it in TensorFlow. However, no matter how I shape my observation and action specs, I can't seem to get it to work whenever I give it an observation and ask for an action. This is the error I get:
tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [10] [Op:MatMul]
Here is how I set up my agent:
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import tf_py_environment
from tf_agents.networks import q_network
from tf_agents.utils import common

layer_parameters = (10,)  # one fully connected hidden layer, 10 units wide

# placeholders
learning_rate = 1e-3  # @param {type:"number"}
train_step_counter = tf.Variable(0)

# instantiate agent
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)

env = SumoEnvironment(self._num_actions, self._num_states)
env2 = tf_py_environment.TFPyEnvironment(env)

q_net = q_network.QNetwork(env2.observation_spec(),
                           env2.action_spec(),
                           fc_layer_params=layer_parameters)

print("Time step spec")
print(env2.time_step_spec())

agent = dqn_agent.DqnAgent(env2.time_step_spec(),
                           env2.action_spec(),
                           q_network=q_net,
                           optimizer=optimizer,
                           td_errors_loss_fn=common.element_wise_squared_loss,
                           train_step_counter=train_step_counter)
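For context, the step that triggers the error is asking the agent for an action. I haven't reproduced my exact call site here, but in the standard TF-Agents flow it looks roughly like the sketch below (agent.initialize(), env2.reset() and agent.policy.action() are the stock TF-Agents APIs; everything else matches the names above):

# standard TF-Agents flow (sketch): initialize the agent, then ask its
# policy for an action on a time step from the wrapped environment
agent.initialize()

time_step = env2.reset()                      # TimeStep from the TF wrapper
action_step = agent.policy.action(time_step)  # asking for an action is the kind of call that fails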
And here is how I set up my environment:
import numpy as np
from tf_agents import specs
from tf_agents.environments import py_environment

class SumoEnvironment(py_environment.PyEnvironment):

    def __init__(self, no_of_Actions, no_of_Observations):
        # the observation is a single float32 vector of length 16
        self._observation_spec = specs.TensorSpec(shape=(16,), dtype=np.float32, name='observation')
        # the action is a single int32: min is 0, max is no_of_Actions - 1
        self._action_spec = specs.BoundedArraySpec(shape=(1,), dtype=np.int32, minimum=0, maximum=no_of_Actions - 1, name='action')
        self._state = 0
        self._episode_ended = False
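The snippet above only shows __init__; the class also provides the usual PyEnvironment overrides. A minimal sketch of that boilerplate, with placeholder logic standing in for the real SUMO transition (ts is tf_agents.trajectories.time_step, imported at module level):

from tf_agents.trajectories import time_step as ts

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _reset(self):
        self._state = 0
        self._episode_ended = False
        return ts.restart(np.zeros((16,), dtype=np.float32))

    def _step(self, action):
        # placeholder: the real environment advances the SUMO simulation here
        observation = np.zeros((16,), dtype=np.float32)
        return ts.transition(observation, reward=0.0, discount=1.0)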
And this is my input/observation:
tf.Tensor([ 0.  0.  0.  0.  0.  0.  0.  0. -1. -1. -1. -1.  0.  0.  0. -1.], shape=(16,), dtype=float32)
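A hypothetical reconstruction of how that observation reaches the agent, assuming it is wrapped into a time step by hand with ts.restart (note there is no batch dimension on the observation):

observation = tf.constant([0., 0., 0., 0., 0., 0., 0., 0.,
                           -1., -1., -1., -1., 0., 0., 0., -1.],
                          dtype=tf.float32)   # shape (16,), unbatched
time_step = ts.restart(observation)           # wrap it into a first TimeStep
action_step = agent.policy.action(time_step)  # asking for an action is where the error appears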
I have experimented with the shape and depth of my Q_Net, and it looks to me like the [10] in the error is tied to the shape of my q-network. Setting its layer parameters to (4,) produces this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [4] [Op:MatMul]
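For reference, the message itself just means that a MatMul op was handed a rank-1 tensor where it expects a matrix (rank 2), and the shape it reports is the shape of that offending operand; it should be reproducible in isolation:

import tensorflow as tf

a = tf.zeros([10])     # rank-1 tensor, not a matrix
b = tf.zeros([10, 4])
tf.matmul(a, b)        # InvalidArgumentError: In[0] is not a matrix.
                       # Instead it has shape [10] [Op:MatMul]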