
TL;DR: RLlib's rollout command appears to be training the network rather than evaluating it.

I am trying to train, save, and evaluate a neural network with Ray RLlib's DQN on a custom simulator. To do that, I have been prototyping the workflow with OpenAI Gym's CartPole-v0 environment. While doing so, I noticed some strange results when running the rollout command for evaluation. (I used exactly the approach described in the RLlib Training APIs - Evaluating Trained Policies documentation.)

First, I trained a vanilla DQN network until it reached an episode_reward_mean of 200. Then I tested the network in CartPole-v0 for 1000 episodes using the rllib rollout command. For the first 135 episodes the rewards were poor, fluctuating between 10 and 200. From episode 136 onward, however, the score was consistently 200, which is a perfect score in CartPole-v0.

So it looks as if rllib rollout is training the network rather than evaluating it. I know that should not be the case, since there is no training code in the rollout.py module. But I have to say it really looks like training. How else would the score gradually improve as the episodes go on? Also, the network "adapts" to different starting positions later in the evaluation run, which looks to me like evidence of training.

Why is this happening?

The code I used is below:

  • Training
import ray
from ray import tune

ray.init()

# Train DQN on CartPole-v0 until the mean episode reward reaches 200;
# with checkpoint_freq=0, only the final checkpoint (checkpoint_at_end=True) is saved.
results = tune.run(
        "DQN",
        stop={"episode_reward_mean": 200},
        config={
                "env": "CartPole-v0",
                "num_workers": 6
        },
        checkpoint_freq=0,
        keep_checkpoints_num=1,
        checkpoint_score_attr="episode_reward_mean",
        checkpoint_at_end=True,
        local_dir=r"/home/ray_results/CartPole_Evaluation"
)
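
As a side note on the training step above, the checkpoint path used below does not have to be copied by hand; the ExperimentAnalysis object returned by tune.run can be asked for it. This is only a rough sketch based on my understanding of the Ray 1.x Tune API (get_best_trial / get_best_checkpoint), not something taken verbatim from the docs:

# Sketch: look up the best checkpoint programmatically instead of
# hard-coding its path (assumes the Ray 1.x ExperimentAnalysis API).
best_trial = results.get_best_trial(metric="episode_reward_mean", mode="max")
best_checkpoint = results.get_best_checkpoint(
        best_trial, metric="episode_reward_mean", mode="max")
print(best_checkpoint)  # the path passed to rllib rollout below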
  • Evaluation
rllib rollout ~/ray_results/CartPole_Evaluation/DQN_CartPole-v0_13hfd/checkpoint_139/checkpoint-139 \
             --run DQN --env CartPole-v0 --episodes 1000
  • Results
2021-01-12 17:26:48,764 INFO trainable.py:489 -- Current state after restoring: {'_iteration': 77, '_timesteps_total': None, '_time_total': 128.41606998443604, '_episodes_total': 819}
Episode #0: reward: 21.0
Episode #1: reward: 13.0
Episode #2: reward: 13.0
Episode #3: reward: 27.0
Episode #4: reward: 26.0
Episode #5: reward: 14.0
Episode #6: reward: 16.0
Episode #7: reward: 22.0
Episode #8: reward: 25.0
Episode #9: reward: 17.0
Episode #10: reward: 16.0
Episode #11: reward: 31.0
Episode #12: reward: 10.0
Episode #13: reward: 23.0
Episode #14: reward: 17.0
Episode #15: reward: 41.0
Episode #16: reward: 46.0
Episode #17: reward: 15.0
Episode #18: reward: 17.0
Episode #19: reward: 32.0
Episode #20: reward: 25.0
...
Episode #114: reward: 134.0
Episode #115: reward: 90.0
Episode #116: reward: 38.0
Episode #117: reward: 33.0
Episode #118: reward: 36.0
Episode #119: reward: 114.0
Episode #120: reward: 183.0
Episode #121: reward: 200.0
Episode #122: reward: 166.0
Episode #123: reward: 200.0
Episode #124: reward: 155.0
Episode #125: reward: 181.0
Episode #126: reward: 72.0
Episode #127: reward: 200.0
Episode #128: reward: 54.0
Episode #129: reward: 196.0
Episode #130: reward: 200.0
Episode #131: reward: 200.0
Episode #132: reward: 188.0
Episode #133: reward: 200.0
Episode #134: reward: 200.0
Episode #135: reward: 173.0
Episode #136: reward: 200.0
Episode #137: reward: 200.0
Episode #138: reward: 200.0
Episode #139: reward: 200.0
Episode #140: reward: 200.0
...
Episode #988: reward: 200.0
Episode #989: reward: 200.0
Episode #990: reward: 200.0
Episode #991: reward: 200.0
Episode #992: reward: 200.0
Episode #993: reward: 200.0
Episode #994: reward: 200.0
Episode #995: reward: 200.0
Episode #996: reward: 200.0
Episode #997: reward: 200.0
Episode #998: reward: 200.0
Episode #999: reward: 200.0

1 Answer


I posted the same question on the Ray Discussion forum and got an answer that resolves this issue.

Because I was calling rollout on a trained network whose EpsilonGreedy exploration module was configured for 10k timesteps, the agent initially picks its actions with a degree of randomness. As timesteps accumulate, the random fraction (epsilon) anneals down to 0.02, after which the network essentially only picks the best action. That is why the restored agent appears to be training when invoked with rollout.
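
For reference, the numbers above come from what I believe are RLlib's default DQN exploration settings; I did not set them explicitly, so the fragment below is only an illustration of the config that produces this behavior:

# Illustration only: my understanding of DQN's default exploration config.
# Epsilon anneals from 1.0 down to 0.02 over the first 10,000 timesteps,
# so a freshly restored agent still acts partly at random early on.
config = {
    "env": "CartPole-v0",
    "exploration_config": {
        "type": "EpsilonGreedy",
        "initial_epsilon": 1.0,
        "final_epsilon": 0.02,
        "epsilon_timesteps": 10000,
    },
}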

As Sven Mika suggested, the solution to this problem is simply to suppress exploration behavior during evaluation:

config:
     evaluation_config:
         explore: false

With this change, the agent scored 200 in every one of the test episodes!
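
The same evaluation can also be done from Python by disabling exploration when restoring the trainer and stepping the environment manually. This is a rough sketch based on my understanding of the Ray 1.x API (DQNTrainer, compute_action) and the old Gym reset/step interface, not code taken from the docs:

import os
import gym
import ray
from ray.rllib.agents.dqn import DQNTrainer

ray.init()

# Top-level "explore": False switches off EpsilonGreedy, so the restored
# policy always takes its greedy action during evaluation.
trainer = DQNTrainer(config={
    "env": "CartPole-v0",
    "num_workers": 0,   # a single local worker is enough for evaluation
    "explore": False,
})
trainer.restore(os.path.expanduser(
    "~/ray_results/CartPole_Evaluation/DQN_CartPole-v0_13hfd/"
    "checkpoint_139/checkpoint-139"))

env = gym.make("CartPole-v0")
obs, done, episode_reward = env.reset(), False, 0.0
while not done:
    action = trainer.compute_action(obs)  # deterministic with explore=False
    obs, reward, done, _ = env.step(action)
    episode_reward += reward
print("episode reward:", episode_reward)  # 200 for a fully solved episode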
