I am using a combination of Ray 1.3.0 (for RLlib) and SUMO version 1.9.2 to simulate a multi-agent scenario. I have configured RLlib to use a single PPO network that is commonly updated/used by all N agents. My evaluation settings look like this:
# === Evaluation Settings ===
# Evaluate with every `evaluation_interval` training iterations.
# The evaluation stats will be reported under the "evaluation" metric key.
# Note that evaluation is currently not parallelized, and that for Ape-X
# metrics are already only reported for the lowest epsilon workers.
"evaluation_interval": 20,
# Number of episodes to run per evaluation period. If using multiple
# evaluation workers, we will run at least this many episodes total.
"evaluation_num_episodes": 10,
# Whether to run evaluation in parallel to a Trainer.train() call
# using threading. Default=False.
# E.g. evaluation_interval=2 -> For every other training iteration,
# the Trainer.train() and Trainer.evaluate() calls run in parallel.
# Note: This is experimental. Possible pitfalls could be race conditions
# for weight synching at the beginning of the evaluation loop.
"evaluation_parallel_to_training": False,
# Internal flag that is set to True for evaluation workers.
"in_evaluation": True,
# Typical usage is to pass extra args to evaluation env creator
# and to disable exploration by computing deterministic actions.
# IMPORTANT NOTE: Policy gradient algorithms are able to find the optimal
# policy, even if this is a stochastic one. Setting "explore=False" here
# will result in the evaluation workers not using this optimal policy!
"evaluation_config": {
# Example: overriding env_config, exploration, etc:
"lr": 0, # To prevent any kind of learning during evaluation
"explore": True # As required by PPO (read IMPORTANT NOTE above)
},
# Number of parallel workers to use for evaluation. Note that this is set
# to zero by default, which means evaluation will be run in the trainer
# process (only if evaluation_interval is not None). If you increase this,
# it will increase the Ray resource usage of the trainer since evaluation
# workers are created separately from rollout workers (used to sample data
# for training).
"evaluation_num_workers": 1,
# Customize the evaluation method. This must be a function of signature
# (trainer: Trainer, eval_workers: WorkerSet) -> metrics: dict. See the
# Trainer.evaluate() method to see the default implementation. The
# trainer guarantees all eval workers have the latest policy state before
# this function is called.
"custom_eval_function": None,
What happens is that every 20 iterations (with each iteration collecting "X" training samples), an evaluation run of at least 10 episodes takes place. The rewards received by all N agents are summed within each episode, added up over these episodes, and reported as the reward sum for that particular evaluation run. Over time I have noticed a pattern in which the reward sums keep repeating across the evaluation intervals and the learning goes nowhere.
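To make that aggregation concrete, the default evaluation behaves roughly like the following custom_eval_function sketch (illustrative only, following the documented (trainer, eval_workers) signature; I am not actually overriding custom_eval_function):

import ray
from ray.rllib.evaluation.metrics import collect_metrics

def custom_eval_function(trainer, eval_workers):
    """Rough sketch of the default evaluation aggregation."""
    workers = eval_workers.remote_workers()

    # Each remote sample() call collects (roughly) one complete evaluation
    # episode; with one eval worker this loop yields ~10 episodes.
    for _ in range(trainer.config["evaluation_num_episodes"]):
        ray.get([w.sample.remote() for w in workers])

    # episode_reward_mean/min/max are computed over these episodes, where
    # each episode's reward is the sum over all N agents.
    metrics = collect_metrics(remote_workers=workers)
    return metrics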
Update (23/06/2021)
Unfortunately I did not have TensorBoard activated for that particular run, but from the mean rewards collected during each 10-episode evaluation period (which takes place every 20 iterations), it is clear that there is a repeating pattern, as shown in the annotated plot below:
The 20 agents in the scenario are supposed to learn to avoid collisions; instead they somehow keep stagnating on some policy and end up displaying exactly the same sequence of rewards during evaluation.
Is this a characteristic of how I have configured the evaluation, or is there something else I should be checking? I would be grateful if anyone could advise me or point me in the right direction.
Thank you.