
I'm working with the 3DBall example environment, but I'm getting some very strange results that I don't understand. So far my code is just a for-range loop that looks at the rewards and fills the required inputs with random values. However, no negative reward is ever shown, and at random intervals there is no decision step, which makes sense, but shouldn't it keep simulating until there is a decision step? Any help would be appreciated, since there seems to be little recourse on this beyond the documentation.

import random
import numpy
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment()
env.reset()
behavior_names = env.behavior_specs

for i in range(50):
    arr = []
    behavior_names = env.behavior_specs
    for i in behavior_names:
        print(i)
    DecisionSteps = env.get_steps("3DBall?team=0")
    print(DecisionSteps[0].reward, len(DecisionSteps[0].reward))
    print(DecisionSteps[0].action_mask)  # for some reason the action mask is False when DecisionSteps[0].reward is empty, and None when it is not


    for i in range(len(DecisionSteps[0])):
        arr.append([])
        for b in range(2):
            arr[-1].append(random.uniform(-10,10))
    if(len(DecisionSteps[0])!= 0):
        env.set_actions("3DBall?team=0",numpy.array(arr))
        env.step()
    else:
        env.step()
env.close()

1 Answer


I think your problem is that when the simulation terminates and needs to be reset, the agent does not return a decision_step but a terminal_step instead. This happens because the agent has dropped the ball, and the reward returned in the terminal_step will be -1.0. I've taken your code and made some changes, and now it runs fine (except that you will probably want to change it so that you don't reset the environment every time one of the agents drops the ball).

import numpy as np
import mlagents
from mlagents_envs.environment import UnityEnvironment

# -----------------
# This code is used to close an env that might not have been closed before
try:
    env.close()
except Exception:
    pass
# -----------------

env = UnityEnvironment(file_name = None)
env.reset()

for i in range(1000):
    arr = []
    behavior_names = env.behavior_specs

    # Go through all existing behaviors
    for behavior_name in behavior_names:
        decision_steps, terminal_steps = env.get_steps(behavior_name)

        for agent_id_terminated in terminal_steps:
            print("Agent " + behavior_name + " has terminated, resetting environment.")
            # This is probably not the desired behaviour, as the other agents are still active. 
            env.reset()

        actions = []
        for agent_id_decisions in decision_steps:
            actions.append(np.random.uniform(-1,1,2))

        # print(decision_steps[0].reward)
        # print(decision_steps[0].action_mask)

        if len(actions) > 0:
            env.set_actions(behavior_name, np.array(actions))
    try:
        env.step()
    except Exception:
        print("Something happened when taking a step in the environment.")
        print("The communicator has probably terminated, stopping simulation early.")
        break
env.close()
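The per-agent action loop above can also be done in one vectorized NumPy call, which avoids building the list element by element. A minimal sketch, independent of the Unity connection; the function name and the agent count / action size used in the example are illustrative, not part of the ML-Agents API:

```python
import numpy as np

def random_continuous_actions(n_agents, action_size, low=-1.0, high=1.0, rng=None):
    """Sample one continuous action per active agent, shaped (n_agents, action_size)."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(low, high, size=(n_agents, action_size))

# For 3DBall: 2 continuous actions per agent, values in [-1, 1].
actions = random_continuous_actions(len_decision_steps := 3, 2)
print(actions.shape)  # (3, 2)
```

In the loop you would call it with `len(decision_steps)` and pass the result straight to `env.set_actions(behavior_name, actions)`.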
answered 2020-10-20T11:38:52.657