
Release 0.12.1 (#3078)

* cherry pick PR#3032 (#3066)

* better logging for ports and versions (#3048) (#3069)

* Release 0.12.1 doc fixes (#3070)

* add env.step() (#3068)

* bump version strings
Branch: develop
Committed by GitHub, 5 years ago
Commit f9935dc9
4 files changed, 4 insertions(+) and 4 deletions(-)
  1. gym-unity/gym_unity/__init__.py (2 changes)
  2. ml-agents-envs/mlagents/envs/__init__.py (2 changes)
  3. ml-agents/mlagents/trainers/__init__.py (2 changes)
  4. notebooks/getting-started.ipynb (2 changes)

gym-unity/gym_unity/__init__.py (2 changes)

- __version__ = "0.12.0"
+ __version__ = "0.12.1"

ml-agents-envs/mlagents/envs/__init__.py (2 changes)

- __version__ = "0.12.0"
+ __version__ = "0.12.1"

ml-agents/mlagents/trainers/__init__.py (2 changes)

- __version__ = "0.12.0"
+ __version__ = "0.12.1"
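The three `__version__` bumps above must move together; a release that misses one file ships mismatched packages. A minimal sketch of a consistency check, with the file contents inlined for illustration (a real check would read the actual `__init__.py` paths from the repository):

```python
import re

# Inlined stand-ins for the three bumped __init__.py files; a real check
# would open these paths and read their contents instead.
init_files = {
    "gym-unity/gym_unity/__init__.py": '__version__ = "0.12.1"\n',
    "ml-agents-envs/mlagents/envs/__init__.py": '__version__ = "0.12.1"\n',
    "ml-agents/mlagents/trainers/__init__.py": '__version__ = "0.12.1"\n',
}

def extract_version(source_text):
    """Pull the __version__ string out of a module's source text."""
    match = re.search(r'__version__\s*=\s*"([^"]+)"', source_text)
    if match is None:
        raise ValueError("no __version__ assignment found")
    return match.group(1)

versions = {path: extract_version(src) for path, src in init_files.items()}
# All packages must agree on a single version string.
assert len(set(versions.values())) == 1, f"version mismatch: {versions}"
```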

notebooks/getting-started.ipynb (2 changes)


"metadata": {},
"source": [
"### 5. Take random actions in the environment\n",
- "Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the `action_space_type` of the default brain. \n",
+ "Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the `action_space_type` of the default brain.\n",
"\n",
"Once this cell is executed, 10 messages will be printed that detail how much reward will be accumulated for the next 10 episodes. The Unity environment will then pause, waiting for further signals telling it what to do next. Thus, not seeing any animation is expected when running this cell."
]
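The notebook cell edited above runs a random-action loop and prints the reward accumulated over 10 episodes. A hedged sketch of that loop shape, using a toy stand-in for the Unity side (the real notebook drives a `UnityEnvironment` and picks actions by the default brain's action space type; `ToyEnv` here is purely illustrative):

```python
import random

class ToyEnv:
    """Illustrative stand-in for a Unity environment: one agent,
    10-step episodes, a discrete action space of size 4, reward 1.0 per step."""
    action_space_size = 4

    def reset(self):
        self._t = 0
        return 0.0  # dummy initial observation

    def step(self, action):
        self._t += 1
        reward = 1.0
        done = self._t >= 10
        return 0.0, reward, done  # observation, reward, episode-done flag

def run_random_episodes(env, num_episodes=10, seed=0):
    """Run num_episodes with uniformly random actions; return per-episode rewards."""
    rng = random.Random(seed)
    totals = []
    for _ in range(num_episodes):
        env.reset()
        done, total = False, 0.0
        while not done:
            action = rng.randrange(env.action_space_size)  # random action
            _, reward, done = env.step(action)
            total += reward
        totals.append(total)
    return totals

rewards = run_random_episodes(ToyEnv())
print(rewards)  # ten episode totals, each 10.0 for this toy environment
```

As in the notebook, the loop pauses between episodes only because the environment waits for the next action; no rendering is implied.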
