
Additional typo, grammar, and vocabulary fixes (#197)

* Fixed typo and grammar

* Update Agents-Editor-Interface.md

* Fixed typo

* Fixed vocabulary

* Fixed typo

* Fixed typo
/tag-0.2.1
Arthur Juliani · 7 years ago
Current commit: cfaa6d28
4 files changed, 6 insertions(+), 6 deletions(-)
  1. docs/Agents-Editor-Interface.md (2 changes)
  2. docs/Organizing-the-Scene.md (2 changes)
  3. docs/Unity-Agents-Overview.md (2 changes)
  4. docs/best-practices.md (6 changes)

docs/Agents-Editor-Interface.md (2 changes)


  * `Action Descriptions` - A list of strings used to name the available actions for the Brain.
  * `State Space Type` - Corresponds to whether state vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
  * `Action Space Type` - Corresponds to whether action vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
- * `Type of Brain` - Describes how Brain will decide actions.
+ * `Type of Brain` - Describes how the Brain will decide actions.
  * `External` - Actions are decided using Python API.
  * `Internal` - Actions are decided using internal TensorflowSharp model.
  * `Player` - Actions are decided using Player input mappings.
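The `State Space Type` and `Action Space Type` settings in this hunk distinguish Discrete (a single integer) from Continuous (a series of real-valued floats). A minimal conceptual sketch of that distinction — illustrative names only, not the ML-Agents API:

```python
import random

def sample_action(space_type, size):
    """Sample a placeholder action for a Brain.

    Discrete: a single integer index into the named action list.
    Continuous: a list of `size` real-valued floats.
    """
    if space_type == "discrete":
        return random.randrange(size)  # integer in 0..size-1
    if space_type == "continuous":
        return [random.uniform(-1.0, 1.0) for _ in range(size)]
    raise ValueError(f"unknown space type: {space_type!r}")

# A discrete Brain with four named actions ("Action Descriptions"):
action_descriptions = ["Forward", "Back", "Left", "Right"]
chosen = sample_action("discrete", len(action_descriptions))
print(action_descriptions[chosen])
```

An External Brain would receive such an action from the Python API each step; a Player Brain would map keyboard input to the same discrete indices.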

docs/Organizing-the-Scene.md (2 changes)


  * Coordinating the Brains which must be set as children of the Academy.
  #### Brains
- Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. A Brains is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.
+ Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. The brain is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.
  #### Agents
  Each agent within a scene takes actions according to the decisions provided by it's linked Brain. There can be as many Agents of as many types as you like in the scene. The state size and action size of each agent must match the brain's parameters in order for the Brain to decide actions for it.
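The one-Brain-to-many-Agents relationship described in this hunk can be sketched in a few lines. This is a conceptual illustration with hypothetical class names, not the ML-Agents API:

```python
class Brain:
    """One decision-maker shared by any number of linked agents."""
    def __init__(self, name):
        self.name = name
        self.agents = []

    def link(self, agent):
        self.agents.append(agent)
        agent.brain = self

    def decide(self):
        # A single decision pass produces an action for every linked agent.
        return {agent.name: "no-op" for agent in self.agents}

class Agent:
    def __init__(self, name):
        self.name = name
        self.brain = None  # set when linked to a Brain

brain = Brain("ExampleBrain")
for n in ("Agent1", "Agent2", "Agent3"):
    brain.link(Agent(n))

actions = brain.decide()  # one brain decides for all three agents
```

In the real toolkit, the matching of state size and action size between agent and brain is what makes this shared decision pass possible.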

docs/Unity-Agents-Overview.md (2 changes)


  ![diagram](../images/agents_diagram.png)
- A visual depiction of how an Learning Environment might be configured within ML-Agents.
+ A visual depiction of how a Learning Environment might be configured within ML-Agents.
  The three main kinds of objects within any Agents Learning Environment are:

docs/best-practices.md (6 changes)


  # Environment Design Best Practices
  ## General
- * It is often helpful to being with the simplest version of the problem, to ensure the agent can learn it. From there increase
+ * It is often helpful to start with the simplest version of the problem, to ensure the agent can learn it. From there increase
- * When possible, It is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.
+ * When possible, it is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.
- * If you want the agent the finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
+ * If you want the agent to finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
  * Overly-large negative rewards can cause undesirable behavior where an agent learns to avoid any behavior which might produce the negative reward, even if it is also behavior which can eventually lead to a positive reward.
  ## States
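The per-step penalty advice in this hunk can be illustrated numerically. A minimal sketch with hypothetical reward values (the -0.05 penalty is from the text; the completion bonus of 1.0 is an assumed value for illustration):

```python
STEP_PENALTY = -0.05      # small penalty on every step the task is unfinished
COMPLETION_REWARD = 1.0   # assumed bonus when the task is done (ends episode)

def episode_return(steps_to_finish):
    """Total reward for an episode finishing after `steps_to_finish` steps.

    Every step before completion incurs the penalty, so faster solutions
    earn a strictly higher return -- the incentive the bullet describes.
    """
    return STEP_PENALTY * (steps_to_finish - 1) + COMPLETION_REWARD

fast = episode_return(5)    # 4 penalised steps, then completion
slow = episode_return(15)   # 14 penalised steps, then completion
assert fast > slow          # finishing quickly is rewarded
```

This also shows why the last bullet warns against overly-large negative rewards: if `STEP_PENALTY` dwarfed `COMPLETION_REWARD`, the agent could learn to avoid acting at all rather than pursue the eventual positive reward.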
