
Merge pull request #599 from Unity-Technologies/docs-refactor

Quick start integrated into Installation
/develop-generalizationTraining-TrainerController
GitHub · 7 years ago
Current commit: c2902dfe
22 files changed, with 2108 insertions and 194 deletions
  1. README.md (11)
  2. docs/Getting-Started-with-Balance-Ball.md (105)
  3. docs/Installation.md (14)
  4. docs/Learning-Environment-Best-Practices.md (2)
  5. docs/Learning-Environment-Design-Agents.md (2)
  6. docs/Learning-Environment-Examples.md (8)
  7. docs/Readme.md (12)
  8. docs/Training-ML-Agents.md (12)
  9. docs/Using-TensorFlow-Sharp-in-Unity.md (6)
  10. docs/localized/zh-CN/docs/Getting-Started-with-Balance-Ball.md (6)
  11. docs/localized/zh-CN/docs/Installation.md (2)
  12. docs/localized/zh-CN/docs/Readme.md (3)
  13. unity-environment/Assets/ML-Agents/Scripts/CoreBrainInternal.cs (2)
  14. docs/Basic-Guide.md (152)
  15. docs/FAQ.md (91)
  16. docs/Limitations.md (19)
  17. docs/images/imported-tensorflowsharp.png (117)
  18. docs/images/project-settings.png (60)
  19. docs/images/running-a-pretrained-model.gif (1001)
  20. docs/images/training-command-example.png (191)
  21. docs/images/training-running.png (419)
  22. docs/Limitations-and-Common-Issues.md (67)

README.md (11)


* Visualizing network outputs within the environment
* Simplified set-up with Docker
## Documentation and References
## Documentation
**For more information, in addition to installation and usage
instructions, see our [documentation home](docs/Readme.md).** If you have
* For more information, in addition to installation and usage
instructions, see our [documentation home](docs/Readme.md).
* If you have
We have also published a series of blog posts that are relevant for ML-Agents:
## References
We have published a series of blog posts that are relevant for ML-Agents:
- Overviewing reinforcement learning concepts
([multi-armed bandit](https://blogs.unity3d.com/2017/06/26/unity-ai-themed-blog-entries/)
and [Q-learning](https://blogs.unity3d.com/2017/08/22/unity-ai-reinforcement-learning-with-q-learning/))

docs/Getting-Started-with-Balance-Ball.md (105)


## Building the Environment
The first step is to open the Unity scene containing the 3D Balance Ball
environment:
1. Launch Unity.
2. On the Projects dialog, choose the **Open** option at the top of the window.
3. Using the file dialog that opens, locate the `unity-environment` folder
within the ML-Agents project and click **Open**.
4. In the `Project` window, navigate to the folder
`Assets/ML-Agents/Examples/3DBall/`.
5. Double-click the `Scene` file to load the scene containing the Balance
Ball environment.
![3DBall Scene](images/mlagents-Open3DBall.png)
Since we are going to build this environment to conduct training, we need to
set the brain used by the agents to **External**. This allows the agents to
communicate with the external training process when making their decisions.
1. In the **Scene** window, click the triangle icon next to the Ball3DAcademy
object.
2. Select its child object `Ball3DBrain`.
3. In the Inspector window, set **Brain Type** to `External`.
![Set Brain to External](images/mlagents-SetExternalBrain.png)
Next, we want to set up the scene to play correctly when the training process
launches our environment executable. This means:
* The environment application runs in the background
* No dialogs require interaction
* The correct scene loads automatically
1. Open Player Settings (menu: **Edit** > **Project Settings** > **Player**).
2. Under **Resolution and Presentation**:
- Ensure that **Run in Background** is Checked.
- Ensure that **Display Resolution Dialog** is set to Disabled.
3. Open the Build Settings window (menu: **File** > **Build Settings**).
4. Choose your target platform.
- (optional) Select “Development Build” to
[log debug messages](https://docs.unity3d.com/Manual/LogFiles.html).
5. If any scenes are shown in the **Scenes in Build** list, make sure that
the 3DBall Scene is the only one checked. (If the list is empty, then only the
current scene is included in the build).
6. Click *Build*:
a. In the File dialog, navigate to the `python` folder in your ML-Agents
directory.
b. Assign a file name and click **Save**.
![Build Window](images/mlagents-BuildWindow.png)
To build the 3D Balance Ball environment, follow the steps in the
[Building an Environment](Basic-Guide.md#building-an-example-environment) section
of the Basic Guide page.
can perform the training. To first ensure that your environment and the Python
API work as expected, you can use the `python/Basics`
[Jupyter notebook](Background-Jupyter.md).
This notebook contains a simple walkthrough of the functionality of the API.
Within `Basics`, be sure to set `env_name` to the name of the environment file
you built earlier.
can perform the training.
### Training with PPO

every training run are saved to the same directory and will all be included
on the same graph.
To summarize, go to your command line, enter the `ml-agents` directory and type:
To summarize, go to your command line, enter the `ml-agents/python` directory and type:
python3 python/learn.py <env_file_path> --run-id=<run-identifier> --train
python3 learn.py <env_name> --run-id=<run-identifier> --train
The `--train` flag tells ML-Agents to run in training mode. `env_file_path` should be the path to the Unity executable that was just created.
The `--train` flag tells ML-Agents to run in training mode. `env_name` should be the name of the Unity executable that was just created.
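For example, if you built the executable as `3DBall` inside the `python` folder, the command would be (the run identifier is just an example label):

```
python3 learn.py 3DBall --run-id=firstRun --train
```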
Once you start training using `learn.py` in the way described in the previous section, the `ml-agents` folder will
Once you start training using `learn.py` in the way described in the previous section, the `ml-agents/python` folder will
in more detail, you can use TensorBoard. From the command line run:
in more detail, you can use TensorBoard. From the command line navigate to `ml-agents/python` folder and run:
Then navigate to `localhost:6006`.
Then navigate to `localhost:6006` in your browser.
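The TensorBoard invocation itself falls outside this diff hunk; assuming the default `summaries` output folder that `learn.py` writes to, it would look something like:

```
tensorboard --logdir=summaries
```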
From TensorBoard, you will see the summary statistics:

default. In order to enable it, you must follow these steps. Please note that
the `Internal` Brain mode will only be available once you complete these steps.
1. Make sure the TensorFlowSharp plugin is in your `Assets` folder. A Plugins
folder which includes TF# can be downloaded
[here](https://s3.amazonaws.com/unity-ml-agents/0.3/TFSharpPlugin.unitypackage).
Double click and import it once downloaded. You can see if this was
successfully installed by checking the TensorFlow files in the Project tab
under `Assets` -> `ML-Agents` -> `Plugins` -> `Computer`
2. Go to `Edit` -> `Project Settings` -> `Player`
3. For each of the platforms you target
(**`PC, Mac and Linux Standalone`**, **`iOS`** or **`Android`**):
1. Go into `Other Settings`.
2. Select `Scripting Runtime Version` to
`Experimental (.NET 4.6 Equivalent)`
3. In `Scripting Defined Symbols`, add the flag `ENABLE_TENSORFLOW`.
After typing it in, press Enter.
4. Go to `File` -> `Save Project`
5. Restart the Unity Editor.
To set up TensorFlowSharp support, follow the [Setting up ML-Agents within Unity](Basic-Guide.md#setting-up-ml-agents-within-unity) section of the Basic Guide page.
1. The trained model is stored in `models/<run-identifier>` in the `ml-agents` folder. Once the
training is complete, there will be a `<env_name>.bytes` file in that location where `<env_name>` is the name
of the executable used during training.
2. Move `<env_name>.bytes` from `python/models/ppo/` into
`unity-environment/Assets/ML-Agents/Examples/3DBall/TFModels/`.
3. Open the Unity Editor, and select the `3DBall` scene as described above.
4. Select the `Ball3DBrain` object from the Scene hierarchy.
5. Change the `Type of Brain` to `Internal`.
6. Drag the `<env_name>.bytes` file from the Project window of the Editor
to the `Graph Model` placeholder in the `Ball3DBrain` inspector window.
7. Press the Play button at the top of the editor.
If you followed these steps correctly, you should now see the trained model
being used to control the behavior of the balance ball within the Editor
itself. From here you can re-build the Unity binary, and run it standalone
with your agent's new learned behavior built right in.
To embed the trained model into Unity, follow the latter part of the [Training the Brain with Reinforcement Learning](Basic-Guide.md#training-the-brain-with-reinforcement-learning) section of the Basic Guide page.

docs/Installation.md (14)


# Installation & Set-up
# Installation
To install and use ML-Agents, you need to install Unity, clone this repository,
and install Python with additional dependencies. Each of the subsections

Once installed, you will want to clone the ML-Agents GitHub repository.
git clone git@github.com:Unity-Technologies/ml-agents.git
git clone https://github.com/Unity-Technologies/ml-agents.git
The `unity-environment` directory in this repository contains the Unity Assets
to add to your projects. The `python` directory contains the training code.

[instructions](https://packaging.python.org/guides/installing-using-linux-tools/#installing-pip-setuptools-wheel-with-linux-package-managers)
on installing it.
To install dependencies, go into the `python` subdirectory of the repository,
To install dependencies, **go into the `python` subdirectory** of the repository,
and run from the command line:
pip3 install .
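Putting the clone and dependency steps from this page together, a typical command sequence (assuming `pip3` points at your Python 3 installation) looks like:

```
git clone https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/python
pip3 install .
```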

If you'd like to use Docker for ML-Agents, please follow
[this guide](Using-Docker.md).
## Unity Packages
## Next Steps
You can download the [TensorFlowSharp](Background-TensorFlow.md#tensorflowsharp) plugin as a Unity package [here](https://s3.amazonaws.com/unity-ml-agents/0.3/TFSharpPlugin.unitypackage).
The [Basic Guide](Basic-Guide.md) page contains several short
tutorials on setting up ML-Agents within Unity, running a pre-trained model, in
addition to building and training environments.
If you run into any problems installing ML-Agents,
If you run into any problems regarding ML-Agents, refer to our [FAQ](FAQ.md) and our [Limitations](Limitations.md) pages. If you can't find anything, please
[submit an issue](https://github.com/Unity-Technologies/ml-agents/issues) and
make sure to cite relevant information on OS, Python version, and exact error
message (whenever possible).

docs/Learning-Environment-Best-Practices.md (2)


## Vector Observations
* Vector Observations should include all variables relevant to allowing the agent to make an optimally informed decision.
* In cases where Vector Observations need to be remembered or compared over time, increase the `Stacked Vectors` value to allow the agent to keep track of multiple observations into the past.
* Categorical variables such as type of object (Sword, Shield, Bow) should be encoded in one-hot fashion (i.e. `3` -> `0, 0, 1`).
* Categorical variables such as type of object (Sword, Shield, Bow) should be encoded in one-hot fashion (i.e. `3` > `0, 0, 1`).
* Besides encoding non-numeric values, all inputs should be normalized to be in the range 0 to +1 (or -1 to 1). For example, the `x` position information of an agent where the maximum possible value is `maxValue` should be recorded as `AddVectorObs(transform.position.x / maxValue);` rather than `AddVectorObs(transform.position.x);`. See the equation below for one approach of normalization.
* Positional information of relevant GameObjects should be encoded in relative coordinates wherever possible. This is often relative to the agent position.
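
As an illustration of the one-hot encoding and normalization described in the list above, here is a minimal Python sketch. The example values and the `one_hot`/`normalize` helpers are hypothetical illustrations of the arithmetic, not part of the ML-Agents API; in an actual agent these observations would be added with `AddVectorObs` in C#.

```python
# Hypothetical illustration of observation encoding for a Brain's vector observations.
item_types = ["Sword", "Shield", "Bow"]

def one_hot(index, size):
    """Encode a categorical value as a one-hot vector, e.g. index 2 of 3 -> [0, 0, 1]."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def normalize(value, min_value, max_value):
    """Rescale a raw value into the [0, 1] range, one approach to normalization."""
    return (value - min_value) / (max_value - min_value)

# Example: the agent holds a Bow (index 2) and sits at x = 37.5 in a level 100 units wide.
observation = one_hot(2, len(item_types)) + [normalize(37.5, 0.0, 100.0)]
print(observation)  # [0.0, 0.0, 1.0, 0.375]
```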

docs/Learning-Environment-Design-Agents.md (2)


![Agent Camera](images/visual-observation.png)
In addition, make sure that the Agent's Brain expects a visual observation. In the Brain inspector, under `Brain Parameters` -> `Visual Observations`, specify the number of Cameras the agent is using for its visual observations. For each visual observation, set the width and height of the image (in pixels) and whether or not the observation is color or grayscale (when `Black And White` is checked).
In addition, make sure that the Agent's Brain expects a visual observation. In the Brain inspector, under **Brain Parameters** > **Visual Observations**, specify the number of Cameras the agent is using for its visual observations. For each visual observation, set the width and height of the image (in pixels) and whether or not the observation is color or grayscale (when `Black And White` is checked).
### Discrete Vector Observation Space: Table Lookup

docs/Learning-Environment-Examples.md (8)


* Visual Observations: None
* Reset Parameters: None
## Banana Collector
## [Banana Collector](https://youtu.be/heVMs3t9qSk)
![Banana](images/banana.png)

* Visual Observations (Optional; None by default): First-person view for each agent.
* Reset Parameters: None
## Hallway
## [Hallway](https://youtu.be/53GyfpPQRUQ)
![Hallway](images/hallway.png)

* Visual Observations (Optional): First-person view for the agent.
* Reset Parameters: None
## Bouncer
## [Bouncer](https://youtu.be/Tkv-c-b1b2I)
![Bouncer](images/bouncer.png)

* Visual Observations: None
* Reset Parameters: None
## Soccer Twos
## [Soccer Twos](https://youtu.be/Hg3nmYD3DjQ)
![SoccerTwos](images/soccer.png)

docs/Readme.md (12)


# Unity ML-Agents Documentation
## Installation & Set-up
* [Installation](Installation.md)
* [Background: Jupyter Notebooks](Background-Jupyter.md)
* [Docker Set-up](Using-Docker.md)
* [Basic Guide](Basic-Guide.md)
* [Installation & Set-up](Installation.md)
* [Background: Jupyter Notebooks](Background-Jupyter.md)
* [Docker Set-up](Using-Docker.md)
* [Getting Started with the 3D Balance Ball Environment](Getting-Started-with-Balance-Ball.md)
* [Example Environments](Learning-Environment-Examples.md)

## Help
* [Migrating to ML-Agents v0.3](Migrating-v0.3.md)
* [Frequently Asked Questions](FAQ.md)
* [Limitations & Common Issues](Limitations-and-Common-Issues.md)
* [Limitations](Limitations.md)
## API Docs
* [API Reference](API-Reference.md)

docs/Training-ML-Agents.md (12)


For a broader overview of reinforcement learning, imitation learning and the ML-Agents training process, see [ML-Agents Overview](ML-Agents-Overview.md).
## Training with Learn.py
## Training with learn.py
Use the Python `Learn.py` program to train agents. `Learn.py` supports training with [reinforcement learning](Background-Machine-Learning.md#reinforcement-learning), [curriculum learning](Training-Curriculum-Learning.md), and [behavioral cloning imitation learning](Training-Imitation-Learning.md).
Use the Python `learn.py` program to train agents. `learn.py` supports training with [reinforcement learning](Background-Machine-Learning.md#reinforcement-learning), [curriculum learning](Training-Curriculum-Learning.md), and [behavioral cloning imitation learning](Training-Imitation-Learning.md).
Run `Learn.py` from the command line to launch the training process. Use the command line patterns and the `trainer_config.yaml` file to control training options.
Run `learn.py` from the command line to launch the training process. Use the command line patterns and the `trainer_config.yaml` file to control training options.
python3 learn.py <env_file_path> --run-id=<run-identifier> --train
python3 learn.py <env_name> --run-id=<run-identifier> --train
where `<env_file_path>` is the path to your Unity executable containing the agents to be trained and `<run-identifier>` is an optional identifier you can use to identify the results of individual training runs.
where `<env_name>` is the name (including path) of your Unity executable containing the agents to be trained and `<run-identifier>` is an optional identifier you can use to identify the results of individual training runs.
For example, suppose you have a project in Unity named "CatsOnBicyclesCatsOnBicycles" which contains agents ready to train. To perform the training:
For example, suppose you have a project in Unity named "CatsOnBicycles" which contains agents ready to train. To perform the training:
1. Build the project, making sure that you only include the training scene.
2. Open a terminal or console window.
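The diff hunk ends here; for illustration, the eventual training command for this example would follow the pattern shown above (the run identifier below is a hypothetical choice):

```
python3 learn.py CatsOnBicycles --run-id=cob_1 --train
```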

docs/Using-TensorFlow-Sharp-in-Unity.md (6)


# iOS additional instructions for building
* Once you build for iOS in the editor, Xcode will launch.
* In `General` -> `Linked Frameworks and Libraries`:
* In **General** > **Linked Frameworks and Libraries**:
* In `Build Settings`->`Linking`->`Other Linker Flags`:
* In **Build Settings** > **Linking** > **Other Linker Flags**:
* Drag the library `libtensorflow-core.a` from the `Project Navigator` on the left under `Libraries/ML-Agents/Plugins/iOS` into the flag list, after `-force_load`.
* Drag the library `libtensorflow-core.a` from the **Project Navigator** on the left under `Libraries/ML-Agents/Plugins/iOS` into the flag list, after `-force_load`.
# Using TensorFlowSharp without ML-Agents

docs/localized/zh-CN/docs/Getting-Started-with-Balance-Ball.md (6)


To summarize, go to your command line, enter the `ml-agents` directory and type:
```python
python3 python/learn.py <env_file_path> --run-id=<run-identifier> --train
```
python3 python/learn.py <env_name> --run-id=<run-identifier> --train
The `--train` flag tells ML-Agents to run in training mode. `env_file_path` should be the path to the Unity executable that was just created.
The `--train` flag tells ML-Agents to run in training mode. `env_name` should be the name of the Unity executable that was just created.
### Observing Training Progress

docs/localized/zh-CN/docs/Installation.md (2)


## Unity Packages
You can download the TensorflowSharp plugin as a Unity package ([AWS S3 link](https://s3.amazonaws.com/unity-ml-agents/0.3/TFSharpPlugin.unitypackage), [Baidu Pan link](https://pan.baidu.com/s/1s0mJN8lvuxTcYbs2kL2FqA))
You can download the TensorFlowSharp plugin as a Unity package ([AWS S3 link](https://s3.amazonaws.com/unity-ml-agents/0.3/TFSharpPlugin.unitypackage), [Baidu Pan link](https://pan.baidu.com/s/1s0mJN8lvuxTcYbs2kL2FqA))
## Help

docs/localized/zh-CN/docs/Readme.md (3)


## Help
* [Migrating to ML-Agents v0.3](/docs/Migrating-v0.3.md)
* [Frequently Asked Questions](/docs/FAQ.md)
* [Limitations and Common Issues](/docs/Limitations-and-Common-Issues.md)
* [Limitations](/docs/Limitations.md)
## API Docs
* [API Reference](/docs/API-Reference.md)

unity-environment/Assets/ML-Agents/Scripts/CoreBrainInternal.cs (2)


ExternalCommunicator coord;
[Tooltip("This must be the bytes file corresponding to the pretrained Tensorflow graph.")]
[Tooltip("This must be the bytes file corresponding to the pretrained TensorFlow graph.")]
/// Modify only in inspector : Reference to the Graph asset
public TextAsset graphModel;

docs/Basic-Guide.md (152)


# Basic Guide
This guide will show you how to use a pretrained model in an example Unity environment, and show you how to train the model yourself.
If you are not familiar with the [Unity Engine](https://unity3d.com/unity),
we highly recommend the [Roll-a-ball tutorial](https://unity3d.com/learn/tutorials/s/roll-ball-tutorial) to learn all the basic concepts of Unity.
## Setting up ML-Agents within Unity
In order to use ML-Agents within Unity, you need to change some Unity settings first. Also, the [TensorFlowSharp plugin](https://github.com/migueldeicaza/TensorFlowSharp) is needed to use a pretrained model within Unity.
1. Launch Unity
2. On the Projects dialog, choose the **Open** option at the top of the window.
3. Using the file dialog that opens, locate the `unity-environment` folder within the ML-Agents project and click **Open**.
4. Go to **Edit** > **Project Settings** > **Player**
5. For **each** of the platforms you target
(**PC, Mac and Linux Standalone**, **iOS** or **Android**):
1. Expand the **Other Settings** section.
2. Select **Scripting Runtime Version** to
**Experimental (.NET 4.6 Equivalent)**
3. In **Scripting Defined Symbols**, add the flag `ENABLE_TENSORFLOW`.
After typing in the flag name, press Enter.
6. Go to **File** > **Save Project**
![Project Settings](images/project-settings.png)
[Download](https://s3.amazonaws.com/unity-ml-agents/0.3/TFSharpPlugin.unitypackage) the TensorFlowSharp plugin. Then import it into Unity by double clicking the downloaded file. You can check if it was successfully imported by checking the TensorFlow files in the Project window under **Assets** > **ML-Agents** > **Plugins** > **Computer**.
**Note**: If you don't see anything under **Assets**, drag the `ml-agents/unity-environment/Assets/ML-Agents` folder under **Assets** within Project window.
![Imported TensorFlowsharp](images/imported-tensorflowsharp.png)
## Running a Pre-trained Model
1. In the **Project** window, go to `Assets/ML-Agents/Examples/3DBall` folder and open the `3DBall` scene file.
2. In the **Hierarchy** window, select the **Ball3DBrain** child under the **Ball3DAcademy** GameObject to view its properties in the Inspector window.
3. On the **Ball3DBrain** object's **Brain** component, change the **Brain Type** to **Internal**.
4. In the **Project** window, locate the `Assets/ML-Agents/Examples/3DBall/TFModels` folder.
5. Drag the `3DBall` model file from the `TFModels` folder to the **Graph Model** field of the **Ball3DBrain** object's **Brain** component.
6. Click the **Play** button and you will see the platforms balance the balls using the pretrained model.
![Running a pretrained model](images/running-a-pretrained-model.gif)
## Building an Example Environment
The first step is to open the Unity scene containing the 3D Balance Ball
environment:
1. Launch Unity.
2. On the Projects dialog, choose the **Open** option at the top of the window.
3. Using the file dialog that opens, locate the `unity-environment` folder
within the ML-Agents project and click **Open**.
4. In the **Project** window, navigate to the folder
`Assets/ML-Agents/Examples/3DBall/`.
5. Double-click the `3DBall` file to load the scene containing the Balance
Ball environment.
![3DBall Scene](images/mlagents-Open3DBall.png)
Since we are going to build this environment to conduct training, we need to
set the brain used by the agents to **External**. This allows the agents to
communicate with the external training process when making their decisions.
1. In the **Scene** window, click the triangle icon next to the Ball3DAcademy
object.
2. Select its child object **Ball3DBrain**.
3. In the Inspector window, set **Brain Type** to **External**.
![Set Brain to External](images/mlagents-SetExternalBrain.png)
Next, we want to set up the scene to play correctly when the training process
launches our environment executable. This means:
* The environment application runs in the background
* No dialogs require interaction
* The correct scene loads automatically
1. Open Player Settings (menu: **Edit** > **Project Settings** > **Player**).
2. Under **Resolution and Presentation**:
- Ensure that **Run in Background** is Checked.
- Ensure that **Display Resolution Dialog** is set to Disabled.
3. Open the Build Settings window (menu: **File** > **Build Settings**).
4. Choose your target platform.
- (optional) Select “Development Build” to
[log debug messages](https://docs.unity3d.com/Manual/LogFiles.html).
5. If any scenes are shown in the **Scenes in Build** list, make sure that
the 3DBall Scene is the only one checked. (If the list is empty, then only the
current scene is included in the build).
6. Click **Build**:
a. In the File dialog, navigate to the `python` folder in your ML-Agents
directory.
b. Assign a file name and click **Save**.
![Build Window](images/mlagents-BuildWindow.png)
Now that we have a Unity executable containing the simulation environment, we
can perform the training. You can ensure that your environment and the Python
API work as expected by using the `python/Basics`
[Jupyter notebook](Background-Jupyter.md) introduced in the next section.
## Using the Basics Jupyter Notebook
The `python/Basics` [Jupyter notebook](Background-Jupyter.md) contains a
simple walkthrough of the functionality of the Python
API. It can also serve as a simple test that your environment is configured
correctly. Within `Basics`, be sure to set `env_name` to the name of the
Unity executable you built earlier.
More information and documentation is provided in the
[Python API](Python-API.md) page.
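For readers who prefer a script over the notebook, a minimal sketch of the same sanity check follows. It assumes the v0.3 `unityagents` package installed earlier and an executable named `3DBall` in the `python` folder; attribute names follow that API as best recalled here, so consult the `Basics` notebook for the authoritative version.

```python
from unityagents import UnityEnvironment

# Connect to the built executable (file name without extension).
env = UnityEnvironment(file_name="3DBall")
print(str(env))  # prints the brains found in the environment

default_brain = env.brain_names[0]

# Reset in training mode (fast time scale) and inspect the info for the default brain.
env_info = env.reset(train_mode=True)[default_brain]
print("Agents controlled by", default_brain, ":", len(env_info.agents))

env.close()
```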
## Training the Brain with Reinforcement Learning
1. Open a command or terminal window.
2. Navigate to the folder where you installed ML-Agents.
3. Change to the `python` directory.
4. Run `python3 learn.py <env_name> --run-id=<run-identifier> --train`
Where:
- `<env_name>` is the name and path to the executable you exported from Unity (without extension)
- `<run-identifier>` is a string used to separate the results of different training runs
- The `--train` flag tells learn.py to run a training session (rather than inference)
For example, if you are training with a 3DBall executable you exported to the ml-agents/python directory, run:
```
python3 learn.py 3DBall --run-id=firstRun --train
```
![Training command example](images/training-command-example.png)
**Note**: If you're using Anaconda, don't forget to activate the ml-agents environment first.
If learn.py runs correctly and starts training, you should see something like this:
![Training running](images/training-running.png)
You can press Ctrl+C to stop the training, and your trained model will be at `ml-agents/python/models/<run-identifier>/<env_name>_<run-identifier>.bytes`, which corresponds to your model's latest checkpoint. You can now embed this trained model into your internal brain by following the steps below, which are similar to the steps described [above](#play-an-example-environment-using-pretrained-model).
1. Move your model file into
`unity-environment/Assets/ML-Agents/Examples/3DBall/TFModels/`.
2. Open the Unity Editor, and select the **3DBall** scene as described above.
3. Select the **Ball3DBrain** object from the Scene hierarchy.
4. Change the **Type of Brain** to **Internal**.
5. Drag the `<env_name>_<run-identifier>.bytes` file from the Project window of the Editor
to the **Graph Model** placeholder in the **Ball3DBrain** inspector window.
6. Press the Play button at the top of the editor.
## Next Steps
* For more information on ML-Agents, in addition to helpful background, check out the [ML-Agents Overview](ML-Agents-Overview.md) page.
* For a more detailed walk-through of our 3D Balance Ball environment, check out the [Getting Started](Getting-Started-with-Balance-Ball.md) page.
* For a "Hello World" introduction to creating your own learning environment, check out the [Making a New Learning Environment](Learning-Environment-Create-New.md) page.
* For a series of YouTube video tutorials, check out the [Machine Learning Agents PlayList](https://www.youtube.com/playlist?list=PLX2vGYjWbI0R08eWQkO7nQkGiicHAX7IX) page.

docs/FAQ.md (91)


# Frequently Asked Questions
### Scripting Runtime Environment not set up correctly
If you haven't switched your scripting runtime version from .NET 3.5 to .NET 4.6, you will see an error message such as:
```
error CS1061: Type `System.Text.StringBuilder' does not contain a definition for `Clear' and no extension method `Clear' of type `System.Text.StringBuilder' could be found. Are you missing an assembly reference?
```
This is because .NET 3.5 doesn't support the Clear() method on StringBuilder. Refer to [Setting Up ML-Agents Within Unity](Installation.md#setting-up-ml-agent-within-unity) for the solution.
### TensorFlowSharp flag not turned on
If you have already imported the TensorFlowSharp plugin, but haven't set the ENABLE_TENSORFLOW flag in your Scripting Define Symbols, you will see the following error message:
```
You need to install and enable the TensorFlowSharp plugin in order to use the internal brain.
```
This error message occurs because the TensorFlowSharp plugin won't be usable without the ENABLE_TENSORFLOW flag. Refer to [Setting Up ML-Agents Within Unity](Installation.md#setting-up-ml-agent-within-unity) for the solution.
### TensorFlow epsilon placeholder error
If you have a graph placeholder set in the internal Brain inspector that is not present in the TensorFlow graph, you will see an error like this:
```
UnityAgentsException: One of the Tensorflow placeholder could not be found. In brain <some_brain_name>, there are no FloatingPoint placeholder named <some_placeholder_name>.
```
Solution: Go to all of your Brain objects, find `Graph placeholders`, and change the `size` to 0 to remove the `epsilon` placeholder.
Similarly, if you have a graph scope set in the internal Brain inspector that is not correctly set, you will see an error like this:
```
UnityAgentsException: The node <Wrong_Graph_Scope>/action could not be found. Please make sure the graphScope <Wrong_Graph_Scope>/ is correct
```
Solution: Make sure your Graph Scope field matches the corresponding Brain object name in the Hierarchy window when there are multiple Brains.
### Environment Permission Error
If you directly import your Unity environment without building it in the
editor, you might need to give it additional permissions to execute it.
If you receive such a permission error on macOS, run:
`chmod -R 755 *.app`
or on Linux:
`chmod -R 755 *.x86_64`
On Windows, you can find
[instructions](https://technet.microsoft.com/en-us/library/cc754344(v=ws.11).aspx).
### Environment Connection Timeout
If you are able to launch the environment from `UnityEnvironment` but
then receive a timeout error, there may be a number of possible causes.
* _Cause_: There may be no Brains in your environment which are set
to `External`. In this case, the environment will not attempt to
communicate with python. _Solution_: Set the Brain(s) you wish to
externally control through the Python API to `External` from the
Unity Editor, and rebuild the environment.
* _Cause_: On OSX, the firewall may be preventing communication with
the environment. _Solution_: Add the built environment binary to the
list of exceptions on the firewall by following
[instructions](https://support.apple.com/en-us/HT201642).
* _Cause_: An error happened in the Unity Environment preventing
communication. _Solution_: Look into the
[log files](https://docs.unity3d.com/Manual/LogFiles.html)
generated by the Unity Environment to figure out what error happened.
### Communication port {} still in use
If you receive an exception `"Couldn't launch new environment because
communication port {} is still in use. "`, you can change the worker
number in the Python script when calling
`UnityEnvironment(file_name=filename, worker_id=X)`
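A minimal sketch of this workaround, assuming the same v0.3 `unityagents` package and a hypothetical executable name; the worker id value is arbitrary as long as it is not already in use:

```python
from unityagents import UnityEnvironment

# A non-default worker_id selects a different communication port,
# avoiding the "port still in use" conflict left by a previous run.
env = UnityEnvironment(file_name="3DBall", worker_id=1)
env.close()
```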
### Mean reward : nan
If you receive a message `Mean reward : nan` when attempting to train a
model using PPO, this is due to the episodes of the learning environment
not terminating. In order to address this, set `Max Steps` for either
the Academy or Agents within the Scene Inspector to a value greater
than 0. Alternatively, it is possible to manually set `done` conditions
for episodes from within scripts for custom episode-terminating events.

docs/Limitations.md (19)


# Limitations
## Unity SDK
### Headless Mode
If you enable Headless mode, you will not be able to collect visual
observations from your agents.
### Rendering Speed and Synchronization
Currently the speed of the game physics can only be increased to 100x
real-time. The Academy also moves in time with FixedUpdate() rather than
Update(), so game behavior implemented in Update() may be out of sync with the Agent decision making. See [Execution Order of Event Functions](https://docs.unity3d.com/Manual/ExecutionOrder.html) for more information.
## Python API
### Python version
As of version 0.3, we no longer support Python 2.
### TensorFlow support
Currently ML-Agents uses TensorFlow 1.4 due to the version of the TensorFlowSharp plugin we are using.

docs/images/imported-tensorflowsharp.png (117)
Width: 452 | Height: 288 | Size: 27 KiB

docs/images/project-settings.png (60)
Width: 364 | Height: 155 | Size: 19 KiB

docs/images/running-a-pretrained-model.gif (1001)
File diff too large to display.

docs/images/training-command-example.png (191)
Width: 731 | Height: 310 | Size: 54 KiB

docs/images/training-running.png (419)
Width: 728 | Height: 496 | Size: 104 KiB

docs/Limitations-and-Common-Issues.md (67)


# Limitations and Common Issues
## Unity SDK
### Headless Mode
If you enable Headless mode, you will not be able to collect visual
observations from your agents.
### Rendering Speed and Synchronization
Currently the speed of the game physics can only be increased to 100x
real-time. The Academy also moves in time with FixedUpdate() rather than
Update(), so game behavior tied to frame updates may be out of sync.
## Python API
### Python version
As of version 0.3, we no longer support Python 2.
### Environment Permission Error
If you directly import your Unity environment without building it in the
editor, you might need to give it additional permissions to execute it.
If you receive such a permission error on macOS, run:
`chmod -R 755 *.app`
or on Linux:
`chmod -R 755 *.x86_64`
On Windows, you can find instructions
[here](https://technet.microsoft.com/en-us/library/cc754344(v=ws.11).aspx).
### Environment Connection Timeout
If you are able to launch the environment from `UnityEnvironment` but
then receive a timeout error, there may be a number of possible causes.
* _Cause_: There may be no Brains in your environment which are set
to `External`. In this case, the environment will not attempt to
communicate with python. _Solution_: Set the Brain(s) you wish to
externally control through the Python API to `External` from the
Unity Editor, and rebuild the environment.
* _Cause_: On OSX, the firewall may be preventing communication with
the environment. _Solution_: Add the built environment binary to the
list of exceptions on the firewall by following instructions
[here](https://support.apple.com/en-us/HT201642).
* _Cause_: An error happened in the Unity Environment preventing
communication. _Solution_: Look into the
[log files](https://docs.unity3d.com/Manual/LogFiles.html)
generated by the Unity Environment to figure out what error happened.
### Communication port {} still in use
If you receive an exception `"Couldn't launch new environment because
communication port {} is still in use. "`, you can change the worker
number in the Python script when calling
`UnityEnvironment(file_name=filename, worker_id=X)`
### Mean reward : nan
If you receive a message `Mean reward : nan` when attempting to train a
model using PPO, this is due to the episodes of the learning environment
not terminating. In order to address this, set `Max Steps` for either
the Academy or Agents within the Scene Inspector to a value greater
than 0. Alternatively, it is possible to manually set `done` conditions
for episodes from within scripts for custom episode-terminating events.