
Various doc improvements (#3775)

* Various doc improvements

For Using-Virtual-Environment.md:
- Made a note regarding updating setuptools and pip.
- Changed lists from "-" to "*"

For Using-Tensorboard.md:
- Changed the ordered list to use "1."

For Training-on-Microsoft-Azure-Custom-Instance.md:
- Deleted as it was not linked anywhere

For FAQ.md:
- Removed stale issues given upgrade to 2018.3

For Readme.md:
- Added links for Reward Signals, Self-Play and Profiling Trainers

For Learning-Environment-Executable.md:
- Changed the ordered list to use "1."

For Learning-Environment-Examples.md:
- Minor rewording of intro paragraphs

* consolidating custom instances page in main page

So we have a single page for Azure.

Adding warning note for deprecated docs

* Fixing doc links that are failing CI
Branch: develop/gym-wrapper
Commit 8b300031 · 13 files changed, 151 insertions(+), 172 deletions(-)
1. com.unity.ml-agents/CHANGELOG.md (4 changes)
2. docs/FAQ.md (23 changes)
3. docs/Installation-Anaconda-Windows.md (4 changes)
4. docs/Learning-Environment-Examples.md (20 changes)
5. docs/Learning-Environment-Executable.md (32 changes)
6. docs/Readme.md (19 changes)
7. docs/Training-ML-Agents.md (2 changes)
8. docs/Training-on-Amazon-Web-Service.md (2 changes)
9. docs/Training-on-Microsoft-Azure.md (94 changes)
10. docs/Using-Docker.md (4 changes)
11. docs/Using-Tensorboard.md (11 changes)
12. docs/Using-Virtual-Environment.md (17 changes)
13. docs/Training-on-Microsoft-Azure-Custom-Instance.md (91 changes, deleted)

com.unity.ml-agents/CHANGELOG.md (4 changes)


- The Jupyter notebooks have been removed from the repository.
- Introduced the `SideChannelUtils` to register, unregister and access side channels.
- `Academy.FloatProperties` was removed, please use `SideChannelUtils.GetSideChannel<FloatPropertiesChannel>()` instead.
- Removed the multi-agent gym option from the gym wrapper. For multi-agent scenarios, use the [Low Level Python API](../docs/Python-API.md).
- The low level Python API has changed. You can look at the document [Low Level Python API documentation](../docs/Python-API.md) for more information. If you use `mlagents-learn` for training, this should be a transparent change.
- Added ability to start training (initialize model weights) from a previous run ID. (#3710)
- The internal event `Academy.AgentSetStatus` was renamed to `Academy.AgentPreStep` and made public.
- The offset logic was removed from DecisionRequester.

docs/FAQ.md (23 changes)


In all of these cases, the issue is a pip/Python environment setup issue. Please search the TensorFlow GitHub issues
for similar problems and solutions before creating a new issue.
## Scripting Runtime Environment not setup correctly
If you haven't switched your scripting runtime version from .NET 3.5 to .NET 4.6
or .NET 4.x, you will see an error message such as:
```console
error CS1061: Type `System.Text.StringBuilder' does not contain a definition for `Clear' and no extension method `Clear' of type `System.Text.StringBuilder' could be found. Are you missing an assembly reference?
```
This is because .NET 3.5 does not support the `Clear()` method of `StringBuilder`; refer
to [Setting Up The ML-Agents Toolkit Within
Unity](Installation.md#setting-up-ml-agent-within-unity) for the solution.
## Environment Permission Error
If you directly import your Unity environment without building it in the

Agents within the Scene Inspector to a value greater than 0. Alternatively, it
is possible to manually set `done` conditions for episodes from within scripts
for custom episode-terminating events.
## Problems with training on AWS
Please refer to the [Training on Amazon Web Service FAQ](Training-on-Amazon-Web-Service.md#faq).
# Known Issues
## Release 0.10.0
* ml-agents 0.10.0 and earlier were incompatible with TensorFlow 1.15.0; the graph could contain
an operator that `tensorflow_to_barracuda` didn't handle. This was fixed in the 0.11.0 release.

docs/Installation-Anaconda-Windows.md (4 changes)


# Installing ML-Agents Toolkit for Windows (Deprecated)
:warning: **Note:** We no longer use this guide ourselves and so it may not work correctly. We've
decided to keep it up just in case it is helpful to you.
The ML-Agents toolkit supports Windows 10. While it might be possible to run the
ML-Agents toolkit using other versions of Windows, it has not been tested on

docs/Learning-Environment-Examples.md (20 changes)


# Example Learning Environments
The Unity ML-Agents Toolkit includes an expanding set of example environments that highlight the
various features of the toolkit. These environments can also serve as templates for new environments
or as ways to test new ML algorithms. Environments are located in
For the environments that highlight specific features of the toolkit, we provide the
pre-trained model files and the training config file that enables you to train the scene
yourself. The environments that are designed to serve as challenges for researchers
do not have accompanying pre-trained model files or training configs and are marked as
_Optional_ below.
This page only overviews the example environments we provide. To learn more on
how to design and build your own environments see our
[Making a New Learning Environment](Learning-Environment-Create-New.md) page.
If you would like to contribute environments, please see our
[contribution guidelines](../com.unity.ml-agents/CONTRIBUTING.md) page.

docs/Learning-Environment-Executable.md (32 changes)


environment:
1. Launch Unity.
1. On the Projects dialog, choose the **Open** option at the top of the window.
1. Using the file dialog that opens, locate the `Project` folder within the
1. In the **Project** window, navigate to the folder
1. Double-click the `3DBall` file to load the scene containing the Balance Ball
environment.
![3DBall Scene](images/mlagents-Open3DBall.png)

* The correct scene loads automatically.
1. Open Player Settings (menu: **Edit** > **Project Settings** > **Player**).
1. Under **Resolution and Presentation**:
1. Open the Build Settings window (menu: **File** > **Build Settings**).
1. Choose your target platform.
1. If any scenes are shown in the **Scenes in Build** list, make sure that the
1. Click **Build**:
* In the File dialog, navigate to your ML-Agents directory.
* Assign a file name and click **Save**.
* (For Windows) With Unity 2018.1, it will ask you to select a folder instead

## Training the Environment
1. Open a command or terminal window.
1. Navigate to the folder where you installed the ML-Agents Toolkit. If you
1. Run
`mlagents-learn <trainer-config-file> --env=<env_name> --run-id=<run-identifier>`
Where:
* `<trainer-config-file>` is the file path of the trainer configuration yaml

use_curiosity: False
curiosity_strength: 0.01
curiosity_enc_size: 128
model_path: ./models/first-run-0/Ball3DLearning
INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 1000. Mean Reward: 1.242. Std of Reward: 0.746. Training.
INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 2000. Mean Reward: 1.319. Std of Reward: 0.693. Training.
INFO:mlagents.trainers: first-run-0: Ball3DLearning: Step: 3000. Mean Reward: 1.804. Std of Reward: 1.056. Training.
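Putting the placeholders together, a hypothetical invocation might look like the sketch below. The config path, environment name, and run id are illustrative (chosen to match the `first-run-0`/`Ball3DLearning` log above), not prescribed by this guide:

```shell
# Sketch only: assemble the mlagents-learn command from the placeholders above.
# The config path, env name, and run id below are illustrative.
TRAINER_CONFIG=config/trainer_config.yaml
ENV_NAME=3DBall
RUN_ID=first-run-0
CMD="mlagents-learn $TRAINER_CONFIG --env=$ENV_NAME --run-id=$RUN_ID"
echo "$CMD"
```

The `--env` value is the executable you built earlier, without a file extension.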

1. Move your model file into
`Project/Assets/ML-Agents/Examples/3DBall/TFModels/`.
1. Open the Unity Editor, and select the **3DBall** scene as described above.
1. Select the **3DBall** prefab from the Project window and select **Agent**.
1. Drag the `<behavior_name>.nn` file from the Project window of
1. Press the :arrow_forward: button at the top of the editor.

docs/Readme.md (19 changes)


* [Designing Agents](Learning-Environment-Design-Agents.md)
### Advanced Usage
* [Using the Monitor](Feature-Monitor.md)
* [Using an Executable Environment](Learning-Environment-Executable.md)
* [Reward Signals](Reward-Signals.md)
* [Profiling Trainers](Profiling-Python.md)
* [Training with Self-Play](Training-Self-Play.md)
### Advanced Training Methods

* [Unity Inference Engine](Unity-Inference-Engine.md)
## Extending ML-Agents
* [Creating Custom Side Channels](Custom-SideChannels.md)
## Help
* [Migrating from earlier versions of ML-Agents](Migrating.md)

We no longer use them ourselves and so they may not be up-to-date.
We've decided to keep them up just in case they are helpful to you.
* [Using Docker](Using-Docker.md)
* [Windows Anaconda Installation](Installation-Anaconda-Windows.md)
* [Using the Video Recorder](https://github.com/Unity-Technologies/video-recorder)

docs/Training-ML-Agents.md (2 changes)


You can also use this mode to run inference of an already-trained model in Python.
Append both the `--resume` and `--inference` flags to do this. Note that if you want to run
inference in Unity, you should use the
[Unity Inference Engine](Getting-started.md#running-a-pre-trained-model).
If you've already trained a model using the specified `<run-identifier>` and `--resume` is not
specified, you will not be able to continue with training. Use `--force` to force ML-Agents to

docs/Training-on-Amazon-Web-Service.md (2 changes)


# Training on Amazon Web Service
:warning: **Note:** We no longer use this guide ourselves and so it may not work correctly. We've
decided to keep it up just in case it is helpful to you.
This page contains instructions for setting up an EC2 instance on Amazon Web

docs/Training-on-Microsoft-Azure.md (94 changes)


# Training on Microsoft Azure (works with ML-Agents toolkit v0.3)
:warning: **Note:** We no longer use this guide ourselves and so it may not work correctly. We've
decided to keep it up just in case it is helpful to you.
This page contains instructions for setting up training on Microsoft Azure

Setting up your own instance requires a number of package installations. Please
view the documentation for doing so
[here](#custom-instances).
## Installing ML-Agents

then be shut down. This ensures you aren't leaving a billable VM running when
it isn't needed. Using ACI enables you to offload training of your models without needing to
install Python and TensorFlow on your own computer.
## Custom Instances
This section contains instructions for setting up a custom Virtual Machine on Microsoft Azure so you can run ML-Agents training in the cloud.
1. Start by
[deploying an Azure VM](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)
with Ubuntu Linux (tests were done with 16.04 LTS). To use GPU support, use
an N-Series VM.
2. SSH into your VM.
3. Start with the following commands to install the Nvidia driver:
```sh
wget http://us.download.nvidia.com/tesla/375.66/nvidia-diag-driver-local-repo-ubuntu1604_375.66-1_amd64.deb
sudo dpkg -i nvidia-diag-driver-local-repo-ubuntu1604_375.66-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda-drivers
sudo reboot
```
4. After a minute, you should be able to reconnect to your VM and install the
CUDA toolkit:
```sh
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda-8-0
```
5. You'll next need to download cuDNN from the Nvidia developer site. This
requires a registered account.
6. Navigate to [http://developer.nvidia.com](http://developer.nvidia.com) and
create an account and verify it.
7. Download (to your own computer) cuDNN from [this url](https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v6/prod/8.0_20170307/Ubuntu16_04_x64/libcudnn6_6.0.20-1+cuda8.0_amd64-deb).
8. Copy the deb package to your VM:
```sh
scp libcudnn6_6.0.21-1+cuda8.0_amd64.deb <VMUserName>@<VMIPAddress>:libcudnn6_6.0.21-1+cuda8.0_amd64.deb
```
9. SSH back to your VM and execute the following:
```console
sudo dpkg -i libcudnn6_6.0.21-1+cuda8.0_amd64.deb
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:/usr/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH
. ~/.profile
sudo reboot
```
10. After a minute, you should be able to SSH back into your VM. After doing
so, run the following:
```sh
sudo apt install python-pip
sudo apt install python3-pip
```
11. At this point, you need to install TensorFlow. The version you install
should depend on whether you are training with a GPU:
```sh
pip3 install tensorflow-gpu==1.4.0 keras==2.0.6
```
Or CPU to train:
```sh
pip3 install tensorflow==1.4.0 keras==2.0.6
```
12. You'll then need to install additional dependencies:
```sh
pip3 install pillow
pip3 install numpy
```
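One caveat about step 9 above: `export LD_LIBRARY_PATH=...` only affects the current shell, so the setting is lost after the `sudo reboot` unless it is also appended to `~/.profile`. A minimal sketch of persisting it (writing to a temp file as a stand-in for `~/.profile`, so it is safe to run anywhere):

```shell
# Stand-in for ~/.profile so this sketch does not touch the real file.
PROFILE="$(mktemp)"
# Persist the CUDA library path (same value as in step 9 above).
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:/usr/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH' >> "$PROFILE"
# Sourcing the file exports the variable in the current shell, as step 9 does.
. "$PROFILE"
```

On the real VM you would append to `~/.profile` instead of `$PROFILE`, then reboot as in step 9.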

docs/Using-Docker.md (4 changes)


# Using Docker For ML-Agents (Deprecated)
:warning: **Note:** We no longer use this guide ourselves and so it may not work correctly. We've
decided to keep it up just in case it is helpful to you.
We currently offer a solution for Windows and Mac users who would like to do
training or inference using Docker. This option may be appealing to those who

docs/Using-Tensorboard.md (11 changes)


start TensorBoard:
1. Open a terminal or console window.
1. Navigate to the directory where the ML-Agents Toolkit is installed.
1. From the command line run: `tensorboard --logdir=summaries --port=6006`
1. Open a browser window and navigate to [localhost:6006](http://localhost:6006).
**Note:** The default port TensorBoard uses is 6006. If there is an existing session
running on port 6006, a new session can be launched on an open port using the `--port`
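If 6006 is taken, one way to find an open port automatically (a hypothetical helper, not part of TensorBoard itself) is to ask the OS for an ephemeral port and pass that to `--port`:

```shell
# Ask the OS for a free ephemeral port, then use it for TensorBoard.
PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1]); s.close()')
echo "tensorboard --logdir=summaries --port=$PORT"
```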

docs/Using-Virtual-Environment.md (17 changes)


## What is a Virtual Environment?
A Virtual Environment is a self contained directory tree that contains a Python installation
for a particular version of Python, plus a number of additional packages. To learn more about
Virtual Environments see [here](https://docs.python.org/3/library/venv.html).
## Why should I use a Virtual Environment?
A Virtual Environment keeps all dependencies for the Python project separate from dependencies

1. Create a folder where the virtual environments will reside `$ mkdir ~/python-envs`
1. To create a new environment named `sample-env` execute `$ python3 -m venv ~/python-envs/sample-env`
1. To activate the environment execute `$ source ~/python-envs/sample-env/bin/activate`
1. Upgrade to the latest pip version using `$ pip3 install --upgrade pip`
1. Upgrade to the latest setuptools version using `$ pip3 install --upgrade setuptools`
1. To deactivate the environment execute `$ deactivate` (you can reactivate the environment
using the same `activate` command listed above)
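The macOS/Linux steps above can be condensed into a short script. A sketch, with two deliberate deviations: it uses a throwaway `mktemp -d` directory instead of `~/python-envs`, and it omits the pip/setuptools upgrades since they require network access:

```shell
# Create and activate a throwaway virtual environment
# (stand-in for ~/python-envs/sample-env in the steps above).
ENV_DIR="$(mktemp -d)/sample-env"
python3 -m venv "$ENV_DIR"
. "$ENV_DIR/bin/activate"
# Inside the venv, sys.prefix points at the environment, not the system Python.
python -c 'import sys; print(sys.prefix)'
deactivate
```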

1. Create a folder where the virtual environments will reside `md python-envs`
1. To create a new environment named `sample-env` execute `python -m venv python-envs\sample-env`
1. To activate the environment execute `python-envs\sample-env\Scripts\activate`
1. Upgrade to the latest pip version using `pip install --upgrade pip`
* Verify that you are using Python 3.6 or Python 3.7. Launch a command prompt using `cmd` and
execute `python --version` to verify the version.
* Python3 installation may require admin privileges on Windows.
* This guide is for Windows 10 using a 64-bit architecture only.

docs/Training-on-Microsoft-Azure-Custom-Instance.md (91 changes, deleted)


(File deleted; its contents were consolidated into the Custom Instances section of docs/Training-on-Microsoft-Azure.md, shown above.)