
Clean-up of python references

- Used python3 instead of python to be explicit
- Fixed Python capitalization
/develop-generalizationTraining-TrainerController
Marwan Mattar, 7 years ago
Current commit
7993e019
7 files changed, 10 insertions(+), 18 deletions(-)
  1. docs/Feature-Memory.md (2 changes)
  2. docs/Getting-Started-with-Balance-Ball.md (4 changes)
  3. docs/Installation.md (12 changes)
  4. docs/Limitations-and-Common-Issues.md (2 changes)
  5. docs/Training-Imitation-Learning.md (2 changes)
  6. docs/Training-ML-Agents.md (4 changes)
  7. docs/Training-on-Amazon-Web-Service.md (2 changes)

docs/Feature-Memory.md (2 changes)


## Limitations
* LSTM does not work well with continuous vector action space.
Please use discrete vector action space for better results.
- * Since the memories must be sent back and forth between python
+ * Since the memories must be sent back and forth between Python
and Unity, using too large `memory_size` will slow down training.
* Adding a recurrent layer increases the complexity of the neural
network, so it is recommended to decrease `num_layers` when using recurrent layers.
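The `memory_size` and `num_layers` hyperparameters above live in the trainer configuration file. A hedged sketch of what a recurrent entry might look like (the brain name is a placeholder, and the exact keys, including `use_recurrent` and `sequence_length`, should be verified against the `trainer_config.yaml` shipped with your release):

```yaml
BallBrain:            # hypothetical brain name
  use_recurrent: true # enable the recurrent (LSTM) layer
  memory_size: 256    # keep modest: memories travel between Python and Unity
  num_layers: 1       # the text above recommends fewer layers when recurrent
  sequence_length: 64
```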

docs/Getting-Started-with-Balance-Ball.md (4 changes)


To train the agents within the Ball Balance environment, we will be using the python
- package. We have provided a convenient python wrapper script called `learn.py` which accepts arguments used to configure both training and inference phases.
+ package. We have provided a convenient Python wrapper script called `learn.py` which accepts arguments used to configure both training and inference phases.
We will pass to this script the path of the environment executable that we just built. (Optionally) We can

```
- python python/learn.py <env_file_path> --run-id=<run-identifier> --train
+ python3 python/learn.py <env_file_path> --run-id=<run-identifier> --train
```

docs/Installation.md (12 changes)


## Install Python (with Dependencies)
- In order to use ML-Agents, you need Python (2 or 3; 64 bit required) along with
+ In order to use ML-Agents, you need Python 3 along with
- We **strongly** recommend using Python 3 as we do not guarantee supporting Python 2
- in future releases. In all of our subsequent instructions, we use `python`
- to refer to either Python 2 or 3, depending on your installation.
### Windows Users

on installing it.
To install dependencies, go into the `python` subdirectory of the repository,
- and run (depending on your Python version) from the command line:
+ and run from the command line:
- pip install .
- or
pip3 install .

docs/Limitations-and-Common-Issues.md (2 changes)


If you receive an exception `"Couldn't launch new environment because
communication port {} is still in use. "`, you can change the worker
- number in the python script when calling
+ number in the Python script when calling
`UnityEnvironment(file_name=filename, worker_id=X)`
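Each `worker_id` exists so that simultaneous environment instances use distinct communication ports. A minimal Python sketch of the convention; the port formula and the base port of 5005 are assumptions about this era of the library, so check your `unityagents` source before relying on them:

```python
# Hypothetical helper: derive the TCP port a given worker_id would use.
# Assumes port = base_port + worker_id (base_port 5005 is an assumption
# about this release of unityagents; verify against your version).
def worker_port(worker_id, base_port=5005):
    return base_port + worker_id

# Two environments running at once must use distinct worker ids, e.g.
# UnityEnvironment(file_name=filename, worker_id=0) and worker_id=1,
# so their ports do not collide:
print(worker_port(0))
print(worker_port(1))
```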

docs/Training-Imitation-Learning.md (2 changes)


4. Link the brains to the desired agents (one agent as the teacher and at least one agent as a student).
5. Build the Unity executable for your desired platform.
6. In `trainer_config.yaml`, add an entry for the "Student" brain. Set the `trainer` parameter of this entry to `imitation`, and the `brain_to_imitate` parameter to the name of the teacher brain: "Teacher". Additionally, set `batches_per_epoch`, which controls how much training to do each moment. Increase the `max_steps` option if you'd like to keep training the agents for a longer period of time.
- 7. Launch the training process with `python python/learn.py <env_name> --train --slow`, where `<env_name>` is the path to your built Unity executable.
+ 7. Launch the training process with `python3 python/learn.py <env_name> --train --slow`, where `<env_name>` is the path to your built Unity executable.
8. From the Unity window, control the agent with the Teacher brain by providing "teacher demonstrations" of the behavior you would like to see.
9. Watch as the agent(s) with the student brain attached begin to behave similarly to the demonstrations.
10. Once the Student agents are exhibiting the desired behavior, end the training process with `CTRL+C` from the command line.
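Step 6 above can be sketched as a `trainer_config.yaml` entry. The brain names follow the Teacher/Student naming from the steps, while the numeric values are placeholders to tune per environment:

```yaml
Student:                     # name of the student brain in your scene
  trainer: imitation         # per step 6 above
  brain_to_imitate: Teacher  # name of the teacher brain
  batches_per_epoch: 5       # placeholder; controls training done each moment
  max_steps: 10000           # increase to keep training longer
```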

docs/Training-ML-Agents.md (4 changes)


The basic command for training is:
```
- python learn.py <env_file_path> --run-id=<run-identifier> --train
+ python3 learn.py <env_file_path> --run-id=<run-identifier> --train
```
where `<env_file_path>` is the path to your Unity executable containing the agents to be trained and `<run-identifier>` is an optional identifier you can use to identify the results of individual training runs.
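As a quick sanity check of the flags above, a small Python sketch that assembles the documented invocation; the helper name is ours, not part of ML-Agents:

```python
# Hypothetical helper mirroring the documented learn.py invocation.
def build_train_cmd(env_file_path, run_id):
    return [
        "python3", "learn.py",
        env_file_path,                  # path to the built Unity executable
        "--run-id={}".format(run_id),   # optional identifier for this run
        "--train",                      # train rather than run inference
    ]

print(build_train_cmd("3DBall.x86_64", "first-run"))
```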

3. Navigate to the ml-agents `python` folder.
4. Run the following to launch the training process using the path to the Unity environment you built in step 1:
```
- python learn.py ../../projects/Cats/CatsOnBicycles.app --run-id=cob_1 --train
+ python3 learn.py ../../projects/Cats/CatsOnBicycles.app --run-id=cob_1 --train
```
During a training session, the training program prints out and saves updates at regular intervals (specified by the `summary_freq` option). The saved statistics are grouped by the `run-id` value so you should assign a unique id to each training run if you plan to view the statistics. You can view these statistics using TensorBoard during or after training by running the following command (from the ML-Agents python directory):

docs/Training-on-Amazon-Web-Service.md (2 changes)


## Testing
- If all steps worked correctly, upload an example binary built for Linux to the instance, and test it from python with:
+ If all steps worked correctly, upload an example binary built for Linux to the instance, and test it from Python with:
```python
from unityagents import UnityEnvironment
```
