# Unity ML-Agents Python Interface and Trainers

The `mlagents` Python package is part of the ML-Agents Toolkit. `mlagents` provides a Python API that allows direct interaction with the Unity game engine, as well as a collection of trainers and algorithms to train agents in Unity environments.
The `mlagents` Python package contains two sub-packages:

- `mlagents.envs`: A low-level API which allows you to interact directly with a Unity environment. See here for more information on using this package.
- `mlagents.trainers`: A set of reinforcement learning algorithms designed to be used with Unity environments. Access them using the `mlagents-learn` access point. See here for more information on using this package.
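To give a feel for the low-level `mlagents.envs` API, here is a minimal sketch based on the v0.8-era interface. It assumes you have a built Unity environment binary; `"3DBall"` is a placeholder path, and the random-action loop is purely illustrative. It will not run without a Unity build, so treat it as a usage outline rather than a definitive example.

```python
# Sketch of the low-level mlagents.envs API (v0.8-era interface, assumed).
# "3DBall" is a placeholder path to a Unity environment binary you have built.
import numpy as np
from mlagents.envs import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")

# A Unity environment exposes one or more "brains" that control its agents.
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# reset() and step() return a dict mapping brain names to BrainInfo objects.
info = env.reset(train_mode=True)[brain_name]
for _ in range(10):
    # Sample a random continuous action for every agent this brain controls.
    action = np.random.randn(len(info.agents),
                             brain.vector_action_space_size[0])
    info = env.step(action)[brain_name]

env.close()
```

Passing `file_name=None` instead of a binary path connects to a running Unity Editor instance, which is convenient while iterating on an environment.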
## Installation

Install the `mlagents` package with:

```sh
pip install mlagents
```
## Usage & More Information

For more detailed documentation, check out the ML-Agents Toolkit documentation.
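As a quick-start illustration of the `mlagents-learn` access point mentioned above, a typical training invocation from this era looks like the following. The config path and run ID are placeholders, and the command requires a Unity Editor or built environment to attach to, so this is a sketch of the workflow rather than a self-contained example.

```shell
# Train agents using a trainer configuration file (path is a placeholder).
# --run-id names this training run; --train enables training mode.
mlagents-learn config/trainer_config.yaml --run-id=first-run --train
```

When no environment binary is specified, `mlagents-learn` waits for you to press Play in the Unity Editor before training begins.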