Unity ML-Agents Toolkit (Beta)
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release. The ML-Agents toolkit is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity’s rich environments and then made accessible to the wider research and game developer communities.
Features
- Unity environment control from Python
- 10+ sample Unity environments
- Two deep reinforcement learning algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC)
- Support for multiple environment configurations and training scenarios
- Train memory-enhanced agents using deep reinforcement learning
- Easily definable Curriculum Learning and Generalization scenarios
- Built-in support for Imitation Learning
- Flexible agent control with On Demand Decision Making
- Visualizing network outputs within the environment
- Simplified set-up with Docker
- Wrap learning environments as a gym
- Utilizes the Unity Inference Engine
- Train using concurrent Unity environment instances
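Controlling an environment from Python follows the familiar reset/step loop. The sketch below illustrates that loop with a toy stand-in environment, since driving a real Unity build requires a compiled binary; the `StandInEnv` class, its reward logic, and `run_episode` are illustrative assumptions, not part of the ML-Agents API.

```python
class StandInEnv:
    """Toy stand-in for a Unity environment: episode ends after 10 steps."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0]  # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action > 0 else 0.0
        done = self.t >= 10
        return [float(self.t), 0.0], reward, done


def run_episode(env, policy):
    """Drive one episode with the given policy; return total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


print(run_episode(StandInEnv(), policy=lambda obs: 1))  # prints 10.0
```

With a real Unity build, the same loop shape applies, but observations and rewards come from the simulation itself rather than hand-written logic.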
Documentation
- For more information, in addition to installation and usage instructions, see our documentation home.
- If you are a researcher interested in a discussion of Unity as an AI platform, see a pre-print of our reference paper on Unity and the ML-Agents Toolkit. Also, see below for instructions on citing this paper.
- If you have used an earlier version of the ML-Agents toolkit, we strongly recommend our guide on migrating from earlier versions.
Additional Resources
We have published a series of blog posts that are relevant for ML-Agents:
- Overviewing reinforcement learning concepts (multi-armed bandit and Q-learning)
- Using Machine Learning Agents in a real game: a beginner’s guide
- Post announcing the winners of our first ML-Agents Challenge
- Post overviewing how Unity can be leveraged as a simulator to design safer cities.
In addition to our own documentation, here are some additional, relevant articles:
- A Game Developer Learns Machine Learning
- Explore Unity Technologies ML-Agents Exclusively on Intel Architecture
- ML-Agents Penguins tutorial
Community and Feedback
The ML-Agents toolkit is an open-source project and we encourage and welcome contributions. If you wish to contribute, be sure to review our contribution guidelines and code of conduct.
For problems with the installation and setup of the ML-Agents toolkit, or discussions about how to best set up or train your agents, please create a new thread on the Unity ML-Agents forum and make sure to include as much detail as possible. If you run into any other problems using the ML-Agents toolkit, or have a specific feature request, please submit a GitHub issue.
Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to complete our survey and let us know about it.
For any other questions or feedback, connect directly with the ML-Agents team at ml-agents@unity3d.com.
Translations
To make the Unity ML-Agents toolkit accessible to the global research and Unity developer communities, we're attempting to create and maintain translations of our documentation. We've started with translating a subset of the documentation to one language (Chinese), but we hope to continue translating more pages and to other languages. Consequently, we welcome any enhancements and improvements from the community.
License
The ML-Agents toolkit is licensed under the Apache License 2.0 (see the LICENSE file).
Citation
If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you cite the following paper as a reference:
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627. https://github.com/Unity-Technologies/ml-agents.