The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents.
Latest commit: 30e4424c "Fix PPO optimizer creation" by Ervin Teng, 5 years ago

Path | Last commit | Last updated
.circleci | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
.github | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
.yamato | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
DevProject | [upkeep] Add a dev project to take advantage of package that only work with 2019.x or newer. (#3452) | 5 years ago
Project | Merge commit 'fbcdd83c087135f870e785cc72e5ff9a7e898e3a' into develop-splitpolicyoptimizer | 5 years ago
com.unity.ml-agents | Merge commit 'fbcdd83c087135f870e785cc72e5ff9a7e898e3a' into develop-splitpolicyoptimizer | 5 years ago
config | Merge commit 'fbcdd83c087135f870e785cc72e5ff9a7e898e3a' into develop-splitpolicyoptimizer | 5 years ago
demos | Merge pull request #3010 from Unity-Technologies/release-0.12.0-to-master | 5 years ago
docs | Merge commit 'fbcdd83c087135f870e785cc72e5ff9a7e898e3a' into develop-splitpolicyoptimizer | 5 years ago
gym-unity | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
ml-agents | Fix PPO optimizer creation | 5 years ago
ml-agents-envs | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
notebooks | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
protobuf-definitions | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
unity-volume | [containerization] CPU based containerization to support all environments that don't use observations | 7 years ago
utils | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago
.gitattributes | Develop communicator redesign (#638) | 7 years ago
.gitignore | Update docs to reflect new package installation workflow. (#3362) | 5 years ago
.pre-commit-config.yaml | update flake8 plugin version and fix warnings (#3180) | 5 years ago
.pylintrc | Rename mlagents.envs to mlagents_envs (#3083) | 5 years ago
CODE_OF_CONDUCT.md | Release v0.5 (#1202) | 6 years ago
Dockerfile | Install dependencies for ml-agents-envs and ml-agents in Docker | 6 years ago
LICENSE | Initial commit | 7 years ago
README.md | landing page links to latest_release docs (#3415) | 5 years ago
SURVEY.md | fix trailing whitespace in markdown (#2786) | 5 years ago
markdown-link-check.fast.json | [CI] filter remote links, setup nightly full check (#2811) | 5 years ago
markdown-link-check.full.json | exclude bair.berkeley.edu temporarily (#2702) | 5 years ago
setup.cfg | add flake8-bugbear (#3137) | 5 years ago
test_constraints_max_tf1_version.txt | Don't use tf 1.15.1 in tests (#3289) | 5 years ago
test_constraints_max_tf2_version.txt | pass file mode to h5py.File() (#3165) | 5 years ago
test_constraints_min_version.txt | Support for ONNX export (#3101) | 5 years ago
test_requirements.txt | Support for ONNX export (#3101) | 5 years ago

README.md

Unity ML-Agents Toolkit (Beta)


The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release. The ML-Agents toolkit is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity’s rich environments and then made accessible to the wider research and game developer communities.
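
As a rough illustration of that Python API, the sketch below drives an environment with random actions. It is a minimal sketch, not a reference: the class and method names (get_agent_groups, get_step_result, set_actions) follow the mlagents_envs low-level API as it existed around this release and have changed in other versions, and ACTION_SIZE is an assumed value for the example, not something the toolkit defines.

```python
# Minimal sketch: drive a Unity environment from Python with random actions.
# The mlagents_envs names used here (get_agent_groups, get_step_result,
# set_actions) are assumptions based on the API around this release and may
# differ in other versions; ACTION_SIZE is made up for the example.
import numpy as np
from mlagents_envs.environment import UnityEnvironment

# file_name=None attaches to a Unity Editor in Play mode; pass the path of a
# built player to launch that executable instead.
env = UnityEnvironment(file_name=None, worker_id=0)
env.reset()

group_name = env.get_agent_groups()[0]   # first registered agent group
ACTION_SIZE = 2                          # assumed action dimension for this sketch

for _ in range(100):
    step_result = env.get_step_result(group_name)    # batched results for the group
    num_agents = len(step_result.reward)             # one reward entry per agent
    # Random continuous actions; a real policy would map observations to actions.
    actions = np.random.uniform(-1.0, 1.0, size=(num_agents, ACTION_SIZE)).astype(np.float32)
    env.set_actions(group_name, actions)
    env.step()

env.close()
```

In practice, training is usually launched with the mlagents-learn command-line utility and a trainer configuration file from the config directory, rather than by writing this loop by hand; the low-level API is mainly useful for custom training code and debugging.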

Features

  • Unity environment control from Python
  • 10+ sample Unity environments
  • Two deep reinforcement learning algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC)
  • Support for multiple environment configurations and training scenarios
  • Self-play mechanism for training agents in adversarial scenarios
  • Train memory-enhanced agents using deep reinforcement learning
  • Easily definable Curriculum Learning and Generalization scenarios
  • Built-in support for Imitation Learning
  • Flexible agent control with On Demand Decision Making
  • Visualizing network outputs within the environment
  • Simplified set-up with Docker
  • Wrap learning environments as a gym (see the sketch after this list)
  • Utilizes the Unity Inference Engine
  • Train using concurrent Unity environment instances
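
As a companion to the gym wrapper listed above, here is a minimal sketch of what wrapping a Unity build as a gym environment looks like. The constructor arguments (environment_filename, worker_id, use_visual) are assumptions based on the gym-unity package around this release, and the build path is hypothetical; the reset/step loop itself is the standard gym interface.

```python
# Minimal sketch: wrap a Unity build as a gym environment and run one random
# episode. The UnityEnv constructor arguments are assumptions for this release;
# check the gym-unity documentation for the exact signature you have installed.
from gym_unity.envs import UnityEnv

env = UnityEnv("path/to/your/UnityBuild",  # hypothetical path to a built environment
               worker_id=0,
               use_visual=False)           # vector observations only

obs = env.reset()
done = False
episode_reward = 0.0
while not done:
    action = env.action_space.sample()     # random action from the gym action space
    obs, reward, done, info = env.step(action)
    episode_reward += reward

print("episode reward:", episode_reward)
env.close()
```

Because the wrapper exposes the standard gym interface, a wrapped Unity environment can be plugged into third-party reinforcement learning libraries that expect gym environments.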

Documentation

Additional Resources

We have published a series of blog posts that are relevant for ML-Agents:

In addition to our own documentation, here are some additional, relevant articles:

Community and Feedback

The ML-Agents toolkit is an open-source project and we encourage and welcome contributions. If you wish to contribute, be sure to review our contribution guidelines and code of conduct.

For problems with the installation and setup of the ML-Agents toolkit, or discussions about how best to set up or train your agents, please create a new thread on the Unity ML-Agents forum and make sure to include as much detail as possible. If you run into any other problems using the ML-Agents toolkit, or have a specific feature request, please submit a GitHub issue.

Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to let us know about it.

For any other questions or feedback, connect directly with the ML-Agents team at ml-agents@unity3d.com.

Releases

The latest release is 0.14.0. Previous releases can be found below:

Version | Source | Documentation | Download
0.13.1 | source | docs | download
0.13.0 | source | docs | download
0.12.1 | source | docs | download
0.12.0 | source | docs | download
0.11.0 | source | docs | download
0.10.1 | source | docs | download
0.10.0 | source | docs | download

See the GitHub releases for more details of the changes between versions.

Please note that the master branch is under active development, so the documentation there may differ from the code of a previous release. Always use the documentation that corresponds to the release version you're using.

License

Apache License 2.0

Citation

If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you cite the following paper as a reference:

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627. https://github.com/Unity-Technologies/ml-agents.