Current commit: e3af96ca (5 years ago)
75 files changed, with 1,471 insertions and 689 deletions

- Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DAgent.cs (2)
- Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DHardAgent.cs (2)
- Project/Assets/ML-Agents/Examples/Bouncer/Scripts/BouncerAgent.cs (2)
- Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridArea.cs (2)
- Project/Assets/ML-Agents/Examples/Tennis/Scripts/TennisAgent.cs (2)
- Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs (2)
- README.md (108)
- com.unity.ml-agents/CHANGELOG.md (2)
- com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (93)
- com.unity.ml-agents/Editor/RayPerceptionSensorComponentBaseEditor.cs (10)
- com.unity.ml-agents/Runtime/Academy.cs (2)
- com.unity.ml-agents/Runtime/Agent.cs (33)
- com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs (48)
- com.unity.ml-agents/Runtime/Inference/ModelRunner.cs (2)
- com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs (32)
- com.unity.ml-agents/Runtime/Policies/BrainParameters.cs (14)
- com.unity.ml-agents/Runtime/Sensors/CameraSensor.cs (19)
- com.unity.ml-agents/Runtime/Sensors/CameraSensorComponent.cs (78)
- com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensorComponent3D.cs (8)
- com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensorComponentBase.cs (31)
- com.unity.ml-agents/Runtime/Sensors/RenderTextureSensor.cs (28)
- com.unity.ml-agents/Runtime/Sensors/RenderTextureSensorComponent.cs (55)
- com.unity.ml-agents/Runtime/SideChannels/FloatPropertiesChannel.cs (37)
- config/sac_trainer_config.yaml (2)
- config/trainer_config.yaml (4)
- docs/Getting-Started-with-Balance-Ball.md (2)
- docs/Installation.md (160)
- docs/Learning-Environment-Best-Practices.md (5)
- docs/Learning-Environment-Create-New.md (4)
- docs/Learning-Environment-Design-Agents.md (9)
- docs/Learning-Environment-Examples.md (4)
- docs/Limitations.md (35)
- docs/Migrating.md (4)
- docs/Readme.md (2)
- docs/Using-Docker.md (9)
- docs/Using-Virtual-Environment.md (12)
- docs/images/unity_package_manager_window.png (951)
- docs/localized/KR/docs/Installation.md (2)
- docs/localized/zh-CN/docs/Installation.md (2)
- ml-agents-envs/mlagents_envs/communicator.py (4)
- ml-agents-envs/mlagents_envs/environment.py (2)
- ml-agents-envs/mlagents_envs/exception.py (5)
- ml-agents-envs/mlagents_envs/rpc_communicator.py (3)
- ml-agents-envs/mlagents_envs/rpc_utils.py (3)
- ml-agents/mlagents/trainers/brain.py (4)
- ml-agents/mlagents/trainers/components/reward_signals/gail/signal.py (3)
- ml-agents/mlagents/trainers/components/reward_signals/reward_signal_factory.py (4)
- ml-agents/mlagents/trainers/demo_loader.py (2)
- ml-agents/mlagents/trainers/ghost/trainer.py (7)
- ml-agents/mlagents/trainers/learn.py (13)
- ml-agents/mlagents/trainers/models.py (3)
- ml-agents/mlagents/trainers/policy/nn_policy.py (5)
- ml-agents/mlagents/trainers/policy/tf_policy.py (3)
- ml-agents/mlagents/trainers/ppo/optimizer.py (5)
- ml-agents/mlagents/trainers/ppo/trainer.py (3)
- ml-agents/mlagents/trainers/sac/network.py (6)
- ml-agents/mlagents/trainers/sac/optimizer.py (14)
- ml-agents/mlagents/trainers/sac/trainer.py (3)
- ml-agents/mlagents/trainers/trainer/rl_trainer.py (3)
- ml-agents/mlagents/trainers/trainer/trainer.py (10)
- com.unity.ml-agents/Editor/CameraSensorComponentEditor.cs (48)
- com.unity.ml-agents/Editor/CameraSensorComponentEditor.cs.meta (11)
- com.unity.ml-agents/Editor/RenderTextureSensorComponentEditor.cs (43)
- com.unity.ml-agents/Editor/RenderTextureSensorComponentEditor.cs.meta (11)
- com.unity.ml-agents/Tests/Editor/Sensor/RenderTextureSenorTests.cs (34)
- com.unity.ml-agents/Tests/Editor/Sensor/RenderTextureSenorTests.cs.meta (11)
- com.unity.ml-agents/Tests/Editor/Sensor/RenderTextureSensorComponentTests.cs (42)
- com.unity.ml-agents/Tests/Editor/Sensor/RenderTextureSensorComponentTests.cs.meta (11)
- ml-agents/mlagents/logging_util.py (10)
- com.unity.ml-agents/Documentation~/TableOfContents.md (3)
- com.unity.ml-agents/README.md.meta (7)
- com.unity.ml-agents/README.md (5)
- /com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (0)
- /docs/localized/KR/docs/Installation-Anaconda-Windows.md (0)
- /docs/Installation-Anaconda-Windows.md (0)

com.unity.ml-agents/README.md:

Please see the [ML-Agents README](https://github.com/Unity-Technologies/ml-agents/blob/master/README.md).

com.unity.ml-agents/Documentation~/com.unity.ml-agents.md:

# About ML-Agents package (`com.unity.ml-agents`)

The Unity ML-Agents package contains the C# SDK for the
[Unity ML-Agents Toolkit](https://github.com/Unity-Technologies/ml-agents).

The package provides the ability for any Unity scene to be converted into a learning
environment where character behaviors can be trained using a variety of machine learning
algorithms. Additionally, it enables any trained behavior to be embedded back into the Unity
scene. More specifically, the package provides the following core functionalities:

* Define Agents: entities whose behavior will be learned. Agents are entities
  that generate observations (through sensors), take actions, and receive rewards from
  the environment (see the sketch after this list).
* Define Behaviors: entities that specify how an agent should act. Multiple agents can
  share the same Behavior and a scene may have multiple Behaviors.
* Record demonstrations of an agent within the Editor. These demonstrations can be
  valuable to train a behavior for that agent.
* Embed a trained behavior into the scene via the
  [Unity Inference Engine](https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html).
  Thus an Agent can switch from a learning behavior to an inference behavior.
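
To make the Agent concept concrete, here is a minimal sketch of a custom Agent, assuming the pre-1.0 C# API of this package (`MLAgents` namespace, `CollectObservations(VectorSensor)`, `AgentAction(float[])`); the class name, the `target` field, and the reward logic are illustrative only, and later releases rename some of these methods (for example `OnActionReceived`).

```csharp
using UnityEngine;
using MLAgents;          // assumed pre-1.0 namespace; later releases use Unity.MLAgents
using MLAgents.Sensors;

// Hypothetical agent that learns to move toward a target on the XZ plane.
public class ReachTargetAgent : Agent
{
    public Transform target;   // illustrative field, assigned in the Inspector

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observations generated through a sensor: the agent's position and the target's.
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(target.localPosition);
    }

    public override void AgentAction(float[] vectorAction)
    {
        // Two continuous actions interpreted as movement along X and Z.
        var move = new Vector3(vectorAction[0], 0f, vectorAction[1]);
        transform.localPosition += move * Time.fixedDeltaTime;

        // Reward received from the environment when the target is reached.
        if (Vector3.Distance(transform.localPosition, target.localPosition) < 1.5f)
        {
            SetReward(1f);
            Done();
        }
    }
}
```

The same Behavior (configured through the `BehaviorParameters` component) can then be shared by several such agents in the scene.
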
Note that this package does not contain the machine learning algorithms for training
behaviors. It relies on a Python package to orchestrate the training. This package
only enables instrumenting a Unity scene and setting it up for training, and then
embedding the trained model back into your Unity scene.

## Preview package

This package is available as a preview, so it is not ready for production use.
The features and documentation in this package might change before it is verified for release.

## Package contents

The following table describes the package folder structure:

|**Location**|**Description**|
|---|---|
|*Documentation~*|Contains the documentation for the Unity package.|
|*Editor*|Contains utilities for Editor windows and drawers.|
|*Plugins*|Contains third-party DLLs.|
|*Runtime*|Contains core C# APIs for integrating ML-Agents into your Unity scene.|
|*Tests*|Contains the unit tests for the package.|

<a name="Installation"></a> |
|||
|
|||
## Installation |
|||
|
|||
To install this package, follow the instructions in the |
|||
[Package Manager documentation](https://docs.unity3d.com/Manual/upm-ui-install.html). |
|||
|
|||
To install the Python package to enable training behaviors, follow the instructions on our |
|||
[GitHub repository](https://github.com/Unity-Technologies/ml-agents/blob/latest_release/docs/Installation.md). |
|||
|
|||
## Requirements |
|||
|
|||
This version of the Unity ML-Agents package is compatible with the following versions of the Unity Editor: |
|||
|
|||
* 2018.4 and later (recommended) |
|||
|
|||
## Known limitations

### Headless Mode

If you enable Headless mode, you will not be able to collect visual observations
from your agents.

### Rendering Speed and Synchronization

Currently the speed of the game physics can only be increased to 100x real-time.
The Academy also moves in time with `FixedUpdate()` rather than `Update()`, so game
behavior implemented in `Update()` may be out of sync with the agent decision making.
See
[Execution Order of Event Functions](https://docs.unity3d.com/Manual/ExecutionOrder.html)
for more information.

You can control the frequency of Academy stepping by calling
`Academy.Instance.DisableAutomaticStepping()`, and then calling
`Academy.Instance.EnvironmentStep()`.
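
For illustration, the snippet below is a minimal sketch of manual stepping that uses exactly the two calls named above; the `ManualStepper` component and its every-other-physics-step schedule are assumptions for the example, and the `MLAgents` namespace reflects the pre-1.0 package layout.

```csharp
using UnityEngine;
using MLAgents;   // assumed pre-1.0 namespace; later releases use Unity.MLAgents

// Hypothetical helper that steps the Academy on its own schedule
// instead of letting it step automatically.
public class ManualStepper : MonoBehaviour
{
    int physicsSteps;

    void Awake()
    {
        // Stop the Academy from stepping automatically.
        Academy.Instance.DisableAutomaticStepping();
    }

    void FixedUpdate()
    {
        physicsSteps++;

        // Step the Academy (and therefore every Agent) every other physics step.
        if (physicsSteps % 2 == 0)
        {
            Academy.Instance.EnvironmentStep();
        }
    }
}
```
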
### Unity Inference Engine Models

Currently, only models created with our trainers are supported for running
ML-Agents with a neural network behavior.

## Helpful links

If you are new to the Unity ML-Agents package, or have a question after reading
the documentation, you can check out our
[GitHub repository](https://github.com/Unity-Technologies/ml-agents), which
also includes a number of ways to
[connect with us](https://github.com/Unity-Technologies/ml-agents#community-and-feedback),
including our [ML-Agents Forum](https://forum.unity.com/forums/ml-agents.453/).

docs/Limitations.md:

# Limitations

## Unity SDK

### Headless Mode

If you enable Headless mode, you will not be able to collect visual observations
from your agents.

### Rendering Speed and Synchronization

Currently the speed of the game physics can only be increased to 100x real-time.
The Academy also moves in time with `FixedUpdate()` rather than `Update()`, so game
behavior implemented in `Update()` may be out of sync with the agent decision making.
See
[Execution Order of Event Functions](https://docs.unity3d.com/Manual/ExecutionOrder.html)
for more information.

You can control the frequency of Academy stepping by calling
`Academy.Instance.DisableAutomaticStepping()`, and then calling
`Academy.Instance.EnvironmentStep()`.

### Unity Inference Engine Models

Currently, only models created with our trainers are supported for running
ML-Agents with a neural network behavior.

## Python API

### Python version

As of version 0.3, we no longer support Python 2.

See the package-specific Limitations pages:

* [Unity `com.unity.ml-agents` package](../com.unity.ml-agents/Documentation~/com.unity.ml-agents.md)
* [`mlagents` Python package](../ml-agents/README.md)
* [`mlagents_envs` Python package](../ml-agents-envs/README.md)
* [`gym_unity` Python package](../gym-unity/README.md)

ml-agents/mlagents/logging_util.py:

import logging