
Merge release 15 into Main

[release_15] Release 15 Merge into Main
/develop/lex-walker-model
GitHub, 3 years ago
Current commit 3607f062
26 changed files with 110 additions and 61 deletions
  1. README.md (19 changed lines)
  2. com.unity.ml-agents.extensions/Documentation~/Grid-Sensor.md (2 changed lines)
  3. com.unity.ml-agents.extensions/Documentation~/Match3.md (2 changed lines)
  4. com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (12 changed lines)
  5. com.unity.ml-agents/CHANGELOG.md (6 changed lines)
  6. com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (4 changed lines)
  7. com.unity.ml-agents/Runtime/Academy.cs (4 changed lines)
  8. com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (2 changed lines)
  9. com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (2 changed lines)
  10. com.unity.ml-agents/Runtime/Agent.cs (26 changed lines)
  11. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2 changed lines)
  12. com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (2 changed lines)
  13. com.unity.ml-agents/Runtime/IMultiAgentGroup.cs (2 changed lines)
  14. com.unity.ml-agents/Runtime/Sensors/BufferSensor.cs (7 changed lines)
  15. com.unity.ml-agents/Runtime/SimpleMultiAgentGroup.cs (4 changed lines)
  16. docs/Installation-Anaconda-Windows.md (8 changed lines)
  17. docs/Installation.md (8 changed lines)
  18. docs/Learning-Environment-Design-Agents.md (7 changed lines)
  19. docs/ML-Agents-Overview.md (5 changed lines)
  20. docs/Training-on-Amazon-Web-Service.md (2 changed lines)
  21. docs/Training-on-Microsoft-Azure.md (2 changed lines)
  22. docs/Unity-Inference-Engine.md (4 changed lines)
  23. ml-agents-envs/README.md (2 changed lines)
  24. ml-agents/README.md (2 changed lines)
  25. ml-agents/mlagents/trainers/tests/test_rl_trainer.py (32 changed lines)
  26. ml-agents/mlagents/trainers/trainer/rl_trainer.py (3 changed lines)

README.md (19 changed lines)


# Unity ML-Agents Toolkit
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/)
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)

- 18+ [example Unity environments](docs/Learning-Environment-Examples.md)
- Support for multiple environment configurations and training scenarios
- Flexible Unity SDK that can be integrated into your game or custom Unity scene
- Training using two deep reinforcement learning algorithms, Proximal Policy
Optimization (PPO) and Soft Actor-Critic (SAC)
- Built-in support for Imitation Learning through Behavioral Cloning (BC) or
Generative Adversarial Imitation Learning (GAIL)
- Self-play mechanism for training agents in adversarial scenarios
- Support for training single-agent, multi-agent cooperative, and multi-agent
competitive scenarios via several Deep Reinforcement Learning algorithms (PPO, SAC, MA-POCA, self-play).
- Support for learning from demonstrations through two Imitation Learning algorithms (BC and GAIL).
- Easily definable Curriculum Learning scenarios for complex tasks
- Train robust agents using environment randomization
- Flexible agent control with On Demand Decision Making

## Releases & Documentation
**Our latest, stable release is `Release 14`. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md)
**Our latest, stable release is `Release 15`. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/Readme.md)
to get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `main` branch which is

| **Version** | **Release Date** | **Source** | **Documentation** | **Download** | **Python Package** | **Unity Package** |
|:-------:|:------:|:-------------:|:-------:|:------------:|:------------:|:------------:|
| **main (unstable)** | -- | [source](https://github.com/Unity-Technologies/ml-agents/tree/main) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/main/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/main.zip) | -- | -- |
| **Release 14** | **March 5, 2021** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_14)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_14.zip)** | **[0.24.1](https://pypi.org/project/mlagents/0.24.1/)** | **[1.8.1](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.8/manual/index.html)** |
| **Release 15** | **March 17, 2021** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_15)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_15.zip)** | **[0.25.0](https://pypi.org/project/mlagents/0.25.0/)** | **[1.9.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.9/manual/index.html)** |
| **Release 13** | **February 17, 2021** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_13)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_13.zip)** | **[0.24.0](https://pypi.org/project/mlagents/0.24.0/)** | **[1.8.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.8/manual/index.html)** |
| **Release 14** | March 5, 2021 | [source](https://github.com/Unity-Technologies/ml-agents/tree/release_14) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/release_14.zip) | [0.24.1](https://pypi.org/project/mlagents/0.24.1/) | [1.8.1](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.8/manual/index.html) |
| **Release 13** | February 17, 2021 | [source](https://github.com/Unity-Technologies/ml-agents/tree/release_13) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/release_13.zip) | [0.24.0](https://pypi.org/project/mlagents/0.24.0/) | [1.8.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.8/manual/index.html) |
| **Release 12** | December 22, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/release_12) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/release_12_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/release_12.zip) | [0.23.0](https://pypi.org/project/mlagents/0.23.0/) | [1.7.2](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.7/manual/index.html) |
| **Release 11** | December 21, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/release_11) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/release_11_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/release_11.zip) | [0.23.0](https://pypi.org/project/mlagents/0.23.0/) | [1.7.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.7/manual/index.html) |
| **Release 10** | November 18, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/release_10) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/release_10.zip) | [0.22.0](https://pypi.org/project/mlagents/0.22.0/) | [1.6.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.6/manual/index.html) |

com.unity.ml-agents.extensions/Documentation~/Grid-Sensor.md (2 changed lines)


An image can be thought of as a matrix with a predefined width (W) and height (H), where each pixel is simply an array of length 3 (in the case of RGB), `[Red, Green, Blue]`, holding the color channel intensities at that pixel location. Thus an image is just a 3-dimensional matrix of size WxHx3. A Grid Observation can be thought of as a generalization of this setup where, in place of a pixel, there is a "cell": an array of length N representing different channel intensities at that cell position. From a Convolutional Neural Network point of view, the introduction of multiple channels in an "image" isn't a new concept; one such example is the RGB-Depth image used in several robotics applications. What distinguishes Grid Observations is what the data within the channels represents. Instead of limiting the channels to color intensities, the channels within a cell of a Grid Observation generalize to any data that can be represented by a single number (float or int).
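To make the cell layout concrete, here is a minimal sketch (not the GridSensor API) of a grid observation stored as a WxHxN tensor; all names and channel meanings are hypothetical:

```csharp
// A grid observation generalizes an image: each cell stores N channel values
// (any data expressible as a single float) instead of 3 color intensities.
public static class GridObservationSketch
{
    public static float[,,] MakeGrid(int width, int height, int numChannels)
    {
        var grid = new float[width, height, numChannels];
        // Hypothetical channels: 0 = "cell contains food" flag,
        // 1 = normalized distance from the agent to that cell.
        grid[3, 5, 0] = 1f;
        grid[3, 5, 1] = 0.25f;
        return grid;
    }
}
```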
Before jumping into the details of the Grid Sensor, it is important to note the agent's improved performance and qualitatively different behavior compared to raycasts. Unity ML-Agents comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
Before jumping into the details of the Grid Sensor, it is important to note the agent's improved performance and qualitatively different behavior compared to raycasts. Unity ML-Agents comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
The Food Collector environment can be described as:
* Set-up: A multi-agent environment where agents compete to collect food.

com.unity.ml-agents.extensions/Documentation~/Match3.md (2 changed lines)


This implementation includes:
* C# implementation catered toward a Match-3 setup including concepts around encoding for moves based on [Human Like Playtesting with Deep Learning](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)
* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information on the Match-3 example [here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/docs/Learning-Environment-Examples.md#match-3).
* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information on the Match-3 example [here](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/docs/Learning-Environment-Examples.md#match-3).
### Feedback
If you are a Match-3 developer and are trying to leverage ML-Agents for this scenario, [we want to hear from you](https://forms.gle/TBsB9jc8WshgzViU9). Additionally, we are also looking for interested Match-3 teams to speak with us for 45 minutes. If you are interested, please indicate that in the [form](https://forms.gle/TBsB9jc8WshgzViU9). If selected, we will provide gift cards as a token of appreciation.

com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (12 changed lines)


recommended ways to install the package:
### Local Installation
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#advanced-local-installation-for-development-1)
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/Installation.md#advanced-local-installation-for-development-1)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/images/unity_package_manager_git_url.png)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/images/unity_package_manager_git_url.png)
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_14
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_15
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_14",
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_15",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.

- No way to customize the action space of the `InputActuatorComponent`
## Need Help?
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/README.md) contains links for contacting the team or getting support.
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/README.md) contains links for contacting the team or getting support.

com.unity.ml-agents/CHANGELOG.md (6 changed lines)


### Minor Changes
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Make com.unity.modules.unityanalytics an optional dependency. (#5109)
#### ml-agents / ml-agents-envs / gym-unity (Python)
### Bug Fixes

## [1.9.0-preview] - 2021-03-17
### Major Changes
#### com.unity.ml-agents (C#)
- The `BufferSensor` and `BufferSensorComponent` have been added. They allow the Agent to observe a variable number of entities. (#4909)
- The `BufferSensor` and `BufferSensorComponent` have been added. They allow the Agent to observe a variable number of entities. For an example, see the [Sorter environment](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Examples.md#sorter). (#4909)
end episodes in groups. (#4923)
end episodes in groups. For examples, see the [Cooperative Push Block](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Examples.md#cooperative-push-block), [Dungeon Escape](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Examples.md#dungeon-escape) and [Soccer](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Examples.md#soccer-twos) environments. (#4923)
#### ml-agents / ml-agents-envs / gym-unity (Python)
- The MA-POCA trainer has been added. This is a new trainer that enables Agents to learn how to work together in groups. Configure
`poca` as the trainer in the configuration YAML after instantiating a `SimpleMultiAgentGroup` to use this feature. (#5005)
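To illustrate the group APIs these notes name, here is a minimal C# sketch, assuming the `SimpleMultiAgentGroup` methods `RegisterAgent`, `AddGroupReward`, and `EndGroupEpisode` behave as the changelog describes; the controller class and its fields are hypothetical:

```csharp
using UnityEngine;
using Unity.MLAgents;

public class TeamController : MonoBehaviour
{
    public Agent[] teamAgents;        // assigned in the Inspector (hypothetical field)
    SimpleMultiAgentGroup m_Group;

    void Start()
    {
        m_Group = new SimpleMultiAgentGroup();
        foreach (var agent in teamAgents)
        {
            m_Group.RegisterAgent(agent);  // agents now share group rewards and episodes
        }
    }

    // Called by hypothetical game logic when the shared goal is reached.
    public void OnGoalReached()
    {
        m_Group.AddGroupReward(1f);   // reward the whole group
        m_Group.EndGroupEpisode();    // end the episode for every registered agent
    }
}
```

With the group in place, training it cooperatively is a matter of setting `poca` as the trainer in the behavior's YAML configuration, as the bullet above notes.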

- Updated com.unity.barracuda to 1.3.2-preview. (#5084)
- Make com.unity.modules.unityanalytics an optional dependency. (#5109)
- Added 3D Ball to the `com.unity.ml-agents` samples. (#5077)
#### ml-agents / ml-agents-envs / gym-unity (Python)
- The `encoding_size` setting for RewardSignals has been deprecated. Please use `network_settings` instead. (#4982)

com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (4 changed lines)


[unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
[unity inference engine]: https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html
[package manager documentation]: https://docs.unity3d.com/Manual/upm-ui-install.html
[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Installation.md
[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Installation.md
[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents.extensions
[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/com.unity.ml-agents.extensions

com.unity.ml-agents/Runtime/Academy.cs (4 changed lines)


* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
* https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/
* https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/docs/
*/
namespace Unity.MLAgents

/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_15_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{

com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (2 changed lines)


///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);

com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (2 changed lines)


///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndices">The indices of the masked actions.</param>
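To show the documented `branch` and `actionIndices` parameters in use, here is a minimal sketch of an agent masking one of its discrete actions; the wall check is hypothetical, and it assumes `IDiscreteActionMask.WriteMask(branch, actionIndices)` as referenced by `WriteDiscreteActionMask` above:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

public class MaskingAgent : Agent
{
    bool m_AtLeftWall;  // hypothetical game state

    public override void WriteDiscreteActionMask(IDiscreteActionMask actionMask)
    {
        if (m_AtLeftWall)
        {
            // Forbid action index 1 ("move left") on branch 0 for this step.
            actionMask.WriteMask(0, new[] { 1 });
        }
    }
}
```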

com.unity.ml-agents/Runtime/Agent.cs (26 changed lines)


/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design.md
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Readme.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Readme.md
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)
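A minimal sketch of the two reward methods documented above: `SetReward` replaces the current step's reward while `AddReward` accumulates onto it. The goal check and penalty values are hypothetical:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

public class RewardingAgent : Agent
{
    bool ReachedGoal() => false;  // hypothetical success condition

    public override void OnActionReceived(ActionBuffers actions)
    {
        AddReward(-0.001f);   // small per-step penalty to encourage fast solutions
        if (ReachedGoal())
        {
            SetReward(1f);    // overwrite the accumulated reward with the terminal one
            EndEpisode();
        }
    }
}
```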

/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>
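In that spirit, a minimal `Heuristic` sketch for manual control and debugging, assuming a single continuous action branch of size 2; the input-axis mapping is hypothetical:

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

public class HeuristicAgent : Agent
{
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var continuous = actionsOut.ContinuousActions;
        continuous[0] = Input.GetAxis("Horizontal");  // steer
        continuous[1] = Input.GetAxis("Vertical");    // throttle
    }
}
```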

/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{
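A minimal sketch of overriding `CollectObservations` with the `VectorSensor` parameter shown above; the tracked `target` is hypothetical:

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class ObservingAgent : Agent
{
    public Transform target;  // hypothetical object the agent tracks

    public override void CollectObservations(VectorSensor sensor)
    {
        // 3 floats for our position plus 3 for the offset to the target.
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(target.localPosition - transform.localPosition);
    }
}
```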

///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask)

///
/// For more information about implementing agent actions see [Agents - Actions].
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="actions">
/// Struct containing the buffers of actions to be executed at this step.

com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2 changed lines)


/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]
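For illustration, a sketch of setting up the recorder from code rather than the Inspector, assuming the component's `Record` and `DemonstrationName` fields; ordinarily you would simply add the component in the Editor:

```csharp
using UnityEngine;
using Unity.MLAgents.Demonstrations;

public class RecorderSetup : MonoBehaviour
{
    void Awake()
    {
        // DemonstrationRecorder requires an Agent on the same GameObject.
        var recorder = gameObject.AddComponent<DemonstrationRecorder>();
        recorder.Record = true;
        recorder.DemonstrationName = "MyDemo";  // hypothetical demonstration name
    }
}
```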

com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (2 changed lines)


///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndices">The indices of the masked actions.</param>

com.unity.ml-agents/Runtime/IMultiAgentGroup.cs (2 changed lines)


/// <summary>
/// Register agent to the MultiAgentGroup.
/// </summary>
/// <param name="agent">The Agent to register.</param>
/// <param name="agent">The Agent to unregister.</param>
void UnregisterAgent(Agent agent);
}
}

com.unity.ml-agents/Runtime/Sensors/BufferSensor.cs (7 changed lines)


            DimensionProperty.VariableSize,
            DimensionProperty.None
        };

        /// <summary>
        /// Creates the BufferSensor.
        /// </summary>
        /// <param name="maxNumberObs">The maximum number of observations to be appended to this BufferSensor.</param>
        /// <param name="obsSize">The size of each observation appended to the BufferSensor.</param>
        /// <param name="name">The name of the sensor.</param>
        public BufferSensor(int maxNumberObs, int obsSize, string name)
        {
            m_Name = name;
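A minimal sketch using the constructor documented above; `AppendObservation` is assumed to be the method for adding one entity's data, and the entity encoding is hypothetical:

```csharp
using Unity.MLAgents.Sensors;

public static class BufferSensorSketch
{
    public static BufferSensor MakeEntitySensor()
    {
        // Up to 20 entities, each encoded as 4 floats.
        var sensor = new BufferSensor(maxNumberObs: 20, obsSize: 4, name: "EntityBuffer");
        // One entity: x and z position plus two velocity components.
        sensor.AppendObservation(new float[] { 1.5f, -2f, 0.1f, 0f });
        return sensor;
    }
}
```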

com.unity.ml-agents/Runtime/SimpleMultiAgentGroup.cs (4 changed lines)


        readonly int m_Id = MultiAgentGroupIdCounter.GetGroupId();
        HashSet<Agent> m_Agents = new HashSet<Agent>();

        /// <summary>
        /// Disposes of the SimpleMultiAgentGroup.
        /// </summary>
        public virtual void Dispose()
        {
            while (m_Agents.Count > 0)

docs/Installation-Anaconda-Windows.md (8 changed lines)


the ml-agents Conda environment by typing `activate ml-agents`)_:
```sh
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_15 https://github.com/Unity-Technologies/ml-agents.git
```
The `--branch release_14` option will switch to the tag of the latest stable
The `--branch release_15` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.

connected to the Internet and then type in the Anaconda Prompt:
```console
python -m pip install mlagents==0.24.1
python -m pip install mlagents==0.25.0
```
This will complete the installation of all the required Python packages to run

this, you can try:
```console
python -m pip install mlagents==0.24.1 --no-cache-dir
python -m pip install mlagents==0.25.0 --no-cache-dir
```
The `--no-cache-dir` option tells pip to disable the cache.

docs/Installation.md (8 changed lines)


the repository if you would like to explore more examples.
```sh
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_15 https://github.com/Unity-Technologies/ml-agents.git
```
The `--branch release_14` option will switch to the tag of the latest stable
The `--branch release_15` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially unstable.
#### Advanced: Local Installation for Development

back, make sure to clone the `main` branch (by omitting `--branch release_14`
back, make sure to clone the `main` branch (by omitting `--branch release_15`
from the command above). See our
[Contributions Guidelines](../com.unity.ml-agents/CONTRIBUTING.md) for more
information on contributing to the ML-Agents Toolkit.

run from the command line:
```sh
python -m pip install mlagents==0.24.1
python -m pip install mlagents==0.25.0
```
Note that this will install `mlagents` from PyPi, _not_ from the cloned

docs/Learning-Environment-Design-Agents.md (7 changed lines)


- [Agent Properties](#agent-properties)
- [Destroying an Agent](#destroying-an-agent)
- [Defining Multi-agent Scenarios](#defining-multi-agent-scenarios)
- [Teams for Adversarial Scenarios](#teams-for-adversarial-scenarios)
- [Groups for Cooperative Scenarios](#groups-for-cooperative-scenarios)
- [Recording Demonstrations](#recording-demonstrations)
An agent is an entity that can observe its environment, decide on the best

configuring MA-POCA. When using MA-POCA, agents which are deactivated or removed from the Scene
during the episode will still learn to contribute to the group's long term rewards, even
if they are not active in the scene to experience them.
See the [Cooperative Push Block](Learning-Environment-Examples.md#cooperative-push-block) environment
for an example of how to use Multi Agent Groups, and the
[Dungeon Escape](Learning-Environment-Examples.md#dungeon-escape) environment for an example of
how the Multi Agent Group can be used with agents that are removed from the scene mid-episode.
**NOTE**: Groups differ from Teams (for competitive settings) in the following way: Agents
working together should be added to the same Group, while agents playing against each other

docs/ML-Agents-Overview.md (5 changed lines)


- [Recording Demonstrations](#recording-demonstrations)
- [Summary](#summary)
- [Training Methods: Environment-specific](#training-methods-environment-specific)
- [Training in Multi-Agent Environments with Self-Play](#training-in-multi-agent-environments-with-self-play)
- [Training in Competitive Multi-Agent Environments with Self-Play](#training-in-competitive-multi-agent-environments-with-self-play)
- [Training in Cooperative Multi-Agent Environments with MA-POCA](#training-in-cooperative-multi-agent-environments-with-ma-poca)
- [Solving Complex Tasks using Curriculum Learning](#solving-complex-tasks-using-curriculum-learning)
- [Training Robust Agents using Environment Parameter Randomization](#training-robust-agents-using-environment-parameter-randomization)
- [Model Types](#model-types)

MA-POCA can also be combined with self-play to train teams of agents to play against each other.
To learn more about enabling cooperative behaviors for agents in an ML-Agents environment,
check out [this page](Learning-Environment-Design-Agents.md#cooperative-scenarios).
check out [this page](Learning-Environment-Design-Agents.md#groups-for-cooperative-scenarios).
For further reading, MA-POCA builds on previous work in multi-agent cooperative learning
([Lowe et al.](https://arxiv.org/abs/1706.02275), [Foerster et al.](https://arxiv.org/pdf/1705.08926.pdf),

docs/Training-on-Amazon-Web-Service.md (2 changed lines)


2. Clone the ML-Agents repo and install the required Python packages
```sh
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_15 https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/ml-agents/
pip3 install -e .
```

docs/Training-on-Microsoft-Azure.md (2 changed lines)


instance, and set it as the working directory.
2. Install the required packages:
Torch: `pip3 install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html` and
MLAgents: `python -m pip install mlagents==0.24.1`
MLAgents: `python -m pip install mlagents==0.25.0`
## Testing

docs/Unity-Inference-Engine.md (4 changed lines)


loading expects certain conventions for constants and tensor names. While it is
possible to construct a model that follows these conventions, we don't provide
any additional help for this. More details can be found in
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
[BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
[BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_15_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
If you wish to run inference on an externally trained model, you should use
Barracuda directly, instead of trying to run it through ML-Agents.
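As a rough illustration of that advice, a sketch of driving Barracuda directly; the model asset reference and the tensor shape are hypothetical and must match your network:

```csharp
using Unity.Barracuda;
using UnityEngine;

public class BarracudaRunner : MonoBehaviour
{
    public NNModel modelAsset;  // externally trained model imported as an asset
    IWorker m_Worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        m_Worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    public float RunOnce(float[] input)
    {
        // Hypothetical 1 x N input tensor; shape must match the model.
        using (var tensor = new Tensor(1, input.Length, input))
        {
            m_Worker.Execute(tensor);
            return m_Worker.PeekOutput()[0];
        }
    }

    void OnDestroy()
    {
        m_Worker?.Dispose();
    }
}
```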

ml-agents-envs/README.md (2 changed lines)


Install the `mlagents_envs` package with:
```sh
python -m pip install mlagents_envs==0.24.1
python -m pip install mlagents_envs==0.25.0
```
## Usage & More Information

ml-agents/README.md (2 changed lines)


Install the `mlagents` package with:
```sh
python -m pip install mlagents==0.24.1
python -m pip install mlagents==0.25.0
```
## Usage & More Information

ml-agents/mlagents/trainers/tests/test_rl_trainer.py (32 changed lines)


import os
import unittest
from unittest import mock
import pytest
import mlagents.trainers.tests.mock_brain as mb

        for step in checkpoint_range
    ]
    mock_add_checkpoint.assert_has_calls(add_checkpoint_calls)

class RLTrainerWarningTest(unittest.TestCase):
    def test_warning_group_reward(self):
        with self.assertLogs("mlagents.trainers", level="WARN") as cm:
            rl_trainer = create_rl_trainer()
            # This one should warn
            trajectory = mb.make_fake_trajectory(
                length=10,
                observation_specs=create_observation_specs_with_shapes([(1,)]),
                max_step_complete=True,
                action_spec=ActionSpec.create_discrete((2,)),
                group_reward=1.0,
            )
            buff = trajectory.to_agentbuffer()
            rl_trainer._warn_if_group_reward(buff)
            assert len(cm.output) > 0
            len_of_first_warning = len(cm.output)
            rl_trainer = create_rl_trainer()
            # This one shouldn't
            trajectory = mb.make_fake_trajectory(
                length=10,
                observation_specs=create_observation_specs_with_shapes([(1,)]),
                max_step_complete=True,
                action_spec=ActionSpec.create_discrete((2,)),
            )
            buff = trajectory.to_agentbuffer()
            rl_trainer._warn_if_group_reward(buff)
            # Make sure warnings don't get bigger
            assert len(cm.output) == len_of_first_warning

ml-agents/mlagents/trainers/trainer/rl_trainer.py (3 changed lines)


        Warn if the trainer receives a Group Reward but isn't a multiagent trainer (e.g. POCA).
        """
        if not self._has_warned_group_rewards:
            group_reward = np.sum(buffer[BufferKey.GROUP_REWARD])
            if group_reward > 0.0:
            if np.any(buffer[BufferKey.GROUP_REWARD]):
                logger.warning(
                    "An agent received a Group Reward, but you are not using a multi-agent trainer. "
                    "Please use the POCA trainer for best results."
