
[Release 18] Update versions and links (#5414)

Branch: /release_18_branch
GitHub, 4 years ago
Current commit: 9354ca64
31 files changed, with 77 insertions and 88 deletions
  1. DevProject/Packages/packages-lock.json (2 changes)
  2. Project/Packages/packages-lock.json (2 changes)
  3. README.md (4 changes)
  4. colab/Colab_UnityEnvironment_1_Run.ipynb (4 changes)
  5. colab/Colab_UnityEnvironment_2_Train.ipynb (6 changes)
  6. colab/Colab_UnityEnvironment_3_SideChannel.ipynb (12 changes)
  7. com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (12 changes)
  8. com.unity.ml-agents.extensions/package.json (4 changes)
  9. com.unity.ml-agents/CHANGELOG.md (11 changes)
  10. com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (14 changes)
  11. com.unity.ml-agents/Runtime/Academy.cs (6 changes)
  12. com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (2 changes)
  13. com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (2 changes)
  14. com.unity.ml-agents/Runtime/Agent.cs (26 changes)
  15. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2 changes)
  16. com.unity.ml-agents/package.json (2 changes)
  17. docs/Installation-Anaconda-Windows.md (8 changes)
  18. docs/Installation.md (8 changes)
  19. docs/Learning-Environment-Design-Agents.md (2 changes)
  20. docs/Migrating.md (2 changes)
  21. docs/Readme.md (6 changes)
  22. docs/Training-on-Amazon-Web-Service.md (2 changes)
  23. docs/Training-on-Microsoft-Azure.md (2 changes)
  24. docs/Unity-Inference-Engine.md (4 changes)
  25. gym-unity/gym_unity/__init__.py (4 changes)
  26. ml-agents-envs/README.md (2 changes)
  27. ml-agents-envs/mlagents_envs/__init__.py (4 changes)
  28. ml-agents/README.md (2 changes)
  29. ml-agents/mlagents/trainers/__init__.py (4 changes)
  30. ml-agents/setup.py (2 changes)
  31. utils/validate_release_links.py (2 changes)

2
DevProject/Packages/packages-lock.json


"depth": 0,
"source": "local",
"dependencies": {
"com.unity.ml-agents": "2.0.0-exp.1",
"com.unity.ml-agents": "2.1.0-exp.1",
"com.unity.modules.physics": "1.0.0"
}
},

2
Project/Packages/packages-lock.json


"depth": 0,
"source": "local",
"dependencies": {
"com.unity.ml-agents": "2.0.0-exp.1",
"com.unity.ml-agents": "2.1.0-exp.1",
"com.unity.modules.physics": "1.0.0"
}
},

4
README.md


# Unity ML-Agents Toolkit
- [![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/)
+ [![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)

## Releases & Documentation
**Our latest, stable release is `Release 17`. Click
- [here](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/Readme.md)
+ [here](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/Readme.md)
to get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `main` branch which is

4
colab/Colab_UnityEnvironment_1_Run.ipynb


},
"source": [
"# ML-Agents Open a UnityEnvironment\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_1/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{

" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !pip install -q mlagents==0.25.1\n",
" !python -m pip install -q mlagents==0.27.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": null,

6
colab/Colab_UnityEnvironment_2_Train.ipynb


},
"source": [
"# ML-Agents Q-Learning with GridWorld\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_2/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{

" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !pip install -q mlagents==0.25.1\n",
" !python -m pip install -q mlagents==0.27.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": null,

"id": "pZhVRfdoyPmv"
},
"source": [
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_2/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"\n",
"The observation is an image obtained by a camera on top of the grid.\n",
"\n",

12
colab/Colab_UnityEnvironment_3_SideChannel.ipynb


},
"source": [
"# ML-Agents Use SideChannels\n",
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_1/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_18_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
]
},
{

" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !pip install -q mlagents==0.25.1\n",
" !python -m pip install -q mlagents==0.27.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": null,

"## Side Channel\n",
"\n",
"SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_1/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"\n",
"\n",
"\n"

},
"source": [
"### Engine Configuration SideChannel\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_1/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
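A short sketch of configuring the engine through this channel (the parameter values are illustrative; the keyword names follow the `mlagents_envs` 0.27.0 `EngineConfigurationChannel` API):

```python
from mlagents_envs.registry import default_registry
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

engine_channel = EngineConfigurationChannel()
env = default_registry["GridWorld"].make(side_channels=[engine_channel])

# Shrink the rendering window and speed up simulation time for faster training.
engine_channel.set_configuration_parameters(
    width=84,
    height=84,
    quality_level=1,
    time_scale=20.0,
    target_frame_rate=-1,  # -1 leaves the frame rate uncapped
)

env.reset()
env.close()
```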

},
"source": [
"### Environment Parameters Channel\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_1/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
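The equivalent sketch for the `EnvironmentParametersChannel` (the parameter key below is hypothetical; the Unity scene decides which keys it reads through `Academy.Instance.EnvironmentParameters`):

```python
from mlagents_envs.registry import default_registry
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

param_channel = EnvironmentParametersChannel()
env = default_registry["GridWorld"].make(side_channels=[param_channel])

# Keys are matched by name on the C# side; "num_obstacles" is only an example key.
param_channel.set_float_parameter("num_obstacles", 2.0)

env.reset()  # the new value is delivered with the next reset/step
env.close()
```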

},
"source": [
"### Creating your own Side Channels\n",
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_1/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
]
}
]
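For completeness, a sketch of a custom Python side channel in the style of the Custom-SideChannels guide (the class name and UUID are illustrative; the UUID must match the C# channel it talks to):

```python
import uuid

from mlagents_envs.side_channel.side_channel import (
    SideChannel,
    IncomingMessage,
    OutgoingMessage,
)


class StringLogChannel(SideChannel):
    """Toy channel that exchanges strings with a matching C# SideChannel."""

    def __init__(self) -> None:
        # The UUID below is illustrative; it must equal the one used on the C# side.
        super().__init__(uuid.UUID("621f0a70-4f87-11ea-a6bf-784f4387d1f7"))

    def on_message_received(self, msg: IncomingMessage) -> None:
        # Invoked whenever Unity sends data on this channel.
        print("From Unity:", msg.read_string())

    def send_string(self, data: str) -> None:
        msg = OutgoingMessage()
        msg.write_string(data)
        # Queued messages are flushed on the next env.step() or env.reset().
        super().queue_message_to_send(msg)
```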

12
com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md


recommended ways to install the package:
### Local Installation
- [Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
- [Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/Installation.md#advanced-local-installation-for-development-1)
+ [Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
+ [Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/Installation.md#advanced-local-installation-for-development-1)
- ![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/images/unity_package_manager_git_url.png)
+ ![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/images/unity_package_manager_git_url.png)
- git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_17
+ git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_18
- "com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_17",
+ "com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_18",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.

- No way to customize the action space of the `InputActuatorComponent`
## Need Help?
- The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/README.md) contains links for contacting the team or getting support.
+ The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/README.md) contains links for contacting the team or getting support.

4
com.unity.ml-agents.extensions/package.json


{
"name": "com.unity.ml-agents.extensions",
"displayName": "ML Agents Extensions",
"version": "0.4.0-preview",
"version": "0.5.0-preview",
"com.unity.ml-agents": "2.0.0-exp.1",
"com.unity.ml-agents": "2.1.0-exp.1",
"com.unity.modules.physics": "1.0.0"
}
}

11
com.unity.ml-agents/CHANGELOG.md


and this project adheres to
[Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Major Changes
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
#### ml-agents / ml-agents-envs / gym-unity (Python)
### Minor Changes
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
#### ml-agents / ml-agents-envs / gym-unity (Python)
### Bug Fixes
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
#### ml-agents / ml-agents-envs / gym-unity (Python)
## [2.1.0-exp.1] - 2021-06-09
### Minor Changes

14
com.unity.ml-agents/Documentation~/com.unity.ml-agents.md


In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".
- ![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/images/unity_package_manager_git_url.png)
+ ![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/images/unity_package_manager_git_url.png)
- git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents#release_17
+ git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents#release_18
- "com.unity.ml-agents": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents#release_17",
+ "com.unity.ml-agents": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents#release_18",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.

- [Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
- [Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/Installation.md#advanced-local-installation-for-development-1)
+ [Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
+ [Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/Installation.md#advanced-local-installation-for-development-1)
directions.
## Requirements

[unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
[unity inference engine]: https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html
[package manager documentation]: https://docs.unity3d.com/Manual/upm-ui-install.html
- [installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Installation.md
+ [installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Installation.md
- [ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/com.unity.ml-agents.extensions
+ [ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/com.unity.ml-agents.extensions

6
com.unity.ml-agents/Runtime/Academy.cs


* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
- * https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/docs/
+ * https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/docs/
*/
namespace Unity.MLAgents

/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_17_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_18_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{

/// Unity package version of com.unity.ml-agents.
/// This must match the version string in package.json and is checked in a unit test.
/// </summary>
internal const string k_PackageVersion = "2.0.0-exp.1";
internal const string k_PackageVersion = "2.1.0-exp.1";
const int k_EditorTrainingPort = 5004;

2
com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs


///
/// See [Agents - Actions] for more information on masking actions.
///
- /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#actions
+ /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);

2
com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs


///
/// See [Agents - Actions] for more information on masking actions.
///
- /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
+ /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action.</param>

26
com.unity.ml-agents/Runtime/Agent.cs


/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
- /// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md
- /// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design.md
+ /// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md
+ /// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design.md
- /// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Readme.md
+ /// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Readme.md
- [HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/" +
+ [HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
- /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#rewards
- /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+ /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#rewards
+ /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
- /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#rewards
- /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+ /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#rewards
+ /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)

/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
- /// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
- /// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#actions
+ /// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
+ /// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>

/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
- /// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
+ /// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{

///
/// See [Agents - Actions] for more information on masking actions.
///
- /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#actions
+ /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }

///
/// For more information about implementing agent actions see [Agents - Actions].
///
- /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Learning-Environment-Design-Agents.md#actions
+ /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="actions">
/// Struct containing the buffers of actions to be executed at this step.

2
com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs


/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
- /// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
+ /// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]

2
com.unity.ml-agents/package.json


{
"name": "com.unity.ml-agents",
"displayName": "ML Agents",
"version": "2.0.0-exp.1",
"version": "2.1.0-exp.1",
"unity": "2019.4",
"description": "Use state-of-the-art machine learning to create intelligent character behaviors in any Unity environment (games, robotics, film, etc.).",
"dependencies": {

8
docs/Installation-Anaconda-Windows.md


the ml-agents Conda environment by typing `activate ml-agents`)_:
```sh
- git clone --branch release_17 https://github.com/Unity-Technologies/ml-agents.git
+ git clone --branch release_18 https://github.com/Unity-Technologies/ml-agents.git
- The `--branch release_17` option will switch to the tag of the latest stable
+ The `--branch release_18` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.

connected to the Internet and then type in the Anaconda Prompt:
```console
- python -m pip install mlagents==0.26.0
+ python -m pip install mlagents==0.27.0
```
This will complete the installation of all the required Python packages to run

this, you can try:
```console
- python -m pip install mlagents==0.26.0 --no-cache-dir
+ python -m pip install mlagents==0.27.0 --no-cache-dir
```
This `--no-cache-dir` tells the pip to disable the cache.

8
docs/Installation.md


the repository if you would like to explore more examples.
```sh
- git clone --branch release_17 https://github.com/Unity-Technologies/ml-agents.git
+ git clone --branch release_18 https://github.com/Unity-Technologies/ml-agents.git
- The `--branch release_17` option will switch to the tag of the latest stable
+ The `--branch release_18` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially unstable.
#### Advanced: Local Installation for Development

- back, make sure to clone the `main` branch (by omitting `--branch release_17`
+ back, make sure to clone the `main` branch (by omitting `--branch release_18`
from the command above). See our
[Contributions Guidelines](../com.unity.ml-agents/CONTRIBUTING.md) for more
information on contributing to the ML-Agents Toolkit.

run from the command line:
```sh
- python -m pip install mlagents==0.26.0
+ python -m pip install mlagents==0.27.0
```
Note that this will install `mlagents` from PyPi, _not_ from the cloned

2
docs/Learning-Environment-Design-Agents.md


`GridSensorComponent` and the underlying `GridSensorBase` also provides interfaces
that can be overridden to collect customized observation from detected objects.
See the doc on
- [extending grid Sensors](https://github.com/Unity-Technologies/ml-agents/blob/release_17/com.unity.ml-agents.extensions/Documentation~/CustomGridSensors.md)
+ [extending grid Sensors](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/com.unity.ml-agents.extensions/Documentation~/CustomGridSensors.md)
for more details on custom grid sensors.
__Note__: The `GridSensor` only works in 3D environments and will not behave

2
docs/Migrating.md


- The Parameter Randomization feature has been merged with the Curriculum feature. It is now possible to specify a sampler
in the lesson of a Curriculum. Curriculum has been refactored and is now specified at the level of the parameter, not the
behavior. More information
- [here](https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Training-ML-Agents.md).(#4160)
+ [here](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Training-ML-Agents.md).(#4160)
### Steps to Migrate
- The configuration format for curriculum and parameter randomization has changed. To upgrade your configuration files,

6
docs/Readme.md


## Python Tutorial with Google Colab
- - [Using a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/main/colab/Colab_UnityEnvironment_1_Run.ipynb)
- - [Q-Learning with a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/main/colab/Colab_UnityEnvironment_2_Train.ipynb)
- - [Using Side Channels on a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/main/colab/Colab_UnityEnvironment_3_SideChannel.ipynb)
+ - [Using a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_18_docs/colab/Colab_UnityEnvironment_1_Run.ipynb)
+ - [Q-Learning with a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_18_docs/colab/Colab_UnityEnvironment_2_Train.ipynb)
+ - [Using Side Channels on a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_18_docs/colab/Colab_UnityEnvironment_3_SideChannel.ipynb)
## Help

2
docs/Training-on-Amazon-Web-Service.md


2. Clone the ML-Agents repo and install the required Python packages
```sh
- git clone --branch release_17 https://github.com/Unity-Technologies/ml-agents.git
+ git clone --branch release_18 https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/ml-agents/
pip3 install -e .
```

2
docs/Training-on-Microsoft-Azure.md


instance, and set it as the working directory.
2. Install the required packages:
Torch: `pip3 install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html` and
- MLAgents: `python -m pip install mlagents==0.26.0`
+ MLAgents: `python -m pip install mlagents==0.27.0`
## Testing

4
docs/Unity-Inference-Engine.md


loading expects certain conventions for constants and tensor names. While it is
possible to construct a model that follows these conventions, we don't provide
any additional help for this. More details can be found in
- [TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
+ [TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
- [BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
+ [BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
If you wish to run inference on an externally trained model, you should use
Barracuda directly, instead of trying to run it through ML-Agents.

4
gym-unity/gym_unity/__init__.py


# Version of the library that will be used to upload to pypi
__version__ = "0.27.0.dev0"
__version__ = "0.27.0"
__release_tag__ = None
__release_tag__ = "release_18"

2
ml-agents-envs/README.md


Install the `mlagents_envs` package with:
```sh
- python -m pip install mlagents_envs==0.26.0
+ python -m pip install mlagents_envs==0.27.0
```
## Usage & More Information

4
ml-agents-envs/mlagents_envs/__init__.py


# Version of the library that will be used to upload to pypi
__version__ = "0.27.0.dev0"
__version__ = "0.27.0"
__release_tag__ = None
__release_tag__ = "release_18"

2
ml-agents/README.md


Install the `mlagents` package with:
```sh
- python -m pip install mlagents==0.26.0
+ python -m pip install mlagents==0.27.0
```
## Usage & More Information

4
ml-agents/mlagents/trainers/__init__.py


# Version of the library that will be used to upload to pypi
__version__ = "0.27.0.dev0"
__version__ = "0.27.0"
__release_tag__ = None
__release_tag__ = "release_18"

2
ml-agents/setup.py


"protobuf>=3.6",
"pyyaml>=3.1.0",
# Windows ver. of PyTorch doesn't work from PyPi. Installation:
- # https://github.com/Unity-Technologies/ml-agents/blob/release_17_docs/docs/Installation.md#windows-installing-pytorch
+ # https://github.com/Unity-Technologies/ml-agents/blob/release_18_docs/docs/Installation.md#windows-installing-pytorch
# Torch only working on python 3.9 for 1.8.0 and above. Details see:
# https://github.com/pytorch/pytorch/issues/50014
"torch>=1.8.0,<1.9.0;(platform_system!='Windows' and python_version>='3.9')",

2
utils/validate_release_links.py


match = PIP_INSTALL_PATTERN.search(line)
if match is not None: # if there is a pip install line
package_name = match.group("package")
quiet_option = match.group("quiet")
quiet_option = match.group("quiet") or ""
replacement_version = (
f"python -m pip install {quiet_option}{package_name}=={package_verion}"
)
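The change above keeps the f-string from interpolating `None` when a pip install line carries no quiet flag. A self-contained sketch of the idea (the regex and helper below are illustrative, not the script's actual `PIP_INSTALL_PATTERN`):

```python
import re

# Illustrative pattern: an optional "-q " flag, the package name, and a pinned version.
PIP_INSTALL_PATTERN = re.compile(
    r"pip install (?P<quiet>-q )?(?P<package>mlagents(?:_envs)?)==\S+"
)


def replace_pip_version(line: str, package_version: str) -> str:
    """Rewrite a pip install line to pin the given release version."""
    match = PIP_INSTALL_PATTERN.search(line)
    if match is None:
        return line
    package_name = match.group("package")
    quiet_option = match.group("quiet") or ""  # the group is None when "-q" is absent
    return f"python -m pip install {quiet_option}{package_name}=={package_version}"


print(replace_pip_version("!pip install -q mlagents==0.25.1", "0.27.0"))
# python -m pip install -q mlagents==0.27.0
print(replace_pip_version("python -m pip install mlagents==0.26.0", "0.27.0"))
# python -m pip install mlagents==0.27.0
```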
