
Release 2 verified update docs (#4535)

/r2v-yamato-linux
GitHub · 4 years ago
Current commit: 56d07c4b
11 files changed, with 96 insertions and 28 deletions
  1. README.md (4 changes)
  2. com.unity.ml-agents/CHANGELOG.md (11 changes)
  3. com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (14 changes)
  4. com.unity.ml-agents/Runtime/Academy.cs (4 changes)
  5. com.unity.ml-agents/Runtime/Agent.cs (26 changes)
  6. com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs (39 changes)
  7. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2 changes)
  8. com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (2 changes)
  9. com.unity.ml-agents/Tests/Editor/Communicator/RpcCommunicatorTests.cs (16 changes)
  10. docs/Installation-Anaconda-Windows.md (4 changes)
  11. docs/Installation.md (2 changes)

README.md (4 changes)


# Unity ML-Agents Toolkit
-[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_2_docs/docs/)
+[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)

## Releases & Documentation
**Our latest, stable release is `Release 2`. Click
-[here](https://github.com/Unity-Technologies/ml-agents/tree/release_2_docs/docs/Readme.md)
+[here](https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs/docs/Readme.md)
to get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `master` branch which is

com.unity.ml-agents/CHANGELOG.md (11 changes)


and this project adheres to
[Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Minor Changes
#### com.unity.ml-agents (C#)
+- Update documentation with recommended version of Python trainer. (#4535)
+- Log a warning if a version of the Python trainer is used that is newer than expected. (#4535)
### Bug Fixes
#### com.unity.ml-agents (C#)
- Fixed a bug with visual observations using .onnx model files and newer versions of Barracuda (1.1.0 or later). (#4533)
## [1.0.5] - 2020-09-23
### Minor Changes
#### com.unity.ml-agents (C#)

com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (14 changes)


Manager documentation].
To install the companion Python package to enable training behaviors, follow the
-[installation instructions] on our [GitHub repository].
+[installation instructions] on our [GitHub repository]. It is strongly recommended that you
+use the Python package that corresponds to this release (version 0.16.1) for the best experience;
+versions between 0.16.1 and 0.20.0 are supported.
## Requirements

the documentation, you can check out our [GitHub Repository], which also includes
a number of ways to [connect with us] including our [ML-Agents Forum].
-[unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
+[unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs
-[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_1_docs/docs/Installation.md
-[github repository]: https://github.com/Unity-Technologies/ml-agents
-[python package]: https://github.com/Unity-Technologies/ml-agents
+[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Installation.md
+[github repository]: https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs
+[python package]: https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs
-[connect with us]: https://github.com/Unity-Technologies/ml-agents#community-and-feedback
+[connect with us]: https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs#community-and-feedback
[ml-agents forum]: https://forum.unity.com/forums/ml-agents.453/

com.unity.ml-agents/Runtime/Academy.cs (4 changes)


* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
-* https://github.com/Unity-Technologies/ml-agents/tree/release_2_docs/docs/
+* https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs/docs/
*/
namespace Unity.MLAgents

/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_2_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_2_verified_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{

com.unity.ml-agents/Runtime/Agent.cs (26 changes)


/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
-/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md
-/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design.md
+/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md
+/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design.md
-/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Readme.md
+/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)

/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
-/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
-/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>

/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
+/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{

///
/// See [Agents - Actions] for more information on masking actions.
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="OnActionReceived(float[])"/>
public virtual void CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)

///
/// For more information about implementing agent actions see [Agents - Actions].
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="vectorAction">
/// An array containing the action vector. The length of the array is specified

com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs (39 changes)


/// Responsible for communication with External using gRPC.
internal class RpcCommunicator : ICommunicator
{
+// The python package version must be >= s_MinSupportedPythonPackageVersion
+// and <= s_MaxSupportedPythonPackageVersion.
+static Version s_MinSupportedPythonPackageVersion = new Version("0.16.1");
+static Version s_MaxSupportedPythonPackageVersion = new Version("0.20.0");
public event QuitCommandHandler QuitCommandReceived;
public event ResetCommandHandler ResetCommandReceived;

return true;
}
+internal static bool CheckPythonPackageVersionIsCompatible(string pythonLibraryVersion)
+{
+    Version pythonVersion;
+    try
+    {
+        pythonVersion = new Version(pythonLibraryVersion);
+    }
+    catch
+    {
+        // Unparseable - this also catches things like "0.20.0-dev0" which we don't want to support
+        return false;
+    }
+    if (pythonVersion < s_MinSupportedPythonPackageVersion ||
+        pythonVersion > s_MaxSupportedPythonPackageVersion)
+    {
+        return false;
+    }
+    return true;
+}
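The version gate above can be sketched in Python for quick experimentation. This is an illustrative stand-in, not part of the ml-agents API: the function name, the `MIN_SUPPORTED`/`MAX_SUPPORTED` constants, and the strict `X.Y.Z` regex are assumptions that mirror the C# logic, where `new Version(...)` throws on suffixed strings like `"0.20.0-dev0"`.

```python
import re

# Supported range, mirroring the C# constants (bounds are inclusive).
MIN_SUPPORTED = (0, 16, 1)
MAX_SUPPORTED = (0, 20, 0)

# Strict X.Y.Z parse: anything else (dev/pre-release suffixes, prose) is rejected,
# matching the C# behavior where the Version constructor throws and we return false.
_STRICT_VERSION = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")


def check_python_package_version_is_compatible(version_str: str) -> bool:
    """Return True iff version_str parses as plain X.Y.Z and falls in the range."""
    match = _STRICT_VERSION.match(version_str)
    if match is None:
        return False
    version = tuple(int(part) for part in match.groups())
    # Tuple comparison gives the usual major/minor/patch ordering.
    return MIN_SUPPORTED <= version <= MAX_SUPPORTED
```

Comparing `(major, minor, patch)` tuples lexicographically is what makes `0.17.17` sort between `0.16.1` and `0.20.0` even though `17 > 1` in the patch position.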
/// <summary>
/// Sends the initialization parameters through the Communicator.
/// Is used by the academy to send initialization parameters to the communicator.

}
throw new UnityAgentsException("ICommunicator.Initialize() failed.");
}
+var packageVersionSupported = CheckPythonPackageVersionIsCompatible(pythonPackageVersion);
+if (!packageVersionSupported)
+{
+    Debug.LogWarningFormat(
+        "Python package version ({0}) is out of the supported range or not from an official release. " +
+        "It is strongly recommended that you use a Python package between {1} and {2}. " +
+        "Training will proceed, but the output format may be different.",
+        pythonPackageVersion,
+        s_MinSupportedPythonPackageVersion,
+        s_MaxSupportedPythonPackageVersion
+    );
+}
}
catch

com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2 changes)


/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]

com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (2 changes)


///
/// See [Agents - Actions] for more information on masking actions.
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndices">The indices of the masked actions.</param>

com.unity.ml-agents/Tests/Editor/Communicator/RpcCommunicatorTests.cs (16 changes)


pythonPackageVerStr));
}
+[Test]
+public void TestCheckPythonPackageVersionIsCompatible()
+{
+    Assert.IsFalse(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.13.37")); // too low
+    Assert.IsFalse(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.42.0")); // too high
+    // These are fine
+    Assert.IsTrue(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.16.1"));
+    Assert.IsTrue(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.17.17"));
+    Assert.IsTrue(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.20.0"));
+    // "dev" string or otherwise unparseable
+    Assert.IsFalse(RpcCommunicator.CheckPythonPackageVersionIsCompatible("0.17.0-dev0"));
+    Assert.IsFalse(RpcCommunicator.CheckPythonPackageVersionIsCompatible("oh point seventeen point oh"));
+}
}
}

docs/Installation-Anaconda-Windows.md (4 changes)


connected to the Internet and then type in the Anaconda Prompt:
```console
-pip install mlagents
+pip install mlagents==0.16.1
```
This will complete the installation of all the required Python packages to run

this, you can try:
```console
-pip install mlagents --no-cache-dir
+pip install mlagents==0.16.1 --no-cache-dir
```
The `--no-cache-dir` flag tells pip to disable its cache.

docs/Installation.md (2 changes)


run from the command line:
```sh
-pip3 install mlagents
+pip3 install mlagents==0.16.1
```
Note that this will install `mlagents` from PyPi, _not_ from the cloned
