
Merge remote-tracking branch 'origin/master' into develop-add-fire

# Conflicts:
#	ml-agents/mlagents/trainers/policy/tf_policy.py
Branch: develop/add-fire
Arthur Juliani, 5 years ago
Current commit: 89ad3020
383 files changed, with 4765 insertions and 8765 deletions
  1. .circleci/config.yml (54 lines changed)
  2. .gitignore (4 lines changed)
  3. .pylintrc (3 lines changed)
  4. .yamato/training-int-tests.yml (4 lines changed)
  5. Project/Assets/ML-Agents/Editor/Tests/StandaloneBuildTest.cs (10 lines changed)
  6. Project/Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBall.demo.meta (2 lines changed)
  7. Project/Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBallHard.demo.meta (2 lines changed)
  8. Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DAgent.cs (5 lines changed)
  9. Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DHardAgent.cs (5 lines changed)
  10. Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBall.nn (497 lines changed)
  11. Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHard.nn (603 lines changed)
  12. Project/Assets/ML-Agents/Examples/Basic/Demos/ExpertBasic.demo.meta (2 lines changed)
  13. Project/Assets/ML-Agents/Examples/Basic/Scripts/BasicController.cs (24 lines changed)
  14. Project/Assets/ML-Agents/Examples/Basic/Scripts/BasicSensorComponent.cs (7 lines changed)
  15. Project/Assets/ML-Agents/Examples/Basic/TFModels/Basic.nn (10 lines changed)
  16. Project/Assets/ML-Agents/Examples/Bouncer/Demos/ExpertBouncer.demo.meta (2 lines changed)
  17. Project/Assets/ML-Agents/Examples/Bouncer/Scripts/BouncerAgent.cs (5 lines changed)
  18. Project/Assets/ML-Agents/Examples/Bouncer/Scripts/BouncerTarget.cs (2 lines changed)
  19. Project/Assets/ML-Agents/Examples/Bouncer/TFModels/Bouncer.nn (154 lines changed)
  20. Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerDyn.demo.meta (2 lines changed)
  21. Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerSta.demo.meta (2 lines changed)
  22. Project/Assets/ML-Agents/Examples/Crawler/Scripts/CrawlerAgent.cs (6 lines changed)
  23. Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerDynamic.nn (1001 lines changed)
  24. Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerStatic.nn (1001 lines changed)
  25. Project/Assets/ML-Agents/Examples/FoodCollector/Demos/ExpertFood.demo.meta (2 lines changed)
  26. Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorAgent.cs (5 lines changed)
  27. Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorArea.cs (2 lines changed)
  28. Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorSettings.cs (4 lines changed)
  29. Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.nn (674 lines changed)
  30. Project/Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGrid.demo.meta (4 lines changed)
  31. Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridAgent.cs (12 lines changed)
  32. Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridArea.cs (5 lines changed)
  33. Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridSettings.cs (3 lines changed)
  34. Project/Assets/ML-Agents/Examples/GridWorld/TFModels/GridWorld.nn (1001 lines changed)
  35. Project/Assets/ML-Agents/Examples/Hallway/Demos/ExpertHallway.demo.meta (2 lines changed)
  36. Project/Assets/ML-Agents/Examples/Hallway/Scripts/HallwayAgent.cs (4 lines changed)
  37. Project/Assets/ML-Agents/Examples/Hallway/TFModels/Hallway.nn (1001 lines changed)
  38. Project/Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPush.demo.meta (2 lines changed)
  39. Project/Assets/ML-Agents/Examples/PushBlock/Scripts/PushAgentBasic.cs (7 lines changed)
  40. Project/Assets/ML-Agents/Examples/PushBlock/TFModels/PushBlock.nn (1001 lines changed)
  41. Project/Assets/ML-Agents/Examples/Pyramids/Demos/ExpertPyramid.demo.meta (2 lines changed)
  42. Project/Assets/ML-Agents/Examples/Pyramids/Scripts/PyramidAgent.cs (4 lines changed)
  43. Project/Assets/ML-Agents/Examples/Pyramids/Scripts/PyramidArea.cs (2 lines changed)
  44. Project/Assets/ML-Agents/Examples/Pyramids/TFModels/Pyramids.nn (1001 lines changed)
  45. Project/Assets/ML-Agents/Examples/Reacher/Demos/ExpertReacher.demo.meta (2 lines changed)
  46. Project/Assets/ML-Agents/Examples/Reacher/Scripts/ReacherAgent.cs (7 lines changed)
  47. Project/Assets/ML-Agents/Examples/Reacher/TFModels/Reacher.nn (570 lines changed)
  48. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/Area.cs (2 lines changed)
  49. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/CameraFollow.cs (2 lines changed)
  50. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/FlyCamera.cs (2 lines changed)
  51. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/GroundContact.cs (4 lines changed)
  52. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/JointDriveController.cs (4 lines changed)
  53. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/ModelOverrider.cs (14 lines changed)
  54. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/Monitor.cs (3 lines changed)
  55. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/ProjectSettingsOverrides.cs (5 lines changed)
  56. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/SensorBase.cs (4 lines changed)
  57. Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/TargetContact.cs (2 lines changed)
  58. Project/Assets/ML-Agents/Examples/Soccer/Scripts/AgentSoccer.cs (9 lines changed)
  59. Project/Assets/ML-Agents/Examples/Soccer/Scripts/SoccerFieldArea.cs (5 lines changed)
  60. Project/Assets/ML-Agents/Examples/Soccer/TFModels/SoccerTwos.nn (1001 lines changed)
  61. Project/Assets/ML-Agents/Examples/Startup/Scripts/Startup.cs (8 lines changed)
  62. Project/Assets/ML-Agents/Examples/Template/Scripts/TemplateAgent.cs (4 lines changed)
  63. Project/Assets/ML-Agents/Examples/Tennis/Demos/ExpertTennis.demo.meta (2 lines changed)
  64. Project/Assets/ML-Agents/Examples/Tennis/Scripts/TennisAgent.cs (5 lines changed)
  65. Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo.meta (2 lines changed)
  66. Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs (7 lines changed)
  67. Project/Assets/ML-Agents/Examples/Walker/TFModels/Walker.nn (1001 lines changed)
  68. Project/Assets/ML-Agents/Examples/WallJump/Scripts/WallJumpAgent.cs (5 lines changed)
  69. Project/Assets/ML-Agents/Examples/WallJump/TFModels/BigWallJump.nn (1001 lines changed)
  70. Project/Assets/ML-Agents/Examples/WallJump/TFModels/SmallWallJump.nn (1001 lines changed)
  71. Project/Assets/ML-Agents/Examples/Worm/Scripts/WormAgent.cs (17 lines changed)
  72. README.md (144 lines changed)
  73. SURVEY.md (6 lines changed)
  74. com.unity.ml-agents/CHANGELOG.md (153 lines changed)
  75. com.unity.ml-agents/CONTRIBUTING.md (76 lines changed)
  76. com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (121 lines changed)
  77. com.unity.ml-agents/Documentation~/filter.yml (10 lines changed)
  78. com.unity.ml-agents/Editor/AgentEditor.cs (2 lines changed)
  79. com.unity.ml-agents/Editor/BehaviorParametersEditor.cs (6 lines changed)
  80. com.unity.ml-agents/Editor/BrainParametersDrawer.cs (4 lines changed)
  81. com.unity.ml-agents/Editor/CameraSensorComponentEditor.cs (4 lines changed)
  82. com.unity.ml-agents/Editor/DemonstrationDrawer.cs (6 lines changed)
  83. com.unity.ml-agents/Editor/DemonstrationImporter.cs (6 lines changed)
  84. com.unity.ml-agents/Editor/EditorUtilities.cs (2 lines changed)
  85. com.unity.ml-agents/Editor/RayPerceptionSensorComponentBaseEditor.cs (4 lines changed)
  86. com.unity.ml-agents/Editor/RenderTextureSensorComponentEditor.cs (4 lines changed)
  87. com.unity.ml-agents/Runtime/Academy.cs (32 lines changed)
  88. com.unity.ml-agents/Runtime/Agent.cs (48 lines changed)
  89. com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs (10 lines changed)
  90. com.unity.ml-agents/Runtime/Communicator/ICommunicator.cs (7 lines changed)
  91. com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs (10 lines changed)
  92. com.unity.ml-agents/Runtime/Communicator/UnityRLCapabilities.cs (2 lines changed)
  93. com.unity.ml-agents/Runtime/Constants.cs (2 lines changed)
  94. com.unity.ml-agents/Runtime/DecisionRequester.cs (2 lines changed)
  95. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationMetaData.cs (3 lines changed)
  96. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (6 lines changed)
  97. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationSummary.cs (4 lines changed)
  98. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationWriter.cs (8 lines changed)
  99. com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (10 lines changed)
  100. com.unity.ml-agents/Runtime/EnvironmentParameters.cs (4 lines changed)

54
.circleci/config.yml


directory:
  type: string
  description: Local directory to use for publishing (e.g. ml-agents)
username:
  type: string
  description: pypi username
  default: mlagents
test_repository_args:
  type: string
  description: Optional override repository URL. Only use for tests, leave blank for "real"
  default: ""
docker:
  - image: circleci/python:3.6
steps:

command: |
  . venv/bin/activate
  cd << parameters.directory >>
  twine upload -u mlagents -p $PYPI_PASSWORD dist/*
  twine upload -u << parameters.username >> -p $PYPI_PASSWORD << parameters.test_repository_args >> dist/*
workflows:
  version: 2

pip_constraints: test_constraints_max_tf2_version.txt
- markdown_link_check
- pre-commit
# The first deploy jobs are the "real" ones that upload to pypi
only: /[0-9]+(\.[0-9]+)*(\.dev[0-9]+)*/
# Matches e.g. "release_123"
only: /^release_[0-9]+$/
branches:
  ignore: /.*/
- deploy:

tags:
  only: /[0-9]+(\.[0-9]+)*(\.dev[0-9]+)*/
  # Matches e.g. "release_123"
  only: /^release_[0-9]+$/
branches:
  ignore: /.*/
- deploy:

tags:
  only: /[0-9]+(\.[0-9]+)*(\.dev[0-9]+)*/
  # Matches e.g. "release_123"
  only: /^release_[0-9]+$/
branches:
  ignore: /.*/
# These deploy jobs upload to the pypi test repo. They have different tag triggers than the real ones.
- deploy:
    name: test deploy ml-agents-envs
    directory: ml-agents-envs
    username: mlagents-test
    test_repository_args: --repository-url https://test.pypi.org/legacy/
    filters:
      tags:
        # Matches e.g. "release_123_test456"
        only: /^release_[0-9]+_test[0-9]+$/
      branches:
        ignore: /.*/
- deploy:
    name: test deploy ml-agents
    directory: ml-agents
    username: mlagents-test
    test_repository_args: --repository-url https://test.pypi.org/legacy/
    filters:
      tags:
        # Matches e.g. "release_123_test456"
        only: /^release_[0-9]+_test[0-9]+$/
      branches:
        ignore: /.*/
- deploy:
    name: test deploy gym-unity
    directory: gym-unity
    username: mlagents-test
    test_repository_args: --repository-url https://test.pypi.org/legacy/
    filters:
      tags:
        # Matches e.g. "release_123_test456"
        only: /^release_[0-9]+_test[0-9]+$/
      branches:
        ignore: /.*/
nightly:
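
These tag filters do all of the routing: a plain version tag (e.g. 0.16.0) matches the pypi pattern, a `release_N` tag triggers the real deploys, and a `release_N_testM` tag triggers the test-repo deploys. As a quick sanity check, the same patterns can be exercised in C#. This is a standalone sketch, not part of the repo, and it anchors the patterns on the assumption that CircleCI matches filters against the entire tag name:

```csharp
using System;
using System.Text.RegularExpressions;

class TagFilterDemo
{
    static void Main()
    {
        // Patterns copied from the config above; the sample tags are made up.
        var pypiVersion = new Regex(@"^[0-9]+(\.[0-9]+)*(\.dev[0-9]+)*$");
        var realRelease = new Regex(@"^release_[0-9]+$");
        var testRelease = new Regex(@"^release_[0-9]+_test[0-9]+$");

        foreach (var tag in new[] { "0.16.0", "0.16.0.dev3", "release_1", "release_1_test0" })
        {
            Console.WriteLine(
                $"{tag}: pypi={pypiVersion.IsMatch(tag)} " +
                $"real={realRelease.IsMatch(tag)} test={testRelease.IsMatch(tag)}");
        }
    }
}
```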

4
.gitignore


# Tensorflow Model Info
# Output Artifacts (Legacy)
# Output Artifacts
/results
# Training environments
/envs

3
.pylintrc


# Using the global statement
W0603,
# "Access to a protected member _foo of a client class (protected-access)"
W0212

4
.yamato/training-int-tests.yml


# Backwards-compatibility tests.
# If we make a breaking change to the communication protocol, these will need
# to be disabled until the next release.
# - python -u -m ml-agents.tests.yamato.training_int_tests --python=0.15.0
# - python -u -m ml-agents.tests.yamato.training_int_tests --csharp=0.15.0
- python -u -m ml-agents.tests.yamato.training_int_tests --python=0.16.0
- python -u -m ml-agents.tests.yamato.training_int_tests --csharp=1.0.0
dependencies:
- .yamato/standalone-build-test.yml#test_mac_standalone_{{ editor.version }}
triggers:

10
Project/Assets/ML-Agents/Editor/Tests/StandaloneBuildTest.cs


using UnityEngine;
using UnityEditor.Build.Reporting;
namespace MLAgents
namespace Unity.MLAgents
const string k_outputCommandLineFlag = "--mlagents-build-output-path";
const string k_sceneCommandLineFlag = "--mlagents-build-scene-path";
const string k_OutputCommandLineFlag = "--mlagents-build-output-path";
const string k_SceneCommandLineFlag = "--mlagents-build-scene-path";
public static void BuildStandalonePlayerOSX()
{

var args = Environment.GetCommandLineArgs();
for (var i = 0; i < args.Length - 1; i++)
{
if (args[i] == k_outputCommandLineFlag)
if (args[i] == k_OutputCommandLineFlag)
else if (args[i] == k_sceneCommandLineFlag)
else if (args[i] == k_SceneCommandLineFlag)
{
scenePath = args[i + 1];
}

2
Project/Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBall.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBall.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

2
Project/Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBallHard.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/3DBall/Demos/Expert3DBallHard.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

5
Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class Ball3DAgent : Agent
{

5
Project/Assets/ML-Agents/Examples/3DBall/Scripts/Ball3DHardAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class Ball3DHardAgent : Agent
{

497
Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBall.nn
File diff too large to display.

603
Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHard.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/Basic/Demos/ExpertBasic.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Basic/Demos/ExpertBasic.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

24
Project/Assets/ML-Agents/Examples/Basic/Scripts/BasicController.cs


using UnityEngine;
using UnityEngine.SceneManagement;
using MLAgents;
using Unity.MLAgents;
using UnityEngine.Serialization;
/// <summary>
/// An example of how to use ML-Agents without inheriting from the Agent class.

{
public float timeBetweenDecisionsAtInference;
float m_TimeSinceDecision;
[FormerlySerializedAs("m_Position")]
public int m_Position;
public int position;
const int k_SmallGoalPosition = 7;
const int k_LargeGoalPosition = 17;
public GameObject largeGoal;

public void OnEnable()
{
m_Agent = GetComponent<Agent>();
m_Position = 10;
transform.position = new Vector3(m_Position - 10f, 0f, 0f);
position = 10;
transform.position = new Vector3(position - 10f, 0f, 0f);
smallGoal.transform.position = new Vector3(k_SmallGoalPosition - 10f, 0f, 0f);
largeGoal.transform.position = new Vector3(k_LargeGoalPosition - 10f, 0f, 0f);
}

break;
}
m_Position += direction;
if (m_Position < k_MinPosition) { m_Position = k_MinPosition; }
if (m_Position > k_MaxPosition) { m_Position = k_MaxPosition; }
position += direction;
if (position < k_MinPosition) { position = k_MinPosition; }
if (position > k_MaxPosition) { position = k_MaxPosition; }
gameObject.transform.position = new Vector3(m_Position - 10f, 0f, 0f);
gameObject.transform.position = new Vector3(position - 10f, 0f, 0f);
if (m_Position == k_SmallGoalPosition)
if (position == k_SmallGoalPosition)
{
m_Agent.AddReward(0.1f);
m_Agent.EndEpisode();

if (m_Position == k_LargeGoalPosition)
if (position == k_LargeGoalPosition)
{
m_Agent.AddReward(1f);
m_Agent.EndEpisode();

public void ResetAgent()
{
{
// This is a very inefficient way to reset the scene. Used here for testing.
SceneManager.LoadScene(SceneManager.GetActiveScene().name);
m_Agent = null; // LoadScene only takes effect at the next Update.
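
The `m_Position` to `position` rename above is paired with a `[FormerlySerializedAs]` attribute so that data already saved in scenes and prefabs survives the rename. A minimal sketch of the pattern (the class name here is hypothetical):

```csharp
using UnityEngine;
using UnityEngine.Serialization;

public class RenameSafeComponent : MonoBehaviour
{
    // Values serialized under the old field name "m_Position" are loaded
    // into the renamed field instead of being silently reset to 0.
    [FormerlySerializedAs("m_Position")]
    public int position;
}
```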

7
Project/Assets/ML-Agents/Examples/Basic/Scripts/BasicSensorComponent.cs


using System;
using MLAgents.Sensors;
using UnityEngine.Serialization;
using Unity.MLAgents.Sensors;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// A simple example of a SensorComponent.

{
// One-hot encoding of the position
Array.Clear(output, 0, output.Length);
output[basicController.m_Position] = 1;
output[basicController.position] = 1;
}
/// <inheritdoc/>
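
The `Array.Clear` plus single-index write above is a one-hot encoding: the observation vector is all zeros except for a 1 at the controller's current position. A small standalone illustration of the same idea (names are hypothetical, not from the repo):

```csharp
static class OneHot
{
    // One-hot encoding: all zeros except a single 1 at the active index.
    // Encode(7, 20) -> [0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0]
    public static float[] Encode(int position, int size)
    {
        var output = new float[size]; // new float[] is zero-initialized
        output[position] = 1f;        // mark the current position
        return output;
    }
}
```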

10
Project/Assets/ML-Agents/Examples/Basic/TFModels/Basic.nn


(Binary Barracuda model data. The only readable fragments are graph node names such as vector_observation, action_masks, and policy/main_graph_0/hidden_0; the serialized weights are not human-readable and are omitted here.)

2
Project/Assets/ML-Agents/Examples/Bouncer/Demos/ExpertBouncer.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Bouncer/Demos/ExpertBouncer.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

5
Project/Assets/ML-Agents/Examples/Bouncer/Scripts/BouncerAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class BouncerAgent : Agent
{

2
Project/Assets/ML-Agents/Examples/Bouncer/Scripts/BouncerTarget.cs


using UnityEngine;
using MLAgents;
using Unity.MLAgents;
public class BouncerTarget : MonoBehaviour
{

154
Project/Assets/ML-Agents/Examples/Bouncer/TFModels/Bouncer.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerDyn.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerDyn.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

2
Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerSta.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerSta.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

6
Project/Assets/ML-Agents/Examples/Crawler/Scripts/CrawlerAgent.cs


using UnityEngine;
using MLAgents;
using MLAgentsExamples;
using MLAgents.Sensors;
using Unity.MLAgents;
using Unity.MLAgentsExamples;
using Unity.MLAgents.Sensors;
[RequireComponent(typeof(JointDriveController))] // Required to set joint forces
public class CrawlerAgent : Agent

1001
Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerDynamic.nn
File diff too large to display.

1001
Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerStatic.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/FoodCollector/Demos/ExpertFood.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/FoodCollector/Demos/ExpertFood.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

5
Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class FoodCollectorAgent : Agent
{

2
Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorArea.cs


using UnityEngine;
using MLAgentsExamples;
using Unity.MLAgentsExamples;
public class FoodCollectorArea : Area
{

4
Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorSettings.cs


using UnityEngine;
using UnityEngine.UI;
using MLAgents;
using Unity.MLAgents;
public class FoodCollectorSettings : MonoBehaviour
{

m_Recorder = Academy.Instance.StatsRecorder;
}
private void EnvironmentReset()
void EnvironmentReset()
{
ClearObjects(GameObject.FindGameObjectsWithTag("food"));
ClearObjects(GameObject.FindGameObjectsWithTag("badFood"));

674
Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.nn
File diff too large to display.

4
Project/Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGrid.demo.meta


fileFormatVersion: 2
guid: 3938e0ee1f99e473db8e45d334dfa329
guid: 0092f2e4aece345aea4730a37eeebf68
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

12
Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridAgent.cs


using System;
using UnityEngine;
using System.Linq;
using MLAgents;
using MLAgents.Sensors;
using Unity.MLAgents;
using MLAgents.SideChannels;
public class GridAgent : Agent
{

if (positionX == 0)
{
actionMasker.SetMask(0, new int[]{ k_Left});
actionMasker.SetMask(0, new []{ k_Left});
actionMasker.SetMask(0, new int[]{k_Right});
actionMasker.SetMask(0, new []{k_Right});
actionMasker.SetMask(0, new int[]{k_Down});
actionMasker.SetMask(0, new []{k_Down});
actionMasker.SetMask(0, new int[]{k_Up});
actionMasker.SetMask(0, new []{k_Up});
}
}
}
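
The `SetMask` calls above disable individual discrete actions for the current step, so the policy can never choose to move off the edge of the grid. A minimal sketch of the surrounding override, assuming the Release 1 `CollectDiscreteActionMasks` API used by this version of the code (constants and coordinates are illustrative):

```csharp
using Unity.MLAgents;

public class EdgeAwareGridAgent : Agent
{
    const int k_Up = 1, k_Down = 2, k_Left = 3, k_Right = 4;
    public int positionX; // hypothetical grid coordinate

    public override void CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)
    {
        // Branch 0 is the movement branch; a masked action index cannot
        // be sampled by the policy on this step.
        if (positionX == 0)
        {
            actionMasker.SetMask(0, new[] { k_Left });
        }
    }
}
```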

5
Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridArea.cs


using System.Collections.Generic;
using UnityEngine;
using System.Linq;
using MLAgents;
using MLAgents.SideChannels;
using Unity.MLAgents;
public class GridArea : MonoBehaviour

m_InitialPosition = transform.position;
}
private void SetEnvironment()
void SetEnvironment()
{
transform.position = m_InitialPosition * (m_ResetParams.GetWithDefault("gridSize", 5f) + 1);
var playersList = new List<int>();

3
Project/Assets/ML-Agents/Examples/GridWorld/Scripts/GridSettings.cs


using UnityEngine;
using MLAgents;
using MLAgents.SideChannels;
using Unity.MLAgents;
public class GridSettings : MonoBehaviour
{

1001
Project/Assets/ML-Agents/Examples/GridWorld/TFModels/GridWorld.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/Hallway/Demos/ExpertHallway.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Hallway/Demos/ExpertHallway.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

4
Project/Assets/ML-Agents/Examples/Hallway/Scripts/HallwayAgent.cs


using System.Collections;
using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class HallwayAgent : Agent
{

1001
Project/Assets/ML-Agents/Examples/Hallway/TFModels/Hallway.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPush.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPush.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

7
Project/Assets/ML-Agents/Examples/PushBlock/Scripts/PushAgentBasic.cs


using System.Collections;
using UnityEngine;
using MLAgents;
using MLAgents.SideChannels;
using Unity.MLAgents;
public class PushAgentBasic : Agent
{

/// </summary>
Renderer m_GroundRenderer;
private EnvironmentParameters m_ResetParams;
EnvironmentParameters m_ResetParams;
void Awake()
{

m_BlockRb.drag = m_ResetParams.GetWithDefault("block_drag", 0.5f);
}
private void SetResetParameters()
void SetResetParameters()
{
SetGroundMaterialFriction();
SetBlockProperties();

1001
Project/Assets/ML-Agents/Examples/PushBlock/TFModels/PushBlock.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/Pyramids/Demos/ExpertPyramid.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Pyramids/Demos/ExpertPyramid.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

4
Project/Assets/ML-Agents/Examples/Pyramids/Scripts/PyramidAgent.cs


using System.Linq;
using UnityEngine;
using Random = UnityEngine.Random;
using MLAgents;
using MLAgents.Sensors;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class PyramidAgent : Agent
{

2
Project/Assets/ML-Agents/Examples/Pyramids/Scripts/PyramidArea.cs


using UnityEngine;
using MLAgentsExamples;
using Unity.MLAgentsExamples;
public class PyramidArea : Area
{

1001
Project/Assets/ML-Agents/Examples/Pyramids/TFModels/Pyramids.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/Reacher/Demos/ExpertReacher.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Reacher/Demos/ExpertReacher.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

7
Project/Assets/ML-Agents/Examples/Reacher/Scripts/ReacherAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class ReacherAgent : Agent
{

// Frequency of the cosine deviation of the goal along the vertical dimension
float m_DeviationFreq;
private EnvironmentParameters m_ResetParams;
EnvironmentParameters m_ResetParams;
/// <summary>
/// Collect the rigidbodies of the reacher in order to reuse them for

570
Project/Assets/ML-Agents/Examples/Reacher/TFModels/Reacher.nn
File diff too large to display.

2
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/Area.cs


using UnityEngine;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
public class Area : MonoBehaviour
{

2
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/CameraFollow.cs


using UnityEngine;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
public class CameraFollow : MonoBehaviour
{

2
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/FlyCamera.cs


using UnityEngine;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
public class FlyCamera : MonoBehaviour
{

4
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/GroundContact.cs


using UnityEngine;
using MLAgents;
using Unity.MLAgents;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// This class contains logic for locomotion agents with joints which might make contact with the ground.

4
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/JointDriveController.cs


using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Serialization;
using MLAgents;
using Unity.MLAgents;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// Used to store relevant information for acting and learning for each body part of an agent.

14
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/ModelOverrider.cs


using UnityEngine;
using Unity.Barracuda;
using System.IO;
using MLAgents;
using MLAgents.Policies;
using Unity.MLAgents;
using Unity.MLAgents.Policies;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// Utility class to allow the NNModel file for an agent to be overridden during inference.

{
m_Agent.LazyInitialize();
var bp = m_Agent.GetComponent<BehaviorParameters>();
var name = bp.BehaviorName;
var behaviorName = bp.BehaviorName;
var nnModel = GetModelForBehaviorName(name);
Debug.Log($"Overriding behavior {name} for agent with model {nnModel?.name}");
var nnModel = GetModelForBehaviorName(behaviorName);
Debug.Log($"Overriding behavior {behaviorName} for agent with model {nnModel?.name}");
m_Agent.SetModel($"Override_{name}", nnModel);
m_Agent.SetModel($"Override_{behaviorName}", nnModel);
}
}
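
The override flow above boils down to: look up the agent's `BehaviorParameters`, pick a replacement `NNModel` by behavior name, and call `Agent.SetModel`. A condensed sketch of that sequence (the helper name is hypothetical):

```csharp
using Unity.Barracuda;
using Unity.MLAgents;
using Unity.MLAgents.Policies;

public static class OverrideExample
{
    // Swap the model an agent runs at inference time. The "Override_" prefix
    // mirrors the naming used above; the replacement asset comes from the caller.
    public static void Apply(Agent agent, NNModel replacement)
    {
        var bp = agent.GetComponent<BehaviorParameters>();
        agent.SetModel($"Override_{bp.BehaviorName}", replacement);
    }
}
```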

3
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/Monitor.cs


using System.Linq;
using UnityEngine;
namespace MLAgents
namespace Unity.MLAgents
{
/// <summary>
/// Monitor is used to display information about the Agent within the Unity

s_TransformCamera = new Dictionary<Transform, Camera>();
}
/// <inheritdoc/>
void OnGUI()
{
if (!s_Initialized)

5
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/ProjectSettingsOverrides.cs


using UnityEngine;
using MLAgents;
using MLAgents.SideChannels;
using Unity.MLAgents;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// A helper class for the ML-Agents example scenes to override various

4
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/SensorBase.cs


using MLAgents.Sensors;
using Unity.MLAgents.Sensors;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// A simple sensor that provides a number default implementations.

2
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/TargetContact.cs


using UnityEngine;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
{
/// <summary>
/// This class contains logic for locomotion agents with joints which might make contact with a target.

9
Project/Assets/ML-Agents/Examples/Soccer/Scripts/AgentSoccer.cs


using System;
using UnityEngine;
using MLAgents;
using MLAgents.Policies;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Policies;
public class AgentSoccer : Agent
{

float m_ForwardSpeed;
[HideInInspector]
public float timePenalty = 0;
public float timePenalty;
[HideInInspector]
public Rigidbody agentRb;

private EnvironmentParameters m_ResetParams;
EnvironmentParameters m_ResetParams;
public override void Initialize()
{

5
Project/Assets/ML-Agents/Examples/Soccer/Scripts/SoccerFieldArea.cs


using System.Collections;
using System.Collections.Generic;
using MLAgents;
using MLAgents.SideChannels;
using Unity.MLAgents;
using UnityEngine;
using UnityEngine.Serialization;

[HideInInspector]
public bool canResetBall;
private EnvironmentParameters m_ResetParams;
EnvironmentParameters m_ResetParams;
void Awake()
{

1001
Project/Assets/ML-Agents/Examples/Soccer/TFModels/SoccerTwos.nn
File diff too large to display.

8
Project/Assets/ML-Agents/Examples/Startup/Scripts/Startup.cs


using UnityEngine;
using UnityEngine.SceneManagement;
namespace MLAgentsExamples
namespace Unity.MLAgentsExamples
private const string k_SceneCommandLineFlag = "--mlagents-scene-name";
const string k_SceneCommandLineFlag = "--mlagents-scene-name";
// Check for the CLI '--mlagents-scene-name' flag. This will be used if
// no scene environment variable is found.
var args = Environment.GetCommandLineArgs();

{
sceneName = sceneEnvironmentVariable;
}
SwitchScene(sceneName);
}
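
Pieced together, the startup logic reads an optional scene name from a command-line flag and from an environment variable, with the environment variable taking precedence. A standalone sketch of that ordering (the environment-variable name here is an assumption, not taken from the repo):

```csharp
using System;

static class SceneSelection
{
    const string k_SceneCommandLineFlag = "--mlagents-scene-name";

    public static string PickScene(string defaultScene)
    {
        var sceneName = defaultScene;

        // CLI flag: used when no environment variable is found.
        var args = Environment.GetCommandLineArgs();
        for (var i = 0; i < args.Length - 1; i++)
        {
            if (args[i] == k_SceneCommandLineFlag)
            {
                sceneName = args[i + 1];
            }
        }

        // Environment variable wins when set. "SCENE_NAME" is an assumed name.
        var sceneEnvironmentVariable = Environment.GetEnvironmentVariable("SCENE_NAME");
        if (!string.IsNullOrEmpty(sceneEnvironmentVariable))
        {
            sceneName = sceneEnvironmentVariable;
        }

        return sceneName;
    }
}
```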

4
Project/Assets/ML-Agents/Examples/Template/Scripts/TemplateAgent.cs


using UnityEngine;
using MLAgents;
using MLAgents.Sensors;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class TemplateAgent : Agent
{

2
Project/Assets/ML-Agents/Examples/Tennis/Demos/ExpertTennis.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Tennis/Demos/ExpertTennis.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

5
Project/Assets/ML-Agents/Examples/Tennis/Scripts/TennisAgent.cs


using UnityEngine;
using UnityEngine.UI;
using MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class TennisAgent : Agent
{

2
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo.meta


fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo
externalObjects: {}
userData: ' (MLAgents.Demonstrations.DemonstrationSummary)'
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

7
Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs


using UnityEngine;
using MLAgents;
using MLAgentsExamples;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents;
using Unity.MLAgentsExamples;
using Unity.MLAgents.Sensors;
public class WalkerAgent : Agent
{

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/Walker.nn
File diff too large to display.

5
Project/Assets/ML-Agents/Examples/WallJump/Scripts/WallJumpAgent.cs


using System.Collections;
using UnityEngine;
using MLAgents;
using Unity.MLAgents;
using MLAgents.Sensors;
using MLAgents.SideChannels;
using Unity.MLAgents.Sensors;
public class WallJumpAgent : Agent
{

1001
Project/Assets/ML-Agents/Examples/WallJump/TFModels/BigWallJump.nn
File diff too large to display.

1001
Project/Assets/ML-Agents/Examples/WallJump/TFModels/SmallWallJump.nn
File diff too large to display.

17
Project/Assets/ML-Agents/Examples/Worm/Scripts/WormAgent.cs


using System.Collections;
using MLAgents;
using MLAgentsExamples;
using MLAgents.Sensors;
using Unity.MLAgents;
using Unity.MLAgentsExamples;
using Unity.MLAgents.Sensors;
[RequireComponent(typeof(JointDriveController))] // Required to set joint forces
public class WormAgent : Agent

public bool respawnTargetWhenTouched;
public float targetSpawnRadius;
[Header("Body Parts")] [Space(10)]
[Header("Body Parts")] [Space(10)]
[Header("Joint Settings")] [Space(10)]
[Header("Joint Settings")] [Space(10)]
JointDriveController m_JdController;
Vector3 m_DirToTarget;
float m_MovingTowardsDot;

m_JdController.SetupBodyPart(bodySegment1);
m_JdController.SetupBodyPart(bodySegment2);
m_JdController.SetupBodyPart(bodySegment3);
//We only want the head to detect the target
//So we need to remove TargetContact from everything else
//This is a temp fix till we can redesign

}
//Get Joint Rotation Relative to the Connected Rigidbody
//We want to collect this info because it is the actual rotation, not the "target rotation"
public Quaternion GetJointRotation(ConfigurableJoint joint)

sensor.AddObservation(bp.currentStrength / m_JdController.maxJointForceLimit);
}
}
public override void CollectObservations(VectorSensor sensor)
{
m_JdController.GetCurrentJointForces();
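
The comment above says the observation should use the joint's actual rotation relative to its connected rigidbody rather than the drive's target rotation. The repo's exact formula is not shown in this excerpt; a standard way to compute such a relative rotation is:

```csharp
using UnityEngine;

static class JointMath
{
    // Rotation of the joint's own body expressed in the connected body's
    // frame: q_rel = inverse(q_connected) * q_joint.
    public static Quaternion GetRelativeRotation(ConfigurableJoint joint)
    {
        return Quaternion.Inverse(joint.connectedBody.rotation) * joint.transform.rotation;
    }
}
```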

144
README.md


<img src="docs/images/unity-wide.png" align="middle" width="3000"/>
<img src="docs/images/image-banner.png" align="middle" width="3000"/>
# Unity ML-Agents Toolkit
<img src="docs/images/image-banner.png" align="middle" width="3000"/>
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_1_docs/docs/)
# Unity ML-Agents Toolkit (Beta)
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)
([latest release](https://github.com/Unity-Technologies/ml-agents/releases/tag/latest_release))

Unity plugin that enables games and simulations to serve as environments for
project that enables games and simulations to serve as environments for
training intelligent agents. Agents can be trained using reinforcement learning,
imitation learning, neuroevolution, or other machine learning methods through a
simple-to-use Python API. We also provide implementations (based on TensorFlow)

## Features
* Unity environment control from Python
* 15+ sample Unity environments
* Two deep reinforcement learning algorithms,
[Proximal Policy Optimization](docs/Training-PPO.md)
(PPO) and [Soft Actor-Critic](docs/Training-SAC.md)
(SAC)
* Support for multiple environment configurations and training scenarios
* Self-play mechanism for training agents in adversarial scenarios
* Train memory-enhanced agents using deep reinforcement learning
* Easily definable Curriculum Learning and Generalization scenarios
* Built-in support for [Imitation Learning](docs/Training-Imitation-Learning.md) through Behavioral Cloning or Generative Adversarial Imitation Learning
* Flexible agent control with On Demand Decision Making
* Visualizing network outputs within the environment
* Wrap learning environments as a gym
* Utilizes the Unity Inference Engine
* Train using concurrent Unity environment instances
- 15+ [example Unity environments](docs/Learning-Environment-Examples.md)
- Support for multiple environment configurations and training scenarios
- Flexible Unity SDK that can be integrated into your game or custom Unity scene
- Training using two deep reinforcement learning algorithms, Proximal Policy
Optimization (PPO) and Soft Actor-Critic (SAC)
- Built-in support for Imitation Learning through Behavioral Cloning or
Generative Adversarial Imitation Learning
- Self-play mechanism for training agents in adversarial scenarios
- Easily definable Curriculum Learning scenarios for complex tasks
- Train robust agents using environment randomization
- Flexible agent control with On Demand Decision Making
- Train using multiple concurrent Unity environment instances
- Utilizes the [Unity Inference Engine](docs/Unity-Inference-Engine.md) to
provide native cross-platform support
- Unity environment [control from Python](docs/Python-API.md)
- Wrap Unity learning environments as a [gym](gym-unity/README.md)
See our [ML-Agents Overview](docs/ML-Agents-Overview.md) page for detailed
descriptions of all these features.
**Our latest, stable release is 0.15.1. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Readme.md) to
**Our latest, stable release is `Release 1`. Click [here](docs/Readme.md) to
get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `master` branch which is under active

details of the changes between versions.
* If you have used an earlier version of the ML-Agents Toolkit, we strongly recommend our
[guide on migrating from earlier versions](docs/Migrating.md).
| **0.15.1** | **March 30, 2020** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/0.15.1)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/0.15.1/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/0.15.1.zip)** |
| **Release 1** | **April 30, 2020** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_1)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/release_1/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_1.zip)** |
| **0.15.1** | March 30, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.15.1) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.15.1/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.15.1.zip) |
| **0.15.0** | March 18, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.15.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.15.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.15.0.zip) |
| **0.14.1** | February 26, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.14.1) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.14.1/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.14.1.zip) |
| **0.14.0** | February 13, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.14.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.14.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.14.0.zip) |

| **0.12.0** | December 2, 2019 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.12.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.12.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.12.0.zip) |
| **0.11.0** | November 4, 2019 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.11.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.11.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.11.0.zip) |
If you are a researcher interested in a discussion of Unity as an AI platform, see a pre-print
of our [reference paper on Unity and the ML-Agents Toolkit](https://arxiv.org/abs/1809.02627).
If you are a researcher interested in a discussion of Unity as an AI platform,
see a pre-print of our
[reference paper on Unity and the ML-Agents Toolkit](https://arxiv.org/abs/1809.02627).
If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you cite the following
paper as a reference:
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. *arXiv preprint arXiv:1809.02627.* https://github.com/Unity-Technologies/ml-agents.
If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you
cite the following paper as a reference:
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D.
(2018). Unity: A General Platform for Intelligent Agents. _arXiv preprint
arXiv:1809.02627._ https://github.com/Unity-Technologies/ml-agents.
* (February 28, 2020) [Training intelligent adversaries using self-play with ML-Agents](https://blogs.unity3d.com/2020/02/28/training-intelligent-adversaries-using-self-play-with-ml-agents/)
* (November 11, 2019) [Training your agents 7 times faster with ML-Agents](https://blogs.unity3d.com/2019/11/11/training-your-agents-7-times-faster-with-ml-agents/)
* (October 21, 2019) [The AI@Unity interns help shape the world](https://blogs.unity3d.com/2019/10/21/the-aiunity-interns-help-shape-the-world/)
* (April 15, 2019) [Unity ML-Agents Toolkit v0.8: Faster training on real games](https://blogs.unity3d.com/2019/04/15/unity-ml-agents-toolkit-v0-8-faster-training-on-real-games/)
* (March 1, 2019) [Unity ML-Agents Toolkit v0.7: A leap towards cross-platform inference](https://blogs.unity3d.com/2019/03/01/unity-ml-agents-toolkit-v0-7-a-leap-towards-cross-platform-inference/)
* (December 17, 2018) [ML-Agents Toolkit v0.6: Improved usability of Brains and Imitation Learning](https://blogs.unity3d.com/2018/12/17/ml-agents-toolkit-v0-6-improved-usability-of-brains-and-imitation-learning/)
* (October 2, 2018) [Puppo, The Corgi: Cuteness Overload with the Unity ML-Agents Toolkit](https://blogs.unity3d.com/2018/10/02/puppo-the-corgi-cuteness-overload-with-the-unity-ml-agents-toolkit/)
* (September 11, 2018) [ML-Agents Toolkit v0.5, new resources for AI researchers available now](https://blogs.unity3d.com/2018/09/11/ml-agents-toolkit-v0-5-new-resources-for-ai-researchers-available-now/)
* (June 26, 2018) [Solving sparse-reward tasks with Curiosity](https://blogs.unity3d.com/2018/06/26/solving-sparse-reward-tasks-with-curiosity/)
* (June 19, 2018) [Unity ML-Agents Toolkit v0.4 and Udacity Deep Reinforcement Learning Nanodegree](https://blogs.unity3d.com/2018/06/19/unity-ml-agents-toolkit-v0-4-and-udacity-deep-reinforcement-learning-nanodegree/)
* (May 24, 2018) [Imitation Learning in Unity: The Workflow](https://blogs.unity3d.com/2018/05/24/imitation-learning-in-unity-the-workflow/)
* (March 15, 2018) [ML-Agents Toolkit v0.3 Beta released: Imitation Learning, feedback-driven features, and more](https://blogs.unity3d.com/2018/03/15/ml-agents-v0-3-beta-released-imitation-learning-feedback-driven-features-and-more/)
* (December 11, 2017) [Using Machine Learning Agents in a real game: a beginner’s guide](https://blogs.unity3d.com/2017/12/11/using-machine-learning-agents-in-a-real-game-a-beginners-guide/)
* (December 8, 2017) [Introducing ML-Agents Toolkit v0.2: Curriculum Learning, new environments, and more](https://blogs.unity3d.com/2017/12/08/introducing-ml-agents-v0-2-curriculum-learning-new-environments-and-more/)
* (September 19, 2017) [Introducing: Unity Machine Learning Agents Toolkit](https://blogs.unity3d.com/2017/09/19/introducing-unity-machine-learning-agents/)
* Overviewing reinforcement learning concepts
- (February 28, 2020)
[Training intelligent adversaries using self-play with ML-Agents](https://blogs.unity3d.com/2020/02/28/training-intelligent-adversaries-using-self-play-with-ml-agents/)
- (November 11, 2019)
[Training your agents 7 times faster with ML-Agents](https://blogs.unity3d.com/2019/11/11/training-your-agents-7-times-faster-with-ml-agents/)
- (October 21, 2019)
[The AI@Unity interns help shape the world](https://blogs.unity3d.com/2019/10/21/the-aiunity-interns-help-shape-the-world/)
- (April 15, 2019)
[Unity ML-Agents Toolkit v0.8: Faster training on real games](https://blogs.unity3d.com/2019/04/15/unity-ml-agents-toolkit-v0-8-faster-training-on-real-games/)
- (March 1, 2019)
[Unity ML-Agents Toolkit v0.7: A leap towards cross-platform inference](https://blogs.unity3d.com/2019/03/01/unity-ml-agents-toolkit-v0-7-a-leap-towards-cross-platform-inference/)
- (December 17, 2018)
[ML-Agents Toolkit v0.6: Improved usability of Brains and Imitation Learning](https://blogs.unity3d.com/2018/12/17/ml-agents-toolkit-v0-6-improved-usability-of-brains-and-imitation-learning/)
- (October 2, 2018)
[Puppo, The Corgi: Cuteness Overload with the Unity ML-Agents Toolkit](https://blogs.unity3d.com/2018/10/02/puppo-the-corgi-cuteness-overload-with-the-unity-ml-agents-toolkit/)
- (September 11, 2018)
[ML-Agents Toolkit v0.5, new resources for AI researchers available now](https://blogs.unity3d.com/2018/09/11/ml-agents-toolkit-v0-5-new-resources-for-ai-researchers-available-now/)
- (June 26, 2018)
[Solving sparse-reward tasks with Curiosity](https://blogs.unity3d.com/2018/06/26/solving-sparse-reward-tasks-with-curiosity/)
- (June 19, 2018)
[Unity ML-Agents Toolkit v0.4 and Udacity Deep Reinforcement Learning Nanodegree](https://blogs.unity3d.com/2018/06/19/unity-ml-agents-toolkit-v0-4-and-udacity-deep-reinforcement-learning-nanodegree/)
- (May 24, 2018)
[Imitation Learning in Unity: The Workflow](https://blogs.unity3d.com/2018/05/24/imitation-learning-in-unity-the-workflow/)
- (March 15, 2018)
[ML-Agents Toolkit v0.3 Beta released: Imitation Learning, feedback-driven features, and more](https://blogs.unity3d.com/2018/03/15/ml-agents-v0-3-beta-released-imitation-learning-feedback-driven-features-and-more/)
- (December 11, 2017)
[Using Machine Learning Agents in a real game: a beginner’s guide](https://blogs.unity3d.com/2017/12/11/using-machine-learning-agents-in-a-real-game-a-beginners-guide/)
- (December 8, 2017)
[Introducing ML-Agents Toolkit v0.2: Curriculum Learning, new environments, and more](https://blogs.unity3d.com/2017/12/08/introducing-ml-agents-v0-2-curriculum-learning-new-environments-and-more/)
- (September 19, 2017)
[Introducing: Unity Machine Learning Agents Toolkit](https://blogs.unity3d.com/2017/09/19/introducing-unity-machine-learning-agents/)
- Overviewing reinforcement learning concepts
In addition to our own documentation, here are some additional, relevant articles:
In addition to our own documentation, here are some additional, relevant
articles:
* [A Game Developer Learns Machine Learning](https://mikecann.co.uk/machine-learning/a-game-developer-learns-machine-learning-intent/)
* [Explore Unity Technologies ML-Agents Exclusively on Intel Architecture](https://software.intel.com/en-us/articles/explore-unity-technologies-ml-agents-exclusively-on-intel-architecture)
* [ML-Agents Penguins tutorial](https://learn.unity.com/project/ml-agents-penguins)
- [A Game Developer Learns Machine Learning](https://mikecann.co.uk/machine-learning/a-game-developer-learns-machine-learning-intent/)
- [Explore Unity Technologies ML-Agents Exclusively on Intel Architecture](https://software.intel.com/en-us/articles/explore-unity-technologies-ml-agents-exclusively-on-intel-architecture)
- [ML-Agents Penguins tutorial](https://learn.unity.com/project/ml-agents-penguins)
## Community and Feedback

For problems with the installation and setup of the ML-Agents Toolkit, or
discussions about how to best setup or train your agents, please create a new
thread on the [Unity ML-Agents forum](https://forum.unity.com/forums/ml-agents.453/)
and make sure to include as much detail as possible.
If you run into any other problems using the ML-Agents Toolkit, or have specific
feature requests, please [submit a GitHub issue](https://github.com/Unity-Technologies/ml-agents/issues).
thread on the
[Unity ML-Agents forum](https://forum.unity.com/forums/ml-agents.453/) and make
sure to include as much detail as possible. If you run into any other problems
using the ML-Agents Toolkit, or have specific feature requests, please
[submit a GitHub issue](https://github.com/Unity-Technologies/ml-agents/issues).
Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents
Toolkit can we continue to improve and grow. Please take a few minutes to
Your opinion matters a great deal to us. Only by hearing your thoughts on the
Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few
minutes to
For any other questions or feedback, connect directly with the ML-Agents
team at ml-agents@unity3d.com.
For any other questions or feedback, connect directly with the ML-Agents team at
ml-agents@unity3d.com.
## License

6
SURVEY.md


# Unity ML-Agents Toolkit Survey
Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to let us know about it.
Your opinion matters a great deal to us. Only by hearing your thoughts on the
Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few
minutes to let us know about it.
[Fill out the survey](https://goo.gl/forms/qFMYSYr5TlINvG6f1)

153
com.unity.ml-agents/CHANGELOG.md


### Major Changes
#### com.unity.ml-agents (C#)
#### ml-agents / ml-agents-envs / gym-unity (Python)
- `max_step` in the `TerminalStep` and `TerminalSteps` objects was renamed `interrupted`.
- Curriculum and Parameter Randomization configurations have been merged
into the main training configuration file. Note that this means training
configuration files are now environment-specific. (#3791)
- Training artifacts (trained models, summaries) are now found in the `results/`
directory. (#3829)
- Unity Player logs are now written out to the results directory. (#3877)
- Run configuration YAML files are written out to the results directory at the end of the run. (#3815)
## [1.0.0-preview] - 2020-05-06
## [1.0.0-preview] - 2020-04-30
- Added new 3-joint Worm ragdoll environment. (#3798)
- The `--load` and `--train` command-line flags have been deprecated. Training
now happens by default, and use `--resume` to resume training instead. (#3705)
- The Jupyter notebooks have been removed from the repository.
- Removed the multi-agent gym option from the gym wrapper. For multi-agent
scenarios, use the [Low Level Python API](../docs/Python-API.md).
- The low level Python API has changed. You can look at the document
[Low Level Python API documentation](../docs/Python-API.md) for more
information. If you use `mlagents-learn` for training, this should be a
transparent change.
- Added ability to start training (initialize model weights) from a previous run
ID. (#3710)
- The internal event `Academy.AgentSetStatus` was renamed to
`Academy.AgentPreStep` and made public.
- The offset logic was removed from DecisionRequester.
- The signature of `Agent.Heuristic()` was changed to take a `float[]` as a
#### com.unity.ml-agents (C#)
- The `MLAgents` C# namespace was renamed to `Unity.MLAgents`, and other nested
namespaces were similarly renamed. (#3843)
- The offset logic was removed from DecisionRequester. (#3716)
- The signature of `Agent.Heuristic()` was changed to take a float array as a
  parameter, instead of returning the array. This was done to prevent a common
  source of error where users would return arrays of the wrong size. (#3765)
  (A minimal sketch of the new signature appears after this changelog excerpt.)
communication between Unity and the Python process.
communication between Unity and the Python process. (#3760)
`AgentAction` and `AgentReset` have been removed.
- The GhostTrainer has been extended to support asymmetric games and the
asymmetric example environment Strikers Vs. Goalie has been added.
- The SideChannel API has changed (#3833, #3660) :
`AgentAction` and `AgentReset` have been removed. (#3770)
- The SideChannel API has changed:
channels.
- `EnvironmentParameters` replaces the default `FloatProperties`.
You can access the `EnvironmentParameters` with
`Academy.Instance.EnvironmentParameters` on C# and create an
`EnvironmentParametersChannel` on Python
channels. (#3807)
- `Academy.FloatProperties` was replaced by `Academy.EnvironmentParameters`.
See the [Migration Guide](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Migrating.md)
for more details on upgrading. (#3807)
which is used when trying to read more data than the message contains.
which is used when trying to read more data than the message contains. (#3751)
- Added a feature to allow sending stats from C# environments to TensorBoard
  (and other python StatsWriters). To do this from your code, use
  `Academy.Instance.StatsRecorder.Add(key, value)`. (#3660)
- `CameraSensorComponent.m_Grayscale` and
  `RenderTextureSensorComponent.m_Grayscale` were changed from `public` to
  `private`. These can still be accessed via their corresponding properties.
  (#3808)
- `WriteAdapter` was renamed to `ObservationWriter`. If you have a custom
  `ISensor` implementation, you will need to change the signature of its
  `Write()` method. (#3834)
- The Barracuda dependency was upgraded to 0.7.0-preview (which has breaking
  namespace and assembly name changes). (#3875)
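As a rough sketch of the new Python side of the `EnvironmentParameters` change
described above (the build path and the "gravity" parameter name are
hypothetical, for illustration only):

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

# Create the channel and register it with the environment at construction time.
channel = EnvironmentParametersChannel()
env = UnityEnvironment(
    file_name="path/to/UnityBuild",  # hypothetical path
    side_channels=[channel],
)

# Set a float parameter; the C# side can read it through
# Academy.Instance.EnvironmentParameters.
channel.set_float_parameter("gravity", 19.62)

env.reset()
env.close()
```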
#### ml-agents / ml-agents-envs / gym-unity (Python)
- The `--load` and `--train` command-line flags have been deprecated. Training
  now happens by default; use `--resume` to resume training instead of
  `--load`. (#3705)
- The Jupyter notebooks have been removed from the repository. (#3704)
- The multi-agent gym option was removed from the gym wrapper. For multi-agent
  scenarios, use the [Low Level Python API](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md). (#3681)
- The low level Python API has changed; see the
  [Low Level Python API documentation](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md)
  for more information, and the first sketch after this list. If you use
  `mlagents-learn` for training, this should be a transparent change. (#3681)
- Added ability to start training (initialize model weights) from a previous run
  ID. (#3710)
- The GhostTrainer has been extended to support asymmetric games and the
  asymmetric example environment Strikers Vs. Goalie has been added. (#3653)
- The `UnityEnv` class from the `gym-unity` package was renamed
  `UnityToGymWrapper` and no longer creates the `UnityEnvironment`. Instead, the
  `UnityEnvironment` must be passed as input to the constructor of
  `UnityToGymWrapper`; see the second sketch after this list. (#3812)
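As a rough sketch of a stepping loop under the changed low-level API (the build
path, behavior name, and continuous action size of 2 are assumptions for
illustration, not part of the release notes):

```python
import numpy as np

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/UnityBuild")  # hypothetical path
env.reset()
behavior_name = "MyBehavior?team=0"  # hypothetical behavior name

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # One row of actions per agent that requested a decision; a continuous
    # action size of 2 is assumed here.
    actions = np.zeros((len(decision_steps), 2), dtype=np.float32)
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```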
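And a minimal sketch of the renamed gym wrapper, which now receives an
already-constructed `UnityEnvironment` instead of creating one itself (the
build path is a hypothetical placeholder):

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# Construct the UnityEnvironment first; the wrapper no longer does this itself.
unity_env = UnityEnvironment(file_name="path/to/UnityBuild")  # hypothetical path
env = UnityToGymWrapper(unity_env)

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```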
### Minor Changes
#### com.unity.ml-agents (C#)
- `StackingSensor` was changed from `internal` visibility to `public`. (#3701)
- The internal event `Academy.AgentSetStatus` was renamed to
  `Academy.AgentPreStep` and made public. (#3716)
- The `Academy.InferenceSeed` property was added. This is used to initialize the
  random number generator in `ModelRunner`, and is incremented for each
  `ModelRunner`. (#3823)
- `Agent.GetObservations()` was added, which returns a read-only view of the
  observations added in `CollectObservations()`. (#3825)
- `UnityRLCapabilities` was added to help inform users when RL features are
  mismatched between C# and Python packages. (#3831)
#### ml-agents / ml-agents-envs / gym-unity (Python)
- The `mlagents` and `mlagents_envs` packages now enforce matching package
  version numbers. (#3758)
- When training in the Editor, a random port (instead of the default
  environment port) will be used. (#3673)
- Model updates can now happen asynchronously with environment steps for better
  performance. (#3690)
- `num_updates` and `train_interval` for SAC were replaced with
  `steps_per_update`. (#3690)
- The maximum compatible version of tensorflow was changed to allow tensorflow
  2.1 and 2.2. This will allow use with python 3.8 using tensorflow 2.2.0rc3.
  (#3830)
- `mlagents-learn` will no longer set the width and height of the executable
  window to 84x84 when no width or height arguments are given. (#3867)
### Bug Fixes
#### com.unity.ml-agents (C#)
- Fixed a display bug when viewing Demonstration files in the inspector. The
  shapes of the observations in the file now display correctly. (#3771)
#### ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed an issue where exceptions from environments provided a return code of 0.
  (#3680)
- Self-Play team changes will now trigger a full environment reset. This
  prevents trajectories in progress during a team change from getting into the
  buffer. (#3870)
## [0.15.1-preview] - 2020-03-30

76 com.unity.ml-agents/CONTRIBUTING.md


## Communication
First, please read through our
[code of conduct](https://github.com/Unity-Technologies/ml-agents/blob/master/CODE_OF_CONDUCT.md),
as we expect all our contributors to follow it.
Second, before starting on a project that you intend to contribute to the
ML-Agents Toolkit, we strongly recommend posting on our
[Issues page](https://github.com/Unity-Technologies/ml-agents/issues) and
briefly outlining the changes you plan to make. This will enable us to provide
some context that may be helpful for you. This could range from advice and
feedback on how to optimally perform your changes to reasons for not doing it.
Lastly, if you're looking for input on what to contribute, feel free to reach
out to us directly at ml-agents@unity3d.com and/or browse the GitHub issues with
the `contributions welcome` label.
The master branch corresponds to the most recent version of the project. Note
that this may be newer than the
[latest release](https://github.com/Unity-Technologies/ml-agents/releases/tag/latest_release).

When contributing to the project, please make sure that your Pull Request (PR)
contains the following:

- Detailed description of the changes performed
- Corresponding changes to documentation, unit tests and sample environments (if
  applicable)
- Summary of the tests performed to validate your changes
- Issue numbers that the PR resolves (if any)
We are also actively open to adding community contributed environments as
examples, as long as they are small, simple, demonstrate a unique feature of the
platform, and provide a unique non-trivial challenge to modern machine learning
algorithms. Feel free to submit these environments with a PR explaining the
nature of the environment and task.
Several static checks are run on the codebase using the
[pre-commit framework](https://pre-commit.com/) during CI. To execute the same
checks locally, install `pre-commit` and run `pre-commit run --all-files`. Some
hooks (for example, `black`) will output the corrected version of the code;
others (like `mypy`) may require more effort to fix.
All python code should be formatted with
[`black`](https://github.com/ambv/black). Style and formatting for C# may be
enforced later.