Browse code

Merge pull request #1764 from Unity-Technologies/release-v0.7

Release v0.7 into master
GitHub 5 years ago
Current commit
275ff5d6
253 files changed, with 3,123 insertions and 706 deletions
  1. 2
      UnitySDK/Assets/ML-Agents/Editor/AgentEditor.cs
  2. 3
      UnitySDK/Assets/ML-Agents/Editor/LearningBrainEditor.cs
  3. 5
      UnitySDK/Assets/ML-Agents/Editor/Tests/EditModeTestInternalBrainTensorGenerator.cs
  4. 2
      UnitySDK/Assets/ML-Agents/Examples/3DBall/Brains/3DBallHardLearning.asset
  5. 2
      UnitySDK/Assets/ML-Agents/Examples/3DBall/Brains/3DBallLearning.asset
  6. 2
      UnitySDK/Assets/ML-Agents/Examples/3DBall/Prefabs/Game.prefab
  7. 2
      UnitySDK/Assets/ML-Agents/Examples/3DBall/Prefabs/GameHard.prefab
  8. 2
      UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/Brains/BananaLearning.asset
  9. 10
      UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/Prefabs/RLArea.prefab
  10. 3
      UnitySDK/Assets/ML-Agents/Examples/Basic/Brains/BasicLearning.asset
  11. 2
      UnitySDK/Assets/ML-Agents/Examples/Basic/Prefabs/Basic.prefab
  12. 2
      UnitySDK/Assets/ML-Agents/Examples/Bouncer/Brains/BouncerLearning.asset
  13. 2
      UnitySDK/Assets/ML-Agents/Examples/Bouncer/Prefabs/Environment.prefab
  14. 40
      UnitySDK/Assets/ML-Agents/Examples/Bouncer/Scenes/Bouncer.unity
  15. 2
      UnitySDK/Assets/ML-Agents/Examples/Crawler/Brains/CrawlerDynamicLearning.asset
  16. 2
      UnitySDK/Assets/ML-Agents/Examples/Crawler/Brains/CrawlerStaticLearning.asset
  17. 2
      UnitySDK/Assets/ML-Agents/Examples/GridWorld/Brains/GridWorldLearning.asset
  18. 9
      UnitySDK/Assets/ML-Agents/Examples/GridWorld/Resources/agent.prefab
  19. 13
      UnitySDK/Assets/ML-Agents/Examples/GridWorld/Scenes/GridWorld.unity
  20. 2
      UnitySDK/Assets/ML-Agents/Examples/Hallway/Brains/HallwayLearning.asset
  21. 2
      UnitySDK/Assets/ML-Agents/Examples/Hallway/Prefabs/HallwayArea.prefab
  22. 2
      UnitySDK/Assets/ML-Agents/Examples/PushBlock/Brains/PushBlockLearning.asset
  23. 2
      UnitySDK/Assets/ML-Agents/Examples/PushBlock/Prefabs/PushBlockArea.prefab
  24. 2
      UnitySDK/Assets/ML-Agents/Examples/Pyramids/Brains/PyramidsLearning.asset
  25. 2
      UnitySDK/Assets/ML-Agents/Examples/Pyramids/Prefabs/AreaPB.prefab
  26. 2
      UnitySDK/Assets/ML-Agents/Examples/Reacher/Brains/ReacherLearning.asset
  27. 2
      UnitySDK/Assets/ML-Agents/Examples/Soccer/Brains/GoalieLearning.asset
  28. 2
      UnitySDK/Assets/ML-Agents/Examples/Soccer/Brains/StrikerLearning.asset
  29. 8
      UnitySDK/Assets/ML-Agents/Examples/Soccer/Prefabs/SoccerFieldTwos.prefab
  30. 124
      UnitySDK/Assets/ML-Agents/Examples/Template/Scene.unity
  31. 3
      UnitySDK/Assets/ML-Agents/Examples/Tennis/Brains/TennisLearning.asset
  32. 4
      UnitySDK/Assets/ML-Agents/Examples/Tennis/Prefabs/TennisArea.prefab
  33. 2
      UnitySDK/Assets/ML-Agents/Examples/Tennis/Scripts/TennisAgent.cs
  34. 2
      UnitySDK/Assets/ML-Agents/Examples/Walker/Brains/WalkerLearning.asset
  35. 2
      UnitySDK/Assets/ML-Agents/Examples/WallJump/Brains/BigWallJumpLearning.asset
  36. 2
      UnitySDK/Assets/ML-Agents/Examples/WallJump/Brains/SmallWallJumpLearning.asset
  37. 80
      UnitySDK/Assets/ML-Agents/Plugins/ProtoBuffer/Grpc.Core.dll.meta
  38. 28
      UnitySDK/Assets/ML-Agents/Scripts/Academy.cs
  39. 21
      UnitySDK/Assets/ML-Agents/Scripts/Agent.cs
  40. 6
      UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityToExternalGrpc.cs
  41. 40
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/ApplierImpl.cs
  42. 61
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/GeneratorImpl.cs
  43. 2
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/ModelParamLoader.cs
  44. 3
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorApplier.cs
  45. 4
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorGenerator.cs
  46. 4
      UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorNames.cs
  47. 124
      UnitySDK/Assets/ML-Agents/Scripts/LearningBrain.cs
  48. 20
      UnitySDK/Assets/ML-Agents/Scripts/RpcCommunicator.cs
  49. 1
      UnitySDK/Assets/ML-Agents/Scripts/SocketCommunicator.cs
  50. 6
      UnitySDK/ProjectSettings/ProjectSettings.asset
  51. 6
      docs/API-Reference.md
  52. 19
      docs/Background-TensorFlow.md
  53. 50
      docs/Basic-Guide.md
  54. 4
      docs/FAQ.md
  55. 16
      docs/Getting-Started-with-Balance-Ball.md
  56. 18
      docs/Installation-Windows.md
  57. 45
      docs/Learning-Environment-Create-New.md
  58. 4
      docs/Learning-Environment-Design-Academy.md
  59. 41
      docs/Learning-Environment-Design-Agents.md
  60. 10
      docs/Learning-Environment-Design-Learning-Brains.md
  61. 15
      docs/Learning-Environment-Design.md
  62. 4
      docs/Learning-Environment-Executable.md
  63. 4
      docs/ML-Agents-Overview.md
  64. 5
      docs/Readme.md
  65. 4
      docs/Training-Imitation-Learning.md
  66. 2
      docs/Training-ML-Agents.md
  67. 4
      docs/Training-on-Amazon-Web-Service.md
  68. 4
      docs/Training-on-Microsoft-Azure.md
  69. 2
      docs/Using-Docker.md
  70. 220
      gym-unity/README.md
  71. 84
      gym-unity/gym_unity/envs/unity_env.py
  72. 2
      gym-unity/setup.py
  73. 100
      gym-unity/tests/test_gym.py
  74. 2
      ml-agents/mlagents/envs/environment.py
  75. 3
      ml-agents/mlagents/envs/rpc_communicator.py
  76. 3
      ml-agents/mlagents/envs/socket_communicator.py
  77. 144
      ml-agents/mlagents/trainers/learn.py
  78. 2
      ml-agents/mlagents/trainers/models.py
  79. 12
      ml-agents/mlagents/trainers/policy.py
  80. 317
      ml-agents/mlagents/trainers/trainer_controller.py
  81. 5
      ml-agents/setup.py
  82. 2
      ml-agents/tests/mock_communicator.py
  83. 4
      ml-agents/tests/trainers/test_bc.py
  84. 4
      ml-agents/tests/trainers/test_ppo.py
  85. 358
      ml-agents/tests/trainers/test_trainer_controller.py
  86. 12
      protobuf-definitions/README.md
  87. 29
      UnitySDK/Assets/ML-Agents/Editor/NNModelImporter.cs
  88. 11
      UnitySDK/Assets/ML-Agents/Editor/NNModelImporter.cs.meta
  89. 11
      UnitySDK/Assets/ML-Agents/Editor/Tests/MultinomialTest.cs.meta
  90. 11
      UnitySDK/Assets/ML-Agents/Editor/Tests/RandomNormalTest.cs.meta
  91. 556
      UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHardLearning.nn
  92. 7
      UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHardLearning.nn.meta
  93. 511
      UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallLearning.nn
  94. 7
      UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallLearning.nn.meta
  95. 307
      UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/TFModels/BananaLearning.nn
  96. 7
      UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/TFModels/BananaLearning.nn.meta
  97. 12
      UnitySDK/Assets/ML-Agents/Examples/Basic/TFModels/BasicLearning.nn
  98. 7
      UnitySDK/Assets/ML-Agents/Examples/Basic/TFModels/BasicLearning.nn.meta
  99. 143
      UnitySDK/Assets/ML-Agents/Examples/Bouncer/TFModels/BouncerLearning.nn
  100. 7
      UnitySDK/Assets/ML-Agents/Examples/Bouncer/TFModels/BouncerLearning.nn.meta

2
UnitySDK/Assets/ML-Agents/Editor/AgentEditor.cs


EditorGUILayout.PropertyField(
actionsPerDecision,
new GUIContent(
"Decision Frequency",
"Decision Interval",
"The agent will automatically request a decision every X" +
" steps and perform an action at every step."));
actionsPerDecision.intValue = Mathf.Max(1, actionsPerDecision.intValue);

3
UnitySDK/Assets/ML-Agents/Editor/LearningBrainEditor.cs


public class LearningBrainEditor : BrainEditor
{
private const string ModelPropName = "model";
private const string InferenceDevicePropName = "inferenceDevice";
private const float TimeBetweenModelReloads = 2f;
// Time since the last reload of the model
private float _timeSinceModelReload;

serializedBrain.Update();
var tfGraphModel = serializedBrain.FindProperty(ModelPropName);
EditorGUILayout.ObjectField(tfGraphModel);
var inferenceDevice = serializedBrain.FindProperty(InferenceDevicePropName);
EditorGUILayout.PropertyField(inferenceDevice);
serializedBrain.ApplyModifiedProperties();
if (EditorGUI.EndChangeCheck())
{

5
UnitySDK/Assets/ML-Agents/Editor/Tests/EditModeTestInternalBrainTensorGenerator.cs


var inputTensor = new Tensor()
{
Shape = new long[] {2, 2},
ValueType = Tensor.TensorType.FloatingPoint
ValueType = Tensor.TensorType.Integer
};
var batchSize = 4;

Assert.Catch<NotImplementedException>(
() => generator.Generate(inputTensor, batchSize, agentInfos));
inputTensor.ValueType = Tensor.TensorType.Integer;
generator.Generate(inputTensor, batchSize, agentInfos);
Assert.IsNotNull(inputTensor.Data as int[,]);
Assert.AreEqual((inputTensor.Data as int[,])[0, 0], 1);

2
UnitySDK/Assets/ML-Agents/Examples/3DBall/Brains/3DBallHardLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 8a2da2218425f46e9921caefda4b7813, type: 3}
model: {fileID: 11400000, guid: 8be33caeca04d43498913448b5364f2b, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/3DBall/Brains/3DBallLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 9f58800fa9d54477aa01ee258842f6b3, type: 3}
model: {fileID: 11400000, guid: c282d4bbc4c8f4e78b2bb29eccd17557, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/3DBall/Prefabs/Game.prefab


m_Script: {fileID: 11500000, guid: aaba48bf82bee4751aa7b89569e57f73, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 97d8f9d40dc8c452f932f7caa9549c7d, type: 2}
brain: {fileID: 11400000, guid: 383c589e8bb76464eadc2525b5b0f2c1, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

2
UnitySDK/Assets/ML-Agents/Examples/3DBall/Prefabs/GameHard.prefab


m_Script: {fileID: 11500000, guid: edf26e11cf4ed42eaa3ffb7b91bb4676, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 55f48be32ac184c6ab67cf647100bac4, type: 2}
brain: {fileID: 11400000, guid: 4f74e089fbb75455ebf6f0495e30be6e, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

2
UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/Brains/BananaLearning.asset


-
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 69bd818d72b944849916d2fda9fe471b, type: 3}
model: {fileID: 11400000, guid: 9aed85b22394844eaa6db4d5e3c61adb, type: 3}

10
UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/Prefabs/RLArea.prefab


m_Script: {fileID: 11500000, guid: 700720465a0104fa586fa4a412b044f8, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dff7429d656234fed84c4fac2a7a683c, type: 2}
brain: {fileID: 11400000, guid: 9e7865ec29c894c2d8c1617b0fa392f9, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

m_Script: {fileID: 11500000, guid: 700720465a0104fa586fa4a412b044f8, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dff7429d656234fed84c4fac2a7a683c, type: 2}
brain: {fileID: 11400000, guid: 9e7865ec29c894c2d8c1617b0fa392f9, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

m_Script: {fileID: 11500000, guid: 700720465a0104fa586fa4a412b044f8, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dff7429d656234fed84c4fac2a7a683c, type: 2}
brain: {fileID: 11400000, guid: 9e7865ec29c894c2d8c1617b0fa392f9, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

m_Script: {fileID: 11500000, guid: 700720465a0104fa586fa4a412b044f8, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dff7429d656234fed84c4fac2a7a683c, type: 2}
brain: {fileID: 11400000, guid: 9e7865ec29c894c2d8c1617b0fa392f9, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

m_Script: {fileID: 11500000, guid: 700720465a0104fa586fa4a412b044f8, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dff7429d656234fed84c4fac2a7a683c, type: 2}
brain: {fileID: 11400000, guid: 9e7865ec29c894c2d8c1617b0fa392f9, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

3
UnitySDK/Assets/ML-Agents/Examples/Basic/Brains/BasicLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 503ce1e8257904bd0b5be8f7fb4b5d28, type: 3}
model: {fileID: 11400000, guid: b9b3600e7ab99422684e1f3bf597a456, type: 3}
inferenceDevice: 0

2
UnitySDK/Assets/ML-Agents/Examples/Basic/Prefabs/Basic.prefab


m_Script: {fileID: 11500000, guid: 624480a72e46148118ab2e2d89b537de, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 1adbe3db6a2f94bf2b1e22a29b955387, type: 2}
brain: {fileID: 11400000, guid: e5cf0e35e16264ea483f8863e5115c3c, type: 2}
agentParameters:
agentCameras: []
maxStep: 0

2
UnitySDK/Assets/ML-Agents/Examples/Bouncer/Brains/BouncerLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 760d2b8347b4b46e3a44d9b989e1304e, type: 3}
model: {fileID: 11400000, guid: 055df42a4cc114162939e523d053c4d7, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Bouncer/Prefabs/Environment.prefab


m_Script: {fileID: 11500000, guid: 0f09741cbce2e44bc88d3e92917eea0e, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 5527511df7b944e8e9177dd69db5a9c1, type: 2}
brain: {fileID: 11400000, guid: 573920e3a672d40038169c7ffdbdca05, type: 2}
agentParameters:
agentCameras: []
maxStep: 0

40
UnitySDK/Assets/ML-Agents/Examples/Bouncer/Scenes/Bouncer.unity


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.4482636, g: 0.49828887, b: 0.5755903, a: 1}
m_IndirectSpecularColor: {r: 0.44824862, g: 0.49827534, b: 0.57558274, a: 1}
--- !u!157 &3
LightmapSettings:
m_ObjectHideFlags: 0

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

objectReference: {fileID: 0}
- target: {fileID: 1397068878990112, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}
propertyPath: m_IsActive
value: 0
value: 1
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: e2c4e1ad4f2224d34bb09d20f26b3207, type: 2}

2
UnitySDK/Assets/ML-Agents/Examples/Crawler/Brains/CrawlerDynamicLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 9482a8782450a4d87b20942c4523176b, type: 3}
model: {fileID: 11400000, guid: 94d1889cee00f4361b74cbc4eb129f11, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Crawler/Brains/CrawlerStaticLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: e256bd37f98f246e5be72618766d0a93, type: 3}
model: {fileID: 11400000, guid: 9dbf8cc316ac9410b961ed268824778f, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/GridWorld/Brains/GridWorldLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 0fd168a0ea1d04ef9a68c80cf452ce3d, type: 3}
model: {fileID: 11400000, guid: 86e6c88c11c2a4a72bbe56a71c20bfff, type: 3}

9
UnitySDK/Assets/ML-Agents/Examples/GridWorld/Resources/agent.prefab


m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 4294967295
m_Materials:
- {fileID: 2100000, guid: 00d852aac9443402984416f9dbcd22ea, type: 2}
m_StaticBatchInfo:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 4294967295
m_Materials:
- {fileID: 2100000, guid: 00d852aac9443402984416f9dbcd22ea, type: 2}
m_StaticBatchInfo:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 4294967295
m_Materials:
- {fileID: 2100000, guid: 00d852aac9443402984416f9dbcd22ea, type: 2}
m_StaticBatchInfo:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 4294967295
m_Materials:
- {fileID: 2100000, guid: 00d852aac9443402984416f9dbcd22ea, type: 2}
m_StaticBatchInfo:

m_Script: {fileID: 11500000, guid: 857707f3f352541d5b858efca4479b95, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 0}
brain: {fileID: 11400000, guid: 2c1d51b7167874f31beda0b0cf0af468, type: 2}
agentParameters:
agentCameras:
- {fileID: 0}

numberOfActionsBetweenDecisions: 1
timeBetweenDecisionsAtInference: 0
timeBetweenDecisionsAtInference: 0.15
maskActions: 1

13
UnitySDK/Assets/ML-Agents/Examples/GridWorld/Scenes/GridWorld.unity


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.43668893, g: 0.4842832, b: 0.56452656, a: 1}
m_IndirectSpecularColor: {r: 0.43667555, g: 0.4842717, b: 0.56452394, a: 1}
--- !u!157 &3
LightmapSettings:
m_ObjectHideFlags: 0

objectReference: {fileID: 0}
- target: {fileID: 114143683117020968, guid: 628960e910f094ad1909ecc88cc8016d,
type: 2}
propertyPath: brain
value:
objectReference: {fileID: 11400000, guid: 8096722eb0a294871857e202e0032082,
type: 2}
- target: {fileID: 114143683117020968, guid: 628960e910f094ad1909ecc88cc8016d,
type: 2}
- target: {fileID: 114143683117020968, guid: 628960e910f094ad1909ecc88cc8016d,
type: 2}
propertyPath: timeBetweenDecisionsAtInference
value: 0.15
objectReference: {fileID: 0}
m_RemovedComponents: []
m_ParentPrefab: {fileID: 100100000, guid: 628960e910f094ad1909ecc88cc8016d, type: 2}
m_IsPrefabParent: 0

2
UnitySDK/Assets/ML-Agents/Examples/Hallway/Brains/HallwayLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 84588668e6ea948d3ab55bb813cc769b, type: 3}
model: {fileID: 11400000, guid: 184185c35f3b14a56946e95977be2904, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Hallway/Prefabs/HallwayArea.prefab


m_Script: {fileID: 11500000, guid: b446afae240924105b36d07e8d17a608, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 51f870f0190b643adae5432c0e6205e7, type: 2}
brain: {fileID: 11400000, guid: 533f2edd327794ca996d0320901b501c, type: 2}
agentParameters:
agentCameras: []
maxStep: 3000

2
UnitySDK/Assets/ML-Agents/Examples/PushBlock/Brains/PushBlockLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: e22850d2072904a0ab06069cda2599e5, type: 3}
model: {fileID: 11400000, guid: 16027726448534f92916ae237e0ba315, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/PushBlock/Prefabs/PushBlockArea.prefab


m_Script: {fileID: 11500000, guid: dea8c4f2604b947e6b7b97750dde87ca, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: dd07b1953eac4411b81fba032f394726, type: 2}
brain: {fileID: 11400000, guid: e8b2d719f6a324b1abb68d8cf2859f5c, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

2
UnitySDK/Assets/ML-Agents/Examples/Pyramids/Brains/PyramidsLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 7d1c7f27447234c3a81169de00dcaa8a, type: 3}
model: {fileID: 11400000, guid: 38874f53f06bd4f7782bc81d917a0442, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Pyramids/Prefabs/AreaPB.prefab


m_Script: {fileID: 11500000, guid: b8db44472779248d3be46895c4d562d5, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: d60466fdbfb194c56bdaf78887f2afc8, type: 2}
brain: {fileID: 11400000, guid: 7b7715ed1d436417db67026a47f17576, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

2
UnitySDK/Assets/ML-Agents/Examples/Reacher/Brains/ReacherLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 5fb4a3624e9ca4e1c81b51b5117cb31e, type: 3}
model: {fileID: 11400000, guid: 0c779bd93060f405cbe4446e1dcbf2a6, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Soccer/Brains/GoalieLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 890ab8f03425c4a80a52ba674ddec3f3, type: 3}
model: {fileID: 11400000, guid: b6dd703e0bf914268a4e110ad85ab7a9, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/Soccer/Brains/StrikerLearning.asset


vectorActionDescriptions:
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 23410257d39d44616bfefdff59c7fbc9, type: 3}
model: {fileID: 11400000, guid: 0df960ed78a964cd4a671395390ac522, type: 3}

8
UnitySDK/Assets/ML-Agents/Examples/Soccer/Prefabs/SoccerFieldTwos.prefab


m_Script: {fileID: 11500000, guid: 2a2688ef4a36349f9aa010020c32d198, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: b7b6884feff2f4a17a645d7e0b9dc8f3, type: 2}
brain: {fileID: 11400000, guid: 090fa5a8588f5433bb7f878e6f5ac954, type: 2}
agentParameters:
agentCameras: []
maxStep: 3000

m_Script: {fileID: 11500000, guid: 2a2688ef4a36349f9aa010020c32d198, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 0e2b949bf7d37469786426a6d913f5af, type: 2}
brain: {fileID: 11400000, guid: 29ed78b3e8fef4340b3a1f6954b88f18, type: 2}
agentParameters:
agentCameras: []
maxStep: 3000

m_Script: {fileID: 11500000, guid: 2a2688ef4a36349f9aa010020c32d198, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: b7b6884feff2f4a17a645d7e0b9dc8f3, type: 2}
brain: {fileID: 11400000, guid: 090fa5a8588f5433bb7f878e6f5ac954, type: 2}
agentParameters:
agentCameras: []
maxStep: 3000

m_Script: {fileID: 11500000, guid: 2a2688ef4a36349f9aa010020c32d198, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 0e2b949bf7d37469786426a6d913f5af, type: 2}
brain: {fileID: 11400000, guid: 29ed78b3e8fef4340b3a1f6954b88f18, type: 2}
agentParameters:
agentCameras: []
maxStep: 3000

124
UnitySDK/Assets/ML-Agents/Examples/Template/Scene.unity


m_PVRDirectSampleCount: 32
m_PVRSampleCount: 500
m_PVRBounces: 2
m_PVRFiltering: 0
m_PVRFilterTypeDirect: 0
m_PVRFilterTypeIndirect: 0
m_PVRFilterTypeAO: 0
m_PVRFilteringAtrousColorSigma: 1
m_PVRFilteringAtrousNormalSigma: 1
m_PVRFilteringAtrousPositionSigma: 1
m_PVRFilteringAtrousPositionSigmaDirect: 0.5
m_PVRFilteringAtrousPositionSigmaIndirect: 2
m_PVRFilteringAtrousPositionSigmaAO: 1
m_ShowResolutionOverlay: 1
m_LightingDataAsset: {fileID: 0}
m_UseShadowmask: 1
--- !u!196 &4

manualTileSize: 0
tileSize: 256
accuratePlacement: 0
debug:
m_Flags: 0
m_NavMeshData: {fileID: 0}
--- !u!1 &762086410
GameObject:

m_Lightmapping: 4
m_AreaSize: {x: 1, y: 1}
m_BounceIntensity: 1
m_FalloffTable:
m_Table[0]: 0
m_Table[1]: 0
m_Table[2]: 0
m_Table[3]: 0
m_Table[4]: 0
m_Table[5]: 0
m_Table[6]: 0
m_Table[7]: 0
m_Table[8]: 0
m_Table[9]: 0
m_Table[10]: 0
m_Table[11]: 0
m_Table[12]: 0
m_ColorTemperature: 6570
m_UseColorTemperature: 0
m_ShadowRadius: 0

m_Father: {fileID: 0}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 50, y: -30, z: 0}
--- !u!1 &846768603
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
serializedVersion: 5
m_Component:
- component: {fileID: 846768604}
- component: {fileID: 846768605}
m_Layer: 0
m_Name: Brain
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &846768604
Transform:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_GameObject: {fileID: 846768603}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1574236049}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!114 &846768605
MonoBehaviour:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_GameObject: {fileID: 846768603}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: c676a8ddf5a5f4f64b35e9ed5028679d, type: 3}
m_Name:
m_EditorClassIdentifier:
brainParameters:
stateSize: 12
actionSize: 4
memorySize: 0
cameraResolutions:
- width: 80
height: 80
blackAndWhite: 1
actionDescriptions:
- Up
- Down
- Left
- Right
actionSpaceType: 0
stateSpaceType: 1
brainType: 2
CoreBrains: []
CollectData: 0
instanceID: 0
--- !u!1 &1223085755
GameObject:
m_ObjectHideFlags: 0

m_Script: {fileID: 11500000, guid: 33bb739f1138d40798114d667776a1d6, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 846768605}
observations:
- {fileID: 1715640924}
maxStep: 100
resetOnDone: 1
id: 0
reward: 0
done: 0
value: 0
CummulativeReward: 0
stepCounter: 0
agentStoredAction: []
memory: []
brain: {fileID: 0}
agentParameters:
agentCameras: []
maxStep: 0
resetOnDone: 1
onDemandDecision: 0
numberOfActionsBetweenDecisions: 1
--- !u!4 &1223085757
Transform:
m_ObjectHideFlags: 0

m_Script: {fileID: 11500000, guid: 9af83cd96d4bc4088a966af174446d1b, type: 3}
m_Name:
m_EditorClassIdentifier:
broadcastHub:
broadcastingBrains: []
_brainsToControl: []
frameToSkip: 0
waitTime: 0
trainingConfiguration:
width: 80
height: 80

qualityLevel: 1
timeScale: 1
targetFrameRate: 60
defaultResetParameters: []
episodeCount: 0
done: 0
currentStep: 0
isInference: 0
windowResize: 0
resetParameters:
resetParameters: []
--- !u!4 &1574236049
Transform:
m_ObjectHideFlags: 0

m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0.71938086, y: 0.27357092, z: 4.1970553}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 846768604}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}

m_TargetEye: 3
m_HDR: 1
m_AllowMSAA: 1
m_AllowDynamicResolution: 0
m_StereoMirrorMode: 0
--- !u!4 &1715640925
Transform:
m_ObjectHideFlags: 0

3
UnitySDK/Assets/ML-Agents/Examples/Tennis/Brains/TennisLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 6d4281b70d41f48cb83d663b84f78c9a, type: 3}
model: {fileID: 11400000, guid: 0b22bc7e45e8347fd897bdf0e1d38c6e, type: 3}
inferenceDevice: 0

4
UnitySDK/Assets/ML-Agents/Examples/Tennis/Prefabs/TennisArea.prefab


m_Script: {fileID: 11500000, guid: e51a3fb0b3186433ea84fc1e0549cc91, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 6bf6a586a645b471bb9bd1194ae0e229, type: 2}
brain: {fileID: 11400000, guid: 1674996276be448c2ad51fb139e21e05, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

m_Script: {fileID: 11500000, guid: e51a3fb0b3186433ea84fc1e0549cc91, type: 3}
m_Name:
m_EditorClassIdentifier:
brain: {fileID: 11400000, guid: 6bf6a586a645b471bb9bd1194ae0e229, type: 2}
brain: {fileID: 11400000, guid: 1674996276be448c2ad51fb139e21e05, type: 2}
agentParameters:
agentCameras: []
maxStep: 5000

2
UnitySDK/Assets/ML-Agents/Examples/Tennis/Scripts/TennisAgent.cs


public override void InitializeAgent()
{
agentRb = GetComponent<Rigidbody>();
ballRb = GetComponent<Rigidbody>();
ballRb = ball.GetComponent<Rigidbody>();
var canvas = GameObject.Find(CanvasName);
GameObject scoreBoard;
if (invertX)

2
UnitySDK/Assets/ML-Agents/Examples/Walker/Brains/WalkerLearning.asset


-
-
vectorActionSpaceType: 1
model: {fileID: 4900000, guid: 48ab33cf9fbee4883948187618027835, type: 3}
model: {fileID: 11400000, guid: 097040deda0de41ddb3050c60d8cfc67, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/WallJump/Brains/BigWallJumpLearning.asset


-
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: c118879bb5db84f269e4da23ba8c4f61, type: 3}
model: {fileID: 11400000, guid: dc3a7a234db41444fb51386dafe44cbc, type: 3}

2
UnitySDK/Assets/ML-Agents/Examples/WallJump/Brains/SmallWallJumpLearning.asset


-
-
vectorActionSpaceType: 0
model: {fileID: 4900000, guid: 92cd96b2c34334db692e93af25b64d2a, type: 3}
model: {fileID: 11400000, guid: 7ed5da5ea3aa74cb4bf83bda60b1518e, type: 3}

80
UnitySDK/Assets/ML-Agents/Plugins/ProtoBuffer/Grpc.Core.dll.meta


isOverridable: 0
platformData:
- first:
'': Any
second:
enabled: 0
settings:
Exclude Android: 1
Exclude Editor: 0
Exclude Linux: 0
Exclude Linux64: 0
Exclude LinuxUniversal: 0
Exclude OSXUniversal: 0
Exclude Win: 0
Exclude Win64: 0
Exclude iOS: 1
- first:
Android: Android
second:
enabled: 0
settings:
CPU: ARMv7
- first:
enabled: 1
enabled: 0
enabled: 0
enabled: 1
CPU: AnyCPU
OS: AnyOS
- first:
Facebook: Win
second:
enabled: 0
settings:
CPU: AnyCPU
- first:
Facebook: Win64
second:
enabled: 0
settings:
CPU: AnyCPU
- first:
Standalone: Linux
second:
enabled: 1
settings:
CPU: x86
- first:
Standalone: Linux64
second:
enabled: 1
settings:
CPU: x86_64
- first:
Standalone: LinuxUniversal
second:
enabled: 1
settings: {}
- first:
Standalone: OSXUniversal
second:
enabled: 1
settings:
CPU: AnyCPU
- first:
Standalone: Win
second:
enabled: 1
settings:
CPU: AnyCPU
- first:
Standalone: Win64
second:
enabled: 1
settings:
CPU: AnyCPU
- first:
Windows Store Apps: WindowsStoreApps
second:

- first:
iPhone: iOS
second:
enabled: 0
settings:
CompileFlags:
FrameworkDependencies:
userData:
assetBundleName:
assetBundleVariant:

28
UnitySDK/Assets/ML-Agents/Scripts/Academy.cs


[SerializeField]
public BroadcastHub broadcastHub = new BroadcastHub();
private const string kApiVersion = "API-6";
private const string kApiVersion = "API-7";
/// Temporary storage for global gravity value
/// Used to restore original value when deriving Academy modifies it
private Vector3 originalGravity;
/// Temporary storage for global fixedDeltaTime value
/// Used to restore original value when deriving Academy modifies it
private float originalFixedDeltaTime;
/// Temporary storage for global maximumDeltaTime value
/// Used to restore original value when deriving Academy modifies it
private float originalMaximumDeltaTime;
// Fields provided in the Inspector

/// </summary>
private void InitializeEnvironment()
{
originalGravity = Physics.gravity;
originalFixedDeltaTime = Time.fixedDeltaTime;
originalMaximumDeltaTime = Time.maximumDeltaTime;
InitializeAcademy();
Communicator communicator = null;

void FixedUpdate()
{
EnvironmentStep();
}
/// <summary>
/// Cleanup function
/// </summary>
protected virtual void OnDestroy()
{
Physics.gravity = originalGravity;
Time.fixedDeltaTime = originalFixedDeltaTime;
Time.maximumDeltaTime = originalMaximumDeltaTime;
}
}
}
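
The hunks above cache the global Physics.gravity, Time.fixedDeltaTime and Time.maximumDeltaTime when the environment is initialized and restore them in a new OnDestroy, so an Academy subclass that tweaks these values for training no longer leaves them modified after play mode ends. A minimal sketch, assuming the 0.7-era API (Academy in the MLAgents namespace with a public virtual InitializeAcademy); the class name and values here are hypothetical, not part of the diff:

```csharp
using UnityEngine;
using MLAgents;

// Hypothetical derived Academy that changes global simulation settings.
// With the change above, the base Academy records the original gravity and
// delta-time values during initialization and restores them in OnDestroy().
public class FastPhysicsAcademy : Academy
{
    public override void InitializeAcademy()
    {
        // Speed up simulation for training; restored automatically on destroy.
        Time.fixedDeltaTime = 0.005f;
        Time.maximumDeltaTime = 0.05f;
        Physics.gravity = new Vector3(0f, -20f, 0f);
    }
}
```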

21
UnitySDK/Assets/ML-Agents/Scripts/Agent.cs


}
/// <summary>
/// Adds a float array observation to the vector observations of the agent.
/// Increases the size of the agents vector observation by size of array.
/// </summary>
/// <param name="observation">Observation.</param>
protected void AddVectorObs(float[] observation)
{
info.vectorObservation.AddRange(observation);
}
/// <summary>
/// Adds a float list observation to the vector observations of the agent.
/// Increases the size of the agents vector observation by size of list.
/// Adds a collection of float observations to the vector observations of the agent.
/// Increases the size of the agents vector observation by size of the collection.
protected void AddVectorObs(List<float> observation)
protected void AddVectorObs(IEnumerable<float> observation)
{
info.vectorObservation.AddRange(observation);
}

public void UpdateMemoriesAction(List<float> memories)
{
action.memories = memories;
}
public void AppendMemoriesAction(List<float> memories)
{
action.memories.AddRange(memories);
}
/// <summary>
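
The hunk above widens AddVectorObs from List<float> to IEnumerable<float> and adds UpdateMemoriesAction/AppendMemoriesAction for the recurrent appliers. A minimal sketch, assuming the 0.7-era Agent API (virtual CollectObservations/AgentAction), of how the broader overload can be used; the class and its fields are hypothetical:

```csharp
using System.Linq;
using UnityEngine;
using MLAgents;

public class SensorAgent : Agent
{
    public Transform[] targets;   // hypothetical field, for illustration only

    public override void CollectObservations()
    {
        AddVectorObs(transform.localPosition.x);          // single float (existing overload)
        AddVectorObs(new float[] { 0.5f, 1.0f });         // float[] (existing overload)
        AddVectorObs(targets.Select(t => t.position.y));  // any IEnumerable<float> now works
    }

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // Act on vectorAction here (omitted).
    }
}
```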

6
UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityToExternalGrpc.cs


// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: mlagents/envs/communicator_objects/unity_to_external.proto
// </auto-generated>
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
#pragma warning disable 1591
#region Designer generated code

}
}
#endregion
#endif

40
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/ApplierImpl.cs


using System.Collections.Generic;
using MLAgents.InferenceBrain.Utils;
using UnityEngine;
namespace MLAgents.InferenceBrain
{

public void Apply(Tensor tensor, Dictionary<Agent, AgentInfo> agentInfo)
{
var tensorDataAction = tensor.Data as float[,];
var actionSize = tensor.Shape[1];
var actionSize = tensor.Shape[tensor.Shape.Length - 1];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)
{

}
}
}
public class BarracudaMemoryOutputApplier : TensorApplier.Applier
{
private bool firstHalf = true;
public BarracudaMemoryOutputApplier(bool firstHalf)
{
this.firstHalf = firstHalf;
}
public void Apply(Tensor tensor, Dictionary<Agent, AgentInfo> agentInfo)
{
var tensorDataMemory = tensor.Data as float[,];
var agentIndex = 0;
var memorySize = tensor.Shape[tensor.Shape.Length - 1];
foreach (var agent in agentInfo.Keys)
{
var memory = new List<float>();
for (var j = 0; j < memorySize; j++)
{
memory.Add(tensorDataMemory[agentIndex, j]);
}
if (firstHalf)
{
agent.UpdateMemoriesAction(memory);
}
else
{
agent.AppendMemoriesAction(memory);
}
agentIndex++;
}
}
}
/// <summary>
/// The Applier for the Memory output tensor. Tensor is assumed to contain the new

{
var tensorDataMemory = tensor.Data as float[,];
var agentIndex = 0;
var memorySize = tensor.Shape[1];
var memorySize = tensor.Shape[tensor.Shape.Length - 1];
foreach (var agent in agentInfo.Keys)
{
var memory = new List<float>();

61
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/GeneratorImpl.cs


{
public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
var shapeSecondAxis = tensor.Shape[1];
var shapeSecondAxis = tensor.Shape[tensor.Shape.Length - 1];
tensor.Shape[0] = batchSize;
if (tensor.ValueType == Tensor.TensorType.FloatingPoint)
{

public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
tensor.Shape[0] = batchSize;
var vecObsSizeT = tensor.Shape[1];
var vecObsSizeT = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new float[batchSize, vecObsSizeT];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)

public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
tensor.Shape[0] = batchSize;
var memorySize = tensor.Shape[1];
var memorySize = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new float[batchSize, memorySize];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)

}
}
}
public class BarracudaRecurrentInputGenerator : TensorGenerator.Generator
{
private bool firstHalf = true;
public BarracudaRecurrentInputGenerator(bool firstHalf)
{
this.firstHalf = firstHalf;
}
public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
tensor.Shape[0] = batchSize;
var memorySize = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new float[batchSize, memorySize];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)
{
var memory = agentInfo[agent].memories;
int offset = 0;
if (!firstHalf)
{
offset = memory.Count - (int)memorySize;
}
if (memory == null)
{
agentIndex++;
continue;
}
for (var j = 0; j < memorySize; j++)
{
if (j >= memory.Count)
{
break;
}
tensor.Data.SetValue(memory[j + offset], new int[2] {agentIndex, j});
}
agentIndex++;
}
}
}
/// <summary>
/// Generates the Tensor corresponding to the Previous Action input : Will be a two

{
public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
if (tensor.ValueType != Tensor.TensorType.Integer)
{
throw new NotImplementedException(
"Previous Action Inputs are only valid for discrete control");
}
var actionSize = tensor.Shape[1];
var actionSize = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new int[batchSize, actionSize];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)

public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
tensor.Shape[0] = batchSize;
var maskSize = tensor.Shape[1];
var maskSize = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new float[batchSize, maskSize];
var agentIndex = 0;
foreach (var agent in agentInfo.Keys)

public void Generate(Tensor tensor, int batchSize, Dictionary<Agent, AgentInfo> agentInfo)
{
tensor.Shape[0] = batchSize;
var actionSize = tensor.Shape[1];
var actionSize = tensor.Shape[tensor.Shape.Length - 1];
tensor.Data = new float[batchSize, actionSize];
_randomNormal.FillTensor(tensor);
}
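
Several hunks above replace tensor.Shape[1] with tensor.Shape[tensor.Shape.Length - 1]. A small illustrative sketch (a hypothetical helper, not part of the diff) of why reading the last axis is the more robust choice:

```csharp
// Hypothetical helper: read the feature size from the last axis of a shape.
static long LastAxisSize(long[] shape)
{
    // Works for a rank-2 shape such as {batch, vectorObsSize} as well as a
    // rank-4 Barracuda-style shape such as {batch, 1, 1, vectorObsSize};
    // shape[1] would return 1 in the second case and size the buffer wrongly.
    return shape[shape.Length - 1];
}

// Usage (hypothetical values):
// LastAxisSize(new long[] {4, 8})        -> 8
// LastAxisSize(new long[] {4, 1, 1, 8})  -> 8
```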

2
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/ModelParamLoader.cs


Discrete,
Continuous
}
private const long ApiVersion = 1;
private const long ApiVersion = 2;
private TFSharpInferenceEngine _engine;
private BrainParameters _brainParameters;
private List<string> _failedModelChecks = new List<string>();

3
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorApplier.cs


bp.vectorActionSize, seed);
}
_dict[TensorNames.RecurrentOutput] = new MemoryOutputApplier();
_dict[TensorNames.RecurrentOutput_C] = new BarracudaMemoryOutputApplier(true);
_dict[TensorNames.RecurrentOutput_H] = new BarracudaMemoryOutputApplier(false);
}
/// <summary>

4
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorGenerator.cs


_dict[TensorNames.SequenceLengthPlaceholder] = new SequenceLengthGenerator();
_dict[TensorNames.VectorObservationPlacholder] = new VectorObservationGenerator();
_dict[TensorNames.RecurrentInPlaceholder] = new RecurrentInputGenerator();
_dict[TensorNames.RecurrentInPlaceholder_C] = new BarracudaRecurrentInputGenerator(true);
_dict[TensorNames.RecurrentInPlaceholder_H] = new BarracudaRecurrentInputGenerator(false);
_dict[TensorNames.PreviousActionPlaceholder] = new PreviousActionInputGenerator();
_dict[TensorNames.ActionMaskPlaceholder] = new ActionMaskInputGenerator();
_dict[TensorNames.RandomNormalEpsilonPlaceholder] = new RandomNormalInputGenerator(seed);
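
The _C/_H generators and appliers registered above treat the agent's single recurrent memory list as two halves, selected by the firstHalf flag. A hedged sketch of the convention this implies, assuming the memory list stores the cell state followed by the hidden state; the helper names are hypothetical and only illustrate the split/recombine logic:

```csharp
using System.Collections.Generic;
using System.Linq;

static class RecurrentMemoryExample
{
    // Reading: the "_c" generator (firstHalf = true) takes the first half of the
    // stored memories, the "_h" generator (firstHalf = false) takes the last half.
    public static List<float> ReadHalf(List<float> memories, bool firstHalf, int halfSize)
    {
        var offset = firstHalf ? 0 : memories.Count - halfSize;
        return memories.Skip(offset).Take(halfSize).ToList();
    }

    // Writing: the "_c" applier replaces the list (UpdateMemoriesAction) and the
    // "_h" applier extends it (AppendMemoriesAction), preserving cell-then-hidden
    // order for the next step.
    public static List<float> WriteHalves(List<float> cell, List<float> hidden)
    {
        var memories = new List<float>(cell);   // Update: replace
        memories.AddRange(hidden);               // Append: extend
        return memories;
    }
}
```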

4
UnitySDK/Assets/ML-Agents/Scripts/InferenceBrain/TensorNames.cs


public const string SequenceLengthPlaceholder = "sequence_length";
public const string VectorObservationPlacholder = "vector_observation";
public const string RecurrentInPlaceholder = "recurrent_in";
public const string RecurrentInPlaceholder_H = "recurrent_in_h";
public const string RecurrentInPlaceholder_C = "recurrent_in_c";
public const string VisualObservationPlaceholderPrefix = "visual_observation_";
public const string PreviousActionPlaceholder = "prev_action";
public const string ActionMaskPlaceholder = "action_masks";

public const string RecurrentOutput = "recurrent_out";
public const string RecurrentOutput_H = "recurrent_out_h";
public const string RecurrentOutput_C = "recurrent_out_c";
public const string MemorySize = "memory_size";
public const string VersionNumber = "version_number";
public const string IsContinuousControl = "is_continuous_control";

124
UnitySDK/Assets/ML-Agents/Scripts/LearningBrain.cs


using System;
#define ENABLE_BARRACUDA
using System;
using Barracuda;
using Tensor = MLAgents.InferenceBrain.Tensor;
public enum InferenceDevice
{
CPU = 0,
GPU = 1
}
/// <summary>
/// The Learning Brain works differently if you are training it or not.
/// When training your Agents, drag the Learning Brain to the Academy's BroadcastHub and check

private TensorGenerator _tensorGenerator;
private TensorApplier _tensorApplier;
#if ENABLE_TENSORFLOW
private ModelParamLoader _modelParamLoader;
#endif
#if ENABLE_TENSORFLOW
private ModelParamLoader _modelParamLoader;
#elif ENABLE_BARRACUDA
public NNModel model;
private Model _barracudaModel;
private IWorker _engine;
private bool _verbose = false;
private BarracudaModelParamLoader _modelParamLoader;
private string[] _outputNames;
[Tooltip("Inference execution device. CPU is the fastest option for most of ML Agents models. " +
"(This field is not applicable for training).")]
public InferenceDevice inferenceDevice = InferenceDevice.CPU;
private IEnumerable<Tensor> _inferenceInputs;
private IEnumerable<Tensor> _inferenceOutputs;

_inferenceOutputs = _modelParamLoader.GetOutputTensors();
_tensorGenerator = new TensorGenerator(brainParameters, seed);
_tensorApplier = new TensorApplier(brainParameters, seed);
#elif ENABLE_BARRACUDA
if (model != null)
{
#if BARRACUDA_VERBOSE
_verbose = true;
#endif
D.logEnabled = _verbose;
// Cleanup previous instance
if (_engine != null)
_engine.Dispose();
_barracudaModel = ModelLoader.Load(model.Value);
var executionDevice = inferenceDevice == InferenceDevice.GPU
? BarracudaWorkerFactory.Type.ComputeFast
: BarracudaWorkerFactory.Type.CSharpFast;
_engine = BarracudaWorkerFactory.CreateWorker(executionDevice, _barracudaModel, _verbose);
}
else
{
_barracudaModel = null;
_engine = null;
}
_modelParamLoader = BarracudaModelParamLoader.GetLoaderAndCheck(_engine, _barracudaModel, brainParameters);
_inferenceInputs = _modelParamLoader.GetInputTensors();
_outputNames = _modelParamLoader.GetOutputNames();
_tensorGenerator = new TensorGenerator(brainParameters, seed);
_tensorApplier = new TensorApplier(brainParameters, seed);
#endif
}

{
#if ENABLE_TENSORFLOW
return (_modelParamLoader != null) ? _modelParamLoader.GetChecks() : new List<string>();
#elif ENABLE_BARRACUDA
return (_modelParamLoader != null) ? _modelParamLoader.GetChecks() : new List<string>();
#else
return new List<string>(){

// Update the outputs
_tensorApplier.ApplyTensors(_inferenceOutputs, agentInfos);
#elif ENABLE_BARRACUDA
if (_engine == null)
{
Debug.LogError($"No model was present for the Brain {name}.");
return;
}
// Prepare the input tensors to be fed into the engine
_tensorGenerator.GenerateTensors(_inferenceInputs, currentBatchSize, agentInfos);
var inputs = PrepareBarracudaInputs(_inferenceInputs);
// Execute the Model
Profiler.BeginSample($"MLAgents.{name}.ExecuteGraph");
_engine.Execute(inputs);
Profiler.EndSample();
_inferenceOutputs = FetchBarracudaOutputs(_outputNames);
CleanupBarracudaState(inputs);
// Update the outputs
_tensorApplier.ApplyTensors(_inferenceOutputs, agentInfos);
#else
if (agentInfos.Count > 0)
{

#endif
agentInfos.Clear();
}
#if ENABLE_BARRACUDA && !ENABLE_TENSORFLOW
protected Dictionary<string, Barracuda.Tensor> PrepareBarracudaInputs(IEnumerable<Tensor> infInputs)
{
var inputs = new Dictionary<string, Barracuda.Tensor>();
foreach (var inp in _inferenceInputs)
{
inputs[inp.Name] = BarracudaUtils.ToBarracuda(inp);
}
return inputs;
}
protected List<Tensor> FetchBarracudaOutputs(string[] names)
{
var outputs = new List<Tensor>();
foreach (var name in names)
{
var outp = _engine.Fetch(name);
outputs.Add(BarracudaUtils.FromBarracuda(outp, name));
outp.Dispose();
}
return outputs;
}
protected void CleanupBarracudaState(Dictionary<string, Barracuda.Tensor> inputs)
{
foreach (var key in inputs.Keys)
{
inputs[key].Dispose();
}
inputs.Clear();
}
public void OnDisable()
{
_engine?.Dispose();
}
#endif
}
}
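
Taken together, the LearningBrain hunks above add a Barracuda inference path alongside the TensorFlowSharp one. A paraphrased sketch of the per-decision flow, not the verbatim implementation; it assumes it sits inside LearningBrain next to the fields and helper methods shown in the diff:

```csharp
// Paraphrased sketch of the Barracuda path added above (hypothetical method name).
private void RunBarracudaInference(int currentBatchSize, Dictionary<Agent, AgentInfo> agentInfos)
{
    // 1. Fill the ML-Agents input tensors from the agents' observations.
    _tensorGenerator.GenerateTensors(_inferenceInputs, currentBatchSize, agentInfos);

    // 2. Convert them to Barracuda tensors and run the model.
    var inputs = PrepareBarracudaInputs(_inferenceInputs);
    _engine.Execute(inputs);

    // 3. Read the named outputs back and write actions/memories to the agents.
    _inferenceOutputs = FetchBarracudaOutputs(_outputNames);
    _tensorApplier.ApplyTensors(_inferenceOutputs, agentInfos);

    // 4. Dispose the Barracuda input tensors created for this step.
    CleanupBarracudaState(inputs);
}
```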

20
UnitySDK/Assets/ML-Agents/Scripts/RpcCommunicator.cs


# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
#endif
using System.IO;
using System.Threading;
using System.Threading.Tasks;

/// If true, the communication is active.
bool m_isOpen;
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
#endif
/// The communicator parameters sent at construction
CommunicatorParameters m_communicatorParameters;

public UnityInput Initialize(UnityOutput unityOutput,
out UnityInput unityInput)
{
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
m_isOpen = true;
var channel = new Channel(
"localhost:"+m_communicatorParameters.port,

#endif
#endif
return result.UnityInput;
#else
throw new UnityAgentsException(
"You cannot perform training on this platform.");
#endif
}
/// <summary>

{
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
if (!m_isOpen)
{
return;

{
return;
}
#else
throw new UnityAgentsException(
"You cannot perform training on this platform.");
#endif
}
/// <summary>

/// <param name="unityOutput">The UnityOutput to be sent.</param>
public UnityInput Exchange(UnityOutput unityOutput)
{
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
if (!m_isOpen)
{
return null;

m_isOpen = false;
return null;
}
#else
throw new UnityAgentsException(
"You cannot perform training on this platform.");
#endif
}
/// <summary>

1
UnitySDK/Assets/ML-Agents/Scripts/SocketCommunicator.cs


using Google.Protobuf;
using Grpc.Core;
using System.Net.Sockets;
using UnityEngine;
using MLAgents.CommunicatorObjects;

6
UnitySDK/ProjectSettings/ProjectSettings.asset


androidSupportedAspectRatio: 1
androidMaxAspectRatio: 2.1
applicationIdentifier:
Android: com.Vince.Ball
Android: com.Company.ProductName
buildNumber:
iOS: 0
AndroidBundleVersionCode: 1

iOSURLSchemes: []
iOSBackgroundModes: 0
iOSMetalForceHardShadows: 0
metalEditorSupport: 0
metalEditorSupport: 1
metalAPIValidation: 1
iOSRenderExtraFrameOnPause: 1
appleDeveloperTeamID:

m_BuildTargetBatching: []
m_BuildTargetGraphicsAPIs:
- m_BuildTarget: MacStandaloneSupport
m_APIs: 1100000010000000
m_APIs: 10000000
m_Automatic: 0
m_BuildTargetVRSettings: []
m_BuildTargetEnableVuforiaSettings: []

6
docs/API-Reference.md


# API Reference
Our developer-facing C# classes (Academy, Agent, Decision and Monitor) have been
documented to be compatible with
[Doxygen](http://www.stack.nl/~dimitri/doxygen/) for auto-generating HTML
documented to be compatible with Doxygen for auto-generating HTML
To generate the API reference,
[download Doxygen](http://www.stack.nl/~dimitri/doxygen/download.html)
To generate the API reference, download Doxygen
and run the following command within the `docs/` directory:
```sh

19
docs/Background-TensorFlow.md


performing computations using data flow graphs, the underlying representation of
deep learning models. It facilitates training and inference on CPUs and GPUs in
a desktop, server, or mobile device. Within the ML-Agents toolkit, when you
train the behavior of an agent, the output is a TensorFlow model (.bytes) file
train the behavior of an agent, the output is a TensorFlow model (.nn) file
that you can then embed within a Learning Brain. Unless you implement a new
algorithm, the use of TensorFlow is mostly abstracted away and behind the
scenes.

hyperparameters and setting the optimal values for your Unity environment. We
provide more details on setting the hyperparameters in later parts of the
documentation, but, in the meantime, if you are unfamiliar with TensorBoard we
recommend our guide on [using Tensorboard with ML-Agents](Using-Tensorboard.md) or
recommend our guide on [using TensorBoard with ML-Agents](Using-Tensorboard.md) or
## TensorflowSharp
One of the drawbacks of TensorFlow is that it does not provide a native C# API.
This means that the Learning Brain is not natively supported since Unity scripts
are written in C#. Consequently, to enable the Learning Brain, we leverage a
third-party library
[TensorFlowSharp](https://github.com/migueldeicaza/TensorFlowSharp) which
provides .NET bindings to TensorFlow. Thus, when a Unity environment that
contains a Learning Brain is built, inference is performed via TensorFlowSharp.
We provide an additional in-depth overview of how to leverage
[TensorFlowSharp within Unity](Using-TensorFlow-Sharp-in-Unity.md)
which will become more
relevant once you install and start training behaviors within the ML-Agents
toolkit. Given the reliance on TensorFlowSharp, the Learning Brain is currently
marked as experimental.

50
docs/Basic-Guide.md


Equivalent or .NET 4.x Equivalent)**
6. Go to **File** > **Save Project**
## Setting up TensorFlowSharp
We provide pre-trained models (`.bytes` files) for all the agents
in all our demo environments. To be able to run those models, you'll
first need to set-up TensorFlowSharp support. Consequently, you need to install
the TensorFlowSharp plugin to be able to run these models within the Unity
Editor.
1. Download the [TensorFlowSharp Plugin](https://s3.amazonaws.com/unity-ml-agents/0.5/TFSharpPlugin.unitypackage)
2. Import it into Unity by double clicking the downloaded file. You can check
if it was successfully imported by checking the
TensorFlow files in the Project window under **Assets** > **ML-Agents** >
**Plugins** > **Computer**.
3. Go to **Edit** > **Project Settings** > **Player** and add `ENABLE_TENSORFLOW`
to the `Scripting Define Symbols` for each type of device you want to use
(**`PC, Mac and Linux Standalone`**, **`iOS`** or **`Android`**).
![Project Settings](images/project-settings.png)
**Note**: If you don't see anything under **Assets**, drag the
`UnitySDK/Assets/ML-Agents` folder under **Assets** within Project window.
## Running a Pre-trained Model
![Imported TensorFlowsharp](images/imported-tensorflowsharp.png)
## Running a Pre-trained Model
We've included pre-trained models for the 3D Ball example.
We include pre-trained models for our agents (`.nn` files) and we use the
[Unity Inference Engine](Unity-Inference-Engine.md) to run these models
inside Unity. In this section, we will use the pre-trained model for the
3D Ball example.
1. In the **Project** window, go to the `Assets/ML-Agents/Examples/3DBall/Scenes` folder
and open the `3DBall` scene file.

folder.
7. Drag the `3DBallLearning` model file from the `Assets/ML-Agents/Examples/3DBall/TFModels`
folder to the **Model** field of the **3DBallLearning** Brain in the **Inspector** window. __Note__ : All of the brains should now have `3DBallLearning` as the TensorFlow model in the `Model` property
8. Click the **Play** button and you will see the platforms balance the balls
using the pretrained model.
8. Select the **InferenceDevice** to use for this model (CPU or GPU).
_Note: CPU is faster for the majority of ML-Agents toolkit generated models_
9. Click the **Play** button and you will see the platforms balance the balls
using the pre-trained model.
![Running a pretrained model](images/running-a-pretrained-model.gif)
![Running a pre-trained model](images/running-a-pretrained-model.gif)
contains a simple walkthrough of the functionality of the Python API. It can
contains a simple walk-through of the functionality of the Python API. It can
also serve as a simple test that your environment is configured correctly.
Within `Basics`, be sure to set `env_name` to the name of the Unity executable
if you want to [use an executable](Learning-Environment-Executable.md) or to

## Training the Brain with Reinforcement Learning
### Setting up the enviornment for training
### Setting up the environment for training
To set up the environment for training, you will need to specify which agents are contributing
to the training and which Brain is being trained. You can only perform training with

### After training
You can press Ctrl+C to stop the training, and your trained model will be at
`models/<run-identifier>/<brain_name>.bytes` where
`models/<run-identifier>/<brain_name>.nn` where
`<brain_name>` is the name of the Brain corresponding to the model.
(**Note:** There is a known bug on Windows that causes the saving of the model to
fail when you early terminate the training, it's recommended to wait until Step

the steps described
[above](#play-an-example-environment-using-pretrained-model).
[above](#running-a-pre-trained-model).
5. Drag the `<brain_name>.bytes` file from the Project window of
5. Drag the `<brain_name>.nn` file from the Project window of
the Editor to the **Model** placeholder in the **3DBallLearning**
inspector window.
6. Press the :arrow_forward: button at the top of the Editor.

- For a "Hello World" introduction to creating your own Learning Environment,
check out the [Making a New Learning
Environment](Learning-Environment-Create-New.md) page.
- For a series of Youtube video tutorials, checkout the
- For a series of YouTube video tutorials, checkout the
[Machine Learning Agents PlayList](https://www.youtube.com/playlist?list=PLX2vGYjWbI0R08eWQkO7nQkGiicHAX7IX)
page.

4
docs/FAQ.md


following error message:
```console
UnityAgentsException: The brain 3DBallLearning was set to inference mode but the Tensorflow library is not present in the Unity project.
UnityAgentsException: The brain 3DBallLearning was set to inference mode but the TensorFlow library is not present in the Unity project.
```
This error message occurs because the TensorFlowSharp plugin won't be used
without the ENABLE_TENSORFLOW flag, refer to [Setting Up The ML-Agents Toolkit

* _Cause_: There may be no LearningBrain with `Control` option checked in the
`Broadcast Hub` of the Academy. In this case, the environment will not attempt
to communicate with python. _Solution_: Click `Add New` in your Academy's
to communicate with Python. _Solution_: Click `Add New` in your Academy's
`Broadcast Hub`, and drag your LearningBrain asset into the `Brains` field,
and check the `Control` toggle. Also you need to assign this LearningBrain
asset to all of the Agents you wish to do training on.

16
docs/Getting-Started-with-Balance-Ball.md


follow the instructions in
[Using an Executable](Learning-Environment-Executable.md).
**Note**: Re-running this command will start training from scratch again. To resume
a previous training run, append the `--load` flag and give the same `--run-id` as the
run you want to resume.
### Observing Training Progress
Once you start training using `mlagents-learn` in the way described in the

use it with Agents having a **Learning Brain**.
__Note:__ Do not just close the Unity Window once the `Saved Model` message appears.
Either wait for the training process to close the window or press Ctrl+C at the
command-line prompt. If you close the window manually, the `.bytes` file
command-line prompt. If you close the window manually, the `.nn` file
### Setting up TensorFlowSharp
Because TensorFlowSharp support is still experimental, it is disabled by
default. Please note that the `Learning` Brain inference can only be used with
TensorFlowSharp.
To set up TensorFlowSharp support, follow the [Setting up ML-Agents Toolkit
within Unity](Basic-Guide.md#setting-up-ml-agents-within-unity) section of the
Basic Guide page.
### Embedding the trained model into Unity

18
docs/Installation-Windows.md


```
You may be asked to install new packages. Type `y` and press enter _(make sure
you are connected to the internet)_. You must install these required packages.
you are connected to the Internet)_. You must install these required packages.
The new Conda environment is called ml-agents and uses Python version 3.6.
<p align="center">

package management system used to install Python packages. Latest versions of
TensorFlow won't work, so you will need to make sure that you install version
1.7.1. In the same Anaconda Prompt, type in the following command _(make sure
you are connected to the internet)_:
you are connected to the Internet)_:
```sh
pip install tensorflow==1.7.1

cd C:\Downloads\ml-agents\ml-agents
```
Make sure you are connected to the internet and then type in the Anaconda
Make sure you are connected to the Internet and then type in the Anaconda
Prompt within `ml-agents` subdirectory:
```sh

This will complete the installation of all the required Python packages to run
the ML-Agents toolkit.
Sometimes on Windows, when you use pip to install certain Python packages, pip can get stuck trying to read the package cache. If you see this, you can try:
```sh
pip install -e . --no-cache-dir
```
The `--no-cache-dir` option tells pip to disable the cache.
## (Optional) Step 4: GPU Training using The ML-Agents Toolkit

Next, install `tensorflow-gpu` using `pip`. You'll need version 1.7.1. In an
Anaconda Prompt with the Conda environment ml-agents activated, type in the
following command to uninstall TensorFlow for CPU and install TensorFlow
for gpu _(make sure you are connected to the internet)_:
for GPU _(make sure you are connected to the Internet)_:
```sh
pip uninstall tensorflow

Found device 0 with properties ...
```
## Acknowledgements
## Acknowledgments
We would like to thank
[Jason Weimann](https://unity3d.college/2017/10/25/machine-learning-in-unity3d-setting-up-the-environment-tensorflow-for-agentml-on-windows-10/)

45
docs/Learning-Environment-Create-New.md


The default settings for the Academy properties are also fine for this
environment, so we don't need to change anything for the RollerAcademy component
in the Inspector window.
in the Inspector window. You may not have the RollerBrain in the Broadcast Hub yet;
more on that later.
![The Academy properties](images/mlagents-NewTutAcademy.png)

window.
3. Drag the Brain **RollerBallPlayer** from the Project window to the
RollerAgent **Brain** field.
4. Change **Decision Frequency** from `1` to `10`.
4. Change **Decision Interval** from `1` to `10`.
5. Drag the Target GameObject from the Hierarchy window to the RollerAgent
Target field.

positive values and one to specify negative values for each action, for a total
of four keys.
1. Select the `RollerBallPlayer` Aset to view its properties in the Inspector.
1. Select the `RollerBallPlayer` Asset to view its properties in the Inspector.
2. Expand the **Key Continuous Player Actions** dictionary (only visible when using
a **PlayerBrain**).
3. Set **Size** to 4.

you pass to the `mlagents-learn` command for each training run. If you use
the same id value, the statistics for multiple runs are combined and become
difficult to interpret.
## Optional: Multiple Training Areas within the Same Scene
In many of the [example environments](Learning-Environment-Examples.md), multiple copies of
the training area are instantiated in the scene. This generally speeds up training,
allowing the environment to gather many experiences in parallel. This can be achieved
simply by instantiating many Agents which share the same Brain. Use the following steps to
parallelize your RollerBall environment.
### Instantiating Multiple Training Areas
1. Right-click on your Project Hierarchy and create a new empty GameObject.
Name it TrainingArea.
2. Reset the TrainingArea’s Transform so that it is at (0,0,0) with Rotation (0,0,0)
and Scale (1,1,1).
3. Drag the Floor, Target, and RollerAgent GameObjects in the Hierarchy into the
TrainingArea GameObject.
4. Drag the TrainingArea GameObject, along with its attached GameObjects, into your
Assets browser, turning it into a prefab.
5. You can now instantiate copies of the TrainingArea prefab. Drag them into your scene,
positioning them so that they do not overlap.
### Editing the Scripts
You will notice that in the previous section, we wrote our scripts assuming that our
TrainingArea was at (0,0,0), performing checks such as `this.transform.position.y < 0`
to determine whether our agent has fallen off the platform. We will need to change
this if we are to use multiple TrainingAreas throughout the scene.
A quick way to adapt our current code is to use
localPosition rather than position, so that positions are expressed relative to the
TrainingArea prefab's location rather than in global coordinates (see the sketch after
the steps below).
1. Replace all references of `this.transform.position` in RollerAgent.cs with `this.transform.localPosition`.
2. Replace all references of `Target.position` in RollerAgent.cs with `Target.localPosition`.
This is only one way to achieve this objective. Refer to the
[example environments](Learning-Environment-Examples.md) for other ways we can achieve relative positioning.
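For illustration, here is a minimal sketch of what the adapted `RollerAgent.cs` might look like after steps 1 and 2; the `Target` field matches the one used earlier in this tutorial, and the reset position is just an example value.
```csharp
using UnityEngine;
using MLAgents;

public class RollerAgent : Agent
{
    public Transform Target;

    public override void AgentReset()
    {
        // localPosition keeps the check relative to this TrainingArea copy,
        // so the same prefab works anywhere in the scene.
        if (this.transform.localPosition.y < 0)
        {
            this.transform.localPosition = new Vector3(0, 0.5f, 0);
        }
    }

    public override void CollectObservations()
    {
        // Observe positions relative to the TrainingArea rather than the world.
        AddVectorObs(Target.localPosition);
        AddVectorObs(this.transform.localPosition);
    }
}
```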
## Review: Scene Layout

4
docs/Learning-Environment-Design-Academy.md


## Initializing an Academy
Initialization is performed once in an Academy object's lifecycle. Use the
Initialization is performed once in an Academy object's life cycle. Use the
`InitializeAcademy()` method for any logic you would normally perform in the
standard Unity `Start()` or `Awake()` methods.
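As a minimal sketch (the subclass name here is illustrative, not part of the SDK), an Academy override of `InitializeAcademy()` might look like this:
```csharp
using UnityEngine;
using MLAgents;

public class ExampleAcademy : Academy
{
    public override void InitializeAcademy()
    {
        // Runs once when the Academy starts, in place of Start()/Awake().
        Debug.Log("Academy initialized");
    }
}
```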

* `Configuration` - The engine-level settings which correspond to rendering
quality and engine speed.
* `Width` - Width of the environment window in pixels.
* `Height` - Width of the environment window in pixels.
* `Height` - Height of the environment window in pixels.
* `Quality Level` - Rendering quality of environment. (Higher is better)
* `Time Scale` - Speed at which environment is run. (Higher is faster)
* `Target Frame Rate` - FPS engine attempts to maintain.

41
docs/Learning-Environment-Design-Agents.md


The observation feature vector is a list of floating point numbers, which means
you must convert any other data types to a float or a list of floats.
Integers can be added directly to the observation vector. You must explicitly
convert Boolean values to a number:
```csharp
AddVectorObs(isTrueOrFalse ? 1 : 0);
```
For entities like positions and rotations, you can add their components to the
feature list individually. For example:
```csharp
Vector3 speed = ball.transform.GetComponent<Rigidbody>().velocity;
AddVectorObs(speed.x);
AddVectorObs(speed.y);
AddVectorObs(speed.z);
```
The `AddVectorObs` method provides a number of overloads for adding common types
of data to your observation vector. You can add integers and Booleans directly to
the observation vector, as well as some common Unity data types such as `Vector2`,
`Vector3`, and `Quaternion`.
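For instance, a sketch of these overloads inside `CollectObservations()` (the `ball`, `stepsRemaining`, and `isTrueOrFalse` fields are illustrative):
```csharp
public override void CollectObservations()
{
    AddVectorObs(ball.transform.GetComponent<Rigidbody>().velocity); // Vector3: adds 3 floats
    AddVectorObs(this.transform.rotation);                           // Quaternion: adds 4 floats
    AddVectorObs(stepsRemaining);                                    // int: adds 1 float
    AddVectorObs(isTrueOrFalse);                                     // bool: adds 1 float
}
```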
Type enumerations should be encoded in the _one-hot_ style. That is, add an
element to the feature vector for each element of the enumeration, setting the

{
AddVectorObs((int)currentItem == ci ? 1.0f : 0.0f);
}
}
```
`AddVectorObs` also provides a two-argument version as a shortcut for _one-hot_
style observations. The following example is identical to the previous one.
```csharp
enum CarriedItems { Sword, Shield, Bow, LastItem }
const int NUM_ITEM_TYPES = (int)CarriedItems.LastItem;
public override void CollectObservations()
{
// The first argument is the selection index; the second is the
// number of possibilities
AddVectorObs((int)currentItem, NUM_ITEM_TYPES);
}
```

The `Ball3DAgent` also assigns a negative penalty when the ball falls off the
platform.
Note that all of these environments make use of the `Done()` method, which manually
terminates an episode when a termination condition is reached. This can be
called independently of the `Max Step` property.
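For example, a termination check inside `AgentAction()` might look like the following sketch; the `ball` field and the fall-off threshold are illustrative rather than the exact 3DBall implementation:
```csharp
public override void AgentAction(float[] vectorAction, string textAction)
{
    // ... apply the action and assign per-step rewards here ...

    // Manually end the episode when the ball drops below the platform,
    // regardless of how many steps remain before Max Step is reached.
    if (ball.transform.position.y < this.transform.position.y - 2f)
    {
        SetReward(-1f);
        Done();
    }
}
```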
## Agent Properties
![Agent Inspector](images/agent.png)

* `RequestAction()` - Signals that the Agent is requesting an action. The
action provided to the Agent in this case is the same action that was
provided the last time it requested a decision.
* `Decision Frequency` - The number of steps between decision requests. Not used
* `Decision Interval` - The number of steps between decision requests. Not used
if `On Demand Decision` is true (see the sketch below).
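A rough sketch of how these calls can be combined when **On Demand Decision** is enabled; the `ReachedCheckpoint()` condition is purely illustrative:
```csharp
void FixedUpdate()
{
    if (ReachedCheckpoint())
    {
        // Ask the Brain for a new decision at this step.
        RequestDecision();
    }
    else
    {
        // Repeat the action from the most recent decision.
        RequestAction();
    }
}
```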
## Monitoring Agents

10
docs/Learning-Environment-Design-Learning-Brains.md


The **Learning Brain** works differently if you are training it or not.
When training your Agents, drag the **Learning Brain** to the
Academy's `Broadcast Hub` and check the checkbox `Control`. When using a pretrained
Academy's `Broadcast Hub` and check the checkbox `Control`. When using a pre-trained
model, just drag the Model file into the `Model` property of the **Learning Brain**.
## Training Mode / External Control

To use a graph model:
1. Select the **Learning Brain** asset in the **Project** window of the Unity Editor.
**Note:** In order to use the **Learning** Brain with inference, you need to have
TensorFlowSharp enabled. Refer to [this section](Basic-Guide.md#setting-up-ml-agents-within-unity) for more information.
2. Import the `model_name` file produced by the PPO training
program. (Where `model_name` is the name of the model file, which is
constructed from the name of your Unity environment executable and the run-id

[import assets into Unity](https://docs.unity3d.com/Manual/ImportingAssets.html)
in various ways. The easiest way is to simply drag the file into the
**Project** window and drop it into an appropriate folder.
3. Once the `model_name.bytes` file is imported, drag it from the **Project**
3. Once the `model_name.nn` file is imported, drag it from the **Project**
window to the **Model** field of the Brain component.
If you are using a model produced by the ML-Agents `mlagents-learn` command, use

The default values of the TensorFlow graph parameters work with the model
produced by the PPO and BC training code in the ML-Agents SDK. To use a default
ML-Agents model, the only parameter that you need to set is the `Model`,
which must be set to the `.bytes` file containing the trained model itself.
which must be set to the `.nn` file containing the trained model itself.
* `Model` : This must be the `.bytes` file corresponding to the pre-trained
* `Model` : This must be the `.nn` file corresponding to the pre-trained
TensorFlow graph. (You must first drag this file into your Project window
and then from the Resources folder into the inspector)

15
docs/Learning-Environment-Design.md


You must also determine how an Agent finishes its task or times out. You can
manually set an Agent to done in your `AgentAction()` function when the Agent
has finished (or irrevocably failed) its task. You can also set the Agent's `Max
Steps` property to a positive value and the Agent will consider itself done
after it has taken that many steps. When the Academy reaches its own `Max Steps`
count, it starts the next episode. If you set an Agent's `ResetOnDone` property
to true, then the Agent can attempt its task several times in one episode. (Use
the `Agent.AgentReset()` function to prepare the Agent to start again.)
has finished (or irrevocably failed) its task by calling the `Done()` function.
You can also set the Agent's `Max Steps` property to a positive value and the
Agent will consider itself done after it has taken that many steps. When the
Academy reaches its own `Max Steps` count, it starts the next episode. If you
set an Agent's `ResetOnDone` property to true, then the Agent can attempt its
task several times in one episode. (Use the `Agent.AgentReset()` function to
prepare the Agent to start again.)
about programing your own Agents.
about programming your own Agents.
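A minimal sketch of the done-and-reset flow described above, assuming an Agent with `ResetOnDone` enabled; the fields are illustrative:
```csharp
Vector3 startPosition;
int attemptsThisEpisode;

public override void InitializeAgent()
{
    startPosition = this.transform.localPosition;
}

public override void AgentReset()
{
    // With ResetOnDone enabled, this runs after Done() (or after Max Steps)
    // so the Agent can attempt its task again within the same episode.
    attemptsThisEpisode += 1;
    this.transform.localPosition = startPosition;
}
```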
## Environments

4
docs/Learning-Environment-Executable.md


```
You can press Ctrl+C to stop the training, and your trained model will be at
`models/<run-identifier>/<brain_name>.bytes`, which corresponds
`models/<run-identifier>/<brain_name>.nn`, which corresponds
to your model's latest checkpoint. (**Note:** There is a known bug on Windows
that causes the saving of the model to fail when you terminate the
training early; it is recommended to wait until Step has reached the max_steps

`UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/`.
2. Open the Unity Editor, and select the **3DBall** scene as described above.
3. Select the **Ball3DLearning** object from the Project window.
5. Drag the `<brain_name>.bytes` file from the Project window of
5. Drag the `<brain_name>.nn` file from the Project window of
the Editor to the **Model** placeholder in the **Ball3DLearning**
inspector window.
6. Remove the **Ball3DLearning** from the Academy's `Broadcast Hub`

4
docs/ML-Agents-Overview.md


model. This model is then embedded within the Learning Brain during inference to
generate the optimal actions for all Agents linked to that Brain.
**Note that our Learning Brain is currently experimental as it is limited to TensorFlow
models and leverages the third-party
[TensorFlowSharp](https://github.com/migueldeicaza/TensorFlowSharp) library.**
The
[Getting Started with the 3D Balance Ball Example](Getting-Started-with-Balance-Ball.md)
tutorial covers this training mode with the **3D Balance Ball** sample environment.

5
docs/Readme.md


* [Using TensorBoard to Observe Training](Using-Tensorboard.md)
## Inference
* [TensorFlowSharp in Unity (Experimental)](Using-TensorFlow-Sharp-in-Unity.md)
* [Unity Inference Engine](Unity-Inference-Engine.md)
## Help

* [API Reference](API-Reference.md)
* [How to use the Python API](Python-API.md)
* [Wrapping Learning Environment as a Gym](../gym-unity/README.md)
* [Wrapping Learning Environment as a Gym (+Baselines/Dopamine Integration)](../gym-unity/README.md)

4
docs/Training-Imitation-Learning.md


as the config parameter, and include the `--run-id` and `--train` as usual.
Provide your environment as the `--env` parameter if it has been compiled
as a standalone, or omit it to train in the Editor.
7. (Optional) Observe training performance using Tensorboard.
7. (Optional) Observe training performance using TensorBoard.
This will use the demonstration file to train a neural network driven agent
to directly imitate the actions provided in the demonstration. The environment

similarly to the demonstrations.
11. Once the Student Agents are exhibiting the desired behavior, end the training
process with `Ctrl+C` from the command line.
12. Move the resulting `*.bytes` file into the `TFModels` subdirectory of the
12. Move the resulting `*.nn` file into the `TFModels` subdirectory of the
Assets folder (or a subdirectory within Assets of your choosing), and use it
with a `Learning` Brain.

2
docs/Training-ML-Agents.md


When training is finished, you can find the saved model in the `models` folder
under the assigned run-id — in the cats example, the path to the model would be
`models/cob_1/CatsOnBicycles_cob_1.bytes`.
`models/cob_1/CatsOnBicycles_cob_1.nn`.
While this example used the default training hyperparameters, you can edit the
[training_config.yaml file](#training-config-file) with a text editor to set

4
docs/Training-on-Amazon-Web-Service.md


This page contains instructions for setting up an EC2 instance on Amazon Web
Service for training ML-Agents environments.
## Preconfigured AMI
## Pre-configured AMI
We've prepared a preconfigured AMI for you with the ID: `ami-016ff5559334f8619` in the
We've prepared a pre-configured AMI for you with the ID: `ami-016ff5559334f8619` in the
`us-east-1` region. It was created as a modification of [Deep Learning AMI
(Ubuntu)](https://aws.amazon.com/marketplace/pp/B077GCH38C). The AMI has been
tested with p2.xlarge instance. Furthermore, if you want to train without

4
docs/Training-on-Microsoft-Azure.md


If you've selected to run on an N-Series VM with GPU support, you can verify that
the GPU is being used by running `nvidia-smi` from the command line.
## Monitoring your Training Run with Tensorboard
## Monitoring your Training Run with TensorBoard
Once you have started training, you can [use Tensorboard to observe the
Once you have started training, you can [use TensorBoard to observe the
training](Using-Tensorboard.md).
1. Start by [opening the appropriate port for web traffic to connect to your VM](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/nsg-quickstart-portal).

2
docs/Using-Docker.md


For more detail on Docker mounts, check out
[these](https://docs.docker.com/storage/bind-mounts/) docs from Docker.
**NOTE**: If you are training using Docker for environments that use visual observations, you may need to increase the default memory that Docker allocates to the container. For example, see [here](https://docs.docker.com/docker-for-mac/#advanced) for instructions for Docker for Mac.
### Stopping Container and Saving State
If you are satisfied with the training progress, you can stop the Docker

220
gym-unity/README.md


## Using the Gym Wrapper
The gym interface is available from `gym_unity.envs`. To launch an environmnent
The gym interface is available from `gym_unity.envs`. To launch an environment
env = UnityEnv(environment_filename, worker_id, default_visual, multiagent)
env = UnityEnv(environment_filename, worker_id, use_visual, uint8_visual, multiagent)
* `environment_filename` refers to the path to the Unity environment.
* `worker_id` refers to the port to use for communication with the environment.
Defaults to `0`.
* `use_visual` refers to whether to use visual observations (True) or vector
observations (False) as the default observation provided by the `reset` and
`step` functions. Defaults to `False`.
* `multiagent` refers to whether you intent to launch an environment which
contains more than one agent. Defaults to `False`.
* `environment_filename` refers to the path to the Unity environment.
* `worker_id` refers to the port to use for communication with the environment.
Defaults to `0`.
* `use_visual` refers to whether to use visual observations (True) or vector
observations (False) as the default observation provided by the `reset` and
`step` functions. Defaults to `False`.
* `uint8_visual` refers to whether to output visual observations as `uint8` values
(0-255). Many common Gym environments (e.g. Atari) do this. By default they
will be floats (0.0-1.0). Defaults to `False`.
* `multiagent` refers to whether you intend to launch an environment which
contains more than one agent. Defaults to `False`.
* `flatten_branched` will flatten a branched discrete action space into a Gym Discrete.
Otherwise, it will be converted into a MultiDiscrete. Defaults to `False`.
The returned environment `env` will function as a gym.

Using the provided Gym wrapper, it is possible to train ML-Agents environments
using these algorithms. This requires the creation of custom training scripts to
launch each algorithm. In most cases these scripts can be created by making
slightly modifications to the ones provided for Atari and Mujoco environments.
slight modifications to the ones provided for Atari and Mujoco environments.
### Example - DQN Baseline

import gym
from baselines import deepq
from gym_unity.envs import UnityEnv
from baselines import logger
from gym_unity.envs.unity_env import UnityEnv
env = UnityEnv("./envs/GridWorld", 0, use_visual=True)
env = UnityEnv("./envs/GridWorld", 0, use_visual=True, uint8_visual=True)
logger.configure('./logs') # Change to log in a different directory
"mlp",
lr=1e-3,
total_timesteps=100000,
"cnn", # conv_only is also a good choice for GridWorld
lr=2.5e-4,
total_timesteps=1000000,
exploration_fraction=0.1,
exploration_final_eps=0.02,
print_freq=10
exploration_fraction=0.05,
exploration_final_eps=0.1,
print_freq=20,
train_freq=5,
learning_starts=20000,
target_network_update_freq=50,
gamma=0.99,
prioritized_replay=False,
checkpoint_freq=1000,
checkpoint_path='./logs', # Change to save model in a different directory
dueling=True
To start the training process, run the following from the root of the baselines
repository:
To start the training process, run the following from the directory containing
`train_unity.py`:
```sh
python -m train_unity

"""
def make_env(rank, use_visual=True): # pylint: disable=C0111
def _thunk():
env = UnityEnv(env_directory, rank, use_visual=use_visual)
env = UnityEnv(env_directory, rank, use_visual=use_visual, uint8_visual=True)
env = Monitor(env, logger.get_dir() and os.path.join(logger.get_dir(), str(rank)))
return env
return _thunk

if __name__ == '__main__':
main()
```
## Run Google Dopamine Algorithms
Google provides the [Dopamine](https://github.com/google/dopamine) framework, along with
implementations of algorithms such as DQN, Rainbow, and the C51 variant of Rainbow.
Using the Gym wrapper, we can run Unity environments using Dopamine.
First, after installing the Gym wrapper, clone the Dopamine repository.
```
git clone https://github.com/google/dopamine
```
Then, follow the appropriate install instructions as specified on
[Dopamine's homepage](https://github.com/google/dopamine). Note that the Dopamine
guide specifies using a virtualenv. If you choose to do so, make sure your gym_unity
package is also installed within the same virtualenv as Dopamine.
### Adapting Dopamine's Scripts
First, open `dopamine/atari/run_experiment.py`. Alternatively, copy the entire `atari`
folder, and name it something else (e.g. `unity`). If you choose the copy approach,
be sure to change the package names in the import statements in `train.py` to your new
directory.
Within `run_experiment.py`, we will need to make changes to which environment is
instantiated, just as in the Baselines example. At the top of the file, insert
```python
from gym_unity.envs import UnityEnv
```
to import the Gym Wrapper. Navigate to the `create_atari_environment` method
in the same file, and switch to instantiating a Unity environment by replacing
the method with the following code.
```python
game_version = 'v0' if sticky_actions else 'v4'
full_game_name = '{}NoFrameskip-{}'.format(game_name, game_version)
env = UnityEnv('./envs/GridWorld', 0, use_visual=True, uint8_visual=True)
return env
```
`./envs/GridWorld` is the path to your built Unity executable. For more information on
building Unity environments, see [here](../docs/Learning-Environment-Executable.md), and note
the Limitations section below.
Note that we are not using the preprocessor from Dopamine,
as it uses many Atari-specific calls. Furthermore, frame-skipping can be done from within Unity,
rather than on the Python side.
### Limitations
Since Dopamine is designed around variants of DQN, it is only compatible
with discrete action spaces, and specifically the Discrete Gym space. For environments
that use branched discrete action spaces (e.g.
[VisualBanana](../docs/Learning-Environment-Examples.md)), you can enable the
`flatten_branched` parameter in `UnityEnv`, which treats each combination of branched
actions as separate actions.
Furthermore, when building your environments, ensure that your
[Learning Brain](../docs/Learning-Environment-Design-Brains.md) is using visual
observations with greyscale enabled, and that the dimensions of the visual observations
are 84 by 84 (matching the parameter found in `dqn_agent.py` and `rainbow_agent.py`).
Dopamine's agents currently do not automatically adapt to the observation
dimensions or number of channels.
### Hyperparameters
The hyperparameters provided by Dopamine are tailored to the Atari games, and you will
likely need to adjust them for ML-Agents environments. Here is a sample
`dopamine/agents/rainbow/configs/rainbow.gin` file that is known to work with
GridWorld.
```python
import dopamine.agents.rainbow.rainbow_agent
import dopamine.unity.run_experiment
import dopamine.replay_memory.prioritized_replay_buffer
import gin.tf.external_configurables
RainbowAgent.num_atoms = 51
RainbowAgent.stack_size = 1
RainbowAgent.vmax = 10.
RainbowAgent.gamma = 0.99
RainbowAgent.update_horizon = 3
RainbowAgent.min_replay_history = 20000 # agent steps
RainbowAgent.update_period = 5
RainbowAgent.target_update_period = 50 # agent steps
RainbowAgent.epsilon_train = 0.1
RainbowAgent.epsilon_eval = 0.01
RainbowAgent.epsilon_decay_period = 50000 # agent steps
RainbowAgent.replay_scheme = 'prioritized'
RainbowAgent.tf_device = '/cpu:0' # use '/cpu:*' for non-GPU version
RainbowAgent.optimizer = @tf.train.AdamOptimizer()
tf.train.AdamOptimizer.learning_rate = 0.00025
tf.train.AdamOptimizer.epsilon = 0.0003125
Runner.game_name = "Unity" # any name can be used here
Runner.sticky_actions = False
Runner.num_iterations = 200
Runner.training_steps = 10000 # agent steps
Runner.evaluation_steps = 500 # agent steps
Runner.max_steps_per_episode = 27000 # agent steps
WrappedPrioritizedReplayBuffer.replay_capacity = 1000000
WrappedPrioritizedReplayBuffer.batch_size = 32
```
This example assumes you copied `atari` to a separate folder named `unity`.
Replace `unity` in `import dopamine.unity.run_experiment` with the folder you
copied your `run_experiment.py` and `trainer.py` files to.
If you directly modified the existing files, then use `atari` here.
### Starting a Run
You can now run Dopamine as you would normally:
```
python -um dopamine.unity.train \
--agent_name=rainbow \
--base_dir=/tmp/dopamine \
--gin_files='dopamine/agents/rainbow/configs/rainbow.gin'
```
Again, we assume that you've copied `atari` into a separate folder.
Remember to replace `unity` with the directory you copied your files into. If you
edited the Atari files directly, this should be `atari`.
### Example: GridWorld
As a baseline, here are rewards over time for the three algorithms provided with
Dopamine as run on the GridWorld example environment. All Dopamine (DQN, Rainbow,
C51) runs were done with the same epsilon, epsilon decay, replay history, training steps,
and buffer settings as specified above. Note that the first 20000 steps are used to pre-fill
the training buffer, and no learning happens.
We provide results from our PPO implementation and the DQN from Baselines as reference.
Note that all runs used the same greyscale GridWorld as Dopamine. For PPO, `num_layers`
was set to 2, and all other hyperparameters are the default for GridWorld in `trainer_config.yaml`.
For Baselines DQN, the provided hyperparameters in the previous section are used. Note
that Baselines implements certain features (e.g. dueling-Q) that are not enabled
in Dopamine DQN.
![Dopamine on GridWorld](images/dopamine_gridworld_plot.png)
### Example: VisualBanana
As an example of using the `flatten_branched` option, we also used the Rainbow
algorithm to train on the VisualBanana environment, and provide the results below.
The same hyperparameters were used as in the GridWorld case, except that
`replay_history` and `epsilon_decay` were increased to 100000.
![Dopamine on VisualBanana](images/dopamine_visualbanana_plot.png)

84
gym-unity/gym_unity/envs/unity_env.py


import logging
import itertools
import gym
import numpy as np
from mlagents.envs import UnityEnvironment

https://github.com/openai/multiagent-particle-envs
"""
def __init__(self, environment_filename: str, worker_id=0, use_visual=False, multiagent=False):
def __init__(self, environment_filename: str, worker_id=0, use_visual=False, uint8_visual=False, multiagent=False, flatten_branched=False):
:param uint8_visual: Return visual observations as uint8 (0-255) matrices instead of float (0.0-1.0).
:param flatten_branched: If True, turn branched discrete action spaces into a Discrete space rather than MultiDiscrete.
"""
self._env = UnityEnvironment(environment_filename, worker_id)
self.name = self._env.academy_name

self._multiagent = multiagent
self._flattener = None
self.game_over = False # Hidden flag used by Atari environments to determine if the game is over
# Check brain configuration
if len(self._env.brains) != 1:

" visual observations as part of this environment.")
self.use_visual = brain.number_visual_observations >= 1 and use_visual
if not use_visual and uint8_visual:
logger.warning("`uint8_visual was set to true, but visual observations are not in use. "
"This setting will not have any effect.")
else:
self.uint8_visual = uint8_visual
if brain.number_visual_observations > 1:
logger.warning("The environment contains more than one visual observation. "
"Please note that only the first will be provided in the observation.")

if len(brain.vector_action_space_size) == 1:
self._action_space = spaces.Discrete(brain.vector_action_space_size[0])
else:
self._action_space = spaces.MultiDiscrete(brain.vector_action_space_size)
if flatten_branched:
self._flattener = ActionFlattener(brain.vector_action_space_size)
self._action_space = self._flattener.action_space
else:
self._action_space = spaces.MultiDiscrete(brain.vector_action_space_size)
if flatten_branched:
logger.warning("The environment has a non-discrete action space. It will "
"not be flattened.")
high = np.array([1] * brain.vector_action_space_size[0])
self._action_space = spaces.Box(-high, high, dtype=np.float32)
high = np.array([np.inf] * brain.vector_observation_space_size)

info = self._env.reset()[self.brain_name]
n_agents = len(info.agents)
self._check_agents(n_agents)
self.game_over = False
if not self._multiagent:
obs, reward, done, info = self._single_step(info)

raise UnityGymException(
"The environment was expecting a list of {} actions.".format(self._n_agents))
else:
if self._flattener is not None:
# Action space is discrete and flattened - we expect a list of scalars
action = [self._flattener.lookup_action(_act) for _act in action]
else:
if self._flattener is not None:
# Translate action into list
action = self._flattener.lookup_action(action)
info = self._env.step(action)[self.brain_name]
n_agents = len(info.agents)

if not self._multiagent:
obs, reward, done, info = self._single_step(info)
self.game_over = done
self.game_over = all(done)
self.visual_obs = info.visual_observations[0][0, :, :, :]
self.visual_obs = self._preprocess_single(info.visual_observations[0][0, :, :, :])
default_observation = self.visual_obs
else:
default_observation = info.vector_observations[0, :]

"brain_info": info}
def _preprocess_single(self, single_visual_obs):
if self.uint8_visual:
return (255.0*single_visual_obs).astype(np.uint8)
else:
return single_visual_obs
self.visual_obs = info.visual_observations
self.visual_obs = self._preprocess_multi(info.visual_observations)
default_observation = self.visual_obs
else:
default_observation = info.vector_observations

def _preprocess_multi(self, multiple_visual_obs):
if self.uint8_visual:
return [(255.0*_visual_obs).astype(np.uint8) for _visual_obs in multiple_visual_obs]
else:
return multiple_visual_obs
def render(self, mode='rgb_array'):
return self.visual_obs

@property
def number_agents(self):
return self._n_agents
class ActionFlattener():
"""
Flattens branched discrete action spaces into single-branch discrete action spaces.
"""
def __init__(self,branched_action_space):
"""
Initialize the flattener.
:param branched_action_space: A List containing the sizes of each branch of the action
space, e.g. [2,3,3] for three branches with size 2, 3, and 3 respectively.
"""
self._action_shape = branched_action_space
self.action_lookup = self._create_lookup(self._action_shape)
self.action_space = spaces.Discrete(len(self.action_lookup))
@classmethod
def _create_lookup(self, branched_action_space):
"""
Creates a Dict that maps discrete actions (scalars) to branched actions (lists).
Each key in the Dict maps to one unique set of branched actions, and each value
contains the List of branched actions.
"""
possible_vals = [range(_num) for _num in branched_action_space]
all_actions = [list(_action) for _action in itertools.product(*possible_vals)]
# Dict should be faster than List for large action spaces
action_lookup = {_scalar: _action for (_scalar, _action) in enumerate(all_actions)}
return action_lookup
def lookup_action(self, action):
"""
Convert a scalar discrete action into a unique set of branched actions.
:param: action: A scalar value representing one of the discrete actions.
:return: The List containing the branched actions.
"""
return self.action_lookup[action]

2
gym-unity/setup.py


from setuptools import setup, find_packages
setup(name='gym_unity',
version='0.2.0',
version='0.3.0',
description='Unity Machine Learning Agents Gym Interface',
license='Apache License 2.0',
author='Unity Technologies',

100
gym-unity/tests/test_gym.py


import pytest
import numpy as np
from gym import spaces
from tests.mock_communicator import MockCommunicator
# Tests
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_gym_wrapper(mock_communicator, mock_launcher):
mock_communicator.return_value = MockCommunicator(
discrete_action=False, visual_inputs=0, stack=False, num_agents=1)
# Test for incorrect number of agents.
with pytest.raises(UnityGymException):
UnityEnv(' ', use_visual=False, multiagent=True)
@mock.patch('gym_unity.envs.unity_env.UnityEnvironment')
def test_gym_wrapper(mock_env):
mock_brain = create_mock_brainparams()
mock_braininfo = create_mock_vector_braininfo()
setup_mock_unityenvironment(mock_env, mock_brain, mock_braininfo)
env = UnityEnv(' ', use_visual=False)
env = UnityEnv(' ', use_visual=False, multiagent=False)
assert isinstance(env, UnityEnv)
assert isinstance(env.reset(), np.ndarray)
actions = env.action_space.sample()

assert isinstance(done, bool)
assert isinstance(info, dict)
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_multi_agent(mock_communicator, mock_launcher):
mock_communicator.return_value = MockCommunicator(
discrete_action=False, visual_inputs=0, stack=False, num_agents=2)
# Test for incorrect number of agents.
@mock.patch('gym_unity.envs.unity_env.UnityEnvironment')
def test_multi_agent(mock_env):
mock_brain = create_mock_brainparams()
mock_braininfo = create_mock_vector_braininfo(num_agents=2)
setup_mock_unityenvironment(mock_env, mock_brain, mock_braininfo)
with pytest.raises(UnityGymException):
UnityEnv(' ', multiagent=False)

assert isinstance(rew, list)
assert isinstance(done, list)
assert isinstance(info, dict)
@mock.patch('gym_unity.envs.unity_env.UnityEnvironment')
def test_branched_flatten(mock_env):
mock_brain = create_mock_brainparams(vector_action_space_type='discrete', vector_action_space_size=[2,2,3])
mock_braininfo = create_mock_vector_braininfo(num_agents=1)
setup_mock_unityenvironment(mock_env, mock_brain, mock_braininfo)
env = UnityEnv(' ', use_visual=False, multiagent=False, flatten_branched=True)
assert isinstance(env.action_space, spaces.Discrete)
assert env.action_space.n==12
assert env._flattener.lookup_action(0)==[0,0,0]
assert env._flattener.lookup_action(11)==[1,1,2]
# Check that False produces a MultiDiscrete
env = UnityEnv(' ', use_visual=False, multiagent=False, flatten_branched=False)
assert isinstance(env.action_space, spaces.MultiDiscrete)
# Helper methods
def create_mock_brainparams(number_visual_observations=0, num_stacked_vector_observations=1,
vector_action_space_type='continuous', vector_observation_space_size=3,
vector_action_space_size=None):
"""
Creates a mock BrainParameters object with parameters.
"""
# Avoid using mutable object as default param
if vector_action_space_size is None:
vector_action_space_size = [2]
mock_brain = mock.Mock();
mock_brain.return_value.number_visual_observations = number_visual_observations
mock_brain.return_value.num_stacked_vector_observations = num_stacked_vector_observations
mock_brain.return_value.vector_action_space_type = vector_action_space_type
mock_brain.return_value.vector_observation_space_size = vector_observation_space_size
mock_brain.return_value.vector_action_space_size = vector_action_space_size
return mock_brain()
def create_mock_vector_braininfo(num_agents = 1):
"""
Creates a mock BrainInfo with vector observations. Imitates constant
vector observations, rewards, dones, and agents.
:int num_agents: Number of "agents" to imitate in your BrainInfo values.
"""
mock_braininfo = mock.Mock()
mock_braininfo.return_value.vector_observations = np.array([num_agents*[1, 2, 3,]])
mock_braininfo.return_value.rewards = num_agents*[1.0]
mock_braininfo.return_value.local_done = num_agents*[False]
mock_braininfo.return_value.text_observations = num_agents*['']
mock_braininfo.return_value.agents = range(0,num_agents)
return mock_braininfo()
def setup_mock_unityenvironment(mock_env, mock_brain, mock_braininfo):
"""
Takes a mock UnityEnvironment and adds the appropriate properties, defined by the mock
BrainParameters and BrainInfo.
:Mock mock_env: A mock UnityEnvironment, usually empty.
:Mock mock_brain: A mock Brain object that specifies the params of this environment.
:Mock mock_braininfo: A mock BrainInfo object that will be returned at each step and reset.
"""
mock_env.return_value.academy_name = 'MockAcademy'
mock_env.return_value.brains = {'MockBrain':mock_brain}
mock_env.return_value.external_brain_names = ['MockBrain']
mock_env.return_value.reset.return_value = {'MockBrain':mock_braininfo}
mock_env.return_value.step.return_value = {'MockBrain':mock_braininfo}

2
ml-agents/mlagents/envs/environment.py


atexit.register(self._close)
self.port = base_port + worker_id
self._buffer_size = 12000
self._version_ = "API-6"
self._version_ = "API-7"
self._loaded = False # If true, this means the environment was successfully loaded
self.proc1 = None # The process that is started. If None, no process was started
self.communicator = self.get_communicator(worker_id, base_port)

3
ml-agents/mlagents/envs/rpc_communicator.py


raise UnityTimeOutException(
"The Unity environment took too long to respond. Make sure that :\n"
"\t The environment does not need user interaction to launch\n"
"\t The Academy and the External Brain(s) are attached to objects in the Scene\n"
"\t The Academy's Broadcast Hub is configured correctly\n"
"\t The Agents are linked to the appropriate Brains\n"
"\t The environment and the Python interface have compatible versions.")
aca_param = self.unity_to_external.parent_conn.recv().unity_output
message = UnityMessage()

3
ml-agents/mlagents/envs/socket_communicator.py


raise UnityTimeOutException(
"The Unity environment took too long to respond. Make sure that :\n"
"\t The environment does not need user interaction to launch\n"
"\t The Academy and the External Brain(s) are attached to objects in the Scene\n"
"\t The Academy's Broadcast Hub is configured correctly\n"
"\t The Agents are linked to the appropriate Brains\n"
"\t The environment and the Python interface have compatible versions.")
message = UnityMessage()
message.header.status = 200

144
ml-agents/mlagents/trainers/learn.py


import logging
from multiprocessing import Process, Queue
import os
import glob
import shutil
import yaml
from typing import Optional
from mlagents.trainers import MetaCurriculumError, MetaCurriculum
from mlagents.envs import UnityEnvironment
from mlagents.envs.exception import UnityEnvironmentException
def run_training(sub_id, run_seed, run_options, process_queue):
def run_training(sub_id: int, run_seed: int, run_options, process_queue):
"""
Launches training session.
:param process_queue: Queue used to send signal back to main.

"""
# Docker Parameters
docker_target_name = (run_options['--docker-target-name']
if run_options['--docker-target-name'] != 'None' else None)
if run_options['--docker-target-name'] != 'None' else None)
if run_options['--env'] != 'None' else None)
if run_options['--env'] != 'None' else None)
run_id = run_options['--run-id']
load_model = run_options['--load']
train_model = run_options['--train']

curriculum_file = (run_options['--curriculum']
if run_options['--curriculum'] != 'None' else None)
curriculum_folder = (run_options['--curriculum']
if run_options['--curriculum'] != 'None' else None)
# Create controller and launch environment.
tc = TrainerController(env_path, run_id + '-' + str(sub_id),
save_freq, curriculum_file, fast_simulation,
load_model, train_model, worker_id + sub_id,
keep_checkpoints, lesson, run_seed,
docker_target_name, trainer_config_path, no_graphics)
# Recognize and use docker volume if one is passed as an argument
if not docker_target_name:
model_path = './models/{run_id}'.format(run_id=run_id)
summaries_dir = './summaries'
else:
trainer_config_path = \
'/{docker_target_name}/{trainer_config_path}'.format(
docker_target_name=docker_target_name,
trainer_config_path=trainer_config_path)
if curriculum_folder is not None:
curriculum_folder = \
'/{docker_target_name}/{curriculum_folder}'.format(
docker_target_name=docker_target_name,
curriculum_folder=curriculum_folder)
model_path = '/{docker_target_name}/models/{run_id}'.format(
docker_target_name=docker_target_name,
run_id=run_id)
summaries_dir = '/{docker_target_name}/summaries'.format(
docker_target_name=docker_target_name)
trainer_config = load_config(trainer_config_path)
env = init_environment(env_path, docker_target_name, no_graphics, worker_id + sub_id, fast_simulation, run_seed)
maybe_meta_curriculum = try_create_meta_curriculum(curriculum_folder, env)
external_brains = {}
for brain_name in env.external_brain_names:
external_brains[brain_name] = env.brains[brain_name]
# Create controller and begin training.
tc = TrainerController(model_path, summaries_dir, run_id + '-' + str(sub_id),
save_freq, maybe_meta_curriculum,
load_model, train_model,
keep_checkpoints, lesson, external_brains, run_seed)
tc.start_learning()
tc.start_learning(env, trainer_config)
def try_create_meta_curriculum(curriculum_folder: Optional[str], env: UnityEnvironment) -> Optional[MetaCurriculum]:
if curriculum_folder is None:
return None
else:
meta_curriculum = MetaCurriculum(curriculum_folder, env._resetParameters)
if meta_curriculum:
for brain_name in meta_curriculum.brains_to_curriculums.keys():
if brain_name not in env.external_brain_names:
raise MetaCurriculumError('One of the curricula '
'defined in ' +
curriculum_folder + ' '
'does not have a corresponding '
'Brain. Check that the '
'curriculum file has the same '
'name as the Brain '
'whose curriculum it defines.')
return meta_curriculum
def prepare_for_docker_run(docker_target_name, env_path):
for f in glob.glob('/{docker_target_name}/*'.format(
docker_target_name=docker_target_name)):
if env_path in f:
try:
b = os.path.basename(f)
if os.path.isdir(f):
shutil.copytree(f,
'/ml-agents/{b}'.format(b=b))
else:
src_f = '/{docker_target_name}/{b}'.format(
docker_target_name=docker_target_name, b=b)
dst_f = '/ml-agents/{b}'.format(b=b)
shutil.copyfile(src_f, dst_f)
os.chmod(dst_f, 0o775) # Make executable
except Exception as e:
logging.getLogger('mlagents.trainers').info(e)
env_path = '/ml-agents/{env_path}'.format(env_path=env_path)
return env_path
def load_config(trainer_config_path):
try:
with open(trainer_config_path) as data_file:
trainer_config = yaml.load(data_file)
return trainer_config
except IOError:
raise UnityEnvironmentException('Parameter file could not be found '
'at {}.'
.format(trainer_config_path))
except UnicodeDecodeError:
raise UnityEnvironmentException('There was an error decoding '
'Trainer Config from this path : {}'
.format(trainer_config_path))
def init_environment(env_path, docker_target_name, no_graphics, worker_id, fast_simulation, seed):
if env_path is not None:
# Strip out executable extensions if passed
env_path = (env_path.strip()
.replace('.app', '')
.replace('.exe', '')
.replace('.x86_64', '')
.replace('.x86', ''))
docker_training = docker_target_name is not None
if docker_training and env_path is not None:
"""
Comments for future maintenance:
Some OS/VM instances (e.g. COS GCP Image) mount filesystems
with COS flag which prevents execution of the Unity scene,
to get around this, we will copy the executable into the
container.
"""
# Navigate in docker path and find env_path and copy it.
env_path = prepare_for_docker_run(docker_target_name,
env_path)
return UnityEnvironment(
file_name=env_path,
worker_id=worker_id,
seed=seed,
docker_training=docker_training,
no_graphics=no_graphics
)
def main():

2
ml-agents/mlagents/trainers/models.py


class LearningModel(object):
_version_number_ = 1
_version_number_ = 2
def __init__(self, m_size, normalize, use_recurrent, brain, seed):
tf.set_random_seed(seed)

12
ml-agents/mlagents/trainers/policy.py


import tensorflow as tf
from mlagents.trainers import UnityException
from mlagents.trainers.models import LearningModel
from mlagents.trainers import tensorflow_to_barracuda as tf2bc
logger = logging.getLogger("mlagents.trainers")

def export_model(self):
"""
Exports latest saved model to .tf format for Unity embedding.
Exports latest saved model to .nn format for Unity embedding.
with self.graph.as_default():
target_nodes = ','.join(self._process_graph())
ckpt = tf.train.get_checkpoint_state(self.model_path)

input_checkpoint=ckpt.model_checkpoint_path,
output_node_names=target_nodes,
output_graph=(self.model_path + '.bytes'),
output_graph=(self.model_path + '/frozen_graph_def.pb'),
logger.info('Exported ' + self.model_path + '.bytes file')
tf2bc.convert(self.model_path + '/frozen_graph_def.pb', self.model_path + '.nn')
logger.info('Exported ' + self.model_path + '.nn file')
def _process_graph(self):
"""

317
ml-agents/mlagents/trainers/trainer_controller.py


"""Launches trainers for each External Brains in a Unity Environment."""
import os
import glob
import logging
import shutil
import sys

from typing import *
import yaml
import re
from tensorflow.python.tools import freeze_graph
from mlagents.envs.environment import UnityEnvironment
from mlagents.envs.exception import UnityEnvironmentException
from mlagents.envs import BrainInfo
from mlagents.envs.exception import UnityEnvironmentException
from mlagents.trainers.exception import MetaCurriculumError
def __init__(self, env_path, run_id, save_freq, curriculum_folder,
fast_simulation, load, train, worker_id, keep_checkpoints,
lesson, seed, docker_target_name,
trainer_config_path, no_graphics):
def __init__(self, model_path: str, summaries_dir: str,
run_id: str, save_freq: int, meta_curriculum: Optional[MetaCurriculum],
load: bool, train: bool, keep_checkpoints: int, lesson: Optional[int],
external_brains: Dict[str, BrainInfo], training_seed: int):
:param env_path: Location to the environment executable to be loaded.
:param model_path: Path to save the model.
:param summaries_dir: Folder to save training summaries.
:param curriculum_folder: Folder containing JSON curriculums for the
environment.
:param fast_simulation: Whether to run the game at training speed.
:param meta_curriculum: MetaCurriculum object which stores information about all curricula.
:param worker_id: Number to add to communication port (5005).
Used for multi-environment
:param seed: Random seed used for training.
:param docker_target_name: Name of docker volume that will contain all
data.
:param trainer_config_path: Fully qualified path to location of trainer
configuration file.
:param no_graphics: Whether to run the Unity simulator in no-graphics
mode.
:param external_brains: dictionary of external brain names to BrainInfo objects.
:param training_seed: Seed to use for Numpy and Tensorflow random number generation.
if env_path is not None:
# Strip out executable extensions if passed
env_path = (env_path.strip()
.replace('.app', '')
.replace('.exe', '')
.replace('.x86_64', '')
.replace('.x86', ''))
# Recognize and use docker volume if one is passed as an argument
if not docker_target_name:
self.docker_training = False
self.trainer_config_path = trainer_config_path
self.model_path = './models/{run_id}'.format(run_id=run_id)
self.curriculum_folder = curriculum_folder
self.summaries_dir = './summaries'
else:
self.docker_training = True
self.trainer_config_path = \
'/{docker_target_name}/{trainer_config_path}'.format(
docker_target_name=docker_target_name,
trainer_config_path = trainer_config_path)
self.model_path = '/{docker_target_name}/models/{run_id}'.format(
docker_target_name=docker_target_name,
run_id=run_id)
if env_path is not None:
"""
Comments for future maintenance:
Some OS/VM instances (e.g. COS GCP Image) mount filesystems
with COS flag which prevents execution of the Unity scene,
to get around this, we will copy the executable into the
container.
"""
# Navigate in docker path and find env_path and copy it.
env_path = self._prepare_for_docker_run(docker_target_name,
env_path)
if curriculum_folder is not None:
self.curriculum_folder = \
'/{docker_target_name}/{curriculum_folder}'.format(
docker_target_name=docker_target_name,
curriculum_folder=curriculum_folder)
self.summaries_dir = '/{docker_target_name}/summaries'.format(
docker_target_name=docker_target_name)
self.model_path = model_path
self.summaries_dir = summaries_dir
self.external_brains = external_brains
self.external_brain_names = external_brains.keys()
self.fast_simulation = fast_simulation
self.worker_id = worker_id
self.seed = seed
self.meta_curriculum = meta_curriculum
self.seed = training_seed
self.env = UnityEnvironment(file_name=env_path,
worker_id=self.worker_id,
seed=self.seed,
docker_training=self.docker_training,
no_graphics=no_graphics)
if env_path is None:
self.env_name = 'editor_' + self.env.academy_name
else:
# Extract out name of environment
self.env_name = os.path.basename(os.path.normpath(env_path))
if curriculum_folder is None:
self.meta_curriculum = None
else:
self.meta_curriculum = MetaCurriculum(self.curriculum_folder,
self.env._resetParameters)
if self.meta_curriculum:
for brain_name in self.meta_curriculum.brains_to_curriculums.keys():
if brain_name not in self.env.external_brain_names:
raise MetaCurriculumError('One of the curriculums '
'defined in ' +
self.curriculum_folder + ' '
'does not have a corresponding '
'Brain. Check that the '
'curriculum file has the same '
'name as the Brain '
'whose curriculum it defines.')
def _prepare_for_docker_run(self, docker_target_name, env_path):
for f in glob.glob('/{docker_target_name}/*'.format(
docker_target_name=docker_target_name)):
if env_path in f:
try:
b = os.path.basename(f)
if os.path.isdir(f):
shutil.copytree(f,
'/ml-agents/{b}'.format(b=b))
else:
src_f = '/{docker_target_name}/{b}'.format(
docker_target_name=docker_target_name, b=b)
dst_f = '/ml-agents/{b}'.format(b=b)
shutil.copyfile(src_f, dst_f)
os.chmod(dst_f, 0o775) # Make executable
except Exception as e:
self.logger.info(e)
env_path = '/ml-agents/{env_name}'.format(env_name=env_path)
return env_path
def _get_measure_vals(self):
if self.meta_curriculum:

else:
return None
def _save_model(self,steps=0):
def _save_model(self, steps=0):
"""
Saves current model to checkpoint folder.
:param steps: Current number of steps in training process.

def _export_graph(self):
"""
Exports latest saved models to .bytes format for Unity embedding.
Exports latest saved models to .nn format for Unity embedding.
def _initialize_trainers(self, trainer_config):
def initialize_trainers(self, trainer_config):
for brain_name in self.env.external_brain_names:
for brain_name in self.external_brains:
trainer_parameters = trainer_config['default'].copy()
trainer_parameters['summary_path'] = '{basedir}/{name}'.format(
basedir=self.summaries_dir,

for k in trainer_config[_brain_key]:
trainer_parameters[k] = trainer_config[_brain_key][k]
trainer_parameters_dict[brain_name] = trainer_parameters.copy()
for brain_name in self.env.external_brain_names:
for brain_name in self.external_brains:
self.env.brains[brain_name],
self.external_brains[brain_name],
self.env.brains[brain_name],
self.external_brains[brain_name],
self.env.brains[brain_name],
self.external_brains[brain_name],
self.meta_curriculum
.brains_to_curriculums[brain_name]
.min_lesson_length if self.meta_curriculum else 0,

'brain {}'
.format(brain_name))
def _load_config(self):
try:
with open(self.trainer_config_path) as data_file:
trainer_config = yaml.load(data_file)
return trainer_config
except IOError:
raise UnityEnvironmentException('Parameter file could not be found '
'at {}.'
.format(self.trainer_config_path))
except UnicodeDecodeError:
raise UnityEnvironmentException('There was an error decoding '
'Trainer Config from this path : {}'
.format(self.trainer_config_path))
@staticmethod
def _create_model_path(model_path):
try:

'permissions are set correctly.'
.format(model_path))
def _reset_env(self):
def _reset_env(self, env):
"""Resets the environment.
Returns:

if self.meta_curriculum is not None:
return self.env.reset(config=self.meta_curriculum.get_config(),
train_mode=self.fast_simulation)
return env.reset(config=self.meta_curriculum.get_config())
return self.env.reset(train_mode=self.fast_simulation)
return env.reset()
def start_learning(self):
def start_learning(self, env, trainer_config):
trainer_config = self._load_config()
self._initialize_trainers(trainer_config)
self.initialize_trainers(trainer_config)
curr_info = self._reset_env()
curr_info = self._reset_env(env)
if self.train_model:
for brain_name, trainer in self.trainers.items():
trainer.write_tensorboard_text('Hyperparameters',

while any([t.get_step <= t.get_max_steps \
for k, t in self.trainers.items()]) \
or not self.train_model:
if self.meta_curriculum:
# Get the sizes of the reward buffers.
reward_buff_sizes = {k:len(t.reward_buffer) \
for (k,t) in self.trainers.items()}
# Attempt to increment the lessons of the brains who
# were ready.
lessons_incremented = \
self.meta_curriculum.increment_lessons(
self._get_measure_vals(),
reward_buff_sizes=reward_buff_sizes)
# If any lessons were incremented or the environment is
# ready to be reset
if (self.meta_curriculum
and any(lessons_incremented.values())):
curr_info = self._reset_env()
for brain_name, trainer in self.trainers.items():
trainer.end_episode()
for brain_name, changed in lessons_incremented.items():
if changed:
self.trainers[brain_name].reward_buffer.clear()
elif self.env.global_done:
curr_info = self._reset_env()
for brain_name, trainer in self.trainers.items():
trainer.end_episode()
# Decide and take an action
take_action_vector, \
take_action_memories, \
take_action_text, \
take_action_value, \
take_action_outputs \
= {}, {}, {}, {}, {}
for brain_name, trainer in self.trainers.items():
(take_action_vector[brain_name],
take_action_memories[brain_name],
take_action_text[brain_name],
take_action_value[brain_name],
take_action_outputs[brain_name]) = \
trainer.take_action(curr_info)
new_info = self.env.step(vector_action=take_action_vector,
memory=take_action_memories,
text_action=take_action_text,
value=take_action_value)
for brain_name, trainer in self.trainers.items():
trainer.add_experiences(curr_info, new_info,
take_action_outputs[brain_name])
trainer.process_experiences(curr_info, new_info)
if trainer.is_ready_update() and self.train_model \
and trainer.get_step <= trainer.get_max_steps:
# Perform gradient descent with experience buffer
trainer.update_policy()
# Write training statistics to Tensorboard.
if self.meta_curriculum is not None:
trainer.write_summary(
self.global_step,
lesson_num=self.meta_curriculum
.brains_to_curriculums[brain_name]
.lesson_num)
else:
trainer.write_summary(self.global_step)
if self.train_model \
and trainer.get_step <= trainer.get_max_steps:
trainer.increment_step_and_update_last_reward()
new_info = self.take_step(env, curr_info)
self.global_step += 1
if self.global_step % self.save_freq == 0 and self.global_step != 0 \
and self.train_model:

if self.train_model:
self._save_model_when_interrupted(steps=self.global_step)
pass
self.env.close()
env.close()
def take_step(self, env, curr_info):
if self.meta_curriculum:
# Get the sizes of the reward buffers.
reward_buff_sizes = {k: len(t.reward_buffer) \
for (k, t) in self.trainers.items()}
# Attempt to increment the lessons of the brains who
# were ready.
lessons_incremented = \
self.meta_curriculum.increment_lessons(
self._get_measure_vals(),
reward_buff_sizes=reward_buff_sizes)
# If any lessons were incremented or the environment is
# ready to be reset
if (self.meta_curriculum
and any(lessons_incremented.values())):
curr_info = self._reset_env(env)
for brain_name, trainer in self.trainers.items():
trainer.end_episode()
for brain_name, changed in lessons_incremented.items():
if changed:
self.trainers[brain_name].reward_buffer.clear()
elif env.global_done:
curr_info = self._reset_env(env)
for brain_name, trainer in self.trainers.items():
trainer.end_episode()
# Decide and take an action
take_action_vector, \
take_action_memories, \
take_action_text, \
take_action_value, \
take_action_outputs \
= {}, {}, {}, {}, {}
for brain_name, trainer in self.trainers.items():
(take_action_vector[brain_name],
take_action_memories[brain_name],
take_action_text[brain_name],
take_action_value[brain_name],
take_action_outputs[brain_name]) = \
trainer.take_action(curr_info)
new_info = env.step(vector_action=take_action_vector,
memory=take_action_memories,
text_action=take_action_text,
value=take_action_value)
for brain_name, trainer in self.trainers.items():
trainer.add_experiences(curr_info, new_info,
take_action_outputs[brain_name])
trainer.process_experiences(curr_info, new_info)
if trainer.is_ready_update() and self.train_model \
and trainer.get_step <= trainer.get_max_steps:
# Perform gradient descent with experience buffer
trainer.update_policy()
# Write training statistics to Tensorboard.
if self.meta_curriculum is not None:
trainer.write_summary(
self.global_step,
lesson_num=self.meta_curriculum
.brains_to_curriculums[brain_name]
.lesson_num)
else:
trainer.write_summary(self.global_step)
if self.train_model \
and trainer.get_step <= trainer.get_max_steps:
trainer.increment_step_and_update_last_reward()
return new_info

5
ml-agents/setup.py


setup(
name='mlagents',
version='0.6.0',
version='0.7.0',
description='Unity Machine Learning Agents',
long_description=long_description,
long_description_content_type='text/markdown',

'docopt',
'pyyaml',
'protobuf>=3.6,<3.7',
'grpcio>=1.11.0,<1.12.0'],
'grpcio>=1.11.0,<1.12.0',
'pypiwin32==223;platform_system=="Windows"'],
python_requires=">=3.6,<3.7",

2
ml-agents/tests/mock_communicator.py


)
rl_init = UnityRLInitializationOutput(
name="RealFakeAcademy",
version="API-6",
version="API-7",
log_path="",
brain_parameters=[bp]
)

4
ml-agents/tests/trainers/test_bc.py


@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_bc_policy_evaluate(mock_communicator, mock_launcher):
def test_bc_policy_evaluate(mock_communicator, mock_launcher, dummy_config):
tf.reset_default_graph()
mock_communicator.return_value = MockCommunicator(
discrete_action=False, visual_inputs=0)

trainer_parameters = dummy_config()
trainer_parameters = dummy_config
model_path = env.brain_names[0]
trainer_parameters['model_path'] = model_path
trainer_parameters['keep_checkpoints'] = 3

4
ml-agents/tests/trainers/test_ppo.py


@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_ppo_policy_evaluate(mock_communicator, mock_launcher):
def test_ppo_policy_evaluate(mock_communicator, mock_launcher, dummy_config):
tf.reset_default_graph()
mock_communicator.return_value = MockCommunicator(
discrete_action=False, visual_inputs=0)

trainer_parameters = dummy_config()
trainer_parameters = dummy_config
model_path = env.brain_names[0]
trainer_parameters['model_path'] = model_path
trainer_parameters['keep_checkpoints'] = 3

358
ml-agents/tests/trainers/test_trainer_controller.py


import json
import unittest.mock as mock
from unittest.mock import *
import tensorflow as tf
from mlagents.trainers.trainer_controller import TrainerController
from mlagents.trainers.ppo.trainer import PPOTrainer

from tests.mock_communicator import MockCommunicator
@pytest.fixture

curiosity_enc_size: 1
''')
@pytest.fixture
def dummy_offline_bc_config_with_override():
base = dummy_offline_bc_config()
base['testbrain'] = {}
base['testbrain']['normalize'] = False
return base
@pytest.fixture
def dummy_bad_config():

memory_size: 8
''')
@pytest.fixture
def basic_trainer_controller(brain_info):
return TrainerController(
model_path='test_model_path',
summaries_dir='test_summaries_dir',
run_id='test_run_id',
save_freq=100,
meta_curriculum=None,
load=True,
train=True,
keep_checkpoints=False,
lesson=None,
external_brains={'testbrain': brain_info},
training_seed=99
)
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_initialization(mock_communicator, mock_launcher):
mock_communicator.return_value = MockCommunicator(
discrete_action=True, visual_inputs=1)
tc = TrainerController(' ', ' ', 1, None, True, True, False, 1,
1, 1, 1, '', "tests/test_mlagents.trainers.py", False)
assert (tc.env.brain_names[0] == 'RealFakeBrain')
@patch('numpy.random.seed')
@patch('tensorflow.set_random_seed')
def test_initialization_seed(numpy_random_seed, tensorflow_set_seed):
seed = 27
TrainerController('', '', '1', 1, None, True, False, False, None, {}, seed)
numpy_random_seed.assert_called_with(seed)
tensorflow_set_seed.assert_called_with(seed)
def assert_bc_trainer_constructed(trainer_cls, input_config, tc, expected_brain_info, expected_config):
def mock_constructor(self, brain, trainer_params, training, load, seed, run_id):
assert(brain == expected_brain_info)
assert(trainer_params == expected_config)
assert(training == tc.train_model)
assert(load == tc.load_model)
assert(seed == tc.seed)
assert(run_id == tc.run_id)
with patch.object(trainer_cls, "__init__", mock_constructor):
tc.initialize_trainers(input_config)
assert('testbrain' in tc.trainers)
assert(isinstance(tc.trainers['testbrain'], trainer_cls))
def assert_ppo_trainer_constructed(input_config, tc, expected_brain_info,
expected_config, expected_reward_buff_cap=0):
def mock_constructor(self, brain, reward_buff_cap, trainer_parameters, training, load, seed, run_id):
assert(brain == expected_brain_info)
assert(trainer_parameters == expected_config)
assert(reward_buff_cap == expected_reward_buff_cap)
assert(training == tc.train_model)
assert(load == tc.load_model)
assert(seed == tc.seed)
assert(run_id == tc.run_id)
with patch.object(PPOTrainer, "__init__", mock_constructor):
tc.initialize_trainers(input_config)
assert('testbrain' in tc.trainers)
assert(isinstance(tc.trainers['testbrain'], PPOTrainer))
@patch('mlagents.envs.BrainInfo')
def test_initialize_trainer_parameters_uses_defaults(BrainInfoMock):
brain_info_mock = BrainInfoMock()
tc = basic_trainer_controller(brain_info_mock)
full_config = dummy_offline_bc_config()
expected_config = full_config['default']
expected_config['summary_path'] = tc.summaries_dir + '/test_run_id_testbrain'
expected_config['model_path'] = tc.model_path + '/testbrain'
expected_config['keep_checkpoints'] = tc.keep_checkpoints
assert_bc_trainer_constructed(OfflineBCTrainer, full_config, tc, brain_info_mock, expected_config)
@patch('mlagents.envs.BrainInfo')
def test_initialize_trainer_parameters_override_defaults(BrainInfoMock):
brain_info_mock = BrainInfoMock()
tc = basic_trainer_controller(brain_info_mock)
full_config = dummy_offline_bc_config_with_override()
expected_config = full_config['default']
expected_config['summary_path'] = tc.summaries_dir + '/test_run_id_testbrain'
expected_config['model_path'] = tc.model_path + '/testbrain'
expected_config['keep_checkpoints'] = tc.keep_checkpoints
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_load_config(mock_communicator, mock_launcher, dummy_config):
open_name = 'mlagents.trainers.trainer_controller' + '.open'
with mock.patch('yaml.load') as mock_load:
with mock.patch(open_name, create=True) as _:
mock_load.return_value = dummy_config
mock_communicator.return_value = MockCommunicator(
discrete_action=True, visual_inputs=1)
mock_load.return_value = dummy_config
tc = TrainerController(' ', ' ', 1, None, True, True, False, 1,
1, 1, 1, '', '', False)
config = tc._load_config()
assert (len(config) == 1)
assert (config['default']['trainer'] == "ppo")
# Override value from specific brain config
expected_config['normalize'] = False
assert_bc_trainer_constructed(OfflineBCTrainer, full_config, tc, brain_info_mock, expected_config)
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_initialize_trainers(mock_communicator, mock_launcher, dummy_config,
dummy_offline_bc_config, dummy_online_bc_config, dummy_bad_config):
open_name = 'mlagents.trainers.trainer_controller' + '.open'
with mock.patch('yaml.load') as mock_load:
with mock.patch(open_name, create=True) as _:
mock_communicator.return_value = MockCommunicator(
discrete_action=True, visual_inputs=1)
tc = TrainerController(' ', ' ', 1, None, True, False, False, 1, 1,
1, 1, '', "tests/test_mlagents.trainers.py",
False)
# Test for PPO trainer
mock_load.return_value = dummy_config
config = tc._load_config()
tf.reset_default_graph()
tc._initialize_trainers(config)
assert (len(tc.trainers) == 1)
assert (isinstance(tc.trainers['RealFakeBrain'], PPOTrainer))
@patch('mlagents.envs.BrainInfo')
def test_initialize_online_bc_trainer(BrainInfoMock):
brain_info_mock = BrainInfoMock()
tc = basic_trainer_controller(brain_info_mock)
# Test for Online Behavior Cloning Trainer
mock_load.return_value = dummy_online_bc_config
config = tc._load_config()
tf.reset_default_graph()
tc._initialize_trainers(config)
assert (isinstance(tc.trainers['RealFakeBrain'], OnlineBCTrainer))
full_config = dummy_online_bc_config()
expected_config = full_config['default']
expected_config['summary_path'] = tc.summaries_dir + '/test_run_id_testbrain'
expected_config['model_path'] = tc.model_path + '/testbrain'
expected_config['keep_checkpoints'] = tc.keep_checkpoints
# Test for proper exception when trainer name is incorrect
mock_load.return_value = dummy_bad_config
config = tc._load_config()
tf.reset_default_graph()
with pytest.raises(UnityEnvironmentException):
tc._initialize_trainers(config)
assert_bc_trainer_constructed(OnlineBCTrainer, full_config, tc, brain_info_mock, expected_config)
@mock.patch('mlagents.envs.UnityEnvironment.executable_launcher')
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_initialize_offline_trainers(mock_communicator, mock_launcher, dummy_config,
dummy_offline_bc_config, dummy_online_bc_config, dummy_bad_config):
open_name = 'mlagents.trainers.trainer_controller' + '.open'
with mock.patch('yaml.load') as mock_load:
with mock.patch(open_name, create=True) as _:
mock_communicator.return_value = MockCommunicator(
discrete_action=False, stack=False, visual_inputs=0,
brain_name="Ball3DBrain", vec_obs_size=8)
tc = TrainerController(' ', ' ', 1, None, True, False, False, 1, 1,
1, 1, '', "tests/test_mlagents.trainers.py",
False)
@patch('mlagents.envs.BrainInfo')
def test_initialize_ppo_trainer(BrainInfoMock):
brain_info_mock = BrainInfoMock()
tc = basic_trainer_controller(brain_info_mock)
# Test for Offline Behavior Cloning Trainer
mock_load.return_value = dummy_offline_bc_config
config = tc._load_config()
tf.reset_default_graph()
tc._initialize_trainers(config)
assert (isinstance(tc.trainers['Ball3DBrain'], OfflineBCTrainer))
full_config = dummy_config()
expected_config = full_config['default']
expected_config['summary_path'] = tc.summaries_dir + '/test_run_id_testbrain'
expected_config['model_path'] = tc.model_path + '/testbrain'
expected_config['keep_checkpoints'] = tc.keep_checkpoints
assert_ppo_trainer_constructed(full_config, tc, brain_info_mock, expected_config)
@patch('mlagents.envs.BrainInfo')
def test_initialize_invalid_trainer_raises_exception(BrainInfoMock):
brain_info_mock = BrainInfoMock()
tc = basic_trainer_controller(brain_info_mock)
bad_config = dummy_bad_config()
try:
tc.initialize_trainers(bad_config)
assert False, "Initialize trainers with bad config did not raise an exception."
except UnityEnvironmentException:
pass
def trainer_controller_with_start_learning_mocks():
trainer_mock = MagicMock()
trainer_mock.get_step = 0
trainer_mock.get_max_steps = 5
trainer_mock.parameters = {'some': 'parameter'}
trainer_mock.write_tensorboard_text = MagicMock()
brain_info_mock = MagicMock()
tc = basic_trainer_controller(brain_info_mock)
tc.initialize_trainers = MagicMock()
tc.trainers = {'testbrain': trainer_mock}
tc.take_step = MagicMock()
def take_step_sideeffect(env, curr_info):
tc.trainers['testbrain'].get_step += 1
if tc.trainers['testbrain'].get_step > 10:
raise KeyboardInterrupt
tc.take_step.side_effect = take_step_sideeffect
tc._export_graph = MagicMock()
tc._save_model = MagicMock()
return tc, trainer_mock
@patch('tensorflow.reset_default_graph')
def test_start_learning_trains_forever_if_no_train_model(tf_reset_graph):
tc, trainer_mock = trainer_controller_with_start_learning_mocks()
tc.train_model = False
trainer_config = dummy_config()
tf_reset_graph.return_value = None
env_mock = MagicMock()
env_mock.close = MagicMock()
env_mock.reset = MagicMock()
tc.start_learning(env_mock, trainer_config)
tf_reset_graph.assert_called_once()
tc.initialize_trainers.assert_called_once_with(trainer_config)
env_mock.reset.assert_called_once()
assert (tc.take_step.call_count == 11)
tc._export_graph.assert_not_called()
tc._save_model.assert_not_called()
env_mock.close.assert_called_once()
@patch('tensorflow.reset_default_graph')
def test_start_learning_trains_until_max_steps_then_saves(tf_reset_graph):
tc, trainer_mock = trainer_controller_with_start_learning_mocks()
trainer_config = dummy_config()
tf_reset_graph.return_value = None
brain_info_mock = MagicMock()
env_mock = MagicMock()
env_mock.close = MagicMock()
env_mock.reset = MagicMock(return_value=brain_info_mock)
tc.start_learning(env_mock, trainer_config)
tf_reset_graph.assert_called_once()
tc.initialize_trainers.assert_called_once_with(trainer_config)
env_mock.reset.assert_called_once()
assert(tc.take_step.call_count == trainer_mock.get_max_steps + 1)
env_mock.close.assert_called_once()
tc._save_model.assert_called_once_with(steps=6)
def test_start_learning_updates_meta_curriculum_lesson_number():
tc, trainer_mock = trainer_controller_with_start_learning_mocks()
trainer_config = dummy_config()
brain_info_mock = MagicMock()
env_mock = MagicMock()
env_mock.close = MagicMock()
env_mock.reset = MagicMock(return_value=brain_info_mock)
meta_curriculum_mock = MagicMock()
meta_curriculum_mock.set_all_curriculums_to_lesson_num = MagicMock()
tc.meta_curriculum = meta_curriculum_mock
tc.lesson = 5
tc.start_learning(env_mock, trainer_config)
meta_curriculum_mock.set_all_curriculums_to_lesson_num.assert_called_once_with(tc.lesson)
def trainer_controller_with_take_step_mocks():
trainer_mock = MagicMock()
trainer_mock.get_step = 0
trainer_mock.get_max_steps = 5
trainer_mock.parameters = {'some': 'parameter'}
trainer_mock.write_tensorboard_text = MagicMock()
brain_info_mock = MagicMock()
tc = basic_trainer_controller(brain_info_mock)
tc.trainers = {'testbrain': trainer_mock}
return tc, trainer_mock
def test_take_step_resets_env_on_global_done():
tc, trainer_mock = trainer_controller_with_take_step_mocks()
brain_info_mock = MagicMock()
action_data_mock_out = [None, None, None, None, None]
trainer_mock.take_action = MagicMock(return_value=action_data_mock_out)
trainer_mock.add_experiences = MagicMock()
trainer_mock.process_experiences = MagicMock()
trainer_mock.update_policy = MagicMock()
trainer_mock.write_summary = MagicMock()
trainer_mock.trainer.increment_step_and_update_last_reward = MagicMock()
env_mock = MagicMock()
step_data_mock_out = MagicMock()
env_mock.step = MagicMock(return_value=step_data_mock_out)
env_mock.close = MagicMock()
env_mock.reset = MagicMock(return_value=brain_info_mock)
env_mock.global_done = True
tc.take_step(env_mock, brain_info_mock)
env_mock.reset.assert_called_once()
def test_take_step_adds_experiences_to_trainer_and_trains():
tc, trainer_mock = trainer_controller_with_take_step_mocks()
curr_info_mock = MagicMock()
trainer_action_output_mock = [
'action',
'memory',
'actiontext',
'value',
'output',
]
trainer_mock.take_action = MagicMock(return_value=trainer_action_output_mock)
trainer_mock.is_ready_update = MagicMock(return_value=True)
env_mock = MagicMock()
env_step_output_mock = MagicMock()
env_mock.step = MagicMock(return_value=env_step_output_mock)
env_mock.close = MagicMock()
env_mock.reset = MagicMock(return_value=curr_info_mock)
env_mock.global_done = False
tc.take_step(env_mock, curr_info_mock)
env_mock.reset.assert_not_called()
trainer_mock.take_action.assert_called_once_with(curr_info_mock)
env_mock.step.assert_called_once_with(
vector_action={'testbrain': trainer_action_output_mock[0]},
memory={'testbrain': trainer_action_output_mock[1]},
text_action={'testbrain': trainer_action_output_mock[2]},
value={'testbrain': trainer_action_output_mock[3]}
)
trainer_mock.add_experiences.assert_called_once_with(
curr_info_mock, env_step_output_mock, trainer_action_output_mock[4]
)
trainer_mock.process_experiences.assert_called_once_with(curr_info_mock, env_step_output_mock)
trainer_mock.update_policy.assert_called_once()
trainer_mock.write_summary.assert_called_once()
trainer_mock.increment_step_and_update_last_reward.assert_called_once()

12
protobuf-definitions/README.md


1. Install the pre-requisites.
2. Un-comment line 4 in `make.bat`, and set it to the correct Grpc.Tools sub-directory.
3. Run `make.bat`.
4. In the generated `UnityToExternalGrpc.cs` file in the `UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects` folder, add the following to the beginning of the file:
```csharp
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
```
and the following line to the end:
```csharp
#endif
```
This ensures the generated code does not try to access the Grpc library
on platforms that Grpc does not support.
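Taken together, the generated file should end up wrapped roughly like this (a minimal sketch; the comment stands in for the generated gRPC service and message code, which is not reproduced here):
```csharp
# if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
// ... generated gRPC code produced by make.bat (left unchanged) ...
#endif
```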

29
UnitySDK/Assets/ML-Agents/Editor/NNModelImporter.cs


using System.IO;
using UnityEditor;
using UnityEngine;
using UnityEditor.Experimental.AssetImporters;
using MLAgents.InferenceBrain;
namespace MLAgents
{
/// <summary>
/// Asset Importer of barracuda models.
/// </summary>
[ScriptedImporter(1, new[] {"nn"})]
public class NNModelImporter : ScriptedImporter {
private const string IconPath = "Assets/ML-Agents/Resources/NNModelIcon.png";
public override void OnImportAsset(AssetImportContext ctx)
{
var model = File.ReadAllBytes(ctx.assetPath);
var asset = ScriptableObject.CreateInstance<NNModel>();
asset.Value = model;
Texture2D texture = (Texture2D)
AssetDatabase.LoadAssetAtPath(IconPath, typeof(Texture2D));
ctx.AddObjectToAsset(ctx.assetPath, asset, texture);
ctx.SetMainObject(asset);
}
}
}

11
UnitySDK/Assets/ML-Agents/Editor/NNModelImporter.cs.meta


fileFormatVersion: 2
guid: 83221ad3db87f4b3b91b041047cb2bc5
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

11
UnitySDK/Assets/ML-Agents/Editor/Tests/MultinomialTest.cs.meta


fileFormatVersion: 2
guid: 668f4ac2d83814df5a8883722633e4e5
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

11
UnitySDK/Assets/ML-Agents/Editor/Tests/RandomNormalTest.cs.meta


fileFormatVersion: 2
guid: 518c8e6e10fd94059a064ffbe65557af
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

556
UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHardLearning.nn
File diff content is too large to display
View file

7
UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHardLearning.nn.meta


fileFormatVersion: 2
guid: 8be33caeca04d43498913448b5364f2b
ScriptedImporter:
userData:
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 83221ad3db87f4b3b91b041047cb2bc5, type: 3}

511
UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallLearning.nn
File diff content is too large to display
View file

7
UnitySDK/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallLearning.nn.meta


fileFormatVersion: 2
guid: c282d4bbc4c8f4e78b2bb29eccd17557
ScriptedImporter:
userData:
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 83221ad3db87f4b3b91b041047cb2bc5, type: 3}

307
UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/TFModels/BananaLearning.nn
File diff content is too large to display
View file

7
UnitySDK/Assets/ML-Agents/Examples/BananaCollectors/TFModels/BananaLearning.nn.meta


fileFormatVersion: 2
guid: 9aed85b22394844eaa6db4d5e3c61adb
ScriptedImporter:
userData:
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 83221ad3db87f4b3b91b041047cb2bc5, type: 3}

12
UnitySDK/Assets/ML-Agents/Examples/Basic/TFModels/BasicLearning.nn


(binary .nn model data not shown)

7
UnitySDK/Assets/ML-Agents/Examples/Basic/TFModels/BasicLearning.nn.meta


fileFormatVersion: 2
guid: b9b3600e7ab99422684e1f3bf597a456
ScriptedImporter:
userData:
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 83221ad3db87f4b3b91b041047cb2bc5, type: 3}

143
UnitySDK/Assets/ML-Agents/Examples/Bouncer/TFModels/BouncerLearning.nn


(binary .nn model data not shown)
)� i>M遼��f=,�8=ܠ��~�w��.�Q�F���7>�=���T>��;��>.!�=af�=k�ͽ��!�/߯=u��=Ȝ�<�#q;��׽X:=�������82�UT�=X9���"u��A>o��<m�?���>{ٽ�]i����=�ڼnJV=�~�=x�<Ӎ%�%=8����<Ю�=��V>�⽑;H>�,�=X��<&�#��ʹ=@#�����r���B;�|`�K��<�����a=���;ٚ� a%<uuH�#�0<@H�<�Lm���>;� =U��D����q�='ǽC�����潓X� ��=����|��=,�@�6Cg>�w�= OV��˖=�>�.��Yu�Yt�� ��>G��Q��9����1�����,������ê�B
�=����Uf�n>W��]��T>]��=�x�=��U�,>�s�<�&�>d�">>�= ��=4��=��>Բl>��->sn�;ګb���=����:�:C>�0�=@�D�WS}=����� ս��l>���=fOѽ<��ͣ�<��;�s��Ѐ �� ��3��w���{�q=����s���Wo���<r�\�{�U�=RX�O�>�J�Ϫ]�Y�8>G-s����V<���ѽ^�������8$��Ҙ�= h1>[�Z=��>���=�q�<�`�>��m��|=%�c�~Y =:\�=���6�o��&<����{�=Va>;�I>��s�e����=*=���=��2>����Ӯ�= ����4�J9�;s������;�n@>b�T��>>�O���P�=����kj�=x�6��=>]�Ѽ��d����=cx�=n���)�>�7=���>�&��3���_[�=�H���<k.A�$�νq�=&-|=^U����Ǽ���K>�h=�^�=nD*�0KA���U��U��Q>�>�R
>�q�>�૽�R8>�>=� �<>��=��b��}�=#Q��i���ǡ=��@�=
Ĺ<좉=��>���m�˼9C�=��2=�oF=j�S�+�� ��=4'\>��<kC�:�#>6���i���H�Œ�=Չ��=������ �K`����
�앴=�s�<\��PJ>`�>���w:xp��}t> ��=��v=�RG>E!S����>�I��_�����>���;n�켷8���+%�� ���:I>R\�=�#?��(�=6�=��=����=���=�xĽI@̽-�����C�����нa;>ds�����;�U�<�DY=Ccc��E>>ǫ<�+��B�x��D�=��U��
�w��(����<nd�='>���=��!��������|���?=q��=o)>�����g��dsڽ� >� ��R��>���H�M<1��� ���U����p��3a�)W��S�*�YFĽ� ���(b�R���f}�}�;6'��n �ԢN>UT������"���RO>�9(=+��>�̴�b�+�@k�=/d�Po��}ɺ�%>@��=U>p?��oX>.� �>���o<XW���9��Q2��7V�:E=>�J<ܓ�=/Td���>�J��N<FF�>�˄���ֽ�H�=g?�F����F�� gV=�Z��b>
�>c���nC��A�����A7>{f>!Z#�7R�<񋐽K0+�O咼7��gq��ıy=V�<^��=v���� =�|�<�T�=��Q�E4�=p�p=p��ȏ=���=�ځ�@3>J>K�ҽ"��=V6>,�=���>LBV=���=I���w�>�΀>����$9#�P�
=���=��x�pm�=�>�#j���0�������=!V�����U�>}���P�����kh�=�n��FPҼ�_�=G\!>��~<���<h54>
�(<ש�=���=cٽȦx��iQ�;k
>�w��@1o�/
������`Un� �>�[ҽ�E���yB<�B�=���>;��=l�m���>Q:�;O�=�=�qK<A�/>�Ԋ>3���� M�Umy=��_�z��<�V�=�0A>_Iǽ��F��>>���=d�����Y=K{a�B>�U>ڎ���n=Պ�>ү>q��:.��=�KK��*F>C]'>!��=�ծ<j`c<r��>��=���ń�=�ð=��<0�=�鬽H��=hG=6~=���=�>D�> 6j=>���Dg�>� n���ҽ@kS��B =��>`��X��=S� �1��<�.�<_؋=*��$�=Ҷ=����O�#��.>�N���$�����p��mjo= O��8���#��=�@�<΁��k_;obH�gX< ��8���x���>�ۘ��gG��B��LZ=�$佌EZ�� >6�aH�=�ͽ-��=&��;×�,�$>V:�o����n�<1��=|�B���ɽ�8��������=WR�=�k
�?���Ĵ�>�-ý�O^���>� r�ђ��?��$A���<Y����u�����u7����ڽ �Z=�f��'�+>�#=O�5>P�>��G�S���t(o�)���.����;����^��6�=#�X�����t�<��!>c}���d9>3Ǖ�I�^>�u�=��=�[|�����mt�<���J��K.���/���Y=��>�Hj>c��=z_�;��R�����)S��މ�z����н[b�|�f����f=6�a��Q�=�Y*�vʭ=NG�=���'�=���;voR<�PG=4�=rP3>ɾ=>V:�(�8�Ǎ>�>�/ >�����i6>���f�>�"b�:�==E|ͼ�;=�5b=8�����><���'>�8A>�ؙ���;��J>µ2>���]D�>��<mK>���<��i�nx#>��B� k��^*�]"��
�I>�'7����=�� �7�]}>IY<)x >Dz���R>ʻ+>iTܼ=�f��*��� Z���;$�Ǽ"�<ŃU=UB�>-E��xL>��y<�n��U�� ���YNA��Y0>n<~<1���� ����<\'N����O�v���>:!#>��=b��>�����"���>Q�!=�d�W�>r�(>�>� �;/� ��
2>۶�>Nc�=����Z��=�c>��Z�F{�=4J�<
��\��8ߠ> ۍ�Z?2>K��>��i��I=M�����䓾 &t=+f>�d��3�������g>��齨� �y��>fh��Ag>�7��+�>S�=��t6<fG��O��<�>'[���& ���=�k����I;��w����ް���f/>6�=7�R���6�~o�là=����s�{`z<,�U>�Ꮍ�.g=F�>�\q>L�<KБ>J��A�=*��=ug�;R�I���<46��Ȭ���8��ѽ!<,�nپ�2^�Y�=��O>����Z4�����h�ֽՅ�=�G&�OT>ᇐ�Td����=7���䨦�Lu}����>�>�<=����8��ݨ=�^Y>H��=����2��C�?�lS,�s杼���A��<~�A�D�)�6�m�ӂ����ӽ#̸<�0&���>-_�>?p>_7N=�>�=(/�>.�=��)>��2;ڤ�=�,ļ� >_�n>A.>;�U�Z񽆊r�$��>C�p=�ɟ:Oy=@B�=���:��2�녾��=��<�m>;�m�qlY=˗X���D=a#��������|��=�u��zJ�g���*>yaR�V���BJ�<B� �� >�V=�r�֥����'=�Ƅ=�Ξ�]�c>Wc}����=D)��Tş=|�<I�Ǽ4莾����㽮&= /+>�Ѯ�C~#>��*�~����`��)�>o���` ���W=�Z=q�={�`���p��㖿=ꍁ�k� �M�]���?��)�]�$�����ڌS�r�=��^��a�=�3=UT>�>�=�8>ϵ��6@�=Z�j�E�x���<s�c:(Df�*u�������=g��=�l����_��Ui1��yl>V���{�i�]�½7I���0��P���<������-꡽�м9�w���M<�I[�N]t�|;0�@�e>4cF>��J�M�w����9<S���7>��=0�H��#��v������e��� ��t=��=��@�ӪE����=2=V= �>��3>�^�� `>����8�\=rv�>��0���><�����)��\9<v�">�c��/.���p>2�3���O=4�}�|����s!�\�9����=4�g<��o� #H�*��=p���q=M��<���� �<��>N��=i4�P\w=�b
����6�;L�佫h=v[\��p�m�e���>b�v=�3>X雾a��=�@=�Y��U!h=<=>��3� �<>|\�=i�=����>(.�=�΄=3��<c%8>XCR�w� >&W-�S�>:<T���u��ђ<J�(����} ܽ�U�Fc�>b���v�= \K>Vc����ry��2M=x>ǀ����H>/�>m@��,�O<Ŋ!��᧾�>j����r�=�
g��]@�+`Z��� �E���(�Z=�BR>ҀI�j"��*���<�3���*=�P�w��� �Q��6½k>ȟ�=`�l��~>2c�=��R>Z�6���_�A���}˽��"�U!ҽ"�>;|;>J��E�2=�V�����;?8��Oڽi
>�C��FF=�0�=�̝>�w}�i
���;���ӽ��(>A�'�qF&=�_;�N��=��.� ��ͮ�</v����M*�'�G� V�kQ�=M ���=_���$X�=��뽊x�=ս��>�|ɽ �ڽs��=\*��x�ܽ�轇�����=�������+�޽�8ؽf,
>:Y�=|
ֽ5�>��=|�Խ(q�=���=������=$�=�w߽��>wQ�=�c׽>g�=�\�=�۽���=���=֋�=EA�=&�=����0�=dt�=h@������m�=����*��={N�=��ǽ[Tӽ�;�=�(�=�E��R���G��=�0��q*���Ž��=��ֽ��׽
B����j�/=�=16��~P�ߴq��˽�e��N쳽�+��_�e�q�������jặo�=����tu˼;�=�=�=�����F�޽�����=}�����A�ڝ�=$ ]����r��X�E�k�= �N��f�� �3�sF�����=�h�*ýl#>J�<���?������<�̐������� �}$�=��r��g��wJ�=+���G��;��.�wy�=�p5<����=���$�� �=_� ��L�=�N�%uF�#⽼��=�õ�����]��� ��Fý�p���D���`�;_�9<YH�=Y�=YJ=� >Dt�=I=ּb;�=( =x{�����=�=����$ݽ$=.C��bӽ��v<2�v�G(����=��ݺ�e�KX�=.�y:5��=�2<�"���B�=pq=���⦚=�G�=-^��]���ƾ=� 0��K ��Ę��z�����<s�����%=?6���'��ky���t=�8�=�q=ͥ�=���l{���ͽ�uZ=[|�Xܝ=�ӳ�{��<(s�=��$=
�lً9#m
�-\�<t�z^߽0��w�(��=�ּ�Z㽹ۥ=Uԧ����=Mڽ�,dD=�’=bE��Z��y��=���= }���=W)��&YL<����{v����{����=ի̽���������=�E=y �=KN�=d�=���<��\=_%< H�=�W���g[�Ļ=���=9ѧ�� �=0��<P�@<?����>a6����<�v�=�vL�-�=�`��j|Ѽ&�'<y��>���U�=IKS���>�a�A>ku�����L��>~C˾q���˾���\E����==@�ݰ�����Zɂ�d΁>064>������>�h>u�g�C�>�B>�T^����=ֻJ>xȿ�4ϫ=ru>h3o�PA>��^>_v�}]3>qw}>b�[>-�6>���=�ς�m&[>��>�p6�l�-��O�=mT��yQ>u��=Y�ѽm�����=U�->�$��tDȾ/�>v�g�r������.>�u��Q'��#��=

7
UnitySDK/Assets/ML-Agents/Examples/Bouncer/TFModels/BouncerLearning.nn.meta


fileFormatVersion: 2
guid: 055df42a4cc114162939e523d053c4d7
ScriptedImporter:
  userData: 
  assetBundleName: 
  assetBundleVariant: 
  script: {fileID: 11500000, guid: 83221ad3db87f4b3b91b041047cb2bc5, type: 3}

Some files were not shown because too many files changed in this diff.
