
Update v2-staging from main (March 15) (#5123)

/v2-staging-rebase
GitHub, 4 years ago
Current commit
f16ce486
354 files changed, with 3,026 insertions and 1,897 deletions
  1. .github/workflows/pre-commit.yml (1)
  2. .github/workflows/pytest.yml (23)
  3. .pre-commit-config.yaml (7)
  4. .yamato/com.unity.ml-agents-pack.yml (8)
  5. .yamato/com.unity.ml-agents-test.yml (1)
  6. .yamato/python-ll-api-test.yml (2)
  7. .yamato/standalone-build-test.yml (2)
  8. DevProject/Packages/packages-lock.json (6)
  9. Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawler.demo.meta (2)
  10. Project/Assets/ML-Agents/Examples/Crawler/Prefabs/Crawler.prefab (13)
  11. Project/Assets/ML-Agents/Examples/Crawler/Scenes/Crawler.unity (7)
  12. Project/Assets/ML-Agents/Examples/Crawler/Scripts/CrawlerAgent.cs (84)
  13. Project/Assets/ML-Agents/Examples/FoodCollector/Prefabs/FoodCollectorArea.prefab (380)
  14. Project/Assets/ML-Agents/Examples/FoodCollector/Prefabs/FoodCollectorArea.prefab.meta (5)
  15. Project/Assets/ML-Agents/Examples/FoodCollector/Scenes/FoodCollector.unity (862)
  16. Project/Assets/ML-Agents/Examples/FoodCollector/Scenes/FoodCollector.unity.meta (5)
  17. Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.onnx (1001)
  18. Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.onnx.meta (2)
  19. Project/Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGridWorld.demo.meta (2)
  20. Project/Assets/ML-Agents/Examples/Match3/Prefabs/Match3VectorObs.prefab (14)
  21. Project/Assets/ML-Agents/Examples/Match3/Prefabs/Match3VisualObs.prefab (14)
  22. Project/Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPushBlock.demo.meta (2)
  23. Project/Assets/ML-Agents/Examples/Soccer/Prefabs/SoccerFieldTwos.prefab (139)
  24. Project/Assets/ML-Agents/Examples/Soccer/Prefabs/StrikersVsGoalieField.prefab (236)
  25. Project/Assets/ML-Agents/Examples/Soccer/Scripts/AgentSoccer.cs (57)
  26. Project/Assets/ML-Agents/Examples/Soccer/Scripts/SoccerBallController.cs (12)
  27. Project/Assets/ML-Agents/Examples/Sorter/Scripts/SorterAgent.cs (5)
  28. Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo.meta (2)
  29. Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll/WalkerRagdoll.prefab (2)
  30. Project/Assets/ML-Agents/Examples/Worm/Prefabs/PlatformWorm.prefab (9)
  31. Project/Assets/ML-Agents/Examples/Worm/Prefabs/Worm.prefab (13)
  32. Project/Assets/ML-Agents/Examples/Worm/Scripts/WormAgent.cs (52)
  33. Project/Packages/manifest.json (5)
  34. Project/Project.sln.DotSettings (1)
  35. README.md (16)
  36. com.unity.ml-agents.extensions/Documentation~/Grid-Sensor.md (2)
  37. com.unity.ml-agents.extensions/Documentation~/Match3.md (2)
  38. com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (16)
  39. com.unity.ml-agents.extensions/Runtime/Input/Adaptors/ButtonInputActionAdaptor.cs (7)
  40. com.unity.ml-agents.extensions/Runtime/Input/Adaptors/DoubleInputActionAdaptor.cs (7)
  41. com.unity.ml-agents.extensions/Runtime/Input/Adaptors/FloatInputActionAdaptor.cs (6)
  42. com.unity.ml-agents.extensions/Runtime/Input/Adaptors/IntegerInputActionAdaptor.cs (6)
  43. com.unity.ml-agents.extensions/Runtime/Input/Adaptors/Vector2InputActionAdaptor.cs (7)
  44. com.unity.ml-agents.extensions/Runtime/Input/IRLActionInputAdaptor.cs (4)
  45. com.unity.ml-agents.extensions/Runtime/Input/InputActionActuator.cs (12)
  46. com.unity.ml-agents.extensions/Runtime/Input/InputActuatorComponent.cs (63)
  47. com.unity.ml-agents.extensions/Runtime/Input/Unity.ML-Agents.Extensions.Input.asmdef (4)
  48. com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/ButtonInputActionAdaptorTests.cs (12)
  49. com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/DoubleInputActionAdaptorTests.cs (12)
  50. com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/FloatInputActionAdapatorTests.cs (12)
  51. com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/IntegerInputActionAdaptorTests.cs (12)
  52. com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/Vector2InputActionAdaptorTests.cs (12)
  53. com.unity.ml-agents.extensions/Tests/Runtime/Input/InputActionActuatorTests.cs (17)
  54. com.unity.ml-agents.extensions/Tests/Runtime/Input/InputActuatorComponentTests.cs (2)
  55. com.unity.ml-agents.extensions/Tests/Runtime/Input/Unity.ML-Agents.Extensions.Input.Tests.Runtime.asmdef (2)
  56. com.unity.ml-agents.extensions/package.json (4)
  57. com.unity.ml-agents/.gitignore (4)
  58. com.unity.ml-agents/CHANGELOG.md (22)
  59. com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (4)
  60. com.unity.ml-agents/Runtime/Academy.cs (8)
  61. com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (2)
  62. com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (2)
  63. com.unity.ml-agents/Runtime/Agent.cs (26)
  64. com.unity.ml-agents/Runtime/Analytics/InferenceAnalytics.cs (28)
  65. com.unity.ml-agents/Runtime/Analytics/TrainingAnalytics.cs (38)
  66. com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs (33)
  67. com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs (4)
  68. com.unity.ml-agents/Runtime/Communicator/UnityRLCapabilities.cs (5)
  69. com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (2)
  70. com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Capabilities.cs (40)
  71. com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Observation.cs (48)
  72. com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs (7)
  73. com.unity.ml-agents/Runtime/Policies/RemotePolicy.cs (3)
  74. com.unity.ml-agents/Runtime/SimpleMultiAgentGroup.cs (2)
  75. com.unity.ml-agents/Runtime/Unity.ML-Agents.asmdef (9)
  76. com.unity.ml-agents/Tests/Editor/Analytics/InferenceAnalyticsTests.cs (3)
  77. com.unity.ml-agents/Tests/Editor/Communicator/GrpcExtensionsTests.cs (6)
  78. com.unity.ml-agents/Tests/Editor/Unity.ML-Agents.Editor.Tests.asmdef (7)
  79. com.unity.ml-agents/package.json (7)
  80. config/imitation/Crawler.yaml (6)
  81. config/ppo/FoodCollector.yaml (4)
  82. config/ppo/Crawler.yaml (2)
  83. config/ppo/Walker.yaml (4)
  84. config/ppo/Worm.yaml (2)
  85. config/sac/FoodCollector.yaml (10)
  86. config/sac/Crawler.yaml (2)
  87. config/sac/Walker.yaml (2)
  88. config/sac/Worm.yaml (2)
  89. docs/Getting-Started.md (8)
  90. docs/Installation-Anaconda-Windows.md (8)
  91. docs/Installation.md (17)
  92. docs/Learning-Environment-Design-Agents.md (119)
  93. docs/Learning-Environment-Examples.md (160)
  94. docs/ML-Agents-Overview.md (33)
  95. docs/Training-Configuration-File.md (12)
  96. docs/Training-ML-Agents.md (2)
  97. docs/Training-on-Amazon-Web-Service.md (2)
  98. docs/Training-on-Microsoft-Azure.md (2)
  99. docs/Unity-Inference-Engine.md (4)
  100. docs/images/example-envs.png (999)

.github/workflows/pre-commit.yml (1)


- uses: actions/setup-dotnet@v1
with:
dotnet-version: '3.1.x'
- run: dotnet tool install -g dotnet-format --version 4.1.131201
- uses: pre-commit/action@v2.0.0
markdown-link-check:

.github/workflows/pytest.yml (23)


TEST_ENFORCE_BUFFER_KEY_TYPES: 1
strategy:
matrix:
python-version: [3.6.x, 3.7.x, 3.8.x]
python-version: [3.6.x, 3.7.x, 3.8.x, 3.9.x]
include:
- python-version: 3.6.x
pip_constraints: test_constraints_min_version.txt
- python-version: 3.7.x
pip_constraints: test_constraints_mid_version.txt
- python-version: 3.8.x
pip_constraints: test_constraints_mid_version.txt
- python-version: 3.9.x
pip_constraints: test_constraints_max_version.txt
steps:
- uses: actions/checkout@v2
- name: Set up Python

# This path is specific to Ubuntu
path: ~/.cache/pip
# Look to see if there is a cache hit for the corresponding requirements file
key: ${{ runner.os }}-pip-${{ hashFiles('ml-agents/setup.py', 'ml-agents-envs/setup.py', 'gym-unity/setup.py', 'test_requirements.txt') }}
key: ${{ runner.os }}-pip-${{ hashFiles('ml-agents/setup.py', 'ml-agents-envs/setup.py', 'gym-unity/setup.py', 'test_requirements.txt', matrix.pip_constraints) }}
restore-keys: |
${{ runner.os }}-pip-
${{ runner.os }}-

run: |
python -m pip install --upgrade pip
python -m pip install --upgrade setuptools
python -m pip install --progress-bar=off -e ./ml-agents-envs
python -m pip install --progress-bar=off -e ./ml-agents
python -m pip install --progress-bar=off -r test_requirements.txt
python -m pip install --progress-bar=off -e ./gym-unity
python -m pip install --progress-bar=off -e ./ml-agents-plugin-examples
python -m pip install --progress-bar=off -e ./ml-agents-envs -c ${{ matrix.pip_constraints }}
python -m pip install --progress-bar=off -e ./ml-agents -c ${{ matrix.pip_constraints }}
python -m pip install --progress-bar=off -r test_requirements.txt -c ${{ matrix.pip_constraints }}
python -m pip install --progress-bar=off -e ./gym-unity -c ${{ matrix.pip_constraints }}
python -m pip install --progress-bar=off -e ./ml-agents-plugin-examples -c ${{ matrix.pip_constraints }}
- name: Save python dependencies
run: |
pip freeze > pip_versions-${{ matrix.python-version }}.txt

.pre-commit-config.yaml (7)


types: [markdown]
exclude: ".*localized.*"
- repo: https://github.com/dotnet/format
rev: "7e343070a0355c86f72bdee226b5e19ffcbac931" # TODO - update to a tagged version when one that includes the hook is ready.
hooks:
- id: dotnet-format
args: [--folder, --include]
# "Local" hooks, see https://pre-commit.com/#repository-local-hooks
- repo: local
hooks:

name: validate release links
language: script
entry: utils/validate_release_links.py

.yamato/com.unity.ml-agents-pack.yml (8)


image: package-ci/ubuntu:stable
flavor: b1.small
commands:
- npm install upm-ci-utils@stable -g --registry https://artifactory.prd.cds.internal.unity3d.com/artifactory/api/npm/upm-npm
- upm-ci project pack --project-path Project
- |
python3 -m pip install unity-downloader-cli --index-url https://artifactory.prd.it.unity3d.com/artifactory/api/pypi/pypi/simple --upgrade
unity-downloader-cli -u 2018.4 -c editor --wait --fast
./.Editor/Unity -projectPath Project -batchMode -executeMethod Unity.MLAgents.SampleExporter.ExportCuratedSamples -logFile -
npm install upm-ci-utils@stable -g --registry https://artifactory.prd.cds.internal.unity3d.com/artifactory/api/npm/upm-npm
upm-ci project pack --project-path Project
artifacts:
packages:
paths:

.yamato/com.unity.ml-agents-test.yml (1)


{% endfor %}
{% endfor %}
{% endfor %}

.yamato/python-ll-api-test.yml (2)


python ml-agents/tests/yamato/scripts/run_llapi.py
python ml-agents/tests/yamato/scripts/run_llapi.py --env=artifacts/testPlayer-Basic
python ml-agents/tests/yamato/scripts/run_llapi.py --env=artifacts/testPlayer-WallJump
python ml-agents/tests/yamato/scripts/run_llapi.py --env=artifacts/testPlayer-Bouncer
python ml-agents/tests/yamato/scripts/run_llapi.py --env=artifacts/testPlayer-Match3
dependencies:
- .yamato/standalone-build-test.yml#test_linux_standalone_{{ editor.version }}
triggers:

.yamato/standalone-build-test.yml (2)


- unity-downloader-cli -u {{ editor.version }} -c editor --wait --fast
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/Examples/Basic/Scenes/Basic.unity
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/Examples/Bouncer/Scenes/Bouncer.unity
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/Examples/Match3/Scenes/Match3.unity
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/Examples/WallJump/Scenes/WallJump.unity
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/TestScenes/TestCompressedGrid/TestGridCompressed.unity
- python3 -u -m ml-agents.tests.yamato.standalone_build_tests --build-target=linux --scene=Assets/ML-Agents/TestScenes/TestCompressedTexture/TestTextureCompressed.unity

DevProject/Packages/packages-lock.json (6)


"url": "https://artifactory.prd.cds.internal.unity3d.com/artifactory/api/npm/upm-candidates"
},
"com.unity.barracuda": {
"version": "1.3.0-preview",
"version": "1.3.1-preview",
"depth": 1,
"source": "registry",
"dependencies": {

"depth": 0,
"source": "local",
"dependencies": {
"com.unity.barracuda": "1.3.0-preview",
"com.unity.barracuda": "1.3.1-preview",
"com.unity.modules.imageconversion": "1.0.0",
"com.unity.modules.jsonserialize": "1.0.0",
"com.unity.modules.physics": "1.0.0",

"depth": 0,
"source": "local",
"dependencies": {
"com.unity.ml-agents": "1.7.2-preview"
"com.unity.ml-agents": "1.8.0-preview"
}
},
"com.unity.multiplayer-hlapi": {

Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawler.demo.meta (2)


guid: 34586a8d0f1c342a49973b36a609e73b
ScriptedImporter:
fileIDToRecycleName:
11400002: Assets/ML-Agents/Examples/Crawler/Demos/ExpCrawlerDynVS.demo
11400002: Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawler.demo
externalObjects: {}
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:

Project/Assets/ML-Agents/Examples/Crawler/Prefabs/Crawler.prefab (13)


VectorActionDescriptions: []
VectorActionSpaceType: 1
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: c6509001ba679447fba27f894761c3ba, type: 3}
m_Model: {fileID: 11400000, guid: 0d9a992c217a44684b41c7663f3eab3d, type: 3}
m_BehaviorName:
m_BehaviorName: Crawler
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

maxStep: 0
hasUpgradedFromAgentParameters: 1
MaxStep: 5000
typeOfCrawler: 0
crawlerDyModel: {fileID: 11400000, guid: 2dc51465533e7468d8bcafc17250cebf, type: 3}
crawlerDyVSModel: {fileID: 11400000, guid: 0d9a992c217a44684b41c7663f3eab3d, type: 3}
crawlerStModel: {fileID: 11400000, guid: e88b5542c96104c01b56f1ed82d8ccc8, type: 3}
crawlerStVSModel: {fileID: 11400000, guid: e0800a8eb11a34c138fa8186124af9dc, type: 3}
dynamicTargetPrefab: {fileID: 3839136118347789758, guid: 46734abd0de454192b407379c6a4ab8d,
type: 3}
staticTargetPrefab: {fileID: 3839136118347789758, guid: 2173d15c0b5fc49e5870c9d1c7f7ee8e,
TargetPrefab: {fileID: 3839136118347789758, guid: 46734abd0de454192b407379c6a4ab8d,
type: 3}
body: {fileID: 4845971001588102148}
leg0Upper: {fileID: 4845971001327157979}

Project/Assets/ML-Agents/Examples/Crawler/Scenes/Crawler.unity (7)


m_Name:
m_EditorClassIdentifier:
target: {fileID: 1018218737}
smoothingTime: 0
--- !u!1001 &1481808307
PrefabInstance:
m_ObjectHideFlags: 0

propertyPath: typeOfCrawler
value: 1
objectReference: {fileID: 0}
- target: {fileID: 3421283062001101770, guid: 0058b366f9d6d44a3ba35beb06b0174b,
type: 3}
propertyPath: TargetPrefab
value:
objectReference: {fileID: 3839136118347789758, guid: 46734abd0de454192b407379c6a4ab8d,
type: 3}
- target: {fileID: 6810587057221831324, guid: 0058b366f9d6d44a3ba35beb06b0174b,
type: 3}
propertyPath: m_LocalPosition.x

Project/Assets/ML-Agents/Examples/Crawler/Scripts/CrawlerAgent.cs (84)


[RequireComponent(typeof(JointDriveController))] // Required to set joint forces
public class CrawlerAgent : Agent
{
//The type of crawler behavior we want to use.
//This setting will determine how the agent is set up during initialization.
public enum CrawlerAgentBehaviorType
{
CrawlerDynamic,
CrawlerDynamicVariableSpeed,
CrawlerStatic,
CrawlerStaticVariableSpeed
}
[Tooltip(
"VariableSpeed - The agent will sample random speed magnitudes while training.\n" +
"Dynamic - The agent will run towards a target that changes position.\n" +
"Static - The agent will run towards a static target. "
)]
public CrawlerAgentBehaviorType typeOfCrawler;
//Crawler Brains
//A different brain will be used depending on the CrawlerAgentBehaviorType selected
[Header("NN Models")] public NNModel crawlerDyModel;
public NNModel crawlerDyVSModel;
public NNModel crawlerStModel;
public NNModel crawlerStVSModel;
[Header("Walk Speed")]
[Range(0.1f, m_maxWalkingSpeed)]

set { m_TargetWalkingSpeed = Mathf.Clamp(value, .1f, m_maxWalkingSpeed); }
}
//Should the agent sample a new goal velocity each episode?
//If true, TargetWalkingSpeed will be randomly set between 0.1 and m_maxWalkingSpeed in OnEpisodeBegin()
//If false, the goal velocity will be m_maxWalkingSpeed
private bool m_RandomizeWalkSpeedEachEpisode;
[Header("Target To Walk Towards")] public Transform dynamicTargetPrefab; //Target prefab to use in Dynamic envs
public Transform staticTargetPrefab; //Target prefab to use in Static envs
[Header("Target To Walk Towards")]
public Transform TargetPrefab; //Target prefab to use in Dynamic envs
private Transform m_Target; //Target the agent will walk towards during training.
[Header("Body Parts")] [Space(10)] public Transform body;

public override void Initialize()
{
SetAgentType();
SpawnTarget(TargetPrefab, transform.position); //spawn target
m_OrientationCube = GetComponentInChildren<OrientationCubeController>();
m_DirectionIndicator = GetComponentInChildren<DirectionIndicator>();

}
/// <summary>
/// Set up the agent based on the typeOfCrawler
/// </summary>
void SetAgentType()
{
var behaviorParams = GetComponent<Unity.MLAgents.Policies.BehaviorParameters>();
switch (typeOfCrawler)
{
case CrawlerAgentBehaviorType.CrawlerDynamic:
{
behaviorParams.BehaviorName = "CrawlerDynamic"; //set behavior name
if (crawlerDyModel)
behaviorParams.Model = crawlerDyModel; //assign the model
m_RandomizeWalkSpeedEachEpisode = false; //do not randomize m_TargetWalkingSpeed during training
SpawnTarget(dynamicTargetPrefab, transform.position); //spawn target
break;
}
case CrawlerAgentBehaviorType.CrawlerDynamicVariableSpeed:
{
behaviorParams.BehaviorName = "CrawlerDynamicVariableSpeed"; //set behavior name
if (crawlerDyVSModel)
behaviorParams.Model = crawlerDyVSModel; //assign the model
m_RandomizeWalkSpeedEachEpisode = true; //randomize m_TargetWalkingSpeed during training
SpawnTarget(dynamicTargetPrefab, transform.position); //spawn target
break;
}
case CrawlerAgentBehaviorType.CrawlerStatic:
{
behaviorParams.BehaviorName = "CrawlerStatic"; //set behavior name
if (crawlerStModel)
behaviorParams.Model = crawlerStModel; //assign the model
m_RandomizeWalkSpeedEachEpisode = false; //do not randomize m_TargetWalkingSpeed during training
SpawnTarget(staticTargetPrefab, transform.TransformPoint(new Vector3(0, 0, 1000))); //spawn target
break;
}
case CrawlerAgentBehaviorType.CrawlerStaticVariableSpeed:
{
behaviorParams.BehaviorName = "CrawlerStaticVariableSpeed"; //set behavior name
if (crawlerStVSModel)
behaviorParams.Model = crawlerStVSModel; //assign the model
m_RandomizeWalkSpeedEachEpisode = true; //randomize m_TargetWalkingSpeed during training
SpawnTarget(staticTargetPrefab, transform.TransformPoint(new Vector3(0, 0, 1000))); //spawn target
break;
}
}
}
/// <summary>
/// Loop over body parts and reset them to initial conditions.
/// </summary>
public override void OnEpisodeBegin()

UpdateOrientationObjects();
//Set our goal walking speed
TargetWalkingSpeed =
m_RandomizeWalkSpeedEachEpisode ? Random.Range(0.1f, m_maxWalkingSpeed) : TargetWalkingSpeed;
TargetWalkingSpeed = Random.Range(0.1f, m_maxWalkingSpeed);
}
/// <summary>
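
Taken together, the CrawlerAgent changes above remove the four per-type NNModel fields and the SetAgentType switch in favor of a single TargetPrefab and a goal speed that is re-sampled every episode. A rough C# sketch of that simplified shape follows; the field names mirror the diff, but the 15-unit speed cap and the plain Instantiate call (standing in for the SpawnTarget helper) are assumptions, not code from this PR.

using UnityEngine;
using Unity.MLAgents;

// Illustrative only: a trimmed-down agent mirroring the single-TargetPrefab setup from the diff above.
public class SimplifiedCrawlerAgent : Agent
{
    // Assumed cap; the real constant lives in CrawlerAgent.cs and is not shown in full here.
    const float k_MaxWalkingSpeed = 15f;

    [Header("Target To Walk Towards")]
    public Transform TargetPrefab; // one prefab field replaces dynamicTargetPrefab/staticTargetPrefab

    Transform m_Target; // target the agent walks towards during training

    public float TargetWalkingSpeed { get; private set; }

    public override void Initialize()
    {
        // The diff calls a SpawnTarget helper; a plain Instantiate stands in for it here.
        m_Target = Instantiate(TargetPrefab, transform.position, Quaternion.identity, transform.parent);
    }

    public override void OnEpisodeBegin()
    {
        // No more SetAgentType branches: every episode samples a fresh goal walking speed.
        TargetWalkingSpeed = Random.Range(0.1f, k_MaxWalkingSpeed);
    }
}

With only one behavior left, the prefab can hard-code m_BehaviorName: Crawler and a single m_Model reference, which is exactly what the Crawler.prefab and Crawler.unity diffs above do.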

Project/Assets/ML-Agents/Examples/FoodCollector/Prefabs/FoodCollectorArea.prefab (380)


- component: {fileID: 54936164982484646}
- component: {fileID: 114374774605792098}
- component: {fileID: 114176228333253036}
- component: {fileID: 114725457980523372}
- component: {fileID: 6035497842152854922}
m_Layer: 0
m_Name: Agent
m_TagString: agent

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 4
VectorObservationSize: 0
NumStackedVectorObservations: 1
m_ActionSpec:
m_NumContinuousActions: 3

VectorActionSpaceType: 1
VectorActionSpaceType: 0
m_Model: {fileID: 11400000, guid: 3210b528a2bc44a86bd6bd1d571070f8, type: 3}
m_Model: {fileID: 11400000, guid: 75910f45f20be49b18e2b95879a217b2, type: 3}
m_BehaviorName: FoodCollector
m_BehaviorName: GridFoodCollector
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

goodMaterial: {fileID: 2100000, guid: c67450f290f3e4897bc40276a619e78d, type: 2}
frozenMaterial: {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
myLaser: {fileID: 1081721624670010}
contribute: 1
useVectorObs: 1
contribute: 0
useVectorObs: 0
--- !u!114 &114725457980523372
--- !u!114 &8297075921230369060
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 6bb6b867a41448888c1cd4f99643ad71, type: 3}
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_SensorName: RayPerceptionSensor
m_DetectableTags:
- food
- agent
- wall
- badFood
- frozenAgent
m_RaysPerDirection: 3
m_MaxRayDegrees: 70
m_SphereCastRadius: 0.5
m_RayLength: 50
m_RayLayerMask:
serializedVersion: 2
m_Bits: 4294967291
m_ObservationStacks: 1
rayHitColor: {r: 1, g: 0, b: 0, a: 1}
rayMissColor: {r: 1, g: 1, b: 1, a: 1}
m_StartVerticalOffset: 0
m_EndVerticalOffset: 0
--- !u!114 &8297075921230369060
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &1222199865870203693
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Script: {fileID: 11500000, guid: 3a6da8f78a394c6ab027688eab81e04d, type: 3}
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &1222199865870203693
debugCommandLineOverride:
--- !u!114 &6035497842152854922
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a6da8f78a394c6ab027688eab81e04d, type: 3}
m_Script: {fileID: 11500000, guid: 801669c0cdece6b40b2e741ad0b119ac, type: 3}
debugCommandLineOverride:
Name:
CellScaleX: 1
CellScaleZ: 1
GridNumSideX: 40
GridNumSideZ: 40
CellScaleY: 0.01
RotateToAgent: 1
ChannelDepth: 06000000
DetectableObjects:
- food
- agent
- wall
- badFood
- frozenAgent
ObserveMask:
serializedVersion: 2
m_Bits: 307
gridDepthType: 1
rootReference: {fileID: 0}
ObservationPerCell: 0
NumberOfObservations: 0
ChannelOffsets:
DebugColors:
- {r: 0.4039216, g: 0.7372549, b: 0.41960788, a: 0}
- {r: 0.12941177, g: 0.5882353, b: 0.95294124, a: 0}
- {r: 0.3921569, g: 0.3921569, b: 0.3921569, a: 0}
- {r: 0.74509805, g: 0.227451, b: 0.15294118, a: 0}
- {r: 0, g: 0, b: 0, a: 0}
GizmoYOffset: 0
ShowGizmos: 0
CompressionType: 1
--- !u!1 &1482701732800114
GameObject:
m_ObjectHideFlags: 0

- component: {fileID: 54504078365531932}
- component: {fileID: 114522573150607728}
- component: {fileID: 114711827726849508}
- component: {fileID: 114443152683847924}
- component: {fileID: 3067525015186813280}
m_Layer: 0
m_Name: Agent (1)
m_TagString: agent

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 4
VectorObservationSize: 0
NumStackedVectorObservations: 1
m_ActionSpec:
m_NumContinuousActions: 3

VectorActionSpaceType: 1
VectorActionSpaceType: 0
m_Model: {fileID: 11400000, guid: 3210b528a2bc44a86bd6bd1d571070f8, type: 3}
m_Model: {fileID: 11400000, guid: 75910f45f20be49b18e2b95879a217b2, type: 3}
m_BehaviorName: FoodCollector
m_BehaviorName: GridFoodCollector
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

frozenMaterial: {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
myLaser: {fileID: 1941433838307300}
contribute: 0
useVectorObs: 1
useVectorObs: 0
--- !u!114 &114443152683847924
--- !u!114 &259154752087955944
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 6bb6b867a41448888c1cd4f99643ad71, type: 3}
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_SensorName: RayPerceptionSensor
m_DetectableTags:
- food
- agent
- wall
- badFood
- frozenAgent
m_RaysPerDirection: 3
m_MaxRayDegrees: 70
m_SphereCastRadius: 0.5
m_RayLength: 50
m_RayLayerMask:
serializedVersion: 2
m_Bits: 4294967291
m_ObservationStacks: 1
rayHitColor: {r: 1, g: 0, b: 0, a: 1}
rayMissColor: {r: 1, g: 1, b: 1, a: 1}
m_StartVerticalOffset: 0
m_EndVerticalOffset: 0
--- !u!114 &259154752087955944
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &3067525015186813280
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Script: {fileID: 11500000, guid: 801669c0cdece6b40b2e741ad0b119ac, type: 3}
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
Name:
CellScaleX: 1
CellScaleZ: 1
GridNumSideX: 40
GridNumSideZ: 40
CellScaleY: 0.01
RotateToAgent: 1
ChannelDepth: 06000000
DetectableObjects:
- food
- agent
- wall
- badFood
- frozenAgent
ObserveMask:
serializedVersion: 2
m_Bits: 307
gridDepthType: 1
rootReference: {fileID: 0}
ObservationPerCell: 0
NumberOfObservations: 0
ChannelOffsets:
DebugColors:
- {r: 0.4039216, g: 0.7372549, b: 0.41960788, a: 0}
- {r: 0.12941177, g: 0.5882353, b: 0.95294124, a: 0}
- {r: 0.3921569, g: 0.3921569, b: 0.3921569, a: 0}
- {r: 0.74509805, g: 0.227451, b: 0.15294118, a: 0}
- {r: 0, g: 0, b: 0, a: 0}
GizmoYOffset: 0
ShowGizmos: 0
CompressionType: 1
--- !u!1 &1528397385587768
GameObject:
m_ObjectHideFlags: 0

- component: {fileID: 54961653455021136}
- component: {fileID: 114980787530065684}
- component: {fileID: 114542632553128056}
- component: {fileID: 114986980423924774}
- component: {fileID: 8466013622553267624}
m_Layer: 0
m_Name: Agent (2)
m_TagString: agent

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 4
VectorObservationSize: 0
NumStackedVectorObservations: 1
m_ActionSpec:
m_NumContinuousActions: 3

VectorActionSpaceType: 1
VectorActionSpaceType: 0
m_Model: {fileID: 11400000, guid: 3210b528a2bc44a86bd6bd1d571070f8, type: 3}
m_Model: {fileID: 11400000, guid: 75910f45f20be49b18e2b95879a217b2, type: 3}
m_BehaviorName: FoodCollector
m_BehaviorName: GridFoodCollector
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

frozenMaterial: {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
myLaser: {fileID: 1421240237750412}
contribute: 0
useVectorObs: 1
useVectorObs: 0
--- !u!114 &114986980423924774
--- !u!114 &5519119940433428255
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 6bb6b867a41448888c1cd4f99643ad71, type: 3}
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_SensorName: RayPerceptionSensor
m_DetectableTags:
- food
- agent
- wall
- badFood
- frozenAgent
m_RaysPerDirection: 3
m_MaxRayDegrees: 70
m_SphereCastRadius: 0.5
m_RayLength: 50
m_RayLayerMask:
serializedVersion: 2
m_Bits: 4294967291
m_ObservationStacks: 1
rayHitColor: {r: 1, g: 0, b: 0, a: 1}
rayMissColor: {r: 1, g: 1, b: 1, a: 1}
m_StartVerticalOffset: 0
m_EndVerticalOffset: 0
--- !u!114 &5519119940433428255
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &8466013622553267624
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Script: {fileID: 11500000, guid: 801669c0cdece6b40b2e741ad0b119ac, type: 3}
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
Name:
CellScaleX: 1
CellScaleZ: 1
GridNumSideX: 40
GridNumSideZ: 40
CellScaleY: 0.01
RotateToAgent: 1
ChannelDepth: 06000000
DetectableObjects:
- food
- agent
- wall
- badFood
- frozenAgent
ObserveMask:
serializedVersion: 2
m_Bits: 307
gridDepthType: 1
rootReference: {fileID: 0}
ObservationPerCell: 0
NumberOfObservations: 0
ChannelOffsets:
DebugColors:
- {r: 0.4039216, g: 0.7372549, b: 0.41960788, a: 0}
- {r: 0.12941177, g: 0.5882353, b: 0.95294124, a: 0}
- {r: 0.3921569, g: 0.3921569, b: 0.3921569, a: 0}
- {r: 0.74509805, g: 0.227451, b: 0.15294118, a: 0}
- {r: 0, g: 0, b: 0, a: 0}
GizmoYOffset: 0
ShowGizmos: 0
CompressionType: 1
--- !u!1 &1617924810425504
GameObject:
m_ObjectHideFlags: 0

- component: {fileID: 54819001862035794}
- component: {fileID: 114878550018296316}
- component: {fileID: 114189751434580810}
- component: {fileID: 114644889237473510}
- component: {fileID: 6247312751399400490}
m_Layer: 0
m_Name: Agent (4)
m_TagString: agent

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 4
VectorObservationSize: 0
NumStackedVectorObservations: 1
m_ActionSpec:
m_NumContinuousActions: 3

VectorActionSpaceType: 1
VectorActionSpaceType: 0
m_Model: {fileID: 11400000, guid: 3210b528a2bc44a86bd6bd1d571070f8, type: 3}
m_Model: {fileID: 11400000, guid: 75910f45f20be49b18e2b95879a217b2, type: 3}
m_BehaviorName: FoodCollector
m_BehaviorName: GridFoodCollector
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

frozenMaterial: {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
myLaser: {fileID: 1617924810425504}
contribute: 0
useVectorObs: 1
useVectorObs: 0
--- !u!114 &114644889237473510
--- !u!114 &5884750436653390196
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 6bb6b867a41448888c1cd4f99643ad71, type: 3}
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_SensorName: RayPerceptionSensor
m_DetectableTags:
- food
- agent
- wall
- badFood
- frozenAgent
m_RaysPerDirection: 3
m_MaxRayDegrees: 70
m_SphereCastRadius: 0.5
m_RayLength: 50
m_RayLayerMask:
serializedVersion: 2
m_Bits: 4294967291
m_ObservationStacks: 1
rayHitColor: {r: 1, g: 0, b: 0, a: 1}
rayMissColor: {r: 1, g: 1, b: 1, a: 1}
m_StartVerticalOffset: 0
m_EndVerticalOffset: 0
--- !u!114 &5884750436653390196
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &6247312751399400490
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Script: {fileID: 11500000, guid: 801669c0cdece6b40b2e741ad0b119ac, type: 3}
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
Name:
CellScaleX: 1
CellScaleZ: 1
GridNumSideX: 40
GridNumSideZ: 40
CellScaleY: 0.01
RotateToAgent: 1
ChannelDepth: 06000000
DetectableObjects:
- food
- agent
- wall
- badFood
- frozenAgent
ObserveMask:
serializedVersion: 2
m_Bits: 307
gridDepthType: 1
rootReference: {fileID: 0}
ObservationPerCell: 0
NumberOfObservations: 0
ChannelOffsets:
DebugColors:
- {r: 0.4039216, g: 0.7372549, b: 0.41960788, a: 0}
- {r: 0.12941177, g: 0.5882353, b: 0.95294124, a: 0}
- {r: 0.3921569, g: 0.3921569, b: 0.3921569, a: 0}
- {r: 0.74509805, g: 0.227451, b: 0.15294118, a: 0}
- {r: 0, g: 0, b: 0, a: 0}
GizmoYOffset: 0
ShowGizmos: 0
CompressionType: 1
--- !u!1 &1688105343773098
GameObject:
m_ObjectHideFlags: 0

- component: {fileID: 54895479068989492}
- component: {fileID: 114035338027591536}
- component: {fileID: 114235147148547996}
- component: {fileID: 114276061479012222}
- component: {fileID: 5837508007780682603}
m_Layer: 0
m_Name: Agent (3)
m_TagString: agent

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 4
VectorObservationSize: 0
NumStackedVectorObservations: 1
m_ActionSpec:
m_NumContinuousActions: 3

VectorActionSpaceType: 1
VectorActionSpaceType: 0
m_Model: {fileID: 11400000, guid: 3210b528a2bc44a86bd6bd1d571070f8, type: 3}
m_Model: {fileID: 11400000, guid: 75910f45f20be49b18e2b95879a217b2, type: 3}
m_BehaviorName: FoodCollector
m_BehaviorName: GridFoodCollector
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

frozenMaterial: {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
myLaser: {fileID: 1045923826166930}
contribute: 0
useVectorObs: 1
useVectorObs: 0
--- !u!114 &114276061479012222
--- !u!114 &4768752321433982785
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 6bb6b867a41448888c1cd4f99643ad71, type: 3}
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_SensorName: RayPerceptionSensor
m_DetectableTags:
- food
- agent
- wall
- badFood
- frozenAgent
m_RaysPerDirection: 3
m_MaxRayDegrees: 70
m_SphereCastRadius: 0.5
m_RayLength: 50
m_RayLayerMask:
serializedVersion: 2
m_Bits: 4294967291
m_ObservationStacks: 1
rayHitColor: {r: 1, g: 0, b: 0, a: 1}
rayMissColor: {r: 1, g: 1, b: 1, a: 1}
m_StartVerticalOffset: 0
m_EndVerticalOffset: 0
--- !u!114 &4768752321433982785
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &5837508007780682603
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Script: {fileID: 11500000, guid: 801669c0cdece6b40b2e741ad0b119ac, type: 3}
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
Name:
CellScaleX: 1
CellScaleZ: 1
GridNumSideX: 40
GridNumSideZ: 40
CellScaleY: 0.01
RotateToAgent: 1
ChannelDepth: 06000000
DetectableObjects:
- food
- agent
- wall
- badFood
- frozenAgent
ObserveMask:
serializedVersion: 2
m_Bits: 307
gridDepthType: 1
rootReference: {fileID: 0}
ObservationPerCell: 0
NumberOfObservations: 0
ChannelOffsets:
DebugColors:
- {r: 0.4039216, g: 0.7372549, b: 0.41960788, a: 0}
- {r: 0.12941177, g: 0.5882353, b: 0.95294124, a: 0}
- {r: 0.3921569, g: 0.3921569, b: 0.3921569, a: 0}
- {r: 0.74509805, g: 0.227451, b: 0.15294118, a: 0}
- {r: 0, g: 0, b: 0, a: 0}
GizmoYOffset: 0
ShowGizmos: 0
CompressionType: 1
--- !u!1 &1729825611722018
GameObject:
m_ObjectHideFlags: 0

- component: {fileID: 4688212428263696}
- component: {fileID: 114181230191376748}
m_Layer: 0
m_Name: FoodCollectorArea
m_Name: GridFoodCollectorArea
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
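
The prefab diff above switches each agent from the FoodCollector behavior to GridFoodCollector: the vector observation size drops from 4 to 0, useVectorObs flips from 1 to 0, the ray-perception sensor script reference is swapped out, and a grid sensor component (the one carrying CellScaleX, GridNumSideX, DetectableObjects, and so on, apparently from the extensions package) is added. The FoodCollectorAgent script itself is not part of this diff, so the following is only a rough C# sketch of how a useVectorObs flag of this kind typically gates the hand-written observations; the observation names and flags are placeholders.

using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

// Illustrative only: how a useVectorObs flag gates hand-written vector observations.
// With the flag off (as in the GridFoodCollector variant above), the agent adds nothing
// itself and relies entirely on its attached sensor components, here the grid sensor.
public class VectorObsToggleAgent : Agent
{
    public bool useVectorObs; // serialized field flipped from 1 to 0 in the prefab diff
    Rigidbody m_AgentRb;

    public override void Initialize()
    {
        m_AgentRb = GetComponent<Rigidbody>();
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        if (useVectorObs)
        {
            // Matches a vector observation size of 4: planar local velocity plus two status flags.
            var localVelocity = transform.InverseTransformDirection(m_AgentRb.velocity);
            sensor.AddObservation(localVelocity.x);
            sensor.AddObservation(localVelocity.z);
            sensor.AddObservation(false); // placeholder for a "frozen" flag
            sensor.AddObservation(false); // placeholder for a "shoot/laser" flag
        }
    }
}

With useVectorObs off, setting the BehaviorParameters vector observation size to 0 keeps the observation shape consistent, and renaming the behavior to GridFoodCollector points the prefab at the new grid-trained model referenced by the changed m_Model GUID.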

Project/Assets/ML-Agents/Examples/FoodCollector/Prefabs/FoodCollectorArea.prefab.meta (5)


fileFormatVersion: 2
guid: 38400a68c4ea54b52998e34ee238d1a7
NativeFormatImporter:
guid: b5339e4b990ade14f992aadf3bf8591b
PrefabImporter:
mainObjectFileID: 100100000
userData:
assetBundleName:
assetBundleVariant:

Project/Assets/ML-Agents/Examples/FoodCollector/Scenes/FoodCollector.unity (862)


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.4497121, g: 0.49977785, b: 0.57563704, a: 1}
m_IndirectSpecularColor: {r: 0.44971168, g: 0.4997775, b: 0.57563686, a: 1}
m_UseRadianceAmbientProbe: 0
--- !u!157 &3
LightmapSettings:

m_PVRFilterTypeDirect: 0
m_PVRFilterTypeIndirect: 0
m_PVRFilterTypeAO: 0
m_PVRFilteringMode: 1
m_PVRFilteringMode: 2
m_PVRCulling: 1
m_PVRFilteringGaussRadiusDirect: 1
m_PVRFilteringGaussRadiusIndirect: 5

debug:
m_Flags: 0
m_NavMeshData: {fileID: 0}
--- !u!1001 &89545475
--- !u!1001 &190823800
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2

- target: {fileID: 1819751139121548, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: FoodCollectorArea (1)
value: GridFoodCollectorArea
objectReference: {fileID: 0}
- target: {fileID: 4137908820211030, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: -17.2
objectReference: {fileID: 0}
- target: {fileID: 4259834826122778, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: -23.9
objectReference: {fileID: 0}
- target: {fileID: 4419274671784554, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: -8.9
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: -50
value: 0
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: 7
value: 6
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
- target: {fileID: 4756368533889646, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: -30.4
objectReference: {fileID: 0}
- target: {fileID: 4756368533889646, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.z
value: -9.9
objectReference: {fileID: 0}
- target: {fileID: 3067525015186813280, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: NumCollidersPerCell
value: 1
objectReference: {fileID: 0}
- target: {fileID: 3067525015186813280, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: EstimatedMaxCollidersPerCell
value: 4
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: ChannelOffsets.Array.size
value: 1
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: ShowGizmos
value: 0
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: ObservationPerCell
value: 6
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: NumberOfObservations
value: 9600
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: m_Enabled
value: 1
objectReference: {fileID: 0}
- target: {fileID: 5837508007780682603, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
propertyPath: rootReference
value:
objectReference: {fileID: 190823801}
m_SourcePrefab: {fileID: 100100000, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
--- !u!1001 &269100759
m_SourcePrefab: {fileID: 100100000, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
--- !u!1 &190823801 stripped
GameObject:
m_CorrespondingSourceObject: {fileID: 1706274796045088, guid: b5339e4b990ade14f992aadf3bf8591b,
type: 3}
m_PrefabInstance: {fileID: 190823800}
m_PrefabAsset: {fileID: 0}
--- !u!1001 &392794583
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2

- target: {fileID: 1819751139121548, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: FoodCollectorArea (3)
value: GridFoodCollectorArea (1)
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_IsActive
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: -150
value: -50
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
value: 9
value: 7
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
m_SourcePrefab: {fileID: 100100000, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
--- !u!1 &273651478
m_SourcePrefab: {fileID: 100100000, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
--- !u!1 &625137506
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Component:
- component: {fileID: 273651479}
- component: {fileID: 273651481}
- component: {fileID: 273651480}
- component: {fileID: 625137507}
- component: {fileID: 625137509}
- component: {fileID: 625137508}
m_Layer: 5
m_Name: Text
m_TagString: Untagged

m_IsActive: 1
--- !u!224 &273651479
--- !u!224 &625137507
m_GameObject: {fileID: 273651478}
m_GameObject: {fileID: 625137506}
m_Father: {fileID: 1799584681}
m_Father: {fileID: 965533424}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0, y: 0}

m_Pivot: {x: 0.5, y: 0.5}
--- !u!114 &273651480
--- !u!114 &625137508
m_GameObject: {fileID: 273651478}
m_GameObject: {fileID: 625137506}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 708705254, guid: f70555f144d8491a825f0804e09c671c, type: 3}

m_VerticalOverflow: 0
m_LineSpacing: 1
m_Text: NOM
--- !u!222 &273651481
--- !u!222 &625137509
m_GameObject: {fileID: 273651478}
m_GameObject: {fileID: 625137506}
--- !u!1 &378228137
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 378228141}
- component: {fileID: 378228140}
- component: {fileID: 378228139}
- component: {fileID: 378228138}
m_Layer: 5
m_Name: Canvas
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!114 &378228138
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 378228137}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1301386320, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_IgnoreReversedGraphics: 1
m_BlockingObjects: 0
m_BlockingMask:
serializedVersion: 2
m_Bits: 4294967295
--- !u!114 &378228139
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 378228137}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1980459831, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_UiScaleMode: 1
m_ReferencePixelsPerUnit: 100
m_ScaleFactor: 1
m_ReferenceResolution: {x: 800, y: 600}
m_ScreenMatchMode: 0
m_MatchWidthOrHeight: 0.5
m_PhysicalUnit: 3
m_FallbackScreenDPI: 96
m_DefaultSpriteDPI: 96
m_DynamicPixelsPerUnit: 1
--- !u!223 &378228140
Canvas:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 378228137}
m_Enabled: 1
serializedVersion: 3
m_RenderMode: 0
m_Camera: {fileID: 0}
m_PlaneDistance: 100
m_PixelPerfect: 0
m_ReceivesEvents: 1
m_OverrideSorting: 0
m_OverridePixelPerfect: 0
m_SortingBucketNormalizedSize: 0
m_AdditionalShaderChannelsFlag: 0
m_SortingLayerID: 0
m_SortingOrder: 0
m_TargetDisplay: 0
--- !u!224 &378228141
RectTransform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 378228137}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 0, y: 0, z: 0}
m_Children:
- {fileID: 1799584681}
- {fileID: 1086444498}
m_Father: {fileID: 0}
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0, y: 0}
m_AnchorMax: {x: 0, y: 0}
m_AnchoredPosition: {x: 0, y: 0}
m_SizeDelta: {x: 0, y: 0}
m_Pivot: {x: 0, y: 0}
--- !u!1 &499540684
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 499540687}
- component: {fileID: 499540686}
- component: {fileID: 499540685}
m_Layer: 0
m_Name: EventSystem
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!114 &499540685
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 499540684}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1077351063, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_HorizontalAxis: Horizontal
m_VerticalAxis: Vertical
m_SubmitButton: Submit
m_CancelButton: Cancel
m_InputActionsPerSecond: 10
m_RepeatDelay: 0.5
m_ForceModuleActive: 0
--- !u!114 &499540686
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 499540684}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: -619905303, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_FirstSelected: {fileID: 0}
m_sendNavigationEvents: 1
m_DragThreshold: 5
--- !u!4 &499540687
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 499540684}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 4
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1001 &587417076
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 1819751139121548, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_Name
value: FoodCollectorArea (2)
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.y
value: -100
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_RootOrder
value: 8
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
--- !u!1001 &916917435
PrefabInstance:
m_ObjectHideFlags: 0

objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: 5889392e3f05b448a8a06c5def6c2dec, type: 3}
--- !u!1 &965533423
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 965533424}
- component: {fileID: 965533426}
- component: {fileID: 965533425}
m_Layer: 5
m_Name: Panel
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 0
--- !u!224 &965533424
RectTransform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 965533423}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 625137507}
m_Father: {fileID: 1064449898}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0, y: 0}
m_AnchorMax: {x: 1, y: 1}
m_AnchoredPosition: {x: 0, y: 0}
m_SizeDelta: {x: 0, y: 0}
m_Pivot: {x: 0.5, y: 0.5}
--- !u!114 &965533425
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 965533423}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: -765806418, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_Material: {fileID: 0}
m_Color: {r: 0, g: 0, b: 0, a: 0.472}
m_RaycastTarget: 1
m_OnCullStateChanged:
m_PersistentCalls:
m_Calls: []
m_Sprite: {fileID: 10907, guid: 0000000000000000f000000000000000, type: 0}
m_Type: 1
m_PreserveAspect: 0
m_FillCenter: 1
m_FillMethod: 4
m_FillAmount: 1
m_FillClockwise: 1
m_FillOrigin: 0
m_UseSpriteMesh: 0
--- !u!222 &965533426
CanvasRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 965533423}
m_CullTransparentMesh: 0
--- !u!1 &1009000883
GameObject:
m_ObjectHideFlags: 0

m_OcclusionCulling: 1
m_StereoConvergence: 10
m_StereoSeparation: 0.022
--- !u!1 &1086444495
--- !u!1001 &1043871087
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_Name
value: GridFoodCollectorArea (2)
objectReference: {fileID: 0}
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_IsActive
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.y
value: -100
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_RootOrder
value: 8
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
--- !u!1 &1064449894
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Component:
- component: {fileID: 1086444498}
- component: {fileID: 1086444497}
- component: {fileID: 1086444496}
- component: {fileID: 1064449898}
- component: {fileID: 1064449897}
- component: {fileID: 1064449896}
- component: {fileID: 1064449895}
m_Layer: 5
m_Name: Canvas
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!114 &1064449895
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1064449894}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1301386320, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_IgnoreReversedGraphics: 1
m_BlockingObjects: 0
m_BlockingMask:
serializedVersion: 2
m_Bits: 4294967295
--- !u!114 &1064449896
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1064449894}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1980459831, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_UiScaleMode: 1
m_ReferencePixelsPerUnit: 100
m_ScaleFactor: 1
m_ReferenceResolution: {x: 800, y: 600}
m_ScreenMatchMode: 0
m_MatchWidthOrHeight: 0.5
m_PhysicalUnit: 3
m_FallbackScreenDPI: 96
m_DefaultSpriteDPI: 96
m_DynamicPixelsPerUnit: 1
--- !u!223 &1064449897
Canvas:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1064449894}
m_Enabled: 1
serializedVersion: 3
m_RenderMode: 0
m_Camera: {fileID: 0}
m_PlaneDistance: 100
m_PixelPerfect: 0
m_ReceivesEvents: 1
m_OverrideSorting: 0
m_OverridePixelPerfect: 0
m_SortingBucketNormalizedSize: 0
m_AdditionalShaderChannelsFlag: 0
m_SortingLayerID: 0
m_SortingOrder: 0
m_TargetDisplay: 0
--- !u!224 &1064449898
RectTransform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1064449894}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 0, y: 0, z: 0}
m_Children:
- {fileID: 965533424}
- {fileID: 1418304525}
m_Father: {fileID: 0}
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0, y: 0}
m_AnchorMax: {x: 0, y: 0}
m_AnchoredPosition: {x: 0, y: 0}
m_SizeDelta: {x: 0, y: 0}
m_Pivot: {x: 0, y: 0}
--- !u!1 &1418304524
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 1418304525}
- component: {fileID: 1418304527}
- component: {fileID: 1418304526}
m_Layer: 5
m_Name: Text
m_TagString: Untagged

m_IsActive: 1
--- !u!114 &1086444496
--- !u!224 &1418304525
RectTransform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1418304524}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1064449898}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0.5, y: 0.5}
m_AnchorMax: {x: 0.5, y: 0.5}
m_AnchoredPosition: {x: -1000, y: -239.57645}
m_SizeDelta: {x: 160, y: 30}
m_Pivot: {x: 0.5, y: 0.5}
--- !u!114 &1418304526
m_GameObject: {fileID: 1086444495}
m_GameObject: {fileID: 1418304524}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 708705254, guid: f70555f144d8491a825f0804e09c671c, type: 3}

m_VerticalOverflow: 0
m_LineSpacing: 1
m_Text: New Text
--- !u!222 &1086444497
--- !u!222 &1418304527
m_GameObject: {fileID: 1086444495}
m_GameObject: {fileID: 1418304524}
--- !u!224 &1086444498
RectTransform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1086444495}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 378228141}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0.5, y: 0.5}
m_AnchorMax: {x: 0.5, y: 0.5}
m_AnchoredPosition: {x: -1000, y: -239.57645}
m_SizeDelta: {x: 160, y: 30}
m_Pivot: {x: 0.5, y: 0.5}
--- !u!1001 &1142607725
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.y
value: 12.3
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
propertyPath: m_RootOrder
value: 6
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: 38400a68c4ea54b52998e34ee238d1a7, type: 3}
--- !u!1 &1574236047
GameObject:
m_ObjectHideFlags: 0

agents: []
listArea: []
totalScore: 0
scoreText: {fileID: 1086444496}
scoreText: {fileID: 1418304526}
--- !u!4 &1574236049
Transform:
m_ObjectHideFlags: 0

m_Father: {fileID: 0}
m_RootOrder: 3
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1 &1799584680
--- !u!1 &1956702417
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Component:
- component: {fileID: 1799584681}
- component: {fileID: 1799584683}
- component: {fileID: 1799584682}
m_Layer: 5
m_Name: Panel
- component: {fileID: 1956702420}
- component: {fileID: 1956702419}
- component: {fileID: 1956702418}
m_Layer: 0
m_Name: EventSystem
m_IsActive: 0
--- !u!224 &1799584681
RectTransform:
m_IsActive: 1
--- !u!114 &1956702418
MonoBehaviour:
m_GameObject: {fileID: 1799584680}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 273651479}
m_Father: {fileID: 378228141}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
m_AnchorMin: {x: 0, y: 0}
m_AnchorMax: {x: 1, y: 1}
m_AnchoredPosition: {x: 0, y: 0}
m_SizeDelta: {x: 0, y: 0}
m_Pivot: {x: 0.5, y: 0.5}
--- !u!114 &1799584682
m_GameObject: {fileID: 1956702417}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 1077351063, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Name:
m_EditorClassIdentifier:
m_HorizontalAxis: Horizontal
m_VerticalAxis: Vertical
m_SubmitButton: Submit
m_CancelButton: Cancel
m_InputActionsPerSecond: 10
m_RepeatDelay: 0.5
m_ForceModuleActive: 0
--- !u!114 &1956702419
m_GameObject: {fileID: 1799584680}
m_GameObject: {fileID: 1956702417}
m_Script: {fileID: -765806418, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Script: {fileID: -619905303, guid: f70555f144d8491a825f0804e09c671c, type: 3}
m_Material: {fileID: 0}
m_Color: {r: 0, g: 0, b: 0, a: 0.472}
m_RaycastTarget: 1
m_OnCullStateChanged:
m_PersistentCalls:
m_Calls: []
m_Sprite: {fileID: 10907, guid: 0000000000000000f000000000000000, type: 0}
m_Type: 1
m_PreserveAspect: 0
m_FillCenter: 1
m_FillMethod: 4
m_FillAmount: 1
m_FillClockwise: 1
m_FillOrigin: 0
m_UseSpriteMesh: 0
--- !u!222 &1799584683
CanvasRenderer:
m_FirstSelected: {fileID: 0}
m_sendNavigationEvents: 1
m_DragThreshold: 5
--- !u!4 &1956702420
Transform:
m_GameObject: {fileID: 1799584680}
m_CullTransparentMesh: 0
m_GameObject: {fileID: 1956702417}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 4
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1001 &1985725465
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_Name
value: GridFoodCollectorArea (3)
objectReference: {fileID: 0}
- target: {fileID: 1819751139121548, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_IsActive
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.y
value: -150
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_RootOrder
value: 9
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 4688212428263696, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: b5339e4b990ade14f992aadf3bf8591b, type: 3}
--- !u!1001 &2124876351
PrefabInstance:
m_ObjectHideFlags: 0

5
Project/Assets/ML-Agents/Examples/FoodCollector/Scenes/FoodCollector.unity.meta


fileFormatVersion: 2
guid: 11583205ab5b74bb4bb1b9951cf9e437
timeCreated: 1506808980
licenseType: Pro
guid: 74aeee1f5073c4998840fc784793f1ef
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:

1001
Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.onnx
The file diff is too large to display.

2
Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.onnx.meta


fileFormatVersion: 2
guid: 3210b528a2bc44a86bd6bd1d571070f8
guid: 75910f45f20be49b18e2b95879a217b2
ScriptedImporter:
fileIDToRecycleName:
11400000: main obj

2
Project/Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGridWorld.demo.meta


guid: 0092f2e4aece345aea4730a37eeebf68
ScriptedImporter:
fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGrid.demo
11400002: Assets/ML-Agents/Examples/GridWorld/Demos/ExpertGridWorld.demo
externalObjects: {}
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:

14
Project/Assets/ML-Agents/Examples/Match3/Prefabs/Match3VectorObs.prefab


- component: {fileID: 2118285884327540687}
- component: {fileID: 2118285884327540680}
- component: {fileID: 3357012711826686276}
- component: {fileID: 2164669533582273470}
m_Layer: 0
m_Name: Match3 Agent
m_TagString: Untagged

ActuatorName: Match3 Actuator
RandomSeed: -1
ForceHeuristic: 0
--- !u!114 &2164669533582273470
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 2118285884327540673}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a6da8f78a394c6ab027688eab81e04d, type: 3}
m_Name:
m_EditorClassIdentifier:
debugCommandLineOverride:

14
Project/Assets/ML-Agents/Examples/Match3/Prefabs/Match3VisualObs.prefab


- component: {fileID: 3019509692332007776}
- component: {fileID: 3019509692332007783}
- component: {fileID: 8270768986451624427}
- component: {fileID: 5564406567458194538}
m_Layer: 0
m_Name: Match3 Agent
m_TagString: Untagged

ActuatorName: Match3 Actuator
RandomSeed: -1
ForceHeuristic: 0
--- !u!114 &5564406567458194538
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3019509692332007790}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a6da8f78a394c6ab027688eab81e04d, type: 3}
m_Name:
m_EditorClassIdentifier:
debugCommandLineOverride:

2
Project/Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPushBlock.demo.meta


guid: 7f11f35191533404c9957443a681aaee
ScriptedImporter:
fileIDToRecycleName:
11400000: Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPush.demo
11400002: Assets/ML-Agents/Examples/PushBlock/Demos/ExpertPushBlock.demo
externalObjects: {}
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:

139
Project/Assets/ML-Agents/Examples/Soccer/Prefabs/SoccerFieldTwos.prefab


m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

VectorActionDescriptions: []
VectorActionSpaceType: 0
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: b0a629580a0ab48a5a774f90ff1fb48b, type: 3}
m_Model: {fileID: 5022602860645237092, guid: 8cd4584c2f2cb4c5fb51675d364e10ec, type: 3}
m_InferenceDevice: 2
m_BehaviorType: 0
m_BehaviorName: SoccerTwos

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 0
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &114320493772006642
MonoBehaviour:
m_ObjectHideFlags: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

VectorActionDescriptions: []
VectorActionSpaceType: 0
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: b0a629580a0ab48a5a774f90ff1fb48b, type: 3}
m_Model: {fileID: 5022602860645237092, guid: 8cd4584c2f2cb4c5fb51675d364e10ec, type: 3}
m_InferenceDevice: 2
m_BehaviorType: 0
m_BehaviorName: SoccerTwos

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 1
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &114516244030127556
MonoBehaviour:
m_ObjectHideFlags: 0

serializedVersion: 6
m_Component:
- component: {fileID: 4558743310993102}
- component: {fileID: 114559182131992928}
- component: {fileID: 8122248192225965164}
m_Layer: 0
m_Name: SoccerFieldTwos
m_TagString: Untagged

m_Father: {fileID: 0}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!114 &114559182131992928
--- !u!114 &8122248192225965164
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: efd705d0a5b1e405eb1869b7cbe47dda, type: 3}
m_Script: {fileID: 11500000, guid: 4e397bc3ae78c466a8d44400f5b68e38, type: 3}
MaxEnvironmentSteps: 5000
ground: {fileID: 0}
centerPitch: {fileID: 0}
playerStates: []
ballStartingPos: {x: 0, y: 0, z: 0}
goalTextUI: {fileID: 0}
canResetBall: 0
AgentsList:
- Agent: {fileID: 114850431417842684}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
- Agent: {fileID: 114492261207303438}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
- Agent: {fileID: 5379409612883756837}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
- Agent: {fileID: 5320024511406682322}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
--- !u!1 &1366507812774098
GameObject:
m_ObjectHideFlags: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_Script: {fileID: 11500000, guid: 93558b952b37a4b0ebaca3ca6711bcc4, type: 3}
m_Name:
m_EditorClassIdentifier:
area: {fileID: 0}
area: {fileID: 1141134673700168}
envController: {fileID: 0}
purpleGoalTag: purpleGoal
blueGoalTag: blueGoal
--- !u!54 &54100138833592438

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

VectorActionDescriptions: []
VectorActionSpaceType: 0
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: b0a629580a0ab48a5a774f90ff1fb48b, type: 3}
m_Model: {fileID: 5022602860645237092, guid: 8cd4584c2f2cb4c5fb51675d364e10ec, type: 3}
m_InferenceDevice: 2
m_BehaviorType: 0
m_BehaviorName: SoccerTwos

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 0
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &1023485123796557062
MonoBehaviour:
m_ObjectHideFlags: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

VectorActionDescriptions: []
VectorActionSpaceType: 0
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: b0a629580a0ab48a5a774f90ff1fb48b, type: 3}
m_Model: {fileID: 5022602860645237092, guid: 8cd4584c2f2cb4c5fb51675d364e10ec, type: 3}
m_InferenceDevice: 2
m_BehaviorType: 0
m_BehaviorName: SoccerTwos

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 1
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &2562571719799803906
MonoBehaviour:
m_ObjectHideFlags: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

236
Project/Assets/ML-Agents/Examples/Soccer/Prefabs/StrikersVsGoalieField.prefab


m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 11
m_RootOrder: 12
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &253232880
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 1
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &444114137
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 100, y: 100, z: 100}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 14
m_RootOrder: 15
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!65 &474940251
BoxCollider:

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 6
m_RootOrder: 7
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &615018297
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 3
m_RootOrder: 4
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &654598890
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 8
m_RootOrder: 9
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &738577478
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 9
m_RootOrder: 10
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &922552527
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 300, y: 10, z: 1200}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 12
m_RootOrder: 13
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1358559440
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 5
m_RootOrder: 6
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1411349256
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 0.01, y: 0.01, z: 0.01}
m_Children:
- {fileID: 5380420931288637108}
- {fileID: 1838648424}
- {fileID: 444114135}
- {fileID: 1905370727}

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 7
m_RootOrder: 8
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1641907513
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 4
m_RootOrder: 5
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1685058926
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 10
m_RootOrder: 11
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1728133460
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_LocalScale: {x: 300, y: 10, z: 1200}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 13
m_RootOrder: 14
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1790209497
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

- {fileID: 1280034523}
- {fileID: 473053728}
m_Father: {fileID: 1590368733}
m_RootOrder: 0
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1 &1905370726
GameObject:

m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 2
m_RootOrder: 3
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &1905370729
MeshFilter:

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1095606497496374}
m_LocalRotation: {x: 0, y: -0.7071068, z: 0, w: 0.7071068}
m_LocalPosition: {x: 8, y: 0.5, z: 0}
m_LocalPosition: {x: 4, y: 0.5, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 4540034559941056}

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 0
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &114320493772006642
MonoBehaviour:
m_ObjectHideFlags: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 1
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &114516244030127556
MonoBehaviour:
m_ObjectHideFlags: 0

serializedVersion: 6
m_Component:
- component: {fileID: 4558743310993102}
- component: {fileID: 114559182131992928}
- component: {fileID: 5003424191498964318}
m_Layer: 0
m_Name: StrikersVsGoalieField
m_TagString: Untagged

m_Father: {fileID: 0}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!114 &114559182131992928
--- !u!114 &5003424191498964318
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: efd705d0a5b1e405eb1869b7cbe47dda, type: 3}
m_Script: {fileID: 11500000, guid: 4e397bc3ae78c466a8d44400f5b68e38, type: 3}
MaxEnvironmentSteps: 5000
ground: {fileID: 0}
centerPitch: {fileID: 0}
playerStates: []
ballStartingPos: {x: 0, y: 0, z: 0}
goalTextUI: {fileID: 0}
canResetBall: 0
AgentsList:
- Agent: {fileID: 114850431417842684}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
- Agent: {fileID: 5379409612883756837}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
- Agent: {fileID: 114492261207303438}
StartingPos: {x: 0, y: 0, z: 0}
StartingRot: {x: 0, y: 0, z: 0, w: 0}
Rb: {fileID: 0}
--- !u!1 &1366507812774098
GameObject:
m_ObjectHideFlags: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_Script: {fileID: 11500000, guid: 93558b952b37a4b0ebaca3ca6711bcc4, type: 3}
m_Name:
m_EditorClassIdentifier:
area: {fileID: 0}
area: {fileID: 1141134673700168}
envController: {fileID: 0}
purpleGoalTag: purpleGoal
blueGoalTag: blueGoal
--- !u!54 &54100138833592438

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_GateFitMode: 2
m_FOVAxisMode: 0
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:

m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0

hasUpgradedFromAgentParameters: 1
MaxStep: 3000
team: 1
area: {fileID: 114559182131992928}
timePenalty: 0
initialPos: {x: 0, y: 0, z: 0}
rotSign: 0
--- !u!114 &2562571719799803906
MonoBehaviour:
m_ObjectHideFlags: 0

m_Name:
m_EditorClassIdentifier:
debugCommandLineOverride:
--- !u!1 &9012161277676694912
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 5380420931288637108}
- component: {fileID: 9218808494928946219}
- component: {fileID: 751017937587398034}
- component: {fileID: 5544574640864840267}
m_Layer: 0
m_Name: BlueGoalBlocker
m_TagString: wall
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &5380420931288637108
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 9012161277676694912}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: -1551, y: 155, z: 0}
m_LocalScale: {x: 100, y: 395.9755, z: 791.466}
m_Children: []
m_Father: {fileID: 1590368733}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &9218808494928946219
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 9012161277676694912}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &751017937587398034
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 9012161277676694912}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RayTracingMode: 2
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_ReceiveGI: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 1
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!65 &5544574640864840267
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 9012161277676694912}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}

57
Project/Assets/ML-Agents/Examples/Soccer/Scripts/AgentSoccer.cs


using Unity.MLAgents.Actuators;
using Unity.MLAgents.Policies;
public enum Team
{
Blue = 0,
Purple = 1
}
public class AgentSoccer : Agent
{
// Note that the detectable tags are different for the blue and purple teams. The order is

// * wall
// * own teammate
// * opposing player
public enum Team
{
Blue = 0,
Purple = 1
}
public enum Position
{

[HideInInspector]
public Team team;
float m_KickPower;
int m_PlayerIndex;
public SoccerFieldArea area;
// The coefficient for the reward for colliding with a ball. Set using curriculum.
float m_BallTouch;
public Position position;

float m_LateralSpeed;
float m_ForwardSpeed;
[HideInInspector]
public float timePenalty;
Vector3 m_Transform;
public Vector3 initialPos;
public float rotSign;
EnvironmentParameters m_ResetParams;

if (m_BehaviorParameters.TeamId == (int)Team.Blue)
{
team = Team.Blue;
m_Transform = new Vector3(transform.position.x - 4f, .5f, transform.position.z);
initialPos = new Vector3(transform.position.x - 5f, .5f, transform.position.z);
rotSign = 1f;
m_Transform = new Vector3(transform.position.x + 4f, .5f, transform.position.z);
initialPos = new Vector3(transform.position.x + 5f, .5f, transform.position.z);
rotSign = -1f;
}
if (position == Position.Goalie)
{

agentRb = GetComponent<Rigidbody>();
agentRb.maxAngularVelocity = 500;
var playerState = new PlayerState
{
agentRb = agentRb,
startingPos = transform.position,
agentScript = this,
};
area.playerStates.Add(playerState);
m_PlayerIndex = area.playerStates.IndexOf(playerState);
playerState.playerIndex = m_PlayerIndex;
m_ResetParams = Academy.Instance.EnvironmentParameters;
}

// Existential penalty for Strikers
AddReward(-m_Existential);
}
else
{
// Existential penalty cumulant for Generic
timePenalty -= m_Existential;
}
MoveAgent(actionBuffers.DiscreteActions);
}

public override void OnEpisodeBegin()
{
timePenalty = 0;
if (team == Team.Purple)
{
transform.rotation = Quaternion.Euler(0f, -90f, 0f);
}
else
{
transform.rotation = Quaternion.Euler(0f, 90f, 0f);
}
transform.position = m_Transform;
agentRb.velocity = Vector3.zero;
agentRb.angularVelocity = Vector3.zero;
SetResetParameters();
public void SetResetParameters()
{
area.ResetBall();
}
}

12
Project/Assets/ML-Agents/Examples/Soccer/Scripts/SoccerBallController.cs


public class SoccerBallController : MonoBehaviour
{
public GameObject area;
public SoccerFieldArea area;
public SoccerEnvController envController;
void Start()
{
envController = area.GetComponent<SoccerEnvController>();
}
area.GoalTouched(AgentSoccer.Team.Blue);
envController.GoalTouched(Team.Blue);
area.GoalTouched(AgentSoccer.Team.Purple);
envController.GoalTouched(Team.Purple);
}
}
}

5
Project/Assets/ML-Agents/Examples/Sorter/Scripts/SorterAgent.cs


int m_NumberOfTilesToSpawn;
int m_MaxNumberOfTiles;
PushBlockSettings m_PushBlockSettings;
Rigidbody m_AgentRb;
// The BufferSensorComponent is the Sensor that allows the Agent to observe

m_MaxNumberOfTiles = k_HighestTileValue;
m_ResetParams = Academy.Instance.EnvironmentParameters;
m_BufferSensor = GetComponent<BufferSensorComponent>();
m_PushBlockSettings = FindObjectOfType<PushBlockSettings>();
m_AgentRb = GetComponent<Rigidbody>();
m_StartingPos = transform.position;
}

}
transform.Rotate(rotateDir, Time.deltaTime * 200f);
m_AgentRb.AddForce(dirToGo * m_PushBlockSettings.agentRunSpeed,
ForceMode.VelocityChange);
m_AgentRb.AddForce(dirToGo * 2, ForceMode.VelocityChange);
}

2
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo.meta


guid: a4b02e2c382c247919eb63ce72e90a3b
ScriptedImporter:
fileIDToRecycleName:
11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDyVS.demo
11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalker.demo
externalObjects: {}
userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
assetBundleName:

2
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll/WalkerRagdoll.prefab


- component: {fileID: 895268871377934302}
- component: {fileID: 895268871377934301}
m_Layer: 0
m_Name: WalkerRagdollBase
m_Name: WalkerRagdoll
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0

9
Project/Assets/ML-Agents/Examples/Worm/Prefabs/PlatformWorm.prefab


m_Modification:
m_TransformParent: {fileID: 7519741477752072726}
m_Modifications:
- target: {fileID: 2461460301642470340, guid: ff2999c8614d848f8a7e55e3a6fb9282,
type: 3}
propertyPath: targetToLookAt
value:
objectReference: {fileID: 0}
- target: {fileID: 7430253518223459950, guid: ff2999c8614d848f8a7e55e3a6fb9282,
type: 3}
propertyPath: m_Name

- target: {fileID: 7430253518223459951, guid: ff2999c8614d848f8a7e55e3a6fb9282,
type: 3}
propertyPath: m_RootOrder
value: 3
value: 2
objectReference: {fileID: 0}
- target: {fileID: 7430253518223459951, guid: ff2999c8614d848f8a7e55e3a6fb9282,
type: 3}

- target: {fileID: 845566399918322646, guid: d6fc96a99a9754f07b48abf1e0d55a5c,
type: 3}
propertyPath: m_Name
value: PlatformWormDynamicTarget
value: PlatformWorm
objectReference: {fileID: 0}
- target: {fileID: 845742365997159796, guid: d6fc96a99a9754f07b48abf1e0d55a5c,
type: 3}

13
Project/Assets/ML-Agents/Examples/Worm/Prefabs/Worm.prefab


- component: {fileID: 7430253518223459946}
- component: {fileID: 7430253518223459945}
m_Layer: 0
m_Name: WormBasePrefab
m_Name: Worm
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0

VectorActionDescriptions: []
VectorActionSpaceType: 1
hasUpgradedBrainParametersWithActionSpec: 1
m_Model: {fileID: 11400000, guid: e81305346bd9b408c8871523f9088c2a, type: 3}
m_Model: {fileID: 11400000, guid: 117512193457f4b35994eedc14532276, type: 3}
m_BehaviorName: WormDynamic
m_BehaviorName: Worm
TeamId: 0
m_UseChildSensors: 1
m_UseChildActuators: 1

maxStep: 0
hasUpgradedFromAgentParameters: 1
MaxStep: 5000
typeOfWorm: 0
wormDyModel: {fileID: 11400000, guid: 117512193457f4b35994eedc14532276, type: 3}
wormStModel: {fileID: 11400000, guid: fc1e2a84251634459bfd8edc900e2e71, type: 3}
dynamicTargetPrefab: {fileID: 3839136118347789758, guid: 46734abd0de454192b407379c6a4ab8d,
type: 3}
staticTargetPrefab: {fileID: 3839136118347789758, guid: 2173d15c0b5fc49e5870c9d1c7f7ee8e,
TargetPrefab: {fileID: 3839136118347789758, guid: 46734abd0de454192b407379c6a4ab8d,
type: 3}
bodySegment0: {fileID: 7430253517585478437}
bodySegment1: {fileID: 7430253518698367209}

52
Project/Assets/ML-Agents/Examples/Worm/Scripts/WormAgent.cs


[RequireComponent(typeof(JointDriveController))] // Required to set joint forces
public class WormAgent : Agent
{
//The type of agent behavior we want to use.
//This setting will determine how the agent is set up during initialization.
public enum WormAgentBehaviorType
{
WormDynamic,
WormStatic
}
[Tooltip(
"Dynamic - The agent will run towards a target that changes position.\n\n" +
"Static - The agent will run towards a static target. "
)]
public WormAgentBehaviorType typeOfWorm;
//Brains
//A different brain will be used depending on the CrawlerAgentBehaviorType selected
[Header("NN Models")] public NNModel wormDyModel;
public NNModel wormStModel;
[Header("Target Prefabs")] public Transform dynamicTargetPrefab; //Target prefab to use in Dynamic envs
public Transform staticTargetPrefab; //Target prefab to use in Static envs
[Header("Target Prefabs")] public Transform TargetPrefab; //Target prefab to use in Dynamic envs
private Transform m_Target; //Target the agent will walk towards during training.
[Header("Body Parts")] public Transform bodySegment0;

public override void Initialize()
{
SetAgentType();
SpawnTarget(TargetPrefab, transform.position); //spawn target
m_StartingPos = bodySegment0.position;
m_OrientationCube = GetComponentInChildren<OrientationCubeController>();

void SpawnTarget(Transform prefab, Vector3 pos)
{
m_Target = Instantiate(prefab, pos, Quaternion.identity, transform);
}
/// <summary>
/// Set up the agent based on the type
/// </summary>
void SetAgentType()
{
var behaviorParams = GetComponent<Unity.MLAgents.Policies.BehaviorParameters>();
switch (typeOfWorm)
{
case WormAgentBehaviorType.WormDynamic:
{
behaviorParams.BehaviorName = "WormDynamic"; //set behavior name
if (wormDyModel)
behaviorParams.Model = wormDyModel; //assign the brain
SpawnTarget(dynamicTargetPrefab, transform.position); //spawn target
break;
}
case WormAgentBehaviorType.WormStatic:
{
behaviorParams.BehaviorName = "WormStatic"; //set behavior name
if (wormStModel)
behaviorParams.Model = wormStModel; //assign the brain
SpawnTarget(staticTargetPrefab, transform.TransformPoint(new Vector3(0, 0, 1000))); //spawn target
break;
}
}
}
/// <summary>

5
Project/Packages/manifest.json


{
"dependencies": {
"com.unity.ads": "2.0.8",
"com.unity.analytics": "3.2.3",
"com.unity.purchasing": "2.2.1",
"com.unity.textmeshpro": "1.4.1",
"com.unity.modules.ai": "1.0.0",
"com.unity.modules.animation": "1.0.0",

"com.unity.modules.video": "1.0.0",
"com.unity.modules.vr": "1.0.0",
"com.unity.modules.wind": "1.0.0",
"com.unity.modules.xr": "1.0.0"
"com.unity.modules.xr": "1.0.0",
"com.unity.nuget.newtonsoft-json": "2.0.0"
},
"testables": [
"com.unity.ml-agents",

1
Project/Project.sln.DotSettings


<wpf:ResourceDictionary xml:space="preserve" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:s="clr-namespace:System;assembly=mscorlib" xmlns:ss="urn:shemas-jetbrains-com:settings-storage-xaml" xmlns:wpf="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
<s:String x:Key="/Default/CodeStyle/Naming/CSharpNaming/Abbreviations/=ML/@EntryIndexedValue">ML</s:String>
<s:Boolean x:Key="/Default/UserDictionary/Words/=Dont/@EntryIndexedValue">True</s:Boolean></wpf:ResourceDictionary>

16
README.md


# Unity ML-Agents Toolkit
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/)
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)

## Releases & Documentation
**Our latest, stable release is `Release 13`. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Readme.md)
**Our latest, stable release is `Release 14`. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md)
to get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `main` branch which is

| **Version** | **Release Date** | **Source** | **Documentation** | **Download** | **Python Package** | **Unity Package** |
|:-------:|:------:|:-------------:|:-------:|:------------:|:------------:|:------------:|
| **main (unstable)** | -- | [source](https://github.com/Unity-Technologies/ml-agents/tree/main) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/main/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/main.zip) | -- | -- |
| **Release 14** | **March 5, 2021** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_14)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_14.zip)** | **[0.24.1](https://pypi.org/project/mlagents/0.24.1/)** | **[1.8.1](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.8/manual/index.html)** |
| **Verified Package 1.0.7** | **March 8, 2021** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/com.unity.ml-agents_1.0.7)** | **[docs](https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/com.unity.ml-agents_1.0.7.zip)** | **[0.16.1](https://pypi.org/project/mlagents/0.16.1/)** | **[1.0.7](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.0/manual/index.html)** |
| **Verified Package 1.0.6** | **November 16, 2020** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/com.unity.ml-agents_1.0.6)** | **[docs](https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/com.unity.ml-agents_1.0.6.zip)** | **[0.16.1](https://pypi.org/project/mlagents/0.16.1/)** | **[1.0.6](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.0/manual/index.html)** |
| **Verified Package 1.0.6** | November 16, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/com.unity.ml-agents_1.0.6) | [docs](https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/com.unity.ml-agents_1.0.6.zip) | [0.16.1](https://pypi.org/project/mlagents/0.16.1/) | [1.0.6](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.0/manual/index.html) |
| **Verified Package 1.0.5** | September 23, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/com.unity.ml-agents_1.0.5) | [docs](https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/com.unity.ml-agents_1.0.5.zip) | [0.16.1](https://pypi.org/project/mlagents/0.16.1/) | [1.0.5](https://docs.unity3d.com/Packages/com.unity.ml-agents@1.0/manual/index.html) |
If you are a researcher interested in a discussion of Unity as an AI platform,
see a pre-print of our

sure to include as much detail as possible. If you run into any other problems
using the ML-Agents Toolkit or have a specific feature request, please
[submit a GitHub issue](https://github.com/Unity-Technologies/ml-agents/issues).
Please tell us which samples you would like to see shipped with the ML-Agents Unity
package by replying to
[this forum thread](https://forum.unity.com/threads/feedback-wanted-shipping-sample-s-with-the-ml-agents-package.1073468/).
Your opinion matters a great deal to us. Only by hearing your thoughts on the
Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few

2
com.unity.ml-agents.extensions/Documentation~/Grid-Sensor.md


An image can be thought of as a matrix with a predefined width (W) and height (H), where each pixel is simply an array of length 3 (in the case of RGB), `[Red, Green, Blue]`, holding the channel intensities of the color at that pixel location. Thus an image is just a 3-dimensional matrix of size WxHx3. A Grid Observation is a generalization of this setup where, in place of a pixel, there is a "cell": an array of length N representing different channel intensities at that cell position. From a Convolutional Neural Network point of view, multiple channels in an "image" aren't a new concept; one example is the RGB-Depth image used in several robotics applications. What distinguishes Grid Observations is what the data within the channels represents. Instead of limiting the channels to color intensities, the channels within a cell of a Grid Observation generalize to any data that can be represented by a single number (float or int).
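To make the shape concrete, here is a small, purely illustrative C# sketch (not part of the package API) that builds a WxHxN grid observation where each cell stores N channel values:

```csharp
using UnityEngine;

// Illustrative only: a 4x4 grid observation with 3 channels per cell.
// The channel meanings (presence, distance, health) are arbitrary examples.
public class GridObservationSketch : MonoBehaviour
{
    void Start()
    {
        const int width = 4, height = 4, channels = 3;
        var grid = new float[width, height, channels];

        // Fill one cell's channels.
        grid[2, 1, 0] = 1f;    // channel 0: an object was detected in this cell
        grid[2, 1, 1] = 0.5f;  // channel 1: normalized distance to the object
        grid[2, 1, 2] = 0.75f; // channel 2: normalized health of the object

        Debug.Log($"Cell (2,1): {grid[2, 1, 0]}, {grid[2, 1, 1]}, {grid[2, 1, 2]}");
    }
}
```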
Before jumping into the details of the Grid Sensor, it is important to note the agent's performance and qualitatively different behavior compared to raycasts. Unity ML-Agents comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
Before jumping into the details of the Grid Sensor, it is important to note the agent's performance and qualitatively different behavior compared to raycasts. Unity ML-Agents comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
The Food Collector environment can be described as:
* Set-up: A multi-agent environment where agents compete to collect food.

2
com.unity.ml-agents.extensions/Documentation~/Match3.md


This implementation includes:
* C# implementation tailored to a Match-3 setup, including concepts around encoding moves, based on [Human-Like Playtesting with Deep Learning](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)
* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information on the Match-3 example is available [here](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/docs/Learning-Environment-Examples.md#match-3).
* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information on the Match-3 example is available [here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/docs/Learning-Environment-Examples.md#match-3).
### Feedback
If you are a Match-3 developer trying to leverage ML-Agents for this scenario, [we want to hear from you](https://forms.gle/TBsB9jc8WshgzViU9). We are also looking for interested Match-3 teams to speak with us for 45 minutes. If you are interested, please indicate that in the [form](https://forms.gle/TBsB9jc8WshgzViU9). If selected, we will provide gift cards as a token of appreciation.

16
com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md


recommended ways to install the package:
### Local Installation
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Installation.md#advanced-local-installation-for-development-1)
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#advanced-local-installation-for-development-1)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/images/unity_package_manager_git_url.png)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/images/unity_package_manager_git_url.png)
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_14
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions",
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_14",
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information.
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.
## Requirements

- No way to customize the action space of the `InputActuatorComponent`
## Need Help?
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/README.md) contains links for contacting the team or getting support.
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/README.md) contains links for contacting the team or getting support.

7
com.unity.ml-agents.extensions/Runtime/Input/Adaptors/ButtonInputActionAdaptor.cs


using Unity.MLAgents.Actuators;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;
using UnityEngine.InputSystem.LowLevel;
namespace Unity.MLAgents.Extensions.Input

}
/// TODO again this might need to be more nuanced for things like continuous buttons.
/// <inheritdoc cref="IRLActionInputAdaptor.QueueInputEventForAction"/>
public void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToInputEventForAction"/>
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
InputSystem.QueueDeltaStateEvent(control, (byte)val);
((ButtonControl)control).WriteValueIntoEvent((float)val, eventPtr);
}
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToHeuristic"/>>

7
com.unity.ml-agents.extensions/Runtime/Input/Adaptors/DoubleInputActionAdaptor.cs


#if MLA_INPUT_SYSTEM && UNITY_2019_4_OR_NEWER
using Unity.MLAgents.Actuators;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;
using UnityEngine.InputSystem.LowLevel;
namespace Unity.MLAgents.Extensions.Input

return ActionSpec.MakeContinuous(1);
}
/// <inheritdoc cref="IRLActionInputAdaptor.QueueInputEventForAction"/>
public void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToInputEventForAction"/>
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
InputSystem.QueueDeltaStateEvent(control,(double)val);
((DoubleControl)control).WriteValueIntoEvent((double)val, eventPtr);
}
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToHeuristic"/>

6
com.unity.ml-agents.extensions/Runtime/Input/Adaptors/FloatInputActionAdaptor.cs


return ActionSpec.MakeContinuous(1);
}
/// <inheritdoc cref="IRLActionInputAdaptor.QueueInputEventForAction"/>
public void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToInputEventForAction"/>
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
InputSystem.QueueDeltaStateEvent(control, val);
control.WriteValueIntoEvent(val, eventPtr);
}
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToHeuristic"/>

6
com.unity.ml-agents.extensions/Runtime/Input/Adaptors/IntegerInputActionAdaptor.cs


return ActionSpec.MakeDiscrete(2);
}
/// <inheritdoc cref="IRLActionInputAdaptor.QueueInputEventForAction"/>
public void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToInputEventForAction"/>
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
InputSystem.QueueDeltaStateEvent(control, val);
control.WriteValueIntoEvent(val, eventPtr);
}
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToHeuristic"/>

7
com.unity.ml-agents.extensions/Runtime/Input/Adaptors/Vector2InputActionAdaptor.cs


using Unity.MLAgents.Actuators;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;
using UnityEngine.InputSystem.LowLevel;
namespace Unity.MLAgents.Extensions.Input

return ActionSpec.MakeContinuous(2);
}
/// <inheritdoc cref="IRLActionInputAdaptor.QueueInputEventForAction"/>
public void QueueInputEventForAction(InputAction action,
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToInputEventForAction"/>
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action,
InputControl control,
ActionSpec actionSpec,
in ActionBuffers actionBuffers)

InputSystem.QueueDeltaStateEvent(control, new Vector2(x, y));
control.WriteValueIntoEvent(new Vector2(x, y), eventPtr);
}
/// <inheritdoc cref="IRLActionInputAdaptor.WriteToHeuristic"/>

4
com.unity.ml-agents.extensions/Runtime/Input/IRLActionInputAdaptor.cs


using System;
using Unity.MLAgents.Actuators;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;
namespace Unity.MLAgents.Extensions.Input
{

/// <summary>
/// Translates data from the <see cref="ActionBuffers"/> object to the <see cref="InputSystem"/>.
/// </summary>
/// <param name="eventPtr">The Event pointer to write to.</param>
void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers);
void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers);
/// <summary>
/// Writes data from the <paramref name="action"/> to the <paramref name="actionBuffers"/>.
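For readers wiring up a custom control type, the sketch below implements this interface end to end. It is a minimal, hypothetical adaptor (the `TriggerInputActionAdaptor` name is made up) modeled on the `FloatInputActionAdaptor` changes elsewhere in this diff, and it assumes the interface's members after this change are exactly the three shown:

```csharp
#if MLA_INPUT_SYSTEM && UNITY_2019_4_OR_NEWER
using Unity.MLAgents.Actuators;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;

namespace Unity.MLAgents.Extensions.Input
{
    // Hypothetical adaptor for a single-axis (trigger-like) control.
    public class TriggerInputActionAdaptor : IRLActionInputAdaptor
    {
        // One continuous action drives the control.
        public ActionSpec GetActionSpecForInputAction(InputAction action)
        {
            return ActionSpec.MakeContinuous(1);
        }

        // Write the agent's action into the shared per-frame input event.
        public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
        {
            var val = actionBuffers.ContinuousActions[0];
            control.WriteValueIntoEvent(val, eventPtr);
        }

        // In heuristic mode, copy the live input value back into the action buffer.
        public void WriteToHeuristic(InputAction action, in ActionBuffers actionBuffers)
        {
            var continuousActions = actionBuffers.ContinuousActions;
            continuousActions[0] = action.ReadValue<float>();
        }
    }
}
#endif
```

The actuator calls `WriteToInputEventForAction` with the event pointer obtained from `InputActuatorEventContext.GetEventForFrame`, so several actuators can write into a single queued event per frame.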

12
com.unity.ml-agents.extensions/Runtime/Input/InputActionActuator.cs


readonly BehaviorParameters m_BehaviorParameters;
readonly InputAction m_Action;
readonly IRLActionInputAdaptor m_InputAdaptor;
InputActuatorEventContext m_InputActuatorEventContext;
InputDevice m_Device;
InputControl m_Control;

/// via the <see cref="IRLActionInputAdaptor"/>.</param>
/// <param name="adaptor">The <see cref="IRLActionInputAdaptor"/> that will convert data between ML-Agents
/// and the <see cref="InputSystem"/>.</param>
/// <param name="inputActuatorEventContext">The object that will provide the event ptr to write to.</param>
IRLActionInputAdaptor adaptor)
IRLActionInputAdaptor adaptor,
InputActuatorEventContext inputActuatorEventContext)
m_InputActuatorEventContext = inputActuatorEventContext;
ActionSpec = adaptor.GetActionSpecForInputAction(m_Action);
m_Device = inputDevice;
m_Control = m_Device?.GetChildControl(m_Action.name);

Profiler.BeginSample("InputActionActuator.OnActionReceived");
if (!m_BehaviorParameters.IsInHeuristicMode())
{
m_InputAdaptor.QueueInputEventForAction(m_Action, m_Control, ActionSpec, actionBuffers);
using (m_InputActuatorEventContext.GetEventForFrame(out var eventPtr))
{
m_InputAdaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, ActionSpec, actionBuffers);
}
}
Profiler.EndSample();
}

63
com.unity.ml-agents.extensions/Runtime/Input/InputActuatorComponent.cs


#if MLA_INPUT_SYSTEM && UNITY_2019_4_OR_NEWER
using System;
using System.Collections.Generic;
using Unity.Collections;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Policies;
using UnityEngine;

using UnityEngine.InputSystem.LowLevel;
using UnityEngine.InputSystem.Layouts;
using UnityEngine.InputSystem.Utilities;
#if UNITY_EDITOR

get
{
#if UNITY_EDITOR
FindNeededComponents();
var actuators = CreateActuatorsFromMap(m_InputAsset.FindActionMap(m_PlayerInput.defaultActionMap), m_BehaviorParameters, null);
m_ActionSpec = CombineActuatorActionSpecs(actuators);
if (!EditorApplication.isPlaying && m_ActionSpec.NumContinuousActions == 0
&& m_ActionSpec.BranchSizes == null
|| m_ActionSpec.BranchSizes.Length == 0)
{
FindNeededComponents();
var actuators = CreateActuatorsFromMap(m_InputAsset.FindActionMap(m_PlayerInput.defaultActionMap),
m_BehaviorParameters,
null,
InputActuatorEventContext.s_EditorContext);
m_ActionSpec = CombineActuatorActionSpecs(actuators);
}
#endif
return m_ActionSpec;
}

RegisterLayoutBuilder(inputActionMap, m_LayoutName);
m_Device = InputSystem.AddDevice(m_LayoutName);
m_Actuators = CreateActuatorsFromMap(inputActionMap, m_BehaviorParameters, m_Device);
var context = new InputActuatorEventContext(inputActionMap.actions.Count, m_Device);
m_Actuators = CreateActuatorsFromMap(inputActionMap, m_BehaviorParameters, m_Device, context);
UpdateDeviceBinding(m_BehaviorParameters.IsInHeuristicMode());
inputActionMap.Enable();

internal static IActuator[] CreateActuatorsFromMap(InputActionMap inputActionMap,
BehaviorParameters behaviorParameters,
InputDevice inputDevice)
InputDevice inputDevice,
InputActuatorEventContext context)
{
var actuators = new IActuator[inputActionMap.actions.Count];
for (var i = 0; i < inputActionMap.actions.Count; i++)

var adaptor = (IRLActionInputAdaptor)Activator.CreateInstance(controlTypeToAdaptorType[actionLayout.type]);
actuators[i] = new InputActionActuator(inputDevice, behaviorParameters, action, adaptor);
actuators[i] = new InputActionActuator(inputDevice, behaviorParameters, action, adaptor, context);
// Reasonably, the input system starts adding numbers after the first non-numbered name
// is added. So for device ID of 0, we use the empty string in the path.

action.processors,
mlAgentsControlSchemeName);
action.bindingMask = InputBinding.MaskByGroup(mlAgentsControlSchemeName);
}
return actuators;
}

m_PlayerInput = null;
m_BehaviorParameters = null;
m_Device = null;
}
int m_ActuatorsWrittenToEvent;
NativeArray<byte> m_InputBufferForFrame;
InputEventPtr m_InputEventPtrForFrame;
public InputEventPtr GetEventForFrame()
{
#if UNITY_EDITOR
if (!EditorApplication.isPlaying)
{
return new InputEventPtr();
}
#endif
if (m_ActuatorsWrittenToEvent % m_Actuators.Length == 0 || !m_InputEventPtrForFrame.valid)
{
m_ActuatorsWrittenToEvent = 0;
m_InputEventPtrForFrame = new InputEventPtr();
m_InputBufferForFrame = StateEvent.From(m_Device, out m_InputEventPtrForFrame);
}
return m_InputEventPtrForFrame;
}
public void EventProcessedInFrame()
{
#if UNITY_EDITOR
if (!EditorApplication.isPlaying)
{
return;
}
#endif
m_ActuatorsWrittenToEvent++;
if (m_ActuatorsWrittenToEvent == m_Actuators.Length && m_InputEventPtrForFrame.valid)
{
InputSystem.QueueEvent(m_InputEventPtrForFrame);
m_InputBufferForFrame.Dispose();
}
}
}
}

4
com.unity.ml-agents.extensions/Runtime/Input/Unity.ML-Agents.Extensions.Input.asmdef


"versionDefines": [
{
"name": "com.unity.inputsystem",
"expression": "1.1.0-preview",
"expression": "1.1.0-preview.3",
}
}

12
com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/ButtonInputActionAdaptorTests.cs


public void TestQueueEvent()
{
var actionBuffers = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new[] { 1 }));
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var val = m_Action.ReadValue<float>();
Assert.IsTrue(Mathf.Approximately(1f, val));

public void TestWriteToHeuristic()
{
var actionBuffers = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new[] { 1 }));
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var buffer = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new[] { 1 }));
m_Adaptor.WriteToHeuristic(m_Action, buffer);

12
com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/DoubleInputActionAdaptorTests.cs


public void TestQueueEvent()
{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
Assert.IsTrue(Mathf.Approximately(1f, (float)m_Action.ReadValue<double>()));
}

{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var buffer = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.WriteToHeuristic(m_Action, buffer);

12
com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/FloatInputActionAdapatorTests.cs


public void TestQueueEvent()
{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var val = m_Action.ReadValue<float>();
Assert.IsTrue(Mathf.Approximately(1f, val));

public void TestWriteToHeuristic()
{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var buffer = new ActionBuffers(new ActionSegment<float>(new[] { 1f }), ActionSegment<int>.Empty);
m_Adaptor.WriteToHeuristic(m_Action, buffer);

12
com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/IntegerInputActionAdaptorTests.cs


public void TestQueueEvent()
{
var actionBuffers = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new[] { 1 }));
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var val = m_Action.ReadValue<int>();
Assert.IsTrue(val == 1);

public void TestWriteToHeuristic()
{
var actionBuffers = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new[] { 1 }));
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var buffer = new ActionBuffers(ActionSegment<float>.Empty, new ActionSegment<int>(new int[1]));
m_Adaptor.WriteToHeuristic(m_Action, buffer);

12
com.unity.ml-agents.extensions/Tests/Runtime/Input/Adaptors/Vector2InputActionAdaptorTests.cs


public void TestQueueEvent()
{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 0f, 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var val = m_Action.ReadValue<Vector2>();
Assert.IsTrue(Mathf.Approximately(0f, val.x));

public void TestWriteToHeuristic()
{
var actionBuffers = new ActionBuffers(new ActionSegment<float>(new[] { 0f, 1f }), ActionSegment<int>.Empty);
m_Adaptor.QueueInputEventForAction(m_Action, m_Control, new ActionSpec(), actionBuffers);
var context = new InputActuatorEventContext(1, m_Device);
using (context.GetEventForFrame(out var eventPtr))
{
m_Adaptor.WriteToInputEventForAction(eventPtr, m_Action, m_Control, new ActionSpec(), actionBuffers);
}
InputSystem.Update();
var buffer = new ActionBuffers(new ActionSegment<float>(new float[2]), ActionSegment<int>.Empty);
m_Adaptor.WriteToHeuristic(m_Action, buffer);

17
com.unity.ml-agents.extensions/Tests/Runtime/Input/InputActionActuatorTests.cs


using Unity.MLAgents.Policies;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.LowLevel;
public bool eventQueued;
public bool eventWritten;
public bool writtenToHeuristic;
public ActionSpec GetActionSpecForInputAction(InputAction action)

public void QueueInputEventForAction(InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
public void WriteToInputEventForAction(InputEventPtr eventPtr, InputAction action, InputControl control, ActionSpec actionSpec, in ActionBuffers actionBuffers)
eventQueued = true;
eventWritten = true;
}
public void WriteToHeuristic(InputAction action, in ActionBuffers actionBuffers)

public void Reset()
{
eventQueued = false;
eventWritten = false;
writtenToHeuristic = false;
}
}

m_BehaviorParameters = go.AddComponent<BehaviorParameters>();
var action = new InputAction("action");
m_Adaptor = new TestAdaptor();
m_Actuator = new InputActionActuator(null, m_BehaviorParameters, action, m_Adaptor);
m_Actuator = new InputActionActuator(null, m_BehaviorParameters, action, m_Adaptor, new InputActuatorEventContext(1, InputSystem.AddDevice<Gamepad>()));
}
[Test]

m_Actuator.OnActionReceived(new ActionBuffers());
m_Actuator.Heuristic(new ActionBuffers());
Assert.IsFalse(m_Adaptor.eventQueued);
Assert.IsFalse(m_Adaptor.eventWritten);
Assert.IsFalse(m_Adaptor.eventQueued);
Assert.IsFalse(m_Adaptor.eventWritten);
Assert.IsTrue(m_Adaptor.eventQueued);
Assert.IsTrue(m_Adaptor.eventWritten);
m_Adaptor.Reset();
Assert.AreEqual(m_Actuator.Name, "InputActionActuator-action");

2
com.unity.ml-agents.extensions/Tests/Runtime/Input/InputActuatorComponentTests.cs


var device = InputSystem.AddDevice("TestLayout");
var actuators = InputActuatorComponent.CreateActuatorsFromMap(inputActionMap, m_BehaviorParameters, device);
var actuators = InputActuatorComponent.CreateActuatorsFromMap(inputActionMap, m_BehaviorParameters, device, new InputActuatorEventContext());
Assert.IsTrue(actuators.Length == 2);
Assert.IsTrue(actuators[0].ActionSpec.Equals(ActionSpec.MakeContinuous(2)));
Assert.IsTrue(actuators[1].ActionSpec.NumDiscreteActions == 1);

2
com.unity.ml-agents.extensions/Tests/Runtime/Input/Unity.ML-Agents.Extensions.Input.Tests.Runtime.asmdef


"versionDefines": [
{
"name": "com.unity.inputsystem",
"expression": "1.1.0",
"expression": "1.1.0-preview.3",
"define": "MLA_INPUT_TESTS"
}
],

4
com.unity.ml-agents.extensions/package.json


{
"name": "com.unity.ml-agents.extensions",
"displayName": "ML Agents Extensions",
"version": "0.1.0-preview",
"version": "0.3.0-preview",
"com.unity.ml-agents": "1.8.0-preview"
"com.unity.ml-agents": "1.9.0-preview"
}
}

4
com.unity.ml-agents/.gitignore


/Assets/Plugins*
/Assets/Demonstrations*
/csharp_timers.json
/Samples/
/Samples.meta
*.api.meta
*.api.meta

22
com.unity.ml-agents/CHANGELOG.md


### Major Changes
#### com.unity.ml-agents (C#)
- The `BufferSensor` and `BufferSensorComponent` have been added. They allow the Agent to observe a variable number of entities; a usage sketch follows this changelog excerpt. (#4909)
- The `SimpleMultiAgentGroup` class and `IMultiAgentGroup` interface have been added. These allow Agents to be given rewards and
end episodes in groups. (#4923)
- The MA-POCA trainer has been added. This is a new trainer that enables Agents to learn how to work together in groups. Configure
`poca` as the trainer in the configuration YAML after instantiating a `SimpleMultiAgentGroup` to use this feature. (#5005)
- Updated com.unity.barracuda to 1.3.2-preview. (#5084)
- Made com.unity.modules.unityanalytics an optional dependency. (#5109)
- Added 3D Ball to the `com.unity.ml-agents` samples. (#5077)
- Sensor names are now passed through to `ObservationSpec.name`. (#5036)
#### com.unity.ml-agents (C#)
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Made the error message when observations of different shapes are sent to the trainer clearer. (#5030)
- An issue that prevented curriculums from incrementing with self-play has been fixed. (#5098)
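As referenced above, here is a rough usage sketch for the new `BufferSensorComponent`. The `AppendObservation(float[])` call, the `Unity.MLAgents.Sensors` namespace, and the per-entity layout are assumptions drawn from how the Sorter example uses the component; treat this as illustrative rather than a definitive API reference:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Illustrative agent that reports a variable number of nearby entities
// through a BufferSensorComponent attached to the same GameObject.
// Assumes the component's observable size is configured to 3 floats per entity.
public class VariableEntityAgent : Agent
{
    BufferSensorComponent m_BufferSensor;

    public override void Initialize()
    {
        m_BufferSensor = GetComponent<BufferSensorComponent>();
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Each visible entity contributes one fixed-size entry; the sensor
        // handles the fact that the entity count changes from step to step.
        foreach (var entity in FindNearbyEntities())
        {
            var toEntity = entity.transform.position - transform.position;
            m_BufferSensor.AppendObservation(new[] { toEntity.x, toEntity.y, toEntity.z });
        }
    }

    // Hypothetical helper; a real scene would query colliders, tags, etc.
    GameObject[] FindNearbyEntities()
    {
        return GameObject.FindGameObjectsWithTag("entity");
    }
}
```

The number of floats per entity (three here) and the maximum number of entities are expected to be configured on the `BufferSensorComponent` itself in the Inspector.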
## [1.8.1-preview] - 2021-03-08
### Minor Changes
#### ml-agents / ml-agents-envs / gym-unity (Python)
- The `cattrs` version dependency was updated to allow `>=1.1.0` on Python 3.8 or higher. (#4821)
### Bug Fixes
#### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Fix an issue where queuing InputEvents overwrote data from previous events in the same frame. (#5034)
## [1.8.0-preview] - 2021-02-17
### Major Changes

4
com.unity.ml-agents/Documentation~/com.unity.ml-agents.md


[unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
[unity inference engine]: https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html
[package manager documentation]: https://docs.unity3d.com/Manual/upm-ui-install.html
[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Installation.md
[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Installation.md
[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/com.unity.ml-agents.extensions
[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents.extensions

8
com.unity.ml-agents/Runtime/Academy.cs


* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
* https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/
* https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/
*/
namespace Unity.MLAgents

/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{

/// </item>
/// <item>
/// <term>1.5.0</term>
/// <description>Support variable length observation training.</description>
/// <description>Support variable length observation training and multi-agent groups.</description>
/// </item>
/// </list>
/// </remarks>

/// Unity package version of com.unity.ml-agents.
/// This must match the version string in package.json and is checked in a unit test.
/// </summary>
internal const string k_PackageVersion = "1.8.0-preview";
internal const string k_PackageVersion = "1.9.0-preview";
const int k_EditorTrainingPort = 5004;

2
com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs


///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);

2
com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs


///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action</param>

26
com.unity.ml-agents/Runtime/Agent.cs


/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design.md
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Readme.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Readme.md
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)

/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)

/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>

/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{

///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }

///
/// For more information about implementing agent actions see [Agents - Actions].
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <param name="actions">
/// Struct containing the buffers of actions to be executed at this step.

28
com.unity.ml-agents/Runtime/Analytics/InferenceAnalytics.cs


#if MLA_UNITY_ANALYTICS_MODULE || !UNITY_2019_4_OR_NEWER
#define MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif
using System.Diagnostics;
using Unity.Barracuda;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Inference;

#if MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif
#if MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif
#endif // MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif // UNITY_EDITOR
namespace Unity.MLAgents.Analytics

return true;
}
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
#else
if (result == AnalyticsResult.Ok)
{
s_EventRegistered = true;
}
#elif MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif
#endif
if (s_EventRegistered && s_SentModels == null)
{
s_SentModels = new HashSet<NNModel>();

/// <param name="actionSpec">ActionSpec for the Agent. Used to generate information about the action space.</param>
/// <param name="actuators">List of IActuators for the Agent. Used to generate information about the action space.</param>
/// <returns></returns>
[Conditional("MLA_UNITY_ANALYTICS_MODULE_ENABLED")]
public static void InferenceModelSet(
NNModel nnModel,
string behaviorName,

var data = GetEventForModel(nnModel, behaviorName, inferenceDevice, sensors, actionSpec, actuators);
// Note - to debug, use JsonUtility.ToJson on the event.
// Debug.Log(JsonUtility.ToJson(data, true));
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
if (AnalyticsUtils.s_SendEditorAnalytics)
{
EditorAnalytics.SendEventWithLimit(k_EventName, data, k_EventVersion);

38
com.unity.ml-agents/Runtime/Analytics/TrainingAnalytics.cs


#if MLA_UNITY_ANALYTICS_MODULE || !UNITY_2019_4_OR_NEWER
#define MLA_UNITY_ANALYTICS_MODULE_ENABLED
#endif
using System.Diagnostics;
#if MLA_UNITY_ANALYTICS_MODULE_ENABLED
#if UNITY_EDITOR
using UnityEditor.Analytics;
#endif
#endif
using UnityEditor.Analytics;
#endif
namespace Unity.MLAgents.Analytics

static bool EnableAnalytics()
{
#if MLA_UNITY_ANALYTICS_MODULE_ENABLED
#else
AnalyticsResult result = AnalyticsResult.UnsupportedPlatform;
#endif
#else
return false;
#endif // UNITY_EDITOR
}
s_EventsRegistered = true;

}
return s_EventsRegistered;
#else
return false;
#endif // MLA_UNITY_ANALYTICS_MODULE_ENABLED
}
/// <summary>

/// <param name="packageVersion"></param>
[Conditional("MLA_UNITY_ANALYTICS_MODULE_ENABLED")]
public static void SetTrainerInformation(string packageVersion, string communicationVersion)
{
s_TrainerPackageVersion = packageVersion;

public static bool IsAnalyticsEnabled()
{
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
return EditorAnalytics.enabled;
#else
return false;

[Conditional("MLA_UNITY_ANALYTICS_MODULE_ENABLED")]
public static void TrainingEnvironmentInitialized(TrainingEnvironmentInitializedEvent tbiEvent)
{
if (!IsAnalyticsEnabled())

// Debug.Log(
// $"Would send event {k_TrainingEnvironmentInitializedEventName} with body {JsonUtility.ToJson(tbiEvent, true)}"
// );
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
#else
return;
[Conditional("MLA_UNITY_ANALYTICS_MODULE_ENABLED")]
public static void RemotePolicyInitialized(
string fullyQualifiedBehaviorName,
IList<ISensor> sensors,

// Debug.Log(
// $"Would send event {k_RemotePolicyInitializedEventName} with body {JsonUtility.ToJson(data, true)}"
// );
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
#else
return;
#endif
}

return fullyQualifiedBehaviorName.Substring(0, lastQuestionIndex);
}
[Conditional("MLA_UNITY_ANALYTICS_MODULE_ENABLED")]
public static void TrainingBehaviorInitialized(TrainingBehaviorInitializedEvent tbiEvent)
{
if (!IsAnalyticsEnabled())

// Debug.Log(
// $"Would send event {k_TrainingBehaviorInitializedEventName} with body {JsonUtility.ToJson(tbiEvent, true)}"
// );
#if UNITY_EDITOR
#if UNITY_EDITOR && MLA_UNITY_ANALYTICS_MODULE_ENABLED
if (AnalyticsUtils.s_SendEditorAnalytics)
{
EditorAnalytics.SendEventWithLimit(k_TrainingBehaviorInitializedEventName, tbiEvent);

33
com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs


using UnityEngine;
using System.Runtime.CompilerServices;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Analytics;
using Unity.MLAgents.Analytics;
[assembly: InternalsVisibleTo("Unity.ML-Agents.Editor")]
[assembly: InternalsVisibleTo("Unity.ML-Agents.Editor.Tests")]

internal static class GrpcExtensions
{
#region AgentInfo
/// <summary>
/// Static flag to make sure that we only fire the warning once.
/// </summary>
private static bool s_HaveWarnedTrainerCapabilitiesAgentGroup = false;
/// <summary>
/// Converts a AgentInfo to a protobuf generated AgentInfoActionPairProto
/// </summary>

/// <returns>The protobuf version of the AgentInfo.</returns>
public static AgentInfoProto ToAgentInfoProto(this AgentInfo ai)
{
if (ai.groupId > 0)
{
var trainerCanHandle = Academy.Instance.TrainerCapabilities == null || Academy.Instance.TrainerCapabilities.MultiAgentGroups;
if (!trainerCanHandle)
{
if (!s_HaveWarnedTrainerCapabilitiesAgentGroup)
{
Debug.LogWarning(
$"Attached trainer doesn't support Multi Agent Groups; group rewards will be ignored." +
"Please find the versions that work best together from our release page: " +
"https://github.com/Unity-Technologies/ml-agents/releases"
);
s_HaveWarnedTrainerCapabilitiesAgentGroup = true;
}
}
}
var agentInfoProto = new AgentInfoProto
{
Reward = ai.reward,

if (dimensionPropertySensor != null)
{
var dimensionProperties = dimensionPropertySensor.GetDimensionProperties();
int[] intDimensionProperties = new int[dimensionProperties.Length];
for (int i = 0; i < dimensionProperties.Length; i++)
{
observationProto.DimensionProperties.Add((int)dimensionProperties[i]);

}
}
observationProto.Shape.AddRange(shape);
var sensorName = sensor.GetName();
if (!string.IsNullOrEmpty(sensorName))
{
observationProto.Name = sensorName;
}
// Add the observation type, if any, to the observationProto
var typeSensor = sensor as ITypedSensor;

HybridActions = proto.HybridActions,
TrainingAnalytics = proto.TrainingAnalytics,
VariableLengthObservation = proto.VariableLengthObservation,
MultiAgentGroups = proto.MultiAgentGroups,
};
}

HybridActions = rlCaps.HybridActions,
TrainingAnalytics = rlCaps.TrainingAnalytics,
VariableLengthObservation = rlCaps.VariableLengthObservation,
MultiAgentGroups = rlCaps.MultiAgentGroups,
};
}

}
#region Analytics
internal static TrainingEnvironmentInitializedEvent ToTrainingEnvironmentInitializedEvent(
this TrainingEnvironmentInitialized inputProto)
{

NumNetworkHiddenUnits = inputProto.NumNetworkHiddenUnits,
};
}
#endregion
}

4
com.unity.ml-agents/Runtime/Communicator/RpcCommunicator.cs


using System.Linq;
using UnityEngine;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Analytics;
using Unity.MLAgents.Analytics;
namespace Unity.MLAgents
{

var pythonPackageVersion = initializationInput.RlInitializationInput.PackageVersion;
var pythonCommunicationVersion = initializationInput.RlInitializationInput.CommunicationVersion;
TrainingAnalytics.SetTrainerInformation(pythonPackageVersion, pythonCommunicationVersion);
var communicationIsCompatible = CheckCommunicationVersionsAreCompatible(

5
com.unity.ml-agents/Runtime/Communicator/UnityRLCapabilities.cs


public bool HybridActions;
public bool TrainingAnalytics;
public bool VariableLengthObservation;
public bool MultiAgentGroups;
/// <summary>
/// A class holding the capabilities flags for Reinforcement Learning across C# and the Trainer codebase. This

bool compressedChannelMapping = true,
bool hybridActions = true,
bool trainingAnalytics = true,
bool variableLengthObservation = true)
bool variableLengthObservation = true,
bool multiAgentGroups = true)
{
BaseRLCapabilities = baseRlCapabilities;
ConcatenatedPngObservations = concatenatedPngObservations;

VariableLengthObservation = variableLengthObservation;
MultiAgentGroups = multiAgentGroups;
}
/// <summary>

2
com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs


/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]

40
com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Capabilities.cs


byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"CjVtbGFnZW50c19lbnZzL2NvbW11bmljYXRvcl9vYmplY3RzL2NhcGFiaWxp",
"dGllcy5wcm90bxIUY29tbXVuaWNhdG9yX29iamVjdHMi0gEKGFVuaXR5UkxD",
"dGllcy5wcm90bxIUY29tbXVuaWNhdG9yX29iamVjdHMi7AEKGFVuaXR5UkxD",
"Z3RoT2JzZXJ2YXRpb24YBiABKAhCJaoCIlVuaXR5Lk1MQWdlbnRzLkNvbW11",
"bmljYXRvck9iamVjdHNiBnByb3RvMw=="));
"Z3RoT2JzZXJ2YXRpb24YBiABKAgSGAoQbXVsdGlBZ2VudEdyb3VwcxgHIAEo",
"CEIlqgIiVW5pdHkuTUxBZ2VudHMuQ29tbXVuaWNhdG9yT2JqZWN0c2IGcHJv",
"dG8z"));
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.UnityRLCapabilitiesProto), global::Unity.MLAgents.CommunicatorObjects.UnityRLCapabilitiesProto.Parser, new[]{ "BaseRLCapabilities", "ConcatenatedPngObservations", "CompressedChannelMapping", "HybridActions", "TrainingAnalytics", "VariableLengthObservation" }, null, null, null)
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.UnityRLCapabilitiesProto), global::Unity.MLAgents.CommunicatorObjects.UnityRLCapabilitiesProto.Parser, new[]{ "BaseRLCapabilities", "ConcatenatedPngObservations", "CompressedChannelMapping", "HybridActions", "TrainingAnalytics", "VariableLengthObservation", "MultiAgentGroups" }, null, null, null)
}));
}
#endregion

hybridActions_ = other.hybridActions_;
trainingAnalytics_ = other.trainingAnalytics_;
variableLengthObservation_ = other.variableLengthObservation_;
multiAgentGroups_ = other.multiAgentGroups_;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}

}
}
/// <summary>Field number for the "multiAgentGroups" field.</summary>
public const int MultiAgentGroupsFieldNumber = 7;
private bool multiAgentGroups_;
/// <summary>
/// Support for multi agent groups and group rewards
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool MultiAgentGroups {
get { return multiAgentGroups_; }
set {
multiAgentGroups_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as UnityRLCapabilitiesProto);

if (HybridActions != other.HybridActions) return false;
if (TrainingAnalytics != other.TrainingAnalytics) return false;
if (VariableLengthObservation != other.VariableLengthObservation) return false;
if (MultiAgentGroups != other.MultiAgentGroups) return false;
return Equals(_unknownFields, other._unknownFields);
}

if (HybridActions != false) hash ^= HybridActions.GetHashCode();
if (TrainingAnalytics != false) hash ^= TrainingAnalytics.GetHashCode();
if (VariableLengthObservation != false) hash ^= VariableLengthObservation.GetHashCode();
if (MultiAgentGroups != false) hash ^= MultiAgentGroups.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}

if (VariableLengthObservation != false) {
output.WriteRawTag(48);
output.WriteBool(VariableLengthObservation);
}
if (MultiAgentGroups != false) {
output.WriteRawTag(56);
output.WriteBool(MultiAgentGroups);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);

if (VariableLengthObservation != false) {
size += 1 + 1;
}
if (MultiAgentGroups != false) {
size += 1 + 1;
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}

if (other.VariableLengthObservation != false) {
VariableLengthObservation = other.VariableLengthObservation;
}
if (other.MultiAgentGroups != false) {
MultiAgentGroups = other.MultiAgentGroups;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}

}
case 48: {
VariableLengthObservation = input.ReadBool();
break;
}
case 56: {
MultiAgentGroups = input.ReadBool();
break;
}
}

48
com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Observation.cs


byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"CjRtbGFnZW50c19lbnZzL2NvbW11bmljYXRvcl9vYmplY3RzL29ic2VydmF0",
"aW9uLnByb3RvEhRjb21tdW5pY2F0b3Jfb2JqZWN0cyKBAwoQT2JzZXJ2YXRp",
"aW9uLnByb3RvEhRjb21tdW5pY2F0b3Jfb2JqZWN0cyKPAwoQT2JzZXJ2YXRp",
"b25Qcm90bxINCgVzaGFwZRgBIAMoBRJEChBjb21wcmVzc2lvbl90eXBlGAIg",
"ASgOMiouY29tbXVuaWNhdG9yX29iamVjdHMuQ29tcHJlc3Npb25UeXBlUHJv",
"dG8SGQoPY29tcHJlc3NlZF9kYXRhGAMgASgMSAASRgoKZmxvYXRfZGF0YRgE",

"b25fdHlwZRgHIAEoDjIqLmNvbW11bmljYXRvcl9vYmplY3RzLk9ic2VydmF0",
"aW9uVHlwZVByb3RvGhkKCUZsb2F0RGF0YRIMCgRkYXRhGAEgAygCQhIKEG9i",
"c2VydmF0aW9uX2RhdGEqKQoUQ29tcHJlc3Npb25UeXBlUHJvdG8SCAoETk9O",
"RRAAEgcKA1BORxABKkYKFE9ic2VydmF0aW9uVHlwZVByb3RvEgsKB0RFRkFV",
"TFQQABIICgRHT0FMEAESCgoGUkVXQVJEEAISCwoHTUVTU0FHRRADQiWqAiJV",
"bml0eS5NTEFnZW50cy5Db21tdW5pY2F0b3JPYmplY3RzYgZwcm90bzM="));
"aW9uVHlwZVByb3RvEgwKBG5hbWUYCCABKAkaGQoJRmxvYXREYXRhEgwKBGRh",
"dGEYASADKAJCEgoQb2JzZXJ2YXRpb25fZGF0YSopChRDb21wcmVzc2lvblR5",
"cGVQcm90bxIICgROT05FEAASBwoDUE5HEAEqRgoUT2JzZXJ2YXRpb25UeXBl",
"UHJvdG8SCwoHREVGQVVMVBAAEggKBEdPQUwQARIKCgZSRVdBUkQQAhILCgdN",
"RVNTQUdFEANCJaoCIlVuaXR5Lk1MQWdlbnRzLkNvbW11bmljYXRvck9iamVj",
"dHNiBnByb3RvMw=="));
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Parser, new[]{ "Shape", "CompressionType", "CompressedData", "FloatData", "CompressedChannelMapping", "DimensionProperties", "ObservationType" }, new[]{ "ObservationData" }, null, new pbr::GeneratedClrTypeInfo[] { new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData.Parser, new[]{ "Data" }, null, null, null)})
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Parser, new[]{ "Shape", "CompressionType", "CompressedData", "FloatData", "CompressedChannelMapping", "DimensionProperties", "ObservationType", "Name" }, new[]{ "ObservationData" }, null, new pbr::GeneratedClrTypeInfo[] { new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData.Parser, new[]{ "Data" }, null, null, null)})
}));
}
#endregion

compressedChannelMapping_ = other.compressedChannelMapping_.Clone();
dimensionProperties_ = other.dimensionProperties_.Clone();
observationType_ = other.observationType_;
name_ = other.name_;
switch (other.ObservationDataCase) {
case ObservationDataOneofCase.CompressedData:
CompressedData = other.CompressedData;

}
}
/// <summary>Field number for the "name" field.</summary>
public const int NameFieldNumber = 8;
private string name_ = "";
/// <summary>
/// Optional name of the observation.
/// This will be set to the ISensor name when writing,
/// and read into the ObservationSpec in the low-level API
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public string Name {
get { return name_; }
set {
name_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
}
}
private object observationData_;
/// <summary>Enum of possible cases for the "observation_data" oneof.</summary>
public enum ObservationDataOneofCase {

if(!compressedChannelMapping_.Equals(other.compressedChannelMapping_)) return false;
if(!dimensionProperties_.Equals(other.dimensionProperties_)) return false;
if (ObservationType != other.ObservationType) return false;
if (Name != other.Name) return false;
if (ObservationDataCase != other.ObservationDataCase) return false;
return Equals(_unknownFields, other._unknownFields);
}

hash ^= compressedChannelMapping_.GetHashCode();
hash ^= dimensionProperties_.GetHashCode();
if (ObservationType != 0) hash ^= ObservationType.GetHashCode();
if (Name.Length != 0) hash ^= Name.GetHashCode();
hash ^= (int) observationDataCase_;
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();

output.WriteRawTag(56);
output.WriteEnum((int) ObservationType);
}
if (Name.Length != 0) {
output.WriteRawTag(66);
output.WriteString(Name);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}

if (ObservationType != 0) {
size += 1 + pb::CodedOutputStream.ComputeEnumSize((int) ObservationType);
}
if (Name.Length != 0) {
size += 1 + pb::CodedOutputStream.ComputeStringSize(Name);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}

dimensionProperties_.Add(other.dimensionProperties_);
if (other.ObservationType != 0) {
ObservationType = other.ObservationType;
}
if (other.Name.Length != 0) {
Name = other.Name;
}
switch (other.ObservationDataCase) {
case ObservationDataOneofCase.CompressedData:

}
case 56: {
observationType_ = (global::Unity.MLAgents.CommunicatorObjects.ObservationTypeProto) input.ReadEnum();
break;
}
case 66: {
Name = input.ReadString();
break;
}
}

7
com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs


[HideInInspector, SerializeField]
BrainParameters m_BrainParameters = new BrainParameters();
/// <summary>
/// Delegate for receiving events about Policy Updates.
/// </summary>
/// <param name="isInHeuristicMode">Whether or not the current policy is running in heuristic mode.</param>
/// <summary>
/// Event that fires when an Agent's policy is updated.
/// </summary>
internal event PolicyUpdated OnPolicyUpdated;
/// <summary>

3
com.unity.ml-agents/Runtime/Policies/RemotePolicy.cs


using Unity.MLAgents.Sensors;
using Unity.MLAgents.Sensors;
namespace Unity.MLAgents.Policies

2
com.unity.ml-agents/Runtime/SimpleMultiAgentGroup.cs


/// <summary>
/// A basic implementation of the IMultiAgentGroup interface.
/// </summary>
internal class SimpleMultiAgentGroup : IMultiAgentGroup, IDisposable
public class SimpleMultiAgentGroup : IMultiAgentGroup, IDisposable
{
readonly int m_Id = MultiAgentGroupIdCounter.GetGroupId();
HashSet<Agent> m_Agents = new HashSet<Agent>();

9
com.unity.ml-agents/Runtime/Unity.ML-Agents.asmdef


"Grpc.Core.dll"
],
"autoReferenced": true,
"defineConstraints": []
"defineConstraints": [],
"versionDefines": [
{
"name": "com.unity.modules.unityanalytics",
"expression": "1.0.0",
"define": "MLA_UNITY_ANALYTICS_MODULE"
}
]
}

3
com.unity.ml-agents/Tests/Editor/Analytics/InferenceAnalyticsTests.cs


using UnityEngine;
using Unity.Barracuda;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Analytics;
using Unity.MLAgents.Analytics;
namespace Unity.MLAgents.Tests.Analytics
{

6
com.unity.ml-agents/Tests/Editor/Communicator/GrpcExtensionsTests.cs


using NUnit.Framework;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Analytics;
using Unity.MLAgents.CommunicatorObjects;
using Unity.MLAgents.Analytics;
using Unity.MLAgents.CommunicatorObjects;
namespace Unity.MLAgents.Tests
{

sparseChannelSensor.Mapping = new[] { 0, 0, 0, 1, 1, 1 };
Assert.AreEqual(GrpcExtensions.IsTrivialMapping(sparseChannelSensor), false);
}
[Test]
public void TestDefaultTrainingEvents()
{

7
com.unity.ml-agents/Tests/Editor/Unity.ML-Agents.Editor.Tests.asmdef


"autoReferenced": false,
"defineConstraints": [
"UNITY_INCLUDE_TESTS"
],
"versionDefines": [
{
"name": "com.unity.modules.unityanalytics",
"expression": "1.0.0",
"define": "MLA_UNITY_ANALYTICS_MODULE"
}
]
}

7
com.unity.ml-agents/package.json


{
"name": "com.unity.ml-agents",
"displayName": "ML Agents",
"version": "1.8.0-preview",
"version": "1.9.0-preview",
"com.unity.barracuda": "1.3.1-preview",
"com.unity.barracuda": "1.3.2-preview",
"com.unity.modules.physics2d": "1.0.0",
"com.unity.modules.unityanalytics": "1.0.0"
"com.unity.modules.physics2d": "1.0.0"
}
}

6
config/imitation/Crawler.yaml


behaviors:
CrawlerStatic:
Crawler:
trainer_type: ppo
hyperparameters:
batch_size: 2024

learning_rate: 0.0003
use_actions: false
use_vail: false
demo_path: Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerSta.demo
demo_path: Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawler.demo
keep_checkpoints: 5
max_steps: 10000000
time_horizon: 1000

demo_path: Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawlerSta.demo
demo_path: Project/Assets/ML-Agents/Examples/Crawler/Demos/ExpertCrawler.demo
steps: 50000
strength: 0.5
samples_per_update: 0

4
config/ppo/FoodCollector.yaml


learning_rate_schedule: linear
network_settings:
normalize: false
hidden_units: 128
num_layers: 2
hidden_units: 256
num_layers: 1
vis_encode_type: simple
reward_signals:
extrinsic:

2
config/ppo/Crawler.yaml


behaviors:
CrawlerDynamic:
Crawler:
trainer_type: ppo
hyperparameters:
batch_size: 2048

4
config/ppo/Walker.yaml


behaviors:
CrawlerDynamicVariableSpeed:
Walker:
trainer_type: ppo
hyperparameters:
batch_size: 2048

gamma: 0.995
strength: 1.0
keep_checkpoints: 5
max_steps: 10000000
max_steps: 30000000
time_horizon: 1000
summary_freq: 30000
threaded: true

2
config/ppo/Worm.yaml


behaviors:
WormDynamic:
Worm:
trainer_type: ppo
hyperparameters:
batch_size: 2024

10
config/sac/FoodCollector.yaml


learning_rate: 0.0003
learning_rate_schedule: constant
batch_size: 256
buffer_size: 500000
buffer_size: 2048
buffer_init_steps: 0
tau: 0.005
steps_per_update: 10.0

network_settings:
normalize: false
hidden_units: 128
num_layers: 2
hidden_units: 256
num_layers: 1
vis_encode_type: simple
reward_signals:
extrinsic:

max_steps: 2000000
time_horizon: 64
summary_freq: 10000
threaded: true
summary_freq: 60000
threaded: true

2
config/sac/Crawler.yaml


behaviors:
CrawlerDynamic:
Crawler:
trainer_type: sac
hyperparameters:
learning_rate: 0.0003

2
config/sac/Walker.yaml


behaviors:
WalkerDynamic:
Walker:
trainer_type: sac
hyperparameters:
learning_rate: 0.0003

2
config/sac/Worm.yaml


behaviors:
WormDynamic:
Worm:
trainer_type: sac
hyperparameters:
learning_rate: 0.0003

8
docs/Getting-Started.md


If you haven't already, follow the [installation instructions](Installation.md).
Afterwards, open the Unity Project that contains all the example environments:
1. Launch Unity Hub
1. On the Projects dialog, choose the **Add** option at the top of the window.
1. Using the file dialog that opens, locate the `Project` folder within the
ML-Agents Toolkit and click **Open**.
1. Open the Package Manager Window by navigating to `Window -> Package Manager`
in the menu.
1. Navigate to the ML-Agents Package and click on it.
1. Find the `3D Ball` sample and click `Import`.
1. In the **Project** window, go to the
`Assets/ML-Agents/Examples/3DBall/Scenes` folder and open the `3DBall` scene
file.

8
docs/Installation-Anaconda-Windows.md


the ml-agents Conda environment by typing `activate ml-agents`)_:
```sh
git clone --branch release_13 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
The `--branch release_13` option will switch to the tag of the latest stable
The `--branch release_14` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.

connected to the Internet and then type in the Anaconda Prompt:
```console
pip install mlagents
python -m pip install mlagents==0.24.1
```
This will complete the installation of all the required Python packages to run

this, you can try:
```console
pip install mlagents --no-cache-dir
python -m pip install mlagents==0.24.1 --no-cache-dir
```
The `--no-cache-dir` option tells pip to disable the cache.

17
docs/Installation.md


The ML-Agents Toolkit contains several components:
- Unity package ([`com.unity.ml-agents`](../com.unity.ml-agents/)) contains the
Unity C# SDK that will be integrated into your Unity scene.
Unity C# SDK that will be integrated into your Unity project. This package contains
a sample to help you get started with ML-Agents.
- Unity package
([`com.unity.ml-agents.extensions`](../com.unity.ml-agents.extensions/))
contains experimental C#/Unity components that are not yet ready to be part

example environments and training configurations to experiment with them (some
of our tutorials / guides assume you have access to our example environments).
**NOTE:** There are samples shipped with the Unity Package. You only need to clone
the repository if you would like to explore more examples.
git clone --branch release_13 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
The `--branch release_13` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.
The `--branch release_14` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially unstable.
back, make sure to clone the `main` branch (by omitting `--branch release_13`
back, make sure to clone the `main` branch (by omitting `--branch release_14`
from the command above). See our
[Contributions Guidelines](../com.unity.ml-agents/CONTRIBUTING.md) for more
information on contributing to the ML-Agents Toolkit.

run from the command line:
```sh
pip3 install mlagents
python -m pip install mlagents==0.24.1
```
Note that this will install `mlagents` from PyPi, _not_ from the cloned

119
docs/Learning-Environment-Design-Agents.md


- [Rewards Summary & Best Practices](#rewards-summary--best-practices)
- [Agent Properties](#agent-properties)
- [Destroying an Agent](#destroying-an-agent)
- [Defining Teams for Multi-agent Scenarios](#defining-teams-for-multi-agent-scenarios)
- [Defining Multi-agent Scenarios](#defining-multi-agent-scenarios)
- [Recording Demonstrations](#recording-demonstrations)
An agent is an entity that can observe its environment, decide on the best

the order of the entities, so there is no need to properly "order" the
entities before feeding them into the `BufferSensor`.
The the `BufferSensorComponent` Editor inspector have two arguments:
The `BufferSensorComponent` Editor inspector has two arguments:
- `Observation Size` : This is how many floats each entity will be
represented with. This number is fixed and all entities must
have the same representation. For example, if the entities you want to

training process learns to control the speed of the Agent through this
parameter.
The [Reacher example](Learning-Environment-Examples.md#reacher) uses
The [3DBall example](Learning-Environment-Examples.md#3dball-3d-balance-ball) uses
![reacher](images/reacher.png)
![3DBall](images/balance.png)
These control values are applied as torques to the bodies making up the arm:
These control values are applied as rotation to the cube:
var torqueX = Mathf.Clamp(actionBuffers.ContinuousActions[0], -1f, 1f) * 150f;
var torqueZ = Mathf.Clamp(actionBuffers.ContinuousActions[1], -1f, 1f) * 150f;
m_RbA.AddTorque(new Vector3(torqueX, 0f, torqueZ));
var actionZ = 2f * Mathf.Clamp(actionBuffers.ContinuousActions[0], -1f, 1f);
var actionX = 2f * Mathf.Clamp(actionBuffers.ContinuousActions[1], -1f, 1f);
torqueX = Mathf.Clamp(actionBuffers.ContinuousActions[2], -1f, 1f) * 150f;
torqueZ = Mathf.Clamp(actionBuffers.ContinuousActions[3], -1f, 1f) * 150f;
m_RbB.AddTorque(new Vector3(torqueX, 0f, torqueZ));
gameObject.transform.Rotate(new Vector3(0, 0, 1), actionZ);
gameObject.transform.Rotate(new Vector3(1, 0, 0), actionX);
}
```
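For context, here is a minimal, self-contained sketch of where those two rotation actions might live inside an agent's `OnActionReceived` callback. The `RotatingPlatformAgent` class name is hypothetical; the actual 3DBall agent additionally limits the platform's rotation angles and observes the ball, which is omitted here.
```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Hypothetical minimal agent illustrating the rotation actions above.
public class RotatingPlatformAgent : Agent
{
    public override void OnActionReceived(ActionBuffers actionBuffers)
    {
        // Two continuous actions, each mapped to a small rotation around one axis.
        var actionZ = 2f * Mathf.Clamp(actionBuffers.ContinuousActions[0], -1f, 1f);
        var actionX = 2f * Mathf.Clamp(actionBuffers.ContinuousActions[1], -1f, 1f);

        // Apply the rotations directly to the platform's transform.
        gameObject.transform.Rotate(new Vector3(0, 0, 1), actionZ);
        gameObject.transform.Rotate(new Vector3(1, 0, 0), actionX);
    }
}
```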

Agent every time one is destroyed or by re-spawning new Agents when the whole
environment resets.
## Defining Teams for Multi-agent Scenarios
## Defining Multi-agent Scenarios
### Teams for Adversarial Scenarios
Self-play is triggered by including the self-play hyperparameter hierarchy in
the [trainer configuration](Training-ML-Agents.md#training-configurations). To

provide examples of symmetric games. To train an asymmetric game, specify
trainer configurations for each of your behavior names and include the self-play
hyperparameter hierarchy in both.
### Groups for Cooperative Scenarios
Cooperative behavior in ML-Agents can be enabled by instantiating a `SimpleMultiAgentGroup`,
typically in an environment controller or similar script, and adding agents to it
using the `RegisterAgent(Agent agent)` method. Note that all agents added to the same `SimpleMultiAgentGroup`
must have the same behavior name and Behavior Parameters. Using `SimpleMultiAgentGroup` enables the
agents within a group to learn how to work together to achieve a common goal (i.e.,
maximize a group-given reward), even if one or more of the group members are removed
before the episode ends. You can then use this group to add/set rewards, end or interrupt episodes
at a group level using the `AddGroupReward()`, `SetGroupReward()`, `EndGroupEpisode()`, and
`GroupEpisodeInterrupted()` methods. For example:
```csharp
// Create a Multi Agent Group in Start() or Initialize()
m_AgentGroup = new SimpleMultiAgentGroup();
// Register agents in group at the beginning of an episode
foreach (var agent in AgentList)
{
m_AgentGroup.RegisterAgent(agent);
}
// if the team scores a goal
m_AgentGroup.AddGroupReward(rewardForGoal);
// If the goal is reached and the episode is over
m_AgentGroup.EndGroupEpisode();
ResetScene();
// If time ran out and we need to interrupt the episode
m_AgentGroup.GroupEpisodeInterrupted();
ResetScene();
```
Multi Agent Groups should be used with the MA-POCA trainer, which is explicitly designed to train
cooperative environments. This can be enabled by using the `poca` trainer - see the
[training configurations](Training-Configuration-File.md) doc for more information on
configuring MA-POCA. When using MA-POCA, agents which are deactivated or removed from the Scene
during the episode will still learn to contribute to the group's long term rewards, even
if they are not active in the scene to experience them.
**NOTE**: Groups differ from Teams (for competitive settings) in the following way - Agents
working together should be added to the same Group, while agents playing against each other
should be given different Team Ids. If in the Scene there is one playing field and two teams,
there should be two Groups, one for each team, and each team should be assigned a different
Team Id. If this playing field is duplicated many times in the Scene (e.g. for training
speedup), there should be two Groups _per playing field_, and two unique Team Ids
_for the entire Scene_. In environments with both Groups and Team Ids configured, MA-POCA and
self-play can be used together for training. In the diagram below, there are two agents on each team,
and two playing fields where teams are pitted against each other. All the blue agents should share a Team Id
(and the orange ones a different ID), and there should be four group managers, one per pair of agents.
<p align="center">
<img src="images/groupmanager_teamid.png"
alt="Group Manager vs Team Id"
width="650" border="10" />
</p>
Please see the [SoccerTwos](Learning-Environment-Examples.md#soccer-twos) environment for an example.
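As a rough illustration of the note above, here is a minimal sketch of how one playing field could assign the two shared Team Ids and create its own pair of groups. The `FieldController` name and field layout are hypothetical, and in practice the Team Id is usually set in the Inspector on the Behavior Parameters component rather than from code.
```csharp
using Unity.MLAgents;
using Unity.MLAgents.Policies;
using UnityEngine;

// Hypothetical per-field controller. Team Ids (0 and 1) are shared by all
// blue/orange agents in the Scene; each field still gets its own two groups.
public class FieldController : MonoBehaviour
{
    public Agent[] BlueAgents;
    public Agent[] OrangeAgents;

    SimpleMultiAgentGroup m_BlueGroup;
    SimpleMultiAgentGroup m_OrangeGroup;

    void Start()
    {
        m_BlueGroup = new SimpleMultiAgentGroup();
        m_OrangeGroup = new SimpleMultiAgentGroup();

        foreach (var agent in BlueAgents)
        {
            agent.GetComponent<BehaviorParameters>().TeamId = 0;
            m_BlueGroup.RegisterAgent(agent);
        }
        foreach (var agent in OrangeAgents)
        {
            agent.GetComponent<BehaviorParameters>().TeamId = 1;
            m_OrangeGroup.RegisterAgent(agent);
        }
    }
}
```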
#### Cooperative Behaviors Notes and Best Practices
* An agent can only be registered to one MultiAgentGroup at a time. If you want to re-assign an
agent from one group to another, you have to unregister it from the current group first.
* Agents with different behavior names in the same group are not supported.
* Agents within groups should always set the `Max Steps` parameter in the Agent script to 0.
Instead, handle Max Steps using the MultiAgentGroup by ending the episode for the entire
Group using `GroupEpisodeInterrupted()`.
* `EndGroupEpisode` and `GroupEpisodeInterrupted` do the same job in the game, but have a
slightly different effect on the training. If the episode is completed, you would want to call
`EndGroupEpisode`. But if the episode is not over and has simply been running for enough steps, i.e.
it has reached the max step count, you would call `GroupEpisodeInterrupted`.
* If an agent finishes early, e.g. it completes its task or is removed or killed in the game, do not call
`EndEpisode()` on the Agent. Instead, disable the agent and re-enable it when the next episode starts,
or destroy the agent entirely (see the sketch after this list). This is because calling `EndEpisode()` will call `OnEpisodeBegin()`, which
will reset the agent immediately. While it is possible to call `EndEpisode()` in this way, it is usually not the
desired behavior when training groups of agents.
* If an agent that was disabled in a scene needs to be re-enabled, it must be re-registered to the MultiAgentGroup.
* Group rewards are meant to reinforce agents to act in the group's best interest instead of
individual ones, and are treated differently than individual agent rewards during
training. So calling `AddGroupReward()` is not equivalent to calling agent.AddReward() on each agent
in the group.
* You can still add incremental rewards to agents using `Agent.AddReward()` if they are
in a Group. These rewards will only be given to those agents and are received when the
Agent is active.
* Environments which use Multi Agent Groups can be trained using PPO or SAC, but agents will
not be able to learn from group rewards after deactivation/removal, nor will they behave as cooperatively.
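The following is a minimal sketch of the pattern described in this list, assuming a hypothetical `GroupEpisodeController` attached to the environment; the `MaxEnvironmentSteps` field and the reset logic are placeholders.
```csharp
using Unity.MLAgents;
using UnityEngine;

// Hypothetical environment controller: episode length is handled at the group
// level (each Agent keeps Max Steps = 0), and agents that finish early are
// disabled rather than ended individually.
public class GroupEpisodeController : MonoBehaviour
{
    public Agent[] Agents;
    public int MaxEnvironmentSteps = 5000;   // group-level step limit (placeholder value)

    SimpleMultiAgentGroup m_AgentGroup;
    int m_StepCount;

    void Start()
    {
        m_AgentGroup = new SimpleMultiAgentGroup();
        foreach (var agent in Agents)
        {
            m_AgentGroup.RegisterAgent(agent);
        }
    }

    void FixedUpdate()
    {
        m_StepCount += 1;
        if (m_StepCount >= MaxEnvironmentSteps)
        {
            // Time ran out: interrupt the group episode instead of ending it.
            m_AgentGroup.GroupEpisodeInterrupted();
            ResetScene();
        }
    }

    // Called when an agent finishes early (e.g. it is removed or "killed").
    public void DeactivateAgent(Agent agent)
    {
        // Do not call agent.EndEpisode() here; just disable the GameObject.
        // If it is re-enabled later, re-register it with the group.
        agent.gameObject.SetActive(false);
    }

    void ResetScene()
    {
        m_StepCount = 0;
        // ... move agents and props back to their start positions here ...
    }
}
```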
## Recording Demonstrations

160
docs/Learning-Environment-Examples.md


number of goals.
- Benchmark Mean Reward: 0.8
## Tennis
![Tennis](images/tennis.png)
- Set-up: Two-player game where agents control rackets to hit a ball over the
net.
- Goal: The agents must hit the ball so that the opponent cannot hit a valid
return.
- Agents: The environment contains two agents with the same Behavior Parameters.
After training you can set the `Behavior Type` to `Heuristic Only` on one of
the Agent's Behavior Parameters to play against your trained model.
- Agent Reward Function (independent):
- +1.0 To the agent that wins the point. An agent wins a point by preventing
the opponent from hitting a valid return.
- -1.0 To the agent who loses the point.
- Behavior Parameters:
- Vector Observation space: 9 variables corresponding to position, velocity
and orientation of ball and racket.
- Actions: 3 continuous actions, corresponding to movement
toward net or away from net, jumping and rotation.
- Visual Observations: None
- Float Properties: Three
- gravity: Magnitude of gravity
- Default: 9.81
- Recommended Minimum: 6
- Recommended Maximum: 20
- scale: Specifies the scale of the ball in the 3 dimensions (equal across the
three dimensions)
- Default: .5
- Recommended Minimum: 0.2
- Recommended Maximum: 5
## Push Block
![Push](images/push.png)

block).
- Actions: 1 discrete action branch with 7 actions, corresponding to turn clockwise
and counterclockwise, move along four different face directions, or do nothing.
- Visual Observations (Optional): One first-person camera. Use
`VisualPushBlock` scene. **The visual observation version of this
environment does not train with the provided default training parameters.**
- Float Properties: Four
- block_scale: Scale of the block along the x and z dimensions
- Default: 2

- Float Properties: Four
- Benchmark Mean Reward (Big & Small Wall): 0.8
## Reacher
![Reacher](images/reacher.png)
- Set-up: Double-jointed arm which can move to target locations.
- Goal: The agent must move its hand to the goal location, and keep it there.
- Agents: The environment contains 10 agents with the same Behavior Parameters.
- Agent Reward Function (independent):
- +0.1 Each step agent's hand is in goal location.
- Behavior Parameters:
- Vector Observation space: 26 variables corresponding to position, rotation,
velocity, and angular velocities of the two arm rigid bodies.
- Actions: 4 continuous actions, corresponding to torque
applicable to two joints.
- Visual Observations: None.
- Float Properties: Five
- goal_size: radius of the goal zone
- Default: 5
- Recommended Minimum: 1
- Recommended Maximum: 10
- goal_speed: speed of the goal zone around the arm (in radians)
- Default: 1
- Recommended Minimum: 0.2
- Recommended Maximum: 4
- gravity
- Default: 9.81
- Recommended Minimum: 4
- Recommended Maximum: 20
- deviation: Magnitude of sinusoidal (cosine) deviation of the goal along the
vertical dimension
- Default: 0
- Recommended Minimum: 0
- Recommended Maximum: 5
- deviation_freq: Frequency of the cosine deviation of the goal along the
vertical dimension
- Default: 0
- Recommended Minimum: 0
- Recommended Maximum: 3
- Benchmark Mean Reward: 30
## Crawler
![Crawler](images/crawler.png)

- `CrawlerDynamicTarget`- Goal direction is randomized.
- `CrawlerDynamicVariableSpeed`- Goal direction and walking speed are randomized.
- `CrawlerStaticTarget` - Goal direction is always forward.
- `CrawlerStaticVariableSpeed`- Goal direction is always forward. Walking speed is randomized
- Agents: The environment contains 10 agents with the same Behavior Parameters.
- Agent Reward Function (independent):
The reward function is now geometric, meaning the reward each step is a product

rotations for joints.
- Visual Observations: None
- Float Properties: None
- Benchmark Mean Reward for `CrawlerDynamicTarget`: 2000
- Benchmark Mean Reward for `CrawlerDynamicVariableSpeed`: 3000
- Benchmark Mean Reward for `CrawlerStaticTarget`: 4000
- Benchmark Mean Reward for `CrawlerStaticVariableSpeed`: 4000
- Benchmark Mean Reward: 3000
## Worm

- Goal: The agent must move its body toward the goal direction.
- `WormStaticTarget` - Goal direction is always forward.
- `WormDynamicTarget` - Goal direction is randomized.
- Agents: The environment contains 10 agents with the same Behavior Parameters.
- Agent Reward Function (independent):
The reward function is now geometric, meaning the reward each step is a product

rotations for joints.
- Visual Observations: None
- Float Properties: None
- Benchmark Mean Reward for `WormStaticTarget`: 1200
- Benchmark Mean Reward for `WormDynamicTarget`: 800
- Benchmark Mean Reward: 800
## Food Collector

- -1 for interaction with red spheres
- Behavior Parameters:
- Vector Observation space: 53 corresponding to velocity of agent (2), whether
agent is frozen and/or shot its laser (2), plus ray-based perception of
objects around agent's forward direction (49; 7 raycast angles with 7
measurements for each).
agent is frozen and/or shot its laser (2), plus grid based perception of
objects around agent's forward direction (40 by 40 with 6 different categories).
- Actions:
- 3 continuous actions correspond to Forward Motion, Side Motion and Rotation
- 1 discrete action branch for Laser with 2 possible actions corresponding to

objects, goals, and walls.
- Actions: 1 discrete action Branch, with 4 actions corresponding to agent
rotation and forward/backward movement.
- Visual Observations (Optional): First-person view for the agent. Use
`VisualHallway` scene. **The visual observation version of this environment
does not train with the provided default training parameters.**
## Bouncer
![Bouncer](images/bouncer.png)
- Set-up: Environment where the agent needs on-demand decision making. The agent
must decide how to perform its next bounce only when it touches the ground.
- Goal: Catch the floating green cube. The agent only has a limited number of jumps.
- Agents: The environment contains one agent.
- Agent Reward Function (independent):
- +1 For catching the green cube.
- -1 For bouncing out of bounds.
- -0.05 Times the action squared. Energy expenditure penalty.
- Behavior Parameters:
- Vector Observation space: 6 corresponding to local position of agent and
green cube.
- Actions: 3 continuous actions corresponding to agent force applied for
the jump.
- Visual Observations: None
- Float Properties: Two
- target_scale: The scale of the green cube in the 3 dimensions
- Default: 150
- Recommended Minimum: 50
- Recommended Maximum: 250
- Benchmark Mean Reward: 10
## Soccer Twos
![SoccerTwos](images/soccer.png)

- Get the ball into the opponent's goal while preventing the ball from
entering own goal.
- Agents: The environment contains four agents, with the same Behavior
- Agents: The environment contains two different Multi Agent Groups with two agents in each.
Parameters : SoccerTwos.
- Agent Reward Function (dependent):
- (1 - `accumulated time penalty`) When ball enters opponent's goal

- Goal:
- Striker: Get the ball into the opponent's goal.
- Goalie: Keep the ball out of the goal.
- Agents: The environment contains three agents. Two Strikers and one Goalie.
- Agents: The environment contains two different Multi Agent Groups. One with two Strikers and the other with one Goalie.
Behavior Parameters : Striker, Goalie.
- Striker Agent Reward Function (dependent):
- +1 When ball enters opponent's goal.

correspond to articulation of the following body-parts: hips, chest, spine,
head, thighs, shins, feet, arms, forearms and hands.
- Goal: The agent must move its body toward the goal direction without falling.
- `WalkerDynamic`- Goal direction is randomized.
- `WalkerDynamicVariableSpeed`- Goal direction and walking speed are randomized.
- `WalkerStatic` - Goal direction is always forward.
- `WalkerStaticVariableSpeed` - Goal direction is always forward. Walking
speed is randomized
- Agents: The environment contains 10 independent agents with the same Behavior
Parameters.
- Agent Reward Function (independent):

- Default: 8
- Recommended Minimum: 3
- Recommended Maximum: 20
- Benchmark Mean Reward for `WalkerDynamic`: 2500
- Benchmark Mean Reward for `WalkerDynamicVariableSpeed`: 2500
- Benchmark Mean Reward for `WalkerStatic`: 3500
- Benchmark Mean Reward for `WalkerStaticVariableSpeed`: 3500
- Benchmark Mean Reward : 2500
## Pyramids

state.
- Actions: 1 discrete action branch, with 4 actions corresponding to agent rotation and
forward/backward movement.
- Visual Observations (Optional): First-person camera per-agent. Use
`VisualPyramids` scene. **The visual observation version of this environment
does not train with the provided default training parameters.**
- Float Properties: None
- Benchmark Mean Reward: 1.75

- Recommended Minimum: 1
- Recommended Maximum: 20
- Benchmark Mean Reward: Depends on the number of tiles.
## Cooperative Push Block
![CoopPushBlock](images/cooperative_pushblock.png)
- Set-up: Similar to Push Block, the agents are in an area with blocks that need
to be pushed into a goal. Small blocks can be pushed by one agent and are worth
+1, medium blocks require two agents to push in and are worth +2, and large
blocks require all 3 agents to push and are worth +3.
- Goal: Push all blocks into the goal.
- Agents: The environment contains three Agents in a Multi Agent Group.
- Agent Reward Function:
- -0.0001 Existential penalty, as a group reward.
- +1, +2, or +3 for pushing in a block, added as a group reward.
- Behavior Parameters:
- Observation space: A single Grid Sensor with separate tags for each block size,
the goal, the walls, and other agents.
- Actions: 1 discrete action branch with 7 actions, corresponding to turn clockwise
and counterclockwise, move along four different face directions, or do nothing.
- Float Properties: None
- Benchmark Mean Reward: 11 (Group Reward)

33
docs/ML-Agents-Overview.md


previous section, the ML-Agents Toolkit provides additional methods that can aid
in training behaviors for specific types of environments.
### Training in Multi-Agent Environments with Self-Play
### Training in Competitive Multi-Agent Environments with Self-Play
ML-Agents provides the functionality to train both symmetric and asymmetric
adversarial games with

our
[blog post on self-play](https://blogs.unity3d.com/2020/02/28/training-intelligent-adversaries-using-self-play-with-ml-agents/)
for additional information.
### Training In Cooperative Multi-Agent Environments with MA-POCA
![PushBlock with Agents Working Together](images/cooperative_pushblock.png)
ML-Agents provides the functionality for training cooperative behaviors - i.e.,
groups of agents working towards a common goal, where the success of the individual
is linked to the success of the whole group. In such a scenario, agents typically receive
rewards as a group. For instance, if a team of agents wins a game against an opposing
team, everyone is rewarded - even agents who did not directly contribute to the win. This
makes learning what to do as an individual difficult - you may get a win
for doing nothing, and a loss for doing your best.
In ML-Agents, we provide MA-POCA (MultiAgent POsthumous Credit Assignment), which
is a novel multi-agent trainer that trains a _centralized critic_, a neural network
that acts as a "coach" for a whole group of agents. You can then give rewards to the team
as a whole, and the agents will learn how best to contribute to achieving that reward.
Agents can _also_ be given rewards individually, and the team will work together to help the
individual achieve those goals. During an episode, agents can be added or removed from the group,
such as when agents spawn or die in a game. If agents are removed mid-episode (e.g., if teammates die
or are removed from the game), they will still learn whether their actions contributed
to the team winning later, enabling agents to take group-beneficial actions even if
they result in the individual being removed from the game (i.e., self-sacrifice).
MA-POCA can also be combined with self-play to train teams of agents to play against each other.
To learn more about enabling cooperative behaviors for agents in an ML-Agents environment,
check out [this page](Learning-Environment-Design-Agents.md#cooperative-scenarios).
For further reading, MA-POCA builds on previous work in multi-agent cooperative learning
([Lowe et al.](https://arxiv.org/abs/1706.02275), [Foerster et al.](https://arxiv.org/pdf/1705.08926.pdf),
among others) to enable the above use-cases.
### Solving Complex Tasks using Curriculum Learning

12
docs/Training-Configuration-File.md


## Common Trainer Configurations
One of the first decisions you need to make regarding your training run is which
trainer to use: PPO or SAC. There are some training configurations that are
trainer to use: PPO, SAC, or POCA. There are some training configurations that are
| `trainer_type` | (default = `ppo`) The type of trainer to use: `ppo` or `sac` |
| `trainer_type` | (default = `ppo`) The type of trainer to use: `ppo`, `sac`, or `poca`. |
| `summary_freq` | (default = `50000`) Number of experiences that needs to be collected before generating and displaying training statistics. This determines the granularity of the graphs in Tensorboard. |
| `time_horizon` | (default = `64`) How many steps of experience to collect per-agent before adding it to the experience buffer. When this limit is reached before the end of an episode, a value estimate is used to predict the overall expected reward from the agent's current state. As such, this parameter trades off between a less biased, but higher variance estimate (long time horizon) and more biased, but less varied estimate (short time horizon). In cases where there are frequent rewards within an episode, or episodes are prohibitively large, a smaller number can be more ideal. This number should be large enough to capture all the important behavior within a sequence of an agent's actions. <br><br> Typical range: `32` - `2048` |
| `max_steps` | (default = `500000`) Total number of steps (i.e., observation collected and action taken) that must be taken in the environment (or across all environments if using multiple in parallel) before ending the training process. If you have multiple agents with the same behavior name within your environment, all steps taken by those agents will contribute to the same `max_steps` count. <br><br>Typical range: `5e5` - `1e7` |

| `network_settings -> hidden_units` | (default = `128`) Number of units in the hidden layers of the neural network. Correspond to how many units are in each fully connected layer of the neural network. For simple problems where the correct action is a straightforward combination of the observation inputs, this should be small. For problems where the action is a very complex interaction between the observation variables, this should be larger. <br><br> Typical range: `32` - `512` |
| `network_settings -> num_layers` | (default = `2`) The number of hidden layers in the neural network. Corresponds to how many hidden layers are present after the observation input, or after the CNN encoding of the visual observation. For simple problems, fewer layers are likely to train faster and more efficiently. More layers may be necessary for more complex control problems. <br><br> Typical range: `1` - `3` |
| `network_settings -> normalize` | (default = `false`) Whether normalization is applied to the vector observation inputs. This normalization is based on the running average and variance of the vector observation. Normalization can be helpful in cases with complex continuous control problems, but may be harmful with simpler discrete control problems. |
| `network_settings -> vis_encoder_type` | (default = `simple`) Encoder type for encoding visual observations. <br><br> `simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsoon et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
| `network_settings -> vis_encode_type` | (default = `simple`) Encoder type for encoding visual observations. <br><br> `simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsoon et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
## Trainer-specific Configurations

| `hyperparameters -> tau` | (default = `0.005`) How aggressively to update the target network used for bootstrapping value estimation in SAC. Corresponds to the magnitude of the target Q update during the SAC model update. In SAC, there are two neural networks: the target and the policy. The target network is used to bootstrap the policy's estimate of the future rewards at a given state, and is fixed while the policy is being updated. This target is then slowly updated according to tau. Typically, this value should be left at 0.005. For simple problems, increasing tau to 0.01 might reduce the time it takes to learn, at the cost of stability. <br><br>Typical range: `0.005` - `0.01` |
| `hyperparameters -> steps_per_update` | (default = `1`) Average ratio of agent steps (actions) taken to updates made of the agent's policy. In SAC, a single "update" corresponds to grabbing a batch of size `batch_size` from the experience replay buffer, and using this mini batch to update the models. Note that it is not guaranteed that after exactly `steps_per_update` steps an update will be made, only that the ratio will hold true over many steps. Typically, `steps_per_update` should be greater than or equal to 1. Note that setting `steps_per_update` lower will improve sample efficiency (reduce the number of steps required to train) but increase the CPU time spent performing updates. For most environments where steps are fairly fast (e.g. our example environments) `steps_per_update` equal to the number of agents in the scene is a good balance. For slow environments (steps take 0.1 seconds or more) reducing `steps_per_update` may improve training speed. We can also change `steps_per_update` to lower than 1 to update more often than once per step, though this will usually result in a slowdown unless the environment is very slow. <br><br>Typical range: `1` - `20` |
| `hyperparameters -> reward_signal_num_update` | (default = `steps_per_update`) Number of steps per mini batch sampled and used for updating the reward signals. By default, we update the reward signals once every time the main policy is updated. However, to imitate the training procedure in certain imitation learning papers (e.g. [Kostrikov et. al](http://arxiv.org/abs/1809.02925), [Blondé et. al](http://arxiv.org/abs/1809.02064)), we may want to update the reward signal (GAIL) M times for every update of the policy. We can change `steps_per_update` of SAC to N, as well as `reward_signal_steps_per_update` under `reward_signals` to N / M to accomplish this. By default, `reward_signal_steps_per_update` is set to `steps_per_update`. |
### MA-POCA-specific Configurations
MA-POCA uses the same configurations as PPO, and there are no additional POCA-specific parameters.
**NOTE**: Reward signals other than Extrinsic Rewards have not been extensively tested with MA-POCA,
though they can still be added and used for training on a your-mileage-may-vary basis.
## Reward Signals

2
docs/Training-ML-Agents.md


# Configuration of the neural network (common to PPO/SAC)
network_settings:
vis_encoder_type: simple
vis_encode_type: simple
normalize: false
hidden_units: 128
num_layers: 2

2
docs/Training-on-Amazon-Web-Service.md


2. Clone the ML-Agents repo and install the required Python packages
```sh
git clone --branch release_13 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/ml-agents/
pip3 install -e .
```

2
docs/Training-on-Microsoft-Azure.md


instance, and set it as the working directory.
2. Install the required packages:
Torch: `pip3 install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html` and
MLAgents: `pip3 install mlagents`
MLAgents: `python -m pip install mlagents==0.24.1`
## Testing

4
docs/Unity-Inference-Engine.md


loading expects certain conventions for constants and tensor names. While it is
possible to construct a model that follows these conventions, we don't provide
any additional help for this. More details can be found in
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
[BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
[BarracudaModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs).
If you wish to run inference on an externally trained model, you should use
Barracuda directly, instead of trying to run it through ML-Agents.

999
docs/images/example-envs.png
File diff is too large to display

Some files were not shown because too many files changed in this diff
