Compare commits

...
This merge request has changes that conflict with the target branch.
/protobuf-definitions/proto/mlagents_envs/communicator_objects/observation.proto
/Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/DirectionIndicator.cs
/Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/OrientationCubeController.cs
/Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/SensorBase.cs
/Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs
/com.unity.ml-agents/Tests/Editor/MLAgentsEditModeTest.cs
/com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs
/com.unity.ml-agents/Runtime/Academy.cs
/com.unity.ml-agents/Runtime/Agent.cs
/com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Observation.cs
/com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensor.cs
/com.unity.ml-agents/Runtime/Sensors/Reflection/ReflectionSensorBase.cs
/com.unity.ml-agents/Runtime/Sensors/RenderTextureSensor.cs
/com.unity.ml-agents/Runtime/Sensors/StackingSensor.cs
/com.unity.ml-agents/Runtime/Sensors/CameraSensor.cs
/com.unity.ml-agents/Runtime/Sensors/ISensor.cs
/com.unity.ml-agents/Runtime/Sensors/VectorSensor.cs
/ml-agents-envs/mlagents_envs/rpc_utils.py
/ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py
/ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.pyi
/ml-agents-envs/mlagents_envs/base_env.py
/docs/Learning-Environment-Examples.md
/ml-agents/mlagents/trainers/settings.py
/ml-agents/mlagents/trainers/learn.py
/ml-agents/mlagents/trainers/trainer_controller.py
/ml-agents/mlagents/trainers/stats.py
/ml-agents/mlagents/trainers/subprocess_env_manager.py
/ml-agents/mlagents/trainers/agent_processor.py
/com.unity.ml-agents/Tests/Editor/Sensor/FloatVisualSensorTests.cs
/com.unity.ml-agents/Tests/Editor/Sensor/SensorShapeValidatorTests.cs
/Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamicVariableSpeed.unity.meta
/Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/PlatformDynamicTarget.prefab
/Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets
/Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms
/Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll
/com.unity.ml-agents/Tests/Editor/ParameterLoaderTest.cs
/Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity.meta
/Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStatic.unity
/Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity
/Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn
/Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn.meta
/config/ppo/WalkerDynamic.yaml
/config/ppo/WalkerStatic.yaml

10 commits

Author SHA1 Message Date
Scott Jordan e33168d6 Added comments and new yaml files for variable speed walker 4 years ago
Scott Jordan d188890b fixed settings imports for active learning 4 years ago
Scott Jordan f9748b70 moved batch outside of active learner 4 years ago
Scott Jordan cf2a06ad Added num repeat parameter for tasks 4 years ago
Scott Jordan cab9d77e Added Batch setting to active learning 4 years ago
Scott Jordan 87969325 added histogram recorded, fixed active learning bug 4 years ago
Scott Jordan 78f8a9a2 Updated task manager 4 years ago
Scott Jordan 56745026 Initial commit of running active learning code 4 years ago
Scott Jordan 9f3d3428 Merge branch 'master' into active-variablespeed 4 years ago
Scott Jordan d695c044 initial addition of active learning (incomplete) 4 years ago
149 files changed, with 10563 insertions and 2529 deletions
  1. 7
      protobuf-definitions/proto/mlagents_envs/communicator_objects/observation.proto
  2. 7
      ml-agents-envs/mlagents_envs/base_env.py
  3. 51
      ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py
  4. 87
      ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.pyi
  5. 4
      ml-agents-envs/mlagents_envs/rpc_utils.py
  6. 7
      com.unity.ml-agents/Runtime/Academy.cs
  7. 5
      com.unity.ml-agents/Runtime/Agent.cs
  8. 2
      com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs
  9. 51
      com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Observation.cs
  10. 10
      com.unity.ml-agents/Runtime/Sensors/CameraSensor.cs
  11. 29
      com.unity.ml-agents/Runtime/Sensors/ISensor.cs
  12. 6
      com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensor.cs
  13. 6
      com.unity.ml-agents/Runtime/Sensors/Reflection/ReflectionSensorBase.cs
  14. 9
      com.unity.ml-agents/Runtime/Sensors/RenderTextureSensor.cs
  15. 6
      com.unity.ml-agents/Runtime/Sensors/StackingSensor.cs
  16. 6
      com.unity.ml-agents/Runtime/Sensors/VectorSensor.cs
  17. 5
      com.unity.ml-agents/Tests/Editor/MLAgentsEditModeTest.cs
  18. 5
      com.unity.ml-agents/Tests/Editor/ParameterLoaderTest.cs
  19. 5
      com.unity.ml-agents/Tests/Editor/Sensor/FloatVisualSensorTests.cs
  20. 5
      com.unity.ml-agents/Tests/Editor/Sensor/SensorShapeValidatorTests.cs
  21. 31
      docs/Learning-Environment-Examples.md
  22. 19
      Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/DirectionIndicator.cs
  23. 4
      Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/OrientationCubeController.cs
  24. 6
      Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/SensorBase.cs
  25. 2
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity.meta
  26. 962
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStatic.unity
  27. 977
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity
  28. 1001
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn
  29. 2
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn.meta
  30. 158
      Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs
  31. 2
      config/ppo/WalkerDynamic.yaml
  32. 2
      config/ppo/WalkerStatic.yaml
  33. 137
      ml-agents/mlagents/trainers/stats.py
  34. 20
      ml-agents/mlagents/trainers/subprocess_env_manager.py
  35. 66
      ml-agents/mlagents/trainers/agent_processor.py
  36. 6
      ml-agents/mlagents/trainers/learn.py
  37. 32
      ml-agents/mlagents/trainers/trainer_controller.py
  38. 55
      ml-agents/mlagents/trainers/settings.py
  39. 19
      Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets/StaticTarget.prefab
  40. 82
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll/WalkerRagdollBase.prefab
  41. 34
      ml-agents-envs/mlagents_envs/side_channel/agent_parameters_channel.py
  42. 50
      com.unity.ml-agents/Runtime/AgentParameters.cs
  43. 11
      com.unity.ml-agents/Runtime/AgentParameters.cs.meta
  44. 65
      com.unity.ml-agents/Runtime/SideChannels/AgentParametersChannel.cs
  45. 11
      com.unity.ml-agents/Runtime/SideChannels/AgentParametersChannel.cs.meta
  46. 18
      send_obs.py
  47. 737
      Project/Assets/ML-Agents/Examples/3DBall/Prefabs/3DBallTask.prefab
  48. 8
      Project/Assets/ML-Agents/Examples/3DBall/Prefabs/3DBallTask.prefab.meta
  49. 1001
      Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallTask.unity
  50. 9
      Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallTask.unity.meta
  51. 93
      Project/Assets/ML-Agents/Examples/3DBall/Scripts/Task3DAgent.cs
  52. 12
      Project/Assets/ML-Agents/Examples/3DBall/Scripts/Task3DAgent.cs.meta
  53. 48
      Project/Assets/ML-Agents/Examples/3DBall/Scripts/TaskSensorComponent.cs
  54. 11
      Project/Assets/ML-Agents/Examples/3DBall/Scripts/TaskSensorComponent.cs.meta
  55. 252
      Project/Assets/ML-Agents/Examples/3DBall/TFModels/My3DBall.nn
  56. 11
      Project/Assets/ML-Agents/Examples/3DBall/TFModels/My3DBall.nn.meta
  57. 7
      Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/PlatformDynamicTarget.prefab.meta
  58. 8
      Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets.meta
  59. 523
      Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/PlatformDynamicTarget.prefab
  60. 10
      Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDy.demo.meta
  61. 10
      Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDyVS.demo.meta
  62. 10
      Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerSt.demo.meta
  63. 10
      Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerStVS.demo.meta
  64. 8
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll.meta
  65. 8
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms.meta
  66. 9
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStaticVariableSpeed.unity.meta
  67. 1001
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamicVariableSpeed.unity
  68. 7
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamicVariableSpeed.unity.meta
  69. 1001
      Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStaticVariableSpeed.unity
  70. 1001
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamicVariableSpeed.nn
  71. 11
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamicVariableSpeed.nn.meta
  72. 1001
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamic.nn
  73. 11
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamic.nn.meta
  74. 1001
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStaticVariableSpeed.nn
  75. 11
      Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStaticVariableSpeed.nn.meta
  76. 8
      Project/Assets/ML-Agents/Examples/Walker/TFModels/ObserveDist.meta
  77. 8
      Project/Assets/ML-Agents/Examples/Walker/TFModels/ObserveDistWithHH.meta
  78. 8
      Project/Assets/ML-Agents/Examples/Walker/TFModels/OrigTarg.meta
  79. 26
      config/ppo/WalkerDynamicVariableSpeed.yaml
  80. 27
      config/ppo/RollerBall.yaml
  81. 35
      config/ppo/WalkerStaticVariableSpeed.yaml
  82. 42
      config/ppo/WalkerStaticVariableSpeedActive.yaml
  83. 233
      ml-agents/mlagents/trainers/active_learning.py
  84. 175
      ml-agents/mlagents/trainers/task_manager.py
  85. 146
      Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets/DynamicTarget.prefab
  86. 7
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicSingleSpeed.prefab.meta
  87. 157
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicSingleSpeed.prefab
  88. 7
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicVariableSpeed.prefab.meta
  89. 298
      Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicVariableSpeed.prefab

7
protobuf-definitions/proto/mlagents_envs/communicator_objects/observation.proto


PNG = 1;
}
enum SensorTypeProto {
OBSERVATION = 0;
PARAMETERIZATION = 1;
REWARD = 2;
}
message ObservationProto {
message FloatData {
repeated float data = 1;

bytes compressed_data = 3;
FloatData float_data = 4;
}
SensorTypeProto sensor_type = 5;
}

7
ml-agents-envs/mlagents_envs/base_env.py


CONTINUOUS = 1
class SensorType(Enum):
OBSERVATION = 0
PARAMETERIZATION = 1
REWARD = 2
class BehaviorSpec(NamedTuple):
"""
A NamedTuple to containing information about the observations and actions

"""
observation_shapes: List[Tuple]
sensor_types: List[SensorType]
action_type: ActionType
action_shape: Union[int, Tuple[int, ...]]
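
For illustration, a minimal Python sketch (values invented for the example) of how the extended BehaviorSpec above can be constructed and queried, pairing each observation shape with its declared sensor type:

from mlagents_envs.base_env import ActionType, BehaviorSpec, SensorType

# A hypothetical behavior: one vector observation plus one goal
# parameterization observation (e.g. the target walking speed).
spec = BehaviorSpec(
    observation_shapes=[(236,), (1,)],
    sensor_types=[SensorType.OBSERVATION, SensorType.PARAMETERIZATION],
    action_type=ActionType.CONTINUOUS,
    action_shape=39,
)

# Indices of observations that carry task parameters rather than world state.
param_idx = [
    i for i, t in enumerate(spec.sensor_types) if t == SensorType.PARAMETERIZATION
]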

51
ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py


name='mlagents_envs/communicator_objects/observation.proto',
package='communicator_objects',
syntax='proto3',
serialized_pb=_b('\n4mlagents_envs/communicator_objects/observation.proto\x12\x14\x63ommunicator_objects\"\xf9\x01\n\x10ObservationProto\x12\r\n\x05shape\x18\x01 \x03(\x05\x12\x44\n\x10\x63ompression_type\x18\x02 \x01(\x0e\x32*.communicator_objects.CompressionTypeProto\x12\x19\n\x0f\x63ompressed_data\x18\x03 \x01(\x0cH\x00\x12\x46\n\nfloat_data\x18\x04 \x01(\x0b\x32\x30.communicator_objects.ObservationProto.FloatDataH\x00\x1a\x19\n\tFloatData\x12\x0c\n\x04\x64\x61ta\x18\x01 \x03(\x02\x42\x12\n\x10observation_data*)\n\x14\x43ompressionTypeProto\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03PNG\x10\x01\x42%\xaa\x02\"Unity.MLAgents.CommunicatorObjectsb\x06proto3')
serialized_pb=_b('\n4mlagents_envs/communicator_objects/observation.proto\x12\x14\x63ommunicator_objects\"\xb5\x02\n\x10ObservationProto\x12\r\n\x05shape\x18\x01 \x03(\x05\x12\x44\n\x10\x63ompression_type\x18\x02 \x01(\x0e\x32*.communicator_objects.CompressionTypeProto\x12\x19\n\x0f\x63ompressed_data\x18\x03 \x01(\x0cH\x00\x12\x46\n\nfloat_data\x18\x04 \x01(\x0b\x32\x30.communicator_objects.ObservationProto.FloatDataH\x00\x12:\n\x0bsensor_type\x18\x05 \x01(\x0e\x32%.communicator_objects.SensorTypeProto\x1a\x19\n\tFloatData\x12\x0c\n\x04\x64\x61ta\x18\x01 \x03(\x02\x42\x12\n\x10observation_data*)\n\x14\x43ompressionTypeProto\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03PNG\x10\x01*D\n\x0fSensorTypeProto\x12\x0f\n\x0bOBSERVATION\x10\x00\x12\x14\n\x10PARAMETERIZATION\x10\x01\x12\n\n\x06REWARD\x10\x02\x42%\xaa\x02\"Unity.MLAgents.CommunicatorObjectsb\x06proto3')
)
_COMPRESSIONTYPEPROTO = _descriptor.EnumDescriptor(

],
containing_type=None,
options=None,
serialized_start=330,
serialized_end=371,
serialized_start=390,
serialized_end=431,
_SENSORTYPEPROTO = _descriptor.EnumDescriptor(
name='SensorTypeProto',
full_name='communicator_objects.SensorTypeProto',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='OBSERVATION', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PARAMETERIZATION', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='REWARD', index=2, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=433,
serialized_end=501,
)
_sym_db.RegisterEnumDescriptor(_SENSORTYPEPROTO)
SensorTypeProto = enum_type_wrapper.EnumTypeWrapper(_SENSORTYPEPROTO)
OBSERVATION = 0
PARAMETERIZATION = 1
REWARD = 2

extension_ranges=[],
oneofs=[
],
serialized_start=283,
serialized_end=308,
serialized_start=343,
serialized_end=368,
)
_OBSERVATIONPROTO = _descriptor.Descriptor(

message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sensor_type', full_name='communicator_objects.ObservationProto.sensor_type', index=4,
number=5, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],

index=0, containing_type=None, fields=[]),
],
serialized_start=79,
serialized_end=328,
serialized_end=388,
_OBSERVATIONPROTO.fields_by_name['sensor_type'].enum_type = _SENSORTYPEPROTO
_OBSERVATIONPROTO.oneofs_by_name['observation_data'].fields.append(
_OBSERVATIONPROTO.fields_by_name['compressed_data'])
_OBSERVATIONPROTO.fields_by_name['compressed_data'].containing_oneof = _OBSERVATIONPROTO.oneofs_by_name['observation_data']

DESCRIPTOR.message_types_by_name['ObservationProto'] = _OBSERVATIONPROTO
DESCRIPTOR.enum_types_by_name['CompressionTypeProto'] = _COMPRESSIONTYPEPROTO
DESCRIPTOR.enum_types_by_name['SensorTypeProto'] = _SENSORTYPEPROTO
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
ObservationProto = _reflection.GeneratedProtocolMessageType('ObservationProto', (_message.Message,), dict(
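
A small sketch of exercising the regenerated module, assuming it has been rebuilt from the proto above; the field values are illustrative:

from mlagents_envs.communicator_objects.observation_pb2 import (
    PARAMETERIZATION,
    ObservationProto,
)

obs = ObservationProto()
obs.shape.extend([1])
obs.float_data.data.extend([2.5])  # writing float_data selects that oneof branch
obs.sensor_type = PARAMETERIZATION

assert obs.WhichOneof("observation_data") == "float_data"
payload = obs.SerializeToString()  # wire format consumed by rpc_utils on the trainer side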

87
ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.pyi


from google.protobuf.descriptor import (
Descriptor as google___protobuf___descriptor___Descriptor,
EnumDescriptor as google___protobuf___descriptor___EnumDescriptor,
FileDescriptor as google___protobuf___descriptor___FileDescriptor,
)
from google.protobuf.internal.containers import (

from google.protobuf.internal.enum_type_wrapper import (
_EnumTypeWrapper as google___protobuf___internal___enum_type_wrapper____EnumTypeWrapper,
)
from google.protobuf.message import (
Message as google___protobuf___message___Message,
)

List as typing___List,
NewType as typing___NewType,
Tuple as typing___Tuple,
cast as typing___cast,
)

builtin___bytes = bytes
builtin___float = float
builtin___int = int
builtin___str = str
class CompressionTypeProto(builtin___int):
DESCRIPTOR: google___protobuf___descriptor___FileDescriptor = ...
CompressionTypeProtoValue = typing___NewType('CompressionTypeProtoValue', builtin___int)
type___CompressionTypeProtoValue = CompressionTypeProtoValue
CompressionTypeProto: _CompressionTypeProto
class _CompressionTypeProto(google___protobuf___internal___enum_type_wrapper____EnumTypeWrapper[CompressionTypeProtoValue]):
@classmethod
def Name(cls, number: builtin___int) -> builtin___str: ...
@classmethod
def Value(cls, name: builtin___str) -> 'CompressionTypeProto': ...
@classmethod
def keys(cls) -> typing___List[builtin___str]: ...
@classmethod
def values(cls) -> typing___List['CompressionTypeProto']: ...
@classmethod
def items(cls) -> typing___List[typing___Tuple[builtin___str, 'CompressionTypeProto']]: ...
NONE = typing___cast('CompressionTypeProto', 0)
PNG = typing___cast('CompressionTypeProto', 1)
NONE = typing___cast('CompressionTypeProto', 0)
PNG = typing___cast('CompressionTypeProto', 1)
NONE = typing___cast(CompressionTypeProtoValue, 0)
PNG = typing___cast(CompressionTypeProtoValue, 1)
NONE = typing___cast(CompressionTypeProtoValue, 0)
PNG = typing___cast(CompressionTypeProtoValue, 1)
type___CompressionTypeProto = CompressionTypeProto
SensorTypeProtoValue = typing___NewType('SensorTypeProtoValue', builtin___int)
type___SensorTypeProtoValue = SensorTypeProtoValue
SensorTypeProto: _SensorTypeProto
class _SensorTypeProto(google___protobuf___internal___enum_type_wrapper____EnumTypeWrapper[SensorTypeProtoValue]):
DESCRIPTOR: google___protobuf___descriptor___EnumDescriptor = ...
OBSERVATION = typing___cast(SensorTypeProtoValue, 0)
PARAMETERIZATION = typing___cast(SensorTypeProtoValue, 1)
REWARD = typing___cast(SensorTypeProtoValue, 2)
OBSERVATION = typing___cast(SensorTypeProtoValue, 0)
PARAMETERIZATION = typing___cast(SensorTypeProtoValue, 1)
REWARD = typing___cast(SensorTypeProtoValue, 2)
type___SensorTypeProto = SensorTypeProto
data = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[builtin___float]
data: google___protobuf___internal___containers___RepeatedScalarFieldContainer[builtin___float] = ...
@classmethod
def FromString(cls, s: builtin___bytes) -> ObservationProto.FloatData: ...
def MergeFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
def CopyFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
if sys.version_info >= (3,):
def ClearField(self, field_name: typing_extensions___Literal[u"data"]) -> None: ...
else:
def ClearField(self, field_name: typing_extensions___Literal[u"data",b"data"]) -> None: ...
def ClearField(self, field_name: typing_extensions___Literal[u"data",b"data"]) -> None: ...
type___FloatData = FloatData
shape = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[builtin___int]
compression_type = ... # type: CompressionTypeProto
compressed_data = ... # type: builtin___bytes
shape: google___protobuf___internal___containers___RepeatedScalarFieldContainer[builtin___int] = ...
compression_type: type___CompressionTypeProtoValue = ...
compressed_data: builtin___bytes = ...
sensor_type: type___SensorTypeProtoValue = ...
def float_data(self) -> ObservationProto.FloatData: ...
def float_data(self) -> type___ObservationProto.FloatData: ...
compression_type : typing___Optional[CompressionTypeProto] = None,
compression_type : typing___Optional[type___CompressionTypeProtoValue] = None,
float_data : typing___Optional[ObservationProto.FloatData] = None,
float_data : typing___Optional[type___ObservationProto.FloatData] = None,
sensor_type : typing___Optional[type___SensorTypeProtoValue] = None,
@classmethod
def FromString(cls, s: builtin___bytes) -> ObservationProto: ...
def MergeFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
def CopyFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
if sys.version_info >= (3,):
def HasField(self, field_name: typing_extensions___Literal[u"compressed_data",u"float_data",u"observation_data"]) -> builtin___bool: ...
def ClearField(self, field_name: typing_extensions___Literal[u"compressed_data",u"compression_type",u"float_data",u"observation_data",u"shape"]) -> None: ...
else:
def HasField(self, field_name: typing_extensions___Literal[u"compressed_data",b"compressed_data",u"float_data",b"float_data",u"observation_data",b"observation_data"]) -> builtin___bool: ...
def ClearField(self, field_name: typing_extensions___Literal[u"compressed_data",b"compressed_data",u"compression_type",b"compression_type",u"float_data",b"float_data",u"observation_data",b"observation_data",u"shape",b"shape"]) -> None: ...
def HasField(self, field_name: typing_extensions___Literal[u"compressed_data",b"compressed_data",u"float_data",b"float_data",u"observation_data",b"observation_data"]) -> builtin___bool: ...
def ClearField(self, field_name: typing_extensions___Literal[u"compressed_data",b"compressed_data",u"compression_type",b"compression_type",u"float_data",b"float_data",u"observation_data",b"observation_data",u"sensor_type",b"sensor_type",u"shape",b"shape"]) -> None: ...
type___ObservationProto = ObservationProto

4
ml-agents-envs/mlagents_envs/rpc_utils.py


from mlagents_envs.base_env import (
BehaviorSpec,
ActionType,
SensorType,
DecisionSteps,
TerminalSteps,
)

:return: BehaviorSpec object.
"""
observation_shape = [tuple(obs.shape) for obs in agent_info.observations]
sensor_type = [SensorType(obs.sensor_type) for obs in agent_info.observations]
action_type = (
ActionType.DISCRETE
if brain_param_proto.vector_action_space_type == 0

] = brain_param_proto.vector_action_size[0]
else:
action_shape = tuple(brain_param_proto.vector_action_size)
return BehaviorSpec(observation_shape, action_type, action_shape)
return BehaviorSpec(observation_shape, sensor_type, action_type, action_shape)
@timed
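
As a downstream usage sketch (function name hypothetical), the new sensor_type list lets trainer code split an agent's observations by declared semantics instead of relying on ordering conventions:

from mlagents_envs.base_env import BehaviorSpec, SensorType

def split_by_sensor_type(spec: BehaviorSpec, obs_list):
    # obs_list is assumed to be ordered the same way as spec.observation_shapes.
    grouped = {t: [] for t in SensorType}
    for sensor_type, obs in zip(spec.sensor_types, obs_list):
        grouped[sensor_type].append(obs)
    return grouped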

7
com.unity.ml-agents/Runtime/Academy.cs


}
EnvironmentParameters m_EnvironmentParameters;
AgentParameters m_AgentParameters;
StatsRecorder m_StatsRecorder;
/// <summary>

public EnvironmentParameters EnvironmentParameters
{
get { return m_EnvironmentParameters; }
}
public AgentParameters AgentParameters
{
get { return m_AgentParameters; }
}
/// <summary>

SideChannelManager.RegisterSideChannel(new EngineConfigurationChannel());
m_EnvironmentParameters = new EnvironmentParameters();
m_AgentParameters = new AgentParameters();
m_StatsRecorder = new StatsRecorder();
// Try to launch the communicator by using the arguments passed at launch

5
com.unity.ml-agents/Runtime/Agent.cs


Array.Copy(action, m_Action.vectorActions, action.Length);
}
}
public float GetParameterWithDefault(string key, float defaultValue)
{
return Academy.Instance.AgentParameters.GetWithDefault(m_EpisodeId, key, defaultValue);
}
}
}

2
com.unity.ml-agents/Runtime/Communicator/GrpcExtensions.cs


CompressionType = (CompressionTypeProto)obs.CompressionType,
};
}
obsProto.Shape.AddRange(obs.Shape);
return obsProto;
}

CompressionType = (CompressionTypeProto)sensor.GetCompressionType(),
};
}
observationProto.SensorType = (SensorTypeProto)sensor.GetSensorType();
observationProto.Shape.AddRange(shape);
return observationProto;
}

51
com.unity.ml-agents/Runtime/Grpc/CommunicatorObjects/Observation.cs


byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"CjRtbGFnZW50c19lbnZzL2NvbW11bmljYXRvcl9vYmplY3RzL29ic2VydmF0",
"aW9uLnByb3RvEhRjb21tdW5pY2F0b3Jfb2JqZWN0cyL5AQoQT2JzZXJ2YXRp",
"aW9uLnByb3RvEhRjb21tdW5pY2F0b3Jfb2JqZWN0cyK1AgoQT2JzZXJ2YXRp",
"RmxvYXREYXRhSAAaGQoJRmxvYXREYXRhEgwKBGRhdGEYASADKAJCEgoQb2Jz",
"ZXJ2YXRpb25fZGF0YSopChRDb21wcmVzc2lvblR5cGVQcm90bxIICgROT05F",
"EAASBwoDUE5HEAFCJaoCIlVuaXR5Lk1MQWdlbnRzLkNvbW11bmljYXRvck9i",
"amVjdHNiBnByb3RvMw=="));
"RmxvYXREYXRhSAASOgoLc2Vuc29yX3R5cGUYBSABKA4yJS5jb21tdW5pY2F0",
"b3Jfb2JqZWN0cy5TZW5zb3JUeXBlUHJvdG8aGQoJRmxvYXREYXRhEgwKBGRh",
"dGEYASADKAJCEgoQb2JzZXJ2YXRpb25fZGF0YSopChRDb21wcmVzc2lvblR5",
"cGVQcm90bxIICgROT05FEAASBwoDUE5HEAEqRAoPU2Vuc29yVHlwZVByb3Rv",
"Eg8KC09CU0VSVkFUSU9OEAASFAoQUEFSQU1FVEVSSVpBVElPThABEgoKBlJF",
"V0FSRBACQiWqAiJVbml0eS5NTEFnZW50cy5Db21tdW5pY2F0b3JPYmplY3Rz",
"YgZwcm90bzM="));
new pbr::GeneratedClrTypeInfo(new[] {typeof(global::Unity.MLAgents.CommunicatorObjects.CompressionTypeProto), }, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Parser, new[]{ "Shape", "CompressionType", "CompressedData", "FloatData" }, new[]{ "ObservationData" }, null, new pbr::GeneratedClrTypeInfo[] { new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData.Parser, new[]{ "Data" }, null, null, null)})
new pbr::GeneratedClrTypeInfo(new[] {typeof(global::Unity.MLAgents.CommunicatorObjects.CompressionTypeProto), typeof(global::Unity.MLAgents.CommunicatorObjects.SensorTypeProto), }, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Parser, new[]{ "Shape", "CompressionType", "CompressedData", "FloatData", "SensorType" }, new[]{ "ObservationData" }, null, new pbr::GeneratedClrTypeInfo[] { new pbr::GeneratedClrTypeInfo(typeof(global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData), global::Unity.MLAgents.CommunicatorObjects.ObservationProto.Types.FloatData.Parser, new[]{ "Data" }, null, null, null)})
}));
}
#endregion

internal enum CompressionTypeProto {
[pbr::OriginalName("NONE")] None = 0,
[pbr::OriginalName("PNG")] Png = 1,
}
internal enum SensorTypeProto {
[pbr::OriginalName("OBSERVATION")] Observation = 0,
[pbr::OriginalName("PARAMETERIZATION")] Parameterization = 1,
[pbr::OriginalName("REWARD")] Reward = 2,
}
#endregion

public ObservationProto(ObservationProto other) : this() {
shape_ = other.shape_.Clone();
compressionType_ = other.compressionType_;
sensorType_ = other.sensorType_;
switch (other.ObservationDataCase) {
case ObservationDataOneofCase.CompressedData:
CompressedData = other.CompressedData;

}
}
/// <summary>Field number for the "sensor_type" field.</summary>
public const int SensorTypeFieldNumber = 5;
private global::Unity.MLAgents.CommunicatorObjects.SensorTypeProto sensorType_ = 0;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::Unity.MLAgents.CommunicatorObjects.SensorTypeProto SensorType {
get { return sensorType_; }
set {
sensorType_ = value;
}
}
private object observationData_;
/// <summary>Enum of possible cases for the "observation_data" oneof.</summary>
public enum ObservationDataOneofCase {

if (CompressionType != other.CompressionType) return false;
if (CompressedData != other.CompressedData) return false;
if (!object.Equals(FloatData, other.FloatData)) return false;
if (SensorType != other.SensorType) return false;
if (ObservationDataCase != other.ObservationDataCase) return false;
return Equals(_unknownFields, other._unknownFields);
}

if (CompressionType != 0) hash ^= CompressionType.GetHashCode();
if (observationDataCase_ == ObservationDataOneofCase.CompressedData) hash ^= CompressedData.GetHashCode();
if (observationDataCase_ == ObservationDataOneofCase.FloatData) hash ^= FloatData.GetHashCode();
if (SensorType != 0) hash ^= SensorType.GetHashCode();
hash ^= (int) observationDataCase_;
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();

output.WriteRawTag(34);
output.WriteMessage(FloatData);
}
if (SensorType != 0) {
output.WriteRawTag(40);
output.WriteEnum((int) SensorType);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}

if (observationDataCase_ == ObservationDataOneofCase.FloatData) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(FloatData);
}
if (SensorType != 0) {
size += 1 + pb::CodedOutputStream.ComputeEnumSize((int) SensorType);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}

shape_.Add(other.shape_);
if (other.CompressionType != 0) {
CompressionType = other.CompressionType;
}
if (other.SensorType != 0) {
SensorType = other.SensorType;
}
switch (other.ObservationDataCase) {
case ObservationDataOneofCase.CompressedData:

}
input.ReadMessage(subBuilder);
FloatData = subBuilder;
break;
}
case 40: {
sensorType_ = (global::Unity.MLAgents.CommunicatorObjects.SensorTypeProto) input.ReadEnum();
break;
}
}

10
com.unity.ml-agents/Runtime/Sensors/CameraSensor.cs


}
/// <summary>
/// Camera sensors are always Observations.
/// </summary>
/// <returns>Sensor type of observation.</returns>
public SensorType GetSensorType()
{
return SensorType.Observation;
}
/// <summary>
/// Generates a compressed image. This can be valuable in speeding up training.
/// </summary>
/// <returns>Compressed image.</returns>

29
com.unity.ml-agents/Runtime/Sensors/ISensor.cs


}
/// <summary>
/// The semantic type of the data a sensor produces.
/// </summary>
public enum SensorType
{
/// <summary>
/// Sensor represents an agent's observation.
/// </summary>
Observation,
/// <summary>
/// Sensor represents an agent's task/goal parameterization.
/// </summary>
Parameterization,
/// <summary>
/// Sensor represents one or more reward signals.
/// </summary>
Reward
}
/// <summary>
/// Sensor interface for generating observations.
/// </summary>
public interface ISensor

/// </summary>
/// <returns>The name of the sensor.</returns>
string GetName();
/// <summary>
/// Get the semantic meaning of the sensor, i.e. whether it is an observation or other type
/// of data to be sent to the Agent.
/// </summary>
/// <returns>The type of the sensor.</returns>
SensorType GetSensorType();
}

6
com.unity.ml-agents/Runtime/Sensors/RayPerceptionSensor.cs


return SensorCompressionType.None;
}
/// <inheritdoc/>
public SensorType GetSensorType()
{
return SensorType.Observation;
}
/// <summary>
/// Evaluates the raycasts to be used as part of an observation of an agent.
/// </summary>

6
com.unity.ml-agents/Runtime/Sensors/Reflection/ReflectionSensorBase.cs


}
/// <inheritdoc/>
public SensorType GetSensorType()
{
return SensorType.Observation;
}
/// <inheritdoc/>
public string GetName()
{
return m_SensorName;

9
com.unity.ml-agents/Runtime/Sensors/RenderTextureSensor.cs


}
/// <summary>
/// RenderTexture sensors are always Observations.
/// </summary>
/// <returns>Sensor type of observation.</returns>
public SensorType GetSensorType()
{
return SensorType.Observation;
}
/// <summary>
/// Converts a RenderTexture to a 2D texture.
/// </summary>
/// <returns>The 2D texture.</returns>

6
com.unity.ml-agents/Runtime/Sensors/StackingSensor.cs


return SensorCompressionType.None;
}
/// <inheritdoc/>
public SensorType GetSensorType()
{
return SensorType.Observation;
}
// TODO support stacked compressed observations (byte stream)
}
}

6
com.unity.ml-agents/Runtime/Sensors/VectorSensor.cs


return SensorCompressionType.None;
}
/// <inheritdoc/>
public virtual SensorType GetSensorType()
{
return SensorType.Observation;
}
void Clear()
{
m_Observations.Clear();

5
com.unity.ml-agents/Tests/Editor/MLAgentsEditModeTest.cs


{
numResetCalls++;
}
public SensorType GetSensorType()
{
return SensorType.Observation;
}
}
[TestFixture]

5
com.unity.ml-agents/Tests/Editor/ParameterLoaderTest.cs


return SensorCompressionType.None;
}
public SensorType GetSensorType()
{
return SensorType.Observation;
}
public string GetName()
{
return m_Name;

5
com.unity.ml-agents/Tests/Editor/Sensor/FloatVisualSensorTests.cs


{
return SensorCompressionType.None;
}
public SensorType GetSensorType()
{
return SensorType.Observation;
}
}
public class FloatVisualSensorTests

5
com.unity.ml-agents/Tests/Editor/Sensor/SensorShapeValidatorTests.cs


{
return SensorCompressionType.None;
}
public SensorType GetSensorType()
{
return SensorType.Observation;
}
}
public class SensorShapeValidatorTests

31
docs/Learning-Environment-Examples.md


- Set-up: Physics-based Humanoid agents with 26 degrees of freedom. These DOFs
correspond to articulation of the following body-parts: hips, chest, spine,
head, thighs, shins, feet, arms, forearms and hands.
- Goal: The agent must move its body toward the goal direction as quickly as
possible without falling.
- `WalkerStatic` - Goal direction is always forward.
- Goal: The agent must move its body toward the goal direction without falling.
- `WalkerDynamicVariableSpeed`- Goal direction and walking speed are randomized.
- `WalkerStatic` - Goal direction is always forward.
- `WalkerStaticVariableSpeed` - Goal direction is always forward. Walking
speed is randomized.
- +0.02 times body velocity in the goal direction. (run towards target)
- +0.01 times head direction alignment with goal direction. (face towards target)
- +0.005 times head y position - left foot y position. (encourage head height)
- +0.005 times head y position - right foot y position. (encourage head height)
The reward function is now geometric, meaning the reward each step is the product
of the individual rewards instead of their sum; this pushes the agent to maximize
all rewards at once instead of only the easiest ones.
- Body velocity matches goal velocity. (normalized between (0,1))
- Head direction alignment with goal direction. (normalized between (0,1))
- Vector Observation space: 236 variables corresponding to position, rotation,
- Vector Observation space: 238 variables corresponding to position, rotation,
velocity, and angular velocities of each limb, along with goal direction.
- Vector Action space: (Continuous) Size of 39, corresponding to target
rotations and strength applicable to the joints.

- Recommended Minimum:
- Recommended Maximum:
- hip_mass: Mass of the hip component of the walker
- Default: 15
- Default: 8
- Recommended Minimum: 7
- Recommended Maximum: 28
- chest_mass: Mass of the chest component of the walker

- spine_mass: Mass of the spine component of the walker
- Default: 10
- Default: 8
- Benchmark Mean Reward for `WalkerStatic`: 1500
- Benchmark Mean Reward for `WalkerDynamic`: 700
- Benchmark Mean Reward for `WalkerDynamic`: 2500
- Benchmark Mean Reward for `WalkerDynamicVariableSpeed`: 1200
- Benchmark Mean Reward for `WalkerStatic`: 3500
- Benchmark Mean Reward for `WalkerStaticVariableSpeed`: 3000
## Pyramids
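
In equation form (a reconstruction from the WalkerAgent.cs diff later on this page, so the exact terms follow this branch), the per-step reward is the product of a speed-matching term and a look-at term, each normalized to (0, 1):

r_t = r_{\mathrm{speed}} \cdot r_{\mathrm{look}}, \qquad
r_{\mathrm{speed}} = \left(1 - \left(\frac{\min\left(\lVert \bar v - v_{\mathrm{goal}} \rVert,\; s\right)}{s}\right)^{2}\right)^{2}, \qquad
r_{\mathrm{look}} = \frac{1 + \hat c \cdot \hat h}{2}

where s is targetWalkingSpeed, \bar v is the average rigidbody velocity, v_{\mathrm{goal}} = s \hat c, \hat c is the orientation cube's forward direction, and \hat h is the head's forward direction.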

19
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/DirectionIndicator.cs


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine;
public bool updatedByAgent; //should this be updated by the agent? If not, it will use local settings
void OnEnable()
{
m_StartingYPos = transform.position.y;

{
transform.position = new Vector3(transformToFollow.position.x, m_StartingYPos + heightOffset, transformToFollow.position.z);
if (updatedByAgent) return;
transform.position = new Vector3(transformToFollow.position.x, m_StartingYPos + heightOffset,
transformToFollow.position.z);
}
//Public method to allow an agent to directly update this component
public void MatchOrientation(Transform t)
{
transform.position = new Vector3(t.position.x, m_StartingYPos + heightOffset, t.position.z);
transform.rotation = t.rotation;
}
}
}

4
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/OrientationCubeController.cs


/// </summary>
public class OrientationCubeController : MonoBehaviour
{
//Update position and Rotation
//Public method to allow Agent to set look rotation of this transform
dirVector.y = 0; //flatten dir on the y. this will only work on level, uneven surfaces
dirVector.y = 0; //flatten dir on the y. this will only work on level surfaces
var lookRot =
dirVector == Vector3.zero
? Quaternion.identity

6
Project/Assets/ML-Agents/Examples/SharedAssets/Scripts/SensorBase.cs


{
return SensorCompressionType.None;
}
/// <inheritdoc/>
public virtual SensorType GetSensorType()
{
return SensorType.Observation;
}
}
}

2
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity.meta


fileFormatVersion: 2
guid: 79d5d2687bfbe45f5b78bd6c04992e0d
guid: 65c87f50b8c81433d8fd7f6550773467
DefaultImporter:
externalObjects: {}
userData:

962
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStatic.unity
File diff too large to display
View file

977
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamic.unity
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn
File diff too large to display
View file

2
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStatic.nn.meta


fileFormatVersion: 2
guid: 8dfd4337ed40e4d48872a4f86919c9da
guid: d8bebea7ecfd0470f87cbab469bd1411
ScriptedImporter:
fileIDToRecycleName:
11400000: main obj

158
Project/Assets/ML-Agents/Examples/Walker/Scripts/WalkerAgent.cs


using System;
using MLAgentsExamples;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgentsExamples;

public class WalkerAgent : Agent
{
public float maximumWalkingSpeed = 999; //The max walk velocity magnitude an agent will be rewarded for
Vector3 m_WalkDir; //Direction to the target
// Quaternion m_WalkDirLookRot; //Will hold the rotation to our target
[Header("Walk Speed")] [Range(0.1f, 10)]
public float targetWalkingSpeed = 10; //The walking speed to try and achieve
[Header("Target To Walk Towards")] [Space(10)]
public TargetController target; //Target the agent will walk towards.
const float m_maxWalkingSpeed = 10; //The max walking speed
[Header("Body Parts")] [Space(10)] public Transform hips;
//Should the agent sample a new goal velocity each episode?
//If true, walkSpeed will be randomly set between zero and m_maxWalkingSpeed in OnEpisodeBegin()
//If false, the goal velocity will be walkingSpeed
public bool randomizeWalkSpeedEachEpisode;
//The direction an agent will walk during training.
public Vector3 worldDirToWalk = Vector3.right;
[Header("Target To Walk Towards")] public Transform target; //Target the agent will walk towards during training.
[Header("Body Parts")] public Transform hips;
public Transform chest;
public Transform spine;
public Transform head;

public Transform forearmR;
public Transform handR;
[Header("Orientation")] [Space(10)]
public OrientationCubeController orientationCube;
OrientationCubeController m_OrientationCube;
//The indicator graphic gameobject that points towards the target
DirectionIndicator m_DirectionIndicator;
orientationCube.UpdateOrientation(hips, target.transform);
m_OrientationCube = GetComponentInChildren<OrientationCubeController>();
m_DirectionIndicator = GetComponentInChildren<DirectionIndicator>();
//Setup each body part
m_JdController = GetComponent<JointDriveController>();

m_ResetParams = Academy.Instance.EnvironmentParameters;
SetResetParameters();
SetTaskParameters();
}
/// <summary>

}
//Random start rotation to help generalize
transform.rotation = Quaternion.Euler(0, Random.Range(0.0f, 360.0f), 0);
hips.rotation = Quaternion.Euler(0, Random.Range(0.0f, 360.0f), 0);
orientationCube.UpdateOrientation(hips, target.transform);
UpdateOrientationObjects();
//Set our goal walking speed
// targetWalkingSpeed =
// randomizeWalkSpeedEachEpisode ? Random.Range(0.1f, m_maxWalkingSpeed) : targetWalkingSpeed;
SetTaskParameters();
}
/// <summary>

//Get velocities in the context of our orientation cube's space
//Note: You can get these velocities in world space as well but it may not train as well.
sensor.AddObservation(orientationCube.transform.InverseTransformDirection(bp.rb.velocity));
sensor.AddObservation(orientationCube.transform.InverseTransformDirection(bp.rb.angularVelocity));
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(bp.rb.velocity));
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(bp.rb.angularVelocity));
sensor.AddObservation(orientationCube.transform.InverseTransformDirection(bp.rb.position - hips.position));
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(bp.rb.position - hips.position));
if (bp.rb.transform != hips && bp.rb.transform != handL && bp.rb.transform != handR)
{

/// </summary>
public override void CollectObservations(VectorSensor sensor)
{
sensor.AddObservation(Quaternion.FromToRotation(hips.forward, orientationCube.transform.forward));
sensor.AddObservation(Quaternion.FromToRotation(head.forward, orientationCube.transform.forward));
var cubeForward = m_OrientationCube.transform.forward;
//velocity we want to match
var velGoal = cubeForward * targetWalkingSpeed;
//ragdoll's avg vel
var avgVel = GetAvgVelocity();
//current ragdoll velocity. normalized
sensor.AddObservation(Vector3.Distance(velGoal, avgVel));
//avg body vel relative to cube
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(avgVel));
//vel goal relative to cube
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(velGoal));
//rotation deltas
sensor.AddObservation(Quaternion.FromToRotation(hips.forward, cubeForward));
sensor.AddObservation(Quaternion.FromToRotation(head.forward, cubeForward));
sensor.AddObservation(orientationCube.transform.InverseTransformPoint(target.transform.position));
//Position of target position relative to cube
sensor.AddObservation(m_OrientationCube.transform.InverseTransformPoint(target.transform.position));
foreach (var bodyPart in m_JdController.bodyPartsList)
{

bpDict[forearmR].SetJointStrength(vectorAction[++i]);
}
//Update OrientationCube and DirectionIndicator
void UpdateOrientationObjects()
{
worldDirToWalk = target.position - hips.position;
m_OrientationCube.UpdateOrientation(hips, target);
if (m_DirectionIndicator)
{
m_DirectionIndicator.MatchOrientation(m_OrientationCube.transform);
}
}
var cubeForward = orientationCube.transform.forward;
orientationCube.UpdateOrientation(hips, target.transform);
SetTaskParameters();
UpdateOrientationObjects();
var cubeForward = m_OrientationCube.transform.forward;
// a. Velocity alignment with goal direction.
var moveTowardsTargetReward = Vector3.Dot(cubeForward,
Vector3.ClampMagnitude(m_JdController.bodyPartsDict[hips].rb.velocity, maximumWalkingSpeed));
if (float.IsNaN(moveTowardsTargetReward))
// a. Match target speed
//This reward will approach 1 if it matches perfectly and approach zero as it deviates
var matchSpeedReward = GetMatchingVelocityReward(cubeForward * targetWalkingSpeed, GetAvgVelocity());
//Check for NaNs
if (float.IsNaN(matchSpeedReward))
$" cubeForward: {cubeForward}\n"+
$" hips.velocity: {m_JdController.bodyPartsDict[hips].rb.velocity}\n"+
$" maximumWalkingSpeed: {maximumWalkingSpeed}"
$" cubeForward: {cubeForward}\n" +
$" hips.velocity: {m_JdController.bodyPartsDict[hips].rb.velocity}\n" +
$" maximumWalkingSpeed: {m_maxWalkingSpeed}"
// b. Rotation alignment with goal direction.
var lookAtTargetReward = Vector3.Dot(cubeForward, head.forward);
// b. Rotation alignment with target direction.
//This reward will approach 1 if it faces the target direction perfectly and approach zero as it deviates
var lookAtTargetReward = (Vector3.Dot(cubeForward, head.forward) + 1) * .5F;
//Check for NaNs
$" cubeForward: {cubeForward}\n"+
$" cubeForward: {cubeForward}\n" +
// c. Encourage head height. //Should normalize to ~1
var headHeightOverFeetReward =
((head.position.y - footL.position.y) + (head.position.y - footR.position.y) / 10);
if (float.IsNaN(headHeightOverFeetReward))
AddReward(matchSpeedReward * lookAtTargetReward);
}
//Returns the average velocity of all of the body parts
//Using only the velocity of the hips has been shown to result in more erratic movement from the limbs, so...
//...using the average helps prevent this erratic movement
Vector3 GetAvgVelocity()
{
Vector3 velSum = Vector3.zero;
Vector3 avgVel = Vector3.zero;
//ALL RBS
int numOfRB = 0;
foreach (var item in m_JdController.bodyPartsList)
throw new ArgumentException(
"NaN in headHeightOverFeetReward.\n" +
$" head.position: {head.position}\n"+
$" footL.position: {footL.position}\n"+
$" footR.position: {footR.position}"
);
numOfRB++;
velSum += item.rb.velocity;
AddReward(
+ 0.02f * moveTowardsTargetReward
+ 0.02f * lookAtTargetReward
+ 0.005f * headHeightOverFeetReward
);
avgVel = velSum / numOfRB;
return avgVel;
}
//normalized value of the difference in avg speed vs goal walking speed.
public float GetMatchingVelocityReward(Vector3 velocityGoal, Vector3 actualVelocity)
{
//distance between our actual velocity and goal velocity
var velDeltaMagnitude = Mathf.Clamp(Vector3.Distance(actualVelocity, velocityGoal), 0, targetWalkingSpeed);
//return the value on a declining sigmoid shaped curve that decays from 1 to 0
//This reward will approach 1 if it matches perfectly and approach zero as it deviates
return Mathf.Pow(1 - Mathf.Pow(velDeltaMagnitude / targetWalkingSpeed, 2), 2);
}
/// <summary>

m_JdController.bodyPartsDict[hips].rb.mass = m_ResetParams.GetWithDefault("hip_mass", 8);
}
public void SetTaskParameters()
{
targetWalkingSpeed = GetParameterWithDefault("targetWalkingSpeed", targetWalkingSpeed);
}
public void SetResetParameters()
{
SetTorsoMass();
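
A minimal NumPy transcription of the two reward terms above (mirroring GetMatchingVelocityReward and the look-at dot product from the C# diff), handy for sanity-checking the (0, 1) normalization outside Unity; the sketch itself is illustrative:

import numpy as np

def matching_velocity_reward(vel_goal, actual_vel, target_speed):
    # Distance between actual and goal velocity, clamped to the target speed.
    delta = np.clip(np.linalg.norm(actual_vel - vel_goal), 0, target_speed)
    # Declining curve from 1 (perfect match) toward 0 as the delta grows.
    return (1 - (delta / target_speed) ** 2) ** 2

def look_at_target_reward(cube_forward, head_forward):
    # Dot product of unit vectors remapped from [-1, 1] to [0, 1].
    return (np.dot(cube_forward, head_forward) + 1) * 0.5

# The per-step reward is the product of the two terms, as in the AddReward call above.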

2
config/ppo/WalkerDynamic.yaml


gamma: 0.995
strength: 1.0
keep_checkpoints: 5
max_steps: 20000000
max_steps: 30000000
time_horizon: 1000
summary_freq: 30000
threaded: true

2
config/ppo/WalkerStatic.yaml


gamma: 0.995
strength: 1.0
keep_checkpoints: 5
max_steps: 20000000
max_steps: 30000000
time_horizon: 1000
summary_freq: 30000
threaded: true

137
ml-agents/mlagents/trainers/stats.py


from typing import List, Dict, NamedTuple, Any, Optional
import numpy as np
import abc
import csv
import os
import time
from threading import RLock

class StatsPropertyType(Enum):
HYPERPARAMETERS = "hyperparameters"
SELF_PLAY = "selfplay"
SALIENCY = "saliency"
class StatsWriter(abc.ABC):

"""
Add a generic property to the StatsWriter. This could be e.g. a Dict of hyperparameters,
a max step count, a trainer type, etc. Note that not all StatsWriters need to be compatible
with all types of properties. For instance, a TB writer doesn't need a max step.
with all types of properties. For instance, a TB writer doesn't need a max step, nor should
we write hyperparameters to the CSV.
:param category: The category that the property belongs to.
:param type: The type of property.
:param value: The property itself.

class GaugeWriter(StatsWriter):
"""
Write all stats that we receive to the timer gauges, so we can track them offline easily
Write all stats that we recieve to the timer gauges, so we can track them offline easily
"""
@staticmethod

) -> None:
is_training = "Not Training."
if "Is Training" in values:
stats_summary = values["Is Training"]
stats_summary = stats_summary = values["Is Training"]
elapsed_time = time.time() - self.training_start_time
log_info: List[str] = [category]
log_info.append(f"Step: {step}")
log_info.append(f"Time Elapsed: {elapsed_time:0.3f} s")
log_info.append(f"Mean Reward: {stats_summary.mean:0.3f}")
log_info.append(f"Std of Reward: {stats_summary.std:0.3f}")
log_info.append(is_training)
logger.info(
"{}: Step: {}. "
"Time Elapsed: {:0.3f} s "
"Mean "
"Reward: {:0.3f}"
". Std of Reward: {:0.3f}. {}".format(
category,
step,
time.time() - self.training_start_time,
stats_summary.mean,
stats_summary.std,
is_training,
)
)
log_info.append(f"ELO: {elo_stats.mean:0.3f}")
logger.info(f"{category} ELO: {elo_stats.mean:0.3f}. ")
log_info.append("No episode was completed since last summary")
log_info.append(is_training)
logger.info(". ".join(log_info))
logger.info(
"{}: Step: {}. No episode was completed since last summary. {}".format(
category, step, is_training
)
)
def add_property(
self, category: str, property_type: StatsPropertyType, value: Any

self.summary_writers: Dict[str, tf.summary.FileWriter] = {}
self.base_dir: str = base_dir
self._clear_past_data = clear_past_data
self.trajectories = 0
def write_stats(
self, category: str, values: Dict[str, StatsSummary], step: int

if summary is not None:
self.summary_writers[category].add_summary(summary, 0)
elif property_type == StatsPropertyType.SALIENCY:
self._maybe_create_summary_writer(category)
# adapted from https://gist.github.com/gyglim/1f8dfb1b5c82627ae3efcfbbadb9f514
def create_summary(label, values):
values = np.array(values)
counts, bin_edges = np.histogram(values, bins=len(values))
hist = tf.HistogramProto()
# value = value / np.sum(value)
# value = np.log(value)
# value = value - np.min(value)
# value = value / np.sum(value)
# for obs, grad in sorted(enumerate(value), reverse=True, key=lambda x: x[1]):
# print(f"Observation {obs} has relevance {grad}")
hist.min = float(np.min(values))
hist.max = float(np.max(values))
hist.num = int(np.prod(values.shape))
hist.sum = float(np.sum(values))
hist.sum_squares = float(np.sum(np.square(values)))
# hist.min = 0.0
# hist.max = float(np.max(value))
# hist.num = len(value)
# hist.sum = float(np.sum(value))
# hist.sum_squares = float(np.sum(value ** 2))
bin_edges = bin_edges[1:]
for edge in bin_edges:
hist.bucket_limit.append(edge)
for c in counts:
hist.bucket.append(c)
return tf.Summary.Value(tag=label, histo=hist)
if isinstance(value, dict):
svals = [create_summary(k,v) for k,v in value.items()]
else:
svals = create_summary("Saliency", value)
# Create and write Summary
# summary = tf.Summary(value=[tf.Summary.Value(tag="Saliency", histo=hist)])
summary = tf.Summary(value=svals)
self.summary_writers[category].add_summary(summary, self.trajectories)
self.summary_writers[category].flush()
self.trajectories += 1
def _dict_to_tensorboard(
self, name: str, input_dict: Dict[str, Any]
) -> Optional[bytes]:

return None
class CSVWriter(StatsWriter):
def __init__(self, base_dir: str, required_fields: List[str] = None):
"""
A StatsWriter that writes stats to a CSV file.
:param base_dir: The directory within which to place the CSV file, which will be {base_dir}/{category}.csv.
:param required_fields: If provided, the CSV writer won't write until these fields have statistics to write for
them.
"""
# We need to keep track of the fields in the CSV, as all rows need the same fields.
self.csv_fields: Dict[str, List[str]] = {}
self.required_fields = required_fields if required_fields else []
self.base_dir: str = base_dir
def write_stats(
self, category: str, values: Dict[str, StatsSummary], step: int
) -> None:
if self._maybe_create_csv_file(category, list(values.keys())):
row = [str(step)]
# Only record the stats that showed up in the first valid row
for key in self.csv_fields[category]:
_val = values.get(key, None)
row.append(str(_val.mean) if _val else "None")
with open(self._get_filepath(category), "a") as file:
writer = csv.writer(file)
writer.writerow(row)
def _maybe_create_csv_file(self, category: str, keys: List[str]) -> bool:
"""
If no CSV file exists and the keys have the required values,
make the CSV file and write the title row.
Returns True if there is now (or already is) a valid CSV file.
"""
if category not in self.csv_fields:
summary_dir = self.base_dir
os.makedirs(summary_dir, exist_ok=True)
# Only store if the row contains the required fields
if all(item in keys for item in self.required_fields):
self.csv_fields[category] = keys
with open(self._get_filepath(category), "w") as file:
title_row = ["Steps"]
title_row.extend(keys)
writer = csv.writer(file)
writer.writerow(title_row)
return True
return False
return True
def _get_filepath(self, category: str) -> str:
file_dir = os.path.join(self.base_dir, category + ".csv")
return file_dir
class StatsReporter:
writers: List[StatsWriter] = []
stats_dict: Dict[str, Dict[str, List]] = defaultdict(lambda: defaultdict(list))

"""
Add a generic property to the StatsReporter. This could be e.g. a Dict of hyperparameters,
a max step count, a trainer type, etc. Note that not all StatsWriters need to be compatible
with all types of properties. For instance, a TB writer doesn't need a max step.
with all types of properties. For instance, a TB writer doesn't need a max step, nor should
we write hyperparameters to the CSV.
:param key: The type of property.
:param value: The property itself.
"""

20
ml-agents/mlagents/trainers/subprocess_env_manager.py


from mlagents_envs.side_channel.environment_parameters_channel import (
EnvironmentParametersChannel,
)
from mlagents_envs.side_channel.agent_parameters_channel import AgentParametersChannel
from mlagents_envs.side_channel.engine_configuration_channel import (
EngineConfigurationChannel,
EngineConfig,

RESET = 4
CLOSE = 5
ENV_EXITED = 6
AGENT_PARAMETERS = 7
class EnvironmentRequest(NamedTuple):

[int, List[SideChannel]], UnityEnvironment
] = cloudpickle.loads(pickled_env_factory)
env_parameters = EnvironmentParametersChannel()
agent_parameters = AgentParametersChannel()
engine_configuration_channel = EngineConfigurationChannel()
engine_configuration_channel.set_configuration(engine_configuration)
stats_channel = StatsSideChannel()

try:
env = env_factory(
worker_id, [env_parameters, engine_configuration_channel, stats_channel]
worker_id, [env_parameters, agent_parameters, engine_configuration_channel, stats_channel]
)
while True:
req: EnvironmentRequest = parent_conn.recv()

for k, v in req.payload.items():
if isinstance(v, ParameterRandomizationSettings):
v.apply(k, env_parameters)
elif req.cmd == EnvironmentCommand.AGENT_PARAMETERS:
to_assign = req.payload
if isinstance(to_assign, List):
for local_id, task in to_assign:
for param, value in task.items():
agent_parameters.set_float_parameter(local_id, param, value)
elif req.cmd == EnvironmentCommand.RESET:
env.reset()
all_step_result = _generate_all_results()

"""
for ew in self.env_workers:
ew.send(EnvironmentCommand.ENVIRONMENT_PARAMETERS, config)
def set_agent_parameters(self, worker_id, local_id, task) -> None:
"""
Sends agent parameter settings to C# via the
AgentParametersSideChannel for the given worker and agent.
:param task: Dict of agent parameter keys and values
"""
self.env_workers[worker_id].send(EnvironmentCommand.AGENT_PARAMETERS, (local_id, task))
@property
def training_behaviors(self) -> Dict[BehaviorName, BehaviorSpec]:
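
A usage sketch of the round trip this enables (worker and agent ids illustrative): the call below travels as an EnvironmentCommand.AGENT_PARAMETERS request, and the worker loop shown above is meant to unpack it into the AgentParametersChannel, where the C# side reads it via Agent.GetParameterWithDefault:

# Queue a task for one agent in one worker.
env_manager.set_agent_parameters(
    worker_id=2,
    local_id=0,
    task={"targetWalkingSpeed": 4.5},
)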

66
ml-agents/mlagents/trainers/agent_processor.py


from collections import defaultdict, Counter
import queue
import numpy as np
from mlagents_envs.base_env import (
DecisionSteps,
DecisionStep,

behavior_id: str,
stats_reporter: StatsReporter,
max_trajectory_length: int = sys.maxsize,
set_task_params_fn = None
):
"""
Create an AgentProcessor.

self.policy = policy
self.episode_steps: Counter = Counter()
self.episode_rewards: Dict[str, float] = defaultdict(float)
self.episode_tasks: Dict[str, Dict[str, float]] = {}
self.task_queue: List[Dict[str, float]] = []
self.task_perf_queue: List[Tuple[Dict[str, float],float]] = []
self.set_task_params_fn = set_task_params_fn
self.tasks_needed: Dict[str, Tuple[str, str]] = {}
def add_experiences(
self,

action_global_agent_ids = [
get_global_agent_id(worker_id, ag_id) for ag_id in previous_action.agent_ids
]
for global_id in action_global_agent_ids:
for global_id, local_id in zip(action_global_agent_ids, previous_action.agent_ids):
if global_id not in self.episode_tasks.keys():
# print("gid: {0} not in episodes tasks - prev actions".format(global_id))
self._assign_task(worker_id, global_id, local_id)
# Iterate over all the terminal steps
for terminal_step in terminal_steps.values():

terminal_step, global_id, terminal_steps.agent_id_to_index[local_id]
)
if global_id not in self.episode_tasks.keys():
# print("gid: {0} not in episodes tasks - terminal".format(global_id))
self._assign_task(worker_id, global_id, local_id)
# Iterate over all the decision steps
for ongoing_step in decision_steps.values():
local_id = ongoing_step.agent_id

)
if global_id not in self.episode_tasks.keys():
# print("gid: {0} not in episodes tasks - decision".format(global_id))
self._assign_task(worker_id, global_id, local_id)
for _gid in action_global_agent_ids:
# If the ID doesn't have a last step result, the agent just reset,

self.policy.save_previous_action(
[_gid], take_action_outputs["action"]
)
def _assign_task(self, worker_id:str, global_id: str, local_id: int):
if len(self.task_queue) > 0:
task = self.task_queue.pop(0)
self.episode_tasks[global_id] = task
self.set_task_params_fn(worker_id, local_id, task)
# print("assigned gid {0} to task {1}".format(global_id, task))
else:
if global_id not in self.tasks_needed.keys():
# print("gid {0} has not already requested a task, requesting new task".format(global_id))
self.tasks_needed[global_id] = (worker_id, local_id)
def get_num_tasks_needed(self):
return len(self.tasks_needed)
def add_new_tasks(self, tasks):
self.task_queue.extend(tasks)
to_del = []
for global_id, (worker_id, local_id) in self.tasks_needed.items():
if len(self.task_queue) > 0:
self._assign_task(worker_id, global_id, local_id)
to_del.append(global_id)
for key in to_del:
self._safe_delete(self.tasks_needed, key)
def _process_step(
self, step: Union[TerminalStep, DecisionStep], global_id: str, index: int

interrupted=interrupted,
memory=memory,
)
if not terminated:
self.episode_steps[global_id] += 1

for traj_queue in self.trajectory_queues:
traj_queue.put(trajectory)
self.experience_buffers[global_id] = []
self.publish_task_performance_queue(self.episode_tasks[global_id], self.episode_rewards[global_id])
self._clean_agent_data(global_id)
def _clean_agent_data(self, global_id: str) -> None:

self._safe_delete(self.last_step_result, global_id)
self._safe_delete(self.episode_steps, global_id)
self._safe_delete(self.episode_rewards, global_id)
self._safe_delete(self.episode_tasks, global_id)
self.policy.remove_previous_action([global_id])
self.policy.remove_memories([global_id])

:param trajectory_queue: Trajectory queue to publish to.
"""
self.trajectory_queues.append(trajectory_queue)
def publish_task_performance_queue(self, task: Dict[str, float], performance: float):
"""
Adds the performance of a given task to the queue to be processed by the task manager.
:param task: Dict mapping each task parameter name to its value.
:param performance: Scalar performance (episode return) of the agent on this task.
"""
self.task_perf_queue.append((task, performance))
def end_episode(self) -> None:
"""

self._queue.put(item)
# TODO: Callback new agent, callback episode end
class AgentManager(AgentProcessor):
"""
An AgentManager is an AgentProcessor that also holds a single trajectory and policy queue.

stats_reporter: StatsReporter,
max_trajectory_length: int = sys.maxsize,
threaded: bool = True,
**kwargs
super().__init__(policy, behavior_id, stats_reporter, max_trajectory_length)
super().__init__(policy, behavior_id, stats_reporter, max_trajectory_length, **kwargs)
trajectory_queue_len = 20 if threaded else 0
self.trajectory_queue: AgentManagerQueue[Trajectory] = AgentManagerQueue(
self.behavior_id, maxlen=trajectory_queue_len

6
ml-agents/mlagents/trainers/learn.py


from mlagents import tf_utils
from mlagents.trainers.trainer_controller import TrainerController
from mlagents.trainers.environment_parameter_manager import EnvironmentParameterManager
from mlagents.trainers.task_manager import TaskManager
from mlagents.trainers.trainer_util import TrainerFactory, handle_existing_directories
from mlagents.trainers.stats import (
TensorboardWriter,

options.environment_parameters, run_seed, restore=checkpoint_settings.resume
)
task_parameter_manager = TaskManager(
options.agent_parameters, restore=checkpoint_settings.resume
)
trainer_factory = TrainerFactory(
options.behaviors,
write_path,

write_path,
checkpoint_settings.run_id,
env_parameter_manager,
task_parameter_manager,
not checkpoint_settings.inference,
run_seed,
)

32
ml-agents/mlagents/trainers/trainer_controller.py


)
from mlagents.trainers.trainer import Trainer
from mlagents.trainers.environment_parameter_manager import EnvironmentParameterManager
from mlagents.trainers.task_manager import TaskManager
from mlagents.trainers.stats import StatsPropertyType
class TrainerController:
def __init__(

run_id: str,
param_manager: EnvironmentParameterManager,
task_manager: TaskManager,
train: bool,
training_seed: int,
):

self.run_id = run_id
self.train_model = train
self.param_manager = param_manager
self.task_manager = task_manager
self.ghost_controller = self.trainer_factory.ghost_controller
self.registered_behavior_ids: Set[str] = set()

A Data structure corresponding to the initial reset state of the
environment.
"""
new_config = self.param_manager.get_current_samplers()
new_config = self.param_manager.get_current_samplers() # TODO add parameter sample
env_manager.reset(config=new_config)
# Register any new behavior ids that were generated on the reset.
self._register_new_behaviors(env_manager, env_manager.first_step_infos)

trainer.stats_reporter,
trainer.parameters.time_horizon,
threaded=trainer.threaded,
set_task_params_fn=env_manager.set_agent_parameters,
)
env_manager.set_agent_manager(name_behavior_id, agent_manager)
env_manager.set_policy(name_behavior_id, policy)

reward_buff = {k: list(t.reward_buffer) for (k, t) in self.trainers.items()}
curr_step = {k: int(t.step) for (k, t) in self.trainers.items()}
max_step = {k: int(t.get_max_steps) for (k, t) in self.trainers.items()}
task_perf = {}
for k, v in env.agent_managers.items():
perfs = list(v.task_perf_queue)  # copy before clearing; clear() empties the same list object
v.task_perf_queue.clear()
task_perf[k] = perfs
# Attempt to increment the lessons of the brains who
# were ready.
updated, param_must_reset = self.param_manager.update_lessons(

trainer.stats_reporter.set_stat(
f"Environment/Lesson Number/{param_name}", lesson_number
)
for behavior_name, manager in env_manager.agent_managers.items():
task_perf = manager.task_perf_queue
manager.task_perf_queue = []
N = len(task_perf) # TODO get info about how many are active if no tasks are completed
if N > 0:
self.task_manager.update(behavior_name, task_perf)
K = manager.get_num_tasks_needed()
if K > 0:
# print("num tasks needed: ", K)
new_tasks = self.task_manager.get_tasks(behavior_name, K)
manager.add_new_tasks(new_tasks)
if len(self.task_manager.report_buffer) >= 16:
d = defaultdict(list)
for task in self.task_manager.report_buffer:
for k, v in task.items():
d[k].append(v)
manager.stats_reporter.add_property(StatsPropertyType.SALIENCY, d)
self.task_manager.report_buffer = []
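The controller only relies on a small surface of TaskManager: update() to ingest (task, performance) pairs, get_tasks() to draw new tasks, and report_buffer for saliency reporting. A duck-typed stand-in for that contract with a placeholder sampler (RandomTaskManager is hypothetical; the real implementation lives in mlagents/trainers/task_manager.py):

# Minimal stand-in exercising the interface the controller uses above.
import random
from typing import Dict, List, Tuple

class RandomTaskManager:
    def __init__(self, low: float = 0.1, high: float = 10.0):
        self.report_buffer: List[Dict[str, float]] = []
        self.low, self.high = low, high

    def update(self, behavior_name: str,
               task_perfs: List[Tuple[Dict[str, float], float]]) -> None:
        # A real TaskManager would refit its active-learning model here.
        self.report_buffer.extend(task for task, _ in task_perfs)

    def get_tasks(self, behavior_name: str, num: int) -> List[Dict[str, float]]:
        return [{"target_speed": random.uniform(self.low, self.high)}
                for _ in range(num)]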
for trainer in self.trainers.values():
if not trainer.threaded:

55
ml-agents/mlagents/trainers/settings.py


return True, smoothing
return False, smoothing
@attr.s(auto_attribs=True)
class ActiveLearnerSettings:
warmup_steps: int = 30  # number of data points to collect before active learning is used
capacity: int = 600  # maximum number of data points to store
num_mc: int = 50  # number of Monte Carlo points used to integrate over the task distribution
beta: float = 1.96  # upper confidence bound parameter: ucb = mean + beta * std
raw_samples: int = 128  # number of candidate tasks to sample before selecting one to optimize
num_restarts: int = 1  # how many task parameter candidates to optimize at once before choosing the best
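The beta field corresponds to the standard upper-confidence-bound acquisition: each candidate task is scored as mean + beta * std under the surrogate model, and the best-scoring candidate is kept. A toy numpy sketch with a stubbed surrogate (all names here are illustrative, not the trainer's actual acquisition code):

import numpy as np

def ucb_select(candidates: np.ndarray, mean: np.ndarray,
               std: np.ndarray, beta: float = 1.96) -> np.ndarray:
    # ucb = mean + beta * std; return the highest-scoring candidate.
    scores = mean + beta * std
    return candidates[int(np.argmax(scores))]

# Stub surrogate over 128 raw task samples in [0.1, 10].
cands = np.random.uniform(0.1, 10.0, size=(128, 1))
mu, sigma = np.random.rand(128), np.random.rand(128)
best = ucb_select(cands, mu, sigma, beta=1.96)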
@attr.s(auto_attribs=True)
class TaskParameterSettings:
parameters: Dict[str, UniformSettings]
active_learning: Optional[ActiveLearnerSettings] = None
num_repeat: Optional[int] = 1  # number of times to repeat a sampled skill
num_batch: Optional[int] = 1  # minimum number of skills to get at once
@staticmethod
def structure(d: Mapping, t: type) -> Dict[str, "TaskParameterSettings"]:
"""
Helper method to structure a Dict of TaskParameterSettings classes. Meant
to be registered with cattr.register_structure_hook() and called with
cattr.structure().
"""
if not isinstance(d, Mapping):
raise TrainerConfigError(
f"Unsupported agent environment parameter settings {d}."
)
d_final: Dict[str, TaskParameterSettings] = {}
for behavior_name, behavior_config in d.items():
activelearner_settings = None
tmp_settings: Dict[str, UniformSettings] = {}
num_repeat = 1
num_batch = 1
for agent_parameter, agent_parameter_config in behavior_config.items():
if agent_parameter == "active_learner":
activelearner_settings = ActiveLearnerSettings(**agent_parameter_config)
elif agent_parameter == "num_repeat":
num_repeat = agent_parameter_config
elif agent_parameter == "num_batch":
num_batch = agent_parameter_config
else:
sampler = ParameterRandomizationSettings.structure(
agent_parameter_config, ParameterRandomizationSettings
)
tmp_settings[agent_parameter] = sampler
d_final[behavior_name] = TaskParameterSettings(parameters=tmp_settings, active_learning=activelearner_settings, num_batch=num_batch, num_repeat=num_repeat)
return d_final
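A hedged example of what this hook accepts, structured directly from a plain dict (the behavior and parameter names are made up; the sampler dict follows the existing sampler_type/sampler_parameters convention used by ParameterRandomizationSettings):

from typing import Dict
from mlagents.trainers.settings import TaskParameterSettings

raw = {
    "Walker": {
        "num_repeat": 2,
        "num_batch": 4,
        "active_learner": {"warmup_steps": 30, "capacity": 600},
        "target_speed": {
            "sampler_type": "uniform",
            "sampler_parameters": {"min_value": 0.1, "max_value": 10.0},
        },
    }
}
settings = TaskParameterSettings.structure(raw, Dict[str, TaskParameterSettings])
assert settings["Walker"].num_batch == 4
assert "target_speed" in settings["Walker"].parameters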
@attr.s(auto_attribs=True)
class Lesson:

env_settings: EnvironmentSettings = attr.ib(factory=EnvironmentSettings)
engine_settings: EngineSettings = attr.ib(factory=EngineSettings)
environment_parameters: Optional[Dict[str, EnvironmentParameterSettings]] = None
agent_parameters: Optional[Dict[str, TaskParameterSettings]] = None
checkpoint_settings: CheckpointSettings = attr.ib(factory=CheckpointSettings)
# These are options that are relevant to the run itself, and not the engine or environment.

cattr.register_structure_hook(CheckpointSettings, strict_to_cls)
cattr.register_structure_hook(
Dict[str, EnvironmentParameterSettings], EnvironmentParameterSettings.structure
)
cattr.register_structure_hook(
Dict[str, TaskParameterSettings], TaskParameterSettings.structure
)
cattr.register_structure_hook(Lesson, strict_to_cls)
cattr.register_structure_hook(

19
Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets/StaticTarget.prefab


m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 6.2, y: 1.15, z: 3.824}
m_LocalScale: {x: 1.2356956, y: 1.2356961, z: 1.2356961}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 1, y: 1, z: 1}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 0

m_IsKinematic: 0
m_Interpolate: 0
m_Constraints: 0
m_CollisionDetection: 0
m_CollisionDetection: 3
--- !u!114 &3631016866778687563
MonoBehaviour:
m_ObjectHideFlags: 0

m_Name:
m_EditorClassIdentifier:
tagToDetect: agent
moveTargetToRandomPosIfTouched: 0
targetSpawnRadius: 0
onTriggerEnterEvent:
m_PersistentCalls:
m_Calls: []
triggerIsTouching: 0
spawnRadius: 0
respawnIfTouched: 0
respawnIfFallsOffPlatform: 1
fallDistance: 5
onTriggerEnterEvent:
m_PersistentCalls:
m_Calls: []

onTriggerExitEvent:
m_PersistentCalls:
m_Calls: []
colliderIsTouching: 0
onCollisionEnterEvent:
m_PersistentCalls:
m_Calls: []

82
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll/WalkerRagdollBase.prefab


m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 895268871264836243}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0.15, z: 0}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 895268873051627235}

- component: {fileID: 895268871377934302}
- component: {fileID: 895268871377934301}
m_Layer: 0
m_Name: WalkerRagdoll
m_Name: WalkerRagdollBase
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0

m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 895268871377934275}
m_LocalRotation: {x: 0, y: 0.7071068, z: 0, w: 0.7071068}
m_LocalPosition: {x: 0, y: 3.07, z: 0}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 3, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 895268871264836332}

m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 90, z: 0}
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!114 &895268871377934297
MonoBehaviour:
m_ObjectHideFlags: 0

m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 236
VectorObservationSize: 243
m_Model: {fileID: 11400000, guid: 3c6170922a9ad4d9f85261699ca00f5d, type: 3}
m_Model: {fileID: 11400000, guid: f598eaeeef9f94691989a2cfaaafb565, type: 3}
m_InferenceDevice: 0
m_BehaviorType: 0
m_BehaviorName: WalkerDynamic

maxStep: 0
hasUpgradedFromAgentParameters: 1
MaxStep: 5000
maximumWalkingSpeed: 999
targetWalkingSpeed: 10
randomizeWalkSpeedEachEpisode: 1
walkDirectionMethod: 0
worldDirToWalk: {x: 1, y: 0, z: 0}
worldPosToWalkTo: {x: 0, y: 0, z: 0}
target: {fileID: 0}
hips: {fileID: 895268871264836332}
chest: {fileID: 7933235354845945071}

armR: {fileID: 7933235355057813930}
forearmR: {fileID: 7933235353195701980}
handR: {fileID: 7933235354616748502}
orientationCube: {fileID: 7559180363928843817}
--- !u!114 &895268871377934303
MonoBehaviour:
m_ObjectHideFlags: 0

m_Name:
m_EditorClassIdentifier:
maxJointSpring: 40000
jointDampen: 3000
maxJointForceLimit: 10000
jointDampen: 5000
maxJointForceLimit: 20000
bodyPartsList: []
--- !u!114 &895268871377934302
MonoBehaviour:

m_Script: {fileID: 11500000, guid: 1513f8a85fedd47efba089213b7c5bde, type: 3}
m_Name:
m_EditorClassIdentifier:
updatedByAgent: 0
transformToFollow: {fileID: 895268871264836332}
targetToLookAt: {fileID: 0}
heightOffset: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235353030744116}
serializedVersion: 2
m_Mass: 3
m_Mass: 4
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: 0.55, y: 0, z: 0}
m_Axis: {x: 0, y: -1, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.7000002, y: 0, z: 0}
m_ConnectedAnchor: {x: -0.7000001, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235353041637840}
serializedVersion: 2
m_Mass: 1
m_Mass: 2
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: 0, y: 0, z: 0}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.70000064, y: 0, z: 0}
m_ConnectedAnchor: {x: -0.70000017, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235353195701956}
serializedVersion: 2
m_Mass: 2
m_Mass: 3
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: -0.5, y: 0, z: 0}
m_Axis: {x: 0, y: 1, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.5000005, y: 0, z: 0}
m_ConnectedAnchor: {x: 0.5, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

m_Anchor: {x: 0, y: 0.5, z: 0}
m_Axis: {x: -1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.39999408, y: -0.29999986, z: 0}
m_ConnectedAnchor: {x: -0.39999396, y: -0.29999995, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235353240438151}
serializedVersion: 2
m_Mass: 2
m_Mass: 3
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: 0.5, y: 0, z: 0}
m_Axis: {x: 0, y: -1, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.5000005, y: 0, z: 0}
m_ConnectedAnchor: {x: -0.5, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

m_Anchor: {x: 0, y: 0.5, z: 0}
m_Axis: {x: -1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.00000011920929, y: -0.5, z: 0}
m_ConnectedAnchor: {x: 0, y: -0.5, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_Anchor: {x: 0, y: 0, z: -0.1}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.00000011920929, y: -0.60000014, z: 0}
m_ConnectedAnchor: {x: 0, y: -0.60000014, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 1, z: 0}
m_XMotion: 0

m_Anchor: {x: 0, y: 0.5, z: 0}
m_Axis: {x: -1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.00000011920929, y: -0.5, z: 0}
m_ConnectedAnchor: {x: 0, y: -0.5, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_Anchor: {x: 0, y: 0.5, z: 0}
m_Axis: {x: -1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.39999408, y: -0.29999986, z: 0}
m_ConnectedAnchor: {x: 0.39999396, y: -0.29999995, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235354074184675}
serializedVersion: 2
m_Mass: 5
m_Mass: 6
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: 0, y: -0.85, z: 0}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0, y: 0.5119996, z: 0}
m_ConnectedAnchor: {x: 0, y: 0.5119997, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235354616748503}
serializedVersion: 2
m_Mass: 1
m_Mass: 2
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: 0, y: 0, z: 0}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.70000064, y: 0, z: 0}
m_ConnectedAnchor: {x: 0.70000017, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

m_Anchor: {x: 0, y: -0.3, z: 0}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0, y: 0.383, z: 0}
m_ConnectedAnchor: {x: 0, y: 0.3829999, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_Anchor: {x: 0, y: -0.5, z: 0}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0, y: 0.3050003, z: 0}
m_ConnectedAnchor: {x: 0, y: 0.30500042, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: -1}
m_XMotion: 0

m_Anchor: {x: 0, y: 0, z: -0.1}
m_Axis: {x: 1, y: 0, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: -0.00000011920929, y: -0.60000014, z: 0}
m_ConnectedAnchor: {x: 0, y: -0.60000014, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 1, z: 0}
m_XMotion: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 7933235355057813906}
serializedVersion: 2
m_Mass: 3
m_Mass: 4
m_Drag: 0.05
m_AngularDrag: 0.05
m_UseGravity: 1

m_Anchor: {x: -0.55, y: 0, z: 0}
m_Axis: {x: 0, y: 1, z: 0}
m_AutoConfigureConnectedAnchor: 1
m_ConnectedAnchor: {x: 0.7000002, y: 0, z: 0}
m_ConnectedAnchor: {x: 0.7000001, y: 0.00000011920929, z: 0}
serializedVersion: 2
m_SecondaryAxis: {x: 0, y: 0, z: 1}
m_XMotion: 0

type: 3}
m_PrefabInstance: {fileID: 7597605653427724053}
m_PrefabAsset: {fileID: 0}
--- !u!114 &7559180363928843817 stripped
MonoBehaviour:
m_CorrespondingSourceObject: {fileID: 114705911240010044, guid: 72f745913c5a34df5aaadd5c1f0024cb,
type: 3}
m_PrefabInstance: {fileID: 7597605653427724053}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 0}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 771e78c5e980e440e8cd19716b55075f, type: 3}
m_Name:
m_EditorClassIdentifier:

34
ml-agents-envs/mlagents_envs/side_channel/agent_parameters_channel.py


from mlagents_envs.side_channel import SideChannel, IncomingMessage, OutgoingMessage
from mlagents_envs.exception import UnityCommunicationException
from mlagents_envs.base_env import AgentId
import uuid
class AgentParametersChannel(SideChannel):
"""
This is the SideChannel for sending agent-specific parameters to Unity.
You can send parameters for a given agent by calling
set_float_parameter.
"""
def __init__(self) -> None:
channel_id = uuid.UUID("534c891e-810f-11ea-a9d0-822485860401")
super().__init__(channel_id)
def on_message_received(self, msg: IncomingMessage) -> None:
raise UnityCommunicationException(
"The EnvironmentParametersChannel received a message from Unity, "
+ "this should not have happend."
)
def set_float_parameter(self, agent_id: AgentId, key: str, value: float) -> None:
"""
Sets a float parameter for the given agent in the Unity Environment.
:param agent_id: The id of the agent this parameter applies to.
:param key: The string identifier of the parameter.
:param value: The float value of the parameter.
"""
msg = OutgoingMessage()
msg.write_int32(agent_id)
msg.write_string(key)
msg.write_float32(value)
super().queue_message_to_send(msg)
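The payload layout mirrors the read order in the C# AgentParametersChannel: int32 agent id, then string key, then float32 value. A quick round-trip check with the side-channel message helpers:

from mlagents_envs.side_channel import IncomingMessage, OutgoingMessage

msg = OutgoingMessage()
msg.write_int32(3)                 # agent/episode id
msg.write_string("target_speed")   # parameter key
msg.write_float32(4.5)             # parameter value

decoded = IncomingMessage(msg.buffer)
assert decoded.read_int32() == 3
assert decoded.read_string() == "target_speed"
assert abs(decoded.read_float32() - 4.5) < 1e-6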

50
com.unity.ml-agents/Runtime/AgentParameters.cs


using System;
using System.Collections.Generic;
using Unity.MLAgents.SideChannels;
namespace Unity.MLAgents
{
/// <summary>
/// A container for the Environment Parameters that may be modified during training.
/// The keys for those parameters are defined in the trainer configurations and the
/// the values are generated from the training process in features such as Curriculum Learning
/// and Environment Parameter Randomization.
///
/// One current assumption for all the environment parameters is that they are of type float.
/// </summary>
public sealed class AgentParameters
{
/// <summary>
/// The side channel that is used to receive the new parameter values.
/// </summary>
readonly AgentParametersChannel m_Channel;
/// <summary>
/// Constructor.
/// </summary>
internal AgentParameters()
{
m_Channel = new AgentParametersChannel();
SideChannelManager.RegisterSideChannel(m_Channel);
}
/// <summary>
/// Returns the parameter value for the specified key. Returns the default value provided
/// if this parameter key does not have a value. Only returns a parameter value if it is
/// of type float.
/// </summary>
/// <param name="key">The parameter key</param>
/// <param name="defaultValue">Default value for this parameter.</param>
/// <returns></returns>
public float GetWithDefault(int episodeId, string key, float defaultValue)
{
return m_Channel.GetWithDefault(episodeId, key, defaultValue);
}
internal void Dispose()
{
SideChannelManager.UnregisterSideChannel(m_Channel);
}
}
}

11
com.unity.ml-agents/Runtime/AgentParameters.cs.meta


fileFormatVersion: 2
guid: c6d4c5ad59e7b4066b64fa47b5205889
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

65
com.unity.ml-agents/Runtime/SideChannels/AgentParametersChannel.cs


using System.Collections.Generic;
using System;
using UnityEngine;
namespace Unity.MLAgents.SideChannels
{
internal class AgentParametersChannel : SideChannel
{
Dictionary<int, Dictionary<string, float>> m_Parameters = new Dictionary<int, Dictionary<string, float>>();
const string k_EnvParamsId = "534c891e-810f-11ea-a9d0-822485860401";
/// <summary>
/// Initializes the side channel. The constructor is internal because only one instance is
/// supported at a time, and is created by the Academy.
/// </summary>
internal AgentParametersChannel()
{
ChannelId = new Guid(k_EnvParamsId);
}
/// <inheritdoc/>
protected override void OnMessageReceived(IncomingMessage msg)
{
var episodeId = msg.ReadInt32();
var key = msg.ReadString();
var value = msg.ReadFloat32();
if (!m_Parameters.ContainsKey(episodeId))
{
m_Parameters[episodeId] = new Dictionary<string, float>();
}
m_Parameters[episodeId][key] = value;
}
/// <summary>
/// Returns the parameter value associated with the provided key. Returns the default
/// value if one doesn't exist.
/// </summary>
/// <param name="key">Parameter key.</param>
/// <param name="defaultValue">Default value to return.</param>
/// <returns></returns>
public float GetWithDefault(int episodeId, string key, float defaultValue)
{
float value = defaultValue;
bool hasKey = false;
Dictionary<string, float> agent_dict;
if(m_Parameters.TryGetValue(episodeId, out agent_dict))
{
agent_dict.TryGetValue(key, out value);
}
return value;
}
/// <summary>
/// Returns all parameter keys that have a registered value.
/// </summary>
/// <returns></returns>
public IList<string> ListParameters(int episodeId)
{
Dictionary<string, float> agent_dict;
m_Parameters.TryGetValue(episodeId, out agent_dict);
return new List<string>(agent_dict.Keys);
}
}
}

11
com.unity.ml-agents/Runtime/SideChannels/AgentParametersChannel.cs.meta


fileFormatVersion: 2
guid: 22884d2b9466b4a589e059247a2f519f
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

18
send_obs.py


from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.agent_parameters_channel import AgentParametersChannel
# Getting observation types
agent_params = AgentParametersChannel()
env = UnityEnvironment(side_channels=[agent_params])
env.reset()
bspec = list(env.behavior_specs.values())[0]
print(bspec.sensor_types)
dsteps, tsteps = env.get_steps(list(env.behavior_specs.keys())[0])
print(dsteps.obs)
# Sending agent parameterizations
for i, _id in enumerate(dsteps.agent_id):
agent_params.set_float_parameter(_id, "test_param", i * 1000)
env.reset()
env.step()

737
Project/Assets/ML-Agents/Examples/3DBall/Prefabs/3DBallTask.prefab


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &1036225416237908
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4259352808153402}
- component: {fileID: 33399516572863624}
- component: {fileID: 135693586603893602}
- component: {fileID: 23610325484096200}
- component: {fileID: 54597526346971362}
m_Layer: 0
m_Name: Ball
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4259352808153402
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1036225416237908}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 4.31, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 4679453577574622}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &33399516572863624
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1036225416237908}
m_Mesh: {fileID: 10207, guid: 0000000000000000e000000000000000, type: 0}
--- !u!135 &135693586603893602
SphereCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1036225416237908}
m_Material: {fileID: 13400000, guid: 56162663048874fd4b10e065f9cf78b7, type: 2}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Radius: 0.5
m_Center: {x: 0, y: 0, z: 0}
--- !u!23 &23610325484096200
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1036225416237908}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: cf2a3769e6d5446698f2e3f5aab68915, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!54 &54597526346971362
Rigidbody:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1036225416237908}
serializedVersion: 2
m_Mass: 1
m_Drag: 0
m_AngularDrag: 0.01
m_UseGravity: 1
m_IsKinematic: 0
m_Interpolate: 0
m_Constraints: 0
m_CollisionDetection: 0
--- !u!1 &1218265376493012
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4082575947564308}
- component: {fileID: 33986757750372936}
- component: {fileID: 23248495933290848}
m_Layer: 0
m_Name: eye
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4082575947564308
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1218265376493012}
m_LocalRotation: {x: -0, y: 1, z: -0, w: 0}
m_LocalPosition: {x: 0.29999995, y: 0.07399994, z: 0.50040054}
m_LocalScale: {x: 0.29457998, y: 0.29457998, z: 0.29457998}
m_Children: []
m_Father: {fileID: 4294419716796784}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 180, z: 0}
--- !u!33 &33986757750372936
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1218265376493012}
m_Mesh: {fileID: 10210, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &23248495933290848
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1218265376493012}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: f731be6866ce749fd8349e67ae81f76a, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!1 &1321468028730240
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4679453577574622}
m_Layer: 0
m_Name: 3DBallTask
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4679453577574622
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1321468028730240}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 5}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 4259352808153402}
- {fileID: 4780098186595842}
m_Father: {fileID: 0}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1 &1424713891854676
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4780098186595842}
- component: {fileID: 65010516625723872}
- component: {fileID: 114368073295828880}
- component: {fileID: 1306725529891448089}
- component: {fileID: 1758424554059689351}
- component: {fileID: 1287132100719688250}
- component: {fileID: 6825609771517314395}
m_Layer: 0
m_Name: Agent
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4780098186595842
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 5, y: 5, z: 5}
m_Children:
- {fileID: 4294419716796784}
m_Father: {fileID: 4679453577574622}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!65 &65010516625723872
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!114 &114368073295828880
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 5d1c4e0b1822b495aa52bc52839ecb30, type: 3}
m_Name:
m_EditorClassIdentifier:
m_BrainParameters:
VectorObservationSize: 8
NumStackedVectorObservations: 1
VectorActionSize: 02000000
VectorActionDescriptions: []
VectorActionSpaceType: 1
m_Model: {fileID: 11400000, guid: 20a7b83be6b0c493d9271c65c897eb9b, type: 3}
m_InferenceDevice: 0
m_BehaviorType: 0
m_BehaviorName: 3DBall
TeamId: 0
m_UseChildSensors: 1
m_ObservableAttributeHandling: 0
--- !u!114 &1306725529891448089
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a5c9d521e5ef4759a8246a07d52221e, type: 3}
m_Name:
m_EditorClassIdentifier:
DecisionPeriod: 5
TakeActionsBetweenDecisions: 1
--- !u!114 &1758424554059689351
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3a6da8f78a394c6ab027688eab81e04d, type: 3}
m_Name:
m_EditorClassIdentifier:
debugCommandLineOverride:
--- !u!114 &1287132100719688250
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3bb324825cf6e4dd6838288011a49a2b, type: 3}
m_Name:
m_EditorClassIdentifier:
agentParameters:
maxStep: 0
hasUpgradedFromAgentParameters: 1
MaxStep: 5000
ball: {fileID: 1036225416237908}
--- !u!114 &6825609771517314395
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1424713891854676}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 176ce9d33a1e6436e87ae60ed5d5a450, type: 3}
m_Name:
m_EditorClassIdentifier:
observationSize: 1
--- !u!1 &1533320402322554
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4667923401885968}
- component: {fileID: 20793118999710892}
m_Layer: 0
m_Name: AgentCamera
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 0
--- !u!4 &4667923401885968
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1533320402322554}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0.15}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 4294419716796784}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!20 &20793118999710892
Camera:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1533320402322554}
m_Enabled: 1
serializedVersion: 2
m_ClearFlags: 2
m_BackGroundColor: {r: 0.46666667, g: 0.5647059, b: 0.60784316, a: 1}
m_projectionMatrixMode: 1
m_SensorSize: {x: 36, y: 24}
m_LensShift: {x: 0, y: 0}
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2
x: 0
y: 0
width: 1
height: 1
near clip plane: 0.3
far clip plane: 1000
field of view: 60
orthographic: 0
orthographic size: 5
m_Depth: 0
m_CullingMask:
serializedVersion: 2
m_Bits: 4294950911
m_RenderingPath: -1
m_TargetTexture: {fileID: 0}
m_TargetDisplay: 0
m_TargetEye: 3
m_HDR: 1
m_AllowMSAA: 1
m_AllowDynamicResolution: 0
m_ForceIntoRT: 0
m_OcclusionCulling: 1
m_StereoConvergence: 10
m_StereoSeparation: 0.022
--- !u!1 &1619100162539582
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4425897039098228}
- component: {fileID: 33259119028337980}
- component: {fileID: 23108868206887546}
m_Layer: 0
m_Name: mouth
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4425897039098228
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1619100162539582}
m_LocalRotation: {x: -0, y: 1, z: -0, w: 0}
m_LocalPosition: {x: 0, y: -0.18299997, z: 0.50040054}
m_LocalScale: {x: 0.27602, y: 0.042489994, z: 0.13891}
m_Children: []
m_Father: {fileID: 4294419716796784}
m_RootOrder: 3
m_LocalEulerAnglesHint: {x: 0, y: 180, z: 0}
--- !u!33 &33259119028337980
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1619100162539582}
m_Mesh: {fileID: 10210, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &23108868206887546
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1619100162539582}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: f731be6866ce749fd8349e67ae81f76a, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!1 &1854695166504686
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4300192163442926}
- component: {fileID: 33165976320323760}
- component: {fileID: 23468552506669568}
m_Layer: 0
m_Name: Headband
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4300192163442926
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1854695166504686}
m_LocalRotation: {x: -0, y: -0, z: 0.016506119, w: 0.9998638}
m_LocalPosition: {x: 0, y: 0.341, z: 0}
m_LocalScale: {x: 1.0441425, y: 0.19278127, z: 1.0441422}
m_Children: []
m_Father: {fileID: 4294419716796784}
m_RootOrder: 4
m_LocalEulerAnglesHint: {x: 0, y: -179.99998, z: 1.8920001}
--- !u!33 &33165976320323760
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1854695166504686}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &23468552506669568
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1854695166504686}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 04be259c590de46f69db4cbd1da877d5, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!1 &1859240399150782
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4294419716796784}
- component: {fileID: 33973749152356522}
- component: {fileID: 23340305563606254}
m_Layer: 0
m_Name: AgentCube_Blue
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4294419716796784
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1859240399150782}
m_LocalRotation: {x: 0, y: 1, z: 0, w: 0}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 4667923401885968}
- {fileID: 4082575947564308}
- {fileID: 4144856465265480}
- {fileID: 4425897039098228}
- {fileID: 4300192163442926}
m_Father: {fileID: 4780098186595842}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 180, z: 0}
--- !u!33 &33973749152356522
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1859240399150782}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &23340305563606254
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1859240399150782}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: c9fa44c2c3f8ce74ca39a3355ea42631, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!1 &1999020414315134
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 4144856465265480}
- component: {fileID: 33069174244444078}
- component: {fileID: 23048386147321498}
m_Layer: 0
m_Name: eye
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &4144856465265480
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1999020414315134}
m_LocalRotation: {x: -0, y: 1, z: -0, w: 0}
m_LocalPosition: {x: -0.29999995, y: 0.07399994, z: 0.50040054}
m_LocalScale: {x: 0.29457998, y: 0.29457998, z: 0.29457998}
m_Children: []
m_Father: {fileID: 4294419716796784}
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 180, z: 0}
--- !u!33 &33069174244444078
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1999020414315134}
m_Mesh: {fileID: 10210, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &23048386147321498
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1999020414315134}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: f731be6866ce749fd8349e67ae81f76a, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0

8
Project/Assets/ML-Agents/Examples/3DBall/Prefabs/3DBallTask.prefab.meta


fileFormatVersion: 2
guid: 26c60e3742e9c4a3d9106c283a4a0b9c
NativeFormatImporter:
externalObjects: {}
mainObjectFileID: 100100000
userData:
assetBundleName:
assetBundleVariant:

1001
Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallTask.unity
File diff suppressed because it is too large
View File

9
Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallTask.unity.meta


fileFormatVersion: 2
guid: ef7bf52aa263d4784bbee42176046057
timeCreated: 1513216032
licenseType: Pro
DefaultImporter:
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:

93
Project/Assets/ML-Agents/Examples/3DBall/Scripts/Task3DAgent.cs


using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class Task3DAgent : Agent
{
[Header("Specific to Ball3D")]
public GameObject ball;
Rigidbody m_BallRb;
EnvironmentParameters m_ResetParams;
TaskSensorComponent m_TaskSensor;
public override void Initialize()
{
m_BallRb = ball.GetComponent<Rigidbody>();
m_ResetParams = Academy.Instance.EnvironmentParameters;
SetResetParameters();
}
public override void CollectObservations(VectorSensor sensor)
{
sensor.AddObservation(gameObject.transform.rotation.z);
sensor.AddObservation(gameObject.transform.rotation.x);
sensor.AddObservation(ball.transform.position - gameObject.transform.position);
sensor.AddObservation(m_BallRb.velocity);
// Example of setting the task parameterization to the ball's x position
m_TaskSensor = this.GetComponent<TaskSensorComponent>();
m_TaskSensor.AddParameterization(ball.transform.position.x);
}
public override void OnActionReceived(float[] vectorAction)
{
var actionZ = 2f * Mathf.Clamp(vectorAction[0], -1f, 1f);
var actionX = 2f * Mathf.Clamp(vectorAction[1], -1f, 1f);
if ((gameObject.transform.rotation.z < 0.25f && actionZ > 0f) ||
(gameObject.transform.rotation.z > -0.25f && actionZ < 0f))
{
gameObject.transform.Rotate(new Vector3(0, 0, 1), actionZ);
}
if ((gameObject.transform.rotation.x < 0.25f && actionX > 0f) ||
(gameObject.transform.rotation.x > -0.25f && actionX < 0f))
{
gameObject.transform.Rotate(new Vector3(1, 0, 0), actionX);
}
if ((ball.transform.position.y - gameObject.transform.position.y) < -2f ||
Mathf.Abs(ball.transform.position.x - gameObject.transform.position.x) > 3f ||
Mathf.Abs(ball.transform.position.z - gameObject.transform.position.z) > 3f)
{
SetReward(-1f);
EndEpisode();
}
else
{
SetReward(0.1f);
}
}
public override void OnEpisodeBegin()
{
gameObject.transform.rotation = new Quaternion(0f, 0f, 0f, 0f);
gameObject.transform.Rotate(new Vector3(1, 0, 0), Random.Range(-10f, 10f));
gameObject.transform.Rotate(new Vector3(0, 0, 1), Random.Range(-10f, 10f));
m_BallRb.velocity = new Vector3(0f, 0f, 0f);
ball.transform.position = new Vector3(Random.Range(-1.5f, 1.5f), 4f, Random.Range(-1.5f, 1.5f))
+ gameObject.transform.position;
//Reset the parameters when the Agent is reset.
SetResetParameters();
}
public override void Heuristic(float[] actionsOut)
{
actionsOut[0] = -Input.GetAxis("Horizontal");
actionsOut[1] = Input.GetAxis("Vertical");
}
public void SetBall()
{
//Set the attributes of the ball by fetching the information from the academy
m_BallRb.mass = m_ResetParams.GetWithDefault("mass", 1.0f);
var scale = m_ResetParams.GetWithDefault("scale", 1.0f);
ball.transform.localScale = new Vector3(scale, scale, scale);
}
public void SetResetParameters()
{
SetBall();
// Get agent parameters
Debug.Log(GetParameterWithDefault("test_param", 0));
}
}

12
Project/Assets/ML-Agents/Examples/3DBall/Scripts/Task3DAgent.cs.meta


fileFormatVersion: 2
guid: 3bb324825cf6e4dd6838288011a49a2b
timeCreated: 1502223572
licenseType: Free
MonoImporter:
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

48
Project/Assets/ML-Agents/Examples/3DBall/Scripts/TaskSensorComponent.cs


using Unity.MLAgents.Sensors;
public class TaskSensorComponent : SensorComponent
{
public int observationSize;
public TaskSensor task_sensor;
/// <summary>
/// Creates a TaskSensor.
/// </summary>
/// <returns></returns>
public override ISensor CreateSensor()
{
task_sensor = new TaskSensor(observationSize);
return task_sensor;
}
/// <inheritdoc/>
public override int[] GetObservationShape()
{
return new[] { observationSize };
}
public void AddParameterization(float parameter)
{
if (task_sensor != null)
{
task_sensor.AddObservation(parameter);
}
}
}
public class TaskSensor : VectorSensor
{
public TaskSensor(int observationSize, string name = null)
: base(observationSize, name ?? $"TaskSensor_size{observationSize}")
{
}
public override SensorType GetSensorType()
{
return SensorType.Parameterization;
}
}

11
Project/Assets/ML-Agents/Examples/3DBall/Scripts/TaskSensorComponent.cs.meta


fileFormatVersion: 2
guid: 176ce9d33a1e6436e87ae60ed5d5a450
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

252
Project/Assets/ML-Agents/Examples/3DBall/TFModels/My3DBall.nn
File diff suppressed because it is too large
View File

11
Project/Assets/ML-Agents/Examples/3DBall/TFModels/My3DBall.nn.meta


fileFormatVersion: 2
guid: 5fa1fb7f873d34cb5aad88303fef876d
ScriptedImporter:
fileIDToRecycleName:
11400000: main obj
11400002: model data
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:
script: {fileID: 11500000, guid: 19ed1486aa27d4903b34839f37b8f69f, type: 3}

7
Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/PlatformDynamicTarget.prefab.meta


fileFormatVersion: 2
guid: f0d7741d9e06247f6843b921a206b978
PrefabImporter:
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:

8
Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets.meta


fileFormatVersion: 2
guid: 88818c9b63c96424aa8e0fca85552133
folderAsset: yes
DefaultImporter:
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:

523
Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/PlatformDynamicTarget.prefab


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &6907050159044240885
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6902197503240654641}
- component: {fileID: 6894500521640151429}
- component: {fileID: 6885223417161833361}
- component: {fileID: 6859132155796343735}
m_Layer: 0
m_Name: Wall (1)
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6902197503240654641
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907050159044240885}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: -50, y: 0, z: 0}
m_LocalScale: {x: 1, y: 5, z: 101}
m_Children: []
m_Father: {fileID: 6902102727328990095}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &6894500521640151429
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907050159044240885}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &6885223417161833361
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907050159044240885}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!65 &6859132155796343735
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907050159044240885}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!1 &6907401236047902865
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6902265967514060089}
- component: {fileID: 6891025662345346653}
- component: {fileID: 6859036447448677835}
- component: {fileID: 6884684845870454579}
m_Layer: 14
m_Name: Ground
m_TagString: ground
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6902265967514060089
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907401236047902865}
m_LocalRotation: {x: 0, y: 0.7071068, z: 0, w: 0.7071068}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 100, y: 1, z: 100}
m_Children: []
m_Father: {fileID: 6902107422946006027}
m_RootOrder: 1
m_LocalEulerAnglesHint: {x: 0, y: 90, z: 0}
--- !u!33 &6891025662345346653
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907401236047902865}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!65 &6859036447448677835
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907401236047902865}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!23 &6884684845870454579
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907401236047902865}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: acba6bf2a290a496bb8989b42bf8698d, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!1 &6907666814270504157
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6902102727328990095}
m_Layer: 0
m_Name: Walls
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6902102727328990095
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907666814270504157}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 2, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 6901873285403999439}
- {fileID: 6902197503240654641}
- {fileID: 6901900959948323433}
- {fileID: 6905948743199606957}
m_Father: {fileID: 6902107422946006027}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1 &6907680617094430597
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6901873285403999439}
- component: {fileID: 6894618984257886823}
- component: {fileID: 6884854148710353183}
- component: {fileID: 6863062098498978603}
m_Layer: 0
m_Name: Wall
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6901873285403999439
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907680617094430597}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 50, y: 0, z: 0}
m_LocalScale: {x: 1, y: 5, z: 101}
m_Children: []
m_Father: {fileID: 6902102727328990095}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &6894618984257886823
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907680617094430597}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &6884854148710353183
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907680617094430597}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!65 &6863062098498978603
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907680617094430597}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!1 &6907740118844148851
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6902107422946006027}
m_Layer: 0
m_Name: PlatformDynamicTarget
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &6902107422946006027
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907740118844148851}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children:
- {fileID: 6902102727328990095}
- {fileID: 6902265967514060089}
m_Father: {fileID: 0}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!1 &6907828132384848309
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6905948743199606957}
- component: {fileID: 6894463671975680535}
- component: {fileID: 6884868534516719387}
- component: {fileID: 6859048605259525735}
m_Layer: 0
m_Name: Wall (3)
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6905948743199606957
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907828132384848309}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: -50}
m_LocalScale: {x: 100, y: 5, z: 1}
m_Children: []
m_Father: {fileID: 6902102727328990095}
m_RootOrder: 3
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &6894463671975680535
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907828132384848309}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &6884868534516719387
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907828132384848309}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!65 &6859048605259525735
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907828132384848309}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!1 &6907860845836169157
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 6901900959948323433}
- component: {fileID: 6893927248293796423}
- component: {fileID: 6885176866006237333}
- component: {fileID: 6859395915623032135}
m_Layer: 0
m_Name: Wall (2)
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 4294967295
m_IsActive: 1
--- !u!4 &6901900959948323433
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907860845836169157}
m_LocalRotation: {x: -0, y: -0, z: -0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 50}
m_LocalScale: {x: 100, y: 5, z: 1}
m_Children: []
m_Father: {fileID: 6902102727328990095}
m_RootOrder: 2
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &6893927248293796423
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907860845836169157}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!23 &6885176866006237333
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907860845836169157}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: 66163cf35956a4be08e801b750c26f33, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 0
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!65 &6859395915623032135
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 6907860845836169157}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}

10
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDy.demo.meta


fileFormatVersion: 2
guid: d0bff0b25f0d247f8a3951edb90cc71a
ScriptedImporter:
  fileIDToRecycleName:
    11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDy.demo
  externalObjects: {}
  userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

10
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDyVS.demo.meta


fileFormatVersion: 2
guid: 023d43e719a4140a2a683974ce7bb955
ScriptedImporter:
  fileIDToRecycleName:
    11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerDyVS.demo
  externalObjects: {}
  userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

10
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerSt.demo.meta


fileFormatVersion: 2
guid: 6568f9aea39f34581b4df153e4a8bdd3
ScriptedImporter:
  fileIDToRecycleName:
    11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerSt.demo
  externalObjects: {}
  userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

10
Project/Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerStVS.demo.meta


fileFormatVersion: 2
guid: 06aba1fa7650441b1bc832f52f9801ee
ScriptedImporter:
  fileIDToRecycleName:
    11400002: Assets/ML-Agents/Examples/Walker/Demos/ExpertWalkerStVS.demo
  externalObjects: {}
  userData: ' (Unity.MLAgents.Demonstrations.DemonstrationSummary)'
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 7bd65ce151aaa4a41a45312543c56be1, type: 3}

8
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Ragdoll.meta


fileFormatVersion: 2
guid: d64d77dc566364a31896e5da2ac8534b
folderAsset: yes
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

8
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms.meta


fileFormatVersion: 2
guid: cd296ba30964e4cf086044f1a7618c0b
folderAsset: yes
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

9
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStaticVariableSpeed.unity.meta


fileFormatVersion: 2
guid: 0295e51cc064f41b28ef97e70902cf13
timeCreated: 1520420566
licenseType: Free
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

1001
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamicVariableSpeed.unity
Diff content is too large to display.
View file

7
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerDynamicVariableSpeed.unity.meta


fileFormatVersion: 2
guid: 2b839ee93e7a4467f9f8b4803c4a239b
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

1001
Project/Assets/ML-Agents/Examples/Walker/Scenes/WalkerStaticVariableSpeed.unity
Diff content is too large to display.
View file

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamicVariableSpeed.nn
Diff content is too large to display.
View file

11
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamicVariableSpeed.nn.meta


fileFormatVersion: 2
guid: 1a6e4a4e15a5d49a7acac1f78bc1f514
ScriptedImporter:
  fileIDToRecycleName:
    11400000: main obj
    11400002: model data
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 19ed1486aa27d4903b34839f37b8f69f, type: 3}

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamic.nn
Diff content is too large to display.
View file

11
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerDynamic.nn.meta


fileFormatVersion: 2
guid: 2c37bf93cad864ba5a91689aa40eba6d
ScriptedImporter:
  fileIDToRecycleName:
    11400000: main obj
    11400002: model data
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 19ed1486aa27d4903b34839f37b8f69f, type: 3}

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStaticVariableSpeed.nn
Diff content is too large to display.
View file

11
Project/Assets/ML-Agents/Examples/Walker/TFModels/WalkerStaticVariableSpeed.nn.meta


fileFormatVersion: 2
guid: a473273f582f94ecbbb2df833dd7251d
ScriptedImporter:
  fileIDToRecycleName:
    11400000: main obj
    11400002: model data
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:
  script: {fileID: 11500000, guid: 19ed1486aa27d4903b34839f37b8f69f, type: 3}

8
Project/Assets/ML-Agents/Examples/Walker/TFModels/ObserveDist.meta


fileFormatVersion: 2
guid: a4c86e35548874a8d80fa1c183b68018
folderAsset: yes
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

8
Project/Assets/ML-Agents/Examples/Walker/TFModels/ObserveDistWithHH.meta


fileFormatVersion: 2
guid: d7305d5b0d0e24dd494eabf514fa7801
folderAsset: yes
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

8
Project/Assets/ML-Agents/Examples/Walker/TFModels/OrigTarg.meta


fileFormatVersion: 2
guid: 774a63148bc504bbea95350864917703
folderAsset: yes
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

26
config/ppo/WalkerDynamicVariableSpeed.yaml


behaviors:
  WalkerDynamicVariableSpeed:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 512
      num_layers: 3
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 30000000
    time_horizon: 1000
    summary_freq: 30000
    threaded: true

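For orientation: a behavior config like this one is passed to the standard trainer entry point. A typical invocation (the run ID is an arbitrary label you choose) looks like:

    mlagents-learn config/ppo/WalkerDynamicVariableSpeed.yaml --run-id=WalkerDynamicVariableSpeed
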
27
config/ppo/RollerBall.yaml


behaviors:
  RollerBall:
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 12000
      learning_rate: 0.0003
      beta: 0.001
      epsilon: 0.2
      lambd: 0.99
      num_epoch: 5
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 64
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    output_path: default
    keep_checkpoints: 5
    max_steps: 1000000
    time_horizon: 1000
    summary_freq: 12000
    threaded: true

35
config/ppo/WalkerStaticVariableSpeed.yaml


behaviors:
  WalkerStaticVariableSpeed:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 512
      num_layers: 3
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 30000000
    time_horizon: 1000
    summary_freq: 30000
    threaded: true
agent_parameters:
  WalkerStaticVariableSpeed:
    num_repeat: 1
    num_batch: 1
    targetWalkingSpeed:
      sampler_type: uniform
      sampler_parameters:
        min_value: 0.1
        max_value: 10

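For reference, `sampler_type: uniform` above amounts to drawing each task parameter independently from its [min_value, max_value] range. A minimal sketch of the equivalent sampling, mirroring sample_random_points in ml-agents/mlagents/trainers/active_learning.py later in this diff:

    import torch

    bounds = torch.tensor([[0.1], [10.0]])  # rows: [mins], [maxs]; one column per parameter

    def sample_uniform_tasks(n):
        # Scale uniform noise in [0, 1) into [min, max] per parameter.
        u = torch.rand(n, bounds.size(1))
        return bounds[0] + (bounds[1] - bounds[0]) * u

    print(sample_uniform_tasks(3))  # e.g. three candidate targetWalkingSpeed values
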
42
config/ppo/WalkerStaticVariableSpeedActive.yaml


behaviors:
  WalkerStaticVariableSpeed:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 512
      num_layers: 3
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 30000000
    time_horizon: 1000
    summary_freq: 30000
    threaded: true
agent_parameters:
  WalkerStaticVariableSpeed:
    num_repeat: 8
    num_batch: 16
    active_learner:
      warmup_steps: 600
      capacity: 600
      num_mc: 100
      beta: 1.96
      raw_samples: 100
      num_restarts: 1
    targetWalkingSpeed:
      sampler_type: uniform
      sampler_parameters:
        min_value: 0.1
        max_value: 10

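The `active_learner` block above supplies the constructor arguments for the ActiveLearningTaskSampler built in ml-agents/mlagents/trainers/task_manager.py later in this diff. A minimal sketch of that mapping, with the literal values from this YAML (`task_ranges` stacks one [min_value, max_value] row per sampled parameter, here just targetWalkingSpeed):

    import torch
    from mlagents.trainers.active_learning import ActiveLearningTaskSampler

    task_ranges = torch.tensor([[0.1, 10.0]])  # one row per parameter: [min, max]
    sampler = ActiveLearningTaskSampler(
        task_ranges,
        warmup_steps=600,
        capacity=600,
        num_mc=100,
        beta=1.96,
        raw_samples=100,
        num_restarts=1,
    )
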
233
ml-agents/mlagents/trainers/active_learning.py


import torch
from torch import Tensor
from botorch import settings
from botorch.acquisition.monte_carlo import MCAcquisitionFunction
from botorch.acquisition.objective import ScalarizedObjective, IdentityMCObjective
from botorch.models.gpytorch import GPyTorchModel
from botorch.models.model import Model
from botorch.models import SingleTaskGP
from botorch.sampling.samplers import MCSampler, SobolQMCNormalSampler
from botorch.utils.transforms import concatenate_pending_points, t_batch_mode_transform
from botorch.fit import fit_gpytorch_model
from botorch.optim import optimize_acqf_cyclic, optimize_acqf
from botorch.optim.initializers import initialize_q_batch_nonneg
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.distributions import MultivariateNormal
from gpytorch.means import ConstantMean
from gpytorch.models import ExactGP
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.kernels import (
    ScaleKernel,
    RBFKernel,
    Kernel,
    ProductKernel,
    AdditiveKernel,
    GridInterpolationKernel,
    AdditiveStructureKernel,
    ProductStructureKernel,
)
from gpytorch.utils.grid import choose_grid_size
from typing import Optional, Union


class qEISP(MCAcquisitionFunction):
    def __init__(
        self,
        model: Model,
        beta: Union[float, Tensor],
        mc_points: Tensor,
        sampler: Optional[MCSampler] = None,
        objective: Optional[ScalarizedObjective] = None,
        X_pending: Optional[Tensor] = None,
        maximize: bool = True,
    ) -> None:
        r"""q-Expected Improvement of Skill Performance.

        Args:
            model: A fitted model.
            beta: Value that trades off between the upper confidence bound and
                the mean of the fantasized performance.
            mc_points: A `batch_shape x N x d` tensor of points to use for
                MC-integrating the posterior variance. Usually, these are qMC
                samples on the whole design space, but biased sampling directly
                allows weighted integration of the posterior variance.
            sampler: The sampler used for drawing fantasy samples. In the basic
                setting of a standard GP (default) this is a dummy, since the
                variance of the model after conditioning does not actually depend
                on the sampled values.
            objective: A ScalarizedObjective. Required for multi-output models.
            X_pending: A `n' x d`-dim Tensor of `n'` design points that have been
                submitted for function evaluation but have not yet been evaluated.
            maximize: If True, uses the UCB of performance scaled by beta;
                otherwise uses the LCB.

        (Docstring adapted from the corresponding BoTorch class, as are the
        comments below.)
        """
        super().__init__(model=model, objective=objective)
        if sampler is None:
            # If no sampler is provided, we use the following dummy sampler for the
            # fantasize() method in forward. IMPORTANT: This assumes that the posterior
            # variance does not depend on the samples y (only on x), which is true for
            # standard GP models, but not in general (e.g. for other likelihoods or
            # heteroskedastic GPs using a separate noise model fit on data).
            sampler = SobolQMCNormalSampler(
                num_samples=1, resample=False, collapse_batch_dims=True
            )
        if not torch.is_tensor(beta):
            beta = torch.tensor(beta)
        self.register_buffer("beta", beta)
        self.sampler = sampler
        self.X_pending = X_pending
        self.register_buffer("mc_points", mc_points)
        self.maximize = maximize

    @concatenate_pending_points
    @t_batch_mode_transform()
    def forward(self, X: Tensor) -> Tensor:
        self.beta = self.beta.to(X)
        with settings.propagate_grads(True):
            posterior = self.model.posterior(X=X)
            batch_shape = X.shape[:-2]
            mean = posterior.mean.view(*batch_shape, X.shape[-2], -1)
            variance = posterior.variance.view(*batch_shape, X.shape[-2], -1)
            delta = self.beta.expand_as(mean) * variance.sqrt()
            if self.maximize:
                Yhat = mean + delta
            else:
                Yhat = mean - delta
            bdims = tuple(1 for _ in X.shape[:-2])
            if self.model.num_outputs > 1:
                # We use q=1 here b/c ScalarizedObjective currently does not fully exploit
                # lazy tensor operations and thus may be slow / overly memory-hungry.
                # TODO (T52818288): Properly use lazy tensors in scalarize_posterior
                mc_points = self.mc_points.view(-1, *bdims, 1, X.size(-1))
            else:
                # While we only need marginal variances, we can evaluate for q>1
                # b/c for GPyTorch models lazy evaluation can make this quite a bit
                # faster than evaluating in t-batch mode with q-batch size of 1.
                mc_points = self.mc_points.view(*bdims, -1, X.size(-1))
            Yhat = Yhat.view(*batch_shape, X.shape[-2], -1)
            # Condition on the optimistic fantasy outcomes and measure how much the
            # posterior mean over the MC integration points improves.
            fantasy_model = self.model.condition_on_observations(X=X, Y=Yhat)
            posterior1 = self.model.posterior(mc_points)
            posterior2 = fantasy_model.posterior(mc_points)
            # transform with the scalarized objective
            posterior1 = self.objective(posterior1.mean)
            posterior2 = self.objective(posterior2.mean)
            improvement = posterior2 - posterior1
            return improvement.mean(dim=-1)


class StandardActiveLearningGP(ExactGP, GPyTorchModel):
    _num_outputs = 1  # to inform GPyTorchModel API

    def __init__(self, train_X, train_Y, bounds=None):
        # squeeze output dim before passing train_Y to ExactGP
        super(StandardActiveLearningGP, self).__init__(
            train_X, train_Y.squeeze(-1), GaussianLikelihood()
        )
        self.mean_module = ConstantMean()
        xdims = train_X.shape[-1]
        # Spatial kernel over the task parameters; time kernel over the last dim.
        self.Kspatial = ScaleKernel(
            RBFKernel(active_dims=torch.tensor(list(range(xdims - 1))))
        )
        self.Ktime = ScaleKernel(RBFKernel(active_dims=torch.tensor([xdims - 1])))
        self.covar_module = AdditiveKernel(
            self.Kspatial, ProductKernel(self.Kspatial, self.Ktime)
        )
        self.to(train_X)  # make sure we're on the right device/dtype

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return MultivariateNormal(mean_x, covar_x)


class ActiveLearningTaskSampler(object):
    def __init__(
        self,
        ranges,
        warmup_steps: int = 30,
        capacity: int = 600,
        num_mc: int = 500,
        beta: float = 1.96,
        raw_samples: int = 128,
        num_restarts: int = 1,
        num_batch: int = 16,
    ):
        self.ranges = ranges
        self.warmup_steps = warmup_steps
        self.capacity = capacity
        self.num_mc = num_mc
        self.beta = beta
        self.raw_samples = raw_samples
        self.num_restarts = num_restarts
        self.xdim = ranges.shape[0] + 1
        self.model = None
        self.mll = None
        self.X = None
        self.Y = None
        # Append a [0, 1] range for the time dimension; bounds is 2 x (d + 1).
        self.bounds = torch.tensor(ranges)
        self.bounds = torch.cat([self.bounds, torch.tensor([[0.0, 1.0]])]).T

    def update_model(self, new_X, new_Y, refit=False):
        if self.X is not None:
            new_X = new_X.to(self.X)
            new_Y = new_Y.to(self.X)
            self.X = torch.cat([self.X, new_X])
            self.Y = torch.cat([self.Y, new_Y])
            if self.model is not None:
                state_dict = self.model.state_dict()
            else:
                state_dict = None
        else:
            self.X = new_X.float()
            self.Y = new_Y.float()
            state_dict = None

        # Keep only the most recent `capacity` observations.
        T = self.capacity
        if self.X.shape[0] >= T:
            self.X = self.X[-T:, :]
            self.Y = self.Y[-T:, :]

        if self.X.shape[0] < self.warmup_steps:
            # TODO: fitting seems to throw an error if only one sample is present.
            # Refitting should probably only happen every N data points anyway.
            return None

        if refit:
            model = StandardActiveLearningGP(self.X, self.Y, bounds=self.bounds)
            mll = ExactMarginalLogLikelihood(model.likelihood, model)
            self.model = model
            self.mll = mll
            if state_dict is not None:
                self.model.load_state_dict(state_dict)
            fit_gpytorch_model(mll)
        # elif self.model is not None:
        #     self.model.set_train_data(self.X, self.Y)
        #     self.model = self.model.condition_on_observations(new_X, new_Y)
        #     # TODO: might be faster than setting the data; needs testing.

    def get_design_points(self, num_points: int = 1, time=None):
        # Resolve the time index before comparing it, so a None default cannot be
        # compared against warmup_steps (which would raise a TypeError).
        if time is None and self.X is not None:
            time = self.X[:, -1].max() + 1
        if not self.model or time is None or time < self.warmup_steps:
            return sample_random_points(self.bounds, num_points)

        # Clone so that fixing the time coordinate does not mutate self.bounds.
        bounds = self.bounds.clone()
        bounds[:, -1] = time
        num_mc = self.num_mc
        mc_points = torch.rand(
            num_mc, bounds.size(1), device=self.X.device, dtype=self.X.dtype
        )
        mc_points = bounds[0] + (bounds[1] - bounds[0]) * mc_points

        qeisp = qEISP(self.model, mc_points=mc_points, beta=self.beta)
        try:
            candidates, acq_value = optimize_acqf(
                acq_function=qeisp,
                bounds=bounds,
                raw_samples=self.raw_samples,
                q=num_points,
                num_restarts=self.num_restarts,
                return_best_only=True,
            )
            return candidates
        except Exception:
            # Fall back to uniform sampling if acquisition optimization fails.
            return sample_random_points(self.bounds, num_points)


def sample_random_points(bounds, num_points):
    points = torch.rand(
        num_points, bounds.size(1), device=bounds.device, dtype=bounds.dtype
    )
    points = bounds[0] + (bounds[1] - bounds[0]) * points
    return points

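In short, qEISP scores a candidate batch X by how much the GP's posterior mean over mc_points rises after conditioning on the optimistic fantasy Yhat = mu(X) + beta * sigma(X). A self-contained usage sketch of ActiveLearningTaskSampler (synthetic performance values; assumes botorch and gpytorch are installed, and uses a deliberately tiny warmup for illustration):

    import torch
    from mlagents.trainers.active_learning import ActiveLearningTaskSampler

    ranges = torch.tensor([[0.1, 10.0]])           # targetWalkingSpeed bounds
    sampler = ActiveLearningTaskSampler(ranges, warmup_steps=4)

    for t in range(8):
        # Each design point is (parameter..., time); random during warmup,
        # acquisition-optimized once the GP is fit and time >= warmup_steps.
        tasks = sampler.get_design_points(num_points=2, time=float(t))
        perfs = -(tasks[:, :1] - 5.0).abs()        # stand-in for episode returns
        sampler.update_model(tasks, perfs, refit=True)
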
175
ml-agents/mlagents/trainers/task_manager.py


from typing import Dict, List, Tuple, Optional
from collections import defaultdict

import numpy as np
import torch

from mlagents.trainers.settings import (
    TaskParameterSettings,
    ParameterRandomizationSettings,
)
from mlagents.trainers.training_status import GlobalTrainingStatus, StatusType
from mlagents_envs.logging_util import get_logger
from mlagents.trainers.environment_parameter_manager import EnvironmentParameterManager
from mlagents.trainers.active_learning import (
    ActiveLearningTaskSampler,
    sample_random_points,
)

logger = get_logger(__name__)


class TaskManager:
    def __init__(
        self,
        settings: Optional[Dict[str, TaskParameterSettings]] = None,
        restore: bool = False,
    ):
        """
        The TaskManager manages all the task parameters of a training session.
        It determines when parameters should change and gives access to the
        current sampler of each parameter.
        :param settings: A dictionary from behavior name to TaskParameterSettings.
        :param restore: If true, the TaskManager will use the GlobalTrainingStatus
            to try and reload the lesson status of each environment parameter.
        """
        if settings is None:
            settings = {}
        self._dict_settings = settings
        self.behavior_names = list(self._dict_settings.keys())
        self.param_names = {
            name: list(self._dict_settings[name].parameters.keys())
            for name in self.behavior_names
        }
        self._taskSamplers = {}
        self.report_buffer = []
        self.num_repeat = {name: 1 for name in self.behavior_names}
        self.task_completed = {name: defaultdict(list) for name in self.behavior_names}
        self.num_batch = {name: 1 for name in self.behavior_names}
        for behavior_name in self.behavior_names:
            lows = []
            highs = []
            parameters = self._dict_settings[behavior_name].parameters
            for parameter_name in self.param_names[behavior_name]:
                low = parameters[parameter_name].min_value
                high = parameters[parameter_name].max_value
                lows.append(low)
                highs.append(high)
            task_ranges = torch.tensor([lows, highs]).float().T
            self.num_repeat[behavior_name] = self._dict_settings[behavior_name].num_repeat
            self.num_batch[behavior_name] = self._dict_settings[behavior_name].num_batch
            active_hyps = self._dict_settings[behavior_name].active_learning
            if active_hyps:  # use active learning
                self._taskSamplers[behavior_name] = ActiveLearningTaskSampler(
                    task_ranges,
                    warmup_steps=active_hyps.warmup_steps,
                    capacity=active_hyps.capacity,
                    num_mc=active_hyps.num_mc,
                    beta=active_hyps.beta,
                    raw_samples=active_hyps.raw_samples,
                    num_restarts=active_hyps.num_restarts,
                )
            else:  # use uniform random sampling
                # Bind this behavior's ranges at definition time; a plain closure
                # would capture the last loop iteration's task_ranges.
                self._taskSamplers[behavior_name] = (
                    lambda n, ranges=task_ranges: sample_random_points(ranges.T, n)
                )
        self.t = {name: 0.0 for name in self.behavior_names}
        self.counter = {name: 0 for name in self.behavior_names}

    def _make_task(self, behavior_name, tau):
        """
        Converts an array to a dictionary so it can be passed to the C# side
        through the agent parameter channel.
        """
        task = {}
        for i, name in enumerate(self.param_names[behavior_name]):
            task[name] = tau[i]
        return task

    def _build_tau(self, behavior_name, task, time):
        """
        Converts a dictionary description of the task to a vector representation
        and appends the time parameter.
        """
        tau = []
        for name in self.param_names[behavior_name]:
            tau.append(task[name])
        tau.append(time)
        return torch.tensor(tau).float()

    def get_tasks(self, behavior_name, num_samples) -> List[Dict]:
        """
        Samples task parameters to pass to agents.
        """
        # TODO: make this work with actual behavior names
        behavior_name = [
            bname for bname in self.behavior_names if bname in behavior_name
        ][0]
        current_time = self.t[behavior_name] + 1
        if isinstance(self._taskSamplers[behavior_name], ActiveLearningTaskSampler):
            num_points = max(num_samples, self.num_batch[behavior_name])
            taus = (
                self._taskSamplers[behavior_name]
                .get_design_points(num_points=num_points, time=current_time)
                .data.numpy()
                .tolist()
            )
        else:
            taus = self._taskSamplers[behavior_name](num_samples).tolist()
        tasks = [self._make_task(behavior_name, tau) for tau in taus]
        self.report_buffer.extend(tasks)
        tasks_repeated = []
        for i in range(self.num_repeat[behavior_name]):
            tasks_repeated.extend(tasks)
        return tasks_repeated

    def add_run(self, behavior_name, tau, perf):
        """
        Adds a finished run to the buffer, organized by tau.
        """
        k = tuple(tau.data.numpy().flatten()[:-1].tolist())
        self.task_completed[behavior_name][k].append(perf)

    def get_data(self, behavior_name, last=True):
        """
        Compiles the performances of the runs that have been completed.
        """
        taus = []
        perfs = []
        t = self.t[behavior_name]
        for k, v in self.task_completed[behavior_name].items():
            tau = torch.tensor(k + (t,)).float()
            taus.append(tau)
            if last:
                perf = v[-1]
            else:
                perf = np.mean(v)
            perfs.append(perf)
        X = torch.stack(taus, dim=0)
        Y = torch.tensor(perfs).float().reshape(-1, 1)
        return X, Y

    def update(
        self, behavior_name: str, task_perfs: List[Tuple[Dict, float]]
    ) -> Tuple[bool, bool]:
        """
        Updates the model of the task performance.
        """
        must_reset = False
        updated = False
        # TODO: make this work with actual behavior names
        behavior_name = [
            bname for bname in self.behavior_names if bname in behavior_name
        ][0]
        if isinstance(self._taskSamplers[behavior_name], ActiveLearningTaskSampler):
            for task, perf in task_perfs:
                tau = self._build_tau(behavior_name, task, self.t[behavior_name])
                self.add_run(behavior_name, tau, perf)
            N = len(task_perfs)
            self.counter[behavior_name] += N
            M = self.num_repeat[behavior_name] * self.num_batch[behavior_name]
            if self.counter[behavior_name] >= M:
                updated = True
                self.t[behavior_name] += 1
                X, Y = self.get_data(behavior_name, last=True)
                self.task_completed[behavior_name] = defaultdict(list)
                self._taskSamplers[behavior_name].update_model(X, Y, refit=True)
        return updated, must_reset


def uniform_sample(ranges, num_samples):
    low = ranges[:, 0]
    high = ranges[:, 1]
    # Draw num_samples points, one value per parameter dimension.
    points = np.random.uniform(low=low, high=high, size=(num_samples, ranges.shape[0]))
    return points

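A minimal driver sketch for TaskManager. The settings stand-ins below only mimic the attributes TaskManager reads (.parameters with min_value/max_value, .num_repeat, .num_batch, .active_learning); in a real run they come from the YAML via mlagents.trainers.settings.TaskParameterSettings, and the performance values would be episode returns rather than the placeholder zeros used here:

    from types import SimpleNamespace
    from mlagents.trainers.task_manager import TaskManager

    speed = SimpleNamespace(min_value=0.1, max_value=10.0)
    settings = {
        "WalkerStaticVariableSpeed": SimpleNamespace(
            parameters={"targetWalkingSpeed": speed},
            num_repeat=1,
            num_batch=1,
            active_learning=None,   # None -> uniform random task sampling
        )
    }
    manager = TaskManager(settings)

    # get_tasks matches on substring, so a decorated behavior name also works.
    tasks = manager.get_tasks("WalkerStaticVariableSpeed?team=0", num_samples=4)
    perfs = [(task, 0.0) for task in tasks]       # stand-in episode returns
    updated, must_reset = manager.update("WalkerStaticVariableSpeed?team=0", perfs)
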
146
Project/Assets/ML-Agents/Examples/SharedAssets/Prefabs/Targets/DynamicTarget.prefab


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &3840539935788495952
GameObject:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 3839136118347789758}
- component: {fileID: 3836793085241645916}
- component: {fileID: 3868551391811062596}
- component: {fileID: 3826955612593018128}
- component: {fileID: 3858402326794362026}
- component: {fileID: 3631016866778687563}
m_Layer: 0
m_Name: DynamicTarget
m_TagString: target
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!4 &3839136118347789758
Transform:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 1, z: 1}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 0
m_LocalEulerAnglesHint: {x: 0, y: 0, z: 0}
--- !u!33 &3836793085241645916
MeshFilter:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!65 &3868551391811062596
BoxCollider:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_Material: {fileID: 0}
m_IsTrigger: 0
m_Enabled: 1
serializedVersion: 2
m_Size: {x: 1, y: 1, z: 1}
m_Center: {x: 0, y: 0, z: 0}
--- !u!23 &3826955612593018128
MeshRenderer:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_DynamicOccludee: 1
m_MotionVectors: 1
m_LightProbeUsage: 1
m_ReflectionProbeUsage: 1
m_RenderingLayerMask: 1
m_RendererPriority: 0
m_Materials:
- {fileID: 2100000, guid: c67450f290f3e4897bc40276a619e78d, type: 2}
m_StaticBatchInfo:
firstSubMesh: 0
subMeshCount: 0
m_StaticBatchRoot: {fileID: 0}
m_ProbeAnchor: {fileID: 0}
m_LightProbeVolumeOverride: {fileID: 0}
m_ScaleInLightmap: 1
m_PreserveUVs: 1
m_IgnoreNormalsForChartDetection: 0
m_ImportantGI: 0
m_StitchLightmapSeams: 0
m_SelectedEditorRenderState: 3
m_MinimumChartSize: 4
m_AutoUVMaxDistance: 0.5
m_AutoUVMaxAngle: 89
m_LightmapParameters: {fileID: 0}
m_SortingLayerID: 0
m_SortingLayer: 0
m_SortingOrder: 0
--- !u!54 &3858402326794362026
Rigidbody:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
serializedVersion: 2
m_Mass: 1
m_Drag: 0
m_AngularDrag: 0.05
m_UseGravity: 1
m_IsKinematic: 0
m_Interpolate: 0
m_Constraints: 0
m_CollisionDetection: 3
--- !u!114 &3631016866778687563
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 3840539935788495952}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 3c8f113a8b8d94967b1b1782c549be81, type: 3}
m_Name:
m_EditorClassIdentifier:
tagToDetect: agent
spawnRadius: 40
respawnIfTouched: 1
respawnIfFallsOffPlatform: 1
fallDistance: 5
onTriggerEnterEvent:
m_PersistentCalls:
m_Calls: []
onTriggerStayEvent:
m_PersistentCalls:
m_Calls: []
onTriggerExitEvent:
m_PersistentCalls:
m_Calls: []
onCollisionEnterEvent:
m_PersistentCalls:
m_Calls: []
onCollisionStayEvent:
m_PersistentCalls:
m_Calls: []
onCollisionExitEvent:
m_PersistentCalls:
m_Calls: []

7
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicSingleSpeed.prefab.meta


fileFormatVersion: 2
guid: 8daf438e1a41f4d06850cacc91aa175f
PrefabImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

157
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicSingleSpeed.prefab


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1001 &2161650710944920434
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 1988560127670737702}
m_Modifications:
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalPosition.y
value: 3
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalRotation.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalRotation.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalRotation.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_RootOrder
value: 2
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162976, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 8228205183255162979, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_Model
value:
objectReference: {fileID: 11400000, guid: e3bbf8676df3c415dac2a116b343b9f1,
type: 3}
- target: {fileID: 8228205183255163001, guid: 095e843732bad4a02b273c01eaff4b26,
type: 3}
propertyPath: m_Name
value: WalkerRagdollSingleSpeed
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: 095e843732bad4a02b273c01eaff4b26, type: 3}
--- !u!1001 &4922996001064177453
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_RootOrder
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6907740118844148851, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_Name
value: PlatformDynamicTarget
objectReference: {fileID: 0}
- target: {fileID: 6907845698621467345, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_IsActive
value: 0
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: f0d7741d9e06247f6843b921a206b978, type: 3}
--- !u!4 &1988560127670737702 stripped
Transform:
m_CorrespondingSourceObject: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
m_PrefabInstance: {fileID: 4922996001064177453}
m_PrefabAsset: {fileID: 0}

7
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicVariableSpeed.prefab.meta


fileFormatVersion: 2
guid: 84359146bf7af47e58c229d877e801d7
PrefabImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

298
Project/Assets/ML-Agents/Examples/Walker/Prefabs/Platforms/PlatformWalkerDynamicVariableSpeed.prefab


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1001 &213013859575191415
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications:
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.x
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.y
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.z
value: -0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_RootOrder
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 6907740118844148851, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
propertyPath: m_Name
value: PlatformDynamicTarget
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: f0d7741d9e06247f6843b921a206b978, type: 3}
--- !u!4 &6718791046026642300 stripped
Transform:
m_CorrespondingSourceObject: {fileID: 6902107422946006027, guid: f0d7741d9e06247f6843b921a206b978,
type: 3}
m_PrefabInstance: {fileID: 213013859575191415}
m_PrefabAsset: {fileID: 0}
--- !u!1001 &5259696646250713865
PrefabInstance:
m_ObjectHideFlags: 0
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 6718791046026642300}
m_Modifications:
- target: {fileID: 1077752704035527913, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_Model
value:
objectReference: {fileID: 11400000, guid: 0a55a602a15e94f2baac4fadf9db5809,
type: 3}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalPosition.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalPosition.y
value: 3
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalPosition.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalRotation.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalRotation.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalRotation.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalRotation.w
value: 1
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_RootOrder
value: 2
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalEulerAnglesHint.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalEulerAnglesHint.y
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527914, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_LocalEulerAnglesHint.z
value: 0
objectReference: {fileID: 0}
- target: {fileID: 1077752704035527923, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_Name
value: WalkerRagdollVariableSpeed Variant
objectReference: {fileID: 0}
- target: {fileID: 7230369666416833497, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: rotTest
value:
objectReference: {fileID: 5047057869477872341}
- target: {fileID: 7818481575132336858, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0.70000017
objectReference: {fileID: 0}
- target: {fileID: 7818481575132336858, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481575774466713, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0.7000001
objectReference: {fileID: 0}
- target: {fileID: 7818481575774466713, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481575902529964, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.3829999
objectReference: {fileID: 0}
- target: {fileID: 7818481575932963433, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 7818481575932963433, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.60000014
objectReference: {fileID: 0}
- target: {fileID: 7818481575961221082, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.30500042
objectReference: {fileID: 0}
- target: {fileID: 7818481576440584935, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: -0.70000017
objectReference: {fileID: 0}
- target: {fileID: 7818481576440584935, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481576458883963, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: -0.7000001
objectReference: {fileID: 0}
- target: {fileID: 7818481576458883963, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481576468061547, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 7818481576468061547, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.5
objectReference: {fileID: 0}
- target: {fileID: 7818481576500842154, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: -0.5
objectReference: {fileID: 0}
- target: {fileID: 7818481576500842154, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481576528932668, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: -0.39999396
objectReference: {fileID: 0}
- target: {fileID: 7818481576528932668, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.29999995
objectReference: {fileID: 0}
- target: {fileID: 7818481576563420651, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0.5
objectReference: {fileID: 0}
- target: {fileID: 7818481576563420651, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.00000011920929
objectReference: {fileID: 0}
- target: {fileID: 7818481576732930262, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: 0.5119997
objectReference: {fileID: 0}
- target: {fileID: 7818481576882516786, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 7818481576882516786, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.60000014
objectReference: {fileID: 0}
- target: {fileID: 7818481577110242852, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0.39999396
objectReference: {fileID: 0}
- target: {fileID: 7818481577110242852, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.29999995
objectReference: {fileID: 0}
- target: {fileID: 7818481577111017235, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.x
value: 0
objectReference: {fileID: 0}
- target: {fileID: 7818481577111017235, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
propertyPath: m_ConnectedAnchor.y
value: -0.5
objectReference: {fileID: 0}
m_RemovedComponents: []
m_SourcePrefab: {fileID: 100100000, guid: 90f3321ab5beb43e28bc651909d99e39, type: 3}
--- !u!4 &5047057869477872341 stripped
Transform:
m_CorrespondingSourceObject: {fileID: 1077752704392483292, guid: 90f3321ab5beb43e28bc651909d99e39,
type: 3}
m_PrefabInstance: {fileID: 5259696646250713865}
m_PrefabAsset: {fileID: 0}

Some files were not shown because too many files changed in this diff.
