
Split `mlagents` into two packages (#1812)

* Reogranize project

* Fix all tests

* Address comments

* Delete init file

* Update requirements

* Tick version

* Add timeout wait parameter (mlagents_envs) (#1699)

* Add timeout wait param

* Remove unnecessary function

* Add new meta files for communicator objects

* Fix all tests

* update circleci

* Reorganize mlagents_envs tests

* WIP: test removing circleci cache

* Move gym tests

* Namespaced packages

* Update installation instructions for separate packages

* Remove unused package from setup script

* Add Readme for ml-agents-envs

* Clarify docs and re-comment compiler in make.bat

* Add more doc to installation

* Add back fix for Hololens

* Recompile Protobufs

* Change mlagents_envs to mlagents.envs in trainer_controller

* Remove extraneous files, fix win bat script

* Support Python 3.7 for envs package
/develop-generalizationTraining-TrainerController
Ervin T, 5 years ago
Current commit: b30f4c90
65 files changed: 244 insertions and 79 deletions
  1. .circleci/config.yml (3 changes)
  2. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/AgentActionProto.cs (6 changes)
  3. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/AgentInfoProto.cs (6 changes)
  4. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/BrainParametersProto.cs (2 changes)
  5. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/EnvironmentParametersProto.cs (6 changes)
  6. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityInput.cs (12 changes)
  7. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityMessage.cs (18 changes)
  8. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityOutput.cs (12 changes)
  9. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityRlInitializationOutput.cs (6 changes)
  10. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityRlInput.cs (8 changes)
  11. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityToExternalGrpc.cs (7 changes)
  12. docs/Installation-Windows.md (56 changes)
  13. docs/Installation.md (37 changes)
  14. gym-unity/setup.py (2 changes)
  15. ml-agents/mlagents/trainers/models.py (2 changes)
  16. ml-agents/mlagents/trainers/ppo/models.py (2 changes)
  17. ml-agents/mlagents/trainers/trainer.py (2 changes)
  18. ml-agents/setup.py (4 changes)
  19. protobuf-definitions/make.bat (2 changes)
  20. protobuf-definitions/make_for_win.bat (2 changes)
  21. gym-unity/gym_unity/tests/test_gym.py (4 changes)
  22. ml-agents-envs/mlagents/envs/communicator_objects/agent_action_proto_pb2.py (1 change)
  23. ml-agents-envs/mlagents/envs/communicator_objects/agent_info_proto_pb2.py (1 change)
  24. ml-agents-envs/mlagents/envs/communicator_objects/brain_parameters_proto_pb2.py (1 change)
  25. ml-agents-envs/mlagents/envs/communicator_objects/command_proto_pb2.py (1 change)
  26. ml-agents-envs/mlagents/envs/communicator_objects/custom_action_pb2.py (1 change)
  27. ml-agents-envs/mlagents/envs/communicator_objects/custom_observation_pb2.py (1 change)
  28. ml-agents-envs/mlagents/envs/communicator_objects/custom_reset_parameters_pb2.py (1 change)
  29. ml-agents-envs/mlagents/envs/communicator_objects/demonstration_meta_proto_pb2.py (1 change)
  30. ml-agents-envs/mlagents/envs/communicator_objects/engine_configuration_proto_pb2.py (1 change)
  31. ml-agents-envs/mlagents/envs/communicator_objects/environment_parameters_proto_pb2.py (1 change)
  32. ml-agents-envs/mlagents/envs/communicator_objects/header_pb2.py (1 change)
  33. ml-agents-envs/mlagents/envs/communicator_objects/resolution_proto_pb2.py (1 change)
  34. ml-agents-envs/mlagents/envs/communicator_objects/space_type_proto_pb2.py (1 change)
  35. ml-agents-envs/mlagents/envs/communicator_objects/unity_input_pb2.py (1 change)
  36. ml-agents-envs/mlagents/envs/communicator_objects/unity_message_pb2.py (1 change)
  37. ml-agents-envs/mlagents/envs/communicator_objects/unity_output_pb2.py (1 change)
  38. ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_input_pb2.py (1 change)
  39. ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_output_pb2.py (1 change)
  40. ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_input_pb2.py (1 change)
  41. ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_output_pb2.py (1 change)
  42. ml-agents-envs/mlagents/envs/tests/test_envs.py (3 changes)
  43. ml-agents/mlagents/trainers/tests/test_bc.py (2 changes)
  44. ml-agents/mlagents/trainers/tests/test_ppo.py (2 changes)
  45. ml-agents-envs/mlagents/envs/mock_communicator.py (4 changes)
  46. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomAction.cs.meta (11 changes)
  47. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomObservation.cs.meta (11 changes)
  48. UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomResetParameters.cs.meta (11 changes)
  49. ml-agents-envs/README.md (28 changes)
  50. ml-agents-envs/setup.py (32 changes)
  51. ml-agents-envs/mlagents/envs/utilities.py (0 changes)
  52. ml-agents/mlagents/trainers/tests/__init__.py (0 changes)
  53. /gym-unity/gym_unity/tests (0 changes)
  54. /ml-agents-envs/__init__.py (0 changes)
  55. /ml-agents-envs/mlagents/envs (0 changes)
  56. /ml-agents-envs/mlagents/envs/tests/__init__.py (0 changes)
  57. /ml-agents/mlagents/trainers/tests/__init__.py (0 changes)
  58. /ml-agents-envs/mlagents/envs/tests/test_envs.py (0 changes)
  59. /ml-agents-envs/mlagents/envs/tests/test_rpc_communicator.py (0 changes)
  60. /ml-agents/mlagents/trainers/tests (0 changes)
  61. /ml-agents-envs/mlagents/envs/mock_communicator.py (0 changes)

.circleci/config.yml (3 changes)

command: |
python3 -m venv venv
. venv/bin/activate
- cd ml-agents && pip install -e .
+ cd ml-agents-envs && pip install -e .
+ cd ../ml-agents && pip install -e .
pip install pytest-cov==2.6.1 codacy-coverage==1.3.11
cd ../gym-unity && pip install -e .

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/AgentActionProto.cs (6 changes)

}
if (other.customAction_ != null) {
if (customAction_ == null) {
- customAction_ = new global::MLAgents.CommunicatorObjects.CustomAction();
+ CustomAction = new global::MLAgents.CommunicatorObjects.CustomAction();
}
CustomAction.MergeFrom(other.CustomAction);
}

}
case 42: {
if (customAction_ == null) {
- customAction_ = new global::MLAgents.CommunicatorObjects.CustomAction();
+ CustomAction = new global::MLAgents.CommunicatorObjects.CustomAction();
- input.ReadMessage(customAction_);
+ input.ReadMessage(CustomAction);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/AgentInfoProto.cs (6 changes)

actionMask_.Add(other.actionMask_);
if (other.customObservation_ != null) {
if (customObservation_ == null) {
- customObservation_ = new global::MLAgents.CommunicatorObjects.CustomObservation();
+ CustomObservation = new global::MLAgents.CommunicatorObjects.CustomObservation();
}
CustomObservation.MergeFrom(other.CustomObservation);
}

}
case 98: {
if (customObservation_ == null) {
- customObservation_ = new global::MLAgents.CommunicatorObjects.CustomObservation();
+ CustomObservation = new global::MLAgents.CommunicatorObjects.CustomObservation();
- input.ReadMessage(customObservation_);
+ input.ReadMessage(CustomObservation);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/BrainParametersProto.cs (2 changes)

break;
}
case 48: {
- vectorActionSpaceType_ = (global::MLAgents.CommunicatorObjects.SpaceTypeProto) input.ReadEnum();
+ VectorActionSpaceType = (global::MLAgents.CommunicatorObjects.SpaceTypeProto) input.ReadEnum();
break;
}
case 58: {

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/EnvironmentParametersProto.cs (6 changes)

floatParameters_.Add(other.floatParameters_);
if (other.customResetParameters_ != null) {
if (customResetParameters_ == null) {
- customResetParameters_ = new global::MLAgents.CommunicatorObjects.CustomResetParameters();
+ CustomResetParameters = new global::MLAgents.CommunicatorObjects.CustomResetParameters();
}
CustomResetParameters.MergeFrom(other.CustomResetParameters);
}

}
case 18: {
if (customResetParameters_ == null) {
- customResetParameters_ = new global::MLAgents.CommunicatorObjects.CustomResetParameters();
+ CustomResetParameters = new global::MLAgents.CommunicatorObjects.CustomResetParameters();
- input.ReadMessage(customResetParameters_);
+ input.ReadMessage(CustomResetParameters);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityInput.cs (12 changes)

}
if (other.rlInput_ != null) {
if (rlInput_ == null) {
- rlInput_ = new global::MLAgents.CommunicatorObjects.UnityRLInput();
+ RlInput = new global::MLAgents.CommunicatorObjects.UnityRLInput();
- rlInitializationInput_ = new global::MLAgents.CommunicatorObjects.UnityRLInitializationInput();
+ RlInitializationInput = new global::MLAgents.CommunicatorObjects.UnityRLInitializationInput();
}
RlInitializationInput.MergeFrom(other.RlInitializationInput);
}

break;
case 10: {
if (rlInput_ == null) {
- rlInput_ = new global::MLAgents.CommunicatorObjects.UnityRLInput();
+ RlInput = new global::MLAgents.CommunicatorObjects.UnityRLInput();
- input.ReadMessage(rlInput_);
+ input.ReadMessage(RlInput);
- rlInitializationInput_ = new global::MLAgents.CommunicatorObjects.UnityRLInitializationInput();
+ RlInitializationInput = new global::MLAgents.CommunicatorObjects.UnityRLInitializationInput();
- input.ReadMessage(rlInitializationInput_);
+ input.ReadMessage(RlInitializationInput);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityMessage.cs (18 changes)

}
if (other.header_ != null) {
if (header_ == null) {
- header_ = new global::MLAgents.CommunicatorObjects.Header();
+ Header = new global::MLAgents.CommunicatorObjects.Header();
- unityOutput_ = new global::MLAgents.CommunicatorObjects.UnityOutput();
+ UnityOutput = new global::MLAgents.CommunicatorObjects.UnityOutput();
- unityInput_ = new global::MLAgents.CommunicatorObjects.UnityInput();
+ UnityInput = new global::MLAgents.CommunicatorObjects.UnityInput();
}
UnityInput.MergeFrom(other.UnityInput);
}

break;
case 10: {
if (header_ == null) {
- header_ = new global::MLAgents.CommunicatorObjects.Header();
+ Header = new global::MLAgents.CommunicatorObjects.Header();
- input.ReadMessage(header_);
+ input.ReadMessage(Header);
- unityOutput_ = new global::MLAgents.CommunicatorObjects.UnityOutput();
+ UnityOutput = new global::MLAgents.CommunicatorObjects.UnityOutput();
- input.ReadMessage(unityOutput_);
+ input.ReadMessage(UnityOutput);
- unityInput_ = new global::MLAgents.CommunicatorObjects.UnityInput();
+ UnityInput = new global::MLAgents.CommunicatorObjects.UnityInput();
- input.ReadMessage(unityInput_);
+ input.ReadMessage(UnityInput);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityOutput.cs (12 changes)

}
if (other.rlOutput_ != null) {
if (rlOutput_ == null) {
- rlOutput_ = new global::MLAgents.CommunicatorObjects.UnityRLOutput();
+ RlOutput = new global::MLAgents.CommunicatorObjects.UnityRLOutput();
- rlInitializationOutput_ = new global::MLAgents.CommunicatorObjects.UnityRLInitializationOutput();
+ RlInitializationOutput = new global::MLAgents.CommunicatorObjects.UnityRLInitializationOutput();
}
RlInitializationOutput.MergeFrom(other.RlInitializationOutput);
}

break;
case 10: {
if (rlOutput_ == null) {
- rlOutput_ = new global::MLAgents.CommunicatorObjects.UnityRLOutput();
+ RlOutput = new global::MLAgents.CommunicatorObjects.UnityRLOutput();
- input.ReadMessage(rlOutput_);
+ input.ReadMessage(RlOutput);
- rlInitializationOutput_ = new global::MLAgents.CommunicatorObjects.UnityRLInitializationOutput();
+ RlInitializationOutput = new global::MLAgents.CommunicatorObjects.UnityRLInitializationOutput();
- input.ReadMessage(rlInitializationOutput_);
+ input.ReadMessage(RlInitializationOutput);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityRlInitializationOutput.cs (6 changes)

brainParameters_.Add(other.brainParameters_);
if (other.environmentParameters_ != null) {
if (environmentParameters_ == null) {
- environmentParameters_ = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
+ EnvironmentParameters = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
}
EnvironmentParameters.MergeFrom(other.EnvironmentParameters);
}

}
case 50: {
if (environmentParameters_ == null) {
- environmentParameters_ = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
+ EnvironmentParameters = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
- input.ReadMessage(environmentParameters_);
+ input.ReadMessage(EnvironmentParameters);
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityRlInput.cs (8 changes)

agentActions_.Add(other.agentActions_);
if (other.environmentParameters_ != null) {
if (environmentParameters_ == null) {
- environmentParameters_ = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
+ EnvironmentParameters = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
}
EnvironmentParameters.MergeFrom(other.EnvironmentParameters);
}

}
case 18: {
if (environmentParameters_ == null) {
- environmentParameters_ = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
+ EnvironmentParameters = new global::MLAgents.CommunicatorObjects.EnvironmentParametersProto();
- input.ReadMessage(environmentParameters_);
+ input.ReadMessage(EnvironmentParameters);
break;
}
case 24: {

case 32: {
- command_ = (global::MLAgents.CommunicatorObjects.CommandProto) input.ReadEnum();
+ Command = (global::MLAgents.CommunicatorObjects.CommandProto) input.ReadEnum();
break;
}
}

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/UnityToExternalGrpc.cs (7 changes)

+ # if UNITY_EDITOR || UNITY_STANDALONE_WIN || UNITY_STANDALONE_OSX || UNITY_STANDALONE_LINUX
- #pragma warning disable 1591
+ #pragma warning disable 0414, 1591
namespace MLAgents.CommunicatorObjects {
public static partial class UnityToExternal

}
#endregion
#endif
+ #endif

docs/Installation-Windows.md (56 changes)

It also contains many [example environments](Learning-Environment-Examples.md)
to help you get started.
- The `ml-agents` subdirectory contains Python packages which provide
- trainers and a Python API to interface with Unity.
+ The `ml-agents` subdirectory contains a Python package which provides deep reinforcement
+ learning trainers to use with Unity environments.
- The `gym-unity` subdirectory contains a package to interface with OpenAI Gym.
- In our example, the files are located in `C:\Downloads`. After you have either
- cloned or downloaded the files, from the Anaconda Prompt, change to the ml-agents
- subdirectory inside the ml-agents directory:
+ The `ml-agents-envs` subdirectory contains a Python API to interface with Unity, which
+ the `ml-agents` package depends on.
- ```console
- cd C:\Downloads\ml-agents\ml-agents
- ```
+ The `gym-unity` subdirectory contains a package to interface with OpenAI Gym.
- Keep in mind where the files were downloaded, as you will need the
- trainer config files in this directory when running `mlagents-learn`.
- Prompt within `ml-agents` subdirectory:
+ Prompt:
- ```sh
- pip install -e .
+ ```console
+ pip install mlagents
```
This will complete the installation of all the required Python packages to run

- ```sh
- pip install -e . --no-cache-dir
+ ```console
+ pip install mlagents --no-cache-dir
+ ### Installing for Development
+ If you intend to make modifications to `ml-agents` or `ml-agents-envs`, you should install
+ the packages from the cloned repo rather than from PyPi. To do this, you will need to install
+ `ml-agents` and `ml-agents-envs` separately.
+ In our example, the files are located in `C:\Downloads`. After you have either
+ cloned or downloaded the files, from the Anaconda Prompt, change to the ml-agents
+ subdirectory inside the ml-agents directory:
+ ```console
+ cd C:\Downloads\ml-agents
+ ```
+ From the repo's main directory, now run:
+ ```console
+ cd ml-agents-envs
+ pip install -e .
+ cd ..
+ cd ml-agents
+ pip install -e .
+ ```
+ Running pip with the `-e` flag will let you make changes to the Python files directly and have those
+ reflected when you run `mlagents-learn`. It is important to install these packages in this order as the
+ `mlagents` package depends on `mlagents_envs`, and installing it in the other
+ order will download `mlagents_envs` from PyPi.
## (Optional) Step 4: GPU Training using The ML-Agents Toolkit

docs/Installation.md (37 changes)

It also contains many [example environments](Learning-Environment-Examples.md)
to help you get started.
- The `ml-agents` subdirectory contains Python packages which provide
- trainers and a Python API to interface with Unity.
+ The `ml-agents` subdirectory contains a Python package which provides deep reinforcement
+ learning trainers to use with Unity environments.
+ The `ml-agents-envs` subdirectory contains a Python API to interface with Unity, which
+ the `ml-agents` package depends on.
The `gym-unity` subdirectory contains a package to interface with OpenAI Gym.

[instructions](https://packaging.python.org/guides/installing-using-linux-tools/#installing-pip-setuptools-wheel-with-linux-package-managers)
on installing it.
- To install the dependencies and `mlagents` Python package, enter the
- `ml-agents/` subdirectory and run from the command line:
+ To install the dependencies and `mlagents` Python package, run from the command line:
- pip3 install -e .
+ pip3 install mlagents
+ Note that this will install `ml-agents` from PyPi, _not_ from the cloned repo.
- `mlagents-learn --help`
+ `mlagents-learn --help`, after which you will see the Unity logo and the command line
+ parameters you can use with `mlagents-learn`.
**Notes:**

[link](https://www.tensorflow.org/install/pip)
on how to install TensorFlow in an Anaconda environment.
+ ### Installing for Development
+ If you intend to make modifications to `ml-agents` or `ml-agents-envs`, you should install
+ the packages from the cloned repo rather than from PyPi. To do this, you will need to install
+ `ml-agents` and `ml-agents-envs` separately. Do this by running (starting from the repo's main
+ directory):
+ ```sh
+ cd ml-agents-envs
+ pip3 install -e ./
+ cd ..
+ cd ml-agents
+ pip3 install -e ./
+ ```
+ Running pip with the `-e` flag will let you make changes to the Python files directly and have those
+ reflected when you run `mlagents-learn`. It is important to install these packages in this order as the
+ `mlagents` package depends on `mlagents_envs`, and installing it in the other
+ order will download `mlagents_envs` from PyPi.
## Docker-based Installation

gym-unity/setup.py (2 changes)

author_email='ML-Agents@unity3d.com',
url='https://github.com/Unity-Technologies/ml-agents',
packages=find_packages(),
- install_requires=['gym', 'mlagents']
+ install_requires=['gym', 'mlagents_envs']
)

ml-agents/mlagents/trainers/models.py (2 changes)

import tensorflow as tf
import tensorflow.contrib.layers as c_layers
- logger = logging.getLogger("mlagents.envs")
+ logger = logging.getLogger("mlagents.trainers")
class LearningModel(object):
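The logger rename in this hunk is just a string change, but it matters because Python loggers form a dot-separated hierarchy: handlers and levels set on the shared `mlagents` parent apply to both `mlagents.trainers` and `mlagents.envs`. A minimal sketch of that behavior, using only the standard library:

```python
import logging

# Dotted logger names form a tree. Creating the children first and the
# parent afterwards still wires up the hierarchy correctly.
trainers_log = logging.getLogger("mlagents.trainers")
envs_log = logging.getLogger("mlagents.envs")
parent = logging.getLogger("mlagents")

# Both package loggers hang off the same "mlagents" parent, so the rename
# keeps trainer log records under the trainers subtree instead of envs.
assert trainers_log.parent is parent
assert envs_log.parent is parent
print(trainers_log.name)  # mlagents.trainers
```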

ml-agents/mlagents/trainers/ppo/models.py (2 changes)

import tensorflow as tf
from mlagents.trainers.models import LearningModel
- logger = logging.getLogger("mlagents.envs")
+ logger = logging.getLogger("mlagents.trainers")
class PPOModel(LearningModel):

ml-agents/mlagents/trainers/trainer.py (2 changes)

class Trainer(object):
- """This class is the base class for the mlagents.trainers"""
+ """This class is the base class for the mlagents.envs.trainers"""
def __init__(self, brain, trainer_parameters, training, run_id):
"""

ml-agents/setup.py (4 changes)

'Programming Language :: Python :: 3.6'
],
- packages=find_packages(exclude=['tests', 'tests.*', '*.tests', '*.tests.*']), # Required
+ packages=['mlagents.trainers'], # Required
zip_safe=False,
+ 'mlagents_envs==0.7.0',
'tensorflow>=1.7,<1.8',
'Pillow>=4.2.1',
'matplotlib',

protobuf-definitions/make.bat (2 changes)

SRC_DIR=proto/mlagents/envs/communicator_objects
DST_DIR_C=../UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects
- DST_DIR_P=../ml-agents
+ DST_DIR_P=../ml-agents-envs
PROTO_PATH=proto
PYTHON_PACKAGE=mlagents/envs/communicator_objects

protobuf-definitions/make_for_win.bat (2 changes)

set SRC_DIR=proto\mlagents\envs\communicator_objects
set DST_DIR_C=..\UnitySDK\Assets\ML-Agents\Scripts\CommunicatorObjects
- set DST_DIR_P=..\ml-agents
+ set DST_DIR_P=..\ml-agents-envs
set PROTO_PATH=proto
set PYTHON_PACKAGE=mlagents\envs\communicator_objects

gym-unity/gym_unity/tests/test_gym.py (4 changes)

from gym import spaces
from gym_unity.envs import UnityEnv, UnityGymException
# Tests
@mock.patch('gym_unity.envs.unity_env.UnityEnvironment')
def test_gym_wrapper(mock_env):

+ # Avoid using mutable object as default param
+ if vector_action_space_size is None:
+     vector_action_space_size = [2]
- mock_brain = mock.Mock();
+ mock_brain = mock.Mock()
mock_brain.return_value.number_visual_observations = number_visual_observations
mock_brain.return_value.num_stacked_vector_observations = num_stacked_vector_observations
mock_brain.return_value.vector_action_space_type = vector_action_space_type
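The "avoid using mutable object as default param" guard added in this test exists because Python evaluates default argument values once, at function definition time, so a mutable default is shared across calls. A standalone sketch of the pitfall and the `None`-sentinel fix (function names here are illustrative, not from the repo):

```python
def sizes_bad(extra, acc=[]):
    # The [] default is created once and reused by every call,
    # so state leaks from one invocation to the next.
    acc.append(extra)
    return acc

def sizes_good(extra, acc=None):
    # The idiomatic fix: use a None sentinel and build a fresh list per call.
    if acc is None:
        acc = []
    acc.append(extra)
    return acc

first = sizes_bad(2)
second = sizes_bad(3)
assert first is second and first == [2, 3]  # surprising shared state

assert sizes_good(2) == [2]
assert sizes_good(3) == [3]  # fresh list each call
```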

ml-agents-envs/mlagents/envs/communicator_objects/agent_action_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/agent_action_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/agent_info_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/agent_info_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/brain_parameters_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/brain_parameters_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/command_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/command_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/custom_action_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/custom_action.proto

ml-agents-envs/mlagents/envs/communicator_objects/custom_observation_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/custom_observation.proto

ml-agents-envs/mlagents/envs/communicator_objects/custom_reset_parameters_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/custom_reset_parameters.proto

ml-agents-envs/mlagents/envs/communicator_objects/demonstration_meta_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/demonstration_meta_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/engine_configuration_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/engine_configuration_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/environment_parameters_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/environment_parameters_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/header_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/header.proto

ml-agents-envs/mlagents/envs/communicator_objects/resolution_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/resolution_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/space_type_proto_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/space_type_proto.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_input_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_input.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_message_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_message.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_output_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_output.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_input_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_rl_initialization_input.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_output_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_rl_initialization_output.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_input_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_rl_input.proto

ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_output_pb2.py (1 change)

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: mlagents/envs/communicator_objects/unity_rl_output.proto

ml-agents-envs/mlagents/envs/tests/test_envs.py (3 changes)

import unittest.mock as mock
import pytest
import struct
- from tests.mock_communicator import MockCommunicator
+ from mlagents.envs.mock_communicator import MockCommunicator
@mock.patch('mlagents.envs.UnityEnvironment.get_communicator')
def test_handles_bad_filename(get_communicator):

ml-agents/mlagents/trainers/tests/test_bc.py (2 changes)

from mlagents.trainers.bc.models import BehavioralCloningModel
from mlagents.trainers.bc.policy import BCPolicy
from mlagents.envs import UnityEnvironment
- from tests.mock_communicator import MockCommunicator
+ from mlagents.envs.mock_communicator import MockCommunicator
@pytest.fixture

ml-agents/mlagents/trainers/tests/test_ppo.py (2 changes)

from mlagents.trainers.ppo.trainer import discount_rewards
from mlagents.trainers.ppo.policy import PPOPolicy
from mlagents.envs import UnityEnvironment
- from tests.mock_communicator import MockCommunicator
+ from mlagents.envs.mock_communicator import MockCommunicator
@pytest.fixture

ml-agents-envs/mlagents/envs/mock_communicator.py (4 changes)

- from mlagents.envs.communicator import Communicator
- from mlagents.envs.communicator_objects import UnityMessage, UnityOutput, UnityInput, \
+ from .communicator import Communicator
+ from .communicator_objects import UnityOutput, UnityInput, \
ResolutionProto, BrainParametersProto, UnityRLInitializationOutput, \
AgentInfoProto, UnityRLOutput

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomAction.cs.meta (new file, 11 additions)

fileFormatVersion: 2
guid: a8d11b50ed9ce45f7827f5117b65db06
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomObservation.cs.meta (new file, 11 additions)

fileFormatVersion: 2
guid: 896847c1364a7475d9094058ff93b7f0
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

UnitySDK/Assets/ML-Agents/Scripts/CommunicatorObjects/CustomResetParameters.cs.meta (new file, 11 additions)

fileFormatVersion: 2
guid: a071e48ae56b2424e8b59aad01646f59
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:

ml-agents-envs/README.md (new file, 28 additions)


# Unity ML-Agents Python Interface
The `mlagents_envs` Python package is part of the
[ML-Agents Toolkit](https://github.com/Unity-Technologies/ml-agents).
`mlagents_envs` provides a Python API that allows direct interaction with the Unity
game engine. It is used by the trainer implementation in `mlagents` as well as
the `gym-unity` package to perform reinforcement learning within Unity. `mlagents_envs` can be
used independently of `mlagents` for Python communication.
The `mlagents_envs` Python package contains one sub package:
* `mlagents.envs`: A low level API which allows you to interact directly with a
Unity Environment. See
[here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md)
for more information on using this package.
## Installation
Install the `mlagents_envs` package with:
```sh
pip install mlagents_envs
```
## Usage & More Information
For more detailed documentation, check out the
[ML-Agents Toolkit documentation.](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Readme.md)

ml-agents-envs/setup.py (new file, 32 additions)


from setuptools import setup
from os import path
here = path.abspath(path.dirname(__file__))
setup(
name='mlagents_envs',
version='0.7.0',
description='Unity Machine Learning Agents Interface',
url='https://github.com/Unity-Technologies/ml-agents',
author='Unity Technologies',
author_email='ML-Agents@unity3d.com',
classifiers=[
'Intended Audience :: Developers',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 3.6'
],
packages=['mlagents.envs'], # Required
zip_safe=False,
install_requires=[
'Pillow>=4.2.1,<=5.4.1',
'numpy>=1.13.3,<=1.16.1',
'pytest>=3.2.2,<4.0.0',
'protobuf>=3.6,<3.7',
'grpcio>=1.11.0,<1.12.0'],
python_requires=">=3.5,<3.8",
)

ml-agents-envs/mlagents/envs/utilities.py (0 changes)

ml-agents/mlagents/trainers/tests/__init__.py (0 changes)

/gym-unity/tests → /gym-unity/gym_unity/tests

/ml-agents/mlagents/__init__.py → /ml-agents-envs/__init__.py

/ml-agents/mlagents/envs → /ml-agents-envs/mlagents/envs

/ml-agents/tests/__init__.py → /ml-agents-envs/mlagents/envs/tests/__init__.py

/ml-agents/tests/envs/__init__.py → /ml-agents/mlagents/trainers/tests/__init__.py

/ml-agents/tests/envs/test_envs.py → /ml-agents-envs/mlagents/envs/tests/test_envs.py

/ml-agents/tests/envs/test_rpc_communicator.py → /ml-agents-envs/mlagents/envs/tests/test_rpc_communicator.py

/ml-agents/tests/trainers → /ml-agents/mlagents/trainers/tests

/ml-agents/tests/mock_communicator.py → /ml-agents-envs/mlagents/envs/mock_communicator.py
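The file moves above only work because `mlagents` is a namespace package: two separate distributions (`ml-agents` shipping `mlagents.trainers`, `ml-agents-envs` shipping `mlagents.envs`) contribute subpackages to one logical top-level package. A self-contained sketch of the pkgutil-style mechanism; the directory names and `NAME` attribute are illustrative, not taken from the repo:

```python
import importlib
import os
import sys
import tempfile

# Build two fake distributions, each shipping part of the "mlagents" package,
# mirroring how ml-agents (trainers) and ml-agents-envs (envs) are split.
root = tempfile.mkdtemp()
for dist, subpkg in [("ml_agents_envs", "envs"), ("ml_agents", "trainers")]:
    pkg_dir = os.path.join(root, dist, "mlagents", subpkg)
    os.makedirs(pkg_dir)
    # Each copy of mlagents/__init__.py declares a pkgutil-style namespace,
    # so every install location on sys.path contributes to __path__.
    with open(os.path.join(root, dist, "mlagents", "__init__.py"), "w") as f:
        f.write("__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write(f"NAME = {subpkg!r}\n")
    sys.path.insert(0, os.path.join(root, dist))

# Both subpackages import under the single "mlagents" name even though they
# live in two different directories.
envs = importlib.import_module("mlagents.envs")
trainers = importlib.import_module("mlagents.trainers")
assert envs.NAME == "envs" and trainers.NAME == "trainers"
```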
