This tutorial walks through the process of creating a Unity Environment from
scratch. We recommend first reading the [Getting Started](Getting-Started.md)
guide to understand the concepts presented here in an already-built
environment.
In this example, we will create an agent capable of controlling a ball on a
platform. We will then train the agent to roll the ball toward the cube while
avoiding falling off the platform.
Using the ML-Agents Toolkit in a Unity project involves the following basic
steps:

1. Create an environment for your agents to live in. An environment can range
   from a simple physical simulation containing a few objects to an entire game
   or ecosystem.
2. Implement your Agent subclasses. An Agent subclass defines the code an Agent
   uses to observe its environment, to carry out assigned actions, and to
   calculate the rewards used for reinforcement training. You can also implement
   optional methods to reset the Agent when it has finished or failed its task
   (see the sketch after this list).
3. Add your Agent subclasses to appropriate GameObjects, typically the object
   in the scene that represents the Agent in the simulation.
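
As a rough sketch of step 2, a minimal Agent subclass might look like the
following (the class name, observation, and reward here are hypothetical, and
the method names assume the `MLAgents` API described elsewhere in this
document):

```csharp
using MLAgents;
using MLAgents.Sensors;
using UnityEngine;

// Hypothetical minimal Agent illustrating the three responsibilities above.
public class RollerAgent : Agent
{
    // Optional reset hook: called at the start of each training episode.
    public override void OnEpisodeBegin()
    {
        // Restore the agent and environment to a valid starting state here.
    }

    // Observe the environment by writing values into the provided sensor.
    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.position); // adds 3 floats
    }

    // Carry out the assigned actions and assign rewards.
    public override void OnActionReceived(float[] vectorAction)
    {
        // Hypothetical: a small per-step penalty to encourage fast solutions.
        AddReward(-0.001f);
    }
}
```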
**Note:** If you are unfamiliar with Unity, refer to
[Learning the interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)
in the Unity Manual before reading this tutorial.
The first task to accomplish is simply creating a new Unity project and
importing the ML-Agents assets into it:
1. Launch the Unity Editor and create a new project named "RollerBall".
2. Make sure that the Scripting Runtime Version for the project is set to use
**.NET 4.x Equivalent** (This is an experimental option in Unity 2017,
but is the default as of 2018.3.)
3. In a file system window, navigate to the folder containing your cloned
ML-Agents repository.
4. Open the `manifest.json` file in the `Packages` directory of your project.
Add the following line to your project's package dependencies:
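
The exact entry depends on where you cloned the repository; as a sketch,
assuming the Unity package lives in the `com.unity.ml-agents` folder of your
clone (the angle-bracketed path is a placeholder to fill in):

```json
{
  "dependencies": {
    "com.unity.ml-agents": "file:<path-to-your-ml-agents-clone>/com.unity.ml-agents"
  }
}
```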
The C# editor code and python trainer code are not compatible between releases.
This means that if you upgrade one, you *must* upgrade the other as well. If you
experience new errors or are unable to connect to training after updating,
please double-check that the versions are the same.
The versions can be found in
- `Academy.k_ApiVersion` in Academy.cs
  ([example](https://github.com/Unity-Technologies/ml-agents/blob/b255661084cb8f701c716b040693069a3fb9a257/UnitySDK/Assets/ML-Agents/Scripts/Academy.cs#L95))
- `UnityEnvironment.API_VERSION` in environment.py
  ([example](https://github.com/Unity-Technologies/ml-agents/blob/b255661084cb8f701c716b040693069a3fb9a257/ml-agents-envs/mlagents/envs/environment.py#L45))
- The `--load` and `--train` command-line flags have been deprecated and
  replaced with `--resume` and `--inference` (see the example after this list).
- Running with the same `--run-id` twice will now throw an error.
- The `play_against_current_self_ratio` self-play trainer hyperparameter has
  been renamed to `play_against_latest_model_ratio`.
- Removed the multi-agent gym option from the gym wrapper. For multi-agent
scenarios, use the [Low Level Python API](Python-API.md).
- The low level Python API has changed. You can look at the document
[Low Level Python API documentation](Python-API.md) for more information. If
you use `mlagents-learn` for training, this should be a transparent change.
- The obsolete `Agent` methods `GiveModel`, `Done`, `InitializeAgent`,
`AgentAction` and `AgentReset` have been removed.
- The signature of `Agent.Heuristic()` was changed to take a `float[]` as a
parameter, instead of returning the array. This was done to prevent a common
source of error where users would return arrays of the wrong size.
- `num_updates` and `train_interval` for SAC have been replaced with `steps_per_update`.
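
For example, a training invocation might change as follows (the run ID and
configuration path here are hypothetical):

```sh
# Before: --train was required, and --load resumed a previous run
mlagents-learn config/trainer_config.yaml --run-id=RollerBall --load --train

# After: training is the default; --resume continues a previous run
mlagents-learn config/trainer_config.yaml --run-id=RollerBall --resume
```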
- Replace the `--load` flag with `--resume` when calling `mlagents-learn`, and
don't use the `--train` flag as training will happen by default. To run
without training, use `--inference`.
- To force-overwrite files from a pre-existing run, add the `--force`
command-line flag.
- The Jupyter notebooks have been removed from the repository.
- `Academy.FloatProperties` was removed.
- `Academy.RegisterSideChannel` and `Academy.UnregisterSideChannel` were
  removed.
- Replace `Academy.FloatProperties` with
  `SideChannelUtils.GetSideChannel<FloatPropertiesChannel>()`.
- Replace `Academy.RegisterSideChannel` with
  `SideChannelUtils.RegisterSideChannel()`.
- Replace `Academy.UnregisterSideChannel` with
  `SideChannelUtils.UnregisterSideChannel`.
- If your Agent class overrides `Heuristic()`, change the signature to
  `public override void Heuristic(float[] actionsOut)` and assign values to
  `actionsOut` instead of returning an array (see the sketch after this list).
- Set `steps_per_update` to be approximately equal to the number of agents in
  your environment, multiplied by `num_updates` and divided by
  `train_interval`. For example, 20 agents with `num_updates: 1` and
  `train_interval: 2` would give `steps_per_update: 10`.
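
A minimal sketch of the new `Heuristic()` signature (the class name and the
two-element continuous action space here are hypothetical):

```csharp
using MLAgents;
using UnityEngine;

public class MyAgent : Agent
{
    // The new signature writes into the provided buffer instead of returning
    // a freshly allocated float[], which prevents size mismatches.
    public override void Heuristic(float[] actionsOut)
    {
        // Hypothetical: drive two continuous actions from the keyboard.
        actionsOut[0] = Input.GetAxis("Horizontal");
        actionsOut[1] = Input.GetAxis("Vertical");
    }
}
```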
- The `Agent.CollectObservations()` virtual method now takes a `VectorSensor`
  as its argument. The `Agent.AddVectorObs()` methods were removed (see the
  sketch after this list).
- The `SetActionMask` method was renamed to `SetMask` and must now be called on
  the `DiscreteActionMasker` argument of the `CollectDiscreteActionMasks`
  virtual method.
- We consolidated our API for `DiscreteActionMasker`. `SetMask` takes two
  arguments: the branch index and the list of masked actions for that branch.
- The `Monitor` class has been moved to the Examples Project. (It was prone to
  errors during testing.)
- The `MLAgents.Sensors` namespace has been introduced. All sensor classes are
  part of the `MLAgents.Sensors` namespace.
- The `MLAgents.SideChannels` namespace has been introduced. All side channel
  classes are part of the `MLAgents.SideChannels` namespace.
- The interface for `RayPerceptionSensor.PerceiveStatic()` was changed to take
  an input class and write to an output class, and the method was renamed to
  `Perceive()`.
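
A sketch of the observation and action-masking changes (the class, the
observation, and the masked action here are hypothetical):

```csharp
using MLAgents;
using MLAgents.Sensors;
using UnityEngine;

public class ExampleAgent : Agent
{
    // CollectObservations() now writes into the VectorSensor it receives,
    // replacing the removed AddVectorObs() helpers.
    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.position); // adds 3 floats
    }

    // SetMask() is called on the DiscreteActionMasker and takes the branch
    // index plus the list of actions to mask on that branch.
    public override void CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)
    {
        actionMasker.SetMask(0, new[] { 1 }); // hypothetical: mask action 1 on branch 0
    }
}
```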