* Several changes were made to how agents are reset and marked as done:
  * Calling `Done()` on the Agent will now reset it immediately and call the `AgentReset` virtual method. (This simplifies the previous logic, in which the Agent had to wait for the next `EnvironmentStep` to reset.)
  * The "Reset on Done" setting in AgentParameters was removed; this is now effectively always true. The `AgentOnDone` virtual method on the Agent has been removed.
  * Agents will always request a decision after being marked as `Done()` and will no longer wait for the next call to `RequestDecision()`.
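
  A minimal sketch of the new flow is shown below. The `RollerAgent` class, its field, and the fall condition are illustrative (not from the toolkit's example environments), and method signatures may differ slightly between versions:

  ```csharp
  using MLAgents;
  using UnityEngine;

  public class RollerAgent : Agent
  {
      Vector3 m_StartPosition;

      public override void InitializeAgent()
      {
          m_StartPosition = transform.position;
      }

      public override void AgentReset()
      {
          // Runs as soon as Done() is called (or when maxStep is reached),
          // no longer on the next EnvironmentStep.
          transform.position = m_StartPosition;
      }

      public override void AgentAction(float[] vectorAction)
      {
          // ... apply the action to the agent ...
          if (transform.position.y < 0f)
          {
              Done(); // Resets the Agent immediately and calls AgentReset().
          }
      }
  }
  ```
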
* The `--num-runs` command-line option has been removed from `mlagents-learn`.
* Several fields on the Agent were removed or made private in order to simplify the interface.
  * The `agentParameters` field of the Agent has been removed. (Contained only `maxStep` information)
  * `maxStep` is now a public field on the Agent. (Was moved from `agentParameters`)
  * The `Info` field of the Agent has been made private. (Was only used internally and not meant to be modified outside of the Agent)
  * The `GetReward()` method on the Agent has been removed. (It was being confused with `GetCumulativeReward()`)
  * The `AgentAction` struct no longer contains a `value` field. (Value estimates were not set during inference)
  * The `GetValueEstimate()` method on the Agent has been removed.
  * The `UpdateValueAction()` method on the Agent has been removed.
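
  For example (the `AgentSetup` wrapper below is illustrative; only `Agent.maxStep` and `Agent.GetCumulativeReward()` come from the toolkit):

  ```csharp
  using MLAgents;
  using UnityEngine;

  public class AgentSetup : MonoBehaviour
  {
      public Agent agent;

      void Start()
      {
          // Previously: agent.agentParameters.maxStep = 1000;
          agent.maxStep = 1000;
      }

      void FixedUpdate()
      {
          // Use GetCumulativeReward(); GetReward() no longer exists.
          Debug.Log(agent.GetCumulativeReward());
      }
  }
  ```
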
* The deprecated `RayPerception3D` and `RayPerception2D` classes were removed, and the `legacyHitFractionBehavior` argument was removed from `RayPerceptionSensor.PerceiveStatic()`.
* If your Agent implemented `AgentOnDone` and did not have the checkbox `Reset On Done` checked in the inspector, you must call the code that was in `AgentOnDone` manually.
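
  One way to do this is to move the old `AgentOnDone` body into an ordinary method and call it yourself before `Done()`. The names below (`OnEpisodeFinished`, `ReachedGoal`) are illustrative, not part of the API:

  ```csharp
  using MLAgents;

  public class MyAgent : Agent
  {
      // Former AgentOnDone() body, now an ordinary method.
      void OnEpisodeFinished()
      {
          // e.g. record episode statistics, disable visuals, ...
      }

      public override void AgentAction(float[] vectorAction)
      {
          // ... apply actions ...
          if (ReachedGoal())
          {
              OnEpisodeFinished(); // previously triggered automatically when the Agent was done
              Done();
          }
      }

      bool ReachedGoal() { return false; /* illustrative placeholder */ }
  }
  ```
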
* If you give your Agent a reward or penalty at the end of an episode (e.g. for reaching a goal or falling off of a platform), make sure you call `AddReward()` or `SetReward()` *before* calling `Done()`. Previously, the order didn't matter.
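
  For example (the `GoalAgent` class and goal check are illustrative; the important part is the ordering):

  ```csharp
  using MLAgents;

  public class GoalAgent : Agent
  {
      public override void AgentAction(float[] vectorAction)
      {
          bool reachedGoal = false; // illustrative condition
          // ... apply actions, update reachedGoal ...
          if (reachedGoal)
          {
              SetReward(1.0f); // register the terminal reward first...
              Done();          // ...then end the episode; Done() resets immediately,
                               // so a reward set afterwards would not be credited to this episode.
          }
      }
  }
  ```
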
* If you were not using `On Demand Decision` for your Agent, you **must** add a `DecisionRequester` component to your Agent GameObject and set its `Decision Period` field to the old `Decision Period` of the Agent.
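
  This can be done in the Inspector, or from code as in the sketch below. The `SetupDecisionRequester` wrapper and the value `5` are illustrative, and the `DecisionPeriod` field name may differ slightly in your version:

  ```csharp
  using MLAgents;
  using UnityEngine;

  public class SetupDecisionRequester : MonoBehaviour
  {
      void Awake()
      {
          // Attach the requester to the same GameObject as the Agent and
          // copy over the old "Decision Period" value.
          var requester = gameObject.AddComponent<DecisionRequester>();
          requester.DecisionPeriod = 5;
      }
  }
  ```
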
* If you have a class that inherits from Academy:
  * If the class didn't override any of the virtual methods and didn't store any additional data, you can just remove the old script from the scene.
* Multiply `max_steps` and `summary_steps` in your `trainer_config.yaml` by the number of Agents in the scene.
* Combine curriculum configs into a single file. See [the WallJump curricula](../config/curricula/wall_jump.yaml) for an example of the new curriculum config format.
  A tool like https://www.json2yaml.com may be useful to help with the conversion.
## Migrating from ML-Agents toolkit v0.12.0 to v0.13.0