Adjust documentation (#1096)

Branch: `develop-generalizationTraining-TrainerController` · Commit `de9d73f3` · GitHub, 6 years ago
1 file changed, 2 insertions and 18 deletions: `docs/Learning-Environment-Create-New.md`
**Note:** When you mark an agent as done, it stops its activity until it is reset. You can have the agent reset immediately by setting the `Agent.ResetOnDone` property to true in the Inspector, or you can wait for the Academy to reset the environment. This RollerBall environment relies on the `ResetOnDone` mechanism and doesn't set a `Max Steps` limit for the Academy (so it never resets the environment).
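As a sketch of what happens on reset, the `AgentReset()` method from the RollerAgent built earlier in this tutorial (assuming the `Target` and `rBody` fields defined there) is called after `Done()` when `ResetOnDone` is enabled:

```csharp
// Called after the agent is marked done (with ResetOnDone enabled).
public override void AgentReset()
{
    if (this.transform.position.y < -1.0)
    {
        // The agent fell off the platform: zero its momentum
        // and put it back at the origin.
        this.transform.position = Vector3.zero;
        this.rBody.angularVelocity = Vector3.zero;
        this.rBody.velocity = Vector3.zero;
    }
    else
    {
        // The agent reached the target: move the target
        // to a new random position on the platform.
        Target.position = new Vector3(Random.value * 8 - 4,
                                      0.5f,
                                      Random.value * 8 - 4);
    }
}
```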
To encourage the agent along, we also reward it for getting closer to the target (saving the previous distance measurement between steps):
```csharp
// Getting closer
if (distanceToTarget < previousDistance)
{
    AddReward(0.1f);
}
```
To encourage the agent to finish the task more quickly, we can also assign a small negative reward at each step:
```csharp
// Time penalty
AddReward(-0.05f);
```

Finally, if the agent falls off the platform, assign a large negative reward and mark the agent as done so that it resets, then save the current distance for the next step's comparison:

```csharp
// Fell off platform
if (this.transform.position.y < -1.0)
{
    AddReward(-1.0f);
    Done();
}

previousDistance = distanceToTarget;
```
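For completeness, the action-handling code that follows in the same `AgentAction()` method of the tutorial's RollerAgent applies the two-element continuous action vector as a force on the agent's Rigidbody (a sketch assuming the `rBody` and `speed` fields from earlier steps):

```csharp
// Actions, size = 2: the policy outputs two continuous values,
// used here as force components along the x and z axes.
Vector3 controlSignal = Vector3.zero;
controlSignal.x = vectorAction[0];
controlSignal.z = vectorAction[1];
rBody.AddForce(controlSignal * speed);
```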

## Final Editor Setup
Now that all the GameObjects and ML-Agent components are in place, it is time to connect everything together in the Unity Editor. This involves assigning the Brain object to the Agent, changing some of the Agent component's properties, and setting the Brain properties so that they are compatible with our agent code.
4. Change `Decision Frequency` from `1` to `5`.
![Assign the Brain to the RollerAgent](images/mlagents-NewTutAssignBrain.png)
