
[Docs] Update Balance Ball experiment eliminating graph placeholders (#338)

Branch: develop-generalizationTraining-TrainerController
Arthur Juliani committed 7 years ago
Current commit: 2b8ad888
1 changed file with 1 addition and 3 deletions
docs/Getting-Started-with-Balance-Ball.md (4 changes)



  4. Select the `Ball3DBrain` object from the Scene hierarchy.
  5. Change the `Type of Brain` to `Internal`.
  6. Drag the `<env_name>.bytes` file from the Project window of the Editor to the `Graph Model` placeholder in the `3DBallBrain` inspector window.
- 7. Set the `Graph Placeholder` size to 1 (_Note that steps 7 and 8 are done because 3DBall is a continuous control environment, and the TensorFlow model requires a noise parameter to decide actions. In cases with discrete control, epsilon is not needed_).
- 8. Add a placeholder called `epsilon` with a type of `floating point` and a range of values from `0` to `0`.
- 9. Press the Play button at the top of the editor.
+ 7. Press the Play button at the top of the editor.
If you followed these steps correctly, you should now see the trained model being used to control the behavior of the balance ball within the Editor itself. From here you can re-build the Unity binary, and run it standalone with your agent's new learned behavior built right in.
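
The removed steps 7 and 8 configured an extra `epsilon` graph placeholder that the internal brain feeds into the imported TensorFlow graph as the noise input for continuous-action sampling. The sketch below is a minimal, hypothetical illustration in C# with TensorFlowSharp of how such a placeholder could be supplied at inference time; it is not the actual ML-Agents internal-brain code, and the node names (`state`, `epsilon`, `action`) and tensor shapes are assumptions that depend on how the model was exported.

```csharp
// Minimal sketch (not the actual ML-Agents CoreBrainInternal implementation) of
// feeding an `epsilon` placeholder to an imported TensorFlow graph via
// TensorFlowSharp. Node names ("state", "epsilon", "action") and shapes are
// assumptions for illustration.
using System;
using TensorFlow;

public static class BallBrainSketch
{
    public static float[,] RunModel(byte[] graphBytes, float[,] observations)
    {
        // Load the frozen graph that was dragged into the `Graph Model` field.
        var graph = new TFGraph();
        graph.Import(graphBytes);

        using (var session = new TFSession(graph))
        {
            var runner = session.GetRunner();

            // Observations collected by the agent (a batch of one here).
            runner.AddInput(graph["state"][0], new TFTensor(observations));

            // A continuous-control model typically samples actions as
            // mean + epsilon * standard deviation, so it expects a noise
            // placeholder. Feeding zeros (the `0` to `0` range from the removed
            // step 8) makes inference deterministic.
            var epsilon = new float[1, 1] { { 0f } };
            runner.AddInput(graph["epsilon"][0], new TFTensor(epsilon));

            // Fetch the action output and hand it back to the agent.
            runner.Fetch(graph["action"][0]);
            TFTensor[] results = runner.Run();
            return (float[,])results[0].GetValue();
        }
    }
}
```

Eliminating the placeholder steps from the walkthrough suggests the exported graph now handles the noise input itself, so users only need to assign the `Graph Model` and press Play.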