
Release 0.14.0 more doc fixes (#3388)

* Update Learning-Environment-Create-New.md (#3356)

* Update Learning-Environment-Create-New.md

In the "Final Editor Setup" section, I think there should be a step to add the Decision Parameters script and set its Decision Period (the field ranges from 1 to 20).
Without this there was no action taken by the RollerAgent. After adding this step it worked for me.

* Update docs/Learning-Environment-Create-New.md

Co-Authored-By: Chris Elion <celion@gmail.com>

* Update docs/Learning-Environment-Create-New.md

Co-Authored-By: Chris Elion <celion@gmail.com>

Co-authored-by: Chris Elion <celion@gmail.com>

* migration fixes

Co-authored-by: Medhavi Monish <39962268+MedhaviMonish@users.noreply.github.com>
Branch: `release-0.14.0`
GitHub · 5 years ago
Commit `94c948c9`
2 files changed: 7 insertions, 8 deletions
  1. docs/Learning-Environment-Create-New.md (11 changes)
  2. docs/Migrating.md (4 changes)

docs/Learning-Environment-Create-New.md


1. Select the **RollerAgent** GameObject to show its properties in the Inspector
window.
2. Add the Decision Requester script with the Add Component button from the RollerAgent Inspector.
3. Change **Decision Period** to `10`.
4. Drag the Target GameObject from the Hierarchy window to the RollerAgent
Target field.
5. Add the Behavior Parameters script with the Add Component button from the RollerAgent Inspector.
6. Modify the Behavior Parameters of the Agent:
    * `Behavior Name` to *RollerBallBrain*
    * `Vector Observation` `Space Size` = 8
    * `Vector Action` `Space Type` = **Continuous**

* If you are using multiple training areas, make sure all the Agents have the same `Behavior Name`
and `Behavior Parameters`
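The effect of the Decision Period setting in the steps above can be illustrated with a small sketch. This is hypothetical Python, not the ML-Agents API: `DecisionRequesterSketch` and `policy` are made-up names used only to show that, with a period of 10, the agent requests a fresh action every 10th step and repeats its previous action in between.

```python
# Hypothetical illustration of Decision Period semantics (not ML-Agents code).
class DecisionRequesterSketch:
    def __init__(self, decision_period: int):
        self.decision_period = decision_period
        self.last_action = None

    def step(self, step_count: int, policy) -> float:
        # A new decision is requested only on steps that are
        # multiples of the decision period; otherwise the agent
        # repeats its most recent action.
        if step_count % self.decision_period == 0:
            self.last_action = policy(step_count)
        return self.last_action


def policy(step: int) -> float:
    # Stand-in for the trained brain: returns a distinct value per decision.
    return float(step)


requester = DecisionRequesterSketch(decision_period=10)
actions = [requester.step(s, policy) for s in range(30)]
# Only three distinct actions across 30 steps: one per decision.
```

Without a Decision Requester (or an explicit `RequestDecision()` call), no decision is ever requested, which is why the RollerAgent appeared to take no action.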

docs/Migrating.md


* If you were not using `On Demand Decision` for your Agent, you **must** add a `DecisionRequester` component to your Agent GameObject and set its `Decision Period` field to the old `Decision Period` of the Agent.
* If you have a class that inherits from Academy:
    * If the class didn't override any of the virtual methods and didn't store any additional data, you can just remove the old script from the scene.
    * If the class had additional data, create a new MonoBehaviour and store the data in the new MonoBehaviour instead.
    * If the class overrode the virtual methods, create a new MonoBehaviour and move the logic to it:
        * Move the InitializeAcademy code to MonoBehaviour.Awake
        * Move the AcademyStep code to MonoBehaviour.FixedUpdate

* Text actions and observations, and custom action and observation protos have been removed.
* RayPerception3D and RayPerception2D are marked deprecated, and will be removed in a future release. They can be replaced by RayPerceptionSensorComponent3D and RayPerceptionSensorComponent2D.
* The `Use Heuristic` checkbox in Behavior Parameters has been replaced with a `Behavior Type` dropdown menu. This has the following options:
    * `Default` corresponds to the previous unchecked behavior, meaning that Agents will train if they connect to a python trainer, otherwise they will perform inference.
    * `Heuristic Only` means the Agent will always use the `Heuristic()` method. This corresponds to having "Use Heuristic" selected in 0.11.0.
    * `Inference Only` means the Agent will always perform inference.
* Barracuda was upgraded to 0.3.2, and it is now installed via the Unity Package Manager.
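The `Behavior Type` options described above can be sketched as a small decision rule. This is a hypothetical Python illustration of the dropdown's semantics, not the actual C# source; the names `BehaviorType` values and `choose_action_source` are made up for this example.

```python
# Hypothetical sketch of how the Behavior Type dropdown selects
# where an Agent's actions come from (not ML-Agents source code).
from enum import Enum


class BehaviorType(Enum):
    DEFAULT = "default"
    HEURISTIC_ONLY = "heuristic_only"
    INFERENCE_ONLY = "inference_only"


def choose_action_source(behavior_type: BehaviorType,
                         trainer_connected: bool) -> str:
    if behavior_type is BehaviorType.HEURISTIC_ONLY:
        # Always call the Agent's Heuristic() method.
        return "heuristic"
    if behavior_type is BehaviorType.INFERENCE_ONLY:
        # Always run the trained model.
        return "inference"
    # DEFAULT: train when a python trainer is connected,
    # otherwise fall back to inference.
    return "training" if trainer_connected else "inference"
```

For example, `choose_action_source(BehaviorType.DEFAULT, trainer_connected=False)` yields `"inference"`, matching the previous unchecked-checkbox behavior.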
