
Documentation for Goal conditioning (#5149)

* Documentation for Goal conditioning

* hyper is the default

* Update docs/Training-Configuration-File.md

Co-authored-by: Arthur Juliani <awjuliani@gmail.com>

* Update docs/Learning-Environment-Design-Agents.md

Co-authored-by: Arthur Juliani <awjuliani@gmail.com>

* addressing comments: Renaming goal observation to goal signal in docs

* addressing comments

* Update docs/Learning-Environment-Design-Agents.md

Co-authored-by: Ervin T. <ervin@unity3d.com>

* Update docs/Learning-Environment-Design-Agents.md

* Update docs/Learning-Environment-Design-Agents.md

* Update docs/Learning-Environment-Design-Agents.md

* Update docs/Learning-Environment-Design-Agents.md

* Update docs/Learning-Environment-Design-Agents.md

Co-authored-by: Arthur Juliani <awjuliani@gmail.com>
Co-authored-by: Ervin T. <ervin@unity3d.com>
/check-for-ModelOverriders
GitHub · 3 years ago · commit 2fcf8425
2 files changed, 33 insertions(+), 0 deletions(-)
  1. docs/Learning-Environment-Design-Agents.md (+32)
  2. docs/Training-Configuration-File.md (+1)

docs/Learning-Environment-Design-Agents.md (+32)


- [RayCast Observation Summary & Best Practices](#raycast-observation-summary--best-practices)
- [Variable Length Observations](#variable-length-observations)
- [Variable Length Observation Summary & Best Practices](#variable-length-observation-summary--best-practices)
- [Goal Signal](#goal-signal)
- [Goal Signal Summary & Best Practices](#goal-signal-summary--best-practices)
- [Actions and Actuators](#actions-and-actuators)
- [Continuous Actions](#continuous-actions)
- [Discrete Actions](#discrete-actions)

of an entity to the `BufferSensor`.
- Normalize the entities' observations before feeding them into the `BufferSensor`.

### Goal Signal

It is possible for agents to collect observations that will be treated as a "goal signal".
A goal signal is used to condition the policy of the agent, meaning that if the goal
changes, the policy (i.e. the mapping from observations to actions) changes as well.
Note that this is true for any observation, since all observations influence the policy
of the Agent to some degree, but by specifying a goal signal explicitly we make this
conditioning more important to the agent. This feature can be used in settings where an
agent must learn to solve different tasks that are similar in some respects, because the
agent can reuse what it learns on one task to generalize better to the others.

In Unity, you can specify that a `VectorSensor` or
a `CameraSensor` is a goal by attaching a `VectorSensorComponent` or a
`CameraSensorComponent` to the Agent and selecting `Goal Signal` as `Observation Type`.

On the trainer side, there are two different ways to condition the policy. This
setting is determined by the
[conditioning_type parameter](Training-Configuration-File.md#common-trainer-configurations).
If set to `hyper` (default), a [HyperNetwork](https://arxiv.org/pdf/1609.09106.pdf)
is used to generate some of the weights of the policy, using the goal observations
as input. Note that using a HyperNetwork requires a lot of computation; it is
recommended to use a smaller number of hidden units in the policy to alleviate this.
If set to `none`, the goal signal is treated as a regular observation.
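
For reference, here is a minimal trainer configuration sketch that enables HyperNetwork
conditioning while compensating with a smaller policy. The behavior name
`MyGoalConditionedBehavior` and the specific hyperparameter values are placeholders,
not recommendations:

```yaml
behaviors:
  MyGoalConditionedBehavior:     # placeholder behavior name
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128          # kept small to offset the HyperNetwork cost
      num_layers: 2
      conditioning_type: hyper   # or `none` to treat the goal as a regular observation
    max_steps: 500000
```
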
#### Goal Signal Summary & Best Practices

- Attach a `VectorSensorComponent` or a `CameraSensorComponent` to an agent and
  set its `Observation Type` to `Goal Signal` to use this feature.
- Set the `conditioning_type` parameter in the training configuration.
- Reduce the number of hidden units in the network when using the HyperNetwork
  conditioning type.

## Actions and Actuators

docs/Training-Configuration-File.md (+1)


| `network_settings -> num_layers` | (default = `2`) The number of hidden layers in the neural network. Corresponds to how many hidden layers are present after the observation input, or after the CNN encoding of the visual observation. For simple problems, fewer layers are likely to train faster and more efficiently. More layers may be necessary for more complex control problems. <br><br> Typical range: `1` - `3` |
| `network_settings -> normalize` | (default = `false`) Whether normalization is applied to the vector observation inputs. This normalization is based on the running average and variance of the vector observation. Normalization can be helpful in cases with complex continuous control problems, but may be harmful with simpler discrete control problems. |
| `network_settings -> vis_encode_type` | (default = `simple`) Encoder type for encoding visual observations. <br><br> `simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsson et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
| `network_settings -> conditioning_type` | (default = `hyper`) Conditioning type for the policy using goal observations. <br><br> `none` treats the goal observations as regular observations, `hyper` (default) uses a HyperNetwork with the goal observations as input to generate some of the weights of the policy. Note that when using `hyper` the number of parameters of the network increases greatly. Therefore, it is recommended to reduce the number of `hidden_units` when using this `conditioning_type`. |
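
To illustrate how these `network_settings` entries fit together in a trainer configuration
file, here is a minimal sketch; the behavior name `MyBehavior` is a placeholder, and the
values shown simply spell out the defaults discussed above:

```yaml
behaviors:
  MyBehavior:                  # placeholder behavior name
    trainer_type: ppo
    network_settings:
      num_layers: 2            # default
      normalize: false         # default
      vis_encode_type: simple  # default
      conditioning_type: hyper # default; `none` treats goals as regular observations
      hidden_units: 128        # consider reducing this when conditioning_type is `hyper`
```
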
## Trainer-specific Configurations
