
Commit 96a5aaf

vincentpierre, awjuliani, and Ervin T.
authored
Documentation for Goal conditioning (#5149)
* Documentation for Goal conditioning
* hyper is the default
* Update docs/Training-Configuration-File.md
  Co-authored-by: Arthur Juliani <[email protected]>
* Update docs/Learning-Environment-Design-Agents.md
  Co-authored-by: Arthur Juliani <[email protected]>
* addressing comments: Renaming goal observation to goal signal in docs
* addressing comments
* Update docs/Learning-Environment-Design-Agents.md
  Co-authored-by: Ervin T. <[email protected]>
* Update docs/Learning-Environment-Design-Agents.md (x5)
  Co-authored-by: Arthur Juliani <[email protected]>

Co-authored-by: Arthur Juliani <[email protected]>
Co-authored-by: Ervin T. <[email protected]>
1 parent ca0fca7 commit 96a5aaf

File tree

2 files changed

+33
-0
lines changed


docs/Learning-Environment-Design-Agents.md

Lines changed: 32 additions & 0 deletions
@@ -19,6 +19,8 @@
   - [RayCast Observation Summary & Best Practices](#raycast-observation-summary--best-practices)
   - [Variable Length Observations](#variable-length-observations)
     - [Variable Length Observation Summary & Best Practices](#variable-length-observation-summary--best-practices)
+  - [Goal Signal](#goal-signal)
+    - [Goal Signal Summary & Best Practices](#goal-signal-summary--best-practices)
 - [Actions and Actuators](#actions-and-actuators)
   - [Continuous Actions](#continuous-actions)
   - [Discrete Actions](#discrete-actions)
@@ -562,6 +564,36 @@ between -1 and 1.
   of an entity to the `BufferSensor`.
 - Normalize the entities observations before feeding them into the `BufferSensor`.
 
+### Goal Signal
+
+Agents can collect observations that are treated as a "goal signal". A goal
+signal is used to condition the policy of the agent, meaning that if the goal
+changes, the policy (i.e. the mapping from observations to actions) will change
+as well. Note that this is true of any observation, since all observations
+influence the policy of the Agent to some degree, but specifying a goal signal
+explicitly makes this conditioning more pronounced. This feature is useful in
+settings where an agent must learn to solve a set of related tasks: the agent
+can reuse what it learns on one task to generalize better to the others.
+In Unity, you can specify that a `VectorSensor` or a `CameraSensor` is a goal
+by attaching a `VectorSensorComponent` or a `CameraSensorComponent` to the
+Agent and selecting `Goal Signal` as the `Observation Type`.
+On the trainer side, there are two ways to condition the policy, determined by
+the
+[conditioning_type parameter](Training-Configuration-File.md#common-trainer-configurations).
+If set to `hyper` (the default), a [HyperNetwork](https://arxiv.org/pdf/1609.09106.pdf)
+is used to generate some of the weights of the policy, with the goal
+observations as input. Note that a HyperNetwork is computationally expensive;
+it is recommended to use a smaller number of hidden units in the policy to
+compensate.
+If set to `none`, the goal signal is treated as a regular observation.
+
+#### Goal Signal Summary & Best Practices
+- Attach a `VectorSensorComponent` or `CameraSensorComponent` to an agent and
+  set the observation type to goal to use the feature.
+- Set the `conditioning_type` parameter in the training configuration.
+- Reduce the number of hidden units in the network when using the HyperNetwork
+  conditioning type.
 
 ## Actions and Actuators
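The new section says a HyperNetwork generates some of the policy's weights from the goal observations. As a minimal sketch of that idea only (this is not the ML-Agents implementation; all dimensions, names, and the linear hypernetwork form are illustrative assumptions):

```python
# Sketch of HyperNetwork goal conditioning: a small map from the goal signal
# to the flattened weights of one policy layer. Illustrative only; NOT the
# ML-Agents implementation.
import numpy as np

rng = np.random.default_rng(0)

GOAL_DIM, OBS_DIM, HIDDEN = 4, 8, 16

# HyperNetwork parameters: a linear map goal -> flattened policy-layer weights.
W_hyper = rng.normal(0.0, 0.1, size=(GOAL_DIM, OBS_DIM * HIDDEN))
b_hyper = np.zeros(OBS_DIM * HIDDEN)

def policy_hidden(obs, goal):
    """Hidden activations of a policy layer whose weights come from the goal."""
    flat_weights = goal @ W_hyper + b_hyper           # goal determines weights
    W_policy = flat_weights.reshape(OBS_DIM, HIDDEN)  # generated layer weights
    return np.tanh(obs @ W_policy)

obs = rng.normal(size=OBS_DIM)
h_goal_a = policy_hidden(obs, np.array([1.0, 0.0, 0.0, 0.0]))
h_goal_b = policy_hidden(obs, np.array([0.0, 1.0, 0.0, 0.0]))
# Same observation, different goal signals -> different policy behavior.
assert not np.allclose(h_goal_a, h_goal_b)
```

Because the hypernetwork must emit `OBS_DIM * HIDDEN` values, its parameter count scales with the policy layer's size, which is why the added docs recommend fewer hidden units under `hyper` conditioning.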

docs/Training-Configuration-File.md

Lines changed: 1 addition & 0 deletions
@@ -43,6 +43,7 @@ choice of the trainer (which we review on subsequent sections).
 | `network_settings -> num_layers` | (default = `2`) The number of hidden layers in the neural network. Corresponds to how many hidden layers are present after the observation input, or after the CNN encoding of the visual observation. For simple problems, fewer layers are likely to train faster and more efficiently. More layers may be necessary for more complex control problems. <br><br> Typical range: `1` - `3` |
 | `network_settings -> normalize` | (default = `false`) Whether normalization is applied to the vector observation inputs. This normalization is based on the running average and variance of the vector observation. Normalization can be helpful in cases with complex continuous control problems, but may be harmful with simpler discrete control problems. |
 | `network_settings -> vis_encode_type` | (default = `simple`) Encoder type for encoding visual observations. <br><br> `simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsoon et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
+| `network_settings -> conditioning_type` | (default = `hyper`) Conditioning type for the policy using goal observations. <br><br> `none` treats the goal observations as regular observations, while `hyper` (default) uses a HyperNetwork with the goal observations as input to generate some of the weights of the policy. Note that when using `hyper` the number of parameters of the network increases greatly; it is therefore recommended to reduce the number of `hidden_units` when using this `conditioning_type`. |
 
 
 ## Trainer-specific Configurations
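In a trainer configuration file, the new row corresponds to a key under `network_settings`. A hedged example of where it sits (the behavior name and the other values are placeholders, not taken from this commit):

```yaml
behaviors:
  MyAgent:            # placeholder behavior name
    trainer_type: ppo
    network_settings:
      num_layers: 2
      hidden_units: 128          # kept small: hyper conditioning adds parameters
      conditioning_type: hyper   # or `none` to treat goals as regular observations
```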
