<!-- Note that the above values aren't normalized; we recommend normalizing observation values. -->
The feature vector must always contain the same number of elements, and observations must always be in the same position within the list. If the number of observed entities in an environment can vary, you can pad the feature vector with zeros for any missing entities in a specific observation, or you can limit an agent's observations to a fixed subset. For example, instead of observing every enemy agent in an environment, you could observe only the closest five, as in the sketch below.
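The following is a minimal sketch of the padding approach, assuming the C# API from a recent ML-Agents release (`Agent.CollectObservations(VectorSensor)` with `sensor.AddObservation`; older releases expose `AddVectorObs` on the Agent instead). The class name and the `enemies` list are hypothetical, used only to illustrate the idea.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

// Hypothetical agent that always emits a fixed-length feature vector,
// even when the number of nearby enemies varies.
public class PaddedObservationAgent : Agent
{
    // Illustrative list of enemy transforms; how you track enemies is up to you.
    public List<Transform> enemies = new List<Transform>();

    // Fixed number of enemy slots so the vector length never changes.
    const int MaxObservedEnemies = 5;

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observe only the closest enemies, sorted by distance.
        var closest = enemies
            .OrderBy(e => Vector3.Distance(transform.position, e.position))
            .Take(MaxObservedEnemies)
            .ToList();

        for (int i = 0; i < MaxObservedEnemies; i++)
        {
            if (i < closest.Count)
            {
                // Relative position of the enemy: 3 floats per slot.
                sensor.AddObservation(closest[i].position - transform.position);
            }
            else
            {
                // Pad unused slots with zeros so the feature vector always
                // contains MaxObservedEnemies * 3 elements.
                sensor.AddObservation(Vector3.zero);
            }
        }
    }
}
```

With this scheme the feature vector always has MaxObservedEnemies × 3 = 15 elements, which is the observation size you would enter when configuring the agent in the editor.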
When you set up an Agent's brain in the Unity Editor, set the following properties to use a continuous vector observation:
<!--
How to handle things like large numbers of words or symbols? Should you use a very long one-hot vector? Or a single index into a table?
Colors? Better to use a single color number or individual components?
-->
For information about imitation learning, which uses a different training algorithm, see [Imitation Learning](Training-Imitation-Learning).
<!-- Need a description of PPO that provides a general overview of the algorithm and, more specifically, puts all the hyperparameters and Academy/Brain/Agent settings (like max_steps and done) into context. Oh, and which is also short and understandable by laymen. -->
## Best Practices when training with PPO
Successfully training a Reinforcement Learning model often requires tuning the training hyperparameters. This guide contains some best practices for tuning the training process when the default parameters don't produce the level of performance you want.