Previously, in v0.8, we added parallel environments via the
SubprocessUnityEnvironment, which exposed the same abstraction as
UnityEnvironment while actually wrapping many parallel environments
in subprocesses.
Wrapping many environments with the same interface as a single
environment had some downsides, however:
* Agent ordering needed to be preserved across the different envs,
complicating the SubprocessUnityEnvironment logic
* Asynchronous environments, with steps taken out of sync with the
trainer, aren't viable with the UnityEnvironment abstraction
This PR introduces a new EnvManager abstraction, which exposes a
reduced subset of the UnityEnvironment interface, and a
SubprocessEnvManager implementation that replaces
SubprocessUnityEnvironment.
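As a rough sketch of what that reduced surface might look like (the names `StepInfo`, `step`, and `reset` here are illustrative assumptions, not necessarily the PR's exact signatures):

```python
from abc import ABC, abstractmethod
from typing import List, NamedTuple, Optional


class StepInfo(NamedTuple):
    # Hypothetical container for everything a trainer needs from one env step.
    previous_all_brain_info: Optional[dict]
    current_all_brain_info: dict
    brain_name_to_action_info: Optional[dict]


class EnvManager(ABC):
    """Reduced interface: the trainer only needs to step, reset, and shut down."""

    @abstractmethod
    def step(self) -> List[StepInfo]:
        """Advance all managed environments and return one StepInfo per env."""

    @abstractmethod
    def reset(self, config: Optional[dict] = None) -> List[StepInfo]:
        """Reset all managed environments."""

    @abstractmethod
    def close(self) -> None:
        """Shut down the environments (and any worker subprocesses)."""
```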
At each step, our PPO trainer updates an unused `last_reward` variable
in the TF graph. There are also related unused methods in various
places in the codebase. This change removes both.
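For context, the kind of dead pattern being removed looks roughly like this (a minimal sketch, not the trainer's actual code):

```python
import tensorflow as tf

# A non-trainable variable that gets written on every update step
# but is never read by the loss, the policy, or the summaries.
last_reward = tf.Variable(0.0, name="last_reward", trainable=False, dtype=tf.float32)
new_reward = tf.placeholder(shape=[], dtype=tf.float32, name="new_reward")
update_reward = tf.assign(last_reward, new_reward)  # dead weight: removed along with its callers
```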
* Create new class (RewardSignal) that represents a reward signal.
* Add value heads for each reward signal in the PPO model.
* Make summaries agnostic to the type of reward signals, and log weighted rewards per reward signal.
* Move extrinsic and curiosity rewards into this new structure.
* Allow defining multiple reward signals in the trainer YAML file, and add documentation for this new structure (see the sketch below).
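A minimal sketch of the idea, assuming hypothetical key and attribute names (shown as a Python mirror of the YAML section rather than the PR's actual schema):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

# Python mirror of a hypothetical YAML block: one sub-section per reward signal.
# Key names here are assumptions for illustration, not the documented schema.
reward_signal_config: Dict[str, Dict[str, Any]] = {
    "extrinsic": {"strength": 1.0, "gamma": 0.99},
    "curiosity": {"strength": 0.01, "gamma": 0.99, "encoding_size": 128},
}


class RewardSignal(ABC):
    """Sketch of a base class: each signal produces its own weighted reward stream."""

    def __init__(self, strength: float, gamma: float) -> None:
        self.strength = strength  # weight used when combining signals
        self.gamma = gamma        # discount used by this signal's value head

    @abstractmethod
    def evaluate(self, current_info: Any, next_info: Any) -> Any:
        """Return the unweighted reward this signal assigns to the transition."""
```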
* WIP precommit on top level
* update CI
* circleci fixes
* intentionally fail black
* use --show-diff-on-failure in CI
* fix command order
* rebreak a file
* apply black
* WIP enable mypy
* run mypy on each package
* fix trainer_metrics mypy errors
* more mypy errors
* more mypy
* Fix some partially typed functions
* types for take_action_outputs
* fix formatting
* cleanup
* generate stubs for proto objects
* fix ml-agents-env mypy errors
* disallow-incomplete-defs for gym-unity (see the typing sketch after this list)
* Add CI notes to CONTRIBUTING.md
* run on whole repo
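For reference, `--disallow-incomplete-defs` makes mypy reject any function that annotates only part of its signature. A minimal illustration (the function name borrows `take_action_outputs` from the commits above, but the signature is an assumption):

```python
from typing import Any, Dict, List

# Rejected under --disallow-incomplete-defs: `outputs` is annotated,
# but `brain_info` and the return type are not, so the definition is incomplete.
def take_action_outputs(brain_info, outputs: Dict[str, Any]):
    ...

# Accepted: every parameter and the return value carry an annotation.
def take_action_outputs_typed(brain_info: Any, outputs: Dict[str, Any]) -> List[float]:
    return list(outputs.get("value", []))
```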
- Fix re-install directions to include the -e modifier
- Move re-install directions from creating-custom... to protobuf readme
- Add how to confirm that the install worked
- Add notes on where to enter the commands to start with
- Select a particular version of grpcio-tools
- Note how to get nuget if needed
- Directory-independent nuget install
- Remove instruction to download protoc, since it comes bundled with grpc.tools (see the sketch after this list)
- Add instructions for Windows under ## Running, with directories for clarification
- Fix slash direction for Windows in the COMPILER definition
- Fix missing COMPILER variables when calling protoc
- Fix call to "python" instead of "python3"
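Since grpcio-tools bundles the protoc compiler, the Python side of the generation can be driven without a separate protoc download. A rough sketch of that invocation (the paths and the proto file name are placeholders, not the repo's actual layout):

```python
from grpc_tools import protoc  # bundles protoc, so no separate download is needed

# Placeholder paths; the repo's actual proto directory and output package differ.
PROTO_DIR = "protobuf-definitions/proto"
PYTHON_OUT = "ml-agents-envs/mlagents/envs/communicator_objects"

exit_code = protoc.main([
    "grpc_tools.protoc",  # argv[0], required by protoc.main but otherwise ignored
    "--proto_path={}".format(PROTO_DIR),
    "--python_out={}".format(PYTHON_OUT),
    "--grpc_python_out={}".format(PYTHON_OUT),
    "{}/unity_message.proto".format(PROTO_DIR),  # placeholder proto file
])
if exit_code != 0:
    raise RuntimeError("protoc failed with exit code {}".format(exit_code))
```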
* Update Learning-Environment-Create-New.md
- Clarify that training is done in the original ml-agents project folder
- Fix a typo
- In the future, it could help to show users that they can copy the config folder and run training from a new project folder, so they don't have to mix project settings into the original config folder
* Update Learning-Environment-Create-New.md
Add file paths
When using parallel SubprocessUnityEnvironment instances together with
the Academy's Done(), a new step might be taken when a reset should
have happened instead, because some environments may have been done while
others were not (making "global done" less useful).
This change manages the reset on `global_done` at the level of each
environment worker, and removes the global reset from
TrainerController.
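A minimal sketch of the worker-level handling, assuming illustrative names (`worker_step`, `env`) rather than the PR's actual code:

```python
def worker_step(env, last_actions):
    """Step one environment inside its worker, resetting locally on global_done.

    Each worker owns its own reset decision, so the trainer no longer needs a
    single global reset that waits on every parallel environment at once.
    """
    if env.global_done:
        # Reset here, in the worker, rather than via a global reset in TrainerController.
        all_brain_info = env.reset()
    else:
        all_brain_info = env.step(last_actions)
    return all_brain_info
```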