* Initial Commit
* attempt at refactor
* Put all static methods into the CoreInternalBrain
* improvements
* more testing
* modifications
* renamed epsilon
* misc
* Now supports discrete actions
* added discrete, RNN, and visual support. Left to do: refactor and save variables into models
* code cleaning
* made a tensor generator and applier
* fix on the models.py file
* Moved the Checks to a different Class
* Added some unit tests
* BugFix
* Need to generate the output tensors as well as inputs before executing the graph
* Made NodeNames static and created a new namespace
* Added comments to the TensorAppliers
* Started adding comments on the TensorGenerators code
* Added comments for the Tensor Generator
* Moving the helper classes into a separate folder
* Added initial comments to the TensorChecks
* Renamed NodeNames -> TensorNames
* Removing warnings in tests
* Now using Aut...
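
The generator/applier split mentioned above can be sketched roughly as follows. This is a hedged Python illustration of the pattern only; the actual code lives in C#, and every name here (TensorGenerator, TensorApplier, run_inference) is hypothetical.

```python
# Hypothetical sketch of the generator/applier pattern: generators build the
# graph's input tensors from agent info, appliers copy output tensors back to
# the agents. Output names must be known before the graph is executed.

class TensorGenerator:
    """Builds one named input tensor from the current batch of agent infos."""
    def __init__(self, tensor_name, generate_fn):
        self.tensor_name = tensor_name
        self.generate_fn = generate_fn

    def generate(self, agent_infos):
        return self.generate_fn(agent_infos)


class TensorApplier:
    """Applies one named output tensor back onto the agents (e.g. actions)."""
    def __init__(self, tensor_name, apply_fn):
        self.tensor_name = tensor_name
        self.apply_fn = apply_fn

    def apply(self, output_tensor, agents):
        self.apply_fn(output_tensor, agents)


def run_inference(generators, appliers, agent_infos, agents, execute_graph):
    feeds = {g.tensor_name: g.generate(agent_infos) for g in generators}
    fetches = [a.tensor_name for a in appliers]   # outputs listed up front
    outputs = execute_graph(feeds, fetches)       # dict: name -> tensor
    for applier in appliers:
        applier.apply(outputs[applier.tensor_name], agents)
```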
* Create new class (RewardSignal) that represents a reward signal.
* Add value heads for each reward signal in the PPO model.
* Make summaries agnostic to the type of reward signals, and log weighted rewards per reward signal.
* Move extrinsic and curiosity rewards into this new structure.
* Allow defining multiple reward signals in YAML file. Add documentation for this new structure.
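
A rough Python sketch of what a reward-signal abstraction along these lines could look like; the class name, fields, and signatures are assumptions for illustration, not the actual ml-agents API.

```python
# Hedged sketch: one RewardSignal per reward stream, each with its own
# strength and gamma, and a value head keyed by the signal's name.
from abc import ABC, abstractmethod
from typing import Dict

import numpy as np


class RewardSignal(ABC):
    """Represents one reward stream (e.g. extrinsic or curiosity)."""

    def __init__(self, name: str, strength: float, gamma: float):
        self.name = name          # keys the PPO value head and the summaries
        self.strength = strength  # weight applied when combining signals
        self.gamma = gamma        # per-signal discount factor

    @abstractmethod
    def evaluate(self, obs: np.ndarray, next_obs: np.ndarray) -> np.ndarray:
        """Return the unscaled reward for a batch of transitions."""


def combined_reward(signals: Dict[str, RewardSignal], obs, next_obs) -> np.ndarray:
    # The trainer would log each weighted reward separately, then sum them.
    return sum(s.strength * s.evaluate(obs, next_obs) for s in signals.values())
```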
At each step, an unused `last_reward` variable in the TF graph is
updated in our PPO trainer. There are also related unused methods
in various places in the codebase. This change removes them.
Previously in v0.8 we added parallel environments via the
SubprocessUnityEnvironment, which exposed the same abstraction as
UnityEnvironment while actually wrapping many parallel environments
via subprocesses.
Wrapping many environments with the same interface as a single
environment had some downsides, however:
* Ordering needed to be preserved for agents across different envs,
complicating the SubprocessEnvironment logic
* Asynchronous environments with steps taken out of sync with the
trainer aren't viable with the Environment abstraction
This PR introduces a new EnvManager abstraction which exposes a
reduced subset of the UnityEnvironment abstraction and a
SubprocessEnvManager implementation which replaces the
SubprocessUnityEnvironment.
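
A minimal sketch of what that reduced interface might look like, assuming method names and return types that may not match the real EnvManager:

```python
# Hedged sketch of the reduced EnvManager interface; method names and return
# types are assumptions for illustration only.
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class EnvManager(ABC):
    """Owns one or more environments and hides per-env ordering from the trainer."""

    @abstractmethod
    def step(self) -> List[Any]:
        """Advance the managed environments and return the collected experiences."""

    @abstractmethod
    def reset(self, config: Optional[Dict[str, float]] = None) -> List[Any]:
        """Reset all managed environments."""

    @abstractmethod
    def close(self) -> None:
        """Shut down the environments (and worker subprocesses, if any)."""
```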
* Timer proof-of-concept
* micro optimizations
* add some timers
* cleanup, add asserts
* Cleanup (no start/end methods) and handle exceptions
* unit test and decorator
* move output code, add a decorator
* cleanup
* module docstring
* actually write the timings when done with training
* use __qualname__ instead
* add a few more timers
* fix mock import
* fix unit test
* don't need fwd reference
* cleanup root
* always write timers, add comments
* undo accidental change
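
A minimal sketch of a timing decorator in this spirit, keyed by `__qualname__` as the commits above mention; the actual timer module's names and output format are likely different.

```python
# Hedged sketch: accumulate wall-clock time per decorated function and write
# the totals once training is done.
import functools
import time
from collections import defaultdict

_timings = defaultdict(float)   # accumulated seconds per qualified name
_counts = defaultdict(int)


def timed(func):
    """Accumulate wall-clock time spent in func under its __qualname__."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _timings[func.__qualname__] += time.perf_counter() - start
            _counts[func.__qualname__] += 1
    return wrapper


def write_timings():
    """Dump the accumulated timings, e.g. at the end of training."""
    for name, total in sorted(_timings.items()):
        print(f"{name}: {total:.3f}s over {_counts[name]} calls")
```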
Based on the new reward signals architecture, add BC pretrainer and GAIL for PPO. Main changes:
- A new GAILRewardSignal and GAILModel for GAIL/VAIL
- A BCModule component (not a reward signal) to do pretraining during RL
- Documentation for both of these
- Change to Demo Loader that lets you load multiple demo files in a folder
- Example Demo files for all of our tested sample environments (for future regression testing)
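
A minimal sketch, under an assumed function name, of how loading multiple demo files from a folder might look:

```python
# Hedged sketch of resolving a demo path that may be a file or a folder.
import os
from typing import List


def gather_demo_paths(path: str) -> List[str]:
    """Accept either a single .demo file or a directory containing .demo files."""
    if os.path.isdir(path):
        return sorted(
            os.path.join(path, name)
            for name in os.listdir(path)
            if name.endswith(".demo")
        )
    if path.endswith(".demo"):
        return [path]
    raise ValueError(f"'{path}' is neither a .demo file nor a directory")
```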
Fixes shuffling issue with newer versions of numpy (#1798).
* make get_value_estimates output a dict of floats
* Use np.append instead of convert to list, unconvert
* Add type hints and test for get_value_estimates
* Don't zero the value bootstrap for GAIL and Curiosity
* Add gradient penalties to GAN to help with stability
* Add gail_config.yaml with GAIL examples
* Cleaned up trainer_config.yaml and unnecessary gammas
* Documentation updates
* Code cleanup
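
A hedged sketch of a `get_value_estimates` that returns one float per reward signal; the signature and the assumption about which signals should bootstrap with zero are illustrative only.

```python
# Hedged sketch: value estimates keyed by reward-signal name, with the zero
# bootstrap at episode end applied only to signals that should use it.
from typing import Dict, FrozenSet

import numpy as np


def get_value_estimates(
    value_heads: Dict[str, np.ndarray],
    idx: int,
    done: bool,
    zero_on_done: FrozenSet[str] = frozenset({"extrinsic"}),  # assumed names
) -> Dict[str, float]:
    """Return {signal_name: value} for the experience at index idx."""
    estimates = {name: float(values[idx]) for name, values in value_heads.items()}
    if done:
        # GAIL and curiosity keep their bootstrap value; only the signals in
        # zero_on_done (an assumption here) are zeroed at episode end.
        for name in zero_on_done:
            if name in estimates:
                estimates[name] = 0.0
    return estimates
```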
- Move common functions to trainer.py and model.py from ppo/trainer.py, ppo/policy.py, and ppo/model.py
- Introduce RLTrainer class and move most of add_experiences and some common reward
signal code there. PPO and SAC will inherit from this, while the BC Trainer largely will not.
- Add methods to Buffer to enable sampling, truncating, and save/loading.
- Add scoping to create encoders in model.py
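
A rough sketch of the RLTrainer split described above; the class and attribute names are assumptions, not the exact ml-agents classes.

```python
# Hedged sketch: experience bookkeeping shared in RLTrainer, algorithm-specific
# updates in the subclasses. Names are illustrative only.
from abc import ABC, abstractmethod
from typing import Any, List


class RLTrainer(ABC):
    """Common logic for RL trainers (PPO, SAC); the BC trainer stays separate."""

    def __init__(self, reward_signals: List[Any]):
        self.reward_signals = reward_signals
        self.buffer: List[Any] = []

    def add_experiences(self, experiences: List[Any]) -> None:
        # Shared bookkeeping: evaluate each reward signal, append to the buffer.
        for exp in experiences:
            for signal in self.reward_signals:
                exp.rewards[signal.name] = signal.evaluate(exp.obs, exp.next_obs)
            self.buffer.append(exp)

    @abstractmethod
    def update_policy(self) -> None:
        """Algorithm-specific update (PPO clipped objective, SAC losses, ...)."""


class PPOTrainer(RLTrainer):
    def update_policy(self) -> None:
        ...  # PPO-specific update would go here


class SACTrainer(RLTrainer):
    def update_policy(self) -> None:
        ...  # SAC-specific update would go here
```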
* Normalize observations when adding experiences
This change moves normalization of vector observations into the trainer's
"add_experiences" interface.
Prior to this change, normalization occurred at inference time. This
was somewhat confusing since usually executing a forward pass shouldn't
have side-effects that change the training step. Also, in an
asynchronous or distributed setting where we copy the neural network
weights from a trainer to a remote actor / inference worker, we'd end up
with training issues because the weights on the trainer would differ from
those on the workers.
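
A minimal sketch of this arrangement, assuming a hypothetical RunningNormalizer and add_experiences signature (not the actual trainer code): the running statistics are updated only where experiences are added, so inference remains side-effect free.

```python
# Hedged sketch: normalize vector observations at experience-collection time,
# not at inference time.
import numpy as np


class RunningNormalizer:
    """Tracks a running mean/variance and normalizes observations with it."""

    def __init__(self, size: int, eps: float = 1e-8):
        self.mean = np.zeros(size)
        self.var = np.ones(size)
        self.count = eps

    def update(self, batch: np.ndarray) -> None:
        # Parallel (Welford-style) update over the new batch of observations.
        batch_mean = batch.mean(axis=0)
        batch_var = batch.var(axis=0)
        batch_count = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        self.mean += delta * batch_count / total
        self.var = (self.var * self.count + batch_var * batch_count
                    + delta ** 2 * self.count * batch_count / total) / total
        self.count = total

    def normalize(self, obs: np.ndarray) -> np.ndarray:
        return (obs - self.mean) / np.sqrt(self.var + 1e-8)


def add_experiences(normalizer: RunningNormalizer, vector_obs: np.ndarray):
    # Update the statistics here, in the trainer, so remote inference workers
    # only ever need a read-only copy of mean/var.
    normalizer.update(vector_obs)
    return normalizer.normalize(vector_obs)
```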
We have been ignoring unused imports and star imports via flake8. These are
both bad practice and grow over time without automated checking. This
commit attempts to fix all existing import errors and add back the corresponding
flake8 checks.
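
As a small illustration of the kind of change this enforces, flake8's F401 (imported but unused) and F403 (star import) checks flag patterns like the commented-out lines below; the module used here is just an example.

```python
# Before (the kind of thing F401/F403 flag):
#   from os.path import *   # star import: unclear which names enter the namespace
#   import sys              # imported but never used
# After: import only the names that are actually used.
from os.path import exists, join

print(join("logs", "run_0"), exists("."))
```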
* Initial commit removing memories from C# and deprecating memory fields in proto
* initial changes to Python
* Adding functionalities
* Fixes
* adding the memories to the dictionary
* Fixing bugs
* tweaks
* Resolving bugs
* Recreating the proto
* Addressing comments
* Passing by reference does not work. Do not merge
* Fixing huge bug in Inference
* Applying patches
* fixing tests
* Addressing comments
* Renaming variable to reflect type
* test
* Modifying the .proto files
* attempt 1 at refactoring Python
* works for ppo hallway
* changing the documentation
* now works with both SAC and PPO, for both training and inference
* Need to fix the tests
* TODOs:
- Fix the demonstration recorder
- Fix the demonstration loader
- verify the intrinsic reward signals work
- Fix the tests on Python
- Fix the C# tests
* Regenerating the protos
* fix proto typo
* protos and modifying the C# demo recorder
* modified the demo loader
* Demos are loading
* IMPORTANT: These are the files used for conversion from the old format to the new format
* Modified all the demo files
* Fixing all the tests
* fixing ci
* addressing comments
* removing reference to memories in the ll-api
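
A hedged sketch of what keeping recurrent memories in a Python-side dictionary (rather than in the proto / low-level API) could look like; the class, method names, and shapes are assumptions for illustration.

```python
# Hedged sketch: store the latest RNN memory vector per agent id on the
# Python side between inference steps.
from typing import Dict, Iterable

import numpy as np


class MemoryStore:
    """Keeps the latest memory vector per agent between inference steps."""

    def __init__(self, memory_size: int):
        self.memory_size = memory_size
        self.memories: Dict[int, np.ndarray] = {}

    def get(self, agent_ids: Iterable[int]) -> np.ndarray:
        # Unknown agents start with a zero memory vector.
        return np.array([
            self.memories.get(agent_id, np.zeros(self.memory_size))
            for agent_id in agent_ids
        ])

    def save(self, agent_ids: Iterable[int], new_memories: np.ndarray) -> None:
        for agent_id, memory in zip(agent_ids, new_memories):
            self.memories[agent_id] = memory

    def remove(self, agent_ids: Iterable[int]) -> None:
        # Called when agents are done so stale memories are not reused.
        for agent_id in agent_ids:
            self.memories.pop(agent_id, None)
```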