* added the pypiwin32 package
* fixed the breakage on macOS, and partially fixed compatibility with pytest versions above 4
* added a note for Windows users to help them get unstuck
* resolved a review comment
* Create new class (RewardSignal) that represents a reward signal.
* Add value heads for each reward signal in the PPO model.
* Make summaries agnostic to the type of reward signals, and log weighted rewards per reward signal.
* Move extrinsic and curiosity rewards into this new structure.
* Allow defining multiple reward signals in YAML file. Add documentation for this new structure.
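For reference, a sketch of what a multi-reward-signal section of the trainer YAML might look like (keys and values here are illustrative of the shape, not a verbatim excerpt from the shipped configs):

```yaml
reward_signals:
  # Each key names a reward signal; each signal gets its own value head
  # in the PPO model, plus its own strength (weight) and gamma (discount).
  extrinsic:
    strength: 1.0
    gamma: 0.99
  curiosity:
    strength: 0.02
    gamma: 0.99
```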
Previously in v0.8 we added parallel environments via the
SubprocessUnityEnvironment, which exposed the same abstraction as
UnityEnvironment while actually wrapping many parallel environments
via subprocesses.
Wrapping many environments with the same interface as a single
environment had some downsides, however:
* Ordering needed to be preserved for agents across different envs,
complicating the SubprocessEnvironment logic
* Asynchronous environments with steps taken out of sync with the
trainer aren't viable with the Environment abstraction
This PR introduces a new EnvManager abstraction which exposes a
reduced subset of the UnityEnvironment abstraction and a
SubprocessEnvManager implementation which replaces the
SubprocessUnityEnvironment.
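A minimal sketch of what such a reduced interface can look like (method names are illustrative, not the exact API):

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class EnvManager(ABC):
    """Sketch of a reduced environment-management interface.

    Exposing only batch-level step/reset means implementations are free
    to run environments out of sync with the trainer and need not
    preserve per-agent ordering across parallel environments.
    """

    @abstractmethod
    def step(self) -> List[Dict]:
        """Advance the managed environments and return their step results."""

    @abstractmethod
    def reset(self) -> List[Dict]:
        """Reset all managed environments."""

    @abstractmethod
    def close(self) -> None:
        """Shut down all subprocesses and environments."""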
Fixes shuffling issue with newer versions of numpy (#1798).
* make get_value_estimates output a dict of floats
* Use np.append instead of converting to a list and back
* Add type hints and test for get_value_estimates
* Don't 0 value bootstrap for GAIL and Curiosity
* Add gradient penalties to the GAN to help with stability (a sketch follows this list)
* Add gail_config.yaml with GAIL examples
* Cleaned up trainer_config.yaml and removed unnecessary gammas
* Documentation updates
* Code cleanup
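The gradient-penalty bullet above refers to a WGAN-GP-style stabilizer for the GAIL discriminator. A minimal PyTorch-flavored sketch of the general technique (the trainers were TensorFlow-based at the time, so this is a paraphrase; names and the coefficient are illustrative):

```python
import torch

def gradient_penalty(discriminator, expert_batch, policy_batch, coeff=10.0):
    # Interpolate between expert and policy samples (assumes 2-D batches).
    alpha = torch.rand(expert_batch.size(0), 1)
    interp = (alpha * expert_batch + (1 - alpha) * policy_batch).requires_grad_(True)
    d_out = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=d_out,
        inputs=interp,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True,
    )[0]
    # Penalize the discriminator's gradient norm for deviating from 1.
    return coeff * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```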
We have been ignoring unused imports and star imports via flake8. These are
both bad practice and grow over time without automated checking. This
commit attempts to fix all existing import errors and add back the corresponding
flake8 checks.
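For context, these are the patterns being flagged; F401, F403, and F405 are the standard pyflakes codes involved:

```python
import os            # F401: 'os' imported but unused

from math import *   # F403: star import; unable to detect undefined names

print(sqrt(2.0))     # F405: 'sqrt' may be undefined, or defined from star imports
```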
* Feature deprecation: Online Behavioral Cloning
In this PR:
- Delete the online_bc_trainer
- Delete the tests for online BC
- Delete the configuration file for online BC training
* Deleting the BCTeacherHelper.cs Script
TODO:
- Remove usages in the scene
- Documentation Edits
*DO NOT MERGE*
* IMPORTANT: REMOVED ALL IL SCENES
- Removed all the IL scenes from the Examples folder
* Removed all mentions of online BC training in the Documentation
* Made a note in the Migrating.md doc about the removal of the Online BC feature.
* Modified the Academy UI to remove the control checkbox and replaced it with a "train in the editor" checkbox
* Removed the Broadcast functionality from the non-Learning brains
* Bug fix
* Note that the scenes are broken since the BroadcastHub has changed
* Modified the LL-API for Python to remove the broadcasting functionality.
* All unit tests are running
* Modifie...
* Add test for curiosity + SAC
* Use actions for all curiosity (need to test on PPO)
* Fix issue with reward signals updating multiple times
* Put curiosity actions in the right placeholder
* Test PPO curiosity update
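These bullets concern the curiosity (ICM-style) reward signal. As a rough numpy illustration of where the intrinsic reward comes from (function and argument names are hypothetical):

```python
import numpy as np

def curiosity_reward(predicted_next_encoding: np.ndarray,
                     next_encoding: np.ndarray,
                     strength: float = 0.02) -> np.ndarray:
    # The forward model predicts the next state's encoding from the current
    # encoding and the action taken; its squared prediction error is the
    # intrinsic reward, so hard-to-predict (novel) transitions pay more.
    return strength * 0.5 * np.mean(
        (predicted_next_encoding - next_encoding) ** 2, axis=-1)
```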
* ISensor and SensorBase (a rough interface paraphrase follows this list)
* camera and rendertex first pass
* use isensors for visual obs
* Update gridworld with CameraSensors
* compressed obs for reals
* Remove AgentInfo.visualObservations
* better separation of train and inference sensor calls
* compressed obs proto - need CI to generate code
* int32
* get proto name right
* run protoc locally for new files
* apply generated proto patch (pyi files were weird)
* don't repeat bytes
* hook up compressedobs
* don't send BrainParameters until there's an AgentInfo
* python BrainParameters now needs an AgentInfo to create
* remove last (I hope) dependency on camerares
* remove CameraResolutions and AgentInfo.visual_observations
* update mypy-protobuf version
* cleanup todos
* python cleanup
* more unit test fixes
* more unit test fix
* camera sensors for VisualFood collector, record demo
* SensorCompon...
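Roughly, the sensor abstraction unifies vector, camera, and render-texture observations behind one interface. A loose Python paraphrase of its shape (the real interface is C#, and the method names here are only indicative):

```python
from abc import ABC, abstractmethod
from typing import List

class Sensor(ABC):
    """Loose paraphrase of an ISensor-style abstraction."""

    @abstractmethod
    def get_observation_shape(self) -> List[int]:
        """Shape of the observation this sensor produces."""

    @abstractmethod
    def get_observation(self) -> bytes:
        """Observation payload, possibly compressed (e.g. PNG for cameras)."""

    @abstractmethod
    def get_name(self) -> str:
        """Name identifying this sensor on the agent."""
```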
This is the first in a series of PRs intended to move the agent processing logic (add_experiences and process_experiences) out of the trainer and into a separate class. The plan is to do so in steps:
- Split the processing buffer (which tracks agent trajectories and assembles them) from the update buffer (completed trajectories to be used for training) within the Trainer (this PR)
- Move the processing buffer and add/process experiences into a separate, outside class
- Change the data type of the update buffer to be a Trajectory
- Place and read Trajectories from queues, and add a subscription mechanism for both AgentProcessor and Trainers
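A schematic of the queue-and-subscription design the last step describes (all names here are hypothetical stand-ins):

```python
from queue import Empty, Queue
from typing import List, NamedTuple

class Trajectory(NamedTuple):        # stand-in for the real trajectory type
    agent_id: str
    steps: List[dict]

trajectory_queue: "Queue[Trajectory]" = Queue()

def publish(trajectory: Trajectory) -> None:
    # AgentProcessor side: push each completed trajectory onto the queue.
    trajectory_queue.put(trajectory)

def drain(update_buffer: List[Trajectory]) -> None:
    # Trainer side: subscribe by draining the queue into the update buffer.
    while True:
        try:
            update_buffer.append(trajectory_queue.get_nowait())
        except Empty:
            break
```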
* initial commit for LL-API
* fixing ml-agents-envs tests
* Implementing action masks (see the masking sketch after this list)
* training is fixed for 3DBall
* All tests fixed; gym is still broken, and documentation changes are missing
* adding case where no vector obs
* Fixed Gym
* fixing tests of float64
* fixing float64
* reverting some of brain.py
* removing old proto apis
* comment type fixes
* added properties to AgentGroupSpec and edited the notebooks.
* clearing the notebook outputs
* Update gym-unity/gym_unity/tests/test_gym.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* Update ml-agents-envs/mlagents/envs/base_env.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* addressing first comments
* NaN checks for r...
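On the action-mask bullet: the idea is that disallowed discrete actions receive zero probability when sampling. A minimal numpy sketch of the general technique (names are illustrative, not the shipped API):

```python
import numpy as np

def sample_masked_action(logits: np.ndarray, mask: np.ndarray) -> int:
    # mask[i] is True when action i is allowed this step; disallowed
    # actions get -inf so they receive zero probability after softmax.
    masked = np.where(mask, logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))
```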
* added team id and identifier concat to behavior parameters
* splitting brain params into brain name and identifiers
* set team id in prefab
* receives brain_name and identifier on the Python side
* rebased with develop
* Correctly calls concatBehaviorIdentifiers (a sketch follows this list)
* trainer_controller expects name_behavior_ids
* add_policy and create_policy separated
* adjusting tests to expect trainer.add_policy to be called
* fixing tests
* fixed naming ...
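The concatBehaviorIdentifiers bullets describe combining a behavior name with a team id into a single name_behavior_id; schematically (the exact separator and format here are illustrative):

```python
def concat_behavior_identifiers(brain_name: str, team_id: int) -> str:
    # Combine a behavior's name with its team id so that multiple teams
    # sharing a behavior get distinct identifiers on the Python side.
    return f"{brain_name}?team={team_id}"

assert concat_behavior_identifiers("Striker", 0) == "Striker?team=0"
```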
* [bug-fix] Fix regression in --initialize-from feature (#4086)
* Fixed text on the GettingStarted page specifying the logdir for TensorBoard. It previously pointed to a summaries directory, which no longer exists; results are now saved to the results dir. (#4085)
* [refactor] Remove nonfunctional `output_path` option from TrainerSettings (#4087)
* Reverting bug introduced in #4071 (#4101)
Co-authored-by: Scott <Scott.m.jordan91@gmail.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Update Dockerfile
* Separate send environment data from reset (#4128)
* Fixed a typo on ML-Agents-Overview.md (#4130)
Removed a redundant "to" from the sentence; it was probably a typo in the document.
* Updated the badge’s link to point to the newest doc version
* Updated all of the docs to point to release_3_doc
* Fix 3DBall and 3DBallHard SAC regressions (#4132)
* Move memory validation to settings
* Update docs
* Add settings test
* Update to release_3 in installation.md (#4144)
* Rename to SideChannelManager, with backwards compatibility (#4137)
* Remove comment about logo with --help (#4148)
* [bugfix] Make FoodCollector heuristic playable (#4147)
* Make FoodCollector heuristic playable
* Update changelog
* script to check for old release links and references (#4153)
* Remove package validation suite from Project (#4146)
* RayPerceptionSensor: handle empty and invalid tags (#4155...
This change adds an export to .nn for each checkpoint generated by
RLTrainer, and adds an NNCheckpointManager to track the generated
checkpoints and the final model in training_status.json.
Co-authored-by: Jonathan Harper <jharper+moar@unity3d.com>
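For a sense of the bookkeeping involved, a hypothetical shape for checkpoint records in training_status.json (field names are illustrative, not the exact schema):

```python
import json

checkpoint_record = {
    "steps": 500000,
    "file_path": "results/run_id/Behavior/Behavior-500000.nn",
    "creation_time": 1594000000.0,
}
final_model = {"steps": 1000000, "file_path": "results/run_id/Behavior.nn"}

print(json.dumps(
    {"checkpoints": [checkpoint_record], "final_checkpoint": final_model},
    indent=2))
```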