* Remove env creation logic from TrainerController
Currently TrainerController includes logic related to creating the
UnityEnvironment, which causes poor separation of concerns between
the learn.py application script, TrainerController and UnityEnvironment:
* TrainerController must know about the proper way to instantiate the
UnityEnvironment, which may differ from application to application.
This also makes mocking or subclassing UnityEnvironment more
difficult.
* Many arguments are passed by learn.py to TrainerController and passed
along to UnityEnvironment.
This change moves environment construction logic into learn.py, as part
of the greater refactor to separate trainer logic from actor / environment.
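A minimal sketch of the resulting split (names are illustrative; the import path follows the LL-API used elsewhere in these notes):

```python
from typing import Callable
from mlagents.envs.environment import UnityEnvironment

# learn.py owns environment construction and hands TrainerController a
# ready-made environment (or a factory), so TrainerController no longer
# needs to know how a UnityEnvironment is instantiated or mocked.
def make_env_factory(env_path: str, seed: int) -> Callable[[int], UnityEnvironment]:
    def create(worker_id: int) -> UnityEnvironment:
        return UnityEnvironment(file_name=env_path, worker_id=worker_id, seed=seed)
    return create
```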
* Switched default Mac GFX API to Metal
* Added Barracuda pre-0.1.5
* Added basic integration with Barracuda Inference Engine
* Use predefined outputs the same way as for TF engine
* Fixed discrete action + LSTM support
* Switch Unity Mac Editor to Metal GFX API
* Fixed null model handling
* All examples converted to support Barracuda
* Added model conversion from Tensorflow to Barracuda
copied the barracuda.py file to ml-agents/mlagents/trainers
copied the tensorflow_to_barracuda.py file to ml-agents/mlagents/trainers
modified the tensorflow_to_barracuda.py file so it can be called from mlagents
modified ml-agents/mlagents/trainers/policy.py to convert the TF models to Barracuda-compatible .bytes files (see the conversion sketch after this list)
* Added missing iOS BLAS plugin
* Added forgotten prefab changes
* Removed GLCore GFX backend for Mac, because it doesn't support Compute shaders
* Exposed GPU support for LearningBrain inference
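A hedged sketch of the conversion hook described in the notes above (module path from those notes; the exact `convert` signature is an assumption):

```python
from mlagents.trainers import tensorflow_to_barracuda as tf2bc

# convert a frozen TF graph into a Barracuda-compatible .bytes file
# (paths here are illustrative)
tf2bc.convert("models/ppo/frozen_graph_def.pb", "models/ppo/model.bytes")
```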
...
* added the pypiwin32 package
* fixed the breakage on Mac; fixed some of the pytest issues with versions above 4
* added a note to help Windows users get unstuck
* resolved the review comment
* Move 'take_action' into Policy class
This refactor is part of Actor-Trainer separation. Since policies
will be distributed across actors in separate processes which share
a single trainer, taking an action should be the responsibility of
the policy.
This change makes a few smaller changes:
* Combines `take_action` logic between trainers, making it more
generic
* Adds an `ActionInfo` data class to be more explicit about the
data returned by the policy, only used by TrainerController and
policy for now.
* Moves trainer stats logic out of `take_action` and into
`add_experiences`
* Renames 'take_action' to 'get_action'
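A minimal sketch of the `ActionInfo` shape implied above (field names are assumptions):

```python
from typing import Any, Dict, NamedTuple

class ActionInfo(NamedTuple):
    action: Any
    value: Any
    outputs: Dict[str, Any]

# action selection now lives on the policy:
#   action_info = policy.get_action(brain_info)
```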
This commit adds support for running Unity environments in parallel.
An abstract base class was created for UnityEnvironment which a new
SubprocessUnityEnvironment inherits from.
SubprocessUnityEnvironment communicates through a pipe in order to
send commands which will be run in parallel to its workers.
A few significant changes needed to be made as a side-effect:
* UnityEnvironments are created via a factory method (a closure)
rather than being directly created by the main process.
* In mlagents-learn "worker-id" has been replaced by "base-port"
and "num-envs", and worker_ids are automatically assigned across runs.
* BrainInfo objects now convert all fields to numpy arrays or lists to
avoid serialization issues.
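A hypothetical sketch of the pipe-based command loop described above (the factory closure is the one built by the main process; worker ids map onto base-port offsets):

```python
import multiprocessing
from multiprocessing.connection import Connection

def worker(conn: Connection, env_factory, worker_id: int) -> None:
    env = env_factory(worker_id)  # factory closure from the main process
    try:
        while True:
            cmd, payload = conn.recv()
            if cmd == "step":
                conn.send(env.step(payload))
            elif cmd == "reset":
                conn.send(env.reset())
            elif cmd == "close":
                break
    finally:
        env.close()

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    # multiprocessing.Process(target=worker, args=(child_conn, env_factory, 0)).start()
```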
On Windows the interrupt for subprocesses works in a different
way from OSX/Linux. The result is that child subprocesses and
their pipes may close while the parent process is still running
during a keyboard (ctrl+C) interrupt.
To handle this, this change adds handling for EOFError and
BrokenPipeError exceptions when interacting with subprocess
environments. Additional management is also added to ensure
that, when using parallel runs via the "num-runs" option, the
threads for each run are joined and KeyboardInterrupts are
handled.
These changes made the "_win_handler" we used to specially
manage interrupts on Windows unnecessary, so they have been
removed.
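A minimal sketch of the defensive receive this implies (a sketch, not the exact implementation):

```python
from multiprocessing.connection import Connection

def recv_from_worker(conn: Connection):
    try:
        return conn.recv()
    except (BrokenPipeError, EOFError):
        # a worker (or its pipe) died during shutdown; treat it as an interrupt
        raise KeyboardInterrupt
```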
A change was made to the way the "train_mode" flag was used by
environments when SubprocessUnityEnvironment was added which was
intended to be part of a separate change set. This broke the CLI
'--slow' flag. This change undoes those changes, so that the slow
/ fast simulation option works correctly.
As a minor additional change, the remaining tests from top level
'tests' folders have been moved into the new test folders.
When using parallel SubprocessUnityEnvironment instances along
with Academy Done(), a new step might be taken when reset should
have been called because some environments may have been done while
others were not (making "global done" less useful).
This change manages the reset on `global_done` at the level of the
environment worker, and removes the global reset from
TrainerController.
* WIP precommit on top level
* update CI
* circleci fixes
* intentionally fail black
* use --show-diff-on-failure in CI
* fix command order
* rebreak a file
* apply black
* WIP enable mypy
* run mypy on each package
* fix trainer_metrics mypy errors
* more mypy errors
* more mypy
* Fix some partially typed functions
* types for take_action_outputs
* fix formatting
* cleanup
* generate stubs for proto objects
* fix ml-agents-env mypy errors
* disallow-incomplete-defs for gym-unity
* Add CI notes to CONTRIBUTING.md
* Create new class (RewardSignal) that represents a reward signal.
* Add value heads for each reward signal in the PPO model.
* Make summaries agnostic to the type of reward signals, and log weighted rewards per reward signal.
* Move extrinsic and curiosity rewards into this new structure.
* Allow defining multiple reward signals in YAML file. Add documentation for this new structure.
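An illustrative fragment of the YAML structure this enables (hyperparameter values are placeholders, not recommendations):

```yaml
reward_signals:
  extrinsic:
    strength: 1.0
    gamma: 0.99
  curiosity:
    strength: 0.02
    gamma: 0.99
```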
At each step, an unused `last_reward` variable in the TF graph is
updated in our PPO trainer. There are also related unused methods
in various places in the codebase. This change removes them.
Previously in v0.8 we added parallel environments via the
SubprocessUnityEnvironment, which exposed the same abstraction as
UnityEnvironment while actually wrapping many parallel environments
via subprocesses.
Wrapping many environments with the same interface as a single
environment had some downsides, however:
* Ordering needed to be preserved for agents across different envs,
complicating the SubprocessEnvironment logic
* Asynchronous environments with steps taken out of sync with the
trainer aren't viable with the Environment abstraction
This PR introduces a new EnvManager abstraction which exposes a
reduced subset of the UnityEnvironment abstraction and a
SubprocessEnvManager implementation which replaces the
SubprocessUnityEnvironment.
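A hypothetical sketch of the reduced interface (member names are assumptions):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional

class EnvManager(ABC):
    @abstractmethod
    def step(self) -> List[Any]:
        """Step all managed environments; one result per environment."""

    @abstractmethod
    def reset(self, config: Optional[Dict] = None) -> List[Any]:
        """Reset all managed environments."""

    @property
    @abstractmethod
    def external_brains(self) -> Dict[str, Any]:
        """Brain parameters gathered from the managed environments."""
```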
TrainerController depended on an external_brains dictionary with
brain params in its constructor but only used it in a single function
call. The same function call (start_learning) takes the environment
as an argument, which is the source of the external_brains.
This change removes the dependency of TrainerController on external
brains and removes the two class members related to external_brains
and retrieves the brains directly from the environment.
Based on the new reward signals architecture, add BC pretrainer and GAIL for PPO. Main changes:
- A new GAILRewardSignal and GAILModel for GAIL/VAIL
- A BCModule component (not a reward signal) to do pretraining during RL
- Documentation for both of these
- Change to Demo Loader that lets you load multiple demo files in a folder
- Example Demo files for all of our tested sample environments (for future regression testing)
Fixes shuffling issue with newer versions of numpy (#1798).
* make get_value_estimates output a dict of floats
* Use np.append instead of convert to list, unconvert
* Add type hints and test for get_value_estimates
* Don't 0 value bootstrap for GAIL and Curiosity
* Add gradient penalties to GAN to help with stability
* Add gail_config.yaml with GAIL examples
* Cleaned up trainer_config.yaml and unnecessary gammas
* Documentation updates
* Code cleanup
* Removes unused SubprocessEnvManager import in trainer_controller
* Removes unused `steps` argument to `TrainerController._save_model`
* Consolidates unnecessary branching for curricula in
`TrainerController.advance`
* Moves `reward_buffer` into `TFPolicy` from `PPOPolicy` and adds
`BCTrainer` support so that we don't have a broken interface /
undefined behavior when BCTrainer is used with curricula.
* Add Sampler and SamplerManager
* Enable resampling of reset parameters during training
* Documentation for Sampler and example YAML configuration file
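An illustrative sampler configuration of the kind documented (the `mass` reset parameter and all values are placeholders):

```yaml
resampling-interval: 5000
mass:
  sampler-type: "uniform"
  min_value: 0.5
  max_value: 10
```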
This change moves trainer initialization outside of TrainerController,
reducing some of the constructor arguments of TrainerController and
setting up the ability for trainers to be initialized in the case where
a TrainerController isn't needed.
- Move common functions to trainer.py and model.py from ppo/trainer.py, ppo/policy.py and ppo/model.py
- Introduce RLTrainer class and move most of add_experiences and some common reward
signal code there. PPO and SAC will inherit from this, not so much BC Trainer.
- Add methods to Buffer to enable sampling, truncating, and save/loading (see the sketch after this list).
- Add scoping to create encoders in model.py
- Fix issue with BC Trainer `increment_steps`.
- Fix issue with Demonstration Recorder and visual observations (memory leak fix was deleting vis obs too early).
- Make Samplers sample from the same random seed every time, so generalization runs are repeatable.
- Fix crash when using GAIL, Curiosity, and visual observations together.
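A minimal sketch of the Buffer helpers referenced in this list (names and layout are assumptions):

```python
from typing import Dict, List

import numpy as np

class UpdateBuffer:
    def __init__(self) -> None:
        self.fields: Dict[str, List] = {}

    def sample_mini_batch(self, batch_size: int) -> Dict[str, List]:
        if not self.fields:
            return {}
        size = min(len(values) for values in self.fields.values())
        idx = np.random.randint(0, size, batch_size)
        return {key: [values[i] for i in idx] for key, values in self.fields.items()}

    def truncate(self, max_length: int) -> None:
        for key, values in self.fields.items():
            self.fields[key] = values[-max_length:]
```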
* Adds evaluate_batch to reward signals. Evaluates on minibatch rather than on BrainInfo.
* Changes the way reward signal results are reported in rl_trainer so that we get the pure, unprocessed environment reward separate from the reward signals.
* Moves end_episode to rl_trainer
* Fixed bug with BCModule with RNN
* Add Soft Actor-Critic model, trainer, and policy and sac_trainer_config.yaml
* Add documentation for SAC and tweak PPO documentation to reference the new pages.
* Add tests for SAC, change simple_rl test to run both PPO and SAC.
* Initial Commit
* Remove the Academy Done flag from the protobuf definitions
* remove global_done in the environment
* Removed irrelevant unitTests
* Remove the max_step from the Academy inspector
* Removed global_done from the python scripts
* Modified and removed some tests
* This actually breaks neither curriculum nor generalization training
* Replace global_done with reserved.
Addressing Chris Elion's comment regarding the deprecation of the global_done field. We will use a reserved field to make sure global_done does not get replaced in the future, causing errors.
* Removed unused fake brain
* Tested that the first call to step was the same as a reset call
* black formatting
* Added documentation changes
* Editing the migrating doc
* Addressing comments on the Migrating doc
* Addressing comments :
- Removing dead code
- Resolving forgotten merge conflicts
- Editing documentation...
We have been ignoring unused imports and star imports via flake8. These are
both bad practice and grow over time without automated checking. This
commit attempts to fix all existing import errors and add back the corresponding
flake8 checks.
* Feature Deprecation : Online Behavioral Cloning
In this PR :
- Delete the online_bc_trainer
- Delete the tests for online bc
- delete the configuration file for online bc training
* Deleting the BCTeacherHelper.cs Script
TODO :
- Remove usages in the scene
- Documentation Edits
*DO NOT MERGE*
* IMPORTANT : REMOVED ALL IL SCENES
- Removed all the IL scenes from the Examples folder
* Removed all mentions of online BC training in the Documentation
* Made a note in the Migrating.md doc about the removal of the Online BC feature.
* Modified the Academy UI to remove the control checkbox and replaced it with a train in the editor checkbox
* Removed the Broadcast functionality from the non-Learning brains
* Bug fix
* Note that the scenes are broken since the BroadcastHub has changed
* Modified the LL-API for Python to remove the broadcasting functionality.
* All unit tests are running
* Modified the scen...
* Add test for curiosity + SAC
* Use actions for all curiosity (need to test on PPO)
* Fix issue with reward signals updating multiple times
* Put curiosity actions in the right placeholder
* Test PPO curiosity update
* ISensor and SensorBase
* camera and rendertex first pass
* use isensors for visual obs
* Update gridworld with CameraSensors
* compressed obs for reals
* Remove AgentInfo.visualObservations
* better separation of train and inference sensor calls
* compressed obs proto - need CI to generate code
* int32
* get proto name right
* run protoc locally for new files
* apply generated proto patch (pyi files were weird)
* don't repeat bytes
* hook up compressedobs
* don't send BrainParameters until there's an AgentInfo
* python BrainParameters now needs an AgentInfo to create
* remove last (I hope) dependency on camerares
* remove CameraResolutions and AgentInfo.visual_observations
* update mypy-protobuf version
* cleanup todos
* python cleanup
* more unit test fixes
* more unit test fix
* camera sensors for VisualFood collector, record demo
* SensorCompon...
* Initial commit removing memories from C# and deprecating memory fields in proto
* initial changes to Python
* Adding functionalities
* Fixes
* adding the memories to the dictionary
* Fixing bugs
* tweaks
* Resolving bugs
* Recreating the proto
* Addressing comments
* Passing by reference does not work. Do not merge
* Fixing huge bug in Inference
* Applying patches
* fixing tests
* Addressing comments
* Renaming variable to reflect type
* test
* Modifying the .proto files
* attempt 1 at refactoring Python
* works for ppo hallway
* changing the documentation
* now works with both sac and ppo both training and inference
* Need to fix the tests
* TODOs :
- Fix the demonstration recorder
- Fix the demonstration loader
- verify the intrinsic reward signals work
- Fix the tests on Python
- Fix the C# tests
* Regenerating the protos
* fix proto typo
* protos and modifying the C# demo recorder
* modified the demo loader
* Demos are loading
* IMPORTANT : THESE ARE THE FILES USED FOR CONVERSION FROM OLD TO NEW FORMAT
* Modified all the demo files
* Fixing all the tests
* fixing ci
* addressing comments
* removing reference to memories in the ll-api
This is the first in a series of PRs that intend to move the agent processing logic (add_experiences and process_experiences) out of the trainer and into a separate class. The plan is to do so in steps:
- Split the processing buffers (keeping track of agent trajectories and assembling trajectories) and update buffer (complete trajectories to be used for training) within the Trainer (this PR)
- Move the processing buffer and add/process experiences into a separate, outside class
- Change the data type of the update buffer to be a Trajectory
- Place and read Trajectories from queues, add subscription mechanism for both AgentProcessor and Trainers
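A minimal sketch of the queue-based handoff planned above:

```python
from dataclasses import dataclass, field
from queue import Queue
from typing import Dict, List

@dataclass
class Trajectory:
    steps: List[Dict] = field(default_factory=list)

trajectory_queue: "Queue[Trajectory]" = Queue()

# producer side (AgentProcessor): publish an assembled trajectory
trajectory_queue.put(Trajectory(steps=[{"obs": [0.0], "reward": 1.0}]))

# consumer side (Trainer): drain subscribed trajectories into the update buffer
while not trajectory_queue.empty():
    trajectory = trajectory_queue.get()
```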
* [WIP] Side Channel initial layout
* Working prototype for raw bytes
* fixing format mistake
* Added some errors and some unit tests in C#
* Added the side channel for the Engine Configuration. (#2958)
* Added the side channel for the Engine Configuration.
Note that this change does not require modifying a lot of files :
- Adding a sender in Python
- Adding a receiver in C#
- subscribe the receiver to the communicator (here is a one liner in the Academy)
- Add the side channel to the Python UnityEnvironment (not represented here)
Adding the side channel to the environment would look like such :
```python
from mlagents.envs.environment import UnityEnvironment
from mlagents.envs.side_channel.raw_bytes_channel import RawBytesChannel
from mlagents.envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
channel0 = RawBytesChannel()
channel1 = EngineConfigurationChannel()
# pass the channels into the environment (constructor kwarg per the LL-API)
env = UnityEnvironment(side_channels=[channel0, channel1])
```
* initial commit for LL-API
* fixing ml-agents-envs tests
* Implementing action masks
* training is fixed for 3DBall
* Tests all fixed, gym is broken and missing documentation changes
* adding case where no vector obs
* Fixed Gym
* fixing tests of float64
* fixing float64
* reverting some of brain.py
* removing old proto apis
* comment type fixes
* added properties to AgentGroupSpec and edited the notebooks.
* clearing the notebook outputs
* Update gym-unity/gym_unity/tests/test_gym.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* Update gym-unity/gym_unity/tests/test_gym.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* Update ml-agents-envs/mlagents/envs/base_env.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* Update ml-agents-envs/mlagents/envs/base_env.py
Co-Authored-By: Chris Elion <chris.elion@unity3d.com>
* addressing first comments
* NaN checks for r...
Our tests were using pytest fixtures by actually calling the fixture
methods, but in newer 5.x versions of pytest this causes test failures.
The recommended method for using fixtures is dependency injection.
This change updates the relevant test fixtures to either not use
`pytest.fixture` or to use dependency injection to pass the fixture.
The version range requirements in `test_requirements.txt` were also
updated accordingly.
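For reference, the pattern in question (a minimal sketch, not a test from the repo):

```python
import pytest

@pytest.fixture
def dummy_config():
    return {"batch_size": 32}

# old pattern (fails under pytest 5.x): calling the fixture directly,
# e.g. `config = dummy_config()`
# recommended pattern: request the fixture via dependency injection
def test_uses_fixture(dummy_config):
    assert dummy_config["batch_size"] == 32
```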
* added team id and identifier concat to behavior parameters
* splitting brain params into brain name and identifiers
* set team id in prefab
* receives brain_name and identifier on python side
* rebased with develop
* Correctly calls concatBehaviorIdentifiers
* trainer_controller expects name_behavior_ids
* add_policy and create_policy separated
* adjusting tests to expect trainer.add_policy to be called
* fixing tests
* fixed naming ...
Previously the Curriculum and MetaCurriculum classes required file / folder
paths for initialization. These methods loaded the configuration for the
curricula from the filesystem. Requiring files for configuring curricula
makes testing and updating our config format more difficult.
This change moves the file loading into static methods, so that Curricula /
MetaCurricula can be initialized from dictionaries only.
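A minimal sketch of the resulting split (the config keys shown are illustrative):

```python
import json

class Curriculum:
    def __init__(self, brain_name: str, config: dict):
        self.brain_name = brain_name
        self.config = config

    @staticmethod
    def load_curriculum_file(path: str) -> dict:
        with open(path) as f:
            return json.load(f)

# tests can now build curricula without touching the filesystem
curriculum = Curriculum("3DBall", {"measure": "reward", "thresholds": [0.5]})
```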
The "num-runs" command-line option provides the ability to run multiple
identically-configured training runs in separate processes by running
mlagents-learn only once. This is a rarely used ML-Agents feature,
but it adds complexity to other parts of the system by adding the need
to support multiprocessing and managing of ports for the parallel training
runs. It also doesn't provide truly reproducible experiments, since there
is no guarantee of resource isolation between the trials.
This commit removes the --num-runs option, with the idea that users will
manage parallel or sequential runs of the same experiment themselves in the
future.
This change adds a new 'mlagents-run-experiment' endpoint which
accepts a single YAML/JSON file providing all of the information that
mlagents-learn accepts via command-line arguments and file inputs.
As part of this change the curriculum configuration is simplified to
accept only a single file for all the curricula in an environment
rather than a file for each behavior.
This PR makes it so that the env_manager only sends one current BrainInfo and the previous actions (if any) to the AgentManager. The list of agents was added to the ActionInfo and used appropriately.
This PR moves the AgentManagers from the TrainerController into the env_manager. This way, the TrainerController only needs to create the components (Trainers, AgentManagers) and call advance() on the EnvManager and the Trainers.
Convert the UnitySDK to a Packman Package.
- Separate Examples into a sample project.
- Move core UnitySDK Code into com.unity.ml-agents.
- Create asmdefs for the ml-agents package.
- Add package validation tests for win/linux/max.
- Update protobuf generation scripts.
- Add Barracuda as a package dependency for ML-Agents. (users no longer have to install it themselves).
* [bug-fix] Increase height of wall in CrawlerStatic (#3650)
* [bug-fix] Improve performance for PPO with continuous actions (#3662)
* Corrected a typo in the name of a function (#3670)
OnEpsiodeBegin was corrected to OnEpisodeBegin in the Migrating.md document
* Add Academy.AutomaticSteppingEnabled to migration (#3666)
* Fix editor port in Dockerfile (#3674)
* Hotfix memory leak on Python (#3664)
* Hotfix memory leak on Python
* Fixing
* Fixing a bug in the heuristic policy. A decision should not be requested when the agent is done
* [bug-fix] Make Python able to deal with 0-step episodes (#3671)
* adding some comments
Co-authored-by: Ervin T <ervin@unity3d.com>
* Remove vis_encode_type from list of required (#3677)
* Update changelog (#3678)
* Shorten timeout duration for environment close (#3679)
The timeout duration for closing an environment was set to the
same duration as the timeout when waiting ...
The "docker target" feature and associated command-line flag
--docker-target-name were created for use with the now-deprecated
Docker setup. This feature redirects the paths used by learn.py
for the environment and config files to be based from a directory
other than the current working directory. Additionally it wrapped
the environment execution with xvfb-run.
This commit removes the "docker target" feature because:
* Renaming the paths doesn't fix any problem. Absolute paths can
already be passed for configs and environment executables.
* Use of xserver, Xvfb, or xvfb-run are independent of mlagents-learn
and can be used outside of the mlagents-learn call. Further, xvfb-run
is not the only solution for software rendering.
This commit surfaces exceptions from environment worker subprocesses,
and changes the SubprocessEnvManager to raise those exceptions when
caught. Additionally TrainerController was changed to treat environment
exceptions differently than KeyboardInterrupts. We now raise the
environment exceptions after exporting the model, so that ML-Agents will
correctly exit with a non-zero return code.
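A hypothetical sketch of the resulting control flow (helper names are illustrative):

```python
def start_learning(env_manager, training_step, export_model, num_steps: int) -> None:
    try:
        for _ in range(num_steps):
            training_step(env_manager)  # may raise a surfaced worker exception
    except KeyboardInterrupt:
        pass  # user-initiated stop: fall through, export, exit cleanly
    except Exception:
        export_model()
        raise  # re-raise after export so the return code is non-zero
    export_model()
```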
* [skip ci] WIP : Modify the base_env.py file
* [skip ci] typo
* [skip ci] renamed some methods
* [skip ci] Incorporated changes from our meeting
* [skip ci] everything is broken
* [skip ci] everything is broken
* [skip ci] formatting
* Fixing the gym tests
* Fixing bug, C# has an error that needs fixing
* Fixing the test
* relaxing the threshold of 0.99 to 0.9
* fixing the C# side
* formatting
* Fixed the LL-API integration test
* [Increasing steps for testing]
* Fixing the python tests
* Need __contains__ after all
* changing the max_steps in the tests
* addressing comments
* Making env_manager logic clearer as proposed in the comments
* Remove duplicated logic and added back in episode length (#3728)
* removing mentions of multi-agent in gym and changed the docstring in base_env.py
* Edited the Documentation for the changes to the LLAPI (#3733)
* Edite...
* Make EnvironmentParameters a first-class citizen in the API
Missing: Python counterparts and testing.
* Minor comment fix to Engine Parameters
* A second minor fix.
* Make EngineConfigChannel Internal and add a singleton/sealed accessor
* Make StatsSideChannel Internal and add a singleton/sealed accessor
* Changes to SideChannelUtils
- Disallow two sidechannels of the same type to be added
- Remove GetSideChannels that return a list as that is now unnecessary
- Make most methods (except register/unregister) internal to limit users impacting the “system-level” side channels
- Add an improved comment to SideChannel.cs
* Added Dispose methods to system-level sidechannel wrappers
- Specifically to StatsRecorder, EnvironmentParameters and EngineParameters.
- Updated Academy.Dispose to take advantage of these.
- Updated Editor tests to cover all three “system-level” side channels.
Kudos to Unit Tests (TestAcade...
* support tf2.x and python3.8
* tensorflow==2.2.0rc3 for python3.8
* stick with tf2.1 and py3.7 for now
* More gail visual steps in simple test (#3836)
* increase gail visual ppo steps
* increase to 2000
* tune steps down to 750
Co-authored-by: andrewcoh <54679309+andrewcoh@users.noreply.github.com>
* [bug-fix] Fix issue with initialize not resetting step count (#3962)
* Develop better error message for #3953 (#3963)
* Making the error for wrong number of agents raise consistently
* Better error message for inputs of wrong dimensions
* Fix #3932, stop the editor from going into a loop when a prefab is selected. (#3949)
* Minor doc updates to release
* add unit tests and fix exceptions (#3930)
Co-authored-by: Ervin T <ervin@unity3d.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
Co-authored-by: Chris Goy <christopherg@unity3d.com>
* Fix typo
* Added a side channel utils module to reduce the complexity of UnityEnvironment
* Added a get_side_channel_dict utils method
* Better executable launcher (unarguably)
* Fixing the broken test
* Addressing comments
* [skip ci] Update ml-agents-envs/mlagents_envs/side_channel/side_channel_manager.py
Co-authored-by: Jonathan Harper <jharper+moar@unity3d.com>
* No catch all
Co-authored-by: Jonathan Harper <jharper+moar@unity3d.com>
* Replaced get_behavior_names and get_behavior_spec with behavior_specs property
* Fixing the test
* [ci]
* addressing some comments
* use typing.Mapping (#3948)
* Update ml-agents-envs/mlagents_envs/base_env.py
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* Adding the documentation
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* [bug-fix] Fix regression in --initialize-from feature (#4086)
* Fixed text in the GettingStarted page specifying the logdir for tensorboard. Before, it pointed to a summaries directory which no longer exists. Results are now saved to the results dir. (#4085)
* [refactor] Remove nonfunctional `output_path` option from TrainerSettings (#4087)
* Reverting bug introduced in #4071 (#4101)
Co-authored-by: Scott <Scott.m.jordan91@gmail.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Update Dockerfile
* Separate send environment data from reset (#4128)
* Fixed a typo on ML-Agents-Overview.md (#4130)
Removed a redundant "to" from the sentence, since it was probably a typo in the document.
* Updated the badge’s link to point to the newest doc version
* Updated all of the doc links to point to release_3_doc
* Fix 3DBall and 3DBallHard SAC regressions (#4132)
* Move memory validation to settings
* Update docs
* Add settings test
* Update to release_3 in installation.md (#4144)
* rename to SideChannelManager +backcompat (#4137)
* Remove comment about logo with --help (#4148)
* [bugfix] Make FoodCollector heuristic playable (#4147)
* Make FoodCollector heuristic playable
* Update changelog
* script to check for old release links and references (#4153)
* Remove package validation suite from Project (#4146)
* RayPerceptionSensor: handle empty and invalid tags (#4155...
* Added Reward Providers for Torch
* Use NetworkBody to encode state in the reward providers
* Integrating the reward providers with ppo and torch
* work in progress on PPO integration; not training Pyramids properly at the moment
* Integration in PPO
* Removing duplicate file
* Gail and Curiosity working
* addressing comments
* Enforce float32 for tests
* enforce np.float32 in buffer
* Layer initialization + swish as a layer (see the sketch after this list)
* integrating with the existing layers
* fixing tests
* setting the seed for a test
* Using swish and fixing tests
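The swish layer mentioned above, as a minimal sketch:

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, data: torch.Tensor) -> torch.Tensor:
        return data * torch.sigmoid(data)
```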
* Introduced the Constant Parameter Sampler, which will be useful later since samplers and floats can be used interchangeably
* Refactored settings.py to reflect the new format of the config.yaml
* First working version
* Added the unit tests
* Update to Upgrade for Updates
* fixing the tests
* Upgraded the config files
* Fixes
* Additional error catching
* addressing some comments
* Making the code nicer with cattr
* Added and registered an unstructure hook for ParameterRandomization
* Updating C# Walljump
* Adding comments
* Add test for settings export (#4164)
* Add test for settings export
* Update ml-agents/mlagents/trainers/tests/test_settings.py
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Including environment parameters for the test for settings export
* First documentation up...
This change adds an export to .nn for each checkpoint generated by
RLTrainer and adds a NNCheckpointManager to track the generated
checkpoints and final model in training_status.json.
Co-authored-by: Jonathan Harper <jharper+moar@unity3d.com>
* test initialize steps to 100
* use mean of first trajectory to initialize the normalizer
* remove blank line
* update changelog
* cleaned up initialization of variance/mean
* large normalization obs unit test
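A minimal sketch of the initialization described above (function name is illustrative):

```python
import numpy as np

def initialize_normalizer(first_trajectory_obs: np.ndarray):
    # seed the running statistics from the first trajectory instead of zeros
    mean = first_trajectory_obs.mean(axis=0)
    variance = first_trajectory_obs.var(axis=0)
    steps = first_trajectory_obs.shape[0]
    return mean, variance, steps
```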
* add --upgrade to pip to get newer downloader (#4338)
* Fix format of the changelog for validation. (#4340)
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
Co-authored-by: Chris Goy <christopherg@unity3d.com>
* Begin porting work
* Add ResNet and distributions
* Dynamically construct actor and critic
* Initial optimizer port
* Refactoring policy and optimizer
* Resolving a few bugs
* Share more code between tf and torch policies
* Slightly closer to running model
* Training runs, but doesn’t actually work
* Fix a couple additional bugs
* Add conditional sigma for distribution
* Fix normalization
* Support discrete actions as well
* Continuous and discrete now train
* Multi-discrete now working
* Visual observations now train as well
* GRU in-progress and dynamic cnns
* Fix for memories
* Remove unused arg
* Combine actor and critic classes. Initial export.
* Support tf and pytorch alongside one another
* Prepare model for onnx export
* Use LSTM and fix a few merge errors
* Fix bug in probs calculation
* Optimize np -> tensor operations
* Time action sample funct...
* Move linear encoding to NetworkBody
* moved encoders to processors (#4420)
* fix bad merge
* Get it running
* Replace mentions of visual_encoders
* Remove output_size property
* Fix tests
* Fix some references
* Revert test_simple_rl
* Fix networks test
* Make curiosity test more accommodating
* Rename total_input_size
* [Bug fix] Fix bug in GAIL gradient penalty (#4425) (#4426)
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Up number of steps
* Rename to visual_processors and vector_processors
Co-authored-by: andrewcoh <54679309+andrewcoh@users.noreply.github.com>
Co-authored-by: Andrew Cohen <andrew.cohen@unity3d.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Fixes for Torch SAC and tests
* Fix recurrent SAC test
* Properly update normalization for SAC-continuous
* Fix issue with log ent coef reporting in SAC Torch
* Add torch_utils
* Use torch from torch_utils
* Add torch to banned modules in CI
* Better import error handling
* Fix flake8 errors
* Address comments
* Move networks to GPU if enabled
* Switch to torch_utils
* More flake8 problems
* Move reward providers to GPU/CPU
* Remove another set default tensor
* Fix banned import in test
* Don't run value during inference
* Execute critic with LSTM
* Address comments
* Unformat
* Optimized soft update
* Move soft update to model utils
* Add test for soft update
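A minimal sketch of the vectorized soft (Polyak) update in question:

```python
import torch

@torch.no_grad()
def soft_update(source: torch.nn.Module, target: torch.nn.Module, tau: float) -> None:
    for src_param, tgt_param in zip(source.parameters(), target.parameters()):
        tgt_param.data.mul_(1.0 - tau)
        tgt_param.data.add_(tau * src_param.data)
```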
* initial commit
* works with Pyramids
* added unit tests and a separate config file
* Adding first batch of documentation
* adding in the docs that RND is only for PyTorch (see the sketch after this list)
* adding newline at the end of the config files
* adding some docs
* Code comments
* no normalization of the reward
* Fixing the tests
* [skip ci]
* [skip ci] Make sure RND will only work for Torch by editing the config file
* [skip ci] Additional information in the Documentation
* Remove the _has_updated_once flag
* Moved components to the tf folder and moved the TrainerFactory to the `trainer` folder
* Addressing comments
* Editing the migrating doc
* fixing test
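A minimal RND sketch, assuming the standard formulation: the intrinsic reward is the prediction error between a frozen random target network and a trained predictor (and, per the notes above, the reward is not normalized):

```python
import torch

def rnd_reward(target_net: torch.nn.Module,
               predictor_net: torch.nn.Module,
               obs: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        target_features = target_net(obs)
    predicted_features = predictor_net(obs)
    return (target_features - predicted_features).pow(2).mean(dim=1)
```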
* [bug-fix] Don't load non-wrapped policy (#4593)
* pin cattrs version
* cap PyTorch version
* use v2 action and pin python version (#4568)
Co-authored-by: Ervin T <ervin@unity3d.com>
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* Torch setup.py
* Set torch to default
* Make torch default in setup.py
* Remove indents
* Remove other instances of TF being used
* Add tensorboard to setup.py
* Adding correct setup commands for verifying torch is installed (#4524)
* Adding correct setup commands for verifying torch is installed
* Editing the test_requirements to add tf and remove torch
* Develop torchdefault raise outside setup (#4530)
* Torch not imported error to raise at first usage
* Torch not imported error to raise at first usage
* [refactor] Use PyTorch TensorBoard utils (#4518)
* Convert stats writer to use PyTorch TB support
* Use common function to print params
* Update test
* Bump tensorboard to 1.15 to fix the tests
* putting tensorboard 1.15.0 as min version requirement
Co-authored-by: vincentpierre <vincentpierre@unity3d.com>
* [Docs] Initial documentation changes for making...
* Proper dimensions for entropy, sum before bonus in PPO
* Make entropy reporting same as TF
* Always use separate critic
* Revert to shared
* Remove unneeded extra line
* Change entropy shape in test
* Change another entropy shape
* Add entropy summing to evaluate_actions
* Add notes about torch.abs(policy_loss)
* Use float64 in GAIL tests
* Use float32 when converting np arrays by default
* Enforce torch 1.7.x or below
* Add comment about Windows install
* Adjust tests
* use int64 steps
* check for NaN actions
Co-authored-by: Ruo-Ping Dong <ruoping.dong@unity3d.com>
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* Add hybrid action capability flag (#4576)
* Change BrainParametersProto to support ActionSpec (#4579)
* Assign new BrainParametersProto fields based on capabilities (#4581)
* ActionBuffer with hybrid actions for RemotePolicy (#4592)
* Barracuda inference for hybrid actions (#4611)
* Refactor BarracudaModel loader checks (#4629)
* Export separate nodes for continuous/discrete actions (#4655)
* Separate continuous/discrete actions in AgentActionProto (#4698)
* Force different nodes for new and deprecated action output (#4705)
* separate entity encoder and RSA
* clean up args in mha
* more cleanups
* fixed tests
* entity embeddings have no max option
* Add exceptions for variable export
* Fix test
* Add docstrings
Co-authored-by: Andrew Cohen <andrew.cohen@unity3d.com>
* Make buffer type-agnostic
* Edit types of Append method
* Change comment
* Collaborative walljump
* Make collab env harder
* Add group ID
* Add collab obs to trajectory
* Fix bug; add critic_obs to buffer
* Set group ids for some envs
* Pretty broken
* Less broken PPO
* Update SAC, fix PPO batching
* Fix SAC interrupted condition and typing
* Fix SAC interrupted again
* Remove erroneous file
* Fix multiple obs
* Update curiosity reward provider
* Update GAIL and BC
* Multi-input network
* Some minor tweaks but still broken
* Get next critic observations into value estimate
* Temporarily disable exporting
* Use Vince's ONNX export code
* Cleanup
* Add walljump collab YAML
* Lower max height
* Update prefab
* Update prefab
* Collaborative Hallway
* Set num teammates to 2
* Add config and group ids to HallwayCollab
* Fix bug with hallway collab
* E...
* Fix end episode for POCA, add warning for group reward if not POCA (#5113)
* Fix end episode for POCA, add warning for group reward if not POCA
* Add missing imports
* Use np.any, which is faster
* Don't clear update buffer, but don't append to it either
* Update changelog
* Address comments
* Make experience replay buffer saving more verbose
(cherry picked from commit 63e7ad44d96b7663b91f005ca1d88f4f3b11dd2a)
* Adding Hypernetwork modules and unit tests
* Edits
* Integration of the hypernetwork into the trainer (see the sketch after this list)
* Update ml-agents/mlagents/trainers/torch/networks.py
Co-authored-by: Arthur Juliani <awjuliani@gmail.com>
* Made the hypernetwork the default and added the conditioning type None
* Reducing the number of hypernetwork layers
* addressing comments
Co-authored-by: Arthur Juliani <awjuliani@gmail.com>
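A hypothetical hypernetwork sketch (not the repo implementation): a small network generates the weights of a target linear layer from a conditioning input such as a goal.

```python
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    def __init__(self, condition_size: int, in_features: int, out_features: int):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight_gen = nn.Linear(condition_size, in_features * out_features)
        self.bias_gen = nn.Linear(condition_size, out_features)

    def forward(self, data: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # generate per-sample weights and biases, then apply them to the input
        weights = self.weight_gen(condition).view(-1, self.out_features, self.in_features)
        bias = self.bias_gen(condition)
        return torch.bmm(weights, data.unsqueeze(-1)).squeeze(-1) + bias
```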
* Fixing networks.py for the merge
* fix compile error
* Adding the goal conditioning sensors with the new observation specs
* addressing feedback
* I forgot to change the m_observationType
* Renaming Goal to GoalSignal (#5190)
* Renaming GOAL to GOAL_SIGNAL
* VectorSensorComponent to use new API
* Adding docstrings
* verbose pytest on github action
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* collecting latest step as a stat
* adding a list of hidden_keys to TB summarywriter to hide unnecessary stats from user
* fixing precommit
* fixing precommit
* formatting
* defined the property types
* moving custom defaults to get_default_stats_writers
* new test for TensorboardWriter.hidden_keys
* improved testing
* explicit None evaluation
Co-authored-by: Ervin T. <ervin@unity3d.com>
* make hidden_keys optional
Co-authored-by: Ervin T. <ervin@unity3d.com>
* adding optional argument
* lowering the training threshold to 0.8 on test_var_len_obs_and_goal_poca
* Update pytest.yml
* Do not merge! dropping pytest 3.9 job
* -add back pytest
-format imports and comments
* back to default threshold for test_var_len_obs_and_goal_poca
Co-authored-by: mahon94 <maryam.honari@unity3d.com>
Co-authored-by: Ervin T. <ervin@unity3d.com>
* initial commit for a fully connected visual encoder
* adding a test
* addressing comments
* Fixing error with minimal size of fully connected network
* adding documentation and changelog
* Fix of the target entropy for continuous SAC
* Lowering required steps of test and removing unnecessary unsqueeze
* Changing the target from -dim(a)^2 to -dim(a) by removing implicit broadcasting
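For reference, the corrected target scales linearly with the action dimensionality:

```python
# sketch: target entropy for continuous SAC
continuous_act_size = 4                       # e.g. a 4-dimensional action space
target_entropy = -1.0 * continuous_act_size   # -dim(a), not -dim(a) ** 2
```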