This change adds a .nn export for each checkpoint generated by RLTrainer
and adds an NNCheckpointManager to track the generated checkpoints and the
final model in training_status.json.
Co-authored-by: Jonathan Harper <jharper+moar@unity3d.com>
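As a rough illustration of the checkpoint tracking described above, here is a minimal sketch of how a manager like NNCheckpointManager might record each exported .nn checkpoint and the final model into training_status.json. Apart from the NNCheckpointManager name, the class fields, method names, and JSON layout below are assumptions for illustration, not the actual implementation.

```python
import json
import os
import time
from dataclasses import asdict, dataclass
from typing import Dict, List


@dataclass
class NNCheckpoint:
    """One exported .nn checkpoint; field names are illustrative."""
    steps: int
    file_path: str
    reward: float
    creation_time: float


class NNCheckpointManager:
    """Tracks exported checkpoints per behavior in training_status.json (sketch)."""

    def __init__(self, status_file: str = "training_status.json") -> None:
        self.status_file = status_file
        self.checkpoints: Dict[str, List[dict]] = {}

    def track_checkpoint(self, behavior_name: str, checkpoint: NNCheckpoint) -> None:
        # Record the checkpoint and persist the full history to disk.
        self.checkpoints.setdefault(behavior_name, []).append(asdict(checkpoint))
        self._save()

    def track_final_model(self, behavior_name: str, checkpoint: NNCheckpoint) -> None:
        # The final model is stored alongside the checkpoint list.
        entry = {**asdict(checkpoint), "final": True}
        self.checkpoints.setdefault(behavior_name, []).append(entry)
        self._save()

    def _save(self) -> None:
        with open(self.status_file, "w") as f:
            json.dump({"checkpoints": self.checkpoints}, f, indent=2)


# Example: record a checkpoint exported after 50k steps.
manager = NNCheckpointManager()
manager.track_checkpoint(
    "Walker",
    NNCheckpoint(
        steps=50000,
        file_path=os.path.join("results", "Walker-50000.nn"),
        reward=1.5,
        creation_time=time.time(),
    ),
)
```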
* Introduced the Constant Parameter Sampler, which will be useful later since samplers and floats can be used interchangeably
* Refactored settings.py to reflect the new format of the config.yaml
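A minimal sketch of that interchangeability, using hypothetical Sampler, UniformSampler, and ConstantSampler names: a plain float from the config can be wrapped in a constant sampler so downstream code only ever deals with the sampler interface.

```python
import abc
import random


class Sampler(abc.ABC):
    """Common interface so any environment parameter can be sampled per reset."""

    @abc.abstractmethod
    def sample(self) -> float:
        ...


class UniformSampler(Sampler):
    def __init__(self, min_value: float, max_value: float) -> None:
        self.min_value = min_value
        self.max_value = max_value

    def sample(self) -> float:
        return random.uniform(self.min_value, self.max_value)


class ConstantSampler(Sampler):
    """Wraps a plain float so it can be used wherever a sampler is expected."""

    def __init__(self, value: float) -> None:
        self.value = value

    def sample(self) -> float:
        return self.value


# A raw float in the config becomes a ConstantSampler, so the trainer treats
# fixed values and randomized values the same way.
parameters = {"gravity": ConstantSampler(9.81), "scale": UniformSampler(0.5, 2.0)}
values = {name: sampler.sample() for name, sampler in parameters.items()}
```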
* First working version
* Added the unit tests
* Update to Upgrade for Updates
* fixing the tests
* Upgraded the config files
* Fixes
* Additional error catching
* addressing some comments
* Making the code nicer with cattr
* Added and registered an unstructure hook for ParameterRandomization
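For illustration, a hedged sketch of registering an unstructure hook with cattr; the ParameterRandomization stand-in class, its fields, and the exported dictionary shape are assumptions, not the actual settings.py code.

```python
import attr
import cattr


@attr.s(auto_attribs=True)
class ParameterRandomization:
    """Stand-in for the randomization settings; the real fields differ."""
    seed: int = 0
    sampler_type: str = "constant"
    value: float = 1.0


def unstructure_randomization(settings: ParameterRandomization) -> dict:
    # Flatten to the shape expected in the exported config.yaml rather than
    # the default attrs-to-dict conversion.
    return {settings.sampler_type: {"seed": settings.seed, "value": settings.value}}


# Once registered, cattr.unstructure() uses this hook whenever it encounters
# a ParameterRandomization instance, e.g. during config export.
cattr.register_unstructure_hook(ParameterRandomization, unstructure_randomization)

print(cattr.unstructure(ParameterRandomization(seed=42, sampler_type="uniform", value=0.5)))
```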
* Updating C# Walljump
* Adding comments
* Add test for settings export (#4164)
* Add test for settings export
* Update ml-agents/mlagents/trainers/tests/test_settings.py
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Including environment parameters for the test for settings export
* First documentation up...
* doc updates
getting started page now uses a consistent run-id
re-order create-new docs to reduce back-and-forth between Unity and the text editor
* add a link explaining decisions where we tell the reader to modify their parameter
* note about requiring x86-64 Python on Windows
Co-authored-by: Arthur Juliani <awjuliani@gmail.com>
Co-authored-by: andrewcoh <54679309+andrewcoh@users.noreply.github.com>
* added Target and OCube controllers. updated crawler envs
* update walker prefab
* add refs to prefab
* Update Crawler.prefab
* update platform, ragdoll, ocube prefabs
* reformat file
* reformat files
* fix behavior name
* add final retrained crawler and walker nn files
* collect hip ocube rot in world space
* update crawler observations and update prefabs
* change to 20M steps
* update crawler prefab to 142 observations
* update observations to 241. add expvel reward
* change walkspeed to 3
* add new crawler and walker nn files
* adjust rewards
* enable other pairs
* add RewardManager
* cleanup before final training
* cleanup; add nn files for increased facing reward, reduced height reward
* try no facing reward
* add velocity-only policy, try dynamic target
* increase torque on cube
* added dynamic cube nn. going to try 40M steps
* add 40M step test, more cleanup
* ch...
* Modifying the documentation to explain that the Heuristic method's default action will be the previous action decided by the heuristic. Changing this behavior would be a breaking change.
* Rephrase the wording of the documentation of the default action of the Heuristic method
* Forgot an import