* Feature: use extras_require to add torch as an optional dependency to mlagents (sketch after this list)
- TODO: documentation
* Update ml-agents/setup.py
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
* Begin porting work
* Add ResNet and distributions
* Dynamically construct actor and critic
* Initial optimizer port
* Refactoring policy and optimizer
* Resolving a few bugs
* Share more code between tf and torch policies
* Slightly closer to running model
* Training runs, but doesn’t actually work
* Fix a couple additional bugs
* Add conditional sigma for distribution (sketch after this list)
* Fix normalization
* Support discrete actions as well
* Continuous and discrete now train
* Multi-discrete now working
* Visual observations now train as well
* GRU in progress and dynamic CNNs
* Fix for memories
* Remove unused arg
* Combine actor and critic classes. Initial export.
* Support tf and pytorch alongside one another
* Prepare model for onnx export
* Use LSTM and fix a few merge errors
* Fix bug in probs calculation
* Optimize np -> tensor operations (sketch after this list)
* Time action sample funct...
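
For the extras_require item at the top of this list, a minimal sketch of how torch can be declared as an optional dependency in ml-agents/setup.py; the trimmed-down metadata and the version bound here are illustrative assumptions, not the exact values from the repo:

```python
# setup.py (sketch): `pip install mlagents` stays torch-free, while
# `pip install mlagents[torch]` also pulls in PyTorch.
from setuptools import setup, find_packages

setup(
    name="mlagents",
    packages=find_packages(),
    extras_require={
        # Version bound is an assumption for illustration.
        "torch": ["torch>=1.5.0"],
    },
)
```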
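For "Add conditional sigma for distribution": a sketch of a Gaussian action head where the log-std is either predicted from the hidden state (conditional) or kept as a single learned parameter; the class and argument names are assumptions, not the actual ml-agents API.

```python
import torch
from torch import nn


class GaussianHead(nn.Module):
    """Gaussian policy head with optional state-conditioned sigma (sketch)."""

    def __init__(self, hidden_size: int, num_actions: int, conditional_sigma: bool = False):
        super().__init__()
        self.conditional_sigma = conditional_sigma
        self.mu = nn.Linear(hidden_size, num_actions)
        if conditional_sigma:
            # Sigma depends on the observation through the hidden state.
            self.log_sigma = nn.Linear(hidden_size, num_actions)
        else:
            # A single state-independent, learned log-std per action.
            self.log_sigma = nn.Parameter(torch.zeros(1, num_actions))

    def forward(self, hidden: torch.Tensor) -> torch.distributions.Normal:
        mu = self.mu(hidden)
        if self.conditional_sigma:
            log_sigma = torch.clamp(self.log_sigma(hidden), -20.0, 2.0)
        else:
            log_sigma = self.log_sigma.expand_as(mu)
        return torch.distributions.Normal(mu, torch.exp(log_sigma))
```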
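For "Optimize np -> tensor operations": the usual win is to avoid converting a batch element by element. A sketch of the pattern (shapes and sizes are illustrative):

```python
import numpy as np
import torch

obs_list = [np.random.rand(84).astype(np.float32) for _ in range(256)]

# Slow: one small tensor (and copy) per element, then a stack.
slow = torch.stack([torch.tensor(o) for o in obs_list])

# Faster: collapse to one contiguous numpy array, convert once.
# torch.as_tensor can reuse the buffer instead of copying.
fast = torch.as_tensor(np.asarray(obs_list))
```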
* init
* Add reward manager and hurryUpReward
* fix hurry reward / add awful first training
* Turn off head height and hurry rew
* changed max speed to 15. added small hh rew
* add NaN check for reward manager. start vel penalty (sketch after this list)
* add bpVel pen
* add new BPVelPen nn file
* remove outdated nn file
* add randomize speed bool
* try reward product
* change coeff to 1
* try avg vel of all bp for reward
* move outside loop
* try linear inverse lerp for vel (sketch after this list)
* add avg rew matchspeed15 nn file. looks much better
* save scene
* no hand penalty, random walk speed
* fix inverse lerp
* try new reward falloff
* cleanup
* added new nn file. don't allow hand contact
* update obsv
* remove hh rew. add trained no-hh model
* add new nn file
* new curve
* add new models. try no reset
* add hh rew
* clamp hh
* zero rewards if ground contact (see reward manager sketch below)
* switch to approved with movi...
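
For "try linear inverse lerp for vel" / "fix inverse lerp": an illustrative Python sketch of the velocity-matching falloff (the agent itself is Unity C#, where Mathf.InverseLerp is the usual primitive). The function names, the averaging over body parts, and the linear falloff shape are assumptions reconstructed from the commit messages above.

```python
import numpy as np


def inverse_lerp(a: float, b: float, value: float) -> float:
    """Map value from [a, b] onto [0, 1], clamped (like Unity's Mathf.InverseLerp)."""
    if a == b:
        return 0.0
    return float(np.clip((value - a) / (b - a), 0.0, 1.0))


def velocity_match_reward(bp_velocities, target_speed: float, max_speed: float = 15.0) -> float:
    # Average speed over all body parts ("avg vel of all bp"), then score
    # how closely it matches the target: no deviation -> 1, deviation of
    # max_speed or more -> 0, linear in between.
    avg_speed = float(np.mean([np.linalg.norm(v) for v in bp_velocities]))
    deviation = abs(avg_speed - target_speed)
    return 1.0 - inverse_lerp(0.0, max_speed, deviation)
```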
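For "add NaN check for reward manager" and "zero rewards if ground contact": an illustrative sketch of those two guards (the real reward manager is a C# component; all names here are assumptions):

```python
import math


class RewardManager:
    """Accumulates named reward terms for one step (sketch)."""

    def __init__(self):
        self.rewards = {}

    def add(self, name: str, value: float) -> None:
        if math.isnan(value):
            # Fail fast instead of silently training on NaN rewards.
            raise ValueError(f"reward term '{name}' is NaN")
        self.rewards[name] = self.rewards.get(name, 0.0) + value

    def total(self, ground_contact: bool) -> float:
        # Zero the whole step's reward if a disallowed body part
        # touched the ground.
        return 0.0 if ground_contact else sum(self.rewards.values())
```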