* Begin porting work
* Add ResNet and distributions
* Dynamically construct actor and critic
* Initial optimizer port
* Refactoring policy and optimizer
* Resolving a few bugs
* Share more code between tf and torch policies
* Slightly closer to running model
* Training runs, but doesn’t actually work
* Fix a couple additional bugs
* Add conditional sigma for distribution
* Fix normalization
* Support discrete actions as well
* Continuous and discrete now train
* Multi-discrete now working
* Visual observations now train as well
* GRU in progress and dynamic CNNs
* Fix for memories
* Remove unused arg
* Combine actor and critic classes. Initial export.
* Support tf and pytorch alongside one another
* Prepare model for onnx export
* Use LSTM and fix a few merge errors
* Fix bug in probs calculation
* Optimize np -> tensor operations
* Time action sample funct...
* Add torch_utils
* Use torch from torch_utils
* Add torch to banned modules in CI
* Better import error handling
* Fix flake8 errors
* Address comments
* Move networks to GPU if enabled
* Switch to torch_utils
* More flake8 problems
* Move reward providers to GPU/CPU
* Remove another set default tensor
* Fix banned import in test
* Initial commit
* Works with Pyramids
* Added unit tests and a separate config file
* Adding first batch of documentation
* Adding in the docs that RND is only for PyTorch
* Adding newline at the end of the config files
* Adding some docs
* Code comments
* No normalization of the reward
* Fixing the tests
* [skip ci]
* [skip ci] Make sure RND will only work for Torch by editing the config file (see the config sketch after this list)
* [skip ci] Additional information in the Documentation
* Remove the _has_updated_once flag
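For context on the config-file changes mentioned above, here is a minimal sketch of what an RND-enabled trainer configuration might look like. It assumes the ML-Agents YAML layout in which intrinsic rewards are declared under `reward_signals`; the behavior name (`Pyramids`) and all hyperparameter values are illustrative only and are not taken from this PR.

```yaml
behaviors:
  Pyramids:
    trainer_type: ppo        # RND is only available with the PyTorch trainers
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
      rnd:                   # random network distillation intrinsic reward
        gamma: 0.99
        strength: 0.01       # kept small relative to the extrinsic reward
        learning_rate: 0.0001
```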