* [Cold Fix] Split the way cumulative rewards and episode length are counted
The reward is added to the cumulative reward at every step.
The episode length is ONLY incremented when d_t+1 is false.
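A minimal sketch of this bookkeeping, with hypothetical names (not the repo's actual identifiers):

```python
class EpisodeStats:
    """Illustrative sketch of the split bookkeeping described above."""

    def __init__(self):
        self.cumulative_reward = 0.0
        self.episode_length = 0

    def record_step(self, reward: float, done_next: bool) -> None:
        # The reward is accumulated at every step...
        self.cumulative_reward += reward
        # ...but the episode length only grows while d_t+1 is false.
        if not done_next:
            self.episode_length += 1
```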
This PR makes the following changes:
* Moves clipping of the continuous control model into the model itself. Output is now always in [-1, 1].
* Internal model values are now clipped to [-3, 3] before being rescaled to [-1, 1] for output. This improves training performance by providing a wider range of values within which the PDF of the Gaussian can fall. An output range of [-1, 1] is used to be more environment-creator friendly (see the first sketch after this list).
* Fixes an issue where epsilon was erroneously being used to reconstruct old probabilities during the PPO update, leading to reduced learning performance (see the second sketch after this list).
* Introduces a ScaleAction() function within Python to easily rescale values from [-1, 1] to an arbitrary range (see the third sketch after this list).
* Re-trains all continuous control (CC) models using the improved algorithm. All performance levels are equal or improved; in the case of Crawler, the improvement is drastic.
* Updates documentation appropriately.
* Makes miscellaneous minor code style and optimization improvements within environments.
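For the clipping change, a minimal sketch of the idea using NumPy and hypothetical names (the actual change lives inside the model's graph):

```python
import numpy as np

def clip_and_rescale(raw_action: np.ndarray) -> np.ndarray:
    """Clip sampled Gaussian output to [-3, 3], then rescale so the
    action the model emits is always in [-1, 1]."""
    return np.clip(raw_action, -3.0, 3.0) / 3.0
```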
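For the PPO fix, the key point is that the probability ratio should use log-probabilities stored at collection time, not values reconstructed from the sampling noise epsilon. A hedged sketch of the correct ratio computation (illustrative names; clip_range here is the standard PPO clip parameter, unrelated to the epsilon mentioned above):

```python
import numpy as np

def ppo_surrogate(new_log_probs: np.ndarray, old_log_probs: np.ndarray,
                  advantages: np.ndarray, clip_range: float = 0.2) -> float:
    """Clipped PPO objective; old_log_probs must be the log-probabilities
    recorded when the actions were originally sampled."""
    ratio = np.exp(new_log_probs - old_log_probs)
    clipped = np.clip(ratio, 1.0 - clip_range, 1.0 + clip_range)
    return float(np.minimum(ratio * advantages, clipped * advantages).mean())
```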
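ScaleAction() can be thought of as a linear map from [-1, 1] to [low, high]; a sketch of the intended behavior (the exact signature in the PR may differ):

```python
def scale_action(action: float, low: float, high: float) -> float:
    """Rescale a value from [-1, 1] to [low, high]."""
    return low + (action + 1.0) * 0.5 * (high - low)

# e.g. scale_action(0.0, 0.0, 10.0) == 5.0
```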
* [CoreBrain] Bug fix in the internal brain
Discrete vector observations did not have the right size
* [Docs] Removed all references to the unitypackages other than the TensorFlowSharp.unitypackage.
* [Basic] Updated the .bytes file of the Basic environment
* [Docs] Addressed comments
* [Docs] Re-addressed the comments
* [Bug Fix] Scaling the visual input between 0 and 1
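A minimal sketch of the idea behind this fix, assuming 8-bit image input and hypothetical names (the actual fix is in the C# internal brain):

```python
import numpy as np

def normalize_visual(obs_uint8: np.ndarray) -> np.ndarray:
    """Scale 8-bit pixel values from [0, 255] down to [0, 1]."""
    return obs_uint8.astype(np.float32) / 255.0
```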
* [Comments] Added comments to the BatchVisualObservations method of the CoreInternalBrain.
* [Renaming] Renamed BlackAndWhite to blackAndWhite
Fixes the following issues:
* Missing component reference in the BananaRL environment.
* The neural network for multiple visual observations was not generated properly.
* Episode time-out value-estimate bootstrapping used an incorrect observation as input (see the sketch below).
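For the time-out fix, the underlying technique: when an episode ends by hitting the step limit rather than reaching a true terminal state, the return target is bootstrapped with the value of the next observation, so the correct observation must be fed to the value estimator. A sketch under those assumptions (illustrative names):

```python
def last_step_target(reward, gamma, value_fn, next_obs, done, timed_out):
    """Return target for the final step of a trajectory segment.

    A time-out is not a true terminal, so we bootstrap with V(next_obs);
    using the wrong observation here biases the value targets, which is
    the bug fixed above.
    """
    if done and not timed_out:
        return reward                       # true terminal: no bootstrap
    return reward + gamma * value_fn(next_obs)
```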
There were some important things that should have been mentioned in this tutorial, and it took me a while to figure them out. Most importantly, the tutorial never explains how to properly end a training session in the Anaconda prompt so that an exported .bytes file is produced.