* Modifying the .proto files
* attempt 1 at refactoring Python
* works for PPO Hallway
* changing the documentation
* now works with both SAC and PPO, for both training and inference
* Need to fix the tests
* TODOs:
- Fix the demonstration recorder
- Fix the demonstration loader
- Verify the intrinsic reward signals work
- Fix the tests on Python
- Fix the C# tests
* Regenerating the protos
* fix proto typo
* protos and modifying the C# demo recorder
* modified the demo loader
* Demos are loading
* IMPORTANT: THESE ARE THE FILES USED FOR CONVERSION FROM OLD TO NEW FORMAT
* Modified all the demo files
* Fixing all the tests
* fixing ci
* addressing comments
* removing reference to memories in the ll-api
* WIP VectorSensor and StackedSensor
* fix a few dumb mistakes
* more VectorSensor
* remove Update(), add util methods, hook into TensorGenerator
* WriteAdapter to write to tensors and arrays
* write float observations
* used circular buffer for stacked obs (see the sketch after this list)
* cleanup
* fix unit tests
* docstrings
* undo accidental checkins
* Rider suggestions, add range check
* bounds check before writing
* undo ProjectVersion.txt change
* fix unit tests
* unit test for VectorSensor
* StackingSensor tests
* missing meta file
* missing meta file
* WriteAdapter tests
* Update package and communicator versions to 0.11
* Remove pip cache fallback for CircleCI
This change removes the caching fallback in the case where dependencies
change, since it can cause CI failures when we have incompatible
dependencies in the cache.
* Limit TensorFlow version for tests to <2.0
* Use stable bokken image. (#2815)
* build fixes for 2018+ (#2808)
* rename CompressionType enum
* fix standalone build test for 2018+
* Add more editor versions for testing. (#2809)
* class variable for API version, fix env tests (#2817)
* fixed area prefab
Agents were pointing to the wrong laser GameObject.
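A minimal Python sketch of the circular-buffer stacking approach mentioned in the VectorSensor/StackingSensor items above; the class and method names here are illustrative assumptions, not the C# package's StackingSensor API.

```python
import numpy as np

class StackedObservationBuffer:
    """Illustrative circular buffer that keeps the last `num_stack`
    observations without shifting older entries on every step."""

    def __init__(self, obs_size: int, num_stack: int):
        self.obs_size = obs_size
        self.num_stack = num_stack
        # Flat storage for num_stack observations of obs_size floats each.
        self.data = np.zeros(obs_size * num_stack, dtype=np.float32)
        self.current = 0  # slot that the next observation overwrites

    def append(self, obs: np.ndarray) -> None:
        # Overwrite the oldest slot instead of copying the whole buffer.
        start = self.current * self.obs_size
        self.data[start:start + self.obs_size] = obs
        self.current = (self.current + 1) % self.num_stack

    def get_stacked(self) -> np.ndarray:
        # Read slots oldest-to-newest so the stacked vector is time-ordered.
        out = np.empty_like(self.data)
        for i in range(self.num_stack):
            slot = (self.current + i) % self.num_stack
            out[i * self.obs_size:(i + 1) * self.obs_size] = \
                self.data[slot * self.obs_size:(slot + 1) * self.obs_size]
        return out

# Usage: stack the last 3 observations of size 4.
buf = StackedObservationBuffer(obs_size=4, num_stack=3)
buf.append(np.ones(4, dtype=np.float32))
print(buf.get_stacked())  # two older slots still zero, newest observation last
```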
When we initially connect to the environment using RPCCommunicator,
the connection is polled so we don't hang forever on `.recv()` if the
environment failed to launch or crashed. However, we don't currently
have any similar check for the exchanges mid-training-run.
This change applies the same timeout used at initialization to each exchange,
and extends the default `timeout_wait` to 60 seconds to reduce the chance
of a mismatch between environment launch time and the trainer timeout.
Tested on: single-env and multi-env cases. Killed 1 environment process
manually and saw that the model was saved appropriately and all processes
closed.
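Below is a hedged sketch of the per-exchange timeout idea described above; the queue-based helper, its name, and the exception class are illustrative assumptions, not the actual RpcCommunicator internals.

```python
import queue

class UnityTimeOutException(Exception):
    """Raised when the environment does not respond within timeout_wait."""

# Hypothetical helper: the communicator pushes each incoming message from
# the environment onto `message_queue`; the trainer side blocks on it.
def poll_for_message(message_queue: queue.Queue, timeout_wait: float = 60.0):
    try:
        # Same timeout used at initialization, now applied to every exchange
        # so a crashed or hung environment can't block training forever.
        return message_queue.get(timeout=timeout_wait)
    except queue.Empty:
        raise UnityTimeOutException(
            f"The Unity environment took too long to respond "
            f"(> {timeout_wait}s). Make sure it is still running."
        )
```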