* ISensor and SensorBase (see the interface sketch below)
* camera and render texture first pass
* use ISensors for visual obs
* Update gridworld with CameraSensors
* compressed obs for reals
* Remove AgentInfo.visualObservations
* better separation of train and inference sensor calls
* compressed obs proto - need CI to generate code
* int32
* get proto name right
* run protoc locally for new files
* apply generated proto patch (pyi files were weird)
* don't repeat bytes
* hook up compressed obs
* don't send BrainParameters until there's an AgentInfo
* Python BrainParameters now needs an AgentInfo to be created
* remove last (I hope) dependency on camera resolutions
* remove CameraResolutions and AgentInfo.visual_observations
* update mypy-protobuf version
* cleanup todos
* python cleanup
* more unit test fixes
* another unit test fix
* camera sensors for VisualFoodCollector, record demo
* SensorCompon...
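For context on the ISensor/SensorBase and compressed-observation commits above, here is a minimal sketch of what such an abstraction could look like. The member names (`Write`, `GetCompressedObservation`, `SensorCompressionType`) are illustrative assumptions for this sketch, not necessarily the exact shipped API.

```csharp
// Hypothetical sketch of an ISensor-style abstraction; names are illustrative,
// not a verbatim copy of the ML-Agents API.
public enum SensorCompressionType
{
    None,   // raw float observations
    PNG     // observation encoded as one or more PNG byte blocks
}

public interface ISensor
{
    // Shape of the observation this sensor produces, e.g. {height, width, channels}.
    int[] GetObservationShape();

    // Write uncompressed floats into the provided buffer; returns the number written.
    int Write(float[] output);

    // Compressed form of the observation (e.g. PNG bytes), or null if unsupported.
    byte[] GetCompressedObservation();

    SensorCompressionType GetCompressionType();

    // Stable name used to identify and order sensors between training and inference.
    string GetName();
}

// Convenience base class so concrete sensors only override what they need.
public abstract class SensorBase : ISensor
{
    public abstract int[] GetObservationShape();
    public abstract int Write(float[] output);
    public abstract string GetName();

    public virtual byte[] GetCompressedObservation() { return null; }

    public virtual SensorCompressionType GetCompressionType()
    {
        return SensorCompressionType.None;
    }
}
```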
* 1 to 1 Brain to Agent
This is a work in progress.
In this PR:
- Deleted all Brain Objects
- Moved the BrainParameters into the Agent
- Gave the Agent a Heuristic method (see Balance Ball for example)
- Modified the Communicator and ModelRunner: Put can now only take one agent at a time
- Made the IBrain interface with RequestDecision and DecideAction methods (see the sketch below)
No changes made to Python
[Design Doc](https://docs.google.com/document/d/1hBhBxZ9lepGF4H6fc6Hu6AW7UwOmnyX3trmgI3HpOmo/edit#)
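A rough sketch of how the IBrain interface and an Agent heuristic described above might fit together; the signatures, the `BalanceBallAgent` class, and the input-axis mapping are assumptions for illustration, not the exact shipped code.

```csharp
using UnityEngine;
// Assumes the ML-Agents Agent base class is available from the package namespace.

// Hypothetical IBrain sketch: one brain serves one agent's decisions.
public interface IBrain
{
    // Queue an agent's observations for the next decision step.
    void RequestDecision(Agent agent);

    // Produce actions for every queued agent (from a model, a heuristic, or a remote trainer).
    void DecideAction();
}

public class BalanceBallAgent : Agent
{
    // Heuristic used when no trained model is attached: map keyboard/controller
    // input onto the two continuous rotation actions.
    public override float[] Heuristic()
    {
        var action = new float[2];
        action[0] = -Input.GetAxis("Horizontal");
        action[1] = Input.GetAxis("Vertical");
        return action;
    }
}
```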
* Removing editorconfig
* Updating BalanceBall scene
* grammar mistake
* Clearing the Agents of the ModelRunner
* Added Documentation on IBrain
* Modified comments on GiveModel
* Introduced a factory
* Split Learning Brain in two
* Changes to walljump
* Fixing the Unit tests
* Renaming the Brain to Policy
* Heuristic now has priority over training
* Edited code comments
* Fixing bugs
* Develop one to one scene edits...
* Trimming some of the Agent's methods but keeping SetReward
* Fixing bugs
* modifying the environments
* Reintroducing IsDone and IsMaxStepReached
* Updating the Migrating doc
* more details on the Migration
Updates to CameraSensor / CameraSensorComponent:
* new settings object to set up layer masks, depth, etc
* support for an arbitrary number of channels returned by the sensor
* channels include depth, layer mask, RGB, or grayscale
Updates to VisualFoodCollector:
* food doesn't roll (for now), which simplifies learning without stacking
* added layers
Updates to mlagents_envs:
* decode multiple PNGs packed into a single observation (see the sketch below)
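A hedged sketch of the Unity-side counterpart to the mlagents_envs change above: channels grouped up to three at a time, each group PNG-encoded, and the blobs concatenated into one observation buffer that the Python side decodes back to back. The `MultiChannelPngPacker` helper and the grouping convention are assumptions for illustration.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper; the packing scheme (up to three channels per PNG,
// PNG blobs concatenated back to back) is an assumption for illustration.
public static class MultiChannelPngPacker
{
    public static byte[] Pack(IList<Texture2D> channelGroups)
    {
        var packed = new List<byte>();
        foreach (var texture in channelGroups)
        {
            // Each texture carries up to three of the sensor's channels,
            // e.g. RGB first, then depth + layer mask in the next group.
            packed.AddRange(texture.EncodeToPNG());
        }
        // The Python side decodes PNGs sequentially until the buffer is exhausted.
        return packed.ToArray();
    }
}
```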
* Added the goal-conditioned GridWorld to replace the regular GridWorld
* adding missing files
* Code improvements
* Documentation change on gridworld
* resolving conflicts
* new model
* Addressing comments
* comments and renames
* Update docs/Learning-Environment-Examples.md
Co-authored-by: Ervin T. <ervin@unity3d.com>
* adding reference to gridworld in docs about goal signal
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
Co-authored-by: Ervin T. <ervin@unity3d.com>