* Added broadcast to the Player and Heuristic brains.
Allows the Python API to record the actions taken along with the states and rewards (see the sketch below)
* removed the broadcast checkbox
Added a Handshake method for the communicator
The academy will try to handshake regardless of the brains present
Player and Heuristic brains will send their information through the communicator but will not receive commands
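With broadcast enabled, an external process can observe what a Player or Heuristic brain is doing without controlling it. A minimal sketch of recording such a trajectory from Python; the package name, brain name, and BrainInfo attribute names below are illustrative rather than the exact API of this release:

```python
# Illustrative sketch only: brain name and BrainInfo attribute names are assumed.
from unityagents import UnityEnvironment  # Python package name assumed for this era

env = UnityEnvironment(file_name="3DBall")
brain_infos = env.reset(train_mode=False)

trajectory = []
for _ in range(100):
    info = brain_infos["PlayerBrain"]          # a broadcasting Player/Heuristic brain
    trajectory.append((info.states,            # states the agents observed
                       info.previous_actions,  # actions the Player/Heuristic brain chose
                       info.rewards))          # rewards received
    brain_infos = env.step()                   # no actions are sent to broadcasting brains

env.close()
```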
* Bug fix: the environment only requests actions from external brains when they are unique
* Added a warning in case no brains are set to External
* Fix to the instantiation of coreBrains,
fix to the conversion of actions to arrays in the BrainInfo received from step
* Default discrete action is now 0
Bug fix for discrete broadcast action (the action size should be one in Agents.cs)
Modified Tennis so that the default action is no action
Modified TemplateDecision.cs to ensure non-null values are sent from Decide() and MakeMemory()
* minor fixes
* need to convert the s...
Greatly simplified the GridWorld code. It now uses only a visual observation, rather than a state vector, in order to demonstrate learning purely from visual input.
* Add support for stacking the past n states to allow the network to learn temporal dependencies (see the sketch after this list).
* Add Banana Collector environment for demonstrating partially observable multi-agent environments.
* Add 3DBall Hard which lacks velocity information in state representation. Used as test for LSTM and state-stacking features.
* Rework Tennis environment to be continuous control and trainable in 100k steps.
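State stacking concatenates the most recent n observation vectors into a single network input so the policy can infer quantities like velocity that a single frame does not contain. A minimal, generic sketch of the idea (not the trainer's internal implementation):

```python
from collections import deque
import numpy as np

def make_stacker(obs_size, num_stacked):
    """Return a function that stacks the last `num_stacked` observations
    into one flat vector, padding with zeros at the start of an episode."""
    buffer = deque([np.zeros(obs_size, dtype=np.float32)] * num_stacked,
                   maxlen=num_stacked)

    def stack(obs):
        buffer.append(np.asarray(obs, dtype=np.float32))
        return np.concatenate(buffer)  # shape: (obs_size * num_stacked,)

    return stack

stack = make_stacker(obs_size=8, num_stacked=3)
network_input = stack(np.random.rand(8))  # most recent 3 observations, flattened
```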
* [Semantics] Modified the semantics for the documentation
* [Semantics] Updated the images
* [Semantics] Made further changes to the docs based on the comments received
* [New Bouncer] Revamped the Bouncer to be in 3D
* [Bouncer Configuration file] Added the BouncerBrain configuration
* [Documentation] Added the Bouncer to the documentation page
* [Fixes] Fixed lines that were too long and a documentation typo
* Slight adjustments to bouncer environment
* Don't default to internal brain on bouncer
Fixes the following issues:
* Missing component reference in BananaRL environment.
* Neural Network for multiple visual observations was not properly generated.
* Episode time-out value estimate bootstrapping used incorrect observation as input.
* Adds implementation of Curiosity-driven Exploration by Self-supervised Prediction (https://arxiv.org/abs/1705.05363) to PPO trainer.
* To enable, set the use_curiosity flag to true in the hyperparameter file (see the sketch after this list).
* Includes refactor of unitytrainers model code to accommodate new feature.
* Adds new Pyramids environment (w/ documentation). Environment contains sparse reward, and can only be solved using PPO+Curiosity.
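The curiosity signal rewards the agent for reaching states that a learned forward model fails to predict, which is what makes the sparse-reward Pyramids environment solvable. A minimal sketch of the intrinsic-reward idea from Pathak et al. (2017), not the trainer's actual ICM code:

```python
import numpy as np

def intrinsic_reward(encode, forward_model, state, action, next_state, strength=0.01):
    """Curiosity bonus: scaled prediction error of a learned forward model
    in a learned feature space (Pathak et al., 2017)."""
    phi, phi_next = encode(state), encode(next_state)
    phi_pred = forward_model(phi, action)              # predicted next features
    error = 0.5 * np.sum((phi_pred - phi_next) ** 2)   # squared prediction error
    return strength * error                            # added to the extrinsic reward

# Toy usage with stand-in models:
encode = lambda s: np.asarray(s, dtype=np.float32)
forward_model = lambda phi, a: phi                     # predicts "nothing changes"
bonus = intrinsic_reward(encode, forward_model, [0.0, 0.0], 1, [0.5, 0.0])
```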
* Revamps agent code for walker and crawler environments to use shared JointDriveController system.
* Crawler has been reworked to be very cute.
* Crawler & Walker environments have been reworked to be visually consistent.
* Added Dynamic Crawler scene.
* All scenes re-trained and new models added.
* Documentation changes.
* GridWorld now uses action masking (see the sketch below)
* Addressed the comments
* addressed comments
* Added checkbox to turn action masking on/off (#1146)
* Added checkbox to turn action masking on/off
* Fix to handle the no-action option
* Added comment to GridWorld mentioning the use of action masking. (#1153)
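Action masking tells the policy which discrete actions are invalid in the current state (for example, moving into a wall in GridWorld) so they are never sampled; the checkbox above simply turns that masking on or off. A minimal sketch of the sampling-side idea:

```python
import numpy as np

def masked_sample(logits, invalid_actions, rng=np.random):
    """Sample a discrete action with the invalid actions masked out."""
    logits = np.array(logits, dtype=np.float64)
    logits[list(invalid_actions)] = -np.inf        # masked actions get zero probability
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# e.g. an agent standing next to the top wall masks the "move up" action:
action = masked_sample([0.1, 0.4, -0.2, 0.3], invalid_actions=[1])
```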
* Documentation Update
* addressed comments
* new images for the recorder
* Improvements to the docs
* Address the comments
* Core_ML typo
* Updated the links to inference repo
* Put back Inference-Engine.md
* fix typos: brain
* Readd deleted file
* fix typos
* Addressed comments
* Included explicit version # for ZN
* added explicit version for KR docs
* minor fix in installation doc
* Consistency with numbers for reset parameters
* Removed extra verbiage. minor consistency
* minor consistency
* Cleaned up IL language
* moved parameter sampling above in list
* Cleaned up language in Env Parameter sampling
* Cleaned up migrating content
* updated consistency of Reset Parameter Sampling
* Rename Training-Generalization-Learning.md to Training-Generalization-Reinforcement-Learning-Agents.md
* Updated doc link for generalization
* Rename Training-Generalization-Reinforcement-Learning-Agents.md to Training-Generalized-Reinforcement-Learning-Agents.md
* Re-wrote the intro paragraph for generalization
* add titles, cleaned up language for reset params
* Update Training-Generalized-Reinforcement-Learning-Agents.md
* cleanup of generalization doc
* More cleanu...
* new env styles rebased on develop
* added new trained models
* renamed food collector platforms
* reduce training timescale on WallJump from 100 to 10
* uncheck academy control on walljump
* new banner image
* rename banner file
* new example env images
* add foodCollector image
* change Banana to FoodCollector and update image
* change bouncer description to include green cube
* update image
* update gridworld image
* cleanup prefab names and tags
* updated soccer env to reference purple agent instead of red
* remove unused mats
* rename files
* remove more unused tags
* update image
* change platform to agent cube
* update text. change platform to agents head
* cleanup
* cleaned up weird unused meta files
* add new wall jump nn files and rename a prefab
* walker change stacked states from 5 to 1
walker collects physics observations so stacked states are not need...
* Feature Deprecation : Online Behavioral Cloning
In this PR :
- Delete the online_bc_trainer
- Delete the tests for online bc
- delete the configuration file for online bc training
* Deleting the BCTeacherHelper.cs Script
TODO :
- Remove usages in the scene
- Documentation Edits
*DO NOT MERGE*
* IMPORTANT : REMOVED ALL IL SCENES
- Removed all the IL scenes from the Examples folder
* Removed all mentions of online BC training in the Documentation
* Made a note in the Migrating.md doc about the removal of the Online BC feature.
* 1 to 1 Brain to Agent
This is a work in progress
In this PR :
- Deleted all Brain Objects
- Moved the BrainParameters into the Agent
- Gave the Agent a Heuristic method (see Balance Ball for example)
- Modified the Communicator and ModelRunner: Put can only take one agent at a time
- Made the IBrain interface with RequestDecision and DecideAction methods (see the sketch below)
No changes made to Python
[Design Doc](https://docs.google.com/document/d/1hBhBxZ9lepGF4H6fc6Hu6AW7UwOmnyX3trmgI3HpOmo/edit#)
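For illustration only, a Python mirror of the request/decide flow described above; the real interface is C#, and the attribute names used here (observations, action) are hypothetical:

```python
from abc import ABC, abstractmethod

class IBrain(ABC):
    """Conceptual mirror of the C# IBrain interface described in this PR:
    agents queue decision requests, and the brain decides for all of them once per step."""

    @abstractmethod
    def request_decision(self, agent):
        """Register an agent that needs an action this step."""

    @abstractmethod
    def decide_action(self):
        """Compute and distribute actions for every registered agent."""

class HeuristicBrain(IBrain):
    """Uses the Agent's own Heuristic method instead of a trained model."""

    def __init__(self, heuristic):
        self.heuristic = heuristic   # e.g. the Agent's Heuristic() method
        self.pending = []

    def request_decision(self, agent):
        self.pending.append(agent)

    def decide_action(self):
        for agent in self.pending:
            agent.action = self.heuristic(agent.observations)  # hypothetical attributes
        self.pending.clear()
```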
* Removing editorconfig
* Updating the BalanceBall scene
* grammar mistake
* Clearing the Agents of the Model runner
* Added Documentation on IBrain
* Modified comments on GiveModel
* Introduced a factory
* Split Learning Brain in two
* Changes to walljump
* Fixing the Unit tests
* Renaming the Brain to Policy
* Heuristic now has priority over training
* Edited code comments
* Fixing bugs
* Develop one to one scene edits...
* [WIP] Side Channel initial layout
* Working prototype for raw bytes
* fixing format mistake
* Added some errors and some unit tests in C#
* Added the side channel for the Engine Configuration. (#2958)
* Added the side channel for the Engine Configuration.
Note that this change does not require modifying a lot of files:
- Adding a sender in Python
- Adding a receiver in C#
- Subscribe the receiver to the communicator (this is a one-liner in the Academy)
- Add the side channel to the Python UnityEnvironment (not represented here)
Adding the side channel to the environment would look like this:
```python
from mlagents.envs.environment import UnityEnvironment
from mlagents.envs.side_channel.raw_bytes_channel import RawBytesChannel
from mlagents.envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
channel0 = RawBytesChannel()
channel1 = EngineConfigurationChannel()
```
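A hedged sketch of the step described above as "not represented here": passing the channels to the Python UnityEnvironment and then driving engine settings through the engine-configuration channel. The constructor argument, helper method name, file name, and parameter values are assumptions for illustration:

```python
# Pass the channels to the environment, then change engine settings from Python.
# File name and parameter values are illustrative.
env = UnityEnvironment(file_name="3DBall", side_channels=[channel0, channel1])
channel1.set_configuration_parameters(time_scale=20.0, width=84, height=84)
```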
Convert the UnitySDK to a Packman Package.
- Separate Examples into a sample project.
- Move core UnitySDK Code into com.unity.ml-agents.
- Create asmdefs for the ml-agents package.
- Add package validation tests for win/linux/mac.
- Update protobuf generation scripts.
- Add Barracuda as a package dependency for ML-Agents. (users no longer have to install it themselves).
* Various doc improvements
For Using-Virtual-Environment.md:
- Made a note regarding updating setuptools and pip.
- Changed lists from "-" to "*"
For Using-Tensorboard.md:
- Changed the ordered list to use "1."
For Training-on-Microsoft-Azure-Custom-Instance.md:
- Deleted as it was not linked anywhere
For FAQ.md
- Removed stale issues given upgrade to 2018.3
For Readme.md
- Added links for Reward Signals, Self-Play and Profiling Trainers
For Learning-Environment-Executable.md
- Changed the ordered list to use "1."
For Learning-Environment-Examples.md
- Minor rewording of intro paragraphs
* consolidating custom instances page in main page
So we have a single page for Azure.
Adding warning note for deprecated docs
* Fixing doc links that are failing CI
* Running prettier formatting as a follow-up PR to #3775
Running the files (except Training-ML-Agents.md) through the `prettier` linter.
https://github.com/Unity-Technologies/ml-agents/pull/3775/files
* minor fixes
Changed a header from “Custom Metrics from C#” to “Custom Metrics from Unity”
Fixed formatting in FAQ
* Minor correction.
* Several, small documentation improvements
- Re-organize main repo README
- Minor clean-ups to Python package-specific readme files
- Clean-up to Unity Inference Engine page
- Update to the docs README
- Added a specific cross-platform section in ML-Agents Overview to amplify Barracuda
- Updated the links in Limitations.md to point to the specific subsections
- Cleaned up the Designing a Learning Environment page. Added an intro paragraph.
- Updated the installation guide to specifically call out local installation
- A few minor formatting, spelling errors fixed.
* about to implement orientation cube
* oCube spawning works. ready to train
* working. about to try com
* ready for training
* add random rot on episode start
* feet now alternate but runs backwards
* still running with right leg in front
* increased joint strength to 40k
* removed texture example
* reduced maxAngVel, enabled enhanced determinism, cont spec
* rebuilt walker ragdoll to scale 1
* rebuilt ragdoll ready
* update walker pair prefab
* fixed bp hierarchy
* added trained model, renamed scene, usecollisioncallbacks
* updated dynamic platforms
* added dynamic walker tf file. max speed 5
* DynamicWalker working. has working nn file
* collect local rotations
* added new dynamic nn file
* hip facing reward
* Create WalkerDynamic.yaml
* fix hip rotation
* about to clean up code
* added dirIndicator and orentCubeGizmo
* clean up
* cleanup
* up...
* init
* Add reward manager and hurryUpReward
* fix hurry reward/ add awful first training
* Turn off head height and hurry rew
* changed max speed to 15. added small hh rew
* add NaN check for reward manager. start vel penalty
* add bpVel pen
* add new BPVelPen nn file
* remove outdated nn file
* add randomize speed bool
* try reward product
* change coeff to 1
* try avg vel of all bp for reward
* move outside loop
* try linear inverselerp for vel
* add avg rew matchspeed15 nn file. looks much better
* save scene
* no hand penalty, random walk speed
* fix inverse lerp
* try new reward falloff
* cleanup
* added new nn file. don't allow hand contact
* update obsv
* remove hh rew. add trained no-hh model
* add new nn file
* new curve
* add new models. try no reset
* add hh rew
* clamp hh
* zero rewards if ground contact
* switch to approved with movi...
VisualFoodCollector is now an example environment that uses a mix of visual and vector observations and can be trained with the default config file.
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>
* Removing some scenes: all the Static and all the non-variable-speed environments. Also removed Bouncer, PushBlock, WallJump and Reacher. Removed a bunch of visual environments as well. Removed 3DBallHard and FoodCollector (kept Visual and Grid FoodCollector)
* Re-adding 3DBallHard
* Re-adding PushBlock and WallJump
* Removing tennis
* removing mentions of removed environments
* removing unused images
* Renaming Crawler demos
* renaming some demo files
* removing and modifying some config files
* new examples image?
* removing Bouncer from build list
* replacing the Bouncer environment with Match3 for llapi tests
* Typo in yamato test
* Add pushblock collab
* Make SimpleMultiAgentGroup public
* Remove GoalDetectTrigger
* Remove GDT meta file
* Remove some comments
* Add training configuration
* Rename behavior
* Add to docs
* Change the reward structure in docs
* Add back GoalDetectTrigger
Co-authored-by: HH <brandonh@unity3d.com>
* Added the goal-conditioned GridWorld to replace the regular GridWorld
* adding missing files
* Code improvements
* Documentation change on gridworld
* resolving conflicts
* new model
* Addressing comments
* comments and renames
* Update docs/Learning-Environment-Examples.md
Co-authored-by: Ervin T. <ervin@unity3d.com>
* adding reference to gridworld in docs about goal signal
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
Co-authored-by: Ervin T. <ervin@unity3d.com>