* Simplifying the Agent reset logic
- Agents will reset in ResetIfDone immediately after being marked Done
- Agents will always request a decision right after reset (see the sketch after this list)
- This change implies that additional messages might be sent to Python
* Fixing the Unit Tests
* Added a note in the Migrating.md document
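A minimal sketch of the simplified flow, assuming an Agent with `done`, `reset`, and `request_decision` members (the real logic lives in the C# Agent class; these names are illustrative only):

```python
class Agent:
    """Illustrative stand-in for the C# Agent; not the real API."""

    def __init__(self):
        self.done = False

    def reset(self):
        self.done = False

    def request_decision(self):
        pass  # queue a decision request for the next step

def reset_if_done(agents):
    for agent in agents:
        if agent.done:
            agent.reset()             # reset immediately after being marked Done
            agent.request_decision()  # always request a decision right after reset
```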
* convert edit mode test runner to python
* unbuffered output, strip trailing slash in dir
* cleanup
* try to get artifacts
* parse xml instead of magic shell commands
* always copy results.xml
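A rough sketch of that parsing step, assuming NUnit-style output; the function name and paths are placeholders:

```python
import shutil
import xml.etree.ElementTree as ET

def check_results(results_path="results.xml", artifact_dir="artifacts"):
    # Always copy results.xml out as an artifact, pass or fail.
    shutil.copy(results_path, artifact_dir)
    # Read the results file directly instead of grepping it with shell commands.
    root = ET.parse(results_path).getroot()
    return root.attrib.get("result") == "Passed"
```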
* convert standalone-build-test to python
* run as module
* os.getcwd()
* initial commit
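The run-as-module step might look something like this; the module name is a placeholder:

```python
import subprocess
import sys

# -u keeps output unbuffered so logs stream in CI; -m runs the test as a
# module, resolving it from the current working directory (os.getcwd()).
subprocess.check_call([sys.executable, "-u", "-m", "standalone_build_test"])
```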
* Fixed the compilation errors
* fixing the tests
* Addressing the comment about the brain parameters
* Fixing typo
* Made timers more accurate
* addressing comments
* Better memory allocation
* Added some docstrings
* Adding better sensor validation
* Wrapped in #if DEBUG and also wrapped GenerateSensorData in a timer
* Timer changes
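The timer work itself is on the C# side (GenerateSensorData wrapped in a timer inside #if DEBUG); as a rough Python illustration of the pattern, with hypothetical names:

```python
import time
from contextlib import contextmanager

TIMINGS: dict = {}

@contextmanager
def timed(name):
    # perf_counter is monotonic and higher-resolution than time.time(),
    # which is roughly what "made timers more accurate" amounts to.
    start = time.perf_counter()
    try:
        yield
    finally:
        TIMINGS[name] = TIMINGS.get(name, 0.0) + (time.perf_counter() - start)

with timed("GenerateSensorData"):
    pass  # the instrumented work goes here
```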
This PR makes the env_manager send only the current BrainInfo and the previous actions (if any) to the AgentManager. The list of agents was added to ActionInfo and is used where needed, as sketched below.
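For illustration, the shape of the change might look like the following; the field names are assumptions, not the exact ML-Agents definitions:

```python
from typing import Any, List, NamedTuple

class ActionInfo(NamedTuple):
    action: Any        # actions computed for the listed agents
    outputs: Any       # raw policy outputs
    agents: List[str]  # ids of the agents these actions apply to (new field)

def advance(agent_manager, brain_info, previous_action_info: ActionInfo):
    # Only the current BrainInfo and the previous ActionInfo are handed over;
    # the agent list travels inside the ActionInfo itself.
    agent_manager.add_experiences(brain_info, previous_action_info)
```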
This change adds a new 'mlagents-run-experiment' command-line entry point
which accepts a single YAML/JSON file providing all of the information that
mlagents-learn accepts via command-line arguments and file inputs.
As part of this change, the curriculum configuration is simplified to
accept a single file for all the curricula in an environment rather than
one file per behavior.
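A sketch of the file-loading half of that entry point; the helper name is hypothetical:

```python
import json
import yaml

def load_experiment_config(config_path: str) -> dict:
    """Load the single YAML or JSON file that describes the whole run."""
    with open(config_path) as f:
        if config_path.endswith(".json"):
            return json.load(f)
        return yaml.safe_load(f)

# The resulting dict would then be mapped onto the same options object that
# mlagents-learn builds from its command-line arguments and file inputs.
config = load_experiment_config("experiment.yaml")
```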
The "num-runs" command-line option provides the ability to run multiple
identically-configured training runs in separate processes by running
mlagents-learn only once. This is a rarely used ML-Agents feature,
but it adds complexity to other parts of the system, which must support
multiprocessing and manage ports for the parallel training
runs. It also doesn't provide truly reproducible experiments, since there
is no guarantee of resource isolation between the trials.
This commit removes the --num-runs option, with the idea that users will
manage parallel or sequential runs of the same experiment themselves in the
future.
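Managing the runs yourself is straightforward; for example (the trainer config path is a placeholder, while --run-id and --base-port are existing mlagents-learn options):

```python
import subprocess

procs = []
for i in range(3):
    procs.append(subprocess.Popen([
        "mlagents-learn", "trainer_config.yaml",
        "--run-id", f"experiment_{i}",      # distinct run ids
        "--base-port", str(5005 + i * 10),  # avoid port collisions
    ]))
for p in procs:
    p.wait()
```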
* pass shape to WriteAdapter
* handle floats on python side
* cleanup
* whitespace
* rename GetFloatObservationShape, support uncompressed in RenderTexture sensor
* numpy float32
* remove unused using
* Float sensor and unit test
* replace asserts with exceptions, docstrings
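On the Python side, handling an uncompressed float observation amounts to something like this (the function name and flat float32 layout are assumptions):

```python
import numpy as np

def decode_float_observation(raw_bytes: bytes, shape: tuple) -> np.ndarray:
    # The sensor reports its shape separately (cf. the renamed
    # GetFloatObservationShape); the payload is interpreted as float32.
    return np.frombuffer(raw_bytes, dtype=np.float32).reshape(shape)
```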
Previously the Curriculum and MetaCurriculum classes required file / folder
paths for initialization. These methods loaded the configuration for the
curricula from the filesystem. Requiring files for configuring curricula
makes testing and updating our config format more difficult.
This change moves the file loading into static methods, so that Curricula /
MetaCurricula can be initialized from dictionaries only.
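A sketch of the resulting shape, with names that approximate (but may not exactly match) the real classes:

```python
import yaml

class Curriculum:
    def __init__(self, brain_name: str, config: dict):
        # Takes a plain dict, so tests can construct curricula in memory.
        self.brain_name = brain_name
        self.config = config

    @staticmethod
    def load_curriculum_file(path: str) -> dict:
        # File I/O lives in a static method rather than the constructor.
        with open(path) as f:
            return yaml.safe_load(f)

# From a file:
#   Curriculum("Walker", Curriculum.load_curriculum_file("walker.yaml"))
# Or straight from a dict in a unit test:
#   Curriculum("Walker", {"measure": "progress", "thresholds": [0.1, 0.3]})
```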
* added team id and identifier concat to behavior parameters
* splitting brain params into brain name and identifiers
* set team id in prefab
* receives brain_name and identifier on python side
* rebased with develop
* Correctly calls concatBehaviorIdentifiers
* trainer_controller expects name_behavior_ids
* add_policy and create_policy separated
* adjusting tests to expect trainer.add_policy to be called
* fixing tests
* fixed naming ...
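As an illustration of the naming scheme, splitting a concatenated behavior identifier back apart on the Python side might look like this (assuming the C# side joins the parts as `BrainName?team=Id`; the class name is an approximation):

```python
from typing import NamedTuple

class BehaviorIdentifiers(NamedTuple):
    brain_name: str
    team_id: int

def parse_name_behavior_id(name_behavior_id: str) -> BehaviorIdentifiers:
    name, _, team = name_behavior_id.partition("?team=")
    return BehaviorIdentifiers(brain_name=name, team_id=int(team) if team else 0)

# parse_name_behavior_id("Striker?team=1") -> BehaviorIdentifiers("Striker", 1)
```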