This PR changes the env_manager so that it sends only the current BrainInfo and the previous actions (if any) to the AgentManager. To support this, the list of agents was added to ActionInfo and is used where the previous BrainInfo was needed.
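A minimal sketch of that shape, assuming ActionInfo is a NamedTuple and that the new field is simply called `agents` (the real field set may differ):

```python
from typing import Any, Dict, List, NamedTuple


class ActionInfo(NamedTuple):
    # Sketch only: the fields other than `agents` are placeholders for
    # whatever the real ActionInfo carries (actions, policy outputs, ...).
    action: Any
    outputs: Dict[str, Any]
    agents: List[int]  # new field: ids of the agents these actions apply to


def empty_action_info() -> ActionInfo:
    # With the agent ids attached to the actions themselves, the AgentManager
    # no longer needs the previous BrainInfo to know who the actions were for.
    return ActionInfo(action=None, outputs={}, agents=[])
```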
This change adds a new 'mlagents-run-experiment' entry point that
accepts a single YAML/JSON file providing all of the information that
mlagents-learn accepts via command-line arguments and file inputs.
As part of this change, the curriculum configuration is simplified to
accept a single file for all of the curricula in an environment rather
than one file per behavior.
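As a rough illustration of what that single-file flow could look like (the section names `run_id`, `trainer_config`, and `curriculum` below are assumptions, not the real schema):

```python
import sys

import yaml  # PyYAML; YAML is close enough to a superset of JSON for this use


def run_experiment(config_path: str) -> None:
    # Hypothetical sketch: load one experiment file and pull out the pieces
    # mlagents-learn would otherwise get from CLI flags and separate files.
    with open(config_path) as f:
        experiment = yaml.safe_load(f)

    run_id = experiment.get("run_id", "default")
    trainer_config = experiment.get("trainer_config", {})
    curriculum = experiment.get("curriculum")  # one dict for all behaviors

    print(f"run_id={run_id}, trainers={list(trainer_config)}, "
          f"curriculum={'yes' if curriculum else 'no'}")
    # ...hand off to the same training code path that mlagents-learn uses...


if __name__ == "__main__":
    run_experiment(sys.argv[1])
```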
The "num-runs" command-line option provides the ability to run multiple
identically-configured training runs in separate processes by running
mlagents-learn only once. This is a rarely used ML-Agents feature,
but it adds complexity to other parts of the system by adding the need
to support multiprocessing and managing of ports for the parallel training
runs. It also doesn't provide truly reproducible experiments, since there
is no guarantee of resource isolation between the trials.
This commit removes the --num-runs option, with the idea that users will
manage parallel or sequential runs of the same experiment themselves in the
future.
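For example, repeated runs can be scripted outside of mlagents-learn; this sketch assumes the usual `--run-id` and `--base-port` options and a hypothetical config path:

```python
import subprocess

# Illustration only: run the same experiment several times without --num-runs.
NUM_RUNS = 3

for i in range(NUM_RUNS):
    subprocess.run(
        [
            "mlagents-learn",
            "config/trainer_config.yaml",       # hypothetical config path
            f"--run-id=experiment_{i}",
            f"--base-port={5005 + i * 10}",     # keep worker ports from colliding
        ],
        check=True,  # sequential: stop if a trial fails
    )
    # Use subprocess.Popen instead if you want the trials to run in parallel.
```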
* pass shape to WriteAdapter
* handle floats on the Python side (see the sketch after this list)
* cleanup
* whitespace
* rename GetFloatObservationShape, support uncompressed in RenderTexture sensor
* numpy float32
* remove unused using
* Float sensor and unit test
* replace asserts with exceptions, docstrings
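A hedged illustration of the Python-side float handling and the assert-to-exception change mentioned above (the helper name and validation are invented for this sketch, not the actual ML-Agents code):

```python
from typing import List

import numpy as np


def to_float_observation(raw: List[float]) -> np.ndarray:
    # Coerce an uncompressed float observation to float32 and validate it
    # with an exception rather than an assert.
    obs = np.asarray(raw, dtype=np.float32)
    if not np.all(np.isfinite(obs)):
        raise ValueError("Observation contains NaN or Inf values")
    return obs
```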
Previously the Curriculum and MetaCurriculum classes required file / folder
paths for initialization. These methods loaded the configuration for the
curricula from the filesystem. Requiring files for configuring curricula
makes testing and updating our config format more difficult.
This change moves the file loading into static methods, so that
Curriculum / MetaCurriculum instances can be initialized from dictionaries only.
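A small sketch of the pattern, with invented field names, showing a dict-based constructor and a static file loader:

```python
import json
from typing import Any, Dict


class Curriculum:
    def __init__(self, brain_name: str, config: Dict[str, Any]):
        # No filesystem access in the constructor, so tests can pass dicts
        # directly. The config keys here are illustrative.
        self.brain_name = brain_name
        self.thresholds = config.get("thresholds", [])
        self.parameters = config.get("parameters", {})

    @staticmethod
    def load_curriculum_file(path: str) -> Dict[str, Any]:
        # File parsing lives in a static helper instead of the constructor.
        with open(path) as f:
            return json.load(f)


# In tests, no files are needed:
curriculum = Curriculum("3DBall", {"thresholds": [0.5], "parameters": {"scale": [1, 2]}})
```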
* added team id and identifier concat to behavior parameters
* splitting brain params into brain name and identifiers
* set team id in prefab
* receives brain_name and identifier on Python side
* rebased with develop
* Correctly calls concatBehaviorIdentifiers
* trainer_controller expects name_behavior_ids
* add_policy and create_policy separated (see the sketch after this list)
* adjusting tests to expect trainer.add_policy to be called
* fixing tests
* fixed naming ...
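A hedged Python sketch of how the pieces above could fit together; the `?team=` separator, the function names, and the Trainer methods are assumptions for illustration, not the actual implementation (concatBehaviorIdentifiers itself lives on the C# side):

```python
from typing import Dict, Tuple

TEAM_SEPARATOR = "?team="  # assumed separator; the real format may differ


def concat_behavior_identifiers(brain_name: str, team_id: int) -> str:
    # Python-side mirror of the concatBehaviorIdentifiers idea: one string
    # that carries both the brain name and the team id.
    return f"{brain_name}{TEAM_SEPARATOR}{team_id}"


def split_behavior_identifiers(name_behavior_id: str) -> Tuple[str, int]:
    # Recover the brain name and team id from the combined identifier.
    if TEAM_SEPARATOR not in name_behavior_id:
        return name_behavior_id, 0
    brain_name, team = name_behavior_id.split(TEAM_SEPARATOR, maxsplit=1)
    return brain_name, int(team)


class Trainer:
    # Sketch of create_policy and add_policy as separate steps, with policies
    # keyed by the full name_behavior_id that trainer_controller passes in.
    def __init__(self) -> None:
        self.policies: Dict[str, object] = {}

    def create_policy(self, brain_parameters: object) -> object:
        return object()  # stand-in for constructing the real policy

    def add_policy(self, name_behavior_id: str, policy: object) -> None:
        self.policies[name_behavior_id] = policy
```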
Our tests were using pytest fixtures by calling the fixture functions
directly, but in newer 5.x versions of pytest this causes test failures.
The recommended method for using fixtures is dependency injection.
This change updates the relevant test fixtures to either not use
`pytest.fixture` or to use dependency injection to pass the fixture.
The version range requirements in `test_requirements.txt` were also
updated accordingly.
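For example (with an invented fixture), the difference between calling a fixture directly and letting pytest inject it:

```python
import pytest


@pytest.fixture
def dummy_config():
    return {"trainer": "ppo", "batch_size": 32}


# Before: calling the fixture function directly, which pytest 5.x rejects
# ("Fixtures are not meant to be called directly").
# def test_trainer_old():
#     config = dummy_config()
#     assert config["trainer"] == "ppo"


# After: request the fixture as an argument and let pytest inject it.
def test_trainer(dummy_config):
    assert dummy_config["trainer"] == "ppo"
```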