In the previous PR, steps were processed when the env manager was reset. This was a problem for the very first reset, because at that point we don't yet know which agent groups (and AgentManagers) the steps should be sent to, so those steps were being thrown away.
This PR moves the processing of steps to advance(), so the steps produced by the initial reset are simply processed on the next advance() call. This also removes the need for an extra block of code in TrainerController to handle the initial reset.
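A minimal sketch of the new flow (class and method names here are illustrative, not the exact ml-agents API):

```python
from typing import Dict, List, Optional


class EnvManager:
    """Simplified sketch: step infos produced by the first reset are kept
    and only processed on the next advance() call."""

    def __init__(self) -> None:
        self.agent_managers: Dict[str, object] = {}  # agent group -> AgentManager
        self.first_step_infos: Optional[List[dict]] = None

    def reset(self) -> None:
        # Don't process these yet: the AgentManagers for the agent groups may
        # not have been registered when the very first reset happens.
        self.first_step_infos = self._reset_env()

    def advance(self) -> int:
        # Flush any step infos left over from the initial reset first.
        if self.first_step_infos is not None:
            self._process_step_infos(self.first_step_infos)
            self.first_step_infos = None
        return self._process_step_infos(self._step())

    def _process_step_infos(self, step_infos: List[dict]) -> int:
        for step_info in step_infos:
            for group, manager in self.agent_managers.items():
                if group in step_info:
                    manager.add_experiences(step_info[group])
        return len(step_infos)

    # Stubs standing in for the real environment communication.
    def _reset_env(self) -> List[dict]:
        return [{}]

    def _step(self) -> List[dict]:
        return [{}]
```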
Convert the UnitySDK to a Packman Package.
- Separate Examples into a sample project.
- Move core UnitySDK code into com.unity.ml-agents.
- Create asmdefs for the ml-agents package.
- Add package validation tests for win/linux/mac.
- Update protobuf generation scripts.
- Add Barracuda as a package dependency for ML-Agents (users no longer have to install it themselves).
* Made the Agent reset immediately
* fixing the C# tests
* Fixing the tests still
* Trying with incremental episode ids
* deleting buffer rather than using an empty list
* Addressing the comments
* Forgot to edit the comment on AgentInfo
* Updating the migrating doc
* Fixed an obvious bug
* cleaning after an agent is done in agent processor
* Fixing the pytest errors
* Trimming some of the methods of the agent but left SetReward
* Fixing bugs
* modifying the environments
* Reintroducing IsDone and IsMaxStepReached
* Updating the Migrating doc
* more details on the Migration
This PR moves the AgentManagers from the TrainerController into the env_manager. This way, the TrainerController only needs to create the components (Trainers, AgentManagers) and call advance() on the EnvManager and the Trainers.
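Roughly, the resulting control flow looks like the sketch below; the method names (set_agent_manager, make_agent_manager, should_still_train) are stand-ins, not the real API:

```python
def train(env_manager, trainers):
    """Sketch of the slimmed-down TrainerController loop."""
    # The AgentManagers now live inside the env manager, so the controller
    # only wires them up once and then repeatedly calls advance().
    for behavior_name, trainer in trainers.items():
        env_manager.set_agent_manager(behavior_name, trainer.make_agent_manager())

    env_manager.reset()
    while any(t.should_still_train for t in trainers.values()):
        env_manager.advance()       # collect experiences into the AgentManagers
        for trainer in trainers.values():
            trainer.advance()       # consume experiences and update policies
```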
* Simplifying the Agent reset logic
- Agents will reset in ResetIfDone immediately after being marked Done (see the sketch after this list)
- Agents will always request a decision right after reset
- This change implies that additional messages might be sent to Python
* Fixing the Unit Tests
* Added a note in the Migrating.md document
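A pseudocode sketch of the new reset flow (written in Python only for consistency with the other examples here; the actual logic lives in the C# Agent):

```python
def reset_if_done(agent):
    """Illustrative stand-in for the C# Agent reset logic."""
    if agent.done:
        # Reset immediately after being marked Done...
        agent.reset()
        # ...and always request a decision right away, which is why extra
        # messages may now reach the Python side.
        agent.request_decision()
```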
* convert edit mode test runner to python
* unbuffered, strip slash in dir
* cleanup
* try to get artifacts
* parse xml instead of magic shell commands
* always copy results.xml
* convert standalone-build-test to python
* run as module
* os.getcwd()
* initial commit
* Fixed the compilation errors
* fixing the tests
* Addressing the comment about the brain parameters
* Fixing typo
* Made timers more accurate
* addressing comments
* Better memory allocation
* Added some docstrings
* Adding better sensor validation
* Wrapped in #if DEBUG and also wrapped GenerateSensorData in a timer
* Timer changes
This PR changes the env_manager so that it sends only the current BrainInfo and the previous actions (if any) to the AgentManager. The list of agents was added to ActionInfo and is used wherever the previous BrainInfo was needed before.
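Schematically, and with field and function names that are only approximations of the real trainer code:

```python
from typing import Any, Dict, List, NamedTuple


class ActionInfo(NamedTuple):
    """Sketch: ActionInfo now also records which agents the action was for."""
    action: Any
    value: Any
    outputs: Dict[str, Any]
    agents: List[str]  # ids of the agents this action applies to


def send_to_agent_manager(agent_manager, current_brain_info, previous_action):
    # Only the current BrainInfo and the previous ActionInfo are handed over;
    # the agent list on the ActionInfo replaces the previous BrainInfo.
    agent_manager.add_experiences(current_brain_info, previous_action)
```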
This change adds a new 'mlagents-run-experiment' endpoint which
accepts a single YAML/JSON file providing all of the information that
mlagents-learn accepts via command-line arguments and file inputs.
As part of this change, the curriculum configuration is simplified to
accept a single file covering all the curricula in an environment
rather than one file per behavior.
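Conceptually, the endpoint is a thin wrapper that loads one file and hands the resulting options to the same code path mlagents-learn uses; the sketch below is a hypothetical illustration, not the actual implementation:

```python
import sys

import yaml


def run_training(options: dict) -> None:
    # Stand-in for the shared training entry point that mlagents-learn uses.
    print("training with options:", options)


def run_experiment(argv=None) -> None:
    """Hypothetical sketch: one experiment file in, one training run out."""
    argv = sys.argv[1:] if argv is None else argv
    experiment_path = argv[0]
    with open(experiment_path) as f:
        # All the settings mlagents-learn takes via CLI flags and separate
        # config files live in this one file (JSON input parses as YAML too).
        options = yaml.safe_load(f)
    run_training(options)
```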
The "num-runs" command-line option provides the ability to run multiple
identically-configured training runs in separate processes by running
mlagents-learn only once. This is a rarely used ML-Agents feature,
but it adds complexity to other parts of the system by adding the need
to support multiprocessing and managing of ports for the parallel training
runs. It also doesn't provide truly reproducible experiments, since there
is no guarantee of resource isolation between the trials.
This commit removes the --num-runs option, with the idea that users will
manage parallel or sequential runs of the same experiment themselves in the
future.
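If you do need several identical runs, they can be managed outside of ML-Agents, for example by launching separate mlagents-learn processes; the config path and flag values below are just an example:

```python
import subprocess

# Three identically-configured runs in separate processes, each with its own
# run id and port range so the Unity environments don't collide.
procs = [
    subprocess.Popen([
        "mlagents-learn", "config/trainer_config.yaml",
        "--run-id", f"experiment_{i}",
        "--base-port", str(5005 + i * 100),
    ])
    for i in range(3)
]
for proc in procs:
    proc.wait()
```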
* pass shape to WriteAdapter
* handle floats on python side (see the sketch after this list)
* cleanup
* whitespace
* rename GetFloatObservationShape, support uncompressed in RenderTexture sensor
* numpy float32
* remove unused using
* Float sensor and unit test
* replace asserts with exceptions, docstrings
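For illustration, the python-side handling of a float sensor's data amounts to something like the sketch below (the function name and error type are assumptions, not the real API):

```python
import numpy as np


def observation_to_np_array(flat_obs, shape):
    """Reshape a flat float list from a sensor into a float32 array matching
    the sensor's declared shape, raising instead of asserting on a mismatch."""
    arr = np.asarray(flat_obs, dtype=np.float32)
    expected = int(np.prod(shape))
    if arr.size != expected:
        raise ValueError(
            f"Observation has {arr.size} values but shape {shape} expects {expected}."
        )
    return arr.reshape(shape)


# Example: six floats declared as a (2, 3) observation.
obs = observation_to_np_array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], (2, 3))
```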