Directory tree:
3d5f4e35
/main
/develop-generalizationTraining-TrainerController
/tag-0.2.0
/tag-0.2.1
/tag-0.2.1a
/tag-0.2.1c
/tag-0.2.1d
/hotfix-v0.9.2a
/develop-gpu-test
/0.10.1
/develop-pyinstaller
/develop-horovod
/PhysXArticulations20201
/importdocfix
/develop-resizetexture
/hh-develop-walljump_bugfixes
/develop-walljump-fix-sac
/hh-develop-walljump_rnd
/tag-0.11.0.dev0
/develop-pytorch
/tag-0.11.0.dev2
/develop-newnormalization
/tag-0.11.0.dev3
/develop
/release-0.12.0
/tag-0.12.0-dev
/tag-0.12.0.dev0
/tag-0.12.1
/2D-explorations
/asymm-envs
/tag-0.12.1.dev0
/2D-exploration-raycast
/tag-0.12.1.dev1
/release-0.13.0
/release-0.13.1
/plugin-proof-of-concept
/release-0.14.0
/hotfix-bump-version-master
/soccer-fives
/release-0.14.1
/bug-failed-api-check
/test-recurrent-gail
/hh-add-icons
/release-0.15.0
/release-0.15.1
/hh-develop-all-posed-characters
/internal-policy-ghost
/distributed-training
/hh-develop-improve_tennis
/test-tf-ver
/release_1_branch
/tennis-time-horizon
/whitepaper-experiments
/r2v-yamato-linux
/docs-update
/release_2_branch
/exp-mede
/sensitivity
/release_2_verified_load_fix
/test-sampler
/release_2_verified
/hh-develop-ragdoll-testing
/origin-develop-taggedobservations
/MLA-1734-demo-provider
/sampler-refactor-copy
/PhysXArticulations20201Package
/tag-com.unity.ml-agents_1.0.8
/release_3_branch
/github-actions
/release_3_distributed
/fix-batch-tennis
/distributed-ppo-sac
/gridworld-custom-obs
/hw20-segmentation
/hh-develop-gamedev-demo
/active-variablespeed
/release_4_branch
/fix-env-step-loop
/release_5_branch
/fix-walker
/release_6_branch
/hh-32-observation-crawler
/trainer-plugin
/hh-develop-max-steps-demo-recorder
/hh-develop-loco-walker-variable-speed
/exp-0002
/experiment-less-max-step
/hh-develop-hallway-wall-mesh-fix
/release_7_branch
/exp-vince
/hh-develop-gridsensor-tests
/tag-release_8_test0
/tag-release_8_test1
/release_8_branch
/docfix-end-episode
/release_9_branch
/hybrid-action-rewardsignals
/MLA-462-yamato-win
/exp-alternate-atten
/hh-develop-fps_game_project
/fix-conflict-base-env
/release_10_branch
/exp-bullet-hell-trainer
/ai-summit-exp
/comms-grad
/walljump-pushblock
/goal-conditioning
/release_11_branch
/hh-develop-water-balloon-fight
/gc-hyper
/layernorm
/yamato-linux-debug-venv
/soccer-comms
/hh-develop-pushblockcollab
/release_12_branch
/fix-get-step-sp-curr
/continuous-comms
/no-comms
/hh-develop-zombiepushblock
/hypernetwork
/revert-4859-develop-update-readme
/sequencer-env-attention
/hh-develop-variableobs
/exp-tanh
/reward-dist
/exp-weight-decay
/exp-robot
/bullet-hell-barracuda-test-1.3.1
/release_13_branch
/release_14_branch
/exp-clipped-gaussian-entropy
/tic-tac-toe
/hh-develop-dodgeball
/repro-vis-obs-perf
/v2-staging-rebase
/release_15_branch
/release_15_removeendepisode
/release_16_branch
/release_16_fix_gridsensor
/ai-hw-2021
/check-for-ModelOverriders
/fix-grid-obs-shape-init
/fix-gym-needs-reset
/fix-resume-imi
/release_17_branch
/release_17_branch_gpu_test
/colab-links
/exp-continuous-div
/release_17_branch_gpu_2
/exp-diverse-behavior
/grid-onehot-extra-dim-empty
/2.0-verified
/faster-entropy-coeficient-convergence
/pre-r18-update-changelog
/release_18_branch
/main/tracking
/main/reward-providers
/main/project-upgrade
/main/limitation-docs
/develop/nomaxstep-test
/develop/tf2.0
/develop/tanhsquash
/develop/magic-string
/develop/trainerinterface
/develop/separatevalue
/develop/nopreviousactions
/develop/reenablerepeatactions
/develop/0memories
/develop/fixmemoryleak
/develop/reducewalljump
/develop/removeactionholder-onehot
/develop/canonicalize-quaternions
/develop/self-playassym
/develop/demo-load-seek
/develop/progress-bar
/develop/sac-apex
/develop/cubewars
/develop/add-fire
/develop/gym-wrapper
/develop/mm-docs-main-readme
/develop/mm-docs-overview
/develop/no-threading
/develop/dockerfile
/develop/model-store
/develop/checkout-conversion-rebase
/develop/model-transfer
/develop/bisim-review
/develop/taggedobservations
/develop/transfer-bisim
/develop/bisim-sac-transfer
/develop/basketball
/develop/torchmodules
/develop/fixmarkdown
/develop/shortenstrikervsgoalie
/develop/shortengoalie
/develop/torch-save-rp
/develop/torch-to-np
/develop/torch-omp-no-thread
/develop/actionmodel-csharp
/develop/torch-extra
/develop/restructure-torch-networks
/develop/jit
/develop/adjust-cpu-settings-experiment
/develop/torch-sac-threading
/develop/wb
/develop/amrl
/develop/memorydump
/develop/permutepytorch
/develop/sac-targetq
/develop/actions-out
/develop/reshapeonnxmemories
/develop/crawlergail
/develop/debugtorchfood
/develop/hybrid-actions
/develop/bullet-hell
/develop/action-spec-gym
/develop/battlefoodcollector
/develop/use-action-buffers
/develop/hardswish
/develop/leakyrelu
/develop/torch-clip-scale
/develop/contentropy
/develop/manch
/develop/torchcrawlerdebug
/develop/fix-nan
/develop/multitype-buffer
/develop/windows-delay
/develop/torch-tanh
/develop/gail-norm
/develop/multiprocess
/develop/unified-obs
/develop/rm-rf-new-models
/develop/skipcritic
/develop/centralizedcritic
/develop/dodgeball-tests
/develop/cc-teammanager
/develop/weight-decay
/develop/singular-embeddings
/develop/zombieteammanager
/develop/superpush
/develop/teammanager
/develop/zombie-exp
/develop/update-readme
/develop/readme-fix
/develop/coma-noact
/develop/coma-withq
/develop/coma2
/develop/action-slice
/develop/gru
/develop/critic-op-lstm-currentmem
/develop/decaygail
/develop/gail-srl-hack
/develop/rear-pad
/develop/mm-copyright-dates
/develop/dodgeball-raycasts
/develop/collab-envs-exp-ervin
/develop/pushcollabonly
/develop/sample-curation
/develop/soccer-groupman
/develop/input-actuator-tanks
/develop/validate-release-fix
/develop/new-console-log
/develop/lex-walker-model
/develop/lstm-burnin
/develop/grid-vaiable-names
/develop/fix-attn-embedding
/develop/api-documentation-update-some-fixes
/develop/update-grpc
/develop/grid-rootref-debug
/develop/pbcollab-rays
/develop/2.0-verified-pre
/develop/parameterizedenvs
/develop/custom-ray-sensor
/develop/mm-add-v2blog
/develop/custom-raycast
/develop/area-manager
/develop/remove-unecessary-lr
/develop/use-base-env-in-learn
/soccer-fives/multiagent
/develop/cubewars/splashdamage
/develop/add-fire/exp
/develop/add-fire/jit
/develop/add-fire/speedtest
/develop/add-fire/bc
/develop/add-fire/ckpt-2
/develop/add-fire/normalize-context
/develop/add-fire/components-dir
/develop/add-fire/halfentropy
/develop/add-fire/memoryclass
/develop/add-fire/categoricaldist
/develop/add-fire/mm
/develop/add-fire/sac-lst
/develop/add-fire/mm3
/develop/add-fire/continuous
/develop/add-fire/ghost
/develop/add-fire/policy-tests
/develop/add-fire/export-discrete
/develop/add-fire/test-simple-rl-fix-resnet
/develop/add-fire/remove-currdoc
/develop/add-fire/clean2
/develop/add-fire/doc-cleanups
/develop/add-fire/changelog
/develop/add-fire/mm2
/develop/model-transfer/add-physics
/develop/model-transfer/train
/develop/jit/experiments
/exp-vince/sep30-2020
/hh-develop-gridsensor-tests/static
/develop/hybrid-actions/distlist
/develop/bullet-hell/buffer
/goal-conditioning/new
/goal-conditioning/sensors-2
/goal-conditioning/sensors-3-pytest-fix
/goal-conditioning/grid-world
/soccer-comms/disc
/develop/centralizedcritic/counterfact
/develop/centralizedcritic/mm
/develop/centralizedcritic/nonego
/develop/zombieteammanager/disableagent
/develop/zombieteammanager/killfirst
/develop/superpush/int
/develop/superpush/branch-cleanup
/develop/teammanager/int
/develop/teammanager/cubewar-nocycle
/develop/teammanager/cubewars
/develop/superpush/int/hunter
/goal-conditioning/new/allo-crawler
/develop/coma2/clip
/develop/coma2/singlenetwork
/develop/coma2/samenet
/develop/coma2/fixgroup
/develop/coma2/samenet/sum
/hh-develop-dodgeball/goy-input
/develop/soccer-groupman/mod
/develop/soccer-groupman/mod/hunter
/develop/soccer-groupman/mod/hunter/cine
/ai-hw-2021/tensor-applier
0.1.1
0.1.2
0.10.0
0.10.1
0.11.0
0.11.0.dev0
0.11.0.dev1
0.11.0.dev2
0.11.0.dev3
0.12.0
0.12.0-dev
0.12.0.dev0
0.12.1
0.12.1.dev0
0.12.1.dev1
0.13.0
0.13.1
0.14.0
0.14.1
0.15.0
0.15.1
0.2.0
0.2.1
0.2.1a
0.2.1b
0.2.1c
0.2.1d
0.3.0
0.3.0a
0.3.0b
0.3.1
0.3.1a
0.3.1b
0.4.0
0.4.0a
0.4.0b
0.5.0
0.5.0a
0.6.0
0.6.0a
0.7.0
0.8.0
0.8.1
0.8.2
0.9.0
0.9.1
0.9.2
0.9.3
com.unity.ml-agents_1.0.0
com.unity.ml-agents_1.0.2
com.unity.ml-agents_1.0.3
com.unity.ml-agents_1.0.4
com.unity.ml-agents_1.0.5
com.unity.ml-agents_1.0.6
com.unity.ml-agents_1.0.7
com.unity.ml-agents_1.0.8
com.unity.ml-agents_1.1.0
com.unity.ml-agents_1.2.0
com.unity.ml-agents_1.3.0
com.unity.ml-agents_1.4.0
com.unity.ml-agents_1.5.0
com.unity.ml-agents_1.6.0
com.unity.ml-agents_1.7.0
com.unity.ml-agents_1.7.2
com.unity.ml-agents_1.8.0
com.unity.ml-agents_1.8.1
com.unity.ml-agents_1.9.0
com.unity.ml-agents_1.9.1
com.unity.ml-agents_2.0.0
com.unity.ml-agents_2.1.0
latest_release
python-packages_0.16.0
python-packages_0.16.1
python-packages_0.17.0
python-packages_0.18.0
python-packages_0.18.1
python-packages_0.19.0
python-packages_0.20.0
python-packages_0.21.0
python-packages_0.21.1
python-packages_0.22.0
python-packages_0.23.0
python-packages_0.24.0
python-packages_0.24.1
python-packages_0.25.0
python-packages_0.25.1
python-packages_0.26.0
python-packages_0.27.0
release_1
release_10
release_10_docs
release_11
release_11_docs
release_12
release_12_docs
release_13
release_13_docs
release_14
release_14_docs
release_15
release_15_docs
release_16
release_16_docs
release_17
release_17_docs
release_18
release_18_docs
release_1_docs
release_2
release_2_docs
release_2_verified_docs
release_3
release_3_docs
release_4
release_4_docs
release_5
release_5_docs
release_6
release_6_docs
release_7
release_7_docs
release_8
release_8_docs
release_8_test0
release_8_test1
release_9
release_9_docs
v0.1
14 commits (3d5f4e35-8403-4171-a3a3-1f42c06293c4)
Author | SHA1 | Notes | Commit date |
---|---|---|---|
GitHub | c17937ef |
Curiosity Driven Exploration & Pyramids Environments (#739)
* Adds an implementation of Curiosity-driven Exploration by Self-supervised Prediction (https://arxiv.org/abs/1705.05363) to the PPO trainer.
* To enable, set the use_curiosity flag to true in the hyperparameter file.
* Includes a refactor of the unitytrainers model code to accommodate the new feature.
* Adds a new Pyramids environment (with documentation). The environment contains a sparse reward and can only be solved using PPO + Curiosity. |
7 years ago |
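A minimal, hypothetical sketch of how that flag would appear in a trainer hyperparameter YAML of that era; the behavior section name and the numeric values below are illustrative assumptions, not taken from the commit:

```yaml
# trainer_config.yaml excerpt (illustrative sketch; section name and values are assumptions)
Pyramids:                   # example behavior/brain section, not from the commit
  trainer: ppo
  use_curiosity: true       # the flag referenced in commit c17937ef
  curiosity_strength: 0.01  # assumed weight of the intrinsic (curiosity) reward
  curiosity_enc_size: 256   # assumed size of the curiosity encoder
```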
GitHub | 7eebb5f6 |
Fixes for broken materials (#855)
* Fix materials for wall and checker
* Make the ball orange again |
7 years ago |
Vincent(Yuan) Gao | 7ce0b834 |
Add Brains for Pyramids, Reacher, SoccerTwos, Tennis, Bouncer, and CrawlerDynamic (#1313)
* New brains for the Pyramids scene
* Add Reacher brains
* New brains for the Soccer agents
* New Tennis brains
* Set prefabs correctly
* New brains for Bouncer
* New Dynamic Crawler brains |
6 years ago |
Vincent-Pierre BERGES | 4a6ae4e0 |
Barracuda integration into ML-Agents (#1557)
* Switched the default Mac GFX API to Metal
* Added Barracuda pre-0.1.5
* Added basic integration with the Barracuda Inference Engine
* Use predefined outputs the same way as for the TF engine
* Fixed discrete action + LSTM support
* Switched the Unity Mac Editor to the Metal GFX API
* Fixed null model handling
* All examples converted to support Barracuda
* Added model conversion from TensorFlow to Barracuda: copied barracuda.py and tensorflow_to_barracuda.py to ml-agents/mlagents/trainers, modified tensorflow_to_barracuda.py so it could be called from mlagents, and modified ml-agents/mlagents/trainers/policy.py to convert the TF models to Barracuda-compatible .bytes files
* Added missing iOS BLAS plugin
* Added forgotten prefab changes
* Removed the GLCore GFX backend for Mac, because it doesn't support compute shaders
* Exposed GPU support for LearningBrain inference
... |
6 years ago |
GitHub | bebdb293 |
ML-Agents Branding & Color Updates (#2583)
* New env styles rebased on develop
* Added new trained models
* Renamed FoodCollector platforms
* Reduce training timescale on WallJump from 100 to 10
* Uncheck Academy control on WallJump
* New banner image
* Rename banner file
* New example env images
* Add FoodCollector image
* Change Banana to FoodCollector and update image
* Change Bouncer description to include the green cube
* Update image
* Update GridWorld image
* Clean up prefab names and tags
* Updated Soccer env to reference the purple agent instead of red
* Remove unused mats
* Rename files
* Remove more unused tags
* Update image
* Change platform to agent cube
* Update text; change platform to the agent's head
* Cleanup
* Cleaned up weird unused meta files
* Add new WallJump .nn files and rename a prefab
* Walker: change stacked states from 5 to 1; the Walker collects physics observations, so stacked states are not need... |
5 years ago |
GitHub | 99146e97 |
1 to 1 Brain to Agent (#2729)
* 1 to 1 Brain to Agent. This is a work in progress. In this PR:
  - Deleted all Brain objects
  - Moved the BrainParameters into the Agent
  - Gave the Agent a Heuristic method (see Balance Ball for an example)
  - Modified the Communicator and ModelRunner: Put can only take one agent at a time
  - Made the IBrain interface with RequestDecision and DecideAction methods
  No changes made to Python. [Design Doc](https://docs.google.com/document/d/1hBhBxZ9lepGF4H6fc6Hu6AW7UwOmnyX3trmgI3HpOmo/edit#)
* Removing editorconfig
* Updating the BalanceBall scene
* Grammar mistake
* Clearing the Agents of the ModelRunner
* Added documentation on IBrain
* Modified comments on GiveModel
* Introduced a factory
* Split the Learning Brain in two
* Changes to WallJump
* Fixing the unit tests
* Renaming the Brain to Policy
* Heuristic now has priority over training
* Edited code comments
* Fixing bugs
* Develop one-to-one scene edits... |
5 years ago |
Jonathan Harper | c561dfbf |
Update NN models for all example scenes (v0.11.0)
This change updates all of the .nn models, and uses the new output filenames (e.g. 3DBallLearning.nn becomes 3dBall.nn). |
5 years ago |
GitHub | 03664e75 |
Make On Demand Decision the default (#3243)
* Added a simple Decision Requester
* Modified the prefabs
* Fixing the tests and removing fields from the Agent parameters
* Migrating.md
* Addressing comments
* Addressing comments |
5 years ago |
GitHub | a1a1126d |
Trim some public fields on the Agent (#3269)
* Trimming some of the methods of the Agent, but left SetReward
* Fixing bugs
* Modifying the environments
* Reintroducing IsDone and IsMaxStepReached
* Updating the Migrating doc
* More details on the migration |
5 years ago |
GitHub | ceebacd6 | Convert pyramids to Raycast sensor (#3299) | 5 years ago |
Yuan Gao | 24a681bf | Updated the prefabs to enable inference | 5 years ago |
vincentpierre | 89605d02 |
Replaced .nn models with .onnx models
Missing so far:
- Visual3DBall
- CrawlerDynamicVariableSpeed
- CrawlerStaticVariableSpeed
- GridFoodCollector
- VisualFoodCollector
- GridWorld
- Match3 |
4 years ago |
GitHub | bc0ba098 | add option for Burst inference (#4925) | 4 years ago |
vincentpierre | 4e14879d |
Updating the barracuda 1.4.0 (#5291)
Initial commit; second commit. The no-extrinsic model was trained without the log reward (reward = prob), while the new one uses (reward = log_prob - log_prior). A few results: it looks like Walker-diverse-r05-bigger.onnx is doing something. Modified PushBlock using the next state and action; did not help. Fixed a bug that had 9 diversity settings instead of 8. Removing results. |
4 years ago |