The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents.
GitHub d8b93f8f [Bug fix] Hard reset when team changes (#3870) (#3899) 5 years ago
components Catch dimension mismatches between demos and policy (#3821) 5 years ago
ghost [Bug fix] Hard reset when team changes (#3870) (#3899) 5 years ago
optimizer [change] Organize trainer files a bit better (#3538) 5 years ago
policy WIP : Changes to the LL-API - Refactor of “done” logic (#3681) 5 years ago
ppo [refactor] Run Trainers in separate threads (#3690) 5 years ago
sac [refactor] Run Trainers in separate threads (#3690) 5 years ago
tests [Bug fix] Hard reset when team changes (#3870) (#3899) 5 years ago
trainer [bug-fix] Bugfixes for Threaded Trainers (#3817) 5 years ago
__init__.py Release tooling (#3856) 5 years ago
action_info.py Move advance() logic for environment manager out of trainer_controller (#3234) 5 years ago
agent_processor.py Update comment on time horizon in agent processor (#3842) 5 years ago
barracuda.py fix errors from new flake8-comprehensions (#2917) 5 years ago
behavior_id_utils.py Asymmetric self-play (#3653) 5 years ago
brain.py removed extraneous logging imports and loggers 5 years ago
brain_conversion_utils.py WIP : Changes to the LL-API - Refactor of “done” logic (#3681) 5 years ago
buffer.py Fix clear update buffer when trainer stops training, add test (#3422) 5 years ago
curriculum.py Hotfixes for Release 0.15.1 (#3698) 5 years ago
demo_loader.py Catch dimension mismatches between demos and policy (#3821) 5 years ago
distributions.py Hotfixes for Release 0.15.1 (#3698) 5 years ago
env_manager.py [WIP] Side Channel Design Changes (#3807) 5 years ago
exception.py Combined model and policy for PPO 5 years ago
learn.py Removed the default for width and height of the executable training. (#3867) 5 years ago
meta_curriculum.py Hotfixes for Release 0.15.1 (#3698) 5 years ago
models.py [change] Remove concatenate in discrete action probabilities to improve inference performance (#3598) 5 years ago
run_experiment.py Add 'run-experiment' script, simpler curriculum config (#3186) 5 years ago
sampler_class.py Moving Env Manager to Trainers (#3062) The Env Manager is only used by the trainer codebase. The entry point to interact with an environment is UnityEnvironment. 5 years ago
simple_env_manager.py [WIP] Side Channel Design Changes (#3807) 5 years ago
stats.py Asymmetric self-play (#3653) 5 years ago
subprocess_env_manager.py [bug-fix] Fix exception thrown when quitting in-editor training from editor (#3885) 5 years ago
tensorflow_to_barracuda.py backport tf2bc changes from barracuda-release (#3341) 5 years ago
trainer_controller.py [Bug fix] Hard reset when team changes (#3870) (#3899) 5 years ago
trainer_util.py [feature] Add --initialize-from option (#3710) 5 years ago
trajectory.py Replace BrainInfos with BatchedStepResult (#3207) 5 years ago