The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents.
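The files listed below make up the trainer side of the toolkit; they drive a Unity environment through the low-level Python API of the same era (the BatchedStepResult interface referenced in several of the commit messages below). As a rough, minimal sketch only: the snippet assumes the 0.15-era `mlagents_envs` method names (`get_agent_groups`, `get_step_result`, `set_actions`), a continuous action space, and a placeholder build name "3DBall"; exact signatures changed in later releases.

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment

# Connect to a built Unity environment ("3DBall" is a placeholder build name;
# file_name=None would attach to the running Editor instead).
env = UnityEnvironment(file_name="3DBall")
env.reset()

# One agent group per Behavior in the scene; group names are plain strings here.
group_name = env.get_agent_groups()[0]
spec = env.get_agent_group_spec(group_name)

for _ in range(10):
    # BatchedStepResult holds the observations/rewards for every agent in the group.
    step_result = env.get_step_result(group_name)
    # Assumption: continuous actions, so spec.action_shape is the action size.
    actions = np.random.uniform(
        -1.0, 1.0, size=(step_result.n_agents(), spec.action_shape)
    ).astype(np.float32)
    env.set_actions(group_name, actions)
    env.step()

env.close()
```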
Latest commit: bcc25d59 by Ervin Teng, "Merge branch 'master' into develop-splitpolicyoptimizer", 5 years ago
| Name | Last commit message | Last updated |
| --- | --- | --- |
| `common` | Clean up nn_policy | 5 years ago |
| `components` | Update docstring | 5 years ago |
| `ghost` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `ppo` | Removed floating constants | 5 years ago |
| `sac` | Used NamedTuple for create normalization tensors | 5 years ago |
| `tests` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `__init__.py` | set package and API to 0.15.0-dev0 (#3369) | 5 years ago |
| `action_info.py` | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| `agent_processor.py` | Clear agent processor properly on episode reset (#3437) | 5 years ago |
| `barracuda.py` | fix errors from new flake8-comprehensions (#2917) | 5 years ago |
| `behavior_id_utils.py` | Self-play for symmetric games (#3194) | 5 years ago |
| `brain.py` | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| `brain_conversion_utils.py` | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| `buffer.py` | Fix clear update buffer when trainer stops training, add test (#3422) | 5 years ago |
| `curriculum.py` | Allow curricula to be created without files (#3145) | 5 years ago |
| `demo_loader.py` | [bug-fix] Use correct agent_ids for demo loader (#3464) | 5 years ago |
| `env_manager.py` | Move processing of steps after reset to advance() (#3271) | 5 years ago |
| `exception.py` | Combined model and policy for PPO | 5 years ago |
| `learn.py` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `meta_curriculum.py` | Add 'run-experiment' script, simpler curriculum config (#3186) | 5 years ago |
| `models.py` | Used NamedTuple for create normalization tensors | 5 years ago |
| `policy.py` | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| `rl_trainer.py` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `run_experiment.py` | Add 'run-experiment' script, simpler curriculum config (#3186) | 5 years ago |
| `sampler_class.py` | Moving Env Manager to Trainers (#3062): The Env Manager is only used by the trainer codebase. The entry point to interact with an environment is UnityEnvironment. | 5 years ago |
| `simple_env_manager.py` | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| `stats.py` | Make the timer output format consistent (#3472) | 5 years ago |
| `subprocess_env_manager.py` | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| `tensorflow_to_barracuda.py` | backport tf2bc changes from barracuda-release (#3341) | 5 years ago |
| `tf_policy.py` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `trainer.py` | Merge branch 'master' into develop-splitpolicyoptimizer | 5 years ago |
| `trainer_controller.py` | Make the timer output format consistent (#3472) | 5 years ago |
| `trainer_util.py` | Temporarily remove multi-GPU | 5 years ago |
| `trajectory.py` | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
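Of the files above, `learn.py` provides the entry point behind the `mlagents-learn` command, while `trainer_util.py` constructs the individual trainers from the YAML trainer configuration. Below is a minimal sketch of launching a training run from Python, assuming the 0.15-era CLI (where the `--train` flag was still required) and the repository's conventional `config/trainer_config.yaml` path; both the config path and the run id are placeholders.

```python
import subprocess

# Launch training the same way the `mlagents-learn` console script does
# (that script is backed by learn.py in this directory). The config path and
# run id are placeholders; in the 0.15-era CLI the --train flag was still needed
# to update the policy rather than run in inference mode.
subprocess.run(
    [
        "mlagents-learn",
        "config/trainer_config.yaml",  # placeholder trainer configuration
        "--run-id=example-run",        # arbitrary run identifier
        "--train",
    ],
    check=True,
)
```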