The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents.
Latest commit: be14dd42 — Make the timer output format consistent (#3472), 5 years ago

| File / Directory | Last commit message | Last updated |
| --- | --- | --- |
| components/ | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| ghost/ | [bug-fix] Empty ignored trajectory queues, make sure queues don't overflow (#3451) | 5 years ago |
| ppo/ | Fix extra summary being written when loading from checkpoint (#3272) | 5 years ago |
| sac/ | Fix extra summary being written when loading from checkpoint (#3272) | 5 years ago |
| tests/ | Make the timer output format consistent (#3472) | 5 years ago |
| __init__.py | set package and API to 0.15.0-dev0 (#3369) | 5 years ago |
| action_info.py | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| agent_processor.py | Clear agent processor properly on episode reset (#3437) | 5 years ago |
| barracuda.py | fix errors from new flake8-comprehensions (#2917) | 5 years ago |
| behavior_id_utils.py | Self-play for symmetric games (#3194) | 5 years ago |
| brain.py | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| brain_conversion_utils.py | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| buffer.py | Fix clear update buffer when trainer stops training, add test (#3422) | 5 years ago |
| curriculum.py | Allow curricula to be created without files (#3145) | 5 years ago |
| demo_loader.py | [bug-fix] Use correct agent_ids for demo loader (#3464) | 5 years ago |
| env_manager.py | Move processing of steps after reset to advance() (#3271) | 5 years ago |
| exception.py | Better error handling if trainer config doesn't contain "default" section (#3063) | 5 years ago |
| learn.py | Make the timer output format consistent (#3472) | 5 years ago |
| meta_curriculum.py | Add 'run-experiment' script, simpler curriculum config (#3186) | 5 years ago |
| models.py | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| policy.py | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |
| rl_trainer.py | Fix clear update buffer when trainer stops training, add test (#3422) | 5 years ago |
| run_experiment.py | Add 'run-experiment' script, simpler curriculum config (#3186) | 5 years ago |
| sampler_class.py | Moving Env Manager to Trainers (#3062): the Env Manager is only used by the trainer codebase; the entry point to interact with an environment is UnityEnvironment. | 5 years ago |
| simple_env_manager.py | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| stats.py | Make the timer output format consistent (#3472) | 5 years ago |
| subprocess_env_manager.py | Move advance() logic for environment manager out of trainer_controller (#3234) | 5 years ago |
| tensorflow_to_barracuda.py | backport tf2bc changes from barracuda-release (#3341) | 5 years ago |
| tf_policy.py | Change checkpoint suffix to "ckpt" (#3470) | 5 years ago |
| trainer.py | Support for ONNX export (#3101) | 5 years ago |
| trainer_controller.py | Make the timer output format consistent (#3472) | 5 years ago |
| trainer_util.py | Self-play for symmetric games (#3194) | 5 years ago |
| trajectory.py | Replace BrainInfos with BatchedStepResult (#3207) | 5 years ago |