16 commits (a3ec270f-4959-4f44-8f42-ca58228dbf77)

Author SHA1 Message Commit date
GitHub 4ac79742 Refactor reward signals into separate class (#2144) 6 years ago
GitHub b05c9ac1 Add environment manager for parallel environments (#2209) 5 years ago
GitHub d80d5852 add some types to the reward signals (#2215) 5 years ago
GitHub 7b69bd14 Refactor Trainer and Model (#2360) 5 years ago
GitHub bd7eb286 Update reward signals in parallel with policy (#2362) 5 years ago
GitHub 689765d6 Modification of reward signals and rl_trainer for SAC (#2433) 5 years ago
GitHub 67d754c5 Fix flake8 import warnings (#2584) 5 years ago
GitHub c6c01a03 Enable pylint and fix a few things (#2767) 5 years ago
GitHub 69d1a033 Develop remove past action communication (#2913) 5 years ago
GitHub 652488d9 check for numpy float64 (#2948) 5 years ago
GitHub 36048cb6 Moving Env Manager to Trainers (#3062) The Env Manager is only used by the trainer codebase. The entry point to interact with an environment is UnityEnvironment. 5 years ago
GitHub 2fd305e7 Move add_experiences out of trainer, add Trajectories (#3067) 5 years ago
Ervin Teng 9d1eff12 Fix one more np float32 issue 5 years ago
GitHub f058b18c Replace BrainInfos with BatchedStepResult (#3207) 5 years ago
GitHub ffd8f855 [bug-fix] Fix crash when demo size is smaller than batch size (#3591) 5 years ago
GitHub e92b4f88 [refactor] Structure configuration files into classes (#3936) 5 years ago