
step increment moved to _update_policy, fixed exit status issue

/distributed-training
Anupam Bhatnagar · 5 years ago
Current commit
06a54ae8
3 files changed, 1 insertion(+), 2 deletions(-)
  1. config/trainer_config.yaml (1 change)
  2. ml-agents/mlagents/trainers/optimizer/tf_optimizer.py (2 changes)

config/trainer_config.yaml (1 change)


     time_horizon: 1000
     lambd: 0.99
     beta: 0.001
-    max_steps: 1.0e5
 
 3DBallHard:
     normalize: true
ml-agents/mlagents/trainers/optimizer/tf_optimizer.py (2 changes)


 if hvd is not None:
     adam_optimizer = tf.train.AdamOptimizer(
-        learning_rate=learning_rate * hvd.size(), name=name
+        learning_rate=learning_rate, name=name
     )
     horovod_optimizer = hvd.DistributedOptimizer(adam_optimizer)
 else:
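
For context, a minimal sketch of the Horovod pattern this hunk touches, assuming Horovod's standard TF1 API (this is not the surrounding ml-agents code, and base_lr is a hypothetical value). Horovod's documented recipe scales the learning rate by hvd.size(), since the effective global batch grows with the number of workers, and then wraps the base optimizer so gradients are averaged across workers; the change above removes that scaling so each worker keeps the unscaled rate.

import tensorflow as tf
import horovod.tensorflow as hvd

# Initialize Horovod; after this, hvd.size() reports the worker count.
hvd.init()

base_lr = 3.0e-4  # hypothetical single-worker learning rate

# Linear-scaling convention from the Horovod docs: multiply the learning
# rate by the worker count. The deleted line above followed this pattern;
# the new line drops the hvd.size() factor.
optimizer = tf.train.AdamOptimizer(learning_rate=base_lr * hvd.size())

# DistributedOptimizer wraps gradient computation with an allreduce so
# every worker applies the same averaged update.
optimizer = hvd.DistributedOptimizer(optimizer)

# Broadcast rank 0's initial variables so all workers start from the
# same state.
bcast_hook = hvd.BroadcastGlobalVariablesHook(0)

Dropping the hvd.size() factor trades the linear-scaling convention for stability: a rate multiplied by the worker count can diverge, so keeping the tuned single-worker rate is a common alternative.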
