
Fix observations on PPO trainer (#340)

* Fix observations on PPO trainer

* tested and fixed the fix
/tag-0.2.1d
GitHub · 7 years ago
Commit faa53e35
1 file changed, 1 insertion(+), 1 deletion(-)
python/ppo/trainer.py (+1, -1)

  else:
      feed_dict = {self.model.batch_size: len(info.states)}
      if self.use_observations:
-         for i in range(self.info.observations):
+         for i in range(len(info.observations)):
              feed_dict[self.model.observation_in[i]] = info.observations[i]
      if self.use_states:
          feed_dict[self.model.state_in] = info.states
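
Why the one-line change matters: info.observations is a list holding one batch of observations per camera, so the old range(self.info.observations) call would raise a TypeError (range needs an integer), and self.info is not an attribute the trainer defines; info is the local step result. The fixed loop feeds each camera's batch into the matching observation placeholder. Below is a minimal, self-contained sketch of that feed-dict pattern; FakeModel and FakeInfo are hypothetical stand-ins for the trainer's TensorFlow model and the environment's BrainInfo object, not part of the real codebase.

    # Sketch of the fixed feed-dict loop. FakeModel and FakeInfo are
    # hypothetical stand-ins; they are not part of the ml-agents code.

    class FakeModel:
        def __init__(self, num_cameras):
            # Strings stand in for TensorFlow placeholder tensors.
            self.batch_size = "batch_size_placeholder"
            self.observation_in = ["obs_placeholder_%d" % i
                                   for i in range(num_cameras)]
            self.state_in = "state_placeholder"

    class FakeInfo:
        def __init__(self):
            self.states = [[0.0, 1.0], [2.0, 3.0]]  # 2 agents x 2 state dims
            # One entry per camera, each holding a batch of 2 agents.
            self.observations = [["cam0_agent0", "cam0_agent1"]]

    model = FakeModel(num_cameras=1)
    info = FakeInfo()
    feed_dict = {model.batch_size: len(info.states)}

    # Buggy version: range(self.info.observations) raises a TypeError,
    # because observations is a list of per-camera batches, not an int.
    # Fixed version iterates once per camera:
    for i in range(len(info.observations)):
        feed_dict[model.observation_in[i]] = info.observations[i]

    print(feed_dict)
    # {'batch_size_placeholder': 2,
    #  'obs_placeholder_0': ['cam0_agent0', 'cam0_agent1']}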
