
Fixed BC with LSTM (#766)

Fixes the issue raised by @hsaikia in #552
- Added the memory_size variable to the BC model.
- Added memory_size and recurrent_out to the output nodes of the graph when using BC with LSTM.
Branch: develop-generalizationTraining-TrainerController
GitHub · 7 years ago
Commit 38098a12
2 files changed, 4 insertions(+), 4 deletions(-)
1. python/unitytrainers/bc/models.py (1 change)
2. python/unitytrainers/trainer_controller.py (7 changes)
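For context on the first change: storing a scalar such as `m_size` as a named, non-trainable `tf.Variable` is what lets it survive graph export and be looked up by name at inference time. A minimal TF 1.x sketch of that pattern, using the node name from this commit; the surrounding graph and the `m_size` value are illustrative, not taken from the repo:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, as ml-agents used at the time

m_size = 256  # illustrative memory size; in the BC model this is self.m_size
with tf.Graph().as_default() as graph:
    # trainable=False keeps the value fixed during training; the explicit name
    # is what downstream inference code uses to find the node after export.
    tf.Variable(m_size, name="memory_size", trainable=False, dtype=tf.int32)

# The variable's nodes now exist in the graph definition under that name.
print([n.name for n in graph.as_graph_def().node if n.name.startswith("memory_size")])
```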

python/unitytrainers/bc/models.py (1 change)


 self.dropout_rate = tf.placeholder(dtype=tf.float32, shape=[], name="dropout_rate")
 hidden_reg = tf.layers.dropout(hidden, self.dropout_rate)
 if self.use_recurrent:
+    tf.Variable(self.m_size, name="memory_size", trainable=False, dtype=tf.int32)
     self.memory_in = tf.placeholder(shape=[None, self.m_size], dtype=tf.float32, name='recurrent_in')
     hidden_reg, self.memory_out = self.create_recurrent_encoder(hidden_reg, self.memory_in)
     self.memory_out = tf.identity(self.memory_out, name='recurrent_out')
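These named tensors are what a consumer of the exported graph uses to drive the recurrence: it reads `memory_size` once, then feeds each step's `recurrent_out` back into `recurrent_in`. A hedged sketch of that loop, assuming a frozen graph already loaded as `graph`; the `vector_observation` input name and the `observations` iterable are hypothetical placeholders:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

with tf.Session(graph=graph) as sess:  # `graph` holds the frozen BC model
    # Read the memory width baked in by this commit's tf.Variable.
    m_size = sess.run(graph.get_tensor_by_name("memory_size:0"))
    memory = np.zeros((1, m_size), dtype=np.float32)  # initial LSTM state
    for obs in observations:  # hypothetical stream of observations
        action, memory = sess.run(
            [graph.get_tensor_by_name("action:0"),
             graph.get_tensor_by_name("recurrent_out:0")],
            feed_dict={
                graph.get_tensor_by_name("vector_observation:0"): obs,
                graph.get_tensor_by_name("recurrent_in:0"): memory,
            })
```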

python/unitytrainers/trainer_controller.py (7 changes)


             scopes += [scope]
             if self.trainers[brain_name].parameters["trainer"] == "imitation":
                 nodes += [scope + x for x in ["action"]]
-            elif not self.trainers[brain_name].parameters["use_recurrent"]:
+            else:
                 nodes += [scope + x for x in ["action", "value_estimate", "action_probs"]]
-            else:
-                node_list = ["action", "value_estimate", "action_probs", "recurrent_out", "memory_size"]
-                nodes += [scope + x for x in node_list]
+            if self.trainers[brain_name].parameters["use_recurrent"]:
+                nodes += [scope + x for x in ["recurrent_out", "memory_size"]]
         if len(scopes) > 1:
             self.logger.info("List of available scopes :")
             for scope in scopes:
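The list built here is what the export step prunes the graph against, which is why dropping `recurrent_out` and `memory_size` for the imitation trainer broke BC with LSTM: those nodes were simply cut out of the frozen model. A minimal sketch of that consumption step, assuming TF 1.x; the repo's actual export path goes through TensorFlow's freeze_graph tooling, so treat this as illustrative only:

```python
import tensorflow as tf  # TF 1.x
from tensorflow.python.framework import graph_util

def freeze(sess, nodes, path="model.bytes"):
    # Everything not reachable from `nodes` is pruned from the exported graph,
    # so an output node missing from the list disappears downstream.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names=nodes)
    with tf.gfile.GFile(path, "wb") as f:
        f.write(frozen.SerializeToString())

# e.g. for an imitation brain with use_recurrent=True, after this fix:
# freeze(sess, ["action", "recurrent_out", "memory_size"])
```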
