
PPO 3dball config

/develop/bisim-sac-transfer
yanchaosun · 4 years ago
Commit 910707dd
4 files changed, 80 insertions(+), 5 deletions(-)
  1. config/ppo_transfer/3DBall.yaml (17 changes)
  2. config/ppo_transfer/3DBallHard.yaml (20 changes)
  3. ml-agents/mlagents/trainers/sac_transfer/optimizer.py (3 changes)
  4. config/ppo_transfer/3DBallHardTransfer.yaml (45 changes)

config/ppo_transfer/3DBall.yaml (17 changes)


      lambd: 0.99
      num_epoch: 3
      learning_rate_schedule: linear
      model_schedule: constant
      encoder_layers: 1
      policy_layers: 1
      forward_layers: 1
      value_layers: 1
      feature_size: 16
      reuse_encoder: false
      in_epoch_alter: false
      in_batch_alter: true
      use_op_buffer: false
      use_var_predict: true
      with_prior: false
      predict_return: true
      use_bisim: false
      separate_value_net: false
-     num_layers: 1
+     num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:

config/ppo_transfer/3DBallHard.yaml (20 changes)


      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
      conv_thres: 1e-3
      use_transfer: true
      transfer_path: "results/single_ball/3DBall"
      model_schedule: constant
      encoder_layers: 1
      policy_layers: 1
      forward_layers: 1
      value_layers: 1
      feature_size: 16
      reuse_encoder: false
      in_epoch_alter: false
      in_batch_alter: true
      use_op_buffer: false
      use_var_predict: true
      with_prior: false
      predict_return: true
      use_bisim: false
      separate_value_net: false
-     num_layers: 1
+     num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
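The flat key lists above are hunks from the `behaviors` section of an ml-agents run configuration. A quick way to sanity-check such a config before launching a run is to parse it and inspect the transfer-related hyperparameters. The sketch below assumes PyYAML is installed and embeds a small fragment mirroring keys from this diff; it is illustrative, not the trainer's own loading code:

```python
# Sanity-check sketch (assumes PyYAML): parse a config fragment and confirm
# the transfer-related hyperparameters come out with the expected types.
# The embedded snippet mirrors keys from the 3DBallHard.yaml diff above.
import yaml

snippet = """
behaviors:
  3DBallHard:
    trainer_type: ppo_transfer
    hyperparameters:
      lambd: 0.95
      use_transfer: true
      transfer_path: "results/single_ball/3DBall"
      feature_size: 16
"""

cfg = yaml.safe_load(snippet)
hp = cfg["behaviors"]["3DBallHard"]["hyperparameters"]

# Booleans and integers parse as native types; paths stay strings.
print(hp["use_transfer"], hp["feature_size"], hp["transfer_path"])
```

One caveat worth knowing: PyYAML resolves scalars like `conv_thres: 1e-3` (no decimal point) as strings rather than floats, so numeric-looking values are worth checking explicitly.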

ml-agents/mlagents/trainers/sac_transfer/optimizer.py (3 changes)


        self.update_dict = {
            "value_loss": self.total_value_loss,
            "policy_loss": self.policy_loss,
            # "model_loss": self.model_loss,
            # "model_learning_rate": self.model_learning_rate,
            # "reward_loss": self.policy.reward_loss,
            "q1_loss": self.q1_loss,
            "q2_loss": self.q2_loss,
            "entropy_coef": self.ent_coef,

config/ppo_transfer/3DBallHardTransfer.yaml (45 changes)


behaviors:
  3DBallHard:
    trainer_type: ppo_transfer
    hyperparameters:
      batch_size: 1200
      buffer_size: 12000
      learning_rate: 0.0003
      beta: 0.001
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
      model_schedule: constant
      encoder_layers: 1
      policy_layers: 1
      forward_layers: 1
      value_layers: 1
      feature_size: 16
      reuse_encoder: false
      in_epoch_alter: false
      in_batch_alter: true
      use_op_buffer: false
      use_var_predict: true
      with_prior: false
      predict_return: true
      use_bisim: false
      separate_value_net: false
      use_transfer: true
      transfer_path: "results/ball/3DBall"
      load_model: true
      train_model: false
    network_settings:
      normalize: true
      hidden_units: 128
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 4000000
    time_horizon: 1000
    summary_freq: 12000
    threaded: true
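Assuming this fork keeps the standard `mlagents-learn` entry point, a transfer experiment with these configs would be launched roughly as follows; the run IDs are illustrative, chosen so that the second run's `transfer_path: "results/ball/3DBall"` resolves to the first run's results directory:

```shell
# Illustrative usage (assumes the standard mlagents-learn CLI from this fork's
# setup). First train the source task, then the transfer task, whose
# transfer_path points at the first run's results directory.
mlagents-learn config/ppo_transfer/3DBall.yaml --run-id=ball
mlagents-learn config/ppo_transfer/3DBallHardTransfer.yaml --run-id=ball_hard_transfer
```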