58 commits (7196d44b-5d74-4364-a616-617b84cfd448)

Author SHA1 Message Commit date
GitHub 6b193d03 Develop add fire layers (#4321) 4 years ago
GitHub f374f87a [add-fire] Add LSTM to SAC, LSTM fixes and initializations (#4324) 4 years ago
Ervin Teng eeae6d97 Proper initialization and SAC masking 4 years ago
Ervin Teng 50b1470e Experimental amrl layer 4 years ago
Ervin Teng ef857c34 Add extra FF layer 4 years ago
Ervin Teng b44b5a24 Faster implementation 4 years ago
Ervin Teng 11b30916 Add comment 4 years ago
Ervin Teng 46f3a9b9 Passthrough max 4 years ago
Ervin Teng 13f15086 Merge branch 'develop-add-fire' into develop-add-fire-amrl 4 years ago
Ervin Teng cb0085a7 Memory size abstraction and fixes 4 years ago
Ervin Teng d22d2e26 LSTM class 4 years ago
GitHub 6a1d993f [add-fire] Memory class abstraction (#4375) 4 years ago
Ervin Teng aeda0b32 Don't use torch.split in LSTM 4 years ago
Ervin Teng 1dca75d8 Move linear encoding to NetworkBody 4 years ago
GitHub 4e93cb6e [torch] Restructure PyTorch encoders (#4421) 4 years ago
GitHub 6f534366 Add torch_utils class, auto-detect CUDA availability (#4403) 4 years ago
Ervin Teng 14a7e29b Add AMRL layer 4 years ago
Ervin Teng 9f96a495 Use built-in cumulative max 4 years ago
vincentpierre 96452986 Initial commit for multi head attention 4 years ago
Ervin Teng 99ec16e6 Hard Swish 4 years ago
vincentpierre d3d4eb90 Trainer with attention 4 years ago
Ervin Teng 5d3ad161 Leaky ReLU 4 years ago
vincentpierre 7ef3c9a1 Trainer with attention 4 years ago
Ervin Teng e80d418b Use lower scaling value 4 years ago
Ervin Teng c3cec801 Use linear gain for KaimingHe 4 years ago
GitHub 88d3ec3e Merge master into hybrid actions staging branch (#4704) 4 years ago
vincentpierre 6fcbba53 Refactoring the code to make it more flexible. Still a hack 4 years ago
vincentpierre 0b6c2ed3 Fixing some bugs 4 years ago
Ervin Teng 3b3b53e2 Improve comment 4 years ago
Andrew Cohen 84cc2b84 concat x self before attention 4 years ago
vincentpierre e14e1c4d Improvements and new tests 4 years ago
vincentpierre f283cb60 different architecture 4 years ago
Arthur Juliani 79898e06 Use hypernetwork in both places 4 years ago
vincentpierre f7a4a31f [Experiment] Bullet hell 4 years ago
Andrew Cohen f57875e0 layer norm 4 years ago
Andrew Cohen bc77c990 layer norm and weight decay with fixed architecture 4 years ago
GitHub e344fe79 Make memory contiguous (#4804) 4 years ago
Andrew Cohen fad37dc5 add default args to LinearEncoder 4 years ago
Andrew Cohen 21365c04 formatting 4 years ago
Andrew Cohen 96c01a63 custom layer norm 4 years ago
GitHub 9689449f Refactor of attention (#4840) 4 years ago
Andrew Cohen 540b930b add defaults to linear encoder, initialize ent encoders 4 years ago
GitHub 0ac990e0 add LayerNorm (#4847) 4 years ago
Arthur Juliani da0c8b9d Add hypernetwork 4 years ago
Arthur Juliani 1cf97635 Additional conditional experiments 4 years ago
Arthur Juliani d2526ce2 Modify CrawlerDynamic 4 years ago
Arthur Juliani b8e81b00 Make lists modulelists 4 years ago
Arthur Juliani 759fd2b5 PushJump modifications 4 years ago
Arthur Juliani 21aaa5fe Add goal to hyper input 4 years ago
Arthur Juliani a180dbf7 Add visual version of task and simply encoders 4 years ago
Arthur Juliani 7165e9cf Make conditiontype a setting 4 years ago
vincentpierre 1acdc155 Changes to hypernet 4 years ago
vincentpierre 04bdb40c Reorder operations 4 years ago
Arthur Juliani 4413203d Sensor cleanup 4 years ago
Ervin Teng 7c826fb1 Working GRU 4 years ago
Ervin Teng e9025079 Properly use MemoryModule abstraction 4 years ago
Arthur Juliani ce1d3d88 Resolve conflicts in networkbody 4 years ago
GitHub dffc37bf Update to barracuda 1.3.3 and changes to the model inputs and outputs for LSTM (#5236) 4 years ago
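The commits "Passthrough max" (46f3a9b9) and "Use built-in cumulative max" (9f96a495) suggest an AMRL-style running-max aggregation over the time axis, with gradients passed through to every timestep. The sketch below is an illustrative reconstruction of that idea using PyTorch's built-in `torch.cummax`; the function name `passthrough_max` is hypothetical and not taken from the repository.

```python
import torch

def passthrough_max(x: torch.Tensor, dim: int = 1) -> torch.Tensor:
    """Running (cumulative) max along `dim`, AMRL-style.

    Forward pass returns the cumulative max; the straight-through trick
    (x + detached residual) makes the backward pass behave like identity,
    so every timestep receives gradient, not only the argmax.
    Hypothetical sketch, not the repository's actual implementation.
    """
    max_vals, _ = torch.cummax(x, dim=dim)  # built-in cumulative max
    return x + (max_vals - x).detach()
```

Using the built-in `torch.cummax` avoids a Python-level loop over timesteps, which is likely what the "Faster implementation" commit (b44b5a24) was after.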
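Several commits concern layer normalization ("layer norm" f57875e0, "custom layer norm" 96c01a63, "add LayerNorm (#4847)" 0ac990e0). A minimal hand-rolled layer norm, assuming normalization over the last dimension with learned scale and shift, could look like the following; the class name `CustomLayerNorm` is an assumption for illustration, not the repository's code.

```python
import torch
import torch.nn as nn

class CustomLayerNorm(nn.Module):
    """Hypothetical minimal layer norm over the last dimension."""

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))   # learned scale
        self.beta = nn.Parameter(torch.zeros(num_features))   # learned shift
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=-1, keepdim=True)
        # Biased variance, matching torch.nn.LayerNorm's convention.
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        return self.gamma * (x - mean) / torch.sqrt(var + self.eps) + self.beta
```

With default parameters this matches `torch.nn.LayerNorm` numerically; a custom version is typically written to control initialization or export behavior.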
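The hypernetwork commits ("Add hypernetwork" da0c8b9d, "Use hypernetwork in both places" 79898e06, "Add goal to hyper input" 21aaa5fe) describe conditioning a network on a goal by generating layer weights from a goal embedding. A generic sketch of that pattern, with the hypothetical class name `HyperLinear` (not taken from the repository):

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """Hypothetical linear layer whose weights are produced by a hypernetwork
    from a per-sample goal vector (generic sketch of goal conditioning)."""

    def __init__(self, goal_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # The hypernetwork: maps the goal to a full weight matrix and bias.
        self.weight_gen = nn.Linear(goal_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(goal_dim, out_dim)

    def forward(self, x: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # One weight matrix per batch element, generated from its goal.
        w = self.weight_gen(goal).view(-1, self.out_dim, self.in_dim)
        b = self.bias_gen(goal)
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + b
```

Compared with concatenating the goal to the input, weight generation lets the goal modulate every connection of the layer, at the cost of `goal_dim * in_dim * out_dim` hypernetwork parameters.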