Merge branch 'develop' into develop-runs

# Conflicts:
#	python/learn.py
#	python/unitytrainers/trainer.py

Arthur Juliani · 7 years ago · commit 195ac934
223 files changed, 3062 insertions(+), 3350 deletions(-)
# Training on Amazon Web Service

This page contains instructions for setting up an EC2 instance on Amazon Web Service for training ML-Agents environments.

## Recommended AMI

You can get started with an EC2 instance running the Deep Learning AMI (Ubuntu) listed under the AWS Marketplace [AMI](https://aws.amazon.com/marketplace/pp/B077GCH38C). Choose the python3 environment within that AMI, which provides Python 3 and CUDA 9.0.

## Preconfigured AMI

We've prepared a preconfigured AMI for you with the ID `ami-6880c317` in the `us-east-1` region. It was created as a modification of the [Deep Learning AMI (Ubuntu)](https://aws.amazon.com/marketplace/pp/B077GCH38C). If you want to train without headless mode (i.e. with visual observations), you need to enable X Server on the instance. After launching your EC2 instance using this AMI and connecting to it over SSH, run the following commands to enable it:
```
# Start the X Server; press Enter to return to the command line
sudo /usr/bin/X :0 &

# Check that the Xorg process is running:
# nvidia-smi lists the processes running on the GPU, and Xorg
# should be among them, as shown below.
nvidia-smi

# Thu Jun 14 20:27:26 2018
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 390.67                 Driver Version: 390.67                    |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
# | N/A   35C    P8    31W / 149W |      9MiB / 11441MiB |      0%      Default |
# +-------------------------------+----------------------+----------------------+
#
# +-----------------------------------------------------------------------------+
# | Processes:                                                       GPU Memory |
# |  GPU       PID   Type   Process name                             Usage      |
# |=============================================================================|
# |    0      2331      G   /usr/lib/xorg/Xorg                             8MiB |
# +-----------------------------------------------------------------------------+

# Make Ubuntu use the X Server for display
export DISPLAY=:0
```
You could also choose to configure your own instance. To begin with, you will need an EC2 instance with the latest Nvidia drivers, CUDA 9, and cuDNN. In this tutorial we used the [Deep Learning AMI (Ubuntu)](https://aws.amazon.com/marketplace/pp/B077GCH38C) listed under AWS Marketplace with a p2.xlarge instance. A number of external tutorials describe this kind of setup (note: you will need to tweak some steps in these tutorials for CUDA 9):

* [Getting CUDA 8 to Work With openAI Gym on AWS and Compiling TensorFlow for CUDA 8 Compatibility](https://davidsanwald.github.io/2016/11/13/building-tensorflow-with-gpu-support.html)
* [Installing TensorFlow on an AWS EC2 P2 GPU Instance](http://expressionflow.com/2016/10/09/installing-tensorflow-on-an-aws-ec2-p2-gpu-instance/)
* [Updating Nvidia CUDA to 8.0.x in Ubuntu 16.04 – EC2 Gx instance](https://aichamp.wordpress.com/2016/11/09/updating-nvidia-cuda-to-8-0-x-in-ubuntu-16-04-ec2-gx-instance/)
## Installing ML-Agents

After launching your EC2 instance using the AMI and connecting to it over SSH:

1. Activate the python3 environment:

```
source activate python3
```

2. Clone the ML-Agents repo and install the required python packages:

```
git clone https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/python
pip3 install .
```
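After installing, you can sanity-check that the package is importable before moving on. The helper below is a generic sketch, not part of ml-agents; `unityagents` is the package that `pip3 install .` above provides.

```python
import importlib.util

def package_installed(name: str) -> bool:
    """Return True if the named top-level package can be
    imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# After `pip3 install .` you would expect
# package_installed("unityagents") to be True.
```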
### Setting up X Server (optional)

X Server setup is only necessary if you want to do training that requires visual observation input. _Instructions here are adapted from this [Medium post](https://medium.com/towards-data-science/how-to-run-unity-on-amazon-cloud-or-without-monitor-3c10ce022639) on running general Unity applications in the cloud._

Current limitations of the Unity Engine require that a screen be available to render to when using visual observations. To make this possible when training on a remote server, a virtual screen is required. We can provide one by installing Xorg and creating a virtual display. Once that is done, the Unity environment can render to the virtual display, and we can train as we would on a local machine. Ensure that `headless` mode is disabled when building Linux executables that use visual observations.
1. Install and set up Xorg:

```
# Install Xorg
sudo apt-get update
sudo apt-get install -y xserver-xorg mesa-utils
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024

# Get the BusID information
nvidia-xconfig --query-gpu-info

# Add the BusID information to your /etc/X11/xorg.conf file
sudo sed -i 's/ BoardName "Tesla K80"/ BoardName "Tesla K80"\n BusID "0:30:0"/g' /etc/X11/xorg.conf

# Remove the Section "Files" block from the /etc/X11/xorg.conf file,
# i.e. the two lines containing Section "Files" and its EndSection
sudo vim /etc/X11/xorg.conf
```
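The `sed` invocation above rewrites `xorg.conf` by appending a `BusID` entry directly after the matching `BoardName` line. As an illustration of that transformation, here is a sketch in Python (the function name and indentation are assumptions, not part of the doc):

```python
def add_bus_id(conf_text: str, board: str = "Tesla K80",
               bus_id: str = "0:30:0") -> str:
    """Insert a BusID entry after each BoardName line that
    mentions `board`, mirroring the sed command above."""
    out = []
    for line in conf_text.splitlines():
        out.append(line)
        if "BoardName" in line and board in line:
            out.append('    BusID          "%s"' % bus_id)
    return "\n".join(out)
```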
2. Update and set up the Nvidia driver:

```
# Download and install the latest Nvidia driver for Ubuntu
wget http://download.nvidia.com/XFree86/Linux-x86_64/390.67/NVIDIA-Linux-x86_64-390.67.run
sudo /bin/bash ./NVIDIA-Linux-x86_64-390.67.run --accept-license --no-questions --ui=none

# Disable Nouveau, as it will clash with the Nvidia driver
echo 'blacklist nouveau' | sudo tee -a /etc/modprobe.d/blacklist.conf
echo 'options nouveau modeset=0' | sudo tee -a /etc/modprobe.d/blacklist.conf
echo 'options nouveau modeset=0' | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
sudo update-initramfs -u
```
3. Restart the EC2 instance:

```
sudo reboot now
```
4. Make sure there are no Xorg processes running:

```
# Kill any running Xorg processes
# Note that you might have to run this command multiple times
# depending on how Xorg is configured.
sudo killall Xorg

# Check that no Xorg process is left:
# nvidia-smi lists the processes running on the GPU, and Xorg
# should not be among them, as shown below.
nvidia-smi

# Thu Jun 14 20:21:11 2018
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 390.67                 Driver Version: 390.67                    |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
# | N/A   37C    P8    31W / 149W |      0MiB / 11441MiB |      0%      Default |
# +-------------------------------+----------------------+----------------------+
#
# +-----------------------------------------------------------------------------+
# | Processes:                                                       GPU Memory |
# |  GPU       PID   Type   Process name                             Usage      |
# |=============================================================================|
# |  No running processes found                                                 |
# +-----------------------------------------------------------------------------+
```
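When you inspect the `nvidia-smi` output by eye, you are only looking for whether `Xorg` appears in the process table. A tiny helper performing the same check (hypothetical, for illustration only):

```python
def xorg_running(smi_output: str) -> bool:
    """Return True if an Xorg process appears anywhere in the
    captured `nvidia-smi` output."""
    return any("Xorg" in line for line in smi_output.splitlines())
```

After `sudo killall Xorg` this should report False; after starting the X Server in the next step, True.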
5. Start the X Server and make Ubuntu use it for display:

```
# Start the X Server; press Enter to return to the command line
sudo /usr/bin/X :0 &

# Check that the Xorg process is running:
# nvidia-smi lists the processes running on the GPU, and Xorg
# should be among them.
nvidia-smi

# Make Ubuntu use the X Server for display
export DISPLAY=:0
```
6. Ensure that Xorg is correctly configured:

```
# For more information on glxgears, see
# ftp://www.x.org/pub/X11R6.8.1/doc/glxgears.1.html
glxgears

# If Xorg is configured correctly, you should see output like the following:
#
# Running synchronized to the vertical refresh.  The framerate should be
# approximately the same as the monitor refresh rate.
# 137296 frames in 5.0 seconds = 27459.053 FPS
# 141674 frames in 5.0 seconds = 28334.779 FPS
# 141490 frames in 5.0 seconds = 28297.875 FPS
```
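Each glxgears report line covers a five-second window, and the FPS figure is simply frames divided by elapsed seconds. A short parser for that output format (illustrative only, not part of the doc):

```python
import re

_FPS_LINE = re.compile(r"(\d+) frames in ([\d.]+) seconds = ([\d.]+) FPS")

def parse_glxgears(output: str):
    """Return (frames, seconds, fps) tuples for each glxgears
    report line found in the captured output."""
    return [(int(f), float(s), float(fps))
            for f, s, fps in _FPS_LINE.findall(output)]
```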
## Training on EC2 instance

To verify that all steps worked correctly:

4. Check Headless Mode (if you haven't set up the X Server).

6. Upload the executable to your EC2 instance within the `ml-agents/python` folder.

```python
from unityagents import UnityEnvironment

env = UnityEnvironment(<your_env>)
```

Where `<your_env>` corresponds to the path to your environment executable.

You should receive a message confirming that the environment was loaded successfully.

8. Train the executable:

```
# cd into your ml-agents/python folder
chmod +x <your_env>.x86_64
python learn.py <your_env> --train
```
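The final step has two parts: marking the build as executable, then invoking `learn.py` with the `--train` flag. If you script this for several environments, a small helper can assemble the invocation; the function below is a sketch (its name and shape are assumptions, not part of ml-agents):

```python
import shlex

def learn_command(env_path: str, train: bool = True) -> str:
    """Assemble the learn.py invocation from the step above,
    shell-quoting the environment path."""
    cmd = ["python", "learn.py", env_path]
    if train:
        cmd.append("--train")
    return " ".join(shlex.quote(part) for part in cmd)
```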