Several, small documentation improvements (#3903)
* Several, small documentation improvements
  - Re-organize main repo README
  - Minor clean-ups to Python package-specific readme files
  - Clean-up to Unity Inference Engine page
  - Update to the docs README
  - Added a specific cross-platform section in ML-Agents Overview to amplify Barracuda
  - Updated the links in Limitations.md to point to the specific subsections
  - Cleaned up the Designing a Learning Environment page. Added an intro paragraph.
  - Updated the installation guide to specifically call out local installation
  - A few minor formatting, spelling errors fixed.
GitHub · 5 years ago
Current commit: 759e222e (branch: release_1_branch)
18 files changed, with 610 insertions and 465 deletions.
- README.md (26 changes)
- com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (130 changes)
- docs/Background-TensorFlow.md (6 changes)
- docs/Custom-SideChannels.md (92 changes)
- docs/Getting-Started.md (14 changes)
- docs/Installation.md (165 changes)
- docs/Learning-Environment-Design-Agents.md (8 changes)
- docs/Learning-Environment-Design.md (58 changes)
- docs/Learning-Environment-Examples.md (94 changes)
- docs/Learning-Environment-Executable.md (2 changes)
- docs/Limitations.md (8 changes)
- docs/ML-Agents-Overview.md (100 changes)
- docs/Profiling-Python.md (68 changes)
- docs/Readme.md (14 changes)
- docs/Unity-Inference-Engine.md (60 changes)
- gym-unity/README.md (158 changes)
- ml-agents-envs/README.md (37 changes)
- ml-agents/README.md (35 changes)

# Unity Inference Engine

The ML-Agents Toolkit allows you to use pre-trained neural network models inside
your Unity games. This support is possible thanks to the
[Unity Inference Engine](https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html)
(codenamed Barracuda). The Unity Inference Engine uses
[compute shaders](https://docs.unity3d.com/Manual/class-ComputeShader.html) to
run the neural network within Unity.
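For context, a minimal sketch of the underlying Barracuda API that performs this execution (this class and its field names are illustrative, not part of ML-Agents, which drives Barracuda for you):

```csharp
using Unity.Barracuda;
using UnityEngine;

// Illustrative sketch only: load a serialized model asset and create a
// Barracuda worker that runs it with compute shaders.
public class BarracudaRunner : MonoBehaviour
{
    [SerializeField] NNModel m_ModelAsset; // assign a .nn asset in the Inspector
    IWorker m_Worker;

    void Start()
    {
        Model model = ModelLoader.Load(m_ModelAsset);
        // ComputePrecompiled executes the network on the GPU via compute shaders.
        m_Worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void OnDestroy()
    {
        m_Worker?.Dispose();
    }
}
```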

**Note**: The ML-Agents Toolkit only supports the models created with our
trainers.

See the Unity Inference Engine documentation for a list of the
[supported platforms](https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html#supported-platforms).

Scripting Backends: The Unity Inference Engine is generally faster with
**IL2CPP** than with **Mono** for Standalone builds. In the Editor, it is not
possible to use the Unity Inference Engine with the GPU device selected when
Editor Graphics Emulation is set to **OpenGL(ES) 3.0 or 2.0 emulation**. There
may also be non-fatal build-time errors when the target platform includes a
Graphics API that does not support **Unity Compute Shaders**.
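One way to guard against missing compute shader support is to check for it at runtime before requesting GPU inference. The helper below is hypothetical (not part of the toolkit); `SystemInfo.supportsComputeShaders` is a standard Unity API and `InferenceDevice` is the ML-Agents enum:

```csharp
using Unity.MLAgents.Policies;
using UnityEngine;

// Hypothetical helper: fall back to CPU inference when the active
// Graphics API lacks compute shader support.
public static class InferenceDeviceSelector
{
    public static InferenceDevice Choose()
    {
        return SystemInfo.supportsComputeShaders
            ? InferenceDevice.GPU
            : InferenceDevice.CPU;
    }
}
```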

There are currently two supported model formats:

- Barracuda (`.nn`) files use a proprietary format produced by the
  [`tensorflow_to_barracuda.py`]() script.
- ONNX (`.onnx`) files use an
  [industry-standard open format](https://onnx.ai/about.html) produced by the
  [tf2onnx package](https://github.com/onnx/tensorflow-onnx).

Export to ONNX is currently considered beta. To enable it, make sure
`tf2onnx>=1.5.5` is installed in pip. tf2onnx does not currently support
tensorflow 2.0.0 or later, or versions earlier than 1.12.0.

When using a model, drag the model file into the **Model** field in the
Inspector of the Agent. Select the **Inference Device**: CPU or GPU you want to
use for inference.
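The same assignment can also be made from code via the `Agent.SetModel` API. A minimal sketch, where the component and the `"MyBehavior"` name are illustrative:

```csharp
using Unity.Barracuda;
using Unity.MLAgents;
using Unity.MLAgents.Policies;
using UnityEngine;

// Hypothetical component: assign a trained model to an Agent from code
// instead of the Inspector.
public class ModelAssigner : MonoBehaviour
{
    [SerializeField] Agent m_Agent;
    [SerializeField] NNModel m_TrainedModel; // the exported model asset

    void Start()
    {
        // The behavior name must match the one in the Agent's Behavior Parameters.
        m_Agent.SetModel("MyBehavior", m_TrainedModel, InferenceDevice.CPU);
    }
}
```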

**Note:** For most of the models generated with the ML-Agents Toolkit, CPU will
be faster than GPU. You should use the GPU only if you use the ResNet visual
encoder or have a large number of agents with visual observations.