[refactor] Make PyTorch the default and TensorFlow optional (#4517)
* Torch setup.py
* Set torch to default
* Make torch default in setup.py
* Remove indents
* Remove other instances of TF being used
* Add tensorboard to setup.py
* Adding correct setup commands for verifying torch is installed (#4524)
* Adding correct setup commands for verifying torch is installed
* Editing the test_requirements to add tf and remove torch
* Develop torchdefault raise outside setup (#4530)
* Torch not imported error to raise at first usage
* Torch not imported error to raise at first usage
* [refactor] Use PyTorch TensorBoard utils (#4518)
* Convert stats writer to use PyTorch TB support
* Use common function to print params
* Update test
* Bump tensorboard to 1.15 to fix the tests
* putting tensorboard 1.15.0 as min version requirement

Co-authored-by: vincentpierre <vincentpierre@unity3d.com>

* [Docs] Initial documentation changes for making.../MLA-1734-demo-provider
GitHub
4 years ago
Current commit: a690af74
33 files changed, with 320 insertions and 316 deletions (per-file line totals in parentheses below)
- .github/ISSUE_TEMPLATE/bug_report.md (2)
- README.md (2)
- com.unity.ml-agents/CHANGELOG.md (5)
- docs/Background-Machine-Learning.md (2)
- docs/Getting-Started.md (10)
- docs/Installation.md (24)
- docs/Learning-Environment-Executable.md (4)
- docs/ML-Agents-Overview.md (8)
- docs/Readme.md (2)
- docs/Training-Configuration-File.md (2)
- docs/Training-ML-Agents.md (35)
- docs/Unity-Inference-Engine.md (5)
- ml-agents/mlagents/tf_utils/__init__.py (1)
- ml-agents/mlagents/tf_utils/tf.py (63)
- ml-agents/mlagents/torch_utils/__init__.py (1)
- ml-agents/mlagents/torch_utils/torch.py (66)
- ml-agents/mlagents/trainers/cli_utils.py (11)
- ml-agents/mlagents/trainers/learn.py (6)
- ml-agents/mlagents/trainers/ppo/trainer.py (39)
- ml-agents/mlagents/trainers/sac/trainer.py (61)
- ml-agents/mlagents/trainers/settings.py (2)
- ml-agents/mlagents/trainers/stats.py (86)
- ml-agents/mlagents/trainers/tests/test_stats.py (20)
- ml-agents/mlagents/trainers/trainer/rl_trainer.py (34)
- ml-agents/mlagents/trainers/trainer/trainer_factory.py (18)
- ml-agents/mlagents/trainers/trainer_controller.py (10)
- ml-agents/mlagents/trainers/training_status.py (12)
- ml-agents/setup.py (9)
- ml-agents/tests/yamato/training_int_tests.py (21)
- test_constraints_min_version.txt (1)
- test_requirements.txt (4)
- docs/Background-PyTorch.md (35)
- docs/Background-TensorFlow.md (35)
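The commit message describes making torch a default dependency in ml-agents/setup.py and leaving TensorFlow optional. The sketch below shows one way such a split can be declared with setuptools; the package metadata, version pins, and the "tensorflow" extra name are illustrative assumptions, not the actual contents of ml-agents/setup.py.

```python
# Hypothetical sketch: PyTorch installed by default, TensorFlow only as an extra.
# Metadata, pins, and the extra name are assumptions, not the real ml-agents/setup.py.
from setuptools import setup, find_packages

setup(
    name="mlagents",
    version="0.0.0.dev0",
    packages=find_packages(),
    install_requires=[
        "torch>=1.6.0",       # PyTorch becomes a hard requirement
        "tensorboard>=1.15",  # needed for the TensorBoard summary writer
    ],
    extras_require={
        # TensorFlow trainers stay available, but only on request:
        #   pip install mlagents[tensorflow]
        "tensorflow": ["tensorflow>=1.14.0,<3.0"],
    },
)
```

With a layout like this, `pip install mlagents` pulls in PyTorch only, while `pip install mlagents[tensorflow]` also installs TensorFlow.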
ml-agents/mlagents/tf_utils/__init__.py:

```python
from mlagents.tf_utils.tf import tf as tf  # noqa
from mlagents.tf_utils.tf import set_warnings_enabled  # noqa
from mlagents.tf_utils.tf import generate_session_config  # noqa
from mlagents.tf_utils.tf import is_available  # noqa
```
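`mlagents.tf_utils` now re-exports an `is_available` helper, and the commit message notes that the missing-library error should be raised at first usage rather than at import time. Below is a minimal sketch of that pattern; `make_tf_graph` is a hypothetical caller, not an ml-agents function.

```python
from mlagents.tf_utils import is_available


def make_tf_graph():
    # Hypothetical caller: defer the "TensorFlow missing" error until TensorFlow
    # functionality is actually requested, instead of failing on package import.
    if not is_available():
        raise RuntimeError(
            "TensorFlow is required for the TensorFlow trainers but is not installed. "
            "Please install TensorFlow to use them."
        )
    from mlagents.tf_utils import tf  # only touch tf once we know it is importable

    return tf.Graph()
```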
ml-agents/mlagents/torch_utils/__init__.py:

```python
from mlagents.torch_utils.torch import torch as torch  # noqa
from mlagents.torch_utils.torch import nn  # noqa
from mlagents.torch_utils.torch import is_available  # noqa
from mlagents.torch_utils.torch import default_device  # noqa
```
ml-agents/mlagents/torch_utils/torch.py (the commit replaces the optional try/except import of torch with a hard requirement; `cpu_utils` is imported elsewhere in this module as unchanged context not shown in the diff):

Removed (torch treated as optional):

```python
import os

from distutils.version import LooseVersion
import pkg_resources

# Detect availability of torch package here.
# NOTE: this try/except is temporary until torch is required for ML-Agents.
try:
    # This should be the only place that we import torch directly.
    # Everywhere else is caught by the banned-modules setting for flake8
    import torch  # noqa I201

    torch.set_num_threads(cpu_utils.get_num_threads_to_use())
    os.environ["KMP_BLOCKTIME"] = "0"

    # Known PyLint compatibility with PyTorch https://github.com/pytorch/pytorch/issues/701
    # pylint: disable=E1101
    if torch.cuda.is_available():
        torch.set_default_tensor_type(torch.cuda.FloatTensor)
        device = torch.device("cuda")
    else:
        torch.set_default_tensor_type(torch.FloatTensor)
        device = torch.device("cpu")
    nn = torch.nn
    # pylint: disable=E1101
except ImportError:
    torch = None
    nn = None
    device = None
```

Resulting module (torch required, asserted up front, then imported unconditionally):

```python
import os

from distutils.version import LooseVersion
import pkg_resources


def assert_torch_installed():
    # Check that torch version 1.6.0 or later has been installed. If not, refer
    # user to the PyTorch webpage for install instructions.
    torch_pkg = None
    try:
        torch_pkg = pkg_resources.get_distribution("torch")
    except pkg_resources.DistributionNotFound:
        pass
    assert torch_pkg is not None and LooseVersion(torch_pkg.version) >= LooseVersion(
        "1.6.0"
    ), (
        "A compatible version of PyTorch was not installed. Please visit the PyTorch homepage "
        + "(https://pytorch.org/get-started/locally/) and follow the instructions to install. "
        + "Version 1.6.0 and later are supported."
    )


assert_torch_installed()

# This should be the only place that we import torch directly.
# Everywhere else is caught by the banned-modules setting for flake8
import torch  # noqa I201


torch.set_num_threads(cpu_utils.get_num_threads_to_use())
os.environ["KMP_BLOCKTIME"] = "0"

# Known PyLint compatibility with PyTorch https://github.com/pytorch/pytorch/issues/701
# pylint: disable=E1101
if torch.cuda.is_available():
    torch.set_default_tensor_type(torch.cuda.FloatTensor)
    device = torch.device("cuda")
else:
    torch.set_default_tensor_type(torch.FloatTensor)
    device = torch.device("cpu")
nn = torch.nn


def is_available():
    """
    Returns whether Torch is available in this Python environment
    """
    return torch is not None
```
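Elsewhere in the codebase, torch is meant to be reached only through `mlagents.torch_utils` (a direct `import torch` is flagged by the flake8 banned-modules rule mentioned in the comments above), so the version check and default-device setup always run first. A small illustrative sketch of a downstream module; `TinyValueHead` is hypothetical and not part of ml-agents.

```python
# Hypothetical downstream module: import torch and nn via mlagents.torch_utils so
# that assert_torch_installed() and the default tensor type setup have already run.
from mlagents.torch_utils import torch, nn


class TinyValueHead(nn.Module):
    def __init__(self, input_size: int):
        super().__init__()
        self.linear = nn.Linear(input_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)


head = TinyValueHead(input_size=8)
value = head(torch.ones(1, 8))  # tensors land on the default device configured above
```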
docs/Background-PyTorch.md:

# Background: PyTorch

As discussed in our
[machine learning background page](Background-Machine-Learning.md), many of the
algorithms we provide in the ML-Agents Toolkit leverage some form of deep
learning. More specifically, our implementations are built on top of the
open-source library [PyTorch](https://pytorch.org/). On this page we provide a
brief overview of PyTorch, in addition to TensorBoard, which we leverage within
the ML-Agents Toolkit.

## PyTorch

[PyTorch](https://pytorch.org/) is an open source library for performing tensor
computations and building deep learning models. It facilitates training and
inference on CPUs and GPUs in a desktop, server, or mobile device. Within the
ML-Agents Toolkit, when you train the behavior of an agent, the output is a
model (.onnx) file that you can then associate with an Agent. Unless you
implement a new algorithm, the use of PyTorch is mostly abstracted away and
kept behind the scenes.

## TensorBoard

One component of training models with PyTorch is setting the values of certain
model attributes (called _hyperparameters_). Finding the right values for these
hyperparameters can require a few iterations. Consequently, we leverage a
visualization tool called
[TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard).
It allows the visualization of certain agent attributes (e.g. reward) throughout
training, which can be helpful in both building intuitions for the different
hyperparameters and setting the optimal values for your Unity environment. We
provide more details on setting the hyperparameters on the
[Training ML-Agents](Training-ML-Agents.md) page. If you are unfamiliar with
TensorBoard, we recommend our guide on
[using TensorBoard with ML-Agents](Using-Tensorboard.md) or this
[tutorial](https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial).
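This same pull request also moves the stats writer onto the TensorBoard support bundled with PyTorch (`torch.utils.tensorboard`), which is why `tensorboard` 1.15+ stays a dependency even though TensorFlow does not. A minimal sketch of logging a scalar such as mean reward; the log directory and tag name are illustrative, and ml-agents wraps this in its own StatsWriter classes.

```python
# Minimal sketch: write a scalar per summary step with the TensorBoard writer
# that ships with PyTorch. Log directory and tag are illustrative only.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="results/my_run")
for step, mean_reward in enumerate([0.1, 0.3, 0.7, 1.2]):
    writer.add_scalar("Environment/Cumulative Reward", mean_reward, step)
writer.close()
```

The resulting curves can then be viewed with `tensorboard --logdir results`, as described in the TensorBoard guide linked above.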
docs/Background-TensorFlow.md:

# Background: TensorFlow

As discussed in our
[machine learning background page](Background-Machine-Learning.md), many of the
algorithms we provide in the ML-Agents Toolkit leverage some form of deep
learning. More specifically, our implementations are built on top of the
open-source library [TensorFlow](https://www.tensorflow.org/). On this page we
provide a brief overview of TensorFlow, in addition to TensorFlow-related tools
that we leverage within the ML-Agents Toolkit.

## TensorFlow

[TensorFlow](https://www.tensorflow.org/) is an open source library for
performing computations using data flow graphs, the underlying representation of
deep learning models. It facilitates training and inference on CPUs and GPUs in
a desktop, server, or mobile device. Within the ML-Agents Toolkit, when you
train the behavior of an agent, the output is a model (.nn) file that you can
then associate with an Agent. Unless you implement a new algorithm, the use of
TensorFlow is mostly abstracted away and kept behind the scenes.

## TensorBoard

One component of training models with TensorFlow is setting the values of
certain model attributes (called _hyperparameters_). Finding the right values
for these hyperparameters can require a few iterations. Consequently, we
leverage a visualization tool within TensorFlow called
[TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard).
It allows the visualization of certain agent attributes (e.g. reward) throughout
training, which can be helpful in both building intuitions for the different
hyperparameters and setting the optimal values for your Unity environment. We
provide more details on setting the hyperparameters on the
[Training ML-Agents](Training-ML-Agents.md) page. If you are unfamiliar with
TensorBoard, we recommend our guide on
[using TensorBoard with ML-Agents](Using-Tensorboard.md) or this
[tutorial](https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial).