Comparing commits

...
This merge request contains changes that conflict with the target branch.
/test_requirements.txt
/gym-unity/gym_unity/__init__.py
/Project/Packages/manifest.json
/Project/ProjectSettings/ProjectSettings.asset
/Project/ProjectSettings/ProjectVersion.txt
/Project/Assets/ML-Agents/Examples/WallJump/TFModels/BigWallJump.nn
/Project/Assets/ML-Agents/Examples/WallJump/TFModels/SmallWallJump.nn
/Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBall.unity
/Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallHard.unity
/Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBall.nn
/Project/Assets/ML-Agents/Examples/GridWorld/Scenes/GridWorld.unity
/Project/Assets/ML-Agents/Examples/PushBlock/TFModels/PushBlock.nn
/Project/Assets/ML-Agents/Examples/Pyramids/TFModels/Pyramids.nn
/Project/Assets/ML-Agents/Examples/Basic/TFModels/Basic.nn
/Project/Assets/ML-Agents/Examples/Basic/Scenes/Basic.unity
/com.unity.ml-agents/package.json
/com.unity.ml-agents/Editor/BehaviorParametersEditor.cs
/com.unity.ml-agents/Tests/Editor/PublicAPI/PublicApiValidation.cs
/com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs
/com.unity.ml-agents/Runtime/Academy.cs
/com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs
/com.unity.ml-agents/CHANGELOG.md
/ml-agents-envs/mlagents_envs/__init__.py
/docs/Migrating.md
/docs/Unity-Inference-Engine.md
/ml-agents/mlagents/trainers/learn.py
/ml-agents/mlagents/trainers/trainer_controller.py
/ml-agents/mlagents/trainers/__init__.py
/README.md
/com.unity.ml-agents/Tests/Editor/BehaviorParameterTests.cs
/.circleci/config.yml
/test_constraints_min_version.txt
/test_constraints_max_tf1_version.txt
/config/gail_config.yaml
/config/trainer_config.yaml
/ml-agents/mlagents/model_serialization.py
/Project/Assets/ML-Agents/Examples/Template/AgentPrefabsAndColors.unity
/Project/Assets/ML-Agents/Examples/Template/Scene.unity
/Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHard.nn
/Project/Assets/ML-Agents/Examples/Bouncer/TFModels/Bouncer.nn
/Project/Assets/ML-Agents/Examples/Crawler/Scenes/CrawlerDynamicTarget.unity
/Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerDynamic.nn
/Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerStatic.nn
/Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.nn
/Project/Assets/ML-Agents/Examples/GridWorld/TFModels/GridWorld.nn
/Project/Assets/ML-Agents/Examples/Hallway/TFModels/Hallway.nn
/Project/Assets/ML-Agents/Examples/Reacher/TFModels/Reacher.nn
/Project/Assets/ML-Agents/Examples/Soccer/TFModels/Soccer.nn
/Project/Assets/ML-Agents/Examples/Tennis/TFModels/Tennis.nn
/Project/Assets/ML-Agents/Examples/Walker/TFModels/Walker.nn

16 commits

Author SHA1 Message Commit date
GitHub 8ed45da0 Updating the NN models (#3632) 5 years ago
GitHub 0d3fd17e [bug-fix] Increase 3dballhard and GAIL default steps (#3636) 5 years ago
GitHub 188d8589 Merge pull request #3631 from Unity-Technologies/release-0.15.0-fix-stats 5 years ago
Chris Elion d73125d6 make sure top-level timer is closed before writing 5 years ago
Jonathan Harper c36f3534 Remove space from Product Name for examples 5 years ago
GitHub 771389cb Updated the release branch docs (#3621) 5 years ago
GitHub 9b227be2 Merge pull request #3627 from Unity-Technologies/release-0.15.0-log-no-model 5 years ago
Chris Elion 40c48565 fix unit test 5 years ago
GitHub 557678a0 Merge pull request #3628 from Unity-Technologies/release-0.15.0-onnx-CI 5 years ago
GitHub 88c2bc66 Update error message 5 years ago
Chris Elion e0667035 add meta file 5 years ago
Chris Elion 9c5fc33a enforce onnx conversion (expect tf2 CI to fail) (#3600) 5 years ago
Chris Elion 53cd6695 Improve warnings and exception if using unsupported combo 5 years ago
GitHub 1d9cce7b Remove dead components from the examples scenes (#3619) (#3624) 5 years ago
Jonathan Harper 391cb49d Update examples project to 2018.4.18f1 (#3618) 5 years ago
GitHub 0a8b30e9 Bumping version on the release (#3615) 5 years ago
51 files changed, with 7,321 additions and 7,313 deletions
  1. 7
      .circleci/config.yml
  2. 13
      README.md
  3. 2
      test_constraints_min_version.txt
  4. 4
      test_requirements.txt
  5. 3
      test_constraints_max_tf1_version.txt
  6. 2
      config/gail_config.yaml
  7. 2
      config/trainer_config.yaml
  8. 2
      docs/Unity-Inference-Engine.md
  9. 2
      docs/Migrating.md
  10. 2
      gym-unity/gym_unity/__init__.py
  11. 2
      ml-agents-envs/mlagents_envs/__init__.py
  12. 56
      ml-agents/mlagents/model_serialization.py
  13. 2
      ml-agents/mlagents/trainers/__init__.py
  14. 14
      ml-agents/mlagents/trainers/learn.py
  15. 14
      ml-agents/mlagents/trainers/trainer_controller.py
  16. 2
      com.unity.ml-agents/Tests/Editor/PublicAPI/PublicApiValidation.cs
  17. 18
      com.unity.ml-agents/package.json
  18. 3
      com.unity.ml-agents/Editor/BehaviorParametersEditor.cs
  19. 2
      com.unity.ml-agents/Runtime/Academy.cs
  20. 16
      com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs
  21. 10
      com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs
  22. 4
      com.unity.ml-agents/CHANGELOG.md
  23. 4
      Project/Packages/manifest.json
  24. 2
      Project/ProjectSettings/ProjectVersion.txt
  25. 5
      Project/ProjectSettings/ProjectSettings.asset
  26. 722
      Project/Assets/ML-Agents/Examples/Template/AgentPrefabsAndColors.unity
  27. 133
      Project/Assets/ML-Agents/Examples/Template/Scene.unity
  28. 12
      Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBall.unity
  29. 12
      Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallHard.unity
  30. 491
      Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBall.nn
  31. 605
      Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHard.nn
  32. 9
      Project/Assets/ML-Agents/Examples/Basic/Scenes/Basic.unity
  33. 9
      Project/Assets/ML-Agents/Examples/Basic/TFModels/Basic.nn
  34. 133
      Project/Assets/ML-Agents/Examples/Bouncer/TFModels/Bouncer.nn
  35. 10
      Project/Assets/ML-Agents/Examples/Crawler/Scenes/CrawlerDynamicTarget.unity
  36. 1001
      Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerDynamic.nn
  37. 1001
      Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerStatic.nn
  38. 659
      Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.nn
  39. 28
      Project/Assets/ML-Agents/Examples/GridWorld/Scenes/GridWorld.unity
  40. 1001
      Project/Assets/ML-Agents/Examples/GridWorld/TFModels/GridWorld.nn
  41. 1001
      Project/Assets/ML-Agents/Examples/Hallway/TFModels/Hallway.nn
  42. 1001
      Project/Assets/ML-Agents/Examples/PushBlock/TFModels/PushBlock.nn
  43. 1001
      Project/Assets/ML-Agents/Examples/Pyramids/TFModels/Pyramids.nn
  44. 567
      Project/Assets/ML-Agents/Examples/Reacher/TFModels/Reacher.nn
  45. 1001
      Project/Assets/ML-Agents/Examples/Soccer/TFModels/Soccer.nn
  46. 1001
      Project/Assets/ML-Agents/Examples/Tennis/TFModels/Tennis.nn
  47. 1001
      Project/Assets/ML-Agents/Examples/Walker/TFModels/Walker.nn
  48. 1001
      Project/Assets/ML-Agents/Examples/WallJump/TFModels/BigWallJump.nn
  49. 1001
      Project/Assets/ML-Agents/Examples/WallJump/TFModels/SmallWallJump.nn
  50. 29
      com.unity.ml-agents/Tests/Editor/BehaviorParameterTests.cs
  51. 11
      com.unity.ml-agents/Tests/Editor/BehaviorParameterTests.cs.meta

7
.circleci/config.yml


pip_constraints:
type: string
description: Constraints file that is passed to "pip install". We constrain libraries to older versions for older Python runtimes, to help ensure compatibility.
enforce_onnx_conversion:
type: integer
default: 0
description: Whether to raise an exception if ONNX models couldn't be saved.
executor: << parameters.executor >>
working_directory: ~/repo

TEST_ENFORCE_ONNX_CONVERSION: << parameters.enforce_onnx_conversion >>
steps:
- checkout

pyversion: 3.7.3
# Test python 3.7 with the newest supported versions
pip_constraints: test_constraints_max_tf1_version.txt
# Make sure ONNX conversion passes here (recent version of tensorflow 1.x)
enforce_onnx_conversion: 1
- build_python:
name: python_3.7.3+tf2
executor: python373

13
README.md


* Unity environment control from Python
* 15+ sample Unity environments
* Two deep reinforcement learning algorithms,
[Proximal Policy Optimization](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Training-PPO.md)
(PPO) and [Soft Actor-Critic](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Training-SAC.md)
[Proximal Policy Optimization](docs/Training-PPO.md)
(PPO) and [Soft Actor-Critic](docs/Training-SAC.md)
* Built-in support for [Imitation Learning](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Training-Imitation-Learning.md) through Behavioral Cloning or Generative Adversarial Imitation Learning
* Built-in support for [Imitation Learning](docs/Training-Imitation-Learning.md) through Behavioral Cloning or Generative Adversarial Imitation Learning
* Flexible agent control with On Demand Decision Making
* Visualizing network outputs within the environment
* Wrap learning environments as a gym

## Releases & Documentation
**Our latest, stable release is 0.14.1. Click
[here](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Readme.md) to
**Our latest, stable release is 0.15.0. Click
[here](docs/Readme.md) to
get started with the latest release of ML-Agents.**
The table below lists all our releases, including our `master` branch which is under active development.

| **Version** | **Release Date** | **Source** | **Documentation** | **Download** |
|:-------:|:------:|:-------------:|:-------:|:------------:|
| **master** (unstable) | -- | [source](https://github.com/Unity-Technologies/ml-agents/tree/master) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/master/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/master.zip) |
| **0.14.1** (latest stable release) | February 26, 2020 | **[source](https://github.com/Unity-Technologies/ml-agents/tree/latest_release)** | **[docs](https://github.com/Unity-Technologies/ml-agents/tree/latest_release/docs/Readme.md)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/latest_release.zip)** |
| **0.14.1** | February 26, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.14.1) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.14.1/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.14.1.zip) |
| **0.14.0** | February 13, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.14.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.14.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.14.0.zip) |
| **0.13.1** | January 21, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.13.1) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.13.1/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.13.1.zip) |
| **0.13.0** | January 8, 2020 | [source](https://github.com/Unity-Technologies/ml-agents/tree/0.13.0) | [docs](https://github.com/Unity-Technologies/ml-agents/tree/0.13.0/docs/Readme.md) | [download](https://github.com/Unity-Technologies/ml-agents/archive/0.13.0.zip) |

2
test_constraints_min_version.txt


numpy==1.14.1
Pillow==4.2.1
protobuf==3.6
tensorflow==1.7
tensorflow==1.7.0
h5py==2.9.0

4
test_requirements.txt


pytest-cov==2.6.1
pytest-xdist
# Tests install onnx and tf2onnx, but this doesn't support tensorflow>=2.0.0
# Since we test tensorflow2.0 with python3.7, exclude it based on the python version
tf2onnx>=1.5.5; python_version < '3.7'
tf2onnx>=1.5.5

3
test_constraints_max_tf1_version.txt


# For projects with upper bounds, we should periodically update this list to the latest release version
grpcio>=1.23.0
numpy>=1.17.2
# Temporary workaround for https://github.com/tensorflow/tensorflow/issues/36179 and https://github.com/tensorflow/tensorflow/issues/36188
tensorflow>=1.14.0,<1.15.1
tensorflow>=1.15.2,<2.0.0
h5py>=2.10.0

2
config/gail_config.yaml


hidden_units: 128
lambd: 0.95
learning_rate: 3.0e-4
max_steps: 5.0e4
max_steps: 5.0e5
memory_size: 256
normalize: false
num_epoch: 3

2
config/trainer_config.yaml


buffer_size: 12000
summary_freq: 12000
time_horizon: 1000
max_steps: 5.0e5
max_steps: 5.0e6
beta: 0.001
reward_signals:
extrinsic:

2
docs/Unity-Inference-Engine.md


* ONNX (`.onnx`) files use an [industry-standard open format](https://onnx.ai/about.html) produced by the [tf2onnx package](https://github.com/onnx/tensorflow-onnx).
Export to ONNX is currently considered beta. To enable it, make sure `tf2onnx>=1.5.5` is installed via pip.
tf2onnx does not currently support tensorflow 2.0.0 or later.
tf2onnx does not currently support tensorflow 2.0.0 or later, or earlier than 1.12.0.
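The support window described above (and enforced in `model_serialization.py` via a `LooseVersion` check) can be sketched as a small stand-alone predicate. The function name `onnx_export_supported` and the plain tuple-based version parsing are illustrative assumptions, not part of the ML-Agents API; this only handles simple `X.Y.Z` version strings.

```python
def _parse(version: str) -> tuple:
    # Minimal numeric version parsing; assumes dotted-integer strings like "1.15.2".
    return tuple(int(part) for part in version.split("."))

def onnx_export_supported(tf_version: str) -> bool:
    # Assumed support window from the doc text above:
    # tf2onnx requires tensorflow >= 1.12.0 and < 2.0.0.
    v = _parse(tf_version)
    return _parse("1.12.0") <= v < _parse("2.0.0")
```

For example, TensorFlow 1.15.2 (the version pinned in `test_constraints_max_tf1_version.txt`) falls inside the window, while 1.11.0 and 2.0.0 fall outside it.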
## Using the Unity Inference Engine

2
docs/Migrating.md


# Migrating
## Migrating from 0.14 to latest
## Migrating from 0.14 to 0.15
### Important changes
* The `Agent.CollectObservations()` virtual method now takes as input a `VectorSensor` sensor as argument. The `Agent.AddVectorObs()` methods were removed.

2
gym-unity/gym_unity/__init__.py


__version__ = "0.15.0.dev0"
__version__ = "0.15.0"

2
ml-agents-envs/mlagents_envs/__init__.py


__version__ = "0.15.0.dev0"
__version__ = "0.15.0"

56
ml-agents/mlagents/model_serialization.py


from distutils.util import strtobool
import os
from distutils.version import LooseVersion
try:
import onnx

from tensorflow.python.platform import gfile
from tensorflow.python.framework import graph_util
from mlagents.trainers import tensorflow_to_barracuda as tf2bc
if LooseVersion(tf.__version__) < LooseVersion("1.12.0"):
# ONNX is only tested on 1.12.0 and later
ONNX_EXPORT_ENABLED = False
logger = logging.getLogger("mlagents.trainers")

logger.info(f"Exported {settings.model_path}.nn file")
# Save to onnx too (if we were able to import it)
if ONNX_EXPORT_ENABLED and settings.convert_to_onnx:
try:
onnx_graph = convert_frozen_to_onnx(settings, frozen_graph_def)
onnx_output_path = settings.model_path + ".onnx"
with open(onnx_output_path, "wb") as f:
f.write(onnx_graph.SerializeToString())
logger.info(f"Converting to {onnx_output_path}")
except Exception:
logger.exception(
"Exception trying to save ONNX graph. Please report this error on "
"https://github.com/Unity-Technologies/ml-agents/issues and "
"attach a copy of frozen_graph_def.pb"
if ONNX_EXPORT_ENABLED:
if settings.convert_to_onnx:
try:
onnx_graph = convert_frozen_to_onnx(settings, frozen_graph_def)
onnx_output_path = settings.model_path + ".onnx"
with open(onnx_output_path, "wb") as f:
f.write(onnx_graph.SerializeToString())
logger.info(f"Converting to {onnx_output_path}")
except Exception:
# Make conversion errors fatal depending on environment variables (only done during CI)
if _enforce_onnx_conversion():
raise
logger.exception(
"Exception trying to save ONNX graph. Please report this error on "
"https://github.com/Unity-Technologies/ml-agents/issues and "
"attach a copy of frozen_graph_def.pb"
)
else:
if _enforce_onnx_conversion():
# Either we're on an old version of tensorflow, or the import failed.
raise RuntimeError(
"ONNX conversion enforced, but ONNX_EXPORT_ENABLED was false."
)

for n in nodes:
logger.info("\t" + n)
return nodes
def _enforce_onnx_conversion() -> bool:
env_var_name = "TEST_ENFORCE_ONNX_CONVERSION"
if env_var_name not in os.environ:
return False
val = os.environ[env_var_name]
try:
# This handles e.g. "false" converting reasonably to False
return strtobool(val)
except Exception:
return False
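The enforcement gate above reads the `TEST_ENFORCE_ONNX_CONVERSION` environment variable (set by the CircleCI `enforce_onnx_conversion` parameter) and treats unset or unparseable values as "don't enforce". A stand-alone sketch of that behavior, taking the environment as an explicit dict for testability and inlining the truthy-string handling that the original delegates to `distutils.util.strtobool`:

```python
import os

# Truthy spellings accepted by distutils.util.strtobool.
_TRUTHY = {"y", "yes", "t", "true", "on", "1"}

def enforce_onnx_conversion(env: dict = os.environ) -> bool:
    # Conversion failures become fatal only when the variable is set to a
    # truthy string; missing or unrecognized values mean "don't enforce",
    # matching the original's broad except-and-return-False.
    val = env.get("TEST_ENFORCE_ONNX_CONVERSION")
    if val is None:
        return False
    return val.strip().lower() in _TRUTHY
```

With this gate, local training runs keep ONNX conversion failures non-fatal, while CI jobs that export `TEST_ENFORCE_ONNX_CONVERSION=1` turn them into hard errors.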

2
ml-agents/mlagents/trainers/__init__.py


__version__ = "0.15.0.dev0"
__version__ = "0.15.0"

14
ml-agents/mlagents/trainers/learn.py


from mlagents_envs.side_channel.side_channel import SideChannel
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfig
from mlagents_envs.exception import UnityEnvironmentException
from mlagents_envs.timers import hierarchical_timer
from mlagents_envs.timers import hierarchical_timer, get_timer_tree
from mlagents.logging_util import create_logger

tc.start_learning(env_manager)
finally:
env_manager.close()
write_timing_tree(summaries_dir, options.run_id)
def write_timing_tree(summaries_dir: str, run_id: str) -> None:
timing_path = f"{summaries_dir}/{run_id}_timers.json"
try:
with open(timing_path, "w") as f:
json.dump(get_timer_tree(), f, indent=4)
except FileNotFoundError:
logging.warning(
f"Unable to save to {timing_path}. Make sure the directory exists"
)
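The `write_timing_tree` helper above dumps the hierarchical timer tree to `<summaries_dir>/<run_id>_timers.json`, downgrading a missing directory to a warning so a failed save never aborts training. A self-contained sketch with the same shape, except the timer tree is passed in explicitly instead of fetched via `mlagents_envs.timers.get_timer_tree`:

```python
import json
import logging
import os
import tempfile

def write_timing_tree(timer_tree: dict, summaries_dir: str, run_id: str) -> None:
    # Pretty-print the timer tree as JSON; warn instead of raising if the
    # target directory does not exist.
    timing_path = f"{summaries_dir}/{run_id}_timers.json"
    try:
        with open(timing_path, "w") as f:
            json.dump(timer_tree, f, indent=4)
    except FileNotFoundError:
        logging.warning(f"Unable to save to {timing_path}. Make sure the directory exists")

# Usage: write a toy timer tree into a temporary directory and read it back.
tmpdir = tempfile.mkdtemp()
write_timing_tree({"root": {"total": 1.23}}, tmpdir, "run0")
with open(os.path.join(tmpdir, "run0_timers.json")) as f:
    saved = json.load(f)
```

A write into a nonexistent directory (e.g. a deleted summaries folder) only logs a warning, which matches the intent of keeping timer output best-effort.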
def create_sampler_manager(sampler_config, run_seed=None):

14
ml-agents/mlagents/trainers/trainer_controller.py


import os
import sys
import json
import logging
from typing import Dict, Optional, Set
from collections import defaultdict

UnityCommunicationException,
)
from mlagents.trainers.sampler_class import SamplerManager
from mlagents_envs.timers import hierarchical_timer, get_timer_tree, timed
from mlagents_envs.timers import hierarchical_timer, timed
from mlagents.trainers.trainer import Trainer
from mlagents.trainers.meta_curriculum import MetaCurriculum
from mlagents.trainers.trainer_util import TrainerFactory

"Learning was interrupted. Please wait while the graph is generated."
)
self._save_model()
def _write_timing_tree(self) -> None:
timing_path = f"{self.summaries_dir}/{self.run_id}_timers.json"
try:
with open(timing_path, "w") as f:
json.dump(get_timer_tree(), f, indent=4)
except FileNotFoundError:
self.logger.warning(
f"Unable to save to {timing_path}. Make sure the directory exists"
)
def _export_graph(self):
"""

pass
if self.train_model:
self._export_graph()
self._write_timing_tree()
def end_trainer_episodes(
self, env: EnvManager, lessons_incremented: Dict[str, bool]

2
com.unity.ml-agents/Tests/Editor/PublicAPI/PublicApiValidation.cs


var agent = gameObject.AddComponent<PublicApiAgent>();
// Make sure we can set the behavior type correctly after the agent is added
behaviorParams.behaviorType = BehaviorType.InferenceOnly;
// Can't actually create an Agent with InferenceOnly and no model, so change back
behaviorParams.behaviorType = BehaviorType.Default;
// TODO - not internal yet
// var decisionRequester = gameObject.AddComponent<DecisionRequester>();

18
com.unity.ml-agents/package.json


{
"name": "com.unity.ml-agents",
"displayName":"ML Agents",
"version": "0.14.1-preview",
"unity": "2018.4",
"description": "Add interactivity to your game with Machine Learning Agents trained using Deep Reinforcement Learning.",
"dependencies": {
"com.unity.barracuda": "0.6.1-preview"
}
}
"name": "com.unity.ml-agents",
"displayName": "ML Agents",
"version": "0.15.0-preview",
"unity": "2018.4",
"description": "Add interactivity to your game with Machine Learning Agents trained using Deep Reinforcement Learning.",
"dependencies": {
"com.unity.barracuda": "0.6.1-preview"
}
}

3
com.unity.ml-agents/Editor/BehaviorParametersEditor.cs


if (brainParameters != null)
{
var failedChecks = Inference.BarracudaModelParamLoader.CheckModel(
barracudaModel, brainParameters, sensorComponents);
barracudaModel, brainParameters, sensorComponents, behaviorParameters.behaviorType
);
foreach (var check in failedChecks)
{
if (check != null)

2
com.unity.ml-agents/Runtime/Academy.cs


/// Unity package version of com.unity.ml-agents.
/// This must match the version string in package.json and is checked in a unit test.
/// </summary>
internal const string k_PackageVersion = "0.14.1-preview";
internal const string k_PackageVersion = "0.15.0-preview";
const int k_EditorTrainingPort = 5004;

16
com.unity.ml-agents/Runtime/Inference/BarracudaModelParamLoader.cs


/// </param>
/// <param name="sensorComponents">Attached sensor components</param>
/// <returns>The list the error messages of the checks that failed</returns>
public static IEnumerable<string> CheckModel(Model model, BrainParameters brainParameters, SensorComponent[] sensorComponents)
public static IEnumerable<string> CheckModel(Model model, BrainParameters brainParameters,
SensorComponent[] sensorComponents, BehaviorType behaviorType = BehaviorType.Default)
failedModelChecks.Add(
"There is no model for this Brain, cannot run inference. " +
"(But can still train)");
var errorMsg = "There is no model for this Brain; cannot run inference. ";
if (behaviorType == BehaviorType.InferenceOnly)
{
errorMsg += "Either assign a model, or change to a different Behavior Type.";
}
else
{
errorMsg += "(But can still train)";
}
failedModelChecks.Add(errorMsg);
return failedModelChecks;
}

10
com.unity.ml-agents/Runtime/Policies/BehaviorParameters.cs


case BehaviorType.HeuristicOnly:
return new HeuristicPolicy(heuristic);
case BehaviorType.InferenceOnly:
{
if (m_Model == null)
{
var behaviorType = BehaviorType.InferenceOnly.ToString();
throw new UnityAgentsException(
$"Can't use Behavior Type {behaviorType} without a model. " +
"Either assign a model, or change to a different Behavior Type."
);
}
}
case BehaviorType.Default:
if (Academy.Instance.IsCommunicatorOn)
{

4
com.unity.ml-agents/CHANGELOG.md


and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.15.0-preview] - 2020-03-18
### Major Changes
- `Agent.CollectObservations` now takes a VectorSensor argument. (#3352, #3389)
- Added `Agent.CollectDiscreteActionMasks` virtual method with a `DiscreteActionMasker` argument to specify which discrete actions are unavailable to the Agent. (#3525)

- Fixed an issue when using GAIL with less than `batch_size` number of demonstrations. (#3591)
- The interfaces to the `SideChannel` classes (on C# and python) have changed to use new `IncomingMessage` and `OutgoingMessage` classes. These should make reading and writing data to the channel easier. (#3596)
- Updated the ExpertPyramid.demo example demonstration file (#3613)
- Updated project version for example environments to 2018.4.18f1. (#3618)
- Changed the Product Name in the example environments to remove spaces, so that the default build executable file doesn't contain spaces. (#3612)
## [0.14.1-preview] - 2020-02-25

4
Project/Packages/manifest.json


{
"dependencies": {
"com.unity.ads": "2.0.8",
"com.unity.analytics": "3.2.2",
"com.unity.analytics": "3.2.3",
"com.unity.collab-proxy": "1.2.15",
"com.unity.ml-agents": "file:../../com.unity.ml-agents",
"com.unity.package-manager-ui": "2.0.8",

"com.unity.modules.wind": "1.0.0",
"com.unity.modules.xr": "1.0.0"
},
"testables" : [
"testables": [
"com.unity.ml-agents"
]
}

2
Project/ProjectSettings/ProjectVersion.txt


m_EditorVersion: 2018.4.14f1
m_EditorVersion: 2018.4.18f1

5
Project/ProjectSettings/ProjectSettings.asset


useOnDemandResources: 0
accelerometerFrequency: 60
companyName: Unity Technologies
productName: Unity Environment
productName: UnityEnvironment
m_ShowUnitySplashScreen: 0
m_ShowUnitySplashScreen: 1
m_ShowUnitySplashLogo: 1
m_SplashScreenOverlayOpacity: 1
m_SplashScreenAnimation: 1

xboxOneMonoLoggingLevel: 0
xboxOneLoggingLevel: 1
xboxOneDisableEsram: 0
xboxOneEnableTypeOptimization: 0
xboxOnePresentImmediateThreshold: 0
switchQueueCommandMemory: 1048576
switchQueueControlMemory: 16384

722
Project/Assets/ML-Agents/Examples/Template/AgentPrefabsAndColors.unity
File diff suppressed because it is too large

133
Project/Assets/ML-Agents/Examples/Template/Scene.unity


--- !u!104 &2
RenderSettings:
m_ObjectHideFlags: 0
serializedVersion: 8
serializedVersion: 9
m_Fog: 0
m_FogColor: {r: 0.5, g: 0.5, b: 0.5, a: 1}
m_FogMode: 3

m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0, g: 0, b: 0, a: 1}
m_UseRadianceAmbientProbe: 0
--- !u!157 &3
LightmapSettings:
m_ObjectHideFlags: 0

m_BounceScale: 1
m_IndirectOutputScale: 1
m_AlbedoBoost: 1
m_TemporalCoherenceThreshold: 1
serializedVersion: 9
serializedVersion: 10
m_TextureWidth: 1024
m_TextureHeight: 1024
m_AtlasSize: 1024
m_AO: 0
m_AOMaxDistance: 1
m_CompAOExponent: 1

--- !u!1 &762086410
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
serializedVersion: 5
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 762086412}
- component: {fileID: 762086411}

--- !u!108 &762086411
Light:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 762086410}
m_Enabled: 1
serializedVersion: 8

serializedVersion: 2
m_Bits: 4294967295
m_Lightmapping: 4
m_LightShadowCasterMode: 0
m_AreaSize: {x: 1, y: 1}
m_BounceIntensity: 1
m_ColorTemperature: 6570

--- !u!4 &762086412
Transform:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 762086410}
m_LocalRotation: {x: 0.40821788, y: -0.23456968, z: 0.10938163, w: 0.8754261}
m_LocalPosition: {x: 0, y: 3, z: 0}

--- !u!1 &1223085755
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
serializedVersion: 5
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
m_Component:
- component: {fileID: 1223085757}
- component: {fileID: 1223085756}

--- !u!114 &1223085756
MonoBehaviour:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1223085755}
m_Enabled: 1
m_EditorHideFlags: 0

brain: {fileID: 0}
agentCameras: []
resetOnDone: 1
onDemandDecision: 0
numberOfActionsBetweenDecisions: 1
hasUpgradedFromAgentParameters: 1
maxStep: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1223085755}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0.71938086, y: 0.27357092, z: 4.1970553}

--- !u!1 &1574236047
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
serializedVersion: 5
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
- component: {fileID: 1574236048}
m_Layer: 0
m_Name: Academy
m_TagString: Untagged

m_IsActive: 1
--- !u!114 &1574236048
MonoBehaviour:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_GameObject: {fileID: 1574236047}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 9af83cd96d4bc4088a966af174446d1b, type: 3}
m_Name:
m_EditorClassIdentifier:
broadcastHub:
broadcastingBrains: []
_brainsToControl: []
maxSteps: 0
trainingConfiguration:
width: 80
height: 80
qualityLevel: 0
timeScale: 100
targetFrameRate: 60
inferenceConfiguration:
width: 1024
height: 720
qualityLevel: 1
timeScale: 1
targetFrameRate: 60
resetParameters:
resetParameters: []
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1574236047}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0.71938086, y: 0.27357092, z: 4.1970553}

--- !u!1 &1715640920
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
serializedVersion: 5
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
serializedVersion: 6
- component: {fileID: 1715640923}
- component: {fileID: 1715640922}
- component: {fileID: 1715640921}
m_Layer: 0

--- !u!81 &1715640921
AudioListener:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_GameObject: {fileID: 1715640920}
m_Enabled: 1
--- !u!92 &1715640923
Behaviour:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_projectionMatrixMode: 1
m_SensorSize: {x: 36, y: 24}
m_LensShift: {x: 0, y: 0}
m_GateFitMode: 2
m_FocalLength: 50
m_NormalizedViewPortRect:
serializedVersion: 2
x: 0

--- !u!4 &1715640925
Transform:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 0}
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1715640920}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 1, z: -10}

12
Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBall.unity


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.4497121, g: 0.49977785, b: 0.57563704, a: 1}
m_IndirectSpecularColor: {r: 0.44971162, g: 0.49977726, b: 0.5756362, a: 1}
m_UseRadianceAmbientProbe: 0
--- !u!157 &3
LightmapSettings:

m_Component:
- component: {fileID: 807556627}
- component: {fileID: 807556626}
- component: {fileID: 807556625}
- component: {fileID: 807556624}
- component: {fileID: 807556623}
m_Layer: 0

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 807556622}
m_Enabled: 1
--- !u!92 &807556625
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 807556622}
m_Enabled: 1
--- !u!20 &807556626
Camera:
m_ObjectHideFlags: 0

m_Name:
m_EditorClassIdentifier:
gravityMultiplier: 1
monitorVerticalOffset: 0
fixedDeltaTime: 0.02
maximumDeltaTime: 0.33333334
solverIterations: 6

12
Project/Assets/ML-Agents/Examples/3DBall/Scenes/3DBallHard.unity


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.4497121, g: 0.49977785, b: 0.57563704, a: 1}
m_IndirectSpecularColor: {r: 0.44971162, g: 0.49977726, b: 0.5756362, a: 1}
m_UseRadianceAmbientProbe: 0
--- !u!157 &3
LightmapSettings:

m_Component:
- component: {fileID: 807556627}
- component: {fileID: 807556626}
- component: {fileID: 807556625}
- component: {fileID: 807556624}
- component: {fileID: 807556623}
m_Layer: 0

m_GameObject: {fileID: 807556622}
m_Enabled: 1
--- !u!124 &807556624
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 807556622}
m_Enabled: 1
--- !u!92 &807556625
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Name:
m_EditorClassIdentifier:
gravityMultiplier: 1
monitorVerticalOffset: 0
fixedDeltaTime: 0.02
maximumDeltaTime: 0.33333334
solverIterations: 6

491
Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBall.nn
File diff suppressed because it is too large

605
Project/Assets/ML-Agents/Examples/3DBall/TFModels/3DBallHard.nn
File diff suppressed because it is too large

9
Project/Assets/ML-Agents/Examples/Basic/Scenes/Basic.unity


m_Component:
- component: {fileID: 1715640925}
- component: {fileID: 1715640924}
- component: {fileID: 1715640923}
- component: {fileID: 1715640922}
- component: {fileID: 1715640921}
m_Layer: 0

m_GameObject: {fileID: 1715640920}
m_Enabled: 1
--- !u!124 &1715640922
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1715640920}
m_Enabled: 1
--- !u!92 &1715640923
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

9
Project/Assets/ML-Agents/Examples/Basic/TFModels/Basic.nn


(binary model file; raw diff content not displayable)

133
Project/Assets/ML-Agents/Examples/Bouncer/TFModels/Bouncer.nn
File diff too large to display
View file

10
Project/Assets/ML-Agents/Examples/Crawler/Scenes/CrawlerDynamicTarget.unity


m_Component:
- component: {fileID: 1392866532}
- component: {fileID: 1392866531}
- component: {fileID: 1392866530}
- component: {fileID: 1392866529}
- component: {fileID: 1392866528}
- component: {fileID: 1392866533}

m_GameObject: {fileID: 1392866527}
m_Enabled: 1
--- !u!124 &1392866529
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 1392866527}
m_Enabled: 1
--- !u!92 &1392866530
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}

m_Name:
m_EditorClassIdentifier:
gravityMultiplier: 1
monitorVerticalOffset: 1
fixedDeltaTime: 0.01333
maximumDeltaTime: 0.15
solverIterations: 12

1001
Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerDynamic.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Crawler/TFModels/CrawlerStatic.nn
File diff too large to display
View file

659
Project/Assets/ML-Agents/Examples/FoodCollector/TFModels/FoodCollector.nn
File diff too large to display
View file

28
Project/Assets/ML-Agents/Examples/GridWorld/Scenes/GridWorld.unity


m_ReflectionIntensity: 1
m_CustomReflection: {fileID: 0}
m_Sun: {fileID: 0}
m_IndirectSpecularColor: {r: 0.4497121, g: 0.49977785, b: 0.57563704, a: 1}
m_IndirectSpecularColor: {r: 0.44971162, g: 0.49977726, b: 0.5756362, a: 1}
m_UseRadianceAmbientProbe: 0
--- !u!157 &3
LightmapSettings:

m_Component:
- component: {fileID: 99095116}
- component: {fileID: 99095115}
- component: {fileID: 99095114}
- component: {fileID: 99095113}
m_Layer: 0
m_Name: Main Camera

m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 99095112}
m_Enabled: 1
--- !u!92 &99095114
Behaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 99095112}
m_Enabled: 1
--- !u!20 &99095115
Camera:
m_ObjectHideFlags: 0

m_EditorClassIdentifier:
agentParameters:
maxStep: 100
resetOnDone: 1
onDemandDecision: 1
numberOfActionsBetweenDecisions: 1
hasUpgradedFromAgentParameters: 1
maxStep: 100
area: {fileID: 1795599557}
timeBetweenDecisionsAtInference: 0.15
renderCamera: {fileID: 797520692}

m_InferenceDevice: 0
m_BehaviorType: 0
m_BehaviorName: GridWorld
m_TeamID: 0
m_useChildSensors: 1
TeamId: 0
m_UseChildSensors: 1
--- !u!114 &125487791
MonoBehaviour:
m_ObjectHideFlags: 0

m_Script: {fileID: 11500000, guid: 132e1194facb64429b007ea1edf562d0, type: 3}
m_Name:
m_EditorClassIdentifier:
renderTexture: {fileID: 8400000, guid: 114608d5384404f89bff4b6f88432958, type: 2}
sensorName: RenderTextureSensor
grayscale: 0
compression: 1
m_RenderTexture: {fileID: 8400000, guid: 114608d5384404f89bff4b6f88432958, type: 2}
m_SensorName: RenderTextureSensor
m_Grayscale: 0
m_Compression: 1
--- !u!1 &260425459
GameObject:
m_ObjectHideFlags: 0
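The GridWorld.unity diff above shows serialized fields being renamed (e.g. `renderTexture` → `m_RenderTexture`, `sensorName` → `m_SensorName`, `m_TeamID` → `TeamId`), with the old and new keys both present in the scene file during the transition. In Unity, this kind of rename is normally kept backward-compatible with the `FormerlySerializedAs` attribute, so data saved under the old name in existing scenes and prefabs still loads. A minimal sketch of the pattern (the class name here is hypothetical, not the actual ML-Agents component):

```csharp
using UnityEngine;
using UnityEngine.Serialization;

// Sketch only: renaming a serialized field without losing data already
// saved in scenes or prefabs. FormerlySerializedAs tells Unity's
// deserializer to also accept the old field name when loading.
public class RenderTextureSensorComponentSketch : MonoBehaviour
{
    [FormerlySerializedAs("renderTexture")]
    [SerializeField]
    RenderTexture m_RenderTexture;

    [FormerlySerializedAs("sensorName")]
    [SerializeField]
    string m_SensorName = "RenderTextureSensor";
}
```

Once every affected scene and prefab has been re-saved under the new names, the attributes (and the stale duplicate keys seen in this diff) can be dropped.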

1001
Project/Assets/ML-Agents/Examples/GridWorld/TFModels/GridWorld.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Hallway/TFModels/Hallway.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/PushBlock/TFModels/PushBlock.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Pyramids/TFModels/Pyramids.nn
File diff too large to display
View file

567
Project/Assets/ML-Agents/Examples/Reacher/TFModels/Reacher.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Soccer/TFModels/Soccer.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Tennis/TFModels/Tennis.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/Walker/TFModels/Walker.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/WallJump/TFModels/BigWallJump.nn
File diff too large to display
View file

1001
Project/Assets/ML-Agents/Examples/WallJump/TFModels/SmallWallJump.nn
File diff too large to display
View file

29
com.unity.ml-agents/Tests/Editor/BehaviorParameterTests.cs


using NUnit.Framework;
using UnityEngine;
using MLAgents;
using MLAgents.Policies;

namespace MLAgents.Tests
{
    [TestFixture]
    public class BehaviorParameterTests
    {
        static float[] DummyHeuristic()
        {
            return null;
        }

        [Test]
        public void TestNoModelInferenceOnlyThrows()
        {
            var gameObj = new GameObject();
            var bp = gameObj.AddComponent<BehaviorParameters>();
            bp.behaviorType = BehaviorType.InferenceOnly;

            Assert.Throws<UnityAgentsException>(() =>
            {
                bp.GeneratePolicy(DummyHeuristic);
            });
        }
    }
}

11
com.unity.ml-agents/Tests/Editor/BehaviorParameterTests.cs.meta


fileFormatVersion: 2
guid: 877266b9e1bfe4330a68ab5f2da1836b
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant: