
Merge branch 'master' into 0.8.0.preview.4_staging

/0.8.0.preview.4_staging
GitHub, 3 years ago
Current commit: b9babd68
52 files changed, with 11,715 insertions and 1,996 deletions
  1. README.md (25)
  2. TestProjects/PerceptionHDRP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)
  3. TestProjects/PerceptionURP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)
  4. com.unity.perception/CHANGELOG.md (33)
  5. com.unity.perception/Documentation~/GroundTruthLabeling.md (35)
  6. com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md (9)
  7. com.unity.perception/Documentation~/PerceptionCamera.md (20)
  8. com.unity.perception/Documentation~/Randomization/Index.md (7)
  9. com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md (1)
  10. com.unity.perception/Documentation~/Schema/image_0.png (999)
  11. com.unity.perception/Documentation~/Tutorial/Images/build_uploaded.png (589)
  12. com.unity.perception/Documentation~/Tutorial/Images/di_usim_2.png (999)
  13. com.unity.perception/Documentation~/Tutorial/Images/runinusim.png (597)
  14. com.unity.perception/Documentation~/Tutorial/Phase1.md (31)
  15. com.unity.perception/Documentation~/Tutorial/Phase3.md (27)
  16. com.unity.perception/Documentation~/Tutorial/TUTORIAL.md (2)
  17. com.unity.perception/Editor/GroundTruth/IdLabelConfigEditor.cs (2)
  18. com.unity.perception/Editor/GroundTruth/SemanticSegmentationLabelConfigEditor.cs (10)
  19. com.unity.perception/Editor/GroundTruth/LabelConfigEditor.cs (7)
  20. com.unity.perception/Editor/GroundTruth/Uxml/ColoredLabelElementInLabelConfig.uxml (2)
  21. com.unity.perception/Editor/GroundTruth/Uxml/LabelConfig_Main.uxml (14)
  22. com.unity.perception/Editor/GroundTruth/PerceptionCameraEditor.cs (43)
  23. com.unity.perception/Runtime/GroundTruth/DatasetJsonUtility.cs (2)
  24. com.unity.perception/Runtime/GroundTruth/Labelers/BoundingBox3DLabeler.cs (2)
  25. com.unity.perception/Runtime/GroundTruth/Labelers/KeypointLabeler.cs (2)
  26. com.unity.perception/Runtime/GroundTruth/Labelers/SemanticSegmentationLabeler.cs (13)
  27. com.unity.perception/Runtime/GroundTruth/Labeling/SemanticSegmentationLabelConfig.cs (6)
  28. com.unity.perception/Runtime/GroundTruth/PerceptionCamera.cs (3)
  29. com.unity.perception/Runtime/GroundTruth/RenderPasses/CrossPipelinePasses/SemanticSegmentationCrossPipelinePass.cs (2)
  30. com.unity.perception/Runtime/GroundTruth/SimulationState.cs (20)
  31. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerTag.cs (2)
  32. com.unity.perception/Tests/Runtime/GroundTruthTests/DatasetJsonUtilityTests.cs (2)
  33. com.unity.perception/Tests/Runtime/GroundTruthTests/SegmentationGroundTruthTests.cs (38)
  34. com.unity.perception/Documentation~/Randomization/Images/randomization_uml.png (994)
  35. com.unity.perception/Documentation~/images/build_res.png (214)
  36. com.unity.perception/Documentation~/images/gameview_res.png (366)
  37. com.unity.perception/Documentation~/images/robotics_pose.png (739)
  38. com.unity.perception/Documentation~/images/unity-wide-whiteback.png (140)
  39. com.unity.perception/Documentation~/images/labeling_uml.png (250)
  40. com.unity.perception/Documentation~/FAQ/images/inner_objects.png (1001)
  41. com.unity.perception/Documentation~/FAQ/images/inner_labels.gif (1001)
  42. com.unity.perception/Documentation~/FAQ/images/cluster_randomizer.png (229)
  43. com.unity.perception/Documentation~/FAQ/images/prefab_cluster.png (199)
  44. com.unity.perception/Documentation~/FAQ/images/hdrp.png (1001)
  45. com.unity.perception/Documentation~/FAQ/images/hdrp_pt_128_samples.png (1001)
  46. com.unity.perception/Documentation~/FAQ/images/hdrp_pt_4096_samples.png (1001)
  47. com.unity.perception/Documentation~/FAQ/images/hdrp_rt_gi.png (1001)
  48. com.unity.perception/Documentation~/FAQ/images/volume.png (419)
  49. com.unity.perception/Documentation~/FAQ/FAQ.md (609)

README.md (25)


<img src="com.unity.perception/Documentation~/images/unity-wide.png" align="middle" width="3000"/>
<img src="com.unity.perception/Documentation~/images/unity-wide-whiteback.png" align="middle" width="3000"/>
<img src="com.unity.perception/Documentation~/images/banner2.PNG" align="middle"/>

<img src="https://img.shields.io/badge/unity-2019.4-green.svg?style=flat-square" alt="unity 2019.4">
<img src="https://img.shields.io/badge/unity-2020.2-green.svg?style=flat-square" alt="unity 2020.2">
<img src="https://img.shields.io/badge/unity-2020.2-green.svg?style=flat-square" alt="unity 2020.3">
> `com.unity.perception` is in active development. Its features and API are subject to significant change as development progresses.

**[Human Pose Labeling and Randomization Tutorial](com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md)**
Step-by-step instructions for using the keypoint, pose, and animation randomization tools included in the Perception package. It is recommended that you finish Phase 1 of the Perception Tutorial above before starting this tutorial.
**[FAQ](com.unity.perception/Documentation~/FAQ/FAQ.md)**
Check out our FAQ for a list of common questions, tips, tricks, and some sample code.
## Documentation
In-depth documentation on individual components of the package.

|[Perception Camera](com.unity.perception/Documentation~/PerceptionCamera.md)|Captures RGB images and ground truth from a [Camera](https://docs.unity3d.com/Manual/class-Camera.html).|
|[Dataset Capture](com.unity.perception/Documentation~/DatasetCapture.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset.|
|[Randomization](com.unity.perception/Documentation~/Randomization/Index.md)|The Randomization tool set lets you integrate domain randomization principles into your simulation.|
## Community and Support
For setup problems or discussions about leveraging the Perception package in your project, please create a new thread on the **[Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/)** and make sure to include as much detail as possible. If you run into any other problems with the Perception package or have a specific feature request, please submit a **[GitHub issue](https://github.com/Unity-Technologies/com.unity.perception/issues)**.
For any other questions or feedback, connect directly with the Computer Vision team at [computer-vision@unity3d.com](mailto:computer-vision@unity3d.com).
## Example Projects

The [Unity Simulation Smart Camera Example](https://github.com/Unity-Technologies/Unity-Simulation-Smart-Camera-Outdoor) illustrates how the Perception package could be used in a smart city or autonomous vehicle simulation. You can generate datasets locally or at scale in [Unity Simulation](https://unity.com/products/unity-simulation).
### Robotics Object Pose Estimation Demo
<img src="com.unity.perception/Documentation~/images/robotics_pose.png"/>
The [Robotics Object Pose Estimation Demo & Tutorial](https://github.com/Unity-Technologies/Robotics-Object-Pose-Estimation) demonstrates pick-and-place with a robot arm in Unity. It includes using ROS with Unity, importing URDF models, collecting labeled training data using the Perception package, and training and deploying a deep learning model.
## Local development
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP.

## License
* [License](com.unity.perception/LICENSE.md)
## Support
For general questions or concerns please contact the Computer Vision team at computer-vision@unity3d.com.
For feedback, bugs, or other issues please file a GitHub issue and the Computer Vision team will investigate the issue as soon as possible.
## Citation
If you find this package useful, consider citing it using:

TestProjects/PerceptionHDRP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)


color: {r: 0, g: 1, b: 0.16973758, a: 1}
- label: Terrain
color: {r: 0.8207547, g: 0, b: 0.6646676, a: 1}
skyColor: {r: 0, g: 0, b: 0, a: 1}

TestProjects/PerceptionURP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)


color: {r: 0, g: 1, b: 0.16973758, a: 1}
- label: Terrain
color: {r: 0.8207547, g: 0, b: 0.6646676, a: 1}
skyColor: {r: 0, g: 0, b: 0, a: 1}

com.unity.perception/CHANGELOG.md (33)


### Known Issues
### Added
Added support for 'step' button in editor.
Added random seed field to the Run in Unity Simulation Window.
User can now choose the base folder location to store their generated data.
Added 'projection' field in the capture.sensor metadata. Values are either "perspective" or "orthographic".

### Changed
Increased color variety in instance segmentation images.
The PoissonDiskSampling utility now samples a larger region of points and then crops it to the size of the intended region, to prevent edge-case bias.
Upgraded capture package dependency to 0.0.10-preview.22 to fix an issue with URP where post-processing effects were not included when capturing images.
Changed the JSON serialization key of the Normal Sampler's standard deviation property from "standardDeviation" to "stddev". Scenario JSON configurations that were generated using previous versions will need to be manually updated to reflect this change.

### Removed
### Fixed
Fixed keypoint labeling bug when visualizations are disabled.
Fixed an issue where Simulation Delta Time values larger than 100 seconds (in Perception Camera) would cause incorrect capture scheduling behavior.
Fixed an issue where Categorical Parameters sometimes tried to fetch items at `i = categories.Count`, which caused an exception.
## [0.8.0-preview.3] - 2021-03-24
### Changed
Expanded documentation on the Keypoint Labeler
Updated Keypoint Labeler logic to only report keypoints for visible objects by default
Increased color variety in instance segmentation images
### Fixed
Fixed compiler warnings in projects with HDRP on 2020.1 and later
Fixed a bug in the Normal Sampler where it would return values less than the passed-in minimum value or greater than the passed-in maximum value, for random values very close to 0 or 1 respectively.
Fixed keypoint labeling bug when visualizations are disabled.

com.unity.perception/Documentation~/GroundTruthLabeling.md (35)


# Labeling
Many labelers require mapping the objects in the view to the values recorded in the dataset. As an example, Semantic Segmentation needs to determine the color to draw each object in the segmentation image.
# Ground Truth Generation
The Perception package includes a set of Labelers which capture ground truth information along with each captured frame. The built-in Labelers support a variety of common computer vision tasks, including 2D and 3D bounding boxes, instance and semantic segmentation, and keypoint labeling (labeled points on 3D objects). The package also includes extensible components for building new Labelers to support additional tasks. Labelers derive ground truth data from labels specified on the GameObjects present in the Scene.
<p align="center">
<img src="images/labeling_uml.png" width="800"/>
<br><i>Class diagram for the ground truth generation system of the Perception package</i>
</p>
## Camera Labeler
A set of Camera Labelers are added to the Perception Camera, each tasked with generating a specific type of ground truth. For instance, the Semantic Segmentation Labeler outputs segmentation images in which each labeled object is rendered in a unique user-definable color and non-labeled objects and the background are rendered black.
## Label Config
The Label Config acts as a mapping between string labels and object classes (currently colors or integers), deciding which labels in the Scene (and thus which objects) should be tracked by the Labeler, and what color (or integer id) they should have in the captured frames.
## Labeling Component
The Labeling component associates a list of string-based labels with a GameObject and its descendants. A Labeling component on a descendant overrides its parent's labels.
## Label Resolution
The Labeling component added to the GameObjects in the Scene works in conjunction with each active Labeler's Label Config, in order to map each labeled GameObject to an object class in the output.
## Labeling component
The Labeling component associates a list of string-based labels with a GameObject and its descendants. A Labeling component on a descendant overrides its parent's labels.
### Limitations
## Limitations
## Label Config
Many labelers require a Label Config asset. This asset specifies a list of all labels to be captured in the dataset along with extra information used by the various labelers.
For example, you could label an asset representing a box of Rice Krispies as `food\cereal\kellogs\ricekrispies`
For example, you can assign four different labels to an asset representing a box of Rice Krispies so as to define an inherent hierarchy:
* "food": type
* "cereal": subtype

If the goal of the algorithm is to identify all objects in a Scene that are "food", that label is available and can be used. Conversely if the goal is to identify only Rice Krispies cereal within a Scene that label is also available. Depending on the goal of the algorithm, you can use any mix of labels in the hierarchy.
This way, you can have Label Configs that include labels from different levels of this hierarchy so that you can easily switch an object's label in the output by switching to a different Label Config. If the goal of the algorithm is to identify all objects in a Scene that are "food", that label is available and can be used if the Label Config only contains "food" and not the other labels of the object. Conversely if the goal is to identify only Rice Krispies cereal within a Scene, that label is also available. Depending on the goal of the algorithm, you can use any mix of labels in the hierarchy.
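As a concrete sketch (using the `Labeling` component's `labels` list and `RefreshLabeling()` call shown in the FAQ below; the label strings are just example values), assigning all four hierarchy levels to one object at runtime could look like this:
```C#
// Sketch: give one GameObject every level of the label hierarchy.
// Each Label Config can then match whichever level it cares about.
var labeling = gameObject.GetComponent<Labeling>();
labeling.labels.Add("food");
labeling.labels.Add("cereal");
labeling.labels.Add("kellogs");
labeling.labels.Add("ricekrispies");
labeling.RefreshLabeling(); // apply label changes at runtime
```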

com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md (9)


* [Step 5: Add Joints to the Character and Customize Keypoint Templates](#step-5)
* [Step 6: Randomize the Humanoid Character's Animations](#step-6)
> :information_source: If you face any problems while following this tutorial, please create a post on the **[Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/)** or the **[GitHub issues](https://github.com/Unity-Technologies/com.unity.perception/issues)** page and include as much detail as possible.
### <a name="step-1">Step 1: Import `.fbx` Models and Animations</a>
This tutorial assumes that you have already created a Unity project, installed the Perception package, and set up a Scene with a `Perception Camera` inside. If this is not the case, please follow **steps 1 to 3** of [Phase 1 of the Perception Tutorial](../Tutorial/Phase1.md).

</p>
* **:green_circle: Action**: Return to `Perception Camera` and assign `HPE_IdLabelConfig` to the `KeypointLabeler`'s label configuration property.
* **:green_circle: Action**: Search in the _**Project**_ tab for `CocoKeypointTemplate`, with the scope set to _**In Packages**_. Drag and drop the found asset into the `Active Template` field of the `Perception Camera`.
</p>
</p>
Note the `CocoKeypointTemplate` asset that is already assigned as the `Active Template`. This template will tell the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates.
The `Active Template` tells the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates.
<p align="center">
<img src="Images/take_objects_keypoints.gif" width="600"/>

com.unity.perception/Documentation~/PerceptionCamera.md (20)


|--|--|
| Affect Simulation Timing | Have this camera affect simulation timings (similar to a scheduled camera) by requesting a specific frame delta time. Enabling this option will let you set the `Simulation Delta Time` property described above.|
## Output Resolution
When using Unity Editor to generate datasets, the resolution of the images generated by the Perception Camera will match the resolution set for the ***Game*** view of the editor. However, images generated with built players (including local builds and Unity Simulation runs) will use the resolution specified in project settings.
## Camera labelers
* To set the resolution of the ***Game*** view, click on the dropdown menu in front of `Display 1`. You can use one of the provided resolutions or create a new one. To create one, click **+**. Set `Type` to `Fixed Resolution` and `Width` and `Height` to your desired resolution.
<p align="center">
<img src="images/gameview_res.png" width="300"/>
<br><i>Creating a new resolution preset for the ***Game*** view</i>
</p>
* To set the resolution of the built player, open ***Edit -> Project Settings*** and navigate to the ***Player*** tab. In the ***Resolution and Presentation*** section, set ***Fullscreen Mode*** to ***Windowed*** and then set ***Default Screen Width*** and ***Default Screen Height*** to your desired resolution.
<p align="center">
<img src="images/build_res.png" width="700"/>
<br><i>Setting the resolution of the built player</i>
</p>
## Camera Labelers
Camera labelers capture data related to the Camera in the JSON dataset. You can use this data to train models and for dataset statistics. The Perception package provides several Camera labelers, and you can derive from the CameraLabeler class to define more labelers.
### Semantic Segmentation Labeler

The BoundingBox2DLabeler produces 2D bounding boxes for each visible object with a label you define in the IdLabelConfig. Unity calculates bounding boxes using the rendered image, so it only excludes occluded or out-of-frame portions of the objects.
### Bounding Box 3D Ground Truth Labeler
### Bounding Box 3D Labeler
The Bounding Box 3D Ground Truth Labeler produces 3D ground truth bounding boxes for each labeled game object in the scene. Unlike the 2D bounding boxes, 3D bounding boxes are calculated from the labeled meshes in the scene and all objects (independent of their occlusion state) are recorded.
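For reference, a Labeler can also be attached from code rather than through the Inspector. A minimal sketch, assuming `idLabelConfig` references an `IdLabelConfig` asset and that `BoundingBox3DLabeler` takes its label config in the constructor like the other Labelers:
```C#
// Sketch: add a 3D bounding box Labeler to a PerceptionCamera at runtime.
// "idLabelConfig" is an assumed reference to an IdLabelConfig asset.
var perceptionCamera = GetComponent<PerceptionCamera>();
perceptionCamera.AddLabeler(new BoundingBox3DLabeler(idLabelConfig));
```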

com.unity.perception/Documentation~/Randomization/Index.md (7)


4. Parameters
5. Samplers
<br>
<p align="center">
<img src="Images/randomization_uml.png" width="900"/>
<br><i>Class diagram for the randomization framework included in the Perception package</i>
</p>
## Scenarios

com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md (1)


translation: <float, float, float> -- Position in meters: (x, y, z) with respect to the ego coordinate system. This is typically fixed during the simulation, but we can allow small variation for domain randomization.
rotation: <float, float, float, float> -- Orientation as quaternion: (w, x, y, z) with respect to ego coordinate system. This is typically fixed during the simulation, but we can allow small variation for domain randomization.
camera_intrinsic: <3x3 float matrix> [optional] -- Intrinsic camera calibration. Empty for sensors that are not cameras.
projection: <string> -- The type of projection the camera used for this capture. Options: "perspective" or "orthographic".
# add arbitrary optional key-value pairs for sensor attributes
}

com.unity.perception/Documentation~/Schema/image_0.png (999): diff too large to display

com.unity.perception/Documentation~/Tutorial/Images/build_uploaded.png (589): image, 1382 × 841 px, 68 KiB

com.unity.perception/Documentation~/Tutorial/Images/di_usim_2.png (999): diff too large to display

com.unity.perception/Documentation~/Tutorial/Images/runinusim.png (597): image, 1212 × 812 px, 61 KiB

com.unity.perception/Documentation~/Tutorial/Phase1.md (31)


* [Step 7: Inspect Generated Synthetic Data](#step-7)
* [Step 8: Verify Data Using Dataset Insights](#step-8)
> :information_source: If you face any problems while following this tutorial, please create a post on the **[Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/)** or the **[GitHub issues](https://github.com/Unity-Technologies/com.unity.perception/issues)** page and include as much detail as possible.
### <a name="step-1">Step 1: Download Unity Editor and Create a New Project</a>
* **:green_circle: Action**: Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of **Unity Editor 2020.2.x**. (The tutorial has not yet been fully tested on newer versions.)

As seen in the UI for `Perception Camera`, the list of `Camera Labelers` is currently empty. For each type of ground truth you wish to generate alongside your captured frames (e.g. 2D bounding boxes around objects), you will need to add a corresponding `Camera Labeler` to this list.
To speed-up your workflow, the Perception package comes with five common labelers for object-detection tasks; however, if you are comfortable with code, you can also add your own custom labelers. The labelers that come with the Perception package cover **3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), and semantic segmentation images (each object rendered in a unique colour)**. We will use four of these in this tutorial.
To speed-up your workflow, the Perception package comes with seven common Labelers for object-detection and human keypoint labeling tasks; however, if you are comfortable with code, you can also add your own custom Labelers. The Labelers that come with the Perception package cover **keypoint labeling, 3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), instance segmentation, and semantic segmentation**. We will use four of these in this tutorial.
Once you add the labelers, the _**Inspector**_ view of the `Perception Camera` component will look like this:
Once you add the Labelers, the _**Inspector**_ view of the `Perception Camera` component will look like this:
<p align="center">
<img src="Images/pc_labelers_added.png" width="400"/>

One of the useful features that comes with the `Perception Camera` component is the ability to display real-time visualizations of the labelers when your simulation is running. For instance, `BoundingBox2DLabeler` can display two-dimensional bounding boxes around the foreground objects that it tracks in real-time and `SemanticSegmentationLabeler` displays the semantic segmentation image overlaid on top of the camera's view. To enable this feature, make sure the `Show Labeler Visualizations` checkmark is enabled.
One of the useful features that comes with the `Perception Camera` component is the ability to display real-time visualizations of the Labelers when your simulation is running. For instance, `BoundingBox2DLabeler` can display two-dimensional bounding boxes around the foreground objects that it tracks in real-time and `SemanticSegmentationLabeler` displays the semantic segmentation image overlaid on top of the camera's view. To enable this feature, make sure the `Show Labeler Visualizations` checkmark is enabled.
It is now time to tell each labeler added to the `Perception Camera` which objects it should label in the generated dataset. For instance, if your workflow is intended for generating frames and ground-truth for detecting chairs, your labelers would need to know that they should look for objects labeled "chair" within the scene. The chairs should in turn also be labeled "chair" in order to make them visible to the labelers. We will now learn how to set up these configurations.
It is now time to tell each Labeler added to the `Perception Camera` which objects it should label in the generated dataset. For instance, if your workflow is intended for generating frames and ground-truth for detecting chairs, your Labelers would need to know that they should look for objects labeled "chair" within the scene. The chairs should in turn also be labeled "chair" in order to make them visible to the Labelers. We will now learn how to set up these configurations.
You will notice each added labeler has a `Label Config` field. By adding a label configuration here you can instruct the labeler to look for certain labels within the scene and ignore the rest. To do that, we should first create label configurations.
You will notice each added Labeler has a `Label Config` field. By adding a label configuration here you can instruct the Labeler to look for certain labels within the scene and ignore the rest. To do that, we should first create label configurations.
* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_.

* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Semantic Segmentation Label Config**_. Name this asset `TutorialSemanticSegmentationLabelConfig`.
Now that you have created your label configurations, we need to assign them to labelers that you previously added to your `Perception Camera` component.
Now that you have created your label configurations, we need to assign them to Labelers that you previously added to your `Perception Camera` component.
* **:green_circle: Action**: Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three labelers. To do so, you can either drag and drop the former into the corresponding fields for each labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth labeler. The `Perception Camera` component will now look like the image below:
* **:green_circle: Action**: Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three Labelers. To do so, you can either drag and drop the former into the corresponding fields for each Labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth Labeler. The `Perception Camera` component will now look like the image below:
<p align="center">
<img src="Images/pclabelconfigsadded.png" width="400"/>

The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three of these are common Unity components, the fourth one is specific to the Perception package, and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI here provides a multitude of ways for you to assign labels to the object. You can either choose to have the asset automatically labeled (by enabling `Use Automatic Labeling`), or add labels manually. In case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path.
Note that each object can have multiple labels assigned, and thus appear as different objects to labelers with different label configurations. For instance, you may want your semantic segmentation labeler to detect all cream cartons as `dairy_product`, while your bounding box labeler still distinguishes between different types of dairy product. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
Note that each object can have multiple labels assigned, and thus appear as different objects to Labelers with different label configurations. For instance, you may want your semantic segmentation Labeler to detect all cream cartons as `dairy_product`, while your bounding box Labeler still distinguishes between different types of dairy product. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
For this tutorial, we have already prepared the foreground Prefabs for you and added the `Labeling` component to all of them. These Prefabs were based on 3D scans of the actual grocery items. If you are making your own Prefabs, you can easily add a `Labeling` component to them using the _**Add Component**_ button visible in the bottom right corner of the screenshot above.

<img src="Images/labelconfigs.png" width="800"/>
</p>
> :information_source: Since we used automatic labels here and added them to our configurations, we are confident that the labels in the configurations match the labels of our objects. In cases where you decide to add manual labels to objects and configurations, make sure you use the exact same labels, otherwise, the objects for which a matching label is not found in your configurations will not be detected by the labelers that are using those configurations.
> :information_source: Since we used automatic labels here and added them to our configurations, we are confident that the labels in the configurations match the labels of our objects. In cases where you decide to add manual labels to objects and configurations, make sure you use the exact same labels, otherwise, the objects for which a matching label is not found in your configurations will not be detected by the Labelers that are using those configurations.
Now that we have labeled all our foreground objects and set up our label configurations, let's briefly test things.

<img src="Images/first_run.png" width = "700"/>
</p>
In this view, you will also see the real-time visualizations we discussed before shown on top of the camera's view. In the top right corner of the window, you can see a visualization control panel, through which you can enable or disable visualizations for individual labelers. That said, we currently have no foreground objects in the Scene yet, so no bounding boxes or semantic segmentation overlays will be displayed.
In this view, you will also see the real-time visualizations we discussed before shown on top of the camera's view. In the top right corner of the window, you can see a visualization control panel, through which you can enable or disable visualizations for individual Labelers. That said, we currently have no foreground objects in the Scene yet, so no bounding boxes or semantic segmentation overlays will be displayed.
Note that disabling visualizations for a labeler does not affect your generated data. The annotations from all labelers that are active before running the simulation will continue to be recorded and will appear in the output data.
Note that disabling visualizations for a Labeler does not affect your generated data. The annotations from all Labelers that are active before running the simulation will continue to be recorded and will appear in the output data.
To generate data as fast as possible, the simulation utilizes asynchronous processing to churn through frames quickly, rearranging and randomizing the objects in each frame. To be able to check out individual frames and inspect the real-time visualizations, click on the pause button (next to play). You can also switch back to the Scene view to be able to inspect each object individually. For performance reasons, it is recommended to disable visualizations altogether (from the _**Inspector**_ view of `Perception Camera`) once you are ready to generate a large dataset.

- RGB images (raw camera output) (if the `Save Camera Output to Disk` check mark is enabled on `Perception Camera`)
- Semantic segmentation images (if the `SemanticSegmentationLabeler` is added and active on `Perception Camera`)
The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active Labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
> :information_source: Are the RGB images blank? This may be a bug. When using URP in OSX, having MSAA enabled on the camera may cause the output RGB images to be blank. As a workaround, you can disable MSAA and use FXAA instead, until the issue is fixed.
* `label_id`: The numerical id assigned to this object's label in the labeler's label configuration
* `label_id`: The numerical id assigned to this object's label in the Labeler's label configuration
* `label_name`: The object's label, e.g. `candy_minipralines_lindt`
* `instance_id`: Unique instance id of the object
* `x` and `y`: Pixel coordinates of the top-left corner of the object's bounding box (measured from the top-left corner of the image)

com.unity.perception/Documentation~/Tutorial/Phase3.md (27)


The process of running a project on Unity Simulation involves building it for Linux and then uploading this build, along with a set of parameters, to Unity Simulation. The Perception package simplifies this process by including a dedicated _**Run in Unity Simulation**_ window that accepts a small number of required parameters and handles everything else automatically.
For performance reasons, it is best to disable real-time visualizations before carrying on with the Unity Simulation run.
* **:green_circle: Action**: From the _**Inspector**_ view of `Perception Camera`, disable real-time visualizations.
In order to make sure our builds are compatible with Unity Simulation, we need to set our project's scripting backend to _**Mono**_ rather than _**IL2CPP**_ (if not already set). We will also need to switch to _**Windowed**_ mode.

<img src="Images/runinusim.png" width="600"/>
</p>
Here, you can also specify a name for the run, the number of Iterations the Scenario will execute for, and the number of _**Instances**_ (number of nodes the work will be distributed across) for the run. This window automatically picks the currently active Scene and Scenario to run in Unity Simulation.
Here, you can specify a name for the run, the number of Iterations the Scenario will execute for, and the number of Instances (number of nodes the work will be distributed across) for the run. This window automatically picks the currently active Scene and Scenario to run in Unity Simulation.
* **:green_circle: Action**: Name your run `FirstRun`, set the number of Iterations to `1000`, and Instances to `20`.
* **:green_circle: Action**: If you'd like to use a new random seed for this run of your Scenario, click `Randomize` to generate a new seed.
* **:green_circle: Action**: Click _**Build and Run**_.
> :information_source: You can ignore the ***Optional Configuration*** section for now. This is useful if you plan to specify a configuration for your Scenario (including the Randomizers) that will override the values set in the Scenario UI, in Unity Simulation. To generate a configuration, you can click on the ***Generate JSON Config*** button provided in the ***Inspector*** view of Scenario components.

* **:green_circle: Action**: Once the operation is complete, you can find the **Execution ID** of this Unity Simulation run in the **Console** tab and the ***Run in Unity Simulation** Window:
* **:green_circle: Action**: Once the operation is complete, you can find the **Execution ID** of this Unity Simulation run in the **Console** tab and the ***Run in Unity Simulation*** Window:
<p align="center">
<img src="Images/build_uploaded.png" width="600"/>

```
 name                 id                                     creation time
-------------------- -------------------------------------- ---------------------------
 Perception Tutorial  acd31956-582b-4138-bec8-6670be150f09 * 2020-09-30T00:33:41+00:00
 Perception Tutorial  38baa0d0-a2cd-4ee1-801b-39ca3fc5cbc6 * 2020-09-30T00:33:41+00:00
 SynthDet             9ec23417-73cd-becd-9dd6-556183946153   2020-08-12T19:46:20+00:00
```

An example output with 3 runs would look like this:
```
Active Project ID: acd31956-582b-4138-bec8-6670be150f09
Active Project ID: 38baa0d0-a2cd-4ee1-801b-39ca3fc5cbc6

Run2  klvfxgT  2020-10-01 21:46:39

 id       status        created_at
-------- ------------- ---------------------
yegz4WN  In_Progress   2020-10-01 23:17:54
ojE8Z20  In_Progress   2020-10-01 23:17:54
kML3i50  In_Progress   2020-10-01 21:46:42
```

You can also obtain a list of all the builds you have uploaded to Unity Simulation using the `usim get builds` command.
You may notice that the IDs seen above for the run named `FirstRun` match those we saw earlier in Unity Editor's _**Console**_. You can see here that the single execution for our recently uploaded build is `In_Progress` and that the execution ID is `yegz4WN`.
You may notice that the IDs seen above for the run named `FirstRun` match those we saw earlier in Unity Editor's _**Console**_. You can see here that the single execution for our recently uploaded build is `In_Progress` and that the execution ID is `ojE8Z20`.
Unity Simulation utilizes the ability to run simulation Instances in parallel. If you enter a number larger than 1 for the number of Instances in the _**Run in Unity Simulation**_ window, your run will be parallelized, and multiple simulation Instances will simultaneously execute. You can view the status of all simulation Instances using the `usim summarize run-execution <execution-id>` command. This command will tell you how many Instances have succeeded, failed, have not run yet, or are in progress. Make sure to replace `<execution-id>` with the execution ID seen in your run list. In the above example, this ID would be `yegz4WN`.
Unity Simulation utilizes the ability to run simulation Instances in parallel. If you enter a number larger than 1 for the number of Instances in the _**Run in Unity Simulation**_ window, your run will be parallelized, and multiple simulation Instances will simultaneously execute. You can view the status of all simulation Instances using the `usim summarize run-execution <execution-id>` command. This command will tell you how many Instances have succeeded, failed, have not run yet, or are in progress. Make sure to replace `<execution-id>` with the execution ID seen in your run list. In the above example, this ID would be `ojE8Z20`.
* **:green_circle: Action**: Use the `usim summarize run-execution <execution-id>` command to observe the status of your execution nodes:

`USimCLI\windows\usim summarize run-execution <execution-id>`
Here is an example output of this command, indicating that there is only one node, and that the node is still in progress:
Here is an example output of this command, indicating that there are 20 nodes, and that they are all still in progress:
In Progress 1
In Progress 20
At this point, we will need to wait until the execution is complete. Check your run with the above command periodically until you see a 1 for `Successes` and 0 for `In Progress`.
At this point, we will need to wait until the execution is complete. Check your run with the above command periodically until you see a 20 for `Successes` and 0 for `In Progress`.
Given the relatively small size of our Scenario (1,000 Iterations), this should take less than 5 minutes.
* **:green_circle: Action**: Use the `usim summarize run-execution <execution-id>` command periodically to check the progress of your run.
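For example, on macOS, the full command with the execution ID from the example output above would be `USimCLI/mac/usim summarize run-execution ojE8Z20`.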

`USimCLI/mac/usim download manifest <execution-id>`
The manifest is a `.csv` formatted file and will be downloaded to the same location from which you execute the above command, which is the `unity_simulation_bundle` folder.
This file does **not**** include actual data, rather, it includes links to the generated data, including the JSON files, the logs, the images, and so on.
This file does **not** include actual data, rather, it includes links to the generated data, including the JSON files, the logs, the images, and so on.
* **:green_circle: Action**: Open the manifest file to check it. Make sure there are links to various types of output and check a few of the links to see if they work.

com.unity.perception/Documentation~/Tutorial/TUTORIAL.md (2)


<img src="../images/unity-wide.png" align="middle" width="3000"/>
<img src="../images/unity-wide-whiteback.png" align="middle" width="3000"/>
# Perception Tutorial

com.unity.perception/Editor/GroundTruth/IdLabelConfigEditor.cs (2)


m_StartingIdEnumField.SetEnabled(AutoAssign);
m_SkyColorUi.style.display = DisplayStyle.None;
AutoAssignIdsIfNeeded();
m_MoveDownButton.clicked += MoveSelectedItemDown;
m_MoveUpButton.clicked += MoveSelectedItemUp;

com.unity.perception/Editor/GroundTruth/SemanticSegmentationLabelConfigEditor.cs (10)


{
m_MoveButtons.style.display = DisplayStyle.None;
m_IdSpecificUi.style.display = DisplayStyle.None;
var skyColorProperty = serializedObject.FindProperty(nameof(SemanticSegmentationLabelConfig.skyColor));
m_SkyColorField.BindProperty(skyColorProperty);
m_SkyColorField.RegisterValueChangedCallback(e => UpdateSkyHexLabel(e.newValue));
UpdateSkyHexLabel(skyColorProperty.colorValue);
}
private void UpdateSkyHexLabel(Color colorValue)
{
m_SkyHexLabel.text = "#" + ColorUtility.ToHtmlStringRGBA(colorValue);
}
public override void PostRemoveOperations()

com.unity.perception/Editor/GroundTruth/LabelConfigEditor.cs (7)


protected Button m_MoveDownButton;
protected VisualElement m_MoveButtons;
protected VisualElement m_IdSpecificUi;
protected VisualElement m_SkyColorUi;
protected ColorField m_SkyColorField;
protected Label m_SkyHexLabel;
public void OnEnable()

m_StartingIdEnumField = m_Root.Q<EnumField>("starting-id-dropdown");
m_AutoIdToggle = m_Root.Q<Toggle>("auto-id-toggle");
m_IdSpecificUi = m_Root.Q<VisualElement>("id-specific-ui");
m_SkyColorUi = m_Root.Q<VisualElement>("sky-color-ui");
m_SkyColorField = m_Root.Q<ColorField>("sky-color-value");
m_SkyHexLabel = m_Root.Q<Label>("sky-color-hex");
m_SaveButton.SetEnabled(false);

com.unity.perception/Editor/GroundTruth/Uxml/ColoredLabelElementInLabelConfig.uxml (2)


<UXML xmlns="UnityEngine.UIElements" xmlns:editor="UnityEditor.UIElements">
<VisualElement class="added-label" style="padding-top: 3px;">
<Button name="remove-button" class="labeling__remove-item-button"/>
<TextField name="label-value" class="labeling__added-label-value"/>
<TextField name="label-value" class="labeling__added-label-value"/>
<VisualElement style="min-width:20px; flex-direction: row; display:none">
<Button name="move-up-button" class="move-label-in-config-button move-up" style="margin-right:-2px"/>
<Button name="move-down-button" class="move-label-in-config-button move-down"/>

com.unity.perception/Editor/GroundTruth/Uxml/LabelConfig_Main.uxml (14)


<VisualElement class="outer-container" name="outer-container">
<Style src="../Uss/Styles.uss"/>
<VisualElement class="inner-container" name="id-specific-ui">
<Label text="Options" name="options-title" class="title-label"/>
</VisualElement>
</VisualElement>
<VisualElement class="inner-container" name="sky-color-ui">
<Label text="Options" name="options-title" class="title-label"/>
<VisualElement style="flex-direction: row; flex-grow: 1;">
<Label text="Sky Color" name="added-labels-title"
style="flex-grow: 1; font-size: 11; min-width : 60px; align-self:center;"/>
<VisualElement class="generic-hover"
style="min-width : 137px; flex-direction: row; padding: 3px 5px 3px 5px; margin-left: 3px; margin-right: 3px; border-width: 1px; border-color: #666666; border-radius: 4px;">
<editor:ColorField name="sky-color-value" style="min-width : 60px; max-width: 60px; align-self:center;"/>
<Label name="sky-color-hex"
style="font-size: 11; min-width : 60px; max-width: 60px; align-self:center; margin: 2px"/>
</VisualElement>
</VisualElement>
</VisualElement>
<VisualElement name="added-labels" class="inner-container" style="margin-top:5px">

com.unity.perception/Editor/GroundTruth/PerceptionCameraEditor.cs (43)


m_LabelersList.DoLayoutList();
}
var s = new GUIStyle(EditorStyles.textField);
s.wordWrap = true;
var defaultColor = s.normal.textColor;
EditorGUILayout.LabelField("Latest Output Folder");
EditorGUILayout.LabelField("Latest Generated Dataset");
EditorGUILayout.HelpBox(dir, MessageType.None);
s.normal.textColor = Color.green;
EditorGUILayout.LabelField(dir, s);
if (GUILayout.Button("Show Folder"))
{
EditorUtility.RevealInFinder(dir);

GUILayout.EndHorizontal();
GUILayout.EndVertical();
}
GUILayout.Space(10);
var userBaseDir = PlayerPrefs.GetString(SimulationState.userBaseDirectoryKey);
if (userBaseDir == string.Empty)
{
var folder = PlayerPrefs.GetString(SimulationState.defaultOutputBaseDirectory);
userBaseDir = folder != string.Empty ? folder : Application.persistentDataPath;
}
EditorGUILayout.LabelField("Output Base Folder");
GUILayout.BeginVertical("TextArea");
s.normal.textColor = defaultColor;
EditorGUILayout.LabelField(userBaseDir, s);
GUILayout.BeginHorizontal();
if (GUILayout.Button("Change Folder"))
{
var path = EditorUtility.OpenFolderPanel("Choose Output Folder", "", "");
if (path.Length != 0)
{
Debug.Log($"Chose path: {path}");
PlayerPrefs.SetString(SimulationState.userBaseDirectoryKey, path);
}
}
GUILayout.EndHorizontal();
GUILayout.EndVertical();
if (EditorSettings.asyncShaderCompilation)
{

com.unity.perception/Runtime/GroundTruth/DatasetJsonUtility.cs (2)


case double v:
return new JValue(v);
case string v:
return new JValue($"\"{v}\"");
return new JValue(v);
case uint v:
return new JValue(v);
}

com.unity.perception/Runtime/GroundTruth/Labelers/BoundingBox3DLabeler.cs (2)


protected override void Setup()
{
if (idLabelConfig == null)
throw new InvalidOperationException("BoundingBox2DLabeler's idLabelConfig field must be assigned");
throw new InvalidOperationException("BoundingBox3DLabeler's idLabelConfig field must be assigned");
m_AnnotationDefinition = DatasetCapture.RegisterAnnotationDefinition("bounding box 3D", idLabelConfig.GetAnnotationSpecification(),
"Bounding box for each labeled object visible to the sensor", id: new Guid(annotationId));

com.unity.perception/Runtime/GroundTruth/Labelers/KeypointLabeler.cs (2)


return json;
}
}
}
}

com.unity.perception/Runtime/GroundTruth/Labelers/SemanticSegmentationLabeler.cs (13)


{
label_name = l.label,
pixel_value = l.color
}).ToArray();
});
if (labelConfig.skyColor != Color.black)
{
specs = specs.Append(new SemanticSegmentationSpec()
{
label_name = "sky",
pixel_value = labelConfig.skyColor
});
}
specs,
specs.ToArray(),
"pixel-wise semantic segmentation label",
"PNG",
id: Guid.Parse(annotationId));

com.unity.perception/Runtime/GroundTruth/Labeling/SemanticSegmentationLabelConfig.cs (6)


Color.yellow,
Color.gray
};
/// <summary>
/// The color to use for the sky in semantic segmentation images
/// </summary>
public Color skyColor = Color.black;
}
/// <summary>

com.unity.perception/Runtime/GroundTruth/PerceptionCamera.cs (3)


// Record the camera's projection matrix
SetPersistentSensorData("camera_intrinsic", ToProjectionMatrix3x3(cam.projectionMatrix));
// Record the camera's projection type (orthographic or perspective)
SetPersistentSensorData("projection", cam.orthographic ? "orthographic" : "perspective");
var captureFilename = $"{Manager.Instance.GetDirectoryFor(rgbDirectory)}/{k_RgbFilePrefix}{Time.frameCount}.png";
var dxRootPath = $"{rgbDirectory}/{k_RgbFilePrefix}{Time.frameCount}.png";
SensorHandle.ReportCapture(dxRootPath, SensorSpatialData.FromGameObjects(

com.unity.perception/Runtime/GroundTruth/RenderPasses/CrossPipelinePasses/SemanticSegmentationCrossPipelinePass.cs (2)


s_LastFrameExecuted = Time.frameCount;
var renderList = CreateRendererListDesc(camera, cullingResult, "FirstPass", 0, m_OverrideMaterial, -1);
cmd.ClearRenderTarget(true, true, Color.black);
cmd.ClearRenderTarget(true, true, m_LabelConfig.skyColor);
DrawRendererList(renderContext, cmd, RendererList.Create(renderList));
}

com.unity.perception/Runtime/GroundTruth/SimulationState.cs (20)


float m_LastTimeScale;
readonly string m_OutputDirectoryName;
string m_OutputDirectoryPath;
public const string userBaseDirectoryKey = "userBaseDirectory";
public const string defaultOutputBaseDirectory = "defaultOutputBaseDirectory";
public bool IsRunning { get; private set; }

public SimulationState(string outputDirectory)
{
PlayerPrefs.SetString(defaultOutputBaseDirectory, Configuration.Instance.GetStorageBasePath());
PlayerPrefs.SetString(latestOutputDirectoryKey, Manager.Instance.GetDirectoryFor());
var basePath = PlayerPrefs.GetString(userBaseDirectoryKey, string.Empty);
if (basePath != string.Empty)
{
if (Directory.Exists(basePath))
{
Configuration.localPersistentDataPath = basePath;
}
else
{
Debug.LogWarning($"Passed in directory to store simulation artifacts: {basePath}, does not exist. Using default directory {Configuration.localPersistentDataPath} instead.");
basePath = Configuration.localPersistentDataPath;
}
}
PlayerPrefs.SetString(latestOutputDirectoryKey, Manager.Instance.GetDirectoryFor("", basePath));
IsRunning = true;
}

com.unity.perception/Runtime/Randomization/Randomizers/RandomizerTag.cs (2)


/// OnEnable is called when this RandomizerTag is enabled, either created, instantiated, or enabled via
/// the Unity Editor
/// </summary>
protected void OnEnable()
protected virtual void OnEnable()
{
Register();
}
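Since `OnEnable` is now virtual, a derived tag can extend the registration step. A minimal sketch with a hypothetical subclass:
```C#
// Hypothetical subclass: now that OnEnable is virtual, a derived tag can
// add its own setup while still registering with the tag manager.
public class MyCustomTag : RandomizerTag
{
    protected override void OnEnable()
    {
        base.OnEnable(); // performs Register()
        // additional per-tag initialization here
    }
}
```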

com.unity.perception/Tests/Runtime/GroundTruthTests/DatasetJsonUtilityTests.cs (2)


[TestCase(1u, "1")]
[TestCase(1.0, "1")]
[TestCase(1.0f, "1")]
[TestCase("string", "\"string\"")]
[TestCase("string", "string")]
public void Primitive_ReturnsValue(object o, string jsonExpected)
{
var jsonActual = DatasetJsonUtility.ToJToken(o).ToString();

com.unity.perception/Tests/Runtime/GroundTruthTests/SegmentationGroundTruthTests.cs (38)


{
static readonly Color32 k_SemanticPixelValue = new Color32(10, 20, 30, Byte.MaxValue);
private static readonly Color32 k_InstanceSegmentationPixelValue = new Color32(255,0,0, 255);
private static readonly Color32 k_SkyValue = new Color32(10, 20, 30, 40);
public enum SegmentationKind
{

switch (segmentationKind)
{
case SegmentationKind.Instance:
//expectedPixelValue = new Color32(0, 74, 255, 255);
expectedPixelValue = k_InstanceSegmentationPixelValue;
cameraObject = SetupCameraInstanceSegmentation(OnSegmentationImageReceived);
break;

CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false, k_SkyValue);
AddTestObjectForCleanup(TestHelper.CreateLabeledPlane(label: "non-matching"));
yield return null;

CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false, k_SkyValue);
var gameObject = TestHelper.CreateLabeledPlane();
gameObject.GetComponent<Labeling>().enabled = false;

}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithEmptyFrame_ProducesBlack([Values(false, true)] bool showVisualizations)
public IEnumerator SemanticSegmentationPass_WithEmptyFrame_ProducesSky([Values(false, true)] bool showVisualizations)
var expectedPixelValue = new Color32(0, 0, 0, 255);
var expectedPixelValue = k_SkyValue;
void OnSegmentationImageReceived(NativeArray<Color32> data)
{
timesSegmentationImageReceived++;

var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), showVisualizations);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), showVisualizations, expectedPixelValue);
//TestHelper.LoadAndStartRenderDocCapture(out var gameView);
yield return null;

}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithNoObjects_ProducesSky()
{
int timesSegmentationImageReceived = 0;
var expectedPixelValue = k_SkyValue;
void OnSegmentationImageReceived(NativeArray<Color32> data)
{
timesSegmentationImageReceived++;
CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(
a => OnSegmentationImageReceived(a.data), false, expectedPixelValue);
yield return null;
//destroy the object to force all pending segmented image readbacks to finish and events to be fired.
DestroyTestObject(cameraObject);
Assert.AreEqual(1, timesSegmentationImageReceived);
}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithTextureOverride_RendersToOverride([Values(true, false)] bool showVisualizations)
{
var expectedPixelValue = new Color32(0, 0, 255, 255);

return cameraObject;
}
GameObject SetupCameraSemanticSegmentation(Action<SemanticSegmentationLabeler.ImageReadbackEventArgs> onSegmentationImageReceived, bool showVisualizations)
GameObject SetupCameraSemanticSegmentation(Action<SemanticSegmentationLabeler.ImageReadbackEventArgs> onSegmentationImageReceived, bool showVisualizations, Color? backgroundColor = null)
{
var cameraObject = SetupCamera(out var perceptionCamera, showVisualizations);
var labelConfig = ScriptableObject.CreateInstance<SemanticSegmentationLabelConfig>();

color = k_SemanticPixelValue
}
});
if (backgroundColor != null)
{
labelConfig.skyColor = backgroundColor.Value;
}
var semanticSegmentationLabeler = new SemanticSegmentationLabeler(labelConfig);
semanticSegmentationLabeler.imageReadback += onSegmentationImageReceived;
perceptionCamera.AddLabeler(semanticSegmentationLabeler);

com.unity.perception/Documentation~/Randomization/Images/randomization_uml.png (994): image, 3124 × 1304 px, 254 KiB

com.unity.perception/Documentation~/images/build_res.png (214): image, 1455 × 465 px, 66 KiB

com.unity.perception/Documentation~/images/gameview_res.png (366): image, 692 × 946 px, 147 KiB

com.unity.perception/Documentation~/images/robotics_pose.png (739): image, 700 × 196 px, 179 KiB

com.unity.perception/Documentation~/images/unity-wide-whiteback.png (140): image, 3000 × 636 px, 41 KiB

com.unity.perception/Documentation~/images/labeling_uml.png (250): diff too large to display

com.unity.perception/Documentation~/FAQ/images/inner_objects.png (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/inner_labels.gif (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/cluster_randomizer.png (229): image, 850 × 460 px, 73 KiB

com.unity.perception/Documentation~/FAQ/images/prefab_cluster.png (199): image, 802 × 312 px, 55 KiB

com.unity.perception/Documentation~/FAQ/images/hdrp.png (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/hdrp_pt_128_samples.png (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/hdrp_pt_4096_samples.png (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/hdrp_rt_gi.png (1001): diff too large to display

com.unity.perception/Documentation~/FAQ/images/volume.png (419): image, 1234 × 740 px, 115 KiB

com.unity.perception/Documentation~/FAQ/FAQ.md (609)


# FAQ and Code Samples
This page covers a variety of topics, including common questions and issues that may arise while using the Perception package, as well as code samples and recommendations for a number of popular workflows and use-cases.
## <a name="labeling">Labeling</a>
<details>
<summary><strong>Q: How can I disable or enable labeling on an object at runtime?</strong></summary>
<br>
You can turn labeling on and off on a GameObject by switching the enabled state of its `Labeling` component. For example:
```C#
gameObject.GetComponent<Labeling>().enabled = false;
```
---
</details>
<details>
<summary><strong>Q: How can I remove or add new labels to objects at runtime?</strong></summary><br>
This can be achieved by modifying the `labels` list of the `Labeling` component. The key is to call `RefreshLabeling()` on the component after making any changes to the labels. For example:
```C#
var labeling = gameObject.GetComponent<Labeling>();
labeling.labels.Clear();
labeling.labels.Add("new-label");
labeling.RefreshLabeling();
```
Keep in mind that any new label added with this method should already be present in the Label Config attached to the Labeler that is supposed to label this object.
---
</details>
<details>
<summary><strong>Q: Is it possible to label only parts of an object or assign different labels to different parts of objects?</strong></summary><br>
Labeling works on the GameObject level, so to achieve the scenarios described here, you will need to break down your main object into multiple GameObjects parented to the same root object, and add `Labeling` components to each of the inner objects, as shown below.
<p align="center">
<img src="images/inner_objects.png" width="800"/>
</p>
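If the labeled parts are created or configured at runtime, you can also attach and refresh `Labeling` components on them from script. Below is a minimal sketch, where the child objects named "handle" and "lid" are hypothetical:
```C#
// "handle" and "lid" are hypothetical child parts of this GameObject.
foreach (var partName in new[] { "handle", "lid" })
{
    var part = transform.Find(partName).gameObject;
    var labeling = part.AddComponent<Labeling>();
    labeling.labels.Add(partName);
    labeling.RefreshLabeling();
}
```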
Alternatively, in cases where parts of the surface of the object need to be labeled (e.g. decals on objects), you can add labeled invisible surfaces on top of these sections. These invisible surfaces need to have a fully transparent material. To create an invisible material:
* Create a new material (***Assets -> Create -> Material***) and name it `TransparentMaterial`
* Set the **Surface Type** for the material to **Transparent**, and set the alpha channel of the **Base Map** color to 0.
* For HDRP: In addition to the above, disable **Preserve specular lighting**.
An example labeled output for an object with separate labels on inner objects and decals is shown below:
<p align="center">
<img src="images/inner_labels.gif" width="600"/>
</p>
---
</details>
<details>
<summary><strong>Q: When visible surfaces of two objects are fully aligned, the bounding boxes seem to blink in and out of existence from one frame to another. Why is that?</strong></summary><br>
This is due to a common graphics problem called *z-fighting*. It occurs when the renderer cannot decide which of two surfaces to draw on top of the other, because both are at exactly the same distance from the camera. To fix this, simply move one of the objects slightly so that the two problematic surfaces no longer fully align.
---
</details>
## <a name="randomization">Randomization</a>
<details>
<summary><strong>Q: How can I have multiple sets of prefabs in a foreground placement Randomizer, and on every Iteration select one from each set?</strong>
</summary><br>
This question is an example of more complex functionality that can be achieved by applying slight modifications to the provided sample Randomizers, or by creating completely custom ones by extending the `Randomizer` class.
Here, we have a variety of options toward achieving the described outcome. One simple method could be to add several more `GameObjectParameter` fields inside of the provided sample `ForegroundObjectPlacementRandomizer`. Each of these Parameters could hold one of our object lists. Then, on each iteration, we would fetch one prefab from each of the lists using the `Sample()` function of each Parameter.
The above solution can work but it is not modular enough, with the lists of prefabs not being reusable in other Randomizers.
A better approach can be to define each prefab list separately as a scriptable object asset, and then just reference those scriptable objects inside of our foreground Randomizer. To do this, we first define a `PrefabCluster` class to hold a list of prefabs.
```C#
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
[CreateAssetMenu(fileName="NewPrefabCluster", menuName="Test/PrefabCluster")]
public class PrefabCluster : ScriptableObject
{
    public GameObjectParameter clusterPrefabs;
}
```
We can now create a cluster asset using the ***Assets -> Create -> Test -> PrefabCluster*** menu option and populate its list of prefabs. Each cluster contains one `GameObjectParameter`, which will hold the list of prefabs and provide us with a `Sample()` function.
To be able to edit these clusters with the same editor UI available for Randomizers, you will also need to add an empty custom editor for the `PrefabCluster` class that extends our bespoke `ParameterUIElementsEditor` class:
```C#
using UnityEditor;
using UnityEditor.Perception.Randomization;
[CustomEditor(typeof(PrefabCluster))]
public class PrefabClusterEditor : ParameterUIElementsEditor { }
```
Note that any editor scripts must be placed inside a folder named "Editor" within your project. "Editor" is a special folder name in Unity that prevents editor code from compiling into a player during the build process. For example, the file path for the `PrefabClusterEditor` script above could be `.../Assets/Scripts/Editor/PrefabClusterEditor`.
The ***Inspector*** view of a prefab cluster asset is shown below:
<p align="center">
<img src="images/prefab_cluster.png" width="400"/>
</p>
Now all that is left is to use our prefab clusters inside a Randomizer. Here is some sample code:
```C#
using System;
using UnityEngine;
[Serializable]
[UnityEngine.Perception.Randomization.Randomizers.AddRandomizerMenu("My Randomizers/Cluster Randomizer")]
public class ClusterRandomizer : UnityEngine.Perception.Randomization.Randomizers.Randomizer
{
    public PrefabCluster[] clusters;

    protected override void OnIterationStart()
    {
        // Select a random prefab from each cluster.
        foreach (var cluster in clusters)
        {
            var prefab = cluster.clusterPrefabs.Sample();
            // Do things with this prefab, e.g. create instances of it.
        }
    }
}
```
This Randomizer takes a list of `PrefabCluster` assets; on each Iteration, it goes through all the provided clusters and samples one prefab from each. The ***Inspector*** view for this Randomizer looks like this:
<p align="center">
<img src="images/cluster_randomizer.png" width="400"/>
</p>
---
</details>
<details>
<summary><strong>Q: How can I specify an exact number of objects to place using the sample foreground object placement Randomizer?</strong> </summary><br>
The provided `ForegroundObjectPlacementRandomizer` uses Poisson Disk sampling to find randomly positioned points in the space denoted by the provided `Width` and `Height` values. The lower bound on the distance between the sampled points will be `Separation Distance`. The number of sampled points will be the maximum number of points in the given area that match these criteria.
Thus, to limit the number of spawned objects, you can simply introduce a hard limit in the `for` loop that iterates over the Poisson Disk samples, to break out of the loop if the limit is reached. Additionally, we will need to shuffle the list of points retrieved from the Poisson Disk sampling in order to remove any selection bias when building our subset of points. This is because Poisson Disk points are sampled in sequence and relative to the points already sampled, therefore, the initial points in the list are likely to be closer to each other. We will use a `FloatParameter` to perform this shuffle, so that we can guarantee that our simulation is deterministic and reproducible.
```C#
FloatParameter m_IndexShuffleParameter = new FloatParameter { value = new UniformSampler(0, 1) };

protected override void OnIterationStart()
{
    var seed = SamplerState.NextRandomState();
    var placementSamples = PoissonDiskSampling.GenerateSamples(
        placementArea.x, placementArea.y, separationDistance, seed);
    var offset = new Vector3(placementArea.x, placementArea.y, 0f) * -0.5f;
    // Shuffle the retrieved points to remove the selection bias inherent in the order of Poisson Disk samples
    var indexes = Enumerable.Range(0, placementSamples.Length).ToList();
    indexes = indexes.OrderBy(item => m_IndexShuffleParameter.Sample()).ToList();
    // Maximum number of objects to place
    var limit = 50;
    var instantiatedCount = 0;
    // Iterate over the shuffled points until the limit is reached
    foreach (var index in indexes)
    {
        var instance = m_GameObjectOneWayCache.GetOrInstantiate(prefabs.Sample());
        instance.transform.position = new Vector3(placementSamples[index].x, placementSamples[index].y, depth) + offset;
        instantiatedCount++;
        if (instantiatedCount == limit)
            break;
    }
    placementSamples.Dispose();
}
```
This will guarantee an upper limit of 50 on the number of objects. To have exactly 50 objects, we need to make sure the `Separation Distance` is small enough for the given area, so that there are always at least 50 point samples found. Experiment with different values for the distance until you find one that produces the minimum number of points required.
---
</details>
<details>
<summary><strong>Q: How can I avoid object overlap with the sample foreground object placement Randomizer?</strong></summary><br>
There are a number of ways for procedurally placing objects while avoiding any overlap between them, and most of these methods can be rather complex and need to place objects in a sequence. All the modifications to the objects (like scale, rotation, etc.) would also need to happen before the next object is placed, so that the state of the world is fully known before each placement.
Here, we introduce a fairly simple modification to the sample foreground placement code provided with the package. In each Iteration, a random scale factor is chosen, and a suitable separation distance is then calculated from this scale factor and the list of given prefabs. We also scale the objects inside this Randomizer, both to introduce additional randomization and because scaling the objects after placement would invalidate the calculated separation distance and could reintroduce overlap.
Based on the value given for `Non Overlap Guarantee`, this Randomizer can either reduce the amount of overlap or completely remove overlap.
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Randomizers.Utilities;
using UnityEngine.Perception.Randomization.Samplers;
[Serializable]
[AddRandomizerMenu("Example/No Overlap Foreground Object Placement Randomizer")]
public class NoOverlapForegroundObjectPlacementRandomizer : Randomizer
{
    public float depth;
    [Tooltip("Range of scales used for objects. All objects in each frame will use the same scale.")]
    public FloatParameter scaleParameter = new FloatParameter { value = new UniformSampler(4, 8) };
    public Vector2 placementArea;
    public GameObjectParameter prefabs;
    [Tooltip("Degree to which we can guarantee that no objects will overlap. Use 1 for no overlap and smaller values (down to 0) for denser placement with a possibility of some overlap.")]
    public float nonOverlapGuarantee = 1;

    float m_ScaleFactor = 1f;
    GameObject m_Container;
    GameObjectOneWayCache m_GameObjectOneWayCache;
    Dictionary<GameObject, float> m_GameObjectBoundsSizeCache;
    List<GameObject> m_SelectedPrefabs;
    int m_SelectionPoolSizePerFrame = 1;
    FloatParameter m_IndexSelector = new FloatParameter { value = new UniformSampler(0, 1) };

    protected override void OnAwake()
    {
        m_Container = new GameObject("Foreground Objects");
        m_Container.transform.parent = scenario.transform;
        m_GameObjectOneWayCache = new GameObjectOneWayCache(
            m_Container.transform, prefabs.categories.Select(element => element.Item1).ToArray());
        m_GameObjectBoundsSizeCache = new Dictionary<GameObject, float>();
        m_SelectedPrefabs = new List<GameObject>();

        // Calculate the average bounds size for the prefabs included in this categorical parameter
        var averageBoundsSize = CalculateAverageBoundsSize();

        // Calculate the average scale based on the given scale range
        var averageScale = 1f;
        var sampler = scaleParameter.value as UniformSampler;
        if (sampler != null)
        {
            averageScale = (sampler.range.minimum + sampler.range.maximum) / 2;
        }

        // Use the average bounds size and average scale to estimate the maximum number of objects that can be placed without having them overlap.
        // This is a heuristic to help us start the placement process. The actual number of items placed will usually be much smaller.
        m_SelectionPoolSizePerFrame = (int)(placementArea.x * placementArea.y / (averageBoundsSize * averageScale));
    }

    protected override void OnIterationStart()
    {
        m_ScaleFactor = scaleParameter.Sample();
        m_SelectedPrefabs.Clear();

        // Select a random number of prefabs for this frame. Placement calculations will be done based on this subset.
        for (var i = 0; i < m_SelectionPoolSizePerFrame; i++)
        {
            var randIndex = (int)Mathf.Round((m_IndexSelector.Sample() * prefabs.categories.Count) - 0.5f);
            m_SelectedPrefabs.Add(prefabs.categories[randIndex].Item1);
        }

        // Calculate the minimum separation distance needed for the selected prefabs to not overlap.
        var separationDistance = CalculateMaxSeparationDistance(m_SelectedPrefabs);
        var seed = SamplerState.NextRandomState();
        var placementSamples = PoissonDiskSampling.GenerateSamples(
            placementArea.x, placementArea.y, separationDistance, seed);
        var offset = new Vector3(placementArea.x, placementArea.y, 0f) * -0.5f;
        foreach (var sample in placementSamples)
        {
            // Pick a random prefab from the selected subset and instantiate it.
            var randIndex = (int)Mathf.Round((m_IndexSelector.Sample() * m_SelectedPrefabs.Count) - 0.5f);
            var instance = m_GameObjectOneWayCache.GetOrInstantiate(m_SelectedPrefabs[randIndex]);
            instance.transform.position = new Vector3(sample.x, sample.y, depth) + offset;
            instance.transform.localScale = Vector3.one * m_ScaleFactor;
        }
        placementSamples.Dispose();
    }

    protected override void OnIterationEnd()
    {
        m_GameObjectOneWayCache.ResetAllObjects();
    }

    /// <summary>
    /// Calculates the separation distance needed between placed objects to be sure that no two objects will overlap
    /// </summary>
    /// <returns>The max separation distance</returns>
    float CalculateMaxSeparationDistance(ICollection<GameObject> categories)
    {
        var maxBoundsSize = m_GameObjectBoundsSizeCache.Where(item => categories.Contains(item.Key)).Max(pair => pair.Value);
        return maxBoundsSize * m_ScaleFactor * nonOverlapGuarantee;
    }

    float CalculateAverageBoundsSize()
    {
        foreach (var category in prefabs.categories)
        {
            var prefab = category.Item1;
            prefab.transform.localScale = Vector3.one;
            var renderers = prefab.GetComponentsInChildren<Renderer>();
            var totalBounds = new Bounds();
            foreach (var renderer in renderers)
            {
                totalBounds.Encapsulate(renderer.bounds);
            }
            var boundsSize = totalBounds.size.magnitude;
            m_GameObjectBoundsSizeCache.Add(prefab, boundsSize);
        }
        return m_GameObjectBoundsSizeCache.Values.Average();
    }
}
```
---
</details>
<details>
<summary><strong>Q: What if I don't want randomized object placement? Can I move my objects in a non-randomized deterministic manner on each frame?</strong> </summary><br>
Even though we call them Randomizers, you can use a Randomizer to perform any task throughout the execution lifecycle of your Scenario. The power of Randomizers comes from the lifecycle hooks they have into the Iteration and the Scenario, making it easy to know and guarantee when, and in which order, each piece of your simulation code runs. These functions include:
* `OnEnable`
* `OnAwake`
* `OnUpdate`
* `OnIterationStart`
* `OnIterationEnd`
* `OnScenarioStart`
* `OnScenarioComplete`
* `OnDisable`
So, in order to have deliberate non-random object movement, you will just need to put your object movement code inside of one of the recurrent lifecycle functions. `OnUpdate()` runs on every frame of the simulation, and `OnIterationStart()` runs every Iteration (which can be the same as each frame if you have only 1 frame per Iteration of your Scenario). For example, the code below moves all objects tagged with the component `ForwardMoverTag` along their forward axis by 1 unit, on every Iteration.
```C#
protected override void OnIterationStart()
{
    var tags = tagManager.Query<ForwardMoverTag>();
    foreach (var tag in tags)
    {
        tag.transform.Translate(Vector3.forward);
    }
}
```
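`ForwardMoverTag` itself can simply be an empty `RandomizerTag` component whose only job is to mark objects for the query above; a minimal sketch:
```C#
using UnityEngine.Perception.Randomization.Randomizers;

// An empty tag component used purely to mark objects for tagManager queries.
public class ForwardMoverTag : RandomizerTag { }
```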
Additionally, keep in mind that you can use Perception Samplers (and therefore Parameters) to generate constant values, not just random ones. The `ConstantSampler` class provides this functionality.
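For example, a Parameter that always produces the same value could look like this (a sketch, assuming the `ConstantSampler` constructor takes the constant value):
```C#
// A FloatParameter that always returns 5 — useful for deliberate, non-random behavior.
var fixedDistance = new FloatParameter { value = new ConstantSampler(5f) };
```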
---
</details>
<details>
<summary><strong>Q: The objects instantiated using the sample foreground placement Randomizer are floating in the air. How can I use this Randomizer to place objects on a horizontal surface instead?</strong> </summary><br>
The objects instantiated by the sample foreground Randomizer are all parented to an object named `Foreground Objects` at the root of the Scene Hierarchy. To modify the orientation of the objects, you can simply rotate this parent object at the beginning of the Scenario.
Alternatively, you could also place `Foreground Objects` inside another GameObject in the Scene using the `Transform.SetParent()` method, and then modify the local position and rotation of `Foreground Objects` in such a way that makes the objects appear on the surface of the parent GameObject.
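As an illustration, here is a hedged sketch of this second approach, where the "Table" object name is hypothetical:
```C#
// Parent the container to a surface object and lay the placement plane flat on top of it.
var foregroundObjects = GameObject.Find("Foreground Objects");
foregroundObjects.transform.SetParent(GameObject.Find("Table").transform, false);
foregroundObjects.transform.localRotation = Quaternion.Euler(90f, 0f, 0f);
foregroundObjects.transform.localPosition = Vector3.zero;
```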
In addition, if you'd like to have horizontal placement without touching the parent object, you can modify the Randomizer's code to place objects horizontally instead of vertically. The lines responsible for placement are:
```C#
var offset = new Vector3(placementArea.x, placementArea.y, 0) * -0.5f;
```
```C#
instance.transform.position = new Vector3(sample.x, sample.y, depth) + offset;
```
The first line builds an offset vector that is later used to center the points retrieved from Poisson Disk sampling around the center of the coordinate system. The second line is executed in a loop, and each time, places one object at one of the sampled points at the depth (along Z axis) provided by the user.
To make this placement horizontal, you would just need to change these two lines to swap Y for Z. The resulting lines would be:
```C#
var offset = new Vector3(placementArea.x, 0, placementArea.y) * -0.5f;
```
```C#
instance.transform.position = new Vector3(sample.x, depth, sample.y) + offset;
```
Note that the variable `depth` now effectively acts as a height value.
Finally, to achieve more natural placement, you could also use Unity's physics engine to drop the objects on a surface, let them settle, and then capture an image. To achieve this, you would just need to have sufficient frames in each Iteration of the Scenario (instead of the default 1 frame per iteration), and set your Perception Camera's capture interval to a large enough number that would make it capture each Iteration once after the objects have settled. This example is explained in more detail in the [Perception Camera](#perception-camera) section of this FAQ.
---
</details>
<details>
<summary><strong>Q: Does using the same Random Seed value in two runs of the same Scenario guarantee that the generated datasets are identical?</strong></summary><br>
If you only use the Samplers (and Parameters, which internally use Samplers) provided in the Perception package to generate random values throughout the Scenario's lifecycle and keep the `Random Seed` value unchanged, an identical sequence of random numbers will be generated every time the Scenario is run. This is because the Samplers obtain their seeds through continually mutating the provided global `Random Seed` in the Scenario.
Keep in mind that any change in the order of sampling or the number of samples obtained can lead to different outcomes. For example, if you change the order of Randomizers in the Scenario, the Samplers inside of these Randomizers will now execute in the new order, and thus, they will operate based on different seeds than before and generate different numbers. The same can happen if you add additional calls to a Sampler inside a Randomizer, causing the Samplers in later Randomizers to now use different seeds, since the global seed has been mutated more times than before.
One more thing to keep in mind is that certain systems and components of Unity are not deterministic and can produce different outcomes in consecutive runs. Examples of this are the physics engine and the film grain post processing. Hence, if you need to guarantee that your simulation always produces the exact same dataset, make sure to research the various systems that you use to make sure they behave deterministically.
---
</details>
## <a name="perception-camera">Perception Camera</a>
<details>
<summary><strong>Q: What is the relationship between the Scenario's lifecycle properties (Iterations and Frames per Iteration) and the Perception Camera's timing properties (Simulation Delta Time, Start at Frame, and Frames Between Captures)?</strong> </summary><br>
Each Iteration of the Scenario resets the Perception Camera's timing variables. Thus, you can think of each Iteration of the Scenario as one separate Perception Camera sequence, in which the camera's internal timing properties come into play. For instance, if you have 10 `Frames Per Iteration` on your Scenario, and your Perception Camera's `Start at Frame` value is set to 8, you will get two captures from the camera at the 9th and 10th frames of each Iteration (note that `Start at Frame` starts from 0). Similarly, you can use the `Frames Between Captures` to introduce intervals between captures. A value of 0 leads to all frames being captured.
---
</details>
<details>
<summary><strong>Q: I want to simulate physics (or other accumulative behaviors such as auto-exposure) for a number of frames and let things settle before capturing the Scene. Is this possible with the Perception package?</strong></summary><br>
Yes. The Perception Camera can be set to capture at specific frame intervals, rather than every frame. The `Frames Between Captures` value is set to 0 by default, which causes the camera to capture all frames; however, you can change this to 1 to capture every other frame, or larger numbers to allow more time between captures. You can also have the camera start capturing at a certain frame rather than the first frame, by setting the `Start at Frame` value to a value other than 0. All of this timing happens within each Iteration of the Scenario, and gets reset when you advance to the next Iteration. Therefore, the combination of these properties and the Scenario's `Frames Per Iteration` property allows you to randomize the state of your Scene at the start of each Iteration, let things run for a number of frames, and then capture the Scene at the end of the Iteration.
Suppose we need to drop a few objects into the Scene, let them interact physically and settle after a number of frames, and then capture their final state once. Afterwards, we want to repeat this cycle by randomizing the initial positions of the objects, dropping them, and capturing the final state again. We will set the Scenario's `Frames Per Iteration` to 300, which should be sufficient for the objects to get close to a settled position (this depends on the `Simulation Delta Time` of the Perception Camera and the physical properties of the engine and objects, and can be found through experimentation). We also set the `Start at Frame` value of the Perception Camera to 290, and `Frames Between Captures` to a sufficiently large number (like 100), so that we only get one capture per Iteration of the Scenario. The result looks like the animation below; note how the bounding boxes only appear once the objects have fairly settled, since those are the frames at which captures happen.
<p align="center">
<img src="images/object_drop.gif" width="700"/>
</p>
---
</details>
<details>
<summary><strong>Q: I do not want the Perception Camera to control the delta time of my simulation or capture on a scheduled basis. Can I have a Perception Camera in my Scene that does not affect timings and trigger captures manually from other scripts?</strong></summary><br>
Yes. The Perception Camera offers two trigger modes, `Scheduled` and `Manual`, and these can be chosen in the editor UI for the Perception Camera component. If you select the `Manual` mode, you will be able to trigger captures by calling the `RequestCapture()` method of `PerceptionCamera`. In this mode, you have the option to not have this camera dictate your simulation delta time. This is controlled using the `Affect Simulation Timing` checkbox.
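A minimal sketch of manual triggering from another script, assuming it lives on the same GameObject as the Perception Camera:
```C#
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class CaptureOnKeyPress : MonoBehaviour
{
    PerceptionCamera m_PerceptionCamera;

    void Start()
    {
        m_PerceptionCamera = GetComponent<PerceptionCamera>();
    }

    void Update()
    {
        // Request a capture whenever the space key is pressed.
        if (Input.GetKeyDown(KeyCode.Space))
            m_PerceptionCamera.RequestCapture();
    }
}
```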
---
</details>
<details>
<summary><strong>Q: Can I have multiple Perception Cameras active in my Scene simultaneously?</strong></summary><br>
No, at this time the Perception package only supports one active Perception Camera. This is something that is on our roadmap and we hope to support soon.
However, the package does support having more than one Perception Camera in the Scene, as long as only one is active when the simulation starts. Therefore, one possible workaround, if your simulation is fully deterministic from one run to the next, would be to run the simulation more than once, each time with one of the cameras active. While not ideal, this will at least let you generate matching datasets.
---
</details>
<details>
<summary><strong>Q: My RGB images look darker than what I see in the Unity Editor when I render the Perception Camera to a texture. How can I fix this?</strong>
</summary><br>
This issue is caused by the color format of the render texture. In its ***Inspector*** view, set **Color Format** to `R8G8B8A8_SRGB`.
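If you create the render texture from script rather than as an asset, a sketch (assuming the `GraphicsFormat` API from `UnityEngine.Experimental.Rendering`):
```C#
// Assumes: using UnityEngine.Experimental.Rendering;
// An sRGB color format keeps the rendered output from looking darker than the Editor view.
var renderTexture = new RenderTexture(1280, 720, 24);
renderTexture.graphicsFormat = GraphicsFormat.R8G8B8A8_SRGB;
renderTexture.Create();
```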
---
</details>
<details>
<summary><strong>Q: How do I report additional custom information in my output dataset for each frame or the whole simulation (e.g. 3D position of objects at the start of each Iteration, intensity of lights, etc.)?</strong>
</summary><br>
This can be done by adding custom annotations to your dataset. Have a look at [this](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/DatasetCapture.md) page for an explanation, as well as an example for how to do this.
---
</details>
## <a name="miscellaneous">Miscellaneous</a>
<details>
<summary><strong>Q: Objects in my captured images have jagged edges, how can I fix this?</strong>
</summary><br>
This is a common issue with rendering graphics into raster images (digital images), when the resolution of the raster is not high enough to perfectly display slanting lines. The common solution to this issue is the use of anti-aliasing methods, and Unity offers a number of these in both URP and HDRP. To experiment with anti-aliasing, go to the ***Inspector*** view of your Perception Camera object and in the Camera component, change `Anti-aliasing` from `None` to another option.
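If you prefer setting this from script, here is a hedged sketch for URP (HDRP exposes an equivalent setting on its `HDAdditionalCameraData` component; exact APIs may vary by pipeline version):
```C#
// Assumes: using UnityEngine.Rendering.Universal;
// Enable SMAA on the camera that the Perception Camera component is attached to.
var cameraData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
cameraData.antialiasing = AntialiasingMode.SubpixelMorphologicalAntiAliasing;
```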
---
</details>
<details>
<summary><strong>Q: I am using an HDRP Unity project with Perception and my images have significant blurring around most objects. How can I remove this blur?</strong>
</summary><br>
The effect you are observing here is motion blur, which happens because the placement Randomizers used in the Perception tutorial cache their instantiated objects from one Iteration to the next, and move them to new locations on each Iteration instead of destroying them and creating new ones. This "motion" of the objects causes the motion blur effect to kick in.
HDRP projects have motion blur and a number of other post processing effects enabled by default. To disable motion blur or any other effect, follow these steps:
1. Create an empty GameObject in your Scene and add a Volume component to it.
2. Set the Volume's profile to the **Volume Global** asset.
3. Uncheck the **Motion Blur** option.
<p align="center">
<img src="images/volume.png" width="500"/>
</p>
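If you prefer disabling the effect from script, a hedged sketch using HDRP's volume API (assuming a single global Volume in the Scene):
```C#
// Assumes: using UnityEngine.Rendering; using UnityEngine.Rendering.HighDefinition;
var volume = Object.FindObjectOfType<Volume>();
if (volume != null && volume.profile.TryGet<MotionBlur>(out var motionBlur))
{
    // Disable the Motion Blur override on the volume profile.
    motionBlur.active = false;
}
```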
---
</details>
<details>
<summary><strong>Q: Are all post processing effects provided by Unity safe to use in synthetic datasets?</strong>
</summary><br>
No. When using post processing effects, you need to be careful about issues regarding randomness and determinism:
* There are certain post processing effects that need randomization internally, e.g. film grain. The film grain effect provided by Unity is not sufficiently randomized for model training and can thus mislead your CV model into looking for a specific noise pattern during prediction.
* Even if such an effect is properly randomized, using a randomized effect would make your overall randomization strategy non-deterministic, meaning you would not be able to reproduce your datasets. This is because the effect would internally use random number generators outside of the Samplers provided by the Perception package. If you have access to the source code for a randomized effect, you can modify it to only use Perception Samplers for random number generation, which would make its behavior deterministic and thus appropriate for use in synthetic datasets that need to be reproducible.
To make sure you do not run into insufficient randomization or non-determinism, it would be best to implement effects such as film grain yourself or modify existing code to make sure no random number generators are used except for the Samplers provided in the Perception package.
---
</details>
<details>
<summary><strong>Q: What post processing effects can help improve model performance?</strong>
</summary><br>
Based on our experiments, randomizing contrast, saturation, lens blur, and lens distortion can help significantly improve the performance of your CV model. We recommend experimenting with these as well as other effects to determine those that work best for your use-case.
---
</details>
<details>
<summary><strong>Q: Can I debug my C# code?</strong>
</summary><br>
Unity projects can be debugged using external editors such as Visual Studio or JetBrains Rider. For local development and debugging, you will first need to clone the Perception repository to disk and add the Perception package from this cloned repository to your Unity project. Then, in the Unity Editor, go to ***Edit (or "Unity" on OSX) -> Preferences -> External Tools***. Select your preferred editor as the External Script Editor, and enable **Generate .csproj files** for at least **Embedded packages** and **Local packages**. This will allow you to quickly navigate through the code-base for the Perception package and internal Unity Editor packages.
All you need to do now is to double click any of the Perception package's C# script files from inside the Unity Editor's **Project** window. The files are located in `Assets/Perception`. Double clicking will open them in your external editor of choice, and you will be able to attach the debugger to Unity.
---
</details>
<details>
<summary><strong>Q: What kind of synthetic environment will be best for my use-case?</strong>
</summary><br>
It is difficult to say what type of synthetic environment would lead to the best model performance. It is best to carry out small and quick experiments with both random unstructured environments (such as the [SynthDet](https://github.com/Unity-Technologies/SynthDet) project) and more structured ones that may resemble real environments in which prediction will need to happen. This will help identify the types of environments and randomizations that work best for each specific use-case. The beauty of synthetic data is that you can try these experiments fairly quickly.
Here are a few blog posts to give you some ideas: [1](https://blog.unity.com/technology/synthetic-data-simulating-myriad-possibilities-to-train-robust-machine-learning-models), [2](https://blog.unity.com/technology/use-unitys-perception-tools-to-generate-and-analyze-synthetic-data-at-scale-to-train), [3](https://blog.unity.com/technology/training-a-performant-object-detection-ml-model-on-synthetic-data-using-unity-perception), [4](https://blog.unity.com/technology/supercharge-your-computer-vision-models-with-synthetic-datasets-built-by-unity), [5](https://blog.unity.com/technology/boosting-computer-vision-performance-with-synthetic-data).
---
</details>
<details>
<summary><strong>Q: Can I have more realistic rendering in my Scene?</strong>
</summary><br>
A project's lighting configuration typically has a greater influence on the final rendered output than any other simulation property. Unity has many lighting options, each of which is designed as a different trade-off between performance and realism/capability. The 3 most pertinent options that you will likely be interested in are:
* URP baked lighting: The Universal Render Pipeline offers the most performant lighting configurations by using an offline baking process to generate realistic bounce lighting within a static scene and then using simple shadow mapped dynamic lights in conjunction with light probes to make dynamic (or randomized) objects "fit" into the baked scene. This option provides high performance, but lacks the visual fidelity needed for interior environments and is geared toward more outdoor-like settings. Also, depending on scene randomization complexity, light baking might not be the best option. Randomly generated scenes will often place objects and adjust lighting in ways that make the new scene incompatible with the original baked lighting configuration.
* HDRP lighting: A default HDRP scene offers a step toward more realistic environments with a much larger array of lighting settings (soft shadows, multiple dynamic lights, etc.) and a host of additional real-time effects like camera exposure and screen space ambient occlusion. A warning though: real time screen space effects may make your scene "look better", but the way these effects are calculated is not based on how light works in the real world, so realism may vary. Another huge advantage of HDRP is the potential to have moderately realistic lighting without baking your lighting configuration (though you can integrate light baking if you want to). However, there is no real-time global illumination option in default HDRP, meaning that your scene will not simulate complex real world light behavior such as light bouncing, light bleeding, or realistic shadows for dynamic scenes. This can result in unrealistically dark scenes when only using directional lights and windows (without extra interior lights to brighten things up). Overall though, HDRP offers a good compromise between performance and realism for some use cases.
* HDRP DXR (DirectX Raytracing): Unity offers some preview ray tracing features in its latest editor versions that can be used to drastically improve the realism of your scene. Here are the pros and cons of DXR:
* Pros:
* Can simulate more realistic light behaviors (light bouncing, light color bleeding, and realistic shadows)
* No light baking required
* Cons:
* Requires special hardware to run (Nvidia RTX graphics cards)
* Time consuming to render (relative to default HDRP). Some lighting options (Global Illumination) are less expensive than others (Path Tracing).
* More complicated to configure
* These features are still in preview and subject to change
A visual comparison of the different lighting configurations in HDRP is shown below. The Scene includes one directional light and one dim point light on the ceiling.
Default HDRP:
<p align="center">
<img src="images/hdrp.png" width="700"/>
</p>
HDRP with Global Illumination (notice how much brighter the scene is with ray traced light bouncing):
<p align="center">
<img src="images/hdrp_rt_gi.png" width="700"/>
</p>
HDRP with Path Tracing (128 samples) (notice the red light bleeding from the cube onto the floor and the increased shadow quality):
<p align="center">
<img src="images/hdrp_pt_128_samples.png" width="700"/>
</p>
HDRP with Path Tracing (4096 samples) (more samples leads to less ray tracing noise but also a longer time to render):
<p align="center">
<img src="images/hdrp_pt_4096_samples.png" width="700"/>
</p>
---
</details>
<details>
<summary><strong>Q: I am randomizing my Scene every frame and using ray casting to detect the position of objects, but my ray casts are returning incorrect results. What is the issue here?</strong>
</summary><br>
The physics engine needs to catch up with the position and rotation of your objects and is typically a frame behind. When you randomize things every frame, the physics engine can never catch up. To fix this, call `Physics.SyncTransforms` just before calling any ray casting methods.
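For example, a minimal sketch inside a Randomizer, where `rayOrigin` is a hypothetical starting point:
```C#
protected override void OnIterationStart()
{
    // ... randomize object positions and rotations here ...

    // Flush pending Transform changes to the physics engine before ray casting.
    Physics.SyncTransforms();

    var rayOrigin = new Vector3(0f, 10f, 0f);
    if (Physics.Raycast(rayOrigin, Vector3.down, out var hit))
    {
        // hit.point now reflects the objects' up-to-date positions.
    }
}
```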
---
</details>
<details>
<summary><strong>Q: Where can I get humanoid models and animations?</strong>
</summary><br>
One useful resource for humanoid characters and animations is [Mixamo](https://www.mixamo.com/#/?page=1&type=Motion%2CMotionPack).
---
</details>
