
commit

/validation-tool
Wesley Mareovich Smith, 3 years ago
Current commit: aa96fb53
61 files changed, with 2709 insertions and 286 deletions
  1. .github/pull_request_template.md (7)
  2. .yamato/environments.yml (4)
  3. README.md (17)
  4. TestProjects/PerceptionHDRP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)
  5. TestProjects/PerceptionHDRP/Packages/packages-lock.json (4)
  6. TestProjects/PerceptionURP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)
  7. TestProjects/PerceptionURP/Packages/packages-lock.json (4)
  8. com.unity.perception/CHANGELOG.md (21)
  9. com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md (2)
  10. com.unity.perception/Documentation~/PerceptionCamera.md (98)
  11. com.unity.perception/Documentation~/Tutorial/Phase1.md (26)
  12. com.unity.perception/Documentation~/Tutorial/Phase3.md (14)
  13. com.unity.perception/Documentation~/Tutorial/TUTORIAL.md (2)
  14. com.unity.perception/Editor/GroundTruth/IdLabelConfigEditor.cs (2)
  15. com.unity.perception/Editor/GroundTruth/LabelConfigEditor.cs (7)
  16. com.unity.perception/Editor/GroundTruth/PerceptionCameraEditor.cs (9)
  17. com.unity.perception/Editor/GroundTruth/SemanticSegmentationLabelConfigEditor.cs (10)
  18. com.unity.perception/Editor/GroundTruth/Uxml/ColoredLabelElementInLabelConfig.uxml (2)
  19. com.unity.perception/Editor/GroundTruth/Uxml/LabelConfig_Main.uxml (14)
  20. com.unity.perception/Editor/Randomization/Editors/RunInUnitySimulationWindow.cs (15)
  21. com.unity.perception/Runtime/GroundTruth/InstanceIdToColorMapping.cs (45)
  22. com.unity.perception/Runtime/GroundTruth/Labelers/BoundingBox3DLabeler.cs (18)
  23. com.unity.perception/Runtime/GroundTruth/Labelers/KeypointLabeler.cs (146)
  24. com.unity.perception/Runtime/GroundTruth/Labelers/SemanticSegmentationLabeler.cs (13)
  25. com.unity.perception/Runtime/GroundTruth/Labeling/SemanticSegmentationLabelConfig.cs (6)
  26. com.unity.perception/Runtime/GroundTruth/PerceptionCamera.cs (5)
  27. com.unity.perception/Runtime/GroundTruth/RenderPasses/CrossPipelinePasses/SemanticSegmentationCrossPipelinePass.cs (2)
  28. com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/GroundTruthPass.cs (13)
  29. com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/InstanceSegmentationPass.cs (10)
  30. com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/LensDistortionPass.cs (10)
  31. com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/SemanticSegmentationPass.cs (10)
  32. com.unity.perception/Runtime/GroundTruth/SimulationState.cs (21)
  33. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/AnimationRandomizer.cs (3)
  34. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/BackgroundObjectPlacementRandomizer.cs (5)
  35. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/ColorRandomizer.cs (3)
  36. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/ForegroundObjectPlacementRandomizer.cs (4)
  37. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/HueOffsetRandomizer.cs (3)
  38. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/RotationRandomizer.cs (3)
  39. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/SunAngleRandomizer.cs (11)
  40. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/TextureRandomizer.cs (3)
  41. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Utilities/PoissonDiskSampling.cs (86)
  42. com.unity.perception/Runtime/Randomization/Randomizers/RandomizerTag.cs (2)
  43. com.unity.perception/Runtime/Randomization/Samplers/SamplerUtility.cs (2)
  44. com.unity.perception/Runtime/Unity.Perception.Runtime.asmdef (7)
  45. com.unity.perception/Tests/Editor/DatasetCaptureEditorTests.cs (16)
  46. com.unity.perception/Tests/Runtime/GroundTruthTests/BoundingBox3dTests.cs (263)
  47. com.unity.perception/Tests/Runtime/GroundTruthTests/DatasetCaptureSensorSchedulingTests.cs (33)
  48. com.unity.perception/Tests/Runtime/GroundTruthTests/InstanceIdToColorMappingTests.cs (51)
  49. com.unity.perception/Tests/Runtime/GroundTruthTests/KeypointGroundTruthTests.cs (299)
  50. com.unity.perception/Tests/Runtime/GroundTruthTests/SegmentationGroundTruthTests.cs (38)
  51. com.unity.perception/package.json (2)
  52. com.unity.perception/Documentation~/images/build_res.png (214)
  53. com.unity.perception/Documentation~/images/gameview_res.png (366)
  54. com.unity.perception/Documentation~/images/robotics_pose.png (739)
  55. com.unity.perception/Documentation~/images/unity-wide-whiteback.png (140)
  56. com.unity.perception/Editor/GroundTruth/JointLabelEditor.cs (22)
  57. com.unity.perception/Editor/GroundTruth/JointLabelEditor.cs.meta (3)
  58. com.unity.perception/Runtime/GroundTruth/Labelers/KeypointObjectFilter.cs (24)
  59. com.unity.perception/Runtime/GroundTruth/Labelers/KeypointObjectFilter.cs.meta (3)
  60. com.unity.perception/Documentation~/GroundTruth/KeypointLabeler.md (91)

.github/pull_request_template.md (7)


# Peer Review Information:
Information on any code, feature, documentation changes here
**Editor Version Target (i.e. 19.4, 20.1)**: 2019.4
**Editor Version Target**: 2019.4
## Dev Testing:
**Tests Added**:

**At Risk Areas**:
**Notes + Expectations**:
- [ ] - Test Rail Cases
- [ ] - Updated test rail

.yamato/environments.yml (4)


standalone-platform: StandaloneOSX
- name: ubuntu
type: Unity::VM
image: package-ci/ubuntu:latest
image: package-ci/ubuntu:stable
flavor: b1.large
performance_platforms:

standalone-platform: StandaloneOSX
- name: ubuntu
type: Unity::VM
image: package-ci/ubuntu:latest
image: package-ci/ubuntu:stable
flavor: b1.large
performance_suites:

README.md (17)


<img src="com.unity.perception/Documentation~/images/unity-wide.png" align="middle" width="3000"/>
<img src="com.unity.perception/Documentation~/images/unity-wide-whiteback.png" align="middle" width="3000"/>
<img src="com.unity.perception/Documentation~/images/banner2.PNG" align="middle"/>

<img src="https://img.shields.io/badge/unity-2019.4-green.svg?style=flat-square" alt="unity 2019.4">
<img src="https://img.shields.io/badge/unity-2020.2-green.svg?style=flat-square" alt="unity 2020.2">
> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.

The [Unity Simulation Smart Camera Example](https://github.com/Unity-Technologies/Unity-Simulation-Smart-Camera-Outdoor) illustrates how the Perception package could be used in a smart city or autonomous vehicle simulation. You can generate datasets locally or at scale in [Unity Simulation](https://unity.com/products/unity-simulation).
### Robotics Object Pose Estimation Demo
<img src="com.unity.perception/Documentation~/images/robotics_pose.png"/>
The [Robotics Object Pose Estimation Demo & Tutorial](https://github.com/Unity-Technologies/Robotics-Object-Pose-Estimation) demonstrates pick-and-place with a robot arm in Unity. It includes using ROS with Unity, importing URDF models, collecting labeled training data using the Perception package, and training and deploying a deep learning model.
## Local development
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP.

## License
* [License](com.unity.perception/LICENSE.md)
## Support
## Community and Feedback
For general questions or concerns please contact the Computer Vision team at computer-vision@unity3d.com.
For setup problems or discussions about leveraging the Perception package in your project, please create a new thread on the [Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/) and make sure to include as much detail as possible. If you run into any other problems with the Perception package or have a specific feature request, please submit a [GitHub issue](https://github.com/Unity-Technologies/com.unity.perception/issues).
For feedback, bugs, or other issues please file a GitHub issue and the Computer Vision team will investigate the issue as soon as possible.
For any other questions or feedback, connect directly with the Computer Vision team at [computer-vision@unity3d.com](mailto:computer-vision@unity3d.com).
## Citation
If you find this package useful, consider citing it using:

TestProjects/PerceptionHDRP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)


color: {r: 0, g: 1, b: 0.16973758, a: 1}
- label: Terrain
color: {r: 0.8207547, g: 0, b: 0.6646676, a: 1}
skyColor: {r: 0, g: 0, b: 0, a: 1}

TestProjects/PerceptionHDRP/Packages/packages-lock.json (4)


"com.unity.collections": "0.9.0-preview.6",
"com.unity.nuget.newtonsoft-json": "1.1.2",
"com.unity.render-pipelines.core": "7.1.6",
"com.unity.simulation.capture": "0.0.10-preview.19",
"com.unity.simulation.capture": "0.0.10-preview.20",
"com.unity.simulation.client": "0.0.10-preview.10",
"com.unity.simulation.core": "0.0.10-preview.22"
}

"url": "https://packages.unity.com"
},
"com.unity.simulation.capture": {
"version": "0.0.10-preview.19",
"version": "0.0.10-preview.20",
"depth": 1,
"source": "registry",
"dependencies": {

TestProjects/PerceptionURP/Assets/SemanticSegmentationLabelingConfiguration.asset (1)


color: {r: 0, g: 1, b: 0.16973758, a: 1}
- label: Terrain
color: {r: 0.8207547, g: 0, b: 0.6646676, a: 1}
skyColor: {r: 0, g: 0, b: 0, a: 1}

TestProjects/PerceptionURP/Packages/packages-lock.json (4)


"com.unity.collections": "0.9.0-preview.6",
"com.unity.nuget.newtonsoft-json": "1.1.2",
"com.unity.render-pipelines.core": "7.1.6",
"com.unity.simulation.capture": "0.0.10-preview.19",
"com.unity.simulation.capture": "0.0.10-preview.20",
"com.unity.simulation.client": "0.0.10-preview.10",
"com.unity.simulation.core": "0.0.10-preview.22"
}

"url": "https://packages.unity.com"
},
"com.unity.simulation.capture": {
"version": "0.0.10-preview.19",
"version": "0.0.10-preview.20",
"depth": 1,
"source": "registry",
"dependencies": {

com.unity.perception/CHANGELOG.md (21)


### Known Issues
### Added
Added support for 'step' button in editor.
Increased color variety in instance segmentation images
The PoissonDiskSampling utility now samples a larger region of points and then crops the result to the intended region, preventing edge-case bias.
### Deprecated

Fixed keypoint labeling bug when visualizations are disabled.
Fixed an issue where Simulation Delta Time values larger than 100 seconds (in Perception Camera) would cause incorrect capture scheduling behavior.
## [0.8.0-preview.3] - 2021-03-24
### Changed
Expanded documentation on the Keypoint Labeler
Updated Keypoint Labeler logic to only report keypoints for visible objects by default
Increased color variety in instance segmentation images
### Fixed
Fixed compiler warnings in projects with HDRP on 2020.1 and later
Fixed a bug in the Normal Sampler where it would return values less than the passed in minimum value, or greater than the passed in maximum value, for random values very close to 0 or 1 respectively.
## [0.8.0-preview.2] - 2021-03-15

com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md (2)


}
```
In the above annotation, all of the 18 joints defined in the COCO template we used are listed. For each joint that is present in our character, you can see the X and Y coordinates within the captured frame. However, you may notice three of the joints are listed with (0,0) coordinates. These joints are not present in our character. A fact that is also denoted by the `state` field. A state of **0** means the joint was not present, **1** denotes a joint that is present but not visible (to be implemented in a later version of the package), and **2** means the joint was present and visible.
In the above annotation, all 18 joints defined in the COCO template we used are listed. For each joint that is present in our character, you can see the X and Y coordinates within the captured frame. However, you may notice three of the joints are listed with (0,0) coordinates; these joints are not present in our character, a fact that is also denoted by the `state` field. A state of **0** means the joint either does not exist or is outside of the image's bounds, **1** denotes a joint that is inside of the image but cannot be seen because the part of the object it belongs to is not visible in the image, and **2** means the joint is present and visible.
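As a small illustration (the enum below is an assumed helper, not a type shipped with the package), downstream code could map those `state` values like this when post-processing the dataset:
```
// Illustrative mapping of the keypoint state values described above (assumed names).
enum KeypointState
{
    NotPresent = 0,        // joint does not exist or is outside the image bounds
    PresentNotVisible = 1, // inside the image but occluded by part of the object
    PresentVisible = 2     // present and visible in the frame
}
```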
You may also note that the `pose` field has a value of `unset`. This is because we have not defined poses for our animation clip and `Perception Camera` yet. We will do this next.

com.unity.perception/Documentation~/PerceptionCamera.md (98)


|--|--|
| Affect Simulation Timing | Have this camera affect simulation timings (similar to a scheduled camera) by requesting a specific frame delta time. Enabling this option will let you set the `Simulation Delta Time` property described above.|
## Output Resolution
When using Unity Editor to generate datasets, the resolution of the images generated by the Perception Camera will match the resolution set for the ***Game*** view of the editor. However, images generated with built players (including local builds and Unity Simulation runs) will use the resolution specified in project settings.
## Camera labelers
* To set the resolution of the ***Game*** view, click on the dropdown menu in front of `Display 1`. You can use one of the provided resolutions or create a new one. To create one, click **+**. Set `Type` to `Fixed Resolution` and `Width` and `Height` to your desired resolution.
<p align="center">
<img src="images/gameview_res.png" width="300"/>
<br><i>Creating a new resolution preset for the ***Game*** view</i>
</p>
* To set the resolution of the built player, open ***Edit -> Project Settings*** and navigate to the ***Player*** tab. In the ***Resolution and Presentation*** section, set ***Fullscreen Mode*** to ***Windowed*** and then set ***Default Screen Width*** and ***Default Screen Height*** to your desired resolution.
<p align="center">
<img src="images/build_res.png" width="700"/>
<br><i>Setting the resolution of the built player</i>
</p>
## Camera Labelers
Camera labelers capture data related to the Camera in the JSON dataset. You can use this data to train models and for dataset statistics. The Perception package provides several Camera labelers, and you can derive from the CameraLabeler class to define more labelers.
### Semantic Segmentation Labeler

The BoundingBox2DLabeler produces 2D bounding boxes for each visible object with a label you define in the IdLabelConfig. Unity calculates bounding boxes using the rendered image, so it only excludes occluded or out-of-frame portions of the objects.
### Bounding Box 3D Ground Truth Labeler
### Bounding Box 3D Labeler
The Bounding Box 3D Ground Truth Labeler produces 3D ground truth bounding boxes for each labeled game object in the scene. Unlike the 2D bounding boxes, 3D bounding boxes are calculated from the labeled meshes in the scene and all objects (independent of their occlusion state) are recorded.

```
_Example rendered object info for a single object_
The RenderedObjectInfoLabeler records a list of all objects visible in the Camera image, including its instance ID, resolved label ID and visible pixels. If Unity cannot resolve objects to a label in the IdLabelConfig, it does not record these objects.
### KeypointLabeler
The keypoint labeler captures keypoints of a labeled gameobject. The typical use of this labeler is capturing human pose estimation data. The labeler uses a [keypoint template](#KeypointTemplate) which defines the keypoints to capture for the model and the skeletal connections between those keypoints. The positions of the keypoints are recorded in pixel coordinates. Each keypoint has a state value: 0 - the keypoint either does not exist or is outside of the image's bounds, 1 - the keypoint exists and is inside of the image's bounds but cannot be seen because it is occluded by another object, and 2 - the keypoint exists and is visible.
```
keypoints {
label_id: <int> -- Integer identifier of the label
instance_id: <str> -- UUID of the instance.
template_guid: <str> -- UUID of the keypoint template
pose: <str> -- Pose ground truth information
keypoints [ -- Array of keypoint data, one entry for each keypoint defined in associated template file.
{
index: <int> -- Index of keypoint in template
x: <float> -- X pixel coordinate of keypoint
y: <float> -- Y pixel coordinate of keypoint
state: <int> -- 0: keypoint does not exist, 1: keypoint exists but is not visible, 2: keypoint exists and is visible
}, ...
]
}
```
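If you consume this output outside Unity's own tooling, a minimal sketch of C# types mirroring the schema above might look like the following. The field names follow the schema; the class names and the use of Newtonsoft.Json here are assumptions for illustration, not the package's own serialization code.
```
// Illustrative consumer-side types for the keypoint annotation schema shown above.
using System.Collections.Generic;
using Newtonsoft.Json;

class KeypointAnnotation
{
    public int label_id;
    public string instance_id;
    public string template_guid;
    public string pose;
    public List<KeypointValue> keypoints;
}

class KeypointValue
{
    public int index;
    public float x;
    public float y;
    public int state; // 0: does not exist, 1: exists but not visible, 2: exists and visible
}

// Hypothetical usage: var entries = JsonConvert.DeserializeObject<List<KeypointAnnotation>>(json);
```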
#### Keypoint Template
Keypoint templates are used to define the keypoints and skeletal connections captured by the KeypointLabeler. The keypoint
template takes advantage of Unity's humanoid animation rig, and allows the user to automatically associate template keypoints
to animation rig joints. Additionally, the user can choose to ignore the rigged points, or add points not defined in the rig.
A Coco keypoint template is included in the perception package.
The `RenderedObjectInfoLabeler` records a list of all objects visible in the camera image, including their instance IDs, resolved label IDs, and visible pixel counts. If Unity cannot resolve objects to a label in the `IdLabelConfig`, it does not record these objects.
##### Editor
### Keypoint Labeler
The keypoint template editor allows the user to create/modify a keypoint template. The editor consists of the header information,
the keypoint array, and the skeleton array.
The keypoint labeler captures the screen locations of specific points on labeled GameObjects. The typical use of this Labeler is capturing human pose estimation data, but it can be used to capture points on any kind of object. The Labeler uses a [Keypoint Template](#KeypointTemplate) which defines the keypoints to capture for the model and the skeletal connections between those keypoints. The positions of the keypoints are recorded in pixel coordinates.
![Header section of the keypoint template](images/keypoint_template_header.png)
<br/>_Header section of the keypoint template_
In the header section, a user can change the name of the template and supply textures that they would like to use for the keypoint
visualization.
![The keypoint section of the keypoint template](images/keypoint_template_keypoints.png)
<br/>_Keypoint section of the keypoint template_
The keypoint section allows the user to create/edit keypoints and associate them with Unity animation rig points. Each keypoint record
has 4 fields: label (the name of the keypoint), Associate to Rig (a boolean value which, if true, automatically maps the keypoint to
the gameobject defined by the rig), Rig Label (only needed if Associate To Rig is true, defines which rig component to associate with
the keypoint), and Color (RGB color value of the keypoint in the visualization).
![Skeleton section of the keypoint template](images/keypoint_template_skeleton.png)
<br/>_Skeleton section of the keypoint template_
The skeleton section allows the user to create connections between joints, basically defining the skeleton of a labeled object.
##### Format
```
annotation_definition.spec {
template_id: <str> -- The UUID of the template
template_name: <str> -- Human readable name of the template
key_points [ -- Array of joints defined in this template
{
label: <str> -- The label of the joint
index: <int> -- The index of the joint
}, ...
]
skeleton [ -- Array of skeletal connections (which joints have connections between one another) defined in this template
{
joint1: <int> -- The first joint of the connection
joint2: <int> -- The second joint of the connection
}, ...
]
}
```
#### Animation Pose Label
This file is used to map timestamps in an animation to a pose label.
For more information, see the [Keypoint Labeler](GroundTruth/KeypointLabeler.md) documentation or the [Human Pose Labeling and Randomization Tutorial](HPTutorial/TUTORIAL.md).
## Limitations

com.unity.perception/Documentation~/Tutorial/Phase1.md (26)


As seen in the UI for `Perception Camera`, the list of `Camera Labelers` is currently empty. For each type of ground-truth you wish to generate along-side your captured frames (e.g. 2D bounding boxes around objects), you will need to add a corresponding `Camera Labeler` to this list.
To speed-up your workflow, the Perception package comes with five common labelers for object-detection tasks; however, if you are comfortable with code, you can also add your own custom labelers. The labelers that come with the Perception package cover **3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), and semantic segmentation images (each object rendered in a unique colour)**. We will use four of these in this tutorial.
To speed up your workflow, the Perception package comes with seven common Labelers for object-detection and human keypoint labeling tasks; however, if you are comfortable with code, you can also add your own custom Labelers. The Labelers that come with the Perception package cover **keypoint labeling, 3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), instance segmentation, and semantic segmentation**. We will use four of these in this tutorial.
Once you add the labelers, the _**Inspector**_ view of the `Perception Camera` component will look like this:
Once you add the Labelers, the _**Inspector**_ view of the `Perception Camera` component will look like this:
<p align="center">
<img src="Images/pc_labelers_added.png" width="400"/>

One of the useful features that comes with the `Perception Camera` component is the ability to display real-time visualizations of the labelers when your simulation is running. For instance, `BoundingBox2DLabeler` can display two-dimensional bounding boxes around the foreground objects that it tracks in real-time and `SemanticSegmentationLabeler` displays the semantic segmentation image overlaid on top of the camera's view. To enable this feature, make sure the `Show Labeler Visualizations` checkmark is enabled.
One of the useful features that comes with the `Perception Camera` component is the ability to display real-time visualizations of the Labelers when your simulation is running. For instance, `BoundingBox2DLabeler` can display two-dimensional bounding boxes around the foreground objects that it tracks in real-time and `SemanticSegmentationLabeler` displays the semantic segmentation image overlaid on top of the camera's view. To enable this feature, make sure the `Show Labeler Visualizations` checkmark is enabled.
It is now time to tell each labeler added to the `Perception Camera` which objects it should label in the generated dataset. For instance, if your workflow is intended for generating frames and ground-truth for detecting chairs, your labelers would need to know that they should look for objects labeled "chair" within the scene. The chairs should in turn also be labeled "chair" in order to make them visible to the labelers. We will now learn how to set up these configurations.
It is now time to tell each Labeler added to the `Perception Camera` which objects it should label in the generated dataset. For instance, if your workflow is intended for generating frames and ground-truth for detecting chairs, your Labelers would need to know that they should look for objects labeled "chair" within the scene. The chairs should in turn also be labeled "chair" in order to make them visible to the Labelers. We will now learn how to set up these configurations.
You will notice each added labeler has a `Label Config` field. By adding a label configuration here you can instruct the labeler to look for certain labels within the scene and ignore the rest. To do that, we should first create label configurations.
You will notice each added Labeler has a `Label Config` field. By adding a label configuration here you can instruct the Labeler to look for certain labels within the scene and ignore the rest. To do that, we should first create label configurations.
* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_.

* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Semantic Segmentation Label Config**_. Name this asset `TutorialSemanticSegmentationLabelConfig`.
Now that you have created your label configurations, we need to assign them to labelers that you previously added to your `Perception Camera` component.
Now that you have created your label configurations, we need to assign them to Labelers that you previously added to your `Perception Camera` component.
* **:green_circle: Action**: Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three labelers. To do so, you can either drag and drop the former into the corresponding fields for each labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth labeler. The `Perception Camera` component will now look like the image below:
* **:green_circle: Action**: Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three Labelers. To do so, you can either drag and drop the former into the corresponding fields for each Labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth Labeler. The `Perception Camera` component will now look like the image below:
<p align="center">
<img src="Images/pclabelconfigsadded.png" width="400"/>

The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three of these are common Unity components, the fourth one is specific to the Perception package, and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI here provides a multitude of ways for you to assign labels to the object. You can either choose to have the asset automatically labeled (by enabling `Use Automatic Labeling`), or add labels manually. In case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path.
Note that each object can have multiple labels assigned, and thus appear as different objects to labelers with different label configurations. For instance, you may want your semantic segmentation labeler to detect all cream cartons as `dairy_product`, while your bounding box labeler still distinguishes between different types of dairy product. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
Note that each object can have multiple labels assigned, and thus appear as different objects to Labelers with different label configurations. For instance, you may want your semantic segmentation Labeler to detect all cream cartons as `dairy_product`, while your bounding box Labeler still distinguishes between different types of dairy product. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
For this tutorial, we have already prepared the foreground Prefabs for you and added the `Labeling` component to all of them. These Prefabs were based on 3D scans of the actual grocery items. If you are making your own Prefabs, you can easily add a `Labeling` component to them using the _**Add Component**_ button visible in the bottom right corner of the screenshot above.

<img src="Images/labelconfigs.png" width="800"/>
</p>
> :information_source: Since we used automatic labels here and added them to our configurations, we are confident that the labels in the configurations match the labels of our objects. In cases where you decide to add manual labels to objects and configurations, make sure you use the exact same labels, otherwise, the objects for which a matching label is not found in your configurations will not be detected by the labelers that are using those configurations.
> :information_source: Since we used automatic labels here and added them to our configurations, we are confident that the labels in the configurations match the labels of our objects. In cases where you decide to add manual labels to objects and configurations, make sure you use the exact same labels, otherwise, the objects for which a matching label is not found in your configurations will not be detected by the Labelers that are using those configurations.
Now that we have labeled all our foreground objects and set up our label configurations, let's briefly test things.

<img src="Images/first_run.png" width = "700"/>
</p>
In this view, you will also see the real-time visualizations we discussed before shown on top of the camera's view. In the top right corner of the window, you can see a visualization control panel, through which you can enable or disable visualizations for individual labelers. That said, we currently have no foreground objects in the Scene yet, so no bounding boxes or semantic segmentation overlays will be displayed.
In this view, you will also see the real-time visualizations we discussed before shown on top of the camera's view. In the top right corner of the window, you can see a visualization control panel, through which you can enable or disable visualizations for individual Labelers. That said, we currently have no foreground objects in the Scene yet, so no bounding boxes or semantic segmentation overlays will be displayed.
Note that disabling visualizations for a labeler does not affect your generated data. The annotations from all labelers that are active before running the simulation will continue to be recorded and will appear in the output data.
Note that disabling visualizations for a Labeler does not affect your generated data. The annotations from all Labelers that are active before running the simulation will continue to be recorded and will appear in the output data.
To generate data as fast as possible, the simulation utilizes asynchronous processing to churn through frames quickly, rearranging and randomizing the objects in each frame. To be able to check out individual frames and inspect the real-time visualizations, click on the pause button (next to play). You can also switch back to the Scene view to be able to inspect each object individually. For performance reasons, it is recommended to disable visualizations altogether (from the _**Inspector**_ view of `Perception Camera`) once you are ready to generate a large dataset.

- RGB images (raw camera output) (if the `Save Camera Output to Disk` check mark is enabled on `Perception Camera`)
- Semantic segmentation images (if the `SemanticSegmentationLabeler` is added and active on `Perception Camera`)
The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active Labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
* `label_id`: The numerical id assigned to this object's label in the labeler's label configuration
* `label_id`: The numerical id assigned to this object's label in the Labeler's label configuration
* `label_name`: The object's label, e.g. `candy_minipralines_lindt`
* `instance_id`: Unique instance id of the object
* `x` and `y`: Pixel coordinates of the top-left corner of the object's bounding box (measured from the top-left corner of the image)

com.unity.perception/Documentation~/Tutorial/Phase3.md (14)


The process of running a project on Unity Simulation involves building it for Linux and then uploading this build, along with a set of parameters, to Unity Simulation. The Perception package simplifies this process by including a dedicated _**Run in Unity Simulation**_ window that accepts a small number of required parameters and handles everything else automatically.
For performance reasons, it is best to disable real-time visualizations before carrying on with the Unity Simulation run.
* **:green_circle: Action**: From the _**Inspector**_ view of `Perception Camera`, disable real-time visualizations.
In order to make sure our builds are compatible with Unity Simulation, we need to set our project's scripting backend to _**Mono**_ rather than _**IL2CPP**_ (if not already set). We will also need to switch to _**Windowed**_ mode.

<img src="Images/runinusim.png" width="600"/>
</p>
Here, you can also specify a name for the run, the number of Iterations the Scenario will execute for, and the number of _**Instances**_ (number of nodes the work will be distributed across) for the run. This window automatically picks the currently active Scene and Scenario to run in Unity Simulation.
Here, you can specify a name for the run, the number of Iterations the Scenario will execute for, and the number of Instances (number of nodes the work will be distributed across) for the run. This window automatically picks the currently active Scene and Scenario to run in Unity Simulation.
* **:green_circle: Action**: Name your run `FirstRun`, set the number of Iterations to `1000`, and Instances to `20`.
* **:green_circle: Action**: Click _**Build and Run**_.

Your project will now be built and then uploaded to Unity Simulation and run. This may take a few minutes to complete, during which the editor may become frozen; this is normal behaviour.
* **:green_circle: Action**: Once the operation is complete, you can find the **Execution ID** of this Unity Simulation run in the **Console** tab and the ***Run in Unity Simulation** Window:
* **:green_circle: Action**: Once the operation is complete, you can find the **Execution ID** of this Unity Simulation run in the **Console** tab and the ***Run in Unity Simulation*** Window:
<p align="center">
<img src="Images/build_uploaded.png" width="600"/>

<!--Windows:
`USimCLI\windows\usim summarize run-execution <execution-id>`-->
Here is an example output of this command, indicating that there is only one node, and that the node is still in progress:
Here is an example output of this command, indicating that there are 20 nodes, and that they are all still in progress:
In Progress 1
In Progress 20
At this point, we will need to wait until the execution is complete. Check your run with the above command periodically until you see a 1 for `Successes` and 0 for `In Progress`.
At this point, we will need to wait until the execution is complete. Check your run with the above command periodically until you see a 20 for `Successes` and 0 for `In Progress`.
Given the relatively small size of our Scenario (1,000 Iterations), this should take less than 5 minutes.
* **:green_circle: Action**: Use the `usim summarize run-execution <execution-id>` command periodically to check the progress of your run.

`USimCLI/mac/usim download manifest <execution-id>`
The manifest is a `.csv` formatted file and will be downloaded to the same location from which you execute the above command, which is the `unity_simulation_bundle` folder.
This file does **not**** include actual data, rather, it includes links to the generated data, including the JSON files, the logs, the images, and so on.
This file does **not** include actual data, rather, it includes links to the generated data, including the JSON files, the logs, the images, and so on.
* **:green_circle: Action**: Open the manifest file to check it. Make sure there are links to various types of output and check a few of the links to see if they work.

com.unity.perception/Documentation~/Tutorial/TUTORIAL.md (2)


<img src="../images/unity-wide.png" align="middle" width="3000"/>
<img src="../images/unity-wide-whiteback.png" align="middle" width="3000"/>
# Perception Tutorial

com.unity.perception/Editor/GroundTruth/IdLabelConfigEditor.cs (2)


m_StartingIdEnumField.SetEnabled(AutoAssign);
m_SkyColorUi.style.display = DisplayStyle.None;
AutoAssignIdsIfNeeded();
m_MoveDownButton.clicked += MoveSelectedItemDown;
m_MoveUpButton.clicked += MoveSelectedItemUp;

com.unity.perception/Editor/GroundTruth/LabelConfigEditor.cs (7)


protected Button m_MoveDownButton;
protected VisualElement m_MoveButtons;
protected VisualElement m_IdSpecificUi;
protected VisualElement m_SkyColorUi;
protected ColorField m_SkyColorField;
protected Label m_SkyHexLabel;
public void OnEnable()

m_StartingIdEnumField = m_Root.Q<EnumField>("starting-id-dropdown");
m_AutoIdToggle = m_Root.Q<Toggle>("auto-id-toggle");
m_IdSpecificUi = m_Root.Q<VisualElement>("id-specific-ui");
m_SkyColorUi = m_Root.Q<VisualElement>("sky-color-ui");
m_SkyColorField = m_Root.Q<ColorField>("sky-color-value");
m_SkyHexLabel = m_Root.Q<Label>("sky-color-hex");
m_SaveButton.SetEnabled(false);

com.unity.perception/Editor/GroundTruth/PerceptionCameraEditor.cs (9)


}
const string k_FrametimeTitle = "Simulation Delta Time";
const float k_DeltaTimeTooLarge = 200;
public override void OnInspectorGUI()
{
using(new EditorGUI.DisabledScope(EditorApplication.isPlaying))

GUILayout.BeginVertical("TextArea");
EditorGUILayout.LabelField("Scheduled Capture Properties", EditorStyles.boldLabel);
EditorGUILayout.PropertyField(serializedObject.FindProperty(nameof(perceptionCamera.simulationDeltaTime)),new GUIContent(k_FrametimeTitle, $"Sets Unity's Time.{nameof(Time.captureDeltaTime)} to the specified number, causing a fixed number of frames to be simulated for each second of elapsed simulation time regardless of the capabilities of the underlying hardware. Thus, simulation time and real time will not be synchronized."));
EditorGUILayout.PropertyField(serializedObject.FindProperty(nameof(perceptionCamera.simulationDeltaTime)),new GUIContent(k_FrametimeTitle, $"Sets Unity's Time.{nameof(Time.captureDeltaTime)} to the specified number, causing a fixed number of frames to be simulated for each second of elapsed simulation time regardless of the capabilities of the underlying hardware. Thus, simulation time and real time will not be synchronized. Note that large {k_FrametimeTitle} values will lead to lower performance as the engine will need to simulate longer periods of elapsed time for each rendered frame."));
if (perceptionCamera.simulationDeltaTime > k_DeltaTimeTooLarge)
{
EditorGUILayout.HelpBox($"Large {k_FrametimeTitle} values can lead to significantly lower simulation performance.", MessageType.Warning);
}
var interval = (perceptionCamera.framesBetweenCaptures + 1) * perceptionCamera.simulationDeltaTime;
var startTime = perceptionCamera.simulationDeltaTime * perceptionCamera.firstCaptureFrame;
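As a side note, a minimal sketch of the capture-scheduling arithmetic implied by these two lines (illustrative values and helper name; not the package's scheduler) is:
```
// Illustrative only: captures fall on a fixed grid in simulation time,
// derived exactly as in the two lines above. Assumes a Unity context for Debug.Log.
using UnityEngine;

static class CaptureScheduleSketch
{
    public static void Print(float simulationDeltaTime, int framesBetweenCaptures, int firstCaptureFrame)
    {
        var interval = (framesBetweenCaptures + 1) * simulationDeltaTime;
        var startTime = firstCaptureFrame * simulationDeltaTime;
        for (var n = 0; n < 3; n++)
            Debug.Log($"capture {n} scheduled at t = {startTime + n * interval:F4} s");
    }
}
// e.g. CaptureScheduleSketch.Print(0.0166f, 2, 10);
```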

com.unity.perception/Editor/GroundTruth/SemanticSegmentationLabelConfigEditor.cs (10)


{
m_MoveButtons.style.display = DisplayStyle.None;
m_IdSpecificUi.style.display = DisplayStyle.None;
var skyColorProperty = serializedObject.FindProperty(nameof(SemanticSegmentationLabelConfig.skyColor));
m_SkyColorField.BindProperty(skyColorProperty);
m_SkyColorField.RegisterValueChangedCallback(e => UpdateSkyHexLabel(e.newValue));
UpdateSkyHexLabel(skyColorProperty.colorValue);
}
private void UpdateSkyHexLabel(Color colorValue)
{
m_SkyHexLabel.text = "#" + ColorUtility.ToHtmlStringRGBA(colorValue);
}
public override void PostRemoveOperations()

com.unity.perception/Editor/GroundTruth/Uxml/ColoredLabelElementInLabelConfig.uxml (2)


<UXML xmlns="UnityEngine.UIElements" xmlns:editor="UnityEditor.UIElements">
<VisualElement class="added-label" style="padding-top: 3px;">
<Button name="remove-button" class="labeling__remove-item-button"/>
<TextField name="label-value" class="labeling__added-label-value"/>
<TextField name="label-value" class="labeling__added-label-value"/>
<VisualElement style="min-width:20px; flex-direction: row; display:none">
<Button name="move-up-button" class="move-label-in-config-button move-up" style="margin-right:-2px"/>
<Button name="move-down-button" class="move-label-in-config-button move-down"/>

com.unity.perception/Editor/GroundTruth/Uxml/LabelConfig_Main.uxml (14)


<VisualElement class="outer-container" name="outer-container">
<Style src="../Uss/Styles.uss"/>
<VisualElement class="inner-container" name="id-specific-ui">
<Label text="Options" name="options-title" class="title-label"/>
</VisualElement>
</VisualElement>
<VisualElement class="inner-container" name="sky-color-ui">
<Label text="Options" name="options-title" class="title-label"/>
<VisualElement style="flex-direction: row; flex-grow: 1;">
<Label text="Sky Color" name="added-labels-title"
style="flex-grow: 1; font-size: 11; min-width : 60px; align-self:center;"/>
<VisualElement class="generic-hover"
style="min-width : 137px; flex-direction: row; padding: 3px 5px 3px 5px; margin-left: 3px; margin-right: 3px; border-width: 1px; border-color: #666666; border-radius: 4px;">
<editor:ColorField name="sky-color-value" style="min-width : 60px; max-width: 60px; align-self:center;"/>
<Label name="sky-color-hex"
style="font-size: 11; min-width : 60px; max-width: 60px; align-self:center; margin: 2px"/>
</VisualElement>
</VisualElement>
</VisualElement>
<VisualElement name="added-labels" class="inner-container" style="margin-top:5px">

com.unity.perception/Editor/Randomization/Editors/RunInUnitySimulationWindow.cs (15)


Label m_PrevExecutionIdLabel;
RunParameters m_RunParameters;
const string m_SupportedGPUString = "NVIDIA";
[MenuItem("Window/Run in Unity Simulation")]
static void ShowWindow()
{

async void RunInUnitySimulation()
{
#if PLATFORM_CLOUD_RENDERING
if (!m_SysParamDefinitions[m_SysParamIndex].description.Contains(m_SupportedGPUString))
{
EditorUtility.DisplayDialog("Unsupported Sysparam",
"The current selection of the Sysparam " + m_SysParamDefinitions[m_SysParamIndex].description +
" is not supported by this build target. Please select a sysparam with GPU", "Ok");
return;
}
#endif
m_RunParameters = new RunParameters
{
runName = m_RunNameField.value,

{
scenes = new[] { m_RunParameters.currentOpenScenePath },
locationPathName = Path.Combine(projectBuildDirectory, $"{m_RunParameters.runName}.x86_64"),
#if PLATFORM_CLOUD_RENDERING
target = BuildTarget.CloudRendering,
#else
#endif
};
var report = BuildPipeline.BuildPlayer(buildPlayerOptions);
var summary = report.summary;

com.unity.perception/Runtime/GroundTruth/InstanceIdToColorMapping.cs (45)


using System;
using System.Collections.Generic;
using Unity.Profiling;
namespace UnityEngine.Perception.GroundTruth
{

/// </summary>
public const uint maxId = uint.MaxValue - ((256 * 256 * 256) * 2) + k_HslCount;
static Dictionary<uint, uint> s_IdToColorCache;
static uint[] s_IdToColorCache;
const uint k_HslCount = 64;
const uint k_HslCount = 1024;
const int k_HuesInEachValue = 30;
const int k_HuesInEachValue = 64;
const uint k_Values = k_HslCount / k_HuesInEachValue;
static void InitializeMaps()
private static ProfilerMarker k_InitializeMapsMarker = new ProfilerMarker(nameof(InitializeMaps));
internal static void InitializeMaps()
using (k_InitializeMapsMarker.Auto())
{
s_IdToColorCache = new uint[k_HslCount + 1];
s_ColorToIdCache = new Dictionary<uint, uint>();
s_IdToColorCache = new Dictionary<uint, uint>();
s_ColorToIdCache = new Dictionary<uint, uint>();
s_IdToColorCache[0] = k_InvalidPackedColor;
s_ColorToIdCache[k_InvalidPackedColor] = 0;
s_IdToColorCache[0] = k_InvalidPackedColor;
s_IdToColorCache[k_InvalidPackedColor] = 0;
for (uint i = 1; i <= k_HslCount; i++)
{
var color = GenerateHSLValueForId(i);
s_IdToColorCache[i] = color;
s_ColorToIdCache[color] = i;
for (uint i = 1; i <= k_HslCount; i++)
{
var color = GenerateHSLValueForId(i);
s_IdToColorCache[i] = color;
s_ColorToIdCache.Add(color, i);
}
}
}

var ratio = count * k_GoldenRatio;
// assign hue based on golden ratio
var hueId = count % k_HuesInEachValue;
var ratio = hueId * k_GoldenRatio;
count /= k_HuesInEachValue;
ratio = count * k_GoldenRatio;
var value = 1 - (ratio - Mathf.Floor(ratio));
var valueId = count / k_HuesInEachValue;
// avoid value 0
var value = 1 - (float)valueId / (k_Values + 1);
var color = (Color32)Color.HSVToRGB(hue, 1f, value);
color.a = 255;
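Read together, the new lines appear to cycle ids through a fixed set of hues (spaced by the golden ratio) before stepping down the value channel. A standalone sketch of that idea, with the constants from this hunk and the exact hue formula assumed from the visible lines, is:
```
// Illustrative sketch of the id -> color idea in this hunk; constants and the
// hue formula are assumptions based on the visible lines, not package API.
using UnityEngine;

static class HslIdColorSketch
{
    const uint k_HslCount = 1024;
    const uint k_HuesInEachValue = 64;
    const uint k_Values = k_HslCount / k_HuesInEachValue; // 16 value bands
    const float k_GoldenRatio = 0.618034f;

    public static Color ColorForId(uint count)
    {
        var hueId = count % k_HuesInEachValue;              // position within the current value band
        var ratio = hueId * k_GoldenRatio;
        var hue = ratio - Mathf.Floor(ratio);               // golden-ratio spacing keeps neighboring ids far apart
        var valueId = count / k_HuesInEachValue;             // which value band this id falls in
        var value = 1f - (float)valueId / (k_Values + 1);    // step down brightness, never reaching 0
        return Color.HSVToRGB(hue, 1f, value);
    }
}
```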

com.unity.perception/Runtime/GroundTruth/Labelers/BoundingBox3DLabeler.cs (18)


// Need to convert all bounds into labeling mesh space...
foreach (var mesh in meshFilters)
{
if (!mesh.GetComponent<Renderer>().enabled)
continue;
var currentTransform = mesh.gameObject.transform;
// Grab the bounds of the game object from the mesh, although these bounds are axis-aligned,
// they are axis-aligned with respect to the current component's coordinate space. This, in theory

// Apply the transformations on this object until we reach the labeled transform
while (currentTransform != labelTransform)
{
transformedBounds.center = Vector3.Scale(transformedBounds.center, currentTransform.localScale);
transformedBounds.center = currentTransform.localRotation * transformedBounds.center;
transformedBounds.center += currentTransform.localPosition;
transformedBounds.extents = Vector3.Scale(transformedBounds.extents, currentTransform.localScale);
transformedRotation *= currentTransform.localRotation;

// Convert the combined bounds into world space
combinedBounds.center = labelTransform.TransformPoint(combinedBounds.center);
combinedBounds.extents = Vector3.Scale(combinedBounds.extents, labelTransform.localScale);
combinedBounds.extents = Vector3.Scale(combinedBounds.extents, labelTransform.lossyScale);
// Now convert all points into camera's space
var cameraCenter = cameraTransform.InverseTransformPoint(combinedBounds.center);
cameraCenter = Vector3.Scale(cameraTransform.localScale, cameraCenter);
// Now adjust the center and rotation to camera space. Camera space transforms never rescale objects
combinedBounds.center = combinedBounds.center - cameraTransform.position;
combinedBounds.center = Quaternion.Inverse(cameraTransform.rotation) * combinedBounds.center;
// Rotation to go from label space to camera space
var converted = ConvertToBoxData(labelEntry, labeledEntity.instanceId, cameraCenter, combinedBounds.extents, cameraRotation);
var converted = ConvertToBoxData(labelEntry, labeledEntity.instanceId, combinedBounds.center, combinedBounds.extents, cameraRotation);
m_BoundingBoxValues[m_CurrentFrame][labeledEntity.instanceId] = converted;
}

static Vector3 CalculateRotatedPoint(Camera cam, Vector3 start, Vector3 xDirection, Vector3 yDirection, Vector3 zDirection, float xScalar, float yScalar, float zScalar)
{
var rotatedPoint = start + xDirection * xScalar + yDirection * yScalar + zDirection * zScalar;
var worldPoint = cam.transform.TransformPoint(rotatedPoint);
var worldPoint = cam.transform.position + cam.transform.rotation * rotatedPoint;
return VisualizationHelper.ConvertToScreenSpace(cam, worldPoint);
}
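The common thread in both conversions above is that camera space is reached by translation and rotation only, never by scaling. A minimal standalone sketch of that conversion, with an assumed helper name, is:
```
// Illustrative helper (name assumed): move a world-space point into the camera's
// frame using only translation and rotation, as the hunk above does.
using UnityEngine;

static class CameraSpaceSketch
{
    public static Vector3 WorldToCameraNoScale(Transform cameraTransform, Vector3 worldPoint)
    {
        var offset = worldPoint - cameraTransform.position;           // translate into the camera origin
        return Quaternion.Inverse(cameraTransform.rotation) * offset; // undo the camera's rotation
    }
}
```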

com.unity.perception/Runtime/GroundTruth/Labelers/KeypointLabeler.cs (146)


using System.Collections.Generic;
using System.Linq;
using Unity.Collections;
using Unity.Mathematics;
using UnityEngine.Rendering;
namespace UnityEngine.Perception.GroundTruth

/// The <see cref="IdLabelConfig"/> which associates objects with labels.
/// </summary>
public IdLabelConfig idLabelConfig;
/// <summary>
/// Controls which objects will have keypoints recorded in the dataset.
/// <see cref="KeypointObjectFilter"/>
/// </summary>
public KeypointObjectFilter objectFilter;
// ReSharper restore MemberCanBePrivate.Global
AnnotationDefinition m_AnnotationDefinition;

List<KeypointEntry> m_ToReport;
List<KeypointEntry> m_KeypointEntriesToReport;
int m_CurrentFrame;

/// </summary>
public List<AnimationPoseConfig> animationPoseConfigs;
/// <inheritdoc/>
protected override void Setup()
{

m_KnownStatus = new Dictionary<uint, CachedData>();
m_AsyncAnnotations = new Dictionary<int, (AsyncAnnotation, Dictionary<uint, KeypointEntry>)>();
m_ToReport = new List<KeypointEntry>();
m_KeypointEntriesToReport = new List<KeypointEntry>();
perceptionCamera.RenderedObjectInfosCalculated += OnRenderedObjectInfoReadback;
}
bool AreEqual(Color32 lhs, Color32 rhs)

bool PixelsMatch(int x, int y, Color32 idColor, (int x, int y) dimensions, NativeArray<Color32> data)
{
var h = dimensions.y - y;
var h = dimensions.y - 1 - y;
var pixelColor = data[h * dimensions.x + x];
return AreEqual(pixelColor, idColor);
}

{
if (keypoint.state == 0) return 0;
var centerX = Mathf.RoundToInt(keypoint.x);
var centerY = Mathf.RoundToInt(keypoint.y);
var centerX = Mathf.FloorToInt(keypoint.x);
var centerY = Mathf.FloorToInt(keypoint.y);
if (!PixelOnScreen(centerX, centerY, dimensions))
return 0;
var pixelOnScreen = false;
var pixelMatched = false;
for (var y = centerY - s_PixelTolerance; y <= centerY + s_PixelTolerance; y++)
{

pixelOnScreen = true;
pixelMatched = true;
if (PixelsMatch(x, y, instanceIdColor, dimensions, data))
{
return 2;

return pixelOnScreen ? 1 : 0;
return pixelMatched ? 1 : 0;
}
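Piecing this hunk together, a simplified sketch of the visibility check could read as follows; the loop bounds, flag names, and the bottom-up row indexing are assumptions drawn from the visible lines, not the package's exact implementation.
```
// Illustrative reconstruction of the keypoint state check.
// Returns 0 if the keypoint is off screen, 2 if a nearby pixel carries the
// instance's segmentation color, otherwise 1 (on screen but occluded).
using Unity.Collections;
using UnityEngine;

static class KeypointStateSketch
{
    public static int Determine(float kx, float ky, Color32 idColor,
        int width, int height, NativeArray<Color32> segmentation, int tolerance)
    {
        var cx = Mathf.FloorToInt(kx);
        var cy = Mathf.FloorToInt(ky);
        if (cx < 0 || cy < 0 || cx >= width || cy >= height)
            return 0;

        var checkedAnyPixel = false;
        for (var y = cy - tolerance; y <= cy + tolerance; y++)
        for (var x = cx - tolerance; x <= cx + tolerance; x++)
        {
            if (x < 0 || y < 0 || x >= width || y >= height)
                continue;
            checkedAnyPixel = true;
            var row = height - 1 - y; // readback rows appear flipped vertically, hence the -1
            var c = segmentation[row * width + x];
            if (c.r == idColor.r && c.g == idColor.g && c.b == idColor.b && c.a == idColor.a)
                return 2;
        }
        return checkedAnyPixel ? 1 : 0;
    }
}
```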
void OnInstanceSegmentationImageReadback(int frameCount, NativeArray<Color32> data, RenderTexture renderTexture)

m_AsyncAnnotations.Remove(frameCount);
m_ToReport.Clear();
var shouldReport = false;
foreach (var keypoint in keypointSet.Value.keypoints)
for (var i = 0; i < keypointSet.Value.keypoints.Length; i++)
var keypoint = keypointSet.Value.keypoints[i];
keypoint.state = DetermineKeypointState(keypoint, idColor, dimensions, data);
if (keypoint.state == 0)

}
else
{
shouldReport = true;
keypoint.x = math.clamp(keypoint.x, 0, dimensions.width - .001f);
keypoint.y = math.clamp(keypoint.y, 0, dimensions.height - .001f);
keypointSet.Value.keypoints[i] = keypoint;
}
}
}
if (shouldReport)
m_ToReport.Add(keypointSet.Value);
private void OnRenderedObjectInfoReadback(int frameCount, NativeArray<RenderedObjectInfo> objectInfos)
{
if (!m_AsyncAnnotations.TryGetValue(frameCount, out var asyncAnnotation))
return;
m_AsyncAnnotations.Remove(frameCount);
m_KeypointEntriesToReport.Clear();
//filter out objects that are not visible
foreach (var keypointSet in asyncAnnotation.keypoints)
{
var entry = keypointSet.Value;
var include = false;
if (objectFilter == KeypointObjectFilter.All)
include = true;
else
{
foreach (var objectInfo in objectInfos)
{
if (entry.instance_id == objectInfo.instanceId)
{
include = true;
break;
KeypointsComputed?.Invoke(frameCount, m_ToReport);
asyncAnnotation.annotation.ReportValues(m_ToReport);
if (!include && objectFilter == KeypointObjectFilter.VisibleAndOccluded)
include = keypointSet.Value.keypoints.Any(k => k.state == 1);
}
if (include)
m_KeypointEntriesToReport.Add(entry);
}
//This code assumes that OnRenderedObjectInfoReadback will be called immediately after OnInstanceSegmentationImageReadback
KeypointsComputed?.Invoke(frameCount, m_KeypointEntriesToReport);
asyncAnnotation.annotation.ReportValues(m_KeypointEntriesToReport);
}
/// <param name="scriptableRenderContext"></param>

/// The values of a specific keypoint
/// </summary>
[Serializable]
public class Keypoint
public struct Keypoint
{
/// <summary>
/// The index of the keypoint in the template file

return targetTexture != null ?
targetTexture.height : Screen.height;
}
// Converts a coordinate from world space into pixel space
if (Mathf.Approximately(pt.y, perceptionCamera.attachedCamera.pixelHeight))
pt.y -= .0001f;
if (Mathf.Approximately(pt.x, perceptionCamera.attachedCamera.pixelWidth))
pt.x -= .0001f;
return pt;
}

var bone = animator.GetBoneTransform(pt.rigLabel);
if (bone != null)
{
var loc = ConvertToScreenSpace(bone.position);
keypoints[i].index = i;
keypoints[i].x = loc.x;
keypoints[i].y = loc.y;
keypoints[i].state = 2;
InitKeypoint(bone.position, keypoints, i);
}
}
}

foreach (var (joint, idx) in cachedData.overrides)
{
var loc = ConvertToScreenSpace(joint.transform.position);
keypoints[idx].index = idx;
keypoints[idx].x = loc.x;
keypoints[idx].y = loc.y;
keypoints[idx].state = 2;
InitKeypoint(joint.transform.position, keypoints, idx);
}
cachedData.keypoints.pose = "unset";

cachedData.keypoints.pose = GetPose(cachedData.animator);
}
m_AsyncAnnotations[m_CurrentFrame].keypoints[labeledEntity.instanceId] = cachedData.keypoints;
var cachedKeypointEntry = cachedData.keypoints;
var keypointEntry = new KeypointEntry()
{
instance_id = cachedKeypointEntry.instance_id,
keypoints = cachedKeypointEntry.keypoints.ToArray(),
label_id = cachedKeypointEntry.label_id,
pose = cachedKeypointEntry.pose,
template_guid = cachedKeypointEntry.template_guid
};
m_AsyncAnnotations[m_CurrentFrame].keypoints[labeledEntity.instanceId] = keypointEntry;
}
}
private void InitKeypoint(Vector3 position, Keypoint[] keypoints, int idx)
{
var loc = ConvertToScreenSpace(position);
keypoints[idx].index = idx;
if (loc.z < 0)
{
keypoints[idx].x = 0;
keypoints[idx].y = 0;
keypoints[idx].state = 0;
}
else
{
keypoints[idx].x = loc.x;
keypoints[idx].y = loc.y;
keypoints[idx].state = 2;
}
}

return "unset";
}
Keypoint GetKeypointForJoint(KeypointEntry entry, int joint)
Keypoint? GetKeypointForJoint(KeypointEntry entry, int joint)
{
if (joint < 0 || joint >= entry.keypoints.Length) return null;
return entry.keypoints[joint];

protected override void OnVisualize()
{
if (m_ToReport == null) return;
if (m_KeypointEntriesToReport == null) return;
var jointTexture = activeTemplate.jointTexture;
if (jointTexture == null) jointTexture = m_MissingTexture;

foreach (var entry in m_ToReport)
foreach (var entry in m_KeypointEntriesToReport)
{
foreach (var bone in activeTemplate.skeleton)
{

if (joint1 != null && joint1.state == 2 && joint2 != null && joint2.state == 2)
if (joint1 != null && joint1.Value.state == 2 && joint2 != null && joint2.Value.state == 2)
VisualizationHelper.DrawLine(joint1.x, joint1.y, joint2.x, joint2.y, bone.color, 8, skeletonTexture);
VisualizationHelper.DrawLine(joint1.Value.x, joint1.Value.y, joint2.Value.x, joint2.Value.y, bone.color, 8, skeletonTexture);
}
}

com.unity.perception/Runtime/GroundTruth/Labelers/SemanticSegmentationLabeler.cs (13)


{
label_name = l.label,
pixel_value = l.color
}).ToArray();
});
if (labelConfig.skyColor != Color.black)
{
specs = specs.Append(new SemanticSegmentationSpec()
{
label_name = "sky",
pixel_value = labelConfig.skyColor
});
}
specs,
specs.ToArray(),
"pixel-wise semantic segmentation label",
"PNG",
id: Guid.Parse(annotationId));

com.unity.perception/Runtime/GroundTruth/Labeling/SemanticSegmentationLabelConfig.cs (6)


Color.yellow,
Color.gray
};
/// <summary>
/// The color to use for the sky in semantic segmentation images
/// </summary>
public Color skyColor = Color.black;
}
/// <summary>

com.unity.perception/Runtime/GroundTruth/PerceptionCamera.cs (5)


// Ignore the subsequent calls.
if (lastFrameCalledThisCallback == Time.frameCount)
return false;
#if UNITY_EDITOR
if (UnityEditor.EditorApplication.isPaused)
return false;
#endif
return true;
}

com.unity.perception/Runtime/GroundTruth/RenderPasses/CrossPipelinePasses/SemanticSegmentationCrossPipelinePass.cs (2)


s_LastFrameExecuted = Time.frameCount;
var renderList = CreateRendererListDesc(camera, cullingResult, "FirstPass", 0, m_OverrideMaterial, -1);
cmd.ClearRenderTarget(true, true, Color.black);
cmd.ClearRenderTarget(true, true, m_LabelConfig.skyColor);
DrawRendererList(renderContext, cmd, RendererList.Create(renderList));
}

13
com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/GroundTruthPass.cs


targetDepthBuffer = TargetBuffer.Custom;
}
protected sealed override void Execute(
ScriptableRenderContext renderContext, CommandBuffer cmd, HDCamera hdCamera, CullingResults cullingResult)
//overrides obsolete member in HDRP on 2020.1+. Re-address when removing 2019.4 support or the API is dropped
#if HDRP_9_OR_NEWER
protected override void Execute(CustomPassContext ctx)
{
ScriptableRenderContext renderContext = ctx.renderContext;
var cmd = ctx.cmd;
var hdCamera = ctx.hdCamera;
var cullingResult = ctx.cullingResults;
#else
protected override void Execute(ScriptableRenderContext renderContext, CommandBuffer cmd, HDCamera hdCamera, CullingResults cullingResult)
#endif
// CustomPasses are executed for each camera. We only want to run for the target camera
if (hdCamera.camera != targetCamera)
return;

10
com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/InstanceSegmentationPass.cs


[UsedImplicitly]
public InstanceSegmentationPass() {}
//overrides obsolete member in HDRP on 2020.1+. Re-address when removing 2019.4 support or the API is dropped
#if HDRP_9_OR_NEWER
protected override void Execute(CustomPassContext ctx)
{
ScriptableRenderContext renderContext = ctx.renderContext;
var cmd = ctx.cmd;
var hdCamera = ctx.hdCamera;
var cullingResult = ctx.cullingResults;
#else
#endif
CoreUtils.SetRenderTarget(cmd, targetTexture, ClearFlag.All);
m_InstanceSegmentationCrossPipelinePass.Execute(renderContext, cmd, hdCamera.camera, cullingResult);
}

10
com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/LensDistortionPass.cs


lensDistortionCrossPipelinePass.Setup();
}
//overrides obsolete member in HDRP on 2020.1+. Re-address when removing 2019.4 support or the API is dropped
#if HDRP_9_OR_NEWER
protected override void Execute(CustomPassContext ctx)
{
ScriptableRenderContext renderContext = ctx.renderContext;
var cmd = ctx.cmd;
var hdCamera = ctx.hdCamera;
var cullingResult = ctx.cullingResults;
#else
#endif
CoreUtils.SetRenderTarget(cmd, targetTexture);
lensDistortionCrossPipelinePass.Execute(renderContext, cmd, hdCamera.camera, cullingResult);
}

10
com.unity.perception/Runtime/GroundTruth/RenderPasses/HdrpPasses/SemanticSegmentationPass.cs


m_SemanticSegmentationCrossPipelinePass.Setup();
}
//overrides obsolete member in HDRP on 2020.1+. Re-address when removing 2019.4 support or the API is dropped
#if HDRP_9_OR_NEWER
protected override void Execute(CustomPassContext ctx)
{
ScriptableRenderContext renderContext = ctx.renderContext;
var cmd = ctx.cmd;
var hdCamera = ctx.hdCamera;
var cullingResult = ctx.cullingResults;
#else
#endif
CoreUtils.SetRenderTarget(cmd, targetTexture);
m_SemanticSegmentationCrossPipelinePass.Execute(renderContext, cmd, hdCamera.camera, cullingResult);
}

21
com.unity.perception/Runtime/GroundTruth/SimulationState.cs


const float k_SimulationTimingAccuracy = 0.01f;
const int k_MinPendingCapturesBeforeWrite = 150;
const int k_MinPendingMetricsBeforeWrite = 150;
const float k_MaxDeltaTime = 100f;
public SimulationState(string outputDirectory)
{

{
var sensorData = m_Sensors[activeSensor];
#if UNITY_EDITOR
if (UnityEditor.EditorApplication.isPaused)
{
//When the user clicks the 'step' button in the editor, frames will always progress at .02 seconds per step.
//In this case, just run all sensors each frame to allow for debugging
Debug.Log($"Frame step forced all sensors to synchronize, changing frame timings.");
sensorData.sequenceTimeOfNextRender = UnscaledSequenceTime;
sensorData.sequenceTimeOfNextCapture = UnscaledSequenceTime;
}
#endif
if (Mathf.Abs(sensorData.sequenceTimeOfNextRender - UnscaledSequenceTime) < k_SimulationTimingAccuracy)
{
//means this frame fulfills this sensor's simulation time requirements, we can move target to next frame.

sensorData.sequenceTimeOfNextCapture += sensorData.renderingDeltaTime * (sensorData.framesBetweenCaptures + 1);
Debug.Assert(sensorData.sequenceTimeOfNextCapture > UnscaledSequenceTime,
$"Next scheduled capture should be after {UnscaledSequenceTime} but is {sensorData.sequenceTimeOfNextCapture}");
while (sensorData.sequenceTimeOfNextCapture <= UnscaledSequenceTime)
sensorData.sequenceTimeOfNextCapture += sensorData.renderingDeltaTime * (sensorData.framesBetweenCaptures + 1);
}
else if (sensorData.captureTriggerMode.Equals(CaptureTriggerMode.Manual))
{

}
//find the deltatime required to land on the next active sensor that needs simulation
float nextFrameDt = k_MaxDeltaTime;
var nextFrameDt = float.PositiveInfinity;
foreach (var activeSensor in m_ActiveSensors)
{
float thisSensorNextFrameDt = -1;

}
if (thisSensorNextFrameDt > 0f && thisSensorNextFrameDt < nextFrameDt)
{
}
if (Math.Abs(nextFrameDt - k_MaxDeltaTime) < 0.0001)
if (float.IsPositiveInfinity(nextFrameDt))
{
//means no sensor is controlling simulation timing, so we set Time.captureDeltaTime to 0 (default) which means the setting does not do anything
nextFrameDt = 0;

3
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/AnimationRandomizer.cs


void RandomizeAnimation(AnimationRandomizerTag tag)
{
if (!tag.gameObject.activeInHierarchy)
return;
var animator = tag.gameObject.GetComponent<Animator>();
animator.applyRootMotion = tag.applyRootMotion;

5
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/BackgroundObjectPlacementRandomizer.cs


/// <summary>
/// The Z offset component applied to all generated background layers
/// </summary>
[Tooltip("The Z offset applied to positions of all placed objects.")]
[Tooltip("The number of background layers to generate.")]
[Tooltip("The minimum distance between the centers of the placed objects.")]
[Tooltip("The width and height of the area in which objects will be placed. These should be positive numbers and sufficiently large in relation with the Separation Distance specified.")]
[Tooltip("The list of Prefabs to be placed by this Randomizer.")]
public GameObjectParameter prefabs;
GameObject m_Container;

3
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/ColorRandomizer.cs


static readonly int k_BaseColor = Shader.PropertyToID("_BaseColor");
/// <summary>
/// Describes the range of random colors to assign to tagged objects
/// The range of random colors to assign to target objects
[Tooltip("The range of random colors to assign to target objects.")]
public ColorHsvaParameter colorParameter;
/// <summary>

4
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/ForegroundObjectPlacementRandomizer.cs


/// <summary>
/// The Z offset component applied to the generated layer of GameObjects
/// </summary>
[Tooltip("The Z offset applied to positions of all placed objects.")]
[Tooltip("The minimum distance between the centers of the placed objects.")]
[Tooltip("The width and height of the area in which objects will be placed. These should be positive numbers and sufficiently large in relation with the Separation Distance specified.")]
[Tooltip("The list of Prefabs to be placed by this Randomizer.")]
public GameObjectParameter prefabs;
GameObject m_Container;

3
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/HueOffsetRandomizer.cs


static readonly int k_HueOffsetShaderProperty = Shader.PropertyToID("_HueOffset");
/// <summary>
/// The range of hue offsets to assign to tagged objects
/// The range of random hue offsets to assign to target objects
[Tooltip("The range of random hue offsets to assign to target objects.")]
public FloatParameter hueOffset = new FloatParameter { value = new UniformSampler(-180f, 180f) };
/// <summary>

3
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/RotationRandomizer.cs


public class RotationRandomizer : Randomizer
{
/// <summary>
/// Defines the range of random rotations that can be assigned to tagged objects
/// The range of random rotations to assign to target objects
[Tooltip("The range of random rotations to assign to target objects.")]
public Vector3Parameter rotation = new Vector3Parameter
{
x = new UniformSampler(0, 360),

11
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/SunAngleRandomizer.cs


public class SunAngleRandomizer : Randomizer
{
/// <summary>
/// The hour of the day (0 to 24)
/// The range of hours in a day (default is 0 to 24)
[Tooltip("The range of hours in a day (default is 0 to 24).")]
/// The day of the year (0 being Jan 1st and 364 being December 31st)
/// The range of days in a year with 0 being Jan 1st and 364 being December 31st (default is 0 to 364)
public FloatParameter dayOfTheYear = new FloatParameter { value = new UniformSampler(0, 365)};
[Tooltip("The range of days in a year with 0 being Jan 1st and 364 being December 31st (default is 0 to 364).")]
public FloatParameter dayOfTheYear = new FloatParameter { value = new UniformSampler(0, 364)};
/// The earth's latitude (-90 is the south pole, 0 is the equator, and +90 is the north pole)
/// The range of latitudes. A latitude of -90 is the south pole, 0 is the equator, and +90 is the north pole (default is -90 to 90).
[Tooltip("The range of latitudes. A latitude of -90 is the south pole, 0 is the equator, and +90 is the north pole (default is -90 to 90).")]
public FloatParameter latitude = new FloatParameter { value = new UniformSampler(-90, 90)};
/// <summary>

3
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Randomizers/TextureRandomizer.cs


#endif
/// <summary>
/// The list of textures to sample and apply to tagged objects
/// The list of textures to sample and apply to target objects
[Tooltip("The list of textures to sample and apply to target objects.")]
public Texture2DParameter texture;
/// <summary>

86
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerExamples/Utilities/PoissonDiskSampling.cs


/// <param name="minimumRadius">The minimum distance required between each sampled point</param>
/// <param name="seed">The random seed used to initialize the algorithm state</param>
/// <param name="samplingResolution">The number of potential points sampled around every valid point</param>
/// <param name="allocator">The allocator to use for the samples container</param>
/// <returns>The list of generated poisson points</returns>
public static NativeList<float2> GenerateSamples(
float width,

int samplingResolution = k_DefaultSamplingResolution)
int samplingResolution = k_DefaultSamplingResolution,
Allocator allocator = Allocator.TempJob)
{
if (width < 0)
throw new ArgumentException($"Width {width} cannot be negative");

if (samplingResolution <= 0)
throw new ArgumentException($"SamplingAttempts {samplingResolution} cannot be <= 0");
var samples = new NativeList<float2>(Allocator.TempJob);
new SampleJob
var superSampledPoints = new NativeList<float2>(allocator);
var sampleJob = new SampleJob
{
width = width + minimumRadius * 2,
height = height + minimumRadius * 2,
minimumRadius = minimumRadius,
seed = seed,
samplingResolution = samplingResolution,
samples = superSampledPoints
}.Schedule();
var croppedSamples = new NativeList<float2>(allocator);
new CropJob
seed = seed,
samplingResolution = samplingResolution,
samples = samples
}.Schedule().Complete();
return samples;
superSampledPoints = superSampledPoints.AsDeferredJobArray(),
croppedSamples = croppedSamples
}.Schedule(sampleJob).Complete();
superSampledPoints.Dispose();
return croppedSamples;
}
[BurstCompile]

}
}
/// <summary>
/// This job is for filtering out all super sampled Poisson points that are found outside of the originally
/// specified 2D region. This job will also shift the cropped points back to their original region.
/// </summary>
[BurstCompile]
struct CropJob : IJob
{
public float width;
public float height;
public float minimumRadius;
[ReadOnly] public NativeArray<float2> superSampledPoints;
public NativeList<float2> croppedSamples;
public void Execute()
{
var results = new NativeArray<bool>(
superSampledPoints.Length, Allocator.Temp, NativeArrayOptions.UninitializedMemory);
// The comparisons operations made in this loop are done separately from the list-building loop
// so that burst can automatically generate vectorized assembly code for this portion of the job.
for (var i = 0; i < superSampledPoints.Length; i++)
{
var point = superSampledPoints[i];
results[i] = point.x >= minimumRadius && point.x <= width + minimumRadius
&& point.y >= minimumRadius && point.y <= height + minimumRadius;
}
// This list-building code is done separately from the filtering loop
// because it cannot be vectorized by burst.
for (var i = 0; i < superSampledPoints.Length; i++)
{
if (results[i])
croppedSamples.Add(superSampledPoints[i]);
}
// Remove the positional offset from the filtered-but-still-super-sampled points
var offset = new float2(minimumRadius, minimumRadius);
for (var i = 0; i < croppedSamples.Length; i++)
croppedSamples[i] -= offset;
results.Dispose();
}
}
// Algorithm sourced from Robert Bridson's paper "Fast Poisson Disk Sampling in Arbitrary Dimensions"
// https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf
/// <summary>

// Calculate occupancy grid dimensions
var random = new Unity.Mathematics.Random(seed);
var cellSize = minimumRadius / math.sqrt(2);
var rows = Mathf.FloorToInt(height / cellSize);
var cols = Mathf.FloorToInt(width / cellSize);
var cellSize = minimumRadius / math.sqrt(2f);
var rows = Mathf.CeilToInt(height / cellSize);
var cols = Mathf.CeilToInt(width / cellSize);
var gridSize = rows * cols;
if (gridSize == 0)
return samples;

// This list will track all points that may still have space around them for generating new points
var activePoints = new NativeList<float2>(Allocator.Temp);
// Randomly place a seed point in a central location within the generation space to kick off the algorithm
var firstPoint = new float2(
random.NextFloat(0.4f, 0.6f) * width,
random.NextFloat(0.4f, 0.6f) * height);
// Randomly place a seed point to kick off the algorithm
var firstPoint = new float2(random.NextFloat(0f, width), random.NextFloat(0f, height));
samples.Add(firstPoint);
var firstPointCol = Mathf.FloorToInt(firstPoint.x / cellSize);
var firstPointRow = Mathf.FloorToInt(firstPoint.y / cellSize);

2
com.unity.perception/Runtime/Randomization/Randomizers/RandomizerTag.cs


/// OnEnable is called when this RandomizerTag is enabled, either created, instantiated, or enabled via
/// the Unity Editor
/// </summary>
protected void OnEnable()
protected virtual void OnEnable()
{
Register();
}

2
com.unity.perception/Runtime/Randomization/Samplers/SamplerUtility.cs


return min;
var stdTruncNorm = NormalCdfInverse(c);
return stdTruncNorm * stdDev + mean;
return math.clamp(stdTruncNorm * stdDev + mean, min, max);
}
/// <summary>

7
com.unity.perception/Runtime/Unity.Perception.Runtime.asmdef


"name": "com.unity.simulation.capture",
"expression": "0.0.10-preview.16",
"define": "SIMULATION_CAPTURE_0_0_10_PREVIEW_16_OR_NEWER"
},
{
"name": "com.unity.render-pipelines.high-definition",
"expression": "9.0",
"define": "HDRP_9_OR_NEWER"
}
}

16
com.unity.perception/Tests/Editor/DatasetCaptureEditorTests.cs


using System.Collections;
using System.IO;
using NUnit.Framework;
using UnityEditor;
using UnityEngine;
using UnityEngine.Perception.GroundTruth;
using UnityEngine.TestTools;

expectedDatasetPath = DatasetCapture.OutputDirectory;
yield return new ExitPlayMode();
FileAssert.Exists(Path.Combine(expectedDatasetPath, "sensors.json"));
}
[UnityTest]
public IEnumerator StepFunction_OverridesSimulationDeltaTime_AndRunsSensors()
{
yield return new EnterPlayMode();
DatasetCapture.ResetSimulation();
var ego = DatasetCapture.RegisterEgo("ego");
var sensor = DatasetCapture.RegisterSensor(ego, "camera", "", 0, CaptureTriggerMode.Scheduled, 2f, 0);
yield return null;
var timeBeforeStep = Time.time;
EditorApplication.isPaused = true;
EditorApplication.Step();
Assert.True(Time.time - timeBeforeStep < .3f);
Assert.True(sensor.ShouldCaptureThisFrame);
yield return new ExitPlayMode();
}
}
}

263
com.unity.perception/Tests/Runtime/GroundTruthTests/BoundingBox3dTests.cs


labelId = 1,
labelName = "label",
position = new Vector3(0, 0, 10),
scale = new Vector3(5, 5, 5),
scale = new Vector3(10, 10, 10),
rotation = Quaternion.identity
}
};

labelId = 1,
labelName = "label",
position = new Vector3(0, 0, Mathf.Sqrt(200)),
scale = new Vector3(5, 5, 5),
scale = new Vector3(10, 10, 10),
rotation = Quaternion.identity
}
};

labelId = 1,
labelName = "label",
position = new Vector3(0, 0, 10),
scale = new Vector3(6.5f, 2.5f, 2.5f),
scale = new Vector3(13f, 5f, 5f),
rotation = Quaternion.identity
}
};

return ExecuteTest(target, cameraPosition, cameraRotation, expected);
}
public class ParentedTestData
{
public string name;
public Vector3 expectedScale = Vector3.one;
public Vector3 expectedPosition = new Vector3(0, 0, 10);
public Quaternion expectedRotation = Quaternion.identity;
public Vector3 childScale = Vector3.one;
public Vector3 childPosition = Vector3.zero;
public Quaternion childRotation = Quaternion.identity;
public Vector3 grandchildScale = Vector3.one;
public Vector3 grandchildPosition = Vector3.zero;
public Quaternion grandchildRotation = Quaternion.identity;
public Vector3 parentScale = Vector3.one;
public Vector3 parentPosition = Vector3.zero;
public Quaternion parentRotation = Quaternion.identity;
public Vector3 grandparentScale = Vector3.one;
public Vector3 grandparentPosition = Vector3.zero;
public Quaternion grandparentRotation = Quaternion.identity;
public Vector3 cameraParentScale = Vector3.one;
public Vector3 cameraParentPosition = Vector3.zero;
public Quaternion cameraParentRotation = Quaternion.identity;
public Vector3 cameraScale = Vector3.one;
public Vector3 cameraPosition = new Vector3(0, 0, -10);
public Quaternion cameraRotation = Quaternion.identity;
public override string ToString()
{
return name;
}
}
static IEnumerable<ParentedTestData> ParentedObject_ProduceProperResults_Values()
{
yield return new ParentedTestData()
{
name = "ParentScale",
expectedScale = new Vector3(1/5f, 1/5f, 1/5f),
parentScale = new Vector3(1/5f, 1/5f, 1/5f),
};
yield return new ParentedTestData()
{
name = "GrandparentScale",
expectedScale = new Vector3(1/5f, 1/5f, 1/5f),
grandparentScale = new Vector3(1/5f, 1/5f, 1/5f),
};
yield return new ParentedTestData()
{
name = "ChildScale",
expectedScale = new Vector3(1/5f, 1/5f, 1/5f),
childScale = new Vector3(1/5f, 1/5f, 1/5f),
};
yield return new ParentedTestData()
{
name = "ChildAndParentScale",
expectedScale = new Vector3(1f, 1f, 1f),
childScale = new Vector3(5, 5, 5),
parentScale = new Vector3(1/5f, 1/5f, 1/5f),
};
yield return new ParentedTestData()
{
name = "GrandchildScale",
expectedScale = new Vector3(2, 2, 2),
childScale = new Vector3(2, 2, 2),
grandchildScale = new Vector3(5, 5, 5),
parentScale = new Vector3(1/5f, 1/5f, 1/5f),
};
yield return new ParentedTestData()
{
name = "ParentRotation",
expectedRotation = Quaternion.Euler(1f, 2f, 3f),
parentRotation = Quaternion.Euler(1f, 2f, 3f),
};
yield return new ParentedTestData()
{
name = "ChildRotation",
expectedRotation = Quaternion.Euler(1f, 2f, 3f),
childRotation = Quaternion.Euler(1f, 2f, 3f),
};
yield return new ParentedTestData()
{
name = "ParentAndChildRotation",
expectedRotation = Quaternion.identity,
childRotation = Quaternion.Euler(20f, 0, 0),
parentRotation = Quaternion.Euler(-20f, 0, 0),
};
var diagonalSize = Mathf.Sqrt(2 * 2 + 2 * 2); //A^2 + B^2 = C^2
yield return new ParentedTestData()
{
name = "GrandchildRotation",
expectedRotation = Quaternion.identity,
expectedScale = new Vector3(diagonalSize / 2f, 1, diagonalSize / 2f),
grandchildRotation = Quaternion.Euler(0, 45, 0),
};
yield return new ParentedTestData()
{
name = "GrandparentRotation",
expectedRotation = Quaternion.Euler(-20f, 0, 0),
grandparentRotation = Quaternion.Euler(-20f, 0, 0),
};
yield return new ParentedTestData()
{
name = "GrandchildTRS",
expectedRotation = Quaternion.identity,
expectedPosition = new Vector3(-5, 0, 10),
expectedScale = new Vector3(.5f * diagonalSize / 2f, .5f, .5f * diagonalSize / 2f),
grandchildRotation = Quaternion.Euler(0, -45, 0),
grandchildPosition = new Vector3(-5, 0, 0),
grandchildScale = new Vector3(.5f, .5f, .5f),
};
yield return new ParentedTestData()
{
name = "CamParentPositionAndScale",
expectedRotation = Quaternion.identity,
expectedPosition = new Vector3(2, 0, 6.5f),
expectedScale = new Vector3(1, 1, 1),
childPosition = new Vector3(0, 0, 4),
cameraParentPosition = new Vector3(-2, 0, 0),
cameraParentScale = new Vector3(1/2f, 1/3f, 1/4f),
};
//point at the left side of the box
yield return new ParentedTestData()
{
name = "CamParentRotate",
expectedRotation = Quaternion.Euler(0, -90, 0),
expectedPosition = new Vector3(0, 0, 10),
cameraParentRotation = Quaternion.Euler(0, 90, 0),
};
//point at the left side of the box
yield return new ParentedTestData()
{
name = "CamParentScale",
expectedPosition = new Vector3(0, 0, 2.5f),
//Scale on the camera's hierarchy only affects the position of the camera. It does not affect the camera frustum
cameraParentScale = new Vector3(1/2f, 1/3f, 1/4f),
};
yield return new ParentedTestData()
{
name = "CamRotationParentScale",
expectedRotation = Quaternion.Euler(0, -90, 0),
expectedPosition = new Vector3(0, 0, 5),
cameraParentPosition = new Vector3(-5, 0, 0),
cameraParentScale = new Vector3(.5f, 1, 1),
cameraPosition = Vector3.zero,
cameraRotation = Quaternion.Euler(0, 90, 0),
};
}
[UnityTest]
public IEnumerator ParentedObject_ProduceProperResults([ValueSource(nameof(ParentedObject_ProduceProperResults_Values))] ParentedTestData parentedTestData)
{
var expected = new[]
{
new ExpectedResult
{
instanceId = 1,
labelId = 1,
labelName = "label",
position = parentedTestData.expectedPosition,
scale = parentedTestData.expectedScale,
rotation = parentedTestData.expectedRotation
}
};
var goGrandparent = new GameObject();
goGrandparent.transform.localPosition = parentedTestData.grandparentPosition;
goGrandparent.transform.localScale = parentedTestData.grandparentScale;
goGrandparent.transform.localRotation = parentedTestData.grandparentRotation;
var goParent = new GameObject();
goParent.transform.SetParent(goGrandparent.transform, false);
goParent.transform.localPosition = parentedTestData.parentPosition;
goParent.transform.localScale = parentedTestData.parentScale;
goParent.transform.localRotation = parentedTestData.parentRotation;
var goChild = new GameObject();
goChild.transform.SetParent(goParent.transform, false);
goChild.transform.localPosition = parentedTestData.childPosition;
goChild.transform.localScale = parentedTestData.childScale;
goChild.transform.localRotation = parentedTestData.childRotation;
var labeling = goChild.AddComponent<Labeling>();
labeling.labels.Add("label");
var goGrandchild = GameObject.CreatePrimitive(PrimitiveType.Cube);
goGrandchild.transform.SetParent(goChild.transform, false);
goGrandchild.transform.localPosition = parentedTestData.grandchildPosition;
goGrandchild.transform.localScale = parentedTestData.grandchildScale;
goGrandchild.transform.localRotation = parentedTestData.grandchildRotation;
var goCameraParent = new GameObject();
goCameraParent.transform.localPosition = parentedTestData.cameraParentPosition;
goCameraParent.transform.localScale = parentedTestData.cameraParentScale;
goCameraParent.transform.localRotation = parentedTestData.cameraParentRotation;
var receivedResults = new List<(int, List<BoundingBox3DLabeler.BoxData>)>();
var cameraObject = SetupCamera(SetupLabelConfig(), (frame, data) =>
{
receivedResults.Add((frame, data));
});
cameraObject.transform.SetParent(goCameraParent.transform, false);
cameraObject.transform.localPosition = parentedTestData.cameraPosition;
cameraObject.transform.localScale = parentedTestData.cameraScale;
cameraObject.transform.localRotation = parentedTestData.cameraRotation;
cameraObject.SetActive(true);
return ExecuteTestOnCamera(goGrandparent, expected, goCameraParent, receivedResults);
}
[UnityTest]
public IEnumerator MultiInheritedMesh_ProduceProperTranslationTest()
{

labelId = 2,
labelName = "car",
position = new Vector3(0, 0.525f, 20),
scale = new Vector3(2f, 0.875f, 2.4f),
scale = new Vector3(4f, 1.75f, 4.8f),
rotation = Quaternion.identity
},
};

[UnityTest]
public IEnumerator MultiInheritedMeshDifferentLabels_ProduceProperTranslationTest()
{
var wheelScale = new Vector3(0.35f, 1.0f, 0.35f);
var wheelScale = new Vector3(0.7f, 2.0f, 0.7f);
var wheelRot = Quaternion.Euler(0, 0, 90);
var expected = new[]

labelId = 2,
labelName = "car",
position = new Vector3(0, 1.05f, 20),
scale = new Vector3(1f, 0.7f, 2.4f),
scale = new Vector3(2f, 1.4f, 4.8f),
rotation = Quaternion.identity
},
new ExpectedResult

var target = TestHelper.CreateLabeledCube(scale: 15f, z: -50f);
return ExecuteSeenUnseenTest(target, Vector3.zero, quaternion.identity, 0);
}
struct ExpectedResult
{
public int labelId;

yield return null;
yield return null;
IEnumerator ExecuteTest(GameObject target, Vector3 cameraPos, Quaternion cameraRotation, IList<ExpectedResult> expectations)
{
var receivedResults = new List<(int, List<BoundingBox3DLabeler.BoxData>)>();

gameObject.transform.position = cameraPos;
gameObject.transform.rotation = cameraRotation;
AddTestObjectForCleanup(gameObject);
return ExecuteTestOnCamera(target, expectations, gameObject, receivedResults);
}
gameObject.SetActive(false);
private IEnumerator ExecuteTestOnCamera(GameObject target, IList<ExpectedResult> expectations, GameObject cameraObject,
List<(int, List<BoundingBox3DLabeler.BoxData>)> receivedResults)
{
AddTestObjectForCleanup(cameraObject);
AddTestObjectForCleanup(target);
cameraObject.SetActive(false);
gameObject.SetActive(true);
cameraObject.SetActive(true);
Assert.AreEqual(expectations.Count, receivedResults[0].Item2.Count);
for (var i = 0; i < receivedResults[0].Item2.Count; i++)

TestResults(b, expectations[i]);
}
DestroyTestObject(gameObject);
UnityEngine.Object.DestroyImmediate(target);
DestroyTestObject(cameraObject);
}
static IdLabelConfig SetupLabelConfig()

static void TestResults(BoundingBox3DLabeler.BoxData data, ExpectedResult e)
{
var scale = e.scale * 2;
Assert.AreEqual(scale[0], data.size[0], k_Delta);
Assert.AreEqual(scale[1], data.size[1], k_Delta);
Assert.AreEqual(scale[2], data.size[2], k_Delta);
Assert.AreEqual(e.scale[0], data.size[0], k_Delta);
Assert.AreEqual(e.scale[1], data.size[1], k_Delta);
Assert.AreEqual(e.scale[2], data.size[2], k_Delta);
Assert.AreEqual(e.rotation[0], data.rotation[0], k_Delta);
Assert.AreEqual(e.rotation[1], data.rotation[1], k_Delta);
Assert.AreEqual(e.rotation[2], data.rotation[2], k_Delta);

33
com.unity.perception/Tests/Runtime/GroundTruthTests/DatasetCaptureSensorSchedulingTests.cs


}
[UnityTest]
[TestCase(1, 0, 0, 1, 2, 3, ExpectedResult = (IEnumerator)null)]
[TestCase(10, 5, 50, 60, 70, 80, ExpectedResult = (IEnumerator)null)]
[TestCase(55, 0, 0, 55, 110, 165, ExpectedResult = (IEnumerator)null)]
[TestCase(235, 10, 2350, 2585, 2820, 3055, ExpectedResult = (IEnumerator)null)]
public IEnumerator SequenceTimeOfNextCapture_ReportsCorrectTime_VariedDeltaTimesAndStartFrames(float simulationDeltaTime, int firstCaptureFrame, float firstCaptureTime, float secondCaptureTime, float thirdCaptureTime, float fourthCaptureTime)
{
var ego = DatasetCapture.RegisterEgo("ego");
var sensorHandle = DatasetCapture.RegisterSensor(ego, "cam", "", firstCaptureFrame, CaptureTriggerMode.Scheduled, simulationDeltaTime, 0);
float[] sequenceTimesExpected =
{
firstCaptureTime,
secondCaptureTime,
thirdCaptureTime,
fourthCaptureTime
};
for (var i = 0; i < firstCaptureFrame; i++)
{
//render the non-captured frames before firstCaptureFrame
yield return null;
}
for (var i = 0; i < sequenceTimesExpected.Length; i++)
{
var sensorData = m_TestHelper.GetSensorData(sensorHandle);
var sequenceTimeActual = m_TestHelper.CallSequenceTimeOfNextCapture(sensorData);
Assert.AreEqual(sequenceTimesExpected[i], sequenceTimeActual, 0.0001f);
yield return null;
}
}
[UnityTest]
public IEnumerator SequenceTimeOfManualCapture_ReportsCorrectTime_ManualSensorDoesNotAffectTimings()
{
var ego = DatasetCapture.RegisterEgo("ego");

51
com.unity.perception/Tests/Runtime/GroundTruthTests/InstanceIdToColorMappingTests.cs


public class InstanceIdToColorMappingTests
{
[Test]
public void InitializeMaps_DoesNotThrow()
{
Assert.DoesNotThrow(InstanceIdToColorMapping.InitializeMaps);
}
[Test]
for (var i = 1u; i <= 64u; i++)
for (var i = 1u; i <= 1024u; i++)
Assert.IsTrue(InstanceIdToColorMapping.TryGetColorFromInstanceId(i, out var color));
Assert.IsTrue(InstanceIdToColorMapping.TryGetInstanceIdFromColor(color, out var id));
Assert.IsTrue(InstanceIdToColorMapping.TryGetColorFromInstanceId(i, out var color), $"Failed TryGetColorFromInstanceId on id {i}");
Assert.IsTrue(InstanceIdToColorMapping.TryGetInstanceIdFromColor(color, out var id), $"Failed TryGetInstanceIdFromColor on id {i}");
Assert.AreEqual(i, id);
color = InstanceIdToColorMapping.GetColorFromInstanceId(i);

[TestCase(4u,255,0,223,255)]
[TestCase(5u,0,255,212,255)]
[TestCase(6u,255,138,0,255)]
[TestCase(64u,195,0,75,255)]
[TestCase(65u,0,0,1,254)]
[TestCase(66u,0,0,2,254)]
[TestCase(64u + 256u,0,1,0,254)]
[TestCase(65u + 256u,0,1,1,254)]
[TestCase(64u + 65536u,1,0,0,254)]
[TestCase(16777216u,255,255,192,254)]
[TestCase(64u + 16777216u,0,0,0,253)]
[TestCase(64u + (16777216u * 2),0,0,0,252)]
[TestCase(1024u,30, 0, 11,255)]
[TestCase(1025u,0,0,1,254)]
[TestCase(1026u,0,0,2,254)]
[TestCase(1024u + 256u,0,1,0,254)]
[TestCase(1025u + 256u,0,1,1,254)]
[TestCase(1024u + 65536u,1,0,0,254)]
[TestCase(1024u + 16777216u,0,0,0,253)]
[TestCase(1024u + (16777216u * 2),0,0,0,252)]
Assert.AreEqual(color.r, r);
Assert.AreEqual(color.g, g);
Assert.AreEqual(color.b, b);
Assert.AreEqual(color.a, a);
var expected = new Color32(r, g, b, a);
Assert.AreEqual(expected, color);
Assert.AreEqual(color.r, r);
Assert.AreEqual(color.g, g);
Assert.AreEqual(color.b, b);
Assert.AreEqual(color.a, a);
Assert.AreEqual(expected, color);
id2 = InstanceIdToColorMapping.GetInstanceIdFromColor(color);
Assert.AreEqual(id, id2);

public void InstanceIdToColorMappingTests_GetCorrectValuesFor255()
{
var expectedColor = new Color(0, 0, 191, 254);
var expectedColor = new Color32(19, 210, 0, 255);
Assert.AreEqual(color.r, expectedColor.r);
Assert.AreEqual(color.g, expectedColor.g);
Assert.AreEqual(color.b, expectedColor.b);
Assert.AreEqual(color.a, expectedColor.a);
Assert.AreEqual(expectedColor, color);
Assert.AreEqual(color.r, expectedColor.r);
Assert.AreEqual(color.g, expectedColor.g);
Assert.AreEqual(color.b, expectedColor.b);
Assert.AreEqual(color.a, expectedColor.a);
Assert.AreEqual(expectedColor, color);
id2 = InstanceIdToColorMapping.GetInstanceIdFromColor(color);
Assert.AreEqual(255u, id2);
}

299
com.unity.perception/Tests/Runtime/GroundTruthTests/KeypointGroundTruthTests.cs


public class KeyPointGroundTruthTests : GroundTruthTestBase, IPrebuildSetup, IPostBuildCleanup
{
private const string kAnimatedCubeScenePath = "Packages/com.unity.perception/Tests/Runtime/TestAssets/AnimatedCubeScene.unity";
private const double k_Delta = .01;
public void Setup()
{

#endif
}
static GameObject SetupCamera(IdLabelConfig config, KeypointTemplate template, Action<int, List<KeypointLabeler.KeypointEntry>> computeListener, RenderTexture renderTexture = null)
static GameObject SetupCamera(IdLabelConfig config, KeypointTemplate template, Action<int, List<KeypointLabeler.KeypointEntry>> computeListener, RenderTexture renderTexture = null, KeypointObjectFilter keypointObjectFilter = KeypointObjectFilter.Visible)
{
var cameraObject = new GameObject();
cameraObject.SetActive(false);

var perceptionCamera = cameraObject.AddComponent<PerceptionCamera>();
perceptionCamera.captureRgbImages = false;
var keyPointLabeler = new KeypointLabeler(config, template);
keyPointLabeler.objectFilter = keypointObjectFilter;
if (computeListener != null)
keyPointLabeler.KeypointsComputed += computeListener;

}
[UnityTest]
public IEnumerator Keypoint_TestMovingCube()
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var keypoints = new[]
{
new KeypointDefinition
{
label = "Center",
associateToRig = false,
color = Color.black
}
};
var template = ScriptableObject.CreateInstance<KeypointTemplate>();
template.templateID = Guid.NewGuid().ToString();
template.templateName = "label";
template.jointTexture = null;
template.skeletonTexture = null;
template.keypoints = keypoints;
template.skeleton = new SkeletonDefinition[0];
var texture = new RenderTexture(1024, 768, 16);
texture.Create();
var cam = SetupCamera(SetUpLabelConfig(), template, (frame, data) =>
{
incoming.Add(new List<KeypointLabeler.KeypointEntry>(data));
}, texture);
cam.GetComponent<PerceptionCamera>().showVisualizations = false;
var cube = TestHelper.CreateLabeledCube(scale: 6, z: 8);
SetupCubeJoint(cube, template, "Center", 0, 0, 0);
cube.SetActive(true);
cam.SetActive(true);
AddTestObjectForCleanup(cam);
AddTestObjectForCleanup(cube);
yield return null;
cube.transform.localPosition = new Vector3(-1, 0, 0);
yield return null;
//force all async readbacks to complete
DestroyTestObject(cam);
texture.Release();
var testCase = incoming[0];
Assert.AreEqual(1, testCase.Count);
var t = testCase.First();
Assert.NotNull(t);
Assert.AreEqual(1, t.instance_id);
Assert.AreEqual(1, t.label_id);
Assert.AreEqual(template.templateID.ToString(), t.template_guid);
Assert.AreEqual(1, t.keypoints.Length);
Assert.AreEqual(1024 / 2, t.keypoints[0].x);
Assert.AreEqual(768 / 2, t.keypoints[0].y);
Assert.AreEqual(0, t.keypoints[0].index);
Assert.AreEqual(2, t.keypoints[0].state);
var testCase2 = incoming[1];
Assert.AreEqual(1, testCase2.Count);
var t2 = testCase2.First();
Assert.AreEqual(445, t2.keypoints[0].x, 1);
Assert.AreEqual(768 / 2, t2.keypoints[0].y);
}
[UnityTest]
public IEnumerator Keypoint_TestPartialOffScreen([Values(1,5)] int framesToRunBeforeAsserting)
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();

}
[UnityTest]
public IEnumerator Keypoint_TestAllBlockedByOther()
public IEnumerator Keypoint_FullyOccluded_DoesNotReport()
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var template = CreateTestTemplate(Guid.NewGuid(), "TestTemplate");

incoming.Add(new List<KeypointLabeler.KeypointEntry>(data));
}, texture);
CreateFullyOccludedScene(template, cam);
yield return null;
//force all async readbacks to complete
DestroyTestObject(cam);
if (texture != null) texture.Release();
var testCase = incoming.Last();
Assert.AreEqual(0, testCase.Count);
}
[UnityTest]
public IEnumerator Keypoint_FullyOccluded_WithIncludeOccluded_ReportsProperly(
[Values(KeypointObjectFilter.VisibleAndOccluded, KeypointObjectFilter.All)] KeypointObjectFilter keypointObjectFilter)
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var template = CreateTestTemplate(Guid.NewGuid(), "TestTemplate");
var texture = new RenderTexture(1024, 768, 16);
texture.Create();
var cam = SetupCamera(SetUpLabelConfig(), template, (frame, data) =>
{
incoming.Add(data);
}, texture, keypointObjectFilter);
CreateFullyOccludedScene(template, cam);
yield return null;
//force all async readbacks to complete
DestroyTestObject(cam);
if (texture != null) texture.Release();
var testCase = incoming.Last();
Assert.AreEqual(1, testCase.Count);
var t = testCase.First();
Assert.NotNull(t);
Assert.AreEqual(1, t.instance_id);
Assert.AreEqual(1, t.label_id);
Assert.AreEqual(template.templateID.ToString(), t.template_guid);
Assert.AreEqual(9, t.keypoints.Length);
for (var i = 0; i < 8; i++)
Assert.AreEqual(1, t.keypoints[i].state);
for (var i = 0; i < 9; i++) Assert.AreEqual(i, t.keypoints[i].index);
Assert.Zero(t.keypoints[8].state);
Assert.Zero(t.keypoints[8].x);
Assert.Zero(t.keypoints[8].y);
}
private void CreateFullyOccludedScene(KeypointTemplate template, GameObject cam)
{
var cube = TestHelper.CreateLabeledCube(scale: 6, z: 8);
SetupCubeJoints(cube, template);

AddTestObjectForCleanup(cam);
AddTestObjectForCleanup(cube);
AddTestObjectForCleanup(blocker);
}
[UnityTest]
public IEnumerator Keypoint_Offscreen_DoesNotReport(
[Values(KeypointObjectFilter.VisibleAndOccluded, KeypointObjectFilter.Visible)] KeypointObjectFilter keypointObjectFilter)
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var template = CreateTestTemplate(Guid.NewGuid(), "TestTemplate");
var texture = new RenderTexture(1024, 768, 16);
texture.Create();
var cam = SetupCamera(SetUpLabelConfig(), template, (frame, data) =>
{
incoming.Add(data);
}, texture, keypointObjectFilter);
var cube = TestHelper.CreateLabeledCube(scale: 6, z: -100);
SetupCubeJoints(cube, template);
cube.SetActive(true);
cam.SetActive(true);
AddTestObjectForCleanup(cam);
AddTestObjectForCleanup(cube);
yield return null;
//force all async readbacks to complete
DestroyTestObject(cam);
if (texture != null) texture.Release();
var testCase = incoming.Last();
Assert.AreEqual(0, testCase.Count);
}
[UnityTest]
public IEnumerator Keypoint_Offscreen_WithIncludeAll_ReportsProperly()
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var template = CreateTestTemplate(Guid.NewGuid(), "TestTemplate");
var texture = new RenderTexture(1024, 768, 16);
texture.Create();
var cam = SetupCamera(SetUpLabelConfig(), template, (frame, data) =>
{
incoming.Add(data);
}, texture, KeypointObjectFilter.All);
var cube = TestHelper.CreateLabeledCube(scale: 6, z: -20);
SetupCubeJoints(cube, template);
cube.SetActive(true);
cam.SetActive(true);
AddTestObjectForCleanup(cam);
AddTestObjectForCleanup(cube);
yield return null;

var testCase = incoming.Last();
Assert.AreEqual(1, testCase.Count);
var t = testCase.First();
Assert.NotNull(t);
Assert.AreEqual(1, t.instance_id);

for (var i = 0; i < 8; i++)
Assert.AreEqual(1, t.keypoints[i].state);
for (var i = 0; i < 9; i++) Assert.AreEqual(i, t.keypoints[i].index);
Assert.Zero(t.keypoints[8].state);
Assert.Zero(t.keypoints[8].x);
Assert.Zero(t.keypoints[8].y);
for (var i = 0; i < 9; i++)
{
Assert.Zero(t.keypoints[i].state);
Assert.Zero(t.keypoints[i].x);
Assert.Zero(t.keypoints[i].y);
}
}
[UnityTest]

Assert.AreEqual(screenPointCenterExpected.y, t.keypoints[8].y, Screen.height * .1);
Assert.AreEqual(8, t.keypoints[8].index);
Assert.AreEqual(2, t.keypoints[8].state);
}
static IEnumerable<(float scale, bool expectObject, int expectedState, KeypointObjectFilter keypointFilter, Vector2 expectedTopLeft, Vector2 expectedBottomRight)> Keypoint_OnBox_ReportsProperCoordinates_TestCases()
{
yield return (
1,
true,
2,
KeypointObjectFilter.Visible,
new Vector2(0, 0),
new Vector2(1023.99f, 1023.99f));
yield return (
1.001f,
true,
0,
KeypointObjectFilter.Visible,
new Vector2(0, 0),
new Vector2(0, 0));
yield return (
1.2f,
true,
0,
KeypointObjectFilter.Visible,
new Vector2(0, 0),
new Vector2(0, 0));
yield return (
0f,
false,
1,
KeypointObjectFilter.Visible,
new Vector2(512, 512),
new Vector2(512, 512));
yield return (
0f,
true,
1,
KeypointObjectFilter.VisibleAndOccluded,
new Vector2(512, 512),
new Vector2(512, 512));
}
[UnityTest]
public IEnumerator Keypoint_OnBox_ReportsProperCoordinates(
[ValueSource(nameof(Keypoint_OnBox_ReportsProperCoordinates_TestCases))]
(float scale, bool expectObject, int expectedState, KeypointObjectFilter keypointFilter, Vector2 expectedTopLeft, Vector2 expectedBottomRight) args)
{
var incoming = new List<List<KeypointLabeler.KeypointEntry>>();
var template = CreateTestTemplate(Guid.NewGuid(), "TestTemplate");
var frameSize = 1024;
var texture = new RenderTexture(frameSize, frameSize, 16);
var cam = SetupCamera(SetUpLabelConfig(), template, (frame, data) =>
{
incoming.Add(data);
}, texture, args.keypointFilter);
var camComponent = cam.GetComponent<Camera>();
camComponent.orthographic = true;
camComponent.orthographicSize = .5f;
var cube = TestHelper.CreateLabeledCube(scale: args.scale, z: 8);
SetupCubeJoints(cube, template);
cube.SetActive(true);
cam.SetActive(true);
AddTestObjectForCleanup(cam);
AddTestObjectForCleanup(cube);
yield return null;
//force all async readbacks to complete
DestroyTestObject(cam);
texture.Release();
var testCase = incoming.Last();
if (!args.expectObject)
{
Assert.AreEqual(0, testCase.Count);
yield break;
}
Assert.AreEqual(1, testCase.Count);
var t = testCase.First();
Assert.NotNull(t);
Assert.AreEqual(1, t.instance_id);
Assert.AreEqual(1, t.label_id);
Assert.AreEqual(template.templateID.ToString(), t.template_guid);
Assert.AreEqual(9, t.keypoints.Length);
CollectionAssert.AreEqual(Enumerable.Repeat(args.expectedState, 8), t.keypoints.Take(8).Select(k => k.state), "State mismatch");
Assert.AreEqual(args.expectedTopLeft.x, t.keypoints[0].x, k_Delta);
Assert.AreEqual(args.expectedBottomRight.y, t.keypoints[0].y, k_Delta);
Assert.AreEqual(args.expectedTopLeft.x, t.keypoints[1].x, k_Delta);
Assert.AreEqual(args.expectedTopLeft.y, t.keypoints[1].y, k_Delta);
Assert.AreEqual(args.expectedBottomRight.x, t.keypoints[2].x, k_Delta);
Assert.AreEqual(args.expectedTopLeft.y, t.keypoints[2].y, k_Delta);
Assert.AreEqual(args.expectedBottomRight.x, t.keypoints[3].x, k_Delta);
Assert.AreEqual(args.expectedBottomRight.y, t.keypoints[3].y, k_Delta);
}
}
}

38
com.unity.perception/Tests/Runtime/GroundTruthTests/SegmentationGroundTruthTests.cs


{
static readonly Color32 k_SemanticPixelValue = new Color32(10, 20, 30, Byte.MaxValue);
private static readonly Color32 k_InstanceSegmentationPixelValue = new Color32(255,0,0, 255);
private static readonly Color32 k_SkyValue = new Color32(10, 20, 30, 40);
public enum SegmentationKind
{

switch (segmentationKind)
{
case SegmentationKind.Instance:
//expectedPixelValue = new Color32(0, 74, 255, 255);
expectedPixelValue = k_InstanceSegmentationPixelValue;
cameraObject = SetupCameraInstanceSegmentation(OnSegmentationImageReceived);
break;

CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false, k_SkyValue);
AddTestObjectForCleanup(TestHelper.CreateLabeledPlane(label: "non-matching"));
yield return null;

CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), false, k_SkyValue);
var gameObject = TestHelper.CreateLabeledPlane();
gameObject.GetComponent<Labeling>().enabled = false;

}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithEmptyFrame_ProducesBlack([Values(false, true)] bool showVisualizations)
public IEnumerator SemanticSegmentationPass_WithEmptyFrame_ProducesSky([Values(false, true)] bool showVisualizations)
var expectedPixelValue = new Color32(0, 0, 0, 255);
var expectedPixelValue = k_SkyValue;
void OnSegmentationImageReceived(NativeArray<Color32> data)
{
timesSegmentationImageReceived++;

var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), showVisualizations);
var cameraObject = SetupCameraSemanticSegmentation(a => OnSegmentationImageReceived(a.data), showVisualizations, expectedPixelValue);
//TestHelper.LoadAndStartRenderDocCapture(out var gameView);
yield return null;

}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithNoObjects_ProducesSky()
{
int timesSegmentationImageReceived = 0;
var expectedPixelValue = k_SkyValue;
void OnSegmentationImageReceived(NativeArray<Color32> data)
{
timesSegmentationImageReceived++;
CollectionAssert.AreEqual(Enumerable.Repeat(expectedPixelValue, data.Length), data.ToArray());
}
var cameraObject = SetupCameraSemanticSegmentation(
a => OnSegmentationImageReceived(a.data), false, expectedPixelValue);
yield return null;
//destroy the object to force all pending segmented image readbacks to finish and events to be fired.
DestroyTestObject(cameraObject);
Assert.AreEqual(1, timesSegmentationImageReceived);
}
[UnityTest]
public IEnumerator SemanticSegmentationPass_WithTextureOverride_RendersToOverride([Values(true, false)] bool showVisualizations)
{
var expectedPixelValue = new Color32(0, 0, 255, 255);

return cameraObject;
}
GameObject SetupCameraSemanticSegmentation(Action<SemanticSegmentationLabeler.ImageReadbackEventArgs> onSegmentationImageReceived, bool showVisualizations)
GameObject SetupCameraSemanticSegmentation(Action<SemanticSegmentationLabeler.ImageReadbackEventArgs> onSegmentationImageReceived, bool showVisualizations, Color? backgroundColor = null)
{
var cameraObject = SetupCamera(out var perceptionCamera, showVisualizations);
var labelConfig = ScriptableObject.CreateInstance<SemanticSegmentationLabelConfig>();

color = k_SemanticPixelValue
}
});
if (backgroundColor != null)
{
labelConfig.skyColor = backgroundColor.Value;
}
var semanticSegmentationLabeler = new SemanticSegmentationLabeler(labelConfig);
semanticSegmentationLabeler.imageReadback += onSegmentationImageReceived;
perceptionCamera.AddLabeler(semanticSegmentationLabeler);

2
com.unity.perception/package.json


"displayName": "Perception",
"name": "com.unity.perception",
"unity": "2019.4",
"version": "0.8.0-preview.2",
"version": "0.8.0-preview.3",
"samples": [
{
"displayName": "Tutorial Files",

214
com.unity.perception/Documentation~/images/build_res.png

Width: 1455  |  Height: 465  |  Size: 66 KiB

366
com.unity.perception/Documentation~/images/gameview_res.png

Width: 692  |  Height: 946  |  Size: 147 KiB

739
com.unity.perception/Documentation~/images/robotics_pose.png

Width: 700  |  Height: 196  |  Size: 179 KiB

140
com.unity.perception/Documentation~/images/unity-wide-whiteback.png

Width: 3000  |  Height: 636  |  Size: 41 KiB

22
com.unity.perception/Editor/GroundTruth/JointLabelEditor.cs


using System.Linq;
using UnityEngine;
using UnityEngine.Perception.GroundTruth;
namespace UnityEditor.Perception.GroundTruth
{
[CustomEditor(typeof(JointLabel))]
public class JointLabelEditor : Editor
{
public override void OnInspectorGUI()
{
base.OnInspectorGUI();
#if UNITY_2020_1_OR_NEWER
//GetComponentInParent<T>(bool includeInactive) only exists on 2020.1 and later
if (targets.Any(t => ((Component)t).gameObject.GetComponentInParent<Labeling>(true) == null))
#else
if (targets.Any(t => ((Component)t).GetComponentInParent<Labeling>() == null))
#endif
EditorGUILayout.HelpBox("No Labeling component detected on parents. Keypoint labeling requires a Labeling component on the root of the object.", MessageType.Info);
}
}
}

3
com.unity.perception/Editor/GroundTruth/JointLabelEditor.cs.meta


fileFormatVersion: 2
guid: 8b21d46736dd4cbb96bf827457752855
timeCreated: 1616108507

24
com.unity.perception/Runtime/GroundTruth/Labelers/KeypointObjectFilter.cs


namespace UnityEngine.Perception.GroundTruth
{
/// <summary>
/// Keypoint filtering modes.
/// </summary>
public enum KeypointObjectFilter
{
/// <summary>
/// Only include objects which are partially visible in the frame.
/// </summary>
[InspectorName("Visible objects")]
Visible,
/// <summary>
/// Include visible objects and objects with keypoints in the frame.
/// </summary>
[InspectorName("Visible and occluded objects")]
VisibleAndOccluded,
/// <summary>
/// Include all labeled objects containing matching skeletons.
/// </summary>
[InspectorName("All objects")]
All
}
}

3
com.unity.perception/Runtime/GroundTruth/Labelers/KeypointObjectFilter.cs.meta


fileFormatVersion: 2
guid: a1ea6aeb2b1c476288b4aa0cbc795280
timeCreated: 1616179952

91
com.unity.perception/Documentation~/GroundTruth/KeypointLabeler.md


# Keypoint Labeler
The Keypoint Labeler captures the screen locations of specific points on labeled GameObjects. The typical use of this Labeler is capturing human pose estimation data, but it can be used to capture points on any kind of object. The Labeler uses a [Keypoint Template](#KeypointTemplate) which defines the keypoints to capture for the model and the skeletal connections between those keypoints. The positions of the keypoints are recorded in pixel coordinates.
## Data Format
The keypoints captured each frame are in the following format:
```
keypoints {
label_id: <int> -- Integer identifier of the label
instance_id: <int> -- Integer identifier of the labeled object instance
template_guid: <str> -- UUID of the keypoint template
pose: <str> -- Current pose
keypoints [ -- Array of keypoint data, one entry for each keypoint defined in associated template file.
{
index: <int> -- Index of keypoint in template
x: <float> -- X pixel coordinate of keypoint
y: <float> -- Y pixel coordinate of keypoint
state: <int> -- Visibility state
}, ...
]
}
```
The `state` entry has three possible values:
* 0 - the keypoint either does not exist or is outside of the image's bounds
* 1 - the keypoint exists inside of the image bounds but cannot be seen because the object is not visible at its location in the image
* 2 - the keypoint exists and the object is visible at its location
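For example, a consumer of the labeler's `KeypointsComputed` event might keep only the joints whose state marks them as visible. The following sketch is illustrative only; it assumes an existing `KeypointLabeler` instance named `keypointLabeler` and mirrors the entry and keypoint structures used by this package's tests.
```
// Illustrative only: log keypoints whose state marks them as visible (state 2).
// Assumes an existing KeypointLabeler instance named keypointLabeler.
keypointLabeler.KeypointsComputed += (frame, entries) =>
{
    foreach (var entry in entries)
    {
        foreach (var keypoint in entry.keypoints)
        {
            if (keypoint.state == 2)
                Debug.Log($"Frame {frame}: instance {entry.instance_id}, joint {keypoint.index} at ({keypoint.x}, {keypoint.y})");
        }
    }
};
```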
The annotation definition, captured once per dataset by the Keypoint Labeler, describes the points being captured and their skeletal connections. These are defined by the [Keypoint Template](#KeypointTemplate).
```
annotation_definition.spec {
template_id: <str> -- The UUID of the template
template_name: <str> -- Human readable name of the template
key_points [ -- Array of joints defined in this template
{
label: <str> -- The label of the joint
index: <int> -- The index of the joint
}, ...
]
skeleton [ -- Array of skeletal connections (which joints have connections between one another) defined in this template
{
joint1: <int> -- The first joint of the connection
joint2: <int> -- The second joint of the connection
}, ...
]
}
```
## Setup
The Keypoint Labeler captures keypoints each frame from each object in the scene that meets the following conditions:
* The object or its children are at least partially visible in the frame
* The _Object Filter_ option on the Keypoint Labeler can be used to also include fully occluded or off-screen objects
* The root object has a `Labeling` component
* The object matches at least one entry in the Keypoint Template by either:
* Containing an Animator with a [humanoid avatar](https://docs.unity3d.com/Manual/ConfiguringtheAvatar.html) whose rig matches a keypoint OR
* Containing children with Joint Label components whose labels match keypoints
For a tutorial on setting up your project for keypoint labeling, see the [Human Pose Labeling and Randomization Tutorial](../HPTutorial/TUTORIAL.md).
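In addition to adding the Keypoint Labeler through the Perception Camera inspector, it can be attached from a script. This is a minimal sketch, assuming an `IdLabelConfig` and a `KeypointTemplate` asset are already assigned (the field names here are hypothetical); it mirrors the way the labeler is constructed in this package's tests.
```
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Minimal sketch: attach a KeypointLabeler to a PerceptionCamera at runtime.
public class KeypointLabelerSetupExample : MonoBehaviour
{
    public IdLabelConfig labelConfig;         // hypothetical field, assign in the Inspector
    public KeypointTemplate keypointTemplate; // hypothetical field, assign in the Inspector

    void Start()
    {
        var perceptionCamera = GetComponent<PerceptionCamera>();
        var keypointLabeler = new KeypointLabeler(labelConfig, keypointTemplate);
        // Optionally also report fully occluded objects instead of only visible ones
        keypointLabeler.objectFilter = KeypointObjectFilter.VisibleAndOccluded;
        perceptionCamera.AddLabeler(keypointLabeler);
    }
}
```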
## Keypoint Template
Keypoint Templates are used to define the keypoints and skeletal connections captured by the Keypoint Labeler. The Keypoint Template takes advantage of Unity's humanoid animation rig, and allows the user to automatically associate template keypoints to animation rig joints. Additionally, the user can choose to ignore the rigged points, or add points not defined in the rig.
A [COCO](https://cocodataset.org/#home) Keypoint Template is included in the Perception package.
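Keypoint Templates are typically authored as assets using the editor described below, but the same data can also be constructed in script, for example in tests. The following sketch mirrors the template construction used in this package's tests and is illustrative only.
```
// Illustrative sketch of a single-keypoint template built in code.
// Requires: using System; using UnityEngine; using UnityEngine.Perception.GroundTruth;
var keypoints = new[]
{
    new KeypointDefinition
    {
        label = "Center",
        associateToRig = false,
        color = Color.black
    }
};
var template = ScriptableObject.CreateInstance<KeypointTemplate>();
template.templateID = Guid.NewGuid().ToString();
template.templateName = "ExampleTemplate";
template.keypoints = keypoints;
template.skeleton = new SkeletonDefinition[0]; // no joint connections in this minimal template
```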
### Editor
The Keypoint Template editor allows the user to create/modify a Keypoint Template. The editor consists of the header information, the keypoint array, and the skeleton array.
![Header section of the keypoint template](../images/keypoint_template_header.png)
<br/>_Header section of the keypoint template_
In the header section, the user can change the name of the template and supply the textures to use for keypoint visualization.
![The keypoint section of the keypoint template](../images/keypoint_template_keypoints.png)
<br/>_Keypoint section of the keypoint template_
The keypoint section allows the user to create and edit keypoints and associate them with Unity animation rig points. Each keypoint record has four fields:
* Label: the name of the keypoint
* Associate To Rig: if true, the keypoint is automatically mapped to the GameObject defined by the rig
* Rig Label: only used when Associate To Rig is true; defines which rig joint to associate with the keypoint
* Color: the RGB color of the keypoint in the visualization
![Skeleton section of the keypoint template](../images/keypoint_template_skeleton.png)
<br/>_Skeleton section of the keypoint template_
The skeleton section allows the user to create connections between joints, defining the skeleton of the labeled object.
#### Animation Pose Label
This asset maps timestamps in an animation to pose labels, which are reported in the `pose` field of the keypoint data.