
Merge pull request #83 from Unity-Technologies/docs-tw-review

Docs tw review
Branch: main
GitHub · 4 years ago
Current commit: 4d5cb140
10 changed files with 272 additions and 137 deletions
  1. com.unity.perception/Documentation~/DatasetCapture.md (10 changes)
  2. com.unity.perception/Documentation~/GettingStarted.md (125 changes)
  3. com.unity.perception/Documentation~/PerceptionCamera.md (55 changes)
  4. com.unity.perception/Documentation~/Randomization/Index.md (2 changes)
  5. com.unity.perception/Documentation~/SetupSteps.md (14 changes)
  6. com.unity.perception/Documentation~/TableOfContents.md (18 changes)
  7. com.unity.perception/Documentation~/index.md (30 changes)
  8. com.unity.perception/Documentation~/GroundTruthLabeling.md (28 changes)
  9. com.unity.perception/Documentation~/images/SemanticSegmentationLabelConfig.png (102 changes)
  10. com.unity.perception/Documentation~/GroundTruth-Labeling.md (25 changes)

com.unity.perception/Documentation~/DatasetCapture.md (10 changes)


## Sensor scheduling
While sensors are registered, `DatasetCapture` ensures that frame timing is deterministic and that frames run at the appropriate simulation times, letting each sensor run at its own rate.
Using [Time.CaptureDeltaTime](https://docs.unity3d.com/ScriptReference/Time-captureDeltaTime.html), it also decouples wall clock time from simulation time, allowing the simulation to run as fast as possible.
You can register custom sensors using `DatasetCapture.RegisterSensor()`. The `period` you pass in at registration time determines how often (in simulation time) frames should be scheduled for the sensor to run. The sensor implementation then checks `ShouldCaptureThisFrame` on the returned `SensorHandle` each frame to determine whether it is time for the sensor to perform a capture. `SensorHandle.ReportCapture` should then be called in each of these frames to report the state of the sensor to populate the dataset.
In addition to the common annotations and metrics produced by [PerceptionCamera](PerceptionCamera.md), scripts can produce their own via `DatasetCapture`. You must first register annotation and metric definitions using `DatasetCapture.RegisterAnnotationDefinition()` or `DatasetCapture.RegisterMetricDefinition()`. These return `AnnotationDefinition` and `MetricDefinition` instances which you can then use to report values during runtime.
Annotations and metrics are always associated with the frame they are reported in. They may also be associated with a specific sensor by using the `Report*` methods on `SensorHandle`.
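
The following is a minimal sketch of a custom sensor driven by this scheduling API, assuming an ego registered through `DatasetCapture.RegisterEgo` and a `RegisterSensor` overload that takes a period and a first capture time. Exact argument lists vary between Perception package versions, so treat the parameters below as illustrative rather than definitive.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Hypothetical custom sensor: captures only on the simulation frames that
// DatasetCapture schedules for it. Argument lists are illustrative; check
// the RegisterSensor and ReportCapture overloads in your package version.
public class MyCustomSensor : MonoBehaviour
{
    SensorHandle m_SensorHandle;

    void Start()
    {
        var ego = DatasetCapture.RegisterEgo("Main ego");
        // "camera" is the modality, 0.1f is the period in simulation seconds
        // (10 Hz), and 0f is the simulation time of the first capture.
        m_SensorHandle = DatasetCapture.RegisterSensor(
            ego, "camera", "My custom sensor", 0.1f, 0f);
    }

    void Update()
    {
        // Skip frames that are not scheduled for this sensor.
        if (!m_SensorHandle.ShouldCaptureThisFrame)
            return;

        // Capture the sensor's state here and report it, for example with
        // m_SensorHandle.ReportCapture(...), so it is written to the dataset.
        Debug.Log($"Capturing custom sensor data at frame {Time.frameCount}");
    }
}
```

Annotation and metric definitions registered through `DatasetCapture.RegisterAnnotationDefinition()` and `DatasetCapture.RegisterMetricDefinition()` would be reported from the same `ShouldCaptureThisFrame` block.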

com.unity.perception/Documentation~/GettingStarted.md (125 changes)


# Getting started with Perception
This walkthrough shows you how to create a new scene in order to generate perception datasets including segmentation data and image captures.
To install Perception in your project, follow the [Installing the Perception package in your project](SetupSteps.md) guide.
You can skip this step for HDRP projects.
1. Select your project's **ScriptableRenderer** asset and open the Inspector window. In most projects it is located at `Assets/Settings/ForwardRenderer.asset`.
2. Select **Add Renderer Feature** and **Ground Truth Renderer Feature**.
![ForwardRenderer](images/ScriptableRendererStep.png)
<br/>_ForwardRenderer_
1. Create a new scene using **File** > **New Scene**
2. Ctrl+S to save the Scene and give it a name
3. Select the Main Camera and reset the **Position** transform to 0
4. In the Hierarchy window select the **Main Camera**
    1. In the Main Camera's Inspector window select **Add Component**
    2. Add a Perception Camera component
![Perception Camera component](images/PerceptionCameraFinished.png)
<br/>_Perception Camera component_
1. Create a cube by right-clicking in the Hierarchy window and selecting **3D Object** > **Cube**
2. Create two more cubes this way
4. Position the cubes in front of the Main Camera
![Position of the cubes in front of the Main Camera](images/CompletedScene.png)
<br/>_Position of the cubes in front of the Main Camera_
5. On each cube, in the Inspector window, add a Labeling component
    1. Select **Add (+)**
    2. In the text field add the name of the object, for example Crate. Unity uses this label in the semantic segmentation images.
![A labeling component, for example "Crate"](images/LabeledObject.png)
<br/>_A labeling component, for example "Crate"_
6. Create and set up an IdLabelConfig
    1. In the Project window select **Add (+)**, then **Perception** > **ID Label Config**
    2. In the Assets folder, select the new **IdLabelConfig**
    3. In the Inspector, select **Add to list (+)** three times
    4. In the three label text fields, add the text (Crate, Cube and Box) from the Labeling script on the objects you created in the Scene
![IdLabelConfig with three labels](images/IDLabelingConfigurationFinished.png)
<br/>_IdLabelConfig with three labels_
7. Create and set up a SemanticSegmentationLabelingConfiguration
    1. In the Project panel select **Add (+)**, then **Perception** > **Semantic Segmentation Label Config**
    2. In the Assets folder, select the new SemanticSegmentationLabelingConfiguration
    3. In the Inspector, select **Add to list (+)** three times
    4. In the three label text fields, add the text (Cube, Crate and Box) from the Labeling script on the objects you created in the Scene
![SemanticSegmentationLabelConfig with three labels and three colors](images/SemanticSegmentationLabelConfig.png)
<br/>_SemanticSegmentationLabelConfig with three labels and three colors_
8. In the Hierarchy window select the Main Camera
9. Add the IdLabelConfig to the Perception Camera script
    1. In the Perception Camera script, find the following three Camera Labelers: BoundingBox2DLabeler, ObjectCountLabeler and RenderedObjectInfoLabeler. For each Camera Labeler, in the Id Label Config field (or Label Config field, for the ObjectCountLabeler Camera Labeler), click the circle button.
    2. In the Select IdLabelConfig window, select the **IdLabelConfig** you created.
10. Add the SemanticSegmentationLabelingConfiguration to the Perception Camera script
    1. In the Perception Camera script, find the SemanticSegmentationLabeler Camera Labeler. In its Label Config field, select the circle button.
    2. In the Select SemanticSegmentationLabelConfig window, select the **SemanticSegmentationLabelConfig** you created.
![Perception Camera Labelers](images/MainCameraLabelConfig.png)
<br/>_Perception Camera Labelers_
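
The labeling steps above can also be scripted. The hedged sketch below adds a `Labeling` component with one label to each cube; it assumes the component exposes a `labels` list and that the cubes use the hypothetical names Cube, Box and Crate from this walkthrough.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch only: mirrors step 5 of this walkthrough from code instead of the
// Inspector. Object names and the `labels` field are assumptions.
public class LabelSceneObjects : MonoBehaviour
{
    void Start()
    {
        foreach (var name in new[] { "Cube", "Box", "Crate" })
        {
            var go = GameObject.Find(name);
            if (go == null)
                continue;

            // `labels` is the same list of strings shown in the Inspector.
            var labeling = go.AddComponent<Labeling>();
            labeling.labels.Add(name);
        }
    }
}
```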
<img src="images/MainCameraLabelConfig.PNG" align="middle"/>
1. Press play in the editor, allow the scene to run for a few seconds, and then exit playmode
2. In the console log you will see a Shutdown in Progress message that will show a file path to the location of the generated dataset.
1. In the Editor, press the play button, allow the scene to run for 10 seconds, then exit Play mode.
2. In the console log you see a Shutdown in Progress message that shows a file path to the location of the generated dataset.
3. In the dataset folder you will find the following data:
3. You find the following data in the dataset folder:
<img src="images/rgb_2.png" align="middle"/>
![RGB capture](images/rgb_2.png)
<br/>_RGB capture_
_RGB image_
![Semantic segmentation image](images/segmentation_2.png)
<br/>_Semantic segmentation image_
<img src="images/segmentation_2.png" align="middle"/>
## Optional Step: realtime visualization of labelers
_Example semantic segmentation image_
The Perception package comes with the ability to show realtime results of the labeler in the scene. To enable this capability:
## Optional Step: Realtime visualization of labelers
![Example of Perception running with show visualizations enabled](images/visualized.png)
<br/>_Example of Perception running with show visualizations enabled_
The perception package now comes with the ability to show realtime results of the labeler in the scene. To enable this capability:
1. To use the visualizer, select the Main Camera, and in the Inspector window, in the Perception Camera component, enable **Show Visualizations**. This enables the built-in labelers which includes segmentation data, 2D bounding boxes, pixel and object counts.
2. Enabling the visualizer creates new UI controls in the Editor's Game view. These controls allow you to control each of the individual visualizers. You can enable or disable individual visualizers. Some visualizers also include controls that let you change their output.
<img src="images/visualized.png" align="middle"/>
_Example of perception running with show visualizations on_
![Visualization controls in action](images/controls.gif)
<br/>_Visualization controls in action_
1. To use the visualizer, verify that *Show Visualizations* is checked on in the Inspector pane. This turns on the built in labelers which includes segmentation data, 2D bounding boxes, pixel and object counts.
2. Turning on the visualizer creates new UI controls in the editor's game view. These controls allow you to atomically control each of the individual visualizers. Each individual can be turned on/off on their own. Some visualizers also include controls to change their output.
<img src="images/controls.gif" align="middle"/>
_Visualization controls in action_
***Important Note:*** The perception package takes advantage of asynchronous processing to ensure reasonable frame rates of a scene. A side effect of realtime visualization is that the labelers have to be applied to the capture in its actual frame, which will potentially adversely affect the scene's framerate.
**Important Note:** The Perception package uses asynchronous processing to ensure reasonable frame rates of a scene. A side effect of real-time visualization is that the labelers must be applied to the capture in its actual frame, which potentially adversely affects the Scene's framerate.

com.unity.perception/Documentation~/PerceptionCamera.md (55 changes)


# The Perception Camera component
The Perception Camera component ensures that the [Camera](https://docs.unity3d.com/Manual/class-Camera.html) runs at deterministic rates. It also ensures that the Camera uses [DatasetCapture](DatasetCapture.md) to capture RGB and other Camera-related ground truth in the [JSON dataset](Schema/Synthetic_Dataset_Schema.md). You can use the Perception Camera component on the High Definition Render Pipeline (HDRP) or the Universal Render Pipeline (URP).
<img src="images/PerceptionCamera.png" align="middle"/>
![Perception Camera component](images/PerceptionCameraFinished.png)
<br/>_Perception Camera component_
| Property | Function |
|---|---|
| Description | A description of the Camera to be registered in the JSON dataset. |
| Period | The amount of simulation time in seconds between frames for this Camera. For more information on sensor scheduling, see [DatasetCapture](DatasetCapture.md). |
| Start Time | The simulation time at which to run the first frame. This time offsets the period, which allows multiple Cameras to run at the correct times relative to each other. |
| Capture Rgb Images | When you enable this property, Unity captures RGB images as PNG files in the dataset each frame. |
| Camera Labelers | A list of labelers that generate data derived from this Camera. |
## Camera labelers
Camera labelers capture data related to the Camera in the JSON dataset. You can use this data to train models and for dataset statistics. The Perception package provides several Camera labelers, and you can derive from the CameraLabeler class to define more labelers.
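
As an illustration of how the pieces fit together in code, the sketch below attaches two of the built-in labelers to a Perception Camera from a script. It assumes `PerceptionCamera.AddLabeler` and labeler constructors that take the corresponding label config assets; verify both against the API of the package version you are using.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: configure labelers at runtime rather than through the Inspector.
// AddLabeler and the config-taking constructors are assumptions to verify.
[RequireComponent(typeof(PerceptionCamera))]
public class AttachLabelers : MonoBehaviour
{
    // Assign these label config assets in the Inspector.
    public IdLabelConfig idLabelConfig;
    public SemanticSegmentationLabelConfig semanticSegmentationLabelConfig;

    void Start()
    {
        var perceptionCamera = GetComponent<PerceptionCamera>();

        // 2D bounding boxes for every visible object with a resolved label.
        perceptionCamera.AddLabeler(new BoundingBox2DLabeler(idLabelConfig));

        // Per-pixel semantic segmentation images.
        perceptionCamera.AddLabeler(
            new SemanticSegmentationLabeler(semanticSegmentationLabelConfig));
    }
}
```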
### SemanticSegmentationLabeler
![Example semantic segmentation image from a modified SynthDet project](images/semantic_segmentation.png)
<br/>_Example semantic segmentation image from a modified [SynthDet](https://github.com/Unity-Technologies/SynthDet) project_
The SemanticSegmentationLabeler generates a 2D RGB image with the attached Camera. Unity draws objects in the color you associate with the label in the SemanticSegmentationLabelingConfiguration. If Unity can't find a label for an object, it draws it in black.
### BoundingBox2DLabeler
![Example bounding box visualization from SynthDet generated by the SynthDet_Statistics Jupyter notebook](images/bounding_boxes.png)
<br/>_Example bounding box visualization from [SynthDet](https://github.com/Unity-Technologies/SynthDet) generated by the `SynthDet_Statistics` Jupyter notebook_
The BoundingBox2DLabeler produces 2D bounding boxes for each visible object with a label you define in the IdLabelConfig. Unity calculates bounding boxes using the rendered image, so it only excludes occluded or out-of-frame portions of the objects.
### ObjectCountLabeler
```
{
"label_id": 25,

```
_Example object count for a single label_
The ObjectCountLabeler records object counts for each label you define in the IdLabelConfig. Unity only records objects that have at least one visible pixel in the Camera frame.
### RenderedObjectInfoLabeler
```
{
"label_id": 24,

```
_Example rendered object info for a single object_
The RenderedObjectInfoLabeler records a list of all objects visible in the Camera image, including each object's instance ID, resolved label ID and visible pixels. If Unity cannot resolve objects to a label in the IdLabelConfig, it does not record these objects.
Ground truth is not compatible with all rendering features, especially those that modify the visibility or shape of objects in the frame.
* Unity does not run vertex and geometry shaders
* Unity does not consider transparency and considers all geometry opaque
* Unity does not run post-processing effects, except built-in lens distortion in URP and HDRP
If you discover more incompatibilities, please open an issue in the [Perception GitHub repository](https://github.com/Unity-Technologies/com.unity.perception/issues).

com.unity.perception/Documentation~/Randomization/Index.md (2 changes)


# Overview
*NOTE: The Perception package's randomization toolset is currently marked as experimental and is subject to change.*
The randomization toolset simplifies randomizing aspects of generating synthetic data. It facilitates exposing parameters for randomization, offers samplers to pick random values from parameters, and provides scenarios to define a full randomization process. Each of these also allows for custom implementations to fit particular randomization needs.

com.unity.perception/Documentation~/SetupSteps.md (14 changes)


1. Install the latest [2019.3 Unity Editor](https://unity.com/releases/2019-4)
1. Create a new HDRP or URP project, or open an existing project
1. Select **Window** > **Package Manager**
1. In the Package Manager window, in the top left-hand corner, select **Add (+)**
1. Select **Add package from git URL...**
1. Enter `com.unity.perception` and select **Add**
If you want a specific version of the package, append the version to the end of the "git URL". For example: `com.unity.perception@0.1.0-preview.4`
To install the Perception package from a local clone of the repository, see [installing a local package](https://docs.unity3d.com/Manual/upm-ui-local.html) in the Unity manual.
When you have completed the installation, continue with the [getting started steps](GettingStarted.md).

com.unity.perception/Documentation~/TableOfContents.md (18 changes)


* [Installation instructions](SetupSteps.md)
* [Getting started](GettingStarted.md)
* [Labeling](GroundTruthLabeling.md)
* [Perception Camera](PerceptionCamera.md)
* [Dataset capture](DatasetCapture.md)
* [Randomization](Randomization/index.md)
  * [Parameters](Randomization/Parameters.md)
  * [Samplers](Randomization/Samplers.md)
  * [Scenarios](Randomization/Scenarios.md)
  * [Tutorial](Randomization/Tutorial.md)

com.unity.perception/Documentation~/index.md (30 changes)


<img src="images/banner2.PNG" align="middle"/>
# Unity Perception package (com.unity.perception)
The Perception package provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on capturing ground truth for Camera-based use cases. In the future, the Perception package will include other types of sensors and machine learning tasks.
[Installation instructions](SetupSteps.md)

## Preview package
This package is available as a preview, so it is not ready for production use. The features and documentation in this package might change before it is verified for release.
## Example projects using Perception

[SynthDet](https://github.com/Unity-Technologies/SynthDet) is an end-to-end solution for training a 2D object detection model using synthetic data.
### Unity Simulation Smart Camera example
The [Unity Simulation Smart Camera Example](https://github.com/Unity-Technologies/Unity-Simulation-Smart-Camera-Outdoor) illustrates how Perception could be used in a smart city or autonomous vehicle simulation. You can generate datasets locally or at scale in [Unity Simulation](https://unity.com/products/unity-simulation).
|Feature|Description|
|---|---|
|[Labeling](GroundTruth-Labeling.md)|A component that marks a GameObject and its descendants with a set of labels|
|[LabelConfig](GroundTruth-Labeling.md#LabelConfig)|An asset that defines a taxonomy of labels for ground truth generation|
|[Perception Camera](PerceptionCamera.md)|Captures RGB images and ground truth from a [Camera](https://docs.unity3d.com/Manual/class-Camera.html).|
|[DatasetCapture](DatasetCapture.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset.|
|[Randomization (Experimental)](Randomization/Index.md)|The Randomization tool set lets you integrate domain randomization principles into your simulation.|
## Known issues
* The Linux Editor 2019.4.7f1 and 2019.4.8f1 might hang when importing HDRP-based Perception projects. For Linux Editor support, use 2019.4.6f1 or 2020.1

com.unity.perception/Documentation~/GroundTruthLabeling.md (28 changes)


# Labeling
Many labelers require mapping the objects in the view to the values recorded in the dataset. As an example, Semantic Segmentation needs to determine the color to draw each object in the segmentation image.
This mapping is accomplished for a GameObject by:
* Finding the nearest Labeling component attached to the object or its parents.
* Finding the first label in the Labeling component that is present anywhere in the Labeler's Label Config.
Unity uses the resolved Label Entry from the Label Config to produce the final output.
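
The sketch below is a rough illustration of that resolution order, not the package's internal implementation. It assumes the `Labeling` component's `labels` list and uses a plain set of strings to stand in for the labels contained in a Label Config asset.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Illustrative only: mirrors the label-resolution rules described above.
public static class LabelResolutionExample
{
    // configLabels stands in for the labels listed in a Label Config asset.
    public static string ResolveLabel(GameObject obj, HashSet<string> configLabels)
    {
        // 1. Find the nearest Labeling component on the object or its parents.
        var labeling = obj.GetComponentInParent<Labeling>();
        if (labeling == null)
            return null; // unlabeled: e.g. drawn in black by semantic segmentation

        // 2. Take the first label on that component that the config also contains.
        foreach (var label in labeling.labels)
        {
            if (configLabels.Contains(label))
                return label;
        }
        return null;
    }
}
```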
## Labeling component
The Labeling component associates a list of string-based labels with a GameObject and its descendants. A Labeling component on a descendant overrides its parent's labels.
## Label Config
Many labelers require a Label Config asset. This asset specifies a list of all labels to be captured in the dataset along with extra information used by the various labelers.
## Best practices
Generally algorithm testing and training requires a single label on an asset for proper identification such as "chair", "table" or "door". To maximize asset reuse, however, it is useful to give each object multiple labels in a hierarchy.
For example, you could label an asset representing a box of Rice Krispies as `food\cereal\kellogs\ricekrispies`
* "food": type
* "cereal": subtype
* "kellogs": main descriptor
* "ricekrispies": sub descriptor
If the goal of the algorithm is to identify all objects in a Scene that are "food", that label is available and can be used. Conversely if the goal is to identify only Rice Krispies cereal within a Scene that label is also available. Depending on the goal of the algorithm, you can use any mix of labels in the hierarchy.
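
As a hedged illustration of that convention, the snippet below gives one asset the whole hierarchy of labels (again assuming the `Labeling` component's `labels` list), so the same object satisfies either a broad "food" label config or a narrow "ricekrispies" one.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: label one asset at several levels of specificity so it can be
// reused by label configs that target different granularities.
public class LabelCerealBox : MonoBehaviour
{
    void Start()
    {
        var labeling = gameObject.AddComponent<Labeling>();
        labeling.labels.Add("food");         // type
        labeling.labels.Add("cereal");       // subtype
        labeling.labels.Add("kellogs");      // main descriptor
        labeling.labels.Add("ricekrispies"); // sub descriptor
    }
}
```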

com.unity.perception/Documentation~/images/SemanticSegmentationLabelConfig.png (102 changes)

Width: 454 | Height: 245 | Size: 30 KiB

com.unity.perception/Documentation~/GroundTruth-Labeling.md (25 changes)


# Labeling
Many labelers require mapping the objects in the view to the values recorded in the dataset. As an example, Semantic Segmentation needs to determine the color to draw each object in the segmentation image.
This mapping is accomplished for a GameObject by:
* Finding the nearest Labeling component attached to the object or its ancestors.
* Find the first label in the Labeling which is present anywhere in the Labeler's Label Config.
* The resolved Label Entry from the Label Config is used to produce the final output.
## Labeling component
The `Labeling` component associates a list of string-based labels with a GameObject and its descendants. A `Labeling` component on a descendant overrides its parent's labels.
## Label Config
Many labelers require a `Label Config` asset. This asset specifies a list of all labels to be captured in the dataset along with extra information used by the various labelers.
## Best practices
Generally algorithm testing and training requires a single label on an asset for proper identification such as “chair”, “table”, or “door". To maximize asset reuse, however, it is useful to give each object multiple labels in a hierarchy.
For example, an asset representing a box of Rice Krispies cereal could be labeled as `food\cereal\kellogs\ricekrispies`
* “food” - type
* “cereal” - subtype
* “kellogs” - main descriptor
* “ricekrispies” - sub descriptor
If the goal of the algorithm is to identify all objects in a scene that is “food” that label is available and can be used. Conversely if the goal is to identify only Rice Krispies cereal within a scene that label is also available. Depending on the goal of the algorithm any mix of labels in the hierarchy can be used.