Jon Hogins
5 years ago
Current commit
bdd23a27
6 changed files with 64 additions and 45 deletions
- README.md (14 changes)
- com.unity.perception/Documentation~/GettingStarted.md (36 changes)
- com.unity.perception/Documentation~/GroundTruth-Labeling.md (21 changes)
- com.unity.perception/Documentation~/SetupSteps.md (2 changes)
- com.unity.perception/Documentation~/index.md (24 changes)
- com.unity.perception/Documentation~/SimulationManager.md (12 changes)
_Note: This document is a work in progress_
You can add a Labeling component to individual GameObjects within a scene, although it is good practice to create a prefab of a GameObject and apply the Labeling component to it.
Multiple labels can be assigned to the same `Labeling`. When generating ground truth that requires a unique label per object, the first label on the `Labeling` that is present anywhere in the `LabelingConfiguration` is used.
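To make that selection rule concrete, here is a minimal sketch in plain C#. The helper name and parameters are hypothetical, not package API; it only illustrates the rule stated above:

```csharp
using System.Collections.Generic;

// Hypothetical helper, not part of the package API: illustrates the rule that
// the first label on the Labeling that appears anywhere in the
// LabelingConfiguration is the one used for single-label ground truth.
static class LabelResolutionExample
{
    public static string ResolveSingleLabel(
        IReadOnlyList<string> labelsOnLabeling,  // labels in Labeling order
        ISet<string> labelsInConfiguration)      // all labels in the configuration
    {
        foreach (var label in labelsOnLabeling)
            if (labelsInConfiguration.Contains(label))
                return label;                    // first match wins
        return null;                             // no match: object is not reported
    }
}
```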
Many labelers require a `Labeling Configuration` asset. This asset specifies the list of all labels to be captured in the dataset, along with extra information used by the various labelers.
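The asset is normally created from the Assets menu, but as a rough sketch it can also be created in an Editor script. This assumes `LabelingConfiguration` is a `ScriptableObject` in the `UnityEngine.Perception.GroundTruth` namespace; the menu path and asset path below are illustrative, and the label entries themselves are configured in the Inspector because their exact type varies across package versions:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;
using UnityEngine.Perception.GroundTruth; // assumed namespace for LabelingConfiguration

static class LabelingConfigurationCreationExample
{
    // Hypothetical menu item; the asset path is illustrative.
    [MenuItem("Assets/Create Example LabelingConfiguration")]
    static void CreateExampleConfiguration()
    {
        var config = ScriptableObject.CreateInstance<LabelingConfiguration>();
        AssetDatabase.CreateAsset(config, "Assets/ExampleLabelingConfiguration.asset");
        AssetDatabase.SaveAssets();
        // Label entries are then added in the Inspector; the entry type and its
        // fields are version-dependent, so they are not scripted here.
    }
}
#endif
```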
Generally, algorithm testing and training requires a single label on an asset for proper identification, such as “chair”, “table”, or “door”. To maximize asset reuse, however, it is useful to give each object multiple labels in a hierarchy.
For example, an asset representing a box of Rice Krispies cereal could be labeled as: food\cereal\kellogs\ricekrispies
If the goal of the algorithm is to identify all objects in a scene that are “food”, that label is available and can be used. Conversely, if the goal is to identify only Rice Krispies cereal within a scene, that label is also available. Depending on the goal of the algorithm, any mix of labels in the hierarchy can be used.
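As a sketch of how this hierarchy could be attached to the prefab at edit time, assuming the `Labeling` component stores its labels as a plain list of strings named `labels` (the field name and namespace may differ by package version):

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth; // assumed namespace for Labeling

static class HierarchicalLabelingExample
{
    // Hypothetical setup helper: applies the cereal hierarchy, most general first.
    public static void ApplyCerealLabels(GameObject cerealBoxPrefab)
    {
        var labeling = cerealBoxPrefab.AddComponent<Labeling>();
        labeling.labels.Add("food");         // most general category
        labeling.labels.Add("cereal");       // narrower category
        labeling.labels.Add("kellogs");      // brand
        labeling.labels.Add("ricekrispies"); // specific asset label
    }
}
```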
# About the Perception SDK

com.unity.perception provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on a handful of camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.

<img src="images/banner2.PNG" align="middle"/>

# Technical details

## Requirements
# com.unity.perception

The Perception package provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on capturing ground truth for camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.

This version of _Perception_ is compatible with Unity Editor 2019.3 and later.

> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.
[Installation instructions](com.unity.perception/Documentation~/SetupSteps.md)

[Setting up your first perception scene](com.unity.perception/Documentation~/GettingStarted.md)
|Feature|Description|
|---|---|
|[Labeling](GroundTruth-Labeling.md)|Component which marks a GameObject and its descendants with a set of labels|
|[Labeling Configuration](GroundTruth-Labeling.md#LabelingConfiguration)|Asset which defines a taxonomy of labels for ground truth generation|
|Perception Camera|Captures RGB images and ground truth from a [Camera](https://docs.unity3d.com/Manual/class-Camera.html)|
|[SimulationManager](SimulationManager.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset|
# SimulationManager

The SimulationManager tracks egos, sensors, annotations, and metrics, combining them into a unified [JSON-based dataset](Schema/Synthetic_Dataset_Schema.md) on disk. While sensors are registered, the SimulationManager ensures that frames are deterministic and run at the appropriate simulation times to let each sensor run at its own rate.
## Sensor scheduling
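As a minimal sketch of the lifecycle described above, assuming sensor registration takes roughly this shape (the `RegisterEgo`/`RegisterSensor` names, parameter names, and ordering are assumptions rather than confirmed API):

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class ScheduledSensorExample : MonoBehaviour
{
    SensorHandle m_SensorHandle;

    void Start()
    {
        // Assumed API shape: register an ego, then a sensor that captures
        // every 0.1 simulation seconds starting at time 0.
        var ego = SimulationManager.RegisterEgo("Example ego");
        m_SensorHandle = SimulationManager.RegisterSensor(
            ego, "camera", "Example sensor", period: 0.1f, firstCaptureTime: 0f);
    }

    void Update()
    {
        // SimulationManager advances simulation time so that each registered
        // sensor is scheduled at its own rate; capture only when asked to.
        if (m_SensorHandle.ShouldCaptureThisFrame)
        {
            // Capture and report sensor data here.
        }
    }
}
```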
## Custom annotations and metrics
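A custom MonoBehaviour can register annotation and metric definitions up front and report values each frame. The sketch below assumes registration and reporting methods of roughly this shape on `SimulationManager` and the camera's `SensorHandle`; the method names, parameters, and IDs are assumptions rather than confirmed API.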
```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Example MonoBehaviour reporting a custom metric and a custom annotation.
// The API shapes used below (RegisterMetricDefinition, RegisterAnnotationDefinition,
// ReportMetric, ReportAnnotationValues) are assumed and may differ in detail.
// Assumes this component lives on the same GameObject as the PerceptionCamera.
public class CustomAnnotationAndMetricReporter : MonoBehaviour
{
    public Light lightSource;
    public GameObject target;

    MetricDefinition m_LightMetricDefinition;
    AnnotationDefinition m_TargetPositionDefinition;

    void Start()
    {
        // Metrics and annotations are registered up front with stable IDs.
        m_LightMetricDefinition = SimulationManager.RegisterMetricDefinition(
            "Light position",
            "The world-space position of the light",
            Guid.Parse("1F6BFF46-F884-4CC5-A878-DB987278FE35"));
        m_TargetPositionDefinition = SimulationManager.RegisterAnnotationDefinition(
            "Target position",
            "The position of the target in the camera's local space",
            id: Guid.Parse("C0B4A22C-0420-4D9F-BAFC-954B8F7B35A7"));
    }

    void Update()
    {
        // Report the light's world position as a JSON array with one element.
        var lightPos = lightSource.transform.position;
        SimulationManager.ReportMetric(m_LightMetricDefinition,
            $@"[{{ ""x"": {lightPos.x}, ""y"": {lightPos.y}, ""z"": {lightPos.z} }}]");

        // Report the target's camera-local position through the camera's
        // SensorHandle, but only on frames where the sensor captures.
        var sensorHandle = GetComponent<PerceptionCamera>().SensorHandle;
        if (sensorHandle.ShouldCaptureThisFrame)
        {
            Vector3 targetPos = transform.worldToLocalMatrix.MultiplyPoint(
                target.transform.position);
            sensorHandle.ReportAnnotationValues(
                m_TargetPositionDefinition,
                new[] { targetPos });
        }
    }
}
```
## Custom sensors