Accurately labeling assets with a predefined taxonomy tells training and testing algorithms which objects in a dataset matter. For example, assets labeled “table” and “chair” give an algorithm the information it needs to learn to identify these objects separately within a scene.
# Labeling Configuration
Semantic segmentation (and other metrics) requires a labeling configuration file, located here:
This file lists all labels currently used in the dataset and the RGB value associated with each. The file can be used as-is or created by the developer. When semantic segmentation output is generated, the per-pixel RGB value can be used to identify the object for the algorithm.
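To illustrate how those per-pixel values map back to labels, below is a minimal C# sketch. It is not part of the Perception API, and the label/color pairs are hypothetical stand-ins for the values your labeling configuration file defines.

```csharp
// Minimal sketch, not part of the Perception API: given a semantic segmentation
// image and a label-to-RGB mapping taken from the labeling configuration, look up
// which label a pixel belongs to. The color values below are hypothetical.
using System.Collections.Generic;
using UnityEngine;

public static class SegmentationLookupExample
{
    // Hypothetical label/color pairs; in practice these come from the labeling configuration file.
    static readonly Dictionary<Color32, string> k_ColorToLabel = new Dictionary<Color32, string>
    {
        { new Color32(255, 0, 0, 255), "table" },
        { new Color32(0, 255, 0, 255), "chair" },
    };

    public static string LabelAtPixel(Texture2D segmentationImage, int x, int y)
    {
        // GetPixel returns a Color; the implicit conversion rounds it to 8-bit channels.
        Color32 color = segmentationImage.GetPixel(x, y);
        return k_ColorToLabel.TryGetValue(color, out var label) ? label : "unlabeled";
    }
}
```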
Labels can be organized in a hierarchy, for example:
* “kellogs” - main descriptor
* “ricekrispies” - sub descriptor
If the goal of the algorithm is to identify all objects in a scene that are “food”, that label is available and can be used. Conversely, if the goal is to identify only Rice Krispies cereal within a scene, that label is also available. Depending on the goal of the algorithm, any mix of labels in the hierarchy can be used at the discretion of the developer.
Note: this labeling hierarchy is a suggestion, not a requirement. Adjust or discard it if your project goals have other requirements.
## Adding Labels to a Unity Asset
Labels are added to Unity assets by attaching the Labeling script to an asset and creating a prefab object.
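The Labeling component is normally added in the Inspector, but the rough sketch below shows the equivalent setup from script. It assumes the `Labeling` MonoBehaviour lives in the `UnityEngine.Perception.GroundTruth` namespace and exposes a `labels` list of strings; exact names may differ between package versions.

```csharp
// Rough sketch, assuming UnityEngine.Perception.GroundTruth.Labeling with a
// `labels` list of strings; exact names may differ between package versions.
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class LabelingSetupExample : MonoBehaviour
{
    void Start()
    {
        // Attach the Labeling component and tag this object with labels from the taxonomy.
        var labeling = gameObject.AddComponent<Labeling>();
        labeling.labels.Add("food");
        labeling.labels.Add("kellogs");
        labeling.labels.Add("ricekrispies");
    }
}
```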
### Asset Organization
* Clone the [Perception](https://github.com/Unity-Technologies/com.unity.perception) repository into an arbitrary directory on disk
Below are two options for getting started with the Perception package. Option 1 is opening the existing test projects included in the repository. Option 2 is creating a new Unity project and integrating the Perception package.
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP. You can open these projects directly with the Unity Editor.
* Navigate to the com.unity.perception folder in your cloned repository and select the package.json file
Once you have a project with the Perception package installed, you can move on to the getting started steps: click [here](Documentation~/GettingStarted.md) to start project setup.
## Suggested IDE Setup
For closest standards conformity and best experience overall, JetBrains Rider or Visual Studio w/ JetBrains Resharper are suggested. For optimal experience, perform the following additional steps:
* To allow navigating to code in all packages included in your project, in your Unity Editor, navigate to `Edit -> Preferences... -> External Tools` and check `Generate all .csproj files.`
* To get automatic feedback and fixups on formatting and naming convention violations, set up Rider/JetBrains with our Unity standard .dotsettings file by following [these instructions](https://github.cds.internal.unity3d.com/unity/com.unity.coding/tree/master/UnityCoding/Packages/com.unity.coding/Coding~/Configs/JetBrains).
* If you use VS Code, install the Editorconfig extension to get automatic code formatting according to our convention
com.unity.perception provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on a handful of camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.
# Using Perception
<!--## Known limitations -->
|Term|Description|
|:---|:---|
|Labeling|MonoBehaviour which marks an object and its descendants with a set of labels|
|Labeling Configuration|Asset which defines a taxonomy of labels used for ground truth generation|
|Perception Camera|Captures RGB images and ground truth on a Unity Camera|
|Scenarios|Contains different sample scenarios of driving conditions for a car to operate in|
Ground truth is an essential part of most datasets for perception tasks. The Perception package generates many common forms of ground truth on top of an extensible ground truth and metric capture framework.
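Ground truth capture is driven by the Perception Camera. The component is normally added to a Camera in the Inspector; the sketch below, assuming the `PerceptionCamera` component and the `UnityEngine.Perception.GroundTruth` namespace from the package, shows the equivalent setup from script.

```csharp
// Minimal sketch: attach a PerceptionCamera to the main camera so it captures RGB
// images and the configured ground truth. Assumes the PerceptionCamera component and
// the UnityEngine.Perception.GroundTruth namespace; setup is normally done in the Inspector.
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class PerceptionCameraSetupExample : MonoBehaviour
{
    void Start()
    {
        var mainCamera = Camera.main;
        if (mainCamera != null && mainCamera.GetComponent<PerceptionCamera>() == null)
            mainCamera.gameObject.AddComponent<PerceptionCamera>();
    }
}
```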