
Sweeping updates to docs.

/update-setup-steps
Jon Hogins, 5 years ago
Commit bdd23a27
6 files changed, with 64 insertions and 45 deletions
  1. README.md (14 changes)
  2. com.unity.perception/Documentation~/GettingStarted.md (36 changes)
  3. com.unity.perception/Documentation~/GroundTruth-Labeling.md (21 changes)
  4. com.unity.perception/Documentation~/SetupSteps.md (2 changes)
  5. com.unity.perception/Documentation~/index.md (24 changes)
  6. com.unity.perception/Documentation~/SimulationManager.md (12 changes)

README.md (14 changes)


[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE.md)
> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.
[Installation instructions](com.unity.perception/Documentation~/SetupSteps.md)
[Setting up your first perception scene](com.unity.perception/Documentation~/GettingStarted.md)
[Perception manual](com.unity.perception/Documentation~/index.md)
## Local development
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP.
### Suggested IDE Setup
For closest standards conformity and the best overall experience, JetBrains Rider or Visual Studio with JetBrains ReSharper are suggested. Whichever you choose, perform the following additional steps:
* To allow navigating to code in all packages included in your project, in your Unity Editor, navigate to `Edit -> Preferences... -> External Tools` and check `Generate all .csproj files.`
* To get automatic feedback and fixups on formatting and naming convention violations, set up Rider/JetBrains with our Unity standard .dotsettings file by following [these instructions](https://github.cds.internal.unity3d.com/unity/com.unity.coding/tree/master/UnityCoding/Packages/com.unity.coding/Coding~/Configs/JetBrains).

com.unity.perception/Documentation~/GettingStarted.md (36 changes)


# Getting Started with Perception
This walkthrough provides step-by-step instructions for creating a new scene that uses Perception features to generate datasets, including RGB image captures and semantic segmentation data.
## Step 1: Create a scene and camera
1. Create a new scene using File -> New Scene
2. `ctrl+s` to save the scene and give it a name
3. Select the Main Camera and reset the Position transform to 0
4. In the inspector panel of the Main Camera, select Add Component and add the **Perception Camera** component

## Step 2: Create labeled objects
1. Create a cube by right-clicking in the Hierarchy window and selecting 3D Object -> Cube
2. Create 2 more cubes this way
3. Change the names of the cubes to Cube, Box, and Crate
4. Position the cubes in front of the Main Camera
5. On each cube, from the inspector panel, add a **Labeling** component
6. In the text field add the name of the object, e.g. Crate. This will be the label used in the semantic segmentation images
7. In the Project panel, right click -> Perception -> Labeling Configuration
8. Select the new **Labeling Configuration** and make sure the labels all have different values; for this example, use increments of 10,000 so they show up as very distinct colors in the segmentation images
<img src="images/LabelingConfigurationFinished.PNG" align="middle"/>
9. Select the Main Camera in the Hierarchy panel
10. In the Perception Camera script, add the Labeling Configuration created in the previous step to the Labeling Configuration field
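
If you prefer to do this setup from code, the sketch below recreates Step 2 in a script. It assumes the `Labeling` component exposes a public `labels` list of strings and lives in the `UnityEngine.Perception.GroundTruth` namespace; both may differ between package versions.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: creates and labels the three cubes from Step 2 programmatically.
public class CreateLabeledCubes : MonoBehaviour
{
    void Start()
    {
        var names = new[] { "Cube", "Box", "Crate" };
        for (var i = 0; i < names.Length; i++)
        {
            var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.name = names[i];
            // Spread the cubes out in front of the Main Camera.
            cube.transform.position = new Vector3(i * 2f - 2f, 0f, 5f);
            // Each label must also appear in the Labeling Configuration asset.
            cube.AddComponent<Labeling>().labels.Add(names[i]);
        }
    }
}
```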

## Step 3: Run and verify the dataset
1. Press play in the editor, allow the scene to run for a few seconds, and then exit playmode
2. In the console log you will see a Shutdown in Progress message that shows the file path to the location of the generated dataset
3. The file path is the Application Persistent Path + `/DefaultCompany/UnityTestFramework/<Hash Key>`
>Example file path on a Windows PC: `C:/Users/<User Name>/AppData/LocalLow/DefaultCompany/UnityTestFramework\2e10ec21-9d97-4cee-b5a2-7e95e299afa4\RGB18f61842-ef8d-4b31-acb5-cb1da36fb7b1`
4. In the dataset folder you will find the following data:
<img src="images/rgb_2.png" align="middle"/>
<img src="images/segmentation_2.png" align="middle"/>

com.unity.perception/Documentation~/GroundTruth-Labeling.md (21 changes)


_Note: This document is a work in progress_
You can add a Labeling component to individual GameObjects within a scene, although it is good practice to create a prefab of a GameObject and apply the Labeling component to it.

Multiple labels can be assigned to the same `Labeling`. When generating ground truth that requires a single unique label per object, the first label on the `Labeling` that is present anywhere in the `LabelingConfiguration` is used.
## Labeling Configuration
Many labelers require a `Labeling Configuration` asset. This asset specifies the list of all labels to be captured in the dataset for a labeler, along with extra information used by the various labelers.
Generally, algorithm testing and training requires a single label on an asset for proper identification, such as “chair”, “table”, or “door”. To maximize asset reuse, however, it is useful to give each object multiple labels in a hierarchy.
For example, an asset representing a box of Rice Krispies cereal could be labeled as: food\cereal\kellogs\ricekrispies
If the goal of the algorithm is to identify all objects in a scene that are “food”, that label is available and can be used. Conversely, if the goal is to identify only Rice Krispies cereal within a scene, that label is also available. Depending on the goal of the algorithm, any mix of labels in the hierarchy can be used.
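
As a sketch of how such a hierarchy looks on a `Labeling` component, assuming the component exposes a public `labels` list as in current package versions:

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: assigns a label hierarchy, most general label first, most specific last.
public class CerealBoxLabels : MonoBehaviour
{
    void Start()
    {
        var labeling = gameObject.AddComponent<Labeling>();
        labeling.labels.Add("food");
        labeling.labels.Add("cereal");
        labeling.labels.Add("kellogs");
        labeling.labels.Add("ricekrispies");
    }
}
```

A ground truth pass that needs a single label per object then uses whichever of these labels appears in the active `LabelingConfiguration`, as described above.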

com.unity.perception/Documentation~/SetupSteps.md (2 changes)


# Using Perception in your project
* Clone the [Perception](https://github.com/Unity-Technologies/com.unity.perception) repository
* Install the latest [2019.3 Unity editor](https://unity.com/releases/2019-3)

com.unity.perception/Documentation~/index.md (24 changes)


# com.unity.perception
The Perception package provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on capturing ground truth for camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.

<img src="images/banner2.PNG" align="middle"/>

> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.

[Installation instructions](SetupSteps.md)

[Setting up your first perception scene](GettingStarted.md)

|Feature|Description|
|---|---|
|[Labeling](GroundTruth-Labeling.md)|Component which marks a GameObject and its descendants with a set of labels|
|[Labeling Configuration](GroundTruth-Labeling.md#LabelingConfiguration)|Asset which defines a taxonomy of labels for ground truth generation|
|Perception Camera|Captures RGB images and ground truth from a [Camera](https://docs.unity3d.com/Manual/class-Camera.html)|
|[SimulationManager](SimulationManager.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset|

# Technical details
## Requirements
This version of _Perception_ is compatible with Unity Editor 2019.3 and later.

com.unity.perception/Documentation~/SimulationManager.md (12 changes)


# SimulationManager
The SimulationManager tracks egos, sensors, annotations, and metrics, combining them into a unified [JSON-based dataset](Schema/Synthetic_Dataset_Schema.md) on disk. While sensors are registered, the SimulationManager ensures that frames are deterministic and run at the appropriate simulation times to let each sensor run at its own rate.
## Sensor scheduling
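A minimal sketch of how a sensor takes part in this scheduling, using the registration API as it existed around this version of the package; the exact signatures of `RegisterEgo`, `RegisterSensor`, and `ShouldCaptureThisFrame` should be checked against the package source:

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: registers a sensor that requests a capture every 0.5 seconds of
// simulation time. The SimulationManager advances simulation time so that
// frames land on each registered sensor's schedule.
public class ScheduledSensor : MonoBehaviour
{
    SensorHandle m_SensorHandle;

    void Start()
    {
        var ego = SimulationManager.RegisterEgo("Main ego");
        // Arguments: ego, modality, description, period, first capture time.
        m_SensorHandle = SimulationManager.RegisterSensor(
            ego, "custom", "Example scheduled sensor", 0.5f, 0f);
    }

    void Update()
    {
        if (m_SensorHandle.ShouldCaptureThisFrame)
        {
            // Capture and report data for this frame here.
        }
    }
}
```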
## Custom annotations and metrics
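The stub below is filled in with a hedged sketch of reporting a custom metric each frame. The `RegisterMetricDefinition` and `ReportMetric` calls reflect the SimulationManager API around this version and are illustrative rather than authoritative: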
```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Example MonoBehaviour: reports a target object's world position as a custom metric.
public class TargetPositionMetric : MonoBehaviour
{
    public GameObject target;
    MetricDefinition m_TargetPositionDef;

    void Start()
    {
        // Register the metric once so it is listed in the dataset's definitions.
        m_TargetPositionDef = SimulationManager.RegisterMetricDefinition(
            "Target Position", "The world position of the target object");
    }

    void Update()
    {
        var p = target.transform.position;
        // Reported values are serialized into the JSON dataset for the current frame.
        SimulationManager.ReportMetric(m_TargetPositionDef, new[] { p.x, p.y, p.z });
    }
}
```
## Custom sensors