
#48: Final touches on 0.9.0.preview.1 release

/0.9.0.preview.1_staging
GitHub Enterprise · 3 years ago
Current commit
6d87d934
Showing 9 changed files with 51 additions and 41 deletions.

  1. README.md (4 changes)
  2. TestProjects/PerceptionURP/Assets/Settings/ForwardRenderer.asset (16 changes)
  3. com.unity.perception/Documentation~/SetupSteps.md (2 changes)
  4. com.unity.perception/Documentation~/Tutorial/Phase1.md (58 changes)
  5. com.unity.perception/Documentation~/Tutorial/Phase2.md (2 changes)
  6. com.unity.perception/Editor/Randomization/PropertyDrawers/AssetSourceDrawer.cs (2 changes)
  7. com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetListElement.cs (4 changes)
  8. com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetListItemElement.cs (2 changes)
  9. com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetSourceElement.cs (2 changes)

README.md (4 changes)


**[FAQ](com.unity.perception/Documentation~/FAQ/FAQ.md)**
Check out our FAQ for a list of common questions, tips, tricks, and some sample code.
-**[Verifying Datasets with Dataset Insights](com.unity.perception/Documentation~/DatasetInsights.md)**
+**[Verifying Datasets with Dataset Insights](com.unity.perception/Documentation~/Tutorial/DatasetInsights.md)**
Introduction to Unity's [Dataset Insights](https://github.com/Unity-Technologies/datasetinsights) – a python package for downloading, parsing and analyzing synthetic datasets.
## Documentation

* To allow navigating to code in all packages included in your project, in your Unity Editor, navigate to `Edit -> Preferences... -> External Tools` and check `Generate all .csproj files.`
## Known issues
-* The Linux Editor 2019.4.7f1 and 2019.4.8f1 might hang when importing HDRP-based Perception projects. For Linux Editor support, use 2019.4.6f1 or 2020.1.
+* The Linux Editor 2019.4.7f1 and 2019.4.8f1 might hang when importing HDRP-based Perception projects. For Linux Editor support, use 2020.3 or 2021.2.
* Projects that use the Perception package on Windows or OS X will have a dependency for Python for Unity added to their manifest, in order for the new Dataset Visualizer tool to work. This tool and Python for Unity are not supported on Linux, therefore this dependency should be removed from the project's manifest file if the project is saved on Windows or OSX and opened on Linux.
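For illustration, the dependency to remove would appear in `Packages/manifest.json` roughly as in the sketch below; the Python for Unity package name and both version numbers are assumptions here, so check your own manifest for the exact entry. Deleting the `com.unity.scripting.python` line before opening the project on Linux avoids the unresolved dependency.

```json
{
  "dependencies": {
    "com.unity.perception": "0.9.0-preview.1",
    "com.unity.scripting.python": "4.0.0-exp.5"
  }
}
```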
## License

TestProjects/PerceptionURP/Assets/Settings/ForwardRenderer.asset (16 changes)


%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!114 &-2415090886241764546
MonoBehaviour:
m_ObjectHideFlags: 0
m_CorrespondingSourceObject: {fileID: 0}
m_PrefabInstance: {fileID: 0}
m_PrefabAsset: {fileID: 0}
m_GameObject: {fileID: 0}
m_Enabled: 1
m_EditorHideFlags: 0
m_Script: {fileID: 11500000, guid: 780f1bb8e775c4245b86116069a82828, type: 3}
m_Name: Ground Truth
m_EditorClassIdentifier:
m_Active: 1
--- !u!114 &11400000
MonoBehaviour:
m_ObjectHideFlags: 0

m_EditorClassIdentifier:
m_RendererFeatures:
-- {fileID: 5741507687788441411}
-m_RendererFeatureMap:
+- {fileID: -2415090886241764546}
+m_RendererFeatureMap: 3ebb4679a2df7bde
postProcessData: {fileID: 11400000, guid: 41439944d30ece34e96484bdb6645b55, type: 2}
xrSystemData: {fileID: 11400000, guid: 60e1133243b97e347b653163a8c01b64, type: 2}
shaders:

com.unity.perception/Documentation~/SetupSteps.md (2 changes)


Note that although the Perception package is compatible with both URP and HDRP, Unity Simulation currently only supports URP projects, therefore a URP project is recommended.
-If you want a specific version of the package, append the version to the end of the "git URL". Ex. `com.unity.perception@0.8.0-preview.1`
+If you want a specific version of the package, append the version to the end of the "git URL". Ex. `com.unity.perception@0.8.0-preview.4`
To install from a local clone of the repository, see [installing a local package](https://docs.unity3d.com/Manual/upm-ui-local.html) in the Unity manual.

com.unity.perception/Documentation~/Tutorial/Phase1.md (58 changes)


# Perception Tutorial
## Phase 1: Setup and Basic Randomizations
-In this phase of the Perception tutorial, you will start from downloading and installing Unity Editor and the Perception package. You will then use our sample assets and provided components to easily generate a synthetic dataset for training an object-detection model.
+In this phase of the Perception tutorial, you will start by downloading and installing Unity Editor and the Perception package. You will then use our sample assets and provided components to easily generate a synthetic dataset for training an object-detection model.
-Through-out the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial. This is while the rest of the text will provide additional context and explanation around the actions. If in a hurry, you can just follow the actions!
+Throughout the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial. This is while the rest of the text will provide additional context and explanation around the actions. If in a hurry, you can just follow the actions!
Steps included in this phase of the tutorial:

* **:green_circle: Action**: Make sure the _**Linux Build Support (Mono)**_ and _**Visual Studio**_ installation options are checked when selecting modules during installation.
-When you first run Unity, you will be asked to open an existing project, or create a new one.
+When you first run Unity, you will be asked to open an existing project or create a new one.
-* **:green_circle: Action**: Open Unity and create a new project using the Universal Render Pipeline. Name your new project _**Perception Tutorial**_, and specify a desired location as shown below.
+* **:green_circle: Action**: Open Unity and create a new project using the Universal Render Pipeline. Name your new project _**Perception Tutorial**_, and specify the desired location as shown below.
<p align="center">
<img src="Images/create_new_project.png" align="center" width="800"/>

* **:green_circle: Action**: Click on the _**+**_ sign at the top-left corner of the _**Package Manager**_ window and then choose the option _**Add package from git URL...**_.
* **:green_circle: Action**: Enter the address `com.unity.perception` and click _**Add**_.
-> :information_source: If you would like to install a specific version of the package, you can append the version to the end of the url. For example `com.unity.perception@0.8.0-preview.1`. For this tutorial, **we do not need to add a version**. You can also install the package from a local clone of the Perception repository. More information on installing local packages is available [here](https://docs.unity3d.com/Manual/upm-ui-local.html).
+> :information_source: If you would like to install a specific version of the package, you can append the version to the end of the url. For example `com.unity.perception@0.8.0-preview.4`. For this tutorial, **we do not need to add a version**. You can also install the package from a local clone of the Perception repository. More information on installing local packages is available [here](https://docs.unity3d.com/Manual/upm-ui-local.html).
It will take some time for the manager to download and import the package. Once the operation finishes, you will see the newly downloaded Perception package automatically selected in the _**Package Manager**_, as depicted below:

<img src="Images/perc_comp.png" width="400"/>
</p>
-If you hover your mouse pointer over each of the fields shown (e.g. `Simulation Delta Time`), you will see a tooltip popup with an explanation on what the item controls.
+If you hover your mouse pointer over each of the fields shown (e.g. `Simulation Delta Time`), you will see a tooltip popup with an explanation of what the item controls.
-As seen in the UI for `Perception Camera`, the list of `Camera Labelers` is currently empty. For each type of ground-truth you wish to generate along-side your captured frames (e.g. 2D bounding boxes around objects), you will need to add a corresponding `Camera Labeler` to this list.
+As seen in the UI for `Perception Camera`, the list of `Camera Labelers` is currently empty. For each type of ground-truth you wish to generate alongside your captured frames (e.g. 2D bounding boxes around objects), you will need to add a corresponding `Camera Labeler` to this list.
-To speed-up your workflow, the Perception package comes with seven common Labelers for object-detection and human keypoint labeling tasks; however, if you are comfortable with code, you can also add your own custom Labelers. The Labelers that come with the Perception package cover **keypoint labeling, 3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), instance segmentation, and semantic segmentation**. We will use four of these in this tutorial.
+To speed up your workflow, the Perception package comes with seven common Labelers for object-detection and human keypoint labeling tasks; however, if you are comfortable with code, you can also add your own custom Labelers. The Labelers that come with the Perception package cover **keypoint labeling, 3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), instance segmentation, and semantic segmentation**. We will use four of these in this tutorial.
* **:green_circle: Action**: Click on the _**+**_ button at the bottom right corner of the empty labeler list and select `BoundingBox2DLabeler`.
* **:green_circle: Action**: Repeat the above step to add `ObjectCountLabeler`, `RenderedObjectInfoLabeler`, `SemanticSegmentationLabeler`.
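If you prefer attaching Labelers from code rather than through the Inspector, a minimal sketch might look like the following. It assumes the label configuration assets created in the upcoming steps are assigned in the Inspector, and the exact constructor signatures may vary between package versions.

```csharp
// Sketch: attaching the same four Labelers in code instead of the Inspector.
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

[RequireComponent(typeof(PerceptionCamera))]
public class LabelerSetup : MonoBehaviour
{
    public IdLabelConfig idLabelConfig;
    public SemanticSegmentationLabelConfig semanticSegmentationLabelConfig;

    void Start()
    {
        var perceptionCamera = GetComponent<PerceptionCamera>();
        perceptionCamera.AddLabeler(new BoundingBox2DLabeler(idLabelConfig));
        perceptionCamera.AddLabeler(new ObjectCountLabeler(idLabelConfig));
        perceptionCamera.AddLabeler(new RenderedObjectInfoLabeler(idLabelConfig));
        perceptionCamera.AddLabeler(new SemanticSegmentationLabeler(semanticSegmentationLabelConfig));
    }
}
```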

Click on this asset to bring up its _**Inspector**_ view. In there, you can specify the labels that this config will keep track of. You can type in labels, add any labels defined in the project (through being added to prefabs), and import/export this label config as a JSON file. A new label config like this one contains an empty list of labels.
-In this tutorial, we will generate synthetic data intended for detecting 10 everyday grocery items. These grocery items were imported into your project when you imported the tutorial files from the _**Package Manager**_, and are located in the folder `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
+In this tutorial, we will generate synthetic data intended for detecting 10 everyday grocery items. These grocery items were imported into your project when you imported the tutorial files from the _**Package Manager**_, and are located in the folder `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
The label configuration we have created (`TutorialIdLabelConfig`) is of type `IdLabelConfig`, and is compatible with three of the four labelers we have attached to our `Perception Camera`. This type of label configuration carries a unique numerical ID for each label. However, `SemanticSegmentationLabeler` requires a different kind of label configuration which includes unique colors for each label instead of numerical IDs. This is because the output of this labeler is a set of images in which each visible foreground object is painted in a unique color.
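A minimal sketch of the difference between the two configuration types, assuming the `Init` initializer and entry types present in recent package versions (the id, label, and color values are purely illustrative):

```csharp
// Illustrative only: the two label configuration types side by side.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public static class LabelConfigSketch
{
    public static void Create()
    {
        // Numerical IDs: used by BoundingBox2DLabeler, ObjectCountLabeler,
        // and RenderedObjectInfoLabeler.
        var idConfig = ScriptableObject.CreateInstance<IdLabelConfig>();
        idConfig.Init(new List<IdLabelEntry>
        {
            new IdLabelEntry { id = 1, label = "drink_whippingcream_lucerne" }
        });

        // Unique colors: used by SemanticSegmentationLabeler to paint each
        // labeled object class in the output images.
        var segConfig = ScriptableObject.CreateInstance<SemanticSegmentationLabelConfig>();
        segConfig.Init(new List<SemanticSegmentationLabelEntry>
        {
            new SemanticSegmentationLabelEntry { label = "drink_whippingcream_lucerne", color = Color.magenta }
        });
    }
}
```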

<img src="Images/pclabelconfigsadded.png" width="400"/>
</p>
-It is now time to assign labels to the objects that are supposed to be detected by an eventual object-detection model, and add those labels to both of the label configurations we have created. As mentioned above, these objects are located at `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
+It is now time to assign labels to the objects that are supposed to be detected by an eventual object-detection model, and add those labels to both of the label configurations we have created. As mentioned above, these objects are located at `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
-* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`
+* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`
* **:green_circle: Action**: Double click the file named `drink_whippingcream_lucerne.prefab` to open the Prefab asset.
When you open the Prefab asset, you will see the object shown in the Scene tab and its components shown on the right side of the editor, in the _**Inspector**_ tab:

</p>
-The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three of these are common Unity components, the fourth one is specific to the Perception package, and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI here provides a multitude of ways for you to assign labels to the object. You can either choose to have the asset automatically labeled (by enabling `Use Automatic Labeling`), or add labels manually. In case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path.
+The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three of these are common Unity components, the fourth one is specific to the Perception package and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI here provides a multitude of ways for you to assign labels to the object. You can either choose to have the asset automatically labeled (by enabling `Use Automatic Labeling`), or add labels manually. In case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path.
-Note that each object can have multiple labels assigned, and thus appear as different objects to Labelers with different label configurations. For instance, you may want your semantic segmentation Labeler to detect all cream cartons as `dairy_product`, while your bounding box Labeler still distinguishes between different types of dairy product. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
+Note that each object can have multiple labels assigned, and thus appear as different objects to Labelers with different label configurations. For instance, you may want your semantic segmentation Labeler to detect all cream cartons as `dairy_product`, while your Bounding Box Labeler still distinguishes between different types of dairy products. To achieve this, you can add a `dairy_product` label to all your dairy products, and then in your label configuration for semantic segmentation, only add the `dairy_product` label, and not any specific products or brand names.
For this tutorial, we have already prepared the foreground Prefabs for you and added the `Labeling` component to all of them. These Prefabs were based on 3D scans of the actual grocery items. If you are making your own Prefabs, you can easily add a `Labeling` component to them using the _**Add Component**_ button visible in the bottom right corner of the screenshot above.
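If you are labeling many of your own assets, a small utility along these lines can add the component and a manual label in code. `AddLabel` is a hypothetical helper, not a package API, and it assumes `Labeling` exposes its labels as a public `List<string>`, as recent package versions do.

```csharp
// Hypothetical bulk-labeling helper for your own Prefabs.
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public static class LabelingUtility
{
    public static void AddLabel(GameObject target, string label)
    {
        // Add the Labeling component if the object does not already have one.
        var labeling = target.GetComponent<Labeling>();
        if (labeling == null)
            labeling = target.AddComponent<Labeling>();

        // Append the label, avoiding duplicates.
        if (!labeling.labels.Contains(label))
            labeling.labels.Add(label);
    }
}
```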

-* **:green_circle: Action**: Select **all the files** inside the `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs` folder.
+* **:green_circle: Action**: Select **all the files** inside the `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs` folder.
* **:green_circle: Action**: From the _**Inspector**_ tab, enable `Use Automatic Labeling for All Selected Items`, and then select `Use asset name` as the labeling scheme.
<p align="center">

> :information_source: Since we used automatic labels here and added them to our configurations, we are confident that the labels in the configurations match the labels of our objects. In cases where you decide to add manual labels to objects and configurations, make sure you use the exact same labels, otherwise, the objects for which a matching label is not found in your configurations will not be detected by the Labelers that are using those configurations.
-Now that we have labelled all our foreground objects and setup our label configurations, let's briefly test things.
+Now that we have labeled all our foreground objects and set up our label configurations, let's briefly test things.
-* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
+* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: Drag and drop any of the Prefabs inside this folder into the Scene.
* **:green_circle: Action**: Click on the **▷** (play) button located at the top middle section of the editor to run your simulation.

As mentioned earlier, one of the core ingredients of the perception workflow is the randomization of various aspects of the simulation, in order to introduce sufficient variation into the generated data.
-To start randomizing your simulation you will first need to add a `Scenario` to your scene. Scenarios control the execution flow of your simulation by coordinating all `Randomizer` components added to them. The Perception package comes with a useful set of Randomizers that let you quickly place your foreground objects in the Scene, generate varied backgrounds, as well as randomize various parameters of the simulation over time, including things such as position, scale, and rotation of objects, number of objects within the camera's view, and so on. Randomizers achieve this through coordinating a number of `Parameter`s, which essentially define the most granular randomization behaviors. For instance, for continuous variable types such as floats, vectors, and colors, Parameters can define the range and sampling distribution for randomization. This is while another class of Parameters let you randomly select one out of a number of categorical options.
+To start randomizing your simulation you will first need to add a `Scenario` to your scene. Scenarios control the execution flow of your simulation by coordinating all `Randomizer` components added to them. The Perception package comes with a useful set of Randomizers that let you quickly place your foreground objects in the Scene, generate varied backgrounds, as well as randomize various parameters of the simulation over time such as position, scale, and rotation of objects, number of objects within the camera's view, and so on. Randomizers achieve this through coordinating a number of `Parameter`s, which essentially define the most granular randomization behaviors. For instance, for continuous variable types such as floats, vectors, and colors, Parameters can define the range and sampling distribution for randomization. While another class of Parameters let you randomly select one out of a number of categorical options.
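To make the Randomizer/Parameter relationship concrete, here is a minimal sketch of a custom Randomizer. It is not one of the built-in Randomizers; `UniformScaleRandomizer` and its tag are hypothetical names, and attribute and namespace details may vary between package versions. It samples a `FloatParameter` once per Iteration and applies the value to every tagged object.

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Samplers;

[Serializable]
[AddRandomizerMenu("Example/Uniform Scale Randomizer")]
public class UniformScaleRandomizer : Randomizer
{
    // The Parameter defines the range and distribution to sample from.
    public FloatParameter scale = new FloatParameter { value = new UniformSampler(0.5f, 2f) };

    // Called once at the start of each Iteration of the Scenario.
    protected override void OnIterationStart()
    {
        foreach (var tag in tagManager.Query<UniformScaleRandomizerTag>())
            tag.transform.localScale = Vector3.one * scale.Sample();
    }
}

// Tag component that marks which GameObjects this Randomizer affects.
public class UniformScaleRandomizerTag : RandomizerTag { }
```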
To summarize, a sample `Scenario` could look like this:

This Randomizer uses Poisson-Disk sampling to select random positions from a given area, and spawn copies of randomly selected Prefabs (from a given list) at the chosen positions. We will use this component to generate a background that will act as a distraction for our eventual object-detection machine learning model.
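Purely to illustrate the core idea of Poisson-Disk sampling, that every accepted sample keeps a minimum separation distance from all others, here is a naive dart-throwing approximation. The Perception package ships its own, much faster implementation; this sketch is not it.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class NaivePoissonDisk
{
    public static List<Vector2> Sample(Vector2 area, float minDistance, int attempts = 1000)
    {
        var points = new List<Vector2>();
        for (var i = 0; i < attempts; i++)
        {
            // Propose a uniformly random point inside the placement area.
            var candidate = new Vector2(Random.Range(0f, area.x), Random.Range(0f, area.y));

            // Reject it if it is too close to any already-accepted point.
            var valid = true;
            foreach (var point in points)
            {
                if (Vector2.Distance(point, candidate) < minDistance)
                {
                    valid = false;
                    break;
                }
            }
            if (valid)
                points.Add(candidate);
        }
        return points;
    }
}
```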
-* **:green_circle: Action**: Click _**Add Folder**_, and from the file explorer window that opens, choose the folder `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
+* **:green_circle: Action**: Click _**Add Folder**_, and from the file explorer window that opens, choose the folder `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
The background Prefabs are primitive shapes devoid of color or texture. Later Randomizers will take care of those aspects.

`TextureRandomizer` will have the task of attaching random textures to our colorless background objects at each Iteration of the Scenario. Similarly, `HueOffsetRandomizer` will alter the color of the objects, and `RotationRandomizer` will give the objects a new random rotation each Iteration.
-* **:green_circle: Action**: In the UI snippet for `TextureRandomizer`, click _**Add Folder**_ and choose `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Textures`.
+* **:green_circle: Action**: In the UI snippet for `TextureRandomizer`, click _**Add Folder**_ and choose `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Background Textures`.
* **:green_circle: Action**: In the UI snippet for `RotationRandomizer`, verify that all the minimum values for the three ranges are `0` and that maximum values are `360`.

<img src="Images/all_back_rands.png" width = "400"/>
</p>
There is one more important thing left to do, in order to make sure all the above Randomizers operate as expected. Since `BackgroundObjectPlacementRandomizer` spawns objects, it already knows which objects in the Scene it is dealing with; however, the rest of the Randomizers we added are not yet aware of what objects they should target because they don't spawn their own objects.
-* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
+* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
* **:green_circle: Action**: Select all the files inside and from the _**Inspector**_ tab add a `TextureRandomizerTag` to them. This will add the component to all the selected files.
* **:green_circle: Action**: Repeat the above step to add `HueOffsetRandomizerTag` and `RotationRandomizerTag` to all selected Prefabs.

It is now time to spawn and randomize our foreground objects.
-* **:green_circle: Action**: Add `ForegroundObjectPlacementRandomizer` to your list of Randomizers. Click _**Add Folder**_ and select `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
+* **:green_circle: Action**: Add `ForegroundObjectPlacementRandomizer` to your list of Randomizers. Click _**Add Folder**_ and select `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: Set these values for the above Randomizer: `Depth = -3, Separation Distance = 1.5, Placement Area = (5,5)`.
This Randomizer uses the same algorithm as the one we used for backgrounds; however, it is defined in a separate C# class because you can only have **one of each type of Randomizer added to your Scenario**. Therefore, this is our way of differentiating between how background and foreground objects are treated.

-* **:green_circle: Action**: From the _**Project**_ tab select all the foreground Prefabs located in `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`, and add a `RotationRandomizerTag` component to them.
+* **:green_circle: Action**: From the _**Project**_ tab select all the foreground Prefabs located in `Assets/Samples/Perception/0.9.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`, and add a `RotationRandomizerTag` component to them.
-* **:green_circle: Action**: Drag `ForegroundObjectPlacementRandomizer` using the striped handle bar (on its left side) and drop it above `RotationRandomizer`.
+* **:green_circle: Action**: Drag `ForegroundObjectPlacementRandomizer` using the striped handlebar (on its left side) and drop it above `RotationRandomizer`.
Your full list of Randomizers should now look like the screenshot below:

In this folder, you will find a few types of data, depending on your `Perception Camera` settings. These can include:
- Logs
- JSON data
-- RGB images (raw camera output) (if the `Save Camera Output to Disk` check mark is enabled on `Perception Camera`)
+- RGB images (raw camera output) (if the `Save Camera Output to Disk` checkmark is enabled on `Perception Camera`)
-The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active Labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
+> :information_source: Are the RGB images blank? This may be a bug. When using URP in OSX, having MSAA enabled on the camera may cause the output RGB images to be blank. As a workaround, you can disable MSAA and use FXAA instead, until the issue is fixed. To do this, select `Main Camera`, and in the ***Inspector*** view of the Camera component, in the ***Output*** section, set MSAA to `Off`. If you would like to use FXAA, in the ***Rendering*** section, set the Anti-aliasing option to `Fast Approximate Anti-aliasing (FXAA)`.
+The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active Labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation of the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
* **:green_circle: Action**: To get a quick feel of how the data is stored, open the folder whose name starts with `Dataset`, then open the file named `captures_000.json`. This file contains the output from `BoundingBox2DLabeler`. The `captures` array contains the position and rotation of the sensor (camera), the position and rotation of the ego (sensor group, currently only one), and the annotations made by `BoundingBox2DLabeler` for all visible objects defined in its label configuration. For each visible object, the annotations include:
* `label_id`: The numerical id assigned to this object's label in the Labeler's label configuration
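To give a rough sense of the shape of this file, below is a heavily trimmed, illustrative sketch of a single capture. All values are invented and many fields are omitted; consult the schema page linked above for the authoritative structure.

```json
{
  "captures": [
    {
      "filename": "RGB<guid>/rgb_2.png",
      "sensor": { "translation": [0.0, 0.0, -10.0], "rotation": [0.0, 0.0, 0.0, 1.0] },
      "ego": { "translation": [0.0, 0.0, 0.0], "rotation": [0.0, 0.0, 0.0, 1.0] },
      "annotations": [
        {
          "values": [
            {
              "label_id": 1,
              "label_name": "drink_whippingcream_lucerne",
              "instance_id": 2,
              "x": 462.0,
              "y": 249.0,
              "width": 94.0,
              "height": 130.0
            }
          ]
        }
      ]
    }
  ]
}
```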

* **:green_circle: Action**: Open _**Window**_ -> _**Dataset Visualizer**_ -> _**Open**_. (This is also accessible from the `Perception Camera`)
-This will first install the visualizer and then run it. The visualizer is a Python based tool that runs in the browser. It may take a few minutes to install the tool for the first time. Once it is installed, a browser window will open to the address `http://localhost:8501/`.
+This will first install the visualizer and then run it. The visualizer is a Python-based tool that runs in the browser. It may take a few minutes to install the tool for the first time. Once it is installed, a browser window will open to the address `http://localhost:8501/`.
In certain circumstances, the tool may run but the browser window may not open. To alleviate this, once the tool is running you will get a prompt in Unity Editor, asking whether the browser window opened successfully. If it did not, you can force it to open by clicking **Manually Open**.

<img src="Images/vis_expanded.png" width = "800"/>
</p>
-To further analyze your dataset and verify statistics such as number of objects in each frame and more, you can use Unity's Dataset Insights, which is a Python package created for processing Perception datasets. [This guide](DatasetInsights.md) can get you started.
+To further analyze your dataset and verify statistics such as the number of objects in each frame and more, you can use Unity's Dataset Insights, which is a Python package created for processing Perception datasets. [This guide](DatasetInsights.md) can get you started.
This concludes Phase 1 of the Perception Tutorial. In the next phase, we will dive a little bit into randomization code and learn how to build custom Randomizers.

com.unity.perception/Documentation~/Tutorial/Phase2.md (2 changes)


}
```
-Notice how we now utilize the `SetIntensity` fucntion of `MyLightRandomizerTag` components of the tagged objects, instead of directly setting the intensity of the `Light` components.
+Notice how we now utilize the `SetIntensity` function of `MyLightRandomizerTag` components of the tagged objects, instead of directly setting the intensity of the `Light` components.
* **:green_circle: Action**: Run your simulation, then pause it. Go to the _**Scene**_ view and inspect the color and intensity of each of the lights. Try turning each on and off to see how they affect the current frame.
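For reference, the tag component discussed here might look roughly like the sketch below. It mirrors the tutorial's earlier listing in spirit only; treat field names such as `minIntensity` and `maxIntensity` as assumptions rather than a verbatim copy.

```csharp
using UnityEngine;
using UnityEngine.Perception.Randomization.Randomizers;

[RequireComponent(typeof(Light))]
public class MyLightRandomizerTag : RandomizerTag
{
    public float minIntensity;
    public float maxIntensity;

    // Maps a raw sample in [0, 1] onto this light's own intensity range,
    // so one Parameter can drive lights with different brightness bounds.
    public void SetIntensity(float rawIntensity)
    {
        var light = gameObject.GetComponent<Light>();
        if (light)
            light.intensity = rawIntensity * (maxIntensity - minIntensity) + minIntensity;
    }
}
```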

com.unity.perception/Editor/Randomization/PropertyDrawers/AssetSourceDrawer.cs (2 changes)


-using Editor.Randomization.VisualElements.AssetSource;
+using UnityEditor.Perception.Randomization.VisualElements.AssetSource;
using UnityEngine;
using UnityEngine.Perception.Randomization;
using UnityEngine.UIElements;

com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetListElement.cs (4 changes)


using System;
using System.Collections;
using UnityEditor;
using UnityEditor.Perception.Randomization;
-namespace Editor.Randomization.VisualElements.AssetSource
+namespace UnityEditor.Perception.Randomization.VisualElements.AssetSource
{
class AssetListElement : VisualElement
{

com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetListItemElement.cs (2 changes)


using UnityEditor.UIElements;
using UnityEngine.UIElements;
-namespace Editor.Randomization.VisualElements.AssetSource
+namespace UnityEditor.Perception.Randomization.VisualElements.AssetSource
{
class AssetListItemElement : VisualElement
{

com.unity.perception/Editor/Randomization/VisualElements/AssetSource/AssetSourceElement.cs (2 changes)


using UnityEngine.Perception.Randomization;
using UnityEngine.UIElements;
-namespace Editor.Randomization.VisualElements.AssetSource
+namespace UnityEditor.Perception.Randomization.VisualElements.AssetSource
{
class AssetSourceElement : VisualElement
{
