> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.
# Perception Package (Unity Computer Vision)
The Perception package provides a toolkit for generating large-scale datasets for perception-based machine learning training and validation. It is focused on a handful of camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.
|Feature|Description|
|---|---|
|[Labeling](com.unity.perception/Documentation~/GroundTruthLabeling.md)|A component that marks a GameObject and its descendants with a set of labels|
|[LabelConfig](com.unity.perception/Documentation~/GroundTruthLabeling.md#label-config)|An asset that defines a taxonomy of labels for ground truth generation|
|[DatasetCapture](com.unity.perception/Documentation~/DatasetCapture.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset.|
|[Randomization (Experimental)](com.unity.perception/Documentation~/Randomization/Index.md)|The Randomization tool set lets you integrate domain randomization principles into your simulation.|
The [Unity Simulation Smart Camera Example](https://github.com/Unity-Technologies/Unity-Simulation-Smart-Camera-Outdoor) illustrates how the Perception toolset could be used in a smart city or autonomous vehicle simulation. You can generate datasets locally or at scale in [Unity Simulation](https://unity.com/products/unity-simulation).
## Local development
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP.
## License
* [License](com.unity.perception/LICENSE.md)
## Support
For general questions or concerns please contact the Computer Vision team at computer-vision@unity3d.com.
For feedback, bugs, or other issues, please file a GitHub issue; the Computer Vision team will investigate as soon as possible.
While sensors are registered, `DatasetCapture` ensures that frame timing is deterministic and that frames run at the appropriate simulation times, letting each sensor render and capture at its own rate.
Using [Time.captureDeltaTime](https://docs.unity3d.com/ScriptReference/Time-captureDeltaTime.html), it also decouples wall clock time from simulation time, allowing the simulation to run as fast as possible.
You can register custom sensors using `DatasetCapture.RegisterSensor()`. The `simulationDeltaTime` you pass in at registration time is used as `Time.captureDeltaTime` and determines how often (in simulation time) frames should be simulated for the sensor to run. This and the `framesBetweenCaptures` value determine at which exact times the sensor should capture the simulated frames. The decoupling of simulation delta time and capture frequency based on frames simulated allows you to render frames in-between captures. If no in-between frames are desired, you can set `framesBetweenCaptures` to 0. When it is time to capture, the `ShouldCaptureThisFrame` check of the `SensorHandle` returns true. `SensorHandle.ReportCapture` should then be called in each of these frames to report the state of the sensor to populate the dataset.
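For illustration, a minimal custom sensor might look like the sketch below. The exact `RegisterSensor()` and `ReportCapture()` signatures vary between package versions, so treat the argument lists here as assumptions and check the API in your installed version:

```
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class MyCustomSensor : MonoBehaviour
{
    SensorHandle m_SensorHandle;

    void Start()
    {
        // Hypothetical argument list -- check RegisterSensor() in your
        // package version for the exact signature.
        m_SensorHandle = DatasetCapture.RegisterSensor(
            "My custom sensor",
            simulationDeltaTime: 0.1f,  // simulate 10 frames per second of simulation time
            framesBetweenCaptures: 0);  // capture on every simulated frame
    }

    void Update()
    {
        // Only report data on the frames scheduled for this sensor
        if (m_SensorHandle.ShouldCaptureThisFrame)
            m_SensorHandle.ReportCapture(); // parameters also vary by version
    }
}
```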
`Time.captureDeltaTime` is set at every frame in order to precisely fall on the next sensor that requires simulation, and this includes multi-sensor simulations. For instance, if one sensor has a `simulationDeltaTime` of 2 and another 3, the first five values for `Time.captureDeltaTime` will be 2, 1, 1, 2, and 2, meaning simulation will happen on the timestamps 0, 2, 3, 4, 6, and 8.
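The interleaving above can be reproduced with a few lines of plain C# (an illustrative calculation only, not package code):

```
using System.Collections.Generic;

// Each sensor fires at multiples of its simulationDeltaTime; the union of
// those times is the simulation schedule.
var times = new SortedSet<double>();
for (var i = 0; i <= 4; i++)
{
    times.Add(i * 2.0); // sensor A: 0, 2, 4, 6, 8
    times.Add(i * 3.0); // sensor B: 0, 3, 6, 9, 12
}
// times: 0, 2, 3, 4, 6, 8, 9, 12 -> captureDeltaTime steps: 2, 1, 1, 2, 2, 1, 3
```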
## Custom annotations and metrics
In addition to the common annotations and metrics produced by [PerceptionCamera](PerceptionCamera.md), scripts can produce their own via `DatasetCapture`. You must first register annotation and metric definitions using `DatasetCapture.RegisterAnnotationDefinition()` or `DatasetCapture.RegisterMetricDefinition()`. These return `AnnotationDefinition` and `MetricDefinition` instances which you can then use to report values during runtime.
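As a rough sketch, the flow looks like the snippet below. The definition and reporting calls follow the pattern described above, but their exact parameters vary between package versions; `sensorHandle`, `distances`, and `meanDistance` are hypothetical values from your own script:

```
// Register definitions once, at startup
var annotationDefinition = DatasetCapture.RegisterAnnotationDefinition(
    "object distance", "Distance in meters to each labeled object");
var metricDefinition = DatasetCapture.RegisterMetricDefinition(
    "mean distance", "Mean distance in meters to labeled objects in the frame");

// Later, during a captured frame, report values against those definitions
sensorHandle.ReportAnnotationValues(annotationDefinition, distances);
DatasetCapture.ReportMetric(metricDefinition, new[] { meanDistance });
```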
_The Inspector view of the Perception Camera component_
| Property: | Function: |
|--|--|
| Show Visualizations | Display real-time visualizations for the labelers that are currently active on this camera. |
| Capture RGB Images | When you enable this property, Unity captures RGB images as PNG files in the dataset each frame. |
| Capture Trigger Mode | The method of triggering captures for this camera. In `Scheduled` mode, captures happen automatically based on a start frame and frame delta time. In `Manual` mode, captures should be triggered manually by calling the `RequestCapture` method of `PerceptionCamera`. |
### Properties for Scheduled Capture Mode
| Property: | Function: |
|--|--|
| Simulation Delta Time | The simulation frame time (seconds) for this camera. E.g. 0.0166 translates to 60 frames per second. This will be used as Unity's `Time.captureDeltaTime`, causing a fixed number of frames to be generated for each second of elapsed simulation time regardless of the capabilities of the underlying hardware. For more information on sensor scheduling, see [DatasetCapture](DatasetCapture.md). |
| First Capture Frame | Frame number at which this camera starts capturing. |
| Frames Between Captures | The number of frames to simulate and render between the camera's scheduled captures. Setting this to 0 makes the camera capture every frame. |
### Properties for Manual Capture Mode
| Property: | Function: |
|--|--|
| Affect Simulation Timing | Have this camera affect simulation timings (similar to a scheduled camera) by requesting a specific frame delta time. Enabling this option will let you set the `Simulation Delta Time` property described above.|
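For example, a small script like the sketch below could drive captures in `Manual` mode. The key binding is arbitrary, and the script is assumed to live on the same GameObject as the `Perception Camera`:

```
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class CaptureOnKeyPress : MonoBehaviour
{
    PerceptionCamera m_PerceptionCamera;

    void Start()
    {
        m_PerceptionCamera = GetComponent<PerceptionCamera>();
    }

    void Update()
    {
        // Request a capture whenever the space bar is pressed
        if (Input.GetKeyDown(KeyCode.Space))
            m_PerceptionCamera.RequestCapture();
    }
}
```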
### Semantic Segmentation Labeler
### Instance Segmentation Labeler
### Bounding Box 2D Labeler
![Example bounding box visualization from SynthDet generated by the `SynthDet_Statistics` Jupyter notebook](images/bounding_boxes.png)
<br/>_Example bounding box visualization from SynthDet generated by the `SynthDet_Statistics` Jupyter notebook_
The Bounding Box 3D Ground Truth Labeler produces 3D ground truth bounding boxes for each labeled game object in the scene. Unlike the 2D bounding boxes, 3D bounding boxes are calculated from the labeled meshes in the scene and all objects (independent of their occlusion state) are recorded.
### Object Count Labeler
The ObjectCountLabeler records object counts for each label you define in the IdLabelConfig. Unity only records objects that have at least one visible pixel in the Camera frame.
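Labelers can also be attached in code rather than through the Inspector. Below is a minimal sketch, assuming `PerceptionCamera.AddLabeler()` and this labeler constructor are available in your package version:

```
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class AddObjectCountLabeler : MonoBehaviour
{
    // An IdLabelConfig asset assigned in the Inspector
    public IdLabelConfig idLabelConfig;

    void Start()
    {
        var perceptionCamera = GetComponent<PerceptionCamera>();
        perceptionCamera.AddLabeler(new ObjectCountLabeler(idLabelConfig));
    }
}
```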
*NOTE: The Perception package's randomization toolset is currently marked as experimental and is subject to change.*
The randomization toolset simplifies randomizing aspects of generating synthetic data. It facilitates exposing parameters for randomization, offers samplers to pick random values from parameters, and provides Scenarios to coordinate a full randomization process. Each of these also allows for custom implementations to fit particular randomization needs.
Our use of domain randomization draws from Tobin et al.'s (2017) work on training robotic pick and place using purely synthetic data.
1. Create a Scenario
2. Define and add Randomizers to the Scenario
3. Customize Parameters and Samplers in the Randomizers
4. Generate randomized computer vision training data
Beginning with step 1, add a Scenario component to your simulation. This Scenario will act as the central hub for all randomization activities that occur when your scene is executed.
Next, add a few Randomizers to the Scenario. The Randomizers, in conjunction with the Scenario, will perform the actual randomization activities within the simulation.
After adding the necessary Randomizers, configure the random Parameters assigned to each Randomizer to further customize how the simulation is randomized. The random Parameters and Samplers exposed in each Randomizer's inspector can be manipulated to specify different probability distributions to use when generating random values.
Once the project has been randomized and your scene has been configured with the data capture tools available in the Perception package, enter play mode in the editor or execute your Scenario through the Unity Simulation cloud service to generate domain randomized perception data.
Continue reading for more details concerning the primary components driving randomizations in the Perception package, including:
1. Scenarios
2. Randomizers
3. Randomizer Tags
## Scenarios
Within a randomized simulation, the Scenario component has three responsibilities:
1. Controlling the execution flow of your simulation
2. Defining a list of Randomizers
3. Defining constants that can be configured externally from a built Unity player
The fundamental principle of domain randomization is to simulate environments under a variety of randomized conditions. Each Iteration of a Scenario is intended to encapsulate one complete run of a simulated environment under uniquely randomized conditions. Scenarios further define what conditions determine the end of an Iteration and how many Iterations to perform.
To actually randomize a simulation, Randomizers can be added to a Scenario to vary different simulation properties. At runtime, the Scenario will execute each Randomizer according to its place within the Randomizer list.
To read more about Scenarios and how to customize them, navigate over to the **[Scenarios documentation](Scenarios.md)**.
Randomizers encapsulate specific randomization activities to perform during the lifecycle of a randomized simulation. For example, Randomizers exist for spawning objects, repositioning lights, varying the color of objects, etc. Randomizers expose random Parameters to their inspector interface to further customize these variations.
To read more about how to create custom Randomizers, navigate over to the **[Randomizers documentation](Randomizers.md)**.
RandomizerTags are the primary mechanism by which Randomizers query for a certain subset of GameObjects to randomize within a simulation. For example, a rotation Randomizer could query for all GameObjects with a RotationRandomizerTag component to obtain an array of all objects the Randomizer should vary for the given simulation Iteration.
To read more about how to use RandomizerTags, navigate over to the **[RandomizerTags documentation](RandomizerTags.md)**.
Parameters are classes that utilize Samplers to deterministically generate random typed values. Parameters are often exposed within the inspector interface of Randomizers to allow users to customize said Randomizer's behavior. To accomplish this, Parameters combine and transform the float values produced by one or more Samplers into various C# types. For example, a Vector3 Parameter can be used to map three Samplers to the x, y, and z dimensions of a GameObject. Or a material Parameter can utilize a Sampler to randomly select one material from a list of possible options.
To read more about how to create custom Parameter types, navigate over to the **[Parameters documentation](Parameters.md)**.
Samplers generate bounded random float values by sampling from probability distributions. They are considered bounded since each random sampler generates float values within a range defined by a minimum and maximum value.
Take a look at the **[Samplers documentation](Samplers.md)** to learn more about implementing custom probability distributions and Samplers that can integrate with the Perception package.
Visit the [Perception Tutorial](../Tutorial/TUTORIAL.md) to get started using the Perception package's randomization tools in an example project.
Parameters are often defined as fields of a Randomizer class, but they can also be instanced just like any other C# class:
```
// Create a color Parameter
var colorParameter = new HsvaColorParameter();

// Generate one color sample
var color = colorParameter.Sample();
```
Note that Parameters, like Samplers, generate new random values for each call to the Sample() method:
```
var color1 = colorParameter.Sample();
var color2 = colorParameter.Sample();
```
## Defining Custom Parameters
All Parameters derive from the `Parameter` abstract class. Additionally, the Parameter types included in the Perception package derive from two specialized Parameter base classes:
1. `CategoricalParameter`
2. `NumericParameter`
### Categorical Parameters
Categorical Parameters choose a value from a list of options that have no intrinsic ordering. For example, a material Parameter randomly chooses from a list of material options, but the list of material options itself can be rearranged into any particular order without affecting the distribution of materials selected.
If your custom Parameter is categorical in nature, take a look at the [StringParameter]() class included in the Perception package as a reference for how to derive the `CategoricalParameter` class.
```
using UnityEngine.Perception.Randomization.Parameters.Attributes;

namespace UnityEngine.Perception.Randomization.Parameters
{
    // The empty AddComponentMenu string hides this Parameter from the
    // Add Component menu (see the note below)
    [AddComponentMenu("")]
    public class StringParameter : CategoricalParameter<string> {}
}
```
**Note:** the AddComponentMenu attribute with an empty string prevents Parameters from appearing in the Add Component GameObject menu. Randomization Parameters should only be created by a `ParameterConfiguration`.
Numeric Parameters use samplers to generate randomized structs. Take a look at the [ColorHsvaParameter]() class included in the Perception package for an example on how to implement a numeric Parameter.
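As a sketch of the general shape (the `NumericParameter` API differs between package versions, so the base class and member types here are assumptions), a hypothetical Parameter mapping two Samplers onto a `Vector2` might look like:

```
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Samplers;

[Serializable]
public class MyVector2Parameter : NumericParameter<Vector2>
{
    // One Sampler per output dimension
    public UniformSampler x = new UniformSampler(0f, 1f);
    public UniformSampler y = new UniformSampler(0f, 1f);

    public override Vector2 Sample()
    {
        return new Vector2(x.Sample(), y.Sample());
    }
}
```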
RandomizerTags are the primary mechanism by which Randomizers query for a certain subset of GameObjects to randomize within a simulation.
More specifically, RandomizerTags are components that can be added to GameObjects to register them with the active Scenario's TagManager. This TagManager is aware of all objects with tags in the scene and can be queried to find all GameObjects that contain a specific tag. Below is a simple example of a ColorRandomizer querying for all GameObjects with a ColorRandomizerTag that it will apply a random material base color to:
```
[Serializable]
[AddRandomizerMenu("Perception/Color Randomizer")]
public class ColorRandomizer : Randomizer
{
    static readonly int k_BaseColor = Shader.PropertyToID("_BaseColor");

    // An HSVA color Parameter exposed in the Randomizer's inspector UI
    public ColorHsvaParameter colorParameter;

    protected override void OnIterationStart()
    {
        // Query for all GameObjects tagged with a ColorRandomizerTag
        var taggedObjects = tagManager.Query<ColorRandomizerTag>();
        foreach (var taggedObject in taggedObjects)
        {
            var renderer = taggedObject.GetComponent<MeshRenderer>();
            renderer.material.SetColor(k_BaseColor, colorParameter.Sample());
        }
    }
}
```
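The tag itself can be as simple as an empty component. Below is a minimal sketch (the namespace and `AddComponentMenu` path are assumptions):

```
using UnityEngine;
using UnityEngine.Perception.Randomization.Randomizers;

// An empty component whose only job is to mark GameObjects so Randomizers
// can find them through tagManager.Query<ColorRandomizerTag>()
[AddComponentMenu("Perception/RandomizerTags/Color Randomizer Tag")]
public class ColorRandomizerTag : RandomizerTag {}
```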
RandomizerTags can also be used to customize how Randomizers apply their randomizations to a particular GameObject. Visit [Phase 2 of the Perception Tutorial](../Tutorial/TUTORIAL.md) to explore an in-depth example of implementing a LightRandomizer that does exactly this.
Randomizers encapsulate specific randomization activities to perform during the execution of a randomized simulation. For example, Randomizers exist for spawning objects, repositioning lights, varying the color of objects, etc. Randomizers expose random parameters to their inspector interface to further customize these variations. Users can add a set of Randomizers to a Scenario in order to define an ordered list of randomization activities to perform during the lifecycle of a simulation.
To define an entirely new Randomizer, derive the Randomizer class and implement one or more of the methods listed in the section below to randomize GameObjects during the runtime of a simulation.
1. OnCreate() - called when the Randomizer is added or loaded to a Scenario
2. OnIterationStart() - called at the start of a new Scenario Iteration
3. OnIterationEnd() - called after a Scenario Iteration has completed
4. OnScenarioComplete() - called after the entire Scenario has completed
5. OnStartRunning() - called on the first frame a Randomizer is enabled
6. OnStopRunning() - called on the first frame a disabled Randomizer is updated
7. OnUpdate() - executed every frame for enabled Randomizers
Below is the code for the sample rotation Randomizer included with the Perception package:
```
[Serializable]
[AddRandomizerMenu("Perception/Rotation Randomizer")]
public class RotationRandomizer : Randomizer
{
    // A Vector3 Parameter exposed in the Randomizer's inspector UI
    public Vector3Parameter rotation = new Vector3Parameter();

    protected override void OnIterationStart()
    {
        // Query for all GameObjects tagged with a RotationRandomizerTag
        var taggedObjects = tagManager.Query<RotationRandomizerTag>();
        foreach (var taggedObject in taggedObjects)
            taggedObject.transform.rotation = Quaternion.Euler(rotation.Sample());
    }
}
```
There are a few key things to note from this example:
1. Make sure to add the [Serializable] tag to all Randomizer implementations to ensure that the Randomizer can be customized and saved within the Unity Editor.
2. The [AddRandomizerMenu] attribute customizes the "Add Randomizer" sub menu path in the Scenario inspector for a particular Randomizer. In this example, the RotationRandomizer can be added to a Scenario by opening the _**Add Randomizer**_ menu and clicking `Perception -> Rotation Randomizer`.
3. The line `var taggedObjects = tagManager.Query<RotationRandomizerTag>();` uses RandomizerTags in combination with the current Scenario's tagManager to query for all objects with RotationRandomizerTags and obtain the subset of GameObjects within the simulation that need to have their rotations randomized. To learn more about how RandomizerTags work, visit the [RandomizerTags documentation page](RandomizerTags.md).
Samplers in the Perception package are classes that deterministically generate random float values from bounded probability distributions. Although Samplers are often used in conjunction with Parameters to generate arrays of typed random values, Samplers can be instantiated and used from any ordinary script:
```
var sampler = new NormalSampler();
sampler.mean = 3;
sampler.standardDeviation = 2;

// Generate one sample
var sample = sampler.Sample();
```
## Random Seeding
Samplers generate random values that are seeded by the active Scenario's current random state. Changing the Scenario's random seed will result in Samplers generating different values. Changing the order of Samplers, Parameters, or Randomizers will also result in different values being sampled during a simulation.
It is recommended that users do not generate random values using the UnityEngine.Random() class or the System.Random() class within a simulation since both of these classes can potentially generate non-deterministic or improperly seeded random values. Using only Perception Samplers to generate random values will help ensure that Perception simulations generate consistent results during local execution and on Unity Simulation in the cloud.
## Custom Samplers
Samplers have a NativeSamples() method that can schedule a ready-made multi-threaded job intended for generating a large array of samples. Below is an example of how to combine two job handles returned by NativeSamples() to generate two arrays of samples simultaneously:
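A minimal sketch, assuming `NativeSamples(int sampleCount, out JobHandle jobHandle)` returns a `NativeArray<float>`:

```
using Unity.Collections;
using Unity.Jobs;
using UnityEngine.Perception.Randomization.Samplers;

// Schedule two sampling jobs, then combine their handles so both can be
// awaited at once before the results are read
var uniformSamples = new UniformSampler().NativeSamples(1000, out var uniformHandle);
var normalSamples = new NormalSampler().NativeSamples(1000, out var normalHandle);

var combinedHandle = JobHandle.CombineDependencies(uniformHandle, normalHandle);
combinedHandle.Complete();

// ... use the two NativeArray<float> buffers, then dispose them
uniformSamples.Dispose();
normalSamples.Dispose();
```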
1. Controlling the execution flow of your simulation
2. Defining a list of Randomizers
3. Defining constants that can be configured externally from a built Unity player
By default, the Perception package includes one ready-made Scenario, the `FixedLengthScenario` class. This Scenario runs each Iteration for a fixed number of frames and is compatible with the Run in Unity Simulation window for cloud simulation execution.
Users can utilize the Unity Simulation service to execute a Scenario in the cloud through the Perception package's Run in Unity Simulation window. To open this window from the Unity editor using the top menu bar, navigate to `Window -> Run in Unity Simulation`.
2. **Total Iterations** - The number of Scenario Iterations to complete during the run
5. **Scenario** - The Scenario to execute
6. **Sys-Param** - The system parameters or the hardware configuration of Unity Simulation worker instances to execute the Scenario with. Determines per instance specifications such as the number of CPU cores, amount of memory, and presence of a GPU for accelerated execution.
NOTE: To execute a Scenario using the Run in Unity Simulation window, the Scenario class must derive from the `UnitySimulationScenario` class.
For use cases where the Scenario should run for an arbitrary number of frames, implementing a custom Scenario may be necessary. Below are the two most common Scenario properties a user might want to override to implement custom Scenario Iteration conditions:
1. **isIterationComplete** - determines the conditions that cause the end of a Scenario Iteration
2. **isScenarioComplete** - determines the conditions that cause the end of a Scenario
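For illustration, a hypothetical Scenario that runs each Iteration for a configurable number of frames might override these properties as sketched below. The base class generics and the `currentIteration`, `currentIterationFrame`, and `totalIterations` members are assumptions based on the patterns described in this document:

```
using System;
using UnityEngine.Perception.Randomization.Scenarios;

[Serializable]
public class MyCustomScenario : UnitySimulationScenario<MyCustomScenario.Constants>
{
    [Serializable]
    public class Constants : UnitySimulationScenarioConstants
    {
        // Hypothetical constant: frames to simulate per Iteration
        public int framesPerIteration = 10;
    }

    // End the current Iteration after the configured number of frames
    protected override bool isIterationComplete =>
        currentIterationFrame >= constants.framesPerIteration;

    // End the Scenario once every Iteration has run
    protected override bool isScenarioComplete =>
        currentIteration >= constants.totalIterations;
}
```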
Scenarios can be serialized to JSON, modified, and reimported at runtime to configure simulation behavior even after a Unity player has been built. Constants and Randomizer Sampler settings are the two primary sections generated when serializing a Scenario. Note that currently, only numerical Samplers are serialized. Below are the contents of a JSON configuration file created when serializing the Scenario used in Phase 1 of the [Perception Tutorial](../Tutorial/TUTORIAL.md):
```
{
  "constants": {
    ...
  },
  "randomizers": {
    ...
  }
}
```
### Constants
Constants can include properties such as starting Iteration value or total Iteration count, and you can always add your own custom constants. Below is an example of the Constants class used in the `FixedLengthScenario` class:
```
[Serializable]
public class Constants : UnitySimulationScenarioConstants
{
    // The number of frames to run each Iteration for
    public int framesPerIteration = 10;
}
```
There are a few key things to note here:
1. The Constants class will need to inherit from `UnitySimulationScenarioConstants` to be compatible with the Run in Unity Simulation window. Deriving from `UnitySimulationScenarioConstants` will add a few key properties to the Constants class that are needed to coordinate a Unity Simulation run.
2. Make sure to include the `[Serializable]` attribute on the Constants class. This will ensure that the Constants can be manipulated from the Unity inspector.
3. A Scenario class's `SerializeToJson()` and `DeserializeFromJson()` methods can be overridden to implement custom serialization strategies.
Follow the instructions below to generate a Scenario configuration file to modify your Scenario Constants and Randomizers in a built player:
1. Click the _**Serialize Constants**_ button in the Scenario's inspector window. This will generate a `scenario_configuration.json` file and place it in the project's Assets/StreamingAssets folder.
2. Build your player. The new player will have a [ProjectName]_Data/StreamingAssets folder. A copy of the `scenario_configuration.json` file previously constructed in the editor will be found in this folder.
3. Change the contents of the `scenario_configuration.json` file. Any running player thereafter will utilize the newly authored values.
> :information_source: If you would like to install a specific version of the package, you can append the version to the end of the url. For example `com.unity.perception@0.1.0-preview.5`. For this tutorial, **we do not need to add a version**. You can also install the package from a local clone of the Perception repository. More information on installing local packages is available [here](https://docs.unity3d.com/Manual/upm-ui-local.html).
It will take some time for the manager to download and import the package. Once the operation finishes, you will see the newly downloaded Perception package automatically selected in the _**Package Manager**_, as depicted below:
<p align="center">
<img src="Images/package_manager.png" width="600"/>
</p>
Each package can come with a set of samples. As seen in the righthand panel, the Perception package includes a sample named _**Tutorial Files**_, which will be required for completing this tutorial. The sample files consist of example foreground and background objects, Randomizers, shaders, and other useful elements to work with during this tutorial. **Foreground** objects are those that the eventual machine learning model will try to detect, and **background** objects will be placed in the background as distractors for the model.
* **:green_circle: Action**: In the _**Package Manager**_ window, from the list of _**Samples**_ for the Perception package, click on the _**Import into Project**_ button for the sample named _**Tutorial Files**_.
* **:green_circle: Action**: Click on `Directional Light` and in the _**Inspector**_ tab, set `Shadow Type` to `No Shadows`.
We will now add the necessary components to the camera in order to equip it for the Perception workflow. To do this, we need to add a `Perception Camera` component to it, and then define which types of ground-truth we wish to generate using this camera.
* **:green_circle: Action**: Select `Main Camera` again and in the _**Inspector**_ tab, click on the _**Add Component**_ button.
* **:green_circle: Action**: Start typing `Perception Camera` in the search bar that appears, until the `Perception Camera` script is found, with a **#** icon to the left:
As seen in the UI for `Perception Camera`, the list of `Camera Labelers` is currently empty. For each type of ground-truth you wish to generate alongside your captured frames (e.g. 2D bounding boxes around objects), you will need to add a corresponding `Camera Labeler` to this list.
To speed-up your workflow, the Perception package comes with five common labelers for object-detection tasks; however, if you are comfortable with code, you can also add your own custom labelers. The labelers that come with the Perception package cover **3D bounding boxes, 2D bounding boxes, object counts, object information (pixel counts and ids), and semantic segmentation images (each object rendered in a unique colour)**. We will use four of these in this tutorial.
* **:green_circle: Action**: Click on the _**+**_ button at the bottom right corner of the empty labeler list and select `BoundingBox2DLabeler`.
* **:green_circle: Action**: Repeat the above step to add `ObjectCountLabeler`, `RenderedObjectInfoLabeler`, `SemanticSegmentationLabeler`.
For this tutorial, we have already prepared the foreground Prefabs for you and added the `Labeling` component to all of them. These Prefabs were based on 3D scans of the actual grocery items. If you are making your own Prefabs, you can easily add a `Labeling` component to them using the _**Add Component**_ button visible in the bottom right corner of the screenshot above.
> :information_source: If you are interested in knowing more about the process of creating Unity compatible 3D models for use with the Perception package, you can visit [this page](https://github.com/Unity-Technologies/SynthDet/blob/master/docs/CreatingAssets.md). Once you have 3D models in `.fbx` format, the Perception package lets you quickly create Prefabs from multiple models. Just select all your models and from the top menu bar select _**Assets -> Perception -> Create Prefabs from Selected Models**_. The newly created Prefabs will be placed in the same folders as their corresponding models.
Even though the sample Prefabs already have a label manually added, to learn more about how to use the Labeling component, we will now use automatic labeling to label all our foreground objects. This will overwrite their manually added labels.
In this folder, you will find a few types of data, depending on your `Perception Camera` settings. These can include:
- Logs
- JSON data
- RGB images (raw camera output) (if the `Save Camera Output to Disk` checkmark is enabled on `Perception Camera`)
- Semantic segmentation images (if the `SemanticSegmentationLabeler` is added and active on `Perception Camera`)
The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
* **:green_circle: Action**: Follow the instructions laid out in the notebook and run each code block to view its outputs.
This concludes Phase 1 of the Perception Tutorial. In the next phase, you will dive a little bit into randomization code and learn how to build your own custom Randomizer.
**[Continue to Phase 2: Custom Randomizations](Phase2.md)**
The purpose of this piece of code is to obtain a random float Parameter and assign it to the light's `Intensity` field on the start of every Iteration. Let's go through the code above and understand each part. The `FloatParameter` field makes it possible for us to define a randomized float Parameter and modify its properties from the editor UI, similar to how we already modified the properties for the previous Randomizers we used.
> :information_source: If you look at the _**Console**_ tab of the editor now, you will see an error regarding `MyLightRandomizerTag` not being found. This is to be expected, since we have not yet created this class; the error will go away once we create the class later.
```
[Serializable]
[AddRandomizerMenu("Perception/My Light Randomizer")]
public class MyLightRandomizer : Randomizer
{
    public FloatParameter lightIntensityParameter;
    public ColorRgbParameter lightColorParameter;

    protected override void OnIterationStart()
    {
        // Query for all GameObjects tagged with a MyLightRandomizerTag
        var taggedObjects = tagManager.Query<MyLightRandomizerTag>();
        foreach (var taggedObject in taggedObjects)
        {
            var light = taggedObject.GetComponent<Light>();
            light.intensity = lightIntensityParameter.Sample();
            light.color = lightColorParameter.Sample();
        }
    }
}
```
If you now check the UI snippet for `MyLightRandomizer`, you will notice that `Color Parameter` is added. This Parameter includes four separate randomized values for `Red`, `Green`, `Blue` and `Alpha`. Note that the meaningful range for all of these values is 0-1 (and not 0-255). You can see that the sampling range for red, green, and blue is currently also set to 0-1, which means the Parameter covers a full range of colors. A color with (0,0,0) RGB components essentially emits no light. So, let's increase the minimum a bit to avoid such a scenario.
* **:green_circle: Action**: Increase the minimum value for red, green, and blue components to 0.4 (this is an arbitrary number that typically produces good-looking results).
* **:green_circle: Action**: Run the simulation for a few frames to observe the lighting color changing on each Iteration of the Scenario.
### <a name="step-2">Step 2: Bundle Data and Logic Inside RandomizerTags</a>
* **:green_circle: Action**: Choose `TutorialScene` (which is the Scene we have been working in) as your _**Main Scene**_ and the `SimulationScenario` object as your _**Scenario**_.
Here, you can also specify a name for the run, the number of Iterations the Scenario will execute for, and the number of _**Instances**_ (number of nodes the work will be distributed across) for the run.
* **:green_circle: Action**: Name your run `FirstRun`, set the number of Iterations to `1000`, and Instances to `20`.
Your project will now be built and then uploaded to Unity Simulation. This may take a few minutes to complete, during which the editor may become frozen; this is normal behaviour.
* **:green_circle: Action**: Once the operation is complete, you can find the **Build ID**, **Run Definition ID**, and **Execution ID** of this Unity Simulation run in the _**Console**_ tab:
In case you have more than one cloud project, you will need to "activate" the one corresponding with your Perception Tutorial project. If there is only one project, it is already activated, and you will not need to execute the command below (note: replace `<project-id>` with the id of your desired project).
* **:green_circle: Action**: Activate the relevant project:
You may notice that the IDs seen above for the run named `FirstRun` match those we saw earlier in Unity Editor's _**Console**_. You can see here that the single execution for our recently uploaded build is `In_Progress` and that the execution ID is `yegz4WN`.
Unity Simulation utilizes the ability to run simulation Instances in parallel. If you enter a number larger than 1 for the number of Instances in the _**Run in Unity Simulation**_ window, your run will be parallelized, and multiple simulation Instances will simultaneously execute. You can view the status of all simulation Instances using the `usim summarize run-execution <execution-id>` command. This command will tell you how many Instances have succeeded, failed, have not run yet, or are in progress. Make sure to replace `<execution-id>` with the execution ID seen in your run list. In the above example, this ID would be `yegz4WN`.
* **:green_circle: Action**: Use the `usim summarize run-execution <execution-id>` command to observe the status of your execution nodes:
Follow the rest of the steps inside the notebook to generate a variety of plots and stats. Keep in mind that this notebook is provided just as an example, and you can modify and extend it according to your own needs using the tools provided by the [Dataset Insights framework](https://datasetinsights.readthedocs.io/en/latest/).
This concludes the Perception Tutorial. The next step in this workflow would be to train an object-detection model using a dataset generated on Unity Simulation. It is important to note that the dataset of 1000 captures we generated here is probably not sufficiently large for training most models. We chose this number so that the run would complete in a fairly short period of time, allowing us to move on to learning how to analyze the statistics of the dataset. In order to generate data for training, we recommend a dataset of about 400,000 captures.
In the near future, we will expand this tutorial to Phase 4, which will include instructions on how to train a Faster R-CNN object-detection model using a dataset that can be generated by following this tutorial.
In order to get the best out of computer vision models, the training data needs to contain a large degree of variation. This is achieved through randomizing various aspects of your simulation between captured frames. While you will use basic randomizations in Phase 1, Phase 2 of the tutorial will help you learn how to randomize your simulations in more complex ways by guiding you through writing your first customized Randomizer in C# code. Once you complete this phase, you will know how to:
* Create custom Randomizers by extending our provided samples.
* Coordinate the operation of several Randomizers by specifying their order of execution and the objects they affect.
* Have objects specify criteria (e.g. ranges, means, etc.) and logic (e.g. unique behaviors) for their randomizable attributes.