## [Unreleased]
### Known Issues
### Added
### Changed
### Deprecated
### Removed
### Fixed
## [0.8.0-preview.2] - 2021-03-15
### Upgrade Notes
All appearances of the term `KeyPoint` have been renamed to `Keypoint`. If you have code that relies on any renamed types or names, make sure to alter your code to reflect the new names.
`ScenarioBase`'s `Awake()`, `Start()`, and `Update()` functions are now private. If you previously used these, replace the usages with `OnAwake()`, `OnStart()`, and `OnUpdate()`.
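For reference, here is a minimal migration sketch, assuming a custom scenario derived from the package's `FixedLengthScenario`; exact hook signatures may differ slightly between versions:

```csharp
using UnityEngine.Perception.Randomization.Scenarios;

// Before this release, per-frame logic could live in an Update() override.
// Update() is now private on ScenarioBase, so override the virtual hook instead.
public class MyScenario : FixedLengthScenario
{
    protected override void OnUpdate()
    {
        // per-frame scenario logic goes here
    }
}
```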
### Added
Added error message when missing Randomizer scripts are detected.
Scenario serialization has been updated to include scalar values on Randomizers and Parameters.
Added new `ScenarioBase` virtual lifecycle hooks: `OnAwake()`, `OnStart()`, `OnUpdate()`, `OnComplete()`, and `OnIdle()`.
Keypoint occlusion has been added. No keypoint information will be recorded for a labeled asset completely out of the camera's frustum.
New keypoint tests have been added to test keypoint states.
The colors of keypoints and connections are now reported in the annotation definition JSON file for keypoint templates.
The `PerceptionScenario` abstract class has been added to abstract perception data capture specific functionality from the vanilla Scenario lifecycle.
The newly added `LabelManager` class now enables custom Labelers to access the list of registered `Labeling` components present in the Scene.
Improved UI for `KeypointTemplate` and added useful default colors for keypoint and skeleton definitions.
Added the ability to switch ground-truth labeling on or off for an object at runtime by enabling or disabling its `Labeling` component.
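The runtime toggle can be as simple as the following sketch (the helper class is hypothetical; `Labeling` lives in `UnityEngine.Perception.GroundTruth`):

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Hypothetical helper: enables or disables ground-truth generation for one object.
public static class GroundTruthToggle
{
    public static void SetGroundTruthEnabled(GameObject labeledObject, bool enabled)
    {
        var labeling = labeledObject.GetComponent<Labeling>();
        if (labeling != null)
            labeling.enabled = enabled; // disabled Labeling components are skipped by labelers
    }
}
```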
### Changed
`ScenarioBase`'s `Awake()`, `Start()`, and `Update()` methods are now private. The newly added virtual lifecycle hooks are to be used as replacements.
Improved _Run in Unity Simulation_ window UI.
The _Run in Unity Simulation_ window now accepts an optional Scenario JSON configuration to override existing Scenario editor UI settings.
The `GetRandomizer()` and `CreateRandomizer()` methods of `ScenarioBase` have been augmented or replaced with more generic list index style accessors.
The Scenario inspector buttons for serialization and deserialization have been refactored to open a file explorer so that the user can choose where to save the generated JSON configuration or which file to import a configuration from.
RandomizerTags now use `OnEnable()` and `OnDisable()` to manage their lifecycle. This allows the user to toggle them on and off in the editor.
Upgraded the `com.unity.simulation.capture` package dependency to integrate new changes that prevent the API updater from looping infinitely when opening the Project Settings window on new URP projects.
`CameraLabeler` methods `OnBeginRendering()` and `OnEndRendering()` now have an added `ScriptableRenderContext` parameter.
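As a sketch of the updated callbacks in a hypothetical custom labeler (the abstract members shown are illustrative and may differ slightly between package versions):

```csharp
using UnityEngine.Perception.GroundTruth;
using UnityEngine.Rendering;

// Hypothetical labeler demonstrating the new render-callback signatures.
public class MyLabeler : CameraLabeler
{
    public override string description => "Example labeler";
    protected override bool supportsVisualization => false;

    protected override void Setup() { }

    // These overrides previously took no arguments; they now receive the
    // ScriptableRenderContext for the frame being rendered.
    protected override void OnBeginRendering(ScriptableRenderContext ctx) { }
    protected override void OnEndRendering(ScriptableRenderContext ctx) { }
}
```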
### Deprecated
The Randomizer methods `OnCreate()`, `OnStartRunning()`, and `OnStopRunning()` are now deprecated and have been replaced with `OnAwake()`, `OnEnable()` and `OnDisable()` respectively, so as to better reflect the existing MonoBehaviour lifecycle methods.
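A minimal migration sketch for a custom Randomizer (names are hypothetical; the hooks follow the replacements listed above):

```csharp
using System;
using UnityEngine.Perception.Randomization.Randomizers;

[Serializable]
[AddRandomizerMenu("Example/My Randomizer")]
public class MyRandomizer : Randomizer
{
    // Before: protected override void OnCreate()
    protected override void OnAwake() { /* one-time initialization */ }

    // Before: protected override void OnStartRunning()
    protected override void OnEnable() { /* the Randomizer was enabled */ }

    // Before: protected override void OnStopRunning()
    protected override void OnDisable() { /* the Randomizer was disabled */ }
}
```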
### Removed
Removed the Entities package dependency.
### Fixed
Fixed a null reference error that appeared when adding options to Categorical Parameters.
Fixed ground truth not properly being produced when there are other disabled PerceptionCameras present. Note: this does not yet add support for multiple enabled PerceptionCameras.
Fixed an exception when rendering the inspector for Randomizers with private serialized fields.
Fixed an issue preventing the user from adding more options to a Categorical Parameter's list of options with the _Add Folder_ button. _Add Folder_ now correctly appends the contents of the new folder to the existing list.
Fixed keypoints being reported in wrong locations on the first frame in which an object is visible.
Fixed an out of range error that occurred when a keypoint template skeleton relied on a joint that was not available.
Fixed wrong labels on 2D bounding boxes when all labeled objects are deleted in a frame.
In this tutorial, we will walk through the production of keypoint and pose datasets for computer vision tasks such as human pose estimation and gesture recognition.
We strongly recommend you finish [Phase 1 of the Perception Tutorial](../Tutorial/Phase1.md) before continuing with this one, especially if you do not have prior experience with Unity Editor. This tutorial requires at least version **0.8.0-preview.1** of the Perception package.
In this tutorial, steps marked with **:green_circle: Action** denote all of the actions needed to progress through the tutorial. If you are in a hurry, just follow the actions!
We now need to import the sample files required for this tutorial.
* **:green_circle: Action**: Open _**Package Manager**_ and select the Perception package, which should already be present in the navigation pane on the left side.
* **:green_circle: Action**: From the list of ***Samples*** for the Perception package, click on the ***Import*** button for the sample bundle named _**Human Pose Labeling and Randomization**_.
Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project, as seen in the image below:
* **:green_circle: Action**: Select all of the assets inside the `Assets/Samples/Perception/<perception-package-version>/Human Pose Labeling and Randomization/Models and Animations` folder.
* **:green_circle: Action**: In the _**Inspector**_ tab, navigate to the _**Rig**_ section.
Note how `Animation Type` is set to `Humanoid` for all selected assets. This is a requirement, and it ensures that all animations included in the sample `.fbx` files are ready to be used on a rigged humanoid model.
The keypoint labeler captures keypoints of a labeled GameObject. The typical use of this labeler is capturing human pose estimation data. The labeler uses a [keypoint template](#KeypointTemplate) which defines the keypoints to capture for the model and the skeletal connections between those keypoints. The positions of the keypoints are recorded in pixel coordinates and saved to the captures JSON file. Each keypoint has a state value: 0 - the keypoint either does not exist or is outside of the image's bounds; 1 - the keypoint exists and is inside of the image's bounds but cannot be seen because it is occluded by another object; and 2 - the keypoint exists and is visible.
```
keypoints {
  index: <int>   -- Index of the keypoint in the template
  x:     <float> -- X pixel coordinate of the keypoint
  y:     <float> -- Y pixel coordinate of the keypoint
  state: <int>   -- 0: keypoint does not exist, 1: keypoint exists but is not visible, 2: keypoint exists and is visible
}
```
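Purely as an illustration of how these state values might be consumed downstream, here is a hypothetical C# type mirroring the schema above (not part of the package API):

```csharp
// Hypothetical consumer-side type mirroring the keypoint schema above.
public struct KeypointRecord
{
    public int index;  // index of the keypoint in the template
    public float x;    // X pixel coordinate
    public float y;    // Y pixel coordinate
    public int state;  // 0, 1, or 2, as described above

    // Only state 2 keypoints are actually visible in the rendered frame.
    public bool IsVisible => state == 2;

    // States 1 and 2 are inside the image bounds; state 0 is missing or out of bounds.
    public bool IsInsideImage => state != 0;
}
```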
This page provides brief instructions on installing the Perception package. Head over to the [Perception Tutorial](Tutorial/TUTORIAL.md) for more detailed instructions and steps for building a sample project.
1. Install the latest version of **2020.2.x** Unity Editor from [here](https://unity3d.com/get-unity/download/archive). (The Perception package has not been fully tested on newer Unity versions)
1. Create a new HDRP or URP project, or open an existing project.
1. Open `Window` -> `Package Manager`.
1. In the Package Manager window, find and click the ***+*** button in the upper left-hand corner of the window.
Note that although the Perception package is compatible with both URP and HDRP, Unity Simulation currently only supports URP projects, therefore a URP project is recommended.
If you want a specific version of the package, append the version to the end of the "git URL". Ex. `com.unity.perception@0.8.0-preview.1`
To install from a local clone of the repository, see [installing a local package](https://docs.unity3d.com/Manual/upm-ui-local.html) in the Unity manual.
* [Step 8: Verify Data Using Dataset Insights](#step-8)
### <a name="step-1">Step 1: Download Unity Editor and Create a New Project</a>
* **:green_circle: Action**: Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of **Unity Editor 2020.2.x**. (The tutorial has not yet been fully tested on newer versions.)
An alternative approach is to first install [_**Unity Hub**_](https://unity3d.com/get-unity/download), which will allow you to have multiple versions of Unity on your computer, and make it easier to manage your Unity projects and the versions of Unity they will use.
* **:green_circle: Action**: Click on the _**+**_ sign at the top-left corner of the _**Package Manager**_ window and then choose the option _**Add package from git URL...**_.
* **:green_circle: Action**: Enter the address `com.unity.perception` and click _**Add**_.
> :information_source: If you would like to install a specific version of the package, you can append the version to the end of the URL. For example `com.unity.perception@0.8.0-preview.1`. For this tutorial, **we do not need to add a version**. You can also install the package from a local clone of the Perception repository. More information on installing local packages is available [here](https://docs.unity3d.com/Manual/upm-ui-local.html).
It will take some time for the manager to download and import the package. Once the operation finishes, you will see the newly downloaded Perception package automatically selected in the _**Package Manager**_, as depicted below:
Each package can come with a set of samples. As seen in the right-hand panel, the Perception package includes a sample named _**Tutorial Files**_, which will be required for completing this tutorial. The sample files consist of example foreground and background objects, Randomizers, shaders, and other useful elements to work with during this tutorial. **Foreground** objects are those that the eventual machine learning model will try to detect, and **background** objects will be placed in the background as distractors for the model.
* **:green_circle: Action**: In the _**Package Manager**_ window, from the list of _**Samples**_ for the Perception package, click on the _**Import**_ button for the sample named _**Tutorial Files**_.
Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project. You can view your project's folder structure and access your files from the _**Project**_ tab of the editor, as seen in the image below (the package version should match the version you downloaded):
* **:green_circle: Action**: **(For URP projects only)** The _**Project**_ tab contains a search bar; use it to find the file named `ForwardRenderer.asset`, as shown below:
As seen above, the new Scene already contains a camera (`Main Camera`) and a light (`Directional Light`). We will now modify the camera's field of view and position to prepare it for the tutorial.
* **:green_circle: Action**: Click on `Main Camera` and in the _**Inspector**_ tab, make sure the camera's `Position`, `Rotation`, `Projection`, and `Field of View` match the screenshot below.
<imgsrc="Images/camera_prep.png"width ="900"/>
<imgsrc="Images/camera_prep.png"width ="400"/>
For this tutorial, we prefer our light to not cast any shadows, therefore:
Click on this asset to bring up its _**Inspector**_ view. In there, you can specify the labels that this config will keep track of. You can type in labels, add any labels defined in the project (through being added to prefabs), and import/export this label config as a JSON file. A new label config like this one contains an empty list of labels.
In this tutorial, we will generate synthetic data intended for detecting 10 everyday grocery items. These grocery items were imported into your project when you imported the tutorial files from the _**Package Manager**_, and are located in the folder `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
The label configuration we have created (`TutorialIdLabelConfig`) is of type `IdLabelConfig`, and is compatible with three of the four labelers we have attached to our `Perception Camera`. This type of label configuration carries a unique numerical ID for each label. However, `SemanticSegmentationLabeler` requires a different kind of label configuration which includes unique colors for each label instead of numerical IDs. This is because the output of this labeler is a set of images in which each visible foreground object is painted in a unique color.
It is now time to assign labels to the objects that are supposed to be detected by an eventual object-detection model, and add those labels to both of the label configurations we have created. As mentioned above, these objects are located at `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: Double click the file named `drink_whippingcream_lucerne.prefab` to open the Prefab asset.
When you open the Prefab asset, you will see the object shown in the Scene tab and its components shown on the right side of the editor, in the _**Inspector**_ tab:
Even though the sample Prefabs already have a label manually added, to learn more about how to use the Labeling component, we will now use automatic labeling to label all our foreground objects. This will overwrite their manually added labels.
* **:green_circle: Action**: Select **all the files** inside the `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs` folder.
* **:green_circle: Action**: From the _**Inspector**_ tab, enable `Use Automatic Labeling for All Selected Items`, and then select `Use asset name` as the labeling scheme.
<p align="center">
Now that we have labeled all our foreground objects and set up our label configurations, let's briefly test things.
* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: Drag and drop any of the Prefabs inside this folder into the Scene.
* **:green_circle: Action**: Click on the **▷** (play) button located at the top middle section of the editor to run your simulation.
This Randomizer uses Poisson-Disk sampling to select random positions from a given area, and spawn copies of randomly selected Prefabs (from a given list) at the chosen positions. We will use this component to generate a background that will act as a distraction for our eventual object-detection machine learning model.
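The package ships its own implementation of this placement strategy; purely for intuition, the following self-contained sketch shows the core idea of Poisson-Disk sampling (a Bridson-style algorithm; all names are hypothetical and this is not the package's code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Standalone sketch of Poisson-Disk sampling: every returned point is at
// least minDistance away from every other point, and the area fills densely.
public static class PoissonDiskSketch
{
    public static List<Vector2> Sample(float width, float height, float minDistance, int attemptsPerPoint = 30)
    {
        // Grid cells sized so each cell can contain at most one sample.
        var cellSize = minDistance / Mathf.Sqrt(2f);
        var cols = Mathf.CeilToInt(width / cellSize);
        var rows = Mathf.CeilToInt(height / cellSize);
        var grid = new int[cols, rows];
        for (var x = 0; x < cols; x++)
            for (var y = 0; y < rows; y++)
                grid[x, y] = -1;

        var samples = new List<Vector2>();
        var active = new List<int>();
        AddSample(new Vector2(Random.Range(0f, width * 0.999f), Random.Range(0f, height * 0.999f)));

        while (active.Count > 0)
        {
            // Try to grow the pattern around a randomly chosen active sample.
            var index = active[Random.Range(0, active.Count)];
            var found = false;
            for (var attempt = 0; attempt < attemptsPerPoint && !found; attempt++)
            {
                // Candidate in the annulus [minDistance, 2 * minDistance]
                // (radius sampled uniformly for simplicity).
                var angle = Random.Range(0f, 2f * Mathf.PI);
                var radius = minDistance * (1f + Random.value);
                var candidate = samples[index] + radius * new Vector2(Mathf.Cos(angle), Mathf.Sin(angle));
                if (candidate.x >= 0f && candidate.x < width &&
                    candidate.y >= 0f && candidate.y < height && IsFarEnough(candidate))
                {
                    AddSample(candidate);
                    found = true;
                }
            }
            if (!found)
                active.Remove(index); // this sample's neighborhood is saturated
        }
        return samples;

        void AddSample(Vector2 p)
        {
            var gx = Mathf.Min(cols - 1, (int)(p.x / cellSize));
            var gy = Mathf.Min(rows - 1, (int)(p.y / cellSize));
            grid[gx, gy] = samples.Count;
            active.Add(samples.Count);
            samples.Add(p);
        }

        bool IsFarEnough(Vector2 p)
        {
            var cx = (int)(p.x / cellSize);
            var cy = (int)(p.y / cellSize);
            // Points closer than minDistance can only be within 2 cells.
            for (var x = Mathf.Max(0, cx - 2); x <= Mathf.Min(cols - 1, cx + 2); x++)
                for (var y = Mathf.Max(0, cy - 2); y <= Mathf.Min(rows - 1, cy + 2); y++)
                    if (grid[x, y] >= 0 && Vector2.Distance(samples[grid[x, y]], p) < minDistance)
                        return false;
            return true;
        }
    }
}
```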
* **:green_circle: Action**: Click _**Add Folder**_, and from the file explorer window that opens, choose the folder `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
The background Prefabs are primitive shapes devoid of color or texture. Later Randomizers will take care of those aspects.
<imgsrc="Images/first_run.png"width ="700"/>
</p>
In this view, you will also see the real-time visualizations we discussed before shown on top of the camera's view. In the top right corner of the window, you can see a visualization control panel, through which you can enable or disable visualizations for individual labelers. That said, we currently have no foreground objects in the Scene yet, so no bounding boxes or semantic segmentation overlays will be displayed.
Note that disabling visualizations for a labeler does not affect your generated data. The annotations from all labelers that are active before running the simulation will continue to be recorded and will appear in the output data.
`TextureRandomizer` will have the task of attaching random textures to our colorless background objects at each Iteration of the Scenario. Similarly, `HueOffsetRandomizer` will alter the color of the objects, and `RotationRandomizer` will give the objects a new random rotation each Iteration.
* **:green_circle: Action**: In the UI snippet for `TextureRandomizer`, click _**Add Folder**_ and choose `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Textures`.
* **:green_circle: Action**: In the UI snippet for `RotationRandomizer`, verify that all the minimum values for the three ranges are `0` and that maximum values are `360`.
To make sure each Randomizer knows which objects it should work with, we will use an object tagging and querying workflow that the bundled Randomizers already use. Each Randomizer can query the Scene for objects that carry certain types of `RandomizerTag` components. For instance, the `TextureRandomizer` queries the Scene for objects that have a `TextureRandomizerTag` component (you can change this in code!). Therefore, in order to make sure our background Prefabs are affected by the `TextureRandomizer` we need to make sure they have `TextureRandomizerTag` attached to them.
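If you want to write your own tag-based Randomizer, a minimal sketch looks like this (class names are hypothetical; the `tagManager` query follows the pattern used by the bundled Randomizers, and the sampling ranges of the `Vector3Parameter` can be configured in the Inspector):

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;

// Hypothetical tag: attach this to any GameObject that should be affected.
public class MyRotationRandomizerTag : RandomizerTag { }

[Serializable]
[AddRandomizerMenu("Example/My Rotation Randomizer")]
public class MyRotationRandomizer : Randomizer
{
    public Vector3Parameter rotation = new Vector3Parameter();

    protected override void OnIterationStart()
    {
        // Query the Scene for every object carrying the tag type and
        // apply a freshly sampled rotation to each one.
        foreach (var taggedObject in tagManager.Query<MyRotationRandomizerTag>())
            taggedObject.transform.rotation = Quaternion.Euler(rotation.Sample());
    }
}
```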
* **:green_circle: Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
* **:green_circle: Action**: Select all the files inside and from the _**Inspector**_ tab add a `TextureRandomizerTag` to them. This will add the component to all the selected files.
* **:green_circle: Action**: Repeat the above step to add `HueOffsetRandomizerTag` and `RotationRandomizerTag` to all selected Prefabs.
It is now time to spawn and randomize our foreground objects.
* **:green_circle: Action**: Add `ForegroundObjectPlacementRandomizer` to your list of Randomizers. Click _**Add Folder**_ and select `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **:green_circle: Action**: Set these values for the above Randomizer: `Depth = -3, Separation Distance = 1.5, Placement Area = (5,5)`.
This Randomizer uses the same algorithm as the one we used for backgrounds; however, it is defined in a separate C# class because you can only have **one of each type of Randomizer added to your Scenario**. Therefore, this is our way of differentiating between how background and foreground objects are treated.
* **:green_circle: Action**: From the _**Project**_ tab select all the foreground Prefabs located in `Assets/Samples/Perception/0.8.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`, and add a `RotationRandomizerTag` component to them.
Randomizers execute according to their order within the list of Randomizers added to your Scenario. If you look at the list now, you will notice that `ForegroundObjectPlacementRandomizer` comes after `RotationRandomizer`; therefore, foreground objects will NOT be included in the rotation randomizations, even though they carry the proper RandomizerTag. To fix that:
If you have not logged in yet, the _**Services**_ tab will display a message noting that you are offline:
* **:green_circle: Action**: Click ***General settings***.
This will open the ***Services*** tab of the ***Project Settings*** window. If you have not logged in yet, you will see a message noting that you are signed out:
<imgsrc="Images/signin.png"width="400"/>
<imgsrc="Images/signin.png"width="600"/>
* **:green_circle: Action**: Click _**Sign in...**_ and follow the steps in the window that opens to sign in or create a Unity account.
Once you have registered for a free trial, you will be taken to your Unity Simulation dashboard, where you will be able to observe your usage and billing information.
* **:green_circle: Action**: Return to Unity Editor. Click _**Select Organization**_ and choose the only available option (which typically has the same name as your Unity username).
If you have used Unity before, you might have set up multiple organizations for your account. In that case, choose whichever you would like to associate with this project.
<imgsrc="Images/create_proj.png"width="400"/>
<imgsrc="Images/create_proj.png"width="600"/>
* **:green_circle: Action**: Click _**Create Project ID**_ to create a new cloud project and connect your local project to it.
### <aname="step-2">Step 2: Run Project on Unity Simulation</a>
<imgsrc="Images/runinusim.png"width="600"/>
</p>
* **:green_circle: Action**: Verify that `TutorialScene` (which is the Scene we have been working in) is selected as your _**Main Scene**_ and the `SimulationScenario` object as your _**Scenario**_.
Here, you can also specify a name for the run, the number of Iterations the Scenario will execute for, and the number of _**Instances**_ (number of nodes the work will be distributed across) for the run. This window automatically picks the currently active Scene and Scenario to run in Unity Simulation.
> :information_source: You can ignore the ***Optional Configuration*** section for now. This is useful if you plan to specify a configuration for your Scenario (including the Randomizers) that will override the values set in the Scenario UI, in Unity Simulation. To generate a configuration, you can click on the ***Generate JSON Config*** button provided in the ***Inspector*** view of Scenario components.
Your project will now be built and then uploaded to Unity Simulation and run. This may take a few minutes to complete, during which the editor may become frozen; this is normal behaviour.
* **:green_circle: Action**: Once the operation is complete, you can find the **Execution ID** of this Unity Simulation run in the _**Console**_ tab and the _**Run in Unity Simulation**_ window:
<imgsrc="Images/build_uploaded.png"/>
<imgsrc="Images/build_uploaded.png"width="600"/>
### <aname="step-3">Step 3: Keep Track of Your Runs Using the Unity Simulation Command-Line Interface</a>
Follow the rest of the steps inside the notebook to generate a variety of plots and stats. Keep in mind that this notebook is provided just as an example, and you can modify and extend it according to your own needs using the tools provided by the [Dataset Insights framework](https://datasetinsights.readthedocs.io/en/latest/).
The next step in this workflow, which is out of the scope of this tutorial, is to train an object-detection model using our synthetic dataset. It is important to note that the dataset of 1,000 captures we generated here is probably not sufficiently large for training most models. We chose this number so that the Unity Simulation run would finish quickly, allowing us to move on to learning how to analyze the statistics of the dataset. In order to generate data for training, we recommend a minimum dataset size of around 50,000 captures with a large degree of randomization.
This concludes the Perception Tutorial. In case of any issues or questions, please feel free to open a GitHub issue on the `com.unity.perception` repository so that the Unity Computer Vision team can get back to you as soon as possible.