## Tutorial/Phase1.md

### <a name="step-1">Step 1: Download Unity Editor and Create a New Project</a>
<font color="green">* **Action**:</font> Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of Unity Editor 2020.1.x. (The tutorial has not yet been tested on newer versions.)
* **Action**: Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of Unity Editor 2020.1.x. (The tutorial has not yet been tested on newer versions.)
<font color="green">* **Action**:</font> Make sure the _**Linux Build Support**_ and _**Visual Studio**_ installation options are checked when selecting modules during installation.
* **Action**: Make sure the _**Linux Build Support**_ and _**Visual Studio**_ installation options are checked when selecting modules during installation.
<font color="green">* **Action**:</font> Open Unity and create a new project using the Universal Render Pipeline. Name your new project _**Perception Tutorial**_, and specify a desired location as shown below.
* **Action**: Open Unity and create a new project using the Universal Render Pipeline. Name your new project _**Perception Tutorial**_, and specify a desired location as shown below.
<p align="center">
<img src="Images/create_new_project.png" align="center" width="800"/>

Once your new project is created and loaded, you will be presented with the Unity Editor interface. From this point on, whenever we refer to _**the editor**_, we mean Unity Editor.
<font color="green">* **Action**:</font> From the top menu bar, open _**Window**_ -> _**Package Manager**_.
* **Action**: From the top menu bar, open _**Window**_ -> _**Package Manager**_.
<font color="green">* **Action**:</font> Click on the _**+**_ sign at the top-left corner of the _**Package Manager**_ window and then choose the option _**Add package frim git URL...**_.
<font color="green">* **Action**:</font> Enter the address `com.unity.perception` and click _**Add**_.
* **Action**: Click on the _**+**_ sign at the top-left corner of the _**Package Manager**_ window and then choose the option _**Add package frim git URL...**_.
* **Action**: Enter the address `com.unity.perception` and click _**Add**_.
**Note:** If you would like a specific version of the package, you can append the version to the end of the URL. For example `com.unity.perception@0.1.0-preview.5`. For this tutorial, **we do not need to add a version**. You can also install the package from a local clone of the Perception repository. More information on installing local packages is available [here](https://docs.unity3d.com/Manual/upm-ui-local.html).

Each package can come with a set of samples. As seen in the right-hand panel, the Perception package includes a sample named _**Tutorial Files**_, which will be required for completing this tutorial. The sample files consist of example foreground and background objects, randomizers, shaders, and other useful elements to work with during this tutorial. **Foreground** objects are those that the eventual machine learning model will try to detect, and **background** objects will be placed in the background as distractors for the model.
<font color="green">* **Action**:</font> In the _**Package Manager**_ window, from the list of _**Samples**_ for the Perception package, click on the _**Import into Project**_ button for the sample named _**Tutorial Files**_.
* **Action**: In the _**Package Manager**_ window, from the list of _**Samples**_ for the Perception package, click on the _**Import into Project**_ button for the sample named _**Tutorial Files**_.
Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project. You can view your project's folder structure and access your files from the _**Project**_ tab of the editor, as seen in the image below:

<font color="green">* **Action**:</font> The _**Project**_ tab contains a search bar; use it to find the file named `ForwardRenderer.asset`, as shown below:
* **Action**: The _**Project**_ tab contains a search bar; use it to find the file named `ForwardRenderer.asset`, as shown below:
<font color="green">* **Action**:</font> Click on the found file to select it. Then, from the _**Inspector**_ tab of the editor, click on the _**Add Renderer Feature**_ button, and select _**Ground Truth Renderer Feature**_ from the dropdown menu:
* **Action**: Click on the found file to select it. Then, from the _**Inspector**_ tab of the editor, click on the _**Add Renderer Feature**_ button, and select _**Ground Truth Renderer Feature**_ from the dropdown menu:
<p align="center">
<img src="Images/forward_renderer_inspector.png" width="400"/>

### <a name="step-3">Step 3: Setup a Scene for Your Perception Simulation</a>
Simply put, in Unity, Scenes contain any object that exists in the world. This world can be a game, or in this case, a perception-oriented simulation. Every new project contains a Scene named `SampleScene`, which is automatically opened when the project is created. This Scene comes with several objects and settings that we do not need, so let's create a new one.
<font color="green">* **Action**:</font> In the _**Project**_ tab, right-click on the `Assets/Scenes` folder and click _**Create -> Scene**_. Name this new Scene `TutorialScene` and double-click on it to open it.
* **Action**: In the _**Project**_ tab, right-click on the `Assets/Scenes` folder and click _**Create -> Scene**_. Name this new Scene `TutorialScene` and double-click on it to open it.
The _**Hierarchy**_ tab of the editor displays all the Scenes currently loaded, and all the objects currently present in each loaded Scene, as shown below:
<p align="center">

As seen above, the new Scene already contains a camera (`Main Camera`) and a light (`Directional Light`). We will now modify the camera's field of view and position to prepare it for the tutorial.
<font color="green">* **Action**:</font> Click on `Main Camera` and in the _**Inspector**_ tab, modify the camera's `Position`, `Rotation`, `Projection` and `Size` to match the screenshot below. (Note that `Size` only becomes available once you set `Projection` to `Orthographic`)
* **Action**: Click on `Main Camera` and in the _**Inspector**_ tab, modify the camera's `Position`, `Rotation`, `Projection` and `Size` to match the screenshot below. (Note that `Size` only becomes available once you set `Projection` to `Orthographic`)
<p align="center">
<img src="Images/camera_prep.png"/>

For this tutorial, we prefer our light to not cast any shadows, therefore:
<font color="green">* **Action**:</font> Click on `Directional Light` and in the _**Inspector**_ tab, set `Shadow Type` to `No Shadows`.
* **Action**: Click on `Directional Light` and in the _**Inspector**_ tab, set `Shadow Type` to `No Shadows`.
<font color="green">* **Action**:</font> Select `Main Camera` again and in the _**Inspector**_ tab, click on the _**Add Component**_ button.
<font color="green">* **Action**:</font> Start typing `Perception Camera` in the search bar that appears, until the `Perception Camera` script is found, with a **#** icon to the left:
* **Action**: Select `Main Camera` again and in the _**Inspector**_ tab, click on the _**Add Component**_ button.
* **Action**: Start typing `Perception Camera` in the search bar that appears, until the `Perception Camera` script is found, with a **#** icon to the left:
<font color="green">* **Action**:</font> Click on this script to add it as a component. Your camera is now a `Perception` camera.
* **Action**: Click on this script to add it as a component. Your camera is now a `Perception` camera.
Adding components is the standard way in which objects can have various kinds of logic and data attached to them in Unity. This includes objects placed within the Scene (called GameObjects), such as the camera above, or objects outside of a Scene, in your project folders (called Prefabs).
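
If you are curious how this looks outside the editor UI, components can also be attached from scripts. Below is a minimal, illustrative sketch (not a tutorial step); it assumes the `PerceptionCamera` class from the package's `UnityEngine.Perception.GroundTruth` namespace and a Scene that has a main camera:

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class AddPerceptionCameraExample : MonoBehaviour
{
    void Start()
    {
        // Attaching a component from code mirrors clicking
        // "Add Component" in the Inspector.
        var cameraObject = Camera.main.gameObject;
        cameraObject.AddComponent<PerceptionCamera>();
    }
}
```

In this tutorial we will keep configuring everything through the Inspector, but the same component-based model applies either way.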

To speed up your perception workflow, the Perception package comes with four common labelers for object-detection tasks; however, if you are comfortable with code, you can also add your own custom labelers. The labelers that come with the Perception package cover **2D bounding boxes, object counts, object information (pixel counts and ids), and semantic segmentation images (each object rendered in a unique color)**.
<font color="green">* **Action**:</font> Click on the _**+**_ button to the bottom right corner of the empty labeler list, and select `BoundingBox2DLabeler`.
<font color="green">* **Action**:</font> Repeat the above step to add `ObjectCountLabeler`, `RenderedObjectInfoLabeler`, `SemanticSegmentationLabeler`.
* **Action**: Click on the _**+**_ button to the bottom right corner of the empty labeler list, and select `BoundingBox2DLabeler`.
* **Action**: Repeat the above step to add `ObjectCountLabeler`, `RenderedObjectInfoLabeler`, `SemanticSegmentationLabeler`.
Once you add the labelers, the _**Inspector**_ view of the `Perception Camera` component will look like this:

You will notice each added labeler has a field named `Id Label Config`. By adding a label configuration here you can instruct the labeler to look for certain labels within the scene and ignore the rest. To do that, we should first create label configurations.
<font color="green">* **Action**:</font> In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_.
* **Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_.
<font color="green">* **Action**:</font> Rename the newly created `IdLabelConfig` asset to `TutorialIdLabelConfig`.
* **Action**: Rename the newly created `IdLabelConfig` asset to `TutorialIdLabelConfig`.
<font color="green">* **Action**:</font> Select `TutorialIdLabelConfig` and in the _**Inspector**_ tab, click on the _**+**_ button to add 10 new label entries. Use the following exact names for these entries:
* **Action**: Select `TutorialIdLabelConfig` and in the _**Inspector**_ tab, click on the _**+**_ button to add 10 new label entries. Use the following exact names for these entries:
1. `candy_minipralines_lindt`
2. `cereal_cheerios_honeynut`
3. `cleaning_snuggle_henkel`

The label configuration we have created is compatible with three of the four labelers we plan to attach to our `Perception Camera`. However, `SemanticSegmentationLabeler` requires a different kind of label configuration, which includes unique colors for each label instead of numerical IDs. This is because the output of this labeler is a set of images in which each visible foreground object is painted in a unique color.
<font color="green">* **Action**:</font> In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Semantic Segmentation Label Config**_. Name this asset `TutorialSemanticSegmentationLabelConfig`.
<font color="green">* **Action**:</font> Add the same 10 labels from the above list to this new label configuration. Note how this time they each get a new unique color instead of a number:
* **Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Semantic Segmentation Label Config**_. Name this asset `TutorialSemanticSegmentationLabelConfig`.
* **Action**: Add the same 10 labels from the above list to this new label configuration. Note how this time they each get a new unique color instead of a number:
<p align="center">
<img src="Images/semseglabelconfig.png" width="400"/>

<font color="green">* **Action**:</font> Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three labelers. To do so, you can either drag and drop the former into the corresponding fields for each labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth labeler. The `Perception Camera` component will now look like the image below:
* **Action**: Select the `Main Camera` object from the Scene _**Hierarchy**_, and in the _**Inspector**_ tab, assign the newly created `TutorialIdLabelConfig` to the first three labelers. To do so, you can either drag and drop the former into the corresponding fields for each labeler, or click on the small circular button in front of the `Id Label Config` field, which brings up an asset selection window filtered to only show compatible assets. Assign `TutorialSemanticSegmentationLabelConfig` to the fourth labeler. The `Perception Camera` component will now look like the image below:
<p align="center">
<img src="Images/pclabelconfigsadded.png" width="400"/>

In Unity, Prefabs are essentially reusable GameObjects that are stored to disk, along with all their child GameObjects, components, and property values. Let's see what our sample prefabs include.
<font color="green">* **Action**:</font> In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/ Foreground Objects/Phase 1/Prefbas`
<font color="green">* **Action**:</font> Double click the file named `drink_whippingcream_lucerne.prefab` to open the Prefab asset.
* **Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/ Foreground Objects/Phase 1/Prefbas`
* **Action**: Double click the file named `drink_whippingcream_lucerne.prefab` to open the Prefab asset.
When you open the Prefab asset, you will see the object shown in the Scene tab and its components shown on the right side of the editor, in the _**Inspector**_ tab:

In this tutorial, you will learn how to use the provided Randomizers, as well as how to create new ones that are custom-fitted to your randomization needs.
<font color="green">* **Action**:</font> Create a new GameObject in your Scene by right-clicking in the _**Hierarchy**_ tab and clicking `Create Empty`.
<font color="green">* **Action**:</font> Rename your new GameObject to `Simulation Scenario`.
<font color="green">* **Action**:</font> In the _**Inspector**_ view of this new object, add a new `Fixed Length Scenario` component.
* **Action**: Create a new GameObject in your Scene by right-clicking in the _**Hierarchy**_ tab and clicking `Create Empty`.
* **Action**: Rename your new GameObject to `Simulation Scenario`.
* **Action**: In the _**Inspector**_ view of this new object, add a new `Fixed Length Scenario` component.
Each `Scenario` executes a number of `Iteration`s, and each Iteration carries on for a number of frames. These are timing elements you can leverage in order to customize your Scenarios and the timing of your randomizations. You will learn how to use Iterations and frames in Phase 2 of this tutorial. For now, we will use the `Fixed Length Scenario`, which is a special kind of Scenario that runs for a fixed number of frames during each Iteration, and is sufficient for many common use-cases. Note that at any given time, you can have only one Scenario active in your Scene.
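
As a preview of how these timing elements surface in code (this belongs to Phase 2, so treat it as an illustrative sketch rather than a step to follow now), a custom Randomizer overrides lifecycle callbacks that the active Scenario invokes at the corresponding times. The class and parameter names below are hypothetical examples:

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Samplers;

// Empty marker component used to find the objects this Randomizer should
// affect; in a real project this would typically live in its own file.
public class MyLightRandomizerTag : RandomizerTag { }

[Serializable]
[AddRandomizerMenu("Perception/My Light Randomizer")]
public class MyLightRandomizer : Randomizer
{
    // Draws a new value from a uniform range every time Sample() is called
    public FloatParameter lightIntensity = new FloatParameter { value = new UniformSampler(0.5f, 3f) };

    // The Scenario calls this at the start of every Iteration
    protected override void OnIterationStart()
    {
        foreach (var tag in tagManager.Query<MyLightRandomizerTag>())
            tag.GetComponent<Light>().intensity = lightIntensity.Sample();
    }
}
```
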

There are a number of settings and properties you can modify here. `Quit On Complete` instructs the simulation to quit once this Scenario has completed executing. We can see here that the Scenario has been set to run for 100 Iterations, and that each Iteration will run for one frame. But this is currently an empty `Scenario`, so let's add some Randomizers.
<font color="green">* **Action**:</font> Click _**Add Randomizer**_, and from the list choose `BackgroundObjectPlacementRandomizer`.
* **Action**: Click _**Add Randomizer**_, and from the list choose `BackgroundObjectPlacementRandomizer`.
<font color="green">* **Action**:</font> Click _**Add Folder**_, and from the file explorer window that opnes, choose the folder `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
* **Action**: Click _**Add Folder**_, and from the file explorer window that opnes, choose the folder `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
<font color="green">* **Action**:</font> Set the rest of the properties (except for `Seed`) according to the image below. The `Seed` attribute is the seed used for the underlying random sampler, and does not need to match the image shown.
* **Action**: Set the rest of the properties (except for `Seed`) according to the image below. The `Seed` attribute is the seed used for the underlying random sampler, and does not need to match the image shown.
<font color="green">* **Action**:</font> Click on the **▷** (play) button located at top middle section of the editor to run your simulation.
* **Action**: Click on the **▷** (play) button located at top middle section of the editor to run your simulation.
<p align="center">
<img src="Images/play.png" width = "500"/>

As seen in the image above, what we have now is just a beige-colored wall of shapes. This is because so far we are only spawning them, and the beige color of our light is what gives them their current look. To make this background more useful, let's add a couple more `Randomizers`.
<font color="green">* **Action**:</font> Repeat the previous steps to add `TextureRandomizer`, `HueOffsetRandomizer`, and `RotationRandomizer`.
* **Action**: Repeat the previous steps to add `TextureRandomizer`, `HueOffsetRandomizer`, and `RotationRandomizer`.
<font color="green">* **Action**:</font> In the UI snippet for `TextureRandomizer`, click _**Add Folder**_ and choose `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Textures`.
* **Action**: In the UI snippet for `TextureRandomizer`, click _**Add Folder**_ and choose `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Textures`.
<font color="green">* **Action**:</font> In the UI snippet for `RotationRandomizer`, change all the maximum values for the three ranges to `360` and leave the minimums at `0`.
* **Action**: In the UI snippet for `RotationRandomizer`, change all the maximum values for the three ranges to `360` and leave the minimums at `0`.
Your list of Randomizers should now look like the screenshot below:

To make sure each Randomizer knows which objects it should work with, we will use an object tagging and querying workflow that the bundled Randomizers already use. Each Randomizer can query the Scene for objects that carry certain types of `RandomizerTag` components. For instance, the `TextureRandomizer` queries the Scene for objects that have a `TextureRandomizerTag` component (you can change this in code!). Therefore, in order to make sure our background Prefabs are affected by the `TextureRandomizer` we need to make sure they have `TextureRandomizerTag` attached to them.
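
As a minimal sketch of what such a tag looks like in code (the class name here is a hypothetical example, but the built-in tags like `TextureRandomizerTag` follow the same pattern), a `RandomizerTag` is typically just an empty marker component:

```csharp
using UnityEngine.Perception.Randomization.Randomizers;

// An empty marker component: it carries no data of its own and exists
// purely so a Randomizer's tagManager query can find tagged objects.
public class MyCustomRandomizerTag : RandomizerTag { }
```

This is why adding the right tag components to your Prefabs, as we do next, is all the "wiring" these Randomizers need.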
<font color="green">* **Action**:</font> In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
<font color="green">* **Action**:</font> Select all the files inside and from the _**Inspector**_ tab add a `TextureRandomizerTag` to them. This will add the component to all the selected files.
<font color="green">* **Action**:</font> Repeat the above step to add `HueOffsetRandomizerTag` and `RotationRandomizerTag` to all selected Prefabs.
* **Action**: In the _**Project**_ tab, navigate to `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Background Objects/Prefabs`.
* **Action**: Select all the files inside and from the _**Inspector**_ tab add a `TextureRandomizerTag` to them. This will add the component to all the selected files.
* **Action**: Repeat the above step to add `HueOffsetRandomizerTag` and `RotationRandomizerTag` to all selected Prefabs.
Once the above step is done, the _**Inspector**_ tab for a background Prefab should look like this:

It is now time to spawn and randomize our foreground objects. We are getting close to generating our first set of synthetic data!
<font color="green">* **Action**:</font> Add `ForegroundObjectPlacementRandomizer` to your list of Randomizers. Click _**Add Folder**_ and select `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
<font color="green">* **Action**:</font> Set these values for the above Randomizer: `Depth = 3, Separation Distance = 1.5, Placement Area = (5,5)`.
* **Action**: Add `ForegroundObjectPlacementRandomizer` to your list of Randomizers. Click _**Add Folder**_ and select `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`.
* **Action**: Set these values for the above Randomizer: `Depth = 3, Separation Distance = 1.5, Placement Area = (5,5)`.
<font color="green">* **Action**:</font> From the _**Project**_ tab select all the foreground Prefabs located in `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`, and add a `RotationRandomizerTag` component to them.
* **Action**: From the _**Project**_ tab select all the foreground Prefabs located in `Assets/Samples/Perception/0.5.0-preview.1/Tutorial Files/Foreground Objects/Phase 1/Prefabs`, and add a `RotationRandomizerTag` component to them.
<font color="green">* **Action**:</font> Drag `ForegroundObjectPlacementRandomizer` and drop it above `RotationRandomizer`.
* **Action**: Drag `ForegroundObjectPlacementRandomizer` and drop it above `RotationRandomizer`.
### <a name="step-6">Step 6: Generate and Verify Synthetic Data</a>

<img src="Images/dataset_written.png"/>
</p>
<font color="green">* **Action**:</font> Navigate to the dataset path addressed in the _**Console**_.
* **Action**: Navigate to the dataset path addressed in the _**Console**_.
In this folder, you will find a few types of data, depending on your `Perception Camera` settings. These can include:
- Logs

The output dataset includes a variety of information about different aspects of the active sensors in the Scene (currently only one), as well as the ground-truth generated by all active labelers. [This page](https://github.com/Unity-Technologies/com.unity.perception/blob/master/com.unity.perception/Documentation%7E/Schema/Synthetic_Dataset_Schema.md) provides a comprehensive explanation on the schema of this dataset. We strongly recommend having a look at the page once you have completed this tutorial.
<font color="green">* **Action**:</font> To get a quick feel of how the data is stored, open the folder whose name starts with `Dataset`, then open the file named `captures_000.json`. This file contains the output from `BoundingBox2DLabeler`. The `captures` array contains the position and rotation of the sensor (camera), the position and rotation of the ego (sensor group, currently only one), and the annotations made by `BoundingBox2DLabeler` for all visible objects defined in its label configuration. For each visibile object, the annotations include:
* **Action**: To get a quick feel of how the data is stored, open the folder whose name starts with `Dataset`, then open the file named `captures_000.json`. This file contains the output from `BoundingBox2DLabeler`. The `captures` array contains the position and rotation of the sensor (camera), the position and rotation of the ego (sensor group, currently only one), and the annotations made by `BoundingBox2DLabeler` for all visible objects defined in its label configuration. For each visibile object, the annotations include:
* `label_id`: The numerical id assigned to this object's label in the labeler's label configuration
* `label_name`: The object's label, e.g. `candy_minipralines_lindt`
* `instance_id`: Unique instance id of the object
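
For orientation, here is a heavily trimmed, illustrative excerpt of what one entry in the `captures` array can look like. The values below are invented for this example and the exact set of fields may differ between package versions; the schema page linked above is the authoritative reference:

```json
{
  "captures": [
    {
      "id": "4521949a-2a71-4c03-beb0-4f6362676639",
      "sensor": {
        "translation": [0.0, 0.0, -10.0],
        "rotation": [0.0, 0.0, 0.0, 1.0]
      },
      "annotations": [
        {
          "annotation_definition": "f9f22e05-443f-4602-a422-ebe4ea9b55cb",
          "values": [
            {
              "label_id": 1,
              "label_name": "candy_minipralines_lindt",
              "instance_id": 2,
              "x": 320.0,
              "y": 112.0,
              "width": 256.0,
              "height": 128.0
            }
          ]
        }
      ]
    }
  ]
}
```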

To verify and analyze a variety of metrics for the generated data, such as the number of foreground objects in each frame and the degree of representation for each foreground object (label), we will now use Unity's Dataset Insights framework. This will involve running a Jupyter notebook which is conveniently packaged within a Docker image that you can download from Unity.
<font color="green">* **Action**:</font> Download and install [Docker Desktop](https://www.docker.com/products/docker-desktop)
<font color="green">* **Action**:</font> Open a command line interface (Command Prompt on Windows, Terminal on Mac OS, etc.) and type the following command to run the Dataset Insights Docker image:
* **Action**: Download and install [Docker Desktop](https://www.docker.com/products/docker-desktop)
* **Action**: Open a command line interface (Command Prompt on Windows, Terminal on Mac OS, etc.) and type the following command to run the Dataset Insights Docker image:
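
At the time of writing, the command has the following shape; treat the image name `unitytechnologies/datasetinsights` and the `latest` tag as assumptions to verify against the Dataset Insights documentation, and replace `<path to synthetic data>` with the full path to the folder containing your generated dataset:

`docker run -p 8888:8888 -v <path to synthetic data>:/data -t unitytechnologies/datasetinsights:latest`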
<font color="green">* **Action**:</font> The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:
* **Action**: The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:
<font color="green">* **Action**:</font> To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go.
<font color="green">* **Action**:</font> Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
<font color="green">* **Action**:</font> Once in the notebook, replace the `<GUID>` in the `data_root = /data/<GUID>` line with the name of the dataset folder inside your generated data. For example `data_root = /data/Dataseta26351bc-1b72-46c5-9e0c-d7afd6df2974`.
* **Action**: To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go.
* **Action**: Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
* **Action**: Once in the notebook, replace the `<GUID>` in the `data_root = /data/<GUID>` line with the name of the dataset folder inside your generated data. For example `data_root = /data/Dataseta26351bc-1b72-46c5-9e0c-d7afd6df2974`.
<p align="center">
<img src="Images/jupyter2.png"/>

</p>
<font color="green">* **Action**:</font> Follow the instructions laid out in the notebook and run each code block to view its outputs.
* **Action**: Follow the instructions laid out in the notebook and run each code block to view its outputs.
This concludes Phase 1 of the Perception tutorial. In the next phase, you will dive a little bit into randomization code and learn how to build your own custom Randomizer. [Click here to continue to Phase 2: Custom Randomizations](Phase2.md)

## Tutorial/Phase3.md


<p align="center">
<img src="Images/cloud_icon.png" width="400"/>
</p>
If you have not logged in yet, the _**Services**_ tab will display a message noting that you are offline:

* **Action**: Click _**Sign in...**_ and follow the steps in the window that opens to sign in or create an account.
Unity Simulation is a cloud-based service that makes it possible for you to run thousands of instances of Unity builds in order to generate massive amounts of data. The USim service is billed on a per-usage basis, and the free trial offers up to $100 of free credit per month. In order to access the free trial, you will need to provide credit card information. **This information will be used to charge your account if you exceed the $100 monthly credit.** A list of hourly and daily rates for various computational resources is available on the page where you first register for USim.
It is now time to connect your local Unity project to a cloud project.
* **Action**: Return to Unity Editor. In the _**Services**_ tab click _**Select Organization**_ and choose the only available option (which typically has the same name as your Unity username).

* **Action**: Click _**Create**_ to create a new cloud project and connect your local project to it.
### Step 2: Run Project on USim
For performance reasons, it is best to disable real-time visualizations before carrying on with the USim run.
In order to make sure our builds are compatible with USim, we need to set our project's scripting backend to _**Mono**_ rather than _**IL2CPP**_. The latter is the default option for projects created with newer versions of Unity, so we need to change it.
* **Action**: From the top menu bar, open _**Edit -> Project Settings**_.
* **Action**: In the window that opens, navigate to the _**Player**_ tab, find the _**Scripting Backend**_ setting (under _**Other Settings**_), and change it to _**Mono**_:

</p>
* **Action**: Close _**Project Settings**_.
* **Action**: From the top menu bar, open _**Window -> Run in USim**_.
<p align="center">
<img src="Images/runinusim.png" width="600"/>

Here, you can also specify a name for the run, the number of iterations the Scenario will execute for, and the number of concurrent _**Instances**_ for the run.
* **Action**: Name your run `FirstRun`, set the number of iterations to `20,000`, and instances to `1`.
* **Action**: Click _**Build and Run**_.

### <a name="step-3">Step 3: Keep Track of USim Runs Using USim-CLI</a>
To keep track of the progress of your USim run, you will need to use USim's command-line interface (USim-CLI). Detailed instructions for USim-CLI are provided [here](https://github.com/Unity-Technologies/Unity-Simulation-Docs/blob/master/doc/quickstart.md#download-unity-simulation-quickstart-materials). For the purposes of this tutorial, we will only go through the most essential commands, which will help us know when our USim run is complete and where to find the produced dataset.
* **Action**: Download the latest version of `unity_simulation_bundle.zip` from [here](https://github.com/Unity-Technologies/Unity-Simulation-Docs/releases).
**Note**: If you are using a MacOS computer, we recommend using the _**curl**_ command from the Terminal to download the file, in order to avoid issues caused by the MacOS Gatekeeper when using the CLI. You can use these commands:
```
curl -Lo ~/Downloads/unity_simulation_bundle.zip <URL-unity_simulation_bundle.zip>
unzip ~/Downloads/unity_simulation_bundle.zip -d ~/Downloads/unity_simulation_bundle
```

* **Action**: Extract the zip archive you downloaded.
* **Action**: Open a command-line interface (Terminal on MacOS, cmd on Windows, etc.) and navigate to the extracted folder.
If you downloaded the zip archive in the default location in your downloads folder, you can use these commands to navigate to it from the command-line:
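
Assuming the default download location used in the `curl` commands above, this would simply be:

`cd ~/Downloads/unity_simulation_bundle`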

You will now be using the _**usim**_ executable to interact with Unity Simulation through commands.
* **Action**: To see a list of available commands, simply run `usim` once:
MacOS:
`USimCLI/mac/usim`

The first step is to log in.
* **Action**: Log in to USim using the `usim login auth` command.
MacOS:
`USimCLI/mac/usim login auth`

This command will ask you to press Enter to open a browser for you to log in to your Unity account:
`Press [ENTER] to open your browser to ...`
* **Action**: Press Enter to open a browser window for logging in.

</p>
**Note**: On MacOS, you might get errors related to permissions. In these cases, try running your commands with the `sudo` qualifier. For example: `sudo USimCLI/mac/usim login auth`. This will ask for your MacOS account's password, and should help overcome the permission issues.
**Note: From this point on we will only include MacOS formatted commands in the tutorial, but all the USim commands we use will work in all supported operating systems.**
* **Action**: Return to your command-line interface. Get a list of cloud projects associated with your Unity account using the `usim get projects` command:
MacOS:
`USimCLI/mac/usim get projects`

This gives you a list of the cloud projects associated with your Unity account along with their project IDs. Example output:
```

SynthDet 9ec23417-73cd-becd-9dd6-556183946153 2020-08-12T19:46:20+00:00
```
In case you have more than one cloud project, you will need to "activate" the one corresponding with your perception tutorial project. If there is only one project, it is already activated and you will not need to execute the command below (note: replace `<project-id>` with the id of your desired project).
* **Action**: Activate the relevant project:
MacOS:
`USimCLI/mac/usim activate project <project-id>`
<!--Windows:
`USimCLI\windows\usim activate project <project-id>` -->
When asked if you are sure you want to change the active project, enter "**y**" and press **Enter**.
Now that we have made sure the correct project is active, we can get a list of all the current and past runs for the project.
* **Action**: Use the `usim get runs` command to obtain a list of current and past runs:
MacOS:
`USimCLI/mac/usim get runs`
<!--Windows:
`USimCLI\windows\usim get runs` -->
```
xBv3arj Completed 2020-10-01 02:27:11
```
As seen above, each run has a name, an ID, a creation time, and a list of executions. Note that each "run" can have more than one "execution", as you can manually execute runs again using USim-CLI.
USim runs executions on simulation nodes. If you enter a number larger than 1 for the number of instances in the _**Run in USim**_ window, your run will execute simultaneously on more than one node. You can view the status of each execution node using the `usim summarize run-execution <execution-id>` command. This command will tell you how many nodes have succeeded, failed, have not run yet, or are in progress. Make sure to replace `<execution-id>` with the execution ID seen in your run list. In the above example, this ID would be `yegz4WN`.
* **Action**: Use the `usim summarize run-execution <execution-id>` command to observe the status of your execution nodes:

`USimCLI/mac/usim download manifest <execution-id>`
The manifest is a `.csv` formatted file and will be downloaded to the same location from which you execute the above command, which is the `unity_simulation_bundle` folder.
This file does **not** include actual data; rather, it includes links to the generated data, including the JSON files, the logs, the images, and so on.
* **Action**: Open the manifest file to check it. Make sure there are links to various types of output and check a few of the links to see if they work.

* **Action**: Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook.
* **Action**: Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
* **Action**: In the `data_root = /data/<GUID>` line, the `<GUID>` part will be the location inside your `<download path>` where the data will be downloaded. Therefore, you can just remove it so as to have data downloaded directly to the path you previously specified:
<p align="center">
<img src="Images/di_usim_1.png"/>

* **Action**: In the block of code titled "Unity Simulation [Optional]", uncomment the lines that assign values to variables, and insert the correct values, based on information from your USim run.
We have previously learned how to obtain the `run_execution_id` and `project_id`. You can remove the value already present for `annotation_definition_id` and leave it blank. What's left is the `access_token`.
* **Action**: Return to your command-line interface and run the `usim inspect auth` command.

If you receive errors regarding authentication, your token might have timed out. Repeat the login step (`usim login auth`) to log in again and fix this issue.
A sample output from `usim inspect auth` will look like the below:

```
updated: 2020-10-02 14:50:11.412979
```
The `access_token` you need for your Dataset Insights notebook is the access token shown by the above command, minus the `'Bearer '` part. So in this case, we should input `0CfQbhJ6gjYIHjC6BaP5gkYn1x5xtAp7ZA9I003fTNT1sFp` in the notebook.
* **Action**: Copy the access token excluding the `'Bearer '` part to the corresponding field in the Dataset Insights notebook.
Once you have entered all the information, the block of code should look like the screenshot below (the actual values you input will be different):
<p align="center">
<img src="Images/di_usim_2.png"/>

The next couple of code blocks (under "Load dataset metadata") analyze the downloaded metadata and display a table containing annotation ids for the various metrics defined in the dataset.
* **Action**: Once you reach the code block titled "Built-in Statistics", make sure the value assigned to the field `rendered_object_info_definition_id` matches the id displayed for this metric in the table output by the code block immediately before it. The screenshot below demonstrates this (note that your ids might differ from the ones here):
Follow the rest of the steps inside the notebook to generate a variety of plots and stats. Keep in mind that this notebook is provided just as an example, and you can modify and extend it according to your own needs using the tools provided by the [Dataset Insights framework](https://datasetinsights.readthedocs.io/en/latest/).
**Important note regarding data size**: In the "Annotation Visualization" section of the notebook, you will download all the files present in the dataset, including images. The example dataset we created here contains 20,000 images (one for each Iteration) and would have a size of around 50 GB (roughly 2.5 MB per captured frame). Therefore, it is best to account for storage needs before you run the corresponding code block.