Throughout this tutorial, lines starting with a bullet point followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress, while non-bulleted lines provide additional context and explanation around those actions. If you are in a hurry, you can just follow the actions!
Steps included in this tutorial:
- [<a name="step-1">Step 2: Set Up a Humanoid Character in a Scene</a>](#step-2-set-up-a-humanoid-character-in-a-scene)
- [<a name="step-2">Step 2: Set Up a Humanoid Character in a Scene</a>](#step-2-set-up-a-humanoid-character-in-a-scene)
-
- [Step 3: Set Up the Perception Camera for Key Point Annotation](#step-3)
- [Step 4: Configure Human Pose Estimation](#step-4)
- [Step 5: Randomize the Humanoid Character's Animations](#step-5)
### <aname="step-1">Step 1: Import `.fbx` Models and Animations</a>
<imgsrc="Images/scenario_empty.png"width="400"/>
</p>
* **:green_circle: Action**: Select `Main Camera` and in the _**Inspector**_ view of the `Perception Camera` component, **disable** all previously added labelers using the check-mark in front of each. We will be using a new labeler in this tutorial.
We now need to import the sample files required for this tutorial.
* **:green_circle: Action**: Open _**Package Manager**_ and select the Perception package, which should already be present in the navigation pane on the left side.
* **:green_circle: Action**: From the list of ***Samples*** for the Perception package, click on the ***Import into Project*** button for the sample bundle named _**Human Pose Estimation**_.
This will make sure all animations included in the sample `.fbx` files are ready to be used on a rigged humanoid model.
### <aname="step-1">Step 2: Set Up a Humanoid Character in a Scene</a>
### <aname="step-2">Step 2: Set Up a Humanoid Character in a Scene</a>
* **:green_circle: Action**: Drag and drop the file named `Player` into your Scene _**Hierarchy**_.
* **:green_circle: Action**: Select the new `Player` object in the Scene and in the _**Inspector**_ tab set its transform's position and rotation according to the image below to make the character face the camera.
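If you ever need to place the character from a script rather than the Inspector, the equivalent is a couple of lines. This is a minimal sketch: `PlayerPlacement` is a hypothetical helper and the position is a placeholder, so use the actual values from the image.

```csharp
using UnityEngine;

// Hypothetical helper: places the Player via script instead of the Inspector.
// The position below is a placeholder -- use the values shown in the image.
// Rotating 180 degrees around Y makes the character face the camera.
public class PlayerPlacement : MonoBehaviour
{
    void Start()
    {
        var player = GameObject.Find("Player");
        player.transform.position = new Vector3(0f, 0f, 4f);         // placeholder
        player.transform.rotation = Quaternion.Euler(0f, 180f, 0f); // face the camera
    }
}
```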
If you run the simulation now, you will see the character go through a series of three animations for picking up a hypothetical object, as seen in the GIF below. This is one complete run of the `TakeObjects` animation clip we selected earlier.
<palign="center">
<imgsrc="Images/take_objects.gif"width="600"/>
</p>
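If you want to confirm from code which clip the character is currently playing (handy when debugging the Animator setup), Unity's standard `Animator` API can report it. `ClipLogger` below is a hypothetical debug helper, assuming the `Player` object carries an `Animator` whose controller includes the `TakeObjects` clip:

```csharp
using UnityEngine;

// Hypothetical debug helper: attach to the Player object to log the clip
// currently playing on its Animator, once per frame.
public class ClipLogger : MonoBehaviour
{
    Animator m_Animator;

    void Start()
    {
        m_Animator = GetComponent<Animator>();
    }

    void Update()
    {
        var clips = m_Animator.GetCurrentAnimatorClipInfo(0);
        if (clips.Length > 0)
            Debug.Log($"Playing clip: {clips[0].clip.name}");
    }
}
```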
### <aname="step-3">Step 3: Set Up the Perception Camera for Key Point Annotation</a>
Now that we have our character performing animations, let's modify our `Perception Camera` to report the character's key points in the output dataset, updating frame by frame as they animate.
* **:green_circle: Action**: Add a `KeyPointLabeler` to the list of labelers in `Perception Camera`.
Similar to the labelers we used in the Perception Tutorial, we will need a label configuration for this new labeler.
* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_. Name the new asset `HPE_IdLabelConfig`.
* **:green_circle: Action**: Add the label `MyCharacter` to the newly created config.
> :information_source: You can use any label string, as long as you assign the same label to the `Player` object in the next step.
* **:green_circle: Action**: Add a `Labeling` component to the `Player` object in the Scene.
* **:green_circle: Action**: In the _**Inspector**_ UI for this new `Labeling` component, expand `HPE_IdLabelConfig` and click _**Add to Labels**_ on `MyCharacter`.
Note the `CocoKeyPointTemplate` asset that is already assigned as the `Active Template`. This template will tell the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset, so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates.
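For reference, the labeling steps above can also be scripted. The sketch below is illustrative, not the tutorial's method: it assumes the `Labeling` component's public `labels` list, a `KeypointLabeler(label config, template)` constructor, and a `PerceptionCamera.AddLabeler` method, all of which should be verified against your installed Perception version.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Illustrative sketch only -- verify these APIs against your Perception
// version: Labeling exposing a public 'labels' list, KeypointLabeler's
// (config, template) constructor, and PerceptionCamera.AddLabeler().
public class HpeLabelingSetup : MonoBehaviour
{
    public PerceptionCamera perceptionCamera; // the Main Camera's Perception Camera
    public IdLabelConfig hpeIdLabelConfig;    // the HPE_IdLabelConfig asset
    public KeypointTemplate cocoTemplate;     // the CocoKeyPointTemplate asset
    public GameObject player;                 // the Player object in the Scene

    void Start()
    {
        // Equivalent of adding a Labeling component and clicking "Add to Labels".
        var labeling = player.AddComponent<Labeling>();
        labeling.labels.Add("MyCharacter");

        // Equivalent of adding a KeyPointLabeler in the Perception Camera UI.
        perceptionCamera.AddLabeler(new KeypointLabeler(hpeIdLabelConfig, cocoTemplate));
    }
}
```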
You can now check the output dataset to see what the annotations look like. To do this, click the _**Show Folder**_ button in the `Perception Camera` UI, then navigate into the dataset folder to find the `captures_000.json` file. Here is an example annotation for the first frame of our test case:
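> :information_source: The snippet below is an illustrative sketch rather than verbatim output; exact field names and values vary with your Perception package version. The `state` value for each key point typically follows the COCO visibility convention (0 = not present, 1 = present but not visible, 2 = present and visible).

```json
{
  "label_id": 1,
  "instance_id": 1,
  "template_guid": "...",
  "pose": "unset",
  "keypoints": [
    { "index": 0, "x": 649.0, "y": 285.5, "state": 2 },
    { "index": 1, "x": 652.4, "y": 305.9, "state": 2 }
  ]
}
```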