# Pose Estimation Demo: Phase 4

In [Phase 1](1_set_up_the_scene.md) of the tutorial, we learned how to create our scene in the Unity editor. In [Phase 2](2_set_up_the_data_collection_scene.md), we set up the scene for data collection. In [Phase 3](3_data_collection_model_training.md), we learned:

* How to collect the data
* How to train the deep learning model

In this phase, we will use our trained deep learning model to predict the pose of the cube, and pick it up with our robot arm.
**Table of Contents**

- [Step 1: Setup](#step-1-setup)
- [Step 2: Adding the Pose Estimation Model](#step-2-adding-the-pose-estimation-model)
- [Step 3: Set up the ROS side](#step-3-set-up-the-ros-side)
- [Step 4: Set up the Unity side](#step-4-set-up-the-unity-side)
- [Step 5: Putting it together](#step-5-putting-it-together)

---

### Step 1: Setup

If you have correctly followed Phases 1 and 2, whether you chose to use the Unity project we provide or to start from scratch, you should already have cloned the repository.

**Note**: If you cloned the project and forgot to use `--recurse-submodules`, or if any submodule in this directory doesn't have content (e.g. moveit_msgs or ros_tcp_endpoint), you can run the following commands to grab the Git submodules. Note that they must be run from within the `Pose-Estimation-Demo` folder, hence the `cd`:

```bash
cd PATH-TO-Pose-Estimation-Demo
git submodule update --init --recursive
```
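If you want to double-check that the submodules were fetched, `git submodule status` (a standard Git command, nothing project-specific) lists each submodule along with its checked-out commit:

```bash
# List each submodule and its checked-out commit.
# A leading "-" means that submodule has not been initialized yet.
git submodule status
```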
---

### Step 3: Set up the ROS side

* **Action**: In the terminal, ensure the current location is at the root of the `Pose-Estimation-Demo` directory. Build the provided ROS Docker image as follows:

```bash
docker build -t unity-robotics:pose-estimation -f docker/Dockerfile .
```

**Note**: The provided Dockerfile uses the [ROS Noetic base Image](https://hub.docker.com/_/ros/). Building the image will install the necessary packages, copy the [provided ROS packages and submodules](../ROS/) to the container, predownload and cache the [VGG16 model](https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.vgg16), and build the catkin workspace.

When the build is complete, it will print `Successfully tagged unity-robotics:pose-estimation`.

* **Action**: Start the newly built Docker container:

```bash
docker run -it --rm -p 10000:10000 -p 5005:5005 unity-robotics:pose-estimation /bin/bash
```

This console should open into a bash shell at the ROS workspace root, e.g. `root@8d88ed579657:/catkin_ws#`.

**Note**: If you encounter issues with Docker, check the [Troubleshooting Guide](troubleshooting.md) for potential solutions.

* **Action**: Source your ROS workspace:

```bash
source devel/setup.bash
```

The ROS workspace is now ready to accept commands!

**Note**: The Docker-related files (Dockerfile, bash scripts for setup) are located in `PATH-TO-Pose-Estimation-Demo/docker`.

---

### Step 4: Set up the Unity side

If your Pose Estimation Tutorial Unity project is not already open, select and open it from the Unity Hub.

**Note**: A complete version of this step has been provided in this repository, called `PoseEstimationDemoProject`. If you have some experience with Unity and would like to skip the scene setup portion, you can open this provided project via Unity Hub and open the scene `TutorialPoseEstimation`. You will need to update the ROS Settings as described below, then skip to [Step 5: Putting it together](#step-5-putting-it-together).

We will work on the same scene that was created in [Phase 1](1_set_up_the_scene.md) and [Phase 2](2_set_up_the_data_collection_scene.md), so if you have not already done so, complete Phases 1 and 2 to set up the Unity project.

#### Connecting with ROS

Prefabs have been provided for the UI elements and the trajectory planner for convenience. These are grouped under the parent object `ROSObjects`.

* **Action**: In the Project tab, go to `Assets > TutorialAssets > Prefabs > Part4` and drag and drop the `ROSObjects` prefab into the _**Hierarchy**_ panel.

* **Action**: Next, the ROS TCP connection needs to be created. In the top menu bar of the Unity Editor, select `Robotics -> ROS Settings`. Find the IP address of your ROS machine:

* If you are going to run ROS services with the Docker container introduced [above](#step-3-set-up-the-ros-side), fill `ROS IP Address` and `Override Unity IP` with the loopback IP address `127.0.0.1`. If you will be running ROS services via a non-Dockerized setup, you will most likely want to leave the `Override Unity IP` field blank, which lets the Unity IP be determined automatically.

* If you are **not** going to run ROS services with the Docker container, e.g. on a dedicated Linux machine or VM, open a terminal window in that ROS workspace. Set the `ROS IP Address` field to the output of the following command:

  ```bash
  hostname -I
  ```

* **Action**: Ensure that the ROS Port is set to `10000` and the Unity Port is set to `5005`. You can leave the Show HUD box unchecked. This HUD can be helpful for debugging message and service requests with ROS.
You may turn this on if you encounter connection issues.
Opening the ROS Settings has created a ROSConnectionPrefab in `Assets/Resources` with the user-input settings. When the static `ROSConnection.instance` is referenced in a script, if a `ROSConnection` instance is not already present, the prefab will be instantiated in the Unity scene, and the connection will begin.

**Note**: While using the ROS Settings menu is the suggested workflow, you may still manually create a GameObject with an attached `ROSConnection` component.

The provided script `Assets/TutorialAssets/Scripts/TrajectoryPlanner.cs` contains the logic to invoke the motion planning services, as well as the logic to control the gripper and end effector tool. This has been adapted from the [Pick-and-Place tutorial](https://github.com/Unity-Technologies/Unity-Robotics-Hub/blob/main/tutorials/pick_and_place/3_pick_and_place.md). The component has been added to the `ROSObjects/Publisher` object.

In this TrajectoryPlanner script, there are two functions that are defined, but not yet implemented. `InvokePoseEstimationService()` and `PoseEstimationCallback()` will create a [ROS Service](http://wiki.ros.org/Services) Request and handle the ROS Service Response, respectively. The following steps will provide the code and explanations for these functions.

* **Action**: Open the `TrajectoryPlanner.cs` script in an editor. Find the empty `InvokePoseEstimationService(byte[] imageData)` function definition, starting at line 165. Replace the empty function with the following:

```csharp
private void InvokePoseEstimationService(byte[] imageData)
{
    uint imageHeight = (uint)renderTexture.height;
    uint imageWidth = (uint)renderTexture.width;

    // Wrap the raw RGBA bytes from the render texture in a sensor_msgs/Image message
    RosMessageTypes.Sensor.Image rosImage = new RosMessageTypes.Sensor.Image(new RosMessageTypes.Std.Header(), imageWidth, imageHeight, "RGBA", isBigEndian, step, imageData);

    // Request a pose estimate; PoseEstimationCallback will handle the response
    PoseEstimationServiceRequest poseServiceRequest = new PoseEstimationServiceRequest(rosImage);
    ros.SendServiceMessage<PoseEstimationServiceResponse>("pose_estimation_srv", poseServiceRequest, PoseEstimationCallback);
}
```

Note that the service name passed as the first argument to `SendServiceMessage` must match the name under which the ROS-side pose estimation service is registered.
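For context while reading the script, here is a minimal sketch of what the companion `PoseEstimationCallback()` can look like. This is an illustration under assumptions, not the tutorial's verbatim code: it assumes the service response exposes a `geometry_msgs/Pose` field named `estimated_pose`, that the model predicts the cube pose in the camera frame, and that `PublishJoints()` is an existing helper in the script that kicks off the MoveIt request. The `From<RUF>()` calls are coordinate-space conversions from the ROS-TCP-Connector's `ROSGeometry` utilities.

```csharp
// Sketch only: estimated_pose and PublishJoints are assumptions based on
// the surrounding text, not confirmed signatures from the tutorial.
void PoseEstimationCallback(PoseEstimationServiceResponse response)
{
    if (response != null)
    {
        // The model outputs the cube pose relative to the camera, so
        // transform it into Unity world space before motion planning.
        var estimatedPosition = Camera.main.transform.TransformPoint(response.estimated_pose.position.From<RUF>());
        var estimatedRotation = Camera.main.transform.rotation * response.estimated_pose.orientation.From<RUF>();

        // Hand the world-space grasp pose to the trajectory planner.
        PublishJoints(estimatedPosition, estimatedRotation);
    }
}
```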
#### Switching to Inference Mode

* **Action**: On the `Simulation Scenario` GameObject, uncheck the `Fixed Length Scenario` component to disable it, as we are no longer in the data collection phase. If you want to collect new data in the future, you can always re-enable the `Fixed Length Scenario` component and disable the `ROSObjects`.

Also note that the UI elements have been provided in `ROSObjects/Canvas`, including the Event System that is added by default by Unity. In `ROSObjects/Canvas/ButtonPanel`, the OnClick callbacks have been pre-assigned in the prefab. These buttons set the robot to its upright default position, randomize the cube position and rotation, and call the Pose Estimation service.

---

### Step 5: Putting it together

Run the following `roslaunch` command in order to start roscore, set the ROS parameters, start the server endpoint, start the Mover Service and Pose Estimation nodes, and launch MoveIt.

* **Action**: In the terminal window of your ROS workspace opened in [Step 3](#step-3-set-up-the-ros-side), run the provided launch file:

```bash
roslaunch ur3_moveit pose_est.launch
```

This launch file also loads all relevant files and starts the ROS nodes required for trajectory planning for the UR3 robot (`gazebo.launch`). The launch files for this project are available in the package's launch directory, i.e. `src/ur3_moveit/launch/`.

This launch will print various messages to the console, including the set parameters and the nodes launched. The final messages should confirm `You can start planning now!`.

**Note**: The launch file may throw errors regarding `[controller_spawner-5] process has died`. These are safe to ignore as long as the final message is `You can start planning now!`. This confirmation may take up to a minute to appear.

* **Action**: Return to Unity, and press Play.

**Note**: If you encounter connection errors such as a `SocketException`, or don't see a completed TCP handshake between ROS and Unity in the console window, return to the [Connecting with ROS](#connecting-with-ros) section above to update the ROS Settings and generate the ROSConnectionPrefab.

Note that the robot arm must be in its default position, i.e. standing upright, to perform Pose Estimation. This is done by simply clicking the `Reset Robot Position` button after each run.

* **Action**: Press the `Pose Estimation` button to send the image to ROS.

This will grab the current camera view, generate a [sensor_msgs/Image](http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) message, and send a new Pose Estimation Service Request to the ROS node running `pose_estimation_service.py`. This will run the trained model and return a Pose Estimation Service Response containing an estimated pose, which is subsequently converted and sent as a new Mover Service Request to the `mover.py` ROS node. Finally, MoveIt calculates and returns a list of trajectories to Unity, and the poses are executed to pick up and place the cube.

The target object and empty goal object can be moved around during runtime for different trajectory calculations, or the target can be randomized using the `Randomize Cube` button.

**Note**: You may encounter a `UserWarning: CUDA initialization: Found no NVIDIA driver on your system.` warning upon the first image prediction attempt. This warning can be safely ignored.

**Note**: If you encounter issues with the connection between Unity and ROS, check the [Troubleshooting Guide](troubleshooting.md) for potential solutions.

You should see the following:
### Click here to go back to [Phase 3](3_data_collection_model_training.md).