In this tutorial we demonstrate how to perform dynamic human reconstruction in nvblox using RealSense data. If you want to know how human reconstruction works behind the scenes, refer to the technical details.
Note: This example, which runs nvblox with human reconstruction on RealSense data, is not yet intended to run on Jetson platforms. Stay tuned for updates.
Note: Currently we recommend the heavier PeopleSemSegNet over the lighter PeopleSemSegNet ShuffleSeg model provided in Isaac ROS Image Segmentation, for better segmentation performance. The following steps show you how to run PeopleSemSegNet in ROS. Refer to this readme to run the PeopleSemSegNet ShuffleSeg network instead.
- Complete all steps of the RealSense tutorial and make sure it is working correctly.
- Clone the segmentation repository and its dependencies under `~/workspaces/isaac_ros-dev/src`:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src
  git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_segmentation
  git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
  git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline.git
  ```
- Pull down a ROS bag of sample data:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src/isaac_ros_image_segmentation && \
  git lfs pull -X "" -I "resources/rosbags/"
  ```
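  Since the bag is stored in Git LFS, it is worth confirming that real data was downloaded rather than small pointer stubs; the exact file names inside the bag directory may differ between releases:

  ```bash
  # Real bag files are several MB in size; tiny (~130-byte) files
  # indicate LFS pointers that were not actually pulled.
  ls -lh resources/rosbags/unet_sample_data/
  ```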
- Launch the Docker container using the `run_dev.sh` script:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
  ./scripts/run_dev.sh
  ```
- Download the PeopleSemSegNet ETLT file and the `int8` inference mode cache file:

  ```bash
  mkdir -p /workspaces/isaac_ros-dev/models/peoplesemsegnet/1
  cd /workspaces/isaac_ros-dev/models/peoplesemsegnet
  wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_quantized_vanilla_unet_v2.0/files/peoplesemsegnet_vanilla_unet_dynamic_etlt_int8.cache'
  wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_quantized_vanilla_unet_v2.0/files/peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt'
  ```
- Convert the ETLT file to a TensorRT plan file:

  ```bash
  /opt/nvidia/tao/tao-converter \
    -k tlt_encode \
    -d 3,544,960 \
    -p input_1:0,1x3x544x960,1x3x544x960,1x3x544x960 \
    -t int8 \
    -c peoplesemsegnet_vanilla_unet_dynamic_etlt_int8.cache \
    -e /workspaces/isaac_ros-dev/models/peoplesemsegnet/1/model.plan \
    -o argmax_1 \
    peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt
  ```
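  TensorRT engines are specific to the GPU and TensorRT version they were built with, so repeat this step if you move to different hardware. If the conversion succeeds, the serialized engine lands at the path passed with `-e`:

  ```bash
  # Sanity check: the plan file should exist and be non-trivial in size.
  ls -lh /workspaces/isaac_ros-dev/models/peoplesemsegnet/1/model.plan
  ```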
- Create the Triton configuration file at `/workspaces/isaac_ros-dev/models/peoplesemsegnet/config.pbtxt` with the following content:

  ```
  name: "peoplesemsegnet"
  platform: "tensorrt_plan"
  max_batch_size: 0
  input [
    {
      name: "input_1:0"
      data_type: TYPE_FP32
      dims: [ 1, 3, 544, 960 ]
    }
  ]
  output [
    {
      name: "argmax_1"
      data_type: TYPE_INT32
      dims: [ 1, 544, 960, 1 ]
    }
  ]
  version_policy: {
    specific {
      versions: [ 1 ]
    }
  }
  ```
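  Note that the input and output names match the bindings used during conversion (`input_1:0`, `argmax_1`), and the version policy pins version `1`, i.e. the `1/` directory created earlier. Triton expects its standard model repository layout, which after the steps above should look like:

  ```bash
  # Expected model repository layout:
  # models/
  # └── peoplesemsegnet/
  #     ├── config.pbtxt
  #     └── 1/
  #         └── model.plan
  find /workspaces/isaac_ros-dev/models -maxdepth 3
  ```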
- Inside the container, build and source the workspace:

  ```bash
  cd /workspaces/isaac_ros-dev && \
  colcon build --symlink-install && \
  source install/setup.bash
  ```
- (Optional) Run tests to verify complete and correct installation:

  ```bash
  colcon test --executor sequential
  ```
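  colcon records the test outcomes per package; you can summarize them afterwards with:

  ```bash
  # Print a summary of all test results collected by the previous command.
  colcon test-result --verbose
  ```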
- Run the following launch file to start the ROS node:

  ```bash
  ros2 launch isaac_ros_unet isaac_ros_unet_triton.launch.py \
    model_name:=peoplesemsegnet \
    model_repository_paths:=['/workspaces/isaac_ros-dev/models'] \
    input_binding_names:=['input_1:0'] \
    output_binding_names:=['argmax_1'] \
    network_output_type:='argmax'
  ```
- Open two other terminals, and enter the Docker container in both:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
  ./scripts/run_dev.sh
  ```
- Play the ROS bag in one of the terminals:

  ```bash
  ros2 bag play -l src/isaac_ros_image_segmentation/resources/rosbags/unet_sample_data/
  ```

  And visualize the output in the other terminal:

  ```bash
  ros2 run rqt_image_view rqt_image_view
  ```
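  In rqt_image_view, pick the segmentation output topic from the drop-down. As an additional sanity check you can list the active topics; `/unet/colored_segmentation_mask` and `/unet/raw_segmentation_mask` are the default isaac_ros_unet output topics, though the names can vary between releases:

  ```bash
  # Confirm the segmentation topics exist and that masks are flowing.
  ros2 topic list | grep unet
  ros2 topic hz /unet/colored_segmentation_mask
  ```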
- Verify that the output looks similar to this image.
To run the example on live data from a RealSense camera:

- Complete the Isaac ROS Image Segmentation setup above.
- Connect the RealSense device to your machine.
- Run the ROS Docker container using the `run_dev.sh` script:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
  ./scripts/run_dev.sh
  ```
- Source the workspace:

  ```bash
  source /workspaces/isaac_ros-dev/install/setup.bash
  ```
- At this point, you can check that the RealSense camera is connected by running realsense-viewer:

  ```bash
  realsense-viewer
  ```
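  On a headless machine (or if the viewer is unavailable), librealsense's command-line tools provide an alternative check; `rs-enumerate-devices` ships with the same package:

  ```bash
  # Print serial number, firmware version and supported stream profiles
  # of every connected RealSense device.
  rs-enumerate-devices
  ```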
- If successful, run the launch file to spin up the example:

  ```bash
  ros2 launch nvblox_examples_bringup realsense_humans_example.launch.py
  ```
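  If everything comes up, you should see the reconstruction visualized. To confirm from another container terminal that the nvblox node is running and publishing, you can grep the node and topic lists; the exact nvblox topic names depend on your launch configuration:

  ```bash
  # Both commands should print nvblox entries once the example is running.
  ros2 node list | grep nvblox
  ros2 topic list | grep nvblox
  ```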
Refer to the RealSense recording tutorial for instructions on recording RealSense data. Below we show how to run the example on your own recorded ROS bags.
- Complete the Isaac ROS Image Segmentation setup above.
- Run the ROS Docker container using the `run_dev.sh` script:

  ```bash
  cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
  ./scripts/run_dev.sh
  ```
- Source the workspace:

  ```bash
  source /workspaces/isaac_ros-dev/install/setup.bash
  ```
- Run the launch file to spin up the example, pointing it at your recorded bag:

  ```bash
  ros2 launch nvblox_examples_bringup realsense_humans_example.launch.py from_bag:=True bag_path:=<PATH_TO_YOUR_BAG>
  ```
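  If nothing shows up, first check that the recording actually contains the RealSense topics the example expects (color and depth image streams plus the matching camera info):

  ```bash
  # Lists duration, message counts and topic names of the recording.
  ros2 bag info <PATH_TO_YOUR_BAG>
  ```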
See our troubleshooting page here.