From 26b1f0e08c94f2517251cdefb286b22ea9f6bd36 Mon Sep 17 00:00:00 2001
From: Wojtek Rajtar
Date: Fri, 19 Apr 2024 14:49:07 +0200
Subject: [PATCH] [#58247] Review examples/yolact/README.md and examples/mask_rcnn/README.md

Signed-off-by: Wojtek Rajtar
---
 examples/mask_rcnn/README.md | 60 ++++++++++++++++++------------------
 examples/yolact/README.md    | 28 ++++++++---------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/examples/mask_rcnn/README.md b/examples/mask_rcnn/README.md
index 48b13e9..e8ced3c 100644
--- a/examples/mask_rcnn/README.md
+++ b/examples/mask_rcnn/README.md
@@ -1,12 +1,12 @@
# Instance segmentation inference testing with MaskRCNN
-This demo runs an instance segmentation algorithm on frames from COCO dataset.
+This demo runs an instance segmentation algorithm on frames from the COCO dataset.
The demo consists of four parts:
* `CVNodeManager` - manages testing scenario and data flow between dataprovider and tested MaskRCNN node.
-* `CVNodeManagerGUI` - visualizes the input data and results of the inference testing.
+* `CVNodeManagerGUI` - visualizes input data and results of inference testing.
* `Kenning` - provides images to the MaskRCNN node and collects inference results.
-* `MaskRCNN` - runs inference on the input images and returns the results.
+* `MaskRCNN` - runs inference on input images and returns results.
## Necessary dependencies
@@ -15,7 +15,7 @@ This demo requires:
* A CUDA-enabled NVIDIA GPU for inference acceleration
* [repo tool](https://gerrit.googlesource.com/git-repo/+/refs/heads/main/README.md) to clone all necessary repositories
* [Docker](https://www.docker.com/) to use a prepared environment
-* [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit) to provide access to the GPU in the Docker container
+* [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit) to provide access to the GPU in the Docker container.
All the necessary build, runtime and development dependencies are provided in the [Dockerfile](./Dockerfile).
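If you want to confirm that the container runtime can actually reach the GPU before building the image, a quick sanity check is to run `nvidia-smi` from a throwaway CUDA container. This is only a sketch: the `nvidia/cuda` image tag below is an example and should be replaced with one matching your host driver and CUDA version.

```bash
# Sanity check for nvidia-container-toolkit: if the toolkit is configured
# correctly, the host GPU is listed by nvidia-smi inside the container.
# Replace the image tag with one matching your host driver / CUDA version.
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```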
The image contains:
@@ -27,7 +27,7 @@
* CUDNN and CUDA libraries for faster acceleration on GPUs
* Additional development tools
-Docker image containing all necessary dependencies can be built with:
+To build the Docker image containing all necessary dependencies, run:
```bash
sudo ./build-docker.sh
@@ -37,13 +37,13 @@ For more details regarding base image refer to the [ROS2 GuiNode](https://github
## Preparing the environment
-First off, create a workspace directory, where downloaded repositories will be stored:
+First off, create a workspace directory to store downloaded repositories:
```bash
mkdir cvnode && cd cvnode
```
-Then, all the dependencies can be downloaded using the `repo` tool:
+Download all dependencies using the `repo` tool:
```bash
repo init -u https://github.com/antmicro/ros2-vision-node-base.git -m examples/mask_rcnn/manifest.xml -b main
@@ -84,16 +84,16 @@ This script starts the image with:
* `-v $(pwd):/data` - mounts current (`cvnode`) directory in the `/data` directory in the container's context
* `-v /tmp/.X11-unix/:/tmp/.X11-unix/` - passes the X11 socket directory to the container's context (to allow running GUI application)
* `-e DISPLAY=$DISPLAY`, `-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR` - adds X11-related environment variables
-* `--gpus='all,"capabilities=compute,utility,graphics,display"'` - adds GPUs to the container's context for computing and displaying purposes
+* `--gpus='all,"capabilities=compute,utility,graphics,display"'` - adds GPUs to the container's context for compute and display purposes
-Then, in the Docker container, you need to install graphics libraries for NVIDIA that match your host's drivers.
-To check NVIDIA drivers version, run:
+Then, in the Docker container, install graphics libraries for NVIDIA that match your host's drivers.
+To check the NVIDIA driver version, run:
```bash
nvidia-smi
```
-And check the `Driver version`.
+And check `Driver version`.
For example, for 530.41.03, install the following in the container:
@@ -120,12 +120,12 @@ The script takes the following arguments:
* `--image` - path to the image to run inference on
* `--output` - path to the directory where the exported model will be stored
-* `--method` - method to export model with. Should be one of: onnx, torchscript
+* `--method` - method for model export. Should be one of: onnx, torchscript
* `--num-classes` - optional argument indicating amount of classes to use in model architecture
-* `--weights` - optional argument indicating path to the file storing weights.
-By default, fetches COCO pre-trained model weights from model zoo
+* `--weights` - optional argument indicating the path to the file storing weights.
+  By default, fetches COCO pre-trained model weights from the model zoo.
-For example, to export the model to the `TorchScript` and locate it in the `config` directory:
+For example, to export the model to `TorchScript` and place it in the `config` directory, run:
```bash
curl http://images.cocodataset.org/val2017/000000000632.jpg --output image.jpg
@@ -141,13 +141,13 @@ Later, the model can be loaded with the `mask_rcnn_torchscript_launch.py` launch
## Building the MaskRCNN demo
-Firstly, the ROS2 environment has to be sourced:
+First, source the ROS2 environment:
```bash
source /opt/ros/setup.sh
```
-Then, the GUI node and the Camera node can be build with:
+Then, build the GUI node and the Camera node:
```bash
colcon build --base-path=src/ --packages-select \
@@ -157,13 +157,13 @@ colcon build --base-path=src/ --packages-select \
    --cmake-args ' -DBUILD_GUI=ON' ' -DBUILD_MASK_RCNN=ON ' ' -DBUILD_MASK_RCNN_TORCHSCRIPT=ON' ' -DBUILD_TORCHVISION=ON'
```
-Where the `--cmake-args` are:
+Here, the `--cmake-args` are:
* `-DBUILD_GUI=ON` - builds the GUI for CVNodeManager
* `-D BUILD_MASK_RCNN=ON and ' -DBUILD_MASK_RCNN_TORCHSCRIPT=ON` - builds the MaskRCNN demos
* `-DBUILD_TORCHVISION=ON` - builds the TorchVision library needed for MaskRCNN
-Build targets then can be sourced with:
+Source the build targets with:
```bash
source install/setup.sh
```
@@ -176,7 +176,7 @@ source install/setup.sh
* `mask_rcnn_detectron_launch.py` - runs the MaskRCNN node with Python Detectron2 backend
* `mask_rcnn_torchscript_launch.py` - runs the MaskRCNN node with C++ TorchScript backend
-A sample launch with the Python backend can be run with:
+You can run a sample launch with the Python backend as follows:
```bash
ros2 launch cvnode_base mask_rcnn_detectron_launch.py \
@@ -191,7 +191,7 @@ ros2 launch cvnode_base mask_rcnn_detectron_launch.py \
    log_level:=INFO
```
-And with the C++ backend:
+For the C++ backend, run:
```bash
ros2 launch cvnode_base mask_rcnn_torchscript_launch.py \
@@ -207,20 +207,20 @@ ros2 launch cvnode_base mask_rcnn_torchscript_launch.py \
    log_level:=INFO
```
-Where the parameters are:
+Here, the parameters are:
-* `model_path` - path to the TorchScript model
-* `class_names_path` - path to the CSV file with class names
-* `inference_configuration` - path to the JSON file with Kenning's inference configuration
+* `model_path` - path to a TorchScript model
+* `class_names_path` - path to a CSV file with class names
+* `inference_configuration` - path to a JSON file with Kenning's inference configuration
* `publish_visualizations` - whether to publish visualizations for the GUI
* `preserve_output` - whether to preserve the output of the last inference if timeout is reached
-* `scenario` - scenario to run the demo in, one of:
+* `scenario` - scenario for running the demo, one of:
  * `real_world_last` - tries to process last received frame within timeout
  * `real_world_first` - tries to process first received frame
  * `synthetic` - ignores timeout and processes frames as fast as possible
* `inference_timeout_ms` - timeout for inference in milliseconds. Used only by `real_world` scenarios
-* `measurements` - path to the file where inference measurements will be stored
-* `report_path` - path to the file where the rendered report will be stored
-* `log_level` - log level for running the demo
+* `measurements` - path to the file where inference measurements will be stored
+* `report_path` - path to the file where the rendered report will be stored
+* `log_level` - log level for running the demo.
-Later, produced reports can be found under `/data/build/reports` directory.
+The produced reports can later be found in the `/data/build/reports` directory.
diff --git a/examples/yolact/README.md b/examples/yolact/README.md
index 60f2157..f205d8f 100644
--- a/examples/yolact/README.md
+++ b/examples/yolact/README.md
@@ -1,12 +1,12 @@
# Instance segmentation inference YOLACT
-This demo runs an instance segmentation model YOLACT on sequences from [LindenthalCameraTraps](https://lila.science/datasets/lindenthal-camera-traps/) dataset.
+This demo runs the YOLACT instance segmentation model on sequences from the [LindenthalCameraTraps](https://lila.science/datasets/lindenthal-camera-traps/) dataset.
The demo consists of four parts:
* `CVNodeManager` - manages testing scenario and data flow between dataprovider and tested CVNode.
-* `CVNodeManagerGUI` - visualizes the input data and results of the inference testing.
-* `Kenning` - provides sequences from LindenthalCameraTraps dataset and collects inference results.
-* `CVNode` - runs inference on the input images and returns the results.
+* `CVNodeManagerGUI` - visualizes input data and results of inference testing.
+* `Kenning` - provides sequences from the LindenthalCameraTraps dataset and collects inference results.
+* `CVNode` - runs inference on input images and returns results.
## Dependencies
@@ -111,7 +111,7 @@ kenning report --measurements \
## Building the demo
-First of all, load the `setup.sh` script for ROS 2 tools, e.g.:
+First, load the `setup.sh` script for ROS 2 tools, e.g.:
```bash
source /opt/ros/setup.sh
@@ -127,7 +127,7 @@ colcon build --base-path=src/ --packages-select \
    --cmake-args ' -DBUILD_GUI=ON' ' -DBUILD_YOLACT=ON'
```
-Where the `--cmake-args` are:
+Here, the `--cmake-args` are:
* `-DBUILD_GUI=ON` - builds the GUI for CVNodeManager
* `-DBUILD_YOLACT=ON` - builds the YOLACT CVNodes
@@ -140,11 +140,11 @@ source install/setup.sh
## Running the demo
-This example provides a single launch scripts for running the demo:
+This example provides a single launch script for running the demo:
-* `yolact_launch.py` - starts provided executable as CVNode along with other nodes
+* `yolact_launch.py` - starts the provided executable as CVNode along with other nodes.
-A sample launch with the TFLite backend can be run with:
+Run a sample launch with the TFLite backend using:
```bash
ros2 launch cvnode_base yolact_launch.py \
@@ -156,7 +156,7 @@ ros2 launch cvnode_base yolact_launch.py \
    log_level:=INFO
```
-Where the parameters are:
+Here, the parameters are:
* `tflite` - backend to use, one of:
  * `tflite` - TFLite backend
@@ -164,15 +164,15 @@ Where the parameters are:
  * `onnxruntime` - ONNXRuntime backend
* `model_path` - path to the model file. Make sure to have IO specification placed alongside the model file with the same name and `.json` extension.
-* `scenario` - scenario to run the demo in, one of:
+* `scenario` - scenario for running the demo, one of:
  * `real_world_last` - tries to process last received frame within timeout
  * `real_world_first` - tries to process first received frame
  * `synthetic` - ignores timeout and processes frames as fast as possible
-* `measurements` - path to the file where inference measurements will be stored
-* `report_path` - path to the file where the rendered report will be stored
+* `measurements` - path to the file where inference measurements will be stored
+* `report_path` - path to the file where the rendered report will be stored
* `log_level` - log level for running the demo
-Later, produced reports can be found under `/data/build/reports` directory.
+The produced reports can later be found in the `/data/build/reports` directory.
This demo supports TFLite, TVM and ONNX backends.
For more information on how to export model for these backends, see [Kenning documentation](https://antmicro.github.io/kenning/json-scenarios.html).
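Once a run finishes, a quick way to confirm that the measurements and the rendered report were written is to list the reports directory mentioned above. The listing below is only illustrative; the exact file names depend on the `measurements` and `report_path` arguments passed to the launch file.

```bash
# List the produced reports; the directory is the one referenced in the README,
# file names vary with the measurements and report_path launch arguments.
ls -R /data/build/reports
```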