Commit

yolo_ros (#55)

* renaming yolov8_ros to yolo_ros

* README fixes

* yolo-world added

* label is class_name

* new inference params
 - iou
 - imgsz_height
 - imgsz_width
 - half
 - max_det
 - augment
 - agnostic_nms
 - retina_masks

* yolo-base.launch.py renamed to yolo.launch.py

* ultralytics upgraded

* dockerfile created

* dockerfile fixed

* github actions for ci

* workflow for github actions

* formatting fixes

* github actions ci fixed

* black formatter

* ci fixes

* ci fixes

* --diff added to rickstaa/action-black@v1

* lgeiger/black-action@master

* python_formatter

* docker readme fixed

* minor fix in ci.yml

* CI name fixed
mgonzs13 authored Oct 31, 2024
1 parent 6d4db82 commit 1a4bffe
Showing 53 changed files with 1,249 additions and 935 deletions.
29 changes: 29 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,29 @@
name: CI with Formatting Check and Docker Build

on: [push, pull_request]

jobs:
  python_formatter:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Black Formatter
        uses: lgeiger/black-action@master
        with:
          args: ". --check --diff"

  docker_build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build Docker
        uses: docker/build-push-action@v6
        with:
          push: false
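
To reproduce the formatting check locally before pushing, something like the following should work (a sketch, assuming Black is installed via pip; the arguments mirror the CI step above):

```shell
$ pip3 install black
$ black . --check --diff
```
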
1 change: 0 additions & 1 deletion .gitignore
@@ -1,3 +1,2 @@
.vscode
__pycache__
yolov8m-seg.pt
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -3,6 +3,6 @@ message: "If you use this software, please cite it as below."
authors:
- family-names: "González-Santamarta"
given-names: "Miguel Á."
title: "yolov8_ros"
title: "yolo_ros"
date-released: 2023-02-21
url: "https://github.com/mgonzs13/yolov8_ros"
url: "https://github.com/mgonzs13/yolo_ros"
29 changes: 29 additions & 0 deletions Dockerfile
@@ -0,0 +1,29 @@
ARG ROS_DISTRO=humble
FROM ros:${ROS_DISTRO} AS deps

# Create ros2_ws and copy files
WORKDIR /root/ros2_ws
SHELL ["/bin/bash", "-c"]
COPY . /root/ros2_ws/src

# Install dependencies
RUN apt-get update \
    && apt-get -y --quiet --no-install-recommends install \
    gcc \
    git \
    python3 \
    python3-pip
RUN pip3 install -r src/requirements.txt
RUN rosdep install --from-paths src --ignore-src -r -y
RUN pip3 install sphinx==8.0.0 sphinx-rtd-theme==3.0.0

# Colcon the ws
FROM deps AS builder
ARG CMAKE_BUILD_TYPE=Release
RUN source /opt/ros/${ROS_DISTRO}/setup.bash && colcon build

# Source the ROS2 setup file
RUN echo "source /root/ros2_ws/install/setup.bash" >> ~/.bashrc

# Run a default command, e.g., starting a bash shell
CMD ["bash"]
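
Since the base image is selected through the `ROS_DISTRO` build argument declared above, the image can also be built against another ROS 2 distribution; a sketch, where `iron` is only an example tag:

```shell
$ docker build -t yolo_ros --build-arg ROS_DISTRO=iron .
```
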
161 changes: 89 additions & 72 deletions README.md
@@ -1,44 +1,43 @@
> ## 🚨 **Repository Name Change Announcement** 🚨
>
> We are planning to rename this repository from **yolov8_ros** to **yolo_ros** on **31-10-2024**.
>
> The repository is renamed since more YOLO models are supported in this tool, not only YOLOv8.
>
> Check out the updates in the [yolo_ros branch](https://github.com/mgonzs13/yolov8_ros/tree/yolo_ros).
>
> Please update your local repository, dependencies, scripts or tools that rely on the repository URL.
>
> After the name change, update your local repository URL:
>
> ```bash
> git remote set-url origin https://github.com/mgonzs13/yolo_ros.git
> ```
# yolov8_ros
ROS 2 wrap for [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) to perform object detection and tracking, instance segmentation, human pose estimation and Oriented Bounding Box (OBB). There are also 3D versions of object detection, including instance segmentation, and human pose estimation based on depth images.
# yolo_ros

ROS 2 wrapper for YOLO models from [Ultralytics](https://github.com/ultralytics/ultralytics) to perform object detection and tracking, instance segmentation, human pose estimation and Oriented Bounding Box (OBB) detection. There are also 3D versions of object detection, including instance segmentation, and human pose estimation based on depth images.

## Table of Contents

1. [Installation](#installation)
2. [Models](#models)
3. [Usage](#usage)
4. [Demos](#demos)
2. [Docker](#docker)
3. [Models](#models)
4. [Usage](#usage)
5. [Demos](#demos)

## Installation

```shell
$ cd ~/ros2_ws/src
$ git clone https://github.com/mgonzs13/yolov8_ros.git
$ pip3 install -r yolov8_ros/requirements.txt
$ git clone https://github.com/mgonzs13/yolo_ros.git
$ pip3 install -r yolo_ros/requirements.txt
$ cd ~/ros2_ws
$ rosdep install --from-paths src --ignore-src -r -y
$ colcon build
```

## Docker

Build the yolo_ros Docker image.

```shell
$ docker build -t yolo_ros .
```

Run the Docker container. If you want to use CUDA, install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) and add `--gpus all`.

```shell
$ docker run -it --rm --gpus all yolo_ros
```
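
If the toolkit is not installed yet, a rough sketch of the apt-based setup on an Ubuntu host (after adding the NVIDIA apt repository as described in the guide above) is:

```shell
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
```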

## Models

The available models for yolov8_ros are the following:
The compatible models for yolo_ros are the following:

- [YOLOv3](https://docs.ultralytics.com/models/yolov3/)
- [YOLOv4](https://docs.ultralytics.com/models/yolov4/)
@@ -50,90 +49,98 @@ The available models for yolov8_ros are the following:
- [YOLOv10](https://docs.ultralytics.com/models/yolov10/)
- [YOLOv11](https://docs.ultralytics.com/models/yolo11/)
- [YOLO-NAS](https://docs.ultralytics.com/models/yolo-nas/)
- [YOLO-World](https://docs.ultralytics.com/models/yolo-world/)

## Usage

### YOLOv5 / YOLOv8 / YOLOv9 / YOLOv10 / YOLOv11 / YOLO-NAS
<details>
<summary>Click to expand</summary>

```shell
$ ros2 launch yolov8_bringup yolov5.launch.py
```
### YOLOv5

```shell
$ ros2 launch yolov8_bringup yolov8.launch.py
$ ros2 launch yolo_bringup yolov5.launch.py
```

```shell
$ ros2 launch yolov8_bringup yolov9.launch.py
```
### YOLOv8

```shell
$ ros2 launch yolov8_bringup yolov10.launch.py
$ ros2 launch yolo_bringup yolov8.launch.py
```

### YOLOv9

```shell
$ ros2 launch yolov8_bringup yolov11.launch.py
$ ros2 launch yolo_bringup yolov9.launch.py
```

### YOLOv10

```shell
$ ros2 launch yolov8_bringup yolo-nas.launch.py
$ ros2 launch yolo_bringup yolov10.launch.py
```

<p align="center">
<img src="./docs/rqt_graph_yolov8.png" width="100%" />
</p>
#### Topics
### YOLOv11

- **/yolo/detections**: Objects detected by YOLO using the RGB images. Each object contains a bounding box and a class name. It may also include a mask or a list of keypoints.
- **/yolo/tracking**: Objects detected and tracked from YOLO results. Each object is assigned a tracking ID.
- **/yolo/debug_image**: Debug images showing the detected and tracked objects. They can be visualized with rviz2.
```shell
$ ros2 launch yolo_bringup yolov11.launch.py
```

#### Parameters
### YOLO-NAS

- **model_type**: Ultralytics model type (default: YOLO)
- **model**: YOLOv8 model (default: yolov8m.pt)
- **tracker**: Tracker file (default: bytetrack.yaml)
- **device**: GPU/CUDA (default: cuda:0)
- **enable**: Whether to start YOLOv8 enabled (default: True)
- **threshold**: Detection threshold (default: 0.5)
- **input_image_topic**: Camera topic of RGB images (default: /camera/rgb/image_raw)
- **image_reliability**: Reliability for the image topic: 0=system default, 1=Reliable, 2=Best Effort (default: 2)
```shell
$ ros2 launch yolo_bringup yolo-nas.launch.py
```

### YOLOv8 3D
### YOLO-World

```shell
$ ros2 launch yolov8_bringup yolov8_3d.launch.py
$ ros2 launch yolo_bringup yolo-world.launch.py
```

</details>

<p align="center">
<img src="./docs/rqt_graph_yolov8_3d.png" width="100%" />
<img src="./docs/rqt_graph_yolov8.png" width="100%" />
</p>

#### Topics
### Topics

- **/yolo/detections**: Objects detected by YOLO using the RGB images. Each object contains a bounding boxes and a class name. It may also include a mak or a list of keypoints.
- **/yolo/detections**: Objects detected by YOLO using the RGB images. Each object contains a bounding box and a class name. It may also include a mask or a list of keypoints.
- **/yolo/tracking**: Objects detected and tracked from YOLO results. Each object is assigned a tracking ID.
- **/yolo/detections_3d**: 3D objects detected. YOLO results are used to crop the depth images to create the 3D bounding boxes and 3D keypoints.
- **/yolo/debug_image**: Debug images showing the detected and tracked objects. They can be visualized with rviz2.
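
Once a detector is running, these topics can be inspected directly from the command line; for example (assuming the default `/yolo` namespace listed above):

```shell
$ ros2 topic echo /yolo/tracking
$ ros2 topic hz /yolo/debug_image
```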

#### Parameters
### Parameters

These are the parameters from [yolo.launch.py](./yolo_bringup/launch/yolo.launch.py), which is used to launch all models. Check out the [Ultralytics page](https://docs.ultralytics.com/modes/predict/#inference-arguments) for more details; an example launch command follows the list below.

- **model_type**: Ultralytics model type (default: YOLO)
- **model**: YOLOv8 model (default: yolov8m.pt)
- **model**: YOLO model (default: yolov8m.pt)
- **tracker**: tracker file (default: bytetrack.yaml)
- **device**: GPU/CUDA (default: cuda:0)
- **enable**: whether to start YOLOv8 enabled (default: True)
- **enable**: whether to start YOLO enabled (default: True)
- **threshold**: detection threshold (default: 0.5)
- **iou**: Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS) (default: 0.5)
- **imgsz_height**: image height for inference (default: 640)
- **imgsz_width**: image width for inference (default: 640)
- **half**: whether to enable half-precision (FP16) inference speeding up model inference with minimal impact on accuracy (default: False)
- **max_det**: maximum number of detections allowed per image (default: 300)
- **augment**: whether to enable test-time augmentation (TTA) for predictions improving detection robustness at the cost of speed (default: False)
- **agnostic_nms**: whether to enable class-agnostic Non-Maximum Suppression (NMS) merging overlapping boxes of different classes (default: False)
- **retina_masks**: whether to use high-resolution segmentation masks if available in the model, enhancing mask quality for segmentation (default: False)
- **input_image_topic**: camera topic of RGB images (default: /camera/rgb/image_raw)
- **image_reliability**: reliability for the image topic: 0=system default, 1=Reliable, 2=Best Effort (default: 2)
- **input_depth_topic**: camera topic of depth images (default: /camera/depth/image_raw)
- **depth_image_reliability**: reliability for the depth image topic: 0=system default, 1=Reliable, 2=Best Effort (default: 2)
- **input_depth_info_topic**: camera topic for info data (default: /camera/depth/camera_info)
- **depth_info_reliability**: reliability for the depth info topic: 0=system default, 1=Reliable, 2=Best Effort (default: 2)
- **depth_image_units_divisor**: divisor to convert the depth image into metres (default: 1000)
- **target_frame**: frame to transform the 3D boxes (default: base_link)
- **maximum_detection_threshold**: maximum detection threshold in the z axis (default: 0.3)
- **depth_image_units_divisor**: divisor to convert the depth image into meters (default: 1000)
- **maximum_detection_threshold**: maximum detection threshold in the z-axis (default: 0.3)
- **use_tracking**: whether to activate tracking after detection (default: True)
- **use_3d**: whether to activate 3D detections (default: False)
- **use_debug**: whether to activate debug node (default: True)
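
As an illustrative sketch (the values are chosen arbitrarily), several of these parameters can be overridden on the command line:

```shell
$ ros2 launch yolo_bringup yolo.launch.py \
    model:=yolov8m-seg.pt \
    device:=cuda:0 \
    threshold:=0.6 \
    iou:=0.5 \
    half:=True \
    input_image_topic:=/camera/rgb/image_raw
```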

## Lifecycle nodes

@@ -147,24 +154,34 @@ These are some resource comparisons using the default yolov8m.pt model on a 30fp
| Active | 40-50% in one core | 628 MB | Up to 200 Mbps |
| Inactive | ~5-7% in one core | 338 MB | 0-20 Kbps |
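
Since these are lifecycle nodes, they can be switched between the states above at runtime with the standard lifecycle CLI; a sketch, assuming the detection node is named `/yolo/yolo_node`:

```shell
$ ros2 lifecycle set /yolo/yolo_node deactivate
$ ros2 lifecycle set /yolo/yolo_node activate
```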

### YOLO 3D

```shell
$ ros2 launch yolo_bringup yolov8.launch.py use_3d:=True
```

<p align="center">
<img src="./docs/rqt_graph_yolov8_3d.png" width="100%" />
</p>

## Demos

### Object Detection

This is the standard behavior of YOLOv8, which includes object tracking.
This is the standard behavior of yolo_ros, which includes object tracking.

```shell
$ ros2 launch yolov8_bringup yolov8.launch.py
$ ros2 launch yolo_bringup yolo.launch.py
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1gTQt6soSIq1g2QmK7locHDiZ-8MqVl2w)](https://drive.google.com/file/d/1gTQt6soSIq1g2QmK7locHDiZ-8MqVl2w/view?usp=sharing)
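
If the camera publishes on a different topic, the input can be remapped at launch; `/image_raw` below is only a placeholder for your driver's topic:

```shell
$ ros2 launch yolo_bringup yolo.launch.py input_image_topic:=/image_raw
```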

### Instance Segmentation

Instance masks are the borders of the detected objects, not the all the pixels inside the masks.
Instance masks are the borders of the detected objects, not all the pixels inside the masks.

```shell
$ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m-seg.pt
$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-seg.pt
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1dwArjDLSNkuOGIB0nSzZR6ABIOCJhAFq)](https://drive.google.com/file/d/1dwArjDLSNkuOGIB0nSzZR6ABIOCJhAFq/view?usp=sharing)
@@ -174,17 +191,17 @@ $ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m-seg.pt
Online persons are detected along with their keypoints.

```shell
$ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m-pose.pt
$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-pose.pt
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1pRy9lLSXiFEVFpcbesMCzmTMEoUXGWgr)](https://drive.google.com/file/d/1pRy9lLSXiFEVFpcbesMCzmTMEoUXGWgr/view?usp=sharing)

### 3D Object Detection

The 3D bounding boxes are calculated filtering the depth image data from an RGB-D camera using the 2D bounding box. Only objects with a 3D bounding box are visualized in the 2D image.
The 3D bounding boxes are calculated by filtering the depth image data from an RGB-D camera using the 2D bounding box. Only objects with a 3D bounding box are visualized in the 2D image.

```shell
$ ros2 launch yolov8_bringup yolov8_3d.launch.py
$ ros2 launch yolo_bringup yolo.launch.py use_3d:=True
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1ZcN_u9RB9_JKq37mdtpzXx3b44tlU-pr)](https://drive.google.com/file/d/1ZcN_u9RB9_JKq37mdtpzXx3b44tlU-pr/view?usp=sharing)
@@ -194,7 +211,7 @@ $ ros2 launch yolov8_bringup yolov8_3d.launch.py
In this case, the depth image data is filtered using the max and min values obtained from the instance masks. Only objects with a 3D bounding box are visualized in the 2D image.

```shell
$ ros2 launch yolov8_bringup yolov8_3d.launch.py model:=yolov8m-seg.pt
$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-seg.pt use_3d:=True
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1wVZgi5GLkAYxv3GmTxX5z-vB8RQdwqLP)](https://drive.google.com/file/d/1wVZgi5GLkAYxv3GmTxX5z-vB8RQdwqLP/view?usp=sharing)
@@ -204,7 +221,7 @@ $ ros2 launch yolov8_bringup yolov8_3d.launch.py model:=yolov8m-seg.pt
Each keypoint is projected in the depth image and visualized using purple spheres. Only objects with a 3D bounding box are visualized in the 2D image.

```shell
$ ros2 launch yolov8_bringup yolov8_3d.launch.py model:=yolov8m-pose.pt
$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-pose.pt use_3d:=True
```

[![](https://drive.google.com/thumbnail?authuser=0&sz=w1280&id=1j4VjCAsOCx_mtM2KFPOLkpJogM0t227r)](https://drive.google.com/file/d/1j4VjCAsOCx_mtM2KFPOLkpJogM0t227r/view?usp=sharing)
4 changes: 2 additions & 2 deletions requirements.txt
@@ -1,5 +1,5 @@
opencv-python==4.8.1.78
opencv-python>=4.8.1.78
typing-extensions>=4.4.0
ultralytics==8.3.3
ultralytics==8.3.18
super-gradients==3.7.1
lap==0.4.0
yolo_bringup/CMakeLists.txt
@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.8)
project(yolov8_bringup)
project(yolo_bringup)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
