
Low FPS in ROS 2 using Python API. #1049

Open · Shivam7Sharma opened this issue Jul 2, 2024 · 11 comments

Shivam7Sharma commented Jul 2, 2024

Hi,

I am writing my own ROS 2 driver because the one DepthAI provides doesn't work properly for me. I am using the Python API. The FPS is less than 10 at 400p resolution. What might be causing the low FPS, and how can I improve it? The following is my code, which I admit may not be 100% correct:

#!/usr/bin/env python3

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge
import depthai as dai
import numpy as np


class DepthAICameraNode(Node):
    def __init__(self):
        super().__init__('depthai_camera_node')

        self.bridge = CvBridge()

        self.left_image_pub = self.create_publisher(
            Image, 'left/image_raw', 10)
        self.right_image_pub = self.create_publisher(
            Image, 'right/image_raw', 10)
        self.depth_image_pub = self.create_publisher(
            Image, 'depth/image_raw', 10)
        self.left_camera_info_pub = self.create_publisher(
            CameraInfo, 'left/camera_info', 10)
        self.right_camera_info_pub = self.create_publisher(
            CameraInfo, 'right/camera_info', 10)

        self.pipeline = self.create_pipeline()

        # Create the device with USB speed set to SUPER_PLUS
        self.device = dai.Device(
            self.pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS)
        self.left_queue = self.device.getOutputQueue(
            name="left", maxSize=8, blocking=False)
        self.right_queue = self.device.getOutputQueue(
            name="right", maxSize=8, blocking=False)
        self.depth_queue = self.device.getOutputQueue(
            name="depth", maxSize=8, blocking=False)

        # Adjust the timer to match the desired FPS
        self.timer = self.create_timer(1.0 / 20.0, self.publish_images)

    def create_pipeline(self):
        pipeline = dai.Pipeline()

        cam_left = pipeline.create(dai.node.MonoCamera)
        cam_right = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)

        xout_left = pipeline.create(dai.node.XLinkOut)
        xout_right = pipeline.create(dai.node.XLinkOut)
        xout_depth = pipeline.create(dai.node.XLinkOut)

        xout_left.setStreamName("left")
        xout_right.setStreamName("right")
        xout_depth.setStreamName("depth")

        cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        cam_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

        # Set the camera FPS
        fps = 20
        cam_left.setFps(fps)
        cam_right.setFps(fps)

        cam_left.setResolution(
            dai.MonoCameraProperties.SensorResolution.THE_400_P)
        cam_right.setResolution(
            dai.MonoCameraProperties.SensorResolution.THE_400_P)

        stereo.setLeftRightCheck(True)
        stereo.setExtendedDisparity(False)
        stereo.setSubpixel(False)

        cam_left.out.link(stereo.left)
        cam_right.out.link(stereo.right)

        stereo.rectifiedLeft.link(xout_left.input)
        stereo.rectifiedRight.link(xout_right.input)
        stereo.depth.link(xout_depth.input)

        return pipeline

    def publish_images(self):
        if self.left_queue.has():
            left_frame = self.left_queue.get().getCvFrame()
            left_image_msg = self.bridge.cv2_to_imgmsg(
                left_frame, encoding='mono8')
            self.left_image_pub.publish(left_image_msg)

        if self.right_queue.has():
            right_frame = self.right_queue.get().getCvFrame()
            right_image_msg = self.bridge.cv2_to_imgmsg(
                right_frame, encoding='mono8')
            self.right_image_pub.publish(right_image_msg)

        if self.depth_queue.has():
            depth_frame = self.depth_queue.get().getCvFrame()
            depth_image_msg = self.bridge.cv2_to_imgmsg(
                depth_frame, encoding='16UC1')
            self.depth_image_pub.publish(depth_image_msg)

        # Placeholder for CameraInfo messages, to be filled with actual calibration data
        # left_camera_info_msg = CameraInfo()
        # right_camera_info_msg = CameraInfo()

        # self.left_camera_info_pub.publish(left_camera_info_msg)
        # self.right_camera_info_pub.publish(right_camera_info_msg)


def main(args=None):
    rclpy.init(args=args)
    node = DepthAICameraNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

Here is a screenshot of the measured performance:

[Screenshot 2024-07-02 184702]
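
One thing worth checking in the code above: the 20 Hz ROS timer pulls at most one frame per stream per tick, so any timer jitter or a slow callback loses frames. A minimal sketch of draining everything buffered on each tick instead, using depthai's tryGetAll() (drain_and_publish is a hypothetical helper name; queue, publisher and bridge are the objects from the node above):

# Hypothetical helper: drain a depthai output queue on every timer tick.
# tryGetAll() returns every frame currently buffered, so a late tick
# publishes the backlog instead of at most one frame.
def drain_and_publish(queue, publisher, bridge, encoding):
    for packet in queue.tryGetAll():
        msg = bridge.cv2_to_imgmsg(packet.getCvFrame(), encoding=encoding)
        publisher.publish(msg)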

Erol444 (Member) commented Jul 3, 2024

Hi @Shivam7Sharma, for FPS/latency optimization please refer to the documentation here:
https://docs.luxonis.com/software/depthai/optimizing/
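
For illustration, one latency-oriented adjustment of that kind is keeping host-side queues short and non-blocking, so stale frames are dropped instead of piling up. A sketch, assuming a connected dai.Device named device as in the code above:

# Short, non-blocking host-side queue: old frames are dropped rather than
# queued, which bounds latency at the cost of discarding stale data.
q = device.getOutputQueue(name="depth", maxSize=1, blocking=False)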

Shivam7Sharma (Author) commented

@Erol444 I have looked at the page at that link. It says at 400p resolution I should be getting more than 10 FPS.

I am using SUPER_PLUS USB speed and a low resolution.

Do you have more debugging steps?

Erol444 (Member) commented Jul 3, 2024

@Shivam7Sharma have you tried the exact same code as I used?
#1048 (comment)

Shivam7Sharma (Author) commented

@Erol444 Thank you for your reply. I tried your code and got 120 FPS, but I want good FPS in ROS 2. If I don't open RViz or visualize anything, will the FPS be higher in ROS 2? I am checking the FPS using ros2 topic hz /topic_name. My aim is to use SLAM software with the camera, and for good performance I need good FPS in ROS 2. Visualization is not important to me, yet even without visualizing in RViz the FPS is still 10.

[Screenshot 2024-07-03 115154]

Shivam7Sharma (Author) commented

@Erol444 I updated the code and ran it with no visualization. The FPS I got in ROS 2 was 24 when I set the camera to 120, but self.fps.fps() prints about 80 FPS. Can the FPS be improved? Thank you for your help so far. Any advice for writing this Python ROS 2 driver? The following is my code:

#!/usr/bin/env python3
# ros2 run depthai_camera_driver depthai_camera_node

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge
import depthai as dai
import numpy as np
from depthai_sdk import OakCamera
from depthai_sdk.fps import FPSHandler


class DepthAICameraNode(Node):
    def __init__(self):
        super().__init__('depthai_camera_node')
        self.logger = rclpy.logging.get_logger('depthai_camera_node')
        self.logger.set_level(rclpy.logging.LoggingSeverity.DEBUG)

        self.logger.debug('Initializing DepthAICameraNode')

        self.bridge = CvBridge()
        self.fps = FPSHandler()

        self.left_image_pub = self.create_publisher(
            Image, 'left/image_raw', 20)
        self.right_image_pub = self.create_publisher(
            Image, 'right/image_raw', 20)
        self.depth_image_pub = self.create_publisher(
            Image, 'depth/image_raw', 20)
        self.left_camera_info_pub = self.create_publisher(
            CameraInfo, 'left/camera_info', 20)
        self.right_camera_info_pub = self.create_publisher(
            CameraInfo, 'right/camera_info', 20)

        self.pipeline = self.create_pipeline()

        # Create the device with USB speed set to SUPER_PLUS
        self.device = dai.Device(
            self.pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS)

        self.left_queue = self.device.getOutputQueue(
            name="left", maxSize=30, blocking=False)
        self.right_queue = self.device.getOutputQueue(
            name="right", maxSize=30, blocking=False)
        self.depth_queue = self.device.getOutputQueue(
            name="depth", maxSize=30, blocking=False)

        # Adjust the timer to match the desired FPS
        self.timer = self.create_timer(1.0 / 120.0, self.publish_images)

    def create_pipeline(self):
        pipeline = dai.Pipeline()

        cam_left = pipeline.create(dai.node.MonoCamera)
        cam_right = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)

        xout_left = pipeline.create(dai.node.XLinkOut)
        xout_right = pipeline.create(dai.node.XLinkOut)
        xout_depth = pipeline.create(dai.node.XLinkOut)

        xout_left.setStreamName("left")
        xout_right.setStreamName("right")
        xout_depth.setStreamName("depth")

        cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        cam_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

        # Set the camera FPS
        fps = 120
        cam_left.setFps(fps)
        cam_right.setFps(fps)

        cam_left.setResolution(
            dai.MonoCameraProperties.SensorResolution.THE_800_P)
        cam_right.setResolution(
            dai.MonoCameraProperties.SensorResolution.THE_800_P)

        stereo.setLeftRightCheck(True)
        stereo.setExtendedDisparity(False)
        stereo.setSubpixel(False)

        cam_left.out.link(stereo.left)
        cam_right.out.link(stereo.right)

        stereo.rectifiedLeft.link(xout_left.input)
        stereo.rectifiedRight.link(xout_right.input)
        stereo.depth.link(xout_depth.input)

        return pipeline

    def publish_images(self):
        if self.left_queue.has():
            left_frame = self.left_queue.get().getCvFrame()
            left_image_msg = self.bridge.cv2_to_imgmsg(
                left_frame, encoding='mono8')
            self.left_image_pub.publish(left_image_msg)
            self.fps.nextIter()
            self.get_logger().info('Published left image')

        if self.right_queue.has():
            right_frame = self.right_queue.get().getCvFrame()
            right_image_msg = self.bridge.cv2_to_imgmsg(
                right_frame, encoding='mono8')
            self.right_image_pub.publish(right_image_msg)
            self.fps.nextIter()
            self.get_logger().info('Published right image')

        if self.depth_queue.has():
            depth_frame = self.depth_queue.get().getCvFrame()
            depth_image_msg = self.bridge.cv2_to_imgmsg(
                depth_frame, encoding='16UC1')
            self.depth_image_pub.publish(depth_image_msg)
            self.fps.nextIter()
            self.get_logger().info('Published depth image')

        # Log FPS to monitor performance. Note that nextIter() runs once per
        # stream above, so this reports the combined rate of all three
        # streams, not the per-topic rate that ros2 topic hz measures.
        self.get_logger().info(f'Current FPS: {self.fps.fps()}')
        # Placeholder for CameraInfo messages, to be filled with actual calibration data
        # left_camera_info_msg = CameraInfo()
        # right_camera_info_msg = CameraInfo()

        # self.left_camera_info_pub.publish(left_camera_info_msg)
        # self.right_camera_info_pub.publish(right_camera_info_msg)


def main(args=None):
    rclpy.init(args=args)
    node = DepthAICameraNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
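
A possible alternative to the timer-driven loop above: depthai output queues support addCallback(), which fires as each frame arrives from the device, so publishing is not gated on the ROS timer period. A minimal sketch (attach_queue is a hypothetical helper; it assumes the node's self.device and self.bridge from the code above, and that the publisher may be called from a non-executor thread):

# Hypothetical callback-driven variant: publish each frame as soon as it
# arrives instead of waiting for the next ROS timer tick.
def attach_queue(self, name, publisher, encoding):
    queue = self.device.getOutputQueue(name=name, maxSize=4, blocking=False)

    def on_frame(packet):
        # Runs on depthai's worker thread as frames arrive.
        msg = self.bridge.cv2_to_imgmsg(packet.getCvFrame(), encoding=encoding)
        publisher.publish(msg)

    queue.addCallback(on_frame)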

themarpe (Collaborator) commented Jul 3, 2024

@Shivam7Sharma as per our past benchmarking, Python-to-ROS 2 transport has very poor performance (bandwidth-wise).
I'd advise staying with the depthai ROS driver, or rewriting in C++ with the depthai-ros bridge component.

Shivam7Sharma (Author) commented

@themarpe Okay, I will try C++ with the depthai-ros bridge. I am having issues with the depthai ROS 2 driver, which is why I am writing my own. I asked for help with their driver on the Luxonis forum, but there has been no response in the past month.

Erol444 (Member) commented Jul 4, 2024

@Shivam7Sharma could you share the link to the forum post you are referring to?

jwdinius commented Oct 18, 2024

FWIW, for ROS 2 transport of large messages (like video frames), you will get a big boost from using either shared memory (good) or in-process memory (best). You can eliminate the use of the network and of shared memory entirely by using composition.

Shared-memory transport only works with fixed-size messages, so you would need to choose a message definition that has a fixed size, like one from here.
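
For illustration, a minimal sketch of the in-process idea in rclpy: both nodes share one process and one executor. Note that true zero-copy intra-process transport is an rclcpp feature, so in Python this mainly removes the cost of running separate processes; Talker and Listener are placeholder nodes, not part of the driver above:

# Minimal manual "composition" in rclpy: two nodes, one process, one executor.
import rclpy
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        self.pub.publish(String(data='frame'))


class Listener(Node):
    def __init__(self):
        super().__init__('listener')
        self.sub = self.create_subscription(String, 'chatter', self.on_msg, 10)

    def on_msg(self, msg):
        self.get_logger().debug(f'got {msg.data}')


def main():
    rclpy.init()
    executor = SingleThreadedExecutor()
    executor.add_node(Talker())
    executor.add_node(Listener())
    executor.spin()


if __name__ == '__main__':
    main()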

Erol444 (Member) commented Oct 19, 2024

Thanks for the info @jwdinius!
cc @moratom @Serafadam ^ on ROS2 for depthaiv3 integration:)
