pointcloud with respect to aux rectified image #97
Hi @poornimajd. The /multisense/image_points2_color topic is a version of the point cloud which uses this routine (https://docs.carnegierobotics.com/docs/cookbook/overview.html#create-a-color-3d-point-cloud) to colorize each 3D point with the aux image. This is not quite what you want, since I assume you are looking for the depth of objects you detected in the aux camera.

A possible solution is to exploit the approximation outlined here (https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed) and apply an extrinsics shift to the depth image in the left rectified coordinate frame to transform it into the aux rectified coordinate frame. Once you have a depth image in the aux camera coordinate frame, you can perform direct point/depth lookups for any of your detections. You can use the Tx value computed from the aux projection matrix (https://docs.carnegierobotics.com/docs/calibration/stereo.html#p-matrix) for this extrinsics transformation.

You are correct that the main purpose of the aux camera is to provide color images for various ML detection models. The aux camera also has a wider FOV lens, which can help for certain detection applications.
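To make that concrete, here is a minimal sketch of the extrinsics shift, not the driver's API: the function name, the array layout, and the assumption that the left and aux rectified images share the same fx and cx are mine. It warps a left-rectified depth image into the aux rectified frame using only the Tx term of the aux P matrix.

```python
import numpy as np

def shift_depth_to_aux_frame(depth_left, P_aux):
    """Warp a depth image from the left rectified frame into the aux
    rectified frame using only the Tx term of the aux P matrix
    (the speed approximation: Ty and Tz are treated as zero).

    depth_left : HxW float array of depths (meters) in the left rectified frame
    P_aux      : 3x4 aux projection matrix from the camera calibration
    """
    h, w = depth_left.shape
    fx_tx = P_aux[0, 3]                 # P_aux[0, 3] == fx * Tx (Tx is negative for a camera to the right)
    depth_aux = np.full_like(depth_left, np.nan)

    vs, us = np.nonzero(depth_left > 0.0)             # valid pixels only
    zs = depth_left[vs, us]
    us_aux = np.round(us + fx_tx / zs).astype(int)    # horizontal shift of fx*Tx/Z pixels

    in_bounds = (us_aux >= 0) & (us_aux < w)
    depth_aux[vs[in_bounds], us_aux[in_bounds]] = zs[in_bounds]
    return depth_aux
```

Each pixel only moves horizontally by fx*Tx/Z, which is exactly the speed approximation: the small Ty and Tz offsets of the aux camera are ignored.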
Thank you for the detailed reply. So if I understand correctly, is the following pipeline right? Could you please confirm it for me?
So once the terms are obtained according to 3), I can use the auxiliary image created in 2) as an input to the model, and the corresponding point cloud will be "/multisense/image_points2". I also wanted to verify: the rectified images on all topics are undistorted, right?
Hi @mattalvarado, any suggestion would be appreciated!
@poornimajd, apologies for the delayed response. The pipeline I would recommend is to:
You can verify this by evaluating the following expression: beta * [u_aux, v_aux, 1]^T = P_aux * Q_left * [u_left, v_left, d_left, 1]^T. Since we are using the colorization approximation, you will need to zero out T_y and T_z in the P_aux matrix. You can also simplify the Q matrix (https://docs.carnegierobotics.com/docs/cookbook/overview.html#reproject-disparity-images-to-3d-point-clouds) by setting fy equal to fx, setting cx' equal to cx, and dividing each term by fx*Tx.
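For reference, a small numpy sketch of that check; the calibration numbers below are placeholders, not values from a real unit.

```python
import numpy as np

# Placeholder calibration numbers (not from a real camera)
fx, fy = 590.0, 590.0
cx, cy = 512.0, 272.0
tx_stereo = -0.21          # left->right stereo baseline term (meters, negative by convention)
tx_aux = -0.07             # left->aux baseline term (meters)

# Q matrix that reprojects [u, v, d, 1]^T from the left rectified image to 3D
# (the (cx - cx')/Tx entry is zero once cx' is set equal to cx)
Q = np.array([
    [1.0, 0.0, 0.0,            -cx],
    [0.0, 1.0, 0.0,            -cy],
    [0.0, 0.0, 0.0,             fx],
    [0.0, 0.0, -1.0/tx_stereo,  0.0],
])

# Aux P matrix with T_y and T_z zeroed out (the colorization approximation)
P_aux = np.array([
    [fx,  0.0, cx,  fx * tx_aux],
    [0.0, fy,  cy,  0.0],
    [0.0, 0.0, 1.0, 0.0],
])

# Map a left-rectified pixel and its disparity into the aux rectified image
u_left, v_left, d_left = 700.0, 300.0, 48.0
uvw = P_aux @ Q @ np.array([u_left, v_left, d_left, 1.0])
u_aux, v_aux = uvw[:2] / uvw[2]     # divide by beta (the homogeneous scale)

# With T_y = T_z = 0 and fy = fx, the pixel only shifts horizontally: v_aux == v_left
print(u_aux, v_aux)
```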
@mattalvarado Thank you for the response. I will try out steps 1 and 2, but I think I do not need step 3, as I already have the point cloud corresponding to the left disparity, which is being published on "/multisense/image_points2".
@poornimajd, you would want to use the /multisense/organized_image_points2 topic rather than the /multisense/image_points2 topic, since the raw image_points2 topic skips invalid points, breaking the easy mapping between a pixel in the disparity image and a 3D point in the output point cloud. An organized point cloud (https://pointclouds.org/documentation/tutorials/basic_structures.html) includes invalid points to preserve the 1-to-1 mapping between disparity pixels and 3D points.
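As an illustration of the direct lookup this enables, here is a minimal sketch assuming a ROS 1 / rospy setup; the node name and the detection pixel are placeholders. Because the cloud is organized, the point for rectified pixel (u, v) sits at row v, column u, and read_points can fetch it directly via its uvs argument.

```python
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def lookup_points(cloud_msg, pixels):
    """pixels: list of (u, v) tuples in the rectified image."""
    # read_points accepts a 'uvs' list and yields one (x, y, z) per requested
    # pixel, in order; NaNs indicate pixels with no valid disparity.
    return list(point_cloud2.read_points(cloud_msg,
                                         field_names=("x", "y", "z"),
                                         skip_nans=False,
                                         uvs=pixels))

def callback(cloud_msg):
    # Placeholder detection pixel; in practice use your model's detections
    pts = lookup_points(cloud_msg, [(640, 360)])
    rospy.loginfo("point at (640, 360): %s", pts[0])

rospy.init_node("organized_cloud_lookup")
rospy.Subscriber("/multisense/organized_image_points2", PointCloud2, callback)
rospy.spin()
```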
Hello, I'm using a color image as input for my model, so I've subscribed to "/multisense/aux/image_rect_color." I also require the corresponding point cloud. Should I use "/multisense/image_points2_color" for this purpose? Just as "/multisense/image_points2" — the grayscale point cloud — is aligned with the left rectified image, I assume "/multisense/image_points2_color" should align with "/multisense/aux/image_rect_color," correct?
Additionally, there isn't much information available about the auxiliary camera. Is its sole purpose to provide color images, or does it offer any other advantages?