pointcloud with respect to aux rectified image #97

Open
poornimajd opened this issue Oct 7, 2024 · 6 comments

@poornimajd

poornimajd commented Oct 7, 2024

Hello, I'm using a color image as input for my model, so I've subscribed to "/multisense/aux/image_rect_color." I also require the corresponding point cloud. Should I use "/multisense/image_points2_color" for this purpose? Just as "/multisense/image_points2" — the grayscale point cloud — is aligned with the left rectified image, I assume "/multisense/image_points2_color" should align with "/multisense/aux/image_rect_color," correct?

Additionally, there isn't much information available about the auxiliary camera. Is its sole purpose to provide color images, or does it offer any other advantages?

@mattalvarado
Contributor

Hi @poornimajd. The /multisense/image_points2_color topic is a version of the point cloud that uses this routine (https://docs.carnegierobotics.com/docs/cookbook/overview.html#create-a-color-3d-point-cloud) to colorize each 3D point with the aux image. This is not quite what you want, since I assume you are looking for the depth of objects you detected in the aux camera. A possible solution is to exploit the approximation outlined here (https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed) and apply an extrinsics shift to the depth image in the left rectified coordinate frame to transform it into the aux rectified coordinate frame. Once you have a depth image in the aux camera coordinate frame, you can perform direct point/depth lookups for any of your detections. You can use the Tx value computed from the aux projection matrix (https://docs.carnegierobotics.com/docs/calibration/stereo.html#p-matrix) for this extrinsics transformation.
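For illustration, here is a minimal numpy sketch of that extrinsics shift (not part of the driver). It assumes a depth image in the left rectified frame is already available, e.g. computed from the disparity, along with the aux P matrix; the relation u_aux = u_left + Tx_aux / Z is one reading of the linked approximation, and the function name is made up for this sketch:

```python
import numpy as np

def depth_left_to_aux(depth_left, tx_aux):
    """Shift a depth image from the left rectified frame into the aux
    rectified frame using only the horizontal extrinsic offset.

    depth_left : HxW float depth image (meters) in the left rectified frame
    tx_aux     : P_aux[0, 3] from the aux camera_info (-fx * left->aux baseline)
    """
    h, w = depth_left.shape
    depth_aux = np.full_like(depth_left, np.nan)

    vs, us = np.nonzero(np.isfinite(depth_left) & (depth_left > 0.0))
    z = depth_left[vs, us]

    # Under the rectified pinhole model, u_aux = u_left + Tx_aux / Z (v is unchanged).
    u_aux = np.round(us + tx_aux / z).astype(int)
    keep = (u_aux >= 0) & (u_aux < w)

    # Write farther points first so nearer points win where columns collide.
    order = np.argsort(-z[keep])
    depth_aux[vs[keep][order], u_aux[keep][order]] = z[keep][order]
    return depth_aux
```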

You are correct that the main purpose of the aux camera is to provide color images for various ML detection models. The aux camera also has a wider FOV lens which can help for certain detection applications.

@poornimajd
Author

poornimajd commented Oct 8, 2024

Thank you for the detailed reply.
Yes, I need the depth of the objects detected in the aux image.

So if I understand correctly, is the following pipeline right? Could you please confirm it for me?

  1. Subscribe to "/multisense/image_points2" and to the left rectified image.
  2. Create the auxiliary image using the formula in (https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed).
  3. The different terms in the formula can be obtained as follows:
  • u_left and v_left from the left rectified image.
  • Tx_aux and Tx_right from the projection matrices of the rectified auxiliary and rectified right images [use the projection matrices as given in the ROS topics].
  • disparity values corresponding to the u_left and v_left pixels, obtained from "/multisense/left/disparity".

So once the terms are obtained according to step 3, I can use the auxiliary image created in step 2 as input to the model, and the corresponding point cloud will be "/multisense/image_points2".

I also wanted to verify: the rectified images on all topics are undistorted, right?

@poornimajd
Author

Hi @mattalvarado , any suggestion would be appreciated!

@mattalvarado
Contributor

@poornimajd apologies for the delayed response. The pipeline I would recommend would be to:

  1. Subscribe to the /multisense/left/disparity, /multisense/aux/image_rect_color, /multisense/right/image_rect/camera_info and the /multisense/aux/image_rect_color/camera_info topics
  2. For each disparity pixel, compute the corresponding aux color pixel by adjusting the u_x pixel value based on the approximation outlined here: https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed. You can compute T_x_aux and T_x_right from the P matrix (https://docs.carnegierobotics.com/docs/calibration/stereo.html#p-matrix) in the /multisense/aux/image_rect_color/camera_info and /multisense/right/image_rect/camera_info topics respectively.
  3. Use the Q matrix computed from the /multisense/right/image_rect/camera_info to convert the disparity pixel into a depth or 3D point.
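As a concrete (unofficial) sketch of steps 2 and 3, assuming P_right and P_aux are the 3x4 projection matrices pulled from the two camera_info messages and that the P[0, 3] entries carry the -fx * baseline terms as in the ROS CameraInfo convention (variable and function names here are mine):

```python
import numpy as np

def aux_pixel_and_point(u_left, v_left, d, P_right, P_aux):
    """Map a left-rectified pixel with disparity d to its approximate aux
    color pixel and its 3D point in the left rectified frame.

    P_right, P_aux : 3x4 projection matrices from the camera_info topics.
    """
    fx = P_right[0, 0]
    cx = P_right[0, 2]
    cy = P_right[1, 2]
    tx_right = P_right[0, 3]          # -fx * baseline(left -> right)
    tx_aux = P_aux[0, 3]              # -fx * baseline(left -> aux)

    # Step 2: horizontal shift only (the colorization approximation).
    u_aux = u_left - d * (tx_aux / tx_right)
    v_aux = v_left

    # Step 3: disparity -> 3D point, equivalent to applying the Q matrix.
    z = -tx_right / d                 # z = fx * baseline / d
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fx        # assumes fy ~= fx, as in the simplification below
    return (u_aux, v_aux), np.array([x, y, z])
```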

You can verify this by evaluating the following expression:

beta * [u_aux, v_aux, 1]^T = P_aux * Q_left * [u_left, v_left, d_left, 1]^T

Since we are using the colorization approximation, you will need to zero out T_y and T_z in the P_aux matrix. You can also simplify the Q matrix (https://docs.carnegierobotics.com/docs/cookbook/overview.html#reproject-disparity-images-to-3d-point-clouds) by setting fy equal to fx, setting cx’ equal to cx, and dividing each term by fx*Tx.
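A quick numerical check of that identity, with made-up intrinsics and baselines standing in for the values from the camera_info topics, and Q written in the simplified (OpenCV-style) form described above:

```python
import numpy as np

# Made-up values purely to exercise the identity; real numbers come from
# /multisense/right/image_rect/camera_info and /multisense/aux/image_rect_color/camera_info.
fx, cx, cy = 600.0, 512.0, 272.0
tx_right = -fx * 0.21            # P_right[0, 3] = -fx * baseline(left -> right)
tx_aux = -fx * 0.035             # P_aux[0, 3]   = -fx * baseline(left -> aux)

# P_aux with T_y and T_z zeroed out, as described above.
P_aux = np.array([[fx, 0.0, cx, tx_aux],
                  [0.0, fx, cy, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])

# Simplified Q: fy = fx, cx' = cx, every term divided by fx * Tx.
baseline = -tx_right / fx
Q = np.array([[1.0, 0.0, 0.0, -cx],
              [0.0, 1.0, 0.0, -cy],
              [0.0, 0.0, 0.0, fx],
              [0.0, 0.0, 1.0 / baseline, 0.0]])

u_left, v_left, d = 700.0, 300.0, 42.0
rhs = P_aux @ Q @ np.array([u_left, v_left, d, 1.0])
u_aux, v_aux = rhs[0] / rhs[2], rhs[1] / rhs[2]     # divide out beta

print(u_aux, u_left - d * (tx_aux / tx_right))      # both print 693.0 for these numbers
print(v_aux, v_left)                                # v is unchanged: 300.0
```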

@poornimajd
Author

poornimajd commented Oct 15, 2024

@mattalvarado Thank you for the response. I will try out steps 1 and 2, but I think I do not need step 3, as I already have the point cloud corresponding to the left disparity, which is being published on "/multisense/image_points2".
I basically need a point cloud and the corresponding color image, so I get the point cloud from "/multisense/image_points2", and the color image can be constructed as in steps 1 and 2.

@mattalvarado
Contributor

@poornimajd, you would want to use the /multisense/organized_image_points2 topic rather than the /multisense/image_points2 topic, since the raw image_points2 topic skips invalid points, breaking the easy mapping between a pixel in the disparity image and a 3D point in the output point cloud. An organized point cloud (https://pointclouds.org/documentation/tutorials/basic_structures.html) includes invalid points to preserve the 1-1 mapping between disparity pixels and 3D points.
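For reference, a small sketch of the per-pixel lookup this enables, reading the point straight out of the organized PointCloud2 message; field offsets are taken from the message itself, and little-endian float32 x/y/z fields are assumed:

```python
import struct

def lookup_point(cloud_msg, u, v):
    """Return (x, y, z) for column u, row v of an organized PointCloud2.

    Works because the organized cloud keeps a 1-1 mapping between disparity
    pixels and points; invalid pixels come back as NaN.
    """
    offsets = {f.name: f.offset for f in cloud_msg.fields}
    base = v * cloud_msg.row_step + u * cloud_msg.point_step
    # '<f' assumes little-endian float32 fields (is_bigendian == False).
    x, = struct.unpack_from('<f', cloud_msg.data, base + offsets['x'])
    y, = struct.unpack_from('<f', cloud_msg.data, base + offsets['y'])
    z, = struct.unpack_from('<f', cloud_msg.data, base + offsets['z'])
    return x, y, z
```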
