
Depth and color not aligning properly #1

Open
tfrasca opened this issue Aug 3, 2020 · 12 comments

@tfrasca

tfrasca commented Aug 3, 2020

When I visualize the point cloud from the RealSense in RViz, the color and depth images are not aligned properly. It seems as though the color image is shifted to the right, and I don't know how to correct this misalignment.

I'm running the kortex_vision_rgbd.launch file to start up the RealSense on the Kortex arm, and it is currently using the 1280x720 calibration file for the color stream and the 480x270 calibration file for the depth stream. I looked through the launch files and couldn't find any information about configuring depth registration. How can I properly align the color and depth images?

ROS version: Melodic

@heiyumiao

I have the same problem.
I used the following topic instead: /camera/depth_registered/sw_registered/image_rect
Sometimes, however, there are unknown problems.

@VitaliyKhomko

I think the problem is in the transformation between the color and depth sensors.

You can find this transform here

This is how the line should look with a properly calibrated transform:
<node pkg="tf2_ros"   type="static_transform_publisher"   name="camera_depth_tf_publisher"   args="-0.026 -0.0095 0.0005 0 0 0 camera_link $(arg depth_frame_id)" />

I intend to validate this solution, but I am not a ROS ninja, so bear with me; it will take some time on my end to set up my environment for ROS development.

I did, however, use this exact transform in my other C++ vision projects (not ROS-based).
And here are my results:

  • Point Cloud (screenshot: kvm-realsense-pointcloud-snapshot)
  • Distance-based thresholding (screenshot: kvm-rgbd-align-snapshot)
  • Distance Measurement (screenshot: kvm-rgbd-measure-snapshot)

Now, if you want to achieve even better color-to-depth alignment, the exact camera intrinsic parameters and extrinsic transformation have to be calibrated on a case-by-case basis using the following procedure:

  • Figure out the fixed color and depth sensor resolutions that suit your vision application needs.
  • Disable autofocus and figure out your optimal focus distance by manually adjusting the color sensor focus (see 03-vision_sensor_focus_action.py).
  • Set your preferred resolution and focus distance before starting the calibration.
  • Perform sensor calibration similar to the OpenCV Stereo Calibration C++ example (a minimal Python sketch follows this list).
  • Make sure your sensor resolution and focus distance are properly configured when running your vision application with your calibrated intrinsic and extrinsic parameters.
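
As mentioned above, here is a rough, non-Kinova-specific Python sketch of the stereo calibration step, following the same workflow as the OpenCV C++ example. The board size, square size, and image folders are placeholders, and it assumes matching image pairs captured from the two sensors at the same resolution:

# Rough sketch of a checkerboard stereo calibration with OpenCV in Python.
# Board size, square size, and the image folders are placeholders; the image
# pairs are assumed to be captured simultaneously from the two sensors.
import glob
import cv2
import numpy as np

board = (9, 6)      # inner-corner count of the checkerboard (assumption)
square = 0.025      # square size in meters (assumption)

objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, pts_a, pts_b = [], [], []
for fa, fb in zip(sorted(glob.glob('sensor_a/*.png')), sorted(glob.glob('sensor_b/*.png'))):
    im_a = cv2.imread(fa, cv2.IMREAD_GRAYSCALE)
    im_b = cv2.imread(fb, cv2.IMREAD_GRAYSCALE)
    ok_a, corners_a = cv2.findChessboardCorners(im_a, board)
    ok_b, corners_b = cv2.findChessboardCorners(im_b, board)
    if ok_a and ok_b:
        obj_pts.append(objp)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

size = im_a.shape[::-1]
# Calibrate each sensor on its own, then recover the rotation R and
# translation T between them with a stereo calibration.
_, Ka, da, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
_, Kb, db, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
rms, Ka, da, Kb, db, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_a, pts_b, Ka, da, Kb, db, size, flags=cv2.CALIB_FIX_INTRINSIC)
print('RMS reprojection error:', rms)
print('rotation:\n', R)
print('translation (m):\n', T)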


@JingKangSun

Hi, @VitaliyKhomko

I have changed the transformation between the color and depth sensors as you suggested; however, I found that this change doesn't work.

No matter how I change the args, the result remains the same.

(screenshot: rvizimg)

This problem has been bothering me for a long time. What else can I try?

Thanks.

@felixmaisonneuve
Contributor

Hi @JingKangSun,

Have you tried the following suggestion?

Now, if you want to achieve even better color-to-depth alignment, the exact camera intrinsic parameters and extrinsic transformation have to be calibrated on a case-by-case basis using the following procedure:

  • Figure out the fixed color and depth sensor resolutions that suit your vision application needs.
  • Disable autofocus and figure out your optimal focus distance by manually adjusting the color sensor focus (see 03-vision_sensor_focus_action.py).
  • Set your preferred resolution and focus distance before starting the calibration.
  • Perform sensor calibration similar to the OpenCV Stereo Calibration C++ example.
  • Make sure your sensor resolution and focus distance are properly configured when running your vision application with your calibrated intrinsic and extrinsic parameters.

Best,
Felix

@FalseFeint

FalseFeint commented Jun 9, 2022

Hello @VitaliyKhomko,

I've been experiencing the same issue and am trying to follow your suggestion. However, I have a few questions:

  1. The goal is to align color-to-depth (aligning the color image from the RGB camera with the depth image from the depth sensor), but the suggested method is stereo calibration (which is usually between the left and right stereo cameras making up the depth sensor).
    • Will performing stereo calibration on the left and right stereo cameras make the suggested transformation work properly? If so, could you provide the intuition behind this?
    • Or by stereo calibration, were you referring to calibration between the color and depth sensors, and if so, how do you perform such a calibration? The color and depth images contain different information (2D vs 3D).
  2. Assuming I need to perform stereo calibration between the two depth cameras, do you know how to access the data from each individual camera? In ROS I only see the final depth output, and using the Kortex API I can only access the depth sensor (not the individual cameras).
  3. Lastly, and this is for anyone who might see this, has anyone successfully calibrated depth-to-color?

Also, here are photos showing what color-to-depth alignment looks like:

This is what the scene looks like. Note that the checkerboard is on top of a box, so with proper depth-to-color alignment we would expect the checkerboard to sit on top of a "mesa" in the point cloud.

Using the default color-to-depth transform found in kinova_vision.launch and kinova_vision_rgbd.launch

<node pkg="tf2_ros"
      type="static_transform_publisher"
      name="camera_depth_tf_publisher"
      args="-0.0195 -0.005 0 0 0 0 camera_link $(arg depth_frame_id)" />

(screenshots: transform1_1, transform1_2)

Using the transform @VitaliyKhomko suggested

<node pkg="tf2_ros"   
      type="static_transform_publisher"   
      name="camera_depth_tf_publisher"   
      args="-0.026 -0.0095 0.0005 0 0 0 camera_link $(arg depth_frame_id)" /

(screenshots: transform2_1, transform2_2)

In both cases the checkerboard is not centered on its corresponding "mesa" in the depth image and spills over the side.

Thanks in advance for the help!

@IlianaKinova

Hi @FalseFeint, I wanted to let you know that I am currently working on a solution for this. I am designing a program that will let you calibrate the cameras by trying different offset values for the streams and then optimizing the match between the color and depth streams using edge detection.

I believe this method will be the most accurate for the least amount of work.
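
(For anyone curious, the rough idea can be sketched like this; it is only an illustration of scoring candidate offsets by edge overlap, not the actual tool, and it assumes you already have same-size 8-bit color and depth frames of the scene.)

# Illustration only (not the actual cv_calibration code): score candidate
# pixel offsets by how much the Canny edges of the color and depth images
# overlap, and keep the best-scoring offset.
import cv2
import numpy as np

def best_offset(color_gray, depth_gray, search=20):
    # Both inputs are assumed to be same-size 8-bit grayscale images.
    edges_color = cv2.Canny(color_gray, 50, 150)
    edges_depth = cv2.Canny(depth_gray, 50, 150)
    best, best_score = (0, 0), -1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(edges_depth, dy, axis=0), dx, axis=1)
            score = np.count_nonzero(edges_color & shifted)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best  # (dx, dy) pixel offset to apply to the depth stream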

@IlianaKinova

IlianaKinova commented Jul 15, 2022

Hi @FalseFeint, it has been a while, but I want to assure you I am working on the issue. I am currently working from home because I got COVID, but I am going back to the office on Monday. I have made a lot of progress since I switched to using OpenCV in Python instead of C++. C++ was causing a lot of issues, which is why it took longer than expected.

My program requires a bit of setup. It needs a rectangular shape that you can hold at a certain distance from the background, and it works much better if the background is darker than the rectangle. For the setup I am using at home, I simply pointed a bright lamp at a Post-it note stuck to the end of a ruler, so the setup really doesn't need to be fancy at all. You can do it with pretty much anything.
(screenshot: Screenshot from 2022-07-15 15-10-53)

I am currently able to find a square in a picture, find its bounding box, and get the position of the corners. I have made it work with a webcam on my laptop, but I also see it is possible to convert a ROS topic into a video capture for OpenCV. This is what I will be doing on Monday.

I have written the logic to figure out the offset between two bounding boxes (color and depth), and all that is left is to control the camera offsets from the kinova_vision driver (rough sketch below).
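
A rough sketch of that bounding-box offset idea (not the actual implementation; the topic names, the thresholds, and the assumption that the rectangle is the brightest blob in the color image and the nearest blob in the depth image are all placeholders):

# Sketch only: grab one color and one depth frame from ROS, find the bright
# rectangle in each, and report the pixel offset between the two bounding
# boxes. Topic names and thresholds are assumptions -- adapt to your setup.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def bbox_of_largest_blob(mask):
    # [-2] keeps this working with both OpenCV 3 and 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)

rospy.init_node('bbox_offset_check')
bridge = CvBridge()

color_msg = rospy.wait_for_message('/camera/color/image_raw', Image)
depth_msg = rospy.wait_for_message('/camera/depth/image_raw', Image)

gray = cv2.cvtColor(bridge.imgmsg_to_cv2(color_msg, 'bgr8'), cv2.COLOR_BGR2GRAY)
depth = bridge.imgmsg_to_cv2(depth_msg)                      # likely 16UC1
depth8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

_, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)        # lit rectangle
_, near = cv2.threshold(255 - depth8, 200, 255, cv2.THRESH_BINARY)   # nearest object

xc, yc, _, _ = bbox_of_largest_blob(bright)
xd, yd, _, _ = bbox_of_largest_blob(near)
print('pixel offset between color and depth boxes:', xc - xd, yc - yd)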

@IlianaKinova

It's finally here!
https://github.com/IlianaKinova/cv_calibration
This is the first version. Open issues on that repo if you find bugs.
@FalseFeint @tfrasca @heiyumiao

@IlianaKinova

Hi again, I have now made an update that makes the configuration of the streams a lot easier. Since I haven't had the opportunity to install it on a new machine, I don't know if the installation steps are accurate; if you have issues, you can reply here.

If you have any ideas on what should be improved, let me know.

@FalseFeint

FalseFeint commented Dec 6, 2022

Unfortunately, even after using Iliana's code we are still experiencing significant misalignment. @VitaliyKhomko @felixmaisonneuve would it be possible to set up a meeting to discuss this?

@tfrasca @heiyumiao @JingKangSun were you able to resolve this and, if so, how?

@IlianaKinova

Hi.
It should be noted that the way point clouds work with this camera is that the color image is projected onto the 3D geometry, just like pointing a projector at an uneven wall. The calibration tool tries to find the best placement for the color projection on the 3D mesh.

I know that Kinect Fusion does that quite well, but from what I know, there is very little information on how the color texturing is done and it is the result of over 10 years of research.
I found this video which explains a bunch of details: https://www.youtube.com/watch?v=zzb_RQWrt6I&list=LL&index=1
Unfortunately, it does not describe the color texturing/alignment.
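
In terms of the math, that projection boils down to: back-project each depth pixel to a 3D point, transform it into the color camera frame with the calibrated extrinsic, project it with the color intrinsics, and sample the color there. A generic sketch (K_depth, K_color, R, and T are whatever your own calibration produced, nothing Kinova-specific):

# Generic sketch of the "projector" idea: back-project each depth pixel to a
# 3D point, move it into the color camera frame, project it with the color
# intrinsics, and sample the color image at that spot. K_depth, K_color, R
# and T are whatever your own calibration produced.
import numpy as np

def colorize_depth(depth_m, color_img, K_depth, K_color, R, T):
    h, w = depth_m.shape
    v, u = np.nonzero(depth_m > 0)                 # pixels with a valid depth
    z = depth_m[v, u]
    # Back-project to 3D points in the depth camera frame.
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = R @ np.vstack([x, y, z]) + T.reshape(3, 1)   # into the color frame
    # Project into the color image and sample the color there.
    uc = (K_color[0, 0] * pts[0] / pts[2] + K_color[0, 2]).round().astype(int)
    vc = (K_color[1, 1] * pts[1] / pts[2] + K_color[1, 2]).round().astype(int)
    ok = (uc >= 0) & (uc < color_img.shape[1]) & (vc >= 0) & (vc < color_img.shape[0])
    colored = np.zeros((h, w, 3), np.uint8)
    colored[v[ok], u[ok]] = color_img[vc[ok], uc[ok]]
    return colored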

@rustlluk

Hello,
I have been fighting with the same problem. My best low-effort solution (without disassembling the camera and connecting directly to it) is to use the transformation from the Kinova xacro file (https://github.com/Kinovarobotics/ros_kortex/blob/085b694280a77eb7be14ff2af86958ba18b633cf/kortex_description/arms/gen3/7dof/urdf/gen3_macro.xacro#L315-L336) instead of the parameters from the rgbd launch file.

I didn't have time to test it properly with a checkerboard or similar. My use case is to combine the point cloud from the Kinova camera with a point cloud from another RGBD camera (fixed on a table, properly calibrated). With the parameters from the Kinova xacro, the point clouds align properly from any position of the robot (and its camera). So if anybody still has the same problem, this is worth a try.
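
If it helps anyone compare the two options, you can check which camera_link-to-depth extrinsic is actually in effect at runtime; a small sketch (frame names are assumptions, use whatever your launch file or xacro publishes):

# Quick check of the camera_link -> depth frame extrinsic that is actually
# on the TF tree (whether it came from the launch file or the xacro).
# Frame names are assumptions -- use the ones your setup publishes.
import rospy
import tf2_ros

rospy.init_node('check_depth_extrinsic')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

tf = buf.lookup_transform('camera_link', 'camera_depth_frame',
                          rospy.Time(0), rospy.Duration(5.0))
t = tf.transform.translation
print('translation: %.4f %.4f %.4f' % (t.x, t.y, t.z))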
