
Camera Calibration for generating Pseudo Lidar Image from Depth Image #37

Open
RGH-NitinVijay opened this issue Apr 30, 2020 · 2 comments


RGH-NitinVijay commented Apr 30, 2020

Hello,

Firstly, thank you for the amazing work with this repo.

I have a custom image from a monocular camera for which I generated the depth map using the following repo:
https://github.com/nianticlabs/monodepth2

The depth results look good, but now I'm trying to generate the pseudo-lidar points for that image. I understand I need camera calibration parameters, so I calibrated my camera using the checkerboard calibration technique. Based on the KITTI and NYU datasets, I observed that a calibration file seems to be needed for every single image. Is my assumption correct? If so, I'm not sure how such a calibration file would be generated for every single image. My understanding was that I would calibrate the camera once, obtaining one set of calibration parameters for that camera, and then use those parameters together with the previously obtained depth map to generate the pseudo-lidar image.
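For reference, here is a minimal sketch of the one-time checkerboard calibration I ran with OpenCV (the board dimensions, square size, and image paths are placeholders for my setup):

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9x6 inner corners; square size in meters (placeholder values).
PATTERN = (9, 6)
SQUARE = 0.025

# 3D coordinates of the board corners in the board's own coordinate frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# A single calibration run yields one intrinsic matrix K for the camera.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("Intrinsics K:\n", K)
```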

Let me know if my understanding is not correct. Thank you.

RGH-NitinVijay changed the title from "Generating Pseudo Lidar Image from Depth Image" to "Camera Calibration for generating Pseudo Lidar Image from Depth Image" on Apr 30, 2020
RGH-NitinVijay (Author) commented

Hello,

I would appreciate it if anyone could share their thoughts on my questions above. Thank you!

mileyan (Owner) commented Jun 29, 2020

Hi @RGH-NitinVijay, in my understanding, if you use your monocular images for self-supervised learning, you don't have to generate calibrations for every frame. You can just use the camera intrinsics to generate the point cloud in the camera coordinate frame. However, if you want to project the depth map into world coordinates, you will need the camera-to-world transformation matrix, which has to be estimated frame by frame.

Reference: http://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf
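For example, here is a minimal sketch of that back-projection, assuming a standard pinhole model, a metric depth map, and a single intrinsic matrix K (the function names are just illustrative):

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a metric depth map (H, W) into an (N, 3) point cloud
    in the camera coordinate frame using the pinhole model."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def camera_to_world(points, T_cw):
    """Optionally move the cloud into world coordinates with a 4x4
    camera-to-world transform, which must be estimated per frame."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    return (homog @ T_cw.T)[:, :3]
```

One caveat: monodepth2's monocular models predict depth only up to an unknown scale, so the resulting point cloud will be in arbitrary units unless you rescale it (e.g., with the median ground-truth scaling used in their evaluation).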
