RGH-NitinVijay changed the title from "Generating Pseudo Lidar Image from Depth Image" to "Camera Calibration for generating Pseudo Lidar Image from Depth Image" on Apr 30, 2020.
Hi @RGH-NitinVijay, in my understanding, if you use your monocular images for self-supervised learning, you don't have to generate a calibration for every frame. You can use the camera intrinsics to generate the point cloud in the camera coordinate frame. However, if you want to project the depth map into the world coordinate frame, you will need the camera-to-world transformation matrix, which has to be estimated frame by frame.
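To illustrate the point above: with a single set of intrinsics from a one-time checkerboard calibration, each depth map can be back-projected to a point cloud in the camera frame using the pinhole model. This is a minimal sketch, not the repo's actual script; the intrinsic values below are placeholders for whatever your calibration produced.

```python
import numpy as np

# Hypothetical intrinsics from a one-time checkerboard calibration
# (fx, fy: focal lengths in pixels; cx, cy: principal point).
fx, fy = 721.5, 721.5
cx, cy = 320.0, 240.0

def depth_to_points(depth):
    """Back-project a depth map (H x W, in metres) to an N x 3 point
    cloud in the camera coordinate frame via the pinhole model:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, shape (h, w)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a synthetic flat plane 2 m in front of the camera
pts = depth_to_points(np.full((480, 640), 2.0))
print(pts.shape)  # (307200, 3)
```

Note that the same `fx, fy, cx, cy` are reused for every frame; only projecting into a world frame would require per-frame extrinsics, as described above.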
Hello,
Firstly, thank you for the amazing work with this repo.
I have a custom image from a monocular camera for which I generated the depth map using the following repo:
https://github.com/nianticlabs/monodepth2
The depth results look good, but now I'm trying to generate the pseudo-lidar points for that image. I understand I need camera calibration parameters for each image, so I calibrated my camera using the checkerboard calibration technique. Based on the KITTI and NYU datasets, I observed that we need a calibration file for every single image. Is my assumption correct? If so, I'm not sure how we would generate such a calibration file for every single image. My understanding was that I would calibrate the camera once and obtain one set of calibration parameters for that camera. I was then planning to use them, along with the previously obtained depth image, to generate the pseudo-lidar image.
Let me know if my understanding is not correct. Thank you.