
In the mono image, how to transform depth to point cloud? #15

Open · bright0072876 opened this issue Sep 18, 2019 · 13 comments


bright0072876 commented Sep 18, 2019

For a monocular image, how do I get the point cloud from the depth image?

mileyan (Owner) commented Sep 18, 2019

I use the pre-trained DORN model. You can download it from https://github.com/hufu6371/DORN .

bright0072876 (Author) commented:

DORN only performs the first step, estimating depth from an RGB image, but it does not generate a point cloud.

mileyan (Owner) commented Sep 26, 2019

You can use my code to convert disparity to point clouds. https://github.com/mileyan/pseudo_lidar#convert-the-disparities-to-point-clouds

mileyan (Owner) commented Sep 28, 2019

Update: please add --is_depth to the command.
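
For reference, the conversion the linked script performs can be sketched as follows. This is a minimal illustration assuming a pinhole camera model, not the repo's exact code; the function name project_to_points and the 0.54 m default baseline are made up for the example.

import numpy as np

# Minimal sketch (illustrative, not this repo's code): back-project a disparity
# or depth map into a point cloud in the camera frame using pinhole intrinsics.
def project_to_points(disp_or_depth, fu, fv, cu, cv, baseline=0.54, is_depth=False):
    if is_depth:
        depth = disp_or_depth.astype(np.float32)   # input is already metric depth (--is_depth case)
    else:
        # stereo relation: depth = focal_length * baseline / disparity
        depth = fu * baseline / np.clip(disp_or_depth.astype(np.float32), 1e-6, None)
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))   # per-pixel image coordinates
    x = (u - cu) * depth / fu                               # back-project through the pinhole model
    y = (v - cv) * depth / fv
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # (H*W, 3) points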


DeriZSY commented Oct 9, 2019

Hi, do we need to do any preprocessing before using the depth generated by DORN to generate the point cloud?

I used the depth generated with the DORN pretrained model, using the code here: https://github.com/hufu6371/DORN/blob/master/demo_kitti.py.

Judging from the code, the depth is saved to .png, and the result looks fine.

import os
import cv2
import numpy as np

# Predict depth and store it as a 16-bit PNG holding metres * 256 (KITTI convention).
depth = depth_prediction(args.filename)
depth = depth * 256.0
depth = depth.astype(np.uint16)

# Derive the image id from the input filename (strip the directory and file extension).
img_id = args.filename.split('/')
img_id = img_id[len(img_id) - 1]
img_id = img_id[0:len(img_id) - 4]

if not os.path.exists(args.outputroot):
    os.makedirs(args.outputroot)
cv2.imwrite(str(args.outputroot + '/' + img_id + '_pred.png'), depth)

[Attached image: 0000000013_depth_pred, the predicted depth map]

However, the point cloud generated with the provided code is obviously wrong. Do I need to do some preprocessing on the depth (for example, dividing by 256) before using it?


DeriZSY commented Oct 9, 2019

I solved the problem described above and successfully generated a valid point cloud from the depth produced by DORN.
Some tips:

  1. You must use the Caffe provided in the DORN repository instead of a newer version; otherwise you may encounter errors when loading the model prototxt.
  2. If you choose to generate depth by modifying the KITTI demo code (which I think is the most convenient way), you need to adjust the data type of the depth as indicated in the KITTI depth devkit by simply adding depth = disp_map.astype(np.float) / 256 before projecting the depth to a point cloud (see the sketch after this list).
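
As an illustration of tip 2, a minimal sketch of reading a saved prediction back into metric depth. load_depth_png is a hypothetical helper (not DORN's or this repo's code), assuming the uint16 metres*256 PNG convention discussed above:

import numpy as np
from PIL import Image

# Hypothetical helper: read a prediction stored as a 16-bit PNG holding
# metres * 256 and recover metric depth (KITTI depth devkit convention).
def load_depth_png(path):
    disp_map = np.array(Image.open(path), dtype=np.uint16)
    depth = disp_map.astype(np.float32) / 256.0   # back to metres
    depth[disp_map == 0] = 0.0                    # zero marks invalid pixels
    return depth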

mileyan (Owner) commented Oct 10, 2019

Thanks so much. I have updated the code.

bright0072876 (Author) commented:

Hi, DeriZSY. Can I just use the depth image to generate the point clouds, or do I need to predict the disparities first? I have already generated the depth image using the DORN Caffe demo code.


DeriZSY commented Oct 13, 2019

Hi, DeriZSY. Can I just use the depth image to generate the point clouds, or do I need to predict the disparities first? I have already generated the depth image using the DORN Caffe demo code.

Use the depth directly. Note the is_depth flag (the --is_depth option mentioned above) in the lidar-generation code in this repo.
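
Tying the two sketches above together, a purely illustrative usage. The helper names are the hypothetical ones introduced earlier in this thread (not this repo's API), the file path is a placeholder, and the intrinsics are example values that must come from your own calibration file:

# Illustrative only: saved depth prediction -> point cloud, treating the
# input as metric depth rather than disparity (the --is_depth case).
fu, fv, cu, cv = 721.5, 721.5, 609.6, 172.9   # placeholder KITTI-like intrinsics
depth = load_depth_png('predict_disparity/000013_pred.png')   # placeholder path
points = project_to_points(depth, fu, fv, cu, cv, is_depth=True)
print(points.shape)   # (H*W, 3) points in the camera frame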

bright0072876 (Author) commented:

So I should first move the depth images to the predict_disparity folder?


DeriZSY commented Oct 13, 2019

So I should first move the depth images to the predict_disparity folder?

Please read the code yourself... then you'll get all the answers... a brain is a good thing.

bright0072876 (Author) commented:

When generating the point cloud from a mono depth image, each image needs a camera calibration file. Ordinary images outside of KITTI have no calibration file, so the point cloud cannot be generated.

mileyan (Owner) commented Oct 23, 2019

When generating the point cloud from a mono depth image, each image needs a camera calibration file. Ordinary images outside of KITTI have no calibration file, so the point cloud cannot be generated.

Yes, you need calibration parameters when you generate the point cloud.
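
For KITTI images those parameters come from the per-image calib file; for other images you need to obtain your camera's intrinsics yourself. A minimal sketch of pulling the pinhole intrinsics out of a KITTI object calib file (read_kitti_intrinsics is an assumed helper, not part of this repo):

import numpy as np

# Assumed helper: read the left colour camera projection matrix P2 from a
# KITTI object calib file and return the pinhole intrinsics used above.
def read_kitti_intrinsics(calib_path):
    with open(calib_path) as f:
        entries = {line.split(':')[0]: line.split(':', 1)[1] for line in f if ':' in line}
    P2 = np.array(entries['P2'].split(), dtype=np.float32).reshape(3, 4)
    fu, fv = P2[0, 0], P2[1, 1]
    cu, cv = P2[0, 2], P2[1, 2]
    return fu, fv, cu, cv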
