Not able to reconstruct using D435i dataset #3

Open
aswingururaj opened this issue Jul 12, 2019 · 7 comments

@aswingururaj

Hi @Eman7C7. I recorded a rosbag from my D435i camera with color and aligned depth frames at 30 Hz each, and wrote a Python script to generate depth.txt, rgb.txt, and the rgb and depth folders. When I run the code on my dataset it does not reconstruct properly: I get a rough reconstruction, but it is not as good as with your official datasets.
[Screenshot from 2019-07-12 14-56-36]
I am getting something like this. What could be the possible reason?
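
For reference, below is a minimal sketch of the kind of extraction script described above, assuming the ROS 1 `rosbag` and `cv_bridge` Python APIs. The bag filename and the topic names (`/camera/color/image_raw`, `/camera/aligned_depth_to_color/image_raw`) are assumptions based on the default realsense2_camera topics; adjust them to your recording. It writes rgb/ and depth/ folders plus rgb.txt and depth.txt in the TUM RGB-D list format.

```python
# Hypothetical rosbag -> TUM RGB-D layout converter (sketch only).
import os
import cv2
import rosbag
from cv_bridge import CvBridge

BAG = "d435i.bag"                                         # assumed input bag
RGB_TOPIC = "/camera/color/image_raw"                     # assumed topic names
DEPTH_TOPIC = "/camera/aligned_depth_to_color/image_raw"

os.makedirs("rgb", exist_ok=True)
os.makedirs("depth", exist_ok=True)
bridge = CvBridge()

with rosbag.Bag(BAG) as bag, \
     open("rgb.txt", "w") as rgb_list, \
     open("depth.txt", "w") as depth_list:
    for topic, msg, t in bag.read_messages(topics=[RGB_TOPIC, DEPTH_TOPIC]):
        stamp = "%.6f" % msg.header.stamp.to_sec()
        if topic == RGB_TOPIC:
            img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            path = "rgb/%s.png" % stamp
            cv2.imwrite(path, img)
            rgb_list.write("%s %s\n" % (stamp, path))
        else:
            # Aligned depth is 16-bit and in millimetres by default on RealSense;
            # check which depth scale the reconstruction code expects.
            img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
            path = "depth/%s.png" % stamp
            cv2.imwrite(path, img)
            depth_list.write("%s %s\n" % (stamp, path))
```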

@aswingururaj
Author

Hi,
I think there is some problem with the depth data that I am feeding in. I saw in your paper that you generated virtual depth from n = 10 frames when testing on the TUM RGB-D datasets, and the description says this implementation lacks the features needed to deal with invalid measurements. I would like to generate virtual depths from my dataset to reconstruct my environment in 3D. It would be really helpful if you could explain how to generate the virtual depths.
Thanks.

@Eman7C7
Member

Eman7C7 commented Jul 23, 2019

Hi,
Sorry for the late response.
First of all, let's assume your depth does NOT contain invalid values. Are you sure it is registered to the RGB frame? There is a flag in the driver configuration that you must enable (by default it is disabled).
The virtual depth part of the algorithm is not necessary as long as you don't record a scene with very far depth values.
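
For context, in the realsense2_camera ROS driver the registered depth stream is published when the align_depth launch argument is enabled (this is what produces the aligned_depth_to_color topics); the exact flag name may vary between driver versions. Below is a minimal pyrealsense2 sketch of the same registration done in software with rs.align, for anyone capturing frames outside ROS. The stream resolutions are assumptions.

```python
# Depth-to-color registration with the RealSense SDK (pyrealsense2) -- a sketch.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # assumed resolution
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)   # register depth into the color viewpoint
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth = np.asanyarray(aligned.get_depth_frame().get_data())  # uint16, millimetres
    color = np.asanyarray(aligned.get_color_frame().get_data())  # H x W x 3, BGR
finally:
    pipeline.stop()
```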

@aswingururaj
Author

Hi,
Thanks for your response. My depth is registered to the RGB frame: I used the align option to make sure that both the RGB and depth images are from the same viewpoint before recording the rosbag. I have also been running on my dataset with your default parameters, changing only the intrinsics of the RGB-D sensor. Will that affect the result a lot? If so, how should I tune them? Or do you think there could be some other issue?
Thanks.
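
On the intrinsics point: once depth is aligned to color, the intrinsics that go into the reconstruction should be the color camera's. A small pyrealsense2 sketch for reading them follows; the stream resolution is an assumption, so use whatever you recorded at.

```python
# Query the color intrinsics that the aligned depth shares -- sketch only.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # assumed resolution
profile = pipeline.start(config)
try:
    intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
    print("fx=%f fy=%f cx=%f cy=%f" % (intr.fx, intr.fy, intr.ppx, intr.ppy))
finally:
    pipeline.stop()
```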

@vbhutani

vbhutani commented Jan 2, 2020

Hi,
@aswingururaj, I am facing the same issue, and I also aligned the RGB and depth images before putting them in the rosbag. Have you been able to solve the problem?

@aswingururaj
Author

Hi,
@vbhutani I was not able to find a solution to the problem.

@YJZLuckyBoy

Hi,
@aswingururaj Are you also having the problem that the TSDF cannot be constructed? At runtime, all of my camera poses are identity matrices. I think it's because the first frame did not build the TSDF.

@aswingururaj
Copy link
Author

Hi,
@zyjluck, I don't think that was the issue in my case. As you can see from the picture I posted, I was not getting identity matrices, but just a bad reconstruction, which I thought was because of improper registration.
