How to get the depth picture #5
You should first calibrate your depth camera to obtain its intrinsic and extrinsic parameters. Then, please refer to the Registration class in pylibfreenect2 (https://github.com/r9y9/pylibfreenect2) as an example.
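For readers without a Kinect-style device (where pylibfreenect2's `Registration` handles this internally), the underlying idea can be sketched with plain NumPy. This is a minimal, illustrative implementation assuming a pinhole model: `K_d`/`K_c` are the depth and color intrinsics from calibration, and `R`, `t` are the depth-to-color extrinsics. All names are assumptions for illustration, not part of the project's code.

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t):
    """Reproject a depth map into the color camera's image plane.

    depth: (H, W) array of depths in metres (0 = invalid)
    K_d, K_c: 3x3 intrinsic matrices of the depth and color cameras
    R, t: rotation (3x3) and translation (3,) from depth to color frame
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0

    # Back-project each depth pixel to a 3D point in the depth camera frame.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    pts = np.linalg.inv(K_d) @ pix * z

    # Transform into the color camera frame and project with K_c.
    pts_c = R @ pts + t[:, None]
    proj = K_c @ pts_c
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)

    # Scatter depths into a registered map aligned with the color image.
    registered = np.zeros_like(depth)
    ok = valid & (proj[2] > 0) & (uc >= 0) & (uc < W) & (vc >= 0) & (vc < H)
    registered[vc[ok], uc[ok]] = z[ok]
    return registered
```

A real pipeline would also handle occlusions (z-buffering when several depth pixels land on the same color pixel) and use the color image's resolution for the output; this sketch keeps the two frames the same size for brevity.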
Thanks. I read your paper carefully. However, I didn't find the three discriminators (Fdb, Fcb, Fdf) in the code. Are there detailed descriptions of these? In addition, I would like to ask whether there is a plan to open-source the training code.
Hi, thanks for the amazing work!
Thank you! Actually, in the live demos we directly cut out the body part using two thresholds on the depth map. This may lead to bad results around the feet (the depth map also performs badly in this area). If you do not care about this area, that's enough.
As we cannot distribute our dataset for commercial reasons, we think it would be hard to reproduce a similar dataset (which requires hundreds of 3D human models). So we have no plan to release the training code.
Hi! Thank you very much for your excellent work. I've got great reconstruction results. However, I have a question: how do I get the depth map corresponding to the RGB picture?