I have run into a problem evaluating the pre-trained ShAPO model you provided in the repo here.
I could not find an evaluation script in your ShAPO repository, but I found a similar issue in your CenterSnap repo here. The author of that issue describes problems finding the predicted class labels and sizes, and in one of your replies you provide this helper function and ask them to use the mask_rcnn results from the object-deformnet repository.
I have done everything you asked for in that issue. However, I used your pre-trained ShAPO model (without post-optimization) for evaluation instead of training one of my own from scratch, and I cannot reproduce the numbers you report in the ShAPO paper (assuming your pre-trained model performs as well as CenterSnap's numbers). I therefore have the following questions:
Is the pre-trained ShAPO model you provided not the optimal one but an intermediate one, which is why I cannot reproduce the numbers (without post-optimization)? Moreover, does using your pre-trained ShAPO model without post-optimization give numbers similar to CenterSnap's?
How can one determine `f_size` in the `result['pred_scales'] = f_size` statement you wrote in that issue? I calculate `f_size` from the point cloud predicted from the shape latents, using this line of code from object-deformnet. As I understand it, this `f_size` is important for computing the 3D IoU numbers you report in the ShAPO paper.
To clear up this confusion, is there any chance you could share the evaluation script you used to generate the numbers with the `compute_mAP` function, as you mentioned in that GitHub issue? (A sketch of how that function is usually driven follows below.)
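For reference, here is a minimal sketch of how `compute_mAP` from object-deformnet is typically invoked; the import path, result-dict keys, file paths, and threshold grids below follow the NOCS-style convention and are assumptions that should be checked against your checkout:

```python
# Sketch only: assembling per-image results for object-deformnet's
# compute_mAP. File paths and threshold grids are assumptions.
import glob
import pickle

from lib.utils import compute_mAP, plot_mAP  # object-deformnet

pred_results = []
for path in sorted(glob.glob('results/val/results_*.pkl')):
    with open(path, 'rb') as f:
        result = pickle.load(f)
    # Each dict is expected to carry the NOCS-style keys:
    #   gt_class_ids, gt_RTs, gt_scales, gt_handle_visibility,
    #   pred_class_ids, pred_scores, pred_RTs, pred_scales
    pred_results.append(result)

degree_thres_list = list(range(0, 61))          # rotation thresholds (deg)
shift_thres_list = [i / 2 for i in range(21)]   # translation thresholds (cm)
iou_thres_list = [i / 100 for i in range(101)]  # 3D IoU thresholds

iou_aps, pose_aps, iou_acc, pose_acc = compute_mAP(
    pred_results, 'results/eval',
    degree_thres_list, shift_thres_list, iou_thres_list,
    iou_pose_thres=0.1, use_matches_for_pose=True)
```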
Thank you,
Sandeep
Thanks for your email and for trying out our codebase. Unfortunately, we cannot share the complete evaluation script at this point, but I can help as much as possible so you can reproduce the numbers in the paper:
The pre-trained model we provide in the codebase is only for the demo and may be sub-optimal on synthetic scenes. I would highly recommend training your own model using the instructions in our repo to get the best checkpoints, which you can then evaluate quantitatively.
That's correct: ShAPO trained from scratch without post-optimization should give numbers close to CenterSnap's.
Your understanding is correct. We use the following to get the predicted sizes, where `pcd_dsdf_actual` is the point cloud obtained from the SDF latent codes, as here:
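The snippet itself did not survive in this thread; below is a minimal reconstruction of the size computation, following the object-deformnet convention the thread points to. The variable name `pcd_dsdf_actual` comes from the issue; decoding it from the latent code is assumed to happen upstream, and the random points in the usage example are a stand-in:

```python
import numpy as np

def predicted_size(pcd_dsdf_actual: np.ndarray) -> np.ndarray:
    """Per-axis size of the predicted shape.

    pcd_dsdf_actual: (N, 3) point cloud decoded from the predicted SDF
    latent code, in the canonical object frame. The size is taken as
    twice the maximum absolute extent along each axis.
    """
    return 2 * np.amax(np.abs(pcd_dsdf_actual), axis=0)

# Usage, mirroring the assignment from the linked issue
# (random points stand in for the decoded point cloud):
pcd_dsdf_actual = np.random.uniform(-0.5, 0.5, (2048, 3))
result = {}
result['pred_scales'] = predicted_size(pcd_dsdf_actual)
```

This per-axis extent is what feeds the 3D IoU computation in the NOCS-style evaluation, which is why `f_size` directly affects the mAP numbers.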