
I got different results on the evaluation set. #12

Open
BinBCheng opened this issue Oct 10, 2023 · 1 comment

Comments

@BinBCheng

Hello, thanks for your wonderful work. I got the pretrained model and ran train.py to reproduce dynamic-multiframe-depth, but my results differ from yours (resnet18-pretrained):

        Abs_rel  Sq_rel  RMSE   RMSE_log  a1     a2     a3
Paper   0.043    0.151   2.113  0.073     0.975  0.996  0.999
Own     0.126    0.893   4.552  0.190     0.833  0.940  0.981
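For reference, these are the standard Eigen-style depth metrics; below is a minimal numpy sketch of how they are usually defined (the exact pixel masking and depth capping used by this repo's evaluation script are assumptions on my side, not taken from its code):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard depth metrics: abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3.

    gt, pred: 1-D arrays of valid ground-truth / predicted depths in meters.
    Invalid-pixel masking and depth capping are assumed to happen before
    this call and may differ from the repo's actual evaluation script.
    """
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```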

My environment: torch==1.10.1+cu113, torchvision==0.11.2+cu113.

All metrics are quite different from those in the paper. I did not change anything in “trian_my_resnet18.json” except replacing “n_gpus=8” with “n_gpus=3”.
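(In case it matters: going from 8 GPUs to 3 shrinks the effective batch size, and a smaller effective batch often calls for rescaling the learning rate, e.g. the linear scaling rule of Goyal et al., 2017. A minimal sketch below; base_lr and batch_per_gpu are hypothetical placeholders rather than values from “trian_my_resnet18.json”, and I have not verified whether this repo rescales automatically.)

```python
# Linear learning-rate scaling: keep lr proportional to the effective
# batch size. base_lr and batch_per_gpu are hypothetical placeholders,
# not values taken from "trian_my_resnet18.json".
base_lr = 1e-4
batch_per_gpu = 8
eff_batch_8gpu = 8 * batch_per_gpu   # paper setting: n_gpus=8
eff_batch_3gpu = 3 * batch_per_gpu   # my setting: n_gpus=3

scaled_lr = base_lr * eff_batch_3gpu / eff_batch_8gpu
print(f"lr rescaled for 3 GPUs: {scaled_lr:.2e}")  # 3/8 of the base lr
```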

  1. I want to know why my results are not good.
  2. Besides the settings in “trian_my_resnet18.json”, what details do I need to pay attention to in order to reproduce the results?

Looking forward to your reply, thank you.
Best wishes!

@ruili3 (Owner) commented Mar 29, 2024

Hi,
Thanks for your attention to our work. The results look weird; can you double-check whether the scores come from the dynamic-area evaluation or the full-image evaluation?
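To illustrate the distinction: dynamic-area evaluation restricts the metrics to moving-object pixels, while full-image evaluation uses all valid pixels. A hypothetical sketch reusing the depth_metrics() function from earlier in this thread; the synthetic data, mask source, and 80 m cap are assumptions, not our actual evaluation code.

```python
import numpy as np

# Synthetic stand-ins; in practice these come from the dataset and model.
rng = np.random.default_rng(0)
gt_depth = rng.uniform(1.0, 80.0, size=(192, 640))
pred_depth = gt_depth * rng.uniform(0.9, 1.1, size=gt_depth.shape)
dynamic_mask = np.zeros(gt_depth.shape, dtype=bool)
dynamic_mask[80:120, 200:320] = True  # fake moving-object region

valid = (gt_depth > 0) & (gt_depth < 80.0)   # full-image evaluation
full_scores = depth_metrics(gt_depth[valid], pred_depth[valid])

dyn = valid & dynamic_mask                   # dynamic-area evaluation
dyn_scores = depth_metrics(gt_depth[dyn], pred_depth[dyn])
```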
