about results in IMC2022 #82

Open
KN-Zhang opened this issue Oct 24, 2024 · 5 comments

Comments

@KN-Zhang

Hi, I’m running experiments on the IMC2022 competition on Kaggle. However, the final metric I receive is the 'score', which represents the average accuracy over a range of thresholds (as shown in the screenshot below). How can I obtain the mAA@10 value reported in Table 3?
[screenshot: Kaggle submission score]

@Parskatt
Owner

The notebook has been public for a while now; you can see the approach there: https://www.kaggle.com/code/johanedstedt/tiny-roma-imc2022?scriptVersionId=130698232

@KN-Zhang
Author

Yes, I’ve reviewed the notebook, but I can only generate a submission.csv file that includes the three sample image pairs. The results on the full hidden test set of 10,000 image pairs are not accessible.

@Parskatt
Owner

If you are able to generate output for 3 pairs, you can just change it to not dryrun and submit it, no?
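(A minimal sketch of what that dry-run toggle might look like; the variable and helper names below are assumptions for illustration, not the actual tiny-roma-imc2022 code. The output sketched here follows the IMC2022 submission.csv layout of a sample id plus nine space-separated fundamental-matrix entries.)

```python
import numpy as np

# Hypothetical sketch of the notebook's dry-run toggle; names are assumed,
# not taken from the actual tiny-roma-imc2022 notebook.
DRY_RUN = False  # True: process only the 3 public sample pairs; False: the full hidden test set

def estimate_fundamental_matrix(pair_id: str) -> np.ndarray:
    """Placeholder for the matching + RANSAC step that produces a 3x3 F matrix."""
    return np.eye(3)

sample_pairs = ["pair_0", "pair_1", "pair_2"]  # the 3 visible sample pairs
all_test_pairs = sample_pairs                  # on Kaggle this would be the hidden test set

pairs = sample_pairs if DRY_RUN else all_test_pairs

with open("submission.csv", "w") as f:
    f.write("sample_id,fundamental_matrix\n")
    for pair_id in pairs:
        F = estimate_fundamental_matrix(pair_id)
        f.write(pair_id + "," + " ".join(f"{v:.6e}" for v in F.flatten()) + "\n")
```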

@KN-Zhang
Author

> If you are able to generate output for 3 pairs, you can just change it to not dryrun and submit it, no?

Thanks, I am running it. But I just want to confirm that the output score is the same metric as mAA@10 in Table 3. I see the public score of the notebook you mentioned is 0.88034, which matches the mAA@10 in Table 3. :)

@Parskatt
Owner

> > If you are able to generate output for 3 pairs, you can just change it to not dryrun and submit it, no?
>
> Thanks, I am running it. But I just want to confirm that the output score is the same metric as mAA@10 in Table 3. I see the public score of the notebook you mentioned is 0.88034, which matches the mAA@10 in Table 3. :)

Yes, that's the same metric. There is more info on how it's computed on Kaggle. :)
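(For reference, a minimal sketch of how an mAA-style score over a threshold grid can be computed. It assumes per-pair pose errors have already been recovered from the submitted fundamental matrices; the threshold values below are illustrative, not necessarily the exact grid used by the IMC2022 scorer.)

```python
import numpy as np

def mean_average_accuracy(rot_err_deg, trans_err_m, rot_ths, trans_ths):
    """mAA sketch: accuracy at each (rotation, translation) threshold pair,
    averaged over all threshold pairs."""
    rot_err_deg = np.asarray(rot_err_deg)
    trans_err_m = np.asarray(trans_err_m)
    accs = []
    for rt, tt in zip(rot_ths, trans_ths):
        # A pair counts as correct only if both errors are under the thresholds.
        correct = (rot_err_deg < rt) & (trans_err_m < tt)
        accs.append(correct.mean())
    return float(np.mean(accs))

# Example with 10 threshold pairs (rotation 1..10 degrees; translation grid assumed).
rot_ths = np.arange(1, 11, dtype=float)
trans_ths = np.linspace(0.2, 5.0, 10)
rot_err = np.random.uniform(0, 20, size=100)    # dummy errors for 100 image pairs
trans_err = np.random.uniform(0, 10, size=100)
print(mean_average_accuracy(rot_err, trans_err, rot_ths, trans_ths))
```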
