Some issues about dataset mismatch #2
Hi, also, can you confirm that you are using my code? My method does not produce that many predictions for aeroplanes. Thanks again for your comment.
Thank you for your response. After matching line by line, I found an inconsistency. PS: In the standard ORE evaluation, using all of the unknown class names for zero-shot testing does not reproduce the results reported in the paper; it reaches only 59 U-Recall.
Thank you for your question. As I mentioned before, I followed the experimental setup of the SOTA method from 2024; please refer to https://github.com/feifeiobama/OrthogonalDet. An interesting fact is that the data split in the ORE paper is closer to my setting. I hope this addresses your concerns.
Hi, I found some data mismatch issues in your data validation code. Specifically, I printed the annotation entries in MOWODB and found a mismatch here:
The printing code is as follows:
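The original snippet was attached as a screenshot and is not reproduced here. A minimal sketch of this kind of check, assuming VOC-style XML annotations (the inline sample below is a hypothetical stand-in for a real MOWODB annotation file), might look like:

```python
# Sketch: count the category names appearing in a VOC-style annotation.
# SAMPLE_ANNOTATION is a hypothetical stand-in, not a real MOWODB file.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE_ANNOTATION = """<annotation>
  <filename>000001.jpg</filename>
  <object><name>aeroplane</name></object>
  <object><name>person</name></object>
  <object><name>person</name></object>
</annotation>"""

def count_categories(xml_text):
    """Return a Counter of category names in one annotation."""
    root = ET.fromstring(xml_text)
    return Counter(obj.findtext("name") for obj in root.iter("object"))

counts = count_categories(SAMPLE_ANNOTATION)
for name, n in sorted(counts.items()):
    print(name, n)
```

In practice this would be run over every annotation file in the split, and the resulting category set compared against the class list configured in the validation code.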
However, the number of dataset categories in the original ORE implementation is:
These unknown-class counts also appear in ORE's issues (1) and (2), and in the log.txt from the Google Drive link the author provided. In fact, the detectron2 evaluation tool also prints the relevant information:
Perhaps the relevant code and settings should be reviewed again to ensure uniformity. Thanks again for your influential work.
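One way to review the settings for uniformity is to diff the class lists directly. A minimal sketch (the two lists below are illustrative placeholders, not the actual splits from either codebase):

```python
# Sketch: compare two class-name lists, e.g. the list hard-coded in this
# repo's validation code vs. the one in the original ORE implementation.
# These lists are hypothetical placeholders for illustration only.
repo_classes = ["aeroplane", "bicycle", "bird", "boat", "bottle"]
ore_classes = ["aeroplane", "bicycle", "bird", "boat", "bus"]

only_in_repo = sorted(set(repo_classes) - set(ore_classes))
only_in_ore = sorted(set(ore_classes) - set(repo_classes))

print("only in this repo:", only_in_repo)
print("only in ORE      :", only_in_ore)
```

If both lists come from the same split definition, both diffs should print empty; any leftover names point at the mismatched entries.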