
about eval.py #86

Open
imotas30 opened this issue Apr 23, 2022 · 1 comment

@imotas30

Hello, I can't reproduce the reported results using the model you provide. For example, when I evaluate my own UAV123 results, the score is about 4% lower than the one in readme.markdown. Even when I evaluate the UAV123 results you provide, the score is still about 2% lower than in readme.markdown.
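
For reference, UAV123 success scores are conventionally reported as the AUC of the one-pass-evaluation (OPE) success plot: the fraction of frames whose IoU exceeds a threshold, averaged over thresholds in [0, 1]. Below is a minimal sketch of that computation; the names (`iou`, `success_auc`) are illustrative and not taken from eval.py, and boxes are assumed to be in [x, y, w, h] format. Differences in the threshold grid or in how absent-target frames are handled are one possible source of gaps like the one described above.

```python
import numpy as np

def iou(gt, pred):
    """Per-frame IoU between ground-truth and predicted boxes, each shape (N, 4)."""
    gt, pred = np.asarray(gt, float), np.asarray(pred, float)
    x1 = np.maximum(gt[:, 0], pred[:, 0])
    y1 = np.maximum(gt[:, 1], pred[:, 1])
    x2 = np.minimum(gt[:, 0] + gt[:, 2], pred[:, 0] + pred[:, 2])
    y2 = np.minimum(gt[:, 1] + gt[:, 3], pred[:, 1] + pred[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = gt[:, 2] * gt[:, 3] + pred[:, 2] * pred[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_auc(gt, pred, n_bins=21):
    """AUC of the success plot: fraction of frames with IoU above each threshold,
    averaged over thresholds in [0, 1]. OTB/UAV123 toolkits conventionally use
    21 thresholds (0, 0.05, ..., 1); a different grid shifts the score slightly."""
    overlaps = iou(gt, pred)
    thresholds = np.linspace(0, 1, n_bins)
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))
```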

@dhy1222

dhy1222 commented Mar 29, 2023

Have you figured this out? I'm running into the same issue.
