I trained many times with batch size = 4 on an RTX 2080 Ti, and the best result I got is below. I don't know the reason; has anyone else run into the same issue?
Car AP@0.70, 0.70, 0.70:
bbox AP: 84.6559, 76.2390, 69.5820
bev  AP: 30.3075, 22.8254, 20.0140
3d   AP: 22.3193, 17.3183, 15.5445
aos  AP: 83.87, 74.73, 67.76
Car AP_R40@0.70, 0.70, 0.70:
bbox AP: 87.6583, 77.4353, 70.4441
bev  AP: 26.2874, 18.2709, 14.9717
3d   AP: 17.4402, 12.1115, 9.9476
aos  AP: 86.76, 75.79, 68.42
Car AP@0.70, 0.50, 0.50:
bbox AP: 84.6559, 76.2390, 69.5820
bev  AP: 60.3748, 45.3538, 39.6286
3d   AP: 55.1636, 40.5460, 36.5379
aos  AP: 83.87, 74.73, 67.76
Car AP_R40@0.70, 0.50, 0.50:
bbox AP: 87.6583, 77.4353, 70.4441
bev  AP: 60.5232, 43.0466, 36.9430
3d   AP: 54.3911, 38.4424, 33.1688
aos  AP: 86.76, 75.79, 68.42
Sorry for the delay. The drop in results caused by the small batch size is expected; training on an A100 or 3090, which can fit a larger batch, gives better results.
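If a larger GPU is not available, one common workaround is gradient accumulation: run several small micro-batches, sum their (scaled) gradients, and apply a single optimizer step, so the update approximates a larger batch. Below is a minimal, hypothetical PyTorch sketch; the model, data, and hyperparameters are placeholders and not taken from this project's training code.

```python
# Hypothetical sketch: approximating an effective batch size of 16 on a GPU
# that only fits batch size 4, via gradient accumulation.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 1)  # stand-in for the detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

accum_steps = 4  # 4 micro-batches of 4 samples ~ effective batch size 16
data = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(data, start=1):
    # Scale the loss so accumulated gradients average over the micro-batches.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate in each parameter's .grad
    if step % accum_steps == 0:
        optimizer.step()       # one update per 16 samples
        optimizer.zero_grad()  # reset for the next accumulation window
```

Note that this only emulates the gradient statistics of a larger batch; BatchNorm layers still see batch-size-4 statistics, so it may not fully close the gap reported above.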