Low results for 3090? #8
Batch size is kept smaller, in line with the batch size used on the M1 Max, for an apples-to-apples comparison. I did note in the results that the 3090 would perform better at higher batch sizes.
But the batch size for the 3090 is actually lower than for the M1?
I had noted in the readme that I have yet to update the batch size for the 3090. Previously I had run both at the same batch size, but after comments from other people I managed to improve the M1 Max performance (very marginally) by increasing the batch size. I have yet to re-run the 3090 benchmarks at the larger batch size, but I do expect it to improve.
Cool! I think having best-performing results for both platforms would give people an understanding of what can be achieved in terms of peak performance.
I have updated the readme. I can't get his level of performance even at BS=256. I suspect Ross Wightman is using PyTorch, which ends up being more performant for some reason. This is why DL benchmarking is hard: it is about the software as much as it is about the hardware.
Ross Wightman reports 2400-2600 img/s for different variations of ResNet-50 (at approx. 100 ms per batch, implying a batch size of 256). I'm surprised to see almost a threefold difference here.
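The arithmetic behind that inference can be sketched as follows (a minimal illustration; the helper name is hypothetical, not from any benchmark code in the thread):

```python
def throughput(batch_size: int, batch_time_s: float) -> float:
    """Images per second for one training/inference step."""
    return batch_size / batch_time_s

# ~100 ms per batch at batch size 256 gives 2560 img/s,
# consistent with the reported 2400-2600 img/s range.
print(throughput(256, 0.100))
```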