YOLOv5 Study: Speed vs Batch-Size #6649
glenn-jocher started this conversation in Ideas
Replies: 2 comments · 3 replies
-
Are these results for the PyTorch model format?
-
@glenn-jocher I tried yolov5s with different batch sizes on a T4 GPU, but I see no gain in average per-image inference time. I use a command like the one below. Have you tried batched inference on a T4 GPU? @glenn-jocher
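(For reference, a batched T4 evaluation along these lines would use the stock val.py flags; the weights, image size, and batch size below are illustrative, not the commenter's actual command:)

    python val.py --data coco.yaml --weights yolov5s.pt --imgsz 640 --batch-size 32 --device 0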
-
Batch-size study here. Larger batches improve per-image inference speed. The study was run on Colab Pro+ with an A100 40GB GPU. I used
val.py --data coco.yaml --task study
and updated the code slightly to run the study over batch size instead of image size (a sketch of this change follows the results link). Google sheet with complete results:
https://docs.google.com/spreadsheets/d/1Nm3jofjdgKja0AZHV8Jk_m8TgcF7jenCSA06DuEG2C0
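The exact modification isn't shown in the post; below is a minimal sketch of such a sweep, assuming val.run()'s signature in recent YOLOv5 releases. The batch sizes and output filename are illustrative, not the values used in the study.

    # Sketch of the described change (illustrative, not the exact code used):
    # sweep batch size rather than image size in the --task study loop.
    import numpy as np
    from val import run  # YOLOv5's val.py; run this from the repo root

    rows = []
    for bs in [1, 2, 4, 8, 16, 32, 64, 128]:  # illustrative batch sizes
        # run() returns a metrics tuple, per-class mAPs, and per-image speeds t,
        # where t = (preprocess, inference, NMS) in ms per image
        (mp, mr, map50, map5095, *losses), maps, t = run(
            data='coco.yaml', weights='yolov5s.pt', imgsz=640,
            batch_size=bs, plots=False)
        rows.append((bs, *t))  # record per-image speeds for this batch size
    np.savetxt('study_batch_yolov5s.txt', rows, fmt='%10.4g')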
[Charts: Inference Time vs Batch Size; FPS vs Batch Size; Speed gains vs batch-size 1]
An interesting takeaway is that smaller models benefit disproportionately from large batch sizes. The FPS gains vs batch-size 1 are in the sheet linked above; a sketch of how the gain is computed follows.
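To make the "gain vs batch-size 1" metric concrete, here is a minimal sketch of the arithmetic from measured per-image latencies. The numbers are placeholders, not the study's results.

    # Computing the FPS gain vs batch-size 1 from per-image inference latency (ms).
    latency_ms = {1: 6.0, 32: 2.0}  # placeholder measurements, NOT the study's data
    fps = {bs: 1000.0 / ms for bs, ms in latency_ms.items()}  # images per second
    gain = fps[32] / fps[1]  # speedup relative to batch-size 1
    print(f'batch 32 vs batch 1: {gain:.1f}x FPS')  # 3.0x with these placeholders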