
How To Benchmark Models #1697

Merged
merged 3 commits into develop on Nov 29, 2024
Conversation

@LinasKo (Contributor) commented on Nov 29, 2024

Description

Adding a How-To guide for model benchmarking, along with a Colab version.

This is the most extensive guide to date, but I expect we'll shorten it somewhat & update it soon with new functionality (Annotator, class remapping function).

Temporarily relies on a specific branch in inference.

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

How has this change been tested? Please provide a test case or an example of how you tested the change.

Ran with both Inference and Ultralytics.

Any specific deployment considerations

Temporarily relies on a specific branch in inference; checked how the rendered pages look with mkdocs.

Docs

  • Docs updated? What were the changes:

@LinasKo merged commit a5948c8 into develop on Nov 29, 2024
11 checks passed
@SkalskiP (Collaborator) commented on Dec 2, 2024

Hi @LinasKo 👋🏻

It's a bit late, as the PR has already been merged, but I have a few thoughts. Please make the changes before the release.

Good first iteration! I get the impression that you wanted to say too much, and as a result, the tutorial becomes chaotic. Keep it simple. Don't introduce too many side topics. Less is more. Simplify and organize.

  1. We should treat object detection as the default example. Instance segmentation is probably an order of magnitude less common as a use case than object detection.

  2. The part where you talk about which libraries we'll be using definitely shouldn't be in the "Loading a Dataset" section. In previous "How to" tutorials, we usually had a single sentence in the introduction where we mentioned which libraries we'd be using and linked to them.

[screenshot]

  3. I think the tutorial should show how to benchmark a model on datasets in COCO, YOLO, and Pascal VOC formats. The fact that supervision is format-agnostic in this regard is a big asset, and we should highlight that.

This code snippet should have tabs for all three formats.

[screenshot]

Before running the model, we should have a section showing how to iterate over a dataset in all three formats.

[screenshot]

  4. We don't need that extra table of contents. It's not consistent with the other "How to" guides that we have. It delays the user getting to the actual tutorial. Most importantly, the table of contents is already on the right side.

[screenshot]

  5. Use concise action phrases as paragraph titles. For example, use "Load Model", "Run Model" instead of "Loading a model", "Running a model". This is consistent with the format we used in previous "How to" guides.

  6. When using callouts like the ones below, place them within MkDocs Material tip sections.

[screenshot]

  7. I think just one tab dedicated to inference is enough. In this tutorial, you're loading a model trained on COCO, and let's stick to that. If you want to mention that inference can load any model from Roboflow Universe, do it using an MkDocs Material tip below the code snippet.

[screenshot]
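A single inference tab along those lines might look like the sketch below. The model alias `yolov8n-640` and the overall flow are assumptions based on inference's `get_model` entry point and supervision's `Detections.from_inference` converter; imports are kept local so the sketch stays importable even without `inference` installed.

```python
import numpy as np


def detect(image: np.ndarray):
    """Run a COCO-pretrained model via inference and convert the results.

    Hypothetical example flow, not the guide's exact code.
    """
    import supervision as sv
    from inference import get_model

    model = get_model(model_id="yolov8n-640")  # assumed COCO-pretrained alias
    result = model.infer(image)[0]
    return sv.Detections.from_inference(result)
```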

  8. Paragraphs like this don't add much information. In this situation, I think it's better to omit the paragraph entirely.

[screenshot]

  9. In our "How to" guides, we've been linking directly to specific classes/utils in the docs.

[screenshot]

To be consistent with what we have so far, I would write this as:

[screenshot]

"Use sv.BoxAnnotator for object detection and sv.OrientedBoxAnnotator for OBB."

  10. Transformers is a gigantic library. We should strive to become the default benchmarking tool for detection and segmentation models there.

[screenshot]
