
keypoints_to_detections and Keypoint tracking #1658

Merged
Merged 6 commits into develop on Nov 6, 2024

Conversation

Contributor

@LinasKo LinasKo commented Nov 6, 2024

Description

The simplest way to implement keypoint tracking is to reuse the existing functionality in detections. In this PR I add:

  • keypoints_to_detections function for conversion between the two types
  • Docs, guides and examples showing how keypoint tracking can be done

Minor changes:

  • Added KeyPoints.is_empty()
  • Typo and formatting fixes

🟢 This approach does not lock us in whatsoever - we can easily introduce a better tracking system later, without deprecations.
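For intuition only, the core of such a conversion can be sketched in plain NumPy: each instance's box becomes the min/max envelope of its keypoints. Note that `keypoints_to_boxes` below is a hypothetical helper, not the PR's actual implementation, and it ignores details such as masking out invisible keypoints.

```python
import numpy as np

def keypoints_to_boxes(xy: np.ndarray) -> np.ndarray:
    """Collapse keypoints of shape (N, K, 2) into xyxy boxes of shape (N, 4).

    Each instance's box spans the min/max of its keypoint coordinates.
    """
    x_min = xy[:, :, 0].min(axis=1)
    y_min = xy[:, :, 1].min(axis=1)
    x_max = xy[:, :, 0].max(axis=1)
    y_max = xy[:, :, 1].max(axis=1)
    return np.stack([x_min, y_min, x_max, y_max], axis=1)

# One instance with three keypoints.
xy = np.array([[[10.0, 20.0], [30.0, 5.0], [25.0, 40.0]]])
boxes = keypoints_to_boxes(xy)  # -> [[10., 5., 30., 40.]]
```

Once boxes exist, everything downstream (ByteTrack, smoothing, annotators) works exactly as it does for ordinary detections.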

track-keypoints.mp4

Type of change


  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

How has this change been tested? Please provide a test case or example of how you tested the change.

  1. Tested in Colab: https://colab.research.google.com/drive/1Rh7CRUYZK7xLioOVYAQPQTENSrOqTmfe?usp=sharing
  2. Generated example video
  3. Looked through the docs repeatedly
Generating the example video for the docs

I used a smaller, 1280 x 720 video.
https://www.pexels.com/video/ski-montagne-skier-piste-de-ski-4274798/

import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8m-pose.pt")
tracker = sv.ByteTrack()
smoother = sv.DetectionsSmoother()
box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
trace_annotator = sv.TraceAnnotator()

def callback(frame: np.ndarray, _: int) -> np.ndarray:
    results = model(frame)[0]
    keypoints = sv.KeyPoints.from_ultralytics(results)

    # Convert keypoints to detections so the existing detection-based
    # tracker and smoother can be reused as-is.
    detections = sv.keypoints_to_detections(keypoints)
    detections = tracker.update_with_detections(detections)
    detections = smoother.update_with_detections(detections)

    labels = [
        f"#{tracker_id} {results.names[class_id]}"
        for class_id, tracker_id
        in zip(detections.class_id, detections.tracker_id)
    ]

    annotated_frame = box_annotator.annotate(
        frame.copy(), detections=detections)
    annotated_frame = label_annotator.annotate(
        annotated_frame, detections=detections, labels=labels)
    return trace_annotator.annotate(
        annotated_frame, detections=detections)

sv.process_video(
    source_path="skiing2.mp4",
    target_path="result.mp4",
    callback=callback
)

Any specific deployment considerations

Docs

  • Docs updated? What were the changes:
  • New section in "Track Objects on Video"
  • New subsection "Datatypes" in supervision global utils.


LinasKo commented Nov 6, 2024

Future improvement: show the keypoints themselves in the final video. This likely needs new doc sections: detecting and drawing keypoints, then converting and also showing boxes, and then adding tracking.

@LinasKo LinasKo merged commit 8e91865 into develop Nov 6, 2024
11 checks passed