Deep Learning model to track a pickleball's location in a video of a match

AndrewDettor/TrackNet-Pickleball

TrackNetPickleBall

Follow these steps to produce a video that tracks the location of a pickleball and to obtain metrics on the model's performance.

Model Architecture

(Diagram: model architecture)

Steps

(Diagram: pipeline flow chart)

Step 1 - Split the video into frames
Step 2 - Label the frames by hand to produce a label CSV
Step 3 - Adjust the format of the label CSV
Step 4 - Use the label CSV to convert the frames into .npy files
Step 5 - Train the model on the .npy files to produce new weights
Step 6 - Run the video through the model with the new weights to get a prediction CSV
Step 7 - Use the video and the prediction CSV to show the ball's trajectory
Step 8 - Compare the prediction CSV against the label CSV to measure model performance
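Steps 4 and 5 hinge on TrackNet's ground-truth format: each hand-labeled (x, y) ball position is rendered as a 2D Gaussian heatmap that the network learns to regress. A minimal sketch of that conversion, assuming a 640x360 frame size; the function name and sigma value are illustrative, not taken from this repo:

```python
import numpy as np

def make_heatmap(x, y, width=640, height=360, sigma=5):
    """Render a labeled ball position as a 2D Gaussian heatmap.

    TrackNet-style models regress one heatmap per frame; pixel values
    peak at the labeled (x, y) and fall off with distance.
    """
    xs = np.arange(width)[None, :]    # column coordinates, shape (1, W)
    ys = np.arange(height)[:, None]   # row coordinates, shape (H, 1)
    dist_sq = (xs - x) ** 2 + (ys - y) ** 2
    return np.exp(-dist_sq / (2 * sigma ** 2))

# Example: a label at (320, 180) becomes one ground-truth .npy array
heatmap = make_heatmap(320, 180)
np.save("frame_0001.npy", heatmap)
```

The heatmap formulation is what lets the model output a confidence map per frame instead of a single coordinate, which tolerates motion blur and brief occlusion.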

For more information about each step, see its respective folder in this repository.
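Step 8 boils down to comparing the prediction CSV against the hand-labeled CSV frame by frame. A hedged sketch of one common scoring scheme, counting a detection as correct when it lands within a pixel tolerance of the label; the column names (`frame`, `x`, `y`) and the 10-pixel tolerance are assumptions, not this repo's exact format:

```python
import csv
import math

def evaluate(label_csv, pred_csv, tol=10.0):
    """Compute precision/recall by comparing per-frame ball positions."""
    def load(path):
        with open(path, newline="") as f:
            # assumed columns: frame, x, y (empty x/y = ball not visible)
            return {r["frame"]: (r["x"], r["y"]) for r in csv.DictReader(f)}

    labels, preds = load(label_csv), load(pred_csv)
    tp = fp = fn = 0
    for frame, (lx, ly) in labels.items():
        px, py = preds.get(frame, ("", ""))
        if lx == "":                  # no ball labeled in this frame
            fp += px != ""            # any detection here is spurious
        elif px == "":
            fn += 1                   # ball present but missed
        elif math.hypot(float(px) - float(lx), float(py) - float(ly)) <= tol:
            tp += 1                   # detection within tolerance
        else:
            fp += 1                   # detected, but too far from the label
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A distance threshold rather than exact-pixel matching is the usual choice here, since hand labels on a small fast-moving ball are themselves a few pixels noisy.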

System Environment

A GPU is required for training the model and generating predictions (steps 5 and 6); a CPU is sufficient for everything else.
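To reproduce the environment locally, the Kaggle base images can be pulled from Google's container registry. A sketch, assuming Docker and (for the GPU image) the NVIDIA container toolkit are installed; pin a specific tag rather than `latest` for reproducibility:

```shell
# CPU image: sufficient for steps 1-4, 7, and 8
docker pull gcr.io/kaggle-images/python:latest

# GPU image: needed for training and inference (steps 5-6)
docker pull gcr.io/kaggle-gpu-images/python:latest
docker run --gpus all -it -v "$PWD":/work -w /work gcr.io/kaggle-gpu-images/python:latest
```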

Kaggle CPU Docker Image
Kaggle GPU Docker Image

There is also a requirements.txt file exported directly from the Kaggle GPU image, but it may not install cleanly outside that environment.

Weights Files

The weights files exceed GitHub's 25 MB upload limit, so they are hosted on Google Drive. The old weights come from the 3-in-3-out model in the TrackNetV2 repository; the new weights come from our best model.

Old Weights
New Weights

Sources

Labelling Tool
TrackNetV2

Final Presentation
