This repository contains a two-stage tracker. The detections generated by YOLOv5, a family of object detection architectures and models pretrained on the COCO dataset, are passed to a Deep Sort algorithm which tracks the objects. It can track any object that your Yolov5 model was trained to detect.
- Yolov5 training on Custom Data (link to external repository)
- Deep Sort deep descriptor training (link to external repository)
- Yolov5 deep_sort pytorch evaluation
- Clone the repository recursively:
git clone --recurse-submodules https://pc-4501.kl.dfki.de/mkhalid/yolact_yolov5_deepsort_pytorch.git
If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init
- Please run:
pip install -r requirements.txt
- Go to /yolact_vizta and run:
pip install -r requirements.txt
Tracking can be run on most video formats:
$ python track.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
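To illustrate how a --source string can be mapped to one of the input types above, here is a minimal sketch. This is not the repo's actual dispatch logic (that lives in the Yolov5 dataloaders); the function name and extension sets are assumptions for illustration only:

```python
from urllib.parse import urlparse

IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
VID_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def classify_source(source: str) -> str:
    """Guess what kind of input a --source string refers to."""
    if source.isdigit():
        return "webcam"            # e.g. "0" -> first camera device
    if urlparse(source).scheme in ("rtsp", "rtmp", "http", "https"):
        return "stream"            # network streams and YouTube URLs
    if "*" in source:
        return "glob"              # e.g. path/*.jpg
    suffix = "." + source.rsplit(".", 1)[-1].lower() if "." in source else ""
    if suffix in IMG_EXTS:
        return "image"
    if suffix in VID_EXTS:
        return "video"
    return "directory"
```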
There is a clear trade-off between model inference speed and accuracy. To meet your inference speed/accuracy needs, you can select any Yolov5 family model; the weights are downloaded automatically:
$ python track.py --source 0 --yolo_weights yolov5n.pt --img 640
yolov5s.pt
yolov5m.pt
yolov5l.pt
yolov5x.pt --img 1280
By default the tracker tracks all MS COCO classes.
If you only want to track persons, I recommend these weights for increased performance:
python3 track.py --source 0 --yolo_weights yolov5/weights/crowdhuman_yolov5m.pt --classes 0 # tracks persons, only
If you want to track a subset of the MS COCO classes, add their corresponding indices after the classes flag:
python3 track.py --source 0 --yolo_weights yolov5s.pt --classes 15 16 # tracks only cats and dogs
Here is a list of all the possible objects that a Yolov5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.
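Since remembering numeric class indices is error-prone, a small helper can translate class names into the indices the --classes flag expects. This is a sketch, not part of the repo; the list below is the standard 80-class MS COCO ordering (zero-indexed) used by Yolov5:

```python
# The 80 MS COCO class names in Yolov5's zero-indexed order.
COCO_NAMES = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
    "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
    "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
    "baseball bat", "baseball glove", "skateboard", "surfboard",
    "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
    "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
    "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant",
    "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote",
    "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",
    "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
    "hair drier", "toothbrush",
]

def class_indices(*names):
    """Map class names to the indices expected by the --classes flag."""
    return [COCO_NAMES.index(n) for n in names]

# class_indices("cat", "dog") -> [15, 16], i.e. --classes 15 16
```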
The tracking results can be saved to inference/output by running:
python3 track.py --source ... --save-txt
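The exact column layout of the saved txt files depends on the repo, but assuming MOTChallenge-style rows (frame, id, bb_left, bb_top, bb_width, bb_height, ...), a minimal parser that groups boxes by frame might look like this (function name and column order are assumptions):

```python
from collections import defaultdict

def load_tracks(lines):
    """Group MOTChallenge-style rows by frame number.

    Returns: dict mapping frame -> [(track_id, left, top, width, height), ...]
    Accepts comma- or whitespace-separated rows.
    """
    per_frame = defaultdict(list)
    for line in lines:
        if not line.strip():
            continue
        fields = line.replace(",", " ").split()
        frame, track_id = int(fields[0]), int(fields[1])
        left, top, width, height = map(float, fields[2:6])
        per_frame[frame].append((track_id, left, top, width, height))
    return per_frame
```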
- Watch tutorial Yolov5 training on Custom Data (link to external repository)
- Add a .yaml file in /yolov5/data (check existing files for reference)
- /yolov5/restructure_data.py can be used to convert a dataset from COCO format to YOLO format
- This command can be used for training:
python train.py --img 512 --batch 16 --epochs 20 --data ir_data_2021_11_04.yaml --weights weights/crowdhuman_yolov5m.pt
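The details of restructure_data.py are not shown here, but the core of any COCO-to-YOLO conversion is the box transform: COCO annotations store [x_min, y_min, width, height] in pixels, while YOLO label files use [x_center, y_center, width, height] normalized by image size, one line per object prefixed with the class index. A sketch (function names are assumptions, not the script's actual API):

```python
def coco_to_yolo_bbox(bbox, img_w, img_h):
    """Convert a COCO [x_min, y_min, w, h] pixel box to a
    normalized YOLO [x_center, y_center, w, h] box."""
    x_min, y_min, w, h = bbox
    return [
        (x_min + w / 2) / img_w,   # x center, normalized
        (y_min + h / 2) / img_h,   # y center, normalized
        w / img_w,                 # width, normalized
        h / img_h,                 # height, normalized
    ]

def yolo_label_line(class_idx, bbox, img_w, img_h):
    """Format one YOLO label line: '<class> xc yc w h'."""
    xc, yc, w, h = coco_to_yolo_bbox(bbox, img_w, img_h)
    return f"{class_idx} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```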
- Watch tutorial Yolov5 deep_sort pytorch evaluation
- Install motmetrics:
pip install motmetrics
- Convert data into the following format: dataset/seq01, dataset/seq02 ...
- Use helper_functions.py to create ground truth txt files in the following format: groundtruth/seq01/gt/gt.txt, groundtruth/seq02/gt/gt.txt ...
- Run the tracker on each sequence and generate a txt file per sequence:
python3 track.py --source ... --save-txt
- Evaluate the results by using the following command:
python3 -m motmetrics.apps.eval_motchallenge /path/to/gtfolder /path/to/tracking/results
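The gt.txt files consumed by eval_motchallenge follow the MOTChallenge convention: one comma-separated row per object per frame, with the columns frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z. A minimal sketch of a writer for such files (function names are assumptions, not the actual helper_functions.py API):

```python
def mot_row(frame, track_id, left, top, width, height,
            conf=1, x=-1, y=-1, z=-1):
    """Format one MOTChallenge ground-truth row.

    conf marks whether the entry is considered (1) or ignored (0);
    x, y, z are unused in 2D tracking and conventionally set to -1.
    """
    return f"{frame},{track_id},{left},{top},{width},{height},{conf},{x},{y},{z}"

def write_gt(path, rows):
    """Write rows of (frame, id, left, top, width, height) to a gt.txt file."""
    with open(path, "w") as f:
        for r in rows:
            f.write(mot_row(*r) + "\n")
```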