feat: Add README and update LICENSE for OC_SORT fork used in SMOT4SB
- Added README to explain the role of OC_SORT in SMOT4SB and provide installation and usage instructions.
- Updated LICENSE to clarify that modifications by Ukita's Lab at Toyota Technological Institute are covered under the MIT License.
- Ensured proper attribution to the original OC_SORT author.
- Clearly stated that this repository is used as a submodule in SMOT4SB and should not be used independently.
Yuki-11 committed Jan 29, 2025
1 parent a6c553f commit dda73e8
Showing 3 changed files with 136 additions and 87 deletions.
14 changes: 13 additions & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2021 Yifu Zhang
Copyright (c) 2025 Ukita's Lab. at Toyota Technological Institute.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -19,3 +19,15 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

The main part of this program is a modified version of **Yifu Zhang's OC_SORT**,
which is subject to the same MIT License.

Original OC_SORT License:

MIT License

Copyright (c) 2021 Yifu Zhang
...
110 changes: 24 additions & 86 deletions README.md
@@ -1,99 +1,37 @@
# OC-SORT
# OC_SORT (Customized for SMOT4SB)

[![arXiv](https://img.shields.io/badge/arXiv-2203.14360-<COLOR>.svg)](https://arxiv.org/abs/2203.14360) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ![test](https://img.shields.io/static/v1?label=By&message=Pytorch&color=red)
This repository is a customized fork of OC_SORT, maintained as [OC_SORT_for_SMOT4SB](https://github.com/IIM-TTIJ/OC_SORT_for_SMOT4SB), and is used as a submodule in the **SMOT4SB baseline code**.

**Observation-Centric SORT (OC-SORT)** is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in **crowded scenes and when objects are in non-linear motion**. It is designed by recognizing and fixing limitations of the Kalman filter and [SORT](https://arxiv.org/abs/1602.00763). It is flexible enough to integrate with different detectors and matching modules, such as appearance similarity, and it remains **Simple, Online, and Real-time**.
For usage instructions, please refer to the **SMOT4SB baseline repository**:
🔗 [SMOT4SB Baseline Code](https://github.com/IIM-TTIJ/MVA2025-SMOT4SB)

### Pipeline
<center>
<img src="assets/teaser.png" width="600"/>
</center>
---

## 📌 Overview
This repository serves as a tracking module within **SMOT4SB**, which is part of the **MVA2025 challenge**. It is integrated as a submodule and should not be used independently.

### Observation-centric Re-Update
<center>
<img src="assets/ocr.png" width="600"/>
</center>
## 🔧 Installation
Since this repository is used as a submodule, please follow the setup instructions in the **SMOT4SB baseline repository**.
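
A minimal setup sketch, assuming the standard Git submodule workflow (the baseline repository's own instructions take precedence):

```shell
# Clone the SMOT4SB baseline together with its submodules, including this tracker
git clone --recursive https://github.com/IIM-TTIJ/MVA2025-SMOT4SB.git
cd MVA2025-SMOT4SB

# If the baseline was already cloned without --recursive, fetch submodules afterwards
git submodule update --init --recursive
```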

## News
* [07/09/2023]: C++ support is provided. See the [doc](deploy/OCSort/cpp/Readme.md) for instructions. Thanks for the contribution!
* [07/01/2023]: [Deep OC-SORT](https://github.com/GerardMaggiolino/Deep-OC-SORT/) is accepted to ICIP2023. It adds adaptive appearance-similarity-based association on top of OC-SORT.
* [03/15/2023]: We updated the preprint on [arXiv](https://arxiv.org/pdf/2203.14360.pdf) and renamed OOS to "Observation-centric Re-Update" (ORU).
* [02/28/2023]: OC-SORT is accepted to CVPR 2023. We made an intensive revision of the paper and will update the code and paper soon.
* [02/26/2023]: Deep-OC-SORT, a combination of OC-SORT and deep visual appearance, is released on [GitHub](https://github.com/GerardMaggiolino/Deep-OC-SORT/) and [arXiv](https://arxiv.org/abs/2302.11813), with significant performance improvements on MOT17, MOT20, and DanceTrack.
* [08/16/2022]: OC-SORT is supported in [mmtracking](https://github.com/open-mmlab/mmtracking). If you want a more advanced and customizable tracking experience, give it a try. The mmtracking version is still in preview; performance on more datasets is yet to be verified.
* [04/27/2022]: Support integration with BYTE and multiple cost metrics, such as GIoU, CIoU, etc.
* [04/02/2022]: A preview version is released after a preliminary cleanup and refactor.
* [03/27/2022]: The [arXiv preprint](https://arxiv.org/abs/2203.14360) of OC-SORT is released.
## 🚀 Usage
All usage guidelines are documented in the **SMOT4SB baseline repository**. Please refer to the following link:
[SMOT4SB Baseline Repository](https://github.com/IIM-TTIJ/MVA2025-SMOT4SB)

## Benchmark Performance
## ⚠ Notes
- This repository is **not intended to be merged back into the original OC_SORT repository**.
- Direct usage of this repository is **not supported**; it is designed to be used within the **SMOT4SB framework**.
- Updates and modifications are managed in alignment with the **SMOT4SB project**.

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-dancetrack)](https://paperswithcode.com/sota/multi-object-tracking-on-dancetrack?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multiple-object-tracking-on-kitti-tracking)](https://paperswithcode.com/sota/multiple-object-tracking-on-kitti-tracking?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-mot17)](https://paperswithcode.com/sota/multi-object-tracking-on-mot17?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-mot20-1)](https://paperswithcode.com/sota/multi-object-tracking-on-mot20-1?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multiple-object-tracking-on-crohd)](https://paperswithcode.com/sota/multiple-object-tracking-on-crohd?p=observation-centric-sort-rethinking-sort-for)
---

## 📝 License
This repository is a modified version of **OC_SORT**, originally developed by **Yifu Zhang**, and follows the same MIT License.

| Dataset | HOTA | AssA | IDF1 | MOTA | FP | FN | IDs | Frag |
| ---------------- | ---- | ---- | ---- | ---- | ------- | ------- | ----- | ------ |
| MOT17 (private) | 63.2 | 63.2 | 77.5 | 78.0 | 15,129 | 107,055 | 1,950 | 2,040 |
| MOT17 (public) | 52.4 | 57.6 | 65.1 | 58.2 | 4,379 | 230,449 | 784 | 2,006 |
| MOT20 (private) | 62.4 | 62.5 | 76.4 | 75.9 | 20,218 | 103,791 | 938 | 1,004 |
| MOT20 (public) | 54.3 | 59.5 | 67.0 | 59.9 | 4,434 | 202,502 | 554 | 2,345 |
| KITTI-cars | 76.5 | 76.4 | - | 90.3 | 2,685 | 407 | 250 | 280 |
| KITTI-pedestrian | 54.7 | 59.1 | - | 65.1 | 6,422 | 1,443 | 204 | 609 |
| DanceTrack-test | 55.1 | 38.0 | 54.2 | 89.4 | 114,107 | 139,083 | 1,992 | 3,838 |
| CroHD HeadTrack | 44.1 | - | 62.9 | 67.9 | 102,050 | 164,090 | 4,243 | 10,122 |
Modifications made by **Ukita's Lab at Toyota Technological Institute** are also licensed under the MIT License.

* Results are obtained by reusing detections from previous methods and shared hyper-parameters. Tuning the implementation adaptively to each dataset may yield higher performance.
* The inference speed is ~28 FPS on an RTX 2080Ti GPU. If detections are provided, the OC-SORT association step alone runs at ~700 FPS on a 3.0 GHz i9 CPU.
* A sample from the DanceTrack test set is shown below; more visualizations are available on [Google Drive](https://drive.google.com/drive/folders/1-T4jhHwhOAp42DGJ115yMlC7CkB-PNxy?usp=sharing).
For full license details, refer to the [LICENSE](LICENSE) file.

![](assets/dancetrack0088_slow.gif)
---



## Get Started
* See [INSTALL.md](./docs/INSTALL.md) for instructions on installing the required components.

* See [GET_STARTED.md](./docs/GET_STARTED.md) for how to get started with OC-SORT.

* See [MODEL_ZOO.md](./docs/MODEL_ZOO.md) for available YOLOX weights.

* See [DEPLOY.md](./docs/DEPLOY.md) for deployment support over ONNX, TensorRT and ncnn.


## Demo
To run the tracker on a provided demo video from [YouTube](https://www.youtube.com/watch?v=qv6gl4h0dvg):

```shell
python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/ocsort_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4
```

<center>
<img src="assets/dance_demo.gif" width="600"/>
</center>


## Roadmap
We are still actively updating OC-SORT. We always welcome contributions to make it better for the community. Some high-priority to-dos are listed below:
- [x] Add more association cost choices: GIoU, CIoU, etc.
- [x] Support OC-SORT in [mmtracking](https://github.com/open-mmlab/mmtracking).
- [ ] Add more deployment options and improve the inference speed.
- [x] Make OC-SORT adaptive to customized detectors (in the [mmtracking](https://github.com/open-mmlab/mmtracking) version).


## Acknowledgement and Citation
The codebase is built largely upon [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX), [filterpy](https://github.com/rlabbe/filterpy), and [ByteTrack](https://github.com/ifzhang/ByteTrack). We thank them for their wonderful work. OC-SORT, filterpy, and ByteTrack are available under the MIT License; [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) uses the Apache License 2.0.

If you find this work useful, please consider citing our paper:
```
@inproceedings{cao2023observation,
title={Observation-centric sort: Rethinking sort for robust multi-object tracking},
author={Cao, Jinkun and Pang, Jiangmiao and Weng, Xinshuo and Khirodkar, Rawal and Kitani, Kris},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={9686--9696},
year={2023}
}
```
## 📩 Contact
For inquiries related to **SMOT4SB**, please check the official [MVA2025-SMOT4SB repository](https://github.com/IIM-TTIJ/MVA2025-SMOT4SB).
99 changes: 99 additions & 0 deletions README_origin.md
@@ -0,0 +1,99 @@
# OC-SORT

[![arXiv](https://img.shields.io/badge/arXiv-2203.14360-<COLOR>.svg)](https://arxiv.org/abs/2203.14360) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ![test](https://img.shields.io/static/v1?label=By&message=Pytorch&color=red)

**Observation-Centric SORT (OC-SORT)** is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in **crowded scenes and when objects are in non-linear motion**. It is designed by recognizing and fixing limitations of the Kalman filter and [SORT](https://arxiv.org/abs/1602.00763). It is flexible enough to integrate with different detectors and matching modules, such as appearance similarity, and it remains **Simple, Online, and Real-time**.
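
For orientation, a minimal per-frame loop might look like the sketch below. The import path, constructor arguments, and `update()` signature are assumptions inferred from this repository's layout (`trackers/ocsort_tracker/ocsort.py`); verify against the actual code before use.

```python
import numpy as np

# Assumed import path and signatures -- check trackers/ocsort_tracker/ocsort.py
from trackers.ocsort_tracker.ocsort import OCSort

tracker = OCSort(det_thresh=0.6)  # detection confidence threshold; other knobs keep defaults

img_h, img_w = 720, 1280
for frame_id in range(3):
    # Detections from any detector, one row per box: (x1, y1, x2, y2, score).
    # Dummy boxes stand in for real detector output in this sketch.
    dets = np.array([[100.0 + 5 * frame_id, 200.0, 180.0 + 5 * frame_id, 360.0, 0.9]])

    # img_info / img_size let the tracker rescale boxes when detection and
    # display resolutions differ; here they are the same, so no rescaling occurs.
    online_targets = tracker.update(dets, (img_h, img_w), (img_h, img_w))

    for t in online_targets:  # each row: (x1, y1, x2, y2, track_id)
        x1, y1, x2, y2, tid = t
        print(f"frame {frame_id}: id={int(tid)} box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```

With detections supplied externally like this, only the association step runs, which is what the ~700 FPS CPU figure quoted below refers to.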

### Pipeline
<center>
<img src="assets/teaser.png" width="600"/>
</center>


### Observation-centric Re-Update
<center>
<img src="assets/ocr.png" width="600"/>
</center>

## News
* [07/09/2023]: C++ support is provided. See the [doc](deploy/OCSort/cpp/Readme.md) for instructions. Thanks for the contribution!
* [07/01/2023]: [Deep OC-SORT](https://github.com/GerardMaggiolino/Deep-OC-SORT/) is accepted to ICIP2023. It adds adaptive appearance-similarity-based association on top of OC-SORT.
* [03/15/2023]: We updated the preprint on [arXiv](https://arxiv.org/pdf/2203.14360.pdf) and renamed OOS to "Observation-centric Re-Update" (ORU).
* [02/28/2023]: OC-SORT is accepted to CVPR 2023. We made an intensive revision of the paper and will update the code and paper soon.
* [02/26/2023]: Deep-OC-SORT, a combination of OC-SORT and deep visual appearance, is released on [GitHub](https://github.com/GerardMaggiolino/Deep-OC-SORT/) and [arXiv](https://arxiv.org/abs/2302.11813), with significant performance improvements on MOT17, MOT20, and DanceTrack.
* [08/16/2022]: OC-SORT is supported in [mmtracking](https://github.com/open-mmlab/mmtracking). If you want a more advanced and customizable tracking experience, give it a try. The mmtracking version is still in preview; performance on more datasets is yet to be verified.
* [04/27/2022]: Support integration with BYTE and multiple cost metrics, such as GIoU, CIoU, etc. (a GIoU sketch follows this list).
* [04/02/2022]: A preview version is released after a preliminary cleanup and refactor.
* [03/27/2022]: The [arXiv preprint](https://arxiv.org/abs/2203.14360) of OC-SORT is released.
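
Regarding the cost-metric item above: GIoU generalizes IoU by penalizing the empty area of the smallest box enclosing both inputs. Below is a minimal self-contained sketch of the formula for illustration only, not the repository's own implementation:

```python
def giou(box_a, box_b):
    """Generalized IoU for two boxes in (x1, y1, x2, y2) format.

    GIoU = IoU - (area(C) - area(union)) / area(C),
    where C is the smallest box enclosing both inputs. Range: [-1, 1].
    """
    # Intersection rectangle (may be empty)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    # Smallest enclosing box C
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    return inter / union - (area_c - union) / area_c


# Unlike plain IoU, GIoU still ranks disjoint boxes by how far apart they are.
print(giou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~ -0.08 (small overlap)
print(giou((0, 0, 10, 10), (20, 0, 30, 10)))  # ~ -0.33 (disjoint)
```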

## Benchmark Performance

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-dancetrack)](https://paperswithcode.com/sota/multi-object-tracking-on-dancetrack?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multiple-object-tracking-on-kitti-tracking)](https://paperswithcode.com/sota/multiple-object-tracking-on-kitti-tracking?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-mot17)](https://paperswithcode.com/sota/multi-object-tracking-on-mot17?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multi-object-tracking-on-mot20-1)](https://paperswithcode.com/sota/multi-object-tracking-on-mot20-1?p=observation-centric-sort-rethinking-sort-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/observation-centric-sort-rethinking-sort-for/multiple-object-tracking-on-crohd)](https://paperswithcode.com/sota/multiple-object-tracking-on-crohd?p=observation-centric-sort-rethinking-sort-for)


| Dataset | HOTA | AssA | IDF1 | MOTA | FP | FN | IDs | Frag |
| ---------------- | ---- | ---- | ---- | ---- | ------- | ------- | ----- | ------ |
| MOT17 (private) | 63.2 | 63.2 | 77.5 | 78.0 | 15,129 | 107,055 | 1,950 | 2,040 |
| MOT17 (public) | 52.4 | 57.6 | 65.1 | 58.2 | 4,379 | 230,449 | 784 | 2,006 |
| MOT20 (private) | 62.4 | 62.5 | 76.4 | 75.9 | 20,218 | 103,791 | 938 | 1,004 |
| MOT20 (public) | 54.3 | 59.5 | 67.0 | 59.9 | 4,434 | 202,502 | 554 | 2,345 |
| KITTI-cars | 76.5 | 76.4 | - | 90.3 | 2,685 | 407 | 250 | 280 |
| KITTI-pedestrian | 54.7 | 59.1 | - | 65.1 | 6,422 | 1,443 | 204 | 609 |
| DanceTrack-test | 55.1 | 38.0 | 54.2 | 89.4 | 114,107 | 139,083 | 1,992 | 3,838 |
| CroHD HeadTrack | 44.1 | - | 62.9 | 67.9 | 102,050 | 164,090 | 4,243 | 10,122 |

* Results are obtained by reusing detections from previous methods and shared hyper-parameters. Tuning the implementation adaptively to each dataset may yield higher performance.
* The inference speed is ~28 FPS on an RTX 2080Ti GPU. If detections are provided, the OC-SORT association step alone runs at ~700 FPS on a 3.0 GHz i9 CPU.
* A sample from the DanceTrack test set is shown below; more visualizations are available on [Google Drive](https://drive.google.com/drive/folders/1-T4jhHwhOAp42DGJ115yMlC7CkB-PNxy?usp=sharing).

![](assets/dancetrack0088_slow.gif)



## Get Started
* See [INSTALL.md](./docs/INSTALL.md) for instructions on installing the required components.

* See [GET_STARTED.md](./docs/GET_STARTED.md) for how to get started with OC-SORT.

* See [MODEL_ZOO.md](./docs/MODEL_ZOO.md) for available YOLOX weights.

* See [DEPLOY.md](./docs/DEPLOY.md) for deployment support over ONNX, TensorRT and ncnn.


## Demo
To run the tracker on a provided demo video from [YouTube](https://www.youtube.com/watch?v=qv6gl4h0dvg):

```shell
python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/ocsort_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4
```

<center>
<img src="assets/dance_demo.gif" width="600"/>
</center>


## Roadmap
We are still actively updating OC-SORT. We always welcome contributions to make it better for the community. Some high-priority to-dos are listed below:
- [x] Add more association cost choices: GIoU, CIoU, etc.
- [x] Support OC-SORT in [mmtracking](https://github.com/open-mmlab/mmtracking).
- [ ] Add more deployment options and improve the inference speed.
- [x] Make OC-SORT adaptive to customized detectors (in the [mmtracking](https://github.com/open-mmlab/mmtracking) version).


## Acknowledgement and Citation
The codebase is built largely upon [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX), [filterpy](https://github.com/rlabbe/filterpy), and [ByteTrack](https://github.com/ifzhang/ByteTrack). We thank them for their wonderful work. OC-SORT, filterpy, and ByteTrack are available under the MIT License; [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) uses the Apache License 2.0.

If you find this work useful, please consider citing our paper:
```
@inproceedings{cao2023observation,
title={Observation-centric sort: Rethinking sort for robust multi-object tracking},
author={Cao, Jinkun and Pang, Jiangmiao and Weng, Xinshuo and Khirodkar, Rawal and Kitani, Kris},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={9686--9696},
year={2023}
}
```
