
YOLO: Official Implementation of YOLOv9, YOLOv7

Welcome to the official implementation of YOLOv7 and YOLOv9. This repository contains the complete codebase, pre-trained models, and detailed instructions for training and deploying YOLOv9.

TL;DR

  • This is the official YOLO model implementation with an MIT License.
  • For quick deployment, you can install directly via pip+git:
pip install git+https://github.com/WongKinYiu/YOLO.git
yolo task.data.source=0 # source could be a single file, video, image folder, webcam ID
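
As a quick check after the pip install, you can point the CLI at a single file instead of a webcam; the file path below is illustrative:

yolo task=inference task.data.source=path/to/video.mp4 device=cpu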

Installation

To get started with YOLOv9's developer mode, we recommend cloning this repository and installing the required dependencies:

git clone git@github.com:WongKinYiu/YOLO.git
cd YOLO
pip install -r requirements.txt
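
If you prefer isolated dependencies, the same install works inside a standard virtual environment (a minimal sketch; the environment name is arbitrary):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt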

Task

The following are simple examples. For more customization details, please refer to the Notebooks and the lower-level modification HOWTO.

Training

To train YOLO on your machine/dataset:

  1. Modify the configuration file yolo/config/dataset/**.yaml to point to your dataset.
  2. Run the training script:
python yolo/lazy.py task=train dataset=** use_wandb=True
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c weight=False # or more args
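
For example, a complete run on the bundled toy dataset, combining the overrides shown above (the values here are illustrative, not tuned settings):

python yolo/lazy.py task=train dataset=toy model=v9-s task.data.batch_size=4 device=cuda use_wandb=False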

Transfer Learning

To perform transfer learning with YOLOv9:

python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset={dataset_config} device={cpu, mps, cuda}
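
Substituting concrete values for the placeholders above, a transfer-learning run on the bundled toy dataset might look like this (the dataset and device choices are illustrative):

python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset=toy device=cuda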

Inference

To run object detection with a trained model:

python yolo/lazy.py                          # if cloned from GitHub; the default task is inference
python yolo/lazy.py task=inference \
                    name=AnyNameYouWant \
                    device=cpu \
                    model=v9-s \
                    task.nms.min_confidence=0.1 \
                    task.fast_inference=onnx \
                    task.data.source=data/toy/images/train \
                    +quite=True
# overrides above: name (run name), device (cuda, cpu, or mps), model (v9-c, v9-m, or v9-s),
# task.nms.min_confidence (NMS confidence threshold), task.fast_inference (onnx, trt, or deploy),
# task.data.source (file, directory, or webcam), +quite (quiet output; the flag is spelled "quite" in the config)
yolo task.data.source={Any Source}           # if pip installed
yolo task=inference task.data.source={Any}

Validation

To validate model performance or generate a COCO-format JSON of predictions:

python yolo/lazy.py task=validation
python yolo/lazy.py task=validation dataset=toy
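
For example, to validate a specific model variant against the toy dataset (the variant choice is illustrative):

python yolo/lazy.py task=validation model=v9-c dataset=toy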

Contributing

Contributions to the YOLO project are welcome! See CONTRIBUTING for guidelines.


Citations

@misc{wang2022yolov7,
      title={YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
      author={Chien-Yao Wang and Alexey Bochkovskiy and Hong-Yuan Mark Liao},
      year={2022},
      eprint={2207.02696},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{wang2024yolov9,
      title={YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information},
      author={Chien-Yao Wang and I-Hau Yeh and Hong-Yuan Mark Liao},
      year={2024},
      eprint={2402.13616},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
