![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-green)
Welcome to the official implementation of YOLOv7 and YOLOv9. This repository contains the complete codebase, pre-trained models, and detailed instructions for training and deploying YOLOv9.
- This is the official YOLO model implementation with an MIT License.
- For quick deployment, you can install directly via pip+git:

  ```shell
  pip install git+https://github.com/WongKinYiu/YOLO.git
  yolo task.data.source=0 # source could be a single file, video, image folder, webcam ID
  ```
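The `task.data.source` value is dispatched by type: a digit string selects a webcam, a directory is scanned as an image folder, and anything else is treated as a single image or video file. A minimal sketch of that dispatch, assuming this behavior (the `resolve_source` helper and its return tags are hypothetical, not part of the YOLO codebase):

```python
from pathlib import Path

def resolve_source(source):
    """Classify a task.data.source value (hypothetical helper, not YOLO's own code).

    A digit string is treated as a webcam ID, an existing directory as an image
    folder, and anything else as a single image or video file.
    """
    text = str(source)
    if text.isdigit():                # e.g. task.data.source=0 -> first webcam
        return ("webcam", int(text))
    path = Path(text)
    if path.is_dir():                 # e.g. task.data.source=data/toy/images/train
        return ("folder", sorted(path.glob("*")))
    return ("file", path)             # single image or video file

print(resolve_source("0"))            # ('webcam', 0)
```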
- YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
- YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors
To get started with YOLOv9's developer mode, we recommend cloning this repository and installing the required dependencies:
```shell
git clone git@github.com:WongKinYiu/YOLO.git
cd YOLO
pip install -r requirements.txt
```
These are simple examples. For more customization details, please refer to the Notebooks and the lower-level modifications HOWTO. To train YOLO on your machine/dataset:
```shell
python yolo/lazy.py task=train dataset=**
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c weight=False # or more args
```

To perform transfer learning with YOLOv9:

```shell
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset={dataset_config} device={cpu, mps, cuda}
```

To use a model for object detection:

```shell
python yolo/lazy.py # if cloned from GitHub
python yolo/lazy.py task=inference \ # default task is inference
                    name=AnyNameYouWant \ # AnyNameYouWant
                    device=cpu \ # hardware: cuda, cpu, mps
                    model=v9-s \ # model version: v9-c, v9-m, v9-s
                    task.nms.min_confidence=0.1 \ # NMS config
                    task.fast_inference=onnx \ # onnx, trt, deploy
                    task.data.source=data/toy/images/train \ # file, dir, webcam
                    +quite=True # quiet output
yolo task.data.source={Any Source} # if pip installed
```
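The `task.nms.min_confidence` override sets the score threshold applied before non-maximum suppression. A minimal, class-agnostic sketch of that post-processing stage (the `iou` and `nms` helpers below are illustrative only, not YOLO's actual implementation, which is also class- and batch-aware):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(detections, min_confidence=0.1, iou_threshold=0.5):
    """Greedy NMS: drop low-score boxes, then suppress overlapping ones."""
    candidates = sorted(
        (d for d in detections if d["score"] >= min_confidence),
        key=lambda d: d["score"], reverse=True)
    kept = []
    for d in candidates:
        if all(iou(d["bbox"], k["bbox"]) < iou_threshold for k in kept):
            kept.append(d)
    return kept

detections = [
    {"bbox": [0, 0, 10, 10], "score": 0.90},
    {"bbox": [1, 1, 10, 10], "score": 0.80},   # suppressed: IoU 0.81 with the first box
    {"bbox": [20, 20, 30, 30], "score": 0.60},
    {"bbox": [5, 5, 6, 6], "score": 0.05},     # dropped by min_confidence=0.1
]
kept = nms(detections)
print(len(kept))  # 2
```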
```shell
yolo task=inference task.data.source={Any}
```

To validate model performance, or to generate a JSON file in COCO format:

```shell
python yolo/lazy.py task=validation
python yolo/lazy.py task=validation dataset=toy
```

Contributions to the YOLO project are welcome! See CONTRIBUTING for guidelines on how to contribute.
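As a closing note on the validation output: the standard COCO detection-results layout is a list of `{image_id, category_id, bbox, score}` records, so the generated JSON can be inspected with ordinary tooling. A minimal sketch, assuming that layout (the sample data and the `summarize_results` helper are made up for illustration):

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def summarize_results(path, min_score=0.1):
    """Count detections per category in a COCO detection-results JSON file."""
    detections = json.loads(Path(path).read_text())
    return Counter(d["category_id"] for d in detections if d["score"] >= min_score)

# A small sample file in place of real validation output (values are made up):
sample = [
    {"image_id": 1, "category_id": 0, "bbox": [10, 10, 50, 80], "score": 0.90},
    {"image_id": 1, "category_id": 0, "bbox": [60, 20, 40, 70], "score": 0.05},
    {"image_id": 2, "category_id": 2, "bbox": [5, 5, 30, 30], "score": 0.80},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
counts = summarize_results(f.name)
print(counts)  # Counter({0: 1, 2: 1})
```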