# Introduction

This directory contains Python software and an iOS App developed by Ultralytics LLC, and **is freely available for redistribution under the GPL-3.0 license**. For more information please visit https://www.ultralytics.com.

# Description

The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. The code works on Linux, macOS and Windows. Training is done on the COCO dataset by default: https://cocodataset.org/#home. **Credit to Joseph Redmon for YOLO** (https://pjreddie.com/darknet/yolo/) and to **Erik Lindernoren for the PyTorch implementation** this work is based on (https://github.com/eriklindernoren/PyTorch-YOLOv3).

# Requirements

Python 3.7 or later with the following `pip3 install -U -r requirements.txt` packages:

- `numpy`
- `torch >= 1.0.0`
- `opencv-python`

# Tutorials

* [Transfer Learning](https://github.com/ultralytics/yolov3/wiki/Example:-Transfer-Learning)
* [Train Single Image](https://github.com/ultralytics/yolov3/wiki/Example:-Train-Single-Image)
* [Train Single Class](https://github.com/ultralytics/yolov3/wiki/Example:-Train-Single-Class)
* [Train Custom Data](https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data)

# Training

**Start Training:** Run `train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh`.

**Resume Training:** Run `train.py --resume` to resume training from the latest checkpoint `weights/latest.pt`.

Each epoch trains on 120,000 images from the train and validate COCO sets, and tests on 5,000 images from the COCO validate set. Default training settings produce the loss plots below, with a **training speed of 0.6 s/batch on a 1080 Ti (18 epochs/day)** or 0.45 s/batch on a 2080 Ti.

`from utils import utils; utils.plot_results()`

![Alt](https://user-images.githubusercontent.com/26833433/53494085-3251aa00-3a9d-11e9-8af7-8c08cf40d70b.png "train.py results")

## Image Augmentation

`datasets.py` applies random OpenCV-powered (https://opencv.org/) augmentation to the input images in accordance with the following specifications. Augmentation is applied **only** during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples pictured below. A simplified code sketch of the affine portion appears after the Webcam section below.

Augmentation | Description
--- | ---
Translation | +/- 10% (vertical and horizontal)
Rotation | +/- 5 degrees
Shear | +/- 2 degrees (vertical and horizontal)
Scale | +/- 10%
Reflection | 50% probability (horizontal-only)
H**S**V Saturation | +/- 50%
HS**V** Intensity | +/- 50%

## Speed

https://cloud.google.com/deep-learning-vm/

**Machine type:** n1-highmem-4 (4 vCPUs, 26 GB memory)
**CPU platform:** Intel Skylake
**GPUs:** 1-4 x NVIDIA Tesla P100
**HDD:** 100 GB SSD

GPUs (P100) | `batch_size` (images) | speed (s/batch) | COCO epoch (min/epoch)
--- | --- | --- | ---
1 | 24 | 0.84 | 70
2 | 48 | 1.27 | 53
4 | 96 | 2.11 | 44

# Inference

Run `detect.py` to apply trained weights to an image, such as `zidane.jpg` from the `data/samples` folder:

**YOLOv3:** `detect.py --cfg cfg/yolov3.cfg --weights weights/yolov3.pt`

**YOLOv3-tiny:** `detect.py --cfg cfg/yolov3-tiny.cfg --weights weights/yolov3-tiny.pt`

## Webcam

Run `detect.py` with `webcam=True` to show a live webcam feed.
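The snippet below is a minimal, illustrative capture loop for a webcam mode of this kind, assuming OpenCV is installed. `run_inference` is a hypothetical placeholder standing in for the forward pass and NMS that `detect.py` performs on each frame; it is not a function from this repo.

```python
# Minimal webcam-capture sketch (illustrative only): grabs frames with OpenCV
# and hands them to a placeholder `run_inference` function standing in for the
# YOLOv3 forward pass + NMS performed inside detect.py.
import cv2


def run_inference(frame):
    # Placeholder: replace with the actual model call that returns an annotated frame.
    return frame


cap = cv2.VideoCapture(0)  # open the default webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    annotated = run_inference(frame)
    cv2.imshow('YOLOv3 webcam', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```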
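As referenced in the Image Augmentation section above, the sketch below shows one way a bounding-box-aware random affine transform can be written with OpenCV and NumPy. The parameter ranges mirror the augmentation table, but `random_affine` here is a simplified illustration, not the implementation in `datasets.py`; HSV jitter and horizontal reflection are omitted for brevity.

```python
# Illustrative bounding-box-aware affine augmentation sketch (not the repo's datasets.py).
import math
import random

import cv2
import numpy as np


def random_affine(img, boxes, degrees=5, translate=0.10, scale=0.10, shear=2):
    """img: HxWx3 uint8 image, boxes: Nx4 array of pixel-space [x1, y1, x2, y2]."""
    h, w = img.shape[:2]

    # Combined rotation/scale matrix about the image center (2x3).
    a = random.uniform(-degrees, degrees)
    s = random.uniform(1 - scale, 1 + scale)
    R = cv2.getRotationMatrix2D(center=(w / 2, h / 2), angle=a, scale=s)

    # Random translation as a fraction of image size.
    R[0, 2] += random.uniform(-translate, translate) * w
    R[1, 2] += random.uniform(-translate, translate) * h

    # Approximate shear by perturbing the off-diagonal terms.
    R[0, 1] += math.tan(math.radians(random.uniform(-shear, shear)))
    R[1, 0] += math.tan(math.radians(random.uniform(-shear, shear)))

    img = cv2.warpAffine(img, R, dsize=(w, h), borderValue=(128, 128, 128))

    # Transform the 4 corners of each box, then take the new axis-aligned extents.
    if len(boxes):
        n = len(boxes)
        corners = np.ones((n * 4, 3))
        # Corner order per box: (x1,y1), (x2,y2), (x1,y2), (x2,y1)
        corners[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(n * 4, 2)
        corners = corners @ R.T  # (n*4, 2)
        x, y = corners[:, 0].reshape(n, 4), corners[:, 1].reshape(n, 4)
        boxes = np.stack([x.min(1), y.min(1), x.max(1), y.max(1)], axis=1)
        boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, w)
        boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, h)
    return img, boxes
```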
# Pretrained Weights

**Darknet** format:
- https://pjreddie.com/media/files/yolov3.weights
- https://pjreddie.com/media/files/yolov3-tiny.weights

**PyTorch** format:
- https://drive.google.com/drive/folders/1uxgUBemJVw9wZsdpboYbzUN4bcRhsuAI

# mAP

Run `test.py --save-json --conf-thres 0.005` to test the official YOLOv3 weights `weights/yolov3.weights` against the 5,000 validation images. Compare to .579 at 608 x 608 reported in darknet (https://arxiv.org/abs/1804.02767). Run `test.py --weights weights/latest.pt` to validate against the latest training results. Hyperparameter settings and loss equation changes affect these results significantly, and additional trade studies may be needed to further improve this.

```bash
sudo rm -rf yolov3 && git clone https://github.com/ultralytics/yolov3
# bash yolov3/data/get_coco_dataset.sh
sudo rm -rf cocoapi && git clone https://github.com/cocodataset/cocoapi && cd cocoapi/PythonAPI && make && cd ../.. && cp -r cocoapi/PythonAPI/pycocotools yolov3
cd yolov3 && python3 test.py --save-json --conf-thres 0.005

...
Namespace(batch_size=32, cfg='cfg/yolov3.cfg', conf_thres=0.005, data_cfg='cfg/coco.data', img_size=416, iou_thres=0.5, nms_thres=0.45, save_json=True, weights='weights/yolov3.weights')

loading annotations into memory...
Done (t=4.17s)
creating index...
index created!
Loading and preparing results...
DONE (t=1.75s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=39.30s).
Accumulating evaluation results...
DONE (t=4.63s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.307
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.545
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.309
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.140
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.333
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.453
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.266
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.396
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.415
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.222
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.449
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.575
```

# Contact

For questions or comments please contact Glenn Jocher at glenn.jocher@ultralytics.com or visit us at https://contact.ultralytics.com.