diff --git a/yolov3/CONTRIBUTING.md b/yolov3/CONTRIBUTING.md index 8e83398b..0ef52f63 100644 --- a/yolov3/CONTRIBUTING.md +++ b/yolov3/CONTRIBUTING.md @@ -1,6 +1,6 @@ ## Contributing to YOLOv3 🚀 -We love your input! We want to make contributing to as easy and transparent as possible, whether it's: +We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible, whether it's: - Reporting a bug - Discussing the current state of the code @@ -8,7 +8,7 @@ We love your input! We want to make contributing to as easy and transparent as p - Proposing a new feature - Becoming a maintainer -works so well due to our combined community effort, and for every small improvement you contribute you will be +YOLOv3 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃! ## Submitting a Pull Request (PR) đŸ› ī¸ @@ -18,72 +18,73 @@ Submitting a PR is easy! This example shows how to submit a PR for updating `req ### 1. Select File to Update Select `requirements.txt` to update by clicking on it in GitHub. -

PR_step1

### 2. Click 'Edit this file' -The button is in the top-right corner. - +Button is in top-right corner.

PR_step2

### 3. Make Changes -Change the `matplotlib` version from `3.2.2` to `3.3`. - +Change `matplotlib` version from `3.2.2` to `3.3`.

PR_step3

### 4. Preview Changes and Submit PR Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose -changes** button. All done, your PR is now submitted to for review and approval 😃! - +changes** button. All done, your PR is now submitted to YOLOv3 for review and approval 😃!

PR_step4

### PR recommendations To allow your work to be integrated as seamlessly as possible, we advise you to: -- ✅ Verify your PR is **up-to-date** with `ultralytics/yolov5` `master` branch. If your PR is behind you can update - your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally. +- ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master an + automatic [GitHub actions](https://github.com/ultralytics/yolov3/blob/master/.github/workflows/rebase.yml) rebase may + be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' + with the name of your local branch: -

Screenshot 2022-08-29 at 22 47 15

+ ```bash + git remote add upstream https://github.com/ultralytics/yolov3.git + git fetch upstream + git checkout feature # <----- replace 'feature' with local branch name + git merge upstream/master + git push -u origin -f + ``` - ✅ Verify all Continuous Integration (CI) **checks are passing**. - -

Screenshot 2022-08-29 at 22 47 03

- - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee ## Submitting a Bug Report 🐛 -If you spot a problem with please submit a Bug Report! +If you spot a problem with YOLOv3 please submit a Bug Report! For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few -short guidelines below to help users provide what we need to get started. +short guidelines below to help users provide what we need in order to get started. When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces the problem should be: -- ✅ **Minimal** – Use as little code as possible that still produces the same problem -- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself -- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem +* ✅ **Minimal** – Use as little code as possible that still produces the same problem +* ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself +* ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be: -- ✅ **Current** – Verify that your code is up-to-date with the current - GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new +* ✅ **Current** – Verify that your code is up-to-date with current + GitHub [master](https://github.com/ultralytics/yolov3/tree/master), and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits. -- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this +* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code âš ī¸. -If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 -**Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and provide +If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 ** +Bug Report** [template](https://github.com/ultralytics/yolov3/issues/new/choose) and providing a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better understand and diagnose your problem. diff --git a/yolov3/README.md b/yolov3/README.md index a4280412..1d961c3c 100644 --- a/yolov3/README.md +++ b/yolov3/README.md @@ -1,103 +1,94 @@
-

- - -

- -[English](README.md) | [įŽ€äŊ“中文](README.zh-CN.md) +

+ + +


-
- YOLOv3 CI - YOLOv3 Citation - Docker Pulls -
- Run on Gradient - Open In Colab - Open In Kaggle -
-
- -YOLOv3 🚀 is the world's most loved vision AI, representing Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. - -To request an Enterprise License please complete the form at Ultralytics Licensing. - -
+ CI CPU testing + YOLOv3 Citation + Docker Pulls +
+ Open In Colab + Open In Kaggle + Join Forum

- -##
YOLOv8 🚀 NEW
- -We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our NEW cutting-edge, state-of-the-art (SOTA) model -released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**. -YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of -object detection, image segmentation and image classification tasks. - -See the [YOLOv8 Docs](https://docs.ultralytics.com) for details and get started with: - -```commandline -pip install ultralytics -``` -
+ +
+

+YOLOv3 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics + open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. +

+ + + ##
Documentation
-See the [YOLOv3 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. See below for quickstart examples. +See the [YOLOv3 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. + +##
Quick Start Examples
Install

-Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
-[**Python>=3.7.0**](https://www.python.org/) environment, including
-[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
+[**Python>=3.6.0**](https://www.python.org/) is required with all
+[requirements.txt](https://github.com/ultralytics/yolov3/blob/master/requirements.txt) installed including
+[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
+
 ```bash
-git clone https://github.com/ultralytics/yolov3  # clone
-cd yolov3
-pip install -r requirements.txt  # install
+$ git clone https://github.com/ultralytics/yolov3
+$ cd yolov3
+$ pip install -r requirements.txt
 ```
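As a quick sanity check after installation, the snippet below (an illustrative sketch, not part of the repository's documented steps) prints the interpreter, PyTorch and CUDA status that the requirements resolved to:

```python
import platform

import torch

# Report the Python/PyTorch versions and whether a CUDA device is visible;
# useful context to include when filing a bug report.
print(f'Python {platform.python_version()}, torch {torch.__version__}, '
      f'CUDA available: {torch.cuda.is_available()}')
```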
-
+
Inference

-YOLOv3 [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
-YOLOv3 [release](https://github.com/ultralytics/yolov5/releases).
+Inference with YOLOv3 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
+from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases).

 ```python
 import torch

 # Model
-model = torch.hub.load("ultralytics/yolov3", "yolov3")  # or yolov5n - yolov5x6, custom
+model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # or yolov3-spp, yolov3-tiny, custom

 # Images
-img = "https://ultralytics.com/images/zidane.jpg"  # or file, Path, PIL, OpenCV, numpy, list
+img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

 # Inference
 results = model(img)

@@ -108,23 +99,22 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
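# A few common follow-ups on `results`, the Detections object returned above.
# Hedged sketch: these accessors come from the yolov5-style AutoShape wrapper this
# codebase mirrors and are not spelled out in this README.
df = results.pandas().xyxy[0]      # detections for image 0 as a pandas DataFrame
print(df[['name', 'confidence']])  # class names with confidence scores
results.save()                     # write the annotated image to runs/detect/exp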
+ +
Inference with detect.py

-`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
-the latest YOLOv3 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+`detect.py` runs inference on a variety of sources, downloading models automatically from
+the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases) and saving results to `runs/detect`.

 ```bash
-python detect.py --weights yolov5s.pt --source 0  # webcam
-                          img.jpg  # image
-                          vid.mp4  # video
-                          screen  # screenshot
-                          path/  # directory
-                          list.txt  # list of images
-                          list.streams  # list of streams
-                          'path/*.jpg'  # glob
-                          'https://youtu.be/Zgi9g1ksQHc'  # YouTube
-                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
+$ python detect.py --source 0  # webcam
+                   img.jpg  # image
+                   vid.mp4  # video
+                   path/  # directory
+                   path/*.jpg  # glob
+                   'https://youtu.be/Zgi9g1ksQHc'  # YouTube
+                   'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
 ```
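The `--source 0` webcam mode can also be approximated directly from Python with the PyTorch Hub model. The sketch below is illustrative rather than the repository's `detect.py`; it assumes `opencv-python` is installed and skips detect.py's saving and export options:

```python
import cv2
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # pretrained Hub model

cap = cv2.VideoCapture(0)  # webcam index 0, the same source as `--source 0`
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # OpenCV gives BGR; the model expects RGB
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow('YOLOv3', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```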
@@ -132,21 +122,7 @@ python detect.py --weights yolov5s.pt --source 0 #
Training -The commands below reproduce YOLOv3 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) -results. [Models](https://github.com/ultralytics/yolov5/tree/master/models) -and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest -YOLOv3 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are -1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster). Use the -largest `--batch-size` possible, or pass `--batch-size -1` for -YOLOv3 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB. -```bash -python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128 - yolov5s 64 - yolov5m 40 - yolov5l 24 - yolov5x 16 -``` @@ -155,270 +131,20 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml -
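For reference, the training entry point can usually be driven from Python as well. This is a hedged sketch that assumes `train.py` exposes the yolov5-style `run()` helper; verify the argument names against `python train.py --help` before relying on it:

```python
import train  # importable when executed from the repository root

# Roughly equivalent to: python train.py --data coco128.yaml --weights yolov3.pt --epochs 3 --batch-size 16
train.run(data='coco128.yaml', weights='yolov3.pt', epochs=3, batch_size=16)
```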
Tutorials -- [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)  🚀 RECOMMENDED -- [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)  â˜˜ī¸ +* [Train Custom Data](https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data)  🚀 RECOMMENDED +* [Tips for Best Training Results](https://github.com/ultralytics/yolov3/wiki/Tips-for-Best-Training-Results)  â˜˜ī¸ RECOMMENDED -- [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475) -- [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) 🌟 NEW -- [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀 -- [NVIDIA Jetson Nano Deployment](https://github.com/ultralytics/yolov5/issues/9627) 🌟 NEW -- [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303) -- [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318) -- [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304) -- [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607) -- [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) -- [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998) 🌟 NEW -- [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975)  🌟 NEW -- [ClearML Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) 🌟 NEW -- [YOLOv3 with Neural Magic's Deepsparse](https://bit.ly/yolov5-neuralmagic) 🌟 NEW -- [Comet Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/comet) 🌟 NEW - -
- -##
Integrations
- -
- - -
-
- -
- - - - - - - - - - - -
- -| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW | -| :--------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: | -| Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv3 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv3 models, resume training, and interactively visualise and debug predictions | Run YOLOv3 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) | - -##
Ultralytics HUB
- -Experience seamless AI with [Ultralytics HUB](https://bit.ly/ultralytics_hub) ⭐, the all-in-one solution for data visualization, YOLO 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://ultralytics.com/app_install). Start your journey for **Free** now! - - - - -##
Why YOLOv3
- -YOLOv3 has been designed to be super easy to get started and simple to learn. We prioritize real-world results. - -

-
- YOLOv3-P5 640 Figure - -

-
-
- Figure Notes - -- **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536. -- **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32. -- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8. -- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` - -
- -### Pretrained Checkpoints - -| Model | size
(pixels) | mAPval
50-95 | mAPval
50 | Speed
CPU b1
(ms) | Speed
V100 b1
(ms) | Speed
V100 b32
(ms) | params
(M) | FLOPs
@640 (B) | -| ----------------------------------------------------------------------------------------------- | --------------------- | -------------------- | ----------------- | ---------------------------- | ----------------------------- | ------------------------------ | ------------------ | ---------------------- | -| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** | -| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 | -| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 | -| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 | -| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 | -| | | | | | | | | | -| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 | -| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 | -| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 | -| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 | -| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)
+ [TTA] | 1280
1536 | 55.0
**55.8** | 72.7
**72.7** | 3136
- | 26.2
- | 19.4
- | 140.7
- | 209.8
- | - -
- Table Notes - -- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml). -- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` -- **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.
Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1` -- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.
Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment` - -
- -##
Segmentation
- -Our new YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) instance segmentation models are the fastest and most accurate in the world, beating all current [SOTA benchmarks](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco). We've made them super simple to train, validate and deploy. See full details in our [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and visit our [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart tutorials. - -
- Segmentation Checkpoints - -
- - -
- -We trained YOLOv5 segmentations models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for easy reproducibility. - -| Model | size
(pixels) | mAPbox
50-95 | mAPmask
50-95 | Train time
300 epochs
A100 (hours) | Speed
ONNX CPU
(ms) | Speed
TRT A100
(ms) | params
(M) | FLOPs
@640 (B) | -| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------ | ------------------------------ | ------------------ | ---------------------- | -| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** | -| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 | -| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 | -| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 | -| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 | - -- All checkpoints are trained to 300 epochs with SGD optimizer with `lr0=0.01` and `weight_decay=5e-5` at image size 640 and all default settings.
Runs logged to https://wandb.ai/glenn-jocher/YOLOv5_v70_official -- **Accuracy** values are for single-model single-scale on COCO dataset.
Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt` -- **Speed** averaged over 100 inference images using a [Colab Pro](https://colab.research.google.com/signup) A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1ms per image).
Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1` -- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`.
Reproduce by `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half` - -
- -
- Segmentation Usage Examples  Open In Colab - -### Train - -YOLOv5 segmentation training supports auto-download COCO128-seg segmentation dataset with `--data coco128-seg.yaml` argument and manual download of COCO-segments dataset with `bash data/scripts/get_coco.sh --train --val --segments` and then `python train.py --data coco.yaml`. - -```bash -# Single-GPU -python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 - -# Multi-GPU DDP -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3 -``` - -### Val - -Validate YOLOv5s-seg mask mAP on COCO dataset: - -```bash -bash data/scripts/get_coco.sh --val --segments # download COCO val segments split (780MB, 5000 images) -python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # validate -``` - -### Predict - -Use pretrained YOLOv5m-seg.pt to predict bus.jpg: - -```bash -python segment/predict.py --weights yolov5m-seg.pt --data data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5m-seg.pt" -) # load from PyTorch Hub (WARNING: inference not yet supported) -``` - -| ![zidane](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![bus](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) | -| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | - -### Export - -Export YOLOv5s-seg model to ONNX and TensorRT: - -```bash -python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0 -``` - -
- -##
Classification
- -YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) brings support for classification model training, validation and deployment! See full details in our [Release Notes](https://github.com/ultralytics/yolov5/releases/v6.2) and visit our [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart tutorials. - -
- Classification Checkpoints - -
- -We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and we trained ResNet and EfficientNet models alongside with the same default training settings to compare. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) for easy reproducibility. - -| Model | size
(pixels) | acc
top1 | acc
top5 | Training
90 epochs
4xA100 (hours) | Speed
ONNX CPU
(ms) | Speed
TensorRT V100
(ms) | params
(M) | FLOPs
@224 (B) | -| -------------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | -------------------------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ---------------------- | -| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** | -| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 | -| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 | -| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 | -| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 | -| | | | | | | | | | -| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 | -| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 | -| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 | -| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 | -| | | | | | | | | | -| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 | -| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 | -| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 | -| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 | - -
- Table Notes (click to expand) - -- All checkpoints are trained to 90 epochs with SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at image size 224 and all default settings.
Runs logged to https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2 -- **Accuracy** values are for single-model single-scale on [ImageNet-1k](https://www.image-net.org/index.php) dataset.
Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224` -- **Speed** averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance.
Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1` -- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`.
Reproduce by `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224` - -
-
- -
- Classification Usage Examples  Open In Colab - -### Train - -YOLOv5 classification training supports auto-download of MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the `--data` argument. To start training on MNIST for example use `--data mnist`. - -```bash -# Single-GPU -python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128 - -# Multi-GPU DDP -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3 -``` - -### Val - -Validate YOLOv5m-cls accuracy on ImageNet-1k dataset: - -```bash -bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images) -python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate -``` - -### Predict - -Use pretrained YOLOv5s-cls.pt to predict bus.jpg: - -```bash -python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5s-cls.pt" -) # load from PyTorch Hub -``` - -### Export - -Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT: - -```bash -python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224 -``` +* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)  🌟 NEW +* [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975)  🌟 NEW +* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475) +* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)  ⭐ NEW +* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀 +* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303) +* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318) +* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304) +* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607) +* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)  ⭐ NEW +* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
@@ -427,67 +153,121 @@ python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --inclu Get started in seconds with our verified environments. Click each icon below for details. +##
Integrations
+ + + +|Weights and Biases|Roboflow ⭐ NEW| +|:-:|:-:| +|Automatically track and visualize all your YOLOv3 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | + + +##
Why YOLOv3
+ +

+
+ YOLOv3-P5 640 Figure (click to expand) + +

+
+
+ Figure Notes (click to expand) + +* **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536. +* **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32. +* **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8. +* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` +
+ +### Pretrained Checkpoints + +[assets]: https://github.com/ultralytics/yolov5/releases +[TTA]: https://github.com/ultralytics/yolov5/issues/303 + +|Model |size
(pixels) |mAPval
0.5:0.95 |mAPval
0.5 |Speed
CPU b1
(ms) |Speed
V100 b1
(ms) |Speed
V100 b32
(ms) |params
(M) |FLOPs
@640 (B) +|--- |--- |--- |--- |--- |--- |--- |--- |--- +|[YOLOv5n][assets] |640 |28.4 |46.0 |**45** |**6.3**|**0.6**|**1.9**|**4.5** +|[YOLOv5s][assets] |640 |37.2 |56.0 |98 |6.4 |0.9 |7.2 |16.5 +|[YOLOv5m][assets] |640 |45.2 |63.9 |224 |8.2 |1.7 |21.2 |49.0 +|[YOLOv5l][assets] |640 |48.8 |67.2 |430 |10.1 |2.7 |46.5 |109.1 +|[YOLOv5x][assets] |640 |50.7 |68.9 |766 |12.1 |4.8 |86.7 |205.7 +| | | | | | | | | +|[YOLOv5n6][assets] |1280 |34.0 |50.7 |153 |8.1 |2.1 |3.2 |4.6 +|[YOLOv5s6][assets] |1280 |44.5 |63.0 |385 |8.2 |3.6 |16.8 |12.6 +|[YOLOv5m6][assets] |1280 |51.0 |69.0 |887 |11.1 |6.8 |35.7 |50.0 +|[YOLOv5l6][assets] |1280 |53.6 |71.6 |1784 |15.8 |10.5 |76.8 |111.4 +|[YOLOv5x6][assets]
+ [TTA][TTA]|1280
1536 |54.7
**55.4** |**72.4**
72.3 |3136
- |26.2
- |19.4
- |140.7
- |209.8
- + +
+ Table Notes (click to expand) + +* All checkpoints are trained to 300 epochs with default settings and hyperparameters. +* **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` +* **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.
Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45` +* **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.
Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment` + +
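The TTA note above also has a PyTorch Hub counterpart. The sketch below assumes the `augment` flag of the yolov5-style AutoShape wrapper this codebase mirrors; it is not documented in this README:

```python
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3')
img = 'https://ultralytics.com/images/zidane.jpg'

# Test-Time Augmentation: the image is also run through flipped/scaled variants
# and the detections are merged, trading speed for a small accuracy gain.
results = model(img, augment=True)
results.print()
```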
+ ##
Contribute
We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv3 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors! - + - - - -##
License
- -YOLOv3 is available under two different licenses: - -- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details. -- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license). ##
Contact
-For YOLOv3 bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues) or the [Ultralytics Community Forum](https://community.ultralytics.com/). +For YOLOv3 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov3/issues). For business inquiries or +professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
-
-[tta]: https://github.com/ultralytics/yolov5/issues/303 + diff --git a/yolov3/data/Argoverse.yaml b/yolov3/data/Argoverse.yaml index a3b4511e..9be3ae79 100644 --- a/yolov3/data/Argoverse.yaml +++ b/yolov3/data/Argoverse.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/ by Argo AI +# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/ # Example usage: python train.py --data Argoverse.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── Argoverse ← downloads here (31.3 GB) +# └── Argoverse ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] @@ -14,15 +14,8 @@ val: Argoverse-1.1/images/val/ # val images (relative to 'path') 15062 images test: Argoverse-1.1/images/test/ # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview # Classes -names: - 0: person - 1: bicycle - 2: car - 3: motorcycle - 4: bus - 5: truck - 6: traffic_light - 7: stop_sign +nc: 8 # number of classes +names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign'] # class names # Download script/URL (optional) --------------------------------------------------------------------------------------- @@ -39,7 +32,7 @@ download: | for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv3 format..."): img_id = annot['image_id'] img_name = a['images'][img_id]['name'] - img_label_name = f'{img_name[:-3]}txt' + img_label_name = img_name[:-3] + "txt" cls = annot['category_id'] # instance class id x_center, y_center, width, height = annot['bbox'] @@ -63,7 +56,7 @@ download: | # Download - dir = Path(yaml['path']) # dataset root dir + dir = Path('../datasets/Argoverse') # dataset root dir urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip'] download(urls, dir=dir, delete=False) diff --git a/yolov3/data/GlobalWheat2020.yaml b/yolov3/data/GlobalWheat2020.yaml index 9df59b84..10a2d3fc 100644 --- a/yolov3/data/GlobalWheat2020.yaml +++ b/yolov3/data/GlobalWheat2020.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# Global Wheat 2020 dataset http://www.global-wheat.com/ by University of Saskatchewan +# Global Wheat 2020 dataset http://www.global-wheat.com/ # Example usage: python train.py --data GlobalWheat2020.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── GlobalWheat2020 ← downloads here (7.0 GB) +# └── GlobalWheat2020 ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] 
@@ -26,15 +26,14 @@ test: # test images (optional) 1276 images - images/uq_1 # Classes -names: - 0: wheat_head +nc: 1 # number of classes +names: ['wheat_head'] # class names # Download script/URL (optional) --------------------------------------------------------------------------------------- download: | from utils.general import download, Path - # Download dir = Path(yaml['path']) # dataset root dir urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip', diff --git a/yolov3/data/SKU-110K.yaml b/yolov3/data/SKU-110K.yaml index 9ebbb555..183d3637 100644 --- a/yolov3/data/SKU-110K.yaml +++ b/yolov3/data/SKU-110K.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19 by Trax Retail +# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19 # Example usage: python train.py --data SKU-110K.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── SKU-110K ← downloads here (13.6 GB) +# └── SKU-110K ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] @@ -14,8 +14,8 @@ val: val.txt # val images (relative to 'path') 588 images test: test.txt # test images (optional) 2936 images # Classes -names: - 0: object +nc: 1 # number of classes +names: ['object'] # class names # Download script/URL (optional) --------------------------------------------------------------------------------------- @@ -24,7 +24,6 @@ download: | from tqdm import tqdm from utils.general import np, pd, Path, download, xyxy2xywh - # Download dir = Path(yaml['path']) # dataset root dir parent = Path(dir.parent) # download dir diff --git a/yolov3/data/VisDrone.yaml b/yolov3/data/VisDrone.yaml index fae733bb..945f05b4 100644 --- a/yolov3/data/VisDrone.yaml +++ b/yolov3/data/VisDrone.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset by Tianjin University +# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset # Example usage: python train.py --data VisDrone.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── VisDrone ← downloads here (2.3 GB) +# └── VisDrone ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] 
@@ -14,17 +14,8 @@ val: VisDrone2019-DET-val/images # val images (relative to 'path') 548 images test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images # Classes -names: - 0: pedestrian - 1: people - 2: bicycle - 3: car - 4: van - 5: truck - 6: tricycle - 7: awning-tricycle - 8: bus - 9: motor +nc: 10 # number of classes +names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor'] # Download script/URL (optional) --------------------------------------------------------------------------------------- @@ -63,7 +54,7 @@ download: | 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip', 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip', 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip'] - download(urls, dir=dir, curl=True, threads=4) + download(urls, dir=dir) # Convert for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev': diff --git a/yolov3/data/coco.yaml b/yolov3/data/coco.yaml index 8dd3c7c8..1d89a3a0 100644 --- a/yolov3/data/coco.yaml +++ b/yolov3/data/coco.yaml @@ -1,107 +1,35 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# COCO 2017 dataset http://cocodataset.org by Microsoft +# COCO 2017 dataset http://cocodataset.org # Example usage: python train.py --data coco.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── coco ← downloads here (20.1 GB) +# └── coco ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco # dataset root dir train: train2017.txt # train images (relative to 'path') 118287 images -val: val2017.txt # val images (relative to 'path') 5000 images +val: val2017.txt # train images (relative to 'path') 5000 images test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794 # Classes -names: - 0: person - 1: bicycle - 2: car - 3: motorcycle - 4: airplane - 5: bus - 6: train - 7: truck - 8: boat - 9: traffic light - 10: fire hydrant - 11: stop sign - 12: parking meter - 13: bench - 14: bird - 15: cat - 16: dog - 17: horse - 18: sheep - 19: cow - 20: elephant - 21: bear - 22: zebra - 23: giraffe - 24: backpack - 25: umbrella - 26: handbag - 27: tie - 28: suitcase - 29: frisbee - 30: skis - 31: snowboard - 32: sports ball - 33: kite - 34: baseball bat - 35: baseball glove - 36: skateboard - 37: surfboard - 38: tennis racket - 39: bottle - 40: wine glass - 41: cup - 42: fork - 43: knife - 44: spoon - 45: bowl - 46: banana - 47: apple - 48: sandwich - 49: orange - 50: broccoli - 51: carrot - 52: hot dog - 53: pizza - 54: donut - 55: cake - 56: chair - 57: couch - 58: potted plant - 59: bed - 60: dining table - 61: toilet - 62: tv - 63: laptop - 64: mouse - 65: remote - 66: keyboard - 67: cell phone - 68: microwave - 69: oven - 70: toaster - 71: sink - 72: refrigerator - 73: book - 74: clock - 75: vase - 76: scissors - 77: teddy bear - 78: hair drier - 79: toothbrush +nc: 80 # number of classes +names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', + 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', + 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', + 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball 
glove', 'skateboard', 'surfboard', + 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', + 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', + 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', + 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', + 'hair drier', 'toothbrush'] # class names # Download script/URL (optional) download: | from utils.general import download, Path - # Download labels segments = False # segment or box labels dir = Path(yaml['path']) # dataset root dir diff --git a/yolov3/data/coco128.yaml b/yolov3/data/coco128.yaml index 6cb076ac..19e8d800 100644 --- a/yolov3/data/coco128.yaml +++ b/yolov3/data/coco128.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics +# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) # Example usage: python train.py --data coco128.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── coco128 ← downloads here (7 MB) +# └── coco128 ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] @@ -14,87 +14,16 @@ val: images/train2017 # val images (relative to 'path') 128 images test: # test images (optional) # Classes -names: - 0: person - 1: bicycle - 2: car - 3: motorcycle - 4: airplane - 5: bus - 6: train - 7: truck - 8: boat - 9: traffic light - 10: fire hydrant - 11: stop sign - 12: parking meter - 13: bench - 14: bird - 15: cat - 16: dog - 17: horse - 18: sheep - 19: cow - 20: elephant - 21: bear - 22: zebra - 23: giraffe - 24: backpack - 25: umbrella - 26: handbag - 27: tie - 28: suitcase - 29: frisbee - 30: skis - 31: snowboard - 32: sports ball - 33: kite - 34: baseball bat - 35: baseball glove - 36: skateboard - 37: surfboard - 38: tennis racket - 39: bottle - 40: wine glass - 41: cup - 42: fork - 43: knife - 44: spoon - 45: bowl - 46: banana - 47: apple - 48: sandwich - 49: orange - 50: broccoli - 51: carrot - 52: hot dog - 53: pizza - 54: donut - 55: cake - 56: chair - 57: couch - 58: potted plant - 59: bed - 60: dining table - 61: toilet - 62: tv - 63: laptop - 64: mouse - 65: remote - 66: keyboard - 67: cell phone - 68: microwave - 69: oven - 70: toaster - 71: sink - 72: refrigerator - 73: book - 74: clock - 75: vase - 76: scissors - 77: teddy bear - 78: hair drier - 79: toothbrush +nc: 80 # number of classes +names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', + 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', + 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', + 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', + 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', + 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', + 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', + 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', + 'hair 
drier', 'toothbrush'] # class names # Download script/URL (optional) diff --git a/yolov3/data/custom-yolov3.yaml b/yolov3/data/custom-yolov3.yaml new file mode 100644 index 00000000..8a5982ae --- /dev/null +++ b/yolov3/data/custom-yolov3.yaml @@ -0,0 +1,18 @@ +# YOLOv3 🚀 by Ultralytics, GPL-3.0 license +# COCO 2017 dataset http://cocodataset.org +# Example usage: python train.py --data coco.yaml +# parent +# ├── yolov3 +# └── datasets +# └── coco ← downloads here + + +# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] +path: ../pipe-dataset/ # dataset root dir +train: train/images # train images (relative to 'path') 118287 images +val: val/images # train images (relative to 'path') 5000 images +# test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794 + +# Classes +nc: 1 # number of classes +names: ['pipe'] # class names diff --git a/yolov3/data/hyps/hyp.scratch-high.yaml b/yolov3/data/hyps/hyp.scratch-high.yaml index 05cd35cb..07ad9fc2 100644 --- a/yolov3/data/hyps/hyp.scratch-high.yaml +++ b/yolov3/data/hyps/hyp.scratch-high.yaml @@ -4,7 +4,7 @@ # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) -lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf) +lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf) momentum: 0.937 # SGD momentum/Adam beta1 weight_decay: 0.0005 # optimizer weight decay 5e-4 warmup_epochs: 3.0 # warmup epochs (fractions ok) diff --git a/yolov3/data/hyps/hyp.scratch.yaml b/yolov3/data/hyps/hyp.scratch.yaml new file mode 100644 index 00000000..31f6d142 --- /dev/null +++ b/yolov3/data/hyps/hyp.scratch.yaml @@ -0,0 +1,34 @@ +# YOLOv5 🚀 by Ultralytics, GPL-3.0 license +# Hyperparameters for COCO training from scratch +# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300 +# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials + +lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) +lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf) +momentum: 0.937 # SGD momentum/Adam beta1 +weight_decay: 0.0005 # optimizer weight decay 5e-4 +warmup_epochs: 3.0 # warmup epochs (fractions ok) +warmup_momentum: 0.8 # warmup initial momentum +warmup_bias_lr: 0.1 # warmup initial bias lr +box: 0.05 # box loss gain +cls: 0.5 # cls loss gain +cls_pw: 1.0 # cls BCELoss positive_weight +obj: 1.0 # obj loss gain (scale with pixels) +obj_pw: 1.0 # obj BCELoss positive_weight +iou_t: 0.20 # IoU training threshold +anchor_t: 4.0 # anchor-multiple threshold +# anchors: 3 # anchors per output layer (0 to ignore) +fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5) +hsv_h: 0.015 # image HSV-Hue augmentation (fraction) +hsv_s: 0.7 # image HSV-Saturation augmentation (fraction) +hsv_v: 0.4 # image HSV-Value augmentation (fraction) +degrees: 0.0 # image rotation (+/- deg) +translate: 0.1 # image translation (+/- fraction) +scale: 0.5 # image scale (+/- gain) +shear: 0.0 # image shear (+/- deg) +perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 +flipud: 0.0 # image flip up-down (probability) +fliplr: 0.5 # image flip left-right (probability) +mosaic: 1.0 # image mosaic (probability) +mixup: 0.0 # image mixup (probability) +copy_paste: 0.0 # segment copy-paste (probability) diff --git a/yolov3/data/objects365.yaml b/yolov3/data/objects365.yaml index 353635a3..3472f0f6 
100644 --- a/yolov3/data/objects365.yaml +++ b/yolov3/data/objects365.yaml @@ -1,10 +1,10 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# Objects365 dataset https://www.objects365.org/ by Megvii +# Objects365 dataset https://www.objects365.org/ # Example usage: python train.py --data Objects365.yaml # parent -# ├── yolov5 +# ├── yolov3 # └── datasets -# └── Objects365 ← downloads here (712 GB = 367G data + 345G zips) +# └── Objects365 ← downloads here # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] @@ -14,382 +14,56 @@ val: images/val # val images (relative to 'path') 80000 images test: # test images (optional) # Classes -names: - 0: Person - 1: Sneakers - 2: Chair - 3: Other Shoes - 4: Hat - 5: Car - 6: Lamp - 7: Glasses - 8: Bottle - 9: Desk - 10: Cup - 11: Street Lights - 12: Cabinet/shelf - 13: Handbag/Satchel - 14: Bracelet - 15: Plate - 16: Picture/Frame - 17: Helmet - 18: Book - 19: Gloves - 20: Storage box - 21: Boat - 22: Leather Shoes - 23: Flower - 24: Bench - 25: Potted Plant - 26: Bowl/Basin - 27: Flag - 28: Pillow - 29: Boots - 30: Vase - 31: Microphone - 32: Necklace - 33: Ring - 34: SUV - 35: Wine Glass - 36: Belt - 37: Monitor/TV - 38: Backpack - 39: Umbrella - 40: Traffic Light - 41: Speaker - 42: Watch - 43: Tie - 44: Trash bin Can - 45: Slippers - 46: Bicycle - 47: Stool - 48: Barrel/bucket - 49: Van - 50: Couch - 51: Sandals - 52: Basket - 53: Drum - 54: Pen/Pencil - 55: Bus - 56: Wild Bird - 57: High Heels - 58: Motorcycle - 59: Guitar - 60: Carpet - 61: Cell Phone - 62: Bread - 63: Camera - 64: Canned - 65: Truck - 66: Traffic cone - 67: Cymbal - 68: Lifesaver - 69: Towel - 70: Stuffed Toy - 71: Candle - 72: Sailboat - 73: Laptop - 74: Awning - 75: Bed - 76: Faucet - 77: Tent - 78: Horse - 79: Mirror - 80: Power outlet - 81: Sink - 82: Apple - 83: Air Conditioner - 84: Knife - 85: Hockey Stick - 86: Paddle - 87: Pickup Truck - 88: Fork - 89: Traffic Sign - 90: Balloon - 91: Tripod - 92: Dog - 93: Spoon - 94: Clock - 95: Pot - 96: Cow - 97: Cake - 98: Dinning Table - 99: Sheep - 100: Hanger - 101: Blackboard/Whiteboard - 102: Napkin - 103: Other Fish - 104: Orange/Tangerine - 105: Toiletry - 106: Keyboard - 107: Tomato - 108: Lantern - 109: Machinery Vehicle - 110: Fan - 111: Green Vegetables - 112: Banana - 113: Baseball Glove - 114: Airplane - 115: Mouse - 116: Train - 117: Pumpkin - 118: Soccer - 119: Skiboard - 120: Luggage - 121: Nightstand - 122: Tea pot - 123: Telephone - 124: Trolley - 125: Head Phone - 126: Sports Car - 127: Stop Sign - 128: Dessert - 129: Scooter - 130: Stroller - 131: Crane - 132: Remote - 133: Refrigerator - 134: Oven - 135: Lemon - 136: Duck - 137: Baseball Bat - 138: Surveillance Camera - 139: Cat - 140: Jug - 141: Broccoli - 142: Piano - 143: Pizza - 144: Elephant - 145: Skateboard - 146: Surfboard - 147: Gun - 148: Skating and Skiing shoes - 149: Gas stove - 150: Donut - 151: Bow Tie - 152: Carrot - 153: Toilet - 154: Kite - 155: Strawberry - 156: Other Balls - 157: Shovel - 158: Pepper - 159: Computer Box - 160: Toilet Paper - 161: Cleaning Products - 162: Chopsticks - 163: Microwave - 164: Pigeon - 165: Baseball - 166: Cutting/chopping Board - 167: Coffee Table - 168: Side Table - 169: Scissors - 170: Marker - 171: Pie - 172: Ladder - 173: Snowboard - 174: Cookies - 175: Radiator - 176: Fire Hydrant - 177: Basketball - 178: Zebra - 179: Grape - 180: Giraffe - 181: Potato - 182: Sausage - 183: Tricycle - 184: Violin - 185: Egg - 186: Fire Extinguisher - 187: 
Candy - 188: Fire Truck - 189: Billiards - 190: Converter - 191: Bathtub - 192: Wheelchair - 193: Golf Club - 194: Briefcase - 195: Cucumber - 196: Cigar/Cigarette - 197: Paint Brush - 198: Pear - 199: Heavy Truck - 200: Hamburger - 201: Extractor - 202: Extension Cord - 203: Tong - 204: Tennis Racket - 205: Folder - 206: American Football - 207: earphone - 208: Mask - 209: Kettle - 210: Tennis - 211: Ship - 212: Swing - 213: Coffee Machine - 214: Slide - 215: Carriage - 216: Onion - 217: Green beans - 218: Projector - 219: Frisbee - 220: Washing Machine/Drying Machine - 221: Chicken - 222: Printer - 223: Watermelon - 224: Saxophone - 225: Tissue - 226: Toothbrush - 227: Ice cream - 228: Hot-air balloon - 229: Cello - 230: French Fries - 231: Scale - 232: Trophy - 233: Cabbage - 234: Hot dog - 235: Blender - 236: Peach - 237: Rice - 238: Wallet/Purse - 239: Volleyball - 240: Deer - 241: Goose - 242: Tape - 243: Tablet - 244: Cosmetics - 245: Trumpet - 246: Pineapple - 247: Golf Ball - 248: Ambulance - 249: Parking meter - 250: Mango - 251: Key - 252: Hurdle - 253: Fishing Rod - 254: Medal - 255: Flute - 256: Brush - 257: Penguin - 258: Megaphone - 259: Corn - 260: Lettuce - 261: Garlic - 262: Swan - 263: Helicopter - 264: Green Onion - 265: Sandwich - 266: Nuts - 267: Speed Limit Sign - 268: Induction Cooker - 269: Broom - 270: Trombone - 271: Plum - 272: Rickshaw - 273: Goldfish - 274: Kiwi fruit - 275: Router/modem - 276: Poker Card - 277: Toaster - 278: Shrimp - 279: Sushi - 280: Cheese - 281: Notepaper - 282: Cherry - 283: Pliers - 284: CD - 285: Pasta - 286: Hammer - 287: Cue - 288: Avocado - 289: Hamimelon - 290: Flask - 291: Mushroom - 292: Screwdriver - 293: Soap - 294: Recorder - 295: Bear - 296: Eggplant - 297: Board Eraser - 298: Coconut - 299: Tape Measure/Ruler - 300: Pig - 301: Showerhead - 302: Globe - 303: Chips - 304: Steak - 305: Crosswalk Sign - 306: Stapler - 307: Camel - 308: Formula 1 - 309: Pomegranate - 310: Dishwasher - 311: Crab - 312: Hoverboard - 313: Meat ball - 314: Rice Cooker - 315: Tuba - 316: Calculator - 317: Papaya - 318: Antelope - 319: Parrot - 320: Seal - 321: Butterfly - 322: Dumbbell - 323: Donkey - 324: Lion - 325: Urinal - 326: Dolphin - 327: Electric Drill - 328: Hair Dryer - 329: Egg tart - 330: Jellyfish - 331: Treadmill - 332: Lighter - 333: Grapefruit - 334: Game board - 335: Mop - 336: Radish - 337: Baozi - 338: Target - 339: French - 340: Spring Rolls - 341: Monkey - 342: Rabbit - 343: Pencil Case - 344: Yak - 345: Red Cabbage - 346: Binoculars - 347: Asparagus - 348: Barbell - 349: Scallop - 350: Noddles - 351: Comb - 352: Dumpling - 353: Oyster - 354: Table Tennis paddle - 355: Cosmetics Brush/Eyeliner Pencil - 356: Chainsaw - 357: Eraser - 358: Lobster - 359: Durian - 360: Okra - 361: Lipstick - 362: Cosmetics Mirror - 363: Curling - 364: Table Tennis +nc: 365 # number of classes +names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup', + 'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book', + 'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag', + 'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV', + 'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle', + 'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird', + 'High 
Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck', + 'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning', + 'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife', + 'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock', + 'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish', + 'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan', + 'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard', + 'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign', + 'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat', + 'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard', + 'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry', + 'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks', + 'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors', + 'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape', + 'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck', + 'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette', + 'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket', + 'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine', + 'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine', + 'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon', + 'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse', + 'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball', + 'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin', + 'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts', + 'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit', + 'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD', + 'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder', + 'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips', + 'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab', + 'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal', + 'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart', + 'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French', + 'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell', + 
'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil', + 'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis'] # Download script/URL (optional) --------------------------------------------------------------------------------------- download: | + from pycocotools.coco import COCO from tqdm import tqdm - from utils.general import Path, check_requirements, download, np, xyxy2xywhn - - check_requirements(('pycocotools>=2.0',)) - from pycocotools.coco import COCO + from utils.general import Path, download, np, xyxy2xywhn # Make Directories dir = Path(yaml['path']) # dataset root dir diff --git a/yolov3/data/scripts/download_weights.sh b/yolov3/data/scripts/download_weights.sh index 5e9acc44..50aec183 100755 --- a/yolov3/data/scripts/download_weights.sh +++ b/yolov3/data/scripts/download_weights.sh @@ -1,22 +1,18 @@ #!/bin/bash # YOLOv3 🚀 by Ultralytics, GPL-3.0 license -# Download latest models from https://github.com/ultralytics/yolov5/releases -# Example usage: bash data/scripts/download_weights.sh +# Download latest models from https://github.com/ultralytics/yolov3/releases +# Example usage: bash path/to/download_weights.sh # parent -# └── yolov5 -# ├── yolov5s.pt ← downloads here -# ├── yolov5m.pt +# └── yolov3 +# ├── yolov3.pt ← downloads here +# ├── yolov3-spp.pt # └── ... python - < 1)}, " # add to string # Write results @@ -164,23 +153,22 @@ def run( if save_txt: # Write to file xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(f'{txt_path}.txt', 'a') as f: + with open(txt_path + '.txt', 'a') as f: f.write(('%g ' * len(line)).rstrip() % line + '\n') if save_img or save_crop or view_img: # Add bbox to image c = int(cls) # integer class label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') annotator.box_label(xyxy, label, color=colors(c, True)) - if save_crop: - save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) + if save_crop: + save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) + + # Print time (inference-only) + LOGGER.info(f'{s}Done. 
({t3 - t2:.3f}s)') # Stream results im0 = annotator.result() if view_img: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) cv2.imshow(str(p), im0) cv2.waitKey(1) # 1 millisecond @@ -199,32 +187,24 @@ def run( h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) else: # stream fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos + save_path += '.mp4' vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) vid_writer[i].write(im0) - # Print time (inference-only) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image + t = tuple(x / seen * 1E3 for x in dt) # speeds per image LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) if save_txt or save_img: s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") if update: - strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) + strip_optimizer(weights) # update model (to fix SourceChangeWarning) def parse_opt(): parser = argparse.ArgumentParser() - parser.add_argument('--weights', - nargs='+', - type=str, - default=ROOT / 'yolov3-tiny.pt', - help='model path or triton URL') - parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') + parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3.pt', help='model path(s)') + parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') @@ -248,10 +228,9 @@ def parse_opt(): parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') opt = parser.parse_args() opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) + print_args(FILE.stem, opt) return opt @@ -260,6 +239,6 @@ def main(opt): run(**vars(opt)) -if __name__ == '__main__': +if __name__ == "__main__": opt = parse_opt() main(opt) diff --git a/yolov3/export.py b/yolov3/export.py index fd4720a0..00fb8c0c 100644 --- a/yolov3/export.py +++ b/yolov3/export.py @@ -1,538 +1,287 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license """ -Export a YOLOv3 PyTorch model to other formats. 
TensorFlow exports authored by https://github.com/zldrobit - -Format | `export.py --include` | Model ---- | --- | --- -PyTorch | - | yolov5s.pt -TorchScript | `torchscript` | yolov5s.torchscript -ONNX | `onnx` | yolov5s.onnx -OpenVINO | `openvino` | yolov5s_openvino_model/ -TensorRT | `engine` | yolov5s.engine -CoreML | `coreml` | yolov5s.mlmodel -TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ -TensorFlow GraphDef | `pb` | yolov5s.pb -TensorFlow Lite | `tflite` | yolov5s.tflite -TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite -TensorFlow.js | `tfjs` | yolov5s_web_model/ -PaddlePaddle | `paddle` | yolov5s_paddle_model/ - -Requirements: - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU +Export a PyTorch model to TorchScript, ONNX, CoreML, TensorFlow (saved_model, pb, TFLite, TF.js,) formats +TensorFlow exports authored by https://github.com/zldrobit Usage: - $ python export.py --weights yolov5s.pt --include torchscript onnx openvino engine coreml tflite ... + $ python path/to/export.py --weights yolov3.pt --include torchscript onnx coreml saved_model pb tflite tfjs Inference: - $ python detect.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle + $ python path/to/detect.py --weights yolov3.pt + yolov3.onnx (must export with --dynamic) + yolov3_saved_model + yolov3.pb + yolov3.tflite TensorFlow.js: $ cd .. 
&& git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example $ npm install - $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model + $ ln -s ../../yolov5/yolov3_web_model public/yolov3_web_model $ npm start """ import argparse -import contextlib import json import os -import platform -import re import subprocess import sys import time -import warnings from pathlib import Path -import pandas as pd import torch +import torch.nn as nn from torch.utils.mobile_optimizer import optimize_for_mobile FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv3 root directory +ROOT = FILE.parents[0] # root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH -if platform.system() != 'Windows': - ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative +ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative +from models.common import Conv from models.experimental import attempt_load -from models.yolo import ClassificationModel, Detect, DetectionModel, SegmentationModel -from utils.dataloaders import LoadImages -from utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_version, - check_yaml, colorstr, file_size, get_default_args, print_args, url2file, yaml_save) -from utils.torch_utils import select_device, smart_inference_mode - -MACOS = platform.system() == 'Darwin' # macOS environment +from models.yolo import Detect +from utils.activations import SiLU +from utils.datasets import LoadImages +from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args, + url2file) +from utils.torch_utils import select_device -def export_formats(): - # YOLOv3 export formats - x = [ - ['PyTorch', '-', '.pt', True, True], - ['TorchScript', 'torchscript', '.torchscript', True, True], - ['ONNX', 'onnx', '.onnx', True, True], - ['OpenVINO', 'openvino', '_openvino_model', True, False], - ['TensorRT', 'engine', '.engine', False, True], - ['CoreML', 'coreml', '.mlmodel', True, False], - ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True], - ['TensorFlow GraphDef', 'pb', '.pb', True, True], - ['TensorFlow Lite', 'tflite', '.tflite', True, False], - ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False], - ['TensorFlow.js', 'tfjs', '_web_model', False, False], - ['PaddlePaddle', 'paddle', '_paddle_model', True, True],] - return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU']) - - -def try_export(inner_func): - # YOLOv3 export decorator, i..e @try_export - inner_args = get_default_args(inner_func) - - def outer_func(*args, **kwargs): - prefix = inner_args['prefix'] - try: - with Profile() as dt: - f, model = inner_func(*args, **kwargs) - LOGGER.info(f'{prefix} export success ✅ {dt.t:.1f}s, saved as {f} ({file_size(f):.1f} MB)') - return f, model - except Exception as e: - LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}') - return None, None - - return outer_func - - -@try_export def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')): - # YOLOv3 TorchScript model export - LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...') - f = file.with_suffix('.torchscript') - - ts = torch.jit.trace(model, im, strict=False) - d = {'shape': im.shape, 'stride': int(max(model.stride)), 'names': model.names} - extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap() - if optimize: # 
https://pytorch.org/tutorials/recipes/mobile_interpreter.html - optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files) - else: - ts.save(str(f), _extra_files=extra_files) - return f, None - - -@try_export -def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')): - # YOLOv3 ONNX export - check_requirements('onnx>=1.12.0') - import onnx - - LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...') - f = file.with_suffix('.onnx') - - output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0'] - if dynamic: - dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}} # shape(1,3,640,640) - if isinstance(model, SegmentationModel): - dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85) - dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'} # shape(1,32,160,160) - elif isinstance(model, DetectionModel): - dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85) - - torch.onnx.export( - model.cpu() if dynamic else model, # --dynamic only compatible with cpu - im.cpu() if dynamic else im, - f, - verbose=False, - opset_version=opset, - do_constant_folding=True, # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False - input_names=['images'], - output_names=output_names, - dynamic_axes=dynamic or None) - - # Checks - model_onnx = onnx.load(f) # load onnx model - onnx.checker.check_model(model_onnx) # check onnx model - - # Metadata - d = {'stride': int(max(model.stride)), 'names': model.names} - for k, v in d.items(): - meta = model_onnx.metadata_props.add() - meta.key, meta.value = k, str(v) - onnx.save(model_onnx, f) - - # Simplify - if simplify: - try: - cuda = torch.cuda.is_available() - check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1')) - import onnxsim - - LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...') - model_onnx, check = onnxsim.simplify(model_onnx) - assert check, 'assert check failed' - onnx.save(model_onnx, f) - except Exception as e: - LOGGER.info(f'{prefix} simplifier failure: {e}') - return f, model_onnx - - -@try_export -def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')): - # YOLOv3 OpenVINO export - check_requirements('openvino-dev') # requires openvino-dev: https://pypi.org/project/openvino-dev/ - import openvino.inference_engine as ie - - LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...') - f = str(file).replace('.pt', f'_openvino_model{os.sep}') - - cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f} --data_type {'FP16' if half else 'FP32'}" - subprocess.run(cmd.split(), check=True, env=os.environ) # export - yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml - return f, None - - -@try_export -def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')): - # YOLOv3 Paddle export - check_requirements(('paddlepaddle', 'x2paddle')) - import x2paddle - from x2paddle.convert import pytorch2paddle - - LOGGER.info(f'\n{prefix} starting export with X2Paddle {x2paddle.__version__}...') - f = str(file).replace('.pt', f'_paddle_model{os.sep}') - - pytorch2paddle(module=model, save_dir=f, jit_type='trace', input_examples=[im]) # export - yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml - return f, None - - -@try_export -def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')): - 
# YOLOv3 CoreML export - check_requirements('coremltools') - import coremltools as ct - - LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...') - f = file.with_suffix('.mlmodel') - - ts = torch.jit.trace(model, im, strict=False) # TorchScript model - ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])]) - bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None) - if bits < 32: - if MACOS: # quantization only supported on macOS - with warnings.catch_warnings(): - warnings.filterwarnings('ignore', category=DeprecationWarning) # suppress numpy==1.20 float warning - ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode) - else: - print(f'{prefix} quantization only supported on macOS, skipping...') - ct_model.save(f) - return f, ct_model - - -@try_export -def export_engine(model, im, file, half, dynamic, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')): - # YOLOv3 TensorRT export https://developer.nvidia.com/tensorrt - assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. `python export.py --device 0`' + # TorchScript model export try: - import tensorrt as trt - except Exception: - if platform.system() == 'Linux': - check_requirements('nvidia-tensorrt', cmds='-U --index-url https://pypi.ngc.nvidia.com') - import tensorrt as trt + LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...') + f = file.with_suffix('.torchscript.pt') - if trt.__version__[0] == '7': # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012 - grid = model.model[-1].anchor_grid - model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid] - export_onnx(model, im, file, 12, dynamic, simplify) # opset 12 - model.model[-1].anchor_grid = grid - else: # TensorRT >= 8 - check_version(trt.__version__, '8.0.0', hard=True) # require tensorrt>=8.0.0 - export_onnx(model, im, file, 12, dynamic, simplify) # opset 12 - onnx = file.with_suffix('.onnx') + ts = torch.jit.trace(model, im, strict=False) + d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names} + extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap() + (optimize_for_mobile(ts) if optimize else ts).save(f, _extra_files=extra_files) - LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...') - assert onnx.exists(), f'failed to export ONNX file: {onnx}' - f = file.with_suffix('.engine') # TensorRT engine file - logger = trt.Logger(trt.Logger.INFO) - if verbose: - logger.min_severity = trt.Logger.Severity.VERBOSE - - builder = trt.Builder(logger) - config = builder.create_builder_config() - config.max_workspace_size = workspace * 1 << 30 - # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30) # fix TRT 8.4 deprecation notice - - flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) - network = builder.create_network(flag) - parser = trt.OnnxParser(network, logger) - if not parser.parse_from_file(str(onnx)): - raise RuntimeError(f'failed to load ONNX file: {onnx}') - - inputs = [network.get_input(i) for i in range(network.num_inputs)] - outputs = [network.get_output(i) for i in range(network.num_outputs)] - for inp in inputs: - LOGGER.info(f'{prefix} input "{inp.name}" with shape{inp.shape} {inp.dtype}') - for out in outputs: - LOGGER.info(f'{prefix} output "{out.name}" with shape{out.shape} {out.dtype}') - - if dynamic: - if im.shape[0] <= 1: - 
LOGGER.warning(f'{prefix} WARNING âš ī¸ --dynamic model requires maximum --batch-size argument') - profile = builder.create_optimization_profile() - for inp in inputs: - profile.set_shape(inp.name, (1, *im.shape[1:]), (max(1, im.shape[0] // 2), *im.shape[1:]), im.shape) - config.add_optimization_profile(profile) - - LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine as {f}') - if builder.platform_has_fast_fp16 and half: - config.set_flag(trt.BuilderFlag.FP16) - with builder.build_engine(network, config) as engine, open(f, 'wb') as t: - t.write(engine.serialize()) - return f, None + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + except Exception as e: + LOGGER.info(f'{prefix} export failure: {e}') -@try_export -def export_saved_model(model, - im, - file, - dynamic, - tf_nms=False, - agnostic_nms=False, - topk_per_class=100, - topk_all=100, - iou_thres=0.45, - conf_thres=0.25, - keras=False, - prefix=colorstr('TensorFlow SavedModel:')): - # YOLOv3 TensorFlow SavedModel export +def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')): + # ONNX export + try: + check_requirements(('onnx',)) + import onnx + + LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...') + f = file.with_suffix('.onnx') + + torch.onnx.export(model, im, f, verbose=False, opset_version=opset, + training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL, + do_constant_folding=not train, + input_names=['images'], + output_names=['output'], + dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'}, # shape(1,3,640,640) + 'output': {0: 'batch', 1: 'anchors'} # shape(1,25200,85) + } if dynamic else None) + + # Checks + model_onnx = onnx.load(f) # load onnx model + onnx.checker.check_model(model_onnx) # check onnx model + # LOGGER.info(onnx.helper.printable_graph(model_onnx.graph)) # print + + # Simplify + if simplify: + try: + check_requirements(('onnx-simplifier',)) + import onnxsim + + LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...') + model_onnx, check = onnxsim.simplify( + model_onnx, + dynamic_input_shape=dynamic, + input_shapes={'images': list(im.shape)} if dynamic else None) + assert check, 'assert check failed' + onnx.save(model_onnx, f) + except Exception as e: + LOGGER.info(f'{prefix} simplifier failure: {e}') + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + LOGGER.info(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'") + except Exception as e: + LOGGER.info(f'{prefix} export failure: {e}') + + +def export_coreml(model, im, file, prefix=colorstr('CoreML:')): + # CoreML export + ct_model = None + try: + check_requirements(('coremltools',)) + import coremltools as ct + + LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...') + f = file.with_suffix('.mlmodel') + + model.train() # CoreML exports should be placed in model.train() mode + ts = torch.jit.trace(model, im, strict=False) # TorchScript model + ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])]) + ct_model.save(f) + + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + except Exception as e: + LOGGER.info(f'\n{prefix} export failure: {e}') + + return ct_model + + +def export_saved_model(model, im, file, dynamic, + tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, 
iou_thres=0.45, + conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')): + # TensorFlow saved_model export + keras_model = None try: import tensorflow as tf - except Exception: - check_requirements(f"tensorflow{'' if torch.cuda.is_available() else '-macos' if MACOS else '-cpu'}") - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 + from tensorflow import keras - from models.tf import TFModel + from models.tf import TFDetect, TFModel - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - f = str(file).replace('.pt', '_saved_model') - batch_size, ch, *imgsz = list(im.shape) # BCHW + LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') + f = str(file).replace('.pt', '_saved_model') + batch_size, ch, *imgsz = list(im.shape) # BCHW - tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz) - im = tf.zeros((batch_size, *imgsz, ch)) # BHWC order for TensorFlow - _ = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) - inputs = tf.keras.Input(shape=(*imgsz, ch), batch_size=None if dynamic else batch_size) - outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) - keras_model = tf.keras.Model(inputs=inputs, outputs=outputs) - keras_model.trainable = False - keras_model.summary() - if keras: + tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz) + im = tf.zeros((batch_size, *imgsz, 3)) # BHWC order for TensorFlow + y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) + inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size) + outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) + keras_model = keras.Model(inputs=inputs, outputs=outputs) + keras_model.trainable = False + keras_model.summary() keras_model.save(f, save_format='tf') - else: - spec = tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype) + + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + except Exception as e: + LOGGER.info(f'\n{prefix} export failure: {e}') + + return keras_model + + +def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')): + # TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow + try: + import tensorflow as tf + from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 + + LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') + f = file.with_suffix('.pb') + m = tf.function(lambda x: keras_model(x)) # full model - m = m.get_concrete_function(spec) + m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)) frozen_func = convert_variables_to_constants_v2(m) - tfm = tf.Module() - tfm.__call__ = tf.function(lambda x: frozen_func(x)[:4] if tf_nms else frozen_func(x), [spec]) - tfm.__call__(im) - tf.saved_model.save(tfm, - f, - options=tf.saved_model.SaveOptions(experimental_custom_gradients=False) if check_version( - tf.__version__, '2.6') else tf.saved_model.SaveOptions()) - return f, keras_model + frozen_func.graph.as_graph_def() + tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False) + + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + except Exception as 
e: + LOGGER.info(f'\n{prefix} export failure: {e}') -@try_export -def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')): - # YOLOv3 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 +def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')): + # TensorFlow Lite export + try: + import tensorflow as tf - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - f = file.with_suffix('.pb') - - m = tf.function(lambda x: keras_model(x)) # full model - m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)) - frozen_func = convert_variables_to_constants_v2(m) - frozen_func.graph.as_graph_def() - tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False) - return f, None - - -@try_export -def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')): - # YOLOv3 TensorFlow Lite export - import tensorflow as tf - - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - batch_size, ch, *imgsz = list(im.shape) # BCHW - f = str(file).replace('.pt', '-fp16.tflite') - - converter = tf.lite.TFLiteConverter.from_keras_model(keras_model) - converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS] - converter.target_spec.supported_types = [tf.float16] - converter.optimizations = [tf.lite.Optimize.DEFAULT] - if int8: from models.tf import representative_dataset_gen - dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False) - converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100) - converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] - converter.target_spec.supported_types = [] - converter.inference_input_type = tf.uint8 # or tf.int8 - converter.inference_output_type = tf.uint8 # or tf.int8 - converter.experimental_new_quantizer = True - f = str(file).replace('.pt', '-int8.tflite') - if nms or agnostic_nms: - converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS) - tflite_model = converter.convert() - open(f, 'wb').write(tflite_model) - return f, None + LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') + batch_size, ch, *imgsz = list(im.shape) # BCHW + f = str(file).replace('.pt', '-fp16.tflite') + + converter = tf.lite.TFLiteConverter.from_keras_model(keras_model) + converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS] + converter.target_spec.supported_types = [tf.float16] + converter.optimizations = [tf.lite.Optimize.DEFAULT] + if int8: + dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False) # representative data + converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib) + converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] + converter.target_spec.supported_types = [] + converter.inference_input_type = tf.uint8 # or tf.int8 + converter.inference_output_type = tf.uint8 # or tf.int8 + converter.experimental_new_quantizer = False + f = str(file).replace('.pt', '-int8.tflite') + + tflite_model = converter.convert() + open(f, "wb").write(tflite_model) + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + + except Exception as e: + 
LOGGER.info(f'\n{prefix} export failure: {e}') -@try_export -def export_edgetpu(file, prefix=colorstr('Edge TPU:')): - # YOLOv3 Edge TPU export https://coral.ai/docs/edgetpu/models-intro/ - cmd = 'edgetpu_compiler --version' - help_url = 'https://coral.ai/docs/edgetpu/compiler/' - assert platform.system() == 'Linux', f'export only supported on Linux. See {help_url}' - if subprocess.run(f'{cmd} >/dev/null', shell=True).returncode != 0: - LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}') - sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0 # sudo installed on system - for c in ( - 'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -', - 'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list', - 'sudo apt-get update', 'sudo apt-get install edgetpu-compiler'): - subprocess.run(c if sudo else c.replace('sudo ', ''), shell=True, check=True) - ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1] +def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')): + # TensorFlow.js export + try: + check_requirements(('tensorflowjs',)) + import re - LOGGER.info(f'\n{prefix} starting export with Edge TPU compiler {ver}...') - f = str(file).replace('.pt', '-int8_edgetpu.tflite') # Edge TPU model - f_tfl = str(file).replace('.pt', '-int8.tflite') # TFLite model + import tensorflowjs as tfjs - cmd = f'edgetpu_compiler -s -d -k 10 --out_dir {file.parent} {f_tfl}' - subprocess.run(cmd.split(), check=True) - return f, None + LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...') + f = str(file).replace('.pt', '_web_model') # js dir + f_pb = file.with_suffix('.pb') # *.pb path + f_json = f + '/model.json' # *.json path + + cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \ + f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}" + subprocess.run(cmd, shell=True) + + json = open(f_json).read() + with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order + subst = re.sub( + r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, ' + r'"Identity.?.?": {"name": "Identity.?.?"}, ' + r'"Identity.?.?": {"name": "Identity.?.?"}, ' + r'"Identity.?.?": {"name": "Identity.?.?"}}}', + r'{"outputs": {"Identity": {"name": "Identity"}, ' + r'"Identity_1": {"name": "Identity_1"}, ' + r'"Identity_2": {"name": "Identity_2"}, ' + r'"Identity_3": {"name": "Identity_3"}}}', + json) + j.write(subst) + + LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') + except Exception as e: + LOGGER.info(f'\n{prefix} export failure: {e}') -@try_export -def export_tfjs(file, int8, prefix=colorstr('TensorFlow.js:')): - # YOLOv3 TensorFlow.js export - check_requirements('tensorflowjs') - import tensorflowjs as tfjs - - LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...') - f = str(file).replace('.pt', '_web_model') # js dir - f_pb = file.with_suffix('.pb') # *.pb path - f_json = f'{f}/model.json' # *.json path - - int8_export = ' --quantize_uint8 ' if int8 else '' - - cmd = f'tensorflowjs_converter --input_format=tf_frozen_model {int8_export}' \ - f'--output_node_names=Identity,Identity_1,Identity_2,Identity_3 {f_pb} {f}' - subprocess.run(cmd.split()) - - json = Path(f_json).read_text() - with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order - subst = re.sub( - 
r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": "Identity.?.?"}}}', r'{"outputs": {"Identity": {"name": "Identity"}, ' - r'"Identity_1": {"name": "Identity_1"}, ' - r'"Identity_2": {"name": "Identity_2"}, ' - r'"Identity_3": {"name": "Identity_3"}}}', json) - j.write(subst) - return f, None - - -def add_tflite_metadata(file, metadata, num_outputs): - # Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata - with contextlib.suppress(ImportError): - # check_requirements('tflite_support') - from tflite_support import flatbuffers - from tflite_support import metadata as _metadata - from tflite_support import metadata_schema_py_generated as _metadata_fb - - tmp_file = Path('/tmp/meta.txt') - with open(tmp_file, 'w') as meta_f: - meta_f.write(str(metadata)) - - model_meta = _metadata_fb.ModelMetadataT() - label_file = _metadata_fb.AssociatedFileT() - label_file.name = tmp_file.name - model_meta.associatedFiles = [label_file] - - subgraph = _metadata_fb.SubGraphMetadataT() - subgraph.inputTensorMetadata = [_metadata_fb.TensorMetadataT()] - subgraph.outputTensorMetadata = [_metadata_fb.TensorMetadataT()] * num_outputs - model_meta.subgraphMetadata = [subgraph] - - b = flatbuffers.Builder(0) - b.Finish(model_meta.Pack(b), _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER) - metadata_buf = b.Output() - - populator = _metadata.MetadataPopulator.with_model_file(file) - populator.load_metadata_buffer(metadata_buf) - populator.load_associated_files([str(tmp_file)]) - populator.populate() - tmp_file.unlink() - - -@smart_inference_mode() -def run( - data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path' - weights=ROOT / 'yolov5s.pt', # weights path +@torch.no_grad() +def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path' + weights=ROOT / 'yolov3.pt', # weights path imgsz=(640, 640), # image (height, width) batch_size=1, # batch size device='cpu', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - include=('torchscript', 'onnx'), # include formats + include=('torchscript', 'onnx', 'coreml'), # include formats half=False, # FP16 half-precision export - inplace=False, # set YOLOv3 Detect() inplace=True - keras=False, # use Keras + inplace=False, # set Detect() inplace=True + train=False, # model.train() mode optimize=False, # TorchScript: optimize for mobile int8=False, # CoreML/TF INT8 quantization - dynamic=False, # ONNX/TF/TensorRT: dynamic axes + dynamic=False, # ONNX/TF: dynamic axes simplify=False, # ONNX: simplify model opset=12, # ONNX: opset version - verbose=False, # TensorRT: verbose log - workspace=4, # TensorRT: workspace size (GB) - nms=False, # TF: add NMS to model - agnostic_nms=False, # TF: add agnostic NMS to model topk_per_class=100, # TF.js NMS: topk per class to keep topk_all=100, # TF.js NMS: topk for all classes to keep iou_thres=0.45, # TF.js NMS: IoU threshold - conf_thres=0.25, # TF.js NMS: confidence threshold -): + conf_thres=0.25 # TF.js NMS: confidence threshold + ): t = time.time() - include = [x.lower() for x in include] # to lowercase - fmts = tuple(export_formats()['Argument'][1:]) # --include arguments - flags = [x in include for x in fmts] - assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}' - jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle = flags # export booleans - file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) # PyTorch weights + include = [x.lower() for x in include] + tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs')) # TensorFlow exports + imgsz *= 2 if len(imgsz) == 1 else 1 # expand + file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) # Load PyTorch model device = select_device(device) - if half: - assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0' - assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both' - model = attempt_load(weights, device=device, inplace=True, fuse=True) # load FP32 model - - # Checks - imgsz *= 2 if len(imgsz) == 1 else 1 # expand - if optimize: - assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. use --device cpu' + assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. 
use --device 0' + model = attempt_load(weights, map_location=device, inplace=True, fuse=True) # load FP32 model + nc, names = model.nc, model.names # number of classes, class names # Input gs = int(max(model.stride)) # grid size (max stride) @@ -540,116 +289,81 @@ def run( im = torch.zeros(batch_size, 3, *imgsz).to(device) # image size(1,3,320,192) BCHW iDetection # Update model - model.eval() + if half: + im, model = im.half(), model.half() # to FP16 + model.train() if train else model.eval() # training mode = no Detect() layer grid construction for k, m in model.named_modules(): - if isinstance(m, Detect): + if isinstance(m, Conv): # assign export-friendly activations + if isinstance(m.act, nn.SiLU): + m.act = SiLU() + elif isinstance(m, Detect): m.inplace = inplace - m.dynamic = dynamic - m.export = True + m.onnx_dynamic = dynamic + # m.forward = m.forward_export # assign forward (optional) for _ in range(2): y = model(im) # dry runs - if half and not coreml: - im, model = im.half(), model.half() # to FP16 - shape = tuple((y[0] if isinstance(y, tuple) else y).shape) # model output shape - metadata = {'stride': int(max(model.stride)), 'names': model.names} # model metadata - LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)") + LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)") # Exports - f = [''] * len(fmts) # exported filenames - warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning) # suppress TracerWarning - if jit: # TorchScript - f[0], _ = export_torchscript(model, im, file, optimize) - if engine: # TensorRT required before ONNX - f[1], _ = export_engine(model, im, file, half, dynamic, simplify, workspace, verbose) - if onnx or xml: # OpenVINO requires ONNX - f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify) - if xml: # OpenVINO - f[3], _ = export_openvino(file, metadata, half) - if coreml: # CoreML - f[4], _ = export_coreml(model, im, file, int8, half) - if any((saved_model, pb, tflite, edgetpu, tfjs)): # TensorFlow formats - assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.' - assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.' - f[5], s_model = export_saved_model(model.cpu(), - im, - file, - dynamic, - tf_nms=nms or agnostic_nms or tfjs, - agnostic_nms=agnostic_nms or tfjs, - topk_per_class=topk_per_class, - topk_all=topk_all, - iou_thres=iou_thres, - conf_thres=conf_thres, - keras=keras) + if 'torchscript' in include: + export_torchscript(model, im, file, optimize) + if 'onnx' in include: + export_onnx(model, im, file, opset, train, dynamic, simplify) + if 'coreml' in include: + export_coreml(model, im, file) + + # TensorFlow Exports + if any(tf_exports): + pb, tflite, tfjs = tf_exports[1:] + assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.' 
+ model = export_saved_model(model.cpu(), im, file, dynamic, tf_nms=tfjs, agnostic_nms=tfjs, + topk_per_class=topk_per_class, topk_all=topk_all, conf_thres=conf_thres, + iou_thres=iou_thres) # keras model if pb or tfjs: # pb prerequisite to tfjs - f[6], _ = export_pb(s_model, file) - if tflite or edgetpu: - f[7], _ = export_tflite(s_model, im, file, int8 or edgetpu, data=data, nms=nms, agnostic_nms=agnostic_nms) - if edgetpu: - f[8], _ = export_edgetpu(file) - add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs)) + export_pb(model, im, file) + if tflite: + export_tflite(model, im, file, int8=int8, data=data, ncalib=100) if tfjs: - f[9], _ = export_tfjs(file, int8) - if paddle: # PaddlePaddle - f[10], _ = export_paddle(model, im, file, metadata) + export_tfjs(model, im, file) # Finish - f = [str(x) for x in f if x] # filter out '' and None - if any(f): - cls, det, seg = (isinstance(model, x) for x in (ClassificationModel, DetectionModel, SegmentationModel)) # type - det &= not seg # segmentation models inherit from SegmentationModel(DetectionModel) - dir = Path('segment' if seg else 'classify' if cls else '') - h = '--half' if half else '' # --half FP16 inference arg - s = '# WARNING âš ī¸ ClassificationModel not yet supported for PyTorch Hub AutoShape inference' if cls else \ - '# WARNING âš ī¸ SegmentationModel not yet supported for PyTorch Hub AutoShape inference' if seg else '' - LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)' - f"\nResults saved to {colorstr('bold', file.parent.resolve())}" - f"\nDetect: python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}" - f"\nValidate: python {dir / 'val.py'} --weights {f[-1]} {h}" - f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}') {s}" - f'\nVisualize: https://netron.app') - return f # return list of exported files/dirs + LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)' + f"\nResults saved to {colorstr('bold', file.parent.resolve())}" + f'\nVisualize with https://netron.app') -def parse_opt(known=False): +def parse_opt(): parser = argparse.ArgumentParser() parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3-tiny.pt', help='model.pt path(s)') + parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)') parser.add_argument('--batch-size', type=int, default=1, help='batch size') parser.add_argument('--device', default='cpu', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--half', action='store_true', help='FP16 half-precision export') - parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True') - parser.add_argument('--keras', action='store_true', help='TF: use Keras') + parser.add_argument('--inplace', action='store_true', help='set YOLOv3 Detect() inplace=True') + parser.add_argument('--train', action='store_true', help='model.train() mode') parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile') parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization') - parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes') + parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes') parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model') - parser.add_argument('--opset', type=int, default=17, help='ONNX: opset version') - parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log') - parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)') - parser.add_argument('--nms', action='store_true', help='TF: add NMS to model') - parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model') + parser.add_argument('--opset', type=int, default=13, help='ONNX: opset version') parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep') parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep') parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold') parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold') - parser.add_argument( - '--include', - nargs='+', - default=['torchscript'], - help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle') - opt = parser.parse_known_args()[0] if known else parser.parse_args() - print_args(vars(opt)) + parser.add_argument('--include', nargs='+', + default=['torchscript', 'onnx'], + help='available formats are (torchscript, onnx, coreml, saved_model, pb, tflite, tfjs)') + opt = parser.parse_args() + print_args(FILE.stem, opt) return opt def main(opt): - for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]): - run(**vars(opt)) + run(**vars(opt)) -if __name__ == '__main__': +if __name__ == "__main__": opt = parse_opt() main(opt) diff --git a/yolov3/hubconf.py b/yolov3/hubconf.py index b1c45e7b..d610aa36 100644 --- a/yolov3/hubconf.py +++ b/yolov3/hubconf.py @@ -1,66 +1,52 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license """ -PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5 +PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/ Usage: import torch - model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # official model - model = torch.hub.load('ultralytics/yolov5:master', 'yolov5s') # from branch - model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt') # custom/local model - model = torch.hub.load('.', 'custom', 'yolov5s.pt', source='local') # local repo + model = torch.hub.load('ultralytics/yolov3', 'yolov3') """ import torch def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): - """Creates or loads a YOLOv3 model + """Creates a specified model Arguments: - name (str): model name 'yolov5s' or path 
'path/to/best.pt' + name (str): name of model, i.e. 'yolov3' pretrained (bool): load pretrained weights into the model channels (int): number of input channels classes (int): number of model classes - autoshape (bool): apply YOLOv3 .autoshape() wrapper to model + autoshape (bool): apply .autoshape() wrapper to model verbose (bool): print all information to screen device (str, torch.device, None): device to use for model parameters Returns: - YOLOv3 model + pytorch model """ from pathlib import Path - from models.common import AutoShape, DetectMultiBackend from models.experimental import attempt_load - from models.yolo import ClassificationModel, DetectionModel, SegmentationModel + from models.yolo import Model from utils.downloads import attempt_download - from utils.general import LOGGER, check_requirements, intersect_dicts, logging + from utils.general import check_requirements, intersect_dicts, set_logging from utils.torch_utils import select_device - if not verbose: - LOGGER.setLevel(logging.WARNING) - check_requirements(exclude=('opencv-python', 'tensorboard', 'thop')) - name = Path(name) - path = name.with_suffix('.pt') if name.suffix == '' and not name.is_dir() else name # checkpoint path + file = Path(__file__).resolve() + check_requirements(exclude=('tensorboard', 'thop', 'opencv-python')) + set_logging(verbose=verbose) + + save_dir = Path('') if str(name).endswith('.pt') else file.parent + path = (save_dir / name).with_suffix('.pt') # checkpoint path try: - device = select_device(device) + device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device) + if pretrained and channels == 3 and classes == 80: - try: - model = DetectMultiBackend(path, device=device, fuse=autoshape) # detection model - if autoshape: - if model.pt and isinstance(model.model, ClassificationModel): - LOGGER.warning('WARNING âš ī¸ YOLOv3 ClassificationModel is not yet AutoShape compatible. ' - 'You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224).') - elif model.pt and isinstance(model.model, SegmentationModel): - LOGGER.warning('WARNING âš ī¸ YOLOv3 SegmentationModel is not yet AutoShape compatible. ' - 'You will not be able to run inference with this model.') - else: - model = AutoShape(model) # for file/URI/PIL/cv2/np inputs and NMS - except Exception: - model = attempt_load(path, device=device, fuse=False) # arbitrary model + model = attempt_load(path, map_location=device) # download/load FP32 model else: - cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0] # model.yaml path - model = DetectionModel(cfg, channels, classes) # create model + cfg = list((Path(__file__).parent / 'models').rglob(f'{name}.yaml'))[0] # model.yaml path + model = Model(cfg, channels, classes) # create model if pretrained: ckpt = torch.load(attempt_download(path), map_location=device) # load csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 @@ -68,102 +54,54 @@ def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbo model.load_state_dict(csd, strict=False) # load if len(ckpt['model'].names) == classes: model.names = ckpt['model'].names # set class names attribute - if not verbose: - LOGGER.setLevel(logging.INFO) # reset to default + if autoshape: + model = model.autoshape() # for file/URI/PIL/cv2/np inputs and NMS return model.to(device) except Exception as e: help_url = 'https://github.com/ultralytics/yolov5/issues/36' - s = f'{e}. 
Cache may be out of date, try `force_reload=True` or see {help_url} for help.' + s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' % help_url raise Exception(s) from e -def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None): - # YOLOv3 custom or local model - return _create(path, autoshape=autoshape, verbose=_verbose, device=device) +def custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None): + # custom or local model + return _create(path, autoshape=autoshape, verbose=verbose, device=device) -def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-nano model https://github.com/ultralytics/yolov5 - return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device) +def yolov3(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): + # YOLOv3 model https://github.com/ultralytics/yolov3 + return _create('yolov3', pretrained, channels, classes, autoshape, verbose, device) -def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-small model https://github.com/ultralytics/yolov5 - return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device) +def yolov3_spp(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): + # YOLOv3-SPP model https://github.com/ultralytics/yolov3 + return _create('yolov3-spp', pretrained, channels, classes, autoshape, verbose, device) -def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-medium model https://github.com/ultralytics/yolov5 - return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-large model https://github.com/ultralytics/yolov5 - return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-xlarge model https://github.com/ultralytics/yolov5 - return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-nano-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-small-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-medium-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-large-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv3-xlarge-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5x6', pretrained, channels, 
classes, autoshape, _verbose, device) +def yolov3_tiny(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): + # YOLOv3-tiny model https://github.com/ultralytics/yolov3 + return _create('yolov3-tiny', pretrained, channels, classes, autoshape, verbose, device) if __name__ == '__main__': - import argparse + model = _create(name='yolov3-tiny', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) # pretrained + # model = custom(path='path/to/model.pt') # custom + + # Verify inference from pathlib import Path + import cv2 import numpy as np from PIL import Image - from utils.general import cv2, print_args + imgs = ['data/images/zidane.jpg', # filename + Path('data/images/zidane.jpg'), # Path + 'https://ultralytics.com/images/zidane.jpg', # URI + cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV + Image.open('data/images/bus.jpg'), # PIL + np.zeros((320, 640, 3))] # numpy - # Argparser - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='yolov5s', help='model name') - opt = parser.parse_args() - print_args(vars(opt)) - - # Model - model = _create(name=opt.model, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) - # model = custom(path='path/to/model.pt') # custom - - # Images - imgs = [ - 'data/images/zidane.jpg', # filename - Path('data/images/zidane.jpg'), # Path - 'https://ultralytics.com/images/zidane.jpg', # URI - cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV - Image.open('data/images/bus.jpg'), # PIL - np.zeros((320, 640, 3))] # numpy - - # Inference - results = model(imgs, size=320) # batched inference - - # Results + results = model(imgs) # batched inference results.print() results.save() diff --git a/yolov3/models/common.py b/yolov3/models/common.py index bc5fc0d3..a76fc628 100644 --- a/yolov3/models/common.py +++ b/yolov3/models/common.py @@ -3,17 +3,12 @@ Common modules """ -import ast -import contextlib import json import math import platform import warnings -import zipfile -from collections import OrderedDict, namedtuple from copy import copy from pathlib import Path -from urllib.parse import urlparse import cv2 import numpy as np @@ -21,37 +16,30 @@ import pandas as pd import requests import torch import torch.nn as nn -from IPython.display import display from PIL import Image from torch.cuda import amp -from utils import TryExcept -from utils.dataloaders import exif_transpose, letterbox -from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr, - increment_path, is_notebook, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy, - xyxy2xywh, yaml_load) +from utils.datasets import exif_transpose, letterbox +from utils.general import (LOGGER, check_requirements, check_suffix, colorstr, increment_path, make_divisible, + non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh) from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, smart_inference_mode +from utils.torch_utils import time_sync -def autopad(k, p=None, d=1): # kernel, padding, dilation - # Pad to 'same' shape outputs - if d > 1: - k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size +def autopad(k, p=None): # kernel, padding + # Pad to 'same' if p is None: p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad return p class Conv(nn.Module): - # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation) - 
default_act = nn.SiLU() # default activation - - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True): + # Standard convolution + def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False) + self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) self.bn = nn.BatchNorm2d(c2) - self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity() + self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) def forward(self, x): return self.act(self.bn(self.conv(x))) @@ -61,15 +49,9 @@ class Conv(nn.Module): class DWConv(Conv): - # Depth-wise convolution - def __init__(self, c1, c2, k=1, s=1, d=1, act=True): # ch_in, ch_out, kernel, stride, dilation, activation - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act) - - -class DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) + # Depth-wise convolution class + def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups + super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act) class TransformerLayer(nn.Module): @@ -104,8 +86,8 @@ class TransformerBlock(nn.Module): if self.conv is not None: x = self.conv(x) b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) + p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3) + return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, w, h) class Bottleneck(nn.Module): @@ -137,21 +119,7 @@ class BottleneckCSP(nn.Module): def forward(self, x): y1 = self.cv3(self.m(self.cv1(x))) y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) + return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) class C3(nn.Module): @@ -161,19 +129,12 @@ class C3(nn.Module): c_ = int(c2 * e) # hidden channels self.cv1 = Conv(c1, c_, 1, 1) self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) + self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2) self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) + # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)]) def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))) + return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1)) class C3TR(C3): @@ -217,7 +178,7 @@ class 
SPP(nn.Module): class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv3 by Glenn Jocher + # Spatial Pyramid Pooling - Fast (SPPF) layer for by Glenn Jocher def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) super().__init__() c_ = c1 // 2 # hidden channels @@ -231,18 +192,18 @@ class SPPF(nn.Module): warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning y1 = self.m(x) y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) + return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1)) class Focus(nn.Module): # Focus wh information into c-space def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act) + self.conv = Conv(c1 * 4, c2, k, s, p, g, act) # self.contract = Contract(gain=2) def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1)) + return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) # return self.conv(self.contract(x)) @@ -251,12 +212,12 @@ class GhostConv(nn.Module): def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups super().__init__() c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act=act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act) + self.cv1 = Conv(c1, c_, k, s, None, g, act) + self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) def forward(self, x): y = self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) + return torch.cat([y, self.cv2(y)], 1) class GhostBottleneck(nn.Module): @@ -264,12 +225,11 @@ class GhostBottleneck(nn.Module): def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride super().__init__() c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, - act=False)) if s == 2 else nn.Identity() + self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw + DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw + GhostConv(c_, c2, 1, 1, act=False)) # pw-linear + self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), + Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() def forward(self, x): return self.conv(x) + self.shortcut(x) @@ -314,350 +274,159 @@ class Concat(nn.Module): class DetectMultiBackend(nn.Module): - # YOLOv3 MultiBackend class for python inference on various backends - def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True): + # MultiBackend class for python inference on various backends + def __init__(self, weights='yolov3.pt', device=None, dnn=True): # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx --dnn - # OpenVINO: *_openvino_model - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - # PaddlePaddle: *_paddle_model - from models.experimental import attempt_download, attempt_load # scoped to avoid circular import - + # PyTorch: weights = *.pt + # TorchScript: *.torchscript.pt + # CoreML: *.mlmodel + # TensorFlow: 
*_saved_model + # TensorFlow: *.pb + # TensorFlow Lite: *.tflite + # ONNX Runtime: *.onnx + # OpenCV DNN: *.onnx with dnn=True super().__init__() w = str(weights[0] if isinstance(weights, list) else weights) - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w) - fp16 &= pt or jit or onnx or engine # FP16 - nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCWH) - stride = 32 # default stride - cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA - if not (pt or triton): - w = attempt_download(w) # download if not local + suffix, suffixes = Path(w).suffix.lower(), ['.pt', '.onnx', '.tflite', '.pb', '', '.mlmodel'] + check_suffix(w, suffixes) # check weights have acceptable suffix + pt, onnx, tflite, pb, saved_model, coreml = (suffix == x for x in suffixes) # backend booleans + jit = pt and 'torchscript' in w.lower() + stride, names = 64, [f'class{i}' for i in range(1000)] # assign defaults - if pt: # PyTorch - model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse) - stride = max(int(model.stride.max()), 32) # model stride - names = model.module.names if hasattr(model, 'module') else model.names # get class names - model.half() if fp16 else model.float() - self.model = model # explicitly assign for to(), cpu(), cuda(), half() - elif jit: # TorchScript + if jit: # TorchScript LOGGER.info(f'Loading {w} for TorchScript inference...') extra_files = {'config.txt': ''} # model metadata - model = torch.jit.load(w, _extra_files=extra_files, map_location=device) - model.half() if fp16 else model.float() - if extra_files['config.txt']: # load metadata dict - d = json.loads(extra_files['config.txt'], - object_hook=lambda d: {int(k) if k.isdigit() else k: v - for k, v in d.items()}) + model = torch.jit.load(w, _extra_files=extra_files) + if extra_files['config.txt']: + d = json.loads(extra_files['config.txt']) # extra_files dict stride, names = int(d['stride']), d['names'] + elif pt: # PyTorch + from models.experimental import attempt_load # scoped to avoid circular import + model = torch.jit.load(w) if 'torchscript' in w else attempt_load(weights, map_location=device) + stride = int(model.stride.max()) # model stride + names = model.module.names if hasattr(model, 'module') else model.names # get class names + elif coreml: # CoreML *.mlmodel + import coremltools as ct + model = ct.models.MLModel(w) elif dnn: # ONNX OpenCV DNN LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...') - check_requirements('opencv-python>=4.5.4') + check_requirements(('opencv-python>=4.5.4',)) net = cv2.dnn.readNetFromONNX(w) elif onnx: # ONNX Runtime LOGGER.info(f'Loading {w} for ONNX Runtime inference...') + cuda = torch.cuda.is_available() check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime')) import onnxruntime providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] session = onnxruntime.InferenceSession(w, providers=providers) - output_names = [x.name for x in session.get_outputs()] - meta = session.get_modelmeta().custom_metadata_map # metadata - if 'stride' in meta: - stride, names = int(meta['stride']), eval(meta['names']) - elif xml: # OpenVINO - LOGGER.info(f'Loading {w} for OpenVINO inference...') - check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - ie = Core() - if not Path(w).is_file(): # 
if not *.xml - w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir - network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin')) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout('NCHW')) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model(network, device_name='CPU') # device_name="MYRIAD" for Intel NCS2 - stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata - elif engine: # TensorRT - LOGGER.info(f'Loading {w} for TensorRT inference...') - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0 - if device.type == 'cpu': - device = torch.device('cuda:0') - Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) - logger = trt.Logger(trt.Logger.INFO) - with open(w, 'rb') as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - output_names = [] - fp16 = False # default updated below - dynamic = False - for i in range(model.num_bindings): - name = model.get_binding_name(i) - dtype = trt.nptype(model.get_binding_dtype(i)) - if model.binding_is_input(i): - if -1 in tuple(model.get_binding_shape(i)): # dynamic - dynamic = True - context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2])) - if dtype == np.float16: - fp16 = True - else: # output - output_names.append(name) - shape = tuple(context.get_binding_shape(i)) - im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device) - bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr())) - binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) - batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size - elif coreml: # CoreML - LOGGER.info(f'Loading {w} for CoreML inference...') - import coremltools as ct - model = ct.models.MLModel(w) - elif saved_model: # TF SavedModel - LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...') - import tensorflow as tf - keras = False # assume TF1 saved_model - model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w) - elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...') + else: # TensorFlow model (TFLite, pb, saved_model) import tensorflow as tf + if pb: # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt + def wrap_frozen_graph(gd, inputs, outputs): + x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped + return x.prune(tf.nest.map_structure(x.graph.as_graph_element, inputs), + tf.nest.map_structure(x.graph.as_graph_element, outputs)) - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=''), []) # wrapped - ge = x.graph.as_graph_element - return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) - - def gd_outputs(gd): - name_list, input_list = [], [] - for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef - name_list.append(node.name) - input_list.extend(node.input) - return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp')) - - gd = tf.Graph().as_graph_def() # TF GraphDef - with 
open(w, 'rb') as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph(gd, inputs='x:0', outputs=gd_outputs(gd)) - elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate, - if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...') - delegate = { - 'Linux': 'libedgetpu.so.1', - 'Darwin': 'libedgetpu.1.dylib', - 'Windows': 'edgetpu.dll'}[platform.system()] - interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)]) - else: # TFLite - LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - # load metadata - with contextlib.suppress(zipfile.BadZipFile): - with zipfile.ZipFile(w, 'r') as model: - meta_file = model.namelist()[0] - meta = ast.literal_eval(model.read(meta_file).decode('utf-8')) - stride, names = int(meta['stride']), meta['names'] - elif tfjs: # TF.js - raise NotImplementedError('ERROR: YOLOv3 TF.js inference is not supported') - elif paddle: # PaddlePaddle - LOGGER.info(f'Loading {w} for PaddlePaddle inference...') - check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle') - import paddle.inference as pdi - if not Path(w).is_file(): # if not *.pdmodel - w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir - weights = Path(w).with_suffix('.pdiparams') - config = pdi.Config(str(w), str(weights)) - if cuda: - config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0) - predictor = pdi.create_predictor(config) - input_handle = predictor.get_input_handle(predictor.get_input_names()[0]) - output_names = predictor.get_output_names() - elif triton: # NVIDIA Triton Inference Server - LOGGER.info(f'Using {w} as Triton Inference Server...') - check_requirements('tritonclient[all]') - from utils.triton import TritonRemoteModel - model = TritonRemoteModel(url=w) - nhwc = model.runtime.startswith('tensorflow') - else: - raise NotImplementedError(f'ERROR: {w} is not a supported format') - - # class names - if 'names' not in locals(): - names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)} - if names[0] == 'n01440764' and len(names) == 1000: # ImageNet - names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names - + LOGGER.info(f'Loading {w} for TensorFlow *.pb inference...') + graph_def = tf.Graph().as_graph_def() + graph_def.ParseFromString(open(w, 'rb').read()) + frozen_func = wrap_frozen_graph(gd=graph_def, inputs="x:0", outputs="Identity:0") + elif saved_model: + LOGGER.info(f'Loading {w} for TensorFlow saved_model inference...') + model = tf.keras.models.load_model(w) + elif tflite: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python + if 'edgetpu' in w.lower(): + LOGGER.info(f'Loading {w} for TensorFlow Edge TPU inference...') + import tflite_runtime.interpreter as tfli + delegate = {'Linux': 'libedgetpu.so.1', # install https://coral.ai/software/#edgetpu-runtime 
+ 'Darwin': 'libedgetpu.1.dylib', + 'Windows': 'edgetpu.dll'}[platform.system()] + interpreter = tfli.Interpreter(model_path=w, experimental_delegates=[tfli.load_delegate(delegate)]) + else: + LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') + interpreter = tf.lite.Interpreter(model_path=w) # load TFLite model + interpreter.allocate_tensors() # allocate + input_details = interpreter.get_input_details() # inputs + output_details = interpreter.get_output_details() # outputs self.__dict__.update(locals()) # assign all variables to self - def forward(self, im, augment=False, visualize=False): - # YOLOv3 MultiBackend inference + def forward(self, im, augment=False, visualize=False, val=False): + # MultiBackend inference b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - if self.nhwc: - im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3) - if self.pt: # PyTorch - y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im) - elif self.jit: # TorchScript - y = self.model(im) - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im}) - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = list(self.executable_network([im]).values()) - elif self.engine: # TensorRT - if self.dynamic and im.shape != self.bindings['images'].shape: - i = self.model.get_binding_index('images') - self.context.set_binding_shape(i, im.shape) # reshape if dynamic - self.bindings['images'] = self.bindings['images']._replace(shape=im.shape) - for name in self.output_names: - i = self.model.get_binding_index(name) - self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i))) - s = self.bindings['images'].shape - assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" - self.binding_addrs['images'] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = [self.bindings[x].data for x in sorted(self.output_names)] - elif self.coreml: # CoreML - im = im.cpu().numpy() + y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize) + return y if val else y[0] + elif self.coreml: # CoreML *.mlmodel + im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) im = Image.fromarray((im[0] * 255).astype('uint8')) # im = im.resize((192, 320), Image.ANTIALIAS) y = self.model.predict({'image': im}) # coordinates are xywh normalized - if 'confidence' in y: - box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels - conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float) - y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) - else: - y = list(reversed(y.values())) # reversed for segmentation models (pred, proto) - elif self.paddle: # PaddlePaddle - im = im.cpu().numpy().astype(np.float32) - self.input_handle.copy_from_cpu(im) - self.predictor.run() - y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names] - elif self.triton: # NVIDIA Triton Inference Server - y = self.model(im) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.cpu().numpy() - if self.saved_model: # SavedModel - y = self.model(im, training=False) if 
self.keras else self.model(im) - elif self.pb: # GraphDef - y = self.frozen_func(x=self.tf.constant(im)) - else: # Lite or Edge TPU - input = self.input_details[0] + box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels + conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float) + y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) + elif self.onnx: # ONNX + im = im.cpu().numpy() # torch to numpy + if self.dnn: # ONNX OpenCV DNN + self.net.setInput(im) + y = self.net.forward() + else: # ONNX Runtime + y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0] + else: # TensorFlow model (TFLite, pb, saved_model) + im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) + if self.pb: + y = self.frozen_func(x=self.tf.constant(im)).numpy() + elif self.saved_model: + y = self.model(im, training=False).numpy() + elif self.tflite: + input, output = self.input_details[0], self.output_details[0] int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model if int8: scale, zero_point = input['quantization'] im = (im / scale + zero_point).astype(np.uint8) # de-scale self.interpreter.set_tensor(input['index'], im) self.interpreter.invoke() - y = [] - for output in self.output_details: - x = self.interpreter.get_tensor(output['index']) - if int8: - scale, zero_point = output['quantization'] - x = (x.astype(np.float32) - zero_point) * scale # re-scale - y.append(x) - y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y] - y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, (list, tuple)): - return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y] - else: - return self.from_numpy(y) - - def from_numpy(self, x): - return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton - if any(warmup_types) and (self.device.type != 'cpu' or self.triton): - im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def _model_type(p='path/to/model.pt'): - # Return model type from model path, i.e. 
path='path/to/model.onnx' -> type=onnx - # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle] - from export import export_formats - from utils.downloads import is_url - sf = list(export_formats().Suffix) # export suffixes - if not is_url(p, check=False): - check_suffix(p, sf) # checks - url = urlparse(p) # if url may be Triton inference server - types = [s in Path(p).name for s in sf] - types[8] &= not types[9] # tflite &= not edgetpu - triton = not any(types) and all([any(s in url.scheme for s in ['http', 'grpc']), url.netloc]) - return types + [triton] - - @staticmethod - def _load_metadata(f=Path('path/to/meta.yaml')): - # Load metadata from meta.yaml if it exists - if f.exists(): - d = yaml_load(f) - return d['stride'], d['names'] # assign stride, names - return None, None + y = self.interpreter.get_tensor(output['index']) + if int8: + scale, zero_point = output['quantization'] + y = (y.astype(np.float32) - zero_point) * scale # re-scale + y[..., 0] *= w # x + y[..., 1] *= h # y + y[..., 2] *= w # w + y[..., 3] *= h # h + y = torch.tensor(y) + return (y, []) if val else y class AutoShape(nn.Module): - # YOLOv3 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS + # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS conf = 0.25 # NMS confidence threshold iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs + multi_label = False # NMS multiple labels per box max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - def __init__(self, model, verbose=True): + def __init__(self, model): super().__init__() - if verbose: - LOGGER.info('Adding AutoShape... ') - copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes - self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model self.model = model.eval() - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.inplace = False # Detect.inplace=False for safe multithread inference - m.export = True # do not output loss values + + def autoshape(self): + LOGGER.info('AutoShape already enabled, skipping... ') # model already converted to model.autoshape() + return self def _apply(self, fn): # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers self = super()._apply(fn) - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) + m = self.model.model[-1] # Detect() + m.stride = fn(m.stride) + m.grid = list(map(fn, m.grid)) + if isinstance(m.anchor_grid, list): + m.anchor_grid = list(map(fn, m.anchor_grid)) return self - @smart_inference_mode() - def forward(self, ims, size=640, augment=False, profile=False): - # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are: - # file: ims = 'data/images/zidane.jpg' # str or PosixPath + @torch.no_grad() + def forward(self, imgs, size=640, augment=False, profile=False): + # Inference from various sources. 
For height=640, width=1280, RGB images example inputs are: + # file: imgs = 'data/images/zidane.jpg' # str or PosixPath # URI: = 'https://ultralytics.com/images/zidane.jpg' # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) @@ -665,139 +434,129 @@ class AutoShape(nn.Module): # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - dt = (Profile(), Profile(), Profile()) - with dt[0]: - if isinstance(size, int): # expand - size = (size, size) - p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param - autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference - if isinstance(ims, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model(ims.to(p.device).type_as(p), augment=augment) # inference + t = [time_sync()] + p = next(self.model.parameters()) # for device and type + if isinstance(imgs, torch.Tensor): # torch + with amp.autocast(enabled=p.device.type != 'cpu'): + return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - # Pre-process - n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(ims): - f = f'image{i}' # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = max(size) / max(s) # gain - shape1.append([int(y * g) for y in s]) - ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad - x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 + # Pre-process + n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images + shape0, shape1, files = [], [], [] # image and inference shapes, filenames + for i, im in enumerate(imgs): + f = f'image{i}' # filename + if isinstance(im, (str, Path)): # filename or uri + im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im + im = np.asarray(exif_transpose(im)) + elif isinstance(im, Image.Image): # PIL Image + im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f + files.append(Path(f).with_suffix('.jpg').name) + if im.shape[0] < 5: # image in CHW + im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) + im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input + s = im.shape[:2] # HWC + shape0.append(s) # image shape + g = (size / max(s)) # gain + shape1.append([y * g for y in s]) + 
imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update + shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape + x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad + x = np.stack(x, 0) if n > 1 else x[0][None] # stack + x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW + x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 + t.append(time_sync()) - with amp.autocast(autocast): + with amp.autocast(enabled=p.device.type != 'cpu'): # Inference - with dt[1]: - y = self.model(x, augment=augment) # forward + y = self.model(x, augment, profile)[0] # forward + t.append(time_sync()) # Post-process - with dt[2]: - y = non_max_suppression(y if self.dmb else y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det) # NMS - for i in range(n): - scale_boxes(shape1, y[i][:, :4], shape0[i]) + y = non_max_suppression(y, self.conf, iou_thres=self.iou, classes=self.classes, + multi_label=self.multi_label, max_det=self.max_det) # NMS + for i in range(n): + scale_coords(shape1, y[i][:, :4], shape0[i]) - return Detections(ims, y, files, dt, self.names, x.shape) + t.append(time_sync()) + return Detections(imgs, y, files, t, self.names, x.shape) class Detections: - # YOLOv3 detections class for inference results - def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None): + # detections class for inference results + def __init__(self, imgs, pred, files, times=None, names=None, shape=None): super().__init__() d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations - self.ims = ims # list of images as numpy arrays + gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs] # normalizations + self.imgs = imgs # list of images as numpy arrays self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) self.names = names # class names self.files = files # image filenames - self.times = times # profiling times self.xyxy = pred # xyxy pixels self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized self.n = len(self.pred) # number of images (batch size) - self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms) - self.s = tuple(shape) # inference BCHW shape + self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms) + self.s = shape # inference BCHW shape - def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')): - s, crops = '', [] - for i, (im, pred) in enumerate(zip(self.ims, self.pred)): - s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string + def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')): + crops = [] + for i, (im, pred) in enumerate(zip(self.imgs, self.pred)): + s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string if pred.shape[0]: for c in pred[:, -1].unique(): n = (pred[:, -1] == c).sum() # detections per class s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - s = s.rstrip(', ') if show or save or render or crop: annotator = Annotator(im, example=str(self.names)) for *box, conf, cls in reversed(pred): # xyxy, confidence, class label = 
f'{self.names[int(cls)]} {conf:.2f}' if crop: file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None - crops.append({ - 'box': box, - 'conf': conf, - 'cls': cls, - 'label': label, - 'im': save_one_box(box, im, file=file, save=save)}) + crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label, + 'im': save_one_box(box, im, file=file, save=save)}) else: # all others - annotator.box_label(box, label if labels else '', color=colors(cls)) + annotator.box_label(box, label, color=colors(cls)) im = annotator.im else: s += '(no detections)' im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np + if pprint: + LOGGER.info(s.rstrip(', ')) if show: - display(im) if is_notebook() else im.show(self.files[i]) + im.show(self.files[i]) # show if save: f = self.files[i] im.save(save_dir / f) # save if i == self.n - 1: LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}") if render: - self.ims[i] = np.asarray(im) - if pprint: - s = s.lstrip('\n') - return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t + self.imgs[i] = np.asarray(im) if crop: if save: LOGGER.info(f'Saved results to {save_dir}\n') return crops - @TryExcept('Showing images is not supported in this environment') - def show(self, labels=True): - self._run(show=True, labels=labels) # show results + def print(self): + self.display(pprint=True) # print results + LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % + self.t) - def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) # increment save_dir - self._run(save=True, labels=labels, save_dir=save_dir) # save results + def show(self): + self.display(show=True) # show results - def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None - return self._run(crop=True, save=save, save_dir=save_dir) # crop results + def save(self, save_dir='runs/detect/exp'): + save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir + self.display(save=True, save_dir=save_dir) # save results - def render(self, labels=True): - self._run(render=True, labels=labels) # render results - return self.ims + def crop(self, save=True, save_dir='runs/detect/exp'): + save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None + return self.display(crop=True, save=save, save_dir=save_dir) # crop results + + def render(self): + self.display(render=True) # render results + return self.imgs def pandas(self): # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) @@ -811,57 +570,24 @@ class Detections: def tolist(self): # return a list of Detections objects, i.e. 
'for result in results.tolist():' - r = range(self.n) # iterable - x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r] - # for d in x: - # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list + x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)] + for d in x: + for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: + setattr(d, k, getattr(d, k)[0]) # pop out of list return x - def print(self): - LOGGER.info(self.__str__()) - - def __len__(self): # override len(results) + def __len__(self): return self.n - def __str__(self): # override print(results) - return self._run(pprint=True) # print results - - def __repr__(self): - return f'YOLOv3 {self.__class__} instance\n' + self.__str__() - - -class Proto(nn.Module): - # YOLOv3 mask Proto module for segmentation models - def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks - super().__init__() - self.cv1 = Conv(c1, c_, k=3) - self.upsample = nn.Upsample(scale_factor=2, mode='nearest') - self.cv2 = Conv(c_, c_, k=3) - self.cv3 = Conv(c_, c2) - - def forward(self, x): - return self.cv3(self.cv2(self.upsample(self.cv1(x)))) - class Classify(nn.Module): - # YOLOv3 classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__(self, - c1, - c2, - k=1, - s=1, - p=None, - g=1, - dropout_p=0.0): # ch_in, ch_out, kernel, stride, padding, groups, dropout probability + # Classification head, i.e. x(b,c1,20,20) to x(b,c2) + def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups super().__init__() - c_ = 1280 # efficientnet_b0 size - self.conv = Conv(c1, c_, k, s, autopad(k, p), g) - self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1) - self.drop = nn.Dropout(p=dropout_p, inplace=True) - self.linear = nn.Linear(c_, c2) # to x(b,c2) + self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) + self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) + self.flat = nn.Flatten() def forward(self, x): - if isinstance(x, list): - x = torch.cat(x, 1) - return self.linear(self.drop(self.pool(self.conv(x)).flatten(1))) + z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list + return self.flat(self.conv(z)) # flatten to x(b,c2) diff --git a/yolov3/models/experimental.py b/yolov3/models/experimental.py index 39e3ad9f..ab8266a1 100644 --- a/yolov3/models/experimental.py +++ b/yolov3/models/experimental.py @@ -8,9 +8,24 @@ import numpy as np import torch import torch.nn as nn +from models.common import Conv from utils.downloads import attempt_download +class CrossConv(nn.Module): + # Cross Convolution Downsample + def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): + # ch_in, ch_out, kernel, stride, groups, expansion, shortcut + super().__init__() + c_ = int(c2 * e) # hidden channels + self.cv1 = Conv(c1, c_, (1, k), (1, s)) + self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) + self.add = shortcut and c1 == c2 + + def forward(self, x): + return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) + + class Sum(nn.Module): # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 def __init__(self, n, weight=False): # n: number of inputs @@ -48,8 +63,8 @@ class MixConv2d(nn.Module): a[0] = 1 c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - self.m = nn.ModuleList([ - nn.Conv2d(c1, int(c_), k, s, k // 2, 
groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)]) + self.m = nn.ModuleList( + [nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)]) self.bn = nn.BatchNorm2d(c2) self.act = nn.SiLU() @@ -63,49 +78,44 @@ class Ensemble(nn.ModuleList): super().__init__() def forward(self, x, augment=False, profile=False, visualize=False): - y = [module(x, augment, profile, visualize)[0] for module in self] + y = [] + for module in self: + y.append(module(x, augment, profile, visualize)[0]) # y = torch.stack(y).max(0)[0] # max ensemble # y = torch.stack(y).mean(0) # mean ensemble y = torch.cat(y, 1) # nms ensemble return y, None # inference, train output -def attempt_load(weights, device=None, inplace=True, fuse=True): - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a +def attempt_load(weights, map_location=None, inplace=True, fuse=True): from models.yolo import Detect, Model + # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a model = Ensemble() for w in weights if isinstance(weights, list) else [weights]: - ckpt = torch.load(attempt_download(w), map_location='cpu') # load - ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model + ckpt = torch.load(attempt_download(w), map_location=map_location) # load + ckpt = (ckpt['ema'] or ckpt['model']).float() # FP32 model + model.append(ckpt.fuse().eval() if fuse else ckpt.eval()) # fused or un-fused model in eval mode - # Model compatibility updates - if not hasattr(ckpt, 'stride'): - ckpt.stride = torch.tensor([32.]) - if hasattr(ckpt, 'names') and isinstance(ckpt.names, (list, tuple)): - ckpt.names = dict(enumerate(ckpt.names)) # convert to dict - - model.append(ckpt.fuse().eval() if fuse and hasattr(ckpt, 'fuse') else ckpt.eval()) # model in eval mode - - # Module compatibility updates + # Compatibility updates for m in model.modules(): t = type(m) if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model): m.inplace = inplace # torch 1.7.0 compatibility - if t is Detect and not isinstance(m.anchor_grid, list): - delattr(m, 'anchor_grid') - setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl) + if t is Detect: + if not isinstance(m.anchor_grid, list): # new Detect Layer compatibility + delattr(m, 'anchor_grid') + setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl) + elif t is Conv: + m._non_persistent_buffers_set = set() # torch 1.6.0 compatibility elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'): m.recompute_scale_factor = None # torch 1.11.0 compatibility - # Return model if len(model) == 1: - return model[-1] - - # Return detection ensemble - print(f'Ensemble created with {weights}\n') - for k in 'names', 'nc', 'yaml': - setattr(model, k, getattr(model[0], k)) - model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride - assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}' - return model + return model[-1] # return model + else: + print(f'Ensemble created with {weights}\n') + for k in ['names']: + setattr(model, k, getattr(model[-1], k)) + model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride + return model # return ensemble diff --git a/yolov3/models/tf.py b/yolov3/models/tf.py index 748a95c9..956ef2bd 100644 --- a/yolov3/models/tf.py +++ b/yolov3/models/tf.py @@ -1,22 +1,25 @@ # YOLOv3 🚀 by 
Ultralytics, GPL-3.0 license """ -TensorFlow, Keras and TFLite versions of YOLOv3 +TensorFlow, Keras and TFLite versions of Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127 Usage: - $ python models/tf.py --weights yolov5s.pt + $ python models/tf.py --weights yolov3.pt Export: - $ python export.py --weights yolov5s.pt --include saved_model pb tflite tfjs + $ python path/to/export.py --weights yolov3.pt --include saved_model pb tflite tfjs """ import argparse +import logging import sys from copy import deepcopy from pathlib import Path +from packaging import version + FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv3 root directory +ROOT = FILE.parents[1] # root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH # ROOT = ROOT.relative_to(Path.cwd()) # relative @@ -25,15 +28,21 @@ import numpy as np import tensorflow as tf import torch import torch.nn as nn +from keras import backend +from keras.engine.base_layer import Layer +from keras.engine.input_spec import InputSpec +from keras.utils import conv_utils from tensorflow import keras -from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv, - DWConvTranspose2d, Focus, autopad) -from models.experimental import MixConv2d, attempt_load -from models.yolo import Detect, Segment +from models.common import C3, SPP, SPPF, Bottleneck, BottleneckCSP, Concat, Conv, DWConv, Focus, autopad +from models.experimental import CrossConv, MixConv2d, attempt_load +from models.yolo import Detect from utils.activations import SiLU from utils.general import LOGGER, make_divisible, print_args +# isort: off +from tensorflow.python.util.tf_export import keras_export + class TFBN(keras.layers.Layer): # TensorFlow BatchNormalization wrapper @@ -50,14 +59,33 @@ class TFBN(keras.layers.Layer): return self.bn(inputs) +class TFMaxPool2d(keras.layers.Layer): + # TensorFlow MAX Pooling + def __init__(self, k, s, p, w=None): + super().__init__() + self.pool = keras.layers.MaxPool2D(pool_size=k, strides=s, padding='valid') + + def call(self, inputs): + return self.pool(inputs) + + +class TFZeroPad2d(keras.layers.Layer): + # TensorFlow MAX Pooling + def __init__(self, p, w=None): + super().__init__() + if version.parse(tf.__version__) < version.parse('2.11.0'): + self.zero_pad = ZeroPadding2D(padding=p) + else: + self.zero_pad = keras.layers.ZeroPadding2D(padding=((p[0], p[1]), (p[2], p[3]))) + + def call(self, inputs): + return self.zero_pad(inputs) + + class TFPad(keras.layers.Layer): - # Pad inputs in spatial dimensions 1 and 2 def __init__(self, pad): super().__init__() - if isinstance(pad, int): - self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]]) - else: # tuple/list - self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]]) + self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]]) def call(self, inputs): return tf.pad(inputs, self.pad, mode='constant', constant_values=0) @@ -69,69 +97,31 @@ class TFConv(keras.layers.Layer): # ch_in, ch_out, weights, kernel, stride, padding, groups super().__init__() assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument" + assert isinstance(k, int), "Convolution with multiple kernels are not allowed." # TensorFlow convolution padding is inconsistent with PyTorch (e.g. 
k=3 s=2 'SAME' padding) # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch + conv = keras.layers.Conv2D( - filters=c2, - kernel_size=k, - strides=s, - padding='SAME' if s == 1 else 'VALID', - use_bias=not hasattr(w, 'bn'), + c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False if hasattr(w, 'bn') else True, kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()), bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy())) self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv]) self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity - self.act = activations(w.act) if act else tf.identity + + # activations + if isinstance(w.act, nn.LeakyReLU): + self.act = (lambda x: keras.activations.relu(x, alpha=0.1)) if act else tf.identity + elif isinstance(w.act, nn.Hardswish): + self.act = (lambda x: x * tf.nn.relu6(x + 3) * 0.166666667) if act else tf.identity + elif isinstance(w.act, (nn.SiLU, SiLU)): + self.act = (lambda x: keras.activations.swish(x)) if act else tf.identity + else: + raise Exception(f'no matching TensorFlow activation found for {w.act}') def call(self, inputs): return self.act(self.bn(self.conv(inputs))) -class TFDWConv(keras.layers.Layer): - # Depthwise convolution - def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None): - # ch_in, ch_out, weights, kernel, stride, padding, groups - super().__init__() - assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels' - conv = keras.layers.DepthwiseConv2D( - kernel_size=k, - depth_multiplier=c2 // c1, - strides=s, - padding='SAME' if s == 1 else 'VALID', - use_bias=not hasattr(w, 'bn'), - depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()), - bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy())) - self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv]) - self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity - self.act = activations(w.act) if act else tf.identity - - def call(self, inputs): - return self.act(self.bn(self.conv(inputs))) - - -class TFDWConvTranspose2d(keras.layers.Layer): - # Depthwise ConvTranspose2d - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None): - # ch_in, ch_out, weights, kernel, stride, padding, groups - super().__init__() - assert c1 == c2, f'TFDWConv() output={c2} must be equal to input={c1} channels' - assert k == 4 and p1 == 1, 'TFDWConv() only valid for k=4 and p1=1' - weight, bias = w.weight.permute(2, 3, 1, 0).numpy(), w.bias.numpy() - self.c1 = c1 - self.conv = [ - keras.layers.Conv2DTranspose(filters=1, - kernel_size=k, - strides=s, - padding='VALID', - output_padding=p2, - use_bias=True, - kernel_initializer=keras.initializers.Constant(weight[..., i:i + 1]), - bias_initializer=keras.initializers.Constant(bias[i])) for i in range(c1)] - - def call(self, inputs): - return tf.concat([m(x) for m, x in zip(self.conv, tf.split(inputs, self.c1, 3))], 3)[:, 1:-1, 1:-1] - - class TFFocus(keras.layers.Layer): # Focus wh information into c-space def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None): @@ -141,8 +131,10 @@ class TFFocus(keras.layers.Layer): def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c) # inputs = inputs / 255 # normalize 0-255 to 0-1 - inputs = [inputs[:, ::2, ::2, :], inputs[:, 1::2, ::2, :], inputs[:, ::2, 1::2, :], inputs[:, 1::2, 1::2, :]] - return 
self.conv(tf.concat(inputs, 3)) + return self.conv(tf.concat([inputs[:, ::2, ::2, :], + inputs[:, 1::2, ::2, :], + inputs[:, ::2, 1::2, :], + inputs[:, 1::2, 1::2, :]], 3)) class TFBottleneck(keras.layers.Layer): @@ -158,32 +150,15 @@ class TFBottleneck(keras.layers.Layer): return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs)) -class TFCrossConv(keras.layers.Layer): - # Cross Convolution - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None): - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, (1, k), (1, s), w=w.cv1) - self.cv2 = TFConv(c_, c2, (k, 1), (s, 1), g=g, w=w.cv2) - self.add = shortcut and c1 == c2 - - def call(self, inputs): - return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs)) - - class TFConv2d(keras.layers.Layer): # Substitution for PyTorch nn.Conv2D def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None): super().__init__() assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument" - self.conv = keras.layers.Conv2D(filters=c2, - kernel_size=k, - strides=s, - padding='VALID', - use_bias=bias, - kernel_initializer=keras.initializers.Constant( - w.weight.permute(2, 3, 1, 0).numpy()), - bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None) + self.conv = keras.layers.Conv2D( + c2, k, s, 'VALID', use_bias=bias, + kernel_initializer=keras.initializers.Constant(w.weight.permute(2, 3, 1, 0).numpy()), + bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None, ) def call(self, inputs): return self.conv(inputs) @@ -200,7 +175,7 @@ class TFBottleneckCSP(keras.layers.Layer): self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3) self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4) self.bn = TFBN(w.bn) - self.act = lambda x: keras.activations.swish(x) + self.act = lambda x: keras.activations.relu(x, alpha=0.1) self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)]) def call(self, inputs): @@ -224,22 +199,6 @@ class TFC3(keras.layers.Layer): return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3)) -class TFC3x(keras.layers.Layer): - # 3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None): - # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2) - self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3) - self.m = keras.Sequential([ - TFCrossConv(c_, c_, k=3, s=1, g=g, e=1.0, shortcut=shortcut, w=w.m[j]) for j in range(n)]) - - def call(self, inputs): - return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3)) - - class TFSPP(keras.layers.Layer): # Spatial pyramid pooling layer used in YOLOv3-SPP def __init__(self, c1, c2, k=(5, 9, 13), w=None): @@ -271,7 +230,6 @@ class TFSPPF(keras.layers.Layer): class TFDetect(keras.layers.Layer): - # TF YOLOv3 Detect layer def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None): # detection layer super().__init__() self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32) @@ -281,7 +239,8 @@ class TFDetect(keras.layers.Layer): self.na = len(anchors[0]) // 2 # number of anchors self.grid = [tf.zeros(1)] * self.nl # init grid self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32) - self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), [self.nl, 
1, -1, 1, 2]) + self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), + [self.nl, 1, -1, 1, 2]) self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] self.training = False # set to False after building model self.imgsz = imgsz @@ -296,21 +255,19 @@ class TFDetect(keras.layers.Layer): x.append(self.m[i](inputs[i])) # x(bs,20,20,255) to x(bs,3,20,20,85) ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i] - x[i] = tf.reshape(x[i], [-1, ny * nx, self.na, self.no]) + x[i] = tf.transpose(tf.reshape(x[i], [-1, ny * nx, self.na, self.no]), [0, 2, 1, 3]) if not self.training: # inference - y = x[i] - grid = tf.transpose(self.grid[i], [0, 2, 1, 3]) - 0.5 - anchor_grid = tf.transpose(self.anchor_grid[i], [0, 2, 1, 3]) * 4 - xy = (tf.sigmoid(y[..., 0:2]) * 2 + grid) * self.stride[i] # xy - wh = tf.sigmoid(y[..., 2:4]) ** 2 * anchor_grid + y = tf.sigmoid(x[i]) + xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy + wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # Normalize xywh to 0-1 to reduce calibration error xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32) wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32) - y = tf.concat([xy, wh, tf.sigmoid(y[..., 4:5 + self.nc]), y[..., 5 + self.nc:]], -1) - z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no])) + y = tf.concat([xy, wh, y[..., 4:]], -1) + z.append(tf.reshape(y, [-1, 3 * ny * nx, self.no])) - return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1),) + return x if self.training else (tf.concat(z, 1), x) @staticmethod def _make_grid(nx=20, ny=20): @@ -320,44 +277,11 @@ class TFDetect(keras.layers.Layer): return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32) -class TFSegment(TFDetect): - # YOLOv3 Segment head for segmentation models - def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), imgsz=(640, 640), w=None): - super().__init__(nc, anchors, ch, imgsz, w) - self.nm = nm # number of masks - self.npr = npr # number of protos - self.no = 5 + nc + self.nm # number of outputs per anchor - self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] # output conv - self.proto = TFProto(ch[0], self.npr, self.nm, w=w.proto) # protos - self.detect = TFDetect.call - - def call(self, x): - p = self.proto(x[0]) - # p = TFUpsample(None, scale_factor=4, mode='nearest')(self.proto(x[0])) # (optional) full-size protos - p = tf.transpose(p, [0, 3, 1, 2]) # from shape(1,160,160,32) to shape(1,32,160,160) - x = self.detect(self, x) - return (x, p) if self.training else (x[0], p) - - -class TFProto(keras.layers.Layer): - - def __init__(self, c1, c_=256, c2=32, w=None): - super().__init__() - self.cv1 = TFConv(c1, c_, k=3, w=w.cv1) - self.upsample = TFUpsample(None, scale_factor=2, mode='nearest') - self.cv2 = TFConv(c_, c_, k=3, w=w.cv2) - self.cv3 = TFConv(c_, c2, w=w.cv3) - - def call(self, inputs): - return self.cv3(self.cv2(self.upsample(self.cv1(inputs)))) - - class TFUpsample(keras.layers.Layer): - # TF version of torch.nn.Upsample() def __init__(self, size, scale_factor, mode, w=None): # warning: all arguments needed including 'w' super().__init__() - assert scale_factor % 2 == 0, 'scale_factor must be multiple of 2' - self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * scale_factor, x.shape[2] * scale_factor), mode) + assert scale_factor == 2, "scale_factor must be 2" + self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * 2, 
x.shape[2] * 2), method=mode) # self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode) # with default arguments: align_corners=False, half_pixel_centers=False # self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x, @@ -368,10 +292,9 @@ class TFUpsample(keras.layers.Layer): class TFConcat(keras.layers.Layer): - # TF version of torch.concat() def __init__(self, dimension=1, w=None): super().__init__() - assert dimension == 1, 'convert only NCHW to NHWC concat' + assert dimension == 1, "convert only NCHW to NHWC concat" self.d = 3 def call(self, inputs): @@ -395,26 +318,22 @@ def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3) pass n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [ - nn.Conv2d, Conv, DWConv, DWConvTranspose2d, Bottleneck, SPP, SPPF, MixConv2d, Focus, CrossConv, - BottleneckCSP, C3, C3x]: + if m in [nn.Conv2d, Conv, Bottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]: c1, c2 = ch[f], args[0] c2 = make_divisible(c2 * gw, 8) if c2 != no else c2 args = [c1, c2, *args[1:]] - if m in [BottleneckCSP, C3, C3x]: + if m in [BottleneckCSP, C3]: args.insert(2, n) n = 1 elif m is nn.BatchNorm2d: args = [ch[f]] elif m is Concat: c2 = sum(ch[-1 if x == -1 else x + 1] for x in f) - elif m in [Detect, Segment]: + elif m is Detect: args.append([ch[x + 1] for x in f]) if isinstance(args[1], int): # number of anchors args[1] = [list(range(args[1] * 2))] * len(f) - if m is Segment: - args[3] = make_divisible(args[3] * gw, 8) args.append(imgsz) else: c2 = ch[f] @@ -435,8 +354,7 @@ def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3) class TFModel: - # TF YOLOv3 model - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes + def __init__(self, cfg='yolov3.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes super().__init__() if isinstance(cfg, dict): self.yaml = cfg # model dict @@ -452,17 +370,11 @@ class TFModel: self.yaml['nc'] = nc # override yaml value self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz) - def predict(self, - inputs, - tf_nms=False, - agnostic_nms=False, - topk_per_class=100, - topk_all=100, - iou_thres=0.45, + def predict(self, inputs, tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25): y = [] # outputs x = inputs - for m in self.model.layers: + for i, m in enumerate(self.model.layers): if m.f != -1: # if not from previous layer x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers @@ -477,18 +389,15 @@ class TFModel: scores = probs * classes if agnostic_nms: nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres) + return nms, x[1] else: boxes = tf.expand_dims(boxes, 2) - nms = tf.image.combined_non_max_suppression(boxes, - scores, - topk_per_class, - topk_all, - iou_thres, - conf_thres, - clip_boxes=False) - return (nms,) - return x # output [1,6300,85] = [xywh, conf, class0, class1, ...] - # x = x[0] # [x(1,6300,85), ...] to x(6300,85) + nms = tf.image.combined_non_max_suppression( + boxes, scores, topk_per_class, topk_all, iou_thres, conf_thres, clip_boxes=False) + return nms, x[1] + + return x[0] # output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...] + # x = x[0][0] # [x(1,6300,85), ...] 
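In parse_model above, every output-channel argument is scaled by the width multiple gw and then rounded up to a multiple of 8 with make_divisible. A small sketch of that arithmetic; the helper body mirrors the usual utils.general definition and the gw value is made up for illustration:

```python
import math

def make_divisible(x, divisor=8):
    # round up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor

gw = 0.5  # hypothetical width_multiple from the model yaml
for c2 in (64, 128, 255):
    print(c2, '->', make_divisible(c2 * gw))  # 64 -> 32, 128 -> 64, 255 -> 128
```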
to x(6300,85) # xywh = x[..., :4] # x(6300,4) boxes # conf = x[..., 4:5] # x(6300,1) confidences # cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1)) # x(6300,1) classes @@ -505,8 +414,7 @@ class AgnosticNMS(keras.layers.Layer): # TF Agnostic NMS def call(self, input, topk_all, iou_thres, conf_thres): # wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450 - return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), - input, + return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), input, fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32), name='agnostic_nms') @@ -515,69 +423,50 @@ class AgnosticNMS(keras.layers.Layer): boxes, classes, scores = x class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32) scores_inp = tf.reduce_max(scores, -1) - selected_inds = tf.image.non_max_suppression(boxes, - scores_inp, - max_output_size=topk_all, - iou_threshold=iou_thres, - score_threshold=conf_thres) + selected_inds = tf.image.non_max_suppression( + boxes, scores_inp, max_output_size=topk_all, iou_threshold=iou_thres, score_threshold=conf_thres) selected_boxes = tf.gather(boxes, selected_inds) padded_boxes = tf.pad(selected_boxes, paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]], - mode='CONSTANT', - constant_values=0.0) + mode="CONSTANT", constant_values=0.0) selected_scores = tf.gather(scores_inp, selected_inds) padded_scores = tf.pad(selected_scores, paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]], - mode='CONSTANT', - constant_values=-1.0) + mode="CONSTANT", constant_values=-1.0) selected_classes = tf.gather(class_inds, selected_inds) padded_classes = tf.pad(selected_classes, paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]], - mode='CONSTANT', - constant_values=-1.0) + mode="CONSTANT", constant_values=-1.0) valid_detections = tf.shape(selected_inds)[0] return padded_boxes, padded_scores, padded_classes, valid_detections -def activations(act=nn.SiLU): - # Returns TF activation from input PyTorch activation - if isinstance(act, nn.LeakyReLU): - return lambda x: keras.activations.relu(x, alpha=0.1) - elif isinstance(act, nn.Hardswish): - return lambda x: x * tf.nn.relu6(x + 3) * 0.166666667 - elif isinstance(act, (nn.SiLU, SiLU)): - return lambda x: keras.activations.swish(x) - else: - raise Exception(f'no matching TensorFlow activation found for PyTorch activation {act}') - - def representative_dataset_gen(dataset, ncalib=100): # Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays for n, (path, img, im0s, vid_cap, string) in enumerate(dataset): - im = np.transpose(img, [1, 2, 0]) - im = np.expand_dims(im, axis=0).astype(np.float32) - im /= 255 - yield [im] + input = np.transpose(img, [1, 2, 0]) + input = np.expand_dims(input, axis=0).astype(np.float32) + input /= 255 + yield [input] if n >= ncalib: break -def run( - weights=ROOT / 'yolov5s.pt', # weights path +def run(weights=ROOT / 'yolov3.pt', # weights path imgsz=(640, 640), # inference size h,w batch_size=1, # batch size dynamic=False, # dynamic batch size -): + ): # PyTorch model im = torch.zeros((batch_size, 3, *imgsz)) # BCHW image - model = attempt_load(weights, device=torch.device('cpu'), inplace=True, fuse=False) - _ = model(im) # inference + model = attempt_load(weights, map_location=torch.device('cpu'), inplace=True, fuse=False) + y = model(im) # inference model.info() # TensorFlow model im = tf.zeros((batch_size, *imgsz, 3)) # BHWC 
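representative_dataset_gen above feeds float32, 0-1 scaled NHWC batches to a TFLite converter for int8 calibration during export. A minimal sketch of how such a generator is typically wired up (TF 2.x assumed; the tiny Sequential model and random images are hypothetical stand-ins for the real detection model and dataloader used by export.py):

```python
import numpy as np
import tensorflow as tf

def representative_gen(n=10):
    # yield calibration batches: float32, NHWC, scaled to [0, 1]
    for _ in range(n):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

model = tf.keras.Sequential([tf.keras.layers.Conv2D(8, 3, input_shape=(640, 640, 3))])  # stand-in model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_gen  # the converter calls this with no arguments
tflite_model = converter.convert()
```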
image tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz) - _ = tf_model.predict(im) # inference + y = tf_model.predict(im) # inference # Keras model im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size) @@ -587,15 +476,146 @@ def run( LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.') +@keras_export("keras.layers.ZeroPadding2D") +class ZeroPadding2D(Layer): + """Zero-padding layer for 2D input (e.g. picture). + + This layer can add rows and columns of zeros + at the top, bottom, left and right side of an image tensor. + + Examples: + + >>> input_shape = (1, 1, 2, 2) + >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) + >>> print(x) + [[[[0 1] + [2 3]]]] + >>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x) + >>> print(y) + tf.Tensor( + [[[[0 0] + [0 0] + [0 0] + [0 0]] + [[0 0] + [0 1] + [2 3] + [0 0]] + [[0 0] + [0 0] + [0 0] + [0 0]]]], shape=(1, 3, 4, 2), dtype=int64) + + Args: + padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. + - If int: the same symmetric padding + is applied to height and width. + - If tuple of 2 ints: + interpreted as two different + symmetric padding values for height and width: + `(symmetric_height_pad, symmetric_width_pad)`. + - If tuple of 2 tuples of 2 ints: + interpreted as + `((top_pad, bottom_pad), (left_pad, right_pad))` + data_format: A string, + one of `channels_last` (default) or `channels_first`. + The ordering of the dimensions in the inputs. + `channels_last` corresponds to inputs with shape + `(batch_size, height, width, channels)` while `channels_first` + corresponds to inputs with shape + `(batch_size, channels, height, width)`. + It defaults to the `image_data_format` value found in your + Keras config file at `~/.keras/keras.json`. + If you never set it, then it will be "channels_last". + + Input shape: + 4D tensor with shape: + - If `data_format` is `"channels_last"`: + `(batch_size, rows, cols, channels)` + - If `data_format` is `"channels_first"`: + `(batch_size, channels, rows, cols)` + + Output shape: + 4D tensor with shape: + - If `data_format` is `"channels_last"`: + `(batch_size, padded_rows, padded_cols, channels)` + - If `data_format` is `"channels_first"`: + `(batch_size, channels, padded_rows, padded_cols)` + """ + + def __init__(self, padding=(1, 1), data_format=None, **kwargs): + super().__init__(**kwargs) + self.data_format = conv_utils.normalize_data_format(data_format) + if isinstance(padding, int): + self.padding = ((padding, padding), (padding, padding)) + elif hasattr(padding, "__len__"): + if len(padding) == 4: + padding = ((padding[0], padding[1]), (padding[2], padding[3])) + if len(padding) != 2: + raise ValueError( + f"`padding` should have two elements. Received: {padding}." + ) + height_padding = conv_utils.normalize_tuple( + padding[0], 2, "1st entry of padding", allow_zero=True + ) + width_padding = conv_utils.normalize_tuple( + padding[1], 2, "2nd entry of padding", allow_zero=True + ) + self.padding = (height_padding, width_padding) + else: + raise ValueError( + "`padding` should be either an int, " + "a tuple of 2 ints " + "(symmetric_height_pad, symmetric_width_pad), " + "or a tuple of 2 tuples of 2 ints " + "((top_pad, bottom_pad), (left_pad, right_pad)). " + f"Received: {padding}." 
+ ) + self.input_spec = InputSpec(ndim=4) + + def compute_output_shape(self, input_shape): + input_shape = tf.TensorShape(input_shape).as_list() + if self.data_format == "channels_first": + if input_shape[2] is not None: + rows = input_shape[2] + self.padding[0][0] + self.padding[0][1] + else: + rows = None + if input_shape[3] is not None: + cols = input_shape[3] + self.padding[1][0] + self.padding[1][1] + else: + cols = None + return tf.TensorShape([input_shape[0], input_shape[1], rows, cols]) + elif self.data_format == "channels_last": + if input_shape[1] is not None: + rows = input_shape[1] + self.padding[0][0] + self.padding[0][1] + else: + rows = None + if input_shape[2] is not None: + cols = input_shape[2] + self.padding[1][0] + self.padding[1][1] + else: + cols = None + return tf.TensorShape([input_shape[0], rows, cols, input_shape[3]]) + + def call(self, inputs): + return backend.spatial_2d_padding( + inputs, padding=self.padding, data_format=self.data_format + ) + + def get_config(self): + config = {"padding": self.padding, "data_format": self.data_format} + base_config = super().get_config() + return dict(list(base_config.items()) + list(config.items())) + + def parse_opt(): parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path') + parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') parser.add_argument('--batch-size', type=int, default=1, help='batch size') parser.add_argument('--dynamic', action='store_true', help='dynamic batch size') opt = parser.parse_args() opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) + print_args(FILE.stem, opt) return opt @@ -603,6 +623,6 @@ def main(opt): run(**vars(opt)) -if __name__ == '__main__': +if __name__ == "__main__": opt = parse_opt() main(opt) diff --git a/yolov3/models/yolo.py b/yolov3/models/yolo.py index fbff6da7..f398d3f9 100644 --- a/yolov3/models/yolo.py +++ b/yolov3/models/yolo.py @@ -3,29 +3,26 @@ YOLO-specific modules Usage: - $ python models/yolo.py --cfg yolov5s.yaml + $ python path/to/models/yolo.py --cfg yolov3.yaml """ import argparse -import os -import platform import sys from copy import deepcopy from pathlib import Path FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv3 root directory +ROOT = FILE.parents[1] # root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH -if platform.system() != 'Windows': - ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative +# ROOT = ROOT.relative_to(Path.cwd()) # relative from models.common import * from models.experimental import * from utils.autoanchor import check_anchor_order from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args from utils.plots import feature_visualization -from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device, +from utils.torch_utils import (copy_attr, fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync) try: @@ -35,10 +32,8 @@ except ImportError: class Detect(nn.Module): - # YOLOv3 Detect head for detection models stride = None # strides computed during build - dynamic = False # force grid reconstruction - export = False # export mode + onnx_dynamic = False # ONNX export parameter def __init__(self, nc=80, anchors=(), ch=(), 
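The vendored ZeroPadding2D class above follows the stock Keras layer, including the ((top, bottom), (left, right)) tuple form used for asymmetric padding. A quick usage check with the standard tf.keras layer, whose behaviour the copy is meant to match:

```python
import numpy as np
import tensorflow as tf

x = np.zeros((1, 4, 6, 3), dtype=np.float32)                   # NHWC input
pad = tf.keras.layers.ZeroPadding2D(padding=((1, 0), (2, 2)))  # (top, bottom), (left, right)
print(pad(x).shape)  # (1, 5, 10, 3): one row added on top, two columns on each side
```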
inplace=True): # detection layer super().__init__() @@ -46,11 +41,11 @@ class Detect(nn.Module): self.no = nc + 5 # number of outputs per anchor self.nl = len(anchors) # number of detection layers self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.empty(0) for _ in range(self.nl)] # init grid - self.anchor_grid = [torch.empty(0) for _ in range(self.nl)] # init anchor grid + self.grid = [torch.zeros(1)] * self.nl # init grid + self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - self.inplace = inplace # use inplace ops (e.g. slice assignment) + self.inplace = inplace # use in-place ops (e.g. slice assignment) def forward(self, x): z = [] # inference output @@ -60,110 +55,35 @@ class Detect(nn.Module): x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() if not self.training: # inference - if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: + if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - if isinstance(self, Segment): # (boxes + masks) - xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4) - xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i] # xy - wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, conf.sigmoid(), mask), 4) - else: # Detect (boxes only) - xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4) - xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy - wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, self.na * nx * ny, self.no)) + y = x[i].sigmoid() + if self.inplace: + y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy + y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh + else: # for on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 + xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy + wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh + y = torch.cat((xy, wh, y[..., 4:]), -1) + z.append(y.view(bs, -1, self.no)) - return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x) + return x if self.training else (torch.cat(z, 1), x) - def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')): + def _make_grid(self, nx=20, ny=20, i=0): d = self.anchors[i].device - t = self.anchors[i].dtype - shape = 1, self.na, ny, nx, 2 # grid shape - y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t) - yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x) # torch>=0.7 compatibility - grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. 
y = 2.0 * x - 0.5 - anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape) + if check_version(torch.__version__, '1.10.0'): # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility + yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)], indexing='ij') + else: + yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)]) + grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() + anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ + .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() return grid, anchor_grid -class Segment(Detect): - # YOLOv3 Segment head for segmentation models - def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), inplace=True): - super().__init__(nc, anchors, ch, inplace) - self.nm = nm # number of masks - self.npr = npr # number of protos - self.no = 5 + nc + self.nm # number of outputs per anchor - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - self.proto = Proto(ch[0], self.npr, self.nm) # protos - self.detect = Detect.forward - - def forward(self, x): - p = self.proto(x[0]) - x = self.detect(self, x) - return (x, p) if self.training else (x[0], p) if self.export else (x[0], p, x[1]) - - -class BaseModel(nn.Module): - # YOLOv3 base model - def forward(self, x, profile=False, visualize=False): - return self._forward_once(x, profile, visualize) # single-scale inference, train - - def _forward_once(self, x, profile=False, visualize=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - if profile: - self._profile_one_layer(m, x, dt) - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - if visualize: - feature_visualization(x, m.type, m.i, save_dir=visualize) - return x - - def _profile_one_layer(self, m, x, dt): - c = m == self.model[-1] # is final layer, copy input as inplace fix - o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - t = time_sync() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_sync() - t) * 100) - if m == self.model[0]: - LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module") - LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - if c: - LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - LOGGER.info('Fusing layers... 
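_make_grid above builds the per-level (x, y) cell-offset tensor, with a torch.meshgrid workaround that depends on the installed PyTorch version. A standalone sketch of the grid construction (torch >= 1.10 assumed so that indexing='ij' is available):

```python
import torch

def make_grid(nx=4, ny=3):
    # (1, 1, ny, nx, 2) tensor of (x, y) cell coordinates, as in Detect._make_grid
    yv, xv = torch.meshgrid(torch.arange(ny), torch.arange(nx), indexing='ij')
    return torch.stack((xv, yv), 2).view(1, 1, ny, nx, 2).float()

grid = make_grid()
print(grid.shape)        # torch.Size([1, 1, 3, 4, 2])
print(grid[0, 0, 2, 3])  # tensor([3., 2.]) -> x = 3, y = 2
```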
') - for m in self.model.modules(): - if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.forward_fuse # update forward - self.info() - return self - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - m = self.model[-1] # Detect() - if isinstance(m, (Detect, Segment)): - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - -class DetectionModel(BaseModel): - # YOLOv3 detection model - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes +class Model(nn.Module): + def __init__(self, cfg='yolov3.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes super().__init__() if isinstance(cfg, dict): self.yaml = cfg # model dict @@ -187,13 +107,12 @@ class DetectionModel(BaseModel): # Build strides, anchors m = self.model[-1] # Detect() - if isinstance(m, (Detect, Segment)): + if isinstance(m, Detect): s = 256 # 2x min stride m.inplace = self.inplace - forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x) - m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) + m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward m.anchors /= m.stride.view(-1, 1, 1) + check_anchor_order(m) self.stride = m.stride self._initialize_biases() # only run once @@ -221,6 +140,19 @@ class DetectionModel(BaseModel): y = self._clip_augmented(y) # clip augmented tails return torch.cat(y, 1), None # augmented inference, train + def _forward_once(self, x, profile=False, visualize=False): + y, dt = [], [] # outputs + for m in self.model: + if m.f != -1: # if not from previous layer + x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers + if profile: + self._profile_one_layer(m, x, dt) + x = m(x) # run + y.append(x if m.i in self.save else None) # save output + if visualize: + feature_visualization(x, m.type, m.i, save_dir=visualize) + return x + def _descale_pred(self, p, flips, scale, img_size): # de-scale predictions following augmented inference (inverse operation) if self.inplace: @@ -249,6 +181,19 @@ class DetectionModel(BaseModel): y[-1] = y[-1][:, i:] # small return y + def _profile_one_layer(self, m, x, dt): + c = isinstance(m, Detect) # is final layer, copy input as inplace fix + o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs + t = time_sync() + for _ in range(10): + m(x.copy() if c else x) + dt.append((time_sync() - t) * 100) + if m == self.model[0]: + LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") + LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') + if c: + LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") + def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency # https://arxiv.org/abs/1708.02002 section 3.3 # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 
@@ -256,52 +201,55 @@ class DetectionModel(BaseModel): for mi, s in zip(m.m, m.stride): # from b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:5 + m.nc] += math.log(0.6 / (m.nc - 0.99999)) if cf is None else torch.log(cf / cf.sum()) # cls + b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) + def _print_biases(self): + m = self.model[-1] # Detect() module + for mi in m.m: # from + b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) + LOGGER.info( + ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) -Model = DetectionModel # retain 'Model' class for backwards compatibility + # def _print_weights(self): + # for m in self.model.modules(): + # if type(m) is Bottleneck: + # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights + def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers + LOGGER.info('Fusing layers... ') + for m in self.model.modules(): + if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): + m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv + delattr(m, 'bn') # remove batchnorm + m.forward = m.forward_fuse # update forward + self.info() + return self -class SegmentationModel(DetectionModel): - # segmentation model - def __init__(self, cfg='yolov5s-seg.yaml', ch=3, nc=None, anchors=None): - super().__init__(cfg, ch, nc, anchors) + def autoshape(self): # add AutoShape module + LOGGER.info('Adding AutoShape... ') + m = AutoShape(self) # wrap model + copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes + return m + def info(self, verbose=False, img_size=640): # print model information + model_info(self, verbose, img_size) -class ClassificationModel(BaseModel): - # classification model - def __init__(self, cfg=None, model=None, nc=1000, cutoff=10): # yaml, model, number of classes, cutoff index - super().__init__() - self._from_detection_model(model, nc, cutoff) if model is not None else self._from_yaml(cfg) - - def _from_detection_model(self, model, nc=1000, cutoff=10): - # Create a classification model from a detection model - if isinstance(model, DetectMultiBackend): - model = model.model # unwrap DetectMultiBackend - model.model = model.model[:cutoff] # backbone - m = model.model[-1] # last layer - ch = m.conv.in_channels if hasattr(m, 'conv') else m.cv1.conv.in_channels # ch into module - c = Classify(ch, nc) # Classify() - c.i, c.f, c.type = m.i, m.f, 'models.common.Classify' # index, from, type - model.model[-1] = c # replace - self.model = model.model - self.stride = model.stride - self.save = [] - self.nc = nc - - def _from_yaml(self, cfg): - # Create a classification model from a *.yaml file - self.model = None + def _apply(self, fn): + # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers + self = super()._apply(fn) + m = self.model[-1] # Detect() + if isinstance(m, Detect): + m.stride = fn(m.stride) + m.grid = list(map(fn, m.grid)) + if isinstance(m.anchor_grid, list): + m.anchor_grid = list(map(fn, m.anchor_grid)) + return self def parse_model(d, ch): # model_dict, input_channels(3) - # Parse a model.yaml dictionary LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], 
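_initialize_biases above seeds the detection-head biases so that, before training, objectness fires for roughly 8 objects per 640×640 image and each class receives a small share of about 0.6 probability mass. A worked sketch of the implied priors (numbers are illustrative; sigmoid(log(x)) equals x/(1+x), i.e. approximately x for small x):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Objectness prior: ~8 objects spread over (640 / stride)^2 grid cells
for s in (8, 16, 32):
    cells = (640 / s) ** 2
    b_obj = math.log(8 / cells)
    print(f'stride {s:2d}: obj bias {b_obj:6.2f} -> prior ~{sigmoid(b_obj):.4f}')

# Class prior: ~0.6 probability mass shared over nc classes
nc = 80
b_cls = math.log(0.6 / (nc - 0.999999))
print(f'cls bias {b_cls:6.2f} -> prior ~{sigmoid(b_cls):.4f}')
```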
d['width_multiple'], d.get('activation') - if act: - Conv.default_act = eval(act) # redefine default activation, i.e. Conv.default_act = nn.SiLU() - LOGGER.info(f"{colorstr('activation:')} {act}") # print + anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors no = na * (nc + 5) # number of outputs = anchors * (classes + 5) @@ -309,32 +257,30 @@ def parse_model(d, ch): # model_dict, input_channels(3) for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args m = eval(m) if isinstance(m, str) else m # eval strings for j, a in enumerate(args): - with contextlib.suppress(NameError): + try: args[j] = eval(a) if isinstance(a, str) else a # eval strings + except NameError: + pass n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in { - Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x}: + if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, + BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: c1, c2 = ch[f], args[0] if c2 != no: # if not output c2 = make_divisible(c2 * gw, 8) args = [c1, c2, *args[1:]] - if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}: + if m in [BottleneckCSP, C3, C3TR, C3Ghost]: args.insert(2, n) # number of repeats n = 1 elif m is nn.BatchNorm2d: args = [ch[f]] elif m is Concat: c2 = sum(ch[x] for x in f) - # TODO: channel, gw, gd - elif m in {Detect, Segment}: + elif m is Detect: args.append([ch[x] for x in f]) if isinstance(args[1], int): # number of anchors args[1] = [list(range(args[1] * 2))] * len(f) - if m is Segment: - args[3] = make_divisible(args[3] * gw, 8) elif m is Contract: c2 = ch[f] * args[0] ** 2 elif m is Expand: @@ -357,34 +303,34 @@ def parse_model(d, ch): # model_dict, input_channels(3) if __name__ == '__main__': parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs') + parser.add_argument('--cfg', type=str, default='yolov3yaml', help='model.yaml') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--profile', action='store_true', help='profile model speed') - parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer') parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') opt = parser.parse_args() opt.cfg = check_yaml(opt.cfg) # check YAML - print_args(vars(opt)) + print_args(FILE.stem, opt) device = select_device(opt.device) # Create model - im = torch.rand(opt.batch_size, 3, 640, 640).to(device) model = Model(opt.cfg).to(device) + model.train() - # Options - if opt.line_profile: # profile layer by layer - model(im, profile=True) + # Profile + if opt.profile: + img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) + y = model(img, profile=True) - elif opt.profile: # profile forward-backward - results = profile(input=im, ops=[model], n=3) - - elif opt.test: # test all models + # Test all models + if opt.test: for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): try: _ = Model(cfg) except Exception as e: print(f'Error in {cfg}: {e}') - else: # report fused model summary - model.fuse() + # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898) + # from torch.utils.tensorboard import SummaryWriter + # tb_writer = SummaryWriter('.') + # LOGGER.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/") + # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph diff --git a/yolov3/requirements.txt b/yolov3/requirements.txt old mode 100644 new mode 100755 index 7bfde97b..f78e7ee0 --- a/yolov3/requirements.txt +++ b/yolov3/requirements.txt @@ -1,35 +1,31 @@ # YOLOv3 requirements # Usage: pip install -r requirements.txt -# Base ------------------------------------------------------------------------ -gitpython -ipython # interactive notebook +# Base ---------------------------------------- matplotlib>=3.2.2 numpy>=1.18.5 opencv-python>=4.1.1 Pillow>=7.1.2 -psutil # system resources PyYAML>=5.3.1 requests>=2.23.0 scipy>=1.4.1 -thop>=0.1.1 # FLOPs computation -torch>=1.7.0 # see https://pytorch.org/get-started/locally (recommended) +torch>=1.7.0 # see https://pytorch.org/get-started/locally/ (recommended) torchvision>=0.8.1 tqdm>=4.64.0 # protobuf<=3.20.1 # https://github.com/ultralytics/yolov5/issues/8012 -# Logging --------------------------------------------------------------------- +# Logging ------------------------------------- tensorboard>=2.4.1 -# clearml>=1.2.0 +# clearml # comet -# Plotting -------------------------------------------------------------------- +# Plotting ------------------------------------ pandas>=1.1.4 seaborn>=0.11.0 -# Export ---------------------------------------------------------------------- +# Export -------------------------------------- # coremltools>=6.0 # CoreML export -# onnx>=1.12.0 # ONNX export +# onnx>=1.9.0 # ONNX export # onnx-simplifier>=0.4.1 # ONNX simplifier # nvidia-pyindex # TensorRT export # nvidia-tensorrt # TensorRT export @@ -38,14 +34,14 @@ seaborn>=0.11.0 # tensorflowjs>=3.9.0 # TF.js export # openvino-dev # OpenVINO export -# Deploy ---------------------------------------------------------------------- -setuptools>=65.5.1 # Snyk vulnerability fix -wheel>=0.38.0 # Snyk vulnerability fix +# Deploy -------------------------------------- # tritonclient[all]~=2.24.0 -# Extras ---------------------------------------------------------------------- +# Extras -------------------------------------- +ipython # interactive notebook 
+psutil # system utilization +thop>=0.1.1 # FLOPs computation # mss # screenshots # albumentations>=1.0.3 -# pycocotools>=2.0.6 # COCO mAP +# pycocotools>=2.0 # COCO mAP # roboflow -# ultralytics # HUB https://hub.ultralytics.com diff --git a/yolov3/runs-20230212T095309Z-001.zip b/yolov3/runs-20230212T095309Z-001.zip new file mode 100644 index 00000000..44bcf6b6 Binary files /dev/null and b/yolov3/runs-20230212T095309Z-001.zip differ diff --git a/yolov3/setup.cfg b/yolov3/setup.cfg index d7c4cb3e..4ca0f0d7 100644 --- a/yolov3/setup.cfg +++ b/yolov3/setup.cfg @@ -1,10 +1,10 @@ # Project-wide configuration file, can be used for package metadata and other toll configurations # Example usage: global configuration for PEP8 (via flake8) setting or default pytest arguments -# Local usage: pip install pre-commit, pre-commit run --all-files [metadata] license_file = LICENSE -description_file = README.md +description-file = README.md + [tool:pytest] norecursedirs = @@ -16,6 +16,7 @@ addopts = --durations=25 --color=yes + [flake8] max-line-length = 120 exclude = .tox,*.egg,build,temp @@ -25,30 +26,26 @@ verbose = 2 # https://pep8.readthedocs.io/en/latest/intro.html#error-codes format = pylint # see: https://www.flake8rules.com/ -ignore = E731,F405,E402,F401,W504,E127,E231,E501,F403 - # E731: Do not assign a lambda expression, use a def - # F405: name may be undefined, or defined from star imports: module - # E402: module level import not at top of file - # F401: module imported but unused - # W504: line break after binary operator - # E127: continuation line over-indented for visual indent - # E231: missing whitespace after ‘,’, ‘;’, or ‘:’ - # E501: line too long - # F403: ‘from module import *’ used; unable to detect undefined names +ignore = + E731 # Do not assign a lambda expression, use a def + F405 + E402 + F841 + E741 + F821 + E722 + F401 + W504 + E127 + W504 + E231 + E501 + F403 + E302 + F541 + [isort] # https://pycqa.github.io/isort/docs/configuration/options.html line_length = 120 -# see: https://pycqa.github.io/isort/docs/configuration/multi_line_output_modes.html multi_line_output = 0 - -[yapf] -based_on_style = pep8 -spaces_before_comment = 2 -COLUMN_LIMIT = 120 -COALESCE_BRACKETS = True -SPACES_AROUND_POWER_OPERATOR = True -SPACE_BETWEEN_ENDING_COMMA_AND_CLOSING_BRACKET = False -SPLIT_BEFORE_CLOSING_BRACKET = False -SPLIT_BEFORE_FIRST_ARGUMENT = False -# EACH_DICT_ENTRY_ON_SEPARATE_LINE = False diff --git a/yolov3/train.py b/yolov3/train.py index 3d5796e0..9098b348 100644 --- a/yolov3/train.py +++ b/yolov3/train.py @@ -1,25 +1,14 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license """ -Train a YOLOv3 model on a custom dataset. -Models and datasets download automatically from the latest YOLOv3 release. 
+Train a model on a custom dataset -Usage - Single-GPU training: - $ python train.py --data coco128.yaml --weights yolov5s.pt --img 640 # from pretrained (recommended) - $ python train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640 # from scratch - -Usage - Multi-GPU DDP training: - $ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 train.py --data coco128.yaml --weights yolov5s.pt --img 640 --device 0,1,2,3 - -Models: https://github.com/ultralytics/yolov5/tree/master/models -Datasets: https://github.com/ultralytics/yolov5/tree/master/data -Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data +Usage: + $ python path/to/train.py --data coco128.yaml --weights yolov3.pt --img 640 """ - import argparse import math import os import random -import subprocess import sys import time from copy import deepcopy @@ -31,46 +20,49 @@ import torch import torch.distributed as dist import torch.nn as nn import yaml -from torch.optim import lr_scheduler +from torch.cuda import amp +from torch.nn.parallel import DistributedDataParallel as DDP +from torch.optim import SGD, Adam, lr_scheduler from tqdm import tqdm FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv3 root directory +ROOT = FILE.parents[0] # root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative -import val as validate # for end-of-epoch mAP +import val # for end-of-epoch mAP from models.experimental import attempt_load from models.yolo import Model from utils.autoanchor import check_anchors from utils.autobatch import check_train_batch_size from utils.callbacks import Callbacks -from utils.dataloaders import create_dataloader -from utils.downloads import attempt_download, is_url -from utils.general import (LOGGER, TQDM_BAR_FORMAT, check_amp, check_dataset, check_file, check_git_info, - check_git_status, check_img_size, check_requirements, check_suffix, check_yaml, colorstr, - get_latest_run, increment_path, init_seeds, intersect_dicts, labels_to_class_weights, - labels_to_image_weights, methods, one_cycle, print_args, print_mutation, strip_optimizer, - yaml_save) +from utils.datasets import create_dataloader +from utils.downloads import attempt_download +from utils.general import (LOGGER, NCOLS, check_dataset, check_file, check_git_status, check_img_size, + check_requirements, check_suffix, check_yaml, colorstr, get_latest_run, increment_path, + init_seeds, intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, + one_cycle, print_args, print_mutation, strip_optimizer) from utils.loggers import Loggers -from utils.loggers.comet.comet_utils import check_comet_resume +from utils.loggers.wandb.wandb_utils import check_wandb_resume from utils.loss import ComputeLoss from utils.metrics import fitness -from utils.plots import plot_evolve -from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer, - smart_resume, torch_distributed_zero_first) +from utils.plots import plot_evolve, plot_labels +from utils.torch_utils import EarlyStopping, ModelEMA, de_parallel, select_device, torch_distributed_zero_first LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html RANK = int(os.getenv('RANK', -1)) WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) -GIT_INFO = check_git_info() -def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictionary - save_dir, epochs, batch_size, 
weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze = \ - Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze - callbacks.run('on_pretrain_routine_start') +def train(hyp, # path/to/hyp.yaml or hyp dictionary + opt, + device, + callbacks + ): + save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, = \ + Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \ + opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze # Directories w = save_dir / 'weights' # weights dir @@ -82,36 +74,36 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio with open(hyp, errors='ignore') as f: hyp = yaml.safe_load(f) # load hyps dict LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items())) - opt.hyp = hyp.copy() # for saving hyps to checkpoints # Save run settings - if not evolve: - yaml_save(save_dir / 'hyp.yaml', hyp) - yaml_save(save_dir / 'opt.yaml', vars(opt)) + with open(save_dir / 'hyp.yaml', 'w') as f: + yaml.safe_dump(hyp, f, sort_keys=False) + with open(save_dir / 'opt.yaml', 'w') as f: + yaml.safe_dump(vars(opt), f, sort_keys=False) + data_dict = None # Loggers - data_dict = None - if RANK in {-1, 0}: + if RANK in [-1, 0]: loggers = Loggers(save_dir, weights, opt, hyp, LOGGER) # loggers instance + if loggers.wandb: + data_dict = loggers.wandb.data_dict + if resume: + weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # Register actions for k in methods(loggers): callbacks.register_action(k, callback=getattr(loggers, k)) - # Process custom dataset artifact link - data_dict = loggers.remote_dataset - if resume: # If resuming runs from remote artifact - weights, epochs, hyp, batch_size = opt.weights, opt.epochs, opt.hyp, opt.batch_size - # Config - plots = not evolve and not opt.noplots # create plots + plots = not evolve # create plots cuda = device.type != 'cpu' - init_seeds(opt.seed + 1 + RANK, deterministic=True) + init_seeds(1 + RANK) with torch_distributed_zero_first(LOCAL_RANK): data_dict = data_dict or check_dataset(data) # check if None train_path, val_path = data_dict['train'], data_dict['val'] nc = 1 if single_cls else int(data_dict['nc']) # number of classes - names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names + names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names + assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}' # check is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset # Model @@ -120,7 +112,7 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio if pretrained: with torch_distributed_zero_first(LOCAL_RANK): weights = attempt_download(weights) # download if not found locally - ckpt = torch.load(weights, map_location='cpu') # load checkpoint to CPU to avoid CUDA memory leak + ckpt = torch.load(weights, map_location=device) # load checkpoint model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 @@ -129,13 +121,11 @@ def train(hyp, opt, device, callbacks): # hyp 
is path/to/hyp.yaml or hyp dictio LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report else: model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - amp = check_amp(model) # check AMP # Freeze - freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze + freeze = [f'model.{x}.' for x in range(freeze)] # layers to freeze for k, v in model.named_parameters(): v.requires_grad = True # train all layers - # v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results) if any(x in k for x in freeze): LOGGER.info(f'freezing {k}') v.requires_grad = False @@ -146,35 +136,70 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio # Batch size if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size - batch_size = check_train_batch_size(model, imgsz, amp) - loggers.on_params_update({'batch_size': batch_size}) + batch_size = check_train_batch_size(model, imgsz) # Optimizer nbs = 64 # nominal batch size accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay - optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay']) + LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}") + + g0, g1, g2 = [], [], [] # optimizer parameter groups + for v in model.modules(): + if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter): # bias + g2.append(v.bias) + if isinstance(v, nn.BatchNorm2d): # weight (no decay) + g0.append(v.weight) + elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter): # weight (with decay) + g1.append(v.weight) + + if opt.adam: + optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum + else: + optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True) + + optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']}) # add g1 with weight_decay + optimizer.add_param_group({'params': g2}) # add g2 (biases) + LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups " + f"{len(g0)} weight, {len(g1)} weight (no decay), {len(g2)} bias") + del g0, g1, g2 # Scheduler - if opt.cos_lr: - lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] + if opt.linear_lr: + lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear else: - lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear + lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs) # EMA - ema = ModelEMA(model) if RANK in {-1, 0} else None + ema = ModelEMA(model) if RANK in [-1, 0] else None # Resume - best_fitness, start_epoch = 0.0, 0 + start_epoch, best_fitness = 0, 0.0 if pretrained: + # Optimizer + if ckpt['optimizer'] is not None: + optimizer.load_state_dict(ckpt['optimizer']) + best_fitness = ckpt['best_fitness'] + + # EMA + if ema and ckpt.get('ema'): + ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) + ema.updates = ckpt['updates'] + + # Epochs + start_epoch = ckpt['epoch'] + 1 if resume: - best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) + assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.' 
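The scheduler block above chooses between a cosine ramp from 1.0 down to hyp['lrf'] (the one_cycle helper) and a plain linear decay when --linear-lr is set. A self-contained sketch of both lambdas; the cosine form mirrors the usual utils.general helper, and the epochs/lrf numbers are made up:

```python
import math

epochs, lrf = 300, 0.1  # illustrative values

cos_lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (lrf - 1) + 1  # cosine 1 -> lrf
lin_lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - lrf) + lrf                  # linear 1 -> lrf

for e in (0, 150, 299):
    print(e, round(cos_lf(e), 3), round(lin_lf(e), 3))
```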
+ if epochs < start_epoch: + LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.") + epochs += ckpt['epoch'] # finetune additional epochs + del ckpt, csd # DP mode if cuda and RANK == -1 and torch.cuda.device_count() > 1: - LOGGER.warning('WARNING âš ī¸ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' + LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') model = torch.nn.DataParallel(model) @@ -184,53 +209,41 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio LOGGER.info('Using SyncBatchNorm()') # Trainloader - train_loader, dataset = create_dataloader(train_path, - imgsz, - batch_size // WORLD_SIZE, - gs, - single_cls, - hyp=hyp, - augment=True, - cache=None if opt.cache == 'val' else opt.cache, - rect=opt.rect, - rank=LOCAL_RANK, - workers=workers, - image_weights=opt.image_weights, - quad=opt.quad, - prefix=colorstr('train: '), - shuffle=True, - seed=opt.seed) - labels = np.concatenate(dataset.labels, 0) - mlc = int(labels[:, 0].max()) # max label class + train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls, + hyp=hyp, augment=True, cache=opt.cache, rect=opt.rect, rank=LOCAL_RANK, + workers=workers, image_weights=opt.image_weights, quad=opt.quad, + prefix=colorstr('train: '), shuffle=True) + mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max()) # max label class + nb = len(train_loader) # number of batches assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}' # Process 0 - if RANK in {-1, 0}: - val_loader = create_dataloader(val_path, - imgsz, - batch_size // WORLD_SIZE * 2, - gs, - single_cls, - hyp=hyp, - cache=None if noval else opt.cache, - rect=True, - rank=-1, - workers=workers * 2, - pad=0.5, + if RANK in [-1, 0]: + val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls, + hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1, + workers=workers, pad=0.5, prefix=colorstr('val: '))[0] if not resume: + labels = np.concatenate(dataset.labels, 0) + # c = torch.tensor(labels[:, 0]) # classes + # cf = torch.bincount(c.long(), minlength=nc) + 1. 
# frequency + # model._initialize_biases(cf.to(device)) + if plots: + plot_labels(labels, names, save_dir) + + # Anchors if not opt.noautoanchor: - check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor + check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) model.half().float() # pre-reduce anchor precision - callbacks.run('on_pretrain_routine_end', labels, names) + callbacks.run('on_pretrain_routine_end') # DDP mode if cuda and RANK != -1: - model = smart_DDP(model) + model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) - # Model attributes + # Model parameters nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps) hyp['box'] *= 3 / nl # scale to layers hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers @@ -243,23 +256,20 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio # Start training t0 = time.time() - nb = len(train_loader) # number of batches - nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations) + nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations) # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training last_opt_step = -1 maps = np.zeros(nc) # mAP per class results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) scheduler.last_epoch = start_epoch - 1 # do not move - scaler = torch.cuda.amp.GradScaler(enabled=amp) - stopper, stop = EarlyStopping(patience=opt.patience), False + scaler = amp.GradScaler(enabled=cuda) + stopper = EarlyStopping(patience=opt.patience) compute_loss = ComputeLoss(model) # init loss class - callbacks.run('on_train_start') LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n' f"Logging results to {colorstr('bold', save_dir)}\n" f'Starting training for {epochs} epochs...') for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ - callbacks.run('on_train_epoch_start') model.train() # Update image weights (optional, single-GPU only) @@ -276,12 +286,11 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio if RANK != -1: train_loader.sampler.set_epoch(epoch) pbar = enumerate(train_loader) - LOGGER.info(('\n' + '%11s' * 7) % ('Epoch', 'GPU_mem', 'box_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size')) - if RANK in {-1, 0}: - pbar = tqdm(pbar, total=nb, bar_format=TQDM_BAR_FORMAT) # progress bar + LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size')) + if RANK in [-1, 0]: + pbar = tqdm(pbar, total=nb, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar optimizer.zero_grad() for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- - callbacks.run('on_train_batch_start') ni = i + nb * epoch # number integrated batches (since train start) imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0 @@ -292,7 +301,7 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round()) for j, x in enumerate(optimizer.param_groups): # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 - x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, 
x['initial_lr'] * lf(epoch)]) + x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)]) if 'momentum' in x: x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) @@ -305,7 +314,7 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) # Forward - with torch.cuda.amp.autocast(amp): + with amp.autocast(enabled=cuda): pred = model(imgs) # forward loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size if RANK != -1: @@ -316,10 +325,8 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio # Backward scaler.scale(loss).backward() - # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html + # Optimize if ni - last_opt_step >= accumulate: - scaler.unscale_(optimizer) # unscale gradients - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients scaler.step(optimizer) # optimizer.step scaler.update() optimizer.zero_grad() @@ -328,41 +335,37 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio last_opt_step = ni # Log - if RANK in {-1, 0}: + if RANK in [-1, 0]: mloss = (mloss * i + loss_items) / (i + 1) # update mean losses mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) - pbar.set_description(('%11s' * 2 + '%11.4g' * 5) % - (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) - callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths, list(mloss)) - if callbacks.stop_training: - return + pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % ( + f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) + callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn) # end batch ------------------------------------------------------------------------------------------------ # Scheduler lr = [x['lr'] for x in optimizer.param_groups] # for loggers scheduler.step() - if RANK in {-1, 0}: + if RANK in [-1, 0]: # mAP callbacks.run('on_train_epoch_end', epoch=epoch) ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) final_epoch = (epoch + 1 == epochs) or stopper.possible_stop if not noval or final_epoch: # Calculate mAP - results, maps, _ = validate.run(data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - half=amp, - model=ema.ema, - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - plots=False, - callbacks=callbacks, - compute_loss=compute_loss) + results, maps, _ = val.run(data_dict, + batch_size=batch_size // WORLD_SIZE * 2, + imgsz=imgsz, + model=ema.ema, + single_cls=single_cls, + dataloader=val_loader, + save_dir=save_dir, + plots=False, + callbacks=callbacks, + compute_loss=compute_loss) # Update best mAP fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] - stop = stopper(epoch=epoch, fitness=fi) # early stop check if fi > best_fitness: best_fitness = fi log_vals = list(mloss) + list(results) + lr @@ -370,62 +373,65 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio # Save model if (not nosave) or (final_epoch and not evolve): # if save - ckpt = { - 'epoch': epoch, - 'best_fitness': best_fitness, - 'model': deepcopy(de_parallel(model)).half(), - 'ema': deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': 
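The warmup block above ramps learning rates with np.interp over the first nw iterations: the bias group falls from warmup_bias_lr towards the scheduled lr while every other group rises from 0. A toy sketch of that interpolation (the lr0 and warmup values are invented; the real code interpolates to initial_lr * lf(epoch)):

```python
import numpy as np

nw = 1000                         # warmup iterations
lr0, warmup_bias_lr = 0.01, 0.1   # illustrative values
xi = [0, nw]

for ni in (0, 250, 500, 1000):    # integrated batch count since training start
    bias_lr = float(np.interp(ni, xi, [warmup_bias_lr, lr0]))  # bias group: 0.1 -> lr0
    other_lr = float(np.interp(ni, xi, [0.0, lr0]))            # other groups: 0 -> lr0
    print(ni, round(bias_lr, 4), round(other_lr, 4))
```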
optimizer.state_dict(), - 'opt': vars(opt), - 'git': GIT_INFO, # {remote, branch, commit} if a git repo - 'date': datetime.now().isoformat()} + ckpt = {'epoch': epoch, + 'best_fitness': best_fitness, + 'model': deepcopy(de_parallel(model)).half(), + 'ema': deepcopy(ema.ema).half(), + 'updates': ema.updates, + 'optimizer': optimizer.state_dict(), + 'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None, + 'date': datetime.now().isoformat()} # Save last, best and delete torch.save(ckpt, last) if best_fitness == fi: torch.save(ckpt, best) - if opt.save_period > 0 and epoch % opt.save_period == 0: + if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0): torch.save(ckpt, w / f'epoch{epoch}.pt') del ckpt callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) - # EarlyStopping - if RANK != -1: # if DDP training - broadcast_list = [stop if RANK == 0 else None] - dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks - if RANK != 0: - stop = broadcast_list[0] - if stop: - break # must break all DDP ranks + # Stop Single-GPU + if RANK == -1 and stopper(epoch=epoch, fitness=fi): + break + + # Stop DDP TODO: known issues shttps://github.com/ultralytics/yolov5/pull/4576 + # stop = stopper(epoch=epoch, fitness=fi) + # if RANK == 0: + # dist.broadcast_object_list([stop], 0) # broadcast 'stop' to all ranks + + # Stop DPP + # with torch_distributed_zero_first(RANK): + # if stop: + # break # must break all DDP ranks # end epoch ---------------------------------------------------------------------------------------------------- # end training ----------------------------------------------------------------------------------------------------- - if RANK in {-1, 0}: + if RANK in [-1, 0]: LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') for f in last, best: if f.exists(): strip_optimizer(f) # strip optimizers if f is best: LOGGER.info(f'\nValidating {f}...') - results, _, _ = validate.run( - data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - model=attempt_load(f, device).half(), - iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - save_json=is_coco, - verbose=True, - plots=plots, - callbacks=callbacks, - compute_loss=compute_loss) # val best model with plots + results, _, _ = val.run(data_dict, + batch_size=batch_size // WORLD_SIZE * 2, + imgsz=imgsz, + model=attempt_load(f, device).half(), + iou_thres=0.65 if is_coco else 0.60, # best pycocotools results at 0.65 + single_cls=single_cls, + dataloader=val_loader, + save_dir=save_dir, + save_json=is_coco, + verbose=True, + plots=True, + callbacks=callbacks, + compute_loss=compute_loss) # val best model with plots if is_coco: callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) - callbacks.run('on_train_end', last, best, epoch, results) + callbacks.run('on_train_end', last, best, plots, epoch, results) + LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}") torch.cuda.empty_cache() return results @@ -433,146 +439,133 @@ def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictio def parse_opt(known=False): parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov3-tiny.pt', help='initial weights path') + parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='initial weights path') 
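fitness() above reduces (P, R, mAP@0.5, mAP@0.5:0.95) to one scalar used for best-checkpoint selection and early stopping. A stand-in sketch using the commonly cited 0.1/0.9 weighting on the two mAP terms (the weights and metric values here are illustrative, re-declared rather than imported from utils.metrics):

```python
import numpy as np

def fitness(x, w=(0.0, 0.0, 0.1, 0.9)):
    # weighted sum over [P, R, mAP@0.5, mAP@0.5:0.95]
    return (x[:, :4] * np.array(w)).sum(1)

results = np.array([[0.70, 0.60, 0.55, 0.35]])  # made-up metrics
print(fitness(results))                          # [0.37] = 0.1*0.55 + 0.9*0.35
```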
parser.add_argument('--cfg', type=str, default='', help='model.yaml path') parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') - parser.add_argument('--epochs', type=int, default=100, help='total training epochs') + parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch.yaml', help='hyperparameters path') + parser.add_argument('--epochs', type=int, default=300) parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch') parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') parser.add_argument('--noval', action='store_true', help='only validate final epoch') - parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') - parser.add_argument('--noplots', action='store_true', help='save no plot files') + parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check') parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') + parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"') parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') + parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer') parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') parser.add_argument('--name', default='exp', help='save to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') + parser.add_argument('--linear-lr', action='store_true', help='linear LR') parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') - parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') + parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. backbone=10, all=24') parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') + parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') - # Logger arguments - parser.add_argument('--entity', default=None, help='Entity') - parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='Upload data, "val" option') - parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval') - parser.add_argument('--artifact_alias', type=str, default='latest', help='Version of dataset artifact to use') + # Weights & Biases arguments + parser.add_argument('--entity', default=None, help='W&B: Entity') + parser.add_argument('--upload_dataset', action='store_true', help='W&B: Upload dataset as artifact table') + parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval') + parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use') - return parser.parse_known_args()[0] if known else parser.parse_args() + opt = parser.parse_known_args()[0] if known else parser.parse_args() + return opt def main(opt, callbacks=Callbacks()): # Checks - if RANK in {-1, 0}: - print_args(vars(opt)) + if RANK in [-1, 0]: + print_args(FILE.stem, opt) check_git_status() - check_requirements() + check_requirements(exclude=['thop']) - # Resume (from specified or most recent last.pt) - if opt.resume and not check_comet_resume(opt) and not opt.evolve: - last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) - opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml - opt_data = opt.data # original 
dataset - if opt_yaml.is_file(): - with open(opt_yaml, errors='ignore') as f: - d = yaml.safe_load(f) - else: - d = torch.load(last, map_location='cpu')['opt'] - opt = argparse.Namespace(**d) # replace - opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate - if is_url(opt_data): - opt.data = check_file(opt_data) # avoid HUB resume auth timeout + # Resume + if opt.resume and not check_wandb_resume(opt) and not opt.evolve: # resume an interrupted run + ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path + assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist' + with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f: + opt = argparse.Namespace(**yaml.safe_load(f)) # replace + opt.cfg, opt.weights, opt.resume = '', ckpt, True # reinstate + LOGGER.info(f'Resuming training from {ckpt}') else: opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' if opt.evolve: - if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve - opt.project = str(ROOT / 'runs/evolve') + opt.project = str(ROOT / 'runs/evolve') opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume - if opt.name == 'cfg': - opt.name = Path(opt.cfg).stem # use model.yaml as name opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # DDP mode device = select_device(opt.device, batch_size=opt.batch_size) if LOCAL_RANK != -1: - msg = 'is not compatible with YOLOv3 Multi-GPU DDP training' - assert not opt.image_weights, f'--image-weights {msg}' - assert not opt.evolve, f'--evolve {msg}' - assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' - assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' + assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count' + assert not opt.image_weights, '--image-weights argument is not compatible with DDP training' + assert not opt.evolve, '--evolve argument is not compatible with DDP training' torch.cuda.set_device(LOCAL_RANK) device = torch.device('cuda', LOCAL_RANK) - dist.init_process_group(backend='nccl' if dist.is_nccl_available() else 'gloo') + dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") # Train if not opt.evolve: train(opt.hyp, opt, device, callbacks) + if WORLD_SIZE > 1 and RANK == 0: + LOGGER.info('Destroying process group... 
') + dist.destroy_process_group() # Evolve hyperparameters (optional) else: # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) - meta = { - 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) - 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) - 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 - 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay - 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) - 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum - 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr - 'box': (1, 0.02, 0.2), # box loss gain - 'cls': (1, 0.2, 4.0), # cls loss gain - 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight - 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) - 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight - 'iou_t': (0, 0.1, 0.7), # IoU training threshold - 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold - 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) - 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) - 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) - 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) - 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) - 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) - 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) - 'scale': (1, 0.0, 0.9), # image scale (+/- gain) - 'shear': (1, 0.0, 10.0), # image shear (+/- deg) - 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 - 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) - 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) - 'mosaic': (1, 0.0, 1.0), # image mixup (probability) - 'mixup': (1, 0.0, 1.0), # image mixup (probability) - 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) + meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) + 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) + 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 + 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay + 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) + 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum + 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr + 'box': (1, 0.02, 0.2), # box loss gain + 'cls': (1, 0.2, 4.0), # cls loss gain + 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight + 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) + 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight + 'iou_t': (0, 0.1, 0.7), # IoU training threshold + 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold + 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) + 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) + 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) + 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) + 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) + 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) + 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) + 'scale': (1, 0.0, 0.9), # image scale (+/- gain) + 'shear': (1, 0.0, 10.0), # image shear (+/- deg) + 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 + 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) + 'fliplr': (0, 0.0, 1.0), # image flip left-right 
(probability) + 'mosaic': (1, 0.0, 1.0), # image mixup (probability) + 'mixup': (1, 0.0, 1.0), # image mixup (probability) + 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) with open(opt.hyp, errors='ignore') as f: hyp = yaml.safe_load(f) # load hyps dict if 'anchors' not in hyp: # anchors commented in hyp.yaml hyp['anchors'] = 3 - if opt.noautoanchor: - del hyp['anchors'], meta['anchors'] opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' if opt.bucket: - subprocess.run( - f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}'.split()) # download evolve.csv if exists + os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}') # download evolve.csv if exists for _ in range(opt.evolve): # generations to evolve if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate @@ -608,28 +601,25 @@ def main(opt, callbacks=Callbacks()): # Train mutation results = train(hyp.copy(), opt, device, callbacks) - callbacks = Callbacks() + # Write mutation results - keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', - 'val/obj_loss', 'val/cls_loss') - print_mutation(keys, results, hyp.copy(), save_dir, opt.bucket) + print_mutation(results, hyp.copy(), save_dir, opt.bucket) # Plot results plot_evolve(evolve_csv) - LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' + LOGGER.info(f'Hyperparameter evolution finished\n' f"Results saved to {colorstr('bold', save_dir)}\n" - f'Usage example: $ python train.py --hyp {evolve_yaml}') + f'Use best hyperparameters example: $ python train.py --hyp {evolve_yaml}') def run(**kwargs): - # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') + # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov3.pt') opt = parse_opt(True) for k, v in kwargs.items(): setattr(opt, k, v) main(opt) - return opt -if __name__ == '__main__': +if __name__ == "__main__": opt = parse_opt() main(opt) diff --git a/yolov3/tutorial.ipynb b/yolov3/tutorial.ipynb index 5828c40d..635061fb 100644 --- a/yolov3/tutorial.ipynb +++ b/yolov3/tutorial.ipynb @@ -5,7 +5,7 @@ "colab": { "name": "YOLOv3 Tutorial", "provenance": [], - "toc_visible": true + "include_colab_link": true }, "kernelspec": { "name": "python3", @@ -14,373 +14,373 @@ "accelerator": "GPU", "widgets": { "application/vnd.jupyter.widget-state+json": { - "1f7df330663048998adcf8a45bc8f69b": { + "eeda9d6850e8406f9bbc5b06051b3710": { "model_module": "@jupyter-widgets/controls", "model_name": "HBoxModel", "model_module_version": "1.5.0", "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HBoxModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", "_view_name": "HBoxView", + "_dom_classes": [], + "_model_name": "HBoxModel", + "_view_module": "@jupyter-widgets/controls", + "_model_module_version": "1.5.0", + "_view_count": null, + "_view_module_version": "1.5.0", "box_style": "", + "layout": "IPY_MODEL_1e823c45174a4216be7234a6cc5cfd99", + "_model_module": "@jupyter-widgets/controls", "children": [ - "IPY_MODEL_e896e6096dd244c59d7955e2035cd729", - "IPY_MODEL_a6ff238c29984b24bf6d0bd175c19430", - "IPY_MODEL_3c085ba3f3fd4c3c8a6bb41b41ce1479" - ], - 
"layout": "IPY_MODEL_16b0c8aa6e0f427e8a54d3791abb7504" + "IPY_MODEL_cd8efd6c5de94ea8848a7d5b8766a4d6", + "IPY_MODEL_a4ec69c4697c4b0e84e6193be227f63e", + "IPY_MODEL_9a5694c133be46df8d2fe809b77c1c35" + ] } }, - "e896e6096dd244c59d7955e2035cd729": { + "1e823c45174a4216be7234a6cc5cfd99": { + "model_module": "@jupyter-widgets/base", + "model_name": "LayoutModel", + "model_module_version": "1.2.0", + "state": { + "_view_name": "LayoutView", + "grid_template_rows": null, + "right": null, + "justify_content": null, + "_view_module": "@jupyter-widgets/base", + "overflow": null, + "_model_module_version": "1.2.0", + "_view_count": null, + "flex_flow": null, + "width": null, + "min_width": null, + "border": null, + "align_items": null, + "bottom": null, + "_model_module": "@jupyter-widgets/base", + "top": null, + "grid_column": null, + "overflow_y": null, + "overflow_x": null, + "grid_auto_flow": null, + "grid_area": null, + "grid_template_columns": null, + "flex": null, + "_model_name": "LayoutModel", + "justify_items": null, + "grid_row": null, + "max_height": null, + "align_content": null, + "visibility": null, + "align_self": null, + "height": null, + "min_height": null, + "padding": null, + "grid_auto_rows": null, + "grid_gap": null, + "max_width": null, + "order": null, + "_view_module_version": "1.2.0", + "grid_template_areas": null, + "object_position": null, + "object_fit": null, + "grid_auto_columns": null, + "margin": null, + "display": null, + "left": null + } + }, + "cd8efd6c5de94ea8848a7d5b8766a4d6": { "model_module": "@jupyter-widgets/controls", "model_name": "HTMLModel", "model_module_version": "1.5.0", "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", "_view_name": "HTMLView", + "style": "IPY_MODEL_d584167143f84a0484006dded3fd2620", + "_dom_classes": [], "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_c7b2dd0f78384cad8e400b282996cdf5", + "_model_name": "HTMLModel", "placeholder": "​", - "style": "IPY_MODEL_6a27e43b0e434edd82ee63f0a91036ca", - "value": "100%" + "_view_module": "@jupyter-widgets/controls", + "_model_module_version": "1.5.0", + "value": "100%", + "_view_count": null, + "_view_module_version": "1.5.0", + "description_tooltip": null, + "_model_module": "@jupyter-widgets/controls", + "layout": "IPY_MODEL_b9a25c0d425c4fe4b8cd51ae6a301b0d" } }, - "a6ff238c29984b24bf6d0bd175c19430": { + "a4ec69c4697c4b0e84e6193be227f63e": { "model_module": "@jupyter-widgets/controls", "model_name": "FloatProgressModel", "model_module_version": "1.5.0", "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "FloatProgressModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", "_view_name": "ProgressView", - "bar_style": "success", + "style": "IPY_MODEL_654525fe1ed34d5fbe1c36ed80ae1c1c", + "_dom_classes": [], "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_cce0e6c0c4ec442cb47e65c674e02e92", + "_model_name": "FloatProgressModel", + "bar_style": "success", "max": 818322941, - "min": 0, + "_view_module": "@jupyter-widgets/controls", + "_model_module_version": "1.5.0", + "value": 818322941, + "_view_count": null, + "_view_module_version": "1.5.0", "orientation": "horizontal", - "style": 
"IPY_MODEL_c5b9f38e2f0d4f9aa97fe87265263743", - "value": 818322941 + "min": 0, + "description_tooltip": null, + "_model_module": "@jupyter-widgets/controls", + "layout": "IPY_MODEL_09544845070e47baafc5e37d45ff23e9" } }, - "3c085ba3f3fd4c3c8a6bb41b41ce1479": { + "9a5694c133be46df8d2fe809b77c1c35": { "model_module": "@jupyter-widgets/controls", "model_name": "HTMLModel", "model_module_version": "1.5.0", "state": { - "_dom_classes": [], - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "HTMLModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/controls", - "_view_module_version": "1.5.0", "_view_name": "HTMLView", + "style": "IPY_MODEL_1066f1d5b6104a3dae19f26269745bd0", + "_dom_classes": [], "description": "", - "description_tooltip": null, - "layout": "IPY_MODEL_df554fb955c7454696beac5a82889386", + "_model_name": "HTMLModel", "placeholder": "​", - "style": "IPY_MODEL_74e9112a87a242f4831b7d68c7da6333", - "value": " 780M/780M [00:05<00:00, 126MB/s]" - } - }, - "16b0c8aa6e0f427e8a54d3791abb7504": { - "model_module": "@jupyter-widgets/base", - "model_name": "LayoutModel", - "model_module_version": "1.2.0", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", + "_view_module": "@jupyter-widgets/controls", + "_model_module_version": "1.5.0", + "value": " 780M/780M [00:03<00:00, 200MB/s]", "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null + "_view_module_version": "1.5.0", + "description_tooltip": null, + "_model_module": "@jupyter-widgets/controls", + "layout": "IPY_MODEL_dd3a70e1ef4547ec8d3463749ce06285" } }, - "c7b2dd0f78384cad8e400b282996cdf5": { - "model_module": "@jupyter-widgets/base", - "model_name": "LayoutModel", - "model_module_version": "1.2.0", - "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", - "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, - "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - 
"min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, - "right": null, - "top": null, - "visibility": null, - "width": null - } - }, - "6a27e43b0e434edd82ee63f0a91036ca": { + "d584167143f84a0484006dded3fd2620": { "model_module": "@jupyter-widgets/controls", "model_name": "DescriptionStyleModel", "model_module_version": "1.5.0", "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", "_view_name": "StyleView", - "description_width": "" + "_model_name": "DescriptionStyleModel", + "description_width": "", + "_view_module": "@jupyter-widgets/base", + "_model_module_version": "1.5.0", + "_view_count": null, + "_view_module_version": "1.2.0", + "_model_module": "@jupyter-widgets/controls" } }, - "cce0e6c0c4ec442cb47e65c674e02e92": { + "b9a25c0d425c4fe4b8cd51ae6a301b0d": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "model_module_version": "1.2.0", "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, "right": null, + "justify_content": null, + "_view_module": "@jupyter-widgets/base", + "overflow": null, + "_model_module_version": "1.2.0", + "_view_count": null, + "flex_flow": null, + "width": null, + "min_width": null, + "border": null, + "align_items": null, + "bottom": null, + "_model_module": "@jupyter-widgets/base", "top": null, + "grid_column": null, + "overflow_y": null, + "overflow_x": null, + "grid_auto_flow": null, + "grid_area": null, + "grid_template_columns": null, + "flex": null, + "_model_name": "LayoutModel", + "justify_items": null, + "grid_row": null, + "max_height": null, + "align_content": null, "visibility": null, - "width": null + "align_self": null, + "height": null, + "min_height": null, + "padding": null, + "grid_auto_rows": null, + "grid_gap": null, + "max_width": null, + "order": null, + "_view_module_version": "1.2.0", + "grid_template_areas": null, + "object_position": null, + "object_fit": null, + "grid_auto_columns": null, + "margin": null, + "display": null, + "left": null } }, - "c5b9f38e2f0d4f9aa97fe87265263743": { + "654525fe1ed34d5fbe1c36ed80ae1c1c": { "model_module": "@jupyter-widgets/controls", "model_name": "ProgressStyleModel", "model_module_version": "1.5.0", "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": 
"ProgressStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", "_view_name": "StyleView", + "_model_name": "ProgressStyleModel", + "description_width": "", + "_view_module": "@jupyter-widgets/base", + "_model_module_version": "1.5.0", + "_view_count": null, + "_view_module_version": "1.2.0", "bar_color": null, - "description_width": "" + "_model_module": "@jupyter-widgets/controls" } }, - "df554fb955c7454696beac5a82889386": { + "09544845070e47baafc5e37d45ff23e9": { "model_module": "@jupyter-widgets/base", "model_name": "LayoutModel", "model_module_version": "1.2.0", "state": { - "_model_module": "@jupyter-widgets/base", - "_model_module_version": "1.2.0", - "_model_name": "LayoutModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", "_view_name": "LayoutView", - "align_content": null, - "align_items": null, - "align_self": null, - "border": null, - "bottom": null, - "display": null, - "flex": null, - "flex_flow": null, - "grid_area": null, - "grid_auto_columns": null, - "grid_auto_flow": null, - "grid_auto_rows": null, - "grid_column": null, - "grid_gap": null, - "grid_row": null, - "grid_template_areas": null, - "grid_template_columns": null, "grid_template_rows": null, - "height": null, - "justify_content": null, - "justify_items": null, - "left": null, - "margin": null, - "max_height": null, - "max_width": null, - "min_height": null, - "min_width": null, - "object_fit": null, - "object_position": null, - "order": null, - "overflow": null, - "overflow_x": null, - "overflow_y": null, - "padding": null, "right": null, + "justify_content": null, + "_view_module": "@jupyter-widgets/base", + "overflow": null, + "_model_module_version": "1.2.0", + "_view_count": null, + "flex_flow": null, + "width": null, + "min_width": null, + "border": null, + "align_items": null, + "bottom": null, + "_model_module": "@jupyter-widgets/base", "top": null, + "grid_column": null, + "overflow_y": null, + "overflow_x": null, + "grid_auto_flow": null, + "grid_area": null, + "grid_template_columns": null, + "flex": null, + "_model_name": "LayoutModel", + "justify_items": null, + "grid_row": null, + "max_height": null, + "align_content": null, "visibility": null, - "width": null + "align_self": null, + "height": null, + "min_height": null, + "padding": null, + "grid_auto_rows": null, + "grid_gap": null, + "max_width": null, + "order": null, + "_view_module_version": "1.2.0", + "grid_template_areas": null, + "object_position": null, + "object_fit": null, + "grid_auto_columns": null, + "margin": null, + "display": null, + "left": null } }, - "74e9112a87a242f4831b7d68c7da6333": { + "1066f1d5b6104a3dae19f26269745bd0": { "model_module": "@jupyter-widgets/controls", "model_name": "DescriptionStyleModel", "model_module_version": "1.5.0", "state": { - "_model_module": "@jupyter-widgets/controls", - "_model_module_version": "1.5.0", - "_model_name": "DescriptionStyleModel", - "_view_count": null, - "_view_module": "@jupyter-widgets/base", - "_view_module_version": "1.2.0", "_view_name": "StyleView", - "description_width": "" + "_model_name": "DescriptionStyleModel", + "description_width": "", + "_view_module": "@jupyter-widgets/base", + "_model_module_version": "1.5.0", + "_view_count": null, + "_view_module_version": "1.2.0", + "_model_module": "@jupyter-widgets/controls" + } + }, + "dd3a70e1ef4547ec8d3463749ce06285": { + "model_module": "@jupyter-widgets/base", + "model_name": "LayoutModel", + 
"model_module_version": "1.2.0", + "state": { + "_view_name": "LayoutView", + "grid_template_rows": null, + "right": null, + "justify_content": null, + "_view_module": "@jupyter-widgets/base", + "overflow": null, + "_model_module_version": "1.2.0", + "_view_count": null, + "flex_flow": null, + "width": null, + "min_width": null, + "border": null, + "align_items": null, + "bottom": null, + "_model_module": "@jupyter-widgets/base", + "top": null, + "grid_column": null, + "overflow_y": null, + "overflow_x": null, + "grid_auto_flow": null, + "grid_area": null, + "grid_template_columns": null, + "flex": null, + "_model_name": "LayoutModel", + "justify_items": null, + "grid_row": null, + "max_height": null, + "align_content": null, + "visibility": null, + "align_self": null, + "height": null, + "min_height": null, + "padding": null, + "grid_auto_rows": null, + "grid_gap": null, + "max_width": null, + "order": null, + "_view_module_version": "1.2.0", + "grid_template_areas": null, + "object_position": null, + "object_fit": null, + "grid_auto_columns": null, + "margin": null, + "display": null, + "left": null } } } } }, "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "view-in-github", + "colab_type": "text" + }, + "source": [ + "\"Open" + ] + }, { "cell_type": "markdown", "metadata": { "id": "t6MPjfT5NrKQ" }, "source": [ - "
\n", + "\n", + "\n", "\n", - " \n", - " \n", - "\n", - "\n", - "
\n", - " \"Run\n", - " \"Open\n", - " \"Open\n", - "
\n", - "\n", - "This 🚀 notebook by Ultralytics presents simple train, validate and predict examples to help start your AI adventure.
See GitHub for community support or contact us for professional support.\n", - "\n", - "
" + "This is the **official YOLOv3 🚀 notebook** by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). \n", + "For more information please visit https://github.com/ultralytics/yolov3 and https://ultralytics.com. Thank you!" ] }, { @@ -391,7 +391,7 @@ "source": [ "# Setup\n", "\n", - "Clone GitHub [repository](https://github.com/ultralytics/yolov5), install [dependencies](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) and check PyTorch and GPU." + "Clone repo, install dependencies and check PyTorch and GPU." ] }, { @@ -401,31 +401,24 @@ "colab": { "base_uri": "https://localhost:8080/" }, - "outputId": "f9f016ad-3dcf-4bd2-e1c3-d5b79efc6f32" + "outputId": "141002fc-fe49-48d2-a575-2555bf903413" }, "source": [ - "!git clone https://github.com/ultralytics/yolov5 # clone\n", - "%cd yolov5\n", + "!git clone https://github.com/ultralytics/yolov3 # clone\n", + "%cd yolov3\n", "%pip install -qr requirements.txt # install\n", "\n", "import torch\n", - "import utils\n", + "from yolov3 import utils\n", "display = utils.notebook_init() # checks" ], - "execution_count": null, + "execution_count": 1, "outputs": [ - { - "output_type": "stream", - "name": "stderr", - "text": [ - " 🚀 v7.0-1-gb32f67f Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n" - ] - }, { "output_type": "stream", "name": "stdout", "text": [ - "Setup complete ✅ (2 CPUs, 12.7 GB RAM, 22.6/78.2 GB disk)\n" + "Setup complete ✅\n" ] } ] @@ -436,19 +429,18 @@ "id": "4JnkELT0cIJg" }, "source": [ - "# 1. Detect\n", + "# 1. Inference\n", "\n", - "`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:\n", + "`detect.py` runs YOLOv3 inference on a variety of sources, downloading models automatically from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases), and saving results to `runs/detect`. 
Example inference sources are:\n", "\n", "```shell\n", "python detect.py --source 0 # webcam\n", " img.jpg # image \n", " vid.mp4 # video\n", - " screen # screenshot\n", " path/ # directory\n", - " 'path/*.jpg' # glob\n", - " 'https://youtu.be/Zgi9g1ksQHc' # YouTube\n", - " 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream\n", + " path/*.jpg # glob\n", + " 'https://youtu.be/Zgi9g1ksQHc' # YouTube\n", + " 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream\n", "```" ] }, @@ -459,30 +451,27 @@ "colab": { "base_uri": "https://localhost:8080/" }, - "outputId": "b4db5c49-f501-4505-cf0d-a1d35236c485" + "outputId": "c29b082a-8e56-4799-b32a-056425f130d1" }, "source": [ - "!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images\n", + "!python detect.py --weights yolov3.pt --img 640 --conf 0.25 --source data/images\n", "# display.Image(filename='runs/detect/exp/zidane.jpg', width=600)" ], - "execution_count": null, + "execution_count": 4, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ - "\u001b[34m\u001b[1mdetect: \u001b[0mweights=['yolov5s.pt'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1\n", - " 🚀 v7.0-1-gb32f67f Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n", - "\n", - "Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt to yolov5s.pt...\n", - "100% 14.1M/14.1M [00:00<00:00, 116MB/s] \n", + "\u001b[34m\u001b[1mdetect: \u001b[0mweights=['yolov3.pt'], source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False\n", + "YOLOv3 🚀 v9.6.0-29-ga441ab1 torch 1.13.0+cu116 CUDA:0 (Tesla T4, 15110MiB)\n", "\n", "Fusing layers... \n", - "YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients\n", - "image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 17.0ms\n", - "image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, 14.3ms\n", - "Speed: 0.5ms pre-process, 15.7ms inference, 18.6ms NMS per image at shape (1, 3, 640, 640)\n", - "Results saved to \u001b[1mruns/detect/exp\u001b[0m\n" + "Model Summary: 261 layers, 61922845 parameters, 0 gradients\n", + "image 1/2 /content/yolov3/data/images/bus.jpg: 640x480 4 persons, 1 bicycle, 1 bus, Done. (0.050s)\n", + "image 2/2 /content/yolov3/data/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.038s)\n", + "Speed: 0.5ms pre-process, 44.3ms inference, 1.3ms NMS per image at shape (1, 3, 640, 640)\n", + "Results saved to \u001b[1mruns/detect/exp2\u001b[0m\n" ] } ] @@ -504,7 +493,17 @@ }, "source": [ "# 2. Validate\n", - "Validate a model's accuracy on the [COCO](https://cocodataset.org/#home) dataset's `val` or `test` splits. Models are downloaded automatically from the [latest release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag." 
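Before the reworded cell text below, a short hedged sketch of the validation step driven from Python instead of the notebook's shell command. `val.run` is the same entry point this diff wires into train.py for the per-epoch mAP check; the keyword names are assumed to mirror val.py's CLI flags at this revision:

```python
# Hedged sketch: standalone validation through val.py's run() entry point,
# the same function train.py calls after each epoch. Keyword names are
# assumed to match the val.py CLI flags of this revision.
import val

results, maps, timings = val.run(
    data='coco128.yaml',  # dataset yaml
    weights='yolov3.pt',  # checkpoint to evaluate
    imgsz=640,            # inference size (pixels)
    iou_thres=0.65,       # NMS IoU threshold (best pycocotools results at 0.65)
    half=True)            # FP16 inference on CUDA devices
print(results)            # (P, R, mAP@0.5, mAP@0.5:0.95, val losses)
```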
+ "Validate a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eyTZYGgRjnMc" + }, + "source": [ + "## COCO val\n", + "Download [COCO val 2017](https://github.com/ultralytics/yolov3/blob/master/data/coco.yaml) dataset (1GB - 5000 images), and test model accuracy." ] }, { @@ -513,41 +512,41 @@ "id": "WQPtK1QYVaD_", "colab": { "base_uri": "https://localhost:8080/", - "height": 49, + "height": 48, "referenced_widgets": [ - "1f7df330663048998adcf8a45bc8f69b", - "e896e6096dd244c59d7955e2035cd729", - "a6ff238c29984b24bf6d0bd175c19430", - "3c085ba3f3fd4c3c8a6bb41b41ce1479", - "16b0c8aa6e0f427e8a54d3791abb7504", - "c7b2dd0f78384cad8e400b282996cdf5", - "6a27e43b0e434edd82ee63f0a91036ca", - "cce0e6c0c4ec442cb47e65c674e02e92", - "c5b9f38e2f0d4f9aa97fe87265263743", - "df554fb955c7454696beac5a82889386", - "74e9112a87a242f4831b7d68c7da6333" + "eeda9d6850e8406f9bbc5b06051b3710", + "1e823c45174a4216be7234a6cc5cfd99", + "cd8efd6c5de94ea8848a7d5b8766a4d6", + "a4ec69c4697c4b0e84e6193be227f63e", + "9a5694c133be46df8d2fe809b77c1c35", + "d584167143f84a0484006dded3fd2620", + "b9a25c0d425c4fe4b8cd51ae6a301b0d", + "654525fe1ed34d5fbe1c36ed80ae1c1c", + "09544845070e47baafc5e37d45ff23e9", + "1066f1d5b6104a3dae19f26269745bd0", + "dd3a70e1ef4547ec8d3463749ce06285" ] }, - "outputId": "c7d0a0d2-abfb-44c3-d60d-f99d0e7aabad" + "outputId": "56199bac-5a5e-41eb-8892-bf387a1ec7cb" }, "source": [ "# Download COCO val\n", - "torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip') # download (780M - 5000 images)\n", - "!unzip -q tmp.zip -d ../datasets && rm tmp.zip # unzip" + "torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip')\n", + "!unzip -q tmp.zip -d ../datasets && rm tmp.zip" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "eeda9d6850e8406f9bbc5b06051b3710", + "version_minor": 0, + "version_major": 2 + }, "text/plain": [ " 0%| | 0.00/780M [00:00

\n", + "

\n", "Close the active learning loop by sampling images from your inference conditions with the `roboflow` pip package\n", "

\n", "\n", - "Train a YOLOv5s model on the [COCO128](https://www.kaggle.com/ultralytics/coco128) dataset with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`.\n", + "Train a YOLOv3 model on the [COCO128](https://www.kaggle.com/ultralytics/coco128) dataset with `--data coco128.yaml`, starting from pretrained `--weights yolov3.pt`, or from randomly initialized `--weights '' --cfg yolov3yaml`.\n", "\n", - "- **Pretrained [Models](https://github.com/ultralytics/yolov5/tree/master/models)** are downloaded\n", - "automatically from the [latest release](https://github.com/ultralytics/yolov5/releases)\n", - "- **[Datasets](https://github.com/ultralytics/yolov5/tree/master/data)** available for autodownload include: [COCO](https://github.com/ultralytics/yolov5/blob/master/data/coco.yaml), [COCO128](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), [VOC](https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml), [Argoverse](https://github.com/ultralytics/yolov5/blob/master/data/Argoverse.yaml), [VisDrone](https://github.com/ultralytics/yolov5/blob/master/data/VisDrone.yaml), [GlobalWheat](https://github.com/ultralytics/yolov5/blob/master/data/GlobalWheat2020.yaml), [xView](https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml), [Objects365](https://github.com/ultralytics/yolov5/blob/master/data/Objects365.yaml), [SKU-110K](https://github.com/ultralytics/yolov5/blob/master/data/SKU-110K.yaml).\n", + "- **Pretrained [Models](https://github.com/ultralytics/yolov3/tree/master/models)** are downloaded\n", + "automatically from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases)\n", + "- **[Datasets](https://github.com/ultralytics/yolov3/tree/master/data)** available for autodownload include: [COCO](https://github.com/ultralytics/yolov3/blob/master/data/coco.yaml), [COCO128](https://github.com/ultralytics/yolov3/blob/master/data/coco128.yaml), [VOC](https://github.com/ultralytics/yolov3/blob/master/data/VOC.yaml), [Argoverse](https://github.com/ultralytics/yolov3/blob/master/data/Argoverse.yaml), [VisDrone](https://github.com/ultralytics/yolov3/blob/master/data/VisDrone.yaml), [GlobalWheat](https://github.com/ultralytics/yolov3/blob/master/data/GlobalWheat2020.yaml), [xView](https://github.com/ultralytics/yolov3/blob/master/data/xView.yaml), [Objects365](https://github.com/ultralytics/yolov3/blob/master/data/Objects365.yaml), [SKU-110K](https://github.com/ultralytics/yolov3/blob/master/data/SKU-110K.yaml).\n", "- **Training Results** are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.\n", - "

\n", - "\n", - "A **Mosaic Dataloader** is used for training which combines 4 images into 1 mosaic.\n", - "\n", - "## Train on Custom Data with Roboflow 🌟 NEW\n", - "\n", - "[Roboflow](https://roboflow.com/?ref=ultralytics) enables you to easily **organize, label, and prepare** a high quality dataset with your own custom data. Roboflow also makes it easy to establish an active learning pipeline, collaborate with your team on dataset improvement, and integrate directly into your model building workflow with the `roboflow` pip package.\n", - "\n", - "- Custom Training Example: [https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/](https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/?ref=ultralytics)\n", - "- Custom Training Notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb)\n", - "
\n", - "\n", - "

Label images lightning fast (including with model-assisted labeling)" + "

\n" ] }, { "cell_type": "code", - "source": [ - "#@title Select 🚀 logger {run: 'auto'}\n", - "logger = 'ClearML' #@param ['ClearML', 'Comet', 'TensorBoard']\n", - "\n", - "if logger == 'ClearML':\n", - " %pip install -q clearml\n", - " import clearml; clearml.browser_login()\n", - "elif logger == 'Comet':\n", - " %pip install -q comet_ml\n", - " import comet_ml; comet_ml.init()\n", - "elif logger == 'TensorBoard':\n", - " %load_ext tensorboard\n", - " %tensorboard --logdir runs/train" - ], "metadata": { - "id": "i3oKtE4g-aNn" + "id": "bOy5KI2ncnWd" }, + "source": [ + "# Tensorboard (optional)\n", + "%load_ext tensorboard\n", + "%tensorboard --logdir runs/train" + ], "execution_count": null, "outputs": [] }, @@ -676,172 +689,176 @@ "colab": { "base_uri": "https://localhost:8080/" }, - "outputId": "721b9028-767f-4a05-c964-692c245f7398" + "outputId": "c77013e3-347d-42a4-84de-3ca42ea3aee9" }, "source": [ - "# Train YOLOv5s on COCO128 for 3 epochs\n", - "!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache" + "# Train YOLOv3 on COCO128 for 3 epochs\n", + "!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov3.pt --cache" ], - "execution_count": null, + "execution_count": 3, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ - "\u001b[34m\u001b[1mtrain: \u001b[0mweights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest\n", - "\u001b[34m\u001b[1mgithub: \u001b[0mup to date with https://github.com/ultralytics/yolov5 ✅\n", - " 🚀 v7.0-1-gb32f67f Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n", + "\u001b[34m\u001b[1mtrain: \u001b[0mweights=yolov3.pt, cfg=, data=coco128.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, patience=100, freeze=0, save_period=-1, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest\n", + "\u001b[34m\u001b[1mgithub: \u001b[0mup to date with https://github.com/ultralytics/yolov3 ✅\n", + "YOLOv3 🚀 v9.6.0-29-ga441ab1 torch 1.13.0+cu116 CUDA:0 (Tesla T4, 15110MiB)\n", "\n", - "\u001b[34m\u001b[1mhyperparameters: \u001b[0mlr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\n", - "\u001b[34m\u001b[1mClearML: \u001b[0mrun 'pip install clearml' to automatically track, visualize and remotely train 🚀 in ClearML\n", - "\u001b[34m\u001b[1mComet: \u001b[0mrun 'pip install comet_ml' to 
automatically track and visualize 🚀 runs in Comet\n", + "\u001b[34m\u001b[1mhyperparameters: \u001b[0mlr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\n", + "\u001b[34m\u001b[1mWeights & Biases: \u001b[0mrun 'pip install wandb' to automatically track and visualize YOLOv3 🚀 runs (RECOMMENDED)\n", "\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 'tensorboard --logdir runs/train', view at http://localhost:6006/\n", "\n", - "Dataset not found âš ī¸, missing paths ['/content/datasets/coco128/images/train2017']\n", + "WARNING: Dataset not found, nonexistent paths: ['/content/datasets/coco128/images/train2017']\n", "Downloading https://ultralytics.com/assets/coco128.zip to coco128.zip...\n", - "100% 6.66M/6.66M [00:00<00:00, 261MB/s]\n", - "Dataset download success ✅ (0.3s), saved to \u001b[1m/content/datasets\u001b[0m\n", + "100% 6.66M/6.66M [00:00<00:00, 10.2MB/s]\n", + "Dataset autodownload success, saved to ../datasets\n", + "\n", "\n", " from n params module arguments \n", - " 0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2] \n", + " 0 -1 1 928 models.common.Conv [3, 32, 3, 1] \n", " 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] \n", - " 2 -1 1 18816 models.common.C3 [64, 64, 1] \n", + " 2 -1 1 20672 models.common.Bottleneck [64, 64] \n", " 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] \n", - " 4 -1 2 115712 models.common.C3 [128, 128, 2] \n", + " 4 -1 2 164608 models.common.Bottleneck [128, 128] \n", " 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] \n", - " 6 -1 3 625152 models.common.C3 [256, 256, 3] \n", + " 6 -1 8 2627584 models.common.Bottleneck [256, 256] \n", " 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] \n", - " 8 -1 1 1182720 models.common.C3 [512, 512, 1] \n", - " 9 -1 1 656896 models.common.SPPF [512, 512, 5] \n", - " 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n", - " 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n", - " 12 [-1, 6] 1 0 models.common.Concat [1] \n", - " 13 -1 1 361984 models.common.C3 [512, 256, 1, False] \n", - " 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] \n", - " 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n", - " 16 [-1, 4] 1 0 models.common.Concat [1] \n", - " 17 -1 1 90880 models.common.C3 [256, 128, 1, False] \n", - " 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] \n", - " 19 [-1, 14] 1 0 models.common.Concat [1] \n", - " 20 -1 1 296448 models.common.C3 [256, 256, 1, False] \n", - " 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] \n", - " 22 [-1, 10] 1 0 models.common.Concat [1] \n", - " 23 -1 1 1182720 models.common.C3 [512, 512, 1, False] \n", - " 24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]\n", - "Model summary: 214 layers, 7235389 parameters, 7235389 gradients, 16.6 GFLOPs\n", + " 8 -1 8 10498048 models.common.Bottleneck [512, 512] \n", + " 9 -1 1 4720640 models.common.Conv [512, 1024, 3, 2] \n", + " 10 -1 4 20983808 models.common.Bottleneck [1024, 1024] \n", + " 11 -1 1 5245952 models.common.Bottleneck [1024, 1024, False] \n", + " 12 -1 1 525312 models.common.Conv [1024, 512, 1, 1] \n", + " 13 -1 1 4720640 models.common.Conv [512, 1024, 3, 1] \n", 
+ " 14 -1 1 525312 models.common.Conv [1024, 512, 1, 1] \n", + " 15 -1 1 4720640 models.common.Conv [512, 1024, 3, 1] \n", + " 16 -2 1 131584 models.common.Conv [512, 256, 1, 1] \n", + " 17 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n", + " 18 [-1, 8] 1 0 models.common.Concat [1] \n", + " 19 -1 1 1377792 models.common.Bottleneck [768, 512, False] \n", + " 20 -1 1 1312256 models.common.Bottleneck [512, 512, False] \n", + " 21 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n", + " 22 -1 1 1180672 models.common.Conv [256, 512, 3, 1] \n", + " 23 -2 1 33024 models.common.Conv [256, 128, 1, 1] \n", + " 24 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n", + " 25 [-1, 6] 1 0 models.common.Concat [1] \n", + " 26 -1 1 344832 models.common.Bottleneck [384, 256, False] \n", + " 27 -1 2 656896 models.common.Bottleneck [256, 256, False] \n", + " 28 [27, 22, 15] 1 457725 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [256, 512, 1024]]\n", + "Model Summary: 333 layers, 61949149 parameters, 61949149 gradients, 156.6 GFLOPs\n", "\n", - "Transferred 349/349 items from yolov5s.pt\n", - "\u001b[34m\u001b[1mAMP: \u001b[0mchecks passed ✅\n", - "\u001b[34m\u001b[1moptimizer:\u001b[0m SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias\n", - "\u001b[34m\u001b[1malbumentations: \u001b[0mBlur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))\n", - "\u001b[34m\u001b[1mtrain: \u001b[0mScanning /content/datasets/coco128/labels/train2017... 126 images, 2 backgrounds, 0 corrupt: 100% 128/128 [00:00<00:00, 1911.57it/s]\n", - "\u001b[34m\u001b[1mtrain: \u001b[0mNew cache created: /content/datasets/coco128/labels/train2017.cache\n", - "\u001b[34m\u001b[1mtrain: \u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:00<00:00, 229.69it/s]\n", - "\u001b[34m\u001b[1mval: \u001b[0mScanning /content/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100% 128/128 [00:00 # 2. paste API key\n", - "python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt # 3. train\n", - "```\n", - "To learn more about all of the supported Comet features for this integration, check out the [Comet Tutorial](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/comet). If you'd like to learn more about Comet, head over to our [documentation](https://www.comet.com/docs/v2/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=yolov5_colab). Get started by trying out the Comet Colab Notebook:\n", - "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing)\n", - "\n", - "\n", - "\"Comet" - ], - "metadata": { - "id": "nWOsI5wJR1o3" - } - }, - { - "cell_type": "markdown", - "source": [ - "## ClearML Logging and Automation 🌟 NEW\n", - "\n", - "[ClearML](https://cutt.ly/yolov5-notebook-clearml) is completely integrated into to track your experimentation, manage dataset versions and even remotely execute training runs. 
To enable ClearML (check cells above):\n", - "\n", - "- `pip install clearml`\n", - "- run `clearml-init` to connect to a ClearML server (**deploy your own [open-source server](https://github.com/allegroai/clearml-server)**, or use our [free hosted server](https://cutt.ly/yolov5-notebook-clearml))\n", - "\n", - "You'll get all the great expected features from an experiment manager: live updates, model upload, experiment comparison etc. but ClearML also tracks uncommitted changes and installed packages for example. Thanks to that ClearML Tasks (which is what we call experiments) are also reproducible on different machines! With only 1 extra line, we can schedule a training task on a queue to be executed by any number of ClearML Agents (workers).\n", - "\n", - "You can use ClearML Data to version your dataset and then pass it to simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the [ClearML Tutorial](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) for details!\n", - "\n", - "\n", - "\"ClearML" - ], - "metadata": { - "id": "Lay2WsTjNJzP" - } - }, { "cell_type": "markdown", "metadata": { @@ -908,11 +881,25 @@ "source": [ "## Local Logging\n", "\n", - "Training results are automatically logged with [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) loggers to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc.\n", + "All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.\n", "\n", - "This directory contains train and val statistics, mosaics, labels, predictions and augmentated mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices. \n", + "> \n", + "`train_batch0.jpg` shows train batch 0 mosaics and labels\n", "\n", - "\"Local\n" + "> \n", + "`test_batch0_labels.jpg` shows val batch 0 labels\n", + "\n", + "> \n", + "`test_batch0_pred.jpg` shows val batch 0 _predictions_\n", + "\n", + "Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. 
You can also plot any `results.csv` file manually:\n", + "\n", + "```python\n", + "from utils.plots import plot_results \n", + "plot_results('path/to/results.csv') # plot 'results.csv' as 'results.png'\n", + "```\n", + "\n", + "\"COCO128" ] }, { @@ -923,12 +910,12 @@ "source": [ "# Environments\n", "\n", - " may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n", + "YOLOv3 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n", "\n", - "- **Notebooks** with free GPU: \"Run \"Open \"Open\n", - "- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\n", - "- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\n", - "- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) \"Docker\n" + "- **Google Colab and Kaggle** notebooks with free GPU: \"Open \"Open\n", + "- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart)\n", + "- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/AWS-Quickstart)\n", + "- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/Docker-Quickstart) \"Docker\n" ] }, { @@ -939,9 +926,9 @@ "source": [ "# Status\n", "\n", - "![ CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)\n", + "![CI CPU testing](https://github.com/ultralytics/yolov3/workflows/CI%20CPU%20testing/badge.svg)\n", "\n", - "If this badge is green, all [YOLOv3 GitHub Actions](https://github.com/ultralytics/yolov3/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on macOS, Windows, and Ubuntu every 24 hours and on every commit.\n" + "If this badge is green, all [YOLOv3 GitHub Actions](https://github.com/ultralytics/yolov3/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv3 training ([train.py](https://github.com/ultralytics/yolov3/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov3/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov3/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov3/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n" ] }, { @@ -952,22 +939,112 @@ "source": [ "# Appendix\n", "\n", - "Additional content below." + "Optional extras below. 
Unit tests validate repo functionality and should be run on any PRs submitted.\n" ] }, + { + "cell_type": "code", + "metadata": { + "id": "mcKoSIK2WSzj" + }, + "source": [ + "# Reproduce\n", + "for x in 'yolov3', 'yolov3-spp', 'yolov3-tiny':\n", + " !python val.py --weights {x}.pt --data coco.yaml --img 640 --task speed # speed\n", + " !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP" + ], + "execution_count": null, + "outputs": [] + }, { "cell_type": "code", "metadata": { "id": "GMusP4OAxFu6" }, "source": [ - "# PyTorch HUB Inference (DetectionModels only)\n", + "# PyTorch Hub\n", "import torch\n", "\n", - "model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True) # yolov5n - yolov5x6 or custom\n", - "im = 'https://ultralytics.com/images/zidane.jpg' # file, Path, PIL.Image, OpenCV, nparray, list\n", - "results = model(im) # inference\n", - "results.print() # or .show(), .save(), .crop(), .pandas(), etc." + "# Model\n", + "model = torch.hub.load('ultralytics/yolov3', 'yolov3')\n", + "\n", + "# Images\n", + "dir = 'https://ultralytics.com/images/'\n", + "imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images\n", + "\n", + "# Inference\n", + "results = model(imgs)\n", + "results.print() # or .show(), .save()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "FGH0ZjkGjejy" + }, + "source": [ + "# CI Checks\n", + "%%shell\n", + "export PYTHONPATH=\"$PWD\" # to run *.py. files in subdirectories\n", + "rm -rf runs # remove runs/\n", + "for m in yolov3-tiny; do # models\n", + " python train.py --img 64 --batch 32 --weights $m.pt --epochs 1 --device 0 # train pretrained\n", + " python train.py --img 64 --batch 32 --weights '' --cfg $m.yaml --epochs 1 --device 0 # train scratch\n", + " for d in 0 cpu; do # devices\n", + " python val.py --weights $m.pt --device $d # val official\n", + " python val.py --weights runs/train/exp/weights/best.pt --device $d # val custom\n", + " python detect.py --weights $m.pt --device $d # detect official\n", + " python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom\n", + " done\n", + " python hubconf.py # hub\n", + " python models/yolo.py --cfg $m.yaml # build PyTorch model\n", + " python models/tf.py --weights $m.pt # build TensorFlow model\n", + " python export.py --img 64 --batch 1 --weights $m.pt --include torchscript onnx # export\n", + "done" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "gogI-kwi3Tye" + }, + "source": [ + "# Profile\n", + "from utils.torch_utils import profile\n", + "\n", + "m1 = lambda x: x * torch.sigmoid(x)\n", + "m2 = torch.nn.SiLU()\n", + "results = profile(input=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "RVRSOhEvUdb5" + }, + "source": [ + "# Evolve\n", + "!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov3.pt --cache --noautoanchor --evolve\n", + "!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "BSgFCAcMbk1R" + }, + "source": [ + "# VOC\n", + "for b, m in zip([24, 24, 64], ['yolov3', 'yolov3-spp', 'yolov3-tiny']): # zip(batch_size, model)\n", + " !python train.py --batch {b} --weights {m}.pt --data 
VOC.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}" ], "execution_count": null, "outputs": [] diff --git a/yolov3/utils/__init__.py b/yolov3/utils/__init__.py index 8e855a38..309830c8 100644 --- a/yolov3/utils/__init__.py +++ b/yolov3/utils/__init__.py @@ -3,78 +3,16 @@ utils/initialization """ -import contextlib -import platform -import threading - -def emojis(str=''): - # Return platform-dependent emoji-safe version of string - return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str - - -class TryExcept(contextlib.ContextDecorator): - # YOLOv3 TryExcept class. Usage: @TryExcept() decorator or 'with TryExcept():' context manager - def __init__(self, msg=''): - self.msg = msg - - def __enter__(self): - pass - - def __exit__(self, exc_type, value, traceback): - if value: - print(emojis(f"{self.msg}{': ' if self.msg else ''}{value}")) - return True - - -def threaded(func): - # Multi-threads a target function and returns thread. Usage: @threaded decorator - def wrapper(*args, **kwargs): - thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True) - thread.start() - return thread - - return wrapper - - -def join_threads(verbose=False): - # Join all daemon threads, i.e. atexit.register(lambda: join_threads()) - main_thread = threading.current_thread() - for t in threading.enumerate(): - if t is not main_thread: - if verbose: - print(f'Joining thread {t.name}') - t.join() - - -def notebook_init(verbose=True): - # Check system software and hardware +def notebook_init(): + # For notebooks print('Checking setup...') - - import os - import shutil - - from utils.general import check_font, check_requirements, is_colab - from utils.torch_utils import select_device # imports - - check_font() - - import psutil from IPython import display # to display images and clear console output - if is_colab(): - shutil.rmtree('/content/sample_data', ignore_errors=True) # remove colab /sample_data directory - - # System info - if verbose: - gb = 1 << 30 # bytes to GiB (1024 ** 3) - ram = psutil.virtual_memory().total - total, used, free = shutil.disk_usage('/') - display.clear_output() - s = f'({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)' - else: - s = '' + from utils.general import emojis + from utils.torch_utils import select_device # imports + display.clear_output() select_device(newline=False) - print(emojis(f'Setup complete ✅ {s}')) + print(emojis('Setup complete ✅')) return display diff --git a/yolov3/utils/activations.py b/yolov3/utils/activations.py index 0ae05152..ae2fef1c 100644 --- a/yolov3/utils/activations.py +++ b/yolov3/utils/activations.py @@ -8,32 +8,29 @@ import torch.nn as nn import torch.nn.functional as F -class SiLU(nn.Module): - # SiLU activation https://arxiv.org/pdf/1606.08415.pdf +# SiLU https://arxiv.org/pdf/1606.08415.pdf ---------------------------------------------------------------------------- +class SiLU(nn.Module): # export-friendly version of nn.SiLU() @staticmethod def forward(x): return x * torch.sigmoid(x) -class Hardswish(nn.Module): - # Hard-SiLU activation +class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() @staticmethod def forward(x): - # return x * F.hardsigmoid(x) # for TorchScript and CoreML - return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0 # for TorchScript, CoreML and ONNX + # return x * F.hardsigmoid(x) # for torchscript and CoreML + return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0 # for torchscript, 
CoreML and ONNX +# Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- class Mish(nn.Module): - # Mish activation https://github.com/digantamisra98/Mish @staticmethod def forward(x): return x * F.softplus(x).tanh() class MemoryEfficientMish(nn.Module): - # Mish activation memory-efficient class F(torch.autograd.Function): - @staticmethod def forward(ctx, x): ctx.save_for_backward(x) @@ -50,8 +47,8 @@ class MemoryEfficientMish(nn.Module): return self.F.apply(x) +# FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- class FReLU(nn.Module): - # FReLU activation https://arxiv.org/abs/2007.11824 def __init__(self, c1, k=3): # ch_in, kernel super().__init__() self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) @@ -61,8 +58,9 @@ class FReLU(nn.Module): return torch.max(x, self.bn(self.conv(x))) +# ACON https://arxiv.org/pdf/2009.04759.pdf ---------------------------------------------------------------------------- class AconC(nn.Module): - r""" ACON activation (activate or not) + r""" ACON activation (activate or not). AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter according to "Activate or Not: Learning Customized Activation" . """ @@ -79,7 +77,7 @@ class AconC(nn.Module): class MetaAconC(nn.Module): - r""" ACON activation (activate or not) + r""" ACON activation (activate or not). MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network according to "Activate or Not: Learning Customized Activation" . """ diff --git a/yolov3/utils/augmentations.py b/yolov3/utils/augmentations.py index 85e68d05..16685044 100644 --- a/yolov3/utils/augmentations.py +++ b/yolov3/utils/augmentations.py @@ -8,42 +8,34 @@ import random import cv2 import numpy as np -import torch -import torchvision.transforms as T -import torchvision.transforms.functional as TF -from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box, xywhn2xyxy +from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box from utils.metrics import bbox_ioa -IMAGENET_MEAN = 0.485, 0.456, 0.406 # RGB mean -IMAGENET_STD = 0.229, 0.224, 0.225 # RGB standard deviation - class Albumentations: - # YOLOv3 Albumentations class (optional, only used if package is installed) - def __init__(self, size=640): + # Albumentations class (optional, only used if package is installed) + def __init__(self): self.transform = None - prefix = colorstr('albumentations: ') try: import albumentations as A check_version(A.__version__, '1.0.3', hard=True) # version requirement - T = [ - A.RandomResizedCrop(height=size, width=size, scale=(0.8, 1.0), ratio=(0.9, 1.11), p=0.0), + self.transform = A.Compose([ A.Blur(p=0.01), A.MedianBlur(p=0.01), A.ToGray(p=0.01), A.CLAHE(p=0.01), A.RandomBrightnessContrast(p=0.0), A.RandomGamma(p=0.0), - A.ImageCompression(quality_lower=75, p=0.0)] # transforms - self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels'])) + A.ImageCompression(quality_lower=75, p=0.0)], + bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels'])) - LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p)) + LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) except ImportError: # package not installed, skip pass except Exception as e: - 
LOGGER.info(f'{prefix}{e}') + LOGGER.info(colorstr('albumentations: ') + f'{e}') def __call__(self, im, labels, p=1.0): if self.transform and random.random() < p: @@ -52,18 +44,6 @@ class Albumentations: return im, labels -def normalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD, inplace=False): - # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = (x - mean) / std - return TF.normalize(x, mean, std, inplace=inplace) - - -def denormalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD): - # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = x * std + mean - for i in range(3): - x[:, i] = x[:, i] * std[i] + mean[i] - return x - - def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5): # HSV color-space augmentation if hgain or sgain or vgain: @@ -141,14 +121,7 @@ def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleF return im, ratio, (dw, dh) -def random_perspective(im, - targets=(), - segments=(), - degrees=10, - translate=.1, - scale=.1, - shear=10, - perspective=0.0, +def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)): # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10)) # targets = [cls, xyxy] @@ -201,7 +174,7 @@ def random_perspective(im, # Transform label coordinates n = len(targets) if n: - use_segments = any(x.any() for x in segments) and len(segments) == n + use_segments = any(x.any() for x in segments) new = np.zeros((n, 4)) if use_segments: # warp segments segments = resample_segments(segments) # upsample @@ -250,10 +223,12 @@ def copy_paste(im, labels, segments, p=0.5): if (ioa < 0.30).all(): # allow 30% obscuration of existing labels labels = np.concatenate((labels, [[l[0], *box]]), 0) segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (1, 1, 1), cv2.FILLED) + cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - result = cv2.flip(im, 1) # augment segments (flip left-right) - i = cv2.flip(im_new, 1).astype(bool) + result = cv2.bitwise_and(src1=im, src2=im_new) + result = cv2.flip(result, 1) # augment segments (flip left-right) + i = result > 0 # pixels to replace + # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug return im, labels, segments @@ -280,7 +255,7 @@ def cutout(im, labels, p=0.5): # return unobscured labels if len(labels) and s > 0.03: box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, xywhn2xyxy(labels[:, 1:5], w, h)) # intersection over area + ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area labels = labels[ioa < 0.60] # remove >60% obscured labels return labels @@ -294,104 +269,9 @@ def mixup(im, labels, im2, labels2): return im, labels -def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) +def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio w1, h1 = box1[2] - box1[0], box1[3] - box1[1] w2, h2 = box2[2] - box2[0], box2[3] - box2[1] ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def classify_albumentations( - augment=True, - 
size=224, - scale=(0.08, 1.0), - ratio=(0.75, 1.0 / 0.75), # 0.75, 1.33 - hflip=0.5, - vflip=0.0, - jitter=0.4, - mean=IMAGENET_MEAN, - std=IMAGENET_STD, - auto_aug=False): - # YOLOv3 classification Albumentations (optional, only used if package is installed) - prefix = colorstr('albumentations: ') - try: - import albumentations as A - from albumentations.pytorch import ToTensorV2 - check_version(A.__version__, '1.0.3', hard=True) # version requirement - if augment: # Resize and crop - T = [A.RandomResizedCrop(height=size, width=size, scale=scale, ratio=ratio)] - if auto_aug: - # TODO: implement AugMix, AutoAug & RandAug in albumentation - LOGGER.info(f'{prefix}auto augmentations are currently not supported') - else: - if hflip > 0: - T += [A.HorizontalFlip(p=hflip)] - if vflip > 0: - T += [A.VerticalFlip(p=vflip)] - if jitter > 0: - color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, satuaration, 0 hue - T += [A.ColorJitter(*color_jitter, 0)] - else: # Use fixed crop for eval set (reproducibility) - T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)] - T += [A.Normalize(mean=mean, std=std), ToTensorV2()] # Normalize and convert to Tensor - LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p)) - return A.Compose(T) - - except ImportError: # package not installed, skip - LOGGER.warning(f'{prefix}âš ī¸ not found, install with `pip install albumentations` (recommended)') - except Exception as e: - LOGGER.info(f'{prefix}{e}') - - -def classify_transforms(size=224): - # Transforms to apply if albumentations not installed - assert isinstance(size, int), f'ERROR: classify_transforms size {size} must be integer, not (list, tuple)' - # T.Compose([T.ToTensor(), T.Resize(size), T.CenterCrop(size), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)]) - return T.Compose([CenterCrop(size), ToTensor(), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)]) - - -class LetterBox: - # YOLOv3 LetterBox class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()]) - def __init__(self, size=(640, 640), auto=False, stride=32): - super().__init__() - self.h, self.w = (size, size) if isinstance(size, int) else size - self.auto = auto # pass max size integer, automatically solve for short side using stride - self.stride = stride # used with auto - - def __call__(self, im): # im = np.array HWC - imh, imw = im.shape[:2] - r = min(self.h / imh, self.w / imw) # ratio of new/old - h, w = round(imh * r), round(imw * r) # resized image - hs, ws = (math.ceil(x / self.stride) * self.stride for x in (h, w)) if self.auto else self.h, self.w - top, left = round((hs - h) / 2 - 0.1), round((ws - w) / 2 - 0.1) - im_out = np.full((self.h, self.w, 3), 114, dtype=im.dtype) - im_out[top:top + h, left:left + w] = cv2.resize(im, (w, h), interpolation=cv2.INTER_LINEAR) - return im_out - - -class CenterCrop: - # YOLOv3 CenterCrop class for image preprocessing, i.e. T.Compose([CenterCrop(size), ToTensor()]) - def __init__(self, size=640): - super().__init__() - self.h, self.w = (size, size) if isinstance(size, int) else size - - def __call__(self, im): # im = np.array HWC - imh, imw = im.shape[:2] - m = min(imh, imw) # min dimension - top, left = (imh - m) // 2, (imw - m) // 2 - return cv2.resize(im[top:top + m, left:left + m], (self.w, self.h), interpolation=cv2.INTER_LINEAR) - - -class ToTensor: - # YOLOv3 ToTensor class for image preprocessing, i.e. 
T.Compose([LetterBox(size), ToTensor()]) - def __init__(self, half=False): - super().__init__() - self.half = half - - def __call__(self, im): # im = np.array HWC in BGR order - im = np.ascontiguousarray(im.transpose((2, 0, 1))[::-1]) # HWC to CHW -> BGR to RGB -> contiguous - im = torch.from_numpy(im) # to torch - im = im.half() if self.half else im.float() # uint8 to fp16/32 - im /= 255.0 # 0-255 to 0.0-1.0 - return im diff --git a/yolov3/utils/autoanchor.py b/yolov3/utils/autoanchor.py index 6b11e533..0c202c49 100644 --- a/yolov3/utils/autoanchor.py +++ b/yolov3/utils/autoanchor.py @@ -1,6 +1,6 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license """ -AutoAnchor utils +Auto-anchor utils """ import random @@ -10,23 +10,21 @@ import torch import yaml from tqdm import tqdm -from utils import TryExcept -from utils.general import LOGGER, TQDM_BAR_FORMAT, colorstr +from utils.general import LOGGER, colorstr, emojis PREFIX = colorstr('AutoAnchor: ') def check_anchor_order(m): - # Check anchor order against stride order for YOLOv3 Detect() module m, and correct if necessary - a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer + # Check anchor order against stride order for Detect() module m, and correct if necessary + a = m.anchors.prod(-1).view(-1) # anchor area da = a[-1] - a[0] # delta a ds = m.stride[-1] - m.stride[0] # delta s - if da and (da.sign() != ds.sign()): # same order + if da.sign() != ds.sign(): # same order LOGGER.info(f'{PREFIX}Reversing anchor order') m.anchors[:] = m.anchors.flip(0) -@TryExcept(f'{PREFIX}ERROR') def check_anchors(dataset, model, thr=4.0, imgsz=640): # Check anchor fit to data, recompute if necessary m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() @@ -42,26 +40,26 @@ def check_anchors(dataset, model, thr=4.0, imgsz=640): bpr = (best > 1 / thr).float().mean() # best possible recall return bpr, aat - stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides - anchors = m.anchors.clone() * stride # current anchors + anchors = m.anchors.clone() * m.stride.to(m.anchors.device).view(-1, 1, 1) # current anchors bpr, aat = metric(anchors.cpu().view(-1, 2)) s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). ' if bpr > 0.98: # threshold to recompute - LOGGER.info(f'{s}Current anchors are a good fit to dataset ✅') + LOGGER.info(emojis(f'{s}Current anchors are a good fit to dataset ✅')) else: - LOGGER.info(f'{s}Anchors are a poor fit to dataset âš ī¸, attempting to improve...') + LOGGER.info(emojis(f'{s}Anchors are a poor fit to dataset âš ī¸, attempting to improve...')) na = m.anchors.numel() // 2 # number of anchors - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) + try: + anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) + except Exception as e: + LOGGER.info(f'{PREFIX}ERROR: {e}') new_bpr = metric(anchors)[0] if new_bpr > bpr: # replace anchors anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchors[:] = anchors.clone().view_as(m.anchors) - check_anchor_order(m) # must be in pixel-space (not grid-space) - m.anchors /= stride - s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)' + m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss + check_anchor_order(m) + LOGGER.info(f'{PREFIX}New anchors saved to model. 
Update model *.yaml to use these anchors in the future.') else: - s = f'{PREFIX}Done âš ī¸ (original anchors better than new anchors, proceeding with original anchors)' - LOGGER.info(s) + LOGGER.info(f'{PREFIX}Original anchors better than new anchors. Proceeding with original anchors.') def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): @@ -83,7 +81,6 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen """ from scipy.cluster.vq import kmeans - npr = np.random thr = 1 / thr def metric(k, wh): # compute metrics @@ -103,7 +100,7 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \ f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \ f'past_thr={x[x > thr].mean():.3f}-mean: ' - for x in k: + for i, x in enumerate(k): s += '%i,%i, ' % (round(x[0]), round(x[1])) if verbose: LOGGER.info(s[:-2]) @@ -112,7 +109,7 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen if isinstance(dataset, str): # *.yaml file with open(dataset, errors='ignore') as f: data_dict = yaml.safe_load(f) # model dict - from utils.dataloaders import LoadImagesAndLabels + from utils.datasets import LoadImagesAndLabels dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) # Get label wh @@ -122,21 +119,18 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen # Filter i = (wh0 < 3.0).any(1).sum() if i: - LOGGER.info(f'{PREFIX}WARNING âš ī¸ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size') - wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels - # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 + LOGGER.info(f'{PREFIX}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.') + wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels + # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - # Kmeans init - try: - LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...') - assert n <= len(wh) # apply overdetermined constraint - s = wh.std(0) # sigmas for whitening - k = kmeans(wh / s, n, iter=30)[0] * s # points - assert n == len(k) # kmeans may return fewer points than requested if wh is insufficient or too similar - except Exception: - LOGGER.warning(f'{PREFIX}WARNING âš ī¸ switching strategies from kmeans to random init') - k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init - wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0)) + # Kmeans calculation + LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...') + s = wh.std(0) # sigmas for whitening + k, dist = kmeans(wh / s, n, iter=30) # points, mean distance + assert len(k) == n, f'{PREFIX}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}' + k *= s + wh = torch.tensor(wh, dtype=torch.float32) # filtered + wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered k = print_results(k, verbose=False) # Plot @@ -152,8 +146,9 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen # fig.savefig('wh.png', dpi=200) # Evolve + npr = np.random f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), bar_format=TQDM_BAR_FORMAT) # progress bar + pbar = tqdm(range(gen), desc=f'{PREFIX}Evolving anchors with Genetic Algorithm:') # progress bar for _ in pbar: v = np.ones(sh) while (v == 1).all(): # mutate until a change occurs (prevent duplicates) @@ -166,4 +161,4 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen if verbose: print_results(k, verbose) - return print_results(k).astype(np.float32) + return print_results(k) diff --git a/yolov3/utils/autobatch.py b/yolov3/utils/autobatch.py index de613ca1..4fbf32bc 100644 --- a/yolov3/utils/autobatch.py +++ b/yolov3/utils/autobatch.py @@ -7,66 +7,51 @@ from copy import deepcopy import numpy as np import torch +from torch.cuda import amp from utils.general import LOGGER, colorstr from utils.torch_utils import profile -def check_train_batch_size(model, imgsz=640, amp=True): - # Check YOLOv3 training batch size - with torch.cuda.amp.autocast(amp): +def check_train_batch_size(model, imgsz=640): + # Check training batch size + with amp.autocast(): return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size -def autobatch(model, imgsz=640, fraction=0.8, batch_size=16): - # Automatically estimate best YOLOv3 batch size to use `fraction` of available CUDA memory +def autobatch(model, imgsz=640, fraction=0.9, batch_size=16): + # Automatically estimate best batch size to use `fraction` of available CUDA memory # Usage: # import torch # from utils.autobatch import autobatch - # model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) + # model = torch.hub.load('ultralytics/yolov3', 'yolov3', autoshape=False) # print(autobatch(model)) - # Check device prefix = colorstr('AutoBatch: ') LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}') device = next(model.parameters()).device # get model device if device.type == 'cpu': LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}') return batch_size - if 
torch.backends.cudnn.benchmark: - LOGGER.info(f'{prefix} âš ī¸ Requires torch.backends.cudnn.benchmark=False, using default batch-size {batch_size}') - return batch_size - # Inspect CUDA memory - gb = 1 << 30 # bytes to GiB (1024 ** 3) d = str(device).upper() # 'CUDA:0' properties = torch.cuda.get_device_properties(device) # device properties - t = properties.total_memory / gb # GiB total - r = torch.cuda.memory_reserved(device) / gb # GiB reserved - a = torch.cuda.memory_allocated(device) / gb # GiB allocated - f = t - (r + a) # GiB free + t = properties.total_memory / 1024 ** 3 # (GiB) + r = torch.cuda.memory_reserved(device) / 1024 ** 3 # (GiB) + a = torch.cuda.memory_allocated(device) / 1024 ** 3 # (GiB) + f = t - (r + a) # free inside reserved LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free') - # Profile batch sizes batch_sizes = [1, 2, 4, 8, 16] try: - img = [torch.empty(b, 3, imgsz, imgsz) for b in batch_sizes] - results = profile(img, model, n=3, device=device) + img = [torch.zeros(b, 3, imgsz, imgsz) for b in batch_sizes] + y = profile(img, model, n=3, device=device) except Exception as e: LOGGER.warning(f'{prefix}{e}') - # Fit a solution - y = [x[2] for x in results if x] # memory [2] - p = np.polyfit(batch_sizes[:len(y)], y, deg=1) # first degree polynomial fit + y = [x[2] for x in y if x] # memory [2] + batch_sizes = batch_sizes[:len(y)] + p = np.polyfit(batch_sizes, y, deg=1) # first degree polynomial fit b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) - if None in results: # some sizes failed - i = results.index(None) # first fail index - if b >= batch_sizes[i]: # y intercept above failure point - b = batch_sizes[max(i - 1, 0)] # select prior safe point - if b < 1 or b > 1024: # b outside of safe range - b = batch_size - LOGGER.warning(f'{prefix}WARNING âš ī¸ CUDA anomaly detected, recommend restart environment and retry command.') - - fraction = (np.polyval(p, b) + r + a) / t # actual fraction predicted - LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%) ✅') + LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%)') return b diff --git a/yolov3/utils/callbacks.py b/yolov3/utils/callbacks.py index 7715cf18..43e81a7c 100644 --- a/yolov3/utils/callbacks.py +++ b/yolov3/utils/callbacks.py @@ -3,46 +3,46 @@ Callback utils """ -import threading - class Callbacks: """" - Handles all registered callbacks for YOLOv3 Hooks + Handles all registered callbacks for Hooks """ - def __init__(self): - # Define the available callbacks - self._callbacks = { - 'on_pretrain_routine_start': [], - 'on_pretrain_routine_end': [], - 'on_train_start': [], - 'on_train_epoch_start': [], - 'on_train_batch_start': [], - 'optimizer_step': [], - 'on_before_zero_grad': [], - 'on_train_batch_end': [], - 'on_train_epoch_end': [], - 'on_val_start': [], - 'on_val_batch_start': [], - 'on_val_image_end': [], - 'on_val_batch_end': [], - 'on_val_end': [], - 'on_fit_epoch_end': [], # fit = train + val - 'on_model_save': [], - 'on_train_end': [], - 'on_params_update': [], - 'teardown': [],} - self.stop_training = False # set True to interrupt training + # Define the available callbacks + _callbacks = { + 'on_pretrain_routine_start': [], + 'on_pretrain_routine_end': [], + + 'on_train_start': [], + 'on_train_epoch_start': [], + 'on_train_batch_start': [], + 'optimizer_step': [], + 'on_before_zero_grad': [], + 
'on_train_batch_end': [], + 'on_train_epoch_end': [], + + 'on_val_start': [], + 'on_val_batch_start': [], + 'on_val_image_end': [], + 'on_val_batch_end': [], + 'on_val_end': [], + + 'on_fit_epoch_end': [], # fit = train + val + 'on_model_save': [], + 'on_train_end': [], + + 'teardown': [], + } def register_action(self, hook, name='', callback=None): """ Register a new action to a callback hook Args: - hook: The callback hook name to register the action to - name: The name of the action for later reference - callback: The callback to fire + hook The callback hook name to register the action to + name The name of the action for later reference + callback The callback to fire """ assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" assert callable(callback), f"callback '{callback}' is not callable" @@ -53,24 +53,24 @@ class Callbacks: Returns all the registered actions by callback hook Args: - hook: The name of the hook to check, defaults to all + hook The name of the hook to check, defaults to all """ - return self._callbacks[hook] if hook else self._callbacks + if hook: + return self._callbacks[hook] + else: + return self._callbacks - def run(self, hook, *args, thread=False, **kwargs): + def run(self, hook, *args, **kwargs): """ - Loop through the registered actions and fire all callbacks on main thread + Loop through the registered actions and fire all callbacks Args: - hook: The name of the hook to check, defaults to all - args: Arguments to receive from YOLOv3 - thread: (boolean) Run callbacks in daemon thread - kwargs: Keyword Arguments to receive from YOLOv3 + hook The name of the hook to check, defaults to all + args Arguments to receive from + kwargs Keyword Arguments to receive from """ assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" + for logger in self._callbacks[hook]: - if thread: - threading.Thread(target=logger['callback'], args=args, kwargs=kwargs, daemon=True).start() - else: - logger['callback'](*args, **kwargs) + logger['callback'](*args, **kwargs) diff --git a/yolov3/utils/downloads.py b/yolov3/utils/downloads.py index cac99afa..cd653078 100644 --- a/yolov3/utils/downloads.py +++ b/yolov3/utils/downloads.py @@ -3,104 +3,147 @@ Download utils """ -import logging +import os +import platform import subprocess +import time import urllib from pathlib import Path +from zipfile import ZipFile import requests import torch -def is_url(url, check=True): - # Check if string is URL and check if URL exists - try: - url = str(url) - result = urllib.parse.urlparse(url) - assert all([result.scheme, result.netloc]) # check if is url - return (urllib.request.urlopen(url).getcode() == 200) if check else True # check if exists online - except (AssertionError, urllib.request.HTTPError): - return False - - def gsutil_getsize(url=''): # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') return eval(s.split(' ')[0]) if len(s) else 0 # bytes -def url_getsize(url='https://ultralytics.com/images/bus.jpg'): - # Return downloadable file size in bytes - response = requests.head(url, allow_redirects=True) - return int(response.headers.get('content-length', -1)) - - def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''): # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes - from utils.general import LOGGER - file = Path(file) assert_msg = f"Downloaded file '{file}' 
does not exist or size is < min_bytes={min_bytes}" try: # url1 - LOGGER.info(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file), progress=LOGGER.level <= logging.INFO) + print(f'Downloading {url} to {file}...') + torch.hub.download_url_to_file(url, str(file)) assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check except Exception as e: # url2 - if file.exists(): - file.unlink() # remove partial downloads - LOGGER.info(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') - subprocess.run( - f"curl -# -L '{url2 or url}' -o '{file}' --retry 3 -C -".split()) # curl download, retry and resume on fail + file.unlink(missing_ok=True) # remove partial downloads + print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') + os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail finally: if not file.exists() or file.stat().st_size < min_bytes: # check - if file.exists(): - file.unlink() # remove partial downloads - LOGGER.info(f'ERROR: {assert_msg}\n{error_msg}') - LOGGER.info('') + file.unlink(missing_ok=True) # remove partial downloads + print(f"ERROR: {assert_msg}\n{error_msg}") + print('') -def attempt_download(file, repo='ultralytics/yolov5', release='v7.0'): - # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v7.0', etc. - from utils.general import LOGGER - - def github_assets(repository, version='latest'): - # Return GitHub repo tag (i.e. 'v7.0') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...]) - if version != 'latest': - version = f'tags/{version}' # i.e. tags/v7.0 - response = requests.get(f'https://api.github.com/repos/{repository}/releases/{version}').json() # github api - return response['tag_name'], [x['name'] for x in response['assets']] # tag, assets - +def attempt_download(file, repo='ultralytics/yolov3'): # from utils.downloads import *; attempt_download() + # Attempt file download if does not exist file = Path(str(file).strip().replace("'", '')) + if not file.exists(): # URL specified name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc. if str(file).startswith(('http:/', 'https:/')): # download url = str(file).replace(':/', '://') # Pathlib turns :// -> :/ - file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... - if Path(file).is_file(): - LOGGER.info(f'Found {url} locally at {file}') # file already exists - else: - safe_download(file=file, url=url, min_bytes=1E5) - return file + name = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... + safe_download(file=name, url=url, min_bytes=1E5) + return name # GitHub assets - assets = [f'yolov5{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '6', '-cls', '-seg')] # default - try: - tag, assets = github_assets(repo, release) - except Exception: - try: - tag, assets = github_assets(repo) # latest release - except Exception: - try: - tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] - except Exception: - tag = release - file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required) + try: + response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api + assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov3.pt'...] + tag = response['tag_name'] # i.e. 
'v1.0' + except: # fallback plan + assets = ['yolov3.pt', 'yolov3-spp.pt', 'yolov3-tiny.pt'] + try: + tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] + except: + tag = 'v9.5.0' # current release + if name in assets: safe_download(file, url=f'https://github.com/{repo}/releases/download/{tag}/{name}', + # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional) min_bytes=1E5, - error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/{tag}') + error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/') return str(file) + + +def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'): + # Downloads a file from Google Drive. from yolov3.utils.downloads import *; gdrive_download() + t = time.time() + file = Path(file) + cookie = Path('cookie') # gdrive cookie + print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='') + file.unlink(missing_ok=True) # remove existing file + cookie.unlink(missing_ok=True) # remove existing cookie + + # Attempt file download + out = "NUL" if platform.system() == "Windows" else "/dev/null" + os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') + if os.path.exists('cookie'): # large file + s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' + else: # small file + s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' + r = os.system(s) # execute, capture return + cookie.unlink(missing_ok=True) # remove existing cookie + + # Error check + if r != 0: + file.unlink(missing_ok=True) # remove partial + print('Download error ') # raise Exception('Download error') + return r + + # Unzip if archive + if file.suffix == '.zip': + print('unzipping... 
', end='') + ZipFile(file).extractall(path=file.parent) # unzip + file.unlink() # remove zip + + print(f'Done ({time.time() - t:.1f}s)') + return r + + +def get_token(cookie="./cookie"): + with open(cookie) as f: + for line in f: + if "download" in line: + return line.split()[-1] + return "" + +# Google utils: https://cloud.google.com/storage/docs/reference/libraries ---------------------------------------------- +# +# +# def upload_blob(bucket_name, source_file_name, destination_blob_name): +# # Uploads a file to a bucket +# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python +# +# storage_client = storage.Client() +# bucket = storage_client.get_bucket(bucket_name) +# blob = bucket.blob(destination_blob_name) +# +# blob.upload_from_filename(source_file_name) +# +# print('File {} uploaded to {}.'.format( +# source_file_name, +# destination_blob_name)) +# +# +# def download_blob(bucket_name, source_blob_name, destination_file_name): +# # Uploads a blob from a bucket +# storage_client = storage.Client() +# bucket = storage_client.get_bucket(bucket_name) +# blob = bucket.blob(source_blob_name) +# +# blob.download_to_filename(destination_file_name) +# +# print('Blob {} downloaded to {}.'.format( +# source_blob_name, +# destination_file_name)) diff --git a/yolov3/utils/general.py b/yolov3/utils/general.py old mode 100644 new mode 100755 index 56c94d58..8331e7db --- a/yolov3/utils/general.py +++ b/yolov3/utils/general.py @@ -5,31 +5,23 @@ General utils import contextlib import glob -import inspect import logging -import logging.config import math import os import platform import random import re +import shutil import signal -import subprocess -import sys import time import urllib -from copy import deepcopy -from datetime import datetime from itertools import repeat from multiprocessing.pool import ThreadPool from pathlib import Path from subprocess import check_output -from tarfile import is_tarfile -from typing import Optional -from zipfile import ZipFile, is_zipfile +from zipfile import ZipFile import cv2 -import IPython import numpy as np import pandas as pd import pkg_resources as pkg @@ -37,152 +29,41 @@ import torch import torchvision import yaml -from utils import TryExcept, emojis from utils.downloads import gsutil_getsize from utils.metrics import box_iou, fitness -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv3 root directory -RANK = int(os.getenv('RANK', -1)) - # Settings -NUM_THREADS = min(8, max(1, os.cpu_count() - 1)) # number of YOLOv3 multiprocessing threads -DATASETS_DIR = Path(os.getenv('YOLOv5_DATASETS_DIR', ROOT.parent / 'datasets')) # global datasets directory -AUTOINSTALL = str(os.getenv('YOLOv5_AUTOINSTALL', True)).lower() == 'true' # global auto-install mode -VERBOSE = str(os.getenv('YOLOv5_VERBOSE', True)).lower() == 'true' # global verbose mode -TQDM_BAR_FORMAT = '{l_bar}{bar:10}{r_bar}' # tqdm bar format -FONT = 'Arial.ttf' # https://ultralytics.com/assets/Arial.ttf - torch.set_printoptions(linewidth=320, precision=5, profile='long') np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 pd.options.display.max_columns = 10 cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(NUM_THREADS) # NumExpr max threads -os.environ['OMP_NUM_THREADS'] = '1' if platform.system() == 'darwin' else str(NUM_THREADS) # OpenMP (PyTorch and SciPy) +os.environ['NUMEXPR_MAX_THREADS'] = 
str(min(os.cpu_count(), 8)) # NumExpr max threads + +FILE = Path(__file__).resolve() +ROOT = FILE.parents[1] # root directory -def is_ascii(s=''): - # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7) - s = str(s) # convert list, tuple, None, etc. to str - return len(s.encode().decode('ascii', 'ignore')) == len(s) - - -def is_chinese(s='äēēåˇĨæ™ēčƒŊ'): - # Is string composed of any Chinese characters? - return bool(re.search('[\u4e00-\u9fff]', str(s))) - - -def is_colab(): - # Is environment a Google Colab instance? - return 'google.colab' in sys.modules - - -def is_notebook(): - # Is environment a Jupyter notebook? Verified on Colab, Jupyterlab, Kaggle, Paperspace - ipython_type = str(type(IPython.get_ipython())) - return 'colab' in ipython_type or 'zmqshell' in ipython_type - - -def is_kaggle(): - # Is environment a Kaggle Notebook? - return os.environ.get('PWD') == '/kaggle/working' and os.environ.get('KAGGLE_URL_BASE') == 'https://www.kaggle.com' - - -def is_docker() -> bool: - """Check if the process runs inside a docker container.""" - if Path('/.dockerenv').exists(): - return True - try: # check if docker is in control groups - with open('/proc/self/cgroup') as file: - return any('docker' in line for line in file) - except OSError: - return False - - -def is_writeable(dir, test=False): - # Return True if directory has write permissions, test opening a file with write permissions if test=True - if not test: - return os.access(dir, os.W_OK) # possible issues on Windows - file = Path(dir) / 'tmp.txt' - try: - with open(file, 'w'): # open file with write permissions - pass - file.unlink() # remove file - return True - except OSError: - return False - - -LOGGING_NAME = 'yolov5' - - -def set_logging(name=LOGGING_NAME, verbose=True): - # sets up logging for the given name +def set_logging(name=None, verbose=True): + # Sets level and returns logger rank = int(os.getenv('RANK', -1)) # rank in world for Multi-GPU trainings - level = logging.INFO if verbose and rank in {-1, 0} else logging.ERROR - logging.config.dictConfig({ - 'version': 1, - 'disable_existing_loggers': False, - 'formatters': { - name: { - 'format': '%(message)s'}}, - 'handlers': { - name: { - 'class': 'logging.StreamHandler', - 'formatter': name, - 'level': level,}}, - 'loggers': { - name: { - 'level': level, - 'handlers': [name], - 'propagate': False,}}}) + logging.basicConfig(format="%(message)s", level=logging.INFO if (verbose and rank in (-1, 0)) else logging.WARNING) + return logging.getLogger(name) -set_logging(LOGGING_NAME) # run before defining LOGGER -LOGGER = logging.getLogger(LOGGING_NAME) # define globally (used in train.py, val.py, detect.py, etc.) -if platform.system() == 'Windows': - for fn in LOGGER.info, LOGGER.warning: - setattr(LOGGER, fn.__name__, lambda x: fn(emojis(x))) # emoji safe logging - - -def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'): - # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required. 
- env = os.getenv(env_var) - if env: - path = Path(env) # use environment variable - else: - cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'} # 3 OS dirs - path = Path.home() / cfg.get(platform.system(), '') # OS-specific config dir - path = (path if is_writeable(path) else Path('/tmp')) / dir # GCP and AWS lambda fix, only /tmp is writeable - path.mkdir(exist_ok=True) # make if required - return path - - -CONFIG_DIR = user_config_dir() # Ultralytics settings dir +LOGGER = set_logging(__name__) # define globally (used in train.py, val.py, detect.py, etc.) class Profile(contextlib.ContextDecorator): - # YOLOv3 Profile class. Usage: @Profile() decorator or 'with Profile():' context manager - def __init__(self, t=0.0): - self.t = t - self.cuda = torch.cuda.is_available() - + # Usage: @Profile() decorator or 'with Profile():' context manager def __enter__(self): - self.start = self.time() - return self + self.start = time.time() def __exit__(self, type, value, traceback): - self.dt = self.time() - self.start # delta-time - self.t += self.dt # accumulate dt - - def time(self): - if self.cuda: - torch.cuda.synchronize() - return time.time() + print(f'Profile results: {time.time() - self.start:.5f}s') class Timeout(contextlib.ContextDecorator): - # YOLOv3 Timeout class. Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager + # Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True): self.seconds = int(seconds) self.timeout_message = timeout_msg @@ -192,15 +73,13 @@ class Timeout(contextlib.ContextDecorator): raise TimeoutError(self.timeout_message) def __enter__(self): - if platform.system() != 'Windows': # not supported on Windows - signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM - signal.alarm(self.seconds) # start countdown for SIGALRM to be raised + signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM + signal.alarm(self.seconds) # start countdown for SIGALRM to be raised def __exit__(self, exc_type, exc_val, exc_tb): - if platform.system() != 'Windows': - signal.alarm(0) # Cancel SIGALRM if it's scheduled - if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError - return True + signal.alarm(0) # Cancel SIGALRM if it's scheduled + if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError + return True class WorkingDirectory(contextlib.ContextDecorator): @@ -216,50 +95,40 @@ class WorkingDirectory(contextlib.ContextDecorator): os.chdir(self.cwd) +def try_except(func): + # try-except function. 
Usage: @try_except decorator + def handler(*args, **kwargs): + try: + func(*args, **kwargs) + except Exception as e: + print(e) + + return handler + + def methods(instance): # Get class/instance methods - return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith('__')] + return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith("__")] -def print_args(args: Optional[dict] = None, show_file=True, show_func=False): - # Print function arguments (optional args dict) - x = inspect.currentframe().f_back # previous frame - file, _, func, _, _ = inspect.getframeinfo(x) - if args is None: # get args automatically - args, _, _, frm = inspect.getargvalues(x) - args = {k: v for k, v in frm.items() if k in args} - try: - file = Path(file).resolve().relative_to(ROOT).with_suffix('') - except ValueError: - file = Path(file).stem - s = (f'{file}: ' if show_file else '') + (f'{func}: ' if show_func else '') - LOGGER.info(colorstr(s) + ', '.join(f'{k}={v}' for k, v in args.items())) +def print_args(name, opt): + # Print argparser arguments + LOGGER.info(colorstr(f'{name}: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items())) -def init_seeds(seed=0, deterministic=False): +def init_seeds(seed=0): # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html + # cudnn seed 0 settings are slower and more reproducible, else faster and less reproducible + import torch.backends.cudnn as cudnn random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) # for Multi-GPU, exception safe - # torch.backends.cudnn.benchmark = True # AutoBatch problem https://github.com/ultralytics/yolov5/issues/9287 - if deterministic and check_version(torch.__version__, '1.12.0'): # https://github.com/ultralytics/yolov5/pull/8213 - torch.use_deterministic_algorithms(True) - torch.backends.cudnn.deterministic = True - os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8' - os.environ['PYTHONHASHSEED'] = str(seed) + cudnn.benchmark, cudnn.deterministic = (False, True) if seed == 0 else (True, False) def intersect_dicts(da, db, exclude=()): # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values - return {k: v for k, v in da.items() if k in db and all(x not in k for x in exclude) and v.shape == db[k].shape} - - -def get_default_args(func): - # Get func() default arguments - signature = inspect.signature(func) - return {k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty} + return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} def get_latest_run(search_dir='.'): @@ -268,26 +137,76 @@ def get_latest_run(search_dir='.'): return max(last_list, key=os.path.getctime) if last_list else '' -def file_age(path=__file__): - # Return days since last file update - dt = (datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)) # delta - return dt.days # + dt.seconds / 86400 # fractional days +def user_config_dir(dir='Ultralytics', env_var='YOLOV3_CONFIG_DIR'): + # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required. 
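# Illustrative usage sketch (assumptions: the function defined above is importable from utils.general,
# and the YOLOV3_CONFIG_DIR environment variable from its signature takes priority over the per-OS default):
import os
os.environ['YOLOV3_CONFIG_DIR'] = '/tmp'  # hypothetical override for illustration only
from utils.general import user_config_dir
print(user_config_dir())  # -> /tmp/Ultralytics (unwriteable locations fall back to /tmp)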
+ env = os.getenv(env_var) + if env: + path = Path(env) # use environment variable + else: + cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'} # 3 OS dirs + path = Path.home() / cfg.get(platform.system(), '') # OS-specific config dir + path = (path if is_writeable(path) else Path('/tmp')) / dir # GCP and AWS lambda fix, only /tmp is writeable + path.mkdir(exist_ok=True) # make if required + return path -def file_date(path=__file__): - # Return human-readable file modification date, i.e. '2021-3-26' - t = datetime.fromtimestamp(Path(path).stat().st_mtime) - return f'{t.year}-{t.month}-{t.day}' +def is_writeable(dir, test=False): + # Return True if directory has write permissions, test opening a file with write permissions if test=True + if test: # method 1 + file = Path(dir) / 'tmp.txt' + try: + with open(file, 'w'): # open file with write permissions + pass + file.unlink() # remove file + return True + except OSError: + return False + else: # method 2 + return os.access(dir, os.R_OK) # possible issues on Windows + + +def is_docker(): + # Is environment a Docker container? + return Path('/workspace').exists() # or Path('/.dockerenv').exists() + + +def is_colab(): + # Is environment a Google Colab instance? + try: + import google.colab + return True + except ImportError: + return False + + +def is_pip(): + # Is file in a pip package? + return 'site-packages' in Path(__file__).resolve().parts + + +def is_ascii(s=''): + # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7) + s = str(s) # convert list, tuple, None, etc. to str + return len(s.encode().decode('ascii', 'ignore')) == len(s) + + +def is_chinese(s='äēēåˇĨæ™ēčƒŊ'): + # Is string composed of any Chinese characters? + return re.search('[\u4e00-\u9fff]', s) + + +def emojis(str=''): + # Return platform-dependent emoji-safe version of string + return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str def file_size(path): # Return file/dir size (MB) - mb = 1 << 20 # bytes to MiB (1024 ** 2) path = Path(path) if path.is_file(): - return path.stat().st_size / mb + return path.stat().st_size / 1E6 elif path.is_dir(): - return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / mb + return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / 1E6 else: return 0.0 @@ -295,123 +214,84 @@ def file_size(path): def check_online(): # Check internet connectivity import socket - - def run_once(): - # Check once - try: - socket.create_connection(('1.1.1.1', 443), 5) # check host accessibility - return True - except OSError: - return False - - return run_once() or run_once() # check twice to increase robustness to intermittent connectivity issues - - -def git_describe(path=ROOT): # path must be a directory - # Return human-readable git description, i.e. 
v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe try: - assert (Path(path) / '.git').is_dir() - return check_output(f'git -C {path} describe --tags --long --always', shell=True).decode()[:-1] - except Exception: - return '' + socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility + return True + except OSError: + return False -@TryExcept() +@try_except @WorkingDirectory(ROOT) -def check_git_status(repo='ultralytics/yolov5', branch='master'): - # YOLOv3 status check, recommend 'git pull' if code is out of date - url = f'https://github.com/{repo}' - msg = f', for updates see {url}' - s = colorstr('github: ') # string - assert Path('.git').exists(), s + 'skipping check (not a git repository)' + msg - assert check_online(), s + 'skipping check (offline)' + msg +def check_git_status(): + # Recommend 'git pull' if code is out of date + msg = ', for updates see https://github.com/ultralytics/yolov3' + print(colorstr('github: '), end='') + assert Path('.git').exists(), 'skipping check (not a git repository)' + msg + assert not is_docker(), 'skipping check (Docker image)' + msg + assert check_online(), 'skipping check (offline)' + msg - splits = re.split(pattern=r'\s', string=check_output('git remote -v', shell=True).decode()) - matches = [repo in s for s in splits] - if any(matches): - remote = splits[matches.index(True) - 1] - else: - remote = 'ultralytics' - check_output(f'git remote add {remote} {url}', shell=True) - check_output(f'git fetch {remote}', shell=True, timeout=5) # git fetch - local_branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out - n = int(check_output(f'git rev-list {local_branch}..{remote}/{branch} --count', shell=True)) # commits behind + cmd = 'git fetch && git config --get remote.origin.url' + url = check_output(cmd, shell=True, timeout=5).decode().strip().rstrip('.git') # git fetch + branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out + n = int(check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind if n > 0: - pull = 'git pull' if remote == 'origin' else f'git pull {remote} {branch}' - s += f"âš ī¸ YOLOv3 is out of date by {n} commit{'s' * (n > 1)}. Use `{pull}` or `git clone {url}` to update." + s = f"âš ī¸ YOLOv3 is out of date by {n} commit{'s' * (n > 1)}. Use `git pull` or `git clone {url}` to update." else: - s += f'up to date with {url} ✅' - LOGGER.info(s) + s = f'up to date with {url} ✅' + print(emojis(s)) # emoji-safe -@WorkingDirectory(ROOT) -def check_git_info(path='.'): - # YOLOv3 git info check, return {remote, branch, commit} - check_requirements('gitpython') - import git - try: - repo = git.Repo(path) - remote = repo.remotes.origin.url.replace('.git', '') # i.e. 'https://github.com/ultralytics/yolov5' - commit = repo.head.commit.hexsha # i.e. '3134699c73af83aac2a481435550b968d5792c0d' - try: - branch = repo.active_branch.name # i.e. 'main' - except TypeError: # not on any branch - branch = None # i.e. 'detached HEAD' state - return {'remote': remote, 'branch': branch, 'commit': commit} - except git.exc.InvalidGitRepositoryError: # path is not a git dir - return {'remote': None, 'branch': None, 'commit': None} - - -def check_python(minimum='3.7.0'): +def check_python(minimum='3.6.2'): # Check current python version vs. 
required python version check_version(platform.python_version(), minimum, name='Python ', hard=True) -def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False, verbose=False): +def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False): # Check version vs. required version current, minimum = (pkg.parse_version(x) for x in (current, minimum)) result = (current == minimum) if pinned else (current >= minimum) # bool - s = f'WARNING âš ī¸ {name}{minimum} is required by YOLOv3, but {name}{current} is currently installed' # string - if hard: - assert result, emojis(s) # assert min requirements met - if verbose and not result: - LOGGER.warning(s) - return result + if hard: # assert min requirements met + assert result, f'{name}{minimum} required by YOLOv3, but {name}{current} is currently installed' + else: + return result -@TryExcept() -def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True, cmds=''): - # Check installed dependencies meet YOLOv3 requirements (pass *.txt file or list of packages or single package str) +@try_except +def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True): + # Check installed dependencies meet requirements (pass *.txt file or list of packages) prefix = colorstr('red', 'bold', 'requirements:') check_python() # check python version - if isinstance(requirements, Path): # requirements.txt file - file = requirements.resolve() - assert file.exists(), f'{prefix} {file} not found, check failed.' + if isinstance(requirements, (str, Path)): # requirements.txt file + file = Path(requirements) + assert file.exists(), f"{prefix} {file.resolve()} not found, check failed." with file.open() as f: requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(f) if x.name not in exclude] - elif isinstance(requirements, str): - requirements = [requirements] + else: # list or tuple of packages + requirements = [x for x in requirements if x not in exclude] - s = '' - n = 0 + n = 0 # number of packages updates for r in requirements: try: pkg.require(r) - except (pkg.VersionConflict, pkg.DistributionNotFound): # exception if requirements not met - s += f'"{r}" ' - n += 1 + except Exception as e: # DistributionNotFound or VersionConflict if requirements not met + s = f"{prefix} {r} not found and is required by YOLOv3" + if install: + print(f"{s}, attempting auto-update...") + try: + assert check_online(), f"'pip install {r}' skipped (offline)" + print(check_output(f"pip install '{r}'", shell=True).decode()) + n += 1 + except Exception as e: + print(f'{prefix} {e}') + else: + print(f'{s}. 
Please install and rerun your command.') - if s and install and AUTOINSTALL: # check environment variable - LOGGER.info(f"{prefix} YOLOv3 requirement{'s' * (n > 1)} {s}not found, attempting AutoUpdate...") - try: - # assert check_online(), "AutoUpdate skipped (offline)" - LOGGER.info(check_output(f'pip install {s} {cmds}', shell=True).decode()) - source = file if 'file' in locals() else requirements - s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ - f"{prefix} âš ī¸ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" - LOGGER.info(s) - except Exception as e: - LOGGER.warning(f'{prefix} ❌ {e}') + if n: # if packages updated + source = file.resolve() if 'file' in locals() else requirements + s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ + f"{prefix} âš ī¸ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" + print(emojis(s)) def check_img_size(imgsz, s=32, floor=0): @@ -419,30 +299,28 @@ def check_img_size(imgsz, s=32, floor=0): if isinstance(imgsz, int): # integer i.e. img_size=640 new_size = max(make_divisible(imgsz, int(s)), floor) else: # list i.e. img_size=[640, 480] - imgsz = list(imgsz) # convert to list if tuple new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz] if new_size != imgsz: - LOGGER.warning(f'WARNING âš ī¸ --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}') + print(f'WARNING: --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}') return new_size -def check_imshow(warn=False): +def check_imshow(): # Check if environment supports image displays try: - assert not is_notebook() - assert not is_docker() + assert not is_docker(), 'cv2.imshow() is disabled in Docker environments' + assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments' cv2.imshow('test', np.zeros((1, 1, 3))) cv2.waitKey(1) cv2.destroyAllWindows() cv2.waitKey(1) return True except Exception as e: - if warn: - LOGGER.warning(f'WARNING âš ī¸ Environment does not support cv2.imshow() or PIL Image.show()\n{e}') + print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') return False -def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''): +def check_suffix(file='yolov3.pt', suffix=('.pt',), msg=''): # Check file(s) for acceptable suffix if file and suffix: if isinstance(suffix, str): @@ -450,7 +328,7 @@ def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''): for f in file if isinstance(file, (list, tuple)) else [file]: s = Path(f).suffix.lower() # file suffix if len(s): - assert s in suffix, f'{msg}{f} acceptable suffix is {suffix}' + assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}" def check_yaml(file, suffix=('.yaml', '.yml')): @@ -462,21 +340,18 @@ def check_file(file, suffix=''): # Search/download file (if necessary) and return path check_suffix(file, suffix) # optional file = str(file) # convert to str() - if os.path.isfile(file) or not file: # exists + if Path(file).is_file() or file == '': # exists return file elif file.startswith(('http:/', 'https:/')): # download - url = file # warning: Pathlib turns :// -> :/ + url = str(Path(file).as_posix()).replace(':/', '://') # Pathlib turns :// -> :/ file = Path(urllib.parse.unquote(file).split('?')[0]).name # '%2F' to '/', split https://url.com/file.txt?auth - if os.path.isfile(file): - LOGGER.info(f'Found {url} locally at {file}') # file already exists + if Path(file).is_file(): + print(f'Found {url} 
locally at {file}') # file already exists else: - LOGGER.info(f'Downloading {url} to {file}...') + print(f'Downloading {url} to {file}...') torch.hub.download_url_to_file(url, file) assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}' # check return file - elif file.startswith('clearml://'): # ClearML Dataset ID - assert 'clearml' in sys.modules, "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'." - return file else: # search files = [] for d in 'data', 'models', 'utils': # search directories @@ -486,171 +361,84 @@ def check_file(file, suffix=''): return files[0] # return file -def check_font(font=FONT, progress=False): - # Download font to CONFIG_DIR if necessary - font = Path(font) - file = CONFIG_DIR / font.name - if not font.exists() and not file.exists(): - url = f'https://ultralytics.com/assets/{font.name}' - LOGGER.info(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file), progress=progress) - - def check_dataset(data, autodownload=True): - # Download, check and/or unzip dataset if not found locally + # Download and/or unzip dataset if not found locally + # Usage: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128_with_yaml.zip # Download (optional) extract_dir = '' - if isinstance(data, (str, Path)) and (is_zipfile(data) or is_tarfile(data)): - download(data, dir=f'{DATASETS_DIR}/{Path(data).stem}', unzip=True, delete=False, curl=False, threads=1) - data = next((DATASETS_DIR / Path(data).stem).rglob('*.yaml')) + if isinstance(data, (str, Path)) and str(data).endswith('.zip'): # i.e. gs://bucket/dir/coco128.zip + download(data, dir='../datasets', unzip=True, delete=False, curl=False, threads=1) + data = next((Path('../datasets') / Path(data).stem).rglob('*.yaml')) extract_dir, autodownload = data.parent, False # Read yaml (optional) if isinstance(data, (str, Path)): - data = yaml_load(data) # dictionary - - # Checks - for k in 'train', 'val', 'names': - assert k in data, emojis(f"data.yaml '{k}:' field missing ❌") - if isinstance(data['names'], (list, tuple)): # old array format - data['names'] = dict(enumerate(data['names'])) # convert to dict - assert all(isinstance(k, int) for k in data['names'].keys()), 'data.yaml names keys must be integers, i.e. 2: car' - data['nc'] = len(data['names']) - - # Resolve paths - path = Path(extract_dir or data.get('path') or '') # optional 'path' default to '.' - if not path.is_absolute(): - path = (ROOT / path).resolve() - data['path'] = path # download scripts - for k in 'train', 'val', 'test': - if data.get(k): # prepend path - if isinstance(data[k], str): - x = (path / data[k]).resolve() - if not x.exists() and data[k].startswith('../'): - x = (path / data[k][3:]).resolve() - data[k] = str(x) - else: - data[k] = [str((path / x).resolve()) for x in data[k]] + with open(data, errors='ignore') as f: + data = yaml.safe_load(f) # dictionary # Parse yaml + path = extract_dir or Path(data.get('path') or '') # optional 'path' default to '.' + for k in 'train', 'val', 'test': + if data.get(k): # prepend path + data[k] = str(path / data[k]) if isinstance(data[k], str) else [str(path / x) for x in data[k]] + + assert 'nc' in data, "Dataset 'nc' key missing." 
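+    # Illustrative data.yaml layout these checks assume (example values only, not part of this patch):
+    #   train: ../datasets/coco128/images/train2017   # training images
+    #   val: ../datasets/coco128/images/train2017     # validation images
+    #   nc: 80                                        # number of classes (required here)
+    #   names: ['person', 'bicycle', ...]             # optional; auto-filled as 'class0', 'class1', ... below
+    #   download: https://example.com/coco128.zip     # optional autodownload URL, bash or python snippet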
+ if 'names' not in data: + data['names'] = [f'class{i}' for i in range(data['nc'])] # assign class names if missing train, val, test, s = (data.get(x) for x in ('train', 'val', 'test', 'download')) if val: val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path if not all(x.exists() for x in val): - LOGGER.info('\nDataset not found âš ī¸, missing paths %s' % [str(x) for x in val if not x.exists()]) - if not s or not autodownload: - raise Exception('Dataset not found ❌') - t = time.time() - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - LOGGER.info(f'Downloading {s} to {f}...') - torch.hub.download_url_to_file(s, f) - Path(DATASETS_DIR).mkdir(parents=True, exist_ok=True) # create root - unzip_file(f, path=DATASETS_DIR) # unzip - Path(f).unlink() # remove zip - r = None # success - elif s.startswith('bash '): # bash script - LOGGER.info(f'Running {s} ...') - r = subprocess.run(s, shell=True) - else: # python script - r = exec(s, {'yaml': data}) # return None - dt = f'({round(time.time() - t, 1)}s)' - s = f"success ✅ {dt}, saved to {colorstr('bold', DATASETS_DIR)}" if r in (0, None) else f'failure {dt} ❌' - LOGGER.info(f'Dataset download {s}') - check_font('Arial.ttf' if is_ascii(data['names']) else 'Arial.Unicode.ttf', progress=True) # download fonts + print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) + if s and autodownload: # download script + root = path.parent if 'path' in data else '..' # unzip directory i.e. '../' + if s.startswith('http') and s.endswith('.zip'): # URL + f = Path(s).name # filename + print(f'Downloading {s} to {f}...') + torch.hub.download_url_to_file(s, f) + Path(root).mkdir(parents=True, exist_ok=True) # create root + ZipFile(f).extractall(path=root) # unzip + Path(f).unlink() # remove zip + r = None # success + elif s.startswith('bash '): # bash script + print(f'Running {s} ...') + r = os.system(s) + else: # python script + r = exec(s, {'yaml': data}) # return None + print(f"Dataset autodownload {f'success, saved to {root}' if r in (0, None) else 'failure'}\n") + else: + raise Exception('Dataset not found.') + return data # dictionary -def check_amp(model): - # Check PyTorch Automatic Mixed Precision (AMP) functionality. Return True on correct operation - from models.common import AutoShape, DetectMultiBackend - - def amp_allclose(model, im): - # All close FP32 vs AMP results - m = AutoShape(model, verbose=False) # model - a = m(im).xywhn[0] # FP32 inference - m.amp = True - b = m(im).xywhn[0] # AMP inference - return a.shape == b.shape and torch.allclose(a, b, atol=0.1) # close to 10% absolute tolerance - - prefix = colorstr('AMP: ') - device = next(model.parameters()).device # get model device - if device.type in ('cpu', 'mps'): - return False # AMP only used on CUDA devices - f = ROOT / 'data' / 'images' / 'bus.jpg' # image to check - im = f if f.exists() else 'https://ultralytics.com/images/bus.jpg' if check_online() else np.ones((640, 640, 3)) - try: - assert amp_allclose(deepcopy(model), im) or amp_allclose(DetectMultiBackend('yolov5n.pt', device), im) - LOGGER.info(f'{prefix}checks passed ✅') - return True - except Exception: - help_url = 'https://github.com/ultralytics/yolov5/issues/7908' - LOGGER.warning(f'{prefix}checks failed ❌, disabling Automatic Mixed Precision. 
See {help_url}') - return False - - -def yaml_load(file='data.yaml'): - # Single-line safe yaml loading - with open(file, errors='ignore') as f: - return yaml.safe_load(f) - - -def yaml_save(file='data.yaml', data={}): - # Single-line safe yaml saving - with open(file, 'w') as f: - yaml.safe_dump({k: str(v) if isinstance(v, Path) else v for k, v in data.items()}, f, sort_keys=False) - - -def unzip_file(file, path=None, exclude=('.DS_Store', '__MACOSX')): - # Unzip a *.zip file to path/, excluding files containing strings in exclude list - if path is None: - path = Path(file).parent # default path - with ZipFile(file) as zipObj: - for f in zipObj.namelist(): # list all archived filenames in the zip - if all(x not in f for x in exclude): - zipObj.extract(f, path=path) - - def url2file(url): # Convert URL to filename, i.e. https://url.com/file.txt?auth -> file.txt url = str(Path(url)).replace(':/', '://') # Pathlib turns :// -> :/ - return Path(urllib.parse.unquote(url)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth + file = Path(urllib.parse.unquote(url)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth + return file -def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry=3): - # Multithreaded file download and unzip function, used in data.yaml for autodownload +def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1): + # Multi-threaded file download and unzip function, used in data.yaml for autodownload def download_one(url, dir): # Download 1 file - success = True - if os.path.isfile(url): - f = Path(url) # filename - else: # does not exist - f = dir / Path(url).name - LOGGER.info(f'Downloading {url} to {f}...') - for i in range(retry + 1): - if curl: - s = 'sS' if threads > 1 else '' # silent - r = subprocess.run(f'curl -# -{s}L "{url}" -o "{f}" --retry 9 -C -'.split()) - success = r == 0 - else: - torch.hub.download_url_to_file(url, f, progress=threads == 1) # torch download - success = f.is_file() - if success: - break - elif i < retry: - LOGGER.warning(f'âš ī¸ Download failure, retrying {i + 1}/{retry} {url}...') - else: - LOGGER.warning(f'❌ Failed to download {url}...') - - if unzip and success and (f.suffix == '.gz' or is_zipfile(f) or is_tarfile(f)): - LOGGER.info(f'Unzipping {f}...') - if is_zipfile(f): - unzip_file(f, dir) # unzip - elif is_tarfile(f): - subprocess.run(['tar', 'xf', f, '--directory', f.parent], check=True) # unzip + f = dir / Path(url).name # filename + if Path(url).is_file(): # exists in current path + Path(url).rename(f) # move to dir + elif not f.exists(): + print(f'Downloading {url} to {f}...') + if curl: + os.system(f"curl -L '{url}' -o '{f}' --retry 9 -C -") # curl download, retry and resume on fail + else: + torch.hub.download_url_to_file(url, f, progress=True) # torch download + if unzip and f.suffix in ('.zip', '.gz'): + print(f'Unzipping {f}...') + if f.suffix == '.zip': + ZipFile(f).extractall(path=dir) # unzip elif f.suffix == '.gz': - subprocess.run(['tar', 'xfz', f, '--directory', f.parent], check=True) # unzip + os.system(f'tar xfz {f} --directory {f.parent}') # unzip if delete: f.unlink() # remove zip @@ -658,7 +446,7 @@ def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry dir.mkdir(parents=True, exist_ok=True) # make directory if threads > 1: pool = ThreadPool(threads) - pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multithreaded + pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multi-threaded 
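+        # imap submits one download_one(url, dir) task per URL to the worker threads; close() below
+        # blocks further submissions and join() waits for the queued downloads to finish.
+        # Usage sketch (URLs and target dir are placeholders only):
+        #   download(['https://example.com/a.zip', 'https://example.com/b.zip'], dir='datasets', threads=2)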
pool.close() pool.join() else: @@ -667,15 +455,13 @@ def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry def make_divisible(x, divisor): - # Returns nearest x divisible by divisor - if isinstance(divisor, torch.Tensor): - divisor = int(divisor.max()) # to int + # Returns x evenly divisible by divisor return math.ceil(x / divisor) * divisor def clean_str(s): # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern='[|@#!¥¡$â‚Ŧ%&()=?Âŋ^*;:,¨´><+]', repl='_', string=s) + return re.sub(pattern="[|@#!¥¡$â‚Ŧ%&()=?Âŋ^*;:,¨´><+]", repl="_", string=s) def one_cycle(y1=0.0, y2=1.0, steps=100): @@ -686,26 +472,25 @@ def one_cycle(y1=0.0, y2=1.0, steps=100): def colorstr(*input): # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world') *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = { - 'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} + colors = {'black': '\033[30m', # basic colors + 'red': '\033[31m', + 'green': '\033[32m', + 'yellow': '\033[33m', + 'blue': '\033[34m', + 'magenta': '\033[35m', + 'cyan': '\033[36m', + 'white': '\033[37m', + 'bright_black': '\033[90m', # bright colors + 'bright_red': '\033[91m', + 'bright_green': '\033[92m', + 'bright_yellow': '\033[93m', + 'bright_blue': '\033[94m', + 'bright_magenta': '\033[95m', + 'bright_cyan': '\033[96m', + 'bright_white': '\033[97m', + 'end': '\033[0m', # misc + 'bold': '\033[1m', + 'underline': '\033[4m'} return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] @@ -715,7 +500,7 @@ def labels_to_class_weights(labels, nc=80): return torch.Tensor() labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(int) # labels = [class xywh] + classes = labels[:, 0].astype(np.int) # labels = [class xywh] weights = np.bincount(classes, minlength=nc) # occurrences per class # Prepend gridpoint count (for uCE training) @@ -725,14 +510,15 @@ def labels_to_class_weights(labels, nc=80): weights[weights == 0] = 1 # replace empty bins with 1 weights = 1 / weights # number of targets per class weights /= weights.sum() # normalize - return torch.from_numpy(weights).float() + return torch.from_numpy(weights) def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): # Produces image weights based on class_weights and image contents - # Usage: index = random.choices(range(n), weights=image_weights, k=1) # weighted image sample - class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels]) - return (class_weights.reshape(1, nc) * class_counts).sum(1) + class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) + image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) + # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample + return image_weights def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) @@ -741,59 +527,59 @@ def 
coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - return [ - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] + x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, + 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, + 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] + return x def xyxy2xywh(x): # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[..., 0] = (x[..., 0] + x[..., 2]) / 2 # x center - y[..., 1] = (x[..., 1] + x[..., 3]) / 2 # y center - y[..., 2] = x[..., 2] - x[..., 0] # width - y[..., 3] = x[..., 3] - x[..., 1] # height + y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center + y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center + y[:, 2] = x[:, 2] - x[:, 0] # width + y[:, 3] = x[:, 3] - x[:, 1] # height return y def xywh2xyxy(x): # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[..., 0] = x[..., 0] - x[..., 2] / 2 # top left x - y[..., 1] = x[..., 1] - x[..., 3] / 2 # top left y - y[..., 2] = x[..., 0] + x[..., 2] / 2 # bottom right x - y[..., 3] = x[..., 1] + x[..., 3] / 2 # bottom right y + y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x + y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y + y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x + y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y return y def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[..., 0] = w * (x[..., 0] - x[..., 2] / 2) + padw # top left x - y[..., 1] = h * (x[..., 1] - x[..., 3] / 2) + padh # top left y - y[..., 2] = w * (x[..., 0] + x[..., 2] / 2) + padw # bottom right x - y[..., 3] = h * (x[..., 1] + x[..., 3] / 2) + padh # bottom right y + y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x + y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y + y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x + y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y return y def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0): # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right if clip: - clip_boxes(x, (h - eps, w - eps)) # warning: inplace clip + clip_coords(x, (h - eps, w - eps)) # warning: inplace clip y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[..., 0] = ((x[..., 0] + x[..., 2]) / 2) / w # x center - y[..., 1] = ((x[..., 1] + x[..., 3]) / 2) / h # y center - y[..., 2] = (x[..., 2] - x[..., 0]) / w # width - y[..., 3] = (x[..., 3] - x[..., 1]) / h # height + y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w # x center + y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h # y 
center + y[:, 2] = (x[:, 2] - x[:, 0]) / w # width + y[:, 3] = (x[:, 3] - x[:, 1]) / h # height return y def xyn2xy(x, w=640, h=640, padw=0, padh=0): # Convert normalized segments into pixel segments, shape (n,2) y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[..., 0] = w * x[..., 0] + padw # top left x - y[..., 1] = h * x[..., 1] + padh # top left y + y[:, 0] = w * x[:, 0] + padw # top left x + y[:, 1] = h * x[:, 1] + padh # top left y return y @@ -817,30 +603,13 @@ def segments2boxes(segments): def resample_segments(segments, n=1000): # Up-sample an (n,2) segment for i, s in enumerate(segments): - s = np.concatenate((s, s[0:1, :]), axis=0) x = np.linspace(0, len(s) - 1, n) xp = np.arange(len(s)) segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy return segments -def scale_boxes(img1_shape, boxes, img0_shape, ratio_pad=None): - # Rescale boxes (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - boxes[..., [0, 2]] -= pad[0] # x padding - boxes[..., [1, 3]] -= pad[1] # y padding - boxes[..., :4] /= gain - clip_boxes(boxes, img0_shape) - return boxes - - -def scale_segments(img1_shape, segments, img0_shape, ratio_pad=None, normalize=False): +def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): # Rescale coords (xyxy) from img1_shape to img0_shape if ratio_pad is None: # calculate from img0_shape gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new @@ -849,81 +618,50 @@ def scale_segments(img1_shape, segments, img0_shape, ratio_pad=None, normalize=F gain = ratio_pad[0][0] pad = ratio_pad[1] - segments[:, 0] -= pad[0] # x padding - segments[:, 1] -= pad[1] # y padding - segments /= gain - clip_segments(segments, img0_shape) - if normalize: - segments[:, 0] /= img0_shape[1] # width - segments[:, 1] /= img0_shape[0] # height - return segments + coords[:, [0, 2]] -= pad[0] # x padding + coords[:, [1, 3]] -= pad[1] # y padding + coords[:, :4] /= gain + clip_coords(coords, img0_shape) + return coords -def clip_boxes(boxes, shape): - # Clip boxes (xyxy) to image shape (height, width) +def clip_coords(boxes, shape): + # Clip bounding xyxy bounding boxes to image shape (height, width) if isinstance(boxes, torch.Tensor): # faster individually - boxes[..., 0].clamp_(0, shape[1]) # x1 - boxes[..., 1].clamp_(0, shape[0]) # y1 - boxes[..., 2].clamp_(0, shape[1]) # x2 - boxes[..., 3].clamp_(0, shape[0]) # y2 + boxes[:, 0].clamp_(0, shape[1]) # x1 + boxes[:, 1].clamp_(0, shape[0]) # y1 + boxes[:, 2].clamp_(0, shape[1]) # x2 + boxes[:, 3].clamp_(0, shape[0]) # y2 else: # np.array (faster grouped) - boxes[..., [0, 2]] = boxes[..., [0, 2]].clip(0, shape[1]) # x1, x2 - boxes[..., [1, 3]] = boxes[..., [1, 3]].clip(0, shape[0]) # y1, y2 + boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1]) # x1, x2 + boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0]) # y1, y2 -def clip_segments(segments, shape): - # Clip segments (xy1,xy2,...) 
to image shape (height, width) - if isinstance(segments, torch.Tensor): # faster individually - segments[:, 0].clamp_(0, shape[1]) # x - segments[:, 1].clamp_(0, shape[0]) # y - else: # np.array (faster grouped) - segments[:, 0] = segments[:, 0].clip(0, shape[1]) # x - segments[:, 1] = segments[:, 1].clip(0, shape[0]) # y - - -def non_max_suppression( - prediction, - conf_thres=0.25, - iou_thres=0.45, - classes=None, - agnostic=False, - multi_label=False, - labels=(), - max_det=300, - nm=0, # number of masks -): - """Non-Maximum Suppression (NMS) on inference results to reject overlapping detections +def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, + labels=(), max_det=300): + """Runs Non-Maximum Suppression (NMS) on inference results Returns: list of detections, on (n,6) tensor per image [xyxy, conf, cls] """ + nc = prediction.shape[2] - 5 # number of classes + xc = prediction[..., 4] > conf_thres # candidates + # Checks assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0' assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0' - if isinstance(prediction, (list, tuple)): # YOLOv3 model in validation model, output = (inference_out, loss_out) - prediction = prediction[0] # select only inference output - - device = prediction.device - mps = 'mps' in device.type # Apple MPS - if mps: # MPS not fully supported yet, convert tensors to CPU before NMS - prediction = prediction.cpu() - bs = prediction.shape[0] # batch size - nc = prediction.shape[2] - nm - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates # Settings - # min_wh = 2 # (pixels) minimum box width and height - max_wh = 7680 # (pixels) maximum box width and height + min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 0.5 + 0.05 * bs # seconds to quit after + time_limit = 10.0 # seconds to quit after redundant = True # require redundant detections multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) merge = False # use merge-NMS t = time.time() - mi = 5 + nc # mask start index - output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs + output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] for xi, x in enumerate(prediction): # image index, image inference # Apply constraints # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height @@ -931,11 +669,11 @@ def non_max_suppression( # Cat apriori labels if autolabelling if labels and len(labels[xi]): - lb = labels[xi] - v = torch.zeros((len(lb), nc + nm + 5), device=x.device) - v[:, :4] = lb[:, 1:5] # box + l = labels[xi] + v = torch.zeros((len(l), nc + 5), device=x.device) + v[:, :4] = l[:, 1:5] # box v[:, 4] = 1.0 # conf - v[range(len(lb)), lb[:, 0].long() + 5] = 1.0 # cls + v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls x = torch.cat((x, v), 0) # If none remain process next image @@ -945,17 +683,16 @@ def non_max_suppression( # Compute conf x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - # Box/Mask - box = xywh2xyxy(x[:, :4]) # center_x, center_y, width, height) to (x1, y1, x2, y2) - mask = x[:, mi:] # zero columns if no masks + # Box (center x, center y, width, height) to (x1, y1, x2, y2) + box = xywh2xyxy(x[:, :4]) # Detections matrix nx6 (xyxy, conf, cls) if multi_label: - i, j = (x[:, 5:mi] > 
conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, 5 + j, None], j[:, None].float(), mask[i]), 1) + i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T + x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) else: # best class only - conf, j = x[:, 5:mi].max(1, keepdim=True) - x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres] + conf, j = x[:, 5:].max(1, keepdim=True) + x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] # Filter by class if classes is not None: @@ -969,13 +706,15 @@ def non_max_suppression( n = x.shape[0] # number of boxes if not n: # no boxes continue - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence and remove excess boxes + elif n > max_nms: # excess boxes + x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence # Batched NMS c = x[:, 5:6] * (0 if agnostic else max_wh) # classes boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - i = i[:max_det] # limit detections + if i.shape[0] > max_det: # limit detections + i = i[:max_det] if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix @@ -985,10 +724,8 @@ def non_max_suppression( i = i[iou.sum(1) > 1] # require redundancy output[xi] = x[i] - if mps: - output[xi] = output[xi].to(device) if (time.time() - t) > time_limit: - LOGGER.warning(f'WARNING âš ī¸ NMS time limit {time_limit:.3f}s exceeded') + print(f'WARNING: NMS time limit {time_limit}s exceeded') break # time limit exceeded return output @@ -999,7 +736,7 @@ def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_op x = torch.load(f, map_location=torch.device('cpu')) if x.get('ema'): x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'best_fitness', 'ema', 'updates': # keys + for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys x[k] = None x['epoch'] = -1 x['model'].half() # to FP16 @@ -1007,13 +744,13 @@ def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_op p.requires_grad = False torch.save(x, s or f) mb = os.path.getsize(s or f) / 1E6 # filesize - LOGGER.info(f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB") + print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") -def print_mutation(keys, results, hyp, save_dir, bucket, prefix=colorstr('evolve: ')): - evolve_csv = save_dir / 'evolve.csv' - evolve_yaml = save_dir / 'hyp_evolve.yaml' - keys = tuple(keys) + tuple(hyp.keys()) # [results + hyps] +def print_mutation(results, hyp, save_dir, bucket): + evolve_csv, results_csv, evolve_yaml = save_dir / 'evolve.csv', save_dir / 'results.csv', save_dir / 'hyp_evolve.yaml' + keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', + 'val/box_loss', 'val/obj_loss', 'val/cls_loss') + tuple(hyp.keys()) # [results + hyps] keys = tuple(x.strip() for x in keys) vals = results + tuple(hyp.values()) n = len(keys) @@ -1021,32 +758,32 @@ def print_mutation(keys, results, hyp, save_dir, bucket, prefix=colorstr('evolve # Download (optional) if bucket: url = f'gs://{bucket}/evolve.csv' - if gsutil_getsize(url) > (evolve_csv.stat().st_size if evolve_csv.exists() else 0): - subprocess.run(['gsutil', 'cp', f'{url}', f'{save_dir}']) # download evolve.csv if larger than local + if 
gsutil_getsize(url) > (os.path.getsize(evolve_csv) if os.path.exists(evolve_csv) else 0): + os.system(f'gsutil cp {url} {save_dir}') # download evolve.csv if larger than local # Log to evolve.csv s = '' if evolve_csv.exists() else (('%20s,' * n % keys).rstrip(',') + '\n') # add header with open(evolve_csv, 'a') as f: f.write(s + ('%20.5g,' * n % vals).rstrip(',') + '\n') + # Print to screen + print(colorstr('evolve: ') + ', '.join(f'{x.strip():>20s}' for x in keys)) + print(colorstr('evolve: ') + ', '.join(f'{x:20.5g}' for x in vals), end='\n\n\n') + # Save yaml with open(evolve_yaml, 'w') as f: - data = pd.read_csv(evolve_csv, skipinitialspace=True) + data = pd.read_csv(evolve_csv) data = data.rename(columns=lambda x: x.strip()) # strip keys - i = np.argmax(fitness(data.values[:, :4])) # - generations = len(data) - f.write('# YOLOv3 Hyperparameter Evolution Results\n' + f'# Best generation: {i}\n' + - f'# Last generation: {generations - 1}\n' + '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) + - '\n' + '# ' + ', '.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\n\n') - yaml.safe_dump(data.loc[i][7:].to_dict(), f, sort_keys=False) - - # Print to screen - LOGGER.info(prefix + f'{generations} generations finished, current result:\n' + prefix + - ', '.join(f'{x.strip():>20s}' for x in keys) + '\n' + prefix + ', '.join(f'{x:20.5g}' - for x in vals) + '\n\n') + i = np.argmax(fitness(data.values[:, :7])) # + f.write('# YOLOv3 Hyperparameter Evolution Results\n' + + f'# Best generation: {i}\n' + + f'# Last generation: {len(data)}\n' + + '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) + '\n' + + '# ' + ', '.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\n\n') + yaml.safe_dump(hyp, f, sort_keys=False) if bucket: - subprocess.run(['gsutil', 'cp', f'{evolve_csv}', f'{evolve_yaml}', f'gs://{bucket}']) # upload + os.system(f'gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}') # upload def apply_classifier(x, model, img, im0): @@ -1064,14 +801,15 @@ def apply_classifier(x, model, img, im0): d[:, :4] = xywh2xyxy(b).long() # Rescale boxes from img_size to im0 size - scale_boxes(img.shape[2:], d[:, :4], im0[i].shape) + scale_coords(img.shape[2:], d[:, :4], im0[i].shape) # Classes pred_cls1 = d[:, 5].long() ims = [] - for a in d: + for j, a in enumerate(d): # per item cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] im = cv2.resize(cutout, (224, 224)) # BGR + # cv2.imwrite('example%i.jpg' % j, cutout) im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 @@ -1089,47 +827,15 @@ def increment_path(path, exist_ok=False, sep='', mkdir=False): path = Path(path) # os-agnostic if path.exists() and not exist_ok: path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '') - - # Method 1 - for n in range(2, 9999): - p = f'{path}{sep}{n}{suffix}' # increment path - if not os.path.exists(p): # - break - path = Path(p) - - # Method 2 (deprecated) - # dirs = glob.glob(f"{path}{sep}*") # similar paths - # matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs] - # i = [int(m.groups()[0]) for m in matches if m] # indices - # n = max(i) + 1 if i else 2 # increment number - # path = Path(f"{path}{sep}{n}{suffix}") # increment path - + dirs = glob.glob(f"{path}{sep}*") # similar paths + matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] + i = [int(m.groups()[0]) for m in matches if m] # indices + n = max(i) + 1 if i else 2 # increment number + path = 
Path(f"{path}{sep}{n}{suffix}") # increment path if mkdir: path.mkdir(parents=True, exist_ok=True) # make directory - return path -# OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------------ -imshow_ = cv2.imshow # copy to avoid recursion errors - - -def imread(path, flags=cv2.IMREAD_COLOR): - return cv2.imdecode(np.fromfile(path, np.uint8), flags) - - -def imwrite(path, im): - try: - cv2.imencode(Path(path).suffix, im)[1].tofile(path) - return True - except Exception: - return False - - -def imshow(path, im): - imshow_(path.encode('unicode_escape').decode(), im) - - -cv2.imread, cv2.imwrite, cv2.imshow = imread, imwrite, imshow # redefine - -# Variables ------------------------------------------------------------------------------------------------------------ +# Variables +NCOLS = 0 if is_docker() else shutil.get_terminal_size().columns # terminal window size diff --git a/yolov3/utils/loggers/__init__.py b/yolov3/utils/loggers/__init__.py index 82e4458a..a234ce2c 100644 --- a/yolov3/utils/loggers/__init__.py +++ b/yolov3/utils/loggers/__init__.py @@ -5,26 +5,25 @@ Logging utils import os import warnings -from pathlib import Path +from threading import Thread import pkg_resources as pkg import torch from torch.utils.tensorboard import SummaryWriter -from utils.general import LOGGER, colorstr, cv2 -from utils.loggers.clearml.clearml_utils import ClearmlLogger +from utils.general import colorstr, emojis from utils.loggers.wandb.wandb_utils import WandbLogger -from utils.plots import plot_images, plot_labels, plot_results +from utils.plots import plot_images, plot_results from utils.torch_utils import de_parallel -LOGGERS = ('csv', 'tb', 'wandb', 'clearml', 'comet') # *.csv, TensorBoard, Weights & Biases, ClearML +LOGGERS = ('csv', 'tb', 'wandb') # text-file, TensorBoard, Weights & Biases RANK = int(os.getenv('RANK', -1)) try: import wandb assert hasattr(wandb, '__version__') # verify package import not local dir - if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in {0, -1}: + if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in [0, -1]: try: wandb_login_success = wandb.login(timeout=30) except wandb.errors.UsageError: # known non-TTY terminal issue @@ -34,64 +33,30 @@ try: except (ImportError, AssertionError): wandb = None -try: - import clearml - - assert hasattr(clearml, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - clearml = None - -try: - if RANK not in [0, -1]: - comet_ml = None - else: - import comet_ml - - assert hasattr(comet_ml, '__version__') # verify package import not local dir - from utils.loggers.comet import CometLogger - -except (ModuleNotFoundError, ImportError, AssertionError): - comet_ml = None - class Loggers(): - # YOLOv3 Loggers class + # Loggers class def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS): self.save_dir = save_dir self.weights = weights self.opt = opt self.hyp = hyp - self.plots = not opt.noplots # plot results self.logger = logger # for printing results to console self.include = include - self.keys = [ - 'train/box_loss', - 'train/obj_loss', - 'train/cls_loss', # train loss - 'metrics/precision', - 'metrics/recall', - 'metrics/mAP_0.5', - 'metrics/mAP_0.5:0.95', # metrics - 'val/box_loss', - 'val/obj_loss', - 'val/cls_loss', # val loss - 'x/lr0', - 'x/lr1', - 'x/lr2'] # params - self.best_keys = ['best/epoch', 'best/precision', 
'best/recall', 'best/mAP_0.5', 'best/mAP_0.5:0.95'] + self.keys = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss + 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', # metrics + 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss + 'x/lr0', 'x/lr1', 'x/lr2'] # params for k in LOGGERS: setattr(self, k, None) # init empty logger dictionary self.csv = True # always log to csv - # Messages - if not clearml: - prefix = colorstr('ClearML: ') - s = f"{prefix}run 'pip install clearml' to automatically track, visualize and remotely train YOLOv3 🚀 in ClearML" - self.logger.info(s) - if not comet_ml: - prefix = colorstr('Comet: ') - s = f"{prefix}run 'pip install comet_ml' to automatically track and visualize YOLOv3 🚀 runs in Comet" - self.logger.info(s) + # Message + if not wandb: + prefix = colorstr('Weights & Biases: ') + s = f"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv3 🚀 runs (RECOMMENDED)" + print(emojis(s)) + # TensorBoard s = self.save_dir if 'tb' in self.include and not self.opt.evolve: @@ -101,127 +66,53 @@ class Loggers(): # W&B if wandb and 'wandb' in self.include: + wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://') + run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None self.opt.hyp = self.hyp # add hyperparameters - self.wandb = WandbLogger(self.opt) + self.wandb = WandbLogger(self.opt, run_id) else: self.wandb = None - # ClearML - if clearml and 'clearml' in self.include: - try: - self.clearml = ClearmlLogger(self.opt, self.hyp) - except Exception: - self.clearml = None - prefix = colorstr('ClearML: ') - LOGGER.warning(f'{prefix}WARNING âš ī¸ ClearML is installed but not configured, skipping ClearML logging.' 
- f' See https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml#readme') - - else: - self.clearml = None - - # Comet - if comet_ml and 'comet' in self.include: - if isinstance(self.opt.resume, str) and self.opt.resume.startswith('comet://'): - run_id = self.opt.resume.split('/')[-1] - self.comet_logger = CometLogger(self.opt, self.hyp, run_id=run_id) - - else: - self.comet_logger = CometLogger(self.opt, self.hyp) - - else: - self.comet_logger = None - - @property - def remote_dataset(self): - # Get data_dict if custom dataset artifact link is provided - data_dict = None - if self.clearml: - data_dict = self.clearml.data_dict - if self.wandb: - data_dict = self.wandb.data_dict - if self.comet_logger: - data_dict = self.comet_logger.data_dict - - return data_dict - - def on_train_start(self): - if self.comet_logger: - self.comet_logger.on_train_start() - - def on_pretrain_routine_start(self): - if self.comet_logger: - self.comet_logger.on_pretrain_routine_start() - - def on_pretrain_routine_end(self, labels, names): + def on_pretrain_routine_end(self): # Callback runs on pre-train routine end - if self.plots: - plot_labels(labels, names, self.save_dir) - paths = self.save_dir.glob('*labels*.jpg') # training labels - if self.wandb: - self.wandb.log({'Labels': [wandb.Image(str(x), caption=x.name) for x in paths]}) - # if self.clearml: - # pass # ClearML saves these images automatically using hooks - if self.comet_logger: - self.comet_logger.on_pretrain_routine_end(paths) + paths = self.save_dir.glob('*labels*.jpg') # training labels + if self.wandb: + self.wandb.log({"Labels": [wandb.Image(str(x), caption=x.name) for x in paths]}) - def on_train_batch_end(self, model, ni, imgs, targets, paths, vals): - log_dict = dict(zip(self.keys[:3], vals)) + def on_train_batch_end(self, ni, model, imgs, targets, paths, plots, sync_bn): # Callback runs on train batch end - # ni: number integrated batches (since train start) - if self.plots: + if plots: + if ni == 0: + if not sync_bn: # tb.add_graph() --sync known issue https://github.com/ultralytics/yolov5/issues/3754 + with warnings.catch_warnings(): + warnings.simplefilter('ignore') # suppress jit trace warning + self.tb.add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), []) if ni < 3: f = self.save_dir / f'train_batch{ni}.jpg' # filename - plot_images(imgs, targets, paths, f) - if ni == 0 and self.tb and not self.opt.sync_bn: - log_tensorboard_graph(self.tb, model, imgsz=(self.opt.imgsz, self.opt.imgsz)) - if ni == 10 and (self.wandb or self.clearml): + Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start() + if self.wandb and ni == 10: files = sorted(self.save_dir.glob('train*.jpg')) - if self.wandb: - self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]}) - if self.clearml: - self.clearml.log_debug_samples(files, title='Mosaics') - - if self.comet_logger: - self.comet_logger.on_train_batch_end(log_dict, step=ni) + self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]}) def on_train_epoch_end(self, epoch): # Callback runs on train epoch end if self.wandb: self.wandb.current_epoch = epoch + 1 - if self.comet_logger: - self.comet_logger.on_train_epoch_end(epoch) - - def on_val_start(self): - if self.comet_logger: - self.comet_logger.on_val_start() - def on_val_image_end(self, pred, predn, path, names, im): # Callback runs on val image end if self.wandb: self.wandb.val_one_image(pred, predn, path, names, im) - if 
self.clearml: - self.clearml.log_image_with_boxes(path, pred, names, im) - def on_val_batch_end(self, batch_i, im, targets, paths, shapes, out): - if self.comet_logger: - self.comet_logger.on_val_batch_end(batch_i, im, targets, paths, shapes, out) - - def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix): + def on_val_end(self): # Callback runs on val end - if self.wandb or self.clearml: - files = sorted(self.save_dir.glob('val*.jpg')) if self.wandb: - self.wandb.log({'Validation': [wandb.Image(str(f), caption=f.name) for f in files]}) - if self.clearml: - self.clearml.log_debug_samples(files, title='Validation') - - if self.comet_logger: - self.comet_logger.on_val_end(nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix) + files = sorted(self.save_dir.glob('val*.jpg')) + self.wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in files]}) def on_fit_epoch_end(self, vals, epoch, best_fitness, fi): # Callback runs at the end of each fit (train+val) epoch - x = dict(zip(self.keys, vals)) + x = {k: v for k, v in zip(self.keys, vals)} # dict if self.csv: file = self.save_dir / 'results.csv' n = len(x) + 1 # number of cols @@ -232,170 +123,37 @@ class Loggers(): if self.tb: for k, v in x.items(): self.tb.add_scalar(k, v, epoch) - elif self.clearml: # log to ClearML if TensorBoard not used - for k, v in x.items(): - title, series = k.split('/') - self.clearml.task.get_logger().report_scalar(title, series, v, epoch) if self.wandb: - if best_fitness == fi: - best_results = [epoch] + vals[3:7] - for i, name in enumerate(self.best_keys): - self.wandb.wandb_run.summary[name] = best_results[i] # log best results in the summary self.wandb.log(x) - self.wandb.end_epoch() - - if self.clearml: - self.clearml.current_epoch_logged_images = set() # reset epoch image limit - self.clearml.current_epoch += 1 - - if self.comet_logger: - self.comet_logger.on_fit_epoch_end(x, epoch=epoch) + self.wandb.end_epoch(best_result=best_fitness == fi) def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): # Callback runs on model save event - if (epoch + 1) % self.opt.save_period == 0 and not final_epoch and self.opt.save_period != -1: - if self.wandb: + if self.wandb: + if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1: self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) - if self.clearml: - self.clearml.task.update_output_model(model_path=str(last), - model_name='Latest Model', - auto_delete_file=False) - if self.comet_logger: - self.comet_logger.on_model_save(last, epoch, final_epoch, best_fitness, fi) - - def on_train_end(self, last, best, epoch, results): - # Callback runs on training end, i.e. 
saving best model - if self.plots: + def on_train_end(self, last, best, plots, epoch, results): + # Callback runs on training end + if plots: plot_results(file=self.save_dir / 'results.csv') # save results.png files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter - self.logger.info(f"Results saved to {colorstr('bold', self.save_dir)}") - if self.tb and not self.clearml: # These images are already captured by ClearML by now, we don't want doubles + if self.tb: + import cv2 for f in files: self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') if self.wandb: - self.wandb.log(dict(zip(self.keys[3:10], results))) - self.wandb.log({'Results': [wandb.Image(str(f), caption=f.name) for f in files]}) + self.wandb.log({"Results": [wandb.Image(str(f), caption=f.name) for f in files]}) # Calling wandb.log. TODO: Refactor this into WandbLogger.log_model if not self.opt.evolve: - wandb.log_artifact(str(best if best.exists() else last), - type='model', - name=f'run_{self.wandb.wandb_run.id}_model', + wandb.log_artifact(str(best if best.exists() else last), type='model', + name='run_' + self.wandb.wandb_run.id + '_model', aliases=['latest', 'best', 'stripped']) - self.wandb.finish_run() - - if self.clearml and not self.opt.evolve: - self.clearml.task.update_output_model(model_path=str(best if best.exists() else last), - name='Best Model', - auto_delete_file=False) - - if self.comet_logger: - final_results = dict(zip(self.keys[3:10], results)) - self.comet_logger.on_train_end(files, self.save_dir, last, best, epoch, final_results) - - def on_params_update(self, params: dict): - # Update hyperparams or configs of the experiment - if self.wandb: - self.wandb.wandb_run.config.update(params, allow_val_change=True) - if self.comet_logger: - self.comet_logger.on_params_update(params) - - -class GenericLogger: - """ - YOLOv5 General purpose logger for non-task specific logging - Usage: from utils.loggers import GenericLogger; logger = GenericLogger(...) 
- Arguments - opt: Run arguments - console_logger: Console logger - include: loggers to include - """ - - def __init__(self, opt, console_logger, include=('tb', 'wandb')): - # init default loggers - self.save_dir = Path(opt.save_dir) - self.include = include - self.console_logger = console_logger - self.csv = self.save_dir / 'results.csv' # CSV logger - if 'tb' in self.include: - prefix = colorstr('TensorBoard: ') - self.console_logger.info( - f"{prefix}Start with 'tensorboard --logdir {self.save_dir.parent}', view at http://localhost:6006/") - self.tb = SummaryWriter(str(self.save_dir)) - - if wandb and 'wandb' in self.include: - self.wandb = wandb.init(project=web_project_name(str(opt.project)), - name=None if opt.name == 'exp' else opt.name, - config=opt) - else: - self.wandb = None - - def log_metrics(self, metrics, epoch): - # Log metrics dictionary to all loggers - if self.csv: - keys, vals = list(metrics.keys()), list(metrics.values()) - n = len(metrics) + 1 # number of cols - s = '' if self.csv.exists() else (('%23s,' * n % tuple(['epoch'] + keys)).rstrip(',') + '\n') # header - with open(self.csv, 'a') as f: - f.write(s + ('%23.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n') - - if self.tb: - for k, v in metrics.items(): - self.tb.add_scalar(k, v, epoch) - - if self.wandb: - self.wandb.log(metrics, step=epoch) - - def log_images(self, files, name='Images', epoch=0): - # Log images to all loggers - files = [Path(f) for f in (files if isinstance(files, (tuple, list)) else [files])] # to Path - files = [f for f in files if f.exists()] # filter by exists - - if self.tb: - for f in files: - self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') - - if self.wandb: - self.wandb.log({name: [wandb.Image(str(f), caption=f.name) for f in files]}, step=epoch) - - def log_graph(self, model, imgsz=(640, 640)): - # Log model graph to all loggers - if self.tb: - log_tensorboard_graph(self.tb, model, imgsz) - - def log_model(self, model_path, epoch=0, metadata={}): - # Log model to all loggers - if self.wandb: - art = wandb.Artifact(name=f'run_{wandb.run.id}_model', type='model', metadata=metadata) - art.add_file(str(model_path)) - wandb.log_artifact(art) - - def update_params(self, params): - # Update the parameters logged - if self.wandb: - wandb.run.config.update(params, allow_val_change=True) - - -def log_tensorboard_graph(tb, model, imgsz=(640, 640)): - # Log model graph to TensorBoard - try: - p = next(model.parameters()) # for device, type - imgsz = (imgsz, imgsz) if isinstance(imgsz, int) else imgsz # expand - im = torch.zeros((1, 3, *imgsz)).to(p.device).type_as(p) # input image (WARNING: must be zeros, not empty) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress jit trace warning - tb.add_graph(torch.jit.trace(de_parallel(model), im, strict=False), []) - except Exception as e: - LOGGER.warning(f'WARNING âš ī¸ TensorBoard graph visualization failure {e}') - - -def web_project_name(project): - # Convert local project name to web project name - if not project.startswith('runs/train'): - return project - suffix = '-Classify' if project.endswith('-cls') else '-Segment' if project.endswith('-seg') else '' - return f'YOLOv5{suffix}' + self.wandb.finish_run() + else: + self.wandb.finish_run() + self.wandb = WandbLogger(self.opt) diff --git a/yolov3/utils/loggers/wandb/wandb_utils.py b/yolov3/utils/loggers/wandb/wandb_utils.py index adb7840e..7087e4e9 100644 --- a/yolov3/utils/loggers/wandb/wandb_utils.py +++ 
b/yolov3/utils/loggers/wandb/wandb_utils.py @@ -1,32 +1,108 @@ -# YOLOv3 🚀 by Ultralytics, GPL-3.0 license - -# WARNING âš ī¸ wandb is deprecated and will be removed in future release. -# See supported integrations at https://github.com/ultralytics/yolov5#integrations +"""Utilities and tools for tracking runs with Weights & Biases.""" import logging import os import sys from contextlib import contextmanager from pathlib import Path +from typing import Dict -from utils.general import LOGGER, colorstr +import pkg_resources as pkg +import yaml +from tqdm import tqdm FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory +ROOT = FILE.parents[3] # root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH -RANK = int(os.getenv('RANK', -1)) -DEPRECATION_WARNING = f"{colorstr('wandb')}: WARNING âš ī¸ wandb is deprecated and will be removed in a future release. " \ - f'See supported integrations at https://github.com/ultralytics/yolov5#integrations.' + +from utils.datasets import LoadImagesAndLabels, img2label_paths +from utils.general import LOGGER, check_dataset, check_file try: import wandb assert hasattr(wandb, '__version__') # verify package import not local dir - LOGGER.warning(DEPRECATION_WARNING) except (ImportError, AssertionError): wandb = None +RANK = int(os.getenv('RANK', -1)) +WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' + + +def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX): + return from_string[len(prefix):] + + +def check_wandb_config_file(data_config_file): + wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path + if Path(wandb_config).is_file(): + return wandb_config + return data_config_file + + +def check_wandb_dataset(data_file): + is_trainset_wandb_artifact = False + is_valset_wandb_artifact = False + if check_file(data_file) and data_file.endswith('.yaml'): + with open(data_file, errors='ignore') as f: + data_dict = yaml.safe_load(f) + is_trainset_wandb_artifact = (isinstance(data_dict['train'], str) and + data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX)) + is_valset_wandb_artifact = (isinstance(data_dict['val'], str) and + data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX)) + if is_trainset_wandb_artifact or is_valset_wandb_artifact: + return data_dict + else: + return check_dataset(data_file) + + +def get_run_info(run_path): + run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX)) + run_id = run_path.stem + project = run_path.parent.stem + entity = run_path.parent.parent.stem + model_artifact_name = 'run_' + run_id + '_model' + return entity, project, run_id, model_artifact_name + + +def check_wandb_resume(opt): + process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None + if isinstance(opt.resume, str): + if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): + if RANK not in [-1, 0]: # For resuming DDP runs + entity, project, run_id, model_artifact_name = get_run_info(opt.resume) + api = wandb.Api() + artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest') + modeldir = artifact.download() + opt.weights = str(Path(modeldir) / "last.pt") + return True + return None + + +def process_wandb_config_ddp_mode(opt): + with open(check_file(opt.data), errors='ignore') as f: + data_dict = yaml.safe_load(f) # data dict + train_dir, val_dir = None, None + if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX): + api = wandb.Api() + train_artifact = api.artifact(remove_prefix(data_dict['train']) 
+ ':' + opt.artifact_alias) + train_dir = train_artifact.download() + train_path = Path(train_dir) / 'data/images/' + data_dict['train'] = str(train_path) + + if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX): + api = wandb.Api() + val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias) + val_dir = val_artifact.download() + val_path = Path(val_dir) / 'data/images/' + data_dict['val'] = str(val_path) + if train_dir or val_dir: + ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml') + with open(ddp_data_path, 'w') as f: + yaml.safe_dump(data_dict, f) + opt.data = ddp_data_path + class WandbLogger(): """Log training runs, datasets, models, and predictions to Weights & Biases. @@ -46,7 +122,7 @@ class WandbLogger(): """ - Initialize WandbLogger instance - Upload dataset if opt.upload_dataset is True - - Setup training processes if job_type is 'Training' + - Setup trainig processes if job_type is 'Training' arguments: opt (namespace) -- Commandline arguments for this run @@ -56,31 +132,82 @@ class WandbLogger(): """ # Pre-training routine -- self.job_type = job_type - self.wandb, self.wandb_run = wandb, wandb.run if wandb else None + self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run self.val_artifact, self.train_artifact = None, None self.train_artifact_path, self.val_artifact_path = None, None self.result_artifact = None self.val_table, self.result_table = None, None + self.bbox_media_panel_images = [] + self.val_table_path_map = None self.max_imgs_to_log = 16 + self.wandb_artifact_data_dict = None self.data_dict = None - if self.wandb: + # It's more elegant to stick to 1 wandb.init call, + # but useful config data is overwritten in the WandbLogger's wandb.init call + if isinstance(opt.resume, str): # checks resume from artifact + if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): + entity, project, run_id, model_artifact_name = get_run_info(opt.resume) + model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name + assert wandb, 'install wandb to resume wandb runs' + # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config + self.wandb_run = wandb.init(id=run_id, + project=project, + entity=entity, + resume='allow', + allow_val_change=True) + opt.resume = model_artifact_name + elif self.wandb: self.wandb_run = wandb.init(config=opt, - resume='allow', - project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem, + resume="allow", + project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem, entity=opt.entity, name=opt.name if opt.name != 'exp' else None, job_type=job_type, id=run_id, allow_val_change=True) if not wandb.run else wandb.run - if self.wandb_run: if self.job_type == 'Training': - if isinstance(opt.data, dict): - # This means another dataset manager has already processed the dataset info (e.g. 
ClearML) - # and they will have stored the already processed dict in opt.data - self.data_dict = opt.data + if opt.upload_dataset: + if not opt.resume: + self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt) + + if opt.resume: + # resume from artifact + if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX): + self.data_dict = dict(self.wandb_run.config.data_dict) + else: # local resume + self.data_dict = check_wandb_dataset(opt.data) + else: + self.data_dict = check_wandb_dataset(opt.data) + self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict + + # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming. + self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, + allow_val_change=True) self.setup_training(opt) + if self.job_type == 'Dataset Creation': + self.data_dict = self.check_and_upload_dataset(opt) + + def check_and_upload_dataset(self, opt): + """ + Check if the dataset format is compatible and upload it as W&B artifact + + arguments: + opt (namespace)-- Commandline arguments for current run + + returns: + Updated dataset info dictionary where local dataset paths are replaced by WAND_ARFACT_PREFIX links. + """ + assert wandb, 'Install wandb to upload dataset' + config_path = self.log_dataset_artifact(opt.data, + opt.single_cls, + 'YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem) + LOGGER.info(f"Created dataset config file {config_path}") + with open(config_path, errors='ignore') as f: + wandb_data_dict = yaml.safe_load(f) + return wandb_data_dict + def setup_training(self, opt): """ Setup the necessary processes for training YOLO models: @@ -95,18 +222,77 @@ class WandbLogger(): self.log_dict, self.current_epoch = {}, 0 self.bbox_interval = opt.bbox_interval if isinstance(opt.resume, str): - model_dir, _ = self.download_model_artifact(opt) - if model_dir: - self.weights = Path(model_dir) / 'last.pt' + modeldir, _ = self.download_model_artifact(opt) + if modeldir: + self.weights = Path(modeldir) / "last.pt" config = self.wandb_run.config - opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str( + opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp = str( self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs, \ - config.hyp, config.imgsz + config.hyp + data_dict = self.data_dict + if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download + self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'), + opt.artifact_alias) + self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'), + opt.artifact_alias) + if self.train_artifact_path is not None: + train_path = Path(self.train_artifact_path) / 'data/images/' + data_dict['train'] = str(train_path) + if self.val_artifact_path is not None: + val_path = Path(self.val_artifact_path) / 'data/images/' + data_dict['val'] = str(val_path) + + if self.val_artifact is not None: + self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") + self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"]) + self.val_table = self.val_artifact.get("val") + if self.val_table_path_map is None: + self.map_val_table_path() if opt.bbox_interval == -1: self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if 
opt.epochs > 10 else 1
-            if opt.evolve or opt.noplots:
-                self.bbox_interval = opt.bbox_interval = opt.epochs + 1  # disable bbox_interval
+        train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None
+        # Update the data_dict to point to local artifacts dir
+        if train_from_artifact:
+            self.data_dict = data_dict
+
+    def download_dataset_artifact(self, path, alias):
+        """
+        Download the dataset artifact if the path starts with WANDB_ARTIFACT_PREFIX
+
+        arguments:
+        path -- path of the dataset to be used for training
+        alias (str) -- alias of the artifact to be downloaded/used for training
+
+        returns:
+        (str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if the dataset
+        is found, otherwise returns (None, None)
+        """
+        if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
+            artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
+            dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/"))
+            assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'"
+            datadir = dataset_artifact.download()
+            return datadir, dataset_artifact
+        return None, None
+
+    def download_model_artifact(self, opt):
+        """
+        Download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX
+
+        arguments:
+        opt (namespace) -- Commandline arguments for this run
+        """
+        if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
+            model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
+            assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
+            modeldir = model_artifact.download()
+            epochs_trained = model_artifact.metadata.get('epochs_trained')
+            total_epochs = model_artifact.metadata.get('total_epochs')
+            is_finished = total_epochs is None
+            assert not is_finished, 'training is finished, can only resume incomplete runs.'
+            return modeldir, model_artifact
+        return None, None

     def log_model(self, path, opt, epoch, fitness_score, best_model=False):
         """
@@ -119,22 +305,166 @@ class WandbLogger():
         fitness_score (float) -- fitness score for current epoch
         best_model (boolean) -- Boolean representing if the current checkpoint is the best yet.
         """
-        model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model',
-                                        type='model',
-                                        metadata={
-                                            'original_url': str(path),
-                                            'epochs_trained': epoch + 1,
-                                            'save period': opt.save_period,
-                                            'project': opt.project,
-                                            'total_epochs': opt.epochs,
-                                            'fitness_score': fitness_score})
+        model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={
+            'original_url': str(path),
+            'epochs_trained': epoch + 1,
+            'save period': opt.save_period,
+            'project': opt.project,
+            'total_epochs': opt.epochs,
+            'fitness_score': fitness_score
+        })
         model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
         wandb.log_artifact(model_artifact,
                            aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
-        LOGGER.info(f'Saving model artifact on epoch {epoch + 1}')
+        LOGGER.info(f"Saving model artifact on epoch {epoch + 1}")
+
+    def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
+        """
+        Log the dataset as W&B artifact and return the new data file with W&B links
+
+        arguments:
+        data_file (str) -- the .yaml file with information about the dataset like - path, classes etc.
+        single_cls (boolean) -- train multi-class data as single-class
+        project (str) -- project name. Used to construct the artifact path
+        overwrite_config (boolean) -- overwrites the data.yaml file if set to true, otherwise creates a new
+        file with a _wandb postfix, e.g. data_wandb.yaml
+
+        returns:
+        the new .yaml file with artifact links. It can be used to start training directly from artifacts
+        """
+        self.data_dict = check_dataset(data_file)  # parse and check
+        data = dict(self.data_dict)
+        nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
+        names = {k: v for k, v in enumerate(names)}  # to index dictionary
+        self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(
+            data['train'], rect=True, batch_size=1), names, name='train') if data.get('train') else None
+        self.val_artifact = self.create_dataset_table(LoadImagesAndLabels(
+            data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None
+        if data.get('train'):
+            data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
+        if data.get('val'):
+            data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
+        path = Path(data_file).stem
+        path = (path if overwrite_config else path + '_wandb') + '.yaml'  # updated data.yaml path
+        data.pop('download', None)
+        data.pop('path', None)
+        with open(path, 'w') as f:
+            yaml.safe_dump(data, f)
+
+        if self.job_type == 'Training':  # builds correct artifact pipeline graph
+            self.wandb_run.use_artifact(self.val_artifact)
+            self.wandb_run.use_artifact(self.train_artifact)
+            self.val_artifact.wait()
+            self.val_table = self.val_artifact.get('val')
+            self.map_val_table_path()
+        else:
+            self.wandb_run.log_artifact(self.train_artifact)
+            self.wandb_run.log_artifact(self.val_artifact)
+        return path
+
+    def map_val_table_path(self):
+        """
+        Map the validation dataset Table: file name -> its id in the W&B Table.
+        Useful for referencing artifacts during evaluation.
+        """
+        self.val_table_path_map = {}
+        LOGGER.info("Mapping dataset")
+        for i, data in enumerate(tqdm(self.val_table.data)):
+            self.val_table_path_map[data[3]] = data[0]
+
+    def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'):
+        """
+        Create and return W&B artifact containing W&B Table of the dataset.
+
+        arguments:
+        dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table
+        class_to_id -- hash map that maps class ids to labels
+        name -- name of the artifact
+
+        returns:
+        dataset artifact to be logged or used
+        """
+        # TODO: Explore multiprocessing to split this loop in parallel; this is essential for speeding up the logging
+        artifact = wandb.Artifact(name=name, type="dataset")
+        img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None
+        img_files = tqdm(dataset.img_files) if not img_files else img_files
+        for img_file in img_files:
+            if Path(img_file).is_dir():
+                artifact.add_dir(img_file, name='data/images')
+                labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
+                artifact.add_dir(labels_path, name='data/labels')
+            else:
+                artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
+                label_file = Path(img2label_paths([img_file])[0])
+                artifact.add_file(str(label_file),
+                                  name='data/labels/' + label_file.name) if label_file.exists() else None
+        table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
+        class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
+        for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
+            box_data, img_classes = [], {}
+            for cls, *xywh in labels[:, 1:].tolist():
+                cls = int(cls)
+                box_data.append({"position": {"middle": [xywh[0], xywh[1]], "width": xywh[2], "height": xywh[3]},
+                                 "class_id": cls,
+                                 "box_caption": "%s" % (class_to_id[cls])})
+                img_classes[cls] = class_to_id[cls]
+            boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}}  # inference-space
+            table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),
+                           Path(paths).name)
+        artifact.add(table, name)
+        return artifact
+
+    def log_training_progress(self, predn, path, names):
+        """
+        Build evaluation Table. Uses reference from validation dataset table.
+
+        arguments:
+        predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class]
+        path (str): local path of the current evaluation image
+        names (dict(int, str)): hash map that maps class ids to labels
+        """
+        class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
+        box_data = []
+        total_conf = 0
+        for *xyxy, conf, cls in predn.tolist():
+            if conf >= 0.25:
+                box_data.append(
+                    {"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
+                     "class_id": int(cls),
+                     "box_caption": f"{names[cls]} {conf:.3f}",
+                     "scores": {"class_score": conf},
+                     "domain": "pixel"})
+                total_conf += conf
+        boxes = {"predictions": {"box_data": box_data, "class_labels": names}}  # inference-space
+        id = self.val_table_path_map[Path(path).name]
+        self.result_table.add_data(self.current_epoch,
+                                   id,
+                                   self.val_table.data[id][1],
+                                   wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
+                                   total_conf / max(1, len(box_data))
+                                   )

     def val_one_image(self, pred, predn, path, names, im):
-        pass
+        """
+        Log validation data for one image.
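(Aside, editor's sketch: the per-box dictionaries built by `log_training_progress` and `val_one_image` follow the W&B bounding-box overlay schema shown above. The snippet below is illustrative only; the class names, coordinates and score are made up, and `wandb` is assumed to be installed.)

```python
import wandb  # assumption: wandb is installed

names = {0: 'person'}
box_data = [{
    'position': {'minX': 34.0, 'minY': 50.0, 'maxX': 120.0, 'maxY': 200.0},  # pixel-space corners
    'class_id': 0,
    'box_caption': 'person 0.87',
    'scores': {'class_score': 0.87},
    'domain': 'pixel'}]
boxes = {'predictions': {'box_data': box_data, 'class_labels': names}}
# wandb.Image('example.jpg', boxes=boxes)  # hypothetical image path; this is how the logger attaches boxes
```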
updates the result Table if validation dataset is uploaded and log bbox media panel + + arguments: + pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class] + predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class] + path (str): local path of the current evaluation image + """ + if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact + self.log_training_progress(predn, path, names) + + if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0: + if self.current_epoch % self.bbox_interval == 0: + box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]}, + "class_id": int(cls), + "box_caption": f"{names[cls]} {conf:.3f}", + "scores": {"class_score": conf}, + "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()] + boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space + self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name)) def log(self, log_dict): """ @@ -147,7 +477,7 @@ class WandbLogger(): for key, value in log_dict.items(): self.log_dict[key] = value - def end_epoch(self): + def end_epoch(self, best_result=False): """ commit the log_dict, model artifacts and Tables to W&B and flush the log_dict. @@ -156,15 +486,25 @@ class WandbLogger(): """ if self.wandb_run: with all_logging_disabled(): + if self.bbox_media_panel_images: + self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images try: wandb.log(self.log_dict) except BaseException as e: - LOGGER.info( - f'An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}' - ) + LOGGER.info(f"An error occurred in wandb logger. The training will proceed without interruption. 
More info\n{e}") self.wandb_run.finish() self.wandb_run = None + self.log_dict = {} + self.bbox_media_panel_images = [] + if self.result_artifact: + self.result_artifact.add(self.result_table, 'result') + wandb.log_artifact(self.result_artifact, aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), + ('best' if best_result else '')]) + + wandb.log({"evaluation": self.result_table}) + self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"]) + self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") def finish_run(self): """ @@ -175,7 +515,6 @@ class WandbLogger(): with all_logging_disabled(): wandb.log(self.log_dict) wandb.run.finish() - LOGGER.warning(DEPRECATION_WARNING) @contextmanager diff --git a/yolov3/utils/metrics.py b/yolov3/utils/metrics.py index 5dab6130..90490955 100644 --- a/yolov3/utils/metrics.py +++ b/yolov3/utils/metrics.py @@ -11,8 +11,6 @@ import matplotlib.pyplot as plt import numpy as np import torch -from utils import TryExcept, threaded - def fitness(x): # Model fitness as a weighted combination of metrics @@ -20,15 +18,7 @@ def fitness(x): return (x[:, :4] * w).sum(1) -def smooth(y, f=0.05): - # Box filter of fraction f - nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd) - p = np.ones(nf // 2) # ones padding - yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded - return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16, prefix=''): +def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()): """ Compute the average precision, given the recall and precision curves. Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. 
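A quick illustration of the expected inputs and outputs of `ap_per_class` after this change (editor's sketch; the arrays are made up, and the function is assumed importable from `utils.metrics` with the repo root on `PYTHONPATH`):

```python
import numpy as np

from utils.metrics import ap_per_class  # assumption: repo root is on PYTHONPATH

tp = np.array([[1, 1], [1, 0], [0, 0]], dtype=float)  # (n_pred, n_iou_thresholds) true-positive flags
conf = np.array([0.9, 0.8, 0.3])                      # confidence of each prediction
pred_cls = np.array([0, 0, 1])                        # predicted class of each prediction
target_cls = np.array([0, 0, 1])                      # class of every ground-truth label
p, r, ap, f1, classes = ap_per_class(tp, conf, pred_cls, target_cls,
                                     plot=False, names={0: 'person', 1: 'car'})
# ap has shape (n_classes, n_iou_thresholds); ap[:, 0] corresponds to AP@0.5
```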
# Arguments @@ -47,7 +37,7 @@ def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] # Find unique classes - unique_classes, nt = np.unique(target_cls, return_counts=True) + unique_classes = np.unique(target_cls) nc = unique_classes.shape[0] # number of classes, number of detections # Create Precision-Recall curve and compute AP for each class @@ -55,44 +45,42 @@ def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) for ci, c in enumerate(unique_classes): i = pred_cls == c - n_l = nt[ci] # number of labels + n_l = (target_cls == c).sum() # number of labels n_p = i.sum() # number of predictions + if n_p == 0 or n_l == 0: continue + else: + # Accumulate FPs and TPs + fpc = (1 - tp[i]).cumsum(0) + tpc = tp[i].cumsum(0) - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) + # Recall + recall = tpc / (n_l + 1e-16) # recall curve + r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - # Recall - recall = tpc / (n_l + eps) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases + # Precision + precision = tpc / (tpc + fpc) # precision curve + p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 + # AP from recall-precision curve + for j in range(tp.shape[1]): + ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) + if plot and j == 0: + py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + eps) + f1 = 2 * p * r / (p + r + 1e-16) names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data - names = dict(enumerate(names)) # to dict + names = {i: v for i, v in enumerate(names)} # to dict if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / f'{prefix}PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / f'{prefix}F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / f'{prefix}P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / f'{prefix}R_curve.png', names, ylabel='Recall') + plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names) + plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1') + plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision') + plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall') - i = smooth(f1.mean(0), 0.1).argmax() # max F1 index - p, r, f1 = p[:, i], r[:, i], f1[:, i] - tp = (r * nt).round() # true positives - fp = (tp / (p + eps) - tp).round() # false positives - return tp, fp, p, r, f1, ap, unique_classes.astype(int) + i = f1.mean(0).argmax() # max F1 index + return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32') def compute_ap(recall, precision): @@ -141,12 +129,6 @@ class ConfusionMatrix: Returns: None, updates confusion matrix accordingly """ - if detections is None: - gt_classes = labels.int() - for gc in 
gt_classes: - self.matrix[self.nc, gc] += 1 # background FN - return - detections = detections[detections[:, 4] > self.conf] gt_classes = labels[:, 0].int() detection_classes = detections[:, 5].int() @@ -164,55 +146,43 @@ class ConfusionMatrix: matches = np.zeros((0, 3)) n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(int) + m0, m1, _ = matches.transpose().astype(np.int16) for i, gc in enumerate(gt_classes): j = m0 == i if n and sum(j) == 1: self.matrix[detection_classes[m1[j]], gc] += 1 # correct else: - self.matrix[self.nc, gc] += 1 # true background + self.matrix[self.nc, gc] += 1 # background FP if n: for i, dc in enumerate(detection_classes): if not any(m1 == i): - self.matrix[dc, self.nc] += 1 # predicted background + self.matrix[dc, self.nc] += 1 # background FN - def tp_fp(self): - tp = self.matrix.diagonal() # true positives - fp = self.matrix.sum(1) - tp # false positives - # fn = self.matrix.sum(0) - tp # false negatives (missed detections) - return tp[:-1], fp[:-1] # remove background class + def matrix(self): + return self.matrix - @TryExcept('WARNING âš ī¸ ConfusionMatrix plot failure') def plot(self, normalize=True, save_dir='', names=()): - import seaborn as sn + try: + import seaborn as sn - array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) + array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-6) if normalize else 1) # normalize columns + array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - fig, ax = plt.subplots(1, 1, figsize=(12, 9), tight_layout=True) - nc, nn = self.nc, len(names) # number of classes, names - sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size - labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels - ticklabels = (names + ['background']) if labels else 'auto' - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered - sn.heatmap(array, - ax=ax, - annot=nc < 30, - annot_kws={ - 'size': 8}, - cmap='Blues', - fmt='.2f', - square=True, - vmin=0.0, - xticklabels=ticklabels, - yticklabels=ticklabels).set_facecolor((1, 1, 1)) - ax.set_xlabel('True') - ax.set_ylabel('Predicted') - ax.set_title('Confusion Matrix') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - plt.close(fig) + fig = plt.figure(figsize=(12, 9), tight_layout=True) + sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size + labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels + with warnings.catch_warnings(): + warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered + sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True, + xticklabels=names + ['background FP'] if labels else "auto", + yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1)) + fig.axes[0].set_xlabel('True') + fig.axes[0].set_ylabel('Predicted') + fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) + plt.close() + except Exception as e: + print(f'WARNING: ConfusionMatrix plot failure: {e}') def print(self): for i in range(self.nc + 1): @@ -224,19 +194,19 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7 # Get the coordinates of bounding boxes if xywh: # transform from xywh to xyxy - (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), 
box2.chunk(4, -1) + (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, 1), box2.chunk(4, 1) w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2 b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_ b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_ else: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1) - b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1) - w1, h1 = b1_x2 - b1_x1, (b1_y2 - b1_y1).clamp(eps) - w2, h2 = b2_x2 - b2_x1, (b2_y2 - b2_y1).clamp(eps) + b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, 1) + b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, 1) + w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps + w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps # Intersection area - inter = (b1_x2.minimum(b2_x2) - b1_x1.maximum(b2_x1)).clamp(0) * \ - (b1_y2.minimum(b2_y2) - b1_y1.maximum(b2_y1)).clamp(0) + inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ + (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) # Union Area union = w1 * h1 + w2 * h2 - inter + eps @@ -244,13 +214,13 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7 # IoU iou = inter / union if CIoU or DIoU or GIoU: - cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1) # convex (smallest enclosing box) width - ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1) # convex height + cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width + ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2 if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2) + v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) with torch.no_grad(): alpha = v / (v - iou + (1 + eps)) return iou - (rho2 / c2 + v * alpha) # CIoU @@ -260,7 +230,7 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7 return iou # IoU -def box_iou(box1, box2, eps=1e-7): +def box_iou(box1, box2): # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py """ Return intersection-over-union (Jaccard index) of boxes. @@ -273,24 +243,30 @@ def box_iou(box1, box2, eps=1e-7): IoU values for every element in boxes1 and boxes2 """ + def box_area(box): + # box = 4xn + return (box[2] - box[0]) * (box[3] - box[1]) + + area1 = box_area(box1.T) + area2 = box_area(box2.T) + # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2) - inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2) - - # IoU = inter / (area1 + area2 - inter) - return inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter + eps) + inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) + return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) -def bbox_ioa(box1, box2, eps=1e-7): +def bbox_ioa(box1, box2, eps=1E-7): """ Returns the intersection over box2 area given box1, box2. 
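For intuition, a tiny worked example of the intersection-over-box2-area metric (editor's sketch with made-up boxes; `bbox_ioa` is assumed importable from `utils.metrics`):

```python
import numpy as np

from utils.metrics import bbox_ioa  # assumption: repo root is on PYTHONPATH

box1 = np.array([0, 0, 10, 10])        # one box, x1y1x2y2
box2 = np.array([[5, 5, 15, 15]])      # n x 4 candidate boxes
print(bbox_ioa(box1, box2))            # -> [0.25]: the 5*5 overlap divided by box2's 10*10 area
```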
Boxes are x1y1x2y2 box1: np.array of shape(4) box2: np.array of shape(nx4) returns: np.array of shape(n) """ + box2 = box2.transpose() + # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1 - b2_x1, b2_y1, b2_x2, b2_y2 = box2.T + b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] + b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] # Intersection area inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ @@ -303,19 +279,17 @@ def bbox_ioa(box1, box2, eps=1e-7): return inter_area / box2_area -def wh_iou(wh1, wh2, eps=1e-7): +def wh_iou(wh1, wh2): # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 wh1 = wh1[:, None] # [N,1,2] wh2 = wh2[None] # [1,M,2] inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter) + return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) # Plots ---------------------------------------------------------------------------------------------------------------- - -@threaded -def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()): +def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()): # Precision-recall curve fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) py = np.stack(py, axis=1) @@ -331,14 +305,12 @@ def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()): ax.set_ylabel('Precision') ax.set_xlim(0, 1) ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left') - ax.set_title('Precision-Recall Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) + plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") + fig.savefig(Path(save_dir), dpi=250) + plt.close() -@threaded -def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'): +def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'): # Metric-confidence curve fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) @@ -348,13 +320,12 @@ def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confi else: ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) - y = smooth(py.mean(0), 0.05) + y = py.mean(0) ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) ax.set_xlim(0, 1) ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left') - ax.set_title(f'{ylabel}-Confidence Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) + plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") + fig.savefig(Path(save_dir), dpi=250) + plt.close() diff --git a/yolov3/utils/plots.py b/yolov3/utils/plots.py index 64655a3f..16ae44a7 100644 --- a/yolov3/utils/plots.py +++ b/yolov3/utils/plots.py @@ -3,12 +3,10 @@ Plotting utils """ -import contextlib import math import os from copy import copy from pathlib import Path -from urllib.error import URLError import cv2 import matplotlib @@ -19,13 +17,12 @@ import seaborn as sn import torch from PIL import Image, ImageDraw, ImageFont -from utils import TryExcept, threaded -from utils.general import (CONFIG_DIR, FONT, LOGGER, check_font, check_requirements, clip_boxes, increment_path, - is_ascii, xywh2xyxy, xyxy2xywh) +from utils.general import (LOGGER, Timeout, check_requirements, clip_coords, increment_path, is_ascii, is_chinese, + try_except, user_config_dir, xywh2xyxy, xyxy2xywh) from 
utils.metrics import fitness -from utils.segment.general import scale_image # Settings +CONFIG_DIR = user_config_dir() # Ultralytics settings dir RANK = int(os.getenv('RANK', -1)) matplotlib.rc('font', **{'size': 11}) matplotlib.use('Agg') # for writing to files only @@ -35,9 +32,9 @@ class Colors: # Ultralytics color palette https://ultralytics.com/ def __init__(self): # hex = matplotlib.colors.TABLEAU_COLORS.values() - hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - self.palette = [self.hex2rgb(f'#{c}') for c in hexs] + hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', + '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') + self.palette = [self.hex2rgb('#' + c) for c in hex] self.n = len(self.palette) def __call__(self, i, bgr=False): @@ -52,33 +49,35 @@ class Colors: colors = Colors() # create instance for 'from utils.plots import colors' -def check_pil_font(font=FONT, size=10): +def check_font(font='Arial.ttf', size=10): # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary font = Path(font) font = font if font.exists() else (CONFIG_DIR / font.name) try: return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception: # download if missing + except Exception as e: # download if missing + url = "https://ultralytics.com/assets/" + font.name + print(f'Downloading {url} to {font}...') + torch.hub.download_url_to_file(url, str(font), progress=False) try: - check_font(font) return ImageFont.truetype(str(font), size) except TypeError: check_requirements('Pillow>=8.4.0') # known issue https://github.com/ultralytics/yolov5/issues/5374 - except URLError: # not online - return ImageFont.load_default() class Annotator: - # YOLOv3 Annotator for train/val mosaics and jpgs and detect/hub inference annotations + if RANK in (-1, 0): + check_font() # download TTF if necessary + + # Annotator for train/val mosaics and jpgs and detect/hub inference annotations def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'): assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.' - non_ascii = not is_ascii(example) # non-latin labels, i.e. 
asian, arabic, cyrillic - self.pil = pil or non_ascii + self.pil = pil or not is_ascii(example) or is_chinese(example) if self.pil: # use PIL self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) self.draw = ImageDraw.Draw(self.im) - self.font = check_pil_font(font='Arial.Unicode.ttf' if non_ascii else font, - size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12)) + self.font = check_font(font='Arial.Unicode.ttf' if is_chinese(example) else font, + size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12)) else: # use cv2 self.im = im self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2) # line width @@ -88,14 +87,12 @@ class Annotator: if self.pil or not is_ascii(label): self.draw.rectangle(box, width=self.lw, outline=color) # box if label: - w, h = self.font.getsize(label) # text width, height (WARNING: deprecated) in 9.2.0 - # _, _, w, h = self.font.getbbox(label) # text width, height (New) + w, h = self.font.getsize(label) # text width, height outside = box[1] - h >= 0 # label fits outside box - self.draw.rectangle( - (box[0], box[1] - h if outside else box[1], box[0] + w + 1, - box[1] + 1 if outside else box[1] + h + 1), - fill=color, - ) + self.draw.rectangle([box[0], + box[1] - h if outside else box[1], + box[0] + w + 1, + box[1] + 1 if outside else box[1] + h + 1], fill=color) # self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') # for PIL>8.0 self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font) else: # cv2 @@ -104,62 +101,20 @@ class Annotator: if label: tf = max(self.lw - 1, 1) # font thickness w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] # text width, height - outside = p1[1] - h >= 3 + outside = p1[1] - h - 3 >= 0 # label fits outside box p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3 cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA) # filled - cv2.putText(self.im, - label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), - 0, - self.lw / 3, - txt_color, - thickness=tf, - lineType=cv2.LINE_AA) - - def masks(self, masks, colors, im_gpu, alpha=0.5, retina_masks=False): - """Plot masks at once. 
- Args: - masks (tensor): predicted masks on cuda, shape: [n, h, w] - colors (List[List[Int]]): colors for predicted masks, [[r, g, b] * n] - im_gpu (tensor): img is in cuda, shape: [3, h, w], range: [0, 1] - alpha (float): mask transparency: 0.0 fully transparent, 1.0 opaque - """ - if self.pil: - # convert to numpy first - self.im = np.asarray(self.im).copy() - if len(masks) == 0: - self.im[:] = im_gpu.permute(1, 2, 0).contiguous().cpu().numpy() * 255 - colors = torch.tensor(colors, device=im_gpu.device, dtype=torch.float32) / 255.0 - colors = colors[:, None, None] # shape(n,1,1,3) - masks = masks.unsqueeze(3) # shape(n,h,w,1) - masks_color = masks * (colors * alpha) # shape(n,h,w,3) - - inv_alph_masks = (1 - masks * alpha).cumprod(0) # shape(n,h,w,1) - mcs = (masks_color * inv_alph_masks).sum(0) * 2 # mask color summand shape(n,h,w,3) - - im_gpu = im_gpu.flip(dims=[0]) # flip channel - im_gpu = im_gpu.permute(1, 2, 0).contiguous() # shape(h,w,3) - im_gpu = im_gpu * inv_alph_masks[-1] + mcs - im_mask = (im_gpu * 255).byte().cpu().numpy() - self.im[:] = im_mask if retina_masks else scale_image(im_gpu.shape, im_mask, self.im.shape) - if self.pil: - # convert im back to PIL and update draw - self.fromarray(self.im) + cv2.putText(self.im, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), 0, self.lw / 3, txt_color, + thickness=tf, lineType=cv2.LINE_AA) def rectangle(self, xy, fill=None, outline=None, width=1): # Add rectangle to image (PIL-only) self.draw.rectangle(xy, fill, outline, width) - def text(self, xy, text, txt_color=(255, 255, 255), anchor='top'): + def text(self, xy, text, txt_color=(255, 255, 255)): # Add text to image (PIL-only) - if anchor == 'bottom': # start y from font bottom - w, h = self.font.getsize(text) # text width, height - xy[1] += 1 - h - self.draw.text(xy, text, fill=txt_color, font=self.font) - - def fromarray(self, im): - # Update self.im from a numpy array - self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) - self.draw = ImageDraw.Draw(self.im) + w, h = self.font.getsize(text) # text width, height + self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font) def result(self): # Return annotated image as array @@ -177,7 +132,7 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec if 'Detect' not in module_type: batch, channels, height, width = x.shape # batch, channels, height, width if height > 1 and width > 1: - f = save_dir / f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename + f = f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels n = min(n, channels) # number of plots @@ -188,10 +143,9 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec ax[i].imshow(blocks[i].squeeze()) # cmap='gray' ax[i].axis('off') - LOGGER.info(f'Saving {f}... ({n}/{channels})') - plt.savefig(f, dpi=300, bbox_inches='tight') + print(f'Saving {save_dir / f}... 
({n}/{channels})') + plt.savefig(save_dir / f, dpi=300, bbox_inches='tight') plt.close() - np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy()) # npy save def hist2d(x, y, n=100): @@ -216,31 +170,26 @@ def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): return filtfilt(b, a, data) # forward-backward filter -def output_to_target(output, max_det=300): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] for plotting +def output_to_target(output): + # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] targets = [] for i, o in enumerate(output): - box, conf, cls = o[:max_det, :6].cpu().split((4, 1, 1), 1) - j = torch.full((conf.shape[0], 1), i) - targets.append(torch.cat((j, cls, xyxy2xywh(box), conf), 1)) - return torch.cat(targets, 0).numpy() + for *box, conf, cls in o.cpu().numpy(): + targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) + return np.array(targets) -@threaded -def plot_images(images, targets, paths=None, fname='images.jpg', names=None): +def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=1920, max_subplots=16): # Plot image grid with labels if isinstance(images, torch.Tensor): images = images.cpu().float().numpy() if isinstance(targets, torch.Tensor): targets = targets.cpu().numpy() - - max_size = 1920 # max image size - max_subplots = 16 # max image subplots, i.e. 4x4 + if np.max(images[0]) <= 1: + images *= 255 # de-normalise (optional) bs, _, h, w = images.shape # batch size, _, height, width bs = min(bs, max_subplots) # limit plot images ns = np.ceil(bs ** 0.5) # number of subplots (square) - if np.max(images[0]) <= 1: - images *= 255 # de-normalise (optional) # Build Image mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init @@ -260,12 +209,12 @@ def plot_images(images, targets, paths=None, fname='images.jpg', names=None): # Annotate fs = int((h + w) * ns * 0.01) # font size - annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names) + annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True) for i in range(i + 1): x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders if paths: - annotator.text((x + 5, y + 5), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames + annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames if len(targets) > 0: ti = targets[targets[:, 0] == i] # image targets boxes = xywh2xyxy(ti[:, 2:6]).T @@ -346,7 +295,7 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_ ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel() fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]: + # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov3', 'yolov3-spp', 'yolov3-tiny']]: for f in sorted(save_dir.glob('study*.txt')): y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T x = np.arange(y.shape[1]) if x is None else np.array(x) @@ -357,19 +306,11 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_ ax[i].set_title(s[i]) j = y[3].argmax() + 1 - ax2.plot(y[5, 1:j], - y[3, 1:j] * 1E2, - '.-', - linewidth=2, - markersize=8, + ax2.plot(y[5, 1:j], y[3, 1:j] * 1E2, '.-', 
linewidth=2, markersize=8, label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', - linewidth=2, - markersize=8, - alpha=.25, - label='EfficientDet') + 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') ax2.grid(alpha=0.2) ax2.set_yticks(np.arange(20, 60, 5)) @@ -383,7 +324,8 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_ plt.savefig(f, dpi=300) -@TryExcept() # known issue https://github.com/ultralytics/yolov5/issues/5395 +@try_except # known issue https://github.com/ultralytics/yolov5/issues/5395 +@Timeout(30) # known issue https://github.com/ultralytics/yolov5/issues/5611 def plot_labels(labels, names=(), save_dir=Path('')): # plot dataset labels LOGGER.info(f"Plotting labels to {save_dir / 'labels.jpg'}... ") @@ -400,12 +342,11 @@ def plot_labels(labels, names=(), save_dir=Path('')): matplotlib.use('svg') # faster ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - with contextlib.suppress(Exception): # color histogram bars by class - [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # known issue #3195 + # [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # update colors bug #3195 ax[0].set_ylabel('instances') if 0 < len(names) < 30: ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(list(names.values()), rotation=90, fontsize=10) + ax[0].set_xticklabels(names, rotation=90, fontsize=10) else: ax[0].set_xlabel('classes') sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) @@ -429,35 +370,6 @@ def plot_labels(labels, names=(), save_dir=Path('')): plt.close() -def imshow_cls(im, labels=None, pred=None, names=None, nmax=25, verbose=False, f=Path('images.jpg')): - # Show classification image grid with labels (optional) and predictions (optional) - from utils.augmentations import denormalize - - names = names or [f'class{i}' for i in range(1000)] - blocks = torch.chunk(denormalize(im.clone()).cpu().float(), len(im), - dim=0) # select batch index 0, block by channels - n = min(len(blocks), nmax) # number of plots - m = min(8, round(n ** 0.5)) # 8 x 8 default - fig, ax = plt.subplots(math.ceil(n / m), m) # 8 rows x n/8 cols - ax = ax.ravel() if m > 1 else [ax] - # plt.subplots_adjust(wspace=0.05, hspace=0.05) - for i in range(n): - ax[i].imshow(blocks[i].squeeze().permute((1, 2, 0)).numpy().clip(0.0, 1.0)) - ax[i].axis('off') - if labels is not None: - s = names[labels[i]] + (f'—{names[pred[i]]}' if pred is not None else '') - ax[i].set_title(s, fontsize=8, verticalalignment='top') - plt.savefig(f, dpi=300, bbox_inches='tight') - plt.close() - if verbose: - LOGGER.info(f'Saving {f}') - if labels is not None: - LOGGER.info('True: ' + ' '.join(f'{names[i]:3s}' for i in labels[:nmax])) - if pred is not None: - LOGGER.info('Predicted:' + ' '.join(f'{names[i]:3s}' for i in pred[:nmax])) - return f - - def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve() # Plot evolve.csv hyp evolution results evolve_csv = Path(evolve_csv) @@ -468,7 +380,6 @@ def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; j = np.argmax(f) # max fitness index plt.figure(figsize=(10, 12), tight_layout=True) matplotlib.rc('font', **{'size': 8}) - print(f'Best results from row {j} of {evolve_csv}:') for i, k in enumerate(keys[7:]): v = x[:, 7 + i] mu = 
v[j] # best single result @@ -492,20 +403,20 @@ def plot_results(file='path/to/results.csv', dir=''): ax = ax.ravel() files = list(save_dir.glob('results*.csv')) assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.' - for f in files: + for fi, f in enumerate(files): try: data = pd.read_csv(f) s = [x.strip() for x in data.columns] x = data.values[:, 0] for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]): - y = data.values[:, j].astype('float') + y = data.values[:, j] # y[y == 0] = np.nan # don't show zero values ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8) ax[i].set_title(s[j], fontsize=12) # if j in [8, 9, 10]: # share train and val loss y axes # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) except Exception as e: - LOGGER.info(f'Warning: Plotting error for {f}: {e}') + print(f'Warning: Plotting error for {f}: {e}') ax[1].legend() fig.savefig(save_dir / 'results.png', dpi=200) plt.close() @@ -542,7 +453,7 @@ def profile_idetection(start=0, stop=0, labels=(), save_dir=''): plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) -def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, BGR=False, save=True): +def save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False, save=True): # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop xyxy = torch.tensor(xyxy).view(-1, 4) b = xyxy2xywh(xyxy) # boxes @@ -550,11 +461,9 @@ def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad xyxy = xywh2xyxy(b).long() - clip_boxes(xyxy, im.shape) + clip_coords(xyxy, im.shape) crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)] if save: file.parent.mkdir(parents=True, exist_ok=True) # make directory - f = str(increment_path(file).with_suffix('.jpg')) - # cv2.imwrite(f, crop) # save BGR, https://github.com/ultralytics/yolov5/issues/7007 chroma subsampling issue - Image.fromarray(crop[..., ::-1]).save(f, quality=95, subsampling=0) # save RGB + cv2.imwrite(str(increment_path(file).with_suffix('.jpg')), crop) return crop diff --git a/yolov3/utils/torch_utils.py b/yolov3/utils/torch_utils.py index 073d9bd0..d3692297 100644 --- a/yolov3/utils/torch_utils.py +++ b/yolov3/utils/torch_utils.py @@ -3,12 +3,12 @@ PyTorch utils """ +import datetime import math import os import platform import subprocess import time -import warnings from contextlib import contextmanager from copy import deepcopy from pathlib import Path @@ -17,77 +17,20 @@ import torch import torch.distributed as dist import torch.nn as nn import torch.nn.functional as F -from torch.nn.parallel import DistributedDataParallel as DDP -from utils.general import LOGGER, check_version, colorstr, file_date, git_describe - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) +from utils.general import LOGGER try: import thop # for FLOPs computation except ImportError: thop = None -# Suppress PyTorch warnings -warnings.filterwarnings('ignore', message='User provided device_type of \'cuda\', but CUDA is not available. 
Disabling') -warnings.filterwarnings('ignore', category=UserWarning) - - -def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')): - # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator - def decorate(fn): - return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn) - - return decorate - - -def smartCrossEntropyLoss(label_smoothing=0.0): - # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0 - if check_version(torch.__version__, '1.10.0'): - return nn.CrossEntropyLoss(label_smoothing=label_smoothing) - if label_smoothing > 0: - LOGGER.warning(f'WARNING âš ī¸ label smoothing {label_smoothing} requires torch>=1.10.0') - return nn.CrossEntropyLoss() - - -def smart_DDP(model): - # Model DDP creation with checks - assert not check_version(torch.__version__, '1.12.0', pinned=True), \ - 'torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. ' \ - 'Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395' - if check_version(torch.__version__, '1.11.0'): - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True) - else: - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) - - -def reshape_classifier_output(model, n=1000): - # Update a TorchVision classification model to class count 'n' if required - from models.common import Classify - name, m = list((model.model if hasattr(model, 'model') else model).named_children())[-1] # last module - if isinstance(m, Classify): # YOLOv3 Classify() head - if m.linear.out_features != n: - m.linear = nn.Linear(m.linear.in_features, n) - elif isinstance(m, nn.Linear): # ResNet, EfficientNet - if m.out_features != n: - setattr(model, name, nn.Linear(m.in_features, n)) - elif isinstance(m, nn.Sequential): - types = [type(x) for x in m] - if nn.Linear in types: - i = types.index(nn.Linear) # nn.Linear index - if m[i].out_features != n: - m[i] = nn.Linear(m[i].in_features, n) - elif nn.Conv2d in types: - i = types.index(nn.Conv2d) # nn.Conv2d index - if m[i].out_channels != n: - m[i] = nn.Conv2d(m[i].in_channels, n, m[i].kernel_size, m[i].stride, bias=m[i].bias is not None) - @contextmanager def torch_distributed_zero_first(local_rank: int): - # Decorator to make all processes in distributed training wait for each local_master to do something + """ + Decorator to make all processes in distributed training wait for each local_master to do something. + """ if local_rank not in [-1, 0]: dist.barrier(device_ids=[local_rank]) yield @@ -95,70 +38,69 @@ def torch_distributed_zero_first(local_rank: int): dist.barrier(device_ids=[0]) -def device_count(): - # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows - assert platform.system() in ('Linux', 'Windows'), 'device_count() only supported on Linux or Windows' +def date_modified(path=__file__): + # return human-readable file modification date, i.e. '2021-3-26' + t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime) + return f'{t.year}-{t.month}-{t.day}' + + +def git_describe(path=Path(__file__).parent): # path must be a directory + # return human-readable git description, i.e. 
v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe + s = f'git -C {path} describe --tags --long --always' try: - cmd = 'nvidia-smi -L | wc -l' if platform.system() == 'Linux' else 'nvidia-smi -L | find /c /v ""' # Windows - return int(subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]) - except Exception: - return 0 + return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1] + except subprocess.CalledProcessError as e: + return '' # not a git repository -def select_device(device='', batch_size=0, newline=True): - # device = None or 'cpu' or 0 or '0' or '0,1,2,3' - s = f'YOLOv3 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} ' - device = str(device).strip().lower().replace('cuda:', '').replace('none', '') # to string, 'cuda:0' to '0' +def select_device(device='', batch_size=None, newline=True): + # device = 'cpu' or '0' or '0,1,2,3' + s = f'YOLOv3 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string + device = str(device).strip().lower().replace('cuda:', '') # to string, 'cuda:0' to '0' cpu = device == 'cpu' - mps = device == 'mps' # Apple Metal Performance Shaders (MPS) - if cpu or mps: + if cpu: os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available() - assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \ - f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)" + os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable + assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability - if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available + cuda = not cpu and torch.cuda.is_available() + if cuda: devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 
0,1,6,7 n = len(devices) # device count - if n > 1 and batch_size > 0: # check batch_size is divisible by device_count + if n > 1 and batch_size: # check batch_size is divisible by device_count assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' space = ' ' * (len(s) + 1) for i, d in enumerate(devices): p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB - arg = 'cuda:0' - elif mps and getattr(torch, 'has_mps', False) and torch.backends.mps.is_available(): # prefer MPS if available - s += 'MPS\n' - arg = 'mps' - else: # revert to CPU + s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2:.0f}MiB)\n" # bytes to MB + else: s += 'CPU\n' - arg = 'cpu' if not newline: s = s.rstrip() - LOGGER.info(s) - return torch.device(arg) + LOGGER.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe + return torch.device('cuda:0' if cuda else 'cpu') def time_sync(): - # PyTorch-accurate time + # pytorch-accurate time if torch.cuda.is_available(): torch.cuda.synchronize() return time.time() def profile(input, ops, n=10, device=None): - """ YOLOv3 speed/memory/FLOPs profiler - Usage: - input = torch.randn(16, 3, 640, 640) - m1 = lambda x: x * torch.sigmoid(x) - m2 = nn.SiLU() - profile(input, [m1, m2], n=100) # profile over 100 iterations - """ + # speed/memory/FLOPs profiler + # + # Usage: + # input = torch.randn(16, 3, 640, 640) + # m1 = lambda x: x * torch.sigmoid(x) + # m2 = nn.SiLU() + # profile(input, [m1, m2], n=100) # profile over 100 iterations + results = [] - if not isinstance(device, torch.device): - device = select_device(device) + device = device or select_device() print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}" f"{'input':>24s}{'output':>24s}") @@ -171,7 +113,7 @@ def profile(input, ops, n=10, device=None): tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward try: flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs - except Exception: + except: flops = 0 try: @@ -182,14 +124,15 @@ def profile(input, ops, n=10, device=None): try: _ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward() t[2] = time_sync() - except Exception: # no backward method + except Exception as e: # no backward method # print(e) # for debug t[2] = float('nan') tf += (t[1] - t[0]) * 1000 / n # ms per op forward tb += (t[2] - t[1]) * 1000 / n # ms per op backward mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB) - s_in, s_out = (tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' for x in (x, y)) # shapes - p = sum(x.numel() for x in m.parameters()) if isinstance(m, nn.Module) else 0 # parameters + s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' + s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list' + p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}') results.append([p, flops, mem, tf, tb, s_in, s_out]) except Exception as e: @@ -238,30 +181,30 @@ def sparsity(model): def prune(model, amount=0.3): # Prune model to requested global sparsity import torch.nn.utils.prune as prune + print('Pruning model... 
', end='') for name, m in model.named_modules(): if isinstance(m, nn.Conv2d): prune.l1_unstructured(m, name='weight', amount=amount) # prune prune.remove(m, 'weight') # make permanent - LOGGER.info(f'Model pruned to {sparsity(model):.3g} global sparsity') + print(' %.3g global sparsity' % sparsity(model)) def fuse_conv_and_bn(conv, bn): - # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ + # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ fusedconv = nn.Conv2d(conv.in_channels, conv.out_channels, kernel_size=conv.kernel_size, stride=conv.stride, padding=conv.padding, - dilation=conv.dilation, groups=conv.groups, bias=True).requires_grad_(False).to(conv.weight.device) - # Prepare filters + # prepare filters w_conv = conv.weight.clone().view(conv.out_channels, -1) w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - # Prepare spatial bias + # prepare spatial bias b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) @@ -269,7 +212,7 @@ def fuse_conv_and_bn(conv, bn): return fusedconv -def model_info(model, verbose=False, imgsz=640): +def model_info(model, verbose=False, img_size=640): # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320] n_p = sum(x.numel() for x in model.parameters()) # number parameters n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients @@ -281,29 +224,29 @@ def model_info(model, verbose=False, imgsz=640): (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) try: # FLOPs - p = next(model.parameters()) - stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 # max stride - im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format - flops = thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs - imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz] # expand if int/float - fs = f', {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs' # 640x640 GFLOPs - except Exception: + from thop import profile + stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 + img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input + flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs + img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float + fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPs + except (ImportError, Exception): fs = '' - name = Path(model.yaml_file).stem.replace('yolov5', 'YOLOv3') if hasattr(model, 'yaml_file') else 'Model' - LOGGER.info(f'{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}') + LOGGER.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # Scales img(bs,3,y,x) by ratio constrained to gs-multiple + # scales img(bs,3,y,x) by ratio constrained to gs-multiple if ratio == 1.0: return img - h, w = img.shape[2:] - s = 
(int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean + else: + h, w = img.shape[2:] + s = (int(h * ratio), int(w * ratio)) # new size + img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize + if not same_shape: # pad/crop img + h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) + return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean def copy_attr(a, b, include=(), exclude=()): @@ -315,71 +258,8 @@ def copy_attr(a, b, include=(), exclude=()): setattr(a, k, v) -def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, decay=1e-5): - # YOLOv3 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay - g = [], [], [] # optimizer parameter groups - bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k) # normalization layers, i.e. BatchNorm2d() - for v in model.modules(): - for p_name, p in v.named_parameters(recurse=0): - if p_name == 'bias': # bias (no decay) - g[2].append(p) - elif p_name == 'weight' and isinstance(v, bn): # weight (no decay) - g[1].append(p) - else: - g[0].append(p) # weight (with decay) - - if name == 'Adam': - optimizer = torch.optim.Adam(g[2], lr=lr, betas=(momentum, 0.999)) # adjust beta1 to momentum - elif name == 'AdamW': - optimizer = torch.optim.AdamW(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0) - elif name == 'RMSProp': - optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum) - elif name == 'SGD': - optimizer = torch.optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True) - else: - raise NotImplementedError(f'Optimizer {name} not implemented.') - - optimizer.add_param_group({'params': g[0], 'weight_decay': decay}) # add g0 with weight_decay - optimizer.add_param_group({'params': g[1], 'weight_decay': 0.0}) # add g1 (BatchNorm2d weights) - LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups " - f'{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias') - return optimizer - - -def smart_hub_load(repo='ultralytics/yolov5', model='yolov5s', **kwargs): - # YOLOv3 torch.hub.load() wrapper with smart error/issue handling - if check_version(torch.__version__, '1.9.1'): - kwargs['skip_validation'] = True # validation causes GitHub API rate limit errors - if check_version(torch.__version__, '1.12.0'): - kwargs['trust_repo'] = True # argument required starting in torch 0.12 - try: - return torch.hub.load(repo, model, **kwargs) - except Exception: - return torch.hub.load(repo, model, force_reload=True, **kwargs) - - -def smart_resume(ckpt, optimizer, ema=None, weights='yolov5s.pt', epochs=300, resume=True): - # Resume training from a partially trained checkpoint - best_fitness = 0.0 - start_epoch = ckpt['epoch'] + 1 - if ckpt['optimizer'] is not None: - optimizer.load_state_dict(ckpt['optimizer']) # optimizer - best_fitness = ckpt['best_fitness'] - if ema and ckpt.get('ema'): - ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) # EMA - ema.updates = ckpt['updates'] - if resume: - assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.\n' \ - f"Start a new training without --resume, i.e. 
'python train.py --weights {weights}'" - LOGGER.info(f'Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs') - if epochs < start_epoch: - LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.") - epochs += ckpt['epoch'] # finetune additional epochs - return best_fitness, start_epoch, epochs - - class EarlyStopping: - # YOLOv3 simple early stopper + # simple early stopper def __init__(self, patience=30): self.best_fitness = 0.0 # i.e. mAP self.best_epoch = 0 @@ -402,30 +282,36 @@ class EarlyStopping: class ModelEMA: - """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models - Keeps a moving average of everything in the model state_dict (parameters and buffers) - For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage + """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models + Keep a moving average of everything in the model state_dict (parameters and buffers). + This is intended to allow functionality like + https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage + A smoothed version of the weights is necessary for some training schemes to perform well. + This class is sensitive where it is initialized in the sequence of model init, + GPU assignment and distributed training wrappers. """ - def __init__(self, model, decay=0.9999, tau=2000, updates=0): + def __init__(self, model, decay=0.9999, updates=0): # Create EMA - self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA + self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA + # if next(model.parameters()).device.type != 'cpu': + # self.ema.half() # FP16 EMA self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / tau)) # decay exponential ramp (to help early epochs) + self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) for p in self.ema.parameters(): p.requires_grad_(False) def update(self, model): # Update EMA parameters - self.updates += 1 - d = self.decay(self.updates) + with torch.no_grad(): + self.updates += 1 + d = self.decay(self.updates) - msd = de_parallel(model).state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: # true for FP16 and FP32 - v *= d - v += (1 - d) * msd[k].detach() - # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32' + msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict + for k, v in self.ema.state_dict().items(): + if v.dtype.is_floating_point: + v *= d + v += (1 - d) * msd[k].detach() def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): # Update EMA attributes diff --git a/yolov3/val.py b/yolov3/val.py index f44089fe..24b28ad1 100644 --- a/yolov3/val.py +++ b/yolov3/val.py @@ -1,30 +1,17 @@ # YOLOv3 🚀 by Ultralytics, GPL-3.0 license """ -Validate a trained YOLOv3 detection model on a detection dataset +Validate a trained model accuracy on a custom dataset Usage: - $ python val.py --weights yolov5s.pt --data coco128.yaml --img 640 - -Usage - formats: - $ python val.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML 
(macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle + $ python path/to/val.py --data coco128.yaml --weights yolov3.pt --img 640 """ import argparse import json import os -import subprocess import sys from pathlib import Path +from threading import Thread import numpy as np import torch @@ -38,13 +25,13 @@ ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative from models.common import DetectMultiBackend from utils.callbacks import Callbacks -from utils.dataloaders import create_dataloader -from utils.general import (LOGGER, TQDM_BAR_FORMAT, Profile, check_dataset, check_img_size, check_requirements, - check_yaml, coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, - print_args, scale_boxes, xywh2xyxy, xyxy2xywh) -from utils.metrics import ConfusionMatrix, ap_per_class, box_iou +from utils.datasets import create_dataloader +from utils.general import (LOGGER, NCOLS, box_iou, check_dataset, check_img_size, check_requirements, check_yaml, + coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, print_args, + scale_coords, xywh2xyxy, xyxy2xywh) +from utils.metrics import ConfusionMatrix, ap_per_class from utils.plots import output_to_target, plot_images, plot_val_study -from utils.torch_utils import select_device, smart_inference_mode +from utils.torch_utils import select_device, time_sync def save_one_txt(predn, save_conf, shape, file): @@ -63,50 +50,45 @@ def save_one_json(predn, jdict, path, class_map): box = xyxy2xywh(predn[:, :4]) # xywh box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner for p, b in zip(predn.tolist(), box.tolist()): - jdict.append({ - 'image_id': image_id, - 'category_id': class_map[int(p[5])], - 'bbox': [round(x, 3) for x in b], - 'score': round(p[4], 5)}) + jdict.append({'image_id': image_id, + 'category_id': class_map[int(p[5])], + 'bbox': [round(x, 3) for x in b], + 'score': round(p[4], 5)}) def process_batch(detections, labels, iouv): """ - Return correct prediction matrix + Return correct predictions matrix. Both sets of boxes are in (x1, y1, x2, y2) format. 
Arguments: - detections (array[N, 6]), x1, y1, x2, y2, conf, class - labels (array[M, 5]), class, x1, y1, x2, y2 + detections (Array[N, 6]), x1, y1, x2, y2, conf, class + labels (Array[M, 5]), class, x1, y1, x2, y2 Returns: - correct (array[N, 10]), for 10 IoU levels + correct (Array[N, 10]), for 10 IoU levels """ - correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool) + correct = torch.zeros(detections.shape[0], iouv.shape[0], dtype=torch.bool, device=iouv.device) iou = box_iou(labels[:, 1:], detections[:, :4]) - correct_class = labels[:, 0:1] == detections[:, 5] - for i in range(len(iouv)): - x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou] - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - # matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - correct[matches[:, 1].astype(int), i] = True - return torch.tensor(correct, dtype=torch.bool, device=iouv.device) + x = torch.where((iou >= iouv[0]) & (labels[:, 0:1] == detections[:, 5])) # IoU above threshold and classes match + if x[0].shape[0]: + matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detection, iou] + if x[0].shape[0] > 1: + matches = matches[matches[:, 2].argsort()[::-1]] + matches = matches[np.unique(matches[:, 1], return_index=True)[1]] + # matches = matches[matches[:, 2].argsort()[::-1]] + matches = matches[np.unique(matches[:, 0], return_index=True)[1]] + matches = torch.Tensor(matches).to(iouv.device) + correct[matches[:, 1].long()] = matches[:, 2:3] >= iouv + return correct -@smart_inference_mode() -def run( - data, +@torch.no_grad() +def run(data, weights=None, # model.pt path(s) batch_size=32, # batch size imgsz=640, # inference size (pixels) conf_thres=0.001, # confidence threshold iou_thres=0.6, # NMS IoU threshold - max_det=300, # maximum detections per image task='val', # train, val, test, speed or study device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - workers=8, # max dataloader workers (per RANK in DDP mode) single_cls=False, # treat as single-class dataset augment=False, # augmented inference verbose=False, # verbose output @@ -125,11 +107,12 @@ def run( plots=True, callbacks=Callbacks(), compute_loss=None, -): + ): # Initialize/load model and set device training = model is not None if training: # called by train.py - device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model + device, pt = next(model.parameters()).device, True # get model device, PyTorch model + half &= device.type != 'cpu' # half precision only supported on CUDA model.half() if half else model.float() else: # called directly @@ -140,149 +123,130 @@ def run( (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir # Load model - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine + model = DetectMultiBackend(weights, device=device, dnn=dnn) + stride, pt = model.stride, model.pt imgsz = check_img_size(imgsz, s=stride) # check image size - half = model.fp16 # FP16 supported on limited backends with CUDA - if engine: - batch_size = model.batch_size + half &= pt and device.type != 'cpu' # half precision only supported by PyTorch on CUDA + if pt: + model.model.half() if half else model.model.float() else: - device = model.device - if not (pt or jit): - batch_size = 1 # export.py models default to batch-size 1 - LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models') + half = False + batch_size = 1 # export.py models default to batch-size 1 + device = torch.device('cpu') + LOGGER.info(f'Forcing --batch-size 1 square inference shape(1,3,{imgsz},{imgsz}) for non-PyTorch backends') # Data data = check_dataset(data) # check # Configure model.eval() - cuda = device.type != 'cpu' - is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt') # COCO dataset + is_coco = isinstance(data.get('val'), str) and data['val'].endswith('coco/val2017.txt') # COCO dataset nc = 1 if single_cls else int(data['nc']) # number of classes - iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for mAP@0.5:0.95 + iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95 niou = iouv.numel() # Dataloader if not training: - if pt and not single_cls: # check --weights are trained on --data - ncm = model.model.nc - assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \ - f'classes). Pass correct combination of --weights and --data that are trained together.' 
- model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz)) # warmup - pad, rect = (0.0, False) if task == 'speed' else (0.5, pt) # square inference for benchmarks + if pt and device.type != 'cpu': + model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.model.parameters()))) # warmup + pad = 0.0 if task == 'speed' else 0.5 task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images - dataloader = create_dataloader(data[task], - imgsz, - batch_size, - stride, - single_cls, - pad=pad, - rect=rect, - workers=workers, + dataloader = create_dataloader(data[task], imgsz, batch_size, stride, single_cls, pad=pad, rect=pt, prefix=colorstr(f'{task}: '))[0] seen = 0 confusion_matrix = ConfusionMatrix(nc=nc) - names = model.names if hasattr(model, 'names') else model.module.names # get class names - if isinstance(names, (list, tuple)): # old format - names = dict(enumerate(names)) + names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)} class_map = coco80_to_coco91_class() if is_coco else list(range(1000)) - s = ('%22s' + '%11s' * 6) % ('Class', 'Images', 'Instances', 'P', 'R', 'mAP50', 'mAP50-95') - tp, fp, p, r, f1, mp, mr, map50, ap50, map = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 - dt = Profile(), Profile(), Profile() # profiling times + s = ('%20s' + '%11s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') + dt, p, r, f1, mp, mr, map50, map = [0.0, 0.0, 0.0], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 loss = torch.zeros(3, device=device) jdict, stats, ap, ap_class = [], [], [], [] - callbacks.run('on_val_start') - pbar = tqdm(dataloader, desc=s, bar_format=TQDM_BAR_FORMAT) # progress bar + pbar = tqdm(dataloader, desc=s, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar for batch_i, (im, targets, paths, shapes) in enumerate(pbar): - callbacks.run('on_val_batch_start') - with dt[0]: - if cuda: - im = im.to(device, non_blocking=True) - targets = targets.to(device) - im = im.half() if half else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - nb, _, height, width = im.shape # batch size, channels, height, width + t1 = time_sync() + if pt: + im = im.to(device, non_blocking=True) + targets = targets.to(device) + im = im.half() if half else im.float() # uint8 to fp16/32 + im /= 255 # 0 - 255 to 0.0 - 1.0 + nb, _, height, width = im.shape # batch size, channels, height, width + t2 = time_sync() + dt[0] += t2 - t1 # Inference - with dt[1]: - preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None) + out, train_out = model(im) if training else model(im, augment=augment, val=True) # inference, loss outputs + dt[1] += time_sync() - t2 # Loss if compute_loss: - loss += compute_loss(train_out, targets)[1] # box, obj, cls + loss += compute_loss([x.float() for x in train_out], targets)[1] # box, obj, cls # NMS - targets[:, 2:] *= torch.tensor((width, height, width, height), device=device) # to pixels + targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device) # to pixels lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling - with dt[2]: - preds = non_max_suppression(preds, - conf_thres, - iou_thres, - labels=lb, - multi_label=True, - agnostic=single_cls, - max_det=max_det) + t3 = time_sync() + out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls) + dt[2] += time_sync() - t3 # Metrics - for si, pred in 
enumerate(preds): + for si, pred in enumerate(out): labels = targets[targets[:, 0] == si, 1:] - nl, npr = labels.shape[0], pred.shape[0] # number of labels, predictions + nl = len(labels) + tcls = labels[:, 0].tolist() if nl else [] # target class path, shape = Path(paths[si]), shapes[si][0] - correct = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init seen += 1 - if npr == 0: + if len(pred) == 0: if nl: - stats.append((correct, *torch.zeros((2, 0), device=device), labels[:, 0])) - if plots: - confusion_matrix.process_batch(detections=None, labels=labels[:, 0]) + stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls)) continue # Predictions if single_cls: pred[:, 5] = 0 predn = pred.clone() - scale_boxes(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred + scale_coords(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred # Evaluate if nl: tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_boxes(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels + scale_coords(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels correct = process_batch(predn, labelsn, iouv) if plots: confusion_matrix.process_batch(predn, labelsn) - stats.append((correct, pred[:, 4], pred[:, 5], labels[:, 0])) # (correct, conf, pcls, tcls) + else: + correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool) + stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls)) # (correct, conf, pcls, tcls) # Save/log if save_txt: - save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt') + save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / (path.stem + '.txt')) if save_json: save_one_json(predn, jdict, path, class_map) # append to COCO-JSON dictionary callbacks.run('on_val_image_end', pred, predn, path, names, im[si]) # Plot images if plots and batch_i < 3: - plot_images(im, targets, paths, save_dir / f'val_batch{batch_i}_labels.jpg', names) # labels - plot_images(im, output_to_target(preds), paths, save_dir / f'val_batch{batch_i}_pred.jpg', names) # pred - - callbacks.run('on_val_batch_end', batch_i, im, targets, paths, shapes, preds) + f = save_dir / f'val_batch{batch_i}_labels.jpg' # labels + Thread(target=plot_images, args=(im, targets, paths, f, names), daemon=True).start() + f = save_dir / f'val_batch{batch_i}_pred.jpg' # predictions + Thread(target=plot_images, args=(im, output_to_target(out), paths, f, names), daemon=True).start() # Compute metrics - stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)] # to numpy + stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy if len(stats) and stats[0].any(): - tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names) + p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names) ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95 mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() - nt = np.bincount(stats[3].astype(int), minlength=nc) # number of targets per class + nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class + else: + nt = torch.zeros(1) # Print results - pf = '%22s' + '%11i' * 2 + '%11.3g' * 4 # print format + pf = '%20s' + '%11i' * 2 + '%11.3g' * 4 # print format LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) - if nt.sum() == 0: - LOGGER.warning(f'WARNING âš ī¸ no labels 
found in {task} set, can not compute metrics without labels') # Print results per class if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats): @@ -290,7 +254,7 @@ def run( LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) # Print speeds - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image + t = tuple(x / seen * 1E3 for x in dt) # speeds per image if not training: shape = (batch_size, 3, imgsz, imgsz) LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t) @@ -298,19 +262,19 @@ def run( # Plots if plots: confusion_matrix.plot(save_dir=save_dir, names=list(names.values())) - callbacks.run('on_val_end', nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix) + callbacks.run('on_val_end') # Save JSON if save_json and len(jdict): w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path('../datasets/coco/annotations/instances_val2017.json')) # annotations - pred_json = str(save_dir / f'{w}_predictions.json') # predictions + anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json + pred_json = str(save_dir / f"{w}_predictions.json") # predictions json LOGGER.info(f'\nEvaluating pycocotools mAP... saving {pred_json}...') with open(pred_json, 'w') as f: json.dump(jdict, f) try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb - check_requirements('pycocotools>=2.0.6') + check_requirements(['pycocotools']) from pycocotools.coco import COCO from pycocotools.cocoeval import COCOeval @@ -318,7 +282,7 @@ def run( pred = anno.loadRes(pred_json) # init predictions api eval = COCOeval(anno, pred, 'bbox') if is_coco: - eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files] # image IDs to evaluate + eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] # image IDs to evaluate eval.evaluate() eval.accumulate() eval.summarize() @@ -340,15 +304,13 @@ def run( def parse_opt(): parser = argparse.ArgumentParser() parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3-tiny.pt', help='model path(s)') + parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3.pt', help='model.pt path(s)') parser.add_argument('--batch-size', type=int, default=32, help='batch size') parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold') parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=300, help='maximum detections per image') parser.add_argument('--task', default='val', help='train, val, test, speed or study') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') parser.add_argument('--augment', action='store_true', help='augmented inference') parser.add_argument('--verbose', action='store_true', help='report mAP by class') @@ -365,31 +327,29 @@ def parse_opt(): opt.data = check_yaml(opt.data) # check YAML opt.save_json |= opt.data.endswith('coco.yaml') opt.save_txt |= opt.save_hybrid - print_args(vars(opt)) + print_args(FILE.stem, opt) return opt def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) + check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop')) if opt.task in ('train', 'val', 'test'): # run normally if opt.conf_thres > 0.001: # https://github.com/ultralytics/yolov5/issues/1466 - LOGGER.info(f'WARNING âš ī¸ confidence threshold {opt.conf_thres} > 0.001 produces invalid results') - if opt.save_hybrid: - LOGGER.info('WARNING âš ī¸ --save-hybrid will return high mAP from hybrid labels, not from predictions alone') + LOGGER.info(f'WARNING: confidence threshold {opt.conf_thres} >> 0.001 will produce invalid mAP values.') run(**vars(opt)) else: weights = opt.weights if isinstance(opt.weights, list) else [opt.weights] - opt.half = torch.cuda.is_available() and opt.device != 'cpu' # FP16 for fastest results + opt.half = True # FP16 for fastest results if opt.task == 'speed': # speed benchmarks - # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt... + # python val.py --task speed --data coco.yaml --batch 1 --weights yolov3.pt yolov3-spp.pt... opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False for opt.weights in weights: run(**vars(opt), plots=False) elif opt.task == 'study': # speed vs mAP benchmarks - # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt... + # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov3.pt yolov3-spp.pt... for opt.weights in weights: f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt' # filename to save to x, y = list(range(256, 1536 + 128, 128)), [] # x axis (image sizes), y axis @@ -398,12 +358,10 @@ def main(opt): r, _, t = run(**vars(opt), plots=False) y.append(r + t) # results and times np.savetxt(f, y, fmt='%10.4g') # save - subprocess.run('zip -r study.zip study_*.txt'.split()) + os.system('zip -r study.zip study_*.txt') plot_val_study(x=x) # plot - else: - raise NotImplementedError(f'--task {opt.task} not in ("train", "val", "test", "speed", "study")') -if __name__ == '__main__': +if __name__ == "__main__": opt = parse_opt() main(opt)
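
The `fuse_conv_and_bn()` hunk above folds a `BatchNorm2d` into the preceding `Conv2d` by rescaling the convolution weights with `gamma / sqrt(running_var + eps)` and absorbing the shifted running mean into the bias. A minimal standalone check of that identity, using default PyTorch layers and made-up shapes rather than code from this repository:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(8).eval()                  # eval mode: running statistics are used
bn.running_mean.uniform_(-1, 1)                # fake "trained" statistics for the demo
bn.running_var.uniform_(0.5, 2.0)

# Fold BN into the conv:  w' = diag(gamma / sqrt(var + eps)) @ w,   b' = beta - gamma * mean / sqrt(var + eps)
fused = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=True)
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused.weight.data.copy_((torch.diag(scale) @ conv.weight.view(8, -1)).view(fused.weight.shape))
fused.bias.data.copy_(bn.bias - scale * bn.running_mean)   # conv has no bias, so only the BN shift remains

x = torch.randn(2, 3, 32, 32)
with torch.no_grad():
    print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))  # True: one fused conv replaces conv + BN
```

Fusion is an inference-time optimization: it only matches BatchNorm in `eval()` mode, where the layer normalizes with its stored running statistics.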
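
The `ModelEMA` hunk keeps a shadow copy of the model and, after every optimizer step, blends it toward the live weights with a decay that ramps up as `decay * (1 - exp(-updates / 2000))`, so the average follows the model closely early in training and changes very slowly later on. A rough illustration of the ramp and the per-tensor update rule (toy values, not repository code):

```python
import math
import torch

decay = 0.9999
ramp = lambda updates: decay * (1 - math.exp(-updates / 2000))  # effective decay after `updates` steps
for updates in (1, 100, 1000, 5000, 20000):
    print(f'{updates:>6d} updates -> effective decay {ramp(updates):.4f}')
# prints ~0.0005, 0.0488, 0.3934, 0.9178, 0.9999

# Update applied to every floating-point entry of the EMA state_dict: ema = d * ema + (1 - d) * model
ema_w, model_w = torch.zeros(3), torch.ones(3)
d = ramp(100)
ema_w = d * ema_w + (1 - d) * model_w
print(ema_w)  # tensor([0.9512, 0.9512, 0.9512]) -- mostly the live weights this early in training
```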
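
In `val.py`, `process_batch()` produces the boolean `correct` matrix that `ap_per_class()` later turns into precision/recall and mAP: each detection gets one flag per IoU threshold in `iouv` (0.50 to 0.95), set when a ground-truth box of the same class overlaps it at least that much. A toy walk-through of that matching, with the IoU helper written inline so the snippet stands alone (hand-made boxes, not repository code; the de-duplication of multiple matches is omitted):

```python
import torch

def box_iou(box1, box2):
    # pairwise IoU between (N, 4) and (M, 4) boxes in (x1, y1, x2, y2) format -> (N, M)
    area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
    area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
    lt = torch.max(box1[:, None, :2], box2[:, :2])
    rb = torch.min(box1[:, None, 2:], box2[:, 2:])
    inter = (rb - lt).clamp(0).prod(2)
    return inter / (area1[:, None] + area2 - inter)

labels = torch.tensor([[0., 10., 10., 50., 50.]])                 # class, x1, y1, x2, y2
detections = torch.tensor([[12., 12., 48., 48., 0.9, 0.],         # overlaps the label at IoU ~0.81
                           [60., 60., 90., 90., 0.8, 0.]])        # no overlap with any label
iouv = torch.linspace(0.5, 0.95, 10)                              # mAP@0.5:0.95 thresholds

iou = box_iou(labels[:, 1:], detections[:, :4])
correct = torch.zeros(detections.shape[0], iouv.numel(), dtype=torch.bool)
x = torch.where((iou >= iouv[0]) & (labels[:, 0:1] == detections[:, 5]))  # class match and IoU >= 0.5
if x[0].shape[0]:
    matches = torch.cat((torch.stack(x, 1).float(), iou[x[0], x[1]][:, None]), 1)  # [label, detection, iou]
    correct[matches[:, 1].long()] = matches[:, 2:3] >= iouv       # flag the IoU levels each match clears
print(correct)  # row 0: True up to the 0.80 threshold, False above; row 1: all False
```

Stacking these per-image rows across the dataset, together with the detection confidences and classes, is exactly what the `stats` list in the validation loop accumulates before computing metrics.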