yolo with runs.zip file; exp 14 has the best weights

This commit is contained in:
Apoorva Gupta 2023-02-21 21:43:51 +05:30
parent 34abb2b0dd
commit dbe80aca78
43 changed files with 3670 additions and 5669 deletions


@@ -1,6 +1,6 @@
## Contributing to YOLOv3 🚀
We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible, whether it's:
- Reporting a bug
- Discussing the current state of the code
@@ -8,7 +8,7 @@ We love your input! We want to make contributing to as easy and transparent as p
- Proposing a new feature
- Becoming a maintainer
YOLOv3 works so well due to our combined community effort, and for every small improvement you contribute you will be
helping push the frontiers of what's possible in AI 😃!
## Submitting a Pull Request (PR) 🛠️
@@ -18,72 +18,73 @@ Submitting a PR is easy! This example shows how to submit a PR for updating `req
### 1. Select File to Update
Select `requirements.txt` to update by clicking on it in GitHub.
<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
### 2. Click 'Edit this file'
The button is in the top-right corner.
<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
### 3. Make Changes
Change the `matplotlib` version from `3.2.2` to `3.3`.
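In `requirements.txt` this is a one-line edit. Assuming the usual `>=` pin used by the file, the change looks like:

```text
matplotlib>=3.2.2    # before
matplotlib>=3.3      # after
```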
<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p> <p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
### 4. Preview Changes and Submit PR ### 4. Preview Changes and Submit PR
Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
changes** button. All done, your PR is now submitted to for review and approval 😃! changes** button. All done, your PR is now submitted to YOLOv3 for review and approval 😃!
<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p> <p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
### PR recommendations ### PR recommendations
To allow your work to be integrated as seamlessly as possible, we advise you to: To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master an
automatic [GitHub actions](https://github.com/ultralytics/yolov3/blob/master/.github/workflows/rebase.yml) rebase may
be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature'
with the name of your local branch:
```bash
git remote add upstream https://github.com/ultralytics/yolov3.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
```
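After the force-push completes, refresh the PR page: the diff should then contain only your feature commits replayed on top of the current `master`.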
- ✅ Verify all Continuous Integration (CI) **checks are passing**.
<p align="center"><img width="751" alt="Screenshot 2022-08-29 at 22 47 03" src="https://user-images.githubusercontent.com/26833433/187296922-545c5498-f64a-4d8c-8300-5fa764360da6.png"></p>
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
## Submitting a Bug Report 🐛
If you spot a problem with YOLOv3 please submit a Bug Report!
For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few
short guidelines below to help users provide what we need to get started.
When asking a question, people will be better able to provide help if you provide **code** that they can easily
understand and use to **reproduce** the problem. This is referred to by community members as creating
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
the problem should be:
- **Minimal** Use as little code as possible that still produces the same problem
- **Complete** Provide **all** parts someone else needs to reproduce your problem in the question itself
- **Reproducible** Test the code you're about to provide to make sure it reproduces the problem
In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
should be:
- **Current** Verify that your code is up-to-date with the current
GitHub [master](https://github.com/ultralytics/yolov3/tree/master), and if necessary `git pull` or `git clone` a new
copy to ensure your problem has not already been resolved by previous commits.
- **Unmodified** Your problem must be reproducible without any modifications to the codebase in this
repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
**Bug Report** [template](https://github.com/ultralytics/yolov3/issues/new/choose) and provide
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
understand and diagnose your problem.
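As an illustration, a minimal reproducible example for this repository can be as small as the sketch below (the weights name and image URL are placeholders; the load call mirrors the Inference example in the README):

```python
# Hypothetical minimal reproducible example for a bug report.
# Assumes an up-to-date clone of ultralytics/yolov3 master and internet access.
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # pretrained model from PyTorch Hub
img = 'https://ultralytics.com/images/zidane.jpg'  # any public image that triggers the problem
results = model(img)
results.print()  # state here what you expected vs. what actually happened
```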


@@ -1,103 +1,94 @@
<div align="center">
<p>
<a align="left" href="https://ultralytics.com/yolov3" target="_blank">
<img width="850" src="https://user-images.githubusercontent.com/26833433/99805965-8f2ca800-2b3d-11eb-8fad-13a96b222a23.jpg"></a>
</p>
[English](README.md) | [简体中文](README.zh-CN.md)
<br>
<div>
<a href="https://github.com/ultralytics/yolov3/actions"><img src="https://github.com/ultralytics/yolov3/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv3 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov3"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov3?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov3"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<a href="https://join.slack.com/t/ultralytics/shared_invite/zt-w29ei8bp-jczz7QYUmDtgo6r6KcMIAg"><img src="https://img.shields.io/badge/Slack-Join_Forum-blue.svg?logo=slack" alt="Join Forum"></a>
</div>
<br>
YOLOv3 🚀 is the world's most loved vision AI, representing <a href="https://ultralytics.com">Ultralytics</a> open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
To request an Enterprise License please complete the form at <a href="https://ultralytics.com/license">Ultralytics Licensing</a>.
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
</div>
</div>
<br>
## <div align="center">YOLOv8 🚀 NEW</div>
We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our NEW cutting-edge, state-of-the-art (SOTA) model
released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**.
YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of
object detection, image segmentation and image classification tasks.
See the [YOLOv8 Docs](https://docs.ultralytics.com) for details and get started with:
```commandline
pip install ultralytics
```
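As a quick sketch of the new package's Python API (model name and image URL are illustrative):

```python
# Minimal YOLOv8 usage sketch, assuming `pip install ultralytics` has been run.
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # load a pretrained YOLOv8 nano model
results = model('https://ultralytics.com/images/zidane.jpg')  # run detection on one image
```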
<div align="center">
<a href="https://ultralytics.com/yolov8" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/yolo-comparison-plots.png"></a>
<a href="https://github.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="2%"/>
</a>
</a>
<img width="2%" />
<a href="https://www.linkedin.com/company/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://twitter.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://youtube.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.facebook.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.instagram.com/ultralytics/">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="2%"/>
</a>
</div>
<br>
<p>
YOLOv3 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <a href="https://ultralytics.com">Ultralytics</a>
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
</p>
<!--
<a align="center" href="https://ultralytics.com/yolov3" target="_blank">
<img width="800" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-api.png"></a>
-->
</div>
## <div align="center">Documentation</div>
See the [YOLOv3 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. See below for quickstart examples.
## <div align="center">Quick Start Examples</div>
<details open>
<summary>Install</summary>
[**Python>=3.6.0**](https://www.python.org/) is required with all
[requirements.txt](https://github.com/ultralytics/yolov3/blob/master/requirements.txt) installed including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
<!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->
```bash
$ git clone https://github.com/ultralytics/yolov3
$ cd yolov3
$ pip install -r requirements.txt
```
</details>
<details open>
<summary>Inference</summary>
Inference with YOLOv3 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases).
```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # or yolov3-spp, yolov3-tiny, custom

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)
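# Results (final line taken from the hunk context below; it closes this truncated block)
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```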
@@ -108,21 +99,20 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
</details>
<details>
<summary>Inference with detect.py</summary>
`detect.py` runs inference on a variety of sources, downloading models automatically from
the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases) and saving results to `runs/detect`.
```bash
$ python detect.py --source 0  # webcam
                            img.jpg  # image
                            vid.mp4  # video
                            screen  # screenshot
                            path/  # directory
                            list.txt  # list of images
                            list.streams  # list of streams
                            path/*.jpg  # glob
                            'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                            'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```
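Each invocation saves to an auto-incrementing directory under `runs/detect` (`exp`, `exp2`, `exp3`, and so on); training uses the same scheme under `runs/train`, which is presumably where a folder such as the `exp 14` named in this commit's message comes from.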
@@ -132,21 +122,7 @@ python detect.py --weights yolov5s.pt --source 0  #
<details>
<summary>Training</summary>
The commands below reproduce YOLOv3 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
YOLOv3 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster). Use the
largest `--batch-size` possible, or pass `--batch-size -1` for
YOLOv3 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.
```bash
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
yolov5s 64
yolov5m 40
yolov5l 24
yolov5x 16
```
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png"> <img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
@@ -155,270 +131,20 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml -
<details open>
<summary>Tutorials</summary>
* [Train Custom Data](https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data)&nbsp; 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov3/wiki/Tips-for-Best-Training-Results)&nbsp; ☘️ RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)&nbsp; 🌟 NEW
* [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975)&nbsp; 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)&nbsp; ⭐ NEW
* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
* [NVIDIA Jetson Nano Deployment](https://github.com/ultralytics/yolov5/issues/9627) 🌟 NEW
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)&nbsp; ⭐ NEW
* [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998) 🌟 NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
* [ClearML Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) 🌟 NEW
* [YOLOv3 with Neural Magic's Deepsparse](https://bit.ly/yolov5-neuralmagic) 🌟 NEW
* [Comet Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/comet) 🌟 NEW
</details>
## <div align="center">Integrations</div>
<br>
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov3/banner-integrations.png"></a>
<br>
<br>
<div align="center">
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-roboflow.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://cutt.ly/yolov5-readme-clearml">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-clearml.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-readme-comet2">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-neuralmagic">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" /></a>
</div>
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
| Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv3 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv3 models, resume training, and interactively visualise and debug predictions | Run YOLOv3 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
Experience seamless AI with [Ultralytics HUB](https://bit.ly/ultralytics_hub) ⭐, the all-in-one solution for data visualization, YOLO 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://ultralytics.com/app_install). Start your journey for **Free** now!
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
## <div align="center">Why YOLOv3</div>
YOLOv3 has been designed to be super easy to get started and simple to learn. We prioritize real-world results.
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
<details>
<summary>YOLOv3-P5 640 Figure</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
</details>
<details>
<summary>Figure Notes</summary>
- **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
- **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
### Pretrained Checkpoints
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ----------------------------------------------------------------------------------------------- | --------------------- | -------------------- | ----------------- | ---------------------------- | ----------------------------- | ------------------------------ | ------------------ | ---------------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [TTA] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
<details>
<summary>Table Notes</summary>
- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">Segmentation</div>
Our new YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) instance segmentation models are the fastest and most accurate in the world, beating all current [SOTA benchmarks](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco). We've made them super simple to train, validate and deploy. See full details in our [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and visit our [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart tutorials.
<details>
<summary>Segmentation Checkpoints</summary>
<div align="center">
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png"></a>
</div>
We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for easy reproducibility.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------ | ------------------------------ | ------------------ | ---------------------- |
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
- All checkpoints are trained to 300 epochs with SGD optimizer with `lr0=0.01` and `weight_decay=5e-5` at image size 640 and all default settings.<br>Runs logged to https://wandb.ai/glenn-jocher/YOLOv5_v70_official
- **Accuracy** values are for single-model single-scale on COCO dataset.<br>Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** averaged over 100 inference images using a [Colab Pro](https://colab.research.google.com/signup) A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1ms per image). <br>Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`. <br>Reproduce by `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
</details>
<details>
<summary>Segmentation Usage Examples &nbsp;<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/segment/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>
### Train
YOLOv5 segmentation training supports auto-download of the COCO128-seg segmentation dataset with the `--data coco128-seg.yaml` argument, and manual download of the COCO-segments dataset with `bash data/scripts/get_coco.sh --train --val --segments` followed by `python train.py --data coco.yaml`.
```bash
# Single-GPU
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
```
### Val
Validate YOLOv5s-seg mask mAP on COCO dataset:
```bash
bash data/scripts/get_coco.sh --val --segments # download COCO val segments split (780MB, 5000 images)
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # validate
```
### Predict
Use pretrained YOLOv5m-seg.pt to predict bus.jpg:
```bash
python segment/predict.py --weights yolov5m-seg.pt --data data/images/bus.jpg
```
```python
import torch

model = torch.hub.load(
    "ultralytics/yolov5", "custom", "yolov5m-seg.pt"
)  # load from PyTorch Hub (WARNING: inference not yet supported)
```
| ![zidane](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![bus](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) |
| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
### Export
Export YOLOv5s-seg model to ONNX and TensorRT:
```bash
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
```
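Once exported, the ONNX file can be smoke-tested without PyTorch. A minimal sketch, assuming `onnxruntime` is installed and the default 640x640 export size:

```python
# Hedged sketch: run the exported ONNX segmentation model with onnxruntime.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('yolov5s-seg.onnx')  # file produced by export.py above
x = np.zeros((1, 3, 640, 640), dtype=np.float32)  # dummy NCHW input at the export size
outputs = sess.run(None, {sess.get_inputs()[0].name: x})
print([o.shape for o in outputs])  # detection output and prototype masks
```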
</details>
## <div align="center">Classification</div>
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) brings support for classification model training, validation and deployment! See full details in our [Release Notes](https://github.com/ultralytics/yolov5/releases/v6.2) and visit our [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart tutorials.
<details>
<summary>Classification Checkpoints</summary>
<br>
We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and we trained ResNet and EfficientNet models alongside with the same default training settings to compare. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) for easy reproducibility.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| -------------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | -------------------------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ---------------------- |
| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |
| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 |
| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 |
| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 |
| | | | | | | | | |
| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 |
| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 |
| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 |
| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 |
| | | | | | | | | |
| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 |
| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 |
| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 |
| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 |
<details>
<summary>Table Notes (click to expand)</summary>
- All checkpoints are trained to 90 epochs with SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at image size 224 and all default settings.<br>Runs logged to https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2
- **Accuracy** values are for single-model single-scale on [ImageNet-1k](https://www.image-net.org/index.php) dataset.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`. <br>Reproduce by `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
</details>
</details>
<details>
<summary>Classification Usage Examples &nbsp;<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/classify/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>
### Train
YOLOv5 classification training supports auto-download of MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the `--data` argument. To start training on MNIST, for example, use `--data mnist`.
```bash
# Single-GPU
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
```
### Val
Validate YOLOv5m-cls accuracy on ImageNet-1k dataset:
```bash
bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
```
### Predict
Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
```bash
python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg
```
```python
import torch

model = torch.hub.load(
    "ultralytics/yolov5", "custom", "yolov5s-cls.pt"
)  # load from PyTorch Hub
```
### Export
Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
```bash
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
```
</details>
@@ -427,67 +153,121 @@ python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --inclu
Get started in seconds with our verified environments. Click each icon below for details.
<div align="center">
<a href="https://bit.ly/yolov5-paperspace-notebook"> <a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gradient.png" width="10%" /></a> <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-colab-small.png" width="15%"/>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" /> </a>
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"> <a href="https://www.kaggle.com/ultralytics/yolov3">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-colab-small.png" width="10%" /></a> <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-kaggle-small.png" width="15%"/>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" /> </a>
<a href="https://www.kaggle.com/ultralytics/yolov5"> <a href="https://hub.docker.com/r/ultralytics/yolov3">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-kaggle-small.png" width="10%" /></a> <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-docker-small.png" width="15%"/>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" /> </a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"> <a href="https://github.com/ultralytics/yolov3/wiki/AWS-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-docker-small.png" width="10%" /></a> <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-aws-small.png" width="15%"/>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" /> </a>
<a href="https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart"> <a href="https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-aws-small.png" width="10%" /></a> <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gcp-small.png" width="15%"/>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" /> </a>
<a href="https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gcp-small.png" width="10%" /></a>
</div>
## <div align="center">Integrations</div>
<div align="center">
<a href="https://wandb.ai/site?utm_campaign=repo_yolo_readme">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-wb-long.png" width="49%"/>
</a>
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-roboflow-long.png" width="49%"/>
</a>
</div>
|Weights and Biases|Roboflow ⭐ NEW|
|:-:|:-:|
|Automatically track and visualize all your YOLOv3 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) |
## <div align="center">Why YOLOv5</div>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/136901921-abcfcd9d-f978-4942-9b97-0e3f202907df.png"></p>
<details>
<summary>YOLOv3-P5 640 Figure (click to expand)</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/136763877-b174052b-c12f-48d2-8bc4-545e3853398e.png"></p>
</details>
<details>
<summary>Figure Notes (click to expand)</summary>
* **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
* **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
* **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
### Pretrained Checkpoints
[assets]: https://github.com/ultralytics/yolov5/releases
[TTA]: https://github.com/ultralytics/yolov5/issues/303
|Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)
|--- |--- |--- |--- |--- |--- |--- |--- |---
|[YOLOv5n][assets] |640 |28.4 |46.0 |**45** |**6.3**|**0.6**|**1.9**|**4.5**
|[YOLOv5s][assets] |640 |37.2 |56.0 |98 |6.4 |0.9 |7.2 |16.5
|[YOLOv5m][assets] |640 |45.2 |63.9 |224 |8.2 |1.7 |21.2 |49.0
|[YOLOv5l][assets] |640 |48.8 |67.2 |430 |10.1 |2.7 |46.5 |109.1
|[YOLOv5x][assets] |640 |50.7 |68.9 |766 |12.1 |4.8 |86.7 |205.7
| | | | | | | | |
|[YOLOv5n6][assets] |1280 |34.0 |50.7 |153 |8.1 |2.1 |3.2 |4.6
|[YOLOv5s6][assets] |1280 |44.5 |63.0 |385 |8.2 |3.6 |12.6 |16.8
|[YOLOv5m6][assets] |1280 |51.0 |69.0 |887 |11.1 |6.8 |35.7 |50.0
|[YOLOv5l6][assets] |1280 |53.6 |71.6 |1784 |15.8 |10.5 |76.8 |111.4
|[YOLOv5x6][assets]<br>+ [TTA][TTA]|1280<br>1536 |54.7<br>**55.4** |**72.4**<br>72.3 |3136<br>- |26.2<br>- |19.4<br>- |140.7<br>- |209.8<br>-
<details>
<summary>Table Notes (click to expand)</summary>
* All checkpoints are trained to 300 epochs with default settings and hyperparameters.
* **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">Contribute</div> ## <div align="center">Contribute</div>
We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv3 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors! We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv3 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
<a href="https://github.com/ultralytics/yolov3/graphs/contributors"><img src="https://opencollective.com/ultralytics/contributors.svg?width=990" /></a>
## <div align="center">License</div>
YOLOv3 is available under two different licenses:
- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
## <div align="center">Contact</div> ## <div align="center">Contact</div>
For YOLOv3 bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues) or the [Ultralytics Community Forum](https://community.ultralytics.com/). For YOLOv3 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov3/issues). For business inquiries or
professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
<br>
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="3%" alt="" /></a>
</div>
[tta]: https://github.com/ultralytics/yolov5/issues/303
<div align="center">
<a href="https://github.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.linkedin.com/company/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://twitter.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://youtube.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.facebook.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.instagram.com/ultralytics/">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="3%"/>
</a>
</div>


@@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/ by Argo AI
# Example usage: python train.py --data Argoverse.yaml
# parent
# ├── yolov3
# └── datasets
#     └── Argoverse ← downloads here (31.3 GB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@@ -14,15 +14,8 @@ val: Argoverse-1.1/images/val/  # val images (relative to 'path') 15062 images
test: Argoverse-1.1/images/test/  # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview
# Classes
nc: 8  # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign']  # class names
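Before launching training, it can be worth sanity-checking that `nc` matches the number of entries in `names`. A minimal sketch, assuming PyYAML (installed via requirements.txt) and the `data/Argoverse.yaml` path implied by the example usage above:

```python
# Sketch: verify nc == len(names) in a dataset YAML before running train.py.
import yaml

with open('data/Argoverse.yaml', errors='ignore') as f:
    data = yaml.safe_load(f)

assert data['nc'] == len(data['names']), 'nc must equal the number of class names'
print(data['nc'], data['names'])
```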
# Download script/URL (optional) ---------------------------------------------------------------------------------------
@ -39,7 +32,7 @@ download: |
for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv3 format..."): for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv3 format..."):
img_id = annot['image_id'] img_id = annot['image_id']
img_name = a['images'][img_id]['name'] img_name = a['images'][img_id]['name']
img_label_name = f'{img_name[:-3]}txt' img_label_name = img_name[:-3] + "txt"
cls = annot['category_id'] # instance class id cls = annot['category_id'] # instance class id
x_center, y_center, width, height = annot['bbox'] x_center, y_center, width, height = annot['bbox']
@ -63,7 +56,7 @@ download: |
# Download # Download
dir = Path(yaml['path']) # dataset root dir dir = Path('../datasets/Argoverse') # dataset root dir
urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip'] urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']
download(urls, dir=dir, delete=False) download(urls, dir=dir, delete=False)
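Both sides of this converter emit standard YOLO label files: one line per object, class id followed by a box normalized to the image size. A minimal sketch of that output format, with to_yolo_line as an illustrative helper (not a repo function):

def to_yolo_line(cls_id, x_center, y_center, width, height, img_w, img_h):
    # normalize a pixel-space center/size box to [0, 1] and format one label line
    vals = (x_center / img_w, y_center / img_h, width / img_w, height / img_h)
    return ' '.join([str(cls_id)] + [f'{v:.6f}' for v in vals])

# to_yolo_line(2, 960, 600, 120, 80, 1920, 1200) -> '2 0.500000 0.500000 0.062500 0.066667'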
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Global Wheat 2020 dataset http://www.global-wheat.com/ by University of Saskatchewan # Global Wheat 2020 dataset http://www.global-wheat.com/
# Example usage: python train.py --data GlobalWheat2020.yaml # Example usage: python train.py --data GlobalWheat2020.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── GlobalWheat2020 ← downloads here (7.0 GB) # └── GlobalWheat2020 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -26,15 +26,14 @@ test: # test images (optional) 1276 images
- images/uq_1 - images/uq_1
# Classes # Classes
names: nc: 1 # number of classes
0: wheat_head names: ['wheat_head'] # class names
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
download: | download: |
from utils.general import download, Path from utils.general import download, Path
# Download # Download
dir = Path(yaml['path']) # dataset root dir dir = Path(yaml['path']) # dataset root dir
urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip', urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19 by Trax Retail # SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19
# Example usage: python train.py --data SKU-110K.yaml # Example usage: python train.py --data SKU-110K.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── SKU-110K ← downloads here (13.6 GB) # └── SKU-110K ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -14,8 +14,8 @@ val: val.txt # val images (relative to 'path') 588 images
test: test.txt # test images (optional) 2936 images test: test.txt # test images (optional) 2936 images
# Classes # Classes
names: nc: 1 # number of classes
0: object names: ['object'] # class names
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
@ -24,7 +24,6 @@ download: |
from tqdm import tqdm from tqdm import tqdm
from utils.general import np, pd, Path, download, xyxy2xywh from utils.general import np, pd, Path, download, xyxy2xywh
# Download # Download
dir = Path(yaml['path']) # dataset root dir dir = Path(yaml['path']) # dataset root dir
parent = Path(dir.parent) # download dir parent = Path(dir.parent) # download dir
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset by Tianjin University # VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset
# Example usage: python train.py --data VisDrone.yaml # Example usage: python train.py --data VisDrone.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── VisDrone ← downloads here (2.3 GB) # └── VisDrone ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -14,17 +14,8 @@ val: VisDrone2019-DET-val/images # val images (relative to 'path') 548 images
test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images
# Classes # Classes
names: nc: 10 # number of classes
0: pedestrian names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']
1: people
2: bicycle
3: car
4: van
5: truck
6: tricycle
7: awning-tricycle
8: bus
9: motor
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
@ -63,7 +54,7 @@ download: |
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip', 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip', 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip'] 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
download(urls, dir=dir, curl=True, threads=4) download(urls, dir=dir)
# Convert # Convert
for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev': for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
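The new-side call adds curl=True, threads=4, fetching the VisDrone zips in parallel. A usage sketch, assuming utils.general.download keeps the semantics these YAML download blocks rely on (fetch, optionally unzip, optionally delete the archive):

from pathlib import Path
from utils.general import download  # repo helper used by all of these download blocks

dir = Path('../datasets/VisDrone')
urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip']
download(urls, dir=dir, curl=True, threads=4)  # curl=True shells out to curl; threads parallelizes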
View File
@ -1,107 +1,35 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# COCO 2017 dataset http://cocodataset.org by Microsoft # COCO 2017 dataset http://cocodataset.org
# Example usage: python train.py --data coco.yaml # Example usage: python train.py --data coco.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── coco ← downloads here (20.1 GB) # └── coco ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco # dataset root dir path: ../datasets/coco # dataset root dir
train: train2017.txt # train images (relative to 'path') 118287 images train: train2017.txt # train images (relative to 'path') 118287 images
val: val2017.txt # val images (relative to 'path') 5000 images val: val2017.txt # train images (relative to 'path') 5000 images
test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794 test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
# Classes # Classes
names: nc: 80 # number of classes
0: person names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
1: bicycle 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
2: car 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
3: motorcycle 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
4: airplane 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
5: bus 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
6: train 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
7: truck 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
8: boat 'hair drier', 'toothbrush'] # class names
9: traffic light
10: fire hydrant
11: stop sign
12: parking meter
13: bench
14: bird
15: cat
16: dog
17: horse
18: sheep
19: cow
20: elephant
21: bear
22: zebra
23: giraffe
24: backpack
25: umbrella
26: handbag
27: tie
28: suitcase
29: frisbee
30: skis
31: snowboard
32: sports ball
33: kite
34: baseball bat
35: baseball glove
36: skateboard
37: surfboard
38: tennis racket
39: bottle
40: wine glass
41: cup
42: fork
43: knife
44: spoon
45: bowl
46: banana
47: apple
48: sandwich
49: orange
50: broccoli
51: carrot
52: hot dog
53: pizza
54: donut
55: cake
56: chair
57: couch
58: potted plant
59: bed
60: dining table
61: toilet
62: tv
63: laptop
64: mouse
65: remote
66: keyboard
67: cell phone
68: microwave
69: oven
70: toaster
71: sink
72: refrigerator
73: book
74: clock
75: vase
76: scissors
77: teddy bear
78: hair drier
79: toothbrush
# Download script/URL (optional) # Download script/URL (optional)
download: | download: |
from utils.general import download, Path from utils.general import download, Path
# Download labels # Download labels
segments = False # segment or box labels segments = False # segment or box labels
dir = Path(yaml['path']) # dataset root dir dir = Path(yaml['path']) # dataset root dir
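The names: change above is the schema migration this commit applies to every dataset YAML: the old form carried nc: plus a flat list, the new form is an index-to-name mapping with nc implied. A sketch of reading either form (file path assumed):

import yaml

with open('data/coco.yaml') as f:
    data = yaml.safe_load(f)

names = data['names']
if isinstance(names, dict):  # new style: {0: 'person', 1: 'bicycle', ...}
    names = [names[k] for k in sorted(names)]
nc = data.get('nc', len(names))  # nc is explicit only in the old style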
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics # COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: python train.py --data coco128.yaml # Example usage: python train.py --data coco128.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── coco128 ← downloads here (7 MB) # └── coco128 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -14,87 +14,16 @@ val: images/train2017 # val images (relative to 'path') 128 images
test: # test images (optional) test: # test images (optional)
# Classes # Classes
names: nc: 80 # number of classes
0: person names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
1: bicycle 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
2: car 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
3: motorcycle 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
4: airplane 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
5: bus 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
6: train 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
7: truck 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
8: boat 'hair drier', 'toothbrush'] # class names
9: traffic light
10: fire hydrant
11: stop sign
12: parking meter
13: bench
14: bird
15: cat
16: dog
17: horse
18: sheep
19: cow
20: elephant
21: bear
22: zebra
23: giraffe
24: backpack
25: umbrella
26: handbag
27: tie
28: suitcase
29: frisbee
30: skis
31: snowboard
32: sports ball
33: kite
34: baseball bat
35: baseball glove
36: skateboard
37: surfboard
38: tennis racket
39: bottle
40: wine glass
41: cup
42: fork
43: knife
44: spoon
45: bowl
46: banana
47: apple
48: sandwich
49: orange
50: broccoli
51: carrot
52: hot dog
53: pizza
54: donut
55: cake
56: chair
57: couch
58: potted plant
59: bed
60: dining table
61: toilet
62: tv
63: laptop
64: mouse
65: remote
66: keyboard
67: cell phone
68: microwave
69: oven
70: toaster
71: sink
72: refrigerator
73: book
74: clock
75: vase
76: scissors
77: teddy bear
78: hair drier
79: toothbrush
# Download script/URL (optional) # Download script/URL (optional)
View File
@ -0,0 +1,18 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Custom single-class pipe dataset (adapted from the coco.yaml template)
# Example usage: python train.py --data pipe.yaml (filename assumed)
# parent
# ├── yolov3
# └── pipe-dataset ← local data (no download script)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../pipe-dataset/ # dataset root dir
train: train/images # train images (relative to 'path')
val: val/images # val images (relative to 'path')
# test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
# Classes
nc: 1 # number of classes
names: ['pipe'] # class names
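This new file defines the custom single-class dataset the commit is built around. Assuming it is saved as data/pipe.yaml (the filename is not shown here), training on it follows the usage pattern the other YAML headers quote:

python train.py --data pipe.yaml --weights yolov3.pt --img 640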
View File
@ -4,7 +4,7 @@
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf) lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1 momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4 weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok) warmup_epochs: 3.0 # warmup epochs (fractions ok)
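Concretely, the lrf change above halves the schedule floor: with OneCycleLR the final learning rate is lr0 * lrf, i.e. 0.01 * 0.1 = 0.001 on the new side versus 0.01 * 0.2 = 0.002 before.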
View File
@ -0,0 +1,34 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for COCO training from scratch
# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)
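lr0 and lrf at the top of this new file set the endpoints of the schedule; a sketch of the linear decay train.py applies (assumed form, consistent with the "lr0 * lrf" comment above):

lr0, lrf, epochs = 0.01, 0.1, 300
lf = lambda x: (1 - x / epochs) * (1.0 - lrf) + lrf  # per-epoch LR multiplier
print(lr0 * lf(0))       # 0.01  at epoch 0
print(lr0 * lf(epochs))  # 0.001 = lr0 * lrf at the end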
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Objects365 dataset https://www.objects365.org/ by Megvii # Objects365 dataset https://www.objects365.org/
# Example usage: python train.py --data Objects365.yaml # Example usage: python train.py --data Objects365.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── Objects365 ← downloads here (712 GB = 367G data + 345G zips) # └── Objects365 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -14,382 +14,56 @@ val: images/val # val images (relative to 'path') 80000 images
test: # test images (optional) test: # test images (optional)
# Classes # Classes
names: nc: 365 # number of classes
0: Person names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
1: Sneakers 'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
2: Chair 'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
3: Other Shoes 'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
4: Hat 'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
5: Car 'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
6: Lamp 'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
7: Glasses 'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
8: Bottle 'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
9: Desk 'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
10: Cup 'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
11: Street Lights 'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
12: Cabinet/shelf 'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
13: Handbag/Satchel 'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
14: Bracelet 'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
15: Plate 'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
16: Picture/Frame 'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
17: Helmet 'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
18: Book 'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
19: Gloves 'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
20: Storage box 'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
21: Boat 'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
22: Leather Shoes 'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
23: Flower 'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
24: Bench 'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
25: Potted Plant 'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
26: Bowl/Basin 'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
27: Flag 'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
28: Pillow 'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
29: Boots 'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
30: Vase 'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
31: Microphone 'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
32: Necklace 'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
33: Ring 'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
34: SUV 'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
35: Wine Glass 'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
36: Belt 'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
37: Monitor/TV 'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
38: Backpack 'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
39: Umbrella 'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
40: Traffic Light 'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']
41: Speaker
42: Watch
43: Tie
44: Trash bin Can
45: Slippers
46: Bicycle
47: Stool
48: Barrel/bucket
49: Van
50: Couch
51: Sandals
52: Basket
53: Drum
54: Pen/Pencil
55: Bus
56: Wild Bird
57: High Heels
58: Motorcycle
59: Guitar
60: Carpet
61: Cell Phone
62: Bread
63: Camera
64: Canned
65: Truck
66: Traffic cone
67: Cymbal
68: Lifesaver
69: Towel
70: Stuffed Toy
71: Candle
72: Sailboat
73: Laptop
74: Awning
75: Bed
76: Faucet
77: Tent
78: Horse
79: Mirror
80: Power outlet
81: Sink
82: Apple
83: Air Conditioner
84: Knife
85: Hockey Stick
86: Paddle
87: Pickup Truck
88: Fork
89: Traffic Sign
90: Balloon
91: Tripod
92: Dog
93: Spoon
94: Clock
95: Pot
96: Cow
97: Cake
98: Dinning Table
99: Sheep
100: Hanger
101: Blackboard/Whiteboard
102: Napkin
103: Other Fish
104: Orange/Tangerine
105: Toiletry
106: Keyboard
107: Tomato
108: Lantern
109: Machinery Vehicle
110: Fan
111: Green Vegetables
112: Banana
113: Baseball Glove
114: Airplane
115: Mouse
116: Train
117: Pumpkin
118: Soccer
119: Skiboard
120: Luggage
121: Nightstand
122: Tea pot
123: Telephone
124: Trolley
125: Head Phone
126: Sports Car
127: Stop Sign
128: Dessert
129: Scooter
130: Stroller
131: Crane
132: Remote
133: Refrigerator
134: Oven
135: Lemon
136: Duck
137: Baseball Bat
138: Surveillance Camera
139: Cat
140: Jug
141: Broccoli
142: Piano
143: Pizza
144: Elephant
145: Skateboard
146: Surfboard
147: Gun
148: Skating and Skiing shoes
149: Gas stove
150: Donut
151: Bow Tie
152: Carrot
153: Toilet
154: Kite
155: Strawberry
156: Other Balls
157: Shovel
158: Pepper
159: Computer Box
160: Toilet Paper
161: Cleaning Products
162: Chopsticks
163: Microwave
164: Pigeon
165: Baseball
166: Cutting/chopping Board
167: Coffee Table
168: Side Table
169: Scissors
170: Marker
171: Pie
172: Ladder
173: Snowboard
174: Cookies
175: Radiator
176: Fire Hydrant
177: Basketball
178: Zebra
179: Grape
180: Giraffe
181: Potato
182: Sausage
183: Tricycle
184: Violin
185: Egg
186: Fire Extinguisher
187: Candy
188: Fire Truck
189: Billiards
190: Converter
191: Bathtub
192: Wheelchair
193: Golf Club
194: Briefcase
195: Cucumber
196: Cigar/Cigarette
197: Paint Brush
198: Pear
199: Heavy Truck
200: Hamburger
201: Extractor
202: Extension Cord
203: Tong
204: Tennis Racket
205: Folder
206: American Football
207: earphone
208: Mask
209: Kettle
210: Tennis
211: Ship
212: Swing
213: Coffee Machine
214: Slide
215: Carriage
216: Onion
217: Green beans
218: Projector
219: Frisbee
220: Washing Machine/Drying Machine
221: Chicken
222: Printer
223: Watermelon
224: Saxophone
225: Tissue
226: Toothbrush
227: Ice cream
228: Hot-air balloon
229: Cello
230: French Fries
231: Scale
232: Trophy
233: Cabbage
234: Hot dog
235: Blender
236: Peach
237: Rice
238: Wallet/Purse
239: Volleyball
240: Deer
241: Goose
242: Tape
243: Tablet
244: Cosmetics
245: Trumpet
246: Pineapple
247: Golf Ball
248: Ambulance
249: Parking meter
250: Mango
251: Key
252: Hurdle
253: Fishing Rod
254: Medal
255: Flute
256: Brush
257: Penguin
258: Megaphone
259: Corn
260: Lettuce
261: Garlic
262: Swan
263: Helicopter
264: Green Onion
265: Sandwich
266: Nuts
267: Speed Limit Sign
268: Induction Cooker
269: Broom
270: Trombone
271: Plum
272: Rickshaw
273: Goldfish
274: Kiwi fruit
275: Router/modem
276: Poker Card
277: Toaster
278: Shrimp
279: Sushi
280: Cheese
281: Notepaper
282: Cherry
283: Pliers
284: CD
285: Pasta
286: Hammer
287: Cue
288: Avocado
289: Hamimelon
290: Flask
291: Mushroom
292: Screwdriver
293: Soap
294: Recorder
295: Bear
296: Eggplant
297: Board Eraser
298: Coconut
299: Tape Measure/Ruler
300: Pig
301: Showerhead
302: Globe
303: Chips
304: Steak
305: Crosswalk Sign
306: Stapler
307: Camel
308: Formula 1
309: Pomegranate
310: Dishwasher
311: Crab
312: Hoverboard
313: Meat ball
314: Rice Cooker
315: Tuba
316: Calculator
317: Papaya
318: Antelope
319: Parrot
320: Seal
321: Butterfly
322: Dumbbell
323: Donkey
324: Lion
325: Urinal
326: Dolphin
327: Electric Drill
328: Hair Dryer
329: Egg tart
330: Jellyfish
331: Treadmill
332: Lighter
333: Grapefruit
334: Game board
335: Mop
336: Radish
337: Baozi
338: Target
339: French
340: Spring Rolls
341: Monkey
342: Rabbit
343: Pencil Case
344: Yak
345: Red Cabbage
346: Binoculars
347: Asparagus
348: Barbell
349: Scallop
350: Noddles
351: Comb
352: Dumpling
353: Oyster
354: Table Tennis paddle
355: Cosmetics Brush/Eyeliner Pencil
356: Chainsaw
357: Eraser
358: Lobster
359: Durian
360: Okra
361: Lipstick
362: Cosmetics Mirror
363: Curling
364: Table Tennis
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
download: | download: |
from pycocotools.coco import COCO
from tqdm import tqdm from tqdm import tqdm
from utils.general import Path, check_requirements, download, np, xyxy2xywhn from utils.general import Path, download, np, xyxy2xywhn
check_requirements(('pycocotools>=2.0',))
from pycocotools.coco import COCO
# Make Directories # Make Directories
dir = Path(yaml['path']) # dataset root dir dir = Path(yaml['path']) # dataset root dir
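The new side gates the pycocotools import behind check_requirements; the conversion below it walks annotations through the COCO API. A minimal sketch of that access pattern (annotation filename assumed):

from pycocotools.coco import COCO

coco = COCO('zhiyuan_objv2_train.json')  # Objects365 annotation file, name assumed
for img_id in coco.getImgIds()[:10]:  # a few images only
    img = coco.loadImgs(img_id)[0]
    for a in coco.loadAnns(coco.getAnnIds(imgIds=[img_id])):
        x, y, w, h = a['bbox']  # COCO boxes: top-left x, y, width, height in pixels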
View File
@ -1,22 +1,18 @@
#!/bin/bash #!/bin/bash
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Download latest models from https://github.com/ultralytics/yolov5/releases # Download latest models from https://github.com/ultralytics/yolov3/releases
# Example usage: bash data/scripts/download_weights.sh # Example usage: bash path/to/download_weights.sh
# parent # parent
# └── yolov5 # └── yolov3
# ├── yolov5s.pt ← downloads here # ├── yolov3.pt ← downloads here
# ├── yolov5m.pt # ├── yolov3-spp.pt
# └── ... # └── ...
python - <<EOF python - <<EOF
from utils.downloads import attempt_download from utils.downloads import attempt_download
p5 = list('nsmlx') # P5 models models = ['yolov3', 'yolov3-spp', 'yolov3-tiny']
p6 = [f'{x}6' for x in p5] # P6 models for x in models:
cls = [f'{x}-cls' for x in p5] # classification models attempt_download(f'{x}.pt')
seg = [f'{x}-seg' for x in p5] # segmentation models
for x in p5 + p6 + cls + seg:
attempt_download(f'weights/yolov5{x}.pt')
EOF EOF
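Old and new sides both funnel into the same Python helper, so a single weight can be fetched directly, e.g.:

from utils.downloads import attempt_download

attempt_download('yolov3-tiny.pt')  # pulls from the GitHub release assets if not present locally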
View File
@ -3,54 +3,25 @@
# Download COCO 2017 dataset http://cocodataset.org # Download COCO 2017 dataset http://cocodataset.org
# Example usage: bash data/scripts/get_coco.sh # Example usage: bash data/scripts/get_coco.sh
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── coco ← downloads here # └── coco ← downloads here
# Arguments (optional) Usage: bash data/scripts/get_coco.sh --train --val --test --segments
if [ "$#" -gt 0 ]; then
for opt in "$@"; do
case "${opt}" in
--train) train=true ;;
--val) val=true ;;
--test) test=true ;;
--segments) segments=true ;;
esac
done
else
train=true
val=true
test=false
segments=false
fi
# Download/unzip labels # Download/unzip labels
d='../datasets' # unzip directory d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
if [ "$segments" == "true" ]; then f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
f='coco2017labels-segments.zip' # 168 MB
else
f='coco2017labels.zip' # 46 MB
fi
echo 'Downloading' $url$f ' ...' echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
# Download/unzip images # Download/unzip images
d='../datasets/coco/images' # unzip directory d='../datasets/coco/images' # unzip directory
url=http://images.cocodataset.org/zips/ url=http://images.cocodataset.org/zips/
if [ "$train" == "true" ]; then f1='train2017.zip' # 19G, 118k images
f='train2017.zip' # 19G, 118k images f2='val2017.zip' # 1G, 5k images
f3='test2017.zip' # 7G, 41k images (optional)
for f in $f1 $f2; do
echo 'Downloading' $url$f '...' echo 'Downloading' $url$f '...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
fi done
if [ "$val" == "true" ]; then
f='val2017.zip' # 1G, 5k images
echo 'Downloading' $url$f '...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f &
fi
if [ "$test" == "true" ]; then
f='test2017.zip' # 7G, 41k images (optional)
echo 'Downloading' $url$f '...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f &
fi
wait # finish background tasks wait # finish background tasks
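With the new flag parsing, partial fetches become possible: `bash data/scripts/get_coco.sh --val --segments` grabs only the 5k-image val set plus the segment labels, while a bare invocation keeps the old behavior (train + val, box labels), per the defaults above.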
yolov3/data/scripts/get_coco128.sh Executable file → Normal file
View File
@ -3,7 +3,7 @@
# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) # Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: bash data/scripts/get_coco128.sh # Example usage: bash data/scripts/get_coco128.sh
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── coco128 ← downloads here # └── coco128 ← downloads here
@ -12,6 +12,6 @@ d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco128.zip' # or 'coco128-segments.zip', 68 MB f='coco128.zip' # or 'coco128-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...' echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
wait # finish background tasks wait # finish background tasks
View File
@ -1,10 +1,10 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC by University of Oxford # PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC
# Example usage: python train.py --data VOC.yaml # Example usage: python train.py --data VOC.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── VOC ← downloads here (2.8 GB) # └── VOC ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -20,27 +20,9 @@ test: # test images (optional)
- images/test2007 - images/test2007
# Classes # Classes
names: nc: 20 # number of classes
0: aeroplane names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
1: bicycle 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
2: bird
3: boat
4: bottle
5: bus
6: car
7: cat
8: chair
9: cow
10: diningtable
11: dog
12: horse
13: motorbike
14: person
15: pottedplant
16: sheep
17: sofa
18: train
19: tvmonitor
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
@ -65,34 +47,32 @@ download: |
w = int(size.find('width').text) w = int(size.find('width').text)
h = int(size.find('height').text) h = int(size.find('height').text)
names = list(yaml['names'].values()) # names list
for obj in root.iter('object'): for obj in root.iter('object'):
cls = obj.find('name').text cls = obj.find('name').text
if cls in names and int(obj.find('difficult').text) != 1: if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:
xmlbox = obj.find('bndbox') xmlbox = obj.find('bndbox')
bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')]) bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
cls_id = names.index(cls) # class id cls_id = yaml['names'].index(cls) # class id
out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n') out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')
# Download # Download
dir = Path(yaml['path']) # dataset root dir dir = Path(yaml['path']) # dataset root dir
url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/' url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
urls = [f'{url}VOCtrainval_06-Nov-2007.zip', # 446MB, 5012 images urls = [url + 'VOCtrainval_06-Nov-2007.zip', # 446MB, 5012 images
f'{url}VOCtest_06-Nov-2007.zip', # 438MB, 4953 images url + 'VOCtest_06-Nov-2007.zip', # 438MB, 4953 images
f'{url}VOCtrainval_11-May-2012.zip'] # 1.95GB, 17126 images url + 'VOCtrainval_11-May-2012.zip'] # 1.95GB, 17126 images
download(urls, dir=dir / 'images', delete=False, curl=True, threads=3) download(urls, dir=dir / 'images', delete=False)
# Convert # Convert
path = dir / 'images/VOCdevkit' path = dir / f'images/VOCdevkit'
for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'): for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
imgs_path = dir / 'images' / f'{image_set}{year}' imgs_path = dir / 'images' / f'{image_set}{year}'
lbs_path = dir / 'labels' / f'{image_set}{year}' lbs_path = dir / 'labels' / f'{image_set}{year}'
imgs_path.mkdir(exist_ok=True, parents=True) imgs_path.mkdir(exist_ok=True, parents=True)
lbs_path.mkdir(exist_ok=True, parents=True) lbs_path.mkdir(exist_ok=True, parents=True)
with open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt') as f: image_ids = open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt').read().strip().split()
image_ids = f.read().strip().split()
for id in tqdm(image_ids, desc=f'{image_set}{year}'): for id in tqdm(image_ids, desc=f'{image_set}{year}'):
f = path / f'VOC{year}/JPEGImages/{id}.jpg' # old img path f = path / f'VOC{year}/JPEGImages/{id}.jpg' # old img path
lb_path = (lbs_path / f.name).with_suffix('.txt') # new label path lb_path = (lbs_path / f.name).with_suffix('.txt') # new label path
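convert_box, called above with (xmin, xmax, ymin, ymax), maps a VOC pixel box to normalized YOLO center/size form. A reconstruction of that mapping (a sketch, not a verbatim copy of the repo code):

def convert_box(size, box):
    # size = (img_w, img_h); box = (xmin, xmax, ymin, ymax), as parsed above
    dw, dh = 1.0 / size[0], 1.0 / size[1]
    x = (box[0] + box[1]) / 2.0 * dw  # normalized center x
    y = (box[2] + box[3]) / 2.0 * dh  # normalized center y
    w = (box[1] - box[0]) * dw        # normalized width
    h = (box[3] - box[2]) * dh        # normalized height
    return x, y, w, h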
View File
@ -1,11 +1,11 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA) # xView 2018 dataset https://challenge.xviewdataset.org
# -------- DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command! -------- # -------- DOWNLOAD DATA MANUALLY from URL above and unzip to 'datasets/xView' before running train command! --------
# Example usage: python train.py --data xView.yaml # Example usage: python train.py --data xView.yaml
# parent # parent
# ├── yolov5 # ├── yolov3
# └── datasets # └── datasets
# └── xView ← downloads here (20.7 GB) # └── xView ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
@ -14,67 +14,16 @@ train: images/autosplit_train.txt # train images (relative to 'path') 90% of 84
val: images/autosplit_val.txt # val images (relative to 'path') 10% of 847 train images val: images/autosplit_val.txt # val images (relative to 'path') 10% of 847 train images
# Classes # Classes
names: nc: 60 # number of classes
0: Fixed-wing Aircraft names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
1: Small Aircraft 'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
2: Cargo Plane 'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
3: Helicopter 'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
4: Passenger Vehicle 'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
5: Small Car 'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
6: Bus 'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
7: Pickup Truck 'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
8: Utility Truck 'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower'] # class names
9: Truck
10: Cargo Truck
11: Truck w/Box
12: Truck Tractor
13: Trailer
14: Truck w/Flatbed
15: Truck w/Liquid
16: Crane Truck
17: Railway Vehicle
18: Passenger Car
19: Cargo Car
20: Flat Car
21: Tank car
22: Locomotive
23: Maritime Vessel
24: Motorboat
25: Sailboat
26: Tugboat
27: Barge
28: Fishing Vessel
29: Ferry
30: Yacht
31: Container Ship
32: Oil Tanker
33: Engineering Vehicle
34: Tower crane
35: Container Crane
36: Reach Stacker
37: Straddle Carrier
38: Mobile Crane
39: Dump Truck
40: Haul Truck
41: Scraper/Tractor
42: Front loader/Bulldozer
43: Excavator
44: Cement Mixer
45: Ground Grader
46: Hut/Tent
47: Shed
48: Building
49: Aircraft Hangar
50: Damaged Building
51: Facility
52: Construction Site
53: Vehicle Lot
54: Helipad
55: Storage Tank
56: Shipping container lot
57: Shipping Container
58: Pylon
59: Tower
# Download script/URL (optional) --------------------------------------------------------------------------------------- # Download script/URL (optional) ---------------------------------------------------------------------------------------
@ -87,7 +36,7 @@ download: |
from PIL import Image from PIL import Image
from tqdm import tqdm from tqdm import tqdm
from utils.dataloaders import autosplit from utils.datasets import autosplit
from utils.general import download, xyxy2xywhn from utils.general import download, xyxy2xywhn
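The import move tracks the utils.datasets → utils.dataloaders rename. autosplit itself, which generates the autosplit_*.txt lists this YAML points at, is used roughly like this (split weights assumed from the 90%/10% comments above):

from utils.dataloaders import autosplit  # utils.datasets on the old side

autosplit(path='../datasets/xView/images', weights=(0.9, 0.1, 0.0), annotated_only=False)
# writes autosplit_train.txt and autosplit_val.txt alongside the image directory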
View File
@ -1,61 +1,44 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
""" """
Run YOLOv3 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc. Run inference on images, videos, directories, streams, etc.
Usage - sources: Usage:
$ python detect.py --weights yolov5s.pt --source 0 # webcam $ python path/to/detect.py --weights yolov3.pt --source 0 # webcam
img.jpg # image img.jpg # image
vid.mp4 # video vid.mp4 # video
screen # screenshot
path/ # directory path/ # directory
list.txt # list of images path/*.jpg # glob
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube 'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Usage - formats:
$ python detect.py --weights yolov5s.pt # PyTorch
yolov5s.torchscript # TorchScript
yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
yolov5s_openvino_model # OpenVINO
yolov5s.engine # TensorRT
yolov5s.mlmodel # CoreML (macOS-only)
yolov5s_saved_model # TensorFlow SavedModel
yolov5s.pb # TensorFlow GraphDef
yolov5s.tflite # TensorFlow Lite
yolov5s_edgetpu.tflite # TensorFlow Edge TPU
yolov5s_paddle_model # PaddlePaddle
""" """
import argparse import argparse
import os import os
import platform
import sys import sys
from pathlib import Path from pathlib import Path
import cv2
import torch import torch
import torch.backends.cudnn as cudnn
FILE = Path(__file__).resolve() FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # YOLOv3 root directory ROOT = FILE.parents[0] # root directory
if str(ROOT) not in sys.path: if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import DetectMultiBackend from models.common import DetectMultiBackend
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, smart_inference_mode from utils.torch_utils import select_device, time_sync
@smart_inference_mode() @torch.no_grad()
def run( def run(weights=ROOT / 'yolov3.pt', # model.pt path(s)
weights=ROOT / 'yolov5s.pt', # model path or triton URL source=ROOT / 'data/images', # file/dir/URL/glob, 0 for webcam
source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) imgsz=640, # inference size (pixels)
data=ROOT / 'data/coco128.yaml', # dataset.yaml path
imgsz=(640, 640), # inference size (height, width)
conf_thres=0.25, # confidence threshold conf_thres=0.25, # confidence threshold
iou_thres=0.45, # NMS IOU threshold iou_thres=0.45, # NMS IOU threshold
max_det=1000, # maximum detections per image max_det=1000, # maximum detections per image
@ -78,14 +61,12 @@ def run(
hide_conf=False, # hide confidences hide_conf=False, # hide confidences
half=False, # use FP16 half-precision inference half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference dnn=False, # use OpenCV DNN for ONNX inference
vid_stride=1, # video frame-rate stride ):
):
source = str(source) source = str(source)
save_img = not nosave and not source.endswith('.txt') # save inference images save_img = not nosave and not source.endswith('.txt') # save inference images
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file) webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
screenshot = source.lower().startswith('screen')
if is_url and is_file: if is_url and is_file:
source = check_file(source) # download source = check_file(source) # download
@ -95,41 +76,49 @@ def run(
# Load model # Load model
device = select_device(device) device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) model = DetectMultiBackend(weights, device=device, dnn=dnn)
stride, names, pt = model.stride, model.names, model.pt stride, names, pt, jit, onnx = model.stride, model.names, model.pt, model.jit, model.onnx
imgsz = check_img_size(imgsz, s=stride) # check image size imgsz = check_img_size(imgsz, s=stride) # check image size
# Half
half &= pt and device.type != 'cpu' # half precision only supported by PyTorch on CUDA
if pt:
model.model.half() if half else model.model.float()
# Dataloader # Dataloader
bs = 1 # batch_size
if webcam: if webcam:
view_img = check_imshow(warn=True) view_img = check_imshow()
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) cudnn.benchmark = True # set True to speed up constant image size inference
bs = len(dataset) dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt and not jit)
elif screenshot: bs = len(dataset) # batch_size
dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
else: else:
dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt and not jit)
bs = 1 # batch_size
vid_path, vid_writer = [None] * bs, [None] * bs vid_path, vid_writer = [None] * bs, [None] * bs
# Run inference # Run inference
model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup if pt and device.type != 'cpu':
seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) model(torch.zeros(1, 3, *imgsz).to(device).type_as(next(model.model.parameters()))) # warmup
dt, seen = [0.0, 0.0, 0.0], 0
for path, im, im0s, vid_cap, s in dataset: for path, im, im0s, vid_cap, s in dataset:
with dt[0]: t1 = time_sync()
im = torch.from_numpy(im).to(model.device) im = torch.from_numpy(im).to(device)
im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 im = im.half() if half else im.float() # uint8 to fp16/32
im /= 255 # 0 - 255 to 0.0 - 1.0 im /= 255 # 0 - 255 to 0.0 - 1.0
if len(im.shape) == 3: if len(im.shape) == 3:
im = im[None] # expand for batch dim im = im[None] # expand for batch dim
t2 = time_sync()
dt[0] += t2 - t1
# Inference # Inference
with dt[1]:
visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
pred = model(im, augment=augment, visualize=visualize) pred = model(im, augment=augment, visualize=visualize)
t3 = time_sync()
dt[1] += t3 - t2
# NMS # NMS
with dt[2]:
pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
dt[2] += time_sync() - t3
# Second-stage classifier (optional) # Second-stage classifier (optional)
# pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
@ -152,11 +141,11 @@ def run(
annotator = Annotator(im0, line_width=line_thickness, example=str(names)) annotator = Annotator(im0, line_width=line_thickness, example=str(names))
if len(det): if len(det):
# Rescale boxes from img_size to im0 size # Rescale boxes from img_size to im0 size
det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
# Print results # Print results
for c in det[:, 5].unique(): for c in det[:, -1].unique():
n = (det[:, 5] == c).sum() # detections per class n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
# Write results # Write results
@ -164,7 +153,7 @@ def run(
if save_txt: # Write to file if save_txt: # Write to file
xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
with open(f'{txt_path}.txt', 'a') as f: with open(txt_path + '.txt', 'a') as f:
f.write(('%g ' * len(line)).rstrip() % line + '\n') f.write(('%g ' * len(line)).rstrip() % line + '\n')
if save_img or save_crop or view_img: # Add bbox to image if save_img or save_crop or view_img: # Add bbox to image
@ -174,13 +163,12 @@ def run(
if save_crop: if save_crop:
save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
# Print time (inference-only)
LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')
# Stream results # Stream results
im0 = annotator.result() im0 = annotator.result()
if view_img: if view_img:
if platform.system() == 'Linux' and p not in windows:
windows.append(p)
cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
cv2.imshow(str(p), im0) cv2.imshow(str(p), im0)
cv2.waitKey(1) # 1 millisecond cv2.waitKey(1) # 1 millisecond
@ -199,32 +187,24 @@ def run(
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0] fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos save_path += '.mp4'
vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer[i].write(im0) vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
# Print results # Print results
t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image t = tuple(x / seen * 1E3 for x in dt) # speeds per image
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
if save_txt or save_img: if save_txt or save_img:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
if update: if update:
strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) strip_optimizer(weights) # update model (to fix SourceChangeWarning)
def parse_opt(): def parse_opt():
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
parser.add_argument('--weights', parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3.pt', help='model path(s)')
nargs='+', parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
type=str,
default=ROOT / 'yolov3-tiny.pt',
help='model path or triton URL')
parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
@ -248,10 +228,9 @@ def parse_opt():
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
opt = parser.parse_args() opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(vars(opt)) print_args(FILE.stem, opt)
return opt return opt
@ -260,6 +239,6 @@ def main(opt):
run(**vars(opt)) run(**vars(opt))
if __name__ == '__main__': if __name__ == "__main__":
opt = parse_opt() opt = parse_opt()
main(opt) main(opt)
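Since main() just forwards parse_opt() into run(**vars(opt)), the same inference is scriptable; a minimal sketch mirroring the new-side CLI defaults:

from detect import run

run(weights='yolov3-tiny.pt', source='data/images', imgsz=(640, 640), conf_thres=0.25)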
View File
@ -1,360 +1,170 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
""" """
Export a YOLOv3 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit Export a PyTorch model to TorchScript, ONNX, CoreML, TensorFlow (saved_model, pb, TFLite, TF.js,) formats
TensorFlow exports authored by https://github.com/zldrobit
Format | `export.py --include` | Model
--- | --- | ---
PyTorch | - | yolov5s.pt
TorchScript | `torchscript` | yolov5s.torchscript
ONNX | `onnx` | yolov5s.onnx
OpenVINO | `openvino` | yolov5s_openvino_model/
TensorRT | `engine` | yolov5s.engine
CoreML | `coreml` | yolov5s.mlmodel
TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/
TensorFlow GraphDef | `pb` | yolov5s.pb
TensorFlow Lite | `tflite` | yolov5s.tflite
TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite
TensorFlow.js | `tfjs` | yolov5s_web_model/
PaddlePaddle | `paddle` | yolov5s_paddle_model/
Requirements:
$ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU
$ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU
Usage: Usage:
$ python export.py --weights yolov5s.pt --include torchscript onnx openvino engine coreml tflite ... $ python path/to/export.py --weights yolov3.pt --include torchscript onnx coreml saved_model pb tflite tfjs
Inference: Inference:
$ python detect.py --weights yolov5s.pt # PyTorch $ python path/to/detect.py --weights yolov3.pt
yolov5s.torchscript # TorchScript yolov3.onnx (must export with --dynamic)
yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn yolov3_saved_model
yolov5s_openvino_model # OpenVINO yolov3.pb
yolov5s.engine # TensorRT yolov3.tflite
yolov5s.mlmodel # CoreML (macOS-only)
yolov5s_saved_model # TensorFlow SavedModel
yolov5s.pb # TensorFlow GraphDef
yolov5s.tflite # TensorFlow Lite
yolov5s_edgetpu.tflite # TensorFlow Edge TPU
yolov5s_paddle_model # PaddlePaddle
TensorFlow.js: TensorFlow.js:
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
$ npm install $ npm install
$ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model $ ln -s ../../yolov5/yolov3_web_model public/yolov3_web_model
$ npm start $ npm start
""" """
import argparse
-import contextlib
import json
import os
-import platform
-import re
import subprocess
import sys
import time
-import warnings
from pathlib import Path

-import pandas as pd
import torch
-import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

FILE = Path(__file__).resolve()
-ROOT = FILE.parents[0]  # YOLOv3 root directory
+ROOT = FILE.parents[0]  # root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
-if platform.system() != 'Windows':
-    ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative
+ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

+from models.common import Conv
from models.experimental import attempt_load
-from models.yolo import ClassificationModel, Detect, DetectionModel, SegmentationModel
-from utils.dataloaders import LoadImages
-from utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_version,
-                           check_yaml, colorstr, file_size, get_default_args, print_args, url2file, yaml_save)
-from utils.torch_utils import select_device, smart_inference_mode
-
-MACOS = platform.system() == 'Darwin'  # macOS environment
+from models.yolo import Detect
+from utils.activations import SiLU
+from utils.datasets import LoadImages
+from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,
+                           url2file)
+from utils.torch_utils import select_device
-def export_formats():
-    # YOLOv3 export formats
-    x = [
-        ['PyTorch', '-', '.pt', True, True],
-        ['TorchScript', 'torchscript', '.torchscript', True, True],
-        ['ONNX', 'onnx', '.onnx', True, True],
-        ['OpenVINO', 'openvino', '_openvino_model', True, False],
-        ['TensorRT', 'engine', '.engine', False, True],
-        ['CoreML', 'coreml', '.mlmodel', True, False],
-        ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
-        ['TensorFlow GraphDef', 'pb', '.pb', True, True],
-        ['TensorFlow Lite', 'tflite', '.tflite', True, False],
-        ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
-        ['TensorFlow.js', 'tfjs', '_web_model', False, False],
-        ['PaddlePaddle', 'paddle', '_paddle_model', True, True],]
-    return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])
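The removed `export_formats()` helper doubles as documentation: each row pairs a format with its `--include` argument, output suffix, and CPU/GPU inference support. A quick inspection sketch, assuming the pre-commit file that still defines it:

```python
from export import export_formats  # pre-commit export.py only

df = export_formats()                 # pandas DataFrame
print(df.to_string(index=False))      # Format | Argument | Suffix | CPU | GPU
print(df[df.GPU]['Format'].tolist())  # formats that support GPU inference
```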
-def try_export(inner_func):
-    # YOLOv3 export decorator, i.e. @try_export
-    inner_args = get_default_args(inner_func)
-
-    def outer_func(*args, **kwargs):
-        prefix = inner_args['prefix']
-        try:
-            with Profile() as dt:
-                f, model = inner_func(*args, **kwargs)
-            LOGGER.info(f'{prefix} export success ✅ {dt.t:.1f}s, saved as {f} ({file_size(f):.1f} MB)')
-            return f, model
-        except Exception as e:
-            LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}')
-            return None, None
-
-    return outer_func
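`try_export` wraps every exporter in timing plus exception capture, so one failing backend logs an error instead of aborting the whole run. A self-contained sketch of the same pattern, without the YOLOv3 utilities:

```python
import time
from functools import wraps

def try_export(inner_func):
    # Time the wrapped exporter and convert any exception into (None, None)
    @wraps(inner_func)
    def outer_func(*args, **kwargs):
        t0 = time.time()
        try:
            f, model = inner_func(*args, **kwargs)
            print(f'export success, {time.time() - t0:.1f}s, saved as {f}')
            return f, model
        except Exception as e:
            print(f'export failure, {time.time() - t0:.1f}s: {e}')
            return None, None
    return outer_func
```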
-@try_export
def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
-    # YOLOv3 TorchScript model export
-    LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
-    f = file.with_suffix('.torchscript')
-    ts = torch.jit.trace(model, im, strict=False)
-    d = {'shape': im.shape, 'stride': int(max(model.stride)), 'names': model.names}
-    extra_files = {'config.txt': json.dumps(d)}  # torch._C.ExtraFilesMap()
-    if optimize:  # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
-        optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
-    else:
-        ts.save(str(f), _extra_files=extra_files)
-    return f, None
+    # TorchScript model export
+    try:
+        LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
+        f = file.with_suffix('.torchscript.pt')
+        ts = torch.jit.trace(model, im, strict=False)
+        d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
+        extra_files = {'config.txt': json.dumps(d)}  # torch._C.ExtraFilesMap()
+        (optimize_for_mobile(ts) if optimize else ts).save(f, _extra_files=extra_files)
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'{prefix} export failure: {e}')
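Both variants embed a `config.txt` JSON blob (shape, stride, class names) in the TorchScript archive; `DetectMultiBackend` later reads it back the same way. A sketch of the round trip, assuming a file exported by the removed variant:

```python
import json
import torch

extra_files = {'config.txt': ''}  # filled in place by torch.jit.load
model = torch.jit.load('yolov3.torchscript', _extra_files=extra_files)
meta = json.loads(extra_files['config.txt'])
print(meta['stride'], meta['names'][:3])
```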
-@try_export
-def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')):
-    # YOLOv3 ONNX export
-    check_requirements('onnx>=1.12.0')
-    import onnx
-
-    LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
-    f = file.with_suffix('.onnx')
-
-    output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
-    if dynamic:
-        dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}}  # shape(1,3,640,640)
-        if isinstance(model, SegmentationModel):
-            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
-            dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'}  # shape(1,32,160,160)
-        elif isinstance(model, DetectionModel):
-            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
-
-    torch.onnx.export(
-        model.cpu() if dynamic else model,  # --dynamic only compatible with cpu
-        im.cpu() if dynamic else im,
-        f,
-        verbose=False,
-        opset_version=opset,
-        do_constant_folding=True,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
-        input_names=['images'],
-        output_names=output_names,
-        dynamic_axes=dynamic or None)
-
-    # Checks
-    model_onnx = onnx.load(f)  # load onnx model
-    onnx.checker.check_model(model_onnx)  # check onnx model
-
-    # Metadata
-    d = {'stride': int(max(model.stride)), 'names': model.names}
-    for k, v in d.items():
-        meta = model_onnx.metadata_props.add()
-        meta.key, meta.value = k, str(v)
-    onnx.save(model_onnx, f)
-
-    # Simplify
-    if simplify:
-        try:
-            cuda = torch.cuda.is_available()
-            check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1'))
-            import onnxsim
-
-            LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
-            model_onnx, check = onnxsim.simplify(model_onnx)
-            assert check, 'assert check failed'
-            onnx.save(model_onnx, f)
-        except Exception as e:
-            LOGGER.info(f'{prefix} simplifier failure: {e}')
-    return f, model_onnx
+def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):
+    # ONNX export
+    try:
+        check_requirements(('onnx',))
+        import onnx
+
+        LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
+        f = file.with_suffix('.onnx')
+
+        torch.onnx.export(model, im, f, verbose=False, opset_version=opset,
+                          training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
+                          do_constant_folding=not train,
+                          input_names=['images'],
+                          output_names=['output'],
+                          dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
+                                        'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
+                                        } if dynamic else None)
+
+        # Checks
+        model_onnx = onnx.load(f)  # load onnx model
+        onnx.checker.check_model(model_onnx)  # check onnx model
+        # LOGGER.info(onnx.helper.printable_graph(model_onnx.graph))  # print
+
+        # Simplify
+        if simplify:
+            try:
+                check_requirements(('onnx-simplifier',))
+                import onnxsim
+
+                LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
+                model_onnx, check = onnxsim.simplify(
+                    model_onnx,
+                    dynamic_input_shape=dynamic,
+                    input_shapes={'images': list(im.shape)} if dynamic else None)
+                assert check, 'assert check failed'
+                onnx.save(model_onnx, f)
+            except Exception as e:
+                LOGGER.info(f'{prefix} simplifier failure: {e}')
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+        LOGGER.info(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'")
+    except Exception as e:
+        LOGGER.info(f'{prefix} export failure: {e}')
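Either version produces a checker-validated `.onnx` file whose input tensor is named `images`. A smoke-test sketch with ONNX Runtime (the runtime package is an assumption; the exporter itself only needs `onnx`):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('yolov3.onnx', providers=['CPUExecutionProvider'])
im = np.zeros((1, 3, 640, 640), dtype=np.float32)  # BCHW dummy input
outputs = session.run(None, {'images': im})        # input name set by the exporter
print([o.shape for o in outputs])                  # e.g. (1, 25200, 85) predictions
```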
-@try_export
-def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')):
-    # YOLOv3 OpenVINO export
-    check_requirements('openvino-dev')  # requires openvino-dev: https://pypi.org/project/openvino-dev/
-    import openvino.inference_engine as ie
-
-    LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
-    f = str(file).replace('.pt', f'_openvino_model{os.sep}')
-
-    cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f} --data_type {'FP16' if half else 'FP32'}"
-    subprocess.run(cmd.split(), check=True, env=os.environ)  # export
-    yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata)  # add metadata.yaml
-    return f, None
-
-
-@try_export
-def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')):
-    # YOLOv3 Paddle export
-    check_requirements(('paddlepaddle', 'x2paddle'))
-    import x2paddle
-    from x2paddle.convert import pytorch2paddle
-
-    LOGGER.info(f'\n{prefix} starting export with X2Paddle {x2paddle.__version__}...')
-    f = str(file).replace('.pt', f'_paddle_model{os.sep}')
-
-    pytorch2paddle(module=model, save_dir=f, jit_type='trace', input_examples=[im])  # export
-    yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata)  # add metadata.yaml
-    return f, None
-
-
-@try_export
-def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
-    # YOLOv3 CoreML export
-    check_requirements('coremltools')
-    import coremltools as ct
-
-    LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
-    f = file.with_suffix('.mlmodel')
-
-    ts = torch.jit.trace(model, im, strict=False)  # TorchScript model
-    ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
-    bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None)
-    if bits < 32:
-        if MACOS:  # quantization only supported on macOS
-            with warnings.catch_warnings():
-                warnings.filterwarnings('ignore', category=DeprecationWarning)  # suppress numpy==1.20 float warning
-                ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode)
-        else:
-            print(f'{prefix} quantization only supported on macOS, skipping...')
-    ct_model.save(f)
-    return f, ct_model
+def export_coreml(model, im, file, prefix=colorstr('CoreML:')):
+    # CoreML export
+    ct_model = None
+    try:
+        check_requirements(('coremltools',))
+        import coremltools as ct
+
+        LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
+        f = file.with_suffix('.mlmodel')
+        model.train()  # CoreML exports should be placed in model.train() mode
+        ts = torch.jit.trace(model, im, strict=False)  # TorchScript model
+        ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
+        ct_model.save(f)
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'\n{prefix} export failure: {e}')
+
+    return ct_model
-@try_export
-def export_engine(model, im, file, half, dynamic, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
-    # YOLOv3 TensorRT export https://developer.nvidia.com/tensorrt
-    assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. `python export.py --device 0`'
-    try:
-        import tensorrt as trt
-    except Exception:
-        if platform.system() == 'Linux':
-            check_requirements('nvidia-tensorrt', cmds='-U --index-url https://pypi.ngc.nvidia.com')
-        import tensorrt as trt
-
-    if trt.__version__[0] == '7':  # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
-        grid = model.model[-1].anchor_grid
-        model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid]
-        export_onnx(model, im, file, 12, dynamic, simplify)  # opset 12
-        model.model[-1].anchor_grid = grid
-    else:  # TensorRT >= 8
-        check_version(trt.__version__, '8.0.0', hard=True)  # require tensorrt>=8.0.0
-        export_onnx(model, im, file, 12, dynamic, simplify)  # opset 12
-    onnx = file.with_suffix('.onnx')
-
-    LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
-    assert onnx.exists(), f'failed to export ONNX file: {onnx}'
-    f = file.with_suffix('.engine')  # TensorRT engine file
-    logger = trt.Logger(trt.Logger.INFO)
-    if verbose:
-        logger.min_severity = trt.Logger.Severity.VERBOSE
-
-    builder = trt.Builder(logger)
-    config = builder.create_builder_config()
-    config.max_workspace_size = workspace * 1 << 30
-    # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30)  # fix TRT 8.4 deprecation notice
-
-    flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
-    network = builder.create_network(flag)
-    parser = trt.OnnxParser(network, logger)
-    if not parser.parse_from_file(str(onnx)):
-        raise RuntimeError(f'failed to load ONNX file: {onnx}')
-
-    inputs = [network.get_input(i) for i in range(network.num_inputs)]
-    outputs = [network.get_output(i) for i in range(network.num_outputs)]
-    for inp in inputs:
-        LOGGER.info(f'{prefix} input "{inp.name}" with shape{inp.shape} {inp.dtype}')
-    for out in outputs:
-        LOGGER.info(f'{prefix} output "{out.name}" with shape{out.shape} {out.dtype}')
-
-    if dynamic:
-        if im.shape[0] <= 1:
-            LOGGER.warning(f'{prefix} WARNING ⚠️ --dynamic model requires maximum --batch-size argument')
-        profile = builder.create_optimization_profile()
-        for inp in inputs:
-            profile.set_shape(inp.name, (1, *im.shape[1:]), (max(1, im.shape[0] // 2), *im.shape[1:]), im.shape)
-        config.add_optimization_profile(profile)
-
-    LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine as {f}')
-    if builder.platform_has_fast_fp16 and half:
-        config.set_flag(trt.BuilderFlag.FP16)
-    with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
-        t.write(engine.serialize())
-    return f, None
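For reference, the serialized `.engine` file written above is reloaded with the TensorRT runtime at inference time. A minimal sketch, assuming a CUDA machine with the same TensorRT version that built the engine:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
with open('yolov3.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())  # rebuild the engine
context = engine.create_execution_context()            # per-inference state
```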
-@try_export
-def export_saved_model(model,
-                       im,
-                       file,
-                       dynamic,
-                       tf_nms=False,
-                       agnostic_nms=False,
-                       topk_per_class=100,
-                       topk_all=100,
-                       iou_thres=0.45,
-                       conf_thres=0.25,
-                       keras=False,
-                       prefix=colorstr('TensorFlow SavedModel:')):
-    # YOLOv3 TensorFlow SavedModel export
-    try:
-        import tensorflow as tf
-    except Exception:
-        check_requirements(f"tensorflow{'' if torch.cuda.is_available() else '-macos' if MACOS else '-cpu'}")
-        import tensorflow as tf
-    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
-
-    from models.tf import TFModel
-
-    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
-    f = str(file).replace('.pt', '_saved_model')
-    batch_size, ch, *imgsz = list(im.shape)  # BCHW
-
-    tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
-    im = tf.zeros((batch_size, *imgsz, ch))  # BHWC order for TensorFlow
-    _ = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
-    inputs = tf.keras.Input(shape=(*imgsz, ch), batch_size=None if dynamic else batch_size)
-    outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
-    keras_model = tf.keras.Model(inputs=inputs, outputs=outputs)
-    keras_model.trainable = False
-    keras_model.summary()
-    if keras:
-        keras_model.save(f, save_format='tf')
-    else:
-        spec = tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)
-        m = tf.function(lambda x: keras_model(x))  # full model
-        m = m.get_concrete_function(spec)
-        frozen_func = convert_variables_to_constants_v2(m)
-        tfm = tf.Module()
-        tfm.__call__ = tf.function(lambda x: frozen_func(x)[:4] if tf_nms else frozen_func(x), [spec])
-        tfm.__call__(im)
-        tf.saved_model.save(tfm,
-                            f,
-                            options=tf.saved_model.SaveOptions(experimental_custom_gradients=False) if check_version(
-                                tf.__version__, '2.6') else tf.saved_model.SaveOptions())
-    return f, keras_model
+def export_saved_model(model, im, file, dynamic,
+                       tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
+                       conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')):
+    # TensorFlow saved_model export
+    keras_model = None
+    try:
+        import tensorflow as tf
+        from tensorflow import keras
+
+        from models.tf import TFDetect, TFModel
+
+        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
+        f = str(file).replace('.pt', '_saved_model')
+        batch_size, ch, *imgsz = list(im.shape)  # BCHW
+
+        tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
+        im = tf.zeros((batch_size, *imgsz, 3))  # BHWC order for TensorFlow
+        y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
+        inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
+        outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
+        keras_model = keras.Model(inputs=inputs, outputs=outputs)
+        keras_model.trainable = False
+        keras_model.summary()
+        keras_model.save(f, save_format='tf')
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'\n{prefix} export failure: {e}')
+
+    return keras_model
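The post-commit version always saves a Keras SavedModel, so the directory can be reloaded directly. A sketch, assuming the exporter ran at the default 640x640 size (reloading may additionally require the repo's custom layers on the import path):

```python
import tensorflow as tf

model = tf.keras.models.load_model('yolov3_saved_model')
y = model(tf.zeros((1, 640, 640, 3)))  # BHWC, channels-last dummy input
print(y[0].shape if isinstance(y, (list, tuple)) else y.shape)
```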
-@try_export
-def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')):
-    # YOLOv3 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
-    import tensorflow as tf
-    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
+def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')):
+    # TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
+    try:
+        import tensorflow as tf
+        from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

@@ -366,14 +176,19 @@ def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')):
        frozen_func = convert_variables_to_constants_v2(m)
        frozen_func.graph.as_graph_def()
        tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)
-    return f, None
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'\n{prefix} export failure: {e}')
-@try_export
-def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
-    # YOLOv3 TensorFlow Lite export
-    import tensorflow as tf
+def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')):
+    # TensorFlow Lite export
+    try:
+        import tensorflow as tf
+        from models.tf import representative_dataset_gen

    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
    batch_size, ch, *imgsz = list(im.shape)  # BCHW
    f = str(file).replace('.pt', '-fp16.tflite')

@@ -383,156 +198,90 @@ def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
    converter.target_spec.supported_types = [tf.float16]
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if int8:
-        from models.tf import representative_dataset_gen
-        dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
-        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
+        dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False)  # representative data
+        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.target_spec.supported_types = []
        converter.inference_input_type = tf.uint8  # or tf.int8
        converter.inference_output_type = tf.uint8  # or tf.int8
-        converter.experimental_new_quantizer = True
+        converter.experimental_new_quantizer = False
        f = str(file).replace('.pt', '-int8.tflite')
-    if nms or agnostic_nms:
-        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)

    tflite_model = converter.convert()
-    open(f, 'wb').write(tflite_model)
-    return f, None
+    open(f, "wb").write(tflite_model)
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'\n{prefix} export failure: {e}')
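A sketch of running the resulting `.tflite` file with the TF Lite interpreter (the file name is assumed from the exporter's fp16 default):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='yolov3-fp16.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)
```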
-@try_export
-def export_edgetpu(file, prefix=colorstr('Edge TPU:')):
-    # YOLOv3 Edge TPU export https://coral.ai/docs/edgetpu/models-intro/
-    cmd = 'edgetpu_compiler --version'
-    help_url = 'https://coral.ai/docs/edgetpu/compiler/'
-    assert platform.system() == 'Linux', f'export only supported on Linux. See {help_url}'
-    if subprocess.run(f'{cmd} >/dev/null', shell=True).returncode != 0:
-        LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}')
-        sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0  # sudo installed on system
-        for c in (
-                'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -',
-                'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list',
-                'sudo apt-get update', 'sudo apt-get install edgetpu-compiler'):
-            subprocess.run(c if sudo else c.replace('sudo ', ''), shell=True, check=True)
-    ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]
-
-    LOGGER.info(f'\n{prefix} starting export with Edge TPU compiler {ver}...')
-    f = str(file).replace('.pt', '-int8_edgetpu.tflite')  # Edge TPU model
-    f_tfl = str(file).replace('.pt', '-int8.tflite')  # TFLite model
-
-    cmd = f'edgetpu_compiler -s -d -k 10 --out_dir {file.parent} {f_tfl}'
-    subprocess.run(cmd.split(), check=True)
-    return f, None
-
-
-@try_export
-def export_tfjs(file, int8, prefix=colorstr('TensorFlow.js:')):
-    # YOLOv3 TensorFlow.js export
-    check_requirements('tensorflowjs')
-    import tensorflowjs as tfjs
-
-    LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
-    f = str(file).replace('.pt', '_web_model')  # js dir
-    f_pb = file.with_suffix('.pb')  # *.pb path
-    f_json = f'{f}/model.json'  # *.json path
-
-    int8_export = ' --quantize_uint8 ' if int8 else ''
-    cmd = f'tensorflowjs_converter --input_format=tf_frozen_model {int8_export}' \
-          f'--output_node_names=Identity,Identity_1,Identity_2,Identity_3 {f_pb} {f}'
-    subprocess.run(cmd.split())
-
-    json = Path(f_json).read_text()
-    with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order
-        subst = re.sub(
-            r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
-            r'"Identity.?.?": {"name": "Identity.?.?"}, '
-            r'"Identity.?.?": {"name": "Identity.?.?"}, '
-            r'"Identity.?.?": {"name": "Identity.?.?"}}}', r'{"outputs": {"Identity": {"name": "Identity"}, '
-            r'"Identity_1": {"name": "Identity_1"}, '
-            r'"Identity_2": {"name": "Identity_2"}, '
-            r'"Identity_3": {"name": "Identity_3"}}}', json)
-        j.write(subst)
-    return f, None
+def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
+    # TensorFlow.js export
+    try:
+        check_requirements(('tensorflowjs',))
+        import re
+
+        import tensorflowjs as tfjs
+
+        LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
+        f = str(file).replace('.pt', '_web_model')  # js dir
+        f_pb = file.with_suffix('.pb')  # *.pb path
+        f_json = f + '/model.json'  # *.json path
+
+        cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \
+              f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}"
+        subprocess.run(cmd, shell=True)
+
+        json = open(f_json).read()
+        with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order
+            subst = re.sub(
+                r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
+                r'"Identity.?.?": {"name": "Identity.?.?"}, '
+                r'"Identity.?.?": {"name": "Identity.?.?"}, '
+                r'"Identity.?.?": {"name": "Identity.?.?"}}}',
+                r'{"outputs": {"Identity": {"name": "Identity"}, '
+                r'"Identity_1": {"name": "Identity_1"}, '
+                r'"Identity_2": {"name": "Identity_2"}, '
+                r'"Identity_3": {"name": "Identity_3"}}}',
+                json)
+            j.write(subst)
+
+        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
+    except Exception as e:
+        LOGGER.info(f'\n{prefix} export failure: {e}')
-def add_tflite_metadata(file, metadata, num_outputs):
-    # Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata
-    with contextlib.suppress(ImportError):
-        # check_requirements('tflite_support')
-        from tflite_support import flatbuffers
-        from tflite_support import metadata as _metadata
-        from tflite_support import metadata_schema_py_generated as _metadata_fb
-
-        tmp_file = Path('/tmp/meta.txt')
-        with open(tmp_file, 'w') as meta_f:
-            meta_f.write(str(metadata))
-
-        model_meta = _metadata_fb.ModelMetadataT()
-        label_file = _metadata_fb.AssociatedFileT()
-        label_file.name = tmp_file.name
-        model_meta.associatedFiles = [label_file]
-
-        subgraph = _metadata_fb.SubGraphMetadataT()
-        subgraph.inputTensorMetadata = [_metadata_fb.TensorMetadataT()]
-        subgraph.outputTensorMetadata = [_metadata_fb.TensorMetadataT()] * num_outputs
-        model_meta.subgraphMetadata = [subgraph]
-
-        b = flatbuffers.Builder(0)
-        b.Finish(model_meta.Pack(b), _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
-        metadata_buf = b.Output()
-
-        populator = _metadata.MetadataPopulator.with_model_file(file)
-        populator.load_metadata_buffer(metadata_buf)
-        populator.load_associated_files([str(tmp_file)])
-        populator.populate()
-        tmp_file.unlink()
-
-
-@smart_inference_mode()
-def run(
-        data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'
-        weights=ROOT / 'yolov5s.pt',  # weights path
+@torch.no_grad()
+def run(data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'
+        weights=ROOT / 'yolov3.pt',  # weights path
        imgsz=(640, 640),  # image (height, width)
        batch_size=1,  # batch size
        device='cpu',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
-        include=('torchscript', 'onnx'),  # include formats
+        include=('torchscript', 'onnx', 'coreml'),  # include formats
        half=False,  # FP16 half-precision export
-        inplace=False,  # set YOLOv3 Detect() inplace=True
-        keras=False,  # use Keras
+        inplace=False,  # set Detect() inplace=True
+        train=False,  # model.train() mode
        optimize=False,  # TorchScript: optimize for mobile
        int8=False,  # CoreML/TF INT8 quantization
-        dynamic=False,  # ONNX/TF/TensorRT: dynamic axes
+        dynamic=False,  # ONNX/TF: dynamic axes
        simplify=False,  # ONNX: simplify model
        opset=12,  # ONNX: opset version
-        verbose=False,  # TensorRT: verbose log
-        workspace=4,  # TensorRT: workspace size (GB)
-        nms=False,  # TF: add NMS to model
-        agnostic_nms=False,  # TF: add agnostic NMS to model
        topk_per_class=100,  # TF.js NMS: topk per class to keep
        topk_all=100,  # TF.js NMS: topk for all classes to keep
        iou_thres=0.45,  # TF.js NMS: IoU threshold
-        conf_thres=0.25,  # TF.js NMS: confidence threshold
-):
+        conf_thres=0.25  # TF.js NMS: confidence threshold
+        ):
    t = time.time()
-    include = [x.lower() for x in include]  # to lowercase
-    fmts = tuple(export_formats()['Argument'][1:])  # --include arguments
-    flags = [x in include for x in fmts]
-    assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}'
-    jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle = flags  # export booleans
-    file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)  # PyTorch weights
+    include = [x.lower() for x in include]
+    tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs'))  # TensorFlow exports
+    imgsz *= 2 if len(imgsz) == 1 else 1  # expand
+    file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)

    # Load PyTorch model
    device = select_device(device)
-    if half:
-        assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0'
-        assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both'
-    model = attempt_load(weights, device=device, inplace=True, fuse=True)  # load FP32 model
-
-    # Checks
-    imgsz *= 2 if len(imgsz) == 1 else 1  # expand
-    if optimize:
-        assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. use --device cpu'
+    assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. use --device 0'
+    model = attempt_load(weights, map_location=device, inplace=True, fuse=True)  # load FP32 model
+    nc, names = model.nc, model.names  # number of classes, class names

    # Input
    gs = int(max(model.stride))  # grid size (max stride)
@@ -540,116 +289,81 @@ def run(
    im = torch.zeros(batch_size, 3, *imgsz).to(device)  # image size(1,3,320,192) BCHW iDetection

    # Update model
-    model.eval()
+    if half:
+        im, model = im.half(), model.half()  # to FP16
+    model.train() if train else model.eval()  # training mode = no Detect() layer grid construction
    for k, m in model.named_modules():
-        if isinstance(m, Detect):
+        if isinstance(m, Conv):  # assign export-friendly activations
+            if isinstance(m.act, nn.SiLU):
+                m.act = SiLU()
+        elif isinstance(m, Detect):
            m.inplace = inplace
-            m.dynamic = dynamic
-            m.export = True
+            m.onnx_dynamic = dynamic
+            # m.forward = m.forward_export  # assign forward (optional)

    for _ in range(2):
        y = model(im)  # dry runs
-    if half and not coreml:
-        im, model = im.half(), model.half()  # to FP16
-    shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape
-    metadata = {'stride': int(max(model.stride)), 'names': model.names}  # model metadata
-    LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)")
+    LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)")

    # Exports
-    f = [''] * len(fmts)  # exported filenames
-    warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning)  # suppress TracerWarning
-    if jit:  # TorchScript
-        f[0], _ = export_torchscript(model, im, file, optimize)
-    if engine:  # TensorRT required before ONNX
-        f[1], _ = export_engine(model, im, file, half, dynamic, simplify, workspace, verbose)
-    if onnx or xml:  # OpenVINO requires ONNX
-        f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify)
-    if xml:  # OpenVINO
-        f[3], _ = export_openvino(file, metadata, half)
-    if coreml:  # CoreML
-        f[4], _ = export_coreml(model, im, file, int8, half)
-    if any((saved_model, pb, tflite, edgetpu, tfjs)):  # TensorFlow formats
-        assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.'
-        assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.'
-        f[5], s_model = export_saved_model(model.cpu(),
-                                           im,
-                                           file,
-                                           dynamic,
-                                           tf_nms=nms or agnostic_nms or tfjs,
-                                           agnostic_nms=agnostic_nms or tfjs,
-                                           topk_per_class=topk_per_class,
-                                           topk_all=topk_all,
-                                           iou_thres=iou_thres,
-                                           conf_thres=conf_thres,
-                                           keras=keras)
+    if 'torchscript' in include:
+        export_torchscript(model, im, file, optimize)
+    if 'onnx' in include:
+        export_onnx(model, im, file, opset, train, dynamic, simplify)
+    if 'coreml' in include:
+        export_coreml(model, im, file)
+
+    # TensorFlow Exports
+    if any(tf_exports):
+        pb, tflite, tfjs = tf_exports[1:]
+        assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.'
+        model = export_saved_model(model.cpu(), im, file, dynamic, tf_nms=tfjs, agnostic_nms=tfjs,
+                                   topk_per_class=topk_per_class, topk_all=topk_all, conf_thres=conf_thres,
+                                   iou_thres=iou_thres)  # keras model
        if pb or tfjs:  # pb prerequisite to tfjs
-            f[6], _ = export_pb(s_model, file)
-        if tflite or edgetpu:
-            f[7], _ = export_tflite(s_model, im, file, int8 or edgetpu, data=data, nms=nms, agnostic_nms=agnostic_nms)
-            if edgetpu:
-                f[8], _ = export_edgetpu(file)
-            add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
+            export_pb(model, im, file)
+        if tflite:
+            export_tflite(model, im, file, int8=int8, data=data, ncalib=100)
        if tfjs:
-            f[9], _ = export_tfjs(file, int8)
-    if paddle:  # PaddlePaddle
-        f[10], _ = export_paddle(model, im, file, metadata)
+            export_tfjs(model, im, file)

    # Finish
-    f = [str(x) for x in f if x]  # filter out '' and None
-    if any(f):
-        cls, det, seg = (isinstance(model, x) for x in (ClassificationModel, DetectionModel, SegmentationModel))  # type
-        det &= not seg  # segmentation models inherit from SegmentationModel(DetectionModel)
-        dir = Path('segment' if seg else 'classify' if cls else '')
-        h = '--half' if half else ''  # --half FP16 inference arg
-        s = '# WARNING ⚠️ ClassificationModel not yet supported for PyTorch Hub AutoShape inference' if cls else \
-            '# WARNING ⚠️ SegmentationModel not yet supported for PyTorch Hub AutoShape inference' if seg else ''
-        LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)'
-                    f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
-                    f"\nDetect:          python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}"
-                    f"\nValidate:        python {dir / 'val.py'} --weights {f[-1]} {h}"
-                    f"\nPyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}')  {s}"
-                    f'\nVisualize:       https://netron.app')
-    return f  # return list of exported files/dirs
+    LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)'
+                f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
+                f'\nVisualize with https://netron.app')
-def parse_opt(known=False):
+def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
-    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3-tiny.pt', help='model.pt path(s)')
+    parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
-    parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
-    parser.add_argument('--keras', action='store_true', help='TF: use Keras')
+    parser.add_argument('--inplace', action='store_true', help='set YOLOv3 Detect() inplace=True')
+    parser.add_argument('--train', action='store_true', help='model.train() mode')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
-    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
+    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
-    parser.add_argument('--opset', type=int, default=17, help='ONNX: opset version')
-    parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
-    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
-    parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
-    parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
+    parser.add_argument('--opset', type=int, default=13, help='ONNX: opset version')
    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
-    parser.add_argument(
-        '--include',
-        nargs='+',
-        default=['torchscript'],
-        help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle')
-    opt = parser.parse_known_args()[0] if known else parser.parse_args()
-    print_args(vars(opt))
+    parser.add_argument('--include', nargs='+',
+                        default=['torchscript', 'onnx'],
+                        help='available formats are (torchscript, onnx, coreml, saved_model, pb, tflite, tfjs)')
+    opt = parser.parse_args()
+    print_args(FILE.stem, opt)
    return opt


def main(opt):
-    for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):
-        run(**vars(opt))
+    run(**vars(opt))


-if __name__ == '__main__':
+if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
@@ -1,66 +1,52 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
-PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5
+PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/

Usage:
    import torch
-    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # official model
-    model = torch.hub.load('ultralytics/yolov5:master', 'yolov5s')  # from branch
-    model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt')  # custom/local model
-    model = torch.hub.load('.', 'custom', 'yolov5s.pt', source='local')  # local repo
+    model = torch.hub.load('ultralytics/yolov3', 'yolov3')
"""
import torch


def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
-    """Creates or loads a YOLOv3 model
+    """Creates a specified model

    Arguments:
-        name (str): model name 'yolov5s' or path 'path/to/best.pt'
+        name (str): name of model, i.e. 'yolov3'
        pretrained (bool): load pretrained weights into the model
        channels (int): number of input channels
        classes (int): number of model classes
-        autoshape (bool): apply YOLOv3 .autoshape() wrapper to model
+        autoshape (bool): apply .autoshape() wrapper to model
        verbose (bool): print all information to screen
        device (str, torch.device, None): device to use for model parameters

    Returns:
-        YOLOv3 model
+        pytorch model
    """
    from pathlib import Path

-    from models.common import AutoShape, DetectMultiBackend
    from models.experimental import attempt_load
-    from models.yolo import ClassificationModel, DetectionModel, SegmentationModel
+    from models.yolo import Model
    from utils.downloads import attempt_download
-    from utils.general import LOGGER, check_requirements, intersect_dicts, logging
+    from utils.general import check_requirements, intersect_dicts, set_logging
    from utils.torch_utils import select_device

-    if not verbose:
-        LOGGER.setLevel(logging.WARNING)
-    check_requirements(exclude=('opencv-python', 'tensorboard', 'thop'))
-    name = Path(name)
-    path = name.with_suffix('.pt') if name.suffix == '' and not name.is_dir() else name  # checkpoint path
+    file = Path(__file__).resolve()
+    check_requirements(exclude=('tensorboard', 'thop', 'opencv-python'))
+    set_logging(verbose=verbose)
+
+    save_dir = Path('') if str(name).endswith('.pt') else file.parent
+    path = (save_dir / name).with_suffix('.pt')  # checkpoint path
    try:
-        device = select_device(device)
+        device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device)
+
        if pretrained and channels == 3 and classes == 80:
-            try:
-                model = DetectMultiBackend(path, device=device, fuse=autoshape)  # detection model
-                if autoshape:
-                    if model.pt and isinstance(model.model, ClassificationModel):
-                        LOGGER.warning('WARNING ⚠️ YOLOv3 ClassificationModel is not yet AutoShape compatible. '
-                                       'You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224).')
-                    elif model.pt and isinstance(model.model, SegmentationModel):
-                        LOGGER.warning('WARNING ⚠️ YOLOv3 SegmentationModel is not yet AutoShape compatible. '
-                                       'You will not be able to run inference with this model.')
-                    else:
-                        model = AutoShape(model)  # for file/URI/PIL/cv2/np inputs and NMS
-            except Exception:
-                model = attempt_load(path, device=device, fuse=False)  # arbitrary model
+            model = attempt_load(path, map_location=device)  # download/load FP32 model
        else:
-            cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0]  # model.yaml path
-            model = DetectionModel(cfg, channels, classes)  # create model
+            cfg = list((Path(__file__).parent / 'models').rglob(f'{name}.yaml'))[0]  # model.yaml path
+            model = Model(cfg, channels, classes)  # create model
            if pretrained:
                ckpt = torch.load(attempt_download(path), map_location=device)  # load
                csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
@@ -68,102 +54,54 @@ def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
                model.load_state_dict(csd, strict=False)  # load
                if len(ckpt['model'].names) == classes:
                    model.names = ckpt['model'].names  # set class names attribute
-        if not verbose:
-            LOGGER.setLevel(logging.INFO)  # reset to default
+        if autoshape:
+            model = model.autoshape()  # for file/URI/PIL/cv2/np inputs and NMS
        return model.to(device)

    except Exception as e:
        help_url = 'https://github.com/ultralytics/yolov5/issues/36'
-        s = f'{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help.'
+        s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' % help_url
        raise Exception(s) from e
-def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None):
-    # YOLOv3 custom or local model
-    return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
+def custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None):
+    # custom or local model
+    return _create(path, autoshape=autoshape, verbose=verbose, device=device)


-def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-nano model https://github.com/ultralytics/yolov5
-    return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-small model https://github.com/ultralytics/yolov5
-    return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-medium model https://github.com/ultralytics/yolov5
-    return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-large model https://github.com/ultralytics/yolov5
-    return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-xlarge model https://github.com/ultralytics/yolov5
-    return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-nano-P6 model https://github.com/ultralytics/yolov5
-    return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-small-P6 model https://github.com/ultralytics/yolov5
-    return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-medium-P6 model https://github.com/ultralytics/yolov5
-    return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-large-P6 model https://github.com/ultralytics/yolov5
-    return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device)
-
-
-def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
-    # YOLOv3-xlarge-P6 model https://github.com/ultralytics/yolov5
-    return _create('yolov5x6', pretrained, channels, classes, autoshape, _verbose, device)
+def yolov3(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
+    # YOLOv3 model https://github.com/ultralytics/yolov3
+    return _create('yolov3', pretrained, channels, classes, autoshape, verbose, device)
+
+
+def yolov3_spp(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
+    # YOLOv3-SPP model https://github.com/ultralytics/yolov3
+    return _create('yolov3-spp', pretrained, channels, classes, autoshape, verbose, device)
+
+
+def yolov3_tiny(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
+    # YOLOv3-tiny model https://github.com/ultralytics/yolov3
+    return _create('yolov3-tiny', pretrained, channels, classes, autoshape, verbose, device)
if __name__ == '__main__':
-    import argparse
-
-    # Argparser
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--model', type=str, default='yolov5s', help='model name')
-    opt = parser.parse_args()
-    print_args(vars(opt))
-
-    # Model
-    model = _create(name=opt.model, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True)
+    model = _create(name='yolov3-tiny', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True)  # pretrained
    # model = custom(path='path/to/model.pt')  # custom

+    # Verify inference
    from pathlib import Path

+    import cv2
    import numpy as np
    from PIL import Image

-    from utils.general import cv2, print_args
-
-    # Images
    imgs = ['data/images/zidane.jpg',  # filename
            Path('data/images/zidane.jpg'),  # Path
            'https://ultralytics.com/images/zidane.jpg',  # URI
            cv2.imread('data/images/bus.jpg')[:, :, ::-1],  # OpenCV
            Image.open('data/images/bus.jpg'),  # PIL
            np.zeros((320, 640, 3))]  # numpy

-    # Inference
-    results = model(imgs, size=320)  # batched inference
+    results = model(imgs)  # batched inference

-    # Results
    results.print()
    results.save()
@@ -3,17 +3,12 @@
Common modules
"""

-import ast
-import contextlib
import json
import math
import platform
import warnings
-import zipfile
-from collections import OrderedDict, namedtuple
from copy import copy
from pathlib import Path
-from urllib.parse import urlparse

import cv2
import numpy as np
@@ -21,37 +16,30 @@ import pandas as pd
import requests
import torch
import torch.nn as nn
-from IPython.display import display
from PIL import Image
from torch.cuda import amp

-from utils import TryExcept
-from utils.dataloaders import exif_transpose, letterbox
-from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
-                           increment_path, is_notebook, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy,
-                           xyxy2xywh, yaml_load)
+from utils.datasets import exif_transpose, letterbox
+from utils.general import (LOGGER, check_requirements, check_suffix, colorstr, increment_path, make_divisible,
+                           non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import copy_attr, smart_inference_mode
+from utils.torch_utils import time_sync
-def autopad(k, p=None, d=1):  # kernel, padding, dilation
-    # Pad to 'same' shape outputs
-    if d > 1:
-        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
+def autopad(k, p=None):  # kernel, padding
+    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
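Both variants compute the stride-1 'same' padding `p = k // 2`; the removed one first inflates `k` to the effective dilated kernel size. A quick check of that arithmetic:

```python
import torch
import torch.nn as nn

k, d = 3, 2
k_eff = d * (k - 1) + 1  # dilated 3x3 acts like a 5x5 kernel -> k_eff = 5
p = k_eff // 2           # autopad result: 2
conv = nn.Conv2d(8, 8, k, stride=1, padding=p, dilation=d)
print(conv(torch.zeros(1, 8, 32, 32)).shape)  # torch.Size([1, 8, 32, 32])
```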
class Conv(nn.Module):
-    # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
-    default_act = nn.SiLU()  # default activation
-
-    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
+    # Standard convolution
+    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
-        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
+        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
-        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
+        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
@@ -61,15 +49,9 @@ class Conv(nn.Module):

class DWConv(Conv):
-    # Depth-wise convolution
-    def __init__(self, c1, c2, k=1, s=1, d=1, act=True):  # ch_in, ch_out, kernel, stride, dilation, activation
-        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act)
-
-
-class DWConvTranspose2d(nn.ConvTranspose2d):
-    # Depth-wise transpose convolution
-    def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0):  # ch_in, ch_out, kernel, stride, padding, padding_out
-        super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
+    # Depth-wise convolution class
+    def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
+        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
class TransformerLayer(nn.Module):
@@ -104,8 +86,8 @@ class TransformerBlock(nn.Module):
        if self.conv is not None:
            x = self.conv(x)
        b, _, w, h = x.shape
-        p = x.flatten(2).permute(2, 0, 1)
-        return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)
+        p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)
+        return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, w, h)
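The two reshapes in this hunk are equivalent; the removed form simply states the (w*h, b, c) layout directly with `permute`. A sketch verifying the equivalence:

```python
import torch

x = torch.randn(2, 16, 4, 4)                              # (b, c, w, h)
a = x.flatten(2).permute(2, 0, 1)                         # removed form
b = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)  # added form
print(torch.equal(a, b))  # True
```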
class Bottleneck(nn.Module):
@@ -137,21 +119,7 @@ class BottleneckCSP(nn.Module):
    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
-        return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
-
-
-class CrossConv(nn.Module):
-    # Cross Convolution Downsample
-    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
-        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
-        super().__init__()
-        c_ = int(c2 * e)  # hidden channels
-        self.cv1 = Conv(c1, c_, (1, k), (1, s))
-        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
-        self.add = shortcut and c1 == c2
-
-    def forward(self, x):
-        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
+        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
class C3(nn.Module):
@@ -161,19 +129,12 @@ class C3(nn.Module):
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
-        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
+        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
+        # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])

    def forward(self, x):
-        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
-
-
-class C3x(C3):
-    # C3 module with cross-convolutions
-    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
-        super().__init__(c1, c2, n, shortcut, g, e)
-        c_ = int(c2 * e)
-        self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))
+        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
class C3TR(C3):
@@ -217,7 +178,7 @@ class SPP(nn.Module):

class SPPF(nn.Module):
-    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv3 by Glenn Jocher
+    # Spatial Pyramid Pooling - Fast (SPPF) layer by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
@@ -231,18 +192,18 @@ class SPPF(nn.Module):
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
-            return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
+            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
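The comment above is the whole trick: two chained 5x5 stride-1 max-pools reproduce a 9x9 pool, three a 13x13, so SPPF(k=5) matches SPP(k=(5, 9, 13)) with fewer operations. A numeric check (PyTorch pads max-pooling with -inf, so borders match too):

```python
import torch
import torch.nn as nn

m5, m9, m13 = (nn.MaxPool2d(k, stride=1, padding=k // 2) for k in (5, 9, 13))
x = torch.randn(1, 8, 32, 32)
y1 = m5(x)
y2 = m5(y1)
print(torch.allclose(m9(x), y2), torch.allclose(m13(x), m5(y2)))  # True True
```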
class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
-        self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act)
+        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        # self.contract = Contract(gain=2)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
-        return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1))
+        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        # return self.conv(self.contract(x))
@@ -251,12 +212,12 @@ class GhostConv(nn.Module):
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out, kernel, stride, groups
        super().__init__()
        c_ = c2 // 2  # hidden channels
-        self.cv1 = Conv(c1, c_, k, s, None, g, act=act)
-        self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act)
+        self.cv1 = Conv(c1, c_, k, s, None, g, act)
+        self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)

    def forward(self, x):
        y = self.cv1(x)
-        return torch.cat((y, self.cv2(y)), 1)
+        return torch.cat([y, self.cv2(y)], 1)
class GhostBottleneck(nn.Module):
@@ -264,12 +225,11 @@ class GhostBottleneck(nn.Module):
    def __init__(self, c1, c2, k=3, s=1):  # ch_in, ch_out, kernel, stride
        super().__init__()
        c_ = c2 // 2
-        self.conv = nn.Sequential(
-            GhostConv(c1, c_, 1, 1),  # pw
-            DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw
-            GhostConv(c_, c2, 1, 1, act=False))  # pw-linear
-        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1,
-                                                                            act=False)) if s == 2 else nn.Identity()
+        self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1),  # pw
+                                  DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw
+                                  GhostConv(c_, c2, 1, 1, act=False))  # pw-linear
+        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
+                                      Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()

    def forward(self, x):
        return self.conv(x) + self.shortcut(x)
@@ -314,350 +274,159 @@ class Concat(nn.Module):

class DetectMultiBackend(nn.Module):
    # YOLOv3 MultiBackend class for python inference on various backends
-   def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True):
+   def __init__(self, weights='yolov3.pt', device=None, dnn=True):
        # Usage:
        #   PyTorch:                weights = *.pt
-       #   TorchScript:                      *.torchscript
-       #   ONNX Runtime:                     *.onnx
-       #   ONNX OpenCV DNN:                  *.onnx --dnn
-       #   OpenVINO:                         *_openvino_model
+       #   TorchScript:                      *.torchscript.pt
        #   CoreML:                           *.mlmodel
-       #   TensorRT:                         *.engine
-       #   TensorFlow SavedModel:            *_saved_model
-       #   TensorFlow GraphDef:              *.pb
+       #   TensorFlow:                       *_saved_model
+       #   TensorFlow:                       *.pb
        #   TensorFlow Lite:                  *.tflite
-       #   TensorFlow Edge TPU:              *_edgetpu.tflite
-       #   PaddlePaddle:                     *_paddle_model
-       from models.experimental import attempt_download, attempt_load  # scoped to avoid circular import
+       #   ONNX Runtime:                     *.onnx
+       #   OpenCV DNN:                       *.onnx with dnn=True
        super().__init__()
        w = str(weights[0] if isinstance(weights, list) else weights)
-       pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
-       fp16 &= pt or jit or onnx or engine  # FP16
-       nhwc = coreml or saved_model or pb or tflite or edgetpu  # BHWC formats (vs torch BCWH)
-       stride = 32  # default stride
-       cuda = torch.cuda.is_available() and device.type != 'cpu'  # use CUDA
-       if not (pt or triton):
-           w = attempt_download(w)  # download if not local
-
-       if pt:  # PyTorch
-           model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
-           stride = max(int(model.stride.max()), 32)  # model stride
-           names = model.module.names if hasattr(model, 'module') else model.names  # get class names
-           model.half() if fp16 else model.float()
-           self.model = model  # explicitly assign for to(), cpu(), cuda(), half()
-       elif jit:  # TorchScript
+       suffix, suffixes = Path(w).suffix.lower(), ['.pt', '.onnx', '.tflite', '.pb', '', '.mlmodel']
+       check_suffix(w, suffixes)  # check weights have acceptable suffix
+       pt, onnx, tflite, pb, saved_model, coreml = (suffix == x for x in suffixes)  # backend booleans
+       jit = pt and 'torchscript' in w.lower()
+       stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
+
+       if jit:  # TorchScript
            LOGGER.info(f'Loading {w} for TorchScript inference...')
            extra_files = {'config.txt': ''}  # model metadata
-           model = torch.jit.load(w, _extra_files=extra_files, map_location=device)
-           model.half() if fp16 else model.float()
-           if extra_files['config.txt']:  # load metadata dict
-               d = json.loads(extra_files['config.txt'],
-                              object_hook=lambda d: {int(k) if k.isdigit() else k: v
-                                                     for k, v in d.items()})
+           model = torch.jit.load(w, _extra_files=extra_files)
+           if extra_files['config.txt']:
+               d = json.loads(extra_files['config.txt'])  # extra_files dict
                stride, names = int(d['stride']), d['names']
+       elif pt:  # PyTorch
+           from models.experimental import attempt_load  # scoped to avoid circular import
+           model = torch.jit.load(w) if 'torchscript' in w else attempt_load(weights, map_location=device)
+           stride = int(model.stride.max())  # model stride
+           names = model.module.names if hasattr(model, 'module') else model.names  # get class names
+       elif coreml:  # CoreML *.mlmodel
+           import coremltools as ct
+           model = ct.models.MLModel(w)
        elif dnn:  # ONNX OpenCV DNN
            LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
-           check_requirements('opencv-python>=4.5.4')
+           check_requirements(('opencv-python>=4.5.4',))
            net = cv2.dnn.readNetFromONNX(w)
        elif onnx:  # ONNX Runtime
            LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
+           cuda = torch.cuda.is_available()
            check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
            import onnxruntime
            providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
            session = onnxruntime.InferenceSession(w, providers=providers)
-           output_names = [x.name for x in session.get_outputs()]
-           meta = session.get_modelmeta().custom_metadata_map  # metadata
-           if 'stride' in meta:
-               stride, names = int(meta['stride']), eval(meta['names'])
+       else:  # TensorFlow model (TFLite, pb, saved_model)
elif xml: # OpenVINO
LOGGER.info(f'Loading {w} for OpenVINO inference...')
check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/
from openvino.runtime import Core, Layout, get_batch
ie = Core()
if not Path(w).is_file(): # if not *.xml
w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if network.get_parameters()[0].get_layout().empty:
network.get_parameters()[0].set_layout(Layout('NCHW'))
batch_dim = get_batch(network)
if batch_dim.is_static:
batch_size = batch_dim.get_length()
executable_network = ie.compile_model(network, device_name='CPU') # device_name="MYRIAD" for Intel NCS2
stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata
elif engine: # TensorRT
LOGGER.info(f'Loading {w} for TensorRT inference...')
import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
if device.type == 'cpu':
device = torch.device('cuda:0')
Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
logger = trt.Logger(trt.Logger.INFO)
with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
model = runtime.deserialize_cuda_engine(f.read())
context = model.create_execution_context()
bindings = OrderedDict()
output_names = []
fp16 = False # default updated below
dynamic = False
for i in range(model.num_bindings):
name = model.get_binding_name(i)
dtype = trt.nptype(model.get_binding_dtype(i))
if model.binding_is_input(i):
if -1 in tuple(model.get_binding_shape(i)): # dynamic
dynamic = True
context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2]))
if dtype == np.float16:
fp16 = True
else: # output
output_names.append(name)
shape = tuple(context.get_binding_shape(i))
im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device)
bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr()))
binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size
elif coreml: # CoreML
LOGGER.info(f'Loading {w} for CoreML inference...')
import coremltools as ct
model = ct.models.MLModel(w)
elif saved_model: # TF SavedModel
LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
            import tensorflow as tf
-           keras = False  # assume TF1 saved_model
-           model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
-       elif pb:  # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
-           LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
-           import tensorflow as tf
+           if pb:  # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt

            def wrap_frozen_graph(gd, inputs, outputs):
-               x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=''), [])  # wrapped
-               ge = x.graph.as_graph_element
-               return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
+                   x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), [])  # wrapped
+                   return x.prune(tf.nest.map_structure(x.graph.as_graph_element, inputs),
+                                  tf.nest.map_structure(x.graph.as_graph_element, outputs))

-           def gd_outputs(gd):
-               name_list, input_list = [], []
-               for node in gd.node:  # tensorflow.core.framework.node_def_pb2.NodeDef
-                   name_list.append(node.name)
-                   input_list.extend(node.input)
-               return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))
+               LOGGER.info(f'Loading {w} for TensorFlow *.pb inference...')
+               graph_def = tf.Graph().as_graph_def()
+               graph_def.ParseFromString(open(w, 'rb').read())
+               frozen_func = wrap_frozen_graph(gd=graph_def, inputs="x:0", outputs="Identity:0")
+           elif saved_model:
+               LOGGER.info(f'Loading {w} for TensorFlow saved_model inference...')
+               model = tf.keras.models.load_model(w)
-           gd = tf.Graph().as_graph_def()  # TF GraphDef
-           with open(w, 'rb') as f:
-               gd.ParseFromString(f.read())
-           frozen_func = wrap_frozen_graph(gd, inputs='x:0', outputs=gd_outputs(gd))
-       elif tflite or edgetpu:  # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
-           try:  # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
-               from tflite_runtime.interpreter import Interpreter, load_delegate
-           except ImportError:
-               import tensorflow as tf
-               Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
-           if edgetpu:  # TF Edge TPU https://coral.ai/software/#edgetpu-runtime
-               LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
-               delegate = {
-                   'Linux': 'libedgetpu.so.1',
+           elif tflite:  # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
+               if 'edgetpu' in w.lower():
+                   LOGGER.info(f'Loading {w} for TensorFlow Edge TPU inference...')
+                   import tflite_runtime.interpreter as tfli
+                   delegate = {'Linux': 'libedgetpu.so.1',  # install https://coral.ai/software/#edgetpu-runtime
                    'Darwin': 'libedgetpu.1.dylib',
                    'Windows': 'edgetpu.dll'}[platform.system()]
-               interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
-           else:  # TFLite
-               LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
-               interpreter = Interpreter(model_path=w)  # load TFLite model
+                   interpreter = tfli.Interpreter(model_path=w, experimental_delegates=[tfli.load_delegate(delegate)])
+               else:
+                   LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
+                   interpreter = tf.lite.Interpreter(model_path=w)  # load TFLite model
                interpreter.allocate_tensors()  # allocate
                input_details = interpreter.get_input_details()  # inputs
                output_details = interpreter.get_output_details()  # outputs
# load metadata
with contextlib.suppress(zipfile.BadZipFile):
with zipfile.ZipFile(w, 'r') as model:
meta_file = model.namelist()[0]
meta = ast.literal_eval(model.read(meta_file).decode('utf-8'))
stride, names = int(meta['stride']), meta['names']
elif tfjs: # TF.js
raise NotImplementedError('ERROR: YOLOv3 TF.js inference is not supported')
elif paddle: # PaddlePaddle
LOGGER.info(f'Loading {w} for PaddlePaddle inference...')
check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle')
import paddle.inference as pdi
if not Path(w).is_file(): # if not *.pdmodel
w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir
weights = Path(w).with_suffix('.pdiparams')
config = pdi.Config(str(w), str(weights))
if cuda:
config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0)
predictor = pdi.create_predictor(config)
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
output_names = predictor.get_output_names()
elif triton: # NVIDIA Triton Inference Server
LOGGER.info(f'Using {w} as Triton Inference Server...')
check_requirements('tritonclient[all]')
from utils.triton import TritonRemoteModel
model = TritonRemoteModel(url=w)
nhwc = model.runtime.startswith('tensorflow')
else:
raise NotImplementedError(f'ERROR: {w} is not a supported format')
# class names
if 'names' not in locals():
names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)}
if names[0] == 'n01440764' and len(names) == 1000: # ImageNet
names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names
        self.__dict__.update(locals())  # assign all variables to self
-   def forward(self, im, augment=False, visualize=False):
+   def forward(self, im, augment=False, visualize=False, val=False):
        # YOLOv3 MultiBackend inference
        b, ch, h, w = im.shape  # batch, channel, height, width
-       if self.fp16 and im.dtype != torch.float16:
-           im = im.half()  # to FP16
-       if self.nhwc:
-           im = im.permute(0, 2, 3, 1)  # torch BCHW to numpy BHWC shape(1,320,192,3)
        if self.pt:  # PyTorch
-           y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
-       elif self.jit:  # TorchScript
-           y = self.model(im)
-       elif self.dnn:  # ONNX OpenCV DNN
+           y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
+           return y if val else y[0]
+       elif self.coreml:  # CoreML *.mlmodel
+           im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
im = im.cpu().numpy() # torch to numpy
self.net.setInput(im)
y = self.net.forward()
elif self.onnx: # ONNX Runtime
im = im.cpu().numpy() # torch to numpy
y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
elif self.xml: # OpenVINO
im = im.cpu().numpy() # FP32
y = list(self.executable_network([im]).values())
elif self.engine: # TensorRT
if self.dynamic and im.shape != self.bindings['images'].shape:
i = self.model.get_binding_index('images')
self.context.set_binding_shape(i, im.shape) # reshape if dynamic
self.bindings['images'] = self.bindings['images']._replace(shape=im.shape)
for name in self.output_names:
i = self.model.get_binding_index(name)
self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i)))
s = self.bindings['images'].shape
assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
self.binding_addrs['images'] = int(im.data_ptr())
self.context.execute_v2(list(self.binding_addrs.values()))
y = [self.bindings[x].data for x in sorted(self.output_names)]
elif self.coreml: # CoreML
im = im.cpu().numpy()
            im = Image.fromarray((im[0] * 255).astype('uint8'))
            # im = im.resize((192, 320), Image.ANTIALIAS)
            y = self.model.predict({'image': im})  # coordinates are xywh normalized
-           if 'confidence' in y:
            box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]])  # xyxy pixels
            conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float)
            y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
-           else:
-               y = list(reversed(y.values()))  # reversed for segmentation models (pred, proto)
-       elif self.paddle:  # PaddlePaddle
-           im = im.cpu().numpy().astype(np.float32)
-           self.input_handle.copy_from_cpu(im)
-           self.predictor.run()
-           y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names]
-       elif self.triton:  # NVIDIA Triton Inference Server
-           y = self.model(im)
-       else:  # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
-           im = im.cpu().numpy()
+       elif self.onnx:  # ONNX
+           im = im.cpu().numpy()  # torch to numpy
+           if self.dnn:  # ONNX OpenCV DNN
+               self.net.setInput(im)
+               y = self.net.forward()
+           else:  # ONNX Runtime
+               y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
+       else:  # TensorFlow model (TFLite, pb, saved_model)
+           im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
+           if self.pb:
+               y = self.frozen_func(x=self.tf.constant(im)).numpy()
-           if self.saved_model:  # SavedModel
-               y = self.model(im, training=False) if self.keras else self.model(im)
-           elif self.pb:  # GraphDef
-               y = self.frozen_func(x=self.tf.constant(im))
+           elif self.saved_model:
+               y = self.model(im, training=False).numpy()
+           elif self.tflite:
+               input, output = self.input_details[0], self.output_details[0]
-           else:  # Lite or Edge TPU
-               input = self.input_details[0]
                int8 = input['dtype'] == np.uint8  # is TFLite quantized uint8 model
                if int8:
                    scale, zero_point = input['quantization']
                    im = (im / scale + zero_point).astype(np.uint8)  # de-scale
                self.interpreter.set_tensor(input['index'], im)
                self.interpreter.invoke()
-               y = []
-               for output in self.output_details:
-                   x = self.interpreter.get_tensor(output['index'])
-                   if int8:
-                       scale, zero_point = output['quantization']
-                       x = (x.astype(np.float32) - zero_point) * scale  # re-scale
-                   y.append(x)
-               y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y]
-               y[0][..., :4] *= [w, h, w, h]  # xywh normalized to pixels
-
-       if isinstance(y, (list, tuple)):
-           return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y]
-       else:
-           return self.from_numpy(y)
+               y = self.interpreter.get_tensor(output['index'])
+               if int8:
+                   scale, zero_point = output['quantization']
+                   y = (y.astype(np.float32) - zero_point) * scale  # re-scale
+           y[..., 0] *= w  # x
+           y[..., 1] *= h  # y
+           y[..., 2] *= w  # w
+           y[..., 3] *= h  # h
+       y = torch.tensor(y)
+       return (y, []) if val else y
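The TFLite branch applies the affine quantization rule in both directions; a standalone sketch with made-up `scale`/`zero_point` values:

```python
# Affine quantization round trip used above: quantize with x/scale + zero_point,
# de-quantize with (q - zero_point) * scale; error is bounded by one step.
import numpy as np

scale, zero_point = 1 / 255, 128                     # illustrative parameters
x = np.random.rand(4).astype(np.float32) - 0.5
q = (x / scale + zero_point).astype(np.uint8)        # de-scale before set_tensor
x2 = (q.astype(np.float32) - zero_point) * scale     # re-scale after get_tensor
print(np.abs(x - x2).max() < scale)                  # True
```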
def from_numpy(self, x):
return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x
def warmup(self, imgsz=(1, 3, 640, 640)):
# Warmup model by running inference once
warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton
if any(warmup_types) and (self.device.type != 'cpu' or self.triton):
im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input
for _ in range(2 if self.jit else 1): #
self.forward(im) # warmup
@staticmethod
def _model_type(p='path/to/model.pt'):
# Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
# types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle]
from export import export_formats
from utils.downloads import is_url
sf = list(export_formats().Suffix) # export suffixes
if not is_url(p, check=False):
check_suffix(p, sf) # checks
url = urlparse(p) # if url may be Triton inference server
types = [s in Path(p).name for s in sf]
types[8] &= not types[9] # tflite &= not edgetpu
triton = not any(types) and all([any(s in url.scheme for s in ['http', 'grpc']), url.netloc])
return types + [triton]
@staticmethod
def _load_metadata(f=Path('path/to/meta.yaml')):
# Load metadata from meta.yaml if it exists
if f.exists():
d = yaml_load(f)
return d['stride'], d['names'] # assign stride, names
return None, None
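For orientation, a hedged usage sketch of the simplified `DetectMultiBackend`; the weight path and input size are placeholders, not taken from this commit:

```python
# Smoke test: a *.pt suffix routes to the PyTorch branch; other suffixes pick
# the TorchScript/CoreML/ONNX/TensorFlow branches shown in the diff above.
import torch
from models.common import DetectMultiBackend

model = DetectMultiBackend(weights='yolov3.pt')  # hypothetical local weights
im = torch.zeros(1, 3, 640, 640)                 # BCHW dummy input
pred = model(im)                                 # val=False -> returns y[0]
print(model.stride, len(model.names))
```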
class AutoShape(nn.Module):
    # YOLOv3 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
-   agnostic = False  # NMS class-agnostic
-   multi_label = False  # NMS multiple labels per box
    classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
+   multi_label = False  # NMS multiple labels per box
    max_det = 1000  # maximum number of detections per image
-   amp = False  # Automatic Mixed Precision (AMP) inference

-   def __init__(self, model, verbose=True):
+   def __init__(self, model):
        super().__init__()
-       if verbose:
-           LOGGER.info('Adding AutoShape... ')
-       copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=())  # copy attributes
-       self.dmb = isinstance(model, DetectMultiBackend)  # DetectMultiBackend() instance
-       self.pt = not self.dmb or model.pt  # PyTorch model
        self.model = model.eval()
-       if self.pt:
-           m = self.model.model.model[-1] if self.dmb else self.model.model[-1]  # Detect()
-           m.inplace = False  # Detect.inplace=False for safe multithread inference
-           m.export = True  # do not output loss values
+
+   def autoshape(self):
+       LOGGER.info('AutoShape already enabled, skipping... ')  # model already converted to model.autoshape()
+       return self

    def _apply(self, fn):
        # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
        self = super()._apply(fn)
-       if self.pt:
-           m = self.model.model.model[-1] if self.dmb else self.model.model[-1]  # Detect()
+       m = self.model.model[-1]  # Detect()
        m.stride = fn(m.stride)
        m.grid = list(map(fn, m.grid))
        if isinstance(m.anchor_grid, list):
            m.anchor_grid = list(map(fn, m.anchor_grid))
        return self

-   @smart_inference_mode()
-   def forward(self, ims, size=640, augment=False, profile=False):
-       # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are:
-       #   file:        ims = 'data/images/zidane.jpg'  # str or PosixPath
+   @torch.no_grad()
+   def forward(self, imgs, size=640, augment=False, profile=False):
+       # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
+       #   file:        imgs = 'data/images/zidane.jpg'  # str or PosixPath
        #   URI:              = 'https://ultralytics.com/images/zidane.jpg'
        #   OpenCV:           = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:              = Image.open('image.jpg') or ImageGrab.grab()  # HWC x(640,1280,3)
@@ -665,20 +434,16 @@ class AutoShape(nn.Module):
        #   torch:            = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:         = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images

-       dt = (Profile(), Profile(), Profile())
-       with dt[0]:
-           if isinstance(size, int):  # expand
-               size = (size, size)
-           p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device)  # param
-           autocast = self.amp and (p.device.type != 'cpu')  # Automatic Mixed Precision (AMP) inference
-           if isinstance(ims, torch.Tensor):  # torch
-               with amp.autocast(autocast):
-                   return self.model(ims.to(p.device).type_as(p), augment=augment)  # inference
+       t = [time_sync()]
+       p = next(self.model.parameters())  # for device and type
+       if isinstance(imgs, torch.Tensor):  # torch
+           with amp.autocast(enabled=p.device.type != 'cpu'):
+               return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference

        # Pre-process
-       n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims])  # number, list of images
+       n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs])  # number of images, list of images
        shape0, shape1, files = [], [], []  # image and inference shapes, filenames
-       for i, im in enumerate(ims):
+       for i, im in enumerate(imgs):
            f = f'image{i}'  # filename
            if isinstance(im, (str, Path)):  # filename or uri
                im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
@@ -688,116 +453,110 @@ class AutoShape(nn.Module):
            files.append(Path(f).with_suffix('.jpg').name)
            if im.shape[0] < 5:  # image in CHW
                im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)
-           im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)  # enforce 3ch input
+           im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3)  # enforce 3ch input
            s = im.shape[:2]  # HWC
            shape0.append(s)  # image shape
-           g = max(size) / max(s)  # gain
-           shape1.append([int(y * g) for y in s])
-           ims[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update
-       shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)]  # inf shape
-       x = [letterbox(im, shape1, auto=False)[0] for im in ims]  # pad
-       x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2)))  # stack and BHWC to BCHW
+           g = (size / max(s))  # gain
+           shape1.append([y * g for y in s])
+           imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update
+       shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)]  # inference shape
+       x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs]  # pad
+       x = np.stack(x, 0) if n > 1 else x[0][None]  # stack
+       x = np.ascontiguousarray(x.transpose((0, 3, 1, 2)))  # BHWC to BCHW
        x = torch.from_numpy(x).to(p.device).type_as(p) / 255  # uint8 to fp16/32
+       t.append(time_sync())

-       with amp.autocast(autocast):
+       with amp.autocast(enabled=p.device.type != 'cpu'):
            # Inference
-           with dt[1]:
-               y = self.model(x, augment=augment)  # forward
+           y = self.model(x, augment, profile)[0]  # forward
+           t.append(time_sync())

            # Post-process
-           with dt[2]:
-               y = non_max_suppression(y if self.dmb else y[0],
-                                       self.conf,
-                                       self.iou,
-                                       self.classes,
-                                       self.agnostic,
-                                       self.multi_label,
-                                       max_det=self.max_det)  # NMS
+           y = non_max_suppression(y, self.conf, iou_thres=self.iou, classes=self.classes,
+                                   multi_label=self.multi_label, max_det=self.max_det)  # NMS
            for i in range(n):
-               scale_boxes(shape1, y[i][:, :4], shape0[i])
+               scale_coords(shape1, y[i][:, :4], shape0[i])
+           t.append(time_sync())

-           return Detections(ims, y, files, dt, self.names, x.shape)
+           return Detections(imgs, y, files, t, self.names, x.shape)
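A sketch of the wrapper end to end; the hub entry point is assumed, not shown in this diff:

```python
# AutoShape accepts paths, URLs, numpy arrays, PIL images or tensors and
# returns a Detections object after letterboxing, inference and NMS.
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3')  # assumed hub entry
results = model('https://ultralytics.com/images/zidane.jpg', size=640)
results.print()                # per-class counts and speed
df = results.pandas().xyxy[0]  # detections as a DataFrame
```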
class Detections:
    # YOLOv3 detections class for inference results
-   def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None):
+   def __init__(self, imgs, pred, files, times=None, names=None, shape=None):
        super().__init__()
        d = pred[0].device  # device
-       gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims]  # normalizations
-       self.ims = ims  # list of images as numpy arrays
+       gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs]  # normalizations
+       self.imgs = imgs  # list of images as numpy arrays
        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)
        self.names = names  # class names
        self.files = files  # image filenames
-       self.times = times  # profiling times
        self.xyxy = pred  # xyxy pixels
        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels
        self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)]  # xyxy normalized
        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized
        self.n = len(self.pred)  # number of images (batch size)
-       self.t = tuple(x.t / self.n * 1E3 for x in times)  # timestamps (ms)
-       self.s = tuple(shape)  # inference BCHW shape
+       self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3))  # timestamps (ms)
+       self.s = shape  # inference BCHW shape

-   def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')):
-       s, crops = '', []
-       for i, (im, pred) in enumerate(zip(self.ims, self.pred)):
-           s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} '  # string
+   def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
+       crops = []
+       for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
+           s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} '  # string
            if pred.shape[0]:
                for c in pred[:, -1].unique():
                    n = (pred[:, -1] == c).sum()  # detections per class
                    s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
-               s = s.rstrip(', ')
                if show or save or render or crop:
                    annotator = Annotator(im, example=str(self.names))
                    for *box, conf, cls in reversed(pred):  # xyxy, confidence, class
                        label = f'{self.names[int(cls)]} {conf:.2f}'
                        if crop:
                            file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
-                           crops.append({
-                               'box': box,
-                               'conf': conf,
-                               'cls': cls,
-                               'label': label,
+                           crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,
                                          'im': save_one_box(box, im, file=file, save=save)})
                        else:  # all others
-                           annotator.box_label(box, label if labels else '', color=colors(cls))
+                           annotator.box_label(box, label, color=colors(cls))
                    im = annotator.im
            else:
                s += '(no detections)'

            im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im  # from np
+           if pprint:
+               LOGGER.info(s.rstrip(', '))
            if show:
-               display(im) if is_notebook() else im.show(self.files[i])
+               im.show(self.files[i])  # show
            if save:
                f = self.files[i]
                im.save(save_dir / f)  # save
                if i == self.n - 1:
                    LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
            if render:
-               self.ims[i] = np.asarray(im)
-       if pprint:
-           s = s.lstrip('\n')
-           return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t
+               self.imgs[i] = np.asarray(im)
        if crop:
            if save:
                LOGGER.info(f'Saved results to {save_dir}\n')
            return crops
-   @TryExcept('Showing images is not supported in this environment')
-   def show(self, labels=True):
-       self._run(show=True, labels=labels)  # show results
+   def print(self):
+       self.display(pprint=True)  # print results
+       LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %
+                   self.t)

-   def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False):
-       save_dir = increment_path(save_dir, exist_ok, mkdir=True)  # increment save_dir
-       self._run(save=True, labels=labels, save_dir=save_dir)  # save results
+   def show(self):
+       self.display(show=True)  # show results

-   def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False):
-       save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None
-       return self._run(crop=True, save=save, save_dir=save_dir)  # crop results
+   def save(self, save_dir='runs/detect/exp'):
+       save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True)  # increment save_dir
+       self.display(save=True, save_dir=save_dir)  # save results

-   def render(self, labels=True):
-       self._run(render=True, labels=labels)  # render results
-       return self.ims
+   def crop(self, save=True, save_dir='runs/detect/exp'):
+       save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
+       return self.display(crop=True, save=save, save_dir=save_dir)  # crop results
+
+   def render(self):
+       self.display(render=True)  # render results
+       return self.imgs

    def pandas(self):
        # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
@@ -811,57 +570,24 @@ class Detections:

    def tolist(self):
        # return a list of Detections objects, i.e. 'for result in results.tolist():'
-       r = range(self.n)  # iterable
-       x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
-       # for d in x:
-       #    for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
-       #        setattr(d, k, getattr(d, k)[0])  # pop out of list
+       x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)]
+       for d in x:
+           for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
+               setattr(d, k, getattr(d, k)[0])  # pop out of list
        return x

-   def print(self):
-       LOGGER.info(self.__str__())
-
-   def __len__(self):  # override len(results)
+   def __len__(self):
        return self.n
-
-   def __str__(self):  # override print(results)
-       return self._run(pprint=True)  # print results
-
-   def __repr__(self):
-       return f'YOLOv3 {self.__class__} instance\n' + self.__str__()
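Continuing the sketch above, the `Detections` accessors are typically consumed like this (filenames are placeholders):

```python
# Batch inference, then per-image unpacking via tolist() and box cropping.
results = model(['image1.jpg', 'image2.jpg'])  # hypothetical local files
for r in results.tolist():                     # one Detections object per image
    print(r.xyxy)                              # (xyxy, conf, cls) tensor, pixels
crops = results.crop(save=False)               # dicts with 'box'/'conf'/'cls'/'im'
```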
class Proto(nn.Module):
# YOLOv3 mask Proto module for segmentation models
def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks
super().__init__()
self.cv1 = Conv(c1, c_, k=3)
self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
self.cv2 = Conv(c_, c_, k=3)
self.cv3 = Conv(c_, c2)
def forward(self, x):
return self.cv3(self.cv2(self.upsample(self.cv1(x))))
class Classify(nn.Module):
    # YOLOv3 classification head, i.e. x(b,c1,20,20) to x(b,c2)
-   def __init__(self,
-                c1,
-                c2,
-                k=1,
-                s=1,
-                p=None,
-                g=1,
-                dropout_p=0.0):  # ch_in, ch_out, kernel, stride, padding, groups, dropout probability
+   def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
-       c_ = 1280  # efficientnet_b0 size
-       self.conv = Conv(c1, c_, k, s, autopad(k, p), g)
-       self.pool = nn.AdaptiveAvgPool2d(1)  # to x(b,c_,1,1)
-       self.drop = nn.Dropout(p=dropout_p, inplace=True)
-       self.linear = nn.Linear(c_, c2)  # to x(b,c2)
+       self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1)
+       self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g)  # to x(b,c2,1,1)
+       self.flat = nn.Flatten()

    def forward(self, x):
-       if isinstance(x, list):
-           x = torch.cat(x, 1)
-       return self.linear(self.drop(self.pool(self.conv(x)).flatten(1)))
+       z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list
+       return self.flat(self.conv(z))  # flatten to x(b,c2)
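A shape check for the pooled `Classify` head in its new form (channel counts are illustrative):

```python
# Global average pool to 1x1, 1x1 conv to c2 channels, flatten to (b, c2).
import torch
from models.common import Classify  # assumes the post-commit definition above

head = Classify(c1=512, c2=80)
x = torch.rand(2, 512, 20, 20)
print(head(x).shape)  # torch.Size([2, 80])
```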

View File (models/experimental.py)

@@ -8,9 +8,24 @@ import numpy as np
import torch
import torch.nn as nn

+from models.common import Conv
from utils.downloads import attempt_download

+class CrossConv(nn.Module):
+    # Cross Convolution Downsample
+    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
+        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
+        super().__init__()
+        c_ = int(c2 * e)  # hidden channels
+        self.cv1 = Conv(c1, c_, (1, k), (1, s))
+        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
+        self.add = shortcut and c1 == c2
+
+    def forward(self, x):
+        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
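`CrossConv` factorizes a k×k kernel into a 1×k followed by a k×1; a quick parameter-count comparison for the default k=3 (channel sizes assumed):

```python
# Dense 3x3 conv vs the factorized 1x3 + 3x1 pair, bias-free, c1 = c2 = 64.
import torch.nn as nn

c1, c2, k = 64, 64, 3
dense = nn.Conv2d(c1, c2, k, bias=False)
factored = nn.Sequential(nn.Conv2d(c1, c2, (1, k), bias=False),
                         nn.Conv2d(c2, c2, (k, 1), bias=False))
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(factored))  # 36864 vs 24576: ~33% fewer weights
```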
class Sum(nn.Module):
    # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
    def __init__(self, n, weight=False):  # n: number of inputs

@@ -48,8 +63,8 @@ class MixConv2d(nn.Module):
            a[0] = 1
            c_ = np.linalg.lstsq(a, b, rcond=None)[0].round()  # solve for equal weight indices, ax = b

-       self.m = nn.ModuleList([
-           nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
+       self.m = nn.ModuleList(
+           [nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()

@@ -63,49 +78,44 @@ class Ensemble(nn.ModuleList):
        super().__init__()

    def forward(self, x, augment=False, profile=False, visualize=False):
-       y = [module(x, augment, profile, visualize)[0] for module in self]
+       y = []
+       for module in self:
+           y.append(module(x, augment, profile, visualize)[0])
        # y = torch.stack(y).max(0)[0]  # max ensemble
        # y = torch.stack(y).mean(0)  # mean ensemble
        y = torch.cat(y, 1)  # nms ensemble
        return y, None  # inference, train output
-def attempt_load(weights, device=None, inplace=True, fuse=True):
-   # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
+def attempt_load(weights, map_location=None, inplace=True, fuse=True):
    from models.yolo import Detect, Model
+   # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
-       ckpt = torch.load(attempt_download(w), map_location='cpu')  # load
-       ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float()  # FP32 model
-
-       # Model compatibility updates
-       if not hasattr(ckpt, 'stride'):
-           ckpt.stride = torch.tensor([32.])
-       if hasattr(ckpt, 'names') and isinstance(ckpt.names, (list, tuple)):
-           ckpt.names = dict(enumerate(ckpt.names))  # convert to dict
-
-       model.append(ckpt.fuse().eval() if fuse and hasattr(ckpt, 'fuse') else ckpt.eval())  # model in eval mode
-
-   # Module compatibility updates
+       ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
+       ckpt = (ckpt['ema'] or ckpt['model']).float()  # FP32 model
+       model.append(ckpt.fuse().eval() if fuse else ckpt.eval())  # fused or un-fused model in eval mode
+
+   # Compatibility updates
    for m in model.modules():
        t = type(m)
        if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
            m.inplace = inplace  # torch 1.7.0 compatibility
-           if t is Detect and not isinstance(m.anchor_grid, list):
-               delattr(m, 'anchor_grid')
-               setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
+           if t is Detect:
+               if not isinstance(m.anchor_grid, list):  # new Detect Layer compatibility
+                   delattr(m, 'anchor_grid')
+                   setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
+       elif t is Conv:
+           m._non_persistent_buffers_set = set()  # torch 1.6.0 compatibility
        elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'):
            m.recompute_scale_factor = None  # torch 1.11.0 compatibility

-   # Return model
    if len(model) == 1:
        return model[-1]  # return model
-
-   # Return detection ensemble
-   print(f'Ensemble created with {weights}\n')
-   for k in 'names', 'nc', 'yaml':
-       setattr(model, k, getattr(model[0], k))
-   model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride  # max stride
-   assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}'
-   return model
+   else:
+       print(f'Ensemble created with {weights}\n')
+       for k in ['names']:
+           setattr(model, k, getattr(model[-1], k))
+       model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride  # max stride
+       return model  # return ensemble
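A hedged sketch of the two return modes (weight filenames are placeholders):

```python
# attempt_load returns a bare model for one weights file and an Ensemble,
# whose per-model outputs are concatenated for NMS, for several files.
from models.experimental import attempt_load

single = attempt_load('yolov3.pt', map_location='cpu')
ensemble = attempt_load(['yolov3.pt', 'yolov3-spp.pt'], map_location='cpu')
print(type(single).__name__, type(ensemble).__name__)  # e.g. Model, Ensemble
```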

View File (models/tf.py)

@@ -1,22 +1,25 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
TensorFlow, Keras and TFLite versions of YOLOv3
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127

Usage:
-   $ python models/tf.py --weights yolov5s.pt
+   $ python models/tf.py --weights yolov3.pt

Export:
-   $ python export.py --weights yolov5s.pt --include saved_model pb tflite tfjs
+   $ python path/to/export.py --weights yolov3.pt --include saved_model pb tflite tfjs
"""

import argparse
+import logging
import sys
from copy import deepcopy
from pathlib import Path

+from packaging import version
+
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1]  # YOLOv3 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
# ROOT = ROOT.relative_to(Path.cwd())  # relative
@@ -25,15 +28,21 @@ import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn
+from keras import backend
+from keras.engine.base_layer import Layer
+from keras.engine.input_spec import InputSpec
+from keras.utils import conv_utils
from tensorflow import keras

-from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv,
-                           DWConvTranspose2d, Focus, autopad)
-from models.experimental import MixConv2d, attempt_load
-from models.yolo import Detect, Segment
+from models.common import C3, SPP, SPPF, Bottleneck, BottleneckCSP, Concat, Conv, DWConv, Focus, autopad
+from models.experimental import CrossConv, MixConv2d, attempt_load
+from models.yolo import Detect
from utils.activations import SiLU
from utils.general import LOGGER, make_divisible, print_args

+# isort: off
+from tensorflow.python.util.tf_export import keras_export

class TFBN(keras.layers.Layer):
    # TensorFlow BatchNormalization wrapper
@@ -50,14 +59,33 @@ class TFBN(keras.layers.Layer):
        return self.bn(inputs)
+class TFMaxPool2d(keras.layers.Layer):
+    # TensorFlow MAX Pooling
+    def __init__(self, k, s, p, w=None):
+        super().__init__()
+        self.pool = keras.layers.MaxPool2D(pool_size=k, strides=s, padding='valid')
+
+    def call(self, inputs):
+        return self.pool(inputs)
+
+class TFZeroPad2d(keras.layers.Layer):
+    # TensorFlow zero padding
+    def __init__(self, p, w=None):
+        super().__init__()
+        if version.parse(tf.__version__) < version.parse('2.11.0'):
+            self.zero_pad = keras.layers.ZeroPadding2D(padding=p)
+        else:
+            self.zero_pad = keras.layers.ZeroPadding2D(padding=((p[0], p[1]), (p[2], p[3])))
+
+    def call(self, inputs):
+        return self.zero_pad(inputs)
class TFPad(keras.layers.Layer):
-    # Pad inputs in spatial dimensions 1 and 2
    def __init__(self, pad):
        super().__init__()
-       if isinstance(pad, int):
-           self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])
-       else:  # tuple/list
-           self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]])
+       self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])

    def call(self, inputs):
        return tf.pad(inputs, self.pad, mode='constant', constant_values=0)
@@ -69,69 +97,31 @@ class TFConv(keras.layers.Layer):
        # ch_in, ch_out, weights, kernel, stride, padding, groups
        super().__init__()
        assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
+       assert isinstance(k, int), "Convolution with multiple kernels are not allowed."
        # TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding)
        # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch
        conv = keras.layers.Conv2D(
-           filters=c2,
-           kernel_size=k,
-           strides=s,
-           padding='SAME' if s == 1 else 'VALID',
-           use_bias=not hasattr(w, 'bn'),
+           c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False if hasattr(w, 'bn') else True,
            kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
            bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
        self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
        self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
-       self.act = activations(w.act) if act else tf.identity
+
+       # activations
+       if isinstance(w.act, nn.LeakyReLU):
+           self.act = (lambda x: keras.activations.relu(x, alpha=0.1)) if act else tf.identity
+       elif isinstance(w.act, nn.Hardswish):
+           self.act = (lambda x: x * tf.nn.relu6(x + 3) * 0.166666667) if act else tf.identity
+       elif isinstance(w.act, (nn.SiLU, SiLU)):
+           self.act = (lambda x: keras.activations.swish(x)) if act else tf.identity
+       else:
+           raise Exception(f'no matching TensorFlow activation found for {w.act}')

    def call(self, inputs):
        return self.act(self.bn(self.conv(inputs)))
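The padding comment above is why `TFPad` exists: at stride 2 Keras 'SAME' pads asymmetrically, so the export pads explicitly and convolves with 'VALID'. A shape-only sketch (sizes assumed):

```python
# Explicit pad + 'VALID' reproduces PyTorch's k=3, s=2, p=1 output size.
import tensorflow as tf
from tensorflow import keras

x = tf.zeros((1, 64, 64, 3))                          # NHWC input
pad = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])   # autopad(3) -> 1 per side
conv = keras.layers.Conv2D(8, 3, 2, 'VALID')
print(conv(tf.pad(x, pad)).shape)                     # (1, 32, 32, 8)
```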
class TFDWConv(keras.layers.Layer):
# Depthwise convolution
def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None):
# ch_in, ch_out, weights, kernel, stride, padding, groups
super().__init__()
assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels'
conv = keras.layers.DepthwiseConv2D(
kernel_size=k,
depth_multiplier=c2 // c1,
strides=s,
padding='SAME' if s == 1 else 'VALID',
use_bias=not hasattr(w, 'bn'),
depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
self.act = activations(w.act) if act else tf.identity
def call(self, inputs):
return self.act(self.bn(self.conv(inputs)))
class TFDWConvTranspose2d(keras.layers.Layer):
# Depthwise ConvTranspose2d
def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None):
# ch_in, ch_out, weights, kernel, stride, padding, groups
super().__init__()
assert c1 == c2, f'TFDWConv() output={c2} must be equal to input={c1} channels'
assert k == 4 and p1 == 1, 'TFDWConv() only valid for k=4 and p1=1'
weight, bias = w.weight.permute(2, 3, 1, 0).numpy(), w.bias.numpy()
self.c1 = c1
self.conv = [
keras.layers.Conv2DTranspose(filters=1,
kernel_size=k,
strides=s,
padding='VALID',
output_padding=p2,
use_bias=True,
kernel_initializer=keras.initializers.Constant(weight[..., i:i + 1]),
bias_initializer=keras.initializers.Constant(bias[i])) for i in range(c1)]
def call(self, inputs):
return tf.concat([m(x) for m, x in zip(self.conv, tf.split(inputs, self.c1, 3))], 3)[:, 1:-1, 1:-1]
class TFFocus(keras.layers.Layer):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
@@ -141,8 +131,10 @@ class TFFocus(keras.layers.Layer):
    def call(self, inputs):  # x(b,w,h,c) -> y(b,w/2,h/2,4c)
        # inputs = inputs / 255  # normalize 0-255 to 0-1
-       inputs = [inputs[:, ::2, ::2, :], inputs[:, 1::2, ::2, :], inputs[:, ::2, 1::2, :], inputs[:, 1::2, 1::2, :]]
-       return self.conv(tf.concat(inputs, 3))
+       return self.conv(tf.concat([inputs[:, ::2, ::2, :],
+                                   inputs[:, 1::2, ::2, :],
+                                   inputs[:, ::2, 1::2, :],
+                                   inputs[:, 1::2, 1::2, :]], 3))
class TFBottleneck(keras.layers.Layer):
@@ -158,32 +150,15 @@ class TFBottleneck(keras.layers.Layer):
        return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
class TFCrossConv(keras.layers.Layer):
# Cross Convolution
def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None):
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, (1, k), (1, s), w=w.cv1)
self.cv2 = TFConv(c_, c2, (k, 1), (s, 1), g=g, w=w.cv2)
self.add = shortcut and c1 == c2
def call(self, inputs):
return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
class TFConv2d(keras.layers.Layer):
    # Substitution for PyTorch nn.Conv2D
    def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
        super().__init__()
        assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
-       self.conv = keras.layers.Conv2D(filters=c2,
-                                       kernel_size=k,
-                                       strides=s,
-                                       padding='VALID',
-                                       use_bias=bias,
-                                       kernel_initializer=keras.initializers.Constant(
-                                           w.weight.permute(2, 3, 1, 0).numpy()),
-                                       bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None)
+       self.conv = keras.layers.Conv2D(
+           c2, k, s, 'VALID', use_bias=bias,
+           kernel_initializer=keras.initializers.Constant(w.weight.permute(2, 3, 1, 0).numpy()),
+           bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None, )

    def call(self, inputs):
        return self.conv(inputs)
@@ -200,7 +175,7 @@ class TFBottleneckCSP(keras.layers.Layer):
        self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3)
        self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4)
        self.bn = TFBN(w.bn)
-       self.act = lambda x: keras.activations.swish(x)
+       self.act = lambda x: keras.activations.relu(x, alpha=0.1)
        self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])

    def call(self, inputs):
@@ -224,22 +199,6 @@ class TFC3(keras.layers.Layer):
        return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
class TFC3x(keras.layers.Layer):
# 3 module with cross-convolutions
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
# ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
self.m = keras.Sequential([
TFCrossConv(c_, c_, k=3, s=1, g=g, e=1.0, shortcut=shortcut, w=w.m[j]) for j in range(n)])
def call(self, inputs):
return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
class TFSPP(keras.layers.Layer):
    # Spatial pyramid pooling layer used in YOLOv3-SPP
    def __init__(self, c1, c2, k=(5, 9, 13), w=None):
@@ -271,7 +230,6 @@ class TFSPPF(keras.layers.Layer):

class TFDetect(keras.layers.Layer):
-    # TF YOLOv3 Detect layer
    def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None):  # detection layer
        super().__init__()
        self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32)
@@ -281,7 +239,8 @@ class TFDetect(keras.layers.Layer):
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [tf.zeros(1)] * self.nl  # init grid
        self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32)
-       self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), [self.nl, 1, -1, 1, 2])
+       self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]),
+                                     [self.nl, 1, -1, 1, 2])
        self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)]
        self.training = False  # set to False after building model
        self.imgsz = imgsz
@ -296,21 +255,19 @@ class TFDetect(keras.layers.Layer):
x.append(self.m[i](inputs[i]))
# x(bs,20,20,255) to x(bs,3,20,20,85)
ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
x[i] = tf.transpose(tf.reshape(x[i], [-1, ny * nx, self.na, self.no]), [0, 2, 1, 3])
if not self.training:  # inference
y = tf.sigmoid(x[i])
xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
# Normalize xywh to 0-1 to reduce calibration error
xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
y = tf.concat([xy, wh, y[..., 4:]], -1)
z.append(tf.reshape(y, [-1, 3 * ny * nx, self.no]))
return x if self.training else (tf.concat(z, 1), x)
@staticmethod
def _make_grid(nx=20, ny=20):
@@ -320,44 +277,11 @@ class TFDetect(keras.layers.Layer):
return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32)
class TFSegment(TFDetect):
# YOLOv3 Segment head for segmentation models
def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), imgsz=(640, 640), w=None):
super().__init__(nc, anchors, ch, imgsz, w)
self.nm = nm # number of masks
self.npr = npr # number of protos
self.no = 5 + nc + self.nm # number of outputs per anchor
self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] # output conv
self.proto = TFProto(ch[0], self.npr, self.nm, w=w.proto) # protos
self.detect = TFDetect.call
def call(self, x):
p = self.proto(x[0])
# p = TFUpsample(None, scale_factor=4, mode='nearest')(self.proto(x[0])) # (optional) full-size protos
p = tf.transpose(p, [0, 3, 1, 2]) # from shape(1,160,160,32) to shape(1,32,160,160)
x = self.detect(self, x)
return (x, p) if self.training else (x[0], p)
class TFProto(keras.layers.Layer):
def __init__(self, c1, c_=256, c2=32, w=None):
super().__init__()
self.cv1 = TFConv(c1, c_, k=3, w=w.cv1)
self.upsample = TFUpsample(None, scale_factor=2, mode='nearest')
self.cv2 = TFConv(c_, c_, k=3, w=w.cv2)
self.cv3 = TFConv(c_, c2, w=w.cv3)
def call(self, inputs):
return self.cv3(self.cv2(self.upsample(self.cv1(inputs))))
class TFUpsample(keras.layers.Layer):
# TF version of torch.nn.Upsample()
def __init__(self, size, scale_factor, mode, w=None):  # warning: all arguments needed including 'w'
super().__init__()
assert scale_factor == 2, "scale_factor must be 2"
self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * 2, x.shape[2] * 2), method=mode)
# self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode)
# with default arguments: align_corners=False, half_pixel_centers=False
# self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x,
@@ -368,10 +292,9 @@ class TFUpsample(keras.layers.Layer):
class TFConcat(keras.layers.Layer):
# TF version of torch.concat()
def __init__(self, dimension=1, w=None):
super().__init__()
assert dimension == 1, "convert only NCHW to NHWC concat"
self.d = 3
def call(self, inputs):
@@ -395,26 +318,22 @@ def parse_model(d, ch, model, imgsz):  # model_dict, input_channels(3)
pass
n = max(round(n * gd), 1) if n > 1 else n  # depth gain
if m in [
nn.Conv2d, Conv, DWConv, DWConvTranspose2d, Bottleneck, SPP, SPPF, MixConv2d, Focus, CrossConv,
BottleneckCSP, C3, C3x]:
c1, c2 = ch[f], args[0]
c2 = make_divisible(c2 * gw, 8) if c2 != no else c2
args = [c1, c2, *args[1:]]
if m in [BottleneckCSP, C3, C3x]:
args.insert(2, n)
n = 1
elif m is nn.BatchNorm2d:
args = [ch[f]]
elif m is Concat:
c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)
elif m in [Detect, Segment]:
args.append([ch[x + 1] for x in f])
if isinstance(args[1], int):  # number of anchors
args[1] = [list(range(args[1] * 2))] * len(f)
if m is Segment:
args[3] = make_divisible(args[3] * gw, 8)
args.append(imgsz)
else:
c2 = ch[f]
@@ -435,8 +354,7 @@ def parse_model(d, ch, model, imgsz):  # model_dict, input_channels(3)
class TFModel:
# TF YOLOv3 model
def __init__(self, cfg='yolov3.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)):  # model, channels, classes
super().__init__()
if isinstance(cfg, dict):
self.yaml = cfg  # model dict
@@ -452,17 +370,11 @@ class TFModel:
self.yaml['nc'] = nc  # override yaml value
self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz)
def predict(self, inputs, tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
conf_thres=0.25):
y = []  # outputs
x = inputs
for i, m in enumerate(self.model.layers):
if m.f != -1:  # if not from previous layer
x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
@@ -477,18 +389,15 @@ class TFModel:
scores = probs * classes
if agnostic_nms:
nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres)
return nms, x[1]
else:
boxes = tf.expand_dims(boxes, 2)
nms = tf.image.combined_non_max_suppression(
boxes, scores, topk_per_class, topk_all, iou_thres, conf_thres, clip_boxes=False)
return nms, x[1]
return x[0]  # output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...]
# x = x[0][0]  # [x(1,6300,85), ...] to x(6300,85)
# xywh = x[..., :4]  # x(6300,4) boxes
# conf = x[..., 4:5]  # x(6300,1) confidences
# cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1))  # x(6300,1) classes
@@ -505,8 +414,7 @@ class AgnosticNMS(keras.layers.Layer):
# TF Agnostic NMS
def call(self, input, topk_all, iou_thres, conf_thres):
# wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450
return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), input,
fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32),
name='agnostic_nms')
@@ -515,69 +423,50 @@ class AgnosticNMS(keras.layers.Layer):
boxes, classes, scores = x
class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32)
scores_inp = tf.reduce_max(scores, -1)
selected_inds = tf.image.non_max_suppression(
boxes, scores_inp, max_output_size=topk_all, iou_threshold=iou_thres, score_threshold=conf_thres)
selected_boxes = tf.gather(boxes, selected_inds)
padded_boxes = tf.pad(selected_boxes,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]],
mode="CONSTANT", constant_values=0.0)
selected_scores = tf.gather(scores_inp, selected_inds)
padded_scores = tf.pad(selected_scores,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
mode="CONSTANT", constant_values=-1.0)
selected_classes = tf.gather(class_inds, selected_inds)
padded_classes = tf.pad(selected_classes,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
mode="CONSTANT", constant_values=-1.0)
valid_detections = tf.shape(selected_inds)[0]
return padded_boxes, padded_scores, padded_classes, valid_detections
def activations(act=nn.SiLU):
# Returns TF activation from input PyTorch activation
if isinstance(act, nn.LeakyReLU):
return lambda x: keras.activations.relu(x, alpha=0.1)
elif isinstance(act, nn.Hardswish):
return lambda x: x * tf.nn.relu6(x + 3) * 0.166666667
elif isinstance(act, (nn.SiLU, SiLU)):
return lambda x: keras.activations.swish(x)
else:
raise Exception(f'no matching TensorFlow activation found for PyTorch activation {act}')
def representative_dataset_gen(dataset, ncalib=100):
# Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays
for n, (path, img, im0s, vid_cap, string) in enumerate(dataset):
im = np.transpose(img, [1, 2, 0])
im = np.expand_dims(im, axis=0).astype(np.float32)
im /= 255
yield [im]
if n >= ncalib:
break
def run(weights=ROOT / 'yolov3.pt',  # weights path
imgsz=(640, 640),  # inference size h,w
batch_size=1,  # batch size
dynamic=False,  # dynamic batch size
):
# PyTorch model
im = torch.zeros((batch_size, 3, *imgsz))  # BCHW image
model = attempt_load(weights, map_location=torch.device('cpu'), inplace=True, fuse=False)
y = model(im)  # inference
model.info()
# TensorFlow model
im = tf.zeros((batch_size, *imgsz, 3))  # BHWC image
tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
y = tf_model.predict(im)  # inference
# Keras model
im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
@@ -587,15 +476,146 @@ def run(
LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.')
@keras_export("keras.layers.ZeroPadding2D")
class ZeroPadding2D(Layer):
"""Zero-padding layer for 2D input (e.g. picture).
This layer can add rows and columns of zeros
at the top, bottom, left and right side of an image tensor.
Examples:
>>> input_shape = (1, 1, 2, 2)
>>> x = np.arange(np.prod(input_shape)).reshape(input_shape)
>>> print(x)
[[[[0 1]
[2 3]]]]
>>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x)
>>> print(y)
tf.Tensor(
[[[[0 0]
[0 0]
[0 0]
[0 0]]
[[0 0]
[0 1]
[2 3]
[0 0]]
[[0 0]
[0 0]
[0 0]
[0 0]]]], shape=(1, 3, 4, 2), dtype=int64)
Args:
padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.
- If int: the same symmetric padding
is applied to height and width.
- If tuple of 2 ints:
interpreted as two different
symmetric padding values for height and width:
`(symmetric_height_pad, symmetric_width_pad)`.
- If tuple of 2 tuples of 2 ints:
interpreted as
`((top_pad, bottom_pad), (left_pad, right_pad))`
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch_size, height, width, channels)` while `channels_first`
corresponds to inputs with shape
`(batch_size, channels, height, width)`.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be "channels_last".
Input shape:
4D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, rows, cols, channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, rows, cols)`
Output shape:
4D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, padded_rows, padded_cols, channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, padded_rows, padded_cols)`
"""
def __init__(self, padding=(1, 1), data_format=None, **kwargs):
super().__init__(**kwargs)
self.data_format = conv_utils.normalize_data_format(data_format)
if isinstance(padding, int):
self.padding = ((padding, padding), (padding, padding))
elif hasattr(padding, "__len__"):
if len(padding) == 4:
padding = ((padding[0], padding[1]), (padding[2], padding[3]))
if len(padding) != 2:
raise ValueError(
f"`padding` should have two elements. Received: {padding}."
)
height_padding = conv_utils.normalize_tuple(
padding[0], 2, "1st entry of padding", allow_zero=True
)
width_padding = conv_utils.normalize_tuple(
padding[1], 2, "2nd entry of padding", allow_zero=True
)
self.padding = (height_padding, width_padding)
else:
raise ValueError(
"`padding` should be either an int, "
"a tuple of 2 ints "
"(symmetric_height_pad, symmetric_width_pad), "
"or a tuple of 2 tuples of 2 ints "
"((top_pad, bottom_pad), (left_pad, right_pad)). "
f"Received: {padding}."
)
self.input_spec = InputSpec(ndim=4)
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if self.data_format == "channels_first":
if input_shape[2] is not None:
rows = input_shape[2] + self.padding[0][0] + self.padding[0][1]
else:
rows = None
if input_shape[3] is not None:
cols = input_shape[3] + self.padding[1][0] + self.padding[1][1]
else:
cols = None
return tf.TensorShape([input_shape[0], input_shape[1], rows, cols])
elif self.data_format == "channels_last":
if input_shape[1] is not None:
rows = input_shape[1] + self.padding[0][0] + self.padding[0][1]
else:
rows = None
if input_shape[2] is not None:
cols = input_shape[2] + self.padding[1][0] + self.padding[1][1]
else:
cols = None
return tf.TensorShape([input_shape[0], rows, cols, input_shape[3]])
def call(self, inputs):
return backend.spatial_2d_padding(
inputs, padding=self.padding, data_format=self.data_format
)
def get_config(self):
config = {"padding": self.padding, "data_format": self.data_format}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--batch-size', type=int, default=1, help='batch size')
parser.add_argument('--dynamic', action='store_true', help='dynamic batch size')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
print_args(FILE.stem, opt)
return opt
@@ -603,6 +623,6 @@ def main(opt):
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)

models/yolo.py
@@ -3,29 +3,26 @@
YOLO-specific modules
Usage:
$ python path/to/models/yolo.py --cfg yolov3.yaml
"""
import argparse
import os
import platform
import sys
from copy import deepcopy
from pathlib import Path
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1]  # YOLOv3 root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT))  # add ROOT to PATH
if platform.system() != 'Windows':
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative
from models.common import *
from models.experimental import *
from utils.autoanchor import check_anchor_order
from utils.general import LOGGER, check_version, check_yaml, colorstr, make_divisible, print_args
from utils.plots import feature_visualization
from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device,
time_sync)
try:
@@ -35,10 +32,8 @@ except ImportError:
class Detect(nn.Module):
# YOLOv3 Detect head for detection models
stride = None  # strides computed during build
dynamic = False  # force grid reconstruction
export = False  # export mode
def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
super().__init__()
@@ -46,11 +41,11 @@ class Detect(nn.Module):
self.no = nc + 5  # number of outputs per anchor
self.nl = len(anchors)  # number of detection layers
self.na = len(anchors[0]) // 2  # number of anchors
self.grid = [torch.empty(0) for _ in range(self.nl)]  # init grid
self.anchor_grid = [torch.empty(0) for _ in range(self.nl)]  # init anchor grid
self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
self.inplace = inplace  # use inplace ops (e.g. slice assignment)
def forward(self, x):
z = []  # inference output
@@ -60,110 +55,35 @@ class Detect(nn.Module):
x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
if not self.training:  # inference
if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
if isinstance(self, Segment):  # (boxes + masks)
xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4)
xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i]  # xy
wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i]  # wh
y = torch.cat((xy, wh, conf.sigmoid(), mask), 4)
else:  # Detect (boxes only)
xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
y = torch.cat((xy, wh, conf), 4)
z.append(y.view(bs, self.na * nx * ny, self.no))
return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')):
d = self.anchors[i].device
t = self.anchors[i].dtype
shape = 1, self.na, ny, nx, 2  # grid shape
y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x)  # torch>=0.7 compatibility
grid = torch.stack((xv, yv), 2).expand(shape) - 0.5  # add grid offset, i.e. y = 2.0 * x - 0.5
anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
return grid, anchor_grid
class Segment(Detect):
# YOLOv3 Segment head for segmentation models
def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), inplace=True):
super().__init__(nc, anchors, ch, inplace)
self.nm = nm # number of masks
self.npr = npr # number of protos
self.no = 5 + nc + self.nm # number of outputs per anchor
self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
self.proto = Proto(ch[0], self.npr, self.nm) # protos
self.detect = Detect.forward
def forward(self, x):
p = self.proto(x[0])
x = self.detect(self, x)
return (x, p) if self.training else (x[0], p) if self.export else (x[0], p, x[1])
class BaseModel(nn.Module):
# YOLOv3 base model
def forward(self, x, profile=False, visualize=False):
return self._forward_once(x, profile, visualize) # single-scale inference, train
def _forward_once(self, x, profile=False, visualize=False):
y, dt = [], [] # outputs
for m in self.model:
if m.f != -1: # if not from previous layer
x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
if profile:
self._profile_one_layer(m, x, dt)
x = m(x) # run
y.append(x if m.i in self.save else None) # save output
if visualize:
feature_visualization(x, m.type, m.i, save_dir=visualize)
return x
def _profile_one_layer(self, m, x, dt):
c = m == self.model[-1] # is final layer, copy input as inplace fix
o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
t = time_sync()
for _ in range(10):
m(x.copy() if c else x)
dt.append((time_sync() - t) * 100)
if m == self.model[0]:
LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module")
LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
if c:
LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
LOGGER.info('Fusing layers... ')
for m in self.model.modules():
if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
delattr(m, 'bn') # remove batchnorm
m.forward = m.forward_fuse # update forward
self.info()
return self
def info(self, verbose=False, img_size=640): # print model information
model_info(self, verbose, img_size)
def _apply(self, fn):
# Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
self = super()._apply(fn)
m = self.model[-1] # Detect()
if isinstance(m, (Detect, Segment)):
m.stride = fn(m.stride)
m.grid = list(map(fn, m.grid))
if isinstance(m.anchor_grid, list):
m.anchor_grid = list(map(fn, m.anchor_grid))
return self
class DetectionModel(BaseModel):
# YOLOv3 detection model
def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
super().__init__()
if isinstance(cfg, dict):
self.yaml = cfg  # model dict
@@ -187,13 +107,12 @@ class DetectionModel(BaseModel):
# Build strides, anchors
m = self.model[-1]  # Detect()
if isinstance(m, (Detect, Segment)):
s = 256  # 2x min stride
m.inplace = self.inplace
forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
check_anchor_order(m)
m.anchors /= m.stride.view(-1, 1, 1)
self.stride = m.stride
self._initialize_biases()  # only run once
@@ -221,6 +140,19 @@ class DetectionModel(BaseModel):
y = self._clip_augmented(y)  # clip augmented tails
return torch.cat(y, 1), None  # augmented inference, train
def _descale_pred(self, p, flips, scale, img_size):
# de-scale predictions following augmented inference (inverse operation)
if self.inplace:
@@ -249,6 +181,19 @@ class DetectionModel(BaseModel):
y[-1] = y[-1][:, i:]  # small
return y
def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency
# https://arxiv.org/abs/1708.02002 section 3.3
# cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
@@ -256,52 +201,55 @@ class DetectionModel(BaseModel):
for mi, s in zip(m.m, m.stride):  # from
b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)
b.data[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
b.data[:, 5:5 + m.nc] += math.log(0.6 / (m.nc - 0.99999)) if cf is None else torch.log(cf / cf.sum())  # cls
mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
def _print_biases(self):
m = self.model[-1] # Detect() module
for mi in m.m: # from
b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
LOGGER.info(
('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
Model = DetectionModel  # retain 'Model' class for backwards compatibility
class SegmentationModel(DetectionModel):
# segmentation model
def __init__(self, cfg='yolov5s-seg.yaml', ch=3, nc=None, anchors=None):
super().__init__(cfg, ch, nc, anchors)
class ClassificationModel(BaseModel):
# classification model
def __init__(self, cfg=None, model=None, nc=1000, cutoff=10):  # yaml, model, number of classes, cutoff index
super().__init__()
self._from_detection_model(model, nc, cutoff) if model is not None else self._from_yaml(cfg)
def _from_detection_model(self, model, nc=1000, cutoff=10):
# Create a classification model from a detection model
if isinstance(model, DetectMultiBackend):
model = model.model  # unwrap DetectMultiBackend
model.model = model.model[:cutoff]  # backbone
m = model.model[-1]  # last layer
ch = m.conv.in_channels if hasattr(m, 'conv') else m.cv1.conv.in_channels  # ch into module
c = Classify(ch, nc)  # Classify()
c.i, c.f, c.type = m.i, m.f, 'models.common.Classify'  # index, from, type
model.model[-1] = c  # replace
self.model = model.model
self.stride = model.stride
self.save = []
self.nc = nc
def _from_yaml(self, cfg):
# Create a classification model from a *.yaml file
self.model = None
def parse_model(d, ch):  # model_dict, input_channels(3)
# Parse a model.yaml dictionary
LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'], d.get('activation')
if act:
Conv.default_act = eval(act)  # redefine default activation, i.e. Conv.default_act = nn.SiLU()
LOGGER.info(f"{colorstr('activation:')} {act}")  # print
na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)
@@ -309,32 +257,30 @@ def parse_model(d, ch):  # model_dict, input_channels(3)
for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
m = eval(m) if isinstance(m, str) else m  # eval strings
for j, a in enumerate(args):
try:
args[j] = eval(a) if isinstance(a, str) else a  # eval strings
except NameError:
pass
n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
if m in {
Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x}:
c1, c2 = ch[f], args[0]
if c2 != no:  # if not output
c2 = make_divisible(c2 * gw, 8)
args = [c1, c2, *args[1:]]
if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}:
args.insert(2, n)  # number of repeats
n = 1
elif m is nn.BatchNorm2d:
args = [ch[f]]
elif m is Concat:
c2 = sum(ch[x] for x in f)
# TODO: channel, gw, gd
elif m in {Detect, Segment}:
args.append([ch[x] for x in f])
if isinstance(args[1], int):  # number of anchors
args[1] = [list(range(args[1] * 2))] * len(f)
if m is Segment:
args[3] = make_divisible(args[3] * gw, 8)
elif m is Contract:
c2 = ch[f] * args[0] ** 2
elif m is Expand:
@@ -357,34 +303,34 @@ def parse_model(d, ch):  # model_dict, input_channels(3)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--cfg', type=str, default='yolov3.yaml', help='model.yaml')
parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--profile', action='store_true', help='profile model speed')
parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer')
parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')
opt = parser.parse_args()
opt.cfg = check_yaml(opt.cfg)  # check YAML
print_args(vars(opt))
device = select_device(opt.device)
# Create model
im = torch.rand(opt.batch_size, 3, 640, 640).to(device)
model = Model(opt.cfg).to(device)
# Options
if opt.line_profile:  # profile layer by layer
model(im, profile=True)
elif opt.profile:  # profile forward-backward
results = profile(input=im, ops=[model], n=3)
elif opt.test:  # test all models
for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):
try:
_ = Model(cfg)
except Exception as e:
print(f'Error in {cfg}: {e}')
else:  # report fused model summary
model.fuse()

yolov3/requirements.txt (Normal file → Executable file)
@@ -1,35 +1,31 @@
# YOLOv3 requirements
# Usage: pip install -r requirements.txt

# Base ------------------------------------------------------------------------
gitpython
ipython  # interactive notebook
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.1
Pillow>=7.1.2
psutil  # system resources
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
thop>=0.1.1  # FLOPs computation
torch>=1.7.0  # see https://pytorch.org/get-started/locally (recommended)
torchvision>=0.8.1
tqdm>=4.64.0
# protobuf<=3.20.1  # https://github.com/ultralytics/yolov5/issues/8012
# Logging ---------------------------------------------------------------------
tensorboard>=2.4.1
# clearml>=1.2.0
# comet

# Plotting --------------------------------------------------------------------
pandas>=1.1.4
seaborn>=0.11.0

# Export ----------------------------------------------------------------------
# coremltools>=6.0  # CoreML export
# onnx>=1.12.0  # ONNX export
# onnx-simplifier>=0.4.1  # ONNX simplifier
# nvidia-pyindex  # TensorRT export
# nvidia-tensorrt  # TensorRT export
@@ -38,14 +34,14 @@ seaborn>=0.11.0
# tensorflowjs>=3.9.0  # TF.js export
# openvino-dev  # OpenVINO export

# Deploy ----------------------------------------------------------------------
setuptools>=65.5.1  # Snyk vulnerability fix
wheel>=0.38.0  # Snyk vulnerability fix
# tritonclient[all]~=2.24.0

# Extras ----------------------------------------------------------------------
# mss  # screenshots
# albumentations>=1.0.3
# pycocotools>=2.0.6  # COCO mAP
# roboflow
# ultralytics  # HUB https://hub.ultralytics.com

setup.cfg
@@ -1,10 +1,10 @@
# Project-wide configuration file, can be used for package metadata and other tool configurations
# Example usage: global configuration for PEP8 (via flake8) setting or default pytest arguments
# Local usage: pip install pre-commit, pre-commit run --all-files

[metadata]
license_file = LICENSE
description_file = README.md
[tool:pytest]
norecursedirs =
@@ -16,6 +16,7 @@ addopts =
--durations=25
--color=yes

[flake8]
max-line-length = 120
exclude = .tox,*.egg,build,temp
@@ -25,30 +26,26 @@ verbose = 2
# https://pep8.readthedocs.io/en/latest/intro.html#error-codes
format = pylint
# see: https://www.flake8rules.com/
ignore =
    E731  # Do not assign a lambda expression, use a def
    F405  # name may be undefined, or defined from star imports: module
    E402  # module level import not at top of file
    F841
    E741
    F821
    E722
    F401  # module imported but unused
    W504  # line break after binary operator
    E127  # continuation line over-indented for visual indent
    E231  # missing whitespace after ',', ';', or ':'
    E501  # line too long
    F403  # from module import * used; unable to detect undefined names
    E302
    F541
[isort]
# https://pycqa.github.io/isort/docs/configuration/options.html
line_length = 120
# see: https://pycqa.github.io/isort/docs/configuration/multi_line_output_modes.html
multi_line_output = 0
[yapf]
based_on_style = pep8
spaces_before_comment = 2
COLUMN_LIMIT = 120
COALESCE_BRACKETS = True
SPACES_AROUND_POWER_OPERATOR = True
SPACE_BETWEEN_ENDING_COMMA_AND_CLOSING_BRACKET = False
SPLIT_BEFORE_CLOSING_BRACKET = False
SPLIT_BEFORE_FIRST_ARGUMENT = False
# EACH_DICT_ENTRY_ON_SEPARATE_LINE = False

train.py
@@ -1,25 +1,14 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Train a model on a custom dataset
Usage:
$ python path/to/train.py --data coco128.yaml --weights yolov3.pt --img 640
"""
import argparse
import math
import os
import random
import sys
import time
from copy import deepcopy
@@ -31,46 +20,49 @@ import torch
import torch.distributed as dist
import torch.nn as nn
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import SGD, Adam, lr_scheduler
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative
import val  # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.autobatch import check_train_batch_size
from utils.callbacks import Callbacks
from utils.datasets import create_dataloader
from utils.downloads import attempt_download
from utils.general import (LOGGER, NCOLS, check_dataset, check_file, check_git_status, check_img_size,
check_requirements, check_suffix, check_yaml, colorstr, get_latest_run, increment_path,
init_seeds, intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods,
one_cycle, print_args, print_mutation, strip_optimizer)
from utils.loggers import Loggers
from utils.loggers.wandb.wandb_utils import check_wandb_resume
from utils.loss import ComputeLoss
from utils.metrics import fitness
from utils.plots import plot_evolve, plot_labels
from utils.torch_utils import EarlyStopping, ModelEMA, de_parallel, select_device, torch_distributed_zero_first
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
def train(hyp,  # path/to/hyp.yaml or hyp dictionary
opt,
device,
callbacks
):
save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, = \
Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze
# Directories
w = save_dir / 'weights'  # weights dir
@@ -82,36 +74,36 @@ def train(hyp, opt, device, callbacks):
with open(hyp, errors='ignore') as f:
hyp = yaml.safe_load(f)  # load hyps dict
LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
# Save run settings
with open(save_dir / 'hyp.yaml', 'w') as f:
yaml.safe_dump(hyp, f, sort_keys=False)
with open(save_dir / 'opt.yaml', 'w') as f:
yaml.safe_dump(vars(opt), f, sort_keys=False)
data_dict = None
# Loggers
if RANK in [-1, 0]:
loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance
if loggers.wandb:
data_dict = loggers.wandb.data_dict
if resume:
weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp
# Register actions
for k in methods(loggers):
callbacks.register_action(k, callback=getattr(loggers, k))
# Config
plots = not evolve  # create plots
cuda = device.type != 'cpu'
init_seeds(1 + RANK)
with torch_distributed_zero_first(LOCAL_RANK):
data_dict = data_dict or check_dataset(data)  # check if None
train_path, val_path = data_dict['train'], data_dict['val']
nc = 1 if single_cls else int(data_dict['nc'])  # number of classes
names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}'  # check
is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt')  # COCO dataset
# Model
@@ -120,7 +112,7 @@ def train(hyp, opt, device, callbacks):
if pretrained:
with torch_distributed_zero_first(LOCAL_RANK):
weights = attempt_download(weights)  # download if not found locally
ckpt = torch.load(weights, map_location=device)  # load checkpoint
model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys
csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
@@ -129,13 +121,11 @@ def train(hyp, opt, device, callbacks):
LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}')  # report
else:
model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
# Freeze
freeze = [f'model.{x}.' for x in range(freeze)]  # layers to freeze
for k, v in model.named_parameters():
v.requires_grad = True  # train all layers
if any(x in k for x in freeze):
LOGGER.info(f'freezing {k}')
v.requires_grad = False
@@ -146,35 +136,70 @@ def train(hyp, opt, device, callbacks):
# Batch size
if RANK == -1 and batch_size == -1:  # single-GPU only, estimate best batch size
batch_size = check_train_batch_size(model, imgsz)
# Optimizer
nbs = 64  # nominal batch size
accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing
hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay
LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}")
g0, g1, g2 = [], [], []  # optimizer parameter groups
for v in model.modules():
if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias
g2.append(v.bias)
if isinstance(v, nn.BatchNorm2d):  # weight (no decay)
g0.append(v.weight)
elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)
g1.append(v.weight)
if opt.adam:
optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
else:
optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']})  # add g1 with weight_decay
optimizer.add_param_group({'params': g2})  # add g2 (biases)
LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups "
f"{len(g0)} weight, {len(g1)} weight (no decay), {len(g2)} bias")
del g0, g1, g2
# Scheduler
if opt.linear_lr:
lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
else:
lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)
# EMA # EMA
ema = ModelEMA(model) if RANK in {-1, 0} else None ema = ModelEMA(model) if RANK in [-1, 0] else None
# Resume # Resume
best_fitness, start_epoch = 0.0, 0 start_epoch, best_fitness = 0, 0.0
if pretrained: if pretrained:
# Optimizer
if ckpt['optimizer'] is not None:
optimizer.load_state_dict(ckpt['optimizer'])
best_fitness = ckpt['best_fitness']
# EMA
if ema and ckpt.get('ema'):
ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
ema.updates = ckpt['updates']
# Epochs
start_epoch = ckpt['epoch'] + 1
if resume: if resume:
best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.'
if epochs < start_epoch:
LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
epochs += ckpt['epoch'] # finetune additional epochs
del ckpt, csd del ckpt, csd
# DP mode # DP mode
if cuda and RANK == -1 and torch.cuda.device_count() > 1: if cuda and RANK == -1 and torch.cuda.device_count() > 1:
LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n'
'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
model = torch.nn.DataParallel(model) model = torch.nn.DataParallel(model)
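The added lines restore the older hand-rolled optimizer setup: BatchNorm weights go in a no-decay group, other weights in a decayed group, and biases in a third group. A minimal sketch of the same grouping pattern on a generic module (the hyperparameter values here are illustrative, not taken from this diff):

```python
import torch.nn as nn
from torch.optim import SGD

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Conv2d(8, 8, 3))
g0, g1, g2 = [], [], []  # BN weights (no decay), other weights (decay), biases
for m in model.modules():
    if hasattr(m, 'bias') and isinstance(m.bias, nn.Parameter):
        g2.append(m.bias)
    if isinstance(m, nn.BatchNorm2d):
        g0.append(m.weight)  # BatchNorm scale: never decayed
    elif hasattr(m, 'weight') and isinstance(m.weight, nn.Parameter):
        g1.append(m.weight)  # conv/linear weights: decayed

optimizer = SGD(g0, lr=0.01, momentum=0.937, nesterov=True)  # group 0
optimizer.add_param_group({'params': g1, 'weight_decay': 5e-4})  # group 1, decayed
optimizer.add_param_group({'params': g2})  # group 2, biases
```

Group order matters later: the warmup code below special-cases the bias group by its index in optimizer.param_groups.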
@@ -184,53 +209,41 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
         LOGGER.info('Using SyncBatchNorm()')

     # Trainloader
-    train_loader, dataset = create_dataloader(train_path,
-                                              imgsz,
-                                              batch_size // WORLD_SIZE,
-                                              gs,
-                                              single_cls,
-                                              hyp=hyp,
-                                              augment=True,
-                                              cache=None if opt.cache == 'val' else opt.cache,
-                                              rect=opt.rect,
-                                              rank=LOCAL_RANK,
-                                              workers=workers,
-                                              image_weights=opt.image_weights,
-                                              quad=opt.quad,
-                                              prefix=colorstr('train: '),
-                                              shuffle=True,
-                                              seed=opt.seed)
-    labels = np.concatenate(dataset.labels, 0)
-    mlc = int(labels[:, 0].max())  # max label class
+    train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,
+                                              hyp=hyp, augment=True, cache=opt.cache, rect=opt.rect, rank=LOCAL_RANK,
+                                              workers=workers, image_weights=opt.image_weights, quad=opt.quad,
+                                              prefix=colorstr('train: '), shuffle=True)
+    mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max())  # max label class
+    nb = len(train_loader)  # number of batches
     assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'

     # Process 0
-    if RANK in {-1, 0}:
-        val_loader = create_dataloader(val_path,
-                                       imgsz,
-                                       batch_size // WORLD_SIZE * 2,
-                                       gs,
-                                       single_cls,
-                                       hyp=hyp,
-                                       cache=None if noval else opt.cache,
-                                       rect=True,
-                                       rank=-1,
-                                       workers=workers * 2,
-                                       pad=0.5,
+    if RANK in [-1, 0]:
+        val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,
+                                       hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1,
+                                       workers=workers, pad=0.5,
                                        prefix=colorstr('val: '))[0]

         if not resume:
+            labels = np.concatenate(dataset.labels, 0)
+            # c = torch.tensor(labels[:, 0])  # classes
+            # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
+            # model._initialize_biases(cf.to(device))
+            if plots:
+                plot_labels(labels, names, save_dir)
+
+            # Anchors
             if not opt.noautoanchor:
-                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)  # run AutoAnchor
+                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
             model.half().float()  # pre-reduce anchor precision

-        callbacks.run('on_pretrain_routine_end', labels, names)
+        callbacks.run('on_pretrain_routine_end')

     # DDP mode
     if cuda and RANK != -1:
-        model = smart_DDP(model)
+        model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)

-    # Model attributes
+    # Model parameters
     nl = de_parallel(model).model[-1].nl  # number of detection layers (to scale hyps)
     hyp['box'] *= 3 / nl  # scale to layers
     hyp['cls'] *= nc / 80 * 3 / nl  # scale to classes and layers
@@ -243,23 +256,20 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
     # Start training
     t0 = time.time()
-    nb = len(train_loader)  # number of batches
-    nw = max(round(hyp['warmup_epochs'] * nb), 100)  # number of warmup iterations, max(3 epochs, 100 iterations)
+    nw = max(round(hyp['warmup_epochs'] * nb), 1000)  # number of warmup iterations, max(3 epochs, 1k iterations)
     # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
     last_opt_step = -1
     maps = np.zeros(nc)  # mAP per class
     results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
     scheduler.last_epoch = start_epoch - 1  # do not move
-    scaler = torch.cuda.amp.GradScaler(enabled=amp)
-    stopper, stop = EarlyStopping(patience=opt.patience), False
+    scaler = amp.GradScaler(enabled=cuda)
+    stopper = EarlyStopping(patience=opt.patience)
     compute_loss = ComputeLoss(model)  # init loss class
-    callbacks.run('on_train_start')
     LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
                 f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
                 f"Logging results to {colorstr('bold', save_dir)}\n"
                 f'Starting training for {epochs} epochs...')
     for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
-        callbacks.run('on_train_epoch_start')
         model.train()

         # Update image weights (optional, single-GPU only)
@@ -276,12 +286,11 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
         if RANK != -1:
             train_loader.sampler.set_epoch(epoch)
         pbar = enumerate(train_loader)
-        LOGGER.info(('\n' + '%11s' * 7) % ('Epoch', 'GPU_mem', 'box_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size'))
-        if RANK in {-1, 0}:
-            pbar = tqdm(pbar, total=nb, bar_format=TQDM_BAR_FORMAT)  # progress bar
+        LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size'))
+        if RANK in [-1, 0]:
+            pbar = tqdm(pbar, total=nb, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar
         optimizer.zero_grad()
         for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
-            callbacks.run('on_train_batch_start')
             ni = i + nb * epoch  # number integrated batches (since train start)
             imgs = imgs.to(device, non_blocking=True).float() / 255  # uint8 to float32, 0-255 to 0.0-1.0
@@ -292,7 +301,7 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
                 accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
                 for j, x in enumerate(optimizer.param_groups):
                     # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
-                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)])
+                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                     if 'momentum' in x:
                         x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
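The `j == 0` to `j == 2` change follows from the new parameter-group order: biases now sit at index 2, and only they start warmup at `warmup_bias_lr`. A small sketch of the two np.interp ramps, simplifying the target to a fixed lr0 (the real code interpolates toward `x['initial_lr'] * lf(epoch)`); the values are illustrative:

```python
import numpy as np

lr0, warmup_bias_lr, nw = 0.01, 0.1, 1000  # illustrative hyperparameters
xi = [0, nw]  # warmup window in integrated batches
for ni in (0, 250, 500, 1000):
    bias_lr = np.interp(ni, xi, [warmup_bias_lr, lr0])  # falls 0.1 -> 0.01
    other_lr = np.interp(ni, xi, [0.0, lr0])            # rises 0.0 -> 0.01
    print(f'ni={ni:4d}  bias {bias_lr:.4f}  others {other_lr:.4f}')
```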
@@ -305,7 +314,7 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
                     imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)

             # Forward
-            with torch.cuda.amp.autocast(amp):
+            with amp.autocast(enabled=cuda):
                 pred = model(imgs)  # forward
                 loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                 if RANK != -1:
@@ -316,10 +325,8 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
             # Backward
             scaler.scale(loss).backward()

-            # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html
+            # Optimize
             if ni - last_opt_step >= accumulate:
-                scaler.unscale_(optimizer)  # unscale gradients
-                torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # clip gradients
                 scaler.step(optimizer)  # optimizer.step
                 scaler.update()
                 optimizer.zero_grad()
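Both sides keep the accumulate-then-step pattern (the removed lines additionally unscale and clip gradients before stepping). A self-contained sketch of stepping only once every `accumulate` batches under AMP; `enabled=False` keeps it runnable on CPU:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=False)  # set enabled=True on CUDA
accumulate, last_opt_step = 4, -1

for ni in range(12):  # ni = integrated batch counter
    loss = model(torch.randn(2, 10)).mean()
    scaler.scale(loss).backward()  # gradients accumulate across iterations
    if ni - last_opt_step >= accumulate:
        scaler.step(optimizer)  # internally skipped if scaled grads overflowed
        scaler.update()
        optimizer.zero_grad()
        last_opt_step = ni
```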
@@ -328,30 +335,27 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
                 last_opt_step = ni

             # Log
-            if RANK in {-1, 0}:
+            if RANK in [-1, 0]:
                 mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                 mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G'  # (GB)
-                pbar.set_description(('%11s' * 2 + '%11.4g' * 5) %
-                                     (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
-                callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths, list(mloss))
-                if callbacks.stop_training:
-                    return
+                pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % (
+                    f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
+                callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn)
             # end batch ------------------------------------------------------------------------------------------------

         # Scheduler
         lr = [x['lr'] for x in optimizer.param_groups]  # for loggers
         scheduler.step()

-        if RANK in {-1, 0}:
+        if RANK in [-1, 0]:
             # mAP
             callbacks.run('on_train_epoch_end', epoch=epoch)
             ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
             final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
             if not noval or final_epoch:  # Calculate mAP
-                results, maps, _ = validate.run(data_dict,
-                                                batch_size=batch_size // WORLD_SIZE * 2,
-                                                imgsz=imgsz,
-                                                half=amp,
-                                                model=ema.ema,
-                                                single_cls=single_cls,
-                                                dataloader=val_loader,
+                results, maps, _ = val.run(data_dict,
+                                           batch_size=batch_size // WORLD_SIZE * 2,
+                                           imgsz=imgsz,
+                                           model=ema.ema,
+                                           single_cls=single_cls,
+                                           dataloader=val_loader,
@@ -362,7 +366,6 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
             # Update best mAP
             fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
-            stop = stopper(epoch=epoch, fitness=fi)  # early stop check
             if fi > best_fitness:
                 best_fitness = fi
             log_vals = list(mloss) + list(results) + lr
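`fitness()` collapses the four validation metrics into the single scalar that drives checkpointing and early stopping. A sketch matching the weighting used in this codebase's utils/metrics.py (0.1 on mAP@0.5, 0.9 on mAP@0.5:0.95):

```python
import numpy as np

def fitness(x):
    # Weighted combination of [P, R, mAP@.5, mAP@.5:.95]
    w = [0.0, 0.0, 0.1, 0.9]
    return (x[:, :4] * w).sum(1)

results = (0.7, 0.6, 0.55, 0.35)  # illustrative P, R, mAP@.5, mAP@.5:.95
print(fitness(np.array(results).reshape(1, -1)))  # [0.37] = 0.1*0.55 + 0.9*0.35
```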
@@ -370,62 +373,65 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
             # Save model
             if (not nosave) or (final_epoch and not evolve):  # if save
-                ckpt = {
-                    'epoch': epoch,
+                ckpt = {'epoch': epoch,
                         'best_fitness': best_fitness,
                         'model': deepcopy(de_parallel(model)).half(),
                         'ema': deepcopy(ema.ema).half(),
                         'updates': ema.updates,
                         'optimizer': optimizer.state_dict(),
-                        'opt': vars(opt),
-                        'git': GIT_INFO,  # {remote, branch, commit} if a git repo
+                        'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None,
                         'date': datetime.now().isoformat()}

                 # Save last, best and delete
                 torch.save(ckpt, last)
                 if best_fitness == fi:
                     torch.save(ckpt, best)
-                if opt.save_period > 0 and epoch % opt.save_period == 0:
+                if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0):
                     torch.save(ckpt, w / f'epoch{epoch}.pt')
                 del ckpt
                 callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)

-        # EarlyStopping
-        if RANK != -1:  # if DDP training
-            broadcast_list = [stop if RANK == 0 else None]
-            dist.broadcast_object_list(broadcast_list, 0)  # broadcast 'stop' to all ranks
-            if RANK != 0:
-                stop = broadcast_list[0]
-        if stop:
-            break  # must break all DDP ranks
+            # Stop Single-GPU
+            if RANK == -1 and stopper(epoch=epoch, fitness=fi):
+                break
+
+            # Stop DDP TODO: known issues https://github.com/ultralytics/yolov5/pull/4576
+            # stop = stopper(epoch=epoch, fitness=fi)
+            # if RANK == 0:
+            #    dist.broadcast_object_list([stop], 0)  # broadcast 'stop' to all ranks
+
+        # Stop DDP
+        # with torch_distributed_zero_first(RANK):
+        # if stop:
+        #    break  # must break all DDP ranks

         # end epoch ----------------------------------------------------------------------------------------------------
     # end training -----------------------------------------------------------------------------------------------------
-    if RANK in {-1, 0}:
+    if RANK in [-1, 0]:
         LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')
         for f in last, best:
             if f.exists():
                 strip_optimizer(f)  # strip optimizers
                 if f is best:
                     LOGGER.info(f'\nValidating {f}...')
-                    results, _, _ = validate.run(
-                        data_dict,
-                        batch_size=batch_size // WORLD_SIZE * 2,
-                        imgsz=imgsz,
-                        model=attempt_load(f, device).half(),
-                        iou_thres=0.65 if is_coco else 0.60,  # best pycocotools at iou 0.65
-                        single_cls=single_cls,
-                        dataloader=val_loader,
-                        save_dir=save_dir,
-                        save_json=is_coco,
-                        verbose=True,
-                        plots=plots,
-                        callbacks=callbacks,
-                        compute_loss=compute_loss)  # val best model with plots
+                    results, _, _ = val.run(data_dict,
+                                            batch_size=batch_size // WORLD_SIZE * 2,
+                                            imgsz=imgsz,
+                                            model=attempt_load(f, device).half(),
+                                            iou_thres=0.65 if is_coco else 0.60,  # best pycocotools results at 0.65
+                                            single_cls=single_cls,
+                                            dataloader=val_loader,
+                                            save_dir=save_dir,
+                                            save_json=is_coco,
+                                            verbose=True,
+                                            plots=True,
+                                            callbacks=callbacks,
+                                            compute_loss=compute_loss)  # val best model with plots
                     if is_coco:
                         callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)

-    callbacks.run('on_train_end', last, best, epoch, results)
-    LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
+    callbacks.run('on_train_end', last, best, plots, epoch, results)

     torch.cuda.empty_cache()
     return results
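The incoming side calls `stopper(...)` inline and only for single-GPU runs, while the removed side evaluated it once per epoch and broadcast the flag to all DDP ranks. For reference, a condensed sketch of the patience logic behind `EarlyStopping`, reconstructed from how it is called here rather than quoted from utils/torch_utils.py:

```python
class EarlyStopping:
    # Stop when fitness has not improved for `patience` consecutive epochs
    def __init__(self, patience=100):
        self.best_fitness = 0.0
        self.best_epoch = 0
        self.patience = patience or float('inf')

    def __call__(self, epoch, fitness):
        if fitness >= self.best_fitness:  # >= allows plateaus early on
            self.best_epoch, self.best_fitness = epoch, fitness
        return epoch - self.best_epoch >= self.patience  # True -> stop training
```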
@@ -433,105 +439,95 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
 def parse_opt(known=False):
     parser = argparse.ArgumentParser()
-    parser.add_argument('--weights', type=str, default=ROOT / 'yolov3-tiny.pt', help='initial weights path')
+    parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='initial weights path')
     parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
     parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
-    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
-    parser.add_argument('--epochs', type=int, default=100, help='total training epochs')
+    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch.yaml', help='hyperparameters path')
+    parser.add_argument('--epochs', type=int, default=300)
     parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
     parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
     parser.add_argument('--rect', action='store_true', help='rectangular training')
     parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
     parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
     parser.add_argument('--noval', action='store_true', help='only validate final epoch')
-    parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
-    parser.add_argument('--noplots', action='store_true', help='save no plot files')
+    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
     parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
     parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
-    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk')
+    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
     parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
     parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
     parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
     parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
-    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
+    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
     parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
     parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
     parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
     parser.add_argument('--name', default='exp', help='save to project/name')
     parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
     parser.add_argument('--quad', action='store_true', help='quad dataloader')
-    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
+    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
     parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
     parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
-    parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
+    parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. backbone=10, all=24')
     parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
-    parser.add_argument('--seed', type=int, default=0, help='Global training seed')
-    parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')
+    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')

-    # Logger arguments
-    parser.add_argument('--entity', default=None, help='Entity')
-    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='Upload data, "val" option')
-    parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval')
-    parser.add_argument('--artifact_alias', type=str, default='latest', help='Version of dataset artifact to use')
+    # Weights & Biases arguments
+    parser.add_argument('--entity', default=None, help='W&B: Entity')
+    parser.add_argument('--upload_dataset', action='store_true', help='W&B: Upload dataset as artifact table')
+    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
+    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')

-    return parser.parse_known_args()[0] if known else parser.parse_args()
+    opt = parser.parse_known_args()[0] if known else parser.parse_args()
+    return opt
 def main(opt, callbacks=Callbacks()):
     # Checks
-    if RANK in {-1, 0}:
-        print_args(vars(opt))
+    if RANK in [-1, 0]:
+        print_args(FILE.stem, opt)
         check_git_status()
-        check_requirements()
+        check_requirements(exclude=['thop'])

-    # Resume (from specified or most recent last.pt)
-    if opt.resume and not check_comet_resume(opt) and not opt.evolve:
-        last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run())
-        opt_yaml = last.parent.parent / 'opt.yaml'  # train options yaml
-        opt_data = opt.data  # original dataset
-        if opt_yaml.is_file():
-            with open(opt_yaml, errors='ignore') as f:
-                d = yaml.safe_load(f)
-        else:
-            d = torch.load(last, map_location='cpu')['opt']
-        opt = argparse.Namespace(**d)  # replace
-        opt.cfg, opt.weights, opt.resume = '', str(last), True  # reinstate
-        if is_url(opt_data):
-            opt.data = check_file(opt_data)  # avoid HUB resume auth timeout
+    # Resume
+    if opt.resume and not check_wandb_resume(opt) and not opt.evolve:  # resume an interrupted run
+        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path
+        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
+        with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f:
+            opt = argparse.Namespace(**yaml.safe_load(f))  # replace
+        opt.cfg, opt.weights, opt.resume = '', ckpt, True  # reinstate
+        LOGGER.info(f'Resuming training from {ckpt}')
     else:
         opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \
             check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project)  # checks
         assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
         if opt.evolve:
-            if opt.project == str(ROOT / 'runs/train'):  # if default project name, rename to runs/evolve
-                opt.project = str(ROOT / 'runs/evolve')
+            opt.project = str(ROOT / 'runs/evolve')
             opt.exist_ok, opt.resume = opt.resume, False  # pass resume to exist_ok and disable resume
-        if opt.name == 'cfg':
-            opt.name = Path(opt.cfg).stem  # use model.yaml as name
         opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))

     # DDP mode
     device = select_device(opt.device, batch_size=opt.batch_size)
     if LOCAL_RANK != -1:
-        msg = 'is not compatible with YOLOv3 Multi-GPU DDP training'
-        assert not opt.image_weights, f'--image-weights {msg}'
-        assert not opt.evolve, f'--evolve {msg}'
-        assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size'
-        assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE'
         assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
+        assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count'
+        assert not opt.image_weights, '--image-weights argument is not compatible with DDP training'
+        assert not opt.evolve, '--evolve argument is not compatible with DDP training'
         torch.cuda.set_device(LOCAL_RANK)
         device = torch.device('cuda', LOCAL_RANK)
-        dist.init_process_group(backend='nccl' if dist.is_nccl_available() else 'gloo')
+        dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")

     # Train
     if not opt.evolve:
         train(opt.hyp, opt, device, callbacks)
+        if WORLD_SIZE > 1 and RANK == 0:
+            LOGGER.info('Destroying process group... ')
+            dist.destroy_process_group()

     # Evolve hyperparameters (optional)
     else:
         # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
-        meta = {
-            'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
+        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
                 'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
                 'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
                 'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
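Each `meta` entry is a (mutation scale, lower limit, upper limit) triple. A sketch of how one evolve generation mutates hyperparameters under those bounds, simplified from the loop further down (the sigma and probability values are illustrative):

```python
import numpy as np

meta = {'lr0': (1, 1e-5, 1e-1), 'lrf': (1, 0.01, 1.0), 'momentum': (0.3, 0.6, 0.98)}
hyp = {'lr0': 0.01, 'lrf': 0.1, 'momentum': 0.937}

npr = np.random
mp, s = 0.9, 0.2  # mutation probability, sigma (illustrative)
g = np.array([meta[k][0] for k in hyp])  # per-key mutation gains
v = np.ones(len(hyp))
while (v == 1).all():  # mutate until at least one value changes
    v = (g * (npr.random(len(hyp)) < mp) * npr.randn(len(hyp)) * npr.random() * s + 1).clip(0.3, 3.0)
for i, k in enumerate(hyp):
    hyp[k] = float(np.clip(hyp[k] * v[i], meta[k][1], meta[k][2]))  # keep within limits
print(hyp)
```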
@@ -565,14 +561,11 @@ def main(opt, callbacks=Callbacks()):
             hyp = yaml.safe_load(f)  # load hyps dict
             if 'anchors' not in hyp:  # anchors commented in hyp.yaml
                 hyp['anchors'] = 3
-        if opt.noautoanchor:
-            del hyp['anchors'], meta['anchors']
         opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir)  # only val/save final epoch
         # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
         evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
         if opt.bucket:
-            subprocess.run(
-                f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}'.split())  # download evolve.csv if exists
+            os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}')  # download evolve.csv if exists

         for _ in range(opt.evolve):  # generations to evolve
             if evolve_csv.exists():  # if evolve.csv exists: select best hyps and mutate
@@ -608,28 +601,25 @@ def main(opt, callbacks=Callbacks()):
             # Train mutation
             results = train(hyp.copy(), opt, device, callbacks)
-            callbacks = Callbacks()

             # Write mutation results
-            keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss',
-                    'val/obj_loss', 'val/cls_loss')
-            print_mutation(keys, results, hyp.copy(), save_dir, opt.bucket)
+            print_mutation(results, hyp.copy(), save_dir, opt.bucket)

         # Plot results
         plot_evolve(evolve_csv)
-        LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n'
+        LOGGER.info(f'Hyperparameter evolution finished\n'
                     f"Results saved to {colorstr('bold', save_dir)}\n"
-                    f'Usage example: $ python train.py --hyp {evolve_yaml}')
+                    f'Use best hyperparameters example: $ python train.py --hyp {evolve_yaml}')


 def run(**kwargs):
-    # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt')
+    # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov3.pt')
     opt = parse_opt(True)
     for k, v in kwargs.items():
         setattr(opt, k, v)
     main(opt)
-    return opt


-if __name__ == '__main__':
+if __name__ == "__main__":
     opt = parse_opt()
     main(opt)

yolov3/tutorial.ipynb (vendored, 1243 changes): file diff suppressed because it is too large.

utils/__init__.py
@@ -3,78 +3,16 @@
 utils/initialization
 """

-import contextlib
-import platform
-import threading
-
-
-def emojis(str=''):
-    # Return platform-dependent emoji-safe version of string
-    return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-class TryExcept(contextlib.ContextDecorator):
-    # YOLOv3 TryExcept class. Usage: @TryExcept() decorator or 'with TryExcept():' context manager
-    def __init__(self, msg=''):
-        self.msg = msg
-
-    def __enter__(self):
-        pass
-
-    def __exit__(self, exc_type, value, traceback):
-        if value:
-            print(emojis(f"{self.msg}{': ' if self.msg else ''}{value}"))
-        return True
-
-
-def threaded(func):
-    # Multi-threads a target function and returns thread. Usage: @threaded decorator
-    def wrapper(*args, **kwargs):
-        thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True)
-        thread.start()
-        return thread
-
-    return wrapper
-
-
-def join_threads(verbose=False):
-    # Join all daemon threads, i.e. atexit.register(lambda: join_threads())
-    main_thread = threading.current_thread()
-    for t in threading.enumerate():
-        if t is not main_thread:
-            if verbose:
-                print(f'Joining thread {t.name}')
-            t.join()
-
-
-def notebook_init(verbose=True):
-    # Check system software and hardware
+def notebook_init():
+    # For notebooks
     print('Checking setup...')
-
-    import os
-    import shutil
-
-    from utils.general import check_font, check_requirements, is_colab
-    from utils.torch_utils import select_device  # imports
-
-    check_font()
-
-    import psutil
     from IPython import display  # to display images and clear console output
-
-    if is_colab():
-        shutil.rmtree('/content/sample_data', ignore_errors=True)  # remove colab /sample_data directory
-
-    # System info
-    if verbose:
-        gb = 1 << 30  # bytes to GiB (1024 ** 3)
-        ram = psutil.virtual_memory().total
-        total, used, free = shutil.disk_usage('/')
-        display.clear_output()
-        s = f'({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)'
-    else:
-        s = ''
-
+
+    from utils.general import emojis
+    from utils.torch_utils import select_device  # imports
+
+    display.clear_output()
     select_device(newline=False)
-    print(emojis(f'Setup complete ✅ {s}'))
+    print(emojis('Setup complete ✅'))
     return display
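The removed `TryExcept` helper subclasses `contextlib.ContextDecorator`, so the same object works as both a decorator and a context manager. A quick usage sketch of that pattern:

```python
import contextlib

class TryExcept(contextlib.ContextDecorator):
    # Suppress exceptions, printing an optional message (same shape as the removed helper)
    def __init__(self, msg=''):
        self.msg = msg

    def __enter__(self):
        pass

    def __exit__(self, exc_type, value, traceback):
        if value:
            print(f"{self.msg}{': ' if self.msg else ''}{value}")
        return True  # returning True suppresses the exception

@TryExcept('decorated call failed')
def risky():
    raise ValueError('boom')

risky()  # prints 'decorated call failed: boom' instead of raising

with TryExcept('context failed'):
    1 / 0  # also suppressed
```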

utils/activations.py
@@ -8,32 +8,29 @@ import torch.nn as nn
 import torch.nn.functional as F


-class SiLU(nn.Module):
-    # SiLU activation https://arxiv.org/pdf/1606.08415.pdf
+# SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
+class SiLU(nn.Module):  # export-friendly version of nn.SiLU()
     @staticmethod
     def forward(x):
         return x * torch.sigmoid(x)


-class Hardswish(nn.Module):
-    # Hard-SiLU activation
+class Hardswish(nn.Module):  # export-friendly version of nn.Hardswish()
     @staticmethod
     def forward(x):
-        # return x * F.hardsigmoid(x)  # for TorchScript and CoreML
-        return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0  # for TorchScript, CoreML and ONNX
+        # return x * F.hardsigmoid(x)  # for torchscript and CoreML
+        return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0  # for torchscript, CoreML and ONNX


+# Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
 class Mish(nn.Module):
-    # Mish activation https://github.com/digantamisra98/Mish
     @staticmethod
     def forward(x):
         return x * F.softplus(x).tanh()


 class MemoryEfficientMish(nn.Module):
-    # Mish activation memory-efficient
     class F(torch.autograd.Function):
         @staticmethod
         def forward(ctx, x):
             ctx.save_for_backward(x)
@@ -50,8 +47,8 @@ class MemoryEfficientMish(nn.Module):
         return self.F.apply(x)


+# FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
 class FReLU(nn.Module):
-    # FReLU activation https://arxiv.org/abs/2007.11824
     def __init__(self, c1, k=3):  # ch_in, kernel
         super().__init__()
         self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
@@ -61,8 +58,9 @@ class FReLU(nn.Module):
         return torch.max(x, self.bn(self.conv(x)))


+# ACON https://arxiv.org/pdf/2009.04759.pdf ----------------------------------------------------------------------------
 class AconC(nn.Module):
-    r""" ACON activation (activate or not)
+    r""" ACON activation (activate or not).
     AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
     according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
     """
@@ -79,7 +77,7 @@ class AconC(nn.Module):
 class MetaAconC(nn.Module):
-    r""" ACON activation (activate or not)
+    r""" ACON activation (activate or not).
     MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
     according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
     """

utils/augmentations.py
@@ -8,42 +8,34 @@ import random
 import cv2
 import numpy as np
-import torch
-import torchvision.transforms as T
-import torchvision.transforms.functional as TF

-from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box, xywhn2xyxy
+from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box
 from utils.metrics import bbox_ioa

-IMAGENET_MEAN = 0.485, 0.456, 0.406  # RGB mean
-IMAGENET_STD = 0.229, 0.224, 0.225  # RGB standard deviation


 class Albumentations:
-    # YOLOv3 Albumentations class (optional, only used if package is installed)
-    def __init__(self, size=640):
+    # Albumentations class (optional, only used if package is installed)
+    def __init__(self):
         self.transform = None
-        prefix = colorstr('albumentations: ')
         try:
             import albumentations as A
             check_version(A.__version__, '1.0.3', hard=True)  # version requirement

-            T = [
-                A.RandomResizedCrop(height=size, width=size, scale=(0.8, 1.0), ratio=(0.9, 1.11), p=0.0),
+            self.transform = A.Compose([
                 A.Blur(p=0.01),
                 A.MedianBlur(p=0.01),
                 A.ToGray(p=0.01),
                 A.CLAHE(p=0.01),
                 A.RandomBrightnessContrast(p=0.0),
                 A.RandomGamma(p=0.0),
-                A.ImageCompression(quality_lower=75, p=0.0)]  # transforms
-            self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
+                A.ImageCompression(quality_lower=75, p=0.0)],
+                bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))

-            LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p))
+            LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
         except ImportError:  # package not installed, skip
             pass
         except Exception as e:
-            LOGGER.info(f'{prefix}{e}')
+            LOGGER.info(colorstr('albumentations: ') + f'{e}')

     def __call__(self, im, labels, p=1.0):
         if self.transform and random.random() < p:
@@ -52,18 +44,6 @@ class Albumentations:
         return im, labels


-def normalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD, inplace=False):
-    # Normalize RGB images x per ImageNet stats in BCHW format, i.e. = (x - mean) / std
-    return TF.normalize(x, mean, std, inplace=inplace)
-
-
-def denormalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD):
-    # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = x * std + mean
-    for i in range(3):
-        x[:, i] = x[:, i] * std[i] + mean[i]
-    return x
-
-
 def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
     # HSV color-space augmentation
     if hgain or sgain or vgain:
@@ -141,14 +121,7 @@ def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleF
     return im, ratio, (dw, dh)


-def random_perspective(im,
-                       targets=(),
-                       segments=(),
-                       degrees=10,
-                       translate=.1,
-                       scale=.1,
-                       shear=10,
-                       perspective=0.0,
+def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
                        border=(0, 0)):
     # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10))
     # targets = [cls, xyxy]
@@ -201,7 +174,7 @@ def random_perspective(im,
     # Transform label coordinates
     n = len(targets)
     if n:
-        use_segments = any(x.any() for x in segments) and len(segments) == n
+        use_segments = any(x.any() for x in segments)
         new = np.zeros((n, 4))
         if use_segments:  # warp segments
             segments = resample_segments(segments)  # upsample
@@ -250,10 +223,12 @@ def copy_paste(im, labels, segments, p=0.5):
             if (ioa < 0.30).all():  # allow 30% obscuration of existing labels
                 labels = np.concatenate((labels, [[l[0], *box]]), 0)
                 segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
-                cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (1, 1, 1), cv2.FILLED)
+                cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)

-        result = cv2.flip(im, 1)  # augment segments (flip left-right)
-        i = cv2.flip(im_new, 1).astype(bool)
+        result = cv2.bitwise_and(src1=im, src2=im_new)
+        result = cv2.flip(result, 1)  # augment segments (flip left-right)
+        i = result > 0  # pixels to replace
+        # i[:, :] = result.max(2).reshape(h, w, 1)  # act over ch
         im[i] = result[i]  # cv2.imwrite('debug.jpg', im)  # debug

     return im, labels, segments
@@ -280,7 +255,7 @@ def cutout(im, labels, p=0.5):
             # return unobscured labels
             if len(labels) and s > 0.03:
                 box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
-                ioa = bbox_ioa(box, xywhn2xyxy(labels[:, 1:5], w, h))  # intersection over area
+                ioa = bbox_ioa(box, labels[:, 1:5])  # intersection over area
                 labels = labels[ioa < 0.60]  # remove >60% obscured labels

     return labels
@@ -294,104 +269,9 @@ def mixup(im, labels, im2, labels2):
     return im, labels


-def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):  # box1(4,n), box2(4,n)
+def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16):  # box1(4,n), box2(4,n)
     # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
     w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
     w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
     ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps))  # aspect ratio
     return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr)  # candidates
-
-
-def classify_albumentations(
-        augment=True,
-        size=224,
-        scale=(0.08, 1.0),
-        ratio=(0.75, 1.0 / 0.75),  # 0.75, 1.33
-        hflip=0.5,
-        vflip=0.0,
-        jitter=0.4,
-        mean=IMAGENET_MEAN,
-        std=IMAGENET_STD,
-        auto_aug=False):
-    # YOLOv3 classification Albumentations (optional, only used if package is installed)
-    prefix = colorstr('albumentations: ')
-    try:
-        import albumentations as A
-        from albumentations.pytorch import ToTensorV2
-        check_version(A.__version__, '1.0.3', hard=True)  # version requirement
-        if augment:  # Resize and crop
-            T = [A.RandomResizedCrop(height=size, width=size, scale=scale, ratio=ratio)]
-            if auto_aug:
-                # TODO: implement AugMix, AutoAug & RandAug in albumentation
-                LOGGER.info(f'{prefix}auto augmentations are currently not supported')
-            else:
-                if hflip > 0:
-                    T += [A.HorizontalFlip(p=hflip)]
-                if vflip > 0:
-                    T += [A.VerticalFlip(p=vflip)]
-                if jitter > 0:
-                    color_jitter = (float(jitter),) * 3  # repeat value for brightness, contrast, saturation, 0 hue
-                    T += [A.ColorJitter(*color_jitter, 0)]
-        else:  # Use fixed crop for eval set (reproducibility)
-            T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]
-        T += [A.Normalize(mean=mean, std=std), ToTensorV2()]  # Normalize and convert to Tensor
-        LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p))
-        return A.Compose(T)
-    except ImportError:  # package not installed, skip
-        LOGGER.warning(f'{prefix}⚠️ not found, install with `pip install albumentations` (recommended)')
-    except Exception as e:
-        LOGGER.info(f'{prefix}{e}')
-
-
-def classify_transforms(size=224):
-    # Transforms to apply if albumentations not installed
-    assert isinstance(size, int), f'ERROR: classify_transforms size {size} must be integer, not (list, tuple)'
-    # T.Compose([T.ToTensor(), T.Resize(size), T.CenterCrop(size), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)])
-    return T.Compose([CenterCrop(size), ToTensor(), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)])
-
-
-class LetterBox:
-    # YOLOv3 LetterBox class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()])
-    def __init__(self, size=(640, 640), auto=False, stride=32):
-        super().__init__()
-        self.h, self.w = (size, size) if isinstance(size, int) else size
-        self.auto = auto  # pass max size integer, automatically solve for short side using stride
-        self.stride = stride  # used with auto
-
-    def __call__(self, im):  # im = np.array HWC
-        imh, imw = im.shape[:2]
-        r = min(self.h / imh, self.w / imw)  # ratio of new/old
-        h, w = round(imh * r), round(imw * r)  # resized image
-        hs, ws = (math.ceil(x / self.stride) * self.stride for x in (h, w)) if self.auto else (self.h, self.w)
-        top, left = round((hs - h) / 2 - 0.1), round((ws - w) / 2 - 0.1)
-        im_out = np.full((self.h, self.w, 3), 114, dtype=im.dtype)
-        im_out[top:top + h, left:left + w] = cv2.resize(im, (w, h), interpolation=cv2.INTER_LINEAR)
-        return im_out
-
-
-class CenterCrop:
-    # YOLOv3 CenterCrop class for image preprocessing, i.e. T.Compose([CenterCrop(size), ToTensor()])
-    def __init__(self, size=640):
-        super().__init__()
-        self.h, self.w = (size, size) if isinstance(size, int) else size
-
-    def __call__(self, im):  # im = np.array HWC
-        imh, imw = im.shape[:2]
-        m = min(imh, imw)  # min dimension
-        top, left = (imh - m) // 2, (imw - m) // 2
-        return cv2.resize(im[top:top + m, left:left + m], (self.w, self.h), interpolation=cv2.INTER_LINEAR)
-
-
-class ToTensor:
-    # YOLOv3 ToTensor class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()])
-    def __init__(self, half=False):
-        super().__init__()
-        self.half = half
-
-    def __call__(self, im):  # im = np.array HWC in BGR order
-        im = np.ascontiguousarray(im.transpose((2, 0, 1))[::-1])  # HWC to CHW -> BGR to RGB -> contiguous
-        im = torch.from_numpy(im)  # to torch
-        im = im.half() if self.half else im.float()  # uint8 to fp16/32
-        im /= 255.0  # 0-255 to 0.0-1.0
-        return im
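`box_candidates` is the post-augmentation label filter; the revert tightens the aspect-ratio ceiling from 100 back to 20. A worked example with boxes as (4, n) xyxy arrays:

```python
import numpy as np

def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16):
    # box1 = before augment, box2 = after augment; both shaped (4, n)
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps))  # aspect ratio
    return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr)

before = np.array([[0, 0, 100, 100], [0, 0, 50, 50]], dtype=float).T  # shape (4, 2)
after = np.array([[0, 0, 90, 4], [0, 0, 40, 40]], dtype=float).T  # first box squashed to ar 22.5
print(box_candidates(before, after))  # [False  True] -> the squashed box is rejected
```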

utils/autoanchor.py
@ -1,6 +1,6 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
""" """
AutoAnchor utils Auto-anchor utils
""" """
import random import random
@ -10,23 +10,21 @@ import torch
import yaml import yaml
from tqdm import tqdm from tqdm import tqdm
from utils import TryExcept from utils.general import LOGGER, colorstr, emojis
from utils.general import LOGGER, TQDM_BAR_FORMAT, colorstr
PREFIX = colorstr('AutoAnchor: ') PREFIX = colorstr('AutoAnchor: ')
def check_anchor_order(m): def check_anchor_order(m):
# Check anchor order against stride order for YOLOv3 Detect() module m, and correct if necessary # Check anchor order against stride order for Detect() module m, and correct if necessary
a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer a = m.anchors.prod(-1).view(-1) # anchor area
da = a[-1] - a[0] # delta a da = a[-1] - a[0] # delta a
ds = m.stride[-1] - m.stride[0] # delta s ds = m.stride[-1] - m.stride[0] # delta s
if da and (da.sign() != ds.sign()): # same order if da.sign() != ds.sign(): # same order
LOGGER.info(f'{PREFIX}Reversing anchor order') LOGGER.info(f'{PREFIX}Reversing anchor order')
m.anchors[:] = m.anchors.flip(0) m.anchors[:] = m.anchors.flip(0)
@TryExcept(f'{PREFIX}ERROR')
def check_anchors(dataset, model, thr=4.0, imgsz=640): def check_anchors(dataset, model, thr=4.0, imgsz=640):
# Check anchor fit to data, recompute if necessary # Check anchor fit to data, recompute if necessary
m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
@ -42,26 +40,26 @@ def check_anchors(dataset, model, thr=4.0, imgsz=640):
bpr = (best > 1 / thr).float().mean() # best possible recall bpr = (best > 1 / thr).float().mean() # best possible recall
return bpr, aat return bpr, aat
stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides anchors = m.anchors.clone() * m.stride.to(m.anchors.device).view(-1, 1, 1) # current anchors
anchors = m.anchors.clone() * stride # current anchors
bpr, aat = metric(anchors.cpu().view(-1, 2)) bpr, aat = metric(anchors.cpu().view(-1, 2))
s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). ' s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). '
if bpr > 0.98: # threshold to recompute if bpr > 0.98: # threshold to recompute
LOGGER.info(f'{s}Current anchors are a good fit to dataset ✅') LOGGER.info(emojis(f'{s}Current anchors are a good fit to dataset ✅'))
else: else:
LOGGER.info(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...') LOGGER.info(emojis(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...'))
na = m.anchors.numel() // 2 # number of anchors na = m.anchors.numel() // 2 # number of anchors
try:
anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
except Exception as e:
LOGGER.info(f'{PREFIX}ERROR: {e}')
new_bpr = metric(anchors)[0] new_bpr = metric(anchors)[0]
if new_bpr > bpr: # replace anchors if new_bpr > bpr: # replace anchors
anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
m.anchors[:] = anchors.clone().view_as(m.anchors) m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
check_anchor_order(m) # must be in pixel-space (not grid-space) check_anchor_order(m)
m.anchors /= stride LOGGER.info(f'{PREFIX}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)'
else: else:
s = f'{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)' LOGGER.info(f'{PREFIX}Original anchors better than new anchors. Proceeding with original anchors.')
LOGGER.info(s)
def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
@ -83,7 +81,6 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
""" """
from scipy.cluster.vq import kmeans from scipy.cluster.vq import kmeans
npr = np.random
thr = 1 / thr thr = 1 / thr
def metric(k, wh): # compute metrics def metric(k, wh): # compute metrics
@ -103,7 +100,7 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \ s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \
f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \ f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \
f'past_thr={x[x > thr].mean():.3f}-mean: ' f'past_thr={x[x > thr].mean():.3f}-mean: '
for x in k: for i, x in enumerate(k):
s += '%i,%i, ' % (round(x[0]), round(x[1])) s += '%i,%i, ' % (round(x[0]), round(x[1]))
if verbose: if verbose:
LOGGER.info(s[:-2]) LOGGER.info(s[:-2])
@ -112,7 +109,7 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
if isinstance(dataset, str): # *.yaml file if isinstance(dataset, str): # *.yaml file
with open(dataset, errors='ignore') as f: with open(dataset, errors='ignore') as f:
data_dict = yaml.safe_load(f) # model dict data_dict = yaml.safe_load(f) # model dict
from utils.dataloaders import LoadImagesAndLabels from utils.datasets import LoadImagesAndLabels
dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
# Get label wh # Get label wh
@ -122,21 +119,18 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
# Filter # Filter
i = (wh0 < 3.0).any(1).sum() i = (wh0 < 3.0).any(1).sum()
if i: if i:
LOGGER.info(f'{PREFIX}WARNING ⚠️ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size') LOGGER.info(f'{PREFIX}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels
# wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
# Kmeans init # Kmeans calculation
try:
LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...') LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...')
assert n <= len(wh) # apply overdetermined constraint
s = wh.std(0) # sigmas for whitening s = wh.std(0) # sigmas for whitening
k = kmeans(wh / s, n, iter=30)[0] * s # points k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
assert n == len(k) # kmeans may return fewer points than requested if wh is insufficient or too similar assert len(k) == n, f'{PREFIX}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
except Exception: k *= s
LOGGER.warning(f'{PREFIX}WARNING ⚠️ switching strategies from kmeans to random init') wh = torch.tensor(wh, dtype=torch.float32) # filtered
k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0))
k = print_results(k, verbose=False) k = print_results(k, verbose=False)
# Plot # Plot
@@ -152,8 +146,9 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
# fig.savefig('wh.png', dpi=200) # fig.savefig('wh.png', dpi=200)
# Evolve # Evolve
npr = np.random
f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma
pbar = tqdm(range(gen), bar_format=TQDM_BAR_FORMAT) # progress bar pbar = tqdm(range(gen), desc=f'{PREFIX}Evolving anchors with Genetic Algorithm:') # progress bar
for _ in pbar: for _ in pbar:
v = np.ones(sh) v = np.ones(sh)
while (v == 1).all(): # mutate until a change occurs (prevent duplicates) while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
@@ -166,4 +161,4 @@ def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen
if verbose: if verbose:
print_results(k, verbose) print_results(k, verbose)
return print_results(k).astype(np.float32) return print_results(k)
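
The evolve loop above is the whole optimisation: anchors are multiplied by clipped Gaussian noise and a mutation is kept only when `anchor_fitness` improves. A minimal self-contained sketch of that idea, with a synthetic label set and a fitness reconstructed from the surrounding diff (the real code whitens `wh` before k-means and reuses `metric()`):

```python
# Standalone sketch of the anchor-evolution idea in kmean_anchors above.
# `wh` is synthetic; anchor_fitness is reconstructed from the diff.
import numpy as np

np.random.seed(0)
wh = np.random.uniform(10, 300, (1000, 2)).astype(np.float32)  # label widths/heights (px)
k = np.sort(np.random.rand(9 * 2)).reshape(9, 2) * 640         # random anchor init
thr = 1 / 4.0                                                  # width/height ratio threshold

def anchor_fitness(k):
    r = wh[:, None] / k[None]            # (labels, anchors, 2) size ratios
    x = np.minimum(r, 1 / r).min(2)      # worst-dimension ratio per label/anchor pair
    best = x.max(1)                      # best anchor per label
    return (best * (best > thr)).mean()  # mean ratio, masked past threshold

f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, shape, mutation prob, sigma
for _ in range(1000):
    v = np.ones(sh)
    while (v == 1).all():  # mutate until a change occurs (prevent duplicates)
        v = ((np.random.random(sh) < mp) * np.random.randn(*sh) * s + 1).clip(0.3, 3.0)
    kg = (k * v).clip(min=2.0)           # mutated anchor candidate
    fg = anchor_fitness(kg)
    if fg > f:                           # keep only improving mutations
        f, k = fg, kg.copy()
print(f'evolved anchor fitness: {f:.4f}')
```
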

yolov3/utils/autobatch.py

@@ -7,66 +7,51 @@ from copy import deepcopy
import numpy as np import numpy as np
import torch import torch
from torch.cuda import amp
from utils.general import LOGGER, colorstr from utils.general import LOGGER, colorstr
from utils.torch_utils import profile from utils.torch_utils import profile
def check_train_batch_size(model, imgsz=640, amp=True): def check_train_batch_size(model, imgsz=640):
# Check YOLOv3 training batch size # Check training batch size
with torch.cuda.amp.autocast(amp): with amp.autocast():
return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size
def autobatch(model, imgsz=640, fraction=0.8, batch_size=16): def autobatch(model, imgsz=640, fraction=0.9, batch_size=16):
# Automatically estimate best YOLOv3 batch size to use `fraction` of available CUDA memory # Automatically estimate best batch size to use `fraction` of available CUDA memory
# Usage: # Usage:
# import torch # import torch
# from utils.autobatch import autobatch # from utils.autobatch import autobatch
# model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # model = torch.hub.load('ultralytics/yolov3', 'yolov3', autoshape=False)
# print(autobatch(model)) # print(autobatch(model))
# Check device
prefix = colorstr('AutoBatch: ') prefix = colorstr('AutoBatch: ')
LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}') LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}')
device = next(model.parameters()).device # get model device device = next(model.parameters()).device # get model device
if device.type == 'cpu': if device.type == 'cpu':
LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}') LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}')
return batch_size return batch_size
if torch.backends.cudnn.benchmark:
LOGGER.info(f'{prefix} ⚠️ Requires torch.backends.cudnn.benchmark=False, using default batch-size {batch_size}')
return batch_size
# Inspect CUDA memory
gb = 1 << 30 # bytes to GiB (1024 ** 3)
d = str(device).upper() # 'CUDA:0' d = str(device).upper() # 'CUDA:0'
properties = torch.cuda.get_device_properties(device) # device properties properties = torch.cuda.get_device_properties(device) # device properties
t = properties.total_memory / gb # GiB total t = properties.total_memory / 1024 ** 3 # (GiB)
r = torch.cuda.memory_reserved(device) / gb # GiB reserved r = torch.cuda.memory_reserved(device) / 1024 ** 3 # (GiB)
a = torch.cuda.memory_allocated(device) / gb # GiB allocated a = torch.cuda.memory_allocated(device) / 1024 ** 3 # (GiB)
f = t - (r + a) # GiB free f = t - (r + a) # free inside reserved
LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free') LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free')
# Profile batch sizes
batch_sizes = [1, 2, 4, 8, 16] batch_sizes = [1, 2, 4, 8, 16]
try: try:
img = [torch.empty(b, 3, imgsz, imgsz) for b in batch_sizes] img = [torch.zeros(b, 3, imgsz, imgsz) for b in batch_sizes]
results = profile(img, model, n=3, device=device) y = profile(img, model, n=3, device=device)
except Exception as e: except Exception as e:
LOGGER.warning(f'{prefix}{e}') LOGGER.warning(f'{prefix}{e}')
# Fit a solution y = [x[2] for x in y if x] # memory [2]
y = [x[2] for x in results if x] # memory [2] batch_sizes = batch_sizes[:len(y)]
p = np.polyfit(batch_sizes[:len(y)], y, deg=1) # first degree polynomial fit p = np.polyfit(batch_sizes, y, deg=1) # first degree polynomial fit
b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size)
if None in results: # some sizes failed LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%)')
i = results.index(None) # first fail index
if b >= batch_sizes[i]: # y intercept above failure point
b = batch_sizes[max(i - 1, 0)] # select prior safe point
if b < 1 or b > 1024: # b outside of safe range
b = batch_size
LOGGER.warning(f'{prefix}WARNING ⚠️ CUDA anomaly detected, recommend restart environment and retry command.')
fraction = (np.polyval(p, b) + r + a) / t # actual fraction predicted
LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%) ✅')
return b return b
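
Both sides of the `autobatch` diff share the same core trick: profile memory for a few batch sizes, fit a first-degree polynomial, and solve it for the batch size that hits the target fraction of free memory. A worked numeric sketch, with all memory figures invented:

```python
# Worked example of the linear memory fit used by autobatch above.
# All memory figures are invented for illustration.
import numpy as np

batch_sizes = [1, 2, 4, 8, 16]
mem = [0.6, 1.0, 1.8, 3.4, 6.6]          # profiled GiB per batch size (hypothetical)
f, fraction = 14.8, 0.8                  # free CUDA memory (GiB), target utilisation

p = np.polyfit(batch_sizes, mem, deg=1)  # mem ≈ p[0] * b + p[1]
b = int((f * fraction - p[1]) / p[0])    # solve p[0] * b + p[1] = f * fraction
print(b)                                 # -> 29 with these numbers
```
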

yolov3/utils/callbacks.py

@@ -3,19 +3,17 @@
Callback utils Callback utils
""" """
import threading
class Callbacks: class Callbacks:
"""" """"
Handles all registered callbacks for YOLOv3 Hooks Handles all registered callbacks for Hooks
""" """
def __init__(self):
# Define the available callbacks # Define the available callbacks
self._callbacks = { _callbacks = {
'on_pretrain_routine_start': [], 'on_pretrain_routine_start': [],
'on_pretrain_routine_end': [], 'on_pretrain_routine_end': [],
'on_train_start': [], 'on_train_start': [],
'on_train_epoch_start': [], 'on_train_epoch_start': [],
'on_train_batch_start': [], 'on_train_batch_start': [],
@@ -23,26 +21,28 @@ class Callbacks:
'on_before_zero_grad': [], 'on_before_zero_grad': [],
'on_train_batch_end': [], 'on_train_batch_end': [],
'on_train_epoch_end': [], 'on_train_epoch_end': [],
'on_val_start': [], 'on_val_start': [],
'on_val_batch_start': [], 'on_val_batch_start': [],
'on_val_image_end': [], 'on_val_image_end': [],
'on_val_batch_end': [], 'on_val_batch_end': [],
'on_val_end': [], 'on_val_end': [],
'on_fit_epoch_end': [], # fit = train + val 'on_fit_epoch_end': [], # fit = train + val
'on_model_save': [], 'on_model_save': [],
'on_train_end': [], 'on_train_end': [],
'on_params_update': [],
'teardown': [],} 'teardown': [],
self.stop_training = False # set True to interrupt training }
def register_action(self, hook, name='', callback=None): def register_action(self, hook, name='', callback=None):
""" """
Register a new action to a callback hook Register a new action to a callback hook
Args: Args:
hook: The callback hook name to register the action to hook The callback hook name to register the action to
name: The name of the action for later reference name The name of the action for later reference
callback: The callback to fire callback The callback to fire
""" """
assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}"
assert callable(callback), f"callback '{callback}' is not callable" assert callable(callback), f"callback '{callback}' is not callable"
@@ -53,24 +53,24 @@ class Callbacks:
Returns all the registered actions by callback hook Returns all the registered actions by callback hook
Args: Args:
hook: The name of the hook to check, defaults to all hook The name of the hook to check, defaults to all
""" """
return self._callbacks[hook] if hook else self._callbacks if hook:
return self._callbacks[hook]
else:
return self._callbacks
def run(self, hook, *args, thread=False, **kwargs): def run(self, hook, *args, **kwargs):
""" """
Loop through the registered actions and fire all callbacks on main thread Loop through the registered actions and fire all callbacks
Args: Args:
hook: The name of the hook to check, defaults to all hook The name of the hook to check, defaults to all
args: Arguments to receive from YOLOv3 args Arguments to receive from
thread: (boolean) Run callbacks in daemon thread kwargs Keyword Arguments to receive from
kwargs: Keyword Arguments to receive from YOLOv3
""" """
assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}"
for logger in self._callbacks[hook]: for logger in self._callbacks[hook]:
if thread:
threading.Thread(target=logger['callback'], args=args, kwargs=kwargs, daemon=True).start()
else:
logger['callback'](*args, **kwargs) logger['callback'](*args, **kwargs)
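
On both sides the class is a plain hook registry: `register_action` appends `{'name', 'callback'}` entries to a hook's list and `run` fires them in order. A hypothetical usage sketch, assuming the class is importable as `utils.callbacks.Callbacks`:

```python
# Hypothetical usage of the Callbacks hook registry defined above.
from utils.callbacks import Callbacks

def log_epoch(epoch):  # example action, not part of the repo
    print(f'epoch {epoch} finished')

callbacks = Callbacks()
callbacks.register_action('on_train_epoch_end', name='log_epoch', callback=log_epoch)
callbacks.run('on_train_epoch_end', 5)  # fires every action hooked to this event
```
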

yolov3/utils/downloads.py

@@ -3,104 +3,147 @@
Download utils Download utils
""" """
import logging import os
import platform
import subprocess import subprocess
import time
import urllib import urllib
from pathlib import Path from pathlib import Path
from zipfile import ZipFile
import requests import requests
import torch import torch
def is_url(url, check=True):
# Check if string is URL and check if URL exists
try:
url = str(url)
result = urllib.parse.urlparse(url)
assert all([result.scheme, result.netloc]) # check if is url
return (urllib.request.urlopen(url).getcode() == 200) if check else True # check if exists online
except (AssertionError, urllib.request.HTTPError):
return False
def gsutil_getsize(url=''): def gsutil_getsize(url=''):
# gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
return eval(s.split(' ')[0]) if len(s) else 0 # bytes return eval(s.split(' ')[0]) if len(s) else 0 # bytes
def url_getsize(url='https://ultralytics.com/images/bus.jpg'):
# Return downloadable file size in bytes
response = requests.head(url, allow_redirects=True)
return int(response.headers.get('content-length', -1))
def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''): def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
# Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
from utils.general import LOGGER
file = Path(file) file = Path(file)
assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}" assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
try: # url1 try: # url1
LOGGER.info(f'Downloading {url} to {file}...') print(f'Downloading {url} to {file}...')
torch.hub.download_url_to_file(url, str(file), progress=LOGGER.level <= logging.INFO) torch.hub.download_url_to_file(url, str(file))
assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check
except Exception as e: # url2 except Exception as e: # url2
if file.exists(): file.unlink(missing_ok=True) # remove partial downloads
file.unlink() # remove partial downloads print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...')
LOGGER.info(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail
subprocess.run(
f"curl -# -L '{url2 or url}' -o '{file}' --retry 3 -C -".split()) # curl download, retry and resume on fail
finally: finally:
if not file.exists() or file.stat().st_size < min_bytes: # check if not file.exists() or file.stat().st_size < min_bytes: # check
if file.exists(): file.unlink(missing_ok=True) # remove partial downloads
file.unlink() # remove partial downloads print(f"ERROR: {assert_msg}\n{error_msg}")
LOGGER.info(f'ERROR: {assert_msg}\n{error_msg}') print('')
LOGGER.info('')
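
`safe_download` follows the same pattern on both sides: try `torch.hub.download_url_to_file`, fall back to a `curl` retry on the backup URL, and delete anything smaller than `min_bytes`. A hypothetical call, reusing the test-image URL that appears elsewhere in this diff:

```python
# Hypothetical call to safe_download as defined above.
from utils.downloads import safe_download

safe_download(file='bus.jpg',
              url='https://ultralytics.com/images/bus.jpg',
              min_bytes=1E4,  # treat anything under ~10 kB as a failed download
              error_msg='bus.jpg download failed, check connectivity')
```
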
def attempt_download(file, repo='ultralytics/yolov5', release='v7.0'): def attempt_download(file, repo='ultralytics/yolov3'): # from utils.downloads import *; attempt_download()
# Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v7.0', etc. # Attempt file download if does not exist
from utils.general import LOGGER
def github_assets(repository, version='latest'):
# Return GitHub repo tag (i.e. 'v7.0') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...])
if version != 'latest':
version = f'tags/{version}' # i.e. tags/v7.0
response = requests.get(f'https://api.github.com/repos/{repository}/releases/{version}').json() # github api
return response['tag_name'], [x['name'] for x in response['assets']] # tag, assets
file = Path(str(file).strip().replace("'", '')) file = Path(str(file).strip().replace("'", ''))
if not file.exists(): if not file.exists():
# URL specified # URL specified
name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc. name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc.
if str(file).startswith(('http:/', 'https:/')): # download if str(file).startswith(('http:/', 'https:/')): # download
url = str(file).replace(':/', '://') # Pathlib turns :// -> :/ url = str(file).replace(':/', '://') # Pathlib turns :// -> :/
file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... name = name.split('?')[0] # parse authentication https://url.com/file.txt?auth...
if Path(file).is_file(): safe_download(file=name, url=url, min_bytes=1E5)
LOGGER.info(f'Found {url} locally at {file}') # file already exists return name
else:
safe_download(file=file, url=url, min_bytes=1E5)
return file
# GitHub assets # GitHub assets
assets = [f'yolov5{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '6', '-cls', '-seg')] # default file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
try: try:
tag, assets = github_assets(repo, release) response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
except Exception: assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov3.pt'...]
try: tag = response['tag_name'] # i.e. 'v1.0'
tag, assets = github_assets(repo) # latest release except: # fallback plan
except Exception: assets = ['yolov3.pt', 'yolov3-spp.pt', 'yolov3-tiny.pt']
try: try:
tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]
except Exception: except:
tag = release tag = 'v9.5.0' # current release
file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
if name in assets: if name in assets:
safe_download(file, safe_download(file,
url=f'https://github.com/{repo}/releases/download/{tag}/{name}', url=f'https://github.com/{repo}/releases/download/{tag}/{name}',
# url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional)
min_bytes=1E5, min_bytes=1E5,
error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/{tag}') error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/')
return str(file) return str(file)
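
`attempt_download` resolves a weights path: if the file is missing it is fetched either from a direct URL or from the repo's GitHub release assets for the detected tag. Hypothetical usage, with the asset name taken from the fallback list in the new code:

```python
# Hypothetical usage of attempt_download as defined above.
from utils.downloads import attempt_download

weights = attempt_download('yolov3-tiny.pt')  # downloads from release assets if absent
print(weights)                                # local path, returned as str
```
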
def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'):
# Downloads a file from Google Drive. from yolov3.utils.downloads import *; gdrive_download()
t = time.time()
file = Path(file)
cookie = Path('cookie') # gdrive cookie
print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
file.unlink(missing_ok=True) # remove existing file
cookie.unlink(missing_ok=True) # remove existing cookie
# Attempt file download
out = "NUL" if platform.system() == "Windows" else "/dev/null"
os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
if os.path.exists('cookie'): # large file
s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
else: # small file
s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
r = os.system(s) # execute, capture return
cookie.unlink(missing_ok=True) # remove existing cookie
# Error check
if r != 0:
file.unlink(missing_ok=True) # remove partial
print('Download error ') # raise Exception('Download error')
return r
# Unzip if archive
if file.suffix == '.zip':
print('unzipping... ', end='')
ZipFile(file).extractall(path=file.parent) # unzip
file.unlink() # remove zip
print(f'Done ({time.time() - t:.1f}s)')
return r
def get_token(cookie="./cookie"):
with open(cookie) as f:
for line in f:
if "download" in line:
return line.split()[-1]
return ""
# Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
#
#
# def upload_blob(bucket_name, source_file_name, destination_blob_name):
# # Uploads a file to a bucket
# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
#
# storage_client = storage.Client()
# bucket = storage_client.get_bucket(bucket_name)
# blob = bucket.blob(destination_blob_name)
#
# blob.upload_from_filename(source_file_name)
#
# print('File {} uploaded to {}.'.format(
# source_file_name,
# destination_blob_name))
#
#
# def download_blob(bucket_name, source_blob_name, destination_file_name):
# # Uploads a blob from a bucket
# storage_client = storage.Client()
# bucket = storage_client.get_bucket(bucket_name)
# blob = bucket.blob(source_blob_name)
#
# blob.download_to_filename(destination_file_name)
#
# print('Blob {} downloaded to {}.'.format(
# source_blob_name,
# destination_file_name))

yolov3/utils/general.py (886 changed lines, Normal file → Executable file)

File diff suppressed because it is too large

yolov3/utils/loggers/__init__.py

@@ -5,26 +5,25 @@ Logging utils
import os import os
import warnings import warnings
from pathlib import Path from threading import Thread
import pkg_resources as pkg import pkg_resources as pkg
import torch import torch
from torch.utils.tensorboard import SummaryWriter from torch.utils.tensorboard import SummaryWriter
from utils.general import LOGGER, colorstr, cv2 from utils.general import colorstr, emojis
from utils.loggers.clearml.clearml_utils import ClearmlLogger
from utils.loggers.wandb.wandb_utils import WandbLogger from utils.loggers.wandb.wandb_utils import WandbLogger
from utils.plots import plot_images, plot_labels, plot_results from utils.plots import plot_images, plot_results
from utils.torch_utils import de_parallel from utils.torch_utils import de_parallel
LOGGERS = ('csv', 'tb', 'wandb', 'clearml', 'comet') # *.csv, TensorBoard, Weights & Biases, ClearML LOGGERS = ('csv', 'tb', 'wandb') # text-file, TensorBoard, Weights & Biases
RANK = int(os.getenv('RANK', -1)) RANK = int(os.getenv('RANK', -1))
try: try:
import wandb import wandb
assert hasattr(wandb, '__version__') # verify package import not local dir assert hasattr(wandb, '__version__') # verify package import not local dir
if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in {0, -1}: if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in [0, -1]:
try: try:
wandb_login_success = wandb.login(timeout=30) wandb_login_success = wandb.login(timeout=30)
except wandb.errors.UsageError: # known non-TTY terminal issue except wandb.errors.UsageError: # known non-TTY terminal issue
@@ -34,64 +33,30 @@ try:
except (ImportError, AssertionError): except (ImportError, AssertionError):
wandb = None wandb = None
try:
import clearml
assert hasattr(clearml, '__version__') # verify package import not local dir
except (ImportError, AssertionError):
clearml = None
try:
if RANK not in [0, -1]:
comet_ml = None
else:
import comet_ml
assert hasattr(comet_ml, '__version__') # verify package import not local dir
from utils.loggers.comet import CometLogger
except (ModuleNotFoundError, ImportError, AssertionError):
comet_ml = None
class Loggers(): class Loggers():
# YOLOv3 Loggers class # Loggers class
def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS): def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS):
self.save_dir = save_dir self.save_dir = save_dir
self.weights = weights self.weights = weights
self.opt = opt self.opt = opt
self.hyp = hyp self.hyp = hyp
self.plots = not opt.noplots # plot results
self.logger = logger # for printing results to console self.logger = logger # for printing results to console
self.include = include self.include = include
self.keys = [ self.keys = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
'train/box_loss', 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', # metrics
'train/obj_loss', 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
'train/cls_loss', # train loss 'x/lr0', 'x/lr1', 'x/lr2'] # params
'metrics/precision',
'metrics/recall',
'metrics/mAP_0.5',
'metrics/mAP_0.5:0.95', # metrics
'val/box_loss',
'val/obj_loss',
'val/cls_loss', # val loss
'x/lr0',
'x/lr1',
'x/lr2'] # params
self.best_keys = ['best/epoch', 'best/precision', 'best/recall', 'best/mAP_0.5', 'best/mAP_0.5:0.95']
for k in LOGGERS: for k in LOGGERS:
setattr(self, k, None) # init empty logger dictionary setattr(self, k, None) # init empty logger dictionary
self.csv = True # always log to csv self.csv = True # always log to csv
# Messages # Message
if not clearml: if not wandb:
prefix = colorstr('ClearML: ') prefix = colorstr('Weights & Biases: ')
s = f"{prefix}run 'pip install clearml' to automatically track, visualize and remotely train YOLOv3 🚀 in ClearML" s = f"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv3 🚀 runs (RECOMMENDED)"
self.logger.info(s) print(emojis(s))
if not comet_ml:
prefix = colorstr('Comet: ')
s = f"{prefix}run 'pip install comet_ml' to automatically track and visualize YOLOv3 🚀 runs in Comet"
self.logger.info(s)
# TensorBoard # TensorBoard
s = self.save_dir s = self.save_dir
if 'tb' in self.include and not self.opt.evolve: if 'tb' in self.include and not self.opt.evolve:
@@ -101,127 +66,53 @@ class Loggers():
# W&B # W&B
if wandb and 'wandb' in self.include: if wandb and 'wandb' in self.include:
wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://')
run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None
self.opt.hyp = self.hyp # add hyperparameters self.opt.hyp = self.hyp # add hyperparameters
self.wandb = WandbLogger(self.opt) self.wandb = WandbLogger(self.opt, run_id)
else: else:
self.wandb = None self.wandb = None
# ClearML def on_pretrain_routine_end(self):
if clearml and 'clearml' in self.include:
try:
self.clearml = ClearmlLogger(self.opt, self.hyp)
except Exception:
self.clearml = None
prefix = colorstr('ClearML: ')
LOGGER.warning(f'{prefix}WARNING ⚠️ ClearML is installed but not configured, skipping ClearML logging.'
f' See https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml#readme')
else:
self.clearml = None
# Comet
if comet_ml and 'comet' in self.include:
if isinstance(self.opt.resume, str) and self.opt.resume.startswith('comet://'):
run_id = self.opt.resume.split('/')[-1]
self.comet_logger = CometLogger(self.opt, self.hyp, run_id=run_id)
else:
self.comet_logger = CometLogger(self.opt, self.hyp)
else:
self.comet_logger = None
@property
def remote_dataset(self):
# Get data_dict if custom dataset artifact link is provided
data_dict = None
if self.clearml:
data_dict = self.clearml.data_dict
if self.wandb:
data_dict = self.wandb.data_dict
if self.comet_logger:
data_dict = self.comet_logger.data_dict
return data_dict
def on_train_start(self):
if self.comet_logger:
self.comet_logger.on_train_start()
def on_pretrain_routine_start(self):
if self.comet_logger:
self.comet_logger.on_pretrain_routine_start()
def on_pretrain_routine_end(self, labels, names):
# Callback runs on pre-train routine end # Callback runs on pre-train routine end
if self.plots:
plot_labels(labels, names, self.save_dir)
paths = self.save_dir.glob('*labels*.jpg') # training labels paths = self.save_dir.glob('*labels*.jpg') # training labels
if self.wandb: if self.wandb:
self.wandb.log({'Labels': [wandb.Image(str(x), caption=x.name) for x in paths]}) self.wandb.log({"Labels": [wandb.Image(str(x), caption=x.name) for x in paths]})
# if self.clearml:
# pass # ClearML saves these images automatically using hooks
if self.comet_logger:
self.comet_logger.on_pretrain_routine_end(paths)
def on_train_batch_end(self, model, ni, imgs, targets, paths, vals): def on_train_batch_end(self, ni, model, imgs, targets, paths, plots, sync_bn):
log_dict = dict(zip(self.keys[:3], vals))
# Callback runs on train batch end # Callback runs on train batch end
# ni: number integrated batches (since train start) if plots:
if self.plots: if ni == 0:
if not sync_bn: # tb.add_graph() --sync known issue https://github.com/ultralytics/yolov5/issues/3754
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress jit trace warning
self.tb.add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), [])
if ni < 3: if ni < 3:
f = self.save_dir / f'train_batch{ni}.jpg' # filename f = self.save_dir / f'train_batch{ni}.jpg' # filename
plot_images(imgs, targets, paths, f) Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
if ni == 0 and self.tb and not self.opt.sync_bn: if self.wandb and ni == 10:
log_tensorboard_graph(self.tb, model, imgsz=(self.opt.imgsz, self.opt.imgsz))
if ni == 10 and (self.wandb or self.clearml):
files = sorted(self.save_dir.glob('train*.jpg')) files = sorted(self.save_dir.glob('train*.jpg'))
if self.wandb:
self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]}) self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]})
if self.clearml:
self.clearml.log_debug_samples(files, title='Mosaics')
if self.comet_logger:
self.comet_logger.on_train_batch_end(log_dict, step=ni)
def on_train_epoch_end(self, epoch): def on_train_epoch_end(self, epoch):
# Callback runs on train epoch end # Callback runs on train epoch end
if self.wandb: if self.wandb:
self.wandb.current_epoch = epoch + 1 self.wandb.current_epoch = epoch + 1
if self.comet_logger:
self.comet_logger.on_train_epoch_end(epoch)
def on_val_start(self):
if self.comet_logger:
self.comet_logger.on_val_start()
def on_val_image_end(self, pred, predn, path, names, im): def on_val_image_end(self, pred, predn, path, names, im):
# Callback runs on val image end # Callback runs on val image end
if self.wandb: if self.wandb:
self.wandb.val_one_image(pred, predn, path, names, im) self.wandb.val_one_image(pred, predn, path, names, im)
if self.clearml:
self.clearml.log_image_with_boxes(path, pred, names, im)
def on_val_batch_end(self, batch_i, im, targets, paths, shapes, out): def on_val_end(self):
if self.comet_logger:
self.comet_logger.on_val_batch_end(batch_i, im, targets, paths, shapes, out)
def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix):
# Callback runs on val end # Callback runs on val end
if self.wandb or self.clearml:
files = sorted(self.save_dir.glob('val*.jpg'))
if self.wandb: if self.wandb:
self.wandb.log({'Validation': [wandb.Image(str(f), caption=f.name) for f in files]}) files = sorted(self.save_dir.glob('val*.jpg'))
if self.clearml: self.wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in files]})
self.clearml.log_debug_samples(files, title='Validation')
if self.comet_logger:
self.comet_logger.on_val_end(nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix)
def on_fit_epoch_end(self, vals, epoch, best_fitness, fi): def on_fit_epoch_end(self, vals, epoch, best_fitness, fi):
# Callback runs at the end of each fit (train+val) epoch # Callback runs at the end of each fit (train+val) epoch
x = dict(zip(self.keys, vals)) x = {k: v for k, v in zip(self.keys, vals)} # dict
if self.csv: if self.csv:
file = self.save_dir / 'results.csv' file = self.save_dir / 'results.csv'
n = len(x) + 1 # number of cols n = len(x) + 1 # number of cols
@@ -232,170 +123,37 @@ class Loggers():
if self.tb: if self.tb:
for k, v in x.items(): for k, v in x.items():
self.tb.add_scalar(k, v, epoch) self.tb.add_scalar(k, v, epoch)
elif self.clearml: # log to ClearML if TensorBoard not used
for k, v in x.items():
title, series = k.split('/')
self.clearml.task.get_logger().report_scalar(title, series, v, epoch)
if self.wandb: if self.wandb:
if best_fitness == fi:
best_results = [epoch] + vals[3:7]
for i, name in enumerate(self.best_keys):
self.wandb.wandb_run.summary[name] = best_results[i] # log best results in the summary
self.wandb.log(x) self.wandb.log(x)
self.wandb.end_epoch() self.wandb.end_epoch(best_result=best_fitness == fi)
if self.clearml:
self.clearml.current_epoch_logged_images = set() # reset epoch image limit
self.clearml.current_epoch += 1
if self.comet_logger:
self.comet_logger.on_fit_epoch_end(x, epoch=epoch)
def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):
# Callback runs on model save event # Callback runs on model save event
if (epoch + 1) % self.opt.save_period == 0 and not final_epoch and self.opt.save_period != -1:
if self.wandb: if self.wandb:
if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1:
self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi)
if self.clearml:
self.clearml.task.update_output_model(model_path=str(last),
model_name='Latest Model',
auto_delete_file=False)
if self.comet_logger: def on_train_end(self, last, best, plots, epoch, results):
self.comet_logger.on_model_save(last, epoch, final_epoch, best_fitness, fi) # Callback runs on training end
if plots:
def on_train_end(self, last, best, epoch, results):
# Callback runs on training end, i.e. saving best model
if self.plots:
plot_results(file=self.save_dir / 'results.csv') # save results.png plot_results(file=self.save_dir / 'results.csv') # save results.png
files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))]
files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter
self.logger.info(f"Results saved to {colorstr('bold', self.save_dir)}")
if self.tb and not self.clearml: # These images are already captured by ClearML by now, we don't want doubles if self.tb:
import cv2
for f in files: for f in files:
self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC')
if self.wandb: if self.wandb:
self.wandb.log(dict(zip(self.keys[3:10], results))) self.wandb.log({"Results": [wandb.Image(str(f), caption=f.name) for f in files]})
self.wandb.log({'Results': [wandb.Image(str(f), caption=f.name) for f in files]})
# Calling wandb.log. TODO: Refactor this into WandbLogger.log_model # Calling wandb.log. TODO: Refactor this into WandbLogger.log_model
if not self.opt.evolve: if not self.opt.evolve:
wandb.log_artifact(str(best if best.exists() else last), wandb.log_artifact(str(best if best.exists() else last), type='model',
type='model', name='run_' + self.wandb.wandb_run.id + '_model',
name=f'run_{self.wandb.wandb_run.id}_model',
aliases=['latest', 'best', 'stripped']) aliases=['latest', 'best', 'stripped'])
self.wandb.finish_run() self.wandb.finish_run()
if self.clearml and not self.opt.evolve:
self.clearml.task.update_output_model(model_path=str(best if best.exists() else last),
name='Best Model',
auto_delete_file=False)
if self.comet_logger:
final_results = dict(zip(self.keys[3:10], results))
self.comet_logger.on_train_end(files, self.save_dir, last, best, epoch, final_results)
def on_params_update(self, params: dict):
# Update hyperparams or configs of the experiment
if self.wandb:
self.wandb.wandb_run.config.update(params, allow_val_change=True)
if self.comet_logger:
self.comet_logger.on_params_update(params)
class GenericLogger:
"""
YOLOv5 General purpose logger for non-task specific logging
Usage: from utils.loggers import GenericLogger; logger = GenericLogger(...)
Arguments
opt: Run arguments
console_logger: Console logger
include: loggers to include
"""
def __init__(self, opt, console_logger, include=('tb', 'wandb')):
# init default loggers
self.save_dir = Path(opt.save_dir)
self.include = include
self.console_logger = console_logger
self.csv = self.save_dir / 'results.csv' # CSV logger
if 'tb' in self.include:
prefix = colorstr('TensorBoard: ')
self.console_logger.info(
f"{prefix}Start with 'tensorboard --logdir {self.save_dir.parent}', view at http://localhost:6006/")
self.tb = SummaryWriter(str(self.save_dir))
if wandb and 'wandb' in self.include:
self.wandb = wandb.init(project=web_project_name(str(opt.project)),
name=None if opt.name == 'exp' else opt.name,
config=opt)
else: else:
self.wandb = None self.wandb.finish_run()
self.wandb = WandbLogger(self.opt)
def log_metrics(self, metrics, epoch):
# Log metrics dictionary to all loggers
if self.csv:
keys, vals = list(metrics.keys()), list(metrics.values())
n = len(metrics) + 1 # number of cols
s = '' if self.csv.exists() else (('%23s,' * n % tuple(['epoch'] + keys)).rstrip(',') + '\n') # header
with open(self.csv, 'a') as f:
f.write(s + ('%23.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n')
if self.tb:
for k, v in metrics.items():
self.tb.add_scalar(k, v, epoch)
if self.wandb:
self.wandb.log(metrics, step=epoch)
def log_images(self, files, name='Images', epoch=0):
# Log images to all loggers
files = [Path(f) for f in (files if isinstance(files, (tuple, list)) else [files])] # to Path
files = [f for f in files if f.exists()] # filter by exists
if self.tb:
for f in files:
self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC')
if self.wandb:
self.wandb.log({name: [wandb.Image(str(f), caption=f.name) for f in files]}, step=epoch)
def log_graph(self, model, imgsz=(640, 640)):
# Log model graph to all loggers
if self.tb:
log_tensorboard_graph(self.tb, model, imgsz)
def log_model(self, model_path, epoch=0, metadata={}):
# Log model to all loggers
if self.wandb:
art = wandb.Artifact(name=f'run_{wandb.run.id}_model', type='model', metadata=metadata)
art.add_file(str(model_path))
wandb.log_artifact(art)
def update_params(self, params):
# Update the parameters logged
if self.wandb:
wandb.run.config.update(params, allow_val_change=True)
def log_tensorboard_graph(tb, model, imgsz=(640, 640)):
# Log model graph to TensorBoard
try:
p = next(model.parameters()) # for device, type
imgsz = (imgsz, imgsz) if isinstance(imgsz, int) else imgsz # expand
im = torch.zeros((1, 3, *imgsz)).to(p.device).type_as(p) # input image (WARNING: must be zeros, not empty)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress jit trace warning
tb.add_graph(torch.jit.trace(de_parallel(model), im, strict=False), [])
except Exception as e:
LOGGER.warning(f'WARNING ⚠️ TensorBoard graph visualization failure {e}')
def web_project_name(project):
# Convert local project name to web project name
if not project.startswith('runs/train'):
return project
suffix = '-Classify' if project.endswith('-cls') else '-Segment' if project.endswith('-seg') else ''
return f'YOLOv5{suffix}'
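
`web_project_name` (removed by this commit) only rewrites the default `runs/train*` project names; anything custom passes through unchanged. A self-contained behaviour check, with the function body copied from the old code above:

```python
# Behaviour check for web_project_name, copied verbatim from the old code above.
def web_project_name(project):
    if not project.startswith('runs/train'):
        return project
    suffix = '-Classify' if project.endswith('-cls') else '-Segment' if project.endswith('-seg') else ''
    return f'YOLOv5{suffix}'

for p in ('runs/train', 'runs/train-cls', 'runs/train-seg', 'custom/project'):
    print(p, '->', web_project_name(p))
# runs/train -> YOLOv5, runs/train-cls -> YOLOv5-Classify,
# runs/train-seg -> YOLOv5-Segment, custom/project -> custom/project
```
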

yolov3/utils/loggers/wandb/wandb_utils.py

@@ -1,32 +1,108 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license """Utilities and tools for tracking runs with Weights & Biases."""
# WARNING ⚠️ wandb is deprecated and will be removed in future release.
# See supported integrations at https://github.com/ultralytics/yolov5#integrations
import logging import logging
import os import os
import sys import sys
from contextlib import contextmanager from contextlib import contextmanager
from pathlib import Path from pathlib import Path
from typing import Dict
from utils.general import LOGGER, colorstr import pkg_resources as pkg
import yaml
from tqdm import tqdm
FILE = Path(__file__).resolve() FILE = Path(__file__).resolve()
ROOT = FILE.parents[3] # YOLOv5 root directory ROOT = FILE.parents[3] # root directory
if str(ROOT) not in sys.path: if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH sys.path.append(str(ROOT)) # add ROOT to PATH
RANK = int(os.getenv('RANK', -1))
DEPRECATION_WARNING = f"{colorstr('wandb')}: WARNING ⚠️ wandb is deprecated and will be removed in a future release. " \ from utils.datasets import LoadImagesAndLabels, img2label_paths
f'See supported integrations at https://github.com/ultralytics/yolov5#integrations.' from utils.general import LOGGER, check_dataset, check_file
try: try:
import wandb import wandb
assert hasattr(wandb, '__version__') # verify package import not local dir assert hasattr(wandb, '__version__') # verify package import not local dir
LOGGER.warning(DEPRECATION_WARNING)
except (ImportError, AssertionError): except (ImportError, AssertionError):
wandb = None wandb = None
RANK = int(os.getenv('RANK', -1))
WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
return from_string[len(prefix):]
def check_wandb_config_file(data_config_file):
wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path
if Path(wandb_config).is_file():
return wandb_config
return data_config_file
def check_wandb_dataset(data_file):
is_trainset_wandb_artifact = False
is_valset_wandb_artifact = False
if check_file(data_file) and data_file.endswith('.yaml'):
with open(data_file, errors='ignore') as f:
data_dict = yaml.safe_load(f)
is_trainset_wandb_artifact = (isinstance(data_dict['train'], str) and
data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX))
is_valset_wandb_artifact = (isinstance(data_dict['val'], str) and
data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX))
if is_trainset_wandb_artifact or is_valset_wandb_artifact:
return data_dict
else:
return check_dataset(data_file)
def get_run_info(run_path):
run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))
run_id = run_path.stem
project = run_path.parent.stem
entity = run_path.parent.parent.stem
model_artifact_name = 'run_' + run_id + '_model'
return entity, project, run_id, model_artifact_name
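
`get_run_info` just splits a `wandb-artifact://` resume path into its components. A self-contained sketch that mirrors the function above, with a made-up run path:

```python
# Parsing sketch mirroring get_run_info above; the run path is a made-up example.
from pathlib import Path

WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'

def get_run_info(run_path):
    run_path = Path(run_path[len(WANDB_ARTIFACT_PREFIX):])
    return (run_path.parent.parent.stem,  # entity
            run_path.parent.stem,         # project
            run_path.stem,                # run_id
            f'run_{run_path.stem}_model') # model artifact name

print(get_run_info('wandb-artifact://my-team/yolov3-runs/3k9xyz12'))
# -> ('my-team', 'yolov3-runs', '3k9xyz12', 'run_3k9xyz12_model')
```
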
def check_wandb_resume(opt):
process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None
if isinstance(opt.resume, str):
if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
if RANK not in [-1, 0]: # For resuming DDP runs
entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
api = wandb.Api()
artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest')
modeldir = artifact.download()
opt.weights = str(Path(modeldir) / "last.pt")
return True
return None
def process_wandb_config_ddp_mode(opt):
with open(check_file(opt.data), errors='ignore') as f:
data_dict = yaml.safe_load(f) # data dict
train_dir, val_dir = None, None
if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):
api = wandb.Api()
train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)
train_dir = train_artifact.download()
train_path = Path(train_dir) / 'data/images/'
data_dict['train'] = str(train_path)
if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):
api = wandb.Api()
val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)
val_dir = val_artifact.download()
val_path = Path(val_dir) / 'data/images/'
data_dict['val'] = str(val_path)
if train_dir or val_dir:
ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml')
with open(ddp_data_path, 'w') as f:
yaml.safe_dump(data_dict, f)
opt.data = ddp_data_path
class WandbLogger(): class WandbLogger():
"""Log training runs, datasets, models, and predictions to Weights & Biases. """Log training runs, datasets, models, and predictions to Weights & Biases.
@@ -46,7 +122,7 @@ class WandbLogger():
""" """
- Initialize WandbLogger instance - Initialize WandbLogger instance
- Upload dataset if opt.upload_dataset is True - Upload dataset if opt.upload_dataset is True
- Setup training processes if job_type is 'Training' - Setup training processes if job_type is 'Training'
arguments: arguments:
opt (namespace) -- Commandline arguments for this run opt (namespace) -- Commandline arguments for this run
@@ -56,31 +132,82 @@ class WandbLogger():
""" """
# Pre-training routine -- # Pre-training routine --
self.job_type = job_type self.job_type = job_type
self.wandb, self.wandb_run = wandb, wandb.run if wandb else None self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run
self.val_artifact, self.train_artifact = None, None self.val_artifact, self.train_artifact = None, None
self.train_artifact_path, self.val_artifact_path = None, None self.train_artifact_path, self.val_artifact_path = None, None
self.result_artifact = None self.result_artifact = None
self.val_table, self.result_table = None, None self.val_table, self.result_table = None, None
self.bbox_media_panel_images = []
self.val_table_path_map = None
self.max_imgs_to_log = 16 self.max_imgs_to_log = 16
self.wandb_artifact_data_dict = None
self.data_dict = None self.data_dict = None
if self.wandb: # It's more elegant to stick to 1 wandb.init call,
self.wandb_run = wandb.init(config=opt, # but useful config data is overwritten in the WandbLogger's wandb.init call
if isinstance(opt.resume, str): # checks resume from artifact
if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name
assert wandb, 'install wandb to resume wandb runs'
# Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config
self.wandb_run = wandb.init(id=run_id,
project=project,
entity=entity,
resume='allow', resume='allow',
project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem, allow_val_change=True)
opt.resume = model_artifact_name
elif self.wandb:
self.wandb_run = wandb.init(config=opt,
resume="allow",
project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
entity=opt.entity, entity=opt.entity,
name=opt.name if opt.name != 'exp' else None, name=opt.name if opt.name != 'exp' else None,
job_type=job_type, job_type=job_type,
id=run_id, id=run_id,
allow_val_change=True) if not wandb.run else wandb.run allow_val_change=True) if not wandb.run else wandb.run
if self.wandb_run: if self.wandb_run:
if self.job_type == 'Training': if self.job_type == 'Training':
if isinstance(opt.data, dict): if opt.upload_dataset:
# This means another dataset manager has already processed the dataset info (e.g. ClearML) if not opt.resume:
# and they will have stored the already processed dict in opt.data self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt)
self.data_dict = opt.data
if opt.resume:
# resume from artifact
if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
self.data_dict = dict(self.wandb_run.config.data_dict)
else: # local resume
self.data_dict = check_wandb_dataset(opt.data)
else:
self.data_dict = check_wandb_dataset(opt.data)
self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict
# write data_dict to config. useful for resuming from artifacts. Do this only when not resuming.
self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict},
allow_val_change=True)
self.setup_training(opt) self.setup_training(opt)
if self.job_type == 'Dataset Creation':
self.data_dict = self.check_and_upload_dataset(opt)
def check_and_upload_dataset(self, opt):
"""
Check if the dataset format is compatible and upload it as W&B artifact
arguments:
opt (namespace)-- Commandline arguments for current run
returns:
Updated dataset info dictionary where local dataset paths are replaced by WANDB_ARTIFACT_PREFIX links.
"""
assert wandb, 'Install wandb to upload dataset'
config_path = self.log_dataset_artifact(opt.data,
opt.single_cls,
'YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem)
LOGGER.info(f"Created dataset config file {config_path}")
with open(config_path, errors='ignore') as f:
wandb_data_dict = yaml.safe_load(f)
return wandb_data_dict
def setup_training(self, opt): def setup_training(self, opt):
""" """
Setup the necessary processes for training YOLO models: Setup the necessary processes for training YOLO models:
@@ -95,18 +222,77 @@ class WandbLogger():
self.log_dict, self.current_epoch = {}, 0 self.log_dict, self.current_epoch = {}, 0
self.bbox_interval = opt.bbox_interval self.bbox_interval = opt.bbox_interval
if isinstance(opt.resume, str): if isinstance(opt.resume, str):
model_dir, _ = self.download_model_artifact(opt) modeldir, _ = self.download_model_artifact(opt)
if model_dir: if modeldir:
self.weights = Path(model_dir) / 'last.pt' self.weights = Path(modeldir) / "last.pt"
config = self.wandb_run.config config = self.wandb_run.config
opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str( opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp = str(
self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs, \ self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs, \
config.hyp, config.imgsz config.hyp
data_dict = self.data_dict
if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download
self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'),
opt.artifact_alias)
self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'),
opt.artifact_alias)
if self.train_artifact_path is not None:
train_path = Path(self.train_artifact_path) / 'data/images/'
data_dict['train'] = str(train_path)
if self.val_artifact_path is not None:
val_path = Path(self.val_artifact_path) / 'data/images/'
data_dict['val'] = str(val_path)
if self.val_artifact is not None:
self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"])
self.val_table = self.val_artifact.get("val")
if self.val_table_path_map is None:
self.map_val_table_path()
if opt.bbox_interval == -1: if opt.bbox_interval == -1:
self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1 self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1
if opt.evolve or opt.noplots: train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None
self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval # Update the the data_dict to point to local artifacts dir
if train_from_artifact:
self.data_dict = data_dict
def download_dataset_artifact(self, path, alias):
"""
download the dataset artifact if the path starts with WANDB_ARTIFACT_PREFIX
arguments:
path -- path of the dataset to be used for training
alias (str)-- alias of the artifact to be download/used for training
returns:
(str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if dataset
is found otherwise returns (None, None)
"""
if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/"))
assert dataset_artifact is not None, "Error: W&B dataset artifact doesn't exist"
datadir = dataset_artifact.download()
return datadir, dataset_artifact
return None, None
def download_model_artifact(self, opt):
"""
download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX
arguments:
opt (namespace) -- Commandline arguments for this run
"""
if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
modeldir = model_artifact.download()
epochs_trained = model_artifact.metadata.get('epochs_trained')
total_epochs = model_artifact.metadata.get('total_epochs')
is_finished = total_epochs is None
assert not is_finished, 'training is finished, can only resume incomplete runs.'
return modeldir, model_artifact
return None, None
def log_model(self, path, opt, epoch, fitness_score, best_model=False): def log_model(self, path, opt, epoch, fitness_score, best_model=False):
""" """
@@ -119,22 +305,166 @@ class WandbLogger():
fitness_score (float) -- fitness score for current epoch fitness_score (float) -- fitness score for current epoch
best_model (boolean) -- Boolean representing if the current checkpoint is the best yet. best_model (boolean) -- Boolean representing if the current checkpoint is the best yet.
""" """
model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={
type='model',
metadata={
'original_url': str(path), 'original_url': str(path),
'epochs_trained': epoch + 1, 'epochs_trained': epoch + 1,
'save period': opt.save_period, 'save period': opt.save_period,
'project': opt.project, 'project': opt.project,
'total_epochs': opt.epochs, 'total_epochs': opt.epochs,
'fitness_score': fitness_score}) 'fitness_score': fitness_score
})
model_artifact.add_file(str(path / 'last.pt'), name='last.pt') model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
wandb.log_artifact(model_artifact, wandb.log_artifact(model_artifact,
aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else '']) aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
LOGGER.info(f'Saving model artifact on epoch {epoch + 1}') LOGGER.info(f"Saving model artifact on epoch {epoch + 1}")
def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
"""
Log the dataset as W&B artifact and return the new data file with W&B links
arguments:
data_file (str) -- the .yaml file with information about the dataset like - path, classes etc.
single_class (boolean) -- train multi-class data as single-class
project (str) -- project name. Used to construct the artifact path
overwrite_config (boolean) -- overwrites the data.yaml file if set to true otherwise creates a new
file with _wandb postfix, e.g. data_wandb.yaml
returns:
the new .yaml file with artifact links. it can be used to start training directly from artifacts
"""
self.data_dict = check_dataset(data_file) # parse and check
data = dict(self.data_dict)
nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
names = {k: v for k, v in enumerate(names)} # to index dictionary
self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(
data['train'], rect=True, batch_size=1), names, name='train') if data.get('train') else None
self.val_artifact = self.create_dataset_table(LoadImagesAndLabels(
data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None
if data.get('train'):
data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
if data.get('val'):
data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
path = Path(data_file).stem
path = (path if overwrite_config else path + '_wandb') + '.yaml' # updated data.yaml path
data.pop('download', None)
data.pop('path', None)
with open(path, 'w') as f:
yaml.safe_dump(data, f)
if self.job_type == 'Training': # builds correct artifact pipeline graph
self.wandb_run.use_artifact(self.val_artifact)
self.wandb_run.use_artifact(self.train_artifact)
self.val_artifact.wait()
self.val_table = self.val_artifact.get('val')
self.map_val_table_path()
else:
self.wandb_run.log_artifact(self.train_artifact)
self.wandb_run.log_artifact(self.val_artifact)
return path
def map_val_table_path(self):
"""
Map the validation dataset Table: file name -> its id in the W&B Table.
Useful for - referencing artifacts for evaluation.
"""
self.val_table_path_map = {}
LOGGER.info("Mapping dataset")
for i, data in enumerate(tqdm(self.val_table.data)):
self.val_table_path_map[data[3]] = data[0]
def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int,str], name: str = 'dataset'):
"""
Create and return W&B artifact containing W&B Table of the dataset.
arguments:
dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table
class_to_id -- hash map that maps class ids to labels
name -- name of the artifact
returns:
dataset artifact to be logged or used
"""
# TODO: Explore multiprocessing to split this loop in parallel. This is essential for speeding up the logging
artifact = wandb.Artifact(name=name, type="dataset")
img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None
img_files = tqdm(dataset.img_files) if not img_files else img_files
for img_file in img_files:
if Path(img_file).is_dir():
artifact.add_dir(img_file, name='data/images')
labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
artifact.add_dir(labels_path, name='data/labels')
else:
artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
label_file = Path(img2label_paths([img_file])[0])
artifact.add_file(str(label_file),
name='data/labels/' + label_file.name) if label_file.exists() else None
table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
box_data, img_classes = [], {}
for cls, *xywh in labels[:, 1:].tolist():
cls = int(cls)
box_data.append({"position": {"middle": [xywh[0], xywh[1]], "width": xywh[2], "height": xywh[3]},
"class_id": cls,
"box_caption": "%s" % (class_to_id[cls])})
img_classes[cls] = class_to_id[cls]
boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space
table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),
Path(paths).name)
artifact.add(table, name)
return artifact
def log_training_progress(self, predn, path, names):
"""
Build evaluation Table. Uses reference from validation dataset table.
arguments:
predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class]
path (str): local path of the current evaluation image
names (dict(int, str)): hash map that maps class ids to labels
"""
class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
box_data = []
total_conf = 0
for *xyxy, conf, cls in predn.tolist():
if conf >= 0.25:
box_data.append(
{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
"class_id": int(cls),
"box_caption": f"{names[cls]} {conf:.3f}",
"scores": {"class_score": conf},
"domain": "pixel"})
total_conf += conf
boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
id = self.val_table_path_map[Path(path).name]
self.result_table.add_data(self.current_epoch,
id,
self.val_table.data[id][1],
wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
total_conf / max(1, len(box_data))
)
def val_one_image(self, pred, predn, path, names, im): def val_one_image(self, pred, predn, path, names, im):
pass """
Log validation data for one image. Updates the result Table if the validation dataset is uploaded and logs the bbox media panel
arguments:
pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class]
predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class]
path (str): local path of the current evaluation image
"""
if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact
self.log_training_progress(predn, path, names)
if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0:
if self.current_epoch % self.bbox_interval == 0:
box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
"class_id": int(cls),
"box_caption": f"{names[cls]} {conf:.3f}",
"scores": {"class_score": conf},
"domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name))
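For reference, the `boxes` payload passed to `wandb.Image` above is a plain nested dict; a standalone sketch with illustrative values only (the image path and class map are placeholders):

import wandb

box_data = [{"position": {"minX": 34.0, "minY": 50.0, "maxX": 200.0, "maxY": 310.0},
             "class_id": 0,
             "box_caption": "person 0.912",
             "scores": {"class_score": 0.912},
             "domain": "pixel"}]  # pixel-space coordinates, as in val_one_image
boxes = {"predictions": {"box_data": box_data, "class_labels": {0: "person"}}}
img = wandb.Image("data/images/sample.jpg", boxes=boxes, caption="sample.jpg")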
def log(self, log_dict):
"""
@@ -147,7 +477,7 @@ class WandbLogger():
for key, value in log_dict.items():
self.log_dict[key] = value
def end_epoch(self, best_result=False):
"""
commit the log_dict, model artifacts and Tables to W&B and flush the log_dict.
@@ -156,15 +486,25 @@ class WandbLogger():
"""
if self.wandb_run:
with all_logging_disabled():
if self.bbox_media_panel_images:
self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images
try:
wandb.log(self.log_dict)
except BaseException as e:
LOGGER.info(f"An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}")
self.wandb_run.finish()
self.wandb_run = None
self.log_dict = {}
self.bbox_media_panel_images = []
if self.result_artifact:
self.result_artifact.add(self.result_table, 'result')
wandb.log_artifact(self.result_artifact, aliases=['latest', 'last', 'epoch ' + str(self.current_epoch),
('best' if best_result else '')])
wandb.log({"evaluation": self.result_table})
self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"])
self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
def finish_run(self):
"""
@@ -175,7 +515,6 @@ class WandbLogger():
with all_logging_disabled():
wandb.log(self.log_dict)
wandb.run.finish()
LOGGER.warning(DEPRECATION_WARNING)
@contextmanager

utils/metrics.py

@@ -11,8 +11,6 @@ import matplotlib.pyplot as plt
import numpy as np
import torch
from utils import TryExcept, threaded
def fitness(x):
# Model fitness as a weighted combination of metrics
@@ -20,15 +18,7 @@ def fitness(x):
return (x[:, :4] * w).sum(1)
def smooth(y, f=0.05):
# Box filter of fraction f
nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd)
p = np.ones(nf // 2) # ones padding
yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded
return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed
def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
""" Compute the average precision, given the recall and precision curves.
Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
# Arguments
@@ -47,7 +37,7 @@ def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names
tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
# Find unique classes
unique_classes = np.unique(target_cls)
nc = unique_classes.shape[0] # number of classes, number of detections
# Create Precision-Recall curve and compute AP for each class
@@ -55,17 +45,18 @@ def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names
ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
for ci, c in enumerate(unique_classes):
i = pred_cls == c
n_l = (target_cls == c).sum() # number of labels
n_p = i.sum() # number of predictions
if n_p == 0 or n_l == 0:
continue
else:
# Accumulate FPs and TPs
fpc = (1 - tp[i]).cumsum(0)
tpc = tp[i].cumsum(0)
# Recall
recall = tpc / (n_l + 1e-16) # recall curve
r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
# Precision
@@ -79,20 +70,17 @@ def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names
py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
# Compute F1 (harmonic mean of precision and recall)
f1 = 2 * p * r / (p + r + 1e-16)
names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data
names = {i: v for i, v in enumerate(names)} # to dict
if plot:
plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
i = f1.mean(0).argmax() # max F1 index
return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
def compute_ap(recall, precision):
@@ -141,12 +129,6 @@ class ConfusionMatrix:
Returns:
None, updates confusion matrix accordingly
"""
if detections is None:
gt_classes = labels.int()
for gc in gt_classes:
self.matrix[self.nc, gc] += 1 # background FN
return
detections = detections[detections[:, 4] > self.conf]
gt_classes = labels[:, 0].int()
detection_classes = detections[:, 5].int()
@@ -164,55 +146,43 @@ class ConfusionMatrix:
matches = np.zeros((0, 3))
n = matches.shape[0] > 0
m0, m1, _ = matches.transpose().astype(np.int16)
for i, gc in enumerate(gt_classes):
j = m0 == i
if n and sum(j) == 1:
self.matrix[detection_classes[m1[j]], gc] += 1 # correct
else:
self.matrix[self.nc, gc] += 1 # background FP
if n:
for i, dc in enumerate(detection_classes):
if not any(m1 == i):
self.matrix[dc, self.nc] += 1 # background FN
def tp_fp(self):
tp = self.matrix.diagonal() # true positives
fp = self.matrix.sum(1) - tp # false positives
# fn = self.matrix.sum(0) - tp # false negatives (missed detections)
return tp[:-1], fp[:-1] # remove background class
def matrix(self):
return self.matrix
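A small numeric sanity check of the `tp_fp()` convention above, where the last row and column hold the background class (values are illustrative):

import numpy as np

m = np.array([[5., 1., 2.],
              [0., 7., 1.],
              [3., 2., 0.]])  # 2 classes + background row/column
tp = m.diagonal()        # [5., 7., 0.]
fp = m.sum(1) - tp       # row sums minus diagonal: [3., 1., 5.]
print(tp[:-1], fp[:-1])  # background entry dropped: [5. 7.] [3. 1.]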
@TryExcept('WARNING ⚠️ ConfusionMatrix plot failure')
def plot(self, normalize=True, save_dir='', names=()):
try:
import seaborn as sn
array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-6) if normalize else 1) # normalize columns
array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
fig = plt.figure(figsize=(12, 9), tight_layout=True)
sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
xticklabels=names + ['background FP'] if labels else "auto",
yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
fig.axes[0].set_xlabel('True')
fig.axes[0].set_ylabel('Predicted')
fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
plt.close()
except Exception as e:
print(f'WARNING: ConfusionMatrix plot failure: {e}')
def print(self):
for i in range(self.nc + 1):
@@ -224,19 +194,19 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
# Get the coordinates of bounding boxes
if xywh: # transform from xywh to xyxy
(x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, 1), box2.chunk(4, 1)
w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2
b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_
b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_
else: # x1, y1, x2, y2 = box1
b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, 1)
b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, 1)
w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
# Intersection area
inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
(torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
# Union Area
union = w1 * h1 + w2 * h2 - inter + eps
@@ -244,13 +214,13 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
# IoU
iou = inter / union
if CIoU or DIoU or GIoU:
cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2
if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
with torch.no_grad():
alpha = v / (v - iou + (1 + eps))
return iou - (rho2 / c2 + v * alpha) # CIoU
@@ -260,7 +230,7 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
return iou # IoU
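For reference, the CIoU branch implements the penalty from the paper linked in the code (Zheng et al., 2019), with $\rho^2$ the squared center distance and $c^2$ the squared convex-box diagonal computed above:

$$\mathrm{CIoU} = \mathrm{IoU} - \frac{\rho^2}{c^2} - \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w_2}{h_2} - \arctan\frac{w_1}{h_1}\right)^{2}, \qquad \alpha = \frac{v}{(1 - \mathrm{IoU}) + v + \varepsilon}$$

which matches `return iou - (rho2 / c2 + v * alpha)` with `alpha = v / (v - iou + (1 + eps))`.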
def box_iou(box1, box2):
# https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
"""
Return intersection-over-union (Jaccard index) of boxes.
@@ -273,24 +243,30 @@ def box_iou(box1, box2, eps=1e-7):
IoU values for every element in boxes1 and boxes2
"""
def box_area(box):
# box = 4xn
return (box[2] - box[0]) * (box[3] - box[1])
area1 = box_area(box1.T)
area2 = box_area(box2.T)
# inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
def bbox_ioa(box1, box2, eps=1E-7):
""" Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2
box1: np.array of shape(4)
box2: np.array of shape(nx4)
returns: np.array of shape(n)
"""
box2 = box2.transpose()
# Get the coordinates of bounding boxes
b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
# Intersection area
inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
@@ -303,19 +279,17 @@ def bbox_ioa(box1, box2, eps=1e-7):
return inter_area / box2_area
def wh_iou(wh1, wh2):
# Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
wh1 = wh1[:, None] # [N,1,2]
wh2 = wh2[None] # [1,M,2]
inter = torch.min(wh1, wh2).prod(2) # [N,M]
return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
# Plots ----------------------------------------------------------------------------------------------------------------
def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
# Precision-recall curve
fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
py = np.stack(py, axis=1)
@@ -331,14 +305,12 @@ def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
ax.set_ylabel('Precision')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
fig.savefig(Path(save_dir), dpi=250)
plt.close()
def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
# Metric-confidence curve
fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
@@ -348,13 +320,12 @@ def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confi
else:
ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
y = py.mean(0)
ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
fig.savefig(Path(save_dir), dpi=250)
plt.close()

utils/plots.py

@@ -3,12 +3,10 @@
Plotting utils
"""
import contextlib
import math
import os
from copy import copy
from pathlib import Path
from urllib.error import URLError
import cv2
import matplotlib
@@ -19,13 +17,12 @@ import seaborn as sn
import torch
from PIL import Image, ImageDraw, ImageFont
from utils.general import (LOGGER, Timeout, check_requirements, clip_coords, increment_path, is_ascii, is_chinese,
try_except, user_config_dir, xywh2xyxy, xyxy2xywh)
from utils.metrics import fitness
from utils.segment.general import scale_image
# Settings
CONFIG_DIR = user_config_dir() # Ultralytics settings dir
RANK = int(os.getenv('RANK', -1))
matplotlib.rc('font', **{'size': 11})
matplotlib.use('Agg') # for writing to files only
@@ -35,9 +32,9 @@ class Colors:
# Ultralytics color palette https://ultralytics.com/
def __init__(self):
# hex = matplotlib.colors.TABLEAU_COLORS.values()
hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
'2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
self.palette = [self.hex2rgb('#' + c) for c in hex]
self.n = len(self.palette)
def __call__(self, i, bgr=False):
@@ -52,32 +49,34 @@ class Colors:
colors = Colors() # create instance for 'from utils.plots import colors'
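As a usage note, `colors(i)` indexes the 20-entry palette modulo its length, and `bgr=True` flips the channel order for OpenCV drawing (a sketch; return values follow the hex tuple above):

print(colors(0))        # (255, 56, 56): 'FF3838' decoded as RGB
print(colors(0, True))  # (56, 56, 255): same color reordered to BGR for cv2
print(colors(20))       # palette wraps every 20 classes, so identical to colors(0)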
def check_font(font='Arial.ttf', size=10):
# Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary
font = Path(font)
font = font if font.exists() else (CONFIG_DIR / font.name)
try:
return ImageFont.truetype(str(font) if font.exists() else font.name, size)
except Exception as e: # download if missing
url = "https://ultralytics.com/assets/" + font.name
print(f'Downloading {url} to {font}...')
torch.hub.download_url_to_file(url, str(font), progress=False)
try:
return ImageFont.truetype(str(font), size)
except TypeError:
check_requirements('Pillow>=8.4.0') # known issue https://github.com/ultralytics/yolov5/issues/5374
if RANK in (-1, 0):
check_font() # download TTF if necessary
class Annotator:
# Annotator for train/val mosaics and jpgs and detect/hub inference annotations
def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'):
assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.'
self.pil = pil or not is_ascii(example) or is_chinese(example)
if self.pil: # use PIL
self.im = im if isinstance(im, Image.Image) else Image.fromarray(im)
self.draw = ImageDraw.Draw(self.im)
self.font = check_font(font='Arial.Unicode.ttf' if is_chinese(example) else font,
size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12))
else: # use cv2
self.im = im
@@ -88,14 +87,12 @@ class Annotator:
if self.pil or not is_ascii(label):
self.draw.rectangle(box, width=self.lw, outline=color) # box
if label:
w, h = self.font.getsize(label) # text width, height
outside = box[1] - h >= 0 # label fits outside box
self.draw.rectangle([box[0],
box[1] - h if outside else box[1],
box[0] + w + 1,
box[1] + 1 if outside else box[1] + h + 1], fill=color)
# self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') # for PIL>8.0
self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font)
else: # cv2
@@ -104,62 +101,20 @@ class Annotator:
if label:
tf = max(self.lw - 1, 1) # font thickness
w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] # text width, height
outside = p1[1] - h - 3 >= 0 # label fits outside box
p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3
cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA) # filled
cv2.putText(self.im, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), 0, self.lw / 3, txt_color,
thickness=tf, lineType=cv2.LINE_AA)
def masks(self, masks, colors, im_gpu, alpha=0.5, retina_masks=False):
"""Plot masks at once.
Args:
masks (tensor): predicted masks on cuda, shape: [n, h, w]
colors (List[List[Int]]): colors for predicted masks, [[r, g, b] * n]
im_gpu (tensor): img is in cuda, shape: [3, h, w], range: [0, 1]
alpha (float): mask transparency: 0.0 fully transparent, 1.0 opaque
"""
if self.pil:
# convert to numpy first
self.im = np.asarray(self.im).copy()
if len(masks) == 0:
self.im[:] = im_gpu.permute(1, 2, 0).contiguous().cpu().numpy() * 255
colors = torch.tensor(colors, device=im_gpu.device, dtype=torch.float32) / 255.0
colors = colors[:, None, None] # shape(n,1,1,3)
masks = masks.unsqueeze(3) # shape(n,h,w,1)
masks_color = masks * (colors * alpha) # shape(n,h,w,3)
inv_alph_masks = (1 - masks * alpha).cumprod(0) # shape(n,h,w,1)
mcs = (masks_color * inv_alph_masks).sum(0) * 2 # mask color summand shape(n,h,w,3)
im_gpu = im_gpu.flip(dims=[0]) # flip channel
im_gpu = im_gpu.permute(1, 2, 0).contiguous() # shape(h,w,3)
im_gpu = im_gpu * inv_alph_masks[-1] + mcs
im_mask = (im_gpu * 255).byte().cpu().numpy()
self.im[:] = im_mask if retina_masks else scale_image(im_gpu.shape, im_mask, self.im.shape)
if self.pil:
# convert im back to PIL and update draw
self.fromarray(self.im)
def rectangle(self, xy, fill=None, outline=None, width=1):
# Add rectangle to image (PIL-only)
self.draw.rectangle(xy, fill, outline, width)
def text(self, xy, text, txt_color=(255, 255, 255)):
# Add text to image (PIL-only)
w, h = self.font.getsize(text) # text width, height
self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font)
def fromarray(self, im):
# Update self.im from a numpy array
self.im = im if isinstance(im, Image.Image) else Image.fromarray(im)
self.draw = ImageDraw.Draw(self.im)
def result(self):
# Return annotated image as array
@@ -177,7 +132,7 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec
if 'Detect' not in module_type:
batch, channels, height, width = x.shape # batch, channels, height, width
if height > 1 and width > 1:
f = f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename
blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels
n = min(n, channels) # number of plots
@@ -188,10 +143,9 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec
ax[i].imshow(blocks[i].squeeze()) # cmap='gray'
ax[i].axis('off')
print(f'Saving {save_dir / f}... ({n}/{channels})')
plt.savefig(save_dir / f, dpi=300, bbox_inches='tight')
plt.close()
np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy()) # npy save
def hist2d(x, y, n=100):
@@ -216,31 +170,26 @@ def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
return filtfilt(b, a, data) # forward-backward filter
def output_to_target(output):
# Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
targets = []
for i, o in enumerate(output):
for *box, conf, cls in o.cpu().numpy():
targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])
return np.array(targets)
def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=1920, max_subplots=16):
# Plot image grid with labels
if isinstance(images, torch.Tensor):
images = images.cpu().float().numpy()
if isinstance(targets, torch.Tensor):
targets = targets.cpu().numpy()
if np.max(images[0]) <= 1:
images *= 255 # de-normalise (optional)
bs, _, h, w = images.shape # batch size, _, height, width
bs = min(bs, max_subplots) # limit plot images
ns = np.ceil(bs ** 0.5) # number of subplots (square)
# Build Image
mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
@@ -260,12 +209,12 @@ def plot_images(images, targets, paths=None, fname='images.jpg', names=None):
# Annotate
fs = int((h + w) * ns * 0.01) # font size
annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True)
for i in range(i + 1):
x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders
if paths:
annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames
if len(targets) > 0:
ti = targets[targets[:, 0] == i] # image targets
boxes = xywh2xyxy(ti[:, 2:6]).T
@@ -346,7 +295,7 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_
ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel()
fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
# for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov3', 'yolov3-spp', 'yolov3-tiny']]:
for f in sorted(save_dir.glob('study*.txt')):
y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
x = np.arange(y.shape[1]) if x is None else np.array(x)
@@ -357,19 +306,11 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_
ax[i].set_title(s[i])
j = y[3].argmax() + 1
ax2.plot(y[5, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,
label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')
ax2.grid(alpha=0.2)
ax2.set_yticks(np.arange(20, 60, 5))
@@ -383,7 +324,8 @@ def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_
plt.savefig(f, dpi=300)
@try_except # known issue https://github.com/ultralytics/yolov5/issues/5395
@Timeout(30) # known issue https://github.com/ultralytics/yolov5/issues/5611
def plot_labels(labels, names=(), save_dir=Path('')):
# plot dataset labels
LOGGER.info(f"Plotting labels to {save_dir / 'labels.jpg'}... ")
@@ -400,12 +342,11 @@ def plot_labels(labels, names=(), save_dir=Path('')):
matplotlib.use('svg') # faster
ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
# [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # update colors bug #3195
ax[0].set_ylabel('instances')
if 0 < len(names) < 30:
ax[0].set_xticks(range(len(names)))
ax[0].set_xticklabels(names, rotation=90, fontsize=10)
else:
ax[0].set_xlabel('classes')
sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)
@@ -429,35 +370,6 @@ def plot_labels(labels, names=(), save_dir=Path('')):
plt.close()
def imshow_cls(im, labels=None, pred=None, names=None, nmax=25, verbose=False, f=Path('images.jpg')):
# Show classification image grid with labels (optional) and predictions (optional)
from utils.augmentations import denormalize
names = names or [f'class{i}' for i in range(1000)]
blocks = torch.chunk(denormalize(im.clone()).cpu().float(), len(im),
dim=0) # select batch index 0, block by channels
n = min(len(blocks), nmax) # number of plots
m = min(8, round(n ** 0.5)) # 8 x 8 default
fig, ax = plt.subplots(math.ceil(n / m), m) # 8 rows x n/8 cols
ax = ax.ravel() if m > 1 else [ax]
# plt.subplots_adjust(wspace=0.05, hspace=0.05)
for i in range(n):
ax[i].imshow(blocks[i].squeeze().permute((1, 2, 0)).numpy().clip(0.0, 1.0))
ax[i].axis('off')
if labels is not None:
s = names[labels[i]] + (f'{names[pred[i]]}' if pred is not None else '')
ax[i].set_title(s, fontsize=8, verticalalignment='top')
plt.savefig(f, dpi=300, bbox_inches='tight')
plt.close()
if verbose:
LOGGER.info(f'Saving {f}')
if labels is not None:
LOGGER.info('True: ' + ' '.join(f'{names[i]:3s}' for i in labels[:nmax]))
if pred is not None:
LOGGER.info('Predicted:' + ' '.join(f'{names[i]:3s}' for i in pred[:nmax]))
return f
def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve()
# Plot evolve.csv hyp evolution results
evolve_csv = Path(evolve_csv)
@@ -468,7 +380,6 @@ def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *;
j = np.argmax(f) # max fitness index
plt.figure(figsize=(10, 12), tight_layout=True)
matplotlib.rc('font', **{'size': 8})
print(f'Best results from row {j} of {evolve_csv}:')
for i, k in enumerate(keys[7:]):
v = x[:, 7 + i]
mu = v[j] # best single result
@@ -492,20 +403,20 @@ def plot_results(file='path/to/results.csv', dir=''):
ax = ax.ravel()
files = list(save_dir.glob('results*.csv'))
assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.'
for fi, f in enumerate(files):
try:
data = pd.read_csv(f)
s = [x.strip() for x in data.columns]
x = data.values[:, 0]
for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]):
y = data.values[:, j]
# y[y == 0] = np.nan # don't show zero values
ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8)
ax[i].set_title(s[j], fontsize=12)
# if j in [8, 9, 10]: # share train and val loss y axes
# ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
except Exception as e:
print(f'Warning: Plotting error for {f}: {e}')
ax[1].legend()
fig.savefig(save_dir / 'results.png', dpi=200)
plt.close()
@@ -542,7 +453,7 @@ def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)
def save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False, save=True):
# Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop
xyxy = torch.tensor(xyxy).view(-1, 4)
b = xyxy2xywh(xyxy) # boxes
@@ -550,11 +461,9 @@ def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False,
b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square
b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad
xyxy = xywh2xyxy(b).long()
clip_coords(xyxy, im.shape)
crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)]
if save:
file.parent.mkdir(parents=True, exist_ok=True) # make directory
cv2.imwrite(str(increment_path(file).with_suffix('.jpg')), crop)
return crop

utils/torch_utils.py

@@ -3,12 +3,12 @@
PyTorch utils
"""
import datetime
import math
import os
import platform
import subprocess
import time
import warnings
from contextlib import contextmanager
from copy import deepcopy
from pathlib import Path
@@ -17,77 +17,20 @@ import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from utils.general import LOGGER
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
try:
import thop # for FLOPs computation
except ImportError:
thop = None
# Suppress PyTorch warnings
warnings.filterwarnings('ignore', message='User provided device_type of \'cuda\', but CUDA is not available. Disabling')
warnings.filterwarnings('ignore', category=UserWarning)
def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')):
# Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator
def decorate(fn):
return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn)
return decorate
def smartCrossEntropyLoss(label_smoothing=0.0):
# Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0
if check_version(torch.__version__, '1.10.0'):
return nn.CrossEntropyLoss(label_smoothing=label_smoothing)
if label_smoothing > 0:
LOGGER.warning(f'WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0')
return nn.CrossEntropyLoss()
def smart_DDP(model):
# Model DDP creation with checks
assert not check_version(torch.__version__, '1.12.0', pinned=True), \
'torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. ' \
'Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395'
if check_version(torch.__version__, '1.11.0'):
return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True)
else:
return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
def reshape_classifier_output(model, n=1000):
# Update a TorchVision classification model to class count 'n' if required
from models.common import Classify
name, m = list((model.model if hasattr(model, 'model') else model).named_children())[-1] # last module
if isinstance(m, Classify): # YOLOv3 Classify() head
if m.linear.out_features != n:
m.linear = nn.Linear(m.linear.in_features, n)
elif isinstance(m, nn.Linear): # ResNet, EfficientNet
if m.out_features != n:
setattr(model, name, nn.Linear(m.in_features, n))
elif isinstance(m, nn.Sequential):
types = [type(x) for x in m]
if nn.Linear in types:
i = types.index(nn.Linear) # nn.Linear index
if m[i].out_features != n:
m[i] = nn.Linear(m[i].in_features, n)
elif nn.Conv2d in types:
i = types.index(nn.Conv2d) # nn.Conv2d index
if m[i].out_channels != n:
m[i] = nn.Conv2d(m[i].in_channels, n, m[i].kernel_size, m[i].stride, bias=m[i].bias is not None)
@contextmanager
def torch_distributed_zero_first(local_rank: int):
"""
Decorator to make all processes in distributed training wait for each local_master to do something.
"""
if local_rank not in [-1, 0]:
dist.barrier(device_ids=[local_rank])
yield
@@ -95,70 +38,69 @@ def torch_distributed_zero_first(local_rank: int):
dist.barrier(device_ids=[0])
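Typical usage of this context manager in the training scripts: the local master performs a one-time action (e.g. a dataset download or cache build) while the other ranks block at the barrier. A hedged sketch, where `prepare_dataset` is a placeholder:

with torch_distributed_zero_first(LOCAL_RANK):
    dataset = prepare_dataset()  # placeholder: runs first on rank -1/0, then on the remaining ranks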
def date_modified(path=__file__):
# return human-readable file modification date, i.e. '2021-3-26'
t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
return f'{t.year}-{t.month}-{t.day}'
def git_describe(path=Path(__file__).parent): # path must be a directory
# return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
s = f'git -C {path} describe --tags --long --always'
try:
return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
except subprocess.CalledProcessError as e:
return '' # not a git repository
def device_count():
# Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows
assert platform.system() in ('Linux', 'Windows'), 'device_count() only supported on Linux or Windows'
try:
cmd = 'nvidia-smi -L | wc -l' if platform.system() == 'Linux' else 'nvidia-smi -L | find /c /v ""' # Windows
return int(subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1])
except Exception:
return 0
def select_device(device='', batch_size=None, newline=True):
# device = 'cpu' or '0' or '0,1,2,3'
s = f'YOLOv3 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
device = str(device).strip().lower().replace('cuda:', '') # to string, 'cuda:0' to '0'
cpu = device == 'cpu'
if cpu:
os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
elif device: # non-cpu device requested
os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
cuda = not cpu and torch.cuda.is_available()
if cuda:
devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
n = len(devices) # device count
if n > 1 and batch_size: # check batch_size is divisible by device_count
assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
space = ' ' * (len(s) + 1)
for i, d in enumerate(devices):
p = torch.cuda.get_device_properties(i)
s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2:.0f}MiB)\n" # bytes to MB
else:
s += 'CPU\n'
if not newline:
s = s.rstrip()
LOGGER.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
return torch.device('cuda:0' if cuda else 'cpu')
def time_sync():
# pytorch-accurate time
if torch.cuda.is_available():
torch.cuda.synchronize()
return time.time()
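A brief illustration of why the synchronize call matters when timing CUDA code, since kernels launch asynchronously (a sketch; `model` and `x` are placeholders):

t0 = time_sync()
y = model(x)      # hypothetical CUDA forward pass, enqueued asynchronously
t1 = time_sync()  # synchronize() ensures the kernels finished before reading the clock
print(f'{(t1 - t0) * 1000:.1f} ms')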
def profile(input, ops, n=10, device=None):
# speed/memory/FLOPs profiler
#
# Usage:
#     input = torch.randn(16, 3, 640, 640)
#     m1 = lambda x: x * torch.sigmoid(x)
#     m2 = nn.SiLU()
#     profile(input, [m1, m2], n=100)  # profile over 100 iterations
results = []
device = device or select_device()
print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}"
f"{'input':>24s}{'output':>24s}")
@@ -171,7 +113,7 @@ def profile(input, ops, n=10, device=None):
tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward
try:
flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs
except:
flops = 0
try:
@@ -182,14 +124,15 @@ def profile(input, ops, n=10, device=None):
try:
_ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward()
t[2] = time_sync()
except Exception as e: # no backward method
# print(e) # for debug
t[2] = float('nan')
tf += (t[1] - t[0]) * 1000 / n # ms per op forward
tb += (t[2] - t[1]) * 1000 / n # ms per op backward
mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB)
s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}')
results.append([p, flops, mem, tf, tb, s_in, s_out])
except Exception as e:
@@ -238,30 +181,30 @@ def sparsity(model):
def prune(model, amount=0.3):
# Prune model to requested global sparsity
import torch.nn.utils.prune as prune
print('Pruning model... ', end='')
for name, m in model.named_modules():
if isinstance(m, nn.Conv2d):
prune.l1_unstructured(m, name='weight', amount=amount) # prune
prune.remove(m, 'weight') # make permanent
print(' %.3g global sparsity' % sparsity(model))
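A minimal usage sketch of `prune` on a throwaway model (illustrative; the printed sparsity should land near the requested 30%):

import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 8, 3))
prune(model, amount=0.3)  # zeroes the smallest 30% of weights in each Conv2d layer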
def fuse_conv_and_bn(conv, bn):
# Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
fusedconv = nn.Conv2d(conv.in_channels,
conv.out_channels,
kernel_size=conv.kernel_size,
stride=conv.stride,
padding=conv.padding,
groups=conv.groups,
bias=True).requires_grad_(False).to(conv.weight.device)
# prepare filters
w_conv = conv.weight.clone().view(conv.out_channels, -1)
w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
# prepare spatial bias
b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
@@ -269,7 +212,7 @@ def fuse_conv_and_bn(conv, bn):
return fusedconv
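In matrix form, the fusion above folds the BatchNorm affine transform into the convolution; with $\gamma, \beta, \mu, \sigma^2$ the BN weight, bias, running mean and running variance:

$$W_{\text{fused}} = \operatorname{diag}\!\left(\frac{\gamma}{\sqrt{\sigma^2 + \varepsilon}}\right) W, \qquad b_{\text{fused}} = \operatorname{diag}\!\left(\frac{\gamma}{\sqrt{\sigma^2 + \varepsilon}}\right) b + \beta - \frac{\gamma\,\mu}{\sqrt{\sigma^2 + \varepsilon}}$$

which is exactly what the `w_bn` matrix multiply and the `b_bn` term compute.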
-def model_info(model, verbose=False, imgsz=640):
+def model_info(model, verbose=False, img_size=640):
     # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
     n_p = sum(x.numel() for x in model.parameters())  # number parameters
     n_g = sum(x.numel() for x in model.parameters() if x.requires_grad)  # number gradients

@@ -281,23 +224,23 @@ def model_info(model, verbose=False, imgsz=640):
                 (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))

     try:  # FLOPs
-        p = next(model.parameters())
-        stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32  # max stride
-        im = torch.empty((1, p.shape[1], stride, stride), device=p.device)  # input image in BCHW format
-        flops = thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] / 1E9 * 2  # stride GFLOPs
-        imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz]  # expand if int/float
-        fs = f', {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs'  # 640x640 GFLOPs
-    except Exception:
+        from thop import profile
+        stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
+        img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device)  # input
+        flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2  # stride GFLOPs
+        img_size = img_size if isinstance(img_size, list) else [img_size, img_size]  # expand if int/float
+        fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride)  # 640x640 GFLOPs
+    except (ImportError, Exception):
         fs = ''

-    name = Path(model.yaml_file).stem.replace('yolov5', 'YOLOv3') if hasattr(model, 'yaml_file') else 'Model'
-    LOGGER.info(f'{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}')
+    LOGGER.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
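
Both variants use the same GFLOPs arithmetic: `thop.profile` counts multiply-accumulates on one stride-sized input, the x2 converts MACs to FLOPs, /1E9 gives GFLOPs, and the two `img_size / stride` factors scale the count up to the full image. A standalone illustration (toy model; assumes `thop` is installed):

```python
from copy import deepcopy

import torch
import torch.nn as nn
from thop import profile

model = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
stride = 32
img = torch.zeros(1, 3, stride, stride)
macs, _ = profile(deepcopy(model), inputs=(img,), verbose=False)  # returns (MACs, params)
gflops = macs / 1E9 * 2 * (640 / stride) * (640 / stride)  # extrapolate 32x32 -> 640x640
print(f'{gflops:.1f} GFLOPs at 640x640')
```
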
 def scale_img(img, ratio=1.0, same_shape=False, gs=32):  # img(16,3,256,416)
-    # Scales img(bs,3,y,x) by ratio constrained to gs-multiple
+    # scales img(bs,3,y,x) by ratio constrained to gs-multiple
     if ratio == 1.0:
         return img
+    else:
         h, w = img.shape[2:]
         s = (int(h * ratio), int(w * ratio))  # new size
         img = F.interpolate(img, size=s, mode='bilinear', align_corners=False)  # resize
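
The "constrained to gs-multiple" part of the comment refers to the remainder of `scale_img`, elided by the next hunk: after resizing, height and width are padded up to the nearest multiple of the grid size `gs`. A hedged standalone sketch of that resize-then-pad behaviour (the ratio and 0.447 fill value, roughly the ImageNet mean, are illustrative):

```python
import math

import torch
import torch.nn.functional as F

img = torch.zeros(16, 3, 256, 416)
ratio, gs = 0.7, 32
h, w = img.shape[2:]
s = (int(h * ratio), int(w * ratio))                       # (179, 291)
img = F.interpolate(img, size=s, mode='bilinear', align_corners=False)
h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w))    # round up to gs-multiples: (192, 320)
img = F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447)  # pad right/bottom with the mean value
print(img.shape)                                           # torch.Size([16, 3, 192, 320])
```
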
@@ -315,71 +258,8 @@ def copy_attr(a, b, include=(), exclude=()):
             setattr(a, k, v)
-def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, decay=1e-5):
-    # YOLOv3 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay
-    g = [], [], []  # optimizer parameter groups
-    bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k)  # normalization layers, i.e. BatchNorm2d()
-    for v in model.modules():
-        for p_name, p in v.named_parameters(recurse=0):
-            if p_name == 'bias':  # bias (no decay)
-                g[2].append(p)
-            elif p_name == 'weight' and isinstance(v, bn):  # weight (no decay)
-                g[1].append(p)
-            else:
-                g[0].append(p)  # weight (with decay)
-
-    if name == 'Adam':
-        optimizer = torch.optim.Adam(g[2], lr=lr, betas=(momentum, 0.999))  # adjust beta1 to momentum
-    elif name == 'AdamW':
-        optimizer = torch.optim.AdamW(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0)
-    elif name == 'RMSProp':
-        optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum)
-    elif name == 'SGD':
-        optimizer = torch.optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True)
-    else:
-        raise NotImplementedError(f'Optimizer {name} not implemented.')
-
-    optimizer.add_param_group({'params': g[0], 'weight_decay': decay})  # add g0 with weight_decay
-    optimizer.add_param_group({'params': g[1], 'weight_decay': 0.0})  # add g1 (BatchNorm2d weights)
-    LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups "
-                f'{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias')
-    return optimizer
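
The helper deleted above builds the optimizer from the bias group and then attaches the other two groups, so weight decay only ever touches ordinary weights. A condensed sketch of the same grouping idea (toy model; SGD chosen for brevity):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Conv2d(8, 8, 3))
decay, no_decay = [], []
for m in model.modules():
    for name, p in m.named_parameters(recurse=False):
        if name == 'bias' or isinstance(m, nn.BatchNorm2d):
            no_decay.append(p)  # biases and BatchNorm weights: no weight decay
        else:
            decay.append(p)  # ordinary weights: decayed
optimizer = torch.optim.SGD([{'params': decay, 'weight_decay': 1e-5},
                             {'params': no_decay, 'weight_decay': 0.0}],
                            lr=0.01, momentum=0.9, nesterov=True)
print(len(optimizer.param_groups), 'parameter groups')
```
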
-def smart_hub_load(repo='ultralytics/yolov5', model='yolov5s', **kwargs):
-    # YOLOv3 torch.hub.load() wrapper with smart error/issue handling
-    if check_version(torch.__version__, '1.9.1'):
-        kwargs['skip_validation'] = True  # validation causes GitHub API rate limit errors
-    if check_version(torch.__version__, '1.12.0'):
-        kwargs['trust_repo'] = True  # argument required starting in torch 1.12
-    try:
-        return torch.hub.load(repo, model, **kwargs)
-    except Exception:
-        return torch.hub.load(repo, model, force_reload=True, **kwargs)
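
The fallback in this removed wrapper amounts to retrying `torch.hub.load` with a fresh clone when the cached repo is stale. A hedged standalone equivalent (downloads from GitHub at run time; repo and model names as in the removed code):

```python
import torch

try:
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', skip_validation=True)
except Exception:
    # a stale or corrupted hub cache is the usual culprit; re-clone and retry once
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
```
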
-def smart_resume(ckpt, optimizer, ema=None, weights='yolov5s.pt', epochs=300, resume=True):
-    # Resume training from a partially trained checkpoint
-    best_fitness = 0.0
-    start_epoch = ckpt['epoch'] + 1
-    if ckpt['optimizer'] is not None:
-        optimizer.load_state_dict(ckpt['optimizer'])  # optimizer
-        best_fitness = ckpt['best_fitness']
-    if ema and ckpt.get('ema'):
-        ema.ema.load_state_dict(ckpt['ema'].float().state_dict())  # EMA
-        ema.updates = ckpt['updates']
-    if resume:
-        assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.\n' \
-                                f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'"
-        LOGGER.info(f'Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs')
-    if epochs < start_epoch:
-        LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
-        epochs += ckpt['epoch']  # finetune additional epochs
-    return best_fitness, start_epoch, epochs
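
The checkpoint dict this helper expects carries at least `epoch`, `best_fitness` and `optimizer`. A minimal round-trip showing that contract (toy model; the file name is hypothetical):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
torch.save({'epoch': 99, 'best_fitness': 0.71, 'optimizer': optimizer.state_dict()}, 'last.pt')

ckpt = torch.load('last.pt')
start_epoch = ckpt['epoch'] + 1            # resume where the run left off
optimizer.load_state_dict(ckpt['optimizer'])
print(f"resuming at epoch {start_epoch}, best fitness {ckpt['best_fitness']}")
```
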
 class EarlyStopping:
-    # YOLOv3 simple early stopper
+    # simple early stopper
     def __init__(self, patience=30):
         self.best_fitness = 0.0  # i.e. mAP
         self.best_epoch = 0

@@ -402,30 +282,36 @@ class EarlyStopping:
 class ModelEMA:
-    """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models
-    Keeps a moving average of everything in the model state_dict (parameters and buffers)
-    For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
+    """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
+    Keep a moving average of everything in the model state_dict (parameters and buffers).
+    This is intended to allow functionality like
+    https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
+    A smoothed version of the weights is necessary for some training schemes to perform well.
+    This class is sensitive where it is initialized in the sequence of model init,
+    GPU assignment and distributed training wrappers.
     """

-    def __init__(self, model, decay=0.9999, tau=2000, updates=0):
+    def __init__(self, model, decay=0.9999, updates=0):
         # Create EMA
-        self.ema = deepcopy(de_parallel(model)).eval()  # FP32 EMA
+        self.ema = deepcopy(model.module if is_parallel(model) else model).eval()  # FP32 EMA
+        # if next(model.parameters()).device.type != 'cpu':
+        #     self.ema.half()  # FP16 EMA
         self.updates = updates  # number of EMA updates
-        self.decay = lambda x: decay * (1 - math.exp(-x / tau))  # decay exponential ramp (to help early epochs)
+        self.decay = lambda x: decay * (1 - math.exp(-x / 2000))  # decay exponential ramp (to help early epochs)
         for p in self.ema.parameters():
             p.requires_grad_(False)

     def update(self, model):
         # Update EMA parameters
+        with torch.no_grad():
             self.updates += 1
             d = self.decay(self.updates)

-        msd = de_parallel(model).state_dict()  # model state_dict
+            msd = model.module.state_dict() if is_parallel(model) else model.state_dict()  # model state_dict
             for k, v in self.ema.state_dict().items():
-                if v.dtype.is_floating_point:  # true for FP16 and FP32
+                if v.dtype.is_floating_point:
                     v *= d
                     v += (1 - d) * msd[k].detach()
-        # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32'

     def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
         # Update EMA attributes
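
Both variants use the same exponential ramp, `d = decay * (1 - exp(-updates / tau))` with `tau = 2000` hard-coded in the added version: early in training `d` is small, so the EMA tracks the model quickly, then stiffens toward `decay`. A quick numeric check:

```python
import math

decay, tau = 0.9999, 2000
for updates in (1, 100, 1000, 10000):
    d = decay * (1 - math.exp(-updates / tau))
    print(f'{updates:6d} updates -> d = {d:.4f}')
# 1 -> 0.0005, 100 -> 0.0488, 1000 -> 0.3934, 10000 -> 0.9932
```
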

View File

@@ -1,30 +1,17 @@
 # YOLOv3 🚀 by Ultralytics, GPL-3.0 license
 """
-Validate a trained YOLOv3 detection model on a detection dataset
+Validate a trained model accuracy on a custom dataset

 Usage:
-    $ python val.py --weights yolov5s.pt --data coco128.yaml --img 640
-
-Usage - formats:
-    $ python val.py --weights yolov5s.pt                 # PyTorch
-                              yolov5s.torchscript        # TorchScript
-                              yolov5s.onnx               # ONNX Runtime or OpenCV DNN with --dnn
-                              yolov5s_openvino_model     # OpenVINO
-                              yolov5s.engine             # TensorRT
-                              yolov5s.mlmodel            # CoreML (macOS-only)
-                              yolov5s_saved_model        # TensorFlow SavedModel
-                              yolov5s.pb                 # TensorFlow GraphDef
-                              yolov5s.tflite             # TensorFlow Lite
-                              yolov5s_edgetpu.tflite     # TensorFlow Edge TPU
-                              yolov5s_paddle_model       # PaddlePaddle
+    $ python path/to/val.py --data coco128.yaml --weights yolov3.pt --img 640
 """
 import argparse
 import json
 import os
-import subprocess
 import sys
 from pathlib import Path
+from threading import Thread

 import numpy as np
 import torch
@@ -38,13 +25,13 @@ ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

 from models.common import DetectMultiBackend
 from utils.callbacks import Callbacks
-from utils.dataloaders import create_dataloader
-from utils.general import (LOGGER, TQDM_BAR_FORMAT, Profile, check_dataset, check_img_size, check_requirements,
-                           check_yaml, coco80_to_coco91_class, colorstr, increment_path, non_max_suppression,
-                           print_args, scale_boxes, xywh2xyxy, xyxy2xywh)
-from utils.metrics import ConfusionMatrix, ap_per_class, box_iou
+from utils.datasets import create_dataloader
+from utils.general import (LOGGER, NCOLS, box_iou, check_dataset, check_img_size, check_requirements, check_yaml,
+                           coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, print_args,
+                           scale_coords, xywh2xyxy, xyxy2xywh)
+from utils.metrics import ConfusionMatrix, ap_per_class
 from utils.plots import output_to_target, plot_images, plot_val_study
-from utils.torch_utils import select_device, smart_inference_mode
+from utils.torch_utils import select_device, time_sync

 def save_one_txt(predn, save_conf, shape, file):
@@ -63,8 +50,7 @@ def save_one_json(predn, jdict, path, class_map):
     box = xyxy2xywh(predn[:, :4])  # xywh
     box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner
     for p, b in zip(predn.tolist(), box.tolist()):
-        jdict.append({
-            'image_id': image_id,
+        jdict.append({'image_id': image_id,
                       'category_id': class_map[int(p[5])],
                       'bbox': [round(x, 3) for x in b],
                       'score': round(p[4], 5)})
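
COCO JSON expects `[x_min, y_min, width, height]`, which is why the center produced by `xyxy2xywh` is shifted by half the box size before serialization. A toy check (the `xyxy2xywh` below is a minimal stand-in for the helper imported from `utils.general`):

```python
import torch

def xyxy2xywh(x):  # minimal stand-in: corners -> center-x, center-y, width, height
    y = x.clone()
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2
    y[:, 2] = x[:, 2] - x[:, 0]
    y[:, 3] = x[:, 3] - x[:, 1]
    return y

box = xyxy2xywh(torch.tensor([[10., 20., 50., 80.]]))  # center (30, 50), size (40, 60)
box[:, :2] -= box[:, 2:] / 2                           # shift center to top-left corner
print(box)                                             # tensor([[10., 20., 40., 60.]])
```
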
@@ -72,41 +58,37 @@ def save_one_json(predn, jdict, path, class_map):

 def process_batch(detections, labels, iouv):
     """
-    Return correct prediction matrix
+    Return correct predictions matrix. Both sets of boxes are in (x1, y1, x2, y2) format.
     Arguments:
-        detections (array[N, 6]), x1, y1, x2, y2, conf, class
-        labels (array[M, 5]), class, x1, y1, x2, y2
+        detections (Array[N, 6]), x1, y1, x2, y2, conf, class
+        labels (Array[M, 5]), class, x1, y1, x2, y2
     Returns:
-        correct (array[N, 10]), for 10 IoU levels
+        correct (Array[N, 10]), for 10 IoU levels
     """
-    correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool)
+    correct = torch.zeros(detections.shape[0], iouv.shape[0], dtype=torch.bool, device=iouv.device)
     iou = box_iou(labels[:, 1:], detections[:, :4])
-    correct_class = labels[:, 0:1] == detections[:, 5]
-    for i in range(len(iouv)):
-        x = torch.where((iou >= iouv[i]) & correct_class)  # IoU > threshold and classes match
+    x = torch.where((iou >= iouv[0]) & (labels[:, 0:1] == detections[:, 5]))  # IoU above threshold and classes match
     if x[0].shape[0]:
-        matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()  # [label, detect, iou]
+        matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()  # [label, detection, iou]
         if x[0].shape[0] > 1:
             matches = matches[matches[:, 2].argsort()[::-1]]
             matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
             # matches = matches[matches[:, 2].argsort()[::-1]]
             matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
-        correct[matches[:, 1].astype(int), i] = True
-    return torch.tensor(correct, dtype=torch.bool, device=iouv.device)
+        matches = torch.Tensor(matches).to(iouv.device)
+        correct[matches[:, 1].long()] = matches[:, 2:3] >= iouv
+    return correct
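
On toy data the mechanics are easy to see: a detection counts as correct at every IoU threshold its overlap clears, provided the class matches. A sketch under those assumptions (the `box_iou` below is a minimal stand-in for `utils.metrics.box_iou`):

```python
import torch

def box_iou(a, b):  # minimal stand-in: pairwise IoU of (x1, y1, x2, y2) boxes
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])  # intersection top-left
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])  # intersection bottom-right
    inter = (rb - lt).clamp(0).prod(2)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

labels = torch.tensor([[0., 0., 0., 10., 10.]])             # class, x1, y1, x2, y2
detections = torch.tensor([[0., 0., 9., 9., 0.9, 0.],       # IoU 0.81 with the label
                           [20., 20., 30., 30., 0.8, 0.]])  # no overlap
iouv = torch.linspace(0.5, 0.95, 10)
print(box_iou(labels[:, 1:], detections[:, :4]))
# detection 0 is 'correct' at thresholds 0.5 through 0.8; detection 1 never matches
```
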
-@smart_inference_mode()
-def run(
-        data,
+@torch.no_grad()
+def run(data,
         weights=None,  # model.pt path(s)
         batch_size=32,  # batch size
         imgsz=640,  # inference size (pixels)
         conf_thres=0.001,  # confidence threshold
         iou_thres=0.6,  # NMS IoU threshold
-        max_det=300,  # maximum detections per image
         task='val',  # train, val, test, speed or study
         device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
-        workers=8,  # max dataloader workers (per RANK in DDP mode)
         single_cls=False,  # treat as single-class dataset
         augment=False,  # augmented inference
         verbose=False,  # verbose output
@@ -125,11 +107,12 @@ def run(
         plots=True,
         callbacks=Callbacks(),
         compute_loss=None,
         ):
     # Initialize/load model and set device
     training = model is not None
     if training:  # called by train.py
-        device, pt, jit, engine = next(model.parameters()).device, True, False, False  # get model device, PyTorch model
+        device, pt = next(model.parameters()).device, True  # get model device, PyTorch model

         half &= device.type != 'cpu'  # half precision only supported on CUDA
         model.half() if half else model.float()
     else:  # called directly
@@ -140,149 +123,130 @@ def run(
     (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

     # Load model
-    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
-    stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine
+    model = DetectMultiBackend(weights, device=device, dnn=dnn)
+    stride, pt = model.stride, model.pt
     imgsz = check_img_size(imgsz, s=stride)  # check image size
-    half = model.fp16  # FP16 supported on limited backends with CUDA
-    if engine:
-        batch_size = model.batch_size
+    half &= pt and device.type != 'cpu'  # half precision only supported by PyTorch on CUDA
+    if pt:
+        model.model.half() if half else model.model.float()
     else:
-        device = model.device
-        if not (pt or jit):
+        half = False
         batch_size = 1  # export.py models default to batch-size 1
-            LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models')
+        device = torch.device('cpu')
+        LOGGER.info(f'Forcing --batch-size 1 square inference shape(1,3,{imgsz},{imgsz}) for non-PyTorch backends')

     # Data
     data = check_dataset(data)  # check

     # Configure
     model.eval()
-    cuda = device.type != 'cpu'
-    is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt')  # COCO dataset
+    is_coco = isinstance(data.get('val'), str) and data['val'].endswith('coco/val2017.txt')  # COCO dataset
     nc = 1 if single_cls else int(data['nc'])  # number of classes
-    iouv = torch.linspace(0.5, 0.95, 10, device=device)  # iou vector for mAP@0.5:0.95
+    iouv = torch.linspace(0.5, 0.95, 10).to(device)  # iou vector for mAP@0.5:0.95
     niou = iouv.numel()
     # Dataloader
     if not training:
-        if pt and not single_cls:  # check --weights are trained on --data
-            ncm = model.model.nc
-            assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \
-                              f'classes). Pass correct combination of --weights and --data that are trained together.'
-        model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz))  # warmup
-        pad, rect = (0.0, False) if task == 'speed' else (0.5, pt)  # square inference for benchmarks
+        if pt and device.type != 'cpu':
+            model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.model.parameters())))  # warmup
+        pad = 0.0 if task == 'speed' else 0.5
         task = task if task in ('train', 'val', 'test') else 'val'  # path to train/val/test images
-        dataloader = create_dataloader(data[task],
-                                       imgsz,
-                                       batch_size,
-                                       stride,
-                                       single_cls,
-                                       pad=pad,
-                                       rect=rect,
-                                       workers=workers,
+        dataloader = create_dataloader(data[task], imgsz, batch_size, stride, single_cls, pad=pad, rect=pt,
                                        prefix=colorstr(f'{task}: '))[0]

     seen = 0
     confusion_matrix = ConfusionMatrix(nc=nc)
-    names = model.names if hasattr(model, 'names') else model.module.names  # get class names
-    if isinstance(names, (list, tuple)):  # old format
-        names = dict(enumerate(names))
+    names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)}
     class_map = coco80_to_coco91_class() if is_coco else list(range(1000))
-    s = ('%22s' + '%11s' * 6) % ('Class', 'Images', 'Instances', 'P', 'R', 'mAP50', 'mAP50-95')
-    tp, fp, p, r, f1, mp, mr, map50, ap50, map = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
-    dt = Profile(), Profile(), Profile()  # profiling times
+    s = ('%20s' + '%11s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
+    dt, p, r, f1, mp, mr, map50, map = [0.0, 0.0, 0.0], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
     loss = torch.zeros(3, device=device)
     jdict, stats, ap, ap_class = [], [], [], []
-    callbacks.run('on_val_start')
-    pbar = tqdm(dataloader, desc=s, bar_format=TQDM_BAR_FORMAT)  # progress bar
+    pbar = tqdm(dataloader, desc=s, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar
     for batch_i, (im, targets, paths, shapes) in enumerate(pbar):
-        callbacks.run('on_val_batch_start')
-        with dt[0]:
-            if cuda:
+        t1 = time_sync()
+        if pt:
             im = im.to(device, non_blocking=True)
             targets = targets.to(device)
         im = im.half() if half else im.float()  # uint8 to fp16/32
         im /= 255  # 0 - 255 to 0.0 - 1.0
         nb, _, height, width = im.shape  # batch size, channels, height, width
+        t2 = time_sync()
+        dt[0] += t2 - t1

         # Inference
-        with dt[1]:
-            preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None)
+        out, train_out = model(im) if training else model(im, augment=augment, val=True)  # inference, loss outputs
+        dt[1] += time_sync() - t2

         # Loss
         if compute_loss:
-            loss += compute_loss(train_out, targets)[1]  # box, obj, cls
+            loss += compute_loss([x.float() for x in train_out], targets)[1]  # box, obj, cls

         # NMS
-        targets[:, 2:] *= torch.tensor((width, height, width, height), device=device)  # to pixels
+        targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device)  # to pixels
         lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else []  # for autolabelling
-        with dt[2]:
-            preds = non_max_suppression(preds,
-                                        conf_thres,
-                                        iou_thres,
-                                        labels=lb,
-                                        multi_label=True,
-                                        agnostic=single_cls,
-                                        max_det=max_det)
+        t3 = time_sync()
+        out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls)
+        dt[2] += time_sync() - t3
         # Metrics
-        for si, pred in enumerate(preds):
+        for si, pred in enumerate(out):
             labels = targets[targets[:, 0] == si, 1:]
-            nl, npr = labels.shape[0], pred.shape[0]  # number of labels, predictions
+            nl = len(labels)
+            tcls = labels[:, 0].tolist() if nl else []  # target class
             path, shape = Path(paths[si]), shapes[si][0]
-            correct = torch.zeros(npr, niou, dtype=torch.bool, device=device)  # init
             seen += 1

-            if npr == 0:
+            if len(pred) == 0:
                 if nl:
-                    stats.append((correct, *torch.zeros((2, 0), device=device), labels[:, 0]))
-                    if plots:
-                        confusion_matrix.process_batch(detections=None, labels=labels[:, 0])
+                    stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
                 continue

             # Predictions
             if single_cls:
                 pred[:, 5] = 0
             predn = pred.clone()
-            scale_boxes(im[si].shape[1:], predn[:, :4], shape, shapes[si][1])  # native-space pred
+            scale_coords(im[si].shape[1:], predn[:, :4], shape, shapes[si][1])  # native-space pred

             # Evaluate
             if nl:
                 tbox = xywh2xyxy(labels[:, 1:5])  # target boxes
-                scale_boxes(im[si].shape[1:], tbox, shape, shapes[si][1])  # native-space labels
+                scale_coords(im[si].shape[1:], tbox, shape, shapes[si][1])  # native-space labels
                 labelsn = torch.cat((labels[:, 0:1], tbox), 1)  # native-space labels
                 correct = process_batch(predn, labelsn, iouv)
                 if plots:
                     confusion_matrix.process_batch(predn, labelsn)
-            stats.append((correct, pred[:, 4], pred[:, 5], labels[:, 0]))  # (correct, conf, pcls, tcls)
+            else:
+                correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool)
+            stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))  # (correct, conf, pcls, tcls)

             # Save/log
             if save_txt:
-                save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt')
+                save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / (path.stem + '.txt'))
             if save_json:
                 save_one_json(predn, jdict, path, class_map)  # append to COCO-JSON dictionary
             callbacks.run('on_val_image_end', pred, predn, path, names, im[si])

         # Plot images
         if plots and batch_i < 3:
-            plot_images(im, targets, paths, save_dir / f'val_batch{batch_i}_labels.jpg', names)  # labels
-            plot_images(im, output_to_target(preds), paths, save_dir / f'val_batch{batch_i}_pred.jpg', names)  # pred
-
-        callbacks.run('on_val_batch_end', batch_i, im, targets, paths, shapes, preds)
+            f = save_dir / f'val_batch{batch_i}_labels.jpg'  # labels
+            Thread(target=plot_images, args=(im, targets, paths, f, names), daemon=True).start()
+            f = save_dir / f'val_batch{batch_i}_pred.jpg'  # predictions
+            Thread(target=plot_images, args=(im, output_to_target(out), paths, f, names), daemon=True).start()
     # Compute metrics
-    stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)]  # to numpy
+    stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
     if len(stats) and stats[0].any():
-        tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
+        p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
         ap50, ap = ap[:, 0], ap.mean(1)  # AP@0.5, AP@0.5:0.95
         mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
-    nt = np.bincount(stats[3].astype(int), minlength=nc)  # number of targets per class
+        nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
+    else:
+        nt = torch.zeros(1)

     # Print results
-    pf = '%22s' + '%11i' * 2 + '%11.3g' * 4  # print format
+    pf = '%20s' + '%11i' * 2 + '%11.3g' * 4  # print format
     LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
-    if nt.sum() == 0:
-        LOGGER.warning(f'WARNING ⚠️ no labels found in {task} set, can not compute metrics without labels')

     # Print results per class
     if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):
@@ -290,7 +254,7 @@ def run(
             LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))

     # Print speeds
-    t = tuple(x.t / seen * 1E3 for x in dt)  # speeds per image
+    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image
     if not training:
         shape = (batch_size, 3, imgsz, imgsz)
         LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t)
@@ -298,19 +262,19 @@ def run(
     # Plots
     if plots:
         confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
-        callbacks.run('on_val_end', nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix)
+        callbacks.run('on_val_end')

     # Save JSON
     if save_json and len(jdict):
         w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else ''  # weights
-        anno_json = str(Path('../datasets/coco/annotations/instances_val2017.json'))  # annotations
-        pred_json = str(save_dir / f'{w}_predictions.json')  # predictions
+        anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json')  # annotations json
+        pred_json = str(save_dir / f"{w}_predictions.json")  # predictions json
         LOGGER.info(f'\nEvaluating pycocotools mAP... saving {pred_json}...')
         with open(pred_json, 'w') as f:
             json.dump(jdict, f)

         try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
-            check_requirements('pycocotools>=2.0.6')
+            check_requirements(['pycocotools'])
             from pycocotools.coco import COCO
             from pycocotools.cocoeval import COCOeval

@@ -318,7 +282,7 @@ def run(
             pred = anno.loadRes(pred_json)  # init predictions api
             eval = COCOeval(anno, pred, 'bbox')
             if is_coco:
-                eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files]  # image IDs to evaluate
+                eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]  # image IDs to evaluate
             eval.evaluate()
             eval.accumulate()
             eval.summarize()
@@ -340,15 +304,13 @@ def run(

 def parse_opt():
     parser = argparse.ArgumentParser()
     parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
-    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3-tiny.pt', help='model path(s)')
+    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3.pt', help='model.pt path(s)')
     parser.add_argument('--batch-size', type=int, default=32, help='batch size')
     parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
     parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')
     parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')
-    parser.add_argument('--max-det', type=int, default=300, help='maximum detections per image')
     parser.add_argument('--task', default='val', help='train, val, test, speed or study')
     parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
-    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
     parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
     parser.add_argument('--augment', action='store_true', help='augmented inference')
     parser.add_argument('--verbose', action='store_true', help='report mAP by class')

@@ -365,31 +327,29 @@ def parse_opt():
     opt.data = check_yaml(opt.data)  # check YAML
     opt.save_json |= opt.data.endswith('coco.yaml')
     opt.save_txt |= opt.save_hybrid
-    print_args(vars(opt))
+    print_args(FILE.stem, opt)
     return opt
 def main(opt):
-    check_requirements(exclude=('tensorboard', 'thop'))
+    check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop'))

     if opt.task in ('train', 'val', 'test'):  # run normally
         if opt.conf_thres > 0.001:  # https://github.com/ultralytics/yolov5/issues/1466
-            LOGGER.info(f'WARNING ⚠️ confidence threshold {opt.conf_thres} > 0.001 produces invalid results')
-        if opt.save_hybrid:
-            LOGGER.info('WARNING ⚠️ --save-hybrid will return high mAP from hybrid labels, not from predictions alone')
+            LOGGER.info(f'WARNING: confidence threshold {opt.conf_thres} >> 0.001 will produce invalid mAP values.')
         run(**vars(opt))

     else:
         weights = opt.weights if isinstance(opt.weights, list) else [opt.weights]
-        opt.half = torch.cuda.is_available() and opt.device != 'cpu'  # FP16 for fastest results
+        opt.half = True  # FP16 for fastest results
         if opt.task == 'speed':  # speed benchmarks
-            # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt...
+            # python val.py --task speed --data coco.yaml --batch 1 --weights yolov3.pt yolov3-spp.pt...
             opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False
             for opt.weights in weights:
                 run(**vars(opt), plots=False)

         elif opt.task == 'study':  # speed vs mAP benchmarks
-            # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt...
+            # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov3.pt yolov3-spp.pt...
             for opt.weights in weights:
                 f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt'  # filename to save to
                 x, y = list(range(256, 1536 + 128, 128)), []  # x axis (image sizes), y axis

@@ -398,12 +358,10 @@ def main(opt):
                     r, _, t = run(**vars(opt), plots=False)
                     y.append(r + t)  # results and times
                 np.savetxt(f, y, fmt='%10.4g')  # save
-            subprocess.run('zip -r study.zip study_*.txt'.split())
+            os.system('zip -r study.zip study_*.txt')
             plot_val_study(x=x)  # plot
-        else:
-            raise NotImplementedError(f'--task {opt.task} not in ("train", "val", "test", "speed", "study")')


-if __name__ == '__main__':
+if __name__ == "__main__":
     opt = parse_opt()
     main(opt)