Compare commits

2673 Commits

Author SHA1 Message Date
dbe80aca78 YOLO with runs.zip file; exp 14 is the best weight 2023-02-21 21:43:51 +05:30
34abb2b0dd Add 'yolov3/' from commit '76d848608107780ef92eae7fcbb151b91b6ee368'
git-subtree-dir: yolov3
git-subtree-mainline: acb43f001dc87d510517c6975bc993cc6008d7f2
git-subtree-split: 76d848608107780ef92eae7fcbb151b91b6ee368
2023-02-21 21:38:24 +05:30
Glenn Jocher
76d8486081
Update README.md (#2021) 2023-02-20 13:52:59 +01:00
Glenn Jocher
527ce02916
Update .pre-commit-config.yaml (#2019)
* Update .pre-commit-config.yaml

* Update __init__.py

* Update .pre-commit-config.yaml

* Precommit updates
2023-02-17 21:52:12 +01:00
Glenn Jocher
a0a4012739
Update downloads.py (#2018)
* Update downloads.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-02-17 21:27:34 +01:00
Glenn Jocher
21a56e51e5
Update README.md (#2016)
* Update README.md

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-02-15 20:34:42 +04:00
Glenn Jocher
2fe0555522
Update README (#2015) 2023-02-15 20:33:12 +04:00
Glenn Jocher
6c8bc40309
Update README (#2013)
* Update README

* Update README

* Update README

* Update README.md
2023-02-13 20:28:02 +04:00
Glenn Jocher
50f78bfd08
README link fixes (#2012)
Link fixes
2023-02-13 01:46:19 +04:00
Glenn Jocher
f50bcfcc3e
YOLOv3 general updates, improvements and fixes (#2011)
* YOLOv3 updates

* Add missing files

* Reformat

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Reformat

* Reformat

* Reformat

* Reformat

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-02-11 15:20:10 +04:00
Glenn Jocher
1a2d5c6a5a
Update Dockerfile (#2010)
* Update Dockerfile

* Update Dockerfile
2023-02-11 02:15:51 +04:00
Glenn Jocher
6013704cf5 Updates 2023-02-11 02:13:22 +04:00
Glenn Jocher
e7b8da6493
Update Dockerfile (#2009) 2023-02-11 02:08:53 +04:00
Glenn Jocher
a57a6df95b
Update greetings.yml (#2007) 2023-02-11 02:02:13 +04:00
Glenn Jocher
05209583a0
Update README.md (#2006) 2023-02-11 02:01:57 +04:00
dependabot[bot]
ae460cf4ff
Bump cirrus-actions/rebase from 1.7 to 1.8 (#1999)
Bumps [cirrus-actions/rebase](https://github.com/cirrus-actions/rebase) from 1.7 to 1.8.
- [Release notes](https://github.com/cirrus-actions/rebase/releases)
- [Commits](https://github.com/cirrus-actions/rebase/compare/1.7...1.8)

---
updated-dependencies:
- dependency-name: cirrus-actions/rebase
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-02 21:10:43 +01:00
dependabot[bot]
91b040619f
Bump actions/stale from 6 to 7 (#2000)
* Bump actions/stale from 6 to 7

Bumps [actions/stale](https://github.com/actions/stale) from 6 to 7.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-12-27 13:51:06 +01:00
Glenn Jocher
2813de7cc3 Created using Colaboratory 2022-12-19 10:57:59 +01:00
s-mohaghegh97
a441ab1593
fix half bug. (#1989)
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2022-11-27 17:14:27 -08:00
s-mohaghegh97
d5790b0c66
fix tflite converter bug for tiny models. (#1990)
* fix tflite converter bug for tiny models.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-11-27 17:13:50 -08:00
Glenn Jocher
dd838e2586
Update ci-testing.yml (#1981) 2022-10-16 11:55:23 +02:00
Glenn Jocher
9219f135d5
Update requirements.txt (#1980) 2022-10-16 11:50:15 +02:00
Glenn Jocher
88a803126b
Update ci-testing.yml 2022-10-16 11:45:19 +02:00
dependabot[bot]
b0b071dda8
Bump actions/stale from 5 to 6 (#1975)
Bumps [actions/stale](https://github.com/actions/stale) from 5 to 6.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-26 12:38:40 +02:00
Glenn Jocher
3f855edca5
Update requirements.txt (#1973) 2022-09-24 13:15:34 +02:00
Glenn Jocher
0bbd0558ed
Update ci-testing.yml remove macos-latest (#1969)
Update ci-testing.yml
2022-09-03 03:22:09 +02:00
pre-commit-ci[bot]
92c3bd7a4e
[pre-commit.ci] pre-commit suggestions (#1961)
updates:
- [github.com/pre-commit/pre-commit-hooks: v4.1.0 → v4.3.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.1.0...v4.3.0)
- [github.com/asottile/pyupgrade: v2.31.1 → v2.34.0](https://github.com/asottile/pyupgrade/compare/v2.31.1...v2.34.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-07-04 22:09:35 +02:00
alex-fdias
b3244d05cd
Fix downloading file by URL (Windows) (#1958)
as_posix() needed so that backslashes are output as forward slashes in the URL string (Windows)

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2022-06-29 18:18:08 +02:00
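For illustration, the as_posix() behavior this fix relies on — a minimal sketch with a hypothetical path:

```
from pathlib import Path

p = Path("weights") / "yolov3.pt"  # hypothetical path for illustration
print(str(p))        # 'weights\\yolov3.pt' on Windows -- backslashes break URLs
print(p.as_posix())  # 'weights/yolov3.pt' -- forward slashes, safe in a URL
```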
Glenn Jocher
7ec9614961
Update loss.py (#1959)
* Update loss.py

* Update metrics.py

* Update loss.py
2022-06-29 18:07:06 +02:00
dependabot[bot]
0aa65efcc0
Bump actions/setup-python from 3 to 4 (#1956)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 3 to 4.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-13 11:41:14 +02:00
dependabot[bot]
3508a982f5
Bump cirrus-actions/rebase from 1.6 to 1.7 (#1944)
Bumps [cirrus-actions/rebase](https://github.com/cirrus-actions/rebase) from 1.6 to 1.7.
- [Release notes](https://github.com/cirrus-actions/rebase/releases)
- [Commits](https://github.com/cirrus-actions/rebase/compare/1.6...1.7)

---
updated-dependencies:
- dependency-name: cirrus-actions/rebase
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-16 10:49:39 +02:00
dependabot[bot]
d58ba5e7a7
Bump github/codeql-action from 1 to 2 (#1939)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 1 to 2.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-01 22:25:22 -07:00
dependabot[bot]
f212505c93
Bump cirrus-actions/rebase from 1.5 to 1.6 (#1929)
Bumps [cirrus-actions/rebase](https://github.com/cirrus-actions/rebase) from 1.5 to 1.6.
- [Release notes](https://github.com/cirrus-actions/rebase/releases)
- [Commits](https://github.com/cirrus-actions/rebase/compare/1.5...1.6)

---
updated-dependencies:
- dependency-name: cirrus-actions/rebase
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-19 15:07:16 -07:00
Sahil Chachra
ae37b2daa7
Fix ONNX inference code (#1928) 2022-04-11 12:40:56 +02:00
dependabot[bot]
c2c113e5eb
Bump actions/stale from 4 to 5 (#1927)
Bumps [actions/stale](https://github.com/actions/stale) from 4 to 5.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-11 10:25:41 +02:00
pre-commit-ci[bot]
8a372c340c
[pre-commit.ci] pre-commit suggestions (#1924)
updates:
- [github.com/asottile/pyupgrade: v2.31.0 → v2.31.1](https://github.com/asottile/pyupgrade/compare/v2.31.0...v2.31.1)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-04-04 22:29:34 +02:00
dependabot[bot]
9f9e650bf8
Bump actions/cache from 2.1.7 to 3 (#1920)
Bumps [actions/cache](https://github.com/actions/cache) from 2.1.7 to 3.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.7...v3)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-28 10:37:53 +02:00
Glenn Jocher
7093a2b543
PyTorch 1.11.0 compatibility updates (#1914)
* PyTorch 1.11.0 compatibility updates

Resolves `AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'` first raised in https://github.com/ultralytics/yolov5/issues/5499 and observed in all CI runs on just-released PyTorch 1.11.0.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update experimental.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-03-10 12:59:47 +01:00
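The workaround referenced above (applied in experimental.py) amounts to resetting the missing attribute on Upsample modules loaded from older checkpoints — a minimal sketch:

```
import torch.nn as nn

def patch_upsample(model):
    # PyTorch 1.11 reads Upsample.recompute_scale_factor in forward();
    # modules pickled by older versions lack the attribute, so set it to None.
    for m in model.modules():
        if isinstance(m, nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
            m.recompute_scale_factor = None
    return model
```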
dependabot[bot]
b6f6b5b965
Bump actions/checkout from 2 to 3 (#1912)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-08 11:43:15 +01:00
dependabot[bot]
e6507907f8
Bump actions/setup-python from 2 to 3 (#1911)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 2 to 3.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-08 11:42:59 +01:00
Glenn Jocher
0519223a62
Fix yolov3.yaml remove extra bracket (#1902)
* Fix yolov3.yaml remove extra bracket

Resolves https://github.com/ultralytics/yolov3/issues/1887#issuecomment-1041135181

* Update yolov3.yaml
2022-02-16 10:14:23 +01:00
pre-commit-ci[bot]
0f80f2f905
[pre-commit.ci] pre-commit suggestions (#1883)
updates:
- [github.com/pre-commit/pre-commit-hooks: v4.0.1 → v4.1.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.0.1...v4.1.0)
- [github.com/asottile/pyupgrade: v2.23.1 → v2.31.0](https://github.com/asottile/pyupgrade/compare/v2.23.1...v2.31.0)
- [github.com/PyCQA/isort: 5.9.3 → 5.10.1](https://github.com/PyCQA/isort/compare/5.9.3...5.10.1)
- [github.com/PyCQA/flake8: 3.9.2 → 4.0.1](https://github.com/PyCQA/flake8/compare/3.9.2...4.0.1)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2022-01-03 10:33:15 -08:00
Glenn Jocher
9d0e1cf298
Update requirements.txt (#1869)
* Update requirements.txt

* Add wandb.errors.UsageError

* bug fix
2021-12-01 15:37:56 +01:00
dependabot[bot]
c35400cffd
Bump actions/cache from 2.1.6 to 2.1.7 (#1867)
Bumps [actions/cache](https://github.com/actions/cache) from 2.1.6 to 2.1.7.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.6...v2.1.7)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-29 12:05:54 +01:00
Glenn Jocher
b870de528e
Update tutorial.ipynb (#1859) 2021-11-14 22:48:38 +01:00
Glenn Jocher
9577bb1d4a Created using Colaboratory 2021-11-14 22:43:14 +01:00
Glenn Jocher
93a2bcc760 Created using Colaboratory 2021-11-14 22:33:34 +01:00
Glenn Jocher
7eb23e3c1d
YOLOv5 v6.0 compatibility update (#1857)
* Initial commit

* Initial commit

* Cleanup

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix precommit errors

* Remove TF builds from CI

* export last.pt

* Created using Colaboratory

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2021-11-14 22:22:59 +01:00
dependabot[bot]
1be31704c9
Bump pip from 18.1 to 19.2 in /utils/google_app_engine (#1787)
Bumps [pip](https://github.com/pypa/pip) from 18.1 to 19.2.
- [Release notes](https://github.com/pypa/pip/releases)
- [Changelog](https://github.com/pypa/pip/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/pip/compare/18.1...19.2)

---
updated-dependencies:
- dependency-name: pip
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-09 21:27:16 +02:00
Glenn Jocher
66e54d3d2c
Update stale.yml (#1784) 2021-06-06 18:50:08 +02:00
Glenn Jocher
ab7ff9dd4c
Revert "cv2.imread(img, -1) for IMREAD_UNCHANGED" (#1778) 2021-05-31 10:40:13 +02:00
Glenn Jocher
044eb9142b
Update README.md (#1777) 2021-05-30 19:40:48 +02:00
Glenn Jocher
4d0c2e6eee YOLOv5 v5.0 release compatibility update for YOLOv3 2021-05-30 18:55:56 +02:00
Peretz Cohen
47ac6833ca
Add Open in Kaggle badge (#1773)
* Update tutorial.ipynb

add Open in Kaggle badge

* Update tutorial.ipynb

Open badge in same window

* add space between badges

* add space 2

* remove align left

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2021-05-27 16:50:12 +02:00
Glenn Jocher
26cb451811
Update README.md (#1760) 2021-05-12 19:50:15 +02:00
Glenn Jocher
69eecec7be
Update https://ultralytics.com/images/zidane.jpg (#1759) 2021-05-12 18:40:32 +02:00
Glenn Jocher
11c554c31e Created using Colaboratory 2021-05-12 18:38:37 +02:00
Glenn Jocher
af7b923bfa Created using Colaboratory 2021-05-12 14:28:11 +02:00
Glenn Jocher
331df67aac
Create FUNDING.yaml (#1743)
This should set up a "Sponsor" button on the repository to let users and organizations support the development of YOLOv5 with financial contributions!

I feel like 10 sponsors could really help fund Ultralytics' caffeine addiction and get YOLOv5 🚀 developed and deployed faster than ever! 😃
2021-04-19 12:37:10 +02:00
Glenn Jocher
b9849003c8 Created using Colaboratory 2021-04-12 23:38:05 +02:00
Glenn Jocher
be29298b5c Created using Colaboratory 2021-04-12 18:18:05 +02:00
Glenn Jocher
8eb4cde090
YOLOv5 v5.0 release compatibility update for YOLOv3 (#1737)
* YOLOv5 v5.0 release compatibility update

* Update README

* Update README

* Conv act LeakyReLU(0.1)

* update plots_study()

* update speeds
2021-04-12 18:00:47 +02:00
Glenn Jocher
5d8f03020c
Update README.md 2021-04-06 13:26:56 +02:00
Glenn Jocher
c1f8dd94b7
Update google_utils.py (#1690) 2021-02-22 17:57:47 -08:00
Glenn Jocher
d3533715ba
Update README.md 2021-02-16 16:11:36 -08:00
Glenn Jocher
daa4600fd3
Update google_utils.py 2021-02-16 11:09:54 -08:00
huntr.dev | the place to protect open source
cf5db95953
Security Fix for Arbitrary Code Execution - huntr.dev (#1672)
* fixed arbitary code execution

* Update train.py

* Full to Safe

Co-authored-by: Asjid Kalam <asjid.kalam@gmail.com>
Co-authored-by: Jamie Slome <jamie@418sec.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2021-01-25 09:39:34 -08:00
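"Full to Safe" refers to PyYAML loaders: parsing files with yaml.safe_load() instead of yaml.load(..., Loader=yaml.FullLoader) closes the object-construction hole behind the reported code execution — a minimal sketch, with an illustrative file path:

```
import yaml

with open("data/hyp.scratch.yaml") as f:  # illustrative path
    # safe_load only constructs plain Python types (dict, list, str, ...),
    # so a malicious YAML file cannot trigger arbitrary code execution.
    hyp = yaml.safe_load(f)
```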
Glenn Jocher
9f4e853c60
GitHub API rate limit fallback (#1661) 2021-01-13 19:55:34 -08:00
Glenn Jocher
2271a2ebd8
check_git_status() bug fix (#1660) 2021-01-13 10:26:09 -08:00
Yonghye Kwon
bc69220782
remove unused variable in def compute_loss function (#1659)
remove unused variable in def compute_loss function
2021-01-13 09:13:13 -08:00
Glenn Jocher
166a4d590f
v9.1 release (#1658) 2021-01-12 23:05:32 -08:00
Glenn Jocher
0bc1db58d8
GitHub API rate limit fix (#1653) 2021-01-10 12:02:55 -08:00
Glenn Jocher
162773d968
Update torch_utils.py (#1652) 2021-01-09 21:34:12 -08:00
Glenn Jocher
d9b29951c1 Update google_utils.py 2021-01-08 10:57:40 -08:00
Glenn Jocher
d88829cebe
actions/stale@v3 (#1647) 2021-01-07 11:25:15 -08:00
Glenn Jocher
4f2341c0ad
W&B ID reset on training completion (#1852) 2021-01-06 16:39:03 -08:00
Glenn Jocher
84ad6080ae
Update Torch CUDA Synchronize (#1637) 2021-01-03 14:37:22 -08:00
Glenn Jocher
7d9535f80e
Update yolo.py nn.zeroPad2d() (#1638) 2021-01-03 11:42:10 -08:00
Glenn Jocher
865e046e11
Update yolov3-tiny.yaml 2021-01-02 13:04:12 -08:00
Glenn Jocher
6ba36265fb
FROM nvcr.io/nvidia/pytorch:20.12-py3 (#1620) 2020-12-22 17:48:40 -08:00
Glenn Jocher
1c39505d4e
leaf Variable inplace bug fix (#1619) 2020-12-22 17:29:54 -08:00
Glenn Jocher
7bff2d369a
Update Dependabot config file (#1615) 2020-12-22 17:25:40 -08:00
dependabot-preview[bot]
a21595e2e2
Create Dependabot config file (#1615)
* Create Dependabot config file

* Update greetings.yml

* Update greetings.yml

* Update dependabot.yml

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-12-19 19:37:04 -08:00
Glenn Jocher
1afde520d1
Simplified PyTorch Hub loading of custom models (#1610)
* Simplified PyTorch Hub loading of custom models

* Update hubconf.py
2020-12-19 19:01:15 -08:00
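The simplified custom-model loading works roughly as follows in the repo's present-day hubconf (checkpoint path hypothetical):

```
import torch

# Load a custom checkpoint through PyTorch Hub; 'best.pt' is hypothetical.
model = torch.hub.load("ultralytics/yolov3", "custom", path="best.pt")
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```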
Glenn Jocher
883a5aff5a
FReLU bias=False bug fix (#1607) 2020-12-10 13:07:10 -08:00
Glenn Jocher
61fb2dbd20
Simplify autoshape() post-process (#1603)
* Simplify autoshape() post-process

* cleanup
2020-12-09 07:43:58 -08:00
Glenn Jocher
6b1fe3e9dd
Normalized mosaic plotting bug fix (#1600) 2020-12-08 18:46:33 -08:00
Glenn Jocher
8d236eea3c
Hybrid auto-labelling support (#1599)
* Introduce hybrid auto-labelling support

* cleanup
2020-12-08 18:16:12 -08:00
Glenn Jocher
7e846c7d3c
Reinstate PR curve sentinel values (#1598) 2020-12-08 17:40:19 -08:00
Glenn Jocher
ce9feb42b4
Create codeql-analysis.yml (#1597)
* Create codeql-analysis.yml

* Update ci-testing.yml

* Update codeql-analysis.yml

* Update ci-testing.yml
2020-12-08 17:03:22 -08:00
Glenn Jocher
e285034b4b
Hub device mismatch bug fix (#1594) 2020-12-06 18:01:13 +01:00
Glenn Jocher
4a07280884
Pycocotools best.pt after COCO train (#1593) 2020-12-06 14:58:50 +01:00
Glenn Jocher
adc49abc71
Implement default class names (#1592) 2020-12-06 11:55:27 +01:00
Glenn Jocher
8f95dcf253
Update download_weights.sh with usage example (#1591) 2020-12-06 10:08:15 +01:00
Glenn Jocher
dbcb192f2d
Increase FLOPS robustness (#1589) 2020-12-05 11:41:17 +01:00
Glenn Jocher
d1ad63206b
Add bias to Classify() (#1588) 2020-12-04 15:08:02 +01:00
Glenn Jocher
75431d89ee
Update matplotlib.use('Agg') tight (#1584) 2020-12-02 15:53:23 +01:00
Glenn Jocher
eac1ba63d9
Update matplotlib svg backend (#1583) 2020-12-02 14:05:29 +01:00
SergioSanchezMontesUAM
5ead90a9d6
Update .gitignore datasets dir (#1582) 2020-12-02 13:01:45 +01:00
Glenn Jocher
5b46d49719
plot_images() scale bug fix (#1580)
From https://github.com/ultralytics/yolov5/pull/1566
2020-12-01 14:19:06 +01:00
Glenn Jocher
4f890d13ee
Daemon thread plots (#1578) 2020-11-30 16:47:28 +01:00
Glenn Jocher
e6d5408f5a
FROM nvcr.io/nvidia/pytorch:20.10-py3 2020-11-29 17:49:19 +01:00
Glenn Jocher
430890ead8
Update README.md 2020-11-29 14:21:32 +01:00
Glenn Jocher
fed9451454
f.read().strip() (#1577) 2020-11-29 12:01:42 +01:00
Glenn Jocher
bc5c898c93
Update labels_to_image_weights() (#1576) 2020-11-28 12:25:57 +01:00
Glenn Jocher
f28f862245
Ignore W&B logging dir wandb/ (#1571) 2020-11-27 01:32:55 +01:00
Glenn Jocher
152f50e8f9
Remove ignore for git files (#1099) 2020-11-27 01:30:37 +01:00
Glenn Jocher
f78f991a74 FROM nvcr.io/nvidia/pytorch:20.11-py3 2020-11-27 01:27:25 +01:00
Glenn Jocher
d312d25747
Ignore W&B logging dir wandb/ (#1571) 2020-11-26 22:22:03 +01:00
Glenn Jocher
76807fae71
YOLOv5 Forward Compatibility Update (#1569)
* YOLOv5 forward compatibility update

* add data dir

* ci test yolov3

* update build_targets()

* update build_targets()

* update build_targets()

* update yolov3-spp.yaml

* add yolov3-tiny.yaml

* add yolov3-tiny.yaml

* Update yolov3-tiny.yaml

* thop bug fix

* Detection() device bug fix

* Use torchvision.ops.nms()

* Remove redundant download mirror

* CI tests with yolov3-tiny

* Update README.md

* Synch train and test iou_thresh

* update requirements.txt

* Cat apriori autolabels

* Confusion matrix

* Autosplit

* Autosplit

* Update README.md

* AP no plot

* Update caching

* Update caching

* Caching bug fix

* --image-weights bug fix

* datasets bug fix

* mosaic plots bug fix

* plot_study

* boxes.max()

* boxes.max()

* boxes.max()

* boxes.max()

* boxes.max()

* boxes.max()

* update

* Update README

* Update README

* Update README.md

* Update README.md

* results png

* Update README

* Targets scaling bug fix

* update plot_study

* update plot_study

* update plot_study

* update plot_study

* Targets scaling bug fix

* Finish Readme.md

* Finish Readme.md

* Finish Readme.md

* Update README.md

* Created using Colaboratory
2020-11-26 20:24:00 +01:00
Glenn Jocher
98068efebc
Update greetings.yml 2020-11-13 13:28:20 +01:00
Glenn Jocher
46cd0d8cc4
Grid indices overflow bug fix 2 (#1551) 2020-11-09 20:59:57 +01:00
Glenn Jocher
95460570d9
Grid indices overflow bug fix (#1551) 2020-11-06 19:19:58 +01:00
Glenn Jocher
ac601cf681
Grid indices overflow bug fix (#1551) 2020-11-06 13:38:13 +01:00
Shiwei Song
cf652962fd
fix padding for rectangular inference (#1524)
Co-authored-by: swsong <swsong@stratosphere.mobi>
2020-10-19 12:17:14 +02:00
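For context, rectangular inference pads each resized image up to the next multiple of the model stride so the network's downsampling divides evenly — a sketch of the idea, not the exact patch:

```
import numpy as np

def stride_pad(h, w, stride=32):
    # Minimal padding that lifts (h, w) to multiples of the network stride,
    # split evenly between the two sides of each dimension.
    dh = (int(np.ceil(h / stride)) * stride - h) / 2
    dw = (int(np.ceil(w / stride)) * stride - w) / 2
    return dh, dw

print(stride_pad(375, 640))  # (4.5, 0.0) -> pad 375x640 up to 384x640
```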
Glenn Jocher
54722d00bb
Update stale.yml 2020-10-08 11:50:52 +02:00
e96031413
4d49957f5a
Update requirements.txt (#1481)
* Update requirements.txt

I found that to calculate FLOPS in this project we must install thop, but there's no thop package inside requirements.txt.

https://github.com/ultralytics/yolov3/blob/master/utils/torch_utils.py#L108

* Update requirements.txt

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-09-10 12:00:53 -07:00
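For reference, the thop usage this refers to, with a stand-in model — note thop counts multiply-accumulates, which this codebase reports as FLOPS:

```
import torch
from thop import profile

model = torch.nn.Conv2d(3, 16, 3)           # stand-in model for illustration
x = torch.randn(1, 3, 64, 64)
macs, params = profile(model, inputs=(x,))  # MACs and parameter count
print(f"{macs / 1e9:.3f} GMACs, {params / 1e6:.3f}M params")
```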
Glenn Jocher
bf34ae007f Global code reformat and optimize imports 2020-08-22 17:49:27 -07:00
Glenn Jocher
64ff05c499
Update greetings.yml 2020-08-13 14:56:52 -07:00
Glenn Jocher
0ad44bc7e8
Update greetings.yml 2020-08-13 14:49:54 -07:00
Glenn Jocher
3e7e1e16c5
Update greetings.yml 2020-08-13 14:43:37 -07:00
Glenn Jocher
3d09ca366c
reverse plotting low to high confidence (#1448) 2020-08-12 13:51:10 -07:00
Glenn Jocher
f14c143926
Update greetings.yml 2020-08-11 00:56:04 -07:00
Glenn Jocher
2a74d1fd7d update requirements.txt 2020-08-08 12:57:41 -07:00
Glenn Jocher
af22cd7be3 add .gitattributes file 2020-08-08 11:12:19 -07:00
Glenn Jocher
2ba4ee3242 update README.md 2020-08-03 19:54:27 -07:00
Glenn Jocher
061806bb1f update README.md 2020-08-03 19:54:03 -07:00
Glenn Jocher
7163b5e89f update README.md 2020-08-03 19:53:06 -07:00
Glenn Jocher
ee82e3db5d update requirements.txt (#1431) 2020-08-03 19:37:38 -07:00
Glenn Jocher
0613806286
Update greetings.yml 2020-07-31 00:09:09 -07:00
Glenn Jocher
c65e4d4446
Update stale.yml 2020-07-31 00:05:49 -07:00
priteshgohil
e0a5a6b411
edit in comments (#1417)
Co-authored-by: Priteshkumar Bharatbhai Gohil <pgohil@assystemtechnologies.com>
2020-07-27 10:29:45 -07:00
e96031413
8de13f114d
Modify Line 104 on getting coco dataset (#1415)
The correct command for downloading the COCO 2014 dataset is "!bash yolov3/data/get_coco2014.sh"
2020-07-26 23:50:05 -07:00
Glenn Jocher
e80cc2b80e
Update datasets.py 2020-07-20 10:34:06 -07:00
Glenn Jocher
f61fa7de2b
Update datasets.py 2020-07-20 10:33:49 -07:00
Glenn Jocher
cec59f12c8 windows --weights '' fix #192
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-18 10:48:33 -07:00
Glenn Jocher
8241bf67bb update issue templates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-09 17:45:01 -07:00
Glenn Jocher
fe6ecb9f86 Merge remote-tracking branch 'origin/master' 2020-07-09 17:44:06 -07:00
Glenn Jocher
2861288b03 update issue templates 2020-07-09 17:44:01 -07:00
Glenn Jocher
bdf546150d
Update requirements.txt #1339 2020-07-08 21:33:10 -07:00
Glenn Jocher
c7f8dfcb87 Merge remote-tracking branch 'origin/master' 2020-07-02 16:42:49 -07:00
Glenn Jocher
2b0f4f6f9d update .dockerignore 2020-07-02 16:42:45 -07:00
tjiagoM
fa78fc4e34
partial support for dropout layer (#1366) 2020-07-02 14:35:20 -07:00
Glenn Jocher
63996a8bfe --resume update 2020-06-30 21:45:06 -07:00
Glenn Jocher
751d7d5cb4 Merge remote-tracking branch 'origin/master' 2020-06-30 16:20:00 -07:00
Glenn Jocher
f8e5338f0a --resume epochs update 2020-06-30 16:19:56 -07:00
Glenn Jocher
46575cfad5
Update README.md 2020-06-27 23:59:54 -07:00
Glenn Jocher
fc0394e038
Update README.md 2020-06-27 23:54:46 -07:00
Glenn Jocher
eadc06bce8
Update README.md 2020-06-27 23:52:45 -07:00
Glenn Jocher
7f953b2106 Merge remote-tracking branch 'origin/master' 2020-06-27 09:09:07 -07:00
Glenn Jocher
9b9715668c add yolov4-tiny.cfg #1350 2020-06-27 09:09:02 -07:00
Jason Nataprawira
e1fb453079
Update requirements.txt (#1339)
Add torchvision
2020-06-25 06:09:41 -07:00
NanoCode012
a587d39cd4
Fixed train.py SyntaxError due to last commit (#1336)
Fixed unexpected character after line continuation character on lines 148, 150, and 151
2020-06-24 11:37:09 -07:00
Chang Lee
8a414743e2
Fixed string format error during weight conversion (#1334) 2020-06-22 19:07:51 -07:00
Glenn Jocher
e276e3a103
Update greetings.yml 2020-06-22 15:20:08 -07:00
Oulbacha Reda
a97f350461
Non-output layer freeze in train.py (#1333)
Freeze layers that aren't of type YOLOLayer and that aren't the conv layers preceding them
2020-06-22 13:15:40 -07:00
Glenn Jocher
ca7794ed05 update test.py 2020-06-20 10:02:18 -07:00
Glenn Jocher
207a17de31 Merge remote-tracking branch 'origin/master' 2020-06-20 09:58:53 -07:00
Glenn Jocher
183e3833d2 update datasets.py 2020-06-20 09:58:48 -07:00
FuLin
10dc08f91b
revert value of gs back to 32(from 64) (#1317) 2020-06-19 09:54:57 -07:00
Glenn Jocher
dc06836968 update README.md 2020-06-18 12:45:19 -07:00
Glenn Jocher
9fd02ae224 update --bug-report.md 2020-06-18 12:40:26 -07:00
Glenn Jocher
049c458e2d Merge remote-tracking branch 'origin/master' 2020-06-18 12:39:37 -07:00
Glenn Jocher
512b518c20 update --bug-report.md 2020-06-18 12:39:33 -07:00
Glenn Jocher
a475620306
Update README.md 2020-06-15 12:32:05 -07:00
Glenn Jocher
89a3ecac4b
Update README.md 2020-06-15 12:30:12 -07:00
Glenn Jocher
f51ace44f9 update README.md 2020-06-15 12:28:16 -07:00
Glenn Jocher
509644a622 greeting update 2020-06-15 12:27:03 -07:00
Glenn Jocher
eca5b9c1d3 Merge remote-tracking branch 'origin/master' 2020-06-15 12:25:53 -07:00
Glenn Jocher
c78d49f190 check_file() update from yolov5 2020-06-15 12:25:48 -07:00
Glenn Jocher
c4b0f986d1
Update README.md 2020-06-10 16:46:22 -07:00
Glenn Jocher
936ac746ce update README.md 2020-06-10 16:45:09 -07:00
Glenn Jocher
0671f04e1f YOLOv5 greeting 2020-06-09 16:00:10 -07:00
Glenn Jocher
3cac096d7e YOLOv5 greeting 2020-06-09 15:50:10 -07:00
Glenn Jocher
82f653b0f5 webcam multiple bounding box bug fix #1188 2020-06-02 23:59:03 -07:00
Glenn Jocher
64b8960074 remove dependency 2020-06-02 21:45:33 -07:00
Glenn Jocher
8c533a92b0 remove dependency 2020-06-02 11:22:28 -07:00
Glenn Jocher
cf7a4d31d3 bug fix in local to global path replacement 2020-05-28 20:50:02 -07:00
Glenn Jocher
2c39dba675 Merge remote-tracking branch 'origin/master' 2020-05-28 14:01:42 -07:00
Glenn Jocher
e99ff3aad0 local path robustness 2020-05-28 14:01:38 -07:00
Glenn Jocher
39a2d32c0f
Bug fix #1247 2020-05-27 09:32:19 -07:00
Glenn Jocher
d136ddeeba tight_layout=True 2020-05-25 12:42:58 -07:00
Glenn Jocher
d6d6fb5e5b print('Optimizer stripped from %s' % f) 2020-05-24 20:30:30 -07:00
Glenn Jocher
23f85a68b8 tight_layout=True 2020-05-24 10:51:35 -07:00
Glenn Jocher
16ea613628 caching introspection update 2020-05-22 16:06:21 -07:00
Glenn Jocher
4879fd22e9 caching introspection update 2020-05-22 16:03:08 -07:00
Glenn Jocher
002884ae5e multi_label burnin addition 2020-05-21 14:40:45 -07:00
Glenn Jocher
2cc2b2cf0d label *.npy saving for faster caching 2020-05-20 21:39:18 -07:00
Glenn Jocher
3ddaf3b63c label *.npy saving for faster caching 2020-05-20 21:13:41 -07:00
Glenn Jocher
cd5f6227d9
Update README.md 2020-05-18 12:03:14 -07:00
Glenn Jocher
eacded6a2c add stride order reversal for c53*.cfg 2020-05-17 22:45:48 -07:00
Glenn Jocher
bc9da228e0 add stride order reversal for c53*.cfg 2020-05-17 22:11:02 -07:00
Glenn Jocher
da40084b37 burnin update 2020-05-17 21:03:36 -07:00
Glenn Jocher
0c7d7427e4 [conf > conf_thres] update 2020-05-17 20:59:19 -07:00
Glenn Jocher
5b572681ff pseudo labeling bug fix 2020-05-17 19:28:06 -07:00
Glenn Jocher
316d99c377 yolov5 regress updates to yolov3 2020-05-17 15:19:33 -07:00
Glenn Jocher
c8f4ee6c46 yolov5 regress updates to yolov3 - build_targets() 2020-05-17 15:10:31 -07:00
Glenn Jocher
110ead20e6 yolov5 regress updates to yolov3 2020-05-17 15:00:07 -07:00
Glenn Jocher
c94019f159 iglob bug fix 2020-05-17 14:31:14 -07:00
Glenn Jocher
bbd82bb94d updates 2020-05-17 14:30:12 -07:00
Glenn Jocher
6bfb3a96c8 iglob bug fix 2020-05-16 22:43:12 -07:00
Glenn Jocher
37bd5490ef iglob file-search improvements 2020-05-16 22:25:21 -07:00
Glenn Jocher
27c7b02fff --save-txt extension fix 2020-05-16 11:51:49 -07:00
orcund
3a71daf4bc
Pseudo Labeling (#1149)
* Added pseudo labeling

* Delete print_test.py

* Refactor label generation

* Update detect.py

* Update detect.py

* Update utils.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-05-16 11:09:57 -07:00
Glenn Jocher
3f27ef1253 pycocotools and numpy 1.17 fix for #1182 2020-05-15 20:50:58 -07:00
Glenn Jocher
f6a19d5b32 add cd53-based *.cfg 2020-05-15 16:20:04 -07:00
Glenn Jocher
20891926c9 add stride order reversal for c53*.cfg 2020-05-15 14:40:14 -07:00
Glenn Jocher
6fe67595cb add stride order reversal for c53*.cfg 2020-05-15 11:32:23 -07:00
IlyaOvodov
b2fcfc573e
convert(...) changed to save converted file alongside the original file (#1167) 2020-05-13 09:08:55 -07:00
Glenn Jocher
c066d7d439 Merge remote-tracking branch 'origin/master' 2020-05-12 09:53:18 -07:00
Glenn Jocher
0cf88f046d hyp evolution bug fix #1160 2020-05-12 09:53:13 -07:00
Glenn Jocher
031c2144ec
Update README.md 2020-05-12 08:31:36 -07:00
Glenn Jocher
894a3e54ca
Update --bug-report.md 2020-05-11 11:14:34 -07:00
Glenn Jocher
9f04e175f6 nms torch.mm() update 2020-05-10 11:26:37 -07:00
Glenn Jocher
ae2bc020eb git status check - linux and darwin 2020-05-09 22:35:44 -07:00
Glenn Jocher
965155ee60 CUBLAS bug fix #1139 2020-05-06 10:26:28 -07:00
Glenn Jocher
832ceba559 update bug report template 2020-05-06 10:14:31 -07:00
Glenn Jocher
d405959893 cleanup 2020-05-04 13:33:34 -07:00
Glenn Jocher
5d42cc1b9a
Update README.md 2020-05-02 18:44:31 -07:00
Glenn Jocher
b0b52eec53 yolov4 tensorrt 2020-05-02 11:09:09 -07:00
Glenn Jocher
23614b8c2e speed update 2020-05-02 10:24:26 -07:00
Glenn Jocher
add73a0e74 speed update 2020-05-02 10:23:40 -07:00
Glenn Jocher
ee7cba65a5 kmeans() cleanup 2020-05-02 09:20:19 -07:00
Glenn Jocher
be87b41aa2 update image display per #1114 2020-04-30 16:50:58 -07:00
Glenn Jocher
b0629d622c bug fix on #1114 2020-04-30 15:03:32 -07:00
Glenn Jocher
0ffbf5534e cleanup for #1114 2020-04-30 14:53:57 -07:00
Josh Veitch-Michaelis
fb1b5e09b2
faster and more informative training plots (#1114)
* faster and more informative training plots

* Update utils.py

Looks good. Needs pep8 linting; I'll do that in PyCharm later once the PR is in.

* Update test.py

* Update train.py

Using f for the tb descriptor lets us plot several batches, e.g. changing L292 to 'if ni < 3' for 3 examples.

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-04-30 13:37:04 -07:00
Glenn Jocher
f1d73a29e5 Optimizer group report 2020-04-30 12:26:02 -07:00
Glenn Jocher
d62d68929c cleanup 2020-04-29 12:00:30 -07:00
Glenn Jocher
9f88f5cc21 cleanup 2020-04-29 11:34:59 -07:00
Glenn Jocher
9cc4951d4f auto reverse-strides for yolov4/panet 2020-04-28 15:24:14 -07:00
Glenn Jocher
c6ea2b58ea auto-accumulate update 2020-04-28 15:06:33 -07:00
Glenn Jocher
37cbe89ef0 test/train jpg for png 2020-04-28 13:45:27 -07:00
Josh Veitch-Michaelis
992d8af242
faster hsv augmentation (#1110)
As per https://github.com/ultralytics/yolov3/issues/1096
2020-04-28 12:59:44 -07:00
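The speedup comes from applying the random HSV gains through 256-entry lookup tables instead of float math on the whole image — a sketch along the lines of what landed:

```
import cv2
import numpy as np

def augment_hsv(img, hgain=0.015, sgain=0.7, vgain=0.4):
    # Random gain per channel, applied via LUTs (fast, uint8 end to end).
    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1
    hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
    x = np.arange(0, 256, dtype=r.dtype)
    lut_hue = ((x * r[0]) % 180).astype(np.uint8)  # OpenCV hue range is 0-179
    lut_sat = np.clip(x * r[1], 0, 255).astype(np.uint8)
    lut_val = np.clip(x * r[2], 0, 255).astype(np.uint8)
    img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
    cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img)  # write back in place
```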
Glenn Jocher
15f1343dfc uncached label removal 2020-04-28 11:07:26 -07:00
Glenn Jocher
b1d385a8de yolov4-relu.cfg 2020-04-27 21:19:22 -07:00
Glenn Jocher
02ae0e3bbd reproduce results update 2020-04-27 21:05:19 -07:00
Glenn Jocher
8521c3cff9 cleanup 2020-04-27 15:22:36 -07:00
Glenn Jocher
e9d41bb566 Speed updated 2020-04-27 15:06:26 -07:00
Glenn Jocher
2518868508 MemoryEfficientMish() 2020-04-27 13:51:21 -07:00
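Mish (added in the commits just below) is x · tanh(softplus(x)); the memory-efficient variant wraps the same math in a custom autograd Function so intermediates need not be stored. The plain form, as a sketch:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    # Mish activation: x * tanh(softplus(x)).
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))
```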
Glenn Jocher
3aa347a321 add HardSwish() 2020-04-27 13:08:24 -07:00
Glenn Jocher
692f945819 Merge remote-tracking branch 'origin/master' 2020-04-27 11:20:33 -07:00
Glenn Jocher
f799c15611 result not updated from pycocotools 2020-04-27 11:20:27 -07:00
Josh Veitch-Michaelis
18702c9608
add tensorboard to requirements (#1100)
In a clean environment, running training fails if tensorboard is not installed, e.g.:

```
Traceback (most recent call last):
  File "/home/josh/miniconda3/envs/ultralytics/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py", line 2, in <module>
    from tensorboard.summary.writer.record_writer import RecordWriter  # noqa F401
ModuleNotFoundError: No module named 'tensorboard'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 6, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/home/josh/miniconda3/envs/ultralytics/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py", line 4, in <module>
    raise ImportError('TensorBoard logging requires TensorBoard with Python summary writer installed. '
ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above.
```
2020-04-27 09:21:34 -07:00
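A quick check that the dependency is satisfied (log directory illustrative):

```
from torch.utils.tensorboard import SummaryWriter  # raises ImportError without tensorboard

writer = SummaryWriter("runs/exp0")                # illustrative log dir
writer.add_scalar("train/loss", 0.5, global_step=0)
writer.close()
```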
Glenn Jocher
4a4bfb20de FLOPS verbose=False 2020-04-26 16:31:57 -07:00
Glenn Jocher
a0a3bab9e6 add Mish() support 2020-04-26 16:31:21 -07:00
Glenn Jocher
18d4ebfd12 add Mish() support 2020-04-26 16:25:46 -07:00
Glenn Jocher
11f228eb00 yolov4.cfg from alexeyab/darknet 2020-04-26 16:07:29 -07:00
Glenn Jocher
55757421de remove future 2020-04-26 14:09:12 -07:00
Glenn Jocher
efbeb283c4 ONNX grid float 2020-04-26 14:01:20 -07:00
Glenn Jocher
3bf0cb9c60 remove tb-nightly 2020-04-26 12:41:15 -07:00
Glenn Jocher
daedfc5487 reduce merge limit to 3000 2020-04-26 11:22:29 -07:00
Glenn Jocher
754a1b5bf8 reduce merge limit to 3000 2020-04-25 21:15:30 -07:00
Glenn Jocher
3554ab07fb anchor correction 2020-04-24 11:00:01 -07:00
Glenn Jocher
5a8efa5c1d auto --accumulate 2020-04-23 14:32:28 -07:00
Glenn Jocher
b3dfd89878 scheduler resume bug fix 2020-04-23 10:35:08 -07:00
Glenn Jocher
c29be7f85d torch >= 1.5 2020-04-22 17:55:23 -07:00
Glenn Jocher
aa854ecaa9 torch >= 1.5 2020-04-22 17:54:56 -07:00
Glenn Jocher
dda0afa22e onnx export IO layer names update 2020-04-22 16:00:20 -07:00
Glenn Jocher
748f60baae updated test default img_size 512 2020-04-22 14:32:51 -07:00
Glenn Jocher
345d65d18f updated train default img_size 320-640 2020-04-22 14:32:05 -07:00
Glenn Jocher
82a12e2c8e docker train update 2020-04-22 11:38:48 -07:00
Glenn Jocher
7cbac5a3ea train.py iou_t to 0.20 2020-04-22 11:34:34 -07:00
Glenn Jocher
77b3829d56 check_git_status() to train.py 2020-04-22 11:02:09 -07:00
Glenn Jocher
2f636d5740 .half() bug fix 2020-04-22 10:46:26 -07:00
Glenn Jocher
03c6a2d6fa cleanup 2020-04-21 12:31:37 -07:00
Glenn Jocher
a5bd0fe567 tensorboard comment=opt.name 2020-04-21 12:19:29 -07:00
Glenn Jocher
4c4f4f4dd4 Merge remote-tracking branch 'origin/master' 2020-04-21 09:14:15 -07:00
Glenn Jocher
6aed6c5fd0 attempt_download() update for '' weights 2020-04-21 09:13:38 -07:00
Glenn Jocher
22a6c441ce
Update README.md 2020-04-21 00:39:06 -07:00
Glenn Jocher
8b45360e28 detect cleanup 2020-04-20 16:47:28 -07:00
Glenn Jocher
cdb69d5929 cfg cleanup 2020-04-20 16:34:00 -07:00
Glenn Jocher
be3f322375 Tensorboard out of try, iou_t to 0.10 2020-04-20 09:57:15 -07:00
Glenn Jocher
accce6b565 git status check bug fix 2020-04-18 18:06:11 -07:00
Glenn Jocher
693c06b26c bug fix issues/1067 2020-04-18 12:07:44 -07:00
Glenn Jocher
bf1061c146 cleanup 2020-04-16 16:12:23 -07:00
Glenn Jocher
9ea856242f Merge remote-tracking branch 'origin/master' 2020-04-15 22:03:55 -07:00
Glenn Jocher
c3edf8daf4 move image size report 2020-04-15 22:03:51 -07:00
Glenn Jocher
716e618a18
Update greetings.yml 2020-04-15 13:14:54 -07:00
Glenn Jocher
6566f37cdc Merge remote-tracking branch 'origin/master' 2020-04-15 12:55:00 -07:00
Glenn Jocher
510eadcfa5 Apex and 'git pull' suggestions 2020-04-15 12:54:56 -07:00
Glenn Jocher
20a094ccb9
Update README.md 2020-04-15 12:25:42 -07:00
Glenn Jocher
b8c3644a18 ONNX export update 2020-04-15 12:12:59 -07:00
Glenn Jocher
628028c617 bias init 2020-04-15 11:50:54 -07:00
Glenn Jocher
a49ea80218 update initialize_weights() 2020-04-14 15:58:32 -07:00
Glenn Jocher
ac4c90c817 cleanup 2020-04-14 13:08:00 -07:00
Glenn Jocher
f5a2682a81 image sizes report 2020-04-14 12:02:08 -07:00
Glenn Jocher
763cdd5ae2 detailed image sizes report 2020-04-14 11:51:19 -07:00
Glenn Jocher
029e137bc2 bug fix 2020-04-14 04:34:40 -07:00
Glenn Jocher
1681249588 cleanup 2020-04-14 04:15:53 -07:00
Glenn Jocher
198a5a591d code cleanup 2020-04-14 04:15:05 -07:00
Glenn Jocher
25725c8569 bug fix 2020-04-14 03:13:30 -07:00
Glenn Jocher
835b0da68a new modules and init weights 2020-04-14 01:20:57 -07:00
Glenn Jocher
76fb8d48d4 ng dependence removed from build_targets() 2020-04-13 21:25:03 -07:00
Glenn Jocher
0dd5f8eee8 code cleanup 2020-04-13 18:25:59 -07:00
Glenn Jocher
ca3a9fcb0b return get_yolo_layers() 2020-04-13 17:56:12 -07:00
Glenn Jocher
b8574add37 new find_modules() fcn 2020-04-13 17:48:30 -07:00
Glenn Jocher
77e6bdd3c1 FLOPs at 480x640, BN init 2020-04-12 18:44:18 -07:00
Glenn Jocher
1038b0d269 multi-scale update 2020-04-12 18:22:54 -07:00
Glenn Jocher
46726dad13 torch.tensor(ng, device=device) 2020-04-12 13:02:00 -07:00
Glenn Jocher
efc754a794 add generations arg to kmeans() 2020-04-12 12:49:23 -07:00
Glenn Jocher
f65a50d13d Merge remote-tracking branch 'origin/master' 2020-04-12 10:20:28 -07:00
Glenn Jocher
eda31c8bd0 print speeds for save_json 2020-04-12 10:20:21 -07:00
Timothy M. Shead
ada2958105
Fix argparse string escapes in train.py. (#1045) 2020-04-12 10:00:50 -07:00
Glenn Jocher
ed1d4f5ae7 k for kernel_size 2020-04-11 12:37:03 -07:00
Glenn Jocher
a34219a54b padding from (k-1) // 2 to k // 2 2020-04-11 12:18:54 -07:00
Glenn Jocher
b574f765ce add warning to plot_results() 2020-04-11 11:04:10 -07:00
Glenn Jocher
7be71b02e2 get_yolo_layers() 2020-04-11 10:56:20 -07:00
Glenn Jocher
dcc2e99fb2 get_yolo_layers() 2020-04-11 10:55:49 -07:00
Glenn Jocher
58edfc4a84 kaiming weight init 2020-04-11 10:45:33 -07:00
Glenn Jocher
2cf23c4aee add MixConv2d() layer 2020-04-10 18:58:34 -07:00
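MixConv2d splits the output channels into groups and convolves each group with a different kernel size (MixNet-style) before concatenating — a minimal sketch, not the exact layers.py code:

```
import torch
import torch.nn as nn

class MixConv2d(nn.Module):
    # Mixed-kernel conv: one kernel size per channel group, outputs concatenated.
    def __init__(self, c1, c2, k=(3, 5, 7), s=1):
        super().__init__()
        n = len(k)
        splits = [c2 // n] * n
        splits[0] += c2 - sum(splits)  # absorb any remainder in the first group
        self.convs = nn.ModuleList(
            nn.Conv2d(c1, c_out, ks, s, ks // 2, bias=False)
            for c_out, ks in zip(splits, k))
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(torch.cat([m(x) for m in self.convs], 1)))
```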
Glenn Jocher
4bbfda5cde hist equalization 2020-04-10 17:29:57 -07:00
Glenn Jocher
9bc3a551d9 histogram equalization added to augmentation 2020-04-10 17:24:49 -07:00
Glenn Jocher
398f8eadec add thr=0.10 to kmean_anchors() 2020-04-10 16:34:32 -07:00
Glenn Jocher
aa8b1098dd adapt mosaic to img channel count 2020-04-10 16:28:59 -07:00
Glenn Jocher
6736d7d125 swap cv2.INTER_AREA for cv2.INTER_LINEAR 2020-04-10 12:47:07 -07:00
Glenn Jocher
b98ce11d3a add MixConv2d() layer 2020-04-09 20:20:23 -07:00
Glenn Jocher
6e19245dc8 auto strip optimizer from best.pt after training 2020-04-09 19:53:29 -07:00
Glenn Jocher
bc74822540 notebook update 2020-04-09 14:33:24 -07:00
Glenn Jocher
d1601ae0f3 training updates 2020-04-08 21:34:34 -07:00
Glenn Jocher
4120ac3aa6 training updates 2020-04-08 21:01:58 -07:00
Glenn Jocher
c7f5b6cc21 speed update 2020-04-08 20:53:17 -07:00
Glenn Jocher
d9af081b7f Merge remote-tracking branch 'origin/master' 2020-04-08 20:43:57 -07:00
Glenn Jocher
8f71b7865b run once to remove initial timing effects 2020-04-08 20:43:51 -07:00
Glenn Jocher
4f6843e249
Update README.md 2020-04-08 12:28:10 -07:00
Glenn Jocher
13fa2798a6
Update README.md 2020-04-08 12:27:33 -07:00
Glenn Jocher
b6959a2f54 ONNX export self.training=False 2020-04-08 10:25:52 -07:00
Glenn Jocher
97780cfdb4 parameterize grid size 2020-04-08 10:14:33 -07:00
Glenn Jocher
9f41b7601a inference speed and mAP updates 2020-04-08 10:02:20 -07:00
Glenn Jocher
f54d28ba63 improve assert no labels found 2020-04-08 09:57:59 -07:00
Glenn Jocher
933e05cb44 augment update 2020-04-07 17:35:35 -07:00
Glenn Jocher
6c5ecaf805 remove label loading during training 2020-04-07 16:57:22 -07:00
Glenn Jocher
b3e1d74478 remove imwrite from augment 2020-04-07 14:23:31 -07:00
Glenn Jocher
d79c3bd076 parameterize augment scales 2020-04-07 14:19:43 -07:00
Glenn Jocher
b9b14bef59 scale_img() bug fix 2020-04-07 13:35:47 -07:00
Glenn Jocher
067ee264c0 scale_img() bug fix 2020-04-07 12:59:52 -07:00
Glenn Jocher
68f58f4dec scale_img() update 2020-04-07 12:51:52 -07:00
Glenn Jocher
1a3c77df95 Merge remote-tracking branch 'origin/master' 2020-04-07 12:27:56 -07:00
Glenn Jocher
1a511d2906 updated --augment sizes and results 2020-04-07 12:27:49 -07:00
Wang Xinyu
b20e3c4c40
Update README, add link to yolov3-spp-tensorrt (#1017) 2020-04-06 22:56:39 -07:00
Glenn Jocher
4fa3fd2df3 tensorboard notice 2020-04-06 16:33:23 -07:00
Glenn Jocher
05ae6e8499 tensorboard/focal loss reporting update 2020-04-06 15:45:18 -07:00
Glenn Jocher
c7f93bae40 update coco-tuned hyp['cls'] to current dataset 2020-04-06 10:58:07 -07:00
Glenn Jocher
4fc0012829 initial batchnorm to 0.03 momentum 2020-04-05 18:03:49 -07:00
Glenn Jocher
26fc4fb018 dataloader default color to imagenet mean 114 2020-04-05 18:02:41 -07:00
Glenn Jocher
2baf4e3f93 imagenet normalization on layer 0 batchnorm2d() 2020-04-05 17:33:06 -07:00
Glenn Jocher
b70cfa9a29 mAP updates for rect inference 64 commit 2020-04-05 17:18:24 -07:00
Glenn Jocher
c6d4e80335 move inference augmentation to model.forward() 2020-04-05 17:14:26 -07:00
Glenn Jocher
4da5c6c114 rect padding to 64, mAP increase 42.7 to 42.9 2020-04-05 16:06:27 -07:00
Glenn Jocher
bb59ffe68f model forward() zip() removal 2020-04-05 15:22:32 -07:00
Glenn Jocher
a657345b45 add FeatureConcat() module 2020-04-05 14:47:41 -07:00
Glenn Jocher
968b2ec004 .fuse() after .eval() 2020-04-05 14:05:12 -07:00
Glenn Jocher
d04738a27c forward updated if-else 2020-04-05 13:49:13 -07:00
Glenn Jocher
e81a152a92 tensorboard notice and model verbose option 2020-04-05 13:35:58 -07:00
Glenn Jocher
a19b1a3b94 line thickness 2020-04-05 11:10:05 -07:00
Glenn Jocher
6203340888 detect.py multi_label default False 2020-04-05 11:05:49 -07:00
Glenn Jocher
41246aa042 tensorboard updates 2020-04-04 19:34:39 -07:00
Glenn Jocher
00c1fdd805 add MixConv2d() layer 2020-04-03 20:03:44 -07:00
Glenn Jocher
eb9fb245aa add support for standalone BatchNorm2d() 2020-04-03 14:21:47 -07:00
Glenn Jocher
682c2b27e7 smart bias bug fix 2020-04-03 12:42:09 -07:00
Glenn Jocher
41a002e798 grid.float() 2020-04-03 12:38:08 -07:00
Glenn Jocher
93055a9d58 create_grids() to YOLOLayer method 2020-04-02 20:23:55 -07:00
Glenn Jocher
91f563c2a2 create_grids() to YOLOLayer method 2020-04-02 19:10:51 -07:00
Glenn Jocher
207c6fcff9 merge NMS full matrix 2020-04-02 18:53:40 -07:00
Glenn Jocher
aa4591d7e9 batchnorm momentum to 0.03 2020-04-02 14:10:45 -07:00
Glenn Jocher
9155ef3f4f burnin merged with prebias 2020-04-02 14:08:21 -07:00
Glenn Jocher
27c7334e81 new layers.py file 2020-04-02 12:22:15 -07:00
Glenn Jocher
4ac60018f6 FLOPS report 2020-04-01 14:05:41 -07:00
Glenn Jocher
ea80ba65af documentation updates 2020-04-01 12:16:37 -07:00
Glenn Jocher
765b8f3a3b documentation update 2020-04-01 12:12:14 -07:00
Glenn Jocher
5b322b6038 nvcr update 2020-04-01 09:57:00 -07:00
Glenn Jocher
300e9a7ad6 merge NMS full matrix 2020-03-31 21:31:09 -07:00
Glenn Jocher
8d788e10c4 mAP updates 2020-03-31 19:07:41 -07:00
Glenn Jocher
02802e67f2 merge NMS full matrix 2020-03-31 18:18:08 -07:00
Glenn Jocher
f4eecef700 merge NMS speed/memory improvements 2020-03-31 15:37:23 -07:00
Glenn Jocher
992e0d7cb4 default test --conf to 0.001 2020-03-31 14:36:25 -07:00
Glenn Jocher
98271eb6ed remove deprecated models 2020-03-31 14:27:10 -07:00
Glenn Jocher
16862ea846 update 'reproduce our results' 2020-03-30 21:21:45 -07:00
Glenn Jocher
b2d9f1898f burnin lr ramp 300 iterations 2020-03-30 19:27:42 -07:00
Glenn Jocher
ac2aa56e0a feature fusion update 2020-03-30 17:53:17 -07:00
Glenn Jocher
108334db29 FLOPs update 2020-03-30 16:04:08 -07:00
Glenn Jocher
105882b3c6 GFLOPs correction 2020-03-30 15:30:53 -07:00
Glenn Jocher
de52a008a5 default --img-size to 512 2020-03-30 11:46:20 -07:00
Glenn Jocher
f6fc9634ab mAP updates 2020-03-30 11:37:38 -07:00
Glenn Jocher
eb151a881e NMS and test batch_size updates 2020-03-29 20:41:32 -07:00
Glenn Jocher
c6b59a0e8a LR schedule to 0.05 min 2020-03-29 13:29:06 -07:00
Glenn Jocher
9c5e76b93d EMA implemented by default 2020-03-29 13:14:54 -07:00
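EMA keeps a decayed moving average of the weights and evaluates/saves that copy instead of the raw trained weights — a minimal sketch of the idea:

```
import copy
import torch

class ModelEMA:
    # Exponential moving average of model parameters; evaluate the EMA copy.
    def __init__(self, model, decay=0.9999):
        self.ema = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        msd = model.state_dict()
        for k, v in self.ema.state_dict().items():
            if v.dtype.is_floating_point:
                v.mul_(self.decay).add_(msd[k].detach(), alpha=1 - self.decay)
```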
Glenn Jocher
dc8e56b9f3 mAP update 2020-03-28 16:03:46 -07:00
Glenn Jocher
ce17c26759 mAP updates 2020-03-27 13:52:07 -07:00
Glenn Jocher
f9d34587da Merge NMS update 2020-03-27 13:11:24 -07:00
GoogleWiki
582de735ad
utils.clip_coords doesn't work as expected. (#961)
* utils.clip_coords doesn't work as expected.

Box coords may be negative or exceed borders.

* Update utils.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-03-27 13:09:10 -07:00
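The fix clamps box coordinates in place to the image bounds; the resulting utility is essentially:

```
import torch

def clip_coords(boxes, img_shape):
    # Clamp xyxy boxes (in place) to 0..width / 0..height.
    boxes[:, 0].clamp_(0, img_shape[1])  # x1
    boxes[:, 1].clamp_(0, img_shape[0])  # y1
    boxes[:, 2].clamp_(0, img_shape[1])  # x2
    boxes[:, 3].clamp_(0, img_shape[0])  # y2
```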
Glenn Jocher
4a63b24b09 Merge NMS update 2020-03-26 19:50:29 -07:00
Glenn Jocher
01ee0c5e95 Merge NMS update 2020-03-26 18:41:04 -07:00
Glenn Jocher
dad59220f1 speed and comments update 2020-03-26 18:34:20 -07:00
Glenn Jocher
faab52913c mAP updates 2020-03-26 16:35:46 -07:00
Glenn Jocher
9568d4562d mAP updates 2020-03-26 16:22:58 -07:00
Glenn Jocher
5ab13e5aa2 Merge remote-tracking branch 'origin/master' 2020-03-26 16:20:11 -07:00
Glenn Jocher
470371ba59 Test augment update 2020-03-26 16:20:06 -07:00
Yonghye Kwon
20b2671de0
cleanup (#963)
* cleanup

cleanup

* Update train.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-03-26 14:46:17 -07:00
Glenn Jocher
a4721e90f8 default batch-size to 16 2020-03-26 14:22:59 -07:00
Glenn Jocher
4a7d9bdba9 mAP increases 2020-03-26 14:14:52 -07:00
Glenn Jocher
a322fc5d4b Merge NMS update 2020-03-26 12:48:00 -07:00
Glenn Jocher
171b4129b5 Merge NMS update 2020-03-26 12:33:12 -07:00
Glenn Jocher
eac07f9da3 Merge NMS update 2020-03-26 12:20:01 -07:00
Glenn Jocher
94344f5bea test augmentation comments 2020-03-26 11:34:32 -07:00
Glenn Jocher
f91b1fb13a merge_batch NMS method 2020-03-26 11:28:46 -07:00
Glenn Jocher
c71ab7d506 augmented testing 2020-03-26 11:25:44 -07:00
Glenn Jocher
23b34f4db8 merge_batch NMS method 2020-03-25 23:29:33 -07:00
Glenn Jocher
aa0c64b5ac merge_batch NMS method 2020-03-25 23:24:57 -07:00
Glenn Jocher
3265d50f69 speed update 2020-03-19 18:15:09 -07:00
Glenn Jocher
89b6377723 Fuse by default when test.py called directly (faster) 2020-03-19 18:11:08 -07:00
Glenn Jocher
fff45c39a8 cleanup/speedup 2020-03-19 16:41:42 -07:00
Glenn Jocher
1b68fe7fde cleanup 2020-03-19 16:23:44 -07:00
Glenn Jocher
83c9cfb7de FocalLoss() and obj loss speed and stability update 2020-03-19 12:30:37 -07:00
Glenn Jocher
20454990ce FLOPS report 2020-03-19 12:30:07 -07:00
Glenn Jocher
60c8d194cd FocalLoss() and obj loss speed and stability update 2020-03-19 11:52:52 -07:00
Glenn Jocher
b3adc896f9 focal and obj loss speed/stability 2020-03-16 21:40:57 -07:00
Glenn Jocher
448c4a6e1f Remove deprecated --arc architecture options, implement --arc default for all cases 2020-03-16 20:46:25 -07:00
Glenn Jocher
77c6c01970 EMA class updates 2020-03-16 17:51:40 -07:00
Glenn Jocher
1a12667ce1 loss function cleanup 2020-03-16 17:31:37 -07:00
Glenn Jocher
f1208f784e updated run history 2020-03-16 15:36:56 -07:00
Glenn Jocher
2a12a91245 nvcr.io/nvidia/pytorch:20.02-py3 2020-03-16 15:19:58 -07:00
Glenn Jocher
c09fcfc4fe EMA class updates 2020-03-16 14:18:56 -07:00
Glenn Jocher
c4047000fe FocalLoss() updated to match TF 2020-03-16 14:03:50 -07:00
Glenn Jocher
07d2f0ad03 test/inference time augmentation 2020-03-15 18:39:54 -07:00
Glenn Jocher
adba66c3a6 EMA class updates 2020-03-14 18:08:48 -07:00
Glenn Jocher
851c9b9883 EMA class updates 2020-03-14 17:34:13 -07:00
Glenn Jocher
d91469a516 EMA class updates 2020-03-14 17:33:29 -07:00
Glenn Jocher
5ebbb2db28 ASFF implementation 2020-03-14 17:04:38 -07:00
Glenn Jocher
ea4c26b32d BatchNorm2d() to EfficientDet defaults: decay=0.997 eps=1e-4 2020-03-14 16:54:04 -07:00
Glenn Jocher
9ce4ec48a7 model.info() method implemented 2020-03-14 16:46:54 -07:00
Glenn Jocher
b89cc396af EMA class updates 2020-03-14 16:23:14 -07:00
Falak
666ba85ed3
Comment updates on box coordinates (#852)
* Update utils.py

Reusing function defined above

* Update utils.py

* Reverting change which break bbox coordinate computation

* Update utils.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-03-14 15:35:59 -07:00
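For reference, the kind of coordinate conversion these comments concern — xyxy corners to center-width-height:

```
import torch

def xyxy2xywh(x):
    # [x1, y1, x2, y2] -> [x_center, y_center, width, height]
    y = x.clone()
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2
    y[:, 2] = x[:, 2] - x[:, 0]
    y[:, 3] = x[:, 3] - x[:, 1]
    return y
```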
Glenn Jocher
a52c0abf8d updates 2020-03-13 20:12:54 -07:00
Glenn Jocher
418269d739 FocalLoss() gamma and alpha default values 2020-03-13 16:51:30 -07:00
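The defaults referenced (gamma=1.5, alpha=0.25 in this codebase's lineage) wrap an element-wise BCE-with-logits criterion with the focal modulation of Lin et al. 2017 — a sketch:

```
import torch
import torch.nn as nn

class FocalLoss(nn.Module):
    # Wrap a BCE-with-logits criterion with the focal term (1 - p_t)^gamma.
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super().__init__()
        self.loss_fcn = loss_fcn              # e.g. nn.BCEWithLogitsLoss()
        self.gamma, self.alpha = gamma, alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = "none"      # apply the focal term element-wise

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        p = torch.sigmoid(pred)
        p_t = true * p + (1 - true) * (1 - p)
        alpha_t = true * self.alpha + (1 - true) * (1 - self.alpha)
        loss *= alpha_t * (1.0 - p_t) ** self.gamma
        if self.reduction == "mean":
            return loss.mean()
        if self.reduction == "sum":
            return loss.sum()
        return loss
```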
Glenn Jocher
208b9a73fe updates 2020-03-13 16:08:49 -07:00
Glenn Jocher
f30a8706e5 update to coco results82 2020-03-13 12:06:48 -07:00
Glenn Jocher
0de07da612 updates 2020-03-13 11:03:39 -07:00
Glenn Jocher
6aae5aca64 inplace clip_coords() clamp 2020-03-13 10:47:00 -07:00
Glenn Jocher
731305142b json dict bug fixes and speed improvements 2020-03-13 10:35:58 -07:00
Glenn Jocher
5362e8254e updates 2020-03-13 10:20:52 -07:00
Glenn Jocher
a2f5bc477c Merge remote-tracking branch 'origin/master' 2020-03-13 10:06:22 -07:00
Glenn Jocher
a278623067 replaced floatn() with round() 2020-03-13 10:06:17 -07:00
Glenn Jocher
32404890e9
Update greetings.yml 2020-03-12 13:35:21 -07:00
Glenn Jocher
1ca2b8712a
Update greetings.yml 2020-03-12 13:34:27 -07:00
Glenn Jocher
8a1f35eac6 updates 2020-03-12 01:09:17 -07:00
Glenn Jocher
41bf46a419 updates 2020-03-11 22:11:19 -07:00
Glenn Jocher
6ca8277de2 updates 2020-03-11 21:30:47 -07:00
Glenn Jocher
2d32423461 Merge remote-tracking branch 'origin/master' 2020-03-11 20:45:19 -07:00
Glenn Jocher
75e88561cb updates 2020-03-11 20:45:14 -07:00
Glenn Jocher
997cd7f70b
Update greetings.yml 2020-03-11 20:35:52 -07:00
Glenn Jocher
673a1d037d
Create greetings.yml 2020-03-11 18:12:54 -07:00
Glenn Jocher
e76d4d0ffc updates 2020-03-11 17:13:40 -07:00
Glenn Jocher
e40d4c87f2 updates 2020-03-11 15:57:37 -07:00
Glenn Jocher
4089735c5e updates 2020-03-11 14:50:50 -07:00
Glenn Jocher
320f9c6601 updates 2020-03-11 12:18:03 -07:00
Glenn Jocher
585064f300 updates 2020-03-10 13:33:14 -07:00
Glenn Jocher
7a83574022 updates 2020-03-10 12:17:23 -07:00
Glenn Jocher
d55dbc1f29 updates 2020-03-09 20:08:19 -07:00
Glenn Jocher
17a06dcf83 updates 2020-03-09 18:55:17 -07:00
Glenn Jocher
d8370d13ea updates 2020-03-09 18:49:35 -07:00
Glenn Jocher
821a72b2d3 updates 2020-03-09 18:39:00 -07:00
Glenn Jocher
f7f435446b updates 2020-03-09 18:24:20 -07:00
Glenn Jocher
25ad727a3d updates 2020-03-09 18:22:42 -07:00
Glenn Jocher
207cf14df4 updates 2020-03-09 18:03:34 -07:00
Glenn Jocher
204594f299 updates 2020-03-09 16:44:26 -07:00
Glenn Jocher
6130b70fe7 updates 2020-03-09 16:00:05 -07:00
Glenn Jocher
67e7ac221f updates 2020-03-09 14:20:38 -07:00
Glenn Jocher
5fb661b7d4 updates 2020-03-09 13:33:23 -07:00
Glenn Jocher
6bd51b75ea updates 2020-03-09 11:20:22 -07:00
Glenn Jocher
cd76a1a982 updates 2020-03-09 10:46:59 -07:00
Glenn Jocher
071d4113f6 updates 2020-03-09 10:43:49 -07:00
Glenn Jocher
4a90221e79 updates 2020-03-08 16:15:41 -07:00
Glenn Jocher
1d43b2a55a updates 2020-03-08 16:13:56 -07:00
Glenn Jocher
0037254bf2 updates 2020-03-08 13:20:31 -07:00
Glenn Jocher
23389da9ec updates 2020-03-08 12:35:04 -07:00
Glenn Jocher
952df070db updates 2020-03-08 12:05:42 -07:00
Glenn Jocher
17fbd6ed8c updates 2020-03-08 11:56:37 -07:00
Glenn Jocher
a4662bf306 updates 2020-03-08 11:53:18 -07:00
Glenn Jocher
3122f1fe82 updates 2020-03-08 11:52:35 -07:00
Glenn Jocher
4317335795 updates 2020-03-08 11:43:05 -07:00
Glenn Jocher
feea9c1a65 P and R evaluated at 0.5 score 2020-03-07 10:26:08 -08:00
Glenn Jocher
65eeb1bae5 updates 2020-03-05 17:08:14 -08:00
Glenn Jocher
7790d8b0e4 Merge remote-tracking branch 'origin/master' 2020-03-05 14:21:13 -08:00
Glenn Jocher
692b006f4d updates 2020-03-05 14:20:52 -08:00
Glenn Jocher
818d0b9f00
Update stale.yml 2020-03-05 13:48:29 -08:00
Glenn Jocher
2e8cee9fcb
Update stale.yml 2020-03-05 13:26:48 -08:00
Glenn Jocher
e2f235cf1e
Create stale.yml 2020-03-05 13:22:10 -08:00
Glenn Jocher
378f08c6d5 updates 2020-03-05 12:30:11 -08:00
Glenn Jocher
1dc1761f45 updates 2020-03-05 10:20:08 -08:00
Glenn Jocher
b8b89a3132 updates 2020-03-05 09:54:41 -08:00
Glenn Jocher
8b6c8a5318 updates 2020-03-04 16:33:14 -08:00
Glenn Jocher
4a5159710f updates 2020-03-04 14:55:56 -08:00
Glenn Jocher
1d45ec84bc updates 2020-03-04 14:12:31 -08:00
Glenn Jocher
2e88a56635 updates 2020-03-04 14:02:42 -08:00
Glenn Jocher
cdb229fc76 updates 2020-03-04 13:30:27 -08:00
Glenn Jocher
305c07bac8 updates 2020-03-04 13:24:18 -08:00
Glenn Jocher
981b452b1d updates 2020-03-04 13:20:08 -08:00
Glenn Jocher
6ab753a9e7 updates 2020-03-04 13:06:31 -08:00
Glenn Jocher
9c661e2d53 updates 2020-03-04 12:17:37 -08:00
Glenn Jocher
3e633783d8 updates 2020-03-04 11:36:21 -08:00
Glenn Jocher
1430a1e408 updates 2020-03-04 10:26:35 -08:00
Glenn Jocher
35eae3ace9 updates 2020-03-04 09:53:02 -08:00
Glenn Jocher
e482392161 updates 2020-03-04 09:00:48 -08:00
Glenn Jocher
eb81c0b9ae updates 2020-03-04 01:47:31 -08:00
Glenn Jocher
be01fc357b updates 2020-03-04 00:22:01 -08:00
Glenn Jocher
f915bf175c updates 2020-03-04 00:08:18 -08:00
Glenn Jocher
166f8c0e53 updates 2020-03-04 00:07:19 -08:00
Glenn Jocher
308f7c8563 updates 2020-03-03 19:16:13 -08:00
Glenn Jocher
dce753ead4 updates 2020-03-02 14:30:01 -08:00
Glenn Jocher
2774c1b398 updates 2020-03-02 14:28:08 -08:00
Glenn Jocher
44daace4ca updates 2020-03-02 14:07:09 -08:00
Glenn Jocher
84371f6811 updates 2020-03-01 21:33:16 -08:00
Glenn Jocher
7823473d2f updates 2020-03-01 20:55:20 -08:00
Glenn Jocher
e6cda0fea4 updates 2020-02-29 01:15:23 -08:00
Glenn Jocher
cc08e09219 updates 2020-02-28 10:06:35 -08:00
Glenn Jocher
b3ecfb10bc updates 2020-02-27 22:50:26 -08:00
Glenn Jocher
d5815ebfd2 updates 2020-02-27 13:40:14 -08:00
Glenn Jocher
0fb4a46ace updates 2020-02-27 13:30:23 -08:00
Glenn Jocher
3a1ca6454c updates 2020-02-27 13:00:00 -08:00
Glenn Jocher
6a99e39bd5 updates 2020-02-27 12:57:10 -08:00
Glenn Jocher
de3e539609 updates 2020-02-27 12:49:01 -08:00
Glenn Jocher
f3d3295f90 updates 2020-02-27 12:38:14 -08:00
Glenn Jocher
7e92f70e05 updates 2020-02-27 12:19:06 -08:00
Glenn Jocher
e7f85bcfb9 updates 2020-02-27 11:29:38 -08:00
Glenn Jocher
764514e44d updates 2020-02-26 13:52:33 -08:00
Glenn Jocher
7d7c22cb7e updates 2020-02-26 13:42:50 -08:00
Glenn Jocher
2baa67cde2 updates 2020-02-26 13:40:17 -08:00
Glenn Jocher
b12f1a9abe updates 2020-02-25 22:58:26 -08:00
Glenn Jocher
4f3d07f689 updates 2020-02-25 20:04:05 -08:00
Glenn Jocher
f743235fac updates 2020-02-24 12:44:22 -08:00
Glenn Jocher
4b720013d1 updates 2020-02-24 12:43:13 -08:00
Glenn Jocher
24957dca98 updates 2020-02-24 12:21:47 -08:00
Glenn Jocher
ef3bd7e12b updates 2020-02-24 09:06:17 -08:00
Glenn Jocher
a3671bde94 updates 2020-02-23 18:33:32 -08:00
Glenn Jocher
2624d55623 updates 2020-02-22 21:24:56 -08:00
Glenn Jocher
b052085cc4 updates 2020-02-22 21:21:45 -08:00
Glenn Jocher
817c0bfeed updates 2020-02-22 21:17:38 -08:00
Glenn Jocher
bc741f30e8 updates 2020-02-22 18:18:38 -08:00
Glenn Jocher
7608047531 updates 2020-02-22 17:43:11 -08:00
Glenn Jocher
3cf8a13910 updates 2020-02-22 12:56:20 -08:00
Glenn Jocher
2d9bc62526 updates 2020-02-22 12:54:09 -08:00
Glenn Jocher
b70e39ab9b updates 2020-02-22 12:48:24 -08:00
Glenn Jocher
b97b88b659 updates 2020-02-21 17:16:34 -08:00
Glenn Jocher
fa8882c98e updates 2020-02-21 15:11:11 -08:00
Glenn Jocher
afbc2f8d78 updates 2020-02-21 15:10:50 -08:00
Glenn Jocher
328ad4da04 updates 2020-02-19 18:37:17 -08:00
Glenn Jocher
1043832493 updates 2020-02-19 18:26:45 -08:00
Glenn Jocher
7f1b2bfe08 updates 2020-02-19 18:06:53 -08:00
Glenn Jocher
6fbab656c8 updates 2020-02-19 17:08:03 -08:00
Glenn Jocher
f92ad043bd updates 2020-02-19 16:05:57 -08:00
Glenn Jocher
00862e47ef updates 2020-02-19 15:16:00 -08:00
Glenn Jocher
a9cbc28214 updates 2020-02-19 14:57:58 -08:00
Glenn Jocher
f4a9e5cd58 updates 2020-02-19 12:59:56 -08:00
Glenn Jocher
ddd892dc20 updates 2020-02-18 21:04:58 -08:00
Glenn Jocher
b022648716 updates 2020-02-18 20:13:18 -08:00
Glenn Jocher
a971b33b74 updates 2020-02-17 17:34:40 -08:00
Glenn Jocher
4fa0a32d05 updates 2020-02-17 17:02:37 -08:00
Glenn Jocher
45ce01f859 updates 2020-02-17 15:28:11 -08:00
Glenn Jocher
9880dcd6cd updates 2020-02-17 15:10:11 -08:00
Glenn Jocher
426d5b82c6 updates 2020-02-17 12:36:11 -08:00
Glenn Jocher
aa45dc05b3 updates 2020-02-16 23:57:39 -08:00
Glenn Jocher
49d47adf17 updates 2020-02-16 23:30:14 -08:00
Glenn Jocher
cca620208e updates 2020-02-16 23:13:34 -08:00
Glenn Jocher
e840b7c781 add yolov3-spp-ultralytics.pt 2020-02-16 23:12:07 -08:00
Glenn Jocher
57798278ad updates 2020-02-14 21:32:29 -08:00
Glenn Jocher
740cd177dc updates 2020-02-14 21:26:16 -08:00
Glenn Jocher
11bcd0f988 updates 2020-02-12 15:19:06 -08:00
Glenn Jocher
ca22b5e40b save git info in docker images 2020-02-12 14:27:31 -08:00
Glenn Jocher
0958d81580 updates 2020-02-09 11:17:31 -08:00
Glenn Jocher
8bc9f56564 updates 2020-02-09 09:12:45 -08:00
Glenn Jocher
8bc7648b38 updates 2020-02-08 21:51:31 -08:00
Glenn Jocher
ca4960f7ff updates 2020-02-08 13:28:47 -08:00
Glenn Jocher
daddc560f6 updates 2020-02-08 09:48:28 -08:00
Glenn Jocher
58f04daec6 updates 2020-02-08 09:47:01 -08:00
Glenn Jocher
106b1961b6 updates 2020-02-07 10:53:09 -08:00
Yonghye Kwon
145ea67a2e
modify h-channel clip range in HSV augmentation (#825)
* h-channel clip range edit in HSV augmentation

h range is [0., 179.]

* Update datasets.py

Reduced indexing operations and used an in-place clip for HSV. Two clips are unfortunately required (a double clip of axis 0), but the overall effect should be improved speed.

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-02-07 09:14:55 -08:00
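For context, a minimal sketch of the hue clipping this commit describes, assuming an OpenCV-style augmentation pipeline (the function name and gain values below are illustrative, not the repository's exact datasets.py code). OpenCV stores the hue channel of an 8-bit HSV image in [0, 179], which is why hue needs its own clip separate from saturation/value:

```python
import cv2
import numpy as np

def augment_hsv(img, h_gain=0.0138, s_gain=0.678, v_gain=0.36):
    # Random per-channel gains around 1.0 (gain values are illustrative).
    r = np.random.uniform(-1, 1, 3) * [h_gain, s_gain, v_gain] + 1
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv *= r  # scale H, S, V together, with no per-channel indexing
    np.clip(hsv[..., 0], 0, 179, out=hsv[..., 0])    # hue range is [0, 179]
    np.clip(hsv[..., 1:], 0, 255, out=hsv[..., 1:])  # sat/val range is [0, 255]
    img[:] = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

The two clip calls mirror the "two clips" the commit message mentions: hue must be handled separately from the other two channels because its valid range differs.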
Glenn Jocher
dca80f6f98 updates 2020-02-06 16:13:10 -08:00
Glenn Jocher
d778bf3c7a updates 2020-02-05 20:35:54 -08:00
Glenn Jocher
ec942bd23c updates 2020-02-05 20:27:01 -08:00
Glenn Jocher
e185719bd7 updates 2020-02-04 21:22:20 -08:00
Glenn Jocher
888cad1e31 updates 2020-02-02 23:35:03 -08:00
Glenn Jocher
785bfec286 updates 2020-02-02 09:19:44 -08:00
Glenn Jocher
8b18beb3db updates 2020-02-02 08:55:34 -08:00
Glenn Jocher
d23f721dcf updates 2020-01-31 09:36:28 -08:00
Glenn Jocher
f7772c791d updates 2020-01-31 09:27:40 -08:00
Glenn Jocher
189c7044fb updates 2020-01-31 09:00:45 -08:00
LinCoce
0c7af1a4d2
fusedconv bug fix, https://github.com/ultralytics/yolov3/issues/807 (#818)
Looks good. Thanks for catching the bug @LinCoce!
2020-01-30 21:58:26 -08:00
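For reference, the fuse operation at issue folds each BatchNorm2d into its preceding Conv2d so inference runs a single layer per pair. A minimal sketch of the standard folding math (not the repository's exact fuse() implementation; assumes groups=1 and default dilation):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    #   = scale * conv(x) + (bias - mean) * scale + beta,  scale = gamma / sqrt(var + eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # per-channel
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        b = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)
    return fused
```

In eval mode the fused layer and the original conv+BN pair produce identical outputs up to float precision, which is also how a fusion bug like #807 can be caught.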
Glenn Jocher
6f769081d1 updates 2020-01-30 16:03:34 -08:00
Piotr Skalski
20b0601fa7
change test batch image format from .jpg to .png due to a matplotlib bug (#817)
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-01-30 15:48:26 -08:00
Glenn Jocher
4b9d73f931 updates 2020-01-30 14:32:10 -08:00
Glenn Jocher
ac8d78382a updates 2020-01-30 14:29:58 -08:00
Glenn Jocher
2e4650e013 updates 2020-01-30 12:40:05 -08:00
Glenn Jocher
999463fbbd updates 2020-01-30 12:39:54 -08:00
Glenn Jocher
27c75b5210 updates 2020-01-30 12:37:47 -08:00
Glenn Jocher
ff7ee7f1f1 updates 2020-01-30 12:12:04 -08:00
Glenn Jocher
ce11ef28f8 updates 2020-01-29 21:52:00 -08:00
Glenn Jocher
2cf171465c updates 2020-01-29 21:17:31 -08:00
Glenn Jocher
9b78f4aa1b updates 2020-01-29 15:31:19 -08:00
Glenn Jocher
db4ac86eba updates 2020-01-29 14:28:48 -08:00
Glenn Jocher
3ee6eb438a updates 2020-01-29 14:26:37 -08:00
Glenn Jocher
8fac566a87 updates 2020-01-29 14:18:45 -08:00
Glenn Jocher
9e97c4cadb updates 2020-01-29 11:58:32 -08:00
Glenn Jocher
f405d45043 updates 2020-01-29 10:34:51 -08:00
Glenn Jocher
b2be564145 updates 2020-01-29 10:33:36 -08:00
Glenn Jocher
4e7d1053cf updates 2020-01-29 10:30:13 -08:00
Glenn Jocher
639fa30857 updates 2020-01-29 10:29:37 -08:00
Glenn Jocher
5a09c0e6af updates 2020-01-27 17:41:07 -05:00
Glenn Jocher
1961678177 updates 2020-01-27 17:03:27 -05:00
Glenn Jocher
72680a5992 updates 2020-01-27 16:52:40 -05:00
Glenn Jocher
cb0f4bbfe7 updates 2020-01-27 16:08:20 -05:00
Glenn Jocher
dd3cf27ece updates 2020-01-23 17:26:05 -08:00
Glenn Jocher
629b1b237a updates 2020-01-23 16:36:53 -08:00
Glenn Jocher
d498193456 updates 2020-01-23 15:15:53 -08:00
Glenn Jocher
07c8a03aa0 updates 2020-01-23 13:52:17 -08:00
Glenn Jocher
ee4f7a324d updates 2020-01-23 12:24:52 -08:00
Glenn Jocher
52041cffb9 updates 2020-01-22 18:19:42 -08:00
Glenn Jocher
9bb51aaf8c updates 2020-01-22 18:17:08 -08:00
Glenn Jocher
e6ec7c041c updates 2020-01-22 11:23:16 -08:00
Glenn Jocher
3de61b1fa5 updates 2020-01-22 11:08:03 -08:00
Glenn Jocher
f18913736b updates 2020-01-22 11:06:52 -08:00
Glenn Jocher
c7bf7f3d60 updates 2020-01-22 10:55:39 -08:00
Glenn Jocher
dc1f0a0d4f updates 2020-01-22 10:53:36 -08:00
Glenn Jocher
578e7f9500 updates 2020-01-21 23:18:34 -08:00
Glenn Jocher
6ba6181534 updates 2020-01-21 19:47:48 -08:00
Glenn Jocher
b9c2386ff0 updates 2020-01-21 19:46:12 -08:00
Glenn Jocher
5d73b190b0 updates 2020-01-21 17:23:35 -08:00
Glenn Jocher
6ccf19038d updates 2020-01-21 16:18:24 -08:00
Glenn Jocher
20e381edb4 updates 2020-01-20 22:43:37 -08:00
Glenn Jocher
d3738f5330 updates 2020-01-19 16:56:32 -08:00
Glenn Jocher
c91ffee852 updates 2020-01-19 16:55:29 -08:00
Glenn Jocher
19f75f986d updates 2020-01-19 16:54:49 -08:00
Glenn Jocher
abbf9fa2d4 updates 2020-01-19 15:37:56 -08:00
Glenn Jocher
51b5b3288f updates 2020-01-19 14:59:07 -08:00
Glenn Jocher
723431d6c3 updates 2020-01-19 12:15:42 -08:00
Glenn Jocher
3d1db0b5ac updates 2020-01-19 11:37:12 -08:00
Glenn Jocher
85fbb903f7 updates 2020-01-18 11:52:26 -08:00
Glenn Jocher
43956d6305 updates 2020-01-17 23:30:17 -08:00
Glenn Jocher
3bac3c63b1 updates 2020-01-17 19:42:04 -08:00
Glenn Jocher
bab855507a updates 2020-01-17 18:05:28 -08:00
Glenn Jocher
a4b8815ed9 updates 2020-01-17 17:58:37 -08:00
Glenn Jocher
dec2c7d9a6 updates 2020-01-17 17:52:28 -08:00
Glenn Jocher
cdb4680390 updates 2020-01-17 17:44:22 -08:00
Glenn Jocher
1ba9bd746b updates 2020-01-17 11:17:52 -08:00
Glenn Jocher
831d2b3dcc updates 2020-01-17 11:17:18 -08:00
Glenn Jocher
a8e1390028 updates 2020-01-17 10:55:30 -08:00
Glenn Jocher
1bc50ebfab updates 2020-01-17 10:49:07 -08:00
Glenn Jocher
c0cde1edf0 updates 2020-01-16 13:25:18 -08:00
Glenn Jocher
75933e93a1 updates 2020-01-16 09:47:33 -08:00
Glenn Jocher
4459a9474e updates 2020-01-15 12:27:54 -08:00
Glenn Jocher
6f777e2bc5 updates 2020-01-15 11:22:06 -08:00
Glenn Jocher
53e3d55a1e updates 2020-01-15 10:22:59 -08:00
Glenn Jocher
5831d2d6ba updates 2020-01-15 09:28:58 -08:00
Glenn Jocher
01dbdc45d7 updates 2020-01-14 22:22:24 -08:00
Glenn Jocher
c6b44befde updates 2020-01-14 22:11:09 -08:00
Glenn Jocher
78ac3bdcfb updates 2020-01-14 17:26:22 -08:00
Glenn Jocher
c5d7ff27e6 updates 2020-01-13 22:19:45 -08:00
Glenn Jocher
c67ba50266 updates 2020-01-13 20:59:59 -08:00
Glenn Jocher
25ccf54a94 updates 2020-01-12 17:05:03 -08:00
Glenn Jocher
33264f5567 updates 2020-01-12 16:18:29 -08:00
Glenn Jocher
5ac3eb42b6 updates 2020-01-12 15:56:42 -08:00
Glenn Jocher
b890ccecfc updates 2020-01-12 12:01:58 -08:00
Glenn Jocher
aeac9b78eb updates 2020-01-11 21:20:55 -08:00
Glenn Jocher
01d485831f updates 2020-01-11 20:15:41 -08:00
Glenn Jocher
77034467f6 updates 2020-01-11 20:13:29 -08:00
Glenn Jocher
4b56a370e6 updates 2020-01-11 13:13:57 -08:00
Glenn Jocher
9b84885775 updates 2020-01-11 13:12:58 -08:00
Glenn Jocher
5cda317902 updates 2020-01-11 13:11:30 -08:00
Glenn Jocher
1638ab71cd updates 2020-01-10 23:31:25 -08:00
Glenn Jocher
fc0748f876 updates 2020-01-10 23:28:54 -08:00
Glenn Jocher
ba265d91b2 updates 2020-01-10 16:09:36 -08:00
Glenn Jocher
b7a25e60ce updates 2020-01-10 13:41:47 -08:00
Glenn Jocher
c8a67adecc updates 2020-01-10 12:49:22 -08:00
Glenn Jocher
3505b57421 updates 2020-01-10 11:55:54 -08:00
Glenn Jocher
6e52f985fe updates 2020-01-10 11:45:51 -08:00
Glenn Jocher
6235d76976 updates 2020-01-10 10:12:40 -08:00
Glenn Jocher
793f6389dc updates 2020-01-10 09:30:05 -08:00
Glenn Jocher
0219eb094e updates 2020-01-09 21:05:26 -08:00
Glenn Jocher
759d275017 updates 2020-01-09 14:07:55 -08:00
Glenn Jocher
bb9c6e7a8f updates 2020-01-09 10:10:20 -08:00
Glenn Jocher
6b2153d334 updates 2020-01-09 09:59:53 -08:00
Glenn Jocher
5e5f3467d4 updates 2020-01-09 09:57:07 -08:00
Glenn Jocher
0afcd9db8a updates 2020-01-09 09:56:16 -08:00
Glenn Jocher
c1527e4ab1 updates 2020-01-08 18:48:41 -08:00
Glenn Jocher
fd8cd377c3 updates 2020-01-08 16:39:59 -08:00
Glenn Jocher
3e5b007e3a updates 2020-01-08 16:36:35 -08:00
Glenn Jocher
11ce877bdf updates 2020-01-08 09:42:01 -08:00
Glenn Jocher
bf42c31d9e updates 2020-01-06 14:25:11 -08:00
Glenn Jocher
3b5ca2ea90 updates 2020-01-06 14:16:23 -08:00
Glenn Jocher
fd0769c476 updates 2020-01-06 13:59:08 -08:00
Glenn Jocher
af23270482 updates 2020-01-06 13:57:20 -08:00
Glenn Jocher
09ff72bc7b updates 2020-01-06 12:35:10 -08:00
Glenn Jocher
3b1caf9a43 updates 2020-01-06 11:57:12 -08:00
Glenn Jocher
04a0a6f609 updates 2020-01-05 12:50:58 -08:00
Glenn Jocher
1aedf27886 updates 2020-01-05 06:11:00 -08:00
Glenn Jocher
8ef441616d updates 2020-01-04 12:26:02 -08:00
Glenn Jocher
d197c0be75 updates 2020-01-04 11:36:36 -08:00
Glenn Jocher
efe3c319b5 updates 2020-01-03 18:06:59 -08:00
Glenn Jocher
c948a4054c updates 2020-01-03 15:41:01 -08:00
Glenn Jocher
07c40a3f14 updates 2020-01-03 14:36:39 -08:00
Glenn Jocher
4fe9c90514 updates 2020-01-03 11:53:02 -08:00
Glenn Jocher
eca1a25dcd updates 2020-01-03 09:19:18 -08:00
Glenn Jocher
c0095c2bc9 updates 2020-01-02 21:00:38 -08:00
Glenn Jocher
d9568a2239 updates 2020-01-02 12:39:20 -08:00
Glenn Jocher
0b242a438b updates 2020-01-02 11:11:45 -08:00
Glenn Jocher
e0e8b7173c updates 2020-01-02 11:11:18 -08:00
Glenn Jocher
0883d2fda1 updates 2020-01-02 11:09:10 -08:00
Glenn Jocher
8841c4980c updates 2020-01-02 10:03:22 -08:00
Glenn Jocher
23288236a6 updates 2020-01-02 09:50:11 -08:00
Glenn Jocher
77850a2198 updates 2020-01-01 22:44:21 -08:00
Glenn Jocher
d92b75aec8 updates 2020-01-01 12:44:33 -08:00
Glenn Jocher
935bbfcc2b updates 2019-12-31 12:07:31 -08:00
Glenn Jocher
6290f9fdb7 updates 2019-12-30 16:13:06 -08:00
Glenn Jocher
9dd1316a70 updates 2019-12-30 15:41:47 -08:00
Glenn Jocher
d30e4eea37 updates 2019-12-30 15:39:17 -08:00
Glenn Jocher
2cf31ab7bc updates 2019-12-30 13:46:40 -08:00
Glenn Jocher
7b6bd39c9e updates 2019-12-30 13:46:21 -08:00
Glenn Jocher
cf92235b8d updates 2019-12-30 13:39:25 -08:00
Glenn Jocher
9e58191983 updates 2019-12-30 13:31:32 -08:00
Glenn Jocher
14ac814cf9 updates 2019-12-30 13:30:58 -08:00
Glenn Jocher
017a5ddad0 updates 2019-12-30 13:28:46 -08:00
Glenn Jocher
ad20ccce65 updates 2019-12-30 13:28:32 -08:00
Glenn Jocher
121526aa98 updates 2019-12-30 13:15:10 -08:00
Glenn Jocher
e4a797fc1e updates 2019-12-30 13:09:16 -08:00
Glenn Jocher
88579bd24e updates 2019-12-30 12:01:52 -08:00
Glenn Jocher
b636f7f7ab updates 2019-12-30 11:57:36 -08:00
Glenn Jocher
f3e87862a4 updates 2019-12-29 15:31:57 -08:00
Glenn Jocher
d13312b751 updates 2019-12-29 14:54:08 -08:00
Glenn Jocher
894218390b updates 2019-12-29 14:28:56 -08:00
Glenn Jocher
f964f29567 updates 2019-12-29 10:02:41 -08:00
Glenn Jocher
5f9229ecaf updates 2019-12-28 21:58:05 -08:00
Glenn Jocher
2e680fb544 updates 2019-12-28 20:22:44 -08:00
Glenn Jocher
609a9d94cf updates 2019-12-27 20:32:01 -08:00
Glenn Jocher
e5b5d6a880 updates 2019-12-27 14:58:31 -08:00
Glenn Jocher
162ddcf6c7 updates 2019-12-27 13:04:46 -08:00
Glenn Jocher
4843cc4e08 updates 2019-12-27 12:52:01 -08:00
Glenn Jocher
d7ea668c42 updates 2019-12-27 12:41:51 -08:00
Glenn Jocher
d859957c66 updates 2019-12-27 12:34:29 -08:00
Glenn Jocher
7ef7501c36 updates 2019-12-27 12:14:01 -08:00
Glenn Jocher
59de209ab2 updates 2019-12-27 11:51:27 -08:00
Glenn Jocher
043a0e457c updates 2019-12-27 11:30:27 -08:00
Glenn Jocher
56d7261083 updates 2019-12-27 10:52:19 -08:00
Glenn Jocher
2cc805edda updates 2019-12-27 10:31:12 -08:00
Glenn Jocher
45b7dfc054 updates 2019-12-27 10:08:58 -08:00
Glenn Jocher
440769b954 updates 2019-12-27 09:55:10 -08:00
Glenn Jocher
1c07b1906c updates 2019-12-27 09:41:19 -08:00
Glenn Jocher
b58f41ef53 updates 2019-12-27 09:29:09 -08:00
Glenn Jocher
2fe6c21ce8 updates 2019-12-27 09:28:10 -08:00
Glenn Jocher
0bdbe5648d updates 2019-12-27 08:16:18 -08:00
Glenn Jocher
4bbed32f01 updates 2019-12-27 08:10:05 -08:00
Glenn Jocher
a5f923d697 updates 2019-12-26 12:53:11 -08:00
Glenn Jocher
326503425b updates 2019-12-26 12:52:25 -08:00
Glenn Jocher
b4552091dc updates 2019-12-26 12:31:30 -08:00
Glenn Jocher
fea54c4a85 updates 2019-12-26 12:30:51 -08:00
Glenn Jocher
8ae06ad7c3 updates 2019-12-25 19:58:20 -08:00
Glenn Jocher
d1087f4987 updates 2019-12-25 14:55:11 -08:00
Glenn Jocher
34d9392bac updates 2019-12-25 14:47:50 -08:00
Glenn Jocher
cdc382e313 updates 2019-12-24 14:11:31 -08:00
Glenn Jocher
2ee0d0c714 updates 2019-12-24 13:59:20 -08:00
Glenn Jocher
f7ac56db39 updates 2019-12-24 13:58:45 -08:00
Glenn Jocher
8319011489 updates 2019-12-24 13:57:12 -08:00
Glenn Jocher
d595f0847d updates 2019-12-24 13:41:52 -08:00
Glenn Jocher
3c4e7751ed updates 2019-12-24 13:11:01 -08:00
Glenn Jocher
0f225afe33 updates 2019-12-24 12:42:22 -08:00
Glenn Jocher
804f82a4b0 updates 2019-12-24 12:26:35 -08:00
Glenn Jocher
1e1cffae8b updates 2019-12-23 23:34:30 -08:00
Glenn Jocher
05a9a6205f updates 2019-12-23 23:28:56 -08:00
Glenn Jocher
61609b54b1 updates 2019-12-23 20:52:57 -08:00
Glenn Jocher
f04fb9a9cd updates 2019-12-23 18:02:03 -08:00
Glenn Jocher
ba24e26f7e updates 2019-12-23 15:43:00 -08:00
Glenn Jocher
78dfa384ee updates 2019-12-23 12:24:48 -08:00
Glenn Jocher
c459bc6d4a updates 2019-12-23 12:11:37 -08:00
Glenn Jocher
26ed5e2ddc updates 2019-12-23 11:25:15 -08:00
Glenn Jocher
6946a2a8fc updates 2019-12-23 11:14:34 -08:00
Glenn Jocher
efc5ee480c updates 2019-12-23 11:13:00 -08:00
Glenn Jocher
db26b08f5b updates 2019-12-23 11:05:55 -08:00
Glenn Jocher
06e88fec08 updates 2019-12-23 10:33:58 -08:00
Glenn Jocher
209cc9e124 updates 2019-12-23 10:31:37 -08:00
Glenn Jocher
fd3a6a4cba updates 2019-12-23 10:30:13 -08:00
Glenn Jocher
f995d6093c updates 2019-12-23 10:22:07 -08:00
Glenn Jocher
a5160b44ca updates 2019-12-23 10:13:20 -08:00
Glenn Jocher
a51d83df33 updates 2019-12-23 10:11:12 -08:00
Glenn Jocher
dd5ead5b1d updates 2019-12-23 10:10:24 -08:00
Glenn Jocher
61009dbde8 updates 2019-12-23 08:27:21 -08:00
Glenn Jocher
80692334f4 updates 2019-12-23 08:25:40 -08:00
Glenn Jocher
d391f6d59b updates 2019-12-22 17:36:51 -08:00
Glenn Jocher
52573eb0bc updates 2019-12-22 16:21:17 -08:00
Glenn Jocher
0e17fb5905 updates 2019-12-22 16:05:43 -08:00
Glenn Jocher
0e54731bb8 updates 2019-12-22 14:19:46 -08:00
Glenn Jocher
a0b4d17f7e updates 2019-12-22 14:05:40 -08:00
Glenn Jocher
654b9834c2 updates 2019-12-22 13:28:51 -08:00
Glenn Jocher
62516f1919 updates 2019-12-22 13:26:46 -08:00
Glenn Jocher
e0833ed21e updates 2019-12-22 13:07:00 -08:00
Glenn Jocher
a96285870d updates 2019-12-22 13:04:44 -08:00
Glenn Jocher
8a5c520291 updates 2019-12-22 13:04:05 -08:00
Glenn Jocher
5766b5c555 updates 2019-12-22 13:03:45 -08:00
Glenn Jocher
8aeef8da72 updates 2019-12-22 11:08:02 -08:00
Glenn Jocher
c693219e57 updates 2019-12-22 08:12:23 -08:00
Glenn Jocher
707ce8cacb updates 2019-12-21 20:45:00 -08:00
Glenn Jocher
f00de54546 updates 2019-12-21 20:17:56 -08:00
Glenn Jocher
efb3768fff updates 2019-12-21 20:10:55 -08:00
Glenn Jocher
acdbaa7702 updates 2019-12-21 19:56:52 -08:00
Glenn Jocher
5e203d3b1a updates 2019-12-21 19:48:07 -08:00
Glenn Jocher
b7a53957b3 updates 2019-12-21 19:47:49 -08:00
Glenn Jocher
66fe3db8fb updates 2019-12-21 19:39:45 -08:00
Glenn Jocher
d56efafee1 updates 2019-12-21 19:30:22 -08:00
Glenn Jocher
3e33adb935 updates 2019-12-21 19:23:50 -08:00
Glenn Jocher
69da7e9da5 updates 2019-12-21 12:00:16 -08:00
Glenn Jocher
587b7a8dd0 updates 2019-12-21 09:32:47 -08:00
Glenn Jocher
083d482561 updates 2019-12-20 11:24:21 -08:00
Glenn Jocher
3854b933c3 updates 2019-12-20 11:18:55 -08:00
Glenn Jocher
821cf9a189 updates 2019-12-20 10:24:49 -08:00
Glenn Jocher
05b1e437a0 updates 2019-12-20 09:59:25 -08:00
Glenn Jocher
25580dfb84 updates 2019-12-20 09:44:21 -08:00
Glenn Jocher
9420b4d4bc updates 2019-12-20 09:23:33 -08:00
Glenn Jocher
43e3bccc73 updates 2019-12-20 09:10:35 -08:00
Glenn Jocher
442dbb6acf updates 2019-12-20 09:08:57 -08:00
Glenn Jocher
2bc6683325 updates 2019-12-20 09:07:25 -08:00
Glenn Jocher
8d54770859 updates 2019-12-20 08:41:28 -08:00
Glenn Jocher
2e1c415e59 updates 2019-12-19 20:07:58 -08:00
Glenn Jocher
9309d35478 updates 2019-12-19 19:35:14 -08:00
Glenn Jocher
ce9a2cb9d2 updates 2019-12-19 19:23:09 -08:00
Glenn Jocher
9048d96c71 updates 2019-12-19 18:56:40 -08:00
Glenn Jocher
aaaaa06156 updates 2019-12-19 18:55:48 -08:00
Glenn Jocher
674d0de170 updates 2019-12-19 18:32:45 -08:00
Glenn Jocher
fd949a8eb3 Merge remote-tracking branch 'origin/master' 2019-12-19 18:09:20 -08:00
Glenn Jocher
f5cd3596f5 updates 2019-12-19 18:09:13 -08:00
Marc
eac2c010c4 return kmeans targets (#722)
2019-12-18 13:13:20 -08:00
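For context, the kmeans utility this PR touches computes anchor boxes by clustering label width/height pairs. A rough numpy/scipy sketch of the idea, with stand-in data (not the exact utils.py code):

```python
import numpy as np
from scipy.cluster.vq import kmeans

wh = np.abs(np.random.randn(1000, 2)) * 100 + 10  # stand-in label widths/heights
s = wh.std(0)                                     # whiten for k-means stability
k, _ = kmeans(wh / s, 9)                          # 9 clusters -> 9 anchors
anchors = k * s                                   # un-whiten back to pixel units
print(anchors[np.argsort(anchors.prod(1))])       # anchors sorted by area
```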
Glenn Jocher
ad73ce4334 updates 2019-12-18 10:24:10 -08:00
Glenn Jocher
8385f613d2 updates 2019-12-18 09:45:34 -08:00
Glenn Jocher
adc2663fe7 updates 2019-12-17 12:26:42 -08:00
Glenn Jocher
a5677d3f90 updates 2019-12-17 10:14:18 -08:00
Glenn Jocher
ecce92d5d8 updates 2019-12-16 22:18:26 -08:00
Glenn Jocher
9c03ac3b74 updates 2019-12-16 16:36:12 -08:00
Glenn Jocher
8666413c47 updates 2019-12-16 16:29:40 -08:00
Glenn Jocher
d7b010c514 updates 2019-12-16 15:49:15 -08:00
Glenn Jocher
9064d42b93 updates 2019-12-15 21:10:40 -08:00
Glenn Jocher
87c5e43e8c updates 2019-12-15 12:47:53 -08:00
Glenn Jocher
d884c33d21 updates 2019-12-15 12:43:30 -08:00
Glenn Jocher
03b5408e70 updates 2019-12-15 12:15:56 -08:00
Glenn Jocher
8164b305e5 updates 2019-12-14 15:45:40 -08:00
Glenn Jocher
8c13717f48 updates 2019-12-14 15:40:01 -08:00
Glenn Jocher
eb70e4b751 updates 2019-12-14 15:15:43 -08:00
Glenn Jocher
ddaa2976d7 updates 2019-12-14 15:15:20 -08:00
Glenn Jocher
df1be4c748 updates 2019-12-14 14:49:18 -08:00
Glenn Jocher
c0a7ace766 updates 2019-12-13 23:15:56 -08:00
Glenn Jocher
9c11bfe792 updates 2019-12-13 23:09:31 -08:00
Glenn Jocher
fa7a5fea2b updates 2019-12-13 19:32:09 -08:00
Glenn Jocher
0b19d4eb3d updates 2019-12-13 19:31:50 -08:00
Glenn Jocher
8638317bbb updates 2019-12-13 19:13:06 -08:00
Glenn Jocher
64c1ac3357 updates 2019-12-13 19:10:57 -08:00
Glenn Jocher
0465500b37 updates 2019-12-13 18:52:08 -08:00
Glenn Jocher
a4bdb8ce2e updates 2019-12-13 17:46:42 -08:00
Glenn Jocher
4b368b704b updates 2019-12-13 17:34:19 -08:00
Glenn Jocher
6b8425b9ec updates 2019-12-13 17:31:27 -08:00
Glenn Jocher
0a489bc1c3 updates 2019-12-13 16:21:03 -08:00
Glenn Jocher
dbbe406ac6 updates 2019-12-13 15:47:48 -08:00
Glenn Jocher
9c36d5efcd updates 2019-12-13 14:03:17 -08:00
Glenn Jocher
074a9250d8 updates 2019-12-13 12:27:52 -08:00
Glenn Jocher
1bb738c83f updates 2019-12-13 11:49:29 -08:00
Glenn Jocher
3f06fe6b12 updates 2019-12-13 11:05:05 -08:00
Glenn Jocher
b87bfa32c3 updates 2019-12-12 13:56:56 -08:00
Glenn Jocher
8d8daff390 updates 2019-12-11 13:40:11 -08:00
Glenn Jocher
db0e5cba6f updates 2019-12-11 13:30:54 -08:00
Glenn Jocher
2ca4517813 updates 2019-12-11 13:25:35 -08:00
Glenn Jocher
96a94b8cb9 updates 2019-12-11 13:21:39 -08:00
Glenn Jocher
25a11972d6
Update .dockerignore 2019-12-11 12:19:47 -08:00
Thomas Havlik
1a22bf9211 added coco/ to .dockerignore (#701) 2019-12-11 12:17:53 -08:00
Glenn Jocher
5f912d3add updates 2019-12-11 11:53:23 -08:00
Glenn Jocher
a6f87a28e7 updates 2019-12-10 20:02:58 -08:00
Glenn Jocher
9f24c12c14 updates 2019-12-10 18:25:14 -08:00
Glenn Jocher
bb1a87d77f updates 2019-12-10 18:04:24 -08:00
Glenn Jocher
2201cb4023 updates 2019-12-09 15:54:46 -08:00
Glenn Jocher
86588f1579 updates 2019-12-09 14:20:36 -08:00
Glenn Jocher
f430ddb103 updates 2019-12-09 13:49:50 -08:00
Glenn Jocher
8c5ebdf055 updates 2019-12-09 13:39:35 -08:00
Glenn Jocher
a6980a0f14 updates 2019-12-09 13:37:58 -08:00
Glenn Jocher
3bfbab7afd updates 2019-12-09 13:25:34 -08:00
Glenn Jocher
07c1fafba8 updates 2019-12-09 13:17:30 -08:00
Glenn Jocher
2391996474 updates 2019-12-08 20:15:25 -08:00
Glenn Jocher
2300cb964a updates 2019-12-08 19:58:42 -08:00
Glenn Jocher
37fa9afaff updates 2019-12-08 19:58:10 -08:00
Glenn Jocher
1bf717ef9c updates 2019-12-08 19:26:03 -08:00
Glenn Jocher
194b396187 updates 2019-12-08 19:22:33 -08:00
Glenn Jocher
35177c0e47 updates 2019-12-08 18:30:36 -08:00
Glenn Jocher
ca5da3dfe0 updates 2019-12-08 18:30:10 -08:00
Glenn Jocher
d603ac8e69 updates 2019-12-08 18:08:19 -08:00
Glenn Jocher
61c3cb9ecf updates 2019-12-08 17:57:23 -08:00
Glenn Jocher
e35397ee41 updates 2019-12-08 17:52:44 -08:00
Glenn Jocher
4942aacef9 updates 2019-12-08 17:19:42 -08:00
Glenn Jocher
b913d1ab55 updates 2019-12-08 17:00:13 -08:00
Glenn Jocher
50866ddaa9 updates 2019-12-08 16:44:33 -08:00
Glenn Jocher
b759356d2f updates 2019-12-08 16:36:52 -08:00
Glenn Jocher
267367b105 updates 2019-12-08 16:34:27 -08:00
Glenn Jocher
0fa4e498c1 updates 2019-12-08 16:34:01 -08:00
Glenn Jocher
3953d5c8b0 updates 2019-12-08 16:21:57 -08:00
Glenn Jocher
ea2076a6d2 updates 2019-12-08 16:20:27 -08:00
Glenn Jocher
29e60e50e5 updates 2019-12-08 16:16:16 -08:00
Glenn Jocher
b81c17aa9f updates 2019-12-08 16:02:55 -08:00
Glenn Jocher
01d9d551c3 updates 2019-12-08 15:35:13 -08:00
Glenn Jocher
638ecbe894 updates 2019-12-08 13:04:40 -08:00
Glenn Jocher
f373764e4d updates 2019-12-08 12:26:31 -08:00
Glenn Jocher
b81beb0f5f updates 2019-12-07 22:55:26 -08:00
Glenn Jocher
1f943e886f updates 2019-12-07 15:17:29 -08:00
Glenn Jocher
6fd450c904 updates 2019-12-07 15:06:38 -08:00
Glenn Jocher
0147b5036e updates 2019-12-07 15:05:00 -08:00
Glenn Jocher
91b5fb3c9f updates 2019-12-07 15:04:29 -08:00
Glenn Jocher
562ec85102 updates 2019-12-07 15:02:35 -08:00
Glenn Jocher
55ba979816 updates 2019-12-07 01:26:41 -08:00
Glenn Jocher
e6ae688bd3 updates 2019-12-07 00:55:36 -08:00
Glenn Jocher
c631cc2156 updates 2019-12-07 00:10:14 -08:00
Glenn Jocher
bb54408f73 updates 2019-12-07 00:05:37 -08:00
Glenn Jocher
d5176e4fc4 updates 2019-12-07 00:01:18 -08:00
Glenn Jocher
2c0985f366 updates 2019-12-06 23:58:47 -08:00
Glenn Jocher
a066a7b8ea updates 2019-12-06 19:05:51 -08:00
Glenn Jocher
115b333371 updates 2019-12-06 17:33:32 -08:00
Glenn Jocher
f2d47c1256 updates 2019-12-06 17:33:17 -08:00
Glenn Jocher
4988397458 updates 2019-12-06 17:31:07 -08:00
Glenn Jocher
ddaadf1bf9 updates 2019-12-06 17:24:15 -08:00
Glenn Jocher
af8af1ce68 updates 2019-12-06 16:35:15 -08:00
Glenn Jocher
5e747f8da9 updates 2019-12-06 14:13:07 -08:00
Glenn Jocher
3bd00360bc updates 2019-12-06 13:50:16 -08:00
Glenn Jocher
c702916495 updates 2019-12-06 13:47:17 -08:00
Glenn Jocher
ef133382c5 updates 2019-12-06 13:44:13 -08:00
Glenn Jocher
6067b22605 updates 2019-12-06 13:30:14 -08:00
Glenn Jocher
61e3fc1f8e updates 2019-12-06 12:56:22 -08:00
Glenn Jocher
d00a91aa1b updates 2019-12-06 11:02:02 -08:00
Glenn Jocher
6340074c2a updates 2019-12-05 20:24:42 -08:00
Glenn Jocher
d08cdad4af updates 2019-12-05 11:01:10 -08:00
Glenn Jocher
035faa6694 updates 2019-12-05 00:35:07 -08:00
Glenn Jocher
2421f3e252 updates 2019-12-05 00:17:27 -08:00
Glenn Jocher
28c103108b Merge remote-tracking branch 'origin/master'
# Conflicts:
#	test.py
#	train.py
2019-12-04 23:02:58 -08:00
Glenn Jocher
63c2736c12 updates 2019-12-04 23:02:32 -08:00
Yonghye Kwon
7f6bb9a39f efficiently call the test dataloader during training (#688)
* efficiently call the test dataloader

* efficiently call the test dataloader during training

* Update test.py

* Update train.py

* Update train.py
2019-12-04 23:02:10 -08:00
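The change above boils down to constructing the test DataLoader once, outside the epoch loop, rather than rebuilding it on every call to test(). A self-contained sketch with stand-in datasets (the real code builds LoadImagesAndLabels datasets):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

train_ds = TensorDataset(torch.randn(64, 3, 32, 32), torch.zeros(64, dtype=torch.long))
test_ds = TensorDataset(torch.randn(16, 3, 32, 32), torch.zeros(16, dtype=torch.long))

trainloader = DataLoader(train_ds, batch_size=8, shuffle=True)
testloader = DataLoader(test_ds, batch_size=8)  # built once, before the epoch loop

for epoch in range(3):
    for imgs, labels in trainloader:
        pass  # training step would go here
    for imgs, labels in testloader:  # reused every epoch, never re-created
        pass  # evaluation step would go here
```

Reusing the loader avoids re-spawning worker processes and re-scanning the dataset at every evaluation.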
Glenn Jocher
e27b124828 updates 2019-12-04 17:50:52 -08:00
Glenn Jocher
0a04eb9ff1 updates 2019-12-04 15:15:42 -08:00
Glenn Jocher
a2dc8a6b5a updates 2019-12-04 15:15:23 -08:00
Glenn Jocher
54daa69adb updates 2019-12-04 15:10:16 -08:00
Glenn Jocher
5a14f54b2d updates 2019-12-04 14:24:09 -08:00
Glenn Jocher
d2a9cc662c updates 2019-12-04 11:19:17 -08:00
Glenn Jocher
31b49cf870 updates 2019-12-04 10:36:39 -08:00
Glenn Jocher
24247450e2 updates 2019-12-04 09:07:38 -08:00
Glenn Jocher
b9fa92d3f7 updates 2019-12-03 17:22:58 -08:00
Glenn Jocher
cae901c2da updates 2019-12-03 15:34:20 -08:00
Glenn Jocher
fcdbd3ee35 updates 2019-12-03 13:49:20 -08:00
Glenn Jocher
0dd0fa7938 updates 2019-12-03 13:34:22 -08:00
Glenn Jocher
896fd6d025 updates 2019-12-03 12:52:19 -08:00
Glenn Jocher
31a0826297 Merge remote-tracking branch 'origin/master' 2019-12-03 12:50:13 -08:00
Glenn Jocher
c865d93403 updates 2019-12-03 12:50:04 -08:00
Glenn Jocher
7bd828859c Update issue templates 2019-12-03 12:36:26 -08:00
Glenn Jocher
384b299099 Update issue templates 2019-12-03 12:35:57 -08:00
Glenn Jocher
89e908dbb3 Update issue templates 2019-12-03 12:35:00 -08:00
Glenn Jocher
f6caec195d Update issue templates 2019-12-03 12:32:43 -08:00
Glenn Jocher
1e9ddc5a90 updates 2019-12-02 19:44:10 -08:00
Glenn Jocher
0fe246f399 updates 2019-12-02 18:22:21 -08:00
Glenn Jocher
cadd2f75ff updates 2019-12-02 16:46:15 -08:00
Glenn Jocher
cba3120ca6 updates 2019-12-02 15:26:36 -08:00
Glenn Jocher
ebb4d4c884 updates 2019-12-02 14:31:04 -08:00
Glenn Jocher
d68a59bffc updates 2019-12-02 14:23:20 -08:00
Glenn Jocher
93a70d958a updates 2019-12-02 11:31:19 -08:00
Glenn Jocher
3d91731519 updates 2019-12-01 14:07:09 -08:00
Glenn Jocher
e637ae44dd updates 2019-12-01 14:06:11 -08:00
Glenn Jocher
d6a7a614dc updates 2019-12-01 13:51:55 -08:00
Glenn Jocher
92690302bb updates 2019-12-01 13:49:38 -08:00
Glenn Jocher
033c51ed90 updates 2019-11-30 20:48:49 -08:00
Glenn Jocher
5455ddd6f7 updates 2019-11-30 20:47:14 -08:00
Glenn Jocher
5bcc2b38b8 updates 2019-11-30 19:24:08 -08:00
Glenn Jocher
93c348f353 updates 2019-11-30 18:52:37 -08:00
Glenn Jocher
6a05cf56c2 updates 2019-11-30 18:45:43 -08:00
Glenn Jocher
8be4b41b3d updates 2019-11-30 18:19:17 -08:00
Glenn Jocher
34155887bc updates 2019-11-30 17:48:21 -08:00
Glenn Jocher
6992c68e33 updates 2019-11-30 17:47:49 -08:00
Glenn Jocher
0f6954fa04 updates 2019-11-30 17:47:33 -08:00
Glenn Jocher
a699c901d3 updates 2019-11-30 17:38:29 -08:00
Glenn Jocher
f2ec1cb9ea updates 2019-11-30 17:19:44 -08:00
Glenn Jocher
4e0067cdc9 updates 2019-11-30 17:14:53 -08:00
Glenn Jocher
e28a425384 updates 2019-11-30 17:13:21 -08:00
Glenn Jocher
3cdbf246c9 updates 2019-11-30 17:03:47 -08:00
Glenn Jocher
1ff01f0973 updates 2019-11-30 16:58:56 -08:00
Glenn Jocher
ff41a15a2b updates 2019-11-30 15:34:57 -08:00
Glenn Jocher
8a13bf0f3f updates 2019-11-30 15:33:10 -08:00
Glenn Jocher
23ca2f2e7e updates 2019-11-30 15:32:39 -08:00
Glenn Jocher
9f0273a459 updates 2019-11-30 14:43:23 -08:00
Glenn Jocher
937d8fa53e updates 2019-11-30 14:17:32 -08:00
Glenn Jocher
8d4790349b updates 2019-11-30 14:16:01 -08:00
Glenn Jocher
5a1bc71406 updates 2019-11-30 13:20:22 -08:00
Glenn Jocher
8afc18e028 updates 2019-11-30 13:00:20 -08:00
Glenn Jocher
f365946c2f updates 2019-11-30 12:43:41 -08:00
Glenn Jocher
e613bbc88c updates 2019-11-29 19:10:01 -08:00
Glenn Jocher
77012f8f97 updates 2019-11-29 18:20:57 -08:00
Glenn Jocher
51d666a81a updates 2019-11-28 09:05:13 -10:00
Glenn Jocher
6258061a81 updates 2019-11-27 23:36:02 -10:00
Glenn Jocher
bccff3bfc1 updates 2019-11-27 23:31:25 -10:00
Glenn Jocher
340e0371f8 updates 2019-11-27 22:36:01 -10:00
Glenn Jocher
9e9a6a1425 updates 2019-11-27 15:50:29 -10:00
Glenn Jocher
82b62c9855 updates 2019-11-27 15:50:00 -10:00
Glenn Jocher
4b251406e2 updates 2019-11-27 15:04:05 -10:00
Glenn Jocher
91fca0e17d updates 2019-11-27 15:03:05 -10:00
Glenn Jocher
9319ae8ff9 updates 2019-11-27 15:00:41 -10:00
Glenn Jocher
413afab11c updates 2019-11-27 14:59:46 -10:00
Glenn Jocher
9c1d7d5248 updates 2019-11-27 14:52:33 -10:00
Glenn Jocher
ea19c33a87 updates 2019-11-27 14:35:18 -10:00
Glenn Jocher
3dec99b16c updates 2019-11-26 16:03:45 -10:00
Glenn Jocher
0417b3a527 updates 2019-11-26 13:53:05 -10:00
Glenn Jocher
78a2de52b5 updates 2019-11-26 13:23:47 -10:00
Glenn Jocher
b04392e298 updates 2019-11-26 12:59:13 -10:00
Glenn Jocher
40ae87cb46 updates 2019-11-26 12:36:21 -10:00
Glenn Jocher
0fe40cb687 updates 2019-11-26 12:34:47 -10:00
Glenn Jocher
92f742618c updates 2019-11-26 10:26:14 -10:00
Glenn Jocher
b269ed7b29 updates 2019-11-25 18:42:48 -10:00
Glenn Jocher
3c57ff7b1b updates 2019-11-25 17:24:05 -10:00
Glenn Jocher
90cfb91858 updates 2019-11-25 17:13:10 -10:00
Glenn Jocher
75e8ec323f updates 2019-11-25 11:45:28 -10:00
Glenn Jocher
0245ff9133 updates 2019-11-25 08:26:41 -10:00
Francisco Reveriano
26e3a28bee Update train.py for distributed training (#655)
When attempting to run this function in a multi-GPU environment I kept getting a runtime error. I was able to solve the problem by passing this keyword argument. I first found the solution here:
https://github.com/pytorch/pytorch/issues/22436
and in the PyTorch tutorial:

'RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). '
2019-11-24 22:21:36 -10:00
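For reference, the fix amounts to one keyword argument when wrapping the model. A minimal single-process sketch that runs standalone on CPU (the real train.py uses the nccl backend across multiple GPUs):

```python
import os
import torch.distributed as dist
import torch.nn as nn

# Minimal single-process group so the sketch runs with the gloo backend.
os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
os.environ.setdefault('MASTER_PORT', '29500')
dist.init_process_group(backend='gloo', rank=0, world_size=1)

model = nn.Conv2d(3, 16, 3)  # stand-in for the Darknet model
ddp_model = nn.parallel.DistributedDataParallel(
    model,
    find_unused_parameters=True,  # the keyword this PR passes to avoid the quoted RuntimeError
)
dist.destroy_process_group()
```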
Glenn Jocher
a0ef217842 updates 2019-11-24 20:10:39 -10:00
Glenn Jocher
9b55bbf9e2 updates 2019-11-24 20:08:24 -10:00
Glenn Jocher
7773651e8e updates 2019-11-24 18:38:30 -10:00
Glenn Jocher
2f1c9a3f6f updates 2019-11-24 18:31:06 -10:00
Glenn Jocher
f12a2a513a updates 2019-11-24 18:29:29 -10:00
Glenn Jocher
5f00d7419e updates 2019-11-23 19:27:33 -10:00
Glenn Jocher
4aff400777 updates 2019-11-23 19:23:31 -10:00
Glenn Jocher
b027c66048 updates 2019-11-23 13:34:37 -10:00
Glenn Jocher
6c6aa483d7 updates 2019-11-23 13:23:38 -10:00
Glenn Jocher
46161ed94d updates 2019-11-23 12:09:46 -10:00
Glenn Jocher
55a6b05228 updates 2019-11-23 09:35:11 -10:00
Glenn Jocher
bdf11ffdf1 updates 2019-11-23 09:25:21 -10:00
Glenn Jocher
d623a425d9 updates 2019-11-22 16:20:11 -10:00
Glenn Jocher
f1e8d23d39 updates 2019-11-22 14:36:49 -10:00
Glenn Jocher
4c61611ce0 updates 2019-11-22 14:20:35 -10:00
Glenn Jocher
a137c21dc0 updates 2019-11-22 14:06:16 -10:00
Glenn Jocher
54d907d8c8 updates 2019-11-22 14:03:46 -10:00
Glenn Jocher
46da9fd26c updates 2019-11-22 13:38:28 -10:00
Glenn Jocher
bbd6c884e6 updates 2019-11-22 13:27:23 -10:00
Glenn Jocher
e701979862 updates 2019-11-22 13:03:29 -10:00
Glenn Jocher
3834b77961 updates 2019-11-21 11:52:48 -08:00
Glenn Jocher
7c59715fda updates 2019-11-21 00:00:17 -08:00
Glenn Jocher
f38723c0bd updates 2019-11-20 19:34:22 -08:00
Glenn Jocher
a0067ac8fb updates 2019-11-20 19:10:36 -08:00
Glenn Jocher
74b57500c7 updates 2019-11-20 16:02:57 -08:00
Glenn Jocher
3a4ed8b3ab updates 2019-11-20 13:40:24 -08:00
Glenn Jocher
bb209111c4 updates 2019-11-20 13:36:15 -08:00
Glenn Jocher
8e327e3bd0 updates 2019-11-20 13:33:25 -08:00
Glenn Jocher
2950f4c816 updates 2019-11-20 13:26:50 -08:00
Glenn Jocher
c14ea59c71 updates 2019-11-20 13:24:50 -08:00
Glenn Jocher
bd498ae776 updates 2019-11-20 13:14:24 -08:00
Glenn Jocher
bac4cc58fd updates 2019-11-20 12:51:05 -08:00
Glenn Jocher
e58f0a68b6 updates 2019-11-20 12:05:40 -08:00
Glenn Jocher
429d44282c updates 2019-11-19 20:42:44 -08:00
Glenn Jocher
253e746d30 updates 2019-11-19 19:00:40 -08:00
Glenn Jocher
d355e539d9 updates 2019-11-19 18:47:22 -08:00
Glenn Jocher
d94b6e88e3 updates 2019-11-19 18:16:35 -08:00
Glenn Jocher
d9805d2fb6 updates 2019-11-19 12:42:12 -08:00
Glenn Jocher
b758b9c76e updates 2019-11-18 15:01:33 -08:00
Glenn Jocher
2ba1a4c9cc updates 2019-11-18 12:01:17 -08:00
Glenn Jocher
7ebb7d1310 updates 2019-11-18 10:15:17 -08:00
Glenn Jocher
9c716a39c3 updates 2019-11-17 19:00:12 -08:00
Glenn Jocher
a1151c04a7 updates 2019-11-17 18:48:50 -08:00
Glenn Jocher
b4a71d0588 updates 2019-11-17 17:17:52 -08:00
Glenn Jocher
bb936f758a updates 2019-11-17 12:21:59 -08:00
Glenn Jocher
eb32fca702 updates 2019-11-16 22:09:31 -08:00
Glenn Jocher
0466285f59 updates 2019-11-16 22:09:15 -08:00
Glenn Jocher
dc82956aff updates 2019-11-16 13:12:56 -08:00
Glenn Jocher
84cb744761 updates 2019-11-16 12:34:38 -08:00
Glenn Jocher
fe9ade6a64 updates 2019-11-16 12:07:19 -08:00
Glenn Jocher
d93ca0410b updates 2019-11-15 13:42:53 -08:00
Glenn Jocher
b6a2e1b073 updates 2019-11-14 19:14:00 -08:00
Glenn Jocher
fa7c517ece updates 2019-11-14 18:20:54 -08:00
Glenn Jocher
2433a99451 updates 2019-11-14 17:48:06 -08:00
Glenn Jocher
eb8b39535a updates 2019-11-14 17:32:28 -08:00
Glenn Jocher
985006a52a updates 2019-11-14 17:25:29 -08:00
Glenn Jocher
9daa5e858a updates 2019-11-14 17:22:09 -08:00
Glenn Jocher
fedc2150b3 updates 2019-11-14 17:12:55 -08:00
Glenn Jocher
6047be35cf updates 2019-11-14 15:08:58 -08:00
Glenn Jocher
a96e010251 updates 2019-11-14 15:07:27 -08:00
Glenn Jocher
8d5170f10f updates 2019-11-14 14:20:50 -08:00
Glenn Jocher
ac6112c184 updates 2019-11-14 13:14:47 -08:00
Glenn Jocher
444a9f7099 updates 2019-11-12 17:57:22 -08:00
Glenn Jocher
e66323e893 updates 2019-11-12 14:44:45 -08:00
Glenn Jocher
470ef6bc92 updates 2019-11-12 14:16:54 -08:00
Glenn Jocher
579fdc57f8 updates 2019-11-09 10:56:38 -08:00
Glenn Jocher
97ac36ec6c updates 2019-11-08 10:19:46 -08:00
Glenn Jocher
d0e000b008 updates 2019-11-07 20:11:03 -08:00
Glenn Jocher
d67b1cb1ad updates 2019-11-07 20:01:47 -08:00
Glenn Jocher
2efe423b34 updates 2019-11-07 19:45:22 -08:00
Glenn Jocher
8dd74426c0 updates 2019-11-07 19:34:59 -08:00
Glenn Jocher
d7f2c8ab72 updates 2019-11-07 19:32:33 -08:00
Glenn Jocher
bd10fb35c7 updates 2019-11-07 17:55:30 -08:00
Glenn Jocher
3005cd3a39 updates 2019-11-07 17:55:00 -08:00
Glenn Jocher
27b75e0d37 updates 2019-11-07 15:49:37 -08:00
Glenn Jocher
aae39ca894 updates 2019-11-07 14:46:16 -08:00
Glenn Jocher
09ca721f88 updates 2019-11-06 10:10:53 -08:00
Glenn Jocher
4320098cf5 updates 2019-11-04 18:10:47 -08:00
Glenn Jocher
f7f8bb23c2 updates 2019-11-04 16:34:45 -08:00
Glenn Jocher
fd3f2ed65f updates 2019-11-02 19:54:14 -07:00
Glenn Jocher
3ba7fc69b8 updates 2019-11-02 19:47:25 -07:00
Glenn Jocher
96263ff434 updates 2019-11-02 15:11:03 -07:00
Glenn Jocher
d5dfbedcda updates 2019-11-01 22:34:20 -07:00
Glenn Jocher
8d1ab548c1 updates 2019-10-25 11:04:10 -05:00
Glenn Jocher
b3b4ff4107 updates 2019-10-25 11:03:33 -05:00
Glenn Jocher
d957b20a53 updates 2019-10-25 11:03:04 -05:00
Glenn Jocher
39d247d7e8 updates 2019-10-25 10:55:08 -05:00
Glenn Jocher
d0e11b0ac4 updates 2019-10-25 10:39:44 -05:00
Glenn Jocher
d1271941ad updates 2019-10-16 01:40:40 +02:00
Glenn Jocher
d23ada04dc updates 2019-10-16 01:36:13 +02:00
Glenn Jocher
0be5e4132d updates 2019-10-16 01:32:07 +02:00
Glenn Jocher
376e00a3cf updates 2019-10-13 18:53:15 +02:00
Glenn Jocher
139161c522 updates 2019-10-13 18:39:32 +02:00
Glenn Jocher
725762b937 updates 2019-10-13 17:41:49 +02:00
Glenn Jocher
2f46e7d765 updates 2019-10-13 17:40:45 +02:00
Glenn Jocher
0985dc91d5 updates 2019-10-13 13:11:40 +02:00
Glenn Jocher
811b3b693f updates 2019-10-12 13:59:07 +02:00
Glenn Jocher
8397fa7a2a updates 2019-10-12 13:03:57 +02:00
Glenn Jocher
5afd90c900 updates 2019-10-12 11:22:50 +02:00
Glenn Jocher
171f25cfc6 updates 2019-10-12 01:18:41 +02:00
Glenn Jocher
a59350852b updates 2019-10-10 22:54:20 +02:00
Glenn Jocher
f67e1afe3e updates 2019-10-10 14:40:18 +02:00
Glenn Jocher
ee319aeefd updates 2019-10-09 03:16:27 +02:00
Glenn Jocher
500a798787 updates 2019-10-09 02:10:25 +02:00
Glenn Jocher
a4e9aa34ef updates 2019-10-09 00:14:27 +02:00
Glenn Jocher
1e3480d76c updates 2019-10-08 23:22:41 +02:00
Glenn Jocher
af5f7a15c5 updates 2019-10-08 19:07:28 +02:00
Glenn Jocher
8b2f85c290 updates 2019-10-08 18:13:04 +02:00
Glenn Jocher
f8aab0e952 updates 2019-10-08 15:19:13 +02:00
Glenn Jocher
8fea4514fb updates 2019-10-08 14:35:43 +02:00
Glenn Jocher
69745d8b8e updates 2019-10-08 14:08:23 +02:00
Glenn Jocher
88ba61505f updates 2019-10-08 13:30:21 +02:00
Glenn Jocher
a18ad6025f updates 2019-10-08 13:25:50 +02:00
Glenn Jocher
cfc562c2c8 updates 2019-10-08 12:35:25 +02:00
Glenn Jocher
d1398ec952 updates 2019-10-07 11:31:22 +02:00
Glenn Jocher
2e0303d44c updates 2019-10-07 00:51:10 +02:00
Glenn Jocher
1a8bbf600d updates 2019-10-07 00:50:47 +02:00
Glenn Jocher
58d510df52 updates 2019-10-06 16:30:35 +02:00
Glenn Jocher
bfcae0ac97 updates 2019-10-05 18:36:48 +02:00
Glenn Jocher
8c7a8ffecb updates 2019-10-05 15:29:27 +02:00
Glenn Jocher
b6d9a742ec updates 2019-10-05 15:28:02 +02:00
Glenn Jocher
563dad3b53 updates 2019-10-05 13:47:06 +02:00
Glenn Jocher
8610026e2c updates 2019-10-05 12:45:10 +02:00
Glenn Jocher
6345a1d218 updates 2019-10-01 17:24:33 +02:00
Glenn Jocher
84f0df6c34 updates 2019-10-01 16:04:56 +02:00
Glenn Jocher
9a48f23726 updates 2019-09-29 02:51:24 +02:00
Glenn Jocher
f9241f8861 updates 2019-09-28 23:09:06 +02:00
Glenn Jocher
004afa50fc updates 2019-09-28 01:47:22 +02:00
Glenn Jocher
b694e52e2d updates 2019-09-27 23:46:45 +02:00
Glenn Jocher
4aa60ea499 updates 2019-09-27 23:40:14 +02:00
Glenn Jocher
286e851fe7 updates 2019-09-27 23:36:42 +02:00
Glenn Jocher
b421afa508 updates 2019-09-27 23:35:04 +02:00
Glenn Jocher
c6d3efbf95 updates 2019-09-27 21:57:00 +02:00
Glenn Jocher
df8529a747 updates 2019-09-26 13:52:37 +02:00
Glenn Jocher
f146692ad0 updates 2019-09-26 12:58:26 +02:00
Glenn Jocher
cf462429d4 updates 2019-09-26 12:54:09 +02:00
Glenn Jocher
163025649a updates 2019-09-26 12:52:16 +02:00
Glenn Jocher
2487b0694f updates 2019-09-26 12:08:40 +02:00
Glenn Jocher
3072d72375 updates 2019-09-26 12:06:26 +02:00
Glenn Jocher
33e025838f updates 2019-09-26 03:30:31 +02:00
Glenn Jocher
6daebd3979 updates 2019-09-25 00:38:26 +02:00
Glenn Jocher
ccb971aa3c updates 2019-09-21 23:55:20 +02:00
Glenn Jocher
7de6584a34 updates 2019-09-21 02:46:16 +02:00
Glenn Jocher
db49211d70 updates 2019-09-20 20:31:37 +02:00
Glenn Jocher
eeb4cbc5c1 updates 2019-09-20 17:18:02 +02:00
Glenn Jocher
c9ba8ea366 updates 2019-09-20 15:24:00 +02:00
Glenn Jocher
96c442c3c3 updates 2019-09-20 15:23:08 +02:00
Glenn Jocher
2ad3276c77 updates 2019-09-20 15:02:32 +02:00
Glenn Jocher
9a18166382 updates 2019-09-20 14:58:57 +02:00
Glenn Jocher
a81f8ec0f3 updates 2019-09-20 13:22:11 +02:00
Glenn Jocher
dd913d0158 updates 2019-09-20 13:21:57 +02:00
Glenn Jocher
0e52b8f361 updates 2019-09-19 19:09:59 +02:00
Glenn Jocher
fdfc4a5e63 updates 2019-09-19 18:54:16 +02:00
Glenn Jocher
b5db03827f updates 2019-09-19 18:43:29 +02:00
Glenn Jocher
0f3f6c03e7 updates 2019-09-19 18:09:16 +02:00
Glenn Jocher
de0612ca09 updates 2019-09-19 18:08:21 +02:00
Glenn Jocher
5bacf9e0b8 updates 2019-09-19 18:05:27 +02:00
Glenn Jocher
c24702941f updates 2019-09-19 18:05:04 +02:00
Glenn Jocher
870020ed15 updates 2019-09-19 17:31:46 +02:00
Glenn Jocher
728a5698bc updates 2019-09-19 17:31:07 +02:00
Glenn Jocher
6fa58d3c40 updates 2019-09-19 17:19:43 +02:00
Glenn Jocher
dc445f42bf updates 2019-09-19 15:31:28 +02:00
Glenn Jocher
6d8e82c175 updates 2019-09-19 02:10:55 +02:00
Glenn Jocher
2f436d499a updates 2019-09-19 00:37:22 +02:00
Glenn Jocher
975b657262 updates 2019-09-19 00:36:11 +02:00
Glenn Jocher
80d71fa883 updates 2019-09-18 13:23:37 +02:00
Glenn Jocher
1f2e60ff43 updates 2019-09-18 02:25:09 +02:00
Glenn Jocher
e9437b2178 updates 2019-09-18 00:54:07 +02:00
Glenn Jocher
fff60c651a updates 2019-09-18 00:38:49 +02:00
Glenn Jocher
ce42db0d7b updates 2019-09-17 15:36:31 +02:00
Glenn Jocher
78bb153d7d updates 2019-09-17 15:16:29 +02:00
Glenn Jocher
425ed4b84b updates 2019-09-16 23:15:07 +02:00
Glenn Jocher
137aab762a updates 2019-09-16 23:10:21 +02:00
Glenn Jocher
f356b9387e updates 2019-09-16 23:09:58 +02:00
Glenn Jocher
e5ab942c14 updates 2019-09-16 22:42:52 +02:00
Glenn Jocher
ee0ce7b9bc updates 2019-09-16 22:16:41 +02:00
Glenn Jocher
c40ab12df2 updates 2019-09-16 21:08:14 +02:00
Glenn Jocher
08fa0f28bc updates 2019-09-16 20:59:51 +02:00
Glenn Jocher
87d2e51f0d updates 2019-09-16 20:05:54 +02:00
Glenn Jocher
78e9bf60d2 updates 2019-09-16 19:53:44 +02:00
Glenn Jocher
efe86b0c4c updates 2019-09-16 19:49:09 +02:00
Glenn Jocher
77dce00fa8 updates 2019-09-16 19:39:54 +02:00
Glenn Jocher
b62dc6f06a updates 2019-09-16 14:31:07 +02:00
Glenn Jocher
1ecf80bc28 updates 2019-09-13 16:29:06 +02:00
Glenn Jocher
d5b5f74167 updates 2019-09-13 16:27:15 +02:00
Glenn Jocher
4286bba40f updates 2019-09-13 16:00:52 +02:00
Glenn Jocher
5452bb7036 updates 2019-09-13 15:10:15 +02:00
Glenn Jocher
0bf08a9d93 updates 2019-09-12 15:02:59 +02:00
Glenn Jocher
121da9a6c0 updates 2019-09-12 12:52:59 +02:00
Glenn Jocher
780fa17f6a updates 2019-09-12 11:14:59 +02:00
Glenn Jocher
fb81559565 updates 2019-09-11 23:11:01 +02:00
Glenn Jocher
7997be8bba updates 2019-09-11 23:04:48 +02:00
Glenn Jocher
81e5514f4f updates 2019-09-11 22:57:56 +02:00
Glenn Jocher
6355bfa94e updates 2019-09-11 22:49:14 +02:00
Glenn Jocher
495ae6ca32 updates 2019-09-11 22:47:22 +02:00
Glenn Jocher
17cf9f4a07 updates 2019-09-11 22:21:39 +02:00
Glenn Jocher
3f6df0fb28 updates 2019-09-11 21:25:26 +02:00
Glenn Jocher
a1b50aaa43 updates 2019-09-11 21:24:22 +02:00
Glenn Jocher
806d7b92d8 updates 2019-09-11 14:25:48 +02:00
Glenn Jocher
270724e507 updates 2019-09-11 14:03:23 +02:00
Glenn Jocher
3f7f2c4a13 updates 2019-09-11 14:00:57 +02:00
Glenn Jocher
919aff828e updates 2019-09-11 13:15:16 +02:00
Glenn Jocher
a31b1489a4 updates 2019-09-10 21:25:01 +02:00
Glenn Jocher
2a75034322 updates 2019-09-10 17:28:36 +02:00
Glenn Jocher
4cabdfda3d updates 2019-09-10 17:06:06 +02:00
Glenn Jocher
0591f8da66 updates 2019-09-10 17:04:33 +02:00
Glenn Jocher
9b9d4b96a5 updates 2019-09-10 15:44:14 +02:00
Glenn Jocher
adb4894626 updates 2019-09-10 15:34:36 +02:00
Glenn Jocher
f20a03e28e updates 2019-09-10 14:59:45 +02:00
Glenn Jocher
671747318d updates 2019-09-10 14:25:56 +02:00
Glenn Jocher
d26df074a6 updates 2019-09-10 14:24:46 +02:00
Glenn Jocher
a2016201f3 updates 2019-09-10 14:04:16 +02:00
Glenn Jocher
95bc4736f3 updates 2019-09-10 13:17:05 +02:00
Glenn Jocher
256bf72f8e updates 2019-09-10 12:40:59 +02:00
Glenn Jocher
f270269d43 updates 2019-09-10 12:20:59 +02:00
Glenn Jocher
10b080d90c updates 2019-09-10 11:52:27 +02:00
Glenn Jocher
c1ad7e6c2b updates 2019-09-10 11:35:46 +02:00
Glenn Jocher
8fe2bf1d7f updates 2019-09-10 10:56:56 +02:00
Glenn Jocher
d1b6929043 updates 2019-09-10 01:34:23 +02:00
Glenn Jocher
4445715f4c updates 2019-09-09 22:42:38 +02:00
Glenn Jocher
b91899ffc4 updates 2019-09-09 21:52:29 +02:00
Glenn Jocher
ad3870c847
Update README.md 2019-09-09 21:33:54 +02:00
Glenn Jocher
d8f6adc775 updates 2019-09-09 21:18:05 +02:00
Glenn Jocher
334ea9da0d updates 2019-09-09 19:15:17 +02:00
Glenn Jocher
255e2e5a9f updates 2019-09-09 18:57:15 +02:00
Glenn Jocher
48a0f38f85 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-09 10:20:03 +02:00
Glenn Jocher
7706a1b8fb updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-09 10:19:46 +02:00
Glenn Jocher
b4b93be693 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-09 03:09:16 +02:00
Glenn Jocher
86692569bc updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-09 02:13:39 +02:00
Glenn Jocher
641996ecdf updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 15:55:27 +02:00
Glenn Jocher
94234c80b2 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 14:58:18 +02:00
Glenn Jocher
cca17f4d1e updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 14:49:10 +02:00
Glenn Jocher
24f1298949 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 14:34:44 +02:00
Glenn Jocher
abbf8de12f updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 14:32:42 +02:00
Glenn Jocher
2e6ac2228a updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 14:08:39 +02:00
Glenn Jocher
fd79bd474b updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 13:11:17 +02:00
Glenn Jocher
8da4695fd3 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 12:59:50 +02:00
Glenn Jocher
50a93e141c updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 10:05:42 +02:00
Glenn Jocher
6cd98c46d8 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-04 09:20:03 +02:00
Glenn Jocher
976eea04bd updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-03 17:23:59 +02:00
Glenn Jocher
447292eb36 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-03 16:20:55 +02:00
Glenn Jocher
b76962771e updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 20:53:49 +02:00
Glenn Jocher
0d5bf11fa5 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 16:41:41 +02:00
Glenn Jocher
109173d555 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 16:22:13 +02:00
Glenn Jocher
1e4351c4a2 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 16:11:55 +02:00
Glenn Jocher
39c198579f updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 16:09:05 +02:00
Glenn Jocher
bfe6d560c0 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 16:04:54 +02:00
Glenn Jocher
2877ac9286 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 14:28:13 +02:00
Glenn Jocher
ea61b46b31 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 14:02:08 +02:00
Glenn Jocher
32b54c81d5 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 12:09:00 +02:00
Glenn Jocher
8f913ba82a updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-02 11:59:13 +02:00
Glenn Jocher
71938356c8 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-01 17:36:42 +02:00
Glenn Jocher
b1f8267cc7 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-01 16:32:38 +02:00
Glenn Jocher
9251dfd6a5 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-01 16:28:25 +02:00
Glenn Jocher
c8c0660e6a updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-09-01 12:15:43 +02:00
Glenn Jocher
7ea0178a6c updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 21:19:02 +02:00
Glenn Jocher
516ca6c4fa updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 20:40:27 +02:00
Glenn Jocher
62f70712b6 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 20:35:21 +02:00
Glenn Jocher
4ec7cac0bf updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 20:34:14 +02:00
Glenn Jocher
30a8211064 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 20:08:02 +02:00
Glenn Jocher
eb24c46200 Merge remote-tracking branch 'origin/master'
# Conflicts:
#	detect.py
2019-08-31 20:06:58 +02:00
Glenn Jocher
38a3c7ff01 updates
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2019-08-31 20:05:59 +02:00
Glenn Jocher
80516dd758 updates 2019-08-31 19:22:53 +02:00
Glenn Jocher
0a725a4bad updates 2019-08-31 19:11:59 +02:00
Glenn Jocher
e926afd02b updates 2019-08-31 18:58:30 +02:00
Glenn Jocher
360a32811c weight_decay fix 2019-08-31 17:55:19 +02:00
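The recurring "weight_decay fix" commits here concern how decay is applied during optimization. A common pattern, sketched below under the assumption (not confirmed by the commit messages themselves) that decay should hit conv/linear weights only, not biases or BatchNorm parameters:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))  # stand-in model

decay, no_decay = [], []
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        no_decay += list(m.parameters())  # BN gamma/beta: no decay
    else:
        for name, p in m.named_parameters(recurse=False):
            (no_decay if name == 'bias' else decay).append(p)

optimizer = optim.SGD([{'params': decay, 'weight_decay': 5e-4},
                       {'params': no_decay, 'weight_decay': 0.0}],
                      lr=0.01, momentum=0.9)
```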
Glenn Jocher
cfb0b7e426 updates 2019-08-31 17:42:09 +02:00
Glenn Jocher
478303fbe8 weight_decay fix 2019-08-31 14:53:17 +02:00
Glenn Jocher
975963f762 updates 2019-08-31 13:42:28 +02:00
Glenn Jocher
fffb7fb992 weight_decay fix 2019-08-31 13:39:14 +02:00
Glenn Jocher
3c56c07f1e weight_decay fix 2019-08-31 13:21:15 +02:00
Glenn Jocher
c27d8d69a6 updates 2019-08-29 20:19:05 +02:00
Glenn Jocher
d6dd9645e9 updates 2019-08-29 20:14:17 +02:00
Glenn Jocher
6c4a855d98 weight_decay fix 2019-08-29 20:13:29 +02:00
Glenn Jocher
dc9095ac5b updates 2019-08-29 19:57:08 +02:00
Glenn Jocher
fb3bdb9372 weight_decay fix 2019-08-29 19:02:42 +02:00
Glenn Jocher
12b169158f updates 2019-08-29 18:58:09 +02:00
Glenn Jocher
894fc1c47f updates 2019-08-29 17:59:24 +02:00
Glenn Jocher
c344efc224 weight_decay fix 2019-08-29 17:41:23 +02:00
Glenn Jocher
8eb381dc88 updates 2019-08-29 15:49:51 +02:00
Glenn Jocher
408baf66e2 weight_decay fix 2019-08-29 15:44:15 +02:00
Glenn Jocher
7d9ffe6d4e weight_decay fix 2019-08-29 14:29:07 +02:00
Glenn Jocher
31d807e589 weight_decay fix 2019-08-29 14:20:54 +02:00
Glenn Jocher
85a24dbc7e weight_decay fix 2019-08-28 16:50:34 +02:00
Glenn Jocher
93b72d059e weight_decay fix 2019-08-28 16:18:18 +02:00
Glenn Jocher
23dfeacfcd weight_decay fix 2019-08-28 16:15:10 +02:00
Glenn Jocher
c906047db3 updates 2019-08-27 12:57:19 +02:00
Glenn Jocher
798a7396f1 weight_decay fix 2019-08-26 16:24:19 +02:00
Glenn Jocher
ff82e4d488 weight_decay fix 2019-08-26 14:47:36 +02:00
Glenn Jocher
883ddcc682 updates 2019-08-25 20:35:15 +02:00
Glenn Jocher
bf8f0f3987 updates 2019-08-25 20:20:15 +02:00
Glenn Jocher
6260ac266f updates 2019-08-25 20:19:53 +02:00
Glenn Jocher
c4f9e3891e updates 2019-08-25 03:03:35 +02:00
Glenn Jocher
5258ed8bdd updates 2019-08-25 02:25:05 +02:00
Glenn Jocher
14e67196ea updates 2019-08-25 02:03:52 +02:00
Glenn Jocher
991362df57 updates 2019-08-25 01:58:12 +02:00
Glenn Jocher
a85f7d967c updates 2019-08-24 23:58:08 +02:00
Glenn Jocher
3ee457cd3d updates 2019-08-24 21:50:33 +02:00
Glenn Jocher
70be0d5d14 updates 2019-08-24 21:39:25 +02:00
Glenn Jocher
1064c37600 removed xy/wh loss reporting 2019-08-24 21:35:56 +02:00
Glenn Jocher
ca38c9050f updates 2019-08-24 21:20:25 +02:00
Glenn Jocher
790e25592f removed xy/wh loss reporting 2019-08-24 20:55:01 +02:00
Glenn Jocher
25a579e417 removed xy/wh loss reporting 2019-08-24 17:16:20 +02:00
Glenn Jocher
195adaea7d removed xy/wh loss reporting 2019-08-24 16:45:49 +02:00
Glenn Jocher
39dcf0d561 removed xy/wh loss reporting 2019-08-24 16:43:43 +02:00
Glenn Jocher
852487654f updates 2019-08-24 12:52:52 +02:00
Glenn Jocher
b88c4568ba updates 2019-08-24 12:37:55 +02:00
Glenn Jocher
4b424b2381 updates 2019-08-24 12:20:43 +02:00
Glenn Jocher
bbe22dd7b4 updates 2019-08-23 17:37:29 +02:00
Glenn Jocher
2f256ee274 updates 2019-08-23 17:24:50 +02:00
Glenn Jocher
5f2b551818 updates 2019-08-23 17:18:59 +02:00
Glenn Jocher
d2ef817b1f updates 2019-08-23 16:41:28 +02:00
Glenn Jocher
06e274f7e4 updates 2019-08-23 16:04:45 +02:00
Glenn Jocher
7f8318e680 updates 2019-08-23 15:46:12 +02:00
Glenn Jocher
f0622e2510 updates 2019-08-23 15:43:16 +02:00
Glenn Jocher
7593dedc4c updates 2019-08-23 15:37:25 +02:00
Glenn Jocher
d279aa0021 updates 2019-08-23 15:35:39 +02:00
Glenn Jocher
8ef49f2560 updates 2019-08-23 15:27:29 +02:00
Glenn Jocher
356c85bf0e updates 2019-08-23 15:24:26 +02:00
Glenn Jocher
135b38e9ba updates 2019-08-23 15:17:17 +02:00
Glenn Jocher
4e8e39da93 updates 2019-08-23 13:45:49 +02:00
Glenn Jocher
cd34368ec4 updates 2019-08-23 13:44:18 +02:00
Glenn Jocher
95c55f2e62 updates 2019-08-23 13:41:12 +02:00
Glenn Jocher
d777a57b9c updates 2019-08-23 13:39:43 +02:00
Glenn Jocher
081cd17007 updates 2019-08-23 13:31:32 +02:00
Glenn Jocher
fd653eca8a updates 2019-08-23 13:25:27 +02:00
Glenn Jocher
0d71fd8228 updates 2019-08-23 12:57:26 +02:00
Glenn Jocher
ff7f73b642 updates 2019-08-23 00:36:48 +02:00
Glenn Jocher
0040c85b9a updates 2019-08-22 23:41:51 +02:00
Glenn Jocher
858fc67954 updates 2019-08-22 16:39:55 +02:00
Glenn Jocher
11aac8930b updates 2019-08-22 15:48:06 +02:00
Glenn Jocher
baf6188df5 updates 2019-08-21 02:53:12 +02:00
Glenn Jocher
1ec9527f24 updates 2019-08-21 01:35:29 +02:00
Glenn Jocher
9c6b6968ba updates 2019-08-21 00:27:11 +02:00
Glenn Jocher
7046a95d61 updates 2019-08-21 00:23:41 +02:00
Glenn Jocher
9dfce6e5de updates 2019-08-21 00:21:36 +02:00
Glenn Jocher
291def2e77 updates 2019-08-21 00:13:30 +02:00
Glenn Jocher
c97b669c46 updates 2019-08-20 14:38:56 +02:00
Glenn Jocher
ac2b9d580d updates 2019-08-20 13:39:39 +02:00
Glenn Jocher
3147fd62e6 updates 2019-08-19 19:18:16 +02:00
Glenn Jocher
cfbc269fd0 updates 2019-08-19 18:21:15 +02:00
Glenn Jocher
44ea6984f9 updates 2019-08-19 18:03:33 +02:00
Glenn Jocher
d73ad897a4 Merge remote-tracking branch 'origin/master' 2019-08-19 17:07:25 +02:00
Glenn Jocher
d94ccd105d updates 2019-08-19 17:07:16 +02:00
Glenn Jocher
7413ea7f13
Update README.md 2019-08-19 14:52:53 +02:00
Glenn Jocher
4d1657afc9
Update examples.ipynb 2019-08-19 12:30:42 +02:00
Glenn Jocher
b64f75ead6
Update README.md 2019-08-19 12:23:31 +02:00
Glenn Jocher
906c348347
Update README.md 2019-08-19 12:18:14 +02:00
Glenn Jocher
291a1898a4
Update README.md 2019-08-19 02:45:48 +02:00
Glenn Jocher
e1f724b1a3 updates 2019-08-19 01:32:27 +02:00
Glenn Jocher
98a24c0a2f Focal Loss bias initialization 2019-08-19 01:27:41 +02:00
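The commit title refers to the "prior probability" bias initialization from the focal-loss paper (RetinaNet, arXiv:1708.02002): setting the classification bias so the initial predicted probability equals a small prior keeps the focal loss stable early in training. A hedged sketch (layer shapes here are illustrative):

```python
import math
import torch.nn as nn

pi = 0.01  # assumed prior probability of a positive prediction
cls_head = nn.Conv2d(256, 80, kernel_size=1)  # illustrative class-prediction layer
nn.init.constant_(cls_head.bias, -math.log((1 - pi) / pi))  # sigmoid(bias) == pi
```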
Glenn Jocher
96fd7141a2 updates 2019-08-18 21:35:52 +02:00
Glenn Jocher
ebd93e354d updates 2019-08-18 21:28:49 +02:00
Glenn Jocher
b779e6ef69 updates 2019-08-18 21:24:48 +02:00
Glenn Jocher
4050650669 updates 2019-08-18 14:20:46 +02:00
Glenn Jocher
7ee28a7bb6 updates 2019-08-18 13:05:32 +02:00
Glenn Jocher
3c4a9ff69e updates 2019-08-18 02:15:16 +02:00
Glenn Jocher
ee176000eb updates 2019-08-18 02:08:47 +02:00
Glenn Jocher
ce0b414677 updates 2019-08-18 02:04:49 +02:00
Glenn Jocher
43230c48bf updates 2019-08-18 02:02:04 +02:00
Glenn Jocher
0aece25ef6 updates 2019-08-18 01:58:35 +02:00
Glenn Jocher
fd2991386f updates 2019-08-18 01:55:45 +02:00
Glenn Jocher
3527e61526 Merge remote-tracking branch 'origin/master' 2019-08-18 00:53:34 +02:00
Glenn Jocher
3b9e94c5e6 updates 2019-08-18 00:53:27 +02:00
Glenn Jocher
c498d768cc
Update README.md 2019-08-18 00:47:52 +02:00
Glenn Jocher
2d57e5d877 updates 2019-08-17 19:20:39 +02:00
Glenn Jocher
926447e8c4 updates 2019-08-17 17:10:57 +02:00
Glenn Jocher
b72fb74ad0 updates 2019-08-17 14:15:27 +02:00
Glenn Jocher
b8c870711f updates 2019-08-17 14:09:38 +02:00
Glenn Jocher
a1200ef130 updates 2019-08-17 14:08:10 +02:00
Glenn Jocher
321bd95764 updates 2019-08-17 02:14:28 +02:00
Glenn Jocher
9953335cfe updates 2019-08-16 17:00:20 +02:00
Glenn Jocher
b450db18ae updates 2019-08-16 11:08:07 +02:00
Glenn Jocher
dbb2cbe0d5 updates 2019-08-16 01:31:59 +02:00
Glenn Jocher
b030d68108 updates 2019-08-16 00:42:25 +02:00
Glenn Jocher
b3717c9ef8 updates 2019-08-15 21:06:17 +02:00
Glenn Jocher
a172e96f10 updates 2019-08-15 21:01:24 +02:00
Glenn Jocher
ac88bc2dcf updates 2019-08-15 20:53:56 +02:00
Glenn Jocher
e0c116b366 updates 2019-08-15 20:22:44 +02:00
Glenn Jocher
1ea0ab320e updates 2019-08-15 20:19:30 +02:00
Glenn Jocher
4c50c7ea8b updates 2019-08-15 19:16:58 +02:00
Glenn Jocher
d76677da06 updates 2019-08-15 19:12:09 +02:00
Glenn Jocher
5a9fb2411d updates 2019-08-15 18:15:27 +02:00
Glenn Jocher
be7f4fa72f updates 2019-08-15 16:57:17 +02:00
Glenn Jocher
4f78eec83e updates 2019-08-15 16:11:34 +02:00
Glenn Jocher
c4cc95bdbd kmeans update 2019-08-15 16:09:36 +02:00
Glenn Jocher
a8996d5d3a updates 2019-08-15 14:10:08 +02:00
Glenn Jocher
1c0d408fbf updates 2019-08-15 13:44:42 +02:00
Glenn Jocher
48af6d136f updates 2019-08-14 15:54:40 +02:00
Glenn Jocher
c4f23e362e updates 2019-08-14 15:08:44 +02:00
Glenn Jocher
907195d77f
Update README.md 2019-08-12 14:39:26 +02:00
Glenn Jocher
7fb64dbf67 updates 2019-08-12 13:49:38 +02:00
Glenn Jocher
616bbdb435 Merge remote-tracking branch 'origin/master' 2019-08-12 13:37:34 +02:00
Glenn Jocher
4ac6e88ea9 memory-saving routs update 2019-08-12 13:37:11 +02:00
LukeAI
891b85490e Update requirements.txt (#444) 2019-08-12 12:38:36 +02:00
Glenn Jocher
daaa8194a9 updates 2019-08-12 12:25:26 +02:00
Glenn Jocher
89d0aa7164 updates 2019-08-12 00:43:04 +02:00
Glenn Jocher
3fca16d3ce updates 2019-08-12 00:22:23 +02:00
Glenn Jocher
d70ceec22a updates 2019-08-11 19:16:54 +02:00
Glenn Jocher
dfa999455f updates 2019-08-11 17:47:44 +02:00
Glenn Jocher
4e9a8661b2 updates 2019-08-11 15:22:53 +02:00
Glenn Jocher
636c1cff7a updates 2019-08-11 15:17:40 +02:00
Glenn Jocher
05914799fa updates 2019-08-11 14:19:05 +02:00
Glenn Jocher
e8a15ac1d7 updates 2019-08-10 22:11:55 +02:00
Glenn Jocher
5ff6e6b3a5 tensorboard updates 2019-08-09 19:35:02 +02:00
Glenn Jocher
f9755f8dac tensorboard updates 2019-08-09 19:29:36 +02:00
Glenn Jocher
298463e530 updates 2019-08-09 18:22:27 +02:00
Glenn Jocher
0c52cc0106 Merge remote-tracking branch 'origin/master' 2019-08-09 16:37:27 +02:00
Glenn Jocher
933f85f632 tensorboard updates 2019-08-09 16:37:19 +02:00
晨太狼
d41b444a15 Fix fuse (#440)
Fix fuse in models.py
2019-08-09 12:44:47 +02:00
Glenn Jocher
fdd5afa229 updates 2019-08-08 22:37:54 +02:00
Marc
22f75469ac Tensorboard support (#435) 2019-08-08 22:30:34 +02:00
Glenn Jocher
a21b9891b9 updates 2019-08-08 21:16:09 +02:00
Glenn Jocher
25bc5e5392 updates 2019-08-08 20:16:32 +02:00
Glenn Jocher
c49fe688b7 updates 2019-08-08 19:49:15 +02:00
Glenn Jocher
37c3e762e1 @ktian08 50.5 mAP evolved hyperparameters 2019-08-08 19:41:26 +02:00
Glenn Jocher
1ddde51b80 updates 2019-08-08 18:48:37 +02:00
Glenn Jocher
f43170817c updates 2019-08-07 16:45:13 +02:00
Glenn Jocher
056976b4fc updates 2019-08-07 01:54:41 +02:00
Glenn Jocher
b53d6d6ecf updates 2019-08-07 01:03:54 +02:00
Glenn Jocher
33601ca758 updates 2019-08-06 17:44:09 +02:00
Glenn Jocher
01bc76faeb updates 2019-08-06 17:24:30 +02:00
Glenn Jocher
141032045b updates 2019-08-06 16:57:33 +02:00
Glenn Jocher
082fdebfc1 updates 2019-08-06 14:57:12 +02:00
Glenn Jocher
50b1bb71be updates 2019-08-06 14:38:03 +02:00
Glenn Jocher
0e7cd7e283 updates 2019-08-06 14:36:12 +02:00
Glenn Jocher
bd2d3cc5d1 updates 2019-08-06 14:35:18 +02:00
Glenn Jocher
68a5f8e207 updates 2019-08-06 11:26:52 +02:00
Glenn Jocher
7462a95873 updates 2019-08-06 01:27:27 +02:00
Glenn Jocher
77bde15239 updates 2019-08-05 18:06:01 +02:00
Glenn Jocher
9a9224cfe6 updates 2019-08-05 17:45:32 +02:00
Glenn Jocher
1613d1c396 updates 2019-08-05 17:41:25 +02:00
Glenn Jocher
e1c407dab1 updates 2019-08-05 17:25:50 +02:00
Glenn Jocher
2195bb0e89 updates 2019-08-05 16:59:32 +02:00
Glenn Jocher
0c845a2ff0 updates 2019-08-05 15:52:22 +02:00
Glenn Jocher
b2e87d0844 updates 2019-08-05 15:50:45 +02:00
Glenn Jocher
a1a86cd784 updates 2019-08-05 15:40:40 +02:00
Glenn Jocher
5aeac1b0c1 updates 2019-08-05 14:15:12 +02:00
Glenn Jocher
268cfbe66e updates 2019-08-05 13:57:18 +02:00
Glenn Jocher
dfa7e047a4 updates 2019-08-05 13:55:42 +02:00
Glenn Jocher
5cbd18d871 updates 2019-08-05 13:50:06 +02:00
Glenn Jocher
cd5b9d3fdc updates 2019-08-05 13:32:48 +02:00
Glenn Jocher
f5248067dc updates 2019-08-05 03:21:56 +02:00
Glenn Jocher
042f34d029 updates 2019-08-05 03:02:48 +02:00
Glenn Jocher
e77ca7e4d9 updates 2019-08-05 02:55:03 +02:00
Glenn Jocher
4346f094d9 updates 2019-08-04 20:34:21 +02:00
Glenn Jocher
dd20bd5671 updates 2019-08-04 20:03:46 +02:00
Glenn Jocher
77c589dc0c updates 2019-08-04 19:59:02 +02:00
Glenn Jocher
618e8a8014 updates 2019-08-04 17:54:03 +02:00
Glenn Jocher
d1918edc70 updates 2019-08-04 17:53:03 +02:00
Glenn Jocher
4db005cd9b updates 2019-08-04 17:50:20 +02:00
Glenn Jocher
f0762134ce updates 2019-08-04 17:42:27 +02:00
Glenn Jocher
09a89da8a8 updates 2019-08-04 17:39:26 +02:00
Glenn Jocher
9d5d1c4e54 updates 2019-08-04 17:34:58 +02:00
Glenn Jocher
58f868a79a updates 2019-08-04 14:48:56 +02:00
Glenn Jocher
ebf56ea725 updates 2019-08-04 02:50:35 +02:00
Glenn Jocher
1646b8de4b updates 2019-08-04 01:07:54 +02:00
Glenn Jocher
2ee16e8280 updates 2019-08-04 00:12:46 +02:00
Glenn Jocher
f8b57a59ef updates 2019-08-03 22:51:19 +02:00
Glenn Jocher
fd802910ec updates 2019-08-03 20:30:25 +02:00
Glenn Jocher
658ef8a272 updates 2019-08-03 14:49:38 +02:00
Glenn Jocher
2d8311a83f updates 2019-08-03 14:38:06 +02:00
Glenn Jocher
cd1f1eeecc updates 2019-08-03 14:22:25 +02:00
Glenn Jocher
90daf8f19c updates 2019-08-03 14:14:10 +02:00
Yonghye Kwon
333cf92bb2 Replace EmptyLayer with nn.Sequential (#420)
* Replace EmptyLayer with nn.Sequential

* Update models.py

* Update models.py
2019-08-03 13:24:14 +02:00
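A note on the change above: an empty `nn.Sequential()` already has an identity forward, so the custom placeholder module used for 'route'/'shortcut' blocks is redundant. A minimal sketch (the `EmptyLayer` shape is assumed from the common darknet-parser pattern, not quoted from the repo):

```python
import torch.nn as nn

# Before: a custom do-nothing module used as a placeholder for
# 'route'/'shortcut' blocks whose real logic lives in the forward pass.
class EmptyLayer(nn.Module):
    def forward(self, x):
        return x

# After: an empty nn.Sequential() forwards its input unchanged,
# so it serves as the same placeholder without a custom class.
placeholder = nn.Sequential()
```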
Glenn Jocher
267b4301e2 Merge remote-tracking branch 'origin/master' 2019-08-02 16:28:56 +02:00
Glenn Jocher
c57d34b749 updates 2019-08-02 16:24:24 +02:00
Yonghye Kwon
2a284aacd0 Cleanup- edit for Readability (#417)
Cleanup- edit for Readability
2019-08-02 14:49:08 +02:00
Glenn Jocher
8501981f09 Merge remote-tracking branch 'origin/master' 2019-08-02 01:33:33 +02:00
Glenn Jocher
6d1cafba3a updates 2019-08-02 01:33:24 +02:00
Glenn Jocher
ae577f51e2
Update examples.ipynb 2019-08-01 22:44:22 +02:00
Glenn Jocher
02b7f2c7d6
Update README.md 2019-08-01 22:40:18 +02:00
Glenn Jocher
3f1a0d63e8
Update README.md 2019-08-01 22:38:35 +02:00
Glenn Jocher
161a934aac
Update README.md 2019-08-01 22:36:28 +02:00
Glenn Jocher
79111cb18f updates 2019-08-01 21:58:43 +02:00
Glenn Jocher
56f38ed6a2 updates 2019-08-01 18:29:57 +02:00
Glenn Jocher
e82f201578 updates 2019-08-01 18:20:47 +02:00
Glenn Jocher
62f1e21b14 updates 2019-08-01 03:53:11 +02:00
Glenn Jocher
7f1f738b74 Merge remote-tracking branch 'origin/master' 2019-08-01 03:31:21 +02:00
Glenn Jocher
f8de211f77 updates 2019-08-01 03:31:12 +02:00
Glenn Jocher
fa6322a15b INTER_AREA for INTER_LINEAR 2019-08-01 02:56:35 +02:00
Glenn Jocher
f9d616de9f updates 2019-08-01 02:28:11 +02:00
Glenn Jocher
e3d100dd34 updates 2019-08-01 02:21:40 +02:00
Glenn Jocher
69c7d07996 updates 2019-08-01 01:47:05 +02:00
Glenn Jocher
d07b9988e3 updates 2019-08-01 00:33:17 +02:00
Glenn Jocher
3b694fc8d0 Merge remote-tracking branch 'origin/master' 2019-08-01 00:10:10 +02:00
Glenn Jocher
3f9521bc08 updates 2019-08-01 00:09:45 +02:00
Glenn Jocher
5c288ca970 updates 2019-08-01 00:08:28 +02:00
idow09
7b3d9f02ec prevent failure when no training_results available (#409)
Use `chkpt.get('training_results')` instead of `chkpt['training_results']` so that if the dict doesn't contain this key it won't throw a `KeyError`
2019-07-31 14:12:27 +02:00
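A minimal sketch of the `dict.get()` pattern described in the fix above, with a hypothetical checkpoint dict:

```python
# Hypothetical checkpoint dict missing the optional key:
chkpt = {'epoch': 42, 'model': None}

# chkpt['training_results']            # bracket indexing would raise KeyError
results = chkpt.get('training_results')  # returns None instead of raising

if results is not None:
    with open('results.txt', 'w') as f:
        f.write(results)
```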
Yonghye Kwon
97edba0b7a Remove Unused Argument of parser : --var (#408)
Remove Unused Argument of parser : --var
2019-07-31 14:05:43 +02:00
Glenn Jocher
07ffc4e7f9 updates 2019-07-30 18:27:37 +02:00
Glenn Jocher
9ef8d42f0e updates 2019-07-30 18:25:53 +02:00
Glenn Jocher
e8303a366f updates 2019-07-30 18:24:20 +02:00
Glenn Jocher
b11bb91804 updates 2019-07-30 17:51:19 +02:00
Glenn Jocher
65abb1c82f updates 2019-07-30 15:58:10 +02:00
Glenn Jocher
8a74a683ae updates 2019-07-30 15:23:31 +02:00
Glenn Jocher
62d4a74052 updates 2019-07-30 15:15:23 +02:00
Glenn Jocher
65d70fca78 updates 2019-07-30 13:45:44 +02:00
Glenn Jocher
01aaf2c11c updates 2019-07-30 12:39:17 +02:00
Glenn Jocher
272b9c7c11 updates 2019-07-29 23:37:12 +02:00
Glenn Jocher
4af819449c updates 2019-07-29 12:06:29 +02:00
Glenn Jocher
981abf679c updates 2019-07-29 12:05:50 +02:00
Glenn Jocher
3a7711856e Merge remote-tracking branch 'origin/master' 2019-07-29 00:45:37 +02:00
Glenn Jocher
9cf7f2215a updates 2019-07-29 00:42:03 +02:00
Glenn Jocher
9bf31f8100
Update README.md 2019-07-28 15:57:01 +02:00
Glenn Jocher
dfed4a6425 updates 2019-07-26 23:55:11 +02:00
Glenn Jocher
c413aaefea updates 2019-07-26 23:52:13 +02:00
Glenn Jocher
def4a000aa updates 2019-07-26 19:13:40 +02:00
Glenn Jocher
72b0f65907 Merge remote-tracking branch 'origin/master'
# Conflicts:
#	README.md
2019-07-26 19:13:09 +02:00
Glenn Jocher
8615b3a7c3 updates 2019-07-26 19:11:59 +02:00
Glenn Jocher
3d225f381c
Update README.md 2019-07-26 16:58:39 +02:00
Glenn Jocher
d0c46a2ca4 updates 2019-07-26 15:41:02 +02:00
Glenn Jocher
7a81cc2499 updates 2019-07-26 12:00:43 +02:00
Glenn Jocher
88eea8f147 updates 2019-07-25 18:18:40 +02:00
Glenn Jocher
bfbc54666e updates 2019-07-25 18:09:24 +02:00
Glenn Jocher
bd92457604 updates 2019-07-25 17:50:19 +02:00
Glenn Jocher
8df3601663 updates 2019-07-25 17:49:54 +02:00
Glenn Jocher
df4f25e610 updates 2019-07-25 13:51:55 +02:00
Glenn Jocher
666d772fa7 updates 2019-07-25 13:37:01 +02:00
Glenn Jocher
a834377122 updates 2019-07-25 13:23:39 +02:00
Glenn Jocher
7b6cba86ef updates 2019-07-25 13:19:26 +02:00
Glenn Jocher
169d117870 updates 2019-07-24 20:32:24 +02:00
Glenn Jocher
db08ac5403 updates 2019-07-24 20:16:35 +02:00
Glenn Jocher
ce8a597dab updates 2019-07-24 19:31:38 +02:00
Glenn Jocher
4fb7fbf4bc updates 2019-07-24 19:02:24 +02:00
Glenn Jocher
e1425b7288 updates 2019-07-24 18:30:35 +02:00
Glenn Jocher
e28437720d updates 2019-07-24 18:28:11 +02:00
Glenn Jocher
ce4ea0c332 updates 2019-07-24 18:22:59 +02:00
Glenn Jocher
3cfc84a183 updates 2019-07-24 18:02:26 +02:00
Glenn Jocher
1cde55f7c9 updates 2019-07-24 15:56:10 +02:00
Glenn Jocher
a8596c6af4 updates 2019-07-24 13:32:14 +02:00
Glenn Jocher
5a34d3c9ab Merge remote-tracking branch 'origin/master' 2019-07-24 00:22:19 +02:00
Glenn Jocher
9c7fce0075 updates 2019-07-24 00:22:07 +02:00
Glenn Jocher
0736631ea7
Add files via upload 2019-07-23 17:06:48 +02:00
Glenn Jocher
cb5dcd612c Merge remote-tracking branch 'origin/master' 2019-07-23 17:03:18 +02:00
Glenn Jocher
9dc419b467 updates 2019-07-23 17:03:09 +02:00
Glenn Jocher
43a7a96798
Update README.md 2019-07-23 16:59:13 +02:00
Glenn Jocher
a8f0fa8eaa
Add files via upload 2019-07-23 16:56:48 +02:00
Glenn Jocher
8a2986e38f updates 2019-07-23 15:45:14 +02:00
Glenn Jocher
b025b3123e updates 2019-07-23 15:08:28 +02:00
Glenn Jocher
308eda38fd updates 2019-07-23 13:47:30 +02:00
Glenn Jocher
91eaf2f8fe updates 2019-07-23 01:35:03 +02:00
Glenn Jocher
d20131b2e8 updates 2019-07-22 21:28:59 +02:00
Glenn Jocher
7fa6ad6b47 updates 2019-07-22 21:24:24 +02:00
Glenn Jocher
8ae16adde4 updates 2019-07-22 16:14:41 +02:00
Glenn Jocher
1cfd370455 updates 2019-07-22 02:18:34 +02:00
Glenn Jocher
3b108afac3 updates 2019-07-21 22:49:11 +02:00
Glenn Jocher
a812b1f8b1 updates 2019-07-21 21:33:45 +02:00
Glenn Jocher
e97cc07715 updates 2019-07-21 21:28:38 +02:00
Glenn Jocher
aedf45c713 Merge remote-tracking branch 'origin/master' 2019-07-21 21:27:37 +02:00
Glenn Jocher
14cbd8c0ca updates 2019-07-21 19:18:15 +02:00
Glenn Jocher
1691a56bd2
Update README.md 2019-07-21 14:28:02 +02:00
Glenn Jocher
c9ed072cc7 updates 2019-07-21 05:06:58 +02:00
Glenn Jocher
9f7e71acab updates 2019-07-21 03:48:57 +02:00
Glenn Jocher
7d43ee0d02 updates 2019-07-21 03:39:01 +02:00
Glenn Jocher
e92a8afae1 updates 2019-07-21 01:05:45 +02:00
Glenn Jocher
05139a6bdd updates 2019-07-20 19:27:27 +02:00
Glenn Jocher
a6877daa41 updates 2019-07-20 18:55:36 +02:00
Glenn Jocher
0448d1109f updates 2019-07-20 18:46:51 +02:00
Glenn Jocher
bb80db54b7 updates 2019-07-20 18:31:58 +02:00
Glenn Jocher
0ed0b354ee updates 2019-07-20 17:31:21 +02:00
Glenn Jocher
f991f2f4d5 updates 2019-07-20 17:14:07 +02:00
Glenn Jocher
3512f7ff61 updates 2019-07-20 17:09:43 +02:00
Glenn Jocher
a179af6729 updates 2019-07-20 17:08:08 +02:00
Glenn Jocher
a39ee4d252 updates 2019-07-20 17:05:09 +02:00
Glenn Jocher
bc262aca2a updates 2019-07-20 15:27:42 +02:00
Glenn Jocher
deb200f6bf updates 2019-07-20 15:10:31 +02:00
Glenn Jocher
39f63b7110 updates 2019-07-20 15:04:41 +02:00
Glenn Jocher
4816969933 updates 2019-07-20 14:54:37 +02:00
Glenn Jocher
cb30d60f4e updates 2019-07-20 14:04:50 +02:00
Glenn Jocher
44b340321f updates 2019-07-20 13:20:01 +02:00
Glenn Jocher
d6edefa8ab updates 2019-07-20 01:28:29 +02:00
Glenn Jocher
d1abe51876 updates 2019-07-17 14:19:09 +02:00
Glenn Jocher
407a4c481d updates 2019-07-17 14:16:21 +02:00
Glenn Jocher
33838b558d updates 2019-07-17 14:14:42 +02:00
Glenn Jocher
34ddceea89 updates 2019-07-17 14:00:50 +02:00
Glenn Jocher
9d54a268c9 updates 2019-07-16 23:14:10 +02:00
Glenn Jocher
dc43968918 updates 2019-07-16 19:10:33 +02:00
Glenn Jocher
a773350224 updates 2019-07-16 19:09:40 +02:00
Glenn Jocher
ab51380448 updates 2019-07-16 19:00:03 +02:00
Glenn Jocher
c994963a84 updates 2019-07-16 18:59:46 +02:00
Glenn Jocher
153762dec0 updates 2019-07-16 18:58:49 +02:00
Glenn Jocher
64b606a3cd updates 2019-07-16 18:49:54 +02:00
Glenn Jocher
51d7e460a3 updates 2019-07-16 18:18:08 +02:00
Glenn Jocher
81540b80b9 updates 2019-07-16 18:06:24 +02:00
Glenn Jocher
b459587cb0 updates 2019-07-16 17:56:39 +02:00
Glenn Jocher
813024116b updates 2019-07-16 17:50:41 +02:00
Glenn Jocher
034d2949b9 updates 2019-07-16 17:43:01 +02:00
Glenn Jocher
09b3670579 updates 2019-07-16 17:35:20 +02:00
Glenn Jocher
954deadff3 updates 2019-07-16 01:03:15 +02:00
Glenn Jocher
8501aed49f updates 2019-07-15 17:54:31 +02:00
Glenn Jocher
96e25462e8 updates 2019-07-15 17:00:04 +02:00
Glenn Jocher
6893f1daf8 updates 2019-07-15 16:48:02 +02:00
Glenn Jocher
7c2623dd4f updates 2019-07-15 16:39:26 +02:00
Glenn Jocher
4e5a00fb72 updates 2019-07-15 16:27:13 +02:00
Glenn Jocher
e73e247442 updates 2019-07-15 16:07:25 +02:00
Glenn Jocher
e8c205b412 updates 2019-07-15 15:37:32 +02:00
Glenn Jocher
bcbb524944 updates 2019-07-15 13:02:10 +02:00
Glenn Jocher
3eabb1114c updates 2019-07-15 01:15:30 +02:00
Glenn Jocher
6509d8e588 updates 2019-07-14 22:28:48 +02:00
Glenn Jocher
9c776b8052 updates 2019-07-14 21:38:55 +02:00
Glenn Jocher
ac39ff5aa2 updates 2019-07-14 13:00:06 +02:00
Glenn Jocher
3fc676b28a updates 2019-07-14 11:29:07 +02:00
Glenn Jocher
831b6e39b6 updates 2019-07-12 17:02:04 +02:00
Glenn Jocher
03c6fe1ffe updates 2019-07-12 16:10:37 +02:00
Glenn Jocher
f906bc9872 updates 2019-07-12 15:45:57 +02:00
Glenn Jocher
0aa9759a90 updates 2019-07-12 15:44:39 +02:00
Glenn Jocher
bb38391342 updates 2019-07-12 14:28:46 +02:00
Glenn Jocher
c77b87489c updates 2019-07-12 12:24:43 +02:00
Glenn Jocher
bd9789aa00 equal layer weights 2019-07-12 12:23:17 +02:00
Glenn Jocher
5886200401 updates 2019-07-12 01:19:32 +02:00
Glenn Jocher
a2909c59f8 updates 2019-07-11 11:57:10 +02:00
Glenn Jocher
b005a17eff updates 2019-07-11 11:56:46 +02:00
Glenn Jocher
3373006d0e updates 2019-07-10 22:11:48 +02:00
Glenn Jocher
4f6ef59d92 updates 2019-07-10 20:47:05 +02:00
Glenn Jocher
a9e42a16f1 updates 2019-07-10 19:48:29 +02:00
Glenn Jocher
88a2c71a9f updates 2019-07-10 17:34:19 +02:00
Glenn Jocher
682d0485d6 updates 2019-07-10 17:33:24 +02:00
Glenn Jocher
53dfdfb367 updates 2019-07-10 16:49:06 +02:00
Glenn Jocher
f02ac89122 updates 2019-07-10 12:00:06 +02:00
Glenn Jocher
6bd2c22523 updates 2019-07-09 21:11:53 +02:00
Glenn Jocher
d5fd37de26 updates 2019-07-09 20:56:58 +02:00
Glenn Jocher
bf6d96330b updates 2019-07-09 18:40:29 +02:00
Glenn Jocher
7b8a134a0b updates 2019-07-09 18:16:35 +02:00
Glenn Jocher
5b0ba6d7b2 updates 2019-07-09 18:16:15 +02:00
Glenn Jocher
9c227dd2b4 updates 2019-07-09 14:18:19 +02:00
Glenn Jocher
bb1e551150 updates 2019-07-08 19:26:46 +02:00
Glenn Jocher
0bd763f528 updates 2019-07-08 18:32:31 +02:00
Glenn Jocher
feeaf734f2 updates 2019-07-08 18:04:44 +02:00
Glenn Jocher
da9ec7d12f updates 2019-07-08 18:00:19 +02:00
Glenn Jocher
59b1a1e89b updates 2019-07-08 15:52:13 +02:00
Glenn Jocher
60bc2c1fbd updates 2019-07-08 15:43:46 +02:00
Glenn Jocher
a8c73f1c50 updates 2019-07-08 15:28:29 +02:00
Glenn Jocher
94669fb704 updates 2019-07-08 15:24:20 +02:00
Glenn Jocher
291c3ec9c7 updates 2019-07-08 15:02:20 +02:00
Glenn Jocher
68b50f5cb6 updates 2019-07-08 12:43:15 +02:00
glenn-jocher
7a2d356297 GIoU to default 2019-07-07 23:53:56 +02:00
glenn-jocher
af3c5d0e35 GIoU to default 2019-07-07 23:42:24 +02:00
glenn-jocher
2b863c3bf2 Merge remote-tracking branch 'origin/master' 2019-07-07 23:24:43 +02:00
glenn-jocher
70f6379601 GIoU to default 2019-07-07 23:24:34 +02:00
Glenn Jocher
b0aadff56f
Update requirements.txt 2019-07-05 12:34:17 +02:00
glenn-jocher
32a52dfb02 GIoU to default 2019-07-05 12:33:37 +02:00
glenn-jocher
429bd3b8a9 GIoU to default 2019-07-05 11:41:43 +02:00
glenn-jocher
b649a95c9a GIoU to default 2019-07-05 00:36:37 +02:00
glenn-jocher
7246dd855c updates 2019-07-04 22:50:03 +02:00
glenn-jocher
abf59f1565 updates 2019-07-04 22:10:46 +02:00
glenn-jocher
d0eace6cec updates 2019-07-04 21:34:33 +02:00
glenn-jocher
5d5a7e0273 Merge remote-tracking branch 'origin/master' 2019-07-04 20:45:02 +02:00
glenn-jocher
1283a1e7e5 updates 2019-07-04 20:43:20 +02:00
Glenn Jocher
a5592093ef Merge remote-tracking branch 'origin/master' 2019-07-04 14:03:21 +02:00
Glenn Jocher
7a353a9c70 updates 2019-07-04 14:03:13 +02:00
glenn-jocher
109991198c updates 2019-07-03 16:18:08 +02:00
glenn-jocher
1e62ee2152 updates 2019-07-03 16:17:46 +02:00
glenn-jocher
ab141fcc1f updates 2019-07-03 15:37:04 +02:00
glenn-jocher
1d0a4a3ace updates 2019-07-03 14:42:11 +02:00
glenn-jocher
a8cf64af31 updates 2019-07-02 18:21:28 +02:00
Yonghye Kwon
ccf757b3ea changed the criteria for the best weight file (#356)
* changed the criteria for the best weight file

Changed the criterion for the best weight file from loss to mAP.

I trained the model on my custom dataset, but I failed to get good results when I loaded the weight file that had the lowest loss on the test dataset.

I thought the loss used in YOLO is not a proper criterion for detection performance, so I changed the criterion from loss to mAP.

What do you think of this?

* Update train.py
2019-07-02 12:24:18 +02:00
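A minimal sketch of the selection rule after this change (function and argument names are illustrative, not the repo's exact code):

```python
import torch

def maybe_save_best(model, epoch, mAP, best_map, path='best.pt'):
    """Save the checkpoint when validation mAP improves; previously the
    criterion was the lowest test loss. Returns the updated best mAP."""
    if mAP > best_map:
        best_map = mAP
        torch.save({'epoch': epoch, 'model': model.state_dict()}, path)
    return best_map
```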
glenn-jocher
1fd871abd8 updates 2019-07-01 17:44:42 +02:00
glenn-jocher
f43ee6ef94 updates 2019-07-01 17:17:29 +02:00
glenn-jocher
cf51cf9c99 updates 2019-07-01 17:14:42 +02:00
glenn-jocher
05358accbb updates 2019-07-01 15:23:30 +02:00
glenn-jocher
c4409aa2ed updates 2019-07-01 15:22:22 +02:00
glenn-jocher
b0d62e5204 updates 2019-07-01 15:21:06 +02:00
glenn-jocher
5e2b802f68 updates 2019-07-01 14:48:44 +02:00
glenn-jocher
09d065711a updates 2019-07-01 01:27:32 +02:00
glenn-jocher
63036deeb7 updates 2019-07-01 00:41:13 +02:00
glenn-jocher
32f5ea955b updates 2019-06-30 17:47:10 +02:00
glenn-jocher
db2674aa31 updates 2019-06-30 17:34:29 +02:00
glenn-jocher
5927d12aa7 Merge remote-tracking branch 'origin/master' 2019-06-30 15:24:43 +02:00
glenn-jocher
388b66dcd0 updates 2019-06-30 15:24:34 +02:00
Glenn Jocher
1990cd8013
Update README.md 2019-06-30 00:38:32 +02:00
Glenn Jocher
eeae43c414 updates 2019-06-28 00:38:52 +02:00
Jeremy Hu
b202baa31c update parse_model_cfg() (#350)
Removing the two lines that add the batch_normalize key to convolutional layers breaks model parsing in models.py
2019-06-28 00:35:24 +02:00
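The point of the two lines referenced above is that every [convolutional] block must carry a batch_normalize key even when the .cfg file omits it, otherwise models.py fails when it reads the key unconditionally. A simplified sketch of the parser pattern (not the repo's exact code):

```python
def parse_model_cfg(path):
    """Parse a darknet .cfg into a list of block dicts, defaulting
    'batch_normalize' to 0 on convolutional blocks so downstream code
    can read the key unconditionally."""
    module_defs = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            if line.startswith('['):  # start of a new block, e.g. [convolutional]
                module_defs.append({'type': line[1:-1].rstrip()})
                if module_defs[-1]['type'] == 'convolutional':
                    module_defs[-1]['batch_normalize'] = 0  # default; overwritten below if set
            else:
                key, value = line.split('=', 1)
                module_defs[-1][key.rstrip()] = value.strip()
    return module_defs
```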
Glenn Jocher
c1bb037cbe Merge remote-tracking branch 'origin/master' 2019-06-26 11:28:06 +02:00
Glenn Jocher
cbfc5a00e5 updates 2019-06-26 11:27:36 +02:00
Yonghye Kwon
37fe87ccd9 cleanup- delete a variable "yolo_layer_count" (#347) 2019-06-26 11:10:52 +02:00
Glenn Jocher
45540c787f updates 2019-06-25 19:36:11 +02:00
Glenn Jocher
1d76751e1f updates 2019-06-25 12:37:24 +02:00
Glenn Jocher
e4cc830690 updates 2019-06-25 12:21:27 +02:00
Glenn Jocher
9a56d97059 updates 2019-06-25 11:54:19 +02:00
Glenn Jocher
2244c72a1b updates 2019-06-25 11:45:38 +02:00
Glenn Jocher
6b222df35d updates 2019-06-24 15:56:20 +02:00
Glenn Jocher
d208f006a1 updates 2019-06-24 14:46:00 +02:00
Glenn Jocher
a3fcf20385 updates 2019-06-24 14:19:20 +02:00
Glenn Jocher
1827b79647 updates 2019-06-24 14:13:16 +02:00
Glenn Jocher
c56516ec11 updates 2019-06-24 13:51:54 +02:00
Glenn Jocher
0005823d1f updates 2019-06-24 13:43:17 +02:00
Glenn Jocher
57b616b8b1 updates 2019-06-23 22:01:11 +02:00
Glenn Jocher
0f2e136c05 updates 2019-06-22 19:21:05 +02:00
Glenn Jocher
ef3e1343e2 updates 2019-06-22 15:52:27 +02:00
Glenn Jocher
f501a0fc9d updates 2019-06-22 15:50:04 +02:00
Glenn Jocher
1a0385c77d updates 2019-06-21 23:11:24 +02:00
Glenn Jocher
4f7fee45ff updates 2019-06-21 21:27:50 +02:00
Glenn Jocher
5f6c2b3d12 updates 2019-06-21 13:19:23 +02:00
Glenn Jocher
1a9aa30efc updates 2019-06-21 11:57:26 +02:00
Glenn Jocher
3223c0171a updates 2019-06-21 10:58:12 +02:00
Glenn Jocher
a7e21b4315 updates 2019-06-21 10:24:06 +02:00
Glenn Jocher
7d7d7a6332 updates 2019-06-21 10:17:29 +02:00
Glenn Jocher
a40f421061 updates 2019-06-18 21:34:44 +02:00
Glenn Jocher
c3526e0eff updates 2019-06-18 17:35:47 +02:00
Glenn Jocher
1096596ad8 updates 2019-06-18 16:36:04 +02:00
Glenn Jocher
84c1fecd81 updates 2019-06-18 16:33:18 +02:00
Glenn Jocher
4fb2567aa5 updates 2019-06-18 16:32:37 +02:00
Glenn Jocher
6efb2c935f updates 2019-06-18 16:32:19 +02:00
Glenn Jocher
677bdf236c updates 2019-06-18 15:34:35 +02:00
Glenn Jocher
58203e49c8 updates 2019-06-17 18:02:04 +02:00
Glenn Jocher
55bb905072 updates 2019-06-17 17:45:54 +02:00
Glenn Jocher
f573250ae8 updates 2019-06-16 23:17:40 +02:00
Glenn Jocher
b59532883a updates 2019-06-15 17:06:58 +02:00
Glenn Jocher
6c77764bba updates 2019-06-15 14:05:19 +02:00
Glenn Jocher
40da693ff0 updates 2019-06-15 13:34:02 +02:00
Glenn Jocher
995dc3ca67 updates 2019-06-15 02:44:01 +02:00
Glenn Jocher
02291622fa updates 2019-06-15 02:10:15 +02:00
Glenn Jocher
bb3682024e updates 2019-06-15 01:35:55 +02:00
Glenn Jocher
8f609246db updates 2019-06-13 18:13:30 +02:00
Glenn Jocher
19d2232665 updates 2019-06-12 19:40:21 +02:00
Glenn Jocher
1e8df4db23 updates 2019-06-12 18:21:42 +02:00
Glenn Jocher
64134706d1 updates 2019-06-12 15:14:13 +02:00
Glenn Jocher
b33a0b6cf2 updates 2019-06-12 15:12:08 +02:00
Glenn Jocher
c5cb3c8a9e updates 2019-06-12 14:35:10 +02:00
Glenn Jocher
81b4a7833f updates 2019-06-12 14:30:40 +02:00
Glenn Jocher
bca423ee43 updates 2019-06-12 14:15:28 +02:00
Glenn Jocher
59cf3978fc updates 2019-06-12 13:59:20 +02:00
Glenn Jocher
e81c1ab501 updates 2019-06-12 13:57:32 +02:00
NirZarrabi
0f94dce1cb changed warpPerspective to warpAffine at line 380 (#328)
Since the transformation is affine and not perspective, it is more efficient to use the warpAffine function
2019-06-12 13:48:39 +02:00
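For context on the efficiency claim above: an affine warp is fully described by a 2x3 matrix, while warpPerspective applies a full 3x3 homography with a per-pixel divide. A minimal sketch (image size and parameters are illustrative):

```python
import cv2
import numpy as np

img = np.zeros((416, 416, 3), dtype=np.uint8)  # placeholder image

# Rotation/translation/scale/shear fit in a 2x3 affine matrix:
M = cv2.getRotationMatrix2D((208, 208), 10, 1.1)  # 2x3
out = cv2.warpAffine(img, M, (416, 416))

# The perspective equivalent pads to 3x3 and does more work per pixel:
# M3 = np.vstack([M, [0, 0, 1]])
# out = cv2.warpPerspective(img, M3, (416, 416))
```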
Glenn Jocher
e42865a304 updates 2019-06-12 13:25:39 +02:00
Glenn Jocher
ef1703f2b8 updates 2019-06-12 13:19:17 +02:00
Glenn Jocher
b5630f145f updates 2019-06-12 13:10:31 +02:00
Glenn Jocher
8df215a8cc updates 2019-06-12 13:04:58 +02:00
Glenn Jocher
64933f7ce0 updates 2019-06-12 11:55:20 +02:00
Glenn Jocher
5edb0ec40d updates 2019-06-12 11:50:24 +02:00
Glenn Jocher
9c328b1b0e updates 2019-06-12 11:41:44 +02:00
Glenn Jocher
fe9cba308d updates 2019-06-12 11:41:00 +02:00
Glenn Jocher
dd7ca339f5 updates 2019-06-12 11:40:17 +02:00
Glenn Jocher
b0b6554eee updates 2019-06-12 11:37:25 +02:00
Glenn Jocher
d5e2daf79d updates 2019-06-12 11:36:32 +02:00
Glenn Jocher
051b251d41 updates 2019-06-12 11:35:53 +02:00
Glenn Jocher
01e11aee04 updates 2019-06-12 11:30:37 +02:00
Glenn Jocher
37a07a44a1 updates 2019-06-12 11:29:36 +02:00
Glenn Jocher
a2d392b5c3 updates 2019-06-12 11:25:56 +02:00
Glenn Jocher
d7a28bd9f7 updates 2019-06-05 13:49:56 +02:00
Glenn Jocher
c46e156ff8 updates 2019-06-01 18:29:14 +02:00
Glenn Jocher
7807337c5f updates 2019-05-31 14:30:27 +02:00
Glenn Jocher
a1d5f62334 updates 2019-05-31 13:53:09 +02:00
Glenn Jocher
a2c9cb9d2c updates 2019-05-31 01:33:17 +02:00
Glenn Jocher
ea11af6132 updates 2019-05-30 20:21:25 +02:00
Glenn Jocher
70ba7f0805 updates 2019-05-30 19:08:43 +02:00
Glenn Jocher
504d3b3f71 updates 2019-05-30 19:02:55 +02:00
Glenn Jocher
f7a517d72c updates 2019-05-30 01:40:35 +02:00
Glenn Jocher
cc043f60fb Merge remote-tracking branch 'origin/master' 2019-05-29 18:04:19 +02:00
Glenn Jocher
0847334241 updates 2019-05-29 18:04:11 +02:00
Glenn Jocher
b4610520ea
Update README.md 2019-05-29 02:02:55 +02:00
Glenn Jocher
126f70bbe9
Update README.md 2019-05-29 02:02:41 +02:00
Glenn Jocher
9cf5ab0c9d updates 2019-05-28 16:15:53 +02:00
Glenn Jocher
e819968ee3 updates 2019-05-28 16:14:37 +02:00
Glenn Jocher
f131a1d52e updates 2019-05-25 14:51:01 +02:00
Glenn Jocher
e2e3e35e07 updates 2019-05-25 14:43:07 +02:00
Glenn Jocher
6f05c5347e updates 2019-05-25 11:57:19 +02:00
Glenn Jocher
8f9ab337b0 updates 2019-05-23 13:19:49 +02:00
Glenn Jocher
001193b9c7 updates 2019-05-23 13:15:44 +02:00
Glenn Jocher
68b9df4dd4 updates 2019-05-23 12:36:13 +02:00
Glenn Jocher
3006c33c29 updates 2019-05-23 12:32:11 +02:00
Glenn Jocher
e67cee4a0c updates 2019-05-23 12:27:46 +02:00
Glenn Jocher
463dc56f31 updates 2019-05-21 17:39:04 +02:00
Glenn Jocher
19c7697434 updates 2019-05-21 17:37:34 +02:00
Glenn Jocher
520e58aa05 updates 2019-05-21 16:11:08 +02:00
Glenn Jocher
d09db54cb0 updates 2019-05-21 16:03:29 +02:00
Glenn Jocher
d2589fc5f7 updates 2019-05-21 15:00:16 +02:00
Glenn Jocher
0effcd02bf updates 2019-05-21 14:01:06 +02:00
Glenn Jocher
401a615d34 updates 2019-05-21 12:59:04 +02:00
Glenn Jocher
b2abd46437 updates 2019-05-21 12:58:18 +02:00
Glenn Jocher
c2950dbfb6 updates 2019-05-20 20:06:36 +02:00
Glenn Jocher
5aeef8fba7 updates 2019-05-20 17:50:12 +02:00
Glenn Jocher
13180b51ac updates 2019-05-20 16:16:43 +02:00
Glenn Jocher
8eced539e0 updates 2019-05-20 14:52:14 +02:00
Glenn Jocher
a9daf244b9 updates 2019-05-19 18:03:10 +02:00
Glenn Jocher
68490520db updates 2019-05-19 17:28:14 +02:00
Glenn Jocher
74d04349da Merge remote-tracking branch 'origin/master' 2019-05-18 23:24:33 +02:00
Glenn Jocher
b034382c8b updates 2019-05-18 23:24:26 +02:00
Dustin Kendall
0b4f4bb04b Encoding of video output - Resolves Issue 243 (#280)
* Hardcoded 'mpv4' codec to work on multiple OSes and versions of ffmpeg

* Changed fourcc code to one known to work on Windows and Linux, and added useful arguments
2019-05-18 12:47:12 +02:00
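A minimal sketch of the hardcoded-fourcc approach from the commit above (paths are illustrative):

```python
import cv2

cap = cv2.VideoCapture('input.mp4')  # illustrative input path
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Hardcode the fourcc so the writer works across OSes and ffmpeg builds
# (the commit writes 'mpv4'; 'mp4v' is the usual four-character code):
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output.mp4', fourcc, fps, (w, h))

ok, frame = cap.read()
while ok:
    out.write(frame)  # detections would be drawn on the frame here
    ok, frame = cap.read()

cap.release()
out.release()
```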
Glenn Jocher
aae93a9651 updates 2019-05-18 12:07:57 +02:00
Glenn Jocher
6930fb44ac updates 2019-05-14 22:24:45 +02:00
Glenn Jocher
c8a30663f0 updates 2019-05-14 18:43:14 +02:00
Glenn Jocher
cc5660e7c0 updates 2019-05-14 18:37:13 +02:00
Glenn Jocher
4b15644b46 updates 2019-05-14 12:59:12 +02:00
Glenn Jocher
e42278e981 updates 2019-05-13 22:01:10 +02:00
Glenn Jocher
7cedb51dac updates 2019-05-13 21:36:33 +02:00
Glenn Jocher
48d4c5938d updates 2019-05-13 21:18:15 +02:00
Glenn Jocher
acf31e477b add *.jpeg support 2019-05-13 20:13:20 +02:00
Glenn Jocher
3a2da49f82 updates 2019-05-13 14:41:17 +02:00
Glenn Jocher
584e0a3be8 add *.jpeg support 2019-05-11 14:55:10 +02:00
Glenn Jocher
bc19e89247 add *.jpeg support 2019-05-11 14:38:48 +02:00
Glenn Jocher
c45cdc4fa3 updates 2019-05-10 17:08:00 +02:00
Glenn Jocher
9ffb40b0be updates 2019-05-10 16:29:37 +02:00
Glenn Jocher
ddd0474111 add *.jpeg support 2019-05-10 15:28:54 +02:00
Glenn Jocher
3f8b64974c add *.jpeg support 2019-05-10 15:24:03 +02:00
Glenn Jocher
ae03cf3eea add *.jpeg support 2019-05-10 15:16:02 +02:00
Glenn Jocher
31592c276f add *.jpeg support 2019-05-10 14:15:09 +02:00
ypw
9a13bb53c8 Update utils.py (#268) 2019-05-10 12:27:31 +02:00
Glenn Jocher
1a757524bf add *.jpeg support 2019-05-09 11:23:36 +02:00
Glenn Jocher
40bc015b75 updates 2019-05-08 18:54:55 +02:00
Glenn Jocher
5580694970 updates 2019-05-08 17:29:23 +02:00
Glenn Jocher
573e8c2840 updates 2019-05-08 14:27:47 +02:00
Glenn Jocher
e9246d8e63 updates 2019-05-08 14:25:30 +02:00
Glenn Jocher
fa11c951ad updates 2019-05-08 14:25:13 +02:00
Glenn Jocher
40a5680671 updates 2019-05-08 13:31:49 +02:00
Glenn Jocher
a8f0a3fede updates 2019-05-08 13:06:24 +02:00
Glenn Jocher
9ee59fe694 updates 2019-05-06 23:25:48 +02:00
Glenn Jocher
3ffdecc938 updates 2019-05-06 23:20:33 +02:00
Glenn Jocher
fc6ea5b1fd updates 2019-05-06 16:37:41 +02:00
Glenn Jocher
d7a4cabc07 updates 2019-05-06 16:16:13 +02:00
Glenn Jocher
2ebbe2f339 updates 2019-05-05 22:10:02 +02:00
Glenn Jocher
f05042b660 updates 2019-05-05 21:49:04 +02:00
Glenn Jocher
439c5a9839 updates 2019-05-05 21:02:18 +02:00
Glenn Jocher
74d5496b5b updates 2019-05-05 21:00:50 +02:00
Glenn Jocher
6e1c7922b3 updates 2019-05-05 20:34:30 +02:00
Glenn Jocher
6f1b011bd9 updates 2019-05-05 14:29:30 +02:00
Glenn Jocher
b6c97b1aaf updates 2019-05-05 14:27:10 +02:00
Glenn Jocher
4156e83319 updates 2019-05-05 14:13:05 +02:00
Glenn Jocher
a8f443518f updates 2019-05-05 13:44:12 +02:00
Glenn Jocher
7d857cda95 updates 2019-05-05 13:21:37 +02:00
Glenn Jocher
dd2d713484 updates 2019-05-03 20:51:30 +02:00
Glenn Jocher
09ee7b6f11 updates 2019-05-03 20:50:44 +02:00
Glenn Jocher
6316171f33 updates 2019-05-03 18:14:16 +02:00
Glenn Jocher
3c55d63a9d updates 2019-05-03 00:26:26 +02:00
Glenn Jocher
b901441e76 updates 2019-05-02 23:56:58 +02:00
Glenn Jocher
ae41d5855a updates 2019-05-02 15:47:27 +02:00
Glenn Jocher
7e6e1897ac updates 2019-04-29 18:02:31 +02:00
Glenn Jocher
587522ec56 updates 2019-04-29 17:58:18 +02:00
Glenn Jocher
94f954eba0 updates 2019-04-29 17:57:51 +02:00
Glenn Jocher
b1604be04c updates 2019-04-29 17:49:09 +02:00
Glenn Jocher
cbe01ddeb1 updates 2019-04-28 23:16:21 +02:00
Glenn Jocher
7652365b28 updates 2019-04-27 21:38:20 +02:00
Glenn Jocher
ccfd44c2f8 updates 2019-04-27 18:36:19 +02:00
Glenn Jocher
1e3fb6566c updates 2019-04-27 18:15:27 +02:00
Glenn Jocher
acaab77b7a updates 2019-04-27 17:58:16 +02:00
Glenn Jocher
a9108a296b updates 2019-04-27 17:57:07 +02:00
Glenn Jocher
469dede6d1 updates 2019-04-27 17:52:43 +02:00
Glenn Jocher
d25190e15b updates 2019-04-27 17:51:59 +02:00
Glenn Jocher
8f1becd55c updates 2019-04-27 17:49:22 +02:00
Glenn Jocher
55077b2770 updates 2019-04-27 17:44:26 +02:00
Glenn Jocher
76c45f4ed9 updates 2019-04-27 00:18:05 +02:00
Glenn Jocher
e1850bf234 updates 2019-04-26 23:33:13 +02:00
Glenn Jocher
84ebb5d143 updates 2019-04-26 23:25:00 +02:00
Glenn Jocher
96ea6a87cb updates 2019-04-26 23:10:20 +02:00
Glenn Jocher
5263608d12 updates 2019-04-26 14:49:40 +02:00
Glenn Jocher
0bed0a9a0e updates 2019-04-26 14:17:04 +02:00
Glenn Jocher
3b134e3a84 updates 2019-04-26 14:14:28 +02:00
Glenn Jocher
75f08c1cd1 updates 2019-04-26 13:56:44 +02:00
Glenn Jocher
7691e2e0f8 updates 2019-04-26 13:28:00 +02:00
Glenn Jocher
c5ec86082d updates 2019-04-26 12:24:18 +02:00
Glenn Jocher
674feb91f2 updates 2019-04-26 12:16:33 +02:00
Glenn Jocher
3535c401bb updates 2019-04-26 12:14:35 +02:00
Glenn Jocher
ed80ee8326 Merge remote-tracking branch 'origin/master' 2019-04-26 12:09:49 +02:00
Glenn Jocher
444adb405a updates 2019-04-26 12:09:43 +02:00
Glenn Jocher
ac31f91fc1
Update README.md 2019-04-26 12:01:43 +02:00
Glenn Jocher
9752e9c42c
Update README.md 2019-04-26 12:00:49 +02:00
Glenn Jocher
cf54fa7468 updates 2019-04-25 22:47:31 +02:00
Glenn Jocher
324f860235 updates 2019-04-25 20:50:37 +02:00
Glenn Jocher
c89982d134 updates 2019-04-24 22:10:24 +02:00
Glenn Jocher
3bb38215dd updates 2019-04-24 21:48:32 +02:00
Glenn Jocher
365d38bc0c updates 2019-04-24 21:41:18 +02:00
Glenn Jocher
3e71b8d48b updates 2019-04-24 21:37:32 +02:00
Glenn Jocher
fa0acebe2a updates 2019-04-24 21:28:11 +02:00
Glenn Jocher
83793ffb2b updates 2019-04-24 21:23:54 +02:00
Glenn Jocher
9c0cde69d5 updates 2019-04-24 21:02:23 +02:00
Glenn Jocher
387bdb010d updates 2019-04-24 19:56:04 +02:00
Glenn Jocher
fbf0014cd6 updates 2019-04-24 17:13:36 +02:00
Glenn Jocher
aa2df1eda7 updates 2019-04-24 17:02:52 +02:00
Glenn Jocher
20bb5f0a6a updates 2019-04-24 16:39:56 +02:00
Glenn Jocher
55aaf9a21e updates 2019-04-24 14:19:43 +02:00
Glenn Jocher
3a375f7132 updates 2019-04-24 14:09:15 +02:00
Glenn Jocher
bd2378fad1 updates 2019-04-24 13:30:24 +02:00
Glenn Jocher
1771ffb1cf updates 2019-04-24 12:58:14 +02:00
Glenn Jocher
87a450c933 updates 2019-04-23 18:54:27 +02:00
Glenn Jocher
e2b554ca12 updates 2019-04-23 18:53:36 +02:00
Glenn Jocher
1e6b55200e Merge remote-tracking branch 'origin/master' 2019-04-23 18:36:48 +02:00
Glenn Jocher
8d653ede3a updates 2019-04-23 18:36:43 +02:00
Glenn Jocher
5e9274f0bb
Update README.md 2019-04-23 17:05:42 +02:00
Glenn Jocher
50e5a4fe5c
Update README.md 2019-04-23 17:04:46 +02:00
Glenn Jocher
85a4cf0042 updates 2019-04-23 16:48:47 +02:00
Glenn Jocher
334c7c94cf updates 2019-04-22 23:27:31 +02:00
Glenn Jocher
eb4acecbb5 updates 2019-04-22 16:53:42 +02:00
Glenn Jocher
5f69861958 updates 2019-04-22 16:52:14 +02:00
Glenn Jocher
ab8d8cbc93 updates 2019-04-22 16:21:21 +02:00
Glenn Jocher
23cd4ecfa7 updates 2019-04-22 16:17:01 +02:00
Glenn Jocher
e5d11c68ac updates 2019-04-22 14:59:39 +02:00
Glenn Jocher
cf2caaad41 updates 2019-04-22 14:31:23 +02:00
Glenn Jocher
0bac735cc6 updates 2019-04-22 12:51:20 +02:00
Glenn Jocher
37799efa0b updates 2019-04-21 23:49:10 +02:00
Glenn Jocher
cfe354064c updates 2019-04-21 21:07:01 +02:00
Glenn Jocher
a6dc4347a3 updates 2019-04-21 20:36:04 +02:00
Glenn Jocher
5910353a86 Merge remote-tracking branch 'origin/master' 2019-04-21 20:35:19 +02:00
Glenn Jocher
2bfea0c980 updates 2019-04-21 20:35:11 +02:00
Glenn Jocher
14e4519620 updates 2019-04-21 20:30:11 +02:00
Glenn Jocher
4a4668224b Fuse Conv2d + BatchNorm2d 2019-04-20 22:46:23 +02:00
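The fusion named above is the standard inference-time fold of BatchNorm into the preceding convolution, under the usual formula w' = w * gamma / sqrt(var + eps) and b' = (b - mean) * gamma / sqrt(var + eps) + beta. A minimal sketch, not the repo's exact implementation:

```python
import torch
import torch.nn as nn

def fuse_conv_and_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d for faster inference."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # per-channel gamma/std
    with torch.no_grad():
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        b = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)
    return fused
```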
Glenn Jocher
f9d25f6d24 hyperparameter updates 2019-04-19 20:41:18 +02:00
Glenn Jocher
b4fa1d90d0 hyperparameter updates 2019-04-19 13:24:49 +02:00
Glenn Jocher
5962510b23 hyperparameter updates 2019-04-19 13:18:47 +02:00
Glenn Jocher
6525c76f9c hyperparameter updates 2019-04-19 12:37:38 +02:00
Glenn Jocher
2643af18b2 updates 2019-04-18 23:42:37 +02:00
Glenn Jocher
8b707c43c8 updates 2019-04-18 23:17:51 +02:00
Glenn Jocher
fad618b821 updates 2019-04-18 23:05:19 +02:00
Glenn Jocher
df3211ba4c updates 2019-04-18 23:02:54 +02:00
Glenn Jocher
0d770e14df updates 2019-04-18 22:55:50 +02:00
Glenn Jocher
b5bfc30759 updates 2019-04-18 22:40:32 +02:00
Glenn Jocher
cc50757d95 updates 2019-04-18 22:31:05 +02:00
Glenn Jocher
9b6347ac6c updates 2019-04-18 21:56:50 +02:00
Glenn Jocher
cf5bbc97ee updates 2019-04-18 21:44:57 +02:00
Glenn Jocher
03dd0b82ea updates 2019-04-18 21:42:21 +02:00
Glenn Jocher
40221894c2 updates 2019-04-18 21:18:54 +02:00
Glenn Jocher
02d6b2f9c5 updates 2019-04-18 19:25:41 +02:00
Glenn Jocher
97c488f8ef updates 2019-04-18 19:18:28 +02:00
Glenn Jocher
1d7ccb7580 updates 2019-04-18 19:17:08 +02:00
Glenn Jocher
023baa5aa2 updates 2019-04-18 17:07:41 +02:00
Glenn Jocher
31dc2e2be5 updates 2019-04-18 17:06:09 +02:00
Glenn Jocher
d822276cdb updates 2019-04-18 16:48:09 +02:00
Glenn Jocher
05ec8b3b9d updates 2019-04-18 16:45:38 +02:00
Glenn Jocher
c68093ee7d updates 2019-04-18 16:17:44 +02:00
Glenn Jocher
b7c5eb1503 updates 2019-04-18 16:07:47 +02:00
Glenn Jocher
c711719280 updates 2019-04-18 15:56:31 +02:00
Glenn Jocher
a4f2ad1660 updates 2019-04-18 15:39:05 +02:00
Glenn Jocher
25bf9e3611 updates 2019-04-18 15:31:03 +02:00
Glenn Jocher
8fcfb6ac3a updates 2019-04-18 15:24:58 +02:00
Glenn Jocher
ee410481a0 updates 2019-04-18 15:18:09 +02:00
Glenn Jocher
48f6529fd1 updates 2019-04-18 15:17:31 +02:00
Glenn Jocher
b177d01695 updates 2019-04-18 15:01:58 +02:00
Glenn Jocher
c5e58b6484 updates 2019-04-18 14:48:14 +02:00
Glenn Jocher
286257c5ac updates 2019-04-18 14:47:05 +02:00
Glenn Jocher
2089e4f4c8 updates 2019-04-18 14:33:32 +02:00
Glenn Jocher
f4dc0d84e4 updates 2019-04-18 12:27:28 +02:00
Glenn Jocher
8831913f10 updates 2019-04-18 12:21:39 +02:00
Glenn Jocher
9a440cfa15 updates 2019-04-18 02:13:04 +02:00
Glenn Jocher
5f21139623 updates 2019-04-17 19:06:15 +02:00
Glenn Jocher
2ed0de9785 updates 2019-04-17 19:04:01 +02:00
Glenn Jocher
319c0988cc updates 2019-04-17 18:48:08 +02:00
Glenn Jocher
5fe0346176 updates 2019-04-17 18:40:12 +02:00
Glenn Jocher
7787090165 updates 2019-04-17 18:33:16 +02:00
Glenn Jocher
27ca52c9ee updates 2019-04-17 18:22:40 +02:00
Glenn Jocher
ddab3802eb updates 2019-04-17 17:59:01 +02:00
Glenn Jocher
46c55ac3bd updates 2019-04-17 17:51:39 +02:00
Glenn Jocher
663e06f4f9 updates 2019-04-17 17:49:00 +02:00
Glenn Jocher
bf966d177f updates 2019-04-17 17:42:17 +02:00
Glenn Jocher
4107315afe updates 2019-04-17 17:41:05 +02:00
Glenn Jocher
c2809622c1 updates 2019-04-17 17:35:00 +02:00
Glenn Jocher
5d467c8ac4 updates 2019-04-17 17:31:40 +02:00
Glenn Jocher
fb88cb0609 updates 2019-04-17 17:29:23 +02:00
Glenn Jocher
f380c7abd2 updates 2019-04-17 17:27:51 +02:00
Glenn Jocher
5f04b93b42 updates 2019-04-17 16:15:08 +02:00
Glenn Jocher
a95e47533a updates 2019-04-17 16:11:26 +02:00
Glenn Jocher
0b8a28e3dd updates 2019-04-17 15:52:51 +02:00
Glenn Jocher
9c5524ba82 updates 2019-04-17 14:57:39 +02:00
Glenn Jocher
b04ea48153 updates 2019-04-17 13:34:17 +02:00
Glenn Jocher
582becc4bf updates 2019-04-17 13:30:28 +02:00
Glenn Jocher
628d7e5081 updates 2019-04-17 13:26:41 +02:00
Glenn Jocher
6d22628569 updates 2019-04-17 13:25:17 +02:00
Glenn Jocher
447a3d923d updates 2019-04-17 12:42:10 +02:00
Glenn Jocher
f5d343b9a6 updates 2019-04-17 02:15:45 +02:00
Glenn Jocher
9990e72e6d updates 2019-04-17 00:02:24 +02:00
Glenn Jocher
6204d83f3a updates 2019-04-16 23:57:43 +02:00
Glenn Jocher
f2e12f6266 updates 2019-04-16 23:47:25 +02:00
Glenn Jocher
0654467891 Merge remote-tracking branch 'origin/master' 2019-04-16 23:31:30 +02:00
Glenn Jocher
2c3d461392 updates 2019-04-16 23:27:31 +02:00
Glenn Jocher
ddc3c82c91
Update README.md 2019-04-16 22:29:00 +02:00
Glenn Jocher
d4b80b82c3
Update README.md 2019-04-16 14:01:55 +02:00
Glenn Jocher
a70c9f87a9 updates 2019-04-16 13:17:48 +02:00
Glenn Jocher
100f443722 updates 2019-04-16 13:03:24 +02:00
Glenn Jocher
a8fb235647 updates 2019-04-16 12:55:23 +02:00
Glenn Jocher
b5ec9cb128 updates 2019-04-16 12:49:34 +02:00
Glenn Jocher
54ebb2e593 pin_memory=True 2019-04-15 19:25:36 +02:00
Glenn Jocher
e3f0b0248c updates 2019-04-15 18:48:18 +02:00
Glenn Jocher
0f43cd7f43 Merge remote-tracking branch 'origin/master' 2019-04-15 18:19:52 +02:00
Glenn Jocher
a06abce40b updates 2019-04-15 18:19:45 +02:00
Glenn Jocher
0ce635fb53
Update README.md 2019-04-15 14:46:16 +02:00
Glenn Jocher
d5c3853593
Update README.md 2019-04-15 14:45:43 +02:00
Glenn Jocher
00cc02ba91 updates 2019-04-15 13:58:48 +02:00
Glenn Jocher
76bf9fceed updates 2019-04-15 13:57:07 +02:00
Glenn Jocher
1191dee71b updates 2019-04-15 13:55:52 +02:00
Glenn Jocher
3c6b168a0a updates 2019-04-14 23:22:35 +02:00
Glenn Jocher
09949cdafa updates 2019-04-14 16:00:04 +02:00
Glenn Jocher
52464f5a06 updates 2019-04-13 20:40:49 +02:00
Glenn Jocher
aeca7f72c4 updates 2019-04-13 20:38:00 +02:00
Glenn Jocher
947ee02115 updates 2019-04-13 20:32:29 +02:00
Glenn Jocher
95f3d8e043 updates 2019-04-13 20:11:08 +02:00
Glenn Jocher
f299d83f40 updates 2019-04-13 16:02:45 +02:00
Glenn Jocher
95696d24c0 updates 2019-04-12 17:19:00 +02:00
Glenn Jocher
50df252c4b updates 2019-04-12 14:58:19 +02:00
IlyaOvodov
5ea92e7ee2 FIX: training fails if targets list is empty (#198)
* FIX: training fails if targets list is empty

* Update utils.py
2019-04-12 14:55:26 +02:00
Glenn Jocher
24f86b008a
Update README.md 2019-04-12 14:24:51 +02:00
Glenn Jocher
bce3dd03e8 updates 2019-04-12 14:00:16 +02:00
Glenn Jocher
d5db50df8e updates 2019-04-11 18:26:52 +02:00
Glenn Jocher
c9a55a269b updates 2019-04-11 15:29:31 +02:00
Glenn Jocher
53b9892216 updates 2019-04-11 12:47:58 +02:00
Glenn Jocher
e6e6fb6f57 updates 2019-04-11 12:47:35 +02:00
Glenn Jocher
cbd5347cc3 updates 2019-04-11 12:41:07 +02:00
Glenn Jocher
835f975228 updates 2019-04-11 12:21:33 +02:00
Glenn Jocher
9c7dc10b7f updates 2019-04-10 16:51:58 +02:00
Glenn Jocher
f0b4f9f4fb updates 2019-04-10 16:39:15 +02:00
Glenn Jocher
a6a40e0592 updates 2019-04-10 16:23:10 +02:00
Glenn Jocher
d65d64bb7e updates 2019-04-10 16:17:08 +02:00
Glenn Jocher
bfc77ec88f updates 2019-04-10 13:51:33 +02:00
Glenn Jocher
7709e8aa72 updates 2019-04-09 16:28:14 +02:00
Glenn Jocher
2ca4c9aaec updates 2019-04-09 13:39:17 +02:00
Glenn Jocher
d8cbf9b7a7 updates 2019-04-09 13:21:39 +02:00
Glenn Jocher
3e85a4191a updates 2019-04-09 13:19:17 +02:00
Glenn Jocher
2e74f4be41 updates 2019-04-09 12:32:26 +02:00
Glenn Jocher
4b7eb0aec9 updates 2019-04-09 12:24:32 +02:00
Glenn Jocher
26b115c306 updates 2019-04-09 12:24:01 +02:00
Glenn Jocher
6cb3c61320 Merge remote-tracking branch 'origin/master' 2019-04-09 11:38:22 +02:00
Glenn Jocher
05881d0730 updates 2019-04-09 11:38:16 +02:00
Glenn Jocher
11366774e2
Update README.md 2019-04-09 11:05:45 +02:00
Glenn Jocher
e19b0effb2 updates 2019-04-08 23:45:52 +02:00
Glenn Jocher
dae3604705 Merge remote-tracking branch 'origin/master' 2019-04-08 15:42:11 +02:00
Glenn Jocher
3825e99ee3 updates 2019-04-08 15:41:14 +02:00
Glenn Jocher
34d083e6e8
Update README.md 2019-04-08 14:08:30 +02:00
Glenn Jocher
22ae1f7bee
Update README.md 2019-04-08 14:06:36 +02:00
Glenn Jocher
1d49e66580 updates 2019-04-06 20:33:58 +02:00
Glenn Jocher
287ad43c58 updates 2019-04-06 17:06:37 +02:00
Glenn Jocher
c948629fd3 updates 2019-04-06 16:16:40 +02:00
Glenn Jocher
d171596183 updates 2019-04-06 16:14:16 +02:00
Glenn Jocher
7ee48a43b6 updates 2019-04-06 16:13:35 +02:00
Glenn Jocher
112f061f4e updates 2019-04-06 16:13:11 +02:00
Glenn Jocher
a34a760d0f updates 2019-04-05 16:26:42 +02:00
Glenn Jocher
54e43b9ad6 updates 2019-04-05 16:19:51 +02:00
Glenn Jocher
65eccee4ef updates 2019-04-05 16:17:15 +02:00
Glenn Jocher
7b001d13c1 updates 2019-04-05 16:09:09 +02:00
Glenn Jocher
88eab43e5b updates 2019-04-05 16:08:34 +02:00
Glenn Jocher
2fab66607c updates 2019-04-05 16:08:18 +02:00
Glenn Jocher
5e79810e69 updates 2019-04-05 15:54:59 +02:00
Glenn Jocher
325b1ba4bc updates 2019-04-05 15:51:06 +02:00
Glenn Jocher
fe896c1792 updates 2019-04-05 15:43:41 +02:00
Glenn Jocher
1f889c575e updates 2019-04-05 15:37:16 +02:00
Glenn Jocher
cb352be02c updates 2019-04-05 15:34:42 +02:00
Glenn Jocher
7e82df6edc updates 2019-04-04 17:34:11 +02:00
Glenn Jocher
efc662351b updates 2019-04-03 17:24:13 +02:00
Glenn Jocher
a59caf053a updates 2019-04-03 15:00:27 +02:00
Glenn Jocher
0aff657e19 updates 2019-04-03 14:54:39 +02:00
Glenn Jocher
5b7325bd06 updates 2019-04-03 14:29:28 +02:00
Glenn Jocher
291b02a827 Merge remote-tracking branch 'origin/master' 2019-04-03 14:25:39 +02:00
Glenn Jocher
35d2576cb2 default changed to yolov3-spp 2019-04-03 14:25:31 +02:00
Glenn Jocher
b9bacf40ee
Update README.md 2019-04-03 14:22:32 +02:00
Glenn Jocher
149fcb04a5
Update README.md 2019-04-03 14:21:41 +02:00
Glenn Jocher
1ca0338f8e
Update README.md 2019-04-03 12:42:40 +02:00
Glenn Jocher
5170cd36b0 updates 2019-04-03 11:31:31 +02:00
Glenn Jocher
d79a54a4be updates 2019-04-03 11:07:31 +02:00
Glenn Jocher
c36f1e990b updates 2019-04-02 22:54:32 +02:00
Glenn Jocher
be10b75eb4 updates 2019-04-02 20:21:00 +02:00
Glenn Jocher
9b32d3cb5d updates 2019-04-02 19:19:02 +02:00
Glenn Jocher
f527d30ccd updates 2019-04-02 18:50:55 +02:00
Glenn Jocher
7f220a14cb updates 2019-04-02 18:10:53 +02:00
Glenn Jocher
03559eff6e updates 2019-04-02 18:05:25 +02:00
Glenn Jocher
1457f66419 updates 2019-04-02 18:04:04 +02:00
Glenn Jocher
d526ce0d11 updates 2019-04-02 16:33:52 +02:00
Glenn Jocher
658f2a4576 updates 2019-04-02 16:15:25 +02:00
Glenn Jocher
09100263e9 updates 2019-04-02 16:12:35 +02:00
Glenn Jocher
6b05c7750e updates 2019-04-02 16:06:31 +02:00
Glenn Jocher
3c233bc0b7 updates 2019-04-02 16:06:15 +02:00
Glenn Jocher
3f82380e12 updates 2019-04-02 15:09:13 +02:00
Glenn Jocher
a3ab6221cb updates 2019-04-02 14:42:29 +02:00
Glenn Jocher
9b92794e20 updates 2019-04-02 14:41:52 +02:00
Glenn Jocher
6ae25fc597 updates 2019-04-02 14:31:35 +02:00
Glenn Jocher
47400aa066 updates 2019-04-02 14:29:35 +02:00
Glenn Jocher
748ff9b5b9 updates 2019-04-02 14:29:15 +02:00
Glenn Jocher
330caefe69 updates 2019-04-02 14:19:53 +02:00
Glenn Jocher
af61da5d41 updates 2019-04-02 14:07:14 +02:00
Glenn Jocher
e3781460f8 updates 2019-04-02 14:02:35 +02:00
Glenn Jocher
c9328f663f updates 2019-04-02 13:56:54 +02:00
Glenn Jocher
01569d15e3 updates 2019-04-02 13:43:18 +02:00
Glenn Jocher
bd32517528 updates 2019-04-01 20:27:11 +02:00
Glenn Jocher
a76e8e3ee8 updates 2019-04-01 18:43:21 +02:00
Glenn Jocher
4f98fbde78 updates 2019-04-01 18:42:54 +02:00
Glenn Jocher
b56952d707 updates 2019-03-31 20:19:15 +02:00
Glenn Jocher
09b02d2029 updates 2019-03-31 19:57:44 +02:00
Gabriel Bianconi
8901e96a38 Save model by default (#178)
* Save model by default

* Update train.py
2019-03-31 19:11:13 +02:00
Gabriel Bianconi
6b828f184e Fix None bug in detect.py (#177) 2019-03-31 14:06:49 +02:00
Glenn Jocher
c0cacc45a1
mAP Update (#176)
* updates
2019-03-30 18:45:04 +01:00
Glenn Jocher
f2cb840123 Merge remote-tracking branch 'origin/master' 2019-03-28 18:55:20 +01:00
Glenn Jocher
1c48376d8d updates 2019-03-28 18:55:13 +01:00
Glenn Jocher
fdc02115b5
Update README.md 2019-03-28 13:46:23 +01:00
Fatih Baltacı
cb67d64a5f Update datasets.py (#169) 2019-03-27 15:44:19 +01:00
Glenn Jocher
47eab968ab updates 2019-03-26 18:02:57 +01:00
Glenn Jocher
5fd702aead updates 2019-03-25 19:39:14 +01:00
Glenn Jocher
bd440fa0c3 updates 2019-03-25 18:35:39 +01:00
Glenn Jocher
f5d398a68d updates 2019-03-25 15:33:36 +01:00
Glenn Jocher
ff06b698ed updates 2019-03-25 15:32:00 +01:00
Glenn Jocher
8f0caff640
Update issue templates 2019-03-25 15:27:09 +01:00
Glenn Jocher
c16b587ac7
Update train.py 2019-03-25 15:06:22 +01:00
Glenn Jocher
06d264198c
Update train.py 2019-03-25 15:05:35 +01:00
Glenn Jocher
c7192f64c9
Update train.py 2019-03-25 15:03:13 +01:00
Glenn Jocher
cd51e1137b
Add collate_fn() to DataLoader (#163)
Multi-GPU update with a custom collate function that allows a variable-size target vector per image without needing to pad targets.
2019-03-25 14:59:38 +01:00
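A minimal sketch of the collate pattern described above, assuming each dataset item returns (image, labels) where the label tensor reserves its first column for the image index (the usual YOLOv3 layout; names are illustrative):

```python
import torch

def collate_fn(batch):
    """Stack images, but concatenate variable-length label tensors and tag
    each label row with its image's index in the batch -- no padding needed."""
    imgs, labels = zip(*batch)  # labels[i]: (n_i, 6), col 0 reserved for image index
    for i, l in enumerate(labels):
        l[:, 0] = i
    return torch.stack(imgs, 0), torch.cat(labels, 0)

# Usage: DataLoader(dataset, batch_size=16, collate_fn=collate_fn)
```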
Glenn Jocher
49ae0a55b1
Merge pull request #162 from fatihbaltaci/master
Local tcp:ip and changed setting of rank and world-size
2019-03-25 13:35:05 +01:00
Fatih Baltacı
9208a91095
Update train.py 2019-03-25 15:22:05 +03:00
Glenn Jocher
16ad6f9739
Merge pull request #160 from perry0418/master
fix the multi gpu training bug: zero map
2019-03-25 11:57:40 +01:00
perry0418
5daea5882f
Update train.py
fix problem of multiple gpu training
2019-03-25 16:13:21 +08:00
perry0418
4884508110
Update utils.py
solve the multi-gpu training problem
2019-03-25 14:59:02 +08:00
perry0418
386835d7ca
Update train.py
solve the multi-gpu training problem.
2019-03-25 14:56:38 +08:00
perry0418
648ed20717
Merge pull request #1 from ultralytics/master
update from original master
2019-03-25 14:39:41 +08:00
Glenn Jocher
4114d5e9c9 updates 2019-03-22 17:53:06 +02:00
Glenn Jocher
b31f8fb017 updates 2019-03-22 15:08:03 +02:00
Glenn Jocher
75d8cbdd5f updates 2019-03-22 14:56:43 +02:00
Glenn Jocher
eafa2740db
Merge pull request #151 from WannaSeaU/patch-1
Empty label file may cause index error
2019-03-22 14:54:09 +02:00
Glenn Jocher
3532ee038f
Update datasets.py 2019-03-22 14:52:58 +02:00
WannaSeaU
cd188dbde6
Empty label file may cause index error 2019-03-22 18:59:09 +08:00
Glenn Jocher
476724be2d updates 2019-03-22 12:50:25 +02:00
Glenn Jocher
943db40f1a updates 2019-03-22 00:41:43 +02:00
Glenn Jocher
20beee0c5b updates 2019-03-21 22:51:22 +02:00
Glenn Jocher
176851f83a updates 2019-03-21 22:49:57 +02:00
Glenn Jocher
cd81f978be Merge remote-tracking branch 'origin/master' 2019-03-21 22:42:36 +02:00
Glenn Jocher
6aef4e6a78 updates 2019-03-21 22:41:12 +02:00
Glenn Jocher
1e62e94185
Update train.py 2019-03-21 18:30:14 +02:00
Glenn Jocher
8ebb4da5cc updates 2019-03-21 17:28:26 +02:00
Glenn Jocher
72afa02272 Merge remote-tracking branch 'origin/master' 2019-03-21 17:26:36 +02:00
Glenn Jocher
d047062074 updates 2019-03-21 17:26:31 +02:00
Glenn Jocher
aa95302880
Update README.md 2019-03-21 15:15:26 +02:00
Glenn Jocher
a024286ec1 updates 2019-03-21 15:05:20 +02:00
Glenn Jocher
a3067e7978 multi_thread dataloader 2019-03-21 14:57:41 +02:00
Glenn Jocher
0fb2653c59 Merge remote-tracking branch 'origin/master' 2019-03-21 14:49:10 +02:00
Glenn Jocher
70fe2204b4 multi_thread dataloader 2019-03-21 14:48:40 +02:00
Glenn Jocher
0bb3cc100a
Update README.md 2019-03-21 13:01:07 +02:00
Glenn Jocher
56d5b2fcc0
Update README.md 2019-03-21 13:00:24 +02:00
Glenn Jocher
be38caf284 updates 2019-03-21 12:13:09 +02:00
Glenn Jocher
2856af5036 updates 2019-03-21 12:11:08 +02:00
Glenn Jocher
aecf840701 updates 2019-03-21 12:09:54 +02:00
Glenn Jocher
03791babfb Merge branch 'master' of /Users/glennjocher/PycharmProjects/yolov3 with conflicts. 2019-03-21 12:08:55 +02:00
Glenn Jocher
ad49e70f47
Merge pull request #145 from perry0418/master
Update train.py
2019-03-21 12:04:50 +02:00
Glenn Jocher
d661fba8ae updates 2019-03-21 11:48:50 +02:00
perry0418
35396adc9c
Update train.py
solve the Multi-GPU --resume Error #138
https://github.com/ultralytics/yolov3/issues/138
2019-03-21 12:02:57 +08:00
Glenn Jocher
2cd6805063 updates 2019-03-21 02:28:30 +02:00
Glenn Jocher
fc87e9af1f Merge remote-tracking branch 'origin/master' 2019-03-21 01:57:56 +02:00
Glenn Jocher
ca67e2353b updates 2019-03-21 01:57:16 +02:00
Glenn Jocher
d1a1ea233a
Update README.md 2019-03-21 01:03:29 +02:00
Glenn Jocher
327aaebd7c updates 2019-03-20 22:10:18 +02:00
Glenn Jocher
9885903baf updates 2019-03-20 20:31:09 +02:00
Glenn Jocher
e7075f2b23 updates 2019-03-20 19:30:10 +02:00
Glenn Jocher
e0eb62706d Merge remote-tracking branch 'origin/master' 2019-03-20 19:21:00 +02:00
Glenn Jocher
f8994e89ea updates 2019-03-20 19:20:54 +02:00
Glenn Jocher
87cb8e661b
Update README.md 2019-03-20 14:08:24 +02:00
Glenn Jocher
7e8fc146e1
Update README.md 2019-03-20 13:35:39 +02:00
Glenn Jocher
a5468acb54
Update README.md 2019-03-20 13:35:03 +02:00
Glenn Jocher
83c6eba700
Update README.md 2019-03-20 13:26:46 +02:00
Glenn Jocher
613ce1be1a updates 2019-03-19 15:50:15 +02:00
Glenn Jocher
973715060d multi_gpu multi_scale 2019-03-19 15:48:52 +02:00
Glenn Jocher
bc989a0147 multi_gpu multi_scale 2019-03-19 15:44:36 +02:00
Glenn Jocher
735b1a370b multi_gpu multi_scale 2019-03-19 15:43:10 +02:00
Glenn Jocher
2f1afd2d69 multi_gpu multi_scale 2019-03-19 15:38:53 +02:00
Glenn Jocher
dcdd1ae6b7 multi_gpu multi_scale 2019-03-19 15:35:12 +02:00
Glenn Jocher
00e181a55a updates 2019-03-19 12:38:35 +02:00
Glenn Jocher
76f555c108 multi_gpu multi_scale 2019-03-19 12:34:12 +02:00
Glenn Jocher
feb5fcb16f multi_gpu multi_scale 2019-03-19 11:38:01 +02:00
Glenn Jocher
fa1002b76c Merge remote-tracking branch 'origin/master' 2019-03-19 10:38:43 +02:00
Glenn Jocher
056eed2833 multi_gpu multi_scale 2019-03-19 10:38:32 +02:00
Glenn Jocher
094edb4036
Update README.md 2019-03-18 12:49:18 +02:00
Glenn Jocher
096d55c120
Update README.md 2019-03-18 12:46:44 +02:00
Glenn Jocher
c468294a48
Update README.md 2019-03-18 12:45:49 +02:00
Glenn Jocher
e9a3883001
Update README.md 2019-03-18 12:33:31 +02:00
Glenn Jocher
32f1def48f multi_gpu 2019-03-18 00:59:24 +02:00
Glenn Jocher
7dba5d0171 multi_gpu 2019-03-18 00:48:56 +02:00
Glenn Jocher
f5247b397b multi_gpu 2019-03-18 00:19:52 +02:00
Glenn Jocher
45fac6bff1
multi_gpu (#135)
* updates
2019-03-17 23:45:39 +02:00
Glenn Jocher
8c730e03cd
Merge pull request #127 from dseuss/master
ONNX export for custom dataset
2019-03-16 12:14:06 +02:00
Glenn Jocher
2df8d7e9f6
nms speedup 2019-03-15 20:40:37 +02:00
Daniel Suess
10182ca39d Get rid of hardcoded values of 85 2019-03-13 13:00:25 +11:00
Daniel Suess
003de1917e Fix shape-mismatch in ONNX export 2019-03-13 12:50:13 +11:00
Glenn Jocher
c1c09eb3cc
Update train.py 2019-03-10 15:03:17 +01:00
Glenn Jocher
c719792d6b
Update README.md 2019-03-08 13:14:55 +01:00
Glenn Jocher
bc0f30933a updates 2019-03-07 17:16:38 +01:00
Glenn Jocher
ff9d343019
Update train.py 2019-03-07 13:28:54 +01:00
Glenn Jocher
473eb8d0c9
Update models.py 2019-03-05 18:43:51 +01:00
Glenn Jocher
6fb14fc903 updates 2019-03-05 17:14:40 +01:00
Glenn Jocher
a2ad00d6fc updates 2019-03-05 17:10:34 +01:00
Glenn Jocher
2c2d7bc63b
Update README.md 2019-03-05 16:23:33 +01:00
Glenn Jocher
3e6800fbc9 updates 2019-03-05 16:13:40 +01:00
Glenn Jocher
e5dc942fee updates 2019-03-04 17:38:38 +01:00
Glenn Jocher
54b62f5302 updates 2019-03-04 17:36:41 +01:00
Glenn Jocher
3bea4da604 updates 2019-03-04 17:34:53 +01:00
Glenn Jocher
3f21d5bb2e updates 2019-03-04 16:13:31 +01:00
Glenn Jocher
dc9f2ef6ba updates 2019-03-04 16:11:37 +01:00
Glenn Jocher
5fcdcefec3 updates 2019-03-04 16:06:13 +01:00
Glenn Jocher
5ca987eaed updates 2019-03-04 16:01:23 +01:00
Glenn Jocher
175e231c55 updates 2019-03-04 15:59:11 +01:00
Glenn Jocher
0b3a17362c updates 2019-03-04 15:56:58 +01:00
Glenn Jocher
5e0cce771e updates 2019-03-04 15:52:51 +01:00
Glenn Jocher
8069fc452a updates 2019-03-04 15:52:18 +01:00
Glenn Jocher
65ed0d1122 updates 2019-03-04 15:46:25 +01:00
Glenn Jocher
66211d1147 updates 2019-03-04 15:41:22 +01:00
Glenn Jocher
545f756090 updates 2019-02-28 15:40:30 +01:00
Glenn Jocher
55c6efbb39 updates 2019-02-27 14:46:28 +01:00
Glenn Jocher
303eef1d3d updates 2019-02-27 14:45:39 +01:00
Glenn Jocher
bf62d1d67e updates 2019-02-27 14:40:41 +01:00
Glenn Jocher
7b2e442ba2 updates 2019-02-27 14:38:57 +01:00
Glenn Jocher
e094bb14ba updates 2019-02-27 14:19:57 +01:00
Glenn Jocher
70339798c5 updates 2019-02-27 14:07:04 +01:00
Glenn Jocher
324dc6af6e updates 2019-02-27 13:21:39 +01:00
Glenn Jocher
41d55d452b updates 2019-02-27 12:52:02 +01:00
Glenn Jocher
036e3b3253 updates 2019-02-27 12:51:24 +01:00
Glenn Jocher
358f34afa8 updates 2019-02-27 12:32:25 +01:00
Glenn Jocher
9a27339e04 updates 2019-02-27 00:04:41 +01:00
Glenn Jocher
eb6a4b5b84 updates 2019-02-26 15:15:39 +01:00
Glenn Jocher
a6fdc7413b updates 2019-02-26 15:12:21 +01:00
Glenn Jocher
249313be6c updates 2019-02-26 15:11:22 +01:00
Glenn Jocher
57417e7080 updates 2019-02-26 15:00:27 +01:00
Glenn Jocher
cb63ce30ec updates 2019-02-26 14:57:28 +01:00
Glenn Jocher
707d6ea965 updates 2019-02-26 13:52:03 +01:00
Glenn Jocher
a2b3e18fc1 updates 2019-02-26 03:28:21 +01:00
Glenn Jocher
e1fa265f02 updates 2019-02-26 03:18:15 +01:00
Glenn Jocher
6814e925b5 updates 2019-02-26 03:12:32 +01:00
Glenn Jocher
41ba6dfd6b updates 2019-02-26 03:12:04 +01:00
Glenn Jocher
40fe489b80 updates 2019-02-26 02:55:32 +01:00
Glenn Jocher
90a20f93e5 updates 2019-02-26 02:53:11 +01:00
Glenn Jocher
f541861533 updates 2019-02-25 13:50:01 +01:00
Glenn Jocher
d2cd49f059 updates 2019-02-25 13:47:51 +01:00
Glenn Jocher
8af70386e8 updates 2019-02-23 23:50:23 +01:00
Glenn Jocher
9e60e97a6c updates 2019-02-22 16:24:30 +01:00
Glenn Jocher
ac22a717f1 updates 2019-02-22 16:15:20 +01:00
Glenn Jocher
12e605165e updates 2019-02-22 15:05:03 +01:00
Glenn Jocher
0f3018124f updates 2019-02-21 23:23:03 +01:00
Glenn Jocher
3f68a6776a updates 2019-02-21 20:45:53 +01:00
Glenn Jocher
e62736f8a8 updates 2019-02-21 20:16:58 +01:00
Glenn Jocher
485321ecb1 updates 2019-02-21 16:18:11 +01:00
Glenn Jocher
46e3343494 updates 2019-02-21 16:16:35 +01:00
Glenn Jocher
af853f604c updates 2019-02-21 15:57:18 +01:00
Glenn Jocher
ec308d605e updates 2019-02-21 10:59:05 +01:00
Glenn Jocher
646a21a5cd updates 2019-02-21 10:57:55 +01:00
Glenn Jocher
7a31f4b288 updates 2019-02-21 00:14:16 +01:00
Glenn Jocher
58d4826a11 updates 2019-02-20 23:52:36 +01:00
Glenn Jocher
a92b6d4d32 updates 2019-02-20 23:21:42 +01:00
Glenn Jocher
0b971eddff updates 2019-02-20 18:41:31 +01:00
Glenn Jocher
f8c675dbc0 updates 2019-02-20 17:44:41 +01:00
Glenn Jocher
ead4af98b0 updates 2019-02-20 15:11:55 +01:00
Glenn Jocher
a65a383d3a updates 2019-02-20 13:00:39 +01:00
Glenn Jocher
ed37551c38 updates 2019-02-20 12:55:06 +01:00
Glenn Jocher
344bea20eb updates 2019-02-20 12:53:38 +01:00
Glenn Jocher
f728bd21d2 updates 2019-02-20 12:52:39 +01:00
Glenn Jocher
315fe6ec14 xy and wh losses respectively merged 2019-02-20 12:50:40 +01:00
Glenn Jocher
44333ddd9f updates 2019-02-19 22:49:47 +01:00
Glenn Jocher
0772ebf7c9 xy and wh losses respectively merged 2019-02-19 22:19:59 +01:00
Glenn Jocher
3eb49be263 xy and wh losses respectively merged 2019-02-19 20:03:42 +01:00
Glenn Jocher
15bba5a345 xy and wh losses respectively merged 2019-02-19 19:55:33 +01:00
Glenn Jocher
9df279cded updates 2019-02-19 19:36:09 +01:00
Glenn Jocher
75225e4d99 updates 2019-02-19 19:30:56 +01:00
Glenn Jocher
a116dd36f7 updates 2019-02-19 19:18:03 +01:00
Glenn Jocher
f07dd72a09 updates 2019-02-19 19:01:31 +01:00
Glenn Jocher
9c96e7b6cd updates 2019-02-19 19:00:44 +01:00
Glenn Jocher
0dd791b7ad updates 2019-02-19 16:11:18 +01:00
Glenn Jocher
3157049c60 updates 2019-02-19 15:01:08 +01:00
Glenn Jocher
f16609b48b updates 2019-02-18 20:42:59 +01:00
Glenn Jocher
ce4ee36ca0 updates 2019-02-18 19:58:01 +01:00
Glenn Jocher
d81838e286 updates 2019-02-18 19:53:38 +01:00
Glenn Jocher
bbb750876e updates 2019-02-18 19:52:38 +01:00
Glenn Jocher
0f06fbd681 updates 2019-02-18 19:49:58 +01:00
Glenn Jocher
2ef92f5651 updates 2019-02-18 19:44:15 +01:00
Glenn Jocher
f788a57009 updates 2019-02-18 19:31:00 +01:00
Glenn Jocher
2ba45e4878 updates 2019-02-18 19:27:34 +01:00
Glenn Jocher
5f2d3aa9c3 updates 2019-02-18 19:24:53 +01:00
Glenn Jocher
77ce2cd43f updates 2019-02-18 19:21:21 +01:00
Glenn Jocher
adea337545 updates 2019-02-18 19:17:48 +01:00
Glenn Jocher
a80b2d1611 updates 2019-02-18 19:13:40 +01:00
Glenn Jocher
e4d62de5bc updates 2019-02-18 18:32:31 +01:00
Glenn Jocher
6e2cf074a1 updates 2019-02-18 17:48:35 +01:00
Glenn Jocher
fa0cbca69a updates 2019-02-18 17:47:30 +01:00
Glenn Jocher
8de043980a updates 2019-02-18 16:25:57 +01:00
Glenn Jocher
c535a8699a updates 2019-02-18 15:51:35 +01:00
Glenn Jocher
6deda82384 updates 2019-02-18 14:03:39 +01:00
Glenn Jocher
03685866fd updates 2019-02-17 18:04:23 +01:00
Glenn Jocher
e919635564 updates 2019-02-17 18:02:56 +01:00
Glenn Jocher
8646db7c19 updates 2019-02-17 17:34:45 +01:00
Glenn Jocher
1239e8dca3 updates 2019-02-17 17:32:53 +01:00
Glenn Jocher
12a42b9ca6 updates 2019-02-17 17:30:16 +01:00
Glenn Jocher
9086caf0bb updates 2019-02-16 14:47:16 +01:00
Glenn Jocher
c828f5459f select GPU0 if multiple available 2019-02-16 14:33:52 +01:00
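The commit above pins work to the first GPU when several are available. A minimal sketch of that device-selection pattern in PyTorch (the commit's actual code isn't shown here; the model is a placeholder):

```python
import torch

# Prefer the first CUDA device when one or more GPUs are visible, else fall back to CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torch.nn.Linear(10, 2).to(device)  # tiny stand-in model for illustration
x = torch.randn(4, 10, device=device)
print(device, model(x).shape)
```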
Glenn Jocher
ee4abc8cdf optimize imports 2019-02-15 14:08:05 +02:00
Glenn Jocher
9b4e9924fb
Update README.md 2019-02-13 23:15:30 +02:00
Glenn Jocher
c68113cc71
Update README.md 2019-02-13 23:01:58 +02:00
Glenn Jocher
044602b545
Update README.md 2019-02-13 22:45:52 +02:00
Glenn Jocher
2634ff502d optimize imports 2019-02-12 18:21:06 +01:00
Glenn Jocher
0a6306b6cd optimize imports 2019-02-12 18:07:23 +01:00
Glenn Jocher
9706002b71 optimize imports 2019-02-12 18:05:58 +01:00
Glenn Jocher
7d5878872c updates 2019-02-12 17:56:24 +01:00
Glenn Jocher
9cc5ddd776 updates 2019-02-12 17:29:13 +01:00
Glenn Jocher
cc5e9a5a85 updates 2019-02-12 16:58:07 +01:00
Glenn Jocher
e4e64a9ff6 updates 2019-02-12 13:50:43 +01:00
Glenn Jocher
9f145d2aa7 updates 2019-02-11 22:44:25 +01:00
Glenn Jocher
742908257a updates 2019-02-11 18:17:38 +01:00
Glenn Jocher
e23b1a3d73 webcam updates 2019-02-11 18:15:51 +01:00
Glenn Jocher
585f2e2cc1 updates 2019-02-11 17:25:32 +01:00
Glenn Jocher
be2c70106b updates 2019-02-11 14:19:35 +01:00
Glenn Jocher
dea82efa29 updates 2019-02-11 14:19:06 +01:00
Glenn Jocher
0d1bcae1ef updates 2019-02-11 14:13:27 +01:00
Glenn Jocher
22e963a8b7 updates 2019-02-11 14:12:13 +01:00
Glenn Jocher
d85eafe550 updates 2019-02-11 14:11:24 +01:00
Glenn Jocher
5d76ebcc5b updates 2019-02-11 14:08:36 +01:00
Glenn Jocher
6f4f69f6ec
Merge pull request #87 from Ttayu/develop
Ignore private configuration files.
2019-02-11 14:05:59 +01:00
Glenn Jocher
6f58e1384a class labeling corrections 2019-02-11 13:47:58 +01:00
Glenn Jocher
786e10a197 class labeling corrections 2019-02-11 13:45:04 +01:00
Glenn Jocher
1ca352b328 class labeling corrections 2019-02-11 12:44:12 +01:00
Glenn Jocher
ebd682b25c updates 2019-02-11 12:40:14 +01:00
Glenn Jocher
003daea143 updates 2019-02-11 12:32:54 +01:00
Glenn Jocher
37b633d205 updates 2019-02-11 12:27:11 +01:00
Glenn Jocher
429fd6121c updates 2019-02-11 12:26:46 +01:00
Glenn Jocher
daed93102c updates 2019-02-11 12:26:30 +01:00
Glenn Jocher
3cd76b2185 updates 2019-02-10 23:27:31 +01:00
Glenn Jocher
ab2ea5a2f9 updates 2019-02-10 22:02:55 +01:00
Glenn Jocher
22dc8c0ea6 updates 2019-02-10 22:01:53 +01:00
Glenn Jocher
62761cffe6 updates 2019-02-10 21:41:57 +01:00
Glenn Jocher
715c4575bf updates 2019-02-10 21:34:15 +01:00
Glenn Jocher
917f9dd248 updates 2019-02-10 21:28:27 +01:00
Glenn Jocher
c60bad8b10 updates 2019-02-10 21:23:58 +01:00
Glenn Jocher
6f0086103c updates 2019-02-10 21:10:50 +01:00
Glenn Jocher
51eb173416 updates 2019-02-10 21:07:26 +01:00
Glenn Jocher
97909df1a6 updates 2019-02-10 21:06:22 +01:00
Glenn Jocher
9d12a162f8 updates 2019-02-10 21:01:49 +01:00
Glenn Jocher
e057f52780 updates 2019-02-10 20:32:04 +01:00
Ttayu
045651902c Ignore cfg and data directory. 2019-02-10 15:03:08 +09:00
Ttayu
a50782354f Revert "Ignore cfg and data directory."
This reverts commit 8db03998d34f2268a91cae54eead14f2319b84f4.
2019-02-10 14:48:50 +09:00
Ttayu
8db03998d3 Ignore cfg and data directory. 2019-02-10 06:39:32 +09:00
Glenn Jocher
d5b17c93ff updates 2019-02-09 22:39:04 +01:00
Glenn Jocher
5ec27663e6 updates 2019-02-09 22:38:51 +01:00
Glenn Jocher
f908f845ae updates 2019-02-09 22:14:07 +01:00
Glenn Jocher
1cd907c59b updates 2019-02-09 19:29:19 +01:00
Glenn Jocher
be934ba5a5 updates 2019-02-09 19:26:53 +01:00
Glenn Jocher
a701374014 updates 2019-02-09 19:24:51 +01:00
Glenn Jocher
a0936a4eac updates 2019-02-09 19:13:07 +01:00
Glenn Jocher
e88798aefd updates 2019-02-09 18:52:02 +01:00
Glenn Jocher
0913402606 updates 2019-02-09 15:14:31 +01:00
Glenn Jocher
12c9ac9764 updates 2019-02-09 15:12:32 +01:00
Glenn Jocher
30e67cb8b1 updates 2019-02-08 23:28:00 +01:00
Glenn Jocher
08f051c1d4 updates 2019-02-08 23:20:41 +01:00
Glenn Jocher
8dec060504 updates 2019-02-08 23:15:55 +01:00
Glenn Jocher
2a009d8d47 updates 2019-02-08 23:09:58 +01:00
Glenn Jocher
c37fda7d45 updates 2019-02-08 23:08:26 +01:00
Glenn Jocher
e77de1c3c7 updates 2019-02-08 23:03:27 +01:00
Glenn Jocher
334660d58f updates 2019-02-08 22:55:01 +01:00
Glenn Jocher
c2436d8197 updates 2019-02-08 22:43:05 +01:00
Glenn Jocher
d6abdaf8d0 updates 2019-02-08 17:17:48 +01:00
Glenn Jocher
8b88e50f2f updates 2019-02-08 16:50:48 +01:00
Glenn Jocher
8b9aae484b updates 2019-02-08 15:13:44 +01:00
Glenn Jocher
88804cad3b
Update models.py 2019-01-10 10:32:39 +01:00
Glenn Jocher
646a573740 updates 2019-01-09 11:48:04 +01:00
Glenn Jocher
acfe4aaf94 updates 2019-01-08 19:37:23 +01:00
Glenn Jocher
fcda9a2fa0 updates 2019-01-08 19:34:29 +01:00
Glenn Jocher
2dd2564c4e Merge remote-tracking branch 'origin/master' 2019-01-06 23:58:09 +01:00
Glenn Jocher
558b23bca7 updates 2019-01-06 23:57:59 +01:00
Glenn Jocher
c164c87935
Merge pull request #70 from jveitchmichaelis/patch-2
Fix absolute path in class name loader
2019-01-06 23:18:49 +02:00
Josh Veitch-Michaelis
212597fddd
Fix absolute path in class name loader 2019-01-06 20:54:04 +00:00
Glenn Jocher
8dfa653942 updates 2019-01-06 15:58:41 +02:00
Glenn Jocher
6e1ff541c9 updates 2019-01-06 14:23:04 +02:00
Glenn Jocher
178e1a346b updates 2019-01-05 17:23:17 +02:00
Glenn Jocher
d673b6c5f4 updates 2019-01-03 23:44:51 +01:00
Glenn Jocher
5a7313ca5a updates 2019-01-03 23:42:07 +01:00
Glenn Jocher
cff2a81315 updates 2019-01-03 23:41:31 +01:00
Glenn Jocher
b181c61f4b updates 2019-01-02 16:32:38 +01:00
Glenn Jocher
7283f26f6f updates 2019-01-01 17:52:45 +01:00
Glenn Jocher
0bb3fcb049 updates 2018-12-31 12:44:00 +01:00
Glenn Jocher
17a02ae3e4 updates 2018-12-31 12:33:34 +01:00
Glenn Jocher
36a06a1e90 updates 2018-12-31 12:31:38 +01:00
Glenn Jocher
cc018c73ad ONNX compatibility updates 2018-12-28 21:12:31 +01:00
Glenn Jocher
d1951c1868 ONNX compatibility updates 2018-12-28 20:15:26 +01:00
Glenn Jocher
16bc3b72c3 updates 2018-12-28 20:11:10 +01:00
Glenn Jocher
eec0dc7b6c ONNX compatibility updates 2018-12-28 20:09:06 +01:00
Glenn Jocher
8ad8a64a0d
Delete coco_augmentation_examples.jpg 2018-12-28 19:52:54 +01:00
Glenn Jocher
5e58999f50
Update README.md 2018-12-28 19:52:25 +01:00
Glenn Jocher
53adbc82ae
Delete zidane_result.jpg 2018-12-28 19:24:01 +01:00
Glenn Jocher
7f257d30c5
Update README.md 2018-12-28 19:23:35 +01:00
Glenn Jocher
1b629e754c
Update README.md 2018-12-28 19:18:15 +01:00
Glenn Jocher
ac2adc8be1
Update README.md 2018-12-28 19:17:33 +01:00
Glenn Jocher
110bc33023
Update README.md 2018-12-28 10:50:38 +01:00
Glenn Jocher
b41d6af01b
Update README.md 2018-12-28 10:50:08 +01:00
Glenn Jocher
9a98e806e0 ONNX compatibility updates 2018-12-28 08:48:10 +01:00
Glenn Jocher
4c5f4864fb ONNX compatibility updates 2018-12-26 15:57:18 +01:00
Glenn Jocher
8b34fbef33 ONNX compatibility updates 2018-12-26 15:46:39 +01:00
Glenn Jocher
29f2e80950 ONNX compatibility updates 2018-12-26 12:33:34 +01:00
Glenn Jocher
6940221948 ONNX compatibility updates 2018-12-26 12:32:34 +01:00
Glenn Jocher
647e1c6f52 ONNX compatibility updates 2018-12-25 13:24:21 +01:00
Glenn Jocher
febc55d96a updates 2018-12-25 13:21:02 +01:00
Glenn Jocher
b6ff9cad79 updates 2018-12-24 14:33:05 +01:00
Glenn Jocher
5403581e38 updates 2018-12-24 13:11:21 +01:00
Glenn Jocher
38fbc1e383 updates 2018-12-23 13:54:12 +01:00
Glenn Jocher
c4222cc7f7 updates 2018-12-23 13:50:44 +01:00
Glenn Jocher
fb4383f364 ONNX export compatibility updates 2018-12-23 13:46:47 +01:00
Glenn Jocher
c50df0d1db ONNX export compatibility updates 2018-12-23 12:51:02 +01:00
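The run of ONNX-compatibility commits above works toward making the model exportable via tracing. A minimal sketch of the export call involved, using a stand-in model and an assumed 416x416 input shape (not the repo's actual export code):

```python
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU())  # stand-in for the Darknet model
model.eval()
dummy = torch.zeros(1, 3, 416, 416)  # ONNX export traces the graph at a fixed input shape
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11)  # opset version is an assumption
```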
Glenn Jocher
69963ff1f5 updates 2018-12-23 10:53:47 +01:00
Glenn Jocher
aa3d1a2bbd updates 2018-12-22 14:25:02 +01:00
Glenn Jocher
465f847660 updates 2018-12-22 13:54:02 +01:00
Glenn Jocher
de2d835b91 updates 2018-12-22 13:53:45 +01:00
Glenn Jocher
47df31a270 updates 2018-12-22 13:20:40 +01:00
Glenn Jocher
9c3d9dca97 updates 2018-12-22 13:09:05 +01:00
Glenn Jocher
a21c131dd0 updates 2018-12-22 13:05:52 +01:00
Glenn Jocher
ffd45ebf0c updates 2018-12-22 12:58:59 +01:00
Glenn Jocher
a0ab4916fd yolov3-tiny addition 2018-12-22 12:49:55 +01:00
Glenn Jocher
bd5b9693bf enable yolov3-tiny inference 2018-12-22 12:36:33 +01:00
Glenn Jocher
34fb1bb8a9 updates 2018-12-20 22:08:47 +01:00
Glenn Jocher
62c186da25 updates 2018-12-19 23:48:52 +01:00
Glenn Jocher
b48c108ba0 updates 2018-12-17 22:43:55 +01:00
Glenn Jocher
682fd61385 updates 2018-12-17 22:43:30 +01:00
Glenn Jocher
89e8468895
Merge pull request #48 from jveitchmichaelis/patch-1
Remove auto-shutdown from get coco script
2018-12-17 21:49:55 +01:00
Josh Veitch-Michaelis
96d85ad4ba
Remove auto-shutdown from get coco script
This is presumably for unattended download on cloud systems, but the script should alert the user first. Automatically shutting down a system when you download some data shouldn't be the default behaviour. It's also not in the original Darknet script (https://github.com/pjreddie/darknet/blob/master/scripts/get_coco_dataset.sh).

Alternatively run `get_coco_dataset.sh && sudo shutdown`.
2018-12-17 16:29:33 +00:00
Glenn Jocher
bf23be9965 updates 2018-12-16 15:16:52 +01:00
Glenn Jocher
18ccd184bf updates 2018-12-16 15:16:19 +01:00
Glenn Jocher
b52a49cf12 updates 2018-12-15 21:06:39 +01:00
Glenn Jocher
b079c1b10c updates 2018-12-15 21:01:14 +01:00
Glenn Jocher
900851200e updates 2018-12-15 20:52:35 +01:00
Glenn Jocher
21ab0c76fd updates 2018-12-12 17:27:52 +01:00
Glenn Jocher
3c95b5c104 updates 2018-12-12 17:26:46 +01:00
Glenn Jocher
b5a2747a6a updates 2018-12-12 17:02:37 +01:00
Glenn Jocher
c591936446 updates 2018-12-11 21:49:56 +01:00
Glenn Jocher
7fb729269b Merge remote-tracking branch 'origin/master' 2018-12-11 20:47:21 +01:00
Glenn Jocher
e28ac3de29 updates 2018-12-11 20:46:46 +01:00
Glenn Jocher
cb21b75920
Update README.md 2018-12-11 20:23:27 +01:00
Glenn Jocher
6c1cd4f3a2
Update README.md 2018-12-11 20:18:05 +01:00
Glenn Jocher
4f80ef3464
Update README.md 2018-12-11 19:54:31 +01:00
Glenn Jocher
c8bd1778f2
Delete coco_training_loss.png 2018-12-11 19:46:39 +01:00
Glenn Jocher
2e5c72321f
Update README.md 2018-12-11 19:46:15 +01:00
Glenn Jocher
3fe3951268 updates 2018-12-10 13:19:13 +01:00
Glenn Jocher
362b41436a
Merge pull request #45 from guigarfr/argparse
Argparse PR
2018-12-10 12:47:31 +01:00
Glenn Jocher
c63e96bc82
Merge branch 'master' into argparse 2018-12-06 13:13:35 +01:00
Glenn Jocher
27849f2474 updates 2018-12-06 13:01:49 +01:00
Guillermo García
d03ce45da5 train.py freeze-darknet53 shortened to freeze, with action store_true
Train with freeze: python train.py --freeze
Train without freeze: python train.py

Note: in the current version, freeze applies only to the first epoch
2018-12-05 16:57:16 +01:00
Guillermo García
868a116750 train.py remove hardcoded weights/ path for weights.
To store weights in a 'weights2' directory:
python train.py --weights-path weights2

The default is the same: weights
2018-12-05 16:57:16 +01:00
Guillermo García
9c0c1f23ab scripts: use data config defined class names
Shorten name of --data-config-path argument to --data-config
2018-12-05 16:57:16 +01:00
Guillermo García
89daa407e5 train.py report argument as store_true
Default is false: python train.py
If you want the report: python train.py --report
2018-12-05 16:57:16 +01:00
Guillermo García
b1fb6fa33d train.py resume argument as store_true
Default is false.

If you want to resume, call train.py --resume
2018-12-05 16:57:16 +01:00
Guillermo García
c807c16b79 Fix argument parser bad practice
Keep parsing inside the __main__ block and call methods with arguments

Add double -- for long argument names (single - is reserved for short options)
2018-12-05 16:57:16 +01:00
Guillermo García
5a566454f5 Extract seed and cuda initialization utils 2018-12-05 11:55:27 +01:00
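The argparse commits above all apply one pattern: boolean options become store_true flags, long option names take a double dash, and parsing stays inside the __main__ block, which then calls functions with explicit arguments. A minimal sketch with the flag names taken from the commit messages (defaults and the train() signature are assumptions):

```python
import argparse


def train(weights_path, resume, report, freeze):
    # Training logic would go here; just echo the options for illustration.
    print(weights_path, resume, report, freeze)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights-path', default='weights', help='directory for checkpoints')
    parser.add_argument('--resume', action='store_true', help='resume from the latest checkpoint')
    parser.add_argument('--report', action='store_true', help='print a training report')
    parser.add_argument('--freeze', action='store_true', help='freeze the darknet53 backbone (first epoch only)')
    opt = parser.parse_args()
    train(opt.weights_path, opt.resume, opt.report, opt.freeze)
```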
Glenn Jocher
45ee668fd7 updates 2018-12-04 19:20:09 +01:00
Glenn Jocher
be8603b2dd updates 2018-12-04 19:17:03 +01:00
Glenn Jocher
fd6619d773 updates 2018-12-03 22:33:25 +01:00
Glenn Jocher
10cca39934 updates 2018-12-03 21:08:45 +01:00
Glenn Jocher
dc704edf17 updates 2018-12-03 20:56:54 +01:00
Glenn Jocher
43d74fd840 updates 2018-12-03 15:42:10 +01:00
Glenn Jocher
b64620cf75 updates 2018-12-03 14:12:46 +01:00
Glenn Jocher
40b536a426 updates 2018-12-03 14:08:59 +01:00
Glenn Jocher
147dfe10d8 Merge remote-tracking branch 'origin/master' 2018-12-03 14:05:57 +01:00
Glenn Jocher
5843c41dfc add multi_scale support 2018-12-03 14:05:50 +01:00
Glenn Jocher
4edd41e2e8
Update README.md 2018-12-03 14:03:27 +01:00
Glenn Jocher
f05934f2eb updates 2018-12-03 01:36:03 +01:00
Glenn Jocher
448a8f0f4b updates 2018-12-01 12:09:40 +01:00
Glenn Jocher
b0c0182062 updates 2018-11-30 11:57:14 +01:00
Glenn Jocher
0240ac44f6 updates 2018-11-30 11:56:38 +01:00
Glenn Jocher
35e445c5da updates 2018-11-29 22:10:35 +01:00
Glenn Jocher
bd649f241f updates 2018-11-29 12:12:48 +01:00
Glenn Jocher
af0033c9e9 updates 2018-11-29 11:59:29 +01:00
Glenn Jocher
d5331be0a0 updates 2018-11-29 11:43:19 +01:00
Glenn Jocher
053566b174 updates 2018-11-28 10:27:55 +01:00
Glenn Jocher
cc419d88ea updates 2018-11-28 10:25:00 +01:00
Glenn Jocher
5a0575af3a updates 2018-11-27 18:43:46 +01:00
Glenn Jocher
b07ee41867 updates 2018-11-27 18:14:48 +01:00
Glenn Jocher
ab9ee6aa9a updates 2018-11-23 19:45:39 +01:00
Glenn Jocher
82124805f8 updates 2018-11-23 18:13:35 +01:00
Glenn Jocher
7b13af707d updates 2018-11-23 18:09:47 +01:00
Glenn Jocher
bf66656b4e updates 2018-11-23 15:34:49 +01:00
Glenn Jocher
6e825acb72 updates 2018-11-23 15:32:41 +01:00
Glenn Jocher
887ab29c64 updates 2018-11-22 20:03:09 +01:00
Glenn Jocher
6f83c321c8 updates 2018-11-22 20:02:11 +01:00
Glenn Jocher
7ca924b172 updates 2018-11-22 17:17:07 +01:00
Glenn Jocher
075b629049 updates 2018-11-22 17:16:17 +01:00
Glenn Jocher
120af70798 updates 2018-11-22 17:13:47 +01:00
Glenn Jocher
57f2b3f6d7 updates 2018-11-22 16:42:58 +01:00
Glenn Jocher
06579775a3
Merge pull request #37 from nirbenz/nirbenz
Fixed NMS bug causing big CPU usage.
2018-11-22 16:29:28 +01:00
Glenn Jocher
154fae4430 updates 2018-11-22 15:04:02 +01:00
Glenn Jocher
b9d87be318 updates 2018-11-22 14:54:52 +01:00
Glenn Jocher
959c67b4ed updates 2018-11-22 14:48:57 +01:00
Nir Ben-Zvi
d41f85702d Fixed NMS bug causing high CPU usage. Note that using 'cross_class_nms' still takes a huge amount of time and should be fixed. 2018-11-22 15:36:14 +02:00
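The fix above replaces a slow NMS loop; per the note, the cross-class variant remained slow. For orientation only (this is not the commit's code, and torchvision's operator postdates it), the same suppression is now a single library call:

```python
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])  # x1,y1,x2,y2
scores = torch.tensor([0.9, 0.8, 0.7])
keep = nms(boxes, scores, iou_threshold=0.45)  # indices of the boxes that survive suppression
print(keep)  # tensor([0, 2]): the second box overlaps the first (IoU ~0.68) and is dropped
```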
Glenn Jocher
bec94be01a updates 2018-11-22 14:33:01 +01:00
Glenn Jocher
f18f288990 updates 2018-11-22 14:29:50 +01:00
Glenn Jocher
db515a4535 updates 2018-11-22 14:14:19 +01:00
Glenn Jocher
809667404f updates 2018-11-22 13:52:22 +01:00
Glenn Jocher
a46e500f9e updates 2018-11-21 19:24:00 +01:00
Glenn Jocher
8ac8d0a382 updates 2018-11-21 19:22:35 +01:00
Glenn Jocher
dae9b8f4b5 updates 2018-11-21 19:22:01 +01:00
Glenn Jocher
7283f52d0c updates 2018-11-21 19:10:10 +01:00
Glenn Jocher
b0b19b3b94 updates 2018-11-21 18:42:06 +01:00
Glenn Jocher
f1a94abafa updates 2018-11-21 18:25:52 +01:00
Glenn Jocher
4e4b67b3c5 updates 2018-11-21 18:01:18 +01:00
Glenn Jocher
4eed25dad0 updates 2018-11-19 16:54:00 +01:00
Glenn Jocher
e93fbf2338 updates 2018-11-19 13:15:15 +01:00
Glenn Jocher
6ab407231d updates 2018-11-19 12:55:03 +01:00
Glenn Jocher
aa895d2a07 updates 2018-11-17 14:37:36 +01:00
Glenn Jocher
1415a798fe updates 2018-11-17 12:54:44 +01:00
Glenn Jocher
dd7c3d2455 updates 2018-11-17 12:51:22 +01:00
Glenn Jocher
07f15b68d3 updates 2018-11-17 00:32:28 +01:00
Glenn Jocher
0bc111f0b4 updates 2018-11-16 22:35:44 +01:00
Glenn Jocher
ed1067bfb5 updates 2018-11-16 22:34:44 +01:00
Glenn Jocher
d2c5d7a5fd updates 2018-11-16 20:01:38 +01:00
Glenn Jocher
a021f97110 updates 2018-11-15 01:01:04 +01:00
Glenn Jocher
1ea87c49c4 updates 2018-11-15 00:57:15 +01:00
Glenn Jocher
a17280ac72 mAP recorded during training 2018-11-15 00:56:03 +01:00
Glenn Jocher
45c5567723 mAP recorded during training 2018-11-14 15:14:41 +00:00
Glenn Jocher
9dbc3ec1c4 updates 2018-11-13 11:20:01 +00:00
Glenn Jocher
f5ce1d5ef4 Merge remote-tracking branch 'origin/master' 2018-11-11 18:58:51 +01:00
Glenn Jocher
34bc12d2ad updates 2018-11-11 18:58:41 +01:00
Glenn Jocher
9f54f638ec
Update README.md 2018-11-10 19:50:06 +01:00
Glenn Jocher
e04bb75ff1 updates 2018-11-10 00:54:55 +01:00
Glenn Jocher
966bc16d1a updates 2018-11-10 00:53:53 +01:00
Glenn Jocher
98484bbe2f updates 2018-11-09 17:18:37 +01:00
Glenn Jocher
4bae1d0f75 updates 2018-11-09 17:03:26 +01:00
Glenn Jocher
b1a2735338 updates 2018-11-09 16:58:32 +01:00
Glenn Jocher
5177f3e7a0 updates 2018-11-09 16:48:55 +01:00
Glenn Jocher
664cbaab09 Adam optimizer 2018-11-09 16:44:12 +01:00
Glenn Jocher
538e5741c6 updates 2018-11-08 12:42:39 +01:00
Glenn Jocher
46a4de77cb updates 2018-11-08 12:29:35 +01:00
Glenn Jocher
2463030d6c updates 2018-11-08 12:28:19 +01:00
Glenn Jocher
c8e4a19879 updates 2018-11-08 12:27:14 +01:00
Glenn Jocher
a6d69cefe0 updates 2018-11-08 12:26:23 +01:00
Glenn Jocher
364da386f7 updates 2018-11-08 00:50:36 +01:00
Glenn Jocher
8d7660b438 updates 2018-11-07 15:17:00 +01:00
Glenn Jocher
edfad8095d updates 2018-11-05 23:34:26 +01:00
Glenn Jocher
6e5da1ce27 updates 2018-11-05 23:32:36 +01:00
Glenn Jocher
19ccb41eaf updates 2018-11-05 23:28:10 +01:00
Glenn Jocher
3afb29ad48 update multi-scale training 2018-11-05 23:20:45 +01:00
Glenn Jocher
0096bb4dd5 update multi-scale training 2018-11-05 23:20:21 +01:00
Glenn Jocher
77469a5268 update multi-scale training 2018-11-05 23:17:53 +01:00
Glenn Jocher
587097affb update LR scheduler 2018-11-05 09:20:18 +01:00
Glenn Jocher
2ccf68cf96 add multi_scale train option to argparser 2018-11-05 09:08:48 +01:00
Glenn Jocher
dc7b58bb3c add multi_scale train option to argparser 2018-11-05 09:07:15 +01:00
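Multi-scale training, introduced in the commits above, periodically changes the input resolution so the detector sees objects at varying scales. A minimal sketch of the usual recipe; the size range, the 32-px stride multiples, and the every-10-batches cadence are illustrative rather than the commits' exact values:

```python
import random

import torch
import torch.nn.functional as F

base = torch.randn(16, 3, 416, 416)  # stand-in batch loaded at the base resolution
img_size = 416
for batch_i in range(100):
    if batch_i % 10 == 0:                              # pick a new size every 10 batches
        img_size = random.choice(range(320, 609, 32))  # multiples of the 32-px network stride
    imgs = F.interpolate(base, size=img_size, mode='bilinear', align_corners=False)
    # forward/backward pass on imgs would go here
```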
Glenn Jocher
b352f93f19 updates 2018-11-05 08:57:51 +01:00
Glenn Jocher
0c89122aab updates 2018-11-04 18:19:07 +01:00
Glenn Jocher
741626c55b initialize from darknet53 2018-10-30 15:20:52 +01:00
Glenn Jocher
26c52f9485 initialize from darknet53 2018-10-30 15:18:52 +01:00
Glenn Jocher
ed0390d0b5 initialize from darknet53 2018-10-30 14:58:56 +01:00
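Initialising from darknet53, as in the commits above, loads ImageNet-pretrained backbone weights and trains the detection heads from scratch. A sketch of the partial-load pattern with stand-in modules and weights (the repo loads real darknet53 weights; everything named here is hypothetical):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 32, 3), nn.Conv2d(32, 64, 3))  # stand-in for the detector
pretrained = {'0.weight': torch.randn(32, 3, 3, 3)}  # stand-in for converted backbone weights
state = model.state_dict()
# Keep only tensors whose names and shapes match, then load the merged dict.
matched = {k: v for k, v in pretrained.items() if k in state and v.shape == state[k].shape}
state.update(matched)
model.load_state_dict(state)
print(f'loaded {len(matched)}/{len(state)} tensors from the pretrained backbone')
```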
Glenn Jocher
332fe002b3 rename /checkpoints to /weights 2018-10-30 14:58:26 +01:00
Glenn Jocher
0ae90d0fb7 rename /checkpoints to /weights 2018-10-27 00:42:34 +02:00
Glenn Jocher
553254bbd6 updates 2018-10-27 00:10:40 +02:00
Glenn Jocher
b25b16dccc updates 2018-10-26 01:09:45 +02:00
Glenn Jocher
c0ff46256f updates 2018-10-26 01:03:46 +02:00
glennjocher
10f44ce830 updates 2018-10-21 16:08:31 +02:00
Glenn Jocher
05f28ab02b -batch_size from 12 to 16 2018-10-15 21:05:24 +02:00
Glenn Jocher
24a41972cb BCE to CE lconf + batch size 16 2018-10-11 17:43:34 +02:00
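The lconf change above swaps a binary cross-entropy confidence term for a categorical cross-entropy one. For reference, the two PyTorch losses expect different targets; this generic sketch (not the repo's loss code) shows the trade:

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 5)              # 8 predictions, 5 classes
target_idx = torch.randint(0, 5, (8,))  # class indices, as CrossEntropyLoss expects
target_1hot = torch.zeros(8, 5).scatter_(1, target_idx.unsqueeze(1), 1.0)  # one-hot, as BCE expects

ce = nn.CrossEntropyLoss()(logits, target_idx)     # softmax over classes: exactly one positive
bce = nn.BCEWithLogitsLoss()(logits, target_1hot)  # independent sigmoid per class
print(ce.item(), bce.item())
```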
Glenn Jocher
d336e0053d per-class mAP report 2018-10-10 17:07:21 +02:00
Glenn Jocher
f79e7ffa76 updates 2018-10-10 16:16:17 +02:00
Glenn Jocher
d748bedb1d clean up train.py 2018-10-09 19:32:42 +02:00
Glenn Jocher
e7cd5d01c4 cleanup train.py 2018-10-09 19:28:27 +02:00
Glenn Jocher
0cc8f2be01 clean up train.py 2018-10-09 19:22:33 +02:00
Glenn Jocher
b7d039737a updates 2018-10-05 17:01:43 +02:00
Glenn Jocher
07ac4fef8d create step lr schedule 2018-10-05 17:01:07 +02:00
Glenn Jocher
c01b8e6b7c updates 2018-10-05 16:38:59 +02:00
Glenn Jocher
8a87521044 P and R conf thresh to 0.5 2018-10-05 16:01:27 +02:00
Glenn Jocher
bce94f6ade corrected numpy printoptions 2018-10-03 13:55:56 +02:00
Glenn Jocher
0058431e2e create step lr schedule 2018-09-28 14:26:46 +02:00
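A step LR schedule, as created above, multiplies the learning rate by a fixed factor at set epoch intervals. A minimal sketch with assumed step size and decay factor:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)  # x0.1 every 30 epochs

for epoch in range(90):
    # ... train for one epoch ...
    optimizer.step()   # placeholder step so the optimizer runs before the scheduler
    scheduler.step()   # decay the learning rate at epoch boundaries
```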
Glenn Jocher
ff630b1960 align loss to darknet 2018-09-25 03:45:52 +02:00
Glenn Jocher
7416c1842a updates 2018-09-25 01:33:26 +02:00
Glenn Jocher
c09dc09dba align loss to darknet 2018-09-25 01:30:51 +02:00
Glenn Jocher
208fd77fe4 create step lr schedule 2018-09-25 01:29:35 +02:00
Glenn Jocher
6528238953 align loss to darknet 2018-09-24 21:26:12 +02:00
Glenn Jocher
b542c2d899 align loss to darknet 2018-09-24 21:25:43 +02:00
Glenn Jocher
396a71001e align loss to darknet 2018-09-24 21:25:17 +02:00
Glenn Jocher
a75119b8f0 align loss to darknet 2018-09-24 20:32:05 +02:00
Glenn Jocher
750f528bfe align loss to darknet 2018-09-24 03:34:12 +02:00
Glenn Jocher
292af1f2f4 align loss to darknet 2018-09-24 03:10:42 +02:00
Glenn Jocher
313a3f6b0c updates 2018-09-24 03:06:04 +02:00
Glenn Jocher
5d402ad31a reapply yolo width and height 2018-09-23 22:41:36 +02:00
Glenn Jocher
cf9b4cfa52 update loss components 2018-09-23 22:25:23 +02:00
Glenn Jocher
bd3f617129 updates 2018-09-22 21:50:01 +02:00
Glenn Jocher
b93839dea7 add yolov3-spp.cfg 2018-09-21 16:00:41 +02:00
Glenn Jocher
a722601ef6 Adam to SGD with burn-in 2018-09-20 18:03:19 +02:00
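'Burn-in' in the commit above is Darknet-style warm-up: the learning rate ramps from near zero to its target over the first iterations before the regular SGD schedule takes over. A sketch of the idea; the target LR, ramp length, and exponent are assumptions:

```python
import torch

model = torch.nn.Linear(10, 2)
lr0, burn_in = 1e-3, 1000  # target LR and warm-up length in batches (assumed values)
optimizer = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.9)

for ni in range(5000):  # ni = cumulative batch number
    if ni <= burn_in:
        for g in optimizer.param_groups:
            g['lr'] = lr0 * (ni / burn_in) ** 4  # power-curve ramp in the style of Darknet burn-in
    # ... forward/backward on the batch, then:
    optimizer.step()
    optimizer.zero_grad()
```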
Glenn Jocher
1cfde4aba8 nGT to nT 2018-09-19 04:32:16 +02:00
Glenn Jocher
29fbcb059f simplify train.py 2018-09-19 04:21:46 +02:00
Glenn Jocher
68de92f1a1 loss lambda corrections 2018-09-13 14:15:49 +02:00
Glenn Jocher
9514e74438 updates 2018-09-10 17:02:38 +02:00
Glenn Jocher
a8f8ec134b updates 2018-09-10 17:00:39 +02:00
Glenn Jocher
300e2b5dfc updates 2018-09-10 16:41:02 +02:00
Glenn Jocher
ff04315f96 updates 2018-09-10 16:35:00 +02:00
Glenn Jocher
34144aabe3 updates 2018-09-10 16:31:56 +02:00
Glenn Jocher
751e02de3e updates 2018-09-10 16:26:40 +02:00
Glenn Jocher
ba1b3d8fe5 updates 2018-09-10 16:19:00 +02:00
Glenn Jocher
c43be7b350 updates 2018-09-10 15:58:01 +02:00
Glenn Jocher
b19e3a049f updates 2018-09-10 15:52:28 +02:00
Glenn Jocher
c1492ae4fb updates 2018-09-10 15:50:37 +02:00
Glenn Jocher
873abaeef4 mAP corrected to per-class 2018-09-10 15:23:39 +02:00
Glenn Jocher
cd753d23f7 mAP corrected to per-class 2018-09-10 15:23:04 +02:00
Glenn Jocher
e7dab5a42f mAP corrected to per-class 2018-09-10 15:12:13 +02:00
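The correction above computes average precision per class first and only then takes the mean, instead of pooling detections across classes. A toy illustration (numbers invented):

```python
import numpy as np

ap = {'person': 0.80, 'car': 0.40, 'dog': 0.60}  # per-class average precisions
mAP = np.mean(list(ap.values()))                 # unweighted mean over classes
print(f'mAP = {mAP:.2f}')                        # 0.60, independent of how many boxes each class has
```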
Glenn Jocher
6116acb8c2 np.unique sorting correction 2018-09-09 16:14:24 +02:00
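The np.unique correction above matters because np.unique always returns its values sorted, so code that assumes first-appearance order must recover it via return_index. For example:

```python
import numpy as np

classes = np.array([3, 1, 3, 2, 1])
values = np.unique(classes)                       # array([1, 2, 3]): sorted, not appearance order
values, first = np.unique(classes, return_index=True)
by_appearance = values[np.argsort(first)]         # array([3, 1, 2])
print(values, by_appearance)
```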
Glenn Jocher
a284fc921d updates 2018-09-08 14:46:22 +02:00
Glenn Jocher
af9864de7b updates 2018-09-05 14:59:49 +02:00
Glenn Jocher
0bfc4bcee3 updates 2018-09-04 15:08:32 +02:00
Glenn Jocher
966d85ba01 updates 2018-09-04 15:08:06 +02:00
Glenn Jocher
b04ee34035 updates 2018-09-04 14:41:52 +02:00
Glenn Jocher
f88b2bd153 updates 2018-09-04 14:39:48 +02:00
Glenn Jocher
283a3d27a4 updates 2018-09-04 14:38:20 +02:00
Glenn Jocher
3a0c16fbc2 updates 2018-09-04 14:36:51 +02:00
Glenn Jocher
aa77cbea11 updates 2018-09-03 00:44:55 +02:00
Glenn Jocher
defae83d77 updates 2018-09-03 00:35:46 +02:00
Glenn Jocher
d35cff2d22 updates 2018-09-02 23:43:26 +02:00
Glenn Jocher
caeba13b84 updates 2018-09-02 14:01:31 +02:00
Glenn Jocher
345f4773b7 updates 2018-09-02 13:18:59 +02:00
Glenn Jocher
c99cb43c55 updates 2018-09-02 13:17:38 +02:00
Glenn Jocher
8b6f1595e0 updates 2018-09-02 13:17:28 +02:00
Glenn Jocher
1c72eb03f0 updates 2018-09-02 13:13:22 +02:00
Glenn Jocher
521b4c02ff updates 2018-09-02 13:09:05 +02:00
Glenn Jocher
1d760a7046 updates 2018-09-02 13:03:55 +02:00
Glenn Jocher
e2a8f5bdce updates 2018-09-02 12:59:39 +02:00
Glenn Jocher
058bb7f38d updates 2018-09-02 12:55:05 +02:00
Glenn Jocher
58f2d9306b updates 2018-09-02 12:40:29 +02:00
Glenn Jocher
641e354948 updates 2018-09-02 11:38:39 +02:00
Glenn Jocher
e99bda0c54 updates 2018-09-02 11:26:56 +02:00
Glenn Jocher
8ed89d8c88 updates 2018-09-02 11:15:39 +02:00
Glenn Jocher
eeb546ed6f updates 2018-09-01 19:13:12 +02:00
Glenn Jocher
f823b3f122 updates 2018-09-01 18:57:18 +02:00
Glenn Jocher
5731658e23 updates 2018-09-01 18:49:11 +02:00
Glenn Jocher
a712315de2 updates 2018-09-01 18:48:53 +02:00
Glenn Jocher
2cfe763f86 updates 2018-09-01 18:48:03 +02:00
Glenn Jocher
19d23b63ee updates 2018-09-01 18:47:24 +02:00
Glenn Jocher
7d083f558a updates 2018-09-01 18:47:08 +02:00
Glenn Jocher
8fd8d8eb04 updates 2018-09-01 18:41:05 +02:00
Glenn Jocher
382100307a updates 2018-09-01 18:37:40 +02:00
Glenn Jocher
af92ac9e63 updates 2018-09-01 18:35:28 +02:00
Glenn Jocher
54a2047270 Update issue templates 2018-09-01 17:13:05 +02:00
Glenn Jocher
efbc48ff7f updates 2018-09-01 17:08:51 +02:00
Glenn Jocher
dbc19e7244 updates 2018-09-01 14:10:53 +02:00
Glenn Jocher
0575b04b67 updates 2018-09-01 14:10:06 +02:00
Glenn Jocher
3599793dfa updates 2018-09-01 14:04:42 +02:00
Glenn Jocher
7672505d45 updates 2018-09-01 13:43:07 +02:00
Glenn Jocher
aa346973ae updates 2018-09-01 13:41:34 +02:00
Glenn Jocher
45c7d4642b updates 2018-09-01 13:37:33 +02:00
Glenn Jocher
d381757b8f Merge remote-tracking branch 'origin/master'
# Conflicts:
#	README.md
2018-09-01 13:35:09 +02:00
Glenn Jocher
5bd70cfe70 updates 2018-09-01 13:34:50 +02:00
Glenn Jocher
b28fdb1570 updates 2018-09-01 13:34:05 +02:00
Glenn Jocher
d3be281418
Update README.md 2018-09-01 13:20:01 +02:00
Glenn Jocher
b76ba51012
Update README.md 2018-09-01 13:18:53 +02:00
Glenn Jocher
7fa8dd9257 updates 2018-09-01 13:17:21 +02:00
Glenn Jocher
c09703f4d4 updates 2018-09-01 13:11:57 +02:00
Glenn Jocher
b03f5a9a7a
Update README.md 2018-08-31 19:23:46 +02:00
Glenn Jocher
7500471526 updates 2018-08-27 00:01:41 +02:00
Glenn Jocher
54d1da904c updates 2018-08-27 00:00:25 +02:00
Glenn Jocher
a660211733 updates 2018-08-26 23:59:13 +02:00
Glenn Jocher
2769d79d05 updates 2018-08-26 23:52:06 +02:00
Glenn Jocher
8944db80b9 updates 2018-08-26 22:37:23 +02:00
Glenn Jocher
d378e32803 updates 2018-08-26 21:59:55 +02:00
Glenn Jocher
d602aed1d8 updates 2018-08-26 21:41:35 +02:00
Glenn Jocher
ebe27544eb updates 2018-08-26 20:30:47 +02:00
Glenn Jocher
6da62e433d updates 2018-08-26 20:24:37 +02:00
Glenn Jocher
56badeef8a updates 2018-08-26 19:40:30 +02:00
Glenn Jocher
af7144ba79 updates 2018-08-26 19:38:37 +02:00
Glenn Jocher
c0d1aad97e updates 2018-08-26 19:38:14 +02:00
Glenn Jocher
1bc4f89bca updates 2018-08-26 19:34:04 +02:00
Glenn Jocher
8a1d1b76c0 updates 2018-08-26 19:33:37 +02:00
Glenn Jocher
ad0860dbe2 updates 2018-08-26 17:09:10 +02:00
Glenn Jocher
42221c6822 updates 2018-08-26 15:42:09 +02:00
Glenn Jocher
f52b6281d3 updates 2018-08-26 15:41:28 +02:00
Glenn Jocher
3fb6cc8161 updates 2018-08-26 15:40:07 +02:00
Glenn Jocher
b965b6e9b7 updates 2018-08-26 11:52:27 +02:00
Glenn Jocher
184db1fb10 updates 2018-08-26 11:48:58 +02:00
Glenn Jocher
119d39599e updates 2018-08-26 11:48:19 +02:00
Glenn Jocher
65228ba8a2 updates 2018-08-26 11:47:38 +02:00
Glenn Jocher
e81ef205fe updates 2018-08-26 11:44:41 +02:00
Glenn Jocher
7f2df90277 updates 2018-08-26 11:42:34 +02:00
Glenn Jocher
ef84864251 updates 2018-08-26 11:39:43 +02:00
Glenn Jocher
55d63fe939
Update README.md 2018-08-26 11:35:56 +02:00
Glenn Jocher
2737b419ac updates 2018-08-26 11:33:36 +02:00
Glenn Jocher
a27276f055 updates 2018-08-26 11:30:46 +02:00
Glenn Jocher
67ee4f0c0d updates 2018-08-26 11:24:09 +02:00
Glenn Jocher
823a34af5a updates 2018-08-26 11:17:57 +02:00
Glenn Jocher
641e784ab5 updates 2018-08-26 11:12:10 +02:00
Glenn Jocher
8fc6a999c9 updates 2018-08-26 11:10:32 +02:00
Glenn Jocher
b83342d3ed updates 2018-08-26 11:06:01 +02:00
Glenn Jocher
a1bf591a78 updates 2018-08-26 11:05:13 +02:00
Glenn Jocher
be17c9aaf6 updates 2018-08-26 10:59:39 +02:00
Glenn Jocher
5463ab8aa0 updates 2018-08-26 10:57:06 +02:00
Glenn Jocher
c3731591af Initial commit 2018-08-26 10:51:39 +02:00
144 changed files with 23572 additions and 0 deletions

222
yolov3/.dockerignore Normal file

@@ -0,0 +1,222 @@
# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------
.git
.cache
.idea
runs
output
coco
storage.googleapis.com
data/samples/*
**/results*.csv
*.jpg
# Neural Network weights -----------------------------------------------------------------------------------------------
**/*.pt
**/*.pth
**/*.onnx
**/*.engine
**/*.mlmodel
**/*.torchscript
**/*.torchscript.pt
**/*.tflite
**/*.h5
**/*.pb
*_saved_model/
*_web_model/
*_openvino_model/
# Below Copied From .gitignore -----------------------------------------------------------------------------------------
# Below Copied From .gitignore -----------------------------------------------------------------------------------------
# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
wandb/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# dotenv
.env
# virtualenv
.venv*
venv*/
ENV*/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
Icon?
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html # Bokeh Plots
.pg # TensorFlow Frozen Graphs
.avi # videos
# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
# Gradle:
.idea/**/gradle.xml
.idea/**/libraries
# CMake
cmake-build-debug/
cmake-build-release/
# Mongo Explorer plugin:
.idea/**/mongoSettings.xml
## File-based project format:
*.iws
## Plugin-specific files:
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties

2
yolov3/.gitattributes vendored Normal file

@@ -0,0 +1,2 @@
# this drops notebooks from GitHub language stats
*.ipynb linguist-vendored


@@ -0,0 +1,85 @@
name: 🐛 Bug Report
# title: " "
description: Problems with YOLOv3
labels: [bug, triage]
body:
- type: markdown
attributes:
value: |
Thank you for submitting a YOLOv3 🐛 Bug Report!
- type: checkboxes
attributes:
label: Search before asking
description: >
Please search the [issues](https://github.com/ultralytics/yolov5/issues) to see if a similar bug report already exists.
options:
- label: >
I have searched the YOLOv3 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
required: true
- type: dropdown
attributes:
label: YOLOv3 Component
description: |
Please select the part of YOLOv3 where you found the bug.
multiple: true
options:
- "Training"
- "Validation"
- "Detection"
- "Export"
- "PyTorch Hub"
- "Multi-GPU"
- "Evolution"
- "Integrations"
- "Other"
validations:
required: false
- type: textarea
attributes:
label: Bug
description: Provide console output with error messages and/or screenshots of the bug.
placeholder: |
💡 ProTip! Include as much information as possible (screenshots, logs, tracebacks etc.) to receive the most helpful response.
validations:
required: true
- type: textarea
attributes:
label: Environment
description: Please specify the software and hardware you used to produce the bug.
placeholder: |
- YOLO: YOLOv3 🚀 v6.0-67-g60e42e1 torch 1.9.0+cu111 CUDA:0 (A100-SXM4-40GB, 40536MiB)
- OS: Ubuntu 20.04
- Python: 3.9.0
validations:
required: false
- type: textarea
attributes:
label: Minimal Reproducible Example
description: >
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to **reproduce** the problem.
This is referred to by community members as creating a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).
placeholder: |
```
# Code to reproduce your issue here
```
validations:
required: false
- type: textarea
attributes:
label: Additional
description: Anything else you would like to share?
- type: checkboxes
attributes:
label: Are you willing to submit a PR?
description: >
(Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/yolov5/pulls) (PR) to help improve YOLOv3 for everyone, especially if you have a good understanding of how to implement a fix or feature.
See the YOLOv3 [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) to get started.
options:
- label: Yes I'd like to help by submitting a PR!


@@ -0,0 +1,8 @@
blank_issues_enabled: true
contact_links:
- name: 💬 Forum
url: https://community.ultralytics.com/
about: Ask on Ultralytics Community Forum
- name: Stack Overflow
url: https://stackoverflow.com/search?q=YOLOv3
about: Ask on Stack Overflow with 'YOLOv3' tag


@@ -0,0 +1,50 @@
name: 🚀 Feature Request
description: Suggest a YOLOv3 idea
# title: " "
labels: [enhancement]
body:
- type: markdown
attributes:
value: |
Thank you for submitting a YOLOv3 🚀 Feature Request!
- type: checkboxes
attributes:
label: Search before asking
description: >
Please search the [issues](https://github.com/ultralytics/yolov5/issues) to see if a similar feature request already exists.
options:
- label: >
I have searched the YOLOv3 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
required: true
- type: textarea
attributes:
label: Description
description: A short description of your feature.
placeholder: |
What new feature would you like to see in YOLOv3?
validations:
required: true
- type: textarea
attributes:
label: Use case
description: |
Describe the use case of your feature request. It will help us understand and prioritize the feature request.
placeholder: |
How would this feature be used, and who would use it?
- type: textarea
attributes:
label: Additional
description: Anything else you would like to share?
- type: checkboxes
attributes:
label: Are you willing to submit a PR?
description: >
(Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/yolov5/pulls) (PR) to help improve YOLOv3 for everyone, especially if you have a good understanding of how to implement a fix or feature.
See the YOLOv3 [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) to get started.
options:
- label: Yes I'd like to help by submitting a PR!


@@ -0,0 +1,33 @@
name: ❓ Question
description: Ask a YOLOv3 question
# title: " "
labels: [question]
body:
- type: markdown
attributes:
value: |
Thank you for asking a YOLOv3 ❓ Question!
- type: checkboxes
attributes:
label: Search before asking
description: >
Please search the [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) to see if a similar question already exists.
options:
- label: >
I have searched the YOLOv3 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
required: true
- type: textarea
attributes:
label: Question
description: What is your question?
placeholder: |
💡 ProTip! Include as much information as possible (screenshots, logs, tracebacks etc.) to receive the most helpful response.
validations:
required: true
- type: textarea
attributes:
label: Additional
description: Anything else you would like to share?


@@ -0,0 +1,9 @@
<!--
Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started:
- Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists.
- Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented.
- Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable).
Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details.
-->

23
yolov3/.github/dependabot.yml vendored Normal file

@@ -0,0 +1,23 @@
version: 2
updates:
- package-ecosystem: pip
directory: "/"
schedule:
interval: weekly
time: "04:00"
open-pull-requests-limit: 10
reviewers:
- glenn-jocher
labels:
- dependencies
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
time: "04:00"
open-pull-requests-limit: 5
reviewers:
- glenn-jocher
labels:
- dependencies

128
yolov3/.github/workflows/ci-testing.yml vendored Normal file

@@ -0,0 +1,128 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# YOLOv3 Continuous Integration (CI) GitHub Actions tests
name: YOLOv3 CI
on:
push:
branches: [master]
pull_request:
branches: [master]
schedule:
- cron: '0 0 * * *' # runs at 00:00 UTC every day
jobs:
Tests:
timeout-minutes: 60
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest] # macos-latest bug https://github.com/ultralytics/yolov5/pull/9049
python-version: ['3.10']
model: [yolov5n]
include:
- os: ubuntu-latest
python-version: '3.7' # '3.6.8' min
model: yolov5n
- os: ubuntu-latest
python-version: '3.8'
model: yolov5n
- os: ubuntu-latest
python-version: '3.9'
model: yolov5n
- os: ubuntu-latest
python-version: '3.8' # torch 1.7.0 requires python >=3.6, <=3.8
model: yolov5n
torch: '1.7.0' # min torch version CI https://pypi.org/project/torchvision/
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Get cache dir
# https://github.com/actions/cache/blob/master/examples.md#multiple-oss-in-a-workflow
id: pip-cache
run: echo "::set-output name=dir::$(pip cache dir)"
- name: Cache pip
uses: actions/cache@v3
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ matrix.python-version }}-pip-${{ hashFiles('requirements.txt') }}
restore-keys: ${{ runner.os }}-${{ matrix.python-version }}-pip-
- name: Install requirements
run: |
python -m pip install --upgrade pip wheel
if [ "${{ matrix.torch }}" == "1.7.0" ]; then
pip install -r requirements.txt torch==1.7.0 torchvision==0.8.1 --extra-index-url https://download.pytorch.org/whl/cpu
else
pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
fi
shell: bash # for Windows compatibility
- name: Check environment
run: |
python -c "import utils; utils.notebook_init()"
echo "RUNNER_OS is ${{ runner.os }}"
echo "GITHUB_EVENT_NAME is ${{ github.event_name }}"
echo "GITHUB_WORKFLOW is ${{ github.workflow }}"
echo "GITHUB_ACTOR is ${{ github.actor }}"
echo "GITHUB_REPOSITORY is ${{ github.repository }}"
echo "GITHUB_REPOSITORY_OWNER is ${{ github.repository_owner }}"
python --version
pip --version
pip list
- name: Test detection
shell: bash # for Windows compatibility
run: |
# export PYTHONPATH="$PWD" # to run '$ python *.py' files in subdirectories
m=${{ matrix.model }} # official weights
b=runs/train/exp/weights/best # best.pt checkpoint
python train.py --imgsz 64 --batch 32 --weights $m.pt --cfg $m.yaml --epochs 1 --device cpu # train
for d in cpu; do # devices
for w in $m $b; do # weights
python val.py --imgsz 64 --batch 32 --weights $w.pt --device $d # val
python detect.py --imgsz 64 --weights $w.pt --device $d # detect
done
done
python hubconf.py --model $m # hub
# python models/tf.py --weights $m.pt # build TF model
python models/yolo.py --cfg $m.yaml # build PyTorch model
python export.py --weights $m.pt --img 64 --include torchscript # export
python - <<EOF
import torch
im = torch.zeros([1, 3, 64, 64])
for path in '$m', '$b':
model = torch.hub.load('.', 'custom', path=path, source='local')
print(model('data/images/bus.jpg'))
model(im) # warmup, build grids for trace
torch.jit.trace(model, [im])
EOF
- name: Test segmentation
shell: bash # for Windows compatibility
run: |
m=${{ matrix.model }}-seg # official weights
b=runs/train-seg/exp/weights/best # best.pt checkpoint
python segment/train.py --imgsz 64 --batch 32 --weights $m.pt --cfg $m.yaml --epochs 1 --device cpu # train
python segment/train.py --imgsz 64 --batch 32 --weights '' --cfg $m.yaml --epochs 1 --device cpu # train
for d in cpu; do # devices
for w in $m $b; do # weights
python segment/val.py --imgsz 64 --batch 32 --weights $w.pt --device $d # val
python segment/predict.py --imgsz 64 --weights $w.pt --device $d # predict
python export.py --weights $w.pt --img 64 --include torchscript --device $d # export
done
done
- name: Test classification
shell: bash # for Windows compatibility
run: |
m=${{ matrix.model }}-cls.pt # official weights
b=runs/train-cls/exp/weights/best.pt # best.pt checkpoint
python classify/train.py --imgsz 32 --model $m --data mnist160 --epochs 1 # train
python classify/val.py --imgsz 32 --weights $b --data ../datasets/mnist160 # val
python classify/predict.py --imgsz 32 --weights $b --source ../datasets/mnist160/test/7/60.png # predict
python classify/predict.py --imgsz 32 --weights $m --source data/images/bus.jpg # predict
python export.py --weights $b --img 64 --include torchscript # export
python - <<EOF
import torch
for path in '$m', '$b':
model = torch.hub.load('.', 'custom', path=path, source='local')
EOF


@@ -0,0 +1,54 @@
# This action runs GitHub's industry-leading static analysis engine, CodeQL, against a repository's source code to find security vulnerabilities.
# https://github.com/github/codeql-action
name: "CodeQL"
on:
schedule:
- cron: '0 0 1 * *' # Runs at 00:00 UTC on the 1st of every month
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
language: ['python']
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2

57
yolov3/.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,57 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Builds ultralytics/yolov5:latest images on DockerHub https://hub.docker.com/r/ultralytics/yolov3
name: Publish Docker Images
on:
push:
branches: [none] # use DockerHub AutoBuild
jobs:
docker:
if: github.repository == 'ultralytics/yolov3'
name: Push Docker image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push arm64 image
uses: docker/build-push-action@v4
continue-on-error: true
with:
context: .
platforms: linux/arm64
file: utils/docker/Dockerfile-arm64
push: true
tags: ultralytics/yolov3:latest-arm64
- name: Build and push CPU image
uses: docker/build-push-action@v4
continue-on-error: true
with:
context: .
file: utils/docker/Dockerfile-cpu
push: true
tags: ultralytics/yolov3:latest-cpu
- name: Build and push GPU image
uses: docker/build-push-action@v4
continue-on-error: true
with:
context: .
file: utils/docker/Dockerfile
push: true
tags: ultralytics/yolov3:latest

65
yolov3/.github/workflows/greetings.yml vendored Normal file

@@ -0,0 +1,65 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
name: Greetings
on:
pull_request_target:
types: [opened]
issues:
types: [opened]
jobs:
greeting:
runs-on: ubuntu-latest
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
pr-message: |
👋 Hello @${{ github.actor }}, thank you for submitting a YOLOv3 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date** with `ultralytics/yolov5` `master` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
- ✅ Verify all YOLOv3 Continuous Integration (CI) **checks are passing**.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
issue-message: |
👋 Hello @${{ github.actor }}, thank you for your interest in YOLOv3 🚀! Please visit our ⭐️ [Tutorials](https://github.com/ultralytics/yolov5/wiki#tutorials) to get started, where you can find quickstart guides for simple tasks like [Custom Data Training](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) all the way to advanced concepts like [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607).
If this is a 🐛 Bug Report, please provide a **minimum reproducible example** to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results).
## Requirements
[**Python>=3.7.0**](https://www.python.org/) with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/). To get started:
```bash
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
```
## Environments
YOLOv3 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Notebooks** with free GPU: <a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a> <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov3"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov3?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv3 CI"></a>
If this badge is green, all [YOLOv3 GitHub Actions](https://github.com/ultralytics/yolov3/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv3 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
## Introducing YOLOv8 🚀
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - [YOLOv8](https://github.com/ultralytics/ultralytics) 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our [YOLOv8 Docs](https://docs.ultralytics.com/) for details and get started with:
```bash
pip install ultralytics
```

40
yolov3/.github/workflows/stale.yml vendored Normal file

@@ -0,0 +1,40 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
name: Close stale issues
on:
schedule:
- cron: '0 0 * * *' # Runs at 00:00 UTC every day
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v7
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: |
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional [YOLOv3](https://ultralytics.com/yolov5) 🚀 resources:
- **Wiki** https://github.com/ultralytics/yolov5/wiki
- **Tutorials** https://github.com/ultralytics/yolov5#tutorials
- **Docs** https://docs.ultralytics.com
Access additional [Ultralytics](https://ultralytics.com) ⚡ resources:
- **Ultralytics HUB** https://ultralytics.com/hub
- **Vision API** https://ultralytics.com/yolov5
- **About Us** https://ultralytics.com/about
- **Join Our Team** https://ultralytics.com/work
- **Contact Us** https://ultralytics.com/contact
Feel free to inform us of any other **issues** you discover or **feature requests** that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv3 🚀 and Vision AI ⭐!
stale-pr-message: 'This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to YOLOv3 🚀 and Vision AI ⭐.'
days-before-issue-stale: 30
days-before-issue-close: 10
days-before-pr-stale: 90
days-before-pr-close: 30
exempt-issue-labels: 'documentation,tutorial,TODO'
operations-per-run: 300 # The maximum number of operations per run, used to control rate limiting.


@@ -0,0 +1,26 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# README translation action to translate README.md to Chinese as README.zh-CN.md on any change to README.md
name: Translate README
on:
push:
branches:
- translate_readme # replace with 'master' to enable action
paths:
- README.md
jobs:
Translate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: 16
# ISO Language Codes: https://cloud.google.com/translate/docs/languages
- name: Adding README - Chinese Simplified
uses: dephraiim/translate-readme@main
with:
LANG: zh-CN

257
yolov3/.gitignore vendored Executable file

@@ -0,0 +1,257 @@
# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------
*.jpg
*.jpeg
*.png
*.bmp
*.tif
*.tiff
*.heic
*.JPG
*.JPEG
*.PNG
*.BMP
*.TIF
*.TIFF
*.HEIC
*.mp4
*.mov
*.MOV
*.avi
*.data
*.json
*.cfg
!setup.cfg
!cfg/yolov3*.cfg
storage.googleapis.com
runs/*
data/*
data/images/*
!data/*.yaml
!data/hyps
!data/scripts
!data/images
!data/images/zidane.jpg
!data/images/bus.jpg
!data/*.sh
results*.csv
# Datasets -------------------------------------------------------------------------------------------------------------
coco/
coco128/
VOC/
# MATLAB GitIgnore -----------------------------------------------------------------------------------------------------
*.m~
*.mat
!targets*.mat
# Neural Network weights -----------------------------------------------------------------------------------------------
*.weights
*.pt
*.pb
*.onnx
*.engine
*.mlmodel
*.torchscript
*.tflite
*.h5
*_saved_model/
*_web_model/
*_openvino_model/
*_paddle_model/
darknet53.conv.74
yolov3-tiny.conv.15
# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
/wandb/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# dotenv
.env
# virtualenv
.venv*
venv*/
ENV*/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
Icon?
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html # Bokeh Plots
.pg # TensorFlow Frozen Graphs
.avi # videos
# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
# Gradle:
.idea/**/gradle.xml
.idea/**/libraries
# CMake
cmake-build-debug/
cmake-build-release/
# Mongo Explorer plugin:
.idea/**/mongoSettings.xml
## File-based project format:
*.iws
## Plugin-specific files:
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties

yolov3/.pre-commit-config.yaml Normal file
@@ -0,0 +1,69 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# Pre-commit hooks. For more information see https://github.com/pre-commit/pre-commit-hooks/blob/main/README.md

exclude: 'docs/'

# Define bot property if installed via https://github.com/marketplace/pre-commit-ci
ci:
  autofix_prs: true
  autoupdate_commit_msg: '[pre-commit.ci] pre-commit suggestions'
  autoupdate_schedule: monthly
  # submodules: true

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-case-conflict
      - id: check-yaml
      - id: check-docstring-first
      - id: double-quote-string-fixer
      - id: detect-private-key

  - repo: https://github.com/asottile/pyupgrade
    rev: v3.3.1
    hooks:
      - id: pyupgrade
        name: Upgrade code
        args: [--py37-plus]

  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0
    hooks:
      - id: isort
        name: Sort imports

  - repo: https://github.com/google/yapf
    rev: v0.32.0
    hooks:
      - id: yapf
        name: YAPF formatting

  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.16
    hooks:
      - id: mdformat
        name: MD formatting
        additional_dependencies:
          - mdformat-gfm
          - mdformat-black
        # exclude: "README.md|README.zh-CN.md|CONTRIBUTING.md"

  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
        name: PEP8

  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=crate,nd,strack,dota

  # - repo: https://github.com/asottile/yesqa
  #   rev: v1.4.0
  #   hooks:
  #     - id: yesqa
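These hooks run locally through the `pre-commit` framework; as a quick sketch (assuming the package is installed from PyPI), a contributor would typically enable them like this:

```bash
pip install pre-commit       # install the framework
pre-commit install           # register the git hook for this clone
pre-commit run --all-files   # run every configured hook once, as pre-commit.ci does
```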

yolov3/CITATION.cff Normal file
@@ -0,0 +1,14 @@
cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use YOLOv3, please cite it as below.
  authors:
    - family-names: Jocher
      given-names: Glenn
      orcid: "https://orcid.org/0000-0001-5950-6979"
  title: "YOLOv3 by Ultralytics"
  version: 7.0
  doi: 10.5281/zenodo.3908559
  date-released: 2020-05-29
  license: GPL-3.0
  url: "https://github.com/ultralytics/yolov5"

yolov3/CONTRIBUTING.md Normal file
@@ -0,0 +1,94 @@
## Contributing to YOLOv3 🚀
We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible, whether it's:
- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing a new feature
- Becoming a maintainer
YOLOv3 works so well due to our combined community effort, and for every small improvement you contribute you will be
helping push the frontiers of what's possible in AI 😃!
## Submitting a Pull Request (PR) 🛠️
Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
### 1. Select File to Update
Select `requirements.txt` to update by clicking on it in GitHub.
<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
### 2. Click 'Edit this file'
The button is in the top-right corner.
<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
### 3. Make Changes
Change `matplotlib` version from `3.2.2` to `3.3`.
<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
### 4. Preview Changes and Submit PR
Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
changes** button. All done, your PR is now submitted to YOLOv3 for review and approval 😃!
<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
### PR recommendations
To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master, an
automatic [GitHub Actions](https://github.com/ultralytics/yolov3/blob/master/.github/workflows/rebase.yml) rebase may
be attempted by including the `/rebase` command in a comment body, or by running the following code, replacing 'feature'
with the name of your local branch:
```bash
git remote add upstream https://github.com/ultralytics/yolov3.git
git fetch upstream
git checkout feature # <----- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
```
- ✅ Verify all Continuous Integration (CI) **checks are passing**.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
## Submitting a Bug Report 🐛
If you spot a problem with YOLOv3 please submit a Bug Report!
For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few
short guidelines below to help users provide what we need in order to get started.
When asking a question, people will be better able to provide help if you provide **code** that they can easily
understand and use to **reproduce** the problem. This is referred to by community members as creating
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
the problem should be:
* ✅ **Minimal** Use as little code as possible that still produces the same problem
* ✅ **Complete** Provide **all** parts someone else needs to reproduce your problem in the question itself
* ✅ **Reproducible** Test the code you're about to provide to make sure it reproduces the problem
In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
should be:
* ✅ **Current** Verify that your code is up-to-date with current
GitHub [master](https://github.com/ultralytics/yolov3/tree/master), and if necessary `git pull` or `git clone` a new
copy to ensure your problem has not already been resolved by previous commits.
* ✅ **Unmodified** Your problem must be reproducible without any modifications to the codebase in this
repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **
Bug Report** [template](https://github.com/ultralytics/yolov3/issues/new/choose) and providing
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
understand and diagnose your problem.
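As an illustration, a minimal reproducible report for an inference problem might contain nothing more than a few lines like the sketch below (the model variant and image URL are placeholders; substitute the exact inputs that trigger your problem):

```python
import torch

# Load a pretrained model from the latest release (placeholder variant)
model = torch.hub.load('ultralytics/yolov3', 'yolov3')

# Run the single input that reproduces the problem
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()  # include this complete console output in the report
```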
## License
By contributing, you agree that your contributions will be licensed under
the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/).

yolov3/LICENSE Normal file
@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

yolov3/README.md Normal file
@@ -0,0 +1,273 @@
<div align="center">
<p>
<a align="left" href="https://ultralytics.com/yolov3" target="_blank">
<img width="850" src="https://user-images.githubusercontent.com/26833433/99805965-8f2ca800-2b3d-11eb-8fad-13a96b222a23.jpg"></a>
</p>
<br>
<div>
<a href="https://github.com/ultralytics/yolov3/actions"><img src="https://github.com/ultralytics/yolov3/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv3 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov3"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov3?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov3"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<a href="https://join.slack.com/t/ultralytics/shared_invite/zt-w29ei8bp-jczz7QYUmDtgo6r6KcMIAg"><img src="https://img.shields.io/badge/Slack-Join_Forum-blue.svg?logo=slack" alt="Join Forum"></a>
</div>
<br>
<div align="center">
<a href="https://github.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.linkedin.com/company/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://twitter.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://youtube.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.facebook.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.instagram.com/ultralytics/">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="2%"/>
</a>
</div>
<br>
<p>
YOLOv3 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <a href="https://ultralytics.com">Ultralytics</a>
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
</p>
<!--
<a align="center" href="https://ultralytics.com/yolov3" target="_blank">
<img width="800" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-api.png"></a>
-->
</div>
## <div align="center">Documentation</div>
See the [YOLOv3 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
## <div align="center">Quick Start Examples</div>
<details open>
<summary>Install</summary>
[**Python>=3.6.0**](https://www.python.org/) is required, with all
[requirements.txt](https://github.com/ultralytics/yolov3/blob/master/requirements.txt) dependencies installed, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
<!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->
```bash
$ git clone https://github.com/ultralytics/yolov3
$ cd yolov3
$ pip install -r requirements.txt
```
</details>
<details open>
<summary>Inference</summary>
Inference with YOLOv3 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
from the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases).
```python
import torch
# Model
model = torch.hub.load('ultralytics/yolov3', 'yolov3') # or yolov3-spp, yolov3-tiny, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
```
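The returned `results` object can also be inspected programmatically; as a short sketch, the detections can be pulled into a pandas DataFrame:

```python
# Detections as a pandas DataFrame: xmin, ymin, xmax, ymax, confidence, class, name
df = results.pandas().xyxy[0]
print(df.head())
```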
</details>
<details>
<summary>Inference with detect.py</summary>
`detect.py` runs inference on a variety of sources, downloading models automatically from
the [latest YOLOv3 release](https://github.com/ultralytics/yolov3/releases) and saving results to `runs/detect`.
```bash
$ python detect.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
</details>
<details>
<summary>Training</summary>
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
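The figure above summarizes training behavior. A typical single-GPU run is launched with `train.py`; the command below is a hedged sketch (the dataset, weights and batch size are illustrative defaults, not prescriptions):

```bash
python train.py --data coco128.yaml --weights yolov3.pt --img 640 --batch-size 16 --epochs 100
```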
</details>
<details open>
<summary>Tutorials</summary>
* [Train Custom Data](https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data)&nbsp; 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov3/wiki/Tips-for-Best-Training-Results)&nbsp; ☘️
RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)&nbsp; 🌟 NEW
* [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975)&nbsp; 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)&nbsp; ⭐ NEW
* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)&nbsp; ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
</details>
## <div align="center">Environments</div>
Get started in seconds with our verified environments. Click each icon below for details.
<div align="center">
<a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-colab-small.png" width="15%"/>
</a>
<a href="https://www.kaggle.com/ultralytics/yolov3">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-kaggle-small.png" width="15%"/>
</a>
<a href="https://hub.docker.com/r/ultralytics/yolov3">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-docker-small.png" width="15%"/>
</a>
<a href="https://github.com/ultralytics/yolov3/wiki/AWS-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-aws-small.png" width="15%"/>
</a>
<a href="https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gcp-small.png" width="15%"/>
</a>
</div>
## <div align="center">Integrations</div>
<div align="center">
<a href="https://wandb.ai/site?utm_campaign=repo_yolo_readme">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-wb-long.png" width="49%"/>
</a>
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-roboflow-long.png" width="49%"/>
</a>
</div>
|Weights and Biases|Roboflow ⭐ NEW|
|:-:|:-:|
|Automatically track and visualize all your YOLOv3 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) |
## <div align="center">Why YOLOv5</div>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/136901921-abcfcd9d-f978-4942-9b97-0e3f202907df.png"></p>
<details>
<summary>YOLOv3-P5 640 Figure (click to expand)</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/136763877-b174052b-c12f-48d2-8bc4-545e3853398e.png"></p>
</details>
<details>
<summary>Figure Notes (click to expand)</summary>
* **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
* **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
* **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
### Pretrained Checkpoints
[assets]: https://github.com/ultralytics/yolov5/releases
[TTA]: https://github.com/ultralytics/yolov5/issues/303
|Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)
|--- |--- |--- |--- |--- |--- |--- |--- |---
|[YOLOv5n][assets] |640 |28.4 |46.0 |**45** |**6.3**|**0.6**|**1.9**|**4.5**
|[YOLOv5s][assets] |640 |37.2 |56.0 |98 |6.4 |0.9 |7.2 |16.5
|[YOLOv5m][assets] |640 |45.2 |63.9 |224 |8.2 |1.7 |21.2 |49.0
|[YOLOv5l][assets] |640 |48.8 |67.2 |430 |10.1 |2.7 |46.5 |109.1
|[YOLOv5x][assets] |640 |50.7 |68.9 |766 |12.1 |4.8 |86.7 |205.7
| | | | | | | | |
|[YOLOv5n6][assets] |1280 |34.0 |50.7 |153 |8.1 |2.1 |3.2 |4.6
|[YOLOv5s6][assets] |1280 |44.5 |63.0 |385 |8.2 |3.6 |16.8 |12.6
|[YOLOv5m6][assets] |1280 |51.0 |69.0 |887 |11.1 |6.8 |35.7 |50.0
|[YOLOv5l6][assets] |1280 |53.6 |71.6 |1784 |15.8 |10.5 |76.8 |111.4
|[YOLOv5x6][assets]<br>+ [TTA][TTA]|1280<br>1536 |54.7<br>**55.4** |**72.4**<br>72.3 |3136<br>- |26.2<br>- |19.4<br>- |140.7<br>- |209.8<br>-
<details>
<summary>Table Notes (click to expand)</summary>
* All checkpoints are trained to 300 epochs with default settings and hyperparameters.
* **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">Contribute</div>
We love your input! We want to make contributing to YOLOv3 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv3 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
<a href="https://github.com/ultralytics/yolov3/graphs/contributors"><img src="https://opencollective.com/ultralytics/contributors.svg?width=990" /></a>
## <div align="center">Contact</div>
For YOLOv3 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov3/issues). For business inquiries or
professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
<br>
<div align="center">
<a href="https://github.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.linkedin.com/company/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://twitter.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://youtube.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.facebook.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="3%"/>
</a>
<img width="3%" />
<a href="https://www.instagram.com/ultralytics/">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="3%"/>
</a>
</div>

yolov3/README.zh-CN.md Normal file
@@ -0,0 +1,488 @@
<div align="center">
<p>
<a align="center" href="https://ultralytics.com/yolov3" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov3/banner-yolov3.png"></a>
</p>
[English](README.md) | [Simplified Chinese](README.zh-CN.md)<br>
<div>
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv3 CI"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv3 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
</div>
<br>
YOLOv3 🚀 is the world's most loved vision AI, representing <a href="https://ultralytics.com">Ultralytics</a> open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
To request an Enterprise License please complete the form at <a href="https://ultralytics.com/license">Ultralytics Licensing</a>.
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
</div>
</div>
## <div align="center">YOLOv8 🚀 NEW</div>
We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our NEW cutting-edge, state-of-the-art (SOTA) model
released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**.
YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of
object detection, image segmentation and image classification tasks.
See the [YOLOv8 Docs](https://docs.ultralytics.com) for details and get started with:
```bash
pip install ultralytics
```
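After installing the package, YOLOv8 models can also be used directly from Python. The snippet below is a minimal sketch assuming the `YOLO` class interface described in the [YOLOv8 Docs](https://docs.ultralytics.com); the `yolov8n.pt` weights are fetched automatically on first use.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (weights download automatically on first use)
model = YOLO("yolov8n.pt")

# Run inference on an image and inspect the detections
results = model("https://ultralytics.com/images/zidane.jpg")
for r in results:
    print(r.boxes)  # per-image bounding boxes, classes and confidences
```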
<div align="center">
<a href="https://ultralytics.com/yolov8" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/yolo-comparison-plots.png"></a>
</div>
## <div align="center">文档</div>
有关训练、测试和部署的完整文档见[YOLOv3 文档](https://docs.ultralytics.com)。请参阅下面的快速入门示例。
<details open>
<summary>Install</summary>
Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
```bash
git clone https://github.com/ultralytics/yolov3 # clone
cd yolov3
pip install -r requirements.txt # install
```
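As a quick sanity check after installing, you can confirm the PyTorch version and CUDA availability; this one-liner is an illustrative check, not part of the official install steps.

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```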
</details>
<details>
<summary>Inference</summary>
Run YOLOv3 inference with [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). The latest [models](https://github.com/ultralytics/yolov5/tree/master/models) are downloaded automatically from the latest
YOLOv3 [release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
# Model
model = torch.hub.load("ultralytics/yolov3", "yolov3")  # or yolov3-spp, yolov3-tiny, custom
# Images
img = "https://ultralytics.com/images/zidane.jpg" # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
```
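Beyond `results.print()`, the detections can also be consumed programmatically. Continuing from the `results` object above, a minimal sketch using the YOLOv5-family hub `Detections.pandas()` accessor:

```python
# Detections for the first image as a pandas DataFrame
df = results.pandas().xyxy[0]
print(df[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```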
</details>
<details>
<summary>Inference with detect.py</summary>
`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from the
latest YOLOv3 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
python detect.py --weights yolov5s.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
screen # screenshot
path/ # directory
list.txt # list of images
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
</details>
<details>
<summary>Training</summary>
The commands below reproduce YOLOv3 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) results.
The latest [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data)
are downloaded automatically from the latest YOLOv3 [release](https://github.com/ultralytics/yolov5/releases).
Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster; a DDP launch sketch follows the code block below).
Use the largest `--batch-size` possible, or pass `--batch-size -1` for
YOLOv3 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown below are for V100-16GB.
```bash
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
                                                                 yolov5s                    64
                                                                 yolov5m                    40
                                                                 yolov5l                    24
                                                                 yolov5x                    16
```
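For the Multi-GPU training mentioned above, the same `torch.distributed.run` launcher shown later for segmentation and classification applies to detection training as well. A sketch, with GPU ids and batch size as illustrative values only:

```bash
# Multi-GPU DDP training sketch: 4 GPUs, total batch size split across devices
python -m torch.distributed.run --nproc_per_node 4 train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 256 --device 0,1,2,3
```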
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
</details>
<details open>
<summary>Tutorials</summary>
- [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
- [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️ RECOMMENDED
- [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
- [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) 🌟 NEW
- [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
- [NVIDIA Jetson Nano Deployment](https://github.com/ultralytics/yolov5/issues/9627) 🌟 NEW
- [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
- [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
- [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
- [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
- [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)
- [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998) 🌟 NEW
- [Roboflow for Datasets, Labels, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975) 🌟 NEW
- [ClearML Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) 🌟 NEW
- [Deci Platform](https://github.com/ultralytics/yolov5/wiki/Deci-Platform) 🌟 NEW
- [Comet Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/comet) 🌟 NEW
</details>
## <div align="center">模块集成</div>
<br>
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov3/banner-integrations.png"></a>
<br>
<br>
<div align="center">
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-roboflow.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://cutt.ly/yolov5-readme-clearml">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-clearml.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-readme-comet2">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-neuralmagic">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" /></a>
</div>
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------: | :-------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------: |
| Label and export your custom datasets directly to YOLOv3 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv3 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv3 models, resume training, and interactively visualize and debug predictions | Run YOLOv3 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv3 🚀 models, and deploy to the real world in a seamless experience. Get started for **free** now!
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
## <div align="center">为什么选择 YOLOv3</div>
YOLOv3 超级容易上手,简单易学。我们优先考虑现实世界的结果。
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
<details>
<summary>YOLOv5-P5 640 Figure</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
</details>
<details>
<summary>Figure Notes</summary>
- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
- **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch size 32.
- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 32.
- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
### Pretrained Checkpoints
| Model | Size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ---------------------------------------------------------------------------------------------- | --------------- | -------------------- | ----------------- | --------------------------- | ---------------------------- | --------------------------- | --------------- | ---------------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+[TTA] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
<details>
<summary>Table Notes</summary>
- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyperparameters, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">实例分割模型 ⭐ 新</div>
我们新的 YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) 实例分割模型是世界上最快和最准确的模型,击败所有当前 [SOTA 基准](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco)。我们使它非常易于训练、验证和部署。更多细节请查看 [发行说明](https://github.com/ultralytics/yolov5/releases/v7.0) 或访问我们的 [YOLOv5 分割 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) 以快速入门。
<details>
<summary>Segmentation Checkpoints</summary>
<br>
<div align="center">
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png"></a>
</div>
We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for easy reproducibility.
| Model | Size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------ | --------------- | -------------------- | --------------------- | --------------------------------------- | ----------------------------- | ----------------------------- | --------------- | ---------------------- |
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
- All checkpoints are trained to 300 epochs with the SGD optimizer with `lr0=0.01` and `weight_decay=5e-5` at image size 640 and all default settings.<br>Runs logged to https://wandb.ai/glenn-jocher/YOLOv5_v70_official
- **Accuracy** values are for single-model single-scale on the COCO dataset.<br>Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** averaged over 100 inference images using a [Colab Pro](https://colab.research.google.com/signup) A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1 ms per image).<br>Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`.<br>Reproduce by `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
</details>
<details>
<summary>Segmentation Usage Examples &nbsp;<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/segment/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>
### Train
YOLOv5 segmentation training supports auto-download of the COCO128-seg segmentation dataset via the `--data coco128-seg.yaml` argument. To download the COCO-segments dataset manually, use `bash data/scripts/get_coco.sh --train --val --segments`, then start training with `python train.py --data coco.yaml`.
```bash
# Single-GPU
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
```
### Val
Validate YOLOv5s-seg mask mAP on the COCO dataset:
```bash
bash data/scripts/get_coco.sh --val --segments  # download COCO val segments split (780MB, 5000 images)
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640  # validate
```
### Predict
Use pretrained YOLOv5m-seg.pt to predict bus.jpg:
```bash
python segment/predict.py --weights yolov5m-seg.pt --data data/images/bus.jpg
```
```python
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5m-seg.pt")  # load from PyTorch Hub (WARNING: inference not yet supported)
```
| ![zidane](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![bus](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) |
| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
### Export
Export a YOLOv5s-seg model to ONNX and TensorRT:
```bash
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
```
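Once exported, the ONNX model can be run outside PyTorch. Below is a minimal sketch using `onnxruntime`: the input name is read from the model, the 1x3x640x640 shape is an assumption matching the export command above, and the raw outputs still require post-processing such as NMS and mask assembly.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model on CPU
sess = ort.InferenceSession("yolov5s-seg.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]  # input tensor metadata

# Dummy normalized image batch in NCHW layout
x = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {inp.name: x})  # raw predictions and mask prototypes
print([o.shape for o in outputs])
```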
</details>
## <div align="center">分类网络 ⭐ 新</div>
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) 带来对分类模型训练、验证和部署的支持!详情请查看 [发行说明](https://github.com/ultralytics/yolov5/releases/v6.2) 或访问我们的 [YOLOv5 分类 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) 以快速入门。
<details>
<summary>Classification Checkpoints</summary>
<br>
We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and trained ResNet and EfficientNet models alongside under the same default training settings for comparison. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) for easy reproducibility.
| Model | Size<br><sup>(pixels) | Acc<br><sup>top1 | Acc<br><sup>top5 | Train time<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| -------------------------------------------------------------------------------------------------- | --------------- | ---------------- | ---------------- | ------------------------------------ | ----------------------------- | ---------------------------------- | -------------- | ---------------------- |
| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |
| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 |
| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 |
| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 |
| | | | | | | | | |
| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 |
| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 |
| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 |
| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 |
| | | | | | | | | |
| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 |
| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 |
| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 |
| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 |
<details>
<summary>Table Notes (click to expand)</summary>
- All checkpoints are trained to 90 epochs with the SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at image size 224 and all default settings.<br>Runs logged to https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2
- **Accuracy** values are for single-model single-scale on the [ImageNet-1k](https://www.image-net.org/index.php) dataset.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`.<br>Reproduce by `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
</details>
</details>
<details>
<summary>Classification Usage Examples &nbsp;<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/classify/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>
### Train
YOLOv5 classification training supports auto-download of the MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the `--data` argument. To start training on MNIST, for example, use `--data mnist`.
```bash
# Single-GPU
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
```
### Val
Validate YOLOv5m-cls accuracy on the ImageNet-1k dataset:
```bash
bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
```
### Predict
Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
```bash
python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg
```
```python
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")  # load from PyTorch Hub
```
### Export
Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
```bash
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
```
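The exported files can then be validated with the same `classify/val.py` script, which supports the non-PyTorch formats listed in its usage docstring (see classify/val.py later in this diff); for example, assuming the ImageNet val split is already downloaded:

```bash
python classify/val.py --weights yolov5s-cls.onnx --data ../datasets/imagenet --img 224
```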
</details>
## <div align="center">环境</div>
使用下面我们经过验证的环境,在几秒钟内开始使用 YOLOv3 。单击下面的图标了解详细信息。
<div align="center">
<a href="https://bit.ly/yolov5-paperspace-notebook">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gradient.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-colab-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://www.kaggle.com/ultralytics/yolov5">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-kaggle-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://hub.docker.com/r/ultralytics/yolov5">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-docker-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-aws-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gcp-small.png" width="10%" /></a>
</div>
## <div align="center">贡献</div>
我们喜欢您的意见或建议!我们希望尽可能简单和透明地为 YOLOv3 做出贡献。请看我们的 [投稿指南](CONTRIBUTING.md),并填写 [YOLOv5调查](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) 向我们发送您的体验反馈。感谢我们所有的贡献者!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
<a href="https://github.com/ultralytics/yolov5/graphs/contributors">
<img src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" /></a>
## <div align="center">License</div>
YOLOv3 在两种不同的 License 下可用:
- **GPL-3.0 License** 查看 [License](https://github.com/ultralytics/yolov5/blob/master/LICENSE) 文件的详细信息。
- **企业License**:在没有 GPL-3.0 开源要求的情况下为商业产品开发提供更大的灵活性。典型用例是将 Ultralytics 软件和 AI 模型嵌入到商业产品和应用程序中。在以下位置申请企业许可证 [Ultralytics 许可](https://ultralytics.com/license) 。
## <div align="center">联系我们</div>
请访问 [GitHub Issues](https://github.com/ultralytics/yolov5/issues) 或 [Ultralytics Community Forum](https://community.ultralytis.com) 以报告 YOLOv3 错误和请求功能。
<br>
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="3%" alt="" /></a>
</div>
[tta]: https://github.com/ultralytics/yolov5/issues/303

yolov3/benchmarks.py (new file, 169 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Run benchmarks on all supported export formats
Format | `export.py --include` | Model
--- | --- | ---
PyTorch | - | yolov5s.pt
TorchScript | `torchscript` | yolov5s.torchscript
ONNX | `onnx` | yolov5s.onnx
OpenVINO | `openvino` | yolov5s_openvino_model/
TensorRT | `engine` | yolov5s.engine
CoreML | `coreml` | yolov5s.mlmodel
TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/
TensorFlow GraphDef | `pb` | yolov5s.pb
TensorFlow Lite | `tflite` | yolov5s.tflite
TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite
TensorFlow.js | `tfjs` | yolov5s_web_model/
Requirements:
$ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU
$ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU
$ pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com # TensorRT
Usage:
$ python benchmarks.py --weights yolov5s.pt --img 640
"""
import argparse
import platform
import sys
import time
from pathlib import Path
import pandas as pd
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
# ROOT = ROOT.relative_to(Path.cwd()) # relative
import export
from models.experimental import attempt_load
from models.yolo import SegmentationModel
from segment.val import run as val_seg
from utils import notebook_init
from utils.general import LOGGER, check_yaml, file_size, print_args
from utils.torch_utils import select_device
from val import run as val_det
def run(
weights=ROOT / 'yolov5s.pt', # weights path
imgsz=640, # inference size (pixels)
batch_size=1, # batch size
data=ROOT / 'data/coco128.yaml', # dataset.yaml path
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
half=False, # use FP16 half-precision inference
test=False, # test exports only
pt_only=False, # test PyTorch only
hard_fail=False, # throw error on benchmark failure
):
y, t = [], time.time()
device = select_device(device)
model_type = type(attempt_load(weights, fuse=False)) # DetectionModel, SegmentationModel, etc.
for i, (name, f, suffix, cpu, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, CPU, GPU)
try:
assert i not in (9, 10), 'inference not supported' # Edge TPU and TF.js are unsupported
assert i != 5 or platform.system() == 'Darwin', 'inference only supported on macOS>=10.13' # CoreML
if 'cpu' in device.type:
assert cpu, 'inference not supported on CPU'
if 'cuda' in device.type:
assert gpu, 'inference not supported on GPU'
# Export
if f == '-':
w = weights # PyTorch format
else:
w = export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # all others
assert suffix in str(w), 'export failed'
# Validate
if model_type == SegmentationModel:
result = val_seg(data, w, batch_size, imgsz, plots=False, device=device, task='speed', half=half)
metric = result[0][7] # (box(p, r, map50, map), mask(p, r, map50, map), *loss(box, obj, cls))
else: # DetectionModel:
result = val_det(data, w, batch_size, imgsz, plots=False, device=device, task='speed', half=half)
metric = result[0][3] # (p, r, map50, map, *loss(box, obj, cls))
speed = result[2][1] # times (preprocess, inference, postprocess)
y.append([name, round(file_size(w), 1), round(metric, 4), round(speed, 2)]) # MB, mAP, t_inference
except Exception as e:
if hard_fail:
assert type(e) is AssertionError, f'Benchmark --hard-fail for {name}: {e}'
LOGGER.warning(f'WARNING ⚠️ Benchmark failure for {name}: {e}')
y.append([name, None, None, None]) # mAP, t_inference
if pt_only and i == 0:
break # break after PyTorch
# Print results
LOGGER.info('\n')
parse_opt()
notebook_init() # print system info
c = ['Format', 'Size (MB)', 'mAP50-95', 'Inference time (ms)'] if map else ['Format', 'Export', '', '']  # note: 'map' here is the Python builtin (always truthy), so the full column set is always selected
py = pd.DataFrame(y, columns=c)
LOGGER.info(f'\nBenchmarks complete ({time.time() - t:.2f}s)')
LOGGER.info(str(py if map else py.iloc[:, :2]))
if hard_fail and isinstance(hard_fail, str):
metrics = py['mAP50-95'].array # values to compare to floor
floor = eval(hard_fail) # minimum metric floor to pass, i.e. = 0.29 mAP for YOLOv5n
assert all(x > floor for x in metrics if pd.notna(x)), f'HARD FAIL: mAP50-95 < floor {floor}'
return py
def test(
weights=ROOT / 'yolov5s.pt', # weights path
imgsz=640, # inference size (pixels)
batch_size=1, # batch size
data=ROOT / 'data/coco128.yaml', # dataset.yaml path
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
half=False, # use FP16 half-precision inference
test=False, # test exports only
pt_only=False, # test PyTorch only
hard_fail=False, # throw error on benchmark failure
):
y, t = [], time.time()
device = select_device(device)
for i, (name, f, suffix, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, gpu-capable)
try:
w = weights if f == '-' else \
export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # weights
assert suffix in str(w), 'export failed'
y.append([name, True])
except Exception:
y.append([name, False]) # mAP, t_inference
# Print results
LOGGER.info('\n')
parse_opt()
notebook_init() # print system info
py = pd.DataFrame(y, columns=['Format', 'Export'])
LOGGER.info(f'\nExports complete ({time.time() - t:.2f}s)')
LOGGER.info(str(py))
return py
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
parser.add_argument('--batch-size', type=int, default=1, help='batch size')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--test', action='store_true', help='test exports only')
parser.add_argument('--pt-only', action='store_true', help='test PyTorch only')
parser.add_argument('--hard-fail', nargs='?', const=True, default=False, help='Exception on error or < min metric')
opt = parser.parse_args()
opt.data = check_yaml(opt.data) # check YAML
print_args(vars(opt))
return opt
def main(opt):
test(**vars(opt)) if opt.test else run(**vars(opt))
if __name__ == '__main__':
opt = parse_opt()
main(opt)

yolov3/classify/predict.py (new file, 226 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Run classification inference on images, videos, directories, globs, YouTube, webcam, streams, etc.
Usage - sources:
$ python classify/predict.py --weights yolov5s-cls.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
screen # screenshot
path/ # directory
list.txt # list of images
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Usage - formats:
$ python classify/predict.py --weights yolov5s-cls.pt # PyTorch
yolov5s-cls.torchscript # TorchScript
yolov5s-cls.onnx # ONNX Runtime or OpenCV DNN with --dnn
yolov5s-cls_openvino_model # OpenVINO
yolov5s-cls.engine # TensorRT
yolov5s-cls.mlmodel # CoreML (macOS-only)
yolov5s-cls_saved_model # TensorFlow SavedModel
yolov5s-cls.pb # TensorFlow GraphDef
yolov5s-cls.tflite # TensorFlow Lite
yolov5s-cls_edgetpu.tflite # TensorFlow Edge TPU
yolov5s-cls_paddle_model # PaddlePaddle
"""
import argparse
import os
import platform
import sys
from pathlib import Path
import torch
import torch.nn.functional as F
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import DetectMultiBackend
from utils.augmentations import classify_transforms
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
increment_path, print_args, strip_optimizer)
from utils.plots import Annotator
from utils.torch_utils import select_device, smart_inference_mode
@smart_inference_mode()
def run(
weights=ROOT / 'yolov5s-cls.pt', # model.pt path(s)
source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam)
data=ROOT / 'data/coco128.yaml', # dataset.yaml path
imgsz=(224, 224), # inference size (height, width)
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
view_img=False, # show results
save_txt=False, # save results to *.txt
nosave=False, # do not save images/videos
augment=False, # augmented inference
visualize=False, # visualize features
update=False, # update all models
project=ROOT / 'runs/predict-cls', # save results to project/name
name='exp', # save results to project/name
exist_ok=False, # existing project/name ok, do not increment
half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
vid_stride=1, # video frame-rate stride
):
source = str(source)
save_img = not nosave and not source.endswith('.txt') # save inference images
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file)
screenshot = source.lower().startswith('screen')
if is_url and is_file:
source = check_file(source) # download
# Directories
save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
(save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
# Load model
device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
stride, names, pt = model.stride, model.names, model.pt
imgsz = check_img_size(imgsz, s=stride) # check image size
# Dataloader
bs = 1 # batch_size
if webcam:
view_img = check_imshow(warn=True)
dataset = LoadStreams(source, img_size=imgsz, transforms=classify_transforms(imgsz[0]), vid_stride=vid_stride)
bs = len(dataset)
elif screenshot:
dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
else:
dataset = LoadImages(source, img_size=imgsz, transforms=classify_transforms(imgsz[0]), vid_stride=vid_stride)
vid_path, vid_writer = [None] * bs, [None] * bs
# Run inference
model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup
seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
for path, im, im0s, vid_cap, s in dataset:
with dt[0]:
im = torch.Tensor(im).to(model.device)
im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
if len(im.shape) == 3:
im = im[None] # expand for batch dim
# Inference
with dt[1]:
results = model(im)
# Post-process
with dt[2]:
pred = F.softmax(results, dim=1) # probabilities
# Process predictions
for i, prob in enumerate(pred): # per image
seen += 1
if webcam: # batch_size >= 1
p, im0, frame = path[i], im0s[i].copy(), dataset.count
s += f'{i}: '
else:
p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
p = Path(p) # to Path
save_path = str(save_dir / p.name) # im.jpg
txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt
s += '%gx%g ' % im.shape[2:] # print string
annotator = Annotator(im0, example=str(names), pil=True)
# Print results
top5i = prob.argsort(0, descending=True)[:5].tolist() # top 5 indices
s += f"{', '.join(f'{names[j]} {prob[j]:.2f}' for j in top5i)}, "
# Write results
text = '\n'.join(f'{prob[j]:.2f} {names[j]}' for j in top5i)
if save_img or view_img: # Add bbox to image
annotator.text((32, 32), text, txt_color=(255, 255, 255))
if save_txt: # Write to file
with open(f'{txt_path}.txt', 'a') as f:
f.write(text + '\n')
# Stream results
im0 = annotator.result()
if view_img:
if platform.system() == 'Linux' and p not in windows:
windows.append(p)
cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
cv2.imshow(str(p), im0)
cv2.waitKey(1) # 1 millisecond
# Save results (image with detections)
if save_img:
if dataset.mode == 'image':
cv2.imwrite(save_path, im0)
else: # 'video' or 'stream'
if vid_path[i] != save_path: # new video
vid_path[i] = save_path
if isinstance(vid_writer[i], cv2.VideoWriter):
vid_writer[i].release() # release previous video writer
if vid_cap: # video
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f'{s}{dt[1].dt * 1E3:.1f}ms')
# Print results
t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
if save_txt or save_img:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
if update:
strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning)
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-cls.pt', help='model path(s)')
parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[224], help='inference size h,w')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--view-img', action='store_true', help='show results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--visualize', action='store_true', help='visualize features')
parser.add_argument('--update', action='store_true', help='update all models')
parser.add_argument('--project', default=ROOT / 'runs/predict-cls', help='save results to project/name')
parser.add_argument('--name', default='exp', help='save results to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(vars(opt))
return opt
def main(opt):
check_requirements(exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == '__main__':
opt = parse_opt()
main(opt)

yolov3/classify/train.py (new file, 333 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Train a YOLOv5 classifier model on a classification dataset
Usage - Single-GPU training:
$ python classify/train.py --model yolov5s-cls.pt --data imagenette160 --epochs 5 --img 224
Usage - Multi-GPU DDP training:
$ python -m torch.distributed.run --nproc_per_node 4 --master_port 2022 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
Datasets: --data mnist, fashion-mnist, cifar10, cifar100, imagenette, imagewoof, imagenet, or 'path/to/data'
YOLOv5-cls models: --model yolov5n-cls.pt, yolov5s-cls.pt, yolov5m-cls.pt, yolov5l-cls.pt, yolov5x-cls.pt
Torchvision models: --model resnet50, efficientnet_b0, etc. See https://pytorch.org/vision/stable/models.html
"""
import argparse
import os
import subprocess
import sys
import time
from copy import deepcopy
from datetime import datetime
from pathlib import Path
import torch
import torch.distributed as dist
import torch.hub as hub
import torch.optim.lr_scheduler as lr_scheduler
import torchvision
from torch.cuda import amp
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # YOLOv5 root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from classify import val as validate
from models.experimental import attempt_load
from models.yolo import ClassificationModel, DetectionModel
from utils.dataloaders import create_classification_dataloader
from utils.general import (DATASETS_DIR, LOGGER, TQDM_BAR_FORMAT, WorkingDirectory, check_git_info, check_git_status,
check_requirements, colorstr, download, increment_path, init_seeds, print_args, yaml_save)
from utils.loggers import GenericLogger
from utils.plots import imshow_cls
from utils.torch_utils import (ModelEMA, model_info, reshape_classifier_output, select_device, smart_DDP,
smart_optimizer, smartCrossEntropyLoss, torch_distributed_zero_first)
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
GIT_INFO = check_git_info()
def train(opt, device):
init_seeds(opt.seed + 1 + RANK, deterministic=True)
save_dir, data, bs, epochs, nw, imgsz, pretrained = \
opt.save_dir, Path(opt.data), opt.batch_size, opt.epochs, min(os.cpu_count() - 1, opt.workers), \
opt.imgsz, str(opt.pretrained).lower() == 'true'
cuda = device.type != 'cpu'
# Directories
wdir = save_dir / 'weights'
wdir.mkdir(parents=True, exist_ok=True) # make dir
last, best = wdir / 'last.pt', wdir / 'best.pt'
# Save run settings
yaml_save(save_dir / 'opt.yaml', vars(opt))
# Logger
logger = GenericLogger(opt=opt, console_logger=LOGGER) if RANK in {-1, 0} else None
# Download Dataset
with torch_distributed_zero_first(LOCAL_RANK), WorkingDirectory(ROOT):
data_dir = data if data.is_dir() else (DATASETS_DIR / data)
if not data_dir.is_dir():
LOGGER.info(f'\nDataset not found ⚠️, missing path {data_dir}, attempting download...')
t = time.time()
if str(data) == 'imagenet':
subprocess.run(f"bash {ROOT / 'data/scripts/get_imagenet.sh'}", shell=True, check=True)
else:
url = f'https://github.com/ultralytics/yolov5/releases/download/v1.0/{data}.zip'
download(url, dir=data_dir.parent)
s = f"Dataset download success ✅ ({time.time() - t:.1f}s), saved to {colorstr('bold', data_dir)}\n"
LOGGER.info(s)
# Dataloaders
nc = len([x for x in (data_dir / 'train').glob('*') if x.is_dir()]) # number of classes
trainloader = create_classification_dataloader(path=data_dir / 'train',
imgsz=imgsz,
batch_size=bs // WORLD_SIZE,
augment=True,
cache=opt.cache,
rank=LOCAL_RANK,
workers=nw)
test_dir = data_dir / 'test' if (data_dir / 'test').exists() else data_dir / 'val' # data/test or data/val
if RANK in {-1, 0}:
testloader = create_classification_dataloader(path=test_dir,
imgsz=imgsz,
batch_size=bs // WORLD_SIZE * 2,
augment=False,
cache=opt.cache,
rank=-1,
workers=nw)
# Model
with torch_distributed_zero_first(LOCAL_RANK), WorkingDirectory(ROOT):
if Path(opt.model).is_file() or opt.model.endswith('.pt'):
model = attempt_load(opt.model, device='cpu', fuse=False)
elif opt.model in torchvision.models.__dict__: # TorchVision models i.e. resnet50, efficientnet_b0
model = torchvision.models.__dict__[opt.model](weights='IMAGENET1K_V1' if pretrained else None)
else:
m = hub.list('ultralytics/yolov5') # + hub.list('pytorch/vision') # models
raise ModuleNotFoundError(f'--model {opt.model} not found. Available models are: \n' + '\n'.join(m))
if isinstance(model, DetectionModel):
LOGGER.warning("WARNING ⚠️ pass YOLOv5 classifier model with '-cls' suffix, i.e. '--model yolov5s-cls.pt'")
model = ClassificationModel(model=model, nc=nc, cutoff=opt.cutoff or 10) # convert to classification model
reshape_classifier_output(model, nc) # update class count
for m in model.modules():
if not pretrained and hasattr(m, 'reset_parameters'):
m.reset_parameters()
if isinstance(m, torch.nn.Dropout) and opt.dropout is not None:
m.p = opt.dropout # set dropout
for p in model.parameters():
p.requires_grad = True # for training
model = model.to(device)
# Info
if RANK in {-1, 0}:
model.names = trainloader.dataset.classes # attach class names
model.transforms = testloader.dataset.torch_transforms # attach inference transforms
model_info(model)
if opt.verbose:
LOGGER.info(model)
images, labels = next(iter(trainloader))
file = imshow_cls(images[:25], labels[:25], names=model.names, f=save_dir / 'train_images.jpg')
logger.log_images(file, name='Train Examples')
logger.log_graph(model, imgsz) # log model
# Optimizer
optimizer = smart_optimizer(model, opt.optimizer, opt.lr0, momentum=0.9, decay=opt.decay)
# Scheduler
lrf = 0.01 # final lr (fraction of lr0)
# lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - lrf) + lrf # cosine
lf = lambda x: (1 - x / epochs) * (1 - lrf) + lrf # linear
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
# scheduler = lr_scheduler.OneCycleLR(optimizer, max_lr=lr0, total_steps=epochs, pct_start=0.1,
# final_div_factor=1 / 25 / lrf)
# EMA
ema = ModelEMA(model) if RANK in {-1, 0} else None
# DDP mode
if cuda and RANK != -1:
model = smart_DDP(model)
# Train
t0 = time.time()
criterion = smartCrossEntropyLoss(label_smoothing=opt.label_smoothing) # loss function
best_fitness = 0.0
scaler = amp.GradScaler(enabled=cuda)
val = test_dir.stem # 'val' or 'test'
LOGGER.info(f'Image sizes {imgsz} train, {imgsz} test\n'
f'Using {nw * WORLD_SIZE} dataloader workers\n'
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n'
f"{'Epoch':>10}{'GPU_mem':>10}{'train_loss':>12}{f'{val}_loss':>12}{'top1_acc':>12}{'top5_acc':>12}")
for epoch in range(epochs): # loop over the dataset multiple times
tloss, vloss, fitness = 0.0, 0.0, 0.0 # train loss, val loss, fitness
model.train()
if RANK != -1:
trainloader.sampler.set_epoch(epoch)
pbar = enumerate(trainloader)
if RANK in {-1, 0}:
pbar = tqdm(enumerate(trainloader), total=len(trainloader), bar_format=TQDM_BAR_FORMAT)
for i, (images, labels) in pbar: # progress bar
images, labels = images.to(device, non_blocking=True), labels.to(device)
# Forward
with amp.autocast(enabled=cuda): # stability issues when enabled
loss = criterion(model(images), labels)
# Backward
scaler.scale(loss).backward()
# Optimize
scaler.unscale_(optimizer) # unscale gradients
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
if ema:
ema.update(model)
if RANK in {-1, 0}:
# Print
tloss = (tloss * i + loss.item()) / (i + 1) # update mean losses
mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
pbar.desc = f"{f'{epoch + 1}/{epochs}':>10}{mem:>10}{tloss:>12.3g}" + ' ' * 36
# Test
if i == len(pbar) - 1: # last batch
top1, top5, vloss = validate.run(model=ema.ema,
dataloader=testloader,
criterion=criterion,
pbar=pbar) # test accuracy, loss
fitness = top1 # define fitness as top1 accuracy
# Scheduler
scheduler.step()
# Log metrics
if RANK in {-1, 0}:
# Best fitness
if fitness > best_fitness:
best_fitness = fitness
# Log
metrics = {
'train/loss': tloss,
f'{val}/loss': vloss,
'metrics/accuracy_top1': top1,
'metrics/accuracy_top5': top5,
'lr/0': optimizer.param_groups[0]['lr']} # learning rate
logger.log_metrics(metrics, epoch)
# Save model
final_epoch = epoch + 1 == epochs
if (not opt.nosave) or final_epoch:
ckpt = {
'epoch': epoch,
'best_fitness': best_fitness,
'model': deepcopy(ema.ema).half(), # deepcopy(de_parallel(model)).half(),
'ema': None, # deepcopy(ema.ema).half(),
'updates': ema.updates,
'optimizer': None, # optimizer.state_dict(),
'opt': vars(opt),
'git': GIT_INFO, # {remote, branch, commit} if a git repo
'date': datetime.now().isoformat()}
# Save last, best and delete
torch.save(ckpt, last)
if best_fitness == fitness:
torch.save(ckpt, best)
del ckpt
# Train complete
if RANK in {-1, 0} and final_epoch:
LOGGER.info(f'\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)'
f"\nResults saved to {colorstr('bold', save_dir)}"
f'\nPredict: python classify/predict.py --weights {best} --source im.jpg'
f'\nValidate: python classify/val.py --weights {best} --data {data_dir}'
f'\nExport: python export.py --weights {best} --include onnx'
f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{best}')"
f'\nVisualize: https://netron.app\n')
# Plot examples
images, labels = (x[:25] for x in next(iter(testloader))) # first 25 images and labels
pred = torch.max(ema.ema(images.to(device)), 1)[1]
file = imshow_cls(images, labels, pred, model.names, verbose=False, f=save_dir / 'test_images.jpg')
# Log results
meta = {'epochs': epochs, 'top1_acc': best_fitness, 'date': datetime.now().isoformat()}
logger.log_images(file, name='Test Examples (true-predicted)', epoch=epoch)
logger.log_model(best, epochs, metadata=meta)
def parse_opt(known=False):
parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default='yolov5s-cls.pt', help='initial weights path')
parser.add_argument('--data', type=str, default='imagenette160', help='cifar10, cifar100, mnist, imagenet, ...')
parser.add_argument('--epochs', type=int, default=10, help='total training epochs')
parser.add_argument('--batch-size', type=int, default=64, help='total batch size for all GPUs')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=224, help='train, val image size (pixels)')
parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
parser.add_argument('--project', default=ROOT / 'runs/train-cls', help='save to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--pretrained', nargs='?', const=True, default=True, help='start from i.e. --pretrained False')
parser.add_argument('--optimizer', choices=['SGD', 'Adam', 'AdamW', 'RMSProp'], default='Adam', help='optimizer')
parser.add_argument('--lr0', type=float, default=0.001, help='initial learning rate')
parser.add_argument('--decay', type=float, default=5e-5, help='weight decay')
parser.add_argument('--label-smoothing', type=float, default=0.1, help='Label smoothing epsilon')
parser.add_argument('--cutoff', type=int, default=None, help='Model layer cutoff index for Classify() head')
parser.add_argument('--dropout', type=float, default=None, help='Dropout (fraction)')
parser.add_argument('--verbose', action='store_true', help='Verbose mode')
parser.add_argument('--seed', type=int, default=0, help='Global training seed')
parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')
return parser.parse_known_args()[0] if known else parser.parse_args()
def main(opt):
# Checks
if RANK in {-1, 0}:
print_args(vars(opt))
check_git_status()
check_requirements()
# DDP mode
device = select_device(opt.device, batch_size=opt.batch_size)
if LOCAL_RANK != -1:
assert opt.batch_size != -1, 'AutoBatch is coming soon for classification, please pass a valid --batch-size'
assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE'
assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
torch.cuda.set_device(LOCAL_RANK)
device = torch.device('cuda', LOCAL_RANK)
dist.init_process_group(backend='nccl' if dist.is_nccl_available() else 'gloo')
# Parameters
opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok) # increment run
# Train
train(opt, device)
def run(**kwargs):
# Usage: from yolov5 import classify; classify.train.run(data=mnist, imgsz=320, model='yolov5m')
opt = parse_opt(True)
for k, v in kwargs.items():
setattr(opt, k, v)
main(opt)
return opt
if __name__ == '__main__':
opt = parse_opt()
main(opt)

1480
yolov3/classify/tutorial.ipynb vendored Normal file

File diff suppressed because it is too large

170
yolov3/classify/val.py Normal file

@ -0,0 +1,170 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Validate a trained YOLOv5 classification model on a classification dataset
Usage:
$ bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
$ python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate ImageNet
Usage - formats:
$ python classify/val.py --weights yolov5s-cls.pt # PyTorch
yolov5s-cls.torchscript # TorchScript
yolov5s-cls.onnx # ONNX Runtime or OpenCV DNN with --dnn
yolov5s-cls_openvino_model # OpenVINO
yolov5s-cls.engine # TensorRT
yolov5s-cls.mlmodel # CoreML (macOS-only)
yolov5s-cls_saved_model # TensorFlow SavedModel
yolov5s-cls.pb # TensorFlow GraphDef
yolov5s-cls.tflite # TensorFlow Lite
yolov5s-cls_edgetpu.tflite # TensorFlow Edge TPU
yolov5s-cls_paddle_model # PaddlePaddle
"""
import argparse
import os
import sys
from pathlib import Path
import torch
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import DetectMultiBackend
from utils.dataloaders import create_classification_dataloader
from utils.general import (LOGGER, TQDM_BAR_FORMAT, Profile, check_img_size, check_requirements, colorstr,
increment_path, print_args)
from utils.torch_utils import select_device, smart_inference_mode
@smart_inference_mode()
def run(
data=ROOT / '../datasets/mnist', # dataset dir
weights=ROOT / 'yolov5s-cls.pt', # model.pt path(s)
batch_size=128, # batch size
imgsz=224, # inference size (pixels)
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
workers=8, # max dataloader workers (per RANK in DDP mode)
verbose=False, # verbose output
project=ROOT / 'runs/val-cls', # save to project/name
name='exp', # save to project/name
exist_ok=False, # existing project/name ok, do not increment
half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
model=None,
dataloader=None,
criterion=None,
pbar=None,
):
# Initialize/load model and set device
training = model is not None
if training: # called by train.py
device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model
half &= device.type != 'cpu' # half precision only supported on CUDA
model.half() if half else model.float()
else: # called directly
device = select_device(device, batch_size=batch_size)
# Directories
save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
save_dir.mkdir(parents=True, exist_ok=True) # make dir
# Load model
model = DetectMultiBackend(weights, device=device, dnn=dnn, fp16=half)
stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine
imgsz = check_img_size(imgsz, s=stride) # check image size
half = model.fp16 # FP16 supported on limited backends with CUDA
if engine:
batch_size = model.batch_size
else:
device = model.device
if not (pt or jit):
batch_size = 1 # export.py models default to batch-size 1
LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models')
# Dataloader
data = Path(data)
test_dir = data / 'test' if (data / 'test').exists() else data / 'val' # data/test or data/val
dataloader = create_classification_dataloader(path=test_dir,
imgsz=imgsz,
batch_size=batch_size,
augment=False,
rank=-1,
workers=workers)
model.eval()
pred, targets, loss, dt = [], [], 0, (Profile(), Profile(), Profile())
n = len(dataloader) # number of batches
action = 'validating' if dataloader.dataset.root.stem == 'val' else 'testing'
desc = f'{pbar.desc[:-36]}{action:>36}' if pbar else f'{action}'
bar = tqdm(dataloader, desc, n, not training, bar_format=TQDM_BAR_FORMAT, position=0)
with torch.cuda.amp.autocast(enabled=device.type != 'cpu'):
for images, labels in bar:
with dt[0]:
images, labels = images.to(device, non_blocking=True), labels.to(device)
with dt[1]:
y = model(images)
with dt[2]:
pred.append(y.argsort(1, descending=True)[:, :5])
targets.append(labels)
if criterion:
loss += criterion(y, labels)
loss /= n
pred, targets = torch.cat(pred), torch.cat(targets)
correct = (targets[:, None] == pred).float()
acc = torch.stack((correct[:, 0], correct.max(1).values), dim=1) # (top1, top5) accuracy
top1, top5 = acc.mean(0).tolist()
if pbar:
pbar.desc = f'{pbar.desc[:-36]}{loss:>12.3g}{top1:>12.3g}{top5:>12.3g}'
if verbose: # all classes
LOGGER.info(f"{'Class':>24}{'Images':>12}{'top1_acc':>12}{'top5_acc':>12}")
LOGGER.info(f"{'all':>24}{targets.shape[0]:>12}{top1:>12.3g}{top5:>12.3g}")
for i, c in model.names.items():
acc_i = acc[targets == i]
top1i, top5i = acc_i.mean(0).tolist()
LOGGER.info(f'{c:>24}{acc_i.shape[0]:>12}{top1i:>12.3g}{top5i:>12.3g}')
# Print results
t = tuple(x.t / len(dataloader.dataset.samples) * 1E3 for x in dt) # speeds per image
shape = (1, 3, imgsz, imgsz)
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms post-process per image at shape {shape}' % t)
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
return top1, top5, loss
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--data', type=str, default=ROOT / '../datasets/mnist', help='dataset path')
parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-cls.pt', help='model.pt path(s)')
parser.add_argument('--batch-size', type=int, default=128, help='batch size')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=224, help='inference size (pixels)')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
parser.add_argument('--verbose', nargs='?', const=True, default=True, help='verbose output')
parser.add_argument('--project', default=ROOT / 'runs/val-cls', help='save to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
opt = parser.parse_args()
print_args(vars(opt))
return opt
def main(opt):
check_requirements(exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == '__main__':
opt = parse_opt()
main(opt)
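For reference, the top-1/top-5 bookkeeping inside run() above can be exercised standalone; this sketch substitutes random logits for model output, with torch as the only dependency:

import torch

y = torch.randn(8, 10)                       # stand-in logits: 8 images, 10 classes
targets = torch.randint(0, 10, (8,))
pred = y.argsort(1, descending=True)[:, :5]  # top-5 predicted class indices per image
correct = (targets[:, None] == pred).float()
acc = torch.stack((correct[:, 0], correct.max(1).values), dim=1)  # (top1, top5)
top1, top5 = acc.mean(0).tolist()
print(f'top1 {top1:.3f}  top5 {top5:.3f}')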


@ -0,0 +1,67 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/
# Example usage: python train.py --data Argoverse.yaml
# parent
# ├── yolov3
# └── datasets
# └── Argoverse ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/Argoverse # dataset root dir
train: Argoverse-1.1/images/train/ # train images (relative to 'path') 39384 images
val: Argoverse-1.1/images/val/ # val images (relative to 'path') 15062 images
test: Argoverse-1.1/images/test/ # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview
# Classes
nc: 8 # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign'] # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
import json
from tqdm import tqdm
from utils.general import download, Path
def argoverse2yolo(set):
labels = {}
a = json.load(open(set, "rb"))
for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv3 format..."):
img_id = annot['image_id']
img_name = a['images'][img_id]['name']
img_label_name = img_name[:-3] + "txt"
cls = annot['category_id'] # instance class id
x_center, y_center, width, height = annot['bbox']
x_center = (x_center + width / 2) / 1920.0 # offset and scale
y_center = (y_center + height / 2) / 1200.0 # offset and scale
width /= 1920.0 # scale
height /= 1200.0 # scale
img_dir = set.parents[2] / 'Argoverse-1.1' / 'labels' / a['seq_dirs'][a['images'][annot['image_id']]['sid']]
if not img_dir.exists():
img_dir.mkdir(parents=True, exist_ok=True)
k = str(img_dir / img_label_name)
if k not in labels:
labels[k] = []
labels[k].append(f"{cls} {x_center} {y_center} {width} {height}\n")
for k in labels:
with open(k, "w") as f:
f.writelines(labels[k])
# Download
dir = Path('../datasets/Argoverse') # dataset root dir
urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']
download(urls, dir=dir, delete=False)
# Convert
annotations_dir = 'Argoverse-HD/annotations/'
(dir / 'Argoverse-1.1' / 'tracking').rename(dir / 'Argoverse-1.1' / 'images') # rename 'tracking' to 'images'
for d in "train.json", "val.json":
argoverse2yolo(dir / annotations_dir / d)  # convert Argoverse annotations to YOLO labels
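A worked example of the normalization in argoverse2yolo() above. Argoverse-HD ring-front-center frames are 1920x1200, and the source bbox format is [x_min, y_min, width, height] (the script names the first two elements x_center/y_center before offsetting them). The box values below are hypothetical:

# Hypothetical box: top-left (960, 300), size 100x50, in a 1920x1200 frame.
x_min, y_min, width, height = 960.0, 300.0, 100.0, 50.0
x_center = (x_min + width / 2) / 1920.0   # -> 0.5260 (offset to center, then scale)
y_center = (y_min + height / 2) / 1200.0  # -> 0.2708
print(x_center, y_center, width / 1920.0, height / 1200.0)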


@ -0,0 +1,53 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Global Wheat 2020 dataset http://www.global-wheat.com/
# Example usage: python train.py --data GlobalWheat2020.yaml
# parent
# ├── yolov3
# └── datasets
# └── GlobalWheat2020 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/GlobalWheat2020 # dataset root dir
train: # train images (relative to 'path') 3422 images
- images/arvalis_1
- images/arvalis_2
- images/arvalis_3
- images/ethz_1
- images/rres_1
- images/inrae_1
- images/usask_1
val: # val images (relative to 'path') 748 images (WARNING: train set contains ethz_1)
- images/ethz_1
test: # test images (optional) 1276 images
- images/utokyo_1
- images/utokyo_2
- images/nau_1
- images/uq_1
# Classes
nc: 1 # number of classes
names: ['wheat_head'] # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
from utils.general import download, Path
# Download
dir = Path(yaml['path']) # dataset root dir
urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']
download(urls, dir=dir)
# Make Directories
for p in 'annotations', 'images', 'labels':
(dir / p).mkdir(parents=True, exist_ok=True)
# Move
for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \
'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':
(dir / p).rename(dir / 'images' / p) # move to /images
f = (dir / p).with_suffix('.json') # json file
if f.exists():
f.rename((dir / 'annotations' / p).with_suffix('.json')) # move to /annotations

1022
yolov3/data/ImageNet.yaml Normal file

File diff suppressed because it is too large

52
yolov3/data/SKU-110K.yaml Normal file

@ -0,0 +1,52 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19
# Example usage: python train.py --data SKU-110K.yaml
# parent
# ├── yolov3
# └── datasets
# └── SKU-110K ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/SKU-110K # dataset root dir
train: train.txt # train images (relative to 'path') 8219 images
val: val.txt # val images (relative to 'path') 588 images
test: test.txt # test images (optional) 2936 images
# Classes
nc: 1 # number of classes
names: ['object'] # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
import shutil
from tqdm import tqdm
from utils.general import np, pd, Path, download, xyxy2xywh
# Download
dir = Path(yaml['path']) # dataset root dir
parent = Path(dir.parent) # download dir
urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']
download(urls, dir=parent, delete=False)
# Rename directories
if dir.exists():
shutil.rmtree(dir)
(parent / 'SKU110K_fixed').rename(dir) # rename dir
(dir / 'labels').mkdir(parents=True, exist_ok=True) # create labels dir
# Convert labels
names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height' # column names
for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':
x = pd.read_csv(dir / 'annotations' / d, names=names).values # annotations
images, unique_images = x[:, 0], np.unique(x[:, 0])
with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:
f.writelines(f'./images/{s}\n' for s in unique_images)
for im in tqdm(unique_images, desc=f'Converting {dir / d}'):
cls = 0 # single-class dataset
with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:
for r in x[images == im]:
w, h = r[6], r[7] # image width, height
xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0] # instance
f.write(f"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\n") # write label

61
yolov3/data/VisDrone.yaml Normal file

@ -0,0 +1,61 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset
# Example usage: python train.py --data VisDrone.yaml
# parent
# ├── yolov3
# └── datasets
# └── VisDrone ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/VisDrone # dataset root dir
train: VisDrone2019-DET-train/images # train images (relative to 'path') 6471 images
val: VisDrone2019-DET-val/images # val images (relative to 'path') 548 images
test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images
# Classes
nc: 10 # number of classes
names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
from utils.general import download, os, Path
def visdrone2yolo(dir):
from PIL import Image
from tqdm import tqdm
def convert_box(size, box):
# Convert VisDrone box to YOLO xywh box
dw = 1. / size[0]
dh = 1. / size[1]
return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh
(dir / 'labels').mkdir(parents=True, exist_ok=True) # make labels directory
pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
for f in pbar:
img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
lines = []
with open(f, 'r') as file: # read annotation.txt
for row in [x.split(',') for x in file.read().strip().splitlines()]:
if row[4] == '0': # VisDrone 'ignored regions' class 0
continue
cls = int(row[5]) - 1
box = convert_box(img_size, tuple(map(int, row[:4])))
lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
fl.writelines(lines) # write label.txt
# Download
dir = Path(yaml['path']) # dataset root dir
urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
download(urls, dir=dir)
# Convert
for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
visdrone2yolo(dir / d) # convert VisDrone annotations to YOLO labels
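A quick numeric check of convert_box() above, copied out of the download script and run on a hypothetical box:

# A 100x50 box at top-left (200, 100) in a 1920x1080 frame.
def convert_box(size, box):  # same math as in the download script above
    dw, dh = 1. / size[0], 1. / size[1]
    return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

print(convert_box((1920, 1080), (200, 100, 100, 50)))
# -> (0.1302, 0.1157, 0.0521, 0.0463) approximately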

44
yolov3/data/coco.yaml Normal file

@ -0,0 +1,44 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# COCO 2017 dataset http://cocodataset.org
# Example usage: python train.py --data coco.yaml
# parent
# ├── yolov3
# └── datasets
# └── coco ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco # dataset root dir
train: train2017.txt # train images (relative to 'path') 118287 images
val: val2017.txt # val images (relative to 'path') 5000 images
test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
# Classes
nc: 80 # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush'] # class names
# Download script/URL (optional)
download: |
from utils.general import download, Path
# Download labels
segments = False # segment or box labels
dir = Path(yaml['path']) # dataset root dir
url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')] # labels
download(urls, dir=dir.parent)
# Download data
urls = ['http://images.cocodataset.org/zips/train2017.zip', # 19G, 118k images
'http://images.cocodataset.org/zips/val2017.zip', # 1G, 5k images
'http://images.cocodataset.org/zips/test2017.zip'] # 7G, 41k images (optional)
download(urls, dir=dir / 'images', threads=3)
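These dataset YAMLs are plain mappings. A minimal sketch of how a trainer consumes one, assuming PyYAML is installed and the file sits at data/coco.yaml relative to the working directory:

from pathlib import Path
import yaml

d = yaml.safe_load(Path('data/coco.yaml').read_text())  # file location assumed
root = Path(d['path'])
print(root / d['train'], root / d['val'])  # split entries resolve against 'path'
print(d['nc'], d['names'][:3])             # 80, ['person', 'bicycle', 'car']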


@ -0,0 +1,101 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# COCO128-seg dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
# └── coco128-seg ← downloads here (7 MB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128-seg # dataset root dir
train: images/train2017 # train images (relative to 'path') 128 images
val: images/train2017 # val images (relative to 'path') 128 images
test: # test images (optional)
# Classes
names:
0: person
1: bicycle
2: car
3: motorcycle
4: airplane
5: bus
6: train
7: truck
8: boat
9: traffic light
10: fire hydrant
11: stop sign
12: parking meter
13: bench
14: bird
15: cat
16: dog
17: horse
18: sheep
19: cow
20: elephant
21: bear
22: zebra
23: giraffe
24: backpack
25: umbrella
26: handbag
27: tie
28: suitcase
29: frisbee
30: skis
31: snowboard
32: sports ball
33: kite
34: baseball bat
35: baseball glove
36: skateboard
37: surfboard
38: tennis racket
39: bottle
40: wine glass
41: cup
42: fork
43: knife
44: spoon
45: bowl
46: banana
47: apple
48: sandwich
49: orange
50: broccoli
51: carrot
52: hot dog
53: pizza
54: donut
55: cake
56: chair
57: couch
58: potted plant
59: bed
60: dining table
61: toilet
62: tv
63: laptop
64: mouse
65: remote
66: keyboard
67: cell phone
68: microwave
69: oven
70: toaster
71: sink
72: refrigerator
73: book
74: clock
75: vase
76: scissors
77: teddy bear
78: hair drier
79: toothbrush
# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128-seg.zip

30
yolov3/data/coco128.yaml Normal file

@ -0,0 +1,30 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov3
# └── datasets
# └── coco128 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128 # dataset root dir
train: images/train2017 # train images (relative to 'path') 128 images
val: images/train2017 # val images (relative to 'path') 128 images
test: # test images (optional)
# Classes
nc: 80 # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush'] # class names
# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128.zip


@ -0,0 +1,18 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Custom single-class pipe dataset (file layout adapted from the COCO 2017 template, http://cocodataset.org)
# Example usage: python train.py --data coco.yaml
# parent
# ├── yolov3
# └── datasets
# └── coco ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../pipe-dataset/ # dataset root dir
train: train/images # train images (relative to 'path')
val: val/images # val images (relative to 'path')
# test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
# Classes
nc: 1 # number of classes
names: ['pipe'] # class names


@ -0,0 +1,34 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for Objects365 training
# python train.py --weights yolov5m.pt --data Objects365.yaml --evolve
# See Hyperparameter Evolution tutorial for details https://github.com/ultralytics/yolov5#tutorials
lr0: 0.00258
lrf: 0.17
momentum: 0.779
weight_decay: 0.00058
warmup_epochs: 1.33
warmup_momentum: 0.86
warmup_bias_lr: 0.0711
box: 0.0539
cls: 0.299
cls_pw: 0.825
obj: 0.632
obj_pw: 1.0
iou_t: 0.2
anchor_t: 3.44
anchors: 3.2
fl_gamma: 0.0
hsv_h: 0.0188
hsv_s: 0.704
hsv_v: 0.36
degrees: 0.0
translate: 0.0902
scale: 0.491
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0


@ -0,0 +1,40 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for VOC training
# python train.py --batch 128 --weights yolov5m6.pt --data VOC.yaml --epochs 50 --img 512 --hyp hyp.scratch-med.yaml --evolve
# See Hyperparameter Evolution tutorial for details https://github.com/ultralytics/yolov5#tutorials
# YOLOv5 Hyperparameter Evolution Results
# Best generation: 467
# Last generation: 996
# metrics/precision, metrics/recall, metrics/mAP_0.5, metrics/mAP_0.5:0.95, val/box_loss, val/obj_loss, val/cls_loss
# 0.87729, 0.85125, 0.91286, 0.72664, 0.0076739, 0.0042529, 0.0013865
lr0: 0.00334
lrf: 0.15135
momentum: 0.74832
weight_decay: 0.00025
warmup_epochs: 3.3835
warmup_momentum: 0.59462
warmup_bias_lr: 0.18657
box: 0.02
cls: 0.21638
cls_pw: 0.5
obj: 0.51728
obj_pw: 0.67198
iou_t: 0.2
anchor_t: 3.3744
fl_gamma: 0.0
hsv_h: 0.01041
hsv_s: 0.54703
hsv_v: 0.27739
degrees: 0.0
translate: 0.04591
scale: 0.75544
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
mosaic: 0.85834
mixup: 0.04266
copy_paste: 0.0
anchors: 3.412


@ -0,0 +1,35 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters when using the Albumentations framework
# python train.py --hyp hyp.no-augmentation.yaml
# See https://github.com/ultralytics/yolov5/pull/3882 for YOLOv5 + Albumentations Usage examples
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.3 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 0.7 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
# these parameters are all zero because augmentation is delegated to the Albumentations framework
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0 # image HSV-Hue augmentation (fraction)
hsv_s: 0 # image HSV-Saturation augmentation (fraction)
hsv_v: 0 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0 # image translation (+/- fraction)
scale: 0 # image scale (+/- gain)
shear: 0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.0 # image flip left-right (probability)
mosaic: 0.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)


@ -0,0 +1,34 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for high-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.3 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 0.7 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.9 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.1 # image mixup (probability)
copy_paste: 0.1 # segment copy-paste (probability)


@ -0,0 +1,34 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for low-augmentation COCO training from scratch
# python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)


@ -0,0 +1,34 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for medium-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.3 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 0.7 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.9 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.1 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)


@ -0,0 +1,34 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for COCO training from scratch
# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)
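All of the hyperparameter files above share one flat schema, and the trainer loads them into a plain dict of floats. A minimal sketch, with the file path assumed:

import yaml

with open('data/hyps/hyp.scratch-low.yaml') as f:  # path assumed
    hyp = yaml.safe_load(f)
print(hyp['lr0'], hyp['weight_decay'], hyp['mosaic'])  # plain scalar values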

BIN
yolov3/data/images/bus.jpg Normal file

Binary file not shown (476 KiB).

Binary file not shown (165 KiB).

112
yolov3/data/objects365.yaml Normal file

@ -0,0 +1,112 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Objects365 dataset https://www.objects365.org/
# Example usage: python train.py --data Objects365.yaml
# parent
# ├── yolov3
# └── datasets
# └── Objects365 ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/Objects365 # dataset root dir
train: images/train # train images (relative to 'path') 1742289 images
val: images/val # val images (relative to 'path') 80000 images
test: # test images (optional)
# Classes
nc: 365 # number of classes
names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
from pycocotools.coco import COCO
from tqdm import tqdm
from utils.general import Path, download, np, xyxy2xywhn
# Make Directories
dir = Path(yaml['path']) # dataset root dir
for p in 'images', 'labels':
(dir / p).mkdir(parents=True, exist_ok=True)
for q in 'train', 'val':
(dir / p / q).mkdir(parents=True, exist_ok=True)
# Train, Val Splits
for split, patches in [('train', 50 + 1), ('val', 43 + 1)]:
print(f"Processing {split} in {patches} patches ...")
images, labels = dir / 'images' / split, dir / 'labels' / split
# Download
url = f"https://dorc.ks3-cn-beijing.ksyun.com/data-set/2020Objects365%E6%95%B0%E6%8D%AE%E9%9B%86/{split}/"
if split == 'train':
download([f'{url}zhiyuan_objv2_{split}.tar.gz'], dir=dir, delete=False) # annotations json
download([f'{url}patch{i}.tar.gz' for i in range(patches)], dir=images, curl=True, delete=False, threads=8)
elif split == 'val':
download([f'{url}zhiyuan_objv2_{split}.json'], dir=dir, delete=False) # annotations json
download([f'{url}images/v1/patch{i}.tar.gz' for i in range(15 + 1)], dir=images, curl=True, delete=False, threads=8)
download([f'{url}images/v2/patch{i}.tar.gz' for i in range(16, patches)], dir=images, curl=True, delete=False, threads=8)
# Move
for f in tqdm(images.rglob('*.jpg'), desc=f'Moving {split} images'):
f.rename(images / f.name) # move to /images/{split}
# Labels
coco = COCO(dir / f'zhiyuan_objv2_{split}.json')
names = [x["name"] for x in coco.loadCats(coco.getCatIds())]
for cid, cat in enumerate(names):
catIds = coco.getCatIds(catNms=[cat])
imgIds = coco.getImgIds(catIds=catIds)
for im in tqdm(coco.loadImgs(imgIds), desc=f'Class {cid + 1}/{len(names)} {cat}'):
width, height = im["width"], im["height"]
path = Path(im["file_name"]) # image filename
try:
with open(labels / path.with_suffix('.txt').name, 'a') as file:
annIds = coco.getAnnIds(imgIds=im["id"], catIds=catIds, iscrowd=None)
for a in coco.loadAnns(annIds):
x, y, w, h = a['bbox'] # bounding box in xywh (xy top-left corner)
xyxy = np.array([x, y, x + w, y + h])[None] # pixels(1,4)
x, y, w, h = xyxy2xywhn(xyxy, w=width, h=height, clip=True)[0] # normalized and clipped
file.write(f"{cid} {x:.5f} {y:.5f} {w:.5f} {h:.5f}\n")
except Exception as e:
print(e)


@ -0,0 +1,18 @@
#!/bin/bash
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Download latest models from https://github.com/ultralytics/yolov3/releases
# Example usage: bash path/to/download_weights.sh
# parent
# └── yolov3
# ├── yolov3.pt ← downloads here
# ├── yolov3-spp.pt
# └── ...
python - <<EOF
from utils.downloads import attempt_download
models = ['yolov3', 'yolov3-spp', 'yolov3-tiny']
for x in models:
attempt_download(f'{x}.pt')
EOF

27
yolov3/data/scripts/get_coco.sh Executable file

@ -0,0 +1,27 @@
#!/bin/bash
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Download COCO 2017 dataset http://cocodataset.org
# Example usage: bash data/scripts/get_coco.sh
# parent
# ├── yolov3
# └── datasets
# └── coco ← downloads here
# Download/unzip labels
d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
# Download/unzip images
d='../datasets/coco/images' # unzip directory
url=http://images.cocodataset.org/zips/
f1='train2017.zip' # 19G, 118k images
f2='val2017.zip' # 1G, 5k images
f3='test2017.zip' # 7G, 41k images (optional)
for f in $f1 $f2; do
echo 'Downloading' $url$f '...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
done
wait # finish background tasks


@ -0,0 +1,17 @@
#!/bin/bash
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: bash data/scripts/get_coco128.sh
# parent
# ├── yolov3
# └── datasets
# └── coco128 ← downloads here
# Download/unzip images and labels
d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco128.zip' # or 'coco128-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
wait # finish background tasks


@ -0,0 +1,51 @@
#!/bin/bash
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Download ILSVRC2012 ImageNet dataset https://image-net.org
# Example usage: bash data/scripts/get_imagenet.sh
# parent
# ├── yolov5
# └── datasets
# └── imagenet ← downloads here
# Arguments (optional) Usage: bash data/scripts/get_imagenet.sh --train --val
if [ "$#" -gt 0 ]; then
for opt in "$@"; do
case "${opt}" in
--train) train=true ;;
--val) val=true ;;
esac
done
else
train=true
val=true
fi
# Make dir
d='../datasets/imagenet' # unzip directory
mkdir -p $d && cd $d
# Download/unzip train
if [ "$train" == "true" ]; then
wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar # download 138G, 1281167 images
mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
tar -xf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
find . -name "*.tar" | while read NAME; do
mkdir -p "${NAME%.tar}"
tar -xf "${NAME}" -C "${NAME%.tar}"
rm -f "${NAME}"
done
cd ..
fi
# Download/unzip val
if [ "$val" == "true" ]; then
wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar # download 6.3G, 50000 images
mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xf ILSVRC2012_img_val.tar
wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash # move into subdirs
fi
# Delete corrupted image (optional: PNG under JPEG name that may cause dataloaders to fail)
# rm train/n04266014/n04266014_10835.JPEG
# TFRecords (optional)
# wget https://raw.githubusercontent.com/tensorflow/models/master/research/slim/datasets/imagenet_lsvrc_2015_synsets.txt

80
yolov3/data/voc.yaml Normal file

@ -0,0 +1,80 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC
# Example usage: python train.py --data VOC.yaml
# parent
# ├── yolov3
# └── datasets
# └── VOC ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/VOC
train: # train images (relative to 'path') 16551 images
- images/train2012
- images/train2007
- images/val2012
- images/val2007
val: # val images (relative to 'path') 4952 images
- images/test2007
test: # test images (optional)
- images/test2007
# Classes
nc: 20 # number of classes
names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
import xml.etree.ElementTree as ET
from tqdm import tqdm
from utils.general import download, Path
def convert_label(path, lb_path, year, image_id):
def convert_box(size, box):
dw, dh = 1. / size[0], 1. / size[1]
x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
return x * dw, y * dh, w * dw, h * dh
in_file = open(path / f'VOC{year}/Annotations/{image_id}.xml')
out_file = open(lb_path, 'w')
tree = ET.parse(in_file)
root = tree.getroot()
size = root.find('size')
w = int(size.find('width').text)
h = int(size.find('height').text)
for obj in root.iter('object'):
cls = obj.find('name').text
if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:
xmlbox = obj.find('bndbox')
bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
cls_id = yaml['names'].index(cls) # class id
out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')
# Download
dir = Path(yaml['path']) # dataset root dir
url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
urls = [url + 'VOCtrainval_06-Nov-2007.zip', # 446MB, 5012 images
url + 'VOCtest_06-Nov-2007.zip', # 438MB, 4953 images
url + 'VOCtrainval_11-May-2012.zip'] # 1.95GB, 17126 images
download(urls, dir=dir / 'images', delete=False)
# Convert
path = dir / f'images/VOCdevkit'
for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
imgs_path = dir / 'images' / f'{image_set}{year}'
lbs_path = dir / 'labels' / f'{image_set}{year}'
imgs_path.mkdir(exist_ok=True, parents=True)
lbs_path.mkdir(exist_ok=True, parents=True)
image_ids = open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt').read().strip().split()
for id in tqdm(image_ids, desc=f'{image_set}{year}'):
f = path / f'VOC{year}/JPEGImages/{id}.jpg' # old img path
lb_path = (lbs_path / f.name).with_suffix('.txt') # new label path
f.rename(imgs_path / f.name) # move image
convert_label(path, lb_path, year, id) # convert labels to YOLO format
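A worked example of the VOC convert_box() above. Note the box tuple order is (xmin, xmax, ymin, ymax), and the -1 compensates for VOC's 1-based pixel coordinates. The box values below are hypothetical:

def convert_box(size, box):  # same math as in the download script above
    dw, dh = 1. / size[0], 1. / size[1]
    x, y = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1
    w, h = box[1] - box[0], box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh

print(convert_box((500, 375), (48.0, 195.0, 240.0, 371.0)))
# -> (0.241, 0.812, 0.294, 0.3493) approximately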

102
yolov3/data/xView.yaml Normal file

@ -0,0 +1,102 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# xView 2018 dataset https://challenge.xviewdataset.org
# -------- DOWNLOAD DATA MANUALLY from URL above and unzip to 'datasets/xView' before running train command! --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov3
# └── datasets
# └── xView ← downloads here
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/xView # dataset root dir
train: images/autosplit_train.txt # train images (relative to 'path') 90% of 847 train images
val: images/autosplit_val.txt # val images (relative to 'path') 10% of 847 train images
# Classes
nc: 60 # number of classes
names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower'] # class names
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
import json
import os
from pathlib import Path
import numpy as np
from PIL import Image
from tqdm import tqdm
from utils.datasets import autosplit
from utils.general import download, xyxy2xywhn
def convert_labels(fname=Path('xView/xView_train.geojson')):
# Convert xView geoJSON labels to YOLO format
path = fname.parent
with open(fname) as f:
print(f'Loading {fname}...')
data = json.load(f)
# Make dirs
labels = Path(path / 'labels' / 'train')
os.system(f'rm -rf {labels}')
labels.mkdir(parents=True, exist_ok=True)
# xView classes 11-94 to 0-59
xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,
12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,
29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,
47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]
shapes = {}
for feature in tqdm(data['features'], desc=f'Converting {fname}'):
p = feature['properties']
if p['bounds_imcoords']:
id = p['image_id']
file = path / 'train_images' / id
if file.exists(): # 1395.tif missing
try:
box = np.array([int(num) for num in p['bounds_imcoords'].split(",")])
assert box.shape[0] == 4, f'incorrect box shape {box.shape[0]}'
cls = p['type_id']
cls = xview_class2index[int(cls)] # xView class to 0-59
assert 59 >= cls >= 0, f'incorrect class index {cls}'
# Write YOLO label
if id not in shapes:
shapes[id] = Image.open(file).size
box = xyxy2xywhn(box[None].astype(np.float), w=shapes[id][0], h=shapes[id][1], clip=True)
with open((labels / id).with_suffix('.txt'), 'a') as f:
f.write(f"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\n") # write label.txt
except Exception as e:
print(f'WARNING: skipping one label for {file}: {e}')
# Download manually from https://challenge.xviewdataset.org
dir = Path(yaml['path']) # dataset root dir
# urls = ['https://d307kc0mrhucc3.cloudfront.net/train_labels.zip', # train labels
# 'https://d307kc0mrhucc3.cloudfront.net/train_images.zip', # 15G, 847 train images
# 'https://d307kc0mrhucc3.cloudfront.net/val_images.zip'] # 5G, 282 val images (no labels)
# download(urls, dir=dir, delete=False)
# Convert labels
convert_labels(dir / 'xView_train.geojson')
# Move images
images = Path(dir / 'images')
images.mkdir(parents=True, exist_ok=True)
Path(dir / 'train_images').rename(dir / 'images' / 'train')
Path(dir / 'val_images').rename(dir / 'images' / 'val')
# Split
autosplit(dir / 'images' / 'train')
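The xview_class2index table above collapses xView's sparse type_ids (11-94, with gaps) into contiguous class indices 0-59, using -1 to mark unused ids. A truncated illustration:

# First entries of the remap table above, truncated for illustration.
xview_class2index = [-1] * 11 + [0, 1, 2, -1, 3]
print(xview_class2index[11])  # type_id 11 -> class 0 ('Fixed-wing Aircraft')
print(xview_class2index[14])  # type_id 14 is unused -> -1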

244
yolov3/detect.py Normal file

@ -0,0 +1,244 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Run inference on images, videos, directories, streams, etc.
Usage:
$ python path/to/detect.py --weights yolov3.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
"""
import argparse
import os
import sys
from pathlib import Path
import cv2
import torch
import torch.backends.cudnn as cudnn
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import DetectMultiBackend
from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync
@torch.no_grad()
def run(weights=ROOT / 'yolov3.pt', # model.pt path(s)
source=ROOT / 'data/images', # file/dir/URL/glob, 0 for webcam
imgsz=640, # inference size (pixels)
conf_thres=0.25, # confidence threshold
iou_thres=0.45, # NMS IOU threshold
max_det=1000, # maximum detections per image
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
view_img=False, # show results
save_txt=False, # save results to *.txt
save_conf=False, # save confidences in --save-txt labels
save_crop=False, # save cropped prediction boxes
nosave=False, # do not save images/videos
classes=None, # filter by class: --class 0, or --class 0 2 3
agnostic_nms=False, # class-agnostic NMS
augment=False, # augmented inference
visualize=False, # visualize features
update=False, # update all models
project=ROOT / 'runs/detect', # save results to project/name
name='exp', # save results to project/name
exist_ok=False, # existing project/name ok, do not increment
line_thickness=3, # bounding box thickness (pixels)
hide_labels=False, # hide labels
hide_conf=False, # hide confidences
half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
):
source = str(source)
save_img = not nosave and not source.endswith('.txt') # save inference images
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
if is_url and is_file:
source = check_file(source) # download
# Directories
save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
(save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
# Load model
device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=dnn)
stride, names, pt, jit, onnx = model.stride, model.names, model.pt, model.jit, model.onnx
imgsz = check_img_size(imgsz, s=stride) # check image size
# Half
half &= pt and device.type != 'cpu' # half precision only supported by PyTorch on CUDA
if pt:
model.model.half() if half else model.model.float()
# Dataloader
if webcam:
view_img = check_imshow()
cudnn.benchmark = True # set True to speed up constant image size inference
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt and not jit)
bs = len(dataset) # batch_size
else:
dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt and not jit)
bs = 1 # batch_size
vid_path, vid_writer = [None] * bs, [None] * bs
# Run inference
if pt and device.type != 'cpu':
model(torch.zeros(1, 3, *imgsz).to(device).type_as(next(model.model.parameters()))) # warmup
dt, seen = [0.0, 0.0, 0.0], 0
for path, im, im0s, vid_cap, s in dataset:
t1 = time_sync()
im = torch.from_numpy(im).to(device)
im = im.half() if half else im.float() # uint8 to fp16/32
im /= 255 # 0 - 255 to 0.0 - 1.0
if len(im.shape) == 3:
im = im[None] # expand for batch dim
t2 = time_sync()
dt[0] += t2 - t1
# Inference
visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
pred = model(im, augment=augment, visualize=visualize)
t3 = time_sync()
dt[1] += t3 - t2
# NMS
pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
dt[2] += time_sync() - t3
# Second-stage classifier (optional)
# pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
# Process predictions
for i, det in enumerate(pred): # per image
seen += 1
if webcam: # batch_size >= 1
p, im0, frame = path[i], im0s[i].copy(), dataset.count
s += f'{i}: '
else:
p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
p = Path(p) # to Path
save_path = str(save_dir / p.name) # im.jpg
txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt
s += '%gx%g ' % im.shape[2:] # print string
gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
imc = im0.copy() if save_crop else im0 # for save_crop
annotator = Annotator(im0, line_width=line_thickness, example=str(names))
if len(det):
# Rescale boxes from img_size to im0 size
det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
# Print results
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
# Write results
for *xyxy, conf, cls in reversed(det):
if save_txt: # Write to file
xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
with open(txt_path + '.txt', 'a') as f:
f.write(('%g ' * len(line)).rstrip() % line + '\n')
if save_img or save_crop or view_img: # Add bbox to image
c = int(cls) # integer class
label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
annotator.box_label(xyxy, label, color=colors(c, True))
if save_crop:
save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
# Print time (inference-only)
LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')
# Stream results
im0 = annotator.result()
if view_img:
cv2.imshow(str(p), im0)
cv2.waitKey(1) # 1 millisecond
# Save results (image with detections)
if save_img:
if dataset.mode == 'image':
cv2.imwrite(save_path, im0)
else: # 'video' or 'stream'
if vid_path[i] != save_path: # new video
vid_path[i] = save_path
if isinstance(vid_writer[i], cv2.VideoWriter):
vid_writer[i].release() # release previous video writer
if vid_cap: # video
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path += '.mp4'
vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer[i].write(im0)
# Print results
t = tuple(x / seen * 1E3 for x in dt) # speeds per image
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
if save_txt or save_img:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
if update:
strip_optimizer(weights) # update model (to fix SourceChangeWarning)
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov3.pt', help='model path(s)')
parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--view-img', action='store_true', help='show results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--visualize', action='store_true', help='visualize features')
parser.add_argument('--update', action='store_true', help='update all models')
parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
parser.add_argument('--name', default='exp', help='save results to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(FILE.stem, opt)
return opt
def main(opt):
check_requirements(exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)
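For reference, a minimal usage sketch for the detect script above; the argument names mirror parse_opt(), and the weight/source paths are placeholders:

# Usage sketch (placeholder paths; assumes this file is importable as detect).
#   CLI: python detect.py --weights yolov3.pt --source data/images --conf-thres 0.25 --save-txt
from detect import run

run(weights='yolov3.pt',   # model checkpoint (placeholder path)
    source='data/images',  # file/dir/URL/glob, 0 for webcam
    imgsz=[640, 640],      # inference size (h, w)
    conf_thres=0.25,       # confidence threshold
    save_txt=True)         # also write *.txt label files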

yolov3/export.py (369 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Export a PyTorch model to TorchScript, ONNX, CoreML, TensorFlow (saved_model, pb, TFLite, TF.js) formats
TensorFlow exports authored by https://github.com/zldrobit
Usage:
$ python path/to/export.py --weights yolov3.pt --include torchscript onnx coreml saved_model pb tflite tfjs
Inference:
$ python path/to/detect.py --weights yolov3.pt
yolov3.onnx (must export with --dynamic)
yolov3_saved_model
yolov3.pb
yolov3.tflite
TensorFlow.js:
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
$ npm install
$ ln -s ../../yolov5/yolov3_web_model public/yolov3_web_model
$ npm start
"""
import argparse
import json
import os
import subprocess
import sys
import time
from pathlib import Path
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import Conv
from models.experimental import attempt_load
from models.yolo import Detect
from utils.activations import SiLU
from utils.datasets import LoadImages
from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,
url2file)
from utils.torch_utils import select_device
def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
# TorchScript model export
try:
LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
f = file.with_suffix('.torchscript.pt')
ts = torch.jit.trace(model, im, strict=False)
d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap()
(optimize_for_mobile(ts) if optimize else ts).save(f, _extra_files=extra_files)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'{prefix} export failure: {e}')
def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):
# ONNX export
try:
check_requirements(('onnx',))
import onnx
LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
f = file.with_suffix('.onnx')
torch.onnx.export(model, im, f, verbose=False, opset_version=opset,
training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
do_constant_folding=not train,
input_names=['images'],
output_names=['output'],
dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'}, # shape(1,3,640,640)
'output': {0: 'batch', 1: 'anchors'} # shape(1,25200,85)
} if dynamic else None)
# Checks
model_onnx = onnx.load(f) # load onnx model
onnx.checker.check_model(model_onnx) # check onnx model
# LOGGER.info(onnx.helper.printable_graph(model_onnx.graph)) # print
# Simplify
if simplify:
try:
check_requirements(('onnx-simplifier',))
import onnxsim
LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
model_onnx, check = onnxsim.simplify(
model_onnx,
dynamic_input_shape=dynamic,
input_shapes={'images': list(im.shape)} if dynamic else None)
assert check, 'assert check failed'
onnx.save(model_onnx, f)
except Exception as e:
LOGGER.info(f'{prefix} simplifier failure: {e}')
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
LOGGER.info(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'")
except Exception as e:
LOGGER.info(f'{prefix} export failure: {e}')
def export_coreml(model, im, file, prefix=colorstr('CoreML:')):
# CoreML export
ct_model = None
try:
check_requirements(('coremltools',))
import coremltools as ct
LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
f = file.with_suffix('.mlmodel')
model.train() # CoreML exports should be placed in model.train() mode
ts = torch.jit.trace(model, im, strict=False) # TorchScript model
ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
ct_model.save(f)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
return ct_model
def export_saved_model(model, im, file, dynamic,
tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')):
# TensorFlow saved_model export
keras_model = None
try:
import tensorflow as tf
from tensorflow import keras
from models.tf import TFDetect, TFModel
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
f = str(file).replace('.pt', '_saved_model')
batch_size, ch, *imgsz = list(im.shape) # BCHW
tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
im = tf.zeros((batch_size, *imgsz, 3)) # BHWC order for TensorFlow
y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
keras_model = keras.Model(inputs=inputs, outputs=outputs)
keras_model.trainable = False
keras_model.summary()
keras_model.save(f, save_format='tf')
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
return keras_model
def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')):
# TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
try:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
f = file.with_suffix('.pb')
m = tf.function(lambda x: keras_model(x)) # full model
m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(m)
frozen_func.graph.as_graph_def()
tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')):
# TensorFlow Lite export
try:
import tensorflow as tf
from models.tf import representative_dataset_gen
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
batch_size, ch, *imgsz = list(im.shape) # BCHW
f = str(file).replace('.pt', '-fp16.tflite')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.target_spec.supported_types = [tf.float16]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
if int8:
dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False) # representative data
converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = []
converter.inference_input_type = tf.uint8 # or tf.int8
converter.inference_output_type = tf.uint8 # or tf.int8
converter.experimental_new_quantizer = False
f = str(file).replace('.pt', '-int8.tflite')
tflite_model = converter.convert()
open(f, "wb").write(tflite_model)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
# TensorFlow.js export
try:
check_requirements(('tensorflowjs',))
import re
import tensorflowjs as tfjs
LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
f = str(file).replace('.pt', '_web_model') # js dir
f_pb = file.with_suffix('.pb') # *.pb path
f_json = f + '/model.json' # *.json path
cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \
f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}"
subprocess.run(cmd, shell=True)
json = open(f_json).read()
with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order
subst = re.sub(
r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}}}',
r'{"outputs": {"Identity": {"name": "Identity"}, '
r'"Identity_1": {"name": "Identity_1"}, '
r'"Identity_2": {"name": "Identity_2"}, '
r'"Identity_3": {"name": "Identity_3"}}}',
json)
j.write(subst)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
@torch.no_grad()
def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
weights=ROOT / 'yolov3.pt', # weights path
imgsz=(640, 640), # image (height, width)
batch_size=1, # batch size
device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
include=('torchscript', 'onnx', 'coreml'), # include formats
half=False, # FP16 half-precision export
inplace=False, # set Detect() inplace=True
train=False, # model.train() mode
optimize=False, # TorchScript: optimize for mobile
int8=False, # CoreML/TF INT8 quantization
dynamic=False, # ONNX/TF: dynamic axes
simplify=False, # ONNX: simplify model
opset=12, # ONNX: opset version
topk_per_class=100, # TF.js NMS: topk per class to keep
topk_all=100, # TF.js NMS: topk for all classes to keep
iou_thres=0.45, # TF.js NMS: IoU threshold
conf_thres=0.25 # TF.js NMS: confidence threshold
):
t = time.time()
include = [x.lower() for x in include]
tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs')) # TensorFlow exports
imgsz *= 2 if len(imgsz) == 1 else 1 # expand
file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)
# Load PyTorch model
device = select_device(device)
assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. use --device 0'
model = attempt_load(weights, map_location=device, inplace=True, fuse=True) # load FP32 model
nc, names = model.nc, model.names # number of classes, class names
# Input
gs = int(max(model.stride)) # grid size (max stride)
imgsz = [check_img_size(x, gs) for x in imgsz] # verify img_size are gs-multiples
im = torch.zeros(batch_size, 3, *imgsz).to(device) # image size(1,3,320,192) BCHW iDetection
# Update model
if half:
im, model = im.half(), model.half() # to FP16
model.train() if train else model.eval() # training mode = no Detect() layer grid construction
for k, m in model.named_modules():
if isinstance(m, Conv): # assign export-friendly activations
if isinstance(m.act, nn.SiLU):
m.act = SiLU()
elif isinstance(m, Detect):
m.inplace = inplace
m.onnx_dynamic = dynamic
# m.forward = m.forward_export # assign forward (optional)
for _ in range(2):
y = model(im) # dry runs
LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)")
# Exports
if 'torchscript' in include:
export_torchscript(model, im, file, optimize)
if 'onnx' in include:
export_onnx(model, im, file, opset, train, dynamic, simplify)
if 'coreml' in include:
export_coreml(model, im, file)
# TensorFlow Exports
if any(tf_exports):
pb, tflite, tfjs = tf_exports[1:]
assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.'
model = export_saved_model(model.cpu(), im, file, dynamic, tf_nms=tfjs, agnostic_nms=tfjs,
topk_per_class=topk_per_class, topk_all=topk_all, conf_thres=conf_thres,
iou_thres=iou_thres) # keras model
if pb or tfjs: # pb prerequisite to tfjs
export_pb(model, im, file)
if tflite:
export_tflite(model, im, file, int8=int8, data=data, ncalib=100)
if tfjs:
export_tfjs(model, im, file)
# Finish
LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)'
f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
f'\nVisualize with https://netron.app')
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
parser.add_argument('--batch-size', type=int, default=1, help='batch size')
parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
parser.add_argument('--inplace', action='store_true', help='set YOLOv3 Detect() inplace=True')
parser.add_argument('--train', action='store_true', help='model.train() mode')
parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')
parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
parser.add_argument('--opset', type=int, default=13, help='ONNX: opset version')
parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
parser.add_argument('--include', nargs='+',
default=['torchscript', 'onnx'],
help='available formats are (torchscript, onnx, coreml, saved_model, pb, tflite, tfjs)')
opt = parser.parse_args()
print_args(FILE.stem, opt)
return opt
def main(opt):
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)
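A corresponding sketch for the exporter; flags follow parse_opt() above, and the weight path is again a placeholder:

# Usage sketch for export.py (placeholder paths; flag names as defined above).
#   python export.py --weights yolov3.pt --include torchscript onnx
#   python export.py --weights yolov3.pt --include onnx --dynamic --simplify
from export import run  # assumes this file is importable as export

run(weights='yolov3.pt', include=('onnx',), opset=12, simplify=True)  # writes yolov3.onnx next to the weights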

yolov3/hubconf.py (107 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/
Usage:
import torch
model = torch.hub.load('ultralytics/yolov3', 'yolov3')
"""
import torch
def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
"""Creates a specified model
Arguments:
name (str): name of model, i.e. 'yolov3'
pretrained (bool): load pretrained weights into the model
channels (int): number of input channels
classes (int): number of model classes
autoshape (bool): apply .autoshape() wrapper to model
verbose (bool): print all information to screen
device (str, torch.device, None): device to use for model parameters
Returns:
pytorch model
"""
from pathlib import Path
from models.experimental import attempt_load
from models.yolo import Model
from utils.downloads import attempt_download
from utils.general import check_requirements, intersect_dicts, set_logging
from utils.torch_utils import select_device
file = Path(__file__).resolve()
check_requirements(exclude=('tensorboard', 'thop', 'opencv-python'))
set_logging(verbose=verbose)
save_dir = Path('') if str(name).endswith('.pt') else file.parent
path = (save_dir / name).with_suffix('.pt') # checkpoint path
try:
device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device)
if pretrained and channels == 3 and classes == 80:
model = attempt_load(path, map_location=device) # download/load FP32 model
else:
cfg = list((Path(__file__).parent / 'models').rglob(f'{name}.yaml'))[0] # model.yaml path
model = Model(cfg, channels, classes) # create model
if pretrained:
ckpt = torch.load(attempt_download(path), map_location=device) # load
csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors']) # intersect
model.load_state_dict(csd, strict=False) # load
if len(ckpt['model'].names) == classes:
model.names = ckpt['model'].names # set class names attribute
if autoshape:
model = model.autoshape() # for file/URI/PIL/cv2/np inputs and NMS
return model.to(device)
except Exception as e:
help_url = 'https://github.com/ultralytics/yolov5/issues/36'
s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' % help_url
raise Exception(s) from e
def custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None):
# custom or local model
return _create(path, autoshape=autoshape, verbose=verbose, device=device)
def yolov3(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
# YOLOv3 model https://github.com/ultralytics/yolov3
return _create('yolov3', pretrained, channels, classes, autoshape, verbose, device)
def yolov3_spp(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
# YOLOv3-SPP model https://github.com/ultralytics/yolov3
return _create('yolov3-spp', pretrained, channels, classes, autoshape, verbose, device)
def yolov3_tiny(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
# YOLOv3-tiny model https://github.com/ultralytics/yolov3
return _create('yolov3-tiny', pretrained, channels, classes, autoshape, verbose, device)
if __name__ == '__main__':
model = _create(name='yolov3-tiny', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) # pretrained
# model = custom(path='path/to/model.pt') # custom
# Verify inference
from pathlib import Path
import cv2
import numpy as np
from PIL import Image
imgs = ['data/images/zidane.jpg', # filename
Path('data/images/zidane.jpg'), # Path
'https://ultralytics.com/images/zidane.jpg', # URI
cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV
Image.open('data/images/bus.jpg'), # PIL
np.zeros((320, 640, 3))] # numpy
results = model(imgs) # batched inference
results.print()
results.save()
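The intended entry point is PyTorch Hub, as the docstring above notes; a short sketch:

# PyTorch Hub usage sketch (downloads ultralytics/yolov3 on first call).
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3', pretrained=True)  # entry point defined above
model.conf = 0.4  # AutoShape NMS confidence threshold (see models/common.py)
results = model('https://ultralytics.com/images/zidane.jpg')  # AutoShape handles pre/post-processing
results.print()  # summary string per image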

yolov3/models/common.py (593 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Common modules
"""
import json
import math
import platform
import warnings
from copy import copy
from pathlib import Path
import cv2
import numpy as np
import pandas as pd
import requests
import torch
import torch.nn as nn
from PIL import Image
from torch.cuda import amp
from utils.datasets import exif_transpose, letterbox
from utils.general import (LOGGER, check_requirements, check_suffix, colorstr, increment_path, make_divisible,
non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import time_sync
def autopad(k, p=None): # kernel, padding
# Pad to 'same'
if p is None:
p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
return p
class Conv(nn.Module):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
self.bn = nn.BatchNorm2d(c2)
self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
def forward(self, x):
return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x):
return self.act(self.conv(x))
class DWConv(Conv):
# Depth-wise convolution class
def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
class TransformerLayer(nn.Module):
# Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
def __init__(self, c, num_heads):
super().__init__()
self.q = nn.Linear(c, c, bias=False)
self.k = nn.Linear(c, c, bias=False)
self.v = nn.Linear(c, c, bias=False)
self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
self.fc1 = nn.Linear(c, c, bias=False)
self.fc2 = nn.Linear(c, c, bias=False)
def forward(self, x):
x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
x = self.fc2(self.fc1(x)) + x
return x
class TransformerBlock(nn.Module):
# Vision Transformer https://arxiv.org/abs/2010.11929
def __init__(self, c1, c2, num_heads, num_layers):
super().__init__()
self.conv = None
if c1 != c2:
self.conv = Conv(c1, c2)
self.linear = nn.Linear(c2, c2) # learnable position embedding
self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
self.c2 = c2
def forward(self, x):
if self.conv is not None:
x = self.conv(x)
b, _, w, h = x.shape
p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)
return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, w, h)
class Bottleneck(nn.Module):
# Standard bottleneck
def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_, c2, 3, 1, g=g)
self.add = shortcut and c1 == c2
def forward(self, x):
return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
class BottleneckCSP(nn.Module):
# CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
self.cv4 = Conv(2 * c_, c2, 1, 1)
self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
self.act = nn.SiLU()
self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
def forward(self, x):
y1 = self.cv3(self.m(self.cv1(x)))
y2 = self.cv2(x)
return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
class C3(nn.Module):
# CSP Bottleneck with 3 convolutions
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c1, c_, 1, 1)
self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2)
self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
# self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])
def forward(self, x):
return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
class C3TR(C3):
# C3 module with TransformerBlock()
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e)
self.m = TransformerBlock(c_, c_, 4, n)
class C3SPP(C3):
# C3 module with SPP()
def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e)
self.m = SPP(c_, c_, k)
class C3Ghost(C3):
# C3 module with GhostBottleneck()
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e) # hidden channels
self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
class SPP(nn.Module):
# Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
def __init__(self, c1, c2, k=(5, 9, 13)):
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
class SPPF(nn.Module):
# Spatial Pyramid Pooling - Fast (SPPF) layer by Glenn Jocher
def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_ * 4, c2, 1, 1)
self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
y1 = self.m(x)
y2 = self.m(y1)
return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
class Focus(nn.Module):
# Focus wh information into c-space
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
# self.contract = Contract(gain=2)
def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
# return self.conv(self.contract(x))
class GhostConv(nn.Module):
# Ghost Convolution https://github.com/huawei-noah/ghostnet
def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
super().__init__()
c_ = c2 // 2 # hidden channels
self.cv1 = Conv(c1, c_, k, s, None, g, act)
self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
def forward(self, x):
y = self.cv1(x)
return torch.cat([y, self.cv2(y)], 1)
class GhostBottleneck(nn.Module):
# Ghost Bottleneck https://github.com/huawei-noah/ghostnet
def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
super().__init__()
c_ = c2 // 2
self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
def forward(self, x):
return self.conv(x) + self.shortcut(x)
class Contract(nn.Module):
# Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
def __init__(self, gain=2):
super().__init__()
self.gain = gain
def forward(self, x):
b, c, h, w = x.size() # assert (h % s == 0) and (w % s == 0), 'Indivisible gain'
s = self.gain
x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
class Expand(nn.Module):
# Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
def __init__(self, gain=2):
super().__init__()
self.gain = gain
def forward(self, x):
b, c, h, w = x.size() # assert (c % s ** 2 == 0), 'Indivisible gain'
s = self.gain
x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
class Concat(nn.Module):
# Concatenate a list of tensors along dimension
def __init__(self, dimension=1):
super().__init__()
self.d = dimension
def forward(self, x):
return torch.cat(x, self.d)
class DetectMultiBackend(nn.Module):
# MultiBackend class for python inference on various backends
def __init__(self, weights='yolov3.pt', device=None, dnn=True):
# Usage:
# PyTorch: weights = *.pt
# TorchScript: *.torchscript.pt
# CoreML: *.mlmodel
# TensorFlow: *_saved_model
# TensorFlow: *.pb
# TensorFlow Lite: *.tflite
# ONNX Runtime: *.onnx
# OpenCV DNN: *.onnx with dnn=True
super().__init__()
w = str(weights[0] if isinstance(weights, list) else weights)
suffix, suffixes = Path(w).suffix.lower(), ['.pt', '.onnx', '.tflite', '.pb', '', '.mlmodel']
check_suffix(w, suffixes) # check weights have acceptable suffix
pt, onnx, tflite, pb, saved_model, coreml = (suffix == x for x in suffixes) # backend booleans
jit = pt and 'torchscript' in w.lower()
stride, names = 64, [f'class{i}' for i in range(1000)] # assign defaults
if jit: # TorchScript
LOGGER.info(f'Loading {w} for TorchScript inference...')
extra_files = {'config.txt': ''} # model metadata
model = torch.jit.load(w, _extra_files=extra_files)
if extra_files['config.txt']:
d = json.loads(extra_files['config.txt']) # extra_files dict
stride, names = int(d['stride']), d['names']
elif pt: # PyTorch
from models.experimental import attempt_load # scoped to avoid circular import
model = torch.jit.load(w) if 'torchscript' in w else attempt_load(weights, map_location=device)
stride = int(model.stride.max()) # model stride
names = model.module.names if hasattr(model, 'module') else model.names # get class names
elif coreml: # CoreML *.mlmodel
import coremltools as ct
model = ct.models.MLModel(w)
elif dnn: # ONNX OpenCV DNN
LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
check_requirements(('opencv-python>=4.5.4',))
net = cv2.dnn.readNetFromONNX(w)
elif onnx: # ONNX Runtime
LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
cuda = torch.cuda.is_available()
check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
import onnxruntime
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
session = onnxruntime.InferenceSession(w, providers=providers)
else: # TensorFlow model (TFLite, pb, saved_model)
import tensorflow as tf
if pb: # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
def wrap_frozen_graph(gd, inputs, outputs):
x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped
return x.prune(tf.nest.map_structure(x.graph.as_graph_element, inputs),
tf.nest.map_structure(x.graph.as_graph_element, outputs))
LOGGER.info(f'Loading {w} for TensorFlow *.pb inference...')
graph_def = tf.Graph().as_graph_def()
graph_def.ParseFromString(open(w, 'rb').read())
frozen_func = wrap_frozen_graph(gd=graph_def, inputs="x:0", outputs="Identity:0")
elif saved_model:
LOGGER.info(f'Loading {w} for TensorFlow saved_model inference...')
model = tf.keras.models.load_model(w)
elif tflite: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
if 'edgetpu' in w.lower():
LOGGER.info(f'Loading {w} for TensorFlow Edge TPU inference...')
import tflite_runtime.interpreter as tfli
delegate = {'Linux': 'libedgetpu.so.1', # install https://coral.ai/software/#edgetpu-runtime
'Darwin': 'libedgetpu.1.dylib',
'Windows': 'edgetpu.dll'}[platform.system()]
interpreter = tfli.Interpreter(model_path=w, experimental_delegates=[tfli.load_delegate(delegate)])
else:
LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
interpreter = tf.lite.Interpreter(model_path=w) # load TFLite model
interpreter.allocate_tensors() # allocate
input_details = interpreter.get_input_details() # inputs
output_details = interpreter.get_output_details() # outputs
self.__dict__.update(locals()) # assign all variables to self
def forward(self, im, augment=False, visualize=False, val=False):
# MultiBackend inference
b, ch, h, w = im.shape # batch, channel, height, width
if self.pt: # PyTorch
y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
return y if val else y[0]
elif self.coreml: # CoreML *.mlmodel
im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
im = Image.fromarray((im[0] * 255).astype('uint8'))
# im = im.resize((192, 320), Image.ANTIALIAS)
y = self.model.predict({'image': im}) # coordinates are xywh normalized
box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels
conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(float) # np.float alias removed in NumPy 1.24+
y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
elif self.onnx: # ONNX
im = im.cpu().numpy() # torch to numpy
if self.dnn: # ONNX OpenCV DNN
self.net.setInput(im)
y = self.net.forward()
else: # ONNX Runtime
y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
else: # TensorFlow model (TFLite, pb, saved_model)
im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
if self.pb:
y = self.frozen_func(x=self.tf.constant(im)).numpy()
elif self.saved_model:
y = self.model(im, training=False).numpy()
elif self.tflite:
input, output = self.input_details[0], self.output_details[0]
int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model
if int8:
scale, zero_point = input['quantization']
im = (im / scale + zero_point).astype(np.uint8) # de-scale
self.interpreter.set_tensor(input['index'], im)
self.interpreter.invoke()
y = self.interpreter.get_tensor(output['index'])
if int8:
scale, zero_point = output['quantization']
y = (y.astype(np.float32) - zero_point) * scale # re-scale
y[..., 0] *= w # x
y[..., 1] *= h # y
y[..., 2] *= w # w
y[..., 3] *= h # h
y = torch.tensor(y)
return (y, []) if val else y
class AutoShape(nn.Module):
# input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
conf = 0.25 # NMS confidence threshold
iou = 0.45 # NMS IoU threshold
classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
multi_label = False # NMS multiple labels per box
max_det = 1000 # maximum number of detections per image
def __init__(self, model):
super().__init__()
self.model = model.eval()
def autoshape(self):
LOGGER.info('AutoShape already enabled, skipping... ') # model already converted to model.autoshape()
return self
def _apply(self, fn):
# Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
self = super()._apply(fn)
m = self.model.model[-1] # Detect()
m.stride = fn(m.stride)
m.grid = list(map(fn, m.grid))
if isinstance(m.anchor_grid, list):
m.anchor_grid = list(map(fn, m.anchor_grid))
return self
@torch.no_grad()
def forward(self, imgs, size=640, augment=False, profile=False):
# Inference from various sources. For height=640, width=1280, RGB images example inputs are:
# file: imgs = 'data/images/zidane.jpg' # str or PosixPath
# URI: = 'https://ultralytics.com/images/zidane.jpg'
# OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
# PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
# numpy: = np.zeros((640,1280,3)) # HWC
# torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
# multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
t = [time_sync()]
p = next(self.model.parameters()) # for device and type
if isinstance(imgs, torch.Tensor): # torch
with amp.autocast(enabled=p.device.type != 'cpu'):
return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
# Pre-process
n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images
shape0, shape1, files = [], [], [] # image and inference shapes, filenames
for i, im in enumerate(imgs):
f = f'image{i}' # filename
if isinstance(im, (str, Path)): # filename or uri
im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
im = np.asarray(exif_transpose(im))
elif isinstance(im, Image.Image): # PIL Image
im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
files.append(Path(f).with_suffix('.jpg').name)
if im.shape[0] < 5: # image in CHW
im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input
s = im.shape[:2] # HWC
shape0.append(s) # image shape
g = (size / max(s)) # gain
shape1.append([y * g for y in s])
imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape
x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad
x = np.stack(x, 0) if n > 1 else x[0][None] # stack
x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW
x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
t.append(time_sync())
with amp.autocast(enabled=p.device.type != 'cpu'):
# Inference
y = self.model(x, augment, profile)[0] # forward
t.append(time_sync())
# Post-process
y = non_max_suppression(y, self.conf, iou_thres=self.iou, classes=self.classes,
multi_label=self.multi_label, max_det=self.max_det) # NMS
for i in range(n):
scale_coords(shape1, y[i][:, :4], shape0[i])
t.append(time_sync())
return Detections(imgs, y, files, t, self.names, x.shape)
class Detections:
# detections class for inference results
def __init__(self, imgs, pred, files, times=None, names=None, shape=None):
super().__init__()
d = pred[0].device # device
gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs] # normalizations
self.imgs = imgs # list of images as numpy arrays
self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
self.names = names # class names
self.files = files # image filenames
self.xyxy = pred # xyxy pixels
self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
self.n = len(self.pred) # number of images (batch size)
self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) if times else (0.0, 0.0, 0.0) # per-image times (ms)
self.s = shape # inference BCHW shape
def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
crops = []
for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
if pred.shape[0]:
for c in pred[:, -1].unique():
n = (pred[:, -1] == c).sum() # detections per class
s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
if show or save or render or crop:
annotator = Annotator(im, example=str(self.names))
for *box, conf, cls in reversed(pred): # xyxy, confidence, class
label = f'{self.names[int(cls)]} {conf:.2f}'
if crop:
file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,
'im': save_one_box(box, im, file=file, save=save)})
else: # all others
annotator.box_label(box, label, color=colors(cls))
im = annotator.im
else:
s += '(no detections)'
im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
if pprint:
LOGGER.info(s.rstrip(', '))
if show:
im.show(self.files[i]) # show
if save:
f = self.files[i]
im.save(save_dir / f) # save
if i == self.n - 1:
LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
if render:
self.imgs[i] = np.asarray(im)
if crop:
if save:
LOGGER.info(f'Saved results to {save_dir}\n')
return crops
def print(self):
self.display(pprint=True) # print results
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %
self.t)
def show(self):
self.display(show=True) # show results
def save(self, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
self.display(save=True, save_dir=save_dir) # save results
def crop(self, save=True, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
return self.display(crop=True, save=save, save_dir=save_dir) # crop results
def render(self):
self.display(render=True) # render results
return self.imgs
def pandas(self):
# return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
new = copy(self) # return copy
ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
return new
def tolist(self):
# return a list of Detections objects, i.e. 'for result in results.tolist():'
x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], names=self.names, shape=self.s) for i in range(self.n)]
for d in x:
for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
setattr(d, k, getattr(d, k)[0]) # pop out of list
return x
def __len__(self):
return self.n
class Classify(nn.Module):
# Classification head, i.e. x(b,c1,20,20) to x(b,c2)
def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
self.flat = nn.Flatten()
def forward(self, x):
z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
return self.flat(self.conv(z)) # flatten to x(b,c2)
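A minimal sketch tying DetectMultiBackend and non_max_suppression together outside detect.py; the weight path is a placeholder and the dummy input stands in for a real image batch:

# Inference sketch using the classes above (placeholder weights; CPU for portability).
import torch
from models.common import DetectMultiBackend
from utils.general import non_max_suppression
from utils.torch_utils import select_device

device = select_device('cpu')
model = DetectMultiBackend('yolov3.pt', device=device)  # backend chosen from the weight suffix
im = torch.zeros(1, 3, 640, 640, device=device)  # dummy BCHW input in [0, 1]
pred = model(im)  # raw predictions
pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)  # list of (n, 6) xyxy+conf+cls tensors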

yolov3/models/experimental.py (121 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Experimental modules
"""
import math
import numpy as np
import torch
import torch.nn as nn
from models.common import Conv
from utils.downloads import attempt_download
class CrossConv(nn.Module):
# Cross Convolution Downsample
def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
# ch_in, ch_out, kernel, stride, groups, expansion, shortcut
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, (1, k), (1, s))
self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
self.add = shortcut and c1 == c2
def forward(self, x):
return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
class Sum(nn.Module):
# Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
def __init__(self, n, weight=False): # n: number of inputs
super().__init__()
self.weight = weight # apply weights boolean
self.iter = range(n - 1) # iter object
if weight:
self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True) # layer weights
def forward(self, x):
y = x[0] # no weight
if self.weight:
w = torch.sigmoid(self.w) * 2
for i in self.iter:
y = y + x[i + 1] * w[i]
else:
for i in self.iter:
y = y + x[i + 1]
return y
class MixConv2d(nn.Module):
# Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595
def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch_out, kernel, stride, ch_strategy
super().__init__()
n = len(k) # number of convolutions
if equal_ch: # equal c_ per group
i = torch.linspace(0, n - 1E-6, c2).floor() # c2 indices
c_ = [(i == g).sum() for g in range(n)] # intermediate channels
else: # equal weight.numel() per group
b = [c2] + [0] * n
a = np.eye(n + 1, n, k=-1)
a -= np.roll(a, 1, axis=1)
a *= np.array(k) ** 2
a[0] = 1
c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
self.m = nn.ModuleList(
[nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
self.bn = nn.BatchNorm2d(c2)
self.act = nn.SiLU()
def forward(self, x):
return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
class Ensemble(nn.ModuleList):
# Ensemble of models
def __init__(self):
super().__init__()
def forward(self, x, augment=False, profile=False, visualize=False):
y = []
for module in self:
y.append(module(x, augment, profile, visualize)[0])
# y = torch.stack(y).max(0)[0] # max ensemble
# y = torch.stack(y).mean(0) # mean ensemble
y = torch.cat(y, 1) # nms ensemble
return y, None # inference, train output
def attempt_load(weights, map_location=None, inplace=True, fuse=True):
from models.yolo import Detect, Model
# Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
model = Ensemble()
for w in weights if isinstance(weights, list) else [weights]:
ckpt = torch.load(attempt_download(w), map_location=map_location) # load
ckpt = (ckpt['ema'] or ckpt['model']).float() # FP32 model
model.append(ckpt.fuse().eval() if fuse else ckpt.eval()) # fused or un-fused model in eval mode
# Compatibility updates
for m in model.modules():
t = type(m)
if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
m.inplace = inplace # torch 1.7.0 compatibility
if t is Detect:
if not isinstance(m.anchor_grid, list): # new Detect Layer compatibility
delattr(m, 'anchor_grid')
setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
elif t is Conv:
m._non_persistent_buffers_set = set() # torch 1.6.0 compatibility
elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'):
m.recompute_scale_factor = None # torch 1.11.0 compatibility
if len(model) == 1:
return model[-1] # return model
else:
print(f'Ensemble created with {weights}\n')
for k in ['names']:
setattr(model, k, getattr(model[-1], k))
model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
return model # return ensemble
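attempt_load() above returns either a single model or an Ensemble depending on its input; a short sketch (weight file names are placeholders):

# Sketch: single model vs. NMS ensemble (placeholder weight paths).
from models.experimental import attempt_load

model = attempt_load('yolov3.pt', map_location='cpu')  # one path -> fused single model
ensemble = attempt_load(['yolov3.pt', 'yolov3-spp.pt'], map_location='cpu')  # list -> Ensemble, outputs concatenated for NMS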

models YAML: default COCO anchors (59 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Default anchors for COCO data
# P5 -------------------------------------------------------------------------------------------------------------------
# P5-640:
anchors_p5_640:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# P6 -------------------------------------------------------------------------------------------------------------------
# P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
anchors_p6_640:
- [9,11, 21,19, 17,41] # P3/8
- [43,32, 39,70, 86,64] # P4/16
- [65,131, 134,130, 120,265] # P5/32
- [282,180, 247,354, 512,387] # P6/64
# P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
anchors_p6_1280:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
anchors_p6_1920:
- [28,41, 67,59, 57,141] # P3/8
- [144,103, 129,227, 270,205] # P4/16
- [209,452, 455,396, 358,812] # P5/32
- [653,922, 1109,570, 1387,1187] # P6/64
# P7 -------------------------------------------------------------------------------------------------------------------
# P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
anchors_p7_640:
- [11,11, 13,30, 29,20] # P3/8
- [30,46, 61,38, 39,92] # P4/16
- [78,80, 146,66, 79,163] # P5/32
- [149,150, 321,143, 157,303] # P6/64
- [257,402, 359,290, 524,372] # P7/128
# P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
anchors_p7_1280:
- [19,22, 54,36, 32,77] # P3/8
- [70,83, 138,71, 75,173] # P4/16
- [165,159, 148,334, 375,151] # P5/32
- [334,317, 251,626, 499,474] # P6/64
- [750,326, 534,814, 1079,818] # P7/128
# P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
anchors_p7_1920:
- [29,34, 81,55, 47,115] # P3/8
- [105,124, 207,107, 113,259] # P4/16
- [247,238, 222,500, 563,227] # P5/32
- [501,476, 376,939, 749,711] # P6/64
- [1126,489, 801,1222, 1618,1227] # P7/128
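Each anchor set above holds one row per output layer, with three width,height pairs per row; a sketch of loading and reshaping them (assuming the file is saved as anchors.yaml):

# Sketch: parse an anchor set into a (layers, anchors_per_layer, 2) tensor.
import torch
import yaml

with open('anchors.yaml', errors='ignore') as f:
    data = yaml.safe_load(f)

a = torch.tensor(data['anchors_p5_640'], dtype=torch.float32).view(3, -1, 2)  # wh pairs
print(a.shape)  # torch.Size([3, 3, 2]): 3 anchors each for P3/8, P4/16, P5/32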

models YAML: YOLOv5 v6.0 BiFPN variant (48 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 BiFPN head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14, 6], 1, Concat, [1]], # cat P4 <--- BiFPN change
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
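The line marked "BiFPN change" above concatenates the downsampled P3 path with both the head P4 (layer 14) and the backbone P4 (layer 6), i.e. one extra skip connection versus a plain PAN head. Concat from models/common.py does the channel-wise stacking, as in this sketch (feature sizes are illustrative only):

# Sketch of the BiFPN Concat: three stride-16 maps stacked along channels.
import torch
from models.common import Concat

cat = Concat(dimension=1)
p4_down, p4_head, p4_backbone = (torch.zeros(1, 256, 40, 40) for _ in range(3))
print(cat([p4_down, p4_head, p4_backbone]).shape)  # torch.Size([1, 768, 40, 40])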

models YAML: YOLOv5 v6.0 FPN variant (42 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 FPN head
head:
[[-1, 3, C3, [1024, False]], # 10 (P5/32-large)
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [512, 1, 1]],
[-1, 3, C3, [512, False]], # 14 (P4/16-medium)
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 1, Conv, [256, 1, 1]],
[-1, 3, C3, [256, False]], # 18 (P3/8-small)
[[18, 14, 10], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]

models YAML: P2–P5 outputs variant (54 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head with (P2, P3, P4, P5) outputs
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 2], 1, Concat, [1]], # cat backbone P2
[-1, 1, C3, [128, False]], # 21 (P2/4-xsmall)
[-1, 1, Conv, [128, 3, 2]],
[[-1, 18], 1, Concat, [1]], # cat head P3
[-1, 3, C3, [256, False]], # 24 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 27 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 30 (P5/32-large)
[[21, 24, 27, 30], 1, Detect, [nc, anchors]], # Detect(P2, P3, P4, P5)
]

models YAML: P3–P4 outputs variant (41 lines, new file)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head with (P3, P4) outputs
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[[17, 20], 1, Detect, [nc, anchors]], # Detect(P3, P4)
]

View File

@@ -0,0 +1,56 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head with (P3, P4, P5, P6) outputs
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,67 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, Conv, [1280, 3, 2]], # 11-P7/128
[-1, 3, C3, [1280]],
[-1, 1, SPPF, [1280, 5]], # 13
]
# YOLOv5 v6.0 head with (P3, P4, P5, P6, P7) outputs
head:
[[-1, 1, Conv, [1024, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 10], 1, Concat, [1]], # cat backbone P6
[-1, 3, C3, [1024, False]], # 17
[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 21
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 25
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 29 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 26], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 32 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 22], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 35 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 18], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 38 (P6/64-xlarge)
[-1, 1, Conv, [1024, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P7
[-1, 3, C3, [1280, False]], # 41 (P7/128-xxlarge)
[[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6, P7)
]
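The P3/8 through P7/128 comments encode the output strides: each additional P level halves the feature-map resolution. A quick sketch of the arithmetic:

strides = {f'P{k}': 2 ** k for k in range(3, 8)}
# {'P3': 8, 'P4': 16, 'P5': 32, 'P6': 64, 'P7': 128}
# e.g. a 1280-pixel input yields a 10x10 grid at P7 (1280 / 128 = 10)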

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 PANet head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]

View File

@@ -0,0 +1,60 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,60 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,60 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,49 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
activation: nn.LeakyReLU(0.1) # <----- Conv() activation used throughout entire YOLOv5 model
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
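The activation key above swaps the Conv() activation model-wide. Later upstream YOLOv5 releases read this key during parsing and install it as the Conv default; a minimal sketch of that mechanism, assuming an upstream-style Conv.default_act hook (the parse_model shown later in this diff does not read the key):

act = yaml_dict.get('activation')  # e.g. "nn.LeakyReLU(0.1)"; yaml_dict is the loaded config (assumed)
if act:
    Conv.default_act = eval(act)   # assumes 'nn' and 'Conv' are in scope, as in models/yolo.py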

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, GhostConv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3Ghost, [128]],
[-1, 1, GhostConv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3Ghost, [256]],
[-1, 1, GhostConv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3Ghost, [512]],
[-1, 1, GhostConv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3Ghost, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, GhostConv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3Ghost, [512, False]], # 13
[-1, 1, GhostConv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3Ghost, [256, False]], # 17 (P3/8-small)
[-1, 1, GhostConv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3Ghost, [512, False]], # 20 (P4/16-medium)
[-1, 1, GhostConv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3Ghost, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3TR, [1024]], # 8 <--- C3TR() Transformer module
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]

View File

@@ -0,0 +1,60 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,60 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Segment, [nc, anchors, 32, 256]], # Segment(P3, P4, P5)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Segment, [nc, anchors, 32, 256]], # Segment(P3, P4, P5)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Segment, [nc, anchors, 32, 256]], # Segment(P3, P4, P5)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Segment, [nc, anchors, 32, 256]], # Segment(P3, P4, P5)
]

View File

@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Segment, [nc, anchors, 32, 256]], # Segment(P3, P4, P5)
]
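The Segment rows in these five configs pass two arguments beyond Detect's [nc, anchors]; following upstream YOLOv5 naming (an assumption here), these are nm=32 mask coefficients per box and npr=256 prototype channels. A quick sketch of what they imply for the per-anchor output width:

nc, nm, npr = 80, 32, 256  # classes, mask coefficients, prototype channels
no_detect = nc + 5         # Detect: box(4) + objectness(1) + classes = 85
no_segment = nc + 5 + nm   # Segment adds one coefficient per prototype = 117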

628
yolov3/models/tf.py Normal file
View File

@@ -0,0 +1,628 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
TensorFlow, Keras and TFLite versions of YOLOv3
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
Usage:
$ python models/tf.py --weights yolov3.pt
Export:
$ python path/to/export.py --weights yolov3.pt --include saved_model pb tflite tfjs
"""
import argparse
import logging
import sys
from copy import deepcopy
from pathlib import Path
from packaging import version
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
# ROOT = ROOT.relative_to(Path.cwd()) # relative
import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn
from keras import backend
from keras.engine.base_layer import Layer
from keras.engine.input_spec import InputSpec
from keras.utils import conv_utils
from tensorflow import keras
from models.common import C3, SPP, SPPF, Bottleneck, BottleneckCSP, Concat, Conv, DWConv, Focus, autopad
from models.experimental import CrossConv, MixConv2d, attempt_load
from models.yolo import Detect
from utils.activations import SiLU
from utils.general import LOGGER, make_divisible, print_args
# isort: off
from tensorflow.python.util.tf_export import keras_export
class TFBN(keras.layers.Layer):
# TensorFlow BatchNormalization wrapper
def __init__(self, w=None):
super().__init__()
self.bn = keras.layers.BatchNormalization(
beta_initializer=keras.initializers.Constant(w.bias.numpy()),
gamma_initializer=keras.initializers.Constant(w.weight.numpy()),
moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),
moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),
epsilon=w.eps)
def call(self, inputs):
return self.bn(inputs)
class TFMaxPool2d(keras.layers.Layer):
# TensorFlow MAX Pooling
def __init__(self, k, s, p, w=None):
super().__init__()
self.pool = keras.layers.MaxPool2D(pool_size=k, strides=s, padding='valid')
def call(self, inputs):
return self.pool(inputs)
class TFZeroPad2d(keras.layers.Layer):
# TensorFlow ZeroPadding2D wrapper
def __init__(self, p, w=None):
super().__init__()
if version.parse(tf.__version__) < version.parse('2.11.0'):
self.zero_pad = ZeroPadding2D(padding=p)
else:
self.zero_pad = keras.layers.ZeroPadding2D(padding=((p[0], p[1]), (p[2], p[3])))
def call(self, inputs):
return self.zero_pad(inputs)
class TFPad(keras.layers.Layer):
def __init__(self, pad):
super().__init__()
self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])
def call(self, inputs):
return tf.pad(inputs, self.pad, mode='constant', constant_values=0)
class TFConv(keras.layers.Layer):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
# ch_in, ch_out, weights, kernel, stride, padding, groups
super().__init__()
assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
assert isinstance(k, int), "Convolutions with multiple kernels are not allowed."
# TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding)
# see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch
conv = keras.layers.Conv2D(
c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False if hasattr(w, 'bn') else True,
kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
# activations
if isinstance(w.act, nn.LeakyReLU):
self.act = (lambda x: keras.activations.relu(x, alpha=0.1)) if act else tf.identity
elif isinstance(w.act, nn.Hardswish):
self.act = (lambda x: x * tf.nn.relu6(x + 3) * 0.166666667) if act else tf.identity
elif isinstance(w.act, (nn.SiLU, SiLU)):
self.act = (lambda x: keras.activations.swish(x)) if act else tf.identity
else:
raise Exception(f'no matching TensorFlow activation found for {w.act}')
def call(self, inputs):
return self.act(self.bn(self.conv(inputs)))
class TFFocus(keras.layers.Layer):
# Focus wh information into c-space
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
# ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = TFConv(c1 * 4, c2, k, s, p, g, act, w.conv)
def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c)
# inputs = inputs / 255 # normalize 0-255 to 0-1
return self.conv(tf.concat([inputs[:, ::2, ::2, :],
inputs[:, 1::2, ::2, :],
inputs[:, ::2, 1::2, :],
inputs[:, 1::2, 1::2, :]], 3))
class TFBottleneck(keras.layers.Layer):
# Standard bottleneck
def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None): # ch_in, ch_out, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c_, c2, 3, 1, g=g, w=w.cv2)
self.add = shortcut and c1 == c2
def call(self, inputs):
return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
class TFConv2d(keras.layers.Layer):
# Substitution for PyTorch nn.Conv2D
def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
super().__init__()
assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
self.conv = keras.layers.Conv2D(
c2, k, s, 'VALID', use_bias=bias,
kernel_initializer=keras.initializers.Constant(w.weight.permute(2, 3, 1, 0).numpy()),
bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None, )
def call(self, inputs):
return self.conv(inputs)
class TFBottleneckCSP(keras.layers.Layer):
# CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
# ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv2d(c1, c_, 1, 1, bias=False, w=w.cv2)
self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3)
self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4)
self.bn = TFBN(w.bn)
self.act = lambda x: keras.activations.relu(x, alpha=0.1)
self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
def call(self, inputs):
y1 = self.cv3(self.m(self.cv1(inputs)))
y2 = self.cv2(inputs)
return self.cv4(self.act(self.bn(tf.concat((y1, y2), axis=3))))
class TFC3(keras.layers.Layer):
# CSP Bottleneck with 3 convolutions
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
# ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
def call(self, inputs):
return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
class TFSPP(keras.layers.Layer):
# Spatial pyramid pooling layer used in YOLOv3-SPP
def __init__(self, c1, c2, k=(5, 9, 13), w=None):
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c_ * (len(k) + 1), c2, 1, 1, w=w.cv2)
self.m = [keras.layers.MaxPool2D(pool_size=x, strides=1, padding='SAME') for x in k]
def call(self, inputs):
x = self.cv1(inputs)
return self.cv2(tf.concat([x] + [m(x) for m in self.m], 3))
class TFSPPF(keras.layers.Layer):
# Spatial pyramid pooling-Fast layer
def __init__(self, c1, c2, k=5, w=None):
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c_ * 4, c2, 1, 1, w=w.cv2)
self.m = keras.layers.MaxPool2D(pool_size=k, strides=1, padding='SAME')
def call(self, inputs):
x = self.cv1(inputs)
y1 = self.m(x)
y2 = self.m(y1)
return self.cv2(tf.concat([x, y1, y2, self.m(y2)], 3))
class TFDetect(keras.layers.Layer):
def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None): # detection layer
super().__init__()
self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32)
self.nc = nc # number of classes
self.no = nc + 5 # number of outputs per anchor
self.nl = len(anchors) # number of detection layers
self.na = len(anchors[0]) // 2 # number of anchors
self.grid = [tf.zeros(1)] * self.nl # init grid
self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32)
self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]),
[self.nl, 1, -1, 1, 2])
self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)]
self.training = False # set to False after building model
self.imgsz = imgsz
for i in range(self.nl):
ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
self.grid[i] = self._make_grid(nx, ny)
def call(self, inputs):
z = [] # inference output
x = []
for i in range(self.nl):
x.append(self.m[i](inputs[i]))
# x(bs,20,20,255) to x(bs,3,20,20,85)
ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
x[i] = tf.transpose(tf.reshape(x[i], [-1, ny * nx, self.na, self.no]), [0, 2, 1, 3])
if not self.training: # inference
y = tf.sigmoid(x[i])
xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy
wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]
# Normalize xywh to 0-1 to reduce calibration error
xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
y = tf.concat([xy, wh, y[..., 4:]], -1)
z.append(tf.reshape(y, [-1, 3 * ny * nx, self.no]))
return x if self.training else (tf.concat(z, 1), x)
@staticmethod
def _make_grid(nx=20, ny=20):
# yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
# return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
xv, yv = tf.meshgrid(tf.range(nx), tf.range(ny))
return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32)
class TFUpsample(keras.layers.Layer):
def __init__(self, size, scale_factor, mode, w=None): # warning: all arguments needed including 'w'
super().__init__()
assert scale_factor == 2, "scale_factor must be 2"
self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * 2, x.shape[2] * 2), method=mode)
# self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode)
# with default arguments: align_corners=False, half_pixel_centers=False
# self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x,
# size=(x.shape[1] * 2, x.shape[2] * 2))
def call(self, inputs):
return self.upsample(inputs)
class TFConcat(keras.layers.Layer):
def __init__(self, dimension=1, w=None):
super().__init__()
assert dimension == 1, "convert only NCHW to NHWC concat"
self.d = 3
def call(self, inputs):
return tf.concat(inputs, self.d)
def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3)
LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
m_str = m
m = eval(m) if isinstance(m, str) else m # eval strings
for j, a in enumerate(args):
try:
args[j] = eval(a) if isinstance(a, str) else a # eval strings
except NameError:
pass
n = max(round(n * gd), 1) if n > 1 else n # depth gain
if m in [nn.Conv2d, Conv, Bottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]:
c1, c2 = ch[f], args[0]
c2 = make_divisible(c2 * gw, 8) if c2 != no else c2
args = [c1, c2, *args[1:]]
if m in [BottleneckCSP, C3]:
args.insert(2, n)
n = 1
elif m is nn.BatchNorm2d:
args = [ch[f]]
elif m is Concat:
c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)
elif m is Detect:
args.append([ch[x + 1] for x in f])
if isinstance(args[1], int): # number of anchors
args[1] = [list(range(args[1] * 2))] * len(f)
args.append(imgsz)
else:
c2 = ch[f]
tf_m = eval('TF' + m_str.replace('nn.', ''))
m_ = keras.Sequential([tf_m(*args, w=model.model[i][j]) for j in range(n)]) if n > 1 \
else tf_m(*args, w=model.model[i]) # module
torch_m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
t = str(m)[8:-2].replace('__main__.', '') # module type
np = sum(x.numel() for x in torch_m_.parameters()) # number params
m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
LOGGER.info(f'{i:>3}{str(f):>18}{str(n):>3}{np:>10} {t:<40}{str(args):<30}') # print
save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
layers.append(m_)
ch.append(c2)
return keras.Sequential(layers), sorted(save)
class TFModel:
def __init__(self, cfg='yolov3.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes
super().__init__()
if isinstance(cfg, dict):
self.yaml = cfg # model dict
else: # is *.yaml
import yaml # for torch hub
self.yaml_file = Path(cfg).name
with open(cfg) as f:
self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict
# Define model
if nc and nc != self.yaml['nc']:
LOGGER.info(f"Overriding {cfg} nc={self.yaml['nc']} with nc={nc}")
self.yaml['nc'] = nc # override yaml value
self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz)
def predict(self, inputs, tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
conf_thres=0.25):
y = [] # outputs
x = inputs
for i, m in enumerate(self.model.layers):
if m.f != -1: # if not from previous layer
x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
x = m(x) # run
y.append(x if m.i in self.savelist else None) # save output
# Add TensorFlow NMS
if tf_nms:
boxes = self._xywh2xyxy(x[0][..., :4])
probs = x[0][:, :, 4:5]
classes = x[0][:, :, 5:]
scores = probs * classes
if agnostic_nms:
nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres)
return nms, x[1]
else:
boxes = tf.expand_dims(boxes, 2)
nms = tf.image.combined_non_max_suppression(
boxes, scores, topk_per_class, topk_all, iou_thres, conf_thres, clip_boxes=False)
return nms, x[1]
return x[0] # output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...]
# x = x[0][0] # [x(1,6300,85), ...] to x(6300,85)
# xywh = x[..., :4] # x(6300,4) boxes
# conf = x[..., 4:5] # x(6300,1) confidences
# cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1)) # x(6300,1) classes
# return tf.concat([conf, cls, xywh], 1)
@staticmethod
def _xywh2xyxy(xywh):
# Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
x, y, w, h = tf.split(xywh, num_or_size_splits=4, axis=-1)
return tf.concat([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=-1)
class AgnosticNMS(keras.layers.Layer):
# TF Agnostic NMS
def call(self, input, topk_all, iou_thres, conf_thres):
# wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450
return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), input,
fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32),
name='agnostic_nms')
@staticmethod
def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25): # agnostic NMS
boxes, classes, scores = x
class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32)
scores_inp = tf.reduce_max(scores, -1)
selected_inds = tf.image.non_max_suppression(
boxes, scores_inp, max_output_size=topk_all, iou_threshold=iou_thres, score_threshold=conf_thres)
selected_boxes = tf.gather(boxes, selected_inds)
padded_boxes = tf.pad(selected_boxes,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]],
mode="CONSTANT", constant_values=0.0)
selected_scores = tf.gather(scores_inp, selected_inds)
padded_scores = tf.pad(selected_scores,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
mode="CONSTANT", constant_values=-1.0)
selected_classes = tf.gather(class_inds, selected_inds)
padded_classes = tf.pad(selected_classes,
paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
mode="CONSTANT", constant_values=-1.0)
valid_detections = tf.shape(selected_inds)[0]
return padded_boxes, padded_scores, padded_classes, valid_detections
def representative_dataset_gen(dataset, ncalib=100):
# Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays
for n, (path, img, im0s, vid_cap, string) in enumerate(dataset):
input = np.transpose(img, [1, 2, 0])
input = np.expand_dims(input, axis=0).astype(np.float32)
input /= 255
yield [input]
if n >= ncalib:
break
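# A hedged usage sketch (hypothetical helper, not in the original file): wiring the
# generator above into TFLite int8 calibration via converter.representative_dataset.
def export_tflite_int8(keras_model, dataset, ncalib=100):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)
    return converter.convert()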
def run(weights=ROOT / 'yolov3.pt', # weights path
imgsz=(640, 640), # inference size h,w
batch_size=1, # batch size
dynamic=False, # dynamic batch size
):
# PyTorch model
im = torch.zeros((batch_size, 3, *imgsz)) # BCHW image
model = attempt_load(weights, map_location=torch.device('cpu'), inplace=True, fuse=False)
y = model(im) # inference
model.info()
# TensorFlow model
im = tf.zeros((batch_size, *imgsz, 3)) # BHWC image
tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
y = tf_model.predict(im) # inference
# Keras model
im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
keras_model = keras.Model(inputs=im, outputs=tf_model.predict(im))
keras_model.summary()
LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.')
@keras_export("keras.layers.ZeroPadding2D")
class ZeroPadding2D(Layer):
"""Zero-padding layer for 2D input (e.g. picture).
This layer can add rows and columns of zeros
at the top, bottom, left and right side of an image tensor.
Examples:
>>> input_shape = (1, 1, 2, 2)
>>> x = np.arange(np.prod(input_shape)).reshape(input_shape)
>>> print(x)
[[[[0 1]
[2 3]]]]
>>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x)
>>> print(y)
tf.Tensor(
[[[[0 0]
[0 0]
[0 0]
[0 0]]
[[0 0]
[0 1]
[2 3]
[0 0]]
[[0 0]
[0 0]
[0 0]
[0 0]]]], shape=(1, 3, 4, 2), dtype=int64)
Args:
padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.
- If int: the same symmetric padding
is applied to height and width.
- If tuple of 2 ints:
interpreted as two different
symmetric padding values for height and width:
`(symmetric_height_pad, symmetric_width_pad)`.
- If tuple of 2 tuples of 2 ints:
interpreted as
`((top_pad, bottom_pad), (left_pad, right_pad))`
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch_size, height, width, channels)` while `channels_first`
corresponds to inputs with shape
`(batch_size, channels, height, width)`.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be "channels_last".
Input shape:
4D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, rows, cols, channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, rows, cols)`
Output shape:
4D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, padded_rows, padded_cols, channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, padded_rows, padded_cols)`
"""
def __init__(self, padding=(1, 1), data_format=None, **kwargs):
super().__init__(**kwargs)
self.data_format = conv_utils.normalize_data_format(data_format)
if isinstance(padding, int):
self.padding = ((padding, padding), (padding, padding))
elif hasattr(padding, "__len__"):
if len(padding) == 4:
padding = ((padding[0], padding[1]), (padding[2], padding[3]))
if len(padding) != 2:
raise ValueError(
f"`padding` should have two elements. Received: {padding}."
)
height_padding = conv_utils.normalize_tuple(
padding[0], 2, "1st entry of padding", allow_zero=True
)
width_padding = conv_utils.normalize_tuple(
padding[1], 2, "2nd entry of padding", allow_zero=True
)
self.padding = (height_padding, width_padding)
else:
raise ValueError(
"`padding` should be either an int, "
"a tuple of 2 ints "
"(symmetric_height_pad, symmetric_width_pad), "
"or a tuple of 2 tuples of 2 ints "
"((top_pad, bottom_pad), (left_pad, right_pad)). "
f"Received: {padding}."
)
self.input_spec = InputSpec(ndim=4)
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if self.data_format == "channels_first":
if input_shape[2] is not None:
rows = input_shape[2] + self.padding[0][0] + self.padding[0][1]
else:
rows = None
if input_shape[3] is not None:
cols = input_shape[3] + self.padding[1][0] + self.padding[1][1]
else:
cols = None
return tf.TensorShape([input_shape[0], input_shape[1], rows, cols])
elif self.data_format == "channels_last":
if input_shape[1] is not None:
rows = input_shape[1] + self.padding[0][0] + self.padding[0][1]
else:
rows = None
if input_shape[2] is not None:
cols = input_shape[2] + self.padding[1][0] + self.padding[1][1]
else:
cols = None
return tf.TensorShape([input_shape[0], rows, cols, input_shape[3]])
def call(self, inputs):
return backend.spatial_2d_padding(
inputs, padding=self.padding, data_format=self.data_format
)
def get_config(self):
config = {"padding": self.padding, "data_format": self.data_format}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='weights path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--batch-size', type=int, default=1, help='batch size')
parser.add_argument('--dynamic', action='store_true', help='dynamic batch size')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(FILE.stem, opt)
return opt
def main(opt):
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)

336
yolov3/models/yolo.py Normal file
View File

@@ -0,0 +1,336 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
YOLO-specific modules
Usage:
$ python path/to/models/yolo.py --cfg yolov3.yaml
"""
import argparse
import sys
from copy import deepcopy
from pathlib import Path
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
# ROOT = ROOT.relative_to(Path.cwd()) # relative
from models.common import *
from models.experimental import *
from utils.autoanchor import check_anchor_order
from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
from utils.plots import feature_visualization
from utils.torch_utils import (copy_attr, fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device,
time_sync)
try:
import thop # for FLOPs computation
except ImportError:
thop = None
class Detect(nn.Module):
stride = None # strides computed during build
onnx_dynamic = False # ONNX export parameter
def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
super().__init__()
self.nc = nc # number of classes
self.no = nc + 5 # number of outputs per anchor
self.nl = len(anchors) # number of detection layers
self.na = len(anchors[0]) // 2 # number of anchors
self.grid = [torch.zeros(1)] * self.nl # init grid
self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid
self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
self.inplace = inplace # use in-place ops (e.g. slice assignment)
def forward(self, x):
z = [] # inference output
for i in range(self.nl):
x[i] = self.m[i](x[i]) # conv
bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
if not self.training: # inference
if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
y = x[i].sigmoid()
if self.inplace:
y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy
y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
else: # for YOLOv3 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy
wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
y = torch.cat((xy, wh, y[..., 4:]), -1)
z.append(y.view(bs, -1, self.no))
return x if self.training else (torch.cat(z, 1), x)
def _make_grid(self, nx=20, ny=20, i=0):
d = self.anchors[i].device
if check_version(torch.__version__, '1.10.0'): # torch>=1.10.0 meshgrid requires the indexing='ij' argument
yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)], indexing='ij')
else:
yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)])
grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()
anchor_grid = (self.anchors[i].clone() * self.stride[i]) \
.view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()
return grid, anchor_grid
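# A numeric sketch (illustrative, not part of the original module) of the decode in
# Detect.forward() above for one cell; anchor_grid is already in pixels because
# _make_grid() multiplies the anchors by the stride.
def _decode_example():
    t = torch.tensor([0.2, -0.1, 0.3, 0.4])    # raw tx, ty, tw, th outputs
    grid_xy = torch.tensor([4.0, 7.0])         # cell offset at a P3/8 location
    anchor_wh = torch.tensor([10.0, 13.0])     # anchor size in pixels
    stride = 8.0                               # P3/8
    y = t.sigmoid()
    xy = (y[:2] * 2 - 0.5 + grid_xy) * stride  # box center in pixels
    wh = (y[2:] * 2) ** 2 * anchor_wh          # box size, bounded by 4x the anchor
    return xy, wh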
class Model(nn.Module):
def __init__(self, cfg='yolov3.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
super().__init__()
if isinstance(cfg, dict):
self.yaml = cfg # model dict
else: # is *.yaml
import yaml # for torch hub
self.yaml_file = Path(cfg).name
with open(cfg, encoding='ascii', errors='ignore') as f:
self.yaml = yaml.safe_load(f) # model dict
# Define model
ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
if nc and nc != self.yaml['nc']:
LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
self.yaml['nc'] = nc # override yaml value
if anchors:
LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
self.yaml['anchors'] = round(anchors) # override yaml value
self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
self.names = [str(i) for i in range(self.yaml['nc'])] # default names
self.inplace = self.yaml.get('inplace', True)
# Build strides, anchors
m = self.model[-1] # Detect()
if isinstance(m, Detect):
s = 256 # 2x min stride
m.inplace = self.inplace
m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
m.anchors /= m.stride.view(-1, 1, 1)
check_anchor_order(m)
self.stride = m.stride
self._initialize_biases() # only run once
# Init weights, biases
initialize_weights(self)
self.info()
LOGGER.info('')
def forward(self, x, augment=False, profile=False, visualize=False):
if augment:
return self._forward_augment(x) # augmented inference, None
return self._forward_once(x, profile, visualize) # single-scale inference, train
def _forward_augment(self, x):
img_size = x.shape[-2:] # height, width
s = [1, 0.83, 0.67] # scales
f = [None, 3, None] # flips (2-ud, 3-lr)
y = [] # outputs
for si, fi in zip(s, f):
xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
yi = self._forward_once(xi)[0] # forward
# cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
yi = self._descale_pred(yi, fi, si, img_size)
y.append(yi)
y = self._clip_augmented(y) # clip augmented tails
return torch.cat(y, 1), None # augmented inference, train
def _forward_once(self, x, profile=False, visualize=False):
y, dt = [], [] # outputs
for m in self.model:
if m.f != -1: # if not from previous layer
x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
if profile:
self._profile_one_layer(m, x, dt)
x = m(x) # run
y.append(x if m.i in self.save else None) # save output
if visualize:
feature_visualization(x, m.type, m.i, save_dir=visualize)
return x
def _descale_pred(self, p, flips, scale, img_size):
# de-scale predictions following augmented inference (inverse operation)
if self.inplace:
p[..., :4] /= scale # de-scale
if flips == 2:
p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
elif flips == 3:
p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
else:
x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
if flips == 2:
y = img_size[0] - y # de-flip ud
elif flips == 3:
x = img_size[1] - x # de-flip lr
p = torch.cat((x, y, wh, p[..., 4:]), -1)
return p
def _clip_augmented(self, y):
# Clip augmented inference tails
nl = self.model[-1].nl # number of detection layers (P3-P5)
g = sum(4 ** x for x in range(nl)) # grid points
e = 1 # exclude layer count
i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices
y[0] = y[0][:, :-i] # large
i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices
y[-1] = y[-1][:, i:] # small
return y
def _profile_one_layer(self, m, x, dt):
c = isinstance(m, Detect) # is final layer, copy input as inplace fix
o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
t = time_sync()
for _ in range(10):
m(x.copy() if c else x)
dt.append((time_sync() - t) * 100)
if m == self.model[0]:
LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}")
LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
if c:
LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
# https://arxiv.org/abs/1708.02002 section 3.3
# cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
m = self.model[-1] # Detect() module
for mi, s in zip(m.m, m.stride): # from
b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls
mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
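# Worked example of the priors above, for a 640-pixel image: at stride s=8 the level
# has (640 / 8) ** 2 = 6400 cells, so math.log(8 / 6400) ≈ -6.69 starts objectness
# near 8 / 6400 ≈ 0.00125; with nc=80, math.log(0.6 / 79.0) ≈ -4.88 starts each
# class probability near 0.6 / 79 ≈ 0.0076.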
def _print_biases(self):
m = self.model[-1] # Detect() module
for mi in m.m: # from
b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
LOGGER.info(
('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
# def _print_weights(self):
# for m in self.model.modules():
# if type(m) is Bottleneck:
# LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
LOGGER.info('Fusing layers... ')
for m in self.model.modules():
if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
delattr(m, 'bn') # remove batchnorm
m.forward = m.forward_fuse # update forward
self.info()
return self
def autoshape(self): # add AutoShape module
LOGGER.info('Adding AutoShape... ')
m = AutoShape(self) # wrap model
copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
return m
def info(self, verbose=False, img_size=640): # print model information
model_info(self, verbose, img_size)
def _apply(self, fn):
# Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
self = super()._apply(fn)
m = self.model[-1] # Detect()
if isinstance(m, Detect):
m.stride = fn(m.stride)
m.grid = list(map(fn, m.grid))
if isinstance(m.anchor_grid, list):
m.anchor_grid = list(map(fn, m.anchor_grid))
return self
def parse_model(d, ch): # model_dict, input_channels(3)
LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
m = eval(m) if isinstance(m, str) else m # eval strings
for j, a in enumerate(args):
try:
args[j] = eval(a) if isinstance(a, str) else a # eval strings
except NameError:
pass
n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain
if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]:
c1, c2 = ch[f], args[0]
if c2 != no: # if not output
c2 = make_divisible(c2 * gw, 8)
args = [c1, c2, *args[1:]]
if m in [BottleneckCSP, C3, C3TR, C3Ghost]:
args.insert(2, n) # number of repeats
n = 1
elif m is nn.BatchNorm2d:
args = [ch[f]]
elif m is Concat:
c2 = sum(ch[x] for x in f)
elif m is Detect:
args.append([ch[x] for x in f])
if isinstance(args[1], int): # number of anchors
args[1] = [list(range(args[1] * 2))] * len(f)
elif m is Contract:
c2 = ch[f] * args[0] ** 2
elif m is Expand:
c2 = ch[f] // args[0] ** 2
else:
c2 = ch[f]
m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
t = str(m)[8:-2].replace('__main__.', '') # module type
np = sum(x.numel() for x in m_.parameters()) # number params
m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print
save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
layers.append(m_)
if i == 0:
ch = []
ch.append(c2)
return nn.Sequential(*layers), sorted(save)
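# To make the bookkeeping concrete, a hand-traced sketch of how the first
# backbone row of yolov3.yaml, [-1, 1, Conv, [32, 3, 1]], is materialized:
#   f, n, m, args = -1, 1, Conv, [32, 3, 1]
#   ch = [3] # RGB input
#   c1, c2 = ch[f], args[0] # 3 channels in, 32 out
#   m_ = m(c1, c2, *args[1:]) # Conv(3, 32, 3, 1)
#   ch = [c2] # i == 0 resets ch, so ch == [32]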
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--cfg', type=str, default='yolov3.yaml', help='model.yaml')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--profile', action='store_true', help='profile model speed')
parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')
opt = parser.parse_args()
opt.cfg = check_yaml(opt.cfg) # check YAML
print_args(vars(opt))
device = select_device(opt.device)
# Create model
model = Model(opt.cfg).to(device)
model.train()
# Profile
if opt.profile:
img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
y = model(img, profile=True)
# Test all models
if opt.test:
for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):
try:
_ = Model(cfg)
except Exception as e:
print(f'Error in {cfg}: {e}')
# Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898)
# from torch.utils.tensorboard import SummaryWriter
# tb_writer = SummaryWriter('.')
# LOGGER.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/")
# tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph
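# Typical invocations of this script, mirroring the argparse flags above:
#   $ python models/yolo.py --cfg yolov3.yaml --profile # build and profile one config
#   $ python models/yolo.py --test # smoke-test every models/yolo*.yaml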


@@ -0,0 +1,51 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# darknet53 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [32, 3, 1]], # 0
[-1, 1, Conv, [64, 3, 2]], # 1-P1/2
[-1, 1, Bottleneck, [64]],
[-1, 1, Conv, [128, 3, 2]], # 3-P2/4
[-1, 2, Bottleneck, [128]],
[-1, 1, Conv, [256, 3, 2]], # 5-P3/8
[-1, 8, Bottleneck, [256]],
[-1, 1, Conv, [512, 3, 2]], # 7-P4/16
[-1, 8, Bottleneck, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
[-1, 4, Bottleneck, [1024]], # 10
]
# YOLOv3-SPP head
head:
[[-1, 1, Bottleneck, [1024, False]],
[-1, 1, SPP, [512, [5, 9, 13]]],
[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P3
[-1, 1, Bottleneck, [256, False]],
[-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
[[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]


@@ -0,0 +1,41 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,14, 23,27, 37,58] # P4/16
- [81,82, 135,169, 344,319] # P5/32
# YOLOv3-tiny backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [16, 3, 1]], # 0
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 1-P1/2
[-1, 1, Conv, [32, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 3-P2/4
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 5-P3/8
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 7-P4/16
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 9-P5/32
[-1, 1, Conv, [512, 3, 1]],
[-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]], # 11
[-1, 1, nn.MaxPool2d, [2, 1, 0]], # 12
]
# YOLOv3-tiny head
head:
[[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [256, 3, 1]], # 19 (P4/16-medium)
[[19, 15], 1, Detect, [nc, anchors]], # Detect(P4, P5)
]
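# The ZeroPad2d + stride-1 MaxPool pair at layers 11-12 keeps the tiny head at
# P5/32 without a further downsample; a quick shape check (sketch):
#   import torch, torch.nn as nn
#   x = torch.zeros(1, 512, 20, 20) # P5/32 map for a 640 input
#   y = nn.MaxPool2d(2, 1, 0)(nn.ZeroPad2d([0, 1, 0, 1])(x))
#   y.shape # torch.Size([1, 512, 20, 20]) - resolution preserved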

51
yolov3/models/yolov3.yaml Normal file

@@ -0,0 +1,51 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# darknet53 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [32, 3, 1]], # 0
[-1, 1, Conv, [64, 3, 2]], # 1-P1/2
[-1, 1, Bottleneck, [64]],
[-1, 1, Conv, [128, 3, 2]], # 3-P2/4
[-1, 2, Bottleneck, [128]],
[-1, 1, Conv, [256, 3, 2]], # 5-P3/8
[-1, 8, Bottleneck, [256]],
[-1, 1, Conv, [512, 3, 2]], # 7-P4/16
[-1, 8, Bottleneck, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
[-1, 4, Bottleneck, [1024]], # 10
]
# YOLOv3 head
head:
[[-1, 1, Bottleneck, [1024, False]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P3
[-1, 1, Bottleneck, [256, False]],
[-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
[[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]


@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]


@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
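# With depth_multiple 0.67 and width_multiple 0.75 above, parse_model shrinks
# this config relative to the full-size variant; a sketch of both gains
# (make_divisible as in utils.general):
#   import math
#   gd, gw = 0.67, 0.75
#   n = max(round(9 * gd), 1) # the 9-repeat C3 becomes 6 repeats
#   c2 = math.ceil(512 * gw / 8) * 8 # 512 channels become 384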


@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]


@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]


@@ -0,0 +1,48 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]

47
yolov3/requirements.txt Executable file

@@ -0,0 +1,47 @@
# YOLOv3 requirements
# Usage: pip install -r requirements.txt
# Base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.1
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
torch>=1.7.0 # see https://pytorch.org/get-started/locally/ (recommended)
torchvision>=0.8.1
tqdm>=4.64.0
# protobuf<=3.20.1 # https://github.com/ultralytics/yolov5/issues/8012
# Logging -------------------------------------
tensorboard>=2.4.1
# clearml
# comet
# Plotting ------------------------------------
pandas>=1.1.4
seaborn>=0.11.0
# Export --------------------------------------
# coremltools>=6.0 # CoreML export
# onnx>=1.9.0 # ONNX export
# onnx-simplifier>=0.4.1 # ONNX simplifier
# nvidia-pyindex # TensorRT export
# nvidia-tensorrt # TensorRT export
# scikit-learn<=1.1.2 # CoreML quantization
# tensorflow>=2.4.1 # TF exports (-cpu, -aarch64, -macos)
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev # OpenVINO export
# Deploy --------------------------------------
# tritonclient[all]~=2.24.0
# Extras --------------------------------------
ipython # interactive notebook
psutil # system utilization
thop>=0.1.1 # FLOPs computation
# mss # screenshots
# albumentations>=1.0.3
# pycocotools>=2.0 # COCO mAP
# roboflow
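# Note: to enable an optional export path, uncomment the matching line above
# (e.g. onnx>=1.9.0) and re-run: pip install -r requirements.txt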

Binary file not shown.

284
yolov3/segment/predict.py Normal file

@@ -0,0 +1,284 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Run segmentation inference on images, videos, directories, streams, etc.
Usage - sources:
$ python segment/predict.py --weights yolov5s-seg.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
screen # screenshot
path/ # directory
list.txt # list of images
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Usage - formats:
$ python segment/predict.py --weights yolov5s-seg.pt # PyTorch
yolov5s-seg.torchscript # TorchScript
yolov5s-seg.onnx # ONNX Runtime or OpenCV DNN with --dnn
yolov5s-seg_openvino_model # OpenVINO
yolov5s-seg.engine # TensorRT
yolov5s-seg.mlmodel # CoreML (macOS-only)
yolov5s-seg_saved_model # TensorFlow SavedModel
yolov5s-seg.pb # TensorFlow GraphDef
yolov5s-seg.tflite # TensorFlow Lite
yolov5s-seg_edgetpu.tflite # TensorFlow Edge TPU
yolov5s-seg_paddle_model # PaddlePaddle
"""
import argparse
import os
import platform
import sys
from pathlib import Path
import torch
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from models.common import DetectMultiBackend
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
increment_path, non_max_suppression, print_args, scale_boxes, scale_segments,
strip_optimizer)
from utils.plots import Annotator, colors, save_one_box
from utils.segment.general import masks2segments, process_mask, process_mask_native
from utils.torch_utils import select_device, smart_inference_mode
@smart_inference_mode()
def run(
weights=ROOT / 'yolov5s-seg.pt', # model.pt path(s)
source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam)
data=ROOT / 'data/coco128.yaml', # dataset.yaml path
imgsz=(640, 640), # inference size (height, width)
conf_thres=0.25, # confidence threshold
iou_thres=0.45, # NMS IOU threshold
max_det=1000, # maximum detections per image
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
view_img=False, # show results
save_txt=False, # save results to *.txt
save_conf=False, # save confidences in --save-txt labels
save_crop=False, # save cropped prediction boxes
nosave=False, # do not save images/videos
classes=None, # filter by class: --class 0, or --class 0 2 3
agnostic_nms=False, # class-agnostic NMS
augment=False, # augmented inference
visualize=False, # visualize features
update=False, # update all models
project=ROOT / 'runs/predict-seg', # save results to project/name
name='exp', # save results to project/name
exist_ok=False, # existing project/name ok, do not increment
line_thickness=3, # bounding box thickness (pixels)
hide_labels=False, # hide labels
hide_conf=False, # hide confidences
half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
vid_stride=1, # video frame-rate stride
retina_masks=False,
):
source = str(source)
save_img = not nosave and not source.endswith('.txt') # save inference images
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file)
screenshot = source.lower().startswith('screen')
if is_url and is_file:
source = check_file(source) # download
# Directories
save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
(save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
# Load model
device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
stride, names, pt = model.stride, model.names, model.pt
imgsz = check_img_size(imgsz, s=stride) # check image size
# Dataloader
bs = 1 # batch_size
if webcam:
view_img = check_imshow(warn=True)
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
bs = len(dataset)
elif screenshot:
dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
else:
dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
vid_path, vid_writer = [None] * bs, [None] * bs
# Run inference
model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup
seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
for path, im, im0s, vid_cap, s in dataset:
with dt[0]:
im = torch.from_numpy(im).to(model.device)
im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
im /= 255 # 0 - 255 to 0.0 - 1.0
if len(im.shape) == 3:
im = im[None] # expand for batch dim
# Inference
with dt[1]:
visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
pred, proto = model(im, augment=augment, visualize=visualize)[:2]
# NMS
with dt[2]:
pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det, nm=32)
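# det rows after NMS are [x1, y1, x2, y2, conf, cls, 32 mask coefficients];
# nm=32 tells non_max_suppression to pass the coefficients (det[:, 6:]) through
# so process_mask can combine them with the prototype masks below.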
# Second-stage classifier (optional)
# pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
# Process predictions
for i, det in enumerate(pred): # per image
seen += 1
if webcam: # batch_size >= 1
p, im0, frame = path[i], im0s[i].copy(), dataset.count
s += f'{i}: '
else:
p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
p = Path(p) # to Path
save_path = str(save_dir / p.name) # im.jpg
txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt
s += '%gx%g ' % im.shape[2:] # print string
imc = im0.copy() if save_crop else im0 # for save_crop
annotator = Annotator(im0, line_width=line_thickness, example=str(names))
if len(det):
if retina_masks:
# scale bboxes first, then crop masks
det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # rescale boxes to im0 size
masks = process_mask_native(proto[i], det[:, 6:], det[:, :4], im0.shape[:2]) # HWC
else:
masks = process_mask(proto[i], det[:, 6:], det[:, :4], im.shape[2:], upsample=True) # HWC
det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # rescale boxes to im0 size
# Segments
if save_txt:
segments = [
scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True)
for x in reversed(masks2segments(masks))]
# Print results
for c in det[:, 5].unique():
n = (det[:, 5] == c).sum() # detections per class
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
# Mask plotting
annotator.masks(
masks,
colors=[colors(x, True) for x in det[:, 5]],
im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() /
255 if retina_masks else im[i])
# Write results
for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
if save_txt: # Write to file
seg = segments[j].reshape(-1) # (n,2) to (n*2)
line = (cls, *seg, conf) if save_conf else (cls, *seg) # label format
with open(f'{txt_path}.txt', 'a') as f:
f.write(('%g ' * len(line)).rstrip() % line + '\n')
if save_img or save_crop or view_img: # Add bbox to image
c = int(cls) # integer class
label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
annotator.box_label(xyxy, label, color=colors(c, True))
# annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)
if save_crop:
save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
# Stream results
im0 = annotator.result()
if view_img:
if platform.system() == 'Linux' and p not in windows:
windows.append(p)
cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
cv2.imshow(str(p), im0)
if cv2.waitKey(1) == ord('q'): # 1 millisecond
exit()
# Save results (image with detections)
if save_img:
if dataset.mode == 'image':
cv2.imwrite(save_path, im0)
else: # 'video' or 'stream'
if vid_path[i] != save_path: # new video
vid_path[i] = save_path
if isinstance(vid_writer[i], cv2.VideoWriter):
vid_writer[i].release() # release previous video writer
if vid_cap: # video
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
# Print results
t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
if save_txt or save_img:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
if update:
strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning)
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-seg.pt', help='model path(s)')
parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--view-img', action='store_true', help='show results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--visualize', action='store_true', help='visualize features')
parser.add_argument('--update', action='store_true', help='update all models')
parser.add_argument('--project', default=ROOT / 'runs/predict-seg', help='save results to project/name')
parser.add_argument('--name', default='exp', help='save results to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
parser.add_argument('--retina-masks', action='store_true', help='whether to plot masks in native resolution')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(vars(opt))
return opt
def main(opt):
check_requirements(exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == '__main__':
opt = parse_opt()
main(opt)
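# The same entry point can be driven programmatically via run(); a minimal
# sketch to be run from a separate script at the repo root (weights/source
# values are illustrative):
#   import segment.predict as predict
#   predict.run(weights='yolov5s-seg.pt', source='data/images', conf_thres=0.3)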

659
yolov3/segment/train.py Normal file

@@ -0,0 +1,659 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Train a YOLOv3 segmentation model on a segmentation dataset
Models and datasets download automatically from the latest release.
Usage - Single-GPU training:
$ python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 # from pretrained (recommended)
$ python segment/train.py --data coco128-seg.yaml --weights '' --cfg yolov5s-seg.yaml --img 640 # from scratch
Usage - Multi-GPU DDP training:
$ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
Models: https://github.com/ultralytics/yolov5/tree/master/models
Datasets: https://github.com/ultralytics/yolov5/tree/master/data
Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
"""
import argparse
import math
import os
import random
import subprocess
import sys
import time
from copy import deepcopy
from datetime import datetime
from pathlib import Path
import numpy as np
import torch
import torch.distributed as dist
import torch.nn as nn
import yaml
from torch.optim import lr_scheduler
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
import segment.val as validate # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import SegmentationModel
from utils.autoanchor import check_anchors
from utils.autobatch import check_train_batch_size
from utils.callbacks import Callbacks
from utils.downloads import attempt_download, is_url
from utils.general import (LOGGER, TQDM_BAR_FORMAT, check_amp, check_dataset, check_file, check_git_info,
check_git_status, check_img_size, check_requirements, check_suffix, check_yaml, colorstr,
get_latest_run, increment_path, init_seeds, intersect_dicts, labels_to_class_weights,
labels_to_image_weights, one_cycle, print_args, print_mutation, strip_optimizer, yaml_save)
from utils.loggers import GenericLogger
from utils.plots import plot_evolve, plot_labels
from utils.segment.dataloaders import create_dataloader
from utils.segment.loss import ComputeLoss
from utils.segment.metrics import KEYS, fitness
from utils.segment.plots import plot_images_and_masks, plot_results_with_masks
from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer,
smart_resume, torch_distributed_zero_first)
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
GIT_INFO = check_git_info()
def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictionary
save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, mask_ratio = \
Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze, opt.mask_ratio
# callbacks.run('on_pretrain_routine_start')
# Directories
w = save_dir / 'weights' # weights dir
(w.parent if evolve else w).mkdir(parents=True, exist_ok=True) # make dir
last, best = w / 'last.pt', w / 'best.pt'
# Hyperparameters
if isinstance(hyp, str):
with open(hyp, errors='ignore') as f:
hyp = yaml.safe_load(f) # load hyps dict
LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
opt.hyp = hyp.copy() # for saving hyps to checkpoints
# Save run settings
if not evolve:
yaml_save(save_dir / 'hyp.yaml', hyp)
yaml_save(save_dir / 'opt.yaml', vars(opt))
# Loggers
data_dict = None
if RANK in {-1, 0}:
logger = GenericLogger(opt=opt, console_logger=LOGGER)
# Config
plots = not evolve and not opt.noplots # create plots
overlap = not opt.no_overlap
cuda = device.type != 'cpu'
init_seeds(opt.seed + 1 + RANK, deterministic=True)
with torch_distributed_zero_first(LOCAL_RANK):
data_dict = data_dict or check_dataset(data) # check if None
train_path, val_path = data_dict['train'], data_dict['val']
nc = 1 if single_cls else int(data_dict['nc']) # number of classes
names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset
# Model
check_suffix(weights, '.pt') # check weights
pretrained = weights.endswith('.pt')
if pretrained:
with torch_distributed_zero_first(LOCAL_RANK):
weights = attempt_download(weights) # download if not found locally
ckpt = torch.load(weights, map_location='cpu') # load checkpoint to CPU to avoid CUDA memory leak
model = SegmentationModel(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)
exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys
csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect
model.load_state_dict(csd, strict=False) # load
LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report
else:
model = SegmentationModel(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
amp = check_amp(model) # check AMP
# Freeze
freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze
for k, v in model.named_parameters():
v.requires_grad = True # train all layers
# v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results)
if any(x in k for x in freeze):
LOGGER.info(f'freezing {k}')
v.requires_grad = False
# Image size
gs = max(int(model.stride.max()), 32) # grid size (max stride)
imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple
# Batch size
if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size
batch_size = check_train_batch_size(model, imgsz, amp)
logger.update_params({'batch_size': batch_size})
# loggers.on_params_update({"batch_size": batch_size})
# Optimizer
nbs = 64 # nominal batch size
accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing
hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay
optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay'])
# Scheduler
if opt.cos_lr:
lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
else:
lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs)
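# A quick sanity check of the linear schedule (assuming hyp['lrf'] = 0.01 and
# epochs = 100): lf(0) = 1.0, lf(50) = 0.505, lf(99) ~ 0.020, so lr decays
# smoothly from lr0 to about lr0 * lrf over training.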
# EMA
ema = ModelEMA(model) if RANK in {-1, 0} else None
# Resume
best_fitness, start_epoch = 0.0, 0
if pretrained:
if resume:
best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume)
del ckpt, csd
# DP mode
if cuda and RANK == -1 and torch.cuda.device_count() > 1:
LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n'
'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
model = torch.nn.DataParallel(model)
# SyncBatchNorm
if opt.sync_bn and cuda and RANK != -1:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
LOGGER.info('Using SyncBatchNorm()')
# Trainloader
train_loader, dataset = create_dataloader(
train_path,
imgsz,
batch_size // WORLD_SIZE,
gs,
single_cls,
hyp=hyp,
augment=True,
cache=None if opt.cache == 'val' else opt.cache,
rect=opt.rect,
rank=LOCAL_RANK,
workers=workers,
image_weights=opt.image_weights,
quad=opt.quad,
prefix=colorstr('train: '),
shuffle=True,
mask_downsample_ratio=mask_ratio,
overlap_mask=overlap,
)
labels = np.concatenate(dataset.labels, 0)
mlc = int(labels[:, 0].max()) # max label class
assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'
# Process 0
if RANK in {-1, 0}:
val_loader = create_dataloader(val_path,
imgsz,
batch_size // WORLD_SIZE * 2,
gs,
single_cls,
hyp=hyp,
cache=None if noval else opt.cache,
rect=True,
rank=-1,
workers=workers * 2,
pad=0.5,
mask_downsample_ratio=mask_ratio,
overlap_mask=overlap,
prefix=colorstr('val: '))[0]
if not resume:
if not opt.noautoanchor:
check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor
model.half().float() # pre-reduce anchor precision
if plots:
plot_labels(labels, names, save_dir)
# callbacks.run('on_pretrain_routine_end', labels, names)
# DDP mode
if cuda and RANK != -1:
model = smart_DDP(model)
# Model attributes
nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps)
hyp['box'] *= 3 / nl # scale to layers
hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers
hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers
hyp['label_smoothing'] = opt.label_smoothing
model.nc = nc # attach number of classes to model
model.hyp = hyp # attach hyperparameters to model
model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
model.names = names
# Start training
t0 = time.time()
nb = len(train_loader) # number of batches
nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations)
# nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
last_opt_step = -1
maps = np.zeros(nc) # mAP per class
results = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) # box(P, R, mAP@.5, mAP@.5-.95), mask(P, R, mAP@.5, mAP@.5-.95), val_loss(box, seg, obj, cls)
scheduler.last_epoch = start_epoch - 1 # do not move
scaler = torch.cuda.amp.GradScaler(enabled=amp)
stopper, stop = EarlyStopping(patience=opt.patience), False
compute_loss = ComputeLoss(model, overlap=overlap) # init loss class
# callbacks.run('on_train_start')
LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting training for {epochs} epochs...')
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
# callbacks.run('on_train_epoch_start')
model.train()
# Update image weights (optional, single-GPU only)
if opt.image_weights:
cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
# Update mosaic border (optional)
# b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
# dataset.mosaic_border = [b - imgsz, -b] # height, width borders
mloss = torch.zeros(4, device=device) # mean losses
if RANK != -1:
train_loader.sampler.set_epoch(epoch)
pbar = enumerate(train_loader)
LOGGER.info(('\n' + '%11s' * 8) %
('Epoch', 'GPU_mem', 'box_loss', 'seg_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size'))
if RANK in {-1, 0}:
pbar = tqdm(pbar, total=nb, bar_format=TQDM_BAR_FORMAT) # progress bar
optimizer.zero_grad()
for i, (imgs, targets, paths, _, masks) in pbar: # batch ------------------------------------------------------
# callbacks.run('on_train_batch_start')
ni = i + nb * epoch # number integrated batches (since train start)
imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0
# Warmup
if ni <= nw:
xi = [0, nw] # x interp
# compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
for j, x in enumerate(optimizer.param_groups):
# bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)])
if 'momentum' in x:
x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
# Multi-scale
if opt.multi_scale:
sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs # size
sf = sz / max(imgs.shape[2:]) # scale factor
if sf != 1:
ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
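# Worked example (assuming imgsz = 640, gs = 32): sz is drawn from [320, 992)
# and snapped down to a multiple of 32, so each batch trains at a random size
# between 320 and 960 pixels.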
# Forward
with torch.cuda.amp.autocast(amp):
pred = model(imgs) # forward
loss, loss_items = compute_loss(pred, targets.to(device), masks=masks.to(device).float())
if RANK != -1:
loss *= WORLD_SIZE # gradient averaged between devices in DDP mode
if opt.quad:
loss *= 4.
# Backward
scaler.scale(loss).backward()
# Optimize - https://pytorch.org/docs/master/notes/amp_examples.html
if ni - last_opt_step >= accumulate:
scaler.unscale_(optimizer) # unscale gradients
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients
scaler.step(optimizer) # optimizer.step
scaler.update()
optimizer.zero_grad()
if ema:
ema.update(model)
last_opt_step = ni
# Log
if RANK in {-1, 0}:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB)
pbar.set_description(('%11s' * 2 + '%11.4g' * 6) %
(f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
# callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths)
# if callbacks.stop_training:
# return
# Mosaic plots
if plots:
if ni < 3:
plot_images_and_masks(imgs, targets, masks, paths, save_dir / f'train_batch{ni}.jpg')
if ni == 10:
files = sorted(save_dir.glob('train*.jpg'))
logger.log_images(files, 'Mosaics', epoch)
# end batch ------------------------------------------------------------------------------------------------
# Scheduler
lr = [x['lr'] for x in optimizer.param_groups] # for loggers
scheduler.step()
if RANK in {-1, 0}:
# mAP
# callbacks.run('on_train_epoch_end', epoch=epoch)
ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
if not noval or final_epoch: # Calculate mAP
results, maps, _ = validate.run(data_dict,
batch_size=batch_size // WORLD_SIZE * 2,
imgsz=imgsz,
half=amp,
model=ema.ema,
single_cls=single_cls,
dataloader=val_loader,
save_dir=save_dir,
plots=False,
callbacks=callbacks,
compute_loss=compute_loss,
mask_downsample_ratio=mask_ratio,
overlap=overlap)
# Update best mAP
fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
stop = stopper(epoch=epoch, fitness=fi) # early stop check
if fi > best_fitness:
best_fitness = fi
log_vals = list(mloss) + list(results) + lr
# callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)
# Log val metrics and media
metrics_dict = dict(zip(KEYS, log_vals))
logger.log_metrics(metrics_dict, epoch)
# Save model
if (not nosave) or (final_epoch and not evolve): # if save
ckpt = {
'epoch': epoch,
'best_fitness': best_fitness,
'model': deepcopy(de_parallel(model)).half(),
'ema': deepcopy(ema.ema).half(),
'updates': ema.updates,
'optimizer': optimizer.state_dict(),
'opt': vars(opt),
'git': GIT_INFO, # {remote, branch, commit} if a git repo
'date': datetime.now().isoformat()}
# Save last, best and delete
torch.save(ckpt, last)
if best_fitness == fi:
torch.save(ckpt, best)
if opt.save_period > 0 and epoch % opt.save_period == 0:
torch.save(ckpt, w / f'epoch{epoch}.pt')
logger.log_model(w / f'epoch{epoch}.pt')
del ckpt
# callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)
# EarlyStopping
if RANK != -1: # if DDP training
broadcast_list = [stop if RANK == 0 else None]
dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks
if RANK != 0:
stop = broadcast_list[0]
if stop:
break # must break all DDP ranks
# end epoch ----------------------------------------------------------------------------------------------------
# end training -----------------------------------------------------------------------------------------------------
if RANK in {-1, 0}:
LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')
for f in last, best:
if f.exists():
strip_optimizer(f) # strip optimizers
if f is best:
LOGGER.info(f'\nValidating {f}...')
results, _, _ = validate.run(
data_dict,
batch_size=batch_size // WORLD_SIZE * 2,
imgsz=imgsz,
model=attempt_load(f, device).half(),
iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65
single_cls=single_cls,
dataloader=val_loader,
save_dir=save_dir,
save_json=is_coco,
verbose=True,
plots=plots,
callbacks=callbacks,
compute_loss=compute_loss,
mask_downsample_ratio=mask_ratio,
overlap=overlap) # val best model with plots
if is_coco:
# callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)
metrics_dict = dict(zip(KEYS, list(mloss) + list(results) + lr))
logger.log_metrics(metrics_dict, epoch)
# callbacks.run('on_train_end', last, best, epoch, results)
# on train end callback using genericLogger
logger.log_metrics(dict(zip(KEYS[4:16], results)), epochs)
if not opt.evolve:
logger.log_model(best, epoch)
if plots:
plot_results_with_masks(file=save_dir / 'results.csv') # save results.png
files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))]
files = [(save_dir / f) for f in files if (save_dir / f).exists()] # filter
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
logger.log_images(files, 'Results', epoch + 1)
logger.log_images(sorted(save_dir.glob('val*.jpg')), 'Validation', epoch + 1)
torch.cuda.empty_cache()
return results
def parse_opt(known=False):
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s-seg.pt', help='initial weights path')
parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128-seg.yaml', help='dataset.yaml path')
parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
parser.add_argument('--epochs', type=int, default=100, help='total training epochs')
parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
parser.add_argument('--rect', action='store_true', help='rectangular training')
parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
parser.add_argument('--noval', action='store_true', help='only validate final epoch')
parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
parser.add_argument('--noplots', action='store_true', help='save no plot files')
parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk')
parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
parser.add_argument('--project', default=ROOT / 'runs/train-seg', help='save to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--quad', action='store_true', help='quad dataloader')
parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
parser.add_argument('--seed', type=int, default=0, help='Global training seed')
parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')
# Instance Segmentation Args
parser.add_argument('--mask-ratio', type=int, default=4, help='Downsample ground-truth masks by this ratio to save memory')
parser.add_argument('--no-overlap', action='store_true', help='disable mask overlap; overlapping masks train faster at slightly lower mAP')
return parser.parse_known_args()[0] if known else parser.parse_args()
def main(opt, callbacks=Callbacks()):
# Checks
if RANK in {-1, 0}:
print_args(vars(opt))
check_git_status()
check_requirements()
# Resume
if opt.resume and not opt.evolve: # resume from specified or most recent last.pt
last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run())
opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml
opt_data = opt.data # original dataset
if opt_yaml.is_file():
with open(opt_yaml, errors='ignore') as f:
d = yaml.safe_load(f)
else:
d = torch.load(last, map_location='cpu')['opt']
opt = argparse.Namespace(**d) # replace
opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate
if is_url(opt_data):
opt.data = check_file(opt_data) # avoid HUB resume auth timeout
else:
opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \
check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks
assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
if opt.evolve:
if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve
opt.project = str(ROOT / 'runs/evolve')
opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume
if opt.name == 'cfg':
opt.name = Path(opt.cfg).stem # use model.yaml as name
opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
# DDP mode
device = select_device(opt.device, batch_size=opt.batch_size)
if LOCAL_RANK != -1:
msg = 'is not compatible with YOLOv5 Multi-GPU DDP training'
assert not opt.image_weights, f'--image-weights {msg}'
assert not opt.evolve, f'--evolve {msg}'
assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size'
assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE'
assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
torch.cuda.set_device(LOCAL_RANK)
device = torch.device('cuda', LOCAL_RANK)
dist.init_process_group(backend='nccl' if dist.is_nccl_available() else 'gloo')
# Train
if not opt.evolve:
train(opt.hyp, opt, device, callbacks)
# Evolve hyperparameters (optional)
else:
# Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
meta = {
'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
'box': (1, 0.02, 0.2), # box loss gain
'cls': (1, 0.2, 4.0), # cls loss gain
'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
'iou_t': (0, 0.1, 0.7), # IoU training threshold
'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
'scale': (1, 0.0, 0.9), # image scale (+/- gain)
'shear': (1, 0.0, 10.0), # image shear (+/- deg)
'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
'mosaic': (1, 0.0, 1.0), # image mosaic (probability)
'mixup': (1, 0.0, 1.0), # image mixup (probability)
'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability)
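# Reading one entry: 'lr0': (1, 1e-5, 1e-1) mutates lr0 at full gain and clamps
# it to [1e-5, 0.1]; a leading gain of 0 (e.g. 'fl_gamma') leaves that
# hyperparameter unmutated.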
with open(opt.hyp, errors='ignore') as f:
hyp = yaml.safe_load(f) # load hyps dict
if 'anchors' not in hyp: # anchors commented in hyp.yaml
hyp['anchors'] = 3
if opt.noautoanchor:
del hyp['anchors'], meta['anchors']
opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch
# ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
if opt.bucket:
subprocess.run(
f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}'.split()) # download evolve.csv if exists
for _ in range(opt.evolve): # generations to evolve
if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate
# Select parent(s)
parent = 'single' # parent selection method: 'single' or 'weighted'
x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)
n = min(5, len(x)) # number of previous results to consider
x = x[np.argsort(-fitness(x))][:n] # top n mutations
w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0)
if parent == 'single' or len(x) == 1:
# x = x[random.randint(0, n - 1)] # random selection
x = x[random.choices(range(n), weights=w)[0]] # weighted selection
elif parent == 'weighted':
x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
# Mutate
mp, s = 0.8, 0.2 # mutation probability, sigma
npr = np.random
npr.seed(int(time.time()))
g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1
ng = len(meta)
v = np.ones(ng)
while all(v == 1): # mutate until a change occurs (prevent duplicates)
v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
hyp[k] = float(x[i + 7] * v[i]) # mutate
# Constrain to limits
for k, v in meta.items():
hyp[k] = max(hyp[k], v[1]) # lower limit
hyp[k] = min(hyp[k], v[2]) # upper limit
hyp[k] = round(hyp[k], 5) # significant digits
# Train mutation
results = train(hyp.copy(), opt, device, callbacks)
callbacks = Callbacks()
# Write mutation results
print_mutation(KEYS, results, hyp.copy(), save_dir, opt.bucket)
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n'
f"Results saved to {colorstr('bold', save_dir)}\n"
f'Usage example: $ python train.py --hyp {evolve_yaml}')
def run(**kwargs):
# Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt')
opt = parse_opt(True)
for k, v in kwargs.items():
setattr(opt, k, v)
main(opt)
return opt
if __name__ == '__main__':
opt = parse_opt()
main(opt)
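# The run() helper above also supports programmatic use; a minimal sketch to be
# run from a separate script at the repo root (dataset/weight names are
# illustrative):
#   import segment.train as train
#   opt = train.run(data='coco128-seg.yaml', weights='yolov5s-seg.pt', imgsz=320, epochs=3)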

594
yolov3/segment/tutorial.ipynb vendored Normal file

@@ -0,0 +1,594 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "t6MPjfT5NrKQ"
},
"source": [
"<div align=\"center\">\n",
"\n",
" <a href=\"https://ultralytics.com/yolov5\" target=\"_blank\">\n",
" <img width=\"1024\" src=\"https://raw.githubusercontent.com/ultralytics/assets/main/yolov5/v70/splash.png\"></a>\n",
"\n",
"\n",
"<br>\n",
" <a href=\"https://bit.ly/yolov5-paperspace-notebook\"><img src=\"https://assets.paperspace.io/img/gradient-badge.svg\" alt=\"Run on Gradient\"></a>\n",
" <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/segment/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n",
" <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n",
"<br>\n",
"\n",
"This <a href=\"https://github.com/ultralytics/yolov5\"></a> 🚀 notebook by <a href=\"https://ultralytics.com\">Ultralytics</a> presents simple train, validate and predict examples to help start your AI adventure.<br>See <a href=\"https://github.com/ultralytics/yolov5/issues/new/choose\">GitHub</a> for community support or <a href=\"https://ultralytics.com/contact\">contact us</a> for professional support.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7mGmQbAO5pQb"
},
"source": [
"# Setup\n",
"\n",
"Clone GitHub [repository](https://github.com/ultralytics/yolov5), install [dependencies](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) and check PyTorch and GPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "wbvMlHd_QwMG",
"outputId": "171b23f0-71b9-4cbf-b666-6fa2ecef70c8"
},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
" 🚀 v7.0-2-gc9d47ae Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Setup complete ✅ (2 CPUs, 12.7 GB RAM, 22.6/78.2 GB disk)\n"
]
}
],
"source": [
"!git clone https://github.com/ultralytics/yolov5 # clone\n",
"%cd yolov5\n",
"%pip install -qr requirements.txt # install\n",
"\n",
"import torch\n",
"import utils\n",
"display = utils.notebook_init() # checks"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4JnkELT0cIJg"
},
"source": [
"# 1. Predict\n",
"\n",
"`segment/predict.py` runs instance segmentation inference on a variety of sources, downloading models automatically from the [latest release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/predict`. Example inference sources are:\n",
"\n",
"```shell\n",
"python segment/predict.py --source 0 # webcam\n",
" img.jpg # image\n",
" vid.mp4 # video\n",
" screen # screenshot\n",
" path/ # directory\n",
" 'path/*.jpg' # glob\n",
" 'https://youtu.be/Zgi9g1ksQHc' # YouTube\n",
" 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "zR9ZbuQCH7FX",
"outputId": "3f67f1c7-f15e-4fa5-d251-967c3b77eaad"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[34m\u001b[1msegment/predict: \u001b[0mweights=['yolov5s-seg.pt'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/predict-seg, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1, retina_masks=False\n",
"YOLOv5 🚀 v7.0-2-gc9d47ae Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt to yolov5s-seg.pt...\n",
"100% 14.9M/14.9M [00:01<00:00, 12.0MB/s]\n",
"\n",
"Fusing layers... \n",
"YOLOv5s-seg summary: 224 layers, 7611485 parameters, 0 gradients, 26.4 GFLOPs\n",
"image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 18.2ms\n",
"image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 1 tie, 13.4ms\n",
"Speed: 0.5ms pre-process, 15.8ms inference, 18.5ms NMS per image at shape (1, 3, 640, 640)\n",
"Results saved to \u001b[1mruns/predict-seg/exp\u001b[0m\n"
]
}
],
"source": [
"!python segment/predict.py --weights yolov5s-seg.pt --img 640 --conf 0.25 --source data/images\n",
"#display.Image(filename='runs/predict-seg/exp/zidane.jpg', width=600)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hkAzDWJ7cWTr"
},
"source": [
"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"<img align=\"left\" src=\"https://user-images.githubusercontent.com/26833433/199030123-08c72f8d-6871-4116-8ed3-c373642cf28e.jpg\" width=\"600\">"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0eq1SMWl6Sfn"
},
"source": [
"# 2. Validate\n",
"Validate a model's accuracy on the [COCO](https://cocodataset.org/#home) dataset's `val` or `test` splits. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "WQPtK1QYVaD_",
"outputId": "9d751d8c-bee8-4339-cf30-9854ca530449"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Downloading https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels-segments.zip ...\n",
"Downloading http://images.cocodataset.org/zips/val2017.zip ...\n",
"######################################################################## 100.0%\n",
"######################################################################## 100.0%\n"
]
}
],
"source": [
"# Download COCO val\n",
"!bash data/scripts/get_coco.sh --val --segments # download (780M - 5000 images)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "X58w8JLpMnjH",
"outputId": "a140d67a-02da-479e-9ddb-7d54bf9e407a"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[34m\u001b[1msegment/val: \u001b[0mdata=/content/yolov5/data/coco.yaml, weights=['yolov5s-seg.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val-seg, name=exp, exist_ok=False, half=True, dnn=False\n",
"YOLOv5 🚀 v7.0-2-gc9d47ae Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"Fusing layers... \n",
"YOLOv5s-seg summary: 224 layers, 7611485 parameters, 0 gradients, 26.4 GFLOPs\n",
"\u001b[34m\u001b[1mval: \u001b[0mScanning /content/datasets/coco/val2017... 4952 images, 48 backgrounds, 0 corrupt: 100% 5000/5000 [00:03<00:00, 1361.31it/s]\n",
"\u001b[34m\u001b[1mval: \u001b[0mNew cache created: /content/datasets/coco/val2017.cache\n",
" Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100% 157/157 [01:54<00:00, 1.37it/s]\n",
" all 5000 36335 0.673 0.517 0.566 0.373 0.672 0.49 0.532 0.319\n",
"Speed: 0.6ms pre-process, 4.4ms inference, 2.9ms NMS per image at shape (32, 3, 640, 640)\n",
"Results saved to \u001b[1mruns/val-seg/exp\u001b[0m\n"
]
}
],
"source": [
"# Validate YOLOv5s-seg on COCO val\n",
"!python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 --half"
]
},
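{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional example (not in the original notebook): add --verbose to the same\n",
"# command to print per-class Box and Mask metrics after the summary row.\n",
"# !python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 --half --verbose"
]
},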
{
"cell_type": "markdown",
"metadata": {
"id": "ZY2VXXXu74w5"
},
"source": [
"# 3. Train\n",
"\n",
"<p align=\"\"><a href=\"https://roboflow.com/?ref=ultralytics\"><img width=\"1000\" src=\"https://github.com/ultralytics/assets/raw/main/im/integrations-loop.png\"/></a></p>\n",
"Close the active learning loop by sampling images from your inference conditions with the `roboflow` pip package\n",
"<br><br>\n",
"\n",
"Train a YOLOv5s-seg model on the [COCO128](https://www.kaggle.com/ultralytics/coco128) dataset with `--data coco128-seg.yaml`, starting from pretrained `--weights yolov5s-seg.pt`, or from randomly initialized `--weights '' --cfg yolov5s-seg.yaml`.\n",
"\n",
"- **Pretrained [Models](https://github.com/ultralytics/yolov5/tree/master/models)** are downloaded\n",
"automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases)\n",
"- **[Datasets](https://github.com/ultralytics/yolov5/tree/master/data)** available for autodownload include: [COCO](https://github.com/ultralytics/yolov5/blob/master/data/coco.yaml), [COCO128](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), [VOC](https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml), [Argoverse](https://github.com/ultralytics/yolov5/blob/master/data/Argoverse.yaml), [VisDrone](https://github.com/ultralytics/yolov5/blob/master/data/VisDrone.yaml), [GlobalWheat](https://github.com/ultralytics/yolov5/blob/master/data/GlobalWheat2020.yaml), [xView](https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml), [Objects365](https://github.com/ultralytics/yolov5/blob/master/data/Objects365.yaml), [SKU-110K](https://github.com/ultralytics/yolov5/blob/master/data/SKU-110K.yaml).\n",
"- **Training Results** are saved to `runs/train-seg/` with incrementing run directories, i.e. `runs/train-seg/exp2`, `runs/train-seg/exp3` etc.\n",
"<br><br>\n",
"\n",
"A **Mosaic Dataloader** is used for training which combines 4 images into 1 mosaic.\n",
"\n",
"## Train on Custom Data with Roboflow 🌟 NEW\n",
"\n",
"[Roboflow](https://roboflow.com/?ref=ultralytics) enables you to easily **organize, label, and prepare** a high quality dataset with your own custom data. Roboflow also makes it easy to establish an active learning pipeline, collaborate with your team on dataset improvement, and integrate directly into your model building workflow with the `roboflow` pip package.\n",
"\n",
"- Custom Training Example: [https://blog.roboflow.com/train-yolov5-instance-segmentation-custom-dataset/](https://blog.roboflow.com/train-yolov5-instance-segmentation-custom-dataset/?ref=ultralytics)\n",
"- Custom Training Notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1JTz7kpmHsg-5qwVz2d2IH3AaenI1tv0N?usp=sharing)\n",
"<br>\n",
"\n",
"<p align=\"\"><a href=\"https://roboflow.com/?ref=ultralytics\"><img width=\"480\" src=\"https://robflow-public-assets.s3.amazonaws.com/how-to-train-yolov5-segmentation-annotation.gif\"/></a></p>Label images lightning fast (including with model-assisted labeling)"
]
},
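{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch (not part of the original tutorial): how a mosaic combines\n",
"# 4 images into 1. Simplified to fixed quadrants with random placeholder images;\n",
"# the real YOLOv5 dataloader also jitters the mosaic center and rescales images.\n",
"import numpy as np\n",
"\n",
"def make_mosaic(imgs, s=640):\n",
"    # imgs: list of 4 HxWx3 uint8 arrays; returns a (2s, 2s, 3) mosaic canvas\n",
"    canvas = np.full((2 * s, 2 * s, 3), 114, dtype=np.uint8)  # gray fill, as in YOLOv5 letterboxing\n",
"    for i, im in enumerate(imgs):\n",
"        h, w = im.shape[:2]\n",
"        y, x = (i // 2) * s, (i % 2) * s  # top-left corner of quadrant i\n",
"        canvas[y:y + min(h, s), x:x + min(w, s)] = im[:s, :s]\n",
"    return canvas\n",
"\n",
"mosaic = make_mosaic([np.random.randint(0, 255, (640, 640, 3), np.uint8) for _ in range(4)])\n",
"print(mosaic.shape)  # (1280, 1280, 3)"
]
},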
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i3oKtE4g-aNn"
},
"outputs": [],
"source": [
"#@title Select YOLOv5 🚀 logger {run: 'auto'}\n",
"logger = 'TensorBoard' #@param ['TensorBoard', 'Comet', 'ClearML']\n",
"\n",
"if logger == 'TensorBoard':\n",
" %load_ext tensorboard\n",
" %tensorboard --logdir runs/train-seg\n",
"elif logger == 'Comet':\n",
" %pip install -q comet_ml\n",
" import comet_ml; comet_ml.init()\n",
"elif logger == 'ClearML':\n",
" import clearml; clearml.browser_login()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "1NcFxRcFdJ_O",
"outputId": "3a3e0cf7-e79c-47a5-c8e7-2d26eeeab988"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[34m\u001b[1msegment/train: \u001b[0mweights=yolov5s-seg.pt, cfg=, data=coco128-seg.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train-seg, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, mask_ratio=4, no_overlap=False\n",
"\u001b[34m\u001b[1mgithub: \u001b[0mup to date with https://github.com/ultralytics/yolov5 ✅\n",
"YOLOv5 🚀 v7.0-2-gc9d47ae Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)\n",
"\n",
"\u001b[34m\u001b[1mhyperparameters: \u001b[0mlr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\n",
"\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 'tensorboard --logdir runs/train-seg', view at http://localhost:6006/\n",
"\n",
"Dataset not found ⚠️, missing paths ['/content/datasets/coco128-seg/images/train2017']\n",
"Downloading https://ultralytics.com/assets/coco128-seg.zip to coco128-seg.zip...\n",
"100% 6.79M/6.79M [00:01<00:00, 6.73MB/s]\n",
"Dataset download success ✅ (1.9s), saved to \u001b[1m/content/datasets\u001b[0m\n",
"\n",
" from n params module arguments \n",
" 0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2] \n",
" 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] \n",
" 2 -1 1 18816 models.common.C3 [64, 64, 1] \n",
" 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] \n",
" 4 -1 2 115712 models.common.C3 [128, 128, 2] \n",
" 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] \n",
" 6 -1 3 625152 models.common.C3 [256, 256, 3] \n",
" 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] \n",
" 8 -1 1 1182720 models.common.C3 [512, 512, 1] \n",
" 9 -1 1 656896 models.common.SPPF [512, 512, 5] \n",
" 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] \n",
" 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
" 12 [-1, 6] 1 0 models.common.Concat [1] \n",
" 13 -1 1 361984 models.common.C3 [512, 256, 1, False] \n",
" 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] \n",
" 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
" 16 [-1, 4] 1 0 models.common.Concat [1] \n",
" 17 -1 1 90880 models.common.C3 [256, 128, 1, False] \n",
" 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] \n",
" 19 [-1, 14] 1 0 models.common.Concat [1] \n",
" 20 -1 1 296448 models.common.C3 [256, 256, 1, False] \n",
" 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] \n",
" 22 [-1, 10] 1 0 models.common.Concat [1] \n",
" 23 -1 1 1182720 models.common.C3 [512, 512, 1, False] \n",
" 24 [17, 20, 23] 1 615133 models.yolo.Segment [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], 32, 128, [128, 256, 512]]\n",
"Model summary: 225 layers, 7621277 parameters, 7621277 gradients, 26.6 GFLOPs\n",
"\n",
"Transferred 367/367 items from yolov5s-seg.pt\n",
"\u001b[34m\u001b[1mAMP: \u001b[0mchecks passed ✅\n",
"\u001b[34m\u001b[1moptimizer:\u001b[0m SGD(lr=0.01) with parameter groups 60 weight(decay=0.0), 63 weight(decay=0.0005), 63 bias\n",
"\u001b[34m\u001b[1malbumentations: \u001b[0mBlur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mScanning /content/datasets/coco128-seg/labels/train2017... 126 images, 2 backgrounds, 0 corrupt: 100% 128/128 [00:00<00:00, 1389.59it/s]\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mNew cache created: /content/datasets/coco128-seg/labels/train2017.cache\n",
"\u001b[34m\u001b[1mtrain: \u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:00<00:00, 238.86it/s]\n",
"\u001b[34m\u001b[1mval: \u001b[0mScanning /content/datasets/coco128-seg/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100% 128/128 [00:00<?, ?it/s]\n",
"\u001b[34m\u001b[1mval: \u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:01<00:00, 98.90it/s]\n",
"\n",
"\u001b[34m\u001b[1mAutoAnchor: \u001b[0m4.27 anchors/target, 0.994 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅\n",
"Plotting labels to runs/train-seg/exp/labels.jpg... \n",
"Image sizes 640 train, 640 val\n",
"Using 2 dataloader workers\n",
"Logging results to \u001b[1mruns/train-seg/exp\u001b[0m\n",
"Starting training for 3 epochs...\n",
"\n",
" Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size\n",
" 0/2 4.92G 0.0417 0.04646 0.06066 0.02126 192 640: 100% 8/8 [00:08<00:00, 1.10s/it]\n",
" Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100% 4/4 [00:02<00:00, 1.81it/s]\n",
" all 128 929 0.737 0.649 0.715 0.492 0.719 0.617 0.658 0.408\n",
"\n",
" Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size\n",
" 1/2 6.29G 0.04157 0.04503 0.05772 0.01777 208 640: 100% 8/8 [00:09<00:00, 1.21s/it]\n",
" Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100% 4/4 [00:02<00:00, 1.87it/s]\n",
" all 128 929 0.756 0.674 0.738 0.506 0.725 0.64 0.68 0.422\n",
"\n",
" Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size\n",
" 2/2 6.29G 0.0425 0.04793 0.06784 0.01863 161 640: 100% 8/8 [00:03<00:00, 2.02it/s]\n",
" Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100% 4/4 [00:02<00:00, 1.88it/s]\n",
" all 128 929 0.736 0.694 0.747 0.522 0.769 0.622 0.683 0.427\n",
"\n",
"3 epochs completed in 0.009 hours.\n",
"Optimizer stripped from runs/train-seg/exp/weights/last.pt, 15.6MB\n",
"Optimizer stripped from runs/train-seg/exp/weights/best.pt, 15.6MB\n",
"\n",
"Validating runs/train-seg/exp/weights/best.pt...\n",
"Fusing layers... \n",
"Model summary: 165 layers, 7611485 parameters, 0 gradients, 26.4 GFLOPs\n",
" Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100% 4/4 [00:06<00:00, 1.59s/it]\n",
" all 128 929 0.738 0.694 0.746 0.522 0.759 0.625 0.682 0.426\n",
" person 128 254 0.845 0.756 0.836 0.55 0.861 0.669 0.759 0.407\n",
" bicycle 128 6 0.475 0.333 0.549 0.341 0.711 0.333 0.526 0.322\n",
" car 128 46 0.612 0.565 0.539 0.257 0.555 0.435 0.477 0.171\n",
" motorcycle 128 5 0.73 0.8 0.752 0.571 0.747 0.8 0.752 0.42\n",
" airplane 128 6 1 0.943 0.995 0.732 0.92 0.833 0.839 0.555\n",
" bus 128 7 0.677 0.714 0.722 0.653 0.711 0.714 0.722 0.593\n",
" train 128 3 1 0.951 0.995 0.551 1 0.884 0.995 0.781\n",
" truck 128 12 0.555 0.417 0.457 0.285 0.624 0.417 0.397 0.277\n",
" boat 128 6 0.624 0.5 0.584 0.186 1 0.326 0.412 0.133\n",
" traffic light 128 14 0.513 0.302 0.411 0.247 0.435 0.214 0.376 0.251\n",
" stop sign 128 2 0.824 1 0.995 0.796 0.906 1 0.995 0.747\n",
" bench 128 9 0.75 0.667 0.763 0.367 0.724 0.585 0.698 0.209\n",
" bird 128 16 0.961 1 0.995 0.686 0.918 0.938 0.91 0.525\n",
" cat 128 4 0.771 0.857 0.945 0.752 0.76 0.8 0.945 0.728\n",
" dog 128 9 0.987 0.778 0.963 0.681 1 0.705 0.89 0.574\n",
" horse 128 2 0.703 1 0.995 0.697 0.759 1 0.995 0.249\n",
" elephant 128 17 0.916 0.882 0.93 0.691 0.811 0.765 0.829 0.537\n",
" bear 128 1 0.664 1 0.995 0.995 0.701 1 0.995 0.895\n",
" zebra 128 4 0.864 1 0.995 0.921 0.879 1 0.995 0.804\n",
" giraffe 128 9 0.883 0.889 0.94 0.683 0.845 0.778 0.78 0.463\n",
" backpack 128 6 1 0.59 0.701 0.372 1 0.474 0.52 0.252\n",
" umbrella 128 18 0.654 0.839 0.887 0.52 0.517 0.556 0.427 0.229\n",
" handbag 128 19 0.54 0.211 0.408 0.221 0.796 0.206 0.396 0.196\n",
" tie 128 7 0.864 0.857 0.857 0.577 0.925 0.857 0.857 0.534\n",
" suitcase 128 4 0.716 1 0.945 0.647 0.767 1 0.945 0.634\n",
" frisbee 128 5 0.708 0.8 0.761 0.643 0.737 0.8 0.761 0.501\n",
" skis 128 1 0.691 1 0.995 0.796 0.761 1 0.995 0.199\n",
" snowboard 128 7 0.918 0.857 0.904 0.604 0.32 0.286 0.235 0.137\n",
" sports ball 128 6 0.902 0.667 0.701 0.466 0.727 0.5 0.497 0.471\n",
" kite 128 10 0.586 0.4 0.511 0.231 0.663 0.394 0.417 0.139\n",
" baseball bat 128 4 0.359 0.5 0.401 0.169 0.631 0.5 0.526 0.133\n",
" baseball glove 128 7 1 0.519 0.58 0.327 0.687 0.286 0.455 0.328\n",
" skateboard 128 5 0.729 0.8 0.862 0.631 0.599 0.6 0.604 0.379\n",
" tennis racket 128 7 0.57 0.714 0.645 0.448 0.608 0.714 0.645 0.412\n",
" bottle 128 18 0.469 0.393 0.537 0.357 0.661 0.389 0.543 0.349\n",
" wine glass 128 16 0.677 0.938 0.866 0.441 0.53 0.625 0.67 0.334\n",
" cup 128 36 0.777 0.722 0.812 0.466 0.725 0.583 0.762 0.467\n",
" fork 128 6 0.948 0.333 0.425 0.27 0.527 0.167 0.18 0.102\n",
" knife 128 16 0.757 0.587 0.669 0.458 0.79 0.5 0.552 0.34\n",
" spoon 128 22 0.74 0.364 0.559 0.269 0.925 0.364 0.513 0.213\n",
" bowl 128 28 0.766 0.714 0.725 0.559 0.803 0.584 0.665 0.353\n",
" banana 128 1 0.408 1 0.995 0.398 0.539 1 0.995 0.497\n",
" sandwich 128 2 1 0 0.695 0.536 1 0 0.498 0.448\n",
" orange 128 4 0.467 1 0.995 0.693 0.518 1 0.995 0.663\n",
" broccoli 128 11 0.462 0.455 0.383 0.259 0.548 0.455 0.384 0.256\n",
" carrot 128 24 0.631 0.875 0.77 0.533 0.757 0.909 0.853 0.499\n",
" hot dog 128 2 0.555 1 0.995 0.995 0.578 1 0.995 0.796\n",
" pizza 128 5 0.89 0.8 0.962 0.796 1 0.778 0.962 0.766\n",
" donut 128 14 0.695 1 0.893 0.772 0.704 1 0.893 0.696\n",
" cake 128 4 0.826 1 0.995 0.92 0.862 1 0.995 0.846\n",
" chair 128 35 0.53 0.571 0.613 0.336 0.67 0.6 0.538 0.271\n",
" couch 128 6 0.972 0.667 0.833 0.627 1 0.62 0.696 0.394\n",
" potted plant 128 14 0.7 0.857 0.883 0.552 0.836 0.857 0.883 0.473\n",
" bed 128 3 0.979 0.667 0.83 0.366 1 0 0.83 0.373\n",
" dining table 128 13 0.775 0.308 0.505 0.364 0.644 0.231 0.25 0.0804\n",
" toilet 128 2 0.836 1 0.995 0.846 0.887 1 0.995 0.797\n",
" tv 128 2 0.6 1 0.995 0.846 0.655 1 0.995 0.896\n",
" laptop 128 3 0.822 0.333 0.445 0.307 1 0 0.392 0.12\n",
" mouse 128 2 1 0 0 0 1 0 0 0\n",
" remote 128 8 0.745 0.5 0.62 0.459 0.821 0.5 0.624 0.449\n",
" cell phone 128 8 0.686 0.375 0.502 0.272 0.488 0.25 0.28 0.132\n",
" microwave 128 3 0.831 1 0.995 0.722 0.867 1 0.995 0.592\n",
" oven 128 5 0.439 0.4 0.435 0.294 0.823 0.6 0.645 0.418\n",
" sink 128 6 0.677 0.5 0.565 0.448 0.722 0.5 0.46 0.362\n",
" refrigerator 128 5 0.533 0.8 0.783 0.524 0.558 0.8 0.783 0.527\n",
" book 128 29 0.732 0.379 0.423 0.196 0.69 0.207 0.38 0.131\n",
" clock 128 9 0.889 0.778 0.917 0.677 0.908 0.778 0.875 0.604\n",
" vase 128 2 0.375 1 0.995 0.995 0.455 1 0.995 0.796\n",
" scissors 128 1 1 0 0.0166 0.00166 1 0 0 0\n",
" teddy bear 128 21 0.813 0.829 0.841 0.457 0.826 0.678 0.786 0.422\n",
" toothbrush 128 5 0.806 1 0.995 0.733 0.991 1 0.995 0.628\n",
"Results saved to \u001b[1mruns/train-seg/exp\u001b[0m\n"
]
}
],
"source": [
"# Train YOLOv5s on COCO128 for 3 epochs\n",
"!python segment/train.py --img 640 --batch 16 --epochs 3 --data coco128-seg.yaml --weights yolov5s-seg.pt --cache"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "15glLzbQx5u0"
},
"source": [
"# 4. Visualize"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nWOsI5wJR1o3"
},
"source": [
"## Comet Logging and Visualization 🌟 NEW\n",
"\n",
"[Comet](https://www.comet.com/site/lp/yolov5-with-comet/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=yolov5_colab) is now fully integrated with YOLOv5. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://www.comet.com/docs/v2/guides/comet-dashboard/code-panels/about-panels/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=yolov5_colab)! Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!\n",
"\n",
"Getting started is easy:\n",
"```shell\n",
"pip install comet_ml # 1. install\n",
"export COMET_API_KEY=<Your API Key> # 2. paste API key\n",
"python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt # 3. train\n",
"```\n",
"To learn more about all of the supported Comet features for this integration, check out the [Comet Tutorial](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/comet). If you'd like to learn more about Comet, head over to our [documentation](https://www.comet.com/docs/v2/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=yolov5_colab). Get started by trying out the Comet Colab Notebook:\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing)\n",
"\n",
"<a href=\"https://bit.ly/yolov5-readme-comet2\">\n",
"<img alt=\"Comet Dashboard\" src=\"https://user-images.githubusercontent.com/26833433/202851203-164e94e1-2238-46dd-91f8-de020e9d6b41.png\" width=\"1280\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Lay2WsTjNJzP"
},
"source": [
"## ClearML Logging and Automation 🌟 NEW\n",
"\n",
"[ClearML](https://cutt.ly/yolov5-notebook-clearml) is completely integrated into YOLOv5 to track your experimentation, manage dataset versions and even remotely execute training runs. To enable ClearML (check cells above):\n",
"\n",
"- `pip install clearml`\n",
"- run `clearml-init` to connect to a ClearML server (**deploy your own [open-source server](https://github.com/allegroai/clearml-server)**, or use our [free hosted server](https://cutt.ly/yolov5-notebook-clearml))\n",
"\n",
"You'll get all the great expected features from an experiment manager: live updates, model upload, experiment comparison etc. but ClearML also tracks uncommitted changes and installed packages for example. Thanks to that ClearML Tasks (which is what we call experiments) are also reproducible on different machines! With only 1 extra line, we can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers).\n",
"\n",
"You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the [ClearML Tutorial](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) for details!\n",
"\n",
"<a href=\"https://cutt.ly/yolov5-notebook-clearml\">\n",
"<img alt=\"ClearML Experiment Management UI\" src=\"https://github.com/thepycoder/clearml_screenshots/raw/main/scalars.jpg\" width=\"1280\"/></a>"
]
},
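{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional example (not in the original notebook): the minimal ClearML setup for\n",
"# this notebook, using the same calls as the logger selection cell above.\n",
"# %pip install -q clearml\n",
"# import clearml\n",
"# clearml.browser_login()  # authenticate, then rerun segment/train.py to log the run"
]
},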
{
"cell_type": "markdown",
"metadata": {
"id": "-WPvRbS5Swl6"
},
"source": [
"## Local Logging\n",
"\n",
"Training results are automatically logged with [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) loggers to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc.\n",
"\n",
"This directory contains train and val statistics, mosaics, labels, predictions and augmentated mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices. \n",
"\n",
"<img alt=\"Local logging results\" src=\"https://user-images.githubusercontent.com/26833433/183222430-e1abd1b7-782c-4cde-b04d-ad52926bf818.jpg\" width=\"1280\"/>\n"
]
},
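{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional example (not in the original notebook): list the artifacts written by\n",
"# the training run above (this notebook logs to runs/train-seg/exp).\n",
"from pathlib import Path\n",
"\n",
"for f in sorted(Path('runs/train-seg/exp').glob('*')):\n",
"    print(f)"
]
},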
{
"cell_type": "markdown",
"metadata": {
"id": "Zelyeqbyt3GD"
},
"source": [
"# Environments\n",
"\n",
"YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n",
"\n",
"- **Notebooks** with free GPU: <a href=\"https://bit.ly/yolov5-paperspace-notebook\"><img src=\"https://assets.paperspace.io/img/gradient-badge.svg\" alt=\"Run on Gradient\"></a> <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a> <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n",
"- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\n",
"- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\n",
"- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href=\"https://hub.docker.com/r/ultralytics/yolov3\"><img src=\"https://img.shields.io/docker/pulls/ultralytics/yolov3?logo=docker\" alt=\"Docker Pulls\"></a>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6Qu7Iesl0p54"
},
"source": [
"# Status\n",
"\n",
"![YOLOv5 CI](https://github.com/ultralytics/yolov3/actions/workflows/ci-testing.yml/badge.svg)\n",
"\n",
"If this badge is green, all [YOLOv3 GitHub Actions](https://github.com/ultralytics/yolov3/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on macOS, Windows, and Ubuntu every 24 hours and on every commit.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IEijrePND_2I"
},
"source": [
"# Appendix\n",
"\n",
"Additional content below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GMusP4OAxFu6"
},
"outputs": [],
"source": [
"# YOLOv5 PyTorch HUB Inference (DetectionModels only)\n",
"import torch\n",
"\n",
"model = torch.hub.load('ultralytics/yolov5', 'yolov5s-seg') # yolov5n - yolov5x6 or custom\n",
"im = 'https://ultralytics.com/images/zidane.jpg' # file, Path, PIL.Image, OpenCV, nparray, list\n",
"results = model(im) # inference\n",
"results.print() # or .show(), .save(), .crop(), .pandas(), etc."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "YOLOv5 Segmentation Tutorial",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

473
yolov3/segment/val.py Normal file

@@ -0,0 +1,473 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Validate a trained segment model on a segment dataset
Usage:
$ bash data/scripts/get_coco.sh --val --segments # download COCO-segments val split (1G, 5000 images)
$ python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # validate COCO-segments
Usage - formats:
$ python segment/val.py --weights yolov5s-seg.pt # PyTorch
yolov5s-seg.torchscript # TorchScript
yolov5s-seg.onnx # ONNX Runtime or OpenCV DNN with --dnn
yolov5s-seg_openvino_model # OpenVINO
yolov5s-seg.engine # TensorRT
yolov5s-seg.mlmodel # CoreML (macOS-only)
yolov5s-seg_saved_model # TensorFlow SavedModel
yolov5s-seg.pb # TensorFlow GraphDef
yolov5s-seg.tflite # TensorFlow Lite
yolov5s-seg_edgetpu.tflite # TensorFlow Edge TPU
yolov5s-seg_paddle_model # PaddlePaddle
"""
import argparse
import json
import os
import subprocess
import sys
from multiprocessing.pool import ThreadPool
from pathlib import Path
import numpy as np
import torch
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
import torch.nn.functional as F
from models.common import DetectMultiBackend
from models.yolo import SegmentationModel
from utils.callbacks import Callbacks
from utils.general import (LOGGER, NUM_THREADS, TQDM_BAR_FORMAT, Profile, check_dataset, check_img_size,
check_requirements, check_yaml, coco80_to_coco91_class, colorstr, increment_path,
non_max_suppression, print_args, scale_boxes, xywh2xyxy, xyxy2xywh)
from utils.metrics import ConfusionMatrix, box_iou
from utils.plots import output_to_target, plot_val_study
from utils.segment.dataloaders import create_dataloader
from utils.segment.general import mask_iou, process_mask, process_mask_native, scale_image
from utils.segment.metrics import Metrics, ap_per_class_box_and_mask
from utils.segment.plots import plot_images_and_masks
from utils.torch_utils import de_parallel, select_device, smart_inference_mode
def save_one_txt(predn, save_conf, shape, file):
# Save one txt result
gn = torch.tensor(shape)[[1, 0, 1, 0]] # normalization gain whwh
for *xyxy, conf, cls in predn.tolist():
xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
with open(file, 'a') as f:
f.write(('%g ' * len(line)).rstrip() % line + '\n')
def save_one_json(predn, jdict, path, class_map, pred_masks):
# Save one JSON result {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}
from pycocotools.mask import encode
def single_encode(x):
rle = encode(np.asarray(x[:, :, None], order='F', dtype='uint8'))[0]
rle['counts'] = rle['counts'].decode('utf-8')
return rle
image_id = int(path.stem) if path.stem.isnumeric() else path.stem
box = xyxy2xywh(predn[:, :4]) # xywh
box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
pred_masks = np.transpose(pred_masks, (2, 0, 1))
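# (H, W, N) -> (N, H, W) so each instance mask is RLE-encoded independently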
with ThreadPool(NUM_THREADS) as pool:
rles = pool.map(single_encode, pred_masks)
for i, (p, b) in enumerate(zip(predn.tolist(), box.tolist())):
jdict.append({
'image_id': image_id,
'category_id': class_map[int(p[5])],
'bbox': [round(x, 3) for x in b],
'score': round(p[4], 5),
'segmentation': rles[i]})
def process_batch(detections, labels, iouv, pred_masks=None, gt_masks=None, overlap=False, masks=False):
"""
Return correct prediction matrix
Arguments:
detections (array[N, 6]), x1, y1, x2, y2, conf, class
labels (array[M, 5]), class, x1, y1, x2, y2
Returns:
correct (array[N, 10]), for 10 IoU levels
"""
if masks:
if overlap:
nl = len(labels)
index = torch.arange(nl, device=gt_masks.device).view(nl, 1, 1) + 1
gt_masks = gt_masks.repeat(nl, 1, 1) # shape(1,640,640) -> (n,640,640)
gt_masks = torch.where(gt_masks == index, 1.0, 0.0)
if gt_masks.shape[1:] != pred_masks.shape[1:]:
gt_masks = F.interpolate(gt_masks[None], pred_masks.shape[1:], mode='bilinear', align_corners=False)[0]
gt_masks = gt_masks.gt_(0.5)
iou = mask_iou(gt_masks.view(gt_masks.shape[0], -1), pred_masks.view(pred_masks.shape[0], -1))
else: # boxes
iou = box_iou(labels[:, 1:], detections[:, :4])
correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool)
correct_class = labels[:, 0:1] == detections[:, 5]
for i in range(len(iouv)):
x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match
if x[0].shape[0]:
matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou]
if x[0].shape[0] > 1:
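# greedy matching: sort candidate pairs by IoU descending, then keep at most one match per detection and per label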
matches = matches[matches[:, 2].argsort()[::-1]]
matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
# matches = matches[matches[:, 2].argsort()[::-1]]
matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
correct[matches[:, 1].astype(int), i] = True
return torch.tensor(correct, dtype=torch.bool, device=iouv.device)
@smart_inference_mode()
def run(
data,
weights=None, # model.pt path(s)
batch_size=32, # batch size
imgsz=640, # inference size (pixels)
conf_thres=0.001, # confidence threshold
iou_thres=0.6, # NMS IoU threshold
max_det=300, # maximum detections per image
task='val', # train, val, test, speed or study
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
workers=8, # max dataloader workers (per RANK in DDP mode)
single_cls=False, # treat as single-class dataset
augment=False, # augmented inference
verbose=False, # verbose output
save_txt=False, # save results to *.txt
save_hybrid=False, # save label+prediction hybrid results to *.txt
save_conf=False, # save confidences in --save-txt labels
save_json=False, # save a COCO-JSON results file
project=ROOT / 'runs/val-seg', # save to project/name
name='exp', # save to project/name
exist_ok=False, # existing project/name ok, do not increment
half=True, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
model=None,
dataloader=None,
save_dir=Path(''),
plots=True,
overlap=False,
mask_downsample_ratio=1,
compute_loss=None,
callbacks=Callbacks(),
):
if save_json:
check_requirements('pycocotools>=2.0.6')
process = process_mask_native # more accurate
else:
process = process_mask # faster
# Initialize/load model and set device
training = model is not None
if training: # called by train.py
device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model
half &= device.type != 'cpu' # half precision only supported on CUDA
model.half() if half else model.float()
nm = de_parallel(model).model[-1].nm # number of masks
else: # called directly
device = select_device(device, batch_size=batch_size)
# Directories
save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
(save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
# Load model
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine
imgsz = check_img_size(imgsz, s=stride) # check image size
half = model.fp16 # FP16 supported on limited backends with CUDA
nm = de_parallel(model).model.model[-1].nm if isinstance(model, SegmentationModel) else 32 # number of masks
if engine:
batch_size = model.batch_size
else:
device = model.device
if not (pt or jit):
batch_size = 1 # export.py models default to batch-size 1
LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models')
# Data
data = check_dataset(data) # check
# Configure
model.eval()
cuda = device.type != 'cpu'
is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt') # COCO dataset
nc = 1 if single_cls else int(data['nc']) # number of classes
iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for mAP@0.5:0.95
niou = iouv.numel()
# Dataloader
if not training:
if pt and not single_cls: # check --weights are trained on --data
ncm = model.model.nc
assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \
f'classes). Pass correct combination of --weights and --data that are trained together.'
model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz)) # warmup
pad, rect = (0.0, False) if task == 'speed' else (0.5, pt) # square inference for benchmarks
task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images
dataloader = create_dataloader(data[task],
imgsz,
batch_size,
stride,
single_cls,
pad=pad,
rect=rect,
workers=workers,
prefix=colorstr(f'{task}: '),
overlap_mask=overlap,
mask_downsample_ratio=mask_downsample_ratio)[0]
seen = 0
confusion_matrix = ConfusionMatrix(nc=nc)
names = model.names if hasattr(model, 'names') else model.module.names # get class names
if isinstance(names, (list, tuple)): # old format
names = dict(enumerate(names))
class_map = coco80_to_coco91_class() if is_coco else list(range(1000))
s = ('%22s' + '%11s' * 10) % ('Class', 'Images', 'Instances', 'Box(P', 'R', 'mAP50', 'mAP50-95)', 'Mask(P', 'R',
'mAP50', 'mAP50-95)')
dt = Profile(), Profile(), Profile()
metrics = Metrics()
loss = torch.zeros(4, device=device)
jdict, stats = [], []
# callbacks.run('on_val_start')
pbar = tqdm(dataloader, desc=s, bar_format=TQDM_BAR_FORMAT) # progress bar
for batch_i, (im, targets, paths, shapes, masks) in enumerate(pbar):
# callbacks.run('on_val_batch_start')
with dt[0]:
if cuda:
im = im.to(device, non_blocking=True)
targets = targets.to(device)
masks = masks.to(device)
masks = masks.float()
im = im.half() if half else im.float() # uint8 to fp16/32
im /= 255 # 0 - 255 to 0.0 - 1.0
nb, _, height, width = im.shape # batch size, channels, height, width
# Inference
with dt[1]:
preds, protos, train_out = model(im) if compute_loss else (*model(im, augment=augment)[:2], None)
# Loss
if compute_loss:
loss += compute_loss((train_out, protos), targets, masks)[1] # box, obj, cls
# NMS
targets[:, 2:] *= torch.tensor((width, height, width, height), device=device) # to pixels
lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling
with dt[2]:
preds = non_max_suppression(preds,
conf_thres,
iou_thres,
labels=lb,
multi_label=True,
agnostic=single_cls,
max_det=max_det,
nm=nm)
# Metrics
plot_masks = [] # masks for plotting
for si, (pred, proto) in enumerate(zip(preds, protos)):
labels = targets[targets[:, 0] == si, 1:]
nl, npr = labels.shape[0], pred.shape[0] # number of labels, predictions
path, shape = Path(paths[si]), shapes[si][0]
correct_masks = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init
correct_bboxes = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init
seen += 1
if npr == 0:
if nl:
stats.append((correct_masks, correct_bboxes, *torch.zeros((2, 0), device=device), labels[:, 0]))
if plots:
confusion_matrix.process_batch(detections=None, labels=labels[:, 0])
continue
# Masks
midx = [si] if overlap else targets[:, 0] == si
gt_masks = masks[midx]
pred_masks = process(proto, pred[:, 6:], pred[:, :4], shape=im[si].shape[1:])
# Predictions
if single_cls:
pred[:, 5] = 0
predn = pred.clone()
scale_boxes(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred
# Evaluate
if nl:
tbox = xywh2xyxy(labels[:, 1:5]) # target boxes
scale_boxes(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels
labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels
correct_bboxes = process_batch(predn, labelsn, iouv)
correct_masks = process_batch(predn, labelsn, iouv, pred_masks, gt_masks, overlap=overlap, masks=True)
if plots:
confusion_matrix.process_batch(predn, labelsn)
stats.append((correct_masks, correct_bboxes, pred[:, 4], pred[:, 5], labels[:, 0])) # (conf, pcls, tcls)
pred_masks = torch.as_tensor(pred_masks, dtype=torch.uint8)
if plots and batch_i < 3:
plot_masks.append(pred_masks[:15]) # filter top 15 to plot
# Save/log
if save_txt:
save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt')
if save_json:
pred_masks = scale_image(im[si].shape[1:],
pred_masks.permute(1, 2, 0).contiguous().cpu().numpy(), shape, shapes[si][1])
save_one_json(predn, jdict, path, class_map, pred_masks) # append to COCO-JSON dictionary
# callbacks.run('on_val_image_end', pred, predn, path, names, im[si])
# Plot images
if plots and batch_i < 3:
if len(plot_masks):
plot_masks = torch.cat(plot_masks, dim=0)
plot_images_and_masks(im, targets, masks, paths, save_dir / f'val_batch{batch_i}_labels.jpg', names)
plot_images_and_masks(im, output_to_target(preds, max_det=15), plot_masks, paths,
save_dir / f'val_batch{batch_i}_pred.jpg', names) # pred
# callbacks.run('on_val_batch_end')
# Compute metrics
stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)] # to numpy
if len(stats) and stats[0].any():
results = ap_per_class_box_and_mask(*stats, plot=plots, save_dir=save_dir, names=names)
metrics.update(results)
nt = np.bincount(stats[4].astype(int), minlength=nc) # number of targets per class
# Print results
pf = '%22s' + '%11i' * 2 + '%11.3g' * 8 # print format
LOGGER.info(pf % ('all', seen, nt.sum(), *metrics.mean_results()))
if nt.sum() == 0:
LOGGER.warning(f'WARNING ⚠️ no labels found in {task} set, can not compute metrics without labels')
# Print results per class
if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):
for i, c in enumerate(metrics.ap_class_index):
LOGGER.info(pf % (names[c], seen, nt[c], *metrics.class_result(i)))
# Print speeds
t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image
if not training:
shape = (batch_size, 3, imgsz, imgsz)
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t)
# Plots
if plots:
confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
# callbacks.run('on_val_end')
mp_bbox, mr_bbox, map50_bbox, map_bbox, mp_mask, mr_mask, map50_mask, map_mask = metrics.mean_results()
# Save JSON
if save_json and len(jdict):
w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights
anno_json = str(Path('../datasets/coco/annotations/instances_val2017.json')) # annotations
pred_json = str(save_dir / f'{w}_predictions.json') # predictions
LOGGER.info(f'\nEvaluating pycocotools mAP... saving {pred_json}...')
with open(pred_json, 'w') as f:
json.dump(jdict, f)
try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
anno = COCO(anno_json) # init annotations api
pred = anno.loadRes(pred_json) # init predictions api
results = []
for eval in COCOeval(anno, pred, 'bbox'), COCOeval(anno, pred, 'segm'):
if is_coco:
eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files] # img ID to evaluate
eval.evaluate()
eval.accumulate()
eval.summarize()
results.extend(eval.stats[:2]) # update results (mAP@0.5:0.95, mAP@0.5)
map_bbox, map50_bbox, map_mask, map50_mask = results
except Exception as e:
LOGGER.info(f'pycocotools unable to run: {e}')
# Return results
model.float() # for training
if not training:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
final_metric = mp_bbox, mr_bbox, map50_bbox, map_bbox, mp_mask, mr_mask, map50_mask, map_mask
return (*final_metric, *(loss.cpu() / len(dataloader)).tolist()), metrics.get_maps(nc), t
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128-seg.yaml', help='dataset.yaml path')
parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-seg.pt', help='model path(s)')
parser.add_argument('--batch-size', type=int, default=32, help='batch size')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')
parser.add_argument('--max-det', type=int, default=300, help='maximum detections per image')
parser.add_argument('--task', default='val', help='train, val, test, speed or study')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--verbose', action='store_true', help='report mAP by class')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file')
parser.add_argument('--project', default=ROOT / 'runs/val-seg', help='save results to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
opt = parser.parse_args()
opt.data = check_yaml(opt.data) # check YAML
# opt.save_json |= opt.data.endswith('coco.yaml')
opt.save_txt |= opt.save_hybrid
print_args(vars(opt))
return opt
def main(opt):
check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop'))
if opt.task in ('train', 'val', 'test'): # run normally
if opt.conf_thres > 0.001: # https://github.com/ultralytics/yolov5/issues/1466
LOGGER.warning(f'WARNING ⚠️ confidence threshold {opt.conf_thres} > 0.001 produces invalid results')
if opt.save_hybrid:
LOGGER.warning('WARNING ⚠️ --save-hybrid returns high mAP from hybrid labels, not from predictions alone')
run(**vars(opt))
else:
weights = opt.weights if isinstance(opt.weights, list) else [opt.weights]
opt.half = torch.cuda.is_available() and opt.device != 'cpu' # FP16 for fastest results
if opt.task == 'speed': # speed benchmarks
# python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt...
opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False
for opt.weights in weights:
run(**vars(opt), plots=False)
elif opt.task == 'study': # speed vs mAP benchmarks
# python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt...
for opt.weights in weights:
f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt' # filename to save to
x, y = list(range(256, 1536 + 128, 128)), [] # x axis (image sizes), y axis
for opt.imgsz in x: # img-size
LOGGER.info(f'\nRunning {f} --imgsz {opt.imgsz}...')
r, _, t = run(**vars(opt), plots=False)
y.append(r + t) # results and times
np.savetxt(f, y, fmt='%10.4g') # save
subprocess.run('zip -r study.zip study_*.txt'.split())
plot_val_study(x=x) # plot
else:
raise NotImplementedError(f'--task {opt.task} not in ("train", "val", "test", "speed", "study")')
if __name__ == '__main__':
opt = parse_opt()
main(opt)

51
yolov3/setup.cfg Normal file

@@ -0,0 +1,51 @@
# Project-wide configuration file, can be used for package metadata and other tool configurations
# Example usage: global configuration for PEP8 (via flake8) settings or default pytest arguments
[metadata]
license_file = LICENSE
description-file = README.md
[tool:pytest]
norecursedirs =
.git
dist
build
addopts =
--doctest-modules
--durations=25
--color=yes
[flake8]
max-line-length = 120
exclude = .tox,*.egg,build,temp
select = E,W,F
doctests = True
verbose = 2
# https://pep8.readthedocs.io/en/latest/intro.html#error-codes
format = pylint
# see: https://www.flake8rules.com/
ignore =
E731 # Do not assign a lambda expression, use a def
F405 # may be undefined, or defined from star imports
E402 # module level import not at top of file
F841 # local variable is assigned to but never used
E741 # ambiguous variable name
F821 # undefined name
E722 # do not use bare 'except'
F401 # imported but unused
W504 # line break after binary operator
E127 # continuation line over-indented for visual indent
E231 # missing whitespace after ','
E501 # line too long
F403 # 'from module import *' used; unable to detect undefined names
E302 # expected 2 blank lines
F541 # f-string is missing placeholders
[isort]
# https://pycqa.github.io/isort/docs/configuration/options.html
line_length = 120
multi_line_output = 0

625
yolov3/train.py Normal file

@@ -0,0 +1,625 @@
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Train a model on a custom dataset
Usage:
$ python path/to/train.py --data coco128.yaml --weights yolov3.pt --img 640
"""
import argparse
import math
import os
import random
import sys
import time
from copy import deepcopy
from datetime import datetime
from pathlib import Path
import numpy as np
import torch
import torch.distributed as dist
import torch.nn as nn
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import SGD, Adam, lr_scheduler
from tqdm import tqdm
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
import val # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.autobatch import check_train_batch_size
from utils.callbacks import Callbacks
from utils.datasets import create_dataloader
from utils.downloads import attempt_download
from utils.general import (LOGGER, NCOLS, check_dataset, check_file, check_git_status, check_img_size,
check_requirements, check_suffix, check_yaml, colorstr, get_latest_run, increment_path,
init_seeds, intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods,
one_cycle, print_args, print_mutation, strip_optimizer)
from utils.loggers import Loggers
from utils.loggers.wandb.wandb_utils import check_wandb_resume
from utils.loss import ComputeLoss
from utils.metrics import fitness
from utils.plots import plot_evolve, plot_labels
from utils.torch_utils import EarlyStopping, ModelEMA, de_parallel, select_device, torch_distributed_zero_first
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
def train(hyp, # path/to/hyp.yaml or hyp dictionary
opt,
device,
callbacks
):
save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, = \
Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze
# Directories
w = save_dir / 'weights' # weights dir
(w.parent if evolve else w).mkdir(parents=True, exist_ok=True) # make dir
last, best = w / 'last.pt', w / 'best.pt'
# Hyperparameters
if isinstance(hyp, str):
with open(hyp, errors='ignore') as f:
hyp = yaml.safe_load(f) # load hyps dict
LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
# Save run settings
with open(save_dir / 'hyp.yaml', 'w') as f:
yaml.safe_dump(hyp, f, sort_keys=False)
with open(save_dir / 'opt.yaml', 'w') as f:
yaml.safe_dump(vars(opt), f, sort_keys=False)
data_dict = None
# Loggers
if RANK in [-1, 0]:
loggers = Loggers(save_dir, weights, opt, hyp, LOGGER) # loggers instance
if loggers.wandb:
data_dict = loggers.wandb.data_dict
if resume:
weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp
# Register actions
for k in methods(loggers):
callbacks.register_action(k, callback=getattr(loggers, k))
# Config
plots = not evolve # create plots
cuda = device.type != 'cpu'
init_seeds(1 + RANK)
with torch_distributed_zero_first(LOCAL_RANK):
data_dict = data_dict or check_dataset(data) # check if None
train_path, val_path = data_dict['train'], data_dict['val']
nc = 1 if single_cls else int(data_dict['nc']) # number of classes
names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}' # check
is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset
# Model
check_suffix(weights, '.pt') # check weights
pretrained = weights.endswith('.pt')
if pretrained:
with torch_distributed_zero_first(LOCAL_RANK):
weights = attempt_download(weights) # download if not found locally
ckpt = torch.load(weights, map_location=device) # load checkpoint
model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys
csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect
model.load_state_dict(csd, strict=False) # load
LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report
else:
model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
# Freeze
freeze = [f'model.{x}.' for x in range(freeze)] # layers to freeze
for k, v in model.named_parameters():
v.requires_grad = True # train all layers
if any(x in k for x in freeze):
LOGGER.info(f'freezing {k}')
v.requires_grad = False
# Image size
gs = max(int(model.stride.max()), 32) # grid size (max stride)
imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple
# Batch size
if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size
batch_size = check_train_batch_size(model, imgsz)
# Optimizer
nbs = 64 # nominal batch size
accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing
hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay
LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}")
g0, g1, g2 = [], [], [] # optimizer parameter groups
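# g0: BatchNorm weights (no decay), g1: other weights (with decay), g2: biases (no decay)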
for v in model.modules():
if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter): # bias
g2.append(v.bias)
if isinstance(v, nn.BatchNorm2d): # weight (no decay)
g0.append(v.weight)
elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter): # weight (with decay)
g1.append(v.weight)
if opt.adam:
optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
else:
optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']}) # add g1 with weight_decay
optimizer.add_param_group({'params': g2}) # add g2 (biases)
LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups "
f"{len(g0)} weight, {len(g1)} weight (no decay), {len(g2)} bias")
del g0, g1, g2
# Scheduler
if opt.linear_lr:
lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
else:
lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs)
# EMA
ema = ModelEMA(model) if RANK in [-1, 0] else None
# Resume
start_epoch, best_fitness = 0, 0.0
if pretrained:
# Optimizer
if ckpt['optimizer'] is not None:
optimizer.load_state_dict(ckpt['optimizer'])
best_fitness = ckpt['best_fitness']
# EMA
if ema and ckpt.get('ema'):
ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
ema.updates = ckpt['updates']
# Epochs
start_epoch = ckpt['epoch'] + 1
if resume:
assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.'
if epochs < start_epoch:
LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
epochs += ckpt['epoch'] # finetune additional epochs
del ckpt, csd
# DP mode
if cuda and RANK == -1 and torch.cuda.device_count() > 1:
LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n'
'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
model = torch.nn.DataParallel(model)
# SyncBatchNorm
if opt.sync_bn and cuda and RANK != -1:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
LOGGER.info('Using SyncBatchNorm()')
# Trainloader
train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,
hyp=hyp, augment=True, cache=opt.cache, rect=opt.rect, rank=LOCAL_RANK,
workers=workers, image_weights=opt.image_weights, quad=opt.quad,
prefix=colorstr('train: '), shuffle=True)
mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max()) # max label class
nb = len(train_loader) # number of batches
assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'
# Process 0
if RANK in [-1, 0]:
val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,
hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1,
workers=workers, pad=0.5,
prefix=colorstr('val: '))[0]
if not resume:
labels = np.concatenate(dataset.labels, 0)
# c = torch.tensor(labels[:, 0]) # classes
# cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
# model._initialize_biases(cf.to(device))
if plots:
plot_labels(labels, names, save_dir)
# Anchors
if not opt.noautoanchor:
check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
model.half().float() # pre-reduce anchor precision
callbacks.run('on_pretrain_routine_end')
# DDP mode
if cuda and RANK != -1:
model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
# Model parameters
nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps)
hyp['box'] *= 3 / nl # scale to layers
hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers
hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers
hyp['label_smoothing'] = opt.label_smoothing
model.nc = nc # attach number of classes to model
model.hyp = hyp # attach hyperparameters to model
model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
model.names = names
# Start training
t0 = time.time()
nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
# nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
last_opt_step = -1
maps = np.zeros(nc) # mAP per class
results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
scheduler.last_epoch = start_epoch - 1 # do not move
scaler = amp.GradScaler(enabled=cuda)
stopper = EarlyStopping(patience=opt.patience)
compute_loss = ComputeLoss(model) # init loss class
LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting training for {epochs} epochs...')
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
model.train()
# Update image weights (optional, single-GPU only)
if opt.image_weights:
cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
# Update mosaic border (optional)
# b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
# dataset.mosaic_border = [b - imgsz, -b] # height, width borders
mloss = torch.zeros(3, device=device) # mean losses
if RANK != -1:
train_loader.sampler.set_epoch(epoch)
pbar = enumerate(train_loader)
LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size'))
if RANK in [-1, 0]:
pbar = tqdm(pbar, total=nb, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar
optimizer.zero_grad()
for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
ni = i + nb * epoch # number integrated batches (since train start)
imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0
# Warmup
if ni <= nw:
xi = [0, nw] # x interp
# compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
for j, x in enumerate(optimizer.param_groups):
# bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
if 'momentum' in x:
x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
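# Warmup sketch (assuming the scratch hyps, warmup_bias_lr=0.1, lr0=0.01): at ni=0 the bias
# group starts at lr=0.1 and the weight groups at 0.0; all interpolate linearly to
# lr0 * lf(epoch) by ni=nw, while momentum rises from warmup_momentum to hyp['momentum'].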
# Multi-scale
if opt.multi_scale:
sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs # size (randrange requires ints)
sf = sz / max(imgs.shape[2:]) # scale factor
if sf != 1:
ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
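# e.g. with imgsz=640 and gs=32, sz is drawn from [320, 960] and rounded down to a multiple
# of 32, so every iteration trains the batch at a randomly chosen scale.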
# Forward
with amp.autocast(enabled=cuda):
pred = model(imgs) # forward
loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
if RANK != -1:
loss *= WORLD_SIZE # gradient averaged between devices in DDP mode
if opt.quad:
loss *= 4.
# Backward
scaler.scale(loss).backward()
# Optimize
if ni - last_opt_step >= accumulate:
scaler.step(optimizer) # optimizer.step
scaler.update()
optimizer.zero_grad()
if ema:
ema.update(model)
last_opt_step = ni
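# Standard torch.cuda.amp recipe: scaler.step() unscales gradients and skips the update if any
# are inf/NaN, then scaler.update() retunes the loss scale for the next iteration.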
# Log
if RANK in [-1, 0]:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB)
pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % (
f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn)
# end batch ------------------------------------------------------------------------------------------------
# Scheduler
lr = [x['lr'] for x in optimizer.param_groups] # for loggers
scheduler.step()
if RANK in [-1, 0]:
# mAP
callbacks.run('on_train_epoch_end', epoch=epoch)
ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
if not noval or final_epoch: # Calculate mAP
results, maps, _ = val.run(data_dict,
batch_size=batch_size // WORLD_SIZE * 2,
imgsz=imgsz,
model=ema.ema,
single_cls=single_cls,
dataloader=val_loader,
save_dir=save_dir,
plots=False,
callbacks=callbacks,
compute_loss=compute_loss)
# Update best mAP
fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
if fi > best_fitness:
best_fitness = fi
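# fitness() collapses [P, R, mAP@.5, mAP@.5:.95] to one scalar; in this codebase the weights
# are [0.0, 0.0, 0.1, 0.9], so checkpoint selection is dominated by mAP@.5:.95.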
log_vals = list(mloss) + list(results) + lr
callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)
# Save model
if (not nosave) or (final_epoch and not evolve): # if save
ckpt = {'epoch': epoch,
'best_fitness': best_fitness,
'model': deepcopy(de_parallel(model)).half(),
'ema': deepcopy(ema.ema).half(),
'updates': ema.updates,
'optimizer': optimizer.state_dict(),
'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None,
'date': datetime.now().isoformat()}
# Save last, best and delete
torch.save(ckpt, last)
if best_fitness == fi:
torch.save(ckpt, best)
if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0):
torch.save(ckpt, w / f'epoch{epoch}.pt')
del ckpt
callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)
# Stop Single-GPU
if RANK == -1 and stopper(epoch=epoch, fitness=fi):
break
# Stop DDP TODO: known issues https://github.com/ultralytics/yolov5/pull/4576
# stop = stopper(epoch=epoch, fitness=fi)
# if RANK == 0:
# dist.broadcast_object_list([stop], 0) # broadcast 'stop' to all ranks
# Stop DDP
# with torch_distributed_zero_first(RANK):
# if stop:
# break # must break all DDP ranks
# end epoch ----------------------------------------------------------------------------------------------------
# end training -----------------------------------------------------------------------------------------------------
if RANK in [-1, 0]:
LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')
for f in last, best:
if f.exists():
strip_optimizer(f) # strip optimizers
if f is best:
LOGGER.info(f'\nValidating {f}...')
results, _, _ = val.run(data_dict,
batch_size=batch_size // WORLD_SIZE * 2,
imgsz=imgsz,
model=attempt_load(f, device).half(),
iou_thres=0.65 if is_coco else 0.60, # best pycocotools results at 0.65
single_cls=single_cls,
dataloader=val_loader,
save_dir=save_dir,
save_json=is_coco,
verbose=True,
plots=True,
callbacks=callbacks,
compute_loss=compute_loss) # val best model with plots
if is_coco:
callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)
callbacks.run('on_train_end', last, best, plots, epoch, results)
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
torch.cuda.empty_cache()
return results
def parse_opt(known=False):
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default=ROOT / 'yolov3.pt', help='initial weights path')
parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch.yaml', help='hyperparameters path')
parser.add_argument('--epochs', type=int, default=300)
parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
parser.add_argument('--rect', action='store_true', help='rectangular training')
parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
parser.add_argument('--noval', action='store_true', help='only validate final epoch')
parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--quad', action='store_true', help='quad dataloader')
parser.add_argument('--linear-lr', action='store_true', help='linear LR')
parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. backbone=10, all=24')
parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
# Weights & Biases arguments
parser.add_argument('--entity', default=None, help='W&B: Entity')
parser.add_argument('--upload_dataset', action='store_true', help='W&B: Upload dataset as artifact table')
parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')
opt = parser.parse_known_args()[0] if known else parser.parse_args()
return opt
def main(opt, callbacks=Callbacks()):
# Checks
if RANK in [-1, 0]:
print_args(FILE.stem, opt)
check_git_status()
check_requirements(exclude=['thop'])
# Resume
if opt.resume and not check_wandb_resume(opt) and not opt.evolve: # resume an interrupted run
ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f:
opt = argparse.Namespace(**yaml.safe_load(f)) # replace
opt.cfg, opt.weights, opt.resume = '', ckpt, True # reinstate
LOGGER.info(f'Resuming training from {ckpt}')
else:
opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \
check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks
assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
if opt.evolve:
opt.project = str(ROOT / 'runs/evolve')
opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume
opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
# DDP mode
device = select_device(opt.device, batch_size=opt.batch_size)
if LOCAL_RANK != -1:
assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count'
assert not opt.image_weights, '--image-weights argument is not compatible with DDP training'
assert not opt.evolve, '--evolve argument is not compatible with DDP training'
torch.cuda.set_device(LOCAL_RANK)
device = torch.device('cuda', LOCAL_RANK)
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")
# Train
if not opt.evolve:
train(opt.hyp, opt, device, callbacks)
if WORLD_SIZE > 1 and RANK == 0:
LOGGER.info('Destroying process group... ')
dist.destroy_process_group()
# Evolve hyperparameters (optional)
else:
# Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
'box': (1, 0.02, 0.2), # box loss gain
'cls': (1, 0.2, 4.0), # cls loss gain
'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
'iou_t': (0, 0.1, 0.7), # IoU training threshold
'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
'scale': (1, 0.0, 0.9), # image scale (+/- gain)
'shear': (1, 0.0, 10.0), # image shear (+/- deg)
'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
'mosaic': (1, 0.0, 1.0), # image mosaic (probability)
'mixup': (1, 0.0, 1.0), # image mixup (probability)
'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability)
with open(opt.hyp, errors='ignore') as f:
hyp = yaml.safe_load(f) # load hyps dict
if 'anchors' not in hyp: # anchors commented in hyp.yaml
hyp['anchors'] = 3
opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch
# ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
if opt.bucket:
os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}') # download evolve.csv if exists
for _ in range(opt.evolve): # generations to evolve
if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate
# Select parent(s)
parent = 'single' # parent selection method: 'single' or 'weighted'
x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)
n = min(5, len(x)) # number of previous results to consider
x = x[np.argsort(-fitness(x))][:n] # top n mutations
w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0)
if parent == 'single' or len(x) == 1:
# x = x[random.randint(0, n - 1)] # random selection
x = x[random.choices(range(n), weights=w)[0]] # weighted selection
elif parent == 'weighted':
x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
# Mutate
mp, s = 0.8, 0.2 # mutation probability, sigma
npr = np.random
npr.seed(int(time.time()))
g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1
ng = len(meta)
v = np.ones(ng)
while all(v == 1): # mutate until a change occurs (prevent duplicates)
v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
hyp[k] = float(x[i + 7] * v[i]) # mutate
# Constrain to limits
for k, v in meta.items():
hyp[k] = max(hyp[k], v[1]) # lower limit
hyp[k] = min(hyp[k], v[2]) # upper limit
hyp[k] = round(hyp[k], 5) # significant digits
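# Mutation sketch: each gene is scaled by 1 + g * Bernoulli(0.8) * N(0,1) * U(0,1) * 0.2,
# clipped to [0.3, 3.0], so per generation a gene can shrink to at most 0.3x or grow to 3.0x
# before the meta lower/upper limits are enforced.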
# Train mutation
results = train(hyp.copy(), opt, device, callbacks)
# Write mutation results
print_mutation(results, hyp.copy(), save_dir, opt.bucket)
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(f'Hyperparameter evolution finished\n'
f"Results saved to {colorstr('bold', save_dir)}\n"
f'Use best hyperparameters example: $ python train.py --hyp {evolve_yaml}')
def run(**kwargs):
# Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov3.pt')
opt = parse_opt(True)
for k, v in kwargs.items():
setattr(opt, k, v)
main(opt)
if __name__ == "__main__":
opt = parse_opt()
main(opt)
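# Usage sketches (flags as defined in parse_opt above):
#   Single GPU:  $ python train.py --data coco128.yaml --weights yolov3.pt --img 640 --batch-size 16
#   DDP, 2 GPUs: $ python -m torch.distributed.run --nproc_per_node 2 train.py \
#                  --data coco128.yaml --weights yolov3.pt --device 0,1 --batch-size 32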

yolov3/tutorial.ipynb (vendored; new file, 1053 lines; diff suppressed because it is too large)

yolov3/utils/__init__.py (new file, 18 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
utils/initialization
"""
def notebook_init():
# For notebooks
print('Checking setup...')
from IPython import display # to display images and clear console output
from utils.general import emojis
from utils.torch_utils import select_device # imports
display.clear_output()
select_device(newline=False)
print(emojis('Setup complete ✅'))
return display

yolov3/utils/activations.py (new file, 101 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Activation functions
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
# SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
class SiLU(nn.Module): # export-friendly version of nn.SiLU()
@staticmethod
def forward(x):
return x * torch.sigmoid(x)
class Hardswish(nn.Module): # export-friendly version of nn.Hardswish()
@staticmethod
def forward(x):
# return x * F.hardsigmoid(x) # for torchscript and CoreML
return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0 # for torchscript, CoreML and ONNX
# Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
class Mish(nn.Module):
@staticmethod
def forward(x):
return x * F.softplus(x).tanh()
class MemoryEfficientMish(nn.Module):
class F(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_tensors[0]
sx = torch.sigmoid(x)
fx = F.softplus(x).tanh()
return grad_output * (fx + x * sx * (1 - fx * fx))
def forward(self, x):
return self.F.apply(x)
# FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
class FReLU(nn.Module):
def __init__(self, c1, k=3): # ch_in, kernel
super().__init__()
self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
self.bn = nn.BatchNorm2d(c1)
def forward(self, x):
return torch.max(x, self.bn(self.conv(x)))
# ACON https://arxiv.org/pdf/2009.04759.pdf ----------------------------------------------------------------------------
class AconC(nn.Module):
r""" ACON activation (activate or not).
AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
"""
def __init__(self, c1):
super().__init__()
self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))
def forward(self, x):
dpx = (self.p1 - self.p2) * x
return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x
class MetaAconC(nn.Module):
r""" ACON activation (activate or not).
MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
"""
def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r
super().__init__()
c2 = max(r, c1 // r)
self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True)
self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True)
# self.bn1 = nn.BatchNorm2d(c2)
# self.bn2 = nn.BatchNorm2d(c1)
def forward(self, x):
y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True)
# batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891
# beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y))))) # bug/unstable
beta = torch.sigmoid(self.fc2(self.fc1(y))) # bug patch BN layers removed
dpx = (self.p1 - self.p2) * x
return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
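# Minimal usage sketch (shapes illustrative): these modules are drop-in activation replacements,
# e.g. y = MetaAconC(c1=64)(torch.randn(8, 64, 32, 32)) returns a tensor of the same shape.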

yolov3/utils/augmentations.py (new file, 277 lines)
# YOLOv3 🚀 by Ultralytics, GPL-3.0 license
"""
Image augmentation functions
"""
import math
import random
import cv2
import numpy as np
from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box
from utils.metrics import bbox_ioa
class Albumentations:
# Albumentations class (optional, only used if package is installed)
def __init__(self):
self.transform = None
try:
import albumentations as A
check_version(A.__version__, '1.0.3', hard=True) # version requirement
self.transform = A.Compose([
A.Blur(p=0.01),
A.MedianBlur(p=0.01),
A.ToGray(p=0.01),
A.CLAHE(p=0.01),
A.RandomBrightnessContrast(p=0.0),
A.RandomGamma(p=0.0),
A.ImageCompression(quality_lower=75, p=0.0)],
bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
except ImportError: # package not installed, skip
pass
except Exception as e:
LOGGER.info(colorstr('albumentations: ') + f'{e}')
def __call__(self, im, labels, p=1.0):
if self.transform and random.random() < p:
new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
return im, labels
def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
# HSV color-space augmentation
if hgain or sgain or vgain:
r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
dtype = im.dtype # uint8
x = np.arange(0, 256, dtype=r.dtype)
lut_hue = ((x * r[0]) % 180).astype(dtype)
lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
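# Implementation note: gains are applied via 256-entry lookup tables (cv2.LUT) instead of
# per-pixel float math, with hue wrapped modulo 180 to stay in OpenCV's uint8 hue range [0, 179].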
def hist_equalize(im, clahe=True, bgr=False):
# Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255
yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
if clahe:
c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
yuv[:, :, 0] = c.apply(yuv[:, :, 0])
else:
yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
def replicate(im, labels):
# Replicate labels
h, w = im.shape[:2]
boxes = labels[:, 1:].astype(int)
x1, y1, x2, y2 = boxes.T
s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
x1b, y1b, x2b, y2b = boxes[i]
bh, bw = y2b - y1b, x2b - x1b
yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax]
labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
return im, labels
def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
# Resize and pad image while meeting stride-multiple constraints
shape = im.shape[:2] # current shape [height, width]
if isinstance(new_shape, int):
new_shape = (new_shape, new_shape)
# Scale ratio (new / old)
r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
if not scaleup: # only scale down, do not scale up (for better val mAP)
r = min(r, 1.0)
# Compute padding
ratio = r, r # width, height ratios
new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
if auto: # minimum rectangle
dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
elif scaleFill: # stretch
dw, dh = 0.0, 0.0
new_unpad = (new_shape[1], new_shape[0])
ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
dw /= 2 # divide padding into 2 sides
dh /= 2
if shape[::-1] != new_unpad: # resize
im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
return im, ratio, (dw, dh)
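# Worked example: a 720x1280 (h, w) image with new_shape=640 and auto=True gives r=0.5,
# new_unpad=(640, 360), dw=0, dh = 280 % 32 = 24 -> 12 px padding top and bottom, returning
# a 384x640 image with ratio=(0.5, 0.5).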
def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
border=(0, 0)):
# torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10))
# targets = [cls, xyxy]
height = im.shape[0] + border[0] * 2 # shape(h,w,c)
width = im.shape[1] + border[1] * 2
# Center
C = np.eye(3)
C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
# Perspective
P = np.eye(3)
P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
# Rotation and Scale
R = np.eye(3)
a = random.uniform(-degrees, degrees)
# a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
s = random.uniform(1 - scale, 1 + scale)
# s = 2 ** random.uniform(-scale, scale)
R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
# Shear
S = np.eye(3)
S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
# Translation
T = np.eye(3)
T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
# Combined rotation matrix
M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
if perspective:
im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114))
else: # affine
im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
# Visualize
# import matplotlib.pyplot as plt
# ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
# ax[0].imshow(im[:, :, ::-1]) # base
# ax[1].imshow(im2[:, :, ::-1]) # warped
# Transform label coordinates
n = len(targets)
if n:
use_segments = any(x.any() for x in segments)
new = np.zeros((n, 4))
if use_segments: # warp segments
segments = resample_segments(segments) # upsample
for i, segment in enumerate(segments):
xy = np.ones((len(segment), 3))
xy[:, :2] = segment
xy = xy @ M.T # transform
xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
# clip
new[i] = segment2box(xy, width, height)
else: # warp boxes
xy = np.ones((n * 4, 3))
xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
xy = xy @ M.T # transform
xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
# create new boxes
x = xy[:, [0, 2, 4, 6]]
y = xy[:, [1, 3, 5, 7]]
new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
# clip
new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
# filter candidates
i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
targets = targets[i]
targets[:, 1:5] = new[i]
return im, targets
def copy_paste(im, labels, segments, p=0.5):
# Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
n = len(segments)
if p and n:
h, w, c = im.shape # height, width, channels
im_new = np.zeros(im.shape, np.uint8)
for j in random.sample(range(n), k=round(p * n)):
l, s = labels[j], segments[j]
box = w - l[3], l[2], w - l[1], l[4]
ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
labels = np.concatenate((labels, [[l[0], *box]]), 0)
segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
result = cv2.bitwise_and(src1=im, src2=im_new)
result = cv2.flip(result, 1) # augment segments (flip left-right)
i = result > 0 # pixels to replace
# i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug
return im, labels, segments
def cutout(im, labels, p=0.5):
# Applies image cutout augmentation https://arxiv.org/abs/1708.04552
if random.random() < p:
h, w = im.shape[:2]
scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
for s in scales:
mask_h = random.randint(1, int(h * s)) # create random masks
mask_w = random.randint(1, int(w * s))
# box
xmin = max(0, random.randint(0, w) - mask_w // 2)
ymin = max(0, random.randint(0, h) - mask_h // 2)
xmax = min(w, xmin + mask_w)
ymax = min(h, ymin + mask_h)
# apply random color mask
im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
# return unobscured labels
if len(labels) and s > 0.03:
box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
labels = labels[ioa < 0.60] # remove >60% obscured labels
return labels
def mixup(im, labels, im2, labels2):
# Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
im = (im * r + im2 * (1 - r)).astype(np.uint8)
labels = np.concatenate((labels, labels2), 0)
return im, labels
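# Note: Beta(32, 32) concentrates r tightly around 0.5 (std ~0.06), so the two images blend
# nearly equally while the label sets of both are concatenated.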
def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
# Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
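# e.g. with the defaults, a box surviving augmentation must stay at least 2 px in each dimension,
# keep more than 10% of its pre-augment area, and have aspect ratio below 20.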

Some files were not shown because too many files have changed in this diff.