From 7e8fc146e1d610b76d9becaaba6b9b2ba8ba95aa Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Wed, 20 Mar 2019 13:35:39 +0200
Subject: [PATCH] Update README.md

---
 README.md | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index e39122ad..c5d7f059 100755
--- a/README.md
+++ b/README.md
@@ -45,21 +45,6 @@ Each epoch trains on 120,000 images from the train and validate COCO sets, and t
 `from utils import utils; utils.plot_results()`
 ![Alt](https://user-images.githubusercontent.com/26833433/53494085-3251aa00-3a9d-11e9-8af7-8c08cf40d70b.png "train.py results")
 
-# Speed
-
-https://cloud.google.com/deep-learning-vm/
-**Machine type:** n1-highmem-4 (4 vCPUs, 26 GB memory)
-**CPU platform:** Intel Skylake
-**GPUs:** 1-4 x NVIDIA Tesla P100
-**HDD:** 100 GB SSD
-
-GPUs | `batch_size` | speed | COCO epoch
---- |---| --- | ---
-(P100) | (images) | (s/batch) | (min/epoch)
-1 | 24 | 0.84s | 70min
-2 | 48 | 1.27s | 53min
-4 | 96 | 2.11s | 44min
-
 ## Image Augmentation
 
 `datasets.py` applies random OpenCV-powered (https://opencv.org/) augmentation to the input images in accordance with the following specifications. Augmentation is applied **only** during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples pictured below.
@@ -76,6 +61,21 @@ HS**V** Intensity | +/- 50%
 
+## Speed
+
+https://cloud.google.com/deep-learning-vm/
+**Machine type:** n1-highmem-4 (4 vCPUs, 26 GB memory)
+**CPU platform:** Intel Skylake
+**GPUs:** 1-4 x NVIDIA Tesla P100
+**HDD:** 100 GB SSD
+
+GPUs | `batch_size` | speed | COCO epoch
+--- |---| --- | ---
+(P100) | (images) | (s/batch) | (min/epoch)
+1 | 24 | 0.84s | 70min
+2 | 48 | 1.27s | 53min
+4 | 96 | 2.11s | 44min
+
 # Inference
 
 Run `detect.py` to apply trained weights to an image, such as `zidane.jpg` from the `data/samples` folder: