Corrected performance terms
mhbuehler committed May 29, 2019
1 parent c6073c0 commit fa55b06
Showing 31 changed files with 161 additions and 164 deletions.
4 changes: 2 additions & 2 deletions benchmarks/adversarial_networks/tensorflow/dcgan/README.md
@@ -46,7 +46,7 @@ precision, and docker image to use, along with your path to the external model d
for `--model-source-dir` (from step 1) `--data-location` (from step 2), and `--checkpoint` (from step 3).


-Run the model script for throughput and latency with `--batch-size=100` :
+Run the model script for batch and online inference with `--batch-size=100` :
```
$ cd /home/<user>/models/benchmarks
@@ -65,7 +65,7 @@ $ python launch_benchmark.py \

5. Log files are located at the value of `--output-dir`.

-Below is a sample log file tail when running for throughput:
+Below is a sample log file tail when running for batch inference:
```
Batch size: 100
Batches number: 500
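For reference, a complete DCGAN invocation would look something like the sketch below. The collapsed portion of this hunk hides the full command, so the `--model-name`, `--framework` value, and docker image tag here are assumptions based on the flag conventions the other READMEs in this commit use; substitute your own paths from steps 1-3.
```
# A sketch only: --model-name, --framework, and the image tag are
# assumptions, not taken from the collapsed lines of this hunk.
$ cd /home/<user>/models/benchmarks
$ python launch_benchmark.py \
    --model-name dcgan \
    --precision fp32 \
    --mode inference \
    --framework tensorflow \
    --batch-size 100 \
    --socket-id 0 \
    --data-location /home/<user>/dataset \
    --checkpoint /home/<user>/checkpoints \
    --model-source-dir /home/<user>/models \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```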
10 changes: 5 additions & 5 deletions benchmarks/content_creation/tensorflow/draw/README.md
@@ -36,12 +36,12 @@ modes/precisions:
$ cd models/benchmarks
```

-4. Run the model for either throughput or latency using the commands
+4. Run the model for either batch or online inference using the commands
below. Replace in the path to the `--data-location` with your `mnist`
dataset directory from step 1 and the `--checkpoint` files that you
downloaded and extracted in step 2.

-* Run DRAW for latency (with `--batch-size 1`):
+* Run DRAW for online inference (with `--batch-size 1`):
```
python launch_benchmark.py \
--precision fp32 \
@@ -54,7 +54,7 @@ modes/precisions:
--batch-size 1 \
--socket-id 0
```
-* Run DRAW for throughput (with `--batch-size 100`):
+* Run DRAW for batch inference (with `--batch-size 100`):
```
python launch_benchmark.py \
--precision fp32 \
@@ -72,7 +72,7 @@ modes/precisions:
5. The log files for each run are saved at the value of `--output-dir`.
-* Below is a sample log file tail when testing latency:
+* Below is a sample log file tail when testing online inference:
```
...
Elapsed Time 0.006622
@@ -88,7 +88,7 @@ modes/precisions:
Log location outside container: {--output-dir value}/benchmark_draw_inference_fp32_20190123_012947.log
```
-* Below is a sample log file tail when testing throughput:
+* Below is a sample log file tail when testing batch inference:
```
Elapsed Time 0.028355
Elapsed Time 0.028221
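Once a run finishes, the quickest check is to tail the log file named in the sample output; a minimal sketch, assuming the timestamped file name pattern shown above:
```
# The name pattern comes from the sample log location above; the
# timestamp suffix differs on every run, hence the wildcard.
$ tail -n 20 {--output-dir value}/benchmark_draw_inference_fp32_*.log
```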
@@ -43,10 +43,10 @@ precision, and docker image.
Substitute in your own `--checkpoint` pretrained model file path (from step 3),
and `--data-location` (from step 4).

-FaceNet can be run for testing latency, throughput, or accuracy.
+FaceNet can be run for testing online inference, batch inference, or accuracy.
Use one of the following examples below, depending on your use case.

-* For latency (using `--batch-size 1`):
+* For online inference (using `--batch-size 1`):

```
python launch_benchmark.py \
@@ -61,7 +61,7 @@ python launch_benchmark.py \
--model-source-dir /home/<user>/facenet/ \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
-Example log tail for latency:
+Example log tail for online inference:
```
Batch 979 elapsed Time 0.0297989845276
Batch 989 elapsed Time 0.029657125473
@@ -83,7 +83,7 @@ Ran inference with batch size 1
Log location outside container: {--output-dir value}/benchmark_facenet_inference_fp32_20190328_205911.log
```

-* For throughput (using `--batch-size 100`):
+* For batch inference (using `--batch-size 100`):

```
python launch_benchmark.py \
@@ -98,7 +98,7 @@ python launch_benchmark.py \
--model-source-dir /home/<user>/facenet/ \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
-Example log tail for throughput:
+Example log tail for batch inference:
```
Batch 219 elapsed Time 0.446497917175
Batch 229 elapsed Time 0.422048091888
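The FaceNet log reports elapsed time per batch, so the same figure reads as per-image latency for online inference (batch size 1) and converts to throughput for batch inference. A quick conversion, assuming `elapsed Time` is seconds per batch:
```
# For the batch-size-100 run above, 0.422 sec/batch works out to
# roughly 100 / 0.422 = 237 images/sec.
$ echo "0.422048091888" | awk '{printf "%.1f images/sec at batch size 100\n", 100 / $1}'
```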
@@ -60,7 +60,7 @@ Run:

6. The log file is saved to the value of `--output-dir`.

-Below is a sample log file tail when running for model throughput, latency, and accuracy:
+Below is a sample log file tail when running for batch inference, online inference, and accuracy:

```
time cost 0.459 pnet 0.166 rnet 0.144 onet 0.149
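In the sample line above, the three per-stage times (pnet, rnet, onet) add up to the reported total; a quick check of that, assuming the log format shown:
```
# Fields: "time cost 0.459 pnet 0.166 rnet 0.144 onet 0.149"
# The three stage times should sum to the leading total.
$ echo "time cost 0.459 pnet 0.166 rnet 0.144 onet 0.149" | \
    awk '{printf "stages sum to %.3f (reported total %.3f)\n", $5 + $7 + $9, $3}'
```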
@@ -22,8 +22,7 @@ $ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/inceptio
```

3. If you would like to run Inception ResNet V2 inference and test for
-accuracy, you will need the full ImageNet dataset. Running for latency
-and throughput do not require the ImageNet dataset.
+accuracy, you will need the full ImageNet dataset. Running for online and batch inference performance do not require the ImageNet dataset.

Register and download the
[ImageNet dataset](http://image-net.org/download-images).
@@ -66,7 +65,7 @@ only) and `--in-graph` pre-trained model file path (from step 2). Note
that the docker image in the commands below is built using MKL PRs that
are required to run Inception ResNet V2 Int8.

-Inception ResNet V2 can be run for accuracy, latency, or throughput.
+Inception ResNet V2 can be run for accuracy, online inference, or batch inference.
Use one of the following examples below, depending on your use case.

For accuracy (using your `--data-location`, `--accuracy-only` and
@@ -85,7 +84,7 @@ python launch_benchmark.py \
--data-location /home/<user>/datasets/ImageNet_TFRecords
```

-For latency (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):
+For online inference (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):

```
python launch_benchmark.py \
@@ -100,7 +99,7 @@ python launch_benchmark.py \
--in-graph /home/<user>/inception_resnet_v2_int8_pretrained_model.pb
```

-For throughput (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):
+For batch inference (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):

```
python launch_benchmark.py \
@@ -134,7 +133,7 @@ Ran inference with batch size 100
Log location outside container: <output directory>/benchmark_inception_resnet_v2_inference_int8_20190330_012925.log
```
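Pieced together from the fragments visible in this hunk, a full Int8 accuracy run would look roughly like the sketch below; treat it as a sketch, since the `--model-name` is inferred from the log file names above rather than taken from the collapsed command.
```
# A sketch: --model-name is an assumption inferred from the log names;
# the remaining flags and paths all appear elsewhere in this commit.
$ python launch_benchmark.py \
    --model-name inception_resnet_v2 \
    --precision int8 \
    --mode inference \
    --framework tensorflow \
    --accuracy-only \
    --batch-size 100 \
    --socket-id 0 \
    --in-graph /home/<user>/inception_resnet_v2_int8_pretrained_model.pb \
    --data-location /home/<user>/datasets/ImageNet_TFRecords
```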

-Example log tail when running for latency:
+Example log tail when running for online inference:
```
...
Iteration 37: 0.046 sec
@@ -149,7 +148,7 @@ Ran inference with batch size 1
Log location outside container: <output directory>/benchmark_inception_resnet_v2_inference_int8_20190330_012557.log
```

-Example log tail when running for throughput:
+Example log tail when running for batch inference:
```
...
Iteration 37: 0.975 sec
@@ -183,16 +182,16 @@ For accuracy:
$ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/inception_resnet_v2_fp32_pretrained_model.pb
```

-For throughput and latency:
+For batch and online inference:

```
$ wget http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz
$ mkdir -p checkpoints && tar -C ./checkpoints/ -zxf inception_resnet_v2_2016_08_30.tar.gz
```

3. If you would like to run Inception ResNet V2 inference and test for
-accuracy, you will need the full ImageNet dataset. Running for latency
-and throughput do not require the ImageNet dataset.
+accuracy, you will need the full ImageNet dataset. Running for online
+and batch inference do not require the ImageNet dataset.

Register and download the
[ImageNet dataset](http://image-net.org/download-images).
@@ -233,7 +232,7 @@ TF Records that you generated in step 3.
Substitute in your own `--data-location` (from step 3, for accuracy
only), `--checkpoint` pre-trained model checkpoint file path (from step 2).

-Inception ResNet V2 can be run for accuracy, latency, or throughput.
+Inception ResNet V2 can be run for accuracy, online inference, or batch inference.
Use one of the following examples below, depending on your use case.

For accuracy (using your `--data-location`, `--accuracy-only` and
@@ -252,7 +251,7 @@ python launch_benchmark.py \
--data-location /home/<user>/datasets/ImageNet_TFRecords
```

-For latency (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):
+For online inference (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):

```
python launch_benchmark.py \
@@ -268,7 +267,7 @@ python launch_benchmark.py \
--data-location /home/<user>/datasets/ImageNet_TFRecords
```

-For throughput (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):
+For batch inference (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):

```
python launch_benchmark.py \
@@ -304,7 +303,7 @@ Ran inference with batch size 100
Log location outside container: {--output-dir value}/benchmark_inception_resnet_v2_inference_fp32_20190109_081637.log
```

-Example log tail when running for latency:
+Example log tail when running for online inference:
```
eval/Accuracy[0]
eval/Recall_5[0.01]
@@ -319,7 +318,7 @@ Ran inference with batch size 1
Log location outside container: {--output-dir value}/benchmark_inception_resnet_v2_inference_fp32_20190108_015057.log
```

-Example log tail when running for throughput:
+Example log tail when running for batch inference:
```
eval/Accuracy[0.00078125]
eval/Recall_5[0.00375]
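Because the FP32 instructions pair a batch-size-1 (online inference) run with a batch-size-128 (batch inference) run, both numbers can be collected in one go with a small loop; a sketch, assuming the flags shown above and the `./checkpoints` directory from step 2:
```
# Runs online inference (batch size 1) and batch inference (batch
# size 128) back to back; --model-name and the image tag are assumptions.
$ for bs in 1 128; do
    python launch_benchmark.py \
        --model-name inception_resnet_v2 \
        --precision fp32 \
        --mode inference \
        --framework tensorflow \
        --benchmark-only \
        --batch-size $bs \
        --socket-id 0 \
        --checkpoint /home/<user>/checkpoints \
        --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
  done
```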
28 changes: 14 additions & 14 deletions benchmarks/image_recognition/tensorflow/inceptionv3/README.md
@@ -78,7 +78,7 @@ only), `--in-graph` pretrained model file path (from step 3) and
[tensorflow/models](https://github.com/tensorflow/models) repo
(from step 2).

-Inception V3 can be run for accuracy, latency, or throughput.
+Inception V3 can be run for accuracy, online inference, or batch inference.
Use one of the following examples below, depending on your use case.

For accuracy (using your `--data-location`, `--accuracy-only` and
@@ -102,7 +102,7 @@ number of `warmup_steps` and `steps` as extra args, as shown in the
commands below. If these values are not specified, the script will
default to use `warmup_steps=10` and `steps=50`.

-For latency with ImageNet data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):
+For online inference with ImageNet data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):

```
python launch_benchmark.py \
Expand All @@ -119,7 +119,7 @@ python launch_benchmark.py \
-- warmup_steps=50 steps=500
```

-For latency with dummy data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`), remove `--data-location` argument:
+For online inference with dummy data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`), remove `--data-location` argument:

```
python launch_benchmark.py \
Expand All @@ -135,7 +135,7 @@ python launch_benchmark.py \
-- warmup_steps=50 steps=500
```

-For throughput with ImageNet data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):
+For batch inference with ImageNet data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):

```
python launch_benchmark.py \
Expand All @@ -152,7 +152,7 @@ python launch_benchmark.py \
-- warmup_steps=50 steps=500
```

-For throughput with dummy data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`), remove `--data-location` argument::
+For batch inference with dummy data (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`), remove `--data-location` argument::

```
python launch_benchmark.py \
Expand Down Expand Up @@ -193,7 +193,7 @@ Ran inference with batch size 100
Log location outside container: {--output-dir value}/benchmark_inceptionv3_inference_int8_20190104_013246.log
```

-Example log tail when running for latency:
+Example log tail when running for online inference:
```
...
steps = 470, 53.7256017113 images/sec
Expand All @@ -206,7 +206,7 @@ Ran inference with batch size 1
Log location outside container: {--output-dir value}/benchmark_inceptionv3_inference_int8_20190223_194002.log
```

-Example log tail when running for throughput:
+Example log tail when running for batch inference:
```
...
steps = 470, 370.435654276 images/sec
Expand Down Expand Up @@ -234,8 +234,8 @@ $ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/inceptio
```

3. If you would like to run Inception V3 FP32 inference and test for
-accuracy, you will need the ImageNet dataset. Running for latency
-and throughput do not require the ImageNet dataset. Instructions for
+accuracy, you will need the ImageNet dataset. Running for online
+and batch inference do not require the ImageNet dataset. Instructions for
downloading the dataset and converting it to the TF Records format can
be found in the TensorFlow documentation
[here](https://github.com/tensorflow/models/tree/master/research/slim#an-automated-script-for-processing-imagenet-data).
@@ -249,10 +249,10 @@ precision, and docker image.

Substitute in your own `--in-graph` pretrained model file path (from step 2).

-Inception V3 can be run for latency, throughput, or accuracy. Use one of the following examples below,
+Inception V3 can be run for online inference, batch inference, or accuracy. Use one of the following examples below,
depending on your use case.

-* For latency with dummy data (using `--batch-size 1`):
+* For online inference with dummy data (using `--batch-size 1`):

```
python launch_benchmark.py \
Expand All @@ -265,7 +265,7 @@ python launch_benchmark.py \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl \
--in-graph /home/<user>/inceptionv3_fp32_pretrained_model.pb
```
-Example log tail when running for latency:
+Example log tail when running for online inference:
```
Inference with dummy data.
Iteration 1: 1.075 sec
@@ -285,7 +285,7 @@ Ran inference with batch size 1
Log location outside container: {--output-dir value}/benchmark_inceptionv3_inference_fp32_20190104_025220.log
```

-* For throughput with dummy data (using `--batch-size 128`):
+* For batch inference with dummy data (using `--batch-size 128`):

```
python launch_benchmark.py \
@@ -298,7 +298,7 @@ python launch_benchmark.py \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl \
--in-graph /home/<user>/inceptionv3_fp32_pretrained_model.pb
```
-Example log tail when running for throughput:
+Example log tail when running for batch inference:
```
Inference with dummy data.
Iteration 1: 2.024 sec
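Across these READMEs the benchmark logs print one `Iteration N: X sec` line per step, so a run can be summarized by averaging those lines straight out of the log; a sketch, assuming that format and the log naming convention used above:
```
# Averages the per-iteration times from a benchmark log; the path
# placeholder follows the same {--output-dir value} convention as above.
$ grep '^Iteration' {--output-dir value}/benchmark_inceptionv3_inference_fp32_*.log | \
    awk '{sum += $3; n++} END {printf "mean %.3f sec over %d iterations\n", sum / n, n}'
```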