
SSD-MobileNet inference

Description

This document has instructions for running SSD-MobileNet inference using Intel-optimized TensorFlow.

Datasets

The COCO validation dataset is used in these SSD-MobileNet quickstart scripts. The accuracy quickstart script requires the dataset to be converted into the TF records format. See the COCO dataset instructions for downloading and preprocessing the COCO validation dataset.

Set the DATASET_DIR environment variable to point to the dataset directory that contains the TF records file coco_val.record when running the SSD-MobileNet accuracy script.
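
For example, a minimal check that the TF records file is in place before running the accuracy script (the paths are hypothetical; substitute the directory where you preprocessed COCO):

# Hypothetical location of the preprocessed dataset
ls $HOME/coco_dataset/coco_val.record
export DATASET_DIR=$HOME/coco_dataset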

Quick Start Scripts

| Script name | Description |
| --- | --- |
| `inference.sh` | Runs inference and outputs performance metrics. Uses synthetic data if no `DATASET_DIR` is set. Supported precisions: fp32, int8, bfloat16. |
| `accuracy.sh` | Measures the inference accuracy (providing a `DATASET_DIR` environment variable is required). Supported precisions: fp32, int8, bfloat16, bfloat32. |
| `inference_throughput_multi_instance.sh` | A multi-instance run that uses all the cores of each socket per instance, with a batch size of 448 and synthetic data (a topology check sketch follows this table). Supported precisions: fp32, int8, bfloat16, bfloat32. |
| `inference_realtime_multi_instance.sh` | A multi-instance run that uses 4 cores per instance with a batch size of 1. Uses synthetic data if no `DATASET_DIR` is set. Supported precisions: fp32, int8, bfloat16, bfloat32. |
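
The two multi-instance scripts partition the machine by socket and by groups of 4 cores, respectively. To inspect the topology they will work with, you can use standard Linux tools (this check is illustrative and not part of the quickstart scripts):

lscpu | grep -iE 'socket|core'   # sockets and cores per socket
numactl -H                       # NUMA nodes and the CPUs they own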

Run the model

Set up your environment using the instructions below, depending on whether you are using AI Tools:


To run using AI Tools on Linux you will need the following (a consolidated install sketch follows this list):

  • numactl
  • wget
  • build-essential
  • Cython
  • contextlib2
  • jupyter
  • lxml
  • matplotlib
  • pillow>=9.3.0
  • pycocotools
  • intel-extension-for-tensorflow (only required when using onednn graph optimization)
  • Activate the `tensorflow` conda environment
    conda activate tensorflow
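
As a consolidated install sketch for the list above, assuming a Debian/Ubuntu system (the apt and pip invocations are illustrative and not taken from this repo):

# System packages (assumes an apt-based distro)
sudo apt-get install -y numactl wget build-essential
# Python packages, inside the AI Tools conda environment
conda activate tensorflow
pip install Cython contextlib2 jupyter lxml matplotlib "pillow>=9.3.0" pycocotools
# Only needed when using oneDNN Graph optimization
pip install intel-extension-for-tensorflow[cpu]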

To run without AI Tools on Linux you will need the following (see the sketch after this list):

  • Python 3
  • git
  • numactl
  • wget
  • intel-tensorflow>=2.5.0
  • build-essential
  • Cython
  • contextlib2
  • jupyter
  • lxml
  • matplotlib
  • pillow>=9.3.0
  • pycocotools
  • intel-extension-for-tensorflow (only required when using onednn graph optimization)
  • A clone of the AI Reference Models repo
    git clone https://github.com/IntelAI/models.git
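
A minimal end-to-end sketch under the same assumptions (the virtual environment name is arbitrary; system packages install as in the previous sketch):

python3 -m venv ssd_env && . ssd_env/bin/activate
pip install "intel-tensorflow>=2.5.0" Cython contextlib2 jupyter lxml matplotlib "pillow>=9.3.0" pycocotools
git clone https://github.com/IntelAI/models.git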

To run without AI Tools on Windows, see the documentation on prerequisites in the TensorFlow models repo for more information on the dependencies.

Download the pretrained model and set the PRETRAINED_MODEL environment variable to the path of the frozen graph. If you run on Windows, use a browser to download the pretrained model from the links below. For Linux, run:

# FP32, BFloat16 and BFloat32 Pretrained model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/ssdmobilenet_fp32_pretrained_model_combinedNMS.pb
export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_fp32_pretrained_model_combinedNMS.pb

# Int8 Pretrained model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb
export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb

# Int8 pretrained model for oneDNN Graph (use only when the Intel Extension for TensorFlow plugin is installed, since oneDNN Graph optimization is then enabled by default)
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/2_12_0/ssd_mb_itex_int8.pb
export PRETRAINED_MODEL=$(pwd)/ssd_mb_itex_int8.pb
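
Since the fp32 graph also serves the bfloat16 and bfloat32 runs, the selection can be scripted; the following `case` dispatch is an illustrative sketch, not part of the quickstart scripts:

# Pick the downloaded graph that matches PRECISION (illustrative only)
case "$PRECISION" in
  int8) export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb ;;
  *)    export PRETRAINED_MODEL=$(pwd)/ssdmobilenet_fp32_pretrained_model_combinedNMS.pb ;;  # fp32, bfloat16, bfloat32
esac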

After installing the prerequisites and downloading the pretrained model, set the environment variables. For DATASET_DIR, use the raw COCO dataset directory when running the inference scripts, or the TF records file when running the accuracy script. Then navigate to your AI Reference Models directory and run a quickstart script on either Linux or Windows.

Run on Linux

# cd to your AI Reference Models directory
cd models

export PRETRAINED_MODEL=<path to the downloaded frozen graph>
export DATASET_DIR=<path to the coco tf record file>
export PRECISION=<set the precision to "int8" or "fp32" or "bfloat16" or "bfloat32">
export OUTPUT_DIR=<path to the directory where log files will be written>
# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/<script name>.sh
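
For example, a filled-in int8 accuracy run might look like the following (all paths are hypothetical; DATASET_DIR points at the directory holding coco_val.record):

cd models
export PRETRAINED_MODEL=$HOME/ssdmobilenet_int8_pretrained_model_combinedNMS_s8.pb
export DATASET_DIR=$HOME/coco_dataset
export PRECISION=int8
export OUTPUT_DIR=$HOME/ssd_mobilenet_logs
./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/accuracy.sh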

Run on Windows

Using cmd.exe, run:

# cd to your AI Reference Models directory
cd models

set PRETRAINED_MODEL=<path to the pretrained model pb file>
set DATASET_DIR=<path to the coco tf record file>
set OUTPUT_DIR=<directory where log files will be written>
set PRECISION=<set the precision to "int8" or "fp32" or "bfloat16">
# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
set BATCH_SIZE=<customized batch size value>

# Run a quickstart script (inference.sh)
bash quickstart\object_detection\tensorflow\ssd-mobilenet\inference\cpu\inference.sh

Note: You may use cygpath to convert the Windows paths to Unix paths before setting the environment variables. As an example, if the dataset location on Windows is D:\user\coco_dataset\coco_val.record, convert the Windows path to Unix as shown:

cygpath D:\user\coco_dataset\coco_val.record
/d/user/coco_dataset/coco_val.record

Then, set the DATASET_DIR environment variable: set DATASET_DIR=/d/user/coco_dataset/coco_val.record.

Additional Resources