- Ubuntu 20.04
- cmake 3.23.2
- gcc/g++ 9.4.0
- Python 3.8.10
- MLPerf™ Inference Benchmark Suite v3.0
- OpenVINO Toolkit 2023.0
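As a quick sanity check, the installed toolchain can be compared against the versions above (standard Ubuntu commands; exact patch levels may differ on your system):

```bash
# Check that the local toolchain roughly matches the versions listed above
lsb_release -d      # Ubuntu 20.04
cmake --version     # cmake version 3.23.2
gcc --version       # gcc 9.4.0
g++ --version       # g++ 9.4.0
python3 --version   # Python 3.8.10
```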
The MLPerf Inference Rules are available in the MLCommons inference policies repository.
Area | Task | Model | Dataset | SingleStream | MultiStream | Server | Offline |
---|---|---|---|---|---|---|---|
Vision | Image classification | ResNet50-v1.5 | ImageNet (224x224) | ✔️ | ✔️ | ✔️ | ✔️ |
Vision | Object detection | RetinaNet | OpenImages (800x800) | ✔️ | ✔️ | ✔️ | ✔️ |
Vision | Medical image segmentation | 3D UNET | KiTS 2019 (602x512x512) | ✔️ | ❌ | ❌ | ✔️ |
Language | Language processing | BERT-large | SQuAD v1.1 (max_seq_len=384) | ✔️ | ✔️ | ✔️ | ✔️ |

✔️ - supported, ❌ - not supported
- Performance
- Accuracy
  - User first runs the benchmark in Accuracy mode to generate `mlperf_log_accuracy.json` (see the example below)
  - User then runs a dedicated accuracy tool provided by MLPerf
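For illustration, an accuracy run for ResNet50 might look like the sketch below; the checker script shown (`accuracy-imagenet.py`) ships with the MLCommons inference repository, and the `-e Accuracy` argument and the placeholder paths are assumptions based on the run command documented further down.

```bash
# 1) Run the benchmark in Accuracy mode to produce mlperf_log_accuracy.json
./scripts/run.sh -m resnet50 -d CPU -s Offline -e Accuracy

# 2) Evaluate the log with the MLPerf-provided accuracy tool for ImageNet
#    (tools/accuracy-imagenet.py in the MLCommons inference repository)
python3 accuracy-imagenet.py \
    --mlperf-accuracy-file <results_dir>/mlperf_log_accuracy.json \
    --imagenet-val-file <imagenet_dir>/val_map.txt
```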
Use the following settings to optimize performance on CPX/ICX systems. These BKCs (best known configurations) are provided in the `performance.sh` script mentioned in How to Build and Run.
- Turbo ON:

      echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo

- Set the CPU governor to performance (please rerun this command after reboot):

      echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

  OR

      cpupower frequency-set -g performance
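A minimal sketch that applies both settings in one place is shown below; the repository's `scripts/performance.sh` may contain additional tuning beyond these two knobs.

```bash
#!/bin/bash
# Sketch of the CPX/ICX BKCs above -- not the repository's performance.sh
set -e

# Keep turbo enabled (writing 0 to no_turbo means turbo is NOT disabled)
echo 0 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

# Set the performance governor on all cores; rerun after every reboot
if command -v cpupower >/dev/null 2>&1; then
    sudo cpupower frequency-set -g performance
else
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
fi
```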
- Navigate to the root repository directory. This directory is your `BUILD_DIRECTORY`.

- Run the build script:

      ./build.sh

  NOTE: sudo privileges are required.

- Modify `BUILD_DIRECTORY` in `scripts/setup_env.sh` (if necessary) and source it:

      source scripts/setup_env.sh

- Run the performance script for CPX/ICX systems:

      ./scripts/performance.sh

- Download models:

      ./scripts/download_models.sh [specific model]

- Download datasets:

      ./scripts/download_datasets.sh [specific dataset]

- Modify `./scripts/run.sh` to apply the desired parameters. The following OpenVINO parameters should be adjusted based on the selected hardware target (see the sketch after this list):
  - number of streams
  - number of infer requests
  - number of threads
  - inference precision

- Update the MLPerf parameters (`user.conf` and `mlperf.conf`) if needed.

- Run:

      ./scripts/run.sh -m <model> -d <device> -s <scenario> -e <mode>

  For example (results will be stored in the `${BUILD_DIRECTORY}/results/resnet50/CPU/Performance/SingleStream` folder):

      ./scripts/run.sh -m resnet50 -d CPU -s SingleStream -e Performance

  To run all combinations of models/devices/scenarios/modes:

      ./scripts/run_all.sh
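As a purely illustrative sketch of the tuning step above, the OpenVINO-related knobs could be grouped in `run.sh` along these lines; the variable names and values are hypothetical, so check the actual script for the real names and for how they are passed to the benchmark binary.

```bash
# Hypothetical tuning block for scripts/run.sh -- names and values are illustrative only
NSTREAMS=8               # number of OpenVINO execution streams
NIREQ=8                  # number of infer requests kept in flight (usually >= NSTREAMS)
NTHREADS=$(nproc)        # number of CPU threads available to inference
INFER_PRECISION=bf16     # inference precision, e.g. f32 / bf16 / int8
```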
NOTE: This product is not for production use and the scripts are provided as examples only. For reporting MLPerf results, dedicated scripts with suitable parameters should be prepared for each model.
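Putting the steps together, a hypothetical end-to-end session for ResNet50 on CPU could look like the following; the arguments to the download scripts are assumptions, so use whatever names those scripts actually accept.

```bash
cd <BUILD_DIRECTORY>                      # repository root
./build.sh                                # build (requires sudo)
source scripts/setup_env.sh               # set BUILD_DIRECTORY and friends
./scripts/performance.sh                  # apply the CPX/ICX BKCs
./scripts/download_models.sh resnet50     # model name is an assumed argument
./scripts/download_datasets.sh imagenet   # dataset name is an assumed argument
./scripts/run.sh -m resnet50 -d CPU -s Offline -e Performance
```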