Model | Download | Download (with sample test data) | ONNX version | Opset version | Top-1 accuracy (%) |
---|---|---|---|---|---|
Inception-1 | 28 MB | 29 MB | 1.1 | 3 | |
Inception-1 | 28 MB | 29 MB | 1.1.2 | 6 | |
Inception-1 | 28 MB | 29 MB | 1.2 | 7 | |
Inception-1 | 28 MB | 29 MB | 1.3 | 8 | |
Inception-1 | 28 MB | 29 MB | 1.4 | 9 | |
Inception-1 | 27 MB | 25 MB | 1.9 | 12 | 67.23 |
Inception-1-int8 | 10 MB | 9 MB | 1.9 | 12 | 67.24 |
Inception-1-qdq | 7 MB | 5 MB | 1.12 | 12 | 67.21 |
Compared with the fp32 Inception-1, the int8 Inception-1's Top-1 accuracy drop ratio is -0.01% (the int8 model is marginally more accurate) and the performance improvement is 1.26x.
Note: The performance depends on the test hardware. The performance data here was collected on an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance) running CentOS Linux 8.3, with a data batch size of 1.
Inception v1 is a reproduction of GoogLeNet.
Caffe2 Inception v1 ==> ONNX Inception v1
ONNX Inception v1 ==> Quantized ONNX Inception v1
Input: data_0: float[1, 3, 224, 224]
Output: prob_1: float[1, 1000]
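For orientation, here is a minimal inference sketch with onnxruntime that exercises exactly these shapes. It assumes the fp32 model file inception-v1-12.onnx (downloaded as shown below) sits in the working directory, and it feeds random values; a real input would be a properly preprocessed 224x224 image.

```python
import numpy as np
import onnxruntime as ort

# Assumes the fp32 model was downloaded as shown in the
# quantization instructions below.
session = ort.InferenceSession("inception-v1-12.onnx")

# data_0: float32 [1, 3, 224, 224]. Random values stand in for a
# properly preprocessed image.
data_0 = np.random.rand(1, 3, 224, 224).astype(np.float32)

# prob_1: float32 [1, 1000] class probabilities.
(prob_1,) = session.run(None, {"data_0": data_0})
print(prob_1.shape)          # (1, 1000)
print(int(prob_1.argmax()))  # top-1 class index
```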
Randomly generated sample test data (a loading sketch follows this list):
- test_data_0.npz
- test_data_1.npz
- test_data_2.npz
- test_data_set_0
- test_data_set_1
- test_data_set_2
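Both layouts can be read back with numpy and onnx. The sketch below assumes the ONNX Model Zoo conventions: each .npz archive holds inputs and outputs arrays, and each test_data_set_* directory holds serialized TensorProto files such as input_0.pb and output_0.pb.

```python
import numpy as np
import onnx
from onnx import numpy_helper

# .npz archives: 'inputs' and 'outputs' arrays (zoo convention).
sample = np.load("test_data_0.npz", encoding="bytes", allow_pickle=True)
inputs, outputs = sample["inputs"], sample["outputs"]

# test_data_set_* directories: serialized onnx.TensorProto files.
tensor = onnx.TensorProto()
with open("test_data_set_0/input_0.pb", "rb") as f:
    tensor.ParseFromString(f.read())
input_array = numpy_helper.to_array(tensor)
```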
Inception-1-int8 and Inception-1-qdq are obtained by quantizing the fp32 Inception-1 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
- onnx: 1.9.0
- onnxruntime: 1.8.0
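A quick sanity check that the installed versions match the ones above:

```python
import onnx
import onnxruntime

print("onnx:", onnx.__version__)                # expected: 1.9.0
print("onnxruntime:", onnxruntime.__version__)  # expected: 1.8.0
```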
Download the fp32 model:

```bash
wget https://github.com/onnx/models/raw/main/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-12.onnx
```
Make sure to specify the appropriate dataset path in the configuration file.
```bash
# input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=inception_v1.yaml \
                   --data_path=/path/to/imagenet \
                   --label_path=/path/to/imagenet/label \
                   --output_model=path/to/save
```
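For reference, the same accuracy-driven tuning can also be scripted against Intel® Neural Compressor's 1.x experimental API. This is an illustrative sketch, not the project's official recipe; the yaml file name matches the one above, while the output file name is an assumption.

```python
import onnx
from neural_compressor.experimental import Quantization, common

# inception_v1.yaml is the same config consumed by run_tuning.sh;
# its dataset paths must point at your local ImageNet copy.
quantizer = Quantization("inception_v1.yaml")
quantizer.model = common.Model(onnx.load("inception-v1-12.onnx"))
q_model = quantizer.fit()  # runs calibration and accuracy-aware tuning
q_model.save("inception-v1-12-int8.onnx")  # assumed output name
```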
Contributors:
- mengniwang95 (Intel)
- airMeng (Intel)
- ftian1 (Intel)
- hshen14 (Intel)
License: MIT