
[Performance]: INT8 hifiGAN quantized by NNCF runs much slower than BF16 with OpenVINO on CPU #25197

Open
SakuraYM opened this issue Jun 25, 2024 · 12 comments
Labels: category: NNCF, performance, support_request

Comments

@SakuraYM

OpenVINO Version

2024.0.0

Operating System

Ubuntu 22.04 (LTS)

Device used for inference

CPU

OpenVINO installation

PyPi

Programming Language

Python

Hardware Architecture

x86 (64 bits)

Model used

hifiGAN vocoder

Model quantization

Yes

Target Platform

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8592+
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 2
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 3800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 6 MiB (128 instances)
L1i: 4 MiB (128 instances)
L2: 256 MiB (128 instances)
L3: 640 MiB (2 instances)
NUMA:
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Srbds: Not affected
Tsx async abort: Not affected

Performance issue description

benchmark_app shows that inference with the INT8-quantized hifiGAN model is much slower than with BF16:

[screenshot: benchmark_app results with AMX_BF16]

[screenshot: benchmark_app results with AMX_INT8]

The same slowdown also shows up during the model compilation phase.
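(For reference, a comparison like the one in the screenshots can also be reproduced from Python with the OpenVINO runtime API. This is only a minimal sketch: the IR file names and the static, single-input model are assumptions, and INFERENCE_PRECISION_HINT is what selects BF16 execution on CPU.)

```python
import time

import numpy as np
import openvino as ov  # OpenVINO 2024.x Python API

core = ov.Core()

def avg_latency_ms(xml_path, config=None, n_iters=20):
    # Compile for CPU; assumes a static-shaped, single-input IR.
    compiled = core.compile_model(xml_path, "CPU", config or {})
    data = np.random.randn(*compiled.input(0).shape).astype(np.float32)
    compiled(data)  # warm-up run
    start = time.perf_counter()
    for _ in range(n_iters):
        compiled(data)
    return (time.perf_counter() - start) / n_iters * 1000

# "hifigan.xml" / "hifigan_int8.xml" are hypothetical file names.
print("BF16:", avg_latency_ms("hifigan.xml",
                              config={"INFERENCE_PRECISION_HINT": "bf16"}))
print("INT8:", avg_latency_ms("hifigan_int8.xml"))
```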

Step-by-step reproduction

This is the NNCF quantization code for hifiGAN:

nncf_hifigan.txt
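(The attached script is not reproduced in this copy of the thread. A minimal sketch of what NNCF post-training quantization with dummy calibration data might look like, given the default settings and Mixed preset mentioned later in the thread, follows; the file names and the (1, 80, 128) input shape are assumptions, not taken from the attachment.)

```python
import numpy as np
import openvino as ov
import nncf

core = ov.Core()
model = core.read_model("hifigan.xml")  # hypothetical file name

# 300 random "mel-spectrogram" tensors as dummy calibration data; the
# (1, 80, 128) shape is an assumption, not taken from the attachment.
calibration = nncf.Dataset(
    [np.random.randn(1, 80, 128).astype(np.float32) for _ in range(300)]
)

# Default settings with the Mixed preset, as discussed in this thread.
quantized = nncf.quantize(model, calibration,
                          preset=nncf.QuantizationPreset.MIXED)
ov.save_model(quantized, "hifigan_int8.xml")
```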

Issue submission checklist

  • I'm reporting a performance issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@SakuraYM SakuraYM added performance Performance related topics support_request labels Jun 25, 2024
@rkazants rkazants added the category: NNCF Tasks related to NNCF tool label Jun 25, 2024
@rkazants
Contributor

@MaximProshin, @AlexKoff88, please take a look at this issue.

@MaximProshin
Contributor

MaximProshin commented Jun 25, 2024

This is a performance issue. As far as I understand, the model was quantized with default settings and the Mixed preset. The CPU plugin team should investigate it. @wenjiew @dmitry-gorokhov, can someone from your side check why the INT8 performance is so bad?

@MaximProshin
Contributor

@SakuraYM, did you validate the model after quantization? Is it accurate?

@SakuraYM
Author

did you validate the model after quantization? Is it accurate?

No, we used dummy inputs to quantize hifiGAN; we just want to estimate the best-case performance improvement.
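(A quick accuracy sanity check along the lines of that question could compare both IRs on the same dummy input. A sketch, assuming hypothetical file names and a static single-input model:)

```python
import numpy as np
import openvino as ov

core = ov.Core()
ref = core.compile_model("hifigan.xml", "CPU")        # hypothetical paths
quant = core.compile_model("hifigan_int8.xml", "CPU")

# Run the same dummy input through both models and compare outputs.
data = np.random.randn(*ref.input(0).shape).astype(np.float32)
ref_out = ref(data)[ref.output(0)]
quant_out = quant(data)[quant.output(0)]
print("max abs diff:", np.abs(ref_out - quant_out).max())
```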

@dmitry-gorokhov
Contributor

@SakuraYM May I ask you to attach both the original and the quantized IRs to this issue?

@AlexKoff88
Contributor

I also noticed that you used 259 and 11 benchmarking iterations for the BF16 and INT8 models, respectively.
I think it is also worth looking at how this model is quantized from the NNCF perspective.

@SakuraYM
Author

@SakuraYM May I ask you to attach both the original and the quantized IRs to this issue?

Of course, after data masking I'll upload the model. :)

@SakuraYM
Author

I also noticed that you used 259 and 11 benchmarking iterations for the BF16 and INT8 models, respectively. I think it is also worth looking at how this model is quantized from the NNCF perspective.

Yes, because it used benchmark_app's default configuration, which only runs for one minute to collect data.
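(To rule out the unequal run lengths, benchmark_app's -niter option can pin the iteration count so both precisions execute the same number of inferences. A sketch with hypothetical model paths:)

```python
import subprocess

# Pin the iteration count so the BF16 and INT8 runs are directly
# comparable; the model paths are hypothetical.
for model in ("hifigan.xml", "hifigan_int8.xml"):
    subprocess.run(
        ["benchmark_app", "-m", model, "-d", "CPU", "-niter", "500"],
        check=True,
    )
```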

@SakuraYM
Author

hifigan_bf16.log
hifigan_i8.log
The attached benchmark_app logs provide details for analysis.

@dmitry-gorokhov
Contributor

hifigan_bf16.log hifigan_i8.log The attached benchmark_app logs provide details for analysis.

Based on these logs, I see that INT8 convolutions run dramatically slowly for some reason. @SakuraYM Could you please repeat the same benchmark_app runs with the DNNL_VERBOSE=1 environment variable enabled and share the logs?
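(For reference, DNNL_VERBOSE=1 makes oneDNN print one line per executed primitive, which reveals which convolution kernels, e.g. AMX, VNNI, or reference implementations, are actually dispatched. One way to capture such a log, with hypothetical paths:)

```python
import os
import subprocess

# DNNL_VERBOSE=1 makes oneDNN log every executed primitive to stdout.
env = dict(os.environ, DNNL_VERBOSE="1")
with open("hifigan_int8_dnn.log", "w") as log:  # hypothetical paths
    subprocess.run(
        ["benchmark_app", "-m", "hifigan_int8.xml", "-d", "CPU"],
        stdout=log, stderr=subprocess.STDOUT, env=env, check=True,
    )
```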

@SakuraYM
Author

hifigan_bf16.log hifigan_i8.log The attached benchmark_app logs provide details for analysis.

Based on these logs, I see that INT8 convolutions run dramatically slowly for some reason. @SakuraYM Could you please repeat the same benchmark_app runs with the DNNL_VERBOSE=1 environment variable enabled and share the logs?

hifigan_bf16_dnn.log
hifigan_int8_dnn.log

@SakuraYM
Author

@SakuraYM May I ask you to attach both the original and the quantized IRs to this issue?

Of course, after data masking I'll upload the model. :)

@dmitry-gorokhov Hi, the models are too big to upload... Is there a better way to share them, or should I just contact Yu, Meng on Teams and send them to you directly?
