
Commit

Update optimum/intel/neural_compressor/quantization.py
Co-authored-by: Ella Charlaix <[email protected]>
changwangss and echarlaix authored Nov 29, 2024
1 parent 0991631 commit 3e21d57
Showing 1 changed file with 1 addition and 1 deletion: optimum/intel/neural_compressor/quantization.py
```diff
@@ -376,7 +376,7 @@ def _weight_only_quantization(
         low_cpu_mem_usage = True

     if use_xpu:
-        if hasattr(quantization_config, "use_layer_wise") and quantization_config.use_layer_wise:
+        if getattr(quantization_config, "use_layer_wise", False):
             from neural_compressor.torch import load_empty_model

             model = load_empty_model(model_id, cls=model_class, trust_remote_code=trust_remote_code)
```
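The change collapses the two-step `hasattr(...) and obj.attr` check into a single `getattr(obj, attr, default)` lookup, which reads the attribute once and falls back to `False` when it is absent. A minimal sketch of the equivalence, using `SimpleNamespace` objects as hypothetical stand-ins for a quantization config (not the real optimum-intel classes):

```python
from types import SimpleNamespace

# Illustrative stand-ins for a quantization config object;
# only the use_layer_wise attribute matters here.
cfg_enabled = SimpleNamespace(use_layer_wise=True)
cfg_disabled = SimpleNamespace(use_layer_wise=False)
cfg_missing = SimpleNamespace()  # attribute not set at all


def layer_wise_enabled(quantization_config):
    # Behaves like:
    #   hasattr(cfg, "use_layer_wise") and cfg.use_layer_wise
    # but with a single attribute lookup and an explicit default.
    return getattr(quantization_config, "use_layer_wise", False)


print(layer_wise_enabled(cfg_enabled))   # True
print(layer_wise_enabled(cfg_disabled))  # False
print(layer_wise_enabled(cfg_missing))   # False: missing attribute -> default
```

Both forms return the same result for all three cases; the `getattr` version is simply the more idiomatic way to express "use the flag if present, otherwise treat it as off".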
