Fix warning msg on quantization (#1715)
WoosukKwon authored Nov 19, 2023
1 parent e105424 commit be66d9b
6 changes: 3 additions & 3 deletions vllm/config.py
@@ -137,9 +137,9 @@ def _verify_quantization(self) -> None:
             if self.quantization not in supported_quantization:
                 raise ValueError(
                     f"Unknown quantization method: {self.quantization}. Must "
                     f"be one of {supported_quantization}.")
-        logger.warning(f"{self.quantization} quantization is not fully "
-                       "optimized yet. The speed can be slower than "
-                       "non-quantized models.")
+            logger.warning(f"{self.quantization} quantization is not fully "
+                           "optimized yet. The speed can be slower than "
+                           "non-quantized models.")

     def verify_with_parallel_config(
         self,
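For context, here is a minimal, self-contained sketch of the behavior this commit appears to fix. The `ModelConfig` class, constructor, and supported-method list below are hypothetical simplifications for illustration, not the actual vLLM code: with the warning indented inside the `is not None` branch, unquantized models no longer trigger a spurious "None quantization is not fully optimized yet" message.

```python
# Hypothetical, simplified sketch of _verify_quantization() after this
# commit. The class shape and names here are assumptions, not the real
# vllm/config.py ModelConfig.
import logging

logger = logging.getLogger("vllm.sketch")


class ModelConfig:
    SUPPORTED_QUANTIZATION = ["awq", "squeezellm"]

    def __init__(self, quantization=None):
        self.quantization = quantization

    def _verify_quantization(self):
        if self.quantization is not None:
            self.quantization = self.quantization.lower()
            if self.quantization not in self.SUPPORTED_QUANTIZATION:
                raise ValueError(
                    f"Unknown quantization method: {self.quantization}. Must "
                    f"be one of {self.SUPPORTED_QUANTIZATION}.")
            # Post-fix placement: the warning sits inside the
            # `is not None` branch, so it only fires when a
            # quantization method was actually requested.
            logger.warning(f"{self.quantization} quantization is not fully "
                           "optimized yet. The speed can be slower than "
                           "non-quantized models.")


ModelConfig(quantization="awq")._verify_quantization()   # warns about awq
ModelConfig(quantization=None)._verify_quantization()    # silent after the fix
```

The design point is small but user-visible: a validation helper should only emit per-feature warnings on the code path where that feature is enabled.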
