
Commit 75fd28f

[SW-213890] Revert "[SW-213890] Disable test_two_step_layer_wise temporarily (#84)" (#86)
This reverts commit 27162ae.
1 parent 0f6e6e0 · commit 75fd28f

File tree: 1 file changed (+1, −2 lines)


test/3x/torch/quantization/fp8_quant/test_layer_wise.py

Lines changed: 1 addition & 2 deletions
@@ -1,6 +1,5 @@
 import torch
 import habana_frameworks.torch.core as htcore
-import pytest
 
 from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
 from neural_compressor.torch.quantization import FP8Config, convert, prepare, finalize_calibration
@@ -9,7 +8,7 @@
 
 htcore.hpu_set_env()
 
-@pytest.mark.xfail(reason="test fails, will be fixed at SW-213890")
+
 def test_two_step_layer_wise():
     # layer-wise is based on memory mapping technique and https://github.com/huggingface/transformers/pull/31771
     model_name = "facebook/opt-350m"
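The revert drops the `@pytest.mark.xfail(...)` decorator (and the now-unused `import pytest`), so `test_two_step_layer_wise` runs as a normal test again. As a minimal sketch of what the removed marker did (the test name here is hypothetical, not from this repository): a test decorated with `xfail` is still executed, but a failure is reported as XFAIL rather than FAILED.

```python
import pytest


# Hypothetical test illustrating pytest.mark.xfail: the decorator attaches an
# "xfail" mark, so pytest reports a failure of this test as XFAIL (expected
# failure) instead of FAILED, keeping the suite green while the bug is tracked.
@pytest.mark.xfail(reason="known issue, tracked in an external ticket")
def test_known_issue():
    assert False  # fails, but is reported as an expected failure


# The mark is stored on the function itself, in its `pytestmark` list.
marks = getattr(test_known_issue, "pytestmark", [])
print([m.name for m in marks])
```

Removing the decorator, as this commit does, means a failure becomes a real FAILED result again, which is the intended behavior once the underlying issue (SW-213890) is resolved.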

0 commit comments

Comments
 (0)