
Bug fixes in split-sequence optimization #1872

Merged: 2 commits into main, Sep 19, 2024

Conversation

gramalingam (Collaborator)

This PR fixes a couple of bugs in the split-sequence optimization:

  • Handle the case where there is only one split-value (as the op-builder returns a single IR value instead of a list of IR values in this case).
  • Use 1D constant [axis] instead of scalar axis in Squeeze op.
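A minimal sketch of the two fixes, using NumPy stand-ins. The helper name `normalize_split_outputs` and its signature are hypothetical illustrations, not the actual onnxscript code:

```python
import numpy as np

def normalize_split_outputs(result, num_outputs):
    # The op-builder returns a bare IR value when there is exactly one
    # output, and a list of IR values otherwise; wrap the single-output
    # case so downstream code can always iterate over a list.
    if num_outputs == 1:
        return [result]
    return result

# ONNX Squeeze (opset 13 and later) takes `axes` as a 1-D tensor input,
# so the constant fed to it should have shape (1,), not be a 0-D scalar.
axis = 0
axes_const = np.array([axis], dtype=np.int64)  # shape (1,): valid Squeeze axes

print(normalize_split_outputs("v", 1))  # ['v']
print(axes_const.shape)                 # (1,)
```

Wrapping the single output keeps the rewrite logic uniform, and the 1-D `[axis]` constant is what ONNX shape inference expects for the Squeeze `axes` input.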


codecov bot commented Sep 19, 2024

❌ 2 Tests Failed:

Tests completed: 13137 | Failed: 2 | Passed: 13135 | Skipped: 2631
Top 1 failed test by shortest run time:
onnxscript.backend.onnx_export_test.TestOnnxBackEnd test_export2python_produces_correct_onnx_script_model_1103_test_shrink_soft
Stack Traces | 0.004s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_shrink_soft'

The above exception was the direct cause of the following exception:
.nox\test\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_shrink_soft' (e=No module named 'tests.onnx_backend_test_code.test_shrink_soft') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_shrink_soft.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_shrink_soft.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import FLOAT
E   from onnxscript.onnx_opset import opset9
E   
E   @script()
E   def bck_test_shrink_soft(x: FLOAT[5]) -> (FLOAT[5]):
E       y = opset9.Shrink(x, bias=1.5, lambd=1.5)
E       return y
Flaky tests (1):
onnxscript.tools.transformers_models.phi_test.TestExportPhi test_phi_dort_static

Flake rate in main: 100.00% (Passed 0 times, Failed 231 times)

Stack Traces | 23.4s run time
onnxscript/_internal/version_utils.py:114: in call_f
    return fct(self)
.../tools/transformers_models/phi_test.py:105: in test_phi_dort_static
    gradients = onnxscript.tools.training_helper.train_loop(compiled_model, *input_tensors)
onnxscript/tools/training_helper.py:42: in train_loop
    loss.backward()
..../test_torch_nightly/lib/python3.12.../site-packages/torch/_tensor.py:581: in backward
    torch.autograd.backward(
..../test_torch_nightly/lib/python3.12.../torch/autograd/__init__.py:347: in backward
    _engine_run_backward(
..../test_torch_nightly/lib/python3.12.../torch/autograd/graph.py:825: in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
..../test_torch_nightly/lib/python3.12.../torch/autograd/function.py:307: in apply
    return user_fn(self, *args)
..../test_torch_nightly/lib/python3.12.../_functorch/_aot_autograd/runtime_wrappers.py:2048: in backward
    out = call_compiled_backward()
..../test_torch_nightly/lib/python3.12.../_functorch/_aot_autograd/runtime_wrappers.py:1980: in call_compiled_backward
    out = call_func_at_runtime_with_args(
..../test_torch_nightly/lib/python3.12.../_functorch/_aot_autograd/utils.py:133: in call_func_at_runtime_with_args
    out = normalize_as_list(f(*args))
..../test_torch_nightly/lib/python3.12.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
..../test_torch_nightly/lib/python3.12.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
..../test_torch_nightly/lib/python3.12.../torch/_dynamo/eval_frame.py:632: in _fn
    return fn(*args, **kwargs)
..../test_torch_nightly/lib/python3.12.../torch/fx/graph_module.py:784: in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
..../test_torch_nightly/lib/python3.12.../torch/fx/graph_module.py:361: in __call__
    raise e
..../test_torch_nightly/lib/python3.12.../torch/fx/graph_module.py:348: in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
..../test_torch_nightly/lib/python3.12.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
..../test_torch_nightly/lib/python3.12.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
<eval_with_key>.41:5: in forward
    fused_0 = self.fused_0(tangents_1, primals_36, add_14, getitem_7, getitem_8, addmm_10, tanh_1, add_7, getitem_4, getitem_5, addmm_4, tanh, clone, getitem_1, getitem_2, primals_1, t_12, t_20, view_43, view_39, transpose_11, transpose_12, detach_11, t_16, view_23, transpose_13, transpose_14, unsqueeze_12, unsqueeze_11, t_24, t_32, t_28, primals_20, t_36, t_44, view_21, view_17, transpose_20, transpose_21, detach_15, t_40, view_1, transpose_22, transpose_23, unsqueeze_8, unsqueeze_7, t_48, t_56, t_52, primals_4);  tangents_1 = primals_36 = add_14 = getitem_7 = getitem_8 = addmm_10 = tanh_1 = add_7 = getitem_4 = getitem_5 = addmm_4 = tanh = clone = getitem_1 = getitem_2 = primals_1 = t_12 = t_20 = view_43 = view_39 = transpose_11 = transpose_12 = detach_11 = t_16 = view_23 = transpose_13 = transpose_14 = unsqueeze_12 = unsqueeze_11 = t_24 = t_32 = t_28 = primals_20 = t_36 = t_44 = view_21 = view_17 = transpose_20 = transpose_21 = detach_15 = t_40 = view_1 = transpose_22 = transpose_23 = unsqueeze_8 = unsqueeze_7 = t_48 = t_56 = t_52 = primals_4 = None
..../test_torch_nightly/lib/python3.12.../torch/fx/graph_module.py:784: in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
..../test_torch_nightly/lib/python3.12.../onnx/_internal/onnxruntime.py:1017: in _ort_acclerated_call
    onnx_session = onnxruntime.InferenceSession(
..../test_torch_nightly/lib/python3.12.../onnxruntime/capi/onnxruntime_inference_collection.py:419: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test_torch_nightly/lib/python3.12.../onnxruntime/capi/onnxruntime_inference_collection.py:474: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (aten_rsub_505) Op (aten_rsub|folded_0) [ShapeInferenceError] (op_type:Sub, node name: n3): B has inconsistent type tensor(float)


@gramalingam gramalingam merged commit b0ca0c3 into main Sep 19, 2024
28 of 39 checks passed
@gramalingam gramalingam deleted the rama/split-bug-fix branch September 19, 2024 15:49