Add cases to skip list for test_transformers.py and remove passed cases #1710


Open
PenghuiCheng wants to merge 3 commits into main from penghuic/skip_list_clean

Conversation

PenghuiCheng (Contributor)

  1. XPU does not support F.scaled_dot_product_attention and torch._scaled_dot_product_efficient_attention with SDPBackend.EFFICIENT_ATTENTION, so add those UT cases to the skip list for test_transformers.py (a minimal repro sketch follows this list).
  2. Remove cases that now pass from the skip list.
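
For context, a minimal repro sketch of the unsupported path; the tensor shapes, dtype, and exact failure behavior here are assumptions, not taken from this PR:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Hypothetical shapes: (batch, heads, seq_len, head_dim); requires an XPU device.
q = torch.rand(2, 4, 8, 16, device="xpu", dtype=torch.float16)
k = torch.rand(2, 4, 8, 16, device="xpu", dtype=torch.float16)
v = torch.rand(2, 4, 8, 16, device="xpu", dtype=torch.float16)

# Restrict SDPA to the efficient backend; on XPU this path is expected to fail
# because SDPBackend.EFFICIENT_ATTENTION is not supported there.
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION]):
    out = F.scaled_dot_product_attention(q, k, v)
```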

Review requested from @Copilot due to automatic review settings, May 29, 2025 09:13
Copilot AI (Contributor) left a comment


Pull Request Overview

Introduces two new tests for memory-efficient attention failure modes on XPU (due to lack of support for the efficient SDP backend) and cleans up the skip list by removing tests that now pass.

  • Add imports and two failure-mode tests for large-batch scaled dot-product attention (see the sketch after this list).
  • Remove outdated skip entries and prepare skip list for new transformer tests.
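
As a rough illustration, a failure-mode test along these lines might look as follows; the class name, tensor shapes, and expected exception type are assumptions rather than the PR's actual code:

```python
import unittest

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

class TestMemEffAttentionXPU(unittest.TestCase):
    def test_mem_eff_attention_fail_with_batch_size_geq_65536(self):
        # The test name encodes the failure condition: batch size >= 2**16.
        q = torch.rand(2**16, 1, 1, 8, device="xpu", dtype=torch.float16)
        with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION]):
            # The efficient backend should reject this call on XPU.
            with self.assertRaises(RuntimeError):
                F.scaled_dot_product_attention(q, q, q)
```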

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

  • test/xpu/test_transformers_xpu.py: Added imports for F.scaled_dot_product_attention and sdpa_kernel, plus two new tests covering XPU failure modes.
  • test/xpu/skip_list_common.py: Removed many outdated skip entries and added a placeholder for skipping the new transformer tests.
Comments suppressed due to low confidence (2)

test/xpu/skip_list_common.py:617

  • The new tests test_mem_eff_attention_fail_with_batch_size_geq_65536 and test_mem_eff_attention_fail_with_batch_size_geq_65536_error appear in the skip list only as commented-out lines. They should be added as active skip entries under test_transformers_xpu.py to prevent unintended failures on XPU (see the sketch below).
#        "test_mem_eff_attention_fail_with_batch_size_geq_65536",

test/xpu/test_transformers_xpu.py:56

  • [nitpick] Consider adding a short docstring or inline comment explaining the test’s purpose and expected behavior, to help future readers understand why this helper is needed (a possible version is sketched below).
def _test_mem_eff_attention_fail_with_batch_size_geq_65536(self):
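
A possible way to address the nitpick; the docstring wording is an assumption, and the helper body is omitted since it is not shown in this page:

```python
def _test_mem_eff_attention_fail_with_batch_size_geq_65536(self):
    """Verify mem-efficient SDPA fails for batch sizes >= 2**16 on XPU.

    XPU does not support SDPBackend.EFFICIENT_ATTENTION, so this helper
    checks the expected failure mode rather than a passing path.
    """
    ...  # body unchanged from the PR
```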

Signed-off-by: Cheng, Penghui <[email protected]>
PenghuiCheng force-pushed the penghuic/skip_list_clean branch from 22e7cb9 to 8623b34 on May 30, 2025 at 02:07