(Do not submit) Set StableHLO Compilation to true for CI #9141


Status: Draft. Wants to merge 1 commit into base `master`.
1 change: 1 addition & 0 deletions .bazelrc
@@ -116,6 +116,7 @@ test --test_env=PJRT_LOCAL_PROCESS_RANK

 # This environmental variable is important for properly integrating with XLA.
 test --test_env=XLA_EXPERIMENTAL
+test --test_env=XLA_STABLEHLO_COMPILE

 # To find `libpython` that is required to run tests (they run using installed wheels).
 test --test_env=LD_LIBRARY_PATH
2 changes: 2 additions & 0 deletions .github/scripts/run_tests.sh
@@ -87,6 +87,8 @@ PYTORCH_DIR=$1
 XLA_DIR=$2
 USE_COVERAGE="${3:-0}"

+export XLA_STABLEHLO_COMPILE=1
+
 if [ -x "$(command -v nvidia-smi)" ]; then
   num_devices=$(nvidia-smi --list-gpus | wc -l)
   echo "Found $num_devices GPU devices..."
2 changes: 1 addition & 1 deletion configuration.yaml
@@ -387,7 +387,7 @@ variables:
 flag is experimental. The default_value will be set to true when
 StableHLO workflow is mature.
 type: bool
-default_value: false
+default_value: true
 XLA_DUMP_POST_OPTIMIZATIONS:
 description:
 - Dump the HLO graph after optimizations. You need to use it together
1 change: 1 addition & 0 deletions test/cpp/run_tests.sh
@@ -49,6 +49,7 @@ shift $(($OPTIND - 1))

 # Set XLA_EXPERIMENTAL var to subsequently executed commands.
 export XLA_EXPERIMENTAL
+export XLA_STABLEHLO_COMPILE=1

 EXTRA_FLAGS=""
1 change: 1 addition & 0 deletions test/neuron/run_tests.sh
@@ -44,6 +44,7 @@ export TORCH_TEST_DEVICES="$CDIR/pytorch_test_base.py"
 export PYTORCH_TEST_WITH_SLOW=1
 export XLA_DUMP_FATAL_STACK=1
 export CPU_NUM_DEVICES=4
+export XLA_STABLEHLO_COMPILE=1

 TORCH_XLA_DIR=$(cd ~; dirname "$(python -c 'import torch_xla; print(torch_xla.__file__)')")
 COVERAGE_FILE="$CDIR/../.coverage"
1 change: 1 addition & 0 deletions test/run_tests.sh
@@ -44,6 +44,7 @@ export TORCH_TEST_DEVICES="$CDIR/pytorch_test_base.py"
 export PYTORCH_TEST_WITH_SLOW=1
 export XLA_DUMP_FATAL_STACK=1
 export CPU_NUM_DEVICES=4
+unset XLA_STABLEHLO_COMPILE

 TORCH_XLA_DIR=$(cd ~; dirname "$(python -c 'import torch_xla; print(torch_xla.__file__)')")
 COVERAGE_FILE="$CDIR/../.coverage"