Disable AITModel in fx2ait on AMD
Summary: As the title states, some utility functions from fx2ait are still needed for aot_Inductor_lower before we fully move everything to the PT2 full stack.

Reviewed By: chenyang78

Differential Revision: D56613348
zoranzhao authored and facebook-github-bot committed Apr 28, 2024
1 parent 0973303 commit 50dcde2
Showing 1 changed file with 15 additions and 11 deletions.
26 changes: 15 additions & 11 deletions fx2ait/fx2ait/extension.py
@@ -43,14 +43,18 @@ def _get_extension_path(lib_name):
     return ext_specs.origin


-try:
-    torch.ops.load_library("//deeplearning/ait:AITModel")
-    logger.info("===Load non-OSS AITModel===")
-
-except (ImportError, OSError):
-    lib_path = _get_extension_path("libait_model")
-    torch.ops.load_library(lib_path)
-    logger.info("===Load OSS AITModel===")
-
-def is_oss_ait_model():  # noqa: F811
-    return True
+if torch.version.hip is None:
+    # For Meta internal workloads, we don't have an active plan to apply AITemplate on AMD GPUs.
+    # As such, for AMD build we skip all AITemplate related supports. T186819748 is used to
+    # track the plans/strategies for AITemplate enablement on AMD GPUs if needed in the future.
+    try:
+        torch.ops.load_library("//deeplearning/ait:AITModel")
+        logger.info("===Load non-OSS AITModel===")
+
+    except (ImportError, OSError):
+        lib_path = _get_extension_path("libait_model")
+        torch.ops.load_library(lib_path)
+        logger.info("===Load OSS AITModel===")
+
+    def is_oss_ait_model():  # noqa: F811
+        return True
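
The gate added by this commit relies on torch.version.hip, which is None on CUDA/CPU builds of PyTorch and holds a ROCm version string on AMD builds. A minimal sketch of that detection pattern, using hypothetical helper names and a plain parameter in place of torch.version.hip so it runs without a GPU build:

```python
# Sketch of the build-detection pattern from the diff above. `hip_version`
# stands in for torch.version.hip: None on CUDA/CPU builds of PyTorch, a
# version string (e.g. "5.7.31921") on ROCm/AMD builds. The helper names
# are hypothetical, not part of fx2ait.
from typing import Optional


def is_amd_build(hip_version: Optional[str]) -> bool:
    """Return True when the (simulated) build is a ROCm/AMD build."""
    return hip_version is not None


def should_load_ait_model(hip_version: Optional[str]) -> bool:
    """Mirror the commit's gate: only load AITModel on non-AMD builds."""
    return not is_amd_build(hip_version)


if __name__ == "__main__":
    print(should_load_ait_model(None))     # CUDA/CPU build -> True
    print(should_load_ait_model("5.7.0"))  # ROCm build -> False
```

Keeping the whole try/except (and the is_oss_ait_model redefinition) inside the if-block, rather than raising on AMD, means the module still imports cleanly on ROCm builds; only the AITModel library loading is skipped.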
