Enable autograd cache on inductor tests (#140890)
Summary:
This turns on AOTAutogradCache for all inductor tests. It also clears AOTAutogradCache on each test, since the local cache uses the same directory to store its entries.
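
Roughly, the per-test setup looks like the sketch below (not the actual harness code; the TORCHINDUCTOR_CACHE_DIR redirection is an assumption about how the local cache directory gets isolated per test):

import os
import tempfile

import torch._functorch.config
import torch._inductor.config

# The flag names below mirror the ones toggled in this commit.
torch._inductor.config.fx_graph_cache = True
torch._functorch.config.enable_autograd_cache = True

with tempfile.TemporaryDirectory() as cache_dir:
    # Assumed redirection: point the local caches at a throwaway directory so
    # FX graph and AOTAutograd entries are discarded once the test finishes.
    os.environ["TORCHINDUCTOR_CACHE_DIR"] = cache_dir
    # ... compile and run a single test here ...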

I've also run all the tests with INDUCTOR_TEST_DISABLE_FRESH_CACHE=1. AOTAutogradCache successfully caches 99% of them. A few tests use view_replay and therefore save functional tensors, which causes AOTAutogradCache to fail to pickle its result. I'll look into next steps there, but for now it seems okay for the cache to simply miss in the cases where it can't serialize the result. It would be better to check serializability before pickling, though.
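
A hedged sketch of that check-before-pickling idea (the helper names are hypothetical, not the cache's real API); an unpicklable result would turn into a miss on the next lookup instead of a hard failure:

import pickle


def can_serialize(entry) -> bool:
    """Return True only if the candidate cache entry pickles cleanly."""
    try:
        pickle.dumps(entry)
        return True
    except Exception:
        # e.g. functional tensors saved by view_replay are not picklable
        return False


def maybe_store(cache: dict, key: str, entry) -> None:
    # Hypothetical wrapper: skip storing rather than raising on bad entries.
    if can_serialize(entry):
        cache[key] = pickle.dumps(entry)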

I've made the following small bugfixes to get this working:
- Inductor is sometimes used in a standalone mode without dynamo, which leads to attribute errors in check_can_cache. In general, we should *never* crash in cache checking, only bypass, so I changed a try/except to catch Exception instead of just a specific exception (see the sketch after this list).
- Add extra structured logging for metadata on cache hits
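
A self-contained sketch of that bypass pattern, using illustrative names rather than the real cache-checking code:

class BypassCache(Exception):
    """Signals 'do not cache this graph'; never surfaced to the user."""


def check_can_cache(gm) -> None:
    try:
        # The real checks rely on attributes that dynamo normally attaches; in
        # standalone inductor runs they can be missing and raise AttributeError.
        run_cacheability_checks(gm)
    except BypassCache:
        raise
    except Exception as e:
        # Never crash while deciding whether to cache: any unexpected error
        # simply becomes a bypass.
        raise BypassCache(f"cache check failed: {e}") from None


def run_cacheability_checks(gm) -> None:
    # Placeholder for the actual per-graph checks.
    if not hasattr(gm, "meta"):
        raise AttributeError("graph produced without dynamo metadata")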

X-link: pytorch/pytorch#140890
Approved by: https://github.com/bdhirsh

Reviewed By: atalman

Differential Revision: D66556085

Pulled By: jamesjwu

fbshipit-source-id: d1379a9946afca524e289459217845e29e97e142
jamesjwu authored and facebook-github-bot committed Dec 2, 2024
1 parent 7217626 commit 9f8813e
Showing 3 changed files with 16 additions and 0 deletions.
8 changes: 8 additions & 0 deletions userbenchmark/dynamo/dynamobench/_dynamo/utils.py
@@ -1130,6 +1130,14 @@ def __init__(self):
         # TODO: log to init/id tlparse after I add support for it
         log.info("ChromiumEventLogger initialized with id %s", self.id_)
 
+    def try_add_event_data(self, event_name: str, **kwargs) -> None:
+        """
+        Same as add_event_data, but will silently not log if the event isn't in the stack.
+        """
+        if event_name not in self.get_stack():
+            return
+        self.add_event_data(event_name, **kwargs)
+
     def add_event_data(
         self,
         event_name: str,
4 changes: 4 additions & 0 deletions userbenchmark/dynamo/dynamobench/huggingface.py
@@ -37,6 +37,10 @@
 if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
     torch._inductor.config.fx_graph_cache = True
 
+# Enable Autograd caching
+if "TORCHINDUCTOR_AUTOGRAD_CACHE" not in os.environ:
+    torch._functorch.config.enable_autograd_cache = True
+
 
 def pip_install(package):
     subprocess.check_call([sys.executable, "-m", "pip", "install", package])
4 changes: 4 additions & 0 deletions userbenchmark/dynamo/dynamobench/torchbench.py
@@ -29,6 +29,10 @@
 if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
     torch._inductor.config.fx_graph_cache = True
 
+# Enable Autograd caching
+if "TORCHINDUCTOR_AUTOGRAD_CACHE" not in os.environ:
+    torch._functorch.config.enable_autograd_cache = True
+
 
 def _reassign_parameters(model):
     # torch_geometric models register parameter as tensors due to
