
torch.compile errors on vae.encode #10937

Open
Luciennnnnnn opened this issue Mar 2, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Luciennnnnnn
Contributor

Luciennnnnnn commented Mar 2, 2025

Describe the bug

torch.compile fails when compiling vae.encode, but succeeds when compiling the whole vae module.

Removing the @apply_forward_hook decorator from encode makes it work.
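For context, apply_forward_hook (in diffusers.utils.accelerate_utils) wraps a method in a plain closure so that an accelerate pre-forward hook, if attached, runs before the real method. A minimal self-contained sketch of that decorator pattern (a simplified stand-in for illustration, not the actual diffusers source) shows the extra wrapper frame that torch.compile ends up tracing when the bound method is compiled directly:

```python
import functools


def apply_forward_hook(method):
    # Simplified stand-in for diffusers.utils.accelerate_utils.apply_forward_hook:
    # run an accelerate-style pre-forward hook (if one is attached) before
    # delegating to the real method.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if hasattr(self, "_hf_hook") and hasattr(self._hf_hook, "pre_forward"):
            self._hf_hook.pre_forward(self)
        return method(self, *args, **kwargs)

    return wrapper


class FakeVAE:
    # Compiling fake_vae.encode compiles this wrapper closure,
    # not the undecorated encode body.
    @apply_forward_hook
    def encode(self, x):
        return x * 2


fake_vae = FakeVAE()
print(fake_vae.encode(3))  # 6
```

This is why compiling vae.encode and compiling vae exercise different code paths: the former hands the decorator's wrapper frame to the compiler, while the latter compiles the module's forward as usual.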

Reproduction

import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae', # use vae_ema?
    local_files_only=False,
).to('cuda')
vae = torch.compile(vae, mode="max-autotune", fullgraph=True) # ok
a = torch.randn(1, 3, 256, 256, device='cuda')
vae(a)

vae = AutoencoderKL.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    subfolder='vae', # use vae_ema?
    local_files_only=False,
).to('cuda')
vae.encode = torch.compile(vae.encode, mode="max-autotune", fullgraph=True) # error
a = torch.randn(1, 3, 256, 256, device='cuda')
vae.encode(a)

Logs

  File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
    return fn(*args, **kwargs)
  File "/home/sist/luoxin/.conda/envs/py3.10+pytorch2.4+cu121/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 43, in wrapper
    def wrapper(self, *args, **kwargs):
NameError: name 'torch' is not defined

System Info

  • 🤗 Diffusers version: 0.32.2
  • Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.15
  • PyTorch version (GPU?): 2.4.1+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.26.2
  • Transformers version: 4.46.2
  • Accelerate version: 1.0.1
  • PEFT version: 0.13.2
  • Bitsandbytes version: 0.44.1
  • Safetensors version: 0.4.5
  • xFormers version: 0.0.28
  • Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
    NVIDIA A100-SXM4-80GB, 81920 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@sayakpaul @DN6 @yiyixuxu

@Luciennnnnnn Luciennnnnnn added the bug Something isn't working label Mar 2, 2025
@sayakpaul
Member

Can you try with torch 2.5.1?
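To try this suggestion, one way to upgrade in a CUDA 12.1 environment (matching the reporter's 2.4.1+cu121 build; adjust the index URL for other CUDA versions) might be:

```shell
# Upgrade PyTorch to 2.5.1 from the CUDA 12.1 wheel index, then verify.
pip install --upgrade "torch==2.5.1" --index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.__version__)"
```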
