# ZLUDA

Support ZLUDA (CUDA wrapper) for AMD GPUs in Windows.

## Warning

ZLUDA does not fully support PyTorch in its official build, so ZLUDA support is tricky and unstable, and limited at this time. Please don't create issues regarding ZLUDA on GitHub. Feel free to reach out via the ZLUDA thread in the help channel on Discord.

## Installing ZLUDA for AMD GPUs in Windows

### Note

_This guide assumes you have [Git and Python](Installation#install-python-and-git) installed, and are comfortable using the command prompt, navigating Windows Explorer, renaming files and folders, and working with zip files._

If you have an integrated AMD GPU (iGPU), you may need to disable it, or use the `HIP_VISIBLE_DEVICES` environment variable.

### Install Visual C++ Runtime

_Note: Most people will already have this, since it comes with a lot of games, but there's no harm in trying to install it._

Grab the latest version of the Visual C++ Runtime from Microsoft (this is a direct download link) and run it. If you get the options to Repair or Uninstall, you already have it installed and can click Close. Otherwise, install it.

### Install ZLUDA

ZLUDA is now auto-installed, and automatically added to PATH, when starting webui.bat with `--use-zluda`.

### Install HIP SDK

Install HIP SDK 6.2 from AMD. As long as your regular AMD GPU driver is up to date, you don't need to install the PRO driver that the HIP SDK suggests.

### Replace HIP SDK library files for unsupported GPU architectures

Check the HIP SDK's Windows support matrix and find your GPU model. If your GPU model has a ✅ in both columns, skip to [Install SD.Next](#install-sdnext). If your GPU model has an ❌ in the HIP SDK column, or if your GPU isn't listed, follow the instructions below:

1. Open Windows Explorer and copy and paste `C:\Program Files\AMD\ROCm\6.2\bin\rocblas` into the location bar. _(Assuming you've installed the HIP SDK in the default location and Windows is located on C:.)_
2. Make a copy of the `library` folder, for backup purposes.
3. Download the [unofficial rocBLAS library](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.2.4) matching your GPU architecture:
   - gfx1010: RX 5700, RX 5700 XT
   - gfx1012: RX 5500, RX 5500 XT
   - gfx1031: RX 6700, RX 6700 XT, RX 6750 XT
   - gfx1032: RX 6600, RX 6600 XT, RX 6650 XT
   - gfx1103: Radeon 780M
   - gfx803: RX 570, RX 580
   - [More...](https://llvm.org/docs/AMDGPUUsage.html#processors)
4. Open the zip file.
5. Drag and drop the `library` folder from the zip file into `%HIP_PATH%bin\rocblas` (the folder you opened in step 1), overwriting any files there.
6. Reboot the PC.

If your GPU model is not in the HIP SDK column and not in the list above, follow the instructions in the [ROCm Support guide](AMD-ROCm#rocm-windows-support) to build your own rocBLAS libraries. _(Note: Building your own libraries is not for the faint of heart.)_

### Install SD.Next

Using Windows Explorer, navigate to a place you'd like to install SD.Next. This should be a folder to which your user account has read/write/execute access; installing SD.Next in a directory that requires admin permissions may cause it to not launch properly.

Note: Refrain from installing SD.Next into the Program Files, Users, or Windows folders (this includes the OneDrive folder and the Desktop), or into a folder that begins with a period (e.g. `.sdnext`). The best place is on an SSD, for faster model loading.

In the Location Bar, type `cmd`, then hit [Enter]. This will open a Command Prompt window at that location.
![image](https://github.com/vladmandic/sdnext/assets/1969381/8a24ff53-4fe9-4260-8674-badcdc3d5aa5)

Copy and paste the following commands into the Command Prompt window, one at a time:

`git clone https://github.com/vladmandic/sdnext`

then

`cd sdnext`

then

`.\webui.bat --use-zluda --debug --autolaunch`
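If your system also has an integrated AMD GPU (see the note near the top of this guide), you can restrict HIP to the discrete card at launch instead of disabling the iGPU. A minimal sketch, assuming the discrete GPU is device index 0 (the index on your machine may differ):

```
:: Hypothetical example: expose only one GPU to HIP for this Command Prompt session.
:: Device index 0 is an assumption; swap it for whichever index your discrete GPU has.
set HIP_VISIBLE_DEVICES=0
.\webui.bat --use-zluda --debug --autolaunch
```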

_Note: ZLUDA functions best in the Diffusers backend, where certain Diffusers-only options are available._

### Compilation, Settings, and First Generation

After the UI starts, head over to the System tab (Standard UI) or the Settings tab (Modern UI), then the Compute Settings category. Set "Attention optimization method" to "Dynamic Attention BMM", then click Apply settings.

Now, try to generate something. The first generation will take a fair while to compile (10-15 minutes, or even longer; some reports state over an hour), but this compilation should only need to be done once.

Note: The text `Compilation is in progress. Please wait...` will appear repeatedly; just be patient. Eventually your image will start generating, and subsequent generations will be significantly quicker.

### Upgrading ZLUDA

If you have problems with ZLUDA after updating SD.Next, upgrading ZLUDA may help.

1. Remove the `.zluda` folder.
2. Launch the WebUI. The installer will download and install the newer ZLUDA.

※ You may have to wait for compilation again, as with the first generation.

### (experimental) How to enable cuDNN

MIOpen, the alternative to cuDNN for AMD GPUs, hasn't been released on Windows yet. However, you can enable it with a custom build of MIOpen. This section describes how to enable cuDNN.

1. Switch to the `dev` branch.
2. Install HIP SDK 6.2. If you already have an older HIP SDK, uninstall it before installing 6.2.
3. Remove the `.zluda` folder if it exists.

   ※ If you have set the `ZLUDA` environment variable, download the latest nightly ZLUDA from [here](https://github.com/lshqqytiger/ZLUDA/releases).

   ※ If you built ZLUDA yourself, pull the latest ZLUDA commits and rebuild with `--nightly`.
4. Download and install the HIP SDK extension from [here](https://drive.google.com/file/d/1eMbbmg3jH0gilOWzCPqroY99Znj1BvVe/view?usp=sharing). (Unzip it and paste the folders over `path/to/AMD/ROCm/6.2`.)
5. Launch the WebUI with the environment variable `ZLUDA_NIGHTLY=1`.

The first generation will take a long time because MIOpen has to find the optimal solution and cache it. If you get driver crashes, restart the WebUI and try again.

#### Flash Attention 2

1. Go to `Compute Settings`.
2. Change the precision type to `FP16`.
3. Change the attention optimization method to `Scaled-Dot-Product`.
4. Enable Flash attention and turn Dynamic attention off in the SDP options.
5. Go to `Backend Settings`.
6. Enable `Deterministic mode`.

### (experimental) How to enable cuBLASLt

hipBLASLt, the alternative to cuBLASLt for AMD GPUs, hasn't been released on Windows yet. However, there are unofficial builds available. This section describes how to enable cuBLASLt.

1. Install HIP SDK 6.2. If you already have an older HIP SDK, uninstall it before installing 6.2.
2. Remove the `.zluda` folder if it exists.

   ※ If you have set the `ZLUDA` environment variable, download the latest nightly ZLUDA from [here](https://github.com/lshqqytiger/ZLUDA/releases).

   ※ If you built ZLUDA yourself, pull the latest ZLUDA commits and rebuild with `--nightly`.
3. Download and install an unofficial hipBLASLt build: [gfx1100, gfx1101, gfx1102, gfx1103, or gfx1150](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/download/v0.6.2.4/hipblaslt-rocmlibs-for-gfx1100-gfx1101-gfx1102-gfx1103-gfx1150-for.hip6.2.7z)
4. Launch the WebUI with the environment variable `ZLUDA_NIGHTLY=1` (see the example after this list).
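For reference, setting the variable for a single session from a Command Prompt opened in the SD.Next folder could look like this minimal sketch; only `ZLUDA_NIGHTLY=1` and the launch flags come from the steps above:

```
:: Set ZLUDA_NIGHTLY for this Command Prompt session only, then launch the WebUI.
set ZLUDA_NIGHTLY=1
.\webui.bat --use-zluda --debug --autolaunch
```

`set` only affects the current Command Prompt window; use `setx ZLUDA_NIGHTLY 1` (or the System Properties dialog) if you want the variable to persist across sessions.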
---

## Comparison (DirectML)

| | DirectML | ZLUDA |
|-----------------|----------|--------------|
| Speed | Slower | Faster |
| VRAM Usage | More | Less |
| VRAM GC | ❌ | ✅ |
| Training | * | ✅ |
| Flash Attention | ❌ | ❌ |
| FFT | ❓ | ✅ |
| FFTW | ❓ | ❌ |
| DNN | ❓ | ⚠️ |
| RTC | ❓ | ⚠️ |
| Source Code | Closed | Open |
| Python | <=3.12 | Same as CUDA |

❓: unknown
⚠️: partially supported
*: known to be possible, but uses too much VRAM to train stable diffusion models/LoRAs/etc.

## Compatibility

| DTYPE | Support |
|-------|---------|
| FP64 | ✅ |
| FP32 | ✅ |
| FP16 | ✅ |
| BF16 | ✅ |
| LONG | ✅ |
| INT8 | ✅ |
| UINT8 | ✅* |
| INT4 | ❓ |
| FP8 | ⚠️ |
| BF8 | ⚠️ |

*: Not tested.