Support for AMD GPUs #184
4 comments · 27 replies
-
I think I may have found the issue. I must have fat-fingered it when I put it in:

```
if device == "rocm" {
	device = "cuda"
}
```

I just pushed a new commit to the branch. Let me know if it works.
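For context on why that mapping works rather than being a hack: ROCm builds of PyTorch expose AMD GPUs through the regular CUDA device API, so "cuda" is the correct device string even on AMD hardware. Here is a minimal sketch of the idea in Python (the `resolve_device` helper is hypothetical, not code from this repo):

```python
# Sketch only: PyTorch has no "rocm" device type. On ROCm builds of
# PyTorch, AMD GPUs are driven through the CUDA device API, so a user's
# "rocm" selection has to be mapped to "cuda" before use.
import torch

def resolve_device(requested: str) -> str:
    # Hypothetical helper, mirroring the fix above.
    if requested == "rocm":
        requested = "cuda"
    if requested == "cuda" and not torch.cuda.is_available():
        return "cpu"  # fall back rather than crash
    return requested

print(resolve_device("rocm"))  # "cuda" on a working GPU build, else "cpu"
```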
-
I updated my "working" branch with your fix and rebuilt, and again tested the same command with both 'cuda' and 'rocm'. 'cuda' was the same (
-
Any progress on figuring out the issue, or are you just busy IRL / with other stuff?
-
@rishikanthc, I need some more information on what you mean by "Nvidia models". CUDA acceleration is only available on CUDA-capable GPUs, which are exclusively NVIDIA cards; it will not work on AMD GPUs because they have no CUDA cores to run the workload. AMD's equivalent acceleration stack is ROCm.

Some people have patched in AMD GPU functionality, but what they are essentially doing is translating the CUDA API calls into ROCm API calls so the GPU can perform the conversion. We, however, are using WhisperX, not faster-whisper or whisper.cpp, and WhisperX currently does not seem to support AMD GPUs because it depends on a version of CTranslate2 without ROCm support. If we were using whisper.cpp instead of WhisperX, this would be a much easier implementation, since whisper.cpp already supports AMD GPUs via the Vulkan API. I am just trying to explain why this may not work as we intended, given the complexity of the different Whisper variants that have been created, one of which we are using.
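If it helps anyone debugging this, here is a hedged diagnostic sketch (not project code) that reports which GPU backend a given PyTorch build was compiled against. Note that even a ROCm-enabled PyTorch does not get WhisperX all the way there, since the transcription path still goes through CTranslate2:

```python
# Diagnostic sketch: report which GPU backend this PyTorch build offers.
# On ROCm builds, torch.version.hip is a version string and AMD GPUs
# appear behind the CUDA API; on CUDA builds, torch.version.cuda is set.
import torch

def describe_backend() -> str:
    if not torch.cuda.is_available():
        return "no GPU acceleration (CPU only)"
    if torch.version.hip is not None:
        return f"ROCm/HIP {torch.version.hip}, device: {torch.cuda.get_device_name(0)}"
    return f"CUDA {torch.version.cuda}, device: {torch.cuda.get_device_name(0)}"

print(describe_backend())
```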
-
This thread tracks the development of adding ROCm support to run transcription on AMD GPUs.