-
I'd be so glad if that were implemented. My 5950X doesn't do as good a job as my 6900 XT potentially would.
-
Whisper depends heavily on PyTorch to do the actual processing. So when you see your GPU being used while running this software, that's not Whisper itself; it's Whisper's dependency, PyTorch, the machine learning component that yearns for computing power.
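Since the GPU work happens inside PyTorch, a ROCm build of PyTorch already exposes AMD GPUs through the same `torch.cuda` API that NVIDIA cards use. A minimal device-selection sketch, assuming either a CUDA or a ROCm build of PyTorch is installed:

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs are reported through the same
# torch.cuda interface as NVIDIA cards, so one check covers both vendors.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```

Whisper's `load_model` accepts a `device` argument, so the selected device can be passed straight through, e.g. `whisper.load_model("base", device=device)`.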
-
See #55 (comment) for some hints
-
What do you think about adding support not only for CUDA but also for AMD ROCm, so that users without an NVIDIA graphics card can also transcribe at a reasonable speed?