tts-1-hd doesn't work with Radeon 6800XT and ROCm 5.7 #31
Comments
I'm sorry to hear that; I'm not able to test this. Are you using the docker-compose.rocm.yaml? Please share some more details about how you're running it (docker compose) and the card you have.
I'm running it with Podman, using the same config as the docker-compose.rocm.yaml, and a Radeon 6800XT. I also run Ollama in Podman on this machine, and it's able to use ROCm with just the two devices exposed; nothing else needed.
Thanks for the reply. I'm currently traveling with limited internet access, but I will try to get to this as soon as I can.
I don't see the Radeon 6800XT listed as supported for ROCm 5.7, so you may need to build your own image with an older version of ROCm PyTorch. They do support the "AMD Radeon™ Pro W6800", but I don't think that's the same card; only the 7800XT series is supported by 5.7. I'm not able to test this and am not really sure this is correct. I'll have to look at how Ollama does it later.
The 6800XT seems to require some trickery; try setting the following environment variable:
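The variable itself didn't survive here; judging by the linked Reddit thread, the workaround usually cited for RDNA2 cards like the 6800XT is the following (an assumption, not a confirmed quote of the original comment):

```bash
# Common RDNA2 workaround (assumed to be the variable meant above):
# report the GPU as gfx1030 so the prebuilt ROCm kernels load.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```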
From comments in: https://www.reddit.com/r/Amd/comments/179dncu/amd_rocm_pytorch_now_supported_with_the_radeon_rx/
Unfortunately, this env var doesn't seem to fix it.
I think this is caused by openedai-speech using ROCm 5.7, while Ollama ships ROCm 6, which works with my GPU.
Unfortunately, this means that support won't officially land in the pre-built image for a while. ROCm 6.0 brings in and requires torch 2.3, which so far is causing a lot of dependency and build problems (and the image is over 10 GB, so it won't fit on GitHub). You could try changing the requirements file yourself (from 5.7 to 6.0), but you may need to install a lot of extra stuff, like the nvidia cuda-toolkit, various developer tools, etc. I'll leave this open for now, but it probably will not get fixed until the whole stack is upgraded to torch 2.3.
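For anyone attempting that 5.7-to-6.0 change, the equivalent direct install would look roughly like this; the exact wheel index URL and the assumption that the requirements file pins torch through PyTorch's ROCm index are mine, not confirmed by the repository:

```bash
# Hypothetical: pull torch/torchaudio from the ROCm 6.0 wheel index
# instead of the 5.7 one; ROCm 6.0 support ships with torch 2.3+.
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
```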
Hello. First of all, thank you for making this project. I have been looking for a TTS extension for Open WebUI, and this project fits perfectly. I wanted to share some observations that might help solve the issue. On my AMD 7700 XT I managed to run xtts with ROCm from a Podman container, but I had to make some changes (branch: experimental ROCm variants on AMD 7700 XT).

[Collapsed sections not recovered from the original comment: Start podman · Changes (requirements-rocm-torch.txt) · Details · Debugging (enter the container) · Resources]

Note: the older amdgpu-install (5.7) doesn't seem to have ROCm, which can be installed through …
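For reference, a rough sketch of the Podman invocation that comment describes, exposing the two device nodes mentioned earlier in the thread; the image tag, port, and override value are all assumptions:

```bash
# Expose the two ROCm device nodes, as done for Ollama above.
# HSA_OVERRIDE_GFX_VERSION depends on the card: 10.3.0 for RDNA2
# (6000 series); 11.0.0 is commonly used for RDNA3 (7000 series).
podman run -d --name openedai-speech \
  --device /dev/kfd --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  -p 8000:8000 \
  ghcr.io/matatonic/openedai-speech:latest
```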
I can confirm that uninstalling the CUDA version of torch and torchaudio, then installing the ROCm versions, fixed it for me as well (why is the CUDA version installed in the ROCm container in the first place?). But I don't see much of a performance difference; it's pretty much the same performance I get with my Ryzen 5800X3D, which leads to annoying pauses between sentences 🫤 In conclusion, I would say that at least for the 6000 series it doesn't make much sense, since it reduces the VRAM available for LLMs and doesn't result in better performance.
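Roughly, the fix described there would be run inside the container like this; the ROCm wheel index version is an assumption (5.7 matches the image, newer cards may need 6.0):

```bash
# Replace the CUDA wheels with ROCm ones.
pip uninstall -y torch torchaudio
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
```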
I really appreciate the work you put in on that branch @stormymeadow! On my machine, running on the GPU is a significant improvement.
Original report: I'm running the ROCm Docker image, but it looks like it's only using the CPU when trying to use the tts-1-hd model, from the logs.
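A quick generic check (not from the original logs) for whether torch inside the container actually sees the GPU:

```bash
# ROCm builds of torch report through the CUDA API; torch.version.hip
# is non-None only on a ROCm build.
python -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"
```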