LocalAI version:
Docker image with the tag :latest-aio-gpu-intel-f32
Environment, CPU architecture, OS, and Version:
Linux Zana 6.6.68-Unraid #1 SMP PREEMPT_DYNAMIC Tue Dec 31 13:42:37 PST 2024 x86_64 11th Gen Intel(R) Core(TM) i5-11400 @ 2.60GHz GenuineIntel GNU/Linux
64 GB RAM
Describe the bug
The GPU does not work, and I get no output from chat. However, the master-vulkan-ffmpeg-core tag works just fine with the GPU.
To Reproduce
Run the latest-aio-gpu-intel-f32 Docker image while passing through /dev/dri, then use LocalAI (see the commands below).
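For reference, this is roughly how I start the container (the port mapping and container name are my own setup; add your model volume as needed):

docker run -d \
  --name local-ai \
  -p 8080:8080 \
  --device /dev/dri \
  localai/localai:latest-aio-gpu-intel-f32

Then, to trigger chat, a plain OpenAI-style request against the built-in API (gpt-4 is the model alias the AIO image preconfigures, as far as I can tell):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "hello"}]}'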
Expected behavior
GPU-accelerated chatting.
Logs
logs.txt
Additional context
I use the A750 with 8 GB of VRAM, which is not a lot, but at least it's something.
I am using the latest-aio-gpu-intel-f16 Docker image with an i5-12400 iGPU. The model cannot be loaded normally, but it works when I switch to pure CPU:
5:19PM INF [/build/backend/python/coqui/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh
5:19PM INF [/build/backend/python/autogptq/run.sh] Attempting to load
5:19PM INF BackendLoader starting backend=/build/backend/python/autogptq/run.sh modelID=deepseek-r1-distill-qwen-7b o.model=DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
5:19PM INF [/build/backend/python/autogptq/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh
5:19PM INF [/build/backend/python/vllm/run.sh] Attempting to load
5:19PM INF BackendLoader starting backend=/build/backend/python/vllm/run.sh modelID=deepseek-r1-distill-qwen-7b o.model=DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
5:19PM INF [/build/backend/python/vllm/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh
I am using an Intel Arc A750, as shown in the logs.
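In case it helps with debugging: a quick way to check whether the container actually sees the card is to list the render nodes and the SYCL devices inside it. sycl-ls comes with the Intel oneAPI runtime, which I'm assuming is present in the Intel images; the container name here matches the run command above:

docker exec -it local-ai ls -l /dev/dri     # render nodes (e.g. renderD128) should show up
docker exec -it local-ai sycl-ls            # the Arc A750 should be listed as a Level Zero/OpenCL device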