[Request] Ollama + OpenWebUI #3505
-
It would be great!
-
I'd love this. Even if the script set it up without a GPU (I know tteck doesn't like scripts that require outside user setup for Nvidia GPUs), we could get GPU support working separately. Many people run small models on CPU only. I know a lot of people want this but struggle with it.
-
Installing ain't the issue; I've already whipped up a script for that. I deployed it on my Minisforum UM790 Pro with 64GB of RAM and a small graphics card. But honestly, even that hardware can't compete with the performance of Gemini, ChatGPT, Claude, and the like. Queries sometimes take forever, and it gets super bogged down. A lot of folks using the helper scripts are on older, less powerful machines, so it's just not worth it. If someone really wants it, they can follow the links above and set it up on a Debian/Ubuntu LXC themselves.
-
Open WebUI is now in the repository.
-
Awesome combination for self-hosting a ChatGPT/Copilot-style assistant and general AI; GPU passthrough could be integrated the way it is for Jellyfin.
Ollama is used to download and run the models, and Open WebUI is the web interface for interacting with them.
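For context, Open WebUI talks to a local Ollama server over its REST API (port 11434 by default). Here is a minimal sketch of that interaction in Python, assuming Ollama is already running and a model has been pulled; the model name "llama3" is just an example:

```python
# Minimal sketch of the API calls a frontend like Open WebUI makes against
# a local Ollama server. Assumes Ollama is running on its default port
# (11434) and that a model ("llama3" here, substitute your own) is pulled.
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def list_models() -> list[str]:
    """Return the names of models Ollama has available locally."""
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming completion request."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print("Installed models:", list_models())
    print(generate("llama3", "Say hello in one sentence."))
```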
Websites:
https://ollama.com/
https://openwebui.com/
GitHub:
https://github.com/ollama/ollama
https://github.com/open-webui/open-webui
Server Install:
https://github.com/ollama/ollama/blob/main/docs/linux.md
https://docs.openwebui.com/getting-started/#manual-installation
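Once both are installed per the docs above, a quick reachability check can save some head-scratching. This is only a sketch assuming the default ports (Ollama on 11434, Open WebUI on 8080; adjust if your setup differs):

```python
# Post-install sanity check: confirm both services answer on their
# assumed default ports. Ollama's root endpoint replies with a short
# "Ollama is running" message; any HTTP response from Open WebUI's
# port means the frontend is up.
import urllib.request

for name, url in [
    ("Ollama", "http://localhost:11434/"),
    ("Open WebUI", "http://localhost:8080/"),
]:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status} - reachable")
    except OSError as exc:
        print(f"{name}: not reachable ({exc})")
```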
Videos showing it off:
https://www.youtube.com/watch?v=Wjrdr0NU4Sk
https://youtu.be/GrLpdfhTwLg?si=lfQLnaI3kGL4IsOf&t=169
Thanks!