Deploying Open-WebUI with Ollama using Docker Compose (CPU-only setup) provides a streamlined way to run AI models locally without GPU hardware. This setup uses a custom entrypoint script to download the deepseek-r1:8b model automatically when the container starts; a sketch of how such a script typically works is shown below, followed by the implementation guide.
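This is a minimal sketch only; the filename, wait interval, and exact commands are assumptions, and the script shipped in the repository may differ.

```sh
#!/bin/sh
# Sketch of an auto-pull entrypoint. Filename, wait interval, and model
# handling are assumptions; the script in the repository may differ.

# Start the Ollama server in the background.
ollama serve &

# Give the server a moment to start listening on port 11434.
sleep 5

# Download the model; this is effectively a no-op if it is already
# present in the models volume.
ollama pull deepseek-r1:8b

# Keep the container running by waiting on the server process.
wait
```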
- Docker Engine v20.10.10+
- Docker Compose v2.20.0+
- 8GB RAM minimum (16GB recommended for larger models)
- 20GB+ free disk space
- Linux/macOS/WSL2 (Windows)
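To confirm the installed Docker Engine and Compose meet the version minimums above, the standard version commands can be used:

```sh
docker --version         # should report 20.10.10 or newer
docker compose version   # should report v2.20.0 or newer
```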
- Clone the repo
git clone https://github.com/ntalekt/deepseek-r1-docker-compose.git
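The compose file sits at the repository root, so change into the cloned directory before running the next step (the directory name follows from the clone URL):

```sh
cd deepseek-r1-docker-compose
```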
- Start Services
docker compose up -d
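For reference, the stack that `docker compose up -d` brings up corresponds roughly to the sketch below. Image tags, service names, volume names, environment variables, and the entrypoint wiring are assumptions inferred from the port mappings in the expected output; the actual docker-compose.yml in the repository may differ.

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"                      # Ollama API
    volumes:
      - ollama_data:/root/.ollama          # persists downloaded models
      - ./entrypoint.sh:/entrypoint.sh:ro  # custom auto-pull script (assumed wiring)
    entrypoint: ["/bin/sh", "/entrypoint.sh"]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"                        # web UI on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    volumes:
      - open_webui_data:/app/backend/data

volumes:
  ollama_data:
  open_webui_data:
```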
- Verify Installation
docker compose ps
Expected Output:
| NAME | COMMAND | SERVICE | STATUS | PORTS |
|------|---------|---------|--------|-------|
| ollama | "/bin/ollama serve" | ollama | running | 0.0.0.0:11434->11434/tcp |
| open-webui | "bash start.sh" | open-webui | running | 0.0.0.0:3000->8080/tcp |
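Once the containers are up, the automatic model download can be confirmed either from inside the Ollama container or through Ollama's standard model-listing endpoint; deepseek-r1:8b should appear once the pull has finished:

```sh
# List the models known to the Ollama instance inside the container.
docker compose exec ollama ollama list

# Or query Ollama's model-listing endpoint from the host.
curl http://localhost:11434/api/tags
```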
- Check Logs
docker compose logs -f ollama
docker compose logs -f open-webui
- Uninstall
docker compose down --volumes --rmi all
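Note that `--volumes` also deletes the named volumes, so the downloaded model would have to be pulled again on a reinstall. To remove the containers while keeping the model and chat data, the volume and image flags can simply be omitted:

```sh
# Stop and remove containers/networks but keep named volumes,
# so the downloaded model and chat data survive a reinstall.
docker compose down
```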
- Open a browser to
http://localhost:3000
- Create admin account
- Start chatting!
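Open-WebUI talks to the Ollama API behind the scenes; the same model can also be queried directly from the command line via Ollama's generate endpoint (the model name assumes the automatic pull has completed):

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain Docker volumes in one sentence.",
  "stream": false
}'
```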
📄 License: MIT
❗ Note: CPU inference will be significantly slower than GPU-accelerated setups. For production use, consider GPU-enabled hardware.