Experimentation around LLMs and MicroProfile
Running Ollama with the llama3.1 model:
Set the container engine to use (podman or docker):

export CONTAINER_ENGINE=podman   # or: export CONTAINER_ENGINE=docker
$CONTAINER_ENGINE run -d --rm --name ollama --replace --pull=always -p 11434:11434 -v ollama:/root/.ollama --stop-signal=SIGKILL docker.io/ollama/ollama

Note: the --replace flag is podman-specific; drop it when using docker. The command exposes Ollama's HTTP API on port 11434 and persists downloaded models in the named volume "ollama".
$CONTAINER_ENGINE exec -it ollama ollama run llama3.1
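
With the container running, Ollama serves its HTTP API on http://localhost:11434. As a minimal sketch of wiring this into MicroProfile, the interface below uses the MicroProfile REST Client against Ollama's /api/generate endpoint. The type names (OllamaService, GenerateRequest, GenerateResponse) are illustrative, not part of this repo, and the sketch assumes a MicroProfile runtime (e.g. Quarkus or Open Liberty) with the REST Client and JSON-B on the classpath:

import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Hypothetical typed client for Ollama's /api/generate endpoint.
@RegisterRestClient(baseUri = "http://localhost:11434")
@Path("/api")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public interface OllamaService {

    @POST
    @Path("/generate")
    GenerateResponse generate(GenerateRequest request);
}

// Request body; "stream": false asks Ollama for a single JSON
// response instead of a stream of chunks.
class GenerateRequest {
    public String model = "llama3.1";
    public String prompt;
    public boolean stream = false;
}

// Only the field we care about; Ollama returns more metadata,
// which JSON-B ignores by default on deserialization.
class GenerateResponse {
    public String response;
}

Injecting the client with @Inject @RestClient (or building it via RestClientBuilder) and calling generate(...) then mirrors the interactive ollama run session above, but from application code.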