Ollama local LLM step-by-step guide? #182
-
Hi, I have Ollama installed on my Mac and it's working. I would like to use my local Ollama LLMs with fabric, so I tried this:

My ~/.config/fabric/.env file contains:

Any suggestions?
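The exact contents of the original .env are not preserved above. Purely as an illustration, and not necessarily what was tried here, a .env that points fabric's OpenAI-compatible settings at a local Ollama server might look like the sketch below; note that the reply further down says this is not required when Ollama runs on its default port.

```sh
# ~/.config/fabric/.env -- hypothetical sketch, not the poster's actual file.
# Ollama exposes an OpenAI-compatible endpoint on its default port, 11434.
OPENAI_BASE_URL=http://localhost:11434/v1
# Ollama does not check the key, but OpenAI-style clients expect one to be set.
OPENAI_API_KEY=ollama
```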
-
Show me the results of your --listmodels.
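For reference, a quick way to produce that output, assuming a standard fabric and Ollama install:

```sh
# Show every model fabric can see, including any local Ollama models it detects.
fabric --listmodels

# Cross-check against the models Ollama has actually pulled locally.
ollama list
```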
-
This is my workaround approach:
Fabric can automatically detect if Ollama runs on the default port 11434. There is no need to add OPENAI_BASE_URL to the configuration file ~/.config/fabric/.env.
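A quick sanity check for that, assuming Ollama's stock configuration:

```sh
# Confirm Ollama is answering on its default port; /api/tags returns the
# locally pulled models as JSON. If this works, fabric should be able to find
# the local models without an OPENAI_BASE_URL entry in ~/.config/fabric/.env.
curl http://localhost:11434/api/tags
```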
-
Ok, thanks. I will try this. /Rob

On 11 March 2024 at 23:20, xssdoctor ***@***.***> wrote:

I just pushed a fix for that. I don't know what the problem is, but with the local models the names should be something like llama2 or llama2-uncensored. These are also the names that go into the Ollama API.
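In other words, the model name passed to fabric is the same name Ollama reports locally. A rough sketch, assuming the --model and --pattern flags and the summarize pattern from a standard fabric install:

```sh
# Pull a model and note the exact name Ollama lists for it.
ollama pull llama2
ollama list

# Use that same name when selecting the model in fabric
# (--model/--pattern assumed here; adjust to match your fabric version).
echo "Ollama serves large language models from the local machine." \
  | fabric --model llama2 --pattern summarize
```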