
[MacOS] Ollama model support discussion #108

Open
yvesete opened this issue Jan 23, 2025 · 11 comments

Comments

@yvesete

yvesete commented Jan 23, 2025

I am on an Apple silicon MacBook (M3) and installed Writing Tools (shortcut Control+W). I did not have issue #105; everything seems OK, but it does not do anything. No error, but... nothing. The icon is there, the settings work, the menu comes up, but nothing happens. No error, only a big nothing.

@yvesete yvesete changed the title Apple writing tools install, no error but also no function. [MacOS] Apple writing tools install, no error but also no function. Jan 23, 2025
@dev-x3Ro

Same here... just the spinning cursor for half a second, and then nothing.
Maybe the API keys are broken? Is there a way to test them?

@yvesete
Author

yvesete commented Jan 23, 2025

Ah, no, that can't be it. I forgot to mention that I am using Ollama locally, and it definitely works.

@dev-x3Ro

When I select Gemini as the model, it works. Looks like an issue with the OpenAI / local LLM provider for me.

@Aryamirsepasi
Collaborator

Hi, if the cursor spins but there are no results, then the problem must be in the provider settings. How did you set the Ollama options in the app?

@yvesete
Author

yvesete commented Jan 23, 2025

The cursor does not spin; there is no reaction at all. The Ollama setup is correct and Ollama is working. I only see the full menu when pressing the shortcut, and the settings window opens too, but there is no reaction, no cursor spin, nothing. Only nothing.

@yvesete
Author

yvesete commented Jan 23, 2025

My settings:
API key: ollama
Base URL: http://localhost:11434/v1
Model name: llama3.1:8b

As I said, Ollama works fine in other applications and in the terminal.
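To rule out the app itself, it can help to hit Ollama's OpenAI-compatible endpoint directly with the exact same settings. The sketch below builds the kind of `/v1/chat/completions` request an OpenAI-style client pointed at that base URL would send (the `build_chat_request` helper is hypothetical, not part of Writing Tools; Ollama ignores the API key, but OpenAI-style clients still send one):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-compatible chat completions request for an
    Ollama server exposing the /v1 endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Ollama does not validate the key, but the header must be present
            # for some OpenAI-style clients, so we send it anyway.
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_chat_request("http://localhost:11434/v1", "ollama", "llama3.1:8b", "Say hi")
print(req.full_url)  # http://localhost:11434/v1/chat/completions

# To actually exercise the endpoint (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the commented-out call returns a completion in the terminal but the app still shows nothing, the endpoint itself is fine and the problem is in the app's provider handling.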

@gabrielbr

Just a small suggestion, but did you try connecting to Ollama with the IP address instead of localhost?

@yvesete
Author

yvesete commented Jan 23, 2025

Yes, it is the same, and all other apps work with localhost too. It is strange because it does absolutely nothing: no CPU usage in Activity Monitor, just nothing.

@Aryamirsepasi
Collaborator

Aryamirsepasi commented Jan 23, 2025

This is strange. I use llama3.2 myself and it works without any problems. Maybe something goes wrong specifically with llama3.1:8b. I'll test that and see what happens.


@yvesete
Author

yvesete commented Jan 24, 2025

Hmm, I tested all of the following models:

NAME ID SIZE MODIFIED
deepseek-coder-v2:latest 63fb193b3a9b 8.9 GB 5 weeks ago
qwen2.5-coder:latest 2b0496514337 4.7 GB 5 weeks ago
nomic-embed-text:latest 0a109f422b47 274 MB 3 months ago
llama3.2:latest a80c4f17acd5 2.0 GB 3 months ago
llama3.1:latest 62757c860e01 4.7 GB 5 months ago
llama3:8b-instruct-q8_0 1b8e49cece7f 8.5 GB 7 months ago
llama3:latest a6990ed6be41 4.7 GB 9 months ago
mistral:latest 61e88e884507 4.1 GB 11 months ago
llama2:latest 78e26419b446 3.8 GB 11 months ago

Only llama3.2:latest and llama3:8b-instruct-q8_0 seem to work, somehow. I'll test further and let you know.
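One detail worth double-checking against this list: the settings earlier in the thread used the model name llama3.1:8b, but the installed tag shown above is llama3.1:latest, and Ollama only resolves exact tags (a bare name falls back to :latest). A tiny sketch of that resolution rule (the `resolve_model` helper is hypothetical, for illustration only):

```python
def resolve_model(configured, installed):
    """Return the installed Ollama tag matching a configured model name,
    or None if no exact tag matches. A bare name like 'llama3' is treated
    as 'llama3:latest', mirroring how Ollama resolves untagged names."""
    if ":" not in configured:
        configured += ":latest"
    return configured if configured in installed else None

# The tags from the `ollama list` output above.
installed = [
    "deepseek-coder-v2:latest", "qwen2.5-coder:latest", "nomic-embed-text:latest",
    "llama3.2:latest", "llama3.1:latest", "llama3:8b-instruct-q8_0",
    "llama3:latest", "mistral:latest", "llama2:latest",
]

print(resolve_model("llama3.2", installed))     # llama3.2:latest
print(resolve_model("llama3.1:8b", installed))  # None
```

If the app silently drops the error response for an unknown tag, a mismatch like llama3.1:8b vs llama3.1:latest would look exactly like "no reaction at all".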

@theJayTea theJayTea changed the title [MacOS] Apple writing tools install, no error but also no function. [MacOS] Ollama model support discussion Jan 26, 2025
@Nabuzata

Nabuzata commented Feb 8, 2025

Hello.

  1. Similar situation (MBA M3): zero reaction when I tried to summarize something (or use the other options), or when talking with the chatbot.
    -> I figured out by accident that the window was actually opening over my Spotify client, which was completely hidden...

  2. In my terminal, I saved my model under a name with "/save llama". Now, to run it, I simply type "ollama run llama".
    -> In the Writing Tools configuration window, the only settings that work for me now are these:
    API Key: llama (or whatever)
    Base URL: same as yours
    Model Name: llama

Maybe that will help someone.
Cheers.
