Mistral on CPU #34

Open
pruthvi1990 opened this issue Oct 4, 2023 · 2 comments

@pruthvi1990

Hi,

I was reading through the quickstart documentation, and I see the requirement is a GPU with at least 24 GB of VRAM.

  • Is there a way to run Mistral on CPUs? If so, could you please provide a link to the quickstart documentation for that?
  • If it's not currently supported, are there any plans to support Mistral on CPUs in the future?
@h3ndrik

h3ndrik commented Oct 5, 2023

llama.cpp
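
For anyone landing here, a minimal sketch of what that suggestion looks like in practice, assuming you separately download a quantized Mistral 7B model in GGUF format (the filename below is only an example of the common community naming):

```sh
# Build llama.cpp from source (CPU inference by default)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a quantized Mistral 7B GGUF on the CPU; the model file must be
# downloaded separately, and this filename is just an illustration
./main -m models/mistral-7b-instruct-v0.1.Q4_K_M.gguf -p "Hello," -n 128
```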

@Frank-Buss

This worked for me: https://ollama.ai/library/mistral. You need to install Ollama first, as described on the ollama.ai homepage (a one-line bash script for Mac and Linux), then just download and run the model with `ollama run mistral`. It needs 8 GB of RAM and generates about 4 tokens/s on my PC (Intel i7-6700K CPU @ 4.00GHz).
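
Spelled out, those two steps are roughly as follows; the install one-liner is the one the ollama.ai homepage published for Linux around that time, so check the homepage for the current command:

```sh
# Install Ollama (Linux one-liner from the ollama.ai homepage, late 2023;
# on macOS the homepage offered an app download instead)
curl https://ollama.ai/install.sh | sh

# Download the Mistral model on first run and start an interactive session
ollama run mistral
```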
