Offline LLM support
Pre-release
- Feature: Added support for running LLMs locally on your system.
- Download mistral-7b-openorca.gguf2.Q4_0.gguf from the GPT4All website (https://gpt4all.io/index.html).
- Place the model inside the models/ directory.
- Now see the magic 🌠
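
The setup steps above can be sketched as a short shell session. The exact download URL is an assumption here (check the GPT4All website linked above for the current link); only the models/ directory name comes from the notes:

```shell
# Create the models/ directory next to the application if it does not exist
mkdir -p models

# Download the model from the GPT4All website first, e.g. with curl
# (URL is an assumption; verify it at https://gpt4all.io/index.html):
#   curl -LO https://gpt4all.io/models/gguf/mistral-7b-openorca.gguf2.Q4_0.gguf

# Move the downloaded file into place if it is present
if [ -f mistral-7b-openorca.gguf2.Q4_0.gguf ]; then
  mv mistral-7b-openorca.gguf2.Q4_0.gguf models/
fi
```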