
Commit

add Phi-2 to README.md
amakropoulos authored Feb 15, 2024
1 parent f62545f commit 9993494
Showing 1 changed file with 2 additions and 2 deletions.
README.md (2 additions, 2 deletions)
```diff
@@ -31,7 +31,7 @@ LLMUnity is built on top of the awesome [llama.cpp](https://github.com/ggerganov
 
 - 💬 Craft your own NPC AI that the player interacts with in natural language
 - 🖌️ The NPC can respond creatively according to your character definition
-- 🗝️ ... or with predefined answers that stick to your script!
+- 🗝️ or with predefined answers that stick to your script!
 - 🔍 You can even build an intelligent search engine with our search system (RAG)
 
 **Features**
```
```diff
@@ -347,7 +347,7 @@ In the scene, select the `LLM` GameObject download the default model (`Download
 Save the scene, run and enjoy!
 
 ## Use your own model
-LLMUnity uses the Mistral 7B Instruct model by default, quantised with the Q4 method ([link](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf?download=true)).<br>
+LLMUnity uses the [Microsoft Phi-2](https://huggingface.co/microsoft/phi-2) or [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model by default, quantised with the Q4 method.<br>
 Alternative models can be downloaded from [HuggingFace](https://huggingface.co/models).<br>
 The required model format is .gguf as defined by the llama.cpp.<br>
 The easiest way is to download gguf models directly by [TheBloke](https://huggingface.co/TheBloke) who has converted an astonishing number of models :rainbow:!<br>
```
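The manual-download step the hunk above describes can be sketched with `curl`. The URL below is the Q4 Mistral link from the removed README line; the variable names are illustrative, and the actual download is left commented out since the file is several gigabytes:

```shell
# Minimal sketch: fetch a Q4-quantised .gguf model from HuggingFace.
MODEL_URL="https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf?download=true"
MODEL_FILE="${MODEL_URL##*/}"    # keep only the last path segment (still has the query string)
MODEL_FILE="${MODEL_FILE%%\?*}"  # drop the "?download=true" query suffix
echo "Saving as: $MODEL_FILE"
# curl -L -o "$MODEL_FILE" "$MODEL_URL"   # uncomment to actually download (~4 GB)
```

Any other .gguf file hosted on HuggingFace can be substituted for `MODEL_URL` using the same `resolve/main/<file>` pattern.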
