From 35dedeb88257e3812066b1df341157f8dcfec818 Mon Sep 17 00:00:00 2001
From: Jules Kuehn
Date: Thu, 7 Mar 2024 13:43:17 -0500
Subject: [PATCH] fix(docs): bge-m3 != bge-small-en-v1.5 (#11748)

---
 docs/getting_started/starter_example_local.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting_started/starter_example_local.md b/docs/getting_started/starter_example_local.md
index 4785a2ac1f849..85dd5723bea7c 100644
--- a/docs/getting_started/starter_example_local.md
+++ b/docs/getting_started/starter_example_local.md
@@ -4,7 +4,7 @@
 Make sure you've followed the [custom installation](installation.md) steps first.
 ```

-This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-m3` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.
+This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-small-en-v1.5` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.

 ## Download data

@@ -33,7 +33,7 @@ from llama_index.llms.ollama import Ollama

 documents = SimpleDirectoryReader("data").load_data()

-# bge-m3 embedding model
+# bge embedding model
 Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

 # ollama
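
For context, the documentation section this patch touches sets up a local embedding model and a local LLM. A minimal sketch of that setup, with the corrected `bge-small-en-v1.5` model name, might look roughly like the following (this assumes the llama-index 0.10-era package layout shown in the diff context, an Ollama server running locally with a `mistral` model pulled, and a `data/` directory with documents; it is an illustration, not part of the patch):

```python
# Sketch of the starter example the patched docs describe.
# Assumes: llama-index with the llama-index-llms-ollama integration
# installed, a local Ollama server, and a ./data directory.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.ollama import Ollama

# Load documents from the local data directory
documents = SimpleDirectoryReader("data").load_data()

# bge embedding model (the name this patch corrects: bge-small-en-v1.5, not bge-m3)
Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

# ollama-served Mistral as the LLM
Settings.llm = Ollama(model="mistral", request_timeout=60.0)

# Build the index and query it
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```

The point of the patch is only the model-name mismatch: the surrounding code already resolved `local:BAAI/bge-small-en-v1.5`, so the prose and comment referring to `bge-m3` were inaccurate.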