diff --git a/docs/docs/getting_started/starter_example_local.md b/docs/docs/getting_started/starter_example_local.md
index 24a0978e58c0a..cd42f4443bea7 100644
--- a/docs/docs/getting_started/starter_example_local.md
+++ b/docs/docs/getting_started/starter_example_local.md
@@ -3,7 +3,7 @@
 !!! tip
     Make sure you've followed the [custom installation](installation.md) steps first.

-This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-small-en-v1.5` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.
+This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `nomic-embed-text` as our embedding model and `Llama3` as our LLM, both served through `Ollama`.

 ## Download data

@@ -17,26 +17,34 @@ Ollama is a tool to help you get set up with LLMs locally (currently supported o
 Follow the [README](https://github.com/jmorganca/ollama) to learn how to install it.

-To load in a Mistral-7B model just do `ollama pull mistral`
+To download the Llama3 model, run `ollama pull llama3`.
+
+To download the nomic embedding model, run `ollama pull nomic-embed-text`.

 **NOTE**: You will need a machine with at least 32GB of RAM.

+To import `llama_index.llms.ollama`, run `pip install llama-index-llms-ollama`.
+
+To import `llama_index.embeddings.ollama`, run `pip install llama-index-embeddings-ollama`.
+
+More integrations are listed on https://llamahub.ai.
+
 ## Load data and build an index

 In the same folder where you created the `data` folder, create a file called `starter.py` file with the following:

 ```python
 from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
-from llama_index.core.embeddings import resolve_embed_model
+from llama_index.embeddings.ollama import OllamaEmbedding
 from llama_index.llms.ollama import Ollama

 documents = SimpleDirectoryReader("data").load_data()

-# bge embedding model
-Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
+# nomic embedding model
+Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

 # ollama
-Settings.llm = Ollama(model="mistral", request_timeout=30.0)
+Settings.llm = Ollama(model="llama3", request_timeout=360.0)

 index = VectorStoreIndex.from_documents(
     documents,
@@ -53,7 +61,7 @@ Your directory structure should look like this:
    └── paul_graham_essay.txt

-We use the `BAAI/bge-small-en-v1.5` model through `resolve_embed_model`, which resolves to our HuggingFaceEmbedding class. We also use our `Ollama` LLM wrapper to load in the mistral model.
+We use the `nomic-embed-text` model through our `OllamaEmbedding` wrapper. We also use our `Ollama` LLM wrapper to load in the Llama3 model.

 ## Query your data
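
A minimal sketch of exercising the local setup this patch documents, assuming the Ollama server is running on its default port and the `llama3` and `nomic-embed-text` models have already been pulled as described above:

```python
# Illustrative sketch (assumes `ollama pull llama3` and
# `ollama pull nomic-embed-text` have been run and Ollama is serving locally).
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

embed_model = OllamaEmbedding(model_name="nomic-embed-text")
llm = Ollama(model="llama3", request_timeout=360.0)

# Embed a short string and report the vector size to confirm the embedding model responds.
vector = embed_model.get_text_embedding("Hello, world!")
print(f"embedding dimension: {len(vector)}")

# Ask the LLM for a short completion to confirm it responds.
print(llm.complete("Say hello in one short sentence."))
```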