docs: add tips of installing integrations (#13234)
HairlessVillager authored May 3, 2024
1 parent 3f7924c commit 2422861
Showing 1 changed file with 15 additions and 7 deletions.
22 changes: 15 additions & 7 deletions docs/docs/getting_started/starter_example_local.md
@@ -3,7 +3,7 @@
!!! tip
Make sure you've followed the [custom installation](installation.md) steps first.

-This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-small-en-v1.5` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.
+This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `nomic-embed-text` as our embedding model and `Llama3` as our LLM, both served through `Ollama`.

## Download data

@@ -17,26 +17,34 @@ Ollama is a tool to help you get set up with LLMs locally (currently supported o

Follow the [README](https://github.com/jmorganca/ollama) to learn how to install it.

-To load in a Mistral-7B model just do `ollama pull mistral`
+To download the Llama3 model, just do `ollama pull llama3`.

+To download the nomic embedding model, just do `ollama pull nomic-embed-text`.

**NOTE**: You will need a machine with at least 32GB of RAM.
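
If you prefer to script these downloads instead of running the CLI by hand, the standalone `ollama` Python client can pull the same models. It is installed separately with `pip install ollama` and is not required by this guide; a minimal sketch, assuming the Ollama server is already running:

```python
# Optional sketch: pull both models via the `ollama` client package
# (an extra convenience, not something this guide depends on).
import ollama

for model in ("llama3", "nomic-embed-text"):
    ollama.pull(model)  # same effect as `ollama pull <model>` on the command line
```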

+To import `llama_index.llms.ollama`, you will need to run `pip install llama-index-llms-ollama`.

+To import `llama_index.embeddings.ollama`, you will need to run `pip install llama-index-embeddings-ollama`.

+More integrations are listed on https://llamahub.ai.
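
Once both packages are installed and the two models are pulled, a quick way to confirm everything is wired up is to call each wrapper directly. A minimal sketch, assuming Ollama is running locally on its default port:

```python
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Ask the local Llama3 model for a short completion.
llm = Ollama(model="llama3", request_timeout=360.0)
print(llm.complete("Say hello in one short sentence."))

# Embed a short piece of text and check the vector's dimensionality.
embed_model = OllamaEmbedding(model_name="nomic-embed-text")
vector = embed_model.get_text_embedding("hello world")
print(len(vector))
```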

## Load data and build an index

In the same folder where you created the `data` folder, create a file called `starter.py` with the following:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
-from llama_index.core.embeddings import resolve_embed_model
+from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

-# bge embedding model
-Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
+# nomic embedding model
+Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# ollama
-Settings.llm = Ollama(model="mistral", request_timeout=30.0)
+Settings.llm = Ollama(model="llama3", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
documents,
@@ -53,7 +61,7 @@ Your directory structure should look like this:
   └── paul_graham_essay.txt
</pre>

-We use the `BAAI/bge-small-en-v1.5` model through `resolve_embed_model`, which resolves to our HuggingFaceEmbedding class. We also use our `Ollama` LLM wrapper to load in the mistral model.
+We use the `nomic-embed-text` model through our `OllamaEmbedding` wrapper. We also use our `Ollama` LLM wrapper to load in the Llama3 model.
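
If you would rather not configure the models globally, both can also be passed in directly instead of going through `Settings`. A sketch of that variant, using the same models scoped to a single index and query engine:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

# Hand the embedding model to the index and the LLM to the query engine
# explicitly, rather than setting them on the global Settings object.
index = VectorStoreIndex.from_documents(
    documents,
    embed_model=OllamaEmbedding(model_name="nomic-embed-text"),
)
query_engine = index.as_query_engine(
    llm=Ollama(model="llama3", request_timeout=360.0)
)
```

Global `Settings` is the simpler choice for a single script; per-component arguments are handy when different indexes need different models.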

## Query your data
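
The body of this section is unchanged by this commit, so the diff collapses it. For orientation, a query against the index built above typically looks like the following sketch (illustrative only, not the collapsed text itself):

```python
# Illustrative sketch: querying the index built in starter.py.
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```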

