minor doc updates (run-llama#11520)
yisding authored and Izuki Matsuba committed Mar 29, 2024
1 parent 0d480f0 commit 7935b3a
Showing 2 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions docs/getting_started/v0_10_0_migration.md
@@ -2,9 +2,9 @@

With the introduction of LlamaIndex v0.10.0, there were several changes

-- integrations have separate `pip installs (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
+- integrations have separate `pip install`s (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
- many imports changed
-- the service context was deprecated
+- the `ServiceContext` was deprecated

Thankfully, we've tried to make these changes as easy as possible!

@@ -72,7 +72,7 @@ from llama_index.core import Settings
Settings.llm = llm
Settings.embed_model = embed_model
-Setting.chunk_size = 512
+Settings.chunk_size = 512
```

You can see the `ServiceContext` -> `Settings` migration guide for [more details](/module_guides/supporting_modules/service_context_migration.md).
6 changes: 3 additions & 3 deletions docs/index.rst
@@ -19,7 +19,7 @@ You may choose to **fine-tune** an LLM with your data, but:
- Due to the cost of training, it's **hard to update** an LLM with the latest information.
- **Observability** is lacking. When you ask an LLM a question, it's not obvious how the LLM arrived at its answer.

-Instead of fine-tuning, one can a context augmentation pattern called `Retrieval-Augmented Generation (RAG) <./getting_started/concepts.html>`_ to obtain more accurate text generation relevant to your specific data. RAG involves the following high level steps:
+Instead of fine-tuning, one can use a context augmentation pattern called `Retrieval-Augmented Generation (RAG) <./getting_started/concepts.html>`_ to obtain more accurate text generation relevant to your specific data. RAG involves the following high level steps:

1. Retrieve information from your data sources first,
2. Add it to your question as context, and
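The retrieve-then-augment steps above can be sketched end to end. This is a toy illustration only: a naive keyword-overlap retriever and a stubbed LLM call, with all names invented rather than taken from the llama_index API:

```python
# Toy end-to-end RAG sketch (hypothetical names, not llama_index code).

DOCUMENTS = [
    "LlamaIndex provides data connectors for APIs, PDFs, and SQL.",
    "Fine-tuning a model is expensive and hard to keep up to date.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Step 1: rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Step 2: add the retrieved text to the question as context."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"

def answer(question: str, docs: list[str]) -> str:
    """Final step: a real system would send the prompt to an LLM; stubbed here."""
    prompt = build_prompt(question, retrieve(question, docs))
    return f"[LLM would answer from]: {prompt}"
```

A production retriever would use embeddings and a vector index rather than keyword overlap, but the data flow is the same.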
@@ -36,7 +36,7 @@ In doing so, RAG overcomes all three weaknesses of the fine-tuning approach:

Firstly, LlamaIndex imposes no restriction on how you use LLMs. You can still use LLMs as auto-complete, chatbots, semi-autonomous agents, and more (see Use Cases on the left). It only makes LLMs more relevant to you.

-LlamaIndex provides the following tools to help you quickly standup production-ready RAG systems:
+LlamaIndex provides the following tools to help you quickly stand up production-ready RAG systems:

- **Data connectors** ingest your existing data from their native source and format. These could be APIs, PDFs, SQL, and (much) more.
- **Data indexes** structure your data in intermediate representations that are easy and performant for LLMs to consume.
@@ -70,7 +70,7 @@ We recommend starting at `how to read these docs <./getting_started/reading.html

To download or contribute, find LlamaIndex on:

-- Github: https://github.com/jerryjliu/llama_index
+- Github: https://github.com/run-llama/llama_index
- PyPi:

- LlamaIndex: https://pypi.org/project/llama-index/.
