Fixed some minor grammatical issues (run-llama#11530)
* Fixed some grammatical mistakes (run-llama#82)

* Update discover_llamaindex.md

* Update installation.md

* Update reading.md

* Update starter_example.md

* Update starter_example_local.md

* Update v0_10_0_migration.md

* Update 2024-02-28-rag-bootcamp-vector-institute.ipynb

* Update multimodal.md

* Update chatbots.md
ShorthillsAI authored and Izuki Matsuba committed Mar 29, 2024
1 parent f7c914a commit 874da1b
Showing 3 changed files with 7 additions and 7 deletions.
@@ -121,7 +121,7 @@
"\n",
"3. Declaration of Research Assessment: In academia, this could refer to a statement or policy regarding how research is evaluated.\n",
"\n",
"4. Digital on-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
"4. Digital On-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
"\n",
"Please provide more context for a more accurate definition.\n"
]
@@ -371,7 +371,7 @@
"source": [
"## In Summary\n",
"\n",
"- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain specific, updated data, long-tail)\n",
"- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain-specific, updated data, long-tail)\n",
"- Context augmentation has been shown (in a few studies) to outperform LLMs without augmentation\n",
"- In this notebook, we showed one such example that follows that pattern."
]
2 changes: 1 addition & 1 deletion docs/use_cases/chatbots.md
@@ -10,7 +10,7 @@ Here are some relevant resources:
- [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you
- [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings
- [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit
-- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chat bots in nature
+- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chatbots in nature

## External sources

8 changes: 4 additions & 4 deletions docs/use_cases/multimodal.md
@@ -1,10 +1,10 @@
# Multi-modal

-LlamaIndex offers capabilities to not only build language-based applications, but also **multi-modal** applications - combining language and images.
+LlamaIndex offers capabilities to not only build language-based applications but also **multi-modal** applications - combining language and images.

## Types of Multi-modal Use Cases

-This space is actively being explored right now, but there are some fascinating use cases popping up.
+This space is actively being explored right now, but some fascinating use cases are popping up.

### RAG (Retrieval Augmented Generation)

@@ -73,7 +73,7 @@ maxdepth: 1

These sections show comparisons between different multi-modal models for different use cases.

-### LLaVa-13, Fuyu-8B and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
+### LLaVa-13, Fuyu-8B, and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning

These notebooks show how to use different Multi-Modal LLM models for image understanding/reasoning. The various model inferences are supported by Replicate or OpenAI GPT4-V API. We compared several popular Multi-Modal LLMs:

@@ -97,7 +97,7 @@ GPT4-V: </examples/multi_modal/openai_multi_modal.ipynb>

### Simple Evaluation of Multi-Modal RAG

-In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use-cases.
+In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded to in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use cases.

```{toctree}
---
