
Fixed some minor grammatical issues #11530

Merged: 7 commits, Mar 1, 2024
@@ -121,7 +121,7 @@
"\n",
"3. Declaration of Research Assessment: In academia, this could refer to a statement or policy regarding how research is evaluated.\n",
"\n",
"4. Digital on-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
"4. Digital On-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
"\n",
"Please provide more context for a more accurate definition.\n"
]
@@ -371,7 +371,7 @@
"source": [
"## In Summary\n",
"\n",
"- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain specific, updated data, long-tail)\n",
"- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain-specific, updated data, long-tail)\n",
"- Context augmentation has been shown (in a few studies) to outperform LLMs without augmentation\n",
"- In this notebook, we showed one such example that follows that pattern."
]
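The context-augmentation pattern this summary describes can be sketched in a few lines of plain Python. This is a toy illustration with hypothetical helper names, not LlamaIndex code: a naive word-overlap retriever selects documents, which are then prepended to the prompt before it would be sent to an LLM.

```python
# Toy sketch of the context-augmentation (RAG) pattern: retrieve relevant
# documents, then build an augmented prompt from them. All names here are
# illustrative stand-ins, not LlamaIndex APIs; no real LLM is called.

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_augmented_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context to the user question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

corpus = [
    "LlamaIndex connects LLMs to external data.",
    "Paris is the capital of France.",
    "RAG retrieves context before generation.",
]
query = "What does RAG retrieve?"
prompt = build_augmented_prompt(query, retrieve(query, corpus))
print(prompt)
```

A real system would replace the overlap scorer with embedding similarity over an index, but the shape — retrieve, then augment, then generate — is the same pattern the summary refers to.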
2 changes: 1 addition & 1 deletion docs/use_cases/chatbots.md
Expand Up @@ -10,7 +10,7 @@ Here are some relevant resources:
- [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you
- [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings
- [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit
- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chat bots in nature
- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chatbots in nature

## External sources

8 changes: 4 additions & 4 deletions docs/use_cases/multimodal.md
@@ -1,10 +1,10 @@
# Multi-modal

LlamaIndex offers capabilities to not only build language-based applications, but also **multi-modal** applications - combining language and images.
LlamaIndex offers capabilities to not only build language-based applications but also **multi-modal** applications - combining language and images.

## Types of Multi-modal Use Cases

This space is actively being explored right now, but there are some fascinating use cases popping up.
This space is actively being explored right now, but some fascinating use cases are popping up.

### RAG (Retrieval Augmented Generation)

@@ -73,7 +73,7 @@ maxdepth: 1

These sections show comparisons between different multi-modal models for different use cases.

### LLaVa-13, Fuyu-8B and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
### LLaVa-13, Fuyu-8B, and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning

These notebooks show how to use different Multi-Modal LLM models for image understanding/reasoning. The various model inferences are supported by Replicate or OpenAI GPT4-V API. We compared several popular Multi-Modal LLMs:

@@ -97,7 +97,7 @@ GPT4-V: </examples/multi_modal/openai_multi_modal.ipynb>

### Simple Evaluation of Multi-Modal RAG

In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use-cases.
In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded to in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use cases.
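The separation the guide describes — scoring the Retriever and the Generator independently — can be sketched with toy metrics in plain Python. These are illustrative stand-ins (hit rate, mean reciprocal rank, a naive faithfulness check), not the llama-index evaluation module itself:

```python
# Toy sketch of evaluating a RAG system's Retriever and Generator
# separately. Metrics here are simplified stand-ins, not the
# llama-index evaluation module.

def hit_rate(retrieved: list[list[str]], expected: list[str]) -> float:
    """Retriever metric: fraction of queries whose expected doc was retrieved."""
    hits = sum(exp in docs for docs, exp in zip(retrieved, expected))
    return hits / len(expected)

def mean_reciprocal_rank(retrieved: list[list[str]], expected: list[str]) -> float:
    """Retriever metric: average of 1/rank of the expected doc (0 if absent)."""
    total = 0.0
    for docs, exp in zip(retrieved, expected):
        if exp in docs:
            total += 1 / (docs.index(exp) + 1)
    return total / len(expected)

def faithfulness(answer: str, context: list[str]) -> bool:
    """Naive Generator metric: every answer sentence must appear in the context."""
    sentences = [s for s in answer.split(". ") if s]
    return all(any(s in c for c in context) for s in sentences)

# Two queries; each inner list is the ranked retrieval result.
retrieved = [["doc_a", "doc_b"], ["doc_c", "doc_a"]]
expected = ["doc_a", "doc_a"]
print(hit_rate(retrieved, expected))              # 1.0
print(mean_reciprocal_rank(retrieved, expected))  # (1/1 + 1/2) / 2 = 0.75
```

The adapted multi-modal versions mentioned above follow the same split, additionally scoring retrieved images alongside text; see the linked notebook for the library's actual evaluators.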

```{toctree}
---