From 5a6244ffd01985990e58f3b389832f4e050ad46f Mon Sep 17 00:00:00 2001
From: robertturner <143536791+robertdhayanturner@users.noreply.github.com>
Date: Thu, 15 Feb 2024 13:04:27 -0500
Subject: [PATCH] Update retrieval_augmented_generation_eval.md
Corrected the image source URLs and fixed the markdown on the third image.
---
docs/use_cases/retrieval_augmented_generation_eval.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/use_cases/retrieval_augmented_generation_eval.md b/docs/use_cases/retrieval_augmented_generation_eval.md
index 7eac69406..ce628461b 100644
--- a/docs/use_cases/retrieval_augmented_generation_eval.md
+++ b/docs/use_cases/retrieval_augmented_generation_eval.md
@@ -7,7 +7,7 @@
Retrieval Augmented Generation (RAG) is probably the most useful application of large language models today. RAG enhances content generation by leveraging existing information effectively. It can amalgamate specific, relevant details from multiple sources to generate more accurate and relevant query results. This makes RAG potentially invaluable in various domains, including content creation, question & answer applications, and information synthesis. RAG does this by combining the strengths of retrieval, usually using dense vector search, and text generation models, like GPT. For a more in-depth introduction to RAG, read [here](https://hub.superlinked.com/retrieval-augmented-generation).
-![Implementation of RAG using Qdrant as a vector database](..assets/use_cases/retrieval_augmented_generation_eval/rag_qdrant.jpg)
+![Implementation of RAG using Qdrant as a vector database](../assets/use_cases/retrieval_augmented_generation_eval/rag_qdrant.jpg)
*RAG system (above) using* *Qdrant* *as the knowledge store. To determine which Vector Database fits your specific use case, refer to the* [*Vector DB feature matrix*](https://vdbs.superlinked.com/).
@@ -35,7 +35,7 @@ In the case of RAG, not only is it important to have good metrics, but that you
To see where things are going well, can be improved, and also where errors may originate, it's important to evaluate each component in isolation. In the following visual, we've classified RAG's components - Information Retrieval, Context Augmentation, and Response Generation - along with what needs evaluation in each:
-![Classification of Challenges of RAG Evaluation](..assets/use_cases/retrieval_augmented_generation_eval/rag_challenges.jpg)
+![Classification of Challenges of RAG Evaluation](../assets/use_cases/retrieval_augmented_generation_eval/rag_challenges.jpg)
*The challenges of RAG Evaluation Presentation (above), including the* [*'Lost in the Middle'*](https://arxiv.org/abs/2307.03172) *problem*.
@@ -50,7 +50,7 @@ The evaluation framework we propose is meant to ensure granular and thorough mea
To meet these evaluation challenges systematically, it's best practice to break down our evaluation into different levels, so that we understand what specifically is working well, and what needs improvement.
-
+![Granular Levels of Evaluation of RAG](../assets/use_cases/retrieval_augmented_generation_eval/rag_granular.jpg)
Let's take a closer look to see what's involved in each of these levels individually.