Commit: Update UpTrain tutorial and docs (#11504)

Dominastorm authored Feb 29, 2024
1 parent b3112de commit 4b765f4
Showing 6 changed files with 401 additions and 202 deletions.
271 changes: 176 additions & 95 deletions docs/community/integrations/uptrain.md

Large diffs are not rendered by default.

304 changes: 211 additions & 93 deletions docs/examples/callbacks/UpTrainCallback.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/examples/evaluation/UpTrain.ipynb
@@ -21,7 +21,7 @@
"id": "0958c248",
"metadata": {},
"source": [
"**Overview**: In this example, we will see how to use UpTrain with LlamaIndex. "
"**Overview**: In this example, we will see how to use UpTrain with LlamaIndex. UpTrain ([github](https://github.com/uptrain-ai/uptrain) || [website](https://github.com/uptrain-ai/uptrain/) || [docs](https://docs.uptrain.ai/)) is an open-source platform to evaluate and improve GenAI applications. It provides grades for 20+ preconfigured checks (covering language, code, embedding use cases), performs root cause analysis on failure cases and gives insights on how to resolve them. More details on UpTrain's evaluations can be found [here](https://github.com/uptrain-ai/uptrain?tab=readme-ov-file#pre-built-evaluations-we-offer-).\n"
]
},
{
@@ -1,35 +1,35 @@
# LlamaIndex Callbacks Integration: UpTrain

-UpTrain is an open-source tool to evaluate and monitor the performance of language models. It provides a set of pre-built evaluations to assess the quality of responses generated by the model. Once you add UpTrainCallbackHandler to your existing LlamaIndex pipeline, it will take care of sending the generated responses to the UpTrain Managed Service for evaluations and display the results in the output.
+UpTrain ([github](https://github.com/uptrain-ai/uptrain) || [website](https://uptrain.ai/) || [docs](https://docs.uptrain.ai/getting-started/introduction)) is an open-source platform to evaluate and improve Generative AI applications. It provides grades for 20+ preconfigured checks (covering language, code, embedding use cases), performs root cause analysis on failure cases and gives insights on how to resolve them. Once you add UpTrainCallbackHandler to your existing LlamaIndex pipeline, it will automatically capture the right data, run evaluations and display the results in the output.

-Three additional evaluations for Llamaindex have been introduced, complementing existing ones. These evaluations run automatically, with results displayed in the output. More details on UpTrain's evaluations can be found [here](https://github.com/uptrain-ai/uptrain?tab=readme-ov-file#pre-built-evaluations-we-offer-).
+More details on UpTrain's evaluations can be found [here](https://github.com/uptrain-ai/uptrain?tab=readme-ov-file#pre-built-evaluations-we-offer-).
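The handler setup this file describes can be sketched as below. This is a hedged illustration only: the import path and constructor arguments (`key_type`, `api_key`, `project_name`) are assumptions based on the integration package's public examples, not confirmed by this diff, and the keys and data path are placeholders.

```python
# Sketch (assumed API): attach UpTrain's callback handler to a LlamaIndex
# pipeline so query-engine calls are captured and evaluated automatically.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.callbacks import CallbackManager
from llama_index.callbacks.uptrain.base import UpTrainCallbackHandler

callback_handler = UpTrainCallbackHandler(
    key_type="openai",              # placeholder: "openai" or "uptrain" (managed service)
    api_key="sk-...",               # placeholder key
    project_name="uptrain_llamaindex",
)
Settings.callback_manager = CallbackManager([callback_handler])

# From here on, the handler sends generated responses for evaluation
# and prints the resulting scores alongside the output.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
```

Since the evaluations run inside the callback, no other changes to the pipeline are needed.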

Selected operators from the LlamaIndex pipeline are highlighted for demonstration:

## 1. **RAG Query Engine Evaluations**:

The RAG query engine plays a crucial role in retrieving context and generating responses. To ensure its performance and response quality, we conduct the following evaluations:

-- **Context Relevance**: Determines if the context extracted from the query is relevant to the response.
-- **Factual Accuracy**: Assesses if the LLM is hallcuinating or providing incorrect information.
-- **Response Completeness**: Checks if the response contains all the information requested by the query.
+- **[Context Relevance](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-relevance)**: Determines if the context extracted from the query is relevant to the response.
+- **[Factual Accuracy](https://docs.uptrain.ai/predefined-evaluations/context-awareness/factual-accuracy)**: Assesses if the LLM is hallucinating or providing incorrect information.
+- **[Response Completeness](https://docs.uptrain.ai/predefined-evaluations/response-quality/response-completeness)**: Checks if the response contains all the information requested by the query.

## 2. **Sub-Question Query Generation Evaluation**:

-The SubQuestionQueryGeneration operator decomposes a question into sub-questions, generating responses for each using a RAG query engine. Given the complexity, we include the previous evaluations and add:
+The SubQuestionQueryGeneration operator decomposes a question into sub-questions, generating responses for each using a RAG query engine. To evaluate the performance of the SubQuery module, we add another check and also run the above three evaluations on all the sub-queries:

-- **Sub Query Completeness**: Assures that the sub-questions accurately and comprehensively cover the original query.
+- **[Sub Query Completeness](https://docs.uptrain.ai/predefined-evaluations/query-quality/sub-query-completeness)**: Ensures that the sub-questions accurately and comprehensively cover the original query.
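As a toy illustration of what a sub-query completeness check measures, here is a simplistic keyword-coverage heuristic written for this note. UpTrain's actual check is LLM-graded; the function, stopword list, and example query below are all ours.

```python
def subquery_coverage(question: str, sub_questions: list[str]) -> float:
    """Toy heuristic: fraction of the original question's content words
    that appear in at least one sub-question. Illustrates the idea of
    'coverage' only; the real check is LLM-graded, not keyword-based."""
    stopwords = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "what", "how"}
    words = {w for w in question.lower().split() if w not in stopwords}
    combined = " ".join(sub_questions).lower()
    covered = {w for w in words if w in combined}
    return len(covered) / len(words) if words else 1.0

score = subquery_coverage(
    "compare the revenue and headcount of Uber and Lyft",
    ["What is the revenue of Uber?", "What is the revenue of Lyft?"],
)
# "headcount" and "compare" are never asked about, so coverage is below 1.0
```

A low score here signals that the decomposition dropped part of the original question, which is exactly the failure mode the real evaluation flags.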

## 3. **Re-Ranking Evaluations**:

-Re-ranking involves reordering nodes based on relevance to the query and choosing top n nodes. Different evaluations are performed based on the number of nodes returned after re-ranking.
+Re-ranking involves reordering nodes based on relevance to the query and choosing the top n nodes. Different evaluations are performed based on the number of nodes returned after re-ranking.

a. Same Number of Nodes:

-- **Context Reranking**: Checks if the order of re-ranked nodes is more relevant to the query than the original order.
+- **[Context Reranking](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-reranking)**: Checks if the order of re-ranked nodes is more relevant to the query than the original order.

b. Different Number of Nodes:

-- **Context Conciseness**: Examines whether the reduced number of nodes still provides all the required information.
+- **[Context Conciseness](https://docs.uptrain.ai/predefined-evaluations/context-awareness/context-conciseness)**: Examines whether the reduced number of nodes still provides all the required information.

These evaluations collectively ensure the robustness and effectiveness of the RAG query engine, SubQuestionQueryGeneration operator, and the re-ranking process in the LlamaIndex pipeline.
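The branching the re-ranking section describes (same node count versus a reduced one) can be sketched as a small dispatch. The function name and return shape are ours, not UpTrain's API; only the rule itself comes from the docs above.

```python
def pick_rerank_check(n_original: int, n_reranked: int) -> str:
    """Choose which check applies after re-ranking, per the rule above:
    same node count -> context reranking; a different (reduced) count ->
    context conciseness. Check names mirror the docs; function is ours."""
    if n_reranked == n_original:
        return "context_reranking"
    return "context_conciseness"

pick_rerank_check(5, 5)  # "context_reranking"
pick_rerank_check(5, 2)  # "context_conciseness"
```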
@@ -118,7 +118,7 @@ def uptrain_evaluate(
         if column == "question":
             print(f"\nQuestion: {row[column]}")
         elif column == "response":
-            print(f"Response: {row[column]}")
+            print(f"Response: {row[column]}\n")
         elif column.startswith("score"):
             if column in score_name_map:
                 print(f"{score_name_map[column]}: {row[column]}")
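A self-contained version of the loop this hunk patches, with a toy row and `score_name_map`, since the surrounding `uptrain_evaluate` function is not shown in the diff. The change itself is just a trailing newline after the response line.

```python
# Standalone sketch of the patched printing loop: emit the question, the
# response (now followed by a blank line), and any mapped score columns.
score_name_map = {"score_context_relevance": "Context Relevance Score"}  # toy mapping

def format_row(row: dict) -> list[str]:
    lines = []
    for column in row:
        if column == "question":
            lines.append(f"\nQuestion: {row[column]}")
        elif column == "response":
            lines.append(f"Response: {row[column]}\n")  # the changed line: '\n' appended
        elif column.startswith("score") and column in score_name_map:
            lines.append(f"{score_name_map[column]}: {row[column]}")
    return lines

out = format_row({"question": "Q?", "response": "A.", "score_context_relevance": 1.0})
# out == ["\nQuestion: Q?", "Response: A.\n", "Context Relevance Score: 1.0"]
```

The extra newline simply separates the response from the score lines in the printed report.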
@@ -31,8 +31,8 @@ version = "0.1.1"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
-llama-index-core = "0.10.0"
-uptrain = ">=0.5.0"
+llama-index-core = ">=0.10.0"
+uptrain = ">=0.6.5"

[tool.poetry.group.dev.dependencies]
ipython = "8.10.0"
