diff --git a/README.md b/README.md
index 1a8492f..91b2e5b 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,12 @@
 
 **Find the best hyperparameters for Embedding your data in RAG Pipeline**
 
-![vectorboard banner image](docs/images/benchmark.png)
+Get a detailed report on the differences between hyperparameter combinations:
+
+- Performance and embedding time
+- Quality of the responses to the eval queries
+
+> More benchmark properties, such as cost estimation and an automated accuracy metric, are coming soon.
 
 # TL;DR
 
@@ -105,6 +110,16 @@ Finally, view the results in a Gradio app using `.results()` method. To get a pu
 search.results(share=True)
 ```
 
+## What's included in the results?
+
+The results page in the Gradio app includes several metrics:
+
+- Total runtime and embedding time per experiment (each experiment is one combination of hyperparameters from the `param_grid`)
+  ![benchmark results](docs/images/benchmark.png)
+
+- The response to each `eval_query` per `Experiment`, in table format, so you can see which experiment gives the correct answer and easily compare experiments side by side.
+  ![Gradio results screenshot](docs/images/gradio-screenshot-001.png)
+
 # Overview and Core Concepts
 
 RAG (Retrieval-Augmented Generation) revolutionizes the way we approach question-answering and text generation tasks by combining the capabilities of retrieval-based and generative models. While it excels in many areas, one of the critical factors for its success is the quality of embeddings.
diff --git a/docs/images/gradio-screenshot-001.png b/docs/images/gradio-screenshot-001.png
new file mode 100644
index 0000000..fe70c8f
Binary files /dev/null and b/docs/images/gradio-screenshot-001.png differ
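
For reference, a minimal sketch of how the concepts named in this diff might fit together in code. Only `param_grid`, the eval queries, and `search.results(share=True)` appear in the diff itself; the import path, the `GridSearch` constructor, and the `fit()` signature below are assumptions for illustration, not the library's confirmed API.

```python
# Hypothetical usage sketch -- only `param_grid`, eval queries, and
# `search.results(share=True)` are confirmed by this diff; the import,
# constructor, and fit() call are illustrative assumptions.
from vectorboard.search import GridSearch  # assumed import path

# Each combination of these values becomes one Experiment in the report,
# benchmarked for total runtime and embedding time.
param_grid = {
    "chunk_size": [256, 512, 1024],  # assumed hyperparameter name
    "chunk_overlap": [0, 64],        # assumed hyperparameter name
}

# Eval queries: the questions whose answers are compared per Experiment
# in the results table.
eval_queries = ["What is the document's main conclusion?"]

search = GridSearch(param_grid=param_grid)                # assumed constructor
search.fit("data/corpus.pdf", eval_queries=eval_queries)  # assumed signature
search.results(share=True)  # from the diff: opens the Gradio report with a public link
```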