diff --git a/docs/trulens_eval/intro.md b/docs/trulens_eval/intro.md index fa4c99997..e4303604c 100644 --- a/docs/trulens_eval/intro.md +++ b/docs/trulens_eval/intro.md @@ -1,18 +1,36 @@ + # Welcome to TruLens-Eval! ![TruLens](https://www.trulens.org/assets/images/Neural_Network_Explainability.png) -**Don't just vibe-check your llm app!** Systematically evaluate and track your LLM experiments with TruLens. As you develop your app including prompts, models, retreivers, knowledge sources and more, TruLens-Eval is the tool you need to understand its performance. +**Don't just vibe-check your LLM app!** Systematically evaluate and track your +LLM experiments with TruLens. As you develop your app including prompts, models, +retrievers, knowledge sources and more, TruLens-Eval is the tool you need to +understand its performance. -Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application. +Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help +you to identify failure modes & systematically iterate to improve your +application. -Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/). +Read more about the core concepts behind TruLens, including [Feedback +Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), +[The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), +and [Honest, Harmless and Helpful +Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/). 
## TruLens in the development workflow -Build your first prototype then connect instrumentation and logging with TruLens. Decide what feedbacks you need, and specify them with TruLens to run alongside your app. Then iterate and compare versions of your app in an easy-to-use user interface 👇 +Build your first prototype then connect instrumentation and logging with +TruLens. Decide what feedbacks you need, and specify them with TruLens to run +alongside your app. Then iterate and compare versions of your app in an +easy-to-use user interface 👇 -![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png) +![Architecture +Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png) ## Installation and Setup @@ -24,10 +42,16 @@ Install the trulens-eval pip package from PyPI. ## Quick Usage -Walk through how to instrument and evaluate a RAG built from scratch with TruLens. +Walk through how to instrument and evaluate a RAG built from scratch with +TruLens. -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/quickstart/quickstart.ipynb) +[![Open In +Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/quickstart/quickstart.ipynb) ### 💡 Contributing -Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details. +Interested in contributing? See our [contribution +guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details. 
+ \ No newline at end of file diff --git a/trulens_eval/OPTIONAL.md b/trulens_eval/OPTIONAL.md new file mode 100644 index 000000000..1844f7a09 --- /dev/null +++ b/trulens_eval/OPTIONAL.md @@ -0,0 +1,51 @@ +# Optional Packages + +Most of the examples included within `trulens_eval` require additional packages +not installed alongside `trulens_eval`. You may be prompted to install them +(with pip). The requirements file `trulens_eval/requirements.optional.txt` +contains the list of optional packages and their uses if you'd like to install +them all in one go. + +## Dev Notes + +To handle optional packages and provide clearer instructions to the user, we +employ a context-manager-based scheme (see `utils/imports.py`) to import +packages that may not be installed. The basic form of such imports can be seen +in `__init__.py`: + +```python +with OptionalImports(messages=REQUIREMENT_LLAMA): + from trulens_eval.tru_llama import TruLlama +``` + +This makes it so that `TruLlama` gets defined subsequently even if the import +fails (because `tru_llama` imports `llama_index`, which may not be installed). +However, if the user imports `TruLlama` (via `__init__.py`) and tries to use it +(call it, look up an attribute, etc.), they will be presented with a message +telling them that `llama-index` is optional and how to install it: + +``` +ModuleNotFoundError: +llama-index package is required for instrumenting llama_index apps. +You should be able to install it with pip: + + pip install "llama-index>=v0.9.14.post3" +``` + +If a user imports directly from `tru_llama` (not by way of `__init__.py`), they +will get that message immediately instead of upon use due to this line inside +`tru_llama.py`: + +```python +OptionalImports(messages=REQUIREMENT_LLAMA).assert_installed(llama_index) +``` + +This checks that the optional import system did not return a replacement for +`llama_index` (under a context manager earlier in the file). 
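The deferred-failure mechanism described above can be sketched in a few lines. This is a minimal illustration of the pattern, not the actual `trulens_eval` implementation; the `Dummy` class and the `optional_import` and `assert_installed` helpers here are hypothetical names:

```python
import importlib


class Dummy:
    """Placeholder for a missing optional module; fails loudly on first use."""

    def __init__(self, message: str):
        self._message = message

    def __getattr__(self, name):
        # Fires on any attribute lookup (including method access), so the
        # error only surfaces when the placeholder is actually used.
        raise ModuleNotFoundError(self._message)


def optional_import(module_name: str, message: str):
    """Return the module if installed, else a Dummy carrying install hints."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return Dummy(message)


def assert_installed(mod):
    """Fail immediately (rather than on first use) if mod is a placeholder."""
    if isinstance(mod, Dummy):
        raise ModuleNotFoundError(mod._message)
```

Under this sketch, `optional_import("llama_index", ...)` defers the failure until the stand-in is touched, while a module-level `assert_installed(llama_index)` (as in `tru_llama.py`) converts the deferred failure into an immediate import-time error.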
+ +### When to Fail + +As implied above, imports from a general module that does not itself require an +optional package (like `from trulens_eval ...`) should not produce the error +immediately, but imports from modules that do make use of optional packages +(like `tru_llama.py`) should. \ No newline at end of file diff --git a/trulens_eval/README.md b/trulens_eval/README.md index fa4c99997..024f11f1e 100644 --- a/trulens_eval/README.md +++ b/trulens_eval/README.md @@ -2,17 +2,30 @@ ![TruLens](https://www.trulens.org/assets/images/Neural_Network_Explainability.png) -**Don't just vibe-check your llm app!** Systematically evaluate and track your LLM experiments with TruLens. As you develop your app including prompts, models, retreivers, knowledge sources and more, TruLens-Eval is the tool you need to understand its performance. +**Don't just vibe-check your LLM app!** Systematically evaluate and track your +LLM experiments with TruLens. As you develop your app including prompts, models, +retrievers, knowledge sources and more, TruLens-Eval is the tool you need to +understand its performance. -Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you to identify failure modes & systematically iterate to improve your application. +Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help +you to identify failure modes & systematically iterate to improve your +application. -Read more about the core concepts behind TruLens including [Feedback Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), [The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), and [Honest, Harmless and Helpful Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/). 
+Read more about the core concepts behind TruLens including [Feedback +Functions](https://www.trulens.org/trulens_eval/core_concepts_feedback_functions/), +[The RAG Triad](https://www.trulens.org/trulens_eval/core_concepts_rag_triad/), +and [Honest, Harmless and Helpful +Evals](https://www.trulens.org/trulens_eval/core_concepts_honest_harmless_helpful_evals/). ## TruLens in the development workflow -Build your first prototype then connect instrumentation and logging with TruLens. Decide what feedbacks you need, and specify them with TruLens to run alongside your app. Then iterate and compare versions of your app in an easy-to-use user interface 👇 +Build your first prototype then connect instrumentation and logging with +TruLens. Decide what feedbacks you need, and specify them with TruLens to run +alongside your app. Then iterate and compare versions of your app in an +easy-to-use user interface 👇 -![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png) +![Architecture +Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png) ## Installation and Setup @@ -24,10 +37,13 @@ Install the trulens-eval pip package from PyPI. ## Quick Usage -Walk through how to instrument and evaluate a RAG built from scratch with TruLens. +Walk through how to instrument and evaluate a RAG built from scratch with +TruLens. -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/quickstart/quickstart.ipynb) +[![Open In +Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/quickstart/quickstart.ipynb) ### 💡 Contributing -Interested in contributing? See our [contribution guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details. +Interested in contributing? 
See our [contribution +guide](https://www.trulens.org/trulens_eval/CONTRIBUTING/) for more details. diff --git a/trulens_eval/trulens_eval/tru_llama.py b/trulens_eval/trulens_eval/tru_llama.py index d29345032..60e31dd79 100644 --- a/trulens_eval/trulens_eval/tru_llama.py +++ b/trulens_eval/trulens_eval/tru_llama.py @@ -24,43 +24,44 @@ pp = PrettyPrinter() -import llama_index -from llama_index.chat_engine.types import AgentChatResponse -from llama_index.chat_engine.types import BaseChatEngine -from llama_index.chat_engine.types import StreamingAgentChatResponse -from llama_index.embeddings.base import BaseEmbedding -from llama_index.indices.base import BaseIndex -# misc -from llama_index.indices.base_retriever import BaseRetriever -from llama_index.indices.prompt_helper import PromptHelper -from llama_index.indices.query.base import BaseQueryEngine -from llama_index.indices.query.schema import QueryBundle -from llama_index.indices.query.schema import QueryType -from llama_index.indices.service_context import ServiceContext -from llama_index.llm_predictor import LLMPredictor -from llama_index.llm_predictor.base import BaseLLMPredictor -from llama_index.llm_predictor.base import LLMMetadata -# LLMs -from llama_index.llms.base import BaseLLM # subtype of BaseComponent -# memory -from llama_index.memory import BaseMemory -from llama_index.node_parser.interface import NodeParser -from llama_index.prompts.base import Prompt -from llama_index.question_gen.types import BaseQuestionGenerator -from llama_index.response.schema import Response -from llama_index.response.schema import RESPONSE_TYPE -from llama_index.response.schema import StreamingResponse -from llama_index.response_synthesizers.base import BaseSynthesizer -from llama_index.response_synthesizers.refine import Refine -from llama_index.schema import BaseComponent -# agents -from llama_index.tools.types import AsyncBaseTool # subtype of BaseTool -from llama_index.tools.types import BaseTool -from 
llama_index.tools.types import \ - ToolMetadata # all of the readable info regarding tools is in this class -from llama_index.vector_stores.types import VectorStore - -from trulens_eval.utils.llama import WithFeedbackFilterNodes +with OptionalImports(messages=REQUIREMENT_LLAMA): + import llama_index + from llama_index.chat_engine.types import AgentChatResponse + from llama_index.chat_engine.types import BaseChatEngine + from llama_index.chat_engine.types import StreamingAgentChatResponse + from llama_index.embeddings.base import BaseEmbedding + from llama_index.indices.base import BaseIndex + # misc + from llama_index.indices.base_retriever import BaseRetriever + from llama_index.indices.prompt_helper import PromptHelper + from llama_index.indices.query.base import BaseQueryEngine + from llama_index.indices.query.schema import QueryBundle + from llama_index.indices.query.schema import QueryType + from llama_index.indices.service_context import ServiceContext + from llama_index.llm_predictor import LLMPredictor + from llama_index.llm_predictor.base import BaseLLMPredictor + from llama_index.llm_predictor.base import LLMMetadata + # LLMs + from llama_index.llms.base import BaseLLM # subtype of BaseComponent + # memory + from llama_index.memory import BaseMemory + from llama_index.node_parser.interface import NodeParser + from llama_index.prompts.base import Prompt + from llama_index.question_gen.types import BaseQuestionGenerator + from llama_index.response.schema import Response + from llama_index.response.schema import RESPONSE_TYPE + from llama_index.response.schema import StreamingResponse + from llama_index.response_synthesizers.base import BaseSynthesizer + from llama_index.response_synthesizers.refine import Refine + from llama_index.schema import BaseComponent + # agents + from llama_index.tools.types import AsyncBaseTool # subtype of BaseTool + from llama_index.tools.types import BaseTool + from llama_index.tools.types import \ + ToolMetadata # all of the 
readable info regarding tools is in this class + from llama_index.vector_stores.types import VectorStore + + from trulens_eval.utils.llama import WithFeedbackFilterNodes # Need to `from ... import ...` for the below as referring to some of these # later in this file by full path does not work due to lack of intermediate