Hotfix (#55)
* fixed issue with tools using agent_config properly (when specified)

* upgrade openai llm to support o3-mini

* updated

* updated README

* fixed README

* fixes
ofermend authored Feb 7, 2025
1 parent 260f125 commit 34f9bf7
Showing 7 changed files with 170 additions and 107 deletions.
1 change: 1 addition & 0 deletions .pylintrc
@@ -22,4 +22,5 @@ disable =
too-many-branches,
too-many-instance-attributes,
too-many-arguments,
too-many-positional-arguments,

58 changes: 47 additions & 11 deletions README.md
@@ -198,11 +198,22 @@ similar queries that require a response in terms of a list of matching documents

## 🛠️ Agent Tools at a Glance

`vectara-agentic` provides a few tools out of the box:
`vectara-agentic` provides a few tools out of the box (see ToolsCatalog for details):

1. **Standard tools**:
- `summarize_text`: a tool to summarize a long text into a shorter summary (uses LLM)
- `rephrase_text`: a tool to rephrase a given text, given a set of rephrase instructions (uses LLM)

These tools use an LLM, so they rely on the tools LLM specified in your `AgentConfig`.
To instantiate them:

```python
from vectara_agentic.tools_catalog import ToolsCatalog
summarize_text = ToolsCatalog(agent_config).summarize_text
```

This ensures the `summarize_text` tool is configured with the proper LLM provider and model,
as specified in the agent configuration.
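The catalog pattern (a small object constructed with the agent configuration, whose methods are the tools) can be illustrated with a self-contained stub; the `StubConfig` and `StubCatalog` names below are illustrative, not part of the library:

```python
# Illustrative stub of the ToolsCatalog pattern: the catalog is constructed
# with a config object, and each tool method reads that config to decide
# which LLM to use. A real implementation would call the configured LLM.
class StubConfig:
    def __init__(self, tool_llm_model_name):
        self.tool_llm_model_name = tool_llm_model_name

class StubCatalog:
    def __init__(self, agent_config):
        self.agent_config = agent_config

    def summarize_text(self, text, expertise="general"):
        model = self.agent_config.tool_llm_model_name
        return f"[{model} as {expertise} expert] summary of: {text[:20]}"

# Binding the method gives a plain callable that carries its config along.
summarize_text = StubCatalog(StubConfig("gpt-4o-mini")).summarize_text
print(summarize_text("Quarterly revenue grew 12%...", expertise="finance"))
```

The bound method carries the configuration with it, so whoever calls the tool never needs to see the config object.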

2. **Legal tools**: a set of tools for the legal vertical, such as:
- `summarize_legal_text`: summarize legal text with a certain point of view
- `critique_as_judge`: critique a legal text as a judge, providing their perspective
@@ -239,19 +250,44 @@ mult_tool = ToolsFactory().create_tool(mult_func)

## 🛠️ Configuration

## Configuring Vectara-agentic

The main way to control the behavior of `vectara-agentic` is by passing an `AgentConfig` object to your `Agent` when creating it.
This object will include the following items:
- `VECTARA_AGENTIC_AGENT_TYPE`: valid values are `REACT`, `LLMCOMPILER`, `LATS` or `OPENAI` (default: `OPENAI`)
- `VECTARA_AGENTIC_MAIN_LLM_PROVIDER`: valid values are `OPENAI`, `ANTHROPIC`, `TOGETHER`, `GROQ`, `COHERE`, `BEDROCK`, `GEMINI` or `FIREWORKS` (default: `OPENAI`)
- `VECTARA_AGENTIC_MAIN_MODEL_NAME`: agent model name (default depends on provider)
- `VECTARA_AGENTIC_TOOL_LLM_PROVIDER`: tool LLM provider (default: `OPENAI`)
- `VECTARA_AGENTIC_TOOL_MODEL_NAME`: tool model name (default depends on provider)
- `VECTARA_AGENTIC_OBSERVER_TYPE`: valid values are `ARIZE_PHOENIX` or `NONE` (default: `NONE`)
- `VECTARA_AGENTIC_API_KEY`: a secret key if using the API endpoint option (defaults to `dev-api-key`)
For example:

```python
agent_config = AgentConfig(
agent_type = AgentType.REACT,
main_llm_provider = ModelProvider.ANTHROPIC,
main_llm_model_name = 'claude-3-5-sonnet-20241022',
tool_llm_provider = ModelProvider.TOGETHER,
tool_llm_model_name = 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
)

agent = Agent(
tools=[query_financial_reports_tool],
topic="10-K financial reports",
custom_instructions="You are a helpful financial assistant in conversation with a user.",
agent_config=agent_config
)
```

The `AgentConfig` object may include the following items:
- `agent_type`: the agent type. Valid values are `REACT`, `LLMCOMPILER`, `LATS` or `OPENAI` (default: `OPENAI`).
- `main_llm_provider` and `tool_llm_provider`: the LLM provider for main agent and for the tools. Valid values are `OPENAI`, `ANTHROPIC`, `TOGETHER`, `GROQ`, `COHERE`, `BEDROCK`, `GEMINI` or `FIREWORKS` (default: `OPENAI`).
- `main_llm_model_name` and `tool_llm_model_name`: agent model name for agent and tools (default depends on provider).
- `observer`: the observer type; set to `ARIZE_PHOENIX` to enable observability, or leave it undefined to use no observability framework.
- `endpoint_api_key`: a secret key if using the API endpoint option (defaults to `dev-api-key`).

If any of these are not provided, `AgentConfig` first tries to read the values from the OS environment.
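That fallback can be sketched as follows; this is a simplified illustration of the precedence (explicit argument, then environment variable, then default), and `resolve_setting` is a hypothetical helper, not the library's actual code:

```python
import os

# Simplified sketch of the AgentConfig fallback: an explicit argument wins,
# otherwise the matching environment variable is read, otherwise a default.
def resolve_setting(explicit, env_var, default):
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["VECTARA_AGENTIC_MAIN_LLM_PROVIDER"] = "ANTHROPIC"
provider = resolve_setting(None, "VECTARA_AGENTIC_MAIN_LLM_PROVIDER", "OPENAI")
print(provider)  # prints "ANTHROPIC" (taken from the environment)
```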

When creating a `VectaraToolFactory`, you can pass in a `vectara_api_key`, `vectara_customer_id`, and `vectara_corpus_id` to the factory. If not passed in, it will be taken from the environment variables (`VECTARA_API_KEY`, `VECTARA_CUSTOMER_ID` and `VECTARA_CORPUS_ID`). Note that `VECTARA_CORPUS_ID` can be a single ID or a comma-separated list of IDs (if you want to query multiple corpora).
## Configuring Vectara RAG or search tools

When creating a `VectaraToolFactory`, you can pass in a `vectara_api_key`, `vectara_customer_id`, and `vectara_corpus_id` to the factory.

If not passed in, they will be taken from the environment variables (`VECTARA_API_KEY`, `VECTARA_CUSTOMER_ID` and `VECTARA_CORPUS_ID`). Note that `VECTARA_CORPUS_ID` can be a single ID or a comma-separated list of IDs (if you want to query multiple corpora).

These values are used as credentials when creating Vectara tools in `create_rag_tool()` and `create_search_tool()`.
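Since `VECTARA_CORPUS_ID` may hold either one ID or a comma-separated list, a factory has to split it before querying; a minimal sketch of that parsing (illustrative, not the library's code):

```python
def parse_corpus_ids(corpus_id_value: str) -> list[str]:
    # "123" -> ["123"]; "123,456, 789" -> ["123", "456", "789"]
    # Stray whitespace and empty segments are dropped.
    return [cid.strip() for cid in corpus_id_value.split(",") if cid.strip()]

print(parse_corpus_ids("123,456, 789"))  # ['123', '456', '789']
```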

## ℹ️ Additional Information

2 changes: 1 addition & 1 deletion requirements.txt
@@ -3,7 +3,7 @@ llama-index-indices-managed-vectara==0.3.1
llama-index-agent-llm-compiler==0.3.0
llama-index-agent-lats==0.3.0
llama-index-agent-openai==0.4.3
llama-index-llms-openai==0.3.16
llama-index-llms-openai==0.3.18
llama-index-llms-anthropic==0.6.4
llama-index-llms-together==0.3.1
llama-index-llms-groq==0.3.1
2 changes: 1 addition & 1 deletion vectara_agentic/_version.py
@@ -1,4 +1,4 @@
"""
Define the version of the package.
"""
__version__ = "0.1.25"
__version__ = "0.1.26"
4 changes: 2 additions & 2 deletions vectara_agentic/agent.py
@@ -397,8 +397,8 @@ def report(self) -> None:
print(f"- {tool.metadata.name}")
else:
print("- tool without metadata")
print(f"Agent LLM = {get_llm(LLMRole.MAIN).metadata.model_name}")
print(f"Tool LLM = {get_llm(LLMRole.TOOL).metadata.model_name}")
print(f"Agent LLM = {get_llm(LLMRole.MAIN, config=self.agent_config).metadata.model_name}")
print(f"Tool LLM = {get_llm(LLMRole.TOOL, config=self.agent_config).metadata.model_name}")

def token_counts(self) -> dict:
"""
15 changes: 11 additions & 4 deletions vectara_agentic/tools.py
@@ -19,9 +19,10 @@


from .types import ToolType
from .tools_catalog import summarize_text, rephrase_text, critique_text, get_bad_topics
from .tools_catalog import ToolsCatalog, get_bad_topics
from .db_tools import DBLoadSampleData, DBLoadUniqueValues, DBLoadData
from .utils import is_float
from .agent_config import AgentConfig

LI_packages = {
"yahoo_finance": ToolType.QUERY,
@@ -624,6 +625,9 @@ class ToolsFactory:
A factory class for creating agent tools.
"""

def __init__(self, agent_config: AgentConfig = None) -> None:
self.agent_config = agent_config

def create_tool(self, function: Callable, tool_type: ToolType = ToolType.QUERY) -> VectaraTool:
"""
Create a tool from a function.
Expand Down Expand Up @@ -686,7 +690,8 @@ def standard_tools(self) -> List[FunctionTool]:
"""
Create a list of standard tools.
"""
return [self.create_tool(tool) for tool in [summarize_text, rephrase_text]]
tc = ToolsCatalog(self.agent_config)
return [self.create_tool(tool) for tool in [tc.summarize_text, tc.rephrase_text, tc.critique_text]]

def guardrail_tools(self) -> List[FunctionTool]:
"""
@@ -711,15 +716,17 @@ def summarize_legal_text(
"""
Use this tool to summarize legal text with no more than summary_max_length characters.
"""
return summarize_text(text, expertise="law")
tc = ToolsCatalog(self.agent_config)
return tc.summarize_text(text, expertise="law")

def critique_as_judge(
text: str = Field(description="the original text."),
) -> str:
"""
Critique the legal document.
"""
return critique_text(
tc = ToolsCatalog(self.agent_config)
return tc.critique_text(
text,
role="judge",
point_of_view="""
195 changes: 107 additions & 88 deletions vectara_agentic/tools_catalog.py
@@ -2,13 +2,15 @@
This module contains the tools catalog for the Vectara Agentic.
"""
from typing import List
from functools import lru_cache
from datetime import date

from inspect import signature
import requests

from pydantic import Field

from .types import LLMRole
from .agent_config import AgentConfig
from .utils import get_llm

req_session = requests.Session()
@@ -27,97 +29,114 @@ def get_current_date() -> str:
"""
return date.today().strftime("%A, %B %d, %Y")

#
# Standard Tools
#
@lru_cache(maxsize=None)
def summarize_text(
text: str = Field(description="the original text."),
expertise: str = Field(
description="the expertise to apply to the summarization.",
),
) -> str:
"""
This is a helper tool.
Use this tool to summarize text using a given expertise
with no more than summary_max_length characters.
Args:
text (str): The original text.
expertise (str): The expertise to apply to the summarization.
Returns:
str: The summarized text.
"""
if not isinstance(expertise, str):
return "Please provide a valid string for expertise."
if not isinstance(text, str):
return "Please provide a valid string for text."
expertise = "general" if len(expertise) < 3 else expertise.lower()
prompt = f"As an expert in {expertise}, summarize the provided text"
prompt += " into a concise summary."
prompt += f"\noriginal text: {text}\nsummary:"
llm = get_llm(LLMRole.TOOL)
response = llm.complete(prompt)
return response.text


@lru_cache(maxsize=None)
def rephrase_text(
text: str = Field(description="the original text."),
instructions: str = Field(description="the specific instructions for how to rephrase the text."),
) -> str:
"""
This is a helper tool.
Use this tool to rephrase the text according to the provided instructions.
For example, instructions could be "as a 5 year old would say it."

Args:
text (str): The original text.
instructions (str): The specific instructions for how to rephrase the text.
Returns:
str: The rephrased text.
"""
prompt = f"""
Rephrase the provided text according to the following instructions: {instructions}.
If the input is Markdown, keep the output in Markdown as well.
original text: {text}
rephrased text:
"""
llm = get_llm(LLMRole.TOOL)
response = llm.complete(prompt)
return response.text


def remove_self_from_signature(func):
"""Decorator to remove 'self' from a method's signature for introspection."""
sig = signature(func)
params = list(sig.parameters.values())
# Remove the first parameter if it is named 'self'
if params and params[0].name == "self":
params = params[1:]
new_sig = sig.replace(parameters=params)
func.__signature__ = new_sig
return func


class ToolsCatalog:
"""
A curated set of tools for vectara-agentic
"""


@lru_cache(maxsize=None)
def critique_text(
text: str = Field(description="the original text."),
role: str = Field(default=None, description="the role of the person providing critique."),
point_of_view: str = Field(default=None, description="the point of view with which to provide critique."),
) -> str:
"""
This is a helper tool.
Critique the text from the specified point of view.
Args:
text (str): The original text.
role (str): The role of the person providing critique.
point_of_view (str): The point of view with which to provide critique.
Returns:
str: The critique of the text.
"""
if role:
prompt = f"As a {role}, critique the provided text from the point of view of {point_of_view}."
else:
prompt = f"Critique the provided text from the point of view of {point_of_view}."
prompt += "Structure the critique as bullet points.\n"
prompt += f"Original text: {text}\nCritique:"
llm = get_llm(LLMRole.TOOL)
response = llm.complete(prompt)
return response.text

def __init__(self, agent_config: AgentConfig):
self.agent_config = agent_config

@remove_self_from_signature
def summarize_text(
self,
text: str = Field(description="the original text."),
expertise: str = Field(
description="the expertise to apply to the summarization.",
),
) -> str:
"""
This is a helper tool.
Use this tool to summarize text using a given expertise
with no more than summary_max_length characters.
Args:
text (str): The original text.
expertise (str): The expertise to apply to the summarization.
Returns:
str: The summarized text.
"""
if not isinstance(expertise, str):
return "Please provide a valid string for expertise."
if not isinstance(text, str):
return "Please provide a valid string for text."
expertise = "general" if len(expertise) < 3 else expertise.lower()
prompt = (
f"As an expert in {expertise}, summarize the provided text "
"into a concise summary.\n"
f"Original text: {text}\nSummary:"
)
llm = get_llm(LLMRole.TOOL, config=self.agent_config)
response = llm.complete(prompt)
return response.text

@remove_self_from_signature
def rephrase_text(
self,
text: str = Field(description="the original text."),
instructions: str = Field(description="the specific instructions for how to rephrase the text."),
) -> str:
"""
This is a helper tool.
Use this tool to rephrase the text according to the provided instructions.
For example, instructions could be "as a 5 year old would say it."
Args:
text (str): The original text.
instructions (str): The specific instructions for how to rephrase the text.
Returns:
str: The rephrased text.
"""
prompt = (
f"Rephrase the provided text according to the following instructions: {instructions}.\n"
"If the input is Markdown, keep the output in Markdown as well.\n"
f"Original text: {text}\nRephrased text:"
)
llm = get_llm(LLMRole.TOOL, config=self.agent_config)
response = llm.complete(prompt)
return response.text

@remove_self_from_signature
def critique_text(
self,
text: str = Field(description="the original text."),
role: str = Field(default=None, description="the role of the person providing critique."),
point_of_view: str = Field(default=None, description="the point of view with which to provide critique."),
) -> str:
"""
This is a helper tool.
Critique the text from the specified point of view.
Args:
text (str): The original text.
role (str): The role of the person providing critique.
point_of_view (str): The point of view with which to provide critique.
Returns:
str: The critique of the text.
"""
if role:
prompt = f"As a {role}, critique the provided text from the point of view of {point_of_view}."
else:
prompt = f"Critique the provided text from the point of view of {point_of_view}."
prompt += "\nStructure the critique as bullet points.\n"
prompt += f"Original text: {text}\nCritique:"
llm = get_llm(LLMRole.TOOL, config=self.agent_config)
response = llm.complete(prompt)
return response.text

#
# Guardrails tool: returns list of topics to avoid
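The following self-contained sketch shows why `tools_catalog.py` needs the `remove_self_from_signature` decorator: tool frameworks inspect a function's signature to build argument schemas, and a bound method would otherwise expose `self` as a parameter. The `Catalog` class here is a toy stand-in for `ToolsCatalog`:

```python
from inspect import signature

def remove_self_from_signature(func):
    """Remove 'self' from a method's signature so introspection
    sees only the tool's real parameters."""
    sig = signature(func)
    params = list(sig.parameters.values())
    if params and params[0].name == "self":
        params = params[1:]
    func.__signature__ = sig.replace(parameters=params)
    return func

class Catalog:
    @remove_self_from_signature
    def summarize_text(self, text: str, expertise: str) -> str:
        return f"summary({expertise}): {text}"

# Introspection now reports only the tool's parameters, not 'self' ...
print(list(signature(Catalog.summarize_text).parameters))  # ['text', 'expertise']
# ... while normal method calls are unaffected.
print(Catalog().summarize_text("hello", "law"))
```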
