Integrate ProjectContainerTool in FunctionAnalyzer #1107
@@ -13,6 +13,7 @@
 # limitations under the License.
 """The abstract base class for LLM agents in stages."""
 import argparse
+import asyncio
 import os
 import random
 import re

@@ -23,11 +24,14 @@
 from typing import Any, Optional

 import requests
+from google.adk import agents, runners, sessions
+from google.genai import errors, types

 import logger
 import utils
 from data_prep import introspector
-from llm_toolkit.models import LLM
+from experiment import benchmark as benchmarklib
+from llm_toolkit.models import LLM, VertexAIModel
 from llm_toolkit.prompts import Prompt
 from results import Result
 from tool.base_tool import BaseTool
@@ -295,6 +299,107 @@ def execute(self, result_history: list[Result]) -> Result:
     """Executes the agent based on previous result."""


+class ADKBaseAgent(BaseAgent):
+  """The abstract base class for agents created using the ADK library."""
+
+  def __init__(self,
+               trial: int,
+               llm: LLM,
+               args: argparse.Namespace,
+               benchmark: benchmarklib.Benchmark,
+               description: str = '',
+               instruction: str = '',
+               tools: Optional[list] = None,
+               name: str = ''):
+
+    super().__init__(trial, llm, args, tools, name)
+
+    self.benchmark = benchmark
+
+    # For now, ADKBaseAgents only support the Vertex AI models.
+    if not isinstance(llm, VertexAIModel):
+      raise ValueError(f'{self.name} only supports Vertex AI models.')
+
+    # Create the agent using the ADK library.
+    adk_agent = agents.LlmAgent(
+        name=self.name,
+        model=llm._vertex_ai_model,
+        description=description,
+        instruction=instruction,
+        tools=tools or [],
+    )
+
+    # Create the session service.
+    session_service = sessions.InMemorySessionService()
+    session_service.create_session(
+        app_name=self.name,
+        user_id=benchmark.id,
+        session_id=f'session_{self.trial}',
+    )
+
+    # Create the runner.
+    self.runner = runners.Runner(
+        agent=adk_agent,
+        app_name=self.name,
+        session_service=session_service,
+    )
+
+    self.round = 0
+
+    logger.info('ADK Agent %s created.', self.name, trial=self.trial)
+
+  def chat_llm(self, cur_round: int, client: Any, prompt: Prompt,
+               trial: int) -> str:
+    """Call the agent with the given prompt, running async code in sync."""
+
+    self.round = cur_round
+
+    self.log_llm_prompt(prompt.get())
+
+    async def _call():
+      user_id = self.benchmark.id
+      session_id = f'session_{self.trial}'
+      content = types.Content(role='user',
+                              parts=[types.Part(text=prompt.get())])
+
+      final_response_text = ''
+
+      async for event in self.runner.run_async(
+          user_id=user_id,
+          session_id=session_id,
+          new_message=content,
+      ):
+        if event.is_final_response():
+          if (event.content and event.content.parts and
+              event.content.parts[0].text):
+            final_response_text = event.content.parts[0].text
+          elif event.actions and event.actions.escalate:
+            error_message = event.error_message
+            logger.error('Agent escalated: %s', error_message, trial=self.trial)
+
+      self.log_llm_response(final_response_text)
+
+      return final_response_text
+
+    return self.llm.with_retry_on_error(lambda: asyncio.run(_call()),
+                                        [errors.ClientError])
+
+  def log_llm_prompt(self, prompt: str) -> None:
+    self.round += 1
+    logger.info('<CHAT PROMPT:ROUND %02d>%s</CHAT PROMPT:ROUND %02d>',
+                self.round,
+                prompt,
+                self.round,
+                trial=self.trial)
+
+  def log_llm_response(self, response: str) -> None:
+    logger.info('<CHAT RESPONSE:ROUND %02d>%s</CHAT RESPONSE:ROUND %02d>',
+                self.round,
+                response,
+                self.round,
+                trial=self.trial)
+
+
 if __name__ == "__main__":
   # For cloud experiments.
   BaseAgent.cloud_main()

Review thread on the agents.LlmAgent(...) call:

Comment: @DonggeLiu I tried to make ADKBaseAgent extend both BaseAgent and LlmAgent, but this did not work because both superclasses have the same argument name (tools) with conflicting types.

Reply: If a simple integration is not possible, we can leave this for now. Later we can think of a better way to refactor the other agents to be based on it.

Comment: @DavidKorczynski Do you have any insight on how to better support agents using GPT?

Reply: I suggest landing this in its current form, and then I'll follow up with the GPT implementation. Can we, however, disable this for GPT models for now? That is, we don't want to break any workflows.
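To illustrate the tools conflict discussed in the thread above, here is a minimal, hypothetical sketch (stand-in classes, not the project's real BaseAgent or ADK's real LlmAgent): when two base classes both claim a tools parameter with incompatible element types, a single subclass instance cannot satisfy both, which is presumably why the PR holds the LlmAgent as a member instead of inheriting from it.

from typing import Callable, Optional


class Tool:
  """Stand-in for the project's BaseTool."""


class ProjectAgentBase:
  """Stand-in for BaseAgent: expects 'tools' to hold Tool instances."""

  def __init__(self, tools: Optional[list[Tool]] = None):
    self.tools = tools or []


class AdkAgentBase:
  """Stand-in for ADK's LlmAgent: expects 'tools' to hold callables."""

  def __init__(self, tools: Optional[list[Callable]] = None):
    self.tools = tools or []


class DualAgent(ProjectAgentBase, AdkAgentBase):
  """Dual inheritance leaves only one 'tools' attribute on the instance."""

  def __init__(self, tools=None):
    ProjectAgentBase.__init__(self, tools)
    # The second write clobbers the first, and the two bases disagree on
    # what the elements are; ADK's real LlmAgent would additionally
    # reject non-ADK tool types during field validation.
    AdkAgentBase.__init__(self, tools)

Composition, as in the diff above, sidesteps the collision: BaseAgent keeps its own tools list, and the wrapped LlmAgent receives whatever tool objects ADK expects.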
Review comment: Do you have any thoughts about how to support other AI models later? This is important to OFG because we do want to allow users to use different LLMs transparently.

Reply: Yeah, the ADK library supports Gemini by default but has an extension that allows you to support other LLMs. I can work on adding this support.
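For reference, the extension mentioned here is presumably ADK's LiteLLM model wrapper, which routes requests to non-Gemini providers. A minimal sketch of what that could look like (the agent name, model choice, and instruction string below are illustrative assumptions, not part of this PR):

import os

from google.adk import agents
from google.adk.models.lite_llm import LiteLlm

# LiteLLM routes by 'provider/model' name; an API key for the chosen
# provider must be available in the environment.
os.environ.setdefault('OPENAI_API_KEY', '<your-key-here>')  # placeholder

gpt_agent = agents.LlmAgent(
    name='function_analyzer_gpt',          # hypothetical agent name
    model=LiteLlm(model='openai/gpt-4o'),  # wraps a non-Gemini model
    description='Function analyzer backed by a GPT model.',
    instruction='Analyze the target function for security-relevant behavior.',
    tools=[],
)

Under this approach, the VertexAIModel check in ADKBaseAgent.__init__ would become a dispatch point: map each supported LLM class to the appropriate ADK model object instead of rejecting everything that is not Vertex AI.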