Integrate ProjectContainerTool in FunctionAnalyzer #1107
base: main
Conversation
This commit includes the following tasks:
- Implements a function analyzer agent using the ADK library.
- Implements a function that gets a function's source using the FuzzIntrospector API.
- Provides this function as a tool to the function analyzer agent.
- Adds basic prompt templates for the agent.
- Creates a test file that can be used to access the agent independently.
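For illustration, a minimal sketch of that wiring, assuming the google-adk `agents.LlmAgent` API and a hypothetical FuzzIntrospector endpoint; the real endpoint, parameters, and the PR's actual code may differ.

```
# Sketch only: exposes a FuzzIntrospector source lookup as a tool to an ADK agent.
# The endpoint path, query parameters, and model id below are illustrative placeholders.
import requests
from google.adk import agents


def get_function_source(project_name: str, function_signature: str) -> str:
  """Returns the source code of a function, queried from FuzzIntrospector."""
  # Hypothetical endpoint; the real FuzzIntrospector API may differ.
  response = requests.get(
      'https://introspector.oss-fuzz.com/api/function-source',
      params={'project': project_name, 'signature': function_signature},
      timeout=30)
  response.raise_for_status()
  return response.json().get('source', '')


function_analyzer = agents.LlmAgent(
    name='FunctionAnalyzer',
    model='gemini-2.5-pro',
    instruction='Analyze the function and extract its input requirements.',
    tools=[get_function_source],  # ADK wraps plain functions as tools.
)
```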
…nction-analyzer-main
…s-fuzz-gen into introspector-tool
…s not need to perform any operation while waiting for results from the agent.
This commit refactors the agentic workflow so that the function's source is retrieved deterministically and included in the agent's prompt. The prompts are also more specific about which requirements should be extracted.
…s-fuzz-gen into introspector-tool
First, the context retriever agent retrieves the source code of the function and its children for analysis. Then, the requirements extractor agent
…raw result and the list of requirements. The commit also updates function_analyzer_test to process multiple benchmarks in a provided file and write the results to a results file.
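A rough sketch of that two-stage flow, assuming ADK's `SequentialAgent` and state passing via `output_key`; the agent names, instructions, and keys here are illustrative, not the PR's actual definitions.

```
# Sketch only: chains a context-retriever agent and a requirements-extractor agent.
from google.adk import agents

context_retriever = agents.LlmAgent(
    name='ContextRetriever',
    model='gemini-2.5-pro',
    instruction='Retrieve the source code of the target function and its '
                'callees for analysis.',
    output_key='function_context',  # Stored in session state for the next agent.
)

requirements_extractor = agents.LlmAgent(
    name='RequirementsExtractor',
    model='gemini-2.5-pro',
    instruction='Given {function_context}, extract the input requirements the '
                'function needs to execute correctly.',
)

# SequentialAgent runs its sub-agents in order, passing session state between them.
function_analyzer = agents.SequentialAgent(
    name='FunctionAnalyzer',
    sub_agents=[context_retriever, requirements_extractor],
)
```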
…trospector-tool
…alysis result to google cloud storage.
…nalyzer to get the model name dynamically.
…tion and its context are first retrieved, and passed to the agent in the prompt.
…lyzer to other agents. Modified the prototyper to consume requirements from function analyzer.
It's a bit confusing that we use CHAT for both the chat and ask utility functions. This change makes it clear from the logging when each of them is used.

Signed-off-by: David Korczynski <[email protected]>
This PR mainly implements a crash analyzer that can interact with LLDB in the multi-agent framework, and supports GPT. In addition, this PR attempts to fix the problem of not replacing the fuzz target and build script. This PR is under testing: the main logic is no longer changing, and minor bugs are being fixed.

TODO:
- Optimize the process of agent interaction with LLDB.
- Solve the problem of missing debugging information for some projects.
- Try to add LLM-based static methods to enhance the crash analyzer.

Co-authored-by: Dongge Liu <[email protected]>
Fixing broken OSS-Fuzz projects is a common task and something we could leverage OFG to do. This adds an initial agent-based approach for doing this, and it has already been used to fix google/oss-fuzz#13389 and other projects locally.

```
oss-fuzz-generator fix-build --project PROJECT_NAME --model ${MODEL}
```

Signed-off-by: David Korczynski <[email protected]>
- Works better with C/CXX
- Improves prompt writing
- Improves OSS-Fuzz build_fuzzers/check_build validation
- Improves code quality

Signed-off-by: David Korczynski <[email protected]>
…1086) Signed-off-by: David Korczynski <[email protected]>
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag -b quick-test
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag -b quick-test
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag -b quick-test
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag -b quick-test
```
raise ValueError(f'{self.name} only supports Vertex AI models.')

# Create the agent using the ADK library
adk_agent = agents.LlmAgent(
```
@DonggeLiu I tried to make ADKBaseAgent extend both BaseAgent and LlmAgent, but this did not work because both superclasses have an argument with the same name (tools) but conflicting types.
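An illustrative, simplified reproduction of that conflict; the classes below are stand-ins, not the real OFG or ADK definitions.

```
# Sketch only: two base classes that both declare a `tools` argument with
# incompatible types, so a subclass cannot satisfy both contracts.
from typing import Callable


class BaseTool:
  """Stand-in for OFG's tool wrapper class."""


class BaseAgent:
  """OFG-style base agent: expects `tools` to be BaseTool instances."""

  def __init__(self, tools: list[BaseTool]):
    self.tools = tools


class LlmAgent:
  """ADK-style agent: expects `tools` to be plain callables."""

  def __init__(self, tools: list[Callable]):
    self.tools = tools


class ADKBaseAgent(BaseAgent, LlmAgent):
  """Inherits two incompatible contracts for the same `tools` argument."""

  def __init__(self, tools: list[BaseTool]):
    BaseAgent.__init__(self, tools)
    LlmAgent.__init__(self, tools)  # Type mismatch: this base expects callables.
```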
If a simple integration is not possible, we can leave this for now. I presume function_analyzer is the only child class of ADKBaseAgent and will only communicate with other agents via Result?
Later we can think of a better way to refactor the other agents to be based on LlmAgent, which will replace the existing base agent, and make the tools compatible with LlmAgent.
@DavidKorczynski Do you have any insight on how to better support agents using GPT?
Thanks!
I suggest landing this in its current form and then I'll follow up with the GPT implementation. Can we, however, disable this when GPT models are used, for now? I.e., we don't want to break any workflows.
…d even when function wasn't found.
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag
```
self.benchmark = benchmark

# For now, ADKBaseAgents only support the Vertex AI Models.
```
Do you have any thoughts about how to support other AI models later?
This is important to OFG because we do want to allow users to use different LLMs transparently.
Yeah, the ADK library supports Gemini by default but has an extension that allows you to support other LLMs. I can work on adding this support.
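If that route is taken, the extension presumably looks something like ADK's LiteLLM wrapper. A minimal sketch, assuming `google.adk.models.lite_llm.LiteLlm` is available in the installed ADK version; the model id and setup are illustrative, not the PR's implementation.

```
# Sketch only: routes an ADK agent to a non-Gemini model via the LiteLLM wrapper.
from google.adk import agents
from google.adk.models.lite_llm import LiteLlm

gpt_agent = agents.LlmAgent(
    name='FunctionAnalyzerGPT',
    model=LiteLlm(model='openai/gpt-4o'),  # Requires OPENAI_API_KEY in the environment.
    instruction='Analyze the function and extract its input requirements.',
)
```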
agent/function_analyzer.py (Outdated)
```
You are a security engineer tasked with analyzing a function
and extracting its input requirements,
necessary for it to execute correctly.
"""
```
put these into a text file like other prompts, please.
FWIW I have put text prompts elsewhere in the python scripts explicitly. I find it a lot easier to work with, both when reading and writing the code. Cross-referencing becomes so much easier when you can use the IDE as opposed to tracing files around.
Hi @DonggeLiu, I have updated the code.
/gcbrun exp -n pamusuo -m vertex_ai_gemini-2-5-pro-chat -ag
This pull request makes the following contributions: