[Bug]: Vertex LLM does not Handle FunctionCall tools #16678
Hey @stfines-clgx! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together! The error you're encountering, AttributeError: 'FunctionCall' object has no attribute '_pb', suggests a mismatch in the expected object types. To address this issue, you should check the following:
If these steps do not resolve the issue, further investigation into the specific implementation details may be needed.
This could be related to the package version of the generative-ai package? I don't have access to Vertex to test, but the relevant code is here: llama_index/llama-index-integrations/llms/llama-index-llms-vertex/llama_index/llms/vertex/base.py, line 498 at bfd2514.
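The traceback points at an unguarded `._pb` access. Older proto-plus wrapper objects expose the raw protobuf message via `._pb`, while newer google-cloud-aiplatform releases may hand back the raw protobuf `FunctionCall` directly. A minimal sketch of a defensive accessor (the helper name `unwrap_pb` is hypothetical, not code from base.py):

```python
def unwrap_pb(message):
    """Return the underlying protobuf message.

    proto-plus wrappers expose the raw protobuf via `._pb`; raw
    protobuf messages have no `_pb` attribute and are returned as-is.
    """
    return getattr(message, "_pb", message)
```

A guard like this would let the same code path accept both the wrapped and unwrapped `FunctionCall` shapes.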
Happy to accept a PR to fix!
That does seem possible, especially since the pyproject.toml lists an aiplatform dependency of 1.39.0 while the current version is 1.70.0. I'll pull locally and see if I can build this lib with a newer dependency.
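Since a caret constraint like `^1.39.0` allows any 1.x release, the installed google-cloud-aiplatform version can differ widely between environments. A quick way to check what is actually installed (a sketch; the helper name is my own):

```python
from importlib.metadata import PackageNotFoundError, version


def aiplatform_version():
    """Return the installed google-cloud-aiplatform version, or None."""
    try:
        return version("google-cloud-aiplatform")
    except PackageNotFoundError:
        return None
```

Comparing this value against 1.39.0 vs. 1.70.0 in each failing environment would confirm whether the SDK drift is the culprit.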
That would be amazing, thank you!
Taking a look at this, the entire set of interfaces has changed significantly. I've forked the repo and will rework how the integration interacts with Gemini so that it uses the actual released interface and doesn't depend on unreleased/private features, which should address the problem. Once I can verify that it functions, I'll submit a PR; it will probably require a version bump.
OK, my fork has a branch, dev/1_70_protobuf_only, that seems to resolve this issue. My testing with several different types of functions and non-function invocations has it working well. It does not resolve … @logan-markewich, if you would take a look prior to my creating a PR and let me know what else needs doing to make a good PR for this project, I would appreciate it.
I encountered the same bug. However, this fix did not resolve my issue. I will investigate further and see what insights I can offer. Thank you for investigating this, @stfines-clgx; appreciate it.
If you can post your error and input, I can take a look and see what I need to add. I can't test all the different variations of input here, so if I can make some unit tests it would help quite a bit. Also, to be sure you actually use this version, you have to pull the fork, then run …
@logan-markewich, can you assign this, #16625, and #16970 to me? It will make it easier for me to keep track of comments on all of them.
We confirmed we are running your version. Here is the error we are seeing:

google.protobuf.json_format.ParseError: Failed to parse properties field: Failed to parse anyOf field: Failed to parse type field: Invalid enum value NULL for enum type google.cloud.aiplatform.v1beta1.Type at Schema.properties[images].anyOf[1].type...

Before we merged your code, we were getting this error:

AttributeError: 'FunctionCall' object has no attribute '_pb'

To add more context: this is the repo we are working off of: https://github.com/run-llama/multi-agent-concierge/tree/main

Thank you for your help, and let me know if you need more info.
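The new ParseError is consistent with Optional fields in a tool's signature: JSON Schema represents `Optional[X]` as `anyOf: [{"type": "X"}, {"type": "null"}]`, and Vertex's Schema proto Type enum has no NULL value to parse that into. A hedged workaround sketch (my own helper, not part of the fork) that strips the null variants from a schema dict before it is handed to Vertex:

```python
def strip_null_variants(schema):
    """Recursively drop {"type": "null"} variants from anyOf lists.

    Workaround sketch for schemas generated from Optional[...] fields,
    which the Vertex Schema proto cannot represent (no NULL type enum).
    """
    if isinstance(schema, dict):
        out = {}
        for key, value in schema.items():
            if key == "anyOf" and isinstance(value, list):
                variants = [
                    strip_null_variants(v)
                    for v in value
                    if not (isinstance(v, dict) and v.get("type") == "null")
                ]
                if len(variants) == 1 and isinstance(variants[0], dict):
                    # Collapse a single remaining variant into the parent.
                    out.update(variants[0])
                else:
                    out[key] = variants
            else:
                out[key] = strip_null_variants(value)
        return out
    if isinstance(schema, list):
        return [strip_null_variants(v) for v in schema]
    return schema
```

Whether this belongs in the integration itself or in the schema-generation step is a design question for the PR; it only illustrates why `Schema.properties[images].anyOf[1].type` is the failing path.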
Here's another simple unit test that is failing, coming from this documentation: https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/
Bug Description
When using a FunctionTool, I receive the following error:
AttributeError: 'FunctionCall' object has no attribute '_pb'
It seems that this tool makes some assumptions about the input data types.
Version
0.11.7, llama-index-llms-vertex==0.3.7
Steps to Reproduce
Update the code in the multi-agent concierge example to use Vertex, then try to run it.
Relevant Logs/Tracebacks