Optional Tool calling (with ChatOllama for example) #2650
Replies: 4 comments
-
Great question! Your desired behavior is usually the default. However, "tool use" quality varies by model, and many of the models on Ollama are especially easily distracted (since their fine-tuning dataset mixture may be imbalanced). I'm not as familiar with Ollama per se, but assuming it doesn't have atypical defaults, it usually comes down to prompting or picking a better model. For prompting strategies, there are basically two common methods: instructions and examples. You can either add more guidance in the instructions about when to use (or not use) tools, or you can add few-shot examples of the model answering without tools.
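For example, here's a minimal sketch of the instruction-style approach (the model name and the single tool are just placeholders, not your actual setup):

```python
# Minimal sketch: steer optional tool use with explicit system-prompt instructions.
# The model name and the tool are placeholders.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def web_search(query: str) -> str:
    """Search the web for up-to-date information."""
    return "...search results..."


llm = ChatOllama(model="llama3.1")  # any tool-capable Ollama model
llm_with_tools = llm.bind_tools([web_search])

system = SystemMessage(content=(
    "You have tools available, but they are OPTIONAL. "
    "Only call a tool when the request actually needs external data or computation. "
    "For greetings, thanks, small talk, or creative requests (e.g. a haiku), "
    "answer directly without calling any tool."
))

response = llm_with_tools.invoke(
    [system, HumanMessage(content="Write me a haiku about autumn.")]
)
print(response.tool_calls)  # ideally [] for a purely conversational request
```

For the example-based approach, you would instead prepend a few HumanMessage/AIMessage pairs showing the model answering conversational requests without any tool calls.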
-
Additionally, it sounds like there might be too many tools for a single agent to manage. I would recommend splitting them across multiple agents, giving each agent a different subset of the tools, and seeing if that helps. You can have a supervisor that decides which agent to call next, or allow the agents to call each other directly: https://langchain-ai.github.io/langgraph/tutorials/#multi-agent-systems
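A rough sketch of that kind of split (the tool names, model, and one-word router are placeholders, not a definitive implementation):

```python
# Rough sketch: split tools across two sub-agents and route between them,
# plus a tool-free "chat" node for plain conversation. Names are placeholders.
from typing import Literal

from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import create_react_agent


@tool
def web_search(query: str) -> str:
    """Search the web (placeholder)."""
    return "...results..."


@tool
def run_python(code: str) -> str:
    """Run a Python snippet (placeholder)."""
    return "...output..."


llm = ChatOllama(model="llama3.1")

research_agent = create_react_agent(llm, tools=[web_search])
coding_agent = create_react_agent(llm, tools=[run_python])


def route(state: MessagesState) -> Literal["research", "coding", "chat"]:
    """Ask the LLM which sub-agent (if any) should handle the conversation."""
    decision = llm.invoke(
        [("system", "Answer with exactly one word: research, coding, or chat.")]
        + state["messages"]
    ).content.lower()
    if "research" in decision:
        return "research"
    if "coding" in decision:
        return "coding"
    return "chat"  # plain conversation, no tools involved at all


def chat(state: MessagesState):
    """Tool-free conversational node."""
    return {"messages": [llm.invoke(state["messages"])]}


builder = StateGraph(MessagesState)
builder.add_node("research", research_agent)
builder.add_node("coding", coding_agent)
builder.add_node("chat", chat)
builder.add_conditional_edges(START, route)
builder.add_edge("research", END)
builder.add_edge("coding", END)
builder.add_edge("chat", END)
graph = builder.compile()
```

One nice side effect: requests routed to the "chat" node can never produce a tool call, because that node's model has no tools bound at all.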
-
Alright, thanks to both of you for the feedback! I'm glad I'm not doing something completely wrong, and the pointers toward a more sophisticated tool-calling setup and choosing the right LLM will certainly help. I guess I'll try the example from langchain-academy next!
-
Thank you so much! This really helped me. I had forgotten something crucial: I'm using a checkpointer, and while I delete older messages once the history grows beyond a certain size, I wasn't cleaning up the tool messages. I think it's the leftover tool messages that were actually confusing the AI. I basically cut the history down to just one message, and suddenly the prompting seemed to take effect. I definitely need to be more careful not to provide too much context...
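In case it helps anyone else, this is roughly what I mean by the cleanup (a simplified sketch; the window of 6 messages is arbitrary):

```python
# Simplified sketch of the history cleanup: remove old messages from the
# checkpointed state, ToolMessages included. The window size is arbitrary.
from langchain_core.messages import RemoveMessage
from langgraph.graph import MessagesState


def trim_history(state: MessagesState):
    """Keep only the last few messages; everything older gets removed.

    Note: removing an AIMessage that issued a tool call while keeping its
    matching ToolMessage (or vice versa) can itself confuse the model, so
    it's safest to let such pairs fall out of the window together.
    """
    keep = 6
    old = state["messages"][:-keep]
    return {"messages": [RemoveMessage(id=m.id) for m in old]}


# Wired in as its own node, e.g. builder.add_node("trim", trim_history),
# and run before the agent node.
```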
-
Hi all,
I have a question. I'm trying to build a personal assistant. So far a lot of things work quite nicely, and quite a few tools are integrated. My graph is basically an "agent" node, where I call a ChatOllama LLM bound with tools, plus a ToolNode with all my tools. That works. My "should_continue" edge is very close to tools_condition, so the whole setup closely follows the basic tutorials.
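Roughly, it looks like this (heavily simplified; the model name and the single tool are just stand-ins for my real setup):

```python
# Simplified version of my graph; the real one has many more tools.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition


@tool
def clipboard_copy(text: str) -> str:
    """Copy text to the clipboard (stand-in for one of the many tools)."""
    return "copied"


tools = [clipboard_copy]
llm_with_tools = ChatOllama(model="llama3.1").bind_tools(tools)


def agent(state: MessagesState):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)  # my should_continue is basically this
builder.add_edge("tools", "agent")
graph = builder.compile()
```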
However... sometimes I just want the LLM to make conversation: ask a question back, return a haiku, and so on. The weird part is that I cannot get the LLM to choose NOT to use tools. Sometimes it uses the REPL or the clipboard to complete a simple request. Once it actually used Tavily to analyze "thanks". I've tried several ways to get the LLM to treat tool calls as optional.
At one point I even got an "I'm definitely not using tools for this one!" right after a tool call.
Is it even possible to have an optional tool call, instead of a guaranteed one? And how can I achieve it?
Thanks in advance!