Why does LangGraph use so many more tokens than AgentExecutor? #678
Unanswered
aleksandra-mazurek asked this question in Q&A
Replies: 1 comment 1 reply
-
No, I think the answer is that the old AgentExecutor did not save tool responses in memory, whereas most examples I've seen of LangGraph agents use the "messages" list, which saves all previous tool calls and tool responses. I had a conversation with Gemini 1.5 Pro yesterday that consumed half a million tokens. Unimaginable just a few months ago.
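The growth described here can be sketched numerically. This is a back-of-the-envelope illustration, not real accounting: every per-message token size below is invented, and the point is only that resending a history which keeps large tool responses makes total input tokens grow much faster than one which drops them.

```python
def total_input_tokens(turns, keep_tool_messages):
    """Sum the input tokens across `turns` LLM calls when the full
    message history is resent on every call.

    All per-message sizes are invented for illustration.
    """
    SYSTEM = 200     # system prompt + tools schema, resent each call
    USER = 50        # one user message
    TOOL = 500       # one large tool response kept in history
    ASSISTANT = 100  # one assistant reply

    history = 0  # tokens accumulated in the message list so far
    total = 0    # input tokens billed across all calls
    for _ in range(turns):
        history += USER
        total += SYSTEM + history  # this call resends the whole history
        history += ASSISTANT
        if keep_tool_messages:
            history += TOOL        # tool output stays in the list forever
    return total

# With tool responses kept, input tokens balloon across turns:
print(total_input_tokens(5, keep_tool_messages=True))   # 7750
print(total_input_tokens(5, keep_tool_messages=False))  # 2750
```

On the first turn both variants cost the same; the gap opens on every subsequent call because the retained tool payloads are resent each time.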
-
I've created an AgentExecutor with access to several custom tools (to access a custom API). The whole thing lets the user talk to a meeting management system: access and plan meetings. It works OK, allowing one, for instance, to ask for the meetings planned for today. Such a question uses around 5,000 tokens according to LangSmith.
Then I tried to create the same thing with LangGraph, following the first part of the tutorial https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support/.
I am trying LangGraph because eventually I want to have a chat with a supervisor and a flock of smaller agents responsible for different tools and types of requests.
But for now it was a simple thing: just one assistant node and one tool node, with the same tools and the same prompt as above. And it uses 12,000 to 20,000 tokens to complete.
Looking at just the LLM calls in LangSmith, they seem very similar. The only thing I noticed is that when called from LangGraph, the whole TOOLS section appears in the metadata both in INVOCATION_PARAMS and in OPTIONS.
Is that it? And if so, why is this section doubled when using LangGraph? Or what else could I be doing wrong? :)
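One likely culprit in setups like this (matching the reply in this thread) is that the graph's messages list keeps every tool response and resends all of them on each model call. A minimal, hypothetical mitigation is to prune old tool messages from the history before invoking the model. The sketch below uses plain dict-shaped messages and an invented `prune_tool_messages` helper; it is not LangGraph or LangChain API, just an illustration of the idea:

```python
def prune_tool_messages(messages, keep_last=1):
    """Return a copy of `messages` with all 'tool' messages dropped
    except the most recent `keep_last`, so the history resent to the
    LLM stays small. Message shape is invented for illustration."""
    tool_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    drop = set(tool_indices[:-keep_last] if keep_last else tool_indices)
    return [m for i, m in enumerate(messages) if i not in drop]

# Hypothetical history from a meeting-assistant conversation:
history = [
    {"role": "user", "content": "What meetings do I have today?"},
    {"role": "tool", "content": "<large API payload 1>"},
    {"role": "assistant", "content": "You have two meetings."},
    {"role": "tool", "content": "<large API payload 2>"},
    {"role": "tool", "content": "<large API payload 3>"},
]

pruned = prune_tool_messages(history, keep_last=1)
print([m["role"] for m in pruned])  # ['user', 'assistant', 'tool']
```

In a real graph this kind of pruning would run as a step before the assistant node, so each LLM call sees only the recent tool output it still needs.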
To make things more visible, I created the same setup with just a simple CurrentDayTool and DuckDuckGo search.
Here are the LangSmith traces showing that LangGraph used 2,000 tokens while AgentExecutor used just 1,000:
AgentExecutor:
https://smith.langchain.com/public/6bb79beb-d43f-46ba-a8d6-d37a4f5d5ace/r
LangGraph:
https://smith.langchain.com/public/32380cef-49a8-49da-9baf-09d2176e4e72/r
Update: The answer seems to be that the LangSmith token calculations are wrong for AgentExecutor. Checking the Anthropic usage graph shows that both examples used around 2,000 tokens. Good to know, I guess :)