I am building a low-latency conversational agent and need to use the tool-calling feature after each `human_step()` call. One thing I have noticed in the code below:
SalesGPT/salesgpt/agents.py, lines 420 and 421 at commit 466a441
When the `use_tools` argument is set to `True`, `sales_agent_executor` is used instead of the utterance chain, and it only streams the intermediate steps of the executor. Is it possible to stream chunks of sentences as they are being generated, to reduce the response latency?
Thanks.
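One possible workaround, as a framework-agnostic sketch: consume the agent's LLM token stream (for example via LangChain's events API, filtering for chat-model stream events, assuming a LangChain version that supports it) and re-chunk the tokens into complete sentences, so downstream playback or TTS can start before the full response is finished. The helper below is hypothetical, not part of SalesGPT:

```python
import re
from typing import Iterable, Iterator

def stream_sentences(tokens: Iterable[str]) -> Iterator[str]:
    """Accumulate streamed LLM tokens and yield each complete sentence
    as soon as a terminator (., !, ?) followed by whitespace appears,
    instead of waiting for the whole response."""
    buffer = ""
    for token in tokens:
        buffer += token
        # Emit every finished sentence currently sitting in the buffer.
        while True:
            match = re.search(r"[.!?]\s", buffer)
            if not match:
                break
            sentence = buffer[: match.end()].strip()
            buffer = buffer[match.end():]
            if sentence:
                yield sentence
    # Flush whatever remains once the token stream ends.
    tail = buffer.strip()
    if tail:
        yield tail

# Example with a simulated token stream:
chunks = list(stream_sentences(["Hello", " world", ". How", " are you?", " Fine."]))
# → ["Hello world.", "How are you?", "Fine."]
```

The token source could then be the executor's streaming callback or events interface rather than the intermediate-steps output; the sentence-chunking logic itself does not depend on LangChain.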