
Streaming text chunks support for sales agent executor chain instead of intermediate steps #120

Open
aliabbas101 opened this issue Mar 22, 2024 · 0 comments

Comments


aliabbas101 commented Mar 22, 2024

I am building a low-latency conversational agent and need to use the tool-calling feature after each human_step() call. One thing I have noticed in the code below:

```python
if self.use_tools:
    ai_message = self.sales_agent_executor.invoke(inputs)
```
When the use_tools argument is set to true, sales_agent_executor is used instead of the utterance chain, and it only streams the executor's intermediate steps. Is it possible to stream chunks of the response as they are being generated, to reduce response latency?
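For reference, this is a rough sketch of the kind of token-level streaming I have in mind, using LangChain's astream_events API on the executor. The event handling below and the assumption that sales_agent_executor can be consumed this way are my own guesses, not something SalesGPT currently exposes:

```python
import asyncio

async def stream_agent_reply(sales_agent_executor, inputs):
    """Print token chunks from the agent executor as they are generated."""
    async for event in sales_agent_executor.astream_events(inputs, version="v1"):
        # Emit each LLM token chunk as soon as it arrives,
        # instead of waiting for the full intermediate step.
        if event["event"] == "on_chat_model_stream":
            chunk = event["data"]["chunk"]
            if chunk.content:
                print(chunk.content, end="", flush=True)

# Hypothetical usage:
# asyncio.run(stream_agent_reply(sales_agent_executor, inputs))
```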

Thanks.
