How to make the LLM in the final step of an AgentExecutor in LCEL stream its output #308
Unanswered · wangcailin asked this question in Q&A

How can I make the LLM in the final step of an AgentExecutor built with LCEL stream its output? My business logic is as follows.

All of the business processes currently work, but the final answer cannot be streamed through `astream`: `astream` streams the intermediate steps of the AgentExecutor, whereas what I want is to stream the LLM's answer in the last step.

@eyurtsev Please help guide me on how to achieve this, thanks.

Replies: 1 comment
This does not appear to be a langserve issue, but an LCEL question. Please let me know if I got that wrong. The likely issue is that the code uses `RunnableLambda`, which does not preserve streaming. Instead use `RunnablePassthrough` or `RunnableGenerator`:

```python
from langchain.chat_models import ChatAnthropic
from langchain.schema.runnable import RunnablePassthrough, RunnableGenerator

chain = ChatAnthropic()  # Streams
for chunk in chain.stream('hello'):
    print(chunk)

# A bare lambda is wrapped in RunnableLambda, which breaks streaming
chain = ChatAnthropic() | (lambda x: x)  # Does not stream
for chunk in chain.stream('hello'):
    print(chunk)

chain = ChatAnthropic() | RunnablePassthrough()  # Will stream because it defines `transform`
for chunk in chain.stream('hello'):
    print(chunk)

# Let's define our own: a generator that receives the stream of input
# chunks and yields only the 'output' value from each one
def _transform(input_stream):
    for chunk in input_stream:
        yield chunk['output']

chain = {
    'output': ChatAnthropic()
} | RunnableGenerator(_transform)

for chunk in chain.stream('hello'):
    print(chunk)
```
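Since the question specifically uses `astream`, the same pattern can be written asynchronously. A minimal sketch, assuming `RunnableGenerator` is given an async generator function (which it accepts in langchain versions of this period); `_atransform` and `main` are illustrative names, not part of any API:

```python
import asyncio
from typing import Any, AsyncIterator

from langchain.chat_models import ChatAnthropic
from langchain.schema.runnable import RunnableGenerator

# Async counterpart of _transform above: consume the upstream chunks
# asynchronously and yield only the 'output' value from each one.
async def _atransform(input_stream: AsyncIterator[dict]) -> AsyncIterator[Any]:
    async for chunk in input_stream:
        yield chunk['output']

chain = {
    'output': ChatAnthropic()
} | RunnableGenerator(_atransform)

async def main() -> None:
    async for chunk in chain.astream('hello'):
        print(chunk)

asyncio.run(main())
```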
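For the AgentExecutor part of the question specifically: `astream` on an AgentExecutor yields step-level chunks rather than tokens. One way to reach the underlying LLM token stream in langchain versions of this period is `astream_log`, which emits JSONPatch-style operations for every run in the chain. A hedged sketch only, assuming `agent_executor` was already built elsewhere with its tools and LLM; note this surfaces tokens from every LLM call, so isolating the final answer may require distinguishing the last LLM run (e.g. by tags):

```python
from langchain.agents import AgentExecutor  # assumed: constructed elsewhere

async def stream_llm_tokens(agent_executor: AgentExecutor) -> None:
    # Each token an LLM run produces is appended to that run's
    # `streamed_output_str` list in the run log, so we match "add"
    # operations whose path ends in /streamed_output_str/-.
    async for patch in agent_executor.astream_log(
        {"input": "hello"},
        include_types=["llm"],  # only emit log entries for LLM runs
    ):
        for op in patch.ops:
            if op["op"] == "add" and op["path"].endswith("/streamed_output_str/-"):
                print(op["value"], end="", flush=True)
```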