Add functionality for the callback handler {get_langchain_handler()} to control/disable/suppress the input/output for the GENERATION via @observe. #4688
asadsk8r02 started this conversation in Ideas
Replies: 2 comments 1 reply
-
Thanks for sharing this! Open for contributions on this at any time, as this would be a great addition.
-
Sure, thank you @marcklingen.
Describe the feature or potential improvement
Hi Langfuse team,
In Langfuse, when using the @observe decorator with models imported via Langchain (e.g. from langchain_openai import OpenAI), the GENERATION is created automatically, but its input is sometimes very large, including the prompt, a pydantic class, variables, etc. Developers often just need to know the essential input rather than having the whole thing logged to Langfuse.
(e.g. the input may be a prompt template from Langfuse that contains variables, instructions, a pydantic model, and the schema for the output.)
A valuable addition would be the ability to configure the input and output for the GENERATION itself, not just for the parent SPAN (which is already possible via update_current_observation). That way, the input/output logged to Langfuse for the GENERATION could be controlled, suppressed, or edited as well. Concretely, a configurable option on the Langchain callback handler to enable or disable input/output capture at the GENERATION level would allow finer control over sensitive-data logging in applications using Langfuse.
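For illustration, here is a hypothetical sketch of what such an option could look like. The capture_generation_input/capture_generation_output flags in the comment do not exist in Langfuse today; they are invented purely to show the idea, while the rest of the snippet uses the existing v2 decorator API:

```python
from langchain_openai import ChatOpenAI
from langfuse.decorators import langfuse_context, observe


@observe()
def process_llm_request(question: str) -> str:
    handler = langfuse_context.get_current_langchain_handler()
    # Hypothetical: flags like the ones below could tell the handler to
    # skip capturing input/output on the auto-created GENERATION.
    #   handler = langfuse_context.get_current_langchain_handler(
    #       capture_generation_input=False,
    #       capture_generation_output=True,
    #   )
    llm = ChatOpenAI(model="gpt-4o-mini")
    return llm.invoke(question, config={"callbacks": [handler]}).content
```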
As a workaround, I have currently written an additional class using BaseCallbackHandler from langchain.callbacks.base, which works.
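For context, a minimal sketch of what such a handler can look like (this is a reconstruction rather than the original class, and it assumes subclassing Langfuse's v2 CallbackHandler so the redacted prompts still reach Langfuse; the cutoff length and placeholder text are illustrative only):

```python
from langfuse.callback import CallbackHandler  # Langfuse's Langchain handler (v2 import path)


class RedactingHandler(CallbackHandler):
    """Redacts oversized prompts before Langfuse records the GENERATION input."""

    MAX_LEN = 200  # illustrative cutoff; not a Langfuse setting

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Completion-style models: truncate each prompt so the full template,
        # instructions, and schema are never sent to Langfuse.
        redacted = [
            p if len(p) <= self.MAX_LEN else p[: self.MAX_LEN] + " ...[truncated]"
            for p in prompts
        ]
        return super().on_llm_start(serialized, redacted, **kwargs)

    # Chat models (e.g. ChatOpenAI) arrive via on_chat_model_start instead;
    # it can be overridden the same way, redacting message contents before
    # delegating to super().
```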
Thank you for considering this idea!
If you need the additional class solution, I'll be happy to contribute and help.
Additional information
For reference, in my current code I am able to control the input and output for the SPAN [Process LLM Request], but not for the GENERATION [ChatOpenAI], which is created automatically by the @observe decorator.
Note: langfuse_ctx is simply the langfuse_context instance after calling langfuse_context.configure with the Langfuse keys and host values.
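A hedged reconstruction of that SPAN-level pattern (the original snippet is not included in this discussion; the observation name and function body below are illustrative):

```python
from langfuse.decorators import langfuse_context, observe


@observe(name="Process LLM Request")
def process_llm_request(question: str) -> str:
    # ... invoke the Langchain model here; the auto-created GENERATION
    # (e.g. ChatOpenAI) still logs its full input/output ...
    answer = "..."

    # This overrides input/output on the surrounding SPAN only; the
    # GENERATION is unaffected, which is the gap this request is about.
    langfuse_context.update_current_observation(
        input={"question": question},
        output={"answer": answer},
    )
    return answer
```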
cc - @marcklingen