Add new integration: LLM Proxy #1010
marcklingen started this conversation in Ideas
Replies: 1 comment · 1 reply
-
We've found that LiteLLM + Langfuse is a really great fit. We've got both running as containers on Google Cloud Run. It's super cool to have an OpenAI API endpoint that works across multiple Azure OpenAI regions (load balanced) and automatically logs to Langfuse. (There are other features, that's just the one I love the most.)
https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---langfuse
https://docs.litellm.ai/docs/observability/langfuse_integration
cc @krrishdholakia and @ishaan-jaff if you haven't already spoken with them. They're already solving the cons for us/you. =)
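For reference, a minimal sketch of that setup from the client's point of view: a plain OpenAI SDK client pointed at a self-hosted LiteLLM proxy that load-balances Azure OpenAI deployments and has the Langfuse logging callback enabled. The proxy URL, API key, and model alias below are placeholders, not values from an actual deployment:

```python
# Sketch only: base_url, api_key, and the model alias are placeholders for
# whatever the LiteLLM proxy deployment actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="https://litellm-proxy-example.a.run.app",  # placeholder Cloud Run URL of the LiteLLM proxy
    api_key="sk-anything",  # virtual key configured on the proxy, not a real OpenAI key
)

response = client.chat.completions.create(
    model="azure-gpt-35",  # alias the proxy maps/load-balances to Azure OpenAI deployments
    messages=[{"role": "user", "content": "Hello from behind the proxy"}],
)
print(response.choices[0].message.content)
# The proxy forwards the call to one of the Azure regions and, with the
# Langfuse callback configured, logs the generation to Langfuse.
```

The application keeps using the unmodified OpenAI SDK; routing and logging happen entirely inside the proxy.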
-
Current situation
Currently, the client SDKs (JS, Python) and the native integrations allow you to instrument an application so that it asynchronously sends traces to Langfuse.
While most integrations were built to support nested traces, the current OpenAI integration makes it very easy to capture individual LLM calls (generations) by changing the import.
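For context, that drop-in pattern looks roughly like the sketch below; the exact import path and model name are illustrative assumptions based on the Langfuse Python docs, not a spec:

```python
# Drop-in OpenAI integration: swap the import, keep the rest of the code unchanged.
from langfuse.openai import openai  # instead of: import openai

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
# The wrapper records this call as a Langfuse generation in the background;
# nested traces still require the regular SDK instrumentation.
```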
Proxy
Alternatively, LLM calls could be logged via a proxy as pioneered by Helicone and adapted by Cloudflare (AI Gateway).
```diff
  client = OpenAI(
+     base_url="https://proxy.langfuse.com/openai",
  )
```
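To make the idea concrete, here is a minimal sketch of what such a proxy could do server-side. FastAPI/httpx are an assumed stack, and `log_generation` is a hypothetical stand-in for the actual Langfuse ingestion call, not an existing API; streaming and error handling are omitted:

```python
# Minimal sketch of an OpenAI-compatible relay that logs to Langfuse.
import httpx
from fastapi import FastAPI, Request

app = FastAPI()
OPENAI_BASE = "https://api.openai.com/v1"


async def log_generation(path: str, payload: dict, output: dict) -> None:
    """Hypothetical helper: send input/output to Langfuse ingestion (fire-and-forget)."""
    ...


@app.post("/openai/{path:path}")
async def relay(path: str, request: Request):
    payload = await request.json()
    headers = {"Authorization": request.headers.get("Authorization", "")}
    # Forward the request unchanged to the upstream OpenAI API.
    async with httpx.AsyncClient(timeout=600) as client:
        upstream = await client.post(f"{OPENAI_BASE}/{path}", json=payload, headers=headers)
    output = upstream.json()
    # Log after the upstream call so the proxy adds little latency to the hot path.
    await log_generation(path, payload, output)
    return output
```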
Considerations
This form of integration goes in the same direction as the current OpenAI integration (very easy to get started, no support for non-LLM spans/events), but it comes with some pros and cons.
Pro:
Con:
At this point, I don't think this should be a priority, as most users get a lot of value from deeply nested traces. Looking forward to thoughts/ideas on this.