Cost tracking via Langfuse #4506
-
Hello all, please let me know:

Q1. I am tracking token usage from LangChain's `.ainvoke()` response via `response.response_metadata["token_usage"]` and feeding it into Langfuse's `ModelUsage` manually. This logs the token counts, but `response.response_metadata["token_usage"]` does not include cost, so no cost is logged in the Langfuse UI. The official docs say that if the model is registered in Langfuse, cost is calculated automatically when you create a generation, but that is not happening for me. Is there a way to avoid manual cost calculation while using the Python low-level SDK with chat models imported from LangChain?

Q2. How can I use the `@observe` decorators? I tried them, but they do not seem to work for my application. Is there a tutorial or implementation showing the decorators with FastAPI, or any examples beyond the official docs? The docs do not have enough examples.
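For reference, a minimal sketch of the manual approach described in Q1, assuming the Langfuse v2 low-level Python SDK and a LangChain `ChatOpenAI` model; the function name `trace_manual_usage` and the model name are illustrative:

```python
# Minimal sketch of the manual usage reporting described in Q1.
# Assumes `langfuse` (v2), `langchain-openai`, and the usual
# LANGFUSE_* / OPENAI_API_KEY environment variables are set.
import asyncio

from langchain_openai import ChatOpenAI
from langfuse import Langfuse

langfuse = Langfuse()
llm = ChatOpenAI(model="gpt-4o-mini")

async def trace_manual_usage(prompt: str) -> str:
    response = await llm.ainvoke(prompt)
    token_usage = response.response_metadata["token_usage"]

    trace = langfuse.trace(name="manual-usage-example")
    trace.generation(
        name="chat",
        model="gpt-4o-mini",
        input=prompt,
        output=response.content,
        # Token counts only; no cost fields are available here.
        usage={
            "input": token_usage["prompt_tokens"],
            "output": token_usage["completion_tokens"],
            "total": token_usage["total_tokens"],
            "unit": "TOKENS",
        },
    )
    langfuse.flush()
    return response.content

if __name__ == "__main__":
    print(asyncio.run(trace_manual_usage("Hello!")))
```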
Replies: 1 comment 1 reply
-
Hey there, @asadsk8r02! I'm here to help you with your Langfuse and FastAPI issues. Let's tackle these together!

I found a similar unsolved discussion about inferring the cost of a trace from a metadata field, which might be related to your cost-calculation issue [1]. There was also a closed issue about Langfuse not counting tokens and costs with LangChain, but no solution was found in the comments [2].

For your first question, Langfuse's ModelUsage feature should automatically calculate costs if the model is registered correctly. Ensure that the model is specified in the generation request and that it matches one of the supported models, such as OpenAI or Anthropic. The model definition in Langfuse must include prices per unit, and the unit on the usage object must match the unit specified in the model definition [3][4]. If these conditions are met and the cost is still not calculated, there might be an issue with the setup or a limitation in the current implementation.

Regarding your second question about using the `@observe` decorator…
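Back on Q1, a minimal sketch of the conditions listed above, assuming the Langfuse v2 low-level SDK and a model definition registered in the Langfuse project (with prices per unit and unit `TOKENS`) whose match pattern covers the `model` string used here; all values are illustrative:

```python
# Minimal sketch: letting Langfuse infer cost from a registered model.
# Assumes a model definition exists in the Langfuse project whose match
# pattern covers "gpt-4o-mini", with unit TOKENS and input/output prices.
from langfuse import Langfuse

langfuse = Langfuse()

trace = langfuse.trace(name="auto-cost-example")
trace.generation(
    name="chat",
    model="gpt-4o-mini",   # must match the registered model definition
    input="Hello!",
    output="Hi, how can I help?",
    usage={
        "input": 12,       # prompt tokens
        "output": 9,       # completion tokens
        "unit": "TOKENS",  # must match the unit on the model definition
    },
    # No cost fields are passed: cost should be inferred from the
    # registered prices, per the docs referenced in [3][4] above.
)
langfuse.flush()
```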
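And for Q2, a minimal sketch of using the `@observe` decorator inside a FastAPI app, assuming the Langfuse v2 `langfuse.decorators` module; the route and function names are illustrative, not an official example:

```python
# Minimal sketch: Langfuse @observe decorator called from a FastAPI route.
# Assumes the Langfuse v2 SDK and the LANGFUSE_* environment variables.
from fastapi import FastAPI
from langfuse.decorators import observe

app = FastAPI()

@observe()  # creates a trace (or a nested span) for each call
def answer_question(question: str) -> str:
    # ... call your LangChain chain / chat model here ...
    return f"You asked: {question}"

@app.get("/ask")
async def ask(q: str) -> dict:
    # The decorated helper is traced; the endpoint itself stays plain.
    return {"answer": answer_question(q)}
```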