Replies: 1 comment 9 replies
- Hi @JTZ18, could you try upgrading to the latest version of our SDK (2.55.0)? We released full multi-modal support last week.
- I am using `langfuse==2.53.9`. I am trying to trace LLM generations to get accurate input and output token cost calculation for multimodal inputs such as images and text. This works well when I use `from langfuse.openai import AzureOpenAI`: the inputs are handled properly and not truncated. However, using the low-level SDK results in truncation. I want to make this work in the low-level SDK. Any ideas how I could avoid truncation for `trace.generation`?