What happened?
I am running the LiteLLM proxy in Docker with the Langfuse integration turned on, and I notice that sometimes the messages are not redacted there, specifically the AI output (the user input always seems to be redacted).
This only happens sometimes, and I have no idea what triggers it or how to debug it.
Input
{
  "messages": [
    {
      "role": "user",
      "content": "redacted-by-litellm"
    }
  ]
}
Output
{
  "content": "Seveda, z veseljem ti pomagam. Tukaj je nekaj možnosti, kako bi lahko jedrnato povzel tvoje delo s klientko:"
}
(The unredacted Slovene output translates to: "Of course, I'm happy to help. Here are a few options for how you could concisely summarize your work with the client:")
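Since the leak is intermittent, it can help to scan fetched Langfuse traces programmatically rather than eyeball the UI. A minimal sketch, assuming traces are plain dicts shaped like the JSON above; the fetching step (e.g. via the Langfuse SDK) is not shown, and `is_redacted` / `find_leaks` are hypothetical helpers, not LiteLLM or Langfuse APIs:

```python
# Sketch: flag traces whose output was not redacted by LiteLLM.
# Assumes traces are plain dicts shaped like the Langfuse JSON above.

REDACTED = "redacted-by-litellm"

def is_redacted(value) -> bool:
    """True if every "content" string field equals the redaction placeholder."""
    if isinstance(value, str):
        return value == REDACTED
    if isinstance(value, dict):
        return all(is_redacted(v) for k, v in value.items() if k == "content")
    if isinstance(value, list):
        return all(is_redacted(v) for v in value)
    return True

def find_leaks(traces):
    """Return ids of traces whose output still contains real text."""
    return [t["id"] for t in traces if not is_redacted(t.get("output", {}))]
```

For example, against two traces shaped like the ones above:

```python
traces = [
    {"id": "a", "output": {"content": "redacted-by-litellm"}},
    {"id": "b", "output": {"content": "Seveda, z veseljem ti pomagam."}},
]
find_leaks(traces)  # -> ["b"]
```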
Relevant log output
20:43:56 - LiteLLM Proxy:INFO: parallel_request_limiter.py:68 - Current Usage of key in this minute: None
litellm-1 | 20:43:56 - LiteLLM Proxy:INFO: parallel_request_limiter.py:68 - Current Usage of user in this minute: None
litellm-1 | 20:43:56 - LiteLLM:INFO: utils.py:3035 -
litellm-1 | LiteLLM completion() model= gemini-2.0-pro-exp-02-05; provider = vertex_ai
litellm-1 | 20:43:56 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1 | 20:43:56 - LiteLLM Router:INFO: router.py:1051 - litellm.acompletion(model=vertex_ai/gemini-2.0-pro-exp-02-05) 200 OK
litellm-1 | INFO: 172.18.0.6:38458 - "POST /v1/chat/completions HTTP/1.1" 200 OK
litellm-1 | 20:44:05 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1 | 20:44:05 - LiteLLM Proxy:INFO: proxy_server.py:899 - Writing spend log to db - request_id: chatcmpl-a98d9f79-38be-43a4-94d7-206b253839d1, spend: 0.0
litellm-1 | 20:44:05 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1 | 20:44:05 - LiteLLM:INFO: langfuse.py:261 - Langfuse Layer Logging - logging success
settings:

litellm_settings:
  modify_params: true
  cache: false
  success_callback: ["langfuse"]
  failure_callback: ["sentry"]
  turn_off_message_logging: True
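One intermittent-failure source worth ruling out is the container not actually running this config. A small sanity-check sketch; the `settings` dict below mirrors the YAML above (in a real check you would load the mounted config file, e.g. with PyYAML), and `redaction_enabled` is a hypothetical helper:

```python
# Sanity-check sketch: confirm the settings a proxy instance runs with
# actually enable redaction. `settings` mirrors the YAML config above.

settings = {
    "litellm_settings": {
        "modify_params": True,
        "cache": False,
        "success_callback": ["langfuse"],
        "failure_callback": ["sentry"],
        "turn_off_message_logging": True,
    }
}

def redaction_enabled(cfg: dict) -> bool:
    """True if message logging is turned off for logging callbacks."""
    return bool(cfg.get("litellm_settings", {}).get("turn_off_message_logging"))
```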
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
1.63.14
Twitter / LinkedIn details
No response