[Bug]: turn_off_message_logging sometimes not redacting output #9507

Open

clarity99 opened this issue Mar 24, 2025 · 0 comments

Labels: bug (Something isn't working)

@clarity99
What happened?

I am running the LiteLLM proxy in Docker with the Langfuse integration turned on, and I notice that the messages there are sometimes not redacted. Specifically, it is the AI output that leaks; the user input seems to always be redacted.
This only happens sometimes, and I have no idea what triggers it or how to debug it.

settings:

litellm_settings:
  modify_params: true
  cache: false
  success_callback: ["langfuse"]
  failure_callback: ["sentry"]
  turn_off_message_logging: True
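
Since it is unclear how to debug this, one option is an SDK-level probe that prints what the logging pipeline actually hands to success callbacks. This is a minimal sketch, not part of the original report: the `RedactionProbe` class is hypothetical, credentials/model are environment-specific, and whether a custom logger receives the redacted or raw payload may vary by LiteLLM version.

```python
# debug_redaction.py — hypothetical probe, assumes the litellm SDK is installed
import litellm
from litellm.integrations.custom_logger import CustomLogger

litellm.turn_off_message_logging = True  # same flag the proxy config sets

class RedactionProbe(CustomLogger):
    """Prints what the logging pipeline hands to success callbacks."""

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("input seen by logger: ", kwargs.get("messages"))
        print("output seen by logger:", response_obj.choices[0].message.content)

litellm.callbacks = [RedactionProbe()]

litellm.completion(
    model="vertex_ai/gemini-2.0-pro-exp-02-05",  # model from the report; any configured model works
    messages=[{"role": "user", "content": "hello"}],
)
```

If the probe prints "redacted-by-litellm" for the input but real text for the output, that matches what shows up in Langfuse.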

JSON output in Langfuse:

Input

{
  messages: [
    0: {
      role: "user"
      content: "redacted-by-litellm"
    }
  ]
}
Output

{
  content: "Seveda, z veseljem ti pomagam. Tukaj je nekaj možnosti, kako bi lahko jedrnato povzel tvoje delo s klientko:

(Unredacted model output in Slovenian, truncated in the report; in English: "Of course, I'd be happy to help. Here are a few ways you could concisely summarize your work with the client:")

Relevant log output

20:43:56 - LiteLLM Proxy:INFO: parallel_request_limiter.py:68 - Current Usage of key in this minute: None
litellm-1  | 20:43:56 - LiteLLM Proxy:INFO: parallel_request_limiter.py:68 - Current Usage of user in this minute: None
litellm-1  | 20:43:56 - LiteLLM:INFO: utils.py:3035 - 
litellm-1  | LiteLLM completion() model= gemini-2.0-pro-exp-02-05; provider = vertex_ai
litellm-1  | 20:43:56 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1  | 20:43:56 - LiteLLM Router:INFO: router.py:1051 - litellm.acompletion(model=vertex_ai/gemini-2.0-pro-exp-02-05) 200 OK
litellm-1  | INFO:     172.18.0.6:38458 - "POST /v1/chat/completions HTTP/1.1" 200 OK
litellm-1  | 20:44:05 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1  | 20:44:05 - LiteLLM Proxy:INFO: proxy_server.py:899 - Writing spend log to db - request_id: chatcmpl-a98d9f79-38be-43a4-94d7-206b253839d1, spend: 0.0
litellm-1  | 20:44:05 - LiteLLM:INFO: cost_calculator.py:588 - selected model name for cost calculation: vertex_ai/gemini-2.0-pro-exp-02-05
litellm-1  | 20:44:05 - LiteLLM:INFO: langfuse.py:261 - Langfuse Layer Logging - logging success
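
Because the leak is intermittent, one way to gather more traces is to fire the same request repeatedly through the proxy's OpenAI-compatible endpoint, mirroring the `POST /v1/chat/completions` call in the log above, and then inspect each resulting trace in Langfuse. A sketch, assuming the proxy listens on localhost:4000 and accepts the placeholder key below:

```python
# repro_loop.py — hypothetical loop using the openai client against the proxy
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed proxy address
    api_key="sk-1234",                    # assumed virtual key
)

for i in range(10):
    resp = client.chat.completions.create(
        model="gemini-2.0-pro-exp-02-05",  # model name as routed by the proxy
        messages=[{"role": "user", "content": f"probe {i}"}],
    )
    print(i, resp.choices[0].message.content[:40])

# In Langfuse, every trace's input and output should read "redacted-by-litellm";
# any trace showing real output reproduces the bug.
```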

Are you an ML Ops Team?

No

What LiteLLM version are you on?

1.63.14

Twitter / LinkedIn details

No response

clarity99 added the bug label on Mar 24, 2025