
Fix cache miss for gemini models with response_format #10635


Conversation

@casparhsws (Contributor) commented May 7, 2025

Title

Fix cache miss for gemini models with response_format

Relevant issues

Fixes #8706
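
For context, #8706 reports that identical requests to Gemini models are never served from LiteLLM's cache once `response_format` (structured output) is set. Below is a minimal sketch of how one might exercise that path with the LiteLLM Python SDK and its in-memory cache; the model name, the `json_object` response format, and the cache-hit check via matching response ids are illustrative assumptions on my part, not code from this PR.

```python
# Hedged reproduction sketch (not the code changed in this PR).
# Assumes GEMINI_API_KEY is set and that the import path below matches the
# installed litellm version.
import litellm
from litellm import completion
from litellm.caching.caching import Cache  # path may vary across litellm versions

litellm.cache = Cache()  # default in-memory cache

kwargs = dict(
    model="gemini/gemini-1.5-flash",            # assumed model name
    messages=[{"role": "user", "content": "Reply with a JSON greeting."}],
    response_format={"type": "json_object"},    # structured output
    caching=True,
)

first = completion(**kwargs)
second = completion(**kwargs)

# Before this fix, the second call reportedly missed the cache for Gemini
# models whenever response_format was present; a cached response should be
# identical to the first one (same response id).
print(first.id == second.id)
```

If the fix behaves as described, the second call should be served from the cache rather than hitting the Gemini API again.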

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added an assertion to an existing test. It's not ideal, since it doesn't directly address the caching issue, so I will ask for advice.

  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem
[Screenshot: new test passing locally]

Type

🐛 Bug Fix

Changes

vercel bot commented May 7, 2025

litellm preview deployment: ✅ Ready (updated May 7, 2025 8:04pm UTC)

@CLAassistant commented May 7, 2025

CLA assistant check
All committers have signed the CLA.

@casparhsws casparhsws marked this pull request as ready for review May 7, 2025 20:08
@krrishdholakia krrishdholakia merged commit d680feb into BerriAI:main May 8, 2025
6 checks passed
Development

Successfully merging this pull request may close these issues.

[Bug]: Caching on litellm proxy does not work when using structured output (response_format)