

Sameerlite (Collaborator)

Add proper api_base support for Gemini models in LiteLLM proxy

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature
🐛 Bug Fix

Changes

Added better URL construction for Gemini. When an api_base is provided, the request URL is now built from the api_base, the model defined in the config, and the endpoint. This change does not break the existing behavior: when api_base is not provided, the request still goes to https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent?key=*****xQMg, and the vertex_ai provider config also works as before.
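
As a rough illustration, here is a minimal sketch of the intended URL construction; the build_gemini_url helper, its name, and the exact join logic are hypothetical and not the actual LiteLLM implementation:

```python
# Illustrative sketch only -- not LiteLLM's actual implementation.
DEFAULT_API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_gemini_url(model: str, endpoint: str, api_base: str | None = None) -> str:
    """Build a Gemini request URL, preferring a custom api_base when one is given."""
    base = (api_base or DEFAULT_API_BASE).rstrip("/")
    return f"{base}/models/{model}:{endpoint}"

# With an api_base (proxied):
# https://proxy.zapier.com/generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent
print(build_gemini_url(
    "gemini-2.5-flash-lite",
    "generateContent",
    api_base="https://proxy.zapier.com/generativelanguage.googleapis.com/v1beta",
))

# Without an api_base, the default Google endpoint is used:
# https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent
print(build_gemini_url("gemini-2.5-flash-lite", "generateContent"))
```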

Testing

```python
import litellm

# Now correctly constructs:
# https://proxy.zapier.com/generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent
response = litellm.completion(
    model="gemini/gemini-2.5-flash-lite",
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="https://proxy.zapier.com/generativelanguage.googleapis.com/v1beta",
)
```
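
For comparison, omitting api_base keeps the original behavior described above; a call like the following should still hit the default Google endpoint:

```python
# Without api_base, the request still goes to:
# https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent
response = litellm.completion(
    model="gemini/gemini-2.5-flash-lite",
    messages=[{"role": "user", "content": "Hello!"}],
)
```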
[Screenshot attached]


@krrishdholakia merged commit 635dc72 into BerriAI:main on Sep 17, 2025
5 of 6 checks passed