How to enable context caching for Vertex AI models within LiteLLM Router? #6878
AbhishekRP2002 asked this question in Q&A (Unanswered)
Here is my model list:
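Roughly, the list looks like this (the project, location, and deployment names below are placeholders, not my actual values):

```python
# Illustrative LiteLLM Router model_list for Vertex AI Gemini deployments.
# "model_name" is the alias used when calling the router; "litellm_params"
# carry the actual Vertex AI model id, project, and location (placeholders here).
model_list = [
    {
        "model_name": "gemini-1.5-pro",
        "litellm_params": {
            "model": "vertex_ai/gemini-1.5-pro-002",
            "vertex_project": "my-gcp-project",
            "vertex_location": "us-central1",
        },
    },
    {
        "model_name": "gemini-1.5-flash",
        "litellm_params": {
            "model": "vertex_ai/gemini-1.5-flash-002",
            "vertex_project": "my-gcp-project",
            "vertex_location": "us-central1",
        },
    },
]
```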
I am using the LiteLLM Router from LangChain, as shown below:
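Roughly like this, using `ChatLiteLLMRouter` from `langchain_community` (the alias and prompt are just illustrative):

```python
from litellm import Router
from langchain_community.chat_models import ChatLiteLLMRouter

# Router over the Vertex AI Gemini deployments defined in model_list above.
litellm_router = Router(model_list=model_list)

# LangChain chat model that delegates every call to the LiteLLM Router.
llm = ChatLiteLLMRouter(router=litellm_router, model_name="gemini-1.5-pro")

response = llm.invoke("Summarize the attached contract in three bullet points.")
print(response.content)
```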
I want to understand how I can enable context caching for the Gemini family of models in this setup.
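For example, I am wondering whether the Anthropic-style `cache_control` blocks that LiteLLM documents for prompt caching are the right mechanism here, or whether something like Gemini's cached content resource needs to be used instead. The sketch below is only what I imagine it might look like, not something I have confirmed works for `vertex_ai` models through the Router:

```python
# Hypothetical sketch: mark a large, reusable prefix with an Anthropic-style
# cache_control block and call the router as usual. Whether LiteLLM translates
# this into Gemini context caching for vertex_ai models is exactly my question.
with open("contract.txt") as f:
    long_context = f.read()  # large prefix reused across many requests

response = litellm_router.completion(
    model="gemini-1.5-pro",  # alias from the model_list above
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": long_context,
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "List the termination clauses."},
    ],
)
print(response.choices[0].message.content)
```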
Any help would be really appreciated.
cc: @krrishdholakia