I'm developing an endpoint using the LiteLLM SDK, and it works fine with every model I've tried from Google's Vertex AI Model Garden except gemini-pro-vision. The response JSON I get back is just the following, which isn't descriptive at all:
{"detail": {"Error": "'image'"}}
Looking further into my application logs, LiteLLM logs a more detailed error, which I assume is what causes the error response above:
Exception occured - model info for model=gemini-1.0-pro-vision does not have 'output_cost_per_character'-pricing
model_info={'key': 'gemini-1.0-pro-vision', 'max_tokens': 2048, 'max_input_tokens': 16384, 'max_output_tokens': 2048, 'input_cost_per_token': 2.5e-07, 'cache_creation_input_token_cost': None, 'cache_read_input_token_cost': None, 'input_cost_per_character': None, 'input_cost_per_token_above_128k_tokens': None, 'output_cost_per_token': 5e-07, 'output_cost_per_character': None, 'output_cost_per_token_above_128k_tokens': None, 'output_cost_per_character_above_128k_tokens': None, 'output_vector_size': None, 'litellm_provider': 'vertex_ai-vision-models', 'mode': 'chat', 'supported_openai_params': ['temperature', 'top_p', 'max_tokens', 'stream', 'tools', 'tool_choice', 'response_format', 'n', 'stop', 'extra_headers'], 'supports_system_messages': None, 'supports_response_schema': None, 'supports_vision': True, 'supports_function_calling': True, 'supports_assistant_prefill': False}
Traceback (most recent call last):
File "C:\Users\rraman\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\litellm_core_utils\llm_cost_calc\google.py", line 154, in cost_per_character
assert (
AssertionError: model info for model=gemini-1.0-pro-vision does not have 'output_cost_per_character'-pricing
It seems that for gemini-pro-vision, model_info doesn't include output_cost_per_character, yet that is the unit LiteLLM uses to calculate cost for this model, so the assertion fails and the error response is returned.
What might be causing the errors detailed above, and what steps can I take to resolve them?
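For reference, here is a minimal sketch (not LiteLLM's actual code) of why the assertion fires, using the model_info values from the log above, together with a per-token fallback that avoids the failure when per-character pricing is missing:

```python
# model_info values copied from the log above; per-character pricing is None.
model_info = {
    "input_cost_per_token": 2.5e-07,
    "output_cost_per_token": 5e-07,
    "output_cost_per_character": None,
}

def output_cost(model_info, completion_characters, completion_tokens):
    """Prefer per-character pricing; fall back to per-token pricing.

    LiteLLM's cost_per_character asserts that 'output_cost_per_character'
    is set; this sketch instead falls back to the per-token price, which
    IS present in the logged model_info.
    """
    per_char = model_info.get("output_cost_per_character")
    if per_char is not None:
        return completion_characters * per_char
    # Fallback: per-token pricing is available even when per-character is not.
    return completion_tokens * model_info["output_cost_per_token"]

print(output_cost(model_info, completion_characters=400, completion_tokens=100))
# 100 * 5e-07 = 5e-05
```

As possible workarounds: LiteLLM exposes a `litellm.register_model()` helper for supplying custom per-model pricing, so registering an `output_cost_per_character` value for gemini-pro-vision may sidestep the assertion (whether that key is honored may depend on your LiteLLM version); upgrading LiteLLM is also worth trying, since gaps like this in the bundled model cost map are typically fixed upstream.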