
[Bug]: Not able to use Gemini. #12100

Closed

ujjwalk906 opened this issue Mar 20, 2024 · 5 comments
Labels
bug (Something isn't working), triage (Issue needs to be triaged/prioritized)

Comments

@ujjwalk906

Bug Description

Getting a KeyError when calling Gemini chat.

KeyError: 'model'

Version

0.10.20

Steps to Reproduce

from llama_index.llms.gemini import Gemini
from llama_index.core.llms import ChatMessage, MessageRole

history = [
    ChatMessage(role=MessageRole.USER, content=" The alien says hello ")
]

llm = Gemini(api_key=credentials)
llm.chat(history)

Relevant Logs/Tracebacks

{
	"name": "KeyError",
	"message": "'model'",
	"stack": "---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[26], line 10
      4 history = [
      5     ChatMessage(role=MessageRole.USER,
      6                 content= \" The alien says hello \")
      7 ]
      9 llm = Gemini(api_key=credentials)
---> 10 llm.chat(history)

File d:\\Anaconda\\envs\\llama_0_10\\Lib\\site-packages\\llama_index\\core\\llms\\callbacks.py:93, in wrapped_llm_chat(_self, messages, **kwargs)
     83         callback_manager.on_event_end(
     84             CBEventType.LLM,
     85             payload={
   (...)
     89             event_id=event_id,
     90         )
     92     return wrapped_gen()
---> 93 else:
     94     callback_manager.on_event_end(
     95         CBEventType.LLM,
     96         payload={
   (...)
    100         event_id=event_id,
    101     )
    102     dispatcher.event(
    103         LLMChatEndEvent(
    104             messages=messages,
    105             response=f_return_val,
    106         )
    107     )

File d:\\Anaconda\\envs\\llama_0_10\\Lib\\site-packages\\llama_index\\llms\\gemini\\base.py:163, in Gemini.chat(self, messages, **kwargs)
    161 chat = self._model.start_chat(history=history)
    162 response = chat.send_message(next_msg)
--> 163 return chat_from_gemini_response(response)

File d:\\Anaconda\\envs\\llama_0_10\\Lib\\site-packages\\llama_index\\llms\\gemini\\utils.py:61, in chat_from_gemini_response(response)
     55 _error_if_finished_early(top_candidate)
     57 raw = {
     58     **(type(top_candidate).to_dict(top_candidate)),
     59     **(type(response.prompt_feedback).to_dict(response.prompt_feedback)),
     60 }
---> 61 role = ROLES_FROM_GEMINI[top_candidate.content.role]
     62 return ChatResponse(message=ChatMessage(role=role, content=response.text), raw=raw)

KeyError: 'model'"
}
ujjwalk906 added the bug and triage labels on Mar 20, 2024

dosubot bot commented Mar 20, 2024

Hey again, @ujjwalk906! Good to see you bringing another interesting challenge to our table.

Thank you for reporting this issue. The KeyError you're encountering when calling Gemini chat in LlamaIndex version 0.10.20 is likely due to the way model_name is passed or accessed within the Gemini class or during its instantiation. The model_name is a critical parameter for initializing the Gemini class and is subsequently used to configure and access the underlying Google generative AI model (genai.GenerativeModel).

In the provided code snippet, the model_name is defined as a class attribute with a default value set to the first model in the GEMINI_MODELS tuple. This attribute is then overridden in the __init__ method if a different model_name is provided during the class instantiation. The model_name is used to configure the genai.GenerativeModel instance:

self._model = genai.GenerativeModel(
    model_name=model_name,
    generation_config=final_gen_config,
    safety_settings=safety_settings,
)

If you're encountering a KeyError with 'model', it suggests that there might be an issue with how the model_name is being passed, accessed, or used. Here are a few potential causes and solutions:

  1. Incorrect Model Name: Ensure that the model_name passed during instantiation of the Gemini class matches one of the supported models (see the sketch after this list). If an incorrect or unsupported model name is provided, it could lead to errors when the underlying API or library tries to access model-specific features or configurations.

  2. Model Name Not Properly Passed or Configured: Double-check that the model_name is correctly passed to the Gemini class constructor and that there's no typo or logical error in how it's being set or overridden.

  3. Issues with the Underlying Library or API: The error might also stem from changes or issues within the google.generativeai library or the API it interacts with. Ensure that the library version is compatible with LlamaIndex version 0.10.20 and that there haven't been any breaking changes or deprecations related to model configuration or access.
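
For illustration, a minimal sketch of point 1, pinning the model explicitly at construction time. The model_name keyword follows the constructor described above, but the "models/gemini-pro" value is an assumption here; substitute whichever supported model your key has access to.

from llama_index.llms.gemini import Gemini

# Hypothetical sketch: pass a supported model name explicitly instead of
# relying on the default (the first entry in GEMINI_MODELS).
llm = Gemini(api_key=credentials, model_name="models/gemini-pro")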

To resolve the issue, verify the model name, ensure it's correctly passed and used within the Gemini class, and check for any compatibility issues with the underlying google.generativeai library. If the problem persists, reviewing the library's documentation or seeking support from its maintainers might provide further insights.
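
As a quick sanity check for point 3, you can print the versions actually installed in the active environment. This is a sketch: both packages expose __version__ in recent releases, but that is an assumption worth verifying for your installed versions.

# Sketch: confirm which package versions the failing environment is using.
import llama_index.core
import google.generativeai as genai

print("llama-index-core:", llama_index.core.__version__)
print("google-generativeai:", genai.__version__)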

I also found a similar closed issue, "specifying the model name isn't working in the latest version", which was resolved by the pull request "Fix query runner by passing llm_predictor/prompt_helper to the query properly". This might provide some additional context or potential solutions to your problem.

Please let me know if this helps or if you need further assistance.


@logan-markewich
Collaborator

Try pip install -U llama-index-core llama-index-llms-gemini

@ujjwalk906
Author

Try pip install -U llama-index-core llama-index-llms-gemini

The issue still persists.

@logan-markewich
Collaborator

I don't really see how that's possible?

ROLES_FROM_GEMINI: Dict[MessageRole, MessageRole] = {

The key is there 🤔
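
For context, the mapping in llama_index.llms.gemini.utils looks roughly like this. This is a reconstruction from the traceback, not verbatim source; Gemini's API only returns "user" and "model" roles.

from llama_index.core.llms import MessageRole

# Rough sketch (assumed, not verbatim): Gemini's "model" role maps back to
# the ASSISTANT role on the llama-index side. MessageRole is a string enum,
# so the raw string "model" hashes to MessageRole.MODEL when that member
# exists in the installed llama-index-core.
ROLES_FROM_GEMINI = {
    MessageRole.USER: MessageRole.USER,
    MessageRole.MODEL: MessageRole.ASSISTANT,
}

If the installed llama-index-core predates the MessageRole.MODEL member, the lookup of the raw "model" string would fail exactly as in the traceback, which would explain why upgrading llama-index-core resolves it.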

@ujjwalk906
Author

Works in 0.10.21
