[Bug]: Not able to use Gemini. #12100
Comments
Hey again, @ujjwalk906! Good to see you bringing another interesting challenge to our table. Thank you for reporting this issue.

In the provided code snippet, the model is instantiated as:

self._model = genai.GenerativeModel(
    model_name=model_name,
    generation_config=final_gen_config,
    safety_settings=safety_settings,
)

If you're encountering a KeyError: 'model', verify the model name and ensure it's correctly passed and used within the Gemini class.

I also found a similar closed issue, where specifying the model name wasn't working in the latest version, that was resolved by a pull request: Fix query runner by passing llm_predictor/prompt_helper to the query properly. This might provide some additional context or potential solutions to your problem. Please let me know if this helps or if you need further assistance.

Sources
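As a quick sanity check outside LlamaIndex, you can ask the google-generativeai SDK directly which models your API key can see. A minimal sketch (the API key value is a placeholder, and the models returned depend on your account):

import google.generativeai as genai

# Configure the SDK with your key (placeholder; substitute your own).
genai.configure(api_key="YOUR_GOOGLE_API_KEY")

# Print only models that support content generation, so you can confirm
# the exact model name string before passing it to the Gemini wrapper.
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)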
Try
Still the same issue persists.
I don't really see how that's possible?
The key is there 🤔
Works in 0.10.21
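For anyone hitting this on 0.10.20, upgrading and then verifying the installed version is the simplest path. A sketch, assuming the standard package names from the v0.10.x package split (llama-index plus the llama-index-llms-gemini integration):

# Upgrade to a release where chat works (0.10.21 per the comment above):
#   pip install -U "llama-index>=0.10.21" llama-index-llms-gemini

import llama_index.core

# Confirm the upgrade actually took effect in the active environment.
print(llama_index.core.__version__)  # should print 0.10.21 or later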
Bug Description
Getting a KeyError when calling Gemini chat:
KeyError: 'model'
Version
0.10.20
Steps to Reproduce
from llama_index.llms.gemini import Gemini
from llama_index.core.llms import ChatMessage, MessageRole

# `credentials` holds the Google API key.
history = [
    ChatMessage(role=MessageRole.USER, content="The alien says hello")
]

llm = Gemini(api_key=credentials)
llm.chat(history)
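As a narrowing step (not a fix), it can help to check whether the plain completion path works while chat fails. A minimal sketch, assuming the API key is read from an environment variable in place of the `credentials` variable above:

import os
from llama_index.llms.gemini import Gemini

# Stand-in for the `credentials` variable from the report above.
credentials = os.environ["GOOGLE_API_KEY"]

llm = Gemini(api_key=credentials)

# If complete() succeeds while chat() raises KeyError: 'model',
# the bug is isolated to the chat message handling in this version.
print(llm.complete("The alien says hello").text)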
Relevant Logs/Tracebacks