
[BUG] Unexpected keyword argument 'temperature' when using QA script with Gemini LlamaIndex #1073

Open

keandk opened this issue Jan 22, 2025 · 2 comments

Assignees: vkehfdl1
Labels: AutoRAG Core (From the core framework of AutoRAG), bug (Something isn't working)

Comments

keandk commented Jan 22, 2025

Describe the bug
With the provided QA creation script in the README, I tried to replace the OpenAI model with Gemini. After changing the model and my file paths, I ran the code and got this error: TypeError: ChatSession.send_message_async() got an unexpected keyword argument 'temperature'.
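
The kwarg appears to be forwarded from the per-call kwargs of achat into google.generativeai's ChatSession.send_message_async, which does not accept it. As a hedged workaround sketch (assuming the script only needs a fixed temperature rather than per-call control), the value can be set on the Gemini constructor instead, where the LlamaIndex integration does accept it:

from llama_index.llms.gemini import Gemini

# Sketch: configure temperature at construction time so nothing has to pass it
# through achat(**kwargs); assumes GOOGLE_API_KEY is set in the environment.
llm = Gemini(model="models/gemini-1.5-flash", temperature=0.2)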

To Reproduce
Steps to reproduce the behavior:

  1. Import Gemini from the LlamaIndex LLM integrations
  2. Instantiate a new Gemini LLM object
  3. Pass that object as the llm argument in the code below

Expected behavior
Normal execution.

Full error log
The error is raised in \llama_index\llms\gemini\base.py:233, in achat.

Code where the bug happened

import pandas as pd
from llama_index.llms.gemini import Gemini

from autorag.data.qa.filter.dontknow import dontknow_filter_rule_based
from autorag.data.qa.generation_gt.llama_index_gen_gt import (
    make_basic_gen_gt,
    make_concise_gen_gt,
)
from autorag.data.qa.schema import Raw, Corpus
from autorag.data.qa.query.llama_gen_query import factoid_query_gen
from autorag.data.qa.sample import random_single_hop

from dotenv import load_dotenv
load_dotenv()  # loads the API key (e.g., GOOGLE_API_KEY) for the Gemini client

llm = Gemini(model="models/gemini-1.5-flash")
raw_df = pd.read_parquet("data/raw.parquet")
raw_instance = Raw(raw_df)

corpus_df = pd.read_parquet("data/corpus-semantic.parquet")
corpus_instance = Corpus(corpus_df, raw_instance)

initial_qa = (
    corpus_instance.sample(random_single_hop, n=3)
    .map(
        lambda df: df.reset_index(drop=True),
    )
    .make_retrieval_gt_contents()
    .batch_apply(
        factoid_query_gen,  # query generation
        llm=llm,
    )
    .batch_apply(
        make_basic_gen_gt,  # answer generation (basic)
        llm=llm,
    )
    .batch_apply(
        make_concise_gen_gt,  # answer generation (concise)
        llm=llm,
    )
    .filter(
        dontknow_filter_rule_based,  # filter don't know
        lang="vi",
    )
)

initial_qa.to_parquet('data/qa.parquet', 'data/corpus.parquet')
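
Until the kwarg forwarding is fixed upstream, a stopgap is to wrap the LLM so that unsupported per-call kwargs never reach the Gemini chat session. This is a minimal sketch, not AutoRAG's or LlamaIndex's own fix, and GeminiSafe is a hypothetical name:

from typing import Any, Sequence

from llama_index.core.llms import ChatMessage, ChatResponse
from llama_index.llms.gemini import Gemini

class GeminiSafe(Gemini):
    """Hypothetical wrapper that drops kwargs rejected by
    google.generativeai's ChatSession.send_message_async."""

    async def achat(
        self, messages: Sequence[ChatMessage], **kwargs: Any
    ) -> ChatResponse:
        kwargs.pop("temperature", None)  # not accepted by send_message_async
        return await super().achat(messages, **kwargs)

llm = GeminiSafe(model="models/gemini-1.5-flash")

The rest of the script stays the same; only the llm object changes.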

Desktop (please complete the following information):

  • OS: Windows
  • Python version: 3.12
keandk added the bug label on Jan 22, 2025
vkehfdl1 (Contributor) commented

@keandk
Thank you for reporting the bug.
I will check it out.

vkehfdl1 self-assigned this on Jan 25, 2025
vkehfdl1 added the AutoRAG Core label on Jan 30, 2025
keandk (Author) commented Feb 3, 2025

Hi, I have updated the AutoRAG package and tried running the code again, and I got this new error:

[02/03/25 15:06:12] ERROR    [__init__.py:53] >> Unexpected exception                                           __init__.py:53
                             ╭────────────────────── Traceback (most recent call last) ───────────────────────╮ 
                             │qa-creation.py:31 in <module>    │ 
                             │                                                                                │ 
                             │   28 │   │   lambda df: df.reset_index(drop=True),                             │ 
                             │   29 │   )                                                                     │ 
                             │   30 │   .make_retrieval_gt_contents()                                         │ 
                             │ ❱ 31 │   .batch_apply(                                                         │ 
                             │   32 │   │   factoid_query_gen,  # query generation                            │ 
                             │   33 │   │   llm=llm_gemini,                                                   │ 
                             │   34 │   )                                                                     │ 
                             │                                                                                │ 
                             │.venv\Lib\site-packages\autorag\ │ 
                             │ data\qa\schema.py:140 in batch_apply                                           │ 
                             │                                                                                │ 
                             │   137 │   │   qa_dicts = self.data.to_dict(orient="records")                   │ 
                             │   138 │   │   loop = get_event_loop()                                          │ 
                             │   139 │   │   tasks = [fn(qa_dict, **kwargs) for qa_dict in qa_dicts]          │ 
                             │ ❱ 140 │   │   results = loop.run_until_complete(process_batch(tasks, batch_siz │ 
                             │   141 │   │                                                                    │ 
                             │   142 │   │   # Experimental feature                                           │ 
                             │   143 │   │   if fn.__name__ == "multiple_queries_gen":                        │ 
                             │                                                                                │ 
                             │ C:\Program                                                                     │ 
                             │ Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2288.0_x64__qbz5n2 │ 
                             │ kfra8p0\Lib\asyncio\base_events.py:686 in run_until_complete                   │ 
                             │                                                                                │ 
                             │    683 │   │   if not future.done():                                           │ 
                             │    684 │   │   │   raise RuntimeError('Event loop stopped before Future comple │ 
                             │    685 │   │                                                                   │ 
                             │ ❱  686 │   │   return future.result()                                          │ 
                             │    687 │                                                                       │ 
                             │    688 │   def stop(self):                                                     │ 
                             │    689 │   │   """Stop running the event loop.                                 │ 
                             │                                                                                │ 
                             │\.venv\Lib\site-packages\autorag\ │ 
                             │ utils\util.py:306 in process_batch                                             │ 
                             │                                                                                │ 
                             │   303 │                                                                        │ 
                             │   304 │   for i in range(0, len(tasks), batch_size):                           │ 
                             │   305 │   │   batch = tasks[i : i + batch_size]                                │ 
                             │ ❱ 306 │   │   batch_results = await asyncio.gather(*batch)                     │ 
                             │   307 │   │   results.extend(batch_results)                                    │ 
                             │   308 │                                                                        │ 
                             │   309 │   return results                                                       │ 
                             │                                                                                │ 
                             │ \.venv\Lib\site-packages\autorag\ │ 
                             │ data\qa\query\llama_gen_query.py:30 in factoid_query_gen                       │ 
                             │                                                                                │ 
                             │   27 │   llm: BaseLLM,                                                         │ 
                             │   28 │   lang: str = "en",                                                     │ 
                             │   29 ) -> Dict:                                                                │ 
                             │ ❱ 30 │   return await llama_index_generate_base(                               │ 
                             │   31 │   │   row, llm, QUERY_GEN_PROMPT["factoid_single_hop"][lang]            │ 
                             │   32 │   )                                                                     │ 
                             │   33                                                                           │ 
                             │                                                                                │ 
                             │\.venv\Lib\site-packages\autorag\ │ 
                             │ data\qa\query\llama_gen_query.py:20 in llama_index_generate_base               │ 
                             │                                                                                │ 
                             │   17 │   user_prompt = f"Text:\n{context_str}\n\nGenerated Question from the T │ 
                             │   18 │   user_message = ChatMessage(role=MessageRole.USER, content=user_prompt │ 
                             │   19 │   new_messages = [*messages, user_message]                              │ 
                             │ ❱ 20 │   chat_response: ChatResponse = await llm.achat(messages=new_messages)  │ 
                             │   21 │   row["query"] = chat_response.message.content                          │ 
                             │   22 │   return row                                                            │ 
                             │   23                                                                           │ 
                             │                                                                                │ 
                             │ \.venv\Lib\site-packages\llama_in │ 
                             │ dex\core\instrumentation\dispatcher.py:357 in async_wrapper                    │ 
                             │                                                                                │ 
                             │   354 │   │   │   │   tags=tags,                                               │ 
                             │   355 │   │   │   )                                                            │ 
                             │   356 │   │   │   try:                                                         │ 
                             │ ❱ 357 │   │   │   │   result = await func(*args, **kwargs)                     │ 
                             │   358 │   │   │   except BaseException as e:                                   │ 
                             │   359 │   │   │   │   self.event(SpanDropEvent(span_id=id_, err_str=str(e)))   │ 
                             │   360 │   │   │   │   self.span_drop(id_=id_, bound_args=bound_args, instance= │ 
                             │                                                                                │ 
                             │ \.venv\Lib\site-packages\llama_in │ 
                             │ dex\llms\gemini\base.py:231 in achat                                           │ 
                             │                                                                                │ 
                             │   228 │   ) -> ChatResponse:                                                   │ 
                             │   229 │   │   request_options = self._request_options or kwargs.pop("request_o │ 
                             │   230 │   │   merged_messages = merge_neighboring_same_role_messages(messages) │ 
                             │ ❱ 231 │   │   *history, next_msg = map(chat_message_to_gemini, merged_messages │ 
                             │   232 │   │   chat = self._model.start_chat(history=history)                   │ 
                             │   233 │   │   response = await chat.send_message_async(                        │ 
                             │   234 │   │   │   next_msg, request_options=request_options, **kwargs          │ 
                             │                                                                                │ 
                             │ \.venv\Lib\site-packages\llama_in │ 
                             │ dex\llms\gemini\utils.py:78 in chat_message_to_gemini                          │ 
                             │                                                                                │ 
                             │   75 │   """Convert ChatMessages to Gemini-specific history, including ImageDo │ 
                             │   76 │   parts = []                                                            │ 
                             │   77 │   content_txt = ""                                                      │ 
                             │ ❱ 78 │   for block in message.blocks:                                          │ 
                             │   79 │   │   if isinstance(block, TextBlock):                                  │ 
                             │   80 │   │   │   parts.append(block.text)                                      │ 
                             │   81 │   │   elif isinstance(block, ImageBlock):                               │ 
                             │                                                                                │ 
                             │ \.venv\Lib\site-packages\pydantic │ 
                             │ \main.py:856 in __getattr__                                                    │ 
                             │                                                                                │ 
                             │    853 │   │   │   │   │   │   return super().__getattribute__(item)  # Raises │ 
                             │        if appropriate                                                          │ 
                             │    854 │   │   │   │   │   else:                                               │ 
                             │    855 │   │   │   │   │   │   # this is the current error                     │ 
                             │ ❱  856 │   │   │   │   │   │   raise AttributeError(f'{type(self).__name__!r}  │ 
                             │        attribute {item!r}')                                                    │ 
                             │    857 │   │                                                                   │ 
                             │    858 │   │   def __setattr__(self, name: str, value: Any) -> None:           │ 
                             │    859 │   │   │   if name in self.__class_vars__:                             │ 
                             ╰────────────────────────────────────────────────────────────────────────────────╯ 
                             AttributeError: 'ChatMessage' object has no attribute 'blocks'
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1738569975.757050   23928 init.cc:232] grpc_wait_for_shutdown_with_timeout() timed out.
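
The AttributeError ('ChatMessage' object has no attribute 'blocks') reads like a version mismatch: the blocks attribute on ChatMessage only exists in newer llama-index-core releases, so an older core paired with a newer llama-index-llms-gemini (or the reverse) can fail exactly at utils.py:78. A quick way to check what is actually installed (a diagnostic sketch, not an AutoRAG utility):

import importlib.metadata as md

# Print the installed versions of the packages involved in the traceback,
# to confirm whether core and the Gemini integration are out of sync.
for pkg in ("llama-index-core", "llama-index-llms-gemini", "AutoRAG"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")

If they are out of sync, upgrading both packages together usually realigns them.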
