[FIX] Minor errors in gemini_api.py and internvl2.py. #502

Merged · 2 commits · Jan 17, 2025
1 change: 1 addition & 0 deletions lmms_eval/models/gemini_api.py
@@ -52,6 +52,7 @@ def __init__(
         self.timeout = timeout
         self.model = genai.GenerativeModel(model_version)
         self.continual_mode = continual_mode
+        self.response_persistent_file = ""
         self.interleave = interleave
         # if self.continual_mode and response_persistent_folder is None:
         #     raise ValueError("Continual mode requires a persistent path for the response. We will cache the Gemini API response in this path and use it for future requests. Please provide a valid path.")
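A hedged reading of this one-line addition (not a rationale stated in the PR): initializing self.response_persistent_file to an empty string in __init__ guarantees the attribute exists on every instance, even when continual mode is disabled or no persistent folder is configured, so any later lookup of it cannot raise AttributeError. A minimal, self-contained sketch of the pattern; the class and method names below are hypothetical, not the lmms_eval API:

```python
import json
from typing import Optional


class CachingClient:
    """Illustrative stand-in for a client that optionally caches responses."""

    def __init__(self, continual_mode: bool = False, persistent_folder: Optional[str] = None):
        self.continual_mode = continual_mode
        # Always create the attribute, even when caching is disabled,
        # so later reads can never raise AttributeError.
        self.response_persistent_file = ""
        if self.continual_mode and persistent_folder is not None:
            self.response_persistent_file = f"{persistent_folder}/responses.json"

    def maybe_cache(self, response: str) -> None:
        # Safe on every code path: the attribute always exists.
        if self.continual_mode and self.response_persistent_file:
            with open(self.response_persistent_file, "w") as f:
                json.dump({"response": response}, f)


client = CachingClient(continual_mode=False)
client.maybe_cache("hello")  # no AttributeError even though caching is off
```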
2 changes: 1 addition & 1 deletion lmms_eval/models/internvl2.py
@@ -312,7 +312,7 @@ def generate_until(self, requests) -> List[str]:
                     contexts = image_tokens + "\n" + contexts
                 else:
                     pixel_values = None
-                    num_patch_list = None
+                    num_patches_list = None
                 response, history = self.model.chat(self.tokenizer, pixel_values, contexts, gen_kwargs, num_patches_list=num_patches_list, history=None, return_history=True)
             elif self.modality == "video":
                 assert len(visuals) == 1, f"Only one video is supported, but got {len(visuals)} videos."
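A hedged note on the second fix: the else branch previously assigned num_patch_list, while the subsequent self.model.chat(...) call reads num_patches_list, so the no-visuals path would typically hit a NameError (or silently reuse a stale value left over from an earlier request). The rename makes both branches define the name the call actually uses. An illustrative, self-contained sketch of that shape; the function and variable contents are stand-ins, not the lmms_eval code:

```python
def build_chat_kwargs(visuals):
    """Illustrative only: mirrors the if/else shape around self.model.chat()."""
    if visuals:
        pixel_values = list(visuals)              # stand-in for real image preprocessing
        num_patches_list = [1 for _ in pixel_values]
    else:
        pixel_values = None
        num_patches_list = None  # previously `num_patch_list`, leaving the name below undefined
    # Before the rename, this reference raised NameError on a text-only request.
    return {"pixel_values": pixel_values, "num_patches_list": num_patches_list}


print(build_chat_kwargs([]))  # {'pixel_values': None, 'num_patches_list': None}
```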