Firstly, thank you very much for your contribution to the codebase. I am currently working on evaluating LLaVA and have encountered a few issues. I look forward to your guidance.
#476 · Open
LiZhangMing opened this issue on Dec 27, 2024 · 3 comments
Firstly, thank you very much for your contribution to the codebase. I am currently working on evaluating LLaVA and have encountered a few issues. I look forward to your guidance.
For LLaVA, I first performed pretraining, followed by fine-tuning with LoRA. However, during evaluation I ran into the following problems:
How can I load a local dataset, such as datasets--lmms-lab--MME? Below is the command I am using:

CUDA_VISIBLE_DEVICES=6 python3 -m accelerate.commands.launch \
    --num_processes=1 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="local model" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme \
    --output_path ./logs/

Looking forward to your suggestions.
For the local dataset, please check some of the issues listed in the common issues. I believe you can now solve it with HF_DATASETS_OFFLINE.
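In case it helps, a minimal sketch of what that can look like, assuming the MME data is already in your local Hugging Face cache (the cache path below is only a placeholder for your setup):

# Point the Hugging Face cache at wherever datasets--lmms-lab--MME already lives
# (placeholder path) and force the datasets library to stay offline.
export HF_HOME=/path/to/huggingface/cache
export HF_DATASETS_OFFLINE=1
export HF_HUB_OFFLINE=1   # optional: also keep Hub lookups offline

CUDA_VISIBLE_DEVICES=6 python3 -m accelerate.commands.launch \
    --num_processes=1 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="local model" \
    --tasks mme \
    --batch_size 1 \
    --output_path ./logs/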
I am not sure what the correct way to load LoRA weights is, but the logic should be the same: you may need to take a look at the model init and see how to pass in the weights. Feel free to check issues such as "How to evaluate LLaVA-OneVision finetuned with custom dataset?" #241.
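One workaround that is often used (not necessarily the intended way in this codebase) is to merge the LoRA adapter into the base model first and then evaluate the merged checkpoint as a regular model. A rough sketch, assuming your LLaVA checkout ships the usual merge_lora_weights.py helper and with all paths as placeholders; double-check the script name and flags against your local repo:

# Merge the LoRA adapter into its base model (verify script name/flags locally).
python scripts/merge_lora_weights.py \
    --model-path /path/to/llava-lora-checkpoint \
    --model-base /path/to/llava-base-model \
    --save-model-path /path/to/merged-model

# Evaluate the merged checkpoint like any other pretrained model.
CUDA_VISIBLE_DEVICES=6 python3 -m accelerate.commands.launch \
    --num_processes=1 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained=/path/to/merged-model \
    --tasks mme \
    --batch_size 1 \
    --output_path ./logs/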
Thank you very much for your response! Regarding loading offline datasets, I'd like to ask whether there are any hyperparameters that can be configured directly to handle this.
This should be related to the offline mode of Hugging Face datasets. You should try setting the corresponding environment variable and downloading the dataset into the cache first.
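For reference, one way to populate the cache while you still have network access, and then switch to offline mode, could be (assuming the dataset repo id is lmms-lab/MME):

# While online: pull the dataset into the local Hugging Face cache.
huggingface-cli download lmms-lab/MME --repo-type dataset

# Later, offline: set the offline env vars before launching lmms_eval,
# so the cached copy is reused instead of contacting the Hub.
export HF_DATASETS_OFFLINE=1
export HF_HUB_OFFLINE=1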