Issues: haotian-liu/LLaVA
- [Question] Model parameters during finetuning (prints only mm_projector parameters) (#1784, opened Nov 26, 2024 by daulettoibazar)
- [Usage] Batch evaluation using sglang doesn't support the llava-v1.5 model (#1777, opened Nov 23, 2024 by pspdada)
- [Question] Where can I obtain the training dataset of LLaVA 1.5? (#1776, opened Nov 20, 2024 by weiaicunzai)
- [Usage] json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1559608 column 82 (char 39444904) (#1775, opened Nov 18, 2024 by tianke0711)
- [Usage] Issue encountered when fine-tuning llava_mistral1.6 using LoRA (#1772, opened Nov 16, 2024 by yuwang4321)
- [Question] Cannot reproduce LLaVA 1.5 performance on ScienceQA (#1770, opened Nov 15, 2024 by yiwei-chenn)
- [Usage] Inference speed issue with LoRA fine-tuned model on ScienceQA (#1763, opened Nov 12, 2024 by jinghanSunn)
- [Usage] Training process gets stuck in the last iteration of the instruction finetuning phase (#1759, opened Nov 6, 2024 by fmy7834)
- [Question] Is it possible to extract the latent representation of the image input from the model? (#1758, opened Nov 5, 2024 by Tizzzzy)