Have you compared training on the Recap-DataComp dataset against the old LLaVA only? #3
Comments
Thank you for your interest in our work! We didn't fine-tune the LLaMA3-powered LLaVA using our Recap-DataComp-1B. Instead, we used the powerful LLaMA3-powered LLaVA to recaption DataComp-1B, and the resulting dataset is our Recap-DataComp-1B.

Does the dataset only boost the original LLaVA's performance?
Sorry, but I'm a little confused: what do you mean by "boosting LLaVA's original performance"? We didn't use our Recap-DataComp-1B to fine-tune a LLaVA model.
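To make the distinction concrete, here is a minimal sketch of the pipeline the maintainers describe: LLaVA is used to *generate* new captions for DataComp-1B, not trained on them. The `caption_fn` callable is a hypothetical stand-in for the LLaMA3-powered LLaVA captioner (the real pipeline runs model inference per image); the field names are illustrative, not the dataset's actual schema.

```python
def recaption(samples, caption_fn):
    """Replace each sample's original web caption with a model-generated one.

    `samples` is an iterable of dicts with "url" and "caption" keys;
    `caption_fn` maps an image URL to a generated caption (here a stub,
    in the real pipeline a LLaVA forward pass).
    """
    out = []
    for sample in samples:
        out.append({
            "url": sample["url"],
            "original_caption": sample["caption"],  # noisy web caption kept for reference
            "recaption": caption_fn(sample["url"]),  # model-generated replacement
        })
    return out

# Toy usage with a stub captioner:
stub = lambda url: f"a detailed description of {url}"
data = [{"url": "img_001.jpg", "caption": "photo"}]
result = recaption(data, stub)
print(result[0]["recaption"])  # a detailed description of img_001.jpg
```

The point of the sketch: the captioning model's weights never change; only the dataset's text side is rewritten, which is why "fine-tuning LLaVA on Recap-DataComp-1B" is not what the paper did.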
Just want to make sure: can the synthetic data boost performance or not? You should keep the model the same to compare. If you use a new LLM, you should compare against the same officially released llava-llama3 model.
As far as I know, the newest LLaVA with LLaMA3-8B already gets very good results without Recap-DataComp-1B.
Just wondering: how does the dataset contribute to performance without changing the model?