Can I use the --dialog argument for QPEFT in multimodal LLaMA2? #182

Open

scy-v opened this issue Mar 26, 2024 · 1 comment

scy-v commented Mar 26, 2024

Hi! I noticed that in the documentation, the .sh scripts for multi-turn finetuning all use --dialog.
Can I add --dialog to alpacaLlava_llamaQformerv2Peft_QF_13B.sh to do multimodal LLaMA2 QPEFT on image-text multi-turn conversations?

ChrisLiu6 (Collaborator) commented

Using --dialog means using https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/data/conversation/dataset.py as the dataset class instead of the default https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/data/alpaca.py. If your data conforms to the following format:

```json
[
  {
    "conversations": [
      { "from": "human", "value": "some question one" },
      { "from": "gpt", "value": "some answer one" },
      { "from": "human", "value": "some question two" },
      { "from": "gpt", "value": "some answer two" },
      ...
    ],
    "image": "/path/to/image"
  },
  ...
]
```

Then it should work.
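One way to catch format problems early is to sanity-check the JSON file before launching the finetuning run. Below is a minimal sketch of such a check; it is not part of the repo, and the strict human/gpt alternation check and the example file name are assumptions on my part, not requirements confirmed by the maintainers:

```python
import json

def check_dialog_format(path: str) -> None:
    """Sanity-check a data file against the conversation format above."""
    with open(path) as f:
        data = json.load(f)
    assert isinstance(data, list), "top level must be a list of records"
    for i, record in enumerate(data):
        turns = record.get("conversations")
        assert isinstance(turns, list) and turns, f"record {i}: missing 'conversations'"
        for j, turn in enumerate(turns):
            # Assumption: turns alternate human -> gpt, starting with human.
            expected = "human" if j % 2 == 0 else "gpt"
            assert turn.get("from") == expected, \
                f"record {i}, turn {j}: expected 'from' == '{expected}'"
            assert isinstance(turn.get("value"), str), \
                f"record {i}, turn {j}: 'value' must be a string"
        # For image-text data, each record should carry an image path.
        assert isinstance(record.get("image"), str), f"record {i}: missing 'image' path"
    print(f"{path}: {len(data)} records look OK")

check_dialog_format("my_multiturn_data.json")  # hypothetical file name
```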
