WIP Multimodal support #1930

Open · wants to merge 20 commits into base: main · changes from all commits
10 changes: 10 additions & 0 deletions docs/config.qmd
@@ -31,7 +31,17 @@ tokenizer_legacy:
# This is reported to improve training speed on some models
resize_token_embeddings_to_32x:

+# Optional processor configuration path in case you want to use a different processor
+# than the one defined in the base model
+processor_config:
+# Corresponding processor class for the model; AutoProcessor is a good choice
+processor_type: AutoProcessor
+
+
+# (Internal use only)
+# To identify whether the model is multimodal
+is_multimodal:

# Used to identify which model family the model is based on
is_falcon_derived_model:
is_llama_derived_model:
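For orientation, the new `processor_type: AutoProcessor` option points at the Hugging Face `AutoProcessor` class, which resolves the correct processor for a multimodal checkpoint. A minimal sketch of that resolution, using the model from the example config below (illustrative only, not axolotl's internal code):

```python
# Illustrative sketch: what AutoProcessor resolves for a multimodal checkpoint.
from transformers import AutoProcessor

# For meta-llama/Llama-3.2-11B-Vision this returns a processor that bundles
# the image processor and the tokenizer behind a single __call__.
processor = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision")
```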
2 changes: 1 addition & 1 deletion docs/input_output.qmd
@@ -205,7 +205,7 @@ ds = load_from_disk(f'last_run_prepared/{directory[0]}/')
hi there!. goodbye farewell</s>
```

-We can check that the right tokens are ingored by comparing the labels
+We can check that the right tokens are ignored by comparing the labels
to each token:

```python
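# A hedged sketch of the check described above, assuming the `ds` dataset and
# `tokenizer` loaded earlier in this doc (the file's full example is truncated
# in this diff view). Tokens whose label is -100 are ignored by the loss.
row = ds[0]
for token_id, label in zip(row["input_ids"], row["labels"]):
    print(tokenizer.decode([token_id]), "ignored" if label == -100 else label)
```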
64 changes: 64 additions & 0 deletions examples/mllama/ft_test.yaml
@@ -0,0 +1,64 @@
base_model: meta-llama/Llama-3.2-11B-Vision

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: HuggingFaceH4/llava-instruct-mix-vsft
    type: llava
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out

sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save:
  - embed_tokens
  - lm_head

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
s2_attention: false
sdp_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>
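As an aside on the adapter block above: these settings map roughly onto peft's `LoraConfig`, which axolotl constructs internally. A rough, illustrative mapping (the names and the `all-linear` shorthand are assumptions for orientation, not axolotl's code):

```python
# Rough, illustrative mapping of the YAML adapter settings onto peft's LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                         # lora_r
    lora_alpha=16,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # approximate analogue of lora_target_linear: true
    modules_to_save=["embed_tokens", "lm_head"],  # lora_modules_to_save
)
```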
8 changes: 6 additions & 2 deletions src/axolotl/cli/__init__.py
@@ -39,7 +39,7 @@
from axolotl.utils.dict import DictDefault
from axolotl.utils.distributed import is_main_process
from axolotl.utils.mlflow_ import setup_mlflow_env_vars
-from axolotl.utils.models import load_tokenizer
+from axolotl.utils.models import load_processor, load_tokenizer
from axolotl.utils.tokenization import check_dataset_labels
from axolotl.utils.trainer import prepare_opinionated_env, prepare_optim_env
from axolotl.utils.wandb_ import setup_wandb_env_vars
@@ -407,9 +407,13 @@ def load_datasets(
    cli_args: TrainerCliArgs,
) -> TrainDatasetMeta:
    tokenizer = load_tokenizer(cfg)
+    tokenizer_processor = tokenizer
+    if cfg.is_multimodal:
+        processor = load_processor(cfg, tokenizer)
+        tokenizer_processor = processor

    train_dataset, eval_dataset, total_num_steps, prompters = prepare_dataset(
-        cfg, tokenizer
+        cfg, tokenizer_processor
    )

    if cli_args.debug or cfg.debug:
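The net effect of this change: when `cfg.is_multimodal` is set, dataset preparation receives the model's processor (which wraps the tokenizer together with the image processor) rather than the bare tokenizer. A hedged sketch of what `load_processor` presumably does, assuming it defers to `AutoProcessor` per the new `processor_type` option (the PR's actual implementation is not shown in this diff):

```python
# Illustrative sketch only, not the PR's load_processor implementation.
from transformers import AutoProcessor

def load_processor_sketch(cfg, tokenizer):
    # Hypothetical fallback: use processor_config if set, else the base model.
    processor = AutoProcessor.from_pretrained(cfg.processor_config or cfg.base_model)
    processor.tokenizer = tokenizer  # reuse the already-customized tokenizer
    return processor
```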