fix readme cmds for clip-roberta #1603

Open · wants to merge 2 commits into base `v1.15-release`

Changes from 1 commit
92 changes: 50 additions & 42 deletions examples/contrastive-image-text/README.md
@@ -128,33 +128,34 @@ PT_HPU_LAZY_MODE=0 python run_clip.py \
Run the following command for distributed training:

```diff
-PT_HPU_LAZY_MODE=0 \
-python ../gaudi_spawn.py --world_size 8 --use_mpi run_clip.py \
-    --output_dir ./clip-roberta-finetuned \
-    --model_name_or_path ./clip-roberta \
-    --data_dir $PWD/data \
-    --dataset_name ydshieh/coco_dataset_script \
-    --dataset_config_name=2017 \
-    --image_column image_path \
-    --caption_column caption \
-    --remove_unused_columns=False \
-    --do_train --do_eval \
-    --per_device_train_batch_size="512" \
+PT_HPU_LAZY_MODE=0 PT_ENABLE_INT64_SUPPORT=1 \
+python3 ../gaudi_spawn.py --world_size 8 --use_mpi run_clip.py \
+    --output_dir=/tmp/clip_roberta \
+    --model_name_or_path=./clip-roberta \
+    --data_dir=./data \
```

> **Collaborator** (on `--data_dir=./data`): should we change this back?

```diff
+    --dataset_name="ydshieh/coco_dataset_script" \
+    --dataset_config_name="2017" \
+    --image_column="image_path" \
+    --caption_column="caption" \
```

> **Collaborator**: no need to change 136-139

```diff
+    --remove_unused_columns="False" \
```

> **Collaborator**: no need to change 140

```diff
+    --do_train --do_eval \
+    --mediapipe_dataloader \
+    --per_device_train_batch_size="64" \
     --per_device_eval_batch_size="64" \
-    --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
+    --learning_rate="5e-5" --warmup_steps="0" --weight_decay="0.1" \
     --overwrite_output_dir \
-    --save_strategy epoch \
     --use_habana \
-    --gaudi_config_name Habana/clip \
-    --throughput_warmup_steps 3 \
-    --dataloader_num_workers 16 \
-    --mediapipe_dataloader \
-    --bf16 \
-    --sdp_on_bf16 \
-    --distribution_strategy fast_ddp \
-    --trust_remote_code \
+    --use_lazy_mode=False \
+    --gaudi_config_name="Habana/clip" \
+    --throughput_warmup_steps=30 \
+    --save_strategy="no" \
+    --dataloader_num_workers=2 \
+    --use_hpu_graphs \
+    --max_steps=100 \
     --torch_compile_backend=hpu_backend \
-    --torch_compile
+    --torch_compile \
+    --logging_nan_inf_filter \
+    --trust_remote_code
```

> `--mediapipe_dataloader` only works on Gaudi2.
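The command above assumes COCO 2017 has already been downloaded into the directory passed via `--data_dir`. As a hedged sketch only (exactly which archives `ydshieh/coco_dataset_script` expects is an assumption here, not something this diff specifies), preparing `./data` could look like:

```bash
# Hypothetical data preparation, not part of this PR: fetch COCO 2017
# images and annotations into ./data for ydshieh/coco_dataset_script.
mkdir -p data && cd data
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
cd ..
```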
@@ -165,29 +166,36 @@ python ../gaudi_spawn.py --world_size 8 --use_mpi run_clip.py \
Run the following command for training with DeepSpeed:

```diff
-PT_HPU_LAZY_MODE=0 \
-python ../gaudi_spawn.py --world_size 8 --use_deepspeed run_clip.py \
-    --output_dir ./clip-roberta-finetuned \
-    --model_name_or_path ./clip-roberta \
-    --data_dir $PWD/data \
-    --dataset_name ydshieh/coco_dataset_script \
-    --dataset_config_name=2017 \
-    --image_column image_path \
-    --caption_column caption \
-    --remove_unused_columns=False \
-    --do_train --do_eval \
-    --per_device_train_batch_size="512" \
+PT_HPU_LAZY_MODE=0 PT_ENABLE_INT64_SUPPORT=1 \
+python3 ../gaudi_spawn.py --world_size 8 --use_deepspeed run_clip.py \
+    --output_dir=/tmp/clip_roberta \
+    --model_name_or_path=./clip-roberta \
+    --data_dir=./data \
+    --dataset_name="ydshieh/coco_dataset_script" \
+    --dataset_config_name="2017" \
+    --image_column="image_path" \
+    --caption_column="caption" \
+    --remove_unused_columns="False" \
+    --do_train --do_eval \
+    --mediapipe_dataloader \
+    --per_device_train_batch_size="64" \
     --per_device_eval_batch_size="64" \
-    --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
+    --learning_rate="5e-5" --warmup_steps="0" --weight_decay="0.1" \
     --overwrite_output_dir \
-    --save_strategy epoch \
     --use_habana \
-    --gaudi_config_name Habana/clip \
-    --throughput_warmup_steps 3 \
-    --deepspeed path_to_my_deepspeed_config \
-    --trust_remote_code \
+    --use_lazy_mode=False \
+    --gaudi_config_name="Habana/clip" \
+    --throughput_warmup_steps=30 \
+    --save_strategy="no" \
+    --dataloader_num_workers=2 \
+    --use_hpu_graphs \
+    --max_steps=100 \
     --torch_compile_backend=hpu_backend \
-    --torch_compile
+    --torch_compile \
+    --logging_nan_inf_filter \
+    --trust_remote_code \
+    --deepspeed <path_to_my_deepspeed_config>
```

You can look at the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/deepspeed) for more information about how to use DeepSpeed in Optimum Habana.
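The `--deepspeed` flag expects the path to a DeepSpeed JSON config; the placeholder `<path_to_my_deepspeed_config>` is deliberately left for you to fill in. As an illustration only (the field values below are assumptions, not a config shipped with this PR), a minimal bf16, ZeRO stage-1 config could be created and passed like this:

```bash
# Illustrative sketch, not part of this PR: write a minimal DeepSpeed
# config, then substitute its path for <path_to_my_deepspeed_config>.
cat > ds_config.json << 'EOF'
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": { "stage": 1 }
}
EOF
```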