
Update README.md
eduand-alvarez authored Mar 12, 2024
1 parent 0cf7c6e commit ad7d664
Showing 1 changed file with 1 addition and 1 deletion: distributed-training/stable_diffusion/README.md
@@ -339,7 +339,7 @@ Option 2: Copy the contents of the hugging face accelerate cache to the other no
Finally, it's time to run the fine-tuning process on the multi-CPU setup. Make sure you are connected to your main machine (rank 0) and in the "./stable_diffusion/" directory. Run the following command to launch distributed training:
```bash
-mpirun -f ./hosts -n 3 -ppn 1 accelerate launch textual_inversion_icom.py --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" --train_data_dir="./dicoo/" --learnable_property="object" --placeholder_token="<dicoo>" --initializer_token="toy" --resolution=512 --train_batch_size=1 --seed=7 --gradient_accumulation_steps=1 --max_train_steps=30 --learning_rate=2.0e-03 --scale_lr --lr_scheduler="constant" --lr_warmup_steps=3 --output_dir=./textual_inversion_output --mixed_precision bf16 --save_as_full_pipeline
+mpirun -f ./hosts -n 3 -ppn 1 accelerate launch textual_inversion_icom.py --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" --train_data_dir="./dicoo/" --learnable_property="object" --placeholder_token="<dicoo>" --initializer_token="toy" --resolution=512 --train_batch_size=3 --seed=7 --gradient_accumulation_steps=1 --max_train_steps=30 --learning_rate=2.0e-03 --scale_lr --lr_scheduler="constant" --lr_warmup_steps=3 --output_dir=./textual_inversion_output --mixed_precision bf16 --save_as_full_pipeline
```
Some notes on the arguments for `mpirun` to consider:
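As a sketch, these flags follow the standard MPICH/Intel MPI Hydra launcher semantics: `-f ./hosts` points `mpirun` at a hostfile listing the participating machines, `-n 3` launches three processes in total, and `-ppn 1` places one process per node, so each of the three machines runs a single `accelerate launch` worker. A minimal `./hosts` file for this setup might look like the following, where the hostnames are hypothetical placeholders for your own nodes (IP addresses work as well):

```
node-0
node-1
node-2
```

The main machine should be listed first so that it is assigned rank 0, matching the instruction above to run the command from that node.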