Multi-GPU Training Giving Different Loss #276

Open
nikhil-ghosh-berkeley opened this issue Nov 16, 2023 · 1 comment

@nikhil-ghosh-berkeley

I am trying to train using a multi-GPU setup with DDP (launching with accelerate launch), but I am noticing that the loss values are significantly different from a single GPU setup with the same effective batch size.

I have attached the eval/loss curves below.

  1. In purple is a single-GPU run with per_device_train_batch_size=16.
  2. In blue is a multi-GPU run with 8 GPUs and per_device_train_batch_size=2 (only trained for a few steps).

All other hyperparameters are the same.

[Screenshot: eval/loss curves for runs (1) and (2), 2023-11-16]

I am wondering why the loss values in (2) seem to be much smaller than in (1). Any suggestions are much appreciated!
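
For reference, here is a minimal sketch of the effective-batch-size arithmetic I am assuming for the two runs. The effective_batch_size helper is just for illustration, and I am assuming gradient_accumulation_steps=1 in both cases:

```python
# Minimal sketch of the effective batch size I expect for each run.
# The helper below is illustrative only; it uses the usual DDP convention that
# one optimizer step consumes num_gpus * per_device_batch * grad_accum examples.

def effective_batch_size(num_gpus: int,
                         per_device_train_batch_size: int,
                         gradient_accumulation_steps: int = 1) -> int:
    """Examples contributing to a single optimizer step under DDP."""
    return num_gpus * per_device_train_batch_size * gradient_accumulation_steps

# Run (1): single GPU, per_device_train_batch_size=16
print(effective_batch_size(1, 16))  # -> 16

# Run (2): 8 GPUs, per_device_train_batch_size=2
print(effective_batch_size(8, 2))   # -> 16
```

So, under that assumption, both runs should see 16 examples per optimizer step, which is why the gap in eval/loss surprises me.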

@giaosudau

Hey @nikhil-ghosh-berkeley,
Can you share the full command you use to launch qlora with multiple GPUs? I am training on 4× A100-40GB GPUs but get an OOM error.
