Hi! I had a few questions regarding the warmup schedule when changing the number of training tokens, as done in the GPT-3 experiments in your work.
For the GPT-3 sweeps, is the batch size kept the same between the proxy model and target model?
For the 40M proxy model, which was trained for 4B and 16B tokens (compared to the 300B tokens for the full 6.7B-parameter model), is the warmup period set as a proportion of the total training steps (e.g., 1% of training steps) or as an absolute number of tokens (e.g., 1B tokens)?
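To make the distinction concrete, here's a rough sketch of the two conventions I have in mind (this is not from the repo; names like `warmup_frac`, `warmup_tokens`, and the batch size are just placeholders, and the 375M-token figure is the warmup length reported in the GPT-3 paper):

```python
def warmup_steps_proportional(total_tokens: int, batch_tokens: int,
                              warmup_frac: float = 0.01) -> int:
    """Warmup scales with run length, e.g. 1% of total optimizer steps."""
    total_steps = total_tokens // batch_tokens
    return int(total_steps * warmup_frac)


def warmup_steps_absolute(warmup_tokens: int, batch_tokens: int) -> int:
    """Warmup is a fixed token budget, independent of total run length."""
    return warmup_tokens // batch_tokens


if __name__ == "__main__":
    batch_tokens = 524_288  # hypothetical ~0.5M tokens per batch, for illustration only
    for total_tokens in (4_000_000_000, 16_000_000_000, 300_000_000_000):
        prop = warmup_steps_proportional(total_tokens, batch_tokens)
        absolute = warmup_steps_absolute(375_000_000, batch_tokens)  # GPT-3's reported 375M warmup tokens
        print(f"{total_tokens / 1e9:>5.0f}B tokens -> "
              f"proportional warmup: {prop} steps, absolute warmup: {absolute} steps")
```

Under the proportional convention the 4B- and 16B-token proxy runs would warm up much faster (in steps) than the 300B-token target run, whereas the absolute convention keeps the warmup identical across runs, so I'd like to confirm which one was used.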