is_local_main_process -> is_main_process in finetune.py #975
Conversation
Summary of Changes
Hello @jacob-morrison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical race condition in the checkpoint cleanup mechanism during multinode training. By transitioning from a local main process check to a global main process check, it centralizes the responsibility for managing and cleaning up training checkpoints, thereby enhancing the robustness and reliability of the training pipeline, especially in distributed setups.
Highlights
- Checkpoint Cleanup Logic: Changed accelerator.is_local_main_process to accelerator.is_main_process in finetune.py to ensure checkpoint cleanup is handled by the global main process.
- Race Condition Resolution: This modification resolves a race condition that occurred during checkpoint cleanup in multinode training environments, preventing potential issues with incomplete or corrupted checkpoints.
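To make the highlighted distinction concrete, here is a minimal sketch of how the two flags differ in a Hugging Face `accelerate` setup (not code from this PR): `is_local_main_process` is true on the first process of each node, while `is_main_process` is true only on global rank 0, which is why cleanup on a shared filesystem should be gated on the latter.

```python
# Minimal sketch, assuming a standard Hugging Face `accelerate` setup; not code from finetune.py.
from accelerate import Accelerator

accelerator = Accelerator()

# True on local rank 0 of *every* node: with a shared filesystem,
# one process per node would run checkpoint cleanup concurrently.
if accelerator.is_local_main_process:
    print("local main process on this node")

# True only on global rank 0 of the whole job: exactly one process
# touches the shared checkpoint directory.
if accelerator.is_main_process:
    print("global main process for the entire run")
```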
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback on your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Code Review
This pull request correctly addresses a race condition in multi-node training by changing `accelerator.is_local_main_process` to `accelerator.is_main_process` for checkpoint cleanup operations. This ensures that only the global main process handles file system modifications on a shared file system. However, my review identified a related critical race condition. The logic for creating the `COMPLETED` file after saving a checkpoint is still executed by all processes, which can lead to file system issues. I've provided comments with suggestions to move this logic inside the `if accelerator.is_main_process:` block to fully resolve the race conditions during checkpointing.
```diff
 ) as f:
     f.write("COMPLETED")  # annoyingly, empty files arent uploaded by beaker.
-if accelerator.is_local_main_process:  # TODO: in mason local model this is gonna error out if using something like output/test; because mason used the same shared file ssytem.
+if accelerator.is_main_process:  # TODO: in mason local model this is gonna error out if using something like output/test; because mason used the same shared file ssytem.
```
While this change to `is_main_process` is correct for handling checkpoint cleanup in a multi-node environment, a related race condition still exists. The preceding lines (955-958) that write the `COMPLETED` file are executed by all processes, not just the main one. This can cause file system contention and is inefficient.
To fully resolve the race conditions during checkpointing, this file-writing logic should also be restricted to the main process. Additionally, the `TODO` comment is now obsolete with this fix and can be removed.
I recommend the following structure:
```python
if accelerator.is_main_process:
    # use this to mark the checkpoint as completely saved, to avoid restoring from garbled checkpoints
    with open(
        os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w"
    ) as f:
        f.write("COMPLETED")  # annoyingly, empty files arent uploaded by beaker.
    clean_last_n_checkpoints(args.output_dir, args.keep_last_n_checkpoints)
```
This ensures both operations are performed atomically by a single process.
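As a complementary, hypothetical sketch (not part of this PR or the bot's suggestion), one way to keep the other ranks from racing ahead while rank 0 writes the marker and prunes checkpoints is an explicit barrier via `accelerator.wait_for_everyone()`; whether the surrounding finetune.py code already synchronizes at this point is not visible in the excerpts quoted here. The helper names below are the ones used in the snippets above.

```python
# Hypothetical variant, reusing the helpers quoted above (get_last_checkpoint_path,
# clean_last_n_checkpoints, args): only global rank 0 touches the shared filesystem,
# and an explicit barrier keeps all ranks in step before training resumes.
if accelerator.is_main_process:
    with open(os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w") as f:
        f.write("COMPLETED")
    clean_last_n_checkpoints(args.output_dir, args.keep_last_n_checkpoints)
accelerator.wait_for_everyone()  # barrier so non-main ranks wait for the cleanup to finish
```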
```diff
 with open(os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w") as f:
     f.write("COMPLETED")  # annoyingly, empty files arent uploaded by beaker.
-if accelerator.is_local_main_process:
+if accelerator.is_main_process:
```
Similar to my other comment, there's a race condition here. The code that writes the `COMPLETED` file (lines 972-973) is executed by all processes, which should be avoided.
To prevent potential file system issues, this logic should be moved inside the `if accelerator.is_main_process:` block.
Here is the recommended change:
```python
if accelerator.is_main_process:
    # use this to mark the checkpoint as completely saved, to avoid restoring from garbled checkpoints
    with open(os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w") as f:
        f.write("COMPLETED")  # annoyingly, empty files arent uploaded by beaker.
    clean_last_n_checkpoints(args.output_dir, args.keep_last_n_checkpoints)
```
Looks good, one minor comment. Also, it would be good to link a beaker job with a multi-node run (can be a small job) that works correctly.
```python
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
# use this to mark the checkpoint as completely saved, to avoid restoring from garbled checkpoints
with open(os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w") as f:
```
I agree with gemini-bot here; we should probably put this `COMPLETED` write under the main process too.
```python
accelerator.save_state(output_dir)
# use this to mark the checkpoint as completely saved, to avoid restoring from garbled checkpoints
with open(
    os.path.join(get_last_checkpoint_path(args, incomplete=True), "COMPLETED"), "w"
```
I agree with gemini-bot here; we should probably put this `COMPLETED` write under the main process too.
This resolves a race condition when cleaning up checkpoints during multinode training; just let the main process handle it.