Add SFT trainer and sft task #284
Conversation
Hmm, very strange. The latest SFT run (https://github.com/AI-Hypercomputer/torchprime/actions/runs/15501248395/job/43649488239?pr=284) is taking forever to finish. It looks like it is compiling many graphs. In the past I've seen this happen when there are unexpected transfers from the TPU to the CPU (e.g. printing or calling
@tengyifei The error is due to the model export at the end (so it aligns with the TPU-to-CPU transfers you mentioned). I don't know why; it seemed to work with a TPU VM. Any idea how to debug this?
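One way to confirm where the transfers and recompilations come from is torch_xla's metrics report. A minimal sketch, assuming the `torch_xla.debug.metrics` API; which counters matter will vary per run:

```python
import torch_xla.debug.metrics as met

# After a few training steps, dump the metrics report:
# - the CompileTime metric shows how many graphs were compiled,
# - aten::_local_scalar_dense counters indicate device-to-host transfers
#   (e.g. printing a tensor value or calling .item()).
print(met.metrics_report())
```

Running with `PT_XLA_DEBUG=1` is also worth trying; it reports the cause of each compilation/transfer as it happens.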
I get a lot of notifications for this PR and it's also getting large. Can you work on smaller PRs that chain towards a bigger change, and ask for more intermediate reviews? With unit tests, it's pretty easy to introduce small PRs that are 100 lines of code. https://google.github.io/eng-practices/review/developer/small-cls.html
Please create a PR when ready for review, and make the PRs small - 100 lines of code is a good target. A PR that lives for days with 18 commits (not addressing comments) is getting too large. |
@@ -66,9 +69,13 @@ def analyze_step_duration_from_pb(xspace: XSpace) -> float:

   # Confirm we have exactly one unique event name
   if len(unique_names) > 1:
-    raise ValueError(f"Ambiguous event names found in XSpace: {unique_names}")
+    logger.warning(
+      f"Multiple event names found in XSpace: {unique_names}.\n"
Is this workaround still required after #302?
Yes, sometimes it still recompiles. I assume it is the same root cause as #260. Do we want to just let it fail?
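For reference, a rough sketch of what the relaxed check could look like; the helper name and the first-name fallback are illustrative assumptions, not necessarily this PR's exact code:

```python
import logging

logger = logging.getLogger(__name__)

def resolve_step_event_name(unique_names: set[str]) -> str:
  """Pick a single step-event name, warning instead of failing on ambiguity."""
  if len(unique_names) > 1:
    logger.warning(
      f"Multiple event names found in XSpace: {unique_names}.\n"
      "This usually indicates recompilation; using the first name."
    )
  return sorted(unique_names)[0]
```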
xm.mark_step()
xm.wait_device_ops()

# Ensure a torch.distributed PG exists (once per host)
Can we link to the SPMD distributed checkpoint docs that mention such a requirement?
I think it has nothing to do with SPMD; rather, because we use torch.distributed.checkpoint for saving, we need a torch.distributed process group. That is my understanding.
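For context, a sketch following the pattern from PyTorch/XLA's SPMD distributed checkpointing guide; `CHECKPOINT_DIR` and `model` are placeholders, and exact API names may differ across torch/torch_xla versions (older PyTorch uses `save_state_dict` instead of `save`):

```python
import torch.distributed as dist
import torch.distributed.checkpoint as dist_cp
import torch_xla.experimental.distributed_checkpoint as xc
import torch_xla.runtime as xr

xr.use_spmd()

# torch.distributed.checkpoint coordinates via a process group even when a
# single process per host drives all local devices, so create one once per host.
# The xla:// init_method is provided by torch_xla.
if not dist.is_initialized():
  dist.init_process_group(backend="gloo", init_method="xla://")

state_dict = {"model": model.state_dict()}  # `model` assumed defined earlier
dist_cp.save(
  state_dict=state_dict,
  storage_writer=dist_cp.FileSystemWriter(CHECKPOINT_DIR),  # placeholder path
  planner=xc.SPMDSavePlanner(),
)
```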
Enable SFT, specifically:
GSM8k training: