
Conversation


j1young commented Sep 26, 2025

Motivation

Apply FSDP2 to the offline training pipeline.

On H100x4 GPUs, training throughput improves by roughly 1.4× compared to the previous baseline.

The following command was used for testing:

torchrun \
    --standalone \
    --nproc_per_node 4 \
    ./scripts/train_eagle3_offline.py \
    --target-model-path lmsys/gpt-oss-120b-bf16 \
    --draft-model-config ./configs/gpt-oss-120B-eagle3.json \
    --train-data-path ${TRAIN_DATA_PATH} \
    --train-hidden-states-path ${TRAIN_HS_PATH} \
    --output-dir ${OUTPUT_DIR} \
    --draft-global-batch-size 16 \
    --draft-micro-batch-size 1 \
    --num-epochs 10 \
    --learning-rate 1e-4 \
    --max-length 2048 \
    --chat-template gpt-oss

Modifications

Related Issues

Accuracy Test

Benchmark & Profiling

Checklist

gemini-code-assist bot commented

Summary of Changes

Hello @j1young, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the offline training infrastructure by integrating PyTorch's FSDP2. The core changes involve migrating to the latest FSDP API for improved distributed training efficiency and carefully adapting the state saving mechanisms to properly handle distributed tensors, ensuring robust checkpointing and model persistence.

Highlights

  • Performance Improvement: This pull request applies FSDP2 to the offline training pipeline, resulting in an approximate 1.4x improvement in training throughput on H100x4 GPUs compared to the previous baseline.
  • FSDP2 Integration: The FSDP implementation has been updated from the older FullyShardedDataParallel API to the newer fully_shard function, utilizing MixedPrecisionPolicy for mixed-precision training (see the sketch after these highlights).
  • Distributed Device Mesh: A new get_dp_device_mesh function has been introduced and integrated to manage and retrieve the distributed data parallel device mesh, which is crucial for the new FSDP2 API.
  • State Saving Enhancements: The logic for saving model and optimizer states has been refined to correctly handle DTensor objects, ensuring they are converted to full tensors before saving to prevent data loss or corruption in distributed environments.
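
As a rough illustration of the highlights above, the following is a minimal sketch (not the PR's actual code) of FSDP2-style sharding: fully_shard applied per transformer block and then to the root module, with a MixedPrecisionPolicy and a 1-D data-parallel device mesh. It assumes a recent PyTorch that exports fully_shard from torch.distributed.fsdp; the get_dp_device_mesh helper shown here, the bf16/fp32 dtype choices, and the model.layers attribute are illustrative assumptions.

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard


def get_dp_device_mesh(dp_size: int):
    # Hypothetical helper in the spirit of the one described above: a 1-D
    # device mesh whose single dimension is data parallelism.
    return init_device_mesh("cuda", (dp_size,), mesh_dim_names=("dp",))


def shard_draft_model(model: nn.Module, dp_size: int) -> nn.Module:
    mesh = get_dp_device_mesh(dp_size)
    mp_policy = MixedPrecisionPolicy(
        param_dtype=torch.bfloat16,  # parameters and activations in bf16
        reduce_dtype=torch.float32,  # gradient reduction in fp32
    )
    # Shard each transformer block first, then the root module, following the
    # usual fully_shard pattern.
    for block in model.layers:  # assumes the draft model exposes .layers
        fully_shard(block, mesh=mesh, mp_policy=mp_policy)
    fully_shard(model, mesh=mesh, mp_policy=mp_policy)
    return model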


gemini-code-assist bot left a comment


Code Review

This pull request migrates the offline training pipeline to use FSDP2, which results in a significant performance improvement. The changes correctly adapt the code to the new fully_shard API, including updates to model wrapping, checkpointing logic for sharded model and optimizer states, and necessary additions to distributed utilities. My review focuses on improving the maintainability of the new checkpointing logic. I've suggested refactoring a couple of areas to use more concise, Pythonic constructs like dictionary comprehensions, which will make the code easier to read and maintain.
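
For reference, the comprehension-style cleanup suggested here could look like the sketch below, assuming the checkpoint path gathers a full (unsharded) state dict by converting DTensor values with full_tensor() before rank 0 writes it to disk. The function name and structure are illustrative, not the PR's actual code, and it assumes a recent PyTorch that exports DTensor from torch.distributed.tensor.

from torch.distributed.tensor import DTensor


def to_full_state_dict(state_dict: dict) -> dict:
    # DTensor.full_tensor() all-gathers the shards into a regular torch.Tensor;
    # values that are already plain tensors pass through unchanged.
    return {
        key: value.full_tensor() if isinstance(value, DTensor) else value
        for key, value in state_dict.items()
    }

Rank 0 could then call torch.save(to_full_state_dict(model.state_dict()), path) while the other ranks skip the write, since every rank participates in the all-gather inside full_tensor().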

sleepcoo requested a review from zyksir September 28, 2025 09:12
j1young and others added 2 commits September 29, 2025 11:44
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>