
[moe] feat: enabling expert parallelism in veScale #59

Merged
merged 6 commits into volcengine:main
Dec 27, 2024

Conversation

@chwan1016 (Contributor) commented Dec 27, 2024

Overview

veScale provides an efficient framework for training Mixture of Experts (MoE) models using expert parallelism. Expert parallelism can be deployed with the parallelize_experts() function, which simplifies the process of distributing experts across devices and managing their workload during MoE training.

Function Signature

model = parallelize_experts(
    module: nn.Module,
    experts_expr: Union[str, List[str]],
    experts_allocator: vescale.moe.ExpertsAllocator,
    token_dispatcher: vescale.moe.TokenDispatcher,
    config: Dict,
)

Parameters

  • module: The training model (an instance of nn.Module) to be parallelized.
  • experts_expr: Specifies the paths to the expert modules. Can be a string or a list of strings.
  • experts_allocator: An instance of ExpertsAllocator, used for managing expert parameter allocation.
  • token_dispatcher: An instance of TokenDispatcher, responsible for token scheduling and distribution.
  • config: A dictionary containing the MoE training configuration, including layer count, number of experts, and other relevant settings (see the example call below).
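
As a concrete illustration, a call might look like the sketch below. The model constructor, the expert-module path, the use of the base allocator/dispatcher classes as stand-ins for the defaults, and the config keys are assumptions for illustration only; the exact names may differ in the veScale examples.

import torch.nn as nn
from vescale.moe import parallelize_experts, ExpertsAllocator, TokenDispatcher

# Build the MoE model to be parallelized (user-defined; assumed here).
model: nn.Module = build_moe_model()

model = parallelize_experts(
    module=model,
    experts_expr="layers.*.mlp.experts",    # path to the expert modules (assumed layout)
    experts_allocator=ExpertsAllocator(),   # stand-in for the default allocator (TP-sharded experts)
    token_dispatcher=TokenDispatcher(),     # stand-in for the default dispatcher (random DP-rank assignment)
    config={
        "num_layers": 32,    # assumed config keys: layer count,
        "num_experts": 8,    # number of experts, and other settings
    },
)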

Custom Scheduling

veScale allows users to define custom scheduling strategies for expert parallelism by implementing the following components:

  • ExpertsAllocator: Manages expert parameter allocation. It can use collect_performance() to profile and dynamically adjust the DP x TP device mesh for each expert. By default, veScale shards all expert parameters across devices using tensor parallelism.

  • TokenDispatcher: Handles token distribution. Using assign_task(), it determines workload allocation (e.g., expert IDs and token weights) and adjusts scheduling with collect_performance(). The default implementation randomly assigns tokens to a single DP rank for the selected expert. A sketch of a custom dispatcher follows this list.
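
For instance, a custom dispatcher could subclass TokenDispatcher roughly as follows. Only the method names assign_task() and collect_performance() come from the description above; the method signatures, the dp_size() helper, and the assignment tuple format are assumptions for illustration.

import random
from vescale.moe import TokenDispatcher

class RandomRankDispatcher(TokenDispatcher):
    """Mimics the described default: each routed token is sent to a single,
    randomly chosen DP rank of its selected expert."""

    def assign_task(self, expert_ids, token_weights):
        # For every routed token, pick one DP rank of its expert at random.
        # The (expert_id, dp_rank, weight) tuple format is an assumption.
        assignments = []
        for expert_id, weight in zip(expert_ids, token_weights):
            dp_rank = random.randrange(self.dp_size(expert_id))  # dp_size() is a hypothetical helper
            assignments.append((expert_id, dp_rank, weight))
        return assignments

    def collect_performance(self, perf_stats):
        # Hook for profiling-driven rebalancing; a no-op in this sketch.
        pass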

Optimizer Support

Since veScale supports dynamic placement of expert parameters, a dedicated optimizer, MoEOptimizer, is required. This optimizer handles the redistribution of expert parameters and their states efficiently.
Future updates will integrate this functionality into the optimizers used for static parameters to streamline the process.
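
A training loop might therefore pair MoEOptimizer with a regular optimizer for the static parameters, roughly as sketched below. The import path, the MoEOptimizer constructor arguments, the ".experts." name filter, and the loss helper are assumptions for illustration, not the exact API.

import torch
from vescale.moe import MoEOptimizer

# Split parameters into expert (dynamically placed) and static groups.
# Filtering by the substring ".experts." is an assumed naming convention.
expert_params = [p for n, p in model.named_parameters() if ".experts." in n]
static_params = [p for n, p in model.named_parameters() if ".experts." not in n]

moe_optimizer = MoEOptimizer(expert_params, lr=3e-4)        # redistributes expert params and their states
base_optimizer = torch.optim.AdamW(static_params, lr=3e-4)  # static params use a standard optimizer

for batch in dataloader:                 # dataloader assumed to exist
    loss = compute_loss(model, batch)    # user-defined loss computation (assumed)
    loss.backward()
    moe_optimizer.step()
    base_optimizer.step()
    moe_optimizer.zero_grad()
    base_optimizer.zero_grad()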

Getting Started

Data Preparation

Prepare the Shakespeare dataset by running:

cd data/shakespeare/
python3 prepare.py
cd ../..

Training Command

torchrun --standalone --nproc_per_node={GPU_CNT} mixtral_train.py --dp={dp_size} --tp={tp_size} --max_iters={max_iters}

@CLAassistant commented Dec 27, 2024

CLA assistant check
All committers have signed the CLA.

@pengyanghua pengyanghua merged commit ac76ffa into volcengine:main Dec 27, 2024
2 checks passed