Enabled Qwen2-MoE Tensor Parallelism (TP) inference (#6551)
Modified _replace_module in auto_tp.py:
The modification keeps the 'shared_expert_gate' and 'gate' layers in
qwen2-moe as their original type, torch.nn.Linear, instead of converting them
into LinearLayer. This way their weights are not split across multiple
HPU/GPU cards, and qwen2-moe can run on multiple HPU/GPU cards.
Since the 'gate' weights are not sharded across cards, no all-gather
operations are needed for them, which may improve performance.
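For intuition on why a replicated gate removes the need for an all-gather, here is a minimal single-process sketch (the layer sizes are made up and the actual sharding is performed by DeepSpeed, not shown here):

import torch

hidden_size, num_experts, tp_size = 64, 8, 2
x = torch.randn(1, hidden_size)

# Replicated gate (what this change preserves): every card holds the full
# [num_experts, hidden_size] weight, so each card computes complete routing
# logits locally and no all-gather is required before expert selection.
gate = torch.nn.Linear(hidden_size, num_experts, bias=False)
full_logits = gate(x)            # shape: [1, num_experts]

# Column-sharded gate (what a LinearLayer replacement would give): each card
# only produces num_experts // tp_size logits, so the cards would have to
# all-gather the partial logits before the router could pick top-k experts.
gate_shard = torch.nn.Linear(hidden_size, num_experts // tp_size, bias=False)
partial_logits = gate_shard(x)   # shape: [1, num_experts // tp_size]

print(full_logits.shape, partial_logits.shape)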

---------

Co-authored-by: Logan Adams <[email protected]>
gyou2021 and loadams authored Oct 9, 2024
1 parent 1062a0c commit 474a328
Showing 2 changed files with 3 additions and 1 deletion.
3 changes: 2 additions & 1 deletion deepspeed/module_inject/auto_tp.py
100644 → 100755
@@ -333,7 +333,8 @@ def _replace(self, child, name, conv_linear_layer):
         weight_shape = child.weight.shape
         mp_replace = ReplaceWithTensorSlicing(mp_group=self.mp_group)
         # For mixtral-7x8b, need to skip MoE gate linear replace.
-        if name == "block_sparse_moe.gate":
+        if name == "block_sparse_moe.gate" or (('mlp.shared_expert_gate' == name or 'mlp.gate' == name)
+                                               and 'qwen2_moe' in str(type(self.module))):
             return child
         # For Yuan model
         if 'Yuan' in str(self.module):
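The added condition identifies the model by its class path: for Qwen2-MoE models loaded from transformers, str(type(module)) contains 'qwen2_moe'. A self-contained illustration of that check, using a stand-in class (the class and its module path below are only mimicking the real transformers one):

class Qwen2MoeStandIn:
    """Stand-in for a transformers Qwen2-MoE model class (illustrative)."""
    pass

# Mimic the module path transformers uses for this architecture.
Qwen2MoeStandIn.__module__ = "transformers.models.qwen2_moe.modeling_qwen2_moe"

module = Qwen2MoeStandIn()
# str(type(module)) ->
# "<class 'transformers.models.qwen2_moe.modeling_qwen2_moe.Qwen2MoeStandIn'>"
is_qwen2_moe = 'qwen2_moe' in str(type(module))
print(is_qwen2_moe)  # True -> 'mlp.gate' / 'mlp.shared_expert_gate' stay nn.Linear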
1 change: 1 addition & 0 deletions docs/_tutorials/automatic-tensor-parallelism.md
100644 → 100755
@@ -158,6 +158,7 @@ The following model families have been successfully tested with automatic tensor
 - plbart
 - qwen
 - qwen2
+- qwen2-moe
 - reformer
 - roberta
 - roformer
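With qwen2-moe listed among the supported model families, it can be served through DeepSpeed automatic tensor parallelism in the usual way. A minimal usage sketch (the checkpoint id, dtype, prompt, and device string below are illustrative assumptions, not part of this commit):

# Launch with, e.g.: deepspeed --num_gpus 2 qwen2_moe_tp_example.py
import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))

model_id = "Qwen/Qwen1.5-MoE-A2.7B"  # assumed Qwen2-MoE-architecture checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Automatic TP: no injection policy is supplied, so DeepSpeed shards the
# supported linear layers across world_size devices; with this commit the
# MoE gate layers stay as torch.nn.Linear and are replicated on every card.
engine = deepspeed.init_inference(model,
                                  tensor_parallel={"tp_size": world_size},
                                  dtype=torch.bfloat16)

# Use the HPU equivalent of the device string when running on Gaudi cards.
inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(f"cuda:{local_rank}")
outputs = engine.generate(**inputs, max_new_tokens=32)
if local_rank == 0:
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))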
