
[AutoParallel] Add auto parallel moe layer #9886

Open · pkuzyc wants to merge 4 commits into develop from auto_moe_layer

Conversation

pkuzyc (Contributor) commented on Feb 17, 2025

PR types

New features

PR changes

Models

Description

Add the auto-parallel MoE (Mixture-of-Experts) layer. This introduces paddlenlp/transformers/moe_gate_auto.py, paddlenlp/transformers/moe_layer_auto.py, and paddlenlp/transformers/auto_utils.py, and wires the new layer into the DeepSeek-V2 auto-parallel model (paddlenlp/transformers/deepseek_v2/modeling_auto.py).
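As a rough illustration of the underlying idea only: in expert parallelism, stacked expert weights are sharded along an expert-parallel mesh dimension while dense parts of the model stay replicated. The sketch below is hypothetical, not this PR's API; the mesh layout, sizes, and names are made up, and it assumes 4 ranks launched via paddle.distributed.launch.

import paddle
import paddle.distributed as dist

# Hypothetical 2-D process mesh: data-parallel ("dp") x expert-parallel ("ep").
mesh = dist.ProcessMesh([[0, 1], [2, 3]], dim_names=["dp", "ep"])

# Stacked expert FFN weights [num_experts, d_model, d_ffn]: shard the expert
# axis across "ep" and replicate across "dp", so each ep-rank holds only a
# slice of the experts.
w = paddle.randn([8, 1024, 4096])
w_dist = dist.shard_tensor(w, mesh, [dist.Replicate(), dist.Shard(0)])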

paddle-bot (bot) commented on Feb 17, 2025

Thanks for your contribution!

codecov (bot) commented on Feb 17, 2025

Codecov Report

Attention: Patch coverage is 14.18048% with 466 lines in your changes missing coverage. Please review.

Project coverage is 50.91%. Comparing base (011ae71) to head (ce71db2).
Report is 6 commits behind head on develop.

Files with missing lines                              Patch %   Missing lines
paddlenlp/transformers/moe_gate_auto.py                10.47%             282
paddlenlp/transformers/moe_layer_auto.py               17.56%             122
paddlenlp/transformers/deepseek_v2/modeling_auto.py    28.00%              36
paddlenlp/transformers/auto_utils.py                   13.33%              26

❌ Your patch check has failed because the patch coverage (14.18%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.
❌ Your project check has failed because the head coverage (50.91%) is below the target coverage (58.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9886      +/-   ##
===========================================
- Coverage    51.18%   50.91%   -0.28%     
===========================================
  Files          745      748       +3     
  Lines       119016   119811     +795     
===========================================
+ Hits         60924    60998      +74     
- Misses       58092    58813     +721     


# me: mean router probability per expert, averaged over the stacked entries.
me = paddle.stack(me_list).mean(0)
# ce: fraction of tokens dispatched to each expert, averaged the same way.
ce = paddle.stack(ce_list).mean(0)
# GShard-style load-balancing loss: minimized when both are uniform.
aux_loss = paddle.sum(me * ce) * float(self.num_experts)
return aux_loss
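For context, here is a minimal sketch of the statistics this GShard/Switch-style auxiliary loss combines for a single gating call. The helper below is hypothetical; its name and reduction axes are assumptions, not this PR's actual _cal_aux_loss.

import paddle

def cal_aux_loss_sketch(gates, mask1, num_experts):
    # gates: [num_tokens, num_experts] softmax routing probabilities.
    # mask1: [num_tokens, num_experts] one-hot top-1 expert assignments.
    me = gates.mean(axis=0)                            # mean router prob per expert
    ce = paddle.cast(mask1, gates.dtype).mean(axis=0)  # token fraction per expert
    # Minimized when both distributions are uniform at 1/num_experts.
    return paddle.sum(me * ce) * float(num_experts)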
Collaborator:

Contributor Author:
Done

# Make sure the capacity value does not exceed the number of tokens.
capacity = int(min(new_capacity, mask1.shape[0]))

l_aux = self._cal_aux_loss(gates, mask1)
Collaborator:

Contributor Author:
Done
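For context, a sketch of how new_capacity is typically derived just above this line in MoE gating code. The capacity_factor and top_k names are assumptions, not confirmed by this PR.

# Hypothetical derivation of new_capacity (names assumed, not from this PR).
num_tokens = mask1.shape[0]
new_capacity = int(capacity_factor * num_tokens * top_k) // num_experts
# Clamp so an expert never receives more slots than there are tokens.
capacity = int(min(new_capacity, num_tokens))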

Collaborator:

Was this not modified here?

Contributor Author:

It was added in the file; it may just not be displayed here.

pkuzyc force-pushed the auto_moe_layer branch 2 times, most recently from 8bfa877 to cbef5c5, on February 25, 2025 at 12:47
return mesh


def einsum(rule, a, b):
Collaborator:

Does Paddle not support these operations now?

Contributor Author:

The einsum op has some issues when converted directly from dynamic to static graph (dy2st).
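For reference, DeepSpeed/GShard-style MoE implementations often work around this by hand-writing einsum for the handful of contraction rules routing actually needs, falling back to the framework op otherwise. A minimal sketch under the assumption that this PR follows the same pattern (the exact rule set here is illustrative):

import paddle

def einsum(rule, a, b):
    # Hand-rolled replacements for the einsum contractions used in MoE
    # routing, avoiding the einsum op's dynamic-to-static issues.
    if rule == "s,se->se":
        # Per-token scalar times per-token expert scores.
        return a.reshape([-1, 1]) * b
    elif rule == "se,sc->sec":
        # Outer product over the expert and capacity axes.
        return a.unsqueeze(2) * b.unsqueeze(1)
    elif rule == "se,se->s":
        return paddle.sum(a * b, axis=1)
    elif rule == "sec,sm->ecm":
        s, e, c = a.shape
        m = b.shape[1]
        # [e*c, s] @ [s, m] -> [e, c, m]
        return paddle.matmul(a.reshape([s, e * c]), b, transpose_x=True).reshape([e, c, m])
    elif rule == "sec,ecm->sm":
        e, c, m = b.shape
        # [s, e*c] @ [e*c, m] -> [s, m]
        return paddle.matmul(a.reshape([a.shape[0], e * c]), b.reshape([e * c, m]))
    else:
        # Fall back to the framework op for anything uncovered.
        return paddle.einsum(rule, a, b)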

@@ -117,13 +117,13 @@ def scaled_dot_product_attention(
     )
 
     if isinstance(outputs, tuple):
-        outputs[0] = outputs[0].reshape([bsz, q_len, v_num_heads, head_dim])
+        outputs[0] = outputs[0].reshape([bsz, kv_seq_len, v_num_heads, head_dim])
Collaborator:

Change this back to q_len.

Contributor Author:

Done. The other inconsistent places in this function have also been fixed.
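Per this exchange, the corrected output-side reshape presumably reads as follows (a sketch; the surrounding variables come from the diff context above):

if isinstance(outputs, tuple):
    # The attention output has one row per query token, so the sequence
    # axis must be q_len, not kv_seq_len.
    outputs[0] = outputs[0].reshape([bsz, q_len, v_num_heads, head_dim])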

# Make sure the capacity value does not exceed the number of tokens.
capacity = int(min(new_capacity, paddle.tensor(mask1.size(0))))

l_aux = self._cal_aux_loss(gates, mask1)
Collaborator:

Was this not modified here?
