[EPLB][ROCm]: support EPLB for ROCm backend #27731
Conversation
Code Review
This pull request adds support for Expert Parallelism Load Balancing (EPLB) on the ROCm backend, achieving feature parity with the CUDA implementation. The changes are logical and well-contained, primarily enabling the feature for ROCm and updating the relevant checks and method calls. I have one point of feedback regarding the removal of a tensor contiguity assertion, which could potentially lead to issues with distributed communication. I've suggested ensuring tensor contiguity to maintain correctness and performance.
@abmfy could you please help review this PR?
tjtanaa
left a comment
LGTM
hmellor
left a comment
Looks much cleaner now; just one thing I'm not sure about.
```python
assert all(
    weight.is_contiguous()
    for name, weight in weights
    if not name.startswith("_shared_experts.")
)
```
I'm not sure about this change. @abmfy could this cause issues for other EPLB use cases?
The shared expert weights are currently not contiguous, possibly because a later change applied a stride operation. However, we think EPLB only applies to routed experts (this function only returns routed experts), so we removed the contiguity check for shared experts. Shared expert weights may undergo other stride operations in the future and should not be asserted on here.
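The contiguity concern above can be illustrated with a small sketch. NumPy is used here purely for illustration (the actual code operates on PyTorch tensors, where `Tensor.is_contiguous()` plays the same role as the contiguity flag below): a strided view such as a transpose is no longer laid out contiguously in memory, which is what the removed assertion guarded against.

```python
import numpy as np

# A freshly allocated weight matrix is C-contiguous, as expert
# weights normally are.
w = np.arange(12, dtype=np.float32).reshape(3, 4)
assert w.flags["C_CONTIGUOUS"]

# A strided view (e.g. a transpose) shares memory with `w` but is
# no longer contiguous -- the situation the assertion would reject.
w_t = w.T
assert not w_t.flags["C_CONTIGUOUS"]

# An explicit copy restores contiguity at the cost of an allocation,
# the usual fix before handing a buffer to collective communication.
w_fixed = np.ascontiguousarray(w_t)
assert w_fixed.flags["C_CONTIGUOUS"]
```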
@hmellor should be good here, since only the shared experts are excluded from the check and EPLB applies only to routed experts.
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Perry Zhang <[email protected]>
abmfy
left a comment
LGTM.
Thanks for the contribution!

Purpose
This PR adds EPLB (Expert Parallelism Load Balancing) support for the ROCm backend, achieving feature parity with the existing CUDA implementation. The implementation was validated on DeepSeek-R1.
Test Plan
We enabled EPLB for DeepSeek-R1 on MI355 with the following parameters.
Test Result
Benchmark Result:

After enabling EPLB, the `balancedness` metric increased from 0.55 to 0.65 with random data. However, `avg_tokens` is not high enough in the decode phase, so the `balancedness` metric dropped to 0.3.

Besides, we used `lm_eval` to validate the accuracy of EPLB on the `gsm8k` dataset; the results are below:
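For intuition, a balancedness metric of this kind is typically the ratio of mean to max per-rank expert load, where 1.0 means perfectly balanced. The sketch below is an illustrative assumption of such a definition, not vLLM's actual implementation (the function name and exact formula are hypothetical):

```python
def balancedness(tokens_per_rank: list[int]) -> float:
    """Ratio of mean to max per-rank token load; 1.0 = perfectly balanced.

    Hypothetical sketch of a balancedness-style metric, not vLLM's code.
    """
    if not tokens_per_rank or max(tokens_per_rank) == 0:
        return 1.0
    mean_load = sum(tokens_per_rank) / len(tokens_per_rank)
    return mean_load / max(tokens_per_rank)

# An even load scores 1.0; a skewed load scores much lower, which is
# what expert rebalancing is meant to improve.
assert balancedness([100, 100, 100, 100]) == 1.0
assert balancedness([400, 0, 0, 0]) == 0.25
```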