
Conversation


@Angazenn Angazenn commented Nov 14, 2025

What this PR does / why we need it?

Currently, the default cudagraph_capture_size in vLLM is [1, 2, 4, 8, 16, 24, ..., max_capture_size]. However, this is not the best choice in every situation. This PR changes the default when running Qwen3-MoE with full DP (dp_size > 1 && tp_size == 1), a setup commonly used for large-scale EP.
old:
[1, 2, 4, 8, 16, 24, ..., max_capture_size]
new:
[1, 2, 5, 10, 15, 16, 24, ..., max_capture_size]
The main reason is that the performance of the _npu_paged_attention op degrades dramatically under the old capture sizes. This change provides better out-of-the-box performance when users do not set cudagraph_capture_size explicitly.
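To make the relationship between the two lists concrete, here is a minimal sketch. The helper name, the full_dp flag, and the assumption of a step of 8 from 16 up to max_capture_size are for illustration only; this is not the PR's actual code:

```python
# Illustrative sketch only; not the PR's implementation.
# Generates the old and new default capture-size lists from the description,
# assuming sizes continue in steps of 8 from 16 up to max_capture_size.
def default_capture_sizes(max_capture_size: int, full_dp: bool) -> list[int]:
    if full_dp:
        # New default for Qwen3-MoE with full DP (dp_size > 1, tp_size == 1).
        sizes = [1, 2, 5, 10, 15]
    else:
        # Old default.
        sizes = [1, 2, 4, 8]
    sizes += list(range(16, max_capture_size + 1, 8))
    return sizes

print(default_capture_sizes(48, full_dp=True))   # [1, 2, 5, 10, 15, 16, 24, 32, 40, 48]
print(default_capture_sizes(48, full_dp=False))  # [1, 2, 4, 8, 16, 24, 32, 40, 48]
```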

Does this PR introduce any user-facing change?

The default cudagraph_capture_size is modified in the case described above. If users have already set cudagraph_capture_size explicitly, this PR has no effect on their configuration.
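For users who want to keep a fixed list, a hedged example of pinning the capture sizes explicitly is shown below. The compilation_config field name follows vLLM's CompilationConfig; verify it against your vLLM / vllm-ascend version, and treat the model name and size list as placeholders:

```python
# Sketch: pinning capture sizes explicitly so the default chosen by this PR
# does not apply. Verify the option name against your vLLM / vllm-ascend version.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-30B-A3B",  # example model, adjust as needed
    compilation_config={"cudagraph_capture_sizes": [1, 2, 4, 8, 16, 24, 32]},
)
```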

How was this patch tested?

@Angazenn Angazenn changed the base branch from main to v0.11.0-dev November 14, 2025 08:24
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@Angazenn Angazenn changed the title [v0.11.0-dev][misc]change default capture size for Qwen3-MoE when using pure dp [v0.11.0-dev][misc]change default capture size for Qwen3-MoE when using full dp Nov 14, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces significant changes across various components, including updates to CANN/PyTorch versions, refactoring of the Mooncake KV transfer connector, and adjustments to ACL graph capture sizes. While many changes aim to improve performance and compatibility, several areas require further attention to ensure correctness and maintainability. Specifically, the extensive refactoring of the Mooncake connector introduces new communication patterns and error handling, which needs thorough verification. Additionally, changes to default scheduling parameters and test coverage for critical functionalities should be carefully reviewed.

Signed-off-by: Angazenn <[email protected]>