[Cherry-pick][0.11.0] Adapted to torch_npu.npu_fused_infer_attention_score #4202
base: v0.11.0-dev
Conversation
…score Signed-off-by: Icey <[email protected]>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
The pull request updates a block size parameter from 64 to 128 in two distinct files. While the change itself appears to be a coordinated update, the direct hardcoding of this magic number in multiple locations introduces a significant maintainability risk. It is highly recommended to centralize such common configuration values into a single named constant, ideally in a shared utility module, to ensure consistency and simplify future updates.
 @staticmethod
 def get_supported_block_size() -> list[int]:
-    return [64]
+    return [128]
The block size 128 is a magic number that appears in multiple files. Consider defining this value as a named constant in a shared utility module (e.g., vllm_ascend/utils.py) to improve maintainability and ensure consistency across the codebase. Duplicating such values can lead to errors if the value needs to be updated in the future.
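To make the suggestion concrete, a minimal sketch along these lines could work. The constant name ASCEND_FIA_BLOCK_SIZE, the module path, and the class below are assumptions for illustration only, not the project's actual code:

```python
# --- vllm_ascend/utils.py (assumed shared module) ---
# Block size required by torch_npu.npu_fused_infer_attention_score.
# Hypothetical constant name; not part of the real vllm_ascend API.
ASCEND_FIA_BLOCK_SIZE = 128


# --- attention backend (illustrative class name) ---
class AscendAttentionBackend:

    @staticmethod
    def get_supported_block_size() -> list[int]:
        # Reuse the shared constant instead of a hard-coded literal.
        return [ASCEND_FIA_BLOCK_SIZE]


# --- KV-cache sizing code (illustrative) ---
# This site measures bytes, so reusing the constant assumes the two
# values are intentionally kept equal, as they are in this PR.
block_alignment_bytes = ASCEND_FIA_BLOCK_SIZE
```

With a single named constant, a future hardware or kernel requirement change would only need to touch one definition rather than every call site.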
 ).page_size_bytes

-block_alignment_bytes = 64
+block_alignment_bytes = 128
What this PR does / why we need it?
Fixes a compatibility bug with torch_npu.npu_fused_infer_attention_score, which is described in #4020.
@momo609 suggested this solution.
cherry-pick: #4025
Does this PR introduce any user-facing change?
N/A
How was this patch tested?
CI passed with newly added and existing tests.