Implement make_opt_flags function for XPU and enable tests in test_matmul.py #5051
Conversation
force-pushed from bd9c534 to 5646fa0
Looks like I can test these changes using
```python
    if split_k > 1:
        pytest.skip("splitK hasn't been fully tested on AMD GPU.")

elif is_xpu():
```
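(For context, a hedged sketch of what such an `is_xpu()` helper typically checks; the test suite's real helper may be defined differently, e.g. via Triton's active driver:)

```python
import torch

def is_xpu() -> bool:
    # Assumption: detect an Intel GPU via PyTorch's XPU backend.
    return hasattr(torch, "xpu") and torch.xpu.is_available()
```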
Local result: 980 passed, 2340 skipped.
New results: 8 failed, 1124 passed, 2188 skipped in 1379.15s (0:22:59) (8 cases added to the skiplists)
```python
elif is_xpu():
    if split_k > 1:
        pytest.skip("splitK hasn't been fully tested on INTEL GPU.")
    if "float8_e4m3fn" in act_dtype_str and "float8_e4m3fn" in weight_dtype_str:
```
For these cases I see the following:

```
python/triton_kernels/tests/test_matmul.py::test_op[False-True-True-True-False-128-1000-400-400-ragged-float8_e4m3fn-float8_e4m3fn-3-1-1-1-False-None] - AssertionError: ref_y_scale: 0.004773152060806751, tri_y_scale: 0.005022321827709675
```

```python
generate_native_code: bool = False
advanced_path: bool = False
enable_tile_load_linear_layout: bool = True
arch: str = None
```
Otherwise, new tests don't work.
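(For context, a minimal sketch of the container those fields extend; the dataclass name is an assumption, while the fields and defaults are the ones shown in the diff:)

```python
from dataclasses import dataclass

@dataclass
class OptFlagsConstraints:  # assumed name; the real class lives with the opt-flags code
    generate_native_code: bool = False
    advanced_path: bool = False
    enable_tile_load_linear_layout: bool = True
    # the diff annotates this as `str = None`; `str | None` is the
    # type-correct spelling. Lets XPU heuristics branch per architecture.
    arch: str | None = None
```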
```python
group_m = 8
xcd_swizzle = 1
```
Not sure about it
```python
if split_k > 1:
    pytest.skip("splitK hasn't been fully tested on INTEL GPU.")
if "float8_e4m3fn" in act_dtype_str and "float8_e4m3fn" in weight_dtype_str:
    pytest.skip("FIXME")
```
Suggest creating an issue for this and marking the skip with the issue number.
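For example, something along these lines inside test_op (the repository path and issue number below are placeholders, not the real issue):

```python
import pytest

if "float8_e4m3fn" in act_dtype_str and "float8_e4m3fn" in weight_dtype_str:
    # NNNN is a placeholder; point it at the real tracking issue
    pytest.skip("FIXME: https://github.com/<org>/<repo>/issues/NNNN")
```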
done
Signed-off-by: Anatoly Myachev <[email protected]>
force-pushed from 40fd5c2 to 2f9b449
Signed-off-by: Anatoly Myachev <[email protected]>
force-pushed from 2f9b449 to d129992
```diff
 def compute_split_k(block_k: int, k: int | None, grid_size: int) -> int:
     device_props = torch.xpu.get_device_properties(0)
-    n_sms = device_props.multi_processor_count
+    n_sms = device_props.gpu_subslice_count
```
You mean gpu_subslice_count?
Should be gpu_subslice_count.
Good catch! You're right.
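To make the discussion concrete, here is a hedged sketch of what such a split-k heuristic could look like. Only the signature and the property lookup are confirmed by the diff; the rest (dividing the subslice count by the grid size and capping at one K-block per partial sum) is an illustrative assumption:

```python
import torch

def compute_split_k(block_k: int, k: int | None, grid_size: int) -> int:
    device_props = torch.xpu.get_device_properties(0)
    # XPU analogue of CUDA's multi_processor_count
    n_sms = device_props.gpu_subslice_count
    # split the reduction so partial-sum workgroups cover idle subslices
    split_k = max(1, n_sms // max(grid_size, 1))
    if k is not None:
        # never split finer than one K-block per partial result
        num_k_blocks = (k + block_k - 1) // block_k
        split_k = min(split_k, max(num_k_blocks, 1))
    return split_k
```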
Resolved (outdated) review thread on python/triton_kernels/triton_kernels/matmul_ogs_details/opt_flags_details/opt_flags_intel.py
Signed-off-by: Anatoly Myachev <[email protected]>
```diff
 TRITON_TEST_SUITE=triton_kernels \
-  run_pytest_command -vvv -n ${PYTEST_MAX_PROCESSES:-8} --device xpu .
+  run_pytest_command -vvv -n ${PYTEST_MAX_PROCESSES:-4} --device xpu .
```
Otherwise, the Python worker processes start to break.
Changed the title: "make_opt_flags function for XPU" → "make_opt_flags function for XPU and enable tests in test_matmul.py"
Note for reviewers: I've left only the most basic heuristics. If you have improvements in mind that are already known to work better and don't need testing, we can make those edits directly in this pull request. If you have improvements that need to be tested, please mention them as well, but I'd prefer to land this basic version as quickly as possible and tune it in separate PRs.
Pass rate: 84.11% -> 89.04%
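For orientation, the fragments quoted in this thread assemble into roughly the following shape. The function name, signature, and grid estimate are assumptions; only group_m, xcd_swizzle, and compute_split_k come from the diffs above:

```python
# Hedged sketch of the "most basic heuristics" described in the note above;
# the real make_opt_flags in opt_flags_intel.py takes more inputs and flags.
def make_default_opt_flags_intel(m: int, n: int, k: int, block_k: int) -> dict:
    group_m = 8       # fixed program-id grouping, from the diff above
    xcd_swizzle = 1   # XCD swizzling effectively disabled on XPU
    # illustrative grid estimate assuming 128x128 output blocks
    grid_size = ((m + 127) // 128) * ((n + 127) // 128)
    split_k = compute_split_k(block_k, k, grid_size)  # sketched earlier
    return dict(group_m=group_m, xcd_swizzle=xcd_swizzle, split_k=split_k)
```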