In getting CI working with the CUDA 13 driver, a few hacks were needed that should be addressed when possible:
- AArch64 / Python 3.11 is tested with CUDA 12.2 instead of CUDA 12.0, because cuda-python 12.9.1 cannot yet be installed with CUDA 12.0 or 12.1. This change should be reverted once the latest 12.9 bindings can be installed with CUDA 12.0. See https://github.com/NVIDIA/numba-cuda/pull/385/files#diff-cc749e1d52d834b934e6d416112a352bd963b19a7c94fb382be77784044ea0c4R13
- Test binaries are not built when testing the wheel with the ctypes bindings, because the build system uses `cuda.bindings.nvrtc`, which is not installed in that setup. Some way to build the LTOIR objects in this configuration needs to be found. See https://github.com/NVIDIA/numba-cuda/pull/385/files#diff-c6a6ecd17bfcae0e053cf9dce661870baff104ad82042ce6af76ecb15c6eec0f
- There are some nvdisasm issues: nvdisasm from toolkits different to (older than?) the driver version sometimes struggles with the binaries produced by the latest driver / nvJitLink, and I have not been able to identify a pattern. The SASS inspection tests were disabled, but they should be re-enabled and the root cause resolved (if it was not a transient issue with the packaged versions we were getting in CI shortly after the CUDA 13 release). See https://github.com/NVIDIA/numba-cuda/pull/385/files#diff-a9317012eb2763cef1081435b171637317071e1ab252ab82f00bb8026241409b