[MNT] Drop NUMBA_CUDA_USE_NVIDIA_BINDING; always use cuda.core and cuda.bindings as fallback #479
Conversation
/ok to test 474a1a2

/ok to test c1e11ce

/ok to test 3f0ceb3

/ok to test 35b172b
Co-authored-by: Graham Markall <[email protected]>
This reverts commit d93321e.
/ok to test f8b4d8c
# Require NVIDIA CUDA bindings at import time
if not (
    importlib.util.find_spec("cuda")
    and importlib.util.find_spec("cuda.bindings")
This should be an impossible code path without actively trying to create a broken installation, so we can remove these checks.
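For reference, the guard in the hunk above can be sketched as a standalone helper. This is a generic sketch only; `bindings_available` and its parameters are illustrative names, not part of the numba-cuda codebase. Using `find_spec` checks that the modules can be located without actually importing them:

```python
import importlib.util


def bindings_available(package="cuda", submodule="cuda.bindings"):
    """Return True if both the package and its bindings submodule can be
    located without importing them (a sketch of the import-time check
    discussed above; names are illustrative)."""
    try:
        return (
            importlib.util.find_spec(package) is not None
            and importlib.util.find_spec(submodule) is not None
        )
    except ModuleNotFoundError:
        # find_spec raises for a submodule whose parent package is missing
        return False
```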
I will change this in a follow-up.
I'm intermittently seeing #517 with this PR as well. I'm still looking to understand whether that's an existing bug or a side effect of this PR.
)
# For backwards compatibility: indicate that the NVIDIA CUDA Python bindings are
# in use. Older code checks this flag to branch on binding-specific behavior.
USE_NV_BINDING = True
I'm not aware of code that checks driver.USE_NV_BINDING. The code I was thinking of that checks whether the NVIDIA bindings are in use checks config.CUDA_USE_NVIDIA_BINDING. E.g.: https://github.com/rapidsai/rmm/blob/branch-25.12/python/rmm/rmm/allocators/numba.py#L76
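A minimal sketch of keeping the legacy flag in sync for downstream consumers such as the RMM allocator linked above. The `_Config` class here is a stand-in for numba's config module, not the real implementation:

```python
# Stand-in for numba.cuda's config module; illustrative only.
class _Config:
    # After this change the NVIDIA bindings are always in use, so the
    # legacy flag checked by downstream code (e.g. rmm.allocators.numba)
    # is hard-wired to True.
    CUDA_USE_NVIDIA_BINDING = True


config = _Config()

# Kept for backwards compatibility with code reading driver.USE_NV_BINDING.
USE_NV_BINDING = config.CUDA_USE_NVIDIA_BINDING
```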
I think this needs
diff --git a/numba_cuda/numba/cuda/cudadrv/nvrtc.py b/numba_cuda/numba/cuda/cudadrv/nvrtc.py
index 868397b3..0f50be8f 100644
--- a/numba_cuda/numba/cuda/cudadrv/nvrtc.py
+++ b/numba_cuda/numba/cuda/cudadrv/nvrtc.py
@@ -115,6 +115,8 @@ def compile(src, name, cc, ltoir=False, lineinfo=False, debug=False):
relocatable_device_code=True,
link_time_optimization=ltoir,
name=name,
+ debug=debug,
+ lineinfo=lineinfo
)
 class Logger:

to address #479 (comment), and that this is why tests are still failing.
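The point of the diff above is that the wrapper's `debug` and `lineinfo` arguments must be forwarded into the compile options rather than silently dropped. A sketch of the fix, where `ProgramOptions` is a stand-in dataclass rather than the real cuda.core API:

```python
from dataclasses import dataclass


@dataclass
class ProgramOptions:
    """Stand-in for the real compile-options object; fields mirror the
    keyword arguments visible in the diff above."""
    name: str
    relocatable_device_code: bool = True
    link_time_optimization: bool = False
    debug: bool = False
    lineinfo: bool = False


def make_options(name, ltoir=False, lineinfo=False, debug=False):
    return ProgramOptions(
        name=name,
        relocatable_device_code=True,
        link_time_optimization=ltoir,
        debug=debug,        # previously omitted, so the flag was lost
        lineinfo=lineinfo,  # previously omitted
    )
```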
/ok to test 7f98a05

/ok to test ee2c048

/ok to test 6d886d6

/ok to test
@gmarkall, there was an error processing your request: See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/
/ok to test 92b9be2
Summary

Historically, numba-cuda supported two binding layers (ctypes and cuda-python). This change simplifies maintenance and aligns with the long-term direction to use cuda.core while leveraging cuda.bindings to fill gaps.

What changed?

Breaking changes

Fixes: #154
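The "cuda.core first, cuda.bindings to fill gaps" direction from the summary follows a common import-fallback pattern. A generic sketch, with module names as parameters since the exact call sites are project-specific:

```python
import importlib


def import_with_fallback(primary, fallback):
    """Try the preferred module first, fall back to the alternative.
    A generic sketch of the cuda.core-first strategy; neither module
    name is hard-coded because the real call sites vary."""
    try:
        return importlib.import_module(primary)
    except ImportError:
        return importlib.import_module(fallback)
```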