README.md: 12 changes (6 additions & 6 deletions)
@@ -5,23 +5,23 @@ CUDA Python is the home for accessing NVIDIA’s CUDA platform from Python. It c
* [cuda.core](https://nvidia.github.io/cuda-python/cuda-core/latest): Pythonic access to CUDA Runtime and other core functionalities
* [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest): Low-level Python bindings to CUDA C APIs
* [cuda.cccl.cooperative](https://nvidia.github.io/cccl/cuda_cooperative/): A Python module providing CCCL's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
-* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc, that are callable on the *host*
+* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc. that are callable on the *host*
* [numba.cuda](https://nvidia.github.io/numba-cuda/): Numba's target for CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model.
* [nvmath-python](https://docs.nvidia.com/cuda/nvmath-python/latest): Pythonic access to NVIDIA CPU & GPU Math Libraries, with both [*host*](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#host-apis) and [*device* (nvmath.device)](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#device-apis) APIs. It also provides low-level Python bindings to host C APIs ([nvmath.bindings](https://docs.nvidia.com/cuda/nvmath-python/latest/bindings/index.html)).

-CUDA Python is currently undergoing an overhaul to improve existing and bring up new components. All of the previously available functionalities from the `cuda-python` package will continue to be available, please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for installation guide and further detail.
+CUDA Python is currently undergoing an overhaul to improve existing and introduce new components. All of the previously available functionalities from the `cuda-python` package will continue to be available, please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for installation guide and further detail.

## cuda-python as a metapackage

-`cuda-python` is being re-structured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed.
+`cuda-python` is being restructured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed.

### Subpackage: `cuda.core`

-The `cuda.core` package offers idiomatic, pythonic access to CUDA Runtime and other functionalities.
+The `cuda.core` package offers idiomatic, Pythonic access to CUDA Runtime and other functionalities.

The goals are to

-1. Provide **idiomatic ("pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain
+1. Provide **idiomatic ("Pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain
2. Focus on **developer productivity** by ensuring end-to-end CUDA development can be performed quickly and entirely in Python
3. **Avoid homegrown** Python abstractions for CUDA for new Python GPU libraries starting from scratch
4. **Ease** developer **burden of maintaining** and catching up with latest CUDA features
@@ -31,7 +31,7 @@ The goals are to

The `cuda.bindings` package is a standard set of low-level interfaces, providing full coverage of and access to the CUDA host APIs from Python.

-The list of available interfaces are:
+The list of available interfaces is:

* CUDA Driver
* CUDA Runtime
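The hunk above contrasts the Pythonic `cuda.core` layer with the low-level `cuda.bindings` interfaces. For readers skimming this diff, the sketch below (not part of the change set) illustrates what the low-level bindings look like in practice. It assumes the `cuda.bindings` 12.x module layout and the documented convention that every call returns a tuple whose first element is the status code; exact names may vary between releases.

```python
# Minimal sketch of low-level access via cuda.bindings (illustration only,
# not part of this diff).
from cuda.bindings import driver, runtime

err, = driver.cuInit(0)                        # initialize the driver API
assert err == driver.CUresult.CUDA_SUCCESS

err, n_devices = runtime.cudaGetDeviceCount()  # runtime API: count visible devices
assert err == runtime.cudaError_t.cudaSuccess

err, dev = driver.cuDeviceGet(0)               # driver API: handle to device 0
assert err == driver.CUresult.CUDA_SUCCESS

err, name = driver.cuDeviceGetName(128, dev)   # device name as NUL-padded bytes
assert err == driver.CUresult.CUDA_SUCCESS

print(f"{n_devices} device(s); device 0: {name.decode().rstrip(chr(0))}")
```

The `cuda.core` subpackage wraps this kind of status-checking boilerplate behind objects such as `Device` and `Stream`, which is what the "Pythonic access" wording in the README refers to.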
SECURITY.md: 4 changes (2 additions & 2 deletions)
@@ -6,9 +6,9 @@ including all source code repositories managed through our organization.
If you need to report a security issue, please use the appropriate contact points outlined
below. **Please do not report security vulnerabilities through GitHub/GitLab.**

-## Reporting Potential Security Vulnerability in nvmath-python
+## Reporting Potential Security Vulnerability in CUDA Python

-To report a potential security vulnerability in nvmath-python:
+To report a potential security vulnerability in CUDA Python:

- Web: [Security Vulnerability Submission
Form](https://www.nvidia.com/object/submit-security-vulnerability.html)
cuda_core/README.md: 14 changes (7 additions & 7 deletions)
@@ -1,4 +1,4 @@
-# `cuda.core`: (experimental) pythonic CUDA module
+# `cuda.core`: (experimental) Pythonic CUDA module

Currently under active development; see [the documentation](https://nvidia.github.io/cuda-python/cuda-core/latest/) for more details.

@@ -13,16 +13,16 @@ This subpackage adheres to the developing practices described in the parent meta
## Testing

To run these tests:
-* `python -m pytest tests/` against editable installations
-* `pytest tests/` against installed packages
+* `python -m pytest tests/` with editable installations
+* `pytest tests/` with installed packages

### Cython Unit Tests

Cython tests are located in `tests/cython` and need to be built. These builds have the same CUDA Toolkit header requirements as [those of cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest/install.html#requirements) where the major.minor version must match `cuda.bindings`. To build them:

-1. Setup environment variable `CUDA_HOME` with the path to the CUDA Toolkit installation.
-2. Run `build_tests` script located in `test/cython` appropriate to your platform. This will both cythonize the tests and build them.
+1. Set up environment variable `CUDA_HOME` with the path to the CUDA Toolkit installation.
+2. Run `build_tests` script located in `tests/cython` appropriate to your platform. This will both cythonize the tests and build them.

To run these tests:
-* `python -m pytest tests/cython/` against editable installations
-* `pytest tests/cython/` against installed packages
+* `python -m pytest tests/cython/` with editable installations
+* `pytest tests/cython/` with installed packages
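As a companion to the test instructions above, here is a hypothetical helper script (not part of this diff) that strings the steps together: it exports `CUDA_HOME`, builds the Cython tests via the `build_tests` script mentioned in the README, then runs both suites. The CUDA Toolkit path and the `build_tests.sh` file name are assumptions for a Linux setup; adjust to your platform.

```python
# Hypothetical helper for the workflow above: set CUDA_HOME, build the Cython
# tests, then run the regular and Cython test suites with pytest.
import os
import subprocess

env = dict(os.environ, CUDA_HOME="/usr/local/cuda")  # assumed Toolkit location

# Cythonize and build the Cython unit tests (script name assumed for Linux).
subprocess.run(["bash", "build_tests.sh"], cwd="tests/cython", env=env, check=True)

# Run both suites against an editable installation.
subprocess.run(["python", "-m", "pytest", "tests/"], env=env, check=True)
subprocess.run(["python", "-m", "pytest", "tests/cython/"], env=env, check=True)
```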
cuda_core/docs/source/getting-started.md: 4 changes (2 additions & 2 deletions)
@@ -8,7 +8,7 @@ including:
- Compiling and launching CUDA kernels
- Asynchronous concurrent execution with CUDA graphs, streams and events
- Coordinating work across multiple CUDA devices
-- Allocating, transfering, and managing device memory
+- Allocating, transferring, and managing device memory
- Runtime linking of device code with Link-Time Optimization (LTO)
- and much more!

@@ -94,7 +94,7 @@ s.sync()
```

This example demonstrates one of the core workflows enabled by `cuda.core`: compiling and launching CUDA code.
-Note the clean, Pythonic interface, and absense of any direct calls to the CUDA runtime/driver APIs.
+Note the clean, Pythonic interface, and absence of any direct calls to the CUDA runtime/driver APIs.

## Examples and Recipes

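The `s.sync()` visible in the final hunk is the tail of the getting-started compile-and-launch example. For context, here is a condensed sketch of that workflow (not part of this diff). It assumes the `cuda.core.experimental` API (`Device`, `Program`, `ProgramOptions`, `LaunchConfig`, `launch`) and uses CuPy for device arrays; argument order and option names may differ slightly between cuda.core releases.

```python
# Condensed compile-and-launch sketch in the spirit of the getting-started guide
# (illustration only; API details assumed from the cuda.core.experimental docs).
import cupy as cp
from cuda.core.experimental import Device, LaunchConfig, Program, ProgramOptions, launch

code = r"""
extern "C" __global__ void scale(float a, const float* x, float* out, size_t n) {
    size_t tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid < n) out[tid] = a * x[tid];
}
"""

dev = Device()
dev.set_current()
s = dev.create_stream()

# JIT-compile the kernel for the current device's architecture.
arch = "".join(str(i) for i in dev.compute_capability)
prog = Program(code, code_type="c++", options=ProgramOptions(arch=f"sm_{arch}"))
ker = prog.compile("cubin").get_kernel("scale")

# Device arrays via CuPy; sync because CuPy work runs on its own stream.
n = 1 << 20
x = cp.random.random(n, dtype=cp.float32)
out = cp.empty_like(x)
dev.sync()

block = 256
grid = (n + block - 1) // block
launch(s, LaunchConfig(grid=grid, block=block), ker,
       cp.float32(2.0), x.data.ptr, out.data.ptr, cp.uint64(n))
s.sync()  # the call shown at the top of the final hunk above
```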