diff --git a/README.md b/README.md index 2c0b18f07..97d9800cc 100644 --- a/README.md +++ b/README.md @@ -5,23 +5,23 @@ CUDA Python is the home for accessing NVIDIA’s CUDA platform from Python. It c * [cuda.core](https://nvidia.github.io/cuda-python/cuda-core/latest): Pythonic access to CUDA Runtime and other core functionalities * [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest): Low-level Python bindings to CUDA C APIs * [cuda.cccl.cooperative](https://nvidia.github.io/cccl/cuda_cooperative/): A Python module providing CCCL's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels -* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc, that are callable on the *host* +* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc., that are callable on the *host* * [numba.cuda](https://nvidia.github.io/numba-cuda/): Numba's target for CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model. * [nvmath-python](https://docs.nvidia.com/cuda/nvmath-python/latest): Pythonic access to NVIDIA CPU & GPU Math Libraries, with both [*host*](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#host-apis) and [*device* (nvmath.device)](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#device-apis) APIs. It also provides low-level Python bindings to host C APIs ([nvmath.bindings](https://docs.nvidia.com/cuda/nvmath-python/latest/bindings/index.html)). -CUDA Python is currently undergoing an overhaul to improve existing and bring up new components. 
All of the previously available functionalities from the `cuda-python` package will continue to be available, please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for installation guide and further detail. +CUDA Python is currently undergoing an overhaul to improve existing and introduce new components. All of the previously available functionalities from the `cuda-python` package will continue to be available; please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for the installation guide and further details. ## cuda-python as a metapackage -`cuda-python` is being re-structured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed. +`cuda-python` is being restructured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed. ### Subpackage: `cuda.core` -The `cuda.core` package offers idiomatic, pythonic access to CUDA Runtime and other functionalities. +The `cuda.core` package offers idiomatic, Pythonic access to CUDA Runtime and other functionalities. The goals are to -1. Provide **idiomatic ("pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain +1. Provide **idiomatic ("Pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain 2. Focus on **developer productivity** by ensuring end-to-end CUDA development can be performed quickly and entirely in Python 3. **Avoid homegrown** Python abstractions for CUDA for new Python GPU libraries starting from scratch 4. **Ease** developer **burden of maintaining** and catching up with latest CUDA features @@ -31,7 +31,7 @@ The goals are to The `cuda.bindings` package is a standard set of low-level interfaces, providing full coverage of and access to the CUDA host APIs from Python. 
-The list of available interfaces are: +The list of available interfaces is: * CUDA Driver * CUDA Runtime diff --git a/SECURITY.md b/SECURITY.md index 02d52a371..b72f0d67a 100755 --- a/SECURITY.md +++ b/SECURITY.md @@ -6,9 +6,9 @@ including all source code repositories managed through our organization. If you need to report a security issue, please use the appropriate contact points outlined below. **Please do not report security vulnerabilities through GitHub/GitLab.** -## Reporting Potential Security Vulnerability in nvmath-python +## Reporting Potential Security Vulnerability in CUDA Python -To report a potential security vulnerability in nvmath-python: +To report a potential security vulnerability in CUDA Python: - Web: [Security Vulnerability Submission Form](https://www.nvidia.com/object/submit-security-vulnerability.html) diff --git a/cuda_core/README.md b/cuda_core/README.md index 5247aff02..8a863c732 100644 --- a/cuda_core/README.md +++ b/cuda_core/README.md @@ -1,4 +1,4 @@ -# `cuda.core`: (experimental) pythonic CUDA module +# `cuda.core`: (experimental) Pythonic CUDA module Currently under active development; see [the documentation](https://nvidia.github.io/cuda-python/cuda-core/latest/) for more details. @@ -13,16 +13,16 @@ This subpackage adheres to the developing practices described in the parent meta ## Testing To run these tests: -* `python -m pytest tests/` against editable installations -* `pytest tests/` against installed packages +* `python -m pytest tests/` with editable installations +* `pytest tests/` with installed packages ### Cython Unit Tests Cython tests are located in `tests/cython` and need to be built. These builds have the same CUDA Toolkit header requirements as [those of cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest/install.html#requirements) where the major.minor version must match `cuda.bindings`. To build them: -1. Setup environment variable `CUDA_HOME` with the path to the CUDA Toolkit installation. 
-2. Run `build_tests` script located in `test/cython` appropriate to your platform. This will both cythonize the tests and build them. +1. Set the `CUDA_HOME` environment variable to the path of the CUDA Toolkit installation. +2. Run the `build_tests` script in `tests/cython` appropriate to your platform. This will both cythonize and build the tests. To run these tests: -* `python -m pytest tests/cython/` against editable installations -* `pytest tests/cython/` against installed packages +* `python -m pytest tests/cython/` with editable installations +* `pytest tests/cython/` with installed packages diff --git a/cuda_core/docs/source/getting-started.md b/cuda_core/docs/source/getting-started.md index a1eccb0fe..d27edb2c4 100644 --- a/cuda_core/docs/source/getting-started.md +++ b/cuda_core/docs/source/getting-started.md @@ -8,7 +8,7 @@ including: - Compiling and launching CUDA kernels - Asynchronous concurrent execution with CUDA graphs, streams and events - Coordinating work across multiple CUDA devices -- Allocating, transfering, and managing device memory +- Allocating, transferring, and managing device memory - Runtime linking of device code with Link-Time Optimization (LTO) - and much more! @@ -94,7 +94,7 @@ s.sync() ``` This example demonstrates one of the core workflows enabled by `cuda.core`: compiling and launching CUDA code. -Note the clean, Pythonic interface, and absense of any direct calls to the CUDA runtime/driver APIs. +Note the clean, Pythonic interface and the absence of any direct calls to the CUDA runtime/driver APIs. ## Examples and Recipes