
Conversation


@tqchen tqchen commented Sep 12, 2025

This PR enhances DLPack exchange by introducing DLPackPyObjectExporter, DLPackPyObjectImporter, and DLPackTensorAllocator.

These three function pointers speed up import/export through DLPack and also streamline the rare (but still occasionally useful) case of allocating tensors inside the FFI.

In particular, they significantly speed up auto-dlpack import. They also let us query the allocator from the caller environment and return ffi::Tensor back to it (experimental): when a function takes a torch.Tensor as an argument, returned Tensor values will be converted back to torch.Tensor.
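The exchange itself rides on the standard DLPack protocol. As a minimal, framework-agnostic sketch (using NumPy in place of the TVM/PyTorch pair, since any DLPack-capable library follows the same handshake):

```python
import numpy as np

# Zero-copy tensor exchange via the standard DLPack protocol.
# NumPy stands in for TVM/PyTorch here; torch.from_dlpack and an
# FFI importer follow the same __dlpack__ capsule handshake.
src = np.arange(6, dtype=np.float32)

# The importer invokes src.__dlpack__() under the hood and wraps the
# resulting capsule without copying the underlying buffer.
dst = np.from_dlpack(src)

# Both arrays view the same memory, so the exchange is zero-copy.
assert np.shares_memory(src, dst)
```

The exporter/importer function pointers in this PR are about making that handshake cheap on the hot path; the allocator hook covers the reverse direction, where the FFI needs to hand back memory the caller's framework owns.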

This PR also renames SetCurrentStream => SetStream to align with the naming style of the CUDA API.


Finally, we add an option to select whether the GIL is released during a call. We release the GIL by default, as ctypes does; for short-running functions, however, it may be helpful to set func.release_gil = False to avoid the release/reacquire overhead.
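As a hypothetical illustration of the flag (pure Python; the real binding does this in C, and only the attribute name release_gil is taken from the PR text):

```python
# Hypothetical sketch of a GIL-release toggle on an FFI function
# wrapper; not the actual tvm.ffi implementation.
class Function:
    def __init__(self, fn, release_gil=True):
        self._fn = fn
        # Default True mirrors ctypes: long-running native calls
        # release the GIL so other Python threads can run meanwhile.
        self.release_gil = release_gil

    def __call__(self, *args):
        # A real binding would select a GIL-releasing C call path when
        # release_gil is True; in this sketch both paths just call fn.
        return self._fn(*args)

f = Function(lambda x: x + 1)
f.release_gil = False  # opt out for short-running functions
```

The trade-off the PR text describes: releasing and reacquiring the GIL has a fixed cost per call, which is worth paying for long native calls but can dominate the runtime of very short ones.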
@tqchen tqchen changed the title [FFI][ABI][REFACTOR] Enhance DLPack Exchange Speed and Bheavior [FFI][ABI][REFACTOR] Enhance DLPack Exchange Speed and Behavior Sep 12, 2025

@MasterJH5574 MasterJH5574 left a comment


LGTM, thanks!

@MasterJH5574 MasterJH5574 merged commit 71635d0 into apache:main Sep 12, 2025
14 checks passed
tqchen added a commit to tqchen/tvm that referenced this pull request Sep 13, 2025
…he#18306)

MasterJH5574 pushed a commit that referenced this pull request Sep 25, 2025
…onversion (#18331)

This PR enhances `BasePyModule` by integrating a faster DLPack converter for efficient tensor conversion between TVM and PyTorch, following #18306.