[TASK] Improve resilience of the map_zip_with #3589

@mythrocks

Description

This is a(nother) follow-up task for #3490, which introduced support for the map_zip_with operation.

While reviewing #3490, it was found that for very large maps, some of the data structures allocated by the implementation might grow too large to be indexed by size_type.

As it currently stands, the algorithm fails fast when an overflow is detected, to avoid accidental memory corruption.

It would be good to address this, perhaps with batching: consume m (< n) rows of the input at a time, process each batch to produce intermediate results, and then combine the intermediates into the final output.
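One possible shape for the batching step is sketched below. This is not the actual cudf implementation, just an illustration under the assumption that a per-row estimate of the intermediate allocation size is available up front. The hypothetical helper `make_batches` splits the rows into contiguous [begin, end) ranges such that each batch's summed intermediate size stays within size_type (int32_t) limits, so no single batch can overflow:

```cpp
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

using size_type = int32_t;  // cudf's row/offset index type

// Hypothetical helper (not part of cudf): greedily pack rows into
// contiguous batches whose total intermediate size fits in size_type.
// `row_sizes` holds the estimated per-row intermediate allocation size.
std::vector<std::pair<int64_t, int64_t>> make_batches(
  std::vector<int64_t> const& row_sizes,
  int64_t max_batch_size = std::numeric_limits<size_type>::max())
{
  std::vector<std::pair<int64_t, int64_t>> batches;  // [begin, end) row ranges
  int64_t const n = static_cast<int64_t>(row_sizes.size());
  int64_t begin = 0, running = 0;
  for (int64_t i = 0; i < n; ++i) {
    // Close the current batch before it would exceed the limit. The
    // `i > begin` guard ensures an oversized single row still gets its
    // own batch rather than producing an empty one.
    if (running + row_sizes[i] > max_batch_size && i > begin) {
      batches.emplace_back(begin, i);
      begin   = i;
      running = 0;
    }
    running += row_sizes[i];
  }
  if (begin < n) { batches.emplace_back(begin, n); }
  return batches;
}
```

With this decomposition, map_zip_with could run its existing logic once per batch and concatenate the per-batch results; the fail-on-overflow check would then only trip if a single row's intermediates exceed size_type, which cannot be batched around.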
