Is your feature request related to a problem or challenge?
The ListingTable works quite well in practice, but like all software it could be made better. I am writing up this ticket to enumerate some areas for improvement, in the hope that people who are interested can collaborate / coordinate their efforts.
Background
DataFusion has a ListingTable that supports reading tables stored as one or more files in a "hive partitioned" directory structure.
So, for example, given files like this:
/path/to/my_table/file1.parquet
/path/to/my_table/file2.parquet
/path/to/my_table/file3.parquet
You can create a table with a command like
CREATE EXTERNAL TABLE my_table
LOCATION '/path/to/my_table'

And the ListingTable will handle figuring out the schema and running queries against those files as though they were a single table.
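For completeness, the same table can also be registered programmatically. Below is a minimal sketch assuming the `datafusion` and `tokio` crates; `register_listing_table` and the module paths are from recent DataFusion releases and may differ slightly between versions:

```rust
use std::sync::Arc;

use datafusion::datasource::file_format::parquet::ParquetFormat;
use datafusion::datasource::listing::ListingOptions;
use datafusion::error::Result;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Equivalent of CREATE EXTERNAL TABLE ... LOCATION '/path/to/my_table':
    // point a ListingTable at the directory and let it infer the schema.
    let options = ListingOptions::new(Arc::new(ParquetFormat::default()))
        .with_file_extension(".parquet");
    ctx.register_listing_table("my_table", "/path/to/my_table", options, None, None)
        .await?;

    ctx.sql("SELECT count(*) FROM my_table").await?.show().await?;
    Ok(())
}
```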
Describe the solution you'd like
Here are some things I suspect could be improved:
All Formats
Object store list caching
For large tables (many files) on remote stores, the actual object store LIST call may be non-trivially expensive, so re-issuing it for every query adds up.
@henrifroese points out a similar thing for pruning partitions #9654
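To make the caching idea concrete, here is a self-contained sketch of reusing a listing across queries instead of re-issuing LIST every time. `FileMeta`, `ListCache`, and the TTL policy are hypothetical illustrations, not DataFusion's actual cache API (a real implementation would work with `object_store::ObjectMeta` and need an invalidation story):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Hypothetical stand-in for one listed file's metadata
/// (the real code would use `object_store::ObjectMeta`).
#[derive(Clone)]
struct FileMeta {
    path: String,
    size: u64,
}

/// A very simple time-bounded cache of LIST results keyed by table prefix,
/// so repeated queries against the same table can reuse one listing.
struct ListCache {
    ttl: Duration,
    entries: Mutex<HashMap<String, (Instant, Vec<FileMeta>)>>,
}

impl ListCache {
    fn new(ttl: Duration) -> Self {
        Self {
            ttl,
            entries: Mutex::new(HashMap::new()),
        }
    }

    /// Return the cached listing for `prefix`, or invoke `list_fn`
    /// (standing in for the real object store LIST call) and cache its result.
    fn get_or_list<F>(&self, prefix: &str, list_fn: F) -> Vec<FileMeta>
    where
        F: FnOnce() -> Vec<FileMeta>,
    {
        let mut entries = self.entries.lock().unwrap();
        if let Some((listed_at, files)) = entries.get(prefix) {
            if listed_at.elapsed() < self.ttl {
                return files.clone();
            }
        }
        let files = list_fn();
        entries.insert(prefix.to_string(), (Instant::now(), files.clone()));
        files
    }
}

fn main() {
    let cache = ListCache::new(Duration::from_secs(60));
    // The first call performs the (simulated) LIST; the second is served
    // from the cache without touching the object store again.
    for _ in 0..2 {
        let files = cache.get_or_list("/path/to/my_table", || {
            vec![FileMeta {
                path: "/path/to/my_table/file1.parquet".into(),
                size: 1024,
            }]
        });
        println!("listed {} files", files.len());
    }
}
```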
Parquet Specific
Metadata caching
ListingTable (code link) prunes files based on statistics, and then the ParquetExec itself (link) again prunes row groups and data pages based on the Parquet metadata. Fetching and parsing this metadata twice (once to prune files and once to prune row groups) could be improved.
IO granularity
I have heard it said that the DataFusion ParquetExec reader reads a page at a time -- this is fine if the parquet file is a local file on disk, but it is likely quite inefficient if each page must be fetched with an individual remote object store request. This assertion needs to be researched, but if it is true we could make queries on remote parquet files much faster by making fewer, larger requests.
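To illustrate what "fewer, larger requests" could mean in practice, here is a small sketch of byte-range coalescing, where nearby page reads are merged into one larger GET. The function and the gap threshold are illustrative assumptions; the parquet and object_store crates have their own request logic:

```rust
use std::ops::Range;

/// Merge byte ranges whose gaps are smaller than `max_gap`, so a group of
/// nearby page reads can be served with one larger GET request instead of
/// many small ones. (Illustrative only.)
fn coalesce_ranges(mut ranges: Vec<Range<u64>>, max_gap: u64) -> Vec<Range<u64>> {
    ranges.sort_by_key(|r| r.start);
    let mut merged: Vec<Range<u64>> = Vec::new();
    for r in ranges {
        if let Some(last) = merged.last_mut() {
            if r.start <= last.end + max_gap {
                // The gap is small: extend the previous request.
                last.end = last.end.max(r.end);
                continue;
            }
        }
        merged.push(r);
    }
    merged
}

fn main() {
    // Three page-sized reads; the first two are close together and become
    // a single request, the third is far away and stays separate.
    let pages = vec![0..8_192, 8_200..16_384, 1_000_000..1_008_192];
    let requests = coalesce_ranges(pages, 1_024);
    assert_eq!(requests, vec![0..16_384, 1_000_000..1_008_192]);
    println!("{requests:?}");
}
```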
Describe alternatives you've considered
@Ted-Jiang added some APIs in #7570 (https://github.com/apache/arrow-datafusion/blob/2b0a7db0ce64950864e07edaddfa80756fe0ffd5/datafusion/execution/src/cache/mod.rs), but there aren't any default implementations in DataFusion, so the metadata is still read multiple times.
Maybe we can add a default implementation of the caches in SessionContext with a simple policy (like LRU / some max size)
Another potential way to improve performance is to cache the decoded metadata from the Parquet footer rather than decoding it once to prune files and then again to prune row groups / pages. This could be taken even further by pruning files, row groups, and pages in one go using an API like #9929.
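As a sketch of what such a default could look like, here is a tiny count-bounded LRU cache for decoded footer metadata. `FileMetadata` is a hypothetical stand-in for `parquet::file::metadata::ParquetMetaData`, and the eviction policy is deliberately simplistic; a real implementation would presumably plug into the cache APIs added in #7570:

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// Hypothetical stand-in for decoded Parquet footer metadata.
struct FileMetadata {
    num_row_groups: usize,
}

/// A tiny LRU cache bounded by entry count; when full, the least recently
/// used file's metadata is evicted.
struct MetadataCache {
    max_entries: usize,
    entries: HashMap<String, (u64, Arc<FileMetadata>)>,
    clock: u64,
}

impl MetadataCache {
    fn new(max_entries: usize) -> Self {
        Self { max_entries, entries: HashMap::new(), clock: 0 }
    }

    fn get(&mut self, path: &str) -> Option<Arc<FileMetadata>> {
        self.clock += 1;
        let clock = self.clock;
        self.entries.get_mut(path).map(|(used, meta)| {
            *used = clock; // mark as recently used
            Arc::clone(meta)
        })
    }

    fn put(&mut self, path: String, meta: Arc<FileMetadata>) {
        if self.entries.len() >= self.max_entries && !self.entries.contains_key(&path) {
            // Evict the least recently used entry to stay within the bound.
            let lru = self
                .entries
                .iter()
                .min_by_key(|(_, (used, _))| *used)
                .map(|(k, _)| k.clone());
            if let Some(lru) = lru {
                self.entries.remove(&lru);
            }
        }
        self.clock += 1;
        self.entries.insert(path, (self.clock, meta));
    }
}

fn main() {
    let mut cache = MetadataCache::new(2);
    cache.put("file1.parquet".into(), Arc::new(FileMetadata { num_row_groups: 4 }));
    cache.put("file2.parquet".into(), Arc::new(FileMetadata { num_row_groups: 8 }));
    let _ = cache.get("file1.parquet"); // touch file1 so file2 becomes the LRU entry
    cache.put("file3.parquet".into(), Arc::new(FileMetadata { num_row_groups: 2 }));
    assert!(cache.get("file2.parquet").is_none()); // evicted
    assert!(cache.get("file1.parquet").is_some());
}
```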
Additional context
@matthewmturner mentioned interest in improving listing table performance: #9899 (comment)
Note we don't use ListingTable in InfluxDB for some of the reasons described above
Related tickets:
- Partitioned object store lists all files on every query when using hive-partitioned parquet files #9654
- API in ParquetExec to pass in RowSelections to ParquetExec (enable custom indexes, finer grained pushdown) #9929
- Cache Parquet Metadata #15582
- Filter cache based on the paper "Predicate Caching: Query-Driven Secondary Indexing for Cloud Data" #15585