
Implement Parquet filter pushdown via new filter pushdown APIs #15769


Open
wants to merge 21 commits into base: main

Conversation

adriangb
Contributor

@adriangb adriangb commented Apr 18, 2025

Moves Parquet predicate pushdown from being something specialized that ListingTable and the Parquet format do to something that works for any TableProvider and any file format that implements the APIs. The compatibility checks also happen entirely within the parquet data source machinery, instead of leaking implementation details via supports_filters_pushdown.

@github-actions github-actions bot added core Core DataFusion crate datasource Changes to the datasource crate labels Apr 18, 2025
Contributor Author

@adriangb adriangb left a comment

pointing out current issues to move forward with implementing parquet filter pushdown via the new APIs we've introduced

cc @alamb @berkaysynnada for ideas

Comment on lines 478 to 455
Arc::new(ParquetSource::default())
todo!() // need access of file schema?
Contributor Author

This poses an issue.

TLDR: in order to know whether it can absorb a filter as Exact, ParquetSource needs to know not only the filter but also the file schema it is applied to (in particular, the column types, since it can't handle structs).

let remaining_description = if config.execution.parquet.pushdown_filters {
let mut remaining_filters = fd.filters.clone();
for filter in &remaining_filters {
if can_expr_be_pushed_down_with_schemas(filter, &conf.file_schema) {
Contributor Author

This is where we need the file schema
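For illustration, a minimal sketch of the kind of schema-dependent check involved, using a hypothetical helper (not the actual can_expr_be_pushed_down_with_schemas): a filter column is only a pushdown candidate if it exists in the file schema and is not a nested type.

use datafusion::arrow::datatypes::{DataType, Schema};

// Hypothetical helper for illustration only: returns true if `column` exists in
// the file schema and has a non-nested type, which is the sort of information
// the real check needs the file schema for.
fn column_is_pushable(file_schema: &Schema, column: &str) -> bool {
    match file_schema.field_with_name(column) {
        Ok(field) => !matches!(
            field.data_type(),
            DataType::Struct(_)
                | DataType::List(_)
                | DataType::LargeList(_)
                | DataType::Map(_, _)
        ),
        // The column is not in the file schema at all (e.g. a partition column),
        // so the parquet scan cannot evaluate a filter on it.
        Err(_) => false,
    }
}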

@alamb
Contributor

alamb commented Apr 19, 2025

Thanks @adriangb -- I am about to be offline for a week so I will review this when I return

@adriangb adriangb changed the title re-implement filter pushdown for parquet Implement filter pushdown for TopK Apr 19, 2025
@adriangb adriangb changed the title Implement filter pushdown for TopK re-implement filter pushdown for parquet Apr 19, 2025
@github-actions github-actions bot added the proto Related to proto crate label Apr 19, 2025
@adriangb adriangb changed the title re-implement filter pushdown for parquet Implement Parquet filter pushdown via new filter pushdown APIs Apr 19, 2025
@github-actions github-actions bot added the sqllogictest SQL Logic Tests (.slt) label Apr 19, 2025
Comment on lines -985 to -1015
// if we can't push it down completely with only the filename-based/path-based
// column names, then we should check if we can do parquet predicate pushdown
let supports_pushdown = self.options.format.supports_filters_pushdown(
&self.file_schema,
&self.table_schema,
&[filter],
)?;

if supports_pushdown == FilePushdownSupport::Supported {
return Ok(TableProviderFilterPushDown::Exact);
}
Contributor Author

The point of this PR is that this moves from being something specialized that ListingTable does to something that works for any TableProvider; implementations don't need to do anything special! The compatibility checks also happen entirely within the parquet data source machinery, instead of leaking implementations via supports_filters_pushdown.

Contributor

I have one question: aren't we expecting/preparing for people to use ListingTable if they read Parquet files? Are we eventually planning to remove all format-specific handling? Or is this a case only for filter pushdown?

Contributor

If that's the case, why don't we fully remove the supports_filters_pushdown() API altogether?

Contributor Author

I think many users of DataFusion (based on our usage, talks I've seen and examples we have) use custom TableProvider implementations.

I would keep supports_filters_pushdown so that TableProviders can do Exact pruning of filters, e.g. using partition columns.
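As a rough sketch of that kind of Exact pruning (a hypothetical helper, assuming Expr::column_refs() from recent DataFusion releases; a TableProvider's supports_filters_pushdown could call something like this):

use datafusion::common::Result;
use datafusion::logical_expr::{Expr, TableProviderFilterPushDown};

// Hypothetical helper: mark a filter Exact when it only touches partition
// columns (the provider can prune those precisely), otherwise Inexact so the
// filter is still re-applied above the scan.
fn classify_filters(
    partition_cols: &[&str],
    filters: &[&Expr],
) -> Result<Vec<TableProviderFilterPushDown>> {
    Ok(filters
        .iter()
        .map(|f| {
            let cols = f.column_refs();
            if !cols.is_empty()
                && cols.iter().all(|c| partition_cols.contains(&c.name.as_str()))
            {
                TableProviderFilterPushDown::Exact
            } else {
                TableProviderFilterPushDown::Inexact
            }
        })
        .collect())
}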

Contributor

We can justify implementing other TableProviders for Parquet, but I still cannot understand why we need to degrade the capabilities of our ListingTable. Isn't it always better to prune/simplify things at as high a level as possible?

Contributor

I have one question: aren't we expecting/preparing for people to use ListingTable if they read Parquet files? Are we eventually planning to remove all format-specific handling? Or is this a case only for filter pushdown?

For what it is worth, we (InfluxData) don't use ListingTable to read parquet files; instead we provide our own equivalent and create the DataSourceExecs directly.

I would keep supports_filters_pushdown so that TableProviders can do Exact pruning of filters, e.g. using partition columns.

Yes I think that is important too -- I don't think we should be removing any APIs from ListingTable

Contributor

We can justify implementing other TableProviders for Parquet, but I still cannot understand why we need to degrade the capabilities of our ListingTable. Isn't it always better to prune/simplify things at as high a level as possible?

I don't think this degrades the capabilities of the current listing table. I think the only implications are for anyone who used a custom FileFormat and implemented supports_filters_pushdown -- I suspect this is not very common and we can likely avoid consternation by mentioning it in the upgrade guide (see comment below)

Comment on lines -151 to +152
source = source.with_predicate(Arc::clone(&file_schema), predicate);
source = source.with_predicate(predicate);
Contributor Author

This seemed like an easy win: I was able to just change this so that the schema is always passed in by the FileSourceConfigBuilder instead of only when with_predicate is called.
This was necessary because with_predicate is no longer called to attach a predicate; instead that happens during an optimization pass, so ParquetSource needs to have the schema available at that point.
I left with_predicate in there to avoid churn and in case there is a use case for attaching a predicate directly through the scan instead of as a FilterExec that later gets pushed into the scan.

@@ -244,7 +242,7 @@ impl ParquetExecBuilder {
inner: DataSourceExec::new(Arc::new(base_config.clone())),
base_config,
predicate,
pruning_predicate: parquet.pruning_predicate,
pruning_predicate: None, // for backwards compat since `ParquetExec` is only for backwards compat anyway
Contributor Author

Open to other suggestions (i.e. removing it). I felt like this minimizes breakage for folks still using ParquetExec, who are likely the same folks that want to do the least amount of work possible to upgrade.

Contributor

yeah this is fine too in my opinion. It is almost time to remove ParquetExec anyways -- maybe we should just do it in this release 🤔

Comment on lines -652 to -658
let table_schema = get_basic_table_schema();

let file_schema = Schema::new(vec![Field::new(
"list_col",
DataType::Struct(Fields::empty()),
true,
)]);
Contributor Author

This test was wrong! It wanted to test that list_col prevents pushdown because it's a nested type. Instead it was prevented because list_col is not in the table / schema!
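For illustration, a hedged sketch of what a corrected fixture could look like (names reused from the snippet above; the actual fix lives in the PR): put list_col in both schemas so the nested type is the only possible reason for rejecting pushdown.

use datafusion::arrow::datatypes::{DataType, Field, Fields, Schema};

// Sketch only: `list_col` exists in both the file schema and the table schema,
// so a rejected pushdown can only be blamed on its nested (struct) type.
fn nested_column_schemas() -> (Schema, Schema) {
    let nested = Field::new("list_col", DataType::Struct(Fields::empty()), true);
    let file_schema = Schema::new(vec![nested.clone()]);
    let table_schema = Schema::new(vec![nested]);
    (file_schema, table_schema)
}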

Comment on lines 562 to 580
let pruning_predicate_string = self
.pruning_predicate
.as_ref()
.map(|pre| {
let mut guarantees = pre
.literal_guarantees()
.iter()
.map(|item| format!("{}", item))
.collect_vec();
guarantees.sort();
format!(
", pruning_predicate={}, required_guarantees=[{}]",
pre.predicate_expr(),
guarantees.join(", ")
)
})
.unwrap_or_default();
Contributor Author

In #15561 (review) Andrew asked me to keep this, but now that the schema isn't even being passed in to with_predicate it's going to be hard to keep these. I suggest we just accept that they won't be present in the physical plans. If that's not okay, what I could do is generate them on the fly in fmt_extra, or generate them if with_predicate is called with a schema or with_schema is called with a predicate. But I'd like to avoid that unless someone thinks it's worth it or has another suggestion.

Contributor

I think it is important to keep these in the physical plans -- in particular, being able to check via the explain plan whether pruning is happening.

Contributor Author

Hmm okay. I'll see if I can make it happen...
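One way this could be generated on the fly (a sketch, assuming both the predicate and the file schema are available on the source at display time; PruningPredicate::try_new is the existing DataFusion API):

use std::sync::Arc;

use datafusion::arrow::datatypes::SchemaRef;
use datafusion::physical_expr::PhysicalExpr;
use datafusion::physical_optimizer::pruning::PruningPredicate;

// Sketch: rebuild the display string only when both pieces are present,
// mirroring the formatting that used to come from the stored pruning predicate.
fn pruning_predicate_display(
    predicate: Option<&Arc<dyn PhysicalExpr>>,
    file_schema: Option<&SchemaRef>,
) -> String {
    let (Some(pred), Some(schema)) = (predicate, file_schema) else {
        return String::new();
    };
    match PruningPredicate::try_new(Arc::clone(pred), Arc::clone(schema)) {
        Ok(pp) => {
            let mut guarantees: Vec<String> =
                pp.literal_guarantees().iter().map(|g| g.to_string()).collect();
            guarantees.sort();
            format!(
                ", pruning_predicate={}, required_guarantees=[{}]",
                pp.predicate_expr(),
                guarantees.join(", ")
            )
        }
        // If the predicate cannot be turned into a pruning predicate, show nothing.
        Err(_) => String::new(),
    }
}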

@@ -587,4 +560,49 @@ impl FileSource for ParquetSource {
}
}
}

fn try_pushdown_filters(
Contributor Author

cc @berkaysynnada for this implementation

Comment on lines -112 to -114
/// Check if the specified file format has support for pushing down the provided filters within
/// the given schemas. Added initially to support the Parquet file format's ability to do this.
fn supports_filters_pushdown(
Contributor Author

Binning specialized code that was also leaking parquet stuff through DataSource and into TableProvider 😄

Contributor

yes I agree

Since FileFormat is a pub trait, this is technically a breaking API change, but I do think it was a parquet specific optimization

I recommend we mark this PR as an API change and add a note to the upgrade guide https://github.com/apache/datafusion/blob/main/docs/source/library-user-guide/upgrading.md

I think it should basically say: if you implemented FileFormat (which probably no one did) and use it with ListingTable, you will have to implement the newly added ExecutionPlan::try_pushdown_filter method on your execution plan directly if you want the filters to be pushed down

/// An enum to distinguish between different states when determining if certain filters can be
/// pushed down to file scanning
#[derive(Debug, PartialEq)]
pub enum FilePushdownSupport {
Contributor Author

Another one of these enums!

Comment on lines -84 to +86
02)--TableScan: t_pushdown projection=[a], full_filters=[t_pushdown.b > Int32(2)]
02)--Projection: t_pushdown.a
03)----Filter: t_pushdown.b > Int32(2)
04)------TableScan: t_pushdown projection=[a, b], partial_filters=[t_pushdown.b > Int32(2)]
Contributor Author

This is because the pushdown no longer happens at the logical level - it happens at the physical level. This makes sense, in part because the checks for suitability of pushdown are better at the physical level (there may be reasons to reject a pushdown at the physical level that are not present at the logical level, e.g. partition columns or encodings).

Contributor

I think it makes sense and is ok that the logical plans show the filter not pushed down

Comment on lines 88 to 92
03)----DataSourceExec: file_groups={2 groups: [[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/1.parquet], [WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/2.parquet]]}, projection=[a], file_type=parquet, predicate=b@1 > 2, pruning_predicate=b_null_count@1 != row_count@2 AND b_max@0 > 2, required_guarantees=[]
03)----CoalesceBatchesExec: target_batch_size=8192
04)------RepartitionExec: partitioning=RoundRobinBatch(4), input_partitions=2
05)--------DataSourceExec: file_groups={2 groups: [[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/1.parquet], [WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/2.parquet]]}, projection=[a], file_type=parquet, predicate=b@1 > 2 AND b@1 > 2
Contributor Author

@berkaysynnada any idea why we have extra CoalesceBatchesExec and RepartitionExec now?

Contributor

I have a guess but haven't proved it: CoalesceBatchesExec comes because of RepartitionExec, and RepartitionExec is inserted to satisfy the partition count, which is 4. That's required by FilterExec now (which was pushed down at the logical level before), but that FilterExec is pushed down later, after EnforceDistribution.

So, this makes me think about the correct order of physical rules. PushdownFilter should probably work before distribution&order satisfiers. But that could also bring some issues, I'm not sure.

Contributor Author

@adriangb adriangb Apr 21, 2025

PushdownFilter should probably work before distribution&order satisfiers

That makes sense to me. It does more "invasive" re-arranging of plans than those do.

Contributor

I agree it is important to remove the coalesce / repartition

Maybe we can make a separate PR to move the filter pushdown code earlier in the physical planning

An alternate could be to update the filter pushdown optimizer pass somehow to remove these -- but I think it would be cleaner / easier to understand if they were never added in the first place
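A rough sketch of what moving the rule earlier could look like (the filter-pushdown rule name is assumed here, since that rule is introduced by this PR series; PhysicalOptimizer::new().rules is the existing default rule list):

use std::sync::Arc;

use datafusion::physical_optimizer::{PhysicalOptimizer, PhysicalOptimizerRule};

// Sketch: move the (assumed) "PushdownFilter" rule ahead of EnforceDistribution
// so filters are absorbed into the scan before repartitioning is planned.
fn reordered_rules() -> Vec<Arc<dyn PhysicalOptimizerRule + Send + Sync>> {
    let mut rules = PhysicalOptimizer::new().rules;
    if let Some(from) = rules.iter().position(|r| r.name() == "PushdownFilter") {
        let rule = rules.remove(from);
        let to = rules
            .iter()
            .position(|r| r.name() == "EnforceDistribution")
            .unwrap_or(0);
        rules.insert(to, rule);
    }
    rules
}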

Contributor Author

I opened #15938

@adriangb adriangb marked this pull request as ready for review April 20, 2025 01:30
@adriangb adriangb force-pushed the parquet-filter-pushdown branch from 071aa19 to ff090a7 Compare April 20, 2025 01:31
@adriangb
Contributor Author

Thanks @adriangb -- I am about to be offline for a week so I will review this when I return

Enjoy your vacation! I think you'll like this diff:

[image]

Contributor

@berkaysynnada berkaysynnada left a comment

Thank you @adriangb. I couldn't provide many design suggestions, since I can't fully understand the need for these changes. If you provide more background information, maybe I can help more.

It seems there are some critical planning changes here, and it would be better to get approvals from more people for this PR.

};
let config_pushdown_enabled = config.execution.parquet.pushdown_filters;
let table_pushdown_enabled = self.pushdown_filters();
if table_pushdown_enabled || config_pushdown_enabled {
Contributor

Is OR'ing this correct?

Contributor Author

The current behavior is not documented anywhere, I tried to match the existing tests:

# pushdown_filters (currently) defaults to false, but we set it here to be explicit
statement ok
set datafusion.execution.parquet.pushdown_filters = false;
statement ok
CREATE EXTERNAL TABLE t(a varchar, b int, c float) STORED AS PARQUET
LOCATION 'test_files/scratch/parquet_filter_pushdown/parquet_table/';
## Create table with pushdown enabled (pushdown setting is part of the table)
statement ok
set datafusion.execution.parquet.pushdown_filters = true;
## Create table without pushdown
statement ok
CREATE EXTERNAL TABLE t_pushdown(a varchar, b int, c float) STORED AS PARQUET
LOCATION 'test_files/scratch/parquet_filter_pushdown/parquet_table/';
# restore defaults
statement ok
set datafusion.execution.parquet.pushdown_filters = false;
# When filter pushdown is not enabled, ParquetExec only filters based on
# metadata, so a FilterExec is required to filter the
# output of the `ParquetExec`
query T
select a from t where b > 2 ORDER BY a;
----
baz
foo
NULL
NULL
NULL
query TT
EXPLAIN select a from t_pushdown where b > 2 ORDER BY a;
----
logical_plan
01)Sort: t_pushdown.a ASC NULLS LAST
02)--TableScan: t_pushdown projection=[a], full_filters=[t_pushdown.b > Int32(2)]
physical_plan
01)SortPreservingMergeExec: [a@0 ASC NULLS LAST]
02)--SortExec: expr=[a@0 ASC NULLS LAST], preserve_partitioning=[true]
03)----DataSourceExec: file_groups={2 groups: [[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/1.parquet], [WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/parquet_filter_pushdown/parquet_table/2.parquet]]}, projection=[a], file_type=parquet, predicate=b@1 > 2, pruning_predicate=b_null_count@1 != row_count@2 AND b_max@0 > 2, required_guarantees=[]

let mut conf = self.clone();
let mut allowed_filters = vec![];
let mut remaining_filters = vec![];
for filter in &fd.filters {
Contributor

fd.take_filters() to avoid the clones below


fd: FilterDescription,
config: &datafusion_common::config::ConfigOptions,
) -> datafusion_common::Result<FilterPushdownResult<Arc<dyn FileSource>>> {
let Some(file_schema) = self.file_schema.clone() else {
Contributor

I'm asking to learn: in which cases does ParquetSource not have the schema?

Contributor Author

I think they always end up with a schema now, but the current APIs don't require it via the constructor; instead it gets passed in via FileScanConfigBuilder. I tried piping it into the constructor but it makes things difficult; there are APIs that rely on ParquetSource::default() and such. So TLDR: it's a bit gross, but this is the least-churn way to do it and we can always come back later and clean the rest up.

Contributor

maybe we file a follow on ticket


/// Optional predicate for row filtering during parquet scan
pub(crate) predicate: Option<Arc<dyn PhysicalExpr>>,
/// Optional predicate for pruning row groups (derived from `predicate`)
Contributor

good to see these are unifying

/// The schema of the file.
/// In particular, this is the schema of the table without partition columns,
/// *not* the physical schema of the file.
pub(crate) file_schema: Option<SchemaRef>,
Contributor

There is also another schema in FileScanConfig. Do they both reflect the file schema, not the physical schema? And can we somehow unify them?

Contributor Author

This is the same schema that FileScanConfig passes into ParquetSource


@adriangb
Contributor Author

adriangb commented Apr 22, 2025

#15812 surfaced another reason why building the predicates from the file schemas is necessary. I think once we merge this we can tackle that.

@adriangb adriangb force-pushed the parquet-filter-pushdown branch from 1af7766 to 3fde445 Compare April 22, 2025 20:53
Contributor

@berkaysynnada berkaysynnada left a comment

There are good things here, but the main change doesn't seem correct to me. Why are we reducing the capabilities of logical optimizations? I think these planning changes will harm some people. How does it block the dynamic filtering approach?


@adriangb
Contributor Author

adriangb commented Apr 24, 2025

I'm not really sure how this degrades anything. The end result is the same, users won't see any difference.

What ListingTable does currently is misguided and wrong since it is not really at the logical level as you say; instead it pierces the logical/physical separation (see how it converts Expr to PhysicalExpr, etc). It even produces bugs (I believe the pushdown of struct fields may currently be broken, or at least the implementation is confusing and the test is completely wrong).

I think there is pushdown that makes sense at the logical level, namely partition pruning. And I left that for TableProvider to continue to do. But the pruning that relies on a PhysicalExpr seems to me like it should be happening at the physical layer not the logical. It kinda gets away with it because it's the last thing that happens at the logical layer I think, but it's still smelly.

We might be able to leave the stuff in TableProvider in place but we'll be dealing with duplication and confusing methods on DataSource, which is already a complex bit of code. When I first tried to implement it this way I ran into cases with duplicate pushdown and other confusing scenarios. Probably it could have been resolved but I felt like why make one of the most complex bits in DataFusion even more complex instead of simplifying it where possible.

@berkaysynnada
Contributor

berkaysynnada commented Apr 24, 2025

I'm not really sure how this degrades anything. The end result is the same, users won't see any difference.

Logical planning results are changing. We are also using DF end-to-end, but there could always be people only relying on logical plans of DF.

We might be able to leave the stuff in TableProvider in place but we'll be dealing with duplication and confusing methods on DataSource, which is already a complex bit of code. When I first tried to implement it this way I ran into cases with duplicate pushdown and other confusing scenarios. Probably it could have been resolved but I felt like why make one of the most complex bits in DataFusion even more complex instead of simplifying it where possible.

I'm also finding it hard to decide which side to take because of that complexity :D

I will take a look at the other parts of this PR and try to find a solution for points like https://github.com/apache/datafusion/pull/15769/files#r2051612926. Maybe there isn't anything wrong at all, as you said; there is no harm in double-checking.

Unlike the other PRs in this work, we might be touching some core components here. So having a few more people review and approve would make us feel more confident.

@adriangb
Contributor Author

I do think you make a good point of "can we keep the current thing and add the new one". It's worth a shot, at least to split the PR into two. And if that's too complicated or if we just want to simplify we can evaluate from there.

@alamb alamb mentioned this pull request Apr 29, 2025
alamb
alamb previously approved these changes May 1, 2025
Contributor

@alamb alamb left a comment

Thank you @adriangb -- this change makes sense to me and I think is an improvement

The code that is moved was used to avoid adding a FilterExec when the table provider would be able to do exact filter pushdown

I am 1/2 the way through the review of this PR -- I hope to finish up shortly


@@ -982,18 +980,6 @@ impl TableProvider for ListingTable {
return Ok(TableProviderFilterPushDown::Exact);
}

// if we can't push it down completely with only the filename-based/path-based
Contributor

This change makes sense to me -- when @itsjunetime originally implemented this code, there was some complexity because there was no way to do filter pushdown in ExecutionPlans so in my mind this approach was a (clever) workaround

The comments even hint that this is a parquet specific special case

I think the new pattern of handling predicates more generally in this PR is cleaner and will support more cases. Since this code is only currently executed

Perhaps @cisaacson has some other thoughts


@alamb alamb dismissed their stale review May 1, 2025 13:18

clicked wrong button

Contributor

@alamb alamb left a comment

Ok, I went through this PR and TLDR is I think it is an improvement. Thank you very much @adriangb

I left several suggestions on how to improve it / the tests, but I also think we could do that as a follow on PR.

I think this is a really nice step forward -- and while it is taking a long time I am confident it will be worth it in the end


logical_plan TableScan: t projection=[a], full_filters=[t.a != Int32(100)]
logical_plan
01)Filter: t.a != Int32(100)
02)--TableScan: t projection=[a], partial_filters=[t.a != Int32(100)]
Contributor

I recommend we change these tests to show the physical plans (not the logical plans) as that would more accurately show the pushdown happening. Maybe also something we could do as a separate PR

Contributor Author

@alamb do you know why these don't display the physical plan already? Is something parsing them out?
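For reference, one knob that controls this in .slt tests (a sketch, assuming the standard datafusion.explain.physical_plan_only setting applies to these files):

# Sketch: show only the physical plan in subsequent EXPLAIN output
statement ok
set datafusion.explain.physical_plan_only = true;

# ... EXPLAIN queries here now print only the physical plan ...

# restore the default so later tests still show both plans
statement ok
set datafusion.explain.physical_plan_only = false;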

@berkaysynnada
Contributor

@adriangb do you have time to address the last suggestions? I understand the mistake here, and I think we should take this in asap

@adriangb
Contributor Author

adriangb commented May 3, 2025

@adriangb do you have time to address the last suggestions? I understand the mistake here, and I think we should take this in asap

I am going to try to address the last round of review later today on a flight. In particular:

Is there anything I'm missing? Is this what you meant by the mistake?

@berkaysynnada
Contributor

berkaysynnada commented May 3, 2025

Is there anything I'm missing? Is this what you meant by the mistake?

I meant the need of this PR.

Is there anything I'm missing?
#15769 (comment)

a ticket would be good IMO as well
#15769 (comment)

do you wanna give it a try to change the orders of rules?

@adriangb
Contributor Author

adriangb commented May 3, 2025

If you can make a PR to change the order of the rules and open that ticket, I would appreciate it; I won't be able to for several hours 🙏

@adriangb adriangb force-pushed the parquet-filter-pushdown branch from 47b03d9 to cc8ae9a Compare May 4, 2025 07:14
@github-actions github-actions bot added the documentation Improvements or additions to documentation label May 4, 2025
@github-actions github-actions bot added the optimizer Optimizer rules label May 4, 2025
@adriangb
Contributor Author

adriangb commented May 4, 2025

I updated the order of the pushdown rules in this PR; it worked to get rid of the extra nodes.

I've also added the upgrade guide and the pushdown preview is being shown in the physical plans.

@alamb I think the only point missing is #15769 (comment), which I need a bit of guidance on.

Labels
core Core DataFusion crate datasource Changes to the datasource crate documentation Improvements or additions to documentation optimizer Optimizer rules proto Related to proto crate sqllogictest SQL Logic Tests (.slt)