chore: separate column predicates from partition filters #149
Conversation
Walkthrough: The pull request introduces modifications to three key Scala files in the Spark implementation — GroupBy.scala, Join.scala, and TableUtils.scala — separating column predicates from partition filters in table scans.
Branch force-pushed from af68f91 to c3d3546 (compare)
Actionable comments posted: 1
🧹 Nitpick comments (2)
spark/src/main/scala/ai/chronon/spark/Join.scala (1)
216-219: LGTM: Clean separation of partition filters from column predicates.

The changes improve the code structure by:
- Using `effectiveRange.whereClauses("ds")` to generate partition-based filters
- Adding a separate parameter for additional column predicates (currently empty)

This separation of concerns makes the code more maintainable and allows for future extensions to handle column predicates independently.
Consider documenting the purpose of the empty list parameter passed to `scanDfBase` to clarify its intended use for future column predicates. A standalone sketch of the intended separation follows.
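For illustration, here is a minimal, self-contained sketch of that separation. `PartitionRange`, `whereClauses`, and the variable names below are simplified stand-ins invented for this example, not the actual Chronon implementations:

```scala
// Standalone sketch: partition-only filters are generated from a range, while
// column predicates travel as a separate (currently empty) argument.
object PartitionFilterSketch {

  // Minimal stand-in for an inclusive partition range.
  final case class PartitionRange(start: String, end: String) {
    // Produces partition filters such as Seq("ds >= '2024-12-01'", "ds <= '2024-12-07'").
    def whereClauses(partitionColumn: String): Seq[String] =
      Seq(s"$partitionColumn >= '$start'", s"$partitionColumn <= '$end'")
  }

  def main(args: Array[String]): Unit = {
    val effectiveRange = PartitionRange("2024-12-01", "2024-12-07")

    // Keeping the two groups apart lets a pushdown-aware source (e.g. BigQuery)
    // treat partition filters differently from ordinary column predicates.
    val partitionFilters: Seq[String] = effectiveRange.whereClauses("ds")
    val columnPredicates: Seq[String] = Seq.empty // reserved for future column-level predicates

    println(s"partition filters: ${partitionFilters.mkString(" AND ")}")
    println(s"column predicates: ${if (columnPredicates.isEmpty) "<none>" else columnPredicates.mkString(" AND ")}")
  }
}
```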
spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1)

1055-1055: Consider extracting predicate construction logic.

While the implementation is correct, consider extracting the predicate construction logic into a separate private method for better readability and reusability.
```diff
+  private def buildPredicates(query: Option[Query], range: Option[PartitionRange], partitionColumn: String): (Seq[String], Seq[String]) = {
+    val rangeWheres = range.map(whereClauses(_, partitionColumn)).getOrElse(Seq.empty)
+    val queryWheres = query.flatMap(q => Option(q.wheres)).map(_.toScala).getOrElse(Seq.empty)
+    (queryWheres, rangeWheres)
+  }
+
   def scanDf(query: Query, table: String, fallbackSelects: Option[Map[String, String]] = None, range: Option[PartitionRange] = None, partitionColumn: String = partitionColumn): DataFrame = {
-    val rangeWheres = range.map(whereClauses(_, partitionColumn)).getOrElse(Seq.empty)
-    val queryWheres = Option(query).flatMap(q => Option(q.wheres)).map(_.toScala).getOrElse(Seq.empty)
-    val wheres: Seq[String] = rangeWheres ++ queryWheres
+    val (queryWheres, rangeWheres) = buildPredicates(Option(query), range, partitionColumn)
+    val wheres: Seq[String] = queryWheres ++ rangeWheres
     val selects = Option(query).flatMap(q => Option(q.selects)).map(_.toScala).getOrElse(Map.empty)
     scanDfBase(selects, table, wheres, rangeWheres, fallbackSelects)
   }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (3)
- spark/src/main/scala/ai/chronon/spark/GroupBy.scala (1 hunks)
- spark/src/main/scala/ai/chronon/spark/Join.scala (1 hunks)
- spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3 hunks)
🔇 Additional comments (4)
spark/src/main/scala/ai/chronon/spark/GroupBy.scala (1)
680-681: LGTM! Parameter reordering aligns with separation of predicates.

The reordering of parameters in the `scanDfBase` call, moving `partitionConditions` after `wheres`, correctly implements the separation of column predicates from partition filters.

Let's verify the parameter order matches the `TableUtils.scanDfBase` method signature:
✅ Verification successful

Parameter order in the `scanDfBase` call matches the method signature.

The call to `tableUtils.scanDfBase` in GroupBy.scala correctly matches the method signature in TableUtils.scala:
- Method signature: `scanDfBase(selectMap, table, wheres, rangeWheres, fallbackSelects)`
- Call-site order: `scanDfBase(selects, table, wheres, partitionConditions, fallbackSelects)`

The parameter reordering is correct and maintains consistency with the method definition.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify the parameter order in TableUtils.scanDfBase matches the call site
ast-grep --pattern 'def scanDfBase($$$) {'
```

Length of output: 44

Script:

```bash
#!/bin/bash
# Let's try a broader search for scanDfBase method using ripgrep
rg "scanDfBase" -A 5 --type scala
```

Length of output: 2328
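To make the verified parameter order concrete, here is a hedged, standalone sketch of a `scanDfBase`-style helper that keeps column predicates (`wheres`) and partition filters (`rangeWheres`) as separate parameters. It returns SQL text instead of a DataFrame so it runs without a SparkSession; the types and behavior are assumptions, not the real Chronon method:

```scala
// Standalone sketch mirroring the parameter order verified above:
// scanDfBase(selectMap, table, wheres, rangeWheres, fallbackSelects).
object ScanSketch {

  def scanDfBase(selectMap: Map[String, String],
                 table: String,
                 wheres: Seq[String],
                 rangeWheres: Seq[String],
                 fallbackSelects: Option[Map[String, String]] = None): String = {
    // fallbackSelects is ignored in this simplified sketch.
    val selects =
      if (selectMap.isEmpty) Seq("*")
      else selectMap.map { case (alias, expr) => s"$expr AS $alias" }.toSeq

    // Both predicate groups still end up in the WHERE clause here; keeping them
    // separate is what would let a pushdown-aware source handle rangeWheres
    // (partition filters) differently from ordinary column predicates.
    val allWheres = wheres ++ rangeWheres
    val whereSql  = if (allWheres.isEmpty) "" else allWheres.mkString(" WHERE (", ") AND (", ")")

    s"SELECT ${selects.mkString(", ")} FROM $table$whereSql"
  }

  def main(args: Array[String]): Unit = {
    val sql = scanDfBase(
      selectMap = Map("user" -> "user_id"),
      table = "db.events",
      wheres = Seq("event = 'purchase'"),
      rangeWheres = Seq("ds >= '2024-12-01'", "ds <= '2024-12-07'")
    )
    println(sql)
  }
}
```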
spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3)
1028-1029: LGTM! Clean implementation of range filters.

The implementation correctly handles the new `rangeWheres` parameter with proper logging and filtering. The separation of range predicates from other where clauses improves code clarity.

Also applies to: 1033-1033
1055-1055: LGTM! Well-designed API change.

The addition of the optional `range` parameter is backward compatible and follows Scala best practices by using `Option` for optional values. A minimal sketch of the pattern follows.
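A minimal sketch of that backward-compatible pattern, using simplified stand-in types rather than the actual Chronon `Query`/`PartitionRange`:

```scala
// Sketch: adding an optional parameter with a default of None keeps existing
// call sites compiling while letting new callers opt in to partition pruning.
object OptionalRangeSketch {

  final case class PartitionRange(start: String, end: String)

  def scanDf(table: String, range: Option[PartitionRange] = None): String = {
    // When no range is supplied, no partition filter is applied.
    val rangeWheres = range
      .map(r => Seq(s"ds >= '${r.start}'", s"ds <= '${r.end}'"))
      .getOrElse(Seq.empty)
    val whereSql = if (rangeWheres.isEmpty) "" else rangeWheres.mkString(" WHERE ", " AND ", "")
    s"SELECT * FROM $table$whereSql"
  }

  def main(args: Array[String]): Unit = {
    println(scanDf("db.events"))                                                    // existing callers: unchanged
    println(scanDf("db.events", Some(PartitionRange("2024-12-01", "2024-12-07")))) // new callers: opt in
  }
}
```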
1014-1015: Verify all callers of `scanDfBase` are updated.

The addition of the `rangeWheres` parameter is a breaking change. While this separation of concerns is good for maintainability, we need to ensure all callers are updated.
✅ Verification successful

Let me search for more context around this call to understand if it's properly handling the new parameter.

Breaking change is properly handled.

The code inspection shows that `scanDfBase` is only called from within the `scanDf` method, which correctly handles the new `rangeWheres` parameter. The `scanDf` method separates the range-based predicates from the query predicates and passes them appropriately:
- Range predicates are derived from `range.map(whereClauses(_, partitionColumn))`
- Query predicates are extracted from `query.wheres`
- Both are properly passed to `scanDfBase`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for all direct calls to scanDfBase
ast-grep --pattern 'scanDfBase($$$)'
```

Length of output: 167

Script:

```bash
#!/bin/bash
# Get context around the scanDfBase call
rg "scanDfBase" spark/src/main/scala/ai/chronon/spark/TableUtils.scala -B 10 -A 5
```

Length of output: 1405
```diff
@@ -1024,9 +1025,12 @@ case class TableUtils(sparkSession: SparkSession) {
              | ${selects.mkString("\n ").green}
              | wheres:
              | ${wheres.mkString(",\n ").green}
+             | rangeWheres:
```
Suggested change:

```diff
-             | rangeWheres:
+             | partition filters:
```
LGTM!
## Summary

- This is part 1 of the saga to support BigQuery reads. https://app.asana.com/0/1208949807589885/1208951092959581/f
- There are sibling PRs that address BigQuery cataloging:
  - #145
  - #146
  - #147
  - #148

  In terms of functionality, they are not technically dependent on one another for code completeness; however, they will need to work in concert to fully support BQ as a data source.
- This PR is the first step to supporting BigQuery partition pushdown. Partition filters are handled separately from predicates, see: https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#partitioned-tables
- We need to follow up this PR by setting the partition filter in the read option (see the sketch at the end of this description). Since this is a significant change, we'll break it up into steps so we can test incrementally.

## Checklist

- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **New Features**
  - Enhanced DataFrame querying capabilities with additional filtering options.
  - Improved exception handling and logging for backfill operations.
- **Bug Fixes**
  - Refined logic for data filtering and retrieval in join operations.
- **Documentation**
  - Updated method signatures to reflect new parameters and functionality.
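As a pointer for the follow-up mentioned above, here is a hedged sketch of how partition filters might be handed to the spark-bigquery-connector through its `filter` read option. The table name, partition column, and date values are placeholders, and the connector documentation linked above remains the authoritative reference:

```scala
// Hedged sketch: passing partition filters through the spark-bigquery-connector's
// "filter" read option so BigQuery can prune partitions server-side.
// Project/dataset/table, the "ds" column, and the dates are placeholders.
import org.apache.spark.sql.{DataFrame, SparkSession}

object BigQueryPartitionPushdownSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bq-partition-pushdown-sketch")
      .master("local[*]") // local master only for running the sketch
      .getOrCreate()

    // Partition filters, kept separate from ordinary column predicates,
    // are rendered into a single filter expression for the connector.
    val partitionFilters = Seq("ds >= '2024-12-01'", "ds <= '2024-12-07'")

    val df: DataFrame = spark.read
      .format("bigquery")
      .option("filter", partitionFilters.mkString(" AND ")) // pushed down to BigQuery
      .load("my-project.my_dataset.my_table")

    df.show()
  }
}
```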