chore: separate column predicates from partition filters #149


Merged
merged 4 commits into main on Dec 23, 2024

Conversation

@tchow-zlai tchow-zlai (Collaborator) commented Dec 20, 2024

Summary

Checklist

- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

Summary by CodeRabbit

  • New Features

    • Enhanced DataFrame querying capabilities with additional filtering options.
    • Improved exception handling and logging for backfill operations.
  • Bug Fixes

    • Refined logic for data filtering and retrieval in join operations.
  • Documentation

    • Updated method signatures to reflect new parameters and functionality.


coderabbitai bot commented Dec 20, 2024

Walkthrough

The pull request introduces modifications to three key Scala files in the Spark implementation: GroupBy.scala, Join.scala, and TableUtils.scala. The changes primarily focus on enhancing data querying and filtering capabilities, with updates to method signatures and internal logic for handling DataFrame operations. The modifications include adjustments to how range conditions are applied, improvements in logging and error handling, and refinements in the way data retrieval and join operations are processed.

Changes

File | Change Summary
--- | ---
spark/src/main/scala/ai/chronon/spark/GroupBy.scala | Updated sourceDf method parameter order; enhanced computeBackfill method with improved logging and exception handling
spark/src/main/scala/ai/chronon/spark/Join.scala | Modified getRightPartsData method's condition handling; updated scanDfBase method invocation with an additional parameter
spark/src/main/scala/ai/chronon/spark/TableUtils.scala | Added rangeWheres parameter to scanDfBase method; updated scanDf method to support an optional partition range

Possibly related PRs

  • Summary upload #50: The SummaryUploader changes also involve handling DataFrames, which relates to the modifications to the GroupBy class's sourceDf method for DataFrame queries.
  • chore: small cleanup of TableUtils #153: The cleanup of the TableUtils class, including the removal of deprecated methods, touches the same class this PR modifies for better handling of DataFrame queries and operations.

Suggested reviewers

  • david-zlai
  • piyush-zlai
  • nikhil-zlai

Poem

🐰 Hoppity hop through Spark's domain,
Where data flows like a coding refrain,
Parameters dance, methods realign,
Filtering magic in each design,
A rabbit's code, precise and bright! 🔍✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 2659da6 and 0903016.

📒 Files selected for processing (1)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala


@tchow-zlai tchow-zlai changed the base branch from tchow/bq-support-4 to main December 20, 2024 08:30
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
spark/src/main/scala/ai/chronon/spark/Join.scala (1)

216-219: LGTM: Clean separation of partition filters from column predicates.

The changes improve the code structure by:

  1. Using effectiveRange.whereClauses("ds") to generate partition-based filters
  2. Adding a separate parameter for additional column predicates (currently empty)

This separation of concerns makes the code more maintainable and allows for future extensions to handle column predicates independently.

Consider documenting the purpose of the empty list parameter in scanDfBase to clarify its intended use for future column predicates.
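
To make the idea concrete, here is a minimal, self-contained sketch of the pattern in plain Spark. It only illustrates keeping partition filters apart from column predicates; the object name, table, and column names are placeholders, and this is not the PR's actual Join.scala code.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Illustrative sketch only: partition filters (on the "ds" partition column) are kept in a
// separate list from ordinary column predicates, so they can later be pushed down differently
// (for example as a connector-level partition filter).
object PredicateSeparationSketch {
  def scan(spark: SparkSession,
           table: String,
           columnPredicates: Seq[String], // e.g. Seq("amount > 0")
           partitionFilters: Seq[String]  // e.g. Seq("ds >= '2024-12-01'", "ds <= '2024-12-20'")
          ): DataFrame = {
    val base = spark.table(table)
    // Apply partition filters first so Spark can prune partitions of a partitioned table.
    val pruned = partitionFilters.foldLeft(base)((df, clause) => df.where(clause))
    // Apply the remaining column-level predicates afterwards.
    columnPredicates.foldLeft(pruned)((df, clause) => df.where(clause))
  }
}
```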

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1)

1055-1055: Consider extracting predicate construction logic.

While the implementation is correct, consider extracting the predicate construction logic into a separate private method for better readability and reusability.

+  private def buildPredicates(query: Option[Query], range: Option[PartitionRange], partitionColumn: String): (Seq[String], Seq[String]) = {
+    val rangeWheres = range.map(whereClauses(_, partitionColumn)).getOrElse(Seq.empty)
+    val queryWheres = query.flatMap(q => Option(q.wheres)).map(_.toScala).getOrElse(Seq.empty)
+    (queryWheres, rangeWheres)
+  }
+
   def scanDf(query: Query,
              table: String,
              fallbackSelects: Option[Map[String, String]] = None,
              range: Option[PartitionRange] = None,
              partitionColumn: String = partitionColumn): DataFrame = {
-    val rangeWheres = range.map(whereClauses(_, partitionColumn)).getOrElse(Seq.empty)
-    val queryWheres = Option(query).flatMap(q => Option(q.wheres)).map(_.toScala).getOrElse(Seq.empty)
-    val wheres: Seq[String] = rangeWheres ++ queryWheres
+    val (queryWheres, rangeWheres) = buildPredicates(Option(query), range, partitionColumn)
+    val wheres: Seq[String] = queryWheres ++ rangeWheres

     val selects = Option(query).flatMap(q => Option(q.selects)).map(_.toScala).getOrElse(Map.empty)
     scanDfBase(selects, table, wheres, rangeWheres, fallbackSelects)
   }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 1730474 and c3d3546.

📒 Files selected for processing (3)
  • spark/src/main/scala/ai/chronon/spark/GroupBy.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/Join.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3 hunks)
🔇 Additional comments (4)
spark/src/main/scala/ai/chronon/spark/GroupBy.scala (1)

680-681: LGTM! Parameter reordering aligns with separation of predicates.

The reordering of parameters in the scanDfBase call, moving partitionConditions after wheres, correctly implements the separation of column predicates from partition filters.

Let's verify the parameter order matches the TableUtils.scanDfBase method signature:

✅ Verification successful

Parameter order in scanDfBase call matches method signature

The call to tableUtils.scanDfBase in GroupBy.scala correctly matches the method signature in TableUtils.scala:

  • Method signature: scanDfBase(selectMap, table, wheres, rangeWheres, fallbackSelects)
  • Call site order: scanDfBase(selects, table, wheres, partitionConditions, fallbackSelects)

The parameter reordering is correct and maintains consistency with the method definition.
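
For readers following along, a hypothetical, self-contained sketch of that call shape is below. The parameter names mirror the signature quoted above; the stub body and example values are placeholders, not Chronon's implementation.

```scala
// Hypothetical stub that mirrors the quoted signature:
// scanDfBase(selectMap, table, wheres, rangeWheres, fallbackSelects).
// It only renders the pieces as SQL text to show how the two predicate lists stay separate.
object ScanDfBaseShape {
  def scanDfBase(selectMap: Map[String, String],
                 table: String,
                 wheres: Seq[String],
                 rangeWheres: Seq[String],
                 fallbackSelects: Option[Map[String, String]] = None): String = {
    val projection = selectMap.map { case (alias, expr) => s"$expr AS $alias" }.mkString(", ")
    val predicates  = (wheres ++ rangeWheres).mkString(" AND ")
    s"SELECT $projection FROM $table WHERE $predicates"
  }

  def main(args: Array[String]): Unit = {
    println(scanDfBase(
      selectMap = Map("user_id" -> "user_id", "amount" -> "amount"),
      table = "namespace.events",                                    // placeholder table
      wheres = Seq("amount > 0"),                                    // column predicates
      rangeWheres = Seq("ds >= '2024-12-01'", "ds <= '2024-12-20'")  // partition filters
    ))
  }
}
```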

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the parameter order in TableUtils.scanDfBase matches the call site
ast-grep --pattern 'def scanDfBase($$$) {'

Length of output: 44


Script:

#!/bin/bash
# Let's try a broader search for scanDfBase method using ripgrep
rg "scanDfBase" -A 5 --type scala

Length of output: 2328

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3)

1028-1029: LGTM! Clean implementation of range filters.

The implementation correctly handles the new rangeWheres parameter with proper logging and filtering. The separation of range predicates from other where clauses improves code clarity.

Also applies to: 1033-1033


1055-1055: LGTM! Well-designed API change.

The addition of the optional range parameter is backward compatible and follows Scala best practices by using Option for optional values.
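
As a usage note, an Option-typed parameter with a None default keeps existing call sites source-compatible. A hedged example of both call styles follows; the PartitionRange(start, end) construction shown is an assumption for illustration and may differ from the actual constructor.

```scala
// Hypothetical usage sketch; existing callers keep compiling because `range` defaults to None.
val unbounded = tableUtils.scanDf(query, "namespace.events")

// Callers that know the partition window can pass it explicitly.
// NOTE: PartitionRange("2024-12-01", "2024-12-20") is an assumed construction for illustration.
val bounded = tableUtils.scanDf(
  query,
  "namespace.events",
  range = Some(PartitionRange("2024-12-01", "2024-12-20"))
)
```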


1014-1015: Verify all callers of scanDfBase are updated.

The addition of the rangeWheres parameter is a breaking change. While this separation of concerns is good for maintainability, we need to ensure all callers are updated.

✅ Verification successful

Let me search for more context around this call to understand if it's properly handling the new parameter.


Breaking change is properly handled

The code inspection shows that scanDfBase is only called from within the scanDf method, which correctly handles the new rangeWheres parameter. The scanDf method separates the range-based predicates from query predicates and passes them appropriately:

  • Range predicates are derived from range.map(whereClauses(_, partitionColumn))
  • Query predicates are extracted from query.wheres
  • Both are properly passed to scanDfBase
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for all direct calls to scanDfBase
ast-grep --pattern 'scanDfBase($$$)'

Length of output: 167


Script:

#!/bin/bash
# Get context around the scanDfBase call
rg "scanDfBase" spark/src/main/scala/ai/chronon/spark/TableUtils.scala -B 10 -A 5

Length of output: 1405

@tchow-zlai tchow-zlai changed the title feat: separate column predicates from partition filters chore: separate column predicates from partition filters Dec 20, 2024
@@ -1024,9 +1025,12 @@ case class TableUtils(sparkSession: SparkSession) {
| ${selects.mkString("\n ").green}
| wheres:
| ${wheres.mkString(",\n ").green}
| rangeWheres:
Contributor

Suggested change:
-| rangeWheres:
+| partition filters:

@nikhil-zlai nikhil-zlai (Contributor) left a comment

LGTM!

@coderabbitai coderabbitai bot mentioned this pull request Apr 18, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
## Summary

- This is part 1 of the saga to support BigQuery reads.
  https://app.asana.com/0/1208949807589885/1208951092959581/f
- There are sibling PRs that address BigQuery cataloging:
  - #145
  - #146
  - #147
  - #148
  In terms of functionality, they are not technically dependent on one another for
  code completeness; however, they will need to work in concert to fully support BQ
  as a data source.
- This PR is the first step to supporting BigQuery partition pushdown. Partition
  filters are handled separately from predicates; see:
  https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#partitioned-tables
- We need to follow up this PR by setting the partition filter in the
read option. Since this is a significant change, we'll break it up into
steps so we can test incrementally.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced DataFrame querying capabilities with additional filtering
options.
	- Improved exception handling and logging for backfill operations.

- **Bug Fixes**
	- Refined logic for data filtering and retrieval in join operations.

- **Documentation**
- Updated method signatures to reflect new parameters and functionality.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- av pr metadata
This information is embedded by the av CLI when creating PRs to track
the status of stacks when using Aviator. Please do not delete or edit
this section of the PR.
```
{"parent":"main","parentHead":"","trunk":"main"}
```
-->
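
For context on the follow-up called out above (setting the partition filter in the read option), here is a hedged Scala sketch of how that might look with the spark-bigquery-connector's filter read option; the table id, partition column, and dates are placeholders, and the connector documentation linked above remains the authoritative reference.

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch only: pushes the partition-range predicate down to BigQuery through the
// connector's "filter" read option instead of filtering the DataFrame after the read.
// All identifiers below are placeholders.
object BigQueryPartitionFilterSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bq-partition-filter-sketch").getOrCreate()

    val partitionFilter = "ds >= '2024-12-01' AND ds <= '2024-12-20'"

    val events = spark.read
      .format("bigquery")
      .option("table", "my-project.my_dataset.events") // placeholder table id
      .option("filter", partitionFilter)               // partition filter pushed to BigQuery
      .load()

    events.show(10)
  }
}
```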
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025