feat: do partition filtering on bq native tables by union individual partitions #690
Conversation
Co-authored-by: Thomas Chow <[email protected]>
Walkthrough

Sequence diagram:

```mermaid
sequenceDiagram
    participant Caller
    participant BigQueryNative
    participant BigQuery
    Caller->>BigQueryNative: table(...)
    BigQueryNative->>BigQuery: Query partition column
    BigQuery-->>BigQueryNative: Partition column name
    BigQueryNative->>BigQuery: Query distinct partition values (with filters)
    BigQuery-->>BigQueryNative: List of partition values
    loop For each partition value
        BigQueryNative->>BigQuery: Read data for partition value
        BigQuery-->>BigQueryNative: DataFrame for partition
    end
    BigQueryNative->>Caller: Unioned DataFrame with partition column
```
Actionable comments posted: 3
🧹 Nitpick comments (1)

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)

Lines 62-70: Many small reads ⇒ latency; prefer a single `IN (…)` read.
Looping over partitions issues N BigQuery jobs. A single read with `WHERE partCol IN ('p1', 'p2', …)` is faster and cheaper.
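A minimal sketch of that alternative, reusing the PR's names (`selectedParts`, `partColName`, `bqFormat`, `bqFriendlyName`); the quoting below is naive and assumes partition values contain no single quotes:

```scala
// Build one IN (...) filter instead of issuing N per-partition BigQuery jobs.
// Naive quoting: assumes partition values contain no single quotes.
val inList  = selectedParts.map(v => s"'$v'").mkString(", ")
val pFilter = s"${partColName} IN (${inList})"

val unioned = sparkSession.read
  .format(bqFormat)
  .option("filter", pFilter) // pushed down to the BigQuery connector
  .load(bqFriendlyName)
```

Whether this avoids the original "Response too large" error would need to be verified, since it reads all partitions in one job again.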
🔇 Additional comments (1)

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)

Lines 64-70: Preserve the column type when re-adding the partition field.
`lit(partValue)` re-creates the column as STRING; the original may be DATE or TIMESTAMP. Consider casting back to the loaded DataFrame's original type, e.g. `.withColumn(partColName, lit(partValue).cast(df.schema(partColName).dataType))`, where `df` is the per-partition DataFrame returned by `load`.
```scala
val partitionWheres = if (partitionFilters.nonEmpty) s"WHERE ${partitionFilters}" else partitionFilters
val partitionFormat = TableUtils(sparkSession).partitionFormat
val dfw = sparkSession.read
val select = s"SELECT distinct(${partColName}) AS ${internalBQCol} FROM ${bqFriendlyName} ${partitionWheres}"
```
Use the keyword form `DISTINCT …` — `distinct()` is invalid BigQuery SQL.
`distinct(${partColName})` will fail on BigQuery. Replace with the keyword form:

```diff
-val select = s"SELECT distinct(${partColName}) AS ${internalBQCol} FROM ${bqFriendlyName} ${partitionWheres}"
+val select = s"SELECT DISTINCT ${partColName} AS ${internalBQCol} FROM ${bqFriendlyName} ${partitionWheres}"
```
```scala
val selectedParts = sparkSession.read
  .format(bqFormat)
  .option("viewsEnabled", true)
  .option("materializationDataset", bqTableId.getDataset)
if (isPseudoColumn) {
  val select = s"SELECT ${partColName} AS ${internalBQCol}, * FROM ${bqFriendlyName} ${partitionWheres}"
  logger.info(s"BQ select: ${select}")
  dfw
    .load(select)
    .withColumn(partColName, date_format(col(internalBQCol), partitionFormat))
    .drop(internalBQCol)
} else {
  dfw
    .load(s"SELECT * FROM ${bqFriendlyName} ${partitionWheres}")
}
  .load(select)
  .select(date_format(col(internalBQCol), partitionFormat))
```
Add the missing `project` option when reading partitions.
Reads here omit `.option("project", providedProject)`; cross-project tables will fail.

```diff
 val selectedParts = sparkSession.read
   .format(bqFormat)
+  .option("project", providedProject)
   .option("viewsEnabled", true)
   .option("materializationDataset", bqTableId.getDataset)
   .load(select)
```

Apply the same when loading each partition below.
```scala
selectedParts
  .map((partValue) => {
    val pFilter = f"${partColName} = '${partValue}'"
    sparkSession.read
      .format(bqFormat)
      .option("filter", pFilter)
      .load(bqFriendlyName)
      .withColumn(partColName, lit(partValue))
  }) // todo: make it nullable
  .reduce(_ unionByName _)
```
Guard against an empty partition list.
`reduce` throws on `Nil`:

```diff
-  .reduce(_ unionByName _)
+  .reduceOption(_ unionByName _)
+  .getOrElse(sparkSession.emptyDataFrame)
```
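The hazard can be seen with plain collections, independent of Spark: `reduce` on an empty list throws, while `reduceOption` returns `None` and composes with a fallback (here an empty string stands in for `sparkSession.emptyDataFrame`):

```scala
val parts: List[String] = Nil

// parts.reduce(_ + _)  // would throw UnsupportedOperationException on Nil

val safe = parts
  .reduceOption(_ + _)  // None on Nil, Some(folded value) otherwise
  .getOrElse("")        // fall back to an empty value instead of throwing
```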
Man, BQ such a nightmare
…partitions (#690)

## Summary

- Getting a 403 querying for a range of partitions in BigQuery native tables:

  ```
  Response too large to return. Consider specifying a destination table in your job configuration
  ```

- Instead, let's just query individual partitions of data as separate dataframes and union them together.

## Checklist

- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **Bug Fixes**
  - Improved handling of BigQuery partitioned tables, ensuring more accurate partition filtering and data retrieval.
- **Refactor**
  - Streamlined the process for reading partitioned data from BigQuery, resulting in a clearer and more consistent approach for users working with partitioned tables.

Co-authored-by: Thomas Chow <[email protected]>