feat: support pseudocolumns in bigquery native tables #689


Merged
merged 3 commits into main from tchow/bq-partitioning on Apr 26, 2025

Conversation

tchow-zlai
Collaborator

@tchow-zlai tchow-zlai commented Apr 26, 2025

Summary

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of BigQuery partitioned tables to correctly identify and process system-defined partition columns.
    • Enhanced error handling and validation for partition column detection.
    • Improved logging to provide clearer information about partition columns and their types.


coderabbitai bot commented Apr 26, 2025

Walkthrough

The `table` method in `BigQueryNative.scala` was updated to explicitly query BigQuery's INFORMATION_SCHEMA to detect the partition column and whether it is system-defined. The method now handles system-defined partition columns by rewriting the query to alias and format the partition column, and applies stricter validation and error handling. Logging was added for partition-column discovery, and the partition-filtering approach was updated to avoid deprecated options.
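The two-step flow above can be sketched as plain query builders. This is a hedged illustration, not the PR's actual code: the INFORMATION_SCHEMA column names (`is_partitioning_column`, `is_system_defined`) follow BigQuery's documented `COLUMNS` view, while the alias `__bq_part_col` and both function names are made-up placeholders.

```scala
// Hypothetical sketch of the partition-detection flow described above.
object PartitionSqlSketch {

  // Step 1: look up the partition column and whether it is system-defined.
  def partitionLookupSql(project: String, dataset: String, table: String): String =
    s"""SELECT column_name, is_system_defined
       |FROM `$project.$dataset.INFORMATION_SCHEMA.COLUMNS`
       |WHERE table_name = '$table' AND is_partitioning_column = 'YES'""".stripMargin

  // Step 2: for a system-defined pseudocolumn (e.g. _PARTITIONTIME), alias it
  // into the select list so the connector can surface it and drop it later.
  def rewrittenSelect(fqTable: String, partCol: String, alias: String): String =
    s"SELECT $partCol AS $alias, * FROM `$fqTable`"
}
```

The pseudocolumn must be aliased because `_PARTITIONTIME` is not part of `SELECT *` output for ingestion-time partitioned tables.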

Changes

| File(s) | Change Summary |
| --- | --- |
| `cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala` | Enhanced `table` method to detect and handle system-defined partition columns, update query logic, add logging. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant BigQueryNative
    participant BigQuery
    participant SparkSession

    Caller->>BigQueryNative: table(tableName, partitionFilters)
    BigQueryNative->>BigQuery: Query INFORMATION_SCHEMA.COLUMNS
    BigQuery-->>BigQueryNative: Return partition column info
    BigQueryNative->>BigQueryNative: Validate partition column & system-defined flag
    alt System-defined partition column
        BigQueryNative->>BigQuery: Query table with partition column aliased
        BigQuery-->>BigQueryNative: Return data
        BigQueryNative->>SparkSession: Format partition column, drop alias
    else Not system-defined
        BigQueryNative->>BigQuery: Query table with optional WHERE clause
        BigQuery-->>BigQueryNative: Return data
    end
    BigQueryNative-->>Caller: Return DataFrame
```

Possibly related PRs

Suggested reviewers

  • nikhil-zlai
  • piyush-zlai
  • david-zlai

Poem

In BigQuery’s halls, partitions hide,
System-defined or user supplied.
Now queries seek with sharper eyes,
Aliasing columns, formatting wise.
With logging bright and errors tight,
The table’s truth comes into light!
🚀



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 08fced3 and df72331.

📒 Files selected for processing (1)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala

Co-authored-by: Thomas Chow <[email protected]>

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (2)

30-41: Avoid full collect for a single-row lookup

`collect.headOption` materialises the entire INFORMATION_SCHEMA query result (one row today, but unbounded in theory). Fetch just the first row to keep driver memory usage low:

```diff
-      .collect
-      .headOption
+      .limit(1)   // push the single-row limit down to BigQuery
+      .collect()
+      .headOption
```

58-66: Duplicate column scan & extra shuffle

`SELECT ${partColName} as ${internalBQCol}, *` selects the partition column twice, and BigQuery will still transmit both copies before we drop one.
A lighter pattern:

```diff
-SELECT ${partColName} as ${internalBQCol}, * FROM
+SELECT *, ${partColName} as ${internalBQCol} FROM
```

or BigQuery's `REPLACE` clause:

```sql
SELECT * REPLACE(${partColName} AS ${internalBQCol}) FROM ...
```

keeps the payload minimal and sidesteps duplicate-column pitfalls.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between e6b68c0 and ee48900.

📒 Files selected for processing (1)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1 hunks)
🔇 Additional comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)

49-51:

❌ Incorrect review comment

logger may be undefined here

`logger.info` assumes an in-scope logger, but this object doesn't declare one and `Format` doesn't obviously supply it. Compilation will fail unless another trait injects it.

Please verify or add something like:

```scala
private val logger = org.slf4j.LoggerFactory.getLogger(getClass)
```

🏁 Script executed:

```bash
#!/bin/bash
# Show the definition and signature of BigQueryNative
grep -n "class BigQueryNative" cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala
sed -n '1,50p' cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala

# Find any Format trait declarations
grep -R "trait Format" -n .
```

Length of output: 2716


🏁 Script executed:

```bash
#!/bin/bash
sed -n '1,200p' spark/src/main/scala/ai/chronon/spark/catalog/Format.scala
```

Length of output: 2242


Ignore logger undefined warning

The `Format` trait declares

```scala
@transient protected lazy val logger: Logger = LoggerFactory.getLogger(getClass)
```

so `BigQueryNative` already inherits `logger`. No change needed.

Likely an incorrect or invalid review comment.
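The inheritance pattern the verification above found can be reduced to a minimal sketch: a base trait declares a lazy logger, so every implementing object gets one without declaring it. Note the real `Format` trait uses slf4j; `java.util.logging` stands in here only to keep the example dependency-free, and both type names are placeholders.

```scala
import java.util.logging.Logger

// Hypothetical sketch: the trait supplies a shared lazy logger.
trait FormatSketch {
  @transient protected lazy val logger: Logger = Logger.getLogger(getClass.getName)
}

object BigQueryNativeSketch extends FormatSketch {
  def describePartition(col: String): String = {
    val msg = s"partition column: $col"
    logger.info(msg) // compiles because logger is inherited from the trait
    msg
  }
}
```

Because the member is `lazy`, the underlying logger is only created on first use, and `@transient` keeps it out of Spark closure serialization.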

Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai tchow-zlai merged commit 071ef17 into main Apr 26, 2025
21 of 40 checks passed
@tchow-zlai tchow-zlai deleted the tchow/bq-partitioning branch April 26, 2025 18:53
kumar-zlai pushed a commit that referenced this pull request Apr 27, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025