feat: GCS format support #147

Merged: 11 commits merged into main from tchow/bq-support-3 on Dec 23, 2024
Conversation

@tchow-zlai (Collaborator) commented Dec 20, 2024

Summary

Checklist

  • [ ] Added Unit Tests
  • [ ] Covered by existing CI
  • [ ] Integration tested
  • [ ] Documentation update

Summary by CodeRabbit

  • New Features

    • Added new library dependencies to enhance Google Cloud Storage integration.
    • Introduced a new GCS class for handling partitioning in Google Cloud Storage.
  • Improvements

    • Updated BigQueryFormat to refine table reading and processing logic.
    • Enhanced error handling for external table definitions and improved control flow for table metadata access.
  • Bug Fixes

    • Improved error handling for missing database specifications in partition methods.

@coderabbitai bot commented Dec 20, 2024

Walkthrough

This pull request enhances the cloud_gcp project's Google Cloud integration by adding new library dependencies for Google Cloud Storage and BigQuery. The changes include updating the build.sbt file with new connectors, modifying the BigQueryFormat class to improve BigQuery interactions, and introducing a new GCSFormat class to handle Google Cloud Storage operations. The modifications expand the project's capabilities for working with Google Cloud services.

Changes

| File | Change Summary |
| --- | --- |
| build.sbt | Added three new library dependencies for Google Cloud Storage and Hadoop connectors |
| cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala | Renamed GCPFormatProvider to GcpFormatProvider; updated BigQuery client initialization and table reading logic |
| cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala | New file introducing the GCS case class with methods for handling Google Cloud Storage partitioning and format operations |
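
To make the walkthrough concrete, here is a minimal sketch of the partition-listing idea behind the new GCS class. It is not the merged code: the constructor fields, the single partition column, and the omission of the project's Format trait are simplifications for illustration.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch, not the merged implementation: a GCS-backed "format"
// that lists partition values by reading the parquet layout back through Spark.
case class GCS(sourceUri: String, fileFormat: String = "parquet") {

  def partitions(partitionColumn: String)(implicit spark: SparkSession): Seq[Map[String, String]] = {
    val df = spark.read.format(fileFormat).load(sourceUri)
    df.select(partitionColumn)
      .distinct()
      .collect()
      .map(row => Map(partitionColumn -> Option(row.get(0)).map(_.toString).getOrElse("")))
      .toSeq
  }
}
```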

Possibly Related PRs

Suggested Reviewers

  • nikhil-zlai
  • piyush-zlai
  • david-zlai

Poem

🌥️ Clouds of Code, Dancing Light
Google's libraries take flight
Connectors weave their magic spell
Storage dances, data tells
Cloud integration shines so bright! ☁️



@tchow-zlai force-pushed the tchow/bq-support-3 branch 3 times, most recently from b6c1f86 to 76cd466 on December 20, 2024 at 23:24
@nikhil-zlai (Contributor) left a comment

excellent!

tchow-zlai added a commit that referenced this pull request Dec 23, 2024
## Summary

- This is part 1 of the saga to support BigQuery reads.
https://app.asana.com/0/1208949807589885/1208951092959581/f
- There are sibling PRs that address BigQuery cataloging:
  - #145
  - #146
  - #147
  - #148

  In terms of functionality, they are not technically dependent on one
  another for code completeness; however, they will need to work in concert
  to fully support BQ as a data source.
- This PR is the first step to supporting BigQuery partition pushdown.
Partition filters are handled separately from predicates, see:
https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#partitioned-tables
- We need to follow up this PR by setting the partition filter in the
read option. Since this is a significant change, we'll break it up into
steps so we can test incrementally.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced DataFrame querying capabilities with additional filtering
options.
	- Improved exception handling and logging for backfill operations.

- **Bug Fixes**
	- Refined logic for data filtering and retrieval in join operations.

- **Documentation**
- Updated method signatures to reflect new parameters and functionality.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- av pr metadata
This information is embedded by the av CLI when creating PRs to track
the status of stacks when using Aviator. Please do not delete or edit
this section of the PR.
```
{"parent":"main","parentHead":"","trunk":"main"}
```
-->
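
For context on the follow-up described in the summary above: the spark-bigquery-connector accepts partition filters through its `filter` read option, separately from ordinary predicate pushdown. A rough illustration — the table name and date value are made up:

```scala
// Assumes an active SparkSession `spark`; table and filter are illustrative.
val df = spark.read
  .format("bigquery")
  .option("filter", "ds = '2024-12-20'") // pushed down as a partition filter
  .load("my-project.my_dataset.my_table")
```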
Base automatically changed from tchow/bq-support-2 to main December 23, 2024 08:54
@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (4)

9-9: Add a brief Scaladoc for this new class.
A concise class-level comment helps consumers quickly grasp its purpose.
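For example (wording is only a suggestion):

```scala
/** Format implementation for tables whose data lives in Google Cloud Storage.
  *
  * Partitions are resolved from the partitioned file layout under the table's
  * source URI rather than from a metastore.
  */
```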


15-45: Validate presence of 'database'.
Throwing an exception is acceptable. Consider returning a clearer message.
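A sketch of a clearer failure, assuming the surrounding code holds a Spark `TableIdentifier`:

```scala
val database = tableIdentifier.database.getOrElse(
  throw new IllegalArgumentException(
    s"GCS table ${tableIdentifier.table} requires an explicit database (project) in its identifier"))
```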


46-46: Consider a friendlier error mechanism for multiple URIs.
An assert can abruptly stop the session; a custom exception with more context might be more user-friendly.
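One possible shape, replacing the bare assert (variable names assumed from the review context):

```scala
// Fail with context instead of assert(uris.length == 1), which only raises
// a bare AssertionError with no detail about the offending table.
if (uris.length != 1)
  throw new UnsupportedOperationException(
    s"Expected exactly one source URI for $tableName but found ${uris.length}: ${uris.mkString(", ")}")
```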


93-94: Clarify unsupported or stub methods.
Throwing here is valid, but a doc comment or self-documentation can reduce confusion.
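For instance (the method name here is illustrative, not the actual stub):

```scala
/** Intentionally unsupported: this integration treats GCS-backed tables as read-only. */
def createTableTypeString: String =
  throw new UnsupportedOperationException("createTableTypeString is not supported for GCS tables")
```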

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

34-58: Remove or clarify this commented-out block.
Lingering commented code can cause confusion.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 723c69c and ea2d400.

📒 Files selected for processing (3)
  • build.sbt (1 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
🔇 Additional comments (8)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (4)

11-13: Behavior is inherited intact.
No extra logic here, so this override looks good.


78-91: Partition mapping logic is clear.
This approach is straightforward and maintainable. Well done!


96-98: Sub-partitions filter support is beneficial.
Implementing sub-partition filters can speed queries significantly.


68-77: Potential overhead reading partition metadata with 'parquet(uris: _*)'.
Large data might cause overhead. You could request partition info directly without scanning the entire data.
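A sketch of that suggestion, listing Hive-style partition directories directly instead of scanning the data files (assumes one partition level with key=value directory names):

```scala
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

// List gs://bucket/table/ds=2024-12-20/-style directories; the gcs-connector
// added in build.sbt makes gs:// paths resolvable via the Hadoop FileSystem API.
def listPartitionDirs(spark: SparkSession, tableUri: String): Seq[Map[String, String]] = {
  val root = new Path(tableUri)
  val fs = root.getFileSystem(spark.sparkContext.hadoopConfiguration)
  fs.listStatus(root).toSeq
    .filter(_.isDirectory)
    .map(_.getPath.getName)
    .collect { case dir if dir.contains("=") =>
      val Array(key, value) = dir.split("=", 2)
      Map(key -> value)
    }
}
```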

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (4)

7-13: Imports are relevant and cohesive.
These additions streamline BigQuery connectivity and usage.


24-32: Logic under case matches is reversed?
External table → GCS, Standard → BigQuery would be more intuitive, but the code does the opposite. Confirm correctness.

✅ Verification successful

Let me search for related code to verify this logic.


Logic is correct as implemented

Looking at the commented-out federation code in the same file, we can see the intended mapping:

  • External tables → BQuery (matches current implementation)
  • Standard/Managed tables → GCS (matches current implementation)

This aligns with the current implementation and appears to be the intended design.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for any other occurrences or usages of ExternalTableDefinition and StandardTableDefinition
rg "ExternalTableDefinition|StandardTableDefinition" -A 3 -B 3

Length of output: 4721


Script:

#!/bin/bash
# Look for any tests or documentation that might explain this logic
fd "BigQueryFormat" --exec cat {}

Length of output: 5279


Script:

#!/bin/bash
# Search for any code that uses BQuery or GCS classes/objects
rg "BQuery|GCS" -A 2 -B 2

Length of output: 3313


14-15: Inject concurrency safety if needed.
Check if the lazy bigQueryClient is thread-safe. Otherwise, you might consider synchronization.

✅ Verification successful

Let me search for any concurrent usage of bigQueryClient in the codebase.


Let me check if this class is used in any concurrent/parallel contexts by searching the entire codebase.


Lazy initialization is thread-safe in Scala - no synchronization needed

The lazy val in Scala is thread-safe by default. The Scala compiler automatically handles synchronization for lazy initialization, making bigQueryClient safe for concurrent access.
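For reference, the pattern under discussion; the client construction follows the standard BigQuery client API:

```scala
import com.google.cloud.bigquery.{BigQuery, BigQueryOptions}

// Scala compiles a lazy val with monitor-based initialization, so concurrent
// first accesses are safe and the initializer runs exactly once.
lazy val bigQueryClient: BigQuery = BigQueryOptions.getDefaultInstance.getService
```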

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for references to bigQueryClient
rg "bigQueryClient" -A 2

# Search for any concurrent/parallel/async operations in the same file
rg -p "parallel|concurrent|async|thread|Future|Promise" "cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala"

Length of output: 853


Script:

#!/bin/bash
# Search for GcpFormatProvider usage
rg "GcpFormatProvider" -A 3

# Search for any parallel/concurrent operations in files that might use this provider
ast-grep --pattern 'class $_ extends FormatProvider { $$$ }'

Length of output: 615


18-23: Straightforward table retrieval.
Good usage of BigQuery client. Ensure all error paths (like missing datasets) are covered.

✅ Verification successful

Let me search for error handling patterns in this codebase.


Let me check how the code handles the null case from getTable.


Error paths are properly handled
The code safely wraps getTable in Option and handles both existing and missing table cases through pattern matching, defaulting to Hive when table doesn't exist.
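A simplified rendering of that pattern (the method and fallback names are illustrative):

```scala
import com.google.cloud.bigquery.{BigQuery, TableDefinition, TableId}

// getTable returns null for a missing table, so wrapping it in Option()
// makes the Hive fallback explicit instead of risking a NullPointerException.
def resolveTableType(client: BigQuery, dataset: String, table: String): String =
  Option(client.getTable(TableId.of(dataset, table))) match {
    case Some(t) => t.getDefinition[TableDefinition].getType.toString
    case None    => "hive" // not known to BigQuery: fall back to Hive
  }
```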

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for error handling around BigQuery table operations
rg -A 5 "getTable|TableId" --type scala

# Look for try/catch blocks or Option handling around BigQuery operations
ast-grep --pattern 'try {
  $$$
  getTable($$$)
  $$$
}'

# Search for any exception handling patterns
ast-grep --pattern 'catch {
  $$$
  case $_: $_Exception => $$$
}'

Length of output: 18507


Script:

#!/bin/bash
# Search for the complete match pattern around getTable in BigQueryFormat
rg -A 10 "tableOpt = Option\(bigQueryClient\.getTable" --type scala

Length of output: 1287

Comment on lines +215 to +217
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "3.0.3", // it's what's on the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.26", // it's what's on the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcsio" % "3.0.3", // need it for https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/master/gcsio/src/main/java/com/google/cloud/hadoop/gcsio/GoogleCloudStorageFileSystem.java
💡 Codebase verification

Remove duplicate gcs-connector dependency with conflicting versions

  • Remove line with gcs-connector version 3.0.3 as it conflicts with hadoop3-2.2.26
  • Keep version alignment between remaining dependencies: hadoop3-2.2.26 for connector and 3.0.3 for gcsio
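Applied, the dependency block would keep a single connector line:

```scala
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.26", // it's what's on the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcsio" % "3.0.3", // needed for GoogleCloudStorageFileSystem
```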
🔗 Analysis chain

Confirm version alignment for GCS libraries.
Multiple versions (3.0.3, 2.2.26) might conflict.


Let me verify the GCS library version alignment by checking the build configuration files and any potential version conflicts.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for GCS connector and related dependencies
rg -A 2 "gcs-connector|gcsio" --type sbt

Length of output: 70


Script:

#!/bin/bash
# Search for GCS dependencies in all files
rg "gcs-connector|gcsio"

Length of output: 543

@coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

Line range hint 61-62: Implement writeFormat method

Replace ??? with proper implementation.

🧹 Nitpick comments (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1)

89-89: Add type safety for field value conversion

Replace direct toString with safe type conversion.

-            fieldName -> fieldValue.toString // Just going to cast this as a string.
+            fieldName -> Option(fieldValue).map(_.toString).getOrElse("")
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

34-58: Remove commented federation code

Clean up unused code block.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between ea2d400 and 17ca367.

📒 Files selected for processing (2)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
🔇 Additional comments (4)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (3)

1-16: LGTM: Clean class structure and imports


96-96: Verify empty fileFormatString impact

Empty string return might affect format detection.


48-48: Consider relaxing single URI restriction

The assertion might be too restrictive for future use cases.

✅ Verification successful

Let me verify if this restriction is necessary by analyzing the codebase.


Let me try a different search strategy to understand the URI handling better.


Let me try one final search to understand the specific GCS/URI handling context.


Single URI restriction is necessary - keep the assertion

The assertion is actually required because:

  • The code processes partition information using Spark's FileSourceScanExec which expects a single base path
  • The implementation relies on PartitioningAwareFileIndex which needs a unified partition structure
  • The method is specifically designed to handle HDFS-style partitioned paths under one root URI
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for usages of GCSFormat and related URI handling
rg -A 5 "GCSFormat" 

# Search for table creation patterns with multiple URIs
ast-grep --pattern 'CREATE TABLE $_ LOCATION $$$'

# Look for any existing multi-URI patterns
rg -i "uris?.*(multiple|several|many|array|list)"

Length of output: 123


Script:

#!/bin/bash
# Find the complete implementation of GCSFormat
ast-grep --pattern 'class GCSFormat {
  $$$
}'

# Find any references to URI patterns or external tables
rg -B 2 -A 2 "external.*table|uri.*pattern"

Length of output: 4156


Script:

#!/bin/bash
# Find the file content to understand the context better
cat cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala

# Look for any BigQuery external table patterns
rg -B 3 -A 3 "BigQuery.*external.*table"

Length of output: 4066

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

14-33: LGTM: Clean table type handling

@tchow-zlai merged commit ea2a1f3 into main on Dec 23, 2024
8 of 9 checks passed
@tchow-zlai deleted the tchow/bq-support-3 branch on December 23, 2024 at 09:28
tchow-zlai added a commit that referenced this pull request Jan 4, 2025
## Summary

- Adding unit tests for #147 
- https://app.asana.com/0/1208949807589885/1208960391734329/f

## Checklist
- [x] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added a new test class to validate GCS format partitioning
functionality.

- **Refactor**
	- Updated package structure for test files.
	- Removed specific imports in test files.

- **Chores**
	- Added an import for a BigQuery table in the format handling.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- av pr metadata
This information is embedded by the av CLI when creating PRs to track
the status of stacks when using Aviator. Please do not delete or edit
this section of the PR.
```
{"parent":"main","parentHead":"","trunk":"main"}
```
-->

---------

Co-authored-by: Thomas Chow <[email protected]>
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025