feat: GCS format support #147
Force-pushed from a436526 to 36aa7f3
Force-pushed from b6c1f86 to 76cd466
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (outdated review thread; resolved)
excellent!
## Summary

- This is part 1 of the saga to support BigQuery reads: https://app.asana.com/0/1208949807589885/1208951092959581/f
- There are sibling PRs that address BigQuery cataloging: #145, #146, #147, #148. They are not technically dependent on one another for code completeness, but they will need to work in concert to fully support BigQuery as a data source.
- This PR is the first step toward supporting BigQuery partition pushdown. Partition filters are handled separately from predicates; see https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#partitioned-tables
- We need to follow up this PR by setting the partition filter in the read option. Since this is a significant change, we'll break it up into steps so we can test incrementally.

## Checklist

- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **New Features**
  - Enhanced DataFrame querying capabilities with additional filtering options.
  - Improved exception handling and logging for backfill operations.
- **Bug Fixes**
  - Refined logic for data filtering and retrieval in join operations.
- **Documentation**
  - Updated method signatures to reflect new parameters and functionality.
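As a hedged illustration of that follow-up, setting a partition filter through the connector's documented `filter` read option might look like this (table and column names here are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("bq-partition-filter").getOrCreate()

// The spark-bigquery-connector pushes the `filter` option down to BigQuery,
// which uses it for partition elimination on partitioned tables.
val df = spark.read
  .format("bigquery")
  .option("table", "my_project.my_dataset.my_table") // hypothetical table
  .option("filter", "ds = '2024-01-01'")             // hypothetical partition column
  .load()
```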
Force-pushed from 3fdb1cd to ea2d400
Actionable comments posted: 1
🧹 Nitpick comments (5)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (4)

Line 9: Add a brief Scaladoc for this new class.
A concise class-level comment helps consumers quickly grasp its purpose.
Lines 15-45: Validate presence of 'database'.
Throwing an exception is acceptable, but consider raising a clearer message (a sketch follows the next comment).
Line 46: Consider a friendlier error mechanism for multiple URIs.
An assert can abruptly stop the session; a custom exception with more context might be more user-friendly.
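A minimal sketch of both suggestions; the names (`requireDatabase`, `singleUri`, `UnsupportedTableLayoutException`) are illustrative, not the codebase's own:

```scala
// Hypothetical exception type carrying more context than a bare assert.
final case class UnsupportedTableLayoutException(message: String)
    extends RuntimeException(message)

// Clearer failure when the 'database' property is missing.
def requireDatabase(props: Map[String, String], tableName: String): String =
  props.getOrElse(
    "database",
    throw new IllegalArgumentException(
      s"Property 'database' is required to resolve partitions of '$tableName' but was not set."))

// Custom exception instead of assert for the single-URI restriction:
// callers can catch it, and the message explains itself.
def singleUri(uris: Seq[String], tableName: String): String = {
  if (uris.size != 1)
    throw UnsupportedTableLayoutException(
      s"Table '$tableName': expected exactly one source URI, found ${uris.size}: ${uris.mkString(", ")}")
  uris.head
}
```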
Lines 93-94: Clarify unsupported or stub methods.
Throwing here is valid, but a doc comment or self-documentation can reduce confusion.

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)
Lines 34-58: Remove or clarify this commented-out block.
Lingering commented code can cause confusion.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (3)
- build.sbt (1 hunks)
- cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
- cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
🔇 Additional comments (8)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (4)
Lines 11-13: Behavior is inherited intact.
No extra logic here, so this override looks good.
Lines 78-91: Partition mapping logic is clear.
This approach is straightforward and maintainable. Well done!
Lines 96-98: Sub-partition filter support is beneficial.
Implementing sub-partition filters can speed up queries significantly (see the sketch below).
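For illustration, a sub-partition filter map could be rendered into a predicate like this (a sketch; the column names and string-literal quoting are assumptions):

```scala
// Sketch: render a sub-partition filter map into a SQL predicate string.
// Keys are assumed to be partition column names; values are literal strings.
def subPartitionPredicate(filters: Map[String, String]): String =
  filters.map { case (k, v) => s"$k = '$v'" }.mkString(" AND ")

// Example: Map("ds" -> "2024-01-01", "hr" -> "23")
//   yields "ds = '2024-01-01' AND hr = '23'"
```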
Lines 68-77: Potential overhead reading partition metadata with parquet(uris: _*).
Scanning a large dataset just to derive partition information can be costly. You could request partition info directly without scanning the entire data.
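One lighter-weight alternative, sketched under the assumption of Hive-style `column=value` directories sitting directly under a single root URI, lists partition directories with the Hadoop FileSystem API instead of scanning data:

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// List ds=YYYY-MM-DD style partition directories directly, without a Spark scan.
// Assumes one partition column encoded Hive-style in the directory names.
def listPartitionValues(rootUri: String, partitionColumn: String): Seq[String] = {
  val fs = FileSystem.get(new URI(rootUri), new Configuration())
  val prefix = partitionColumn + "="
  fs.listStatus(new Path(rootUri))
    .filter(_.isDirectory)
    .map(_.getPath.getName)
    .collect { case name if name.startsWith(prefix) => name.stripPrefix(prefix) }
    .toSeq
}
```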
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (4)
Lines 7-13: Imports are relevant and cohesive.
These additions streamline BigQuery connectivity and usage.
Lines 24-32: Is the logic under the case matches reversed?
External table → GCS and Standard → BigQuery would be more intuitive, but the code does the opposite. Confirm correctness.
✅ Verification successful
Let me search for related code to verify this logic.
Logic is correct as implemented
Looking at the commented-out federation code in the same file, we can see the intended mapping:
- External tables → BQuery (matches current implementation)
- Standard/Managed tables → GCS (matches current implementation)
This aligns with the current implementation and appears to be the intended design.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:

```bash
#!/bin/bash
# Search for any other occurrences or usages of ExternalTableDefinition and StandardTableDefinition
rg "ExternalTableDefinition|StandardTableDefinition" -A 3 -B 3
```

Length of output: 4721

Script:

```bash
#!/bin/bash
# Look for any tests or documentation that might explain this logic
fd "BigQueryFormat" --exec cat {}
```

Length of output: 5279

Script:

```bash
#!/bin/bash
# Search for any code that uses BQuery or GCS classes/objects
rg "BQuery|GCS" -A 2 -B 2
```

Length of output: 3313
Lines 14-15: Inject concurrency safety if needed.
Check whether the lazy bigQueryClient is thread-safe; otherwise, consider synchronization.
✅ Verification successful
Let me search for any concurrent usage of bigQueryClient in the codebase, and check whether this class is used in any concurrent/parallel contexts.

Lazy initialization is thread-safe in Scala - no synchronization needed
A lazy val in Scala is thread-safe by default: the compiler automatically synchronizes lazy initialization, making bigQueryClient safe for concurrent access.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:

```bash
#!/bin/bash
# Search for references to bigQueryClient
rg "bigQueryClient" -A 2
# Search for any concurrent/parallel/async operations in the same file
rg -p "parallel|concurrent|async|thread|Future|Promise" "cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala"
```

Length of output: 853

Script:

```bash
#!/bin/bash
# Search for GcpFormatProvider usage
rg "GcpFormatProvider" -A 3
# Search for any parallel/concurrent operations in files that might use this provider
ast-grep --pattern 'class $_ extends FormatProvider { $$$ }'
```

Length of output: 615
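For illustration, the pattern the verification refers to (the class and field names are hypothetical stand-ins):

```scala
import com.google.cloud.bigquery.{BigQuery, BigQueryOptions}

class GcpFormatProviderSketch {
  // The compiler guards this initializer with internal synchronization, so
  // concurrent first accesses still construct the client exactly once.
  lazy val bigQueryClient: BigQuery = BigQueryOptions.getDefaultInstance.getService
}
```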
Lines 18-23: Straightforward table retrieval.
Good usage of the BigQuery client. Ensure all error paths (like missing datasets) are covered.
✅ Verification successful
Let me search for error handling patterns in this codebase.
Let me check how the code handles the null case from getTable.
Error paths are properly handled
The code safely wraps getTable in an Option and handles both existing and missing table cases through pattern matching, defaulting to Hive when the table doesn't exist.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:

```bash
#!/bin/bash
# Search for error handling around BigQuery table operations
rg -A 5 "getTable|TableId" --type scala
# Look for try/catch blocks or Option handling around BigQuery operations
ast-grep --pattern 'try {
  $$$
  getTable($$$)
  $$$
}'
# Search for any exception handling patterns
ast-grep --pattern 'catch {
  $$$
  case $_: $_Exception => $$$
}'
```

Length of output: 18507

Script:

```bash
#!/bin/bash
# Search for the complete match pattern around getTable in BigQueryFormat
rg -A 10 "tableOpt = Option\(bigQueryClient\.getTable" --type scala
```

Length of output: 1287
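A condensed sketch of the pattern the verification found; the `Format` objects below are stand-ins for the codebase's own types:

```scala
import com.google.cloud.bigquery.{BigQuery, Table, TableId}

// Stand-ins for the codebase's format types.
sealed trait Format
case object Hive extends Format
case object BigQueryNative extends Format

def resolveFormat(client: BigQuery, tableId: TableId): Format =
  Option(client.getTable(tableId)) match { // getTable returns null when the table is absent
    case Some(_: Table) => BigQueryNative
    case None           => Hive // fall back to Hive when the table doesn't exist in BigQuery
  }
```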
```scala
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "3.0.3", // it's what's on the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.26", // it's what's on the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcsio" % "3.0.3", // need it for https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/master/gcsio/src/main/java/com/google/cloud/hadoop/gcsio/GoogleCloudStorageFileSystem.java
```
💡 Codebase verification
Remove duplicate gcs-connector dependency with conflicting versions

- Remove the gcs-connector line with version 3.0.3, as it conflicts with hadoop3-2.2.26.
- Keep version alignment between the remaining dependencies: hadoop3-2.2.26 for the connector and 3.0.3 for gcsio (see the sketch below).
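Applied to the snippet above, the deduplicated dependencies would look like this (a sketch of the suggested fix, not the merged build file):

```scala
// build.sbt (sketch): one gcs-connector entry, keeping the cluster's version.
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.26" // matches the cluster
libraryDependencies += "com.google.cloud.bigdataoss" % "gcsio" % "3.0.3" // for GoogleCloudStorageFileSystem
```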
🔗 Analysis chain
Confirm version alignment for GCS libraries.
Multiple versions (3.0.3, 2.2.26) might conflict.
Let me verify the GCS library version alignment by checking the build configuration files and any potential version conflicts.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:

```bash
#!/bin/bash
# Search for GCS connector and related dependencies
rg -A 2 "gcs-connector|gcsio" --type sbt
```

Length of output: 70

Script:

```bash
#!/bin/bash
# Search for GCS dependencies in all files
rg "gcs-connector|gcsio"
```

Length of output: 543
Actionable comments posted: 0
🔭 Outside diff range comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)
Line range hint 61-62: Implement writeFormat method.
Replace ??? with a proper implementation.
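One possible shape for the stub, sketched under the assumption that writeFormat mirrors the read path (table name in, Format out) and reusing the `Format` stand-ins from the getTable sketch above; the real trait signature may differ, and the name parsing is a hypothetical convention:

```scala
import com.google.cloud.bigquery.{BigQuery, TableId}

def writeFormat(client: BigQuery, tableName: String): Format = {
  // Hypothetical parsing: expects "project.dataset.table" or "dataset.table".
  val tableId = tableName.split("\\.") match {
    case Array(project, dataset, table) => TableId.of(project, dataset, table)
    case Array(dataset, table)          => TableId.of(dataset, table)
    case _ => throw new IllegalArgumentException(s"Unparseable table name: $tableName")
  }
  Option(client.getTable(tableId)) match {
    case Some(_) => BigQueryNative // the table exists in BigQuery: write natively
    case None    => Hive           // otherwise fall back to the warehouse default
  }
}
```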
🧹 Nitpick comments (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1)
Line 89: Add type safety for field value conversion.
Replace the direct toString with a safe conversion:

```diff
- fieldName -> fieldValue.toString // Just going to cast this as a string.
+ fieldName -> Option(fieldValue).map(_.toString).getOrElse("")
```

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)
Lines 34-58: Remove commented federation code.
Clean up the unused code block.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (2)
- cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
- cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
🔇 Additional comments (4)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (3)
Lines 1-16: LGTM: Clean class structure and imports.
Line 96: Verify empty fileFormatString impact.
An empty string return might affect format detection.
Line 48: Consider relaxing the single-URI restriction.
The assertion might be too restrictive for future use cases.
✅ Verification successful
Let me verify if this restriction is necessary by analyzing the codebase.
Let me try a different search strategy to understand the URI handling better.
Let me try one final search to understand the specific GCS/URI handling context.
Single URI restriction is necessary - keep the assertion
The assertion is actually required because:
- The code processes partition information using Spark's FileSourceScanExec which expects a single base path
- The implementation relies on PartitioningAwareFileIndex which needs a unified partition structure
- The method is specifically designed to handle HDFS-style partitioned paths under one root URI (see the sketch after the verification scripts below)
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:

```bash
#!/bin/bash
# Search for usages of GCSFormat and related URI handling
rg -A 5 "GCSFormat"
# Search for table creation patterns with multiple URIs
ast-grep --pattern 'CREATE TABLE $_ LOCATION $$$'
# Look for any existing multi-URI patterns
rg -i "uris?.*(multiple|several|many|array|list)"
```

Length of output: 123
Script:

```bash
#!/bin/bash
# Find the complete implementation of GCSFormat
ast-grep --pattern 'class GCSFormat {
  $$$
}'
# Find any references to URI patterns or external tables
rg -B 2 -A 2 "external.*table|uri.*pattern"
```

Length of output: 4156
Script:

```bash
#!/bin/bash
# Find the file content to understand the context better
cat cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala
# Look for any BigQuery external table patterns
rg -B 3 -A 3 "BigQuery.*external.*table"
```

Length of output: 4066
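For reference, a sketch of how partition values can be recovered through those internals for a single root URI. These are Spark-internal APIs whose details vary across versions; treat this as illustrative only:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.FileSourceScanExec
import org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex

// Recover partition column/value maps from the physical plan's file index.
def partitionValues(spark: SparkSession, rootUri: String): Seq[Map[String, String]] = {
  val df = spark.read.parquet(rootUri)
  df.queryExecution.sparkPlan
    .collectFirst { case scan: FileSourceScanExec =>
      scan.relation.location match {
        case idx: PartitioningAwareFileIndex =>
          val spec = idx.partitionSpec()
          val cols = spec.partitionColumns
          spec.partitions.map { p =>
            cols.zipWithIndex.map { case (field, i) =>
              field.name -> String.valueOf(p.values.get(i, field.dataType))
            }.toMap
          }
        case _ => Seq.empty // non-partition-aware index: nothing to report
      }
    }
    .getOrElse(Seq.empty)
}
```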
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)
Lines 14-33: LGTM: Clean table type handling.
## Summary

- Adding unit tests for #147
- https://app.asana.com/0/1208949807589885/1208960391734329/f

## Checklist

- [x] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **New Features**
  - Added a new test class to validate GCS format partitioning functionality.
- **Refactor**
  - Updated package structure for test files.
  - Removed specific imports in test files.
- **Chores**
  - Added an import for a BigQuery table in the format handling.

Co-authored-by: Thomas Chow <[email protected]>
## Summary

## Checklist

- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **New Features**
  - Added new library dependencies to enhance Google Cloud Storage integration.
  - Introduced a new `GCS` class for handling partitioning in Google Cloud Storage.
- **Improvements**
  - Updated `BigQueryFormat` to refine table reading and processing logic.
  - Enhanced error handling for external table definitions and improved control flow for table metadata access.
- **Bug Fixes**
  - Improved error handling for missing database specifications in partition methods.