fix: properly detect bigquery catalog #629


Merged · 13 commits · Apr 12, 2025

Conversation

tchow-zlai (Collaborator) commented Apr 11, 2025

Summary

  • Now that we have configurable catalogs, we should rely on that to determine whether we're interacting with BigQuery.

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Enhanced cloud integration logic to accurately determine data table catalogs, ensuring more reliable data format retrieval.
    • Improved error handling, providing clearer feedback when encountering issues with data format detection.
    • Introduced a new method for converting table names into identifiers, enhancing catalog detection functionality.
    • Added a method for retrieving the catalog name based on table names, improving the overall functionality.
  • Bug Fixes

    • Corrected a typo in the error message for the NoSuchTableException.
  • Tests

    • Added a test case to verify the functionality of catalog detection in the GcpFormatProvider class, increasing test coverage.
    • Introduced a new test case for validating catalog detection based on various input strings in the TableUtilsTest class.
    • Enhanced test coverage for format detection scenarios in the BigQueryCatalogTest class.


coderabbitai bot commented Apr 11, 2025

Walkthrough

The changes modify the readFormat method in the GcpFormatProvider class to determine the catalog associated with a given table name before proceeding. A new helper method getCatalog extracts the catalog from the table name, and another method isBigQueryCatalog checks its type. Based on the catalog identification, the method either attempts to retrieve the BigQuery table format with enhanced logging and error handling or defers to the superclass implementation.
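The dispatch described above can be sketched as follows. This is a minimal, hypothetical sketch: the names mirror the PR (getCatalog, isBigQueryCatalog, readFormat), but plain string handling stands in for Spark's sqlParser.parseMultipartIdentifier, the CatalogManager lookup, and the BigQuery client call.

```scala
// Hypothetical sketch of the catalog-based dispatch described in the walkthrough.
// A plain string split stands in for Spark's multipart-identifier parser, and a
// name check stands in for resolving the catalog plugin via CatalogManager.
object CatalogDispatchSketch {
  val defaultCatalog = "spark_catalog" // assumed session default

  // "catalog.db.table" yields "catalog"; anything else falls back to the default.
  def getCatalog(tableName: String): String =
    tableName.split('.') match {
      case Array(catalog, _, _) => catalog
      case _                    => defaultCatalog
    }

  // In the PR this inspects the resolved catalog's type; here it is a name check.
  def isBigQueryCatalog(catalog: String): Boolean =
    catalog.toLowerCase.contains("bigquery")

  def readFormat(tableName: String): String = {
    val parsed = getCatalog(tableName)
    if (isBigQueryCatalog(parsed)) "bigquery" // would query the BigQuery client here
    else "default"                            // defer to the superclass implementation
  }
}
```

The point of the sketch is the control flow: the catalog is resolved first, and only BigQuery-typed catalogs take the BigQuery path.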

Changes

Files | Change Summary
  • cloud_gcp/src/.../GcpFormatProvider.scala: Modified readFormat to determine the catalog via getCatalog; added isBigQueryCatalog for conditional checks and updated error handling.
  • cloud_gcp/src/.../BigQueryCatalogTest.scala: Added a new test case, "integration testing formats", to validate format detection across various table scenarios.
  • cloud_gcp/src/.../DelegatingBigQueryMetastoreCatalog.scala: Introduced a catalogProps variable; modified initialize and loadTable for improved table loading and metadata management.
  • cloud_gcp/src/.../SparkBQUtils.scala: Added a toIdentifier method to convert table names into Identifier format.
  • spark/src/.../DefaultFormatProvider.scala: Added a getCatalog method to parse table names and return the corresponding catalog.
  • spark/src/.../TableUtilsTest.scala: Introduced a new test case, "test catalog detection", to validate getCatalog; activated a previously disabled test for special characters in column names.
  • cloud_gcp/src/.../GcpFormatProviderTest.scala: Updated the test to retrieve the format via readFormat(tableName) instead of getFormat(mockTable).

Poem

A change in code, so crisp and neat,
Catalogs now tell which path to meet.
BigQuery flows with clear intent,
Logging each step, errors are well-spent.
Our code sings a joyful, rhythmic beat!
🚀✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 565bdd0 and a423261.

📒 Files selected for processing (1)
  • cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (5)
spark/src/main/scala/ai/chronon/spark/format/Iceberg.scala (1)
  • Iceberg (8-56)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)
  • readFormat (22-45)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)
  • readFormat (15-23)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryExternal.scala (1)
  • BigQueryExternal (12-101)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)
  • BigQueryNative (10-91)
🔇 Additional comments (2)
cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (2)

13-14: Added required import

Import for Iceberg format needed for new test case.


118-146: Comprehensive test case for format detection

Test checks format detection across various table types (external, native, iceberg) with and without catalog prefixes.




@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)

28-48: Consider fallback
If the table isn't found, throwing an exception may be strict. Evaluate returning None for a graceful fallback.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 1f20f33 and eb08cb1.

📒 Files selected for processing (1)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2)
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:29-60
Timestamp: 2025-04-09T21:40:05.504Z
Learning: In BigQuery integration, table existence check is performed outside the BigQueryFormat.createTable method, at a higher level in TableUtils.createTable.
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:56-57
Timestamp: 2025-04-09T21:40:05.505Z
Learning: For BigQuery table creation operations in BigQueryFormat.scala, allow exceptions to propagate directly without wrapping them in try-catch blocks, as the original BigQuery exceptions provide sufficient context.
🧬 Code Graph Analysis (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (2)
  • SparkBQUtils (6-17)
  • toTableId (8-15)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)
  • readFormat (15-23)
🔇 Additional comments (3)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (3)

5-8: Approved imports
No issues detected.


50-60: Clear parsing
Good fallback to currentCatalog if no explicit catalog. Brief tests with edge cases recommended.


62-69: Check unknown catalog
catalogManager.catalog(catalog) might fail if the catalog is unregistered. Confirm usage or handle gracefully.

Co-authored-by: Thomas Chow <[email protected]>
david-zlai (Contributor) left a comment

just had one question

if (isBigQueryCatalog(parsedCatalog)) {
  logger.info(s"Detected BigQuery catalog: $parsedCatalog")
  Try {
    val btTableIdentifier = SparkBQUtils.toTableId(tableName)(sparkSession)
Contributor
remind me again but why do we need to convert tableName to a big table identifier?

Collaborator Author
That's just what the bigQueryClient accepts - see the line below.

Collaborator Author
tbh this isn't even entirely correct just yet. We actually need to find the project_id associated with the catalog, and replace that in the table Name before we pass it off to the BQ Client. but will do that in a follow-up.

Collaborator Author
alright I've adjusted this actually to make it work with BigQuery.

@tchow-zlai tchow-zlai marked this pull request as draft April 11, 2025 15:18
tchow-zlai and others added 4 commits April 11, 2025 09:47
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai tchow-zlai marked this pull request as ready for review April 11, 2025 17:20
@tchow-zlai
Collaborator Author

ptal @david-zlai


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (1)

18-22: New method to convert table name to Identifier.

Clean implementation that parses and restructures table name components. Consider adding a comment explaining the reversal logic for clarity.

 def toIdentifier(tableName: String)(implicit spark: SparkSession): Identifier = {
+  // Parse and reverse to get table name first, then namespace components
   val parseIdentifier = spark.sessionState.sqlParser.parseMultipartIdentifier(tableName).reverse
   Identifier.of(parseIdentifier.tail.reverse.toArray, parseIdentifier.head)

 }
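The reverse/tail/reverse trick in toIdentifier can be illustrated without a SparkSession. This hypothetical sketch substitutes a plain split for parseMultipartIdentifier (which additionally handles backtick quoting) and returns a namespace/name pair instead of a Spark Identifier:

```scala
// Illustration of the reversal logic in toIdentifier: reversing the parts makes
// the table name the head, and tail.reverse restores the namespace in order.
object IdentifierSketch {
  // "cat.db.table" yields (namespace = Seq("cat", "db"), name = "table")
  def toIdentifierParts(tableName: String): (Seq[String], String) = {
    val reversed = tableName.split('.').toSeq.reverse
    (reversed.tail.reverse, reversed.head)
  }
}
```

A bare table name with no namespace degrades gracefully: the namespace comes back empty and the whole input becomes the name.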
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 8822ede and 766a1bd.

📒 Files selected for processing (6)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/DelegatingBigQueryMetastoreCatalog.scala (6 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (2 hunks)
  • cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (3 hunks)
  • spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/TableUtilsTest.scala (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (1)
spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1)
  • sql (297-325)
cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (5)
spark/src/main/scala/ai/chronon/spark/format/Iceberg.scala (1)
  • Iceberg (8-56)
spark/src/main/scala/ai/chronon/spark/format/FormatProvider.scala (2)
  • FormatProvider (21-48)
  • from (23-48)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)
  • readFormat (15-23)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryExternal.scala (1)
  • BigQueryExternal (12-101)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)
  • BigQueryNative (10-91)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (5)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)
  • getCatalog (25-35)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (2)
  • SparkBQUtils (7-24)
  • toIdentifier (18-22)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/DelegatingBigQueryMetastoreCatalog.scala (1)
  • loadTable (119-177)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryNative.scala (1)
  • BigQueryNative (10-91)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryExternal.scala (1)
  • BigQueryExternal (12-101)
🔇 Additional comments (15)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/SparkBQUtils.scala (1)

5-5: Added import for Identifier.

Import needed for the new toIdentifier method.

spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)

25-35: Added method to extract catalog from table name.

Concise implementation with proper pattern matching for different identifier formats. Good fallback to current catalog when not explicitly specified.

spark/src/test/scala/ai/chronon/spark/test/TableUtilsTest.scala (2)

28-29: Added necessary imports for new test.

These imports are required for the new catalog detection test.


646-656: Comprehensive test for catalog detection.

Thorough test coverage of various table name formats and edge cases.

cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/BigQueryCatalogTest.scala (3)

13-14: Added import for Iceberg format.

Required for the new test comparing format detection results.


45-53: Uncommented Iceberg catalog configuration.

Enables necessary settings for Iceberg catalog integration tests.


118-134: Added integration test for format detection.

Covers key table formats (BigQuery external, native, Iceberg) and non-existent tables. Appropriate to mark as ignore since it requires actual BigQuery resources.

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2)

22-44: Improved catalog detection logic.

Refactored implementation properly identifies BigQuery catalog types and returns appropriate format based on table properties.


4-7: Appropriate import additions.

Added necessary imports for new functionality.

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/DelegatingBigQueryMetastoreCatalog.scala (6)

93-93: Added catalogProps variable.

Stores catalog properties for later use.


120-127: Improved table identifier handling.

Properly removes catalog segment from namespace and supports Iceberg tables.


130-131: Better project ID retrieval.

Uses catalogProps for consistent project ID retrieval.


157-159: Explicit PARQUET provider type.

Sets provider type for format detection.


169-170: Explicit BIGQUERY provider type.

Sets provider type for format detection.


204-204: Store catalog properties.

Populates catalogProps for use in loadTable method.

private[cloud_gcp] def getFormat(table: Table): Format = {
  table.getDefinition.asInstanceOf[TableDefinition] match {
    case _: ExternalTableDefinition =>
      val parsedCatalog = getCatalog(tableName)
💡 Verification agent

🧩 Analysis chain

Missing getCatalog implementation.

Method is called but appears to be missing implementation.


🏁 Script executed:

#!/bin/bash
# Check for getCatalog method implementation
grep -r "def getCatalog" --include="*.scala" .

Length of output: 171


Implement or Inherit getCatalog

The call to getCatalog in GcpFormatProvider.scala (line 23) doesn’t resolve locally. Although a similar method exists in DefaultFormatProvider.scala, it isn’t automatically available here. Please either inherit from a common base that provides the implementation or add a GCP-specific getCatalog method.

tchow-zlai and others added 2 commits April 11, 2025 10:27
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai tchow-zlai force-pushed the tchow/catalog-switch branch from 766a1bd to b7b44d4 Compare April 11, 2025 17:29

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProviderTest.scala (1)

19-21: 🛠️ Refactor suggestion

Update mock setup for new method signature.

The test setup creates a mock Table but readFormat only needs a tableName string, making the mock setup unnecessary.

Consider either removing the unused mock setup or enhancing the test to verify format detection properly.

Also applies to: 36-36

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 766a1bd and b7b44d4.

📒 Files selected for processing (2)
  • cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProviderTest.scala (1 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/TableUtilsTest.scala (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProviderTest.scala (2)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)
  • readFormat (22-45)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (1)
  • readFormat (15-23)
spark/src/test/scala/ai/chronon/spark/test/TableUtilsTest.scala (2)
spark/src/main/scala/ai/chronon/spark/format/FormatProvider.scala (1)
  • FormatProvider (21-48)
spark/src/main/scala/ai/chronon/spark/format/DefaultFormatProvider.scala (2)
  • DefaultFormatProvider (10-63)
  • getCatalog (25-35)
🔇 Additional comments (2)
spark/src/test/scala/ai/chronon/spark/test/TableUtilsTest.scala (2)

28-30: Added necessary imports.

Added imports for FormatProvider and ParseException to support the new test case.


646-656: Comprehensive test coverage for catalog detection.

The test thoroughly verifies catalog extraction logic across various table name formats including quoted identifiers and default catalog fallbacks. Good job covering edge cases like empty strings.

@@ -33,6 +33,6 @@ class GcpFormatProviderTest extends AnyFlatSpec with MockitoSugar {
     .build())
   when(mockTable.getTableId).thenReturn(TableId.of("project", "dataset", "table"))

-  val gcsFormat = gcpFormatProvider.getFormat(mockTable)
+  val gcsFormat = gcpFormatProvider.readFormat(tableName)
⚠️ Potential issue

Method changed but test still ignored.

Test was updated to use readFormat instead of getFormat but remains ignored and lacks assertions.

-val gcsFormat = gcpFormatProvider.readFormat(tableName)
+val gcsFormat = gcpFormatProvider.readFormat(tableName)
+assert(gcsFormat.isDefined, "Format should be detected")

tchow-zlai and others added 4 commits April 11, 2025 10:38
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai tchow-zlai force-pushed the tchow/catalog-switch branch from b5abf56 to a423261 Compare April 11, 2025 18:29
@tchow-zlai tchow-zlai merged commit a4fcedf into main Apr 12, 2025
21 checks passed
@tchow-zlai tchow-zlai deleted the tchow/catalog-switch branch April 12, 2025 15:30
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
## Summary

- Now that we have configurable catalogs, we should rely on that to
determine whether we're interacting with BigQuery.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced cloud integration logic to accurately determine data table
catalogs, ensuring more reliable data format retrieval.
- Improved error handling, providing clearer feedback when encountering
issues with data format detection.
- Introduced a new method for converting table names into identifiers,
enhancing catalog detection functionality.
- Added a method for retrieving the catalog name based on table names,
improving the overall functionality.

- **Bug Fixes**
	- Corrected a typo in the error message for the `NoSuchTableException`.

- **Tests**
- Added a test case to verify the functionality of catalog detection in
the `GcpFormatProvider` class, increasing test coverage.
- Introduced a new test case for validating catalog detection based on
various input strings in the `TableUtilsTest` class.
- Enhanced test coverage for format detection scenarios in the
`BigQueryCatalogTest` class.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- av pr metadata
This information is embedded by the av CLI when creating PRs to track
the status of stacks when using Aviator. Please do not delete or edit
this section of the PR.
```
{"parent":"main","parentHead":"","trunk":"main"}
```
-->

---------

Co-authored-by: Thomas Chow <[email protected]>
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025