# feat: TableUtils to be compatible with DataPointer (part 1) #158
## Conversation
## Walkthrough
This pull request introduces significant enhancements to Chronon's cloud integration and data handling capabilities, focusing on Google Cloud Platform (GCP) services. The changes span multiple files, including the BigQuery and GCS format providers, Spark utilities, and build configurations. The modifications improve data pointer handling, table management, and format resolution, with a particular emphasis on more robust and flexible data source interactions.
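For orientation, a hypothetical sketch of the DataPointer idea the walkthrough refers to: a parsed reference that carries both the target and its storage format, so utilities can dispatch without re-parsing URIs. The names and parsing rules here are assumptions for illustration, not the PR's actual definitions:

```scala
// Hypothetical shape of a data pointer: where the data lives plus how to read it.
case class DataPointer(tableOrPath: String,
                       format: Option[String],
                       options: Map[String, String] = Map.empty)

object DataPointer {
  // Illustrative parse: "bigquery://project.dataset.table" yields a catalog prefix;
  // anything without a recognized prefix falls back to a plain table reference.
  def apply(uri: String): DataPointer = uri.split("://", 2) match {
    case Array(prefix, rest) => DataPointer(rest, Some(prefix))
    case _                   => DataPointer(uri, None)
  }
}
```

Under this sketch, `DataPointer("bigquery://proj.ds.events")` would resolve to a BigQuery read, while a bare table name falls through to the default catalog.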
Actionable comments posted: 1
🧹 Nitpick comments (5)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2)
- `16-24`: Comments clarify default project resolution.
- `36-36`: Private method naming `format` might be ambiguous; consider a rename.

cloud_gcp/src/test/scala/ai/chronon/integrations/cloud_gcp/test/BigQueryCatalogTest.scala (3)

- `52-61`: Ignored test indicates manual BigQuery integration. Fine for local dev.
- `63-63`: Another ignored test. Same rationale.
- `70-74`: Print statements help verify partition logic.
🛑 Comments failed to post (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1)
42-42: 💡 Codebase verification
Add try-catch block around parquet read for better error handling
The parquet read operation in GCSFormat.scala could fail with various exceptions if sourceUri is incorrect:
- FileNotFoundException if path doesn't exist
- InvalidInputException for malformed URIs
- AccessDeniedException for permission issues
Add error handling to provide clear error messages to users.
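A minimal sketch of that guard, assuming a `sourceUri` field and a Spark session in scope; the method name is illustrative, not the PR's actual code:

```scala
import scala.util.{Failure, Success, Try}
import org.apache.spark.sql.{DataFrame, SparkSession}

// Wrap the raw parquet read so a bad sourceUri surfaces as one clear error
// instead of a deep Hadoop/GCS stack trace.
def readParquet(spark: SparkSession, sourceUri: String): DataFrame =
  Try(spark.read.parquet(sourceUri)) match {
    case Success(df) => df
    case Failure(e) =>
      throw new IllegalArgumentException(
        s"Failed to read parquet at '$sourceUri'; verify the path exists and is readable", e)
  }
```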
🔗 Analysis chain
Might need error handling if sourceUri is incorrect.
Let me search for error handling patterns in the codebase.
Let me search for error handling patterns in the GCSFormat class and its parent class.
Let me check how parquet reads are handled in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
```bash
#!/bin/bash
# Check the content of the file to understand the context
cat cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala

# Search for error handling patterns in Scala files
rg "try|catch|throw|error" --type scala -A 3
```

Length of output: 65727
Script:
```bash
#!/bin/bash
# Check if Format class has error handling for incorrect URIs
ast-grep --pattern 'class Format { $$$ def parquet($uri: $_) { $$$ } $$$ }'

# Search for any error handling around parquet reads
rg "parquet.*try|catch.*parquet" --type scala
```

Length of output: 137
Script:
```bash
#!/bin/bash
# Check GCSFormat implementation
cat cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala

# Search for parquet read patterns
rg "\.read\.parquet" --type scala -A 3
```

Length of output: 4637
Actionable comments posted: 1
🧹 Nitpick comments (6)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (2)
- `49-65`: Enhance error handling for external table configuration. Add null checks and more descriptive error messages for invalid URIs or format options.

```diff
 if (table.getDefinition.isInstanceOf[ExternalTableDefinition]) {
   val uris = table.getDefinition
     .asInstanceOf[ExternalTableDefinition]
     .getSourceUris
     .asScala
     .toList
     .map((uri) => uri.stripSuffix("/*") + "/")

-  assert(uris.length == 1, s"External table ${tableName} can be backed by only one URI.")
+  if (uris.isEmpty) {
+    throw new IllegalStateException(s"External table ${tableName} must have at least one source URI")
+  }
+  if (uris.length > 1) {
+    throw new IllegalStateException(s"External table ${tableName} can be backed by only one URI, found ${uris.length}")
+  }

-  val formatStr = table.getDefinition
-    .asInstanceOf[ExternalTableDefinition]
-    .getFormatOptions
-    .asInstanceOf[FormatOptions]
-    .getType
+  val formatStr = Option(
+    table.getDefinition
+      .asInstanceOf[ExternalTableDefinition]
+      .getFormatOptions)
+    .getOrElse(throw new IllegalStateException(s"Format options not found for external table ${tableName}"))
+    .asInstanceOf[FormatOptions]
+    .getType
```
- `72-93`: Remove commented federation code. Consider moving this to a separate branch, or document it elsewhere if needed for reference.
spark/src/main/scala/ai/chronon/spark/Extensions.scala (1)
- `300-339`: Extract format strings as constants. Move format strings to a companion object to improve maintainability.

```scala
object DataPointerAwareDataFrameWriter {
  val FORMAT_BIGQUERY = "bigquery"
  val FORMAT_SNOWFLAKE = "snowflake"
  val FORMAT_PARQUET = "parquet"
  val FORMAT_CSV = "csv"
  val FORMAT_HIVE = "hive"
  val FORMAT_DELTA = "delta"
  val FORMAT_ICEBERG = "iceberg"
}
```

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (2)
- `136-136`: Add error logging for table existence check. Consider logging specific errors to aid debugging.

```diff
-Try(loadTable(tableName)).isSuccess
+Try(loadTable(tableName)).recoverWith { case e =>
+  logger.debug(s"Table $tableName does not exist: ${e.getMessage}")
+  Failure(e)
+}.isSuccess
```
- Line range hint `650-659`: Add table name length validation. Long table names might exceed system limits after appending the timestamp.

```diff
 private def archiveTableIfExists(tableName: String, timestamp: Option[Instant]): Unit = {
   if (tableExists(tableName)) {
     val humanReadableTimestamp = archiveTimestampFormatter.format(timestamp.getOrElse(Instant.now()))
     val finalArchiveTableName = s"${tableName}_$humanReadableTimestamp"
+    if (finalArchiveTableName.length > 255) {
+      throw new IllegalArgumentException(s"Archived table name would exceed 255 characters: $finalArchiveTableName")
+    }
     val command = s"ALTER TABLE $tableName RENAME TO $finalArchiveTableName"
     logger.info(s"Archiving table with command: $command")
     sql(command)
   }
 }
```

spark/src/main/scala/ai/chronon/spark/JoinBase.scala (1)
- `497-501`: Efficient small mode. Combines ranges to reduce overhead; could log memory usage.
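For context, a minimal sketch of the range coalescing being praised here, with a hypothetical `PartitionRange` standing in for Chronon's real type:

```scala
// Hypothetical stand-in for Chronon's partition range (inclusive date strings).
case class PartitionRange(start: String, end: String)

// Collapse many contiguous ranges into one covering span so "small mode"
// can run a single backfill instead of one per range.
def combineRanges(ranges: Seq[PartitionRange]): Option[PartitionRange] =
  if (ranges.isEmpty) None
  else Some(PartitionRange(ranges.map(_.start).min, ranges.map(_.end).max))
```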
🛑 Comments failed to post (1)
spark/src/main/scala/ai/chronon/spark/Extensions.scala (1)
- `341-374`: 🛠️ Refactor suggestion: Extract common format handling logic. Significant code duplication between reader and writer classes.

```scala
private object FormatHandler {
  def getReader(format: String, dfr: DataFrameReader, dp: DataPointer): DataFrame = {
    // Common format handling logic
  }
  def getWriter[T](format: String, dfw: DataFrameWriter[T], dp: DataPointer): Unit = {
    // Common format handling logic
  }
}
```
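A hedged sketch of what that shared dispatch could look like. The format names follow the constants suggested above, and the plain path-based `load`/`save` calls are a simplifying assumption; the real reader and writer would resolve targets from the `DataPointer`:

```scala
import org.apache.spark.sql.{DataFrame, DataFrameReader, DataFrameWriter}

// Illustrative only: one dispatch point shared by reader and writer paths.
// "hive" goes through the metastore; everything else loads/saves by location.
private object FormatHandler {
  def read(format: String, dfr: DataFrameReader, target: String): DataFrame =
    format.toLowerCase match {
      case "hive" => dfr.table(target)
      case f @ ("bigquery" | "snowflake" | "parquet" | "csv" | "delta" | "iceberg") =>
        dfr.format(f).load(target)
      case other => throw new UnsupportedOperationException(s"Unsupported read format: $other")
    }

  def write[T](format: String, dfw: DataFrameWriter[T], target: String): Unit =
    format.toLowerCase match {
      case "hive" => dfw.saveAsTable(target)
      case f @ ("bigquery" | "snowflake" | "parquet" | "csv" | "delta" | "iceberg") =>
        dfw.format(f).save(target)
      case other => throw new UnsupportedOperationException(s"Unsupported write format: $other")
    }
}
```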
## Summary
- Fixing some earlier bugs: case matching on Java classes isn't the same as on Scala classes.
- Adding a few more params to the GCS format: the source URI and the `format` string (which is basically the file format).
- Deleting some queries against INFORMATION_SCHEMA in the GCS format; they are no longer needed since we are using the BQ client.
- Adding code to handle Spark InternalRows. We use a low-level implementation to get at the InMemoryFileIndex, which contains file partitions. It yields InternalRows, so we need to translate them to Rows, which involves the correct serialization based on the column types (see the sketch after this description).
- Adding a couple of tests to BigQueryCatalogTest.
- Adding a `name` field to `Format`.
- Beginning to migrate some TableUtils methods to delegate to DataPointer.
- https://app.asana.com/0/1208949807589885/1208960391734329/f
- https://app.asana.com/0/1208949807589885/1208960391734331/f

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced BigQuery and GCS format handling with improved table name resolution and data source support.
  - Updated Spark table utilities with more robust data loading and management capabilities.
  - Introduced new methods for resolving table names and handling data formats.
  - Added support for new dependencies related to Google Cloud Dataproc.
  - Introduced unit tests for GCS format functionality.
- **Bug Fixes**
  - Improved error handling for data source formats and table operations.
  - Streamlined data pointer operations for better format compatibility.
- **Refactor**
  - Simplified data loading and schema retrieval methods.
  - Consolidated format handling logic in data source operations.
  - Enhanced organization and clarity in data pointer handling.
  - Cleaned up dependency declarations and project settings in build configuration.
  - Improved error handling and control flow in join computation processes.

Co-authored-by: Thomas Chow <[email protected]>
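The InternalRow bullet above describes schema-driven translation; here is a sketch of one way to do it with Spark 3.x catalyst internals. This is an assumption about the mechanism, not the PR's exact code:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.StructType

// InternalRow stores catalyst representations (UTF8String, epoch-day ints, ...),
// so converting to external Rows has to be driven by the column types.
def toExternalRows(internalRows: Seq[InternalRow], schema: StructType): Seq[Row] = {
  val deserializer = RowEncoder(schema).resolveAndBind().createDeserializer()
  internalRows.map(deserializer)
}
```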