
feat: CatalogAwareDataPointer and refactoring existing DataPointer #157


Merged

merged 5 commits into main from tchow/bq-support-8 on Jan 3, 2025

Conversation


@tchow-zlai tchow-zlai commented Dec 23, 2024

Summary

  • Refactor DataPointer. When parsing URIs, if we come across a prefix we should preserve the whole original prefix. We lose the benefits of the various `s3<c>` URIs, but we can fix that in a future iteration. This way, the Extensions code is simpler.
  • Define DataFrameWriter and DataFrameReader implicit classes to support handling DataPointer. Ideally, DataPointer should be a lightweight object we can act on, much like a table name or a URI.
  • Introduce CatalogAwareDataPointer. It encapsulates a Format, injected at runtime, that determines the underlying storage read/write layers. This is ultimately what a DataPointer represents; instead of defining it statically, we resolve it with remote calls. (A compact sketch of the resulting shape follows this list.)
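A compact sketch of the resulting shape, pieced together from the review snippets quoted later on this page; the field and constructor details are inferred, not the exact source:

```scala
// Hedged sketch of the refactor described above, not the exact source code.
trait Format { def name: String }

trait FormatProvider {
  def readFormat(tableName: String): Format
  def writeFormat(tableName: String): Format
}

abstract class DataPointer {
  def tableOrPath: String
  def readFormat: Option[String]
  def writeFormat: Option[String]
  def options: Map[String, String]
}

// Built statically by parsing URI-style strings such as "s3+parquet://bucket/key".
case class URIDataPointer(tableOrPath: String,
                          readFormat: Option[String],
                          writeFormat: Option[String],
                          options: Map[String, String])
    extends DataPointer

// Defers format discovery to a runtime-injected FormatProvider.
case class CatalogAwareDataPointer(tableOrPath: String, provider: FormatProvider) extends DataPointer {
  override lazy val readFormat: Option[String] = Option(provider.readFormat(tableOrPath).name)
  override lazy val writeFormat: Option[String] = Option(provider.writeFormat(tableOrPath).name)
  override def options: Map[String, String] = Map.empty
}
```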

Checklist

  • [ ] Added Unit Tests
  • [ ] Covered by existing CI
  • [ ] Integration tested
  • [ ] Documentation update

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a more flexible DataPointer architecture with an abstract base class and URIDataPointer.
    • Added support for dynamic format resolution in Spark data sources.
    • Enhanced BQuery and GCS classes with specific name methods.
  • Refactor

    • Restructured DataPointer class to improve extensibility.
    • Enhanced format handling with standardized name methods for different data formats.
    • Updated DataPointerOps to streamline format and catalog handling.
    • Modified TableUtils to utilize the new DataPointer instantiation method.
  • Improvements

    • Implemented more robust table and format parsing mechanisms.
    • Added utility methods for resolving table names and formats.
    • Refined logging for DataPointer instantiation and state representation.

coderabbitai bot commented Dec 23, 2024

Walkthrough

The pull request introduces a significant refactoring of the DataPointer class in the Chronon API. The changes transform the DataPointer from a case class to an abstract class with more flexible implementations, introducing a new URIDataPointer case class. Accompanying modifications include updates to the parsing logic, test cases, and the addition of a new CatalogAwareDataPointer in the Spark module. The Format trait is also enhanced with a new name method for format types.
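To make the parsing change concrete, here are two illustrative parses, inferred from the parser rules and test descriptions quoted later on this page; the apply entry point and the exact outputs are assumptions:

```scala
// Illustrative only; behavior is inferred from the review comments below.
DataPointer("s3+parquet://bucket/key")
// => URIDataPointer("s3://bucket/key", Some("parquet"), Some("parquet"), Map.empty)
//    An explicit format in the prefix wins, and the whole catalog prefix is preserved.

DataPointer("s3://bucket/data.csv")
// => No explicit format: file-like sources fall back to the file extension ("csv"),
//    while table-like sources fall back to the catalog name.
```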

Changes

File | Change Summary
api/src/main/scala/ai/chronon/api/DataPointer.scala | Converted DataPointer to an abstract class; added URIDataPointer case class; updated parsing logic
api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala | Replaced DataPointer with URIDataPointer in test cases
spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala | New file with CatalogAwareDataPointer case class; added DataPointer object with dynamic format provider resolution
spark/src/main/scala/ai/chronon/spark/Format.scala | Added name method to Format trait; implemented name for Hive, Iceberg, and DeltaLake; added resolveTableName to FormatProvider trait
spark/src/main/scala/ai/chronon/spark/Extensions.scala | Updated handling of format and catalog in DataPointerOps
spark/src/main/scala/ai/chronon/spark/TableUtils.scala | Changed instantiation of DataPointer to use apply method
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala | Added name method to BQuery class
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala | Added name method to GCS class; updated fileFormatString method
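A rough sketch of the Format and FormatProvider changes listed in the table above; the signatures and the exact name strings are inferred from this page and may differ from the source:

```scala
// Hedged sketch; the case-object names and name strings are assumptions.
trait Format {
  def name: String
}

case object Hive extends Format { override def name: String = "hive" }
case object Iceberg extends Format { override def name: String = "iceberg" }
case object DeltaLake extends Format { override def name: String = "delta" }

trait FormatProvider {
  def readFormat(tableName: String): Format
  def writeFormat(tableName: String): Format
  // Newly added: resolve a user-supplied reference to a concrete table name.
  def resolveTableName(tableName: String): String = tableName
}
```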

Suggested Reviewers

  • piyush-zlai
  • nikhil-zlai

Poem

🌟 Data Pointer's Metamorphosis 🌟
From case class to abstract might,
URIs dancing with delight,
Formats singing their sweet name,
Chronon's flexibility aflame! 🔥
Code evolves, complexity bows 🎉


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 1b97e2a and 23a1184.

📒 Files selected for processing (8)
  • api/src/main/scala/ai/chronon/api/DataPointer.scala (2 hunks)
  • api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala (2 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/Extensions.scala (2 hunks)
  • spark/src/main/scala/ai/chronon/spark/Format.scala (6 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala
  • api/src/main/scala/ai/chronon/api/DataPointer.scala
  • spark/src/main/scala/ai/chronon/spark/Extensions.scala
  • spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala
  • spark/src/main/scala/ai/chronon/spark/Format.scala


@tchow-zlai tchow-zlai force-pushed the tchow/bq-support-7 branch 2 times, most recently from 1ef9160 to e9a0d24 on December 23, 2024 09:59
@tchow-zlai tchow-zlai force-pushed the tchow/bq-support-8 branch 2 times, most recently from 8577b47 to fde1a44 on December 23, 2024 19:03
@tchow-zlai tchow-zlai changed the base branch from tchow/bq-support-7 to tchow/bq-support-9 December 23, 2024 19:07
@tchow-zlai tchow-zlai changed the base branch from tchow/bq-support-9 to tchow/bq-support-7 December 23, 2024 19:11
@tchow-zlai tchow-zlai force-pushed the tchow/bq-support-8 branch 2 times, most recently from 016025e to c4b3aa6 on December 23, 2024 19:13
@tchow-zlai tchow-zlai changed the base branch from tchow/bq-support-7 to main December 28, 2024 17:53
coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
spark/src/main/scala/ai/chronon/spark/Extensions.scala (2)

326-364: DataFrameWriter integration.
Consider extracting the repeated match logic into a helper to reduce duplication.


366-399: DataFrameReader integration.
Mirrors the writer logic; the same refactor suggestion applies (a possible shape for the helper is sketched below).
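One possible shape for that helper, reusing the DataPointer sketched in the PR description above; every name here is illustrative rather than the PR's actual code:

```scala
// Hedged sketch: hoist the shared Option[String] format match out of the
// writer and reader extensions into a single dispatch helper.
import ai.chronon.api.DataPointer
import org.apache.spark.sql.{DataFrame, DataFrameReader, DataFrameWriter, Row}

object DataPointerSyntax {
  private def dispatch[T](format: Option[String], tableOrPath: String)(
      withFormat: String => T)(asTable: String => T): T =
    format match {
      case Some(fmt) => withFormat(fmt)       // source with an explicit format
      case None      => asTable(tableOrPath)  // plain catalog table
    }

  implicit class PointerReader(reader: DataFrameReader) {
    def load(dp: DataPointer): DataFrame =
      dispatch(dp.readFormat, dp.tableOrPath) { fmt =>
        reader.format(fmt).options(dp.options).load(dp.tableOrPath)
      }(reader.table)
  }

  implicit class PointerWriter(writer: DataFrameWriter[Row]) {
    def save(dp: DataPointer): Unit =
      dispatch(dp.writeFormat, dp.tableOrPath) { fmt =>
        writer.format(fmt).options(dp.options).save(dp.tableOrPath)
      }(writer.saveAsTable)
  }
}
```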

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 6d3b6ca and 7ca3892.

📒 Files selected for processing (4)
  • api/src/main/scala/ai/chronon/api/DataPointer.scala (2 hunks)
  • api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/Extensions.scala (2 hunks)
🔇 Additional comments (20)
api/src/main/scala/ai/chronon/api/DataPointer.scala (5)

4-10: Well-structured abstract class.
Interfaces are clear.


12-17: Straightforward case class extension.
Implementation is clean.


40-41: Solid handling of catalog + format prefix.
Should be covered by parser tests.


48-55: Format fallback logic.
Confirm edge cases (unknown prefix).


59-59: Handles unnamed references.
Default case is sensible.

spark/src/main/scala/ai/chronon/spark/Extensions.scala (1)

29-30: Necessary imports.
Looks good.

spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala (2)

8-23: Case class bridging a FormatProvider.
Clean override of DataPointer.


25-39: Reflection-based constructor.
Ensure the config keys are correct; consider a fallback if the class is not found.

api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala (12)

4-4: Import for URIDataPointer.
Required for updated tests.


12-12: S3 path parse test.
Matches expected URIDataPointer.


18-21: BigQuery parse with options.
Tests correct. Consider negative tests.


26-26: BigQuery parse without options.
Logic is consistent.


31-31: Kafka topic parse.
Straightforward.


36-36: File path parse with CSV.
Works as intended.


41-42: Parsing options with spaces.
Validation is correct.


47-48: Path with dots.
Format detection tested.


53-53: Multiple dots, no explicit format.
Resolves to “dots” format.


58-58: File+csv with multiple dots.
Matches expected parse.


63-63: Wildcard path.
Ensures partial match.


68-68: No prefix scenario.
Default behavior tested.

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
api/src/main/scala/ai/chronon/api/DataPointer.scala (1)

40-55: Document format extraction rules.

Complex format extraction logic needs documentation explaining the rules.

Add comments:

+      // Rule 1: If format is explicitly specified in prefix (s3+parquet://), use that format
       case Some((ctl, Some(fmt)) ~ opts ~ sep) ~ path =>
         URIDataPointer(ctl + sep + path, Some(fmt), Some(fmt), opts.getOrElse(Map.empty))

+      // Rule 2: For file-like sources, extract format from extension
+      // Rule 3: For table-like sources, use catalog as format
       case Some((ctl, None) ~ opts ~ sep) ~ path =>
api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala (1)

10-69: Add negative test cases and property-based testing.

Current tests cover the happy paths well but miss error cases.

Add tests (one such case is sketched after this list):

  1. Invalid URI formats
  2. Malformed options
  3. Property-based testing with ScalaCheck
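A minimal sketch of one negative case, assuming a ScalaTest flat spec and that the parser throws on malformed input (both are assumptions):

```scala
// Hedged sketch of a negative test; the parse entry point and failure mode are assumed.
import ai.chronon.api.DataPointer
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class DataPointerNegativeSpec extends AnyFlatSpec with Matchers {
  "DataPointer parsing" should "reject a malformed options block" in {
    // Unbalanced parenthesis in the (hypothetical) options segment.
    an[Exception] should be thrownBy DataPointer("bigquery(option1=value1://project.dataset.table")
  }
}
```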
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 3640599 and cf4a368.

📒 Files selected for processing (4)
  • api/src/main/scala/ai/chronon/api/DataPointer.scala (2 hunks)
  • api/src/test/scala/ai/chronon/api/test/DataPointerTest.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/Format.scala (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • spark/src/main/scala/ai/chronon/spark/Format.scala
🔇 Additional comments (2)
spark/src/main/scala/ai/chronon/spark/CatalogAwareDataPointer.scala (1)

8-22: LGTM! Clean implementation with lazy evaluation.

Good use of lazy evaluation for formats and proper encapsulation of FormatProvider.

Let's verify FormatProvider implementations:

✅ Verification successful

FormatProvider implementations look good and consistent

Both DefaultFormatProvider and GcpFormatProvider:

  • Properly extend FormatProvider trait
  • Implement required methods (readFormat, writeFormat)
  • Take SparkSession as constructor param
  • Handle format detection appropriately
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find all FormatProvider implementations
ast-grep --pattern 'class $_ extends FormatProvider'

Length of output: 54


Script:

#!/bin/bash
# Let's try finding the FormatProvider trait/interface definition first
rg "trait FormatProvider" -A 5

# Also search for any files containing FormatProvider in their name
fd "FormatProvider"

# Search for implementations using rg instead
rg "extends FormatProvider"

Length of output: 908


Script:

#!/bin/bash
# Check DefaultFormatProvider implementation
rg "case class DefaultFormatProvider" -A 10

# Check GcpFormatProvider implementation
rg "case class GcpFormatProvider" -A 10

Length of output: 2433

api/src/main/scala/ai/chronon/api/DataPointer.scala (1)

4-17: LGTM! Clean abstraction design.

Good separation of concerns between abstract interface and concrete implementation.

Comment on lines +27 to +40
def apply(tableOrPath: String, sparkSession: SparkSession): DataPointer = {
  val clazzName =
    sparkSession.conf.get("spark.chronon.table.format_provider.class", classOf[DefaultFormatProvider].getName)
  val mirror = runtimeMirror(getClass.getClassLoader)
  val classSymbol = mirror.staticClass(clazzName)
  val classMirror = mirror.reflectClass(classSymbol)
  val constructor = classSymbol.primaryConstructor.asMethod
  val constructorMirror = classMirror.reflectConstructor(constructor)
  val reflected = constructorMirror(sparkSession)
  val provider = reflected.asInstanceOf[FormatProvider]

  CatalogAwareDataPointer(tableOrPath, provider)

}

⚠️ Potential issue

Add error handling for reflection failures.

Reflection could fail if class doesn't exist or lacks proper constructor.

Add try-catch:

 def apply(tableOrPath: String, sparkSession: SparkSession): DataPointer = {
+  try {
     val clazzName =
       sparkSession.conf.get("spark.chronon.table.format_provider.class", classOf[DefaultFormatProvider].getName)
     val mirror = runtimeMirror(getClass.getClassLoader)
     val classSymbol = mirror.staticClass(clazzName)
     val classMirror = mirror.reflectClass(classSymbol)
     val constructor = classSymbol.primaryConstructor.asMethod
     val constructorMirror = classMirror.reflectConstructor(constructor)
     val reflected = constructorMirror(sparkSession)
     val provider = reflected.asInstanceOf[FormatProvider]

     CatalogAwareDataPointer(tableOrPath, provider)
+  } catch {
+    case e: Exception => throw new IllegalArgumentException(s"Failed to initialize FormatProvider: ${e.getMessage}", e)
+  }
 }

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
spark/src/main/scala/ai/chronon/spark/Extensions.scala (2)

328-328: Use a config value or constant for the default format.


370-370: Revise the message to say "Unsupported format".

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between cf4a368 and 1034c5d.

📒 Files selected for processing (2)
  • spark/src/main/scala/ai/chronon/spark/Extensions.scala (2 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (1 hunks)
🔇 Additional comments (3)
spark/src/main/scala/ai/chronon/spark/Extensions.scala (1)

329-329: Ensure unknown or empty formats are handled

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (2)

749-749: Creating DataPointer with apply() is clean


755-755: Logging dp.readFormat is correct

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1)

13-14: Return a meaningful name.
Returning an empty string can hamper debugging or logging. Define a descriptive name if possible.
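For instance, a one-line sketch (the enclosing GCS class definition is omitted and assumed here):

```scala
// Sketch: return a descriptive format name instead of an empty string.
override def name: String = "gcs"
```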

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 1034c5d and 1b97e2a.

📒 Files selected for processing (2)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1 hunks)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GCSFormat.scala (1 hunks)
🔇 Additional comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala (1)

69-70: Implementation looks consistent.

@tchow-zlai tchow-zlai merged commit b81b7db into main Jan 3, 2025
10 checks passed
@tchow-zlai tchow-zlai deleted the tchow/bq-support-8 branch January 3, 2025 05:53
tchow-zlai added a commit that referenced this pull request Jan 6, 2025
…Utils behavior (#173)

## Summary

- #157 introduced CatalogAwareDataPointer and also a regression in the
way we write tables. Need to perform `insertInto` for hive-based tables
which have different write semantics from `saveAsTable`.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update



Co-authored-by: Thomas Chow <[email protected]>
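
For context on that fix, a minimal sketch of the write-path difference it refers to; the table and partition-column names are hypothetical:

```scala
// Hedged sketch: why Hive-backed tables need insertInto rather than saveAsTable.
import org.apache.spark.sql.{DataFrame, SaveMode}

// insertInto writes into an existing table's schema and partitions,
// matching columns by position against the pre-created definition.
def writeIntoHiveTable(df: DataFrame): Unit =
  df.write.mode(SaveMode.Overwrite).insertInto("db.events")

// saveAsTable derives table metadata from the DataFrame schema, which can
// conflict with or replace an existing Hive table definition.
def writeManagedTable(df: DataFrame): Unit =
  df.write.mode(SaveMode.Overwrite).partitionBy("ds").saveAsTable("db.events")
```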
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025