feat: use direct writes to bigquery #264

Merged

merged 2 commits into main from tchow/direct-writes on Jan 27, 2025

Conversation

@tchow-zlai (Collaborator) commented Jan 23, 2025

Summary

  • With feat: support create table in BigQuery #263, we control table creation ourselves, so we no longer need indirect writes to perform table creation (and partitioning) for us; we simply use the BigQuery Storage Write API to write directly into the table we created. This should be much more performant than indirect writes, because we skip staging the data and loading it as a temporary BQ table. See the sketch below.
  • Remove configs that are only used for indirect writes
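
As a rough illustration, a direct write through the Spark BigQuery connector looks something like the following sketch. The writeMethod and createDisposition options mirror the ones this PR touches; the DataFrame and table names are hypothetical:

```scala
// Minimal sketch (names hypothetical): direct write via the Storage Write API,
// with auto-creation disabled because TableUtils creates the table up front.
df.write
  .format("bigquery")
  .option("writeMethod", "direct")             // was "indirect" before this PR
  .option("createDisposition", "CREATE_NEVER") // never auto-create the table
  .option("table", "my_project.my_dataset.my_table")
  .mode("append")
  .save()
```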

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

Release Notes

  • Improvements

    • Enhanced BigQuery data writing process with more precise configuration options.
    • Simplified table creation and partition insertion logic.
    • Improved handling of DataFrame column arrangements during data operations.
  • Changes

    • Updated BigQuery write method to use a direct writing approach.
    • Introduced a new option to prevent table creation if it does not exist.
    • Modified table creation process to be more format-aware.
    • Streamlined partition insertion mechanism.

These updates improve data management and writing efficiency in cloud data processing workflows.

coderabbitai bot (Contributor) commented Jan 23, 2025

Walkthrough

The pull request introduces modifications to the GCP integration and Spark table utilities, focusing on changes in data writing and table creation processes. The primary updates involve adjusting the BigQuery write format, removing table reachability checks, and simplifying partition handling in the GcpFormatProvider and TableUtils classes. These changes aim to streamline data management and improve the flexibility of table creation and data insertion methods.

Changes

| File | Change Summary |
| --- | --- |
| cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala | Removed TableUtils table reachability check; changed write method from "indirect" to "direct"; added createDisposition option |
| spark/src/main/scala/ai/chronon/spark/TableUtils.scala | Moved writeFormat declaration outside conditional check; simplified createTable method logic; streamlined insertPartitions column rearrangement |

Suggested reviewers

  • nikhil-zlai
  • piyush-zlai
  • david-zlai

Poem

🌟 Bits and bytes dance with grace,
In BigQuery's cloud-kissed space,
Format shifts, a subtle art,
Where data writes so smartly start!
Code evolves, no table's the same 🚀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between f0fd283 and a891110.

📒 Files selected for processing (2)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3 hunks)
🧰 Additional context used
📓 Learnings (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (2)
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:29-60
Timestamp: 2025-01-24T23:55:30.256Z
Learning: In BigQuery integration, table existence check is performed outside the BigQueryFormat.createTable method, at a higher level in TableUtils.createTable.
Learnt from: tchow-zlai
PR: zipline-ai/chronon#263
File: cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala:56-57
Timestamp: 2025-01-24T23:55:40.650Z
Learning: For BigQuery table creation operations in BigQueryFormat.scala, allow exceptions to propagate directly without wrapping them in try-catch blocks, as the original BigQuery exceptions provide sufficient context.
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: fetcher_spark_tests
  • GitHub Check: other_spark_tests
  • GitHub Check: mutation_spark_tests
  • GitHub Check: table_utils_delta_format_spark_tests
  • GitHub Check: join_spark_tests
  • GitHub Check: scala_compile_fmt_fix
🔇 Additional comments (3)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)

38-40: LGTM! Direct writes configuration looks good.

The switch to direct writes with CREATE_NEVER disposition aligns with the PR objectives.

Let's verify that all BigQuery table creation is handled before writes:

✅ Verification successful

✅ Table existence checks confirmed

GcpFormatProvider verifies tables via bigQueryClient.getTable() before writes, and CREATE_NEVER disposition provides additional safety.
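
For reference, a pre-write existence check along these lines could look like the following sketch; the client setup and identifiers are assumptions for illustration, not the repo's actual code:

```scala
import com.google.cloud.bigquery.{BigQueryOptions, TableId}

// Hypothetical sketch: fail fast if the target table is missing, since the
// CREATE_NEVER disposition means the write job itself will never create it.
val bigQueryClient = BigQueryOptions.getDefaultInstance.getService
val tableId = TableId.of("my_dataset", "my_table") // illustrative names
require(Option(bigQueryClient.getTable(tableId)).isDefined,
        s"Table $tableId must exist before a direct write with CREATE_NEVER")
```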

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Search for BigQuery table creation calls
ast-grep --pattern 'createTable($$$)'
```

Length of output: 336

```bash
#!/bin/bash
# Search for BigQuery table creation and usage patterns
rg -A 2 "BigQuery.*[Tt]able"
rg -l "TableUtils" cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/
```

Length of output: 3772

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (2)

Line range hint 287-318: LGTM! Table creation logic properly handles BigQuery tables.

Skipping table properties for BigQuery tables is correct as they're handled differently.


333-339: LGTM! Partition column handling is standardized.

Converting partition column to date type for BigQuery is consistent with the standardization approach.
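
For illustration, the conversion being standardized here is just Spark's to_date applied to the partition column; the column name and format below are assumptions:

```scala
import org.apache.spark.sql.functions.to_date

// Sketch: cast a string partition column "ds" (e.g. "2025-01-27") to DateType.
val standardized = df.withColumn("ds", to_date(df.col("ds"), "yyyy-MM-dd"))
```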



Comment on lines 531 to 530
```scala
.select(saltedDf.columns.map {
  case c if c == partitionColumn && dataPointer.writeFormat.map(_.toUpperCase).exists("BIGQUERY".equals) =>
```

Contributor:

Forgetting why we had this conditional:

```scala
&& dataPointer.writeFormat.map(_.toUpperCase).exists("BIGQUERY".equals)
```
Collaborator Author (tchow-zlai):

To keep the behavior in all other cases the same, e.g. when we write to a Hive / Iceberg table. That said, I think we should try to keep the behavior the same everywhere instead of having a ton of branching.

(Multiple force-pushes to the tchow/direct-writes and tchow/bq-createtable branches followed, January 23-24, 2025.)
```scala
if (writeFormat.name.toUpperCase != "BIGQUERY") {
  // ...
  if (autoExpand) {
    expandTable(tableName, df.schema)
  }
}
```
Contributor:

Without this conditional it breaks, right?

Collaborator Author (tchow-zlai):

Yes, correct.

```scala
  df
}
val colOrder = df.columns.diff(partitionColumns) ++ partitionColumns
val dfRearranged: DataFrame = df.select(colOrder.map {
```
Contributor:

I think we're losing the behavior of making sure the partition column is last in the order, though?

Collaborator Author (tchow-zlai), Jan 24, 2025:

It should be fine, right? val colOrder handles that for us.

Contributor:

Ah yes, it does. Thanks for pointing that out.
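
A quick illustration of why colOrder guarantees that ordering (the schema here is hypothetical):

```scala
val columns = Array("user_id", "amount", "ds") // hypothetical DataFrame columns
val partitionColumns = Seq("ds")
val colOrder = columns.diff(partitionColumns) ++ partitionColumns
// colOrder == Array("user_id", "amount", "ds"): partition columns always land last
```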

```scala
val colOrder = df.columns.diff(partitionColumns) ++ partitionColumns
val dfRearranged: DataFrame = df.select(colOrder.map {
  case c if c == partitionColumn =>
    to_date(df.col(c), partitionFormat).as(partitionColumn)
```
Contributor:

Did we want to convert the partition column to a date type only for BigQuery?

Collaborator Author (tchow-zlai):

I decided to try to make this the same across the board, to avoid any special casing.

Collaborator Author (tchow-zlai):

The current unit test suite passes with this change, but we'll have to see whether it affects Hive implementations anywhere. That said, I do think supporting date partition columns is the right behavior for the engine.

Collaborator Author (tchow-zlai):

Never mind, putting this back because it's too much of a lift to support this generally.

@tchow-zlai force-pushed the tchow/bq-createtable branch from 76466ad to b3d765b on January 24, 2025
Base automatically changed from tchow/bq-createtable to main January 25, 2025 07:42
coderabbitai bot (Contributor) left a comment
Actionable comments posted: 0

🧹 Nitpick comments (1)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)

39-39: Document CREATE_NEVER behavior.

Add a comment explaining that jobs will fail if the table doesn't exist, as table creation is handled by TableUtils.

```diff
+      // Table must exist or job will fail. Table creation is handled by TableUtils
       "createDisposition" -> JobInfo.CreateDisposition.CREATE_NEVER.name
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between c0f1645 and f0fd283.

📒 Files selected for processing (2)
  • cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1 hunks)
  • spark/src/main/scala/ai/chronon/spark/TableUtils.scala (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: mutation_spark_tests
  • GitHub Check: join_spark_tests
  • GitHub Check: fetcher_spark_tests
  • GitHub Check: table_utils_delta_format_spark_tests
  • GitHub Check: other_spark_tests
  • GitHub Check: scala_compile_fmt_fix
🔇 Additional comments (5)
cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala (1)

38-40: Verify BigQuery direct write configuration.

The switch to direct writes looks good, but ensure the BigQuery Storage Write API quotas are sufficient for your write patterns.

✅ Verification successful

Direct write configuration is safe to proceed

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Check if the BigQuery Storage Write API is enabled
gcloud services list --format="table(NAME)" | grep bigquerystorage.googleapis.com

# Look for any quota-related configurations or issues in the codebase
rg -i "quota|rate.?limit" --type scala
```

Length of output: 2129

spark/src/main/scala/ai/chronon/spark/TableUtils.scala (4)

287-287: LGTM! Initialization moved outside condition.

Moving writeFormat initialization before the tableReachable check prevents potential NPEs.
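
As a rough sketch of the reordering being described (all identifiers are assumptions inferred from the diff context, not the repo's verbatim code):

```scala
// Resolve the write format before the reachability branch, so it is always
// initialized when later code touches it (avoiding the NPE risk noted above).
val writeFormat = tableFormatProvider.writeFormat(tableName) // assumed helper
if (!tableReachable(tableName)) {
  createTableWith(df, tableName, writeFormat)                // assumed helper
}
```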


333-334: LGTM! Simplified column ordering.

Clean simplification of column reordering logic.


335-338: LGTM! Consistent date type handling.

Uniform date type conversion for partition columns across all formats improves consistency.


312-318: Verify BigQuery table properties handling.

Skipping table properties and autoExpand for BigQuery tables aligns with direct writes, but ensure this doesn't affect any existing functionality that relies on these properties.

✅ Verification successful

BigQuery table properties handling is correctly implemented

Skipping table properties for BigQuery is intentional and safe. BigQuery tables use direct writes and native property management through BigQuery's API instead of Hive-style table properties. No existing functionality relies on these properties for BigQuery tables.
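
To make the distinction concrete, here is a hedged sketch of managing metadata natively through the BigQuery client, with labels standing in for Hive-style TBLPROPERTIES; all names are illustrative:

```scala
import com.google.cloud.bigquery.BigQueryOptions
import java.util.Collections

// Hypothetical sketch: BigQuery metadata is managed via its own API (e.g. labels)
// rather than Hive-style table properties written by Spark.
val bq = BigQueryOptions.getDefaultInstance.getService
val table = bq.getTable("my_dataset", "my_table") // illustrative names
table.toBuilder.setLabels(Collections.singletonMap("managed_by", "chronon")).build().update()
```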

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Search for code that depends on BigQuery table properties
rg -A 5 "tableProperties.*BIGQUERY"
```

Length of output: 37

```bash
#!/bin/bash
# Search for BigQuery related code and table properties
echo "=== BigQuery Format Usage ==="
rg -i "bigquery" -A 5

echo -e "\n=== Table Properties Usage ==="
rg "tableProperties" -A 5

echo -e "\n=== Property Configuration Patterns ==="
ast-grep --pattern 'if ($format == "BIGQUERY") { $$$ }'
```

Length of output: 68758

@tchow-zlai tchow-zlai changed the base branch from main to tchow/avro-date January 27, 2025 18:54
@tchow-zlai tchow-zlai changed the base branch from tchow/avro-date to main January 27, 2025 19:17
tchow-zlai and others added 2 commits January 27, 2025 11:18

Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai tchow-zlai merged commit 31c78af into main Jan 27, 2025
9 checks passed
@tchow-zlai tchow-zlai deleted the tchow/direct-writes branch January 27, 2025 21:14
nikhil-zlai pushed a commit that referenced this pull request Feb 4, 2025
@coderabbitai coderabbitai bot mentioned this pull request Feb 21, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025