
Commit c38fcd1

fix: Do BigQuery indirect writes (#357)
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced table output handling to support partitioned tables.
  - Introduced configurable options for temporary storage and integration settings, improving cloud-based table materialization.

Co-authored-by: Thomas Chow <[email protected]>
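The new write options read a temporary GCS bucket from Spark configuration. Below is a minimal sketch of how that setting could be supplied when building the session; the app name and bucket name are placeholders, not values from this change.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: provide the staging bucket that the indirect write path reads
// from Spark conf. "my-chronon-staging" and the app name are placeholders.
val spark: SparkSession = SparkSession
  .builder()
  .appName("gcp-format-provider-sketch")
  .config("spark.chronon.table.gcs.temporary_gcs_bucket", "my-chronon-staging")
  .getOrCreate()
```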
1 parent 9fddd49 commit c38fcd1

1 file changed (+4 −1)

cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/GcpFormatProvider.scala

Lines changed: 4 additions & 1 deletion
```diff
@@ -45,7 +45,10 @@ case class GcpFormatProvider(sparkSession: SparkSession) extends FormatProvider
     assert(scala.Option(tableId.getDataset).isDefined, s"dataset required for ${table}")
 
     val sparkOptions: Map[String, String] = Map(
-      "writeMethod" -> "direct",
+      "temporaryGcsBucket" -> sparkSession.conf.get("spark.chronon.table.gcs.temporary_gcs_bucket"),
+      "writeMethod" -> "indirect",
+      "materializationProject" -> tableId.getProject,
+      "materializationDataset" -> tableId.getDataset,
       "createDisposition" -> JobInfo.CreateDisposition.CREATE_NEVER.name
     )
```
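For context, indirect writes in the Spark BigQuery connector stage data in GCS and then load it into BigQuery, which is why a temporary bucket is required. The sketch below illustrates how options of this shape are typically handed to the connector's DataFrame writer; the source table, destination table, and bucket are hypothetical, and this is not necessarily Chronon's actual downstream write path.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Illustration only: passing indirect-write options to the spark-bigquery connector.
// All names below are placeholders, not values from this commit.
val spark = SparkSession.builder().appName("bq-indirect-write-sketch").getOrCreate()
val df: DataFrame = spark.table("some_source_table") // hypothetical source

val writeOptions: Map[String, String] = Map(
  "writeMethod"        -> "indirect",           // stage via GCS, then load into BigQuery
  "temporaryGcsBucket" -> "my-chronon-staging", // placeholder staging bucket
  "createDisposition"  -> "CREATE_NEVER"        // fail if the destination table is missing
)

df.write
  .format("bigquery")
  .options(writeOptions)
  .mode("append")
  .save("my-project.my_dataset.my_table") // placeholder destination table
```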
