Hi team,
I'm using the Spark BigQuery Connector to query datasets that reside in the EU region, and I’ve encountered a consistent issue.
The connector always executes query jobs in the US region, even when `.option("location", "EU")` is set, resulting in errors like:

```
Not found: Dataset project:dataset was not found in location US
```
After investigating the codebase and tracing the execution path, here's what I found. The connector ultimately submits query jobs using:

```java
bigQuery.create(JobInfo.of(queryJobConfig))
```
This creates a `JobInfo` with a `JobConfiguration` but no `JobId`. In the BigQuery API, the job location is part of `jobReference`, not of the job configuration, so unless a `JobId` is built with `.setLocation(...)`, BigQuery defaults the job to the US region.
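For illustration, here is a minimal sketch of what a fix in the connector could look like, using the `google-cloud-bigquery` Java client. The `location` variable is hypothetical; in the connector it would come from the `location` option:

```java
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;

// Hypothetical: "location" would be read from the connector's
// .option("location", ...) setting, e.g. "EU".
String location = "EU";

// Build a JobId carrying the location (jobReference.location in the API),
// letting the client generate a random job id.
JobId jobId = JobId.newBuilder()
    .setLocation(location)
    .setRandomJob()
    .build();

// Attach the JobId so the query job runs in the requested region
// instead of defaulting to US.
bigQuery.create(JobInfo.of(jobId, queryJobConfig));
```

With the `JobId` attached, the same query against an EU dataset should no longer fail with "was not found in location US".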
Would you please take a look at this?