Hi team,

I'm using the Spark BigQuery Connector to query datasets that reside in the EU region, and I've encountered a consistent issue.

The connector always executes query jobs in the US region, even when `.option("location", "EU")` is set, resulting in errors like:

`Not found: Dataset project:dataset was not found in location US`
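For context, this is roughly how I'm issuing the query (a minimal sketch using the connector's `query` option via Spark's Java API; the project, dataset, and table names are placeholders, and `viewsEnabled`/`materializationDataset` are set because the `query` option requires a materialization dataset):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class EuQueryRepro {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("eu-location-repro")
        .getOrCreate();

    // Placeholder names: my-project.eu_dataset.my_table lives in the EU region.
    Dataset<Row> df = spark.read()
        .format("bigquery")
        .option("viewsEnabled", "true")
        .option("materializationDataset", "eu_dataset")
        .option("location", "EU") // has no effect on the region of the query job
        .option("query", "SELECT * FROM `my-project.eu_dataset.my_table`")
        .load();

    df.show(); // fails with the "not found in location US" error above
  }
}
```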
After investigating the codebase and tracing the execution path, here's what I found. The connector ultimately submits query jobs with:

`bigQuery.create(JobInfo.of(queryJobConfig))`

This creates a `JobInfo` with a `JobConfiguration` but no `JobId`. In the BigQuery API, the job location belongs to the `jobReference`, not the job configuration, so unless a `JobId` is created with `.setLocation(...)`, BigQuery defaults the job to the US region.
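A sketch of the kind of change I have in mind, using the java-bigquery client's `JobId` and `JobInfo` builders (the `location` variable stands in for however the connector would plumb through the user-supplied location option):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;

// Assumed in scope: BigQuery bigQuery, QueryJobConfiguration queryJobConfig,
// and String location (e.g. "EU") taken from the connector options.
JobId jobId = JobId.newBuilder()
    .setLocation(location) // put the location on the jobReference
    .build();

// Instead of JobInfo.of(queryJobConfig), attach the JobId explicitly:
bigQuery.create(JobInfo.newBuilder(queryJobConfig)
    .setJobId(jobId)
    .build());
```

With the location carried on the `JobId`, the job should run in the dataset's region rather than defaulting to US.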
Would you please take a look at this one?