Querying large amounts of data without creating a destination table #290
Hi @Mikerah, I recommend you try out the BigQuery Storage API. See https://pandas-gbq.readthedocs.io/en/latest/reading.html#improving-download-performance for the full instructions. Closing for now, but let me know if you have any questions or this doesn't work.
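A minimal sketch of that suggestion (project, dataset, and query names are placeholders; the Storage API only speeds up downloading the results, it does not change the query job itself):

```python
import pandas_gbq

# Hypothetical project and query. Requires the google-cloud-bigquery-storage
# package to be installed for use_bqstorage_api to take effect.
df = pandas_gbq.read_gbq(
    "SELECT * FROM `my-project.my_dataset.my_table`",  # placeholder query
    project_id="my-project",
    use_bqstorage_api=True,  # download results via the BigQuery Storage API
)
```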
Are you using legacy SQL? https://cloud.google.com/bigquery/docs/writing-results#limitations The workarounds I suggest are:
Hi @tswast, I am executing a query that should return ~150GB of data and I am getting the following error even when using `use_bqstorage_api=True` in the pandas `read_gbq` function:

"google.api_core.exceptions.Forbidden: 403 Response too large to return. Consider specifying a destination table in your job configuration. For more details, see https://cloud.google.com/bigquery/troubleshooting-errors"

I believe this is failing in the query stage, since I am not specifying a destination table to save the data to. From this issue ticket, it seems like I should not have to specify a destination table if I am using the BQ Storage API. Is that true?

If this method does not work, how would you suggest executing Python/pandas queries that can regularly be expected to return large amounts of data? Is there a way to change BigQuery settings to allow large temporary tables to be created? My automated workflow will return results of varying size, so always creating tables to read data from seems like a lot of extra work when I'm unsure the result size needs it. Thanks!
@jlynchMicron I think you are encountering a different error from the one described in googleapis/python-bigquery#1252. I'm not seeing a relevant limit listed at https://cloud.google.com/bigquery/quotas#query_jobs, but you might be returning too many results to write to an anonymous results table: https://cloud.google.com/bigquery/docs/cached-results

Are you doing a `SELECT *` over a whole table?

If not, as a workaround, you can specify a destination table with the `configuration` argument.
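A sketch of that workaround, assuming a hypothetical destination table; the `configuration` dict that `read_gbq` accepts mirrors the BigQuery REST query-job configuration:

```python
import pandas_gbq

# Hypothetical destination table; results are written there and then downloaded.
config = {
    "query": {
        "destinationTable": {
            "projectId": "my-project",
            "datasetId": "my_dataset",
            "tableId": "large_query_results",
        },
        "writeDisposition": "WRITE_TRUNCATE",  # overwrite the table on each run
        "allowLargeResults": True,  # only needed for legacy SQL; ignored by standard SQL
    }
}

df = pandas_gbq.read_gbq(
    "SELECT * FROM `my-project.my_dataset.big_table`",  # placeholder query
    project_id="my-project",
    configuration=config,
    use_bqstorage_api=True,
)
```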
Hi @tswast, you are right, it is a different error; I was just showing an example of the amount of data I could expect to receive from a past issue ticket.

As for your suggestion, is there a way for this Python code to run a test query beforehand to know whether a destination table will need to be created? The way I have my query code set up, it is basically a SQLAlchemy query wrapper, so it can have varying result sizes, from very small amounts of data to the entire table being returned (1TB+). Is there a way to dry-run the query to see how much data it is expected to return, to decide whether a destination table needs to be written? Thanks!
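For reference, a dry run with the `google-cloud-bigquery` client (sketched below with placeholder names) reports how many bytes the query would scan, though, as the next reply notes, that is not the same thing as the size of the result set:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT * FROM `my-project.my_dataset.big_table`",  # placeholder query
    job_config=job_config,
)

# total_bytes_processed estimates the data scanned by the query,
# not the size of the rows it would return.
print(f"Query would process {job.total_bytes_processed / 1e9:.2f} GB")
```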
@jlynchMicron I don't know of a good way to know the result size without actually running the query.

For your use case, I might suggest always creating a destination table but also setting an expiration time, so that the result tables don't stay around for much longer than they are needed.
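A sketch of that approach, reusing the hypothetical destination table from the earlier configuration example and setting an expiration with the `google-cloud-bigquery` client:

```python
from datetime import datetime, timedelta, timezone
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# After the query job has written its results to the destination table,
# give that table an expiration so it cleans itself up automatically.
table = client.get_table("my-project.my_dataset.large_query_results")
table.expires = datetime.now(timezone.utc) + timedelta(hours=6)
client.update_table(table, ["expires"])
```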
Hi @tswast, I tried your first code suggestion as a test, but it fails to work. Error: "Reason: 400 Syntax error: Expected end of input but got identifier" ... it goes on with my table information. Looking at the query in BigQuery, it thinks I'm trying to execute that table identifier as a SQL query and does not know what to do with it. Based on this, I don't think you can download a table just by putting in its table identifier string. This seems like a problem, especially if I want to do a `SELECT *` on an existing table. Does that mean my only course of action is to save the result of a table `SELECT *` to another table, which the query will then download from?
Hi @jlynchMicron, the feature to read directly from a table was added in pandas-gbq version 0.17.0: https://github.com/googleapis/python-bigquery-pandas/blob/main/CHANGELOG.md#0170-2022-01-19. That may explain why you are encountering this error.
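With pandas-gbq 0.17.0 or later, passing a table ID in place of a query is expected to look roughly like this (placeholder names):

```python
import pandas_gbq

# Passing a table ID instead of a SQL string reads the table directly,
# avoiding a query job and its result-size limits.
df = pandas_gbq.read_gbq(
    "my-project.my_dataset.my_table",  # placeholder table ID
    project_id="my-project",
    use_bqstorage_api=True,
)
```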
Hi @tswast, ah, looks like I am using version 0.16.0. Thanks!
Hi @tswast, if I know my query will exceed the maximum query response size and I will have to specify a destination table to write the results to, do you know if there is any way to specify that the result table should be a temporary table (the kind of temporary table described in the BigQuery documentation)?

If this were possible, do you think the process could be automated into the `read_gbq` function as an argument flag? I am just trying to think of a nice way to download very large results for a Python/pandas data scientist who does not want to deal with extra table management just to execute a query on a big dataset.
It's not possible to manually create a temporary table, but you can emulate one by creating a table with a default expiration time.
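One way to set that up is a dedicated scratch dataset whose tables expire by default; a sketch with a hypothetical dataset name:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# A scratch dataset whose tables expire automatically, which approximates
# temporary-table behavior for query results written into it.
dataset = bigquery.Dataset("my-project.scratch_results")
dataset.default_table_expiration_ms = 6 * 60 * 60 * 1000  # 6 hours
client.create_dataset(dataset, exists_ok=True)
```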
I'd like to learn a little more about the cases in which this error can occur, as I didn't realize it could happen when using Standard SQL syntax. Perhaps one of my teammates can investigate as a feature request. If we were to implement this, I'd expect us to create a
I'm trying to load data from BigQuery into a data frame. This is a lot of data, initially requiring the use of `allowLargeResults`. However, every time I query, I use resources that I shouldn't have to be using. Is there a way to query large amounts of data from BigQuery without having to create a destination table every time, and just store the result in a data frame? I am using a Jupyter notebook on an AWS instance, if this matters.