
"spark.sql.sources.partitionOverwriteMode": "DYNAMIC" - creates additional tables #1314

Open
MichalBogoryja opened this issue Nov 15, 2024 · 4 comments

@MichalBogoryja

When writing a Spark dataframe to an existing partitioned BigQuery table, the target table ends up modified as expected (partitions added/overwritten). However, an additional table is also saved; it contains exactly the data of the dataframe I was writing to the target table.
To reproduce:
database state: empty

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.sql.sources.partitionOverwriteMode", "DYNAMIC")
         .config("enableReadSessionCaching", "false")
         .getOrCreate())
# sdf is an existing Spark dataframe with a 'curdate' DATE column
sdf.write.format("bigquery").option("partitionField", "curdate").option("partitionType", "DAY").mode("overwrite").save(f"{gcp_project_id}.{db}.{table_name}")

database state:
one table named {table_name} - data as in sdf

sdf_2.write.format("bigquery").mode('overwrite').save(f"{gcp_project_id}.{db}.{table_name}")

database state:
one table named {table_name} - data as in sdf plus the new data from sdf_2 (or, if sdf_2 contains the same partitions as sdf, the original partitions overwritten)
an ADDITIONAL table named {table_name}<random digits> (e.g. table_name4467706876500)

Can you modify the save logic so that this additional table is not created (or is dropped once the save finishes)?
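
Until this is addressed in the connector, a minimal manual cleanup sketch (assuming the leftover tables follow the {table_name}<digits> pattern shown above, and reusing the gcp_project_id, db, and table_name variables from the repro) could look like this:

# Hypothetical workaround, not part of the connector: drop leftover tables whose
# names are the target table name plus a purely numeric suffix.
from google.cloud import bigquery

client = bigquery.Client(project=gcp_project_id)
for t in client.list_tables(f"{gcp_project_id}.{db}"):
    suffix = t.table_id[len(table_name):]
    if t.table_id.startswith(table_name) and suffix and suffix.isdigit():
        client.delete_table(t.reference, not_found_ok=True)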

@isha97
Member

isha97 commented Feb 3, 2025

Hi @MichalBogoryja, which connector version are you using? Please try the latest connector version, 0.41.1.
We do have a cleanup job that deletes all the temporary tables created, once the application finishes.
Can you check your Spark driver logs for messages like "Running cleanup jobs. Jobs count is ..."?
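
(If the notebook's driver log level is set above INFO, those messages may be filtered out; one way to surface them, assuming the default log4j setup, is to lower the driver log level:)

# Lower the driver log level so the connector's cleanup messages are printed.
spark.sparkContext.setLogLevel("INFO")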

@isha97 isha97 closed this as completed Feb 18, 2025
@ma-gianluca-busatta

Hi @isha97, I ran a test under similar conditions to those described by @MichalBogoryja, and the issue is still present in version 0.42.0. Could you please reopen the issue?

@MichalBogoryja
Author

Hi @isha97,
I missed your response.
Previously I had tested only in Vertex AI Workbench; now I've retested with the latest connector (0.42.0) both in Vertex AI Workbench and in a Dataproc Serverless batch.
The Dataproc Serverless batch run finished with a cleanup job, and all the temporary tables were deleted.
However, this doesn't work in Vertex AI Workbench. I tested it on a new JupyterLab 4 instance. The cleanup process does not run when the kernel is shut down (either manually or by the automatic shutdown after the idle timeout) or when the Vertex AI Workbench instance stops. The only way I've managed to trigger the cleanup is by restarting the kernel, which is the least common way to finish work in a Jupyter notebook.

Can you modify the cleanup process? I think the most reliable approach would be to trigger the cleanup right after the write to the partitioned BigQuery table finishes.
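
In the meantime, a possible notebook-side workaround, assuming the cleanup hook runs on a normal application shutdown as described above, is to stop the Spark session explicitly once the writes are done:

# Explicitly end the Spark application from the notebook; on a clean shutdown
# the connector's cleanup job should get a chance to run.
spark.stop()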

@MichalBogoryja
Copy link
Author

Hi @isha97, @davidrabinowitz,
I want to ask again to reopen this issue. The way the cleanup process is triggered now is not sufficient when the spark-bigquery connector is used in Vertex AI Workbench.

@isha97 isha97 reopened this Apr 18, 2025