Commit f58011f

Fix error when setting a large number of properties (#312)
**Fix error when setting a large number of properties**

Bugfix. Fixes #269. This change greatly reduces the likelihood of an error when specifying a large number of `property_ids` in `ga4.combine_property_data()`.

* Fixed the bug shown below
* Changed the macro to copy tables separately for each `property_id`

With `dbt_project.yml` configured as follows:

```yml
vars:
  ga4:
    source_project: source-project-id
    property_ids: [000000001, 000000002, ..., 000000040]
    start_date: 20210101
    static_incremental_days: 3
    combined_dataset: combined_dataset_name
```

a full refresh previously failed because the generated query exceeded BigQuery's maximum query length:

```shell
$ dbt run -s base_ga4__events --full-refresh
06:51:19  Running with dbt=1.5.0
06:52:05  Found 999 models, 999 tests, 999 snapshots, 999 analyses, 999 macros, 999 operations, 999 seed files, 999 sources, 999 exposures, 999 metrics, 999 groups
06:52:06
06:52:14  Concurrency: 4 threads (target='dev')
06:52:14
06:52:14  1 of 1 START sql view model dataset_name.base_ga4__events ......... [RUN]
06:56:17  BigQuery adapter: https://console.cloud.google.com/bigquery?project=project-id&j=bq:asia-northeast1:????????-????-????-????-????????????&page=queryresults
06:56:17  1 of 1 ERROR creating sql view model dataset_name.base_ga4__events [ERROR in 243.80s]
06:56:18
06:56:18  Finished running 1 view model in 0 hours 4 minutes and 11.62 seconds (251.62s).
06:56:22
06:56:22  Completed with 1 error and 0 warnings:
06:56:22
06:56:23  Database Error in model base_ga4__events (models/staging/base/base_ga4__events.sql)
06:56:23    The query is too large. The maximum standard SQL query length is 1024.00K characters, including comments and white space characters.
06:56:23
06:56:23  Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1
```

Merging this pull request makes the same run succeed:

```shell
$ dbt run -s base_ga4__events --full-refresh
HH:mm:ss  Running with dbt=1.5.0
HH:mm:ss  Found 999 models, 999 tests, 999 snapshots, 999 analyses, 999 macros, 999 operations, 999 seed files, 999 sources, 999 exposures, 999 metrics, 999 groups
HH:mm:ss
HH:mm:ss  Concurrency: 4 threads (target='dev')
HH:mm:ss
HH:mm:ss  1 of 1 START sql incremental model dataset_name.base_ga4__events ... [RUN]
HH:mm:ss  Cloned from `source-project-id.analytics_000000001.events_*[20210101-20240324]` to `project-id.combined_dataset_name.events_YYYYMMDD000000001`.
HH:mm:ss  Cloned from `source-project-id.analytics_000000002.events_*[20210101-20240324]` to `project-id.combined_dataset_name.events_YYYYMMDD000000002`.
....
HH:mm:ss  Cloned from `source-project-id.analytics_000000040.events_*[20210101-20240324]` to `project-id.combined_dataset_name.events_YYYYMMDD000000040`.
HH:mm:ss  1 of 1 OK created sql incremental model dataset_name.base_ga4__events [CREATE TABLE (? rows, ? processed) in ?]
HH:mm:ss
HH:mm:ss  Finished running 1 incremental model in ? (?).
HH:mm:ss
HH:mm:ss  Completed successfully
HH:mm:ss
HH:mm:ss  Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
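The underlying cause is visible in the error above: the pre-hook previously assembled the clone statements for every property into one BigQuery script, whose SQL text grew past the 1024K-character limit. This commit instead builds and submits one script per property. Below is a minimal sketch of that pattern (the macro name is hypothetical; the actual change is in `macros/combine_property_data.sql`, diffed further down):

```sql
{# Minimal sketch of the per-property pattern, not the actual macro:
   each loop iteration builds its own script and submits it with
   run_query(), so no single BigQuery job's SQL text grows with the
   total number of properties. #}
{% macro clone_per_property_sketch() %}
{% for property_id in var('property_ids') %}
{%- set clone_query -%}
create schema if not exists `{{ target.project }}.{{ var('combined_dataset') }}`;
-- clone statements for this property only
{%- endset -%}
{% do run_query(clone_query) %}
{% endfor %}
{% endmacro %}
```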
---

**Fixed timeout in clone operation**

Since cloning now runs as a separate job per `property_id`, the timeout warned about below should almost never occur.

* Removed https://github.com/Velir/dbt-ga4/blame/6.0.1/README.md#L323-L332 from README.md
* Resolves the workaround previously documented at https://github.com/Velir/dbt-ga4/blame/6.0.1/README.md#L323-L332:

> Jobs that run a large number of clone operations are prone to timing out. As a result, it is recommended that you increase the query timeout if you need to backfill or full-refresh the table, when first setting up or when the base model gets modified. Otherwise, it is best to prevent the base model from rebuilding on full refreshes unless needed to minimize timeouts.
>
> ```
> models:
>   ga4:
>     staging:
>       base:
>         base_ga4__events:
>           +full_refresh: false
> ```

* Changed the implementation of `combine_property_data` to the minimum necessary
* Removed `latest_shard_to_retrieve`
1 parent d97fbf6 commit f58011f

File tree

2 files changed: +20 −22 lines

README.md (−10 lines)

````diff
@@ -320,16 +320,6 @@ vars:
 
 With these variables set, the `combine_property_data` macro will run as a pre-hook to `base_ga4_events` and clone shards to the target dataset. The number of days' worth of data to clone during incremental runs will be based on the `static_incremental_days` variable.
 
-Jobs that run a large number of clone operations are prone to timing out. As a result, it is recommended that you increase the query timeout if you need to backfill or full-refresh the table, when first setting up or when the base model gets modified. Otherwise, it is best to prevent the base model from rebuilding on full refreshes unless needed to minimize timeouts.
-
-```
-models:
-  ga4:
-    staging:
-      base:
-        base_ga4__events:
-          +full_refresh: false
-```
 # dbt Style Guide
 
 This package attempts to adhere to the Brooklyn Data style guide found [here](https://github.com/brooklyn-data/co/blob/main/sql_style_guide.md). This work is in-progress.
````

macros/combine_property_data.sql (+20 −12 lines)

```diff
@@ -3,36 +3,44 @@
 {%- endmacro -%}
 
 {% macro default__combine_property_data() %}
-
-create schema if not exists `{{target.project}}.{{var('combined_dataset')}}`;
-
-{# If incremental, then use static_incremental_days variable to find earliest shard to copy #}
 {% if not should_full_refresh() %}
-{% set earliest_shard_to_retrieve = (modules.datetime.date.today() - modules.datetime.timedelta(days=var('static_incremental_days')))|string|replace("-", "")|int %}
+{# If incremental, then use static_incremental_days variable to find earliest shard to copy #}
+{%- set earliest_shard_to_retrieve = (modules.datetime.date.today() - modules.datetime.timedelta(days=var('static_incremental_days')))|string|replace("-", "")|int -%}
 {% else %}
-{# Otherwise use 'start_date' variable #}
-
-{% set earliest_shard_to_retrieve = var('start_date')|int %}
+{# Otherwise use 'start_date' variable #}
+{%- set earliest_shard_to_retrieve = var('start_date')|int -%}
 {% endif %}
 
 {% for property_id in var('property_ids') %}
 {%- set schema_name = "analytics_" + property_id|string -%}
+
+{%- set combine_specified_property_data_query -%}
+create schema if not exists `{{target.project}}.{{var('combined_dataset')}}`;
+
 {# Copy intraday tables #}
 {%- set relations = dbt_utils.get_relations_by_pattern(schema_pattern=schema_name, table_pattern='events_intraday_%', database=var('source_project')) -%}
 {% for relation in relations %}
 {%- set relation_suffix = relation.identifier|replace('events_intraday_', '') -%}
 {%- if relation_suffix|int >= earliest_shard_to_retrieve|int -%}
-CREATE OR REPLACE TABLE `{{target.project}}.{{var('combined_dataset')}}.events_intraday_{{relation_suffix}}{{property_id}}` CLONE `{{var('source_project')}}.analytics_{{property_id}}.events_intraday_{{relation_suffix}}`;
+create or replace table `{{target.project}}.{{var('combined_dataset')}}.events_intraday_{{relation_suffix}}{{property_id}}` clone `{{var('source_project')}}.analytics_{{property_id}}.events_intraday_{{relation_suffix}}`;
 {%- endif -%}
 {% endfor %}
+
 {# Copy daily tables and drop old intraday table #}
 {%- set relations = dbt_utils.get_relations_by_pattern(schema_pattern=schema_name, table_pattern='events_%', exclude='events_intraday_%', database=var('source_project')) -%}
 {% for relation in relations %}
 {%- set relation_suffix = relation.identifier|replace('events_', '') -%}
 {%- if relation_suffix|int >= earliest_shard_to_retrieve|int -%}
-CREATE OR REPLACE TABLE `{{target.project}}.{{var('combined_dataset')}}.events_{{relation_suffix}}{{property_id}}` CLONE `{{var('source_project')}}.analytics_{{property_id}}.events_{{relation_suffix}}`;
-DROP TABLE IF EXISTS `{{target.project}}.{{var('combined_dataset')}}.events_intraday_{{relation_suffix}}{{property_id}}`;
+create or replace table `{{target.project}}.{{var('combined_dataset')}}.events_{{relation_suffix}}{{property_id}}` clone `{{var('source_project')}}.analytics_{{property_id}}.events_{{relation_suffix}}`;
+drop table if exists `{{target.project}}.{{var('combined_dataset')}}.events_intraday_{{relation_suffix}}{{property_id}}`;
 {%- endif -%}
 {% endfor %}
+{%- endset -%}
+
+{% do run_query(combine_specified_property_data_query) %}
+
+{% if execute %}
+{{ log("Cloned from `" ~ var('source_project') ~ ".analytics_" ~ property_id ~ ".events_*` to `" ~ target.project ~ "." ~ var('combined_dataset') ~ ".events_YYYYMMDD" ~ property_id ~ "`.", True) }}
+{% endif %}
 {% endfor %}
-{% endmacro %}
+{% endmacro %}
```
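The incremental cutoff kept by this change is worth a quick illustration. A worked example of the expression from the macro, assuming `static_incremental_days: 3` and an illustrative run date of 2024-03-24:

```sql
{# Worked example of the shard cutoff used in the macro above, assuming
   static_incremental_days: 3 and a run date of 2024-03-24 (illustrative):
     date.today() - timedelta(days=3)  -> 2024-03-21
     |string                           -> "2024-03-21"
     |replace("-", "")                 -> "20240321"
     |int                              -> 20240321
   Shard suffixes (e.g. 20240320) are compared numerically against this
   cutoff, so only the last static_incremental_days days are recloned. #}
{%- set earliest_shard_to_retrieve = (modules.datetime.date.today() - modules.datetime.timedelta(days=var('static_incremental_days')))|string|replace("-", "")|int -%}
```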
