replicated. Please check each destination to learn if Typing and Deduping is supported.

- One-to-one table mapping: Data in one stream will always be mapped to one table in your data
  warehouse. No more sub-tables.
- Improved per-row error/change handling with `_airbyte_meta`: Airbyte will now populate typing
  changes in the `_airbyte_meta` column instead of failing your sync. You can query these results to
  audit misformatted or unexpected data.
- Internal Airbyte tables in the `airbyte_internal` schema: Airbyte will now generate all raw tables
  in the `airbyte_internal` schema. We no longer clutter your desired schema with raw data tables.
- Incremental delivery for large syncs: Data will be incrementally delivered to your final tables
  when possible. No more waiting hours to see the first rows in your destination table.

## `_airbyte_meta` Changes

"Per-row change handling" is a new paradigm for Airbyte which provides greater flexibility for our
users. Airbyte now separates `data-moving problems` from `data-content problems`. Prior to
Destinations V2, both types of errors were handled the same way: by failing the sync. Now, a failing
sync means that Airbyte could not _move_ all of your data. You can query the `_airbyte_meta` column
to see which rows failed for _content_ reasons, and why. This is a more flexible approach, as you
can now decide how to handle rows with errors/changes on a case-by-case basis.

::: tip

When using data downstream from Airbyte, we generally recommend you only include rows which do not
have a change, e.g.:

```sql
-- postgres syntax
SELECT COUNT(*) FROM _table_ WHERE json_array_length(_airbyte_meta -> 'changes') = 0
```

:::
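Conversely, when auditing, you can flip the predicate to pull only the rows that did record a change
(a sketch under the same assumptions as the tip above: Postgres syntax and a placeholder table name
`_table_`):

```sql
-- postgres syntax: fetch only the rows where at least one change was recorded
SELECT * FROM _table_ WHERE json_array_length(_airbyte_meta -> 'changes') > 0
```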

The types of changes which will be stored in `_airbyte_meta.changes` include:

- **Typing changes**: the source declared that the type of the column `id` should be an integer, but
  a string value was returned.
- **Size changes**: the source returned content which cannot be stored within this row or column
  (e.g.
  [a Redshift Super column has a 16MB limit](https://docs.aws.amazon.com/redshift/latest/dg/limitations-super.html)).
  Destinations V2 will allow us to trim records which cannot fit into destinations, but retain the
  primary key(s) and cursors and include "too big" change messages.
57
- Depending on your use-case, it may still be valuable to consider rows with errors, especially for
57
+ Also, sources can make use of the same tooling to denote that there was a problem emitting the Airbyte record to begin with,
58
+ possibly also creating an entry in ` _airbyte_meta.changes ` .
59
+
60
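To audit these entries in aggregate, the individual change objects can be unnested. A sketch in
Postgres syntax, assuming `_airbyte_meta` is stored as `jsonb`, that each change entry carries
`field`, `change`, and `reason` keys, and using the placeholder table name `_table_` as elsewhere on
this page:

```sql
-- postgres syntax: tally recorded changes by reason across all rows,
-- assuming _airbyte_meta is a jsonb column
SELECT change ->> 'reason' AS reason, COUNT(*) AS affected_rows
FROM _table_,
     jsonb_array_elements(_airbyte_meta -> 'changes') AS change
GROUP BY reason
ORDER BY affected_rows DESC;
```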
Depending on your use-case, it may still be valuable to consider rows with changes, especially for
aggregations. For example, you may have a table `user_reviews`, and you would like to know the count
of new reviews received today. You can choose to include reviews regardless of whether your data
warehouse had difficulty storing the full contents of the `message` column. For this use case,
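Such a count might look like the sketch below (assuming a hypothetical `created_at` timestamp
column, which is not specified in the text); note that it deliberately does not filter on
`_airbyte_meta`:

```sql
-- postgres syntax: count today's reviews without excluding rows whose
-- _airbyte_meta recorded a change (e.g. a truncated "message" column)
SELECT COUNT(*) FROM user_reviews WHERE created_at >= CURRENT_DATE;
```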

The data from one stream will now be mapped to one table in your schema as below:

| _(note, not in actual table)_                | \_airbyte_raw_id | \_airbyte_extracted_at | \_airbyte_meta                                                                                  | id  | first_name | age  | address                                   |
| -------------------------------------------- | ---------------- | ---------------------- | ----------------------------------------------------------------------------------------------- | --- | ---------- | ---- | ----------------------------------------- |
| Successful typing and de-duping ⟶            | xxx-xxx-xxx      | 2022-01-01 12:00:00    | `{}`                                                                                            | 1   | sarah      | 39   | `{ city: “San Francisco”, zip: “94131” }` |
| Failed typing that didn’t break other rows ⟶ | yyy-yyy-yyy      | 2022-01-01 12:00:00    | `{ changes: [{ "field": "age", "change": "NULLED", "reason": "DESTINATION_TYPECAST_ERROR" }] }`  | 2   | evan       | NULL | `{ city: “Menlo Park”, zip: “94002” }`    |
| Not-yet-typed ⟶                              |                  |                        |                                                                                                 |     |            |      |                                           |

In legacy normalization, columns of