
Commit 5ecaef0

more destination postgres warnings (#38219)

1 parent: 9c72d0e

File tree

1 file changed: +4 -3 lines changed


docs/integrations/destinations/postgres.md

@@ -4,14 +4,15 @@ This page guides you through the process of setting up the Postgres destination
 
 :::caution
 
-Postgres, while an excellent relational database, is not a data warehouse.
+Postgres, while an excellent relational database, is not a data warehouse. Please only consider using postgres as a destination for small data volumes (e.g. less than 10GB) or for testing purposes. For larger data volumes, we recommend using a data warehouse like BigQuery, Snowflake, or Redshift.
 
 1. Postgres is likely to perform poorly with large data volumes. Even postgres-compatible
    destinations (e.g. AWS Aurora) are not immune to slowdowns when dealing with large writes or
-   updates over ~500GB. Especially when using normalization with `destination-postgres`, be sure to
+   updates over ~100GB. Especially when using [typing and deduplication](/using-airbyte/core-concepts/typing-deduping) with `destination-postgres`, be sure to
    monitor your database's memory and CPU usage during your syncs. It is possible for your
    destination to 'lock up', and incur high usage costs with large sync volumes.
-2. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
+2. When attempting to scale a postgres database to handle larger data volumes, scaling IOPS (disk throughput) is as important as increasing memory and compute capacity.
+3. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
    are likely to cause collisions when used as a destination receiving data from highly-nested and
    flattened sources, e.g. `{63 byte name}_a` and `{63 byte name}_b` will both be truncated to
    `{63 byte name}` which causes postgres to throw an error that a duplicate column name was
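The column-name collision the last hunk warns about can be sketched in a few lines of Python. `truncate_identifier` here is a hypothetical helper that mimics Postgres's behavior (identifiers are silently truncated to `NAMEDATALEN - 1` = 63 bytes); it is not a real Postgres or Airbyte API.

```python
NAMEDATALEN = 64  # Postgres compile-time default for identifier storage

def truncate_identifier(name: str) -> str:
    """Mimic Postgres truncating an identifier to NAMEDATALEN - 1 = 63 bytes."""
    return name.encode("utf-8")[: NAMEDATALEN - 1].decode("utf-8", errors="ignore")

base = "c" * 63          # a 63-byte column name
col_a = base + "_a"      # 65 bytes before truncation
col_b = base + "_b"

# Both names collapse to the same 63-byte identifier, so attempting to
# create both columns in one table makes Postgres report a duplicate column.
print(truncate_identifier(col_a) == truncate_identifier(col_b))  # True
```

This is why flattened, deeply nested sources with long shared prefixes are especially prone to hitting this limit.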
