
more destination postgres warnings #38219


Merged 1 commit on May 15, 2024
7 changes: 4 additions & 3 deletions docs/integrations/destinations/postgres.md
@@ -4,14 +4,15 @@ This page guides you through the process of setting up the Postgres destination

:::caution

-Postgres, while an excellent relational database, is not a data warehouse.
+Postgres, while an excellent relational database, is not a data warehouse. Please only consider using Postgres as a destination for small data volumes (e.g. less than 10GB) or for testing purposes. For larger data volumes, we recommend using a data warehouse like BigQuery, Snowflake, or Redshift.
 
 1. Postgres is likely to perform poorly with large data volumes. Even Postgres-compatible
    destinations (e.g. AWS Aurora) are not immune to slowdowns when dealing with large writes or
-   updates over ~500GB. Especially when using normalization with `destination-postgres`, be sure to
+   updates over ~100GB. Especially when using [typing and deduplication](/using-airbyte/core-concepts/typing-deduping) with `destination-postgres`, be sure to
    monitor your database's memory and CPU usage during your syncs. It is possible for your
    destination to 'lock up' and incur high usage costs with large sync volumes.
-2. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
+2. When attempting to scale a Postgres database to handle larger data volumes, scaling IOPS (disk throughput) is as important as increasing memory and compute capacity.
+3. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
    are likely to cause collisions when used as a destination receiving data from highly-nested and
    flattened sources, e.g. `{63 byte name}_a` and `{63 byte name}_b` will both be truncated to
    `{63 byte name}`, which causes Postgres to throw an error that a duplicate column name was
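The column-name collision warned about above can be sketched outside the database. This is a minimal illustration, assuming Postgres's default 63-byte identifier limit (`NAMEDATALEN` of 64, minus the terminator); `truncate_identifier` is a hypothetical helper that only mimics the truncation step, whereas Postgres itself would reject the second column with a duplicate-name error:

```python
# Hypothetical sketch of Postgres's 63-byte identifier truncation.
# Postgres truncates identifiers longer than NAMEDATALEN - 1 (63) bytes,
# so two distinct flattened column names can collapse to the same identifier.
MAX_IDENTIFIER_BYTES = 63  # default NAMEDATALEN is 64, minus the terminator


def truncate_identifier(name: str) -> str:
    """Mimic Postgres truncating an identifier to 63 bytes (ASCII assumed)."""
    return name.encode("utf-8")[:MAX_IDENTIFIER_BYTES].decode("utf-8", "ignore")


base = "x" * 63  # a flattened column name already at the limit
col_a = truncate_identifier(base + "_a")
col_b = truncate_identifier(base + "_b")
print(col_a == col_b)  # → True: both collapse to the same 63-byte name
```

In a real sync this surfaces as a duplicate-column error on the destination, which is why highly-nested, flattened sources are called out in the warning.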