
Commit a8ed18d

ch.md
1 parent 177d406 commit a8ed18d

1 file changed (+9 −1 lines)

clickhouse.md

@@ -2,4 +2,12 @@

## Resources

- [Building Multi-Petabyte Data Warehouses with ClickHouse](https://www.percona.com/live/e17/sessions/building-multi-petabyte-data-warehouses-with-clickhouse)

## UI

[https://tabix.io](https://tabix.io)
## Best Practices

- For small amounts of data (up to ~200 GB compressed), it is best to have as much memory as the volume of data. For large amounts of data, and when processing interactive (online) queries, use a reasonable amount of RAM (128 GB or more) so that the hot subset of the data fits in the page cache. Even at data volumes of ~50 TB per server, 128 GB of RAM significantly improves query performance compared to 64 GB.
- Insert data in bulk, preferably around 10,000 rows at a time (see the sketch after this list).
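A minimal sketch of the bulk-insert pattern, assuming the `clickhouse-driver` Python package; the `events` table, its schema, and the connection settings are hypothetical:

```python
from clickhouse_driver import Client

BATCH_SIZE = 10_000  # bulk inserts of ~10,000 rows, per the note above

# Hypothetical connection and table; adjust host, credentials, and schema.
client = Client("localhost")
client.execute(
    "CREATE TABLE IF NOT EXISTS events "
    "(ts DateTime, user_id UInt64, value Float64) "
    "ENGINE = MergeTree ORDER BY ts"
)

def insert_in_batches(rows):
    """Accumulate rows and send them to ClickHouse in batches of BATCH_SIZE."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            client.execute("INSERT INTO events (ts, user_id, value) VALUES", batch)
            batch = []
    if batch:  # flush any remainder smaller than one full batch
        client.execute("INSERT INTO events (ts, user_id, value) VALUES", batch)
```

Each `execute` call ships one batch as a single insert, which creates far fewer parts for the MergeTree engine to merge than inserting row by row.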
