[Bug]: Invalid hypertable_compression_stats output until chunks are recompressed #7713
Comments
Still a problem in 2.18.2, FYI.
@antekresic to be clear, the bug happens regardless of access method. I shouldn't have included that in my replication steps, I suppose.
Does anyone have a workaround for this? It's extremely frustrating to have to recompress chunks just to get valid compression stats. Is there another view I can use?
I tried this on 2.18.2, but I couldn't reproduce the issue:
@jflambert I am wondering what is different with your setup?
@erimatnor I was completely wrong when I said this!
In fact the bug only happens with However, I just tested with
@jflambert I managed to reproduce the issue. It turns out this is a problem with how we handle compression stats: they are not updated when a chunk is recompressed using segmentwise recompression. Nor are the stats updated if data is transparently decompressed due to, e.g., updates. As a workaround, you can disable segmentwise recompression with a GUC.

Here is what happens when you use the hypercore table access method on the hypertable: new chunks are created using the table access method and are technically "compressed", but they contain no data. Then you insert data and run recompression (using the segmentwise approach), and the stats are not updated.

Something similar happens if you first compress a chunk without using the table access method, then backfill it and recompress: the stats won't be updated, so they no longer reflect the correct size after the backfill. In the extreme case, you backfill most of the data and the stats are badly wrong.

We could probably implement a partial fix by updating the stats when we do segmentwise recompression and there is no compressed data, but that still wouldn't solve the backfill issue.
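A minimal sketch of the workaround described above. The GUC name `timescaledb.enable_segmentwise_recompression` is assumed from recent TimescaleDB versions, and `metrics` is a placeholder hypertable name:

```sql
-- Assumed GUC name: with segmentwise recompression disabled,
-- compress_chunk() falls back to a full recompression, which
-- refreshes the compression statistics.
SET timescaledb.enable_segmentwise_recompression = off;

-- Recompress all chunks of the (hypothetical) 'metrics' hypertable.
SELECT compress_chunk(c) FROM show_chunks('metrics') AS c;
```

Note that this trades the speed of segmentwise recompression for correct stats, so it is a stopgap rather than a fix.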
What type of bug is this?
Incorrect result
What subsystems and features are affected?
Compression with hypercore TAM.
What happened?
I'm unable to get compression stats unless I forcibly recompress chunks.
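For reference, a sketch of the kind of stats query involved; `metrics` is a placeholder hypertable name:

```sql
-- Before/after sizes from the compression stats view; these are the
-- values that stay stale until chunks are forcibly recompressed.
SELECT before_compression_total_bytes,
       after_compression_total_bytes
FROM hypertable_compression_stats('metrics');
```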
TimescaleDB version affected
2.18.1, 2.18.2, 2.19.0
PostgreSQL version used
16.6-16.8
What operating system did you use?
timescaledb-ha:pg16.6-ts2.18.1
timescaledb-ha:pg16.7-ts2.18.2
timescaledb-ha:pg16.8-ts2.19.0
What installation method did you use?
Docker
What platform did you run on?
On prem/Self-hosted
How can we reproduce the bug?
initial table size is 1.7GB
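A hypothetical setup sketch for a reproduction along these lines (table and column names are placeholders; the original schema was not included):

```sql
-- Placeholder hypertable with compression enabled.
CREATE TABLE metrics (ts timestamptz NOT NULL, device int, val float8);
SELECT create_hypertable('metrics', 'ts');
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device'
);
-- ... load ~1.7GB of data here ...
```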
Let's compress.
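The compression step might look like this, assuming the placeholder `metrics` hypertable:

```sql
-- Compress every chunk; hypertable_size() shows the resulting size.
SELECT compress_chunk(c) FROM show_chunks('metrics') AS c;
SELECT pg_size_pretty(hypertable_size('metrics'));
```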
Stats don't show up at all. The table size is shrinking, presumably due to the autovacuum daemon.
Let's decompress.
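The decompression step, again with the placeholder `metrics` name:

```sql
-- if_compressed => true skips chunks that are already decompressed
-- instead of raising an error.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('metrics') AS c;
```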
Table size neither grows nor shrinks at this point.
Let's compress again.
Finally, the view shows the expected values (though they differ slightly from the table size).