Skip default segmentby if orderby is explicitly set #8033

Merged: 1 commit, Apr 30, 2025
1 change: 1 addition & 0 deletions .unreleased/pr_8033
@@ -0,0 +1 @@
Fixes: #8033 Skip default segmentby if orderby is explicitly set
3 changes: 2 additions & 1 deletion tsl/src/compression/create.c
@@ -1203,7 +1203,8 @@ compression_settings_update(Hypertable *ht, CompressionSettings *settings,
{
settings->fd.segmentby = ts_compress_hypertable_parse_segment_by(with_clause_options, ht);
}
else if (!settings->fd.segmentby)
else if (!settings->fd.segmentby && !settings->fd.orderby &&
with_clause_options[AlterTableFlagCompressOrderBy].is_default)
{
settings->fd.segmentby = compression_setting_segmentby_get_default(ht);
}
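
With this change, the default segmentby lookup runs only when neither segmentby nor orderby was supplied. A sketch of the resulting behavior, mirroring the test-output updates in this PR (table names are illustrative):

```sql
-- Explicit orderby: the default segmentby pick is now skipped, so the
-- "uncertainty picking the default segment by" WARNING/NOTICE pair no
-- longer appears (see the expected-output diffs below).
ALTER TABLE sensor_data SET (
    timescaledb.compress,
    timescaledb.compress_orderby = 'time DESC'
);

-- Neither orderby nor segmentby given: the default segmentby is still chosen.
ALTER TABLE other_table SET (timescaledb.compress);
```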
2 changes: 0 additions & 2 deletions tsl/test/expected/bgw_custom.out
@@ -882,8 +882,6 @@ INSERT INTO sensor_data
time;
-- enable compression
ALTER TABLE sensor_data SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC');
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "sensor_data" is set to ""
-- create new chunks
INSERT INTO sensor_data
SELECT
2 changes: 0 additions & 2 deletions tsl/test/expected/bgw_db_scheduler_fixed.out
@@ -1660,8 +1660,6 @@ select show_chunks('test_table_scheduler');
(8 rows)

alter table test_table_scheduler set (timescaledb.compress, timescaledb.compress_orderby = 'time DESC');
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "test_table_scheduler" is set to ""
select add_retention_policy('test_table_scheduler', interval '2 year', initial_start => :'init'::timestamptz, timezone => 'Europe/Berlin');
add_retention_policy
----------------------
10 changes: 0 additions & 10 deletions tsl/test/expected/cagg_ddl-15.out
@@ -1520,8 +1520,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1540,8 +1538,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1609,8 +1605,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1629,8 +1623,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -2106,8 +2098,6 @@ CREATE MATERIALIZED VIEW cagg1 WITH (timescaledb.continuous) AS SELECT time_buck
NOTICE: refreshing continuous aggregate "cagg1"
ALTER MATERIALIZED VIEW cagg1 SET (timescaledb.compress);
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_56" is set to ""
SELECT count(compress_chunk(ch)) FROM show_chunks('cagg1') ch;
count
-------
10 changes: 0 additions & 10 deletions tsl/test/expected/cagg_ddl-16.out
@@ -1520,8 +1520,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1540,8 +1538,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1609,8 +1605,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1629,8 +1623,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -2106,8 +2098,6 @@ CREATE MATERIALIZED VIEW cagg1 WITH (timescaledb.continuous) AS SELECT time_buck
NOTICE: refreshing continuous aggregate "cagg1"
ALTER MATERIALIZED VIEW cagg1 SET (timescaledb.compress);
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_56" is set to ""
SELECT count(compress_chunk(ch)) FROM show_chunks('cagg1') ch;
count
-------
10 changes: 0 additions & 10 deletions tsl/test/expected/cagg_ddl-17.out
@@ -1520,8 +1520,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1540,8 +1538,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_40" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1609,8 +1605,6 @@ INSERT INTO test_setting VALUES( '2020-11-01', 20);
--try out 2 settings here --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'false', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -1629,8 +1623,6 @@ SELECT count(*) from test_setting_cagg ORDER BY 1;
--now set it back to false --
ALTER MATERIALIZED VIEW test_setting_cagg SET (timescaledb.materialized_only = 'true', timescaledb.compress='true');
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_42" is set to ""
SELECT view_name, compression_enabled, materialized_only
FROM timescaledb_information.continuous_aggregates
where view_name = 'test_setting_cagg';
@@ -2106,8 +2098,6 @@ CREATE MATERIALIZED VIEW cagg1 WITH (timescaledb.continuous) AS SELECT time_buck
NOTICE: refreshing continuous aggregate "cagg1"
ALTER MATERIALIZED VIEW cagg1 SET (timescaledb.compress);
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_56" is set to ""
SELECT count(compress_chunk(ch)) FROM show_chunks('cagg1') ch;
count
-------
8 changes: 0 additions & 8 deletions tsl/test/expected/cagg_errors.out
@@ -526,13 +526,9 @@ NOTICE: defaulting compress_orderby to bucket
ERROR: cannot use column "bucket" for both ordering and segmenting
ALTER MATERIALIZED VIEW i2980_cagg2 SET ( timescaledb.compress,
timescaledb.compress_orderby = 'bucket');
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_14" is set to ""
--enable compression and test re-enabling compression
ALTER MATERIALIZED VIEW i2980_cagg2 SET ( timescaledb.compress);
NOTICE: defaulting compress_orderby to bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_14" is set to ""
insert into i2980 select now();
call refresh_continuous_aggregate('i2980_cagg2', NULL, NULL);
SELECT compress_chunk(ch) FROM show_chunks('i2980_cagg2') ch;
@@ -545,8 +541,6 @@ ALTER MATERIALIZED VIEW i2980_cagg2 SET ( timescaledb.compress = 'false');
ERROR: cannot disable columnstore on hypertable with columnstore chunks
ALTER MATERIALIZED VIEW i2980_cagg2 SET ( timescaledb.compress = 'true');
NOTICE: defaulting compress_orderby to bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_14" is set to ""
ALTER MATERIALIZED VIEW i2980_cagg2 SET ( timescaledb.compress, timescaledb.compress_segmentby = 'bucket');
NOTICE: defaulting compress_orderby to bucket
ERROR: cannot use column "bucket" for both ordering and segmenting
@@ -561,8 +555,6 @@ SELECT add_compression_policy('i2980_cagg', '8 day'::interval);
ERROR: columnstore not enabled on continuous aggregate "i2980_cagg"
ALTER MATERIALIZED VIEW i2980_cagg SET ( timescaledb.compress );
NOTICE: defaulting compress_orderby to time_bucket
WARNING: there was some uncertainty picking the default segment by for the hypertable: You do not have any indexes on columns that can be used for segment_by and thus we are not using segment_by for converting to columnstore. Please make sure you are not missing any indexes
NOTICE: default segment by for hypertable "_materialized_hypertable_13" is set to ""
SELECT add_continuous_aggregate_policy('i2980_cagg2', '10 day'::interval, '6 day'::interval);
ERROR: function add_continuous_aggregate_policy(unknown, interval, interval) does not exist at character 8
SELECT add_compression_policy('i2980_cagg2', '3'::integer);