This release adds support for PostgreSQL 17, significantly improves the performance of continuous aggregate refreshes, and contains performance improvements for analytical queries and delete operations over compressed hypertables. We recommend that you upgrade at the next available opportunity.

**Highlighted features in TimescaleDB v2.17.0**

* Full PostgreSQL 17 support for all existing features. TimescaleDB v2.17 is available for PostgreSQL 14, 15, 16, and 17.
* Significant performance improvements for continuous aggregate policies: continuous aggregate refresh now uses `merge` instead of deleting old materialized data and re-inserting. This can dramatically decrease the amount of data that must be written on the continuous aggregate when only a small number of changes is present, reduce the `I/O` cost of refreshing a continuous aggregate, and generate less Write-Ahead Log (`WAL`). Overall, continuous aggregate policies are more lightweight, use fewer system resources, and complete faster.
* Increased performance for real-time analytical queries over compressed hypertables: we are excited to introduce additional Single Instruction, Multiple Data (`SIMD`) vectorization optimizations to our engine by supporting vectorized execution for queries that group by the `segment_by` column(s) and aggregate using the basic aggregate functions (`sum`, `count`, `avg`, `min`, `max`). Stay tuned for more to come in follow-up releases! Support for grouping on additional columns, filtered aggregation, vectorized expressions, and `time_bucket` is coming soon.
* Improved performance of deletes on compressed hypertables when a large amount of data is affected. This improvement speeds up operations that delete whole segments by skipping the decompression step. It is enabled for all deletes that filter by the `segment_by` column(s). A usage sketch for these compression-related features follows these notes.

**PostgreSQL 14 deprecation announcement**

We will continue supporting PostgreSQL 14 until April 2025. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 14 support will no longer be included.

**Features**

* timescale#6882: Allow delete of full segments on compressed chunks without decompression.
* timescale#7033: Use the `merge` statement on continuous aggregate refresh.
* timescale#7126: Add functions to show the compression information.
* timescale#7147: Vectorize partial aggregation for `sum(int4)` with grouping on `segment by` columns.
* timescale#7204: Track additional extensions in telemetry.
* timescale#7207: Refactor the `decompress_batches_scan` functions for easier maintenance.
* timescale#7209: Add a function to drop the `osm` chunk.
* timescale#7275: Add support for the `returning` clause for `merge`.
* timescale#7200: Vectorize common aggregate functions like `min`, `max`, `sum`, `avg`, `stddev`, `variance` for compressed columns of arithmetic types, when there is grouping on `segment by` columns or no grouping.

**Bug fixes**

* timescale#7187: Fix the string literal length for the `compressed_data_info` function.
* timescale#7191: Fix creating default indexes on chunks when migrating the data.
* timescale#7195: Fix the `segment by` and `order by` checks when dropping a column from a compressed hypertable.
* timescale#7201: Use the generic extension description when building `apt` and `rpm` loader packages.
* timescale#7227: Add an index to the `compression_chunk_size` catalog table.
* timescale#7229: Fix the foreign key constraints where the index and the constraint column order are different.
* timescale#7230: Do not propagate the foreign key constraints to the `osm` chunk.
* timescale#7234: Release the cache after accessing the cache entry.
* timescale#7258: Force English in the `pg_config` command executed by `cmake` to avoid unexpected build errors.
* timescale#7270: Fix the memory leak in compressed DML batch filtering.
* timescale#7286: Fix the index column check while searching for the index.
* timescale#7290: Add a check for a null offset for continuous aggregates built on top of continuous aggregates.
* timescale#7301: Make foreign key behavior for hypertables consistent.
* timescale#7318: Fix chunk skipping range filtering.
* timescale#7320: Set the license-specific extension comment in the install script.

**Thanks**

* @MiguelTubio for reporting and fixing the Windows build error.
* @posuch for reporting the misleading extension description in the generic loader packages.
* @snyrkill for discovering and reporting the issue with continuous aggregates built on top of continuous aggregates.

---------

Signed-off-by: Pallavi Sontakke <pallavi@timescale.com>
Signed-off-by: Yannis Roussos <iroussos@gmail.com>
Signed-off-by: Sven Klemm <31455525+svenklemm@users.noreply.github.com>
Co-authored-by: Yannis Roussos <iroussos@gmail.com>
Co-authored-by: atovpeko <114177030+atovpeko@users.noreply.github.com>
Co-authored-by: Sven Klemm <31455525+svenklemm@users.noreply.github.com>
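The sketch below illustrates the compression-related highlights above. It is illustrative only: the `metrics` hypertable, its columns, and the `device_id` segment-by choice are assumptions made for the example and are not part of this release; `create_hypertable`, `timescaledb.compress`, and `timescaledb.compress_segmentby` are existing TimescaleDB interfaces.

```sql
-- Hypothetical hypertable compressed with device_id as the segment_by column.
CREATE TABLE metrics (ts timestamptz NOT NULL, device_id int, value double precision);
SELECT create_hypertable('metrics', 'ts');
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);

-- Aggregation grouped by the segment_by column using basic aggregates:
-- the shape of query that the new vectorized (SIMD) execution path targets.
SELECT device_id, sum(value), min(value), max(value)
FROM metrics
GROUP BY device_id;

-- Delete filtered by the segment_by column: whole compressed segments can be
-- removed without decompressing them first.
DELETE FROM metrics WHERE device_id = 42;
```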
@@ -1 +1 @@
-Implements: #7295 Support ALTER TABLE SET ACCESS METHOD on hypertable
+Implements: #7295: Support ALTER TABLE SET ACCESS METHOD on hypertable.
@@ -0,0 +1,11 @@
CREATE FUNCTION _timescaledb_functions.compressed_data_info(_timescaledb_internal.compressed_data)
RETURNS TABLE (algorithm name, has_nulls bool)
AS '@MODULE_PATHNAME@', 'ts_update_placeholder'
LANGUAGE C STRICT IMMUTABLE SET search_path = pg_catalog, pg_temp;

CREATE INDEX compression_chunk_size_idx ON _timescaledb_catalog.compression_chunk_size (compressed_chunk_id);

CREATE FUNCTION _timescaledb_functions.drop_osm_chunk(hypertable REGCLASS)
RETURNS BOOL
AS '@MODULE_PATHNAME@', 'ts_update_placeholder'
LANGUAGE C VOLATILE;
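For orientation, a hedged usage sketch of the two functions defined above. Both live in the internal `_timescaledb_functions` schema rather than the public API; the chunk name `compress_hyper_2_4_chunk`, the column `value`, and the hypertable `my_hypertable` are placeholders invented for this example.

```sql
-- Inspect the compression algorithm and null flag of one compressed column
-- (placeholder chunk and column names).
SELECT info.*
FROM _timescaledb_internal.compress_hyper_2_4_chunk AS c,
     LATERAL _timescaledb_functions.compressed_data_info(c.value) AS info
LIMIT 1;

-- Drop the OSM chunk of a hypertable; returns true on success.
SELECT _timescaledb_functions.drop_osm_chunk('my_hypertable'::regclass);
```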
@@ -0,0 +1,119 @@
-- check whether we can safely downgrade the existing compression setup
CREATE OR REPLACE FUNCTION _timescaledb_functions.add_sequence_number_metadata_column(
    comp_ch_schema_name text,
    comp_ch_table_name text
)
RETURNS BOOL LANGUAGE PLPGSQL AS
$BODY$
DECLARE
    chunk_schema_name text;
    chunk_table_name text;
    index_name text;
    segmentby_columns text;
BEGIN
    SELECT ch.schema_name, ch.table_name INTO STRICT chunk_schema_name, chunk_table_name
    FROM _timescaledb_catalog.chunk ch
    INNER JOIN _timescaledb_catalog.chunk comp_ch
        ON ch.compressed_chunk_id = comp_ch.id
    WHERE comp_ch.schema_name = comp_ch_schema_name
        AND comp_ch.table_name = comp_ch_table_name;

    IF NOT FOUND THEN
        RAISE USING
            ERRCODE = 'feature_not_supported',
            MESSAGE = 'Cannot migrate compressed chunk to version 2.16.1, chunk not found';
    END IF;

    -- Add the sequence number column to the compressed chunk
    EXECUTE format('ALTER TABLE %s.%s ADD COLUMN _ts_meta_sequence_num INT DEFAULT NULL', comp_ch_schema_name, comp_ch_table_name);

    -- Remove all indexes from the compressed chunk
    FOR index_name IN
        SELECT format('%s.%s', i.schemaname, i.indexname)
        FROM pg_indexes i
        WHERE i.schemaname = comp_ch_schema_name
            AND i.tablename = comp_ch_table_name
    LOOP
        EXECUTE format('DROP INDEX %s;', index_name);
    END LOOP;

    -- Fetch the segmentby columns from the compression settings
    SELECT string_agg(cs.segmentby_column, ',') INTO segmentby_columns
    FROM (
        SELECT unnest(segmentby)
        FROM _timescaledb_catalog.compression_settings
        WHERE relid = format('%s.%s', comp_ch_schema_name, comp_ch_table_name)::regclass::oid
            AND segmentby IS NOT NULL
    ) AS cs(segmentby_column);

    -- Create the compressed chunk index based on the sequence number metadata column.
    -- If there are no segmentby columns, we can skip creating the index.
    IF FOUND AND segmentby_columns IS NOT NULL THEN
        EXECUTE format('CREATE INDEX ON %s.%s (%s, _ts_meta_sequence_num);', comp_ch_schema_name, comp_ch_table_name, segmentby_columns);
    END IF;

    -- Mark the compressed chunk as unordered.
    -- Setting the chunk status bit (2) makes it unordered
    -- and disables some optimizations. In order to re-enable
    -- them, you need to recompress these chunks.
    UPDATE _timescaledb_catalog.chunk
    SET status = status | 2 -- set unordered bit
    WHERE schema_name = chunk_schema_name
        AND table_name = chunk_table_name;

    RETURN true;
END
$BODY$ SET search_path TO pg_catalog, pg_temp;

DO $$
DECLARE
    chunk_count int;
    chunk_record record;
BEGIN
    -- If we find chunks that don't have the sequence number metadata column in the
    -- compressed chunk, we need to stop the downgrade and have the user run
    -- a migration script to re-add the missing columns.
    SELECT count(*) INTO STRICT chunk_count
    FROM _timescaledb_catalog.chunk ch
    INNER JOIN _timescaledb_catalog.chunk uncomp_ch
        ON uncomp_ch.compressed_chunk_id = ch.id
    WHERE NOT EXISTS (
            SELECT
            FROM pg_attribute att
            WHERE attrelid = format('%I.%I', ch.schema_name, ch.table_name)::regclass
                AND attname = '_ts_meta_sequence_num')
        AND NOT uncomp_ch.dropped;

    -- Only do the migration if we find 10 or fewer chunks that need to be migrated
    IF chunk_count > 10 THEN
        RAISE USING
            ERRCODE = 'feature_not_supported',
            MESSAGE = 'Cannot downgrade compressed hypertables with chunks that do not contain sequence numbers. Run timescaledb--2.17-2.16.1.sql migration script before downgrading.',
            DETAIL = 'Number of chunks that need to be migrated: ' || chunk_count::text;
    ELSIF chunk_count > 0 THEN
        FOR chunk_record IN
            SELECT comp_ch.*
            FROM _timescaledb_catalog.chunk ch
            INNER JOIN _timescaledb_catalog.chunk comp_ch
                ON ch.compressed_chunk_id = comp_ch.id
            WHERE NOT EXISTS (
                    SELECT
                    FROM pg_attribute att
                    WHERE attrelid = format('%I.%I', comp_ch.schema_name, comp_ch.table_name)::regclass
                        AND attname = '_ts_meta_sequence_num')
                AND NOT ch.dropped
        LOOP
            PERFORM _timescaledb_functions.add_sequence_number_metadata_column(chunk_record.schema_name, chunk_record.table_name);
            RAISE LOG 'Migrated compressed chunk %.% to version 2.16.1', chunk_record.schema_name, chunk_record.table_name;
        END LOOP;

        RAISE LOG 'Migration successful!';
    END IF;
END
$$;

DROP FUNCTION _timescaledb_functions.add_sequence_number_metadata_column(text, text);

DROP FUNCTION _timescaledb_functions.compressed_data_info(_timescaledb_internal.compressed_data);
DROP INDEX _timescaledb_catalog.compression_chunk_size_idx;
DROP FUNCTION IF EXISTS _timescaledb_functions.drop_osm_chunk(REGCLASS);
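As the comments in the downgrade script explain, migrated chunks are marked unordered by setting status bit 2 in the chunk catalog. A small sketch, for illustration only, of how those chunks could be listed afterwards so they can be recompressed:

```sql
-- Chunks whose unordered bit (2) is set, e.g. after the downgrade above.
-- Recompressing these chunks re-enables the disabled optimizations.
SELECT schema_name, table_name
FROM _timescaledb_catalog.chunk
WHERE status & 2 = 2;
```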
@@ -1,11 +0,0 @@
CREATE FUNCTION _timescaledb_functions.compressed_data_info(_timescaledb_internal.compressed_data)
RETURNS TABLE (algorithm name, has_nulls bool)
AS '@MODULE_PATHNAME@', 'ts_update_placeholder'
LANGUAGE C STRICT IMMUTABLE SET search_path = pg_catalog, pg_temp;

CREATE INDEX compression_chunk_size_idx ON _timescaledb_catalog.compression_chunk_size (compressed_chunk_id);

CREATE FUNCTION _timescaledb_functions.drop_osm_chunk(hypertable REGCLASS)
RETURNS BOOL
AS '@MODULE_PATHNAME@', 'ts_update_placeholder'
LANGUAGE C VOLATILE;