Release 2.16.0
This release contains significant performance improvements when working with compressed data, extended join
support in continuous aggregates, and the ability to define foreign keys from regular tables to hypertables.
We recommend that you upgrade at the next available opportunity.

In TimescaleDB v2.16.0 we:

* Introduce multiple performance-focused optimizations for data manipulation (DML) operations over compressed chunks.

  These changes improve upsert performance by more than 100x in some cases, and speed up some update/delete scenarios by more than 1000x.

* Add the ability to define chunk skipping indexes on non-partitioning columns of compressed hypertables.

  TimescaleDB v2.16.0 extends chunk exclusion to use these skipping (sparse) indexes when queries filter on the relevant columns,
  and to prune chunks that contain no data relevant to the query response.

* Offer new options for use cases that require foreign keys to be defined.

  You can now add foreign keys from regular tables to hypertables. We have also removed
  some particularly disruptive locks in the reverse direction that blocked access to referenced tables
  while compression was running.

* Extend continuous aggregates to support more types of analytical queries.

  More types of joins are supported, along with additional equality operators in join clauses and
  support for joins between multiple regular tables.

**Highlighted features in this release**

* Improved query performance through chunk exclusion on compressed hypertables.

  You can now define chunk skipping indexes on compressed chunks for any column with one of the following
  data types: `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, `timestamptz`.

  After you call `enable_chunk_skipping` on a column, TimescaleDB tracks the min and max values for
  that column in each chunk. TimescaleDB uses that information to exclude chunks that cannot contain
  any matching data for queries that filter on that column.
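
  For example, a minimal sketch (the `metrics` hypertable and `device_id` column are hypothetical names used
  only for illustration):

  ```sql
  -- Track min/max statistics for device_id so chunks can be excluded (hypothetical schema).
  SELECT enable_chunk_skipping('metrics', 'device_id');

  -- Queries filtering on device_id can now skip compressed chunks whose tracked
  -- min/max range cannot contain the requested value.
  SELECT * FROM metrics WHERE device_id = 42;
  ```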

* Improved upsert performance on compressed hypertables.

  By using index scans to verify constraints during inserts on compressed chunks, TimescaleDB speeds
  up some `ON CONFLICT` clauses by more than 100x.
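
  For illustration, an upsert of this shape benefits from the new index-scan based constraint checks
  (the table, columns, and the unique constraint on `(ts, device_id)` are hypothetical):

  ```sql
  -- Hypothetical compressed hypertable with a unique constraint on (ts, device_id).
  INSERT INTO metrics (ts, device_id, value)
  VALUES ('2024-07-31 12:00:00+00', 42, 1.5)
  ON CONFLICT (ts, device_id)
  DO UPDATE SET value = EXCLUDED.value;
  ```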

* Improved performance of updates, deletes, and inserts on compressed hypertables.

  By filtering data while accessing the compressed data and before decompressing, TimescaleDB has
  improved performance for updates and deletes on all types of compressed chunks, as well as inserts
  into compressed chunks with unique constraints.

  By signaling constraint violations without decompressing, or by decompressing only when matching
  records are found (in the case of updates, deletes, and upserts), TimescaleDB v2.16.0 speeds
  up those operations by more than 1000x in some update/delete scenarios, and by 10x for upserts.
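
  For example, statements of this shape (hypothetical schema, with `device_id` assumed to be a
  compression `segmentby` column) can now locate the affected batches while the data is still
  compressed and decompress only the batches that contain matching rows:

  ```sql
  -- Hypothetical compressed hypertable; filters are evaluated before decompression.
  UPDATE metrics SET value = 0 WHERE device_id = 42 AND ts >= '2024-07-01';
  DELETE FROM metrics WHERE device_id = 42 AND ts < '2024-01-01';
  ```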

* You can add foreign keys from regular tables to hypertables, with support for all types of cascading options.
  This is useful for hypertables that are partitioned on sequential IDs and need those IDs to be referenced from other tables.
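
  A minimal sketch, assuming a hypothetical `events` hypertable partitioned on a sequential `id`
  and a regular `annotations` table that references it:

  ```sql
  -- Hypothetical schema: hypertable partitioned on a sequential id column.
  CREATE TABLE events (
      id      bigint GENERATED ALWAYS AS IDENTITY,
      ts      timestamptz NOT NULL,
      payload jsonb,
      PRIMARY KEY (id)
  );
  SELECT create_hypertable('events', 'id', chunk_time_interval => 1000000);

  -- A regular table can now reference the hypertable, including cascading options.
  CREATE TABLE annotations (
      event_id bigint NOT NULL REFERENCES events (id) ON DELETE CASCADE,
      note     text
  );
  ```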

* Lower locking requirements during compression for hypertables with foreign keys.

  Advanced foreign key handling removes the need for locking referenced tables when new chunks are compressed.
  DML is no longer blocked on referenced tables while compression runs on a hypertable.

* Improved support for queries on continuous aggregates.

  `INNER`, `LEFT`, and `LATERAL` joins are now supported. In addition, you can join with multiple regular tables
  and use more than one equality operator in join clauses.
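
  As an illustrative sketch (the `metrics` hypertable and `devices` lookup table are hypothetical):

  ```sql
  -- Continuous aggregate joining a hypertable with a regular table (hypothetical schema).
  CREATE MATERIALIZED VIEW daily_device_stats
  WITH (timescaledb.continuous) AS
  SELECT
      time_bucket('1 day', m.ts) AS bucket,
      d.name                     AS device_name,
      avg(m.value)               AS avg_value
  FROM metrics m
  JOIN devices d ON d.id = m.device_id
  GROUP BY time_bucket('1 day', m.ts), d.name
  WITH NO DATA;
  ```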

**PostgreSQL 13 support removal announcement**

Following the deprecation announcement for PostgreSQL 13 in TimescaleDB v2.13,
PostgreSQL 13 is no longer supported in TimescaleDB v2.16.

The currently supported PostgreSQL major versions are 14, 15, and 16.
svenklemm committed Jul 31, 2024
1 parent 01b5de5 commit e4eb666
Showing 26 changed files with 193 additions and 111 deletions.
1 change: 0 additions & 1 deletion .unreleased/fix_7055

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/fix_7064

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/fix_7074

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6880

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6895

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6897

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6918

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6920

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6987

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_6989

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7018

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7020

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7046

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7048

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7069

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7075

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7101

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7108

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7116

This file was deleted.

1 change: 0 additions & 1 deletion .unreleased/pr_7134

This file was deleted.

2 changes: 0 additions & 2 deletions .unreleased/pr_7161

This file was deleted.

103 changes: 103 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,109 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**

## 2.16.0 (2024-07-31)

This release contains significant performance improvements when working with compressed data, extended join
support in continuous aggregates, and the ability to define foreign keys from regular tables to hypertables.
We recommend that you upgrade at the next available opportunity.

In TimescaleDB v2.16.0 we:

* Introduce multiple performance-focused optimizations for data manipulation (DML) operations over compressed chunks.

These changes improve upsert performance by more than 100x in some cases, and speed up some update/delete scenarios by more than 1000x.

* Add the ability to define chunk skipping indexes on non-partitioning columns of compressed hypertables.

TimescaleDB v2.16.0 extends chunk exclusion to use these skipping (sparse) indexes when queries filter on the relevant columns,
and to prune chunks that contain no data relevant to the query response.

* Offer new options for use cases that require foreign keys to be defined.

You can now add foreign keys from regular tables to hypertables. We have also removed
some particularly disruptive locks in the reverse direction that blocked access to referenced tables
while compression was running.

* Extend continuous aggregates to support more types of analytical queries.

More types of joins are supported, along with additional equality operators in join clauses and
support for joins between multiple regular tables.

**Highlighted features in this release**

* Improved query performance through chunk exclusion on compressed hypertables.

You can now define chunk skipping indexes on compressed chunks for any column with one of the following
data types: `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, `timestamptz`.

After you call `enable_chunk_skipping` on a column, TimescaleDB tracks the min and max values for
that column in each chunk. TimescaleDB uses that information to exclude chunks that cannot contain
any matching data for queries that filter on that column.

* Improved upsert performance on compressed hypertables.

By using index scans to verify constraints during inserts on compressed chunks, TimescaleDB speeds
up some `ON CONFLICT` clauses by more than 100x.

* Improved performance of updates, deletes, and inserts on compressed hypertables.

By filtering data while accessing the compressed data and before decompressing, TimescaleDB has
improved performance for updates and deletes on all types of compressed chunks, as well as inserts
into compressed chunks with unique constraints.

By signaling constraint violations without decompressing, or by decompressing only when matching
records are found (in the case of updates, deletes, and upserts), TimescaleDB v2.16.0 speeds
up those operations by more than 1000x in some update/delete scenarios, and by 10x for upserts.

* You can add foreign keys from regular tables to hypertables, with support for all types of cascading options.
This is useful for hypertables that are partitioned on sequential IDs and need those IDs to be referenced from other tables.

* Lower locking requirements during compression for hypertables with foreign keys.

Advanced foreign key handling removes the need for locking referenced tables when new chunks are compressed.
DML is no longer blocked on referenced tables while compression runs on a hypertable.

* Improved support for queries on continuous aggregates.

`INNER`, `LEFT`, and `LATERAL` joins are now supported. In addition, you can join with multiple regular tables
and use more than one equality operator in join clauses.

**PostgreSQL 13 support removal announcement**

Following the deprecation announcement for PostgreSQL 13 in TimescaleDB v2.13,
PostgreSQL 13 is no longer supported in TimescaleDB v2.16.

The currently supported PostgreSQL major versions are 14, 15, and 16.

**Features**
* #6880: Add support for the array operators used for compressed DML batch filtering.
* #6895: Improve the compressed DML expression pushdown.
* #6897: Add support for replica identity on compressed hypertables.
* #6918: Remove support for PG13.
* #6920: Rework compression activity wal markers.
* #6989: Add support for foreign keys when converting plain tables to hypertables.
* #7020: Add support for chunk column statistics tracking.
* #7048: Add an index scan for INSERT DML decompression.
* #7075: Reduce decompression on compressed INSERT.
* #7101: Reduce decompression for compressed UPDATE/DELETE.
* #7108: Reduce decompression for INSERTs with UNIQUE constraints.
* #7116: Use DELETE instead of TRUNCATE after compression.
* #7134: Refactor foreign key handling for compressed hypertables.
* #7161: Fix the `mergejoin input data is out of order` error.

**Bugfixes**
* #6987: Fix `REASSIGN OWNED BY` for background jobs.
* #7018: Fix `search_path` quoting in the compression defaults function.
* #7046: Prevent locking for compressed tuples.
* #7055: Fix the `scankey` for `segment by` columns, where the type `constant` is different from `variable`.
* #7064: Fix the bug in the default `order by` calculation in compression.
* #7069: Fix the index column name usage.
* #7074: Fix the bug in the default `segment by` calculation in compression.

**Thanks**
* @jledentu for reporting a problem with mergejoin input order.


## 2.15.3 (2024-07-02)

This release contains bug fixes since the 2.15.2 release.
3 changes: 2 additions & 1 deletion sql/CMakeLists.txt
@@ -40,7 +40,8 @@ set(MOD_FILES
  updates/2.14.2--2.15.0.sql
  updates/2.15.0--2.15.1.sql
  updates/2.15.1--2.15.2.sql
- updates/2.15.2--2.15.3.sql)
+ updates/2.15.2--2.15.3.sql
+ updates/2.15.3--2.16.0.sql)

# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config
86 changes: 86 additions & 0 deletions sql/updates/2.15.3--2.16.0.sql
@@ -0,0 +1,86 @@
-- Enable tracking of statistics on a column of a hypertable.
--
-- hypertable - OID of the table to which the column belongs to
-- column_name - The column to track statistics for
-- if_not_exists - If set, and the entry already exists, generate a notice instead of an error
CREATE FUNCTION @extschema@.enable_chunk_skipping(
hypertable REGCLASS,
column_name NAME,
if_not_exists BOOLEAN = FALSE
) RETURNS TABLE(column_stats_id INT, enabled BOOL)
AS 'SELECT NULL,NULL' LANGUAGE SQL VOLATILE SET search_path = pg_catalog, pg_temp;

-- Disable tracking of statistics on a column of a hypertable.
--
-- hypertable - OID of the table to remove from
-- column_name - NAME of the column on which the stats are tracked
-- if_not_exists - If set, and the entry does not exist,
-- generate a notice instead of an error
CREATE FUNCTION @extschema@.disable_chunk_skipping(
hypertable REGCLASS,
column_name NAME,
if_not_exists BOOLEAN = FALSE
) RETURNS TABLE(hypertable_id INT, column_name NAME, disabled BOOL)
AS 'SELECT NULL,NULL,NULL' LANGUAGE SQL VOLATILE SET search_path = pg_catalog, pg_temp;

-- Track statistics for columns of chunks from a hypertable.
-- Currently, we track the min/max range for a given column across chunks.
-- More statistics (like bloom filters) will be added in the future.
--
-- A "special" entry for a column with invalid chunk_id, PG_INT64_MAX,
-- PG_INT64_MIN indicates that min/max ranges could be computed for this column
-- for chunks.
--
-- The ranges can overlap across chunks. The values could be out-of-date if
-- modifications/changes occur in the corresponding chunk and such entries
-- should be marked as "invalid" to ensure that the chunk is in
-- appropriate state to be able to use these values. Thus these entries
-- are different from dimension_slice which is used for tracking partitioning
-- column ranges which have different characteristics.
--
-- Currently this catalog supports datatypes like INT, SERIAL, BIGSERIAL,
-- DATE, TIMESTAMP etc. by storing the ranges in bigint columns. In the
-- future, we could support additional datatypes (which support btree style
-- >, <, = comparators) by storing their textual representation.
--
CREATE TABLE _timescaledb_catalog.chunk_column_stats (
id serial NOT NULL,
hypertable_id integer NOT NULL,
chunk_id integer NOT NULL,
column_name name NOT NULL,
range_start bigint NOT NULL,
range_end bigint NOT NULL,
valid boolean NOT NULL,
-- table constraints
CONSTRAINT chunk_column_stats_pkey PRIMARY KEY (id),
CONSTRAINT chunk_column_stats_ht_id_chunk_id_colname_key UNIQUE (hypertable_id, chunk_id, column_name),
CONSTRAINT chunk_column_stats_range_check CHECK (range_start <= range_end),
CONSTRAINT chunk_column_stats_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable (id),
CONSTRAINT chunk_column_stats_chunk_id_fkey FOREIGN KEY (chunk_id) REFERENCES _timescaledb_catalog.chunk (id)
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.chunk_column_stats', '');

SELECT pg_catalog.pg_extension_config_dump(pg_get_serial_sequence('_timescaledb_catalog.chunk_column_stats', 'id'), '');

GRANT SELECT ON _timescaledb_catalog.chunk_column_stats TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.chunk_column_stats_id_seq TO PUBLIC;

-- Remove foreign key constraints from compressed chunks
DO $$
DECLARE
conrelid regclass;
conname name;
BEGIN
FOR conrelid, conname IN
SELECT
con.conrelid::regclass,
con.conname
FROM _timescaledb_catalog.chunk ch
JOIN pg_constraint con ON con.conrelid = format('%I.%I',schema_name,table_name)::regclass AND con.contype='f'
WHERE NOT ch.dropped AND EXISTS(SELECT FROM _timescaledb_catalog.chunk ch2 WHERE NOT ch2.dropped AND ch2.compressed_chunk_id=ch.id)
LOOP
EXECUTE format('ALTER TABLE %s DROP CONSTRAINT %I', conrelid, conname);
END LOOP;
END $$;

86 changes: 0 additions & 86 deletions sql/updates/latest-dev.sql
@@ -1,86 +0,0 @@
-- Enable tracking of statistics on a column of a hypertable.
--
-- hypertable - OID of the table to which the column belongs to
-- column_name - The column to track statistics for
-- if_not_exists - If set, and the entry already exists, generate a notice instead of an error
CREATE FUNCTION @extschema@.enable_chunk_skipping(
hypertable REGCLASS,
column_name NAME,
if_not_exists BOOLEAN = FALSE
) RETURNS TABLE(column_stats_id INT, enabled BOOL)
AS 'SELECT NULL,NULL' LANGUAGE SQL VOLATILE SET search_path = pg_catalog, pg_temp;

-- Disable tracking of statistics on a column of a hypertable.
--
-- hypertable - OID of the table to remove from
-- column_name - NAME of the column on which the stats are tracked
-- if_not_exists - If set, and the entry does not exist,
-- generate a notice instead of an error
CREATE FUNCTION @extschema@.disable_chunk_skipping(
hypertable REGCLASS,
column_name NAME,
if_not_exists BOOLEAN = FALSE
) RETURNS TABLE(hypertable_id INT, column_name NAME, disabled BOOL)
AS 'SELECT NULL,NULL,NULL' LANGUAGE SQL VOLATILE SET search_path = pg_catalog, pg_temp;

-- Track statistics for columns of chunks from a hypertable.
-- Currently, we track the min/max range for a given column across chunks.
-- More statistics (like bloom filters) will be added in the future.
--
-- A "special" entry for a column with invalid chunk_id, PG_INT64_MAX,
-- PG_INT64_MIN indicates that min/max ranges could be computed for this column
-- for chunks.
--
-- The ranges can overlap across chunks. The values could be out-of-date if
-- modifications/changes occur in the corresponding chunk and such entries
-- should be marked as "invalid" to ensure that the chunk is in
-- appropriate state to be able to use these values. Thus these entries
-- are different from dimension_slice which is used for tracking partitioning
-- column ranges which have different characteristics.
--
-- Currently this catalog supports datatypes like INT, SERIAL, BIGSERIAL,
-- DATE, TIMESTAMP etc. by storing the ranges in bigint columns. In the
-- future, we could support additional datatypes (which support btree style
-- >, <, = comparators) by storing their textual representation.
--
CREATE TABLE _timescaledb_catalog.chunk_column_stats (
id serial NOT NULL,
hypertable_id integer NOT NULL,
chunk_id integer NOT NULL,
column_name name NOT NULL,
range_start bigint NOT NULL,
range_end bigint NOT NULL,
valid boolean NOT NULL,
-- table constraints
CONSTRAINT chunk_column_stats_pkey PRIMARY KEY (id),
CONSTRAINT chunk_column_stats_ht_id_chunk_id_colname_key UNIQUE (hypertable_id, chunk_id, column_name),
CONSTRAINT chunk_column_stats_range_check CHECK (range_start <= range_end),
CONSTRAINT chunk_column_stats_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable (id),
CONSTRAINT chunk_column_stats_chunk_id_fkey FOREIGN KEY (chunk_id) REFERENCES _timescaledb_catalog.chunk (id)
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.chunk_column_stats', '');

SELECT pg_catalog.pg_extension_config_dump(pg_get_serial_sequence('_timescaledb_catalog.chunk_column_stats', 'id'), '');

GRANT SELECT ON _timescaledb_catalog.chunk_column_stats TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.chunk_column_stats_id_seq TO PUBLIC;

-- Remove foreign key constraints from compressed chunks
DO $$
DECLARE
conrelid regclass;
conname name;
BEGIN
FOR conrelid, conname IN
SELECT
con.conrelid::regclass,
con.conname
FROM _timescaledb_catalog.chunk ch
JOIN pg_constraint con ON con.conrelid = format('%I.%I',schema_name,table_name)::regclass AND con.contype='f'
WHERE NOT ch.dropped AND EXISTS(SELECT FROM _timescaledb_catalog.chunk ch2 WHERE NOT ch2.dropped AND ch2.compressed_chunk_id=ch.id)
LOOP
EXECUTE format('ALTER TABLE %s DROP CONSTRAINT %I', conrelid, conname);
END LOOP;
END $$;

4 changes: 2 additions & 2 deletions version.config
@@ -1,3 +1,3 @@
-version = 2.16.0-dev
-update_from_version = 2.15.3
+version = 2.17.0-dev
+update_from_version = 2.16.0
 downgrade_to_version = 2.15.3
