Release 2.7.0 #4378

Merged 1 commit on May 24, 2022
41 changes: 37 additions & 4 deletions CHANGELOG.md
@@ -6,30 +6,61 @@ accidentally triggering the load of a previous DB version.

## Unreleased

If you use compression with a non-default collation on a segmentby-column, you might have to recompress the affected hypertable.
## 2.7.0 (2022-05-24)

This release adds major new features since the 2.6.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* Optimize continuous aggregate query performance and storage
* The following query clauses and functions can now be used in a continuous
aggregate: FILTER, DISTINCT, ORDER BY as well as [Ordered-Set Aggregate](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-ORDEREDSET-TABLE)
and [Hypothetical-Set Aggregate](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-HYPOTHETICAL-TABLE)
* Optimize now() query planning time
* Improve COPY insert performance
* Improve performance of UPDATE/DELETE on PG14 by excluding chunks
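
The new continuous-aggregate capabilities above can be illustrated with a short sketch. This is a hypothetical example, not part of the release: the hypertable `conditions` and its columns `time` and `temp` are assumed names, and `percentile_disc` stands in for the newly supported ordered-set aggregates.

```sql
-- Hypothetical sketch: "conditions(time timestamptz, temp float)" is an
-- assumed hypertable. FILTER clauses and ordered-set aggregates such as
-- percentile_disc can now appear directly in a continuous aggregate.
CREATE MATERIALIZED VIEW daily_temp_stats
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', time) AS day,
    count(*) FILTER (WHERE temp > 30.0) AS hot_readings,
    percentile_disc(0.5) WITHIN GROUP (ORDER BY temp) AS median_temp
FROM conditions
GROUP BY 1
WITH NO DATA;
```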

This release also includes several bug fixes.

If you are upgrading from a previous version and were using compression
with a non-default collation on a segmentby-column, you should recompress
those hypertables.
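
A minimal sketch of that recompression, following the "disable compression and enable it again" approach suggested by the update script's warning. The hypertable name `conditions` and segmentby column `device` are assumptions for illustration only.

```sql
-- Hypothetical sketch: "conditions" is an assumed hypertable compressed
-- with a segmentby setting on a collated column ("device" is assumed).
-- 1. Decompress every compressed chunk (passing true skips chunks that
--    are not compressed instead of raising an error).
SELECT decompress_chunk(c, true) FROM show_chunks('conditions') AS c;
-- 2. Disable and re-enable compression; the segmentby setting must be
--    specified again when re-enabling.
ALTER TABLE conditions SET (timescaledb.compress = false);
ALTER TABLE conditions SET (timescaledb.compress = true,
                            timescaledb.compress_segmentby = 'device');
-- 3. Recompress all chunks.
SELECT compress_chunk(c, true) FROM show_chunks('conditions') AS c;
```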

**Features**
* #4045 Custom origin support in CAGGs
* #4120 Add logging for retention policy
* #4158 Allow ANALYZE command on a data node directly
* #4169 Add support for chunk exclusion on DELETE to PG14
* #4209 Add support for chunk exclusion on UPDATE to PG14
* #4269 Continuous Aggregates finalized form
* #4301 Add support for bulk inserts in COPY operator
* #4311 Support non-superuser move chunk operations
* #4330 Add GUC "bgw_launcher_poll_time"
* #4340 Enable now() usage in plan-time chunk exclusion

**Bugfixes**
* #3899 Fix segfault in Continuous Aggregates
* #4225 Fix TRUNCATE error as non-owner on hypertable
* #4236 Fix potential wrong order of results for compressed hypertable with a non-default collation
* #4249 Fix option "timescaledb.create_group_indexes"
* #4251 Fix INSERT into compressed chunks with dropped columns
* #4255 Fix option "timescaledb.create_group_indexes"
* #4259 Fix logic bug in extension update script
* #4269 Fix bad Continuous Aggregate view definition reported in #4233
* #4289 Support moving compressed chunks between data nodes
* #4300 Fix refresh window cap for cagg refresh policy
* #4315 Fix memory leak in scheduler
* #4323 Remove printouts from signal handlers
* #4342 Fix move chunk cleanup logic
* #4349 Fix crashes in functions using AlterTableInternal
* #4358 Fix crash and other issues in telemetry reporter

**Thanks**
* @abrownsword for reporting a bug in the telemetry reporter and testing the fix
* @jsoref for fixing various misspellings in code, comments and documentation
* @yalon for reporting an error with ALTER TABLE RENAME on distributed hypertables
* @zhuizhuhaomeng for reporting and fixing a memory leak in our scheduler

## 2.6.1 (2022-04-11)
This release is a patch release. We recommend that you upgrade at the next available opportunity.
@@ -50,7 +81,9 @@ This release is a patch release. We recommend that you upgrade at the next available opportunity.

**Thanks**
* @abrownsword for reporting a crash in the telemetry reporter
* @amalek215 for reporting a segmentation fault when running VACUUM FULL pg_class
* @daydayup863 for reporting issue with remote explain
* @krvajal for reporting an error with ADD COLUMN IF NOT EXISTS on compressed hypertables

## 2.6.0 (2022-02-16)
This release is medium priority for upgrade. We recommend that you upgrade at the next available opportunity.
5 changes: 3 additions & 2 deletions sql/CMakeLists.txt
@@ -34,11 +34,12 @@ set(MOD_FILES
updates/2.5.0--2.5.1.sql
updates/2.5.1--2.5.2.sql
updates/2.5.2--2.6.0.sql
-    updates/2.6.0--2.6.1.sql)
+    updates/2.6.0--2.6.1.sql
+    updates/2.6.1--2.7.0.sql)

# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config
-set(CURRENT_REV_FILE reverse-dev.sql)
+set(CURRENT_REV_FILE 2.7.0--2.6.1.sql)
# Files for generating old downgrade scripts. This should only include files for
# downgrade from one version to its previous version since we do not support
# skipping versions when downgrading.
171 changes: 171 additions & 0 deletions sql/updates/2.6.1--2.7.0.sql
@@ -0,0 +1,171 @@
CREATE FUNCTION _timescaledb_internal.relation_size(relation REGCLASS)
RETURNS TABLE (total_size BIGINT, heap_size BIGINT, index_size BIGINT, toast_size BIGINT)
AS '@MODULE_PATHNAME@', 'ts_relation_size' LANGUAGE C VOLATILE;

DROP VIEW IF EXISTS _timescaledb_internal.hypertable_chunk_local_size;
DROP INDEX IF EXISTS _timescaledb_catalog.chunk_constraint_chunk_id_dimension_slice_id_idx;
CREATE INDEX chunk_constraint_dimension_slice_id_idx ON _timescaledb_catalog.chunk_constraint (dimension_slice_id);

-- Report the compressed chunks that have a wrong collation. See https://github.com/timescale/timescaledb/pull/4236
DO $$
DECLARE
_hypertable regclass;
_column_name text;
_chunks regclass[];
BEGIN
FOR _hypertable,
_column_name,
_chunks IN
-- We materialize this CTE so that the filter on dropped chunks works
-- first, and we don't try to look up regclass for dropped chunks.
WITH chunk AS MATERIALIZED (
SELECT
format('%I.%I', compressed_chunk.schema_name, compressed_chunk.table_name) compressed_chunk,
format('%I.%I', normal_chunk.schema_name, normal_chunk.table_name) normal_chunk,
normal_chunk.hypertable_id hypertable_id
FROM
_timescaledb_catalog.chunk normal_chunk,
_timescaledb_catalog.chunk compressed_chunk
WHERE
normal_chunk.compressed_chunk_id = compressed_chunk.id
AND NOT normal_chunk.dropped
),
col AS (
SELECT
hypertable_id,
normal_chunk,
normal_column.attname column_name
FROM
chunk,
pg_attribute normal_column,
pg_attribute compressed_column
WHERE
normal_column.attrelid = normal_chunk::regclass
AND compressed_column.attrelid = compressed_chunk::regclass
AND normal_column.attname = compressed_column.attname
AND compressed_column.atttypid != '_timescaledb_internal.compressed_data'::regtype
AND normal_column.attcollation != compressed_column.attcollation
),
report_rows AS (
SELECT
format('%I.%I', schema_name, table_name)::regclass hypertable,
normal_chunk::regclass chunk,
column_name
FROM
col,
_timescaledb_catalog.hypertable
WHERE
hypertable.id = hypertable_id
)
SELECT
hypertable,
column_name,
array_agg(chunk) chunks
FROM
report_rows
GROUP BY
hypertable,
column_name LOOP
RAISE warning 'some compressed chunks for hypertable "%" use a wrong collation for the column "%"', _hypertable, _column_name
USING detail = 'This may lead to wrong order of results if you are using an index on this column of the compressed chunk.',
hint = format('If you experience this problem, disable compression on the table and enable it again. This will require decompressing and compressing all chunks of the table. The affected chunks are "%s".', _chunks);
END LOOP;
END
$$;

-- Get rid of chunk_id from materialization hypertables
DROP FUNCTION IF EXISTS timescaledb_experimental.refresh_continuous_aggregate(REGCLASS, REGCLASS);

DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;

ALTER TABLE _timescaledb_catalog.continuous_agg
ADD COLUMN finalized BOOL;

UPDATE _timescaledb_catalog.continuous_agg SET finalized = FALSE;

ALTER TABLE _timescaledb_catalog.continuous_agg
ALTER COLUMN finalized SET NOT NULL,
ALTER COLUMN finalized SET DEFAULT TRUE;

DROP PROCEDURE IF EXISTS timescaledb_experimental.move_chunk(REGCLASS, NAME, NAME);
DROP PROCEDURE IF EXISTS timescaledb_experimental.copy_chunk(REGCLASS, NAME, NAME);

CREATE OR REPLACE FUNCTION timescaledb_experimental.subscription_exec(
subscription_command TEXT
) RETURNS VOID AS '@MODULE_PATHNAME@', 'ts_subscription_exec' LANGUAGE C VOLATILE;

-- Recreate chunk_copy_operation table with newly added `compress_chunk_name` column
--

CREATE TABLE _timescaledb_catalog._tmp_chunk_copy_operation (
operation_id name NOT NULL,
backend_pid integer NOT NULL,
completed_stage name NOT NULL,
time_start timestamptz NOT NULL DEFAULT NOW(),
chunk_id integer NOT NULL,
compress_chunk_name name NOT NULL, -- new column
source_node_name name NOT NULL,
dest_node_name name NOT NULL,
delete_on_source_node bool NOT NULL
);

INSERT INTO _timescaledb_catalog._tmp_chunk_copy_operation
SELECT
operation_id,
backend_pid,
completed_stage,
time_start,
chunk_id,
'', -- compress_chunk_name
source_node_name,
dest_node_name,
delete_on_source_node
FROM
_timescaledb_catalog.chunk_copy_operation
ORDER BY
operation_id;

ALTER EXTENSION timescaledb
DROP TABLE _timescaledb_catalog.chunk_copy_operation;

DROP TABLE _timescaledb_catalog.chunk_copy_operation;

-- Create a new table to avoid doing a rename operation on the tmp table
--
CREATE TABLE _timescaledb_catalog.chunk_copy_operation (
operation_id name NOT NULL,
backend_pid integer NOT NULL,
completed_stage name NOT NULL,
time_start timestamptz NOT NULL DEFAULT NOW(),
chunk_id integer NOT NULL,
compress_chunk_name name NOT NULL,
source_node_name name NOT NULL,
dest_node_name name NOT NULL,
delete_on_source_node bool NOT NULL
);

INSERT INTO _timescaledb_catalog.chunk_copy_operation
SELECT
operation_id,
backend_pid,
completed_stage,
time_start,
chunk_id,
compress_chunk_name,
source_node_name,
dest_node_name,
delete_on_source_node
FROM
_timescaledb_catalog._tmp_chunk_copy_operation
ORDER BY
operation_id;

DROP TABLE _timescaledb_catalog._tmp_chunk_copy_operation;

ALTER TABLE _timescaledb_catalog.chunk_copy_operation
ADD CONSTRAINT chunk_copy_operation_pkey PRIMARY KEY (operation_id),
ADD CONSTRAINT chunk_copy_operation_chunk_id_fkey FOREIGN KEY (chunk_id) REFERENCES _timescaledb_catalog.chunk(id) ON DELETE CASCADE;

GRANT SELECT ON TABLE _timescaledb_catalog.chunk_copy_operation TO PUBLIC;

ANALYZE _timescaledb_catalog.chunk_copy_operation;