-TimescaleDB is an open-source database designed to make SQL scalable for
-time-series data. It is engineered up from PostgreSQL and packaged as a
-PostgreSQL extension, providing automatic partitioning across time and space
-(partitioning key), as well as full SQL support.
+
+TimescaleDB is a PostgreSQL extension for high-performance real-time analytics on time-series and event data.
-If you prefer not to install or administer your instance of TimescaleDB, try the
-30 day free trial of [Timescale Cloud](https://console.cloud.timescale.com/signup), our fully managed cloud offering.
-Timescale is pay-as-you-go. We don't charge for storage you dont use, backups, snapshots, ingress or egress.
+[![Docs](https://img.shields.io/badge/Read_the_Timescale_docs-black?style=for-the-badge&logo=readthedocs&logoColor=white)](https://docs.timescale.com/)
+[![SLACK](https://img.shields.io/badge/Ask_the_Timescale_community-black?style=for-the-badge&logo=slack&logoColor=white)](https://timescaledb.slack.com/archives/C4GT3N90X)
+[![Try TimescaleDB for free](https://img.shields.io/badge/Try_Timescale_for_free-black?style=for-the-badge&logo=timescale&logoColor=white)](https://console.cloud.timescale.com/signup)
-To determine which option is best for you, see [Timescale Products](https://tsdb.co/GitHubTimescaleProducts)
-for more information about our Apache-2 version, TimescaleDB Community (self-hosted), and Timescale
-Cloud (hosted), including: feature comparisons, FAQ, documentation, and support.
+
-Below is an introduction to TimescaleDB. For more information, please check out
-these other resources:
-- [Developer Documentation](https://docs.timescale.com/getting-started/latest/services/)
-- [Slack Channel](https://slack-login.timescale.com)
-- [Timescale Community Forum](https://www.timescale.com/forum/)
-- [Timescale Release Notes & Future Plans](https://tsdb.co/GitHubTimescaleDocsReleaseNotes)
+## Install TimescaleDB
-For reference and clarity, all code files in this repository reference
-licensing in their header (either the Apache-2-open-source license
-or [Timescale License (TSL)](https://github.com/timescale/timescaledb/blob/main/tsl/LICENSE-TIMESCALE)
-). Apache-2 licensed binaries can be built by passing `-DAPACHE_ONLY=1` to `bootstrap`.
+Install TimescaleDB in a Docker container:
-[Contributors welcome.](https://github.com/timescale/timescaledb/blob/main/CONTRIBUTING.md)
+1. Run the TimescaleDB container:
-(To build TimescaleDB from source, see instructions in [_Building from source_](https://github.com/timescale/timescaledb/blob/main/docs/BuildSource.md).)
+ ```bash
+ docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg17
+ ```
-### Using TimescaleDB
+1. Connect to a database:
-TimescaleDB scales PostgreSQL for time-series data via automatic
-partitioning across time and space (partitioning key), yet retains
-the standard PostgreSQL interface.
+ ```bash
+ docker exec -it timescaledb psql -d "postgres://postgres:password@localhost/postgres"
+ ```
-In other words, TimescaleDB exposes what look like regular tables, but
-are actually only an
-abstraction (or a virtual view) of many individual tables comprising the
-actual data. This single-table view, which we call a
-[hypertable](https://tsdb.co/GitHubTimescaleHypertable),
-is comprised of many chunks, which are created by partitioning
-the hypertable's data in either one or two dimensions: by a time
-interval, and by an (optional) "partition key" such as
-device id, location, user id, etc.
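+1. Optionally, verify that TimescaleDB is available. A minimal check against the standard PostgreSQL catalog, assuming the connection from the previous step:
+
+   ```sql
+   SELECT name, default_version, installed_version
+   FROM pg_available_extensions WHERE name = 'timescaledb';
+   ```
+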
+See [other installation options](https://docs.timescale.com/self-hosted/latest/install/) or try [Timescale Cloud](https://docs.timescale.com/getting-started/latest/) for free.
-Virtually all user interactions with TimescaleDB are with
-hypertables. Creating tables and indexes, altering tables, inserting
-data, selecting data, etc., can (and should) all be executed on the
-hypertable.
+## Create a hypertable
-From the perspective of both use and management, TimescaleDB just
-looks and feels like PostgreSQL, and can be managed and queried as
-such.
-
-#### Before you start
-
-PostgreSQL's out-of-the-box settings are typically too conservative for modern
-servers and TimescaleDB. You should make sure your `postgresql.conf`
-settings are tuned, either by using [timescaledb-tune](https://github.com/timescale/timescaledb-tune)
-or doing it manually.
-
-#### Creating a hypertable
+You create a regular table and then convert it into a hypertable. A hypertable automatically partitions data into chunks based on your configuration.
```sql
--- Do not forget to create timescaledb extension
+-- Create timescaledb extension
CREATE EXTENSION timescaledb;
--- We start by creating a regular SQL table
+-- Create a regular SQL table
CREATE TABLE conditions (
time TIMESTAMPTZ NOT NULL,
location TEXT NOT NULL,
@@ -80,99 +50,155 @@ CREATE TABLE conditions (
humidity DOUBLE PRECISION NULL
);
--- Then we convert it into a hypertable that is partitioned by time
-SELECT create_hypertable('conditions', 'time');
+-- Convert the table into a hypertable that is partitioned by time
+SELECT create_hypertable('conditions', by_range('time'));
```
-- [Quick start: Creating hypertables](https://docs.timescale.com/use-timescale/latest/hypertables/create/)
-- [Reference examples](https://tsdb.co/GitHubTimescaleHypertableReference)
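+Each chunk covers a fixed time interval (seven days by default). As a sketch, if you want smaller chunks you can pass the interval explicitly when creating the hypertable:
+
+```sql
+-- Alternative creation call that partitions into one-day chunks
+SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day'));
+```
+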
+See more:
-#### Inserting and querying data
+- [About hypertables](https://docs.timescale.com/use-timescale/latest/hypertables/)
+- [API reference](https://docs.timescale.com/api/latest/hypertable/)
-Inserting data into the hypertable is done via normal SQL commands:
+## Enable columnstore
-```sql
-INSERT INTO conditions(time, location, temperature, humidity)
- VALUES (NOW(), 'office', 70.0, 50.0);
-
-SELECT * FROM conditions ORDER BY time DESC LIMIT 100;
-
-SELECT time_bucket('15 minutes', time) AS fifteen_min,
- location, COUNT(*),
- MAX(temperature) AS max_temp,
- MAX(humidity) AS max_hum
- FROM conditions
- WHERE time > NOW() - interval '3 hours'
- GROUP BY fifteen_min, location
- ORDER BY fifteen_min DESC, max_temp DESC;
-```
+TimescaleDB's hypercore is a hybrid row-columnar store that boosts analytical query performance on your time-series and event data, while reducing data size by more than 90%. This keeps your queries operating at lightning speed and ensures low storage costs as you scale. Data is inserted in row format in the rowstore and converted to columnar format in the columnstore based on your configuration.
+
+- Configure the columnstore on a hypertable:
+
+ ```sql
+ ALTER TABLE conditions SET (
+ timescaledb.compress,
+    timescaledb.compress_segmentby = 'location'
+ );
+ ```
+
+- Create a policy that automatically converts chunks older than seven days from row format to columnar format:
+
+ ```sql
+ SELECT add_compression_policy('conditions', INTERVAL '7 days');
+ ```
+
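+- Alternatively, convert eligible chunks immediately instead of waiting for the policy. A sketch reusing the seven-day cutoff:
+
+  ```sql
+  SELECT compress_chunk(c)
+  FROM show_chunks('conditions', older_than => INTERVAL '7 days') c;
+  ```
+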
+See more:
+
+- [About columnstore](https://docs.timescale.com/use-timescale/latest/compression/about-compression/)
+- [Enable columnstore manually](https://docs.timescale.com/use-timescale/latest/compression/manual-compression/)
+- [API reference](https://docs.timescale.com/api/latest/compression/)
-In addition, TimescaleDB includes additional functions for time-series
-analysis that are not present in vanilla PostgreSQL. (For example, the `time_bucket` function above.)
+## Insert and query data
-- [Quick start: Basic operations](https://tsdb.co/GitHubTimescaleBasicOperations)
-- [Reference examples](https://tsdb.co/GitHubTimescaleWriteData)
-- [TimescaleDB API](https://tsdb.co/GitHubTimescaleAPI)
+Insert and query data in a hypertable via regular SQL commands. For example:
-### Installation
+- Insert data into a hypertable named `conditions`:
-Installation options are:
+ ```sql
+ INSERT INTO conditions
+ VALUES
+ (NOW(), 'office', 70.0, 50.0),
+ (NOW(), 'basement', 66.5, 60.0),
+ (NOW(), 'garage', 77.0, 65.2);
+ ```
-- **[Timescale Cloud](https://tsdb.co/GitHubTimescale)**: A fully-managed TimescaleDB in the cloud, is
- available via a free trial. Create a PostgreSQL database in the cloud with TimescaleDB pre-installed
- so you can power your application with TimescaleDB without the management overhead.
+- Return the number of entries written to the `conditions` table in the last 12 hours:
-- **Platform packages**: TimescaleDB is also available pre-packaged for several platforms such as
- Linux, Windows, MacOS, Docker, and Kubernetes. For more information, see [Install TimescaleDB](https://docs.timescale.com/self-hosted/latest/install/).
+ ```sql
+ SELECT
+ COUNT(*)
+ FROM
+ conditions
+ WHERE
+ time > NOW() - INTERVAL '12 hours';
+ ```
-- **Build from source**: See [Building from source](https://github.com/timescale/timescaledb/blob/main/docs/BuildSource.md).
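+
+- Get the latest reading per location with standard PostgreSQL `DISTINCT ON`; no TimescaleDB-specific syntax is needed:
+
+  ```sql
+  SELECT DISTINCT ON (location) location, time, temperature
+  FROM conditions
+  ORDER BY location, time DESC;
+  ```
+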
+See more:
- We recommend not using TimescaleDB with PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, 12.21.
- These minor versions [introduced a breaking binary interface change][postgres-breaking-change] that,
- once identified, was reverted in subsequent minor PostgreSQL versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22.
- When you build from source, best practice is to build with PostgreSQL 17.2, 16.6, etc and higher.
- Users of [Timescale Cloud](https://console.cloud.timescale.com/) and Platform packages built and
- distributed by Timescale are unaffected.
+- [Query data](https://docs.timescale.com/use-timescale/latest/query-data/)
+- [Write data](https://docs.timescale.com/use-timescale/latest/write-data/)
-## Resources
+## Create time buckets
-### Architecture documents
+Time buckets enable you to aggregate data in hypertables by time interval and calculate summary values.
-- [Basic TimescaleDB Features](tsl/README.md)
-- [Advanced TimescaleDB Features](tsl/README.md)
-- [Testing TimescaleDB](test/README.md)
+For example, calculate the average daily temperature in a table named `conditions`. The table has `time` and `temperature` columns:
-### Useful tools
+```sql
+SELECT
+ time_bucket('1 day', time) AS bucket,
+ AVG(temperature) AS avg_temp
+FROM
+ conditions
+GROUP BY
+ bucket
+ORDER BY
+ bucket ASC;
+```
+
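+`time_bucket` composes with ordinary `WHERE` clauses. For example, a sketch using 15-minute buckets over the last three hours:
+
+```sql
+SELECT time_bucket('15 minutes', time) AS fifteen_min,
+       location,
+       MAX(temperature) AS max_temp
+FROM conditions
+WHERE time > NOW() - INTERVAL '3 hours'
+GROUP BY fifteen_min, location
+ORDER BY fifteen_min DESC, max_temp DESC;
+```
+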
+See more:
+
+- [About time buckets](https://docs.timescale.com/use-timescale/latest/time-buckets/about-time-buckets/)
+- [API reference](https://docs.timescale.com/api/latest/hyperfunctions/time_bucket/)
+- [All TimescaleDB features](https://docs.timescale.com/use-timescale/latest/)
+- [Tutorials](https://docs.timescale.com/tutorials/latest/)
+
+## Create continuous aggregates
+
+Continuous aggregates are designed to make queries on very large datasets run faster. They continuously and incrementally refresh a query in the background, so that when you run the query, only the data that has changed needs to be computed, not the entire dataset. This is what makes them different from regular PostgreSQL [materialized views](https://www.postgresql.org/docs/current/rules-materializedviews.html), which cannot be incrementally materialized and have to be rebuilt from scratch every time you refresh them.
+
+For example, create a continuous aggregate view for daily weather data in two simple steps:
+
+1. Create a materialized view:
-- [timescaledb-tune](https://github.com/timescale/timescaledb-tune): Helps
-set your PostgreSQL configuration settings based on your system's resources.
-- [timescaledb-parallel-copy](https://github.com/timescale/timescaledb-parallel-copy):
-Parallelize your initial bulk loading by using PostgreSQL's `COPY` across
-multiple workers.
+ ```sql
+ CREATE MATERIALIZED VIEW conditions_summary_daily
+ WITH (timescaledb.continuous) AS
+ SELECT
+      location,
+ time_bucket(INTERVAL '1 day', time) AS bucket,
+ AVG(temperature),
+ MAX(temperature),
+ MIN(temperature)
+ FROM
+ conditions
+ GROUP BY
+      location,
+ bucket;
+ ```
-### Additional documentation
+1. Create a policy to refresh the view every hour:
-- [Why use TimescaleDB?](https://tsdb.co/GitHubTimescaleIntro)
-- [Migrating from PostgreSQL](https://docs.timescale.com/migrate/latest/)
-- [Writing data](https://tsdb.co/GitHubTimescaleWriteData)
-- [Querying and data analytics](https://tsdb.co/GitHubTimescaleReadData)
-- [Tutorials and sample data](https://tsdb.co/GitHubTimescaleTutorials)
+ ```sql
+ SELECT
+ add_continuous_aggregate_policy(
+ 'conditions_summary_daily',
+ start_offset => INTERVAL '1 month',
+ end_offset => INTERVAL '1 day',
+ schedule_interval => INTERVAL '1 hour'
+ );
+ ```
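+
+Query the continuous aggregate like any other view, or refresh a window manually. A minimal sketch, assuming the `conditions_summary_daily` view above:
+
+```sql
+-- Refresh one month of data, up to one day ago
+CALL refresh_continuous_aggregate('conditions_summary_daily',
+     NOW() - INTERVAL '1 month', NOW() - INTERVAL '1 day');
+
+-- Read the daily summaries for one location
+SELECT * FROM conditions_summary_daily
+WHERE location = 'office' ORDER BY bucket;
+```
+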
+See more:
-### Community & help
+- [About continuous aggregates](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/)
+- [API reference](https://docs.timescale.com/api/latest/continuous-aggregates/create_materialized_view/)
-- [Slack Channel](https://slack.timescale.com)
-- [Github Issues](https://github.com/timescale/timescaledb/issues)
-- [Timescale Support](https://tsdb.co/GitHubTimescaleSupport): see support options (community & subscription)
+## Want TimescaleDB hosted and managed for you? Try Timescale Cloud
+
+[Timescale Cloud](https://docs.timescale.com/getting-started/latest/) is a cloud-based PostgreSQL platform for resource-intensive workloads. We help you build faster, scale further, and stay under budget. A Timescale Cloud service is a single optimized 100% PostgreSQL database instance that you use as is, or extend with capabilities specific to your business needs. The available capabilities are:
+
+- **Time-series and analytics**: PostgreSQL with TimescaleDB. The PostgreSQL you know and love, supercharged with functionality for storing and querying time-series data at scale for analytics and other use cases. Get faster time-based queries with hypertables, continuous aggregates, and columnar storage. Save on storage with native compression, data retention policies, and bottomless data tiering to Amazon S3.
+- **AI and vector**: PostgreSQL with vector extensions. Use PostgreSQL as a vector database with purpose-built extensions for building AI applications from start to scale. Get fast and accurate similarity search with the pgvector and pgvectorscale extensions. Create vector embeddings and perform LLM reasoning on your data with the pgai extension.
+- **PostgreSQL**: the trusted industry-standard RDBMS. Ideal for applications requiring strong data consistency, complex relationships, and advanced querying capabilities. Get ACID compliance, extensive SQL support, JSON handling, and extensibility through custom functions, data types, and extensions.
+
+All services include all the cloud tooling you'd expect for production use: [automatic backups](https://docs.timescale.com/use-timescale/latest/backup-restore/backup-restore-cloud/), [high availability](https://docs.timescale.com/use-timescale/latest/ha-replicas/), [read replicas](https://docs.timescale.com/use-timescale/latest/ha-replicas/read-scaling/), [data forking](https://docs.timescale.com/use-timescale/latest/services/service-management/#fork-a-service), [connection pooling](https://docs.timescale.com/use-timescale/latest/services/connection-pooling/), [tiered storage](https://docs.timescale.com/use-timescale/latest/data-tiering/), [usage-based storage](https://docs.timescale.com/about/latest/pricing-and-account-management/), and much more.
+
+## Check build status
+
+|Linux/macOS|Linux i386|Windows|Coverity|Code Coverage|OpenSSF|
+|:---:|:---:|:---:|:---:|:---:|:---:|
+|[![Build Status Linux/macOS](https://github.com/timescale/timescaledb/actions/workflows/linux-build-and-test.yaml/badge.svg?branch=main&event=schedule)](https://github.com/timescale/timescaledb/actions/workflows/linux-build-and-test.yaml?query=workflow%3ARegression+branch%3Amain+event%3Aschedule)|[![Build Status Linux i386](https://github.com/timescale/timescaledb/actions/workflows/linux-32bit-build-and-test.yaml/badge.svg?branch=main&event=schedule)](https://github.com/timescale/timescaledb/actions/workflows/linux-32bit-build-and-test.yaml?query=workflow%3ARegression+branch%3Amain+event%3Aschedule)|[![Windows build status](https://github.com/timescale/timescaledb/actions/workflows/windows-build-and-test.yaml/badge.svg?branch=main&event=schedule)](https://github.com/timescale/timescaledb/actions/workflows/windows-build-and-test.yaml?query=workflow%3ARegression+branch%3Amain+event%3Aschedule)|[![Coverity Scan Build Status](https://scan.coverity.com/projects/timescale-timescaledb/badge.svg)](https://scan.coverity.com/projects/timescale-timescaledb)|[![Code Coverage](https://codecov.io/gh/timescale/timescaledb/branch/main/graphs/badge.svg?branch=main)](https://codecov.io/gh/timescale/timescaledb)|[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/8012/badge)](https://www.bestpractices.dev/projects/8012)|
-### Releases & updates
+## Get involved
- - [Timescale Release Notes](https://tsdb.co/GitHubTimescaleDocsReleaseNotes): see detailed information about current and past
- versions and subscribe to get
- notified about new releases, fixes, and early access/beta programs.
+We welcome contributions to TimescaleDB! See [Contributing](https://github.com/timescale/timescaledb/blob/main/CONTRIBUTING.md) and [Code style guide](https://github.com/timescale/timescaledb/blob/main/docs/StyleGuide.md) for details.
-### Contributing
+## Learn about Timescale
-- [Contributor instructions](https://github.com/timescale/timescaledb/blob/main/CONTRIBUTING.md)
-- [Code style guide](https://github.com/timescale/timescaledb/blob/main/docs/StyleGuide.md)
+Timescale is PostgreSQL made powerful. To learn more about the company and its products, visit [timescale.com](https://www.timescale.com).
-[postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/
diff --git a/src/cross_module_fn.h b/src/cross_module_fn.h
index e471a867b25..da2c520b260 100644
--- a/src/cross_module_fn.h
+++ b/src/cross_module_fn.h
@@ -132,6 +132,9 @@ typedef struct CrossModuleFunctions
PGFunction decompress_chunk;
void (*decompress_batches_for_insert)(const ChunkInsertState *state, TupleTableSlot *slot);
bool (*decompress_target_segments)(HypertableModifyState *ht_state);
+ int (*hypercore_decompress_update_segment)(Relation relation, const ItemPointer ctid,
+ TupleTableSlot *slot, Snapshot snapshot,
+ ItemPointer new_tid);
/* The compression functions below are not installed in SQL as part of create extension;
* They are installed and tested during testing scripts. They are exposed in cross-module
* functions because they may be very useful for debugging customer problems if the sql
diff --git a/src/nodes/chunk_dispatch/chunk_dispatch.c b/src/nodes/chunk_dispatch/chunk_dispatch.c
index 407c66fee91..9d0839a7849 100644
--- a/src/nodes/chunk_dispatch/chunk_dispatch.c
+++ b/src/nodes/chunk_dispatch/chunk_dispatch.c
@@ -168,22 +168,16 @@ ts_chunk_dispatch_decompress_batches_for_insert(ChunkDispatch *dispatch, ChunkIn
{
if (cis->chunk_compressed)
{
- OnConflictAction onconflict_action = ts_chunk_dispatch_get_on_conflict_action(dispatch);
-
- if (cis->use_tam && onconflict_action != ONCONFLICT_UPDATE)
- {
- /* With our own TAM, a unique index covers both the compressed and
- * non-compressed data, so there is no need to decompress anything
- * when doing inserts. */
- }
/*
* If this is an INSERT into a compressed chunk with UNIQUE or
* PRIMARY KEY constraints we need to make sure any batches that could
* potentially lead to a conflict are in the decompressed chunk so
* postgres can do proper constraint checking.
*/
- else if (ts_cm_functions->decompress_batches_for_insert)
+ if (ts_cm_functions->decompress_batches_for_insert)
{
+ OnConflictAction onconflict_action = ts_chunk_dispatch_get_on_conflict_action(dispatch);
+
ts_cm_functions->decompress_batches_for_insert(cis, slot);
/* mark rows visible */
@@ -445,7 +439,8 @@ chunk_dispatch_exec(CustomScanState *node)
on_chunk_insert_state_changed,
state);
- ts_chunk_dispatch_decompress_batches_for_insert(dispatch, cis, slot);
+ if (!cis->use_tam)
+ ts_chunk_dispatch_decompress_batches_for_insert(dispatch, cis, slot);
MemoryContextSwitchTo(old);
diff --git a/src/nodes/hypertable_modify.c b/src/nodes/hypertable_modify.c
index 255bd7d129c..54b7dbdf5c2 100644
--- a/src/nodes/hypertable_modify.c
+++ b/src/nodes/hypertable_modify.c
@@ -5,6 +5,7 @@
*/
#include
#include
+#include
#include
#include
#include
@@ -31,6 +32,7 @@
#include "hypertable_modify.h"
#include "nodes/chunk_append/chunk_append.h"
#include "nodes/chunk_dispatch/chunk_dispatch.h"
+#include "utils.h"
static void fireASTriggers(ModifyTableState *node);
static void fireBSTriggers(ModifyTableState *node);
@@ -2391,6 +2393,38 @@ ExecOnConflictUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
ExecWithCheckOptions(WCO_RLS_CONFLICT_CHECK, resultRelInfo, existing, mtstate->ps.state);
}
+ /*
+ * If the target relation is using Hypercore TAM, the conflict resolution
+ * index might point to a compressed segment containing the conflicting
+ * row. It is possible to decompress the segment immediately so that the
+ * update can proceed on the decompressed row.
+ */
+ if (ts_is_hypercore_am(resultRelInfo->ri_RelationDesc->rd_rel->relam))
+ {
+ ItemPointerData new_tid;
+ int ntuples =
+ ts_cm_functions->hypercore_decompress_update_segment(resultRelInfo->ri_RelationDesc,
+ conflictTid,
+ existing,
+ context->estate->es_snapshot,
+ &new_tid);
+
+ if (ntuples > 0)
+ {
+ /*
+			 * The conflicting row was decompressed, so we must update the
+ * conflictTid to point to the decompressed row.
+ */
+ ItemPointerCopy(&new_tid, conflictTid);
+ /*
+ * Since data was decompressed, the command counter was
+ * incremented to make it visible. Make sure the executor uses the
+ * latest command ID to see the changes.
+ */
+ context->estate->es_output_cid = GetCurrentCommandId(true);
+ }
+ }
+
/* Project the new tuple version */
ExecProject(resultRelInfo->ri_onConflict->oc_ProjInfo);
diff --git a/tsl/src/hypercore/arrow_cache_explain.c b/tsl/src/hypercore/arrow_cache_explain.c
index a86c0d0c38e..7216df392d3 100644
--- a/tsl/src/hypercore/arrow_cache_explain.c
+++ b/tsl/src/hypercore/arrow_cache_explain.c
@@ -61,43 +61,12 @@ standard_ExplainOneQuery(Query *query, int cursorOptions, IntoClause *into, Expl
}
#endif
-static struct
+static inline void
+append_if_positive(StringInfo info, const char *key, long long val)
{
- const char *hits_text; /* Number of cache hits */
- const char *miss_text; /* Number of cache misses */
- const char *evict_text; /* Number of cache evictions */
- const char *decompress_text; /* Number of arrays decompressed */
- const char *decompress_calls_text; /* Number of calls to decompress an array */
-} format_texts[] = {
- [EXPLAIN_FORMAT_TEXT] = {
- .hits_text = "Array Cache Hits",
- .miss_text = "Array Cache Misses",
- .evict_text = "Array Cache Evictions",
- .decompress_text = "Array Decompressions",
- .decompress_calls_text = "Array Decompression Calls",
- },
- [EXPLAIN_FORMAT_XML]= {
- .hits_text = "hits",
- .miss_text = "misses",
- .evict_text = "evictions",
- .decompress_text = "decompressions",
- .decompress_calls_text = "decompression calls",
- },
- [EXPLAIN_FORMAT_JSON] = {
- .hits_text = "hits",
- .miss_text = "misses",
- .evict_text = "evictions",
- .decompress_text = "decompressions",
- .decompress_calls_text = "decompression calls",
- },
- [EXPLAIN_FORMAT_YAML] = {
- .hits_text = "hits",
- .miss_text = "misses",
- .evict_text = "evictions",
- .decompress_text = "decompressions",
- .decompress_calls_text = "decompression calls",
- },
-};
+ if (val > 0)
+ appendStringInfo(info, " %s=%lld", key, val);
+}
static void
explain_decompression(Query *query, int cursorOptions, IntoClause *into, ExplainState *es,
@@ -106,33 +75,41 @@ explain_decompression(Query *query, int cursorOptions, IntoClause *into, Explain
standard_ExplainOneQuery(query, cursorOptions, into, es, queryString, params, queryEnv);
if (decompress_cache_print)
{
- Assert(es->format < sizeof(format_texts) / sizeof(*format_texts));
-
- ExplainOpenGroup("Array cache", "Arrow Array Cache", true, es);
- ExplainPropertyInteger(format_texts[es->format].hits_text,
- NULL,
- decompress_cache_stats.hits,
- es);
- ExplainPropertyInteger(format_texts[es->format].miss_text,
- NULL,
- decompress_cache_stats.misses,
- es);
- ExplainPropertyInteger(format_texts[es->format].evict_text,
- NULL,
- decompress_cache_stats.evictions,
- es);
- ExplainPropertyInteger(format_texts[es->format].decompress_text,
- NULL,
- decompress_cache_stats.decompressions,
- es);
-
- if (es->verbose)
- ExplainPropertyInteger(format_texts[es->format].decompress_calls_text,
- NULL,
- decompress_cache_stats.decompress_calls,
- es);
-
- ExplainCloseGroup("Array cache", "Arrow Array Cache", true, es);
+ const bool has_decompress_data = decompress_cache_stats.decompressions > 0 ||
+ decompress_cache_stats.decompress_calls > 0;
+ const bool has_cache_data = decompress_cache_stats.hits > 0 ||
+ decompress_cache_stats.misses > 0 ||
+ decompress_cache_stats.evictions > 0;
+ if (has_decompress_data || has_cache_data)
+ {
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ appendStringInfoString(es->str, "Array:");
+ if (has_cache_data)
+ appendStringInfoString(es->str, " cache");
+ append_if_positive(es->str, "hits", decompress_cache_stats.hits);
+ append_if_positive(es->str, "misses", decompress_cache_stats.misses);
+ append_if_positive(es->str, "evictions", decompress_cache_stats.evictions);
+ if (has_decompress_data)
+ appendStringInfoString(es->str, ", decompress");
+ append_if_positive(es->str, "count", decompress_cache_stats.decompressions);
+ append_if_positive(es->str, "calls", decompress_cache_stats.decompress_calls);
+ appendStringInfoChar(es->str, '\n');
+ }
+ else
+ {
+ ExplainOpenGroup("Array Cache", "Arrow Array Cache", true, es);
+ ExplainPropertyInteger("hits", NULL, decompress_cache_stats.hits, es);
+ ExplainPropertyInteger("misses", NULL, decompress_cache_stats.misses, es);
+ ExplainPropertyInteger("evictions", NULL, decompress_cache_stats.evictions, es);
+ ExplainCloseGroup("Array Cache", "Arrow Array Cache", true, es);
+
+ ExplainOpenGroup("Array Decompress", "Arrow Array Decompress", true, es);
+ ExplainPropertyInteger("count", NULL, decompress_cache_stats.decompressions, es);
+ ExplainPropertyInteger("calls", NULL, decompress_cache_stats.decompress_calls, es);
+ ExplainCloseGroup("Array Decompress", "Arrow Array Decompress", true, es);
+ }
+ }
decompress_cache_print = false;
memset(&decompress_cache_stats, 0, sizeof(struct DecompressCacheStats));
diff --git a/tsl/src/hypercore/hypercore_handler.c b/tsl/src/hypercore/hypercore_handler.c
index 66c94d61e66..e852bb8689c 100644
--- a/tsl/src/hypercore/hypercore_handler.c
+++ b/tsl/src/hypercore/hypercore_handler.c
@@ -7,6 +7,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -49,6 +50,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -1787,6 +1789,82 @@ hypercore_tuple_delete(Relation relation, ItemPointer tid, CommandId cid, Snapsh
return result;
}
+/*
+ * Decompress a segment that contains the row given by ctid.
+ *
+ * This function is called during an upsert (ON CONFLICT DO UPDATE), where the
+ * conflicting row points to a compressed segment that needs to be
+ * decompressed before the update can take place. This function is used to
+ * decompress that segment into a set of individual rows and insert them into
+ * the non-compressed region.
+ *
+ * Returns the number of rows in the segment that were decompressed, or 0 if
+ * the TID pointed to a regular (non-compressed) tuple. If any rows are
+ * decompressed, the TID of the decompressed conflicting row is returned via
+ * "new_ctid". If no rows were decompressed, the value of "new_ctid" is
+ * undefined.
+ */
+int
+hypercore_decompress_update_segment(Relation relation, const ItemPointer ctid, TupleTableSlot *slot,
+ Snapshot snapshot, ItemPointer new_ctid)
+{
+ HypercoreInfo *hcinfo;
+ Relation crel;
+ TupleTableSlot *cslot;
+ ItemPointerData decoded_tid;
+ TM_Result result;
+ TM_FailureData tmfd;
+ int n_batch_rows = 0;
+ uint16 tuple_index;
+ bool should_free;
+
+ /* Nothing to do if this is not a compressed segment */
+ if (!is_compressed_tid(ctid))
+ return 0;
+
+ Assert(TTS_IS_ARROWTUPLE(slot));
+ Assert(!TTS_EMPTY(slot));
+ Assert(ItemPointerEquals(ctid, &slot->tts_tid));
+
+ hcinfo = RelationGetHypercoreInfo(relation);
+ crel = table_open(hcinfo->compressed_relid, RowExclusiveLock);
+ tuple_index = hypercore_tid_decode(&decoded_tid, ctid);
+ cslot = arrow_slot_get_compressed_slot(slot, NULL);
+ HeapTuple tuple = ExecFetchSlotHeapTuple(cslot, false, &should_free);
+
+ RowDecompressor decompressor = build_decompressor(crel, relation);
+ heap_deform_tuple(tuple,
+ RelationGetDescr(crel),
+ decompressor.compressed_datums,
+ decompressor.compressed_is_nulls);
+
+	/* Must delete the segment before calling the decompression function below,
+	 * otherwise index updates will lead to conflicts */
+ result = table_tuple_delete(decompressor.in_rel,
+ &cslot->tts_tid,
+ decompressor.mycid,
+ snapshot,
+ InvalidSnapshot,
+ true,
+ &tmfd,
+ false);
+
+ Ensure(result == TM_Ok, "could not delete compressed segment, result: %u", result);
+
+ n_batch_rows = row_decompressor_decompress_row_to_table(&decompressor);
+ /* Return the TID of the decompressed conflicting tuple. Tuple index is
+ * 1-indexed, so subtract 1. */
+ slot = decompressor.decompressed_slots[tuple_index - 1];
+ ItemPointerCopy(&slot->tts_tid, new_ctid);
+
+	/* Need to make the decompressed rows (and the deleted segment) visible */
+ CommandCounterIncrement();
+ row_decompressor_close(&decompressor);
+ table_close(crel, NoLock);
+
+ return n_batch_rows;
+}
+
#if PG16_LT
typedef bool TU_UpdateIndexes;
#endif
diff --git a/tsl/src/hypercore/hypercore_handler.h b/tsl/src/hypercore/hypercore_handler.h
index 71fe74f099c..8d26b5b69b1 100644
--- a/tsl/src/hypercore/hypercore_handler.h
+++ b/tsl/src/hypercore/hypercore_handler.h
@@ -30,6 +30,9 @@ extern void hypercore_xact_event(XactEvent event, void *arg);
extern bool hypercore_set_truncate_compressed(bool onoff);
extern void hypercore_scan_set_skip_compressed(TableScanDesc scan, bool skip);
extern void hypercore_skip_compressed_data_for_relation(Oid relid);
+extern int hypercore_decompress_update_segment(Relation relation, const ItemPointer ctid,
+ TupleTableSlot *slot, Snapshot snapshot,
+ ItemPointer new_ctid);
typedef struct ColumnCompressionSettings
{
diff --git a/tsl/src/init.c b/tsl/src/init.c
index 24f5c917515..cd43fa8ff48 100644
--- a/tsl/src/init.c
+++ b/tsl/src/init.c
@@ -175,6 +175,7 @@ CrossModuleFunctions tsl_cm_functions = {
.decompress_target_segments = decompress_target_segments,
.hypercore_handler = hypercore_handler,
.hypercore_proxy_handler = hypercore_proxy_handler,
+ .hypercore_decompress_update_segment = hypercore_decompress_update_segment,
.is_compressed_tid = tsl_is_compressed_tid,
.ddl_command_start = tsl_ddl_command_start,
.ddl_command_end = tsl_ddl_command_end,
diff --git a/tsl/test/expected/hypercore_columnar.out b/tsl/test/expected/hypercore_columnar.out
index d672a93a372..768f7fe846f 100644
--- a/tsl/test/expected/hypercore_columnar.out
+++ b/tsl/test/expected/hypercore_columnar.out
@@ -9,6 +9,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -18,17 +35,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -41,14 +54,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -101,11 +107,8 @@ $$, :'chunk'));
Scankey: (device < 4)
Vectorized Filter: (location = 2)
Rows Removed by Filter: 16
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 3
-(9 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(6 rows)
-- Save away all data from the chunk so that we can compare.
create table saved as select * from :chunk;
@@ -136,11 +139,8 @@ $$, :'chunk'));
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Vectorized Filter: (humidity > '110'::double precision)
Rows Removed by Filter: 204
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 30
-(8 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(5 rows)
select count(*) from :chunk where humidity > 110;
count
@@ -159,11 +159,8 @@ $$, :'chunk'));
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Vectorized Filter: (humidity > '50'::double precision)
Rows Removed by Filter: 87
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 30
-(8 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(5 rows)
select lhs.count, rhs.count
from (select count(*) from :chunk where humidity > 50) lhs,
@@ -194,11 +191,8 @@ $$, :'chunk'));
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Filter: (temp > '50'::numeric)
Rows Removed by Filter: 204
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 30
-(8 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(5 rows)
select count(*) from :chunk where temp > 50;
count
@@ -216,11 +210,8 @@ $$, :'chunk'));
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Filter: (temp > '20'::numeric)
Rows Removed by Filter: 98
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 30
-(8 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(5 rows)
select lhs.count, rhs.count
from (select count(*) from :chunk where temp > 20) lhs,
@@ -251,11 +242,8 @@ select count(*) from :chunk where humidity > 40 and temp > 20;
Filter: (temp > '20'::numeric)
Rows Removed by Filter: 132
Vectorized Filter: (humidity > '40'::double precision)
- Array Cache Hits: 0
- Array Cache Misses: 30
- Array Cache Evictions: 0
- Array Decompressions: 60
-(9 rows)
+ Array: cache misses=30, decompress count=60 calls=165
+(6 rows)
select count(*) from :chunk where humidity > 40 and temp > 20;
count
@@ -284,11 +272,8 @@ $$, :'chunk'));
Rows Removed by Filter: 3
Scankey: (device = 3)
Vectorized Filter: (humidity > '40'::double precision)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 2
-(10 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(7 rows)
select count(*) from :chunk where humidity > 40 and temp > 20 and device = 3;
count
@@ -318,11 +303,8 @@ $$, :'chunk'));
-> Seq Scan on _hyper_I_N_chunk (actual rows=N loops=N)
Filter: (device < 4)
Rows Removed by Filter: 184
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 96
-(11 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(8 rows)
drop table readings;
drop table saved;
diff --git a/tsl/test/expected/hypercore_constraints.out b/tsl/test/expected/hypercore_constraints.out
index f430699a12f..462574ec1ec 100644
--- a/tsl/test/expected/hypercore_constraints.out
+++ b/tsl/test/expected/hypercore_constraints.out
@@ -15,6 +15,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -24,17 +41,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -47,14 +60,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_copy.out b/tsl/test/expected/hypercore_copy.out
index 39ee9865a9c..64bb468b505 100644
--- a/tsl/test/expected/hypercore_copy.out
+++ b/tsl/test/expected/hypercore_copy.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_create.out b/tsl/test/expected/hypercore_create.out
index b290b7cd494..ef60e8529a5 100644
--- a/tsl/test/expected/hypercore_create.out
+++ b/tsl/test/expected/hypercore_create.out
@@ -9,6 +9,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -18,17 +35,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -41,14 +54,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_cursor.out b/tsl/test/expected/hypercore_cursor.out
index ca4249fa332..aec74cb5ef9 100644
--- a/tsl/test/expected/hypercore_cursor.out
+++ b/tsl/test/expected/hypercore_cursor.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_index_btree.out b/tsl/test/expected/hypercore_index_btree.out
index 7e0982cdd5d..6df931d5474 100644
--- a/tsl/test/expected/hypercore_index_btree.out
+++ b/tsl/test/expected/hypercore_index_btree.out
@@ -17,6 +17,23 @@ set role :ROLE_DEFAULT_PERM_USER;
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -26,17 +43,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+      -- we remove these lines to avoid having to maintain
+      -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -49,14 +62,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -351,11 +357,7 @@ select explain_analyze_anonymize(format('select * from %s where owner_id = 3', :
Scankey: (owner_id = 3)
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: (owner_id = 3)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(17 rows)
+(13 rows)
-- TODO(timescale/timescaledb-private#1117): the Decompress Count here
-- is not correct, but the result shows correctly.
@@ -364,11 +366,7 @@ select explain_analyze_anonymize(format('select * from %s where owner_id = 3', :
------------------------------------------------------------------------
Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: (owner_id = 3)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(6 rows)
+(2 rows)
reset enable_indexscan;
-- Test index scan on non-segmentby column
@@ -393,11 +391,8 @@ $$, :'hypertable'));
Index Cond: ((device_id >= 10) AND (device_id <= 20))
-> Index Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 10) AND (device_id <= 20))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 560
-(19 rows)
+ Array: cache hits=N misses=N, decompress count=N calls=N
+(16 rows)
select explain_analyze_anonymize(format($$
select device_id, avg(temp) from %s where device_id between 10 and 20
@@ -408,11 +403,8 @@ $$, :'chunk1'));
GroupAggregate (actual rows=N loops=N)
-> Index Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 10) AND (device_id <= 20))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 149
-(7 rows)
+ Array: cache hits=N misses=N, decompress count=N calls=N
+(4 rows)
-- Test index scan on segmentby column
select explain_analyze_anonymize(format($$
@@ -433,11 +425,8 @@ $$, :'hypertable'));
Index Cond: ((location_id >= 5) AND (location_id <= 10))
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 360
-(17 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(14 rows)
select explain_analyze_anonymize(format($$
select created_at, location_id, temp from %s where location_id between 5 and 10
@@ -446,11 +435,8 @@ $$, :'chunk1'));
------------------------------------------------------------------------
Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 60
-(6 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(3 rows)
-- These should generate decompressions as above, but for all columns.
select explain_analyze_anonymize(format($$
@@ -471,11 +457,7 @@ $$, :'hypertable'));
Index Cond: ((location_id >= 5) AND (location_id <= 10))
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(17 rows)
+(13 rows)
select explain_analyze_anonymize(format($$
select * from %s where location_id between 5 and 10
@@ -484,11 +466,7 @@ $$, :'chunk1'));
------------------------------------------------------------------------
Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(6 rows)
+(2 rows)
--
-- Test index only scan
@@ -518,11 +496,7 @@ $$, :'hypertable'));
Index Cond: ((location_id >= 5) AND (location_id <= 10))
-> Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(17 rows)
+(13 rows)
-- We just compare the counts here, not the full content.
select heapam.count as heapam, hypercore.count as hypercore
@@ -558,11 +532,7 @@ $$, :'hypertable'));
-> Index Only Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 5) AND (device_id <= 10))
Heap Fetches: N
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(23 rows)
+(19 rows)
select explain_analyze_anonymize(format($$
select location_id from %s where location_id between 5 and 10
@@ -571,11 +541,7 @@ $$, :'chunk1'));
------------------------------------------------------------------------
Custom Scan (ColumnarScan) on _hyper_I_N_chunk (actual rows=N loops=N)
Scankey: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(6 rows)
+(2 rows)
select explain_analyze_anonymize(format($$
select device_id from %s where device_id between 5 and 10
@@ -585,11 +551,7 @@ $$, :'chunk1'));
Index Only Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 5) AND (device_id <= 10))
Heap Fetches: N
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(7 rows)
+(3 rows)
-- Test index only scan with covering indexes.
--
@@ -620,11 +582,8 @@ $$, :'hypertable'));
Index Cond: ((location_id >= 5) AND (location_id <= 10))
-> Index Scan using _hyper_I_N_chunk_hypertable_location_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((location_id >= 5) AND (location_id <= 10))
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 150
-(20 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(17 rows)
select explain_analyze_anonymize(format($$
select device_id, avg(humidity) from %s where device_id between 5 and 10
@@ -653,11 +612,7 @@ $$, :'hypertable'));
-> Index Only Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 5) AND (device_id <= 10))
Heap Fetches: N
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(25 rows)
+(21 rows)
select explain_analyze_anonymize(format($$
select location_id, avg(humidity) from %s where location_id between 5 and 10
@@ -669,11 +624,7 @@ $$, :'chunk1'));
-> Index Only Scan using _hyper_I_N_chunk_hypertable_location_id_include_humidity_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((location_id >= 5) AND (location_id <= 10))
Heap Fetches: N
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(8 rows)
+(4 rows)
select explain_analyze_anonymize(format($$
select device_id, avg(humidity) from %s where device_id between 5 and 10
@@ -685,11 +636,7 @@ $$, :'chunk1'));
-> Index Only Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: ((device_id >= 5) AND (device_id <= 10))
Heap Fetches: N
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 0
-(8 rows)
+(4 rows)
-------------------------------------
-- Test UNIQUE and Partial indexes --
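
For context on the hunks above: the old four-line decompression statistics block (`Array Cache Hits/Misses/Evictions` and `Array Decompressions`) is collapsed into a single `Array:` summary line, and the line is omitted entirely when nothing was decompressed. A minimal sketch of what an `EXPLAIN` now reports — the table name and counter values here are illustrative, not taken from the tests:

```sql
-- Illustrative only: any hypercore-compressed hypertable would do.
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select device_id, avg(temp) from metrics where device_id = 10 group by device_id;
-- ... plan nodes ...
--   Array: cache hits=0 misses=6, decompress count=18 calls=105
```
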
diff --git a/tsl/test/expected/hypercore_index_hash.out b/tsl/test/expected/hypercore_index_hash.out
index a3a82d6b8d3..48223b96171 100644
--- a/tsl/test/expected/hypercore_index_hash.out
+++ b/tsl/test/expected/hypercore_index_hash.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -261,11 +267,8 @@ $$, :'hypertable'));
-> Partial GroupAggregate (actual rows=N loops=N)
-> Index Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (device_id = 10)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 320
-(24 rows)
+ Array: cache hits=N misses=N, decompress count=N calls=N
+(21 rows)
select explain_analyze_anonymize(format($$
select device_id, avg(temp) from %s where device_id = 10
@@ -276,11 +279,8 @@ $$, :'chunk1'));
GroupAggregate (actual rows=N loops=N)
-> Index Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (device_id = 10)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 17
-(7 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(4 rows)
-- Test index scan on segmentby column
select explain_analyze_anonymize(format($$
@@ -301,11 +301,8 @@ $$, :'hypertable'));
Index Cond: (location_id = 5)
-> Index Scan using _hyper_I_N_chunk_hypertable_location_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (location_id = 5)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 60
-(17 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(14 rows)
select explain_analyze_anonymize(format($$
select created_at, location_id, temp from %s where location_id = 5
@@ -314,11 +311,8 @@ $$, :'chunk1'));
----------------------------------------------------------------------------------------------------------
Index Scan using _hyper_I_N_chunk_hypertable_location_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (location_id = 5)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 10
-(6 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(3 rows)
-- These should generate decompressions as above, but for all columns.
select explain_analyze_anonymize(format($$
@@ -339,11 +333,8 @@ $$, :'hypertable'));
Index Cond: (location_id = 5)
-> Index Scan using _hyper_I_N_chunk_hypertable_location_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (location_id = 5)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 60
-(17 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(14 rows)
select explain_analyze_anonymize(format($$
select * from %s where location_id = 5
@@ -352,11 +343,8 @@ $$, :'chunk1'));
----------------------------------------------------------------------------------------------------------
Index Scan using _hyper_I_N_chunk_hypertable_location_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
Index Cond: (location_id = 5)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 10
-(6 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(3 rows)
drop table :hypertable cascade;
NOTICE: drop cascades to view chunk_indexes
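
The `anonymize()` helper added above is plain plpgsql, so it can be exercised directly. A quick sketch of the normalization it performs — the input strings are invented for illustration:

```sql
-- Chunk names and row counts are replaced with placeholders...
select anonymize('Index Scan using _hyper_3_17_chunk_idx on _hyper_3_17_chunk (actual rows=42 loops=1)');
-- => Index Scan using _hyper_I_N_chunk_idx on _hyper_I_N_chunk (actual rows=N loops=N)

-- ...and the compact Array: summary line has its counters masked as well.
select anonymize('  Array: cache hits=12 misses=3, decompress count=9 calls=70');
-- =>   Array: cache hits=N misses=N, decompress count=N calls=N
```
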
diff --git a/tsl/test/expected/hypercore_insert.out b/tsl/test/expected/hypercore_insert.out
index 87a2c660bf1..fe2efd39c62 100644
--- a/tsl/test/expected/hypercore_insert.out
+++ b/tsl/test/expected/hypercore_insert.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_join.out b/tsl/test/expected/hypercore_join.out
index d901dc4c9c5..b55d547f772 100644
--- a/tsl/test/expected/hypercore_join.out
+++ b/tsl/test/expected/hypercore_join.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -176,11 +182,8 @@ $$, :'chunk1'));
-> Index Scan using _hyper_I_N_chunk_the_hypercore_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
-> Index Scan using _hyper_I_N_chunk_the_hypercore_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
-> Index Scan using _hyper_I_N_chunk_hypertable_device_id_idx on _hyper_I_N_chunk (actual rows=N loops=N)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 9
-(12 rows)
+ Array: cache hits=N misses=N, decompress count=N calls=N
+(9 rows)
-- Check that it generates the right result
select * into :inner from :chunk1 join the_hypercore using (device_id);
diff --git a/tsl/test/expected/hypercore_merge.out b/tsl/test/expected/hypercore_merge.out
index c45432f2b09..f468c2c65f3 100644
--- a/tsl/test/expected/hypercore_merge.out
+++ b/tsl/test/expected/hypercore_merge.out
@@ -15,6 +15,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -24,17 +41,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -47,14 +60,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -190,11 +196,8 @@ $$, :'hypertable'));
Index Cond: (created_at = sd.created_at)
-> Index Scan using "6_6_readings_created_at_key" on _hyper_I_N_chunk ht_7 (actual rows=N loops=N)
Index Cond: (created_at = sd.created_at)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 1
-(32 rows)
+ Array: cache misses=N, decompress count=N calls=N
+(29 rows)
-- Now, the inserted rows should show up, but not the ones that
-- already exist.
@@ -280,11 +283,8 @@ $$, :'hypertable'));
Index Cond: (created_at = sd.created_at)
-> Index Scan using "6_6_readings_created_at_key" on _hyper_I_N_chunk ht_7 (actual rows=N loops=N)
Index Cond: (created_at = sd.created_at)
- Array Cache Hits: N
- Array Cache Misses: N
- Array Cache Evictions: N
- Array Decompressions: 2
-(32 rows)
+ Array: cache hits=N misses=N, decompress count=N calls=N
+(29 rows)
\x on
select * from :hypertable where not _timescaledb_debug.is_compressed_tid(ctid);
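
The `_timescaledb_debug.is_compressed_tid()` call used above reports whether a row's `ctid` points into compressed storage. A hedged sketch of using it to split a hypertable's rows by storage form — the `readings` table name is an assumption:

```sql
-- Rows still held in compressed form vs. plain heap rows.
select _timescaledb_debug.is_compressed_tid(ctid) as compressed, count(*)
from readings
group by 1;
```
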
diff --git a/tsl/test/expected/hypercore_parallel.out b/tsl/test/expected/hypercore_parallel.out
index 7219ebcd3a5..406e55c6e82 100644
--- a/tsl/test/expected/hypercore_parallel.out
+++ b/tsl/test/expected/hypercore_parallel.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_scans.out b/tsl/test/expected/hypercore_scans.out
index dd4f37f39d0..67bb6dbfa84 100644
--- a/tsl/test/expected/hypercore_scans.out
+++ b/tsl/test/expected/hypercore_scans.out
@@ -54,11 +54,7 @@ select * from :chunk where ctid = :'ctid';
------------------------------------------------------
Tid Scan on _hyper_1_1_chunk (actual rows=1 loops=1)
TID Cond: (ctid = '(2147483650,1)'::tid)
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(6 rows)
+(2 rows)
select * from :chunk where ctid = :'ctid';
time | location | device | temp | humidity
@@ -75,11 +71,7 @@ select * from :chunk where ctid = :'ctid';
------------------------------------------------------
Tid Scan on _hyper_1_1_chunk (actual rows=1 loops=1)
TID Cond: (ctid = '(0,1)'::tid)
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(6 rows)
+(2 rows)
select * from :chunk where ctid = :'ctid';
time | location | device | temp | humidity
@@ -123,11 +115,147 @@ select time, temp + humidity from readings where device between 5 and 10 and hum
Index Cond: ((device >= 5) AND (device <= 10))
Filter: (humidity > '5'::double precision)
Rows Removed by Filter: 6
- Array Cache Hits: 0
- Array Cache Misses: 6
- Array Cache Evictions: 0
- Array Decompressions: 18
-(30 rows)
+ Array: cache misses=6, decompress count=18 calls=105
+(27 rows)
+
+-- Testing JSON format to make sure it works and to get coverage for
+-- those parts of the code.
+explain (analyze, costs off, timing off, summary off, decompress_cache_stats, format json)
+select time, temp + humidity from readings where device between 5 and 10 and humidity > 5;
+ QUERY PLAN
+---------------------------------------------------------------------
+ [ +
+ { +
+ "Plan": { +
+ "Node Type": "Result", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Actual Rows": 1624, +
+ "Actual Loops": 1, +
+ "Plans": [ +
+ { +
+ "Node Type": "Append", +
+ "Parent Relationship": "Outer", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Actual Rows": 1624, +
+ "Actual Loops": 1, +
+ "Subplans Removed": 0, +
+ "Plans": [ +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_1_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_1_chunk", +
+ "Alias": "_hyper_1_1_chunk", +
+ "Actual Rows": 34, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 1 +
+ }, +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_2_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_2_chunk", +
+ "Alias": "_hyper_1_2_chunk", +
+ "Actual Rows": 404, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 17 +
+ }, +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_3_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_3_chunk", +
+ "Alias": "_hyper_1_3_chunk", +
+ "Actual Rows": 380, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 23 +
+ }, +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_4_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_4_chunk", +
+ "Alias": "_hyper_1_4_chunk", +
+ "Actual Rows": 359, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 18 +
+ }, +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_5_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_5_chunk", +
+ "Alias": "_hyper_1_5_chunk", +
+ "Actual Rows": 379, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 16 +
+ }, +
+ { +
+ "Node Type": "Index Scan", +
+ "Parent Relationship": "Member", +
+ "Parallel Aware": false, +
+ "Async Capable": false, +
+ "Scan Direction": "Forward", +
+ "Index Name": "_hyper_1_6_chunk_readings_device_idx",+
+ "Relation Name": "_hyper_1_6_chunk", +
+ "Alias": "_hyper_1_6_chunk", +
+ "Actual Rows": 68, +
+ "Actual Loops": 1, +
+ "Index Cond": "((device >= 5) AND (device <= 10))", +
+ "Rows Removed by Index Recheck": 0, +
+ "Filter": "(humidity > '5'::double precision)", +
+ "Rows Removed by Filter": 6 +
+ } +
+ ] +
+ } +
+ ] +
+ }, +
+ "Triggers": [ +
+ ] +
+ }, +
+ "Arrow Array Cache": { +
+ "hits": 0, +
+ "misses": 6, +
+ "evictions": 0 +
+ }, +
+ "Arrow Array Decompress": { +
+ "count": 18, +
+ "calls": 105 +
+ } +
+ ]
+(1 row)
-- Check the explain cache information output.
--
@@ -163,11 +291,8 @@ select time, temp + humidity from readings where device between 5 and 10 and hum
Index Cond: ((device >= 5) AND (device <= 10))
Filter: (humidity > '5'::double precision)
Rows Removed by Filter: 6
- Array Cache Hits: 0
- Array Cache Misses: 6
- Array Cache Evictions: 0
- Array Decompressions: 18
-(30 rows)
+ Array: cache misses=6, decompress count=18 calls=105
+(27 rows)
-- Check the explain cache information output. Query 1 and 3 should
-- show the same explain plan, and the plan in the middle should not
@@ -178,11 +303,7 @@ select * from :chunk where device between 5 and 10;
----------------------------------------------------------------------------------------------------
Index Scan using _hyper_1_1_chunk_readings_device_idx on _hyper_1_1_chunk (actual rows=35 loops=1)
Index Cond: ((device >= 5) AND (device <= 10))
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(6 rows)
+(2 rows)
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select * from :chunk where device between 5 and 10;
@@ -190,11 +311,7 @@ select * from :chunk where device between 5 and 10;
----------------------------------------------------------------------------------------------------
Index Scan using _hyper_1_1_chunk_readings_device_idx on _hyper_1_1_chunk (actual rows=35 loops=1)
Index Cond: ((device >= 5) AND (device <= 10))
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(6 rows)
+(2 rows)
-- Queries that will select just a few columns
set max_parallel_workers_per_gather to 0;
@@ -215,11 +332,8 @@ select device, humidity from readings where device between 5 and 10;
Index Cond: ((device >= 5) AND (device <= 10))
-> Index Scan using _hyper_1_6_chunk_readings_device_idx on _hyper_1_6_chunk (actual rows=74 loops=1)
Index Cond: ((device >= 5) AND (device <= 10))
- Array Cache Hits: 0
- Array Cache Misses: 6
- Array Cache Evictions: 0
- Array Decompressions: 6
-(17 rows)
+ Array: cache misses=6, decompress count=6 calls=35
+(14 rows)
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select device, avg(humidity) from readings where device between 5 and 10
@@ -242,11 +356,8 @@ group by device;
Index Cond: ((device >= 5) AND (device <= 10))
-> Index Scan using _hyper_1_6_chunk_readings_device_idx on _hyper_1_6_chunk (actual rows=74 loops=1)
Index Cond: ((device >= 5) AND (device <= 10))
- Array Cache Hits: 0
- Array Cache Misses: 6
- Array Cache Evictions: 0
- Array Decompressions: 6
-(20 rows)
+ Array: cache misses=6, decompress count=6 calls=35
+(17 rows)
-- Test on conflict: insert the same data as before, but throw away
-- the updates.
@@ -266,11 +377,8 @@ on conflict (location, device, time) do nothing;
-> Custom Scan (ChunkDispatch) (actual rows=8641 loops=1)
-> Subquery Scan on "*SELECT*" (actual rows=8641 loops=1)
-> Function Scan on generate_series t (actual rows=8641 loops=1)
- Array Cache Hits: 0
- Array Cache Misses: 2
- Array Cache Evictions: 0
- Array Decompressions: 4
-(13 rows)
+ Array: cache misses=2, decompress count=4 calls=4
+(10 rows)
-- This should show values for all columns
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
@@ -299,11 +407,7 @@ select time, temp + humidity from readings where device between 5 and 10 and hum
-> Index Scan using _hyper_1_6_chunk_readings_device_idx on _hyper_1_6_chunk (never executed)
Index Cond: ((device >= 5) AND (device <= 10))
Filter: (humidity > '5'::double precision)
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(26 rows)
+(22 rows)
select time, temp + humidity from readings where device between 5 and 10 and humidity > 5 limit 5;
time | ?column?
@@ -342,11 +446,8 @@ order by time desc;
-> Custom Scan (ColumnarScan) on _hyper_1_1_chunk (actual rows=88 loops=1)
Vectorized Filter: (location = '1'::text)
Rows Removed by Filter: 319
- Array Cache Hits: 0
- Array Cache Misses: 30
- Array Cache Evictions: 0
- Array Decompressions: 84
-(10 rows)
+ Array: cache misses=30, decompress count=84 calls=242
+(7 rows)
-- Save the data for comparison with seqscan
create temp table chunk_saved as
@@ -404,11 +505,7 @@ select count(*) from :chunk where device = 1;
Aggregate (actual rows=1 loops=1)
-> Custom Scan (ColumnarScan) on _hyper_1_1_chunk (actual rows=17 loops=1)
Scankey: (device = 1)
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(7 rows)
+(3 rows)
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select device from :chunk where device = 1;
@@ -416,11 +513,7 @@ select device from :chunk where device = 1;
-------------------------------------------------------------------------
Custom Scan (ColumnarScan) on _hyper_1_1_chunk (actual rows=17 loops=1)
Scankey: (device = 1)
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(6 rows)
+(2 rows)
-- Using a non-segmentby column will decompress that column
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
@@ -431,11 +524,8 @@ select count(*) from :chunk where location = 1::text;
-> Custom Scan (ColumnarScan) on _hyper_1_1_chunk (actual rows=89 loops=1)
Vectorized Filter: (location = '1'::text)
Rows Removed by Filter: 320
- Array Cache Hits: 0
- Array Cache Misses: 30
- Array Cache Evictions: 0
- Array Decompressions: 30
-(8 rows)
+ Array: cache misses=30, decompress count=30 calls=30
+(5 rows)
-- Testing same thing with SeqScan. It still decompresses in the
-- count(*) case, although it shouldn't have to. So, probably an
@@ -449,11 +539,8 @@ select count(*) from :chunk where device = 1;
-> Seq Scan on _hyper_1_1_chunk (actual rows=17 loops=1)
Filter: (device = 1)
Rows Removed by Filter: 392
- Array Cache Hits: 0
- Array Cache Misses: 30
- Array Cache Evictions: 0
- Array Decompressions: 62
-(8 rows)
+ Array: cache misses=30, decompress count=62 calls=410
+(5 rows)
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select device from :chunk where device = 1;
@@ -462,11 +549,7 @@ select device from :chunk where device = 1;
Seq Scan on _hyper_1_1_chunk (actual rows=17 loops=1)
Filter: (device = 1)
Rows Removed by Filter: 392
- Array Cache Hits: 0
- Array Cache Misses: 0
- Array Cache Evictions: 0
- Array Decompressions: 0
-(7 rows)
+(3 rows)
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select count(*) from :chunk where location = 1::text;
@@ -476,11 +559,8 @@ select count(*) from :chunk where location = 1::text;
-> Seq Scan on _hyper_1_1_chunk (actual rows=89 loops=1)
Filter: (location = '1'::text)
Rows Removed by Filter: 320
- Array Cache Hits: 0
- Array Cache Misses: 30
- Array Cache Evictions: 0
- Array Decompressions: 62
-(8 rows)
+ Array: cache misses=30, decompress count=62 calls=410
+(5 rows)
-- ColumnarScan declares itself as projection capable. This query
-- would add a Result node on top if ColumnarScan couldn't project.
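
The scans above encode an asymmetry worth calling out: a predicate on the segmentby column is pushed down as a `Scankey` and triggers no decompression (so no `Array:` line is printed), while a predicate on any other column appears as a `Vectorized Filter` or plain `Filter` and decompresses that column. A minimal sketch under the same assumptions as the test setup, with `device` as the segmentby column:

```sql
-- Assumed setup: readings compressed with timescaledb.compress_segmentby = 'device'.
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select count(*) from readings where device = 1;       -- Scankey; no Array: line
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select count(*) from readings where location = '1';   -- Filter; decompresses location
```
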
diff --git a/tsl/test/expected/hypercore_stats.out b/tsl/test/expected/hypercore_stats.out
index 6ac8d9ef03c..eb901217a41 100644
--- a/tsl/test/expected/hypercore_stats.out
+++ b/tsl/test/expected/hypercore_stats.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_trigger.out b/tsl/test/expected/hypercore_trigger.out
index 3de60ad3cdd..4ac4daeab42 100644
--- a/tsl/test/expected/hypercore_trigger.out
+++ b/tsl/test/expected/hypercore_trigger.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_types.out b/tsl/test/expected/hypercore_types.out
index 4de32b94590..b76c66945d5 100644
--- a/tsl/test/expected/hypercore_types.out
+++ b/tsl/test/expected/hypercore_types.out
@@ -9,6 +9,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -18,17 +35,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -41,14 +54,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_update.out b/tsl/test/expected/hypercore_update.out
index 3426606c135..bd45a0e7c58 100644
--- a/tsl/test/expected/hypercore_update.out
+++ b/tsl/test/expected/hypercore_update.out
@@ -15,6 +15,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -24,17 +41,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -47,14 +60,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/expected/hypercore_vacuum_full.out b/tsl/test/expected/hypercore_vacuum_full.out
index 99643f1d882..6866229a1c2 100644
--- a/tsl/test/expected/hypercore_vacuum_full.out
+++ b/tsl/test/expected/hypercore_vacuum_full.out
@@ -14,6 +14,23 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -23,17 +40,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -46,14 +59,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
diff --git a/tsl/test/sql/hypercore_scans.sql b/tsl/test/sql/hypercore_scans.sql
index 63beaef30f3..b7cc7f276c7 100644
--- a/tsl/test/sql/hypercore_scans.sql
+++ b/tsl/test/sql/hypercore_scans.sql
@@ -70,6 +70,11 @@ select * from :chunk where device between 5 and 10;
explain (analyze, costs off, timing off, summary off, decompress_cache_stats)
select time, temp + humidity from readings where device between 5 and 10 and humidity > 5;
+-- Testing JSON format to make sure it works and to get coverage for
+-- those parts of the code.
+explain (analyze, costs off, timing off, summary off, decompress_cache_stats, format json)
+select time, temp + humidity from readings where device between 5 and 10 and humidity > 5;
+
-- Check the explain cache information output.
--
-- Query 1 and 3 should show the same explain plan, and the plan in
diff --git a/tsl/test/sql/include/hypercore_helpers.sql b/tsl/test/sql/include/hypercore_helpers.sql
index c8a1bd49d0f..2fb9e0ec9ac 100644
--- a/tsl/test/sql/include/hypercore_helpers.sql
+++ b/tsl/test/sql/include/hypercore_helpers.sql
@@ -6,6 +6,24 @@
-- emitted plan. This is intended to be used when the structure of the
-- plan is important, but not the specific chunks scanned nor the
-- number of heap fetches, rows, loops, etc.
+create function anonymize(ln text) returns text language plpgsql as
+$$
+begin
+ ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
+ ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
+ ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
+ ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
+
+ if trim(both from ln) like 'Array: %' then
+ ln := regexp_replace(ln, 'hits=\d+', 'hits=N');
+ ln := regexp_replace(ln, 'misses=\d+', 'misses=N');
+ ln := regexp_replace(ln, 'count=\d+', 'count=N');
+ ln := regexp_replace(ln, 'calls=\d+', 'calls=N');
+ end if;
+ return ln;
+end
+$$;
+
create function explain_analyze_anonymize(text) returns setof text
language plpgsql as
$$
@@ -15,17 +33,13 @@ begin
for ln in
execute format('explain (analyze, costs off, summary off, timing off, decompress_cache_stats) %s', $1)
loop
- if trim(both from ln) like 'Group Key:%' then
+ -- Group keys are shown for plans in PG15 but not others, so
+ -- we remove these lines to avoid having to have
+ -- version-sensitive tests.
+ if trim(both from ln) like 'Group Key:%' then
continue;
end if;
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
@@ -39,14 +53,7 @@ begin
for ln in
execute format('explain (costs off, summary off, timing off) %s', $1)
loop
- ln := regexp_replace(ln, 'Array Cache Hits: \d+', 'Array Cache Hits: N');
- ln := regexp_replace(ln, 'Array Cache Misses: \d+', 'Array Cache Misses: N');
- ln := regexp_replace(ln, 'Array Cache Evictions: \d+', 'Array Cache Evictions: N');
- ln := regexp_replace(ln, 'Heap Fetches: \d+', 'Heap Fetches: N');
- ln := regexp_replace(ln, 'Workers Launched: \d+', 'Workers Launched: N');
- ln := regexp_replace(ln, 'actual rows=\d+ loops=\d+', 'actual rows=N loops=N');
- ln := regexp_replace(ln, '_hyper_\d+_\d+_chunk', '_hyper_I_N_chunk', 1, 0);
- return next ln;
+ return next anonymize(ln);
end loop;
end;
$$;
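
With the helpers consolidated into `hypercore_helpers.sql`, each test includes that file and invokes them the same way the expected files above do. A usage sketch — the `:'hypertable'` psql variable is set by the tests' setup scripts:

```sql
select explain_analyze_anonymize(format($$
    select device_id, avg(temp) from %s where device_id = 10
    group by device_id
$$, :'hypertable'));
```
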