diff --git a/doc/user/content/free-trial-faqs.md b/doc/user/content/free-trial-faqs.md
index d4445024dbb7..d10f812e3a68 100644
--- a/doc/user/content/free-trial-faqs.md
+++ b/doc/user/content/free-trial-faqs.md
@@ -45,7 +45,7 @@ trial period.
To see your current credit consumption rate, measured in credits per hour, run
the following query against Materialize:
-```sql
+```mzsql
SELECT sum(s.credits_per_hour) AS credit_consumption_rate
FROM mz_cluster_replicas r
JOIN mz_cluster_replica_sizes s ON r.size = s.size;
@@ -68,7 +68,7 @@ No, you cannot go over the rate limit of 4 credits per hour at any time during
your free trial. If you try to add a replica that puts you over the limit,
Materialize will return an error similar to:
-```sql
+```nofmt
Error: creating cluster replica would violate max_credit_consumption_rate limit (desired: 6, limit: 4, current: 3)
Hint: Drop an existing cluster replica or contact support to request a limit increase.
```
diff --git a/doc/user/content/get-started/isolation-level.md b/doc/user/content/get-started/isolation-level.md
index 0f2f281b3eed..915cf45c4652 100644
--- a/doc/user/content/get-started/isolation-level.md
+++ b/doc/user/content/get-started/isolation-level.md
@@ -35,11 +35,11 @@ Isolation level is a configuration parameter that can be set by the user on a se
## Examples
-```sql
+```mzsql
SET TRANSACTION_ISOLATION TO 'SERIALIZABLE';
```
-```sql
+```mzsql
SET TRANSACTION_ISOLATION TO 'STRICT SERIALIZABLE';
```
diff --git a/doc/user/content/get-started/quickstart.md b/doc/user/content/get-started/quickstart.md
index c0c1ed426264..33db5844822d 100644
--- a/doc/user/content/get-started/quickstart.md
+++ b/doc/user/content/get-started/quickstart.md
@@ -45,7 +45,7 @@ purchase items only to quickly resell them for profit.
1. Let's start by kicking off the built-in [auction load generator](/sql/create-source/load-generator/#auction), so you have some data to work
with.
- ```sql
+ ```mzsql
CREATE SOURCE auction_house
FROM LOAD GENERATOR AUCTION
(TICK INTERVAL '1s', AS OF 100000)
@@ -53,7 +53,7 @@ with.
```
1. Use the [`SHOW SOURCES`](/sql/show-sources/) command to get an idea of the data being generated:
- ```sql
+ ```mzsql
SHOW SOURCES;
```
@@ -75,7 +75,7 @@ with.
1. Before moving on, get a sense for the data you'll be working with:
- ```sql
+ ```mzsql
SELECT * FROM auctions LIMIT 1;
```
@@ -86,7 +86,7 @@ with.
1 | 1824 | Best Pizza in Town | 2023-09-10 21:24:54.838+00
```
- ```sql
+ ```mzsql
SELECT * FROM bids LIMIT 1;
```
@@ -107,7 +107,7 @@ with.
joins data from `auctions` and `bids` to get the bid with the highest `amount`
for each auction at its `end_time`.
- ```sql
+ ```mzsql
CREATE VIEW winning_bids AS
SELECT DISTINCT ON (auctions.id) bids.*, auctions.item, auctions.seller
FROM auctions, bids
@@ -130,7 +130,7 @@ for each auction at its `end_time`.
yet! Querying the view re-runs the embedded statement, which comes at some cost
as the amount of data grows.
- ```sql
+ ```mzsql
SELECT * FROM winning_bids
WHERE item = 'Best Pizza in Town'
ORDER BY bid_time DESC;
@@ -142,7 +142,7 @@ on growing amounts of data.
1. Next, try creating several indexes on the `winning_bids` view using columns
that can help optimize operations like point lookups and joins.
- ```sql
+ ```mzsql
CREATE INDEX wins_by_item ON winning_bids (item);
CREATE INDEX wins_by_bidder ON winning_bids (buyer);
CREATE INDEX wins_by_seller ON winning_bids (seller);
@@ -156,7 +156,7 @@ that can help optimize operations like point lookups and joins.
indexes (e.g., with a point lookup), things should be a whole lot more
interactive.
- ```sql
+ ```mzsql
SELECT * FROM winning_bids WHERE item = 'Best Pizza in Town' ORDER BY bid_time DESC;
```
@@ -172,7 +172,7 @@ interactive.
1. Create a view that detects when a user wins an auction as a bidder, and then
is identified as a seller for an item at a higher price.
- ```sql
+ ```mzsql
CREATE VIEW fraud_activity AS
SELECT w2.seller,
w2.item AS seller_item,
@@ -191,7 +191,7 @@ is identified as a seller for an item at a higher price.
Aha! You can now catch any auction flippers in real time, based on the results of this view.
- ```sql
+ ```mzsql
SELECT * FROM fraud_activity LIMIT 100;
```
@@ -205,7 +205,7 @@ is identified as a seller for an item at a higher price.
1. Create a [**table**](/sql/create-table/) that allows you to manually flag
fraudulent accounts.
- ```sql
+ ```mzsql
CREATE TABLE fraud_accounts (id bigint);
```
@@ -215,7 +215,7 @@ fraudulent accounts.
1. To see results change over time, let's `SUBSCRIBE` to a query that returns
the Top 5 auction winners, overall.
- ```sql
+ ```mzsql
SUBSCRIBE TO (
SELECT buyer, count(*)
FROM winning_bids
@@ -232,7 +232,7 @@ the Top 5 auction winners, overall.
the `SUBSCRIBE`, and mark them as fraudulent by adding them to the
`fraud_accounts` table.
- ```sql
+ ```mzsql
INSERT INTO fraud_accounts VALUES ();
```
@@ -256,7 +256,7 @@ operational use case: profit & loss alerts.
1. Create a view to track the sales and purchases of each auction house user.
- ```sql
+ ```mzsql
CREATE VIEW funds_movement AS
SELECT id, SUM(credits) as credits, SUM(debits) as debits
FROM (
@@ -277,13 +277,13 @@ operational use case: profit & loss alerts.
spot that results are correct and consistent. As an example, the total credit
and total debit amounts should always add up.
- ```sql
+ ```mzsql
SELECT SUM(credits), SUM(debits) FROM funds_movement;
```
You can also `SUBSCRIBE` to this query, and watch the sums change in lock step as auctions close.
- ```sql
+ ```mzsql
SUBSCRIBE TO (
SELECT SUM(credits), SUM(debits) FROM funds_movement
);
@@ -297,7 +297,7 @@ As the auction house operator, you should now have a high degree of confidence t
Once you’re done exploring the auction house source, remember to clean up your environment:
-```sql
+```mzsql
DROP SOURCE auction_house CASCADE;
DROP TABLE fraud_accounts;
diff --git a/doc/user/content/ingest-data/amazon-eventbridge.md b/doc/user/content/ingest-data/amazon-eventbridge.md
index 9e45e8287cce..24f1599f1b41 100644
--- a/doc/user/content/ingest-data/amazon-eventbridge.md
+++ b/doc/user/content/ingest-data/amazon-eventbridge.md
@@ -29,7 +29,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -40,7 +40,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between Amazon EventBridge and Materialize, you must create
a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET eventbridge_webhook_secret AS '';
```
@@ -54,7 +54,7 @@ in Materialize to ingest data from Amazon EventBridge. By default, the source
will be created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE eventbridge_source
FROM WEBHOOK
BODY FORMAT JSON
@@ -121,7 +121,7 @@ Amazon EventBridge, you can now query the incoming data:
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
SELECT * FROM eventbridge_source LIMIT 10;
```
diff --git a/doc/user/content/ingest-data/amazon-msk.md b/doc/user/content/ingest-data/amazon-msk.md
index e3812f7deedc..2ea417171a49 100644
--- a/doc/user/content/ingest-data/amazon-msk.md
+++ b/doc/user/content/ingest-data/amazon-msk.md
@@ -56,7 +56,7 @@ the TCP listeners (step 3) and the VPC endpoint service (step 5).
In Materialize, create a source connection that uses the SSH tunnel connection you configured in the previous section:
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
@@ -201,7 +201,7 @@ The process to connect Materialize to Amazon MSK consists of the following steps
   e. Create a connection using the command below. The broker URL is what you copied in step c of this subsection. The `` is the name of the topic you created in Step 4. The `` and `` are from _Store a new secret_ under Step 2.
- ```sql
+ ```mzsql
CREATE SECRET msk_password AS '';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -226,7 +226,7 @@ multiple [`CREATE SOURCE`](/sql/create-source/kafka/) statements. By default,
the source will be created in the active cluster; to use a different cluster,
use the `IN CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;
diff --git a/doc/user/content/ingest-data/cdc-sql-server.md b/doc/user/content/ingest-data/cdc-sql-server.md
index a942e10a8031..4a230f205305 100644
--- a/doc/user/content/ingest-data/cdc-sql-server.md
+++ b/doc/user/content/ingest-data/cdc-sql-server.md
@@ -154,7 +154,7 @@ information about upstream database operations, like the `before` and `after`
values for each record. To create a source that interprets the
[Debezium envelope](/sql/create-source/kafka/#using-debezium) in Materialize:
-```sql
+```mzsql
CREATE SOURCE kafka_repl
FROM KAFKA CONNECTION kafka_connection (TOPIC 'server1.testDB.tableName')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -177,7 +177,7 @@ Any materialized view defined on top of this source will be incrementally
updated as new change events stream in through Kafka, resulting from `INSERT`,
`UPDATE`, and `DELETE` operations in the original SQL Server database.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW cnt_table AS
SELECT field1,
COUNT(*) AS cnt
diff --git a/doc/user/content/ingest-data/confluent-cloud.md b/doc/user/content/ingest-data/confluent-cloud.md
index 8d7521da69f5..197299352f1f 100644
--- a/doc/user/content/ingest-data/confluent-cloud.md
+++ b/doc/user/content/ingest-data/confluent-cloud.md
@@ -88,7 +88,7 @@ of the following steps:
Step 4. The `` and `` are from the _Create
an API Key_ step.
- ```sql
+ ```mzsql
CREATE SECRET confluent_username AS '';
CREATE SECRET confluent_password AS '';
diff --git a/doc/user/content/ingest-data/hubspot.md b/doc/user/content/ingest-data/hubspot.md
index e76d77dc6a4d..1cb2952e080e 100644
--- a/doc/user/content/ingest-data/hubspot.md
+++ b/doc/user/content/ingest-data/hubspot.md
@@ -27,7 +27,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -37,7 +37,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between HubSpot and Materialize, you must create a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET hubspot_webhook_secret AS '';
```
@@ -51,7 +51,7 @@ in Materialize to ingest data from HubSpot. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE hubspot_source
FROM WEBHOOK
BODY FORMAT JSON
@@ -159,7 +159,7 @@ HubSpot, you can now query the incoming data:
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
SELECT * FROM hubspot_source LIMIT 10;
```
@@ -171,7 +171,7 @@ Webhook data is ingested as a JSON blob. We recommend creating a parsing view on
top of your webhook source that uses [`jsonb` operators](/sql/types/jsonb/#operators)
to map the individual fields to columns with the required data types.
-```sql
+```mzsql
CREATE VIEW parse_hubspot AS SELECT
body->>'city' AS city,
body->>'firstname' AS firstname,
diff --git a/doc/user/content/ingest-data/kafka-self-hosted.md b/doc/user/content/ingest-data/kafka-self-hosted.md
index 6e9d5f30c4c0..b929801a8052 100644
--- a/doc/user/content/ingest-data/kafka-self-hosted.md
+++ b/doc/user/content/ingest-data/kafka-self-hosted.md
@@ -67,7 +67,7 @@ endpoint service (step 5).
In Materialize, create a source connection that uses the SSH tunnel connection
you configured in the previous section:
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
@@ -82,7 +82,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -92,7 +92,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
1. Create a [Kafka connection](/sql/create-connection/#kafka) that references
your Kafka cluster:
- ```sql
+ ```mzsql
CREATE SECRET kafka_password AS '';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -112,7 +112,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
The Kafka connection created in the previous section can then be reused across
multiple [`CREATE SOURCE`](/sql/create-source/kafka/) statements:
-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;
diff --git a/doc/user/content/ingest-data/mysql/amazon-aurora.md b/doc/user/content/ingest-data/mysql/amazon-aurora.md
index 25391b8aaf0f..3e0f40b6ecc5 100644
--- a/doc/user/content/ingest-data/mysql/amazon-aurora.md
+++ b/doc/user/content/ingest-data/mysql/amazon-aurora.md
@@ -71,7 +71,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -117,7 +117,7 @@ configuration of resources for an SSH tunnel. For more details, see the
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/mysql/amazon-rds.md b/doc/user/content/ingest-data/mysql/amazon-rds.md
index 43584a7bc485..9ea46423ab19 100644
--- a/doc/user/content/ingest-data/mysql/amazon-rds.md
+++ b/doc/user/content/ingest-data/mysql/amazon-rds.md
@@ -60,13 +60,13 @@ binary logging.
reasonable value. To check the current value of the [`binlog retention hours`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)
configuration parameter, connect to your RDS instance and run:
- ```sql
+ ```mysql
CALL mysql.rds_show_configuration;
```
    If the value returned is `NULL` or less than `168` (i.e., 7 days), run:
- ```sql
+ ```mysql
CALL mysql.rds_set_configuration('binlog retention hours', 168);
```
@@ -78,12 +78,12 @@ binary logging.
1. To validate that all configuration parameters are set to the expected values
after the above configuration changes, run:
- ```sql
+ ```mysql
-- Validate "binlog retention hours" configuration parameter
CALL mysql.rds_show_configuration;
```
- ```sql
+ ```mysql
-- Validate parameter group configuration parameters
SHOW VARIABLES WHERE variable_name IN (
'log_bin',
@@ -125,7 +125,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -171,7 +171,7 @@ configuration of resources for an SSH tunnel. For more details, see the
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -232,12 +232,12 @@ available (also for PostgreSQL)."
## Step 6. Check the ingestion status
-{{% postgres-direct/check-the-ingestion-status %}}
+{{% mysql-direct/check-the-ingestion-status %}}
## Step 7. Right-size the cluster
-{{% postgres-direct/right-size-the-cluster %}}
+{{% mysql-direct/right-size-the-cluster %}}
## Next steps
-{{% postgres-direct/next-steps %}}
+{{% mysql-direct/next-steps %}}
diff --git a/doc/user/content/ingest-data/mysql/azure-db.md b/doc/user/content/ingest-data/mysql/azure-db.md
index 11f296503eef..13394cae3abc 100644
--- a/doc/user/content/ingest-data/mysql/azure-db.md
+++ b/doc/user/content/ingest-data/mysql/azure-db.md
@@ -66,7 +66,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -98,7 +98,7 @@ to serve as your SSH bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/mysql/debezium.md b/doc/user/content/ingest-data/mysql/debezium.md
index 8e60bd350a7b..b5f6f2ce80a9 100644
--- a/doc/user/content/ingest-data/mysql/debezium.md
+++ b/doc/user/content/ingest-data/mysql/debezium.md
@@ -34,7 +34,7 @@ As _root_:
1. Check the `log_bin` and `binlog_format` settings:
- ```sql
+ ```mysql
SHOW VARIABLES
WHERE variable_name IN ('log_bin', 'binlog_format');
```
@@ -51,7 +51,7 @@ As _root_:
1. Grant enough privileges to the replication user to ensure Debezium can
operate in the database:
- ```sql
+ ```mysql
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO "user";
FLUSH PRIVILEGES;
@@ -191,7 +191,7 @@ information about upstream database operations, like the `before` and `after`
values for each record. To create a source that interprets the
[Debezium envelope](/sql/create-source/kafka/#using-debezium) in Materialize:
-```sql
+```mzsql
CREATE SOURCE kafka_repl
FROM KAFKA CONNECTION kafka_connection (TOPIC 'dbserver1.db1.table1')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -213,7 +213,7 @@ Any materialized view defined on top of this source will be incrementally
updated as new change events stream in through Kafka, as a result of `INSERT`,
`UPDATE`, and `DELETE` operations in the original MySQL database.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW cnt_table1 AS
SELECT field1,
COUNT(*) AS cnt
diff --git a/doc/user/content/ingest-data/mysql/google-cloud-sql.md b/doc/user/content/ingest-data/mysql/google-cloud-sql.md
index 5f232adbe36b..43f1ccc3d31b 100644
--- a/doc/user/content/ingest-data/mysql/google-cloud-sql.md
+++ b/doc/user/content/ingest-data/mysql/google-cloud-sql.md
@@ -60,7 +60,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -92,7 +92,7 @@ network to allow traffic from the bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/mysql/self-hosted.md b/doc/user/content/ingest-data/mysql/self-hosted.md
index 6e86cbf43998..5ec4a2a57939 100644
--- a/doc/user/content/ingest-data/mysql/self-hosted.md
+++ b/doc/user/content/ingest-data/mysql/self-hosted.md
@@ -61,7 +61,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -92,7 +92,7 @@ traffic from the bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/network-security/ssh-tunnel.md b/doc/user/content/ingest-data/network-security/ssh-tunnel.md
index bc6c16f41420..ad337f66eb5a 100644
--- a/doc/user/content/ingest-data/network-security/ssh-tunnel.md
+++ b/doc/user/content/ingest-data/network-security/ssh-tunnel.md
@@ -19,7 +19,7 @@ In Materialize, create a source connection that uses the SSH tunnel connection y
{{< tabs tabID="1" >}}
{{< tab "Kafka">}}
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
@@ -29,7 +29,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
You can reuse this Kafka connection across multiple [`CREATE SOURCE`](/sql/create-source/kafka/)
statements:
-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;
@@ -37,7 +37,7 @@ CREATE SOURCE json_source
{{< /tab >}}
{{< tab "PostgreSQL">}}
-```sql
+```mzsql
CREATE SECRET pgpass AS '';
CREATE CONNECTION pg_connection TO POSTGRES (
@@ -54,7 +54,7 @@ CREATE CONNECTION pg_connection TO POSTGRES (
You can reuse this PostgreSQL connection across multiple [`CREATE SOURCE`](/sql/create-source/postgres/)
statements:
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR ALL TABLES;
@@ -62,7 +62,7 @@ CREATE SOURCE mz_source
{{< /tab >}}
{{< tab "MySQL">}}
-```sql
+```mzsql
CREATE SECRET mysqlpass AS '';
CREATE CONNECTION mysql_connection TO MYSQL (
@@ -74,7 +74,7 @@ CREATE SECRET mysqlpass AS '';
You can reuse this MySQL connection across multiple [`CREATE SOURCE`](/sql/create-source/mysql/)
statements:
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR ALL TABLES;
diff --git a/doc/user/content/ingest-data/network-security/static-ips.md b/doc/user/content/ingest-data/network-security/static-ips.md
index f039fed4d735..4df69091057d 100644
--- a/doc/user/content/ingest-data/network-security/static-ips.md
+++ b/doc/user/content/ingest-data/network-security/static-ips.md
@@ -44,10 +44,12 @@ a region. We make every effort to provide advance notice of such changes.
Show the static egress IPs associated with a region:
-```sql
+```mzsql
SELECT * FROM mz_egress_ips;
```
+
+
```nofmt
egress_ip
----------------
diff --git a/doc/user/content/ingest-data/postgres/alloydb.md b/doc/user/content/ingest-data/postgres/alloydb.md
index 11e16ad8e88a..5e9a607f7427 100644
--- a/doc/user/content/ingest-data/postgres/alloydb.md
+++ b/doc/user/content/ingest-data/postgres/alloydb.md
@@ -58,7 +58,7 @@ Materialize with AlloyDB:
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -91,7 +91,7 @@ network to allow traffic from the bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/postgres/amazon-aurora.md b/doc/user/content/ingest-data/postgres/amazon-aurora.md
index f153132698f6..8773b145613d 100644
--- a/doc/user/content/ingest-data/postgres/amazon-aurora.md
+++ b/doc/user/content/ingest-data/postgres/amazon-aurora.md
@@ -58,7 +58,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -202,7 +202,7 @@ configuration of resources for an SSH tunnel. For more details, see the
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -252,7 +252,7 @@ start by selecting the relevant option.
command to securely store the password for the `materialize` PostgreSQL user you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -260,7 +260,7 @@ start by selecting the relevant option.
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -288,7 +288,7 @@ start by selecting the relevant option.
to your Aurora instance and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication).
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -313,7 +313,7 @@ start by selecting the relevant option.
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#aws-privatelink)
command to create an AWS PrivateLink connection:
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0356210a8a432d9e9',
AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
@@ -332,7 +332,7 @@ start by selecting the relevant option.
1. Retrieve the AWS principal for the AWS PrivateLink connection you just created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -357,7 +357,7 @@ start by selecting the relevant option.
1. Validate the AWS PrivateLink connection you created using the
[`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION privatelink_svc;
```
@@ -366,7 +366,7 @@ start by selecting the relevant option.
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -374,7 +374,7 @@ start by selecting the relevant option.
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -396,7 +396,7 @@ details for Materialize to use:
to your Aurora instance via AWS PrivateLink and start ingesting data from the
publication you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -417,7 +417,7 @@ details for Materialize to use:
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel)
command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
PORT ,
@@ -434,7 +434,7 @@ details for Materialize to use:
1. Get Materialize's public keys for the SSH tunnel connection you just
created:
- ```sql
+ ```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
@@ -459,7 +459,7 @@ details for Materialize to use:
connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection)
command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -468,7 +468,7 @@ details for Materialize to use:
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -476,7 +476,7 @@ password for the `materialize` PostgreSQL user you created [earlier](#step-2-cre
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -498,7 +498,7 @@ password for the `materialize` PostgreSQL user you created [earlier](#step-2-cre
to your Aurora instance and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/content/ingest-data/postgres/amazon-rds.md b/doc/user/content/ingest-data/postgres/amazon-rds.md
index aa8f47722de8..c12c674e0b2b 100644
--- a/doc/user/content/ingest-data/postgres/amazon-rds.md
+++ b/doc/user/content/ingest-data/postgres/amazon-rds.md
@@ -32,7 +32,7 @@ As a first step, you need to make sure logical replication is enabled.
1. Check if logical replication is enabled:
- ``` sql
+ ```postgres
SELECT name, setting
FROM pg_settings
WHERE name = 'rds.logical_replication';
@@ -70,7 +70,7 @@ As a first step, you need to make sure logical replication is enabled.
1. Back in the SQL client connected to PostgreSQL, verify that replication is
now enabled:
- ``` sql
+ ```postgres
SELECT name, setting
FROM pg_settings
WHERE name = 'rds.logical_replication';
@@ -113,7 +113,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -255,7 +255,7 @@ configuration of resources for an SSH tunnel. For more details, see the
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -304,7 +304,7 @@ start by selecting the relevant option.
command to securely store the password for the `materialize` PostgreSQL user you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -312,7 +312,7 @@ start by selecting the relevant option.
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -334,7 +334,7 @@ start by selecting the relevant option.
to your RDS instance and start ingesting data from the publication you created
[earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -359,7 +359,7 @@ start by selecting the relevant option.
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#aws-privatelink)
command to create an AWS PrivateLink connection:
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0356210a8a432d9e9',
AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
@@ -379,7 +379,7 @@ start by selecting the relevant option.
1. Retrieve the AWS principal for the AWS PrivateLink connection you just
created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -404,7 +404,7 @@ start by selecting the relevant option.
1. Validate the AWS PrivateLink connection you created using the
[`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION privatelink_svc;
```
@@ -413,7 +413,7 @@ start by selecting the relevant option.
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -421,7 +421,7 @@ start by selecting the relevant option.
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -443,7 +443,7 @@ start by selecting the relevant option.
to your RDS instance via AWS PrivateLink and start ingesting data from the
publication you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -464,7 +464,7 @@ start by selecting the relevant option.
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel)
command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
PORT ,
@@ -481,7 +481,7 @@ start by selecting the relevant option.
1. Get Materialize's public keys for the SSH tunnel connection you just
created:
- ```sql
+ ```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
@@ -504,7 +504,7 @@ start by selecting the relevant option.
1. Back in the SQL client connected to Materialize, validate the SSH tunnel
connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -513,7 +513,7 @@ start by selecting the relevant option.
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -521,7 +521,7 @@ start by selecting the relevant option.
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -543,7 +543,7 @@ start by selecting the relevant option.
to your RDS instance and start ingesting data from the publication you created
[earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/content/ingest-data/postgres/azure-db.md b/doc/user/content/ingest-data/postgres/azure-db.md
index a86384066e21..889d48e9bdf9 100644
--- a/doc/user/content/ingest-data/postgres/azure-db.md
+++ b/doc/user/content/ingest-data/postgres/azure-db.md
@@ -51,7 +51,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -83,7 +83,7 @@ to serve as your SSH bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -124,7 +124,7 @@ start by selecting the relevant option.
command to securely store the password for the `materialize` PostgreSQL user
you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -132,7 +132,7 @@ start by selecting the relevant option.
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -152,7 +152,7 @@ start by selecting the relevant option.
to your Azure instance and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -177,7 +177,7 @@ created [earlier](#step-2-create-a-publication):
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel)
command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '<SSH_BASTION_HOST>',
PORT <SSH_BASTION_PORT>,
@@ -194,7 +194,7 @@ created [earlier](#step-2-create-a-publication):
1. Get Materialize's public keys for the SSH tunnel connection you just
created:
- ```sql
+ ```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
@@ -218,7 +218,7 @@ created [earlier](#step-2-create-a-publication):
connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection)
command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -227,7 +227,7 @@ created [earlier](#step-2-create-a-publication):
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -235,7 +235,7 @@ created [earlier](#step-2-create-a-publication):
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -255,7 +255,7 @@ created [earlier](#step-2-create-a-publication):
to your Azure instance and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/content/ingest-data/postgres/cloud-sql.md b/doc/user/content/ingest-data/postgres/cloud-sql.md
index 3c23fbb43f93..9a9fd848e75a 100644
--- a/doc/user/content/ingest-data/postgres/cloud-sql.md
+++ b/doc/user/content/ingest-data/postgres/cloud-sql.md
@@ -51,7 +51,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -83,7 +83,7 @@ network to allow traffic from the bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
diff --git a/doc/user/content/ingest-data/postgres/debezium.md b/doc/user/content/ingest-data/postgres/debezium.md
index 9acdc56a4b1c..6cc484d733dc 100644
--- a/doc/user/content/ingest-data/postgres/debezium.md
+++ b/doc/user/content/ingest-data/postgres/debezium.md
@@ -35,7 +35,7 @@ As a _superuser_:
1. Check the [`wal_level` configuration](https://www.postgresql.org/docs/current/wal-configuration.html)
setting:
- ```sql
+ ```postgres
SHOW wal_level;
```
@@ -155,7 +155,7 @@ Once logical replication is enabled:
has **no primary key** defined, you must set the replica identity value to
`FULL`:
- ```sql
+ ```postgres
ALTER TABLE repl_table REPLICA IDENTITY FULL;
```
@@ -303,7 +303,7 @@ information about upstream database operations, like the `before` and `after`
values for each record. To create a source that interprets the
[Debezium envelope](/sql/create-source/kafka/#using-debezium) in Materialize:
-```sql
+```mzsql
CREATE SOURCE kafka_repl
FROM KAFKA CONNECTION kafka_connection (TOPIC 'pg_repl.public.table1')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
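The Debezium envelope carries the `before` and `after` state of each changed row. For intuition, here is a minimal Python sketch of reading such an envelope — the field layout shown is illustrative, not the exact payload every connector version emits:

```python
# Sketch: reading a Debezium-style change event. The envelope shape below is
# illustrative; real payloads vary by connector and configuration.

def after_image(event):
    """Return the post-change row from a change event, or None for a delete."""
    payload = event.get("payload", event)  # some pipelines unwrap the payload
    return payload.get("after")

update_event = {
    "payload": {
        "op": "u",
        "before": {"id": 1, "balance": 100},
        "after": {"id": 1, "balance": 150},
    }
}
delete_event = {"payload": {"op": "d", "before": {"id": 2}, "after": None}}

print(after_image(update_event))  # {'id': 1, 'balance': 150}
print(after_image(delete_event))  # None
```

Materialize's `ENVELOPE DEBEZIUM` performs this interpretation for you, turning `before`/`after` pairs into retractions and insertions.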
@@ -329,7 +329,7 @@ Any materialized view defined on top of this source will be incrementally
updated as new change events stream in through Kafka, as a result of `INSERT`,
`UPDATE` and `DELETE` operations in the original Postgres database.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW cnt_table1 AS
SELECT field1,
COUNT(*) AS cnt
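Conceptually, incrementally maintaining a grouped count means applying +1/-1 diffs per key as change events arrive, rather than rescanning the input. A plain-Python sketch of that bookkeeping (for intuition only — this is not Materialize's actual dataflow engine):

```python
from collections import Counter

# Sketch: maintain COUNT(*) grouped by key from a stream of +1/-1 diffs,
# the way INSERTs and DELETEs in the upstream database update the count.
counts = Counter()

def apply_diff(key, diff):
    counts[key] += diff
    if counts[key] == 0:
        del counts[key]  # retract groups whose count drops to zero

apply_diff("a", +1)  # INSERT field1 = 'a'
apply_diff("a", +1)  # INSERT field1 = 'a'
apply_diff("b", +1)  # INSERT field1 = 'b'
apply_diff("a", -1)  # DELETE field1 = 'a'
print(dict(counts))  # {'a': 1, 'b': 1}
```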
diff --git a/doc/user/content/ingest-data/postgres/self-hosted.md b/doc/user/content/ingest-data/postgres/self-hosted.md
index 4df92e7acd9e..eec8ece9a112 100644
--- a/doc/user/content/ingest-data/postgres/self-hosted.md
+++ b/doc/user/content/ingest-data/postgres/self-hosted.md
@@ -29,7 +29,7 @@ As a first step, you need to make sure logical replication is enabled.
1. Check if logical replication is enabled:
- ```sql
+ ```postgres
SHOW wal_level;
```
@@ -43,7 +43,7 @@ As a first step, you need to make sure logical replication is enabled.
1. Back in the SQL client connected to PostgreSQL, verify that replication is
now enabled:
- ```sql
+ ```postgres
SHOW wal_level;
```
@@ -73,7 +73,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -156,7 +156,7 @@ option.
In Materialize, create an [`AWS PRIVATELINK`](/sql/create-connection/#aws-privatelink) connection that references the
endpoint service that you created in the previous step.
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce..vpce-svc-',
AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
@@ -171,7 +171,7 @@ option.
Retrieve the AWS principal for the AWS PrivateLink connection you just
created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -221,7 +221,7 @@ traffic from the bastion host.
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:
- ```sql
+ ```mzsql
SELECT * FROM mz_egress_ips;
```
@@ -261,7 +261,7 @@ start by selecting the relevant option.
command to securely store the password for the `materialize` PostgreSQL user you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -269,7 +269,7 @@ start by selecting the relevant option.
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -289,7 +289,7 @@ start by selecting the relevant option.
to your database and start ingesting data from the publication you created
[earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -310,7 +310,7 @@ start by selecting the relevant option.
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel)
command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '<SSH_BASTION_HOST>',
PORT <SSH_BASTION_PORT>,
@@ -327,7 +327,7 @@ start by selecting the relevant option.
1. Get Materialize's public keys for the SSH tunnel connection you just
created:
- ```sql
+ ```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
@@ -351,7 +351,7 @@ start by selecting the relevant option.
connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection)
command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -360,7 +360,7 @@ start by selecting the relevant option.
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the
password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -368,7 +368,7 @@ start by selecting the relevant option.
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -388,7 +388,7 @@ start by selecting the relevant option.
to your database and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -413,7 +413,7 @@ start by selecting the relevant option.
command to securely store the password for the `materialize` PostgreSQL user you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -421,7 +421,7 @@ start by selecting the relevant option.
another connection object, this time with database access and authentication
details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -441,7 +441,7 @@ start by selecting the relevant option.
to your database and start ingesting data from the publication you created
[earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/content/ingest-data/redpanda-cloud.md b/doc/user/content/ingest-data/redpanda-cloud.md
index 9cc16c697305..24e1c4788817 100644
--- a/doc/user/content/ingest-data/redpanda-cloud.md
+++ b/doc/user/content/ingest-data/redpanda-cloud.md
@@ -101,7 +101,7 @@ preferred SQL client, create a connection with your Redpanda Cloud cluster
access and authentication details using the [`CREATE CONNECTION`](/sql/create-connection/)
command:
- ```sql
+ ```mzsql
-- The credentials of your Redpanda Cloud user.
CREATE SECRET redpanda_username AS '';
CREATE SECRET redpanda_password AS '';
@@ -119,7 +119,7 @@ command:
By default, the source will be created in the active cluster; to use a
different cluster, use the `IN CLUSTER` clause.
- ```sql
+ ```mzsql
CREATE SOURCE rp_source
-- The topic you want to read from.
FROM KAFKA CONNECTION redpanda_cloud (TOPIC '')
@@ -200,7 +200,7 @@ preferred SQL client, create a [PrivateLink connection](/ingest-data/network-sec
using the service name from the previous step. Be sure to specify **all
availability zones** of your Redpanda Cloud cluster.
- ```sql
+ ```mzsql
CREATE CONNECTION rp_privatelink TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-abcdefghijk',
AVAILABILITY ZONES ('use1-az4','use1-az1','use1-az2')
@@ -210,7 +210,7 @@ availability zones** of your Redpanda Cloud cluster.
1. Retrieve the AWS principal for the AWS PrivateLink connection you just
created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -265,7 +265,7 @@ principal:
1. In Materialize, validate the AWS PrivateLink connection you created using the
[`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION rp_privatelink;
```
@@ -279,7 +279,7 @@ principal:
1. Finally, create a connection to your Redpanda Cloud cluster using the AWS
PrivateLink connection you created earlier:
- ```sql
+ ```mzsql
-- The credentials of your Redpanda Cloud user.
CREATE SECRET redpanda_username AS '';
CREATE SECRET redpanda_password AS '';
diff --git a/doc/user/content/ingest-data/rudderstack.md b/doc/user/content/ingest-data/rudderstack.md
index 5d2f39f3e8d1..fd3e400dc95c 100644
--- a/doc/user/content/ingest-data/rudderstack.md
+++ b/doc/user/content/ingest-data/rudderstack.md
@@ -28,7 +28,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -38,7 +38,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between RudderStack and Materialize, you must create a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET rudderstack_webhook_secret AS '';
```
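The `CHECK` clause of a webhook source uses this secret to reject requests that don't present it. The underlying idea is a constant-time comparison of the shared secret; a minimal sketch (names and values here are illustrative, not RudderStack's actual header scheme):

```python
import hmac

# Sketch: validating an incoming webhook by comparing a shared secret.
# compare_digest avoids the timing side channel that a plain == would leak.
STORED_SECRET = "some-sensitive-value"  # what CREATE SECRET stored

def request_is_valid(presented_secret):
    return hmac.compare_digest(STORED_SECRET, presented_secret)

print(request_is_valid("some-sensitive-value"))  # True
print(request_is_valid("wrong"))                 # False
```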
@@ -51,7 +51,7 @@ in Materialize to ingest data from RudderStack. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE rudderstack_source
FROM WEBHOOK
BODY FORMAT JSON
@@ -133,7 +133,7 @@ Rudderstack, you can now query the incoming data:
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
SELECT * FROM rudderstack_source LIMIT 10;
```
@@ -148,7 +148,7 @@ Webhook data is ingested as a JSON blob. We recommend creating a parsing view on
top of your webhook source that uses [`jsonb` operators](https://materialize.com/docs/sql/types/jsonb/#operators)
to map the individual fields to columns with the required data types.
-```sql
+```mzsql
CREATE VIEW json_parsed AS
SELECT
(body -> '_metadata' ->> 'nodeVersion')::text AS nodeVersion,
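The `->` operator returns `jsonb`, so you can keep traversing into it, while `->>` extracts the value as `text`. The Python analogue of that traversal, for intuition (the keys mirror the view above):

```python
import json

# Sketch: the Python analogue of jsonb traversal in the parsing view.
# `->` keeps a JSON value you can keep indexing into; `->>` extracts text.
body = json.loads('{"_metadata": {"nodeVersion": "18.17.0"}}')

metadata = body["_metadata"]            # like body -> '_metadata'   (still JSON)
node_version = metadata["nodeVersion"]  # like ... ->> 'nodeVersion' (a string)

print(node_version)  # 18.17.0
```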
diff --git a/doc/user/content/ingest-data/segment.md b/doc/user/content/ingest-data/segment.md
index ff5e2465922d..ff406bf9c606 100644
--- a/doc/user/content/ingest-data/segment.md
+++ b/doc/user/content/ingest-data/segment.md
@@ -29,7 +29,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -39,7 +39,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between Segment and Materialize, you must create a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET segment_webhook_secret AS '';
```
@@ -52,7 +52,7 @@ in Materialize to ingest data from Segment. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE segment_source IN CLUSTER webhooks_cluster FROM WEBHOOK
BODY FORMAT JSON
INCLUDE HEADER 'event-type' AS event_type
@@ -155,7 +155,7 @@ Segment, you can now query the incoming data:
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
SELECT * FROM segment_source LIMIT 10;
```
@@ -170,7 +170,7 @@ to map the individual fields to columns with the required data types.
{{< tabs >}}
{{< tab "Page">}}
-```sql
+```mzsql
CREATE VIEW parse_segment AS SELECT
body->>'anonymousId' AS anonymousId,
body->>'channel' AS channel,
@@ -195,7 +195,7 @@ FROM segment_source;
{{< tab "Track">}}
-```sql
+```mzsql
CREATE VIEW parse_segment AS SELECT
body->>'anonymousId' AS anonymous_id,
body->'context'->'library'->>'name' AS context_library_name,
@@ -222,7 +222,7 @@ FROM segment_source;
{{< /tab >}}
{{< tab "Identity">}}
-```sql
+```mzsql
CREATE VIEW parse_segment AS SELECT
body->>'anonymousId' AS anonymous_id,
body->>'channel' AS channel,
diff --git a/doc/user/content/ingest-data/snowcatcloud.md b/doc/user/content/ingest-data/snowcatcloud.md
index 7c832efe3530..d600a04b779d 100644
--- a/doc/user/content/ingest-data/snowcatcloud.md
+++ b/doc/user/content/ingest-data/snowcatcloud.md
@@ -28,7 +28,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -38,7 +38,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between SnowcatCloud and Materialize, you must create a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET snowcat_webhook_secret AS '';
```
@@ -51,7 +51,7 @@ in Materialize to ingest data from SnowcatCloud. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE snowcat_source IN CLUSTER webhooks_cluster
FROM WEBHOOK
BODY FORMAT JSON
@@ -130,7 +130,7 @@ SnowcatCloud, you can now query the incoming data:
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
    SELECT * FROM snowcat_source LIMIT 10;
```
@@ -146,7 +146,7 @@ to map the individual fields to columns with the required data types.
To see what columns are available for your pipeline (enrichments), refer to
the [SnowcatCloud documentation](https://docs.snowcatcloud.com/).
-```sql
+```mzsql
CREATE VIEW events AS
SELECT
body ->> 'app_id' AS app_id,
diff --git a/doc/user/content/ingest-data/striim.md b/doc/user/content/ingest-data/striim.md
index 8bedff2d744f..444f9f2b73d0 100644
--- a/doc/user/content/ingest-data/striim.md
+++ b/doc/user/content/ingest-data/striim.md
@@ -95,7 +95,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
command to create connection objects with access and authentication details
to your Kafka cluster and schema registry:
- ```sql
+ ```mzsql
CREATE SECRET kafka_password AS '';
CREATE SECRET csr_password AS '';
@@ -119,7 +119,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
Materialize to your Kafka broker and schema registry using the connections you
created in the previous step.
- ```sql
+ ```mzsql
CREATE SOURCE src
FROM KAFKA CONNECTION kafka_connection (TOPIC '')
KEY FORMAT TEXT
diff --git a/doc/user/content/ingest-data/stripe.md b/doc/user/content/ingest-data/stripe.md
index 03100c42a3b2..9788473e493b 100644
--- a/doc/user/content/ingest-data/stripe.md
+++ b/doc/user/content/ingest-data/stripe.md
@@ -27,7 +27,7 @@ scenarios, we recommend separating your workloads into multiple clusters for
To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):
-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -37,7 +37,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between Stripe and Materialize, you must create a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET stripe_webhook_secret AS '';
```
@@ -50,7 +50,7 @@ in Materialize to ingest data from Stripe. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.
-```sql
+```mzsql
CREATE SOURCE stripe_source IN CLUSTER webhooks_cluster
FROM WEBHOOK
BODY FORMAT JSON;
@@ -132,7 +132,7 @@ Stripe signing scheme, check out the [Stripe documentation](https://stripe.com/d
1. Use SQL queries to inspect and analyze the incoming data:
- ```sql
+ ```mzsql
SELECT * FROM stripe_source LIMIT 10;
```
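For reference, Stripe's documented v1 signing scheme computes an HMAC-SHA256 over the timestamp and raw request body joined with a `.`, using the endpoint's signing secret. A hedged sketch of the verification (the values are made up; defer to the Stripe documentation linked above for the authoritative details):

```python
import hashlib
import hmac

# Sketch of Stripe's documented v1 webhook signing scheme:
# HMAC-SHA256 over "<timestamp>.<raw_body>" with the endpoint's signing secret.
# All values below are illustrative.
signing_secret = "whsec_example"
timestamp = "1700000000"
raw_body = '{"id": "evt_123"}'

signed_payload = f"{timestamp}.{raw_body}".encode()
expected_sig = hmac.new(signing_secret.encode(), signed_payload, hashlib.sha256).hexdigest()

def signature_is_valid(presented_sig):
    return hmac.compare_digest(expected_sig, presented_sig)

print(signature_is_valid(expected_sig))  # True
```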
@@ -146,7 +146,7 @@ Webhook data is ingested as a JSON blob. We recommend creating a parsing view on
top of your webhook source that uses [`jsonb` operators](https://materialize.com/docs/sql/types/jsonb/#operators)
to map the individual fields to columns with the required data types.
-```sql
+```mzsql
CREATE VIEW parse_stripe AS SELECT
body->>'api_version' AS api_version,
to_timestamp((body->'created')::int) AS created,
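The `to_timestamp((body->'created')::int)` cast turns Stripe's Unix-epoch `created` field into a proper timestamp. The same conversion in Python, for reference (the epoch value is illustrative):

```python
from datetime import datetime, timezone

# Sketch: the Python equivalent of to_timestamp((body->'created')::int).
body = {"created": 1704067200}  # illustrative Unix epoch seconds

created = datetime.fromtimestamp(int(body["created"]), tz=timezone.utc)
print(created.isoformat())  # 2024-01-01T00:00:00+00:00
```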
diff --git a/doc/user/content/ingest-data/troubleshooting.md b/doc/user/content/ingest-data/troubleshooting.md
index 7d6dd0aec8f4..6eac20b040d1 100644
--- a/doc/user/content/ingest-data/troubleshooting.md
+++ b/doc/user/content/ingest-data/troubleshooting.md
@@ -29,7 +29,7 @@ Alternatively, you can get this information from the system catalog by querying
the [`mz_source_statuses`](/sql/system-catalog/mz_internal/#mz_source_statuses)
table:
-```sql
+```mzsql
SELECT * FROM mz_internal.mz_source_statuses
WHERE name = '<source_name>';
```
@@ -60,7 +60,7 @@ To determine whether your source has completed ingesting the initial snapshot,
you can query the [`mz_source_statistics`](/sql/system-catalog/mz_internal/#mz_source_statistics)
system catalog table:
-```sql
+```mzsql
SELECT snapshot_committed
FROM mz_internal.mz_source_statistics
WHERE id = '<source_id>';
@@ -80,7 +80,7 @@ Repeatedly query the
[`mz_source_statistics`](/sql/system-catalog/mz_internal/#mz_source_statistics)
table and look for ingestion statistics that advance over time:
-```sql
+```mzsql
SELECT
bytes_received,
messages_received,
diff --git a/doc/user/content/ingest-data/upstash-kafka.md b/doc/user/content/ingest-data/upstash-kafka.md
index cfbc4a36e36d..0b4876aba735 100644
--- a/doc/user/content/ingest-data/upstash-kafka.md
+++ b/doc/user/content/ingest-data/upstash-kafka.md
@@ -107,7 +107,7 @@ steps:
the name of the topic you created in Step 3. The `<username>` and
`<password>` are from the _Create new credentials_ step.
- ```sql
+ ```mzsql
CREATE SECRET upstash_username AS '';
CREATE SECRET upstash_password AS '';
@@ -146,7 +146,7 @@ steps:
To create a sink, run the following command:
- ```sql
+ ```mzsql
CREATE SINK <sink_name>
FROM <item_name>
INTO KAFKA CONNECTION <connection_name> (TOPIC '<topic_name>')
diff --git a/doc/user/content/ingest-data/warpstream.md b/doc/user/content/ingest-data/warpstream.md
index c9b78ff1444a..aba76920ba8b 100644
--- a/doc/user/content/ingest-data/warpstream.md
+++ b/doc/user/content/ingest-data/warpstream.md
@@ -89,14 +89,14 @@ Ensure you have the following:
a. Save WarpStream credentials:
- ```sql
+ ```mzsql
CREATE SECRET warpstream_username AS '';
CREATE SECRET warpstream_password AS '';
```
b. Set up a connection to the WarpStream broker:
- ```sql
+ ```mzsql
CREATE CONNECTION warpstream_kafka TO KAFKA (
BROKER '.fly.dev:9092',
SASL MECHANISMS = 'PLAIN',
@@ -109,7 +109,7 @@ Ensure you have the following:
source will be created in the active cluster; to use a different cluster,
use the `IN CLUSTER` clause.
- ```sql
+ ```mzsql
CREATE SOURCE warpstream_click_stream_source
FROM KAFKA CONNECTION warpstream_kafka (TOPIC 'materialize_click_streams')
FORMAT JSON;
@@ -117,13 +117,13 @@ Ensure you have the following:
d. Verify the ingestion and query the data in Materialize:
- ```sql
+ ```mzsql
SELECT * FROM warpstream_click_stream_source LIMIT 10;
```
e. Furthermore, create a materialized view to aggregate the data:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW warpstream_click_stream_aggregate AS
SELECT
user_id,
@@ -146,7 +146,7 @@ Ensure you have the following:
g. Query the materialized view to monitor the real-time updates:
- ```sql
+ ```mzsql
SELECT * FROM warpstream_click_stream_aggregate;
```
diff --git a/doc/user/content/ingest-data/webhook-quickstart.md b/doc/user/content/ingest-data/webhook-quickstart.md
index 4a04129212ef..4eabd3eb754b 100644
--- a/doc/user/content/ingest-data/webhook-quickstart.md
+++ b/doc/user/content/ingest-data/webhook-quickstart.md
@@ -26,7 +26,7 @@ and pop open the SQL Shell.
To validate requests between the webhook event generator and Materialize, you
need a [secret](/sql/create-secret/):
-```sql
+```mzsql
CREATE SECRET demo_webhook AS '';
```
@@ -39,7 +39,7 @@ Using the secret from the previous step, create a webhook source to ingest data
from the webhook event generator. By default, the source will be created in the
current cluster.
-```sql
+```mzsql
CREATE SOURCE webhook_demo FROM WEBHOOK
BODY FORMAT JSON
CHECK (
@@ -70,7 +70,7 @@ to shape the events.
In the SQL Shell, validate that the source is ingesting data:
-```sql
+```mzsql
SELECT jsonb_pretty(body) AS body FROM webhook_demo LIMIT 1;
```
@@ -97,7 +97,7 @@ top of your webhook source that uses [jsonb operators](https://materialize.com/d
to map the individual fields to columns with the required data types. Using the
previous example:
-```sql
+```mzsql
CREATE VIEW webhook_demo_parsed AS SELECT
(body->'location'->>'latitude')::numeric AS location_latitude,
(body->'location'->>'longitude')::numeric AS location_longitude,
@@ -112,7 +112,7 @@ FROM webhook_demo;
To see results change over time, let’s [`SUBSCRIBE`](/sql/subscribe/) to the
`webhook_demo_parsed` view:
-```sql
+```mzsql
SUBSCRIBE(SELECT * FROM webhook_demo_parsed) WITH (SNAPSHOT = FALSE);
```
@@ -124,7 +124,7 @@ cancel out of the `SUBSCRIBE` using **Stop streaming**.
Once you’re done exploring the generated webhook data, remember to clean up your
environment:
-```sql
+```mzsql
DROP SOURCE webhook_demo CASCADE;
DROP SECRET demo_webhook;
diff --git a/doc/user/content/integrations/golang.md b/doc/user/content/integrations/golang.md
index 54c7fd916dbe..65cc9bd69677 100644
--- a/doc/user/content/integrations/golang.md
+++ b/doc/user/content/integrations/golang.md
@@ -192,7 +192,7 @@ An `MzDiff` value of `-1` indicates that Materialize is deleting one row with th
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/java-jdbc.md b/doc/user/content/integrations/java-jdbc.md
index 26135c6ec4c3..7dbf449393e6 100644
--- a/doc/user/content/integrations/java-jdbc.md
+++ b/doc/user/content/integrations/java-jdbc.md
@@ -389,7 +389,7 @@ A `mz_diff` value of `-1` indicates that Materialize is deleting one row with th
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/node-js.md b/doc/user/content/integrations/node-js.md
index c1e74d55af8d..97e4fc9be761 100644
--- a/doc/user/content/integrations/node-js.md
+++ b/doc/user/content/integrations/node-js.md
@@ -263,7 +263,7 @@ client.connect((err, client) => {
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/php.md b/doc/user/content/integrations/php.md
index f17b22388c14..46cf244091d9 100644
--- a/doc/user/content/integrations/php.md
+++ b/doc/user/content/integrations/php.md
@@ -212,7 +212,7 @@ An `mz_diff` value of `-1` indicates Materialize is deleting one row with the in
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/python.md b/doc/user/content/integrations/python.md
index d675d0f56f2a..3bdd333a2d63 100644
--- a/doc/user/content/integrations/python.md
+++ b/doc/user/content/integrations/python.md
@@ -211,7 +211,7 @@ with conn.cursor() as cur:
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/ruby.md b/doc/user/content/integrations/ruby.md
index f6ab659457a7..9dfff8c07c35 100644
--- a/doc/user/content/integrations/ruby.md
+++ b/doc/user/content/integrations/ruby.md
@@ -180,7 +180,7 @@ An `mz_diff` value of `-1` indicates Materialize is deleting one row with the in
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/rust.md b/doc/user/content/integrations/rust.md
index 7ea38f801986..5d852bcdff5c 100644
--- a/doc/user/content/integrations/rust.md
+++ b/doc/user/content/integrations/rust.md
@@ -169,7 +169,7 @@ The [SUBSCRIBE output format](/sql/subscribe/#output) of the `counter_sum` view
To clean up the sources, views, and tables that we created, first connect to Materialize using a [PostgreSQL client](/integrations/sql-clients/) and then run the following commands:
-```sql
+```mzsql
DROP MATERIALIZED VIEW IF EXISTS counter_sum;
DROP SOURCE IF EXISTS counter;
DROP TABLE IF EXISTS countries;
diff --git a/doc/user/content/integrations/sql-clients.md b/doc/user/content/integrations/sql-clients.md
index 887ad2781fae..d6bb30149521 100644
--- a/doc/user/content/integrations/sql-clients.md
+++ b/doc/user/content/integrations/sql-clients.md
@@ -113,7 +113,7 @@ define a bootstrap query in the connection initialization settings.
1. Under **Bootstrap queries**, click **Configure** and add a new SQL query that
sets the active cluster for the connection:
- ```sql
+ ```mzsql
SET cluster = other_cluster;
```
diff --git a/doc/user/content/manage/access-control/_index.md b/doc/user/content/manage/access-control/_index.md
index 0e2a6998d2b7..be68d304473c 100644
--- a/doc/user/content/manage/access-control/_index.md
+++ b/doc/user/content/manage/access-control/_index.md
@@ -124,7 +124,7 @@ databases, or schemas. To modify the default privileges available to all other
roles in a Materialize region, you can use the [`ALTER DEFAULT PRIVILEGES`](/sql/alter-default-privileges/)
command.
-```sql
+```mzsql
-- Use SHOW ROLES to list existing roles in the system, which are 1:1 with invited users
SHOW ROLES;
@@ -151,7 +151,7 @@ decide to roll out a RBAC strategy for your Materialize organization.
As an alternative, you can approximate the set of privileges of a _superuser_ by
instead modifying the default privileges to be wildly permissive:
-```sql
+```mzsql
-- Use SHOW ROLES to list existing roles in the system, which are 1:1 with invited users
SHOW ROLES;
@@ -192,7 +192,7 @@ If your Materialize user base is small and you don't expect it to grow
significantly over time, you can grant and revoke privileges directly to/from
user roles.
-```sql
+```mzsql
-- Use SHOW ROLES to list existing roles in the system.
SHOW ROLES;
@@ -222,7 +222,7 @@ As an example, Data Engineers might need a larger scope of permissions to create
and evolve the data model, while Data Analysts might only need read permissions
to query Materialize using BI tools.
-```sql
+```mzsql
-- Use SHOW ROLES to list existing roles in the system, which are 1:1 with invited users
SHOW ROLES;
@@ -283,7 +283,7 @@ roles from inheriting it. This means that users have to explicitly run e.g.,
`SET ROLE production` before being able to run any commands in the specified
environment.
-```sql
+```mzsql
-- Step 1: create the dev and prod roles
CREATE ROLE dev;
CREATE ROLE prod NOINHERIT;
diff --git a/doc/user/content/manage/access-control/manage-privileges.md b/doc/user/content/manage/access-control/manage-privileges.md
index ed843a1c2c7f..cdd217118d36 100644
--- a/doc/user/content/manage/access-control/manage-privileges.md
+++ b/doc/user/content/manage/access-control/manage-privileges.md
@@ -14,7 +14,7 @@ This page outlines how to assign and manage role privileges.
To grant privileges to a role, use the [`GRANT PRIVILEGE`](https://materialize.com/docs/sql/grant-privilege/) statement with the
object you want to grant privileges to:
-```sql
+```mzsql
GRANT USAGE ON <object_name> TO <role_name>;
```
@@ -49,7 +49,7 @@ For example, to allow a role to create a materialized view, you would
give that role `CREATE` privileges on the cluster and the schema because the
materialized view will be namespaced by the schema.
-```sql
+```mzsql
GRANT CREATE ON CLUSTER <cluster_name> TO <role_name>;
GRANT CREATE ON SCHEMA <schema_name> TO <role_name>;
```
@@ -58,6 +58,6 @@ GRANT CREATE ON to ;
To remove privileges from a role, use the [`REVOKE`](https://materialize.com/docs/sql/revoke-privilege/) statement:
-```sql
+```mzsql
REVOKE USAGE ON <object_name> FROM <role_name>;
```
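For example, to revoke the `USAGE` privilege on a cluster from a hypothetical `reporting` role (the cluster and role names below are illustrative):

```mzsql
REVOKE USAGE ON CLUSTER quickstart FROM reporting;
```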
diff --git a/doc/user/content/manage/access-control/manage-roles.md b/doc/user/content/manage/access-control/manage-roles.md
index 28b7c25aa7e6..66bea2916606 100644
--- a/doc/user/content/manage/access-control/manage-roles.md
+++ b/doc/user/content/manage/access-control/manage-roles.md
@@ -15,7 +15,7 @@ This page outlines how to create and manage roles in Materialize.
To create a new role, use the [`CREATE ROLE`](https://materialize.com/docs/sql/create-role/) statement:
-```sql
+```mzsql
CREATE ROLE <role_name> WITH <attributes>;
```
@@ -29,7 +29,7 @@ Materialize roles have the following available attributes:
To change a role's attributes, use the [`ALTER ROLE`](https://materialize.com/docs/sql/alter-role/) statement:
-```sql
+```mzsql
ALTER ROLE <role_name> WITH <attributes>;
```
@@ -37,7 +37,7 @@ ALTER ROLE WITH ;
To grant a role assignment to a user, use the [`GRANT`](https://materialize.com/docs/sql/grant-role/) statement:
-```sql
+```mzsql
GRANT <role_name> TO <user_name>;
```
@@ -45,7 +45,7 @@ GRANT to ;
To remove a user from a role, use the [`REVOKE`](https://materialize.com/docs/sql/revoke-role/) statement:
-```sql
+```mzsql
REVOKE <role_name> FROM <user_name>;
```
@@ -53,7 +53,7 @@ REVOKE FROM ;
To remove a role, use the [`DROP ROLE`](https://materialize.com/docs/sql/drop-role/) statement:
-```sql
+```mzsql
DROP ROLE <role_name>;
```
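Putting these statements together, a typical role lifecycle might look like the following sketch (the `data_reader` role and the user name are illustrative):

```mzsql
CREATE ROLE data_reader;
GRANT data_reader TO "user@example.com";
REVOKE data_reader FROM "user@example.com";
DROP ROLE data_reader;
```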
diff --git a/doc/user/content/manage/access-control/rbac-terraform-tutorial.md b/doc/user/content/manage/access-control/rbac-terraform-tutorial.md
index 26de6b2c0b6b..11ee615c945a 100644
--- a/doc/user/content/manage/access-control/rbac-terraform-tutorial.md
+++ b/doc/user/content/manage/access-control/rbac-terraform-tutorial.md
@@ -46,10 +46,12 @@ In this scenario, you are a DevOps engineer responsible for managing your Materi
3. Each role you create has default role attributes that determine how they can interact with Materialize objects. Let’s look at the role attributes of the role you created:
- ```sql
+ ```mzsql
SELECT * FROM mz_roles WHERE name = 'dev_role';
```
+
+
```nofmt
-[ RECORD 1 ]--+------
id | u8
@@ -113,10 +115,12 @@ Your `dev_role` has the default system-level permissions and needs object-level
3. Now that the resources exist, we can query their privileges before associating them with the role created in step 1.
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_tables WHERE name = 'dev_table';
```
+
+
```nofmt
name|privileges
dev_table|{u1=arwd/u1,u8=arw/u1}
@@ -160,10 +164,12 @@ In this example, let's say your `dev_role` needs the following permissions:
3. We can now check the privileges on our table again:
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_tables WHERE name = 'dev_table';
```
+
+
```nofmt
name|privileges
dev_table|{u1=arwd/u1,u8=arw/u1}
@@ -224,7 +230,7 @@ The dev_role now has the acceptable privileges it needs. Let’s apply this role
3. To review the permissions a role has, you can view the object data:
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_tables WHERE name = 'dev_table';
```
@@ -300,7 +306,7 @@ Your `dev_role` also needs access to `qa_db`. You can apply these privileges ind
3. Review the privileges of `qa_role` and `dev_role`:
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_databases WHERE name='qa_db';
```
@@ -329,7 +335,7 @@ You can revoke certain privileges for each role, even if they are inherited from
Because Terraform is responsible for maintaining the state of our project, removing this grant resource and running an `apply` is the equivalent of running a revoke statement:
- ```sql
+ ```mzsql
REVOKE CREATE ON DATABASE dev_table FROM dev_role;
```
diff --git a/doc/user/content/manage/access-control/rbac-tutorial.md b/doc/user/content/manage/access-control/rbac-tutorial.md
index cd3dfe1cbd29..095b0d8a8a66 100644
--- a/doc/user/content/manage/access-control/rbac-tutorial.md
+++ b/doc/user/content/manage/access-control/rbac-tutorial.md
@@ -42,7 +42,7 @@ example.
1. In the [SQL Shell](https://console.materialize.com/), or your preferred SQL
client connected to Materialize, create a new role:
- ```sql
+ ```mzsql
CREATE ROLE dev_role;
```
@@ -50,7 +50,7 @@ example.
can interact with Materialize objects. Let's look at the role attributes of
the role you created:
- ```sql
+ ```mzsql
SELECT * FROM mz_roles WHERE name = 'dev_role';
```
@@ -87,27 +87,27 @@ privileges the role needs.
1. In the SQL client connected to Materialize, create a new example cluster to
avoid impacting other environments:
- ```sql
+ ```mzsql
CREATE CLUSTER dev_cluster (SIZE = '25cc');
```
1. Change into the example cluster:
- ```sql
+ ```mzsql
SET CLUSTER TO dev_cluster;
```
1. Create a new database, schema, and table:
- ```sql
+ ```mzsql
CREATE DATABASE dev_db;
```
- ```sql
+ ```mzsql
CREATE SCHEMA dev_db.schema;
```
- ```sql
+ ```mzsql
CREATE TABLE dev_table (a int, b text NOT NULL);
```
@@ -126,7 +126,7 @@ In this example, let's say your `dev_role` needs the following permissions:
1. In your terminal, grant table-level privileges to the `dev_role`:
- ```sql
+ ```mzsql
GRANT SELECT, UPDATE, INSERT ON dev_table TO dev_role;
```
@@ -136,7 +136,7 @@ In this example, let's say your `dev_role` needs the following permissions:
2. Grant schema privileges to the `dev_role`:
- ```sql
+ ```mzsql
GRANT USAGE ON SCHEMA dev_db.schema TO dev_role;
```
@@ -145,13 +145,13 @@ In this example, let's say your `dev_role` needs the following permissions:
3. Grant database privileges to the `dev_role`. You can use the `GRANT ALL`
statement to grant all available privileges on an object.
- ```sql
+ ```mzsql
GRANT ALL ON DATABASE dev_db TO dev_role;
```
4. Grant cluster privileges to the `dev_role`:
- ```sql
+ ```mzsql
GRANT USAGE, CREATE ON CLUSTER dev_cluster TO dev_role;
```
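To confirm that the cluster grant took effect, you can inspect the cluster's privileges in the system catalog (a quick sanity check; the output format may vary):

```mzsql
SELECT name, privileges FROM mz_clusters WHERE name = 'dev_cluster';
```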
@@ -166,13 +166,13 @@ to a user in your Materialize organization.
1. In your terminal, use the `GRANT` statement to apply a role to your new user:
- ```sql
+ ```mzsql
GRANT dev_role TO <user_name>;
```
1. To review the permissions a role has, you can view the object data:
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_tables WHERE name='dev_table';
```
@@ -196,13 +196,13 @@ privileges as needed.
1. Create a second role in your Materialize account:
- ```sql
+ ```mzsql
CREATE ROLE qa_role;
```
2. Apply `CREATEDB` privileges to the `qa_role`
- ```sql
+ ```mzsql
GRANT CREATEDB ON SYSTEM TO qa_role;
```
@@ -210,13 +210,13 @@ privileges as needed.
3. Create a new `qa_db` database:
- ```sql
+ ```mzsql
CREATE DATABASE qa_db;
```
4. Apply `USAGE` and `CREATE` privileges to the `qa_role` role for the new database:
- ```sql
+ ```mzsql
GRANT USAGE, CREATE ON DATABASE qa_db TO qa_role;
```
@@ -228,7 +228,7 @@ permissions as the `qa_role`.
1. Add `dev_role` as a member of `qa_role`:
- ```sql
+ ```mzsql
GRANT qa_role TO dev_role;
```
@@ -238,7 +238,7 @@ permissions as the `qa_role`.
2. Review the privileges of `qa_role` and `dev_role`:
- ```sql
+ ```mzsql
SELECT name, privileges FROM mz_databases WHERE name='qa_db';
```
@@ -261,7 +261,7 @@ You can revoke certain privileges for each role, even if they are inherited from
1. Let's say you decide `dev_role` no longer needs `CREATE` privileges on the
`dev_table` object. You can revoke that privilege for the role:
- ```sql
+ ```mzsql
REVOKE CREATE ON DATABASE dev_table FROM dev_role;
```
@@ -277,7 +277,7 @@ You can revoke certain privileges for each role, even if they are inherited from
If you need to revoke specific privileges that a role has inherited
from another role, you must revoke the role with those privileges.
- ```sql
+ ```mzsql
REVOKE qa_role FROM dev_role;
```
In this example, when `dev_role` inherits from `qa_role`, `dev_role` always has
@@ -294,14 +294,14 @@ to destroy the objects you created for this guide.
1. Drop the roles you created:
- ```sql
+ ```mzsql
DROP ROLE qa_role;
DROP ROLE dev_role;
```
1. Drop the other objects you created:
- ```sql
+ ```mzsql
DROP CLUSTER dev_cluster CASCADE;
DROP DATABASE dev_db CASCADE;
DROP TABLE dev_table;
diff --git a/doc/user/content/manage/blue-green.md b/doc/user/content/manage/blue-green.md
index 9a66c1c492e6..4582141e74a0 100644
--- a/doc/user/content/manage/blue-green.md
+++ b/doc/user/content/manage/blue-green.md
@@ -105,7 +105,7 @@ it is safe to cut over.
3. Use the `SWAP` operation to atomically rename your objects in a way that is
transparent to clients.
- ```sql
+ ```mzsql
BEGIN;
ALTER SCHEMA prod SWAP WITH prod_deploy;
ALTER CLUSTER prod SWAP WITH prod_deploy;
@@ -116,7 +116,7 @@ transparent to clients.
4. Now that changes are running in `prod` and the legacy version is in
`prod_deploy`, you can drop the `prod_deploy` compute objects and schema.
- ```sql
+ ```mzsql
DROP CLUSTER prod_deploy CASCADE;
DROP SCHEMA prod_deploy CASCADE;
```
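Before dropping, it can be worth confirming that only legacy objects remain in the old environment; a hedged sketch, assuming the schema and cluster are both named `prod_deploy`:

```mzsql
SHOW MATERIALIZED VIEWS FROM prod_deploy;
SHOW INDEXES IN CLUSTER prod_deploy;
```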
@@ -135,7 +135,7 @@ also use `ALTER...RENAME` operations on:
- Materialized Views
- Indexes
- ```sql
+ ```mzsql
BEGIN;
-- Swap schemas
ALTER SCHEMA prod RENAME TO temp;
diff --git a/doc/user/content/manage/dbt/_index.md b/doc/user/content/manage/dbt/_index.md
index ca459e225f12..aed521914c75 100644
--- a/doc/user/content/manage/dbt/_index.md
+++ b/doc/user/content/manage/dbt/_index.md
@@ -175,7 +175,7 @@ exposed** in dbt, and need to exist before you run any `source` models.
Create a [Kafka source](/sql/create-source/kafka/).
**Filename:** sources/kafka_topic_a.sql
-```sql
+```mzsql
{{ config(materialized='source') }}
FROM KAFKA CONNECTION kafka_connection (TOPIC 'topic_a')
@@ -193,7 +193,7 @@ database.schema.kafka_topic_a
Create a [PostgreSQL source](/sql/create-source/postgres/).
**Filename:** sources/pg.sql
-```sql
+```mzsql
{{ config(materialized='source') }}
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
@@ -225,7 +225,7 @@ models in, you should additionally force a dependency on the parent source
model (`pg`), as described in the [dbt documentation](https://docs.getdbt.com/reference/dbt-jinja-functions/ref#forcing-dependencies).
**Filename:** staging/dep_subsources.sql
-```sql
+```mzsql
-- depends_on: {{ ref('pg') }}
{{ config(materialized='view') }}
@@ -251,7 +251,7 @@ database.schema.table_b
Create a [MySQL source](/sql/create-source/mysql/).
**Filename:** sources/mysql.sql
-```sql
+```mzsql
{{ config(materialized='source') }}
FROM MYSQL CONNECTION mysql_connection
@@ -283,7 +283,7 @@ models in, you should additionally force a dependency on the parent source
model (`mysql`), as described in the [dbt documentation](https://docs.getdbt.com/reference/dbt-jinja-functions/ref#forcing-dependencies).
**Filename:** staging/dep_subsources.sql
-```sql
+```mzsql
-- depends_on: {{ ref('mysql') }}
{{ config(materialized='view') }}
@@ -309,7 +309,7 @@ database.schema.table_b
Create a [webhook source](/sql/create-source/webhook/).
**Filename:** sources/webhook.sql
-```sql
+```mzsql
{{ config(materialized='source') }}
FROM WEBHOOK
@@ -352,7 +352,7 @@ in Materialize you can simply provide the SQL statement in the model (and skip
the `materialized` configuration parameter).
**Filename:** models/view_a.sql
-```sql
+```mzsql
SELECT
col_a, ...
FROM {{ ref('kafka_topic_a') }}
@@ -370,7 +370,7 @@ databases), with [materialized views](/sql/create-materialized-view)
that **continuously update** as the underlying data changes:
**Filename:** models/materialized_view_a.sql
-```sql
+```mzsql
{{ config(materialized='materialized_view') }}
SELECT
@@ -394,7 +394,7 @@ can instruct dbt to create a sink using the custom `sink` materialization.
Create a [Kafka sink](/sql/create-sink).
**Filename:** sinks/kafka_topic_c.sql
-```sql
+```mzsql
{{ config(materialized='sink') }}
FROM {{ ref('materialized_view_a') }}
@@ -419,7 +419,7 @@ Use the `cluster` option to specify the [cluster](/sql/create-cluster/) in which
a `materialized view`, `index`, `source`, or `sink` model is created. If
unspecified, the default cluster for the connection is used.
-```sql
+```mzsql
{{ config(materialized='materialized_view', cluster='cluster_a') }}
```
@@ -429,7 +429,7 @@ Use the `database` option to specify the [database](/sql/namespaces/#database-de
in which a `source`, `view`, `materialized view` or `sink` is created. If
unspecified, the default database for the connection is used.
-```sql
+```mzsql
{{ config(materialized='materialized_view', database='database_a') }}
```
@@ -448,14 +448,14 @@ Component | Value | Description
##### Creating a multi-column index
-```sql
+```mzsql
{{ config(materialized='view',
indexes=[{'columns': ['col_a','col_b'], 'cluster': 'cluster_a'}]) }}
```
##### Creating a default index
-```sql
+```mzsql
{{ config(materialized='view',
indexes=[{'default': True}]) }}
```
@@ -514,7 +514,7 @@ types are supported.
A `not_null` constraint will be compiled to an [`ASSERT NOT NULL`](/sql/create-materialized-view/#non-null-assertions)
option for the specified columns of the materialized view.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW model_with_constraints
WITH (
ASSERT NOT NULL col_with_constraints
@@ -541,8 +541,13 @@ SELECT NULL AS col_with_constraints,
SQL client connected to Materialize, double-check that all objects have been
created:
- ```sql
+ ```mzsql
SHOW SOURCES [FROM database.schema];
+ ```
+
+
+
+ ```nofmt
name
-------------------
mysql_table_a
@@ -550,13 +555,31 @@ SELECT NULL AS col_with_constraints,
postgres_table_a
postgres_table_b
kafka_topic_a
+ ```
+
+
+ ```mzsql
SHOW VIEWS;
+ ```
+
+
+
+ ```nofmt
name
-------------------
view_a
+ ```
+
+
+
+ ```mzsql
+ SHOW MATERIALIZED VIEWS;
+ ```
- SHOW MATERIALIZED VIEWS;
+
+
+ ```nofmt
name
-------------------
materialized_view_a
@@ -633,14 +656,28 @@ trigger **real-time alerts** downstream.
SQL client connected to Materialize, that the schema storing the tests has been
created, as well as the test materialized views:
- ```sql
+ ```mzsql
SHOW SCHEMAS;
+ ```
+
+
+
+ ```nofmt
name
-------------------
public
public_etl_failure
+ ```
+
+
- SHOW MATERIALIZED VIEWS FROM public_etl_failure;;
+ ```mzsql
+ SHOW MATERIALIZED VIEWS FROM public_etl_failure;
+ ```
+
+
+
+ ```nofmt
name
-------------------
not_null_col_a
@@ -719,7 +756,7 @@ For "use-at-your-own-risk" workarounds, see [`dbt-core` #4226](https://github.co
As an alternative, you can configure `persist-docs` in the config block of your models:
- ```sql
+ ```mzsql
{{ config(
materialized='materialized_view',
persist_docs={"relation": true, "columns": true}
@@ -730,8 +767,12 @@ For "use-at-your-own-risk" workarounds, see [`dbt-core` #4226](https://github.co
files is persisted to Materialize in the [mz_internal.mz_comments](/sql/system-catalog/mz_internal/#mz_comments)
system catalog table on every `dbt run`:
- ```sql
+ ```mzsql
SELECT * FROM mz_internal.mz_comments;
+ ```
+
+
+ ```nofmt
id | object_type | object_sub_id | comment
------+-------------------+---------------+----------------------------------
diff --git a/doc/user/content/manage/dbt/development-workflows.md b/doc/user/content/manage/dbt/development-workflows.md
index 10471cdaa80d..f34d1ed02e13 100644
--- a/doc/user/content/manage/dbt/development-workflows.md
+++ b/doc/user/content/manage/dbt/development-workflows.md
@@ -153,7 +153,7 @@ to hydrate before you can validate that it produces the expected results.
1. As an example, imagine your dbt project includes the following models:
**Filename:** _models/my_model_a.sql_
- ```sql
+ ```mzsql
SELECT
1 AS a,
1 AS id,
@@ -163,7 +163,7 @@ to hydrate before you can validate that it produces the expected results.
```
**Filename:** _models/my_model_b.sql_
- ```sql
+ ```mzsql
SELECT
2 as b,
1 as id,
@@ -172,7 +172,7 @@ to hydrate before you can validate that it produces the expected results.
```
**Filename:** models/my_model.sql
- ```sql
+ ```mzsql
SELECT
a+b AS c,
CONCAT(string_a, string_b) AS string_c,
diff --git a/doc/user/content/overview/timelines.md b/doc/user/content/overview/timelines.md
index 15577203d4d4..ed2179dc32d2 100644
--- a/doc/user/content/overview/timelines.md
+++ b/doc/user/content/overview/timelines.md
@@ -33,7 +33,7 @@ This can be used to allow multiple CDC sources, or a CDC source and system time
For example, to create two CDC sources that are joinable:
-```sql
+```mzsql
CREATE CONNECTION kafka_conn TO KAFKA (BROKER 'broker');
CREATE SOURCE source_1
@@ -58,7 +58,7 @@ the `mz_epoch_ms` timeline.
You **must** ensure that the `time` field's units are milliseconds since the Unix epoch.
Joining this source to other system time sources will result in query delays until the timestamps being received are close to wall-clock `now()`.
-```sql
+```mzsql
CREATE SOURCE source_3
FROM KAFKA CONNECTION kafka_conn (TOPIC 'topic-3')
FORMAT AVRO USING SCHEMA 'schema'
diff --git a/doc/user/content/quickstarts/data-applications.md b/doc/user/content/quickstarts/data-applications.md
index da54fbd4f80e..070df89e0510 100644
--- a/doc/user/content/quickstarts/data-applications.md
+++ b/doc/user/content/quickstarts/data-applications.md
@@ -32,12 +32,12 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. In your `psql` terminal, create a new schema:
- ```sql
+ ```mzsql
CREATE SCHEMA shop;
```
1. Create [a connection](/sql/create-connection/#confluent-schema-registry) to the Confluent Schema Registry:
- ```sql
+ ```mzsql
CREATE SECRET IF NOT EXISTS shop.csr_username AS '';
CREATE SECRET IF NOT EXISTS shop.csr_password AS '';
@@ -50,7 +50,7 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. Create [a connection](/sql/create-connection/#kafka) to the Kafka broker:
- ```sql
+ ```mzsql
CREATE SECRET shop.kafka_password AS '';
CREATE CONNECTION shop.kafka_connection TO KAFKA (
@@ -63,7 +63,7 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. Create the sources, one per Kafka topic:
- ```sql
+ ```mzsql
CREATE SOURCE purchases
FROM KAFKA CONNECTION shop.kafka_connection (TOPIC 'purchases')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION shop.csr_basic_http
@@ -87,28 +87,28 @@ Materialized views compute and maintain the results of a query incrementally. Us
Reuse your `psql` session and build the analytics:
1. The sum of purchases:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW shop.total_purchases AS
SELECT SUM(purchase_price * quantity) AS total_purchases
FROM shop.purchases;
```
1. The count of purchases:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW shop.count_purchases AS
SELECT COUNT(1) AS count_purchases
FROM shop.purchases;
```
1. The count of users:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW shop.total_users AS
SELECT COUNT(1) as total_users
FROM shop.users;
```
1. The best-selling items:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW shop.best_sellers AS
SELECT I.name, I.category, COUNT(1) as purchases
FROM shop.purchases P
@@ -123,14 +123,14 @@ Reuse your `psql` session and build the analytics:
`SUBSCRIBE` can stream updates from materialized views as they occur. Use it to verify how the analytics change over time.
1. Subscribe to the best-selling items:
- ```sql
+ ```mzsql
COPY (SUBSCRIBE (SELECT * FROM shop.best_sellers)) TO STDOUT;
```
1. Press `CTRL + C` to interrupt the subscription after a few changes.
1. Subscribe to the best-selling items, filtering by the `gadgets` category:
- ```sql
+ ```mzsql
COPY (SUBSCRIBE ( SELECT * FROM shop.best_sellers WHERE category = 'gadgets' )) TO STDOUT;
```
diff --git a/doc/user/content/quickstarts/streaming-analytics.md b/doc/user/content/quickstarts/streaming-analytics.md
index 971ce0e86793..f6cc03689e8c 100644
--- a/doc/user/content/quickstarts/streaming-analytics.md
+++ b/doc/user/content/quickstarts/streaming-analytics.md
@@ -32,20 +32,20 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. In your `psql` terminal, create a new [cluster](https://materialize.com/docs/sql/create-cluster/) and [schema](https://materialize.com/docs/sql/create-schema/):
- ```sql
+ ```mzsql
CREATE CLUSTER demo (SIZE = '100cc');
CREATE SCHEMA shop;
```
1. Within the same `psql` terminal, switch to the cluster and schema we just created. This keeps everything done for this demo safely isolated from any other workflows we may have running:
- ```sql
+ ```mzsql
SET cluster = demo;
SET SCHEMA shop;
```
1. Create [a connection](/sql/create-connection/#confluent-schema-registry) to the Confluent Schema Registry:
- ```sql
+ ```mzsql
CREATE SECRET IF NOT EXISTS csr_username AS '';
CREATE SECRET IF NOT EXISTS csr_password AS '';
@@ -58,7 +58,7 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. Create [a connection](/sql/create-connection/#kafka) to the Kafka broker:
- ```sql
+ ```mzsql
CREATE SECRET kafka_password AS '';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -71,7 +71,7 @@ Materialize provides public Kafka topics and a Confluent Schema Registry for its
1. Create the sources, one per Kafka topic:
- ```sql
+ ```mzsql
CREATE SOURCE IF NOT EXISTS purchases
FROM KAFKA CONNECTION kafka_connection (TOPIC 'mysql.shop.purchases')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_basic_http
@@ -98,7 +98,7 @@ Reuse your `psql` session and build the analytics:
A `MATERIALIZED VIEW` is persisted in durable storage and is incrementally updated as new data arrives.
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW vip_purchases AS
SELECT
user_id,
@@ -115,7 +115,7 @@ Reuse your `psql` session and build the analytics:
1. Create a view that takes the on-time bids and finds the highest bid for each auction:
- ```sql
+ ```mzsql
CREATE VIEW highest_bid_per_auction AS
SELECT grp.auction_id,
bid_id,
@@ -150,7 +150,7 @@ Reuse your `psql` session and build the analytics:
`SUBSCRIBE` can stream updates from materialized views as they occur. Use it to verify how the analytics change over time.
1. Subscribe to the vip purchases:
- ```sql
+ ```mzsql
COPY (SUBSCRIBE (SELECT * FROM vip_purchases)) TO STDOUT;
```
diff --git a/doc/user/content/quickstarts/user-workflows.md b/doc/user/content/quickstarts/user-workflows.md
index 4c5278f6e869..4ea9260c6a87 100644
--- a/doc/user/content/quickstarts/user-workflows.md
+++ b/doc/user/content/quickstarts/user-workflows.md
@@ -43,7 +43,7 @@ Sources are the first step in most Materialize projects.
1. In your `psql` terminal, create [a connection](/sql/create-connection/#confluent-schema-registry) to the Confluent Schema Registry:
- ```sql
+ ```mzsql
CREATE SECRET IF NOT EXISTS csr_username AS '';
CREATE SECRET IF NOT EXISTS csr_password AS '';
@@ -56,7 +56,7 @@ Sources are the first step in most Materialize projects.
1. Create [a connection](/sql/create-connection/#kafka) to the Kafka broker:
- ```sql
+ ```mzsql
CREATE SECRET kafka_password AS '';
CREATE CONNECTION ecommerce_kafka_connection TO KAFKA (
@@ -69,7 +69,7 @@ Sources are the first step in most Materialize projects.
1. Create the sources:
- ```sql
+ ```mzsql
CREATE SOURCE purchases
FROM KAFKA CONNECTION ecommerce_kafka_connection (TOPIC 'mysql.shop.purchases')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION schema_registry
@@ -92,7 +92,7 @@ Sources are the first step in most Materialize projects.
Now if you run `SHOW SOURCES;`, you should see the four sources we created:
- ```sql
+ ```mzsql
materialize=> SHOW SOURCES;
name
----------------
@@ -109,7 +109,7 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
1. Create a [view](/sql/create-view/) that casts the raw bytes into a JSON object:
- ```sql
+ ```mzsql
CREATE VIEW v_pageviews AS
SELECT
(data->'user_id')::int AS user_id,
@@ -123,7 +123,7 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
1. Define a view containing the incomplete orders:
- ```sql
+ ```mzsql
CREATE VIEW incomplete_purchases AS
SELECT
users.id AS user_id,
@@ -143,7 +143,7 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
A materialized view is persisted in durable storage and is incrementally updated as new data arrives.
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW last_user_visit AS
SELECT DISTINCT ON(user_id) user_id, received_at
FROM v_pageviews
@@ -152,7 +152,7 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
1. Create a materialized view to get all the users that have been inactive for the last 3 minutes:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW inactive_users_last_3_mins AS
SELECT
user_id,
@@ -166,7 +166,7 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
1. Create a materialized view to join the incomplete purchases with the inactive users to get the abandoned carts:
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW abandoned_cart AS
SELECT
incomplete_purchases.user_id,
@@ -182,13 +182,13 @@ With JSON-formatted messages, we don't know the schema so the [JSON is pulled in
1. Query the `abandoned_cart` materialized view to see its current results:
- ```sql
+ ```mzsql
SELECT * FROM abandoned_cart LIMIT 10;
```
1. To see the changes in the `abandoned_cart` materialized view as new data arrives, you can use [`SUBSCRIBE`](/sql/subscribe):
- ```sql
+ ```mzsql
COPY ( SUBSCRIBE ( SELECT * FROM abandoned_cart ) ) TO STDOUT;
```
diff --git a/doc/user/content/releases/v0.100.md b/doc/user/content/releases/v0.100.md
index 596b97d5cc78..5abd8f33ed48 100644
--- a/doc/user/content/releases/v0.100.md
+++ b/doc/user/content/releases/v0.100.md
@@ -12,7 +12,7 @@ patch: 1
* Add a [`MAP` expression](/sql/types/map/#construction) that allows constructing a `map`
from a list of key–value pairs or a subquery.
- ```sql
+ ```mzsql
SELECT MAP['a' => 1, 'b' => 2];
map
diff --git a/doc/user/content/releases/v0.27.md b/doc/user/content/releases/v0.27.md
index b92891a2511e..bfd1bb3bbb1e 100644
--- a/doc/user/content/releases/v0.27.md
+++ b/doc/user/content/releases/v0.27.md
@@ -73,7 +73,7 @@ substantial breaking changes from [v0.26 LTS].
To emulate the old behavior, explicitly create a default index after creating
a view:
- ```sql
+ ```mzsql
CREATE VIEW ...;
CREATE DEFAULT INDEX ON <view_name>;
```
@@ -84,7 +84,7 @@ substantial breaking changes from [v0.26 LTS].
a default index. Instead, you must explicitly create a default index after
creating a source:
- ```sql
+ ```mzsql
CREATE SOURCE ...;
CREATE DEFAULT INDEX ON <source_name>;
```
@@ -141,7 +141,7 @@ from Materialize v0.26 LTS for Materialize v0.27:
Change from:
-```sql
+```mzsql
CREATE SOURCE kafka_sasl
FROM KAFKA BROKER 'broker.tld:9092' TOPIC 'top-secret' WITH (
security_protocol = 'SASL_SSL',
@@ -157,7 +157,7 @@ CREATE SOURCE kafka_sasl
to:
-```sql
+```mzsql
CREATE SECRET kafka_password AS '';
CREATE SECRET csr_password AS '';
@@ -182,13 +182,13 @@ CREATE SOURCE kafka_top_secret
Change from:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW v AS SELECT ...
```
to:
-```sql
+```mzsql
CREATE VIEW v AS SELECT ...
CREATE DEFAULT INDEX ON v
```
@@ -197,13 +197,13 @@ CREATE DEFAULT INDEX ON v
Change from:
-```sql
+```mzsql
CREATE MATERIALIZED SOURCE src ...
```
to:
-```sql
+```mzsql
CREATE SOURCE src ...
```
@@ -218,13 +218,13 @@ CREATE INDEX on src (lookup_col1, lookup_col2)
Change from:
-```sql
+```mzsql
COPY (TAIL t) TO STDOUT
```
to:
-```sql
+```mzsql
COPY (SUBSCRIBE t) TO STDOUT
```
diff --git a/doc/user/content/releases/v0.28.md b/doc/user/content/releases/v0.28.md
index 36b4b8a0de04..49eb4f7cb6fd 100644
--- a/doc/user/content/releases/v0.28.md
+++ b/doc/user/content/releases/v0.28.md
@@ -18,7 +18,7 @@ aliases: v0.28.0
**New syntax**
- ```sql
+ ```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'unique-jellyfish-0000-kafka.upstash.io:9092',
SASL MECHANISMS = 'SCRAM-SHA-256',
@@ -29,7 +29,7 @@ aliases: v0.28.0
**Old syntax**
- ```sql
+ ```mzsql
CREATE CONNECTION kafka_connection FOR KAFKA
BROKER 'unique-jellyfish-0000-kafka.upstash.io:9092',
SASL MECHANISMS = 'SCRAM-SHA-256',
@@ -56,7 +56,7 @@ aliases: v0.28.0
cluster. For the best performance when executing `SHOW` commands, switch to
the `mz_introspection` cluster using:
- ```sql
+ ```mzsql
SET CLUSTER = mz_introspection;
```
diff --git a/doc/user/content/releases/v0.29.md b/doc/user/content/releases/v0.29.md
index 753afc3610fd..87f64c45e216 100644
--- a/doc/user/content/releases/v0.29.md
+++ b/doc/user/content/releases/v0.29.md
@@ -15,7 +15,7 @@ patch: 3
to literal values, particularly in cases where e.g. `col_a` was of type
`VARCHAR`:
- ```sql
+ ```mzsql
SELECT * FROM table_foo WHERE col_a = 'hello';
```
diff --git a/doc/user/content/releases/v0.30.md b/doc/user/content/releases/v0.30.md
index 5f6c66b27a4b..ddc402c6f277 100644
--- a/doc/user/content/releases/v0.30.md
+++ b/doc/user/content/releases/v0.30.md
@@ -19,7 +19,7 @@ patch: 2
[PostgreSQL source](/sql/create-source/postgres/), specifying the table and
column containing an unsupported type:
- ```sql
+ ```mzsql
CREATE SOURCE pg_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR ALL TABLES
diff --git a/doc/user/content/releases/v0.32.md b/doc/user/content/releases/v0.32.md
index ec95b461edcd..e1d23af7be54 100644
--- a/doc/user/content/releases/v0.32.md
+++ b/doc/user/content/releases/v0.32.md
@@ -11,7 +11,7 @@ patch: 4
[PostgreSQL source](/sql/create-source/postgres/), using the new `TEXT
COLUMNS` option:
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (
PUBLICATION 'mz_source',
@@ -28,7 +28,7 @@ patch: 4
* Improve error message for unexpected or mismatched type catalog errors,
specifying the catalog item type:
- ```sql
+ ```mzsql
DROP VIEW mz_table;
ERROR: "materialize.public.mz_table" is a table not a view
diff --git a/doc/user/content/releases/v0.33.md b/doc/user/content/releases/v0.33.md
index 8ad20ab7bf92..a53bcef0e60d 100644
--- a/doc/user/content/releases/v0.33.md
+++ b/doc/user/content/releases/v0.33.md
@@ -11,7 +11,7 @@ patch: 3
to an SSH bastion server.
- ```sql
+ ```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKERS (
'broker1:9092' USING SSH TUNNEL ssh_connection,
diff --git a/doc/user/content/releases/v0.36.md b/doc/user/content/releases/v0.36.md
index 32d1eccc7c20..0f7ea3ee7afa 100644
--- a/doc/user/content/releases/v0.36.md
+++ b/doc/user/content/releases/v0.36.md
@@ -19,7 +19,7 @@ patch: 2
all extant cluster replicas as a % of the total allocation, you can now
use:
- ```sql
+ ```mzsql
SELECT
r.id AS replica_id,
m.process_id,
diff --git a/doc/user/content/releases/v0.37.md b/doc/user/content/releases/v0.37.md
index d307b7801404..b3c8fe00c531 100644
--- a/doc/user/content/releases/v0.37.md
+++ b/doc/user/content/releases/v0.37.md
@@ -15,7 +15,7 @@ patch: 3
to the system catalog. This view allows you to monitor the resource utilization for
all extant cluster replicas as a % of the total resource allocation:
- ```sql
+ ```mzsql
SELECT * FROM mz_internal.mz_cluster_replica_utilization;
```
diff --git a/doc/user/content/releases/v0.39.md b/doc/user/content/releases/v0.39.md
index ad236b1b9271..2b179e839610 100644
--- a/doc/user/content/releases/v0.39.md
+++ b/doc/user/content/releases/v0.39.md
@@ -16,7 +16,7 @@ patch: 3
example, you can now get an overview of the relationship between user-defined
objects using:
- ```sql
+ ```mzsql
SELECT
object_id,
o.name,
@@ -41,7 +41,7 @@ patch: 3
the value returned by the existing `mz_version()` function, but the parameter
form can be more convenient for downstream applications.
- ```sql
+ ```mzsql
SHOW mz_version;
```
diff --git a/doc/user/content/releases/v0.40.md b/doc/user/content/releases/v0.40.md
index fc78e8fa7fdc..7ce3de66d8df 100644
--- a/doc/user/content/releases/v0.40.md
+++ b/doc/user/content/releases/v0.40.md
@@ -9,7 +9,7 @@ released: true
* Allow configuring an `AVAILABILITY ZONE` option for each broker when creating
a Kafka connection using [AWS PrivateLink](/sql/create-connection/#kafka-network-security):
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
diff --git a/doc/user/content/releases/v0.41.md b/doc/user/content/releases/v0.41.md
index 7cf2733eaa8c..ccb1a48296bd 100644
--- a/doc/user/content/releases/v0.41.md
+++ b/doc/user/content/releases/v0.41.md
@@ -17,7 +17,7 @@ patch: 1
name of the replication slot created in the upstream PostgreSQL
database that Materialize will create for each source.
- ```sql
+ ```mzsql
SELECT * FROM mz_internal.mz_postgres_sources;
id | replication_slot
@@ -32,7 +32,7 @@ patch: 1
**New syntax**
- ```sql
+ ```mzsql
CREATE SOURCE kafka_connection
IN CLUSTER quickstart
FROM KAFKA CONNECTION qck_kafka_connection (TOPIC 'test_topic')
diff --git a/doc/user/content/releases/v0.43.md b/doc/user/content/releases/v0.43.md
index 43401082535f..7654a0f0a571 100644
--- a/doc/user/content/releases/v0.43.md
+++ b/doc/user/content/releases/v0.43.md
@@ -17,7 +17,7 @@ released: true
PHYSICAL PLAN FOR [MATERIALIZED] VIEW $view_name` to print the name of the
view. The output will now look similar to:
- ```sql
+ ```mzsql
EXPLAIN VIEW v;
Optimized Plan
diff --git a/doc/user/content/releases/v0.45.md b/doc/user/content/releases/v0.45.md
index 5cd9ccd8da3c..cce8e52676cf 100644
--- a/doc/user/content/releases/v0.45.md
+++ b/doc/user/content/releases/v0.45.md
@@ -15,7 +15,7 @@ released: true
**Example**
- ```sql
+ ```mzsql
-- Given a "purchases" Kafka source, a "purchases_progress"
-- subsource is automatically created
SELECT partition, "offset"
@@ -48,7 +48,7 @@ released: true
* Support `options` settings on connection startup. As an example, you can
now specify the cluster to connect to in the `psql` connection string:
- ```sql
+ ```mzsql
psql "postgres://user%40domain.com@host:6875/materialize?options=--cluster%3Dfoo"
```
@@ -74,7 +74,7 @@ now specify the cluster to connect to in the `psql` connection string:
**Example**
- ```sql
+ ```mzsql
CREATE VIEW foo AS SELECT 'bar';
ERROR: view "materialize.public.foo" already exists
diff --git a/doc/user/content/releases/v0.48.md b/doc/user/content/releases/v0.48.md
index 3f82aba0a1b8..8d02f8f5797a 100644
--- a/doc/user/content/releases/v0.48.md
+++ b/doc/user/content/releases/v0.48.md
@@ -25,7 +25,7 @@ patch: 4
* Support specifying multiple roles in the [`GRANT ROLE`](/sql/grant-role) and
[`REVOKE ROLE`](/sql/revoke-role) commands.
- ```sql
+ ```mzsql
-- Grant role
GRANT data_scientist TO joe, mike;
diff --git a/doc/user/content/releases/v0.51.md b/doc/user/content/releases/v0.51.md
index c4ff4ef7423d..40bee97ca652 100644
--- a/doc/user/content/releases/v0.51.md
+++ b/doc/user/content/releases/v0.51.md
@@ -13,7 +13,7 @@ patch: 1
[PostgreSQL source](/sql/create-source/postgres/), using the new `FOR SCHEMAS(...)`
option:
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR SCHEMAS (public, finance)
@@ -33,7 +33,7 @@ patch: 1
replaces a set of characters in a string with another set of characters
(one by one, regardless of the order of those characters):
- ```sql
+ ```mzsql
SELECT translate('12345', '134', 'ax');
translate
diff --git a/doc/user/content/releases/v0.52.md b/doc/user/content/releases/v0.52.md
index 9058d990eea9..1333e69c0b09 100644
--- a/doc/user/content/releases/v0.52.md
+++ b/doc/user/content/releases/v0.52.md
@@ -38,7 +38,7 @@ mentioning it here."
To see your current credit consumption rate, measured in credits per hour, run
the following query:
- ```sql
+ ```mzsql
SELECT sum(s.credits_per_hour) AS credit_consumption_rate
FROM mz_cluster_replicas r
JOIN mz_internal.mz_cluster_replica_sizes s ON r.size = s.size;
diff --git a/doc/user/content/releases/v0.53.md b/doc/user/content/releases/v0.53.md
index 5f93aac04bfd..f22ac037897e 100644
--- a/doc/user/content/releases/v0.53.md
+++ b/doc/user/content/releases/v0.53.md
@@ -13,7 +13,7 @@ released: true
join condition in the statement below will be referenceable as `lhs.c`,
`rhs.c`, and `joint.c`.
- ```sql
+ ```mzsql
SELECT *
FROM lhs
JOIN rhs USING (c) AS joint;
diff --git a/doc/user/content/releases/v0.54.md b/doc/user/content/releases/v0.54.md
index a8146b1e636d..b221182edb1d 100644
--- a/doc/user/content/releases/v0.54.md
+++ b/doc/user/content/releases/v0.54.md
@@ -31,7 +31,7 @@ released: true
roles, as well as the `ALL` keyword to indicate that all privileges should
be granted or revoked.
- ```sql
+ ```mzsql
GRANT SELECT ON mv TO joe, mike;
GRANT ALL ON CLUSTER dev TO joe;
diff --git a/doc/user/content/releases/v0.55.md b/doc/user/content/releases/v0.55.md
index 28b4405a72c4..c9860309e9aa 100644
--- a/doc/user/content/releases/v0.55.md
+++ b/doc/user/content/releases/v0.55.md
@@ -13,7 +13,7 @@ patch: 6
current_schema`, respectively. From this release, the following sequence of
commands provide the same functionality:
- ```sql
+ ```mzsql
materialize=> SET schema = finance;
SET
materialize=> SHOW schema;
@@ -23,7 +23,7 @@ patch: 6
(1 row)
```
- ```sql
+ ```mzsql
materialize=> SET search_path = finance, public;
SET
materialize=> SELECT current_schema;
diff --git a/doc/user/content/releases/v0.56.md b/doc/user/content/releases/v0.56.md
index 0514b8fa16cb..2b88e567f47a 100644
--- a/doc/user/content/releases/v0.56.md
+++ b/doc/user/content/releases/v0.56.md
@@ -20,7 +20,7 @@ patch: 4
* Add the `has_table_privilege` access control function, which allows a role
to query if it has privileges on a specific relation:
- ```sql
+ ```mzsql
SELECT has_table_privilege('marta','auction_house','select');
has_table_privilege
diff --git a/doc/user/content/releases/v0.57.md b/doc/user/content/releases/v0.57.md
index 7b5412ed46b7..e86573736164 100644
--- a/doc/user/content/releases/v0.57.md
+++ b/doc/user/content/releases/v0.57.md
@@ -23,7 +23,7 @@ patch: 10
* Add `RESET schema` as an alias to `RESET search_path`. From this release, the
following sequence of commands provide the same functionality:
- ```sql
+ ```mzsql
materialize=> SET schema = finance;
SET
materialize=> SHOW schema;
@@ -41,7 +41,7 @@ patch: 10
(1 row)
```
- ```sql
+ ```mzsql
materialize=> SET search_path = finance, public;
SET
materialize=> SELECT current_schema;
diff --git a/doc/user/content/releases/v0.59.md b/doc/user/content/releases/v0.59.md
index 6a058505e893..e1bc523938d6 100644
--- a/doc/user/content/releases/v0.59.md
+++ b/doc/user/content/releases/v0.59.md
@@ -17,7 +17,7 @@ supported in the next release.
* Support parsing multi-dimensional arrays, including multi-dimensional empty arrays.
- ```sql
+ ```mzsql
materialize=> SELECT '{{1}, {2}}'::int[];
arr
-----------
diff --git a/doc/user/content/releases/v0.60.md b/doc/user/content/releases/v0.60.md
index 1eac4cd4135a..116e3f3a3b2e 100644
--- a/doc/user/content/releases/v0.60.md
+++ b/doc/user/content/releases/v0.60.md
@@ -22,7 +22,7 @@ flag. The flag was raised in v0.60 -— so mentioning it here."
**New syntax**
- ```sql
+ ```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'ch_anges')
FORMAT JSON
@@ -39,7 +39,7 @@ flag. The flag was raised in v0.60 -— so mentioning it here."
**Old syntax**
- ```sql
+ ```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'ch_anges')
FORMAT BYTES
diff --git a/doc/user/content/releases/v0.62.md b/doc/user/content/releases/v0.62.md
index 38316c598c64..2f129c22c37e 100644
--- a/doc/user/content/releases/v0.62.md
+++ b/doc/user/content/releases/v0.62.md
@@ -21,7 +21,7 @@ patch: 4
For a given JSON-formatted source, the following query cannot
benefit from filter pushdown:
- ```sql
+ ```mzsql
SELECT *
FROM foo
WHERE (data ->> 'timestamp')::timestamp > mz_now();
@@ -29,7 +29,7 @@ patch: 4
But can be optimized as:
- ```sql
+ ```mzsql
SELECT *
FROM foo
WHERE try_parse_monotonic_iso8601_timestamp(data ->> 'timestamp') > mz_now();
diff --git a/doc/user/content/releases/v0.67.md b/doc/user/content/releases/v0.67.md
index 8086649b310d..95f08db04150 100644
--- a/doc/user/content/releases/v0.67.md
+++ b/doc/user/content/releases/v0.67.md
@@ -18,7 +18,7 @@ here."
rows as a series of inserts, updates and deletes within each distinct
timestamp. The output rows will have the following structure:
- ```sql
+ ```mzsql
SUBSCRIBE mview ENVELOPE UPSERT (KEY (key));
mz_timestamp | mz_state | key | value
diff --git a/doc/user/content/releases/v0.69.md b/doc/user/content/releases/v0.69.md
index 4da937a7f434..f4d4987ebc8a 100644
--- a/doc/user/content/releases/v0.69.md
+++ b/doc/user/content/releases/v0.69.md
@@ -21,7 +21,7 @@ flag. The flag was raised in v0.69 — so mentioning it here."
using the new [`VALIDATE CONNECTION`](https://materialize.com/docs/sql/validate-connection/)
syntax:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -41,7 +41,7 @@ flag. The flag was raised in v0.69 — so mentioning it here."
* Add the `IN CLUSTER` option to the `SHOW { SOURCES | SINKS }` commands to
restrict the objects listed to a specific cluster.
- ```sql
+ ```mzsql
SHOW SOURCES;
```
```nofmt
@@ -51,7 +51,7 @@ flag. The flag was raised in v0.69 — so mentioning it here."
my_postgres_source | postgres | | c2
```
- ```sql
+ ```mzsql
SHOW SOURCES IN CLUSTER c2;
```
```nofmt
diff --git a/doc/user/content/releases/v0.73.md b/doc/user/content/releases/v0.73.md
index 270309400a00..30083203ac6a 100644
--- a/doc/user/content/releases/v0.73.md
+++ b/doc/user/content/releases/v0.73.md
@@ -19,7 +19,7 @@ behind a feature flag."
**Example:**
- ```sql
+ ```mzsql
CREATE TABLE t (c1 int, c2 text);
COMMENT ON TABLE t IS 'materialize comment on t';
COMMENT ON COLUMN t.c2 IS 'materialize comment on t.c2';
diff --git a/doc/user/content/releases/v0.74.md b/doc/user/content/releases/v0.74.md
index edeb889ff608..7bfa517496a5 100644
--- a/doc/user/content/releases/v0.74.md
+++ b/doc/user/content/releases/v0.74.md
@@ -16,7 +16,7 @@ deployments."
* Bring back support for [window aggregations](/sql/functions/#window-func), or
aggregate functions (e.g., `sum`, `avg`) that use an `OVER` clause.
- ```sql
+ ```mzsql
CREATE TABLE sales(time int, amount int);
INSERT INTO sales VALUES (1,3), (2,6), (3,1), (4,5), (5,5), (6,6);
@@ -50,7 +50,7 @@ deployments."
* Improve error message for possibly mistyped column names, suggesting similarly
named columns if the one specified cannot be found.
- ```sql
+ ```mzsql
CREATE SOURCE case_sensitive_names
FROM POSTGRES CONNECTION pg (
PUBLICATION 'mz_source',
diff --git a/doc/user/content/releases/v0.76.md b/doc/user/content/releases/v0.76.md
index d51779b432d6..5826ba64c3c2 100644
--- a/doc/user/content/releases/v0.76.md
+++ b/doc/user/content/releases/v0.76.md
@@ -12,7 +12,7 @@ released: true
using the `SSH TUNNEL` top-level option. The default connection will be used
to connect to any new or unlisted brokers.
- ```sql
+ ```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
diff --git a/doc/user/content/releases/v0.77.md b/doc/user/content/releases/v0.77.md
index 5db77d4b69e5..99633a3c16e3 100644
--- a/doc/user/content/releases/v0.77.md
+++ b/doc/user/content/releases/v0.77.md
@@ -16,7 +16,7 @@ patch: 1
**Example**
- ```sql
+ ```mzsql
CREATE SOURCE webhook_with_time_based_rejection
IN CLUSTER webhook_cluster
FROM WEBHOOK
@@ -33,7 +33,7 @@ patch: 1
**Example**
- ```sql
+ ```mzsql
SELECT timezone_offset('America/New_York', '2023-11-05T06:00:00+00')
----
(EST,-05:00:00,00:00:00)
diff --git a/doc/user/content/releases/v0.78.md b/doc/user/content/releases/v0.78.md
index 19fc080ae5cd..61c5ae68efb6 100644
--- a/doc/user/content/releases/v0.78.md
+++ b/doc/user/content/releases/v0.78.md
@@ -23,7 +23,7 @@ patch: 15
sources, which allows extracting individual headers from Kafka messages and
expose them as columns of the source.
- ```sql
+ ```mzsql
CREATE SOURCE kafka_metadata
FROM KAFKA CONNECTION kafka_connection (TOPIC 'data')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -31,7 +31,7 @@ expose them as columns of the source.
ENVELOPE NONE
```
- ```sql
+ ```mzsql
SELECT
id,
seller,
diff --git a/doc/user/content/releases/v0.83.md b/doc/user/content/releases/v0.83.md
index 2a9b13ee204d..06708085c0b2 100644
--- a/doc/user/content/releases/v0.83.md
+++ b/doc/user/content/releases/v0.83.md
@@ -18,7 +18,7 @@ patch: 4
these objects in a context that requires indexing, we recommend creating a
view over the catalog objects, and indexing that view instead.
- ```sql
+ ```mzsql
CREATE VIEW mz_objects_indexed AS
SELECT o.id AS object_id,
s.name AS schema_name
diff --git a/doc/user/content/releases/v0.84.md b/doc/user/content/releases/v0.84.md
index 028b92104572..7de1d890817f 100644
--- a/doc/user/content/releases/v0.84.md
+++ b/doc/user/content/releases/v0.84.md
@@ -17,7 +17,7 @@ patch: 4
**New syntax**
- ```sql
+ ```mzsql
--Create the object in a specific cluster
CREATE SOURCE json_source
IN CLUSTER some_cluster
@@ -32,7 +32,7 @@ patch: 4
**Deprecated syntax**
- ```sql
+ ```mzsql
--Create the object in a dedicated (linked) cluster
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'ch_anges')
diff --git a/doc/user/content/releases/v0.86.md b/doc/user/content/releases/v0.86.md
index b11c117ce932..35c2d8403b0e 100644
--- a/doc/user/content/releases/v0.86.md
+++ b/doc/user/content/releases/v0.86.md
@@ -12,7 +12,7 @@ patch: 1
* Add support for [handling batched events](https://materialize.com/docs/sql/create-source/webhook/#handling-batch-events)
in the webhook source via the new `JSON ARRAY` format.
- ```sql
+ ```mzsql
CREATE SOURCE webhook_source_json_batch IN CLUSTER my_cluster FROM WEBHOOK
BODY FORMAT JSON ARRAY
INCLUDE HEADERS;
@@ -27,7 +27,7 @@ patch: 1
]
```
- ```sql
+ ```mzsql
SELECT COUNT(body) FROM webhook_source_json_batch;
----
3
@@ -37,7 +37,7 @@ patch: 1
Use the new `map_build` function to turn all headers exposed via `INCLUDE
HEADERS` into a `map`, which makes it easier to extract header values.
- ```sql
+ ```mzsql
SELECT
id,
seller,
diff --git a/doc/user/content/releases/v0.87.md b/doc/user/content/releases/v0.87.md
index c5b944d50e7a..02cee97be4ee 100644
--- a/doc/user/content/releases/v0.87.md
+++ b/doc/user/content/releases/v0.87.md
@@ -12,7 +12,7 @@ patch: 2
* Add support for handling batched events formatted as `NDJSON` in the
[webhook source](https://materialize.com/docs/sql/create-source/webhook/).
- ```sql
+ ```mzsql
CREATE SOURCE webhook_json IN CLUSTER quickstart FROM WEBHOOK
BODY FORMAT JSON;
@@ -30,7 +30,7 @@ patch: 2
used to connect to all brokers, and is exclusive with the `BROKER` and
`BROKERS` options.
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1')
diff --git a/doc/user/content/releases/v0.88.md b/doc/user/content/releases/v0.88.md
index 453f9de5d61e..0de95cc14b5b 100644
--- a/doc/user/content/releases/v0.88.md
+++ b/doc/user/content/releases/v0.88.md
@@ -11,7 +11,7 @@ patch: 1
* Allow `LIMIT` expressions to contain parameters.
- ```sql
+ ```mzsql
PREPARE foo AS SELECT generate_series(1, 10) LIMIT $1;
EXECUTE foo (7::bigint);
diff --git a/doc/user/content/releases/v0.91.md b/doc/user/content/releases/v0.91.md
index 284b24f1836e..bae16d5e15fa 100644
--- a/doc/user/content/releases/v0.91.md
+++ b/doc/user/content/releases/v0.91.md
@@ -17,7 +17,7 @@ behind a feature flag."
**Syntax**
- ```sql
+ ```mzsql
CREATE SECRET mysqlpass AS '';
CREATE CONNECTION mysql_connection TO MYSQL (
diff --git a/doc/user/content/releases/v0.97.md b/doc/user/content/releases/v0.97.md
index 6f7004051234..49d2f598e2f6 100644
--- a/doc/user/content/releases/v0.97.md
+++ b/doc/user/content/releases/v0.97.md
@@ -18,7 +18,7 @@ patch: 3
string with the first character of every word in upper case and all other
characters in lower case.
- ```sql
+ ```mzsql
SELECT initcap('bye DrivEr');
initcap
diff --git a/doc/user/content/releases/v0.99.md b/doc/user/content/releases/v0.99.md
index f8f8a4cc5c07..54c72d1a0f2e 100644
--- a/doc/user/content/releases/v0.99.md
+++ b/doc/user/content/releases/v0.99.md
@@ -15,7 +15,7 @@ patch: 2
**Syntax**
- ```sql
+ ```mzsql
CREATE CONNECTION s3_conn
TO AWS (ASSUME ROLE ARN = 'arn:aws:iam::000000000000:role/Materializes3Exporter');
@@ -45,11 +45,11 @@ patch: 2
**Syntax**
- ```sql
+ ```mzsql
ALTER MATERIALIZED VIEW winning_bids SET (RETAIN HISTORY FOR '2hr');
```
- ```sql
+ ```mzsql
ALTER MATERIALIZED VIEW winning_bids RESET (RETAIN HISTORY);
```
diff --git a/doc/user/content/serve-results/deepnote.md b/doc/user/content/serve-results/deepnote.md
index 4d2581390ca0..37a75008a9c9 100644
--- a/doc/user/content/serve-results/deepnote.md
+++ b/doc/user/content/serve-results/deepnote.md
@@ -40,7 +40,7 @@ This guide walks you through the steps required to use the collaborative data no
1. Create a new SQL block.
2. Inside the block, select the new **Materialize** integration and paste the following query:
- ```sql
+ ```mzsql
SELECT
number,
row_num
diff --git a/doc/user/content/serve-results/hex.md b/doc/user/content/serve-results/hex.md
index 2b0014eafb4d..9d366cbf48e8 100644
--- a/doc/user/content/serve-results/hex.md
+++ b/doc/user/content/serve-results/hex.md
@@ -41,7 +41,7 @@ This guide walks you through the steps required to use the collaborative data no
1. Create a new SQL cell.
2. Inside the cell, select the new **Materialize** connection and paste the following query:
- ```sql
+ ```mzsql
SELECT
number,
row_num
diff --git a/doc/user/content/serve-results/power-bi.md b/doc/user/content/serve-results/power-bi.md
index 02dcb1ddc2d1..492266a9b41e 100644
--- a/doc/user/content/serve-results/power-bi.md
+++ b/doc/user/content/serve-results/power-bi.md
@@ -64,7 +64,7 @@ To work around this Power BI limitation, you can use one of the following option
For example, if you have a materialized view called `my_view`, you can create a view called `my_view_bi` with the following SQL:
- ```sql
+ ```mzsql
CREATE VIEW my_view_bi AS SELECT * FROM my_view;
```
diff --git a/doc/user/content/serve-results/s3.md b/doc/user/content/serve-results/s3.md
index e2d6ac801774..6fb4b9ae3412 100644
--- a/doc/user/content/serve-results/s3.md
+++ b/doc/user/content/serve-results/s3.md
@@ -152,14 +152,14 @@ Next, you must attach the policy you just created to a Materialize-specific
AWS account, and `` with the name of the role you created in the
previous step:
- ```sql
+ ```mzsql
CREATE CONNECTION aws_connection
TO AWS (ASSUME ROLE ARN = 'arn:aws:iam:::role/');
```
1. Retrieve the external ID for the connection:
- ```sql
+ ```mzsql
SELECT awsc.id, external_id
FROM mz_internal.mz_aws_connections awsc
JOIN mz_connections c ON awsc.id = c.id
@@ -186,7 +186,7 @@ Next, you must attach the policy you just created to a Materialize-specific
1. Back in Materialize, validate the AWS connection you created using the
[`VALIDATE CONNECTION`](/sql/validate-connection) command.
- ```sql
+ ```mzsql
VALIDATE CONNECTION aws_connection;
```
@@ -201,7 +201,7 @@ command, and the AWS connection you created in the previous step.
{{< tabs >}}
{{< tab "Parquet">}}
-```sql
+```mzsql
COPY some_object TO 's3:///'
WITH (
AWS CONNECTION = aws_connection,
@@ -216,7 +216,7 @@ type support and conversion, check the [reference documentation](/sql/copy-to/#c
{{< tab "CSV">}}
-```sql
+```mzsql
COPY some_object TO 's3:///'
WITH (
AWS CONNECTION = aws_connection,
diff --git a/doc/user/content/serve-results/sink-troubleshooting.md b/doc/user/content/serve-results/sink-troubleshooting.md
index 33334620d216..502e0ef565a4 100644
--- a/doc/user/content/serve-results/sink-troubleshooting.md
+++ b/doc/user/content/serve-results/sink-troubleshooting.md
@@ -14,7 +14,7 @@ menu:
## Why isn't my sink exporting data?
First, look for errors in [`mz_sink_statuses`](/sql/system-catalog/mz_internal/#mz_sink_statuses):
-```sql
+```mzsql
SELECT * FROM mz_internal.mz_sink_statuses
WHERE name = ;
```
@@ -31,7 +31,7 @@ Repeatedly query the
[`mz_sink_statistics`](/sql/system-catalog/mz_internal/#mz_sink_statistics)
table and look for ingestion statistics that advance over time:
-```sql
+```mzsql
SELECT
messages_staged,
messages_committed,
diff --git a/doc/user/content/sql/alter-cluster.md b/doc/user/content/sql/alter-cluster.md
index 762f797c37ac..9471b81ec5c2 100644
--- a/doc/user/content/sql/alter-cluster.md
+++ b/doc/user/content/sql/alter-cluster.md
@@ -23,7 +23,7 @@ cluster, use [`ALTER ... RENAME`](/sql/alter-rename/).
Alter cluster to two replicas:
-```sql
+```mzsql
ALTER CLUSTER c1 SET (REPLICATION FACTOR 2);
```
@@ -31,7 +31,7 @@ ALTER CLUSTER c1 SET (REPLICATION FACTOR 2);
Alter cluster to size `100cc`:
-```sql
+```mzsql
ALTER CLUSTER c1 SET (SIZE '100cc');
```
@@ -47,7 +47,7 @@ by following the instructions below.
Alter the `managed` status of a cluster to managed:
-```sql
+```mzsql
ALTER CLUSTER c1 SET (MANAGED);
```
diff --git a/doc/user/content/sql/alter-default-privileges.md b/doc/user/content/sql/alter-default-privileges.md
index ec3155f0d36c..a5fc1a2ba799 100644
--- a/doc/user/content/sql/alter-default-privileges.md
+++ b/doc/user/content/sql/alter-default-privileges.md
@@ -67,19 +67,19 @@ type for sources, views, and materialized views.
## Examples
-```sql
+```mzsql
ALTER DEFAULT PRIVILEGES FOR ROLE mike GRANT SELECT ON TABLES TO joe;
```
-```sql
+```mzsql
ALTER DEFAULT PRIVILEGES FOR ROLE interns IN DATABASE dev GRANT ALL PRIVILEGES ON TABLES TO intern_managers;
```
-```sql
+```mzsql
ALTER DEFAULT PRIVILEGES FOR ROLE developers REVOKE USAGE ON SECRETS FROM project_managers;
```
-```sql
+```mzsql
ALTER DEFAULT PRIVILEGES FOR ALL ROLES GRANT SELECT ON TABLES TO managers;
```
diff --git a/doc/user/content/sql/alter-owner.md b/doc/user/content/sql/alter-owner.md
index 85db8161472e..00d76f844ff8 100644
--- a/doc/user/content/sql/alter-owner.md
+++ b/doc/user/content/sql/alter-owner.md
@@ -26,11 +26,11 @@ index owner is always kept in-sync with the owner of the underlying relation.
## Examples
-```sql
+```mzsql
ALTER TABLE t OWNER TO joe;
```
-```sql
+```mzsql
ALTER CLUSTER REPLICA production.r1 OWNER TO admin;
```
diff --git a/doc/user/content/sql/alter-rename.md b/doc/user/content/sql/alter-rename.md
index 63ffd447899b..960386ed0ae2 100644
--- a/doc/user/content/sql/alter-rename.md
+++ b/doc/user/content/sql/alter-rename.md
@@ -40,7 +40,7 @@ You cannot rename items if:
You can only rename either view named `v1` if every dependent view's query
that contains references to both views fully qualifies all references, e.g.
- ```sql
+ ```mzsql
CREATE VIEW v2 AS
SELECT *
FROM db1.s1.v1
@@ -56,7 +56,7 @@ You cannot rename items if:
In the following examples, `v1` could _not_ be renamed:
- ```sql
+ ```mzsql
CREATE VIEW v3 AS
SELECT *
FROM v1
@@ -64,7 +64,7 @@ You cannot rename items if:
ON v1.a = v2.v1
```
- ```sql
+ ```mzsql
CREATE VIEW v4 AS
SELECT *
FROM v1
@@ -81,7 +81,7 @@ whether that identifier is used implicitly or explicitly.
Consider this example:
-```sql
+```mzsql
CREATE VIEW v5 AS
SELECT *
FROM d1.s1.v2
@@ -102,7 +102,7 @@ However, you could rename `v1` to any other [legal identifier](/sql/identifiers)
## Examples
-```sql
+```mzsql
SHOW VIEWS;
```
```nofmt
@@ -110,7 +110,7 @@ SHOW VIEWS;
-------
v1
```
-```sql
+```mzsql
ALTER VIEW v1 RENAME TO v2;
SHOW VIEWS;
```
diff --git a/doc/user/content/sql/alter-role.md b/doc/user/content/sql/alter-role.md
index 88e307d9b960..f16d5975f2da 100644
--- a/doc/user/content/sql/alter-role.md
+++ b/doc/user/content/sql/alter-role.md
@@ -61,10 +61,10 @@ current configuration parameter defaults for a role, see [`mz_role_parameters`](
#### Altering the attributes of a role
-```sql
+```mzsql
ALTER ROLE rj INHERIT;
```
-```sql
+```mzsql
SELECT name, inherit FROM mz_roles WHERE name = 'rj';
```
```nofmt
@@ -73,7 +73,7 @@ rj true
#### Setting configuration parameters for a role
-```sql
+```mzsql
SHOW cluster;
quickstart
@@ -93,7 +93,7 @@ quickstart
```
##### Non-inheritance
-```sql
+```mzsql
CREATE ROLE team;
CREATE ROLE member;
diff --git a/doc/user/content/sql/alter-secret.md b/doc/user/content/sql/alter-secret.md
index a2b3f4eda820..c28318b695ee 100644
--- a/doc/user/content/sql/alter-secret.md
+++ b/doc/user/content/sql/alter-secret.md
@@ -47,7 +47,7 @@ After an `ALTER SECRET` command is executed:
## Examples
-```sql
+```mzsql
ALTER SECRET upstash_kafka_ca_cert AS decode('c2VjcmV0Cg==', 'base64');
```
diff --git a/doc/user/content/sql/alter-sink.md b/doc/user/content/sql/alter-sink.md
index a233a5bbfb79..eeb9ab83b3c2 100644
--- a/doc/user/content/sql/alter-sink.md
+++ b/doc/user/content/sql/alter-sink.md
@@ -94,7 +94,7 @@ keyspaces.
To alter a sink originally created to use `matview_1` as the upstream relation,
and start sinking the contents to `matview_2` instead:
-```sql
+```mzsql
CREATE SINK avro_sink
FROM matview_1
INTO KAFKA CONNECTION kafka_connection (TOPIC 'test_avro_topic')
@@ -103,7 +103,7 @@ CREATE SINK avro_sink
ENVELOPE UPSERT;
```
-```sql
+```mzsql
ALTER SINK foo SET FROM matview_2;
```
diff --git a/doc/user/content/sql/alter-source.md b/doc/user/content/sql/alter-source.md
index acfae120256a..f348c6a83165 100644
--- a/doc/user/content/sql/alter-source.md
+++ b/doc/user/content/sql/alter-source.md
@@ -63,7 +63,7 @@ You cannot drop the "progress subsource".
### Adding subsources
-```sql
+```mzsql
ALTER SOURCE pg_src ADD SUBSOURCE tbl_a, tbl_b AS b WITH (TEXT COLUMNS [tbl_a.col]);
```
@@ -71,7 +71,7 @@ ALTER SOURCE pg_src ADD SUBSOURCE tbl_a, tbl_b AS b WITH (TEXT COLUMNS [tbl_a.co
To drop a subsource, use the [`DROP SOURCE`](/sql/drop-source/) command:
-```sql
+```mzsql
DROP SOURCE tbl_a, b CASCADE;
```
diff --git a/doc/user/content/sql/alter-swap.md b/doc/user/content/sql/alter-swap.md
index 8e9d52eb979b..ec3ba7dc5f5a 100644
--- a/doc/user/content/sql/alter-swap.md
+++ b/doc/user/content/sql/alter-swap.md
@@ -21,7 +21,7 @@ _target_name_ | The target [identifier](/sql/identifiers) of the item you
Swapping two items is useful for a blue/green deployment
-```sql
+```mzsql
CREATE SCHEMA blue;
CREATE TABLE blue.numbers (n int);
diff --git a/doc/user/content/sql/alter-system-reset.md b/doc/user/content/sql/alter-system-reset.md
index b76985aec958..bee4cd81c7f5 100644
--- a/doc/user/content/sql/alter-system-reset.md
+++ b/doc/user/content/sql/alter-system-reset.md
@@ -31,7 +31,7 @@ this statement.
### Reset enable RBAC
-```sql
+```mzsql
SHOW enable_rbac_checks;
enable_rbac_checks
diff --git a/doc/user/content/sql/alter-system-set.md b/doc/user/content/sql/alter-system-set.md
index 1e973ae50a43..5ed9734d910e 100644
--- a/doc/user/content/sql/alter-system-set.md
+++ b/doc/user/content/sql/alter-system-set.md
@@ -32,7 +32,7 @@ this statement.
### Enable RBAC
-```sql
+```mzsql
SHOW enable_rbac_checks;
enable_rbac_checks
diff --git a/doc/user/content/sql/comment-on.md b/doc/user/content/sql/comment-on.md
index aa0c5f6157f0..17c92e78a009 100644
--- a/doc/user/content/sql/comment-on.md
+++ b/doc/user/content/sql/comment-on.md
@@ -31,7 +31,7 @@ information on ownership and privileges, see [Role-based access control](/manage
## Examples
-```sql
+```mzsql
--- Add comments.
COMMENT ON TABLE foo IS 'this table is important';
COMMENT ON COLUMN foo.x IS 'holds all of the important data';
diff --git a/doc/user/content/sql/copy-from.md b/doc/user/content/sql/copy-from.md
index 60510726e144..52c616828f2c 100644
--- a/doc/user/content/sql/copy-from.md
+++ b/doc/user/content/sql/copy-from.md
@@ -67,15 +67,15 @@ except that:
## Example
-```sql
+```mzsql
COPY t FROM STDIN WITH (DELIMITER '|');
```
-```sql
+```mzsql
COPY t FROM STDIN (FORMAT CSV);
```
-```sql
+```mzsql
COPY t FROM STDIN (DELIMITER '|');
```
diff --git a/doc/user/content/sql/copy-to.md b/doc/user/content/sql/copy-to.md
index 09ee3d34999e..15ea34d00e9f 100644
--- a/doc/user/content/sql/copy-to.md
+++ b/doc/user/content/sql/copy-to.md
@@ -35,7 +35,7 @@ Name | Values | Default value | Description
#### Subscribing to a view with binary output
-```sql
+```mzsql
COPY (SUBSCRIBE some_view) TO STDOUT WITH (FORMAT binary);
```
@@ -156,7 +156,7 @@ Materialize type | Arrow extension name | [Arrow type](https://github.com/apache
{{< tabs >}}
{{< tab "Parquet">}}
-```sql
+```mzsql
COPY some_view TO 's3://mz-to-snow/parquet/'
WITH (
AWS CONNECTION = aws_role_assumption,
@@ -168,7 +168,7 @@ WITH (
{{< tab "CSV">}}
-```sql
+```mzsql
COPY some_view TO 's3://mz-to-snow/csv/'
WITH (
AWS CONNECTION = aws_role_assumption,
diff --git a/doc/user/content/sql/create-cluster-replica.md b/doc/user/content/sql/create-cluster-replica.md
index d3c265e06546..c2ab0c621207 100644
--- a/doc/user/content/sql/create-cluster-replica.md
+++ b/doc/user/content/sql/create-cluster-replica.md
@@ -92,7 +92,7 @@ machines had computed.
## Example
-```sql
+```mzsql
CREATE CLUSTER REPLICA c1.r1 (SIZE = '400cc');
```
diff --git a/doc/user/content/sql/create-cluster.md b/doc/user/content/sql/create-cluster.md
index 73886b45faf4..4c83a59ecf27 100644
--- a/doc/user/content/sql/create-cluster.md
+++ b/doc/user/content/sql/create-cluster.md
@@ -45,13 +45,13 @@ active cluster.
To show your session's active cluster, use the [`SHOW`](/sql/show) command:
-```sql
+```mzsql
SHOW cluster;
```
To switch your session's active cluster, use the [`SET`](/sql/set) command:
-```sql
+```mzsql
SET cluster = other_cluster;
```
@@ -252,7 +252,7 @@ We plan to remove these restrictions in future versions of Materialize.
Create a cluster with two `400cc` replicas:
-```sql
+```mzsql
CREATE CLUSTER c1 (SIZE = '400cc', REPLICATION FACTOR = 2);
```
@@ -260,7 +260,7 @@ CREATE CLUSTER c1 (SIZE = '400cc', REPLICATION FACTOR = 2);
Create a cluster with a single replica and introspection disabled:
-```sql
+```mzsql
CREATE CLUSTER c (SIZE = '100cc', INTROSPECTION INTERVAL = 0);
```
@@ -272,7 +272,7 @@ that cluster replica.
Create a cluster with no replicas:
-```sql
+```mzsql
CREATE CLUSTER c1 (SIZE '100cc', REPLICATION FACTOR = 0);
```
diff --git a/doc/user/content/sql/create-connection.md b/doc/user/content/sql/create-connection.md
index f576c0dc120e..402dcee5d382 100644
--- a/doc/user/content/sql/create-connection.md
+++ b/doc/user/content/sql/create-connection.md
@@ -97,7 +97,7 @@ policy, by querying the
[`mz_internal.mz_aws_connections`](/sql/system-catalog/mz_internal/#mz_aws_connections)
table:
-```sql
+```mzsql
SELECT id, external_id, example_trust_policy FROM mz_internal.mz_aws_connections;
```
@@ -145,7 +145,7 @@ assume:
To create an AWS connection that will assume the `WarehouseExport` role:
-```sql
+```mzsql
CREATE CONNECTION aws_role_assumption TO AWS (
ASSUME ROLE ARN = 'arn:aws:iam::400121260767:role/WarehouseExport',
);
@@ -160,7 +160,7 @@ the use of role assumption-based authentication instead.
To create an AWS connection that uses static access key credentials:
-```sql
+```mzsql
CREATE SECRET aws_secret_access_key = '...';
CREATE CONNECTION aws_credentials TO AWS (
ACCESS KEY ID = 'ASIAV2KIV5LPTG6HGXG6',
@@ -206,7 +206,7 @@ Field | Value | Description
To connect to a Kafka cluster with multiple bootstrap servers, use the `BROKERS`
option:
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKERS ('broker1:9092', 'broker2:9092')
);
@@ -221,7 +221,7 @@ It is insecure to use the `PLAINTEXT` security protocol unless
you are using a [network security connection](#network-security-connections)
to tunnel into a private network, as shown below.
{{< /warning >}}
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'unique-jellyfish-0000-kafka.upstash.io:9092',
SECURITY PROTOCOL = 'PLAINTEXT',
@@ -232,7 +232,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
{{< tab "SSL">}}
With both TLS encryption and TLS client authentication:
-```sql
+```mzsql
CREATE SECRET kafka_ssl_cert AS '-----BEGIN CERTIFICATE----- ...';
CREATE SECRET kafka_ssl_key AS '-----BEGIN PRIVATE KEY----- ...';
CREATE SECRET ca_cert AS '-----BEGIN CERTIFICATE----- ...';
@@ -254,7 +254,7 @@ It is insecure to use TLS encryption with no authentication unless
you are using a [network security connection](#network-security-connections)
to tunnel into a private network as shown below.
{{< /warning >}}
-```sql
+```mzsql
CREATE SECRET ca_cert AS '-----BEGIN CERTIFICATE----- ...';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -275,7 +275,7 @@ you are using a [network security connection](#network-security-connections)
to tunnel into a private network, as shown below.
{{< /warning >}}
-```sql
+```mzsql
CREATE SECRET kafka_password AS '...';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -290,7 +290,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
{{< /tab >}}
{{< tab "SASL_SSL">}}
-```sql
+```mzsql
CREATE SECRET kafka_password AS '...';
CREATE SECRET ca_cert AS '-----BEGIN CERTIFICATE----- ...';
@@ -365,7 +365,7 @@ Suppose you have the following infrastructure:
You can create a connection to this Kafka broker in Materialize like so:
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
@@ -398,7 +398,7 @@ Field | Value | Required | Descript
##### Example {#kafka-privatelink-default-example}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1')
@@ -450,7 +450,7 @@ Field | Value | Required | Description
Using a default SSH tunnel:
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
USER '',
@@ -466,7 +466,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
Using different SSH tunnels for each broker, with a default for brokers that are
not listed:
-```sql
+```mzsql
CREATE CONNECTION ssh1 TO SSH TUNNEL (HOST 'ssh1', ...);
CREATE CONNECTION ssh2 TO SSH TUNNEL (HOST 'ssh2', ...);
@@ -516,7 +516,7 @@ Field | Value | Description
Using username and password authentication with TLS encryption:
-```sql
+```mzsql
CREATE SECRET csr_password AS '...';
CREATE SECRET ca_cert AS '-----BEGIN CERTIFICATE----- ...';
@@ -532,7 +532,7 @@ CREATE CONNECTION csr_basic TO CONFLUENT SCHEMA REGISTRY (
Using TLS for encryption and authentication:
-```sql
+```mzsql
CREATE SECRET csr_ssl_cert AS '-----BEGIN CERTIFICATE----- ...';
CREATE SECRET csr_ssl_key AS '-----BEGIN PRIVATE KEY----- ...';
CREATE SECRET ca_cert AS '-----BEGIN CERTIFICATE----- ...';
@@ -564,7 +564,7 @@ Field | Value | Required | Description
##### Example {#csr-privatelink-example}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
@@ -587,7 +587,7 @@ Field | Value | Required | Description
##### Example {#csr-ssh-example}
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
USER '',
@@ -635,7 +635,7 @@ Field | Value | Description
#### Example {#mysql-example}
-```sql
+```mzsql
CREATE SECRET mysqlpass AS '';
CREATE CONNECTION mysql_connection TO MYSQL (
@@ -662,7 +662,7 @@ Field | Value | Required | Description
##### Example {#mysql-ssh-example}
-```sql
+```mzsql
CREATE CONNECTION tunnel TO SSH TUNNEL (
HOST 'bastion-host',
PORT 22,
@@ -712,7 +712,7 @@ Field | Value | Description
#### Example {#postgres-example}
-```sql
+```mzsql
CREATE SECRET pgpass AS '';
CREATE CONNECTION pg_connection TO POSTGRES (
@@ -741,7 +741,7 @@ Field | Value | Required | Description
##### Example {#postgres-privatelink-example}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
@@ -772,7 +772,7 @@ Field | Value | Required | Description
##### Example {#postgres-ssh-example}
-```sql
+```mzsql
CREATE CONNECTION tunnel TO SSH TUNNEL (
HOST 'bastion-host',
PORT 22,
@@ -828,7 +828,7 @@ principals for AWS PrivateLink connections in your region are stored in
the [`mz_aws_privatelink_connections`](/sql/system-catalog/mz_catalog/#mz_aws_privatelink_connections)
system table.
-```sql
+```mzsql
SELECT * FROM mz_aws_privatelink_connections;
```
```
@@ -856,7 +856,7 @@ accepting connection requests, see the [AWS PrivateLink documentation](https://d
#### Example {#aws-privatelink-example}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
@@ -908,7 +908,7 @@ generation algorithm as security best practices evolve.
Create an SSH tunnel connection:
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST 'bastion-host',
PORT 22,
@@ -918,7 +918,7 @@ CREATE CONNECTION ssh_connection TO SSH TUNNEL (
Retrieve the public keys for the SSH tunnel connection you just created:
-```sql
+```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
diff --git a/doc/user/content/sql/create-database.md b/doc/user/content/sql/create-database.md
index 2d4fa293e9d0..35301a9bda27 100644
--- a/doc/user/content/sql/create-database.md
+++ b/doc/user/content/sql/create-database.md
@@ -38,10 +38,10 @@ details](../namespaces/#database-details).
## Examples
-```sql
+```mzsql
CREATE DATABASE IF NOT EXISTS my_db;
```
-```sql
+```mzsql
SHOW DATABASES;
```
```nofmt
diff --git a/doc/user/content/sql/create-index.md b/doc/user/content/sql/create-index.md
index 781153eb4a0d..02f418558f87 100644
--- a/doc/user/content/sql/create-index.md
+++ b/doc/user/content/sql/create-index.md
@@ -107,7 +107,7 @@ of the index.
You can optimize the performance of `JOIN` on two relations by ensuring their
join keys are the key columns in an index.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW active_customers AS
SELECT guid, geo_id, last_active_on
FROM customer_source
@@ -138,7 +138,7 @@ In the above example, the index `active_customers_geo_idx`...
If you commonly filter by a certain column being equal to a literal value, you can set up an index over that column to speed up your queries:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW active_customers AS
SELECT guid, geo_id, last_active_on
FROM customer_source
diff --git a/doc/user/content/sql/create-materialized-view.md b/doc/user/content/sql/create-materialized-view.md
index 3dec38b19388..b1a1f3c14b90 100644
--- a/doc/user/content/sql/create-materialized-view.md
+++ b/doc/user/content/sql/create-materialized-view.md
@@ -96,7 +96,7 @@ in #27521."
### Creating a materialized view
-```sql
+```mzsql
CREATE MATERIALIZED VIEW winning_bids AS
SELECT auction_id,
bid_id,
@@ -108,7 +108,7 @@ WHERE end_time < mz_now();
### Using non-null assertions
-```sql
+```mzsql
CREATE MATERIALIZED VIEW users_and_orders WITH (
-- The semantics of a FULL OUTER JOIN guarantee that user_id is not null,
-- because one of `users.id` or `orders.user_id` must be not null, but
diff --git a/doc/user/content/sql/create-role.md b/doc/user/content/sql/create-role.md
index f22ffac48f2f..118d067f83df 100644
--- a/doc/user/content/sql/create-role.md
+++ b/doc/user/content/sql/create-role.md
@@ -41,10 +41,10 @@ When RBAC is enabled a role must have the `CREATEROLE` system privilege to creat
## Examples
-```sql
+```mzsql
CREATE ROLE db_reader;
```
-```sql
+```mzsql
SELECT name FROM mz_roles;
```
```nofmt
diff --git a/doc/user/content/sql/create-schema.md b/doc/user/content/sql/create-schema.md
index 10b8620a2dbb..3d635b1fbf6e 100644
--- a/doc/user/content/sql/create-schema.md
+++ b/doc/user/content/sql/create-schema.md
@@ -33,10 +33,10 @@ _schema_name_ | A name for the schema. You can specify the data
## Examples
-```sql
+```mzsql
CREATE SCHEMA my_db.my_schema;
```
-```sql
+```mzsql
SHOW SCHEMAS FROM my_db;
```
```nofmt
diff --git a/doc/user/content/sql/create-secret.md b/doc/user/content/sql/create-secret.md
index c7791e310e5b..3a61cf5e21b5 100644
--- a/doc/user/content/sql/create-secret.md
+++ b/doc/user/content/sql/create-secret.md
@@ -20,7 +20,7 @@ _value_ | The value for the secret. The _value_ expression may not reference any
## Examples
-```sql
+```mzsql
CREATE SECRET upstash_kafka_ca_cert AS decode('c2VjcmV0Cg==', 'base64');
```
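Once created, a secret is referenced by name from other objects. As an illustrative sketch (the broker address and connection name are assumptions), a Kafka connection could use the secret above as its CA certificate:

```mzsql
-- Reference the secret by name in a connection definition.
CREATE CONNECTION kafka_tls TO KAFKA (
    BROKER 'broker1:9092',
    SSL CERTIFICATE AUTHORITY = SECRET upstash_kafka_ca_cert
);
```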
diff --git a/doc/user/content/sql/create-sink/kafka.md b/doc/user/content/sql/create-sink/kafka.md
index 6447e940b16d..b5ecaf69d112 100644
--- a/doc/user/content/sql/create-sink/kafka.md
+++ b/doc/user/content/sql/create-sink/kafka.md
@@ -482,7 +482,7 @@ There are three ways to resolve this error:
* Create a materialized view that deduplicates the input relation by the
desired upsert key:
- ```sql
+ ```mzsql
-- For each row with the same key `k`, the `ORDER BY` clause ensures we
-- keep the row with the largest value of `v`.
CREATE MATERIALIZED VIEW deduped AS
@@ -507,7 +507,7 @@ There are three ways to resolve this error:
* Use the `NOT ENFORCED` clause to disable Materialize's validation of the key's
uniqueness:
- ```sql
+ ```mzsql
CREATE SINK s
FROM original_input
INTO KAFKA CONNECTION kafka_connection (TOPIC 't')
@@ -545,7 +545,7 @@ statements. For more details on creating connections, check the
{{< tabs tabID="1" >}}
{{< tab "SSL">}}
-```sql
+```mzsql
CREATE SECRET kafka_ssl_key AS '';
CREATE SECRET kafka_ssl_crt AS '';
@@ -559,7 +559,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
{{< /tab >}}
{{< tab "SASL">}}
-```sql
+```mzsql
CREATE SECRET kafka_password AS '';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -578,7 +578,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
{{< tabs tabID="1" >}}
{{< tab "SSL">}}
-```sql
+```mzsql
CREATE SECRET csr_ssl_crt AS '';
CREATE SECRET csr_ssl_key AS '';
CREATE SECRET csr_password AS '';
@@ -595,7 +595,7 @@ CREATE CONNECTION csr_ssl TO CONFLUENT SCHEMA REGISTRY (
{{< /tab >}}
{{< tab "Basic HTTP Authentication">}}
-```sql
+```mzsql
CREATE SECRET IF NOT EXISTS csr_username AS '';
CREATE SECRET IF NOT EXISTS csr_password AS '';
@@ -616,7 +616,7 @@ CREATE CONNECTION csr_basic_http
{{< tabs >}}
{{< tab "Avro">}}
-```sql
+```mzsql
CREATE SINK avro_sink
FROM
INTO KAFKA CONNECTION kafka_connection (TOPIC 'test_avro_topic')
@@ -628,7 +628,7 @@ CREATE SINK avro_sink
{{< /tab >}}
{{< tab "JSON">}}
-```sql
+```mzsql
CREATE SINK json_sink
FROM
INTO KAFKA CONNECTION kafka_connection (TOPIC 'test_json_topic')
@@ -645,7 +645,7 @@ CREATE SINK json_sink
{{< tabs >}}
{{< tab "Avro">}}
-```sql
+```mzsql
CREATE SINK avro_sink
FROM
INTO KAFKA CONNECTION kafka_connection (TOPIC 'test_avro_topic')
@@ -658,7 +658,7 @@ CREATE SINK avro_sink
#### Topic configuration
-```sql
+```mzsql
CREATE SINK custom_topic_sink
IN CLUSTER my_io_cluster
FROM
@@ -675,7 +675,7 @@ CREATE SINK custom_topic_sink
#### Schema compatibility levels
-```sql
+```mzsql
CREATE SINK compatibility_level_sink
IN CLUSTER my_io_cluster
FROM
@@ -694,7 +694,7 @@ CREATE SINK compatibility_level_sink
Consider the following sink, `docs_sink`, built on top of a relation `t` with
several [SQL comments](/sql/comment-on) attached.
-```sql
+```mzsql
CREATE TABLE t (key int NOT NULL, value text NOT NULL);
COMMENT ON TABLE t IS 'SQL comment on t';
COMMENT ON COLUMN t.value IS 'SQL comment on t.value';
diff --git a/doc/user/content/sql/create-source/_index.md b/doc/user/content/sql/create-source/_index.md
index e5a9726dfc53..4774bd931796 100644
--- a/doc/user/content/sql/create-source/_index.md
+++ b/doc/user/content/sql/create-source/_index.md
@@ -84,7 +84,7 @@ If your JSON messages have a consistent shape, we recommend creating a parsing
[view](/get-started/key-concepts/#views) that maps the individual fields to
columns with the required data types:
-```sql
+```mzsql
-- extract jsonb into typed columns
CREATE VIEW my_typed_source AS
SELECT
diff --git a/doc/user/content/sql/create-source/kafka.md b/doc/user/content/sql/create-source/kafka.md
index 16252637a6d8..16e151687617 100644
--- a/doc/user/content/sql/create-source/kafka.md
+++ b/doc/user/content/sql/create-source/kafka.md
@@ -72,7 +72,7 @@ By default, the message key is decoded using the same format as the message valu
To create a source that uses the standard key-value convention to support inserts, updates, and deletes within Materialize, you can use `ENVELOPE UPSERT`:
-```sql
+```mzsql
CREATE SOURCE kafka_upsert
FROM KAFKA CONNECTION kafka_connection (TOPIC 'events')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -110,7 +110,7 @@ echo ":" | kcat -b $BROKER -t $TOPIC -Z -K: \
Materialize provides a dedicated envelope (`ENVELOPE DEBEZIUM`) to decode Kafka messages produced by [Debezium](https://debezium.io/). To create a source that interprets Debezium messages:
-```sql
+```mzsql
CREATE SOURCE kafka_repl
FROM KAFKA CONNECTION kafka_connection (TOPIC 'pg_repl.public.table1')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -144,7 +144,7 @@ In addition to the message value, Materialize can expose the message key, header
The message key is exposed via the `INCLUDE KEY` option. Composite keys are also supported {{% gh 7645 %}}.
-```sql
+```mzsql
CREATE SOURCE kafka_metadata
FROM KAFKA CONNECTION kafka_connection (TOPIC 'data')
KEY FORMAT TEXT
@@ -173,7 +173,7 @@ All of a message's headers can be exposed using `INCLUDE HEADERS`, followed by a
This introduces a column with the specified name, or `headers` if none was specified. The column has the type `record(key: text, value: bytea?) list`, i.e. a list of records containing key-value pairs, where the keys are `text` and the values are nullable `bytea`s.
-```sql
+```mzsql
CREATE SOURCE kafka_metadata
FROM KAFKA CONNECTION kafka_connection (TOPIC 'data')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -183,7 +183,7 @@ CREATE SOURCE kafka_metadata
To simplify turning the headers column into a `map` (so individual headers can be searched), you can use the `map_build` function, e.g.
-```sql
+```mzsql
SELECT
id,
seller,
@@ -192,7 +192,10 @@ SELECT
map_build(headers)->'encryption_key' AS encryption_key,
FROM kafka_metadata;
```
-```
+
+
+
+```nofmt
id | seller | item | client_id | encryption_key
----+--------+--------------------+-----------+----------------------
2 | 1592 | Custom Art | 23 | \x796f75207769736821
@@ -205,7 +208,7 @@ Individual message headers can be exposed via the `INCLUDE HEADER key AS name` o
The `bytea` value of the header is automatically parsed into a UTF-8 string. To expose the raw `bytea` instead, the `BYTES` option can be used.
-```sql
+```mzsql
CREATE SOURCE kafka_metadata
FROM KAFKA CONNECTION kafka_connection (TOPIC 'data')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -215,7 +218,7 @@ CREATE SOURCE kafka_metadata
Headers can be queried as any other column in the source:
-```sql
+```mzsql
SELECT
id,
seller,
@@ -223,7 +226,11 @@ SELECT
client_id::numeric,
encryption_key
FROM kafka_metadata;
+```
+
+
+```nofmt
id | seller | item | client_id | encryption_key
----+--------+--------------------+-----------+----------------------
2 | 1592 | Custom Art | 23 | \x796f75207769736821
@@ -237,7 +244,7 @@ Note that:
These metadata fields are exposed via the `INCLUDE PARTITION`, `INCLUDE OFFSET` and `INCLUDE TIMESTAMP` options.
-```sql
+```mzsql
CREATE SOURCE kafka_metadata
FROM KAFKA CONNECTION kafka_connection (TOPIC 'data')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -245,9 +252,13 @@ CREATE SOURCE kafka_metadata
ENVELOPE NONE;
```
-```sql
+```mzsql
SELECT "offset" FROM kafka_metadata WHERE ts > '2021-01-01';
+```
+
+
+```nofmt
offset
------
15
@@ -259,7 +270,7 @@ offset
To start consuming a Kafka stream from a specific offset, you can use the `START OFFSET` option.
-```sql
+```mzsql
CREATE SOURCE kafka_offset
FROM KAFKA CONNECTION kafka_connection (
TOPIC 'data',
@@ -313,7 +324,7 @@ Field | Type | Meaning
And can be queried using:
-```sql
+```mzsql
SELECT
partition, "offset"
FROM
@@ -366,7 +377,7 @@ The consumer group ID prefix for each Kafka source in the system is available in
the `group_id_prefix` column of the [`mz_kafka_sources`] table. To look up the
`group_id_prefix` for a source by name, use:
-```sql
+```mzsql
SELECT group_id_prefix
FROM mz_internal.mz_kafka_sources ks
JOIN mz_sources s ON s.id = ks.id
@@ -395,7 +406,7 @@ Once created, a connection is **reusable** across multiple `CREATE SOURCE` state
{{< tabs tabID="1" >}}
{{< tab "SSL">}}
-```sql
+```mzsql
CREATE SECRET kafka_ssl_key AS '';
CREATE SECRET kafka_ssl_crt AS '';
@@ -408,7 +419,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
{{< /tab >}}
{{< tab "SASL">}}
-```sql
+```mzsql
CREATE SECRET kafka_password AS '';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -426,14 +437,14 @@ If your Kafka broker is not exposed to the public internet, you can [tunnel the
{{< tabs tabID="1" >}}
{{< tab "AWS PrivateLink">}}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
);
```
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKERS (
'broker1:9092' USING AWS PRIVATELINK privatelink_svc,
@@ -446,7 +457,7 @@ For step-by-step instructions on creating AWS PrivateLink connections and config
{{< /tab >}}
{{< tab "SSH tunnel">}}
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
USER '',
@@ -454,7 +465,7 @@ CREATE CONNECTION ssh_connection TO SSH TUNNEL (
);
```
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKERS (
'broker1:9092' USING SSH TUNNEL ssh_connection,
@@ -471,7 +482,7 @@ For step-by-step instructions on creating SSH tunnel connections and configuring
{{< tabs tabID="1" >}}
{{< tab "SSL">}}
-```sql
+```mzsql
CREATE SECRET csr_ssl_crt AS '';
CREATE SECRET csr_ssl_key AS '';
CREATE SECRET csr_password AS '';
@@ -486,7 +497,7 @@ CREATE CONNECTION csr_connection TO CONFLUENT SCHEMA REGISTRY (
```
{{< /tab >}}
{{< tab "Basic HTTP Authentication">}}
-```sql
+```mzsql
CREATE SECRET IF NOT EXISTS csr_username AS '';
CREATE SECRET IF NOT EXISTS csr_password AS '';
@@ -504,14 +515,14 @@ If your Confluent Schema Registry server is not exposed to the public internet,
{{< tabs tabID="1" >}}
{{< tab "AWS PrivateLink">}}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
);
```
-```sql
+```mzsql
CREATE CONNECTION csr_connection TO CONFLUENT SCHEMA REGISTRY (
URL 'http://my-confluent-schema-registry:8081',
AWS PRIVATELINK privatelink_svc
@@ -521,7 +532,7 @@ CREATE CONNECTION csr_connection TO CONFLUENT SCHEMA REGISTRY (
For step-by-step instructions on creating AWS PrivateLink connections and configuring an AWS PrivateLink service to accept connections from Materialize, check [this guide](/ops/network-security/privatelink/).
{{< /tab >}}
{{< tab "SSH tunnel">}}
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
USER '',
@@ -529,7 +540,7 @@ CREATE CONNECTION ssh_connection TO SSH TUNNEL (
);
```
-```sql
+```mzsql
CREATE CONNECTION csr_connection TO CONFLUENT SCHEMA REGISTRY (
URL 'http://my-confluent-schema-registry:8081',
SSH TUNNEL ssh_connection
@@ -547,7 +558,7 @@ For step-by-step instructions on creating SSH tunnel connections and configuring
**Using Confluent Schema Registry**
-```sql
+```mzsql
CREATE SOURCE avro_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection;
@@ -556,13 +567,13 @@ CREATE SOURCE avro_source
{{< /tab >}}
{{< tab "JSON">}}
-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;
```
-```sql
+```mzsql
CREATE VIEW typed_kafka_source AS
SELECT
(data->>'field1')::boolean AS field_1,
@@ -581,7 +592,7 @@ manually, you can use [this **JSON parsing widget**](/sql/types/jsonb/#parsing)!
**Using Confluent Schema Registry**
-```sql
+```mzsql
CREATE SOURCE proto_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT PROTOBUF USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection;
@@ -616,7 +627,7 @@ If you're not using a schema registry, you can use the `MESSAGE...SCHEMA` clause
* Create the source using the encoded descriptor bytes from the previous step
(including the `\x` at the beginning):
- ```sql
+ ```mzsql
CREATE SOURCE proto_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT PROTOBUF MESSAGE 'Batch' USING SCHEMA '\x0a300a0d62696...';
@@ -628,7 +639,7 @@ If you're not using a schema registry, you can use the `MESSAGE...SCHEMA` clause
{{< /tab >}}
{{< tab "Text/bytes">}}
-```sql
+```mzsql
CREATE SOURCE text_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT TEXT
@@ -638,7 +649,7 @@ CREATE SOURCE text_source
{{< /tab >}}
{{< tab "CSV">}}
-```sql
+```mzsql
CREATE SOURCE csv_source (col_foo, col_bar, col_baz)
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT CSV WITH 3 COLUMNS;
diff --git a/doc/user/content/sql/create-source/load-generator.md b/doc/user/content/sql/create-source/load-generator.md
index e268a62da6a9..252212c473ed 100644
--- a/doc/user/content/sql/create-source/load-generator.md
+++ b/doc/user/content/sql/create-source/load-generator.md
@@ -228,7 +228,7 @@ Field | Type | Meaning
And can be queried using:
-```sql
+```mzsql
SELECT "offset"
FROM _progress;
```
@@ -244,7 +244,7 @@ issues, see [Troubleshooting](/ops/troubleshooting/).
To create a load generator source that emits the next number in the sequence every
500 milliseconds:
-```sql
+```mzsql
CREATE SOURCE counter
FROM LOAD GENERATOR COUNTER
(TICK INTERVAL '500ms');
@@ -252,7 +252,7 @@ CREATE SOURCE counter
To examine the counter:
-```sql
+```mzsql
SELECT * FROM counter;
```
```nofmt
@@ -267,7 +267,7 @@ SELECT * FROM counter;
To create a load generator source that simulates an auction house and emits new data every second:
-```sql
+```mzsql
CREATE SOURCE auction_house
FROM LOAD GENERATOR AUCTION
(TICK INTERVAL '1s')
@@ -276,7 +276,7 @@ CREATE SOURCE auction_house
To display the created subsources:
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
@@ -293,7 +293,7 @@ SHOW SOURCES;
To examine the simulated bids:
-```sql
+```mzsql
SELECT * FROM bids;
```
```nofmt
@@ -308,7 +308,7 @@ SELECT * from bids;
To create a load generator source that simulates an online marketing campaign:
-```sql
+```mzsql
CREATE SOURCE marketing
FROM LOAD GENERATOR MARKETING
FOR ALL TABLES;
@@ -316,7 +316,7 @@ CREATE SOURCE marketing
To display the created subsources:
-```sql
+```mzsql
SHOW SOURCES;
```
@@ -335,7 +335,7 @@ SHOW SOURCES;
To find all impressions and clicks associated with a campaign over the last 30 days:
-```sql
+```mzsql
WITH
click_rollup AS
(
@@ -384,7 +384,7 @@ GROUP BY campaign_id;
To create the load generator source and its associated subsources:
-```sql
+```mzsql
CREATE SOURCE tpch
FROM LOAD GENERATOR TPCH (SCALE FACTOR 1)
FOR ALL TABLES;
@@ -392,7 +392,7 @@ CREATE SOURCE tpch
To display the created subsources:
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
@@ -413,7 +413,7 @@ SHOW SOURCES;
To run the Pricing Summary Report Query (Q1), which reports the amount of
billed, shipped, and returned items:
-```sql
+```mzsql
SELECT
l_returnflag,
l_linestatus,
diff --git a/doc/user/content/sql/create-source/materialize-cdc.md b/doc/user/content/sql/create-source/materialize-cdc.md
index ee1e948c2843..cce3175667ca 100644
--- a/doc/user/content/sql/create-source/materialize-cdc.md
+++ b/doc/user/content/sql/create-source/materialize-cdc.md
@@ -138,7 +138,7 @@ Field | Description
You specify the use of the Materialize CDC format in the [Avro schema](/sql/create-source/kafka/#format_spec) when a source is created.
-```sql
+```mzsql
CREATE CONNECTION kafka_conn TO KAFKA (BROKER 'kafka_url:9092');
CREATE SOURCE name_of_source
diff --git a/doc/user/content/sql/create-source/mysql.md b/doc/user/content/sql/create-source/mysql.md
index 85f3eb9f16b4..74bb97850539 100644
--- a/doc/user/content/sql/create-source/mysql.md
+++ b/doc/user/content/sql/create-source/mysql.md
@@ -125,7 +125,7 @@ that overrides `binlog_expire_logs_seconds` and is set to `NULL` by default.
Materialize ingests the raw replication stream data for all (or a specific set
of) tables in your upstream MySQL database.
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR ALL TABLES;
@@ -137,7 +137,7 @@ When you define a source, Materialize will automatically:
initial, snapshot-based sync of the tables before it starts ingesting change
events.
- ```sql
+ ```mzsql
SHOW SOURCES;
```
@@ -167,7 +167,7 @@ replicating `schema1.table_1` and `schema2.table_1`. Use the `FOR TABLES`
clause to provide aliases for each upstream table in such cases, or to specify
an alternative destination schema in Materialize.
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR TABLES (schema1.table_1 AS s1_table_1, schema2.table_1 AS s2_table_1);
@@ -194,7 +194,7 @@ Field | Type | D
And can be queried using:
-```sql
+```mzsql
SELECT transaction_id
FROM _progress;
```
@@ -267,7 +267,7 @@ truncated while replicated, the whole source becomes inaccessible and will not
produce any data until it is recreated. Instead, remove all rows from a table
using an unqualified `DELETE`.
-```sql
+```mzsql
DELETE FROM t;
```
@@ -292,7 +292,7 @@ Once created, a connection is **reusable** across multiple `CREATE SOURCE`
statements. For more details on creating connections, check the
[`CREATE CONNECTION`](/sql/create-connection/#mysql) documentation page.
-```sql
+```mzsql
CREATE SECRET mysqlpass AS '';
CREATE CONNECTION mysql_connection TO MYSQL (
@@ -309,7 +309,7 @@ through an SSH bastion host.
{{< tabs tabID="1" >}}
{{< tab "SSH tunnel">}}
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST 'bastion-host',
PORT 22,
@@ -317,7 +317,7 @@ CREATE CONNECTION ssh_connection TO SSH TUNNEL (
);
```
-```sql
+```mzsql
CREATE CONNECTION mysql_connection TO MYSQL (
HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
SSH TUNNEL ssh_connection
@@ -335,7 +335,7 @@ an SSH bastion server to accept connections from Materialize, check
_Create subsources for all tables in MySQL_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR ALL TABLES;
@@ -343,7 +343,7 @@ CREATE SOURCE mz_source
_Create subsources for all tables from specific schemas in MySQL_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR SCHEMAS (mydb, project);
@@ -351,7 +351,7 @@ CREATE SOURCE mz_source
_Create subsources for specific tables in MySQL_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection
FOR TABLES (mydb.table_1, mydb.table_2 AS alias_table_2);
@@ -364,7 +364,7 @@ by Materialize, use the `TEXT COLUMNS` option to decode data as `text` for the
affected columns. This option expects the upstream fully-qualified names of the
replicated table and column (i.e. as defined in your MySQL database).
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection (
TEXT COLUMNS (mydb.table_1.column_of_unsupported_type)
@@ -378,7 +378,7 @@ MySQL doesn't provide a way to filter out columns from the replication stream.
To exclude specific upstream columns from being ingested, use the `IGNORE
COLUMNS` option.
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM MYSQL CONNECTION mysql_connection (
IGNORE COLUMNS (mydb.table_1.column_to_ignore)
@@ -393,7 +393,7 @@ the [`DROP SOURCE`](/sql/alter-source/#context) syntax to drop the affected
subsource, and then [`ALTER SOURCE...ADD SUBSOURCE`](/sql/alter-source/) to add
the subsource back to the source.
-```sql
+```mzsql
-- List all subsources in mz_source
SHOW SUBSOURCES ON mz_source;
diff --git a/doc/user/content/sql/create-source/postgres.md b/doc/user/content/sql/create-source/postgres.md
index 1b659e6c3381..7bc3ca50446d 100644
--- a/doc/user/content/sql/create-source/postgres.md
+++ b/doc/user/content/sql/create-source/postgres.md
@@ -82,7 +82,7 @@ To avoid creating multiple replication slots in the upstream PostgreSQL database
and minimize the required bandwidth, Materialize ingests the raw replication
stream data for some specific set of tables in your publication.
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR ALL TABLES;
@@ -97,7 +97,7 @@ When you define a source, Materialize will automatically:
`materialize_` for easy identification, and can be looked up in
`mz_internal.mz_postgres_sources`.
- ```sql
+ ```mzsql
SELECT id, replication_slot FROM mz_internal.mz_postgres_sources;
```
@@ -108,7 +108,7 @@ When you define a source, Materialize will automatically:
```
1. Create a **subsource** for each original table in the publication.
- ```sql
+ ```mzsql
SHOW SOURCES;
```
@@ -164,7 +164,7 @@ replicating `schema1.table_1` and `schema2.table_1`. Use the `FOR TABLES`
clause to provide aliases for each upstream table in such cases, or to specify
an alternative destination schema in Materialize.
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR TABLES (schema1.table_1 AS s1_table_1, schema2.table_1 AS s2_table_1);
@@ -185,7 +185,7 @@ Field | Type | Meaning
And can be queried using:
-```sql
+```mzsql
SELECT lsn
FROM _progress;
```
@@ -271,7 +271,7 @@ truncated while replicated, the whole source becomes inaccessible and will not
produce any data until it is recreated. Instead, remove all rows from a table
using an unqualified `DELETE`.
-```sql
+```mzsql
DELETE FROM t;
```
@@ -317,7 +317,7 @@ Once created, a connection is **reusable** across multiple `CREATE SOURCE`
statements. For more details on creating connections, check the
[`CREATE CONNECTION`](/sql/create-connection/#postgresql) documentation page.
-```sql
+```mzsql
CREATE SECRET pgpass AS '';
CREATE CONNECTION pg_connection TO POSTGRES (
@@ -337,14 +337,14 @@ through an AWS PrivateLink service or an SSH bastion host.
{{< tabs tabID="1" >}}
{{< tab "AWS PrivateLink">}}
-```sql
+```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
);
```
-```sql
+```mzsql
CREATE SECRET pgpass AS '';
CREATE CONNECTION pg_connection TO POSTGRES (
@@ -363,7 +363,7 @@ check [this guide](/ops/network-security/privatelink/).
{{< /tab >}}
{{< tab "SSH tunnel">}}
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST 'bastion-host',
PORT 22,
@@ -371,7 +371,7 @@ CREATE CONNECTION ssh_connection TO SSH TUNNEL (
);
```
-```sql
+```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
PORT 5432,
@@ -391,7 +391,7 @@ an SSH bastion server to accept connections from Materialize, check
_Create subsources for all tables included in the PostgreSQL publication_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR ALL TABLES;
@@ -400,7 +400,7 @@ CREATE SOURCE mz_source
_Create subsources for all tables from specific schemas included in the
PostgreSQL publication_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR SCHEMAS (public, project);
@@ -408,7 +408,7 @@ CREATE SOURCE mz_source
_Create subsources for specific tables included in the PostgreSQL publication_
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR TABLES (table_1, table_2 AS alias_table_2);
@@ -421,7 +421,7 @@ unsupported by Materialize, use the `TEXT COLUMNS` option to decode data as
`text` for the affected columns. This option expects the upstream names of the
replicated table and column (i.e. as defined in your PostgreSQL database).
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (
PUBLICATION 'mz_source',
@@ -436,7 +436,7 @@ the [`DROP SOURCE`](/sql/alter-source/#context) syntax to drop the affected
subsource, and then [`ALTER SOURCE...ADD SUBSOURCE`](/sql/alter-source/) to add
the subsource back to the source.
-```sql
+```mzsql
-- List all subsources in mz_source
SHOW SUBSOURCES ON mz_source;
diff --git a/doc/user/content/sql/create-source/webhook.md b/doc/user/content/sql/create-source/webhook.md
index 2f60eb47410e..6792240d5bd3 100644
--- a/doc/user/content/sql/create-source/webhook.md
+++ b/doc/user/content/sql/create-source/webhook.md
@@ -89,7 +89,7 @@ In addition to the request body, Materialize can expose headers to SQL. If a
request header exists, you can map its fields to columns using the `INCLUDE
HEADER` syntax.
-```sql
+```mzsql
CREATE SOURCE my_webhook_source FROM WEBHOOK
BODY FORMAT JSON
INCLUDE HEADER 'timestamp' as ts
@@ -114,7 +114,7 @@ syntax in combination with the `NOT` option. This can be useful if, for
example, you need to accept a dynamic list of fields but want to exclude
sensitive information like authorization.
-```sql
+```mzsql
CREATE SOURCE my_webhook_source FROM WEBHOOK
BODY FORMAT JSON
INCLUDE HEADERS ( NOT 'authorization', NOT 'x-api-key' );
@@ -146,7 +146,7 @@ For example, the following source HMACs the request body using the `sha256`
hashing algorithm, and asserts the result is equal to the value provided in the
`x-signature` header, decoded with `base64`.
-```sql
+```mzsql
CREATE SOURCE my_webhook_source FROM WEBHOOK
BODY FORMAT JSON
CHECK (
@@ -180,7 +180,7 @@ application does not have a way to send test events. If you're having trouble
with your `CHECK` statement, we recommend creating a temporary source without
`CHECK` and using that to iterate more quickly.
-```sql
+```mzsql
CREATE SOURCE my_webhook_temporary_debug FROM WEBHOOK
-- Specify the BODY FORMAT as TEXT or BYTES,
-- which is how it's provided to CHECK.
@@ -191,7 +191,7 @@ CREATE SOURCE my_webhook_temporary_debug FROM WEBHOOK
Once you have a few events in _my_webhook_temporary_debug_, you can query it with your would-be
`CHECK` statement.
-```sql
+```mzsql
SELECT
-- Your would-be CHECK statement.
constant_time_eq(
@@ -213,7 +213,7 @@ Given any number of conditions, e.g. a network hiccup, it's possible for your ap
an event more than once. If your event contains a unique identifier, you can de-duplicate these events
using a [`MATERIALIZED VIEW`](/sql/create-materialized-view/) and the `DISTINCT ON` clause.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW my_webhook_idempotent AS (
SELECT DISTINCT ON (body->>'unique_id') *
FROM my_webhook_source
@@ -234,7 +234,7 @@ When a build job starts we receive an event containing _id_ and the _started_at_
build finished, we'll receive a second event with the same _id_ but now a _finished_at_ timestamp.
To merge these events into a single row, we can again use the `DISTINCT ON` clause.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW my_build_jobs_merged AS (
SELECT DISTINCT ON (id) *
FROM (
@@ -264,7 +264,7 @@ in the following formats:
You can automatically expand a batch of requests formatted as a JSON array into
separate rows using `BODY FORMAT JSON ARRAY`.
-```sql
+```mzsql
-- Webhook source that parses request bodies as a JSON array.
CREATE SOURCE webhook_source_json_array FROM WEBHOOK
BODY FORMAT JSON ARRAY
@@ -283,7 +283,7 @@ POST webhook_source_json_array
]
```
-```sql
+```mzsql
SELECT COUNT(body) FROM webhook_source_json_array;
----
3
@@ -297,7 +297,7 @@ POST webhook_source_json_array
{ "event_type": "d" }
```
-```sql
+```mzsql
SELECT body FROM webhook_source_json_array;
----
{ "event_type": "a" }
@@ -311,7 +311,7 @@ SELECT body FROM webhook_source_json_array;
You can automatically expand a batch of requests formatted as NDJSON into
separate rows using `BODY FORMAT JSON`.
-```sql
+```mzsql
-- Webhook source that parses request bodies as NDJSON.
CREATE SOURCE webhook_source_ndjson FROM WEBHOOK
BODY FORMAT JSON;
@@ -326,7 +326,7 @@ POST 'webhook_source_ndjson'
{ "event_type": "bar" }
```
-```sql
+```mzsql
SELECT COUNT(body) FROM webhook_source_ndjson;
----
2
@@ -355,7 +355,7 @@ source.
To store the sensitive credentials and make them reusable across multiple
`CREATE SOURCE` statements, use [secrets](/sql/create-secret/).
-```sql
+```mzsql
CREATE SECRET basic_hook_auth AS 'Basic <base64_auth>';
```
@@ -365,7 +365,7 @@ After a successful secret creation, you can use the same secret to create
different webhooks with the same basic authentication to check if a request is
valid.
-```sql
+```mzsql
CREATE SOURCE webhook_with_basic_auth
FROM WEBHOOK
BODY FORMAT JSON
diff --git a/doc/user/content/sql/create-table.md b/doc/user/content/sql/create-table.md
index cc7b20b72de9..6256bd56402c 100644
--- a/doc/user/content/sql/create-table.md
+++ b/doc/user/content/sql/create-table.md
@@ -74,13 +74,13 @@ tables may not depend on temporary objects.
You can create a table `t` with the following statement:
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
```
Once a table is created, you can inspect the table with various `SHOW` commands.
-```sql
+```mzsql
SHOW TABLES;
TABLES
------
diff --git a/doc/user/content/sql/create-type.md b/doc/user/content/sql/create-type.md
index 664c34a84b8b..5fdb5a5a21db 100644
--- a/doc/user/content/sql/create-type.md
+++ b/doc/user/content/sql/create-type.md
@@ -70,7 +70,7 @@ custom type's properties.
### Custom `list`
-```sql
+```mzsql
CREATE TYPE int4_list AS LIST (ELEMENT TYPE = int4);
SELECT '{1,2}'::int4_list::text AS custom_list;
@@ -83,7 +83,7 @@ SELECT '{1,2}'::int4_list::text AS custom_list;
### Nested custom `list`
-```sql
+```mzsql
CREATE TYPE int4_list_list AS LIST (ELEMENT TYPE = int4_list);
SELECT '{{1,2}}'::int4_list_list::text AS custom_nested_list;
@@ -96,7 +96,7 @@ SELECT '{{1,2}}'::int4_list_list::text AS custom_nested_list;
### Custom `map`
-```sql
+```mzsql
CREATE TYPE int4_map AS MAP (KEY TYPE = text, VALUE TYPE = int4);
SELECT '{a=>1}'::int4_map::text AS custom_map;
@@ -109,7 +109,7 @@ SELECT '{a=>1}'::int4_map::text AS custom_map;
### Nested custom `map`
-```sql
+```mzsql
CREATE TYPE int4_map_map AS MAP (KEY TYPE = text, VALUE TYPE = int4_map);
SELECT '{a=>{a=>1}}'::int4_map_map::text AS custom_nested_map;
@@ -121,7 +121,7 @@ SELECT '{a=>{a=>1}}'::int4_map_map::text AS custom_nested_map;
```
### Custom `row` type
-```sql
+```mzsql
CREATE TYPE row_type AS (a int, b text);
SELECT ROW(1, 'a')::row_type as custom_row_type;
```
@@ -132,7 +132,7 @@ custom_row_type
```
### Nested `row` type
-```sql
+```mzsql
CREATE TYPE nested_row_type AS (a row_type, b float8);
SELECT ROW(ROW(1, 'a'), 2.3)::nested_row_type AS custom_nested_row_type;
```
diff --git a/doc/user/content/sql/create-view.md b/doc/user/content/sql/create-view.md
index e2064449104c..e9c471d0935b 100644
--- a/doc/user/content/sql/create-view.md
+++ b/doc/user/content/sql/create-view.md
@@ -47,7 +47,7 @@ views may not depend on temporary objects.
### Creating a view
-```sql
+```mzsql
CREATE VIEW purchase_sum_by_region
AS
SELECT sum(purchase.amount) AS region_sum,
diff --git a/doc/user/content/sql/deallocate.md b/doc/user/content/sql/deallocate.md
index b87bfb02e32d..19374f9c54fe 100644
--- a/doc/user/content/sql/deallocate.md
+++ b/doc/user/content/sql/deallocate.md
@@ -20,7 +20,7 @@ Field | Use
## Example
-```sql
+```mzsql
DEALLOCATE a;
```
diff --git a/doc/user/content/sql/delete.md b/doc/user/content/sql/delete.md
index e4b0bbe96f22..f6c3dd3d07ce 100644
--- a/doc/user/content/sql/delete.md
+++ b/doc/user/content/sql/delete.md
@@ -30,7 +30,7 @@ _alias_ | Only permit references to _table_name_ as _alias_.
## Examples
-```sql
+```mzsql
CREATE TABLE delete_me (a int, b text);
INSERT INTO delete_me
VALUES
@@ -46,7 +46,7 @@ SELECT * FROM delete_me ORDER BY a;
2 | goodbye
3 | ok
```
-```sql
+```mzsql
CREATE TABLE delete_using (b text);
INSERT INTO delete_using VALUES ('goodbye'), ('ciao');
DELETE FROM delete_me
@@ -59,7 +59,7 @@ SELECT * FROM delete_me;
---+----
3 | ok
```
-```sql
+```mzsql
DELETE FROM delete_me;
SELECT * FROM delete_me;
```
diff --git a/doc/user/content/sql/drop-cluster-replica.md b/doc/user/content/sql/drop-cluster-replica.md
index b31edb024b9b..47dd03d94603 100644
--- a/doc/user/content/sql/drop-cluster-replica.md
+++ b/doc/user/content/sql/drop-cluster-replica.md
@@ -30,7 +30,7 @@ _replica_name_ | The cluster replica you want to drop. For available clus
## Examples
-```sql
+```mzsql
SHOW CLUSTER REPLICAS WHERE cluster = 'auction_house';
```
@@ -40,7 +40,7 @@ SHOW CLUSTER REPLICAS WHERE cluster = 'auction_house';
auction_house | bigger
```
-```sql
+```mzsql
DROP CLUSTER REPLICA auction_house.bigger;
```
diff --git a/doc/user/content/sql/drop-cluster.md b/doc/user/content/sql/drop-cluster.md
index 80d4a5d715da..ddeab3afcc6b 100644
--- a/doc/user/content/sql/drop-cluster.md
+++ b/doc/user/content/sql/drop-cluster.md
@@ -26,13 +26,13 @@ _cluster_name_ | The cluster you want to drop. For available clusters, se
To drop an existing cluster, run:
-```sql
+```mzsql
DROP CLUSTER auction_house;
```
To avoid issuing an error if the specified cluster does not exist, use the `IF EXISTS` option:
-```sql
+```mzsql
DROP CLUSTER IF EXISTS auction_house;
```
@@ -40,7 +40,7 @@ DROP CLUSTER IF EXISTS auction_house;
If the cluster has dependencies, Materialize will throw an error similar to:
-```sql
+```mzsql
DROP CLUSTER auction_house;
```
@@ -50,7 +50,7 @@ ERROR: cannot drop cluster with active indexes or materialized views
, and you'll have to explicitly ask to also remove any dependent objects using the `CASCADE` option:
-```sql
+```mzsql
DROP CLUSTER auction_house CASCADE;
```
diff --git a/doc/user/content/sql/drop-connection.md b/doc/user/content/sql/drop-connection.md
index 58697eea43a4..9ed8de9bbac0 100644
--- a/doc/user/content/sql/drop-connection.md
+++ b/doc/user/content/sql/drop-connection.md
@@ -28,13 +28,13 @@ _connection_name_ | The connection you want to drop. For available connec
To drop an existing connection, run:
-```sql
+```mzsql
DROP CONNECTION kafka_connection;
```
To avoid issuing an error if the specified connection does not exist, use the `IF EXISTS` option:
-```sql
+```mzsql
DROP CONNECTION IF EXISTS kafka_connection;
```
@@ -42,7 +42,7 @@ DROP CONNECTION IF EXISTS kafka_connection;
If the connection has dependencies, Materialize will throw an error similar to:
-```sql
+```mzsql
DROP CONNECTION kafka_connection;
```
@@ -53,7 +53,7 @@ ERROR: cannot drop materialize.public.kafka_connection: still depended upon by
, and you'll have to explicitly ask to also remove any dependent objects using the `CASCADE` option:
-```sql
+```mzsql
DROP CONNECTION kafka_connection CASCADE;
```
diff --git a/doc/user/content/sql/drop-database.md b/doc/user/content/sql/drop-database.md
index f546293fdf25..94de9f33709f 100644
--- a/doc/user/content/sql/drop-database.md
+++ b/doc/user/content/sql/drop-database.md
@@ -27,20 +27,20 @@ _database_name_ | The database you want to drop. For available databases,
### Remove a database containing schemas
You can use either of the following commands:
-- ```sql
+- ```mzsql
DROP DATABASE my_db;
```
-- ```sql
+- ```mzsql
DROP DATABASE my_db CASCADE;
```
### Remove a database only if it contains no schemas
-```sql
+```mzsql
DROP DATABASE my_db RESTRICT;
```
### Do not issue an error if attempting to remove a nonexistent database
-```sql
+```mzsql
DROP DATABASE IF EXISTS my_db;
```
diff --git a/doc/user/content/sql/drop-index.md b/doc/user/content/sql/drop-index.md
index b9dd70e9921a..725b660287e3 100644
--- a/doc/user/content/sql/drop-index.md
+++ b/doc/user/content/sql/drop-index.md
@@ -25,7 +25,7 @@ _index_name_ | The name of the index you want to remove.
### Remove an index
-```sql
+```mzsql
SHOW VIEWS;
```
```nofmt
@@ -36,7 +36,7 @@ SHOW VIEWS;
| q01 |
+-----------------------------------+
```
-```sql
+```mzsql
SHOW INDEXES ON q01;
```
```nofmt
@@ -51,19 +51,19 @@ You can use the unqualified index name (`q01_geo_idx`) rather the fully qualifie
You can remove an index with any of the following commands:
-- ```sql
+- ```mzsql
DROP INDEX q01_geo_idx;
```
-- ```sql
+- ```mzsql
DROP INDEX q01_geo_idx RESTRICT;
```
-- ```sql
+- ```mzsql
DROP INDEX q01_geo_idx CASCADE;
```
### Do not issue an error if attempting to remove a nonexistent index
-```sql
+```mzsql
DROP INDEX IF EXISTS q01_geo_idx;
```
diff --git a/doc/user/content/sql/drop-materialized-view.md b/doc/user/content/sql/drop-materialized-view.md
index 05d7ad57c373..eae353f378f6 100644
--- a/doc/user/content/sql/drop-materialized-view.md
+++ b/doc/user/content/sql/drop-materialized-view.md
@@ -25,7 +25,7 @@ _view_name_ | The materialized view you want to drop. For available mater
### Dropping a materialized view with no dependencies
-```sql
+```mzsql
DROP MATERIALIZED VIEW winning_bids;
```
```nofmt
@@ -34,7 +34,7 @@ DROP MATERIALIZED VIEW
### Dropping a materialized view with dependencies
-```sql
+```mzsql
DROP MATERIALIZED VIEW winning_bids;
```
diff --git a/doc/user/content/sql/drop-owned.md b/doc/user/content/sql/drop-owned.md
index 5b84db2dc4dc..ce9b7e3d4d0a 100644
--- a/doc/user/content/sql/drop-owned.md
+++ b/doc/user/content/sql/drop-owned.md
@@ -26,11 +26,11 @@ _role_name_ | The role name whose owned objects will be dropped.
## Examples
-```sql
+```mzsql
DROP OWNED BY joe;
```
-```sql
+```mzsql
DROP OWNED BY joe, george CASCADE;
```
diff --git a/doc/user/content/sql/drop-schema.md b/doc/user/content/sql/drop-schema.md
index 31bc1c8f13d2..f702523a0c1e 100644
--- a/doc/user/content/sql/drop-schema.md
+++ b/doc/user/content/sql/drop-schema.md
@@ -27,24 +27,24 @@ Before you can drop a schema, you must [drop all sources](../drop-source) and
## Example
### Remove a schema with no dependent objects
-```sql
+```mzsql
SHOW SOURCES FROM my_schema;
```
```nofmt
my_file_source
```
-```sql
+```mzsql
DROP SCHEMA my_schema;
```
### Remove a schema with dependent objects
-```sql
+```mzsql
SHOW SOURCES FROM my_schema;
```
```nofmt
my_file_source
```
-```sql
+```mzsql
DROP SCHEMA my_schema CASCADE;
```
@@ -52,16 +52,16 @@ DROP SCHEMA my_schema CASCADE;
You can use either of the following commands:
-- ```sql
+- ```mzsql
DROP SCHEMA my_schema;
```
-- ```sql
+- ```mzsql
DROP SCHEMA my_schema RESTRICT;
```
### Do not issue an error if attempting to remove a nonexistent schema
-```sql
+```mzsql
DROP SCHEMA IF EXISTS my_schema;
```
diff --git a/doc/user/content/sql/drop-secret.md b/doc/user/content/sql/drop-secret.md
index 32eb92c27615..d5b86f81150d 100644
--- a/doc/user/content/sql/drop-secret.md
+++ b/doc/user/content/sql/drop-secret.md
@@ -26,13 +26,13 @@ _secret_name_ | The secret you want to drop. For available secrets, see [
To drop an existing secret, run:
-```sql
+```mzsql
DROP SECRET upstash_sasl_password;
```
To avoid issuing an error if the specified secret does not exist, use the `IF EXISTS` option:
-```sql
+```mzsql
DROP SECRET IF EXISTS upstash_sasl_password;
```
@@ -40,7 +40,7 @@ DROP SECRET IF EXISTS upstash_sasl_password;
If the secret has dependencies, Materialize will throw an error similar to:
-```sql
+```mzsql
DROP SECRET upstash_sasl_password;
```
@@ -51,7 +51,7 @@ ERROR: cannot drop materialize.public.upstash_sasl_password: still depended upo
, and you'll have to explicitly ask to also remove any dependent objects using the `CASCADE` option:
-```sql
+```mzsql
DROP SECRET upstash_sasl_password CASCADE;
```
diff --git a/doc/user/content/sql/drop-sink.md b/doc/user/content/sql/drop-sink.md
index 22ab65a9322e..f12a2ecce4be 100644
--- a/doc/user/content/sql/drop-sink.md
+++ b/doc/user/content/sql/drop-sink.md
@@ -20,13 +20,13 @@ _sink_name_ | The sink you want to drop. You can find available sink name
## Examples
-```sql
+```mzsql
SHOW SINKS;
```
```nofmt
my_sink
```
-```sql
+```mzsql
DROP SINK my_sink;
```
```nofmt
diff --git a/doc/user/content/sql/drop-source.md b/doc/user/content/sql/drop-source.md
index c6c59fda4f90..a60063f166d8 100644
--- a/doc/user/content/sql/drop-source.md
+++ b/doc/user/content/sql/drop-source.md
@@ -25,27 +25,27 @@ _source_name_ | The name of the source you want to remove.
### Remove a source with no dependent objects
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
...
my_source
```
-```sql
+```mzsql
DROP SOURCE my_source;
```
### Remove a source with dependent objects
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
...
my_source
```
-```sql
+```mzsql
DROP SOURCE my_source CASCADE;
```
@@ -53,16 +53,16 @@ DROP SOURCE my_source CASCADE;
You can use either of the following commands:
-- ```sql
+- ```mzsql
DROP SOURCE my_source;
```
-- ```sql
+- ```mzsql
DROP SOURCE my_source RESTRICT;
```
### Do not issue an error if attempting to remove a nonexistent source
-```sql
+```mzsql
DROP SOURCE IF EXISTS my_source;
```
diff --git a/doc/user/content/sql/drop-table.md b/doc/user/content/sql/drop-table.md
index d57338ce7181..895abdf6352f 100644
--- a/doc/user/content/sql/drop-table.md
+++ b/doc/user/content/sql/drop-table.md
@@ -29,7 +29,7 @@ _table_name_ | The name of the table to remove.
### Remove a table with no dependent objects
Create a table *t* and verify that it was created:
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
SHOW TABLES;
```
@@ -41,14 +41,14 @@ t
Remove the table:
-```sql
+```mzsql
DROP TABLE t;
```
### Remove a table with dependent objects
Create a table *t*:
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
INSERT INTO t VALUES (1, 'yes'), (2, 'no'), (3, 'maybe');
SELECT * FROM t;
@@ -64,7 +64,7 @@ a | b
Create a materialized view from *t*:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW t_view AS SELECT sum(a) AS sum FROM t;
SHOW MATERIALIZED VIEWS;
```
@@ -77,7 +77,7 @@ t_view | default
Remove table *t*:
-```sql
+```mzsql
DROP TABLE t CASCADE;
```
@@ -85,16 +85,16 @@ DROP TABLE t CASCADE;
You can use either of the following commands:
-- ```sql
+- ```mzsql
DROP TABLE t;
```
-- ```sql
+- ```mzsql
DROP TABLE t RESTRICT;
```
### Do not issue an error if attempting to remove a nonexistent table
-```sql
+```mzsql
DROP TABLE IF EXISTS t;
```
diff --git a/doc/user/content/sql/drop-type.md b/doc/user/content/sql/drop-type.md
index c738fb614d8d..f2b6050f2cc6 100644
--- a/doc/user/content/sql/drop-type.md
+++ b/doc/user/content/sql/drop-type.md
@@ -22,7 +22,7 @@ _data_type_name_ | The name of the type to remove.
## Examples
### Remove a type with no dependent objects
-```sql
+```mzsql
CREATE TYPE int4_map AS MAP (KEY TYPE = text, VALUE TYPE = int4);
SHOW TYPES;
@@ -34,7 +34,7 @@ SHOW TYPES;
(1 row)
```
-```sql
+```mzsql
DROP TYPE int4_map;
SHOW TYPES;
@@ -51,7 +51,7 @@ By default, `DROP TYPE` will not remove a type with dependent objects. The **CAS
In the example below, the **CASCADE** switch removes `int4_list`, `int4_list_list` (which depends on `int4_list`), and the table *t*, which has a column of data type `int4_list`.
-```sql
+```mzsql
CREATE TYPE int4_list AS LIST (ELEMENT TYPE = int4);
CREATE TYPE int4_list_list AS LIST (ELEMENT TYPE = int4_list);
@@ -68,7 +68,7 @@ SHOW TYPES;
(2 rows)
```
-```sql
+```mzsql
DROP TYPE int4_list CASCADE;
SHOW TYPES;
@@ -86,16 +86,16 @@ ERROR: unknown catalog item 't'
You can use either of the following commands:
-- ```sql
+- ```mzsql
DROP TYPE int4_list;
```
-- ```sql
+- ```mzsql
DROP TYPE int4_list RESTRICT;
```
### Do not issue an error if attempting to remove a nonexistent type
-```sql
+```mzsql
DROP TYPE IF EXISTS int4_list;
```
diff --git a/doc/user/content/sql/drop-view.md b/doc/user/content/sql/drop-view.md
index 09cab479e652..8be235dde76e 100644
--- a/doc/user/content/sql/drop-view.md
+++ b/doc/user/content/sql/drop-view.md
@@ -29,7 +29,7 @@ _view_name_ | The view you want to drop. You can find available view name
## Examples
-```sql
+```mzsql
SHOW VIEWS;
```
```nofmt
@@ -37,7 +37,7 @@ SHOW VIEWS;
---------
my_view
```
-```sql
+```mzsql
DROP VIEW my_view;
```
```nofmt
diff --git a/doc/user/content/sql/execute.md b/doc/user/content/sql/execute.md
index e53f4c2d18d1..aaced0f67231 100644
--- a/doc/user/content/sql/execute.md
+++ b/doc/user/content/sql/execute.md
@@ -22,7 +22,7 @@ Field | Use
## Example
-```sql
+```mzsql
EXECUTE a ('a', 'b', 1 + 2);
```
diff --git a/doc/user/content/sql/explain-filter-pushdown.md b/doc/user/content/sql/explain-filter-pushdown.md
index 4d92aed6ab2f..bc40f1e6b07c 100644
--- a/doc/user/content/sql/explain-filter-pushdown.md
+++ b/doc/user/content/sql/explain-filter-pushdown.md
@@ -54,7 +54,7 @@ in your environment.
Suppose you're interested in checking the number of recent bids.
-```sql
+```mzsql
SELECT count(*) FROM bids WHERE bid_time + '5 minutes' > mz_now();
```
@@ -67,7 +67,7 @@ performance.
Explaining this query includes a `pushdown=` field under `Source materialize.public.bids`,
which indicates that this filter can be pushed down.
-```sql
+```mzsql
EXPLAIN
SELECT count(*) FROM bids WHERE bid_time + '5 minutes' > mz_now();
```
@@ -89,7 +89,7 @@ Suppose it's been \~1 hour since you set up the auction house load generator
source, and you'd like to get a sense of how much data your query would need to
fetch.
-```sql
+```mzsql
EXPLAIN FILTER PUSHDOWN FOR
SELECT count(*) FROM bids WHERE bid_time + '5 minutes' > mz_now();
```
@@ -110,7 +110,7 @@ If you instead query for the last hour of data, you can see that since you only
created the auction house source \~1 hour ago, Materialize needs to fetch
almost everything.
-```sql
+```mzsql
EXPLAIN FILTER PUSHDOWN FOR
SELECT count(*) FROM bids WHERE bid_time + '1 hour' > mz_now();
```
diff --git a/doc/user/content/sql/explain-plan.md b/doc/user/content/sql/explain-plan.md
index a9d02100b619..b000de7f8e7b 100644
--- a/doc/user/content/sql/explain-plan.md
+++ b/doc/user/content/sql/explain-plan.md
@@ -22,7 +22,7 @@ change arbitrarily in future versions of Materialize.
Note that the `FOR` keyword is required if the `PLAN` keyword is present. In other words, the following three statements are equivalent:
-```sql
+```mzsql
EXPLAIN <query>;
EXPLAIN PLAN FOR <query>;
EXPLAIN OPTIMIZED PLAN FOR <query>;
@@ -283,35 +283,35 @@ Let's start with a simple join query that lists the total amounts bid per buyer.
Explain the optimized plan as text:
-```sql
+```mzsql
EXPLAIN
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
Same as above, but a bit more verbose:
-```sql
+```mzsql
EXPLAIN PLAN
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
Same as above, but even more verbose:
-```sql
+```mzsql
EXPLAIN OPTIMIZED PLAN AS TEXT FOR
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
Same as above, but every sub-plan is annotated with its schema types:
-```sql
+```mzsql
EXPLAIN WITH(types) FOR
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
Explain the physical plan as text:
-```sql
+```mzsql
EXPLAIN PHYSICAL PLAN FOR
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
@@ -320,7 +320,7 @@ SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP
Let's create a view with an index for the above query.
-```sql
+```mzsql
-- create the view
CREATE VIEW my_view AS
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
@@ -332,35 +332,35 @@ You can inspect the plan of the dataflow that will maintain your index with the
Explain the optimized plan as text:
-```sql
+```mzsql
EXPLAIN
INDEX my_view_idx;
```
Same as above, but a bit more verbose:
-```sql
+```mzsql
EXPLAIN PLAN FOR
INDEX my_view_idx;
```
Same as above, but even more verbose:
-```sql
+```mzsql
EXPLAIN OPTIMIZED PLAN AS TEXT FOR
INDEX my_view_idx;
```
Same as above, but every sub-plan is annotated with its schema types:
-```sql
+```mzsql
EXPLAIN WITH(types) FOR
INDEX my_view_idx;
```
Explain the physical plan as text:
-```sql
+```mzsql
EXPLAIN PHYSICAL PLAN FOR
INDEX my_view_idx;
```
@@ -369,7 +369,7 @@ INDEX my_view_idx;
Let's create a materialized view for the above `SELECT` query.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW my_mat_view AS
SELECT a.id, sum(b.amount) FROM accounts a JOIN bids b ON(a.id = b.buyer) GROUP BY a.id;
```
@@ -378,35 +378,35 @@ You can inspect the plan of the dataflow that will maintain your view with the f
Explain the optimized plan as text:
-```sql
+```mzsql
EXPLAIN
MATERIALIZED VIEW my_mat_view;
```
Same as above, but a bit more verbose:
-```sql
+```mzsql
EXPLAIN PLAN FOR
MATERIALIZED VIEW my_mat_view;
```
Same as above, but even more verbose:
-```sql
+```mzsql
EXPLAIN OPTIMIZED PLAN AS TEXT FOR
MATERIALIZED VIEW my_mat_view;
```
Same as above, but every sub-plan is annotated with its schema types:
-```sql
+```mzsql
EXPLAIN WITH(types)
MATERIALIZED VIEW my_mat_view;
```
Explain the physical plan as text:
-```sql
+```mzsql
EXPLAIN PHYSICAL PLAN FOR
MATERIALIZED VIEW my_mat_view;
```
diff --git a/doc/user/content/sql/explain-schema.md b/doc/user/content/sql/explain-schema.md
index 454cb66fb275..b0f6cea08126 100644
--- a/doc/user/content/sql/explain-schema.md
+++ b/doc/user/content/sql/explain-schema.md
@@ -36,7 +36,7 @@ This command shows what the generated schemas would look like, without creating
## Examples
-```sql
+```mzsql
CREATE TABLE t (c1 int, c2 text);
COMMENT ON TABLE t IS 'materialize comment on t';
COMMENT ON COLUMN t.c2 IS 'materialize comment on t.c2';
diff --git a/doc/user/content/sql/explain-timestamp.md b/doc/user/content/sql/explain-timestamp.md
index e7f744304c06..3e059f99889b 100644
--- a/doc/user/content/sql/explain-timestamp.md
+++ b/doc/user/content/sql/explain-timestamp.md
@@ -77,7 +77,7 @@ Field | Meaning | Example
## Examples
-```sql
+```mzsql
EXPLAIN TIMESTAMP FOR MATERIALIZED VIEW users;
```
```
diff --git a/doc/user/content/sql/functions/_index.md b/doc/user/content/sql/functions/_index.md
index e3dd65e3c183..e7e64300818d 100644
--- a/doc/user/content/sql/functions/_index.md
+++ b/doc/user/content/sql/functions/_index.md
@@ -34,7 +34,7 @@ canceling a query running on another connection.
Materialize offers only limited support for these functions. They may be called
only at the top level of a `SELECT` statement, like so:
-```sql
+```mzsql
SELECT side_effecting_function(arg, ...);
```
diff --git a/doc/user/content/sql/functions/array_agg.md b/doc/user/content/sql/functions/array_agg.md
index 9efe59a95830..6fc2955dc235 100644
--- a/doc/user/content/sql/functions/array_agg.md
+++ b/doc/user/content/sql/functions/array_agg.md
@@ -39,14 +39,14 @@ Instead, we recommend that you materialize all components required for the
`array_agg` function call and create a non-materialized view using `array_agg`
on top of that. That pattern is illustrated in the following statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT * FROM foo;
CREATE VIEW bar AS SELECT array_agg(foo_view.bar) FROM foo_view;
```
## Examples
-```sql
+```mzsql
SELECT
title,
ARRAY_AGG (
diff --git a/doc/user/content/sql/functions/cast.md b/doc/user/content/sql/functions/cast.md
index bac1e42f143a..01e721501e2c 100644
--- a/doc/user/content/sql/functions/cast.md
+++ b/doc/user/content/sql/functions/cast.md
@@ -177,7 +177,7 @@ Source type | Return type
## Examples
-```sql
+```mzsql
SELECT INT '4';
```
```nofmt
@@ -188,7 +188,7 @@ SELECT INT '4';
-```sql
+```mzsql
SELECT CAST (CAST (100.21 AS numeric(10, 2)) AS float) AS dec_to_float;
```
```nofmt
@@ -199,7 +199,7 @@ SELECT CAST (CAST (100.21 AS numeric(10, 2)) AS float) AS dec_to_float;
-```sql
+```mzsql
SELECT 100.21::numeric(10, 2)::float AS dec_to_float;
```
```nofmt
diff --git a/doc/user/content/sql/functions/coalesce.md b/doc/user/content/sql/functions/coalesce.md
index 05fa0655bd99..80c70f4450ef 100644
--- a/doc/user/content/sql/functions/coalesce.md
+++ b/doc/user/content/sql/functions/coalesce.md
@@ -20,7 +20,7 @@ All elements of the parameters for `coalesce` must be of the same type; `coalesc
## Examples
-```sql
+```mzsql
SELECT coalesce(NULL, 3, 2, 1) AS coalesce_res;
```
```nofmt
diff --git a/doc/user/content/sql/functions/csv_extract.md b/doc/user/content/sql/functions/csv_extract.md
index 9bb14533e88e..062364124124 100644
--- a/doc/user/content/sql/functions/csv_extract.md
+++ b/doc/user/content/sql/functions/csv_extract.md
@@ -25,7 +25,7 @@ _col_name_ | [`string`](../../types/text/) | The name of the column containing
Create a table where one column is in CSV format and insert some rows:
-```sql
+```mzsql
CREATE TABLE t (id int, data string);
INSERT INTO t
VALUES (1, 'some,data'), (2, 'more,data'), (3, 'also,data');
@@ -33,7 +33,7 @@ INSERT INTO t
Extract the component columns from the table column which is a CSV string, sorted by column `id`:
-```sql
+```mzsql
SELECT csv.* FROM t, csv_extract(2, data) csv
ORDER BY t.id;
```
diff --git a/doc/user/content/sql/functions/date-bin-hopping.md b/doc/user/content/sql/functions/date-bin-hopping.md
index cc7e8d27f491..c7624d761833 100644
--- a/doc/user/content/sql/functions/date-bin-hopping.md
+++ b/doc/user/content/sql/functions/date-bin-hopping.md
@@ -39,7 +39,7 @@ _origin_ | Must be the same as _source_ | Align bins to this value. If not provi
## Examples
-```sql
+```mzsql
SELECT * FROM date_bin_hopping('45s', '1m', TIMESTAMP '2001-01-01 00:01:20');
```
```nofmt
@@ -49,7 +49,7 @@ SELECT * FROM date_bin_hopping('45s', '1m', TIMESTAMP '2001-01-01 00:01:20');
2001-01-01 00:01:15
```
-```sql
+```mzsql
SELECT date_bin_hopping AS timeframe_start, sum(v)
FROM ( VALUES
(TIMESTAMP '2021-01-01 01:05', 41),
diff --git a/doc/user/content/sql/functions/date-bin.md b/doc/user/content/sql/functions/date-bin.md
index b264088d3ba9..5136ece441a6 100644
--- a/doc/user/content/sql/functions/date-bin.md
+++ b/doc/user/content/sql/functions/date-bin.md
@@ -53,7 +53,7 @@ _origin_ | Must be the same as _source_ | Align bins to this value.
## Examples
-```sql
+```mzsql
SELECT
date_bin(
'15 minutes',
@@ -67,7 +67,7 @@ SELECT
2001-02-16 20:35:00
```
-```sql
+```mzsql
SELECT
str,
"interval",
diff --git a/doc/user/content/sql/functions/date-part.md b/doc/user/content/sql/functions/date-part.md
index ab68f90fd592..d721b4f9c84e 100644
--- a/doc/user/content/sql/functions/date-part.md
+++ b/doc/user/content/sql/functions/date-part.md
@@ -51,7 +51,7 @@ day of year | `DOY`
### Extract second from timestamptz
-```sql
+```mzsql
SELECT date_part('S', TIMESTAMP '2006-01-02 15:04:05.06');
```
```nofmt
@@ -62,7 +62,7 @@ SELECT date_part('S', TIMESTAMP '2006-01-02 15:04:05.06');
### Extract century from date
-```sql
+```mzsql
SELECT date_part('CENTURIES', DATE '2006-01-02');
```
```nofmt
diff --git a/doc/user/content/sql/functions/date-trunc.md b/doc/user/content/sql/functions/date-trunc.md
index f2b84f0ffaa9..0d7df5d84f76 100644
--- a/doc/user/content/sql/functions/date-trunc.md
+++ b/doc/user/content/sql/functions/date-trunc.md
@@ -25,7 +25,7 @@ _val_ | [`timestamp`], [`timestamp with time zone`], [`interval`] | The value yo
## Examples
-```sql
+```mzsql
SELECT date_trunc('hour', TIMESTAMP '2019-11-26 15:56:46.241150') AS hour_trunc;
```
```nofmt
@@ -34,7 +34,7 @@ SELECT date_trunc('hour', TIMESTAMP '2019-11-26 15:56:46.241150') AS hour_trunc;
2019-11-26 15:00:00.000000000
```
-```sql
+```mzsql
SELECT date_trunc('year', TIMESTAMP '2019-11-26 15:56:46.241150') AS year_trunc;
```
```nofmt
@@ -43,7 +43,7 @@ SELECT date_trunc('year', TIMESTAMP '2019-11-26 15:56:46.241150') AS year_trunc;
2019-01-01 00:00:00.000000000
```
-```sql
+```mzsql
SELECT date_trunc('millennium', INTERVAL '1234 years 11 months 23 days 23:59:12.123456789') AS millennium_trunc;
```
```nofmt
diff --git a/doc/user/content/sql/functions/encode.md b/doc/user/content/sql/functions/encode.md
index e92846c17d38..84fe52937811 100644
--- a/doc/user/content/sql/functions/encode.md
+++ b/doc/user/content/sql/functions/encode.md
@@ -48,7 +48,7 @@ each encoded byte, though not within a byte.
Encoding and decoding in the `base64` format:
-```sql
+```mzsql
SELECT encode('\x00404142ff', 'base64');
```
```nofmt
@@ -57,7 +57,7 @@ SELECT encode('\x00404142ff', 'base64');
AEBBQv8=
```
-```sql
+```mzsql
SELECT decode('A EB BQv8 =', 'base64');
```
```nofmt
@@ -66,7 +66,7 @@ SELECT decode('A EB BQv8 =', 'base64');
\x00404142ff
```
-```sql
+```mzsql
SELECT encode('This message is long enough that the output will run to multiple lines.', 'base64');
```
```nofmt
@@ -80,7 +80,7 @@ SELECT encode('This message is long enough that the output will run to multiple
Encoding and decoding in the `escape` format:
-```sql
+```mzsql
SELECT encode('\x00404142ff', 'escape');
```
```nofmt
@@ -89,7 +89,7 @@ SELECT encode('\x00404142ff', 'escape');
\000@AB\377
```
-```sql
+```mzsql
SELECT decode('\000@AB\377', 'escape');
```
```nofmt
@@ -102,7 +102,7 @@ SELECT decode('\000@AB\377', 'escape');
Encoding and decoding in the `hex` format:
-```sql
+```mzsql
SELECT encode('\x00404142ff', 'hex');
```
```nofmt
@@ -111,7 +111,7 @@ SELECT encode('\x00404142ff', 'hex');
00404142ff
```
-```sql
+```mzsql
SELECT decode('00 40 41 42 ff', 'hex');
```
```nofmt
diff --git a/doc/user/content/sql/functions/extract.md b/doc/user/content/sql/functions/extract.md
index 6a5d7b221770..a2cdcbf6f194 100644
--- a/doc/user/content/sql/functions/extract.md
+++ b/doc/user/content/sql/functions/extract.md
@@ -48,7 +48,7 @@ decade | `DEC`, `DECS`, `DECADE`, `DECADES`
### Extract second from timestamptz
-```sql
+```mzsql
SELECT EXTRACT(S FROM TIMESTAMP '2006-01-02 15:04:05.06');
```
```nofmt
@@ -59,7 +59,7 @@ SELECT EXTRACT(S FROM TIMESTAMP '2006-01-02 15:04:05.06');
### Extract century from date
-```sql
+```mzsql
SELECT EXTRACT(CENTURIES FROM DATE '2006-01-02');
```
```nofmt
diff --git a/doc/user/content/sql/functions/filters.md b/doc/user/content/sql/functions/filters.md
index 7d397c441bbd..bac81e658249 100644
--- a/doc/user/content/sql/functions/filters.md
+++ b/doc/user/content/sql/functions/filters.md
@@ -16,7 +16,7 @@ Temporal filters cannot be used in aggregate function filters.
## Examples
-```sql
+```mzsql
SELECT
COUNT(*) AS unfiltered,
-- The FILTER guards the evaluation which might otherwise error.
diff --git a/doc/user/content/sql/functions/jsonb_agg.md b/doc/user/content/sql/functions/jsonb_agg.md
index 75fcf3b5e141..4b407d9b0002 100644
--- a/doc/user/content/sql/functions/jsonb_agg.md
+++ b/doc/user/content/sql/functions/jsonb_agg.md
@@ -40,14 +40,14 @@ Instead, we recommend that you materialize all components required for the
`jsonb_agg` function call and create a non-materialized view using `jsonb_agg`
on top of that. That pattern is illustrated in the following statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT * FROM foo;
CREATE VIEW bar AS SELECT jsonb_agg(foo_view.bar) FROM foo_view;
```
## Examples
-```sql
+```mzsql
SELECT
jsonb_agg(t) FILTER (WHERE t.content LIKE 'h%')
AS my_agg
diff --git a/doc/user/content/sql/functions/jsonb_object_agg.md b/doc/user/content/sql/functions/jsonb_object_agg.md
index 506982681240..08b942ec94c5 100644
--- a/doc/user/content/sql/functions/jsonb_object_agg.md
+++ b/doc/user/content/sql/functions/jsonb_object_agg.md
@@ -47,7 +47,7 @@ Instead, we recommend that you materialize all components required for the
`jsonb_object_agg` on top of that. That pattern is illustrated in the following
statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT key_col, val_col FROM foo;
CREATE VIEW bar AS SELECT jsonb_object_agg(key_col, val_col) FROM foo_view;
```
@@ -55,7 +55,7 @@ CREATE VIEW bar AS SELECT jsonb_object_agg(key_col, val_col) FROM foo_view;
## Examples
Consider this query:
-```sql
+```mzsql
SELECT
jsonb_object_agg(
t.col1,
diff --git a/doc/user/content/sql/functions/justify-days.md b/doc/user/content/sql/functions/justify-days.md
index 9abbc48db840..e22f04648224 100644
--- a/doc/user/content/sql/functions/justify-days.md
+++ b/doc/user/content/sql/functions/justify-days.md
@@ -24,7 +24,7 @@ _interval_ | [`interval`](../../types/interval) | The interval value to justify.
## Example
-```sql
+```mzsql
SELECT justify_days(interval '35 days');
```
```nofmt
diff --git a/doc/user/content/sql/functions/justify-hours.md b/doc/user/content/sql/functions/justify-hours.md
index a2827d2eb376..c662a06e5d18 100644
--- a/doc/user/content/sql/functions/justify-hours.md
+++ b/doc/user/content/sql/functions/justify-hours.md
@@ -24,7 +24,7 @@ _interval_ | [`interval`](../../types/interval) | The interval value to justify.
## Example
-```sql
+```mzsql
SELECT justify_hours(interval '27 hours');
```
```nofmt
diff --git a/doc/user/content/sql/functions/justify-interval.md b/doc/user/content/sql/functions/justify-interval.md
index 817b3e4eccd5..014250269b66 100644
--- a/doc/user/content/sql/functions/justify-interval.md
+++ b/doc/user/content/sql/functions/justify-interval.md
@@ -26,7 +26,7 @@ _interval_ | [`interval`](../../types/interval) | The interval value to justify.
## Example
-```sql
+```mzsql
SELECT justify_interval(interval '1 mon -1 hour');
```
```nofmt
diff --git a/doc/user/content/sql/functions/length.md b/doc/user/content/sql/functions/length.md
index 567090e7ef49..23a169ca4131 100644
--- a/doc/user/content/sql/functions/length.md
+++ b/doc/user/content/sql/functions/length.md
@@ -69,7 +69,7 @@ issue](https://github.com/MaterializeInc/materialize/issues/589).
## Examples
-```sql
+```mzsql
SELECT length('你好') AS len;
```
```nofmt
@@ -80,7 +80,7 @@ SELECT length('你好') AS len;
-```sql
+```mzsql
SELECT length('你好', 'big5') AS len;
```
```nofmt
diff --git a/doc/user/content/sql/functions/list_agg.md b/doc/user/content/sql/functions/list_agg.md
index 609772238272..294e76721d7c 100644
--- a/doc/user/content/sql/functions/list_agg.md
+++ b/doc/user/content/sql/functions/list_agg.md
@@ -40,14 +40,14 @@ Instead, we recommend that you materialize all components required for the
`list_agg` function call and create a non-materialized view using `list_agg`
on top of that. That pattern is illustrated in the following statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT * FROM foo;
CREATE VIEW bar AS SELECT list_agg(foo_view.bar) FROM foo_view;
```
## Examples
-```sql
+```mzsql
SELECT
title,
LIST_AGG (
diff --git a/doc/user/content/sql/functions/map_agg.md b/doc/user/content/sql/functions/map_agg.md
index 64a1b552403f..217a9ce2ce35 100644
--- a/doc/user/content/sql/functions/map_agg.md
+++ b/doc/user/content/sql/functions/map_agg.md
@@ -47,7 +47,7 @@ Instead, we recommend that you materialize all components required for the
`map_agg` on top of that. That pattern is illustrated in the following
statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT key_col, val_col FROM foo;
CREATE VIEW bar AS SELECT map_agg(key_col, val_col) FROM foo_view;
```
@@ -56,7 +56,7 @@ CREATE VIEW bar AS SELECT map_agg(key_col, val_col) FROM foo_view;
Consider this query:
-```sql
+```mzsql
SELECT
map_agg(
t.k,
diff --git a/doc/user/content/sql/functions/now_and_mz_now.md b/doc/user/content/sql/functions/now_and_mz_now.md
index ccb6a98a9f8d..3ebf4f3c720c 100644
--- a/doc/user/content/sql/functions/now_and_mz_now.md
+++ b/doc/user/content/sql/functions/now_and_mz_now.md
@@ -69,7 +69,7 @@ materialized would be resource prohibitive.
It is common for real-time applications to be concerned with only a recent period of time.
In this case, we will filter a table to only include records from the last 30 seconds.
-```sql
+```mzsql
-- Create a table of timestamped events.
CREATE TABLE events (
content TEXT,
@@ -85,13 +85,13 @@ WHERE mz_now() <= event_ts + INTERVAL '30s';
Next, subscribe to the results of the view.
-```sql
+```mzsql
COPY (SUBSCRIBE (SELECT event_ts, content FROM last_30_sec)) TO STDOUT;
```
In a separate session, insert a record.
-```sql
+```mzsql
INSERT INTO events VALUES (
'hello',
now()
@@ -111,7 +111,7 @@ You can materialize the `last_30_sec` view by creating an index on it (results s
If you haven't already done so in the previous example, create a table called `events` and add a few records.
-```sql
+```mzsql
-- Create a table of timestamped events.
CREATE TABLE events (
content TEXT,
@@ -134,7 +134,7 @@ INSERT INTO events VALUES (
Execute this ad hoc query that adds the current system timestamp and current logical timestamp to the events in the `events` table.
-```sql
+```mzsql
SELECT now(), mz_now(), * FROM events
```
@@ -149,7 +149,7 @@ SELECT now(), mz_now(), * FROM events
Notice when you try to materialize this query, you get errors:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW cant_materialize
AS SELECT now(), mz_now(), * FROM events;
```
diff --git a/doc/user/content/sql/functions/pushdown.md b/doc/user/content/sql/functions/pushdown.md
index b38dda5e55d8..18cb8be0c5cd 100644
--- a/doc/user/content/sql/functions/pushdown.md
+++ b/doc/user/content/sql/functions/pushdown.md
@@ -41,7 +41,7 @@ optimization for your query.
## Examples
-```sql
+```mzsql
SELECT try_parse_monotonic_iso8601_timestamp('2015-09-18T23:56:04.123Z') AS ts;
```
```nofmt
@@ -52,7 +52,7 @@ SELECT try_parse_monotonic_iso8601_timestamp('2015-09-18T23:56:04.123Z') AS ts;
-```sql
+```mzsql
SELECT try_parse_monotonic_iso8601_timestamp('nope') AS ts;
```
```nofmt
diff --git a/doc/user/content/sql/functions/string_agg.md b/doc/user/content/sql/functions/string_agg.md
index b73b3554a072..5360900b062c 100644
--- a/doc/user/content/sql/functions/string_agg.md
+++ b/doc/user/content/sql/functions/string_agg.md
@@ -42,14 +42,14 @@ Instead, we recommend that you materialize all components required for the
`string_agg` on top of that. That pattern is illustrated in the following
statements:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW foo_view AS SELECT * FROM foo;
CREATE VIEW bar AS SELECT string_agg(foo_view.bar, ',');
```
## Examples
-```sql
+```mzsql
SELECT string_agg(column1, column2)
FROM (
VALUES ('z', ' !'), ('a', ' @'), ('m', ' #')
@@ -63,7 +63,7 @@ FROM (
Note that in the following example, the `ORDER BY` of the subquery feeding into `string_agg` gets ignored.
-```sql
+```mzsql
SELECT column1, column2
FROM (
VALUES ('z', ' !'), ('a', ' @'), ('m', ' #')
@@ -77,7 +77,7 @@ FROM (
a | @
```
-```sql
+```mzsql
SELECT string_agg(column1, column2)
FROM (
SELECT column1, column2
@@ -92,6 +92,6 @@ FROM (
a #m !z
```
-```sql
+```mzsql
SELECT string_agg(b, ',' ORDER BY a DESC) FROM table;
```
diff --git a/doc/user/content/sql/functions/substring.md b/doc/user/content/sql/functions/substring.md
index 276f446fd40f..75bc9cf8ed55 100644
--- a/doc/user/content/sql/functions/substring.md
+++ b/doc/user/content/sql/functions/substring.md
@@ -24,7 +24,7 @@ _len_ | [`int`](../../types/int) | The length of the substring you want to retur
## Examples
-```sql
+```mzsql
SELECT substring('abcdefg', 3) AS substr;
```
```nofmt
@@ -35,7 +35,7 @@ SELECT substring('abcdefg', 3) AS substr;
-```sql
+```mzsql
SELECT substring('abcdefg', 3, 3) AS substr;
```
```nofmt
diff --git a/doc/user/content/sql/functions/timezone-and-at-time-zone.md b/doc/user/content/sql/functions/timezone-and-at-time-zone.md
index f7ff9ef6cdbb..af0ecbff8c2f 100644
--- a/doc/user/content/sql/functions/timezone-and-at-time-zone.md
+++ b/doc/user/content/sql/functions/timezone-and-at-time-zone.md
@@ -33,7 +33,7 @@ _timestamptz_ | [`timestamptz`](../../types/timestamp/#timestamp-with-time-zone-
### Convert timestamp to another time zone, returned as UTC with offset
-```sql
+```mzsql
SELECT TIMESTAMP '2020-12-21 18:53:49' AT TIME ZONE 'America/New_York'::text;
```
```
@@ -43,7 +43,7 @@ SELECT TIMESTAMP '2020-12-21 18:53:49' AT TIME ZONE 'America/New_York'::text;
(1 row)
```
-```sql
+```mzsql
SELECT TIMEZONE('America/New_York'::text,'2020-12-21 18:53:49');
```
```
@@ -55,7 +55,7 @@ SELECT TIMEZONE('America/New_York'::text,'2020-12-21 18:53:49');
### Convert timestamp to another time zone, returned as specified local time
-```sql
+```mzsql
SELECT TIMESTAMPTZ '2020-12-21 18:53:49+08' AT TIME ZONE 'America/New_York'::text;
```
```
@@ -65,7 +65,7 @@ SELECT TIMESTAMPTZ '2020-12-21 18:53:49+08' AT TIME ZONE 'America/New_York'::tex
(1 row)
```
-```sql
+```mzsql
SELECT TIMEZONE ('America/New_York'::text,'2020-12-21 18:53:49+08');
```
```
diff --git a/doc/user/content/sql/functions/to_char.md b/doc/user/content/sql/functions/to_char.md
index 569dcfa9ff6a..8cd9bb486f5d 100644
--- a/doc/user/content/sql/functions/to_char.md
+++ b/doc/user/content/sql/functions/to_char.md
@@ -16,7 +16,7 @@ specifier token inside of double-quotes to emit it literally.
#### RFC 2822 format
-```sql
+```mzsql
SELECT to_char(TIMESTAMPTZ '2019-11-26 15:56:46 +00:00', 'Dy, Mon DD YYYY HH24:MI:SS +0000') AS formatted
```
```nofmt
@@ -30,7 +30,7 @@ SELECT to_char(TIMESTAMPTZ '2019-11-26 15:56:46 +00:00', 'Dy, Mon DD YYYY HH24:M
Normally the `W` in "Welcome" would be converted to the week number, so we must quote it.
The "to" doesn't match any format specifiers, so quotes are optional.
-```sql
+```mzsql
SELECT to_char(TIMESTAMPTZ '2019-11-26 15:56:46 +00:00', '"Welcome" to Mon, YYYY') AS formatted
```
```nofmt
@@ -41,7 +41,7 @@ SELECT to_char(TIMESTAMPTZ '2019-11-26 15:56:46 +00:00', '"Welcome" to Mon, YYYY
#### Ordinal modifiers
-```sql
+```mzsql
SELECT to_char(TIMESTAMPTZ '2019-11-1 15:56:46 +00:00', 'Dth of Mon') AS formatted
```
```nofmt
diff --git a/doc/user/content/sql/grant-privilege.md b/doc/user/content/sql/grant-privilege.md
index d807fab11cf2..26f97ed06772 100644
--- a/doc/user/content/sql/grant-privilege.md
+++ b/doc/user/content/sql/grant-privilege.md
@@ -71,19 +71,19 @@ type for sources, views, and materialized views, or omit the object type.
## Examples
-```sql
+```mzsql
GRANT SELECT ON mv TO joe, mike;
```
-```sql
+```mzsql
GRANT USAGE, CREATE ON DATABASE materialize TO joe;
```
-```sql
+```mzsql
GRANT ALL ON CLUSTER dev TO joe;
```
-```sql
+```mzsql
GRANT CREATEDB ON SYSTEM TO joe;
```
diff --git a/doc/user/content/sql/grant-role.md b/doc/user/content/sql/grant-role.md
index 9ff137093273..50425b7fbfe9 100644
--- a/doc/user/content/sql/grant-role.md
+++ b/doc/user/content/sql/grant-role.md
@@ -20,11 +20,11 @@ _member_name_ | The role name to add to _role_name_ as a member.
## Examples
-```sql
+```mzsql
GRANT data_scientist TO joe;
```
-```sql
+```mzsql
GRANT data_scientist TO joe, mike;
```
diff --git a/doc/user/content/sql/insert.md b/doc/user/content/sql/insert.md
index af2e1255fac6..a223ccce2ee3 100644
--- a/doc/user/content/sql/insert.md
+++ b/doc/user/content/sql/insert.md
@@ -43,7 +43,7 @@ To insert data into a table, execute an `INSERT` statement where the `VALUES` cl
is followed by a list of tuples. Each tuple in the `VALUES` clause must have a value
for each column in the table. If a column is nullable, a `NULL` value may be provided.
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
INSERT INTO t VALUES (1, 'a'), (NULL, 'b');
@@ -60,7 +60,7 @@ is nullable. `NULL` values may not be inserted into column `b`, which is not nul
You may also insert data using a column specification.
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
INSERT INTO t (b, a) VALUES ('a', 1), ('b', NULL);
@@ -76,7 +76,7 @@ SELECT * FROM t;
You can also insert the values returned from `SELECT` statements:
-```sql
+```mzsql
CREATE TABLE s (a text);
INSERT INTO s VALUES ('c');
diff --git a/doc/user/content/sql/namespaces.md b/doc/user/content/sql/namespaces.md
index ce0544fe3a21..663f8634afd5 100644
--- a/doc/user/content/sql/namespaces.md
+++ b/doc/user/content/sql/namespaces.md
@@ -53,7 +53,7 @@ These objects are not referenced by the standard SQL namespace.
For example, to create a materialized view in a specific cluster, your SQL
statement would be:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW mv IN CLUSTER cluster1 AS ...
```
@@ -62,13 +62,13 @@ Replicas are referenced as `.`.
For example, to delete replica `r1` in cluster `cluster1`, your SQL statement
would be:
-```sql
+```mzsql
DROP CLUSTER REPLICA cluster1.r1
```
Roles are referenced by their name. For example, to alter the `manager` role, your SQL statement would be:
-```sql
+```mzsql
ALTER ROLE manager ...
```
diff --git a/doc/user/content/sql/prepare.md b/doc/user/content/sql/prepare.md
index b3f849c4c7c2..4d8f1fd47b76 100644
--- a/doc/user/content/sql/prepare.md
+++ b/doc/user/content/sql/prepare.md
@@ -27,19 +27,19 @@ Prepared statements only last for the duration of the current database session.
### Create a prepared statement
-```sql
+```mzsql
PREPARE a AS SELECT 1 + $1;
```
### Execute a prepared statement
-```sql
+```mzsql
EXECUTE a ('a', 'b', 1 + 2)
```
### Deallocate a prepared statement
-```sql
+```mzsql
DEALLOCATE a;
```
diff --git a/doc/user/content/sql/reassign-owned.md b/doc/user/content/sql/reassign-owned.md
index f1b2deca90cd..6f6f151ed973 100644
--- a/doc/user/content/sql/reassign-owned.md
+++ b/doc/user/content/sql/reassign-owned.md
@@ -24,11 +24,11 @@ _new_role_ | The role name of the new owner of all the objects.
## Examples
-```sql
+```mzsql
REASSIGN OWNED BY joe TO mike;
```
-```sql
+```mzsql
REASSIGN OWNED BY joe, george TO mike;
```
diff --git a/doc/user/content/sql/recursive-ctes.md b/doc/user/content/sql/recursive-ctes.md
index 10558fce2819..0f482f0d7f76 100644
--- a/doc/user/content/sql/recursive-ctes.md
+++ b/doc/user/content/sql/recursive-ctes.md
@@ -36,7 +36,7 @@ Within a recursive CTEs block, any `cte_ident` alias can be referenced in all `r
A `WITH MUTUALLY RECURSIVE` block with a general form
-```sql
+```mzsql
WITH MUTUALLY RECURSIVE
-- A sequence of bindings, all in scope for all definitions.
$R_1(...) AS ( $sql_cte_1 ),
@@ -83,7 +83,7 @@ commands below.
### Example schema
-```sql
+```mzsql
-- A hierarchy of geographical locations with various levels of granularity.
CREATE TABLE areas(id int not null, parent int, name text);
-- A collection of users.
@@ -94,7 +94,7 @@ CREATE TABLE transfers(src_id char(1), tgt_id char(1), amount numeric, ts timest
### Example data
-```sql
+```mzsql
DELETE FROM areas;
DELETE FROM users;
DELETE FROM transfers;
@@ -126,7 +126,7 @@ The following view will compute `connected` as the transitive closure of a graph
* each `user` is a graph vertex, and
* a graph edge between users `x` and `y` exists only if a transfer from `x` to `y` was made recently (using the rather small `10 seconds` period here for the sake of illustration):
-```sql
+```mzsql
CREATE MATERIALIZED VIEW connected AS
WITH MUTUALLY RECURSIVE
connected(src_id char(1), dst_id char(1)) AS (
@@ -141,7 +141,7 @@ To see results change over time, you can [`SUBSCRIBE`](/sql/subscribe/) to the
materialized view and then use a different SQL Shell session to insert
some sample data into the base tables used in the view:
-```sql
+```mzsql
SUBSCRIBE(SELECT * FROM connected) WITH (SNAPSHOT = FALSE);
```
@@ -164,7 +164,7 @@ Consequently, given the `connected` contents, we can:
1. Restrict `connected` to the subset of `symmetric` connections that go in both directions.
2. Identify the `scc` of each `users` entry with the lowest `dst_id` of all `symmetric` neighbors and its own `id`.
-```sql
+```mzsql
CREATE MATERIALIZED VIEW strongly_connected_components AS
WITH
symmetric(src_id, dst_id) AS (
@@ -181,7 +181,7 @@ CREATE MATERIALIZED VIEW strongly_connected_components AS
Again, you can insert some sample data into the base tables and observe how the
materialized view contents change over time using `SUBSCRIBE`:
-```sql
+```mzsql
SUBSCRIBE(SELECT * FROM strongly_connected_components) WITH (SNAPSHOT = FALSE);
```
@@ -203,7 +203,7 @@ This can be achieved in three steps:
A materialized view that does the above three steps in three CTEs (of which the last one is recursive) can be defined as follows:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW area_balances AS
WITH MUTUALLY RECURSIVE
user_balances(id char(1), balance numeric) AS (
@@ -259,7 +259,7 @@ CREATE MATERIALIZED VIEW area_balances AS
As before, you can insert [the example data](#example-data) and observe how the materialized view contents change over time from the `psql` with the `\watch` command:
-```sql
+```mzsql
SELECT id, name, balance FROM area_balances JOIN areas USING(id) ORDER BY id;
\watch 1
```
@@ -273,7 +273,7 @@ Let's look at a slight variation of the [transitive closure example](#transitive
3. The `WITH MUTUALLY RECURSIVE` clause has an optional `ERROR AT RECURSION LIMIT 100`.
4. The final result in this example is ordered by `src_id, dst_id`.
-```sql
+```mzsql
WITH MUTUALLY RECURSIVE (ERROR AT RECURSION LIMIT 100)
connected(src_id char(1), dst_id char(1)) AS (
SELECT DISTINCT src_id, tgt_id FROM transfers
@@ -294,13 +294,13 @@ ERROR: Evaluation error: Recursive query exceeded the recursion limit 100. (Use
The recursive CTE `connected` has not converged to a fixpoint within the first 100 iterations!
To see why, you can run variants of the same query where the
-```sql
+```mzsql
ERROR AT RECURSION LIMIT 100
```
clause is replaced by
-```sql
+```mzsql
RETURN AT RECURSION LIMIT $n -- where $n = 1, 2, 3, ...
```
diff --git a/doc/user/content/sql/reset.md b/doc/user/content/sql/reset.md
index 720be0278f2e..c8ced8de16b8 100644
--- a/doc/user/content/sql/reset.md
+++ b/doc/user/content/sql/reset.md
@@ -25,7 +25,7 @@ _name_ | The configuration parameter's name.
### Reset search path
-```sql
+```mzsql
SHOW search_path;
search_path
diff --git a/doc/user/content/sql/revoke-privilege.md b/doc/user/content/sql/revoke-privilege.md
index 76c7aaee3973..70e356b50210 100644
--- a/doc/user/content/sql/revoke-privilege.md
+++ b/doc/user/content/sql/revoke-privilege.md
@@ -71,19 +71,19 @@ type for sources, views, and materialized views, or omit the object type.
## Examples
-```sql
+```mzsql
REVOKE SELECT ON mv FROM joe, mike;
```
-```sql
+```mzsql
REVOKE USAGE, CREATE ON DATABASE materialize FROM joe;
```
-```sql
+```mzsql
REVOKE ALL ON CLUSTER dev FROM joe;
```
-```sql
+```mzsql
REVOKE CREATEDB ON SYSTEM FROM joe;
```
diff --git a/doc/user/content/sql/revoke-role.md b/doc/user/content/sql/revoke-role.md
index c1151b025670..57749cf48d98 100644
--- a/doc/user/content/sql/revoke-role.md
+++ b/doc/user/content/sql/revoke-role.md
@@ -25,11 +25,11 @@ You may not set up circular membership loops.
## Examples
-```sql
+```mzsql
REVOKE data_scientist FROM joe;
```
-```sql
+```mzsql
REVOKE data_scientist FROM joe, mike;
```
diff --git a/doc/user/content/sql/select.md b/doc/user/content/sql/select.md
index 537bad5820dc..8704c5d5af09 100644
--- a/doc/user/content/sql/select.md
+++ b/doc/user/content/sql/select.md
@@ -79,7 +79,7 @@ Queries that can't simply read out from an index will create an ephemeral datafl
the results. These dataflows are bound to the active [cluster](/get-started/key-concepts#clusters),
which you can change using:
-```sql
+```mzsql
SET cluster = ;
```
@@ -157,7 +157,7 @@ This assumes you've already [created a source](../create-source).
The following query creates a view representing the total of all
purchases made by users per region, and then creates an index on this view.
-```sql
+```mzsql
CREATE VIEW purchases_by_region AS
SELECT region.id, sum(purchase.total)
FROM mysql_simple_purchase AS purchase
@@ -176,7 +176,7 @@ dropped.
Assuming you've created the indexed view listed above, named `purchases_by_region`, you can simply read from the index with an ad hoc `SELECT` query:
-```sql
+```mzsql
SELECT * FROM purchases_by_region;
```
@@ -184,7 +184,7 @@ In this case, Materialize simply returns the results that the index is maintaini
### Ad hoc querying
-```sql
+```mzsql
SELECT region.id, sum(purchase.total)
FROM mysql_simple_purchase AS purchase
JOIN mysql_simple_user AS user ON purchase.user_id = user.id
@@ -199,7 +199,7 @@ you may want to create an [index](/sql/create-index) (in memory) and/or a [mater
### Using regular CTEs
-```sql
+```mzsql
WITH
regional_sales (region, total_sales) AS (
SELECT region, sum(amount)
diff --git a/doc/user/content/sql/set.md b/doc/user/content/sql/set.md
index 2d14158c0933..23e7e20bb213 100644
--- a/doc/user/content/sql/set.md
+++ b/doc/user/content/sql/set.md
@@ -39,7 +39,7 @@ configuration parameters.
### Set active cluster
-```sql
+```mzsql
SHOW cluster;
cluster
@@ -57,17 +57,17 @@ SHOW cluster;
### Set transaction isolation level
-```sql
+```mzsql
SET transaction_isolation = 'serializable';
```
### Set search path
-```sql
+```mzsql
SET search_path = public, qck;
```
-```sql
+```mzsql
SET schema = qck;
```
diff --git a/doc/user/content/sql/show-cluster-replicas.md b/doc/user/content/sql/show-cluster-replicas.md
index 5f8b089be303..21ac2c6e1583 100644
--- a/doc/user/content/sql/show-cluster-replicas.md
+++ b/doc/user/content/sql/show-cluster-replicas.md
@@ -16,7 +16,7 @@ cluster configured in Materialize.
## Examples
-```sql
+```mzsql
SHOW CLUSTER REPLICAS;
```
@@ -27,7 +27,7 @@ SHOW CLUSTER REPLICAS;
quickstart | r1 | 25cc | t |
```
-```sql
+```mzsql
SHOW CLUSTER REPLICAS WHERE cluster = 'quickstart';
```
diff --git a/doc/user/content/sql/show-clusters.md b/doc/user/content/sql/show-clusters.md
index 67de572b05a5..726274202a05 100644
--- a/doc/user/content/sql/show-clusters.md
+++ b/doc/user/content/sql/show-clusters.md
@@ -92,7 +92,7 @@ The following characteristics apply to the `mz_system` cluster:
## Examples
-```sql
+```mzsql
SET CLUSTER = mz_catalog_server;
SHOW CLUSTERS;
@@ -109,7 +109,7 @@ SHOW CLUSTERS;
mz_support |
```
-```sql
+```mzsql
SHOW CLUSTERS LIKE 'auction_%';
```
diff --git a/doc/user/content/sql/show-columns.md b/doc/user/content/sql/show-columns.md
index c8c34f006ba8..61dd00b97dc9 100644
--- a/doc/user/content/sql/show-columns.md
+++ b/doc/user/content/sql/show-columns.md
@@ -44,7 +44,7 @@ object.
## Examples
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
@@ -52,7 +52,7 @@ SHOW SOURCES;
----------
my_sources
```
-```sql
+```mzsql
SHOW COLUMNS FROM my_source;
```
```nofmt
diff --git a/doc/user/content/sql/show-connections.md b/doc/user/content/sql/show-connections.md
index 462f68fb89c7..839a7d3c5be2 100644
--- a/doc/user/content/sql/show-connections.md
+++ b/doc/user/content/sql/show-connections.md
@@ -20,7 +20,7 @@ _schema_name_ | The schema to show connections from. If omitted, connecti
## Examples
-```sql
+```mzsql
SHOW CONNECTIONS;
```
@@ -31,7 +31,7 @@ SHOW CONNECTIONS;
postgres_connection | postgres
```
-```sql
+```mzsql
SHOW CONNECTIONS LIKE 'kafka%';
```
diff --git a/doc/user/content/sql/show-create-connection.md b/doc/user/content/sql/show-create-connection.md
index 28fb43852f69..7819a56ffb21 100644
--- a/doc/user/content/sql/show-create-connection.md
+++ b/doc/user/content/sql/show-create-connection.md
@@ -18,7 +18,7 @@ _connection_name_ | The connection you want to get the `CREATE` statement
## Examples
-```sql
+```mzsql
SHOW CREATE CONNECTION kafka_connection;
```
diff --git a/doc/user/content/sql/show-create-index.md b/doc/user/content/sql/show-create-index.md
index a46739526317..37a956cf9318 100644
--- a/doc/user/content/sql/show-create-index.md
+++ b/doc/user/content/sql/show-create-index.md
@@ -18,7 +18,7 @@ _index_name_ | The index you want use. You can find available index names
## Examples
-```sql
+```mzsql
SHOW INDEXES FROM my_view;
```
@@ -28,7 +28,7 @@ SHOW INDEXES FROM my_view;
my_view_idx | t | quickstart | {a, b}
```
-```sql
+```mzsql
SHOW CREATE INDEX my_view_idx;
```
diff --git a/doc/user/content/sql/show-create-materialized-view.md b/doc/user/content/sql/show-create-materialized-view.md
index 7896bf403706..7e90e6ddbb76 100644
--- a/doc/user/content/sql/show-create-materialized-view.md
+++ b/doc/user/content/sql/show-create-materialized-view.md
@@ -18,7 +18,7 @@ _view_name_ | The materialized view you want to use. You can find availab
## Examples
-```sql
+```mzsql
SHOW CREATE MATERIALIZED VIEW winning_bids;
```
```nofmt
diff --git a/doc/user/content/sql/show-create-sink.md b/doc/user/content/sql/show-create-sink.md
index 75f5930508ad..4890acdf927c 100644
--- a/doc/user/content/sql/show-create-sink.md
+++ b/doc/user/content/sql/show-create-sink.md
@@ -18,7 +18,7 @@ _sink_name_ | The sink you want use. You can find available sink names th
## Examples
-```sql
+```mzsql
SHOW SINKS
```
@@ -28,7 +28,7 @@ SHOW SINKS
my_view_sink
```
-```sql
+```mzsql
SHOW CREATE SINK my_view_sink;
```
diff --git a/doc/user/content/sql/show-create-source.md b/doc/user/content/sql/show-create-source.md
index ee9f4d686c49..2c10e4b8a29d 100644
--- a/doc/user/content/sql/show-create-source.md
+++ b/doc/user/content/sql/show-create-source.md
@@ -18,7 +18,7 @@ _source_name_ | The source you want use. You can find available source na
## Examples
-```sql
+```mzsql
SHOW CREATE SOURCE market_orders_raw;
```
diff --git a/doc/user/content/sql/show-create-table.md b/doc/user/content/sql/show-create-table.md
index d39f267f0aa4..1c2caec19c8a 100644
--- a/doc/user/content/sql/show-create-table.md
+++ b/doc/user/content/sql/show-create-table.md
@@ -18,11 +18,11 @@ _table_name_ | The table you want use. You can find available table names
## Examples
-```sql
+```mzsql
CREATE TABLE t (a int, b text NOT NULL);
```
-```sql
+```mzsql
SHOW CREATE TABLE t;
```
```nofmt
diff --git a/doc/user/content/sql/show-create-view.md b/doc/user/content/sql/show-create-view.md
index fd9bf4b03784..c72288a06f12 100644
--- a/doc/user/content/sql/show-create-view.md
+++ b/doc/user/content/sql/show-create-view.md
@@ -18,7 +18,7 @@ _view_name_ | The view you want to use. You can find available view names
## Examples
-```sql
+```mzsql
SHOW CREATE VIEW my_view;
```
```nofmt
diff --git a/doc/user/content/sql/show-databases.md b/doc/user/content/sql/show-databases.md
index 52a4c375019d..241a88c1986f 100644
--- a/doc/user/content/sql/show-databases.md
+++ b/doc/user/content/sql/show-databases.md
@@ -20,10 +20,10 @@ menu:
## Examples
-```sql
+```mzsql
CREATE DATABASE my_db;
```
-```sql
+```mzsql
SHOW DATABASES;
```
```nofmt
diff --git a/doc/user/content/sql/show-default-privileges.md b/doc/user/content/sql/show-default-privileges.md
index 8cff60263a12..bec3412de18c 100644
--- a/doc/user/content/sql/show-default-privileges.md
+++ b/doc/user/content/sql/show-default-privileges.md
@@ -22,7 +22,7 @@ _role_name_ | Only shows default privile
## Examples
-```sql
+```mzsql
SHOW DEFAULT PRIVILEGES;
```
@@ -35,7 +35,7 @@ SHOW DEFAULT PRIVILEGES;
mike | | | table | joe | SELECT
```
-```sql
+```mzsql
SHOW DEFAULT PRIVILEGES ON SCHEMAS;
```
@@ -45,7 +45,7 @@ SHOW DEFAULT PRIVILEGES ON SCHEMAS;
PUBLIC | | | schema | mike | CREATE
```
-```sql
+```mzsql
SHOW DEFAULT PRIVILEGES FOR joe;
```
diff --git a/doc/user/content/sql/show-indexes.md b/doc/user/content/sql/show-indexes.md
index b954c374510c..1e7f25f01aaa 100644
--- a/doc/user/content/sql/show-indexes.md
+++ b/doc/user/content/sql/show-indexes.md
@@ -39,7 +39,7 @@ Field | Meaning
## Examples
-```sql
+```mzsql
SHOW VIEWS;
```
```nofmt
@@ -49,7 +49,7 @@ SHOW VIEWS;
my_materialized_view
```
-```sql
+```mzsql
SHOW INDEXES ON my_materialized_view;
```
```nofmt
diff --git a/doc/user/content/sql/show-materialized-views.md b/doc/user/content/sql/show-materialized-views.md
index f37941ca234d..4b605655565c 100644
--- a/doc/user/content/sql/show-materialized-views.md
+++ b/doc/user/content/sql/show-materialized-views.md
@@ -20,7 +20,7 @@ _cluster_name_ | The cluster to show materialized views from. If omitted,
## Examples
-```sql
+```mzsql
SHOW MATERIALIZED VIEWS;
```
@@ -30,7 +30,7 @@ SHOW MATERIALIZED VIEWS;
winning_bids | quickstart
```
-```sql
+```mzsql
SHOW MATERIALIZED VIEWS LIKE '%bid%';
```
diff --git a/doc/user/content/sql/show-objects.md b/doc/user/content/sql/show-objects.md
index 17b4243cb38c..c3d728c6fb90 100644
--- a/doc/user/content/sql/show-objects.md
+++ b/doc/user/content/sql/show-objects.md
@@ -28,7 +28,7 @@ _schema_name_ | The schema to show objects from. Defaults to first resolv
## Examples
-```sql
+```mzsql
SHOW SCHEMAS;
```
```nofmt
@@ -36,7 +36,7 @@ SHOW SCHEMAS;
--------
public
```
-```sql
+```mzsql
SHOW OBJECTS FROM public;
```
```nofmt
@@ -47,7 +47,7 @@ my_source | source
my_view | view
my_other_source | source
```
-```sql
+```mzsql
SHOW OBJECTS;
```
```nofmt
diff --git a/doc/user/content/sql/show-privileges.md b/doc/user/content/sql/show-privileges.md
index a1db0322d6c3..8a5e745b2d64 100644
--- a/doc/user/content/sql/show-privileges.md
+++ b/doc/user/content/sql/show-privileges.md
@@ -23,7 +23,7 @@ _role_name_ | Only shows privileges gran
## Examples
-```sql
+```mzsql
SHOW PRIVILEGES;
```
@@ -44,7 +44,7 @@ SHOW PRIVILEGES;
mz_system | materialize | | | | system | CREATEROLE
```
-```sql
+```mzsql
SHOW PRIVILEGES ON SCHEMAS;
```
@@ -56,7 +56,7 @@ SHOW PRIVILEGES ON SCHEMAS;
mz_system | materialize | materialize | | public | schema | USAGE
```
-```sql
+```mzsql
SHOW PRIVILEGES FOR materialize;
```
diff --git a/doc/user/content/sql/show-role-membership.md b/doc/user/content/sql/show-role-membership.md
index 44d9b708f044..969ca0881457 100644
--- a/doc/user/content/sql/show-role-membership.md
+++ b/doc/user/content/sql/show-role-membership.md
@@ -22,7 +22,7 @@ _role_name_ | Only shows role membership
## Examples
-```sql
+```mzsql
SHOW ROLE MEMBERSHIP;
```
@@ -35,7 +35,7 @@ SHOW ROLE MEMBERSHIP;
r6 | r5 | mz_system
```
-```sql
+```mzsql
SHOW ROLE MEMBERSHIP FOR r2;
```
diff --git a/doc/user/content/sql/show-roles.md b/doc/user/content/sql/show-roles.md
index 01d2b16ecacf..53dd8c6c1410 100644
--- a/doc/user/content/sql/show-roles.md
+++ b/doc/user/content/sql/show-roles.md
@@ -15,7 +15,7 @@ menu:
## Examples
-```sql
+```mzsql
SHOW ROLES;
```
```nofmt
@@ -25,7 +25,7 @@ SHOW ROLES;
mike@ko.sh
```
-```sql
+```mzsql
SHOW ROLES LIKE 'jo%';
```
```nofmt
@@ -34,7 +34,7 @@ SHOW ROLES LIKE 'jo%';
joe@ko.sh
```
-```sql
+```mzsql
SHOW ROLES WHERE name = 'mike@ko.sh';
```
```nofmt
diff --git a/doc/user/content/sql/show-schemas.md b/doc/user/content/sql/show-schemas.md
index 4efb503e2736..b5e6b417a461 100644
--- a/doc/user/content/sql/show-schemas.md
+++ b/doc/user/content/sql/show-schemas.md
@@ -24,7 +24,7 @@ _database_name_ | The database to show schemas from. Defaults to the curr
## Examples
-```sql
+```mzsql
SHOW DATABASES;
```
```nofmt
@@ -33,7 +33,7 @@ SHOW DATABASES;
materialize
my_db
```
-```sql
+```mzsql
SHOW SCHEMAS FROM my_db
```
```nofmt
diff --git a/doc/user/content/sql/show-secrets.md b/doc/user/content/sql/show-secrets.md
index 45f11cbe04eb..2b3bc43d3338 100644
--- a/doc/user/content/sql/show-secrets.md
+++ b/doc/user/content/sql/show-secrets.md
@@ -18,7 +18,7 @@ _schema_name_ | The schema to show secrets from. If omitted, secrets from
## Examples
-```sql
+```mzsql
SHOW SECRETS;
```
@@ -30,7 +30,7 @@ SHOW SECRETS;
upstash_sasl_username
```
-```sql
+```mzsql
SHOW SECRETS FROM public LIKE '%cert%';
```
diff --git a/doc/user/content/sql/show-sinks.md b/doc/user/content/sql/show-sinks.md
index 4ee5bb88b297..a925da442896 100644
--- a/doc/user/content/sql/show-sinks.md
+++ b/doc/user/content/sql/show-sinks.md
@@ -40,7 +40,7 @@ Field | Meaning
## Examples
-```sql
+```mzsql
SHOW SINKS;
```
```nofmt
@@ -50,7 +50,7 @@ my_sink | kafka | | c1
my_other_sink | kafka | | c2
```
-```sql
+```mzsql
SHOW SINKS IN CLUSTER c1;
```
```nofmt
diff --git a/doc/user/content/sql/show-sources.md b/doc/user/content/sql/show-sources.md
index ca539a493e0b..c69b0cd1b821 100644
--- a/doc/user/content/sql/show-sources.md
+++ b/doc/user/content/sql/show-sources.md
@@ -38,7 +38,7 @@ Field | Meaning
## Examples
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
@@ -48,7 +48,7 @@ SHOW SOURCES;
my_postgres_source | postgres | | c2
```
-```sql
+```mzsql
SHOW SOURCES IN CLUSTER c2;
```
```nofmt
diff --git a/doc/user/content/sql/show-subsources.md b/doc/user/content/sql/show-subsources.md
index 218838951ac5..aee738ebd2d6 100644
--- a/doc/user/content/sql/show-subsources.md
+++ b/doc/user/content/sql/show-subsources.md
@@ -48,7 +48,7 @@ Field | Meaning
## Examples
-```sql
+```mzsql
SHOW SOURCES;
```
```nofmt
@@ -57,7 +57,7 @@ SHOW SOURCES;
postgres
kafka
```
-```sql
+```mzsql
SHOW SUBSOURCES ON pg;
```
```nofmt
@@ -67,7 +67,7 @@ SHOW SUBSOURCES ON pg;
table1_in_postgres | subsource
table2_in_postgres | subsource
```
-```sql
+```mzsql
SHOW SUBSOURCES ON kafka;
```
```nofmt
diff --git a/doc/user/content/sql/show-tables.md b/doc/user/content/sql/show-tables.md
index 6f706a024ac6..17e167821b23 100644
--- a/doc/user/content/sql/show-tables.md
+++ b/doc/user/content/sql/show-tables.md
@@ -25,7 +25,7 @@ _schema_name_ | The schema to show tables from. Defaults to first resolva
## Examples
### Show user-created tables
-```sql
+```mzsql
SHOW TABLES;
```
```nofmt
@@ -36,7 +36,7 @@ SHOW TABLES;
```
### Show tables from specified schema
-```sql
+```mzsql
SHOW SCHEMAS;
```
```nofmt
@@ -44,7 +44,7 @@ SHOW SCHEMAS;
--------
public
```
-```sql
+```mzsql
SHOW TABLES FROM public;
```
```nofmt
diff --git a/doc/user/content/sql/show-types.md b/doc/user/content/sql/show-types.md
index 2e053fe12202..86ccfb5da8f0 100644
--- a/doc/user/content/sql/show-types.md
+++ b/doc/user/content/sql/show-types.md
@@ -21,7 +21,7 @@ _schema_name_ | The schema to show types from. Defaults to first resolvab
### Show custom data types
-```sql
+```mzsql
SHOW TYPES;
```
```
diff --git a/doc/user/content/sql/show-views.md b/doc/user/content/sql/show-views.md
index 9f6f948013e9..48c52db298f0 100644
--- a/doc/user/content/sql/show-views.md
+++ b/doc/user/content/sql/show-views.md
@@ -34,7 +34,7 @@ Field | Meaning
## Examples
-```sql
+```mzsql
SHOW VIEWS;
```
```nofmt
diff --git a/doc/user/content/sql/show.md b/doc/user/content/sql/show.md
index a4f2f1297f0b..40be37784700 100644
--- a/doc/user/content/sql/show.md
+++ b/doc/user/content/sql/show.md
@@ -33,7 +33,7 @@ configuration parameters.
### Show active cluster
-```sql
+```mzsql
SHOW cluster;
```
```
@@ -44,7 +44,7 @@ SHOW cluster;
### Show transaction isolation level
-```sql
+```mzsql
SHOW transaction_isolation;
```
```
diff --git a/doc/user/content/sql/subscribe.md b/doc/user/content/sql/subscribe.md
index 55b1c12877c9..f162d42845a9 100644
--- a/doc/user/content/sql/subscribe.md
+++ b/doc/user/content/sql/subscribe.md
@@ -222,7 +222,7 @@ Below are the recommended ways to work around this.
As an example, we'll create a [counter load generator](https://materialize.com/docs/sql/create-source/load-generator/#creating-a-counter-load-generator) that emits a row every second:
-```sql
+```mzsql
CREATE SOURCE counter FROM LOAD GENERATOR COUNTER;
```
@@ -235,14 +235,14 @@ Next, let's subscribe to the `counter` load generator source that we've created
First, declare a `SUBSCRIBE` cursor:
-```sql
+```mzsql
BEGIN;
DECLARE c CURSOR FOR SUBSCRIBE (SELECT * FROM counter);
```
Then, use [`FETCH`](/sql/fetch) in a loop to retrieve each batch of results as soon as it's ready:
-```sql
+```mzsql
FETCH ALL c;
```
@@ -250,19 +250,19 @@ That will retrieve all of the rows that are currently available.
If there are no rows available, it will wait until there are some ready and return those.
A `timeout` can be used to specify a window in which to wait for rows. This will return up to the specified count (or `ALL`) of rows that are ready within the timeout. To retrieve up to 100 rows that are available in at most the next `1s`:
-```sql
+```mzsql
FETCH 100 c WITH (timeout='1s');
```
To retrieve all rows available over the next `1s`:
-```sql
+```mzsql
FETCH ALL c WITH (timeout='1s');
```
A `0s` timeout can be used to return rows that are available now without waiting:
-```sql
+```mzsql
FETCH ALL c WITH (timeout='0s');
```
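When you are done fetching, close the cursor and end the transaction with the standard cursor commands:

```mzsql
CLOSE c;
COMMIT;
```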
@@ -270,7 +270,7 @@ FETCH ALL c WITH (timeout='0s');
If you want to use `SUBSCRIBE` from an interactive SQL session (e.g. `psql`), wrap the query in `COPY`:
-```sql
+```mzsql
COPY (SUBSCRIBE (SELECT * FROM counter)) TO STDOUT;
```
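If you only care about changes that happen after the subscription begins, you can (assuming your Materialize version supports the `SNAPSHOT` option on `SUBSCRIBE`) skip the initial snapshot:

```mzsql
COPY (SUBSCRIBE (SELECT * FROM counter) WITH (SNAPSHOT = false)) TO STDOUT;
```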
@@ -319,11 +319,11 @@ value columns.
* Using this modifier, the output rows will have the following
structure:
- ```sql
+ ```mzsql
SUBSCRIBE mview ENVELOPE UPSERT (KEY (key));
```
- ```sql
+   ```nofmt
mz_timestamp | mz_state | key | value
-------------|----------|------|--------
100 | upsert | 1 | 2
@@ -336,7 +336,7 @@ structure:
_Insert_
- ```sql
+   ```nofmt
-- at time 200, add a new row with key=3, value=6
mz_timestamp | mz_state | key | value
-------------|----------|------|--------
@@ -347,7 +347,7 @@ structure:
_Update_
- ```sql
+   ```nofmt
-- at time 300, update key=1's value to 10
mz_timestamp | mz_state | key | value
-------------|----------|------|--------
@@ -361,7 +361,7 @@ structure:
_Delete_
- ```sql
+   ```nofmt
-- at time 400, delete all rows
mz_timestamp | mz_state | key | value
-------------|----------|------|--------
@@ -380,7 +380,7 @@ structure:
_Key violation_
- ```sql
+   ```nofmt
-- at time 500, introduce a key_violation
mz_timestamp | mz_state | key | value
-------------|-----------------|------|--------
@@ -412,11 +412,11 @@ value of the columns.
* Using this modifier, the output rows will have the following
structure:
- ```sql
+ ```mzsql
SUBSCRIBE mview ENVELOPE DEBEZIUM (KEY (key));
```
- ```sql
+   ```nofmt
mz_timestamp | mz_state | key | before_value | after_value
-------------|----------|------|--------------|-------
100 | upsert | 1 | NULL | 2
@@ -428,7 +428,7 @@ structure:
_Insert_
- ```sql
+   ```nofmt
-- at time 200, add a new row with key=3, value=6
mz_timestamp | mz_state | key | before_value | after_value
-------------|----------|------|--------------|-------
@@ -442,7 +442,7 @@ structure:
_Update_
- ```sql
+   ```nofmt
-- at time 300, update key=1's value to 10
mz_timestamp | mz_state | key | before_value | after_value
-------------|----------|------|--------------|-------
@@ -456,7 +456,7 @@ structure:
_Delete_
- ```sql
+   ```nofmt
-- at time 400, delete all rows
mz_timestamp | mz_state | key | before_value | after_value
-------------|----------|------|--------------|-------
@@ -475,7 +475,7 @@ structure:
_Key violation_
- ```sql
+   ```nofmt
-- at time 500, introduce a key_violation
mz_timestamp | mz_state | key | before_value | after_value
-------------|-----------------|------|--------------|-------
@@ -499,7 +499,7 @@ to sort the rows within each distinct timestamp.
* The `ORDER BY` expression can take any column in the underlying object or
query, including `mz_diff`.
- ```sql
+ ```mzsql
SUBSCRIBE mview WITHIN TIMESTAMP ORDER BY c1, c2 DESC NULLS LAST, mz_diff;
mz_timestamp | mz_diff | c1 | c2 | c3
@@ -518,7 +518,7 @@ to sort the rows within each distinct timestamp.
When you're done, you can drop the `counter` load generator source:
-```sql
+```mzsql
DROP SOURCE counter;
```
diff --git a/doc/user/content/sql/table.md b/doc/user/content/sql/table.md
index 59b93208e9fe..3034a238107d 100644
--- a/doc/user/content/sql/table.md
+++ b/doc/user/content/sql/table.md
@@ -23,7 +23,7 @@ _table\_name_ | The name of the table from which to retrieve rows.
The expression `TABLE t` is exactly equivalent to the following [`SELECT`]
expression:
-```sql
+```mzsql
SELECT * FROM t;
```
@@ -31,7 +31,7 @@ SELECT * FROM t;
Using a `TABLE` expression as a standalone statement:
-```sql
+```mzsql
TABLE t;
```
```nofmt
@@ -43,7 +43,7 @@ TABLE t;
Using a `TABLE` expression in place of a [`SELECT`] expression:
-```sql
+```mzsql
TABLE t ORDER BY a DESC LIMIT 1;
```
```nofmt
diff --git a/doc/user/content/sql/types/_index.md b/doc/user/content/sql/types/_index.md
index c2ee7ca8ffdb..fd17a4b35945 100644
--- a/doc/user/content/sql/types/_index.md
+++ b/doc/user/content/sql/types/_index.md
@@ -130,7 +130,7 @@ If we concatenate a custom `list` (in this example, `custom_list`) and a
structurally equivalent built-in `list` (`int4 list`), the result is of the same
type as the custom `list` (`custom_list`).
-```sql
+```mzsql
CREATE TYPE custom_list AS LIST (ELEMENT TYPE int4);
SELECT pg_typeof(
@@ -151,7 +151,7 @@ If we append a structurally appropriate element (`int4`) to a custom `list`
(`custom_list`), the result is of the same type as the custom `list`
(`custom_list`).
-```sql
+```mzsql
SELECT pg_typeof(
list_append('{1}'::custom_list, 2)
) AS custom_list_built_in_element_cat;
@@ -166,7 +166,7 @@ SELECT pg_typeof(
If we append a structurally appropriate custom element (`custom_list`) to a
built-in `list` (`int4 list list`), the result is a `list` of custom elements.
-```sql
+```mzsql
SELECT pg_typeof(
list_append('{{1}}'::int4 list list, '{2}'::custom_list)
) AS built_in_list_custom_element_append;
@@ -190,7 +190,7 @@ types' polymorphic constraints.
For example, values of type `custom_list list` and `custom_nested_list` cannot
both be used as `listany` values for the same function:
-```sql
+```mzsql
CREATE TYPE custom_nested_list AS LIST (element_type=custom_list);
SELECT list_cat(
@@ -208,7 +208,7 @@ As another example, when using `custom_list list` values for `listany`
parameters, you can only use `custom_list` or `int4 list` values for
`listelementany` parameters; using any other custom type will fail:
-```sql
+```mzsql
CREATE TYPE second_custom_list AS LIST (element_type=int4);
SELECT list_append(
@@ -227,7 +227,7 @@ To make custom types interoperable, you must cast them to the same type. For
example, casting `custom_nested_list` to `custom_list list` (or vice versa)
makes the values passed to `listany` parameters of the same custom type:
-```sql
+```mzsql
SELECT pg_typeof(
list_cat(
-- result is "custom_list list"
diff --git a/doc/user/content/sql/types/array.md b/doc/user/content/sql/types/array.md
index 1ba519629e0e..6a068ee24cda 100644
--- a/doc/user/content/sql/types/array.md
+++ b/doc/user/content/sql/types/array.md
@@ -39,7 +39,7 @@ whenever possible.
You can construct arrays using the special `ARRAY` expression:
-```sql
+```mzsql
SELECT ARRAY[1, 2, 3]
```
```nofmt
@@ -50,7 +50,7 @@ SELECT ARRAY[1, 2, 3]
You can nest `ARRAY` constructors to create multidimensional arrays:
-```sql
+```mzsql
SELECT ARRAY[ARRAY['a', 'b'], ARRAY['c', 'd']]
```
```nofmt
@@ -62,7 +62,7 @@ SELECT ARRAY[ARRAY['a', 'b'], ARRAY['c', 'd']]
Alternatively, you can construct an array from the results of a subquery. The subquery must return a single column. Note
that, in this form of the `ARRAY` expression, parentheses are used rather than square brackets.
-```sql
+```mzsql
SELECT ARRAY(SELECT x FROM test0 WHERE x > 0 ORDER BY x DESC LIMIT 3);
```
```nofmt
@@ -75,7 +75,7 @@ Arrays cannot be "ragged." The length of each array expression must equal the
length of all other array constructors in the same dimension. For example, the
following ragged array is rejected:
-```sql
+```mzsql
SELECT ARRAY[ARRAY[1, 2], ARRAY[3]]
```
```nofmt
@@ -102,7 +102,7 @@ quotes, backslashes and double quotes are backslash-escaped.
The following example demonstrates the output format and includes many of the
aforementioned special cases.
-```sql
+```mzsql
SELECT ARRAY[ARRAY['a', 'white space'], ARRAY[NULL, ''], ARRAY['escape"m\e', 'nUlL']]
```
```nofmt
@@ -152,7 +152,7 @@ You can cast any type of array to a list of the same element type, as long as
the array has only 0 or 1 dimensions; i.e., you can cast `integer[]` to `integer
list` provided the array is empty or does not contain any arrays itself.
-```sql
+```mzsql
SELECT pg_typeof('{1,2,3}'::integer[]::integer list);
```
```
@@ -161,7 +161,7 @@ integer list
## Examples
-```sql
+```mzsql
SELECT '{1,2,3}'::int[]
```
```nofmt
@@ -170,7 +170,7 @@ SELECT '{1,2,3}'::int[]
{1,2,3}
```
-```sql
+```mzsql
SELECT ARRAY[ARRAY[1, 2], ARRAY[NULL, 4]]::text
```
```nofmt
diff --git a/doc/user/content/sql/types/boolean.md b/doc/user/content/sql/types/boolean.md
index 3580cf7149b2..82c136a1815b 100644
--- a/doc/user/content/sql/types/boolean.md
+++ b/doc/user/content/sql/types/boolean.md
@@ -43,7 +43,7 @@ You can [cast](../../functions/cast) the following types to `boolean`:
## Examples
-```sql
+```mzsql
SELECT TRUE AS t_val;
```
```nofmt
@@ -52,7 +52,7 @@ SELECT TRUE AS t_val;
t
```
-```sql
+```mzsql
SELECT FALSE AS f_val;
f_val
-------
diff --git a/doc/user/content/sql/types/bytea.md b/doc/user/content/sql/types/bytea.md
index 23b5b8a5e388..0424de7d349b 100644
--- a/doc/user/content/sql/types/bytea.md
+++ b/doc/user/content/sql/types/bytea.md
@@ -67,7 +67,7 @@ You can explicitly [cast](../../functions/cast) [`text`](../text) to `bytea`.
Unless a `text` value is a [hex-formatted](#hex-format) string, casting to
`bytea` will encode characters using UTF-8:
-```sql
+```mzsql
SELECT 'hello 👋'::bytea;
```
```text
@@ -80,7 +80,7 @@ The reverse, however, is not true. Casting a `bytea` value to `text` will not
decode UTF-8 bytes into characters. Instead, the cast unconditionally produces a
[hex-formatted](#hex-format) string:
-```sql
+```mzsql
SELECT '\x68656c6c6f20f09f918b'::bytea::text
```
```text
@@ -92,10 +92,10 @@ SELECT '\x68656c6c6f20f09f918b'::bytea::text
To decode UTF-8 bytes into characters, use the
[`convert_from`](../../functions#convert_from) function instead of casting:
-```sql
+```mzsql
SELECT convert_from('\x68656c6c6f20f09f918b', 'utf8') AS text;
```
-```sql
+```text
text
---------
hello 👋
@@ -104,7 +104,7 @@ SELECT convert_from('\x68656c6c6f20f09f918b', 'utf8') AS text;
## Examples
-```sql
+```mzsql
SELECT '\xDEADBEEF'::bytea AS bytea_val;
```
```nofmt
@@ -115,7 +115,7 @@ SELECT '\xDEADBEEF'::bytea AS bytea_val;
-```sql
+```mzsql
SELECT '\000'::bytea AS bytea_val;
```
```nofmt
diff --git a/doc/user/content/sql/types/date.md b/doc/user/content/sql/types/date.md
index 1e1aa18c24bc..acc2429f15ae 100644
--- a/doc/user/content/sql/types/date.md
+++ b/doc/user/content/sql/types/date.md
@@ -61,7 +61,7 @@ Operation | Computes
## Examples
-```sql
+```mzsql
SELECT DATE '2007-02-01' AS date_v;
```
```nofmt
diff --git a/doc/user/content/sql/types/float.md b/doc/user/content/sql/types/float.md
index 893ef48c71b1..8b7719360924 100644
--- a/doc/user/content/sql/types/float.md
+++ b/doc/user/content/sql/types/float.md
@@ -57,7 +57,7 @@ Value | Aliases | Represents
To input these special values, write them as a string and cast that string to
the desired floating-point type. For example:
-```sql
+```mzsql
SELECT 'NaN'::real AS nan
```
```nofmt
@@ -91,7 +91,7 @@ You can [cast](../../functions/cast) to `real` or `double precision` from the fo
## Examples
-```sql
+```mzsql
SELECT 1.23::real AS real_v;
```
```nofmt
diff --git a/doc/user/content/sql/types/integer.md b/doc/user/content/sql/types/integer.md
index 793e04f653ed..142e7ac15895 100644
--- a/doc/user/content/sql/types/integer.md
+++ b/doc/user/content/sql/types/integer.md
@@ -90,7 +90,7 @@ From | Required context
## Examples
-```sql
+```mzsql
SELECT 123::integer AS int_v;
```
```nofmt
@@ -101,7 +101,7 @@ SELECT 123::integer AS int_v;
-```sql
+```mzsql
SELECT 1.23::integer AS int_v;
```
```nofmt
diff --git a/doc/user/content/sql/types/interval.md b/doc/user/content/sql/types/interval.md
index 7c6928f6feb8..db1f71e37935 100644
--- a/doc/user/content/sql/types/interval.md
+++ b/doc/user/content/sql/types/interval.md
@@ -115,7 +115,7 @@ Operation | Computes | Notes
## Examples
-```sql
+```mzsql
SELECT INTERVAL '1' MINUTE AS interval_m;
```
@@ -127,7 +127,7 @@ SELECT INTERVAL '1' MINUTE AS interval_m;
### SQL Standard syntax
-```sql
+```mzsql
SELECT INTERVAL '1-2 3 4:5:6.7' AS interval_p;
```
@@ -139,7 +139,7 @@ SELECT INTERVAL '1-2 3 4:5:6.7' AS interval_p;
### PostgreSQL syntax
-```sql
+```mzsql
SELECT INTERVAL '1 year 2.3 days 4.5 seconds' AS interval_p;
```
@@ -153,7 +153,7 @@ SELECT INTERVAL '1 year 2.3 days 4.5 seconds' AS interval_p;
`interval_n` demonstrates using negative and positive components in an interval.
-```sql
+```mzsql
SELECT INTERVAL '-1 day 2:3:4.5' AS interval_n;
```
@@ -168,7 +168,7 @@ SELECT INTERVAL '-1 day 2:3:4.5' AS interval_n;
`interval_r` demonstrates how `head_time_unit` and `tail_time_unit` truncate the
interval.
-```sql
+```mzsql
SELECT INTERVAL '1-2 3 4:5:6.7' DAY TO MINUTE AS interval_r;
```
@@ -184,7 +184,7 @@ SELECT INTERVAL '1-2 3 4:5:6.7' DAY TO MINUTE AS interval_r;
as well as using `tail_time_unit` to control the `time_unit` of the last value
of the `interval` string.
-```sql
+```mzsql
SELECT INTERVAL '1 day 2-3 4' MINUTE AS interval_w;
```
@@ -196,7 +196,7 @@ SELECT INTERVAL '1 day 2-3 4' MINUTE AS interval_w;
### Interaction with timestamps
-```sql
+```mzsql
SELECT TIMESTAMP '2020-01-01 8:00:00' + INTERVAL '1' DAY AS ts_interaction;
```
diff --git a/doc/user/content/sql/types/jsonb.md b/doc/user/content/sql/types/jsonb.md
index f65f9634c731..2de65bcca7e9 100644
--- a/doc/user/content/sql/types/jsonb.md
+++ b/doc/user/content/sql/types/jsonb.md
@@ -47,7 +47,7 @@ Functions that return `Col`s are considered table functions and can only be used
as tables, i.e. you cannot use them as scalar values. For example, you can only
use `jsonb_object_keys` in the following way:
-```sql
+```mzsql
SELECT * FROM jsonb_object_keys('{"1":2,"3":4}'::jsonb);
```
@@ -86,7 +86,7 @@ You can explicitly [cast](../../functions/cast) from [`text`](../text) to `jsonb
- `jsonb::text` always produces the printed version of the JSON.
- ```sql
+ ```mzsql
SELECT ('"a"'::jsonb)::text AS jsonb_elem;
```
```nofmt
@@ -99,7 +99,7 @@ You can explicitly [cast](../../functions/cast) from [`text`](../text) to `jsonb
element, unless the output is a single JSON string in which case they print it
without quotes, i.e. as a SQL `text` value.
- ```sql
+ ```mzsql
SELECT ('"a"'::jsonb)->>0 AS string_elem;
```
```nofmt
@@ -111,7 +111,7 @@ You can explicitly [cast](../../functions/cast) from [`text`](../text) to `jsonb
- `text` values passed to `to_jsonb` with quotes (`"`) produce `jsonb` strings
with the quotes escaped.
- ```sql
+ ```mzsql
SELECT to_jsonb('"foo"') AS escaped_quotes;
```
```nofmt
@@ -134,7 +134,7 @@ object key does not exist, or if either the input value or subscript value is
To extract an element from an array, supply the 0-indexed position as the
subscript:
-```sql
+```mzsql
SELECT ('[1, 2, 3]'::jsonb)[1]
```
```nofmt
@@ -151,7 +151,7 @@ and [`array`] types, whose subscripting operation uses 1-indexed positions.
To extract a value from an object, supply the key as the subscript:
-```sql
+```mzsql
SELECT ('{"a": 1, "b": 2, "c": 3}'::jsonb)['b'];
```
```nofmt
@@ -162,7 +162,7 @@ SELECT ('{"a": 1, "b": 2, "c": 3}'::jsonb)['b'];
You can chain subscript operations to retrieve deeply nested elements:
-```sql
+```mzsql
SELECT ('{"1": 2, "a": ["b", "c"]}'::jsonb)['a'][1];
```
```nofmt
@@ -177,7 +177,7 @@ Because the output type of the subscript operation is always `jsonb`, when
comparing the output of a subscript to a string, you must supply a JSON string
to compare against:
-```sql
+```mzsql
SELECT ('["a", "b"]'::jsonb)[1] = '"b"'
```
@@ -197,7 +197,7 @@ The type of JSON element you're accessing dictates the RHS's type.
- Use a `string` to return the value for a specific key:
- ```sql
+ ```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb->'1' AS field_jsonb;
```
```nofmt
@@ -208,7 +208,7 @@ The type of JSON element you're accessing dictates the RHS's type.
- Use an `int` to return the value in an array at a specific index:
- ```sql
+ ```mzsql
SELECT '["1", "a", 2]'::jsonb->1 AS field_jsonb;
```
```nofmt
@@ -218,7 +218,7 @@ The type of JSON element you're accessing dictates the RHS's type.
```
Field accessors can also be chained together.
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb->'a'->1 AS field_jsonb;
```
```nofmt
@@ -237,7 +237,7 @@ The type of JSON element you're accessing dictates the RHS's type.
- Use a `string` to return the value for a specific key:
- ```sql
+ ```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb->>'1' AS field_text;
```
```nofmt
@@ -248,7 +248,7 @@ The type of JSON element you're accessing dictates the RHS's type.
- Use an `int` to return the value in an array at a specific index:
- ```sql
+ ```mzsql
SELECT '["1", "a", 2]'::jsonb->>1 AS field_text;
```
```nofmt
@@ -260,7 +260,7 @@ The type of JSON element you're accessing dictates the RHS's type.
Field accessors can also be chained together, as long as the LHS remains
`jsonb`.
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb->'a'->>1 AS field_text;
```
```nofmt
@@ -277,7 +277,7 @@ You can access specific elements in a `jsonb` value using a "path", which is a
[text array](/sql/types/array) where each element is either a field key or an
array element:
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb #> '{a,1}' AS field_jsonb;
```
```nofmt
@@ -294,7 +294,7 @@ The operator returns a value of type `jsonb`. If the path is invalid, it returns
The `#>>` operator is equivalent to the [`#>`](#path-access-as-jsonb-) operator,
except that the operator returns a value of type `text`.
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb #>> '{a,1}' AS field_text;
```
```nofmt
@@ -307,7 +307,7 @@ SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb #>> '{a,1}' AS field_text;
#### `jsonb` concat (`||`)
-```sql
+```mzsql
SELECT '{"1": 2}'::jsonb ||
'{"a": ["b", "c"]}'::jsonb AS concat;
```
@@ -321,7 +321,7 @@ SELECT '{"1": 2}'::jsonb ||
#### Remove key (`-`)
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb - 'a' AS rm_key;
```
```nofmt
@@ -336,7 +336,7 @@ SELECT '{"1": 2}'::jsonb ||
Here, the left hand side does contain the right hand side, so the result is `t` for true.
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb @>
'{"1": 2}'::jsonb AS lhs_contains_rhs;
```
@@ -352,7 +352,7 @@ SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb @>
Here, the right hand side does contain the left hand side, so the result is `t` for true.
-```sql
+```mzsql
SELECT '{"1": 2}'::jsonb <@
'{"1": 2, "a": ["b", "c"]}'::jsonb AS lhs_contains_rhs;
```
@@ -366,7 +366,7 @@ SELECT '{"1": 2}'::jsonb <@
#### Search top-level keys (`?`)
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb ? 'a' AS search_for_key;
```
```nofmt
@@ -375,7 +375,7 @@ SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb ? 'a' AS search_for_key;
t
```
-```sql
+```mzsql
SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb ? 'b' AS search_for_key;
```
```nofmt
@@ -390,7 +390,7 @@ SELECT '{"1": 2, "a": ["b", "c"]}'::jsonb ? 'b' AS search_for_key;
##### Expanding a JSON array
-```sql
+```mzsql
SELECT * FROM jsonb_array_elements('[true, 1, "a", {"b": 2}, null]'::jsonb);
```
```nofmt
@@ -405,7 +405,7 @@ SELECT * FROM jsonb_array_elements('[true, 1, "a", {"b": 2}, null]'::jsonb);
##### Flattening a JSON array
-```sql
+```mzsql
SELECT t.id,
obj->>'a' AS a,
obj->>'b' AS b
@@ -431,7 +431,7 @@ CROSS JOIN jsonb_array_elements(t.json_col) AS obj;
#### `jsonb_array_elements_text`
-```sql
+```mzsql
SELECT * FROM jsonb_array_elements_text('[true, 1, "a", {"b": 2}, null]'::jsonb);
```
```nofmt
@@ -448,7 +448,7 @@ SELECT * FROM jsonb_array_elements_text('[true, 1, "a", {"b": 2}, null]'::jsonb)
#### `jsonb_array_length`
-```sql
+```mzsql
SELECT jsonb_array_length('[true, 1, "a", {"b": 2}, null]'::jsonb);
```
```nofmt
@@ -461,7 +461,7 @@ SELECT jsonb_array_length('[true, 1, "a", {"b": 2}, null]'::jsonb);
#### `jsonb_build_array`
-```sql
+```mzsql
SELECT jsonb_build_array('a', 1::float, 2.0::float, true);
```
```nofmt
@@ -474,7 +474,7 @@ SELECT jsonb_build_array('a', 1::float, 2.0::float, true);
#### `jsonb_build_object`
-```sql
+```mzsql
SELECT jsonb_build_object(2.0::float, 'b', 'a', 1.1::float);
```
```nofmt
@@ -487,7 +487,7 @@ SELECT jsonb_build_object(2.0::float, 'b', 'a', 1.1::float);
#### `jsonb_each`
-```sql
+```mzsql
SELECT * FROM jsonb_each('{"1": 2.1, "a": ["b", "c"]}'::jsonb);
```
```nofmt
@@ -503,7 +503,7 @@ Note that the `value` column is `jsonb`.
#### `jsonb_each_text`
-```sql
+```mzsql
SELECT * FROM jsonb_each_text('{"1": 2.1, "a": ["b", "c"]}'::jsonb);
```
```nofmt
@@ -519,7 +519,7 @@ Note that the `value` column is `string`.
#### `jsonb_object_keys`
-```sql
+```mzsql
SELECT * FROM jsonb_object_keys('{"1": 2, "a": ["b", "c"]}'::jsonb);
```
```nofmt
@@ -533,7 +533,7 @@ SELECT * FROM jsonb_object_keys('{"1": 2, "a": ["b", "c"]}'::jsonb);
#### `jsonb_pretty`
-```sql
+```mzsql
SELECT jsonb_pretty('{"1": 2, "a": ["b", "c"]}'::jsonb);
```
```nofmt
@@ -552,7 +552,7 @@ SELECT jsonb_pretty('{"1": 2, "a": ["b", "c"]}'::jsonb);
#### `jsonb_typeof`
-```sql
+```mzsql
SELECT jsonb_typeof('[true, 1, "a", {"b": 2}, null]'::jsonb);
```
```nofmt
@@ -561,7 +561,7 @@ SELECT jsonb_typeof('[true, 1, "a", {"b": 2}, null]'::jsonb);
array
```
-```sql
+```mzsql
SELECT * FROM jsonb_typeof('{"1": 2, "a": ["b", "c"]}'::jsonb);
```
```nofmt
@@ -574,7 +574,7 @@ SELECT * FROM jsonb_typeof('{"1": 2, "a": ["b", "c"]}'::jsonb);
#### `jsonb_strip_nulls`
-```sql
+```mzsql
SELECT jsonb_strip_nulls('[{"1":"a","2":null},"b",null,"c"]'::jsonb);
```
```nofmt
@@ -587,7 +587,7 @@ SELECT jsonb_strip_nulls('[{"1":"a","2":null},"b",null,"c"]'::jsonb);
#### `to_jsonb`
-```sql
+```mzsql
SELECT to_jsonb(t) AS jsonified_row
FROM (
VALUES
diff --git a/doc/user/content/sql/types/list.md b/doc/user/content/sql/types/list.md
index 382f2e02f9c5..8e406620c816 100644
--- a/doc/user/content/sql/types/list.md
+++ b/doc/user/content/sql/types/list.md
@@ -61,7 +61,7 @@ The name of a list type is the name of its element type followed by `list`, e.g.
You can construct lists using the `LIST` expression:
-```sql
+```mzsql
SELECT LIST[1, 2, 3];
```
```nofmt
@@ -72,7 +72,7 @@ SELECT LIST[1, 2, 3];
You can nest `LIST` constructors to create layered lists:
-```sql
+```mzsql
SELECT LIST[LIST['a', 'b'], LIST['c']];
```
```nofmt
@@ -83,7 +83,7 @@ SELECT LIST[LIST['a', 'b'], LIST['c']];
You can also elide the `LIST` keyword from the interior list expressions:
-```sql
+```mzsql
SELECT LIST[['a', 'b'], ['c']];
```
```nofmt
@@ -96,7 +96,7 @@ Alternatively, you can construct a list from the results of a subquery. The
subquery must return a single column. Note that, in this form of the `LIST`
expression, parentheses are used rather than square brackets.
-```sql
+```mzsql
SELECT LIST(SELECT x FROM test0 WHERE x > 0 ORDER BY x DESC LIMIT 3);
```
```nofmt
@@ -124,7 +124,7 @@ You can access elements of lists through:
To access an individual element of a list, you can “index” into it using
brackets (`[]`) and 1-indexed element positions:
-```sql
+```mzsql
SELECT LIST[['a', 'b'], ['c']][1];
```
```nofmt
@@ -135,7 +135,7 @@ SELECT LIST[['a', 'b'], ['c']][1];
Indexing operations can be chained together to descend the list’s layers:
-```sql
+```mzsql
SELECT LIST[['a', 'b'], ['c']][1][2];
```
```nofmt
@@ -147,7 +147,7 @@ SELECT LIST[['a', 'b'], ['c']][1][2];
If the index is invalid (either less than 1 or greater than the maximum index),
lists return _NULL_.
-```sql
+```mzsql
SELECT LIST[['a', 'b'], ['c']][1][5] AS exceed_index;
```
```nofmt
@@ -160,7 +160,7 @@ Lists have types based on their layers (unlike arrays' dimension), and error if
you attempt to index a non-list element (i.e. indexing past the list’s last
layer):
-```sql
+```mzsql
SELECT LIST[['a', 'b'], ['c']][1][2][3];
```
```nofmt
@@ -172,7 +172,7 @@ ERROR: cannot subscript type string
To access contiguous ranges of a list, you can slice it using `[first index :
last index]`, using 1-indexed positions:
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][2:4] AS two_to_four;
```
```nofmt
@@ -184,7 +184,7 @@ SELECT LIST[1,2,3,4,5][2:4] AS two_to_four;
You can omit the first index to use the first value in the list, and omit the
last index to use all elements remaining in the list.
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][:3] AS one_to_three;
```
```nofmt
@@ -193,7 +193,7 @@ SELECT LIST[1,2,3,4,5][:3] AS one_to_three;
{1,2,3}
```
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][3:] AS three_to_five;
```
```nofmt
@@ -205,7 +205,7 @@ SELECT LIST[1,2,3,4,5][3:] AS three_to_five;
If the first index exceeds the list's maximum index, the operation returns an
empty list:
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][10:] AS exceed_index;
```
```nofmt
@@ -217,7 +217,7 @@ SELECT LIST[1,2,3,4,5][10:] AS exceed_index;
If the last index exceeds the list’s maximum index, the operation returns all
remaining elements up to its final element.
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][2:10] AS two_to_end;
```
```nofmt
@@ -230,7 +230,7 @@ Performing successive slices behaves more like a traditional programming
language taking slices of an array, rather than PostgreSQL's slicing, which
descends into each layer.
-```sql
+```mzsql
SELECT LIST[1,2,3,4,5][2:][2:3] AS successive;
```
```nofmt
@@ -258,7 +258,7 @@ backslashes and double quotes are backslash-escaped.
The following example demonstrates the output format and includes many of the
aforementioned special cases.
-```sql
+```mzsql
SELECT LIST[['a', 'white space'], [NULL, ''], ['escape"m\e', 'nUlL']];
```
```nofmt
@@ -283,7 +283,7 @@ The text you cast must:
For example, to cast `text` to a `date list`, you use `date`'s `text`
representation:
- ```sql
+ ```mzsql
SELECT '{2001-02-03, 2004-05-06}'::date list as date_list;
```
@@ -305,7 +305,7 @@ The text you cast must:
For example:
- ```sql
+ ```mzsql
SELECT '{
"{brackets}",
"\"quotes\"",
@@ -365,7 +365,7 @@ their dimension. For example, arrays of `text` are all of type `text[]` and 1D,
example, in a two-layer list, each of the first layer’s lists can be of a
different length:
-```sql
+```mzsql
SELECT LIST[[1,2], [3]] AS ragged_list;
```
```
@@ -380,7 +380,7 @@ This is known as a "ragged list."
example, if the first element in a 2D list has a length of 2, all subsequent
members must also have a length of 2.
-```sql
+```mzsql
SELECT ARRAY[[1,2], [3]] AS ragged_array;
```
```
@@ -392,7 +392,7 @@ ERROR: number of array elements (3) does not match declared cardinality (4)
When indexed, lists return a value with one less layer than the indexed list.
For example, indexing a two-layer list returns a one-layer list.
-```sql
+```mzsql
SELECT LIST[['foo'],['bar']][1] AS indexing;
```
```
@@ -404,7 +404,7 @@ SELECT LIST[['foo'],['bar']][1] AS indexing;
Attempting to index twice into a `text list` (i.e. a one-layer list) fails
because you cannot index `text`.
-```sql
+```mzsql
SELECT LIST['foo'][1][2];
```
```
@@ -467,7 +467,7 @@ You can [cast](../../functions/cast) the following types to `list`:
### Literals
-```sql
+```mzsql
SELECT LIST[[1.5, NULL],[2.25]];
```
```nofmt
@@ -478,7 +478,7 @@ SELECT LIST[[1.5, NULL],[2.25]];
### Casting between lists
-```sql
+```mzsql
SELECT LIST[[1.5, NULL],[2.25]]::int list list;
```
```nofmt
@@ -489,7 +489,7 @@ SELECT LIST[[1.5, NULL],[2.25]]::int list list;
### Casting to text
-```sql
+```mzsql
SELECT LIST[[1.5, NULL],[2.25]]::text;
```
```nofmt
@@ -501,7 +501,7 @@ SELECT LIST[[1.5, NULL],[2.25]]::text;
Although the output looks identical to the examples above, the result is, in
fact, `text`.
-```sql
+```mzsql
SELECT length(LIST[[1.5, NULL],[2.25]]::text);
```
```nofmt
@@ -512,7 +512,7 @@ SELECT length(LIST[[1.5, NULL],[2.25]]::text);
### Casting from text
-```sql
+```mzsql
SELECT '{{1.5,NULL},{2.25}}'::numeric(38,2) list list AS text_to_list;
```
```nofmt
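Taken together, the slicing and indexing rules above compose left to right: a slice produces a list, which can then be indexed. A quick sketch (the column alias is illustrative):

```mzsql
-- Slice out {2,3,4}, then take the first element of the result
SELECT LIST[1,2,3,4,5][2:4][1] AS first_of_slice;
```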
diff --git a/doc/user/content/sql/types/map.md b/doc/user/content/sql/types/map.md
index 8d732d671012..64e7cd311a54 100644
--- a/doc/user/content/sql/types/map.md
+++ b/doc/user/content/sql/types/map.md
@@ -42,7 +42,7 @@ _value_type_ | The [type](../../types) of the map's values.
You can construct maps using the `MAP` expression:
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2];
```
```nofmt
@@ -53,7 +53,7 @@ SELECT MAP['a' => 1, 'b' => 2];
You can nest `MAP` constructors:
-```sql
+```mzsql
SELECT MAP['a' => MAP['b' => 'c']];
```
```nofmt
@@ -64,7 +64,7 @@ SELECT MAP['a' => MAP['b' => 'c']];
You can also elide the `MAP` keyword from the interior map expressions:
-```sql
+```mzsql
SELECT MAP['a' => ['b' => 'c']];
```
```nofmt
@@ -75,7 +75,7 @@ SELECT MAP['a' => ['b' => 'c']];
`MAP` expressions evaluate expressions for both keys and values:
-```sql
+```mzsql
SELECT MAP['a' || 'b' => 1 + 2];
```
```nofmt
@@ -89,7 +89,7 @@ subquery must return two columns: a key column of type `text` and a value column
of any type, in that order. Note that, in this form of the `MAP` expression,
parentheses are used rather than square brackets.
-```sql
+```mzsql
SELECT MAP(SELECT key, value FROM test0 ORDER BY x DESC LIMIT 3);
```
```nofmt
@@ -128,7 +128,7 @@ encoding and decoding][binary] for these types, as well.
The textual format for a `map` is a sequence of `key => value` mappings
separated by commas and surrounded by curly braces (`{}`). For example:
-```sql
+```mzsql
SELECT '{a=>123.4, b=>111.1}'::map[text=>double] as m;
```
```nofmt
@@ -138,7 +138,7 @@ SELECT '{a=>123.4, b=>111.1}'::map[text=>double] as m;
```
You can create nested maps the same way:
-```sql
+```mzsql
SELECT '{a=>{b=>{c=>d}}}'::map[text=>map[text=>map[text=>text]]] as nested_map;
```
```nofmt
@@ -177,7 +177,7 @@ You can [cast](../../functions/cast) `map` to and from the following types:
Retrieves and returns the target value or `NULL`.
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] -> 'a' as field_map;
```
```nofmt
@@ -186,7 +186,7 @@ SELECT MAP['a' => 1, 'b' => 2] -> 'a' as field_map;
1
```
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] -> 'c' as field_map;
```
```nofmt
@@ -197,7 +197,7 @@ SELECT MAP['a' => 1, 'b' => 2] -> 'c' as field_map;
Field accessors can also be chained together.
-```sql
+```mzsql
SELECT MAP['a' => ['b' => 1], 'c' => ['d' => 2]] -> 'a' -> 'b' as field_map;
```
```nofmt
@@ -212,7 +212,7 @@ Note that all returned values are of the map's value type.
#### LHS contains RHS (`@>`)
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] @> MAP['a' => 1] AS lhs_contains_rhs;
```
```nofmt
@@ -225,7 +225,7 @@ SELECT MAP['a' => 1, 'b' => 2] @> MAP['a' => 1] AS lhs_contains_rhs;
#### RHS contains LHS (`<@`)
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] <@ MAP['a' => 1] as rhs_contains_lhs;
```
```nofmt
@@ -238,7 +238,7 @@ SELECT MAP['a' => 1, 'b' => 2] <@ MAP['a' => 1] as rhs_contains_lhs;
#### Search top-level keys (`?`)
-```sql
+```mzsql
SELECT MAP['a' => 1.9, 'b' => 2.0] ? 'a' AS search_for_key;
```
```nofmt
@@ -247,7 +247,7 @@ SELECT MAP['a' => 1.9, 'b' => 2.0] ? 'a' AS search_for_key;
t
```
-```sql
+```mzsql
SELECT MAP['a' => ['aa' => 1.9], 'b' => ['bb' => 2.0]] ? 'aa' AS search_for_key;
```
```nofmt
@@ -261,7 +261,7 @@ SELECT MAP['a' => ['aa' => 1.9], 'b' => ['bb' => 2.0]] ? 'aa' AS search_for_key;
Returns `true` if all keys provided on the RHS are present in the top-level of
the map, `false` otherwise.
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] ?& ARRAY['b', 'a'] as search_for_all_keys;
```
```nofmt
@@ -270,7 +270,7 @@ SELECT MAP['a' => 1, 'b' => 2] ?& ARRAY['b', 'a'] as search_for_all_keys;
t
```
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] ?& ARRAY['c', 'b'] as search_for_all_keys;
```
```nofmt
@@ -284,7 +284,7 @@ SELECT MAP['a' => 1, 'b' => 2] ?& ARRAY['c', 'b'] as search_for_all_keys;
Returns `true` if any keys provided on the RHS are present in the top-level of
the map, `false` otherwise.
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] ?| ARRAY['c', 'b'] as search_for_any_keys;
```
```nofmt
@@ -293,7 +293,7 @@ SELECT MAP['a' => 1, 'b' => 2] ?| ARRAY['c', 'b'] as search_for_any_keys;
t
```
-```sql
+```mzsql
SELECT MAP['a' => 1, 'b' => 2] ?| ARRAY['c', 'd', '1'] as search_for_any_keys;
```
```nofmt
@@ -306,7 +306,7 @@ SELECT MAP['a' => 1, 'b' => 2] ?| ARRAY['c', 'd', '1'] as search_for_any_keys;
Returns the number of entries in the map.
-```sql
+```mzsql
SELECT map_length(MAP['a' => 1, 'b' => 2]);
```
```nofmt
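The accessor, key-search, and `map_length` operations above can be combined in a single query; the following sketch (aliases are illustrative) inspects one map in several ways at once:

```mzsql
-- Access a value, test for a key, and count entries in one pass
SELECT m -> 'a' AS a_value,
       m ? 'c' AS has_c,
       map_length(m) AS entries
FROM (SELECT MAP['a' => 1, 'b' => 2] AS m);
```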
diff --git a/doc/user/content/sql/types/numeric.md b/doc/user/content/sql/types/numeric.md
index 1450a5fcdebf..55e2b6ec142b 100644
--- a/doc/user/content/sql/types/numeric.md
+++ b/doc/user/content/sql/types/numeric.md
@@ -72,7 +72,7 @@ For details on exceeding the `numeric` type's maximum precision, see
By default, `numeric` values do not have a specified scale, so values can have
anywhere between 0 and 39 digits after the decimal point. For example:
-```sql
+```mzsql
CREATE TABLE unscaled (c NUMERIC);
INSERT INTO unscaled VALUES
(987654321098765432109876543210987654321),
@@ -92,7 +92,7 @@ However, if you specify a scale on a `numeric` value, values will be rescaled
appropriately. If the resulting value exceeds the maximum precision for
`numeric` types, you'll receive an error.
-```sql
+```mzsql
CREATE TABLE scaled (c NUMERIC(39, 20));
INSERT INTO scaled VALUES
@@ -101,7 +101,7 @@ INSERT INTO scaled VALUES
```
ERROR: numeric field overflow
```
-```sql
+```mzsql
INSERT INTO scaled VALUES
(9876543210987654321.09876543210987654321),
(.987654321098765432109876543210987654321);
@@ -119,7 +119,7 @@ SELECT c FROM scaled;
`numeric` operations will always round off fractional values to limit their
values to 39 digits of precision.
-```sql
+```mzsql
SELECT 2 * 9876543210987654321.09876543210987654321 AS rounded;
rounded
@@ -189,7 +189,7 @@ You can [cast](../../functions/cast) from the following types to `numeric`:
## Examples
-```sql
+```mzsql
SELECT 1.23::numeric AS num_v;
```
```nofmt
@@ -199,7 +199,7 @@ SELECT 1.23::numeric AS num_v;
```
-```sql
+```mzsql
SELECT 1.23::numeric(38,3) AS num_38_3_v;
```
```nofmt
@@ -210,7 +210,7 @@ SELECT 1.23::numeric(38,3) AS num_38_3_v;
-```sql
+```mzsql
SELECT 1.23e4 AS num_w_exp;
```
```nofmt
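The rescaling behavior described above can also be observed by casting a literal directly, without creating a table; a minimal sketch:

```mzsql
-- Casting to a scaled numeric rescales the literal to 2 fractional digits
SELECT 1.2345::numeric(38,2) AS rescaled;
```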
diff --git a/doc/user/content/sql/types/record.md b/doc/user/content/sql/types/record.md
index fcc57b2ba8d3..fff34ef9b50a 100644
--- a/doc/user/content/sql/types/record.md
+++ b/doc/user/content/sql/types/record.md
@@ -41,7 +41,7 @@ You cannot cast from any other types to `record`.
## Examples
-```sql
+```mzsql
SELECT ROW(1, 2) AS record;
```
```nofmt
@@ -52,7 +52,7 @@ SELECT ROW(1, 2) AS record;
-```sql
+```mzsql
SELECT record, (record).f2 FROM (SELECT ROW(1, 2) AS record);
```
```nofmt
@@ -66,7 +66,7 @@ record | f2
Forgetting to parenthesize the record expression in a field selection operation
will result in errors like the following:
-```sql
+```mzsql
SELECT record.f2 FROM (SELECT ROW(1, 2) AS record);
```
```nofmt
diff --git a/doc/user/content/sql/types/text.md b/doc/user/content/sql/types/text.md
index 56cc2a4c7df0..dc25d98313a3 100644
--- a/doc/user/content/sql/types/text.md
+++ b/doc/user/content/sql/types/text.md
@@ -29,7 +29,7 @@ Detail | Info
To escape a single quote character (`'`) in a standard string literal, write two
adjacent single quotes:
-```sql
+```mzsql
SELECT 'single''quote' AS output
```
```nofmt
@@ -82,7 +82,7 @@ You can [cast](../../functions/cast) [all types](../) to `text`. All casts are b
## Examples
-```sql
+```mzsql
SELECT 'hello' AS text_val;
```
```nofmt
@@ -93,7 +93,7 @@ SELECT 'hello' AS text_val;
-```sql
+```mzsql
SELECT E'behold\nescape strings\U0001F632' AS escape_val;
```
```nofmt
diff --git a/doc/user/content/sql/types/time.md b/doc/user/content/sql/types/time.md
index a61dbb87d9de..1da17166a7b2 100644
--- a/doc/user/content/sql/types/time.md
+++ b/doc/user/content/sql/types/time.md
@@ -58,7 +58,7 @@ Operation | Computes
## Examples
-```sql
+```mzsql
SELECT TIME '01:23:45' AS t_v;
```
```nofmt
@@ -69,7 +69,7 @@ SELECT TIME '01:23:45' AS t_v;
-```sql
+```mzsql
SELECT DATE '2001-02-03' + TIME '12:34:56' AS d_t;
```
```nofmt
diff --git a/doc/user/content/sql/types/timestamp.md b/doc/user/content/sql/types/timestamp.md
index 99ce4272e271..1034b2083e8b 100644
--- a/doc/user/content/sql/types/timestamp.md
+++ b/doc/user/content/sql/types/timestamp.md
@@ -98,7 +98,7 @@ Operation | Computes
### Return timestamp
-```sql
+```mzsql
SELECT TIMESTAMP '2007-02-01 15:04:05' AS ts_v;
```
```nofmt
@@ -109,7 +109,7 @@ SELECT TIMESTAMP '2007-02-01 15:04:05' AS ts_v;
### Return timestamp with time zone
-```sql
+```mzsql
SELECT TIMESTAMPTZ '2007-02-01 15:04:05+06' AS tstz_v;
```
```nofmt
diff --git a/doc/user/content/sql/types/uint.md b/doc/user/content/sql/types/uint.md
index aed74fb918db..290f2565c684 100644
--- a/doc/user/content/sql/types/uint.md
+++ b/doc/user/content/sql/types/uint.md
@@ -82,7 +82,7 @@ From | Required context
## Examples
-```sql
+```mzsql
SELECT 123::uint4 AS int_v;
```
```nofmt
@@ -93,7 +93,7 @@ SELECT 123::uint4 AS int_v;
-```sql
+```mzsql
SELECT 1.23::uint4 AS int_v;
```
```nofmt
diff --git a/doc/user/content/sql/types/uuid.md b/doc/user/content/sql/types/uuid.md
index c19daefad160..2ae36b9c2dde 100644
--- a/doc/user/content/sql/types/uuid.md
+++ b/doc/user/content/sql/types/uuid.md
@@ -51,7 +51,7 @@ You can [cast](../../functions/cast) `uuid` to [`text`](../text) by assignment a
## Examples
-```sql
+```mzsql
SELECT UUID 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' AS uuid
```
```nofmt
diff --git a/doc/user/content/sql/update.md b/doc/user/content/sql/update.md
index 41f5a9442b7b..62674639d7bb 100644
--- a/doc/user/content/sql/update.md
+++ b/doc/user/content/sql/update.md
@@ -30,7 +30,7 @@ _alias_ | Only permit references to _table_name_ as _alias_.
## Examples
-```sql
+```mzsql
CREATE TABLE update_me (a int, b text);
INSERT INTO update_me VALUES (1, 'hello'), (2, 'goodbye');
UPDATE update_me SET a = a + 2 WHERE b = 'hello';
@@ -42,7 +42,7 @@ SELECT * FROM update_me;
3 | hello
2 | goodbye
```
-```sql
+```mzsql
UPDATE update_me SET b = 'aloha';
SELECT * FROM update_me;
```
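The `_alias_` parameter from the syntax table above restricts references to the table to the aliased name; a minimal sketch, assuming the `update_me` table from the example:

```mzsql
-- Reference update_me only through the alias u
UPDATE update_me AS u SET b = 'aloha' WHERE u.a = 2;
```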
diff --git a/doc/user/content/sql/values.md b/doc/user/content/sql/values.md
index 629e165da80a..e5cc50935491 100644
--- a/doc/user/content/sql/values.md
+++ b/doc/user/content/sql/values.md
@@ -26,7 +26,7 @@ alone.
Using a `VALUES` expression as a standalone statement:
-```sql
+```mzsql
VALUES (1, 2, 3), (4, 5, 6);
```
```nofmt
@@ -38,7 +38,7 @@ VALUES (1, 2, 3), (4, 5, 6);
Using a `VALUES` expression in place of a `SELECT` expression:
-```sql
+```mzsql
VALUES (1), (2), (3) ORDER BY column1 DESC LIMIT 2;
```
```nofmt
@@ -50,7 +50,7 @@ VALUES (1), (2), (3) ORDER BY column1 DESC LIMIT 2;
Using a `VALUES` expression in an [`INSERT`] statement:
-```sql
+```mzsql
INSERT INTO t VALUES (1, 2), (3, 4);
```
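Beyond the uses above, a `VALUES` expression can also act as an inline relation in a `FROM` clause, with column aliases; a small sketch:

```mzsql
-- VALUES as an inline relation with column aliases
SELECT v.x, v.y
FROM (VALUES (1, 'a'), (2, 'b')) AS v (x, y);
```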
diff --git a/doc/user/content/transform-data/dataflow-troubleshooting.md b/doc/user/content/transform-data/dataflow-troubleshooting.md
index 6d17cc90c864..8b105580e149 100644
--- a/doc/user/content/transform-data/dataflow-troubleshooting.md
+++ b/doc/user/content/transform-data/dataflow-troubleshooting.md
@@ -30,7 +30,7 @@ joining relations.
To make these concepts a bit more tangible, let's look at the example from the
[getting started guide](https://materialize.com/docs/get-started/quickstart/).
-```sql
+```mzsql
CREATE SOURCE auction_house
FROM LOAD GENERATOR AUCTION
(TICK INTERVAL '100ms')
@@ -52,7 +52,7 @@ understand how this SQL query is translated to a dataflow, we can use
[`EXPLAIN PLAN`](https://materialize.com/docs/sql/explain-plan/) to display the
plan used to evaluate the join.
-```sql
+```mzsql
EXPLAIN MATERIALIZED VIEW num_bids;
```
```
@@ -135,7 +135,7 @@ To understand which dataflow is taking the most time we can query the
time the dataflows were busy since the system started and the dataflow was
created.
-```sql
+```mzsql
-- Extract raw elapsed time information for dataflows
SELECT
mdo.id,
@@ -177,7 +177,7 @@ interpret. The following query therefore only returns operators from the
`mz_scheduling_elapsed` relation. You can further drill down by adding a filter
condition that matches the name of a specific dataflow.
-```sql
+```mzsql
SELECT
mdod.id,
mdod.name,
@@ -231,7 +231,7 @@ operators, it will become visible in the histogram. The offending operator will
be scheduled in much longer intervals compared to other operators, which
reflects in the histogram as larger time buckets.
-```sql
+```mzsql
-- Extract raw scheduling histogram information for operators
WITH histograms AS (
SELECT
@@ -280,7 +280,7 @@ The reported duration is still reporting aggregated values since the operator
has been created. To get a feeling for which operators are currently doing
work, you can subscribe to the changes of the relation.
-```sql
+```mzsql
-- Observe changes to the raw scheduling histogram information
COPY(SUBSCRIBE(
WITH histograms AS (
@@ -331,7 +331,7 @@ numbers of records and the size of the arrangements. The reported records may
exceed the number of logical records; the report reflects the uncompacted
state.
-```sql
+```mzsql
-- Extract dataflow records and sizes
SELECT
id,
@@ -351,7 +351,7 @@ ORDER BY size DESC
If you need to drill down into individual operators, you can query
`mz_arrangement_sizes` instead.
-```sql
+```mzsql
-- Extract operator records and sizes
SELECT
mdod.id,
@@ -401,7 +401,7 @@ they (currently) have a granularity determined by the source itself. For
example, Kafka topic ingestion work can become skewed if most of the data is in
only one out of multiple partitions.
-```sql
+```mzsql
-- Get operators where one worker has spent more than 2 times the average
-- amount of time spent. The number 2 can be changed according to the threshold
-- for the amount of skew deemed problematic.
@@ -438,7 +438,7 @@ position `n`, then it is part of the `x` subregion of the region defined by
positions `0..n-1`. The example SQL query and result below show an operator
whose `id` is 515 that belongs to "subregion 5 of region 1 of dataflow 21".
-```sql
+```mzsql
SELECT * FROM mz_internal.mz_dataflow_addresses WHERE id=515;
```
```
@@ -456,7 +456,7 @@ said operator has only a single entry. For the example operator 515 above, you
can find the name of the dataflow by looking up the name of the operator
whose address is just "dataflow 21."
-```sql
+```mzsql
-- get id and name of the operator representing the entirety of the dataflow
-- that a problematic operator comes from
SELECT
diff --git a/doc/user/content/transform-data/join.md b/doc/user/content/transform-data/join.md
index 0d2cb7237adc..d86a1f8febc6 100644
--- a/doc/user/content/transform-data/join.md
+++ b/doc/user/content/transform-data/join.md
@@ -90,7 +90,7 @@ involving `LATERAL` joins, Materialize can optimize away the join entirely.
As a simple example, the following query uses `LATERAL` to count from 1 to `x`
for all the values of `x` in `xs`.
-```sql
+```mzsql
SELECT * FROM
(VALUES (1), (3)) xs (x)
CROSS JOIN LATERAL generate_series(1, x) y;
@@ -145,7 +145,7 @@ valid.
![inner join diagram](/images/join-inner.png)
-```sql
+```mzsql
SELECT
employees."name" AS employee,
managers."name" AS manager
@@ -169,7 +169,7 @@ is referenced.
![left outer join diagram](/images/join-left-outer.png)
-```sql
+```mzsql
SELECT
employees."name" AS employee,
managers."name" AS manager
@@ -197,7 +197,7 @@ table contain `NULL` wherever the left-hand table is referenced.
![right outer join diagram](/images/join-right-outer.png)
-```sql
+```mzsql
SELECT
employees."name" AS employee,
managers."name" AS manager
@@ -223,7 +223,7 @@ other table is referenced.
![full outer join diagram](/images/join-full-outer.png)
-```sql
+```mzsql
SELECT
employees."name" AS employee,
managers."name" AS manager
diff --git a/doc/user/content/transform-data/optimization.md b/doc/user/content/transform-data/optimization.md
index 171ba2b87de7..ac38e97ae900 100644
--- a/doc/user/content/transform-data/optimization.md
+++ b/doc/user/content/transform-data/optimization.md
@@ -38,7 +38,7 @@ Speed up a query involving a `WHERE` clause with equality comparisons to literal
| `WHERE upper(y) = 'HELLO'` | `CREATE INDEX ON obj_name (upper(y));` |
You can verify that Materialize is accessing the input by an index lookup using `EXPLAIN`. Check for `lookup_value` after the index name to confirm that an index lookup is happening, i.e., that Materialize is only reading the matching records from the index instead of scanning the entire index:
-```sql
+```mzsql
EXPLAIN SELECT * FROM foo WHERE x = 42 AND y = 'hello';
```
```
@@ -66,7 +66,7 @@ In general, you can [improve the performance of your joins](https://materialize.
Let's create a few tables to work through examples.
-```sql
+```mzsql
CREATE TABLE teachers (id INT, name TEXT);
CREATE TABLE sections (id INT, teacher_id INT, course_id INT, schedule TEXT);
CREATE TABLE courses (id INT, name TEXT);
@@ -78,7 +78,7 @@ Let's consider two queries that join on a common collection. The idea is to crea
Here is a query where we join a collection `teachers` to a collection `sections` to see the name of the teacher, schedule, and course ID for a specific section of a course.
-```sql
+```mzsql
SELECT
t.name,
s.schedule,
@@ -89,7 +89,7 @@ INNER JOIN sections s ON t.id = s.teacher_id;
Here is another query that also joins on `teachers.id`. This one counts the number of sections each teacher teaches.
-```sql
+```mzsql
SELECT
t.id,
t.name,
@@ -101,7 +101,7 @@ GROUP BY t.id, t.name;
We can eliminate redundant memory usage for these two queries by creating an index on the common column being joined, `teachers.id`.
-```sql
+```mzsql
CREATE INDEX pk_teachers ON teachers (id);
```
@@ -113,7 +113,7 @@ Note that when the same input is being used in a join as well as being constrain
- on `teachers(name)` to perform the `t.name = 'Escalante'` point lookup before the join,
- on `teachers(id)` to speed up the join and then perform the `WHERE t.name = 'Escalante'`.
-```sql
+```mzsql
SELECT
t.name,
s.schedule,
@@ -131,7 +131,7 @@ Materialize has access to a join execution strategy we call `DeltaQuery`, a.k.a.
From the previous example, add the name of the course rather than just the course ID.
-```sql
+```mzsql
CREATE VIEW course_schedule AS
SELECT
t.name AS teacher_name,
@@ -144,14 +144,14 @@ CREATE VIEW course_schedule AS
In this case, we create indexes on the join keys to optimize the query:
-```sql
+```mzsql
CREATE INDEX pk_teachers ON teachers (id);
CREATE INDEX sections_fk_teachers ON sections (teacher_id);
CREATE INDEX pk_courses ON courses (id);
CREATE INDEX sections_fk_courses ON sections (course_id);
```
-```sql
+```mzsql
EXPLAIN SELECT * FROM course_schedule;
```
@@ -186,7 +186,7 @@ To understand late materialization, you need to know about primary and foreign k
In many relational databases, indexes don't replicate the entire collection of data. Rather, they maintain just a mapping from the indexed columns back to a primary key. These few columns can take substantially less space than the whole collection, and may also change less as various unrelated attributes are updated. This is called **late materialization**, and it is possible to achieve in Materialize as well. Here are the steps to implement late materialization, along with examples.
1. Create indexes on the primary key column(s) for your input collections.
- ```sql
+ ```mzsql
CREATE INDEX pk_teachers ON teachers (id);
CREATE INDEX pk_sections ON sections (id);
CREATE INDEX pk_courses ON courses (id);
@@ -194,7 +194,7 @@ In many relational databases, indexes don't replicate the entire collection of d
2. For each foreign key in the join, create a "narrow" view with just two columns: foreign key and primary key. Then create two indexes: one for the foreign key and one for the primary key. In our example, the two foreign keys are `sections.teacher_id` and `sections.course_id`, so we do the following:
- ```sql
+ ```mzsql
-- Create a "narrow" view containing primary key sections.id
-- and foreign key sections.teacher_id
CREATE VIEW sections_narrow_teachers AS SELECT id, teacher_id FROM sections;
@@ -202,7 +202,7 @@ In many relational databases, indexes don't replicate the entire collection of d
CREATE INDEX sections_narrow_teachers_0 ON sections_narrow_teachers (id);
CREATE INDEX sections_narrow_teachers_1 ON sections_narrow_teachers (teacher_id);
```
- ```sql
+ ```mzsql
-- Create a "narrow" view containing primary key sections.id
-- and foreign key sections.course_id
CREATE VIEW sections_narrow_courses AS SELECT id, course_id FROM sections;
@@ -216,7 +216,7 @@ In many relational databases, indexes don't replicate the entire collection of d
3. Rewrite your query to use your narrow collections in the join conditions. Example:
- ```sql
+ ```mzsql
SELECT
t.name AS teacher_name,
s.schedule,
@@ -240,7 +240,7 @@ Clause | Index
Use `EXPLAIN` to verify that indexes are used as you expect. For example:
-```SQL
+```mzsql
CREATE TABLE teachers (id INT, name TEXT);
CREATE TABLE sections (id INT, teacher_id INT, course_id INT, schedule TEXT);
CREATE TABLE courses (id INT, name TEXT);
@@ -301,7 +301,7 @@ The number of levels needed in the hierarchical scheme is by default set assumin
Consider the previous example with the collection `sections`. Maintenance of the maximum `course_id` per `teacher` can be achieved with a materialized view:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW max_course_id_per_teacher AS
SELECT teacher_id, MAX(course_id)
FROM sections
@@ -310,7 +310,7 @@ GROUP BY teacher_id;
If the largest number of `course_id` values that are allocated to a single `teacher_id` is known, then this number can be provided as the `AGGREGATE INPUT GROUP SIZE`. For the query above, it is possible to get an estimate for this number by:
-```sql
+```mzsql
SELECT MAX(course_count)
FROM (
SELECT teacher_id, COUNT(*) course_count
@@ -323,7 +323,7 @@ However, the estimate is based only on data that is already present in the syste
For our example, let's suppose that we determined the largest number of courses per teacher to be `1000`. Then, the original definition of `max_course_id_per_teacher` can be revised to include the `AGGREGATE INPUT GROUP SIZE` query hint as follows:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW max_course_id_per_teacher AS
SELECT teacher_id, MAX(course_id)
FROM sections
@@ -333,7 +333,7 @@ OPTIONS (AGGREGATE INPUT GROUP SIZE = 1000)
The other two hints can be provided in [Top K] query patterns specified by `DISTINCT ON` or `LIMIT`. As an example, suppose we wish to compute not the maximum `course_id`, but rather the `id` of the section of this top course. This computation can be incrementally maintained by the following materialized view:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW section_of_top_course_per_teacher AS
SELECT DISTINCT ON(teacher_id) teacher_id, id AS section_id
FROM sections
@@ -343,7 +343,7 @@ ORDER BY teacher_id ASC, course_id DESC;
In the above examples, we see that the query hints are always positioned in an `OPTIONS` clause after a `GROUP BY` clause, but before an `ORDER BY`, as captured by the [`SELECT` syntax]. However, in the case of Top K using a `LATERAL` subquery and `LIMIT`, it is important to note that the hint is specified in the subquery. For instance, the following materialized view illustrates how to incrementally maintain the top-3 section `id`s ranked by `course_id` for each teacher:
-```sql
+```mzsql
CREATE MATERIALIZED VIEW sections_of_top_3_courses_per_teacher AS
SELECT id AS teacher_id, section_id
FROM teachers grp,
@@ -357,7 +357,7 @@ FROM teachers grp,
For indexed and materialized views that have already been created without specifying query hints, Materialize includes an introspection view, [`mz_internal.mz_expected_group_size_advice`], that can be used to query, for a given cluster, all incrementally maintained [dataflows] where tuning of the above query hints could be beneficial. The introspection view also provides an advice value based on an estimate of how many levels could be cut from the hierarchy. The following query illustrates how to access this introspection view:
-```sql
+```mzsql
SELECT dataflow_name, region_name, levels, to_cut, hint
FROM mz_internal.mz_expected_group_size_advice
ORDER BY dataflow_name, region_name;
diff --git a/doc/user/content/transform-data/patterns/percentiles.md b/doc/user/content/transform-data/patterns/percentiles.md
index 91fbe748799b..badc967e389e 100644
--- a/doc/user/content/transform-data/patterns/percentiles.md
+++ b/doc/user/content/transform-data/patterns/percentiles.md
@@ -18,7 +18,7 @@ Histograms have a lower memory footprint, linear to the number of _unique_ value
Histograms reduce the memory footprint by tracking a count for each unique value in a `bucket`, instead of tracking all values. Given an `input`, define the histogram for `values` as follows:
-```sql
+```mzsql
CREATE VIEW histogram AS
SELECT
value AS bucket,
@@ -29,7 +29,7 @@ GROUP BY value;
To query percentiles from the view `histogram`, it's no longer possible to just order the `values` and pick a value from the right spot. Instead, the distribution of values needs to be reconstructed, by determining the cumulative count (the sum of counts up through each bucket that came before) for each `bucket`. This is accomplished through a cross-join in the following view:
-```sql
+```mzsql
CREATE VIEW distribution AS
SELECT
h.bucket,
@@ -44,7 +44,7 @@ ORDER BY cumulative_distribution;
The cumulative count and the cumulative distribution can then be used to query for arbitrary percentiles. The following query returns the 90-th percentile.
-```sql
+```mzsql
SELECT bucket AS percentile
FROM distribution
WHERE cumulative_distribution >= 0.9
@@ -54,7 +54,7 @@ LIMIT 1;
To increase query performance, it can make sense to keep the `distribution` always up to date by creating an index on the view:
-```sql
+```mzsql
CREATE INDEX distribution_idx ON distribution (cumulative_distribution);
```
@@ -75,7 +75,7 @@ and the precision of the mantissa is then reduced to compute the respective buck
The basic ideas of using histograms to compute percentiles remain the same, but determining the bucket becomes more involved because it’s now composed of the triple (sign, mantissa, exponent).
-```sql
+```mzsql
-- precision for the representation of the mantissa in bits
\set precision 4
@@ -91,7 +91,7 @@ GROUP BY sign, exponent, mantissa;
The `hdr_distribution` view below reconstructs the `bucket` (with reduced precision), and determines the cumulative count and cumulative distribution.
-```sql
+```mzsql
CREATE VIEW hdr_distribution AS
SELECT
h.sign*(1.0+h.mantissa/pow(2.0, :precision))*pow(2.0,h.exponent) AS bucket,
@@ -106,7 +106,7 @@ ORDER BY cumulative_distribution;
This view can then be used to query _approximate_ percentiles. More precisely, the query returns the lower bound for the percentile (the next larger bucket represents the upper bound).
-```sql
+```mzsql
SELECT bucket AS approximate_percentile
FROM hdr_distribution
WHERE cumulative_distribution >= 0.9
@@ -116,26 +116,26 @@ LIMIT 1;
As with histograms, increase query performance by creating an index on the `cumulative_distribution` column.
-```sql
+```mzsql
CREATE INDEX hdr_distribution_idx ON hdr_distribution (cumulative_distribution);
```
## Examples
-```sql
+```mzsql
CREATE TABLE input (value BIGINT);
```
Let's add the values 1 to 10 into the `input` table.
-```sql
+```mzsql
INSERT INTO input SELECT n FROM generate_series(1,10) AS n;
```
For small numbers, `distribution` and `hdr_distribution` are identical. Even in `hdr_distribution`, all numbers from 1 to 10 are stored in their own buckets.
-```sql
+```mzsql
SELECT * FROM hdr_distribution;
bucket | frequency | cumulative_frequency | cumulative_distribution
@@ -155,13 +155,13 @@ SELECT * FROM hdr_distribution;
But if values grow larger, buckets can contain more than one value. Let's see what happens if more values are added to the `input` table.
-```sql
+```mzsql
INSERT INTO input SELECT n FROM generate_series(11,10001) AS n;
```
In the case of the `hdr_distribution`, a single bucket represents up to 512 distinct values, whereas each bucket of the `distribution` contains only a single value.
-```sql
+```mzsql
SELECT * FROM hdr_distribution ORDER BY cumulative_distribution;
bucket | frequency | cumulative_frequency | cumulative_distribution
@@ -184,7 +184,7 @@ SELECT * FROM hdr_distribution ORDER BY cumulative_distribution;
Note that `hdr_distribution` only contains 163 rows as opposed to the 10001 rows of `distribution`, which is used in the histogram approach. However, when querying for the 90-th percentile, the query returns an approximate percentile of `8704` (or more precisely between `8704` and `9216`) whereas the precise percentile is `9001`.
-```sql
+```mzsql
SELECT bucket AS approximate_percentile
FROM hdr_distribution
WHERE cumulative_distribution >= 0.9
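The same pattern yields any percentile by changing only the threshold; for example, the median (50th percentile) over the histogram-backed `distribution` view defined earlier:

```mzsql
-- Median: lowest bucket whose cumulative distribution reaches 0.5
SELECT bucket AS median
FROM distribution
WHERE cumulative_distribution >= 0.5
ORDER BY cumulative_distribution
LIMIT 1;
```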
diff --git a/doc/user/content/transform-data/patterns/rules-engine.md b/doc/user/content/transform-data/patterns/rules-engine.md
index ce0aa9452c2e..d83d0b4cdbbe 100644
--- a/doc/user/content/transform-data/patterns/rules-engine.md
+++ b/doc/user/content/transform-data/patterns/rules-engine.md
@@ -27,7 +27,7 @@ In our example, for each rule in a `bird_rules` dataset, we filter the `birds` d
### Create Resources
1. Create the `birds` table and insert some birds.
- ```sql
+ ```mzsql
CREATE TABLE birds (
id INT,
name VARCHAR(50),
@@ -48,7 +48,7 @@ In our example, for each rule in a `bird_rules` dataset, we filter the `birds` d
(10, 'Pelican', 180.4, '["White"]');
```
1. Create the `bird_rules` table and insert a few rules.
- ```sql
+ ```mzsql
CREATE TABLE bird_rules (
id INT,
starts_with CHAR(1),
@@ -69,7 +69,7 @@ In our example, for each rule in a `bird_rules` dataset, we filter the `birds` d
Here is the view that will execute our bird rules:
-```sql
+```mzsql
CREATE VIEW birds_filtered AS
SELECT r.id AS rule_id, b.name, b.colors, b.wingspan_cm
FROM
@@ -92,7 +92,7 @@ LATERAL (
### Subscribe to Changes
1. Subscribe to the changes of `birds_filtered`.
- ```sql
+ ```mzsql
COPY(SUBSCRIBE birds_filtered) TO STDOUT;
```
```nofmt
@@ -102,7 +102,7 @@ LATERAL (
```
Notice that the majestic penguin satisfies rule 2. None of the other birds satisfy any of the rules.
1. In a separate session, insert a new bird that satisfies rule 3. Rule 3 requires a bird whose first letter is 'R', with a wingspan greater than or equal to 20 centimeters, and whose colors contain "Red". We will insert a "Really big robin" that satisfies this rule.
- ```sql
+ ```mzsql
INSERT INTO birds VALUES (11, 'Really big robin', 25.0, '["Red"]');
```
Back in the `SUBSCRIBE` terminal, notice the output was immediately updated.
@@ -112,7 +112,7 @@ LATERAL (
1688674195279 1 3 Really big robin ["Red"] 25
```
1. For fun, let's delete rule 3 and see what happens.
- ```sql
+ ```mzsql
DELETE FROM bird_rules WHERE id = 3;
```
```nofmt
@@ -122,7 +122,7 @@ LATERAL (
```
Notice the bird was removed because the rule no longer exists.
1. Now let's update an existing bird so that it satisfies a new rule. It turns out our penguin also has some blue coloration we didn't notice before.
- ```sql
+ ```mzsql
UPDATE birds SET colors = '["Black","White","Blue"]' WHERE name = 'Penguin';
```
```nofmt
@@ -138,7 +138,7 @@ LATERAL (
Press `Ctrl+C` to stop your `SUBSCRIBE` query and then drop the tables to clean up.
-```sql
+```mzsql
DROP TABLE birds CASCADE;
DROP TABLE bird_rules CASCADE;
```
diff --git a/doc/user/content/transform-data/patterns/temporal-filters.md b/doc/user/content/transform-data/patterns/temporal-filters.md
index 094d7d80020c..872745abb625 100644
--- a/doc/user/content/transform-data/patterns/temporal-filters.md
+++ b/doc/user/content/transform-data/patterns/temporal-filters.md
@@ -15,7 +15,7 @@ Applying a temporal filter reduces the working dataset, saving memory resources
Here is a typical temporal filter that considers records whose timestamps are within the last 5 minutes.
-```sql
+```mzsql
WHERE mz_now() <= event_ts + INTERVAL '5min'
```
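
As a sketch of how this filter is typically used (the table and column names here are illustrative, not part of the example above), it would appear in the `WHERE` clause of a view definition:

```mzsql
-- Illustrative only: a hypothetical table of timestamped events.
CREATE TABLE events (content TEXT, event_ts TIMESTAMP);

-- A view that always reflects the last 5 minutes of events.
-- As mz_now() advances, records older than 5 minutes drop out.
CREATE VIEW recent_events AS
SELECT content, event_ts
FROM events
WHERE mz_now() <= event_ts + INTERVAL '5min';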
@@ -54,7 +54,7 @@ Other systems use this term differently because they cannot achieve a continuous
In this case, we will filter a table to include only records from the last 30 seconds.
1. First, create a table called `events` and a view of the most recent 30 seconds of events.
- ```sql
+ ```mzsql
--Create a table of timestamped events.
CREATE TABLE events (
content TEXT,
@@ -69,12 +69,12 @@ In this case, we will filter a table to only include only records from the last
```
1. Next, subscribe to the results of the view.
- ```sql
+ ```mzsql
COPY (SUBSCRIBE (SELECT ts, content FROM last_30_sec)) TO STDOUT;
```
1. In a separate session, insert a record.
- ```sql
+ ```mzsql
INSERT INTO events VALUES ('hello', now());
```
@@ -94,19 +94,19 @@ This example uses a `tasks` table with a time to live for each task.
Materialize then helps perform actions according to each task's expiration time.
1. First, create a table:
- ```sql
+ ```mzsql
CREATE TABLE tasks (name TEXT, created_ts TIMESTAMP, ttl INTERVAL);
```
1. Add some tasks to track:
- ```sql
+ ```mzsql
INSERT INTO tasks VALUES ('send_email', now(), INTERVAL '5 minutes');
INSERT INTO tasks VALUES ('time_to_eat', now(), INTERVAL '1 hour');
INSERT INTO tasks VALUES ('security_block', now(), INTERVAL '1 day');
```
1. Create a view using a temporal filter **over the expiration time**. For our example, the expiration time is the sum of the task's `created_ts` and its `ttl`.
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW tracking_tasks AS
SELECT
name,
@@ -119,21 +119,21 @@ Materialize then helps perform actions according to each task's expiration time.
You can now:
- Query the remaining time for a row:
- ```sql
+ ```mzsql
SELECT expiration_time - now() AS remaining_ttl
FROM tracking_tasks
WHERE name = 'time_to_eat';
```
- Check if a particular row is still available:
- ```sql
+ ```mzsql
SELECT true
FROM tracking_tasks
WHERE name = 'security_block';
```
- Trigger an external process when a row expires:
- ```sql
+ ```mzsql
INSERT INTO tasks VALUES ('send_email', now(), INTERVAL '5 seconds');
COPY( SUBSCRIBE tracking_tasks WITH (SNAPSHOT = false) ) TO STDOUT;
@@ -153,11 +153,11 @@ Materialize [date functions](/sql/functions/#date-and-time-func) are helpful for
The strategy for this example is to put an initial temporal filter on the input (say, 30 days) to bound it, use the [`date_bin` function](/sql/functions/date-bin) to bin records into 1 minute windows, use a second temporal filter to emit results at the end of the window, and finally apply a third temporal filter shorter than the first (say, 7 days) to set how long results should persist in Materialize.
1. First, create a table for the input records.
- ```sql
+ ```mzsql
CREATE TABLE input (id INT, event_ts TIMESTAMP);
```
1. Create a view that filters the input for the most recent 30 days and buckets records into 1 minute windows.
- ```sql
+ ```mzsql
CREATE VIEW
input_recent_bucketed
AS
@@ -174,7 +174,7 @@ The strategy for this example is to put an initial temporal filter on the input
WHERE mz_now() <= event_ts + INTERVAL '30 days';
```
1. Create the final output view that does the aggregation and maintains 7 days worth of results.
- ```sql
+ ```mzsql
CREATE MATERIALIZED VIEW output
AS
SELECT
@@ -190,11 +190,11 @@ The strategy for this example is to put an initial temporal filter on the input
```
This `WHERE` clause means "the result for a 1-minute window should come into effect when `mz_now()` reaches `window_end` and be removed 7 days later". Without the latter constraint, records in the result set would receive strange updates as records expire from the initial 30 day filter on the input.
1. Subscribe to the `output`.
- ```sql
+ ```mzsql
COPY (SUBSCRIBE (SELECT * FROM output)) TO STDOUT;
```
1. In a different session, insert some records.
- ```sql
+ ```mzsql
INSERT INTO input VALUES (1, now());
-- wait a moment
INSERT INTO input VALUES (1, now());
@@ -223,7 +223,7 @@ How can you account for late arriving data in Materialize?
Consider the temporal filter for the most recent hour's worth of records.
-```sql
+```mzsql
WHERE mz_now() <= event_ts + INTERVAL '1hr'
```
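
One way to tolerate late arrivals, sketched here under the assumption that each row records its own insertion time in a separate column, is to filter on that insertion timestamp rather than the event timestamp:

```mzsql
-- Sketch only: assumes inserts populate insert_ts with now(), e.g.
--   INSERT INTO events VALUES ('hello', TIMESTAMP '2007-02-01 15:04:01', now());
CREATE TABLE events (content TEXT, event_ts TIMESTAMP, insert_ts TIMESTAMP);

-- Late events stay visible for a full hour after they arrive,
-- regardless of how old their event_ts is.
CREATE VIEW last_hour AS
SELECT content, event_ts
FROM events
WHERE mz_now() <= insert_ts + INTERVAL '1hr';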
@@ -251,7 +251,7 @@ However, the values in the `content` column are not correlated with insertion ti
Temporal filters that consist of arithmetic, date math, and comparisons are eligible for pushdown, including all the examples on this page.
However, more complex filters might not be. You can check whether the filters in your query can be pushed down by using [the `filter_pushdown` option](/sql/explain-plan/#output-modifiers) in an `EXPLAIN` statement. For example:
-```sql
+```mzsql
EXPLAIN WITH(filter_pushdown)
SELECT count(*)
FROM events
diff --git a/doc/user/content/transform-data/patterns/time-travel-queries.md b/doc/user/content/transform-data/patterns/time-travel-queries.md
index e0d2b5a4b77e..a7427bfe4c02 100644
--- a/doc/user/content/transform-data/patterns/time-travel-queries.md
+++ b/doc/user/content/transform-data/patterns/time-travel-queries.md
@@ -129,7 +129,7 @@ state). This will allow you to resume using the retained history upstream.
1. The first time you start the subscription, run the following
continuous query against Materialize in your application code:
- ```sql
+ ```mzsql
SUBSCRIBE () WITH (PROGRESS, SNAPSHOT true);
```
@@ -146,7 +146,7 @@ is complete, so you can:
1. To resume the subscription in subsequent restarts,
use the following continuous query against Materialize in your application code:
- ```sql
+ ```mzsql
SUBSCRIBE () WITH (PROGRESS, SNAPSHOT false) AS OF ;
```
@@ -185,7 +185,7 @@ To set a history retention period for an object, use the `RETAIN HISTORY`
option, which accepts positive [interval](/sql/types/interval/) values
(e.g. `'1hr'`):
-```sql
+```mzsql
CREATE MATERIALIZED VIEW winning_bids
WITH (RETAIN HISTORY FOR '1hr') AS
SELECT auction_id,
@@ -200,7 +200,7 @@ WHERE end_time < mz_now();
To adjust the history retention period for an object, use `ALTER`:
-```sql
+```mzsql
ALTER MATERIALIZED VIEW winning_bids SET (RETAIN HISTORY FOR '2hr');
```
@@ -209,7 +209,7 @@ ALTER MATERIALIZED VIEW winning_bids SET (RETAIN HISTORY FOR '2hr');
To see what history retention period has been configured for an object,
look up the object in the [`mz_internal.mz_history_retention_strategies`](/sql/system-catalog/mz_internal/#mz_history_retention_strategies) catalog table.
-```sql
+```mzsql
SELECT
d.name AS database_name,
s.name AS schema_name,
@@ -234,6 +234,6 @@ WHERE mv.name = 'winning_bids';
To disable history retention, reset the `RETAIN HISTORY` option:
-```sql
+```mzsql
ALTER MATERIALIZED VIEW winning_bids RESET (RETAIN HISTORY);
```
diff --git a/doc/user/content/transform-data/patterns/top-k.md b/doc/user/content/transform-data/patterns/top-k.md
index 79be15844cb3..9952bfef4a67 100644
--- a/doc/user/content/transform-data/patterns/top-k.md
+++ b/doc/user/content/transform-data/patterns/top-k.md
@@ -20,7 +20,7 @@ databases, you might use window functions. In Materialize, we recommend using a
[`LATERAL` subquery](/transform-data/join/#lateral-subqueries). The general form of the
query looks like this:
-```sql
+```mzsql
SELECT * FROM
(SELECT DISTINCT key_col FROM tbl) grp,
LATERAL (
@@ -33,7 +33,7 @@ SELECT * FROM
For example, suppose you have a relation containing the population of various
U.S. cities.
-```sql
+```mzsql
CREATE TABLE cities (
name text NOT NULL,
state text NOT NULL,
@@ -56,7 +56,7 @@ INSERT INTO cities VALUES
To fetch the three most populous cities in each state:
-```sql
+```mzsql
SELECT state, name FROM
(SELECT DISTINCT state FROM cities) grp,
LATERAL (
@@ -80,7 +80,7 @@ TX Dallas
Despite the verbosity of the above query, Materialize produces a straightforward
plan:
-```sql
+```mzsql
EXPLAIN SELECT state, name FROM ...
```
```nofmt
@@ -94,7 +94,7 @@ Explained Query:
If _K_ = 1, i.e., you would like to see only the most populous city in each state, another approach is to use `DISTINCT ON`:
-```sql
+```mzsql
SELECT DISTINCT ON(state) state, name
FROM cities
ORDER BY state, pop DESC;
@@ -106,7 +106,7 @@ Note that the `ORDER BY` clause should start with the expressions that are in th
When using either the above `LATERAL` subquery pattern or `DISTINCT ON`, we recommend
specifying [query hints](/sql/select/#query-hints) to improve memory usage. For example:
-```sql
+```mzsql
SELECT state, name FROM
(SELECT DISTINCT state FROM cities) grp,
LATERAL (
@@ -119,7 +119,7 @@ SELECT state, name FROM
or
-```sql
+```mzsql
SELECT DISTINCT ON(state) state, name
FROM cities
OPTIONS (DISTINCT ON INPUT GROUP SIZE = 1000)
diff --git a/doc/user/content/transform-data/patterns/window-functions.md b/doc/user/content/transform-data/patterns/window-functions.md
index c3db24a645cd..24cabde144d6 100644
--- a/doc/user/content/transform-data/patterns/window-functions.md
+++ b/doc/user/content/transform-data/patterns/window-functions.md
@@ -20,7 +20,7 @@ It's important to note that **temporal windows** are _not_ the focus of this pag
Let's use the following sample data as input for examples:
-```sql
+```mzsql
CREATE TABLE cities (
name text NOT NULL,
state text NOT NULL,
@@ -44,7 +44,7 @@ INSERT INTO cities VALUES
## Top K using `ROW_NUMBER`
In other databases, a popular way of computing the top _K_ records per key is to use the `ROW_NUMBER` window function. For example, to get the 3 most populous cities in each state:
-```sql
+```mzsql
SELECT state, name
FROM (
SELECT state, name, ROW_NUMBER() OVER
@@ -55,7 +55,7 @@ WHERE row_num <= 3;
```
If there are states that have many cities, a more performant way to express this in Materialize is to use a lateral join (or `DISTINCT ON`, if _K_ = 1) instead of window functions:
-```sql
+```mzsql
SELECT state, name FROM
(SELECT DISTINCT state FROM cities) grp,
LATERAL (
@@ -69,7 +69,7 @@ For more details, see [Top K by group](/sql/patterns/top-k).
## `FIRST_VALUE`/`LAST_VALUE` of an entire partition
Suppose that you want to compute the ratio of each city's population to that of the most populous city in the same state. You can do so using window functions as follows:
-```sql
+```mzsql
SELECT state, name,
CAST(pop AS float) / FIRST_VALUE(pop)
OVER (PARTITION BY state ORDER BY pop DESC)
@@ -78,7 +78,7 @@ FROM cities;
For better performance, you can rewrite this query to first compute the largest population of each state using an aggregation, and then join against that:
-```sql
+```mzsql
SELECT cities.state, name, CAST(pop as float) / max_pops.max_pop
FROM cities,
(SELECT state, MAX(pop) as max_pop
@@ -93,7 +93,7 @@ If the `ROW_NUMBER` would be called with an expression that is different from th
If the input has a column that advances by regular amounts, then `LAG` and `LEAD` can be replaced by an equi-join. Suppose that you have the following data:
-```sql
+```mzsql
CREATE TABLE measurements(time timestamp, value float);
INSERT INTO measurements VALUES
(TIMESTAMP '2007-02-01 15:04:01', 8),
@@ -102,14 +102,14 @@ INSERT INTO measurements VALUES
```
You can compute the differences between consecutive measurements using `LAG()`:
-```sql
+```mzsql
SELECT time, value - LAG(value) OVER (ORDER BY time)
FROM measurements;
```
For better performance, you can rewrite this query using an equi-join:
-```sql
+```mzsql
SELECT m2.time, m2.value - m1.value
FROM measurements m1, measurements m2
WHERE m2.time = m1.time + INTERVAL '1' MINUTE;
diff --git a/doc/user/content/transform-data/troubleshooting.md b/doc/user/content/transform-data/troubleshooting.md
index 8f5629a411e8..b8c2f20fad70 100644
--- a/doc/user/content/transform-data/troubleshooting.md
+++ b/doc/user/content/transform-data/troubleshooting.md
@@ -110,7 +110,7 @@ It's important to note that this only applies to basic queries against **a
single** source, materialized view or table, with no ordering, filters or
offsets.
-```sql
+```mzsql
SELECT
FROM
LIMIT <25 or less>;
@@ -125,7 +125,7 @@ to get the execution plan for the query, and validate that it starts with
Use temporal filters to filter results on a timestamp column that correlates with
the insertion or update time of each row. For example:
-```sql
+```mzsql
WHERE mz_now() <= event_ts + INTERVAL '1hr'
```
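
Applied to a full query, the filter might look like the following sketch (the table and column names are placeholders):

```mzsql
-- Only scan the last hour of data instead of the full history.
SELECT count(*)
FROM events
WHERE mz_now() <= event_ts + INTERVAL '1hr';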
@@ -187,7 +187,7 @@ The measure of cluster busyness is CPU. You can monitor CPU usage in the
the **"Clusters"** tab in the navigation bar, and clicking into the cluster.
You can also grab CPU usage from the system catalog using SQL:
-```sql
+```mzsql
SELECT cru.cpu_percent
FROM mz_internal.mz_cluster_replica_utilization cru
LEFT JOIN mz_catalog.mz_cluster_replicas cr ON cru.replica_id = cr.id
diff --git a/doc/user/layouts/shortcodes/mysql-direct/check-the-ingestion-status.html b/doc/user/layouts/shortcodes/mysql-direct/check-the-ingestion-status.html
index 5b1130a81cb5..098bb0e17a94 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/check-the-ingestion-status.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/check-the-ingestion-status.html
@@ -9,7 +9,7 @@
[`mz_source_statuses`](/sql/system-catalog/mz_internal/#mz_source_statuses)
table to check the overall status of your source:
- ```sql
+ ```mzsql
WITH
source_ids AS
(SELECT id FROM mz_sources WHERE name = 'mz_source')
@@ -37,7 +37,7 @@
2. Once the source is running, use the [`mz_source_statistics`](/sql/system-catalog/mz_internal/#mz_source_statistics)
table to check the status of the initial snapshot:
- ```sql
+ ```mzsql
WITH
source_ids AS
(SELECT id FROM mz_sources WHERE name = 'mz_source')
diff --git a/doc/user/layouts/shortcodes/mysql-direct/create-a-cluster.html b/doc/user/layouts/shortcodes/mysql-direct/create-a-cluster.html
index 4ddcea4cf523..ef3d13013104 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/create-a-cluster.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/create-a-cluster.html
@@ -12,7 +12,7 @@
client connected to Materialize, use the [`CREATE CLUSTER`](/sql/create-cluster/)
command to create the new cluster:
- ```sql
+ ```mzsql
CREATE CLUSTER ingest_mysql (SIZE = '200cc');
SET CLUSTER = ingest_mysql;
diff --git a/doc/user/layouts/shortcodes/mysql-direct/create-a-user-for-replication.html b/doc/user/layouts/shortcodes/mysql-direct/create-a-user-for-replication.html
index 03e94f3dce9e..9c882bfc1c74 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/create-a-user-for-replication.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/create-a-user-for-replication.html
@@ -6,7 +6,7 @@
1. Create a dedicated user for Materialize, if you don't already have one:
- ```sql
+ ```mysql
CREATE USER 'materialize'@'%' IDENTIFIED BY '';
ALTER USER 'materialize'@'%' REQUIRE SSL;
@@ -14,7 +14,7 @@
1. Grant the user permission to manage replication:
- ```sql
+ ```mysql
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO 'materialize'@'%';
```
@@ -24,6 +24,6 @@
1. Apply the changes:
- ```sql
+ ```mysql
FLUSH PRIVILEGES;
```
diff --git a/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/allow-materialize-ips.html b/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/allow-materialize-ips.html
index f83926244ecf..915ae9283504 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/allow-materialize-ips.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/allow-materialize-ips.html
@@ -4,7 +4,7 @@
command to securely store the password for the `materialize` MySQL user
you created [earlier](#step-2-create-a-user-for-replication):
- ```sql
+ ```mzsql
CREATE SECRET mysqlpass AS '';
```
@@ -12,7 +12,7 @@
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION mysql_connection TO MYSQL (
HOST ,
PORT 3306,
@@ -27,7 +27,7 @@
1. Use the [`CREATE SOURCE`](/sql/create-source/) command to connect Materialize
to your MySQL instance and start ingesting data:
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
FROM mysql CONNECTION mysql_connection
FOR ALL TABLES;
diff --git a/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/use-ssh-tunnel.html b/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/use-ssh-tunnel.html
index 638da32383f8..8d41bfe6baab 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/use-ssh-tunnel.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/ingesting-data/use-ssh-tunnel.html
@@ -2,7 +2,7 @@
client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel)
command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
PORT ,
@@ -16,7 +16,7 @@
1. Get Materialize's public keys for the SSH tunnel connection:
- ```sql
+ ```mzsql
SELECT * FROM mz_ssh_tunnel_connections;
```
@@ -30,7 +30,7 @@
1. Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -38,13 +38,13 @@
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the password for the `materialize` MySQL user you created [earlier](#step-2-create-a-user-for-replication):
- ```sql
+ ```mzsql
CREATE SECRET mysqlpass AS '';
```
1. Use the [`CREATE CONNECTION`](/sql/create-connection/) command to create another connection object, this time with database access and authentication details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION mysql_connection TO MYSQL (
HOST '',
SSH TUNNEL ssh_connection
@@ -55,7 +55,7 @@
1. Use the [`CREATE SOURCE`](/sql/create-source/) command to connect Materialize to your MySQL instance and start ingesting data:
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
FROM mysql CONNECTION mysql_connection
FOR ALL TABLES;
diff --git a/doc/user/layouts/shortcodes/mysql-direct/right-size-the-cluster.html b/doc/user/layouts/shortcodes/mysql-direct/right-size-the-cluster.html
index 5b45faf3e2a3..fd8b61438a4f 100644
--- a/doc/user/layouts/shortcodes/mysql-direct/right-size-the-cluster.html
+++ b/doc/user/layouts/shortcodes/mysql-direct/right-size-the-cluster.html
@@ -6,7 +6,7 @@
1. Still in a SQL client connected to Materialize, use the [`ALTER CLUSTER`](/sql/alter-cluster/)
command to downsize the cluster to `100cc`:
- ```sql
+ ```mzsql
ALTER CLUSTER ingest_mysql SET (SIZE '100cc');
```
@@ -16,7 +16,7 @@
1. Use the [`SHOW CLUSTER REPLICAS`](/sql/show-cluster-replicas/) command to
check the status of the new replica:
- ```sql
+ ```mzsql
SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_mysql';
```
diff --git a/doc/user/layouts/shortcodes/network-security/privatelink-kafka.md b/doc/user/layouts/shortcodes/network-security/privatelink-kafka.md
index 5e1b0993edd5..7167cf735bdf 100644
--- a/doc/user/layouts/shortcodes/network-security/privatelink-kafka.md
+++ b/doc/user/layouts/shortcodes/network-security/privatelink-kafka.md
@@ -52,7 +52,7 @@ and retrieve the AWS principal needed to configure the AWS PrivateLink service.
1. #### Create an AWS PrivateLink connection
In Materialize, create an [AWS PrivateLink connection](/sql/create-connection/#aws-privatelink) that references the endpoint service that you created in the previous step.
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce..vpce-svc-',
AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
@@ -65,7 +65,7 @@ and retrieve the AWS principal needed to configure the AWS PrivateLink service.
1. Retrieve the AWS principal for the AWS PrivateLink connection you just created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -90,7 +90,7 @@ and retrieve the AWS principal needed to configure the AWS PrivateLink service.
Validate the AWS PrivateLink connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
-```sql
+```mzsql
VALIDATE CONNECTION privatelink_svc;
```
@@ -100,7 +100,7 @@ If no validation error is returned, move to the next step.
In Materialize, create a source connection that uses the AWS PrivateLink connection you just configured:
-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKERS (
'b-1.hostname-1:9096' USING AWS PRIVATELINK privatelink_svc (PORT 9001, AVAILABILITY ZONE 'use1-az2'),
diff --git a/doc/user/layouts/shortcodes/network-security/privatelink-postgres.md b/doc/user/layouts/shortcodes/network-security/privatelink-postgres.md
index 8ba4173ad68c..5ef0609c4054 100644
--- a/doc/user/layouts/shortcodes/network-security/privatelink-postgres.md
+++ b/doc/user/layouts/shortcodes/network-security/privatelink-postgres.md
@@ -39,7 +39,7 @@
1. #### Create an AWS PrivateLink Connection
In Materialize, create an [AWS PrivateLink connection](/sql/create-connection/#aws-privatelink) that references the endpoint service that you created in the previous step.
- ```sql
+ ```mzsql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce..vpce-svc-',
AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
@@ -52,7 +52,7 @@
1. Retrieve the AWS principal for the AWS PrivateLink connection you just created:
- ```sql
+ ```mzsql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
@@ -77,7 +77,7 @@
Validate the AWS PrivateLink connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
-```sql
+```mzsql
VALIDATE CONNECTION privatelink_svc;
```
@@ -87,7 +87,7 @@ If no validation error is returned, move to the next step.
In Materialize, create a source connection that uses the AWS PrivateLink connection you just configured:
-```sql
+```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
PORT 5432,
@@ -100,7 +100,7 @@ CREATE CONNECTION pg_connection TO POSTGRES (
This PostgreSQL connection can then be reused across multiple [CREATE SOURCE](https://materialize.com/docs/sql/create-source/postgres/) statements:
-```sql
+```mzsql
CREATE SOURCE mz_source
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
FOR ALL TABLES;
diff --git a/doc/user/layouts/shortcodes/network-security/ssh-tunnel.md b/doc/user/layouts/shortcodes/network-security/ssh-tunnel.md
index 315021531746..37de24e4acdb 100644
--- a/doc/user/layouts/shortcodes/network-security/ssh-tunnel.md
+++ b/doc/user/layouts/shortcodes/network-security/ssh-tunnel.md
@@ -14,7 +14,7 @@ Before you begin, make sure you have access to a bastion host. You will need:
In Materialize, create an [SSH tunnel connection](/sql/create-connection/#ssh-tunnel) to the bastion host:
-```sql
+```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '',
USER '',
@@ -29,7 +29,7 @@ created in the previous step.
1. Materialize stores public keys for SSH tunnels in the system catalog. Query [`mz_ssh_tunnel_connections`](/sql/system-catalog/mz_catalog/#mz_ssh_tunnel_connections) to retrieve the public keys for the SSH tunnel connection you just created:
- ```sql
+ ```mzsql
SELECT
mz_connections.name,
mz_ssh_tunnel_connections.*
@@ -112,7 +112,7 @@ created in the previous step.
5. Retrieve the static egress IPs from Materialize and configure the firewall rules (e.g. AWS Security Groups) for your bastion host to allow SSH traffic for those IP addresses only.
- ```sql
+ ```mzsql
SELECT * FROM mz_catalog.mz_egress_ips;
```
@@ -126,7 +126,7 @@ created in the previous step.
To confirm that the SSH tunnel connection is correctly configured, use the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
-```sql
+```mzsql
VALIDATE CONNECTION ssh_connection;
```
diff --git a/doc/user/layouts/shortcodes/postgres-direct/check-the-ingestion-status.html b/doc/user/layouts/shortcodes/postgres-direct/check-the-ingestion-status.html
index fbe3dc3ccd4d..8e47c2a2518c 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/check-the-ingestion-status.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/check-the-ingestion-status.html
@@ -9,7 +9,7 @@
[`mz_source_statuses`](/sql/system-catalog/mz_internal/#mz_source_statuses)
table to check the overall status of your source:
- ```sql
+ ```mzsql
WITH
source_ids AS
(SELECT id FROM mz_sources WHERE name = 'mz_source')
@@ -37,7 +37,7 @@
2. Once the source is running, use the [`mz_source_statistics`](/sql/system-catalog/mz_internal/#mz_source_statistics)
table to check the status of the initial snapshot:
- ```sql
+ ```mzsql
WITH
source_ids AS
(SELECT id FROM mz_sources WHERE name = 'mz_source')
diff --git a/doc/user/layouts/shortcodes/postgres-direct/create-a-cluster.html b/doc/user/layouts/shortcodes/postgres-direct/create-a-cluster.html
index 23dadd689fc3..fda182f43c4e 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/create-a-cluster.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/create-a-cluster.html
@@ -12,7 +12,7 @@
client connected to Materialize, use the [`CREATE CLUSTER`](/sql/create-cluster/)
command to create the new cluster:
- ```sql
+ ```mzsql
CREATE CLUSTER ingest_postgres (SIZE = '200cc');
SET CLUSTER = ingest_postgres;
diff --git a/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-aws.html b/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-aws.html
index 82502bd1cea2..39e19055c8ee 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-aws.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-aws.html
@@ -9,11 +9,11 @@
[replica identity](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY)
to `FULL`:
- ```sql
+ ```postgres
ALTER TABLE REPLICA IDENTITY FULL;
```
- ```sql
+ ```postgres
ALTER TABLE REPLICA IDENTITY FULL;
```
@@ -28,13 +28,13 @@
_For specific tables:_
- ```sql
+ ```postgres
CREATE PUBLICATION mz_source FOR TABLE , ;
```
_For all tables in the database:_
- ```sql
+ ```postgres
CREATE PUBLICATION mz_source FOR ALL TABLES;
```
@@ -48,31 +48,31 @@
1. Create a user for Materialize, if you don't already have one:
- ``` sql
+ ```postgres
CREATE USER materialize PASSWORD '';
```
1. Grant the user permission to manage replication:
- ``` sql
+ ```postgres
GRANT rds_replication TO materialize;
```
1. Grant the user the required permissions on the tables you want to replicate:
- ```sql
+ ```postgres
GRANT CONNECT ON DATABASE TO materialize;
```
- ```sql
+ ```postgres
GRANT USAGE ON SCHEMA TO materialize;
```
- ```sql
+ ```postgres
GRANT SELECT ON TO materialize;
```
- ```sql
+ ```postgres
GRANT SELECT ON TO materialize;
```
@@ -83,6 +83,6 @@
If you expect to add tables to your publication, you can grant `SELECT` on
all tables in the schema instead of naming the specific tables:
- ```sql
+ ```postgres
GRANT SELECT ON ALL TABLES IN SCHEMA TO materialize;
```
diff --git a/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-other.html b/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-other.html
index 3330b7b98f38..ce817f8456e7 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-other.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/create-a-publication-other.html
@@ -6,11 +6,11 @@
[replica identity](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY)
to `FULL`:
- ```sql
+ ```postgres
ALTER TABLE REPLICA IDENTITY FULL;
```
- ```sql
+ ```postgres
ALTER TABLE REPLICA IDENTITY FULL;
```
@@ -25,13 +25,13 @@
_For specific tables:_
- ```sql
+ ```postgres
CREATE PUBLICATION mz_source FOR TABLE , ;
```
_For all tables in the database:_
- ```sql
+ ```postgres
CREATE PUBLICATION mz_source FOR ALL TABLES;
```
@@ -45,31 +45,31 @@
1. Create a user for Materialize, if you don't already have one:
- ``` sql
+ ```postgres
CREATE USER materialize PASSWORD '';
```
1. Grant the user permission to manage replication:
- ``` sql
+ ```postgres
ALTER ROLE materialize WITH REPLICATION;
```
1. Grant the user the required permissions on the tables you want to replicate:
- ```sql
+ ```postgres
GRANT CONNECT ON DATABASE TO materialize;
```
- ```sql
+ ```postgres
GRANT USAGE ON SCHEMA TO materialize;
```
- ```sql
+ ```postgres
GRANT SELECT ON TO materialize;
```
- ```sql
+ ```postgres
GRANT SELECT ON TO materialize;
```
@@ -80,6 +80,6 @@
If you expect to add tables to your publication, you can grant `SELECT` on
all tables in the schema instead of naming the specific tables:
- ```sql
+ ```postgres
GRANT SELECT ON ALL TABLES IN SCHEMA TO materialize;
```
diff --git a/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/allow-materialize-ips.html b/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/allow-materialize-ips.html
index 887ce241112f..d3192c8dbd57 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/allow-materialize-ips.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/allow-materialize-ips.html
@@ -3,7 +3,7 @@
command to securely store the password for the `materialize` PostgreSQL user you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '';
```
@@ -11,7 +11,7 @@
connection object with access and authentication details for Materialize to
use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '',
PORT 5432,
@@ -31,7 +31,7 @@
to your PostgreSQL instance and start ingesting data from the publication you
created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/use-ssh-tunnel.html b/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/use-ssh-tunnel.html
index 250b2aa87ba7..4fb4a628a40b 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/use-ssh-tunnel.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/ingesting-data/use-ssh-tunnel.html
@@ -1,6 +1,6 @@
1. In the SQL client connected to Materialize, use the [`CREATE CONNECTION`](/sql/create-connection/#ssh-tunnel) command to create an SSH tunnel connection:
- ```sql
+ ```mzsql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
HOST '<SSH_BASTION_HOST>',
PORT <SSH_BASTION_PORT>,
@@ -13,7 +13,7 @@
1. Get Materialize's public keys for the SSH tunnel connection:
- ```sql
+ ```mzsql
SELECT * FROM mz_ssh_tunnel_connections;
```
@@ -27,7 +27,7 @@
1. Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the [`VALIDATE CONNECTION`](/sql/validate-connection) command:
- ```sql
+ ```mzsql
VALIDATE CONNECTION ssh_connection;
```
@@ -35,13 +35,13 @@
1. Use the [`CREATE SECRET`](/sql/create-secret/) command to securely store the password for the `materialize` PostgreSQL user you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SECRET pgpass AS '<PASSWORD>';
```
1. Use the [`CREATE CONNECTION`](/sql/create-connection/) command to create another connection object, this time with database access and authentication details for Materialize to use:
- ```sql
+ ```mzsql
CREATE CONNECTION pg_connection TO POSTGRES (
HOST '<host>',
PORT 5432,
@@ -58,7 +58,7 @@
1. Use the [`CREATE SOURCE`](/sql/create-source/) command to connect Materialize to your PostgreSQL instance and start ingesting data from the publication you created [earlier](#step-2-create-a-publication):
- ```sql
+ ```mzsql
CREATE SOURCE mz_source
IN CLUSTER ingest_postgres
FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
diff --git a/doc/user/layouts/shortcodes/postgres-direct/right-size-the-cluster.html b/doc/user/layouts/shortcodes/postgres-direct/right-size-the-cluster.html
index 06b4700a0726..980822e38d93 100644
--- a/doc/user/layouts/shortcodes/postgres-direct/right-size-the-cluster.html
+++ b/doc/user/layouts/shortcodes/postgres-direct/right-size-the-cluster.html
@@ -6,7 +6,7 @@
1. Still in a SQL client connected to Materialize, use the [`ALTER CLUSTER`](/sql/alter-cluster/)
command to downsize the cluster to `100cc`:
- ```sql
+ ```mzsql
ALTER CLUSTER ingest_postgres SET (SIZE '100cc');
```
@@ -16,7 +16,7 @@
1. Use the [`SHOW CLUSTER REPLICAS`](/sql/show-cluster-replicas/) command to
check the status of the new replica:
- ```sql
+ ```mzsql
SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_postgres';
```
@@ -35,7 +35,7 @@
PostgreSQL source from the [`mz_internal.mz_postgres_sources`](/sql/system-catalog/mz_internal/#mz_postgres_sources)
table:
- ```sql
+ ```mzsql
SELECT
d.name AS database_name,
n.name AS schema_name,
@@ -51,7 +51,7 @@
1. In PostgreSQL, check the replication slot lag, using the replication slot
name from the previous step:
- ```sql
+ ```postgres
SELECT
pg_size_pretty(pg_current_wal_lsn() - confirmed_flush_lsn)
AS replication_lag_bytes