Merge pull request #27805 from arusahni/docs/mzsql-highlighting
arusahni committed Jun 24, 2024
2 parents 6b86f93 + 2756b65 commit 40cb516
Showing 268 changed files with 1,210 additions and 1,150 deletions.

doc/user/content/free-trial-faqs.md: 4 changes (2 additions, 2 deletions)
@@ -45,7 +45,7 @@ trial period.
To see your current credit consumption rate, measured in credits per hour, run
the following query against Materialize:

-```sql
+```mzsql
SELECT sum(s.credits_per_hour) AS credit_consumption_rate
FROM mz_cluster_replicas r
JOIN mz_cluster_replica_sizes s ON r.size = s.size;
@@ -68,7 +68,7 @@ No, you cannot go over the rate limit of 4 credits per hour at any time during
your free trial. If you try to add a replica that puts you over the limit,
Materialize will return an error similar to:

-```sql
+```nofmt
Error: creating cluster replica would violate max_credit_consumption_rate limit (desired: 6, limit: 4, current: 3)
Hint: Drop an existing cluster replica or contact support to request a limit increase.
```
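
For context, one statement that can trip this limit is an explicit replica creation. The sketch below is illustrative only: the cluster name, replica name, and size are hypothetical, and the per-size credit cost may differ.

```mzsql
-- Hypothetical example: adding another replica to an existing cluster.
-- If this replica's credits_per_hour would push the total consumption rate
-- above the 4 credits/hour trial limit, Materialize rejects the statement
-- with the max_credit_consumption_rate error shown above.
CREATE CLUSTER REPLICA quickstart.r2 (SIZE = '100cc');
```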

doc/user/content/get-started/isolation-level.md: 4 changes (2 additions, 2 deletions)
@@ -35,11 +35,11 @@ Isolation level is a configuration parameter that can be set by the user on a se

## Examples

-```sql
+```mzsql
SET TRANSACTION_ISOLATION TO 'SERIALIZABLE';
```

-```sql
+```mzsql
SET TRANSACTION_ISOLATION TO 'STRICT SERIALIZABLE';
```
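
To confirm which level is currently in effect for a session, a `SHOW` query should work, assuming Materialize's PostgreSQL-compatible handling of session variables:

```mzsql
-- Display the isolation level for the current session,
-- e.g. 'serializable' or 'strict serializable'.
SHOW TRANSACTION_ISOLATION;
```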


doc/user/content/get-started/quickstart.md: 34 changes (17 additions, 17 deletions)
@@ -45,15 +45,15 @@ purchase items only to quickly resell them for profit.
1. Let's start by kicking off the built-in [auction load generator](/sql/create-source/load-generator/#auction), so you have some data to work
with.

-```sql
+```mzsql
CREATE SOURCE auction_house
FROM LOAD GENERATOR AUCTION
(TICK INTERVAL '1s', AS OF 100000)
FOR ALL TABLES;
```
1. Use the [`SHOW SOURCES`](/sql/show-sources/) command to get an idea of the data being generated:

-```sql
+```mzsql
SHOW SOURCES;
```
<p></p>
@@ -75,7 +75,7 @@ with.

1. Before moving on, get a sense for the data you'll be working with:

-```sql
+```mzsql
SELECT * FROM auctions LIMIT 1;
```
<p></p>
@@ -86,7 +86,7 @@ with.
1 | 1824 | Best Pizza in Town | 2023-09-10 21:24:54.838+00
```

-```sql
+```mzsql
SELECT * FROM bids LIMIT 1;
```
<p></p>
@@ -107,7 +107,7 @@ with.
joins data from `auctions` and `bids` to get the bid with the highest `amount`
for each auction at its `end_time`.

-```sql
+```mzsql
CREATE VIEW winning_bids AS
SELECT DISTINCT ON (auctions.id) bids.*, auctions.item, auctions.seller
FROM auctions, bids
@@ -130,7 +130,7 @@ for each auction at its `end_time`.
yet! Querying the view re-runs the embedded statement, which comes at some cost
on growing amounts of data.

-```sql
+```mzsql
SELECT * FROM winning_bids
WHERE item = 'Best Pizza in Town'
ORDER BY bid_time DESC;
@@ -142,7 +142,7 @@ on growing amounts of data.
1. Next, try creating several indexes on the `winning_bids` view using columns
that can help optimize operations like point lookups and joins.

-```sql
+```mzsql
CREATE INDEX wins_by_item ON winning_bids (item);
CREATE INDEX wins_by_bidder ON winning_bids (buyer);
CREATE INDEX wins_by_seller ON winning_bids (seller);
@@ -156,7 +156,7 @@ that can help optimize operations like point lookups and joins.
indexes (e.g., with a point lookup), things should be a whole lot more
interactive.

-```sql
+```mzsql
SELECT * FROM winning_bids WHERE item = 'Best Pizza in Town' ORDER BY bid_time DESC;
```

@@ -172,7 +172,7 @@ interactive.
1. Create a view that detects when a user wins an auction as a bidder, and then
is identified as a seller for an item at a higher price.

-```sql
+```mzsql
CREATE VIEW fraud_activity AS
SELECT w2.seller,
w2.item AS seller_item,
@@ -191,7 +191,7 @@ is identified as a seller for an item at a higher price.

Aha! You can now catch any auction flippers in real time, based on the results of this view.

-```sql
+```mzsql
SELECT * FROM fraud_activity LIMIT 100;
```

@@ -205,7 +205,7 @@ is identified as a seller for an item at a higher price.
1. Create a [**table**](/sql/create-table/) that allows you to manually flag
fraudulent accounts.

-```sql
+```mzsql
CREATE TABLE fraud_accounts (id bigint);
```

@@ -215,7 +215,7 @@ fraudulent accounts.
1. To see results change over time, let's `SUBSCRIBE` to a query that returns
the Top 5 auction winners, overall.

-```sql
+```mzsql
SUBSCRIBE TO (
SELECT buyer, count(*)
FROM winning_bids
@@ -232,7 +232,7 @@ the Top 5 auction winners, overall.
the `SUBSCRIBE`, and mark them as fraudulent by adding them to the
`fraud_accounts` table.

-```sql
+```mzsql
INSERT INTO fraud_accounts VALUES (<id>);
```

@@ -256,7 +256,7 @@ operational use case: profit & loss alerts.

1. Create a view to track the sales and purchases of each auction house user.

-```sql
+```mzsql
CREATE VIEW funds_movement AS
SELECT id, SUM(credits) as credits, SUM(debits) as debits
FROM (
@@ -277,13 +277,13 @@ operational use case: profit & loss alerts.
spot that results are correct and consistent. As an example, the total credit
and total debit amounts should always add up.

-```sql
+```mzsql
SELECT SUM(credits), SUM(debits) FROM funds_movement;
```

You can also `SUBSCRIBE` to this query, and watch the sums change in lock step as auctions close.

-```sql
+```mzsql
SUBSCRIBE TO (
SELECT SUM(credits), SUM(debits) FROM funds_movement
);
@@ -297,7 +297,7 @@ As the auction house operator, you should now have a high degree of confidence t

Once you’re done exploring the auction house source, remember to clean up your environment:

-```sql
+```mzsql
DROP SOURCE auction_house CASCADE;
DROP TABLE fraud_accounts;

doc/user/content/ingest-data/amazon-eventbridge.md: 8 changes (4 additions, 4 deletions)
@@ -29,7 +29,7 @@ scenarios, we recommend separating your workloads into multiple clusters for

To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):

-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -40,7 +40,7 @@ SET CLUSTER = webhooks_cluster;
To validate requests between Amazon EventBridge and Materialize, you must create
a [secret](/sql/create-secret/):

-```sql
+```mzsql
CREATE SECRET eventbridge_webhook_secret AS '<secret_value>';
```

@@ -54,7 +54,7 @@ in Materialize to ingest data from Amazon EventBridge. By default, the source
will be created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.

-```sql
+```mzsql
CREATE SOURCE eventbridge_source
FROM WEBHOOK
BODY FORMAT JSON
@@ -121,7 +121,7 @@ Amazon EventBridge, you can now query the incoming data:

1. Use SQL queries to inspect and analyze the incoming data:

-```sql
+```mzsql
SELECT * FROM eventbridge_source LIMIT 10;
```


doc/user/content/ingest-data/amazon-msk.md: 6 changes (3 additions, 3 deletions)
@@ -56,7 +56,7 @@ the TCP listeners (step 3) and the VPC endpoint service (step 5).

In Materialize, create a source connection that uses the SSH tunnel connection you configured in the previous section:

-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
@@ -201,7 +201,7 @@ The process to connect Materialize to Amazon MSK consists of the following steps

e. Create a connection using the command below. The broker URL is what you copied in step c of this subsection. The `<topic-name>` is the name of the topic you created in Step 4. The `<your-username>` and `<your-password>` are from _Store a new secret_ under Step 2.

-```sql
+```mzsql
CREATE SECRET msk_password AS '<your-password>';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -226,7 +226,7 @@ multiple [`CREATE SOURCE`](/sql/create-source/kafka/) statements. By default,
the source will be created in the active cluster; to use a different cluster,
use the `IN CLUSTER` clause.

-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;

doc/user/content/ingest-data/cdc-sql-server.md: 4 changes (2 additions, 2 deletions)
@@ -154,7 +154,7 @@ information about upstream database operations, like the `before` and `after`
values for each record. To create a source that interprets the
[Debezium envelope](/sql/create-source/kafka/#using-debezium) in Materialize:

-```sql
+```mzsql
CREATE SOURCE kafka_repl
FROM KAFKA CONNECTION kafka_connection (TOPIC 'server1.testDB.tableName')
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_connection
@@ -177,7 +177,7 @@ Any materialized view defined on top of this source will be incrementally
updated as new change events stream in through Kafka, resulting from `INSERT`,
`UPDATE`, and `DELETE` operations in the original SQL Server database.

-```sql
+```mzsql
CREATE MATERIALIZED VIEW cnt_table AS
SELECT field1,
COUNT(*) AS cnt

doc/user/content/ingest-data/confluent-cloud.md: 2 changes (1 addition, 1 deletion)
@@ -88,7 +88,7 @@ of the following steps:
Step 4. The `<your-api-key>` and `<your-api-secret>` are from the _Create
an API Key_ step.

-```sql
+```mzsql
CREATE SECRET confluent_username AS '<your-api-key>';
CREATE SECRET confluent_password AS '<your-api-secret>';

doc/user/content/ingest-data/hubspot.md: 10 changes (5 additions, 5 deletions)
@@ -27,7 +27,7 @@ scenarios, we recommend separating your workloads into multiple clusters for

To create a cluster in Materialize, use the [`CREATE CLUSTER` command](/sql/create-cluster):

-```sql
+```mzsql
CREATE CLUSTER webhooks_cluster (SIZE = '25cc');
SET CLUSTER = webhooks_cluster;
@@ -37,7 +37,7 @@ SET CLUSTER = webhooks_cluster;

To validate requests between HubSpot and Materialize, you must create a [secret](/sql/create-secret/):

-```sql
+```mzsql
CREATE SECRET hubspot_webhook_secret AS '<secret_value>';
```

@@ -51,7 +51,7 @@ in Materialize to ingest data from HubSpot. By default, the source will be
created in the active cluster; to use a different cluster, use the `IN
CLUSTER` clause.

-```sql
+```mzsql
CREATE SOURCE hubspot_source
FROM WEBHOOK
BODY FORMAT JSON
@@ -159,7 +159,7 @@ HubSpot, you can now query the incoming data:

1. Use SQL queries to inspect and analyze the incoming data:

-```sql
+```mzsql
SELECT * FROM hubspot_source LIMIT 10;
```

@@ -171,7 +171,7 @@ Webhook data is ingested as a JSON blob. We recommend creating a parsing view on
top of your webhook source that uses [`jsonb` operators](/sql/types/jsonb/#operators)
to map the individual fields to columns with the required data types.

-```sql
+```mzsql
CREATE VIEW parse_hubspot AS SELECT
body->>'city' AS city,
body->>'firstname' AS firstname,

doc/user/content/ingest-data/kafka-self-hosted.md: 8 changes (4 additions, 4 deletions)
@@ -67,7 +67,7 @@ endpoint service (step 5).
In Materialize, create a source connection that uses the SSH tunnel connection
you configured in the previous section:

-```sql
+```mzsql
CREATE CONNECTION kafka_connection TO KAFKA (
BROKER 'broker1:9092',
SSH TUNNEL ssh_connection
@@ -82,7 +82,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:

-```sql
+```mzsql
SELECT * FROM mz_egress_ips;
```

@@ -92,7 +92,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
1. Create a [Kafka connection](/sql/create-connection/#kafka) that references
your Kafka cluster:

-```sql
+```mzsql
CREATE SECRET kafka_password AS '<your-password>';
CREATE CONNECTION kafka_connection TO KAFKA (
@@ -112,7 +112,7 @@ CREATE CONNECTION kafka_connection TO KAFKA (
The Kafka connection created in the previous section can then be reused across
multiple [`CREATE SOURCE`](/sql/create-source/kafka/) statements:

-```sql
+```mzsql
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
FORMAT JSON;

doc/user/content/ingest-data/mysql/amazon-aurora.md: 4 changes (2 additions, 2 deletions)
@@ -71,7 +71,7 @@ Select the option that works best for you.
client connected to Materialize, find the static egress IP addresses for the
Materialize region you are running in:

-```sql
+```mzsql
SELECT * FROM mz_egress_ips;
```

@@ -117,7 +117,7 @@ configuration of resources for an SSH tunnel. For more details, see the
SQL client connected to Materialize, get the static egress IP addresses for
the Materialize region you are running in:

-```sql
+```mzsql
SELECT * FROM mz_egress_ips;
```

