Commit 6539a3e

Use new tip shortcode everywhere

philrz committed Dec 13, 2024
1 parent 7ba51dc commit 6539a3e
Showing 24 changed files with 97 additions and 98 deletions.
28 changes: 14 additions & 14 deletions docs/commands/super-db.md
@@ -15,7 +15,7 @@ title: super db
<p id="status"></p>

:::tip Status
{{< tip "Status" >}}
While [`super`](super.md) and its accompanying [formats](../formats/_index.md)
are production quality, the SuperDB data lake is still fairly early in development
and alpha quality.
@@ -25,7 +25,7 @@ is deployed to manage the lake's data layout via the
[lake API](../lake/api.md).

Enhanced scalability with self-tuning configuration is under development.
:::
{{< /tip >}}

## The Lake Model

@@ -153,7 +153,7 @@ running any `super db` lake command all pointing at the same storage endpoint
and the lake's data footprint will always remain consistent as the endpoints
all adhere to the consistency semantics of the lake.

:::tip caveat
{{< tip "Caveat" >}}
Data consistency is not fully implemented yet for
the S3 endpoint so only single-node access to S3 is available right now,
though support for multi-node access is forthcoming.
@@ -164,7 +164,7 @@ access to a local file system has been thoroughly tested and should be
deemed reliable, i.e., you can run a direct-access instance of `super db` alongside
a server instance of `super db` on the same file system and data consistency will
be maintained.
:::
{{< /tip >}}
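
As a hedged illustration of the direct-access-plus-service scenario described in this caveat, a sketch like the following could apply; the `serve` subcommand, `-lake` flag, and lake path are assumptions for illustration, not prescriptions from this page:

```
# Terminal 1: a service instance of super db over a file-system lake
super db serve -lake /tmp/mylake

# Terminal 2: a direct-access instance pointed at the same lake path
super db -lake /tmp/mylake ls
```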

### Locating the Lake

@@ -206,11 +206,11 @@ Each commit object is assigned a global ID.
Similar to Git, commit objects are arranged into a tree and
represent the entire commit history of the lake.

:::tip note
{{< tip "Note" >}}
Technically speaking, Git can merge from multiple parents and thus
Git commits form a directed acyclic graph instead of a tree;
SuperDB does not currently support multiple parents in the commit object history.
:::
{{< /tip >}}

A branch is simply a named pointer to a commit object in the lake
and like a pool, a branch name can be any valid UTF-8 string.
@@ -272,10 +272,10 @@ key. For example, on a pool with pool key `ts`, the query `ts == 100`
will be optimized to scan only the data objects where the value `100` could be
present.

:::tip note
{{< tip "Note" >}}
The pool key will also serve as the primary key for the forthcoming
CRUD semantics.
:::
{{< /tip >}}
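
To make the pruning example above concrete, a hedged sketch follows; the pool name, `-orderby` flag, and `query` usage are assumptions based on the surrounding text rather than verified CLI syntax:

```
# Create a pool keyed and sorted on ts, load some data, then query it
super db create -orderby ts:desc logs
super db load -use logs events.bsup
super db query "from logs | ts == 100"   # only objects whose ts range could contain 100 are scanned
```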

A pool also has a configured sort order, either ascending or descending
and data is organized in the pool in accordance with this order.
@@ -325,9 +325,9 @@ using that pool's "branches log" in a similar fashion, then its corresponding
commit object can be used to construct the data of that branch at that
past point in time.

:::tip note
{{< tip "Note" >}}
Time travel using timestamps is a forthcoming feature.
:::
{{< /tip >}}

## `super db` Commands

@@ -407,11 +407,11 @@ the [special value `this`](../language/pipeline-model.md#the-special-value-this)

A newly created pool is initialized with a branch called `main`.

:::tip note
{{< tip "Note" >}}
Lakes can be used without thinking about branches. When referencing a pool without
a branch, the tooling presumes the "main" branch as the default, and everything
can be done on main without having to think about branching.
:::
{{< /tip >}}
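
As a hedged illustration of this default, the following two queries would be equivalent (pool name hypothetical; exact command syntax assumed):

```
super db query "from logs"        # branch omitted, main is presumed
super db query "from logs@main"   # branch named explicitly
```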

### Delete
```
@@ -582,9 +582,9 @@ that is stored in the commit journal for reference. These values may
be specified as options to the [`load`](#load) command, and are also available in the
[lake API](../lake/api.md) for automation.

:::tip note
{{< tip "Note" >}}
The branchlog meta-query source is not yet implemented.
:::
{{< /tip >}}

### Ls
```
4 changes: 2 additions & 2 deletions docs/commands/super.md
@@ -187,13 +187,13 @@ not desirable because (1) the Super JSON parser is not particularly performant and
(2) all JSON numbers are floating point but the Super JSON parser will parse as
JSON any number that appears without a decimal point as an integer type.

:::tip note
{{< tip "Note" >}}
The reason `super` is not particularly performant for Super JSON is that the [Super Binary](../formats/bsup.md) or
[Super Columnar](../formats/csup.md) formats are semantically equivalent to Super JSON but much more efficient and
the design intent is that these efficient binary formats should be used in
use cases where performance matters. Super JSON is typically used only when
data needs to be human-readable in interactive settings or in automated tests.
:::
{{< /tip >}}
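
A hedged sketch of that workflow follows; the `-f`, `-o`, and `-c` flags are assumptions about the `super` CLI made for illustration only:

```
# Convert Super JSON to Super Binary once, then run queries against the binary form
super -f bsup -o events.bsup events.jsup
super -c "count()" events.bsup
```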

To this end, `super` uses a heuristic to select between Super JSON and plain JSON when the
`-i` option is not specified. Specifically, plain JSON is selected when the first values
8 changes: 4 additions & 4 deletions docs/formats/bsup.md
@@ -130,7 +130,7 @@ size decompression buffers in advance of decoding.
Values for the `format` byte are defined in the
[Super Binary compression format specification](./compression.md).

:::tip note
{{< tip "Note" >}}
This arrangement of frames separating types and values allows
for efficient scanning and parallelization. In general, values depend
on type definitions but as long as all of the types are known when
@@ -143,7 +143,7 @@ heuristics, e.g., knowing a filtering predicate can't be true based on a
quick scan of the data perhaps using the Boyer-Moore algorithm to determine
that a comparison with a string constant would not work for any
value in the buffer.
:::
{{< /tip >}}

Whether the payload was originally uncompressed or was decompressed, it is
then interpreted according to the `T` bits of the frame code as a
@@ -211,12 +211,12 @@ is further encoded as a "counted string", which is the `uvarint` encoding
of the length of the string followed by that many bytes of UTF-8 encoded
string data.

:::tip note
{{< tip "Note" >}}
As defined by [Super JSON](jsup.md), a field name can be any valid UTF-8 string much like JSON
objects can be indexed with arbitrary string keys (via index operator)
even if the field names available to the dot operator are restricted
by language syntax for identifiers.
:::
{{< /tip >}}

The type ID follows the field name and is encoded as a `uvarint`.
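
As a rough sketch of the encoding just described (not part of the specification text), Go's standard `encoding/binary` uvarint helpers can produce a counted-string field name followed by its type ID; the field name and type ID values below are made up for illustration:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeField writes a counted string (uvarint length + UTF-8 bytes)
// followed by the field's type ID as a uvarint, per the description above.
func encodeField(name string, typeID uint64) []byte {
	var tmp [binary.MaxVarintLen64]byte
	buf := make([]byte, 0, len(name)+2*binary.MaxVarintLen64)
	n := binary.PutUvarint(tmp[:], uint64(len(name))) // counted-string length
	buf = append(buf, tmp[:n]...)
	buf = append(buf, name...) // UTF-8 bytes of the field name
	n = binary.PutUvarint(tmp[:], typeID) // type ID follows the field name
	return append(buf, tmp[:n]...)
}

func main() {
	// "ts" with a hypothetical type ID of 16 encodes as: 02 74 73 10
	fmt.Printf("% x\n", encodeField("ts", 16))
}
```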

36 changes: 18 additions & 18 deletions docs/formats/csup.md
@@ -64,12 +64,12 @@ then write the metadata into the reassembly section along with the trailer
at the end. This allows a stream to be converted to a Super Columnar file
in a single pass.

:::tip note
{{< tip "Note" >}}
That said, the layout is
flexible enough that an implementation may optimize the data layout with
additional passes or by writing the output to multiple files then
merging them together (or even leaving the Super Columnar entity as separate files).
:::
{{< /tip >}}

### The Data Section

@@ -85,17 +85,17 @@ There is no information in the data section for how segments relate
to one another or how they are reconstructed into columns. They are just
blobs of Super Binary data.

:::tip note
{{< tip "Note" >}}
Unlike Parquet, there is no explicit arrangement of the column chunks into
row groups but rather they are allowed to grow at different rates so a
high-volume column might be comprised of many segments while a low-volume
column might just be one or several. This allows scans of low-volume record types
(the "mice") to perform well amongst high-volume record types (the "elephants"),
i.e., there are not a bunch of seeks with tiny reads of mice data interspersed
throughout the elephants.
:::
{{< /tip >}}

:::tip TBD
{{< tip "TBD" >}}
The mice/elephants model creates an interesting and challenging layout
problem. If you let the row indexes get too far apart (call this "skew"), then
you have to buffer very large amounts of data to keep the column data aligned.
@@ -109,15 +109,15 @@ if you use lots of buffering on ingest, you can write the mice in front of the
elephants so the read path requires less buffering to align columns. Or you can
do two passes where you store segments in separate files then merge them at close
according to an optimization plan.
:::
{{< /tip >}}

### The Reassembly Section

The reassembly section provides the information needed to reconstruct
column streams from segments, and in turn, to reconstruct the original values
from column streams, i.e., to map columns back to composite values.

:::tip note
{{< tip "Note" >}}
Of course, the reassembly section also provides the ability to extract just subsets of columns
to be read and searched efficiently without ever needing to reconstruct
the original rows. How well this performs is up to any particular
@@ -127,7 +127,7 @@ Also, the reassembly section is in general vastly smaller than the data section
so the goal here isn't to express information in cute and obscure compact forms
but rather to represent data in an easy-to-digest, programmer-friendly form that
leverages Super Binary.
:::
{{< /tip >}}

The reassembly section is a Super Binary stream. Unlike Parquet,
which uses an externally described schema
@@ -147,9 +147,9 @@ A super type's integer position in this sequence defines its identifier
encoded in the [super column](#the-super-column). This identifier is called
the super ID.

:::tip note
{{< tip "Note" >}}
Change the first N values to type values instead of nulls?
:::
{{< /tip >}}

The next N+1 records contain reassembly information for each of the N super types
where each record defines the column streams needed to reconstruct the original
@@ -171,11 +171,11 @@ type signature:
In the rest of this document, we will refer to this type as `<segmap>` for
shorthand and refer to the concept as a "segmap".

:::tip note
{{< tip "Note" >}}
We use the type name "segmap" to emphasize that this information represents
a set of byte ranges where data is stored and must be read from *rather than*
the data itself.
:::
{{< /tip >}}

#### The Super Column

@@ -216,11 +216,11 @@ This simple top-down arrangement, along with the definition of the other
column structures below, is all that is needed to reconstruct all of the
original data.

:::tip note
{{< tip "Note" >}}
Each row reassembly record has its own layout of columnar
values and there is no attempt made to store like-typed columns from different
schemas in the same physical column.
:::
{{< /tip >}}

The notation `<any_column>` refers to any instance of the five column types:
* [`<record_column>`](#record-column),
@@ -296,9 +296,9 @@ in the same column order implied by the union type, and
* `tags` is a column of `int32` values where each subsequent value encodes
the tag of the union type indicating which column the value falls within.

:::tip TBD
{{< tip "TBD" >}}
Change code to conform to columns array instead of record{c0,c1,...}
:::
{{< /tip >}}

The number of times each value of `tags` appears must equal the number of values
in each respective column.
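
As a hedged, made-up illustration of that constraint (the tag-to-column assignment here is assumed for clarity, not quoted from the spec), a sequence of five values of a union of `int64` and `string` might decompose as:

```
values:  1, "hello", 2, "world", 3

c0 (int64 column):   1, 2, 3
c1 (string column):  "hello", "world"
tags (int32 column): 0, 1, 0, 1, 0
```

Tag `0` appears three times and tag `1` twice, matching the three values in `c0` and the two values in `c1`.
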
@@ -350,14 +350,14 @@ data in the file,
it will typically fit comfortably in memory and it can be very fast to scan the
entire reassembly structure for any purpose.

:::tip Example
{{< tip "Example" >}}
For a given query, a "scan planner" could traverse all the
reassembly records to figure out which segments will be needed, then construct
an intelligent plan for reading the needed segments and attempt to read them
in mostly sequential order, which could serve as
an optimizing intermediary between any underlying storage API and the
Super Columnar decoding logic.
:::
{{< /tip >}}

To decode the "next" row, its schema index is read from the root reassembly
column stream.
4 changes: 2 additions & 2 deletions docs/install.md
@@ -40,11 +40,11 @@ This installs the `super` binary in your `$GOPATH/bin`.

Once installed, run a [quick test](#quick-tests).

:::tip note
{{< tip "Note" >}}
If you don't have Go installed, download and install it from the
[Go install page](https://golang.org/doc/install). Go 1.23 or later is
required.
:::
{{< /tip >}}
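
For reference, a Go-based install typically takes the form below; the module path is an assumption inferred from the repository linked elsewhere in these docs, so check the project README for the authoritative command:

```
go install github.com/brimdata/super/cmd/super@latest
```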

## Quick Tests

4 changes: 2 additions & 2 deletions docs/integrations/amazon-s3.md
@@ -16,11 +16,11 @@ You must specify an AWS region via one of the following:
You can create `~/.aws/config` by installing the
[AWS CLI](https://aws.amazon.com/cli/) and running `aws configure`.

:::tip Note
{{< tip "Note" >}}
If using S3-compatible storage that does not recognize the concept of regions,
a region must still be specified, e.g., by providing a dummy value for
`AWS_REGION`.
:::
{{< /tip >}}
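
For example, either of the following satisfies the region requirement (the region value is a placeholder):

```
# Environment variable
export AWS_REGION=us-east-2

# Or in ~/.aws/config (as written by `aws configure`)
[default]
region = us-east-2
```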

## Credentials

9 changes: 5 additions & 4 deletions docs/integrations/fluentd.md
@@ -81,13 +81,14 @@ The default settings when running `zed create` set the
field and sort the stored data in descending order by that key. This
configuration is ideal for Zeek log data.

:::tip Note
{{< tip "Note" >}}
The [Zui](https://zui.brimdata.io/) desktop application automatically starts a
Zed lake service when it launches. Therefore if you are using Zui you can
skip the first set of commands shown above. The pool can be created from Zui
by clicking **+**, selecting **New Pool**, then entering `ts` for the
[pool key](../commands/super-db.md#pool-key).
:::
{{< /tip >}}
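
A hedged sketch of the pool-creation step described above (the pool name is arbitrary and the `-orderby` spelling is an assumption; the defaults already match this configuration):

```
zed create -orderby ts:desc zeek
```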


### Fluentd

@@ -366,15 +367,15 @@ leverage, you can reduce the lake's storage footprint by periodically running
storage that contain the granular commits that have already been rolled into
larger objects by compaction.

:::tip Note
{{< tip "Note" >}}
As described in issue [super/4934](https://github.com/brimdata/super/issues/4934),
even after running `zed vacuum`, some files related to commit history are
currently still left behind below the lake storage path. The issue describes
manual steps that can be taken to remove these files safely, if desired.
However, if you find yourself needing to take these steps in your environment,
please [contact us](#contact-us) as it will allow us to boost the priority
of addressing the issue.
:::
{{< /tip >}}

## Ideas For Enhancement

8 changes: 4 additions & 4 deletions docs/integrations/zed-lake-auth/index.md
@@ -30,10 +30,10 @@ and then clicking the **Create API** button.

2. Enter any **Name** and URL **Identifier** for the API, then click the
**Create** button.
:::tip
{{< tip "Tip" >}}
Note the value you enter for the **Identifier** as you'll
need it later for the Zed lake service configuration.
:::
{{< /tip >}}

![api-name-identifier](api-name-identifier.png)

@@ -50,11 +50,11 @@

1. Begin creating a new application by clicking **Applications** in the left
navigation menu and then clicking the **Create Application** button.
:::tip Note
{{< tip "Note" >}}
Neither the "Zed lake (Test Application)" that was created for us
automatically when we created our API nor the Default App that came with the
trial are used in this configuration.
:::
{{< /tip >}}

![create-application](create-application.png)

4 changes: 2 additions & 2 deletions docs/integrations/zeek/data-type-compatibility.md
@@ -49,15 +49,15 @@ applicable to handling certain types.
| [`vector`](https://docs.zeek.org/en/current/script-reference/types.html#type-vector) | [`array`](../../formats/zed.md#22-array) | |
| [`record`](https://docs.zeek.org/en/current/script-reference/types.html#type-record) | [`record`](../../formats/zed.md#21-record) | See [`record` details](#record) |

:::tip Note
{{< tip "Note" >}}
The [Zeek data types](https://docs.zeek.org/en/current/script-reference/types.html)
page describes the types in the context of the
[Zeek scripting language](https://docs.zeek.org/en/master/scripting/index.html).
The Zeek types available in scripting are a superset of the data types that
may appear in Zeek log files. The encodings of the types also differ in some
ways between the two contexts. However, we link to this reference because
there is no authoritative specification of the Zeek TSV log format.
:::
{{< /tip >}}

## Example
