Re-format for hugo docs #5489

Open · wants to merge 21 commits into base: main
Changes from 2 commits
7 changes: 3 additions & 4 deletions docs/README.md → docs/_index.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 1
-sidebar_label: Introduction
+weight: 1
+title: Docs
+heading: SuperDB
 ---
 
-# SuperDB
-
 SuperDB offers a new approach that makes it easier to manipulate and manage
 your data. With its [super-structured data model](formats/README.md#2-a-super-structured-pattern),
 messy JSON data can easily be given the fully-typed precision of relational tables
2 changes: 0 additions & 2 deletions docs/commands/_category_.yaml

This file was deleted.

5 changes: 4 additions & 1 deletion docs/commands/README.md → docs/commands/_index.md
@@ -1,4 +1,7 @@
-# Command Tooling
+---
+title: Commands
+weight: 2
+---
 
 The [`super` command](super.md) is used to execute command-line queries on
 inputs from files, HTTP URLs, or [S3](../integrations/amazon-s3.md).
6 changes: 2 additions & 4 deletions docs/commands/super-db.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 2
-sidebar_label: super db
+weight: 2
+title: super db
 ---
 
-# `super db`
-
 > **TL;DR** `super db` is a sub-command of `super` to manage and query SuperDB data lakes.
 > You can import data from a variety of formats and it will automatically
 > be committed in [super-structured](../formats/README.md)
33 changes: 29 additions & 4 deletions docs/commands/super.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 1
-sidebar_label: super
+weight: 1
+title: super
 ---
 
-# `super`
-
 > **TL;DR** `super` is a command-line tool that uses [SuperSQL](../language/README.md)
 > to query a variety of data formats in files, over HTTP, or in [S3](../integrations/amazon-s3.md)
 > storage. Best performance is achieved when operating on data in binary formats such as
@@ -26,9 +24,36 @@ to filter, transform, and/or analyze input data.
 Super's SQL pipes dialect is extensive, so much so that it can resemble
 a log-search experience despite its SQL foundation.
 
+Each `input` argument must be a file path, an HTTP or HTTPS URL,
+an S3 URL, or standard input specified with `-`.
+
+For built-in command help and a listing of all available options,
+simply run `super` with no arguments.
+
+`super` supports a number of [input](#input-formats) and [output](#output-formats) formats, but [Super Binary](../formats/bsup.md)
+tends to be the most space-efficient and most performant. Super Binary has efficiency similar to
+[Avro](https://avro.apache.org)
+and [Protocol Buffers](https://developers.google.com/protocol-buffers),
+but its comprehensive [type system](../formats/zed.md) obviates
+the need for schema specification or registries.
+Also, the [Super JSON](../formats/jsup.md) format is human-readable and entirely one-to-one with Super Binary,
+so there is no need to represent non-readable formats like Avro or Protocol Buffers
+in a clunky JSON-encapsulated form.
+
+`super` typically operates on Super Binary-encoded data, and when you want to inspect
+human-readable bits of output, you merely format it as Super JSON, which is the
+default format when output is directed to the terminal. Super Binary is the default
+when redirecting to a non-terminal output like a file or pipe.
+
+When run with input arguments, each input's format is [automatically inferred](#auto-detection)
+and each input is scanned
+in the order appearing on the command line, forming the input stream.
 
 The `super` command works with data from ephemeral sources like files and URLs.
 If you want to persist your data into a data lake for persistent storage,
 check out the [`super db`](super-db.md) set of commands.
 
 By invoking the `-c` option, a query expressed in the [SuperSQL language](../language/README.md)
 may be specified and applied to the input stream.
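Stepping outside the diff for a moment, the behavior this hunk documents can be sketched in a short session like the following (the input file name is hypothetical, and only flags the text above mentions are used):

```sh
# On a terminal, output defaults to human-readable Super JSON.
super -c "SELECT count(), typeof(this) AS shape GROUP BY shape, count" talks.json

# Redirected to a file or pipe, output defaults to Super Binary.
super -c "SELECT count(), typeof(this) AS shape GROUP BY shape, count" talks.json > shapes.bsup

# Standard input is specified with `-`.
cat talks.json | super -c "SELECT count(), typeof(this) AS shape GROUP BY shape, count" -
```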
11 changes: 8 additions & 3 deletions docs/formats/README.md → docs/formats/_index.md
@@ -1,4 +1,7 @@
-# Formats
+---
+title: Formats
+weight: 5
+---
 
 > **TL;DR** The super data model defines a new and easy way to manage, store,
 > and process data utilizing an emerging concept called
@@ -228,11 +231,13 @@ anywhere that a value can appear. In particular, types can be aggregation keys.
 This is very powerful for data discovery and introspection. For example,
 to count the different shapes of data, you might have a SuperSQL query,
 operating on each input value as `this`, that has the form:
-```
+
+```sql
 SELECT count(), typeof(this) AS shape GROUP BY shape, count
 ```
 Likewise, you could select a sample value of each shape like this:
-```
+
+```sql
 SELECT shape FROM (
   SELECT any(this) AS sample, typeof(this) AS shape GROUP BY shape,sample
 )
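As a quick, hedged illustration of the two queries in this hunk (the sample inputs are invented, and concatenated-JSON input on stdin is assumed to be accepted by the auto-detector):

```sh
# Three values of two shapes: {a:int64} twice, {s:string} once.
echo '{"a":1}{"a":2}{"s":"hello"}' |
  super -c "SELECT count(), typeof(this) AS shape GROUP BY shape, count" -
```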
9 changes: 4 additions & 5 deletions docs/formats/bsup.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 2
-sidebar_label: Super Binary
+weight: 2
+title: Super Binary
+heading: Super Binary Specification
 ---
 
-# Super Binary Specification
-
 ## 1. Introduction
 
 Super Binary is an efficient, sequence-oriented serialization format for any data
@@ -296,7 +295,7 @@ indicated by `<type-id>`.
 
 #### 2.1.8 Named Type Typedef
 
-A named type defines a new type ID that binds a name to a previously existing type ID. 
+A named type defines a new type ID that binds a name to a previously existing type ID.
 
 A named type is encoded as follows:
 ```
7 changes: 3 additions & 4 deletions docs/formats/compression.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 6
-sidebar_label: Compression
+weight: 6
+title: Compression
+heading: ZNG Compression Types
 ---
 
-# ZNG Compression Types
-
 This document specifies values for the `<format>` byte of a
 [Super Binary compressed value message block](bsup.md#2-the-super-binary-format)
 and the corresponding algorithms for the `<compressed payload>` byte sequence.
11 changes: 5 additions & 6 deletions docs/formats/csup.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 4
-sidebar_label: Super Columnar
+weight: 4
+title: Super Columnar
+heading: Super Columnar Specification
 ---
 
-# Super Columnar Specification
-
 Super Columnar is a file format based on
 the [super data model](zed.md) where data is stacked to form columns.
 Its purpose is to provide for efficient analytics and search over
@@ -245,7 +244,7 @@ and has the form:
   <fld2>:{column:<any_column>,presence:<segmap>},
   ...
   <fldn>:{column:<any_column>,presence:<segmap>}
-} 
+}
 ```
 where
 * `<fld1>` through `<fldn>` are the names of the top-level fields of the
@@ -268,7 +267,7 @@ An `<array_column>` has the form:
 {values:<any_column>,lengths:<segmap>}
 ```
 where
-* `values` represents a continuous sequence of values of the array elements 
+* `values` represents a continuous sequence of values of the array elements
   that are sliced into array values based on the length information, and
 * `lengths` encodes an `int32` sequence of values that represent the length
   of each array value.
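To make the `values`/`lengths` split in this hunk concrete, here is a small worked example with invented data (not taken from the spec):

```sh
# array values:            [1,2]  [3]  [4,5,6]
# values column (flat):    1, 2, 3, 4, 5, 6
# lengths column (int32):  2, 1, 3
# Reading back: take 2 values, then 1, then 3, reconstructing the three arrays.
```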
7 changes: 3 additions & 4 deletions docs/formats/jsup.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 3
-sidebar_label: Super JSON
+weight: 3
+title: Super JSON
+heading: Super JSON Specification
 ---
 
-# Super JSON Specification
-
 ## 1. Introduction
 
 Super JSON is the human-readable, text-based serialization format of
10 changes: 4 additions & 6 deletions docs/formats/zed.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 1
-sidebar_label: Data Model
+weight: 1
+title: Data Model
 ---
 
-# Zed Data Model
-
 Zed data is defined as an ordered sequence of one or more typed data values.
 Each value's type is either a "primitive type", a "complex type", the "type type",
 a "named type", or the "null type".
@@ -154,7 +152,7 @@ have a common Zed type and the values have a common Zed type.
 
 Each key across an instance of a map value must be a unique value.
 
-A map value may be empty. 
+A map value may be empty.
 
 A map type is uniquely defined by its key type and value type.
 
@@ -196,7 +194,7 @@ the type order of the constituent types in left to right order.
 
 ### 2.7 Error
 
-An error represents any value designated as an error. 
+An error represents any value designated as an error.
 
 The type order of an error is the type order of the type of its contained value.
 
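For orientation while reading these data-model hunks, map values can be round-tripped through `super` roughly as in the sketch below; the `|{...}|` Super JSON map syntax and this exact invocation are assumptions, not shown in the diff:

```sh
# Two map values, the second empty; keys within a map must be unique.
echo '|{"k1":1,"k2":2}| |{}|' | super -
```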
7 changes: 3 additions & 4 deletions docs/formats/zjson.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 5
-sidebar_label: ZJSON
+weight: 5
+title: ZJSON
+heading: ZJSON Specification
 ---
 
-# ZJSON Specification
-
 ## 1. Introduction
 
 The [super data model](zed.md)
6 changes: 2 additions & 4 deletions docs/install.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 2
-sidebar_label: Installation
+weight: 2
+title: Installation
 ---
 
-# Installation
-
 Several options for installing `super` are available:
 * [Homebrew](#homebrew) for Mac or Linux,
 * [Binary download](#binary-download), or
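As a hedged sketch of the Homebrew path (the tap name is an assumption based on the brimdata org, not confirmed by this diff):

```sh
# Install via a Homebrew tap (hypothetical tap name).
brew install brimdata/tap/super

# Verify: running `super` with no arguments prints built-in help,
# per the super.md text earlier in this diff.
super
```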
2 changes: 0 additions & 2 deletions docs/integrations/_category_.yaml

This file was deleted.

4 changes: 4 additions & 0 deletions docs/integrations/_index.md
@@ -0,0 +1,4 @@
+---
+weight: 8
+title: Integrations
+---
6 changes: 2 additions & 4 deletions docs/integrations/amazon-s3.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 1
-sidebar_label: Amazon S3
+weight: 1
+title: Amazon S3
 ---
 
-# Amazon S3
-
 Zed tools can access [Amazon S3](https://aws.amazon.com/s3/) and
 S3-compatible storage via `s3://` URIs. Details are described below.
 
6 changes: 2 additions & 4 deletions docs/integrations/fluentd.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 3
-sidebar_label: Fluentd
+weight: 3
+title: Fluentd
 ---
 
-# Fluentd
-
 The [Fluentd](https://www.fluentd.org/) open source data collector can be used
 to push log data to a [SuperDB data lake](../commands/super-db.md) in a continuous manner.
 This allows for querying near-"live" event data to enable use cases such as
7 changes: 3 additions & 4 deletions docs/integrations/grafana.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 4
-sidebar_label: Grafana
+weight: 4
+title: Grafana
+heading: Grafana Data Source Plugin
 ---
 
-# Grafana Data Source Plugin
-
 A [data source plugin](https://grafana.com/grafana/plugins/?type=datasource)
 for [Grafana](https://grafana.com/) is available that enables plotting of
 time-series data that's stored in [SuperDB data lakes](../commands/super-db.md). See the
9 changes: 4 additions & 5 deletions docs/integrations/zed-lake-auth.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 2
-sidebar_label: Authentication Configuration
+weight: 2
+title: Authentication Configuration
+heading: Configuring Authentication for a Zed Lake Service
 ---
 
-# Configuring Authentication for a Zed Lake Service
-
 A [SuperDB data lake service](../commands/super-db.md#serve) may be configured to require
 user authentication to be accessed from clients such as the
 [Zui](https://zui.brimdata.io/) application, the
@@ -136,7 +135,7 @@ authentication configuration along with a directory name for lake storage.
     -auth.jwkspath=jwks.json \
     -auth.audience=$auth0_api_identifier \
     -lake=lake
- 
+
 {"level":"info","ts":1678909988.9797907,"logger":"core","msg":"Started"}
 {"level":"info","ts":1678909988.9804773,"logger":"httpd","msg":"Listening","addr":"[::]:9867"}
 ...
2 changes: 0 additions & 2 deletions docs/integrations/zeek/_category_.yaml

This file was deleted.

5 changes: 4 additions & 1 deletion docs/integrations/zeek/README.md → docs/integrations/zeek/_index.md
@@ -1,4 +1,7 @@
-# Zed Interoperability with Zeek Logs
+---
+title: Zeek
+heading: Zed Interoperability with Zeek Logs
+---
 
 Zed includes functionality and reference configurations specific to working
 with logs from the [Zeek](https://zeek.org/) open source network security
6 changes: 2 additions & 4 deletions docs/integrations/zeek/data-type-compatibility.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 2
-sidebar_label: Zed/Zeek Data Type Compatibility
+weight: 2
+title: Zed/Zeek Data Type Compatibility
 ---
 
-# Zed/Zeek Data Type Compatibility
-
 As the [super data model](../../formats/zed.md) was in many ways inspired by the
 [Zeek TSV log format](https://docs.zeek.org/en/master/log-formats.html#zeek-tsv-format-logs),
 SuperDB's rich storage formats ([Super JSON](../../formats/jsup.md),
6 changes: 2 additions & 4 deletions docs/integrations/zeek/reading-zeek-log-formats.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 1
-sidebar_label: Reading Zeek Log Formats
+weight: 1
+title: Reading Zeek Log Formats
 ---
 
-# Reading Zeek Log Formats
-
 Zed is capable of reading both common Zeek log formats. This document
 provides guidance for what to expect when reading logs of these formats using
 the Zed [command line tools](../../commands/README.md).
6 changes: 2 additions & 4 deletions docs/integrations/zeek/shaping-zeek-json.md
@@ -1,10 +1,8 @@
 ---
-sidebar_position: 3
-sidebar_label: Shaping Zeek JSON
+weight: 3
+title: Shaping Zeek JSON
 ---
 
-# Shaping Zeek JSON
-
 When [reading Zeek JSON format logs](reading-zeek-log-formats.md#zeek-json),
 much of the rich data typing that was originally present inside Zeek is at risk
 of being lost. This detail can be restored using a Zed
2 changes: 0 additions & 2 deletions docs/lake/_category_.yaml

This file was deleted.

4 changes: 4 additions & 0 deletions docs/lake/_index.md
@@ -0,0 +1,4 @@
+---
+title: Lake
+weight: 6
+---
7 changes: 3 additions & 4 deletions docs/lake/api.md
@@ -1,10 +1,9 @@
 ---
-sidebar_position: 1
-sidebar_label: API
+weight: 1
+title: API
+heading: Zed lake API
 ---
 
-# Zed lake API
-
 ## _Status_
 
 > This is a brief sketch of the functionality exposed in the