Docs: More cleanup (#720)
* added metadata and renamed

* Update scraping-service.md

* Update operation-guide.md

* edits

* Update _index.md

* Update loki-config.md

* Update _index.md

* Update prometheus-config.md

* Create create-config-file.md

* Update _index.md

* Update architecture.md

* Update getting-started.md

* Update README.md

* Update README.md

* Update _index.md
oddlittlebird authored Jul 7, 2021
1 parent ef865d3 commit 2ed8fd8
Showing 12 changed files with 290 additions and 282 deletions.
29 changes: 5 additions & 24 deletions docs/README.md
@@ -1,3 +1,7 @@
+++
draft = "True"
+++

<p align="center"><img src="assets/logo_and_name.png" alt="Grafana Agent logo"></p>

Grafana Agent is a telemetry collector for sending metrics, logs,
@@ -14,27 +18,4 @@ with:
- Grafana Agent allows for deploying multiple instances of the Agent in a
cluster and only scraping metrics from targets that are running on the same host.
This allows distributing memory requirements across the cluster
rather than overloading a single node.

## Table of Contents

1. [Overview](./overview.md)
1. [Metrics](./overview.md#metrics)
2. [Logs](./overview.md#logs)
3. [Comparison to alternatives](./overview.md#comparison-to-alternatives)
4. [Next Steps](./overview.md#next-steps)
2. [Getting Started](./getting-started/_index.md)
1. [Docker-Compose Example](./getting-started/_index.md#docker-compose-example)
2. [k3d Example](./getting-started/_index.md#k3d-example)
3. [Installing](./getting-started/_index.md#installing)
4. [Creating a Config File](./getting-started/_index.md#creating-a-config-file)
1. [Integrations](./getting-started/_index.md#integrations)
2. [Prometheus-like Config/Migrating from Prometheus](./getting-started/_index.md#prometheus-like-configmigrating-from-prometheus)
3. [Loki Config/Migrating from Promtail](./getting-started/_index.md#loki-configmigrating-from-promtail)
5. [Running](./getting-started/_index.md#running)
3. [Configure Grafana Agent](./configuration/_index.md)
4. [Upgrade Guide](./upgrade-guide.md)
5. [API](./api.md)
6. [Scraping Service Mode](./scraping-service.md)
7. [Operation Guide](./operation-guide.md)
8. [Windows Guide](./getting-started/install-agent-on-windows.md)
rather than overloading a single node.
39 changes: 24 additions & 15 deletions docs/overview.md → docs/_index.md
@@ -1,4 +1,9 @@
# Overview
+++
title = "Grafana Agent"
weight = 1
+++

# Grafana Agent

Grafana Agent is a telemetry collector for sending metrics, logs,
and trace data to the opinionated Grafana observability stack. It works best
@@ -13,6 +18,10 @@ code from the official platforms. It uses Prometheus for metrics collection,
Grafana Loki for log collection, and [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) for trace
collection.

Grafana Agent uses less memory on average than Prometheus because it does less, focusing only on `remote_write`-related functionality.

Grafana Agent allows for deploying multiple instances of the Agent in a cluster and only scraping metrics from targets that are running on the same host. This allows distributing memory requirements across the cluster rather than overloading a single node.

## Metrics

Unlike Prometheus, the Grafana Agent is _just_ targeting `remote_write`,
@@ -25,13 +34,13 @@ its own mini Prometheus agent with their own `scrape_configs` section and
`remote_write` rules. More than one instance is useful when you want to have
completely separated configs that write to two different locations without
needing to worry about advanced metric relabeling rules. Multiple instances also
come into play for the [Scraping Service Mode](./scraping-service.md).
come into play for the [Scraping Service Mode]({{< relref "./scraping-service.md" >}}).
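As a rough sketch of what two such instances could look like in `agent.yaml` (the instance names, scrape targets, and `remote_write` URLs below are placeholders rather than values taken from this repository):

```yaml
prometheus:
  wal_directory: /tmp/agent/wal
  configs:
    # Each entry under `configs` is its own instance, with an
    # independent set of scrape_configs and remote_write rules.
    - name: primary
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']
      remote_write:
        - url: https://prometheus-a.example.com/api/prom/push
    - name: secondary
      scrape_configs:
        - job_name: app
          static_configs:
            - targets: ['localhost:8080']
      remote_write:
        - url: https://prometheus-b.example.com/api/prom/push
```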

The Grafana Agent can be deployed in three modes:

- Prometheus `remote_write` drop-in
- [Host Filtering mode](#host-filtering)
- [Scraping Service Mode](./scraping-service.md)
- [Scraping Service Mode]({{< relref "./scraping-service.md" >}})

The default deployment mode of the Grafana Agent is the _drop-in_
replacement for Prometheus `remote_write`. The Agent will act similarly to a
@@ -56,12 +65,11 @@ clusters a subset of agents. It acts as the in-between of the drop-in mode
(which does no automatic sharding) and `host_filter` mode (which forces sharding
by node). The Scraping Service Mode clusters a set of agents with a set of
shared configs and distributes the scrape load automatically between them. For
more information, please read the dedicated
[Scraping Service Mode](./scraping-service.md) documentation.
more information, refer to the [scraping service documentation]({{< relref "./scraping-service.md" >}}).

### Host Filtering
### Host filtering

Host Filtering configures Agents to scrape targets that are running on the same
Host filtering configures Agents to scrape targets that are running on the same
machine as the Grafana Agent process. It does the following:

1. Gets the hostname of the agent by the `HOSTNAME` environment variable or
@@ -73,7 +81,7 @@ If the filter passes, the target is allowed to be scraped. Otherwise, the target
will be silently ignored and not scraped.

For detailed information on the host filtering mode, refer to the [operation
guide](./operation-guide.md#host-filtering)
guide]({{< relref "./operation-guide.md#host-filtering" >}}).
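Below is a minimal sketch of turning host filtering on for one instance, assuming the per-instance `host_filter` flag described in the configuration reference; the instance name, discovery config, and URL are placeholders:

```yaml
prometheus:
  configs:
    - name: host-filtered
      # Only keep targets whose discovered host matches this machine's
      # hostname (taken from $HOSTNAME or the kernel).
      host_filter: true
      scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod
      remote_write:
        - url: https://prometheus.example.com/api/prom/push
```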

## Logs

@@ -87,20 +95,21 @@ developer team.

Grafana Agent supports collecting traces and sending them to Tempo using its
`tempo` subsystem. This is done by utilizing the upstream [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
The agent is capable of ingesting OpenTelemetry, OpenCensus, Jaeger, Zipkin or Kafka spans.
See documentation on how to configure [receivers](./configuration/tempo-config.md).
The Agent can ingest OpenTelemetry, OpenCensus, Jaeger, Zipkin, or Kafka spans.
See documentation on how to configure [receivers]({{< relref "./configuration/tempo-config.md" >}}).
The agent can export to any OpenTelemetry gRPC-compatible system.
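As an illustrative sketch only (the block layout follows the `tempo_config` reference, receiver names follow upstream OpenTelemetry Collector conventions, and the endpoint is a placeholder), a `tempo` block might look like:

```yaml
tempo:
  configs:
    - name: default
      # Receivers are configured the same way as upstream
      # OpenTelemetry Collector receivers.
      receivers:
        otlp:
          protocols:
            grpc:
        jaeger:
          protocols:
            thrift_http:
      remote_write:
        - endpoint: tempo.example.com:443
```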

## Comparison to Alternatives
## Comparison to alternatives

Grafana Agent is optimized for [Grafana Cloud](https://grafana.com/products/cloud/),
but can also be used with an on-prem `remote_write`-compatible Prometheus API
and an on-prem Loki. Unlike alternatives, Grafana Agent extends the
official code with extra functionality. This allows the Agent to give an
experience closest to its official counterparts compared to alternatives which
may try to reimplement everything from scratch.
might try to re-implement everything from scratch.

### Why not just use Telegraf?

Telegraf is a fantastic project and was actually considered as an alternative
to building our own agent.
It could work, but ultimately it was not chosen due to lacking service discovery
@@ -116,9 +125,9 @@ specifically designed to work seamlessly with Grafana Cloud and other
`remote_write` compatible Prometheus endpoints as well as Loki for logs
and Tempo for traces, all-in-one.

## Next Steps
## Next steps

For more information on installing and running the agent, see
[Getting started](./getting-started/_index.md) or
[Configuration Reference](./configuration/_index.md) for a detailed reference
[Getting started]({{< relref "./getting-started/_index.md" >}}) or
[Configuration Reference]({{< relref "./configuration/_index.md" >}}) for a detailed reference
on the configuration file.
7 changes: 6 additions & 1 deletion docs/api.md
@@ -1,4 +1,9 @@
# API
+++
title = "Grafana Agent API"
weight = 400
+++

# Grafana Agent API

The API is divided into several parts:

20 changes: 10 additions & 10 deletions docs/configuration/_index.md
@@ -62,7 +62,7 @@ While `/-/reload` is enabled on the primary HTTP server, it is not recommended
to use it, since changing the HTTP server configuration will cause it to
restart.

## File Format
## File format

To specify which configuration file to load, pass the `-config.file` flag at
the command line. The file is written in the [YAML
@@ -72,16 +72,16 @@ value is set to the specified default.

Generic placeholders are defined as follows:

* `<boolean>`: a boolean that can take the values `true` or `false`
* `<int>`: any integer matching the regular expression `[1-9]+[0-9]*`
* `<duration>`: a duration matching the regular expression `[0-9]+(ns|us|µs|ms|[smh])`
* `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
* `<labelvalue>`: a string of unicode characters
* `<filename>`: a valid path relative to current working directory or an
- `<boolean>`: a boolean that can take the values `true` or `false`
- `<int>`: any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>`: a duration matching the regular expression `[0-9]+(ns|us|µs|ms|[smh])`
- `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>`: a string of unicode characters
- `<filename>`: a valid path relative to current working directory or an
absolute path.
* `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
* `<string>`: a regular string
* `<secret>`: a regular string that is a secret, such as a password
- `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
- `<string>`: a regular string
- `<secret>`: a regular string that is a secret, such as a password
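
For instance, a fragment that fills in a few of these placeholders might look like the following (a hypothetical fragment for illustration, not the full schema that follows):

```yaml
server:
  # <string>
  log_level: info
  # <int>
  http_listen_port: 12345

prometheus:
  # <filename>
  wal_directory: /tmp/agent/wal
  global:
    # <duration>
    scrape_interval: 1m
```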

Supported contents and default values of `agent.yaml`:

2 changes: 1 addition & 1 deletion docs/configuration/loki-config.md
@@ -57,4 +57,4 @@ scrape_configs:
- [<promtail.scrape_config>]
[target_config: <promtail.target_config>]
```
```
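
As a rough illustration of how those blocks combine in practice (the positions directory, client URL, and log path below are placeholders, and this is a sketch rather than the full reference):

```yaml
loki:
  positions_directory: /tmp/agent/positions
  configs:
    - name: default
      clients:
        - url: https://loki.example.com/loki/api/v1/push
      scrape_configs:
        - job_name: system
          static_configs:
            - targets: [localhost]
              labels:
                job: varlogs
                __path__: /var/log/*.log
```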
14 changes: 7 additions & 7 deletions docs/configuration/prometheus-config.md
@@ -1668,21 +1668,21 @@ anchored on both ends. To un-anchor the regex, use `.*<regex>.*`.

`<relabel_action>` determines the relabeling action to take:

* `replace`: Match regex against the concatenated source_labels. Then, set
- `replace`: Match regex against the concatenated source_labels. Then, set
target_label to replacement, with match group references (${1}, ${2}, ...) in
replacement substituted by their value. If regex does not match, no
replacement takes place.
* `keep`: Drop targets for which regex does not match the concatenated
- `keep`: Drop targets for which regex does not match the concatenated
source_labels.
* `drop`: Drop targets for which regex matches the concatenated source_labels.
* `hashmod`: Set target_label to the modulus of a hash of the concatenated
- `drop`: Drop targets for which regex matches the concatenated source_labels.
- `hashmod`: Set target_label to the modulus of a hash of the concatenated
source_labels.
* `labelmap`: Match regex against all label names. Then copy the values of the
- `labelmap`: Match regex against all label names. Then copy the values of the
matching labels to label names given by replacement with match group
references (${1}, ${2}, ...) in replacement substituted by their value.
* `labeldrop`: Match regex against all label names. Any label that matches will
- `labeldrop`: Match regex against all label names. Any label that matches will
be removed from the set of labels.
* `labelkeep`: Match regex against all label names. Any label that does not
- `labelkeep`: Match regex against all label names. Any label that does not
match will be removed from the set of labels.

Care must be taken with `labeldrop` and `labelkeep` to ensure that metrics are still uniquely labeled once the labels are removed.
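
To make the actions concrete, here is a small, hypothetical `relabel_configs` fragment; the job name, labels, and regexes are illustrative only:

```yaml
scrape_configs:
  - job_name: example
    static_configs:
      - targets: ['localhost:9100']
    relabel_configs:
      # keep: drop any target whose "env" label is not "prod".
      - source_labels: [env]
        regex: prod
        action: keep
      # replace: copy the host part of __address__ into a "node" label.
      - source_labels: [__address__]
        regex: '([^:]+):\d+'
        target_label: node
        replacement: '${1}'
        action: replace
    metric_relabel_configs:
      # labeldrop: remove temporary labels before samples are written.
      - regex: 'tmp_.*'
        action: labeldrop
```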