diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 49fbafc368aec..e361aef8af46d 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -1,20 +1,21 @@ --- title: Sending Logs to Loki via Kafka using Alloy menuTitle: Sending Logs to Loki via Kafka using Alloy -description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki. +description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki. weight: 250 killercoda: title: Sending Logs to Loki via Kafka using Alloy - description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki. + description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki. backend: imageid: ubuntu --- - + -# Sending Logs to Loki via Kafka using Alloy +# Sending Logs to Loki via Kafka using Alloy Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods: + - [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. - [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. @@ -38,9 +39,10 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos {{< /admonition >}} - ## Scenario + In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: + - **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. 
- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. - **Simulation Service:** Generates sensor data for each plant. @@ -50,7 +52,8 @@ In this scenario, we have a microservices application called the Carnivorous Gre - **Database:** A database that stores user and plant data. Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics: -- `loki`: This sends a structured log formatted message (json). + +- `loki`: This sends a structured log message formatted as JSON. - `otlp`: This sends a serialized OpenTelemetry log message. You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka. @@ -69,7 +72,8 @@ In this step, we will set up our environment by cloning the repository that cont git clone -b microservice-kafka https://github.com/grafana/loki-fundamentals.git ``` -1. Next we will spin up our observability stack using Docker Compose: + +1. 
Next, we will spin up our observability stack using Docker Compose: ```bash @@ -80,7 +84,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< docs/ignore >}} - ```bash + ```bash docker-compose -f loki-fundamentals/docker-compose.yml up -d ``` @@ -88,6 +92,7 @@ In this step, we will set up our environment by cloning the repository that cont This will spin up the following services: + ```console ✔ Container loki-fundamentals-grafana-1 Started ✔ Container loki-fundamentals-loki-1 Started @@ -97,6 +102,7 @@ In this step, we will set up our environment by cloning the repository that cont ``` We will access two UI interfaces: + - Alloy at [http://localhost:12345](http://localhost:12345) - Grafana at [http://localhost:3000](http://localhost:3000) @@ -107,12 +113,13 @@ We will be access two UI interfaces: In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration. -### Open your Code Editor and Locate the `config.alloy` file +### Open your code editor and locate the `config.alloy` file Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor: {{< docs/ignore >}} **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + 1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. 1. Locate the `config.alloy` file in the `loki-fundamentals` directory (top-level directory). 1. Click on the `config.alloy` file to open it in the code editor.
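To make the `loki` topic concrete: the services publish JSON-formatted messages, which `loki.source.kafka` will consume as plain log lines. The sketch below (Python, standard library only) builds a hypothetical message of that shape; the field names are illustrative assumptions, not the demo services' exact schema.

```python
import json

# Hypothetical example of the kind of JSON-formatted log message a service
# could publish to the `loki` Kafka topic. Field names are illustrative
# assumptions, not the exact schema used by the demo services.
log_message = {
    "timestamp": "2024-08-02T13:10:25Z",
    "service": "user_service",
    "level": "INFO",
    "message": "New user created",
}

# Kafka messages are byte payloads, so the JSON is serialized before sending.
payload = json.dumps(log_message).encode("utf-8")
print(payload.decode("utf-8"))
```

Any Kafka producer library could then send `payload` to the `loki` topic on `kafka:9092`.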
@@ -126,13 +133,14 @@ Grafana Alloy requires a configuration file to define the components and their r You will copy all three of the following configuration snippets into the `config.alloy` file. -### Source logs from kafka +### Source logs from Kafka First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`. Add the following configuration to the `config.alloy` file: + ```alloy loki.source.kafka "raw" { brokers = ["kafka:9092"] @@ -145,6 +153,7 @@ loki.source.kafka "raw" { ``` In this configuration: + - `brokers`: The Kafka brokers to connect to. - `topics`: The Kafka topics to consume. In this case, we are consuming the `loki` topic. - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. @@ -159,6 +168,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. Now add the following configuration to the `config.alloy` file: + ```alloy loki.relabel "kafka" { forward_to = [loki.write.http.receiver] @@ -170,6 +180,7 @@ loki.relabel "kafka" { ``` In this configuration: + - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Though in this case, we are directly calling the rule from the `loki.source.kafka` component. 
So `forward_to` is being used as a placeholder as it is required by the `loki.relabel` component. - `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`. @@ -180,6 +191,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. And finally, add the following configuration to the `config.alloy` file: + ```alloy loki.write "http" { endpoint { @@ -189,6 +201,7 @@ loki.write "http" { ``` In this configuration: + - `endpoint`: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint. For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/). @@ -209,7 +222,6 @@ The new configuration will be loaded. You can verify this by checking the Alloy If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: - ```bash cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy @@ -225,16 +237,16 @@ curl -X POST http://localhost:12345/-/reload Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. -### Open your Code Editor and Locate the `config.alloy` file +### Open your code editor and locate the `config.alloy` file Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. 
Essentially, we are configuring two pipelines within the same Alloy configuration file. - ### Source OpenTelemetry logs from Kafka -First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. +First, we will configure the OpenTelemetry Kafka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. Now add the following configuration to the `config.alloy` file: + ```alloy otelcol.receiver.kafka "default" { brokers = ["kafka:9092"] @@ -249,6 +261,7 @@ otelcol.receiver.kafka "default" { ``` In this configuration: + - `brokers`: The Kafka brokers to connect to. - `protocol_version`: The Kafka protocol version to use. - `topic`: The Kafka topic to consume. In this case, we are consuming the `otlp` topic. @@ -257,12 +270,12 @@ In this configuration: For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/). - ### Batch OpenTelemetry logs before sending Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. Now add the following configuration to the `config.alloy` file: + ```alloy otelcol.processor.batch "default" { output { @@ -272,6 +285,7 @@ otelcol.processor.batch "default" { ``` In this configuration: + - `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`. 
For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). @@ -281,6 +295,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. Finally, add the following configuration to the `config.alloy` file: + ```alloy otelcol.exporter.otlphttp "default" { client { @@ -290,6 +305,7 @@ otelcol.exporter.otlphttp "default" { ``` In this configuration: + - `client`: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint. For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/). @@ -341,7 +357,6 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- ``` - {{< docs/ignore >}} @@ -353,6 +368,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- {{< /docs/ignore >}} This will start the following services: + ```console ✔ Container greenhouse-db-1 Started ✔ Container greenhouse-websocket_service-1 Started @@ -372,7 +388,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). - @@ -383,7 +398,8 @@ In this example, we configured Alloy to ingest logs via Kafka. 
We configured All {{< docs/ignore >}} -### Back to Docs +### Back to docs + Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) {{< /docs/ignore >}} @@ -391,6 +407,7 @@ Head back to where you started from to continue with the Loki documentation: [Lo ## Further reading For more information on Grafana Alloy, refer to the following resources: + - [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/) - [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/) @@ -400,5 +417,5 @@ If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, y The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. - - \ No newline at end of file + + \ No newline at end of file diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 6b98b67ede7d7..ee1cf8aa66690 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -9,12 +9,13 @@ killercoda: backend: imageid: ubuntu --- - + # Sending OpenTelemetry logs to Loki using Alloy Alloy natively supports receiving logs in the OpenTelemetry format. This allows you to send logs from applications instrumented with OpenTelemetry to Alloy, which can then be sent to Loki for storage and visualization in Grafana. In this example, we will make use of 3 Alloy components to achieve this: + - **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. - **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. 
Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. - **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. @@ -42,7 +43,7 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos ## Scenario -In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: +In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: - **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. - **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. @@ -68,7 +69,7 @@ In this step, we will set up our environment by cloning the repository that cont git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git ``` -1. Next we will spin up our observability stack using Docker Compose: +1. 
Next, we will spin up our observability stack using Docker Compose: ```bash @@ -79,7 +80,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< docs/ignore >}} - ```bash + ```bash docker-compose -f loki-fundamentals/docker-compose.yml up -d ``` @@ -87,6 +88,7 @@ In this step, we will set up our environment by cloning the repository that cont This will spin up the following services: + ```console ✔ Container loki-fundamentals-grafana-1 Started ✔ Container loki-fundamentals-loki-1 Started @@ -94,6 +96,7 @@ In this step, we will set up our environment by cloning the repository that cont ``` We will access two UI interfaces: + - Alloy at [http://localhost:12345](http://localhost:12345) - Grafana at [http://localhost:3000](http://localhost:3000) @@ -104,12 +107,13 @@ We will be access two UI interfaces: To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. -### Open your Code Editor and Locate the `config.alloy` file +### Open your code editor and locate the `config.alloy` file Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor: {{< docs/ignore >}} **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + 1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. 1. Locate the `config.alloy` file in the top-level directory, `loki-fundamentals`. 1. Click on the `config.alloy` file to open it in the code editor. @@ -128,6 +132,7 @@ You will copy all three of the following configuration snippets into the `config First, we will configure the OpenTelemetry receiver.
`otelcol.receiver.otlp` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. Now add the following configuration to the `config.alloy` file: + ```alloy otelcol.receiver.otlp "default" { http {} @@ -140,18 +145,19 @@ Now add the following configuration to the `config.alloy` file: ``` In this configuration: + - `http`: The HTTP configuration for the receiver. This configuration is used to receive logs in the OpenTelemetry format via HTTP. - `grpc`: The gRPC configuration for the receiver. This configuration is used to receive logs in the OpenTelemetry format via gRPC. - `output`: The list of processors to forward the logs to. In this case, we are forwarding the logs to the `otelcol.processor.batch.default.input`. For more information on the `otelcol.receiver.otlp` configuration, see the [OpenTelemetry Receiver OTLP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.otlp/). - -### Create batches of logs using a OpenTelemetry Processor +### Create batches of logs using a OpenTelemetry processor Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other `otelcol` components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. Now add the following configuration to the `config.alloy` file: + ```alloy otelcol.processor.batch "default" { output { @@ -161,15 +167,17 @@ otelcol.processor.batch "default" { ``` In this configuration: + - `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`. 
For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). -### Export logs to Loki using a OpenTelemetry Exporter +### Export logs to Loki using a OpenTelemetry exporter Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. Now add the following configuration to the `config.alloy` file: + ```alloy otelcol.exporter.otlphttp "default" { client { @@ -227,7 +235,6 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- ``` - {{< docs/ignore >}} @@ -239,6 +246,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- {{< /docs/ignore >}} This will start the following services: + ```bash ✔ Container greenhouse-db-1 Started ✔ Container greenhouse-websocket_service-1 Started @@ -258,7 +266,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). 
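Besides the Explore view, you can confirm ingestion against Loki's HTTP query API (`/loki/api/v1/query_range`). The sketch below only builds the request URL; the host/port and the label selector are assumptions for this demo stack, so adjust them to whatever labels you actually see in Grafana.

```python
from urllib.parse import urlencode

# Illustrative sketch: build a request URL for Loki's query_range HTTP API
# to confirm logs arrived. The host/port and the label selector below are
# assumptions for this demo environment; adjust them to your own labels.
base_url = "http://localhost:3100/loki/api/v1/query_range"
params = {"query": '{service_name="user_service"}', "limit": "10"}
url = base_url + "?" + urlencode(params)
print(url)
```

Once the stack is running, the printed URL can be fetched with `curl` to see the matching log lines as JSON.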
- @@ -269,15 +276,16 @@ In this example, we configured Alloy to ingest OpenTelemetry logs and send them {{< docs/ignore >}} -### Back to Docs +### Back to docs + Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) {{< /docs/ignore >}} - ## Further reading For more information on Grafana Alloy, refer to the following resources: + - [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/) - [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/) @@ -287,5 +295,5 @@ If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, y The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. - + \ No newline at end of file diff --git a/docs/sources/send-data/otel/otel-collector-getting-started.md b/docs/sources/send-data/otel/otel-collector-getting-started.md index 08b374656d969..2bead1f1372d4 100644 --- a/docs/sources/send-data/otel/otel-collector-getting-started.md +++ b/docs/sources/send-data/otel/otel-collector-getting-started.md @@ -14,12 +14,14 @@ killercoda: imageid: ubuntu --- + # Getting started with the OpenTelemetry Collector and Loki tutorial The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process, and export telemetry data. With the introduction of the OTLP endpoint in Loki, you can now send logs from applications instrumented with OpenTelemetry to Loki using the OpenTelemetry Collector in native OTLP format. In this example, we will teach you how to configure the OpenTelemetry Collector to receive logs in the OpenTelemetry format and send them to Loki using the OTLP HTTP protocol. 
This will involve configuring the following components in the OpenTelemetry Collector: + - **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. - **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. - **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint. @@ -73,7 +75,7 @@ In this step, we will set up our environment by cloning the repository that cont git clone -b microservice-otel-collector https://github.com/grafana/loki-fundamentals.git ``` -1. Next we will spin up our observability stack using Docker Compose: +1. Next we will spin up our observability stack using Docker Compose: ```bash @@ -84,7 +86,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< docs/ignore >}} - ```bash + ```bash docker-compose -f loki-fundamentals/docker-compose.yml up -d ``` @@ -92,6 +94,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< /docs/ignore >}} To check the status of services we can run the following command: + ```bash docker ps -a ``` @@ -101,7 +104,6 @@ In this step, we will set up our environment by cloning the repository that cont {{< /admonition >}} - After we've finished configuring the OpenTelemetry Collector and sending logs to Loki, we will be able to view the logs in Grafana. To check if Grafana is up and running, navigate to the following URL: [http://localhost:3000](http://localhost:3000) @@ -117,6 +119,7 @@ The configuration file is written using **YAML** configuration syntax. 
To start, {{< docs/ignore >}} **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + 1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. 2. Locate the `otel-config.yaml` file in the top level directory, `loki-fundamentals`. 3. Click on the `otel-config.yaml` file to open it in the code editor. @@ -135,6 +138,7 @@ You will copy all three of the following configuration snippets into the `otel-c First, we will configure the OpenTelemetry receiver. `otlp:` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. Now add the following configuration to the `otel-config.yaml` file: + ```yaml # Receivers receivers: @@ -147,6 +151,7 @@ receivers: ``` In this configuration: + - `receivers`: The list of receivers to receive telemetry data. In this case, we are using the `otlp` receiver. - `otlp`: The OpenTelemetry receiver that accepts logs in the OpenTelemetry format. - `protocols`: The list of protocols that the receiver supports. In this case, we are using `grpc` and `http`. @@ -156,10 +161,10 @@ In this configuration: For more information on the `otlp` receiver configuration, see the [OpenTelemetry Receiver OTLP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md). - -### Create batches of logs using a OpenTelemetry Processor +### Create batches of logs using a OpenTelemetry processor Next add the following configuration to the `otel-config.yaml` file: + ```yaml # Processors processors: @@ -167,14 +172,16 @@ processors: ``` In this configuration: + - `processors`: The list of processors to process telemetry data. In this case, we are using the `batch` processor. - `batch`: The OpenTelemetry processor that accepts telemetry data from other `otelcol` components and places them into batches. 
For more information on the `batch` processor configuration, see the [OpenTelemetry Processor Batch documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md). -### Export logs to Loki using a OpenTelemetry Exporter +### Export logs to Loki using an OpenTelemetry exporter We will use the `otlphttp/logs` exporter to send the logs to the Loki native OTLP endpoint. Add the following configuration to the `otel-config.yaml` file: + ```yaml # Exporters exporters: otlphttp/logs: endpoint: "http://loki:3100/otlp" tls: insecure: true ``` + In this configuration: + - `exporters`: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs` exporter. - `otlphttp/logs`: The OpenTelemetry exporter that accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. - `endpoint`: The URL to send the telemetry data to. In this case, we are sending the logs to the Loki native OTLP endpoint at `http://loki:3100/otlp`. @@ -192,9 +201,10 @@ In this configuration: For more information on the `otlphttp/logs` exporter configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md). -### Creating the Pipeline +### Creating the pipeline Now that we have configured the receiver, processor, and exporter, we need to create a pipeline to connect these components. Add the following configuration to the `otel-config.yaml` file: + ```yaml # Pipelines service: pipelines: logs: receivers: [otlp] processors: [batch] exporters: [otlphttp/logs] ``` In this configuration: + - `pipelines`: The list of pipelines to connect the receiver, processor, and exporter. In this case, we are using the `logs` pipeline, but there are also pipelines for metrics, traces, and continuous profiling. - `receivers`: The list of receivers to receive telemetry data.
In this case, we are using the `otlp` receiver component we created earlier. - `processors`: The list of processors to process telemetry data. In this case, we are using the `batch` processor component we created earlier. - `exporters`: The list of exporters to export telemetry data. In this case, we are using the `otlphttp/logs` exporter component we created earlier. - -### Load the Configuration +### Load the configuration Before you load the configuration into the OpenTelemetry Collector, compare your configuration with the completed configuration below: @@ -245,6 +255,7 @@ service: processors: [batch] exporters: [otlphttp/logs] ``` + Next, we need to apply the configuration to the OpenTelemetry Collector. To do this, we will restart the OpenTelemetry Collector container: ```bash @@ -259,6 +270,7 @@ docker logs loki-fundamentals-otel-collector-1 ``` Within the logs, you should see the following message: + ```console 2024-08-02T13:10:25.136Z info service@v0.106.1/service.go:225 Everything is ready. Begin running and processing data. ``` @@ -299,7 +311,6 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- ``` - {{< docs/ignore >}} @@ -311,6 +322,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- {{< /docs/ignore >}} This will start the following services: + ```console ✔ Container greenhouse-db-1 Started ✔ Container greenhouse-websocket_service-1 Started @@ -330,7 +342,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally, to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore).
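As a mental model for what the `batch` processor in the pipeline above is doing, here is a deliberately simplified sketch of size-based batching. It is illustrative only, not the collector's actual implementation, which also flushes batches on a timeout.

```python
# Toy model of a batch processor: collect individual records and flush them
# in groups, reducing the number of outgoing requests. Illustrative only --
# the real OpenTelemetry batch processor also supports time-based flushing.
def batch(records, max_batch_size):
    batches, current = [], []
    for record in records:
        current.append(record)
        if len(current) == max_batch_size:
            batches.append(current)
            current = []
    if current:  # flush any leftover records as a final, smaller batch
        batches.append(current)
    return batches

logs = [f"log-{i}" for i in range(7)]
print(batch(logs, 3))  # two full batches of 3, then a final batch of 1
```

Fewer, larger requests compress better and cost less network overhead, which is exactly why the processor sits between the receiver and the exporter.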
- @@ -341,12 +352,12 @@ In this example, we configured the OpenTelemetry Collector to receive logs from {{< docs/ignore >}} -### Back to Docs +### Back to docs + Head back to where you started from to continue with the [Loki documentation](https://grafana.com/docs/loki/latest/send-data/otel). {{< /docs/ignore >}} - ## Further reading For more information on the OpenTelemetry Collector and the native OTLP endpoint of Loki, refer to the following resources: @@ -355,12 +366,11 @@ For more information on the OpenTelemetry Collector and the native OTLP endpoint - [How is native OTLP endpoint different from Loki Exporter](https://grafana.com/docs/loki//send-data/otel/native_otlp_vs_loki_exporter) - [OpenTelemetry Collector Configuration](https://opentelemetry.io/docs/collector/configuration/) - ## Complete metrics, logs, traces, and profiling example If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). `Intro-to-mltp` provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. - + \ No newline at end of file
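For completeness, this is roughly what an OTLP/HTTP JSON log payload looks like on the wire when an application posts to a collector's logs endpoint (typically `http://localhost:4318/v1/logs`). The values are placeholders, and in practice you would use an OpenTelemetry SDK rather than hand-building this structure.

```python
import json

# Sketch of a minimal OTLP/HTTP JSON log payload. Attribute values and the
# timestamp are placeholders; real applications should rely on an
# OpenTelemetry SDK to produce this structure.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "user_service"}}
        ]},
        "scopeLogs": [{
            "logRecords": [{
                "timeUnixNano": "1722604225000000000",
                "severityText": "INFO",
                "body": {"stringValue": "New user created"},
            }]
        }],
    }]
}
body = json.dumps(payload)
print(body)
```

Posting `body` with a `Content-Type: application/json` header to the collector's `/v1/logs` endpoint would hand the record to the `otlp` receiver configured earlier.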