
Send observability signals to Kafka #290

Closed
arnitolog opened this issue Dec 15, 2023 · 9 comments · Fixed by #717
Labels: enhancement (New feature or request), frozen-due-to-age

Comments

@arnitolog

Request

Hello,
Grafana Tempo can already consume traces directly from Kafka, and Grafana Mimir recently merged changes to do the same (grafana/mimir#6929).
It would be good to have the ability to send all kinds of observability data from the agent to Kafka.
Is this on the roadmap?

Use case

Kafka can be used as an intermediate buffer for all observability signals, so that metrics scraping, trace, log, and profile collection, and the actual data ingestion can each proceed at their own pace. It would also prevent data loss when ingesters are overloaded: instead of being discarded, data would only be delayed.

@arnitolog added the enhancement label Dec 15, 2023
@ptodev (Contributor) commented Dec 15, 2023

Hi, thank you for your suggestion! It would be great to leverage Kafka's resiliency, but I'm not sure how we could do that. The Agent is normally the one which sends the signals to the end location. Would each Agent component send data to Kafka, only for another component on the same Agent process to pick it up? It seems like a lot of unnecessary networking overhead.

Another issue is that it'd mean the user of the Agent has to be able to run Kafka. The Agent is normally a self-sufficient executable, so this would be a big break from convention for us.

I think a more realistic solution would be something like #323. This article explains it in a bit more detail.

@arnitolog (Author) commented

@ptodev, once we have all the data in Kafka, we can either use native consumers (Tempo/Mimir) or have the Agent consume it and push it to the backend system.
The main problem right now is that if we push data directly from the Agent to Mimir/Loki/Tempo and the ingesters are overloaded or unavailable, the data will be discarded and lost. With Kafka in the middle, we get a buffer where the data can live for a couple of hours or days, so it will not be lost and will eventually be ingested.
In short, it offers stronger delivery guarantees for observability data.

@arnitolog (Author) commented

And again, this shouldn't be the only way data can be ingested. Having a Kafka exporter (Kafka as a destination) would allow building systems that already have Kafka in place, while anyone who doesn't have it, or doesn't want it, can still use the existing approach and push directly to the backend system.

@hainenber (Contributor) commented

It seems to me you're looking for an equivalent of Filebeat's Kafka output or Vector's Kafka sink?

Having said that, the PR you've linked is for Kafka message consumption, which the agent's loki.source.kafka can already do :D
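For illustration, a minimal flow-mode sketch of that consumption path; the broker address, topic name, and Loki URL below are placeholders:

```river
// Consume log lines from a Kafka topic and forward them to loki.write.
loki.source.kafka "example" {
  brokers    = ["kafka-broker:9092"]  // placeholder broker address
  topics     = ["loki_logs"]          // placeholder topic name
  labels     = {component = "loki.source.kafka"}
  forward_to = [loki.write.default.receiver]
}

// Push the consumed entries to a Loki instance.
loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"  // placeholder Loki push endpoint
  }
}
```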

@arnitolog (Author) commented

@hainenber, I'm looking for something like this: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/kafkaexporter. The idea is to use the Agent to collect all kinds of observability data and push it to Kafka, where it can then be consumed by different systems (another Agent, Mimir, Tempo, or an external system).

@ptodev (Contributor) commented Dec 18, 2023

I see, thank you for clarifying. It would certainly be possible to port the Collector's Kafka exporter. I suspect it won't be a lot of effort; most of the time would be spent documenting the features. However, I do not know to what extent databases such as Mimir, Loki, and Tempo can ingest signals from Kafka (especially on Grafana Cloud).

Also, if we reuse the OTel Collector's component, we wouldn't be able to send "profile" signals, because they are not yet part of the OTel standard.
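For illustration, if such a port landed, say as a hypothetical `otelcol.exporter.kafka` component mirroring the upstream exporter, a trace pipeline might look roughly like this. The broker address and topic are placeholders, and the exporter's argument names are assumptions based on the Collector's config:

```river
// Receive OTLP traces from instrumented applications.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    traces = [otelcol.exporter.kafka.default.input]
  }
}

// Hypothetical ported component: publish traces to a Kafka topic,
// where Tempo, Mimir, or another Agent could consume them later.
otelcol.exporter.kafka "default" {
  protocol_version = "2.0.0"               // assumed, mirrors the upstream exporter's option
  brokers          = ["kafka-broker:9092"] // placeholder broker address
  topic            = "otlp_spans"          // placeholder topic name
}
```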

@arnitolog (Author) commented

@ptodev I think "profiles" will eventually be onboarded into OTel as well.

@ptodev (Contributor) commented Jan 5, 2024

Realistically, I don't think the core development team can work on adding an otelcol.exporter.kafka component in the near future. However, we would welcome a community contribution for it.

@github-actions bot commented Feb 6, 2024

This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it.
If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue.
The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity.
Thank you for your contributions!

@rfratto transferred this issue from grafana/agent Apr 11, 2024
@github-actions bot locked as resolved and limited conversation to collaborators Jul 8, 2024