OTLP trace receiver not working #1977
I don't know why @cfstras closed his similar issue, but my understanding is that Grafana Agent only accepts Prometheus-emitted histograms with exemplars:
Also in traces config, they write
The point is that my simple app only emits traces and does not include the trace ID in any metrics; I guess this is why the Agent doesn't receive such traces, even though they are standard OpenTelemetry traces. Indeed, they can be received (and sent anywhere, including Grafana Cloud) by an OpenTelemetry Collector instead of the Agent, as already explained in the above-mentioned @mdisibio blog post.
My main error was trying to test the gRPC endpoint with curl, which won't work. So as long as your OTLP receiver config has the gRPC port enabled, it should receive traces. It might be helpful if you post the config you use for Grafana Agent, and the exact URL you set your trace exporter to.
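By "the gRPC port enabled" I mean a receivers block roughly like this; a minimal sketch only, with the OTLP default ports and everything else (remote_write, etc.) omitted:

```yaml
# Sketch only: agent traces config with the OTLP gRPC (and HTTP) receiver ports enabled
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
```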
Hello sir, thanks for your help. I'm surprised you say the Agent works without exemplars (well, could the explanation be the insecure flag missing in the receiver part? I'm going to test it, more about it below: nope, there is no insecure flag there). I resorted to the standard OTel Collector and I'm happy with it, but I surely want to learn what I did wrong, and I really appreciate your help.

I published all my Kubernetes files (well, I started with docker-compose, then I moved to Kubernetes to double-check; it's more or less the same thing, and we can easily switch from one to the other. The only thing I obscured is my Grafana key... lol, actually I published it, but then I rewrote git history in the last commit). They are in the grafana-cloud branch of my last public repo. In particular, you ask about the OTLP receiver port (I assume you are speaking about the Grafana Agent, correct? Because with the pure OTel Collector, as I said, it's OK), so the agent config in k8s is this agent configmap.
Notice I can write
Ehm, actually there is an insecure flag in the remote write part, but there is no insecure flag active in the receiver part. Now, to save money, I destroyed the Kubernetes cluster, so I will copy the Docker agent config from your issue and proceed with docker-compose. I will add that insecure flag there (in your initial comment you didn't include it, indeed). I will comment again when done. Hopefully this is the solution; in any case, thank you very much again for your help!!! Very much appreciated!!! Edit - Please note
Ehm, I am running your exact cmd:
with the only change in your config.yaml being the addition of an insecure flag,
but docker-compose complains:
Notice also that in my working OTel Collector there is no insecure flag activated in the receiver part. AFAICS, the insecure flag only belongs in the remote write, and it is not needed there because I'm sending to Grafana Cloud over TLS. As far as I can see, the issue is still open on my side. (The compose setup I'm testing with is sketched below.)
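For context, the docker-compose setup is roughly like this; just a sketch, where the image tag, file names, and mounted paths are assumptions for illustration rather than my exact files:

```yaml
# Sketch only: run the agent with a mounted traces config and the OTLP ports exposed
version: "3"
services:
  agent:
    image: grafana/agent:latest              # assumed tag
    command: ["-config.file=/etc/agent/config.yaml"]
    volumes:
      - ./config.yaml:/etc/agent/config.yaml # the agent config discussed above
    ports:
      - "4317:4317"    # OTLP gRPC receiver
      - "4318:4318"    # OTLP HTTP receiver
      - "12345:12345"  # agent HTTP server (http_listen_port)
```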
It would be the same one I'm using in the OTel Collector, so I would use the following in the agent to send to Grafana Cloud
or I would set insecure for an internal destination, but neither is working for me (both variants are sketched below).
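Roughly, the two remote_write variants I have tried look like this; a sketch only, with placeholder values rather than my real username and API key:

```yaml
# Sketch only: Grafana Cloud destination over TLS with basic auth
remote_write:
  - endpoint: tempo-eu-west-0.grafana.net:443
    basic_auth:
      username: <instance id>
      password: <API key>

# or, for a hypothetical internal destination without TLS:
# remote_write:
#   - endpoint: tempo:4317
#     insecure: true
```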
In conclusion, AFAICS, I would still claim that the Agent remote write does not work without Prometheus metric exemplars, as opposed to the standard OpenTelemetry Collector.
Could you paste your entire agent config which doesn't work, scrubbing out credentials? The tracing parts of Grafana Agent use OpenTelemetry Collector internally, and it should work for forwarding OTLP trace data to an OTLP endpoint.
Hello @rfratto, very pleased to meet you! Assuming a docker-compose setup, the agent.yaml (or config.yaml) is this:
then I run
in one Ubuntu terminal window, and in another I run my myinstr.py Python script against localhost port 4317, like this:
I have tested also ... Full output below (until I break with Ctrl-C):
and the other window in the meantime, in the case of OTLP gRPC (not HTTP):
Thanks! Can you also post the OpenTelemetry Collector config which did work for you?
That's true, at the time we called the section
The OpenTelemetry Collector
MyToken is obtained following the instructions of the mentioned blog:
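For reference, that collector config is roughly along these lines; a sketch with placeholder values only (MyToken goes into the authorization header), not the exact file from my repo:

```yaml
# Sketch only: OTLP in, OTLP out to Grafana Cloud Tempo (values are placeholders)
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: tempo-eu-west-0.grafana.net:443
    headers:
      authorization: Basic <MyToken>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```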
The command to run Docker:
The command to run myinstr.py is the same as above. Full output of Docker until I break it:
And finally my awesome trace on Grafana Cloud! 😄 🎆
Thanks! That's interesting. I wonder if it's because you don't have the batch processor defined on the agent equivalent, though I don't understand why that would matter.

```yaml
server:
  http_listen_port: 12345
  log_level: debug

traces:
  configs:
    - name: integrations
      # Configure batch processor
      batch:
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
      remote_write:
        - endpoint: tempo-eu-west-0.grafana.net:443
          basic_auth:
            username: 201581
            password: <my API key>
```

@mapno would you expect traces to not show up in Tempo if the batch processor isn't declared?
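For completeness, the batch block can also be given explicit settings; a sketch, assuming the agent's fields mirror the collector's batch processor options:

```yaml
# Sketch: explicit batch processor settings (field names assumed to mirror the collector's batch processor)
batch:
  timeout: 5s            # flush even a partially filled batch after this long
  send_batch_size: 1000  # flush once this many spans are buffered
```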
FWIW, if you're happy with using OpenTelemetry Collector, you should by all means continue using it :) But I appreciate you taking the time to help us get to the bottom of why the Grafana Agent config wasn't working for you, so we can help other people that might run into the same issue.
My wild guess at this point is that this was a bug in an older release of the OpenTelemetry Collector that has been fixed in a release newer than the one used to produce the Agent image. But I don't know if this really makes sense. Yes, as you said, I'm happy with the Collector, but I think it would also be great to have the Agent working in this situation. Not strictly needed, though. Yes, I confirm that on my side this is resolved with the Collector.
I commented out that part and indeed it doesn't matter; sorry for adding it to the minimal repro.
Well, you recently upgraded from v0.46.0 to v0.55.0, and the former version of the collector (namely otel/opentelemetry-collector-contrib:0.46.0) does not work.

Edit: What's even more important is that otel/opentelemetry-collector-contrib:0.55.0 and otel/opentelemetry-collector-contrib:latest (i.e. the contrib collector from version 0.55 on) work OK for my use case, but otel/opentelemetry-collector:latest (i.e. even the latest version of the core collector) does not work. Looking at your go.mod, I see that you do use the contrib modules for e.g. the Jaeger receiver and exporter, but oddly not in general, hence not for the OTLP receiver/exporter. I imagine you could "fix" this by using the contrib version also for the OTLP part (or something like that...)
Wait a moment, why is the latest version of the core collector not working?
Yeah, And also
Reference to the OTel Collector core issue and changelog:
Docker tags: notice that your latest tag on Docker Hub for grafana/agent is "Last pushed 9 days ago", with agent release v0.26 pointing to collector v0.46, hence it is still not working at the time of writing, whilst the main tag is already OK.
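In docker-compose terms, that means pinning the image explicitly rather than relying on latest; a sketch (the service name is an assumption):

```yaml
services:
  agent:
    # latest (agent v0.26, collector v0.46) is still affected; main already carries the newer collector
    image: grafana/agent:main
```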
Hello, I also faced the same issue here, and it looks like it's fixed on main.
I agree with @daper: better to consider the issue "open" until it's fixed on latest.

BTW, there is also another subtle aspect that is not very clear to me. Code like this:

```python
# (assuming the gRPC exporter; the HTTP one takes the same endpoint/headers arguments)
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

no_otel_collector_url = "https://tempo-eu-west-0.grafana.net"
otlp_exporter = OTLPSpanExporter(endpoint=no_otel_collector_url, headers={"authorization": "Basic myencodedtoken"})
```

even though it correctly passes the authentication, still results in no trace shown in the Grafana Cloud Explore search.
@daper I've tested that now, with the new version 1.12 of the Python OpenTelemetry SDK, and even the straight-to-Tempo export above works. Same solution as for this Tempo straight-ingestion issue.
My code to send an OTLP trace is on GitHub.
I tried to feed the agent and send to a local Tempo or to Grafana Cloud, but it doesn't work (I also wrote about this on Slack).
I resolved the issue by using a standard OTel Collector instead of the Agent, as per the very helpful @mdisibio blog post on Grafana Labs. I'm happy with this solution. If this is by design, please close this issue, but note that the Agent documentation (see here and this blog) says it can receive OTLP traces over the gRPC or HTTP protocol.