
Collector constantly breaking down #6420

Closed
ambition-consulting opened this issue Oct 27, 2022 · 24 comments
Labels: bug (Something isn't working)

ambition-consulting commented Oct 27, 2022

Describe the bug
After a while, the pod on which the collector runs crashes with this error:

panic: runtime error: slice bounds out of range [-2:]

goroutine 137 [running]:
go.opentelemetry.io/collector/pdata/internal/data/protogen/common/v1.(*AnyValue).MarshalToSizedBuffer(0xc002dfd810, {0xc001e84000, 0x22, 0x30c16})
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/common/v1/common.pb.go:483 +0xd0
go.opentelemetry.io/collector/pdata/internal/data/protogen/common/v1.(*KeyValue).MarshalToSizedBuffer(0xc002dfd800, {0xc001e84000, 0x22, 0x30c16})
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/common/v1/common.pb.go:700 +0x3a
go.opentelemetry.io/collector/pdata/internal/data/protogen/resource/v1.(*Resource).MarshalToSizedBuffer(0xc000753360, {0xc001e84000, 0x30984?, 0x30c16})
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/resource/v1/resource.pb.go:146 +0xf0
go.opentelemetry.io/collector/pdata/internal/data/protogen/trace/v1.(*ResourceSpans).MarshalToSizedBuffer(0xc000753360, {0xc001e84000, 0x30984, 0x30c16})
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/trace/v1/trace.pb.go:890 +0x105
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).MarshalToSizedBuffer(0xc003408198, {0xc001e84000, 0x30c16, 0x30c16})
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:351 +0xac
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).Marshal(0xc0cebc8861798197?)
	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:331 +0x56
go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.Request.MarshalProto(...)
	go.opentelemetry.io/collector/pdata@v0.62.1/ptrace/ptraceotlp/traces.go:88
go.opentelemetry.io/collector/exporter/otlphttpexporter.(*exporter).pushTraces(0xc0001255f0, {0x7388850, 0xc003291470}, {0xc002756e80?})
	go.opentelemetry.io/collector@v0.62.1/exporter/otlphttpexporter/otlp.go:99 +0x32
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).Export(0x279293e?, {0x7388850?, 0xc003291470?})
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/traces.go:70 +0x34
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc000d34750, {0x73a7158, 0xc00340ac30})
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/common.go:203 +0x96
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc000125680, {0x73a7158, 0xc00340ac30})
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/queued_retry.go:388 +0x58d
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send(0xc000de0e88, {0x73a7158, 0xc00340ac30})
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/traces.go:134 +0x88
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x73a7158, 0xc00340ac30})
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/queued_retry.go:206 +0x39
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45

Steps to reproduce
Using the otel/opentelemetry-collector-contrib:0.62.1 Docker image with this config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ecom-opentelemetry-collector
  labels:
    helm.sh/chart: opentelemetry-collector-0.30.0
    app.kubernetes.io/name: ecom-opentelemetry-collector
    app.kubernetes.io/instance: ecom-dev
    app.kubernetes.io/version: "0.59.0"
    app.kubernetes.io/managed-by: Helm
data:
  relay: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: localhost:12345
    processors:
      batch: {}
      spanmetrics:
        metrics_exporter: otlp/spanmetrics
        dimensions_cache_size: 5000
        latency_histogram_buckets:
          - 10ms
          - 100ms
          - 1s
          - 2s
          - 4s
          - 8s
          - 16s
          - 32s
        aggregation_temporality: AGGREGATION_TEMPORALITY_CUMULATIVE        
        dimensions:
          - name: http.status_code
          - name : target_xpath
          - name : some_more_stuff
    exporters:
      logging:
        loglevel: debug
      otlphttp:
        endpoint: http://jaeger-collector.jaeger.svc:4318
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
      zipkin:
        endpoint: http://jaeger-collector.jaeger.svc:9411/api/v2/spans
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
      otlp/spanmetrics:
        endpoint: 127.0.0.1:4317
        tls:
          insecure: true
      prometheus:
        endpoint: 0.0.0.0:8889
        namespace: default
    service:
      extensions:
      - health_check
      telemetry:
        logs:
          level: debug
        metrics:
          level: detailed
          address: 0.0.0.0:8888
      pipelines:
        logs:
          receivers:
          - otlp
          processors:
          - batch
          exporters:
          - logging
        traces:
          receivers:
          - otlp
          processors:
          - spanmetrics
          - batch
          exporters:
          - otlphttp
          - logging
        metrics:
          receivers:
          - otlp
          processors:
          - batch
          exporters:
          - logging
          - prometheus
        metrics/spanmetrics:
          receivers:
            - otlp/spanmetrics
          exporters:
            - otlp/spanmetrics
    extensions:
      health_check: {}

Environment
OS: Linux, Docker on Kubernetes

ambition-consulting added the bug label on Oct 27, 2022
@ambition-consulting (Author)

Even though I am using the contrib Docker image, the panic comes from code in this repo:

https://github.com/open-telemetry/opentelemetry-collector/blob/main/pdata/internal/data/protogen/collector/trace/v1/trace_service.pb.go#L437

@ambition-consulting (Author)

@tigrannajaryan Any idea why this might be breaking?

@bogdandrutu (Member)

To me this seems to be caused by the "spanmetrics" since I have a feeling that it mutates the data that it also sends to the exporter.

@tigrannajaryan (Member)

To me this seems to be caused by the "spanmetrics" since I have a feeling that it mutates the data that it also sends to the exporter.

This is a good hunch. The failure shows we run out of buffer while marshalling. One way this can happen is if data is mutated after the buffer size is calculated. A component is likely misbehaving and mutating data when it shouldn't.
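
To illustrate this failure mode, here is a minimal, self-contained Go sketch (a toy encoder, purely hypothetical, not the actual pdata code) of what happens when data is mutated after the buffer size has been computed: the backwards-writing marshal offset goes negative and slicing panics with "slice bounds out of range", just like in the reports above.

package main

import "fmt"

// msg is a toy message with a gogo-proto style "marshal to sized buffer"
// encoder. This is a hypothetical simplification of the generated pdata code.
type msg struct{ payload []byte }

// size is computed first and used to allocate the output buffer.
func (m *msg) size() int { return len(m.payload) }

// marshalToSizedBuffer writes backwards from the end of buf, the same
// pattern used by the generated MarshalToSizedBuffer methods.
func (m *msg) marshalToSizedBuffer(buf []byte) int {
	i := len(buf)
	i -= len(m.payload) // if payload grew after size() was taken, i goes negative
	copy(buf[i:], m.payload)
	return len(buf) - i
}

func main() {
	m := &msg{payload: []byte("abc")}
	buf := make([]byte, m.size()) // buffer sized before the mutation

	// A misbehaving component mutates the data between sizing and marshalling.
	m.payload = append(m.payload, " plus a late mutation"...)

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // slice bounds out of range [-N:], as in the report
		}
	}()
	m.marshalToSizedBuffer(buf)
}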

@bogdandrutu (Member)

@ambition-consulting are you sure about the version you are using? I see a Helm chart version 0.30.0, which is very old.

@ambition-consulting (Author)

Yes. I don't have permissions to run the operator on k8s, so I've been using the Helm chart template but replaced the image version inside:

---
# Source: opentelemetry-collector/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecom-opentelemetry-collector
  labels:
    helm.sh/chart: opentelemetry-collector-0.30.0
    app.kubernetes.io/name: ecom-opentelemetry-collector
    app.kubernetes.io/instance: ecom-dev
    app.kubernetes.io/version: "0.59.0"
    app.kubernetes.io/managed-by: Helm

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ecom-opentelemetry-collector
      app.kubernetes.io/instance: ecom-dev
      component: standalone-collector
  template:
    metadata:
      annotations:
        checksum/config: 5bf2b54bf88ff7f38ff238e33e6252ebaec3b51103be6281b594693f27437f06
      labels:
        app.kubernetes.io/name: ecom-opentelemetry-collector
        app.kubernetes.io/instance: ecom-dev
        component: standalone-collector
    spec:
      automountServiceAccountToken: true
      securityContext:
        {}
      containers:
        - name: ecom-opentelemetry-collector
          command:
            - /otelcol-contrib
            - --config=/conf/relay.yaml
          securityContext:
            {}
          image: "otel/opentelemetry-collector-contrib:0.63.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: jaeger-compact
              containerPort: 6831
              protocol: UDP
            - name: jaeger-grpc
              containerPort: 14250
              protocol: TCP
            - name: jaeger-thrift
              containerPort: 14268
              protocol: TCP
            - name: otlp
              containerPort: 4317
              protocol: TCP
            - name: otlp-http
              containerPort: 4318
              protocol: TCP
            - name: zipkin
              containerPort: 9411
              protocol: TCP
            - name: otel-metrics
              containerPort: 8888
              protocol: TCP
            - name: app-metrics
              containerPort: 8889
              protocol: TCP
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          livenessProbe:
            httpGet:
              path: /
              port: 13133
          readinessProbe:
            httpGet:
              path: /
              port: 13133
          resources:
            limits:
              cpu: 256m
              memory: 512Mi
          volumeMounts:
            - mountPath: /conf
              name: ecom-opentelemetry-collector
      volumes:
        - name: ecom-opentelemetry-collector
          configMap:
            name: ecom-opentelemetry-collector
            items:
              - key: relay
                path: relay.yaml


2022/10/29 09:23:09 proto: duplicate proto type registered: jaeger.api_v2.PostSpansRequest
2022/10/29 09:23:09 proto: duplicate proto type registered: jaeger.api_v2.PostSpansResponse
2022-10-29T09:23:10.352Z	info	service/telemetry.go:110	Setting up own telemetry...
2022-10-29T09:23:10.352Z	info	service/telemetry.go:140	Serving Prometheus metrics	{"address": "0.0.0.0:8888", "level": "normal"}
2022-10-29T09:23:10.352Z	info	components/components.go:30	In development component. May change in the future.	{"kind": "exporter", "data_type": "traces", "name": "logging", "stability": "in development"}
2022-10-29T09:23:10.352Z	warn	loggingexporter@v0.63.0/factory.go:110	'loglevel' option is deprecated in favor of 'verbosity'. Set 'verbosity' to equivalent value to preserve behavior.	{"kind": "exporter", "data_type": "traces", "name": "logging", "loglevel": "debug", "equivalent verbosity level": "detailed"}
2022-10-29T09:23:10.352Z	info	components/components.go:30	In development component. May change in the future.	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces", "stability": "in development"}
2022-10-29T09:23:10.352Z	info	spanmetricsprocessor@v0.63.0/processor.go:93	Building spanmetricsprocessor	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2022-10-29T09:23:10.352Z	info	components/components.go:30	In development component. May change in the future.	{"kind": "exporter", "data_type": "metrics", "name": "logging", "stability": "in development"}
2022-10-29T09:23:10.353Z	info	components/components.go:30	In development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "logging", "stability": "in development"}
2022-10-29T09:23:10.358Z	info	service/service.go:89	Starting otelcol-contrib...	{"Version": "0.63.0", "NumCPU": 16}
2022-10-29T09:23:10.358Z	info	extensions/extensions.go:42	Starting extensions...
2022-10-29T09:23:10.358Z	info	extensions/extensions.go:45	Extension is starting...	{"kind": "extension", "name": "health_check"}
2022-10-29T09:23:10.358Z	info	healthcheckextension@v0.63.0/healthcheckextension.go:44	Starting health_check extension	{"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2022-10-29T09:23:10.358Z	info	extensions/extensions.go:49	Extension started.	{"kind": "extension", "name": "health_check"}
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:74	Starting exporters...
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "metrics", "name": "logging"}
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "metrics", "name": "logging"}
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2022-10-29T09:23:10.358Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "metrics", "name": "otlp/spanmetrics"}
2022-10-29T09:23:10.359Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "metrics", "name": "otlp/spanmetrics"}
2022-10-29T09:23:10.359Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "logs", "name": "logging"}
2022-10-29T09:23:10.359Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "logs", "name": "logging"}
2022-10-29T09:23:10.359Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "traces", "name": "otlp"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "traces", "name": "otlp"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:78	Exporter is starting...	{"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:82	Exporter started.	{"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:86	Starting processors...
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:90	Processor is starting...	{"kind": "processor", "name": "batch", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:94	Processor started.	{"kind": "processor", "name": "batch", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:90	Processor is starting...	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	spanmetricsprocessor@v0.63.0/processor.go:177	Starting spanmetricsprocessor	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	spanmetricsprocessor@v0.63.0/processor.go:197	Found exporter	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces", "spanmetrics-exporter": "otlp/spanmetrics"}
2022-10-29T09:23:10.443Z	info	spanmetricsprocessor@v0.63.0/processor.go:205	Started spanmetricsprocessor	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:94	Processor started.	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:90	Processor is starting...	{"kind": "processor", "name": "batch", "pipeline": "metrics"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:94	Processor started.	{"kind": "processor", "name": "batch", "pipeline": "metrics"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:90	Processor is starting...	{"kind": "processor", "name": "batch", "pipeline": "logs"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:94	Processor started.	{"kind": "processor", "name": "batch", "pipeline": "logs"}
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:98	Starting receivers...
2022-10-29T09:23:10.443Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "traces"}
2022-10-29T09:23:10.443Z	info	otlpreceiver/otlp.go:71	Starting GRPC server	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "endpoint": "0.0.0.0:4317"}
2022-10-29T09:23:10.444Z	info	otlpreceiver/otlp.go:89	Starting HTTP server	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "endpoint": "0.0.0.0:4318"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "traces"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "metrics"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "metrics"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp/spanmetrics", "pipeline": "metrics"}
2022-10-29T09:23:10.444Z	info	otlpreceiver/otlp.go:71	Starting GRPC server	{"kind": "receiver", "name": "otlp/spanmetrics", "pipeline": "metrics", "endpoint": "localhost:12345"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp/spanmetrics", "pipeline": "metrics"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "logs"}
2022-10-29T09:23:10.444Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "logs"}
2022-10-29T09:23:10.444Z	info	healthcheck/handler.go:129	Health Check state change	{"kind": "extension", "name": "health_check", "status": "ready"}
2022-10-29T09:23:10.444Z	info	service/service.go:106	Everything is ready. Begin running and processing data.

bogdandrutu commented Nov 2, 2022

@ambition-consulting still investigating. Is this the first version where you saw this? Have you run 0.61 successfully without any errors?

Update: Do you see this with v0.63.0 as well, or just with v0.62.1?

@bogdandrutu (Member)

Also, to isolate the problem, can you run the collector with this pipelines configuration:

      pipelines:
        logs:
          receivers:
          - otlp
          processors:
          - batch
          exporters:
          - logging
        traces:
          receivers:
          - otlp
          processors:
          - batch
          exporters:
          - otlphttp
        traces/spanmetrics:
          receivers:
          - otlp
          processors:
          - spanmetrics
          exporters:
          - logging
        metrics:
          receivers:
          - otlp
          processors:
          - batch
          exporters:
          - logging
          - prometheus
        metrics/spanmetrics:
          receivers:
            - otlp/spanmetrics
          exporters:
            - otlp/spanmetrics

@ambition-consulting (Author)

@ambition-consulting still investigating. Is this the first version where you saw this? Have you run 0.61 successfully without any errors?

Update: Do you see this with v0.63.0 as well, or just with v0.62.1?

Yes, with both; those are also the only versions I have tested.

@bogdandrutu (Member)

@ambition-consulting let me know when you have updates from the new config proposal.

@andretong

Hello everyone!
I'm having the same issue with both versions, v0.63.0 and v0.62.1.

As an extra, I'm using the attributes span processor feature.

The only thing I could notice prior to the error is that it is trying to process a trace with at least 26 spans, and then it crashes.
Here is the stack trace of the error:

docker-compose-otel-collector-1  | 	{"kind": "exporter", "data_type": "traces", "name": "logging"}
docker-compose-otel-collector-1  | panic: runtime error: slice bounds out of range [-3:]
docker-compose-otel-collector-1  | 
docker-compose-otel-collector-1  | goroutine 99 [running]:
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/common/v1.(*KeyValue).MarshalToSizedBuffer(0xc0014e3c00, {0xc001700000, 0x9, 0x2675})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/common/v1/common.pb.go:711 +0x21b
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/resource/v1.(*Resource).MarshalToSizedBuffer(0xc0011dda90, {0xc001700000, 0x264d?, 0x2675})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/resource/v1/resource.pb.go:146 +0xf0
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/trace/v1.(*ResourceSpans).MarshalToSizedBuffer(0xc0011dda90, {0xc001700000, 0x2675, 0x2675})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/trace/v1/trace.pb.go:890 +0x105
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).MarshalToSizedBuffer(0xc00151ec90, {0xc001700000, 0x2675, 0x2675})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:351 +0xac
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).Marshal(0xc00151ec90?)
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:331 +0x56
docker-compose-otel-collector-1  | google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x73ba118, 0xc00166e7f0}, {0x0, 0x0, 0x0}, 0x0})
docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/internal/impl/legacy_message.go:402 +0xa2
docker-compose-otel-collector-1  | google.golang.org/protobuf/proto.MarshalOptions.size({{}, 0x90?, 0xec?, 0x51?}, {0x73ba118, 0xc00166e7f0})
docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/proto/size.go:43 +0xa6
docker-compose-otel-collector-1  | google.golang.org/protobuf/proto.MarshalOptions.Size({{}, 0xc0?, 0x5a?, 0x40?}, {0x733b780?, 0xc00166e7f0?})
docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/proto/size.go:26 +0x54
docker-compose-otel-collector-1  | google.golang.org/protobuf/proto.Size(...)
docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/proto/size.go:16
docker-compose-otel-collector-1  | github.com/golang/protobuf/proto.Size({0x7f387be10158?, 0xc00151ec90?})
docker-compose-otel-collector-1  | 	github.com/golang/protobuf@v1.5.2/proto/wire.go:18 +0x45
docker-compose-otel-collector-1  | go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.messageType.Event({{0x6716adf, 0xc}, {0x4, 0x0, {0x66f7565, 0x4}, {0x0, 0x0}}}, {0x7388850, 0xc0020cf6b0}, ...)
docker-compose-otel-collector-1  | 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.36.1/interceptor.go:50 +0x165
docker-compose-otel-collector-1  | go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x7388850, 0xc0020cf5c0}, {0x686ead8, 0x3b}, {0x6405ac0, 0xc00151ec90}, {0x6405c00, 0xc00012fe68}, 0xc0008fe000, 0x69a8358, ...)
docker-compose-otel-collector-1  | 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.36.1/interceptor.go:106 +0x6aa
docker-compose-otel-collector-1  | google.golang.org/grpc.(*ClientConn).Invoke(0xc0008fe000?, {0x7388850?, 0xc0020cf5c0?}, {0x686ead8?, 0x3b?}, {0x6405ac0?, 0xc00151ec90?}, {0x6405c00?, 0xc00012fe68?}, {0xc0011e7150, ...})
docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.0/call.go:35 +0x223
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*traceServiceClient).Export(0xc0013d0248, {0x7388850, 0xc0020cf5c0}, 0xc0011e5a70?, {0xc0011e7150, 0x1, 0x1})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:271 +0xc9
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.(*tracesClient).Export(0x49cc20?, {0x7388850?, 0xc0020cf5c0?}, {0xc0020cf590?}, {0xc0011e7150?, 0x0?, 0x0?})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.62.1/ptrace/ptraceotlp/traces.go:140 +0x30
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushTraces(0xc00116f860, {0x7388818?, 0xc0011ce9c0?}, {0x7388850?})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/otlpexporter/otlp.go:105 +0x69
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).Export(0x7388850?, {0x7388818?, 0xc0011ce9c0?})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/traces.go:70 +0x34
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc00119d4a0, {0x73a7158, 0xc0020cf560})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/common.go:203 +0x96
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc000bf27e0, {0x73a7158, 0xc0020cf560})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/queued_retry.go:388 +0x58d
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send(0xc0011e2420, {0x73a7158, 0xc0020cf560})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/traces.go:134 +0x88
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x73a7158, 0xc0020cf560})
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/queued_retry.go:206 +0x39
docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
docker-compose-otel-collector-1  | created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.62.1/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45
docker-compose-otel-collector-1 exited with code 2

@bogdandrutu (Member)

@andretong can you share your config as well?

andretong commented Nov 8, 2022

@bogdandrutu
Sure, here is my config:

extensions:
  memory_ballast:
    size_in_percentage: 20
  zpages:
    endpoint: 0.0.0.0:55679
  health_check:

receivers:
  otlp:
    protocols:
      http:
      grpc:

processors:
  attributes/requestid_processor:
    actions:
      - key: "http.request.header.requestid"
        action: convert
        converted_type: string        
      - key: "http.request.header.requestid"
        pattern: ^\[\"(?P<previousRequestId>.*\s\\u003e\s)?(?P<requestId>.*)\"]
        action: extract
      - key: "previousRequestId"
        action: delete
  attributes/bannerid_processor:
    actions:
      - key: "http.request.header.bannerid"
        action: convert
        converted_type: string        
      - key: "http.request.header.bannerid"
        pattern: ^\[\"(?P<bannerId>.*)\"]
        action: extract
  attributes/regionid_processor:
    actions:
      - key: "http.request.header.regionid"
        action: convert
        converted_type: string        
      - key: "http.request.header.regionid"
        pattern: ^\[\"(?P<regionId>.*)\"]
        action: extract
  batch:  

exporters:
  logging:
    loglevel: debug
  otlp/grafana:
    endpoint: tempo-eu-west-0.grafana.net:443
    headers:
      authorization: Basic ${TEMPO_BASIC_AUTH}  

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes/requestid_processor, attributes/bannerid_processor]
      exporters: [otlp/grafana, logging]

@Edition-X

I'm receiving the same error as @andretong above. My config is pretty similar but with the addition of the transform processor:

extensions:
  memory_ballast:
    size_in_percentage: 20
  zpages:
    endpoint: 0.0.0.0:55679
  health_check:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  attributes:
    actions:
      - key: "com.ocado.kibana.baseUrl"
        action: insert
        value: "https://<internal_url>/_dashboards/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(columns:!(appId,level,message),filters:!(),index:engprod-idt-logs,interval:auto,query:(language:kuery,query:'trace_id:%20"
      - key: "com.ocado.kibana.endUrl"
        action: insert
        value: "'),sort:!())"
      - key: "com.ocado.kibana.fullUrl"
        action: insert
        value: ""
  transform:
    traces:
      statements:
        - set(attributes["com.ocado.kibana.fullUrl"], Concat([attributes["com.ocado.kibana.baseUrl"], trace_id.string, attributes["com.ocado.kibana.endUrl"]], ""))

exporters:
  logging:
    loglevel: debug
  otlp/grafana:
    endpoint: tempo-eu-west-0.grafana.net:443
    headers:
      authorization: Basic ${TEMPO_BASIC_AUTH}

service:
  telemetry:
    logs:
      level: "debug"
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes, transform]
      exporters: [otlp/grafana, logging]
  extensions: [memory_ballast, zpages, health_check]

@andretong

@bogdandrutu Hi! Here is another stack trace that is crashing the collector, on version 0.63.1:

comms-docker-compose-otel-collector-1  | panic: runtime error: index out of range [-1]
comms-docker-compose-otel-collector-1  |
comms-docker-compose-otel-collector-1  | goroutine 113 [running]:
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/trace/v1.encodeVarintTrace(...)
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/trace/v1/trace.pb.go:1294
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/trace/v1.(*ResourceSpans).MarshalToSizedBuffer(0xc000bef1a0, {0xc001482000, 0x2882, 0x2882})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/trace/v1/trace.pb.go:919 +0x1f0
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).MarshalToSizedBuffer(0xc0014b6288, {0xc001482000, 0x2882, 0x2882})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:351 +0xac
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).Marshal(0xc0013f06e0?)
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:331 +0x56
comms-docker-compose-otel-collector-1  | google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x74909f8, 0xc0013f06e0}, {0x0, 0x0, 0x0}, 0x0})
comms-docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/internal/impl/legacy_message.go:402 +0xa2
comms-docker-compose-otel-collector-1  | google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x28?, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x74909f8, 0xc0013f06e0})
comms-docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/proto/encode.go:166 +0x27b
comms-docker-compose-otel-collector-1  | google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0xe0?, 0x40?, 0x4c?}, {0x0, 0x0, 0x0}, {0x7410c60?, 0xc0013f06e0?})
comms-docker-compose-otel-collector-1  | 	google.golang.org/protobuf@v1.28.1/proto/encode.go:125 +0x79
comms-docker-compose-otel-collector-1  | github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7fd764f4c148?, 0xc0014b6288?}, 0x0?)
comms-docker-compose-otel-collector-1  | 	github.com/golang/protobuf@v1.5.2/proto/wire.go:40 +0xa5
comms-docker-compose-otel-collector-1  | github.com/golang/protobuf/proto.Marshal(...)
comms-docker-compose-otel-collector-1  | 	github.com/golang/protobuf@v1.5.2/proto/wire.go:23
comms-docker-compose-otel-collector-1  | google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x64c40e0, 0xc0014b6288})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/encoding/proto/proto.go:45 +0x4e
comms-docker-compose-otel-collector-1  | google.golang.org/grpc.encode({0x7fd764f4c0d8?, 0xb312e00?}, {0x64c40e0?, 0xc0014b6288?})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/rpc_util.go:594 +0x44
comms-docker-compose-otel-collector-1  | google.golang.org/grpc.prepareMsg({0x64c40e0?, 0xc0014b6288?}, {0x7fd764f4c0d8?, 0xb312e00?}, {0x0, 0x0}, {0x744f500, 0xc0000faaf0})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/stream.go:1692 +0xd2
comms-docker-compose-otel-collector-1  | google.golang.org/grpc.(*clientStream).SendMsg(0xc0008d50e0, {0x64c40e0?, 0xc0014b6288})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/stream.go:830 +0xfd
comms-docker-compose-otel-collector-1  | google.golang.org/grpc.invoke({0x745e5f0?, 0xc000ae4a80?}, {0x693260d?, 0x4?}, {0x64c40e0, 0xc0014b6288}, {0x64c4220, 0xc0014b6840}, 0x0?, {0xc0014b8080, ...})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/call.go:70 +0xa8
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x745e5f0, 0xc000ae49c0}, {0x693260d, 0x3b}, {0x64c40e0, 0xc0014b6288}, {0x64c4220, 0xc0014b6840}, 0xc00096f180, 0x6a6f038, ...)
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.36.4/interceptor.go:105 +0x3e4
comms-docker-compose-otel-collector-1  | google.golang.org/grpc.(*ClientConn).Invoke(0xc00096f180?, {0x745e5f0?, 0xc000ae49c0?}, {0x693260d?, 0x3b?}, {0x64c40e0?, 0xc0014b6288?}, {0x64c4220?, 0xc0014b6840?}, {0xc000bd0c00, ...})
comms-docker-compose-otel-collector-1  | 	google.golang.org/grpc@v1.50.1/call.go:35 +0x223
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*traceServiceClient).Export(0xc0005476a0, {0x745e5f0, 0xc000ae49c0}, 0xc000ba9dd0?, {0xc000bd0c00, 0x1, 0x1})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:271 +0xc9
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.(*grpcClient).Export(0x49cd20?, {0x745e5f0?, 0xc000ae49c0?}, {0xc000ae4990?}, {0xc000bd0c00?, 0x0?, 0x0?})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/pdata@v0.63.1/ptrace/ptraceotlp/grpc.go:51 +0x30
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushTraces(0xc000b6c820, {0x745e5b8?, 0xc000bef200?}, {0x745e5f0?})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector/exporter/otlpexporter@v0.63.1/otlp.go:105 +0x69
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).Export(0x745e5f0?, {0x745e5b8?, 0xc000bef200?})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/traces.go:70 +0x34
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc000edbd58, {0x747d698, 0xc000a0ed50})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/common.go:203 +0x96
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc00068e000, {0x747d698, 0xc000a0ed50})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/queued_retry.go:388 +0x58d
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send(0xc000b9adf8, {0x747d698, 0xc000a0ed50})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/traces.go:134 +0x88
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x747d698, 0xc000a0ed50})
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/queued_retry.go:206 +0x39
comms-docker-compose-otel-collector-1  | go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
comms-docker-compose-otel-collector-1  | created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
comms-docker-compose-otel-collector-1  | 	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45
comms-docker-compose-otel-collector-1 exited with code 2

@bogdandrutu (Member)

Did any of you (@andretong @Edition-X) happen to run a previous version where this did not happen, and can you help me identify in which version it started?

@bogdandrutu (Member)

Also, can any of you (@andretong @Edition-X @ambition-consulting) deploy the collector without the "batch" processor? It seems to be the common element across all your pipelines.

@andretong

Hi @bogdandrutu

I ran the test without the batch processor, but it still breaks:

Attributes:
     -> http.flavor: Str(1.1)
     -> http.method: Str(POST)
     -> http.response_content_length: Int(275)
     -> http.status_code: Int(200)
     -> http.url: Str(http://localstack:4566)
     -> net.peer.name: Str(localstack)
     -> net.peer.port: Int(4566)
     -> net.transport: Str(ip_tcp)
     -> rpc.method: Str(ReceiveMessage)
     -> rpc.service: Str(AmazonSQS)
     -> rpc.system: Str(aws-api)
     -> thread.id: Int(77)
     -> thread.name: Str(simpleMessageListenerContainer-9)
	{"kind": "exporter", "data_type": "traces", "name": "logging"}
panic: runtime error: index out of range [-2]

goroutine 105 [running]:
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.encodeVarintTraceService(...)
	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:437
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).MarshalToSizedBuffer(0xc00114aeb8, {0xc001e51800, 0x16ca, 0x16ca})
	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:356 +0x16d
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*ExportTraceServiceRequest).Marshal(0xc001eac310?)
	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:331 +0x56
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x74909f8, 0xc001eac310}, {0x0, 0x0, 0x0}, 0x0})
	google.golang.org/protobuf@v1.28.1/internal/impl/legacy_message.go:402 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x28?, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x74909f8, 0xc001eac310})
	google.golang.org/protobuf@v1.28.1/proto/encode.go:166 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0xe0?, 0x40?, 0x4c?}, {0x0, 0x0, 0x0}, {0x7410c60?, 0xc001eac310?})
	google.golang.org/protobuf@v1.28.1/proto/encode.go:125 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7fca4f7f3188?, 0xc00114aeb8?}, 0x0?)
	github.com/golang/protobuf@v1.5.2/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	github.com/golang/protobuf@v1.5.2/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x64c40e0, 0xc00114aeb8})
	google.golang.org/grpc@v1.50.1/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7fca4f7f3118?, 0xb312e00?}, {0x64c40e0?, 0xc00114aeb8?})
	google.golang.org/grpc@v1.50.1/rpc_util.go:594 +0x44
google.golang.org/grpc.prepareMsg({0x64c40e0?, 0xc00114aeb8?}, {0x7fca4f7f3118?, 0xb312e00?}, {0x0, 0x0}, {0x744f500, 0xc000108af0})
	google.golang.org/grpc@v1.50.1/stream.go:1692 +0xd2
google.golang.org/grpc.(*clientStream).SendMsg(0xc000917320, {0x64c40e0?, 0xc00114aeb8})
	google.golang.org/grpc@v1.50.1/stream.go:830 +0xfd
google.golang.org/grpc.invoke({0x745e5f0?, 0xc0022134d0?}, {0x693260d?, 0x4?}, {0x64c40e0, 0xc00114aeb8}, {0x64c4220, 0xc00144a480}, 0x0?, {0xc0007f4940, ...})
	google.golang.org/grpc@v1.50.1/call.go:70 +0xa8
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1({0x745e5f0, 0xc002213410}, {0x693260d, 0x3b}, {0x64c40e0, 0xc00114aeb8}, {0x64c4220, 0xc00144a480}, 0xc000919880, 0x6a6f038, ...)
	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.36.4/interceptor.go:105 +0x3e4
google.golang.org/grpc.(*ClientConn).Invoke(0xc000919880?, {0x745e5f0?, 0xc002213410?}, {0x693260d?, 0x3b?}, {0x64c40e0?, 0xc00114aeb8?}, {0x64c4220?, 0xc00144a480?}, {0xc001149760, ...})
	google.golang.org/grpc@v1.50.1/call.go:35 +0x223
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1.(*traceServiceClient).Export(0xc0003a3b48, {0x745e5f0, 0xc002213410}, 0xc00114de00?, {0xc001149760, 0x1, 0x1})
	go.opentelemetry.io/collector/pdata@v0.63.1/internal/data/protogen/collector/trace/v1/trace_service.pb.go:271 +0xc9
go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.(*grpcClient).Export(0x49cd20?, {0x745e5f0?, 0xc002213410?}, {0xc0022133e0?}, {0xc001149760?, 0x0?, 0x0?})
	go.opentelemetry.io/collector/pdata@v0.63.1/ptrace/ptraceotlp/grpc.go:51 +0x30
go.opentelemetry.io/collector/exporter/otlpexporter.(*exporter).pushTraces(0xc001140280, {0x745e5b8?, 0xc001e205a0?}, {0x745e5f0?})
	go.opentelemetry.io/collector/exporter/otlpexporter@v0.63.1/otlp.go:105 +0x69
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).Export(0x745e5f0?, {0x745e5b8?, 0xc001e205a0?})
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/traces.go:70 +0x34
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send(0xc001128c18, {0x747d698, 0xc0022d1620})
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/common.go:203 +0x96
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send(0xc000b46480, {0x747d698, 0xc0022d1620})
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/queued_retry.go:388 +0x58d
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send(0xc00114a9a8, {0x747d698, 0xc0022d1620})
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/traces.go:134 +0x88
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1({0x747d698, 0xc0022d1620})
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/queued_retry.go:206 +0x39
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1()
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/internal/bounded_memory_queue.go:61 +0xb6
created by go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers
	go.opentelemetry.io/collector@v0.63.1/exporter/exporterhelper/internal/bounded_memory_queue.go:56 +0x45

@andretong

@bogdandrutu

I ran the same config file with the following versions:

0.60.0 -> OK
0.61.0 -> OK
0.62.0 -> ERROR

bogdandrutu commented Nov 10, 2022

@ambition-consulting @andretong @Edition-X found the bug, will submit a fix soon. In the meantime, if you want a quick fix, remove the logging exporter from the pipelines or do not configure loglevel: debug.
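
As a sketch of that quick fix (assuming, per the deprecation warning in the logs above, that "verbosity: normal" is an acceptable lower setting for the logging exporter), the exporter could be configured like this until the fix lands:

exporters:
  logging:
    # avoid 'loglevel: debug' (equivalent to 'verbosity: detailed') for now;
    # per the comment above, a lower verbosity should sidestep the crash
    verbosity: normal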

@Edition-X

Thanks @bogdandrutu

@andretong

@bogdandrutu thank you very much!

@bogdandrutu (Member)

@Edition-X @andretong v0.64.1 is ready to be tested :)

codeboten pushed a commit that referenced this issue Jan 3, 2023
Removes the first two points from the bugfix release criteria.

I think the remaining points give a more accurate picture of the decision-making process we have taken so far (e.g. for #6420, where the first two points were not fulfilled).

We can revisit this in the future if there are disagreements on when to do a bugfix release.