
Bump OpenTelemetry sdk to 1.42.1 #43752

Merged · 1 commit merged into quarkusio:main on Oct 11, 2024

Conversation

brunobat (Contributor) commented Oct 7, 2024

  • Instrumentation to 2.8.0
  • Semantic conventions to 1.27.0
  • Point deprecated conventions to the new classes (a short migration sketch follows this list).

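As an illustration of the third bullet, here is a minimal sketch, assuming the stable `opentelemetry-semconv` 1.27.0 artifact, of what moving from a deprecated convention constant to the new attribute classes can look like (the method and values are hypothetical, not code from this PR):

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.semconv.HttpAttributes;
import io.opentelemetry.semconv.UrlAttributes;

class SemconvMigrationSketch {
    // Before: deprecated catch-all SemanticAttributes constants (e.g. HTTP_METHOD).
    // After: the dedicated stable attribute classes shipped with semconv 1.27.0.
    void recordHttpAttributes(Span span) {
        span.setAttribute(HttpAttributes.HTTP_REQUEST_METHOD, "GET");
        span.setAttribute(UrlAttributes.URL_FULL, "https://example.com/hello");
    }
}
```
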
quarkus-bot added the area/dependencies and area/tracing labels Oct 7, 2024
quarkus-bot commented Oct 7, 2024

/cc @radcortez (opentelemetry)

brunobat marked this pull request as draft October 7, 2024 17:06
brunobat (Contributor, Author) commented Oct 7, 2024

Do not merge for now, because of possible side effects on the MicroProfile Telemetry 2.0 certification. Set as draft.

brunobat changed the title from "Upgrade OpenTelemetry sdk to 1.42.1" to "Bump OpenTelemetry sdk to 1.42.1" Oct 7, 2024
github-actions bot commented Oct 7, 2024

🎊 PR Preview 0545457 has been successfully built and deployed to https://quarkus-pr-main-43752-preview.surge.sh/version/main/guides/

  • Images of blog posts older than 3 months are not available.
  • Newsletters older than 3 months are not available.

brunobat (Contributor, Author) commented Oct 8, 2024

Needed for #43678

brunobat marked this pull request as ready for review October 8, 2024 16:48
quarkus-bot commented Oct 8, 2024

Status for workflow Quarkus CI

This is the status report for running Quarkus CI on commit e559cbd.

✅ The latest workflow run for the pull request has completed successfully.

It should be safe to merge provided you have a look at the other checks in the summary.

You can consult the Develocity build scans.


Flaky tests - Develocity

⚙️ JVM Tests - JDK 17

📦 integration-tests/grpc-hibernate

com.example.grpc.hibernate.BlockingRawTest.shouldAdd - History

  • Condition with Lambda expression in com.example.grpc.hibernate.BlockingRawTestBase was not fulfilled within 30 seconds. - org.awaitility.core.ConditionTimeoutException
org.awaitility.core.ConditionTimeoutException: Condition with Lambda expression in com.example.grpc.hibernate.BlockingRawTestBase was not fulfilled within 30 seconds.
	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
	at org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
	at org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:1006)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:975)
	at com.example.grpc.hibernate.BlockingRawTestBase.shouldAdd(BlockingRawTestBase.java:59)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
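
For context, the flaky assertion is an Awaitility lambda condition capped at 30 seconds. A minimal sketch of that pattern, with a hypothetical counter standing in for the real check in `BlockingRawTestBase`:

```java
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;

class AwaitilitySketch {
    static final AtomicInteger entries = new AtomicInteger(); // hypothetical state

    void shouldAdd() {
        // Poll until the lambda holds; after 30 seconds Awaitility throws the
        // ConditionTimeoutException reported above.
        await().atMost(Duration.ofSeconds(30))
               .until(() -> entries.get() == 10); // hypothetical condition
    }
}
```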

brunobat merged commit 8f50794 into quarkusio:main Oct 11, 2024
103 checks passed
quarkus-bot added this to the 3.16 - main milestone Oct 11, 2024
zakkak (Contributor) commented Oct 14, 2024

@brunobat after this change we see a SEVERE error instead of the WARNING we used to get in https://github.com/Karm/mandrel-integration-tests (note that we don't really want to set up and test any OpenTelemetry collector).

See Karm/mandrel-integration-tests#266 (comment)

Before:

2024-10-14 12:34:11,656 WARNING [io.ope.exp.int.grp.GrpcExporter] (vert.x-eventloop-thread-1) Failed to export spans. Server responded with gRPC status code 2. Error message: Failed to export TraceRequestMarshalers. The request could not be executed. Full error message: Connection refused: localhost/127.0.0.1:4317

After:

2024-10-14 12:31:28,716 SEVERE [io.ope.exp.int.grp.GrpcExporter] (vert.x-eventloop-thread-1) Failed to export spans. The request could not be executed. Error message: Connection refused: localhost/127.0.0.1:4317: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:4317
	Suppressed: java.lang.IllegalStateException: Retries exhausted: 3/3
		at io.smallrye.mutiny.helpers.ExponentialBackoff$1.lambda$apply$0(ExponentialBackoff.java:46)
		at io.smallrye.context.impl.wrappers.SlowContextualFunction.apply(SlowContextualFunction.java:21)
		at io.smallrye.mutiny.groups.MultiOnItem.lambda$transformToUni$6(MultiOnItem.java:268)
		at io.smallrye.mutiny.operators.multi.MultiConcatMapOp$MainSubscriber.onItem(MultiConcatMapOp.java:117)
		at io.smallrye.mutiny.subscription.MultiSubscriber.onNext(MultiSubscriber.java:61)
		at io.smallrye.mutiny.operators.multi.processors.UnicastProcessor.drainWithDownstream(UnicastProcessor.java:108)
		at io.smallrye.mutiny.operators.multi.processors.UnicastProcessor.drain(UnicastProcessor.java:139)
		at io.smallrye.mutiny.operators.multi.processors.UnicastProcessor.onNext(UnicastProcessor.java:205)
		at io.smallrye.mutiny.operators.multi.processors.SerializedProcessor.onNext(SerializedProcessor.java:104)
		at io.smallrye.mutiny.subscription.SerializedSubscriber.onItem(SerializedSubscriber.java:74)
		at io.smallrye.mutiny.subscription.MultiSubscriber.onNext(MultiSubscriber.java:61)
		at io.smallrye.mutiny.operators.multi.MultiRetryWhenOp$RetryWhenOperator.onFailure(MultiRetryWhenOp.java:127)
		at io.smallrye.mutiny.subscription.MultiSubscriber.onError(MultiSubscriber.java:73)
		at io.smallrye.mutiny.converters.uni.UniToMultiPublisher$UniToMultiSubscription.onFailure(UniToMultiPublisher.java:104)
		at io.smallrye.mutiny.operators.uni.UniOnFailureFlatMap$UniOnFailureFlatMapProcessor.dispatch(UniOnFailureFlatMap.java:85)
		at io.smallrye.mutiny.operators.uni.UniOnFailureFlatMap$UniOnFailureFlatMapProcessor.onFailure(UniOnFailureFlatMap.java:60)
		at io.smallrye.mutiny.operators.uni.builders.UniCreateFromCompletionStage$CompletionStageUniSubscription.forwardResult(UniCreateFromCompletionStage.java:60)
		at java.base@21.0.4/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
		at java.base@21.0.4/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
		at java.base@21.0.4/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
		at java.base@21.0.4/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2194)
		at io.vertx.core.Future.lambda$toCompletionStage$3(Future.java:603)
		at io.vertx.core.impl.future.FutureImpl$4.onFailure(FutureImpl.java:188)
		at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:81)
		at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:278)
		at io.vertx.core.impl.future.Mapping.onFailure(Mapping.java:45)
		at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:81)
		at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:278)
		at io.vertx.core.http.impl.HttpClientImpl.lambda$doRequest$4(HttpClientImpl.java:398)
		at io.vertx.core.net.impl.pool.Endpoint.lambda$getConnection$0(Endpoint.java:52)
		at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint$Request.handle(SharedClientHttpStreamEndpoint.java:162)
		at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint$Request.handle(SharedClientHttpStreamEndpoint.java:123)
		at io.vertx.core.impl.ContextImpl.emit(ContextImpl.java:328)
		at io.vertx.core.impl.ContextImpl.emit(ContextImpl.java:321)
		at io.vertx.core.net.impl.pool.SimpleConnectionPool$ConnectFailed$1.run(SimpleConnectionPool.java:380)
		at io.vertx.core.net.impl.pool.Task.runNextTasks(Task.java:43)
		at io.vertx.core.net.impl.pool.CombinerExecutor.submit(CombinerExecutor.java:91)
		at io.vertx.core.net.impl.pool.SimpleConnectionPool.execute(SimpleConnectionPool.java:244)
		at io.vertx.core.net.impl.pool.SimpleConnectionPool.lambda$connect$2(SimpleConnectionPool.java:258)
		at io.vertx.core.http.impl.SharedClientHttpStreamEndpoint.lambda$connect$2(SharedClientHttpStreamEndpoint.java:104)
		at io.vertx.core.impl.future.FutureImpl$4.onFailure(FutureImpl.java:188)
		at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:81)
		at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:278)
		at io.vertx.core.impl.future.Composition$1.onFailure(Composition.java:66)
		at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:81)
		at io.vertx.core.impl.future.FailedFuture.addListener(FailedFuture.java:98)
		at io.vertx.core.impl.future.Composition.onFailure(Composition.java:55)
		at io.vertx.core.impl.future.FutureBase.emitFailure(FutureBase.java:81)
		at io.vertx.core.impl.future.FutureImpl.tryFail(FutureImpl.java:278)
		at io.vertx.core.impl.ContextImpl.emit(ContextImpl.java:328)
		at io.vertx.core.impl.ContextImpl.emit(ContextImpl.java:321)
		at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:352)
		at io.vertx.core.net.impl.NetClientImpl.lambda$connectInternal2$6(NetClientImpl.java:324)
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
		at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
		at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
		at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:110)
		at io.vertx.core.net.impl.ChannelProvider.lambda$handleConnect$0(ChannelProvider.java:157)
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
		at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
		at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
		at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
		at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994)
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
		at java.base@21.0.4/java.lang.Thread.runWith(Thread.java:1596)
		at java.base@21.0.4/java.lang.Thread.run(Thread.java:1583)
		at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:896)
		at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:872)
	Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:4317
		Suppressed: [CIRCULAR REFERENCE: java.lang.IllegalStateException: Retries exhausted: 3/3]
	Caused by: java.net.ConnectException: Connection refused
		at java.base@21.0.4/sun.nio.ch.Net.pollConnect(Native Method)
		at java.base@21.0.4/sun.nio.ch.Net.pollConnectNow(Net.java:682)
		at java.base@21.0.4/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973)
		at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:336)
		at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:339)
		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994)
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
		at java.base@21.0.4/java.lang.Thread.runWith(Thread.java:1596)
		at java.base@21.0.4/java.lang.Thread.run(Thread.java:1583)
		at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:896)
		at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:872)
Caused by: [CIRCULAR REFERENCE: java.net.ConnectException: Connection refused]

Is this the expected behaviour?

cc @Karm

brunobat (Contributor, Author) commented Oct 14, 2024

@zakkak that test seems to be the one related to sending real OTLP protocol data to the OpenTelemetry collector... You don't need to set up the collector; the test should do it out of the box with Testcontainers and Docker, but for some reason the sender request is not reaching the collector: ...Connection refused: localhost/127.0.0.1:4317...
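
For readers unfamiliar with that setup, a minimal sketch, assuming Testcontainers' generic container API and a hypothetical collector image tag, of how a test can bring up its own OTLP endpoint:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// Sketch only: start a collector, then point the app under test at the mapped port.
try (GenericContainer<?> collector =
         new GenericContainer<>(DockerImageName.parse("otel/opentelemetry-collector:latest"))
             .withExposedPorts(4317)) { // OTLP gRPC port, as in the logs above
    collector.start();
    String endpoint = "http://" + collector.getHost() + ":" + collector.getMappedPort(4317);
    // e.g. quarkus.otel.exporter.otlp.traces.endpoint=<endpoint>
}
```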

zakkak (Contributor) commented Oct 14, 2024

That's intended, AFAIK (@Karm please confirm). The question is: should the application report a SEVERE error in such cases? Until now it just printed a WARNING.
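
For harnesses that intentionally run without a collector, a hedged workaround sketch (the category name is an assumption, expanded from the abbreviated `io.ope.exp.int.grp.GrpcExporter` in the logs above; both keys are standard Quarkus configuration):

```properties
# Silence the gRPC exporter's log category entirely...
quarkus.log.category."io.opentelemetry.exporter.internal.grpc".level=OFF
# ...or disable the OpenTelemetry SDK for runs that never export:
quarkus.otel.sdk.disabled=true
```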

brunobat (Contributor, Author) commented

I'm afraid this comes from the upstream commit: open-telemetry/opentelemetry-java@1f6de35

lindseyburnett added a commit to RedHatInsights/rhsm-subscriptions that referenced this pull request Nov 4, 2024
## Description
Bump quarkusVersion from 3.15.1 to 3.16.1
Quarkus 3.16 fixes the opentelemetry traces exporter issue:
quarkusio/quarkus#43752