Failed to release a message: UnpooledSlicedByteBuf(freed) #11201
Can you provide more complete code for your multiple-stream-observer use case?
When a client connects, we store the observer in an observerMap; in practice, we currently have only one client.

```java
@Getter
private final Map<Context, ServerCallStreamObserver<StringValue>> observerMap = new ConcurrentHashMap<>();

@Override
public void currentChartUpdateSubscribe(Empty request, StreamObserver<StringValue> responseObserver) {
    var serverCallStreamObserver = (ServerCallStreamObserver<StringValue>) responseObserver;
    Context currentContext = Context.current();
    observerMap.put(currentContext, serverCallStreamObserver);
    // Drop the observer from the map once the client's call is cancelled.
    currentContext.addListener(context -> observerMap.remove(context), ExecutorConfig.GRPC_EXECUTOR);
}
```

When a Spring event occurs, it triggers the listener below, which broadcasts the update to every registered observer:

```java
@Async(value = "priceTaskExecutor")
@EventListener(value = TickShort.class)
public void onPricePublish(TickShort tickShort) {
    String symbol = tickShort.getSymbol();
    StringValue data = StringValue.newBuilder()
            .setValue(symbol)
            .build();
    Map<Context, ServerCallStreamObserver<StringValue>> observers = chartBarEndpoint.getObserverMap();
    observers.forEach((context, observer) -> {
        // Skip cancelled calls and honor transport flow control via isReady().
        if (!context.isCancelled() && observer.isReady()) {
            observer.onNext(data);
        }
    });
}
```
This is probably the same as #11115 (fixed in 1.64.0).
I'll release the backported fix to v1.63.
I'm running into the same issue. I upgraded to 1.64.0, as recommended by the fix above, retested my code, and saw no improvement. My stack trace is slightly different from the one above, but the same message is printed during a high-volume test.
@tcronis, if running on 1.64.0, please open a new issue and include the messages you are seeing.
The v1.63.1 release, which includes the backported bug fix, is now available.
I'm closing this ticket, but feel free to reopen it if you still encounter the issue after upgrading to v1.63.1.
The issue reoccurred when I used version 1.63.1.

```
2024-05-21 09:28:20.134 [grpc-nio-worker-ELG-1-5] WARN io.grpc.netty.shaded.io.netty.util.ReferenceCountUtil.safeRelease - Failed to release a message: UnpooledSlicedByteBuf(freed)
io.grpc.netty.shaded.io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
at io.grpc.netty.shaded.io.netty.util.internal.ReferenceCountUpdater.toLiveRealRefCnt(ReferenceCountUpdater.java:83)
at io.grpc.netty.shaded.io.netty.util.internal.ReferenceCountUpdater.release(ReferenceCountUpdater.java:148)
at io.grpc.netty.shaded.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:101)
at io.grpc.netty.shaded.io.netty.buffer.CompositeByteBuf$Component.free(CompositeByteBuf.java:1959)
at io.grpc.netty.shaded.io.netty.buffer.CompositeByteBuf.deallocate(CompositeByteBuf.java:2264)
at io.grpc.netty.shaded.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:111)
at io.grpc.netty.shaded.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:101)
at io.grpc.netty.shaded.io.netty.buffer.AbstractDerivedByteBuf.release0(AbstractDerivedByteBuf.java:98)
at io.grpc.netty.shaded.io.netty.buffer.AbstractDerivedByteBuf.release(AbstractDerivedByteBuf.java:94)
at io.grpc.netty.shaded.io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:90)
at io.grpc.netty.shaded.io.netty.util.ReferenceCountUtil.safeRelease(ReferenceCountUtil.java:116)
at io.grpc.netty.shaded.io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:280)
at io.grpc.netty.shaded.io.netty.channel.ChannelOutboundBuffer.removeBytes(ChannelOutboundBuffer.java:361)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:438)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:921)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:907)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:893)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:197)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:925)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:907)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:893)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:967)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannel.flush(AbstractChannel.java:254)
at io.grpc.netty.shaded.io.grpc.netty.WriteQueue.flush(WriteQueue.java:143)
at io.grpc.netty.shaded.io.grpc.netty.WriteQueue.access$000(WriteQueue.java:35)
at io.grpc.netty.shaded.io.grpc.netty.WriteQueue$1.run(WriteQueue.java:47)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.runTask$$$capture(AbstractEventExecutor.java:173)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:840)
```
@cdgjzzy, did previous versions of gRPC work well for you?
We were previously using version 1.58, where the issue first occurred. We then tried to resolve it by upgrading to version 1.63.0, but unfortunately it still persists.
@cdgjzzy, I'm suspicious that this isn't a gRPC bug if you saw it in 1.58. I think you might be able to get this if you used a StreamObserver from multiple threads without synchronization.
@cdgjzzy, can you confirm that you are only calling each stream observer instance from a single thread? If you are calling from multiple threads, the problem is probably missing synchronization in your code.
Yes, this kind of error occurs when a stream observer is used from multiple threads without synchronization. I had previously assumed the stream observer was thread-safe; I have now added a lock in the code to ensure thread safety.
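For reference, here is a minimal sketch of that kind of guard. The `SafeStreamObserver` wrapper and its names are hypothetical, not from the code above; the point is simply that every call into a gRPC `StreamObserver`, which is not thread-safe, is serialized through a single lock:

```java
import io.grpc.stub.StreamObserver;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical wrapper: serializes all calls to a non-thread-safe StreamObserver.
final class SafeStreamObserver<T> implements StreamObserver<T> {
    private final StreamObserver<T> delegate;
    private final ReentrantLock lock = new ReentrantLock();

    SafeStreamObserver(StreamObserver<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onNext(T value) {
        lock.lock();
        try {
            delegate.onNext(value);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void onError(Throwable t) {
        lock.lock();
        try {
            delegate.onError(t);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void onCompleted() {
        lock.lock();
        try {
            delegate.onCompleted();
        } finally {
            lock.unlock();
        }
    }
}
```

Wrapping each observer before it goes into the observerMap would keep concurrent onNext calls from the @Async executor from interleaving on the same stream.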
Glad things seem to be working for you now. If not, comment, and it can be reopened.
gRPC-java: 1.63.0
Java: 17.0.9 2023-10-17 LTS 64-Bit

When I use gRPC streaming responses, I handle them with the subscribe-and-broadcast code shown earlier in this thread. That method gets triggered about 1500 times per second, and under that level of concurrency gRPC fails to send some messages.

Stacktrace and logs

If I control the sending frequency with a queue, the exception does not occur.
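A minimal sketch of such a queue-based approach, assuming a single drainer thread (the `QueuedSender` class and its names are illustrative, not the reporter's actual code): producers enqueue payloads from the hot event path, and one dedicated thread is the only caller of `onNext`, which both paces the sends and avoids unsynchronized multi-threaded access to the observer.

```java
import io.grpc.stub.StreamObserver;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical helper: funnels all sends through one thread so onNext
// is never called concurrently, and the send rate is bounded by the drainer.
final class QueuedSender<T> {
    private final BlockingQueue<T> sendQueue = new LinkedBlockingQueue<>(10_000);
    private final Thread drainer;

    QueuedSender(StreamObserver<T> observer) {
        drainer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // take() blocks while the queue is empty, so the drainer
                    // only runs as fast as messages arrive.
                    observer.onNext(sendQueue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit on shutdown
            }
        }, "grpc-send-drainer");
        drainer.setDaemon(true);
        drainer.start();
    }

    // Called from the ~1500/s event path; returns false (drops) if the queue is full.
    boolean offer(T message) {
        return sendQueue.offer(message);
    }

    void shutdown() {
        drainer.interrupt();
    }
}
```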