
HashedWheelTimer's queue gets full #3449

Closed
beiwei30 opened this issue Feb 11, 2019 · 6 comments

@beiwei30
Member

HashedWheelTimer's queue gets full when running https://github.com/hank-whu/rpc-benchmark against 2.7.0; see the error below:

java.lang.IllegalStateException: Queue full
	at java.base/java.util.AbstractQueue.add(AbstractQueue.java:98)
	at java.base/java.util.concurrent.ArrayBlockingQueue.add(ArrayBlockingQueue.java:326)
	at org.apache.dubbo.common.timer.HashedWheelTimer.newTimeout(HashedWheelTimer.java:404)
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture.timeoutCheck(DefaultFuture.java:87)
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture.newFuture(DefaultFuture.java:103)
	at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeChannel.request(HeaderExchangeChannel.java:114)
	at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeClient.request(HeaderExchangeClient.java:88)
	at org.apache.dubbo.rpc.protocol.dubbo.ReferenceCountExchangeClient.request(ReferenceCountExchangeClient.java:83)
	at org.apache.dubbo.rpc.protocol.dubbo.DubboInvoker.doInvoke(DubboInvoker.java:108)
	at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:156)
	at org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:88)
	at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:73)
	at org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:49)
	at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:73)
	at org.apache.dubbo.rpc.filter.ConsumerContextFilter.invoke(ConsumerContextFilter.java:54)
	at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:73)
	at org.apache.dubbo.rpc.listener.ListenerInvokerWrapper.invoke(ListenerInvokerWrapper.java:77)
	at org.apache.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:57)
	at org.apache.dubbo.common.bytecode.proxy0.existUser(proxy0.java)
	at benchmark.rpc.AbstractClient.existUser(AbstractClient.java:18)
	at benchmark.rpc.Client.existUser(Client.java:51)
	at benchmark.rpc.generated.Client_existUser_jmhTest.existUser_avgt_jmhStub(Client_existUser_jmhTest.java:234)
	at benchmark.rpc.generated.Client_existUser_jmhTest.existUser_AverageTime(Client_existUser_jmhTest.java:174)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)

Please see hank-whu/rpc-benchmark#22 for more details.

@carryxyh
Member

I have looked into this problem: the original queue size is 1024, which is not enough in some scenarios (especially under stress testing), so I will increase the size to Integer.MAX_VALUE.
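
For reference, the failure mode is simply ArrayBlockingQueue.add rejecting a new element once the fixed capacity is reached. A minimal standalone sketch (plain JDK, not Dubbo code):

    import java.util.concurrent.ArrayBlockingQueue;

    public class QueueFullSketch {
        public static void main(String[] args) {
            // Same fixed capacity as the timer's internal queue.
            ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1024);
            for (int i = 0; i < 1024; i++) {
                queue.add(i);    // fills the fixed capacity
            }
            queue.add(1024);     // throws java.lang.IllegalStateException: Queue full
        }
    }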

@lexburner
Contributor

    /**
     * Creates a new timer.
     *
     * @param threadFactory      a {@link ThreadFactory} that creates a
     *                           background {@link Thread} which is dedicated to
     *                           {@link TimerTask} execution.
     * @param tickDuration       the duration between tick
     * @param unit               the time unit of the {@code tickDuration}
     * @param ticksPerWheel      the size of the wheel
     * @param maxPendingTimeouts The maximum number of pending timeouts after which call to
     *                           {@code newTimeout} will result in
     *                           {@link java.util.concurrent.RejectedExecutionException}
     *                           being thrown. No maximum pending timeouts limit is assumed if
     *                           this value is 0 or negative.
     * @throws NullPointerException     if either of {@code threadFactory} and {@code unit} is {@code null}
     * @throws IllegalArgumentException if either of {@code tickDuration} and {@code ticksPerWheel} is <= 0
     */
    public HashedWheelTimer(
            ThreadFactory threadFactory,
            long tickDuration, TimeUnit unit, int ticksPerWheel,
            long maxPendingTimeouts) {}

@carryxyh Keeping maxPendingTimeouts at its default value of -1 may work well.
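
For illustration, a minimal sketch of constructing the timer with that default via the constructor quoted above (the thread factory, tick duration, and wheel size here are arbitrary placeholders, not the values Dubbo actually uses):

    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.apache.dubbo.common.timer.HashedWheelTimer;

    public class TimerConstructionSketch {
        public static void main(String[] args) {
            // A non-positive maxPendingTimeouts (e.g. -1) means no pending-timeout
            // limit is enforced, per the javadoc above.
            HashedWheelTimer timer = new HashedWheelTimer(
                    Executors.defaultThreadFactory(),
                    100, TimeUnit.MILLISECONDS, // tickDuration and its unit
                    512,                        // ticksPerWheel
                    -1);                        // maxPendingTimeouts: unlimited
            timer.stop();                       // release the worker thread
        }
    }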

@carryxyh
Member

@lexburner
That is not the same problem. Besides, we already use the default -1 for our timeout check.

@lexburner
Contributor

lexburner commented Feb 12, 2019

Dubbo's HashedWheelTimer is adapted from Netty's HashedWheelTimer source code, but it uses an ArrayBlockingQueue instead of an MpscQueue:

new ArrayBlockingQueue<HashedWheelTimeout>(1024)

An ArrayBlockingQueue cannot grow beyond its initial capacity, and it allocates its backing array eagerly, so switching to a LinkedBlockingQueue is better than setting the ArrayBlockingQueue size to Integer.MAX_VALUE.
I will fix it through a PR, as sketched below.
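
A rough sketch of the direction of the fix (the field and inner-class names here follow the Netty-derived timer and are assumptions; the actual PR may differ):

    import java.util.Queue;
    import java.util.concurrent.LinkedBlockingQueue;

    class HashedWheelTimerSketch {
        // Placeholder for the real inner class holding a task and its deadline.
        static final class HashedWheelTimeout {}

        // before: new ArrayBlockingQueue<HashedWheelTimeout>(1024)
        //   -- fixed capacity, add() throws "Queue full" under heavy load.
        // after: an unbounded linked queue that grows with the number of
        //   pending timeouts instead of rejecting them.
        private final Queue<HashedWheelTimeout> timeouts = new LinkedBlockingQueue<>();
    }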

@carryxyh
Member

@lexburner
Agree with you.
Looking forward to your PR. I will help review it!

@carryxyh
Member

Closed via #3451.
