Huge tail latency in the TLS performance tests #1434

Open
vankoven opened this issue Aug 5, 2020 · 4 comments

Labels: kernel (The Linux mainstream issues), performance
Milestone: 0.7 HTTP/2
vankoven commented Aug 5, 2020

Scope

While performance testing we ran into unexpected latency gaps with a huge number of connections. It seems that our receive processing (in softirq) may introduce some inequity in the progress of parallel connections, while epoll mode appears to do extra balancing, so latencies for all connections are almost the same.

@vankoven vankoven added the bug label Aug 5, 2020
@vankoven vankoven added this to the 0.7 HTTP/2 milestone Aug 5, 2020
@krizhanovsky krizhanovsky added the kernel The Linux mainstream issues label Aug 14, 2020
@krizhanovsky krizhanovsky self-assigned this Aug 19, 2020
@krizhanovsky
Actually, in all our tests using tls-perf we saw huge tail/max latency, sometimes even higher than for Nginx/OpenSSL, so there is definitely an issue.

krizhanovsky commented Sep 2, 2020

I suppose the high tail latency is linked to the heavy cryptographic computations in softirq and to how Linux executes the __do_softirq() handler. I also suppose we see the other side of this effect in that the performance difference for Nginx/OpenSSL is much larger in a VM than on a bare-metal setup.

Note that the TTLS routines on the flame graph (collected in a VM) are called in two different contexts: run_ksoftirqd() (the wider one) and do_IRQ().
[flame graph image: ttls4]
During benchmarks with various Tempesta FW versions and Nginx/OpenSSL I observed very different ratios between the two contexts: sometimes we do more work in ksoftirqd threads, sometimes in IRQ context. IIRC Tempesta FW and Nginx/OpenSSL had different ratios, but I'm not sure. Probably, for Nginx/OpenSSL we just have more VM-exit events (see https://github.com/tempesta-tech/tempesta/wiki/Hardware-virtualization-performance#hostguest-transitions), which is why we see such a dramatic performance improvement in a VM w/o vAPIC.
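For reference, a flame graph like the one above can be collected with perf plus Brendan Gregg's FlameGraph scripts; the exact invocation used for this graph is an assumption:

perf record -a -g -F 99 -- sleep 60
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > ttls.svg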

Regarding the original issue with the high tail latency, up to +3000%, I believe this isn't just about whether the crypto is processed in ksoftirqd or do_IRQ. I assume significant network packet drops are involved: e.g., we do some heavy work in do_IRQ(), miss a NIC interrupt, and drop a packet, so TCP has to retransmit. We need to collect histograms of packet drops, missed interrupts, and TCP retransmissions. Probably an instrumentation patch is required to track only the outlier cases. I propose starting with an analysis of the ratio between ksoftirqd and do_IRQ work for Tempesta FW and Nginx/OpenSSL, in both VM and bare-metal setups.
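A possible starting point for those histograms with standard tooling, before writing an instrumentation patch (bpftrace, nstat and mpstat here are my assumptions, nothing Tempesta-specific):

# count dropped-skb kernel stacks system-wide via the stock skb:kfree_skb tracepoint
sudo bpftrace -e 'tracepoint:skb:kfree_skb { @drops[kstack] = count(); }'

# TCP retransmissions accumulated over the test window
nstat -az TcpRetransSegs

# per-CPU %irq vs %soft, to compare work done in hardirq context vs ksoftirqd
mpstat -P ALL 1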

The problem is linked with #1446 , kTLS encryption in softirq.

@krizhanovsky krizhanovsky changed the title Latency can increase for huge amount of TLS connections Huge tail latency in TLS performance test Sep 2, 2020
@krizhanovsky krizhanovsky changed the title Huge tail latency in TLS performance test Huge tail latency in the TLS performance tests Sep 2, 2020
vankoven commented Sep 4, 2020

Here are statistics for different numbers of concurrent client connections. Tested on Linux 4.14; Tempesta is at 2b61411. I ran tls-perf with different arguments:

tls-perf -T 180 -l $conns -t 8 --tls 1.2 192.168.76.7 8081
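The full sweep over the connection counts in the tables below can be scripted; the loop here is just a sketch, only the per-run command above is the actual one used:

for conns in 10 25 50 75 100 250 500 750 1000; do
    tls-perf -T 180 -l $conns -t 8 --tls 1.2 192.168.76.7 8081
done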

So the actual number of concurrent connections was $conns * thread_num = $conns * 8. First, the default sysctls were used:

net.ipv4.tcp_max_tw_buckets = 131072
net.ipv4.tcp_max_orphans = 131072
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_fin_timeout = 60
net.core.netdev_max_backlog = 10000
net.core.somaxconn = 131072
net.ipv4.tcp_max_syn_backlog = 131072
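(For reference, each value can be applied at runtime, e.g.:

sudo sysctl -w net.ipv4.tcp_tw_reuse=0

or the whole set can be persisted in /etc/sysctl.conf and loaded with sysctl -p.)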
|-------------------------------------------------------------------------------|
|               | HANDSHAKES/sec                | LATENCY, ms                   |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Server	| MAX	| AVG	| 95P	| MIN	| MIN	| AVG	| 95P	| MAX   |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
|    nginx-10	|  8122	| 8065	| 8066	| 7751	|   1	|   9	|   13	|    23 |
| tempesta-10	|  9540	| 9476	| 9460	| 9005	|   1	|   6	|    9	|    20 |
|    nginx-25	|  8161	| 8049	| 8039	| 6875	|   1	|  23	|   35	|    56 |
| tempesta-25	|  9667	| 9550	| 9553	| 8483	|   4	|  15	|   23	|    54 |
|    nginx-50	|  8140	| 7987	| 7950	| 6156	|   5	|  48	|   70	|   102 |
| tempesta-50	|  9608	| 9522	| 9515	| 8122	|   1	|  32	|   60	|    74 |
|    nginx-75	|  8180	| 7934	| 7869	| 7017	|   5	|  73	|  109	|   128 |
| tempesta-75	|  9615	| 9512	| 9523	| 7485	|   3	|  48	|   91	|   105 |
|    nginx-100	|  8164	| 7879	| 7784	| 6340	|   2	|  98	|  143	|   186 |
| tempesta-100	|  9585	| 9440	| 9423	| 7855	|   2	|  62	|  114	|   202 |
|    nginx-250	|  8283	| 7783	| 7536	| 5915	|  58	| 206	|  263	|   456 |
| tempesta-250	|  9554	| 9359	| 9325	| 6498	|   4	| 136	|  434	|  1137 |
|    nginx-500	|  8317	| 7736	| 7451	| 6094	|  77	| 272	|  387	|   728 |
| tempesta-500	|  9645	| 9328	| 9267	| 6911	|  53	| 282	|  462	|  1199 |
|    nginx-750	|  8437	| 7746	| 7369	| 6169	| 117	| 431	|  407	| 19355 |
| tempesta-750	| 10195	| 9297	| 9020	| 5399	| 234	| 420	|  486	|  2206 |
|    nginx-1000	|  8281	| 7734	| 7432	| 7168	| 113	| 549	|  441	| 34534 |
| tempesta-1000	| 10968	| 9285	| 8301	| 7994	| 162	| 502	| 1046	|  2865 |
|-------------------------------------------------------------------------------|

Tempesta usually provides lower tail latencies at every connection count except 100 and 250, while Nginx had huge tail latencies at connection counts above 500. During benchmarking for Netdev I saw the exact opposite results at 1000 concurrent connections! Average performance here is also better than what I got in the tests for the article.

Then I set the sysctls to values similar to those used in the "VM tests" from the article:

net.ipv4.tcp_max_tw_buckets = 32
net.ipv4.tcp_max_orphans = 32
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 1
net.core.netdev_max_backlog = 10000
net.core.somaxconn = 131072
net.ipv4.tcp_max_syn_backlog = 131072
|-------------------------------------------------------------------------------|
|               | HANDSHAKES/sec                | LATENCY, ms                   |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Server	| MAX	| AVG	| 95P	| MIN	| MIN	| AVG	| 95P	| MAX   |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
|    nginx-10	|  8112	| 7891	| 7004	| 4972	|   1	|   9	|    13	|    38 |
| tempesta-10	|  9519	| 9227	| 8112	| 5907	|   1	|   6	|     9	|    33 |
|    nginx-25	|  8158	| 7944	| 7357	| 6092	|   1	|  23	|    36	|    70 |
| tempesta-25	|  9628	| 9325	| 8187	| 6943	|   1	|  16	|    26	|    56 |
|    nginx-50	|  8194	| 7881	| 7353	| 5184	|   1	|  48	|    73	|    90 |
| tempesta-50	|  9583	| 9396	| 9062	| 7954	|   3	|  31	|    58	|    66 |
|    nginx-75	|  8108	| 7763	| 7174	| 5795	|   1	|  78	|   138	|   168 |
| tempesta-75	|  9583	| 9285	| 8603	| 7427	|   3	|  51	|    89	|   173 |
|    nginx-100	|  8188	| 7724	| 7093	| 5405	|   3	|  98	|   147	|   174 |
| tempesta-100	|  9571	| 9245	| 8547	| 6775	|  16	|  65	|    99	|   137 |
|    nginx-250	|  8206	| 7664	| 6726	| 5765	|  29	|  183	|   253	|   457 |
| tempesta-250	|  9540	| 9255	| 8758	| 7075	|  14	|  141	|   328	|   449 |
|    nginx-500	|  8270	| 7569	| 7049	| 6034	|  79	|  287	|   392	|   683 |
| tempesta-500	|  9924	| 9130	| 8543	| 5162	|  55	|  286	|   473	|  2243 |
|    nginx-750	|  8407	| 7587	| 6867	| 5675	| 131	|  644	|   409	| 29819 |
| tempesta-750	| 10074	| 9075	| 8259	| 6028	| 124	|  422	|   503	|  1903 |
|    nginx-1000	|  8268	| 7576	| 6873	| 5335	| 100	| 1703	| 23419	| 24266 |
| tempesta-1000	| 11319	| 9108	| 7869	| 6534	| 185	|  570	|  1152	|  2947 |
|-------------------------------------------------------------------------------|

Statistically nothing has changed: Nginx has the larger tails, not Tempesta... Still guessing how that could be.

I ran the benchmark many times; sometimes large latency tails appeared for Tempesta, but not always, and Nginx still got larger ones.

% tail -n 2 test-1434/*1000*                       
==> test-1434/tls-1.2-hs-bench-nginx-no-tickets-1000.out <==
 HANDSHAKES/sec:  MAX 8375; AVG 7560; 95P 6984; MIN 4495
 LATENCY (ms):    MIN 136; AVG 625; 95P 402; MAX 43012

==> test-1434/tls-1.2-hs-bench-tempesta-no-tickets-1000.out <==
 HANDSHAKES/sec:  MAX 11343; AVG 9106; 95P 7904; MIN 6731
 LATENCY (ms):    MIN 149; AVG 551; 95P 1152; MAX 3245

% tail -n 2 test-1434/*1000*                       
==> test-1434/tls-1.2-hs-bench-nginx-no-tickets-1000.out <==
 HANDSHAKES/sec:  MAX 8387; AVG 7687; 95P 7344; MIN 5550
 LATENCY (ms):    MIN 80; AVG 518; 95P 447; MAX 102095

==> test-1434/tls-1.2-hs-bench-tempesta-no-tickets-1000.out <==
 HANDSHAKES/sec:  MAX 10900; AVG 9108; 95P 7591; MIN 6233
 LATENCY (ms):    MIN 239; AVG 548; 95P 1135; MAX 51750

I tried to monitor packet drops via https://github.com/nhorman/dropwatch:

% sudo ./src/dropwatch -l kas
Initializing kallsyms db
dropwatch> start
Enabling monitoring...
Kernel monitoring activated.
Issue Ctrl-C to stop monitoring

--------------------- While stressing Tempesta --------------------------

122 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
3848 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
18821 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
90 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1498 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
18783 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
208 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1452 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
712 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19243 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
85 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
19217 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1280 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
119 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
19021 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
61 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1062 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
18951 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
106 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1097 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
18820 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
71 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1384 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
18868 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
83 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1503 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19100 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
119 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
5934 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19332 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
5751 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
137 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
18917 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
277 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1172 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
19048 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
207 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
743 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
18831 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1623 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
74 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
18845 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
47 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1424 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19218 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
134 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
833 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19239 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
105 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1017 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
18841 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
74 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1437 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
18913 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
75 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1621 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19220 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1246 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
124 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
19290 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1089 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
119 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
18861 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
96 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1525 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
18841 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
87 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1604 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
19236 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1285 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
142 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
19152 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1252 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
132 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
18824 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
104 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
2439 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
18824 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
111 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
2451 drops at kfree_skb_list+13 (0xffffffff91558f43) [software]

--------------------------------- while stressing nginx ----------------------------

36 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
583 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
74 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
669 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
85 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
524 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
105 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
722 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
123 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
25 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
372 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
66 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
449 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
318 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
306 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
294 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
515 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
569 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
298 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
145 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
153 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
199 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
389 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
413 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
323 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
62 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
308 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
82 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
141 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
100 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
155 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
76 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]
138 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
239 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
2 drops at tcp_v4_do_rcv+6b (0xffffffff915dad0b) [software]

----------------------- This is how it looks for Tempesta on a small number of connections (10) -----------------

19334 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19329 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19346 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19310 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19357 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
17618 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19357 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
18556 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
18631 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
17102 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
14651 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at unix_stream_connect+2d7 (0xffffffff91621497) [software]
12335 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
3 drops at unix_stream_connect+2d7 (0xffffffff91621497) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
16665 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
18840 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
18564 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
17956 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
17673 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19439 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19499 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19449 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19345 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19426 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19477 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19451 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19381 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19472 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19516 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19433 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19475 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19442 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19386 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
19446 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
19493 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
19430 drops at __brk_limit+2dee0186 (0xffffffffc0d13186) [software]

------------------- Almost silent for nginx on 10 connections: --------------
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
3 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
3 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
1 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
4 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
1 drops at ip_forward+1b3 (0xffffffff915b65b3) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
25 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
1 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
3 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
1 drops at tcp_v4_rcv+15f (0xffffffff915dbb2f) [software]
3 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
1 drops at sk_stream_kill_queues+52 (0xffffffff9155fe52) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]
2 drops at __udp4_lib_rcv+954 (0xffffffff915e88e4) [software]


Don't bother with the UDP drops - they're not related to this issue.
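A side note on the dominant __brk_limit+2dee0186 entries above: dropwatch couldn't resolve these addresses via kallsyms, which usually means the drop site is inside a loaded module; presumably the Tempesta FW module, though that attribution is an assumption. One way to check:

sudo grep -i tempesta /proc/modules    # prints the module size and load address
# the drop site 0xffffffffc0d13186 should fall within [load_address, load_address + size)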

@krizhanovsky
In 0.7 we improved the tail latency with even faster crypto thanks to #1064, so I'm moving the task to 0.8, closer to #1446.
