Non-keepalive HTTP requests handling #1415
1 CPU VM

Test case

I tried 600-byte responses, as in https://github.com/F-Stack/f-stack#nginx-testing-result. Tempesta FW and Nginx are running inside a VM, and the VM is accessible from the host system. All the test cases were performed on this setup.

Nginx config

Nginx 1.14.2 was used with the following configuration file:
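For illustration, a minimal single-worker benchmark configuration could look like the sketch below; the paths, port, and tuning values are assumptions, not the original settings:

```nginx
# Illustrative sketch only; all values are assumptions.
worker_processes 1;              # 1-CPU VM

events {
    worker_connections 65536;
}

http {
    access_log off;              # avoid disk I/O skewing the benchmark

    server {
        listen 8080 backlog=16384;

        location / {
            # a 600-byte static file, matching the F-Stack test
            root /var/www/html;
        }
    }
}
```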
Tempesta FW config

The Nginx instance is used as the backend web server.
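Again only a sketch, under the assumption of a plain reverse proxy with caching off; the listen port and backend address are hypothetical:

```
listen 80;              # assumed front-end port
server 127.0.0.1:8080;  # the local Nginx backend
cache 0;                # proxy-only mode, no caching
```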
Benchmark tool
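The issue describes the non-keepalive load as `ab` run without the `-k` flag, so every request opens a fresh TCP connection; a hypothetical invocation (the address and request counts are assumptions):

```sh
# No -k: each of the 100000 requests uses a new TCP connection.
ab -n 100000 -c 100 http://192.168.100.4:8080/
```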
The same non-keepalive test can be done with

Results

During all the tests there is no idle CPU, and the VM eats more than 100% CPU due to the Nginx backend running inside the same VM.
Tempesta FW
Which is just 5% more than for Nginx.

Tempesta FW (HTTP/2 regression)
Which is below Nginx's results. There are 3 key points in the perf profile for Tempesta FW:
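A typical way to collect such a kernel profile is sketched below; the exact perf invocation used for this issue is an assumption:

```sh
# System-wide profile with call graphs for ~30 seconds under load.
perf record -a -g -- sleep 30
perf report --no-children
```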
Summary
User space TCP/IP stacks

As the original report referenced F-Stack as an alternative solution, it's worth discussing the differences between the approaches. Also read our review of the Google Snap paper.

DISCLAIMER 1: we are married to the Linux TCP/IP stack at the moment, but we'd love to split up if we saw more opportunities in user-space networking. It would be quite a lot of work to move Tempesta FW to DPDK or a similar platform, but it is doable.

DISCLAIMER 2: there is nothing specific about F-Stack here; there are several other kernel-bypass solutions, e.g. Seastar. This is also not a competitive comparison of Tempesta FW with F-Stack/Nginx:
Scaling vs performance

First of all, F-Stack delivers even worse performance than the Linux kernel TCP/IP stack on a small number of connections: it still suffers from memory copies. As discussed in the referenced thread, the kernel-bypass project is mostly about scaling across CPU cores rather than raw performance. It's still a TODO for this issue to explore how the Linux TCP/IP stack scales with Tempesta FW as the number of CPU cores grows.

The application layer is the biggest bottleneck

Some time ago we compared the in-kernel Tempesta FW with a DPDK-based HTTP server, Seastar (details are in my talk).
The same goes for Redis on top of F-Stack: a fast network layer doesn't help the application's performance much.

Mainstream performance extensions

With Tempesta FW we keep the Linux kernel patch as small as possible to be able to migrate to newer kernels easily. (Honestly, we're not so quick at this.) The F-Stack team seems to need quite a lot of work to move to a newer FreeBSD TCP/IP stack. Since F-Stack doesn't seem to do much reworking of the FreeBSD TCP/IP stack, the question is whether the FreeBSD TCP/IP stack is actually faster than Linux's. It seems not. There are other performance comparisons around Linux vs FreeBSD performance and scaling on multiple cores, e.g. https://www.phoronix.com/scan.php?page=article&item=3990x-freebsd-bsd .

Back in 2009-2010 we did some work on FreeBSD performance improvements for web hosting needs. In most cases we just re-implemented some mechanisms from the Linux kernel. We also considered FreeBSD as the platform for Tempesta FW (mostly because of the license), but ended up with Linux solely for performance reasons.

DPDK in general

It does make sense to consider DPDK for new network protocols like QUIC. However, the following concerns must be considered:
On the other hand, DPDK basically doesn't provide anything better than the Linux softirq infrastructure. This means that a QUIC implementation developed from scratch in kernel space won't carry legacy code supporting too many features, so it could be no slower than a DPDK-based one.

Summary
It seems there are some problems with the performance tests of F-Stack against vanilla Nginx/Linux: F-Stack/f-stack#519. I hope the F-Stack team replies regarding the issue; otherwise it makes sense to test Tempesta FW only against Nginx/Linux, not Nginx/F-Stack.
Preliminary results are described in the Wiki, https://github.com/tempesta-tech/tempesta/wiki/HTTP-transactions-performance:
We need to check the results further with the F-Stack and F5 guys because our picture is completely different:
TODO
A few notes about DPDK as an addition to #1415 (comment)
Preemption control is still possible on DPDK.
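For illustration only (none of this is from the original notes): a DPDK poll-mode thread is usually shielded from kernel preemption by isolating cores at boot and pinning the thread to them:

```sh
# Assumed kernel command line: keep cores 2-3 away from the scheduler,
# timer ticks, and RCU callbacks:
#   isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3

# Pin the polling process to an isolated core and give it a real-time
# FIFO priority (the binary name and core number are hypothetical).
taskset -c 2 chrt --fifo 50 ./dpdk-app
```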
IMHO, rather weak references are used in the comparison of networking performance between FreeBSD and Linux. Using FreeBSD and commodity parts, Netflix achieves 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU (source).

Add FreeBSD kernel-side support for in-kernel TLS
NUMA Siloing in the FreeBSD Network Stack (Or how to serve 200Gb/s of TLS from FreeBSD)
Hi @sburn, I didn't get whether 'links' means the references which I used to compare the FreeBSD and Linux networking stacks, or network links :) In the second case, there are no physical network links in the benchmarks - all the networking was done between two VMs on the same host. In the first case, unfortunately I didn't find any good and fresh comparative benchmarks for Linux and FreeBSD networking. FreeBSD uses very tiny socket buffers
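The defaults can be inspected with sysctl; a sketch (exact values differ between FreeBSD releases):

```sh
# Default FreeBSD per-socket TCP buffer sizes and the global cap.
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace kern.ipc.maxsockbuf
```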
The issue is actually a duplicate of #806. See more details in the wiki pages:
Performance data for Tempesta FW on a 4-CPU VM with a macvtap interface:

For Nginx on the same setup:

The bottleneck for Tempesta FW is host interrupts.
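For reference, a macvtap interface like the one used above can be created on the host roughly as follows (the parent device name and mode are assumptions):

```sh
# Attach a macvtap device to the physical NIC in bridge mode and bring
# it up; the VM then uses it as its network backend.
ip link add link eth0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up
```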
Unfortunately, at the moment we have no good enough hardware with a NIC supporting SR-IOV and a CPU supporting vAPIC. I created a task for the HTTP/2 performance regression: #1422. I'll also update the Wiki pages about the benchmarks and virtual environment performance, and add specific system requirements for virtual environments to https://github.com/tempesta-tech/tempesta/wiki/Requirements .
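Whether a given host meets these requirements can be checked along these lines (a sketch; `kvm_intel` is assumed for Intel hosts):

```sh
# Does any PCI device (the NIC) advertise the SR-IOV capability?
lspci -vvv | grep -i "SR-IOV"

# Is APIC virtualization (vAPIC/APICv) enabled in KVM?
cat /sys/module/kvm_intel/parameters/enable_apicv
```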
Scope
There was a claim that Tempesta FW processes new HTTP connections (ab without -k was used) at about the same speed as Nginx or HAProxy. The expected results aren't less than https://github.com/F-Stack/f-stack#nginx-testing-result .

Testing
Need to measure the performance and write an appropriate Wiki page on how to set up a testing environment. (I'd expect that ab was unable to cause enough load, that the issue is in a virtualized NIC inside the VM, or that there is some other environmental issue.)

I mark the issue as a bug because we never profiled exactly this workload, so there could be some synchronization issue.
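One hypothetical way to rule out a client-side bottleneck (the address and counts below are assumptions) is to run several ab instances in parallel and sum their rates:

```sh
# Four concurrent ab clients, each opening fresh connections (no -k).
for i in 1 2 3 4; do
  ab -n 50000 -c 50 http://192.168.100.4:8080/ > ab.$i.log &
done
wait
grep "Requests per second" ab.*.log
```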