
Performance improvements #1250

Merged (2 commits, Jan 19, 2021)
Conversation

eloycoto (Contributor)

@@ -34,6 +34,7 @@ events {

http {
sendfile on;
sendfile_max_chunk 512k;
Contributor


I've read the description in https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk but it's not clear to me in which cases this could be worse than the default. What scenario did you test and what perf numbers did you get?

Contributor Author


Mainly when large bodies are sent. We have a Hyperfoil test case that sends many large bodies; with this change, we got around 5% more RPS.

Contributor


👍
Do you know if there's a performance penalty for use cases with small or no request bodies?

Contributor Author


No penalty: the limit is only an upper bound per write, so data is written in chunks of up to that size rather than having to fill a chunk. Small or empty bodies are unaffected.

When a large body is sent through APICast, sendfile is unlimited, so a
single connection can seize the whole worker process and leave things
in a very bad state.

This commit sets a per-call chunk limit.

Doc: http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk

Signed-off-by: Eloy Coto <eloy.coto@acalustra.com>
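
For context, a minimal sketch of the relevant nginx `http` block with the directive this PR adds (the 512k value matches the diff above; everything else is illustrative):

```
http {
    # Use the kernel's zero-copy sendfile() path for file-backed responses.
    sendfile on;

    # Cap how much one sendfile() call may transfer before nginx yields
    # back to the event loop. Without a limit, one connection streaming a
    # large body can monopolize the worker process.
    sendfile_max_chunk 512k;
}
```

Since nginx 1.21.4 the directive defaults to 2m; on the versions APICast targeted at the time, the default was 0 (unlimited), which is why an explicit cap mattered.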
Because a lot of users use OpenID Connect, the token is stored in
this shared memory, so it gets full quite fast.

With this change, the value is a bit bigger, and authrep calls will be
executed out of band instead of at request time.

Signed-off-by: Eloy Coto <eloy.coto@acalustra.com>
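
The commit message does not include the diff, but the shared memory it refers to would be declared with OpenResty's `lua_shared_dict` directive. A hedged sketch of what such a change could look like (the zone name and sizes are assumptions, not taken from the PR):

```
http {
    # Shared memory zone caching authorization results. With OpenID
    # Connect, full JWT access tokens are used as cache keys, so the zone
    # fills much faster than with short API keys. A larger zone keeps
    # authrep calls batched out of band instead of at request time.
    lua_shared_dict cached_auths 20m;  # hypothetical name/size, e.g. raised from 1m
}
```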