
[THREESCALE-8410] Add support to set proxy buffer size #1475

Merged · 2 commits · Jul 25, 2024

Conversation

tkan145
Contributor

@tkan145 tkan145 commented Jun 17, 2024

What

Fix https://issues.redhat.com/browse/THREESCALE-8410

Notes

What are the differences between buffers?

  • proxy_buffers: sets the number and size of the buffers NGINX can use to hold the response from the upstream. If the response is larger than the total size of proxy_buffers, NGINX writes part of the response to a temporary file on disk.
  • proxy_buffer_size: sets the size of the buffer used for the first part of the response, which mainly holds the response headers.
  • proxy_busy_buffers_size: limits the size of the buffers that can be busy sending the response to the client while the response has not yet been fully read from the upstream server.
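A minimal sketch of how these three directives sit together in an NGINX config (the upstream name and values are illustrative, not recommendations):

```nginx
http {
    server {
        location / {
            proxy_pass http://my-upstream:8080;

            # 8 buffers of 4k each = 32k total for the upstream response
            proxy_buffers 8 4k;

            # buffer for the first part of the response (mainly headers)
            proxy_buffer_size 4k;

            # must be >= max(proxy_buffer_size, one proxy_buffers buffer)
            # and <= total proxy_buffers size minus one buffer
            proxy_busy_buffers_size 8k;
        }
    }
}
```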

What are the correct values for the buffers?

  • proxy_buffers:
    • Default: 8 4k|8k;
    • Min: must be at least 2 buffers
    • Max: no limit
  • proxy_buffer_size:
    • Min: no limit; it can be set smaller than the default (4k|8k), but that is not recommended
    • Max: no limit, but it should be no less than the maximum possible size of the response HTTP headers
    • Default: one memory page size (4k|8k)
  • proxy_busy_buffers_size:
    • Min: cannot be smaller than a single proxy_buffers buffer, and must be equal to or greater than the maximum of proxy_buffer_size and one proxy_buffers buffer.
    • Max: must be less than the total size of proxy_buffers minus one buffer (e.g. with proxy_buffers 8 4k: 8*4k = 32k, 32k - 4k = 28k).
    • Default: if not explicitly defined, proxy_busy_buffers_size is "the bigger of: twice proxy_buffer_size and the size of two proxy_buffers buffers". This also means that if you set a bigger proxy_buffer_size, you implicitly increase proxy_busy_buffers_size as well.

Reference: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L3442
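The rules above can be sketched as a small Python check (a hypothetical helper for illustration; sizes are in bytes, and the function names are not from the nginx or APIcast codebases):

```python
def parse_size(s):
    """Parse an nginx-style size string like '4k' or '1m' into bytes."""
    units = {"k": 1024, "m": 1024 * 1024}
    s = s.strip().lower()
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def default_busy_buffers_size(buffer_size, one_buffer):
    # nginx default: the bigger of twice proxy_buffer_size
    # and the size of two proxy_buffers buffers
    return max(2 * buffer_size, 2 * one_buffer)

def validate(num_buffers, one_buffer, buffer_size, busy_size):
    """Check the constraints described above; returns a list of problems."""
    problems = []
    if num_buffers < 2:
        problems.append("proxy_buffers: must be at least 2 buffers")
    if busy_size < max(buffer_size, one_buffer):
        problems.append("proxy_busy_buffers_size: too small")
    if busy_size > num_buffers * one_buffer - one_buffer:
        problems.append("proxy_busy_buffers_size: too big")
    return problems

# Defaults on a 4k-page platform: proxy_buffers 8 4k, proxy_buffer_size 4k
busy = default_busy_buffers_size(parse_size("4k"), parse_size("4k"))
print(busy)                                                   # 8192, i.e. 8k
print(validate(8, parse_size("4k"), parse_size("4k"), busy))  # []
```

With the defaults, the derived proxy_busy_buffers_size is 8k, which satisfies both the minimum (>= 4k) and the maximum (<= 28k) constraints.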

Why 4k|8k?

This is equal to one memory page size, i.e. either 4K or 8K, depending on the platform.

How to check your page size

$ getconf PAGE_SIZE

Would increasing the buffer size also increase memory consumption?

Yes, the buffers are allocated per connection. How much, you may ask? I honestly don't know; once I get the profiling tools sorted I'll run a few benchmark tests.

Increasing the buffer number vs increasing the buffer size

Whether a bigger number of smaller buffers or a smaller number of bigger buffers works better depends on each user's use case (i.e. a lot of small responses vs a lot of big responses), as well as how much memory they have and how much memory they are willing to waste. So it's hard to provide one solution that fits all.

Due to the complex rules above, I personally think we should just provide one setting and increase the buffer size, instead of messing around with both the number and the size of the buffers. And memory is cheap.

The downside of this approach is that if a user sets a really big buffer size, e.g. proxy_buffers: 8 1024k, we would be allocating a 1MB buffer for every buffered connection even when the response could fit in the default memory page size (4k|8k). However, from my initial tests, nginx appears to allocate only the memory it needs; again, I will need to get those profiling tools sorted so I can peek into what is actually allocated.
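A back-of-the-envelope worst case for the 1024k example above (assuming every buffer were fully allocated for every buffered connection, which the initial tests suggest nginx avoids):

```python
# Hypothetical worst-case arithmetic: proxy_buffers 8 1024k means up to
# 8 buffers of 1 MiB each per buffered connection, if fully allocated.
num_buffers = 8
buffer_bytes = 1024 * 1024   # 1024k = 1 MiB
connections = 1000           # assumed concurrent buffered connections

per_connection = num_buffers * buffer_bytes
total = connections * per_connection
print(per_connection // (1024 * 1024))  # 8    MiB per connection
print(total // (1024 * 1024))           # 8000 MiB across 1000 connections
```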

Does this setting apply per product?

No, this setting is global.

Common errors:

upstream sent too big header while reading response header from upstream 

proxy_buffer_size is the only directive that needs tuning to resolve this error. However, due to the rules described above, proxy_busy_buffers_size may also need adjustment.
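With this PR, the tuning is exposed through the new APICAST_PROXY_BUFFER_SIZE variable. Conceptually, setting it to 16k (an illustrative value) renders NGINX directives along these lines (a sketch; that the buffer count stays at the default of 8 is an assumption):

```nginx
# Sketch of the effective directives when APICAST_PROXY_BUFFER_SIZE=16k
proxy_buffers 8 16k;     # buffer count unchanged (assumption); size raised
proxy_buffer_size 16k;
# proxy_busy_buffers_size is left for nginx to derive:
# max(2 * proxy_buffer_size, two buffers) = 32k, so the constraints hold
```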

Verification steps

  • Checkout this branch
  • Edit docker-compose-devel.yml as follows
diff --git a/docker-compose-devel.yml b/docker-compose-devel.yml
index 6e118560..a5ddbb2d 100644
--- a/docker-compose-devel.yml
+++ b/docker-compose-devel.yml
@@ -23,3 +23,7 @@ services:
       GIT_COMMITTER_EMAIL: ${GIT_COMMITTER_EMAIL:-""}
   redis:
     image: redis
+  my-upstream:
+    image: mccutchen/go-httpbin
+    expose:
+      - "8080"
  • Create an apicast-config.json file with the following content
cat <<EOF >apicast-config.json
{
    "services": [
        {
            "id": "1",
            "backend_version": "1",
            "proxy": {
                "hosts": [
                    "one"
                ],
                "api_backend": "http://my-upstream:8080/response-headers",
                "backend": {
                    "endpoint": "http://127.0.0.1:8081",
                    "host": "backend"
                },
                "policy_chain": [
                    {
                        "name": "apicast.policy.apicast"
                    }
                ],
                "proxy_rules": [
                    {
                        "http_method": "GET",
                        "pattern": "/",
                        "metric_system_name": "hits",
                        "delta": 1,
                        "parameters": [],
                        "querystring_parameters": {}
                    }
                ]
            }
        }
    ]
} 
EOF
  • Start the dev environment
make development
make dependencies
  • Run apicast locally
THREESCALE_DEPLOYMENT_ENV=staging APICAST_LOG_LEVEL=debug APICAST_WORKER=1 APICAST_CONFIGURATION_LOADER=lazy APICAST_CONFIGURATION_CACHE=0 THREESCALE_CONFIG_FILE=apicast-config.json ./bin/apicast
  • Capture apicast IP
APICAST_IP=$(docker inspect apicast_build_0-development-1 | yq e -P '.[0].NetworkSettings.Networks.apicast_build_0_default.IPAddress' -)
  • Generate big header
LARGE_HEADER=$(for i in {1..1024}; do echo -n 'ABCDE'; done)
  • Send request with big header
curl -i -k -H "Host: one" -H "Accept: application/json" "http://${APICAST_IP}:8080/?key=${LARGE_HEADER}&user_key="

It should return 502

HTTP/1.1 502 Bad Gateway
Server: openresty
Date: Thu, 20 Jun 2024 08:31:46 GMT
Content-Type: text/html
Content-Length: 154
Connection: keep-alive

and this line from the log

upstream sent too big header while reading response header from upstream
  • Stop the gateway
CTRL-C
  • Start gateway again with APICAST_PROXY_BUFFER_SIZE=8k
THREESCALE_DEPLOYMENT_ENV=staging APICAST_LOG_LEVEL=debug APICAST_WORKER=1 APICAST_CONFIGURATION_LOADER=lazy APICAST_CONFIGURATION_CACHE=0 THREESCALE_CONFIG_FILE=apicast-config.json APICAST_PROXY_BUFFER_SIZE="8k" ./bin/apicast
  • Send request again
curl -i -k -H "Host: one" -H "Accept: application/json" "http://${APICAST_IP}:8080/?key=${LARGE_HEADER}&user_key="

This time it should return HTTP/1.1 200 OK

HTTP/1.1 200 OK
Server: openresty
Date: Thu, 20 Jun 2024 09:04:58 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked

@tkan145 tkan145 changed the title Add support to set proxy buffer size [THREESCALE-8410] Add support to set proxy buffer size Jun 19, 2024
@tkan145 tkan145 marked this pull request as ready for review June 20, 2024 09:09
@tkan145 tkan145 requested review from a team as code owners June 20, 2024 09:09
CHANGELOG.md Outdated
@@ -17,6 +17,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

- Support Financial-grade API (FAPI) - Baseline profile [PR #1465](https://github.com/3scale/APIcast/pull/1465) [THREESCALE-10973](https://issues.redhat.com/browse/THREESCALE-10973)

- Added `APICAST_PROXY_BUFFER_SIZE` variable to allow configure the size of the buffer used for handling the response received from the proxied server. [PR #1473](https://github.com/3scale/APIcast/pull/1473), [THREESCALE-8410](https://issues.redhat.com/browse/THREESCALE-8410)
Suggested change
- Added `APICAST_PROXY_BUFFER_SIZE` variable to allow configure the size of the buffer used for handling the response received from the proxied server. [PR #1473](https://github.com/3scale/APIcast/pull/1473), [THREESCALE-8410](https://issues.redhat.com/browse/THREESCALE-8410)
- Added the `APICAST_PROXY_BUFFER_SIZE` variable to allow configuration of the buffer size for handling responses from proxied servers. [PR #1473](https://github.com/3scale/APIcast/pull/1473), [THREESCALE-8410](https://issues.redhat.com/browse/THREESCALE-8410)

**Default:** 4k|8k;
**Value:** string

Sets the size of the buffer used for handling the response received from the proxied server. This variable will set both [`proxy_buffer` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) and [`proxy_buffer_size` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size). By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.

@dfennessy dfennessy Jun 20, 2024


Suggested change
Sets the size of the buffer used for handling the response received from the proxied server. This variable will set both [`proxy_buffer` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) and [`proxy_buffer_size` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size). By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.
Sets the size of the buffer used for handling the response received from the proxied server. This variable sets both [`proxy_buffer` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) and [`proxy_buffer_size` NGINX directive](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size). By default, the buffer size is equal to one memory page. This is either 4 K or 8 K, depending on a platform.


@dfennessy dfennessy Jun 20, 2024


What unit does the K stand for? Do you mean kilobytes or kibibytes?

Contributor Author


Sorry for my ignorance, but aren't they the same, i.e. 1024 bytes? Actually, the sentence above was copied from the Nginx docs:
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size

Contributor


The reason I asked for clarification is that the IBM Style Guide "distinguishes between uppercase K
(meaning 1024, or 2 to the power of 10) and lowercase k (meaning 1000, or 10 to the power of 3). Therefore, KB means 1024 bytes, and kB means 1000 bytes" for kilobyte, and therefore we must use either KB (1024 bytes) or kB (1000 bytes) depending on the meaning of 4 K or 8 K in the documentation.

However, we would write kibibytes as KiB.

Sorry for the confusion.

Contributor Author


@dfennessy updated to use KiB as you suggested. If you are happy with this change, could you please click that ✅ button? 😄

Contributor


Hi @tkan145, it wasn't my intention to muddy the waters. Thank you for your understanding 😄

Just to reiterate: KB means 1024 bytes, and kB means 1000 bytes.

If you're sure you want to use kibibytes (KiB), where one KiB equals 1024 bytes, then I approve.

Either of the above, I approve as long as it works for the reader's understanding 🤓

Contributor

@dfennessy dfennessy left a comment


A couple of suggestions and a question 🤓

@tkan145
Contributor Author

tkan145 commented Jul 12, 2024

@dfennessy thanks. I've updated the documentation based on your feedback

@tkan145 tkan145 force-pushed the THREESCALE-8410-proxy-buffers branch from e70fd45 to 9b428f9 Compare July 16, 2024 06:15
@tkan145 tkan145 force-pushed the THREESCALE-8410-proxy-buffers branch from 9b428f9 to 5b984c6 Compare July 23, 2024 01:50
@dfennessy dfennessy self-requested a review July 24, 2024 08:53
Contributor

@dfennessy dfennessy left a comment


lgtm

@tkan145 tkan145 merged commit db70f43 into 3scale:master Jul 25, 2024
13 checks passed
@tkan145 tkan145 deleted the THREESCALE-8410-proxy-buffers branch July 25, 2024 01:31