Add support to set large_client_header_buffers directive
tkan145 committed Feb 8, 2024
1 parent 49722c9 commit 9e6c973
Showing 4 changed files with 213 additions and 1 deletion.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -45,6 +45,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

- Dev environment: Camel proxy [PR #1441](https://github.com/3scale/APIcast/pull/1441)

- Added `APICAST_LARGE_CLIENT_HEADER_BUFFERS` variable to configure the nginx `large_client_header_buffers` directive [PR #1446](https://github.com/3scale/APIcast/pull/1446), [THREESCALE-10164](https://issues.redhat.com/browse/THREESCALE-10164)

## [3.14.0] 2023-07-25

### Fixed
10 changes: 10 additions & 0 deletions doc/parameters.md
@@ -507,6 +507,16 @@ directive](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_
This parameter is only used by the services that are using content caching
policy.

### `APICAST_LARGE_CLIENT_HEADER_BUFFERS`

**Default:** 4 8k
**Value:** string

Sets the maximum number and size of buffers used for reading a large client request header.

The format for this value is defined by the [`large_client_header_buffers` NGINX
directive](https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers).

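As an illustration (not part of this commit), the variable could be passed to a containerized gateway at startup; the image tag and portal endpoint below are placeholders:

```shell
# Hypothetical example: start APIcast with larger header buffers.
# The portal endpoint and image tag are placeholders, not values from this commit.
docker run --rm -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT="https://ACCESS_TOKEN@ACCOUNT-admin.3scale.net" \
  -e APICAST_LARGE_CLIENT_HEADER_BUFFERS="4 12k" \
  quay.io/3scale/apicast:latest
```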
### `OPENTELEMETRY`

This environment variable enables NGINX instrumentation using OpenTelemetry tracing library.
4 changes: 3 additions & 1 deletion gateway/http.d/core.conf
@@ -1 +1,3 @@
client_max_body_size 0;

large_client_header_buffers {{env.APICAST_LARGE_CLIENT_HEADER_BUFFERS | default: "4 8k"}};
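The template line above resolves the environment variable through Liquid's `default` filter, falling back to nginx's own default of `4 8k` when the variable is unset. A minimal Python sketch of that resolution logic (illustrative only, not APIcast code):

```python
import os

def resolve_buffers(default="4 8k"):
    # Mirror the Liquid `default` filter: fall back when the
    # variable is unset or empty.
    value = os.environ.get("APICAST_LARGE_CLIENT_HEADER_BUFFERS")
    return value if value else default

os.environ.pop("APICAST_LARGE_CLIENT_HEADER_BUFFERS", None)
print(resolve_buffers())   # prints the fallback "4 8k"
os.environ["APICAST_LARGE_CLIENT_HEADER_BUFFERS"] = "4 12k"
print(resolve_buffers())   # prints the override "4 12k"
```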
198 changes: 198 additions & 0 deletions t/large-client-header-buffers.t
@@ -0,0 +1,198 @@
use lib 't';
use Test::APIcast::Blackbox 'no_plan';

run_tests();

__DATA__
=== TEST 1: large header (the header exceeds the size of one buffer)
Default configuration for large_client_header_buffers: 4 8k
--- configuration env
{
"services": [
{
"id": 42,
"backend_version": 1,
"backend_authentication_type": "service_token",
"backend_authentication_value": "token-value",
"proxy": {
"api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
"proxy_rules": [
{ "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
]
}
}
]
}
--- backend
location /transactions/authrep.xml {
content_by_lua_block {
ngx.exit(ngx.OK)
}
}
--- upstream
client_header_buffer_size 10;
location / {
content_by_lua_block {
ngx.print(ngx.req.raw_header())
}
}
--- more_headers eval
my $s = "User-Agent: curl\nBah: bah\n";
$s .= "Accept: */*\n";
$s .= "Large-Header: " . "ABCDEFGH" x 1024 . "\n";
$s
--- request
GET /?user_key=value
--- error_code: 400
--- no_error_log
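As a sanity check on the arithmetic behind the expected 400: each header line must fit in a single `large_client_header_buffers` buffer, and the default buffer size is 8k (8192 bytes). The generated header value alone already fills 8 KiB, so the full line overflows one buffer:

```python
# "ABCDEFGH" is 8 bytes, repeated 1024 times -> exactly 8 KiB.
value = "ABCDEFGH" * 1024
line = "Large-Header: " + value + "\r\n"
assert len(value) == 8192   # the value alone fills one default 8k buffer
assert len(line) > 8192     # name + value overflows it, so nginx replies 400
```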
=== TEST 2: large header with APICAST_LARGE_CLIENT_HEADER_BUFFERS set to 4 12k
--- env eval
(
'APICAST_LARGE_CLIENT_HEADER_BUFFERS' => '4 12k',
)
--- configuration
{
"services": [
{
"id": 42,
"backend_version": 1,
"backend_authentication_type": "service_token",
"backend_authentication_value": "token-value",
"proxy": {
"api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
"proxy_rules": [
{ "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
]
}
}
]
}
--- backend
location /transactions/authrep.xml {
content_by_lua_block {
ngx.exit(ngx.OK)
}
}
--- upstream
client_header_buffer_size 10;
location / {
content_by_lua_block {
ngx.print(ngx.req.raw_header())
}
}
--- more_headers eval
my $s = "User-Agent: curl\nBah: bah\n";
$s .= "Accept: */*\n";
$s .= "Large-Header: " . "ABCDEFGH" x 1024 . "\n";
$s
--- request
GET /?user_key=value
--- response_body eval
"GET /?user_key=value HTTP/1.1\r
X-Real-IP: 127.0.0.1\r
Host: test:$ENV{TEST_NGINX_SERVER_PORT}\r
User-Agent: curl\r
Bah: bah\r
Accept: */*\r
Large-Header: " . ("ABCDEFGH" x 1024) . "\r\n\r\n"
--- error_code: 200
--- no_error_log
=== TEST 3: large request line that exceeds the default header buffer
--- configuration env
{
"services": [
{
"id": 42,
"backend_version": 1,
"backend_authentication_type": "service_token",
"backend_authentication_value": "token-value",
"proxy": {
"api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
"proxy_rules": [
{ "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
]
}
}
]
}
--- backend
location /transactions/authrep.xml {
content_by_lua_block {
ngx.exit(ngx.OK)
}
}
--- upstream
client_header_buffer_size 10;
location / {
content_by_lua_block {
ngx.print(ngx.req.raw_header())
}
}
--- more_headers eval
my $s = "User-Agent: curl\nBah: bah\n";
$s .= "Accept: */*\n";
$s .= "Large-Header: " . "ABCDEFGH" x 1024 . "\n";
$s
--- request eval
"GET /?user_key=value&foo=" . ("ABCDEFGH" x 1024)
--- error_code: 414
--- error_log
client sent too long URI while reading client request line
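The 414 here follows the same buffer arithmetic as TEST 1, applied to the request line instead of a header: the request line must also fit in a single large buffer (8k by default), and this URI overflows it.

```python
# Reconstruct the request line nginx has to buffer for TEST 3.
request_line = "GET /?user_key=value&foo=" + "ABCDEFGH" * 1024 + " HTTP/1.1"
assert len(request_line) > 8192   # exceeds one default 8k buffer -> 414
```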
=== TEST 4: large request line with APICAST_LARGE_CLIENT_HEADER_BUFFERS set to "4 12k"
--- env eval
(
'APICAST_LARGE_CLIENT_HEADER_BUFFERS' => '4 12k',
)
--- configuration
{
"services": [
{
"id": 42,
"backend_version": 1,
"backend_authentication_type": "service_token",
"backend_authentication_value": "token-value",
"proxy": {
"api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
"proxy_rules": [
{ "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
]
}
}
]
}
--- backend
location /transactions/authrep.xml {
content_by_lua_block {
ngx.exit(ngx.OK)
}
}
--- upstream
client_header_buffer_size 10;
location / {
content_by_lua_block {
ngx.print(ngx.req.raw_header())
}
}
--- more_headers eval
my $s = "User-Agent: curl\nBah: bah\n";
$s .= "Accept: */*\n";
$s .= "Large-Header: " . "ABCDEFGH" x 1024 . "\n";
$s
--- request eval
"GET /?user_key=value&foo=" . ("ABCDEFGH" x 1024)
--- error_code: 200
--- no_error_log
