
a client request body is buffered to a temporary file #7418

Closed
highwayliu opened this issue Jun 3, 2021 · 11 comments · Fixed by #10204 or #10595

Comments

@highwayliu

highwayliu commented Jun 3, 2021

Summary

  • Kong version ($ kong version)
  • 2.4.1
  • A concise summary of the bug.
  • I use Kong for gRPC, and the Kong error log always writes "a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000181". I have injected the nginx directives through Docker environment variables, but it doesn't work.

injected nginx_proxy_* directives

client_body_buffer_size 8m;                  
client_max_body_size 8m;                    
proxy_buffer_size 128k;                      
proxy_buffers 4 256k;                       
proxy_busy_buffers_size 256k;
real_ip_header X-Real-IP;
real_ip_recursive off;          

spec:

  containers:
  - env:
    - name: KONG_DATABASE
      value: postgres
    - name: KONG_PG_HOST
      value: 172.26.0.45
    - name: KONG_PG_PORT
      value: "5432"
    - name: KONG_PG_USER
      value: kong
    - name: KONG_PG_PASSWORD
      value: kong
    - name: KONG_PG_DATABASE
      value: kong
    - name: KONG_PLUGINS
      value: bundled,grpc-itop-verify,grpc-api-log,grpc-error-handler,p-prometheus,correlationid,itop-verify,g-gateway
    - name: KONG_LUA_PACKAGE_PATH
      value: ./?.lua;./?/init.lua;/usr/local/custom/?.lua;;
    - name: KONG_PROXY_LISTEN
      value: 0.0.0.0:8000, 0.0.0.0:8443 ssl, 0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl
    - name: KONG_ADMIN_LISTEN
      value: 0.0.0.0:8001
    - name: KONG_PROXY_ACCESS_LOG
      value: /usr/local/kong/logs/proxy-access.log
    - name: KONG_ADMIN_ACCESS_LOG
      value: /usr/local/kong/logs/admin-access.log
    - name: KONG_PROXY_ERROR_LOG
      value: /usr/local/kong/logs/proxy-error.log
    - name: KONG_ADMIN_ERROR_LOG
      value: /usr/local/kong/logs/admin-error.log
    - name: KONG_SSL_CERT
      value: /usr/local/custom/kong/ssl/server.pem
    - name: KONG_SSL_CERT_KEY
      value: /usr/local/custom/kong/ssl/server.key
    - name: KONG_LOG_LEVEL
      value: info
    - name: KONG_NGINX_PROXY_CLIENT_BODY_BUFFER_SIZE
      value: 8m
    - name: KONG_NGINX_PROXY_CLIENT_MAX_BODY_SIZE
      value: 8m
    - name: KONG_NGINX_PROXY_PROXY_BUFFERS
      value: 4 256k
    - name: KONG_NGINX_PROXY_PROXY_BUFFER_SIZE
      value: 128k
    - name: KONG_NGINX_PROXY_PROXY_BUSY_BUFFERS_SIZE
      value: 256k

Steps To Reproduce

Additional Details & Logs

  • Kong debug-level startup logs ($ kong start --vv)
  • 2021/06/03 07:55:03 [verbose] Kong: 2.4.1
    2021/06/03 07:55:03 [debug] ngx_lua: 10019
    2021/06/03 07:55:03 [debug] nginx: 1019003
    2021/06/03 07:55:03 [debug] Lua: LuaJIT 2.1.0-beta3
    2021/06/03 07:55:03 [verbose] no config file found at /etc/kong/kong.conf
    2021/06/03 07:55:03 [verbose] no config file found at /etc/kong.conf
    2021/06/03 07:55:03 [verbose] no config file, skip loading
    2021/06/03 07:55:03 [debug] reading environment variables
    2021/06/03 07:55:03 [debug] KONG_SSL_CERT_KEY ENV found with "/usr/local/custom/kong/ssl/server.key"
    2021/06/03 07:55:03 [debug] KONG_PROXY_LISTEN ENV found with "0.0.0.0:8000, 0.0.0.0:8443 ssl, 0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl"
    2021/06/03 07:55:03 [debug] KONG_ADMIN_LISTEN ENV found with "0.0.0.0:8001"
    2021/06/03 07:55:03 [debug] KONG_PG_HOST ENV found with "172.26.0.45"
    2021/06/03 07:55:03 [debug] KONG_PG_DATABASE ENV found with "kong"
    2021/06/03 07:55:03 [debug] KONG_LUA_PACKAGE_PATH ENV found with "./?.lua;./?/init.lua;/usr/local/custom/?.lua;;"
    2021/06/03 07:55:03 [debug] KONG_NGINX_PROXY_CLIENT_BODY_BUFFER_SIZE ENV found with "8m"
    2021/06/03 07:55:03 [debug] KONG_NGINX_PROXY_CLIENT_MAX_BODY_SIZE ENV found with "8m"
    2021/06/03 07:55:03 [debug] KONG_PG_PORT ENV found with "5432"
    2021/06/03 07:55:03 [debug] KONG_NGINX_PROXY_PROXY_BUFFERS ENV found with "4 256k"
    2021/06/03 07:55:03 [debug] KONG_NGINX_PROXY_PROXY_BUFFER_SIZE ENV found with "128k"
    2021/06/03 07:55:03 [debug] KONG_NGINX_PROXY_PROXY_BUSY_BUFFERS_SIZE ENV found with "256k"
    2021/06/03 07:55:03 [debug] KONG_SSL_CERT ENV found with "/usr/local/custom/kong/ssl/server.pem"
    2021/06/03 07:55:03 [debug] KONG_PROXY_ACCESS_LOG ENV found with "/usr/local/kong/logs/proxy-access.log"
    2021/06/03 07:55:03 [debug] KONG_PROXY_ERROR_LOG ENV found with "/usr/local/kong/logs/proxy-error.log"
    2021/06/03 07:55:03 [debug] KONG_ADMIN_ACCESS_LOG ENV found with "/usr/local/kong/logs/admin-access.log"
    2021/06/03 07:55:03 [debug] KONG_ADMIN_ERROR_LOG ENV found with "/usr/local/kong/logs/admin-error.log"
    2021/06/03 07:55:03 [debug] KONG_LOG_LEVEL ENV found with "info"
    2021/06/03 07:55:03 [debug] KONG_PLUGINS ENV found with "bundled,grpc-itop-verify,grpc-api-log,grpc-error-handler,p-prometheus,correlationid,itop-verify,g-gateway"
    2021/06/03 07:55:03 [debug] KONG_PG_USER ENV found with "kong"
    2021/06/03 07:55:03 [debug] KONG_PG_PASSWORD ENV found with ""
    2021/06/03 07:55:03 [debug] KONG_DATABASE ENV found with "postgres"
    2021/06/03 07:55:03 [debug] admin_access_log = "/usr/local/kong/logs/admin-access.log"
    2021/06/03 07:55:03 [debug] admin_error_log = "/usr/local/kong/logs/admin-error.log"
    2021/06/03 07:55:03 [debug] admin_listen = {"0.0.0.0:8001"}
    2021/06/03 07:55:03 [debug] admin_ssl_cert = {}
    2021/06/03 07:55:03 [debug] admin_ssl_cert_key = {}
    2021/06/03 07:55:03 [debug] anonymous_reports = true
    2021/06/03 07:55:03 [debug] cassandra_contact_points = {"127.0.0.1"}
    2021/06/03 07:55:03 [debug] cassandra_data_centers = {"dc1:2","dc2:3"}
    2021/06/03 07:55:03 [debug] cassandra_keyspace = "kong"
    2021/06/03 07:55:03 [debug] cassandra_lb_policy = "RequestRoundRobin"
    2021/06/03 07:55:03 [debug] cassandra_port = 9042
    2021/06/03 07:55:03 [debug] cassandra_read_consistency = "ONE"
    2021/06/03 07:55:03 [debug] cassandra_refresh_frequency = 60
    2021/06/03 07:55:03 [debug] cassandra_repl_factor = 1
    2021/06/03 07:55:03 [debug] cassandra_repl_strategy = "SimpleStrategy"
    2021/06/03 07:55:03 [debug] cassandra_schema_consensus_timeout = 10000
    2021/06/03 07:55:03 [debug] cassandra_ssl = false
    2021/06/03 07:55:03 [debug] cassandra_ssl_verify = false
    2021/06/03 07:55:03 [debug] cassandra_timeout = 5000
    2021/06/03 07:55:03 [debug] cassandra_username = "kong"
    2021/06/03 07:55:03 [debug] cassandra_write_consistency = "ONE"
    2021/06/03 07:55:03 [debug] client_body_buffer_size = "8k"
    2021/06/03 07:55:03 [debug] client_max_body_size = "0"
    2021/06/03 07:55:03 [debug] client_ssl = false
    2021/06/03 07:55:03 [debug] cluster_control_plane = "127.0.0.1:8005"
    2021/06/03 07:55:03 [debug] cluster_data_plane_purge_delay = 1209600
    2021/06/03 07:55:03 [debug] cluster_listen = {"0.0.0.0:8005"}
    2021/06/03 07:55:03 [debug] cluster_mtls = "shared"
    2021/06/03 07:55:03 [debug] cluster_ocsp = "off"
    2021/06/03 07:55:03 [debug] database = "postgres"
    2021/06/03 07:55:03 [debug] db_cache_ttl = 0
    2021/06/03 07:55:03 [debug] db_cache_warmup_entities = {"services"}
    2021/06/03 07:55:03 [debug] db_resurrect_ttl = 30
    2021/06/03 07:55:03 [debug] db_update_frequency = 5
    2021/06/03 07:55:03 [debug] db_update_propagation = 0
    2021/06/03 07:55:03 [debug] dns_error_ttl = 1
    2021/06/03 07:55:03 [debug] dns_hostsfile = "/etc/hosts"
    2021/06/03 07:55:03 [debug] dns_no_sync = false
    2021/06/03 07:55:03 [debug] dns_not_found_ttl = 30
    2021/06/03 07:55:03 [debug] dns_order = {"LAST","SRV","A","CNAME"}
    2021/06/03 07:55:03 [debug] dns_resolver = {}
    2021/06/03 07:55:03 [debug] dns_stale_ttl = 4
    2021/06/03 07:55:03 [debug] error_default_type = "text/plain"
    2021/06/03 07:55:03 [debug] go_plugins_dir = "off"
    2021/06/03 07:55:03 [debug] go_pluginserver_exe = "/usr/local/bin/go-pluginserver"
    2021/06/03 07:55:03 [debug] headers = {"server_tokens","latency_tokens"}
    2021/06/03 07:55:03 [debug] host_ports = {}
    2021/06/03 07:55:03 [debug] kic = false
    2021/06/03 07:55:03 [debug] log_level = "info"
    2021/06/03 07:55:03 [debug] lua_package_cpath = ""
    2021/06/03 07:55:03 [debug] lua_package_path = "./?.lua;./?/init.lua;/usr/local/custom/?.lua;;"
    2021/06/03 07:55:03 [debug] lua_socket_pool_size = 30
    2021/06/03 07:55:03 [debug] lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] lua_ssl_trusted_certificate = {}
    2021/06/03 07:55:03 [debug] lua_ssl_verify_depth = 1
    2021/06/03 07:55:03 [debug] mem_cache_size = "128m"
    2021/06/03 07:55:03 [debug] nginx_admin_client_body_buffer_size = "10m"
    2021/06/03 07:55:03 [debug] nginx_admin_client_max_body_size = "10m"
    2021/06/03 07:55:03 [debug] nginx_admin_directives = {{name="client_max_body_size",value="10m"},{name="client_body_buffer_size",value="10m"}}
    2021/06/03 07:55:03 [debug] nginx_daemon = "on"
    2021/06/03 07:55:03 [debug] nginx_events_directives = {{name="multi_accept",value="on"},{name="worker_connections",value="auto"}}
    2021/06/03 07:55:03 [debug] nginx_events_multi_accept = "on"
    2021/06/03 07:55:03 [debug] nginx_events_worker_connections = "auto"
    2021/06/03 07:55:03 [debug] nginx_http_client_body_buffer_size = "8k"
    2021/06/03 07:55:03 [debug] nginx_http_client_max_body_size = "0"
    2021/06/03 07:55:03 [debug] nginx_http_directives = {{name="ssl_dhparam",value="ffdhe2048"},{name="client_max_body_size",value="0"},{name="client_body_buffer_size",value="8k"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_prefer_server_ciphers",value="off"},{name="ssl_session_tickets",value="on"},{name="ssl_session_timeout",value="1d"},{name="lua_ssl_protocols",value="TLSv1.1 TLSv1.2 TLSv1.3"}}
    2021/06/03 07:55:03 [debug] nginx_http_lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] nginx_http_ssl_dhparam = "ffdhe2048"
    2021/06/03 07:55:03 [debug] nginx_http_ssl_prefer_server_ciphers = "off"
    2021/06/03 07:55:03 [debug] nginx_http_ssl_protocols = "TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] nginx_http_ssl_session_tickets = "on"
    2021/06/03 07:55:03 [debug] nginx_http_ssl_session_timeout = "1d"
    2021/06/03 07:55:03 [debug] nginx_http_status_directives = {}
    2021/06/03 07:55:03 [debug] nginx_http_upstream_directives = {}
    2021/06/03 07:55:03 [debug] nginx_main_daemon = "on"
    2021/06/03 07:55:03 [debug] nginx_main_directives = {{name="daemon",value="on"},{name="worker_rlimit_nofile",value="auto"},{name="worker_processes",value="auto"}}
    2021/06/03 07:55:03 [debug] nginx_main_worker_processes = "auto"
    2021/06/03 07:55:03 [debug] nginx_main_worker_rlimit_nofile = "auto"
    2021/06/03 07:55:03 [debug] nginx_optimizations = true
    2021/06/03 07:55:03 [debug] nginx_proxy_client_body_buffer_size = "8m"
    2021/06/03 07:55:03 [debug] nginx_proxy_client_max_body_size = "8m"
    2021/06/03 07:55:03 [debug] nginx_proxy_directives = {{name="real_ip_header",value="X-Real-IP"},{name="real_ip_recursive",value="off"},{name="client_max_body_size",value="8m"},{name="proxy_buffers",value="4 256k"},{name="proxy_buffer_size",value="128k"},{name="proxy_busy_buffers_size",value="256k"},{name="client_body_buffer_size",value="8m"}}
    2021/06/03 07:55:03 [debug] nginx_proxy_proxy_buffer_size = "128k"
    2021/06/03 07:55:03 [debug] nginx_proxy_proxy_buffers = "4 256k"
    2021/06/03 07:55:03 [debug] nginx_proxy_proxy_busy_buffers_size = "256k"
    2021/06/03 07:55:03 [debug] nginx_proxy_real_ip_header = "X-Real-IP"
    2021/06/03 07:55:03 [debug] nginx_proxy_real_ip_recursive = "off"
    2021/06/03 07:55:03 [debug] nginx_sproxy_directives = {}
    2021/06/03 07:55:03 [debug] nginx_status_directives = {}
    2021/06/03 07:55:03 [debug] nginx_stream_directives = {{name="ssl_dhparam",value="ffdhe2048"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_prefer_server_ciphers",value="off"},{name="ssl_session_tickets",value="on"},{name="ssl_session_timeout",value="1d"},{name="lua_ssl_protocols",value="TLSv1.1 TLSv1.2 TLSv1.3"}}
    2021/06/03 07:55:03 [debug] nginx_stream_lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] nginx_stream_ssl_dhparam = "ffdhe2048"
    2021/06/03 07:55:03 [debug] nginx_stream_ssl_prefer_server_ciphers = "off"
    2021/06/03 07:55:03 [debug] nginx_stream_ssl_protocols = "TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] nginx_stream_ssl_session_tickets = "on"
    2021/06/03 07:55:03 [debug] nginx_stream_ssl_session_timeout = "1d"
    2021/06/03 07:55:03 [debug] nginx_supstream_directives = {}
    2021/06/03 07:55:03 [debug] nginx_upstream_directives = {}
    2021/06/03 07:55:03 [debug] nginx_worker_processes = "auto"
    2021/06/03 07:55:03 [debug] pg_database = "kong"
    2021/06/03 07:55:03 [debug] pg_host = "172.26.0.45"
    2021/06/03 07:55:03 [debug] pg_max_concurrent_queries = 0
    2021/06/03 07:55:03 [debug] pg_password = "
    "
    2021/06/03 07:55:03 [debug] pg_port = 5432
    2021/06/03 07:55:03 [debug] pg_ro_ssl = false
    2021/06/03 07:55:03 [debug] pg_ro_ssl_verify = false
    2021/06/03 07:55:03 [debug] pg_semaphore_timeout = 60000
    2021/06/03 07:55:03 [debug] pg_ssl = false
    2021/06/03 07:55:03 [debug] pg_ssl_verify = false
    2021/06/03 07:55:03 [debug] pg_timeout = 5000
    2021/06/03 07:55:03 [debug] pg_user = "kong"
    2021/06/03 07:55:03 [debug] plugins = {"bundled","grpc-itop-verify","grpc-api-log","grpc-error-handler","p-prometheus","correlationid","itop-verify","g-gateway"}
    2021/06/03 07:55:03 [debug] pluginserver_names = {}
    2021/06/03 07:55:03 [debug] port_maps = {}
    2021/06/03 07:55:03 [debug] prefix = "/usr/local/kong/"
    2021/06/03 07:55:03 [debug] proxy_access_log = "/usr/local/kong/logs/proxy-access.log"
    2021/06/03 07:55:03 [debug] proxy_error_log = "/usr/local/kong/logs/proxy-error.log"
    2021/06/03 07:55:03 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl","0.0.0.0:9080 http2","0.0.0.0:9081 http2 ssl"}
    2021/06/03 07:55:03 [debug] proxy_stream_access_log = "logs/access.log basic"
    2021/06/03 07:55:03 [debug] proxy_stream_error_log = "logs/error.log"
    2021/06/03 07:55:03 [debug] real_ip_header = "X-Real-IP"
    2021/06/03 07:55:03 [debug] real_ip_recursive = "off"
    2021/06/03 07:55:03 [debug] role = "traditional"
    2021/06/03 07:55:03 [debug] ssl_cert = {"/usr/local/custom/kong/ssl/server.pem"}
    2021/06/03 07:55:03 [debug] ssl_cert_key = {"/usr/local/custom/kong/ssl/server.key"}
    2021/06/03 07:55:03 [debug] ssl_cipher_suite = "intermediate"
    2021/06/03 07:55:03 [debug] ssl_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
    2021/06/03 07:55:03 [debug] ssl_dhparam = "ffdhe2048"
    2021/06/03 07:55:03 [debug] ssl_prefer_server_ciphers = "on"
    2021/06/03 07:55:03 [debug] ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
    2021/06/03 07:55:03 [debug] ssl_session_tickets = "on"
    2021/06/03 07:55:03 [debug] ssl_session_timeout = "1d"
    2021/06/03 07:55:03 [debug] status_access_log = "off"
    2021/06/03 07:55:03 [debug] status_error_log = "logs/status_error.log"
    2021/06/03 07:55:03 [debug] status_listen = {"off"}
    2021/06/03 07:55:03 [debug] status_ssl_cert = {}
    2021/06/03 07:55:03 [debug] status_ssl_cert_key = {}
    2021/06/03 07:55:03 [debug] stream_listen = {"off"}
    2021/06/03 07:55:03 [debug] trusted_ips = {}
    2021/06/03 07:55:03 [debug] untrusted_lua = "sandbox"
    2021/06/03 07:55:03 [debug] untrusted_lua_sandbox_environment = {}
    2021/06/03 07:55:03 [debug] untrusted_lua_sandbox_requires = {}
    2021/06/03 07:55:03 [debug] upstream_keepalive_idle_timeout = 60
    2021/06/03 07:55:03 [debug] upstream_keepalive_max_requests = 100
    2021/06/03 07:55:03 [debug] upstream_keepalive_pool_size = 60
    2021/06/03 07:55:03 [debug] worker_consistency = "strict"
    2021/06/03 07:55:03 [debug] worker_state_update_frequency = 5
    2021/06/03 07:55:03 [verbose] prefix in use: /usr/local/kong
    2021/06/03 07:55:03 [verbose] preparing nginx prefix directory at /usr/local/kong
    2021/06/03 07:55:03 [debug] searching for OpenResty 'nginx' executable
    2021/06/03 07:55:03 [debug] /usr/local/openresty/nginx/sbin/nginx -v: 'nginx version: openresty/1.19.3.1'
    2021/06/03 07:55:03 [debug] found OpenResty 'nginx' executable at /usr/local/openresty/nginx/sbin/nginx
    2021/06/03 07:55:03 [debug] testing nginx configuration: KONG_NGINX_CONF_CHECK=true /usr/local/openresty/nginx/sbin/nginx -t -p /usr/local/kong -c nginx.conf
    2021/06/03 07:55:03 [debug] sending signal to pid at: /usr/local/kong/pids/nginx.pid
    2021/06/03 07:55:03 [debug] kill -0 cat /usr/local/kong/pids/nginx.pid >/dev/null 2>&1
    Error:
    /usr/local/share/lua/5.1/kong/cmd/start.lua:26: Kong is already running in /usr/local/kong
    stack traceback:
    [C]: in function 'assert'
    /usr/local/share/lua/5.1/kong/cmd/start.lua:26: in function 'cmd_exec'
    /usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:88>
    [C]: in function 'xpcall'
    /usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:45>
    /usr/local/bin/kong:9: in function 'file_gen'
    init_worker_by_lua:47: in function <init_worker_by_lua:45>
    [C]: in function 'xpcall'
    init_worker_by_lua:54: in function <init_worker_by_lua:52>
  • Kong error logs (<KONG_PREFIX>/logs/error.log)
  • 2021/06/03 07:54:00 [warn] 22#0: *62537 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000146, client: 103.7.29.9, server: kong, request: "POST /sns.Club/Panel HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62537 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000147, client: 103.7.29.9, server: kong, request: "POST /sns.Club/Reco HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62537 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000148, client: 103.7.29.9, server: kong, request: "POST /sns.Club/JoinedClubs HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62538 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000149, client: 103.7.29.9, server: kong, request: "POST /sns.User/GetProfile HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62539 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000150, client: 103.7.29.9, server: kong, request: "POST /sns.User/Sync HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62541 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000151, client: 103.7.29.9, server: kong, request: "POST /sns.User/GetProfile HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [warn] 22#0: *62537 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000152, client: 103.7.29.9, server: kong, request: "POST /sns.Club/NewMessageNum HTTP/2.0", host: "meemo"
    2021/06/03 07:54:00 [info] 22#0: *62571 client 124.156.154.236 closed keepalive connection (104: Connection reset by peer)
  • Kong configuration (the output of a GET request to Kong's Admin port - see
    https://docs.konghq.com/latest/admin-api/#retrieve-node-information)
  • Operating system
    CentOS 8
@joelsdc

joelsdc commented Jun 4, 2021

I was troubleshooting the same issue, but in my case it happens with regular HTTP endpoints.

TL;DR --> I don't think it's related to gRPC but more to the HTTP version used.

I have a route set up like:

- name: SERVICE1_EXTERNAL
  url: http://service1
  routes:
  - name: service1_endpoint
    paths: ['/']
    hosts: ['service1.example.com']
    protocols: ['https']
    request_buffering: false
    response_buffering: false
    preserve_host: true
    strip_path: false

Which technically has all (request/response) buffering disabled. In my tests, I found that:

  • Sending a request using HTTP/1.1, there is no buffering. ✅
  • Sending a request using HTTP/2, there is buffering. ❌

Example using HTTP/1.1:

$ curl --http1.1 -v https://service1.example.com/getFile\?fileID\=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI -o tempfile
*   Trying X.X.X.X...
* TCP_NODELAY set
* Connected to service1.example.com (X.X.X.X) port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [229 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [108 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [2561 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=*.example.com
*  start date: May  1 10:22:30 2021 GMT
*  expire date: Jul 30 10:22:30 2021 GMT
*  subjectAltName: host "service1.example.com" matched cert's "*.example.com"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
> GET /getFile?fileID=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI HTTP/1.1
> Host: service1.example.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: video/mp4
< Content-Length: 2413332
< Connection: keep-alive
< server: Apache-Coyote/1.1
< content-disposition: inline;filename="20ca4cc3-38b8-425a-9410-712a36aa7f36.mp4"
< accept-ranges: bytes
< etag: 20ca4cc3-38b8-425a-9410-712a36aa7f36_2413332_1622597494
< last-modified: Mon, 19 Jan 1970 18:43:17 GMT
< expires: Fri, 11 Jun 2021 00:51:05 GMT
< content-range: bytes 0-2413331/2413332
< date: Fri, 04 Jun 2021 00:51:05 GMT
< X-Kong-Upstream-Latency: 138
< X-Kong-Proxy-Latency: 0
< Via: kong/2.4.1
<
{ [3671 bytes data]
100 2356k  100 2356k    0     0  1657k      0  0:00:01  0:00:01 --:--:-- 1656k
* Connection #0 to host service1.example.com left intact
* Closing connection 0

Example using HTTP/2:

$ curl -v https://service1.example.com/getFile\?fileID\=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI -o tempfile
*   Trying X.X.X.X...
* TCP_NODELAY set
* Connected to service1.example.com (X.X.X.X) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [232 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [102 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [2561 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=*.example.com
*  start date: May  1 10:22:30 2021 GMT
*  expire date: Jul 30 10:22:30 2021 GMT
*  subjectAltName: host "service1.example.com" matched cert's "*.example.com"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7f9fbf810800)
> GET /getFile?fileID=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI HTTP/2
> Host: service1.example.com
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< content-type: video/mp4
< content-length: 2413332
< server: Apache-Coyote/1.1
< content-disposition: inline;filename="20ca4cc3-38b8-425a-9410-712a36aa7f36.mp4"
< accept-ranges: bytes
< etag: 20ca4cc3-38b8-425a-9410-712a36aa7f36_2413332_1622597494
< last-modified: Mon, 19 Jan 1970 18:43:17 GMT
< expires: Fri, 11 Jun 2021 00:51:11 GMT
< content-range: bytes 0-2413331/2413332
< date: Fri, 04 Jun 2021 00:51:11 GMT
< x-kong-upstream-latency: 74
< x-kong-proxy-latency: 1
< via: kong/2.4.1
<
{ [3671 bytes data]
100 2356k  100 2356k    0     0  2028k      0  0:00:01  0:00:01 --:--:-- 2028k
* Connection #0 to host service1.example.com left intact
* Closing connection 0
$

In the case of HTTP/2, I can always see the following log:

2021/06/04 00:58:42 [warn] 26#0: *301981940 an upstream response is buffered to a temporary file /usr/local/kong/proxy_temp/4/33/0000000334 while reading upstream, client: Y.Y.Y.Y, server: kong, request: "GET /getFile?fileID=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI HTTP/2.0", upstream: "http://Z.Z.Z.Z:80/getFile?fileID=B--TeXDI-cQLKLJuHRTBg-x4w_EHjI", host: "service1.example.com"

Using HTTP/1.1, I never get that log (which matches the expected behavior, given that request_buffering=false and response_buffering=false are defined for the route).

All that said, I'm not completely sure this is a Kong issue, as it seems we are not the first to hit this on NGINX:

cloudendpoints/esp#395 (comment)
cloudendpoints/esp#395 (comment)

Which in turn leads to:

http://mailman.nginx.org/pipermail/nginx-devel/2018-December/011657.html

Any suggestions? 😂

Our setup:

Kong 2.4.1 running in DB-less mode on Docker.

@highwayliu
Author

> I was troubleshooting the same issue, but in my case it happens with regular HTTP endpoints. […]

Thank you for your answer.
Is there anything the gRPC client can do to avoid the body being buffered to a temporary file?
If the client sets Content-Length in the header, would the problem be solved?

@joelsdc

joelsdc commented Jun 4, 2021

I'm not sure either; let's wait and see what the Kong team thinks about this, but I suspect it's an NGINX problem, not something specifically tied to Kong.

@bungle
Member

bungle commented Jun 9, 2021

    request_buffering: false
    response_buffering: false

These only work with HTTP/1.1.
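
For anyone who wants to confirm which protocol version a given request actually negotiated, here is a minimal illustrative Lua snippet (running it from something like a pre-function plugin is my assumption, not something discussed in this thread):

-- Illustrative only: log the HTTP version Nginx negotiated for this request.
-- ngx.req.http_version() returns 2 for HTTP/2, 1.1 for HTTP/1.1, and so on.
local version = ngx.req.http_version()
ngx.log(ngx.NOTICE, "negotiated HTTP/", version, " for ", ngx.var.request_uri)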

@bungle
Member

bungle commented Jun 9, 2021

It might be related to this too:
http://mailman.nginx.org/pipermail/nginx-devel/2018-December/011657.html

There have been some changes in Nginx recently, but the latest Kong currently uses Nginx 1.19.3.

@highwayliu
Author

highwayliu commented Jun 14, 2021

It might be related to this too:
http://mailman.nginx.org/pipermail/nginx-devel/2018-December/011657.html

There have been some changes in Nginx recently, but the latest Kong currently uses Nginx 1.19.3.

It's written in the nginx documentation:
"Before version 1.9.14, buffering of a client request body could not be disabled regardless of proxy_request_buffering, fastcgi_request_buffering, uwsgi_request_buffering, and scgi_request_buffering directive values."

It seems that nginx solved the problem in 1.9.14, but it doesn't work now.
Would the newest nginx version solve the problem?

@PidgeyBE
Contributor

Now that Kong 3.x is released and we have nginx > 1.9.14, can we remove this check on HTTP/1.1 (https://github.com/Kong/kong/blob/3.1.1/kong/runloop/handler.lua#L1577) so that

    request_buffering: false
    response_buffering: false

also works for HTTP/2.0?
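
For context, here is a paraphrased Lua sketch of the kind of check being discussed, with hypothetical variable names (this is not the actual Kong source; see the linked handler.lua line for that):

-- Paraphrased sketch of an HTTP-version gate on the per-route buffering flags.
-- `route` is the matched route entity; the ngx.var names are hypothetical and
-- would have to be declared in the generated nginx template for the
-- assignments to work.
local http_version = ngx.req.http_version()
if http_version == 1.1 then
  -- only HTTP/1.1 requests honour route.request_buffering and
  -- route.response_buffering; HTTP/2 falls through and keeps the defaults
  if route.request_buffering == false then
    ngx.var.upstream_request_buffering = "off"
  end
  if route.response_buffering == false then
    ngx.var.upstream_buffering = "off"
  end
end

Dropping the http_version == 1.1 condition is essentially what is being asked for here, and it is what the fix merged later in this thread does.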

@PidgeyBE
Contributor

PidgeyBE commented Feb 1, 2023

I've tried to hardcode every buffering value in nginx_kong.lua to off:

bash-5.1$ cat /tmp/tmp.EjnKmF/nginx-kong.conf | grep buffering
    proxy_buffering          off;
    proxy_request_buffering  off;
        proxy_buffering          off;
        proxy_request_buffering  off;
        proxy_buffering         off;
        proxy_request_buffering off;
        proxy_buffering         off;
        proxy_request_buffering off;
        proxy_buffering         off;
        proxy_request_buffering  off;

But still my requests are getting buffered:

$ kubectl -n x logs -f x-kong-6b4b556f56-hqcfj -c proxy | grep tempor
2023/02/01 09:11:48 [warn] 1127#0: *2344 a client request body is buffered to a temporary file /kong_prefix/client_body_temp/0000000001, client: 10.42.0.1, server: kong, request: "POST /io/?data=69d73d83-982d-4504-982f-4a7d0a3fd7b9 HTTP/2.0", host: "x", referrer: "https://x/projects/saf"

Also HTTP/1.1 is not working:

2023/02/01 09:36:05 [warn] 1127#0: *1853 a client request body is buffered to a temporary file /kong_prefix/client_body_temp/0000000005, client: 10.42.0.1, server: kong, request: "POST /io/api/v1/imports?data=31bbd520-6121-48d1-8c68-31d4f6ea3f56&tagName=x HTTP/1.1", host: "x", referrer: "https://x/projects/sdfw"

I'm on Kong 3.1.1.
Does anyone know how to turn buffering off?

Edit:
I found out that request_buffering cannot be deactivated on **multipart/form-data** requests, because we use the session plugin: https://github.com/Kong/kong-plugin-session/blob/master/kong/plugins/session/session.lua#L105
Patched that and now it's working!
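
The fix described in the commits below essentially gates that body read behind a configuration flag. A simplified, illustrative Lua sketch of the idea (not the plugin's exact code; the helper name is mine):

-- Simplified sketch (illustrative only): only read the request body to look
-- for a logout argument when explicitly enabled, so the body read no longer
-- defeats route.request_buffering = false for POSTs and uploads.
local function logout_requested_by_body(conf)
  if kong.request.get_method() == "GET" then
    return false
  end

  -- opt-in flag, default false (named read_body_for_logout in the final fix)
  if not conf.read_body_for_logout or not conf.logout_post_arg then
    return false
  end

  local body = kong.request.get_body()  -- this call forces the body to be read
  return body ~= nil and body[conf.logout_post_arg] ~= nil
end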

bungle added a commit that referenced this issue Feb 22, 2023
…ethods

### Summary

In comment #7418 (comment), @PidgeyBE mentioned that the session plugin reads the body of every HTTP request that is not a GET request.

Because it is quite common to use request bodies to send large files, reading the body breaks features like route.request_buffering=off. Thus, the default value for `logout_post_arg` in the session plugin was removed; bodies are only read when it is explicitly configured. This might change behavior for scripts that create the session plugin and expect logout via a body argument to keep working as before. On the other hand, reading a session is far more common than logging one out, so this should be the better default going forward.
bungle added a commit that referenced this issue Mar 2, 2023
…ethods

### Summary

In comment #7418 (comment), @PidgeyBE mentioned that the session plugin reads the body of every HTTP request that is not a GET request.

Because it is quite common to use request bodies to send large files, reading the body breaks features like route.request_buffering=off. Thus, a new configuration option `read_bodies` was added with a default value of `false`; bodies are only read when it is set to `true`. This is a **breaking** change: the plugin no longer reads the body to detect logout unless `read_bodies` is explicitly set to `true`. On the other hand, reading a session is far more common than logging one out, so this should be the better default going forward.
bungle pushed a commit that referenced this issue Mar 31, 2023
### Summary

Turning request/response buffering off for HTTP/2.0 is currently not possible:
there is a check that makes buffering controllable only for HTTP/1.1.

This was probably done because of an issue in nginx, which was fixed in
version 1.9.14 (http://nginx.org/en/docs/http/ngx_http_v2_module.html):

> Before version 1.9.14, buffering of a client request body could not be
> disabled regardless of [proxy_request_buffering](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering),
> [fastcgi_request_buffering](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_request_buffering),
> [uwsgi_request_buffering](http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_request_buffering), and
> [scgi_request_buffering](http://nginx.org/en/docs/http/ngx_http_scgi_module.html#scgi_request_buffering) directive values.

Kong now has Nginx > 1.9.14, so the check is not needed any more.

The work was done by @PidgeyBE, thank you very much!

### Issues Resolved

Fix #7418
Close #10204

Signed-off-by: Aapo Talvensaari <aapo.talvensaari@gmail.com>
jschmid1 pushed a commit that referenced this issue Oct 17, 2023
…ethods (#10333)

In comment #7418 (comment), @PidgeyBE mentioned that the session plugin reads the body of every HTTP request that is not a GET request.

Because it is quite common to use request bodies to send large files, reading the body breaks features like route.request_buffering=off. Thus, a new configuration option `read_body_for_logout` was added with a default value of `false`; bodies are only read when it is set to `true`. This is a **breaking** change: the plugin no longer reads the body to detect logout unless `read_body_for_logout` is explicitly set to `true`. On the other hand, reading a session is far more common than logging one out, so this should be the better default going forward.

Signed-off-by: Aapo Talvensaari <aapo.talvensaari@gmail.com>