
connect() to unix:/usr/local/kong/go_pluginserver.sock failed(starting instance: nil) #6500

Closed
lampnick opened this issue Oct 21, 2020 · 18 comments


lampnick commented Oct 21, 2020

Summary

Kong runs normally for a period of time, but after a few days the Go plugin stops working and produces the errors below. If I run kong prepare and then kong reload, the Go plugin works again.

Steps To Reproduce

1. Write a plugin in Go.
2. Load the Go plugin.
3. Kong runs normally for a period of time.
4. A few days later the Go plugin stops working.

Additional Details & Logs

  • Kong version ($ kong version) 2.1.4
"go-pluginserver data[1]===> " is a log line I added at /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:160.
log format:
		kong.log.err("go-pluginserver data[1]===> ", tostring(data[1]))
		kong.log.err("go-pluginserver data[2]===> ", tostring(data[2]))
		kong.log.err("go-pluginserver data[3]===> ", tostring(data[3]))
		kong.log.err("go-pluginserver data[4]===> ", tostring(data[4]))

The go-pluginserver returns error 413, and one minute later "starting instance: nil" is raised.

##################################go-hello (official demo)##################################
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:161 [custom-rate-limiting] go-pluginserver data[1]===> 2, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:162 [custom-rate-limiting] go-pluginserver data[2]===> serverPid, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:163 [custom-rate-limiting] go-pluginserver data[3]===> 413, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:164 [custom-rate-limiting] go-pluginserver data[4]===> nil, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:24:49 [error] 305#0: *29570480 [kong] go.lua:434 [custom-rate-limiting] starting instance: nil, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:24:49 [error] 305#0: *29570480 lua coroutine: runtime error: unknown reason
stack traceback:
coroutine 0:
        [C]: in function 'error'
        /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:437: in function 'get_instance'
        /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:503: in function </usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:502>
coroutine 1:
        [C]: in function 'resume'
        coroutine.wrap:21: in function <coroutine.wrap:21>
        /usr/local/share/lua/5.1/kong/init.lua:756: in function 'access'
        access_by_lua(nginx-kong.conf:89):2: in main chunk, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
10.10.5.164 - - [22/Oct/2020:02:24:49 +0000] "GET /test-backend HTTP/1.1" 404 10 "-" "curl/7.29.0"

##################################custom-rate-limiting(my custom go plugin)##################################
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:161 [custom-rate-limiting] go-pluginserver data[1]===> 2, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:162 [custom-rate-limiting] go-pluginserver data[2]===> serverPid, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:163 [custom-rate-limiting] go-pluginserver data[3]===> 413, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:23:49 [error] 305#0: *29570480 [kong] go.lua:164 [custom-rate-limiting] go-pluginserver data[4]===> nil, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:24:49 [error] 305#0: *29570480 [kong] go.lua:434 [custom-rate-limiting] starting instance: nil, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/22 02:24:49 [error] 305#0: *29570480 lua coroutine: runtime error: unknown reason
stack traceback:
coroutine 0:
        [C]: in function 'error'
        /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:437: in function 'get_instance'
        /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:503: in function </usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:502>
coroutine 1:
        [C]: in function 'resume'
        coroutine.wrap:21: in function <coroutine.wrap:21>
        /usr/local/share/lua/5.1/kong/init.lua:756: in function 'access'
        access_by_lua(nginx-kong.conf:89):2: in main chunk, client: 10.10.5.164, server: kong, request: "GET /test-backend HTTP/1.1", host: "gateway-test.xxx.com"
10.10.5.164 - - [22/Oct/2020:02:24:49 +0000] "GET /test-backend HTTP/1.1" 404 10 "-" "curl/7.29.0"
  • Kong configuration (the output of a GET request to Kong's Admin port):
{
	"plugins": {
		"enabled_in_cluster": ["rate-limiting", "rate-limiting-orgcode", "go-hello", "prometheus", "custom-rate-limiting", "jwt"],
		"available_on_server": {
			"grpc-web": true,
			"go-hello": true,
			"correlation-id": true,
			"pre-function": true,
			"cors": true,
			"rate-limiting": true,
			"loggly": true,
			"hmac-auth": true,
			"zipkin": true,
			"bot-detection": true,
			"azure-functions": true,
			"custom-rate-limiting": true,
			"request-transformer": true,
			"oauth2": true,
			"response-transformer": true,
			"syslog": true,
			"statsd": true,
			"jwt": true,
			"proxy-cache": true,
			"basic-auth": true,
			"key-auth": true,
			"rate-limiting-orgcode": true,
			"http-log": true,
			"session": true,
			"prometheus": true,
			"datadog": true,
			"tcp-log": true,
			"ldap-auth": true,
			"post-function": true,
			"ip-restriction": true,
			"acl": true,
			"grpc-gateway": true,
			"response-ratelimiting": true,
			"request-size-limiting": true,
			"udp-log": true,
			"file-log": true,
			"aws-lambda": true,
			"acme": true,
			"orgcode-gray": true,
			"request-termination": true
		}
	},
	"tagline": "Welcome to kong",
	"configuration": {
		"plugins": ["bundled", "custom-rate-limiting", "go-hello"],
		"cassandra_read_consistency": "ONE",
		"admin_listen": ["0.0.0.0:8001", "0.0.0.0:8444 ssl"],
		"proxy_access_log": "\/dev\/stdout",
		"nginx_stream_directives": [{
			"value": "stream_prometheus_metrics 5m",
			"name": "lua_shared_dict"
		}, {
			"value": "off",
			"name": "ssl_prefer_server_ciphers"
		}, {
			"value": "TLSv1.2 TLSv1.3",
			"name": "ssl_protocols"
		}, {
			"value": "on",
			"name": "ssl_session_tickets"
		}, {
			"value": "1d",
			"name": "ssl_session_timeout"
		}],
		"nginx_conf": "\/usr\/local\/kong\/nginx.conf",
		"cassandra_username": "kong",
		"nginx_events_directives": [{
			"value": "on",
			"name": "multi_accept"
		}, {
			"value": "auto",
			"name": "worker_connections"
		}],
		"admin_ssl_cert_key": "\/usr\/local\/kong\/ssl\/admin-kong-default.key",
		"dns_resolver": {},
		"nginx_upstream_keepalive_requests": "100",
		"nginx_http_upstream_directives": [{
			"value": "60",
			"name": "keepalive"
		}, {
			"value": "100",
			"name": "keepalive_requests"
		}, {
			"value": "60s",
			"name": "keepalive_timeout"
		}],
		"nginx_main_daemon": "on",
		"stream_proxy_ssl_enabled": false,
		"nginx_acc_logs": "\/usr\/local\/kong\/logs\/access.log",
		"pg_semaphore_timeout": 60000,
		"proxy_listen": ["0.0.0.0:8000 reuseport backlog=16384", "0.0.0.0:8443 http2 ssl reuseport backlog=16384"],
		"client_ssl_cert_default": "\/usr\/local\/kong\/ssl\/kong-default.crt",
		"go_pluginserver_exe": "\/usr\/local\/bin\/go-pluginserver",
		"dns_no_sync": false,
		"db_update_propagation": 0,
		"nginx_stream_ssl_session_tickets": "on",
		"nginx_err_logs": "\/usr\/local\/kong\/logs\/error.log",
		"ssl_prefer_server_ciphers": "on",
		"headers": ["server_tokens", "latency_tokens"],
		"nginx_http_client_max_body_size": "0",
		"status_ssl_enabled": false,
		"status_listen": ["off"],
		"cassandra_lb_policy": "RequestRoundRobin",
		"cluster_control_plane": "127.0.0.1:8005",
		"nginx_http_ssl_prefer_server_ciphers": "off",
		"pg_database": "k8s_kong_test",
		"nginx_http_client_body_buffer_size": "8k",
		"admin_acc_logs": "\/usr\/local\/kong\/logs\/admin_access.log",
		"cassandra_refresh_frequency": 60,
		"nginx_pid": "\/usr\/local\/kong\/pids\/nginx.pid",
		"nginx_main_worker_rlimit_nofile": "auto",
		"cassandra_contact_points": ["127.0.0.1"],
		"proxy_listeners": [{
			"listener": "0.0.0.0:8000 reuseport backlog=16384",
			"proxy_protocol": false,
			"reuseport": true,
			"deferred": false,
			"ssl": false,
			"ip": "0.0.0.0",
			"backlog=16384": true,
			"http2": false,
			"port": 8000,
			"bind": false
		}, {
			"listener": "0.0.0.0:8443 ssl http2 reuseport backlog=16384",
			"proxy_protocol": false,
			"reuseport": true,
			"deferred": false,
			"ssl": true,
			"ip": "0.0.0.0",
			"backlog=16384": true,
			"http2": true,
			"port": 8443,
			"bind": false
		}],
		"db_cache_warmup_entities": ["services", "plugins"],
		"enabled_headers": {
			"latency_tokens": true,
			"X-Kong-Response-Latency": true,
			"Server": true,
			"X-Kong-Admin-Latency": true,
			"X-Kong-Upstream-Status": false,
			"Via": true,
			"X-Kong-Proxy-Latency": true,
			"server_tokens": true,
			"X-Kong-Upstream-Latency": true
		},
		"nginx_http_ssl_protocols": "TLSv1.2 TLSv1.3",
		"upstream_keepalive_idle_timeout": 60,
		"db_cache_ttl": 0,
		"nginx_events_multi_accept": "on",
		"status_listeners": {},
		"pg_ssl": false,
		"status_access_log": "off",
		"cluster_listeners": [{
			"listener": "0.0.0.0:8005",
			"proxy_protocol": false,
			"reuseport": false,
			"backlog=%d+": false,
			"deferred": false,
			"ssl": false,
			"ip": "0.0.0.0",
			"port": 8005,
			"http2": false,
			"bind": false
		}],
		"ssl_protocols": "TLSv1.1 TLSv1.2 TLSv1.3",
		"kong_env": "\/usr\/local\/kong\/.kong_env",
		"cassandra_schema_consensus_timeout": 10000,
		"log_level": "notice",
		"admin_ssl_cert_key_default": "\/usr\/local\/kong\/ssl\/admin-kong-default.key",
		"ssl_session_timeout": "1d",
		"real_ip_recursive": "off",
		"cassandra_repl_factor": 1,
		"nginx_main_worker_processes": "16",
		"port_maps": {},
		"pg_port": 3433,
		"cassandra_keyspace": "kong",
		"ssl_cert_default": "\/usr\/local\/kong\/ssl\/kong-default.crt",
		"nginx_http_ssl_session_timeout": "1d",
		"error_default_type": "text\/plain",
		"upstream_keepalive_pool_size": 60,
		"worker_consistency": "strict",
		"nginx_stream_ssl_session_timeout": "1d",
		"admin_ssl_enabled": true,
		"trusted_ips": ["0.0.0.0\/0", "::\/0"],
		"loaded_plugins": {
			"grpc-web": true,
			"go-hello": true,
			"session": true,
			"pre-function": true,
			"cors": true,
			"rate-limiting": true,
			"loggly": true,
			"hmac-auth": true,
			"zipkin": true,
			"bot-detection": true,
			"azure-functions": true,
			"custom-rate-limiting": true,
			"request-transformer": true,
			"oauth2": true,
			"prometheus": true,
			"syslog": true,
			"statsd": true,
			"jwt": true,
			"proxy-cache": true,
			"basic-auth": true,
			"key-auth": true,
			"rate-limiting-orgcode": true,
			"http-log": true,
			"ip-restriction": true,
			"orgcode-gray": true,
			"datadog": true,
			"tcp-log": true,
			"acme": true,
			"post-function": true,
			"correlation-id": true,
			"acl": true,
			"grpc-gateway": true,
			"file-log": true,
			"request-size-limiting": true,
			"udp-log": true,
			"response-ratelimiting": true,
			"aws-lambda": true,
			"response-transformer": true,
			"ldap-auth": true,
			"request-termination": true
		},
		"nginx_supstream_directives": {},
		"ssl_cert_key": "\/usr\/local\/kong\/ssl\/kong-default.key",
		"host_ports": {},
		"pg_user": "myscrm_super",
		"mem_cache_size": "128m",
		"cassandra_data_centers": ["dc1:2", "dc2:3"],
		"nginx_admin_directives": {},
		"nginx_upstream_keepalive_timeout": "60s",
		"nginx_http_directives": [{
			"value": "8k",
			"name": "client_body_buffer_size"
		}, {
			"value": "0",
			"name": "client_max_body_size"
		}, {
			"value": "prometheus_metrics 5m",
			"name": "lua_shared_dict"
		}, {
			"value": "off",
			"name": "ssl_prefer_server_ciphers"
		}, {
			"value": "TLSv1.2 TLSv1.3",
			"name": "ssl_protocols"
		}, {
			"value": "on",
			"name": "ssl_session_tickets"
		}, {
			"value": "1d",
			"name": "ssl_session_timeout"
		}],
		"pg_host": "postgresql",
		"nginx_kong_stream_conf": "\/usr\/local\/kong\/nginx-kong-stream.conf",
		"ssl_cert_key_default": "\/usr\/local\/kong\/ssl\/kong-default.key",
		"go_plugins_dir": "\/opt\/kong\/plugins",
		"db_update_frequency": 5,
		"cassandra_write_consistency": "ONE",
		"dns_order": ["LAST", "SRV", "A", "CNAME"],
		"dns_error_ttl": 1,
		"nginx_sproxy_directives": {},
		"nginx_http_upstream_keepalive_timeout": "60s",
		"pg_timeout": 5000,
		"nginx_http_upstream_keepalive_requests": "100",
		"database": "postgres",
		"nginx_upstream_keepalive": "60",
		"nginx_worker_processes": "16",
		"nginx_http_status_directives": {},
		"prefix": "\/usr\/local\/kong",
		"nginx_optimizations": true,
		"nginx_proxy_real_ip_header": "X-Forwarded-For",
		"lua_package_path": ".\/?.lua;.\/?\/init.lua;",
		"nginx_status_directives": {},
		"upstream_keepalive": 60,
		"nginx_stream_ssl_protocols": "TLSv1.2 TLSv1.3",
		"worker_state_update_frequency": 5,
		"pg_password": "******",
		"cassandra_port": 9042,
		"pg_max_concurrent_queries": 0,
		"lua_package_cpath": "",
		"admin_access_log": "\/dev\/stdout",
		"lua_ssl_verify_depth": 1,
		"proxy_ssl_enabled": true,
		"nginx_http_upstream_keepalive": "60",
		"upstream_keepalive_max_requests": 100,
		"lua_socket_pool_size": 30,
		"pg_ro_ssl_verify": false,
		"cassandra_ssl": false,
		"db_resurrect_ttl": 30,
		"admin_ssl_cert": "\/usr\/local\/kong\/ssl\/admin-kong-default.crt",
		"nginx_proxy_directives": [{
			"value": "X-Forwarded-For",
			"name": "real_ip_header"
		}, {
			"value": "off",
			"name": "real_ip_recursive"
		}],
		"client_max_body_size": "0",
		"admin_error_log": "\/dev\/stderr",
		"nginx_main_directives": [{
			"value": "on",
			"name": "daemon"
		}, {
			"value": "16",
			"name": "worker_processes"
		}, {
			"value": "auto",
			"name": "worker_rlimit_nofile"
		}],
		"dns_not_found_ttl": 30,
		"nginx_http_ssl_session_tickets": "on",
		"ssl_cipher_suite": "intermediate",
		"cassandra_ssl_verify": false,
		"cassandra_repl_strategy": "SimpleStrategy",
		"status_error_log": "logs\/status_error.log",
		"dns_stale_ttl": 4,
		"kic": false,
		"proxy_error_log": "\/dev\/stderr",
		"nginx_kong_conf": "\/usr\/local\/kong\/nginx-kong.conf",
		"real_ip_header": "X-Forwarded-For",
		"status_ssl_cert_key_default": "\/usr\/local\/kong\/ssl\/status-kong-default.key",
		"admin_listeners": [{
			"listener": "0.0.0.0:8001",
			"proxy_protocol": false,
			"reuseport": false,
			"backlog=%d+": false,
			"deferred": false,
			"ssl": false,
			"ip": "0.0.0.0",
			"port": 8001,
			"http2": false,
			"bind": false
		}, {
			"listener": "0.0.0.0:8444 ssl",
			"proxy_protocol": false,
			"reuseport": false,
			"backlog=%d+": false,
			"deferred": false,
			"ssl": true,
			"ip": "0.0.0.0",
			"port": 8444,
			"http2": false,
			"bind": false
		}],
		"pg_ssl_verify": false,
		"ssl_cert": "\/usr\/local\/kong\/ssl\/kong-default.crt",
		"nginx_proxy_real_ip_recursive": "off",
		"pg_ro_ssl": false,
		"nginx_stream_ssl_prefer_server_ciphers": "off",
		"dns_hostsfile": "\/etc\/hosts",
		"stream_listen": ["off"],
		"client_ssl": false,
		"nginx_events_worker_connections": "auto",
		"client_ssl_cert_key_default": "\/usr\/local\/kong\/ssl\/kong-default.key",
		"nginx_daemon": "off",
		"anonymous_reports": true,
		"cluster_listen": ["0.0.0.0:8005"],
		"cassandra_timeout": 5000,
		"status_ssl_cert_default": "\/usr\/local\/kong\/ssl\/status-kong-default.crt",
		"admin_ssl_cert_default": "\/usr\/local\/kong\/ssl\/admin-kong-default.crt",
		"client_body_buffer_size": "8k",
		"ssl_cert_csr_default": "\/usr\/local\/kong\/ssl\/kong-default.csr",
		"stream_listeners": {},
		"nginx_upstream_directives": [{
			"value": "60",
			"name": "keepalive"
		}, {
			"value": "100",
			"name": "keepalive_requests"
		}, {
			"value": "60s",
			"name": "keepalive_timeout"
		}],
		"ssl_session_tickets": "on",
		"role": "traditional",
		"cluster_mtls": "shared",
		"ssl_ciphers": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
	},
	"version": "2.1.4",
	"node_id": "5fccf127-0c4e-4927-8011-27c5be4ccec6",
	"lua_version": "LuaJIT 2.1.0-beta3",
	"prng_seeds": {
		"pid: 306": 652332415221,
		"pid: 308": 130120156241,
		"pid: 303": 246194207170,
		"pid: 1": 120172216884,
		"pid: 309": 255119341351,
		"pid: 307": 178240532251,
		"pid: 294": 243373318236,
		"pid: 302": 651951982172,
		"pid: 301": 225841022194,
		"pid: 297": 214175255413,
		"pid: 299": 165751426223,
		"pid: 304": 236361741268,
		"pid: 298": 971112131131,
		"pid: 296": 208540231107,
		"pid: 300": 214228417092,
		"pid: 305": 917912183381,
		"pid: 295": 731033617614
	},
	"timers": {
		"pending": 8,
		"running": 0
	},
	"hostname": "kong-public-test-68c97877b5-wkh96"
}
  • Operating system: CentOS 7 (running in Kubernetes)

lampnick commented Nov 2, 2020

Hi, when I modified kong/db/dao/plugins/go.lua as in your change (4620c42), I got the following error:
Kong version: v2.1.4
2020/10/30 08:04:17 [error] 53#0: *20086162 [kong] go.lua:427 [custom-rate-limiting] starting instance: no data, client: 172.17.68.1, server: kong, request: "GET /test HTTP/1.1", host: "gateway-test.xxx.com"
2020/10/30 08:04:17 [error] 53#0: *20086162 lua coroutine: runtime error: /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:430: no data
stack traceback:
coroutine 0:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:430: in function 'get_instance'
/usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:496: in function </usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:495>
coroutine 1:
[C]: in function 'resume'
coroutine.wrap:21: in function <coroutine.wrap:21>
/usr/local/share/lua/5.1/kong/init.lua:756: in function 'access'
access_by_lua(nginx-kong.conf:89):2: in main chunk, client: 172.17.68.1, server: kong, request: "GET /test HTTP/1.1", host: "gateway-test.xxx.com"


lampnick commented Nov 3, 2020

Hi @javierguerragiraldez, when I run the following bash script on the Kong pod, it can't connect to the go-pluginserver; it just blocks.

#!/bin/bash

echo "pwd: $PWD"

SOCKET='/usr/local/kong/go_pluginserver.sock'

msg() {
        query="$1"
        rq <<< "$query"
        rq <<< "$query" -M | hexdump
        METHOD="$(rq <<< "$query" -- 'at([2])')"
        response="$(rq <<< "$query" -M | nc -U "$SOCKET" | rq -m | jq 'select(.[0]==1)' )"
        rq <<< "$response"

        ERROR="$(jq <<< "$response" '.[2]')"
        RESULT="$(jq <<< "$response" '.[3]')"
        echo "ERROR:===>$ERROR"
        echo "result:===>$RESULT"
}

assert_noerr() {
        if [ "$ERROR" != "null" ]; then
                echo "query: $query"
                echo "response: $response"
                echo "$METHOD : $ERROR" > /dev/stderr
                exit 1
        fi
        echo "$METHOD: ok"
}

assert_fld_match() {
        fld="$1"
        pattern="$2"

        fld_v="$(query_result '.'$fld'')"
        if [[ "$fld_v" =~ "$pattern" ]]; then
                echo "==> $fld_v : ok"
        else
                echo "==> $fld_v : no match '$pattern'"
                exit 1
        fi
}

query_result() {
        jq <<< "$RESULT" "$1"
}

msg '[0, 19, "plugin.SetPluginDir", ["/opt/kong/plugins"]]'
assert_noerr
#
msg '[0, 19, "plugin.GetStatus", []]'
assert_noerr
assert_fld_match 'Plugins' '{}'

Log:

pwd: /home/kong
[WARN] [rq] You started rq without any input flags, which puts it in JSON input mode.
[WARN] [rq] It's now waiting for JSON input, which might not be what you wanted.
[WARN] [rq] Specify (-j|--input-json) explicitly or run rq --help once to suppress this warning.
[
  0,
  19,
  "plugin.SetPluginDir",
  [
    "/opt/kong/plugins"
  ]
]
[WARN] [rq] You started rq without any input flags, which puts it in JSON input mode.
[WARN] [rq] It's now waiting for JSON input, which might not be what you wanted.
[WARN] [rq] Specify (-j|--input-json) explicitly or run rq --help once to suppress this warning.
0000000 0094 b313 6c70 6775 6e69 532e 7465 6c50
0000010 6775 6e69 6944 9172 2fb1 706f 2f74 6f6b
0000020 676e 702f 756c 6967 736e
000002a
[WARN] [rq] You started rq without any input flags, which puts it in JSON input mode.
[WARN] [rq] It's now waiting for JSON input, which might not be what you wanted.
[WARN] [rq] Specify (-j|--input-json) explicitly or run rq --help once to suppress this warning.
[WARN] [rq] You started rq without any input flags, which puts it in JSON input mode.
[WARN] [rq] It's now waiting for JSON input, which might not be what you wanted.
[WARN] [rq] Specify (-j|--input-json) explicitly or run rq --help once to suppress this warning.

Using netcat to connect to the go-pluginserver also blocks:

nc -w 2s -U /usr/local/kong/go_pluginserver.sock | rq -m | jq 'select(.[0]==1)'

But running nc -w 2s -U /usr/local/kong/go_pluginserver.sock by itself returns binary data containing serverPid.
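
For reference, the 42-byte hexdump earlier is just the MessagePack framing of the request array. A minimal Go sketch that reproduces the same bytes by hand (a hand-rolled encoder for fixarray/fixstr only, as an illustration; a real client would use a msgpack library):

```go
package main

import "fmt"

// fixstr encodes a short string (< 32 bytes) as a MessagePack fixstr.
func fixstr(s string) []byte {
	return append([]byte{0xA0 | byte(len(s))}, s...)
}

// encodeSetPluginDir builds the request
// [0, 19, "plugin.SetPluginDir", ["/opt/kong/plugins"]].
func encodeSetPluginDir() []byte {
	msg := []byte{
		0x94, // fixarray, 4 elements
		0x00, // message type 0 = request
		0x13, // message id 19
	}
	msg = append(msg, fixstr("plugin.SetPluginDir")...)
	msg = append(msg, 0x91) // fixarray, 1 argument
	msg = append(msg, fixstr("/opt/kong/plugins")...)
	return msg
}

func main() {
	msg := encodeSetPluginDir()
	fmt.Printf("%d bytes: % x\n", len(msg), msg)
}
```

The 42-byte total (0x2a) and the leading 94 00 13 b3 bytes match the hexdump output above; hexdump without -C prints 16-bit little-endian words, which is why the byte pairs appear swapped there.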

@javierguerragiraldez (Contributor) commented:

If you're still interested in getting your plugin to run, a better approach would be to share your Go code or, better yet, a minimal subset of it that triggers the same "starting instance: no data" message. As mentioned elsewhere, that message means the Go process is crashing when your code is instantiated.


lampnick commented Nov 4, 2020

Hi @javierguerragiraldez, the full code is in this repo: https://github.com/lampnick/kong-rate-limiting-golang/blob/master/custom-rate-limiting.go. The minimal code is:

// match condition: "or"
const matchConditionOr = "or"

var ctx = context.Background()

// list of configured rate-limit resources
var limitResourceList []limitResource

// Kong plugin configuration
type Config struct {
	QPS                 int    `json:"QPS" validate:"required,gte=0"` // QPS limit for matched requests
	Log                 bool   `json:"Log" validate:"omitempty"`      // whether to write log entries
	Path                string `json:"Path"`                          // resource path
	LimitResourcesJson  string `json:"LimitResourcesJson"`            // rate-limit rules, configured as JSON and parsed at load time
	RedisHost           string `json:"RedisHost" validate:"required"`
	RedisPort           int    `json:"RedisPort" validate:"required,gte=1,lte=65535"`
	RedisAuth           string `json:"RedisAuth" validate:"omitempty"`
	RedisTimeoutSecond  int    `json:"RedisTimeoutSecond" validate:"required,gt=0"`
	RedisDB             int    `json:"RedisDB" validate:"omitempty,gte=0"`
	RedisLimitKeyPrefix string `json:"RedisLimitKeyPrefix" validate:"omitempty"`         // prefix for Redis rate-limit keys
	HideClientHeader    bool   `json:"HideClientHeader" validate:"omitempty"`            // hide rate-limit info from response headers
	MatchCondition      string `json:"MatchCondition" validate:"omitempty,oneof=and or"` // "and": all rules must match; "or": any rule matches; empty defaults to "and"
}

// a single rate-limit resource rule
type limitResource struct {
	Type  string `json:"type"`  // limit types, comma-separated, e.g. header,query,body
	Key   string `json:"key"`   // key to match
	Value string `json:"value"` // values to match, comma-separated, e.g. value1,value2,orderId1
}

func New() interface{} {
	return &Config{}
}

// kong Access phase
func (conf Config) Access(kong *pdk.PDK) {
	defer func() {
		if err := recover(); err != nil {
			log.Printf("kong plugin panic at: %v, err: %v", time.Now(), err)
			if kong == nil {
				log.Printf("kong fatal err ===> kong is nil at: %v", time.Now())
			} else {
				_ = kong.Log.Err(fmt.Sprint(err))
			}
		}
	}()
	_ = kong.Response.SetHeader("X-Rate-Limiting-Plugin-Version", version)
	unix := time.Now().Unix()
	// validate the configuration
	if err := conf.checkConfig(); err != nil {
		_ = kong.Log.Err("[checkConfig] ", err.Error())
		return
	}

	// check whether the current request should be rate limited
	limitKey, matched := conf.checkNeedRateLimit(kong)
	if !matched {
		return
	}
	// build the limit identifier
	identifier, err := conf.getIdentifier(kong, limitKey)
	if err != nil {
		_ = kong.Log.Err("[getIdentifier] ", err.Error())
		return
	}
	remaining, stop, err := conf.getRemainingAndIncr(kong, identifier, unix)
	if err != nil {
		// on error, only log; do not block the request
		_ = kong.Log.Err("[getUsage] ", err.Error())
		return
	}
	// unless headers are hidden, expose limit info in the response headers
	if !conf.HideClientHeader {
		_ = kong.Response.SetHeader("X-Rate-Limiting-Limit-QPS", strconv.Itoa(conf.QPS))
		_ = kong.Response.SetHeader("X-Rate-Limiting-Remaining", strconv.Itoa(remaining))
	}
	if stop {
		kong.Response.Exit(429, "API rate limit exceeded", nil)
		return
	}
}

// get the remaining quota and increment the counter in one step
func (conf Config) getRemainingAndIncr(kong *pdk.PDK, identifier string, unix int64) (remaining int, stop bool, err error) {
	stop = false
	remaining = 0
	limitKey := conf.getRateLimitKey(identifier, unix)
	if conf.Log {
		_ = kong.Log.Err("[rateLimitKey] ", limitKey)
	}
	// set the expiration only on the first increment of a window; if the window has expired, a new one starts; a Lua script keeps the operation atomic
	luaScript := `
		local key, value, expiration = KEYS[1], tonumber(ARGV[1]), ARGV[2]
		local newVal = redis.call("incrby", key, value)
		if newVal == value then
			redis.call("expire", key, expiration)
		end
		return newVal - 1
`
	redisClient := conf.newRedisClient()
	defer redisClient.Close()
	result, err := redisClient.Eval(ctx, luaScript, []string{limitKey}, 1, 1).Result()
	if err == redis.Nil {
		return remaining, stop, nil
	} else if err != nil {
		return remaining, stop, err
	} else {
		int64Usage := result.(int64)
		usageStr := strconv.FormatInt(int64Usage, 10)
		intUsage, err := strconv.Atoi(usageStr)
		if err != nil {
			return remaining, stop, err
		}
		remaining = conf.QPS - intUsage
		if remaining <= 0 {
			stop = true
			remaining = 0
		} else {
			// decrement for a friendlier display
			remaining -= 1
		}
		return remaining, stop, nil
	}
}

// build the rate-limit identifier
func (conf Config) getIdentifier(kong *pdk.PDK, limitKey string) (string, error) {
	var identifier string
	consumer, err := kong.Client.GetConsumer()
	if err != nil {
		return "", err
	}
	service, err := kong.Router.GetService()
	if err != nil {
		return "", err
	}
	route, err := kong.Router.GetRoute()
	if err != nil {
		return "", err
	}
	if consumer.Id != "" {
		identifier += ":consumer:" + consumer.Id
	}
	if service.Id != "" {
		identifier += ":service:" + service.Id
	}
	if route.Id != "" {
		identifier += ":route:" + route.Id
	}
	identifier += ":" + limitKey
	return identifier, nil
}


// build a Redis client from the plugin configuration
func (conf Config) newRedisClient() *redis.Client {
	options := &redis.Options{
		Addr:        conf.RedisHost + ":" + strconv.Itoa(conf.RedisPort),
		Password:    conf.RedisAuth,
		DB:          conf.RedisDB,
		DialTimeout: time.Duration(conf.RedisTimeoutSecond) * time.Second,
	}
	return redis.NewClient(options)
}

// check whether rate limiting applies and return the matched limit key
func (conf Config) checkNeedRateLimit(kong *pdk.PDK) (limitKey string, matched bool) {
	var matchedKey []string
	for _, limitResource := range limitResourceList {
		typeList := strings.Split(limitResource.Type, ",")
		valueList := strings.Split(limitResource.Value, ",")
		rateLimitValue, matched := conf.matchRateLimitValue(kong, limitResource.Key, typeList, valueList)
		// with the "or" condition, a single match succeeds (an unset MatchCondition is the empty string, which defaults to "and")
		if matchConditionOr == conf.MatchCondition {
			if matched {
				return rateLimitValue, true
			}
		} else {
			// with the "and" condition, any miss fails the whole match; otherwise collect the matched key
			if !matched {
				return "", false
			} else {
				matchedKey = append(matchedKey, rateLimitValue)
			}
		}
	}
	// if limitResourceList is empty (neither Path nor LimitResourcesJson configured), report a match
	// if every rule matched, join the keys into a single string
	if len(limitResourceList) == len(matchedKey) {
		return strings.Join(matchedKey, ":"), true
	}
	return "", false
}

//match rate limit key
func (conf Config) matchRateLimitValue(kong *pdk.PDK, key string, typeList, valueList []string) (limitKey string, matched bool) {
	for _, limitType := range typeList {
		limitType = strings.ToLower(limitType)
		switch limitType {
		case "header":
			find, err := kong.Request.GetHeader(key)
			// skip on lookup failure
			if err != nil {
				continue
			}
			// if the header value is in the limited list, return it
			if inSlice(find, valueList) {
				return find, true
			}
		case "query":
			find, err := kong.Request.GetQueryArg(key)
			// skip on lookup failure
			if err != nil {
				continue
			}
			// if the query value is in the limited list, return it
			if inSlice(find, valueList) {
				return find, true
			}
		case "body":
			rawBody, err := kong.Request.GetRawBody()
			// skip on lookup failure
			if err != nil {
				continue
			}
			//TODO if json format or other raw format, maybe use contain judge or use equal after decode to key value pairs.
			if !strings.Contains(rawBody, key) {
				continue
			}
			bodySlice := strings.Split(rawBody, "&")
			for _, value := range valueList {
				limitValue := key + "=" + value
				if inSlice(limitValue, bodySlice) {
					return value, true
				}
			}
		case "path":
			find, err := kong.Request.GetPath()
			// skip on lookup failure
			if err != nil {
				continue
			}
			// if the path is in the limited list, return it
			if inSlice(find, valueList) {
				return find, true
			}
		case "cookie":
			// not supported
			continue
		case "ip":
			// to be supported in a future iteration
			continue
		default:
			continue
		}
	}
	return "", false
}
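
As a side note, the Redis Lua script in getRemainingAndIncr implements a per-second fixed-window counter. The same logic can be sketched in plain Go with an in-memory map (a hypothetical illustration only; the plugin uses Redis so the counter is shared across all Kong workers):

```go
package main

import "fmt"

// window is an in-memory fixed-window counter: one bucket per second.
type window struct {
	qps    int           // allowed requests per second
	counts map[int64]int // bucket: unix second -> request count
}

// allow records one request at the given unix second and reports whether
// it is within the QPS budget, plus the remaining quota for that second.
func (w *window) allow(now int64) (remaining int, allowed bool) {
	w.counts[now]++
	used := w.counts[now]
	if used > w.qps {
		return 0, false
	}
	return w.qps - used, true
}

func main() {
	w := &window{qps: 2, counts: map[int64]int{}}
	for i := 0; i < 3; i++ {
		remaining, ok := w.allow(100)
		fmt.Println(remaining, ok)
	}
}
```

With qps = 2, the third request in the same second is rejected; in the real plugin the bucket key also embeds consumer, service, and route IDs, and expiry replaces the map cleanup.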

Thanks!


lampnick commented Nov 9, 2020

@javierguerragiraldez Hello, are there any problems with my Go plugin code?

@tuanpm90 commented:

Hi @javierguerragiraldez, I have the same issue as @lampnick. At some point I get this error:
[error] 45#0: *800160 [kong] init.lua:255 [ewallet-oauth] /usr/local/share/lua/5.1/kong/db/dao/plugins/go.lua:438: no data, client: 192.168.35.35, server: kong
[error] 45#0: *800160 [kong] go.lua:435 [ewallet-oauth] starting instance: no data, client: 192.168.35.35, server: kong, request: "GET /v2/users/profile HTTP/1.0"
I have to restart Kong to recover.
How can I solve this issue? Thanks.

@lampnick (Author) commented:

@javierguerragiraldez Hi, I added pprof to go-pluginserver and captured the following goroutine profile; the go-pluginserver is deadlocked.

goroutine profile: total 99
49 @ 0x627410 0x637b20 0x637b0b 0x637887 0xa2c699 0xa2c1f0 0x685bc6 0x685384 0x98178f 0x654a71
#	0x637886	sync.runtime_SemacquireMutex+0x46	/usr/local/go/src/runtime/sema.go:71
#	0xa2c698	sync.(*RWMutex).RLock+0x4d8		/usr/local/go/src/sync/rwmutex.go:50
#	0xa2c1ef	main.(*PluginServer).HandleEvent+0x2f	/tmp/go/src/kong-plugins/go-pluginserver/event.go:32
#	0x685bc5	reflect.Value.call+0x5f5		/usr/local/go/src/reflect/value.go:460
#	0x685383	reflect.Value.Call+0xb3			/usr/local/go/src/reflect/value.go:321
#	0x98178e	net/rpc.(*service).call+0x16e		/usr/local/go/src/net/rpc/server.go:377

41 @ 0x627410 0x637b20 0x637b0b 0x637772 0x6722c4 0x9821ea 0x983971 0x654a71
#	0x637771	sync.runtime_Semacquire+0x41		/usr/local/go/src/runtime/sema.go:56
#	0x6722c3	sync.(*WaitGroup).Wait+0x63		/usr/local/go/src/sync/waitgroup.go:130
#	0x9821e9	net/rpc.(*Server).ServeCodec+0x1e9	/usr/local/go/src/net/rpc/server.go:478
#	0x983970	net/rpc.ServeCodec+0x40			/usr/local/go/src/net/rpc/server.go:673

3 @ 0x627410 0x637b20 0x637b0b 0x637887 0x67081c 0x671ee7 0x671e72 0xa2f45c 0xa2d458 0x685bc6 0x685384 0x98178f 0x654a71
#	0x637886	sync.runtime_SemacquireMutex+0x46	/usr/local/go/src/runtime/sema.go:71
#	0x67081b	sync.(*Mutex).lockSlow+0xfb		/usr/local/go/src/sync/mutex.go:138
#	0x671ee6	sync.(*Mutex).Lock+0x96			/usr/local/go/src/sync/mutex.go:81
#	0x671e71	sync.(*RWMutex).Lock+0x21		/usr/local/go/src/sync/rwmutex.go:98
#	0xa2f45b	main.(*PluginServer).loadPlugin+0x5b	/tmp/go/src/kong-plugins/go-pluginserver/pluginserver.go:109
#	0xa2d457	main.(*PluginServer).StartInstance+0x67	/tmp/go/src/kong-plugins/go-pluginserver/instance.go:91
#	0x685bc5	reflect.Value.call+0x5f5		/usr/local/go/src/reflect/value.go:460
#	0x685383	reflect.Value.Call+0xb3			/usr/local/go/src/reflect/value.go:321
#	0x98178e	net/rpc.(*service).call+0x16e		/usr/local/go/src/net/rpc/server.go:377

1 @ 0x627410 0x62145a 0x620a25 0x6bace5 0x6bd668 0x6bd647 0x747542 0x764762 0x7632e7 0x8f1e00 0x8f1b27 0xa31e04 0xa31e0f 0x654a71
#	0x620a24	internal/poll.runtime_pollWait+0x54		/usr/local/go/src/runtime/netpoll.go:184
#	0x6bace4	internal/poll.(*pollDesc).wait+0x44		/usr/local/go/src/internal/poll/fd_poll_runtime.go:87
#	0x6bd667	internal/poll.(*pollDesc).waitRead+0x1f7	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
#	0x6bd646	internal/poll.(*FD).Accept+0x1d6		/usr/local/go/src/internal/poll/fd_unix.go:384
#	0x747541	net.(*netFD).accept+0x41			/usr/local/go/src/net/fd_unix.go:238
#	0x764761	net.(*TCPListener).accept+0x31			/usr/local/go/src/net/tcpsock_posix.go:139
#	0x7632e6	net.(*TCPListener).Accept+0x46			/usr/local/go/src/net/tcpsock.go:261
#	0x8f1dff	net/http.(*Server).Serve+0x27f			/usr/local/go/src/net/http/server.go:2896
#	0x8f1b26	net/http.(*Server).ListenAndServe+0xb6		/usr/local/go/src/net/http/server.go:2825
#	0xa31e03	net/http.ListenAndServe+0xc3			/usr/local/go/src/net/http/server.go:3081
#	0xa31e0e	main.main.func2+0xce				/tmp/go/src/kong-plugins/go-pluginserver/main.go:138

1 @ 0x627410 0x62145a 0x620a25 0x6bace5 0x6bd668 0x6bd647 0x747542 0x76aee2 0x7693e7 0xa2e59b 0xa2ec06 0xa2ed94 0x62703e 0x654a71
#	0x620a24	internal/poll.runtime_pollWait+0x54		/usr/local/go/src/runtime/netpoll.go:184
#	0x6bace4	internal/poll.(*pollDesc).wait+0x44		/usr/local/go/src/internal/poll/fd_poll_runtime.go:87
#	0x6bd667	internal/poll.(*pollDesc).waitRead+0x1f7	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
#	0x6bd646	internal/poll.(*FD).Accept+0x1d6		/usr/local/go/src/internal/poll/fd_unix.go:384
#	0x747541	net.(*netFD).accept+0x41			/usr/local/go/src/net/fd_unix.go:238
#	0x76aee1	net.(*UnixListener).accept+0x31			/usr/local/go/src/net/unixsock_posix.go:162
#	0x7693e6	net.(*UnixListener).Accept+0x46			/usr/local/go/src/net/unixsock.go:260
#	0xa2e59a	main.runServer+0xfa				/tmp/go/src/kong-plugins/go-pluginserver/main.go:75
#	0xa2ec05	main.startServer+0x255				/tmp/go/src/kong-plugins/go-pluginserver/main.go:104
#	0xa2ed93	main.main+0xc3					/tmp/go/src/kong-plugins/go-pluginserver/main.go:140
#	0x62703d	runtime.main+0x21d				/usr/local/go/src/runtime/proc.go:203

1 @ 0x627410 0x643147 0x64311d 0xa31cec 0x654a71
#	0x64311c	time.Sleep+0x12c	/usr/local/go/src/runtime/time.go:105
#	0xa31ceb	main.main.func1+0x2b	/tmp/go/src/kong-plugins/go-pluginserver/main.go:131

1 @ 0x6a5e65 0x6a401a 0x6bc8a1 0x6bc82c 0x6c2867 0x6c2837 0x718277 0x718eb0 0xa2d378 0xa2d77a 0x685bc6 0x685384 0x98178f 0x654a71
#	0x6a5e64	syscall.Syscall+0x4				/usr/local/go/src/syscall/asm_linux_amd64.s:18
#	0x6a4019	syscall.write+0x59				/usr/local/go/src/syscall/zsyscall_linux_amd64.go:1005
#	0x6bc8a0	syscall.Write+0x1a0				/usr/local/go/src/syscall/syscall_unix.go:202
#	0x6bc82b	internal/poll.(*FD).Write+0x12b			/usr/local/go/src/internal/poll/fd_unix.go:268
#	0x6c2866	os.(*File).write+0x76				/usr/local/go/src/os/file_unix.go:276
#	0x6c2836	os.(*File).Write+0x46				/usr/local/go/src/os/file.go:153
#	0x718276	log.(*Logger).Output+0x286			/usr/local/go/src/log/log.go:172
#	0x718eaf	log.Printf+0x7f					/usr/local/go/src/log/log.go:307
#	0xa2d377	main.(*PluginServer).expireInstances+0x457	/tmp/go/src/kong-plugins/go-pluginserver/instance.go:64
#	0xa2d779	main.(*PluginServer).StartInstance+0x389	/tmp/go/src/kong-plugins/go-pluginserver/instance.go:118
#	0x685bc5	reflect.Value.call+0x5f5			/usr/local/go/src/reflect/value.go:460
#	0x685383	reflect.Value.Call+0xb3				/usr/local/go/src/reflect/value.go:321
#	0x98178e	net/rpc.(*service).call+0x16e			/usr/local/go/src/net/rpc/server.go:377

1 @ 0x8e7520 0x654a71
#	0x8e7520	net/http.(*connReader).backgroundRead+0x0	/usr/local/go/src/net/http/server.go:676

1 @ 0x98cb45 0x98c960 0x98958a 0x995efa 0x996911 0x8ee5f4 0x8f04cd 0x8f1a44 0x8ed3e5 0x654a71
#	0x98cb44	runtime/pprof.writeRuntimeProfile+0x94	/usr/local/go/src/runtime/pprof/pprof.go:708
#	0x98c95f	runtime/pprof.writeGoroutine+0x9f	/usr/local/go/src/runtime/pprof/pprof.go:670
#	0x989589	runtime/pprof.(*Profile).WriteTo+0x3d9	/usr/local/go/src/runtime/pprof/pprof.go:329
#	0x995ef9	net/http/pprof.handler.ServeHTTP+0x339	/usr/local/go/src/net/http/pprof/pprof.go:245
#	0x996910	net/http/pprof.Index+0x6f0		/usr/local/go/src/net/http/pprof/pprof.go:268
#	0x8ee5f3	net/http.HandlerFunc.ServeHTTP+0x43	/usr/local/go/src/net/http/server.go:2007
#	0x8f04cc	net/http.(*ServeMux).ServeHTTP+0x1bc	/usr/local/go/src/net/http/server.go:2387
#	0x8f1a43	net/http.serverHandler.ServeHTTP+0xa3	/usr/local/go/src/net/http/server.go:2802
#	0x8ed3e4	net/http.(*conn).serve+0x874		/usr/local/go/src/net/http/server.go:1890

@javierguerragiraldez
Contributor

javierguerragiraldez commented Nov 17, 2020

I don't see a deadlock. What I see is threads #49 and #3 waiting for the lock held by thread #1 which is currently writing to the log to notify of a closed plugin instance. Please correct me if you think my reading is wrong.

If the process is spending a significant amount of time logging about closed instances, I'd check why they are being closed so often. During normal operation, a plugin instance is closed only after 60 seconds of inactivity and then opened again only on the next request that requires the plugin; that makes closing and reopening a very sparse occurrence, unless your plugin is crashing immediately.

@lampnick
Author

@javierguerragiraldez That's right, it's not a deadlock, but it is stuck. Goroutines #49 and #3 are stuck waiting for one of the locks in the sync package.
Firstly, my Go plugin has a defer function to recover from panics, and I added logs in that recover handler, but I never see any output from them. Secondly, there are no lock operations in my Go plugin. So I can't see any crash reason in my plugin. Do you know what else can cause a Go plugin to crash?

@lampnick
Author

@tuanpm90 Could you add pprof to your go-pluginserver?

@lampnick
Author

lampnick commented Dec 2, 2020

@gszr Hi, could you help me resolve this problem?

@javierguerragiraldez
Contributor

javierguerragiraldez commented Dec 2, 2020

I can't reproduce your error, but your code seems to depend on a global variable limitResourceList without any multithreading protection.
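For illustration, a minimal sketch of what guarding such a global map with a `sync.Mutex` could look like. The names and the map's shape here are assumptions; the actual `limitResourceList` from the reporter's plugin is not shown in this thread.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-in for the plugin's global state; every access
// goes through mu, since go-pluginserver serves RPCs concurrently.
var (
	mu                sync.Mutex
	limitResourceList = map[string]int{}
)

// incr safely bumps a counter; without mu, concurrent calls from
// different goroutines would be a data race on the map.
func incr(key string) {
	mu.Lock()
	defer mu.Unlock()
	limitResourceList[key]++
}

// get reads a counter under the same lock.
func get(key string) int {
	mu.Lock()
	defer mu.Unlock()
	return limitResourceList[key]
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			incr("requests")
		}()
	}
	wg.Wait()
	fmt.Println(get("requests")) // prints 100
}
```

Building the plugin with Go's race detector enabled (`go build -race`) can also surface this class of bug during testing.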

@lampnick
Author

lampnick commented Dec 3, 2020

I can't reproduce your error, but your code seems to depend on a global variable limitResourceList without any multithreading protection.

Thank you very much!

@javierguerragiraldez
Contributor

Closing the issue; please reopen if it remains.

@vaxilicaihouxian

I can't reproduce your error, but your code seems to depend on a global variable limitResourceList without any multithreading protection.

Thank you very much!

Is that a thread-safety problem?

@javierguerragiraldez
Contributor

Is that a thread-safety problem?

As far as I can tell, the custom plugin was thread-unsafe. Go is multithreaded by nature. In many cases you can avoid locks by keeping all mutable data local and communicating between goroutines via channels; but here a global dictionary was modified from different threads. To keep that thread-safe, you need a lock protecting the global variable.
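To illustrate the channel-based alternative mentioned above (all names here are illustrative, not taken from the plugin in question): a single goroutine owns the mutable map, and other goroutines talk to it over a channel, so no locks are needed at all.

```go
package main

import "fmt"

// op is a request sent to the single goroutine that owns the map.
type op struct {
	key   string
	reply chan int
}

// startOwner confines the mutable map to one goroutine; callers
// communicate over a channel instead of sharing memory.
func startOwner() chan<- op {
	ops := make(chan op)
	go func() {
		counts := map[string]int{} // only this goroutine touches it
		for o := range ops {
			counts[o.key]++
			o.reply <- counts[o.key]
		}
	}()
	return ops
}

// count increments and returns the counter for key via the owner.
func count(ops chan<- op, key string) int {
	reply := make(chan int)
	ops <- op{key: key, reply: reply}
	return <-reply
}

func main() {
	ops := startOwner()
	fmt.Println(count(ops, "a")) // prints 1
	fmt.Println(count(ops, "a")) // prints 2
}
```

This pattern ("share memory by communicating") trades the lock for a serialization point; for a hot path, a plain `sync.Mutex` or `sync.Map` may be simpler and faster.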

@vaxilicaihouxian

Is that a thread-safety problem?

As far as I can tell, the custom plugin was thread-unsafe. Go is multithreaded by nature. In many cases you can avoid locks by keeping all mutable data local and communicating between goroutines via channels; but here a global dictionary was modified from different threads. To keep that thread-safe, you need a lock protecting the global variable.

I have a plugin that has the same problem, but I did not use any global mutable data.
Here is my code:

/*
 * Forward to the upstream matching the datacenter (IDC) label,
 * based on the service name.
 */
package main

import (
	//"fmt"
	"os"
	"strings"

	"github.com/Kong/go-pdk"
)

type Config struct {
	Upstreams []string
}

func New() interface{} {
	return &Config{}
}

func (conf Config) Access(kong *pdk.PDK) {
	idc := os.Getenv("IDC")
	kong.Log.Err("IDC :" + idc)
	service, err := kong.Router.GetService()
	if err != nil {
		kong.Log.Err(err.Error())
		return
	}
	//kong.Log.Err("plugin conf:" + conf.Upstreams[0])
	for _, upstream := range conf.Upstreams {
		upstreamConfs := strings.Split(upstream, "|")
		if len(upstreamConfs) != 2 {
			kong.Log.Err(service.Name + " IDC plugin: wrong upstream format: " + upstream)
			continue
		}
		upstreamConfIdc := upstreamConfs[0]
		upstreamConfTargetUpstream := upstreamConfs[1]
		//kong.Log.Err("upIdc:" + upstreamConfIdc + " up:" + upstreamConfTargetUpstream)
		if idc == upstreamConfIdc {
			kong.Service.SetUpstream(upstreamConfTargetUpstream)
			return
		}
	}

	kong.Log.Err(service.Name + " Missing IDC:" + idc)
	return
}

@javierguerragiraldez
Contributor

Please open a new issue, with relevant versions and log output.
