
help request: no checker for upstreams #7964

Closed
Raymond0331 opened this issue Sep 22, 2022 · 3 comments

@Raymond0331

Description

I have added health checks to my upstream, but when I query them with the
curl http://127.0.0.1:9092/v1/healthcheck/upstreams/426515099281261249 -s | jq . command, it reports:
{
"error_msg": "no checker for upstreams[426515099281261249]"
}
I've bound a route to this upstream and it's working.
curl 127.0.0.1:9080/dashboard -H "Host: mydvs.org" -s | jq . returns the following result (ignore the error; it is the expected response format from the backend):
{
"errors": [
{
"message": "No query document supplied"
}
]
}
The /v1/healthcheck endpoint returns an empty object:
curl http://127.0.0.1:9092/v1/healthcheck -s |jq .
{}

The upstream configuration is like the below:
{
"nodes": [
{
"host": "prod-1.com",
"port": 443,
"weight": 10
},
{
"host": "prod-2.com",
"port": 443,
"weight": 10
}
],
"timeout": {
"connect": 6,
"send": 6,
"read": 6
},
"type": "roundrobin",
"checks": {
"active": {
"concurrency": 10,
"healthy": {
"http_statuses": [
400
],
"interval": 1,
"successes": 2
},
"host": "dvsapi",
"http_path": "/dashboard",
"https_verify_certificate": false,
"port": 443,
"req_headers": [
"User-Agent: curl/7.29.0"
],
"timeout": 1,
"type": "https",
"unhealthy": {
"http_failures": 5,
"http_statuses": [
500,
501,
502,
503,
504,
505
],
"interval": 1,
"tcp_failures": 2,
"timeouts": 3
}
},
"passive": {
"healthy": {
"http_statuses": [
400
],
"successes": 2
},
"type": "http",
"unhealthy": {
"http_failures": 2,
"http_statuses": [
500,
503
],
"tcp_failures": 2,
"timeouts": 7
}
}
},
"scheme": "https",
"pass_host": "pass",
"name": "dvsapi",
"keepalive_pool": {
"idle_timeout": 60,
"requests": 1000,
"size": 320
}
}
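
For reference, a minimal sketch of how such an upstream can be created or updated through the Admin API, assuming the default Admin API address and X-API-KEY from config-default.yaml; upstream.json is a hypothetical file holding the configuration shown above:

curl -X PUT http://127.0.0.1:9080/apisix/admin/upstreams/426515099281261249 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -d @upstream.json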

I've checked error.log and it has some warnings like the below (for security I have masked the IP addresses in the xx.xx.xxx.95 format):
2022/09/22 01:22:20 [warn] 48#48: *52004453 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) unhealthy TIMEOUT increment (1/3) for 'dvsapi(xx.xx.xxx.20:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:20 [warn] 48#48: *52004453 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) unhealthy TIMEOUT increment (1/3) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:20 [warn] 48#48: *52004553 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (1/2) for 'dvsapi(xx.xx.xxx.20:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:20 [warn] 48#48: *52004553 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (1/2) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:21 [warn] 48#48: *52004640 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (2/2) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:21 [warn] 48#48: *52004640 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (2/2) for 'dvsapi(xx.xx.xxx.20:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:45 [warn] 48#48: *52006813 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) unhealthy TIMEOUT increment (1/3) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:45 [warn] 48#48: *52006907 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (1/2) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080
2022/09/22 01:22:46 [warn] 48#48: *52007002 [lua] healthcheck.lua:1107: log(): [healthcheck] (upstream#/apisix/upstreams/426515099281261249) healthy SUCCESS increment (2/2) for 'dvsapi(xx.xx.xxx.95:443)', context: ngx.timer, client: 172.20.0.1, server: 0.0.0.0:9080

Environment

  • APISIX version (run apisix version): 2.15.0
  • Operating system (run uname -a): Linux 7414da4d1793 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 Linux
  • OpenResty / Nginx version (run openresty -V or nginx -V): nginx version: openresty/1.21.4.1
  • etcd version, if relevant (run curl http://127.0.0.1:9090/v1/server_info): {"hostname":"7414da4d1793","boot_time":1663234477,"etcd_version":"3.4.0","version":"2.15.0","id":"67b7e797-4fe7-43b1-a6ec-2965d8581edd"}
  • APISIX Dashboard version, if relevant: 2.13.0
  • Plugin runner version, for issues related to plugin runners:
  • LuaRocks version, for installation issues (run luarocks --version): 3.8.0
@Raymond0331 (Author)

I've tried to reproduce this in OpenResty using the lua-resty-upstream-healthcheck script, and it works.

The init_worker_by_lua_block code is like the below:


init_worker_by_lua_block {
        local hc = require "resty.upstream.healthcheck"
        local ok, err = hc.spawn_checker{
            shm = "myhealthcheck",  -- defined by "lua_shared_dict"
            upstream = "dvsapi", -- defined by "upstream"
            type = "http",

            http_req = "GET /dashboard HTTP/1.0\r\nHost: dvsapi\r\n\r\n" , --.. "{\"query\":\"query IntrospectionQuery {\\n      __schema {\\n        subscriptionType { name }\\n      }\\n    }\",\"variables\":{}}" .. "\r\n\r\n",
            interval = 3000,  -- run the check cycle every 3 sec
            timeout = 3000,   -- 3 sec is the timeout for network operations
            fall = 3,  -- # of successive failures before turning a peer down
            rise = 2,  -- # of successive successes before turning a peer up
            valid_statuses = {400},  -- a list of valid HTTP status codes
            concurrency = 10,  -- concurrency level for test requests
        }
        ngx.log(ngx.ERR, "ok status: ", ok)
        if not ok then
            ngx.log(ngx.ERR, "failed to spawn health checker: ", err)
            return
        end
}

The nginx.conf is like the below:

upstream dvsapi {
    server prod-xxx-xxx-ip1.xxxxxx.com:443;
    server prod-xxx-xxx-ip2.xxxxxx.com:443;
    keepalive 1024;
}



server {
    listen       90;
    server_name  localhost;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://dvsapi;
    }

    location /server/status {
        access_log off;
        allow 127.0.0.1;

        default_type text/plain;
        content_by_lua_block {
            local hc = require "resty.upstream.healthcheck"
            ngx.say("Nginx Worker PID: ", ngx.worker.pid())
            ngx.print(hc.status_page())
        }
    }
}
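
Note that resty.upstream.healthcheck also requires the shared memory zone referenced by shm = "myhealthcheck" to be declared in the http block. A minimal sketch (the 1m size is an arbitrary choice):

http {
    lua_shared_dict myhealthcheck 1m;
    # upstream and server blocks from above go here
}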

Checking the server status with curl localhost:90/server/status gives the following result:

Nginx Worker PID: 16103
Upstream dvsapi
Primary Peers
xx.xx.xxx.95:443 up
xx.xx.xxx.20:443 up
Backup Peers

Am I doing something wrong in APISIX?

@tzssangglass tzssangglass self-assigned this Sep 22, 2022
@tzssangglass (Member)

It is a known issue, and was answered by: #7141 (comment)

@Raymond0331 (Author)

> It is a known issue, and was answered by: #7141 (comment)

I've found the problem: according to the 'unhealthy TIMEOUT' logs, the timeout and interval settings in unhealthy are set too short. After changing timeout to 10 and interval to 5, it went back to normal.
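
A rough sketch of where those values would sit in the active check configuration shown above (only the changed fields; everything else stays the same):

"checks": {
    "active": {
        "timeout": 10,
        "unhealthy": {
            "interval": 5
        }
    }
}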
