
Dynamic LB sync non-external backends only when necessary #5418

Closed
Wants to merge 3 commits.

Conversation

@ZxYuan (Contributor) commented Apr 22, 2020

This PR records a timestamp in the nginx shared configuration whenever the ingress controller calls the "/configuration/backends" API. Every second, each nginx worker checks that timestamp to decide whether it needs to sync non-external backends.

What this PR does / why we need it:

In our Kubernetes cluster, which has thousands of Service and Ingress resources, having every nginx worker sync backends every second puts a heavy load on the CPUs. In my test environment with no incoming requests, the impact on CPU usage is shown below; the heavy CPU load also noticeably increases the request latency of business APIs.
Although backends of Services that use ExternalName must still be synced regularly, since their DNS records may change at any time, frequent syncing of non-external backends can be avoided.
[figure: cpu_usage_by_svc_num]
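
Roughly, the mechanism can be sketched as follows. This is illustrative only, not the exact code in this PR; the shared-dict key names and function names are placeholders:

-- Illustrative sketch of the approach, not the PR's exact code.
-- Assumes an OpenResty shared dict declared as `lua_shared_dict configuration_data`.
local configuration_data = ngx.shared.configuration_data

-- Controller-facing side: the /configuration/backends handler stores the
-- payload and records when it was last updated.
local function store_backends(raw_backends)
  local ok, err = configuration_data:set("raw_backends", raw_backends)
  if not ok then
    ngx.log(ngx.ERR, "error storing backends: ", tostring(err))
    return
  end
  ngx.update_time()
  configuration_data:set("backends_last_synced_at", ngx.time())
end

-- Worker side: runs once per second (e.g. via ngx.timer.every); the expensive
-- rebuild happens only when the shared timestamp has moved forward.
local backends_last_synced_at = 0

local function sync_backends()
  local remote_ts = configuration_data:get("backends_last_synced_at") or 0
  if remote_ts <= backends_last_synced_at then
    -- Nothing new from the controller: skip the full rebuild. Only backends of
    -- ExternalName services still need refreshing, since DNS can change anytime.
    return
  end
  -- ...decode raw_backends here and rebuild this worker's balancers...
  backends_last_synced_at = remote_ts
end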

Types of changes

I'm not sure about the type of change, but this does modify how the dynamic LB works internally, so perhaps a breaking change?

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Which issue/s this PR fixes

How Has This Been Tested?

Create N Ingress rules and N corresponding non-ExternalName Services, with each Ingress rule routing to one of the Services. Without this patch, when N is large, each nginx worker uses up much of a CPU core, as the figure above shows.

Checklist:

  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I've read the CONTRIBUTION guide
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Apr 22, 2020
@k8s-ci-robot (Contributor) commented:
Hi @ZxYuan. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test label (indicates a PR that requires an org member to verify it is safe to test) and the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) on Apr 22, 2020
@k8s-ci-robot (Contributor) commented:
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ZxYuan
To complete the pull request process, please assign elvinefendi
You can assign the PR to them by writing /assign @elvinefendi in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ElvinEfendi (Member) commented:

@ZxYuan can you share some more numbers please, ideally graphs? I'd like to see how much impact this change has on CPU usage.

@aledbf (Member) commented Apr 22, 2020:

/ok-to-test

@k8s-ci-robot added the ok-to-test label (indicates a non-member PR verified by an org member that is safe to test) and removed the needs-ok-to-test label on Apr 22, 2020
@aledbf (Member) commented Apr 22, 2020:

/assign @ElvinEfendi

@@ -36,6 +36,8 @@ local IMPLEMENTATIONS = {

local _M = {}
local balancers = {}
local external_backends = {}
Member:

What about backends_with_external_name?

@@ -36,6 +36,8 @@ local IMPLEMENTATIONS = {

local _M = {}
local balancers = {}
local external_backends = {}
local last_timestamp = 0
Member:

I think this should be more specific and indicate that it is specifically about backend sync - what about backends_last_synced_at?

@@ -92,6 +94,12 @@ local function format_ipv6_endpoints(endpoints)
return formatted_endpoints
end

local function use_external_name(backend)
Member:

Similar to https://github.com/kubernetes/ingress-nginx/pull/5418/files#r412948805, I'd go with is_backend_with_external_name.
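
For context, the predicate being renamed boils down to checking the service type carried in the backend payload. A minimal sketch, with the field names assumed from the backend JSON the controller posts:

-- Minimal sketch of the predicate under discussion; field names are assumptions.
local function is_backend_with_external_name(backend)
  return backend.service
    and backend.service.spec
    and backend.service.spec["type"] == "ExternalName"
end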

end
return
end
last_timestamp = timestamp
Member:

Are you deliberately updating this at the beginning of the function body to avoid retries in case, for example, a JSON decoding error happens? Intuitively I would have expected this update at the end of the function body. I wonder how we can make the intention clearer for the reader.
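
For comparison, the variant the reviewer expected would update the marker only after a successful sync, so that a decode failure is retried on the next tick. A sketch only, assuming helpers such as configuration.get_backends_data() and a get_backends_last_synced_at() accessor (names here are illustrative, not the PR's final code):

-- Sketch: update the worker-local marker at the end, after a successful sync,
-- so that a JSON decode error is retried on the next timer tick.
local cjson = require("cjson.safe")
local configuration = require("configuration")

local backends_last_synced_at = 0

local function sync_backends()
  -- assumed to return a number (0 if the controller has never posted backends)
  local remote_ts = configuration.get_backends_last_synced_at() or 0
  if remote_ts <= backends_last_synced_at then
    return
  end

  local backends_data = configuration.get_backends_data()
  if not backends_data then
    return
  end

  local new_backends, err = cjson.decode(backends_data)
  if not new_backends then
    ngx.log(ngx.ERR, "could not parse backends data: ", tostring(err))
    return  -- marker untouched: this worker retries on the next tick
  end

  -- ...rebuild this worker's balancers from new_backends...

  backends_last_synced_at = remote_ts
end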

@@ -17,6 +17,14 @@ function _M.get_general_data()
return configuration_data:get("general")
end

function _M.get_timestamp_data()
local timestamp = configuration_data:get("timestamp")

@ElvinEfendi (Member) commented Apr 22, 2020:
s/timestamp/raw_backends_last_sycned_at

@@ -177,6 +185,14 @@ local function handle_backends()
return
end

local timestamp = os.time()
Member:

Why not ngx.update_time and ngx.time?
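
For reference, the suggestion would look roughly like this sketch: ngx.time() reads nginx's cached time instead of issuing a syscall on every call, and ngx.update_time() refreshes that cache first so the stored value is current.

-- Sketch of the reviewer's suggestion: refresh nginx's time cache, then read it.
ngx.update_time()
local timestamp = ngx.time()  -- epoch seconds from nginx's cached time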

local timestamp = os.time()
success, err = configuration_data:set("timestamp", timestamp)
if not success then
ngx.log(ngx.ERR, "dynamic-configuration: error setting sync timestamp: " .. tostring(err))
@ElvinEfendi (Member) commented Apr 22, 2020:

Please include some information in the error message about the potential consequences of not being able to update this timestamp.
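
One possible wording, as a sketch only (the key name here is illustrative, not the PR's final code or message):

-- Sketch: spell out the consequence of the failed update, i.e. workers will
-- keep serving the previously synced backends until a later update succeeds.
local configuration_data = ngx.shared.configuration_data

local ok, err = configuration_data:set("raw_backends_last_synced_at", ngx.time())
if not ok then
  ngx.log(ngx.ERR, "dynamic-configuration: error updating backends sync timestamp: ",
          tostring(err), " (workers will keep using the previously synced backends)")
end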

@@ -129,6 +135,15 @@ local function sync_backend(backend)
end

local function sync_backends()
local timestamp = configuration.get_timestamp_data()
if timestamp <= last_timestamp then
Member:

Should we force sync if last_timestamp - timestamp is too large?

@ZxYuan (Contributor, Author) replied:

Do you mean we should force a sync if the current timestamp is much larger than the backends_last_sycned_at stored in each worker, to avoid backend syncing getting stuck when updating raw_backends_last_sycned_at in the shared configuration data fails?
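
A safety net along those lines might look like the following sketch; the threshold value and all names are illustrative, not part of this PR:

-- Illustrative safety net, not code from this PR: force a full sync when the
-- worker has not synced for longer than FORCE_SYNC_INTERVAL, even if the shared
-- timestamp could not be read or updated.
local configuration_data = ngx.shared.configuration_data

local FORCE_SYNC_INTERVAL = 60  -- seconds; arbitrary example value
local backends_last_synced_at = 0

local function should_sync_backends()
  local remote_ts = configuration_data:get("raw_backends_last_synced_at") or 0
  ngx.update_time()
  local now = ngx.time()

  if remote_ts > backends_last_synced_at then
    return true  -- the controller pushed new backends since this worker's last sync
  end
  if now - backends_last_synced_at > FORCE_SYNC_INTERVAL then
    return true  -- stale for too long: resync even without a newer shared timestamp
  end
  return false
end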

@ZxYuan (Contributor, Author) commented Apr 23, 2020:

@ZxYuan can you share some more numbers please, ideally graphs? I'd like to see how much impact this change has on CPU usage.

Thanks for the suggestions; I'll push a patch later.
The figures below show how much impact the current LB has on CPU usage. The current version (0.30.0) does a little better than what I described above, but still has the issue.

[figures: cpu_usage_by_svc_num, measured for 1000, 2000, 3000, 4000, 5000, and 6000 services (or ingresses)]

@k8s-ci-robot (Contributor) commented:
@ZxYuan: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-ingress-nginx-e2e-1-17 7d4aea1 link /test pull-ingress-nginx-e2e-1-17
pull-ingress-nginx-e2e-1-15 7d4aea1 link /test pull-ingress-nginx-e2e-1-15
pull-ingress-nginx-e2e-1-18 7d4aea1 link /test pull-ingress-nginx-e2e-1-18
pull-ingress-nginx-e2e-1-16 7d4aea1 link /test pull-ingress-nginx-e2e-1-16

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@aledbf (Member) commented Apr 28, 2020:

@ZxYuan do you have a script to generate this scenario?

@ZxYuan (Contributor, Author) commented Apr 30, 2020:

@ZxYuan do you have a script to generate this scenario?

FYI: create an nginx Deployment listening on port 80 with the label app: nginx.
Then save the following YAML as press.yaml.tmpl:

kind: Service
apiVersion: v1
metadata:
  name: service-press-HAHA
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-press-HAHA
spec:
  rules:
  - http:
      paths:
      - path: /press/HAHA
        backend:
          serviceName: service-press-HAHA
          servicePort: 80

Then run this bash script:

for i in {1..6000}
do
    sed s#HAHA#${i}#g press.yaml.tmpl > tmp.yaml
    kubectl apply -f tmp.yaml
done
