
After modifying the service targetPort, the new targetPort is not synced to the corresponding backend #11863

Closed
zengyuxing007 opened this issue Aug 26, 2024 · 11 comments
Labels: kind/support, needs-priority, needs-triage, triage/needs-information

Comments

@zengyuxing007 (Contributor) commented Aug 26, 2024

What happened:

After modifying the service targetPort, the new targetPort is not synced to the corresponding backend.

kubectl get svc nginx-svc -o yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    aa: aaa
  creationTimestamp: "2024-08-22T03:44:11Z"
  labels:
    aa: ccc
  name: nginx-svc
  namespace: default
  resourceVersion: "1998073"
  uid: a936ed5d-6b33-4175-9ae3-405c53131c8f
spec:
  clusterIP: 172.30.86.90
  clusterIPs:
  - 172.30.86.90
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8336
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
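
For context, a targetPort change like the one described here can be applied with a patch along the following lines; the port value matches this report, but the actual method used to edit the Service is not shown, so this is only an illustrative sketch:

# Illustrative only: set the named "http" port's targetPort to 8336.
kubectl -n default patch svc nginx-svc --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": 8336}]'

# Confirm the Service object now carries the new targetPort.
kubectl -n default get svc nginx-svc -o jsonpath='{.spec.ports[0].targetPort}'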

However, the corresponding backend's port is not synced:

kubectl exec deployment/nginx-ingress-controller -n kube-system  -- /dbg backends get default-nginx-svc-8080


Defaulted container "nginx-ingress-controller" out of: nginx-ingress-controller, init-sysctl (init)
{
  "endpoints": [
    {
      "address": "10.0.0.139",
      "port": "8335"
    }
  ],
  "name": "default-nginx-svc-8080",
  "noServer": false,
  "port": 8080,
  "service": {
    "metadata": {
      "creationTimestamp": null
    },
    "spec": {
      "clusterIP": "172.30.86.90",
      "clusterIPs": [
        "172.30.86.90"
      ],
      "internalTrafficPolicy": "Cluster",
      "ipFamilies": [
        "IPv4"
      ],
      "ipFamilyPolicy": "SingleStack",
      "ports": [
        {
          "name": "http",
          "port": 8080,
          "protocol": "TCP",
          "targetPort": 8335
        }
      ],
      "selector": {
        "app": "nginx"
      },
      "sessionAffinity": "None",
      "type": "ClusterIP"
    },
    "status": {
      "loadBalancer": {}
    }
  },
  "sessionAffinityConfig": {
    "cookieSessionAffinity": {
      "name": ""
    },
    "mode": "",
    "name": ""
  },
  "sslPassthrough": false,
  "trafficShapingPolicy": {
    "cookie": "",
    "header": "",
    "headerPattern": "",
    "headerValue": "",
    "weight": 0,
    "weightTotal": 0
  },
  "upstreamHashByConfig": {
    "upstream-hash-by-subset-size": 3
  }
}
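
Since the controller builds its backends from EndpointSlices, it can also help to compare the cached backend above with the live EndpointSlices. A quick check, assuming the standard kubernetes.io/service-name label:

# The EndpointSlices carry the resolved target port for each endpoint; comparing
# them with the /dbg output shows whether the stale value exists in the cluster
# or only in the controller's cache.
kubectl -n default get endpointslices -l kubernetes.io/service-name=nginx-svc -o yaml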

What you expected to happen:

The corresponding backend port should be synced to 8336.

(screenshot: controller log of the service update event)

From the log information, we can see the detailed processing of the service update event; ultimately, the backend update is ignored.
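
Those log lines can be pulled straight from the controller pod, for example (the grep pattern is only an example):

# Recent controller logs, filtered for the Service in question.
kubectl -n kube-system logs deployment/nginx-ingress-controller --since=10m | grep -i nginx-svc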

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

v1.10.4

Kubernetes version (use kubectl version):
v1.22.15

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
  • Basic cluster related info:
    • kubectl version
      v1.22.15
    • kubectl get nodes -o wide

Anything else we need to know:

@zengyuxing007 zengyuxing007 added the kind/bug Categorizes issue or PR as related to a bug. label Aug 26, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Aug 26, 2024
@k8s-ci-robot (Contributor):

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan (Contributor):

It's possible that you are trying to change the default ports 80/443 of the controller.

However, the information you have provided requires readers to guess your goal, and there is no data such as logs or kubectl command output to analyze in the issue description.

Check the template of a new bug report and then edit the issue description here to answer the questions asked in a new bug report. That would give readers data to analyze and help them make useful comments to solve your problem.

/remove-kind bug
/kind support
/triage needs-information

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. and removed kind/bug Categorizes issue or PR as related to a bug. labels Aug 26, 2024
@zmquan commented Aug 26, 2024

@longwuyuan
1. The targetPort of the svc was originally 1111.
(screenshot)

2. Then the targetPort of the svc was edited to 2222.
(screenshot)

3. Then kubectl exec deployment/nginx-ingress-controller -n kube-system -- /dbg backends get default-nginx-svc-8080 shows the backends are still the original 1111 (see the consolidated commands after this list).
(screenshot)

The endpointslices.go code still retrieves the old endpointslices, rather than the endpointslices for port 2222.
(screenshot)

4. kubectl version: v1.26
(screenshot)
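
For reference, the steps above roughly correspond to the following commands; the resource names and namespaces are taken from this thread, but the exact commands behind the screenshots are not shown, so treat this as an approximation:

# 1. The Service starts with targetPort 1111 (per the first screenshot).
kubectl -n default get svc nginx-svc -o jsonpath='{.spec.ports[0].targetPort}'

# 2. Change the targetPort to 2222.
kubectl -n default patch svc nginx-svc --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": 2222}]'

# 3. Ask the controller for its cached view of the backend; per the report it still shows 1111.
kubectl exec deployment/nginx-ingress-controller -n kube-system -- /dbg backends get default-nginx-svc-8080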

@longwuyuan (Contributor):

@zmquan @zengyuxing007 you are not providing the information suggested and asked for in the new bug report template.

Please wait for someone who reads this and can understand your problem to comment and help you solve it.

@longwuyuan (Contributor):

@zmquan @zengyuxing007 I am closing this as that PR is merged

/close

@k8s-ci-robot (Contributor):

@longwuyuan: Closing this issue.

In response to this:

@zmquan @zengyuxing007 I am closing this as that PR is merged

/close


@longwuyuan (Contributor):

Sorry, I missed that the merge was into @zmquan's fork.

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Sep 7, 2024
@k8s-ci-robot (Contributor):

@longwuyuan: Reopened this issue.

In response to this:

Sorry I missed that the merge was into @zmquan fork.

/reopen


@longwuyuan (Contributor):

@zmquan @zengyuxing007 the data here suggests that you changed the targetPort of a service and then queried the controller for a status change. That test alone is not enough data to explain any problem or to take action. The information you provided does not show whether your backend actually had a listening socket on the new port number, and there is no proof that there was no network problem in your cluster.

I think you may be reporting a real-world use problem, but you also have to help out by providing complete, detailed information to other readers here so that some practical and useful action can be taken.

If you can use a kind cluster to reproduce the problem and provide detailed, step-by-step instructions to re-create it, that will help. Please provide information such as kubectl describe output for all related resources, the actual process used to change the configs, the controller logs, and the state of other related resources, both before and after the change. The screenshots you posted are not a good source of information for developers/maintainers to act on; they leave everyone guessing about the exact details of the reproduction steps.
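
For anyone attempting such a reproduction, a minimal sequence might look like the following; all names here are illustrative and not taken from the reporters' environment:

# Throwaway cluster and a test workload (names are illustrative).
kind create cluster --name targetport-repro
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --name=nginx-svc --port=8080 --target-port=80

# Install ingress-nginx by any supported method and create an Ingress for nginx-svc,
# then change the targetPort and collect the before/after state described above.
kubectl patch svc nginx-svc --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": 8081}]'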

Now, since in the PRs visible here you seem to be attempting to increase the timeout period, it shows that you are choosing a delay interval that suits your own use case. That delay interval may not apply to all other users; otherwise several users of the controller would have reported the exact same problem. Furthermore, there is no e2e test to verify how it will impact the controller.

Since this is a community project, detailed information helps reduce the time and effort required of the volunteers who work on it in their free time.

I will close the issue for now, as there is no action item being tracked here for you or the project. Once you have provided a step-by-step guide that allows anyone to reproduce the problem at will on a kind cluster, please re-open the issue.

/close

@k8s-ci-robot (Contributor):

@longwuyuan: Closing this issue.

In response to this: (the comment quoted above)

@zengyuxing007 (Contributor, Author):

cc @Gacko @strongjz @rikatz

Please take a look at this issue and the PR, thanks ~
