Support CPU pinning with IPV6 #4134

Closed
tssurya opened this issue Feb 6, 2024 · 0 comments · Fixed by #4407
Assignees: kyrtapz
Labels: ci-ipv6 (Add support for IPV6 e2e's to run upstream)

Comments

tssurya (Member) commented Feb 6, 2024

This one is weird actually because it doesn't always fail!
It only fails on LGW (local gateway) + IPv6:
https://github.com/ovn-org/ovn-kubernetes/actions/runs/7799278970/job/21270728006?pr=4106

Summarizing 3 Failures:
  [FAIL] OVS CPU affinity pinning [It] can be enabled on specific nodes by creating enable_dynamic_cpu_affinity file
  /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/ovspinning.go:24
  [FAIL] e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods with externalTrafficPolicy=local [It] Should be allowed to node local host-networked endpoints by nodeport services
  /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/e2e.go:1773
  [FAIL] Services of type NodePort [It] should listen on each host addresses
  /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/service.go:734

Ran 57 of 251 Specs in 1785.912 seconds
FAIL! -- 54 Passed | 3 Failed | 1 Flaked | 0 Pending | 194 Skipped

but it flaked in SGW (shared gateway) and passed on the second attempt :D
https://github.com/ovn-org/ovn-kubernetes/actions/runs/7799278970/job/21270728558?pr=4106

2024-02-06T12:50:03.8766151Z   STEP: Destroying namespace "inter-node-e2e-9485" for this suite. @ 02/06/24 12:50:03.876
2024-02-06T12:50:03.8793512Z ↺ [FLAKEY TEST - TOOK 2 ATTEMPTS TO PASS] [337.162 seconds]
2024-02-06T12:50:03.8795155Z ------------------------------
2024-02-06T12:50:03.8812723Z OVS CPU affinity pinning can be enabled on specific nodes by creating enable_dynamic_cpu_affinity file
2024-02-06T12:50:03.8814570Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/ovspinning.go:12
2024-02-06T12:50:03.8815774Z   STEP: Creating a kubernetes client @ 02/06/24 12:50:03.879
2024-02-06T12:50:03.8816493Z   Feb  6 12:50:03.879: INFO: >>> kubeConfig: /home/runner/ovn.conf
2024-02-06T12:50:03.8817413Z   STEP: Building a namespace api object, basename ovspinning @ 02/06/24 12:50:03.88
2024-02-06T12:50:03.8866687Z   Feb  6 12:50:03.884: INFO: Skipping waiting for service account
2024-02-06T12:50:03.9389235Z   Feb  6 12:50:03.938: INFO: restarting ovnkube-node for [ovn-worker ovn-worker2]
2024-02-06T12:50:03.9438782Z   Feb  6 12:50:03.943: INFO: Deleting pod "ovnkube-node-fwnxm" in namespace "ovn-kubernetes"
2024-02-06T12:50:03.9440600Z   Feb  6 12:50:03.943: INFO: Deleting pod "ovnkube-node-2c47f" in namespace "ovn-kubernetes"
2024-02-06T12:50:03.9488669Z   Feb  6 12:50:03.948: INFO: Wait up to 5m0s for pod "ovnkube-node-2c47f" to be fully deleted
2024-02-06T12:50:03.9749902Z   Feb  6 12:50:03.974: INFO: Wait up to 5m0s for pod "ovnkube-node-fwnxm" to be fully deleted
2024-02-06T12:50:14.0227116Z   Feb  6 12:50:14.022: INFO: waiting for node ovn-worker2 to have running ovnkube-node pod
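
For context, the failing spec exercises ovnkube-node's dynamic OVS CPU affinity feature, which is opt-in per node via a marker file (hence the test name). Below is a minimal sketch of that mechanism, assuming the /etc/openvswitch/enable_dynamic_cpu_affinity path implied by the test name and using illustrative affinity calls; it is not the actual ovn-kubernetes implementation.

```go
// Illustrative sketch only (not the ovn-kubernetes code): enable dynamic CPU
// affinity handling for OVS only when a per-node marker file exists. The
// marker-file path is an assumption derived from the e2e test name.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const enableFile = "/etc/openvswitch/enable_dynamic_cpu_affinity"

func main() {
	// The feature is opt-in per node: do nothing unless the marker file exists.
	if _, err := os.Stat(enableFile); err != nil {
		fmt.Println("dynamic CPU affinity not enabled on this node")
		return
	}

	// Read the calling process's allowed CPU set (pid 0 == current thread).
	var cpus unix.CPUSet
	if err := unix.SchedGetaffinity(0, &cpus); err != nil {
		fmt.Fprintf(os.Stderr, "reading CPU affinity: %v\n", err)
		os.Exit(1)
	}

	// The real feature would propagate a CPU set like this to the OVS
	// daemons (via sched_setaffinity on their PIDs); here we only report it.
	fmt.Printf("would pin OVS daemons to %d CPUs\n", cpus.Count())
}
```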

Goal: fix this and re-enable these tests in the control-plane (CP) lanes; they are currently skipped for IPv6 as part of #4106.
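
For reference, the temporary skip from #4106 presumably looks like the guard sketched below. This is a hedged sketch: the isIPv6Cluster helper and the KIND_IPV6_SUPPORT environment-variable check are assumptions for illustration, not the suite's actual code.

```go
// Hedged sketch of an IPv6 skip guard like the one #4106 added; the helper
// and env var below are illustrative assumptions, not the actual e2e code.
package e2e

import (
	"os"

	"github.com/onsi/ginkgo/v2"
)

// isIPv6Cluster is a hypothetical stand-in for however the suite detects the
// cluster IP family; an env var is used here purely for illustration.
func isIPv6Cluster() bool {
	return os.Getenv("KIND_IPV6_SUPPORT") == "true"
}

var _ = ginkgo.Describe("OVS CPU affinity pinning", func() {
	ginkgo.It("can be enabled on specific nodes by creating enable_dynamic_cpu_affinity file", func() {
		if isIPv6Cluster() {
			// Temporary: remove this skip once this issue is fixed.
			ginkgo.Skip("skipping on IPv6 until the CPU pinning e2e is stabilized (see #4106)")
		}
		// ... actual test body ...
	})
})
```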

tssurya added the ci-ipv6 label on Feb 6, 2024
kyrtapz self-assigned this on May 28, 2024
kyrtapz linked a pull request (#4407) on May 28, 2024 that will close this issue