[release-1.6] Fix cache miss on resource recreation #1818

Conversation

kubevirt-bot
Contributor

This is an automated cherry-pick of #1813

/assign tiraboschi

Fix cache miss on resource recreation

The k8s client is configured with label-based selectors so that it watches only a subset of certain resources, in order to limit memory consumption.
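
For illustration only, a minimal sketch of such a label-scoped cache using a recent controller-runtime API (the label key/value and resource kinds below are assumptions, not the operator's actual configuration):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func newManager() (ctrl.Manager, error) {
	// Cache (and therefore "see") only objects carrying the operator's label;
	// any watched object without it is invisible to the cached client.
	sel := labels.SelectorFromSet(labels.Set{
		"app.kubernetes.io/managed-by": "hco-operator", // assumed label, for illustration
	})
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			ByObject: map[client.Object]cache.ByObject{
				&corev1.ConfigMap{}: {Label: sel},
				&corev1.Service{}:   {Label: sel},
			},
		},
	})
}
```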

If the user explicitly removes that label from one of the watched objects, the client cache no longer sees the object, so the operator tries to recreate it and fails with AlreadyExists.

Let's explicitly detect this corner case and fix it by setting the missing label back, using a custom client that bypasses the cache.

Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=2032837

Signed-off-by: Simone Tiraboschi <stirabos@redhat.com>
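
To make the described recovery path concrete, here is a hedged sketch assuming a controller-runtime setup; the helper name, label key/value, and package are illustrative and not the PR's actual code:

```go
package operands // hypothetical package name, for illustration

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Assumed label used to scope the cache; the real key/value live in the operator.
const (
	watchLabelKey   = "app.kubernetes.io/managed-by"
	watchLabelValue = "hco-operator"
)

// ensureCreatedOrRelabeled tries to create the desired object. If the API server
// answers AlreadyExists while the cached client saw nothing, the object most
// likely lost the watch label: read it through the uncached apiReader and
// restore the label so the cache picks it up again.
func ensureCreatedOrRelabeled(ctx context.Context, c client.Client, apiReader client.Reader, desired client.Object) error {
	err := c.Create(ctx, desired)
	if err == nil || !apierrors.IsAlreadyExists(err) {
		return err
	}

	existing := desired.DeepCopyObject().(client.Object)
	if err := apiReader.Get(ctx, client.ObjectKeyFromObject(desired), existing); err != nil {
		return err
	}

	lbls := existing.GetLabels()
	if lbls == nil {
		lbls = map[string]string{}
	}
	lbls[watchLabelKey] = watchLabelValue
	existing.SetLabels(lbls)

	// Writes always go straight to the API server, so the regular client is fine here.
	return c.Update(ctx, existing)
}
```

In such a setup, the manager's GetAPIReader() provides a reader that bypasses the cache and can serve as apiReader.
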
kubevirt-bot added the release-note and dco-signoff: yes labels on Mar 15, 2022
@sonarcloud

sonarcloud bot commented Mar 15, 2022

Kudos, SonarCloud Quality Gate passed!

  • Bugs: 0 (rating A)
  • Vulnerabilities: 0 (rating A)
  • Security Hotspots: 0 (rating A)
  • Code Smells: 0 (rating A)
  • No coverage information
  • No duplication information

@coveralls
Collaborator

Pull Request Test Coverage Report for Build 1985232453

  • 4 of 42 (9.52%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.7%) to 84.463%

Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
pkg/controller/operands/operand.go | 4 | 42 | 9.52%
Totals Coverage Status
Change from base Build 1978792561: -0.7%
Covered Lines: 3713
Relevant Lines: 4396

💛 - Coveralls

@tiraboschi
Member

ignoring slightly decreased coverage due to a path that cannot be covered with the fake client
/override coverage/coveralls

@kubevirt-bot
Contributor Author

@tiraboschi: Overrode contexts on behalf of tiraboschi: coverage/coveralls

In response to this:

ignoring slightly decreased coverage due to a path that cannot be covered with the fake client
/override coverage/coveralls

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

kubevirt-bot added the lgtm label on Mar 15, 2022
@tiraboschi
Member

/approve

@kubevirt-bot
Contributor Author

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tiraboschi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

kubevirt-bot added the approved label on Mar 15, 2022
kubevirt-bot merged commit 9c826c9 into kubevirt:release-1.6 on Mar 15, 2022
Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
dco-signoff: yes - Indicates the PR's author has DCO signed all their commits.
lgtm - Indicates that a PR is ready to be merged.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
size/M
Projects
None yet
Development

Successfully merging this pull request may close these issues: none yet.

3 participants