UPSTREAM: 58991: restore original object on apply err #18337

Conversation

@juanvallejo (Contributor) commented Jan 29, 2018

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1539529

When oc apply is used with the --force flag, it attempts to patch an existing object A up to 5 times; if patching fails, it deletes the existing object and re-creates it from the new object B that was originally used to patch A.

If object B has invalid syntax, or contains a change that prevents it from passing validation, creation of the new object fails and oc apply simply deletes the original object, without informing the user that their object no longer exists and that a new one was not created.

This patch restores the original object in the event that an error occurs while attempting to outright replace it with a new one.
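For context, a minimal, self-contained Go sketch of the restore-on-error flow described above; it is not the actual apply.go change, and the callback names, retry count, and error messages are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// forceApply sketches the --force flow: patch up to 5 times, then delete the
// original object and create the replacement. If creating the replacement
// fails, the original is re-created so the user is not left with nothing.
// All four callbacks are hypothetical stand-ins for the real client calls.
func forceApply(patch, deleteOriginal, createUpdated, createOriginal func() error) error {
	const maxPatchRetries = 5

	for i := 0; i < maxPatchRetries; i++ {
		if err := patch(); err == nil {
			return nil // patch succeeded; no need to replace the object
		}
	}

	// --force path: give up on patching and replace the object outright.
	if err := deleteOriginal(); err != nil {
		return err
	}
	if err := createUpdated(); err != nil {
		// Previously execution stopped here: the original was already gone and
		// the replacement was never created. Restoring the original returns the
		// cluster to its prior state and surfaces the create error to the user.
		if restoreErr := createOriginal(); restoreErr != nil {
			return fmt.Errorf("create failed (%v); restoring the original also failed: %v", err, restoreErr)
		}
		return fmt.Errorf("create failed, original object restored: %v", err)
	}
	return nil
}

func main() {
	err := forceApply(
		func() error { return errors.New("conflict") },          // patching keeps failing
		func() error { return nil },                             // delete of the original succeeds
		func() error { return errors.New("validation failed") }, // replacement is invalid
		func() error { return nil },                             // original restored successfully
	)
	fmt.Println(err) // create failed, original object restored: validation failed
}
```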

Hoping to use this thread to at least discuss the nature of oc apply --force, and better define its behavior when it encounters a non-conflict error.

cc @ironcladlou

openshift-ci-robot added the size/S label (Denotes a PR that changes 10-29 lines, ignoring generated files) on Jan 29, 2018
openshift-merge-robot added the vendor-update label (Touching vendor dir or related files) on Jan 29, 2018
@ironcladlou (Contributor)

I want to create an upstream issue to discuss. We shouldn't delete the resource only to recreate it if we can establish in advance that the resource is invalid and has no hope of being persisted. This also brings into question the semantics of --force in the absence of any merge conflicts (but in the presence of validation errors).

@ironcladlou (Contributor)

kubectl actually seems to handle this correctly, so we need to investigate why there's a behavioral disparity in oc.

@ironcladlou (Contributor)

I was able to break kubectl with a server-side-validated replicationcontroller in the same way reported for deploymentconfigs, so we do still have an upstream issue to discuss.

@ironcladlou (Contributor)

The apply command's --validate flag seems to be false by default in oc but true by default in kubectl, which accounts for the disparity when applying a resource with --force that can be validated client side.
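For illustration, here is a minimal cobra-style sketch (an assumption, not the actual kubectl/oc flag registration) of how a single differing default for --validate would explain the disparity: with the default set to true, an invalid manifest is rejected client side before the delete/re-create path of --force is ever reached.

```go
package main

import (
	"errors"
	"fmt"

	"github.com/spf13/cobra"
)

// newApplyCmd is a hypothetical reduction of the apply command: the only thing
// that differs between the two binaries in this sketch is validateDefault.
func newApplyCmd(validateDefault bool) *cobra.Command {
	var validate bool
	cmd := &cobra.Command{
		Use:           "apply",
		SilenceUsage:  true,
		SilenceErrors: true,
		RunE: func(cmd *cobra.Command, args []string) error {
			if validate {
				// With validation on (kubectl-like default), an invalid manifest
				// fails here, before --force can delete anything on the server.
				return errors.New("error validating data: client-side schema validation failed")
			}
			// With validation off (oc-like default at the time), an invalid
			// object reaches the --force delete/re-create path and can fail
			// only after the original has already been deleted.
			return nil
		},
	}
	cmd.Flags().BoolVar(&validate, "validate", validateDefault, "validate the resource schema client side before sending it")
	return cmd
}

func main() {
	// kubectl-like default: the validation error surfaces up front.
	if err := newApplyCmd(true).Execute(); err != nil {
		fmt.Println("kubectl-like:", err)
	}
	// oc-like default: client-side validation is skipped entirely.
	if err := newApplyCmd(false).Execute(); err == nil {
		fmt.Println("oc-like: proceeded without client-side validation")
	}
}
```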

juanvallejo force-pushed the jvallejo/restore-obj-onfailure-apply branch from 1eac65f to 117f9f4 on January 29, 2018 at 22:07
@juanvallejo (Contributor, Author) commented Jan 29, 2018

Explanation of what is happening in the bugzilla (two different edge cases are being hit): https://bugzilla.redhat.com/show_bug.cgi?id=1539529#c3

@juanvallejo (Contributor, Author)

Upstream PR: kubernetes/kubernetes#58991

juanvallejo changed the title from "UPSTREAM: 0000: restore original object on apply err" to "UPSTREAM: 58991: restore original object on apply err" on Jan 29, 2018
juanvallejo force-pushed the jvallejo/restore-obj-onfailure-apply branch from 117f9f4 to 10d0aff on January 29, 2018 at 22:34
@deads2k (Contributor) commented Feb 1, 2018

/hold

@juanvallejo just to help sort out what's being picked and not yet lgtm'd/merged upstream, can you apply holds on the picks?

openshift-ci-robot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command) on Feb 1, 2018
juanvallejo force-pushed the jvallejo/restore-obj-onfailure-apply branch from 10d0aff to be74c71 on February 6, 2018 at 19:04
juanvallejo force-pushed the jvallejo/restore-obj-onfailure-apply branch from be74c71 to 1816810 on February 6, 2018 at 19:05
@juanvallejo (Contributor, Author)

@deads2k @ironcladlou The upstream PR kubernetes/kubernetes#58991 has merged.

This one is now up to date.

@juanvallejo (Contributor, Author)

/retest

1 similar comment
@mfojtik (Contributor) commented Feb 7, 2018

/retest

@deads2k (Contributor) commented Feb 7, 2018

/hold cancel
already merged upstream
/lgtm

openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged) and removed the do-not-merge/hold label on Feb 7, 2018
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, juanvallejo

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on Feb 7, 2018
@juanvallejo (Contributor, Author)

/retest

@juanvallejo (Contributor, Author)

/test cmd

@juanvallejo (Contributor, Author)

/test extended_conformance_install

@openshift-merge-robot (Contributor)

/test all [submit-queue is verifying that this PR is safe to merge]

@openshift-merge-robot (Contributor)

Automatic merge from submit-queue.

openshift-merge-robot merged commit 75f08f0 into openshift:master on Feb 8, 2018
juanvallejo deleted the jvallejo/restore-obj-onfailure-apply branch on February 8, 2018 at 19:50