Failing to detect state mismatch after kubectl scale + refresh #694

Closed · lblackstone opened this issue Aug 5, 2019 · 5 comments
Labels: dry-run-diff (Related to dry run diff behavior) · kind/bug (Some behavior is incorrect or out of spec) · last-applied-configuration (Issues related to the last-applied-configuration annotation) · resolution/fixed (This issue was fixed)

Comments

@lblackstone (Member)

Here's the repro:

  1. Run pulumi up with the following:
import * as k8s from "@pulumi/kubernetes";

const appLabels = {app: "nginx"};
new k8s.apps.v1.Deployment("foo", {
    metadata: {
        name: "scale-test",
    },
    spec: {
        selector: {matchLabels: appLabels},
        replicas: 1,
        template: {
            metadata: {labels: appLabels},
            spec: {
                containers: [
                    {name: "nginx", image: "nginx:1.13", ports: [{containerPort: 80}]}
                ],
            }
        }
    }
});
  2. Run kubectl scale deployment --replicas=5 scale-test
  3. Run pulumi refresh (should succeed and update the replica state to 5)
  4. Run pulumi up again (no changes detected)

I expected the final pulumi up to detect the mismatch between the current state (5 replicas) and the declared state (1 replica).

@lblackstone (Member, Author)

@pgavlin indicated that this is expected behavior with legacy diff because the kubectl scale command apparently doesn't set the kubectl.kubernetes.io/last-applied-configuration annotation.

This does, in fact, work properly with the upcoming dryRun-based diff behavior.

Related to #641
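
For anyone who wants to try the new diff behavior before it becomes the default, here is a minimal sketch of opting in through an explicit provider instance. It assumes the provider exposes an enableDryRun option (the exact flag name and its environment-variable equivalent may vary by provider release):

import * as k8s from "@pulumi/kubernetes";

// Assumption: the provider accepts an enableDryRun flag in this release;
// in some versions the same behavior can be toggled with an environment variable.
const dryRunProvider = new k8s.Provider("dry-run", {
    enableDryRun: true,
});

// Pass the provider explicitly so this Deployment is diffed with dry-run semantics.
new k8s.apps.v1.Deployment("foo", {
    metadata: {name: "scale-test"},
    spec: {
        selector: {matchLabels: {app: "nginx"}},
        replicas: 1,
        template: {
            metadata: {labels: {app: "nginx"}},
            spec: {containers: [{name: "nginx", image: "nginx:1.13"}]},
        },
    },
}, {provider: dryRunProvider});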

@hausdorff (Contributor)

We're not going to fix this for Q3, or at all until we move to the dry-run stuff, right? If so we should remove both the Q3 and P1 tags.

lblackstone added the dry-run-diff label and removed the feature/q3 label on Aug 7, 2019
lblackstone removed this from the 0.26 milestone on Aug 7, 2019
@Dominik-K commented Oct 21, 2019

@lblackstone I've stumbled upon this issue while using kubectl edit on resources from a Helm chart (applied by Pulumi). I edit some chart resources directly because it's the fastest way to fix them, and afterwards I want Pulumi to tell me what I changed so I can codify those changes as transformations (see the sketch below). We talked about this with Dan & Nick in today's call and on Slack.

Is it right that neither the legacy diff nor the dry-run diff compares against the stack's state, but against Kubernetes' last-applied-configuration and the live config, respectively? In Pulumi terms, that would mean this provider always implicitly runs a --refresh on pulumi up (related to pulumi/pulumi#2247)?
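
For illustration, codifying such a manual edit as a chart transformation could look roughly like the sketch below. The chart name, repository URL, and the edited field are placeholders, not details from this issue, and it assumes the chart resource's transformations option:

import * as k8s from "@pulumi/kubernetes";

// Sketch: fold a manual `kubectl edit` back into the program by rewriting the
// rendered chart resources before Pulumi applies them. Chart, repo, and the
// edited field are illustrative placeholders.
new k8s.helm.v3.Chart("nginx", {
    chart: "nginx",
    fetchOpts: {repo: "https://charts.example.com"},
    transformations: [
        (obj: any) => {
            if (obj.kind === "Deployment" && obj.metadata && obj.metadata.name === "nginx") {
                obj.spec.replicas = 5; // the change previously made by hand with kubectl
            }
        },
    ],
});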

@lblackstone (Member, Author)

@Dominik-K The legacy diff behavior is based on Pulumi's checkpoint state of the resource. If you run a refresh, Pulumi will sync the lastAppliedConfiguration value from the live resource to determine the current state.

The dry-run diff behavior fixes this issue, but is not yet a stable feature. Once this feature is ready, it will become the new default.

lblackstone added the kind/bug and resolution/fixed labels on Jul 20, 2022
@lblackstone (Member, Author)

This is fixed by the enableServerSideApply mode available in the v3.20.1 release.
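
For reference, a minimal sketch of opting into that mode through an explicit provider instance, assuming pulumi-kubernetes v3.20.1 or newer (the mode can also be toggled via an environment variable in recent releases, if I recall correctly):

import * as k8s from "@pulumi/kubernetes";

// Requires pulumi-kubernetes v3.20.1 or newer.
const ssaProvider = new k8s.Provider("ssa", {
    enableServerSideApply: true,
});

// Diffs for this Deployment are computed against the live cluster using
// Server-Side Apply, so out-of-band changes like `kubectl scale` show up on the next up.
new k8s.apps.v1.Deployment("foo", {
    metadata: {name: "scale-test"},
    spec: {
        selector: {matchLabels: {app: "nginx"}},
        replicas: 1,
        template: {
            metadata: {labels: {app: "nginx"}},
            spec: {containers: [{name: "nginx", image: "nginx:1.13"}]},
        },
    },
}, {provider: ssaProvider});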
