
Bugs and crashes from resource auto-aliasing. #849

Closed
timmyers opened this issue Oct 16, 2019 · 7 comments · Fixed by #850
Assignees: lblackstone
Labels: area/community (Related to ability of community to participate in pulumi-kubernetes development), kind/bug (Some behavior is incorrect or out of spec), p1 (A bug severe enough to be the next item assigned to an engineer)

Comments

timmyers commented Oct 16, 2019

The new auto-aliasing feature has caused issues with all of my stacks that contain K8S resources, usually resulting in an error like:

panic: fatal: An assertion has failed: Two resources ('urn:pulumi:mirror::mirror::kubernetes:apps/v1:Deployment::traffic-mirror' and 'urn:pulumi:mirror::mirror::kubernetes:autoscaling/v2beta2:HorizontalPodAutoscaler::traffic-mirror') aliased to the same: 'urn:pulumi:mirror::mirror::kubernetes:apps/v1:Deployment::traffic-mirror'

More info in slack thread: https://pulumi-community.slack.com/archives/C84L4E3N1/p1570297147325000

I am also able to repro what seems to be the same issue with this simple example:
https://gist.github.com/timmyers/adc57e4dca7dc7a3c14a310d5f4ce6a6

The program consists of a namespace and a deployment. With both applied successfully, pulumi up shows no diff. Then I uncomment the service object and run pulumi up again.
The first run crashes, and the second gives a very odd, unexpected diff.

Running pulumi up over and over seems to randomly either crash or give the weird diff output; some sort of race condition?

Reverting to @pulumi/kubernetes:1.1.0 resolves all issues.
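
(For anyone else hitting this: assuming an npm-based project, the temporary pin can be applied with something like

npm install @pulumi/kubernetes@1.1.0 --save-exact

until a fixed release is out.)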

@lblackstone self-assigned this Oct 16, 2019
@lblackstone added the area/community and kind/bug labels Oct 16, 2019
lblackstone (Member) commented

I was able to reproduce the error, and commenting out the auto-aliasing logic in Deployment.js stopped it from happening. Trying to figure out what's going wrong here.

lblackstone (Member) commented

Removing defaultComponentOptions fixes it. My best guess here is that the opts are not being handled properly somewhere.

This works:

const k8s = require('@pulumi/kubernetes');

const defaultComponentOptions = {};

const namespace = new k8s.core.v1.Namespace('k8stest', {
    metadata: { name: 'k8stest' },
}, defaultComponentOptions);

const appLabels = { app: 'k8stest' };

const deployment = new k8s.apps.v1.Deployment('k8stest', {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels, namespace: 'k8stest' },
            spec: {
                containers: [{
                    name: 'nginx',
                    image: 'nginx'
                }],
            },
        },
    },
}, defaultComponentOptions);

const service = new k8s.core.v1.Service('k8stest', {
  metadata: {
    labels: appLabels,
    namespace: 'k8stest',
  },
  spec: {
    type: 'ClusterIP',
    selector: appLabels,
  },
}, {});

This fails:

const k8s = require('@pulumi/kubernetes');

const defaultComponentOptions = {};

const namespace = new k8s.core.v1.Namespace('k8stest', {
    metadata: { name: 'k8stest' },
}, defaultComponentOptions);

const appLabels = { app: 'k8stest' };

const deployment = new k8s.apps.v1.Deployment('k8stest', {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels, namespace: 'k8stest' },
            spec: {
                containers: [{
                    name: 'nginx',
                    image: 'nginx'
                }],
            },
        },
    },
}, defaultComponentOptions);

const service = new k8s.core.v1.Service('k8stest', {
  metadata: {
    labels: appLabels,
    namespace: 'k8stest',
  },
  spec: {
    type: 'ClusterIP',
    selector: appLabels,
  },
}, defaultComponentOptions);

Any ideas @lukehoban @pgavlin?

timmyers (Author) commented

Ah, that explains why all my stacks broke haha. I use the options for setting the provider or parent resource extensively.

lblackstone (Member) commented

Ok, so it turns out that the k8s provider was inadvertently mutating the defaultComponentOptions object in each resource, which led to the strange behavior you're seeing. Since the Deployment and Service happened to have the same name, they were erroneously aliased to one another.

I suspect the reason we hadn't seen other reports of this before is that it requires:

  1. Using a shared opts object across multiple resources, and
  2. At least two of those resources having the same name.
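
To illustrate the failure mode, here is a minimal sketch (the helper names are made up and this is not the provider's actual source): the generated constructors were effectively appending their auto-aliases to whatever opts object they received, so a shared object accumulated aliases from every resource that used it.

// Illustrative sketch only; buggyAddAliases and fixedAddAliases are hypothetical names.
// Buggy pattern: mutate the opts object the caller passed in.
function buggyAddAliases(opts, autoAliases) {
    opts.aliases = (opts.aliases || []).concat(autoAliases); // leaks into the caller's shared object
    return opts;
}

// Safe pattern: build a copy so the caller's object is left untouched.
function fixedAddAliases(opts, autoAliases) {
    return { ...opts, aliases: (opts.aliases || []).concat(autoAliases) };
}

const sharedOpts = {};
buggyAddAliases(sharedOpts, ['kubernetes:extensions/v1beta1:Deployment']);
// sharedOpts now carries the Deployment alias, so a Service created later with the same
// shared object and the same resource name picks up that alias too, and the engine sees
// two resources aliased to the same URN.

Presumably the fix in #850 amounts to something like the second pattern.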

markphillips100 commented

@lblackstone just adding that I ran into an issue where the CLI wasn't determining the required plugins correctly (and then told me to add them manually). I narrowed it down to your discovery here: using a shared opts object across multiple resources. Doing the following instead allowed the plugins to be discovered correctly.

{
   ...defaultComponentOptions
}
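
For example, adapting the Service from the repro above so that each constructor call gets its own copy of the options:

const service = new k8s.core.v1.Service('k8stest', {
  metadata: {
    labels: appLabels,
    namespace: 'k8stest',
  },
  spec: {
    type: 'ClusterIP',
    selector: appLabels,
  },
}, {
  // fresh options object per resource; the shared defaults are only read, never mutated
  ...defaultComponentOptions,
});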

lblackstone (Member) commented

@markphillips100 I don't quite understand. Is this still an issue after the fix in #850? If so, can you open a new issue with details?

markphillips100 commented

@lblackstone my bad. I've raised a new issue to clarify things.

Just to note, my new issue and accompanying demo code don't reference the pulumi-kubernetes provider at all, which is why I raised it on the pulumi repo instead.

@infin8x added the p1 label Jul 10, 2021