Bugs and crashes from resource auto-aliasing. #849
I was able to reproduce the error, and commenting out the auto-aliasing logic in the provider fixes the problem.
Removing the shared defaultComponentOptions from the Service avoids the problem.

This works:

const defaultComponentOptions = {};
const namespace = new k8s.core.v1.Namespace('k8stest', {
metadata: { name: 'k8stest' },
}, defaultComponentOptions);
const appLabels = { app: 'k8stest' };
const deployment = new k8s.apps.v1.Deployment('k8stest', {
spec: {
selector: { matchLabels: appLabels },
replicas: 1,
template: {
metadata: { labels: appLabels, namespace: 'k8stest' },
spec: {
containers: [{
name: 'nginx',
image: 'nginx'
}],
},
},
},
}, defaultComponentOptions);
const service = new k8s.core.v1.Service('k8stest', {
metadata: {
labels: appLabels,
namespace: 'k8stest',
},
spec: {
type: 'ClusterIP',
selector: appLabels,
},
}, {});

This fails:

const defaultComponentOptions = {};
const namespace = new k8s.core.v1.Namespace('k8stest', {
metadata: { name: 'k8stest' },
}, defaultComponentOptions);
const appLabels = { app: 'k8stest' };
const deployment = new k8s.apps.v1.Deployment('k8stest', {
spec: {
selector: { matchLabels: appLabels },
replicas: 1,
template: {
metadata: { labels: appLabels, namespace: 'k8stest' },
spec: {
containers: [{
name: 'nginx',
image: 'nginx'
}],
},
},
},
}, defaultComponentOptions);
const service = new k8s.core.v1.Service('k8stest', {
metadata: {
labels: appLabels,
namespace: 'k8stest',
},
spec: {
type: 'ClusterIP',
selector: appLabels,
},
}, defaultComponentOptions);

Any ideas @lukehoban @pgavlin?
Ah, that explains why all my stacks broke, haha. I use the options for setting the provider or parent resource extensively.
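For reference, that usage pattern looks roughly like the sketch below; the provider configuration is illustrative, and the key point is that a single options object is reused for every resource.

import * as k8s from '@pulumi/kubernetes';

// A single options object carrying a shared provider (and/or parent),
// reused for every resource in the program.
const provider = new k8s.Provider('cluster', {}); // uses the ambient kubeconfig
const sharedOpts = { provider };

const ns = new k8s.core.v1.Namespace('app', { metadata: { name: 'app' } }, sharedOpts);
const svc = new k8s.core.v1.Service('app', {
    metadata: { namespace: 'app' },
    spec: { type: 'ClusterIP', selector: { app: 'app' } },
}, sharedOpts);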
Ok, so it turns out that the k8s provider was inadvertently mutating the options object passed in by the caller. I suspect the reason we hadn't seen other reports of this before is that it requires reusing the same options object across multiple resources.
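To illustrate the failure mode, here is a minimal sketch of the kind of in-place mutation being described; the helper function and alias values are illustrative, not the provider's actual code.

import * as pulumi from '@pulumi/pulumi';

// One options object shared by several resources, as in the repro above.
const sharedOpts: pulumi.CustomResourceOptions = {};

// Sketch: auto-generated aliases are appended directly onto the caller's
// options object instead of onto a copy.
function addAutoAliases(opts: pulumi.CustomResourceOptions, aliases: pulumi.Alias[]) {
    opts.aliases = [...(opts.aliases ?? []), ...aliases]; // mutates sharedOpts in place
}

addAutoAliases(sharedOpts, [{ type: 'kubernetes:apps/v1beta1:Deployment' }]);
addAutoAliases(sharedOpts, [{ type: 'kubernetes:core/v1:Service' }]);

// sharedOpts.aliases now mixes aliases intended for two different resources,
// so whichever resource is registered next inherits aliases that are not its own.
console.log(sharedOpts.aliases);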
@lblackstone just adding that I experienced an issue where the CLI did not determine the required plugins correctly (and then told me to add them manually). I narrowed it down to your discovery here of using a shared opts across multiple resources. Doing the following instead allowed the plugins to be discovered correctly.
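A sketch of that approach, assuming the change is simply to construct a fresh options object for each resource rather than sharing a single instance (the factory name is illustrative):

import * as pulumi from '@pulumi/pulumi';
import * as k8s from '@pulumi/kubernetes';

// Build a new options object per resource so no two resources share
// (and mutate) the same instance.
const defaultComponentOptions = (): pulumi.CustomResourceOptions => ({
    // provider, parent, etc. would go here
});

const namespace = new k8s.core.v1.Namespace('k8stest', {
    metadata: { name: 'k8stest' },
}, defaultComponentOptions());

const service = new k8s.core.v1.Service('k8stest', {
    metadata: { namespace: 'k8stest' },
    spec: { type: 'ClusterIP', selector: { app: 'k8stest' } },
}, defaultComponentOptions());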
@markphillips100 I don't quite understand. Is this still an issue after the fix in #850? If so, can you open a new issue with details?
@lblackstone my bad. I've raised an issue to clarify. Just to note, my new issue and accompanying demo code don't reference the pulumi-kubernetes provider at all, which is why I raised it over on the pulumi issue log.
The new auto-aliasing feature has caused issues with all stacks that contain Kubernetes resources, usually resulting in an error like:
More info in the Slack thread: https://pulumi-community.slack.com/archives/C84L4E3N1/p1570297147325000
I am also able to repro what seems to be the same issue with this simple example:
https://gist.github.com/timmyers/adc57e4dca7dc7a3c14a310d5f4ce6a6
The program consists of a namespace and a deployment. I have both applied successfully, and pulumi up shows no diff. Then I uncomment the service object and try pulumi up again.
The first run crashed; the second gave a very odd, unexpected diff.
Running pulumi up over and over seems to randomly either crash or give the weird diff output; some sort of race condition? Reverting to @pulumi/kubernetes:1.1.0 resolves all issues.