[Helm/Release] Does not accept an Output in the chart values #1725
Comments
@JoaRiski Could you share the snippet that doesn't work for you?
Yeah, I can do a quick test. I actually swapped back to the `Chart` resource in the meantime. Either way, give me a bit, I'll get back to this with more details.
@mikhailshilkov Alright, so after some debugging it gets a bit weirder. I managed to deploy it once successfully (when bringing up the entire stack for the first time), but after removing and re-deploying it, it started to fail. This seems to be a preview-phase problem. If I destroy the whole stack (cluster included), I'm able to bring it back up, and I suspect this is because the preview doesn't attempt to preview everything (as the kubeconfig for the cluster does not yet exist). However, if the cluster already exists when I attempt to preview the resource, it will fail. Here's a sample stack which will fail if the cluster is provisioned first and ingressNginx is then deployed separately:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import { Provider } from "@pulumi/kubernetes";
import { ComponentResource, ComponentResourceOptions, ResourceOptions } from "@pulumi/pulumi";
import { Address } from "@pulumi/gcp/compute";
import { Namespace } from "@pulumi/kubernetes/core/v1";
import { Release } from "@pulumi/kubernetes/helm/v3";

const stack = pulumi.getStack();
const project = pulumi.getProject();

const gcpConfig = new pulumi.Config("gcp");
const region = gcpConfig.require("region");

const _clusterName = `k8s-${stack}-${project}`;
const cluster = new gcp.container.Cluster(
  _clusterName,
  {
    enableAutopilot: true,
    location: region,
    minMasterVersion: "1.19",
    releaseChannel: {
      channel: "STABLE",
    },
  },
  {
    ignoreChanges: ["verticalPodAutoscaling"],
  }
);

const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
    return `
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`.trim();
  });

const clusterProvider = new Provider(_clusterName, {
  kubeconfig: kubeconfig,
});

class IngressNginx extends ComponentResource {
  readonly loadBalancerIp: Address;
  readonly namespace: Namespace;
  readonly ingressNginx: Release;

  constructor(name: string, opts?: ComponentResourceOptions) {
    super("k8s:svc:ingress-nginx", name, {}, opts);
    const childOptions: ResourceOptions = {
      parent: this,
    };
    this.loadBalancerIp = new Address(
      "load-balancer-ip",
      {},
      childOptions
    );
    this.namespace = new Namespace("ingress-nginx", {}, childOptions);
    this.ingressNginx = new Release(
      "ingress-nginx",
      {
        namespace: this.namespace.metadata.name,
        chart: "ingress-nginx",
        version: "4.0.1",
        repositoryOpts: {
          repo: "https://kubernetes.github.io/ingress-nginx",
        },
        values: {
          controller: {
            service: {
              loadBalancerIP: this.loadBalancerIp.address,
            },
          },
        },
      },
      childOptions
    );
  }
}

const ingressNginx = new IngressNginx("ingressNginx", {
  providers: {
    kubernetes: clusterProvider,
  },
});
```

The deployment already fails in the preview phase.
Here's package.json:

```json
{
  "name": "helm-test",
  "devDependencies": {
    "@types/node": "^10.0.0"
  },
  "dependencies": {
    "@pulumi/gcp": "^5.0.0",
    "@pulumi/kubernetes": "^3.7.2",
    "@pulumi/pulumi": "^3.0.0"
  }
}
```
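One mitigation sketch, for what it's worth (my own guess, not something from the thread, and it assumes `ReleaseArgs.values` will accept an `Output` of the entire values object): build the whole `values` dictionary inside a single `apply`, so the address is no longer an unresolved `Output` buried inside a plain object.

```typescript
import { Address } from "@pulumi/gcp/compute";
import { Release } from "@pulumi/kubernetes/helm/v3";

const loadBalancerIp = new Address("load-balancer-ip", {});

// Lift the unresolved address to the outermost level: the Release now receives
// one Output<object> instead of a plain object containing a nested Output.
const values = loadBalancerIp.address.apply(ip => ({
  controller: {
    service: {
      loadBalancerIP: ip,
    },
  },
}));

new Release("ingress-nginx", {
  chart: "ingress-nginx",
  version: "4.0.1",
  repositoryOpts: {
    repo: "https://kubernetes.github.io/ingress-nginx",
  },
  values: values,
});
```

Whether this actually avoids the preview-phase rendering failure depends on how the provider unwraps `values`, so treat it as an experiment rather than a fix.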
I'm also hitting this issue:

```csharp
// aws load balancer controller; https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller
Logger.LogDebug("Installing aws load balancer controller");

var awsLbcRole = new RoleX($"{k8sPrefix}-aws-load-balancer-controller",
    new RoleXArgs
    {
        AssumeRolePolicy = IamHelpers.AssumeRoleForServiceAccount(oidcArn, oidcUrl, "kube-system", "aws-load-balancer-controller", awsProvider),
        InlinePolicies = { ["policy"] = ReadResource("AwsLoadBalancerPolicy.json") }
    },
    new ComponentResourceOptions { Provider = awsProvider });

var awsLbcCrds = new ConfigGroup("aws-load-balancer-controller-crds",
    new ConfigGroupArgs { Yaml = ReadResource("AwsLoadBalancerCrds.yaml") },
    new ComponentResourceOptions { Provider = k8sProvider });

var awsLbcValues = Output.Tuple(clusterName, awsLbcRole.Arn).Apply(((string ClusterName, string RoleArn) tuple) =>
    new Dictionary<string, object>
    {
        ["clusterName"] = tuple.ClusterName,
        ["enableCertManager"] = true,
        ["serviceAccount"] = new { annotations = new Dictionary<string, string> { ["eks.amazonaws.com/role-arn"] = tuple.RoleArn } }
    }.ToDictionary());

var awsLbcRelease = new Release("aws-load-balancer-controller", // ingress records with alb.ingress.kubernetes.io annotations depend on chart finalizers
    new ReleaseArgs
    {
        Namespace = "kube-system",
        Name = "aws-load-balancer-controller",
        RepositoryOpts = new RepositoryOptsArgs { Repo = "https://aws.github.io/eks-charts" },
        Chart = "aws-load-balancer-controller",
        Version = K8sConfig.AwsLbcChartVersion,
        Values = awsLbcValues,
        SkipCrds = true,
        Atomic = true
    },
    new CustomResourceOptions { DependsOn = { awsLbcCrds, certManagerRelease }, Provider = k8sProvider });
```

This fails with an error coming from aws-load-balancer-controller/deployment.
Here's a simpler repro case:

```typescript
import * as random from "@pulumi/random";
import * as k8s from "@pulumi/kubernetes";

const nsName = new random.RandomPet("test");

const ns = new k8s.core.v1.Namespace("test", {
  metadata: {
    name: nsName.id
  }
});

new k8s.helm.v3.Release("nginx", {
  chart: "nginx",
  namespace: ns.metadata.name,
  repositoryOpts: {
    repo: "https://charts.bitnami.com/bitnami",
  },
  values: {},
});
```

Running this fails in the same way during the preview phase.
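For contrast, a control case of my own (not from the thread): the same Release with no `Output` feeding into it, i.e. a literal namespace name. The expectation is that this previews cleanly, which is what isolates the problem to `Output` handling rather than the chart itself.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Everything here is known at preview time; no Output flows into the Release.
const ns = new k8s.core.v1.Namespace("release-test", {
  metadata: { name: "release-test" },
});

new k8s.helm.v3.Release("nginx", {
  chart: "nginx",
  namespace: "release-test",
  repositoryOpts: {
    repo: "https://charts.bitnami.com/bitnami",
  },
  values: {},
}, { dependsOn: ns });
```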
The current implementation of `k8s.helm.v3.Release` does not accept a `pulumi.Output` in the chart values, which was supported in `k8s.helm.v3.Chart` from what I can tell, or at least the Chart did not error in the preview stage with the same values that the Release does.

In my case, I'm trying to deploy the `ingress-nginx` Helm chart with a `controller.service.loadBalancerIP` value fed in from a previously created `gcp.compute.GlobalAddress` resource. Currently this fails in the chart rendering phase and I have to work around it.
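For reference, here is a sketch of what the Chart-based equivalent might look like (my reconstruction; the issue doesn't include it): the same `controller.service.loadBalancerIP` wiring through `k8s.helm.v3.Chart`, where an `Output` in `values` reportedly did not break the preview.

```typescript
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// A previously created static IP; its `address` is an Output<string>.
const ingressIp = new gcp.compute.GlobalAddress("ingress-ip", {});

// Chart (client-side template rendering) variant of the same deployment.
const ingressNginx = new k8s.helm.v3.Chart("ingress-nginx", {
  chart: "ingress-nginx",
  version: "4.0.1",
  fetchOpts: {
    repo: "https://kubernetes.github.io/ingress-nginx",
  },
  values: {
    controller: {
      service: {
        loadBalancerIP: ingressIp.address,
      },
    },
  },
});
```

Resource names here are placeholders; only the chart, the repo, and the `loadBalancerIP`-from-`GlobalAddress` wiring come from the issue description.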