
[Helm/Release] Does not accept an Output in the chart values #1725

Closed
JoaRiski opened this issue Sep 18, 2021 · 5 comments · Fixed by #1760
Assignees
Labels
area/helm helm-release-ga-blockers Items blocking Helm Release GA kind/enhancement Improvements or new features resolution/fixed This issue was fixed
Milestone

Comments

@JoaRiski

JoaRiski commented Sep 18, 2021

The current implementation of k8s.helm.v3.Release does not accept pulumi.Output values in the chart values. From what I can tell, this was supported in k8s.helm.v3.Chart, or at least Chart did not error in the preview stage with the same values that Release does.

In my case, I'm trying to deploy the ingress-nginx Helm chart with the controller.service.loadBalancerIP value fed in from a previously created gcp.compute.GlobalAddress resource. Currently this fails in the chart rendering phase and I have to work around it.

@JoaRiski JoaRiski added the kind/enhancement Improvements or new features label Sep 18, 2021
@mikhailshilkov
Member

@JoaRiski Could you share the snippet that doesn't work for you?

values is an Input, so it's supposed to accept Outputs, but maybe there's a bug down the line.
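For reference, a minimal sketch of what that typing implies (a hypothetical Output, not a confirmed-working example; the chart details are taken from the report above, and this can't be run without a cluster):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical Output<string>, e.g. the address of a previously created
// gcp.compute.GlobalAddress resource.
declare const lbIp: pulumi.Output<string>;

// ReleaseArgs.values is an Input, so an Output nested inside the values
// object should be resolved before the chart is rendered.
new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    repositoryOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
    values: {
        controller: {
            service: { loadBalancerIP: lbIp }, // Output nested inside values
        },
    },
});
```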

@JoaRiski
Author

Yeah, I can do a quick test.

I actually swapped back to the k8s.helm.v3.Chart resource: the Release resource was failing to install (due to non-Pulumi issues), but Pulumi failed to clean up the failed install, so the next time I tried to re-apply, Helm refused because a release with the same name already existed. I'm not sure if this is working as intended, or if it's a separate issue?

Either way, give me a bit, I'll get back to this with more details.

@JoaRiski
Author

JoaRiski commented Sep 20, 2021

@mikhailshilkov Alright, so after some debugging it gets a bit weirder. I managed to deploy it once successfully (when bringing up the entire stack for the first time), but after removing and re-deploying it, it started to fail.

This seems to be a preview-phase problem. If I destroy the whole stack (cluster included), I'm able to bring it back up; I suspect that's because the preview doesn't attempt to preview everything (the kubeconfig for the cluster does not yet exist). However, if the cluster already exists when I attempt to preview the resource, it fails.

Here's a sample stack that fails if the cluster is provisioned first and ingressNginx is deployed in a separate run:

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import { Provider } from "@pulumi/kubernetes";
import { ComponentResource, ComponentResourceOptions, ResourceOptions } from "@pulumi/pulumi";
import { Address } from "@pulumi/gcp/compute";
import { Namespace } from "@pulumi/kubernetes/core/v1";
import { Release } from "@pulumi/kubernetes/helm/v3";


const stack = pulumi.getStack();
const project = pulumi.getProject();
const gcpConfig = new pulumi.Config("gcp");
const region = gcpConfig.require("region");

const _clusterName = `k8s-${stack}-${project}`;
const cluster = new gcp.container.Cluster(
  _clusterName,
  {
    enableAutopilot: true,
    location: region,
    minMasterVersion: "1.19",
    releaseChannel: {
      channel: "STABLE",
    },
  },
  {
    ignoreChanges: ["verticalPodAutoscaling"],
  }
);

const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
    return `
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`.trim();
});

const clusterProvider = new Provider(_clusterName, {
  kubeconfig: kubeconfig,
});

class IngressNginx extends ComponentResource {
  readonly loadBalancerIp: Address;
  readonly namespace: Namespace;
  readonly ingressNginx: Release;

  constructor(name: string, opts?: ComponentResourceOptions) {
    super("k8s:svc:ingress-nginx", name, {}, opts);
    const childOptions: ResourceOptions = {
      parent: this,
    };

    this.loadBalancerIp = new Address(
      "load-balancer-ip",
      {},
      childOptions
    );
    this.namespace = new Namespace("ingress-nginx", {}, childOptions);
    this.ingressNginx = new Release(
      "ingress-nginx",
      {
        namespace: this.namespace.metadata.name,
        chart: "ingress-nginx",
        version: "4.0.1",
        repositoryOpts: {
          repo: "https://kubernetes.github.io/ingress-nginx",
        },
        values: {
          controller: {
            service: {
              loadBalancerIP: this.loadBalancerIp.address,
            },
          },
        },
      },
      childOptions
    );
  }
}

const ingressNginx = new IngressNginx("ingressNginx", {
  providers: {
    kubernetes: clusterProvider,
  }
})

The deployment already fails in the preview phase:

Previewing update (xxxxxx/dev)

View Live: https://app.pulumi.com/xxxxxx/helm-test/dev/previews/eb3541e3-9d1b-4a5f-aa41-19f44eb1ac63

     Type                                Name              Plan       Info
     pulumi:pulumi:Stack                 helm-test-dev
 +   ├─ k8s:svc:ingress-nginx            ingressNginx      create
 +   │  ├─ gcp:compute:GlobalAddress     load-balancer-ip  create
 +   │  └─ kubernetes:core/v1:Namespace  ingress-nginx     create
     └─ kubernetes:helm.sh/v3:Release    ingress-nginx                1 error

Diagnostics:
  kubernetes:helm.sh/v3:Release (ingress-nginx):
    error: failed to create chart from template: YAML parse error on ingress-nginx/templates/controller-service.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{}

Here's package.json

{
    "name": "helm-test",
    "devDependencies": {
        "@types/node": "^10.0.0"
    },
    "dependencies": {
        "@pulumi/gcp": "^5.0.0",
        "@pulumi/kubernetes": "^3.7.2",
        "@pulumi/pulumi": "^3.0.0"
    }
}

@gitfool

gitfool commented Sep 26, 2021

I'm also hitting this issue:

// aws load balancer controller; https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller
Logger.LogDebug("Installing aws load balancer controller");
var awsLbcRole = new RoleX($"{k8sPrefix}-aws-load-balancer-controller",
    new RoleXArgs
    {
        AssumeRolePolicy = IamHelpers.AssumeRoleForServiceAccount(oidcArn, oidcUrl, "kube-system", "aws-load-balancer-controller", awsProvider),
        InlinePolicies = { ["policy"] = ReadResource("AwsLoadBalancerPolicy.json") }
    },
    new ComponentResourceOptions { Provider = awsProvider });

var awsLbcCrds = new ConfigGroup("aws-load-balancer-controller-crds",
    new ConfigGroupArgs { Yaml = ReadResource("AwsLoadBalancerCrds.yaml") },
    new ComponentResourceOptions { Provider = k8sProvider });

var awsLbcValues = Output.Tuple(clusterName, awsLbcRole.Arn).Apply(((string ClusterName, string RoleArn) tuple) =>
    new Dictionary<string, object>
    {
        ["clusterName"] = tuple.ClusterName,
        ["enableCertManager"] = true,
        ["serviceAccount"] = new { annotations = new Dictionary<string, string> { ["eks.amazonaws.com/role-arn"] = tuple.RoleArn } }
    }.ToDictionary());

var awsLbcRelease = new Release("aws-load-balancer-controller", // ingress records with alb.ingress.kubernetes.io annotations depend on chart finalizers
    new ReleaseArgs
    {
        Namespace = "kube-system",
        Name = "aws-load-balancer-controller",
        RepositoryOpts = new RepositoryOptsArgs { Repo = "https://aws.github.io/eks-charts" },
        Chart = "aws-load-balancer-controller",
        Version = K8sConfig.AwsLbcChartVersion,
        Values = awsLbcValues,
        SkipCrds = true,
        Atomic = true
    },
    new CustomResourceOptions { DependsOn = { awsLbcCrds, certManagerRelease }, Provider = k8sProvider });

This fails with an error coming from the aws-load-balancer-controller deployment template:

Diagnostics:
 
aws-load-balancer-controller (kubernetes:helm.sh:Release)
error: failed to create chart from template: execution error at (aws-load-balancer-controller/templates/deployment.yaml:52:28): Chart cannot be installed without a valid clusterName!

Here clusterName is valid and is being passed, albeit via an output tuple.

@viveklak viveklak added the helm-release-ga-blockers Items blocking Helm Release GA label Sep 29, 2021
@lblackstone lblackstone assigned lblackstone and unassigned viveklak Oct 5, 2021
@mikhailshilkov mikhailshilkov added this to the 0.63 milestone Oct 6, 2021
@lblackstone
Member

Here's a simpler repro case:

import * as random from "@pulumi/random";
import * as k8s from "@pulumi/kubernetes";

const nsName = new random.RandomPet("test");

const ns = new k8s.core.v1.Namespace("test", {
    metadata: {
        name: nsName.id
    }
});

new k8s.helm.v3.Release("nginx", {
    chart: "nginx",
    namespace: ns.metadata.name,
    repositoryOpts: {
        repo: "https://charts.bitnami.com/bitnami",
    },
    values: {},
});

Running pulumi up on an empty stack returns the following error:

  kubernetes:helm.sh/v3:Release (nginx):
    error: decoding failure: 1 error(s) decoding:

    * 'Namespace' expected type 'string', got unconvertible type 'resource.Computed', value: '{{}}'
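Until a fix lands, one possible workaround sketch (untested; it trades away the random name from the repro above) is to keep unresolved Outputs out of ReleaseArgs entirely, e.g. by choosing the namespace name as a plain string up front:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Workaround sketch: pick the namespace name as a plain string so that no
// unresolved Output ever reaches ReleaseArgs.
const nsName = "nginx-test"; // fixed name instead of random.RandomPet

const ns = new k8s.core.v1.Namespace("test", {
    metadata: { name: nsName },
});

new k8s.helm.v3.Release("nginx", {
    chart: "nginx",
    namespace: nsName, // plain string, not an Output
    repositoryOpts: {
        repo: "https://charts.bitnami.com/bitnami",
    },
    values: {},
}, { dependsOn: [ns] }); // still order the Release after the Namespace
```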
