
Support for integer keys #3446

Open
sleepycat opened this issue Jan 13, 2021 · 43 comments
Labels
area/api issues for api module kind/bug Categorizes issue or PR as related to a bug. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@sleepycat

Attempting to build the Knative serving config with kustomize 3.9.0 on Manjaro Linux produces the error Error: json: unsupported type: map[interface {}]interface {}.

[mike@ouroboros minikube]$ kustomize version
{Version:3.9.0 GitCommit:$Format:%H$ BuildDate:2020-12-13T07:57:44Z GoOs:linux GoArch:amd64}
[mike@ouroboros minikube]$ cat kustomization.yaml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/knative/serving/releases/download/v0.19.0/serving-core.yaml
[mike@ouroboros minikube]$ kustomize build .
Error: json: unsupported type: map[interface {}]interface {}

Kustomize 3.8.8 builds it successfully:

[mike@ouroboros minikube]$ kustomize version
{Version:kustomize/v3.8.8 GitCommit:72262c5e7135045ed51b01e417a7e72f558a22b0 BuildDate:2020-12-10T18:05:35Z GoOs:linux GoArch:amd64}
[mike@ouroboros minikube]$ kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  labels:
    serving.knative.dev/release: v0.19.0
  name: knative-serving
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  labels:
    knative.dev/crd-install: "true"
    serving.knative.dev/release: v0.19.0
  name: certificates.networking.internal.knative.dev
...
@Shell32-Natsu
Contributor

There are known issues in 3.9.0. Please try 3.9.1.

@Shell32-Natsu Shell32-Natsu added area/api issues for api module kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Jan 13, 2021
@stefanprodan

stefanprodan commented Jan 17, 2021

@Shell32-Natsu can you shed some light on this? I'm using sigs.k8s.io/kustomize/api v0.7.1 in fluxcd/kustomize-controller but this bug is still there.

I find it very hard to keep up with kustomize upstream changes; there is no changelog except for a list of commits. For Flux users it is even harder: with error output like json: unsupported type: map[interface {}]interface {}, it's impossible to track down which object among all the manifests caused this.

@echel0n

echel0n commented Jan 17, 2021

This issue is still present in API v0.7.2 as well

@stefanprodan

stefanprodan commented Jan 18, 2021

This turns out to be a bug in kyaml, here is how to reproduce it.

Having a custom resource with an integer key:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:
    tcp:
      8080: "default/example-tcp-svc:9000"

Fails with:

$ kustomize build
Error: error marshaling into JSON: json: unsupported type: map[interface {}]interface {}

Works with:

kustomize build --enable_kyaml=false

Version:

kustomize version
{Version:kustomize/v3.9.2 GitCommit: BuildDate:2021-01-17T19:01:12+00:00 GoOs:darwin GoArch:amd64}

What I find really concerning is that users have no way to identify the object/resource/file when kustomize build fails. @Shell32-Natsu what is the recommended way to debug this when you have hundreds of manifests?

stefanprodan added a commit to fluxcd/kustomize-controller that referenced this issue Jan 18, 2021
Workaround for upstream bug: kubernetes-sigs/kustomize#3446

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
@Shell32-Natsu Shell32-Natsu added kind/bug Categorizes issue or PR as related to a bug. triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Jan 19, 2021
@Shell32-Natsu
Contributor

@stefanprodan Thanks for the information. I can reproduce it now. Will investigate it.

@echel0n

echel0n commented Jan 19, 2021

Is there any chance you can also log the filename that triggers the exception, please?

@Shell32-Natsu
Contributor

Shell32-Natsu commented Jan 20, 2021

This issue is triggered by an incompatibility between YAML marshal and unmarshal. I will create a PR to add the resource that causes this error to the error message.

More details about this issue: when kustomize tries to get the YAML bytes from a resource, it converts the resource into a map[string]interface{} and then marshals it. The resource is decoded to map[string]interface{} by yaml.Node.Decode. When there is an integer key, that field is decoded as map[interface{}]interface{} instead of map[string]interface{}, and that is invalid in the marshal step, which goes through encoding/json and only supports string keys.

@Shell32-Natsu
Contributor

After we delete the apimachinery code, we can get the YAML bytes from the RNode directly, so this can be fixed.

@Shell32-Natsu
Contributor

@monopole

@HansK-p
Contributor

HansK-p commented Feb 23, 2021

I have the same issue when using example code from https://istio.io/latest/docs/reference/config/security/peer_authentication:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  selector:
    matchLabels:
      app: finance
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE

I'm also using similar code myself, and my only workaround right now is to stop using Kustomize for this particular deployment, or to downgrade Kustomize. The argument --enable_kyaml=false doesn't seem to be valid anymore.

@nairb774

nairb774 commented Mar 3, 2021

Thank you @HansK-p for your comment, as you helped me identify what I need to do to work around the bug. portLevelMtls having a key of 8080 causes the kyaml parser to treat it as an int of some sort rather than the expected string. Changing it to "8080" makes it work again. Going to move this to a slightly different bug, as there is a good reproduction case and I'm not sure this overlaps with the other issues identified here.

Edit: Just fully read #3446 (comment) - sorry about the spam.

@HansK-p
Contributor

HansK-p commented Mar 8, 2021

And thank you. I feel a bit stupid for not testing that myself.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 1, 2022
@antoineozenne

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2022
@dwalters

dwalters commented May 3, 2022

PR #4604 fixes this, could someone review it?

dwalters added a commit to dwalters/kustomize that referenced this issue May 3, 2022
Properly handles cases where the YAML is not valid JSON, such as
maps with integer keys (kubernetes-sigs#3446).
@Akay7

Akay7 commented Jul 17, 2022

@alexdyas
is there any workaround for your case:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"

?

@alexdyas

Hi @Akay7,

The only workaround I've found so far is to use an older version of kustomize; we're using v3.8.10. Beware, however, that this version has various unrelated bugs, so check the output.

Hope that helps.

Alex

@mghantous

Quoting the integer key seemed to work for our ingress tcp configmap.
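For reference, "quoting the integer key" means writing the key as a YAML string. Applied to the tcp-services ConfigMap from the earlier comment (names taken from that example), the data key would become:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/example-go:8080"  # quoted, so the key parses as a string
```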

dark-vex added a commit to dark-vex/infra-cd that referenced this issue Aug 15, 2022
Nginx-ingress quoting port under udp section due to this bug
kubernetes-sigs/kustomize#3446
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2022
@dwalters

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2023
@antoineozenne

/remove-lifecycle stale

@daurnimator

Quoting the integer key seemed to work for our ingress tcp configmap.

This doesn't seem to work. Although the kustomize build succeeds, the ConfigMap key ends up with the " characters in it, and it then fails to apply with:

ConfigMap "tcp-services" is invalid: data["22"]: Invalid value: ""22"": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
[... 2 more screenfuls of errors, maybe cascading from that one?]

@wibed

wibed commented Nov 30, 2023

@natasha41575 please add back to core usability bugs.

nginx-ingress-controller may have monkey-patched around the issue, but it still persists.

@Garett-MacGowan

+1
