name property of DP_MACVTAP_CONF can't exceed 10 characters #120

Open
tstirmllnl opened this issue May 6, 2024 · 3 comments

What happened:
The name property of DP_MACVTAP_CONF appears to have a character limit of 10 characters. I'm not sure whether this is due to the plugin itself or to the annotation that has to be set on the NetworkAttachmentDefinition.

What you expected to happen:
I didn't expect this character limit.

How to reproduce it (as minimally and precisely as possible):

  1. Create the macvtap device plugin configuration:
    NOTE: If you change the name field to dataplanea (10 characters) and update the NetworkAttachmentDefinition annotation to k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplanea, it works.
kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: |
    [
      {
        "name" : "dataplaneab",
        "lowerDevice" : "isol",
        "mode": "bridge",
        "capacity" : 50
      }
    ]
  2. Deploy the macvtap DaemonSet using https://github.com/kubevirt/macvtap-cni/blob/main/manifests/macvtap.yaml
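    NOTE: Once the device plugin pods from this DaemonSet are running, a quick sanity check is to confirm that the kubelet advertises the configured resource; the node name below is a placeholder.
kubectl get node <node-name> -o jsonpath='{.status.allocatable}' | grep macvtap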
  3. Deploy the NetworkAttachmentDefinition:
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: isolated-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplaneab
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "isolated-net",
      "type": "macvtap",
      "ipam": {
              "type": "host-local",
              "subnet": "172.31.0.0/20",
              "rangeStart": "172.31.12.1",
              "rangeEnd": "172.31.15.254",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "172.31.0.1"
            }
    }'
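    NOTE: To double-check that this annotation lines up with the resource name the device plugin actually exposes, something like the following can be used:
kubectl get net-attach-def isolated-net -o yaml | grep resourceName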
  4. Deploy the VM:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-test
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
      interfaces:
        - name: isolated-network
          macvtap: {}
  networks:
    - name: isolated-network
      multus:
        networkName: isolated-net
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/cirros-container-disk-demo:latest
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |
            #!/bin/sh

            echo 'printed from cloud-init userdata'

kubectl describe prints out

Status:           Failed
Reason:           UnexpectedAdmissionError
Message:          Pod Allocate failed due to rpc error: code = Unknown desc = numerical result out of range, which is unexpected

Looking at the node where it's scheduled, it looks like the macvtap resource wasn't allocated.

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                                  Requests    Limits
  --------                                  --------    ------
  cpu                                       702m (1%)   770m (1%)
  memory                                    815Mi (0%)  320Mi (0%)
  ephemeral-storage                         0 (0%)      0 (0%)
  hugepages-1Gi                             0 (0%)      0 (0%)
  hugepages-2Mi                             0 (0%)      0 (0%)
  devices.kubevirt.io/kvm                   0           0
  macvtap.network.kubevirt.io/dataplane      0           0
  macvtap.network.kubevirt.io/dataplanea     0           0
  macvtap.network.kubevirt.io/dataplaneab    0           0
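
For anyone reproducing this, the device plugin pod on that node may have more detail on the failed Allocate call; the namespace and pod name below are placeholders.
kubectl -n <macvtap-namespace> logs <macvtap-cni-pod-on-that-node>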

NOTE: It also looks like macvtap.network.kubevirt.io/ resources from previous runs are left behind on the node. How does one remove these?

Additional context:
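A guess, not verified against the plugin source: "numerical result out of range" is the text of errno ERANGE, and Linux limits network interface names to 15 characters (IFNAMSIZ minus the terminating NUL). If the device plugin builds the macvtap interface name from the configured name plus a prefix or suffix, an 11-character name could push the final interface name over that limit, which would explain why 10 characters still works. A rough illustration (eth0 and the numeric suffixes are made up, not the plugin's actual naming scheme):

# A 16-character name is over the 15-character limit, so the first add fails;
# the 15-character variant is accepted.
ip link add link eth0 name dataplaneab12345 type macvtap mode bridge
ip link add link eth0 name dataplanea12345 type macvtap mode bridge
ip link del dataplanea12345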

Environment:

  • KubeVirt version (use virtctl version): 1.1.0
  • Kubernetes version (use kubectl version): 1.23.9
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: Baremetal
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-1160.11.1.el7.x86_64
  • Other Tools: Multus Thick client (4.0.2)
@kubevirt-bot
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label Aug 4, 2024
@maiqueb
Collaborator

maiqueb commented Aug 5, 2024

/remove-lifecycle stale

Hi, thanks for opening this issue. Sorry it took me so long to get to it.

I'll investigate the length limitation of the resource name. I'm inclined to say it is the name of the resource in the annotation that indirectly has this limitation, but right now I'm not sure.

It also looks like it leaves macvtap.network.kubevirt.io/ resources from previous runs. How does one remove these?

IIRC (been too long since I looked at this project ...) we never implemented support for this. I do agree that it is a known bug.

Would you mind opening a new issue for it?

kubevirt-bot removed the lifecycle/stale label Aug 5, 2024
@tstirmllnl
Author

@maiqueb No worries. A new issue about macvtap.network.kubevirt.io/ resources not getting cleared between runs has been created here: #121
