
RFE: Precedence for applying NNCPs #1101

Open
rajinator opened this issue Jun 14, 2022 · 4 comments

Comments

@rajinator

What happened:
When NNCPs are separated by layer of components on the same node/set of nodes, for example:

  1. Base interface/base bond NNCP
  2. NNCPS for subinterfaces to define VLANs and IPs/Routes

There is no order of precedence so upon node reboots, there's a chance that the controller may try to apply sub-interface NNCP before Base interface NNCP

This kind of use-case is essential when having to do interface configuration/modification on the fly without rebooting nodes as much as possible.

What you expected to happen:
The base interface configured first, followed by the subinterfaces, e.g.:

  1. NNCP for bond1 applied first
  2. NNCPs for bond1.xyz ... bond1.nnn applied after 1 is successfully applied

How to reproduce it (as minimally and precisely as possible):
Configure nodes with multiple NNCPs as seen above:

  1. NNCP for bond1
  2. NNCP for bond1.xyz...bond1.nnn with Static IPs and/or routes

Reboot the nodes and observe the resulting NNCP/NNCE statuses.

Anything else we need to know?:

Environment:

  • Problematic NodeNetworkConfigurationPolicy:
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: cnf-bond-interfaces
spec:
  desiredState:
    interfaces:
    - description: ens1f2
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      mtu: 9000
      name: ens1f2
      state: up
      type: ethernet
    - description: ens1f3
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      mtu: 9000
      name: ens1f3
      state: up
      type: ethernet
    - description: Bond ens1f2 and ens1f3
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      link-aggregation:
        mode: active-backup
        options:
          miimon: "100"
          primary: ens1f2
        slaves:
        - ens1f2
        - ens1f3
      mtu: 9000
      name: bond1
      state: up
      type: bond
  nodeSelector:
    cnf: "true"
---
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: base-vlans
spec:
  desiredState:
    interfaces:
    - description: vlan 123 bond1
      ipv4:
        enabled: false
      name: bond1.123
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 123
    - description: vlan 456 bond1
      ipv4:
        enabled: false
      name: bond1.456
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 456
  nodeSelector:
    base-vlans: "true"
---
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: iscsi-config-iscsinode1
spec:
  desiredState:
    interfaces:
    - description: iscsi bond0
      ipv4:
        address:
        - ip: 192.168.1.12
          prefix-length: 24
        dhcp: false
        enabled: true
      mtu: 9000
      name: bond0.111
      state: up
      type: vlan
      vlan:
        base-iface: bond0
        id: 111
    - description: iscsi bond1
      ipv4:
        address:
        - ip: 192.168.2.12
          prefix-length: 24
        dhcp: false
        enabled: true
      mtu: 9000
      name: bond1.111
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 111
  nodeSelector:
    kubernetes.io/hostname: iscsinode1
  • kubernetes-nmstate image (use kubectl get pods --all-namespaces -l app=kubernetes-nmstate -o jsonpath='{.items[0].spec.containers[0].image}'): registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel8@sha256:2838f5af27a08be5bfc76cf1916d9a512666d4eda93b38c0470cfbbe406cd1c8

  • NetworkManager version (use nmcli --version): nmcli tool, version 1.30.0-13.el8_4

  • Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.8+ee73ea2", GitCommit:"f7310cc5b1c14454dcd067838ca8407e9da13a26", GitTreeState:"clean", BuildDate:"2022-03-10T05:52:32Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

oc version
Server Version: 4.8.35
Kubernetes Version: v1.21.8+ee73ea2
  • OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="48.84.202203140855-0"
  • Others:
@rajinator rajinator changed the title Precedence for applying NNCPs RFE: Precedence for applying NNCPs Jun 14, 2022
@phoracek
Member

Hello @rajinator. Thanks for bringing this up.

The lack of ordering/merging was a design decision. The motivation was to keep kubernetes-nmstate a mere shim, without any understanding of the contents of the configuration (which is the responsibility of nmstate itself). If there is a dependency between two interfaces, they must be defined in a single NNCP.

That being said, this would clearly be a useful feature. The questions are how much complexity it would add to knmstate, how it would affect debugging, and whether we could implement it without understanding the semantics of the desired state.
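For reference, the single-NNCP workaround described above could look like the following sketch, which merges the reporter's cnf-bond-interfaces and base-vlans manifests so that nmstate can resolve the bond/VLAN dependency within one transaction (policy name is made up; interface definitions are reused from the issue and trimmed for brevity):

```yaml
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: cnf-bond-and-vlans        # hypothetical merged policy: bond + dependent VLANs
spec:
  desiredState:
    interfaces:
    - name: bond1                 # base bond, as in cnf-bond-interfaces
      type: bond
      state: up
      mtu: 9000
      link-aggregation:
        mode: active-backup
        options:
          miimon: "100"
          primary: ens1f2
        slaves:
        - ens1f2
        - ens1f3
      ipv4:
        enabled: false
      ipv6:
        enabled: false
    - name: bond1.123             # VLAN depends on bond1; defined in the same
      type: vlan                  # policy, so nmstate orders the two correctly
      state: up
      vlan:
        base-iface: bond1
        id: 123
      ipv4:
        enabled: false
  nodeSelector:
    cnf: "true"
```

The trade-off is that unrelated layers (e.g. base bonds vs. per-node iSCSI VLANs) can no longer be managed as independent policies.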

@qinqon
Member

qinqon commented Jun 14, 2022

Maybe we can implement a new NodeNetworkConfigurationTemplate that has some capture placeholders, and a desiredStateFromTemplate field in the NNCP to reference it; the NNCP would then fill in the placeholders, creating a pair of captures with the IPs.

Something like the following:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationTemplate
metadata:
  name: cnf-bond-vlan-bridge
spec:
  snippet:
    interfaces:
    - description: ens1f2
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      mtu: 9000
      name: ens1f2
      state: up
      type: ethernet
    - description: ens1f3
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      mtu: 9000
      name: ens1f3
      state: up
      type: ethernet
    - description: Bond ens1f2 and ens1f3
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      link-aggregation:
        mode: active-backup
        options:
          miimon: "100"
          primary: ens1f2
        slaves:
        - ens1f2
        - ens1f3
      mtu: 9000
      name: bond1
      state: up
      type: bond
    - description: vlan 123 bond1
      ipv4:
        enabled: false
      name: bond1.123
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 123
    - description: vlan 456 bond1
      ipv4:
        enabled: false
      name: bond1.456
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 456
    - description: iscsi bond0
      ipv4:
        address:
        - ip: "{{ capture.ip-bond0 }}"
          prefix-length: 24
        dhcp: false
        enabled: true
      mtu: 9000
      name: bond0.111
      state: up
      type: vlan
      vlan:
        base-iface: bond0
        id: 111
    - description: iscsi bond1
      ipv4:
        address:
        - ip: "{{ capture.ip-bond1 }}"
          prefix-length: 24
        dhcp: false
        enabled: true
      mtu: 9000
      name: bond1.111
      state: up
      type: vlan
      vlan:
        base-iface: bond1
        id: 111
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: cnf-bond-vlan-bridge-node1
spec:
  capture: 
    ip-bond0: 192.168.1.12
    ip-bond1: 192.168.2.12
  desiredStateFromTemplate: cnf-bond-vlan-bridge
  nodeSelector:
    kubernetes.io/hostname: iscsinode1

@qinqon
Member

qinqon commented Jun 14, 2022

Also note that we are working on a host IP pool proposal that may be of help to you:

@qinqon
Member

qinqon commented Jun 15, 2022

We can go a step further and think of a scenario with hundreds of nodes, each with its own static routes: the NNCP would define only the node-specific parts, and a Template would have to expand per node into an NNCE.
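Under that scenario, a per-node policy referencing the proposed template API could be as small as the following sketch (purely hypothetical: neither NodeNetworkConfigurationTemplate nor desiredStateFromTemplate exists in kubernetes-nmstate; the field names follow the proposal earlier in this thread, and the node name, capture key, and next-hop value are invented):

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: routes-node042            # hypothetical per-node policy
spec:
  capture:
    next-hop: 10.0.42.1           # node-specific value filled into the template
  desiredStateFromTemplate: static-routes-template   # shared template name (assumed)
  nodeSelector:
    kubernetes.io/hostname: node042
```

Each such policy would carry only the values that differ per node; the shared template would expand them into a full NNCE for that node.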
