
Network Management #24

Closed
ajeddeloh opened this issue Aug 6, 2018 · 37 comments
Labels
jira for syncing to jira priority/high

@ajeddeloh
Contributor

Fedora uses NetworkManager for handling network configuration. Container Linux uses networkd. We need to decide on one. We don't want to carry both, since that's just twice the maintenance and chance of breakage without much benefit.

NetworkManager is advantageous because it has wider adoption, especially within the Fedora (and Red Hat) ecosystem. It is (to my knowledge) generally more stable than networkd. Unfortunately, it's also harder to write config files for. The nmstate project will help significantly (it makes the configuration more declarative), but it still lacks the flexibility of networkd's configuration. nmstate would need to be rewritten in some compiled language (i.e. not Python) for inclusion in FCOS.

networkd has a configuration format that lends itself nicely to Container Linux today. The ability to "layer" configs works well for having a default that can be overridden for cloud-specific changes and user-specified changes. This is especially powerful when combined with its matching rules. Its configuration is very similar to systemd's in general. nmstate has a proposal for templates which would help, but they still aren't as flexible as networkd's configuration. Unfortunately, networkd tends to suffer regressions and isn't as actively maintained as the core of systemd or NetworkManager. It cannot handle config file changes without restarting the service, but that isn't an issue with FCOS since the nodes shouldn't be configured after first boot.
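To make the "layering" concrete, here is a minimal sketch; file names, the interface glob, and values are illustrative rather than CL's actual defaults. A vendor default shipped in /usr/lib can be partially overridden by a same-named drop-in directory under /etc, and only the keys present in the drop-in change:

# /usr/lib/systemd/network/50-dhcp.network -- illustrative vendor default
[Match]
Name=en*

[Network]
DHCP=yes

# /etc/systemd/network/50-dhcp.network.d/10-mtu.conf -- illustrative user drop-in
# Only overrides the MTU; everything else from the vendor default still applies.
[Link]
MTUBytes=1400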

Finally, networkd has fewer dependencies than NetworkManager (considering we're already shipping systemd), especially since Fedora enables most features. We could change this and repackage NetworkManager for FCOS, stripping out unneeded features, but that'd be another custom package to carry and maintain.

We don't have any visibility into how existing CL or FAH deployments are using networkd or NetworkManager (respectively). This makes it hard to determine what requirements we have for network configuration.

In my opinion, networkd is a better fit for FCOS, even if it is more regression-happy than we'd like. I'm also perhaps a bit biased coming from my CL background.

@mskarbek

mskarbek commented Aug 6, 2018

+1 for networkd.

@arithx
Contributor

arithx commented Aug 6, 2018

It cannot handle config file changes without restarting the service, but that isn't an issue with FCOS since the nodes shouldn't be configured after first boot.

I don't think we can completely discount this issue as it also can affect us in the initramfs (like DO on CL).

@ajeddeloh
Contributor Author

I don't think we can completely discount this issue as it also can affect us in the initramfs (like DO on CL).

I think that's workable by doing something hacky like using some ip commands in an ExecStartPre to bootstrap enough networking to run coreos-metadata before trying to start systemd-networkd. Hell, we could probably even use a network namespace if we didn't want it messing with the host networking at all. @bgilbert might know better though since he's poked at DO more.
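A rough sketch of what that hack might look like, as a drop-in for the metadata service; the drop-in name, interface, and address are placeholders, not a tested configuration:

# Hypothetical drop-in, e.g. /etc/systemd/system/coreos-metadata.service.d/10-bootstrap-net.conf
[Unit]
# Run before networkd so the bootstrap addressing doesn't fight with it.
Before=systemd-networkd.service

[Service]
# Bring up just enough networking for the metadata fetch.
ExecStartPre=/usr/bin/ip link set dev eth0 up
ExecStartPre=/usr/bin/ip addr add 169.254.42.2/16 dev eth0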

@EdDev

EdDev commented Aug 7, 2018

It cannot handle config file changes without restarting the service, but that isn't an issue with FCOS since the nodes shouldn't be configured after first boot.

I understand that this has not been a needed option so far, but with some use cases (like KubeVirt), taking down a node is costly and unwanted.
Scenarios where the operator/admin adds a new network for their cluster should not interrupt the existing workload. At the least, it would be smoother to add the VLAN (for example) to the relevant nodes on the fly without any interruption.

networkd vs NM also seems to have long-term implications. Besides the shared maintenance burden, we need to change and evolve as requirements change.
As more users get into this domain, they bring more requirements with them, so the need to change seems stronger than ever.

@cgwalters
Member

This is a tricky discussion without taking into account layers higher in the stack. Last I heard, OpenStack (at least RDO/RHOS) does not support NetworkManager (and, since this is RHEL 7, requires legacy initscripts).

I'm not sure what the status of things is in the Kubernetes world in general - I imagine there are some things taking a hard dependency on NM, but I don't know.

At least, it would be smoother to add the vlan (for example) to the relevant nodes on-the-fly without any interruption.

I think it's an open question, though, whether such VLANs are truly managed by NM - the way I think of it is you have "host networking" and "kubernetes networking", with the latter being a distinct layer.

@yanivlavi

yanivlavi commented Aug 8, 2018

OpenStack is planning to support NetworkManager, as initscripts are being deprecated on the Fedora side over time (no added features, only major bug fixes).
Kubernetes networking is mainly overlay virtual networks that depend on the host networking being defined beforehand, and it only handles networking within Kubernetes' main scope. Host networking is a gap, specifically for the advanced use cases added by CRD multi-network concepts (additional network devices beyond the main one).

The scope of the NM project is much wider in terms of what it manages and what it plans to manage.
It already allows, for example, creating OVS bridges for overlay-to-physical connectivity and managing SR-IOV pools.
It also plans to add more advanced standardization like DPDK device creation and management.

I don't think there is interest on the networkd side in handling this, and even if there is, the gap is wider.

You also need to keep in mind that the Fedora devs' attention is on NM, not networkd, which means Container Linux would need to maintain and test a completely different network stack, one that is less stable by nature.

@thom311

thom311 commented Aug 8, 2018

Hi, an NM guy here.

We welcome discussing this topic. Thank you for raising the issue. But the thread sounds like it's pressing for a quick decision. We only recently started discussing the gaps for adopting NetworkManager. Unsurprisingly, gaps were identified. All we can assure you of at this point is that these gaps can and will be filled. But I'd rather not have you take my word for it. Why not take the next months to address the issues and provide a proof of concept? It seems better to make a decision after showing that NetworkManager actually works well in your environment.

Anyway, if you really want to force a decision now, then at least discuss it in terms of where NetworkManager can be in the near future. For example, claiming that it's "harder to write config files for" doesn't seem like a good argument. It's not clear what the exact issues are, or how hard they are to fix. I'd even claim that this issue is not severe and that NetworkManager will easily improve in this regard (e.g. https://bugzilla.gnome.org/show_bug.cgi?id=772414 )

In my opinion, networkd is a great piece of software and does what it does well. Both NetworkManager and networkd tick checkboxes in the feature matrix. If you purely decide based on that, networkd is very attractive: it works well today, why would you even change?

Yaniv already elaborated on the larger picture of why. Let me add why we think NetworkManager should be used.

Our stance is that NetworkManager provides an API on Linux for configuring networking. Having multiple APIs is a burden. Look at how Cockpit does not support networkd (cockpit-project/cockpit#7987). Can you imagine the effort to add a suitable API to networkd and leverage it from Cockpit? And think further: integrating this API with the rest of the Linux ecosystem. It's really about integrating components on Linux. If you use NetworkManager today, you can use a GUI, the CLI, file-based configuration, Ansible, Cockpit. They all integrate with each other because they use the same underlying API. In the future, we aim for management systems (e.g. OpenStack and libvirt) to also use the same API.

For example, I don't mind when systemd-timesyncd competes with chrony. Client-side NTP is simple from a configuration point of view. Not so a networking API. Integrating with either networkd or NetworkManager is a large effort, and it's almost unfeasible to find a powerful, common abstraction that targets multiple networking APIs. Sure, networkd or something else could take the role of standard API. But NetworkManager is a de facto standard API today, it is quite suitable today, and it integrates with many components today. It will be simpler to fix the shortcomings of NetworkManager and focus on providing a suitable API than to bend the rest of the ecosystem towards an API that does not even exist yet.

Granted, it's impossible for the generalist to be the perfect solution for all scenarios. But that is what both Linux kernel and systemd do very successfully. They don't only target a server, desktop, or embedded environment. Being a generalist is key to their success.

Regardless of whether NetworkManager will be adopted, NetworkManager will continue improving at becoming a generally suitable API. It's not that a decision needs to be made today. As NetworkManager keeps getting better, the door for adopting it is only opening further.

TL;DR:

  • NetworkManager's value is in being well maintained, feature-rich and used widely (also by Fedora)
  • It can already be stripped down considerably and configured by dropping in simple configuration files
  • At the moment, switching to it could be difficult, but we're willing to work on Container Linux use cases
  • We need time to understand Container Linux needs better, perhaps making NetworkManager more systemd-networkd-y along the way
  • Before taking a NetworkManager vs networkd decision, can we list main NetworkManager issues and missing functionalities and see if/how we can adapt it in the next NetworkManager releases?

@ashcrow
Member

ashcrow commented Aug 8, 2018

Anyway, if you really want to force a decision now, then at least discuss it in terms of where NetworkManager can be in the near future. For example, claiming that it's "harder to write config files for" doesn't seem like a good argument.

That's a fair statement. The true issue is more around passing a single configuration that can use hierarchical templates to produce the proper output for different clouds. If nmstate adds stacking templates, that would get us past the minimum requirements for the configuration file.

There still is a bit of an issue with running the configuration while in the initramfs, though. I don't think that's a killer, but it is pretty limiting since we've moved to Ignition.

The "it requires python" thing is also a problem, but nmstate is open to porting to a complied language and there is a workaround for using the python version as a "binary" while the port occurs.

Granted, it's impossible for the generalist to be the perfect solution for all scenarios.

Agreed.

Before taking a NetworkManager vs networkd decision, can we list main NetworkManager issues and missing functionalities and see if/how we can adapt it in the next NetworkManager releases?

I believe we attempted to do that in the last meeting we had. Keep in mind, it's not that "NetworkManager is wrong". Most of us use it on a daily basis and it's great! No matter what the decision is, we should still get together over missing functionality to help widen NetworkManager's use cases.

@ajeddeloh
Contributor Author

Regarding the configuration: networkd's config is pretty great for our use case. There's literally nothing I'd want changed in terms of the configuration format. A big theme with networkd (and systemd for that matter) configuration is that it's fully decomposable into as many files as you'd like, and every part of the configuration supports that. There are no special cases; this makes the configuration very clean. "Layering" configs is a fully supported case and a first-class citizen.

Supporting the networkd style configuration is somewhat antithetical to nmstate's goals from my understanding (please correct me if I'm misunderstanding). nmstate wants to have the running state of an interface and the configuration for that interface be expressed in the same manner (which is pretty cool). Describing the state of an interface is different than describing rules for what the state should be.

In networkd, all the config files are parsed into a bunch of rules that get applied when interfaces appear. Networkd doesn't know what the state of the interface should be until the interface appears and the rules are applied. nmstate has a config file that describes exactly what the state should be but that needs to be generated somehow from a more generic source.

In the end, the desired state of the network is a function of some rules and an interface to apply them to. There are a few main components:

  1. Configuration describing rules for how the network should be configured when an interface appears
  2. Logic for parsing the configuration into a single unified set of rules (aka combining config files)
  3. Logic for parsing those rules and determining what the state of the network should be given an interface
  4. A way to apply that state to the host

networkd excels at #1 and #2 and combines #3 and #4 into one and the same (afaik networkd doesn't try to determine the ideal state first; it just applies rules until they are all applied). nmstate + NM does a great job at #4. The nmstate template proposal would tackle #3 (correct me if I'm wrong), but templates aren't as expressive or easily tweakable as networkd's configs. Looking at the proposal, it looks like #3 would also be manual; we should make that automatic.

Misc thought experiment: what if there were something that took a networkd-like set of configs and generated an nmstate config when an interface appeared?

Another difference (not saying this is better or worse, purely different) is that networkd does nothing automatically; everything is very explicit. This means you may need to do manual tweaking yourself but also means there are no surprises. I personally like this and think it goes well with the explicit, declarative nature of CL today, but could be convinced a little magic here and there is ok.

NetworkManager is great and supports a great many things but that also makes me a little nervous. More flexibility and features are not necessarily better in this case. Each feature someone uses is another chance of breakage, which is very painful when you're running automatic updates. Related question: does nm have any stability promises for configs? networkd doesn't make any promises to my knowledge; I'm just curious if NM does.

We also don't want users logging into their machines. The whole idea behind CL and FCOS is you have an Ignition config that defines what your machine should be, and then you don't touch it. The D-Bus API isn't helpful in this case since changing the config isn't useful. From what I understand (granted, I don't understand Kubernetes well), the host networking and the networking k8s sets up are non-overlapping, so we should just let each do its own thing. That is, k8s shouldn't care what the host's networking stack is. @EdDev you talked about KubeVirt wanting to make changes, can you elaborate on that?

As someone who doesn't use NM and has now been looking into it a lot, one thing that jumps out at me is that since its (arguably) primary use case is interactive configuration, the docs on how to configure it by just writing config files don't seem to be as good (comparatively). There aren't a lot of examples for setting up NM without using nmcli or a GUI tool. (Or maybe I'm just bad at finding them.)

Finally, the discussion seems to be "Can we get NetworkManager/nmstate to do enough to fill the role of networkd" and not "which is a better fit for FCOS". If we'd have to shoehorn things into nmstate or NetworkManager, we shouldn't use them. If we could incorporate the things we like from networkd that would be great, but I worry that that'd be a large undertaking to do right.

Sorry if that was a bit ramble-y. I had a lot of thoughts.

@leoluk

leoluk commented Aug 9, 2018

I love NetworkManager for workstations, but I always uninstall it on servers. It has caused a fair amount of downtime for me in the past due to its dynamic nature/autoconfiguring unknown interfaces. Most recently, it suddenly felt responsible for configuring VLAN interfaces added by OpenStack Neutron, breaking an OpenStack cluster. Config management is pretty hard, too. Configuring something like bonded interfaces isn't straightforward at all.

Yes, some of this can be disabled, but to me it feels much more complex and unpredictable than networkd.

@EdDev

EdDev commented Aug 9, 2018

Describing the state of an interface is different than describing rules for what the state should be.

Right, I thought a template can be handy to cover the rules part without compromising the declarative state nature.

nmstate has a config file that describes exactly what the state should be but that needs to be generated somehow from a more generic source.

nmstate actually does not need the full state; it will take the requested config state and merge it with the current state to create a new full state. So you could declare in the config that you are interested in setting the MTU of an interface, by specifying the interface name, type, and the MTU itself, and nmstate will internally read the full current state of that interface and overwrite the MTU with the one from the input config.
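For illustration, such a partial desired state could look roughly like this in nmstate's YAML (interface name and values are made up; only the listed keys are changed, the rest is taken from the current state):

interfaces:
  - name: eth1
    type: ethernet
    state: up
    mtu: 9000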

  1. Configuration describing rules for how the network should be configured when an interface appears
  2. Logic for parsing the configuration into a single unified set of rules (aka combining config files)
  3. Logic for parsing those rules and determining what the state of the network should be given an interface
  4. A way to apply that state to the host
  1. nmstate indeed does not attempt to react to a new interface appearing; it assumes the network devices are past init and available, leaving that burden to NM. Having nmstate triggered automatically when a new device is detected and then processing the templates to generate a config for it is doable, but I would wonder if that is really necessary when we could just wait until the system has loaded all devices and only then run the "templating" once.
    It is also reasonable to request such an RFE from NM directly, to define template defaults per rules.
  2. That is just merging dicts under the hood; if we define it well, I see no problem doing that for both templates and state config files. In a simplistic solution, one is appended over the other, and in case of collisions the last one wins. In my view, the challenge is just in the requirements and definitions of how this should work.

@EdDev you talked about kubevirt wanting to make changes, can you elaborate on that?

KubeVirt's purpose is to run VMs as applications on a k8s cluster, and it is an example of applications that are more strongly bound to the environment they run on (OS, HW) and more sensitive when moved from one node to another. Because of that, changes to node networking settings (creating a new bridge, changing the MTU, adding a bond, replacing/adding an interface or VLAN, etc.) should preferably occur without disturbing the applications that run on that node much (or at all). As an example, adding a VLAN for a secondary network for the pods to consume should not take down the node with all its pods.

There aren't a lot of examples for setting up NM without using nmcli or a GUI tool.

I think you are referring to the ifcfg files. NM is consuming these config files and while NM is operational, it replaces the initscripts.

I think that the discussion is more about short-term vs. long-term solutions and supporting new requirements from the node.

It would be useful to provide an example scenario that incorporates most of the requirements you are looking for and consider a must. Then an estimate can be made of the cost, or alternatives can be provided for consideration.

@yanivlavi

I love NetworkManager for workstations, but I always uninstall it on servers.

We have been working with the NetworkManager devs on improving the server use cases in recent Fedora releases. But keep in mind that the use case is initscripts running alongside NetworkManager in OpenStack; I'm not sure that networkd running alongside NetworkManager/initscripts would be any better.

  1. Configuration describing rules for how the network should be configured when an interface appears
  2. Logic for parsing the configuration into a single unified set of rules (aka combining config files)

I would appreciate operator use cases showing what kinds of smartness they use after the defaults are applied, where decomposition is useful.

NetworkManager is great and supports a great many things but that also makes me a little nervous. More flexibility and features are not necessarily better in this case. Each feature someone uses is another chance of breakage, which is very painful when you're running automatic updates.
We also don't want users logging into their machines.

That doesn't mean that these advanced use cases can be deprioritized. We are planning to provide a CRD that will lock support to the policies we expose at the cluster level; the intention is not to let customers SSH in and do everything.

You should consider that maintaining something outside the Fedora ecosystem will have maintenance costs that might be greater than improving NM, and if NM is not used to solve these use cases, then networkd would need to be able to do so.

host networking and the networking k8s sets up are non-overlapping, so we should just let each do its own thing. That is, k8s shouldn't care what the host's networking stack is.

In the base installation that is true, but in that case Ignition should probably only care about a single NIC or bond needed for K8s networking, and the scope mentioned here is bigger even for the basic use cases.
What we plan is to expose opt-in advanced capabilities for multiple host networks in a smart way from the cluster level, which is sensitive to the host networking stack and has a greater set of feature requirements.

You need to consider a possible near future where Ignition only cares about a single NIC or bond needed for K8s networking and the rest is done after the cluster is up via smart cluster policies, providing day-2 management for the non-base use cases.

@cgwalters
Member

Most recently, it suddenly felt responsible for configuring VLAN interfaces added by OpenStack Neutron, breaking an OpenStack cluster.

Yeah; see e.g. this bug.

@dcbw

dcbw commented Aug 9, 2018

Note that all OpenShift installations, by default, run NetworkManager. That includes OpenShift Online (literally hundreds of nodes supporting tens of thousands of users). That is a huge number of systems that have proven that NetworkManager is capable of supporting OpenShift, Kubernetes, and container use-cases without interference or problems. In my 3+ years working with OpenShift and Kubernetes, I have only a few times (in 2015) had to debug an issue that involved NetworkManager.

E.g., NetworkManager is already widely deployed in mission-critical server environments and seems to work fairly well there for container-based use cases.

@dcbw

dcbw commented Aug 9, 2018

Yeah; see e.g. this bug.

@cgwalters not sure it's that one; for whatever reason we don't have (and haven't had) problems with OpenShift and NM, and we use veth interfaces extensively there. NM does not touch them (though it does recognize them and show them in its CLI).

@ajeddeloh
Contributor Author

Yes, some of this can be disabled, but to me it feels much more complex and unpredictable than networkd.

If we do go the nm + nmstate route will this be an issue? Or rather, can nmstate ensure that nm only does things configured through nmstate?

Right, I thought a template can be handy to cover the rules part without compromising the declarative state nature.

They can, but I'd want them to be decomposable and "overlay"-able.

nmstate actually does not need the full state; it will take the requested config state and merge it with the current state to create a new full state. So you could declare in the config that you are interested in setting the MTU of an interface, by specifying the interface name, type, and the MTU itself, and nmstate will internally read the full current state of that interface and overwrite the MTU with the one from the input config.

I'm curious how this handles unset things, i.e. if the running config has some option set and you want to unset it (particularly in cases where "zero values" like 0 or an empty string are distinct from nil/null/None values). Ignition also does config merging and can't really unset values from an appended config. I don't know if nmstate's config has places where this would come into play yet, but it's something to think about in the future. This is kinda off topic; I'm just curious how you handle that since I've worked a lot on Ignition, which has a similar problem.

Also, if you're not making changes to a running machine and only loading the config on boot (which is how we think FCOS should be run), then the config being loaded is the full state, yes?

KubeVirt's purpose is to run VMs as applications on a k8s cluster, and it is an example of applications that are more strongly bound to the environment they run on (OS, HW) and more sensitive when moved from one node to another. Because of that, changes to node networking settings (creating a new bridge, changing the MTU, adding a bond, replacing/adding an interface or VLAN, etc.) should preferably occur without disturbing the applications that run on that node much (or at all). As an example, adding a VLAN for a secondary network for the pods to consume should not take down the node with all its pods.

This sounds like leaving kubevirt to manage its own things would be fine, right?

I think that the discussion is more about short-term vs. long-term solutions and supporting new requirements from the node.

I don't understand what you mean here.

It would be useful to provide an example scenario that incorporates most of the requirements you are looking for and consider a must. Then an estimate can be made of the cost, or alternatives can be provided for consideration.

I don't like the term requirements; it implies bare-minimum functionality. I don't want to throw out the flexibility of networkd (specifically its configuration). We don't have data on how existing CL users use networkd, and I don't want to move to something and have a bunch of users lose functionality they were using. The CL user base is mostly only vocal when things break; for better or worse they don't really participate in the development process or really give much feedback (unless we break them, then they do). We can't give you a hard list of requirements other than "it should support most things we support today in a similarly flexible and elegant configuration". I also worry that if we do find a way to generate a list of requirements, we'll build something to meet just those requirements and not solve more general cases.

I think you are referring to the ifcfg files. NM is consuming these config files and while NM is operational, it replaces the initscripts.

We have been working with the NetworkManager devs on improving the server use cases in recent Fedora releases. But keep in mind that the use case is initscripts running alongside NetworkManager in OpenStack; I'm not sure that networkd running alongside NetworkManager/initscripts would be any better.

Shipping initscripts or support for initscripts is antithetical to CL and FCOS. We ship a minimal, up-to-date distro. Initscripts (alongside systemd) are neither. I think (hope?) it's fair to say we're not going to support initscripts. IMO if we end up shipping initscripts, we have failed.

Also I'm assuming if we go the nmstate + nm route then that will be the only way to configure the network on the host. We should absolutely not ship the ifcfg plugin. Is this a fair assumption?

I would appreciate operator use cases showing what kinds of smartness they use after the defaults are applied, where decomposition is useful.

On CL we have a base network config that OEM-specific configs get layered on (and we only have to override the differing bits). Users can then further override that. We don't know to what extent users do override that. Yes, you could do this in other ways, but I don't think any other ways are as clean as how we do it today with networkd. There is value in that cleanliness. What I really don't want is some system where there are a bunch of special cases or restrictions (e.g. only 1 template file allowed, restrictions on what fields can be overridden, etc.).

You need to consider a possible near future where Ignition only cares about a single NIC or bond needed for K8s networking and the rest is done after the cluster is up via smart cluster policies, providing day-2 management for the non-base use cases.

I'm going to strongly disagree on this one. FCOS is not just for k8s/clusters. We specifically call out single-node use as a primary goal. I'm not going to cripple FCOS and/or Ignition to not be flexible enough to configure multiple NICs. I will fight tooth and nail for this. Ignition should be able to lay down the config for any networking you want the host to handle.

You should consider that maintaining something outside the Fedora ecosystem will have maintenance costs that might be greater than improving NM, and if NM is not used to solve these use cases, then networkd would need to be able to do so.

It's not so much maintenance cost as likelihood of bugs. We carry two networkd patches on top of upstream networkd (see coreos/systemd#103), both of which are trivial.

One of the selling points of CL is that it is minimal and (to the best of our ability) clean. We try not to ship things that pull in a lot of dependencies or do 1000 things we don't care about and 1 we do. I don't see this minimalism being valued in this discussion and want to make it clear that's one of the things I think made CL successful. I don't dislike NetworkManager and nmstate; I just don't think they're a good fit for FCOS.

I also think networkd is easier to grok than NM: the config is very regular (and well documented), it's minimal, and it has only one way of being configured. It's also just a smaller project that does less. I want to reiterate that not shipping things we don't use is a feature. We're already replacing many simple components of CL with more complex and featureful components from FAH (e.g. dual partitions with ostree, torcx with package layers, etc.). There's going to be a lot of new things for migrating users to learn, and unless there are clear benefits for the change it's just more pain from the user perspective. Users don't care so much about the maintenance cost. This is something we spent a while considering when talking about whether to use ostree or dual partitions as well.

Another concern is the number of components (that all need to talk to each other). We'd need NetworkManager, nmstate and whatever nmstate template renderer tool gets created. We'd have to render templates when interfaces appear. This means we'd also need weird udev rules or some other component to track them. This feels like reimplementing networkd out of a bunch of other components. It also seems like a great place to encounter race conditions. I don't particularly like generating configuration during the boot process.

@dcbw

dcbw commented Aug 9, 2018

We try not to ship things that pull in a lot of dependencies or do 1000 things we don't care about and 1 we do. I don't see this minimalism being valued in this discussion and want to make it clear that's one of the things I think made CL successful.

@ajeddeloh Note that NM also values size and minimal dependencies. Non-core device types (wifi, wwan, team, bluetooth, adsl, ppp, wimax, ifcfg-rh, ifnet, keyfile, etc.) and anything that has external deps are optional plugins. NetworkManager has consistently worked to reduce dependencies, spin optional things into non-core plugins, and streamline the binaries and libraries. That said, NM does link to more things than networkd does, and some of those could perhaps be moved to optional plugins. @thom311 would have more info on that.

@keszybz

keszybz commented Aug 10, 2018

It cannot handle config file changes without restarting the service

systemd-networkd can be restarted. This works with some caveats. Essentially, it'll apply the new configuration to any interfaces it finds. This means that just updating the config for some interface and restarting does the right thing. https://www.freedesktop.org/software/systemd/man/systemd-networkd.html#Description has some more details.

networkd stability promise for config files

All networkd configuration files can be considered stable.

Unfortunately networkd files are not mentioned at all in https://www.freedesktop.org/wiki/Software/systemd/InterfacePortabilityAndStabilityChart/. I filed systemd/systemd#9850 to track this and add them.

@thom311

thom311 commented Aug 10, 2018

Regarding the configuration: networkd's config is pretty great for our use case.

@ajeddeloh, there is little fundamental difference between NetworkManager's profiles and what networkd does. The only differences, as I see them, are:

  • NetworkManager profiles combine what networkd does separately with .network, .link, and .netdev files
  • networkd's files are composable/overlay-able from multiple file snippets.
  • while networkd's configuration is very file-oriented, NetworkManager's profiles are file-oriented too, but also accessible via D-Bus and nmcli.
  • currently, a NetworkManager profile can only be active on one device at a time. But that restriction is no longer present in current git-master.

Supporting the networkd style configuration is somewhat antithetical to nmstate's goals

Agree, nmstate follows a different approach, which may or may not be suitable. But NetworkManager's profiles and networkd's configuration are fundamentally similar.


Related question: does nm have any stability promises for configs?

Yes

[NetworkManager's] primary use case is interactive configuration

You don't need a user to log in and access it via D-Bus. Just prepare (pre-deploy, generate) the profiles, and NetworkManager will apply them automatically. Although you say "networkd does nothing automatically; everything is very explicit", I don't see a large difference there. Both services start, and automatically configure networking according to the configured profiles (or .link files).


They can, but I'd want them to be decomposable and "overlay"-able.

Currently NetworkManager profiles are not composable/overlayable like they are with networkd, but there is no huge objection from the NM team to adding that capability. The keyfiles (.ini-file format which NM uses by default) can easily be composed from snippets and composability could be added to NM if this functionality is a deal-breaker.

However, NetworkManager can be configured via files and D-Bus API. It is hard (impossible?) to come up with a usable, simple, and powerful D-Bus API, that lets you modify profiles which are assembled from multiple locations. Currently, the D-Bus API is just an "update-entire-profile" call and NetworkManager replaces the entire file on disk. If NetworkManager gets composable keyfiles, these profiles probably cannot be sensibly modified via D-Bus.

So, read-only overlays are easily possible, but they take away another important feature. Since networkd doesn't have a D-Bus API for writing profiles anyway, it doesn't have this "limitation".

Maybe it would be better to approach this from the angle of which problem composable profiles solve, instead of what your current solution to that problem is. Yes, the general problem is clear. But how much do you use this? Do you use it for some properties in particular? Is this more relevant for some properties than for others? If, for example, you just use it to adjust the MTU, you can already configure a default value for the ethernet.mtu property in /usr/lib/NetworkManager/conf.d snippets.
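For example, a connection-defaults snippet along these lines (file name and value are illustrative) would apply to any profile that leaves the property unset:

# /usr/lib/NetworkManager/conf.d/10-default-mtu.conf -- illustrative drop-in
[connection]
ethernet.mtu=1460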


Regarding nmstate: I see nmstate as a higher-layer API on top of NetworkManager that aims to make some things easier to do. But it cannot escape limitations that NetworkManager itself has.

For example, nmstate can generate profiles for NetworkManager (templated?), not unlike a generator in systemd. And of course, you can generate profiles via any other means aside from nmstate. If that is useful to solve the same problem as overlay configuration files, then fine. On the other hand, if overlay-able profiles are deemed the best solution, NetworkManager can add support for them (despite the shortcomings).

We also value minimal solutions and simplicity (yes, really!). I personally think that CL would need nothing except some pre-deployed profiles and configuration snippets, and the rest should be handled by NetworkManager. If nmstate turns out to be beneficial in this picture, that's very fine with me. But that remains to be seen.


"NetworkManager dependency chain is not really too big when installed without optional plugins (such as Wi-Fi or PPP that may be unneccary in CL).
We'll be dropping libpolkit-gobject-1 and libpolkit-agent-1 soon; libcurl is used for conectivity checking, and could probably moved to a separate plugin/package. libjanson is used for teamd configuration (which can be disabled at compile time, but dunno if that's worth it). If there's any other dependency that you wish not to include in CL then please let us know -- we may figure out a way to get rid of it."


As initrd was mentioned above. IMO, NetworkManager in initrd is a must have feature, regardless of CL. Lubomir is working on that right now.


One thing that could still be improved with NetworkManager is its memory footprint. I don't see low-hanging fruit there either. But it's a valid criticism, and we should focus more on improving it.


I also worry that if we do find a way to generate a list of requirements, we'll build something to meet just those requirements and not solve more general cases.

NetworkManager will not add features for the sole purpose of checking an item off a requirements list. It will add them if they make sense in the larger picture and are beneficial to use cases we want to support. And we want NetworkManager to support a wide range of use cases, in particular server and CL use cases; we have been doing so for years already.


Shipping initscripts or support for initscripts is antithetical to CL and FCOS.

Agree! The preferred configuration format is keyfile; see the nm-settings-keyfile and nm-settings manuals.

Granted, file-based configuration is where systemd and networkd excel. NetworkManager's keyfile format and its documentation should improve further. However, it is a first-class citizen for NetworkManager as well; it's just not the only one.

@yanivlavi

This sounds like leaving kubevirt to manage its own things would be fine, right?

KubeVirt VMs are only consumers of the pods' resources, which in turn requires the node to be able to provide those resources to the pod and keep the expected SLA.

I also worry that if we do find a way to generate a list of requirements, we'll build something to meet just those requirements and not solve more general cases.

Yes, but on the other side of this we do have customer requirements that NM + nmstate were created to solve, and a roadmap to enable a lot of advanced application capabilities using these tools.

Also I'm assuming if we go the nmstate + nm route then that will be the only way to configure the network on the host. We should absolutely not ship the ifcfg plugin. Is this a fair assumption?

Yes. I was only trying to highlight the comparison of usage patterns/expectations.

Ignition should be able to lay down the config for any networking you want the host to handle.

So this would include SR-IOV, DPDK, OVS bridges, Contrail vRouter, Infiniband, VPP and so on?

@lucab
Contributor

lucab commented Aug 13, 2018

As a general note, fedora-coreos explicitly does not try to tackle all possible workloads and environments. There will always be custom kernel modules, complex network controllers and pet nodes we are not catering to.

I appreciated @thom311's details. I don't have many real technical points against networkd, and I don't have much experience with NM as a distro maintainer. I'm interested in tracking whether NM and nmstate's evolution will end up in a declarative yet non-monolithic model similar to networkd's, which I think would be a sweet intersection point for our specific use case.

From a design point of view, I have some fears (but I lack specific knowledge), as neither seems to be designed with any internal-serialized-state/external-user-configuration separation, and in general they seem unaware of the vendor/user/runtime split and layering (i.e. /usr + /etc + /run). Both are ingredients that allowed us to evolve and update the distribution without forcing frequent manual interventions/re-provisionings.

@ajeddeloh
Contributor Author

Quick note of context: This is a discussion for FCOS, not CL. We cannot switch CL without massive breakage.

One thing that could still be improved with NetworkManager is its memory footprint. I don't see low-hanging fruit there either. But it's a valid criticism, and we should focus more on improving it.

How big is it? CL hasn't historically been too concerned about memory footprint (torcx even unpacks docker et al. into a tmpfs), so I don't think it'll be too much of an issue unless it's leaky.

As initrd was mentioned above. IMO, NetworkManager in initrd is a must have feature, regardless of CL. Lubomir is working on that right now.

In a way, the initramfs is the case we care about least since it's not user-configurable. As long as the network in the initramfs works well enough that Ignition can run, it's invisible to the user. In fact, NetworkManager in the initramfs could actually help solve the DigitalOcean use case (they don't use DHCP; they reimplemented it over HTTP).

However, NetworkManager can be configured via files and D-Bus API. It is hard (impossible?) to come up with a usable, simple, and powerful D-Bus API, that lets you modify profiles which are assembled from multiple locations. Currently, the D-Bus API is just an "update-entire-profile" call and NetworkManager replaces the entire file on disk. If NetworkManager gets composable keyfiles, these profiles probably cannot be sensibly modified via D-Bus.

I'm not sure I follow 100%. When you make a change over DBus does it (currently) change the config files to match?

So this would include SR-IOV, DPDK, OVS bridges, Contrail vRouter, Infiniband, VPP and so on?

As @lucab said, we don't plan to support absolutely everything.

currently, a NetworkManager profile can only be active on one device at a time. But that restriction is no longer present in current git-master.

What does the NetworkManager release cycle look like? I.e. how soon can we expect to see the changes in a stable release?

Both services start, and automatically configure networking according to the configured profiles (or .link files)

I thought I saw something about ethernet getting auto-connected? Could have been from some plugin's docs though; ignore me if that's the case. Basically, if you run NM with no config at all, does it have any implicit configuration it uses? If it does have an implicit config, can you force it not to?

@tyll

tyll commented Aug 14, 2018

I'm not sure I follow 100%. When you make a change over DBus does it (currently) change the config files to match?

yes

I thought I saw something about ethernet getting auto-connected? Could have been from some plugin's docs though; ignore me if that's the case. Basically, if you run NM with no config at all, does it have any implicit configuration it uses? If it does have an implicit config, can you force it not to?

There is an RPM package called NetworkManager-config-server that installs a simple NetworkManager config snippet to disable automatic configuration of ethernet devices.
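The snippet boils down to roughly this (approximate contents, not copied verbatim from the package):

# Shipped under /usr/lib/NetworkManager/conf.d/ by NetworkManager-config-server (approximate)
[main]
# Don't auto-create "Wired connection" profiles for otherwise unconfigured ethernet devices.
no-auto-default=*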

@yanivlavi

yanivlavi commented Aug 14, 2018

As @lucab said, we don't plan to support absolutely everything.

But as said, we will need these capabilities for advanced applications, like those running in VMs.
Having NM + nmstate on the host will go a long way toward allowing us to create cluster-level node network config management in a simpler way, which will open the door to doing many things after the node joins the cluster.

The assumption of aiming for the simple case makes sense, and we also want to support the FCOS assumptions (like having that config as a policy that applies to nodes in a non-specific way). FCOS is a platform, and it needs to enable the wider user stories that we are trying to solve in a layered way for these aspects.

@fabiand

fabiand commented Aug 16, 2018

For KubeVirt it should not make much of a difference how the network is set up - if it's NM, nmstate, or networkd. In the end there will be a set of interfaces with different roles (classically categorized as control plane(s) and data plane(s)).
IMO Kubernetes, OpenShift, and thus KubeVirt will have a way to consume all/some of these interfaces to assign to or share them with workloads. How the interfaces will be consumed and allocated/accounted for depends on how Kubernetes, OpenShift, and other Kube distros decide to do the consumption (CNI, DP, what else?).
But all of this is probably not so important for FCOS.

Now, from my perspective it's a little tricky ATM. I understand why networkd is preferred over NM for this use case (and technically I like this). But considering the pain that poorly maintained software can cause, I also understand why people favor NM.

There are gaps on both sides right now, but it's not only about now, as we know that there will be requirements coming in the future once FCOS sees more adoption. Thus IMHO it should be taken into account whether NM or networkd can deal with an incoming stream of requirements and bugs.

Just my 2ct.

@lucab
Contributor

lucab commented Jan 21, 2019

This is a very late (sorry!) follow-up to a meeting we had with the NM folks to see whether we could be on a converging path.
Overall we are indeed pushing in the same direction; however, there are a lot of bits currently in flight for F30 and a few more to possibly figure out later on.

NM recently landed an initrd generator: https://developer.gnome.org/NetworkManager/stable/nm-initrd-generator.html. This is the first step toward allowing us to replace the dracut-based networking in the initramfs with a monolithic solution that is coherent and shared with how networking in the real rootfs works. Upstream is targeting this at F30 (@lkundrak may have more of a status update on this).

Upstream NM confirmed that it should already be possible to support runtime-only (i.e. not persisted to the FS) configurations. I didn't dig more specifically into the details of this, but investigation may be needed in order to add support for this in coreos-metadata. This is effectively #111.

Regarding carrying over DHCP leases from the initramfs: nowadays NM supports multiple DHCP backends (I think the current Fedora default is based on the networkd library). As long as NM with a consistent backend is used in both the initramfs and the rootfs, a lease can be carried over (or dropped, by deleting the runtime lease file) without issues.

Regarding the filesystem split, there are hook scripts shipped by packages under /etc (e.g. /etc/NetworkManager/dispatcher.d/*) which should be moved to /usr, leaving the user the ability to override them via same-name entries in /etc and /run. There seems to be consensus on this, and the NM folks are looking into adding logic for layered lookups here.

Regarding internal NM support for merging configuration snippets (i.e. like networkd config), @thom311 confirmed this is likely not happening in the near future. In particular, it is very hard due to fundamental NM design constraints and the writable D-Bus API. This limits and reshapes a bit the way we ship distro-wide defaults, but it looks like it can't easily be changed in the foreseeable future.

From the CoreOS side, we decided to make the upcoming Ignition schema more flexible and not bound to networkd. As such, we are dropping the networkd stanza and instead using plain file entries for network configuration too: coreos/ignition#638

Finally, Anaconda also likes to fiddle with legacy "ifcfg-rh" network configuration, which may interfere with both distro defaults and user configs. As we are moving away from Anaconda image building, this shouldn't be a concern going forward.

For future wish items, we briefly touched on how profiles declare their own matching rules. Currently this is mostly based on interface names, but upstream is keen on adding new matching parameters. That should allow us to handle cloud-platform-specific bits more comfortably.

@lucab
Contributor

lucab commented Feb 8, 2019

@thom311 do you have tickets to reference here for the few RFE items above on NM side? I think you mentioned in Brno that you had action items on your plate, but I couldn't find any on gitlab now.

@bgilbert
Contributor

nm-initrd-generator is currently not shipped in the F30 NM packages.

@mrguitar

mrguitar commented Apr 2, 2019

It looks like they rolled that back because of Anaconda; I don't think that impacts FCOS. Can we include the generator in FCOS anyway?

@bgilbert bgilbert self-assigned this May 30, 2019
@bgilbert bgilbert added this to Proposed in Fedora CoreOS stable via automation Jul 16, 2019
@bgilbert bgilbert moved this from Proposed to Selected in Fedora CoreOS stable Jul 16, 2019
@bgilbert bgilbert removed this from Selected in Fedora CoreOS preview Jul 16, 2019
@thom311

thom311 commented Jul 24, 2019

@thom311 do you have tickets to reference here for the few RFE items above on NM side? I think you mentioned in Brno that you had action items on your plate, but I couldn't find any on gitlab now.

@lucab sorry for the late reply. To my understanding, we identified 3 main issues.

  • /etc/NetworkManager/dispatcher.d directory requires a corresponding read-only directory in /usr/lib (fixed upstream 7574b722a6bb).

  • running NetworkManager in initrd. This is also merged upstream, but it requires distributions to enable it (and it also requires patches for dracut, which are however upstream). I think this is working in Fedora 31/rawhide.

  • a read-only directory for connection profiles. That was merged upstream recently.

All of these should be ready in the upcoming 1.20.0 release. NetworkManager 1.20.0 is not yet released, but that will happen soon, and it will be in Rawhide quickly.

@SerialVelocity

SerialVelocity commented Sep 19, 2019

From reading this issue it seems like NetworkManager has been decided on, but from a user's perspective, trying to get NetworkManager working (and still failing) has been extremely painful and time-consuming.

I have a small request: please provide good docs on getting connection profiles set up, as currently the information on the internet seems pretty sparse.

For a bit of awareness of the problem I faced (I don't expect it to be solved here though):

  • I created a simple /etc/NetworkManager/system-connections/enp0s20.nmconnection using Ignition with the contents:
[connection]
id=enp0s20
interface-name=enp0s20
autoconnect=true
type=ethernet
uuid=<randomly-generated-uuid>
match-device=interface-name:enp0s20

[ethernet]
mac-address=<mac-address>

[ipv4]
method=auto

[ipv6]
method=auto
dhcp-duid=<my-duid>
  • I tried booting my machine, and it failed to set up the DNS servers
  • Enabling debug logging shows:
<debug> [1568935323.0824] Connection 'enp0s20' differs from candidate 'enp0s20' in ipv4.addresses, ipv4.gateway, ipv4.method, ipv6.addresses, ipv6.gateway, ipv6.method
  • It looks like if you use the network during Ignition, you cannot set up system connections.
  • In case anyone is looking for a workaround (from Stack Exchange):
    • Create /etc/systemd/system/NetworkManager.service.d/override.conf with the contents:
[Service]
ExecStartPre=/sbin/ip addr flush enp0s20

Beware, the above will cause connectivity issues if NetworkManager ever crashes or you restart it. It's probably better to put it in a separate unit.
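A sketch of that separate-unit variant (unit name is made up and this is untested):

# /etc/systemd/system/flush-initrd-net.service -- hypothetical name
[Unit]
Description=Flush addresses configured in the initramfs (workaround)
# network-pre.target is the ordering point for units that must run before
# any network management service starts.
Wants=network-pre.target
Before=network-pre.target NetworkManager.service

[Service]
Type=oneshot
ExecStart=/sbin/ip addr flush dev enp0s20

[Install]
WantedBy=multi-user.target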

@thom311

thom311 commented Sep 21, 2019

@SerialVelocity what are you running in the initrd to set up networking?

When NetworkManager starts and finds an interface pre-configured, it assumes that somebody else configured the device and won't auto-activate a profile on it, because that would be destructive. That is also the case when you set up networking in the initrd before NM starts. One possible solution here would be to define an API for how NetworkManager takes over externally configured devices. Optimally, this API would not be NM-specific, so you could run an arbitrary tool in the initrd (that honors the API) and pass the configured devices to any other networking configuration tool in the real boot. In practice, defining and implementing such an API complicates everything tremendously, and in practice we lack the contributors and testers to implement it. So the suggested solution is instead to also run NetworkManager in the initrd. NetworkManager knows how to configure the device and hand it over to itself. Thereby we concentrate our efforts on having one combination work well.

In this GitHub issue, that point is also discussed as a roadblock: running NetworkManager in the initrd. And from NetworkManager's side this should be working, but it requires configuring the initrd accordingly (upstream dracut also has support for that, and Fedora 31 also runs NetworkManager in the initrd).

Yes, the /sbin/ip addr flush enp0s20 works around the issue, because it clears the configuration and NetworkManager sees the device unconfigured and starts autoconnecting the profile. But that is not a proper solution!

@SerialVelocity

@thom311 Thanks for the explanation! Another possible solution is a flag that forces the connection to match a certain interface no matter what, or to just always match if match-device matches? That would be a lot less work than an API that would have to be supported in other tools, and more user-friendly.

But OK, NetworkManager in the initrd is meant to fix this case; good to know! Thanks again for the info.

Yes, the /sbin/ip addr flush enp0s20 works around the issue, because it clears the configuration and NetworkManager sees the device unconfigured and starts autoconnecting the profile. But that is not a proper solution!

Yes, it made me very sad having to do this. 😭

@jlebon jlebon added the jira for syncing to jira label Oct 16, 2019
@jlebon
Member

jlebon commented Oct 21, 2019

In this GitHub issue, that point is also discussed as a roadblock: running NetworkManager in the initrd. And from NetworkManager's side this should be working, but it requires configuring the initrd accordingly (upstream dracut also has support for that, and Fedora 31 also runs NetworkManager in the initrd).

OK, right. Seems like this is the default now in f31?

Just going to cross-link this here: coreos/fedora-coreos-config#200 (comment). Note this is only to get the rebase to f31 going. With that in place, we should be able to switch to NM in the initrd more easily when we're ready. (Though offhand as mentioned in that link, I wasn't able to get it working in testing, but I didn't dig very deeply.)

@keszybz

keszybz commented Dec 22, 2019

An update on two points about networkd that were raised earlier in the discussion:

It cannot handle config file changes without restarting the service

In systemd 244, networkd supports reloading its configuration (through a D-Bus command or via networkctl reload), reconfiguring specific devices (again through D-Bus or networkctl reconfigure ...), and renewing leases on demand (D-Bus or networkctl again).

In systemd 243, a little tool called systemd-network-generator was added to convert kernel command-line arguments that dracut understands into .network/.netdev/.link files.

This generator approach shows that it is fairly easy to write "importers" for external config.
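For example, a dracut-style argument such as ip=10.0.0.2::10.0.0.1:255.255.255.0::eth0:off on the kernel command line (values made up) would be turned into roughly this kind of .network file under /run/systemd/network/ (a sketch, not the generator's exact output):

# Roughly what gets generated for the ip= argument above (sketch)
[Match]
Name=eth0

[Network]
Address=10.0.0.2/24
Gateway=10.0.0.1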

@dustymabe
Member

This discussion has been open for a long time. In practice we have shipped Fedora CoreOS out of preview with NetworkManager as the de facto networking configuration implementation on FCOS. This is even more true with the recent move to use NetworkManager in the initramfs, which brings us closer to the default networking implementation in the initramfs for the rest of Fedora (as of Fedora 31).

I'm going to close this out now. Thanks for the discussion all!

jdoss added a commit to jdoss/fedora-coreos-config that referenced this issue Aug 21, 2020
systemd-networkd was removed because NetworkManager was chosen as the de-facto
networking configuration implementation as discussed in
coreos/fedora-coreos-tracker#24 but removing it entirely
restricts end user choices to fit their use cases.

Since systemd-networkd can live alongside NetworkManager, this PR adds it back in.