
Unified configuration tree with YANG models #696

Open
lukego opened this issue Jan 4, 2016 · 33 comments

Comments

@lukego
Member

lukego commented Jan 4, 2016

Snabb Switch should unify its configuration parameters and monitoring counters into a tree structure that can be modelled in YANG RFC 6020.

This would cover the counters and parameters for the engine, the app network layout, the individual apps, and application-level abstractions like netmod models that are being directly implemented by Snabb applications.

This will be key to making Snabb applications easy to configure and manage, especially in large networks that require automation and standardized interfaces.

There are some aspects to work out:

  1. Create a configuration tree API.
  2. Move all existing configuration/statistics objects into the configuration tree.
  3. Create YANG models as formal reference documentation.

So, does anybody have a bright idea about how to go about this? :-)

Quick braindump: I can imagine that our core.config model could be updated to be hierarchical and to become comprehensive (e.g. include engine parameters). I think we could either write YANG models by hand (and parse them for validating configurations) or we could generate them from Lua code (ensuring it is isomorphic). We have to think about how to make this work well in a multiprocess/multiserver setup.

In principle we could push the YANG models away from the apps and treat them as external "northbound" abstractions, but to me it seems like we would benefit from introducing these abstractions pervasively as an upgrade from only informal documentation in READMEs.

cc @plajjan @sleinen @alexandergall @andywingo @mwiget @everybody :)

@plajjan
Contributor

plajjan commented Jan 4, 2016

Generating a YANG model from Lua is certainly one way, but I think the more common approach is to write the YANG model by hand and then generate whatever code is required from it. Given Lua (dynamically typed et al.), though, I'm not sure there would be a lot of code to generate.

YANG is typically more expressive (uh well, in one way at least, it is a data modelling language after all) than most programming languages. For example, if you have a variable that is going to store a percentage value then you would probably define that as:

  leaf my-percentage-value {
    type uint8 {
      range "0..100";
    }
  }

You could make a percent datatype as well to shorten that further.

Since Lua is dynamically typed I guess the equivalent would be something along the lines of:

assert(type(my_percent) == "number", "my-percent must be a number")
assert(my_percent >= 0, "my-percent must be between 0 and 100")
assert(my_percent <= 100, "my-percent must be between 0 and 100")

Once you have a somewhat more complex config, like dependencies between various leafs (I'll adopt YANG nomenclature), doing it in code usually becomes much, much more verbose (and error-prone) than having a clean definition in a YANG model. I don't see a good way to generate such dependencies from code, as you would essentially be reinventing a small data modelling language, and that is what we have YANG for.

It might seem like extra work at first to write a YANG model and then write your code but I think it actually reduces the amount of code you need to write as you can take a lot for granted. You can rely on values already being of a given type and within certain ranges. You don't have to deal with default values since YANG can do that for you and so forth.

The interface should preferably validate in both directions, so if you try to set a value from your code for a leaf that isn't defined, this should generate an error. Preferably this is done by static code analysis, but it could be done at runtime as well.
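
For concreteness, a minimal Lua sketch of that kind of schema-driven leaf check (the table layout and helper names here are invented for illustration, not an existing Snabb API):

local schema = {
   ["my-percentage-value"] = { type = "uint8", range = { 0, 100 } }
}

local function validate_leaf (name, value)
   local spec = assert(schema[name], "unknown leaf: "..name)
   if spec.type:match("^uint") then
      assert(type(value) == "number" and value == math.floor(value),
             name.." must be an integer")
   end
   if spec.range then
      assert(value >= spec.range[1] and value <= spec.range[2],
             name.." must be between "..spec.range[1].." and "..spec.range[2])
   end
   return value
end

validate_leaf("my-percentage-value", 42)     -- ok
-- validate_leaf("my-percentage-value", 142) -- raises an error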

@plajjan
Contributor

plajjan commented Jan 4, 2016

sysrepo is a project to alleviate some of the work when dealing with YANG from code. See http://sysrepo.org/blog/netconf_yang_blog.html. No Lua bindings yet ;)

It uses libyang (which was a byproduct of writing sysrepo), and someone could write Lua bindings for it. Also look at netopeer, which is the NETCONF server that will likely be used by sysrepo in the future. Sysrepo currently uses freenetconfd, which was chosen at a time when things looked different; netopeer has since progressed to a point where it is likely the better candidate.

@wingo
Contributor

wingo commented Jan 11, 2016

I think you will definitely want APIs that can both build a model in Lua and take Yang language snippets. The snippet compiler would use the Lua API.

I would think that using an external library would be more trouble than it's worth, but that's just an off the cuff reaction :)

@wingo
Contributor

wingo commented Jan 14, 2016

It seems that having external Yang modules written in Yang syntax with inline documentation would make operators the happiest in terms of saying "this is what this network appliance does" and "this is how I configure this device" and "this is what information I can get out of it". Of course it helps Snabb people answer these questions too :)

Yang is fine as far as describing the data model goes, but from what I can tell the standard way to do things is to use XML for describing instances of Yang data, which is not very Snabby. I would be happy if we could avoid XML in Snabb. On the other hand there is a JSON representation these days, https://tools.ietf.org/html/draft-ietf-netmod-yang-json-06 / https://github.com/mbj4668/pyang/wiki/XmlJson; perhaps that would be not so terrible. I don't know. Most JSON tools can't do 64-bit numbers though, so perhaps that's not so useful, though given that a JSON document would have a Yang schema, representing numbers as strings is a valid solution.
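
To make that last point concrete, a rough LuaJIT sketch of round-tripping a uint64 counter through JSON as a decimal string (the function names are illustrative; the YANG schema is what tells the reader to treat the string as a 64-bit integer):

-- Encode: LuaJIT prints uint64 cdata as e.g. "12345ULL"; strip the suffix.
local function uint64_to_json_string (v)
   return (tostring(v):gsub("ULL$", ""))
end

-- Decode digit by digit in uint64 arithmetic so values above 2^53 stay exact.
local function json_string_to_uint64 (s)
   local v = 0ULL
   for digit in s:gmatch("%d") do
      v = v * 10 + tonumber(digit)
   end
   return v
end

local counter = 9007199254740993ULL  -- not exactly representable as a double
print(uint64_to_json_string(counter))                        --> 9007199254740993
print(json_string_to_uint64("9007199254740993") == counter)  --> true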

I must admit I much prefer a language that looks like Yang over some generic JSON thing, much less XML. The Yang language itself is semantically rich but easy to parse, which is the exact opposite of XML. (I say this as someone who maintains a full XML parser.)

Someone will eventually want to hook up Snabb+Yang with Netconf, and there of course we'd need some pretty good XML capabilities. I think that's not a near-term thing though, so I leave it a bit unspecified. A first crack at Snabb and Yang would not do XML.

I think you'd want a Lua interface to look something like this:

local yang = require('yang')
local schema = yang.load_module('acme-system')
-- Throw an error if the config is invalid.
local program_config = schema:load_config[[
  foo bar;
  baz {
    qux 42;
  }
]]
-- Use the value of the foo leaf
print(program_config.nodes.foo.value)
print(program_config.nodes.baz.qux.value)
program_config.nodes.baz.qux.value = 10

This is assuming some kind of Yang-like syntax for configuration instances. Whatever. I don't know. I'm pretty sure you will want the config to be available as a normal Lua data structure, nicely nested, but I don't know what the API should look like -- you want to be able to nicely reference child nodes and values but also query things like the node's schema. Not sure. I think I would use some kind of "active record" pattern where writing to a value will validate that value as well. Of course there are validation steps that need the whole tree, so we'd need an additional step there.
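
A tiny Lua sketch of that "active record" idea, just to make it concrete (names and spec format invented; validation hooked in via a metatable):

-- Hypothetical leaf wrapper: assigning to .value validates against the
-- leaf's type spec before storing it.
local function make_leaf (spec, initial)
   local data = { value = initial }
   return setmetatable({}, {
      __index = data,
      __newindex = function (_, key, value)
         assert(key == "value", "only 'value' is settable on a leaf")
         assert(spec.check(value), "invalid value for leaf '"..spec.name.."'")
         data.value = value
      end
   })
end

local qux = make_leaf({ name = "qux",
                        check = function (v) return type(v) == "number" end },
                      42)
print(qux.value) --> 42
qux.value = 10   -- validated on write
-- qux.value = "oops" would raise an error here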

The yang modules would be somehow compiled into the Snabb binary. They could be compiled in as source files, or some preprocessor could run on them so they validate at build-time. The output of the preprocessor would be Lua code that builds a schema using the Lua API of the Yang module.

That's fine enough as far as configuration goes, but you also want data. Here it gets a bit more irritating. The solution we have right now with counters is nice. I think ideally we'd find a way to write both configuration and state into the trees in the SHM file system and then use a Yang model to interpret those trees. That way you can still monitor a Snabb process from the outside. It's attractive from an eventual Netconf perspective too -- a candidate configuration can be an additional tree. We can actually store all configurations in the same way NixOS does, to enable rollback to any configuration state in the past.

There's more work to spec out here: we'd need a standard yang data model for many parts of Snabb (links, counters, etc, even app networks). I don't fully understand reconfiguration. And specific programs like the NFV or the lwAFTR will have their own Yang modules too. So, lots to do here. Yang is pretty big!

@plajjan
Contributor

plajjan commented Jan 14, 2016

Indeed, instance data is expressed as XML over a NETCONF transport. RESTCONF, which is on the standards track (22nd of Jan deadline for feedback), allows XML or JSON.

I don't think you need to deal with XML internally. You'd need a lib in between where you do config.container("foo").leaf("bar").set("asdf") or something to set the value. If that container or leaf doesn't exist you get an exception. If the "bar" leaf is defined as type uint8 and you try to set a string, you get an exception. In the other direction the lib exposes the instance data as XML and JSON, on top of which you then base your NETCONF / RESTCONF interface. This is essentially what libyang tries to do, so the easiest way forward is probably to use it and write Lua bindings for it, perhaps with some more abstraction to get an elegant interface :)
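
A rough Lua sketch of that chained style, purely for illustration (this is not libyang's actual API): unknown containers/leafs and type violations raise errors immediately.

local function wrap (schema, data)
   local node = {}
   function node.container (name)
      local child = schema.containers and schema.containers[name]
      assert(child, "no such container: "..name)
      data[name] = data[name] or {}
      return wrap(child, data[name])
   end
   function node.leaf (name)
      local spec = schema.leafs and schema.leafs[name]
      assert(spec, "no such leaf: "..name)
      return {
         set = function (value)
            assert(spec.check(value), "bad value for leaf: "..name)
            data[name] = value
         end,
         get = function () return data[name] end
      }
   end
   return node
end

-- "bar" is modelled as a uint8 under container "foo".
local schema = { containers = { foo = { leafs = { bar = {
   check = function (v)
      return type(v) == "number" and v >= 0 and v <= 255 and v == math.floor(v)
   end
} } } } }
local config = wrap(schema, {})
config.container("foo").leaf("bar").set(42)        -- ok
-- config.container("foo").leaf("bar").set("asdf") -- error: not a uint8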

As for NETCONF vs RESTCONF I think that NETCONF is better suited for a device while RESTCONF is probably something I'd put on the northbound interface of my OSS system. My experience with RESTCONF is still very limited (obviously not being a standard helps :P) but it seems limited which is why I'd prefer NETCONF for a device.

I think you can use existing libs / daemons for the actual NETCONF server, like https://github.com/choppsv1/netconf, freenetconfd or netopeer. Please do have a peek at sysrepo too :)

@jgroom33

Swagger is great for documenting and describing APIs. Could the server or client code-gen portion be extended to generate the necessary Lua code for server-side scaffolding? Perhaps that is not even necessary.

@lukego
Member Author

lukego commented Jan 15, 2016

@andywingo Great thoughts! Nodding vigorously while reading.

@lukego
Member Author

lukego commented Jan 15, 2016

@plajjan So, fishing for more practical perspective here... :)

Can you tell me from your perspective how much you care about each of these things from a device you are deploying:

  • YANG/NETCONF interface for provisioning a cluster of devices as a single unit.
  • YANG/NETCONF interface for provisioning one device.
  • YANG/NETCONF interface exposing the internal workings of a device (e.g. Snabb app network).
  • YANG/NETCONF interface exposing "generic" information that is not application-specific e.g. hard disk utilization, management network (Linux kernel) details like interface names / addresses / routing tables, etc.

@lukego
Member Author

lukego commented Jan 15, 2016

@plajjan @andywingo The "build vs reuse" trade-off is an interesting one that will indeed require some digging into what is available. Generally finding a suitably Snabbish solution requires some care. It would be lovely to avoid linking third party code into our process (e.g. C library) and to avoid dealing with unnecessarily complicated tech like XML.

Could be neat if the interface between the Snabb process and the management system is really simple. For example if the management system (NETCONF daemon, or NixOS module, or ...) would output configurations in a simple format (e.g. JSON file corresponding to YANG model) and import operational data in some simple way too (e.g. JSON file or direct access to shm files).

On the other hand we would really want to avoid linking third party code into our process to accomplish this e.g. a C library providing a DOM-like API. That's not Snabb style.

One more topic (alluded to in previous comment) is that there may be some standard YANG objects that are not part of our applications but that operators want to be able to manage for the device. For example to change the IP address on the eth0 management port or to monitor whether the /log filesystem is running out of space. Maybe people will want to make these changes atomically together with application-specific changes like snabbnfv parameters. Question then is whether such aspects can compose in a nice way e.g. via a modular NETCONF daemon that has all the base functionality but that we can also plug our extensions into.

Generally it could be nice to think of NETCONF and NixOS as two alternative ways to accomplish the same thing i.e. define new configurations, force rollbacks, etc. NixOS will accomplish such things by taking an abstract model (module definition) and generating application configuration files and systemd process definitions. This seems like a simple and down-to-earth model for other management tool integrations to follow (compared with e.g. linking a C library that provides us with a DOM-like API to access our configuration or some other such abomination :-)).

@lukego
Member Author

lukego commented Jan 15, 2016

One of the few times I have somewhat regretted building instead of reusing was adding support for SNMP MIB-II (RFC 1213) to a device. MIB-II defines a bunch of objects to basically explain the Linux kernel setup for the machine that the application is running on.

On the one hand I can see the value of having a baseline of management objects available on all devices and why operators would flag "must implement MIB-II objects" as a basic requirement for all devices.

On the other hand this is boring from the application developer perspective because it is not talking about the problem that they care about.

I am guessing that @alexandergall is tackling this problem by reusing all of these objects from NetSNMP and then plugging in the application-specific objects that he cares about (with his developer hat on) as an addition. This would seem like NetSNMP providing value, i.e. taking care of the bits that are not application-specific. (The same way Linux provides value to Snabb because we don't care about e.g. how filesystems are implemented, and the BIOS provides value to Linux because it doesn't care how the memory controller is initialized, etc.)

(The reason I didn't do it this way 10+ years ago when faced with the same problem was that we were building on the Erlang SNMP daemon, which didn't provide these objects out of the box, and I preferred to add them in rather than somehow integrate NetSNMP. It was not actually that much work to implement the objects anyway, but it felt a bit futile, e.g. is anybody ever going to actually query the ifStackTable to see how I have lovingly explained the mapping between physical interfaces and bonded management interfaces, etc.?)

@plajjan
Contributor

plajjan commented Jan 15, 2016

@lukego,

YANG/NETCONF interface for provisioning a cluster of devices as a single unit.

Not sure what you mean. I don't think we try to cluster devices to have them behave as one or at least NC/YANG doesn't really play a part in that. We do have a component in our "OSS" that maps between low level device configuration and high level "abstract service" config. It can potentially hide devices but again, having that component and having a NETCONF interface isn't directly related.

YANG/NETCONF interface for provisioning one device.

Crucial since it's the only interface supported by the management system we are building.

YANG/NETCONF interface exposing the internal workings of a device (e.g. Snabb app network).

Nice to have. Not crucial in a first release but I would probably expect to see something like this in a mature product. Debugging running stuff is nice++ :)

YANG/NETCONF interface exposing "generic" information that is not application-specific e.g. hard disk utilization, management network (Linux kernel) details like interface names / addresses / routing tables, etc.

Yes, I expect to get these figures. Sysrepo takes the approach of having a system NETCONF server so that multiple daemons, Snabb being one, running on the system are still configured through one NETCONF interface. This way, you have one part that is responsible for system stuff so that Snabb doesn't need to.

@lukego
Member Author

lukego commented Jan 15, 2016

Sorry, much braindump this morning, perhaps this topic can be spun out into a separate issue...

I am thinking there are basically three ways to manage network equipment (Snabb or otherwise):

  • "Classic telco" style. Configure manually (e.g. as you would a home router) and monitor with SNMP (e.g. Cacti) to track metrics of interest (packet drops, busy hour traffic volume, etc). The way smaller ISPs like to operate.
  • "Big telco modern" style. Configure automatically with NETCONF and YANG. Monitor with fancy tools attached to the YANG model. The way larger ISPs want to operate.
  • "devops" style. Provision network equipment the same way as servers. For example with Puppet/Chef/Ansible/NixOS. Relatively new idea (?) pioneered with considerable success by Cumulus Networks who make a "normal" Linux distribution for installing on hardware switches.

The right choice will depend on the user and Snabb applications will ideally accommodate all options in a nice way.

Make any sense?

@lukego
Member Author

lukego commented Jan 15, 2016

@plajjan How about SNMP? Do you care about it? Do you get it from these same tools that do NETCONF?

@plajjan
Contributor

plajjan commented Jan 15, 2016

I think you are wrong that smaller ISPs like to operate using SNMP / manual config. That's not true. They do it because there are FOSS tools for SNMP, and they configure manually because there is nothing standardised out there for configuring. Screen scraping is the next step up from the CLI. No one likes that; it's just a necessity when you have more than a handful of devices. NC/YANG tools are scarce, but there is great momentum in the industry and I'm sure we will see more stuff pop up.

As for SNMP (next comment), no, I don't really care for it. I expect to read those figures via NETCONF or, in the future, have them streamed to us. In all cases the data follows a YANG model, whatever the transport. Admittedly, the support from vendors is quite bad today (JUNOS has 0 config:false nodes in their YANG model) so SNMP plays a role today, but not tomorrow.

Also, the devops model. I know vendors like Juniper have implemented Puppet and stuff on the devices but AFAIK (I have only had a cursory look at it) it more or less wraps around the NC/YANG or XSD stuff. From my perspective this is more or less doing YANG but just with an alternative transport over some parts. No real benefit.

As I mentioned, there is great momentum on the whole NETCONF/YANG front. Over a hundred YANG model drafts in the IETF and so forth. I think the reason these devops things popped up on the devices is that NC/YANG wasn't mature enough. No one would have come up with this stuff if all network devices already had an NC/YANG interface. Chef/Puppet/Ansible/Salt would do better to implement NC/YANG on their end, so that when a device has NC you can automatically configure it via these tools instead of needing device-specific support in one of those config frameworks.

@plajjan
Contributor

plajjan commented Jan 15, 2016

If you just export a chunk of JSON you don't get the validation part until it's too late. Components:

+-------+               +------------------+                +-----+
| Snabb |  -- JSON -->  | NC / YANG daemon | -- NETCONF --> | NMS |
+-------+               +------------------+                +-----+

Snabb puts a string in the uint8 field and exports this JSON chunk. The NC/YANG daemon takes the data, validates it against the YANG model and finds the problematic string. What should it do? Discard the data? Signal back? How does it signal? Are we inventing a new mini-NETCONF interface here? Or do we not care because we never write bugs so we'll just never have this problem?

@plajjan
Contributor

plajjan commented Jan 15, 2016

Also, the devops model for network devices is for DC where the config is typically very very basic. I haven't seen anyone trying to use that model for tackling a more complex configuration. When 99% of the things you are managing are servers, you want the remaining parts to look the same.

@lukego
Member Author

lukego commented Jan 15, 2016

@plajjan Hm. I suppose generally there is a risk of a mismatch between Snabb / NC agent / NMS. For example because one component is lazy and doesn't validate or because the wrong YANG file has been loaded somewhere. If a mismatch is detected then it seems like the best we can do is log an error and skip that value.

Question is who is responsible for doing this: does Snabb do it internally? or does Snabb do it externally via the NC daemon with a DOM-like API that will raise an exception? Or does Snabb ship data and rely on the NC daemon to detect/skip/log on error?

I have a feeling that the most important aspect will be having test coverage. The CI should be generating the data and validating it against the YANG model. In production it is important that these problems are detected, and that they are rare, but less important which software component detects them.

Make any sense?

@lukego
Member Author

lukego commented Jan 15, 2016

@alexandergall Curious for your viewpoint: if a device offers you NETCONF and you have a decent open source client then how much do you care about SNMP?

@mwiget
Contributor

mwiget commented Jan 15, 2016

I'd love to see a standardised interface to Snabb for configuration and operational data. Today every Snabb app deals with its own configuration file or command arguments and exports operational/statistical data as a tree of SHM values to the file system.

I'm less convinced about having that interface be turned into something that communicates via NETCONF directly. Snabb first and foremost handles physical network interfaces and does so really well. Unless you build a "single interface Snabb compute node", you most likely end up with a compute node running many Snabb instances plus probably some virtual network functions on top of these Snabb instances. Shall every Snabb instance now be addressed via its own SSH-channel-based NETCONF interface? If yes, you'll need to assign an IP address plus SSH plus login credentials and bake all of this into the Snabb daemon.

Why not add some hierarchy and have "something" manage the Snabb instances that together build a service, like L2TPv3 or lw4o6? There is more to this than just basic configuration. What about announcing network prefixes serviced by Snabb via BGP? Are we now asking to also embed BGP into Snabb? Probably not.

Now one could argue that a central OSS is the master of the universe and capable of micro-managing every single Snabb instance in its universe. Personally I don't believe in such a scenario and rather follow the old rule of "divide and conquer". To put the proposed hierarchical model to the test, I've been busy adding Snabb as a network layer under a virtual router (Juniper vMX), managed from the vMX's control plane. Northbound, the vMX exposes its function via a single Netconf/YANG interface. Some in-line network functions are handled directly within Snabb, like lw4o6; the rest (mainly control plane traffic) is handled by the vMX (ARP, IPv6 NDP, BGP, BFD, etc.):

(expanding on @plajjan's diagram):

+-------+                    +-------+                +-----+
| Snabb |  <-- cfg files --  |  vMX  | -- NETCONF --> | NMS |
+-------+                 |  +-------+                +-----+
+-------+                 |
| Snabb |+ <-- cfg files -+
+-------+|
 +-------+

It would be great to have the transport between Snabb and its Netconf/YANG entity (the vMX in my case) standardised and expandable with statistical data beyond basic app link statistics. Snabb is highly efficient in what it does today.

But baking control plane connectivity and functions into Snabb seems like breaking some of its design principles (AFAIK) too. To me, that's like asking Intel's DPDK-based interface driver to become a full-fledged routing device.

@plajjan
Contributor

plajjan commented Jan 15, 2016

I agree with @mwiget, which is why I keep bringing up things like sysrepo: it tries to accomplish exactly this ;)

The important part is that the data Snabb accepts as input, and the data it outputs, conform to a YANG model. Mangling data from one format into another is what I really want to avoid. If everything were just a flat key-value store and we needed to mangle that into a hierarchical YANG model, that'd be a PITA.

I thought the sysrepo approach of linking something that helps you with validation looked pretty nice, but I understand you are not too keen to link to third-party code. I think you should go talk to the sysrepo people about your concerns. Evidently they are of the view that people won't object to linking to their stuff to get convenience functions for dealing with data according to a YANG model. If this is not true I think they should hear about it. I am also interested in what is okay on this front: Lua libs are OK? C is not?

@lukego
Member Author

lukego commented Jan 15, 2016

Great braindump, @mwiget!

@lukego
Member Author

lukego commented Jan 15, 2016

I think you should go talk to the sysrepo people about your concerns.

Good idea. This can be the next step after establishing our prejudices in this thread :-).

what is okay on this front? Lua libs is ok? C is not?

Generally I would like a solution that is small and has few moving parts. If they provide a library it would ideally be as "rewritable software" (lukego/blog#12) i.e. simple enough that it doesn't really matter whether we use their implementation or make our own.

I feel like the ideal interface towards an external NETCONF daemon would be based on a small number of interactions like:

  1. Snabb->NETCONF: Here is the YANG model we are using.
  2. NETCONF->Snabb: Here is your new configuration tree.
  3. Snabb->NETCONF: Here are the latest operational counter values.

These are coarse-grained, high-level interactions that keep the systems at arm's length from each other. We make no assumptions about where these configuration updates are coming from and they make no assumptions about how we are representing and processing them in our application.

I imagine this protocol would be implemented in some really simple way. For example tree data would be represented in a simple external format like JSON and interactions would be done with a simple mechanism like a socket or a file. (@alexandergall already defined an interface towards NetSNMP along these lines I believe.)
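
As a sketch of how small the counter-export half of this exchange could be (paths and names hypothetical, hand-rolled JSON since the counters are flat), with the daemon simply polling the file:

-- Write the current counter values to a file as a small JSON object, using
-- an atomic rename so the reader never sees a half-written file.
local function export_counters (path, counters)
   local tmp = path..".tmp"
   local f = assert(io.open(tmp, "w"))
   f:write("{\n")
   local first = true
   for name, value in pairs(counters) do
      if not first then f:write(",\n") end
      f:write(string.format("  %q: %d", name, value))
      first = false
   end
   f:write("\n}\n")
   f:close()
   assert(os.rename(tmp, path))
end

export_counters("/var/run/snabb/lwaftr1/counters.json",
                { ["in-packets"] = 12345, ["out-packets"] = 12300 })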

If the Snabb process needs to have a relatively deep understanding of the YANG models then I imagine that we would parse the YANG files ourselves and make our own "smart" representation. This idea is quite appealing to me i.e. that we would define our own configuration schemas directly with YANG files.

The main part I would like to avoid is fine-grained interactions, for example having the native data structure we operate on be provided by a third-party library that requires adding a third-party YANG compiler to our build toolchain, or that depends on calling out to an external library for things like diffing old/new configuration trees, or that expects to be able to frequently RPC the daemon for support, and so on. I feel like most users are happy with this kind of library support but that it is decidedly un-Snabbly.

@andywingo How does this match up with your thoughts?

@lukego
Member Author

lukego commented Jan 15, 2016

@mwiget I'm thinking that our ideas are quite in sync here e.g. that in your diagram the vMX is feeding the Snabb process with a file that contains the current complete configuration tree. Much like the way our OpenStack driver provisions the Snabb traffic process by creating a new configuration file for it.

I like this kind of arm's-length distance. In the OpenStack case we could, for example, also have made a ZeroMQ endpoint out of snabbnfv traffic so that OpenStack could send it RPCs, but that seems like much too tight coupling for my taste.

@wingo
Contributor

wingo commented Jan 15, 2016

Thank you all for this discussion! I've been learning a lot :)

I definitely agree with @mwiget and @plajjan that baking NETCONF into Snabb is probably the wrong way to go. There are just too many ways these things could be put together. At the same time, having a data model for Snabb configuration and state data is really attractive to me as a Snabb programmer. It's of course necessary from the operator's point of view, but it's worth pointing out that having a proper data model would be useful in Snabb itself. I know I would much rather publish and rely on a YANG model for the lwaftr that we've been working on than have a bunch of out-of-date, incomplete README files.

It's worth mentioning, of course, that some of NETCONF's operational model might be worth incorporating into Snabb -- candidate configs, for example. That's a step significantly farther down the line, but it's something worth thinking about.

I think the type and representation issues that @plajjan mentions are pretty important though. When you get data in from JSON, you have to know how to interpret that data -- you need a schema of some kind. For example if you get a string, you should know if it's meant to be a string or a 64-bit integer. To me this reasoning starts to indicate that we should have schemas and validation inside Snabb. That way we get reliable operation and nice error messages.

One point that I don't see very clearly is overhead when your configuration+state data is large. For example, in the lwaftr we will have millions of entries in our "router" table. Internally the source of truth is a big flat hash table implemented as a contiguous range of ffi memory. Probably we don't want to expose a simple "get" operation that turns this all into JSON and spits it over a socket. Probably we would want to copy this data structure into the SHM memory somewhere and then the external entity that translates between Snabb and YANG instance data would have to know a bit more about this table.

To an extent you can do this just with exported SHM data. Of course if you want notifications, you need a socket of some kind like the NetSNMP case. You probably need the socket for reconfiguration too, if you support candidate configurations and rollback and all those things.

Regarding other projects, rewriting, and re-implementation: I have an opinion and I would like peoples' thoughts on it :)

Where functionality is logically related to the domain of the program (e.g., configuration and state data for the lwaftr), I think we should (re)implement the functionality in Snabb. The point at which re-implementation can stop is the point at which our domain ends. One way to end the domain is via exposing a standard, "commodity" interface to the world -- in our case, data as instances of the YANG data model. Our domain might possibly go as far as a NETCONF operational configuration model, if at some eventual point we agree that that model (candidate, commit, etc.) is the right way to configure Snabb applications, but not the NETCONF protocol or server. We should aim to share data and protocols with the non-Snabb world but not code. I don't think, for example, that we should be linking to libyang in any part of Snabb that does domain-specific work, whether the process that's doing the data plane or an external process. On the other hand if we expose, for example, YANG instance data in a Snabb-standard but non-XML format, it's perfectly fine at that point to implement a NETCONF agent via sysrepo.

@wingo
Contributor

wingo commented Jan 15, 2016

Thinking a bit more concretely. Let's say we incorporate Yang schemas into Snabb in some meaningful way -- we can configure programs in terms of yang data nodes. We define a standard way to export data types to /run/snabb somehow. Some data types or nodes can define custom ways they are exported via Lua code and some kind of registry, for example a routing table doesn't need to make a gajillion directories. All the data is in /run/snabb, you just need to interpret it.

To interpret the data you need... a schema! So we make a Snabb program that can read the tree in /run/snabb, from a separate process, as a snapshot, using the Snabb data model modules, the same ones that wrote it out. I have no idea what the concurrency implications are, relative to concurrent modification by the Snabb program.

In some ideal world I would like configuration data to be immutable and persistent like functional data structures -- changing the value of a leaf would result in a new tree that shares state with the previous configuration. Actually this doesn't need to be in shmfs, as it's just configuration data. But who knows. That would mean "the configuration as a value" would be a handle on a configuration directory tree. The tree would indicate its schema at the top, so the external process could turn that into some kind of textual representation.

Of course we could skip the tree and just make a textual representation from the beginning, but perhaps that's not as good as being able to selectively read parts of the tree. I don't know. I think in Snabb we will want to be able to reason about configurations from within our domain, so my initial thoughts would be to take this tack, but perhaps there are simpler solutions.
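
To illustrate the "configuration as a value" idea, a small Lua sketch of a path-copying update (in-memory only, not tied to shmfs or any existing Snabb API): the new tree shares every untouched branch with the old one, so previous configurations stay available for rollback.

local function assoc (tree, path, value, i)
   i = i or 1
   if i > #path then return value end
   local copy = {}
   for k, v in pairs(tree or {}) do copy[k] = v end  -- copy only this level
   copy[path[i]] = assoc(tree and tree[path[i]], path, value, i + 1)
   return copy
end

local conf1 = { baz = { qux = 42 }, foo = { bar = "unchanged" } }
local conf2 = assoc(conf1, { "baz", "qux" }, 10)
print(conf1.baz.qux, conf2.baz.qux)  --> 42    10
print(conf1.foo == conf2.foo)        --> true  (untouched subtree is shared)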

Of course for state data you definitely want to use shmfs and to mutate values in place, but you don't have the same concurrency concerns -- one "get" operation doesn't have to show values all from the same "time". Or perhaps you could lock the tree when making updates.

Anyway, summary: no socket needed in the base case. All you need is the filesystem tree(s) and the schema. An external Snabb process can interpret the tree. It needs to be a Snabb process because maybe some particular nodes need domain-specific interpretation. That Snabb process could do a few operations: export the whole configuration tree; export the whole state tree; get a particular value; get some set of matching values.

A question arises of syntax and language. What language should this Snabb config/state data reader produce? What language should the Snabb config parser consume? Obviously the same language, but not obviously any language in particular. I think I would define our own language. Of course as a compiler person that's what you would expect :) But to me it sounds reasonable. Something with yang-like syntax, with the yang data model for the instance data, and for the love of god not xml. The external NETCONF agent if any could handle translation to/from XML, but it would be a straightforward translation -- no need to renest or translate between data models.

@wingo
Contributor

wingo commented Jan 15, 2016

More concretely more concretely more concretely! Let's walk through a few operations.

Say we have a NETCONF server that manages a Snabb lwAFTR instance. The NETCONF server is written with sysrepo and gets a request for /foo/bar on the lwAFTR using YANG module qux. Let's say that we add the concept of a "name" to Snabb programs -- you can start a program with a name. If no name is supplied, you use the PID. OK so the lwaftr was started with --name lwaftr1, so its state data will be in /run/snabb/lwaftr1, and its configuration data will be in, oh I don't know, /var/run/snabb/lwaftr1. The NETCONF daemon invokes this program:

snabb query lwaftr1:qux:/foo/bar

Snabb query looks for /run/snabb/lwaftr1/schema and /var/run/snabb/lwaftr1/schema for the state and config schemas respectively. It checks that qux is somehow supported by the schema, and loads up the yang data model. It then uses the Snabb yang data model library to read the /foo/bar value out of the tree, serializes it to the snabb config language, writes that out to stdout, and is done.
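
Roughly, the lookup half of that could look like the following Lua sketch (the query format and in-memory tree are made up to match the example; the real tool would read the tree from the filesystem and consult the schema):

local function parse_query (spec)
   local name, mod, path = spec:match("^([^:]+):([^:]+):(/.*)$")
   assert(name, "expected NAME:MODULE:/PATH, got: "..spec)
   local keys = {}
   for key in path:gmatch("[^/]+") do table.insert(keys, key) end
   return name, mod, keys
end

local function lookup (tree, keys)
   local node = tree
   for _, key in ipairs(keys) do
      node = assert(node[key], "no such node: "..key)
   end
   return node
end

local name, mod, keys = parse_query("lwaftr1:qux:/foo/bar")
local tree = { foo = { bar = "hello" } }  -- stand-in for the on-disk config tree
print(name, mod, lookup(tree, keys))      --> lwaftr1   qux   hello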

The NETCONF daemon then translates that data to XML and responds appropriately.

If the NETCONF daemon needs to do xpath matching, it might be able to translate the xpath filter to a filter that snabb query can understand, or perhaps not. In the latter case, after getting a more general reply from snabb query and converting it to XML, the NETCONF daemon would then filter the XML using XPath on its side.

Snabb should probably also have the ability to create configurations via a tool -- you load in a configuration from the snabb config language, it validates it against the model and makes a tree in /var/run/snabb, and optionally updates /var/run/snabb/NAME/CONF_NAME to point to that new tree. We can version configurations and roll backward and forward, like Guix and Nix do. Snabb programs can be started up with configuration as a text file, or if they are started by name they can just use the last configuration (or the startup configuration) in /var/run/snabb/NAME.

@plajjan
Contributor

plajjan commented Jan 15, 2016

Related to @andywingo's thought on getting large trees: NETCONF supports filter parameters so you can extract a subtree or part of it. I suppose that one would want to rely on a library to do this. At the same time, if the underlying infrastructure doesn't support it, you might end up sending large amounts of data (like that FFI-stored data) over to an NC agent that then filters it down to just a subset - clearly inefficient from one perspective, but perhaps easier to implement.

I also want to highlight a few common pitfalls and tricky parts of NC/YANG, as I have seen more than one vendor fail at these. Off the top of my head and in no specific order:

config symmetry

Say you take a config coming in over an NC interface and convert it into another config format, for example to configure a daemon that doesn't have NC/YANG support, like ntpd. We take the XML config, push it through a template config generator to produce an ntpd.conf file and then SIGHUP ntpd. Now if we do a get-config over the NC interface we either need to read ntpd.conf and apply the reverse transformation to get back the XML representation, or we need to keep a cached copy. If we go with a cached copy, no changes can be made directly to ntpd.conf as they wouldn't show up over NC. If we read back ntpd.conf we need to make sure the conversion is lossless. The best thing is for the config to be kept in a lossless format so that get-config can be easily implemented.
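
In other words, something like this Lua sketch, where the modelled tree stays the source of truth and ntpd.conf is only ever generated from it (illustrative names, not a real Snabb module):

-- One-way, lossless direction: render ntpd.conf from the canonical config.
-- get-config is answered from 'canonical', never by re-parsing ntpd.conf.
local function render_ntpd_conf (ntp)
   local lines = {}
   for _, server in ipairs(ntp.servers) do
      table.insert(lines, ("server %s iburst"):format(server))
   end
   return table.concat(lines, "\n").."\n"
end

local canonical = { servers = { "192.0.2.1", "192.0.2.2" } }
io.write(render_ntpd_conf(canonical))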

NETCONF isn't RESTful

NETCONF is stateful. You have different operations that specify how a "patch" is to be applied to the current configuration: 'create' will create a node only if it doesn't exist, and will fail if it already exists; 'replace' will replace the node; 'merge' will... well, merge; and 'remove' will remove. Since you potentially don't get the entire config subtree over NETCONF, you need to have a copy of the current config so you can do a merge operation (which, btw, is the default).
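
For example, the default 'merge' operation on a nested config could look like this Lua sketch, which is exactly why the agent needs the full current configuration at hand:

-- Fold a partial edit (only the changed subtree) into the running config.
local function merge (current, patch)
   for key, value in pairs(patch) do
      if type(value) == "table" and type(current[key]) == "table" then
         merge(current[key], value)
      else
         current[key] = value
      end
   end
   return current
end

local running = { interface = { eth0 = { mtu = 1500, up = true } } }
merge(running, { interface = { eth0 = { mtu = 9000 } } })
print(running.interface.eth0.mtu, running.interface.eth0.up)  --> 9000   true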

order dependency

For example, you define an access-list and apply it to an interface. As this is done in one transaction, it shouldn't matter whether the data you send contains the access-list definition first and the reference after it, or the other way around. There are multiple NC implementations that fail at this and require that the ACL be defined before it can be referenced.

Similarly, some implementations only allow certain operations based on the current state rather than the desired state as expressed in the candidate config. This means you have to first change A, commit, then change B (which relies on the new state of A) and then commit again.

Since the internal components might actually want to apply config in a certain order, it could be up to the NC agent to reorder things so that it suits the internal implementation.

transactional

Don't think I need to explain much about this. Doing actual transactions and being able to roll back requires an implementor to keep track of things in the time domain, which can be tricky.

validate

Being able to validate a config without applying it typically requires internal support from the daemon. Basic syntax checks can be written outside of it, but for true validation support I think you want support "deep" inside the app.

Model translation

This one is really tricky and it's doubtful that Snabb should even attempt to translate instance data.

JUNOS has recently started supporting (one way) translation of configuration data with the goal of supporting standard IETF / OpenConfig models. Since Juniper already have an internal model and a large existing user base they are not very keen on changing it. Representing the same internal data twice in different form is probably not a good idea so the only remaining option is to support translation. XR have started to support OpenConfig models (http://plajjan.github.io/Cisco-IOS-XR-6-0-and-YANG/) and do this through a kind of internal translator AFAIK.

So the question boils down to, is there a need to support the same data expressed differently through two (or more) models? If the answer is no, then no translation is required.

This type of functionality could of course be done in a NC agent so no headache for Snabb.

@mwiget
Contributor

mwiget commented Jan 15, 2016

@andywingo, fully agree and like your statement very much: "Anyway, summary: no socket needed in the base case. All you need is the filesystem tree(s) and the schema."

clean and simple.

@lukego
Member Author

lukego commented Jan 15, 2016

@plajjan wow amazing list!

@wingo
Contributor

wingo commented Jan 15, 2016

Another consideration. What if your Snabb program natively implements Yang data model A internally, but externally it needs to provide Yang data model B also? You might have the same configuration or state data mapping to multiple instances. You probably don't want the Snabb program exporting multiple trees of identical data in different formats.

I don't really know the answer to this question. I would assume the Snabb application would treat data model A as canonical, and some external shim could provide data model B, given a way to translate between them. Kinda crappy though.

@plajjan
Contributor

plajjan commented Jan 15, 2016

@andywingo that was what I was trying to allude to with the "Model translation" topic in my previous post :)
It would probably quickly drive up code complexity, so I think we should ignore that bit for now. It's probably better to implement in an NC/YANG agent, using XSLT, than in Snabb anyway.

@jgroom33

Netconf is a zombie. It will soon follow the path of ATM and frame relay.

Follow the enterprise. Go RESTful.

@lukego
Member Author

lukego commented Jan 17, 2016

@plajjan Great list of pitfalls. Can the separate NETCONF daemon shield us from these problems?

(NOTE: I wrote the below without reading @andywingo's comments above in sufficient detail. Leaving for posterity :))

Here is one idea for an interface that a Snabb application could expose to a daemon. Simple case would be:

snabb-lwaftr config --load foo.yangdata

... which could use a generic library to update our internal YANG configuration tree and propagate changes everywhere they need to go (e.g. app network).
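
As a sketch of what the handler behind that command might do (everything here is hypothetical scaffolding; the real parser and validator would be YANG-aware):

-- Hypothetical handler for 'config --load': read the new configuration tree,
-- validate it before touching the data plane, then hand it to whatever
-- rebuilds the app network.
local function load_config (path, parse, validate, apply)
   local f = assert(io.open(path, "r"))
   local text = f:read("*a")
   f:close()
   local tree = parse(text)
   validate(tree)  -- fail loudly here, not halfway through reconfiguration
   apply(tree)
   return tree
end

-- Illustrative stubs standing in for the real YANG-aware pieces:
local parse    = function (text) return { raw = text } end
local validate = function (tree) assert(tree.raw and #tree.raw > 0, "empty config") end
local apply    = function (tree) print("would reconfigure the app network") end
-- load_config("foo.yangdata", parse, validate, apply)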

This could also be extended for different programs with variants like:

snabb-lwaftr config --load-compiled-binding-table foo.dat

to handle exceptional data that has to be treated specially somehow.

This could also implement a validation hook that an external daemon could call while processing NETCONF requests:

snabb-lwaftr config --validate foo.yangdata

... though this is a half-baked thought and @andywingo is already thinking ahead there in talking about supporting things like "candidate" configurations. (I am not familiar enough with NETCONF to have any bright ideas in that area yet.)
