Decouple cmd package from the clab application part1 #1906

Merged · 18 commits from library into main · Mar 15, 2024
Conversation

@steiler (Collaborator) commented Feb 22, 2024

To allow using Containerlab as a Go library, we need to adjust some things that went wrong in the past.

  • Introduce a clear separation between the cmd and clab packages
  • Build a proper facade that provides the high-level calls

@steiler (Collaborator, Author) commented Feb 22, 2024

What would we consider the right abstraction for topology handover?
Would we provide a file pointer, or would we take in a JSON blob (probably with an additional argument, the working dir, where we would create the lab folder)?
My tendency is towards the JSON blob.

@steiler steiler self-assigned this Feb 22, 2024
@steiler steiler added the enhancement label Feb 22, 2024
@ldelossa commented Feb 22, 2024

As per our discussion here: #1905

I want to outline the (admittedly few) design requirements Cilium would have for utilizing ContainerLab as an embedded library.

I'd like to first mention that our initial goal in using ContainerLab is embedding test case scenarios. Cilium has quite a few complex testing scenarios. Using ContainerLab would allow us to rely on an already mature solution which can set up network topologies for dedicated Cilium datapath testing. We are tracking this issue here: cilium/cilium#27450

Luckily, our requirements are not too crazy (IMO, you may think otherwise with your mental map of ContainerLab :))

  1. We'd like to define a topology in a JSON file which defines a set of network namespaces and connects them via veths (as I understand it, this is the core of ContainerLab; we don't really have a scoped focus on other types of network connectivity such as bridging into VMs).
  2. We will need a list of the configured network namespaces returned to us, along with a handle to each. Whether this is a pinned filesystem path or an FD can be up to the ease of implementation on ContainerLab's side. We just need the handles provided back to us.
  3. I'm discussing this a bit internally, but I also assume we need to understand the created veth devices (netns + ifindex) after the topology is created, since we'd want to apply eBPF programs to these interfaces to facilitate datapath testing.

> What would we consider the right abstraction for topology handover?

Can you explain a bit more about what you mean by "topology handover".
If this is "communicating the resulting topology back to the caller" then I think points 2. and 3. above cover this from our requirements.

> Would we provide a file pointer, or would we take in a JSON blob (probably with an additional argument, the working dir, where we would create the lab folder)?

It would be possible for the library to take in a generic io.Reader, no? That way, the caller can provide any medium (file, in-memory, network socket, etc.) which implements the io.Reader interface. The implication of this is that whoever does the reading must also do the topology syntax checking at read time.
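
A minimal sketch of that io.Reader-based handover, assuming hypothetical names (clablib, Topology, LoadTopology) that do not exist in containerlab today; it only illustrates the shape of the entry point being discussed:

```go
package clablib

import (
	"fmt"
	"io"

	"gopkg.in/yaml.v3"
)

// Topology is a stand-in for containerlab's topology definition struct.
type Topology struct {
	Name string `yaml:"name"`
	// nodes, links, etc. omitted for brevity
}

// LoadTopology accepts any io.Reader (file, bytes.Buffer, net.Conn, ...)
// and performs syntax checking at read time, as discussed above.
func LoadTopology(r io.Reader) (*Topology, error) {
	var t Topology
	if err := yaml.NewDecoder(r).Decode(&t); err != nil {
		return nil, fmt.Errorf("parsing topology: %w", err)
	}
	return &t, nil
}
```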

@steiler (Collaborator, Author) commented Feb 23, 2024

> We will need a list of the configured network namespaces returned to us, along with a handle to each. Whether this is a pinned filesystem path or an FD can be up to the ease of implementation on ContainerLab's side. We just need the handles provided back to us.

What we do is always create the /var/run/netns/<container_name> handle, as this is also required by the ip command. This contract should suffice for getting hold of the handle.
(As a side note, we already provide JSON output for most of the clab commands, so you can already get the container names, status, etc. in JSON with today's binary.)
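
From the caller's side, consuming that contract could look like the sketch below: open the pinned /var/run/netns/<container_name> path to get an FD. The container name "clab-mylab-node1" is a made-up example.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// netnsHandle opens the namespace file that containerlab pins per container.
func netnsHandle(containerName string) (*os.File, error) {
	p := filepath.Join("/var/run/netns", containerName)
	f, err := os.Open(p) // the fd can later be passed to setns(2)
	if err != nil {
		return nil, fmt.Errorf("opening netns handle %s: %w", p, err)
	}
	return f, nil
}

func main() {
	f, err := netnsHandle("clab-mylab-node1") // hypothetical container name
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println("netns fd:", f.Fd())
}
```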

> I'm discussing this a bit internally, but I also assume we need to understand the created veth devices (netns + ifindex) after the topology is created, since we'd want to apply eBPF programs to these interfaces to facilitate datapath testing.

Would it be sufficient for you to get the netns handle and query the interfaces yourself to figure out the ifindexes? Otherwise, we could probably provide these in the results.
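
For illustration, a sketch of the "query the interfaces yourself" approach: enter the named netns and list links to recover name/ifindex pairs. The third-party vishvananda/netns and vishvananda/netlink packages used here are an assumption, not part of containerlab's API.

```go
package nsquery

import (
	"runtime"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// ifindexesIn returns interface name -> ifindex for the named netns.
func ifindexesIn(nsName string) (map[string]int, error) {
	// setns(2) affects the calling thread only, so pin the goroutine.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	orig, err := netns.Get()
	if err != nil {
		return nil, err
	}
	defer orig.Close()
	defer netns.Set(orig) // switch back when done

	ns, err := netns.GetFromName(nsName) // resolves /var/run/netns/<nsName>
	if err != nil {
		return nil, err
	}
	defer ns.Close()
	if err := netns.Set(ns); err != nil {
		return nil, err
	}

	links, err := netlink.LinkList()
	if err != nil {
		return nil, err
	}
	out := make(map[string]int, len(links))
	for _, l := range links {
		out[l.Attrs().Name] = l.Attrs().Index
	}
	return out, nil
}
```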

> What would we consider the right abstraction for topology handover?

> Can you explain a bit more about what you mean by "topology handover". If this is "communicating the resulting topology back to the caller" then I think points 2. and 3. above cover this from our requirements.

As you answered below: provide a file or an io.Reader.

> Would we provide a file pointer, or would we take in a JSON blob (probably with an additional argument, the working dir, where we would create the lab folder)?

> It would be possible for the library to take in a generic io.Reader, no? That way, the caller can provide any medium (file, in-memory, network socket, etc.) which implements the io.Reader interface. The implication of this is that whoever does the reading must also do the topology syntax checking at read time.

Yes, that's my thinking: taking an io.Reader seems to be a good approach.
As for the checking: if we get an io.Reader, we'd hand it over to the YAML library to read the data and return an error if parsing fails, so that should be fine. I'm realizing just now that there should be other means to use the library functions. One would be to export the topology struct, which would allow the topology definition to be created via structs; we convert that internally to the specific nodes that implement the nodes.Node interface. So this level of abstraction should probably also be exposed as a subsequent step.
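
A hypothetical illustration of that struct-level entry point: building the topology definition in Go instead of YAML. The type shapes below mirror a containerlab topology file but are assumptions, not the real exported structs.

```go
package main

import "fmt"

// Node and Link are made-up shapes mirroring a topology file.
type Node struct {
	Kind  string
	Image string
}

type Link struct {
	Endpoints []string // "node:interface" pairs
}

type Topology struct {
	Name  string
	Nodes map[string]Node
	Links []Link
}

func main() {
	// Build the definition directly, no YAML round-trip needed.
	topo := Topology{
		Name: "dp-test",
		Nodes: map[string]Node{
			"ns1": {Kind: "linux", Image: "alpine:3"},
			"ns2": {Kind: "linux", Image: "alpine:3"},
		},
		Links: []Link{
			{Endpoints: []string{"ns1:eth1", "ns2:eth1"}},
		},
	}
	fmt.Printf("%+v\n", topo)
}
```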

@steiler (Collaborator, Author) commented Feb 23, 2024

@ldelossa Another question: is your plan to use containerlab also as a means to deploy the containers / network namespaces, or is it mainly your objective to use containerlab for the wiring?

@ldelossa commented
> What we do is always create the /var/run/netns/<container_name> handle, as this is also required by the ip command. This contract should suffice for getting hold of the handle.
> (As a side note, we already provide JSON output for most of the clab commands, so you can already get the container names, status, etc. in JSON with today's binary.)

This sounds completely reasonable to me. As long as the naming is consistent with the container names in the topology file, the user of the library can do the correlating.

> Would it be sufficient for you to get the netns handle and query the interfaces yourself to figure out the ifindexes? Otherwise, we could probably provide these in the results.

When you say "query the interfaces", what do you mean here exactly?
My concern is: say we have a very busy network namespace with 4 veths created.
What is a quick and simple way for us to understand which veth (ifindex/name) goes to which adjacent network?
However, if you mean "query the interfaces" as in Go interfaces the library will provide, then this seems feasible; I'm just not aware of these interfaces.

> Yes, that's my thinking: taking an io.Reader seems to be a good approach.
> As for the checking: if we get an io.Reader, we'd hand it over to the YAML library to read the data and return an error if parsing fails, so that should be fine. I'm realizing just now that there should be other means to use the library functions. One would be to export the topology struct, which would allow the topology definition to be created via structs; we convert that internally to the specific nodes that implement the nodes.Node interface. So this level of abstraction should probably also be exposed as a subsequent step.

100% to this. Especially if you want to modify a topology in Go, having to go back through the JSON would be inconvenient :).

> Another question: is your plan to use containerlab also as a means to deploy the containers / network namespaces, or is it mainly your objective to use containerlab for the wiring?

I'm not sure I see a difference between "deploying" and "wiring".
But for our use cases we would want the network namespaces, veths wired up, routing tables configured, etc. I'm leaning towards "deployed" here: we expect a full end-to-end network topology setup. From there we will apply eBPF programs to specific interfaces and then run various unit tests across these network namespaces/eBPF programs.

@AntonMuravlev commented
I would also like to express my interest. As a network engineer I've been using clab for years, and now I'm going to use clab as part of an in-house product. The initial idea was to drive clab via hacky bash scripting, but it would be great to have a proper API. The main tasks for clab as a library (see the sketch after this list):

  1. Initial lab deploy: deploy nodes and the links between them.
  2. Link management during lab runtime: deploy new links, destroy links, and manage link parameters like packet loss/delay via tc netem.
  3. Destroy the lab topology.
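
For discussion's sake, a hypothetical facade mapping those three tasks onto a Go API surface; none of these names exist in containerlab, they only sketch what such a library contract could look like.

```go
package clabapi

import "context"

// Topology stands in for a parsed topology definition.
type Topology struct{}

// LinkParams models runtime link properties, e.g. applied via tc netem.
type LinkParams struct {
	LossPercent float64
	DelayMs     int
}

// Lab is a handle to a deployed topology.
type Lab interface {
	// AddLink wires a new veth between two "node:interface" endpoints.
	AddLink(ctx context.Context, epA, epB string) error
	// RemoveLink destroys an existing link.
	RemoveLink(ctx context.Context, epA, epB string) error
	// SetLinkParams adjusts loss/delay on an endpoint at runtime.
	SetLinkParams(ctx context.Context, ep string, p LinkParams) error
	// Destroy tears the whole lab down.
	Destroy(ctx context.Context) error
}

// Deployer brings a topology up and returns the lab handle.
type Deployer interface {
	Deploy(ctx context.Context, topo *Topology) (Lab, error)
}
```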

Review threads (all resolved) on: clab/deploy_options.go, cmd/disableTxOffload.go, cmd/deploy.go, clab/file.go
@hellt hellt marked this pull request as ready for review March 15, 2024 08:03
@hellt hellt changed the title Library Usage: Allow to use containerlab as a library Decouple cmd package from the clab application part1 Mar 15, 2024
codecov bot commented Mar 15, 2024

Codecov Report

Attention: Patch coverage is 64.28571%, with 140 lines in your changes missing coverage. Please review.

Project coverage is 54.00%. Comparing base (756d51a) to head (d642559).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1906      +/-   ##
==========================================
+ Coverage   53.90%   54.00%   +0.10%     
==========================================
  Files         156      158       +2     
  Lines       11382    11430      +48     
==========================================
+ Hits         6135     6173      +38     
- Misses       4387     4396       +9     
- Partials      860      861       +1     
Files                      Coverage Δ
clab/authz_keys.go         54.09% <100.00%> (ø)
clab/file.go               56.86% <100.00%> (ø)
clab/sshconfig.go          62.85% <100.00%> (ø)
cmd/destroy.go             75.83% <100.00%> (+4.31%) ⬆️
cmd/inspect.go             83.33% <100.00%> (ø)
links/endpoint.go          88.00% <100.00%> (ø)
links/endpoint_bridge.go   60.00% <100.00%> (ø)
links/endpoint_host.go     66.66% <100.00%> (ø)
links/endpoint_macvlan.go  22.22% <100.00%> (ø)
links/endpoint_veth.go     100.00% <100.00%> (ø)
... and 27 more

... and 1 file with indirect coverage changes

@hellt hellt merged commit 001e6a2 into main Mar 15, 2024
63 checks passed
@hellt hellt deleted the library branch March 15, 2024 08:31
Labels
enhancement New feature or request