Overhaul networking docs, add Flannel usage guide
Signed-off-by: Dennis Marttinen <dennis@weave.works>
twelho committed Jul 16, 2020
1 parent 95daefd commit 0a96886

docs/networking.md

Ignite uses network plugins to manage VM networking.

The default plugin is [CNI](https://github.com/containernetworking/cni), and the default CNI network is automatically put in
`/etc/cni/net.d/10-ignite.conflist` if `/etc/cni/net.d` is empty. In order to switch to some other CNI plugin,
remove `/etc/cni/net.d/10-ignite.conflist`, and install e.g. [Flannel](#multi-node-networking-with-flannel) like below.

The legacy `docker-bridge` network plugin is also available, but it is deprecated.

To select the network plugin, use the `--network-plugin` flag for `ignite` and `ignited`:

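For instance (the plugin values and `<command>` below are placeholders; the flag works the same way for both binaries):

```shell
ignite --network-plugin cni <command>
ignited --network-plugin docker-bridge <command>
```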

### The default CNI network

Automatically installed to `/etc/cni/net.d/10-ignite.conflist` unless you have populated `/etc/cni/net.d` with something else. Uses the CNI `bridge` plugin.
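
For reference, a bridge-based CNI `.conflist` generally looks something like the sketch below. The CNI version, bridge
name and exact plugin chain here are illustrative assumptions, not necessarily what Ignite writes, so inspect the file
on your own host for the authoritative contents.

```shell
# Illustrative sketch only; the file generated by Ignite may differ between versions.
$ cat /etc/cni/net.d/10-ignite.conflist
{
  "cniVersion": "0.4.0",
  "name": "ignite-cni-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "ignite0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.61.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```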

**Pros:**

- **Kubernetes-compatible**: You can use the same overlay networks as you use with Kubernetes, and hence get your VMs on the same network as your containers.
- **Port mapping support**: This mode supports port mappings from the VM to the host, as shown in the example below.

**Cons:**

- **No multi-node support**: The default bridge has no logic for communicating with other hosts, so local VMs are not discoverable externally. VM IPs are local (in the `10.61.0.0/16` range).
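
As a usage sketch (the image name, VM name and port numbers are placeholders), a VM can be started on the default CNI
network with a port mapping from the host like this:

```shell
# Start a VM on the default CNI network and map host port 8080 to port 80 in the VM.
ignite run weaveworks/ignite-ubuntu \
  --name my-vm \
  --network-plugin cni \
  --ports 8080:80 \
  --ssh

# The assigned IP address (from the local 10.61.0.0/16 range) shows up in the VM list.
ignite ps
```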

### A third-party CNI plugin

For example [Flannel](#multi-node-networking-with-flannel), or any other Kubernetes and/or CNI implementation.

**Pros:**

- **Multi-node support**: CNI implementations can often route packets between multiple physical hosts. External computers can access the VM's IP.
- **Kubernetes-compatible**: You can use the same overlay networks as you use with Kubernetes, and hence get your VMs on the same network as your containers.
- **Port mapping support**: This mode supports port mappings from the VM to the host.

**Cons:**

- **More software needed**: There's now one extra piece of software to configure and manage.


### docker-bridge

**Pros:**

- **Quick start**: If you're running `docker`, you can get up and running without installing extra software.
- **Port mapping support**: This mode supports port mappings from the VM to the host, as shown in the example after this list.

**Cons:**

- **docker-dependent**: By design, this mode can only be used with Docker, and is hence not portable across container runtimes.
- **No multi-node support**: The IP is local (in the `172.17.0.0/16` range), and hence other computers can't connect to your VM's IP address.
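
A minimal sketch of the legacy mode (image and VM name are placeholders); the VM gets a Docker-local address in the
`172.17.0.0/16` range:

```shell
# Attach the VM to Docker's default bridge and map host port 2222 to the VM's SSH port.
ignite run weaveworks/ignite-ubuntu \
  --name legacy-vm \
  --network-plugin docker-bridge \
  --ports 2222:22
```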

## Multi-node networking with Flannel

[Flannel](https://github.com/coreos/flannel) is a CNI-compliant layer 3 network fabric. It can be used with Ignite as
a third-party CNI plugin to enable networking across multiple hosts/nodes. To ease the setup process, this repository
provides a helper script at [tools/ignite-flannel.sh](../tools/ignite-flannel.sh).

### Configuring the nodes

#### Node 1 (192.168.1.2)

Run `ignite-flannel.sh init` on the first node:

```shell
[node1]$ ./tools/ignite-flannel.sh init
==> Starting ignite-etcd container...
9a99df0dded30a13a7cd6ec4a04a2038db579ec13c129da53933f3a438474dcd
==> Setting Flannel config:
{
  "Network": "10.50.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.50.10.0",
  "SubnetMax": "10.50.99.0",
  "Backend": {
    "Type": "udp",
    "Port": 8285
  }
}
==> Starting ignite-flannel container...
25d7f304ade52ad6e5648db8e99cffc78555a1bac01caed5d7401dbf63af2193
==> Setting CNI config...
==> Initialized, now start your Ignite VMs with the CNI network plugin.
```

This will start etcd and Flannel in Docker containers on the first node. Flannel uses etcd to store its configuration.
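
To confirm that both helper containers came up (the container names below are the ones printed by the script above),
you can check with Docker directly:

```shell
# Both ignite-etcd and ignite-flannel should be listed as running.
[node1]$ docker ps --filter "name=ignite-"

# The Flannel logs should show a subnet lease being acquired from etcd.
[node1]$ docker logs ignite-flannel
```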

You may now start VMs on this node using `ignite run --network-plugin cni <image>`. To make sure Flannel is active,
verify that the VMs get IP addresses in the `10.50.0.0/16` subnet and that they have internet connectivity.
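
For example (the image and VM name are placeholders, and the exact checks depend on your workflow), a quick test could
look like this:

```shell
# Start a test VM using the CNI network plugin (illustrative image).
[node1]$ ignite run weaveworks/ignite-ubuntu --name flannel-test --network-plugin cni --ssh

# The VM should have been assigned an address from the 10.50.0.0/16 Flannel network.
[node1]$ ignite ps

# Check internet connectivity from inside the VM.
[node1]$ ignite exec flannel-test -- ping -c 3 1.1.1.1
```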

#### Node 2 (192.168.1.3)

On the second node it's only necessary to run Flannel since the backing etcd is provided by the first node. Check the IP
address or FQDN of the first node and run `ignite-flannel.sh join <first_node_ip_or_fqdn>` on the second node:

```shell
[node2]$ ./tools/ignite-flannel.sh join 192.168.1.2
==> Starting ignite-flannel container...
01ba5b9a258c5b029ce5412418e998fed1612663d5e8ffe3dcdc33eb5c29dc24
==> Setting CNI config...
==> Complete, now check if joining was successful using 'docker logs ignite-flannel'.
==> If so, go ahead and start your Ignite VMs with the CNI network plugin.
```

Verify that Flannel on the second node has successfully connected to the etcd of the first node using
`docker logs ignite-flannel`:

```shell
[node2]$ docker logs ignite-flannel
I0716 15:18:02.190887 1 main.go:518] Determining IP address of default interface
I0716 15:18:02.192746 1 main.go:531] Using interface with name eth0 and address 192.168.1.3
I0716 15:18:02.192844 1 main.go:548] Defaulting external address to interface address (192.168.1.3)
I0716 15:18:02.193384 1 main.go:246] Created subnet manager: Etcd Local Manager with Previous Subnet: 10.50.31.0/24
I0716 15:18:02.193503 1 main.go:249] Installing signal handlers
I0716 15:18:02.201055 1 main.go:390] Found network config - Backend type: udp
I0716 15:18:02.209864 1 local_manager.go:201] Found previously leased subnet (10.50.31.0/24), reusing
I0716 15:18:02.213133 1 local_manager.go:220] Allocated lease (10.50.31.0/24) to current node (192.168.1.3)
I0716 15:18:02.214075 1 main.go:305] Setting up masking rules
I0716 15:18:02.239748 1 main.go:313] Changing default FORWARD chain policy to ACCEPT
I0716 15:18:02.239895 1 main.go:321] Wrote subnet file to /run/flannel/subnet.env
I0716 15:18:02.239910 1 main.go:325] Running backend.
I0716 15:18:02.252725 1 main.go:433] Waiting for 22h59m59.937506934s to renew lease
I0716 15:18:02.260713 1 udp_network_amd64.go:100] Watching for new subnet leases
I0716 15:18:02.279792 1 udp_network_amd64.go:195] Subnet added: 10.50.77.0/24
```

If no errors occurred, the overlay network should now be established. Go ahead and start a VM on the second node and
verify that Flannel is active by checking the subnet and internet connectivity, just like on the first node.

If Flannel is throwing errors about the etcd connection:

- Check that you have entered the IP address or FQDN correctly, e.g. verify that you can ping it.
- Make sure that there is no firewall blocking ports `2379/tcp` (etcd) and/or `8285/udp` (Flannel), as in the example below.
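
For instance, on hosts using `firewalld` (adjust to whatever firewall tooling your distribution actually uses), the
ports could be opened along these lines:

```shell
# Allow etcd client traffic (needed on the first node) and the Flannel UDP backend (all nodes).
sudo firewall-cmd --permanent --add-port=2379/tcp
sudo firewall-cmd --permanent --add-port=8285/udp
sudo firewall-cmd --reload
```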

At this point you should be able to ping VMs across hosts. Try to ping a VM on the second node from a VM on the first
node and vice versa. Note that the (physical) machine running Ignite can only access VMs that run locally on that
specific machine. The overlay network is established only **between the VMs**, not between the machines running the VMs.
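
One way to test this (the VM names come from the earlier illustrative steps, and the address is a placeholder you look
up yourself) is to find a VM's Flannel IP on one node and ping it from a VM on the other:

```shell
# On the second node: note the Flannel-assigned IP of a local VM.
[node2]$ ignite ps

# On the first node: ping that address from inside a local VM (not from the host itself).
[node1]$ ignite exec flannel-test -- ping -c 3 <ip-of-vm-on-node2>
```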

### Cleanup

To remove all persistent configuration, run `ignite-flannel.sh cleanup` on both hosts.

```shell
[node*]$ ./tools/ignite-flannel.sh cleanup
==> Cleanup complete. To finish removal of non-persistent resources such as generated
==> network interfaces and iptables rules, reboot your system (or remove them by hand).
```

If you're running Kubernetes on the physical machine you want to use for Ignite VMs, it should work out of the box, as
the CNI implementation is most probably already running in a `DaemonSet` on that machine.

### What about static IPs?

When using CNI, the CNI provider (e.g. Flannel) is responsible for assigning IP addresses to containers (or in this case
the Ignite VMs). Ignite itself only receives an IP from CNI and forwards it to the VM, so it is up to your CNI provider
to persist the IP addresses. See e.g. [Flannel's documentation](https://github.com/coreos/flannel/blob/master/Documentation/reservations.md)
on how you could establish this.

The `ignite-flannel.sh` script is only meant to provide a relatively simple example of how to set up a standalone CNI
network and thus does not have any readily available options to specify static IPs for VMs. That said, it is essentially
just a script to start Flannel and pass it a given configuration, so feel free to take a look at the code in
[tools/ignite-flannel.sh](../tools/ignite-flannel.sh) to see how it works and how you can extend it. Contributions welcome!
