
Tutorial: Getting Started


Workflow

The following figure shows the general architecture of the vim-emu platform and depicts the high-level workflow of a developer using it.
(Figure: vim-emu workflow)

  1. define a network service package, consisting of service (NSD) and function (VNFD) descriptors as well as Dockerfiles or pre-built images that contain the network functions to be tested
  2. define a multi-PoP topology on which the service should be tested
  3. start the emulator
  4. connect a MANO system to the emulated PoPs in the platform using standard cloud interfaces
  5. deploy the network service on the platform by pushing it to the MANO system
  6. the MANO system starts each network function as a Docker container, connects it to the emulated network, and sets up its forwarding chain; the service is now deployed and runs inside the platform

At this stage, a developer can directly interact with each running container through Containernet's interactive command line interface (CLI), e.g., to view log files, change configurations, or run arbitrary commands, while the service processes traffic generated by additional containers running in the emulation (e.g., using iperf). Furthermore, the developer and the MANO system can access arbitrary monitoring data generated by the SDN switches or the network functions.

In the examples shown in this article, no MANO system is used; its functions are performed manually.

The following examples show how to use vim-emu, how to manually deploy some example network services, and how to chain their VNFs.

Topology definition

A user can define arbitrary topologies to be emulated by vim-emu. These topologies are described with Mininet-like Python scripts as described here. A couple of example topologies can be found here. Two of them will be demonstrated and explained in the sections below.

Defining a topology involves instantiating a DCNetwork object, adding datacenters, linking them to create a network, instantiating APIs, and connecting them to the datacenters. If you are familiar with Containernet and Mininet: DCNetwork is a subclass of the Containernet class (which itself inherits from Mininet), so its objects offer largely the same functions.

Deploy empty Docker containers

Service

+--------+       +--------+
| vnf0   <------->  vnf1  |
+--------+       +--------+

Both deployed VNFs are based on an empty ubuntu:trusty Docker container.

Define a topology with two linked PoPs

tango_default_cli_topology_2_pop.py:

import logging
from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.rest.rest_api_endpoint import RestApiEndpoint
from emuvim.api.tango import TangoLLCMEndpoint

logging.basicConfig(level=logging.DEBUG)
setLogLevel('info')  # set Mininet loglevel
logging.getLogger('werkzeug').setLevel(logging.DEBUG)
logging.getLogger('5gtango.llcm').setLevel(logging.DEBUG)


def create_topology():
    net = DCNetwork(monitor=False, enable_learning=True)
    # create two data centers
    dc1 = net.addDatacenter("dc1")
    dc2 = net.addDatacenter("dc2")
    # interconnect data centers
    net.addLink(dc1, dc2, delay="20ms")
    # add the command line interface endpoint to the emulated DC (REST API)
    rapi1 = RestApiEndpoint("0.0.0.0", 5001)
    rapi1.connectDCNetwork(net)
    rapi1.connectDatacenter(dc1)
    rapi1.connectDatacenter(dc2)
    rapi1.start()
    # add the 5GTANGO lightweight life cycle manager (LLCM) to the topology
    llcm1 = TangoLLCMEndpoint("0.0.0.0", 5000, deploy_sap=False)
    llcm1.connectDatacenter(dc1)
    llcm1.connectDatacenter(dc2)
    # run the dummy gatekeeper (in another thread, don't block)
    llcm1.start()
    # start the emulation and enter interactive CLI
    net.start()
    net.CLI()
    # when the user types exit in the CLI, we stop the emulator
    net.stop()


def main():
    create_topology()


if __name__ == '__main__':
    main()

As you can see, creating custom topologies is very similar to Containernet and Mininet. In this example, the topology definition is done in the function create_topology().
First, a DCNetwork object is created. The DCNetwork class inherits from Containernet.

net = DCNetwork(monitor=False, enable_learning=True)

Then two datacenters are created with DCNetwork.addDatacenter:

dc1 = net.addDatacenter("dc1")
dc2 = net.addDatacenter("dc2")

Link them (with 20ms delay):

net.addLink(dc1, dc2, delay="20ms")

After that, API endpoints are instantiated, which later allow VNFs to be started on the PoPs. In this example, only the RestApiEndpoint is needed.

    rapi1 = RestApiEndpoint("0.0.0.0", 5001)
    rapi1.connectDCNetwork(net)
    rapi1.connectDatacenter(dc1)
    rapi1.connectDatacenter(dc2)
    rapi1.start()

Then start the emulation and enter the interactive CLI:

net.start()
net.CLI()
# when the user types exit in the CLI, we stop the emulator
net.stop()

Workflow with simple Docker containers - Step-by-step

Step 1: Start the emulator with the two-PoP topology

# 1. (Terminal1) start the demo topology
~/vim-emu$ sudo python examples/tango_default_cli_topology_2_pop.py

Step 2: Check available (emulated) PoPs

# 2. (Terminal 2) datacenter list
vim-emu datacenter list

# output
+---------+-----------------+----------+----------------+--------------------+
| Label   | Internal Name   | Switch   |   # Containers |   # Metadata Items |
+=========+=================+==========+================+====================+
| dc2     | dc2             | dc2.s1   |              0 |                  0 |
+---------+-----------------+----------+----------------+--------------------+
| dc1     | dc1             | dc1.s1   |              0 |                  0 |
+---------+-----------------+----------+----------------+--------------------+

Step 3: Instantiate two VNFs

# 3. (Terminal2) instantiate two VNFs, one in each PoP
vim-emu compute start -d dc1 -n vnf0 -i ubuntu:trusty
vim-emu compute start -d dc2 -n vnf1 -i ubuntu:trusty

Step 4: Check deployment

# 4. (Terminal2) compute list
vim-emu compute list

# output
+--------------+-------------+---------------+------------------+-------------------------+
| Datacenter   | Container   | Image         | Interface list   | Datacenter interfaces   |
+==============+=============+===============+==================+=========================+
| dc2          | vnf1        | ubuntu:trusty | emu0             | dc2.s1-eth2             |
+--------------+-------------+---------------+------------------+-------------------------+
| dc1          | vnf0        | ubuntu:trusty | emu0             | dc1.s1-eth2             |
+--------------+-------------+---------------+------------------+-------------------------+

Step 5: Setup chaining between VNFs

Attention: This step is NOT needed as long as enable_learning=True is set in tango_default_cli_topology_2_pop.py.

# 5. (Terminal2) do the chain setup
vim-emu network add -b -src vnf0:emu0 -dst vnf1:emu0

Step 6: Test connectivity

# 6. (Terminal1) check if everything works
containernet> vnf0 ping -c 3 vnf1

# output:
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=40.5 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=40.1 ms
64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=40.1 ms

--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms

Step 7: Stop the emulator

This also terminates the VNFs.

# 7. (Terminal1) shutdown the emulator
containernet> exit

If you don't terminate the emulator properly, or if an error occurs during execution, leftovers may remain that can break future script runs. To clean them up, run sudo mn -c.

Of course, it may be necessary to run tests other than ping, which require tools that are not installed in the ubuntu:trusty Docker image.

Custom image

Let's create our own Docker images that contain what is needed, for example an iperf client and server. If you are not familiar with Docker, you may first want to check out its Get Started article. In short: we first write a Dockerfile, which describes the image to build; then we build the images from these Dockerfiles and adapt our workflow to use them. Docker containers used in Containernet must fulfill certain requirements. They must have the following packages installed:

  • net-tools
  • iputils-ping
  • iproute2

In addition, containers used with vim-emu must define two environment variables: VIM_EMU_CMD and VIM_EMU_CMD_STOP. The first defines the command to execute after the container has been created and the network is set up; the second defines a shutdown command. Our server Dockerfile:

# parent image
FROM ubuntu:trusty

# install needed packages
RUN apt-get update && apt-get install -y \
    net-tools \
    iputils-ping \
    iproute2 \
    telnet telnetd \
    iperf

# set entry point for emulator gatekeeper
ENV VIM_EMU_CMD "iperf -s -D"
ENV VIM_EMU_CMD_STOP "echo 'Stop iperf_server'"

# run bash interpreter
CMD /bin/bash

Note that the iperf server is started with the -D argument, i.e., in daemon mode. Save this in a file named "Dockerfile.iperf_server" (in a folder named example-containers/) and build the image:

sudo docker build --tag=iperf_server -f example-containers/Dockerfile.iperf_server example-containers/

For the client, we simply leave VIM_EMU_CMD undefined or set it to a command without real effect:

[...]
ENV VIM_EMU_CMD "echo 'Client started'"
[...]

And build:

sudo docker build --tag=iperf_client -f example-containers/Dockerfile.iperf_client example-containers/

Now we can repeat the workflow shown above, with changes in steps 3 and 6:

Step 3: Instantiate two VNFs

# 3. (Terminal2) instantiate two VNFs, one in each PoP
vim-emu compute start -d dc1 -n vnf0 -i iperf_server:latest
vim-emu compute start -d dc2 -n vnf1 -i iperf_client:latest

Step 6: Test connectivity

# 6. (Terminal1)
containernet> vnf1 iperf -c vnf0

# Output:
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.4 port 51456 connected with 10.0.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.54 GBytes  1.32 Gbits/sec
containernet> 

The same is possible with a single iperf image by starting the server manually, as shown here in a Containernet example.

Deploy a 5GTANGO service package

In the first example, raw Docker images were used. Now we use a 5GTANGO service package instead. It can be on-boarded and instantiated with the 5GTANGO lightweight life cycle manager (LLCM) module. We modify the above topology by adding the instantiation of the LLCM module:

    llcm1 = TangoLLCMEndpoint("0.0.0.0", 5000, deploy_sap=False)
    llcm1.connectDatacenter(dc1)
    llcm1.connectDatacenter(dc2)
    # run the dummy gatekeeper (in another thread, don't block)
    llcm1.start()

Each VNF is based on an empty Docker container using the ubuntu:trusty image. The entire service is available in the services/ folder of the vim-emu-examples repository.

Preparation

(optional)

If you want, you can create your own 5GTANGO network service package using the 5GTANGO packaging tool. To install this tool, please follow the instructions given in its repository: tng-sdk-package.

# 1. package the network service project
~/vim-emu$ tng-pkg -p misc/tango-demo-service-project/

# output
2018-08-07 15:36:48 [INFO] [packager.py] Packager created: TangoPackager(e1329c21-542e-4b75-a083-70f37ab6af6e)
2018-08-07 15:36:48 [INFO] [packager.py] Creating 5GTANGO package using project: 'misc/tango-demo-service-project/'
2018-08-07 15:36:48 [INFO] [packager.py] Package created: 'eu.5gtango.emulator-example-service.0.1.tgo'
2018-08-07 15:36:48 [INFO] [packager.py] Packager done (0.2283s): TangoPackager(e1329c21-542e-4b75-a083-70f37ab6af6e)
===============================================================================
P A C K A G I N G   R E P O R T
===============================================================================
Packaged:    misc/tango-demo-service-project/
Project:     eu.5gtango.emulator-example-service.0.1
Artifacts:   3
Output:      eu.5gtango.emulator-example-service.0.1.tgo
Error:       None
Result:      Success.
=============================================================================== 

Workflow with a 5GTANGO service package - Step-by-step

Step 1: Start the emulator using a demo topology with two PoPs

# 1. (Terminal1) start the demo topology with 5GTANGO LLCM
~/vim-emu$ sudo python examples/tango_default_cli_topology_2_pop.py

# output (skipped)
containernet>

Step 2: On-board the 5GTANGO network service package to the 5GTANGO LLCM

# 2. (Terminal2) push empty service package to dummy gatekeeper
~/vim-emu$ curl -i -X POST -F package=@misc/eu.5gtango.emulator-example-service.0.1.tgo http://127.0.0.1:5000/packages

# output
{
    "error": null,
    "service_uuid": "8c7a9740-4a05-422a-8fa2-2a5fa34b16a0",
    "sha1": "9b64a73fe5889dd5ccefdf93742395d685ca7b25",
    "size": 3513
}

Step 3: Instantiate the on-boarded service

# 3. (Terminal2) instantiate the on-boarded service
~/vim-emu$ curl -X POST http://127.0.0.1:5000/instantiations -d "{}"

# output
{
    "service_instance_uuid": "a0266390-7bcf-40ed-9d53-70fdc0dfc76e"
}
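
Instead of curl, the same two REST calls can also be scripted. The following is a minimal sketch using Python's requests library (an assumed extra dependency); the port and the /packages and /instantiations endpoints are taken from the curl commands above, everything else is illustrative.

# Minimal sketch: on-board and instantiate the example package via the
# 5GTANGO LLCM REST API (same endpoints and port as the curl calls above).
import requests

LLCM = "http://127.0.0.1:5000"

# on-board the package (multipart upload, field name "package" as in the curl call)
with open("misc/eu.5gtango.emulator-example-service.0.1.tgo", "rb") as pkg:
    resp = requests.post(LLCM + "/packages", files={"package": pkg})
resp.raise_for_status()
print("service_uuid:", resp.json().get("service_uuid"))

# instantiate the on-boarded service (empty JSON body, as in the curl call)
resp = requests.post(LLCM + "/instantiations", json={})
resp.raise_for_status()
print("service_instance_uuid:", resp.json().get("service_instance_uuid"))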

Step 4: Check if VNFs are running

# 4. (Terminal2) check if all containers are up
~/vim-emu$ vim-emu compute list

# output:
+--------------+--------------+---------------+-------------------+-------------------------------------+
| Datacenter   | Container    | Image         | Interface list    | Datacenter interfaces               |
+==============+==============+===============+===================+=====================================+
| dc2          | vnf0.vdu01   | ubuntu:trusty | mgmt,input,output | dc2.s1-eth2,dc2.s1-eth3,dc2.s1-eth4 |
+--------------+--------------+---------------+-------------------+-------------------------------------+
| dc1          | vnf1.vdu01   | ubuntu:trusty | mgmt,input,output | dc1.s1-eth2,dc1.s1-eth3,dc1.s1-eth4 |
+--------------+--------------+---------------+-------------------+-------------------------------------+

Step 5: Interact with the VNFs through the interactive emulator CLI

Hint: Hit ENTER to have a clean screen in Terminal 1.

# 5. (Terminal1) let's check the network config of one container
containernet>   vnf0.vdu01.0 ifconfig

# output (shortened)
input     Link encap:Ethernet  HWaddr 4e:0c:6a:07:9b:62
          inet addr:10.0.0.3  Bcast:10.255.255.255  Mask:255.0.0.0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)
mgmt      Link encap:Ethernet  HWaddr ba:89:6e:46:49:b3
          inet addr:10.20.0.1  Bcast:10.20.0.255  Mask:255.255.255.0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)
output    Link encap:Ethernet  HWaddr 92:47:81:80:99:1c
          inet addr:10.30.0.1  Bcast:10.30.0.3  Mask:255.255.255.252
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

Step 6: Check inter VNF communication and chaining setup

# 6. (Terminal1) verify that the chained connection between vnf0 and vnf1 works
containernet>  vnf0.vdu01.0 ping -c 3  vnf1.vdu01.0

# output
PING 10.20.0.2 (10.20.0.2) 56(84) bytes of data.
64 bytes from 10.20.0.2: icmp_seq=1 ttl=64 time=81.0 ms
64 bytes from 10.20.0.2: icmp_seq=2 ttl=64 time=40.1 ms
64 bytes from 10.20.0.2: icmp_seq=3 ttl=64 time=40.1 ms
--- 10.20.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms

Step 7: Shutdown the emulator

Note: This will also terminate the service.

# 7. (Terminal1) shutdown the emulator
containernet> exit

5GTANGO LLCM Instance specific ENVs

The 5GTANGO LLCM can load instance-specific environment variables (introduced for the 5GTANGO NetSoft'19 demo). The idea is to look for <container_name>.env.yml files in a specific folder. If such files are found, the LLCM loads the ENVs and injects them into the containers upon start. This makes it simple to configure complex service deployments with multiple instances, where each instance has its own config.
For this, the TangoLLCMEndpoint constructor has the parameter env_conf_folder, to which you can assign a folder containing YAML files. Example:

  llcm1 = TangoLLCMEndpoint(
        "0.0.0.0", 32002, deploy_sap=False,
        env_conf_folder="~/tng-industrial-pilot/emulator-topologies/envs/")

Folder:

~/tng-industrial-pilot$ tree
.
└── emulator-topologies
    └── envs
        ├── vnf0.vdu01.0.env.yml
        └── vnf1.vdu01.0.env.yml

Both yaml files look like this:

vnf0.vdu01.0.env.yml and vnf1.vdu01.0.env.yml:

---
TEST_VAR: "this is some value"
ANOTHER_VAR: 1

The result can be checked with printenv:

containernet> vnf0.vdu01.0 printenv
HOSTNAME=vnf0.vdu01.0
ANOTHER_VAR=1
TERM=xterm
TEST_VAR="this is some value"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
VNF_NAME=vnf0.vdu01.0
PWD=/
PS1=
SHLVL=1
HOME=/root
_=/usr/bin/printenv
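
For reference, the lookup the LLCM performs can be thought of roughly as follows. This is only an illustrative sketch (not the actual implementation in emuvim.api.tango.llcm) and assumes PyYAML is available:

# Rough sketch, for illustration only: how instance-specific ENVs
# could be looked up based on the <container_name>.env.yml convention.
import os
import yaml

def load_instance_envs(env_conf_folder, container_name):
    # files are expected to be named <container_name>.env.yml
    path = os.path.join(os.path.expanduser(env_conf_folder),
                        "{}.env.yml".format(container_name))
    if not os.path.isfile(path):
        return {}
    with open(path) as f:
        return yaml.safe_load(f) or {}

# e.g. returns {'TEST_VAR': 'this is some value', 'ANOTHER_VAR': 1}
print(load_instance_envs("~/tng-industrial-pilot/emulator-topologies/envs/",
                         "vnf0.vdu01.0"))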

5GTANGO LLCM Placement

To determine the PoP on which a VNF should be placed, the TangoLLCMEndpoint constructor has the parameter placement_algorithm_obj, which expects an object of one of three classes from emuvim.api.tango.llcm: FirstDcPlacement (default), RoundRobinDcPlacement, or StaticConfigPlacement.

With FirstDcPlacement, all VNFs are placed on the first PoP in the list.

RoundRobinDcPlacement places the first VNF on the first PoP, the second VNF on the second PoP, and so on; after the last PoP it starts again with the first one, as long as there are VNFs left to place.
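
Using RoundRobinDcPlacement could then look like this (a short sketch; it is assumed here that the class can be instantiated without arguments):

  from emuvim.api.tango.llcm import RoundRobinDcPlacement
  [...]
  llcm1 = TangoLLCMEndpoint(
        "0.0.0.0", 5000, deploy_sap=False,
        placement_algorithm_obj=RoundRobinDcPlacement())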

With StaticConfigPlacement, the placement is determined by a YAML config file, whose path you have to pass to the constructor:

  from emuvim.api.tango.llcm import StaticConfigPlacement
  [...]
  llcm1 = TangoLLCMEndpoint(
        "0.0.0.0", 32002, deploy_sap=False,
        placement_algorithm_obj=StaticConfigPlacement(
            "~/tng-industrial-pilot/emulator-topologies/static_placement.yml"))

static_placement.yml:

vnf0.vdu01.0: dc1
vnf1.vdu01.0: dc2
[...]

Example of a simple snort setup

We deploy the Snort IDS service package (from the vim-emu examples), manually start additional probing containers (iperf client/server), and connect them to the deployed chain. Then we check whether the Snort VNF works correctly and monitors the traffic sent between the client and server containers. As topology, the script tango_default_cli_topology_1_pop.py is used, which contains only one PoP; a sketch of such a topology is shown below.
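
Such a single-PoP topology could look roughly like the 2-PoP script above, just with a single datacenter and without the inter-PoP link. This is only a sketch; the actual tango_default_cli_topology_1_pop.py shipped with the examples may differ in details:

import logging
from mininet.log import setLogLevel
from emuvim.dcemulator.net import DCNetwork
from emuvim.api.rest.rest_api_endpoint import RestApiEndpoint
from emuvim.api.tango import TangoLLCMEndpoint

logging.basicConfig(level=logging.DEBUG)
setLogLevel('info')


def create_topology():
    net = DCNetwork(monitor=False, enable_learning=True)
    # a single data center (PoP)
    dc1 = net.addDatacenter("dc1")
    # REST API endpoint used by the vim-emu CLI
    rapi1 = RestApiEndpoint("0.0.0.0", 5001)
    rapi1.connectDCNetwork(net)
    rapi1.connectDatacenter(dc1)
    rapi1.start()
    # 5GTANGO LLCM endpoint used to on-board and instantiate the package
    llcm1 = TangoLLCMEndpoint("0.0.0.0", 5000, deploy_sap=False)
    llcm1.connectDatacenter(dc1)
    llcm1.start()
    # start the emulation and enter the interactive CLI
    net.start()
    net.CLI()
    net.stop()


if __name__ == '__main__':
    create_topology()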

Setup

+--------+     +-------+     +--------+
| client <-----> snort <-----> server |
+--------+     +-------+     +--------+

Docker images and services

For the client and server, iperf images are used as shown in the custom image section.
The Snort element is deployed with a 5GTANGO service package and another custom Docker image.

snort_vim/
├── Dockerfile
└── snort
    ├── log_intf_statistics.py
    ├── restart_snort.sh
    ├── start.sh
    ├── stats.py
    └── stop.sh

The most important parts here are start.sh, restart_snort.sh, and stop.sh; the rest is for formatting results. In the Dockerfile, the necessary packages are installed, Snort itself is downloaded and installed, the necessary scripts from the snort subdirectory are added, and the Snort rules are downloaded and configured. The start and stop environment variables are set to:

ENV VIM_EMU_CMD "bash /snort/start.sh"
ENV VIM_EMU_CMD_STOP "echo 'Stop host...'"

VIM_EMU_CMD_STOP is not set to stop.sh because this script stores results that stay in the container and would be lost after the emulator is shut down; see the workflow section further down for more.
The Dockerfile also sets two environment variables pointing to the input and output interfaces:

ENV IFIN input
ENV IFOUT output

In start.sh, a bridge interface is created:

#!/bin/bash
date > /var/log/snort/start.txt

# remove IPs from input/output interface to prepare them for bridging
ip addr flush dev $IFIN
ip addr flush dev $IFOUT

# bridge interfaces (layer 2) and let snort listen to bridge (IDS mode)
brctl addbr br0
brctl addif br0 $IFIN $IFOUT
ifconfig br0 up

sh restart_snort.sh

In restart_snort.sh, Snort is (re)started and set to listen on the br0 interface.

pkill snort
sleep 1

# run snort as background process (snort3)
./snort -i br0 -k none -l /var/log/snort > /var/log/snort/snort_output.log 2>&1 &

echo "Snort VNF started ..."

We build the image with tag snort_vim:

sudo docker build -t snort_vim snort_vim

The 5GTANGO service package and the 5GTANGO source project for Snort are available here. Note that if your Snort image is tagged differently, you have to adjust the VNF descriptor.

Workflow of the snort example - Step-by-step

# 1. (Terminal1) start the demo topology
sudo python tango_default_cli_topology_1_pop.py

# 2. (Terminal2) push snort service package to dummy gatekeeper
curl -i -X POST -F package=@de.upb.ns-1vnf-ids-snort3.0.1.tgo http://127.0.0.1:5000/packages

# 3. (Terminal2) instantiate snort service
curl -X POST http://127.0.0.1:5000/instantiations -d "{}"

# 4. (Terminal2) start additional probing containers (iperf3 client and server)
vim-emu compute start -d dc1 -n client -i iperf_vim_client:latest
vim-emu compute start -d dc1 -n server -i iperf_vim_server:latest

# 5. (Terminal2) check if all containers are up
vim-emu compute list

# Output:
+--------------+--------------+-------------------------+-------------------+-------------------------------------+
| Datacenter   | Container    | Image                   | Interface list    | Datacenter interfaces               |
+==============+==============+=========================+===================+=====================================+
| dc1          | client       | iperf_vim_client:latest | emu0              | dc1.s1-eth4                         |
+--------------+--------------+-------------------------+-------------------+-------------------------------------+
| dc1          | vnf0.vdu01.0 | snort_vim:latest        | mgmt,input,output | dc1.s1-eth1,dc1.s1-eth2,dc1.s1-eth3 |
+--------------+--------------+-------------------------+-------------------+-------------------------------------+
| dc1          | server       | iperf_vim_server:latest | emu0              | dc1.s1-eth5                         |
+--------------+--------------+-------------------------+-------------------+-------------------------------------+


# 6. (Terminal2) connect probing containers to chain
vim-emu network add -b -src client:emu0 -dst vnf0.vdu01.0:input
vim-emu network add -b -src vnf0.vdu01.0:output -dst server:emu0

# 7.1 (Terminal1) verify that client can connect to server through IDS VNF
containernet>  client ping -c 5 server

# Output:
PING 10.0.0.8 (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: icmp_seq=0 ttl=64 time=0.130 ms
... and so on ...

# 7.2 (Terminal1) measure bandwidth
containernet> client iperf -c server


# Output:
------------------------------------------------------------
Client connecting to 10.0.0.8, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.6 port 35170 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  24.3 GBytes  20.9 Gbits/sec

# 8 (Terminal1) verify that the Snort IDS investigated our ping traffic
containernet> vnf0.vdu01.0 sh stop.sh

# Output:
Started result processing ...
[...]
done.

# 9.1 (Terminal1) Check whether result files are there:
containernet> vnf0.vdu01.0 ls /var/log/snort

# Output:
result.yml  snort_output.log  start.txt  stop.txt

# 9.2 (Terminal2) Get result files out of the container:
$ sudo docker cp mn.vnf0.vdu01.0:/var/log/snort/result.yml .
$ sudo docker cp mn.vnf0.vdu01.0:/var/log/snort/snort_output.log .
$ sudo docker cp mn.vnf0.vdu01.0:/var/log/snort/start.txt .
$ sudo docker cp mn.vnf0.vdu01.0:/var/log/snort/stop.txt .

# 10 (Terminal1) Stop emulator:
containernet> exit
[...]

Steps 2-6 of Terminal 2 are also available as a bash script, which requires the path to the service package as its first argument:

bash terminal2.sh _path_/_to_/de.upb.ns-1vnf-ids-snort3.0.1.tgo
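
The bash script itself is not reproduced here. As a rough Python equivalent of the same steps (an illustrative sketch, not the actual terminal2.sh; it assumes the requests library and the vim-emu CLI are available), the automation could look like this:

# Illustrative sketch of steps 2-6 in Python (not the actual terminal2.sh).
# All commands and endpoints are the ones shown in the workflow above.
import subprocess
import sys

import requests

pkg_path = sys.argv[1]  # path to de.upb.ns-1vnf-ids-snort3.0.1.tgo

# 2. push the snort service package to the dummy gatekeeper (LLCM)
with open(pkg_path, "rb") as pkg:
    requests.post("http://127.0.0.1:5000/packages",
                  files={"package": pkg}).raise_for_status()

# 3. instantiate the snort service
requests.post("http://127.0.0.1:5000/instantiations", json={}).raise_for_status()

# 4. start the additional probing containers
# (depending on timing, short waits between the steps may be needed)
subprocess.check_call(["vim-emu", "compute", "start", "-d", "dc1",
                       "-n", "client", "-i", "iperf_vim_client:latest"])
subprocess.check_call(["vim-emu", "compute", "start", "-d", "dc1",
                       "-n", "server", "-i", "iperf_vim_server:latest"])

# 5. check if all containers are up
subprocess.check_call(["vim-emu", "compute", "list"])

# 6. connect the probing containers to the chain
subprocess.check_call(["vim-emu", "network", "add", "-b",
                       "-src", "client:emu0", "-dst", "vnf0.vdu01.0:input"])
subprocess.check_call(["vim-emu", "network", "add", "-b",
                       "-src", "vnf0.vdu01.0:output", "-dst", "server:emu0"])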

Results

result.yml (an extract):

snort3_total_allow: 7399530.0
snort3_total_analyzed: 7399530.0
snort3_total_discards: 324274.0
snort3_total_dropped: 19868465.0
snort3_total_outstanding: 19868287.0
snort3_total_received: 27267817.0
stat__br0__collisions: 0.0
stat__br0__multicast: 0.0
stat__br0__rx_bytes: 27612900389.0
[...]

snort_output.log:

Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/ZM64DQFSGYA2IVCNJS33S5AUII:/var/lib/docker/overlay2/l/VF67RQ7SNQRBT56B3UO5DMWZ7J:/var/lib/docker/overlay2/l/PZ777ZN5ZX7GUXFET5WUX5U3BX:/var/lib/docker/overlay2/l/UMKSDIPWCSAY5MNI54BQFJIVVG:/var/lib/docker/overlay2/l/K24YEAZ7CGLMKACHHA5IMCNOYL:/var/lib/docker/overlay2/l/ZJ5EP6F73NZCEIGMDAUCWAQJPO:/var/lib/docker/overlay2/l/VVK5RLY5K7DUL4ZDPHYMMUAR6D:/var/lib/docker/overlay2/l/5I5BE27H43W5EEUBV6VIZYTO2K:/var/lib/docker/overlay2/l/JQAVZSOA7YENX'
Unexpected end of /proc/mounts line `XJE2TZ4HHWLVN:/var/lib/docker/overlay2/l/OYWSQMWPITTEASHE2Q2H7ASZCJ:/var/lib/docker/overlay2/l/5NIFV4QTFGW65PZ7O5QERHUIZ7:/var/lib/docker/overlay2/l/5B2UEOTQOHLSAI66TOTWYEFDQQ:/var/lib/docker/overlay2/l/KFIY2PQEUB3WBR7LCPD76SIOXL:/var/lib/docker/overlay2/l/5G3UAA3GW6PV37CNXGKGP75MET:/var/lib/docker/overlay2/l/TFSXYHQIPWHZ4RNBL3YV57K7BV:/var/lib/docker/overlay2/l/BBYDYGSYJ6VF77UDMFJTSRSXHL:/var/lib/docker/overlay2/l/MU6U3YUCC4RFTGQ6ZA76CE5GLU:/var/lib/docker/overlay2/l/GD24DLKQJBRUTC4234ME7V5YHJ:/var/lib/do'
Unexpected end of /proc/mounts line `cker/overlay2/l/5OEYWZN3FX263GMFI52EQWFBLC:/var/lib/docker/overlay2/l/KL77JEABNISA7PZ6SBHTPICNML:/var/lib/docker/overlay2/l/HWILQKUWANDSV54HRN63OYDJWP:/var/lib/docker/overlay2/l/25V4RPK2P7UNXJ4NFGNHFZ6QCL:/var/lib/docker/overlay2/l/2TZGTPOA5PYTH2UJFBCQIR3W4J:/var/lib/docker/overlay2/l/OMBUM5XG4MIXREQ5TN5ZRJJQZL:/var/lib/docker/overlay2/l/AHEGARHJDHQ2D2YIFAGDQL32LC:/var/lib/docker/overlay2/l/ZKQ6UDZH63DQZ7RNOD3SN2R2CU:/var/lib/docker/overlay2/l/ETPEIDQJRD6DFVNZDT4SWDHTSR:/var/lib/docker/overlay2/l/PWZ2GBV53'
Unexpected end of /proc/mounts line `6Q57Z5QLKLAJZPVAW:/var/lib/docker/overlay2/l/YOTYO4PGDFFESGB35HNZTFKLMS:/var/lib/docker/overlay2/l/BJQIRGXLFZOLOBKUG7NXBVUTXN:/var/lib/docker/overlay2/l/EZB2XEJDY66MCD47EOKDDMU7CE:/var/lib/docker/overlay2/l/RDZ5X652GTVSQK4ZPK57IKNIQ6:/var/lib/docker/overlay2/l/X4MUFW5YBKIK5ZF2TR2PEXLWIY:/var/lib/docker/overlay2/l/J4TBHM4GSWX4BGDUZ3KSD7CSFQ:/var/lib/docker/overlay2/l/V4QOWSEOWNYTB4DLDH2C2ALSP7,upperdir=/var/lib/docker/overlay2/d158c16e38c3af2dd8ad2d0a793ff9fc34f0040365dc6dc54a09fa36846c7ee4/diff,workdir=/va'
--------------------------------------------------
o")~   Snort++ 3.0.0-247
--------------------------------------------------
--------------------------------------------------
pcap DAQ configured to passive.
Commencing packet processing
++ [0] br0
** caught term signal
== stopping
-- [0] br0
--------------------------------------------------
Packet Statistics
--------------------------------------------------
daq
                 received: 27267817
                 analyzed: 7399530
                  dropped: 19868465
              outstanding: 19868287
                    allow: 7399530
                     idle: 634
                 rx_bytes: 1616173343
--------------------------------------------------
codec
                    total: 7399530     	(100.000%)
                 discards: 324274      	(  4.382%)
                      arp: 6           	(  0.000%)
                      eth: 7399530     	(100.000%)
                    icmp4: 18          	(  0.000%)
                    icmp6: 3218825     	( 43.500%)
                     ipv4: 876304      	( 11.843%)
                     ipv6: 6523220     	( 88.157%)
            ipv6_hop_opts: 2092129     	( 28.274%)
                      tcp: 552012      	(  7.460%)
                      udp: 3304395     	( 44.657%)
                     vlan: 20          	(  0.000%)
--------------------------------------------------
Module Statistics
--------------------------------------------------
detection
                 analyzed: 7399530
--------------------------------------------------
Summary Statistics
--------------------------------------------------
process
                  signals: 1
--------------------------------------------------
timing
                  runtime: 00:11:09
                  seconds: 669.853189
                  packets: 27267817
                 pkts/sec: 40759
o")~   Snort exiting

The error messages at the beginning are harmless and can be safely ignored.

start.txt:

Fri Oct 25 13:49:23 UTC 2019

stop.txt:

Fri Oct 25 14:00:39 UTC 2019