
Setup execution platform (vim emu)


This guide describes how to install and set up tng-bench together with a vim-emu-based execution platform. The entire setup process of the execution platform is automated with Ansible and is meant to be executed against a fresh Ubuntu 16.04 installation.

Note 1: Please read this guide carefully and follow its instructions point-by-point.

Overview

You need two machines (bare metal or VM) for the described installation. A single node installation is not supported.

Resulting setup:

+------------------------+       +----------------------------+
| +--------------------+ |       | +------------------------+ |
| |                    | |       | |                        | |
| |     tng-bench      | |       | |    tng-bench-emusrv    | |
| |(experiment control)|-+-------+-> (vim-emu w. ctrl. API) | |
| |                    | |       | |                        | |
| +--------------------+ |       | +------------------------+ |
|                        |       |                            |
|                        |       |     Machine 2: Target      |
| Machine 1 (tng-bench)  |       |(vim-emu execution platform)|
+------------------------+       +----------------------------+
  • Machine 1: This machine runs tng-sdk-benchmark and manages and controls the benchmarking experiments executed on Machine 2. To do so, it needs a network connection to Machine 2 (SSH and TCP ports 4998-5002).

  • Machine 2: This machine acts as the executor for the profiling experiments. In the shown example, vim-emu is used as the execution environment. In this case, a vim-emu instance is created by the tool tng-bench-emusrv. This tool offers a REST API to control experiment deployments on top of vim-emu.

Note 2: This installation guide describes the automated remote installation of Machine 2 using an Ansible playbook executed on Machine 1.

Requirements and Assumptions

  • Machine 1 (controller, running tng-sdk-benchmark):
    • Ubuntu 16.04 (or 18.04)
      • Update packages (sudo apt-get upgrade -y)
    • Ansible installed
    • Git installed
    • Python 3 installed
    • SSH configured for password-less login to Machine 2
  • Machine 2 (execution platform, running vim-emu):
    • Ubuntu 16.04 LTS (fresh!) or Ubuntu 18.04 (experimental!)
      • Update packages (sudo apt-get upgrade -y)
    • Python 3 installed
    • Password-less SSH access from Machine 1
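
On a fresh Ubuntu system, the tools required on Machine 1 can, for example, be installed as follows (package names are the standard Ubuntu ones and may differ slightly between releases):

# on Machine 1 do:
sudo apt-get update
sudo apt-get install -y git python3 python3-pip ansible
# virtualenv is used further below to isolate the SDK tools
sudo pip3 install virtualenv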

Note 3: Do not try to install Machine 2 on a machine (or VM) that was previously used for other tasks. The installation performs some system reconfigurations (e.g., of the firewall) that can break a machine used for other purposes. Use a fresh Ubuntu 16.04/18.04 installation.

Note 4: Only install on machines in a private network/lab environment. Machine 2 will expose control ports (e.g., the Docker API) to the public without any authentication mechanisms. It will also run the emulator as the root user. All of this can cause security risks!

Installation

Install Machine 1

1. Clone and install tng-sdk-benchmark on Machine 1

Also see the official quick guide.

# on Machine 1 do:
git clone https://github.com/sonata-nfv/tng-sdk-benchmark.git

Create a virtualenv to install the SONATA/5GTANGO SDK tools in:

# find out the path to your python3
which python3
# create a fresh virtualenv
virtualenv -p <path_to_python3> venv
# activate the virtualenv
source venv/bin/activate

Install the other 5GTANGO tools:

pip3 install tngsdk.project
pip3 install tngsdk.package

Finally, install tng-sdk-benchmark:

# install tng-sdk-benchmark
cd tng-sdk-benchmark/
python3 setup.py install

Test the installation:

tng-bench --help

Install Machine 2

1. Prepare Machine 2

Create the user tngbench on Machine 2 and allow sudo for the new user.

Example:

sudo adduser tngbench
sudo usermod -aG sudo tngbench
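
To verify that the new account and its sudo rights work, you can, for example, do the following quick sanity check:

# on Machine 2 do:
su - tngbench
sudo whoami   # should print 'root' after entering the tngbench password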

2. Configure SSH connection between Machine 1 and Machine 2

Make sure Machine 1 can connect to Machine 2 via SSH (using the user tngbench) and that git and Ansible are installed on Machine 1.
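
If password-less SSH is not configured yet, a common way to set it up is to create a key on Machine 1 and copy it to the tngbench account on Machine 2 (replace <HOST_OR_IP_OF_MACHINE_2> with the real address of Machine 2):

# on Machine 1 do:
ssh-keygen -t rsa -b 4096    # accept the defaults, or reuse an existing key
ssh-copy-id tngbench@<HOST_OR_IP_OF_MACHINE_2>
# test the password-less login
ssh tngbench@<HOST_OR_IP_OF_MACHINE_2> "echo SSH works"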

3. Configure Ansible on Machine 1 to point to Machine 2

Check the Ansible Documentation to learn how to properly configure target hosts in Ansible.

# on Machine 1 do:
cd tng-sdk-benchmark/node-installers
# add your target (Machine 2) to the hosts.yml
vim hosts.yml
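
The repository already ships a hosts.yml and its exact layout may differ, but a minimal Ansible YAML inventory defining the vim-emu-nodes group used in this guide could look like this (the host alias testvm and the address placeholder are examples):

all:
  children:
    vim-emu-nodes:
      hosts:
        testvm:
          ansible_host: <HOST_OR_IP_OF_MACHINE_2>
          ansible_user: tngbench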

Test your Ansible configuration:

ansible vim-emu-nodes -i hosts.yml -m ping

This should give something like:

testvm | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If this does not succeed, you MUST fix your Ansible setup before you can proceed with the rest of this guide.
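
Typical causes are a wrong address in hosts.yml or a broken SSH setup. The following commands, run on Machine 1, usually help to narrow the problem down:

# check the plain SSH connection to Machine 2 first
ssh tngbench@<HOST_OR_IP_OF_MACHINE_2> "echo SSH works"
# re-run the Ansible ping with verbose output to see where it fails
ansible vim-emu-nodes -i hosts.yml -m ping -vvv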

4. Run Ansible installer on Machine 1 to install Machine 2

The following command will run an Ansible playbook that does a remote installation on Machine 2.

# on Machine 1 do:
ansible-playbook --ask-become-pass -i hosts.yml node-vim-emu.yml

The installation takes ~30 minutes.

5. Verify your installation on Machine 2

The Ansible script will automatically run the vim-emu server (tng-bench-emusrv) inside a screen session at the end of the installation. To verify that the installation worked and that the server was started, log into Machine 2 (using, e.g., SSH) and run the following:

# on Machine 2 connect to the running screen session:
sudo screen -r  # (the tng-bench-emusrv server is running in a screen session)
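# detach from the session again with CTRL+A followed by D (CTRL+C would stop the server)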

If you see something like this, everything worked correctly and Machine 2 is ready to run benchmarking experiments (controlled by Machine 1):

2018-11-28 15:38:39 testvm tngsdk.benchmark.pdriver.vimemu.server[10391] INFO Starting tng-bench-emusrv server ... CTRL+C to exit.

Note 5: The vim-emu server (tng-bench-emusrv) runs inside a screen session. This means it will not be started automatically if Machine 2 is rebooted. In that case you have to start the server manually.

Manually start tng-bench-emusrv

Manually start tng-bench-emusrv inside a screen session on Machine 2:

# start (on Machine 2)
sudo screen -d -m tng-bench-emusrv
# check output
sudo screen -r

Configuration

The tng-sdk-benchmark controller on Machine 1 now needs to know where it can find the freshly installed execution platform (Machine 2). To achieve this, go to your home folder (cd ~/) and create a .tng-bench.conf file with the following content:

#
# tng-sdk-benchmark configuration file: ~/.tng-bench.conf
#
---
# list of target platforms for benchmark execution
targets:
  - name: default
    description: "vim-emu on machine 2"
    pdriver: vimemu  # type of target (vimemu, osm)
    pdriver_config:
      host: <HOST_OR_IP_OF_MACHINE_2>  # <-- change here
      emusrv_port: 4999
      llcm_port: 5000
      docker_port: 4998
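
To check that the controller can actually reach the execution platform, you can, for example, test the tng-bench-emusrv control port from Machine 1 with netcat (the other ports, 4998 and 5000, may only be open while an experiment is being deployed):

# on Machine 1 do (requires netcat and a running tng-bench-emusrv on Machine 2):
nc -z -v <HOST_OR_IP_OF_MACHINE_2> 4999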

Usage and Test

Finally, you can run a first benchmarking experiment, which is shipped as an example together with tng-sdk-benchmark. The example looks like this:

+-----------+    +-----------------+    +-----------+
| mp.input  |--->|  Suricata IDS   |--->| mp.output |
+-----------+    +-----------------+    +-----------+

To run it, execute the following command(s) on Machine 1:

Note 6: You have to activate the Python virtual environment used for the installation of Machine 1: source venv/bin/activate.

Check that tng-bench is correctly installed on Machine 1:

tng-bench -h

Ensure that a tng-workspace exists on that machine:

tng-wks

Then start the example experiment:

# on Machine 1 (in tng-sdk-benchmark/) do:
tng-bench -p examples/peds/ped_suricata_tp_small.yml --no-prometheus

The terminal output should look like this:

 19:59:44 benchmark[7793] INFO 5GTANGO benchmarking/profiling tool initialized
 19:59:44 benchmark[7793] INFO Loaded PED file '/Users/manuel/tango/tng-sdk-benchmark/examples/peds/ped_suricata_tp_small.yml'.
 19:59:44 experiment[7793] INFO Populated experiment specification: 'service_throughput' with 1 configurations to be executed.
 19:59:44 generator.tango[7793] INFO New 5GTANGO service configuration generator
 19:59:44 generator.tango[7793] INFO Generating 1 service experiments using /Users/manuel/tango/tng-sdk-benchmark/examples/peds/../services/ns-1vnf-ids-suricata
 19:59:45 generator.tango[7793] INFO Generating 1 projects for Experiment(service_throughput)
 19:59:45 generator.tango[7793] INFO Generated project (1/1): service_throughput_00000.tgo
--------------------------------------------------------------------------------
5GTANGO tng-bench: Experiment generation report
--------------------------------------------------------------------------------
Generated packages for 1 experiments with 1 configurations.
Total time: 1.6818
--------------------------------------------------------------------------------
 19:59:45 executor[7793] INFO Initialized executor with 1 experiments and [1] configs
 19:59:45 pdriver.vimemu[7793] INFO Initialized VimEmuDriver with {'host': '172.0.0.120', 'emusrv_port': 4999, 'llcm_port': 5000, 'docker_port': 4998}
 19:59:45 executor[7793] INFO Preparing target platforms
 19:59:45 executor[7793] INFO Executing experiments
 19:59:45 executor[7793] INFO Setting up 'ExperimentConfiguration(service_throughput_00000)'
 19:59:45 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 0/60
 19:59:47 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 1/60
 19:59:49 pdriver.vimemu.emuc[7793] INFO Waiting for emulator LLCM ... 2/60
 19:59:49 pdriver.vimemu.emuc[7793] INFO Emulator LLCM ready
 19:59:49 pdriver.vimemu.emuc[7793] INFO On-boarding to LLCM: /var/folders/yx/lvxqrl7j7954pkz6mmsh72br0000gn/T/tmp8uyzaiss/gen_pkgs/service_throughput_00000.tgo
 19:59:50 pdriver.vimemu.emuc[7793] INFO Instantiating NS: f709c4df-1e5d-4a2b-96f2-b3cc3426b2b6
 19:59:53 pdriver.vimemu[7793] INFO Instantiated service: 71ae0af6-8167-4ea3-8d80-28bdfb2d8000
 19:59:53 executor[7793] INFO Executing 'ExperimentConfiguration(service_throughput_00000)'
 20:00:01 pdriver.vimemu[7793] INFO Collecting experiment results ...
 20:00:01 pdriver.vimemu[7793] INFO Finalized 'ExperimentConfiguration(service_throughput_00000)'
 20:00:03 executor[7793] INFO Teardown 'ExperimentConfiguration(service_throughput_00000)'
 20:00:06 executor[7793] INFO Teardown target platforms
 20:00:06 helper[7793] INFO Downloading: https://raw.githubusercontent.com/mpeuster/vnf-bench-model/dev/experiments/vnf-br/templates/vnf-bd.yaml
 20:00:06 7793] INFO Prepared 1 result processor(s)
 20:00:06 7793] INFO Running result processor '<tngsdk.benchmark.resultprocessor.ietfbmwg.IetfBmwgResultProcessor object at 0x104a4bf60>'
 20:00:06 resultprocessor.ietfbmwg[7793] INFO IETF BMWG BD dir not specified (--ibbd). Skipping.

Finally, a folder with results should be produced in the tng-sdk-benchmark folder:

results/
└── service_throughput_00000
    ├── cmon.json
    ├── mn.mp.input
    │   ├── clogs.log
    │   └── tngbench_share
    │       ├── cmd_start.log
    │       └── cmd_stop.log
    ├── mn.mp.output
    │   ├── clogs.log
    │   └── tngbench_share
    │       ├── cmd_start.log
    │       └── cmd_stop.log
    └── mn.vnf0
        ├── clogs.log
        └── tngbench_share

Checking results/service_throughput_00000/mn.mp.input/tngbench_share/cmd_start.log shows you the performance measured on the input probe:

------------------------------------------------------------
Client connecting to 20.0.0.254, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 20.0.0.1 port 56872 connected with 20.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 3.0 sec  1.28 GBytes  3.65 Gbits/sec

Congratulations, you did your first fully automated profiling experiment using tng-sdk-benchmark.

FAQ

Nothing yet.