From 81a926621f6834beac60a231b170c701ecb81873 Mon Sep 17 00:00:00 2001
From: Larry Peterson
Date: Tue, 21 May 2024 14:45:43 -0700
Subject: [PATCH] sync'ed appendix

---
 dict.txt                |   3 +
 index.rst               |   2 +
 software/blueprints.rst | 400 +++++++++++++++++++++++++++++++++++++
 software/devel.rst      | 217 ++++++++++++++++++++
 software/developer.todo | 431 ----------------------------------------
 software/directory.rst  |  80 +++-----
 software/gnb.rst        | 189 ++++++------------
 software/gnbsim.rst     |  24 ++-
 software/inspect.rst    |  18 +-
 software/network.rst    |  70 ++++++-
 software/overview.rst   |  42 ++--
 software/roc.rst        |  22 +-
 software/scale.rst      |  35 ++--
 software/start.rst      |  87 ++++----
 14 files changed, 902 insertions(+), 718 deletions(-)
 create mode 100644 software/blueprints.rst
 create mode 100644 software/devel.rst
 delete mode 100644 software/developer.todo

diff --git a/dict.txt b/dict.txt
index b23a4c1..e3253b2 100644
--- a/dict.txt
+++ b/dict.txt
@@ -174,6 +174,7 @@ instantiations
 interoperate
 interworking
 invariants
+iteratively
 judgement
 kbps
 kilobits
@@ -203,6 +204,7 @@ parameterizing
 performant
 pre
 prem
+prereqs
 programmability
 pseudowires
 reachability
@@ -228,6 +230,7 @@ submodule
 submodules
 subnet
 subnets
+templating
 toolchain
 toolset
 transformative
diff --git a/index.rst b/index.rst
index 3bdbb23..8d28a65 100644
--- a/index.rst
+++ b/index.rst
@@ -44,3 +44,5 @@ Larry Peterson, Oguz Sunay, and Bruce Davie
    software/gnbsim.rst
    software/gnb.rst
    software/roc.rst
+   software/devel.rst
+   software/blueprints.rst
diff --git a/software/blueprints.rst b/software/blueprints.rst
new file mode 100644
index 0000000..e8a400e
--- /dev/null
+++ b/software/blueprints.rst
@@ -0,0 +1,400 @@
+Other Blueprints
+-----------------------
+
+The previous sections describe how to deploy three Aether blueprints,
+corresponding to three variants of ``vars/main.yml``. This section
+documents additional blueprints, each defined by a combination of
+Ansible components:
+
+* A ``vars/main-blueprint.yml`` file, checked into the
+  ``aether-onramp`` repo, is the "root" of the blueprint
+  specification.
+
+* A ``hosts.ini`` file, documented by example, specifies the target
+  servers required by the blueprint.
+
+* A set of Make targets, defined in a submodule and imported into
+  OnRamp's global Makefile, provides commands to install and uninstall
+  the blueprint.
+
+* (Optional) A new ``aether-blueprint`` repo defines the Ansible Roles
+  and Playbooks required to deploy a new component.
+
+* (Optional) New Roles, Playbooks, and Templates, checked into
+  existing repos/submodules, customize existing components for
+  integration with the new blueprint. To support blueprint
+  independence, these elements are intentionally kept "narrow", rather
+  than glommed onto an existing element.
+
+* (Optional) Any additional hardware (beyond the Ansible-managed
+  Aether servers) required to support the blueprint.
+
+* A Jenkins job, added to the set of OnRamp integration tests,
+  verifies that the blueprint successfully deploys Aether.
+
+The goal of establishing a well-defined procedure for adding new
+blueprints to OnRamp is to encourage the community to contribute (and
+maintain) new Aether configurations and deployment scenarios.\ [#]_
+The rest of this section documents the community-contributed
+blueprints to date.
+
+.. [#] Not all possible configurations of Aether require a
+       blueprint. There are other ways to add variability, for
+       example, by documenting simple ways to modify an existing
+       blueprint. 
Disabling ``core.standalone`` and selecting an + alternative ``core.values_file`` are two common examples. + +Multiple UPFs +~~~~~~~~~~~~~~~~~~~~~~ + +The base version of SD-Core includes a single UPF, running in the same +Kubernetes namespace as the Core's control plane. This blueprint adds +the ability to bring up multiple UPFs (each in a different namespace), +and uses ROC to establish the *UPF-to-Slice-to-Device* bindings +required to activate end-to-end user traffic. The resulting deployment +is then verified using gNBsim. + +The Multi-UPF blueprint includes the following: + +* Global vars file ``vars/main-upf.yml`` gives the overall + blueprint specification. + +* Inventory file ``hosts.ini`` is identical to that used in the + `Emulated RAN `__ section. Minimally, + SD-Core runs on one server and gNBsim runs on a second server. + (The Quick Start deployment, with both SD-Core and gNBsim running + in the same server, also works.) + +* New make targets, ``5gc-upf-install`` and ``5gc-upf-uninstall``, to + be executed after the standard SD-Core installation. The blueprint + also reuses the ``roc-load`` target to activate new slices in ROC. + +* New Ansible role (``upf``) added to the ``5gc`` submodule, including + a new UPF-specific template (``upf-5g-values.yaml``). + +* New models file (``roc-5g-models-upf2.json``) added to the + ``roc-load`` role in the ``amp`` submodule. This models file is + applied as a patch *on top of* the base set of ROC models. (Since + this blueprint is demonstrated using gNBsim, the assumed base models + are given by ``roc-5g-models.json``.) + +* Two nightly integration tests that validate the Multi-UPF blueprint + can be viewed on Jenkins (assuming you are a registered user): + `single-server test + `__, + `two-server test + `__. + +To use Multi-UPF, first copy the vars file to ``main.yml``: + +.. code-block:: + + $ cd vars + $ cp main-upf.yml main.yml + +Then edit ``hosts.ini`` and ``vars/main.yml`` to match your local +target servers, and deploy the base system (as in previous sections): + +.. code-block:: + + $ make k8s-install + $ make roc-install + $ make roc-load + $ make 5gc-core-install + $ make gnbsim-install + +You can also optionally install the monitoring subsystem. Note that +because ``main.yml`` sets ``core.standalone: "false"``, any models +loaded into ROC are automatically applied to SD-Core. + +At this point you are ready to bring up additional UPFs and bind them +to specific slices and devices. This involves first editing the +``upf`` block in the ``core`` section of ``vars/main.yml``: + +.. code-block:: + + upf: + ip_prefix: "192.168.252.0/24" + iface: "access" + helm: + chart_ref: aether/bess-upf + values_file: "deps/5gc/roles/upf/templates/upf-5g-values.yaml" + additional_upfs: + "1": + ip: + access: "192.168.252.6/24" + core: "192.168.250.6/24" + ue_ip_pool: "172.248.0.0/16" + # "2": + # ip: + # access: "192.168.252.7/24" + # core: "192.168.250.7/24" + # ue_ip_pool: "172.247.0.0/16" + +As shown above, one additional UPF is enabled (beyond ``upf-0`` that +already came up as part of SD-Core), with the spec for yet another UPF +commented out. In this example configuration, each UPF is assigned a +subnet on the ``access`` and ``core`` bridges, along with the IP +address pool for UEs that the UPF serves. Once done with the edits, +launch the new UPF(s) by typing: + +.. 
code-block::

+   $ make 5gc-upf-install
+
+At this point the new UPF(s) will be running (you can verify this
+using ``kubectl``), but no traffic will be directed to them until UEs
+are assigned to their IP address pool. Doing so requires loading the
+appropriate bindings into ROC, which you can do by editing the
+``roc_models`` line in the ``amp`` section of ``vars/main.yml``.
+Comment out the original models file already loaded into ROC, and
+uncomment the new patch that is to be applied:
+
+.. code-block::
+
+   amp:
+     # roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models.json"
+     roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models-upf2.json"
+
+Then run the following to load the patch:
+
+.. code-block::
+
+   $ make roc-load
+
+At this point you can bring up the Aether GUI and see that a second
+slice and a second device group have been mapped onto the second UPF.
+
+Now you are ready to run traffic through both UPFs. Because the
+configuration files identified in the ``servers`` block of the
+``gnbsim`` section of ``vars/main.yml`` align with the IMSIs bound to
+each Device Group (which are bound to each slice, which are in turn
+bound to each UPF), the emulator sends data through both UPFs. To run
+the emulation, type:
+
+.. code-block::
+
+   $ make gnbsim-simulator-run
+
+SD-RAN
+~~~~~~~~~~~~~~~~~~~~~~
+
+This blueprint runs SD-Core and SD-RAN in tandem, with RANSIM
+emulating various RAN elements. (The OnRamp roadmap includes plans to
+couple SD-RAN with other virtual and physical RAN elements, but RANSIM
+is currently the only option.)
+
+The SD-RAN blueprint includes the following:
+
+* Global vars file ``vars/main-sdran.yml`` gives the overall
+  blueprint specification.
+
+* Inventory file ``hosts.ini`` is identical to that used in the Quick
+  Start deployment, with both SD-RAN and SD-Core co-located on a
+  single server.
+
+* New make targets, ``aether-sdran-install`` and
+  ``aether-sdran-uninstall``, to be executed after the standard
+  SD-Core installation.
+
+* A new submodule ``deps/sdran`` (corresponding to repo
+  ``aether-sdran``) defines the Ansible Roles and Playbooks required
+  to deploy SD-RAN.
+
+* A nightly integration test that validates the SD-RAN blueprint can
+  be viewed on `Jenkins
+  `__
+  (assuming you are a registered user).
+
+To use SD-RAN, first copy the vars file to ``main.yml``:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-sdran.yml main.yml
+
+Then edit ``hosts.ini`` and ``vars/main.yml`` to match your local
+target servers, and deploy the base system (as in previous sections),
+followed by SD-RAN:
+
+.. code-block::
+
+   $ make aether-k8s-install
+   $ make aether-5gc-install
+   $ make aether-sdran-install
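+
+If you are scripting the deployment, you can optionally block until
+the SD-RAN pods are ready before inspecting them. The following is a
+sketch, assuming the charts deploy everything into the ``sdran``
+namespace (as the ``kubectl`` output below confirms):
+
+.. code-block::
+
+   # Wait up to five minutes for all SD-RAN pods to become Ready
+   $ kubectl wait pod --all -n sdran --for=condition=Ready --timeout=300s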
+
+Use ``kubectl`` to validate that the SD-RAN workload is running, which
+should result in output similar to the following:
+
+.. code-block::
+
+   $ kubectl get pods -n sdran
+   NAME                             READY   STATUS    RESTARTS   AGE
+   onos-a1t-68c59fb46-8mnng         2/2     Running   0          3m12s
+   onos-cli-c7d5b54b4-cddhr         1/1     Running   0          3m12s
+   onos-config-5786dbc85c-rffv7     3/3     Running   0          3m12s
+   onos-e2t-5798f554b7-jgv27        2/2     Running   0          3m12s
+   onos-kpimon-555c9fdb5c-cgl5b     2/2     Running   0          3m12s
+   onos-topo-6b59c97579-pf5fm       2/2     Running   0          3m12s
+   onos-uenib-6f65dc66b4-b78zp      2/2     Running   0          3m12s
+   ran-simulator-5d9465df55-p8b9z   1/1     Running   0          3m12s
+   sd-ran-consensus-0               1/1     Running   0          3m12s
+   sd-ran-consensus-1               1/1     Running   0          3m12s
+   sd-ran-consensus-2               1/1     Running   0          3m12s
+
+Note that the SD-RAN workload includes RANSIM as one of its pods;
+there is no separate "run simulator" step as is the case with gNBsim.
+To validate that the emulation ran correctly, query the ONOS CLI as
+follows:
+
+Check ``onos-kpimon`` to see if 6 cells are present:
+
+.. code-block::
+
+   $ kubectl exec -it deployment/onos-cli -n sdran -- onos kpimon list metrics
+
+Check ``ran-simulator`` to see if 10 UEs and 6 cells are present:
+
+.. code-block::
+
+   $ kubectl exec -it deployment/onos-cli -n sdran -- onos ransim get cells
+   $ kubectl exec -it deployment/onos-cli -n sdran -- onos ransim get ues
+
+Check ``onos-topo`` to see if ``E2Cell`` is present:
+
+.. code-block::
+
+   $ kubectl exec -it deployment/onos-cli -n sdran -- onos topo get entity -v
+
+UERANSIM
+~~~~~~~~~~~~~~~~~~~~~~
+
+This blueprint runs UERANSIM in place of gNBsim, providing a second
+way to direct workload at SD-Core. Of particular note, UERANSIM runs
+``iperf3``, making it possible to measure UPF throughput. (In
+contrast, gNBsim primarily stresses the Core's Control Plane.)
+
+The UERANSIM blueprint includes the following:
+
+* Global vars file ``vars/main-ueransim.yml`` gives the overall
+  blueprint specification.
+
+* Inventory file ``hosts.ini`` needs to be modified to identify the
+  server that is to run UERANSIM. Currently, a second server is
+  needed, as UERANSIM and SD-Core cannot be deployed on the same
+  server. As an example, ``hosts.ini`` might look like this:
+
+.. code-block::
+
+   [all]
+   node1 ansible_host=10.76.28.113 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+   node2 ansible_host=10.76.28.115 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+
+   [master_nodes]
+   node1
+
+   [ueransim_nodes]
+   node2
+
+* New make targets, ``aether-ueransim-install``,
+  ``aether-ueransim-run``, and ``aether-ueransim-uninstall``, to be
+  executed after the standard SD-Core installation.
+
+* A new submodule ``deps/ueransim`` (corresponding to repo
+  ``aether-ueransim``) defines the Ansible Roles and Playbooks
+  required to deploy UERANSIM. It also contains configuration files
+  for the emulator.
+
+* A nightly integration test that validates the UERANSIM blueprint
+  can be viewed on Jenkins (assuming you are a registered user):
+  `two-server test
+  `__.
+
+
+To use UERANSIM, first copy the vars file to ``main.yml``:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-ueransim.yml main.yml
+
+Then edit ``hosts.ini`` and ``vars/main.yml`` to match your local
+target servers, and deploy the base system (as in previous sections),
+followed by UERANSIM:
+
+.. code-block::
+
+   $ make aether-k8s-install
+   $ make aether-5gc-install
+   $ make aether-ueransim-install
+   $ make aether-ueransim-run
+
+The last step actually starts UERANSIM, configured according to the
+specification given in files ``custom-gnb.yaml`` and
+``custom-ue.yaml`` located in ``deps/ueransim/config``.
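+
+Once a UE has registered and established a PDU session, UERANSIM
+exposes that session as a TUN interface (typically ``uesimtun0``) on
+the UERANSIM server. The following sketch shows how user plane
+traffic can then be generated; the interface name and the iperf3
+endpoint are assumptions you will need to adapt to your setup:
+
+.. code-block::
+
+   # Confirm the UE's TUN interface came up and was assigned an address
+   $ ip addr show uesimtun0
+
+   # Send traffic through the UPF rather than the server's default interface
+   $ ping -I uesimtun0 8.8.8.8
+
+   # Measure user plane throughput against an iperf3 server of your choice
+   $ iperf3 -c <iperf3-server> -B <uesimtun0-address>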
+
+Make target ``aether-ueransim-run`` can be run multiple times; each
+run picks up any recent edits to the config files. More information
+about UERANSIM can be found on `GitHub
+`__, including how to set up the
+config files.
+
+Finally, since the main value of UERANSIM is to measure user plane
+throughput, you may want to play with the UPF's Quality-of-Service
+parameters, as defined in
+``deps/5gc/roles/core/templates/sdcore-5g-values.yaml``. Specifically,
+see both the UE-level settings associated with ``ue-dnn-qos`` and the
+slice-level settings associated with ``slice_rate_limit_config``.
+
+Physical eNBs
+~~~~~~~~~~~~~~~~~~
+
+Aether OnRamp is geared towards 5G, but it does support physical eNBs,
+along with 4G-based versions of both SD-Core and AMP. The 4G blueprint
+has been demonstrated with `SERCOMM's 4G/LTE CBRS Small Cell
+`__.
+The blueprint uses all the same Ansible machinery outlined in earlier
+sections, but starts with a variant of ``vars/main.yml`` customized
+for running physical 4G radios:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-eNB.yml main.yml
+
+Assuming that starting point, the following outlines the key
+differences from the 5G case:
+
+* There is a 4G-specific repo, which you can find in ``deps/4gc``.
+
+* The ``core`` section of ``vars/main.yml`` specifies a 4G-specific values file:
+
+  ``values_file: "deps/4gc/roles/core/templates/radio-4g-values.yaml"``
+
+* The ``amp`` section of ``vars/main.yml`` specifies that 4G-specific
+  models and dashboards get loaded into the ROC and Monitoring
+  services, respectively:
+
+  ``roc_models: "deps/amp/roles/roc-load/templates/roc-4g-models.json"``
+
+  ``monitor_dashboard: "deps/amp/roles/monitor-load/templates/4g-monitor"``
+
+* You need to edit two files with details for the 4G SIM cards you
+  use. One is the 4G-specific values file used to configure SD-Core:
+
+  ``deps/4gc/roles/core/templates/radio-4g-values.yaml``
+
+  The other is the 4G-specific Models file used to bootstrap ROC:
+
+  ``deps/amp/roles/roc-load/templates/radio-4g-models.json``
+
+* There are 4G-specific Make targets for SD-Core (e.g., ``make
+  aether-4gc-install`` and ``make aether-4gc-uninstall``), but the
+  Make targets for AMP (e.g., ``make aether-amp-install`` and ``make
+  aether-amp-uninstall``) work unchanged in both 4G and 5G.
+
+The Quick Start and Emulated RAN (gNBsim) deployments are for 5G only,
+but revisiting the previous sections—substituting the above for their
+5G counterparts—serves as a guide for deploying a 4G blueprint of
+Aether. Note that the network is configured in exactly the same way
+for both 4G and 5G. This is because SD-Core's implementation of the
+UPF is used in both cases.
diff --git a/software/devel.rst b/software/devel.rst
new file mode 100644
index 0000000..99fc6a0
--- /dev/null
+++ b/software/devel.rst
@@ -0,0 +1,217 @@
+Development Support
+-----------------------
+
+OnRamp's primary goal is to support users who want to deploy
+officially released versions of Aether on local hardware, but it also
+provides a way for users who want to develop new features to deploy
+and test them. To this end, this section describes how to configure
+OnRamp to use locally modified components, such as Helm Charts and
+Docker images (including new images built from source code).
+
+At a low level, development is a component-specific task, and users
+are referred to documentation for the respective subsystems:
+
+* To develop SD-Core, see the `SD-Core Guide `__.
+
+* To develop SD-RAN, see the `SD-RAN Guide `__.
+
+* To develop the ROC-based API, see `ROC Development `__.
+
+* To develop Monitoring Dashboards, see `Monitoring Development `__.
+
+At a high level, OnRamp provides a means to deploy developmental
+versions of Aether that include local modifications to the standard
+components. These modifications range from coarse-grain (i.e.,
+replacing the Helm Chart for an entire subsystem), to fine-grain
+(i.e., replacing the container image for an individual microservice).
+The following uses SD-Core as a specific example to illustrate how
+this is done. The same approach can be applied to other subsystems.
+
+Local Helm Charts
+~~~~~~~~~~~~~~~~~~~~
+
+To substitute a local Helm Chart—for example, one located in directory
+``/home/ubuntu/aether/sdcore-helm-charts/sdcore-helm-charts`` on the
+server where you run the OnRamp ``make`` targets—edit the ``helm``
+block of the ``core`` section of ``vars/main.yml`` to replace:
+
+.. code-block::
+
+   helm:
+     local_charts: false
+     chart_ref: aether/sd-core
+     chart_version: 0.12.8
+
+with
+
+.. code-block::
+
+   helm:
+     local_charts: true
+     chart_ref: "/home/ubuntu/aether/sdcore-helm-charts/sdcore-helm-charts"
+     chart_version: 0.13.2
+
+Note that variable ``core.helm.local_charts`` is a boolean, not the
+string ``"true"``. And in this example, we have declared our new chart
+to be version ``0.13.2`` instead of ``0.12.8``.
+
+Finally, while there are situations that require modifying full Helm
+charts, it is also possible to simply substitute an alternative values
+override file for an existing chart by changing the ``core.values_file:``
+variable in ``vars/main.yml``.
+
+Local Container Images
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Being able to modify a Helm Chart makes it possible to substitute
+alternative container images for any or all the microservices
+identified in the chart. But it is also possible to substitute just a
+single container image while using the standard chart.
+
+To substitute a locally built container image, edit the corresponding
+block in the values override file that you have configured in
+``vars/main.yml``; e.g.,
+``deps/5gc/roles/core/templates/sdcore-5g-values.yaml``. For example,
+if you want to deploy the AMF image with tag ``my-amf:version-foo``
+from the container registry of your personal GitLab account, then set
+the ``images`` block of the 5G control plane section accordingly:
+
+.. code-block::
+
+   5g-control-plane:
+     enable5G: true
+     images:
+       repository: "registry.gitlab.com"
+       tags:
+         amf: my-account/my-amf:version-foo
+
+A new Make target streamlines the process of frequently re-installing
+the Kubernetes pods that implement the Core:
+
+.. code-block::
+
+   $ make 5gc-core-reset
+
+If you are also modifying gNBsim in concert with changes to SD-Core,
+then note that the former is not deployed on Kubernetes, and so there
+is no Helm Chart or values override file. Instead, you simply need to
+modify the ``image`` variable in the ``gnbsim`` section of
+``vars/main.yml`` to reference your locally built image:
+
+.. code-block::
+
+   gnbsim:
+     docker:
+       container:
+         image: omecproject/5gc-gnbsim:main-PR_88-cc0d21b
+
+For convenience, the following Make target restarts the container,
+which pulls in the new image:
+
+.. code-block::
+
+   $ make gnbsim-reset
+
+Keep in mind that you can also rerun gNBsim with the *same* container,
+but loading the latest gNBsim config file, by typing:
+
+.. code-block::
+
+   $ make aether-gnbsim-run
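+
+Whether you are substituting an SD-Core microservice or the gNBsim
+container, the locally built image must first be published somewhere
+the target server can pull it from. The following is one hypothetical
+sequence, reusing the AMF image named above; the build context (``.``)
+and the registry account are assumptions:
+
+.. code-block::
+
+   # Build the image from your local source tree
+   $ docker build -t registry.gitlab.com/my-account/my-amf:version-foo .
+
+   # Push it to the registry referenced in the values override file
+   $ docker login registry.gitlab.com
+   $ docker push registry.gitlab.com/my-account/my-amf:version-foo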
+
+Directly Invoking Helm
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is also possible to directly invoke Helm without engaging OnRamp's
+Ansible playbooks. In this scenario, a developer might use OnRamp to
+initially set up Aether (e.g., to deploy Kubernetes on a set of nodes,
+install the routes and virtual bridges needed to interconnect the
+components, and bring up an initial set of pods), but then iteratively
+update the pods running on that cluster by executing ``helm``. This
+can be the basis for an efficient development loop for users with an
+in-depth understanding of Helm and Kubernetes.
+
+To see how this might work, it is helpful to look at an example
+installation playbook, and see how key tasks map onto corresponding
+``helm`` commands. We'll use
+``deps/5gc/roles/core/tasks/install.yml``, which installs the 5G core,
+as an example. Consider the following two blocks from the playbook
+(each block corresponds to an Ansible task):
+
+.. code-block::
+
+   - name: add aether chart repo
+     kubernetes.core.helm_repository:
+       name: aether
+       repo_url: "https://charts.aetherproject.org"
+     when: inventory_hostname in groups['master_nodes']
+
+   - name: deploy aether 5gc
+     kubernetes.core.helm:
+       update_repo_cache: true
+       name: sd-core
+       release_namespace: omec
+       create_namespace: true
+       chart_ref: "{{ core.helm.chart_ref }}"
+       chart_version: "{{ core.helm.chart_version }}"
+       values_files:
+         - /tmp/sdcore-5g-values.yaml
+       wait: true
+       wait_timeout: "2m30s"
+       force: true
+     when: inventory_hostname in groups['master_nodes']
+
+These two tasks correspond to the following three ``helm`` commands:
+
+.. code-block::
+
+   $ helm repo add aether https://charts.aetherproject.org
+   $ helm repo update
+   $ helm upgrade --create-namespace \
+          --install \
+          --version $CHART_VERSION \
+          --wait \
+          --namespace omec \
+          --values $VALUES_FILE \
+          sd-core $CHART_REF
+
+The correspondence between task parameters and command arguments is
+straightforward, keeping in mind that both approaches take advantage
+of variables (as defined in ``vars/main.yml`` for the Ansible tasks,
+and corresponding to shell variables ``CHART_VERSION``,
+``VALUES_FILE``, and ``CHART_REF`` in our example command sequence).
+The ``when`` line in the two tasks indicates that the task is to be
+run on the ``master_nodes`` in your ``hosts.ini`` file; that node is
+where you would directly call ``helm``. Note that local charts can be
+used by also executing the following command (reusing the example path
+name from earlier in this section):
+
+.. code-block::
+
+   $ helm dep up /home/ubuntu/aether/sdcore-helm-charts/sdcore-helm-charts
+
+You will see other tasks in the OnRamp playbooks. These tasks
+primarily take care of bookkeeping; automating bookkeeping tasks
+(including templating) is one of the main values that Ansible provides.
+
+Finally, keep in mind that because this section uses SD-Core to
+illustrate how to build a customized modify-and-test loop, it doesn't
+address some of the peculiarities of the other components. As one
+example, ROC has prerequisites that have to be installed before the
+ROC itself. These prereqs are identified in the ROC installation
+playbook, and include ``onos-operator``, which in turn depends on
+``atomix``.
+
+As another example, the ROC and monitoring services allow you to
+program new features by loading alternative "specifications" into the
+running pods (in addition to installing new container images).
This +approach is described in the `ROC Development +`__ and +`Monitoring Development +`__ +sections, respectively, and implemented by the ``roc-load`` and +``monitor-load`` roles found in ``deps/amp/roles``. + + + + + diff --git a/software/developer.todo b/software/developer.todo deleted file mode 100644 index db4b88c..0000000 --- a/software/developer.todo +++ /dev/null @@ -1,431 +0,0 @@ -Development Loop -============================== - -Helm charts are the primary method of installing the SD-CORE and ROC resources. -AiaB offers a great deal of flexibility regarding which Helm chart versions to install: - -* Local definitions of charts (for testing Helm chart changes) -* Latest published charts (for deploying a development version of Aether) -* Specified versions of charts (for deploying a specific Aether release) - -AiaB can be run on a bare metal machine or VM. System prerequisites: - -* AiaB 4G: Ubuntu 18.04 clean install (18.04 is a requirement of OAISIM which is used to test 4G Aether) -* AiaB 5G: Ubuntu 18.04, 20.04, 22.04 [#]_ - -.. [#] AiaB requires to increase the maximum number of available watches and the maximum number of - inotify instances in Ubuntu 22.04. Otherwise, there will be a "time out" error due to "Too many - files open". Some users have also reported to see this issue when using Ubuntu 20.04. :ref:`AiaB - fails too many files open ` - provides more details on how to address this issue. - -.. note:: - * Running both 4G and 5G SD-CORE simultaneously in AiaB is currently not supported. - * AiaB changes the host server by adding systemd-networkd configuration files to the - host's network configuration. Systemd-networkd is the default networking configuration - tool for Ubuntu, but if your server or VM uses a different method it may not be fully - compatible with AiaB. - - -Installing the 4G AIAB ----------------------- - -4G SD-CORE deploys the following core components to provide mobile connectivity: - -* SPGW (Serving/PDN Gateway): Combined Serving Gateway and Packet Data Network (PDN) Gateway -* UPF (User Plane Function): The interconnect between the mobile infrastructure and the Data Network (DN). -* PCRF (Policy and Charging Rules Function): Data flow detection, policy enforcement, and flow-based charging. -* MME (Mobility Management Entity): Manages UE access network and mobility, and establishing the bearer path for UE. -* HSS (Home Subscriber Server): The main subscriber database. -* Config4g (Config Pod) - -.. figure:: images/4g-call-flow.png - :align: center - :width: 80 % - - *Communication between 4G SD-CORE Components* - -The eNB (evolved Node B) is the Radio Access Network (RAN) of the 4G architecture and allows -the UEs to connect to the Mobile network. -It passes UE's attach request to MME via S1AP interface to be identified and authenticated through HSS. -MME sends the session request to SPGW to create the GTP tunnel and request the default bearer. SPGW sends back the UPF -address to establish the connectivity (GTP tunnel) to the DN through the user plane. - -When the AiaB is up, you can explicitly specify the *oip1* interface within the command to send -data over the 4G datapath. 
Examples:: - - curl --interface oip1 http://ipv4.download.thinkbroadband.com/5MB.zip --output /dev/null - ping -I oip1 google.com - iperf3 -c la.speedtest.clouvider.net -p 5204 -B 172.250.255.254 - -AiaB deploys a router pod in the "default" namespace with four interfaces: *ran-gw* for the radio network, -*access-gw* for access network, *core-gw* for core network, and *eth0* for the external network. -When a UE starts sending traffics to the data network through the user plane (access network), -the uplink (UE to internet) data packets traverse the following path across the pods:: - - (oip1) enb-0 (enb) ==GTP==> (ran-gw) router (access-gw) ==GTP==> (access) upf-0 (core) - ----> (core-gw) router (NAT,eth0) - -And the downlink (internet to UE) packets follow as:: - - (NAT,eth0) router (core-gw) ----> (core) upf-0 (access) ==GTP==> (access-gw) router (ran-gw) - ==GTP==> (enb) enb-0 (oip1) - -.. note:: - In the above notations, network interfaces within each pod are shown in parenthesis. - The IP packets sent/received between the UE and external host via the user plane are GTP-encapsulated - and tunneled between the eNB and UPF. - -.. _developer-4g-loop: - -Using Custom 4G Container Images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Suppose you wish to test a new build of a 4G SD-CORE services. You can deploy custom images -by editing:: - - Override file - `~/aether-in-a-box/sd-core-4g-values.yaml` if you are using latest or local Helm charts - Override file - `~/aether-in-a-box/release-2.0/sd-core-4g-values.yaml` if you are using release-2.0 charts - - - #update following content in override values to update image tags - omec-control-plane: - images: - repository: "" # default docker hub - tags: - mme: omecproject/nucleus:master-a8002eb - pullPolicy: IfNotPresent - -To upgrade a running 4G SD-CORE with the new image, or to deploy the 4G SD-CORE with the image. Use appropriate -make commands. Following commands assumes that you are using local helm charts :: - - make reset-test; make test #if you are not using local charts then CHARTS option - -**Note**: You can use locally built image (Clone + Compile Code) or you can refer to omecproject -dockerhub project to see available image tags. - -.. _local-helm-4g: - -Using Local Helm Charts 4G -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -**Note**: Most users will install AiaB using *published Helm charts* (e.g., `CHARTS=latest`, -`CHARTS=release-2.0`). However, if you need to change the Helm -charts themselves, clone these additional repositories to work with the *local Helm charts*:: - - mkdir -p ~/cord - cd ~/cord - git clone "https://gerrit.opencord.org/sdcore-helm-charts" - git clone "https://gerrit.opencord.org/roc-helm-charts" - git clone "https://gerrit.opencord.org/sdfabric-helm-charts" - cd ~/aether-in-a-box - -Modify the helm charts as per your need. Also execute `helm dep update .` in the changed helm -chart repo. 
Example below to add testOpt option in mme.:: - - node0:~/cord/sdcore-helm-charts$ git diff - diff --git a/omec-control-plane/Chart.yaml b/omec-control-plane/Chart.yaml - index 79c3738..48ae901 100644 - --- a/omec-control-plane/Chart.yaml - +++ b/omec-control-plane/Chart.yaml - @@ -9,4 +9,4 @@ description: OMEC control plane services - name: omec-control-plane - icon: https://guide.opencord.org/logos/cord.svg - - -version: 0.11.1 - +version: 0.11.2 - diff --git a/omec-control-plane/values.yaml b/omec-control-plane/values.yaml - index 33ac6ce..a6b994a 100644 - --- a/omec-control-plane/values.yaml - +++ b/omec-control-plane/values.yaml - @@ -395,6 +395,7 @@ config: - - id: frequency - type: integer - mme: - + testOpt: true - deploy: true - podAnnotations: - fluentbit.io/parser: mme - diff --git a/sdcore-helm-charts/Chart.yaml b/sdcore-helm-charts/Chart.yaml - index 44a5558..151eb07 100644 - --- a/sdcore-helm-charts/Chart.yaml - +++ b/sdcore-helm-charts/Chart.yaml - @@ -8,7 +8,7 @@ name: sd-core - description: SD-Core control plane services - icon: https://guide.opencord.org/logos/cord.svg - type: application - -version: 0.11.8 - +version: 0.11.9 - home: https://opennetworking.org/sd-core/ - maintainers: - - name: SD-Core Support - @@ -16,9 +16,9 @@ maintainers: - - dependencies: - - name: omec-control-plane - - version: 0.11.1 - - repository: https://charts.aetherproject.org - - #repository: "file://../omec-control-plane" - + version: 0.11.2 - + #repository: https://charts.aetherproject.org - + repository: "file://../omec-control-plane" #refer local helm chart - condition: omec-control-plane.enable4G - - - name: omec-sub-provision - node0:~/cord/sdcore-helm-charts$ - - node0:~$ cd cord/sdcore-helm-charts/omec-control-plane/ - node0:~/cord/sdcore-helm-charts/omec-control-plane$ helm dependency update . - - -To install the ROC from the local charts:: - - make roc-4g-models - -To install the 4G SD-CORE from the local charts:: - - make test - -.. note:: - * Helm chart changes can not be done when CHARTS option is used. If you need to change helm chart then you should use local helm charts - -Troubleshooting 4G Issues -^^^^^^^^^^^^^^^^^^^^^^^^^ - -**NOTE: Running both 4G and 5G SD-CORE simultaneously in AiaB is currently not supported.** - -If you suspect a problem, first verify that all pods are in Running state:: - - kubectl -n omec get pods - kubectl -n aether-roc get pods - -4G Test Fails -************* - -Occasionally *make test* (for 4G) fails for unknown reasons; this is true regardless of which Helm charts are used. -If this happens, first try recreating the simulated UE / eNB and re-running the test as follows:: - - make reset-ue - make test - -If that does not work, try cleaning up AiaB as described above and re-building it. - -If *make test* fails consistently, check whether the configuration has been pushed to the SD-CORE:: - - kubectl -n omec logs config4g-0 | grep "Successfully" - -You should see that a device group and slice has been pushed:: - - [INFO][WebUI][CONFIG] Successfully posted message for device group 4g-oaisim-user to main config thread - [INFO][WebUI][CONFIG] Successfully posted message for slice default to main config thread - -Then tail the *config4g-0* log and make sure that the configuration has been successfully pushed to all -SD-CORE components. - - -.. note:: - For more troubleshooting FAQs, please refer here :ref:`Troubleshooting guide ` - -Installing the 5G AIAB ----------------------- - -.. 
_developer-5g-loop: - -Using Custom 5G Container Images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Suppose you wish to test a new build of a 5G SD-CORE services. You can deploy custom images -by editing:: - - - Override file - `~/aether-in-a-box/sd-core-5g-values.yaml` if you are using latest or local Helm charts - Override file - `~/aether-in-a-box/release-2.0/sd-core-5g-values.yaml` if you are using release-2.0 charts - - #update following content in override values to update image tags - 5g-control-plane: - images: - tags: - webui: registry.aetherproject.org/omecproject/5gc-webui:onf-release3.0.5-roc-935305f - pullPolicy: IfNotPresent - -To upgrade a running 5G SD-CORE with the new image, or to deploy the 5G SD-CORE with the image. Use appropriate -make commands. Following commands assumes that you are using local helm charts :: - - make reset-5g-test; make 5g-test #if you are not using local charts then use CHARTS option - -**Note**: You can use locally built image (Clone + Compile Code) or you can refer to omecproject -dockerhub project to see available image tags. - -.. _local-helm-5g: - -Using Local Helm Charts 5G -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -**Note**: Most users will install AiaB using *published Helm charts* (e.g., `CHARTS=latest`, -`CHARTS=release-2.0`). However, if you need to change the Helm -charts themselves, clone these additional repositories to work with the *local Helm charts*:: - - mkdir -p ~/cord - cd ~/cord - git clone "https://gerrit.opencord.org/sdcore-helm-charts" - git clone "https://gerrit.opencord.org/roc-helm-charts" - git clone "https://gerrit.opencord.org/sdfabric-helm-charts" - cd ~/aether-in-a-box - -Modify the helm charts as per your need. Also execute `helm dep update .` in the changed helm -chart repo. Example below to add testOpt option in amf.:: - - node0:~/cord/sdcore-helm-charts$ git diff - diff --git a/5g-control-plane/Chart.yaml b/5g-control-plane/Chart.yaml - index 421e7e5..3cea334 100644 - --- a/5g-control-plane/Chart.yaml - +++ b/5g-control-plane/Chart.yaml - @@ -10,7 +10,7 @@ description: SD-Core 5G control plane services - name: 5g-control-plane - icon: https://guide.opencord.org/logos/cord.svg - - -version: 0.7.10 - +version: 0.7.11 - - dependencies: - - name: mongodb - diff --git a/5g-control-plane/values.yaml b/5g-control-plane/values.yaml - index 8ddcf66..c15d77d 100644 - --- a/5g-control-plane/values.yaml - +++ b/5g-control-plane/values.yaml - @@ -417,6 +417,7 @@ config: - ngapIpList: - - "0.0.0.0" - amf: - + testOpt: true - deploy: true - podAnnotations: - field.cattle.io/workloadMetrics: '[{"path":"/metrics","port":9089,"schema":"HTTP"}]' - diff --git a/sdcore-helm-charts/Chart.yaml b/sdcore-helm-charts/Chart.yaml - index 44a5558..8f52f77 100644 - --- a/sdcore-helm-charts/Chart.yaml - +++ b/sdcore-helm-charts/Chart.yaml - @@ -8,7 +8,7 @@ name: sd-core - description: SD-Core control plane services - icon: https://guide.opencord.org/logos/cord.svg - type: application - -version: 0.11.8 - +version: 0.11.9 - home: https://opennetworking.org/sd-core/ - maintainers: - - name: SD-Core Support - @@ -28,9 +28,9 @@ dependencies: - condition: omec-sub-provision.enable - - - name: 5g-control-plane - - version: 0.7.8 - - repository: https://charts.aetherproject.org - - #repository: "file://../5g-control-plane" - + version: 0.7.11 - + #repository: https://charts.aetherproject.org - + repository: "file://../5g-control-plane" #enable this line to refer locally changed helm charts - condition: 5g-control-plane.enable5G - - - name: bess-upf - 
node0:~/cord/sdcore-helm-charts$ - - node0:~$ cd cord/sdcore-helm-charts/5g-control-plane/ - node0:~/cord/sdcore-helm-charts/5g-control-plane$ helm dependency update . - -To install the ROC from the local charts:: - - make roc-5g-models - -To install the 5G SD-CORE from the local charts:: - - make 5g-test - -.. note:: - * Helm chart changes can not be done when CHARTS option is used. If you need to change helm chart then you should use local helm charts - -Troubleshooting 5G Issues -^^^^^^^^^^^^^^^^^^^^^^^^^ - -**NOTE: Running both 4G and 5G SD-CORE simultaneously in AiaB is currently not supported.** - -If you suspect a problem, first verify that all pods are in Running state:: - - kubectl -n omec get pods - kubectl -n aether-roc get pods - -5G Test Fails -************* - -If the 5G test fails (*make 5g-test*) then you will see output like this:: - - 2022-04-21T17:59:12Z [INFO][GNBSIM][Summary] Profile Name: profile2 , Profile Type: pdusessest - 2022-04-21T17:59:12Z [INFO][GNBSIM][Summary] Ue's Passed: 2 , Ue's Failed: 3 - 2022-04-21T17:59:12Z [INFO][GNBSIM][Summary] Profile Errors: - 2022-04-21T17:59:12Z [ERRO][GNBSIM][Summary] imsi:imsi-208930100007492, procedure:REGISTRATION-PROCEDURE, error:triggering event:REGESTRATION-REQUEST-EVENT, expected event:AUTHENTICATION-REQUEST-EVENT, received event:REGESTRATION-REJECT-EVENT - 2022-04-21T17:59:12Z [ERRO][GNBSIM][Summary] imsi:imsi-208930100007493, procedure:REGISTRATION-PROCEDURE, error:triggering event:REGESTRATION-REQUEST-EVENT, expected event:AUTHENTICATION-REQUEST-EVENT, received event:REGESTRATION-REJECT-EVENT - 2022-04-21T17:59:12Z [ERRO][GNBSIM][Summary] imsi:imsi-208930100007494, procedure:REGISTRATION-PROCEDURE, error:triggering event:REGESTRATION-REQUEST-EVENT, expected event:AUTHENTICATION-REQUEST-EVENT, received event:REGESTRATION-REJECT-EVENT - 2022-04-21T17:59:12Z [INFO][GNBSIM][Summary] Simulation Result: FAIL - -In this case check whether the *webui* pod has restarted... this can happen if it times out waiting -for the database to come up:: - - $ kubectl -n omec get pod -l app=webui - NAME READY STATUS RESTARTS AGE - webui-6b9c957565-zjqls 1/1 Running 1 (6m55s ago) 7m56s - -If the output shows any restarts, then restart the *simapp* pod to cause it to re-push its subscriber state:: - - $ kubectl -n omec delete pod -l app=simapp - pod "simapp-6c49b87c96-hpf82" deleted - -Re-run the 5G test, it should now pass. - -.. note:: - For more troubleshooting FAQs, please refer here :ref:`Troubleshooting guide ` - -Packet Capture --------------- - -`Ksniff `_ is a Kubernetes-integrated packet sniffer shipped as a kubectl plugin. -Ksniff uses tcpdump and Wireshark (Wireshark 3.x) to capture traffic on a specific pod within the cluster. -After installing Ksniff using Krew and Wireshark, by running the following command -you can see the communications between the components. Ksniff uses kubectl to upload -the tcpdump binary into the target container (e.g. mme, amf, upf, ...), and redirects the output to Wireshark:: - - kubectl sniff -n omec mme-0 - -**Note**: To collect packets using Wireshark, the (virtual) machine where Ksniff/Wireshark is running needs -to have a Desktop environment installed for Wireshark to run. Also, note that the desktop machine running -Ksniff/Wireshark doesn't need to be the same machine as the one running AiaB. 
- -You can see the packets sent/received between the core components from the moment an -UE initiates the attach procedure through eNB until -the dedicated bearer (uplink and downlink) has been established (see figure below). -After the bearer has been established, traffic sent from UE's interface (*oip1*) will go through the eNB and UPF. - -.. figure:: images/wireshark-4g.png - :width: 80 % - :align: center - - *Wireshark output of ksniff on mme pod* - -Using Ksniff on the router pod you can see all the packets exchanged between the UE and external hosts -(e.g. ping an external host from the UE interface):: - - kubectl sniff -n default router - -.. figure:: images/4g-ue-ping.png - :width: 80 % - :align: center - - *Data Flow from UE to an external host through the User Plane (filtered on UE's IP address)* - -Looking at the packet's details, the first and second packets are from *enb* to *router* -and then to *upf* in a GTP tunnel. And the third packet is sent from *router* to the external network via NAT. -The rest are the reply packets from the external host to the UE. - -By default, Ksniff runs *tcpdump* on all interfaces (i.e. *-i any*). To retrieve more details -of packets (e.g. ethernet header information) on a specific interface, -you can explicitly specify the interface along with options (e.g. *-e*). e.g.:: - - kubectl sniff -n default router -i access-gw -f "-e" diff --git a/software/directory.rst b/software/directory.rst index 9c7f7bc..67e8d5e 100644 --- a/software/directory.rst +++ b/software/directory.rst @@ -1,7 +1,7 @@ .. Repositories .. --------------- -Aether is a complex system, assembled from multiple components +Aether is assembled from multiple components spanning several Git repositories. These include repos for different subsystems (e.g., AMP, SD-Core, SD-RAN), but also for different stages of the development pipeline (e.g., source code, deployment artifacts, @@ -18,7 +18,8 @@ up to speed on the rest of the system. useful when you find yourself trying to troubleshoot a problem in a later section. For example, isolating a problem with a physical gNB is easier if you know that connectivity to the AMF and UPF works - correctly, which the *Emulated RAN* section helps to establish. + correctly, which the `Emulated RAN `__ section + helps to establish. Our second hint is to join the ``#aether-onramp`` channel of the `ONF Workspace `__ on Slack, where @@ -32,34 +33,17 @@ Source Repos Source code for Aether and all of its subsystems can be found in the following repositories: -* Gerrit repository for the CORD Project - (https://gerrit.opencord.org): Microservices for AMP, plus source - for the jobs that implement the CI/CD pipeline. - * GitHub repository for the OMEC Project (https://github.com/omec-project): Microservices for SD-Core, plus the emulator (gNBsim) that subjects SD-Core to RAN workloads. * GitHub repository for the ONOS Project - (https://github.com/onosproject): Microservices for SD-Fabric and - SD-RAN, plus the YANG models used to generate the Aether API. - -* GitHub repository for the Stratum Project - (https://github.com/stratum): On-switch components of SD-Fabric. - -For Gerrit, you can either browse Gerrit (select the `master` branch) -or clone the corresponding ** by typing: - -.. 
code-block:: - - $ git clone ssh://gerrit.opencord.org:29418/ - -If port 29418 is blocked by your network administrator, you can try cloning -using https instead of ssh: + (https://github.com/onosproject): Microservices for SD-RAN and ROC, + plus the YANG models used to generate the Aether API. -.. code-block:: - - $ git clone https://gerrit.opencord.org/ +* GitHub repository for the ONF + (https://github.com/opennetworkinglab): OnRamp documentation and + playbooks for deploying Aether. Anyone wanting to participate in Aether's ongoing development will want to learn how to contribute new features to these source repos. @@ -69,7 +53,7 @@ Artifact Repos Aether includes a *Continuous Integration (CI)* pipeline that builds deployment artifacts (e.g., Helm Charts, Docker Images) from the -source code. These artifacts are stored in the following repositories: +source code. These artifacts are stored in the following registries: Helm Charts @@ -93,29 +77,20 @@ that while Aether documentation often refers its use of "Docker containers," it is now more accurate to say that Aether uses `OCI-Compliant containers `__. -The Aether CI pipeline keeps the above artifact repos in sync with the +The Aether CI pipeline keeps these artifact repos in sync with the source repos listed above. Among those source repos are the source files for all the Helm Charts: - | ROC: https://gerrit.opencord.org/plugins/gitiles/roc-helm-charts + | ROC: https://github.com/onosproject/roc-helm-charts | SD-RAN: https://github.com/onosproject/sdran-helm-charts - | SD-Core: https://gerrit.opencord.org/plugins/gitiles/sdcore-helm-charts - | SD-Fabric (Servers): https://github.com/onosproject/onos-helm-charts - | SD-Fabric (Switches): https://github.com/stratum/stratum-helm-charts - -The QA tests run against code checked into these source repos can be -found here: - - | https://gerrit.opencord.org/plugins/gitiles/aether-system-tests - -The specification for the CI pipeline, which invokes these QA tests, -gates merge requests, and publishes artifacts, can be found here: - - | https://gerrit.opencord.org/plugins/gitiles/aether-ci-management + | SD-Core: https://github.com/omec-project/sdcore-helm-charts -For more information about Aether's CI pipeline, including its QA and -version control strategy, we recommend the Lifecycle Management -chapter of our companion Edge Cloud Operations book. +The CI pipeline for each subsystem is implemented as GitHub Actions in +the respective repos. The approach is based on an earlier version +implemented by set of Jenkins jobs, as described in the Lifecycle +Management chapter of a companion Edge Cloud Operations book. Of +particular note, the current pipeline adopts the version control +strategy of the original mechanism. .. _reading_cicd: .. admonition:: Further Reading @@ -127,14 +102,13 @@ chapter of our companion Edge Cloud Operations book. OnRamp Repos ~~~~~~~~~~~~~~~~~~~ -The process to deploy the artifacts listed above, sometimes -referred to as GitOps, manages the *Continuous Deployment (CD)* half -of the CI/CD pipeline. OnRamp's approach to GitOps uses a different -mechanism than the one the ONF ops team originally used to manage its -multi-site deployment of Aether. The latter approach has a large -startup cost, which has proven difficult to replicate. (It also locks -you into deployment toolchain that may or may not be appropriate for -your situation.) +The process to deploy the artifacts listed above manages the +*Continuous Deployment (CD)* half of the CI/CD pipeline. 
OnRamp uses a +different mechanism than the one the ONF ops team originally used to +manage its multi-site deployment of Aether. The latter approach has a +large startup cost, which has proven difficult to replicate. (It also +locks you into deployment toolchain that may or may not be appropriate +for your situation.) In its place, OnRamp adopts minimal Ansible tooling. This makes it easier to take ownership of the configuration parameters that define @@ -147,7 +121,9 @@ OnRamp repos: | Deploy 5G Core: https://github.com/opennetworkinglab/aether-5gc | Deploy 4G Core: https://github.com/opennetworkinglab/aether-4gc | Deploy Management Plane: https://github.com/opennetworkinglab/aether-amp - | Deploy 5G RAN Simulator: https://github.com/opennetworkinglab/aether-gnbsim + | Deploy SD-RAN: https://github.com/opennetworkinglab/aether-sdran + | Deploy gNB Simulator: https://github.com/opennetworkinglab/aether-gnbsim + | Deploy UE+gNB Simulator: https://github.com/opennetworkinglab/aether-ueransim | Deploy Kubernetes: https://github.com/opennetworkinglab/aether-k8s It is the first repo that defines a way to integrate all of the Aether diff --git a/software/gnb.rst b/software/gnb.rst index 9f3c8e7..f332e6f 100644 --- a/software/gnb.rst +++ b/software/gnb.rst @@ -31,7 +31,7 @@ options are likely to be different in other countries. `__, with summaries of different combinations people have tried reported in the OnRamp `Troubleshooting Wiki Page - `__. + `__. This blueprint assumes you start with a variant of ``vars/main.yml`` customized for running physical 5G radios. This is easy to do: @@ -187,9 +187,9 @@ entered here is purposely minimal; it's just enough to bring up and debug the installation. Over the lifetime of a running system, information about *Device Groups* and *Slices* (and the other abstractions they build upon) should be entered via the ROC, as -described the section on Runtime Control. When you get to that point, -Ansible variable ``standalone`` in ``vars/main.yml`` (which -corresponds to the override value assigned to +described in the `Runtime Control `__ section. When +you get to that point, Ansible variable ``standalone`` in +``vars/main.yml`` (which corresponds to the override value assigned to ``provision-network-slice`` in ``radio-5g-values.yaml``) should be set to ``false``. Doing so causes the ``device-groups`` and ``network-slices`` blocks of ``radio-5g-values.yaml`` to be @@ -228,7 +228,7 @@ gives detailed instructions about configuring the gNB. .. admonition:: Further Reading `MOSO CANOPY 5G INDOOR SMALL CELL - `__. + `__. .. admonition:: Troubleshooting Hint @@ -252,7 +252,7 @@ follow: :align: center Management dashboard on the Sercomm gNB, showing the dropdown - ``Settings`` menu overlayed on the ``NR Cell Configuration`` page + ``Settings`` menu overlaid on the ``NR Cell Configuration`` page (which shows default radio settings). @@ -307,18 +307,18 @@ follow: page of the management dashboard should confirm that control interface is established. -9. **Connect to Aether User Plane.** As described in an earlier - section, the Aether User Plane (UPF) is running at IP address - ``192.168.252.3``. Connecting to that address requires installing a - route to subnet ``192.168.252.0/24``. How you install this route is - device and site-dependent. If the small cell provides a means to - install static routes, then a route to destination - ``192.168.252.0/24`` via gateway ``10.76.28.113`` (the server - hosting Aether) will work. 
If the small cell does not allow static - routes (as is the case for the SERCOMM gNB), then ``10.76.28.113`` - can be installed as the default gateway, but doing so requires that - your server also be configured to forward IP packets on to the - Internet. +9. **Connect to Aether User Plane.** As described in the `Verify + Network `__ section, the Aether User Plane (UPF) is + running at IP address ``192.168.252.3``. Connecting to that address + requires installing a route to subnet ``192.168.252.0/24``. How you + install this route is device and site-dependent. If the small cell + provides a means to install static routes, then a route to + destination ``192.168.252.0/24`` via gateway ``10.76.28.113`` (the + server hosting Aether) will work. If the small cell does not allow + static routes (as is the case for the SERCOMM gNB), then + ``10.76.28.113`` can be installed as the default gateway, but doing + so requires that your server also be configured to forward IP + packets on to the Internet. .. admonition:: Troubleshooting Hint @@ -342,109 +342,50 @@ follow: address. Then set the default gateway to the IP address of your Aether server. -Run Diagnostics -~~~~~~~~~~~~~~~~~ - -Successfully connecting a UE to the Internet is not a straightforward -exercise. It involves configuring the UE, gNB, and SD-Core software in -a consistent way; establishing SCTP-based control plane (N2) and -GTP-based user plane (N3) connections between the base station and -Mobile Core; and traversing multiple IP subnets along the end-to-end -path. - -The UE and gNB provide limited diagnostic tools. For example, it's -possible to run ``ping`` and ``traceroute`` from both. You can also -run the ``ksniff`` tool described in the Networking section, but the -most helpful packet traces you can capture are shown in the following -commands. You can run these on the Aether server, where we use our -example ``ens18`` interface for illustrative purposes: - -.. code-block:: - - $ sudo tcpdump -i any sctp -w sctp-test.pcap - $ sudo tcpdump -i ens18 port 2152 -w gtp-outside.pcap - $ sudo tcpdump -i access port 2152 -w gtp-inside.pcap - $ sudo tcpdump -i core net 172.250.0.0/16 -w n6-inside.pcap - $ sudo tcpdump -i ens18 net 172.250.0.0/16 -w n6-outside.pcap - -The first trace, saved in file ``sctp.pcap``, captures SCTP packets -sent to establish the control path between the base station and the -Mobile Core (i.e., N2 messages). Toggling "Mobile Data" on the UE, -for example by turning Airplane Mode off and on, will generate the -relevant control plane traffic. - -The second and third traces, saved in files ``gtp-outside.pcap`` and -``gtp-inside.pcap``, respectively, capture GTP packets (tunneled -through port ``2152`` ) on the RAN side of the UPF. Setting the -interface to ``ens18`` corresponds to "outside" the UPF and setting -the interface to ``access`` corresponds to "inside" the UPF. Running -``ping`` from the UE will generate the relevant user plane (N3) traffic. - -Similarly, the fourth and fifth traces, saved in files -``n6-inside.pcap`` and ``n6-outside.pcap``, respectively, capture IP -packets on the Internet side of the UPF (which is known as the **N6** -interface in 3GPP). In these two tests, ``net 172.250.0.0/16`` -corresponds to the IP addresses assigned to UEs by the SMF. Running -``ping`` from the UE will generate the relevant user plane traffic. 
- -If the ``gtp-outside.pcap`` has packets and the ``gtp-inside.pcap`` -is empty (no packets captured), you may run the following commands -to make sure packets are forwarded from the ``ens18`` interface -to the ``access`` interface and vice versa: - -.. code-block:: - - $ sudo iptables -A FORWARD -i ens18 -o access -j ACCEPT - $ sudo iptables -A FORWARD -i access -o ens18 -j ACCEPT - -Support for eNBs -~~~~~~~~~~~~~~~~~~ - -Aether OnRamp is geared towards 5G, but it does support physical eNBs, -including 4G-based versions of both SD-Core and AMP. It does not -support an emulated 4G RAN. The 4G blueprint uses all the same Ansible -machinery outlined in earlier sections, but starts with a variant of -``vars/main.yml`` customized for running physical 4G radios: - -.. code-block:: - - $ cd vars - $ cp main-eNB.yml main.yml - -Assuming that starting point, the following outlines the key -differences from the 5G case: - -1. There is a 4G-specific repo, which you can find in ``deps/4gc``. - -2. The ``core`` section of ``vars/main.yml`` specifies a 4G-specific values file: - - ``values_file: "deps/4gc/roles/core/templates/radio-4g-values.yaml"`` - -3. The ``amp`` section of ``vars/main.yml`` specifies that 4G-specific - models and dashboards get loaded into the ROC and Monitoring - services, respectively: - - ``roc_models: "deps/amp/roles/roc-load/templates/roc-4g-models.json"`` - - ``monitor_dashboard: "deps/amp/roles/monitor-load/templates/4g-monitor"`` - -4. You need to edit two files with details for the 4G SIM cards you - use. One is the 4G-specific values file used to configure SD-Core: - - ``deps/4gc/roles/core/templates/radio-4g-values.yaml`` - - The other is the 4G-specific Models file used to bootstrap ROC: - - ``deps/amp/roles/roc-load/templates/radio-4g-models.json`` - -5. There are 4G-specific Make targets for SD-Core (e.g., ``make - aether-4gc-install`` and ``make aether-4gc-uninstall``), but the - Make targets for AMP (e.g., ``make aether-amp-install`` and ``make - aether-amp-uninstall``) work unchanged in both 4G and 5G. - -The Quick Start and Emulated RAN (gNBsim) deployments are for 5G only, -but revisiting the other sections—substituting the above for their 5G -counterparts—serves as a guide for deploying a 4G version of Aether. -Note that the network is configured in exactly the same way for both -4G and 5G. This is because SD-Core's implementation of the UPF is used -in both cases. +Deployment Milestones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Successfully connecting a UE to the Internet involves configuring the +UE, gNB, and SD-Core in a consistent way, and doing so for both the +control and user planes. This section identifies the key milestones +along the way, and how to use the available diagnostic tools to verify +that you are making progress. (As a reminder, the available tools +include running ``ping`` and ``traceroute`` from all three components, +capturing packet traces on the Aether server, viewing the monitoring +dashboard, and viewing the gNB Status panel). + +* **Milestone 1: Bring up SD-Core.** Success can be verified by using + ``kubectl`` to observe the status of Kubernetes pods, and by noting + that the monitoring dashboard reports *UPF Up*. And as noted earlier + in this section, we recommend running gNBsim on a second server to + verify that you have a working network path between the gNB and the + Core before attempting to do the same with a physical gNB. 
+
+* **Milestone 2: Connect gNB to the Internet.** Configuring the gNB to
+  treat the Aether server as its default router (and configuring that
+  server to forward IP packets) is the recommended way to provide the
+  gNB with Internet connectivity. Such connectivity is needed when
+  your deployment depends on Internet services like NTP, and it can be
+  verified by running ``ping`` or ``traceroute`` to those services
+  from the gNB.
+
+* **Milestone 3: Connect gNB to the AMF.** The gNB will automatically
+  try to establish control plane connectivity to the configured AMF,
+  and once successful, the dashboard will indicate *NR Ready*. The
+  Aether monitoring dashboard will also show *gNodeB Up*.
+
+* **Milestone 4: Connect gNB to the UPF.** Until we try to establish
+  end-to-end connectivity from the UE (see the next Milestone), the
+  best evidence of user plane connectivity between the gNB and the
+  UPF is the ability to successfully run ``ping 192.168.252.3`` from
+  the gNB.
+
+* **Milestone 5: Establish UE Connectivity.** Getting *5G bars* on the
+  UE, followed by the ability to access Internet content, is the
+  ultimate demonstration of success. To help diagnose problems,
+  capture the packet traces described in the `Verify Network
+  `__ section.
+
+One reason for calling out this sequence of milestones is that they
+establish a baseline that makes it easier for the community to help
+troubleshoot a deployment.
diff --git a/software/gnbsim.rst b/software/gnbsim.rst
index e841495..1c0b5cf 100644
--- a/software/gnbsim.rst
+++ b/software/gnbsim.rst
@@ -3,12 +3,13 @@ Emulated RAN
 
 gNBsim emulates a 5G RAN, generating (mostly) Control Plane traffic
 that can be directed at SD-Core. This section describes how to
-configure gNBsim to customize and scale the workload it
-generates. We assume gNBsim runs in one or more servers, independent
-of the server(s) that host SD-Core. These servers are specified in the
-``hosts.ini`` file, as described in the section on Scaling Aether. This
-blueprint assumes you start with a variant of ``vars/main.yml``
-customized for running gNBsim. This is easy to do:
+configure gNBsim to customize and scale the workload it generates. We
+assume gNBsim runs in one or more servers, independent of the
+server(s) that host SD-Core. These servers are specified in the
+``hosts.ini`` file, as described in the `Scale Cluster
+`__ section. This blueprint assumes you start with a
+variant of ``vars/main.yml`` customized for running gNBsim. This is
+easy to do:
 
 .. code-block::
 
@@ -56,11 +57,12 @@ The ``container.count`` variable in the ``docker`` block specifies how
 many containers run in each server (``2`` in this example). The
 ``router`` block then gives the network specification needed for these
 containers to connect to the SD-Core; all of these variables are
-described in the previous section on Networking. Finally, the
-``servers`` block names the configuration files that parameterize each
-container. In this example, there are two servers with two containers
-running in each, with ``config/gnbsim-s2-p1.yaml`` parameterizing the
-first container on the second server.
+described in the `Verify Network `__
+section. Finally, the ``servers`` block names the configuration files
+that parameterize each container. In this example, there are two
+servers with two containers running in each, with
+``config/gnbsim-s2-p1.yaml`` parameterizing the first container on the
+second server (a sketch of this block's shape appears below).
 
 These config files then specify the second set of gNBsim parameters.
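+
+For concreteness, the following is a purely illustrative sketch of the
+shape such a ``servers`` block might take; the authoritative structure
+is the one in your copy of ``vars/main.yml``, and the file names here
+simply follow the ``gnbsim-s<server>-p<container>`` convention used in
+this example:
+
+.. code-block::
+
+   servers:
+     "1":
+       - "config/gnbsim-s1-p1.yaml"
+       - "config/gnbsim-s1-p2.yaml"
+     "2":
+       - "config/gnbsim-s2-p1.yaml"
+       - "config/gnbsim-s2-p2.yaml"
+
+Each listed file then holds the parameters for one gNBsim container.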
A detailed description of these parameters is outside the scope of diff --git a/software/inspect.rst b/software/inspect.rst index 3394c4d..7781e07 100644 --- a/software/inspect.rst +++ b/software/inspect.rst @@ -47,15 +47,15 @@ be accessed at ``http://10.76.28.113:31194/aether-roc-api/`` in our example deployment (where Aether runs on host ``10.76.28.113``). There is much more to say about the ROC and the Aether API, which we -return to in the section on Runtime Control. For now, we suggest you -simply peruse the Control Dashboard by starting with the dropdown menu -in the upper right corner. For example, selecting `Devices` will show -the set of UEs registered with Aether, similar to the screenshot in -:numref:`Figure %s `. In an operational setting, these values -would be entered into the ROC through either the GUI or the underlying -API. For the Quick Start scenario we're limiting ourselves to in this -section, these values are loaded from -``deps/amp/5g-roc/templates/roc-5g-models.json``. +return to in the `Runtime Control `__ section. For +now, we suggest you simply peruse the Control Dashboard by starting +with the dropdown menu in the upper right corner. For example, +selecting `Devices` will show the set of UEs registered with Aether, +similar to the screenshot in :numref:`Figure %s `. In an +operational setting, these values would be entered into the ROC +through either the GUI or the underlying API. For the Quick Start +scenario we're limiting ourselves to in this section, these values are +loaded from ``deps/amp/5g-roc/templates/roc-5g-models.json``. .. _fig-roc: .. figure:: figures/ROC-Dashboard.png diff --git a/software/network.rst b/software/network.rst index f250267..4ae44ad 100644 --- a/software/network.rst +++ b/software/network.rst @@ -3,11 +3,16 @@ Verify Network This section goes into depth on how SD-Core (which runs *inside* the Kubernetes cluster) connects to either physical gNBs or an emulated -RAN (both running *outside* the Kubernetes cluster). For the purpose -of this section, we assume you already have a scalable cluster running -(as outlined in the previous section), SD-Core has been installed on -that cluster, and you have a terminal window open on the Master node -in that cluster. +RAN (both running *outside* the Kubernetes cluster). It also describes +how to run diagnostics to debug potential problems. + +For the purpose of this section, we assume you already have a scalable +cluster running (as outlined in the previous section), SD-Core has +been installed on that cluster, and you have a terminal window open on +the Master node in that cluster. + +Network Schematic +~~~~~~~~~~~~~~~~~~~~~~~~~~ :numref:`Figure %s ` shows a high-level schematic of Aether's end-to-end User Plane connectivity, where we start by @@ -230,3 +235,58 @@ physical gNBs. macvlan: iface: gnbaccess subnet_prefix: "172.20" + +Packet Traces +~~~~~~~~~~~~~~~~~ + +Successfully connecting a UE to the Internet involves configuring the +UE, gNB, and SD-Core software in a consistent way; establishing +SCTP-based control plane (N2) and GTP-based user plane (N3) +connections between the base station and Mobile Core; and traversing +multiple IP subnets along the end-to-end path. + +Packet traces are the best way to diagnose your deployment, and the +most helpful traces you can capture are shown in the following +commands. You can run these on the Aether server, where we use our +example ``ens18`` interface for illustrative purposes: + +.. 
code-block::
+
+   $ sudo tcpdump -i any sctp -w sctp-test.pcap
+   $ sudo tcpdump -i ens18 port 2152 -w gtp-outside.pcap
+   $ sudo tcpdump -i access port 2152 -w gtp-inside.pcap
+   $ sudo tcpdump -i core net 172.250.0.0/16 -w n6-inside.pcap
+   $ sudo tcpdump -i ens18 net 172.250.0.0/16 -w n6-outside.pcap
+
+The first trace, saved in file ``sctp-test.pcap``, captures SCTP
+packets sent to establish the control path between the base station
+and the Mobile Core (i.e., N2 messages). Toggling "Mobile Data" on a
+physical UE, for example by turning Airplane Mode off and on, will
+generate the relevant control plane traffic; gNBsim automatically
+triggers this activity.
+
+The second and third traces, saved in files ``gtp-outside.pcap`` and
+``gtp-inside.pcap``, respectively, capture GTP packets (tunneled
+through port ``2152``) on the RAN side of the UPF. Setting the
+interface to ``ens18`` corresponds to "outside" the UPF and setting
+the interface to ``access`` corresponds to "inside" the UPF. Running
+``ping`` from a physical UE will generate the relevant user plane (N3)
+traffic; gNBsim automatically triggers this activity.
+
+Similarly, the fourth and fifth traces, saved in files
+``n6-inside.pcap`` and ``n6-outside.pcap``, respectively, capture IP
+packets on the Internet side of the UPF (which is known as the **N6**
+interface in 3GPP). In these two tests, ``net 172.250.0.0/16``
+corresponds to the IP addresses assigned to UEs by the SMF. Running
+``ping`` from a physical UE will generate the relevant user plane
+traffic; gNBsim automatically triggers this activity.
+
+If ``gtp-outside.pcap`` contains packets but ``gtp-inside.pcap``
+is empty (no packets captured), run the following commands
+to make sure packets are forwarded from the ``ens18`` interface
+to the ``access`` interface and vice versa:
+
+.. code-block::
+
+   $ sudo iptables -A FORWARD -i ens18 -o access -j ACCEPT
+   $ sudo iptables -A FORWARD -i access -o ens18 -j ACCEPT
diff --git a/software/overview.rst b/software/overview.rst
index c726554..4cc4f58 100644
--- a/software/overview.rst
+++ b/software/overview.rst
@@ -3,31 +3,30 @@ Overview
 
 Many of the implementation details presented in this book were
 informed by Aether, an open source 5G edge cloud connectivity service.
-A multi-site deployment of Aether has been running since 2020 in
-support of the *Pronto Project*, but that deployment depends on an ops
-team with significant insider knowledge about Aether's engineering
-details. It is difficult for others to reproduce that know-how and
-bring up their own Aether clusters.
-
-`Aether OnRamp `__
-is a re-packaging of Aether to address that problem. It provides an
-incremental path for users to:
+This appendix describes `Aether OnRamp
+`__, a packaging of
+Aether in a way that makes it easy to deploy the system on your own
+hardware. It provides an incremental path for users to:
 
 * Learn about and observe all the moving parts in Aether.
 * Customize Aether for different target environments.
 * Experiment with scalable edge communication.
-* Deploy and operate Aether with live traffic.
+* Deploy and operate Aether with live 5G traffic.
 
-Aether OnRamp begins with a *Quick Start* deployment that is easy to
-bring up in a single VM, but then goes on to prescribe a sequence of
+Aether OnRamp begins with a *Quick Start* recipe that deploys Aether
+in a single VM or server, but then goes on to prescribe a sequence of
 steps users can follow to deploy increasingly complex configurations.
-OnRamp refers to each such configuration as a *blueprint*, and the set
-supports both emulated and physical RANs, along with the runtime
+OnRamp refers to each such configuration as a *blueprint*; the full
+set supports both emulated and physical RANs, along with the runtime
 machinery needed to operate an Aether cluster supporting live 5G
-workloads. (OnRamp also defines a 4G blueprint that can be used to
-connect one or more physical eNBs, but we postpone a discussion of
-that capability until a later section. Everything else in this guide
-assumes 5G.)
+workloads.\ [#]_ The goal of this guide is to help users take
+ownership of the Aether deployment process by incrementally exposing
+all the degrees of freedom Aether supports.
+
+.. [#] OnRamp also defines a 4G blueprint that can be used to
+       connect one or more physical eNBs, but we postpone a
+       discussion of that capability until a later section. Everything
+       else in this guide assumes 5G.
 
 Note that Aether OnRamp does not include SD-Fabric, which depends on
 programmable switching hardware. Readers interested in learning
@@ -37,14 +36,10 @@ Hands-on Programming appendix of our companion SDN book.
 .. _reading_pronto:
 .. admonition:: Further Reading
 
-   `Pronto Project: Building Secure Networks Through Verifiable
-   Closed-Loop Control `__.
-
    `Hands-on Programming (Appendix). Software-Defined Networks: A
    Systems Approach
    `__. November 2021.
 
-
 .. include:: directory.rst
 
 Aether OnRamp is still a work in progress, but anyone
@@ -52,4 +47,5 @@ interested in participating in that effort is encouraged to join the
 discussion on Slack in the `ONF Community Workspace
 `__. A roadmap for the work that needs to be done can be found in the
 `Aether OnRamp Wiki
-`__.
+`__.
+
diff --git a/software/roc.rst b/software/roc.rst
index 95a67eb..4c3a091 100644
--- a/software/roc.rst
+++ b/software/roc.rst
@@ -71,7 +71,10 @@ arbitrary and need only be consistent *within*
 ``radio-5g-models.json``.
 
 Once you are done with these edits, uninstall the SD-Core you had
 running in the previous stage, and then bring up the ROC followed by a
-new instantiation of the SD-Core:
+new instantiation of the SD-Core. The order is important because the
+Core depends on configuration parameters provided by the ROC. (You may
+also need to reboot the gNB, although it typically does so
+automatically when it detects that the Core has restarted.)
 
 .. code-block::
 
@@ -79,10 +82,19 @@ new instantiation of the SD-Core:
    $ make aether-amp-install
    $ make aether-5gc-install
 
-The order is important, since the Core depends on configuration
-parameters provided by the ROC. Also note that you may need to reboot
-the gNB, although it typically does so automatically when it detects
-that the Core has restarted.
+As an aside, the above uses the ``aether-amp-install`` Make target to
+install ROC, but that target also installs the monitoring service. The
+latter is not required in this situation, but you are always free to
+use subsystem-specific targets (as documented in the ``Makefile``)
+rather than the Aether-wide targets we've been using. For example,
+installing just ROC can be done in two steps: the first provisions ROC
+as a Kubernetes application, and the second loads the JSON files that
+define the Models into that service.
+
+.. code-block::
+
+   $ make roc-install
+   $ make roc-load
 
 To see these initial configuration values using the GUI, open the
 dashboard available at ``http://:31194``.
If you select
diff --git a/software/scale.rst b/software/scale.rst
index 6f5feea..7bf3071 100644
--- a/software/scale.rst
+++ b/software/scale.rst
@@ -16,11 +16,12 @@ There are two aspects of our deployment that scale independently. One
 is Aether proper: a Kubernetes cluster running the set of
 microservices that implement SD-Core and AMP (and optionally, other
 edge apps). The second is gNBsim: the emulated RAN that generates
-traffic directed at the Aether cluster. Minimally, two servers are
-required—one for the Aether cluster and one for gNBsim—with each able
-to scale independently. For example, having four servers would support
-a 3-node Aether cluster and a 1-node workload generator. This example
-configuration corresponds to the following ``hosts.ini`` file:
+traffic directed at the Aether cluster. The assumption in this section
+is that there are at least two servers—one for the Aether cluster and
+one for gNBsim—with each able to scale independently. For example,
+having four servers would support a 3-node Aether cluster and a 1-node
+workload generator. This example configuration corresponds to the
+following ``hosts.ini`` file:
 
 .. code-block::
 
@@ -36,21 +37,27 @@ configuration corresponds to the following ``hosts.ini`` file:
 
    [worker_nodes]
    node2
    node3
-   node4
 
    [gnbsim_nodes]
    node4
 
 The first block identifies all the nodes; the second block designates
-which node runs the Ansible client and the Kubernetes control plane
-(this is the node you ssh into and invoke Make targets and ``kubectl``
-commands); the third block designates the worker nodes being managed
-by the Ansible client; and the last block indicate which nodes run the
-gNBsim workload generator (gNBsim scales across multiple Docker
+which node runs the Kubernetes control plane (and where you invoke
+``kubectl`` commands); the third block designates the worker nodes in
+the Kubernetes cluster; and the last block indicates which nodes run
+the gNBsim workload generator (gNBsim scales across multiple Docker
 containers, but these containers are **not** managed by Kubernetes).
-Note that having ``master_nodes`` and ``gnbsim_nodes`` contain exactly
-one common server (as we did previously) is what triggers Ansible to
-instantiate the Quick Start configuration.
+
+Although not a requirement, this and the following sections make the
+simplifying assumption that you install OnRamp and invoke Make targets
+on the ``master_nodes``. (In general, the Ansible client that OnRamp
+uses to deploy Aether need not run on one of the servers listed in
+``hosts.ini``.) Also note that having ``master_nodes`` and
+``gnbsim_nodes`` contain exactly one common server (as we did
+previously) is what triggers Ansible to instantiate the Quick Start
+configuration. (The node groups need not be disjoint; a single node
+could, for example, be part of both ``worker_nodes`` and
+``gnbsim_nodes``.)
 
 You need to modify ``hosts.ini`` to match your target
 deployment. Once you've done that (and assuming you deleted your earlier Quick
diff --git a/software/start.rst b/software/start.rst
index 3990965..3f99e26 100644
--- a/software/start.rst
+++ b/software/start.rst
@@ -60,31 +60,28 @@ follows:
 
 ..
code-block::
 
-   $ sudo apt install pipx
-   $ sudo apt install python3.8-venv
+   $ sudo apt install sshpass python3-venv pipx make git
    $ pipx install --include-deps ansible
    $ pipx ensurepath
-   $ sudo apt-get install sshpass
+   $ source ~/.bashrc
 
 Once installed, displaying the Ansible version number should result in
-output similar to the following:
+output similar to the following on Ubuntu 20.04. (Ubuntu 22.04 will
+show ``ansible [core 2.16.4]``.)
 
 .. code-block::
 
    $ ansible --version
-   ansible [core 2.11.12]
+   ansible [core 2.13.13]
    config file = None
-   configured module search path = ['/home/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
-   ansible python module location = /home/foo/.local/lib/python3.6/site-packages/ansible
-   ansible collection location = /home/foo/.ansible/collections:/usr/share/ansible/collections
-   executable location = /home/foo/.local/bin/ansible
-   python version = 3.6.9 (default, Mar 10 2023, 16:46:00) [GCC 8.4.0]
-   jinja version = 3.0.3
+   configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
+   ansible python module location = /home/ubuntu/.local/pipx/venvs/ansible/lib/python3.8/site-packages/ansible
+   ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
+   executable location = /home/ubuntu/.local/bin/ansible
+   python version = 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0]
+   jinja version = 3.1.3
    libyaml = True
 
-Note that a fresh install of Ubuntu may be missing other packages that
-you need (e.g., ``git``, ``curl``, ``make``), but you will be prompted
-to install them as you step through the Quick Start sequence.
 
 Download Aether OnRamp
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -154,10 +151,16 @@ same server you will be installing Aether on.
 
 In this example, address ``10.76.28.113`` and the three occurrences of
 the string ``aether`` need to be replaced with the appropriate
-values. Note that if you set up your server to use SSH keys instead
-of passwords, then ``ansible_password=aether`` needs to be replaced
-with ``ansible_ssh_private_key_file=~/.ssh/id_rsa`` (or wherever
-your private key can be found).
+values.
+
+Note that if you set up your server to use SSH keys instead of
+passwords, update ``hosts.ini`` to reference your private key instead
+of a password (adjusting the key's location and filename as needed):
+
+.. code-block::
+
+   node1 ansible_host=10.76.28.113 ansible_user=aether ansible_ssh_private_key_file=~/.ssh/id_rsa
+
 
 The second set of parameters is in ``vars/main.yml``, where the
 **two** lines currently reading
@@ -213,31 +216,7 @@ you may want to modify as we move beyond the Quick Start deployment.
 We'll identify those files throughout this section, for informational
 purposes, and revisit them in later sections.
 
-Many of the tasks specified in the various Ansible playbooks result in
-calls to Kubernetes, either directly via ``kubectl``, or indirectly
-via ``helm``. This means that after executing the sequence of
-Makefile targets described in the rest of this guide, you'll want to
-run some combination of the following commands to verify that the
-right things happened:
-
-.. code-block::
-
-   $ kubectl get pods --all-namespaces
-   $ helm repo list
-   $ helm list --namespace kube-system
-
-The first reports the set of Kubernetes namespaces currently running;
-the second shows the known set of repos you are pulling charts from;
-and the third shows the version numbers of the charts currently
-deployed in the ``kube-system`` namespace.
-
-If you are not familiar with ``kubectl`` (the CLI for Kubernetes), we
-recommend that you start with `Kubernetes Tutorial
-`__.
-
-Note that we have not yet installed Kubernetes or Helm, so these
-commands are not yet available. At this point, the only verification
-step you can take is to type the following:
+At this point, the only verification step you can take is to type the
+following:
 
 .. code-block::
 
@@ -262,6 +241,26 @@ targets will output red results from time-to-time (indicating an
 exception or failure), but as long as Ansible keeps progressing
 through the playbook, such output can be safely ignored.
 
+Many of the tasks specified in the various Ansible playbooks result in
+calls to Kubernetes, either directly via ``kubectl``, or indirectly
+via ``helm``. This means that you may want to run some combination of
+the following commands to verify that the right things happened:
+
+.. code-block::
+
+   $ kubectl get pods --all-namespaces
+   $ helm repo list
+   $ helm list --namespace kube-system
+
+The first lists the pods running across all Kubernetes namespaces;
+the second shows the known set of repos you are pulling charts from;
+and the third shows the version numbers of the charts currently
+deployed in the ``kube-system`` namespace.
+
+If you are not familiar with ``kubectl`` (the CLI for Kubernetes), we
+recommend that you start with `Kubernetes Tutorial
+`__.
+
 Once the playbook completes, executing ``kubectl`` will show the
 ``kube-system`` namespace running, with output looking something like
 the following:
@@ -349,7 +348,7 @@ reasons, the Aether Core is called ``omec`` instead of ``sd-core``.
    directory`` task in the ``router`` role, it indicates that
    *systemd-networkd* is not configured as expected. Check the OnRamp
    `Troubleshooting Wiki Page
-   `__
+   `__
    for possible workarounds.
 
 If you are interested in seeing the details about how SD-Core is