The distribution images generated by Yocto are preconfigured as much as possible but some elements cannot be statically configured during image creation or are specific settings that cannot be included in a generic image.
This is the case for example for the network settings, the hostname or the high-availability cluster setup.
To perform these tasks we use Ansible, a tool designed to configure Linux machines.
Ansible applies actions contained in Ansible playbooks to Linux machines described, with their settings, in an Ansible inventory.
The Ansible documentation is accessible at https://docs.ansible.com/.
Machines that need to be configured by Ansible simply need to provide SSH access and have a Unix shell and a Python interpreter. The SEAPATH images already meet these requirements.
cqfd is a quick and convenient way to run commands in the current directory, but within a pre-defined Docker container. Using cqfd allows you to avoid installing anything other than Docker and repo on your development machine.
Note: We recommend using this method as it greatly simplifies the build configuration management process.
- Install repo and docker if they are not already installed. On Ubuntu, please run:
$ sudo apt-get install repo docker.io
- Install cqfd:
$ git clone https://github.com/savoirfairelinux/cqfd.git
$ cd cqfd
$ sudo make install
The project page on Github contains detailed information on usage and installation.
- Make sure that docker does not require sudo. Please use the following commands to add your user account to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Log out and log back in, so that your group membership can be re-evaluated.
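A quick way to confirm that Docker now works without sudo is, for example, to run a test container:
$ docker run hello-world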
The first step with cqfd is to create the build container. For this, use the cqfd init command in the Ansible directory:
$ cqfd init
Note: The step above is only required once; once the container image has been created on your machine, it will become persistent. Further calls to cqfd init will do nothing, unless the container definition (.cqfd/docker/Dockerfile) has changed in the source tree.
You can now run commands through cqfd by using cqfd run followed by the command to run. For instance:
$ cqfd run ansible-playbook -i inventory.yaml myplaybook.yaml
Note: From now on, you must prefix all Ansible commands with cqfd run.
Without cqfd you need to install the dependencies manually.
The client machine that is going to run Ansible must have Ansible 2.9 installed, an inventory file and playbook files to run. To install Ansible 2.9 on this machine, please refer to the Ansible documentation at
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html.
Warning: Currently only Ansible version 2.9 is supported. Other versions will not work. Ansible 2.9 is the version packaged in Ubuntu 20.04 and Fedora 33.
You must also install the netaddr and six Python 3 modules, as well as the rsync package.
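As an illustration, on an Ubuntu 20.04 client machine the whole dependency set can be installed with the distribution packages (the exact package names are an assumption here; adapt them to your distribution):
$ sudo apt-get install ansible python3-netaddr python3-six rsync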
Ansible plays playbooks on hosts described in an Ansible inventory. This inventory describes the hosts, how to access them and their configuration. Hosts can be grouped into groups. The Ansible inventory documentation is available at https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html.
In the examples/inventories directory you can find the file inventory_example.yaml, a basic example of an Ansible inventory for a SEAPATH cluster composed of two hypervisors, an observer and a virtual machine. This file is in YAML format. Other formats are valid for inventory files, but in this document we will only cover the YAML format. This file also contains some commented examples of common variables that can be used with Ansible, but does not contain the variables used by the SEAPATH playbooks.
Note: If you are not familiar with the YAML format you will find a description here: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
You need to pass your inventory file to all Ansible commands with the -i argument. To validate your Ansible inventory file you can use the ansible-inventory command with the --list argument.
For instance, if your inventory file is cluster.yaml:
$ ansible-inventory -i cluster.yaml --list
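ansible-inventory can also print a condensed tree of the groups and hosts with the --graph argument, which is convenient for a quick visual check:
$ ansible-inventory -i cluster.yaml --graph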
An Ansible inventory file follows a hierarchy. Ansible actions can later be applied to all hosts included in a given level. Each level can have hosts and vars (variables). The top level is all: hosts defined here are ungrouped and vars are global. By defining a children entry under all you can define groups. For instance:
all:
  hosts:
    host1:
  vars:
    my_global_var: variable_content
  children:
    group1:
      hosts:
        host2:
        host3:
      vars:
        my_group1_scope_variable: variable_content
    group2:
      hosts:
        host4:
          my_host_variable: variable_content
Once you have an Ansible inventory you can test the connection to the hosts with the ping module:
$ ansible -i cluster.yaml all -m ping
Like all Ansible commands, you need to specify your inventory file with the -i argument and the host or group to which the action applies.
Here we use the ping module, selected with the -m ping argument.
To check all hosts in group1:
$ ansible -i cluster.yaml group1 -m ping
To check only host3:
$ ansible -i cluster.yaml host3 -m ping
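As noted above, if you rely on cqfd the same checks are simply prefixed with cqfd run, for example:
$ cqfd run ansible -i cluster.yaml host3 -m ping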
In the examples/inventories folder there is also another inventory example: advanced_inventory_example.yaml. This example adds the variables with their descriptions used by the SEAPATH playbooks. This inventory file should be used as a starting point for writing your inventory file.
Playbooks are files that will contain the actions to be performed by Ansible. For more information about playbooks, see the Ansible documentation: https://docs.ansible.com/ansible/2.9/user_guide/playbooks.html. Ready-to-use playbooks are provided in this repository. Playbooks performing specific actions such as importing a disk will have to be written by you, referring if necessary to the playbook examples in the examples/playbooks folder.
To make writing playbooks easier and simpler, Ansible provides roles, which group reusable tasks that can later be used in other playbooks.
The roles useful for this project can be found in the roles folder. Each role contains a README file describing its use.
Calling a role in a playbook is done as in the example below:
- hosts: hypervisors
  vars:
    - disk_name: disk
    - action: check
  roles:
    - seapath_manage_disks
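Assuming such a playbook is saved as, for example, my_disk_playbook.yaml (a hypothetical name), it is run like any other playbook against your inventory:
$ ansible-playbook -i cluster.yaml my_disk_playbook.yaml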
For more information about roles see: https://docs.ansible.com/ansible/2.9/user_guide/playbooks_reuse_roles.html
First, make sure you are using the git branch corresponding to your version of SEAPATH.
On SEAPATH Debian:
$ git checkout debian-main
On SEAPATH Yocto:
$ git checkout main
Before you can start using playbooks to configure and manage your SEAPATH system, you need to write the inventory file describing your cluster or your standalone setup. To do this you can rely on the example files advanced_inventory_example_cluster.yaml and advanced_inventory_example_standalone.yaml in the examples folder.
You can place your inventory file in the inventories folder provided for this purpose.
In the rest of this document, we will assume that the cluster inventory file is called cluster_inventory.yaml, that the standalone inventory file is called standalone_inventory.yaml, and that both are placed in the inventories folder.
To set up a SEAPATH machine you can use the playbook seapath_setup_main.yaml, which groups together the other playbooks. This playbook also configures the cluster on the machines described in the cluster_machines Ansible group.
To launch the playbook seapath_setup_main.yaml use the following command:
$ ansible-playbook -i inventories/cluster_inventory.yaml playbooks/seapath_setup_main.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml playbooks/seapath_setup_main.yaml
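If you only want to (re)apply the configuration to a subset of the machines, the standard ansible-playbook --limit option can be used, for example to target only the hypervisors group (assuming your inventory defines such a group):
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml --limit hypervisors playbooks/seapath_setup_main.yaml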
The SEAPATH Ansible modules documentation is published on Ansible Galaxy.
A basic virtual machine for SEAPATH based on Debian can be created using the build_debian_iso repository.
You can also create a Yocto VM using the cqfd flavour guest_efi, as described in the yocto-bsp repository, in the following way:
$ cqfd -b guest_efi
To deploy this machine on the cluster, follow these steps (a short example of the image preparation is shown after the deployment commands below):
- Create a folder vm_images at the base of this repo.
- Place the generated qcow2 file in the vm_images directory with the name guest.qcow2.
- Create an inventory describing your virtual machines. An example can be found in examples/inventories.
- For a cluster, call the playbook playbooks/deploy_vms_cluster.yaml:
$ ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_cluster.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_cluster.yaml
Otherwise, for a standalone setup, call the playbook playbooks/deploy_vms_standalone.yaml:
$ ansible-playbook -i inventories/standalone_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_standalone.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/standalone_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_standalone.yaml
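As an illustration of the layout described in the list above, preparing the image can look like this (the source path is specific to your build and only given as an example):
$ mkdir -p vm_images
$ cp /path/to/your-build/guest.qcow2 vm_images/guest.qcow2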
Machines are updated by deploying a software update (.swu) image.
First, create a swu file using the yocto-bsp repository.
Then, the update is deployed by Ansible. You need to pass two variables on the command line:
- machine_to_update is the name of the machine that Ansible will update.
- swu_image is the name of the swu file that was created in yocto-bsp.
Note: The swu image must be placed in the swu_images directory.
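For example (the source path of the .swu file depends on your yocto-bsp build and is only illustrative):
$ mkdir -p swu_images
$ cp /path/to/update.swu swu_images/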
For the update of a machine in the cluster, call the playbook playbooks/update_machine_cluster.yaml:
$ ansible-playbook -i inventories/cluster_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_cluster.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -e "machine_to_update=node" -e "swu_image=update.swu" playbooks/update_machine_cluster.yaml
Otherwise, for a standalone setup, call the playbook playbooks/update_machine_standalone.yaml:
$ ansible-playbook -i inventories/standalone_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_standalone.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/standalone_inventory.yaml -e "machine_to_update=node" -e "swu_image=update.swu" playbooks/update_machine_standalone.yaml