
The RedHat Multi-Arch QE provisioner project is an effort to provide Jenkins CI users with an easy way to build and test packages across all architectures. It is meant to go hand in hand with the multiarch-ci-test-template project so that users can get up and running with their tests quickly. Currently, this project is only expected to work in RedHat's internal environment, though we are making active progress towards supporting general usage.

This documentation describes installation and usage for the Multi-Arch CI Provisioner version v1.2.2. This page will always target the latest release.

Getting Started

The Multi-Arch CI Provisioner project supports a variety of use cases. For that reason, we have designed the installation playbook to be flexible enough to meet most needs. Outlined below are the basic options for installation and the environments in which they are supported.

There are two main options for installation:

  1. Installation of a new single-node OpenShift cluster capable of running the provisioning container.

Supports x86_64 hosts running Fedora, RHEL 7, or CentOS 7. It is recommended that the cluster host or VM have at least 4GB of available RAM, 2 vCPUs, and 20GB of available disk space.

  2. Installation of the provisioning container and a Jenkins master container pre-configured to support provisioning multi-arch slaves from RedHat's beaker instance.

Supports x86_64 clusters internal to RedHat only.

Additionally, a cluster with the Jenkins master and provisioner installed can be connected to a Jenkins master external to the cluster, enabling that master to provision multi-arch slaves from RedHat's beaker instance. Instructions for doing this are described in Configuring a Jenkins Master External to Cluster.

Supports Jenkins masters internal to RedHat only.

Installing the Prerequisites

The first step to using the Multi-Arch CI Provisioner is installing the prerequisites on the host you'll be running the installation from. The following shell commands do exactly that: they install Ansible and git, clone the provisioner repository, and check out the release.

$ sudo yum install ansible git -y
$ git clone https://github.com/Redhat-MultiArch-QE/multiarch-ci-provisioner
$ cd multiarch-ci-provisioner
$ git checkout v1.2.2

Configuring the Installation Playbook

Installation Options

As described above, the installation playbook supports several installation types. You can configure the installation you want by modifying the configuration file at install/ansible/group_vars/openshift_master.yml.

The key variables to override are the following:

# For openshift cluster deployment
deploy_cluster: true
# For multiarch-qe provisioner deployment
deploy_provisioner: true
Full Installation

If you just want to install a new single-node OpenShift cluster and install the container capable of multi-arch provisioning on it, you should set deploy_cluster to true and deploy_provisioner to true.

Installing Jenkins Master with Multi-Arch Provisioning on an Existing Cluster

If you want to install the provisioning environment on an existing OpenShift cluster, you should set deploy_cluster to false and deploy_provisioner to true.
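For this scenario, a minimal sketch of install/ansible/group_vars/openshift_master.yml would contain:

# Reuse an existing OpenShift cluster
deploy_cluster: false
# Deploy the multiarch-qe provisioner onto it
deploy_provisioner: true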

Setting the target host(s)

The first step to running the provisioner is to set the target host(s) for your play. Hosts are consumed as an Ansible inventory file. A default inventory file is provided at install/ansible/default.inventory.

The default inventory file looks like so:

[openshift_master]
localhost

If the target host for your installation is localhost, no modifications need to be made. However, if you're planning to run this playbook on a remote host, you'll want to update the target for [openshift_master] to an FQDN or public IP for the server.

[openshift_master]
10.8.25.48 ansible_user=centos

If you plan on using beaker to provision hosts, you'll need to use a name (or IP) that beaker knows about.

While names like centos@10.8.25.48 are valid inventory file syntax, our plays depend on the inventory hostname being clean of user specifications. If you need to specify a user for the play, please override the ansible_user variable as shown above.

Creating and Preparing Secrets

If you are not planning to provision slaves from beaker, you can skip this section in its entirety. If you are planning to provision slaves from beaker, you'll need to pre-install the "secrets" the provisioner will use to authenticate with beaker and the provisioned hosts. In the credentials directory, you want to install the following files.

  • id_rsa
  • id_rsa.pub
  • [kerberos-principal-name].keytab

Additionally, if you are planning to connect this provisioning cluster to a Jenkins master outside of the environment and/or a Central CI Jenkins master, you may want to retrieve the Central CI Jenkins Master keytab first, which is described here.

Should your Kerberos principal name contain '/' characters, you'll need to create a directory for each path segment to the left of a '/'. For example, a keytab named jenkins/multi-arch/qe/keytab.keytab would live at credentials/jenkins/multi-arch/qe/keytab.keytab, as shown below.
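Continuing that example, a minimal sketch of the required shell commands (the source path of your keytab is hypothetical):

$ mkdir -p credentials/jenkins/multi-arch/qe
$ cp /path/to/your/keytab.keytab credentials/jenkins/multi-arch/qe/keytab.keytab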

To help with this step, there is a shell script you can run to generate a keytab and a keypair.

$ bash install/secrets/generate_secrets.sh

Once you've created the SSH keys, upload the public key you'll be using to beaker: https://beaker.engineering.redhat.com/prefs/#ssh-public-keys
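If you used the generated keypair in the credentials directory, you can print the public key to copy it into the beaker web form:

$ cat credentials/id_rsa.pub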

Ensure you're logged into beaker as the same Kerberos service principal whose keytab you're using before uploading this key! You can authenticate as that user by running kinit with your keytab prior to going to the beaker web portal. For more information on how to correctly set up your Kerberos tickets, you can visit the Setting Up Kerberos Guide.
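For example, a minimal kinit sketch using the keytab (the EXAMPLE.COM realm and the principal name are placeholders for your own):

$ kinit -kt credentials/[kerberos-principal-name].keytab [kerberos-principal-name]@EXAMPLE.COM
$ klist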

Running the Installer

Now that you've configured the playbook to suit your installation needs, it's time to run the installation!

In order to run Ansible against a Fedora 28 target host, you must install python2 or set ansible_python_interpreter to /usr/bin/python3 for the target host before running the installer. Additionally, docker version 1.12.1-56 (the default on Fedora 28) is incompatible with the installer, so you'll want to either upgrade or downgrade it before starting.
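For example, a minimal inventory sketch for a Fedora 28 target that points Ansible at python3 (the IP and ansible_user values are placeholders):

[openshift_master]
10.8.25.48 ansible_user=fedora ansible_python_interpreter=/usr/bin/python3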

The commands to run the installation are shown below. They expect that the user can authenticate to the target host via public key authentication. If public key authentication is not set up, you can modify the ansible-playbook line below to read $ ansible-playbook -K -k -i [inventory-file-name] install.yml so that it prompts for an SSH password.

The ansible-playbook command assumes that you are running as a user with SSH access to the host you're connecting to. If you're using the default.inventory file, you may need to use $ ansible-playbook --connection=local -K -i [inventory-file-name] install.yml to avoid making an unnecessary SSH connection.

Additionally, if you're running as root, you can omit the -K since you do not need to be prompted for elevated privileges.

$ cd install/ansible
$ ansible-playbook -K -i [inventory-file-name] install.yml

Now just follow the steps of the provisioner install until it completes; your provisioning cluster should install without errors. After the playbook completes, the console should be available at https://hostname:8443/console. To log in to the console, you can use the default account with username developer and password developer (the password can actually be any set of characters).
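If you prefer the command line, a sketch of logging in with the oc client (substitute your cluster's hostname):

$ oc login https://hostname:8443 -u developer -p developer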

After a short while, the Jenkins pod should also become available. It usually takes some time for the Jenkins pod to build and deploy, so don't be surprised if it's not immediately accessible. It can take upwards of 10-15 minutes depending on the capabilities of the host system.
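To watch the pod come up and find the Jenkins route from the command line, a sketch using the oc client (assuming the project name noted below):

$ oc get pods -n redhat-multiarch-qe
$ oc get routes -n redhat-multiarch-qe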

Once it's ready, you should find that your Jenkins pod is installed in the project redhat-multiarch-qe and has a service address like https://jenkins-redhat-multiarch-qe.[ip-address].xip.io. The login credentials for the Jenkins environment are the same as for the OpenShift console. For more information on your cluster, you can visit the Origin OC Cluster Up/Down Documentation. Happy testing!

Running Your First Multi-Arch Test

Now that you've installed the provisioner, you should be ready to use the environment to run multi-arch tests. By default, the provisioner installs the multiarch-ci-test-template on a Jenkins pod in your environment. To run it for the first time, you can simply log in to the Jenkins environment, navigate to the job, and hit Scan Multibranch Pipeline Now. This will kick off your first multi-arch test!

Alternatively, you can hook up your new provisioning cluster to an existing Jenkins master. This is useful if you have a Jenkins master available through environments like Central CI. Once this is done, you can proceed to the Test Template wiki to learn how to manually install and run your first multi-arch test.

Multi-Arch CI Test Template

Documentation for the test template lives on the Test Template wiki. This is the bread and butter of how to "use" our setup when you have it installed.

Other Documentation

Documentation for the library lives on the Libraries wiki. You can think of the library as the API that the test template calls into to get the resources from the provisioner.

Support

If you're having trouble with any of the documentation, please email us at multiarch-qe@redhat.com for support.