
multiarch-upi-playbooks

This repository provides a set of Ansible playbooks that automate the installation of a minimal development/testing OpenShift cluster on the z/VM platform.

Please see the official OpenShift GitHub repository for a more thorough description of the hardware and networking infrastructure prerequisites.

Example environment prerequisites

OpenShift Nodes

  • 3+ z/VM guests with master node specs
    • 4+ vCPUs
    • 16GB+ RAM
  • 2+ z/VM guests with worker node specs
    • 2+ vCPUs
    • 16GB+ RAM
  • 1 z/VM guest with bootstrap node specs
    • 2+ vCPUs
    • 16GB+ RAM

Auxiliary Lab Infrastructure Service Provider

This host is not considered part of the cluster, but it provides the network infrastructure services in the example environment. We use a single host for all of these services for the sake of simplicity. The environment is intended to demonstrate how the user-provided infrastructure components fit together; it is not a suitable production configuration.

In the automation, this host is referred to as the bastion workstation.

  • 1 z/VM guest with RHEL 8 installed
    • 2+ vCPUs
    • 8GB+ RAM

Structure of the automation

The example Ansible playbooks in this repository are broken down by role. The network infrastructure service playbooks are located in playbooks/examples/. Each of these playbooks is optional and can be replaced with a preferred equivalent network service.

After the network infrastructure is set up, whether through the playbooks/examples/ playbooks or through alternatives of your choosing, the cluster can be deployed by running the playbooks/create-cluster.yml playbook and then performing a manual IPL of Red Hat CoreOS on each OpenShift node.

Usage Instructions for Example Cluster Playbooks

Configuring the cluster definition

The central configuration file for the automation is located at group_vars/all.yml. This file defines the IP and MAC addresses of the OpenShift nodes and of the bastion services host. It also contains fields denoting the download URLs for the OpenShift installer and CoreOS images. Most of the variables will need to be modified based on your existing network setup. The variables are documented directly in the config file, so we will not discuss all of them in this README. A few key things to note:

The dns_nameserver value should be the IP address of a lab or public DNS server which will be set as the forward server for the example DNS service.

The bastion_public_ip_address and bastion_private_ip_address fields are normally the same value. They differ only when the bastion is set up to act as the network gateway for the other nodes.
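For illustration, a minimal sketch of how these variables might appear in group_vars/all.yml; the addresses below are placeholders, and the full set of variables (node IP/MAC addresses, installer and image URLs) is documented in the file itself:

dns_nameserver: 192.168.100.1             # lab or public DNS forwarder (placeholder)
bastion_public_ip_address: 192.168.100.2  # placeholder
bastion_private_ip_address: 192.168.100.2 # same as public unless the bastion is the gateway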

Running the Ansible Playbooks

By default, we create services on the bastion services host for DNS, load balancing, and file serving. We do not use the dhcp or masquerade playbooks in the default example.

Configure your Ansible inventory to point at the bastion services host, as in the example below.
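A minimal inventory might look like the following; the group name, hostname, and address are placeholders, so match them to your environment and to the hosts targeted by the playbooks:

[bastion]
bastion-host ansible_host=192.168.100.2 ansible_user=root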

  1. Configure the DNS service
$ ansible-playbook -i inventory playbooks/examples/configure-dns.yml
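Once this completes, you can sanity-check name resolution against the bastion with dig; the names below are placeholders for your cluster's API record and bastion address:
$ dig +short api.<cluster-name>.<base-domain> @<bastion-ip>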
  2. Configure the HAProxy load balancer service
$ ansible-playbook -i inventory playbooks/examples/configure-haproxy.yml
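To confirm the load balancer came up, check that HAProxy is listening on the standard OpenShift API and machine-config server ports (6443 and 22623) on the bastion:
$ ss -tlnp | grep -E '6443|22623'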
  3. Configure Apache to serve the Ignition configs
$ ansible-playbook -i inventory playbooks/examples/configure-apache-ignition.yml
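You can verify that the Ignition configs are reachable with curl; the port and file name here are assumptions based on a typical UPI setup, so adjust them to whatever the playbook configured:
$ curl -I http://<bastion-ip>:8080/bootstrap.ign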
  4. Run the create-cluster playbook
$ ansible-playbook -i inventory playbooks/create-cluster.yml
  5. Monitor until the automation reaches the following step:
TASK [/<pwd>/multiarch-upi-playbooks/playbooks/examples/roles/create-cluster : wait for bootstrap node accessibility] ***
ok: [bastion-host] => (item=bootstrap-0)
  6. IPL the Red Hat CoreOS nodes. Kernel parameter files generated by the automation will be located on the bastion host in /root/rhcos-bootfiles.
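As a rough sketch of one common approach (not part of this automation): from a CMS session on each guest, punch the kernel, parameter file, and initramfs to the guest's virtual reader and IPL from it. The file names are illustrative and assume the boot files have already been transferred to the guest:
CL RDR
PUN RHCOS KERNEL A (NOH
PUN RHCOS PARM A (NOH
PUN RHCOS INITRD A (NOH
CH RDR ALL KEEP NOHOLD
I 00C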

  7. Configure the image registry storage provider

  • For development/testing environments only, patch the image registry to use local ephemeral storage, as sketched after this list.
  • For a more robust setup, use NFS instead. TODO: document storage.
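As a minimal sketch for development/testing clusters only: the image registry operator can be patched to use emptyDir storage, which is lost whenever the registry pod restarts:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'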
