
Installing Integreatly

Overview

The purpose of this repository is to provide a set of Ansible playbooks that can be used to install a range of Red Hat middleware products on OpenShift.

These products include:

  • Single Sign On

  • Managed Services Broker

  • EnMasse

  • Eclipse Che

  • Launcher

  • 3Scale

Prerequisites

Table 1. Prerequisites

Requirement                  | Version
Ansible                      | >= v2.6
OpenShift Container Platform | >= v3.10
OpenShift CLI (oc)           | >= v3.10

⚠️
  • SSH access to OpenShift master(s)

  • Cluster administrator permissions

  • The SSH user defined in the inventory, ansible_user, needs to have sudo permissions

Installation Steps

The following section demonstrates how to install each of the products listed above on an existing OpenShift cluster.

1. Clone the installation Git repository locally

git clone https://github.com/integr8ly/installation.git

2. Create the inventory hosts file

  1. Create the hosts file based on the template (../inventories/hosts.default). For example:

    $ cp inventories/hosts.template inventories/hosts
  2. Update the hosts file to connect to your OpenShift cluster

    Prior to running the playbooks, the master hostname and associated SSH username must be set in the inventory hosts file to match the target cluster configuration. The following example sets the SSH username to evals and the master hostname to master.evals.example.com:

    ~/installation/inventories/hosts
    
    [OSEv3:children]
    master
    
    [OSEv3:vars]
    ansible_user=evals
    
    [master]
    master.evals.example.com
    ℹ️
    It is possible to add the variable ansible_ssh_private_key_file for the master host when the SSH connection requires key-based authentication (e.g. ansible_ssh_private_key_file=~/.ssh/ocp-workshop.pem).
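
    As an alternative to setting the variable in the inventory, the key can also be passed on the command line. The following is only a sketch and the key path is an example:

    # Verify SSH connectivity using an explicit private key (example path)
    $ ansible -i inventories/hosts -m ping all --private-key ~/.ssh/ocp-workshop.pem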

3. Check the connection with the OpenShift cluster

Run the following command from the root of the repository to check the connection to the OpenShift cluster.

$ ansible -m ping all

The following is an example of the expected output.

$ ansible -m ping all
master.example.openshiftworkshop.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

4. Log into OpenShift

Before running the scripts, you must log in to the master/OCP via the oc client tool. For example:

oc login master.example.openshiftworkshop.com -u <user> -p <password>

5. Install all products from a single playbook

All products can be installed using the install.yml playbook located in the playbooks/ directory.

Before running the installer, please consider the following variables:

Table 2. Install playbook variables

Variable | Description
eval_self_signed_certs | Whether the OpenShift cluster uses self-signed certs or not. Defaults to false
eval_threescale_enable_wildcard_route | Whether 3Scale enables wildcard routing. Defaults to false
github_client_id | GitHub OAuth client ID to enable GitHub authorization for Launcher. If not defined, GitHub authorization for Launcher will be disabled
github_client_secret | GitHub OAuth client secret to enable GitHub authorization for Launcher. If not defined, GitHub authorization for Launcher will be disabled
prerequisites_install | Boolean var that, when set to false, skips the installation of system-wide tools/packages required by the installer (needs to be set to false when running the installer in a Linux container). Defaults to true

Some products can be excluded from the install by setting a variable. For example, setting gitea=false will skip the Gitea install (see the example below). Installing individual products is not a supported installation method with this repository. You can, however, write your own playbook to execute specific roles and tasks as needed.
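
For example, a run that treats the cluster certificates as self-signed and skips the Gitea install could look like the following sketch (the variable values are illustrative; adjust them to your environment):

$ ansible-playbook -i inventories/hosts playbooks/install.yml \
    -e eval_self_signed_certs=true \
    -e gitea=false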

By default Mobile Developer Services will not be installed. In order to install these services, consider the following variables.

Table 3. Install Mobile Developer Services

Variable | Description
mobile_security_service | Whether Mobile Security Service will be installed or not. Defaults to false
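
To enable it, the variable can be passed to the install playbook, as in this sketch:

$ ansible-playbook -i inventories/hosts playbooks/install.yml -e mobile_security_service=true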

5.1. Create a GitHub OAuth App to enable GitHub authorization for Launcher

  1. Log in to GitHub

  2. Go to Settings >> Developer Settings >> New OAuth App. The following image illustrates this area.

    GitHub OAuth App
  3. Add the following field values

    Table 4. Field value descriptions

    Field                      | Value
    Application Name           | Any value
    Home Page URL              | http://localhost
    Authorization callback URL | http://localhost

    The callback URL is a placeholder for now and will be changed after the installation playbook is finished.
  4. Click on Register Application

  5. The Client ID and Client Secret values found in the GitHub OAuth App will be required in the next step to install Integreatly with GitHub authorization for Launcher enabled. The following is an example of this screen.

    GitHub OAuth App Fields

5.2. Run the playbook

$ oc login https://<openshift-master-url> -u <user> -p <password>
$ ansible-playbook -i inventories/hosts playbooks/install.yml -e github_client_id=<your_client_id> -e github_client_secret=<your_client_secret>
💡

The following command installs Integreatly without GitHub authorization for Launcher.

$ ansible-playbook -i inventories/hosts playbooks/install.yml

5.3. Add the generated Authorization callback URL to GitHub OAuth

The following is an example of the output shown at the end of the playbook containing this URL.

TASK [debug] *************************************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
    "msg": "All services have been provisioned successfully. Please add 'https://launcher-sso-launcher.apps.example.openshiftworkshop.com/auth/realms/launcher_realm/broker/github/endpoint' as the Authorization callback URL of your GitHub OAuth Application."
}

The http://localhost placeholder added in the GitHub OAuth App should be replaced with this value, as in the following example.

GitHub OAuth authorization callback URL

5.4. Add backup jobs

NOTE: This playbook needs to be run against an existing Integreatly cluster.

NOTE: Requires an existing S3 secret named s3-credentials in the backup namespace.

Sample command:

ansible-playbook \
-i inventories/hosts \
-e 'backup_schedule="30 2 * * *"' \
-e 'backup_namespace=openshift-integreatly-backups' \
playbooks/install_backups.yml

Parameters:

Variable | Description | Default
backup_version | backup-container-image tag version | the most recent backup-container-image tag
backup_resources_location | HTTP URL to download the OpenShift cronjob template file(s) | https://raw.githubusercontent.com/integr8ly/backup-container-image/{{ backup_version }}/templates/openshift
backup_image | backup image name to use (used together with the backup_version var) | quay.io/integreatly/backup-container:{{ backup_version }}
backup_schedule | the cronjob schedule for all jobs. NOTE: always encapsulate the value in quotes, e.g. -e 'backup_schedule="30 2 * * *"' | 30 2 * * *
backup_namespace | backup namespace name to which all cronjobs are added | openshift-integreatly-backups
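
For example, a run that pins a specific backup-container-image tag and keeps the default schedule might look like the following sketch (the tag is a placeholder):

ansible-playbook \
-i inventories/hosts \
-e 'backup_version=<backup_container_image_tag>' \
-e 'backup_schedule="30 2 * * *"' \
playbooks/install_backups.yml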

6. Check the installation

Once the installation has finished, you will no longer be able to log in via the OpenShift console or the oc CLI as the original admin if an SSO redirect is in place. The new admin user is admin@example.com and the password is Password1.
integr8ly WebApp
💡
The Webapp project is responsible for the Integreatly interface. You can find the URL by looking at the route created for this project, as in the following example.
integr8ly WebApp Router
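
The route can also be looked up with the oc client; the project name webapp used here is an assumption and may differ in your cluster:

$ oc get routes -n webapp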

Also, with the evals users created by the installer, it is possible to check the services in the OpenShift catalog.

The default login credentials are admin@example.com / Password1

The following image shows this console as an example.

OCP Console with integr8ly

Uninstalling Integreatly

Run the uninstall.yml playbook from the root of the repository:

$ ansible-playbook -i inventories/hosts playbooks/uninstall.yml

By default this will delete all user-created namespaces as well. If you wish to keep these namespaces, add the following flag:

-e keep_namespaces=true
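
Putting the two together, an uninstall that preserves user-created namespaces looks like this:

$ ansible-playbook -i inventories/hosts playbooks/uninstall.yml -e keep_namespaces=true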

Troubleshooting

Message "You need to install \"jmespath\" prior to running json_query filter" is shown when the installation fails

This means that the Python interpreter used by Ansible does not have this required module. To fix it, install the missing module. The following command installs it via pip.

$ pip install jmespath
ℹ️
The module needs to be installed for the same Python version used by Ansible. Use the command $ ansible --version to check this path.
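
One way to make sure the module lands in the interpreter Ansible uses is to install it with that interpreter's own pip. The interpreter path below is only an example and will differ per system:

# Show which Python interpreter Ansible is using
$ ansible --version | grep "python version"
# Install the module for that interpreter (example path)
$ /usr/bin/python3 -m pip install jmespath
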
Message "jsonpointer module is not available" is shown when the installation fails

This means that the Python interpreter used by Ansible does not have this required module. To fix it, install the missing module. The following command installs it via pip.

$ pip install jsonpointer

Also, you might need to set the variable ansible_python_interpreter in the hosts file to fix it, for example:

[local:vars]
ansible_connection=local
ansible_python_interpreter=python
💡
The module needs to be installed for the same Python version used by Ansible. Use the command $ ansible --version to check this path.

Self Signed Certs with Che

If your cluster is using a self-signed (non-CA) certificate, there are a couple of things that need to be noted.

  • Che will only allow the creation of one workspace when self signed certificates are used.

  • When a workspace is created, the following errors may appear on the workspace:

Connection failed with terminal
Some error happened with terminal WebSocket connection
Failed to import project
  • In order to solve these issues, you will need to accept the certs for all the routes that were created for that workspace. These routes are listed in the workspace deployment within the Che namespace, and can also be listed from the command line, as in the sketch below.
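
A minimal way to list those routes with the oc client; the namespace name che is an assumption and depends on how Che was installed:

$ oc get routes -n che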
