The purpose of this repository is to provide an automated way to provision an OpenShift 4 development environment hosted on GCP at the lowest possible cost.
This project offers three strategies for provisioning an OpenShift 4 development environment in GCP:

- CRC (CodeReady Containers): installation using the prebuilt CRC bundle (single node).
- SNC (Single Node Cluster): installation via Libvirt with a single node, built from the code-ready/snc scripts.
- MNC (Multi Node Cluster): installation via Libvirt with multiple nodes.
All strategies are deployed on a single GCP instance, and all of them use the Libvirt provider to create nested VMs inside that instance.
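Because everything runs as nested VMs, the GCP instance must support nested virtualization. As a quick sanity check (a minimal sketch; `virt-host-validate` assumes the libvirt client tools are already installed), you can verify it from inside the instance:

```shell
# Inside the GCP instance: a non-zero count means the CPU exposes
# hardware virtualization (vmx = Intel, svm = AMD) to the guest.
grep -cE 'vmx|svm' /proc/cpuinfo

# /dev/kvm must exist for Libvirt to create nested VMs.
ls -l /dev/kvm

# Optional, if the libvirt client tools are installed:
virt-host-validate qemu
```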
Below is a summary of the characteristics of each strategy:
| | CRC | SNC | MNC |
|---|---|---|---|
| Uses a single GCP instance? | YES | YES | YES |
| Single node or multi node (number of VMs created inside the GCP instance)? | Single node | Single node | Multi node |
| Allows installing any version of OCP? | NO | YES | YES |
| Allows using wildcard DNS services like nip.io? | NO | YES | YES |
| Resource consumption | Low | Medium | High |
| Time to provision the environment | Low | High | High |
Go to Cloud Shell and run the following commands:
Create a new project:

```shell
export TF_VAR_PROJECT_ID=$(python3 -c 'import uuid; print("c" + str(uuid.uuid4().hex[:29]))')
echo "Project ID:" $TF_VAR_PROJECT_ID
echo "export TF_VAR_PROJECT_ID=\"$TF_VAR_PROJECT_ID\"" >> ~/.profile
gcloud projects create $TF_VAR_PROJECT_ID --name="CRConGCP" --labels=type=crc --format="json" --quiet
gcloud config set project $TF_VAR_PROJECT_ID
```
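Optionally, verify that the project was created (a quick sanity check, not part of the original flow):

```shell
# Should print the new project ID and ACTIVE
gcloud projects describe $TF_VAR_PROJECT_ID --format="value(projectId,lifecycleState)"
```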
Link the new project to your first billing account (if you have more than one billing account, adjust the commands below):

```shell
export ACCOUNT_ID=$(gcloud alpha billing accounts list --filter='open:TRUE' --format='value(ACCOUNT_ID)' --limit=1)
echo "Billing ACCOUNT ID:" $ACCOUNT_ID
gcloud alpha billing projects link $TF_VAR_PROJECT_ID --billing-account $ACCOUNT_ID
```
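To double-check the link, you can query the project's billing info (an optional verification; this uses the beta billing surface, which Cloud Shell ships with):

```shell
# billingEnabled should be True once the project is linked
gcloud beta billing projects describe $TF_VAR_PROJECT_ID --format="value(billingEnabled)"
```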
Enable the Compute Engine API:

```shell
gcloud services enable compute.googleapis.com
```
Configure Terraform:

```shell
git clone https://github.com/danielmenezesbr/crc-on-gcp-terraform
cd crc-on-gcp-terraform
./terraformDocker.sh init
cat >variables.auto.tfvars.json <<EOL
{
  "project_id": "$TF_VAR_PROJECT_ID"
}
EOL
touch secrets.tfvars
```
Create a service account. Terraform uses this service account to provision the environment in GCP.

```shell
gcloud iam service-accounts create terraformuser
gcloud iam service-accounts keys create "terraform.key.json" --iam-account "terraformuser@$TF_VAR_PROJECT_ID.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding $TF_VAR_PROJECT_ID --member "serviceAccount:terraformuser@$TF_VAR_PROJECT_ID.iam.gserviceaccount.com" --role 'roles/owner'
```
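If you want to confirm the role binding took effect (an optional check):

```shell
# Lists the roles granted to the terraformuser service account; expect roles/owner
gcloud projects get-iam-policy $TF_VAR_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:terraformuser@$TF_VAR_PROJECT_ID.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```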
Create `pull-secret.txt`. Go to https://cloud.redhat.com/openshift/create/local, click "Copy pull secret", and paste it below:

```shell
cat >pull-secret.txt <<EOL
PASTE HERE
EOL
```
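Since the pull secret is a JSON document, a quick validation can catch copy/paste mistakes (an optional sanity check):

```shell
# Exits non-zero and prints an error if the file is not valid JSON
python3 -m json.tool pull-secret.txt > /dev/null && echo "pull-secret.txt is valid JSON"
```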
Generate the SSH keys that we will use later to access the instance from our laptop:

```shell
ssh-keygen -t rsa -f crcuser_key -C crcuser -q -N ""
```

The previous command generates two files:

- `crcuser_key`: private key, which we can use to access the instance remotely with an SSH client
- `crcuser_key.pub`: public key, which will be included in our instance
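For example, once the instance is up you could also connect with the private key directly, without going through gcloud (a sketch; it assumes the instance name and zone used elsewhere in this README, and that your firewall rules allow SSH from your IP):

```shell
# Look up the instance's external IP and connect as crcuser
EXTERNAL_IP=$(gcloud compute instances describe crc-build-1 --zone=us-central1-a \
  --project=$TF_VAR_PROJECT_ID --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
ssh -i crcuser_key crcuser@$EXTERNAL_IP
```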
| Parameter | Default | Description |
|---|---|---|
| strategy | crc | Strategies: `crc`, `snc`, `mnc` |
| gcp_vm_preemptible | true | A preemptible VM is an instance that you can create and run at a much lower price than normal instances. However, there are some limitations: Compute Engine might stop preemptible instances at any time; Compute Engine always stops preemptible instances after they run for 24 hours; if the preemptible VM is stopped before the environment is ready, you must recreate the environment. If you can live with these limitations, a preemptible VM is a good choice for users who need to reduce spending. Check the GCP documentation for more information on preemptible VMs. Set it to `false` to use a regular VM. |
| gcp_vm_type | n1-standard-8 | n1-standard-8 has 8 vCPUs and 30 GB memory. If you choose the mnc strategy, choose a machine with more resources, such as n1-standard-16. |
| gcp_vm_disk_type | pd-standard | pd-standard or pd-ssd |
| gcp_vm_disk_size | 50 for CRC/SNC; 128 for MNC | Disk size (GB). |
| DDNS | disabled | Enables a free Dynamic DNS client (see the DDNS section below). |
Adjust other parameters in `variables.tf` if necessary.
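For example, to provision an SNC environment on a larger machine with an SSD disk, you could extend the `variables.auto.tfvars.json` created earlier (a sketch; it assumes the parameter names in the table above are the Terraform variable names in `variables.tf`):

```shell
cat >variables.auto.tfvars.json <<EOL
{
  "project_id": "$TF_VAR_PROJECT_ID",
  "strategy": "snc",
  "gcp_vm_type": "n1-standard-16",
  "gcp_vm_disk_type": "pd-ssd"
}
EOL
```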
Provision the environment:

```shell
./terraformDocker.sh apply -var-file="secrets.tfvars" -auto-approve
```
Access the instance via SSH:

```shell
gcloud compute ssh crc-build-1 --zone=us-central1-a --quiet --project=$TF_VAR_PROJECT_ID
```
First, follow the installation of the dependencies:

```shell
# use alias 1
1
# or
sudo journalctl -u google-startup-scripts.service -f
```

At the end of the log, `failed=0` indicates that the dependencies have been successfully installed.
You can monitor the progress of the installation with:

```shell
# use alias 2
2
# or
sudo tail -f /var/log/messages -n +1 | grep runuser
```
Wait about 25 minutes for the message "Started the OpenShift cluster":

```
...
Apr 17 16:16:51 crc-build-1 runuser[51541]: Started the OpenShift cluster
Apr 17 16:16:51 crc-build-1 runuser[51541]: To access the cluster, first set up your environment by following the instructions returned by executing 'crc oc-env'.
Apr 17 16:16:51 crc-build-1 runuser[51541]: Then you can access your cluster by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
Apr 17 16:16:51 crc-build-1 runuser[51541]: To login as a cluster admin, run 'oc login -u kubeadmin -p ABCD-EFG-hLQZX-VI9Kg https://api.crc.testing:6443'.
Apr 17 16:16:51 crc-build-1 runuser[51541]: You can also run 'crc console' and use the above credentials to access the OpenShift web console.
Apr 17 16:16:51 crc-build-1 runuser[51541]: The console will open in your default browser.
```
At this point your CRC environment is ready!

When the machine is rebooted, CRC is started automatically. You can use the same command described in this section to track the CRC startup after a reboot.
The SNC installation is a long process; it can take up to 2 hours.

First, follow the installation of the dependencies:

```shell
# use alias 1
1
# or
sudo journalctl -u google-startup-scripts.service -f
```

At the end of the log, `failed=0` indicates that the dependencies have been successfully installed:
```
...
May 26 01:52:01 crc-build-1 GCEMetadataScripts[1226]: 2021/05/26 01:52:01 GCEMetadataScripts: startup-script: PLAY RECAP *********************************************************************
May 26 01:52:01 crc-build-1 GCEMetadataScripts[1226]: 2021/05/26 01:52:01 GCEMetadataScripts: startup-script: localhost : ok=19 changed=17 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0
May 26 01:52:01 crc-build-1 GCEMetadataScripts[1226]: 2021/05/26 01:52:01 GCEMetadataScripts: startup-script:
May 26 01:52:01 crc-build-1 GCEMetadataScripts[1226]: 2021/05/26 01:52:01 GCEMetadataScripts: startup-script exit status 0
May 26 01:52:01 crc-build-1 GCEMetadataScripts[1226]: 2021/05/26 01:52:01 GCEMetadataScripts: Finished running startup scripts.
May 26 01:52:01 crc-build-1 systemd[1]: google-startup-scripts.service: Succeeded.
May 26 01:52:01 crc-build-1 systemd[1]: Started Google Compute Engine Startup Scripts
```
You can monitor the progress of the installation in `/home/crcuser/snc/install.out`:

```shell
# use alias 2
2
# or
sudo tail -f /home/crcuser/snc/install.out
```
```
...
+ oc get pod --no-headers --all-namespaces
+ grep -v Running
+ grep -v Completed
+ retry ./openshift-clients/linux/oc delete pod --field-selector=status.phase==Succeeded --all-namespaces
+ local retries=10
+ local count=0
+ ./openshift-clients/linux/oc delete pod --field-selector=status.phase==Succeeded --all-namespaces
pod "installer-2-crc-2mx9v-master-0" deleted
pod "installer-3-crc-2mx9v-master-0" deleted
pod "revision-pruner-2-crc-2mx9v-master-0" deleted
pod "revision-pruner-3-crc-2mx9v-master-0" deleted
pod "installer-8-crc-2mx9v-master-0" deleted
pod "installer-9-crc-2mx9v-master-0" deleted
pod "revision-pruner-7-crc-2mx9v-master-0" deleted
pod "revision-pruner-8-crc-2mx9v-master-0" deleted
pod "revision-pruner-9-crc-2mx9v-master-0" deleted
pod "revision-pruner-11-crc-2mx9v-master-0" deleted
pod "revision-pruner-9-crc-2mx9v-master-0" deleted
+ return 0
+ jobs=($(jobs -p))
++ jobs -p
+ '[' -n 56811 ']'
+ (( 5 ))
+ kill 56811
./snc.sh: line 1: kill: (56811) - No such process
+ true
+ return 0
```

The final `+ return 0` indicates SNC is ready.
When the machine is rebooted, SNC will be automatically started.
The MNC installation is a long process; it can take up to 1 hour. You can monitor the progress of the installation with:

```shell
# use alias 2
2
# or
sudo tail -f /root/ansible.install.out
```
```
...
TASK [luisarizmendi.ocp_libvirt_ipi_role : OpenShift Web Console access] *******
task path: /root/.ansible/roles/luisarizmendi.ocp_libvirt_ipi_role/tasks/kvm_publish.yml:98
ok: [localhost] => {
...
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************
localhost : ok=78 changed=58 unreachable=0 failed=0 skipped=31 rescued=0 ignored=1
```
"failed=0"
indicates MNC is ready.
Detailed information about the OpenShift installation can be found in `.openshift_install.log`:

```shell
# use alias 21
21
# or
sudo tail -f /root/ocp/install/.openshift_install.log
```
The `crcuser` operating system user runs CRC/SNC. The `root` operating system user runs MNC. The password for `crcuser`/`root` is `password`.

After accessing the VM via gcloud/SSH, switch to the `crcuser` user if you want to run `crc` or `oc`.
For example:

```shell
# use alias 3
3
# or
# CRC/SNC
su - crcuser
# MNC
su -
```
It is not necessary to run `oc login`, because `KUBECONFIG` is already configured for `crcuser`.
The `crc` command line is also available for `crcuser`:

```shell
crc status
```

```
CRC VM:          Running
OpenShift:       Starting (v4.6.15)
Disk Usage:      13.16GB of 32.72GB (Inside the CRC VM)
Cache Usage:     14.31GB
Cache Directory: /home/crcuser/.crc/cache
```
It is not necessary to run `oc login`, because `KUBECONFIG` is already configured for `crcuser`.
```shell
oc get nodes
```

```
NAME                 STATUS   ROLES           AGE   VERSION
crc-2mx9v-master-0   Ready    master,worker   25h   v1.19.0+f173eb4
```
Show the kubeadmin password:

```shell
cat /home/crcuser/snc/crc-tmp-install-data/auth/kubeadmin-password
```
It is not necessary to run `oc login`, because `KUBECONFIG` is already configured for the `root` user.
```shell
oc get nodes
```

```
NAME                             STATUS   ROLES    AGE    VERSION
mycluster-p4clh-master-0         Ready    master   107m   v1.19.0+f173eb4
mycluster-p4clh-master-1         Ready    master   107m   v1.19.0+f173eb4
mycluster-p4clh-master-2         Ready    master   107m   v1.19.0+f173eb4
mycluster-p4clh-worker-0-4s5mq   Ready    worker   94m    v1.19.0+f173eb4
mycluster-p4clh-worker-0-hhjt8   Ready    worker   94m    v1.19.0+f173eb4
```
Show the kubeadmin password:

```shell
cat /root/ocp/install/auth/kubeadmin-password
```
After installing the Google Cloud SDK (gcloud) on your laptop, run the following commands to forward the local ports 80 and 443 to the IP on which the OpenShift environment serves requests.
> ℹ️ Tip for Windows users: use a bash shell like "Git Bash" to execute the following commands. Also, install Python 3.9 manually and set `CLOUDSDK_PYTHON` after opening Git Bash: `export CLOUDSDK_PYTHON='/c/Python39/python.exe'`
```shell
gcloud auth login
export TF_VAR_PROJECT_ID=$(gcloud projects list --filter='name:CRConGCP' --format='value(project_id)' --limit=1)
export GCP_ZONE=$(gcloud compute instances list --project $TF_VAR_PROJECT_ID --filter="name=('crc-build-1')" --format='value(zone)')

# CRC
gcloud beta compute ssh --zone "$GCP_ZONE" "crc-build-1" --project $TF_VAR_PROJECT_ID -- -L 80:192.168.130.11:80 -L 443:192.168.130.11:443 -N

# SNC
gcloud beta compute ssh --zone "$GCP_ZONE" "crc-build-1" --project $TF_VAR_PROJECT_ID -- -L 80:192.168.126.11:80 -L 443:192.168.126.11:443 -N

# MNC
gcloud beta compute ssh --zone "$GCP_ZONE" "crc-build-1" --project $TF_VAR_PROJECT_ID -- -L 80:192.168.126.51:80 -L 443:192.168.126.51:443 -N
```
Add at least the following information to the hosts file:
```
127.0.0.1 api.crc.testing
127.0.0.1 oauth-openshift.apps-crc.testing
127.0.0.1 console-openshift-console.apps-crc.testing
127.0.0.1 default-route-openshift-image-registry.apps-crc.testing
# OpenShift Service Mesh
127.0.0.1 istio-ingressgateway-istio-system.apps-crc.testing
127.0.0.1 grafana-istio-system.apps-crc.testing
127.0.0.1 jaeger-istio-system.apps-crc.testing
127.0.0.1 kiali-istio-system.apps-crc.testing
127.0.0.1 prometheus-istio-system.apps-crc.testing
```
Whenever you create a route on OCP that you want to access from your laptop, update the hosts file accordingly.
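For example, if you exposed a hypothetical service `myapp` in project `myproject` (names used only for illustration), the resulting route hostname would need its own entry:

```
127.0.0.1 myapp-myproject.apps-crc.testing
```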
> ℹ️ Tip: Mac or Linux users can use Dnsmasq instead of modifying the hosts file.
The SNC/MNC configuration uses the subdomain 127.0.0.1.nip.io. This means that when you access the instance remotely (via SSH port forwarding) there is no need to change the hosts file, because *.127.0.0.1.nip.io resolves to 127.0.0.1.
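You can see the wildcard resolution from any machine (assuming `dig` is available; any label under the subdomain works):

```shell
# nip.io answers every name under 127.0.0.1.nip.io with 127.0.0.1
dig +short console-openshift-console.apps.127.0.0.1.nip.io
```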
```shell
3  # alias for "su - crcuser/root"; password: password
ssh master
```
In the SNC environment, the bootstrap machine is created temporarily during cluster configuration:

```shell
ssh bootstrap
```
By default this project configures and installs CRC 1.22 (OCP 4.6.15). Although this project was only tested on CRC 1.22, it should probably work on other versions. You can change the CRC version in the `provision.yml` file.
By default this project configures and installs OCP 4.6.18. Although this project was only tested on OCP 4.6.18, it should probably work on other versions. If you are trying to install a version other than 4.6.x, be sure to change the branch and `OPENSHIFT_VERSION` in the following snippet from `provision.yml`:

```shell
...
git clone --branch 4.6 https://github.com/code-ready/snc /home/crcuser/snc
...
export OPENSHIFT_VERSION="4.6.18"
...
```
By default this project configures and installs OCP 4.6.18. Although this project was only tested on OCP 4.6.18, it should probably work on other versions. If you are trying to install a version other than 4.6.x, change the `ocp_release` variable in `provision.yml`:

```yaml
...
- hosts: localhost
  roles:
    - role: luisarizmendi.ocp_libvirt_ipi_role
      vars:
        ocp_install_file_path: "ocp-config/install-config.yaml"
        ocp_release: "4.6.18"
...
```
Installing OSSM (OpenShift Service Mesh) with the MNC strategy requires a GCP instance with more processors, for example n1-standard-16. It is possible to install OSSM on smaller instances when using CRC or SNC.

When the environment is ready, you can use a script to install OSSM (it only works on OCP 4.6):
```shell
3  # alias for "su - crcuser/root"; password: password
git clone https://github.com/danielmenezesbr/crc-on-gcp-terraform
cd crc-on-gcp-terraform
./servicemesh-install-OCPv46.sh
```
If a network failure occurs during the OSSM installation, re-running the script usually solves the problem.
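To check that the Service Mesh control plane came up, you can query its resources (a hedged example; it assumes OSSM installed its ServiceMeshControlPlane into the istio-system namespace, as the hosts entries above suggest):

```shell
# The ServiceMeshControlPlane (smcp) should eventually report all components ready
oc get smcp -n istio-system
oc get pods -n istio-system
```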
MNC uses luisarizmendi/ocp-libvirt-ipi-role for provisioning. Check that project for more customization options.
The current configuration uses an ephemeral IP for the GCP instance. This means that when the machine is restarted, a new IP may be assigned. Instead of working with the IP, it is more practical to use a DNS name. To do this, you can optionally configure a free DDNS (Dynamic DNS) service, for example https://www.duckdns.org/.
After creating an account and a subdomain in duckdns, set the following variables in `variables.tf`:

- `ddns_enabled` (value `true`)
- `ddns_hostname` (e.g. `myopenshift.duckdns.org`)

Sensitive variables must be set in `secrets.tfvars`:

- `ddns_login`
- `ddns_password` (leave blank for duckdns.org)

```shell
cat >secrets.tfvars <<EOL
ddns_login = "YOUR_TOKEN"
ddns_password = "" # leave blank for duckdns.org
EOL
```
The DDNS service runs during operating system startup. The following command shows the DDNS service log:

```shell
sudo journalctl -u ddns.service
```
```
-- Logs begin at Wed 2021-07-07 19:59:35 UTC, end at Wed 2021-07-07 20:24:36 UTC. --
Jul 07 20:05:08 crc-build-1 systemd[1]: Started DDNS.
Jul 07 20:05:08 crc-build-1 podman[5443]: Trying to pull docker.io/troglobit/inadyn:latest...
Jul 07 20:05:09 crc-build-1 podman[5443]: Getting image source signatures
Jul 07 20:05:10 crc-build-1 podman[5443]: Copying blob sha256:e8edeaf8013a6d59edaf786abe7db1d2e84c57007cee30494cd32d85c309>
Jul 07 20:05:10 crc-build-1 podman[5443]: Copying blob sha256:540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9b>
Jul 07 20:05:10 crc-build-1 podman[5443]: Copying blob sha256:50d5a522733190b7abb2494c60511de7aa5c32a4e4ea725b2e24ced651de>
Jul 07 20:05:10 crc-build-1 podman[5443]: Copying blob sha256:7b6d4b69e20057c1e0fc615e179d9493adf3c3fc572faa9c90ddb45a2656>
Jul 07 20:05:10 crc-build-1 podman[5443]: Copying config sha256:66ea1a5539de606e965afd0a14d39d60f29cf984104b0512cdeccf2d9d>
Jul 07 20:05:10 crc-build-1 podman[5443]: Writing manifest to image destination
Jul 07 20:05:10 crc-build-1 podman[5443]: Storing signatures
Jul 07 20:05:11 crc-build-1 podman[5443]: inadyn[1]: In-a-dyn version 2.8.1 -- Dynamic DNS update client.
Jul 07 20:05:11 crc-build-1 podman[5443]: inadyn[1]: Guessing DDNS plugin 'default@duckdns.org' from 'duckdns.org'
Jul 07 20:05:12 crc-build-1 podman[5443]: inadyn[1]: Update forced for alias myopenshift.duckdns.org, new IP# 34.133.129.97
Jul 07 20:05:12 crc-build-1 podman[5443]: inadyn[1]: Updating cache for myopenshift.duckdns.org
```
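Once the client has updated the record, you can confirm that the hostname resolves to the instance's current external IP (replace `myopenshift.duckdns.org` with your own subdomain):

```shell
dig +short myopenshift.duckdns.org
```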
Recreate the environment if the VM is preempted before the environment is ready:

```shell
./terraformDocker.sh destroy -auto-approve
./terraformDocker.sh apply -var-file="secrets.tfvars" -auto-approve
```
Go to Cloud Shell and run the following commands:

```shell
cd ~/crc-on-gcp-terraform/
./terraformDocker.sh destroy -auto-approve
gcloud projects delete $TF_VAR_PROJECT_ID --quiet
rm -Rf ~/crc-on-gcp-terraform/
```