NooBaa in OpenShift
This section deploys OpenShift on AWS, using the funcs.io domain.
Get your pull secret from https://cloud.openshift.com/clusters/install, then clone the installer:
git clone https://github.com/openshift/installer
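After cloning, build the installer and create the cluster. A minimal sketch, assuming the repo's hack/build.sh build script and the bin/ output path; check the installer README for your version:
cd installer
hack/build.sh
bin/openshift-install create cluster
# the installer prompts for platform (aws), base domain (funcs.io), cluster name, and the pull secret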
At the end of the installation, you will see output like the following:
INFO Waiting up to 30m0s for the Kubernetes API...
INFO API v1.11.0+8868a98a7b up
INFO Waiting up to 30m0s for the bootstrap-complete event...
ERROR: logging before flag.Parse: E0204 23:56:33.391618 27781 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
WARNING RetryWatcher - getting event failed! Re-creating the watcher. Last RV: 148
INFO Destroying the bootstrap resources...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO Run '**export KUBECONFIG=/Users/erantamir/workspace/openshift/auth/kubeconfig**' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when '**oc login -u kubeadmin -p XXXXX-XXXXX-XXXXX-XXXXX**' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.xxxx.funcs.io
INFO Login to the console with user: kubeadmin, password: XXXXX-XXXXX-XXXXX-XXXXX
oc login https://xxxx-api.funcs.io:6443
Follow these instructions
kubectl scale --replicas=x statefulset/noobaa-agent
oc apply -f noobaa_statefulset.yaml
oc rsh noobaa-0
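For example, to scale to three agents (3 is just an illustrative count; replace x above with whatever agent count you need):
kubectl scale --replicas=3 statefulset/noobaa-agent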
- Use OperatorHub to deploy Prometheus in your namespace.
- In the newly deployed Prometheus operator, create a new ServiceMonitor. Update the YAML with:
  - a label of app: noobaa
  - a selector that matches app: noobaa
  - a port of mgmt
- Make sure there is only one label, app: noobaa.
YAML example below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: '2019-05-29T11:23:26Z'
  generation: 2
  labels:
    app: noobaa
  name: example
  namespace: etdelete
  resourceVersion: '15515463'
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/etdelete/servicemonitors/example
  uid: 2d3961eb-8204-11e9-9d85-060f3d1851ac
spec:
  endpoints:
    - interval: 30s
      port: mgmt
  selector:
    matchLabels:
      app: noobaa
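To confirm the ServiceMonitor landed, list it in your namespace (etdelete is the namespace from the example above; substitute your own):
oc get servicemonitor example -n etdelete -o yaml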
- Create a new Prometheus instance from the operator. Update its YAML with:
  - a label of app: noobaa
  - a serviceMonitorSelector with matchLabels of app: noobaa
Example below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  creationTimestamp: '2019-05-29T11:23:35Z'
  generation: 2
  labels:
    app: noobaa
    prometheus: k8s
  name: example
  namespace: etdelete
  resourceVersion: '15507192'
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/etdelete/prometheuses/example
  uid: 329deae6-8204-11e9-9d85-060f3d1851ac
spec:
  alerting:
    alertmanagers:
      - name: alertmanager-main
        namespace: monitoring
        port: web
  replicas: 2
  ruleSelector: {}
  securityContext: {}
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchLabels:
      app: noobaa
  version: v2.7.1
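To verify the instance is up and scraping NooBaa, port-forward to the Prometheus web UI. A sketch, assuming the prometheus-operated service the operator normally creates in front of the instance, and the etdelete namespace from the example:
oc port-forward svc/prometheus-operated 9090:9090 -n etdelete
# then browse to http://localhost:9090/targets and look for the mgmt endpoint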
Grafana [outdated, WIP]
oc new-app -f https://raw.githubusercontent.com/ConSol/springboot-monitoring-example/master/templates/grafana.yaml -p NAMESPACE=grafana
oc policy add-role-to-user view system:serviceaccount:grafana:grafana-ocp -n prometheus
Create a Prometheus data source in Grafana, authenticating with a token:
oc sa get-token prometheus -n prometheus
**Alternatively, skip the authenticated connection: run the following command and use the listed endpoints for the data source** (better to use DNS; follow this article: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/):
oc describe service prometheus -n default
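Per the DNS article above, a service named prometheus in the default namespace resolves to prometheus.default.svc.cluster.local. Assuming it listens on 9090, you can sanity-check the endpoint before wiring it into Grafana:
# run from any pod inside the cluster; adjust the namespace and port to yours
curl http://prometheus.default.svc.cluster.local:9090/api/v1/status/config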

Deploy a Rook Ceph object store:
git clone https://github.com/rook/rook.git
cd cluster/examples/kubernetes/ceph/
Edit cluster.yaml: change dataDirHostPath to /mnt/sda1/rook. Then find directories:, uncomment it, and change its path to /mnt/sda1/rook.
kubectl create -f common.yaml
kubectl create -f operator-openshift.yaml
kubectl create -f cluster.yaml
kubectl create -f object-openshift.yaml
kubectl create -f object-user.yaml
echo "AccessKey:";kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml | grep AccessKey | awk '{print $2}'| base64 --decode;printf "\n";echo "secret:";kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml | grep SecretKey | awk '{print $2}'|base64 --decode;printf "\n"
Add a route on top of the rook-ceph-rgw-my-store service. Name it, keep the path as /, and select the port.
You should be able to navigate to http://<your name>-rook-ceph.apps..funcs.io/ and get a 403 response. Now you can connect NooBaa to this URL using the access and secret keys.
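A quick way to test the credentials against the route before pointing NooBaa at it. A sketch, assuming the AWS CLI is installed; the route hostname and keys are the ones from the previous steps:
export AWS_ACCESS_KEY_ID=<AccessKey from above>
export AWS_SECRET_ACCESS_KEY=<SecretKey from above>
aws s3 ls --endpoint-url http://<your route hostname>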
Deploy Quay following https://access.redhat.com/documentation/en-us/red_hat_quay/2.9/html-single/deploy_red_hat_quay_on_openshift/index. Before running the LB YAML, edit the secret according to this article: https://access.redhat.com/solutions/3533201.
Create a new file called postgres-persistent.json with the following content:
{
"apiVersion": "v1",
"kind": "Template",
"labels": {
"template": "postgresql-persistent-template"
},
"message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Username: ${POSTGRESQL_USER}\n Password: ${POSTGRESQL_PASSWORD}\n Database Name: ${POSTGRESQL_DATABASE}\n Connection URL: postgresql://${DATABASE_SERVICE_NAME}:5432/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.",
"metadata": {
"annotations": {
"description": "PostgreSQL database service, with persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.",
"iconClass": "icon-postgresql",
"openshift.io/display-name": "PostgreSQL",
"openshift.io/documentation-url": "https://docs.okd.io/latest/using_images/db_images/postgresql.html",
"openshift.io/long-description": "This template provides a standalone PostgreSQL server with a database created. The database is stored on persistent storage. The database name, username, and password are chosen via parameters when provisioning this service.",
"openshift.io/provider-display-name": "Red Hat, Inc.",
"openshift.io/support-url": "https://access.redhat.com",
"tags": "database,postgresql"
},
"name": "postgresql-persistent"
},
"objects": [{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"template.openshift.io/expose-database_name": "{.data['database-name']}",
"template.openshift.io/expose-password": "{.data['database-password']}",
"template.openshift.io/expose-username": "{.data['database-user']}"
},
"name": "${DATABASE_SERVICE_NAME}"
},
"stringData": {
"database-name": "${POSTGRESQL_DATABASE}",
"database-password": "${POSTGRESQL_PASSWORD}",
"database-user": "${POSTGRESQL_USER}"
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"template.openshift.io/expose-uri": "postgres://{.spec.clusterIP}:{.spec.ports[?(.name==\"postgresql\")].port}"
},
"name": "${DATABASE_SERVICE_NAME}"
},
"spec": {
"ports": [{
"name": "postgresql",
"nodePort": 0,
"port": 5432,
"protocol": "TCP",
"targetPort": 5432
}],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"sessionAffinity": "None",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
},
{
"apiVersion": "v1",
"kind": "PersistentVolumeClaim",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "${VOLUME_CAPACITY}"
}
}
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"annotations": {
"template.alpha.openshift.io/wait-for-ready": "true"
},
"name": "${DATABASE_SERVICE_NAME}"
},
"spec": {
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"strategy": {
"type": "Recreate"
},
"template": {
"metadata": {
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"containers": [{
"capabilities": {},
"env": [{
"name": "POSTGRESQL_USER",
"valueFrom": {
"secretKeyRef": {
"key": "database-user",
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"name": "POSTGRESQL_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"key": "database-password",
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"name": "POSTGRESQL_DATABASE",
"valueFrom": {
"secretKeyRef": {
"key": "database-name",
"name": "${DATABASE_SERVICE_NAME}"
}
}
}
],
"image": " ",
"imagePullPolicy": "IfNotPresent",
"livenessProbe": {
"exec": {
"command": [
"/usr/libexec/check-container",
"--live"
]
},
"initialDelaySeconds": 120,
"timeoutSeconds": 10
},
"name": "postgresql",
"ports": [{
"containerPort": 5432,
"protocol": "TCP"
}],
"readinessProbe": {
"exec": {
"command": [
"/usr/libexec/check-container"
]
},
"initialDelaySeconds": 5,
"timeoutSeconds": 1
},
"resources": {
"limits": {
"memory": "${MEMORY_LIMIT}"
}
},
"securityContext": {
"capabilities": {},
"fsGroup": 0,
"privileged": true
},
"terminationMessagePath": "/dev/termination-log",
"volumeMounts": [{
"mountPath": "/var/lib/pgsql/data",
"name": "${DATABASE_SERVICE_NAME}-data"
}]
}],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"volumes": [{
"name": "${DATABASE_SERVICE_NAME}-data",
"persistentVolumeClaim": {
"claimName": "${DATABASE_SERVICE_NAME}"
}
}]
}
},
"triggers": [{
"imageChangeParams": {
"automatic": true,
"containerNames": [
"postgresql"
],
"from": {
"kind": "ImageStreamTag",
"name": "postgresql:${POSTGRESQL_VERSION}",
"namespace": "${NAMESPACE}"
},
"lastTriggeredImage": ""
},
"type": "ImageChange"
},
{
"type": "ConfigChange"
}
]
},
"status": {}
}
],
"parameters": [{
"description": "Maximum amount of memory the container can use.",
"displayName": "Memory Limit",
"name": "MEMORY_LIMIT",
"required": true,
"value": "512Mi"
},
{
"description": "The OpenShift Namespace where the ImageStream resides.",
"displayName": "Namespace",
"name": "NAMESPACE",
"value": "openshift"
},
{
"description": "The name of the OpenShift Service exposed for the database.",
"displayName": "Database Service Name",
"name": "DATABASE_SERVICE_NAME",
"required": true,
"value": "postgresql"
},
{
"description": "Username for PostgreSQL user that will be used for accessing the database.",
"displayName": "PostgreSQL Connection Username",
"from": "user[A-Z0-9]{3}",
"generate": "expression",
"name": "POSTGRESQL_USER",
"required": true
},
{
"description": "Password for the PostgreSQL connection user.",
"displayName": "PostgreSQL Connection Password",
"from": "[a-zA-Z0-9]{16}",
"generate": "expression",
"name": "POSTGRESQL_PASSWORD",
"required": true
},
{
"description": "Name of the PostgreSQL database accessed.",
"displayName": "PostgreSQL Database Name",
"name": "POSTGRESQL_DATABASE",
"required": true,
"value": "sampledb"
},
{
"description": "Volume space available for data, e.g. 512Mi, 2Gi.",
"displayName": "Volume Capacity",
"name": "VOLUME_CAPACITY",
"required": true,
"value": "1Gi"
},
{
"description": "Version of PostgreSQL image to be used (10 or latest).",
"displayName": "Version of PostgreSQL Image",
"name": "POSTGRESQL_VERSION",
"required": true,
"value": "10"
}
]
}
oc create -f postgres-persistent.json
oc new-app -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quaydb --template=postgresql-persistent
Exec into the PostgreSQL pod (substitute your own pod name):
oc exec -it postgresql-96-rhel7-1-9vltm -- /bin/bash
echo "SELECT * FROM pg_available_extensions" | psql
echo "CREATE EXTENSION pg_trgm" | psql
echo "SELECT * FROM pg_extension" | psql
echo "ALTER USER quayuser WITH SUPERUSER;" | psql
Grab the PostgreSQL cluster IP with:
oc get services -n quay-enterprise
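If you only want the IP itself, jsonpath works here too (postgresql is the default DATABASE_SERVICE_NAME from the template above; adjust if you overrode it):
oc get service postgresql -n quay-enterprise -o jsonpath='{.spec.clusterIP}'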
git clone https://github.com/zalando/postgres-operator.git
Create the following minimal-postgres-manifest.yaml, replacing the one from the git repo. The differences are the quay user, the quay database, and the database version (9.6):
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: acid-minimal-cluster
namespace: default
spec:
teamId: "ACID"
volume:
size: 1Gi
numberOfInstances: 2
users:
# database owner
zalando:
- superuser
- createdb
quayuser:
- superuser
- createdb
# role for application foo
foo_user: []
#databases: name->owner
databases:
foo: zalando
quay: quayuser
postgresql:
version: "9.6"
oc project default
kubectl create -f manifests/configmap.yaml
kubectl create -f manifests/operator-service-account-rbac.yaml
kubectl create -f manifests/postgres-operator.yaml
kubectl create -f manifests/minimal-postgres-manifest.yaml
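To watch the cluster come up (application=spilo is the label the Zalando operator puts on its Postgres pods, an assumption worth checking against your operator version; postgresql is the CRD the operator installs):
kubectl get pods -l application=spilo -L spilo-role -n default
kubectl get postgresql -n default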
Bring up a port forward to the master:
# get the name of the master pod of acid-minimal-cluster
# (spilo-role=master narrows the selection to the primary; without it both replicas match)
export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l version=acid-minimal-cluster,spilo-role=master -n default)
# set up the port forward
kubectl port-forward $PGMASTER 6432:5432
Open a new terminal and run the following commands.
Get the quay user's password:
echo $(kubectl get secret quayuser.acid-minimal-cluster.credentials -o 'jsonpath={.data.password}' -n default) | base64 --decode
Connect to the database:
psql -U quayuser -p 6432 -h 127.0.0.1 -W quay
Run the following commands:
SELECT * FROM pg_available_extensions;
CREATE EXTENSION pg_trgm;
SELECT * FROM pg_extension;
ALTER USER quayuser WITH SUPERUSER;
The PostgreSQL hostname is acid-minimal-cluster.default.
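Putting it together, Quay's database connection string should look like the line below (quayuser, the quay database, and the hostname come from the manifest above; the password is the one decoded from the secret; 5432 is the in-cluster port, not the 6432 used by the local port forward):
postgresql://quayuser:<password>@acid-minimal-cluster.default:5432/quay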
Configure Quay to use NooBaa: https://noobaa.desk.com/customer/portal/articles/2970047-quay
Turn on debug for Quay: https://access.redhat.com/solutions/3663691
If a namespace cleanup gets stuck, use this script: https://github.com/ctron/kill-kube-ns/blob/master/kill-kube-ns