This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in different public and private cloud providers.
In this tutorial, you will create a Virtual Application Network that enables communications across the public and private clusters. You will then deploy a PostgreSQL database instance to a private cluster and attach it to the Virtual Application Network. This will enable clients on different public clusters attached to the Virtual Application Network to transparently access the database without any additional networking setup (e.g. no VPN or SDN required).
To complete this tutorial, do the following:
- Prerequisites
- Step 1: Set up the demo
- Step 2: Deploy the Virtual Application Network
- Step 3: Deploy the PostgreSQL service
- Step 4: Expose the PostgreSQL deployment to the Virtual Application Network
- Step 5: Deploy interactive Pod with PostgreSQL client utilities
- Step 6: Create a Database, Create a Table, Insert Values
- Cleaning up
- Next steps
## Prerequisites

- The `kubectl` command-line tool, version 1.15 or later (installation guide)
- The `skupper` command-line tool, version 0.7 or later (installation guide)
## Step 1: Set up the demo

The basis for the demonstration is to depict the operation of a PostgreSQL database in a private cluster and the ability to access the database from clients resident on other public clusters. As an example, the cluster deployment might consist of:
- A private cloud cluster running on your local machine
- Two public cloud clusters running in public cloud providers
While the detailed steps are not included here, this demonstration can alternatively be performed with three separate namespaces on a single cluster.
- On your local machine, make a directory for this tutorial and clone the example repo:

  ```shell
  mkdir pg-demo
  cd pg-demo
  git clone https://github.com/skupperproject/skupper-example-postgresql.git
  ```
- Prepare the target clusters.

  - On your local machine, log in to each cluster in a separate terminal session.
  - In each cluster, create a namespace to use for the demo.
  - In each cluster, set the kubectl config context to use the demo namespace (see the kubectl cheat sheet).
## Step 2: Deploy the Virtual Application Network

On each cluster, define the virtual application network and the connectivity for the peer clusters.
- In the terminal for the first public cluster, deploy the public1 application router and create a connection token for links from the public2 and private1 clusters:

  ```shell
  skupper init --site-name public1
  skupper token create --uses=2 public1-token.yaml
  ```
- In the terminal for the second public cluster, deploy the public2 application router, create a connection token for a link from the private1 cluster, and link to the public1 cluster:

  ```shell
  skupper init --site-name public2
  skupper token create public2-token.yaml
  skupper link create public1-token.yaml
  ```
- In the terminal for the private cluster, deploy the private1 application router and create its links to the public1 and public2 clusters:

  ```shell
  skupper init --site-name private1
  skupper link create public1-token.yaml
  skupper link create public2-token.yaml
  ```
- In each of the cluster terminals, verify that connectivity has been established:

  ```shell
  skupper link status
  ```
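The links created above form a full mesh: public2 links to public1, and private1 links to both public clusters. Because an established link carries traffic in both directions, every site can reach every other. The sketch below is only an illustration of that topology using this tutorial's site names — the reachability check is a toy, not part of skupper:

```python
from collections import deque

# Links created in Step 2; each `skupper link create` yields a
# bidirectional connection between two sites.
links = [
    ("public2", "public1"),   # public2 links to public1
    ("private1", "public1"),  # private1 links to public1
    ("private1", "public2"),  # private1 links to public2
]

def reachable(start, edges):
    """Return the set of sites reachable from `start` over the links."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        site = queue.popleft()
        for nxt in adj.get(site, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for site in ("public1", "public2", "private1"):
    assert reachable(site, links) == {"public1", "public2", "private1"}
print("all sites mutually reachable")
```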
## Step 3: Deploy the PostgreSQL service

After creating the application router network, deploy the PostgreSQL service. The private1 cluster will be used to deploy the PostgreSQL server and the public1 and public2 clusters will be used to enable client communications to the server on the private1 cluster.
- In the terminal for the private1 cluster where the PostgreSQL server will be created, deploy the following:

  ```shell
  kubectl apply -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml
  ```
- In the terminal for the private1 cluster, create the postgresql service on the Virtual Application Network:

  ```shell
  skupper service create postgresql 5432
  ```
- In each of the cluster terminals, verify that the created service is present:

  ```shell
  skupper service status
  ```

  Note that the mapping for the service address defaults to `tcp`.
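A `tcp`-mapped service address means clients see nothing more exotic than an ordinary host and port — here, `postgresql:5432` — with the Virtual Application Network relaying the byte stream between sites. As a loopback-only sketch of that client's view (the one-shot echo server is a stand-in for the remote service, not PostgreSQL):

```python
import socket
import threading

# Stand-in for the remote service: a one-shot echo server on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # ephemeral port, like a service address
server.listen(1)
host, port = server.getsockname()

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))    # echo the bytes straight back
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# The client side: all it needs is a host and a port -- exactly what the
# Virtual Application Network exposes for postgresql:5432.
client = socket.create_connection((host, port))
client.sendall(b"SELECT 1;")
reply = client.recv(1024)
client.close()
server.close()
print(reply)
```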
## Step 4: Expose the PostgreSQL deployment to the Virtual Application Network

- In the terminal for the private1 cluster, bind the postgresql service to the postgresql deployment:

  ```shell
  skupper service bind postgresql deployment postgresql
  ```
- In the private1 cluster terminal, verify that the service is bound to the target:

  ```shell
  skupper service status
  ```

  Note that private1 is the only cluster to provide a target.
## Step 5: Deploy interactive Pod with PostgreSQL client utilities

- From each cluster terminal, create a pod that contains the PostgreSQL client utilities:

  ```shell
  kubectl run pg-shell -i --tty --image quay.io/skupper/simple-pg \
    --env="PGUSER=postgres" \
    --env="PGPASSWORD=skupper" \
    --env="PGHOST=$(kubectl get service postgresql -o=jsonpath='{.spec.clusterIP}')" \
    -- bash
  ```
- Note that if the session is ended, it can be resumed with the following:

  ```shell
  kubectl attach pg-shell -c pg-shell -i -t
  ```
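The `PGHOST` environment variable above is filled in by a kubectl JSONPath query that pulls the cluster IP out of the Service object. As a sketch of what `'{.spec.clusterIP}'` extracts, here is the equivalent lookup in Python over a trimmed, hypothetical Service manifest (the clusterIP value is made up):

```python
import json

# Trimmed, hypothetical Service object, shaped like the output of
# `kubectl get service postgresql -o json`; the IP is illustrative.
service_json = """
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {"name": "postgresql"},
  "spec": {
    "clusterIP": "10.96.12.34",
    "ports": [{"port": 5432, "protocol": "TCP"}]
  }
}
"""

service = json.loads(service_json)
# Equivalent of the JSONPath expression '{.spec.clusterIP}'
pghost = service["spec"]["clusterIP"]
print(pghost)  # 10.96.12.34
```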
## Step 6: Create a Database, Create a Table, Insert Values

Using the `pg-shell` pod running on each cluster, operate on the database:
- Create a database called `markets` from the private1 cluster:

  ```shell
  bash-5.0$ createdb -e markets
  ```
- Create a table called `product` in the `markets` database from the public1 cluster:

  ```shell
  bash-5.0$ psql -d markets
  markets=# create table if not exists product (
      id   SERIAL,
      name VARCHAR(100) NOT NULL,
      sku  CHAR(8)
  );
  ```
- Insert values into the `product` table in the `markets` database from the public2 cluster:

  ```shell
  bash-5.0$ psql -d markets
  markets=# INSERT INTO product VALUES(DEFAULT, 'Apple, Fuji', '4131');
  markets=# INSERT INTO product VALUES(DEFAULT, 'Banana', '4011');
  markets=# INSERT INTO product VALUES(DEFAULT, 'Pear, Bartlett', '4214');
  markets=# INSERT INTO product VALUES(DEFAULT, 'Orange', '4056');
  ```
- From any cluster, access the `product` table in the `markets` database to view its contents:

  ```shell
  bash-5.0$ psql -d markets
  markets=# SELECT * FROM product;
  ```
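The Step 6 session can also be mirrored locally with Python's built-in sqlite3 module, which is a convenient way to sanity-check the SQL before running it against the shared database. Note the DDL is adapted for SQLite: there is no SERIAL type, so an autoincrementing integer key stands in, and the DEFAULT placeholders become omitted columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# SQLite adaptation of the PostgreSQL table from Step 6:
# `id SERIAL` becomes an autoincrementing integer primary key.
cur.execute("""
    CREATE TABLE IF NOT EXISTS product (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(100) NOT NULL,
        sku  CHAR(8)
    )
""")

# Same rows as the tutorial; the id column is left to the database.
rows = [
    ("Apple, Fuji", "4131"),
    ("Banana", "4011"),
    ("Pear, Bartlett", "4214"),
    ("Orange", "4056"),
]
cur.executemany("INSERT INTO product (name, sku) VALUES (?, ?)", rows)

cur.execute("SELECT id, name, sku FROM product ORDER BY id")
for row in cur.fetchall():
    print(row)  # (1, 'Apple, Fuji', '4131') ... (4, 'Orange', '4056')
```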
## Cleaning up

Restore your cluster environment by removing the resources created in the demonstration. On each cluster, delete the demo resources and the virtual application network:
- In the terminal for the public1 cluster, delete the resources:

  ```shell
  $ kubectl delete pod pg-shell
  $ skupper delete
  ```
- In the terminal for the public2 cluster, delete the resources:

  ```shell
  $ kubectl delete pod pg-shell
  $ skupper delete
  ```
- In the terminal for the private1 cluster, delete the resources:

  ```shell
  $ kubectl delete pod pg-shell
  $ skupper unexpose deployment postgresql
  $ kubectl delete -f ~/pg-demo/skupper-example-postgresql/deployment-postgresql-svc.yaml
  $ skupper delete
  ```