Ingress, scaling and storage enhancements (#114)
* Documented support and use of Ingress
* Added support to individually specify all PubSub+ scaling parameters; also addresses "Enhancement: Add New Helm Chart Parameter for Broker Message Spool Limit" (#99)
* Added support for using a single-mount storage-group for the broker
* Allowed smaller Monitor pod CPU, memory, and storage requirements in an HA deployment
* Fixed Helm error with custom service annotation; fixes "[BUG] Helm error with custom service annotation" (#112)
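The individually specifiable scaling parameters can be supplied as a Helm values override. A minimal sketch, assuming the `solace.systemScaling` keys surfaced in this commit's NOTES.txt; the numeric values below are illustrative only, not recommendations:

```yaml
solace:
  systemScaling:
    maxConnections: 100     # max supported client connections
    maxQueueMessages: 100   # max queue messages, in millions
    maxSpoolUsage: 10000    # message spool limit, in MB
    cpu: "2"                # requested CPU, in cores
    memory: "4025Mi"        # requested memory
storage:
  size: 30Gi                # requested storage
```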
bczoma authored May 10, 2022
1 parent 78d40b6 commit 66142a2
Showing 44 changed files with 5,037 additions and 3,078 deletions.
355 changes: 188 additions & 167 deletions .github/workflows/build-test.yml


402 changes: 201 additions & 201 deletions LICENSE


246 changes: 123 additions & 123 deletions README.md


1,870 changes: 1,037 additions & 833 deletions docs/PubSubPlusK8SDeployment.md


1 change: 1 addition & 0 deletions docs/helm-charts/create-chart-variants.sh
Expand Up @@ -53,5 +53,6 @@ for variant in '' '-dev' '-ha' ;
sed -i 's%helm repo add.*%helm repo add openshift-helm-charts https://charts.openshift.io%g' pubsubplus-openshift"$variant"/README.md
sed -i 's%solacecharts/pubsubplus%openshift-helm-charts/pubsubplus-openshift%g' pubsubplus-openshift"$variant"/README.md
sed -i 's@`solace/solace-pubsub-standard`@`registry.connect.redhat.com/solace/pubsubplus-standard`@g' pubsubplus-openshift"$variant"/README.md
sed -i 's/kubectl/oc/g' pubsubplus-openshift"$variant"/templates/NOTES.txt
helm package pubsubplus-openshift"$variant"
done
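The newly added `sed` line rewrites every `kubectl` reference in the OpenShift variant's NOTES.txt to `oc`. A quick sketch of that substitution, applied to a hypothetical sample line rather than the real file:

```shell
# Apply the same kubectl -> oc substitution the script performs,
# here against a hypothetical sample line instead of NOTES.txt.
sample='kubectl get pods --namespace demo --show-labels -w'
converted=$(printf '%s\n' "$sample" | sed 's/kubectl/oc/g')
echo "$converted"   # -> oc get pods --namespace demo --show-labels -w
```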
42 changes: 21 additions & 21 deletions pubsubplus/.helmignore
@@ -1,21 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
58 changes: 29 additions & 29 deletions pubsubplus/Chart.yaml
@@ -1,29 +1,29 @@
apiVersion: v2
description: Deploy Solace PubSub+ Event Broker Singleton or HA redundancy group onto a Kubernetes Cluster
name: pubsubplus
-version: 3.0.0
+version: 3.1.0
icon: https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/images/PubSubPlus.png
kubeVersion: '>= 1.10.0-0'
maintainers:
- name: Solace Community Forum
  url: https://solace.community/
- name: Solace Support
  url: https://solace.com/support/
home: https://dev.solace.com
sources:
- https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart
keywords:
- solace
- pubsubplus
- pubsub+
- pubsub
- messaging
- advanced event broker
- event broker
- event mesh
- event streaming
- data streaming
- event integration
- middleware
annotations:
  charts.openshift.io/name: PubSub+ Event Broker
402 changes: 201 additions & 201 deletions pubsubplus/LICENSE

Large diffs are not rendered by default.

230 changes: 117 additions & 113 deletions pubsubplus/README.md

Large diffs are not rendered by default.

187 changes: 98 additions & 89 deletions pubsubplus/templates/NOTES.txt
@@ -1,89 +1,98 @@


== Check Solace PubSub+ deployment progress ==
Deployment is complete when the pod representing the active event broker node reports the label "active=true".
Watch progress by running:
kubectl get pods --namespace {{ .Release.Namespace }} --show-labels -w | grep {{ template "solace.fullname" . }}

For troubleshooting, refer to ***TroubleShooting.md***

== TLS support ==
{{- if not .Values.tls.enabled }}
TLS has not been enabled for this deployment.
{{- else }}
TLS is enabled, using secret {{ .Values.tls.serverCertificatesSecret }} for server certificates configuration.
{{- end }}

== Admin credentials and access ==
{{- if not .Values.solace.usernameAdminPassword }}
*********************************************************************
* An admin password was not specified and has been auto-generated.
* You must retrieve it and provide it as a value override
* when using Helm upgrade, otherwise your cluster will become unusable.
*********************************************************************

{{- end }}
Username : admin
Admin password : echo `kubectl get secret --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }}-secrets -o jsonpath="{.data.username_admin_password}" | base64 --decode`
Use the "semp" service address to access the management API via browser or a REST tool, see Services access below.

== Image used ==
{{ .Values.image.repository }}:{{ .Values.image.tag }}

== Storage used ==
{{- if and ( .Values.storage.persistent ) ( .Values.storage.useStorageClass ) }}
Using persistent volumes via dynamic provisioning, ensure specified StorageClass exists: `kubectl get sc {{ .Values.storage.useStorageClass }}`
{{- else if .Values.storage.persistent}}
Using persistent volumes via dynamic provisioning with the "default" StorageClass, ensure it exists: `kubectl get sc | grep default`
{{- end }}
{{- if and ( not .Values.storage.persistent ) ( not .Values.storage.hostPath ) ( not .Values.storage.existingVolume ) }}
*******************************************************************************
* This deployment is using pod-local ephemeral storage.
* Note that any configuration and stored messages will be lost at pod restart.
*******************************************************************************
For production purposes it is recommended to use persistent storage.
{{- end }}

== Performance and resource requirements ==
{{- if .Values.solace.systemScaling }}
Max supported number of client connections: {{ .Values.solace.systemScaling.maxConnections }}
Max number of queue messages, in millions of messages: {{ .Values.solace.systemScaling.maxQueueMessages }}
Max spool usage, in MB: {{ .Values.solace.systemScaling.maxSpoolUsage }}
Requested cpu, in cores: {{ .Values.solace.systemScaling.cpu }}
Requested memory: {{ .Values.solace.systemScaling.memory }}
Requested storage: {{ .Values.storage.size }}
{{- else }}
{{- if contains "dev" .Values.solace.size }}
This is a minimum footprint deployment for development purposes. For guaranteed performance, specify a different solace.size value.
{{- else }}
The requested connection scaling tier for this deployment is: max {{ substr 4 10 .Values.solace.size }} connections.
{{- end }}
Following resources have been requested per PubSub+ pod:
echo `kubectl get statefulset --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="Minimum resources: {.spec.template.spec.containers[0].resources.requests}"`
{{- end }}

== Services access ==
To access services from pods within the k8s cluster, use these addresses:

echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t{{ template "solace.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{.port}\n"`

To access from outside the k8s cluster, perform the following steps.

{{- if contains "NodePort" .Values.service.type }}

Obtain the NodePort IP and service ports:

export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[*].status.addresses[0].address}"); echo $NODE_IP
# Use the following ports with any of the node IPs
echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t<NodeIP>:{.nodePort}\n"`

{{- else if contains "LoadBalancer" .Values.service.type }}

Obtain the LoadBalancer IP and the service addresses:
NOTE: At initial deployment it may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "solace.fullname" . }}'

export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}"); echo SERVICE_IP=$SERVICE_IP
# Ensure valid SERVICE_IP is returned:
echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t$SERVICE_IP:{.port}\n"`

{{- else if contains "ClusterIP" .Values.service.type }}

NOTE: The specified k8s service type for this deployment is "ClusterIP", which does not expose services externally.

For local testing purposes you can use port-forward in a background process to map pod ports to the local host, then use these service addresses:

kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "solace.fullname" . }} $(echo `kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.targetPort}:{.port} "`) &
echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t127.0.0.1:{.targetPort}\n"`

{{- end }}
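The admin-password retrieval shown in NOTES.txt boils down to base64-decoding one field of the release secret. A local sketch, using a hypothetical encoded value in place of `.data.username_admin_password`:

```shell
# Hypothetical stand-in for the secret's username_admin_password field.
encoded='c29sYWNlLWRlbW8='
# Decode exactly as the NOTES.txt command does after the kubectl lookup.
password=$(printf '%s' "$encoded" | base64 --decode)
echo "$password"   # -> solace-demo
```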
56 changes: 28 additions & 28 deletions pubsubplus/templates/_helpers.tpl
@@ -1,29 +1,29 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "solace.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 53 chars because some Kubernetes name fields are limited (by the DNS naming spec).
*/}}
{{- define "solace.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 53 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 53 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Return the name of the service account to use
*/}}
{{- define "solace.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default ( cat (include "solace.fullname" .) "-sa" | nospace ) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
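The `solace.fullname` helper above joins the release and chart names with `printf "%s-%s"`, truncates the result to 53 characters, and trims the trailing hyphen that truncation may leave. A shell sketch of the same logic (the function name is illustrative, not part of the chart):

```shell
# Mimic Helm's: printf "%s-%s" ... | trunc 53 | trimSuffix "-"
solace_fullname() {
  s=$(printf '%s-%s' "$1" "$2" | cut -c1-53)  # join names, trunc 53
  s="${s%-}"                                  # trimSuffix "-": drop one trailing hyphen, if any
  printf '%s\n' "$s"
}
solace_fullname "my-release" "pubsubplus"   # -> my-release-pubsubplus
```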
