
Query on customVolumeMount #92

Closed
vkotheka opened this issue Oct 23, 2020 · 4 comments

vkotheka commented Oct 23, 2020

Suppose my K8s cluster has predefined PVs and PVCs.

I am referring to the documentation at https://artifacthub.io/packages/helm/solace/pubsubplus
and
https://raw.githubusercontent.com/SolaceProducts/pubsubplus-kubernetes-quickstart/master/pubsubplus/values.yaml

# storage.customVolumeMount enables specifying a YAML fragment how the data volume should be mounted.
#   If customVolumeMount is defined the rest of the storage params will be ignored
#   This example shows how to mount the PubSub+ data volume from an existing pvc "test-claim". Ensure to preserve indentation.
# customVolumeMount: |
#   persistentVolumeClaim:
#     claimName: existing-pvc-name

The question for the above scheme is:
Let's say I have 3 PVCs called pvc-primary, pvc-backup and pvc-monitor.
How do I specify the PVC names for the primary, backup and monitor nodes using the above scheme?

bczoma (Collaborator) commented Oct 23, 2020

storage.customVolumeMount can be used for a non-HA deployment.
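
For example, a non-HA deployment could point the data volume at an existing PVC with a values override along these lines (a sketch based on the values.yaml comment above; the claim name my-existing-pvc is an assumption):

# values-override.yaml (sketch)
storage:
  customVolumeMount: |
    persistentVolumeClaim:
      claimName: my-existing-pvc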

For an HA deployment, the current implementation, which leverages a StatefulSet, requires a strict naming scheme if pre-provisioned PVCs are to be used.

The naming is data-<pod-name>, where the -0 pod is the Primary, -1 is the Backup, and -2 is the Monitor.

For an example deployment named <MyDeployment>, the pods created will be:

NAME                  READY   STATUS    RESTARTS   AGE
<MyDeployment>-pubsubplus-0   1/1     Running   2          27d
<MyDeployment>-pubsubplus-1   1/1     Running   2          27d
<MyDeployment>-pubsubplus-2   1/1     Running   2          27d

And the PVCs will be named:

$ kubectl get pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-<MyDeployment>-pubsubplus-0   Bound    pvc-6fa7d535-97b7-465c-a633-7db8fec57c54   30Gi       RWO            standard       27d
data-<MyDeployment>-pubsubplus-1   Bound    pvc-4f1a77e8-8fe9-432c-afcf-0a737150d472   30Gi       RWO            standard       27d
data-<MyDeployment>-pubsubplus-2   Bound    pvc-40188e57-aecd-480d-9b87-a229ebc4f19e   30Gi       RWO            standard       27d

The above PVCs were generated dynamically, but if PVCs with the same names already exist before launching the deployment, the existing ones will be picked up.
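
For illustration, a pre-created claim that the StatefulSet would adopt for the Primary pod might look like this (a sketch; the namespace, storage class and size are assumptions and should match your deployment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-<MyDeployment>-pubsubplus-0   # must follow the data-<pod-name> naming scheme
  namespace: default                       # assumption: the deployment's namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard               # assumption: same class the chart would otherwise use
  resources:
    requests:
      storage: 30Gi                        # assumption: matches the configured data volume size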

If you are trying to migrate existing PVCs to this naming scheme, there are several online discussions with ideas on how to rename them.

I will enhance the documentation to better clarify this.

vkotheka (Author) commented

Thanks bczoma.

One more query on:
"The above PVCs were generated dynamically, but if PVCs with the same names already exist before launching the deployment, the existing ones will be picked up."

How does the script know which PV to bind the PVC to if there are no existing PVCs following the naming convention? And to add a complexity: what happens if there are multiple PVs? Say our PVCs require 30 GB gp2 volumes and there are six of them: which one would be chosen? This probably happens at the K8s infrastructure level, but I am keen to know whether the Helm charts have any special processing there.

bczoma (Collaborator) commented Oct 30, 2020

The logic follows a naming convention. If any PVCs named data-<MyDeployment>-pubsubplus-[012] (interpret [012] as a regex) exist, they will be picked up for the matching pods; otherwise new PVCs will be created for the non-existent ones. New PVCs are created using StorageClasses. If no StorageClass is available or defined, the PVCs will be stuck in the Pending state and so will the pods.
This happens at the Kubernetes StatefulSet level as the normal behavior.
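
Regarding which pre-provisioned PV a newly created PVC binds to: if several PVs satisfy the class, size and access mode, Kubernetes picks one of the matching volumes. To pin a claim to one specific PV, the PVC can be pre-created with spec.volumeName set (a sketch of standard Kubernetes behavior, not chart-specific; the PV name and the gp2 class are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-<MyDeployment>-pubsubplus-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2                 # assumption: must match the target PV's storageClassName
  volumeName: my-preprovisioned-pv      # assumption: the specific pre-provisioned PV to bind
  resources:
    requests:
      storage: 30Gi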

bczoma (Collaborator) commented Jan 27, 2021

No further updates; considering this ready to be closed.

PhilippeKhalife pushed a commit that referenced this issue Jan 27, 2021
* Added support to configure server certificates for TLS
* Improved logic to set and log pod active labels as part of readiness check
* Updated names and file locations for readiness check related files to align with long-term broker file system strategy
* Renamed internal script setup-config-sync to startup-broker
* Added more default exposed ports, unified port names
* Added automation coverage for TLS configuration
* Fixed reference to Helm hub
* Updated chart version to 2.4

Closes issues #87, #92, #94, #98