Allow volumes and volume mounts to be specified #1424

Open
mimiteto opened this issue Oct 16, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@mimiteto

Describe the feature you'd like to have.
It would be nice to have the option to mount user-specified Secrets, ConfigMaps (and other volume sources) on the replication pods.

What is the value to the end user? (why is it a priority?)
As an end user, I would be able to use SSH certificates with the rclone mover (when using rclone over SFTP).
Currently, certificates in rclone can only be provided as file paths in its config.
There are likely other cases where the ability to provide similar configuration files would be beneficial.

How will we know we have a good solution? (acceptance criteria)

Additional context

mimiteto added the enhancement label on Oct 16, 2024
@tesshuflower
Contributor

We could possibly do something similar to what we did for restic with customCA (which can be specified in a ConfigMap or Secret).

Is there a common way of passing this information to rclone, and would this give you what you need? A quick look suggests rclone has a --ca-cert option, but I'm not sure whether it works for non-HTTPS backends.

The issue with simply mounting things into mover pods is that we would also need to specify paths so that you can reference them, and potentially tell rclone where to look for them. I'd prefer to have defined parameters that we pass to rclone, if possible.
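
For comparison, this is roughly what a restic-style customCA reference could look like if carried over to the rclone mover. Note this is only a sketch: a customCA field under spec.rclone is my assumption for illustration, mirroring the existing spec.restic.customCA.

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: user-sync-data-0
  namespace: sync-data
spec:
  rclone:
    # Hypothetical field, mirroring spec.restic.customCA, which accepts a Secret or ConfigMap
    customCA:
      secretName: my-ca-secret  # or configMapName, as with the restic mover
      key: ca.crt               # key within the Secret/ConfigMap holding the certificate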

@mimiteto
Author

On the topic of rclone: the flag I need is --sftp-pubkey-file. I proposed a code change so that I don't need to mount anything beyond what is already available in volsync.
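
For reference, rclone's SFTP backend expects these credentials as file paths inside the config file; a minimal section might look like this (host, user, and paths are placeholders):

[ssh]
type = sftp
host = sftp.example.com
user = backup
# set via the key_file option or the --sftp-key-file flag
key_file = /path/to/id_rsa
# set via the pubkey_file option or the --sftp-pubkey-file flag
pubkey_file = /path/to/id_rsa-cert.pub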

On the topic of mounting user-specified volumes: I believe this would make volsync more flexible.
Currently each case is handled separately, which leads to issues like this one.
Allowing user-specified volumes to be mounted would remove the need to open a new issue for every such case.
I believe you could simply mount user-specified volumes under a common directory (e.g. /mnt or /mount).
Let's look at the current ReplicationSource:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: user-sync-data-0
  namespace: sync-data
spec:
  rclone:
    copyMethod: Clone
    moverSecurityContext:
      fsGroup: 1000
      runAsGroup: 1000
      runAsUser: 1000
    rcloneConfig: volsync-user-sync-local
    rcloneConfigSection: ssh
    rcloneDestPath: /backups/p/sync-data/user-sync-data-user-sync-0
  sourcePVC: user-sync-data-user-sync-0
  trigger:
    schedule: 0 6 * * *

This leads to the following Job:

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2024-10-18T15:24:53Z"
  generation: 1
  labels:
    app.kubernetes.io/created-by: volsync
    volsync.backube/cleanup: d56bcd01-3649-4629-8a1d-692345b5be2e
  name: volsync-rclone-src-user-sync-data-0
  namespace: sync-data
  ownerReferences:
  - apiVersion: volsync.backube/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicationSource
    name: user-sync-data-0
    uid: d56bcd01-3649-4629-8a1d-692345b5be2e
  resourceVersion: "11063059"
  uid: bad912e6-0b04-477b-9652-957faa264ee6
spec:
  backoffLimit: 2
  completionMode: NonIndexed
  manualSelector: false
  parallelism: 1
  podReplacementPolicy: TerminatingOrFailed
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/created-by: volsync
        batch.kubernetes.io/controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
        batch.kubernetes.io/job-name: volsync-rclone-src-user-sync-data-0
        controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
        job-name: volsync-rclone-src-user-sync-data-0
      name: volsync-rclone-src-user-sync-data-0
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - /mover-rclone/active.sh
        env:
        - name: RCLONE_CONFIG
          value: /rclone-config/rclone.conf
        - name: RCLONE_DEST_PATH
          value: /backups/p/sync-data/user-sync-data-user-sync-0
        - name: DIRECTION
          value: source
        - name: MOUNT_PATH
          value: /data
        - name: RCLONE_CONFIG_SECTION
          value: ssh
        - name: PRIVILEGED_MOVER
          value: "0"
        image: quay.io/backube/volsync:0.10.0
        imagePullPolicy: IfNotPresent
        name: rclone
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /rclone-config/
          name: rclone-secret
        - mountPath: /tmp
          name: tempdir
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      serviceAccount: volsync-src-user-sync-data-0
      serviceAccountName: volsync-src-user-sync-data-0
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: volsync-user-sync-data-0-src
      - name: rclone-secret
        secret:
          defaultMode: 384
          secretName: volsync-user-sync-local
      - emptyDir:
          medium: Memory
        name: tempdir

Allowing users to specify volumes like:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: user-sync-data-0
  namespace: sync-data
spec:
  rclone:
    copyMethod: Clone
    moverSecurityContext:
      fsGroup: 1000
      runAsGroup: 1000
      runAsUser: 1000
    rcloneConfig: volsync-user-sync-local
    rcloneConfigSection: ssh
    rcloneDestPath: /backups/p/sync-data/user-sync-data-user-sync-0
  sourcePVC: user-sync-data-user-sync-0
  trigger:
    schedule: 0 6 * * *
  moverVolumes:
    - name: my-mount
      persistentVolumeClaim:
        claimName: my-pvc
    - name: my-credentials
      secret:
        secretName: secret-credentials

would then produce a Job like:

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2024-10-18T15:24:53Z"
  generation: 1
  labels:
    app.kubernetes.io/created-by: volsync
    volsync.backube/cleanup: d56bcd01-3649-4629-8a1d-692345b5be2e
  name: volsync-rclone-src-user-sync-data-0
  namespace: sync-data
  ownerReferences:
  - apiVersion: volsync.backube/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicationSource
    name: user-sync-data-0
    uid: d56bcd01-3649-4629-8a1d-692345b5be2e
  resourceVersion: "11063059"
  uid: bad912e6-0b04-477b-9652-957faa264ee6
spec:
  backoffLimit: 2
  completionMode: NonIndexed
  manualSelector: false
  parallelism: 1
  podReplacementPolicy: TerminatingOrFailed
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/created-by: volsync
        batch.kubernetes.io/controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
        batch.kubernetes.io/job-name: volsync-rclone-src-user-sync-data-0
        controller-uid: bad912e6-0b04-477b-9652-957faa264ee6
        job-name: volsync-rclone-src-user-sync-data-0
      name: volsync-rclone-src-user-sync-data-0
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - /mover-rclone/active.sh
        env:
        - name: RCLONE_CONFIG
          value: /rclone-config/rclone.conf
        - name: RCLONE_DEST_PATH
          value: /backups/p/sync-data/user-sync-data-user-sync-0
        - name: DIRECTION
          value: source
        - name: MOUNT_PATH
          value: /data
        - name: RCLONE_CONFIG_SECTION
          value: ssh
        - name: PRIVILEGED_MOVER
          value: "0"
        image: quay.io/backube/volsync:0.10.0
        imagePullPolicy: IfNotPresent
        name: rclone
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /rclone-config/
          name: rclone-secret
        - mountPath: /tmp
          name: tempdir
        - mountPath: /mnt/my-mount
          name: u-my-mount
        - mountPath: /mnt/my-credentials
          name: u-my-credentials
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      serviceAccount: volsync-src-user-sync-data-0
      serviceAccountName: volsync-src-user-sync-data-0
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: volsync-user-sync-data-0-src
      - name: rclone-secret
        secret:
          defaultMode: 384
          secretName: volsync-user-sync-local
      - emptyDir:
          medium: Memory
        name: tempdir
      - name: u-my-mount
        persistentVolumeClaim:
          claimName: my-pvc
      - name: u-my-credentials
        secret:
          secretName: secret-credentials

with a prefix added (u- in my example) to avoid collisions with the existing volume names.

Users can then reliably derive the paths they need for their configs.
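
For example, with the Secret from the spec above mounted under /mnt/my-credentials, an rclone config section could reference the credentials directly (the key names inside the Secret are illustrative):

[ssh]
type = sftp
host = sftp.example.com
user = backup
# secret-credentials is mounted at /mnt/my-credentials by the proposed moverVolumes
key_file = /mnt/my-credentials/id_rsa
pubkey_file = /mnt/my-credentials/id_rsa-cert.pub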
