Problem mounting persistent volumes in Admiralty #104

Open
simonbonnefoy opened this issue Apr 15, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

@simonbonnefoy

Hello,

I have been trying to mount persistent volumes in Admiralty, but I have run into an odd situation.
I have tested the PersistentVolumeClaims without multicluster-scheduler and everything works well.
My setup is the following: I have two clusters running on Google Kubernetes Engine, a source cluster (cluster-cd) and a target cluster (cluster-1).
I have created two PersistentVolumeClaims on each cluster:
cluster-cd -> pvc-cd and pvc-demo
cluster-1 -> pvc-1 and pvc-demo

Note that the two pvc-demo claims do not point to the same PersistentVolume; only the name is the same.
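
For reference, a claim along these lines reproduces the setup on each cluster (the storage class and size here are illustrative assumptions, not values taken from above), applied with the respective kubectl context:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  # "standard" is GKE's default StorageClass; adjust if needed
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi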
Also, I use the following Job to test them (adapted from the quick start guide):

apiVersion: batch/v1
kind: Job
metadata:
  name: admiralty-pvc-test
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pvc-demo
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo hello world: && echo hello world >> /mnt/data/hello.txt && ifconfig && df -h"]
        # command: ["sh", "-c", "cat /mnt/data/hello.txt && ifconfig && echo ------- && df -h"]
        volumeMounts:
          - mountPath: "/mnt/data/"
            name: task-pv-storage
        resources:
          requests:
            cpu: 100m
      restartPolicy: Never

Case 1

I set claimName: pvc-cd (the PVC on the source cluster).
The pods stay in Pending status (in both the source and target clusters) and the pod description in the source cluster context gives me the following error.

Warning  FailedScheduling  44s (x3 over 111s)  admiralty-proxy  0/4 nodes are available: 1 , 3 node(s) didn't match node selector.

Case 2

I set claimName: pvc-1 (the PVC on the target cluster).
The pods stay in Pending status (only in the source cluster this time; they do not even show up in the target cluster).
The pod description in the source cluster context gives me the following error.

Warning  FailedScheduling  48s (x3 over 118s)  admiralty-proxy  persistentvolumeclaim "pvc-1" not found

Case 3

I set claimName: pvc-demo (the PVC that exists on both clusters, but refers to different volumes).
In this case, it seems to work. However, the output of echo hello world >> /mnt/data/hello.txt is written
to the PVC of the target cluster.

Conclusion

I understand the behavior in the three cases. However, is there a way in Admiralty to use PersistentVolumeClaims? I am interested in plugging them into some Argo workflows to produce input and output data sets.
Is there a good way to do that with Admiralty/Argo, or should I use buckets?
I have not found anything regarding this in the documentation, but maybe I have overlooked something.

Thanks in advance!

@adrienjt
Contributor

adrienjt commented Apr 19, 2021

Hi @simonbonnefoy , PVCs and PVs aren't specifically supported yet. As you saw, you had to copy pvc-demo to the target cluster for scheduling to work (Admiralty didn't make the one in the source cluster "follow"). Then the two pvc-demos gave birth to two independent PVs referring to different Google Persistent Disks.

  • Would you like them to refer to the same disk? What if the clusters are in different regions? That may confuse the CSI driver.
  • Would you like them to refer to different disks with data replication? You'd need a 3rd-party CSI driver.

I would recommend using buckets for your use case. Here's an equivalent demo on AWS (at 00:10:20): https://zoom.us/rec/share/0Ve24HgWnCkz474Q84wD2LgjXtO4UHSB_Bp1vJwrMf0lSXucBQoK4xKcz7qx63Pz.ZvD7hs0H9b0SxXLW
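
As a rough sketch of the bucket approach (bucket name and image are placeholders, and it assumes the pods can write to GCS, e.g. via Workload Identity or node scopes), your test Job could upload its output to a bucket instead of mounting a PVC:

apiVersion: batch/v1
kind: Job
metadata:
  name: admiralty-bucket-test
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    spec:
      containers:
      - name: c
        # google/cloud-sdk ships gsutil; credentials must come from
        # Workload Identity or a mounted service account key
        image: google/cloud-sdk
        command: ["sh", "-c", "echo hello world > /tmp/hello.txt && gsutil cp /tmp/hello.txt gs://my-demo-bucket/hello.txt"]
        resources:
          requests:
            cpu: 100m
      restartPolicy: Never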

@simonbonnefoy
Author

Hi @adrienjt

Thanks a lot for your reply! In the end I was able to make it work using GCS buckets.
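
For anyone landing here later, a minimal sketch of that pattern (bucket, key, and secret names are placeholders, not the exact values from my setup) is an Argo template with the Admiralty election annotation that writes its output as a GCS artifact:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: admiralty-gcs-test-
spec:
  entrypoint: produce
  templates:
  - name: produce
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    container:
      image: busybox
      command: ["sh", "-c", "echo hello world > /tmp/hello.txt"]
    outputs:
      artifacts:
      - name: hello
        path: /tmp/hello.txt
        gcs:
          bucket: my-demo-bucket          # placeholder bucket
          key: admiralty/hello.txt
          serviceAccountKeySecret:        # secret holding a GCS service account key
            name: gcs-credentials
            key: serviceAccountKey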

@adrienjt added the enhancement (New feature or request) label on Sep 1, 2021