Mounting a Persistent Volume Claim as Volume within a Pod's Container spec doesn't seem to work. #2217
Comments
/assign

I will try to reproduce the issue and find the root cause.
After debugging, it looks like, in this case, the function sanitize_for_serialization can't serialize the snake_case keys of a plain dict body. Alternatively, I think there is another way that you can try: utilizing the V1Volume and V1PersistentVolumeClaim model classes to compose the pod you want, similar to this example.
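A minimal sketch of that model-based approach, assuming an already-existing PVC; the pod name, image, claim name, and mount path below are illustrative placeholders, not values taken from this issue:

```python
from kubernetes import client, config

config.load_kube_config()

# Build the pod from typed model objects; the client maps each snake_case
# attribute to its camelCase wire name during serialization.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="test-pod"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="test-container",
                image="busybox",
                command=["sleep", "3600"],
                volume_mounts=[
                    client.V1VolumeMount(name="task-pv-storage", mount_path="/data")
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="task-pv-storage",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="task-pv-claim"
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because these are typed model objects rather than a raw dict, nothing is silently dropped on serialization.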
@dtm2451, you need to modify the snake_case keys in your manifest to camelCase.
Oh! Thank you so much for your time investigating. Sounds like this is user error on my side then, but adding a warning would be nice! I didn't catch that I'd left this portion of my manifest in snake_case rather than camelCase, as is clearly the way all key names work in the python client! Some warning when elements are skipped due to such a conversion failure would be VERY nice!
Wait, actually, I responded too quickly there. My understanding of the python client is that its fields are designed around snake_case conversions of what one would normally provide in camelCase directly to kubectl. That is what I built towards here -- exactly the path you point towards above.

So it does still seem like a bug in the client to me if the snake_case version of these fields gets silently dropped. FWIW (and contrary to my understanding from the documentation, though perhaps it is my understanding that is wrong?), when I swap to using camelCase (not the seemingly desired snake_case) for the entirety of my manifest, the pod is created correctly.
For example, in what I understand to be documentation of how to define a V1Volume for the python client, the field "persistent_volume_claim" (not "persistentVolumeClaim") is typed as V1PersistentVolumeClaimVolumeSource, and following that link we also find "claim_name" and "read_only" fields (not "claimName" and "readOnly").
@dtm2451, I couldn't agree with you more that the fields are designed around snake_case. For this case, the difference is the type of the request body: if the body is plain JSON (like a hard-coded dict), the python client will send it as-is rather than converting the keys.
I'm not sure I quite follow the logic behind that fully, specifically because the case here is a python dict, which is of course similar to JSON yet fully python native. I suppose I'm simply curious for more detail on why it becomes unreasonable for the client to parse and modify it. Is there a specific function that I should be passing the dict through first?
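A small sketch of the difference being described, assuming the behavior explained above (model objects have their snake_case attributes mapped to camelCase wire names during serialization, while a plain dict is passed through with its keys untouched); the volume and claim names are illustrative:

```python
from kubernetes import client

api = client.ApiClient()

# Typed model object: the serializer maps snake_case attributes
# to their camelCase wire names.
vol_model = client.V1Volume(
    name="task-pv-storage",
    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
        claim_name="task-pv-claim"
    ),
)
print(api.sanitize_for_serialization(vol_model))
# {'name': 'task-pv-storage', 'persistentVolumeClaim': {'claimName': 'task-pv-claim'}}

# Plain dict: keys are sent exactly as written, so snake_case keys
# reach the API server unchanged (and are then ignored by it).
vol_dict = {
    "name": "task-pv-storage",
    "persistent_volume_claim": {"claim_name": "task-pv-claim"},
}
print(api.sanitize_for_serialization(vol_dict))
# {'name': 'task-pv-storage', 'persistent_volume_claim': {'claim_name': 'task-pv-claim'}}
```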
What happened (please include outputs or screenshots):
A pod is created from the manifest below, but the volume meant to target a persistent_volume_claim is instead created as an EmptyDir, and the container volume_mounts meant to target this volume are skipped entirely. Specifically, when I run kubectl describe pod/test-pod, its container has no mount associated with the target name, and I see the below for the volume that should be a PersistentVolumeClaim:

What you expected to happen:
I expect the pod to be created with the volume defined as a PersistentVolumeClaim, and its container to have the corresponding volume mount.
I was able to successfully produce this by spinning up the pod from an equivalent yml file and using kubectl directly.
How to reproduce it (as minimally and precisely as possible):
Persistent volume config: (Adjust 'path' as needed to something that exists on your k8s node)
Persistent Volume Claim config:
Then run kubectl create -f path/to/each/config.yml for both files. Then run the MRE python code (the kubernetes client is referenced as kub_cli here).
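The original MRE snippet did not survive extraction here; the following is a hypothetical reconstruction of the kind of dict-based manifest being described, assuming kub_cli is the imported kubernetes client module. The pod name, image, claim name, and mount path are illustrative, not the original values:

```python
from kubernetes import client as kub_cli, config

config.load_kube_config()

# Dict-based manifest with snake_case keys in the volume sections,
# mirroring the problem described in this issue: per the report, the
# volume comes out as an EmptyDir and the volume mount is skipped.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-pod"},
    "spec": {
        "containers": [
            {
                "name": "test-container",
                "image": "busybox",
                "command": ["sleep", "3600"],
                "volume_mounts": [
                    {"name": "task-pv-storage", "mount_path": "/data"}
                ],
            }
        ],
        "volumes": [
            {
                "name": "task-pv-storage",
                "persistent_volume_claim": {"claim_name": "task-pv-claim"},
            }
        ],
    },
}

kub_cli.CoreV1Api().create_namespaced_pod(namespace="default", body=pod_manifest)
```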
Anything else we need to know?:
I am fairly new to working with kubernetes, but I believe this methodology is the intended way to mount persistent storage into pod containers: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
Environment:
- Kubernetes version (kubectl version):
- Python version (python --version): 3.10.12 and 3.8.17 (MRE tested directly only with 3.10.12)
- Python client version (pip list | grep kubernetes): 29.0.0