Cache driver capabilities #241
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jsafrane. The full list of commands accepted by this bot can be found here. The pull request process is described here. |
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
If the provisioner just dies on startup, will kubernetes constantly try to restart it? Would that take up more cycles than just having it be idle? |
Yes, Kubernetes will try to restart it (with exponential backoff) and the pod should eventually end up in CrashLoopBackOff state, hoping to get the admin's attention. The max backoff is probably 5 minutes, i.e. nothing that I would worry about (for one driver). The question is whether it's the right thing to do. What is the right way to get the admin's attention? A crashing provisioner, or an event on the PVC saying that it cannot be provisioned? |
/assign |
Conclusion: |
Force-pushed from 7798ce6 to 9cbcfcf
Restored sending of events |
/lgtm
We don't expect driver capabilities to change while the driver is running. In this PR:
edit: This changes the behavior of external-provisioner.
Previously, it kept running even if the driver did not support dynamic provisioning, and it created events on all PVCs saying that it won't provision them. Users could spot this event easily and talk to the cluster admin.
Now users will see just "waiting for a volume to be created, either by external provisioner or manually". The provisioner simply dies on startup and it's up to the cluster admin to find out what's wrong.
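The new startup check can be sketched like this. This is a minimal, self-contained illustration, not the actual external-provisioner code: the real provisioner queries the driver's capabilities over the CSI gRPC API, and the `capabilities` type and `supportsProvisioning` helper here are hypothetical names.

```go
package main

import (
	"fmt"
	"os"
)

// capabilities is a stand-in for the flags the provisioner learns from the
// CSI driver at startup (hypothetical type; the real code uses the CSI proto).
type capabilities struct {
	createDeleteVolume bool
}

// supportsProvisioning mirrors the check this PR introduces: without
// CREATE_DELETE_VOLUME support, the provisioner cannot provision volumes.
func supportsProvisioning(c capabilities) bool {
	return c.createDeleteVolume
}

func main() {
	caps := capabilities{createDeleteVolume: true} // flip to false to see the crash path
	if !supportsProvisioning(caps) {
		// New behavior: die at startup; Kubernetes restarts the pod with
		// backoff until it ends up in CrashLoopBackOff.
		// Old behavior (removed): keep running and emit an event on every
		// PVC saying that it won't be provisioned.
		fmt.Fprintln(os.Stderr, "CSI driver does not support dynamic provisioning")
		os.Exit(1)
	}
	fmt.Println("driver supports dynamic provisioning; starting controller")
}
```

The trade-off debated above is visible here: the exit path surfaces the misconfiguration only in pod status, while the removed event path surfaced it on each PVC.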