Make use of API server dry-run in K8s v1.13 #804
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Closing this in favor of #11574
Today, we do a `kubectl apply --dry-run` of all resources before performing the actual apply, in hopes of catching any manifest errors before deploying anything to the cluster. This is far from foolproof: the dry-run does not catch many errors that only surface during the `apply` phase, which results in a partially successful deployment. One example is a Deployment whose label selectors do not match its pod template labels.

There is a new API server feature (beta in v1.13) that performs the validation server side, which will improve the experience by performing deeper checks on the user's manifests: kubernetes/enhancements#576

To support this, we simply need to check whether the cluster is running v1.13 and use the new `--server-dry-run` flag in place of `--dry-run`.
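A minimal sketch of the version gate described above, assuming the server's minor version has already been extracted (e.g. via `kubectl version -o json`). The helper name `choose_dry_run_flag` is hypothetical, not part of any existing tooling:

```shell
#!/bin/sh
# Hypothetical helper: pick the dry-run flag based on the cluster's
# minor version. In practice the argument would come from something like:
#   kubectl version -o json | jq -r '.serverVersion.minor'
choose_dry_run_flag() {
  # Some providers report minors like "13+"; strip the trailing "+".
  minor="${1%+}"
  if [ "$minor" -ge 13 ]; then
    # v1.13+ supports server-side dry run.
    echo "--server-dry-run"
  else
    # Fall back to the client-side dry run on older clusters.
    echo "--dry-run"
  fi
}

choose_dry_run_flag 13   # prints --server-dry-run
choose_dry_run_flag 12   # prints --dry-run
```

The resulting flag would then be passed to the existing `kubectl apply` invocation, leaving behavior on pre-1.13 clusters unchanged.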