Some admission control support for validating local issues #2742
Also, I think a DAG-aware admission controller is possible if Contour itself is serving the admission controller webhook, but I'm not sure it's a good idea to force DAG nodes to be created in a topological order to avoid admission errors. This isn't true of normal Kubernetes objects in general (eg.
How far can you get with https://github.com/open-policy-agent/gatekeeper ?
I expect gatekeeper can do an OK job of these.
This probably needs CRD v1 #2678. Note that CRD v1 can be done today for people running Kubernetes > 1.16 (I had it in my old kustomize PR).
I think that admission control of some sort is inevitable. It's a much better user experience to be made aware of invalid objects as soon as possible. The details we need to figure out are:

What admission control do we use? As @jpeach says, many use cases could be hit using Gatekeeper, and that would also give advanced users the flexibility of customising the admission control to their liking. A DAG-aware admission controller would need to run as part of Contour, I agree, in a similar way to what kubebuilder controllers now do. I think this is much less necessary, given that we (well, I) will implement Conditions support in status, which should at least provide a way to surface the same information.

When do we do this? Again, the advantage of using a general admission controller like Gatekeeper (and providing sample policies) is that we will be able to deliver some value sooner. Building something into Contour will take longer.

This is a great discussion for us to have at a community meeting! @bgagnon, if you can make a community meeting, give us a shout and we will add an item to the agenda.
IIUC, you can't really do admission control against more than one resource, so there's no real benefit to being DAG aware.
I think there is a benefit. Once you have those admission bits in front, you're able to stop someone from breaking a chain of proxies by finding issues up front and handling them better. This would be a very big change to Contour that needs to be thought out carefully.
But that would only work if you could be sure you had all the proxies you needed at validation time. My understanding was that you could not depend on that. If you could, then validation would also enforce some sort of ordering constraints, which could be awkward. Maybe a more general approach to this problem is to figure out how Contour can keep existing configurations working if they get modified in a way that breaks them. That would improve operational safety in the general case.
Yup 👍
@youngnick I brought this up in the last community meeting, actually. There was a discussion about the upcoming timeout range feature and it reminded me of LimitRange, ResourceQuota and PodSecurityPolicy objects, which are enforced at admission time. Arguably, Kubernetes itself is not perfect with those due to the asynchronous admission of Deployment -> ReplicaSet -> Pod.

On our side, we've been considering adding admission control hooks for Contour via our controller built on controller-runtime that handles our custom DNS and TLA provisioning. We have environment-specific constraints on top of what Contour allows. This portion could be done with OPA and Gatekeeper, but it might be a little silly considering we can just add a few lines of Go to our existing controller.

Anyhow, it feels to me like a portion (perhaps very small) of the validation needed in Contour could be done at admission, leaving the more complex bits to the eventually consistent DAG evaluator. I should mention this reflection happened in the context of an old 1.14 cluster (now 1.15!) that does not even validate CRD payloads. With OpenAPI / JSON schema, it's perhaps less necessary, but there are still validations influenced by Contour configs and CLI flags. If that's a growing trend in the project, it's worth considering, IMO.
Yes, I think that moving as much simple validation as possible as early in the object apply process as possible is the best UX we can get. That is:
The Contour project currently lacks enough contributors to adequately respond to all Issues. This bot triages Issues according to the following rules:
You can:
Please send feedback to the #contour channel in the Kubernetes Slack
For user errors that can be validated without resolving other nodes in the DAG, admission control would provide a better user experience than the current asynchronous system of flagging the proxy object as "invalid".
Some validation points require late binding and couldn't be covered here:
A quick list of validation points not covered by the JSON schema but still local to a single object:
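One such schema-external, single-object check can be illustrated in Go. The rule below (a timeout field must be the keyword "infinity" or a parseable duration string) is an assumed example for illustration; consult Contour's own parsing code for the real contract:

```go
package main

import (
	"fmt"
	"time"
)

// validateTimeout sketches a local validation a JSON schema can't express:
// the value must be "infinity" or a Go-style duration like "30s".
// This rule is illustrative, not Contour's exact behaviour.
func validateTimeout(v string) error {
	if v == "" || v == "infinity" {
		return nil
	}
	if _, err := time.ParseDuration(v); err != nil {
		return fmt.Errorf("invalid timeout %q: %v", v, err)
	}
	return nil
}

func main() {
	for _, v := range []string{"30s", "infinity", "banana"} {
		if err := validateTimeout(v); err != nil {
			fmt.Println(err)
		} else {
			fmt.Printf("%q ok\n", v)
		}
	}
}
```

Checks of this shape need no knowledge of other objects, which is what makes them candidates for synchronous admission rather than the eventually consistent DAG pass.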
cc @stevesloka