feat: Add opt to configure mTLS for host clients #183
Merged
Conversation
richardcase reviewed Jun 27, 2022
Callisto13 force-pushed the tls branch 2 times, most recently from 59624ef to 037a555 on June 28, 2022 at 13:46
(The commit message was revised across the force-pushes: earlier versions stored all credentials in a single `Opaque` secret with `cert`/`key`/`ca` fields, then split them across `tlsSecretRef` and `caSecretRef` secrets, before settling on the single `kubernetes.io/tls` secret described in the final PR description below.)
Callisto13 commented Jun 28, 2022:
// kind: Secret
// metadata:
//   name: secret-tls
// type: kubernetes.io/tls
`Opaque` or `tls`, that is the question
Callisto13 commented Jun 28, 2022:

i swear to god this fucking linter... I just want to write bad code in peace :(
richardcase approved these changes Jun 28, 2022
**User Value**

This commit adds a parameter on the MicrovmClusterSpec to configure mTLS for flintlock host client connections. For now, mTLS is always configured, with no option for simple TLS.

Credentials must be stored in a TLS secret created in the same namespace as the MicroVMCluster. The name of the secret is then passed to the `tlsSecretRef` field on the cluster spec. The presence of this field will determine whether CAPMVM loads the credentials into client calls. Otherwise all calls will be made insecurely.
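For example:

```
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: MicrovmCluster
metadata:
  name: mvm-test
  namespace: default
spec:
  tlsSecretRef: mytlssecret
  ...
```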
The secret must contain the following fields in its `data`: `"tls.crt"` (the client certificate), `"tls.key"` (the client key), and `"ca.crt"` (the certificate authority cert).
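For example:

```
apiVersion: v1
kind: Secret
metadata:
  name: mytlssecret
  namespace: default # same as the cluster
type: kubernetes.io/tls
data:
  tls.crt: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh
    ...
  tls.key: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh
    ...
  ca.crt: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh
    ...
```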
**Implementation**
The secret ref is on the mvm cluster object and will be used (secret fetched, values decoded, etc.) by every mvm machine on every reconcile which requires it. This feels a bit wasteful to me, but apparently this is the way things are done. Sharing in-memory data between reconciliation loops is not really done (unless you want to write the value to the status of one object so that it can be read by another, which in this case we really don't want to do).
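Concretely, the per-reconcile work might look something like the sketch below. This is illustrative only: `loadTLSConfig` and `dialHost` are hypothetical names, and the real CAPMVM wiring may differ.

```go
package controllers

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// loadTLSConfig fetches the referenced secret and builds a client tls.Config.
// Secret.Data values arrive already base64-decoded from the API machinery.
func loadTLSConfig(ctx context.Context, c client.Client, name, namespace string) (*tls.Config, error) {
	var secret corev1.Secret
	if err := c.Get(ctx, types.NamespacedName{Name: name, Namespace: namespace}, &secret); err != nil {
		return nil, fmt.Errorf("getting TLS secret: %w", err)
	}

	cert, err := tls.X509KeyPair(secret.Data["tls.crt"], secret.Data["tls.key"])
	if err != nil {
		return nil, fmt.Errorf("parsing client keypair: %w", err)
	}

	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(secret.Data["ca.crt"]) {
		return nil, fmt.Errorf("appending CA cert to pool")
	}

	return &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool}, nil
}

// dialHost opens a connection to a flintlock host using those credentials.
func dialHost(addr string, cfg *tls.Config) (*grpc.ClientConn, error) {
	return grpc.Dial(addr, grpc.WithTransportCredentials(credentials.NewTLS(cfg)))
}
```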
One idea is to have a global var. Another, better, idea is to have a
pointer value added to both reconcilers' structs: the mvm cluster writing
the value and the mvm machines reading it. I decided to leave this for
now and keep things simple.
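For what it's worth, that shared-pointer idea might look roughly like this; the `tlsStore` type and `TLS` fields are hypothetical and not part of this PR:

```go
package controllers

import (
	"crypto/tls"
	"sync"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// tlsStore would be created once in main and handed to both reconcilers.
type tlsStore struct {
	mu  sync.RWMutex
	cfg *tls.Config
}

func (s *tlsStore) Set(c *tls.Config) { s.mu.Lock(); defer s.mu.Unlock(); s.cfg = c }
func (s *tlsStore) Get() *tls.Config  { s.mu.RLock(); defer s.mu.RUnlock(); return s.cfg }

// The cluster reconciler writes the loaded config...
type MicrovmClusterReconciler struct {
	client.Client
	TLS *tlsStore
}

// ...and the machine reconciler reads it, skipping the per-reconcile fetch.
type MicrovmMachineReconciler struct {
	client.Client
	TLS *tlsStore
}
```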
**Testing**
I have added units for the extraction of the data from the secret, but not for the addition of creds to the client. Tests that the client is actually built with, and is using, the correct things would be more at the integration level. Any tests around the host clients here use fake clients, so adding coverage there would show us nothing.
One thing I could do is create a fake flintlock (which would basically be a naive implementation of a flintlock server, returning hardcoded answers) which verifies that the client is built the way we want it to be. The benefits of this are that 1) we get fast turnaround at an integration level without having to run an e2e suite, and 2) we actually have some integration-level tests at all.
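Such a fake could be as small as an in-process gRPC server that requires client certificates. The sketch below is an assumption of shape only: no flintlock service stubs are shown, and `NewServer` is a hypothetical helper.

```go
package fake

import (
	"crypto/tls"
	"crypto/x509"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// NewServer starts a TLS-terminating gRPC server on a random local port.
// Requiring and verifying client certs is what lets a test assert that
// the CAPMVM host client really dials with mTLS.
func NewServer(serverCert tls.Certificate, clientCA *x509.CertPool) (addr string, stop func(), err error) {
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", nil, err
	}

	tlsCfg := &tls.Config{
		Certificates: []tls.Certificate{serverCert},
		ClientCAs:    clientCA,
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}

	srv := grpc.NewServer(grpc.Creds(credentials.NewTLS(tlsCfg)))
	// Flintlock service stubs returning hardcoded answers would be
	// registered on srv here.

	go srv.Serve(lis) //nolint:errcheck // test helper
	return lis.Addr().String(), srv.GracefulStop, nil
}
```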
The downside is that the fake flintlock implementation may drift from the real one, and we end up building CAPMVM capability which the real flintlock does not support. Admittedly the chance of this happening is quite narrow given that the fake server would only return what we tell it to, but just calling it out.
My next step is to create e2es which test all the interactions
between CAPMVM and flintlock.
Dependent on #182 and liquidmetal-dev/flintlock#464