feat: Add opt to configure mTLS for host clients #183

Merged
Callisto13 merged 5 commits into liquidmetal-dev:main from tls on Jun 28, 2022

Conversation

@Callisto13 (Member) commented on Jun 15, 2022

**User Value**

This commit adds a parameter to the MicrovmClusterSpec to configure mTLS
for flintlock host client connections.

For now, mTLS is always configured; there is no option for simple TLS.

Credentials must be stored in a TLS secret created in the same
namespace as the MicrovmCluster. The name of the secret is then passed
to the `tlsSecretRef` field on the cluster spec. The presence of this
field determines whether CAPMVM loads the credentials into client
calls; otherwise all calls are made insecurely.

```
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: MicrovmCluster
metadata:
  name: mvm-test
  namespace: default
spec:
  tlsSecretRef: mytlssecret
...
```

The secret must contain the following fields in its `data`: `tls.crt`
(the client certificate), `tls.key` (the client key), and `ca.crt` (the
certificate authority cert).

```
apiVersion: v1
kind: Secret
metadata:
  name: mytlssecret
  namespace: default # same as the cluster
type: kubernetes.io/tls
data:
  tls.crt: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
  tls.key: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
  ca.crt: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
```
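
To make the flow concrete, here is a minimal sketch (not the actual CAPMVM code; the helper name and error handling are illustrative) of how the decoded secret data could be turned into gRPC transport credentials for the flintlock client:

```
// Sketch only: illustrative helper, not the actual CAPMVM implementation.
package client

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	corev1 "k8s.io/api/core/v1"
)

// dialOptionFromSecret builds a gRPC dial option from the referenced TLS
// secret. Secret data values arrive already base64-decoded from the API.
func dialOptionFromSecret(secret *corev1.Secret) (grpc.DialOption, error) {
	cert, ok := secret.Data["tls.crt"]
	if !ok {
		return nil, fmt.Errorf("secret %s missing tls.crt", secret.Name)
	}
	key, ok := secret.Data["tls.key"]
	if !ok {
		return nil, fmt.Errorf("secret %s missing tls.key", secret.Name)
	}
	ca, ok := secret.Data["ca.crt"]
	if !ok {
		return nil, fmt.Errorf("secret %s missing ca.crt", secret.Name)
	}

	keyPair, err := tls.X509KeyPair(cert, key)
	if err != nil {
		return nil, fmt.Errorf("parsing client keypair: %w", err)
	}

	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(ca) {
		return nil, fmt.Errorf("parsing ca.crt from secret %s", secret.Name)
	}

	tlsConfig := &tls.Config{
		Certificates: []tls.Certificate{keyPair},
		RootCAs:      pool,
		MinVersion:   tls.VersionTLS12,
	}

	return grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)), nil
}
```

The returned dial option would be used when dialing the flintlock host address; when `tlsSecretRef` is unset, an insecure credential (for example `insecure.NewCredentials()` from `google.golang.org/grpc/credentials/insecure`) would be used instead.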

**Implementation**

The secret ref is on the mvm cluster object and will be used (secret fetched,
values decoded, etc.) by every mvm machine on every reconcile which
requires it. This feels a bit wasteful to me, but apparently this is the
way things are done: sharing in-memory data between reconciliation loops
is not really done (unless you want to write the value to the status of
one object so that it can be read by another, which in this case we
really don't want to do).

One idea is to have a global var. Another, better, idea is to add a
pointer value to both reconcilers' structs, with the mvm cluster reconciler
writing the value and the mvm machine reconciler reading it. I decided to
leave this for now and keep things simple.
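
For illustration only, the pointer idea could look something like the sketch below (hypothetical names and shape; CAPMVM does not do this today):

```
// Sketch only: hypothetical shape for sharing TLS material in-memory
// between the cluster and machine reconcilers instead of re-fetching
// the secret on every reconcile.
package controllers

import (
	"crypto/tls"
	"sync"
)

// sharedTLS is a small concurrency-safe holder that both reconcilers
// would receive a pointer to when they are wired up in main.go.
type sharedTLS struct {
	mu  sync.RWMutex
	cfg *tls.Config
}

func (s *sharedTLS) Set(cfg *tls.Config) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.cfg = cfg
}

func (s *sharedTLS) Get() *tls.Config {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.cfg
}

// The cluster reconciler would write the config after resolving the
// secret ref; the machine reconciler would read it when dialing hosts.
type MicrovmClusterReconciler struct {
	TLS *sharedTLS
}

type MicrovmMachineReconciler struct {
	TLS *sharedTLS
}
```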

**Testing**

I have added unit tests for the extraction of the data from the secret, but
not for the addition of creds to the client. Tests that the client is
actually built with, and is using, the correct things would sit more at the
integration level. Any tests around the host clients here use fake
clients, so adding coverage there would show us nothing.

One thing I could do is create a fake flintlock (which would basically be
a naive implementation of a flintlock server, returning hardcoded
answers) which verifies that the client is built the way we want it to be.
The benefits of this are that 1) we get fast turnaround at an integration
level without having to run an e2e suite, and 2) we actually have some
integration-level tests at all.
The downside is that the fake flintlock implementation may drift from
the real one, and we end up building CAPMVM capability which the real
flintlock does not support. Admittedly the chance of this happening is
quite narrow given that the fake server would only return what we tell it
to, but it is worth calling out.
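
As a rough illustration of the kind of check a fake server would enable, the sketch below stands up a gRPC server that requires client certificates and makes a call against it using the mTLS credentials; the standard gRPC health service and the `testTLSConfigs` fixture helper are stand-ins, not flintlock's real API:

```
// Sketch only: a stand-in for the proposed fake flintlock, using the
// standard gRPC health service instead of flintlock's real API.
package client_test

import (
	"context"
	"crypto/tls"
	"net"
	"testing"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func TestClientUsesMTLS(t *testing.T) {
	// serverTLS/clientTLS would be built from test fixtures; the server
	// config must set ClientAuth: tls.RequireAndVerifyClientCert so the
	// handshake fails if the client cert is missing or wrong.
	serverTLS, clientTLS := testTLSConfigs(t)

	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatal(err)
	}

	srv := grpc.NewServer(grpc.Creds(credentials.NewTLS(serverTLS)))
	healthpb.RegisterHealthServer(srv, health.NewServer())
	go func() { _ = srv.Serve(lis) }()
	defer srv.Stop()

	conn, err := grpc.Dial(lis.Addr().String(),
		grpc.WithTransportCredentials(credentials.NewTLS(clientTLS)))
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	// A successful call proves the mutual TLS handshake completed.
	if _, err := healthpb.NewHealthClient(conn).Check(
		context.Background(), &healthpb.HealthCheckRequest{},
	); err != nil {
		t.Fatalf("expected healthy mTLS connection, got: %v", err)
	}
}

// testTLSConfigs is assumed to generate a throwaway CA plus server and
// client certificates for the test; fixture generation is omitted here.
func testTLSConfigs(t *testing.T) (*tls.Config, *tls.Config) {
	t.Helper()
	t.Skip("certificate fixture generation omitted from this sketch")
	return nil, nil
}
```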

My next step is to create e2es which test all the interactions
between CAPMVM and flintlock.

Dependent on #182 and liquidmetal-dev/flintlock#464

@Callisto13 added the kind/feature (New feature or request) label on Jun 15, 2022

Review comment on the following lines from the diff:

// kind: Secret
// metadata:
// name: secret-tls
// type: kubernetes.io/tls

@Callisto13 (Member, Author) commented:

Opaque or tls, that is the question

@Callisto13 (Member, Author) commented on Jun 28, 2022

I swear to god this fucking linter...

I just want to write bad code in peace :(

@Callisto13 Callisto13 merged commit 243c19a into liquidmetal-dev:main Jun 28, 2022
@Callisto13 Callisto13 deleted the tls branch June 28, 2022 15:40