# Store etcd snapshot metadata in a Custom Resource

Date: 2023-07-27

## Status

Accepted

## Context

K3s currently stores a list of etcd snapshots and associated metadata in a ConfigMap. Other downstream projects and controllers consume the content of this ConfigMap in order to present cluster administrators with a list of snapshots that can be restored.

On clusters with more than a handful of nodes, and reasonable snapshot intervals and retention periods, the snapshot list ConfigMap frequently reaches the maximum size allowed by Kubernetes, and fails to store any additional information. The snapshots are still created, but they cannot be discovered by users or accessed by tools that consume information from the ConfigMap.

When this occurs, the K3s service log shows errors such as:

```
level=error msg="failed to save local snapshot data to configmap: ConfigMap \"k3s-etcd-snapshots\" is invalid: []: Too long: must have at most 1048576 bytes"
```

A side effect of this is that snapshot metadata is lost whenever the ConfigMap cannot be updated, as the ConfigMap is the only place that metadata is stored.
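
For a sense of scale, a sketch like the one below can estimate how many snapshot entries fit before the size limit is hit, assuming each snapshot is stored as one key/value pair holding its JSON-serialized metadata. The entry struct and sample values are simplified, hypothetical stand-ins for the real snapshotFile data, not K3s code.

```go
// Illustrative only: estimates how many snapshot entries fit under the
// 1 MiB total-size limit of a ConfigMap.
package main

import (
	"encoding/json"
	"fmt"
)

// snapshotEntry is a simplified stand-in for k3s's private snapshotFile type.
type snapshotEntry struct {
	Name       string `json:"name"`
	Location   string `json:"location,omitempty"`
	NodeName   string `json:"nodeName,omitempty"`
	CreatedAt  string `json:"createdAt,omitempty"`
	Size       int64  `json:"size,omitempty"`
	Status     string `json:"status,omitempty"`
	Compressed bool   `json:"compressed"`
}

const configMapLimit = 1048576 // maximum total ConfigMap size, in bytes

func main() {
	entry := snapshotEntry{
		Name:       "etcd-snapshot-server-01-1690416000",
		Location:   "file:///var/lib/rancher/k3s/server/db/snapshots/etcd-snapshot-server-01-1690416000",
		NodeName:   "server-01",
		CreatedAt:  "2023-07-27T00:00:00Z",
		Size:       33554432,
		Status:     "successful",
		Compressed: false,
	}
	b, err := json.Marshal(entry)
	if err != nil {
		panic(err)
	}
	// Each snapshot costs roughly the JSON value plus its key name.
	perEntry := len(b) + len(entry.Name)
	fmt.Printf("~%d bytes per entry, ~%d entries per ConfigMap\n",
		perEntry, configMapLimit/perEntry)
}
```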

Reference:

  - rancher/rke2#4495
  - k3s/pkg/etcd/etcd.go, lines 1503 to 1516 at commit 36645e7:

```go
type snapshotFile struct {
	Name string `json:"name"`
	// Location contains the full path of the snapshot. For
	// local paths, the location will be prefixed with "file://".
	Location   string         `json:"location,omitempty"`
	Metadata   string         `json:"metadata,omitempty"`
	Message    string         `json:"message,omitempty"`
	NodeName   string         `json:"nodeName,omitempty"`
	CreatedAt  *metav1.Time   `json:"createdAt,omitempty"`
	Size       int64          `json:"size,omitempty"`
	Status     snapshotStatus `json:"status,omitempty"`
	S3         *s3Config      `json:"s3Config,omitempty"`
	Compressed bool           `json:"compressed"`
}
```

## Existing Work

Rancher already has a rke.cattle.io/v1 ETCDSnapshot Custom Resource that contains the same information after it has been imported by the management cluster.

It is unlikely that we would want to use this custom resource in its current package; we may be able to negotiate moving it into a neutral project for use by both projects.

## Decision

  1. Instead of populating snapshots into a ConfigMap using the JSON serialization of the private snapshotFile type, K3s will manage creation of a new Custom Resource Definition with similar fields (one possible shape is sketched after this list).
  2. Metadata on each snapshot will be stored in a distinct Custom Resource.
  3. The new Custom Resource will be cluster-scoped, as etcd and its snapshots are a cluster-level resource.
  4. Snapshot metadata will also be written alongside snapshot files created on disk and/or uploaded to S3. The metadata files will have the same basename as their corresponding snapshot file.
  5. A hash of the server token will be stored as an annotation on the Custom Resource, and as metadata on snapshots uploaded to S3. When restoring a snapshot, this hash should be compared against the hash of the cluster's current server token to determine whether the server token must be rolled back as part of the restore process (a minimal sketch of this check follows the list).
  6. Downstream consumers of etcd snapshot lists will migrate to watching Custom Resource types, instead of the ConfigMap.
  7. K3s will observe a transition period of three minor versions, during which both the new Custom Resources and the existing ConfigMap will be used.
  8. During the transition period, older snapshot metadata may be removed from the ConfigMap while those snapshots still exist and are referenced by new Custom Resources, if the ConfigMap exceeds a preset size or key count limit.
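
As a rough illustration of decision items 1 through 3, a cluster-scoped Custom Resource could take a shape along the following lines. The type name, API group and version, field layout, and annotation key are illustrative assumptions only; the ADR does not fix the final API.

```go
// Hypothetical sketch of a cluster-scoped Custom Resource that mirrors the
// fields of the private snapshotFile type. All names here are assumptions.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AnnotationTokenHash is a hypothetical annotation key under which the hash
// of the server token would be stored (decision item 5).
const AnnotationTokenHash = "etcd.k3s.cattle.io/snapshot-token-hash"

// ETCDSnapshotFile records the metadata for a single etcd snapshot. It is
// cluster-scoped, since etcd and its snapshots are cluster-level resources.
type ETCDSnapshotFile struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ETCDSnapshotSpec   `json:"spec,omitempty"`
	Status ETCDSnapshotStatus `json:"status,omitempty"`
}

type ETCDSnapshotSpec struct {
	SnapshotName string `json:"snapshotName"`
	// Location is the full path or URL of the snapshot; local paths are
	// prefixed with "file://".
	Location string `json:"location"`
	NodeName string `json:"nodeName,omitempty"`
	S3       *S3    `json:"s3,omitempty"`
}

type ETCDSnapshotStatus struct {
	CreationTime *metav1.Time `json:"creationTime,omitempty"`
	Size         *int64       `json:"size,omitempty"`
	ReadyToUse   *bool        `json:"readyToUse,omitempty"`
	Error        *string      `json:"error,omitempty"`
}

type S3 struct {
	Endpoint string `json:"endpoint,omitempty"`
	Bucket   string `json:"bucket,omitempty"`
	Prefix   string `json:"prefix,omitempty"`
}
```

The token-hash check from decision item 5 could then look roughly like the sketch below. The use of SHA-256 and the helper names are also assumptions; the ADR does not specify a hash algorithm.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// tokenHash returns a hex-encoded SHA-256 digest of a server token.
func tokenHash(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

// needsTokenRollback reports whether the token hash recorded with a snapshot
// differs from the hash of the token currently in use, in which case the
// server token must be rolled back as part of the restore.
func needsTokenRollback(snapshotTokenHash, currentToken string) bool {
	return snapshotTokenHash != tokenHash(currentToken)
}

func main() {
	stored := tokenHash("original-token") // as read from the CR annotation or S3 metadata
	fmt.Println(needsTokenRollback(stored, "rotated-token"))  // true: token changed since the snapshot
	fmt.Println(needsTokenRollback(stored, "original-token")) // false: tokens match, no rollback needed
}
```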

## Consequences

  - Snapshot metadata will no longer be lost when the number of snapshots exceeds what can be stored in the ConfigMap.
  - There will be some additional complexity in managing the new Custom Resource, and in working with other projects to migrate to using it.