This repository has been archived by the owner on Mar 28, 2020. It is now read-only.
So when the etcd-operator detects that a cluster has lost more than half of its members, it cannot just wait for them to come back alive (since they have no persistent data and no stable peer addresses). The only thing it can do is drop the current cluster and create a new one, restoring its data from a backup (if configured to do so). With this enabled you'll end up with an etcd cluster that, under the hood, has been restored from old data. So you have to be wise and decide whether this is a correct approach, depending on what you're storing inside your etcd kv store.
Auto-restore hides complexity and details. Sometimes users won't be comfortable with it. An initial idea is to provide restore hooks.
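To make the restore-hooks idea concrete, here is a minimal Go sketch of what such a hook could look like: before the operator drops a quorum-less cluster and auto-restores it from a backup, it would consult a user-supplied hook and proceed only on approval. The `RestoreHook` interface, `maybeRestore`, and all names below are hypothetical illustrations, not part of etcd-operator's actual API.

```go
package main

import "fmt"

// RestoreHook is a hypothetical interface the operator could consult
// before performing an automatic restore from backup.
type RestoreHook interface {
	// ApproveRestore receives the cluster name and the backup that would
	// be used; returning false aborts the automatic restore.
	ApproveRestore(cluster, backup string) bool
}

// manualApproval is a trivial hook that always declines, forcing a
// human-in-the-loop restore instead of a silent automatic one.
type manualApproval struct{}

func (manualApproval) ApproveRestore(cluster, backup string) bool { return false }

// maybeRestore sketches the decision point: restore only if the hook approves.
func maybeRestore(h RestoreHook, cluster, backup string) string {
	if h.ApproveRestore(cluster, backup) {
		return fmt.Sprintf("restoring %s from %s", cluster, backup)
	}
	return fmt.Sprintf("restore of %s declined by hook", cluster)
}

func main() {
	fmt.Println(maybeRestore(manualApproval{}, "etcd-main", "backup-example"))
}
```

A hook like this would let users who are not comfortable with silent auto-restore keep the operator's recovery logic while retaining the final say over when stale data is brought back.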
From https://sgotti.me/post/kubernetes-persistent-etcd/