Auto unseal was developed to aid in reducing the operational complexity of
keeping the unseal key secure. This feature delegates the responsibility of
securing the unseal key from users to a trusted device or service. At startup
Vault will connect to the device or service implementing the seal and ask it
to decrypt the root key Vault read from storage.
For a list of examples and supported providers, please see the
[seal documentation](/vault/docs/configuration/seal).
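As a minimal sketch, an auto unseal configuration backed by AWS KMS is a single `seal` stanza in the server configuration (the region and key alias below are illustrative placeholders, not values from this document):

```hcl
# Example auto unseal stanza using AWS KMS.
# The region and key alias are hypothetical; substitute your own.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}
```

At startup, Vault uses the configured seal to decrypt the root key instead of prompting operators for unseal key shares.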
When DR replication is enabled in Vault Enterprise, [Performance Standby](/vault/docs/enterprise/performance-standby) nodes on the DR cluster will seal themselves, so they must be restarted to be unsealed.
<Warning title="Recovery keys cannot decrypt the root key">
Recovery keys cannot decrypt the root key and thus are not sufficient to unseal
Vault if the auto unseal mechanism isn't working. They are purely an authorization mechanism.
Using auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism.
This means that if the seal mechanism (such as the Cloud KMS key) becomes unavailable,
or deleted before the seal is migrated, then there is no ability to recover
access to the Vault cluster until the mechanism is available again. **If the seal
mechanism is permanently deleted, the data stored in Vault is unrecoverable.**

DR secondaries can have a seal configured independently of the primary, and,
when properly configured, guard against *some* of this risk. Unreplicated items
such as local mounts could still be lost.
</Warning>
## Recovery key
The API prefix for this operation is `/sys/rekey-recovery-key` rather than
`/sys/rekey`.
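For example, rekeying the recovery key from the CLI might look like the following sketch; `-target=recovery` directs `vault operator rekey` at the recovery key rather than the unseal key (the share and threshold counts are illustrative):

```shell
# Begin a rekey of the recovery key shares (not the root key)
vault operator rekey -target=recovery -init -key-shares=5 -key-threshold=3

# Each current recovery key holder then supplies their share
vault operator rekey -target=recovery
```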
## Seal migration
The seal migration process cannot be performed without downtime, and due to the
technical underpinnings of the seal implementations, the process requires that
you briefly take the whole cluster down. While experiencing some downtime may
be unavoidable, we believe that switching seals is a rare event and that the
inconvenience of the downtime is an acceptable trade-off.

Always take a backup of your data before migrating the seal so that it can be
restored if something goes wrong.
~> **NOTE**: The seal migration operation requires both old and new seals to be
available during the migration. For example, migration from auto unseal to Shamir
seal will require that the service backing the auto unseal is accessible during
the migration.
~> **NOTE**: Seal migration from auto unseal to auto unseal of the same type is
supported since Vault 1.6.0. However, there is a current limitation that
prevents migrating from AWSKMS to AWSKMS; all other seal migrations of the same
type are supported. Seal migration from one auto unseal type (AWS KMS) to a
different auto unseal type (HSM, Azure KMS, etc.) is also supported on older
versions.
### Migration post Vault 1.5.1
1. Seal migration is now completed. Take down the old active node, update its
configuration to use the new seal blocks (completely unaware of the old seal type)
, and bring it back up. It will be auto-unsealed if the new seal is one of the
auto seals, or will require unseal keys if the new seal is Shamir.
1. At this point, configuration files of all the nodes can be updated to only have the
new seal information. Standby nodes can be restarted right away and the active
node restarted last.
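After the cluster is back up, you can sanity-check the result. As a sketch (assuming the `vault` CLI is on your `PATH` and `VAULT_ADDR` points at the cluster):

```shell
# Confirm the node is unsealed and reports the expected seal type
vault status

# The same information is available from the unauthenticated HTTP API
curl "$VAULT_ADDR/v1/sys/seal-status"
```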
#### Migration from auto unseal to Shamir
To migrate from auto unseal to Shamir keys, take your server cluster offline
and update the [seal configuration](/vault/docs/configuration/seal) and add `disabled
= "true"` to the seal block. This allows the migration to use this information
to decrypt the key but will not unseal Vault. When you bring your server back
up, run the unseal process with the `-migrate` flag; the recovery keys
will be migrated to be used as unseal keys.
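As a sketch, the disabled seal block might look like the following (the `awskms` type and key alias are hypothetical; reuse your existing seal stanza and only add the `disabled` flag):

```hcl
# Existing auto unseal stanza, marked disabled so it is used only to
# decrypt the root key during the migration (key alias is hypothetical)
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
  disabled   = "true"
}
```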
~> **NOTE**: Migration between the same auto unseal types is supported in Vault
1.6.0 and higher. For these pre-1.5.1 steps, it is only possible to migrate from
one type of auto unseal to a different type (e.g. Transit -> AWSKMS).
To migrate from auto unseal to a different auto unseal configuration, take your
server cluster offline and update the existing [seal
configuration](/vault/docs/configuration/seal) and add `disabled = "true"` to the seal
block. Then add another seal block to describe the new seal.
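During such a migration the configuration carries both stanzas at once. A sketch (both seal types and all parameter values below are illustrative; substitute your actual old and new seals):

```hcl
# Old seal, disabled so it is consulted only to decrypt the existing root key
seal "transit" {
  address    = "https://transit.example.com:8200"
  key_name   = "unseal-key"
  mount_path = "transit/"
  disabled   = "true"
}

# New seal that Vault will migrate to
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}
```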
When a quorum of nodes is back up, Raft will elect a leader, and the leader
node will perform the migration. The migrated information will be replicated to
all other cluster peers, and when a peer eventually becomes the leader, the
migration will not run again on that node.
## Seal high availability <EnterpriseAlert inline="true" />
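As a hedged sketch of what a highly available seal configuration can look like (assuming a Vault Enterprise build with multi-seal support; the parameter names should be verified against the seal documentation for your version, and all key identifiers below are hypothetical):

```hcl
# Requires Vault Enterprise with multiple seals enabled
enable_multiseal = true

# Two auto seals; the lower priority value is preferred
seal "awskms" {
  name       = "aws-primary"
  priority   = "1"
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}

seal "azurekeyvault" {
  name       = "azure-backup"
  priority   = "2"
  vault_name = "hc-vault"
  key_name   = "vault-unseal"
}
```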
[BACKPORT] Manual cherry-pick of failed release/1.14.x backport PRs #26799