chore: remove duplicate word in comments #3377

Merged 1 commit on Aug 30, 2022

2 changes: 1 addition & 1 deletion conf/server/server_full.conf
@@ -600,7 +600,7 @@ plugins {
# be the path to a file that must contain one or more certificates
# representing the upstream root certificates and the file at
# cert_file_path contains one or more certificates necessary to chain up
-# the the root certificates in bundle_file_path (where the first
+# the root certificates in bundle_file_path (where the first
# certificate in cert_file_path is the upstream CA certificate).
# bundle_file_path = ""
}
2 changes: 1 addition & 1 deletion doc/plugin_server_upstreamauthority_disk.md
@@ -18,7 +18,7 @@ The plugin accepts the following configuration options:
| ----------------| ---------------------------------------------------- |
| cert_file_path | If SPIRE is using a self-signed CA, `cert_file_path` should specify the path to a single PEM encoded certificate representing the upstream CA certificate. If not self-signed, `cert_file_path` should specify the path to a file that must contain one or more certificates necessary to establish a valid certificate chain up the root certificates defined in `bundle_file_path`. |
| key_file_path | Path to the "upstream" CA key file. Key files must contain a single PEM encoded key. The supported key types are EC (ASN.1 or PKCS8 encoded) or RSA (PKCS1 or PKCS8 encoded).|
-| bundle_file_path| If SPIRE is using a self-signed CA, `bundle_file_path` can be left unset. If not self-signed, then `bundle_file_path` should be the path to a file that must contain one or more certificates representing the upstream root certificates and the file at cert_file_path contains one or more certificates necessary to chain up the the root certificates in bundle_file_path (where the first certificate in cert_file_path is the upstream CA certificate). |
+| bundle_file_path| If SPIRE is using a self-signed CA, `bundle_file_path` can be left unset. If not self-signed, then `bundle_file_path` should be the path to a file that must contain one or more certificates representing the upstream root certificates and the file at cert_file_path contains one or more certificates necessary to chain up the root certificates in bundle_file_path (where the first certificate in cert_file_path is the upstream CA certificate). |

The `disk` plugin is able to function as either a root CA, or join an existing PKI.

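For reference (not part of this diff), here is a minimal sketch of how the three options described above might be combined in the server configuration when the upstream CA is not self-signed, i.e. when chaining up to an existing PKI. The file paths are placeholders, and the exact block layout should be checked against `conf/server/server_full.conf`:

```hcl
plugins {
    UpstreamAuthority "disk" {
        plugin_data {
            # Upstream CA certificate first, followed by any certificates
            # needed to chain up to the roots in bundle_file_path.
            cert_file_path = "/opt/spire/conf/upstream_ca.crt"

            # Private key for the upstream CA certificate (single PEM key,
            # EC or RSA).
            key_file_path = "/opt/spire/conf/upstream_ca.key"

            # Upstream root certificates; can be left unset when the
            # upstream CA is self-signed.
            bundle_file_path = "/opt/spire/conf/upstream_roots.pem"
        }
    }
}
```

With a self-signed upstream CA, `bundle_file_path` would simply be omitted and `cert_file_path` would point at the single self-signed certificate.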
2 changes: 1 addition & 1 deletion pkg/common/catalog/constraints.go
@@ -25,7 +25,7 @@ type Constraints struct {
// zero, there is no lower bound (i.e. the plugin type is optional).
Min int

-// Max is the the maximum number of plugins required of a specific type. If
+// Max is the maximum number of plugins required of a specific type. If
// zero, there is no upper bound.
Max int
}
2 changes: 1 addition & 1 deletion pkg/server/ca/manager_test.go
@@ -129,7 +129,7 @@ func (s *ManagerSuite) TestPersistenceFailsIfJournalLost() {
x509CA, jwtKey := s.currentX509CA(), s.currentJWTKey()

// wipe the journal, reinitialize, and make sure the keys differ. this
-// simulates the the key manager having dangling keys.
+// simulates the key manager having dangling keys.
s.wipeJournal()
s.initSelfSignedManager()
s.requireX509CANotEqual(x509CA, s.currentX509CA())
2 changes: 1 addition & 1 deletion pkg/server/plugin/notifier/gcsbundle/gcsbundle.go
@@ -156,7 +156,7 @@ func (p *Plugin) updateBundleObject(ctx context.Context, c *pluginConfig) (err e
}
p.log.Debug("Bundle object retrieved", telemetry.Generation, generation)

-// Load bundle data from the the identity provider. The bundle has to
+// Load bundle data from the identity provider. The bundle has to
// be loaded after fetching the generation so we can properly detect
// and correct a race updating the bundle (i.e. read-modify-write
// semantics).
4 changes: 2 additions & 2 deletions pkg/server/plugin/notifier/k8sbundle/k8sbundle.go
@@ -573,7 +573,7 @@ func (c mutatingWebhookClient) CreatePatch(ctx context.Context, obj runtime.Obje
}
patch.Webhooks = make([]admissionv1.MutatingWebhook, len(mutatingWebhook.Webhooks))

-// Step through all the the webhooks in the MutatingWebhookConfiguration
+// Step through all the webhooks in the MutatingWebhookConfiguration
for i := range patch.Webhooks {
patch.Webhooks[i].AdmissionReviewVersions = mutatingWebhook.Webhooks[i].AdmissionReviewVersions
patch.Webhooks[i].ClientConfig.CABundle = []byte(bundleData(resp.Bundle))
@@ -644,7 +644,7 @@ func (c validatingWebhookClient) CreatePatch(ctx context.Context, obj runtime.Ob
}
patch.Webhooks = make([]admissionv1.ValidatingWebhook, len(validatingWebhook.Webhooks))

-// Step through all the the webhooks in the ValidatingWebhookConfiguration
+// Step through all the webhooks in the ValidatingWebhookConfiguration
for i := range patch.Webhooks {
patch.Webhooks[i].AdmissionReviewVersions = validatingWebhook.Webhooks[i].AdmissionReviewVersions
patch.Webhooks[i].ClientConfig.CABundle = []byte(bundleData(resp.Bundle))
4 changes: 2 additions & 2 deletions support/k8s/k8s-workload-registrar/mode-crd/README.md
@@ -7,7 +7,7 @@ This enables auto and manual generation of SPIFFE IDs from with Kubenretes and t

There are mutiple modes of the Kubernetes Workload Registrar. The benefits of the CRD mode when compared to other modes are:

-* **`kubectl` integration**: Using a CRD, SPIRE is fully intergrated with Kubernetes. You can view and create SPIFFE IDs directly using `kubectl`, without having to shell into the the SPIRE server.
+* **`kubectl` integration**: Using a CRD, SPIRE is fully intergrated with Kubernetes. You can view and create SPIFFE IDs directly using `kubectl`, without having to shell into the SPIRE server.
* **Fully event-driven design**: Using the Kubernetes CRD system, the CRD mode Kubernetes Workload Registrar is fully event-driven to minimze resource usage.
* **Standards-based solution**: CRDs are the standard way to extend Kubernetes, with many resources online, such as [kubebuilder](https://book.kubebuilder.io/), discussing the approach. The CRD Kubernetes Worklaod Registrar follows all standards and best practices to ensure it is maintainable.

@@ -424,7 +424,7 @@ Entries can be created manually and automatically. For automatic generation, ent

### Finalizers

-[Finalizers](https://book.kubebuilder.io/reference/using-finalizers.html) are added to all SpiffeID resources, manual or automatically created. This ensures that entries on the SPIRE Server are properly cleaned up when a SpiffeID resource is deleted by blocking deletion of the resource until the SPIRE Server entry is first deleted. This important for the scenario where the the Kubernetes Workload Registrar is down when a SpiffeID resource is deleted.
+[Finalizers](https://book.kubebuilder.io/reference/using-finalizers.html) are added to all SpiffeID resources, manual or automatically created. This ensures that entries on the SPIRE Server are properly cleaned up when a SpiffeID resource is deleted by blocking deletion of the resource until the SPIRE Server entry is first deleted. This important for the scenario where the Kubernetes Workload Registrar is down when a SpiffeID resource is deleted.

This has the potential side effect of blocking deletion of a namespace until all the SpiffeID resources in that namespace are first deleted.
