codespell corrections (kedacore#479)
robin-wayve authored Jul 5, 2021
1 parent 7036d62 commit fba922a
Showing 37 changed files with 48 additions and 48 deletions.
4 changes: 2 additions & 2 deletions content/docs/1.4/concepts/external-scalers.md
@@ -30,7 +30,7 @@ The `Scaler` interface defines 4 methods:
- `IsActive` is called on `pollingInterval` defined in the ScaledObject/ScaledJob CRDs and scaling to 1 happens if this returns true
- `Close` is called to allow the scaler to clean up connections or other resources.
- `GetMetricSpec` returns the target value for the HPA definition for the scaler. For more details refer to [Implementing `GetMetricSpec`](#5-implementing-getmetricspec)
- - `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)
+ - `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)

> Refer to the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for how HPA calculates replicaCount based on metric value and target value.
@@ -385,7 +385,7 @@ server.addService(externalScalerProto.externalscaler.ExternalScaler.service, {

#### 5. Implementing `GetMetrics`

- `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.
+ `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.

{{< collapsible "Golang" >}}
```golang
```
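The four `Scaler` methods named in the hunk above can be sketched as a Go interface. This is a toy model for illustration only — the type and method signatures here are assumptions, not KEDA's actual gRPC contract — using the document's earthquake example:

```golang
package main

import "fmt"

// Illustrative stand-ins for the metric types exchanged with the HPA;
// the real KEDA types differ.
type MetricSpec struct {
	MetricName  string
	TargetValue int64
}

type MetricValue struct {
	MetricName  string
	MetricValue int64
}

// Scaler mirrors the four methods described in the diff above.
type Scaler interface {
	IsActive() (bool, error)
	Close() error
	GetMetricSpec() []MetricSpec
	GetMetrics(metricName string) ([]MetricValue, error)
}

// earthquakeScaler is a toy implementation of the earthquake example.
type earthquakeScaler struct {
	threshold int64 // target value handed to the HPA
	current   int64 // latest observed metric value
}

func (s *earthquakeScaler) IsActive() (bool, error) { return s.current >= s.threshold, nil }
func (s *earthquakeScaler) Close() error            { return nil }
func (s *earthquakeScaler) GetMetricSpec() []MetricSpec {
	return []MetricSpec{{MetricName: "earthquakeThreshold", TargetValue: s.threshold}}
}
func (s *earthquakeScaler) GetMetrics(metricName string) ([]MetricValue, error) {
	return []MetricValue{{MetricName: metricName, MetricValue: s.current}}, nil
}

func main() {
	var s Scaler = &earthquakeScaler{threshold: 2, current: 5}
	active, _ := s.IsActive()
	metrics, _ := s.GetMetrics("earthquakeThreshold")
	fmt.Println(active, metrics[0].MetricValue) // active triggers the 0→1 scale-up
}
```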
2 changes: 1 addition & 1 deletion content/docs/1.4/scalers/azure-storage-blob.md
@@ -19,7 +19,7 @@ triggers:
blobContainerName: functions-blob # Required: Name of Azure Blob Storage container
blobCount: '5' # Optional. Amount of blobs to scale out on. Default: 5 blobs
connection: STORAGE_CONNECTIONSTRING_ENV_NAME # Optional if TriggerAuthentication defined with pod identity or connection string authentication.
- blobPrefix: # Optional. Prefix for the Blob. Use this to specifiy sub path for the blobs if required. Default : ""
+ blobPrefix: # Optional. Prefix for the Blob. Use this to specify sub path for the blobs if required. Default : ""
blobDelimiter: # Optional. Delimiter for identifying the blob Prefix. Default: "/"
```
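The `blobPrefix`/`blobDelimiter` pair scopes the blob count to one "virtual directory" level of Azure's flat blob namespace. The helper below is a simplified model of that semantics over plain strings — an assumption for illustration, not the Azure Storage API:

```golang
package main

import (
	"fmt"
	"strings"
)

// countBlobs counts blob names under prefix, excluding names that sit
// below a further delimiter (i.e. inside a deeper virtual directory).
func countBlobs(names []string, prefix, delimiter string) int {
	count := 0
	for _, n := range names {
		if !strings.HasPrefix(n, prefix) {
			continue
		}
		rest := strings.TrimPrefix(n, prefix)
		// A delimiter in the remainder means the blob is nested deeper.
		if delimiter != "" && strings.Contains(rest, delimiter) {
			continue
		}
		count++
	}
	return count
}

func main() {
	names := []string{"in/a.json", "in/b.json", "in/archive/c.json", "out/d.json"}
	fmt.Println(countBlobs(names, "in/", "/")) // 2: a.json and b.json only
}
```

With the default empty prefix and `/` delimiter, only top-level blobs are counted toward `blobCount`.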
2 changes: 1 addition & 1 deletion content/docs/1.5/concepts/authentication.md
@@ -159,7 +159,7 @@ secretTargetRef: # Optional.

### Hashicorp Vault secret(s)

- You can pull one or more Hashicorp Vault secrets into the trigger by defining the autentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
+ You can pull one or more Hashicorp Vault secrets into the trigger by defining the authentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
`credential` defines the Hashicorp Vault credentials depending on the authentication method, for kubernetes you should provide path to service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) and for token auth method provide the token.
`secrets` list defines the mapping between the path and the key of the secret in Vault to the parameter.

4 changes: 2 additions & 2 deletions content/docs/1.5/concepts/external-scalers.md
@@ -30,7 +30,7 @@ The `Scaler` interface defines 4 methods:
- `IsActive` is called on `pollingInterval` defined in the ScaledObject/ScaledJob CRDs and scaling to 1 happens if this returns true
- `Close` is called to allow the scaler to clean up connections or other resources.
- `GetMetricSpec` returns the target value for the HPA definition for the scaler. For more details refer to [Implementing `GetMetricSpec`](#5-implementing-getmetricspec)
- - `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)
+ - `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)

> Refer to the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for how HPA calculates replicaCount based on metric value and target value.
@@ -385,7 +385,7 @@ server.addService(externalScalerProto.externalscaler.ExternalScaler.service, {

#### 5. Implementing `GetMetrics`

- `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.
+ `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.

{{< collapsible "Golang" >}}
```golang
```
2 changes: 1 addition & 1 deletion content/docs/1.5/scalers/azure-storage-blob.md
@@ -19,7 +19,7 @@ triggers:
blobContainerName: functions-blob # Required: Name of Azure Blob Storage container
blobCount: '5' # Optional. Amount of blobs to scale out on. Default: 5 blobs
connection: STORAGE_CONNECTIONSTRING_ENV_NAME # Optional if TriggerAuthentication defined with pod identity or connection string authentication.
- blobPrefix: # Optional. Prefix for the Blob. Use this to specifiy sub path for the blobs if required. Default : ""
+ blobPrefix: # Optional. Prefix for the Blob. Use this to specify sub path for the blobs if required. Default : ""
blobDelimiter: # Optional. Delimiter for identifying the blob Prefix. Default: "/"
```
2 changes: 1 addition & 1 deletion content/docs/1.5/scalers/rabbitmq-queue.md
@@ -29,7 +29,7 @@ triggers:
- `host`: Value is the name of the environment variable your deployment uses to get the connection string. This is usually resolved from a `Secret V1` or a `ConfigMap V1` collections. `env` and `envFrom` are both supported. The resolved host should follow a format like `amqp://guest:password@localhost:5672/vhost`
- `queueName`: Name of the queue to read message from. Required.
- `queueLength`: Queue length target for HPA. Default is 20. Optional.
- - `includeUnacked`: By default `includeUnacked` is `false` in this case scaler uses AMQP protocol, requires `host` and only counts messages in the queue and ignores unacked messages. If `includeUnacked` is `true` then `host` is not required but `apiHost` is required in this case scaler uses HTTP management API and counts messages in the queue + unacked messages count. Optional. `host` or `apiHost` value comes from authencation trigger.
+ - `includeUnacked`: By default `includeUnacked` is `false` in this case scaler uses AMQP protocol, requires `host` and only counts messages in the queue and ignores unacked messages. If `includeUnacked` is `true` then `host` is not required but `apiHost` is required in this case scaler uses HTTP management API and counts messages in the queue + unacked messages count. Optional. `host` or `apiHost` value comes from authentication trigger.
- `apiHost`: It has similar format as of `host` but for HTTP API endpoint, like https://guest:password@localhost:443/vhostname.

Note `host` and `apiHost` both have an optional vhost name after the host slash which will be used to scope API request.
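The `includeUnacked` rule described in this hunk — AMQP mode needs `host`, HTTP management API mode needs `apiHost` — can be captured in a small validation helper. This is a hypothetical sketch of the rule as stated, not code from the KEDA scaler:

```golang
package main

import (
	"errors"
	"fmt"
)

// validateRabbitMQConfig mirrors the documented requirement:
// includeUnacked=false -> AMQP protocol, host required;
// includeUnacked=true  -> HTTP management API, apiHost required.
func validateRabbitMQConfig(includeUnacked bool, host, apiHost string) error {
	if includeUnacked {
		if apiHost == "" {
			return errors.New("includeUnacked is true: apiHost is required")
		}
		return nil
	}
	if host == "" {
		return errors.New("includeUnacked is false: host is required")
	}
	return nil
}

func main() {
	err := validateRabbitMQConfig(true, "", "https://guest:password@localhost:443/vhostname")
	fmt.Println(err == nil) // true: apiHost alone satisfies HTTP API mode
}
```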
2 changes: 1 addition & 1 deletion content/docs/2.0/concepts/authentication.md
@@ -159,7 +159,7 @@ secretTargetRef: # Optional.

### Hashicorp Vault secret(s)

- You can pull one or more Hashicorp Vault secrets into the trigger by defining the autentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
+ You can pull one or more Hashicorp Vault secrets into the trigger by defining the authentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
`credential` defines the Hashicorp Vault credentials depending on the authentication method, for kubernetes you should provide path to service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) and for token auth method provide the token.
`secrets` list defines the mapping between the path and the key of the secret in Vault to the parameter.

4 changes: 2 additions & 2 deletions content/docs/2.0/concepts/external-scalers.md
@@ -36,7 +36,7 @@ The `Scaler` interface defines 4 methods:
- `IsActive` is called on `pollingInterval` defined in the ScaledObject/ScaledJob CRDs and scaling to 1 happens if this returns true
- `Close` is called to allow the scaler to clean up connections or other resources.
- `GetMetricSpec` returns the target value for the HPA definition for the scaler. For more details refer to [Implementing `GetMetricSpec`](#5-implementing-getmetricspec)
- - `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)
+ - `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)

> Refer to the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for how HPA calculates `replicaCount` based on metric value and target value. KEDA uses the metric target type `AverageValue` for external metrics. This will cause the metric value returned by the external scaler to be divided by the number of replicas.
@@ -503,7 +503,7 @@ server.addService(externalScalerProto.externalscaler.ExternalScaler.service, {

#### 6. Implementing `GetMetrics`

- `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.
+ `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.

{{< collapsible "Golang" >}}
```golang
```
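The note in this file about the `AverageValue` target type has a concrete arithmetic consequence: since the externally reported metric is divided by the replica count before comparison with the target, the HPA converges on roughly `ceil(metricValue / target)` replicas. A simplified model (ignoring HPA tolerance and stabilization windows, which the real controller applies):

```golang
package main

import (
	"fmt"
	"math"
)

// desiredReplicas models AverageValue scaling: the total metric
// reported by the external scaler, divided across replicas, should
// not exceed the target, so the steady state is ceil(value/target).
func desiredReplicas(metricValue, target float64) int {
	return int(math.Ceil(metricValue / target))
}

func main() {
	// A scaler reporting 42 against a target of 10 settles at 5 replicas:
	// 42/5 = 8.4 per replica, under the target; 42/4 = 10.5 would exceed it.
	fmt.Println(desiredReplicas(42, 10)) // 5
}
```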
2 changes: 1 addition & 1 deletion content/docs/2.0/concepts/scaling-deployments.md
@@ -75,7 +75,7 @@ spec:
The reference to the resource this ScaledObject is configured for. This is the resource KEDA will scale up/down and setup an HPA for, based on the triggers defined in `triggers:`.

- To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convetion, ie. `{api}/{version}`) and `kind` need to be specified.
+ To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convention, ie. `{api}/{version}`) and `kind` need to be specified.

`envSourceContainerName` is an optional property that specifies the name of container in the target resource, from which KEDA should try to get environment properties holding secrets etc. If it is not defined it, KEDA will try to get environment properties from the first Container, ie. from `.spec.template.spec.containers[0]`.

2 changes: 1 addition & 1 deletion content/docs/2.0/scalers/azure-storage-blob.md
@@ -28,7 +28,7 @@ triggers:
- `blobContainerName` - Name of container in an Azure Storage account
- `blobCount` - Average target value to trigger scaling actions. (default: 5)
- `connectionFromEnv` - Name of the environment variable your deployment uses to get the connection string.
- - `blobPrefix` - Prefix for the Blob. Use this to specifiy sub path for the blobs if required. (default: `""`)
+ - `blobPrefix` - Prefix for the Blob. Use this to specify sub path for the blobs if required. (default: `""`)
- `blobDelimiter` - Delimiter for identifying the blob prefix. (default: `/`)

### Authentication Parameters
2 changes: 1 addition & 1 deletion content/docs/2.1/concepts/authentication.md
@@ -187,7 +187,7 @@ secretTargetRef: # Optional.

### Hashicorp Vault secret(s)

- You can pull one or more Hashicorp Vault secrets into the trigger by defining the autentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
+ You can pull one or more Hashicorp Vault secrets into the trigger by defining the authentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
`credential` defines the Hashicorp Vault credentials depending on the authentication method, for kubernetes you should provide path to service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) and for token auth method provide the token.
`secrets` list defines the mapping between the path and the key of the secret in Vault to the parameter.

4 changes: 2 additions & 2 deletions content/docs/2.1/concepts/external-scalers.md
@@ -36,7 +36,7 @@ The `Scaler` interface defines 4 methods:
- `IsActive` is called on `pollingInterval` defined in the ScaledObject/ScaledJob CRDs and scaling to 1 happens if this returns true
- `Close` is called to allow the scaler to clean up connections or other resources.
- `GetMetricSpec` returns the target value for the HPA definition for the scaler. For more details refer to [Implementing `GetMetricSpec`](#5-implementing-getmetricspec)
- - `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)
+ - `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)

> Refer to the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for how HPA calculates `replicaCount` based on metric value and target value. KEDA uses the metric target type `AverageValue` for external metrics. This will cause the metric value returned by the external scaler to be divided by the number of replicas.
@@ -503,7 +503,7 @@ server.addService(externalScalerProto.externalscaler.ExternalScaler.service, {

#### 6. Implementing `GetMetrics`

- `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.
+ `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.

{{< collapsible "Golang" >}}
```golang
```
2 changes: 1 addition & 1 deletion content/docs/2.1/concepts/scaling-deployments.md
@@ -75,7 +75,7 @@ spec:
The reference to the resource this ScaledObject is configured for. This is the resource KEDA will scale up/down and setup an HPA for, based on the triggers defined in `triggers:`.

- To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convetion, ie. `{api}/{version}`) and `kind` need to be specified.
+ To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convention, ie. `{api}/{version}`) and `kind` need to be specified.

`envSourceContainerName` is an optional property that specifies the name of container in the target resource, from which KEDA should try to get environment properties holding secrets etc. If it is not defined, KEDA will try to get environment properties from the first Container, ie. from `.spec.template.spec.containers[0]`.

4 changes: 2 additions & 2 deletions content/docs/2.1/scalers/aws-cloudwatch.md
@@ -17,9 +17,9 @@ triggers:
metadata:
# Required: namespace
namespace: AWS/SQS
- # Required: Dimension Name - Supports specifying multiple dimension names by using ";" as a seperator i.e. dimensionName: QueueName;QueueName
+ # Required: Dimension Name - Supports specifying multiple dimension names by using ";" as a separator i.e. dimensionName: QueueName;QueueName
dimensionName: QueueName
- # Required: Dimension Value - Supports specifying multiple dimension values by using ";" as a seperator i.e. dimensionValue: queue1;queue2
+ # Required: Dimension Value - Supports specifying multiple dimension values by using ";" as a separator i.e. dimensionValue: queue1;queue2
dimensionValue: keda
metricName: ApproximateNumberOfMessagesVisible
targetMetricValue: "2"
2 changes: 1 addition & 1 deletion content/docs/2.1/scalers/azure-storage-blob.md
@@ -28,7 +28,7 @@ triggers:
- `blobContainerName` - Name of container in an Azure Storage account
- `blobCount` - Average target value to trigger scaling actions. (default: 5)
- `connectionFromEnv` - Name of the environment variable your deployment uses to get the connection string.
- - `blobPrefix` - Prefix for the Blob. Use this to specifiy sub path for the blobs if required. (default: `""`)
+ - `blobPrefix` - Prefix for the Blob. Use this to specify sub path for the blobs if required. (default: `""`)
- `blobDelimiter` - Delimiter for identifying the blob prefix. (default: `/`)

### Authentication Parameters
2 changes: 1 addition & 1 deletion content/docs/2.2/concepts/authentication.md
@@ -187,7 +187,7 @@ secretTargetRef: # Optional.

### Hashicorp Vault secret(s)

- You can pull one or more Hashicorp Vault secrets into the trigger by defining the autentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
+ You can pull one or more Hashicorp Vault secrets into the trigger by defining the authentication metadata such as Vault `address` and the `authentication` method (token | kubernetes). If you choose kubernetes auth method you should provide `role` and `mount` as well.
`credential` defines the Hashicorp Vault credentials depending on the authentication method, for kubernetes you should provide path to service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) and for token auth method provide the token.
`secrets` list defines the mapping between the path and the key of the secret in Vault to the parameter.

4 changes: 2 additions & 2 deletions content/docs/2.2/concepts/external-scalers.md
@@ -36,7 +36,7 @@ The `Scaler` interface defines 4 methods:
- `IsActive` is called on `pollingInterval` defined in the ScaledObject/ScaledJob CRDs and scaling to 1 happens if this returns true
- `Close` is called to allow the scaler to clean up connections or other resources.
- `GetMetricSpec` returns the target value for the HPA definition for the scaler. For more details refer to [Implementing `GetMetricSpec`](#5-implementing-getmetricspec)
- - `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)
+ - `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`. For more details refer to [Implementing `GetMetrics`](#6-implementing-getmetrics)

> Refer to the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for how HPA calculates `replicaCount` based on metric value and target value. KEDA uses the metric target type `AverageValue` for external metrics. This will cause the metric value returned by the external scaler to be divided by the number of replicas.
@@ -503,7 +503,7 @@ server.addService(externalScalerProto.externalscaler.ExternalScaler.service, {

#### 6. Implementing `GetMetrics`

- `GetMetrics` returns the value of the metric refered to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.
+ `GetMetrics` returns the value of the metric referred to from `GetMetricSpec`, in this example it's `earthquakeThreshold`.

{{< collapsible "Golang" >}}
```golang
```
2 changes: 1 addition & 1 deletion content/docs/2.2/concepts/scaling-deployments.md
Original file line number Diff line number Diff line change
Expand Up @@ -75,7 +75,7 @@ spec:
The reference to the resource this ScaledObject is configured for. This is the resource KEDA will scale up/down and setup an HPA for, based on the triggers defined in `triggers:`.

- To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convetion, ie. `{api}/{version}`) and `kind` need to be specified.
+ To scale Kubernetes Deployments only `name` is needed to be specified, if one wants to scale a different resource such as StatefulSet or Custom Resource (that defines `/scale` subresource), appropriate `apiVersion` (following standard Kubernetes convention, ie. `{api}/{version}`) and `kind` need to be specified.

`envSourceContainerName` is an optional property that specifies the name of container in the target resource, from which KEDA should try to get environment properties holding secrets etc. If it is not defined, KEDA will try to get environment properties from the first Container, ie. from `.spec.template.spec.containers[0]`.
