Add OpenStack metrics scaler doc #405

Closed
wants to merge 25 commits into from
Changes from all commits · 25 commits
42e4bbf
fix: Fix rendering of Kubernetes events overview (#394)
tomkerkhove Mar 5, 2021
c520e78
Update V1 Install docs (#395)
pragnagopa Mar 8, 2021
04aaf60
document new `publishRate` trigger (#388)
rwkarg Mar 9, 2021
5e9f0cc
AWS SQS Scaler: Document additions of NotVisible messages in scaling …
TyBrown Mar 15, 2021
ba1ffeb
docs: Provide overview of required ports to be accessible (#390)
tomkerkhove Mar 15, 2021
4fa6828
Update metrics API to explain support for Quantities (#401)
devjoes Mar 16, 2021
bbb4411
release 2.2 and prepare 2.3 (#402)
zroubalik Mar 18, 2021
8dd5bbd
Modified deploying keda for keda 2.1.0 (#403)
Shubham82 Mar 19, 2021
6bab99c
Prometheus scaler auth (#364)
marpio Mar 24, 2021
c010361
Provide "Migrating our container images to Github Container Registry"…
tomkerkhove Mar 26, 2021
d9cbf21
Kafka: add allowIdleConsumers documentation (#407)
lionelvillard Mar 26, 2021
99aae08
Create Openstack Metric Scaler docs
Rodolfodc Mar 10, 2021
33fd60e
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 25, 2021
0f1f24e
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 25, 2021
011f4b2
improve openstack metric scaler initial description
Rodolfodc Mar 25, 2021
c74ef73
Create Openstack Metric Scaler docs
Rodolfodc Mar 10, 2021
f8094a0
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 25, 2021
ae8643c
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 25, 2021
d33cb1c
improve openstack metric scaler initial description
Rodolfodc Mar 25, 2021
79083ef
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 26, 2021
4e69a63
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 26, 2021
1c99039
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 26, 2021
402330a
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 26, 2021
349054f
Update content/docs/2.2/scalers/openstack-metric.md
Rodolfodc Mar 26, 2021
fb05184
move OpenStack Metric Scaler docs to 2.3
Rodolfodc Mar 26, 2021
2 changes: 1 addition & 1 deletion config.toml
Original file line number Diff line number Diff line change
Expand Up @@ -27,7 +27,7 @@ alpine_js_version = "2.2.1"
favicon = "favicon.png"

[params.versions]
docs = ["2.1", "2.0", "1.5", "1.4"]
docs = ["2.2", "2.1", "2.0", "1.5", "1.4"]

# Site fonts. For more options see https://fonts.google.com.
[[params.fonts]]
Expand Down
71 changes: 71 additions & 0 deletions content/blog/migrating-to-github-container-registry.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,71 @@
+++
title = "Migrating our container images to GitHub Container Registry"
date = 2021-03-26
author = "KEDA Maintainers"
+++

We provide **various ways to [deploy KEDA](https://keda.sh/docs/latest/deploy/) in your cluster** including by using [Helm chart](https://github.com/kedacore/charts), [Operator Hub](https://operatorhub.io/operator/keda) and raw YAML specifications.

These deployment options all rely on the container images that we provide which are available on **[Docker Hub](https://hub.docker.com/u/kedacore), the industry standard for public container images**.

However, we have found that Docker Hub is no longer the best place for our container images, so we are migrating to GitHub Container Registry (Preview).

## Why are we making this change?

### Docker Hub is introducing rate limiting and image retention

Over the past couple of years, Docker Hub has become the industry standard for hosting public container images. Managing all that traffic has become a big burden for Docker, which decided in 2020 to make some changes:

- Anonymous image pulls are being rate limited
- Unused images will no longer be retained

Because we want to ensure that our end-users can use KEDA without any issues, our container images must remain available to anyone without any limitations.

Learn more about these changes in [Docker's FAQ](https://www.docker.com/pricing/resource-consumption-updates) and our issue on [GitHub](https://github.com/kedacore/keda/issues/995).

### Gaining insights on KEDA adoption

As maintainers, **we find it hard to measure the adoption of KEDA** to understand how many end-users are using older versions of KEDA and what the growth is over time.

Docker Hub provides only a rough total pull count per container image; it does not give in-depth details about individual tags or how pulls grow over time.

In GitHub Container Registry, however, **metrics are provided out-of-the-box on a per-tag basis**, allowing us to better understand what our customers are using and to make better-informed decisions about when to stop supporting a given version.

### Bringing our artifacts closer to home

Lastly, we want to **bring our artifacts closer to our home on GitHub**. By using more of the GitHub ecosystem, we believe the integration with our releases and other tooling will only get tighter over time.

## What is changing?

Our container images are being published to [GitHub Container Registry](https://github.com/orgs/kedacore/packages?type=source) for end-users to pull.

Because of this, the names of our container images are changing:

| Component | New Image (GitHub Container Registry) | Legacy Image (Docker Hub) |
| :------------- | :---------------------------------------- | --------------------------------- |
| Metrics Server | `ghcr.io/kedacore/keda-metrics-apiserver` | `kedacore/keda-metrics-apiserver` |
| Operator | `ghcr.io/kedacore/keda` | `kedacore/keda` |
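
If you want a quick sanity check that the new images are available, you can pull them directly; the `2.2.0` tag below is just an example, so use whichever version you need:

```sh
# Pull the KEDA operator and metrics server images from GitHub Container Registry
docker pull ghcr.io/kedacore/keda:2.2.0
docker pull ghcr.io/kedacore/keda-metrics-apiserver:2.2.0
```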

## When is this taking place?

As of v2.2, we have started publishing our new container images to GitHub Container Registry in parallel with Docker Hub.

This allows customers to migrate to our new registry today and consume our artifacts there.

**Once GitHub Container Registry becomes generally available (GA), we will no longer publish new versions to Docker Hub.**

## What is the impact for end-users?

**If you are using one of our deployment options, you are not impacted.**

Since v2.2, we use GitHub Container Registry by default, so you are good to go.

If you are using your own deployment mechanism, then you will have to pull the container images from GitHub Container Registry instead.
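
As a rough sketch, swapping the image references in a self-managed deployment could look like this; the deployment and container names below are assumptions based on the default YAML and may differ in your setup:

```sh
# Point self-managed KEDA deployments at the GitHub Container Registry images
kubectl set image deployment/keda-operator \
  keda-operator=ghcr.io/kedacore/keda:2.2.0 --namespace keda
kubectl set image deployment/keda-metrics-apiserver \
  keda-metrics-apiserver=ghcr.io/kedacore/keda-metrics-apiserver:2.2.0 --namespace keda
```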

## Join the conversation

Do you have questions or remarks? Feel free to join the conversation on [GitHub Discussions](https://github.com/kedacore/keda/discussions/1700).

Thanks for reading, and happy scaling!

KEDA Maintainers.
2 changes: 1 addition & 1 deletion content/docs/1.4/deploy.md
Original file line number Diff line number Diff line change
Expand Up @@ -94,7 +94,7 @@ kubectl apply -f ./
You can also find the same YAML declarations in our `/deploy` directory on our [GitHub repo](https://github.com/kedacore/keda) if you prefer to clone it.

```sh
git clone https://github.com/kedacore/keda && cd keda
git clone https://github.com/kedacore/keda && cd keda && git checkout tags/v1.4.1

kubectl apply -f deploy/crds/keda.k8s.io_scaledobjects_crd.yaml
kubectl apply -f deploy/crds/keda.k8s.io_triggerauthentications_crd.yaml
Expand Down
2 changes: 1 addition & 1 deletion content/docs/1.5/deploy.md
Original file line number Diff line number Diff line change
Expand Up @@ -94,7 +94,7 @@ kubectl apply -f ./
You can also find the same YAML declarations in our `/deploy` directory on our [GitHub repo](https://github.com/kedacore/keda) if you prefer to clone it.

```sh
git clone https://github.com/kedacore/keda && cd keda
git clone https://github.com/kedacore/keda && cd keda && git checkout tags/v1.5.0

kubectl apply -f deploy/crds/keda.k8s.io_scaledobjects_crd.yaml
kubectl apply -f deploy/crds/keda.k8s.io_triggerauthentications_crd.yaml
Expand Down
13 changes: 12 additions & 1 deletion content/docs/2.0/operate/cluster.md
Original file line number Diff line number Diff line change
Expand Up @@ -26,4 +26,15 @@ Here is an overview of all KEDA deployments and the supported replicas:
| Deployment | Support Replicas | Reasoning |
|----------------|-------------------------|-------------------------------|
| Operator | 1 | |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |

## Firewall requirements

KEDA must be accessible inside the cluster to be able to autoscale.

Here is an overview of the ports that need to be accessible for KEDA to work:

| Port | Why? | Remarks |
| ------ | -------------------------------------------- | ---------------------------------------------------- |
| `443` | Used by Kubernetes API server to get metrics | Required for all platforms, except for Google Cloud. |
| `6443` | Used by Kubernetes API server to get metrics | Only required for Google Cloud |
10 changes: 5 additions & 5 deletions content/docs/2.1/deploy.md
Original file line number Diff line number Diff line change
Expand Up @@ -37,7 +37,7 @@ Deploying KEDA with Helm is very simple:

```sh
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
helm install keda kedacore/keda --version 2.1.0 --namespace keda
```

### Uninstall
Expand All @@ -48,10 +48,10 @@ If you want to remove KEDA from a cluster you can run one of the following:

```sh
helm uninstall -n keda keda
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_scaledobjects.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_scaledjobs.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_triggerauthentications.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_clustertriggerauthentications.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/v2.1.0/config/crd/bases/keda.sh_scaledobjects.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/v2.1.0/config/crd/bases/keda.sh_scaledjobs.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/v2.1.0/config/crd/bases/keda.sh_triggerauthentications.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/v2.1.0/config/crd/bases/keda.sh_clustertriggerauthentications.yaml
```

## Deploying with Operator Hub {#operatorhub}
Expand Down
33 changes: 22 additions & 11 deletions content/docs/2.1/operate/cluster.md
Original file line number Diff line number Diff line change
Expand Up @@ -4,17 +4,6 @@ description = "Guidance & requirements for running KEDA in your cluster"
weight = 100
+++

## High Availability

KEDA does not provide support for high-availability due to upstream limitations.

Here is an overview of all KEDA deployments and the supported replicas:

| Deployment | Support Replicas | Reasoning |
|----------------|-------------------------|-------------------------------|
| Operator | 1 | |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |

## Cluster capacity requirements

The KEDA runtime requires the following resources in a production-ready setup:
Expand All @@ -28,6 +17,28 @@ These are used by default when deploying through YAML.

> 💡 For more info on CPU and Memory resource units and their meaning, see [this](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes) link.

## Firewall requirements

KEDA must be accessible inside the cluster to be able to autoscale.

Here is an overview of the ports that need to be accessible for KEDA to work:

| Port | Why? | Remarks |
| ------ | -------------------------------------------- | ---------------------------------------------------- |
| `443` | Used by Kubernetes API server to get metrics | Required for all platforms, except for Google Cloud. |
| `6443` | Used by Kubernetes API server to get metrics | Only required for Google Cloud |

## High Availability

KEDA does not support high availability due to upstream limitations.

Here is an overview of all KEDA deployments and the supported replicas:

| Deployment | Support Replicas | Reasoning |
|----------------|-------------------------|-------------------------------|
| Operator | 1 | |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |

## HTTP Timeouts

Some scalers issue HTTP requests to external servers (i.e. cloud services). Each applicable scaler uses its own dedicated HTTP client with its own connection pool, and by default each client is set to time out any HTTP request after 3 seconds.
Expand Down
2 changes: 1 addition & 1 deletion content/docs/2.2/concepts/authentication.md
Original file line number Diff line number Diff line change
Expand Up @@ -19,7 +19,7 @@ Some metadata parameters will not allow resolving from a literal value, and will

### Example

If using the [RabbitMQ scaler](https://keda.sh/docs/2.1/scalers/rabbitmq-queue/), the `host` parameter may include passwords so is required to be a reference. You can create a secret with the value of the `host` string, reference that secret in the deployment, and map it to the `ScaledObject` metadata parameter like below:
If using the [RabbitMQ scaler](https://keda.sh/docs/2.2/scalers/rabbitmq-queue/), the `host` parameter may include passwords, so it is required to be a reference. You can create a secret with the value of the `host` string, reference that secret in the deployment, and map it to the `ScaledObject` metadata parameter like below:

```yaml
apiVersion: v1
Expand Down
16 changes: 8 additions & 8 deletions content/docs/2.2/deploy.md
Original file line number Diff line number Diff line change
Expand Up @@ -72,43 +72,43 @@ Locate installed KEDA Operator in `keda` namespace, then remove created `KedaCon
If you want to try KEDA on [Minikube](https://minikube.sigs.k8s.io) or a different Kubernetes deployment without using Helm you can still deploy it with `kubectl`.

- We provide sample YAML declaration which includes our CRDs and all other resources in a file which is available on the [GitHub releases](https://github.com/kedacore/keda/releases) page.
Run the following command (if needed, replace the version, in this case `2.0.0`, with the one you are using):
Run the following command (if needed, replace the version, in this case `2.2.0`, with the one you are using):

```sh
kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.2.0/keda-2.2.0.yaml
```

- Alternatively you can download the file and deploy it from the local path:
```sh
kubectl apply -f keda-2.0.0.yaml
kubectl apply -f keda-2.2.0.yaml
```

- You can also find the same YAML declarations in our `/config` directory on our [GitHub repo](https://github.com/kedacore/keda) if you prefer to clone it.

```sh
git clone https://github.com/kedacore/keda && cd keda

VERSION=2.0.0 make deploy
VERSION=2.2.0 make deploy
```

### Uninstall

- In case of installing from released YAML file just run the following command (if needed, replace the version, in this case `2.0.0`, with the one you are using):
- If you installed from the released YAML file, just run the following command (if needed, replace the version, in this case `2.2.0`, with the one you are using):

```sh
kubectl delete -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
kubectl delete -f https://github.com/kedacore/keda/releases/download/v2.2.0/keda-2.2.0.yaml
```

- If you have downloaded the file locally, you can run:

```sh
kubectl delete -f keda-2.0.0.yaml
kubectl delete -f keda-2.2.0.yaml
```

- You would need to run these commands from within the directory of the cloned [GitHub repo](https://github.com/kedacore/keda):

```sh
VERSION=2.0.0 make undeploy
VERSION=2.2.0 make undeploy
```

## Deploying KEDA on MicroK8s {#microk8s}
Expand Down
36 changes: 19 additions & 17 deletions content/docs/2.2/operate/cluster.md
Original file line number Diff line number Diff line change
Expand Up @@ -4,40 +4,42 @@ description = "Guidance & requirements for running KEDA in your cluster"
weight = 100
+++

## High Availability

KEDA does not provide support for high-availability due to upstream limitations.

Here is an overview of all KEDA deployments and the supported replicas:

| Deployment | Support Replicas | Reasoning |
|----------------|-------------------------|-------------------------------|
| Operator | 1 | |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |

## Cluster capacity requirements

The KEDA runtime requires the following resources in a production-ready setup:

| Deployment | CPU | Memory |
|----------------|-------------------------|-------------------------------|
| Operator | Limit: 1, Request: 100m | Limit: 1000Mi, Request: 100Mi |
| -------------- | ----------------------- | ----------------------------- |
| Metrics Server | Limit: 1, Request: 100m | Limit: 1000Mi, Request: 100Mi |
| Operator | Limit: 1, Request: 100m | Limit: 1000Mi, Request: 100Mi |

These are used by default when deploying through YAML.

> 💡 For more info on CPU and Memory resource units and their meaning, see [this](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes) link.
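
If you want to verify what your installation actually runs with, a quick check along these lines can help; the deployment names below assume the default YAML install and may differ in your cluster:

```sh
# Inspect the resource requests and limits the KEDA deployments are running with
kubectl get deployment keda-operator keda-metrics-apiserver --namespace keda \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].resources}{"\n"}{end}'
```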

## Firewall requirements

KEDA must be accessible inside the cluster to be able to autoscale.

Here is an overview of the ports that need to be accessible for KEDA to work:

<!-- markdownlint-disable no-inline-html -->
| Port | Why? | Remarks |
| ------ | -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `443`  | Used by the Kubernetes API server to get metrics | Required for all platforms because communication flows Control Plane &#8594; port 443 on the Service IP range.<br /><br />This is not applicable to Google Cloud. |
| `6443` | Used by the Kubernetes API server to get metrics | Only required for Google Cloud because communication flows Control Plane &#8594; port 6443 on the Pod IP range. |
<!-- markdownlint-enable no-inline-html -->
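
As a minimal sanity check that the API server can reach KEDA once it is installed, you can query the external metrics API through the API server; this only confirms connectivity and does not replace reviewing your firewall rules:

```sh
# If the required port is open, the API server can proxy this request to the KEDA metrics server
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1"
```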

## High Availability

KEDA does not support high availability due to upstream limitations.

Here is an overview of all KEDA deployments and the supported replicas:

| Deployment | Support Replicas | Reasoning |
|----------------|-------------------------|-------------------------------|
| Operator | 1 | |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |
| Deployment | Support Replicas | Reasoning |
| -------------- | ---------------- | ---------------------------------------------------------------------------------------------------------------- |
| Metrics Server | 1 | Limitation in [k8s custom metrics server](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) |
| Operator | 1 | |

## HTTP Timeouts

Expand Down
1 change: 1 addition & 0 deletions content/docs/2.2/operate/events.md
Original file line number Diff line number Diff line change
Expand Up @@ -7,6 +7,7 @@ weight = 100
## Kubernetes Events emitted by KEDA

KEDA emits the following [Kubernetes Events](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#event-v1-core):

| Event | Type | Description |
|-------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------|
| `ScaledObjectReady` | `Normal` | On the first time a ScaledObject is ready, or if the previous ready condition status of the object was `Unknown` or `False` |
Expand Down
9 changes: 6 additions & 3 deletions content/docs/2.2/scalers/aws-sqs.md
Original file line number Diff line number Diff line change
Expand Up @@ -26,15 +26,18 @@ triggers:
**Parameter list:**

- `queueURL` - Full URL for the SQS Queue
- `queueLength` - Target value for queue length passed to the scaler. Example: if one pod can handle 10 messages, set the queue length target to 10. If the actual `ApproximateNumberOfMessages` in the SQS Queue is 30, the scaler scales to 3 pods. (default: 5)
- `queueLength` - Target value for queue length passed to the scaler. Example: if one pod can handle 10 messages, set the queue length target to 10. If the actual number of messages in the SQS Queue is 30, the scaler scales to 3 pods. (default: 5)

> For the purposes of scaling, "actual messages" is equal to `ApproximateNumberOfMessages` + `ApproximateNumberOfMessagesNotVisible`, since `NotVisible` in SQS terms means the message is still in-flight/processing (see the example after this parameter list).

- `awsRegion` - AWS Region for the SQS Queue
- `identityOwner` - Receive permissions on the SQS Queue via Pod Identity or from the KEDA operator itself (see below).

> When `identityOwner` is set to `operator`, the only requirement is that the KEDA operator has the correct IAM permissions on the SQS queue. Additional Authentication Parameters are not required.
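
As an illustration of how that message count is derived, you can inspect both attributes directly with the AWS CLI; the queue URL below is a placeholder:

```sh
# The scaler effectively sums these two attributes when computing the queue length
aws sqs get-queue-attributes \
  --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue \
  --attribute-names ApproximateNumberOfMessages ApproximateNumberOfMessagesNotVisible
```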

### Authentication Parameters

> These parameters are relevant only when `identityOwner` is set to `pod`.
> These parameters are relevant only when `identityOwner` is set to `pod`.

You can use the `TriggerAuthentication` CRD to configure authentication by providing either a role ARN or a set of IAM credentials.

Expand Down Expand Up @@ -63,7 +66,7 @@ metadata:
data:
AWS_ACCESS_KEY_ID: <encoded-user-id>
AWS_SECRET_ACCESS_KEY: <encoded-key>
---
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
Expand Down