This repository has been archived by the owner on Jan 22, 2021. It is now read-only.

Add support for Azure Storage Queue #39

Open
developerxnz opened this issue Oct 11, 2018 · 17 comments
Labels
enhancement New feature or request

Comments

@developerxnz

Add support to access Storage Queue Message Count using the azure go sdk.

@developerxnz developerxnz changed the title Add support Azure Storage Queue Add support for Azure Storage Queue Oct 11, 2018
@tomkerkhove
Member

This is actually already supported - https://github.com/Azure/azure-k8s-metrics-adapter#external-metrics.

Every metric that is available in Azure Monitor is waiting for you to use!

@jsturtevant
Collaborator

I thought the same initially, but I have found that there are edge cases in how each metric is reported to Azure Monitor. In this case, Azure Storage only reports the queue Capacity metrics every 24 hours. I tested this to confirm it and found that on the first day I did not get any queue message counts; on the second day the counts appeared.

This makes the Azure Monitor metric not very useful for this use case. In addition, the metric reported to Azure Monitor by Azure Storage Queues is the total message count across all queues and doesn't currently support dimensions for per-queue metrics.

To enable this, we will need to create a way to use a different API endpoint via the Go SDK. It does look like we can get the queue message count through the SDK.

I am currently thinking we modify the ExternalMetric CRD to have a type field that selects which API endpoint we want to query. This way we could have an ExternalMetric that uses Azure Monitor, Storage Queues, or any other service that comes up with a limitation like this.

@tomkerkhove
Member

tomkerkhove commented Oct 11, 2018 via email

jsturtevant added a commit that referenced this issue Oct 17, 2018
Updates to docs based on the investigation on storage queues ( #39 ). Also found that filter was required but shouldn't be.
@jsturtevant
Collaborator

I am planning on re-using the same ExternalMetric CRD with a property that tells us the type, and using a different schema based on each type. I think having too many CRD types would get confusing and be harder to maintain in the code. It would look something like:

apiVersion: azure.com/v1alpha2
kind: ExternalMetric
metadata:
  name: example-external-metric-sb
spec:
  type: azuremonitor
  config:
    azure:
      resourceGroup: sb-external-example
      resourceName: sb-external-ns
      resourceProviderNamespace: Microsoft.ServiceBus
      resourceType: namespaces
    metric:
      metricName: Messages
      aggregation: Total
      filter: EntityName eq 'externalq'

or

apiVersion: azure.com/v1alpha2
kind: ExternalMetric
metadata:
  name: example-external-metric-sb
spec:
  type: storagequeue
  config:
    queuename: test1
    resourcegroup: storage-rg

I am not sure yet what config is necessary for the storage queue example, but that gives you a sense of what I am thinking.

@tomkerkhove
Member

Would it make sense to rename azuremonitor to generic?

This feels similar to Promitor's Service Bus Queue scraping & generic scraping, where there are simpler, more convenient configs but also more advanced ones in case there is a need.

@developerxnz
Author

developerxnz commented Oct 26, 2018

The example code I was using to access the storage queue only requires the storage account name, the queue name and the connection string.

e.g.
`u, _ := url.Parse(fmt.Sprintf("https://%s.queue.core.windows.net/%s", c.storageaccountname, c.queueName))`

`queueURL := azqueue.NewQueueURL(*u, azqueue.NewPipeline(azqueue.NewSharedKeyCredential(c.storageaccountname, c.accountKey), azqueue.PipelineOptions{}))`
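
A fuller, minimal sketch of reading the approximate message count with the azure-storage-queue-go SDK could look like the code below. The account name, key, and queue name are placeholders, and the exact `NewSharedKeyCredential` signature (whether it also returns an error) can vary between SDK versions:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/Azure/azure-storage-queue-go/azqueue"
)

func main() {
	// Placeholder values; in the adapter these would come from configuration.
	accountName, accountKey, queueName := "mystorageaccount", "<account-key>", "test1"

	// Recent versions of the SDK also return an error from NewSharedKeyCredential.
	credential, err := azqueue.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		log.Fatal(err)
	}

	u, err := url.Parse(fmt.Sprintf("https://%s.queue.core.windows.net/%s", accountName, queueName))
	if err != nil {
		log.Fatal(err)
	}
	queueURL := azqueue.NewQueueURL(*u, azqueue.NewPipeline(credential, azqueue.PipelineOptions{}))

	// GetProperties returns queue metadata, including the approximate message count.
	props, err := queueURL.GetProperties(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("approximate message count:", props.ApproximateMessagesCount())
}
```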

@brobichaud

Has any progress been made on this issue? I am in need of horizontal pod scaling based on azure storage queue depth.

@jsturtevant
Collaborator

@brobichaud You should check out the KEDA project for future work; storage queues are specifically supported.

@brobichaud

@jsturtevant yeah, I have taken a look at KEDA and it does go a bit further, but it is truly event driven. Meaning it would create a new pod for every message that appears in the queue; I need a more chunky approach, and I've given them that feedback.

That said, are you indicating that this component is likely end-of-life? And thus I should not expect any movement?

@jsturtevant
Collaborator

I do not believe KEDA creates a new pod for every message in the queue. Where did you see that? There is a proposal for long-running jobs that does do this, but it is not currently implemented. @lee0c could confirm.

Since KEDA came out I have slowed development, as I believe it answers most requests I have seen and is an extension of the work done here. If there is a clear need for further development because KEDA doesn't fit the needs, I will pick this back up.

@brobichaud

Isn't event-driven the core of how KEDA works? Each event triggers the scaling logic, which in terms of Storage Queues means each message above the defined threshold equates to an event, which in turn triggers KEDA to scale out the deployment. I don't see any other configurability to coerce KEDA into doing anything but that. Am I missing something?

@jsturtevant
Collaborator

KEDA itself only manages the scale from 1 to 0 and 0 to 1. When a new "event" is in a queue, it will activate the deployment by setting the replica count to 1, if it was scaled to zero. After a cooldown period, when there are no "events" in the queue, it will scale the replica count of the deployment back to zero.

After a deployment is active (has one or more replicas), KEDA uses an HPA that it creates behind the scenes to do scaling. The HPA is wired to call KEDA's built-in metrics server (the same thing this project provides), which retrieves the value from the queue and relies on the HPA's external metric scaling logic to scale the deployment.
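
To make that concrete, the HPA created behind the scenes is wired up roughly like this (a hand-written sketch only; the object and metric names are illustrative since KEDA generates and names these for you, and the `queue-consumer` deployment is a placeholder):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-queue-consumer        # illustrative; KEDA creates this object itself
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-consumer               # the deployment being scaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External                     # served by KEDA's metrics server
    external:
      metricName: azure-queue-myqueue  # illustrative metric name
      targetAverageValue: "10"         # desired queue messages per replica
```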

Does this help?

@jsturtevant
Collaborator

You can see the KEDA scaling logic here: https://github.com/kedacore/keda/blob/0cb6bfda6395433d197c1ac0b3e504314d62b26f/pkg/handler/scale_handler.go#L345

@lee0c
Collaborator

lee0c commented Jun 28, 2019

To confirm: no, KEDA will not create a pod for each queue item. When defining a ScaledObject spec, you set a target value. That target value is then used to tell the HPA how many queue items it should aim to have per pod. So a target value of 5 means that there will be roughly one pod per 5 queue items.

@brobichaud

brobichaud commented Jun 28, 2019

I guess I totally misunderstood the basic concept of how KEDA scales. Clearly it works in concert with HPA, but I guess I don't get how to configure it.

The core of what I'm after is something like this:
When my Azure Storage Queue exceeds X then start scaling out, making sure there is one pod for every Y queue messages above X. I also of course need a min and max pod count to scale between.

A real-world example for me is:
I run with 1-20 messages in my queue at any given time and 2 pods normally handle them fine. But suddenly 86 messages appear in my queue. I want to scale out so I have 1 pod for every 10 messages above that 20 threshold, so 7 more pods, or 9 total. I want to limit the min pod count at 2 and the max at 10.

Can I make KEDA do this? (I've asked over on the KEDA repo as well: kedacore/keda#269 (comment))

@lee0c
Collaborator

lee0c commented Jun 28, 2019

KEDA provides 0->1 and 1->0 scaling, & serves metrics so that the HPA can provide 1->n/n->1 scaling.

As long as you are holding to a rough 10 messages : 1 pod ratio, that should be doable with KEDA. KEDA's ScaledObject spec provides a way to set min and max pod counts, which should handle what you are looking for.
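
For example, a ScaledObject along these lines should give roughly the behaviour you describe (a sketch against the KEDA spec at the time; field names such as `deploymentName`, the `azure-queue` trigger type, and the metadata keys may differ between KEDA versions, and `STORAGE_CONNECTION` is a placeholder for an environment variable on the deployment that holds the storage connection string):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    deploymentName: queue-consumer   # placeholder deployment to scale
  minReplicaCount: 2                 # never scale below 2 pods
  maxReplicaCount: 10                # never scale above 10 pods
  triggers:
  - type: azure-queue
    metadata:
      queueName: myqueue             # placeholder queue name
      queueLength: "10"              # target messages per pod
      connection: STORAGE_CONNECTION # placeholder env var with the connection string
```

With 86 messages in the queue and a queueLength of 10, the HPA's average-value calculation asks for ceil(86 / 10) = 9 replicas, which happens to match the 9 pods in your example, and the min/max replica counts keep the deployment between 2 and 10 pods.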

@Aarthisk

Aarthisk commented Jul 1, 2019

+1 to what @lee0c said: HPA is already chunky so the current scaler should address what you want.
