Use deterministic pauses rather than fixed-length ones. #186
Conversation
@@ -44,9 +33,43 @@
      kubectl --kubeconfig={{ kubeconfig }} delete deployment {{ tiller }} --namespace=kube-system
  when: tiller_present|success

- name: Clean up services
  command: >
    kubectl --kubeconfig={{ kubeconfig }} delete --namespace {{ item.metadata.namespace }} svc {{ item.metadata.name }} --grace-period=600
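For context, the `{{ item.metadata.* }}` references imply this task is driven by a loop over registered service facts. A minimal sketch of what the surrounding plumbing might look like (the lookup task and the `service_list` variable are assumptions for illustration, not taken from the PR):

```yaml
# Sketch only: the lookup task and registered variable name are assumed, not from k2.
- name: Look up services in all namespaces
  command: >
    kubectl --kubeconfig={{ kubeconfig }} get svc --all-namespaces -o json
  register: service_list

- name: Clean up services
  command: >
    kubectl --kubeconfig={{ kubeconfig }} delete
    --namespace {{ item.metadata.namespace }}
    svc {{ item.metadata.name }} --grace-period=600
  with_items: "{{ (service_list.stdout|from_json)['items'] }}"
```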
Does the grace period introduce a polling loop that waits for the operation to complete? I see we are still polling for the ELBs to be deleted anyway. Also, will Ansible run these deletes in parallel, or could this end up waiting 600 seconds for each ELB if the AWS API server is slow?
It does not. I might as well revert that change. I tried it hoping it would be sufficient.
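If the serial waits ever did become a problem, one option outside the scope of this PR would be Ansible's async/poll pattern, which fires the deletes concurrently and gathers the results afterwards. A rough sketch, with the item list name assumed:

```yaml
# Rough sketch: launch each delete in the background, then wait for all of them.
- name: Delete services without blocking
  command: >
    kubectl --kubeconfig={{ kubeconfig }} delete
    --namespace {{ item.metadata.namespace }} svc {{ item.metadata.name }}
  async: 600   # allow up to 10 minutes per delete
  poll: 0      # do not wait here
  register: delete_jobs
  with_items: "{{ services_to_delete }}"   # assumed variable name

- name: Wait for all deletes to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 120
  delay: 5
  with_items: "{{ delete_jobs.results }}"
```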
one comment and that's more clarification for me. otherwise, LGTM
retest this please
this can be merged once the test passes.
Go go Jenkins!
Instead of a 60 second sleep, deterministically wait for tiller-deploy to be ready.
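A sketch of the kind of deterministic wait meant here; the exact kubectl invocation and retry budget in the PR may differ:

```yaml
# Sketch: poll until tiller-deploy reports an available replica instead of
# sleeping a fixed 60 seconds.
- name: Wait for tiller-deploy to be ready
  command: >
    kubectl --kubeconfig={{ kubeconfig }} get deployment tiller-deploy
    --namespace=kube-system -o jsonpath={.status.availableReplicas}
  register: tiller_status
  until: tiller_status.stdout|int >= 1
  retries: 60
  delay: 2
```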
Rather than waiting a solid 5 minutes regardless of need, wait up to 5 minutes for the ELBs to stop before moving on.
Go go Jenkins!
For some reason, the Kubernetes API server is not listening when we try to `helm init`. Wait for it to be listening.
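Something along these lines would do it; the master endpoint variable and port are assumptions, not taken from the PR:

```yaml
# Sketch: block until the API server accepts TCP connections before running `helm init`.
- name: Wait for the Kubernetes API server to be listening
  wait_for:
    host: "{{ kubernetes_master_host }}"   # assumed variable name
    port: 443
    timeout: 300
```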
Go go Jenkins!
when: kraken_action == 'down' and kraken_config.provider == 'aws'
until: (elb_facts is none) or (elb_facts|json_query(vpc_lookup) is none) or (elb_facts|json_query(vpc_lookup)|length <= 1)
retries: 600
delay: 1
I think this will cause us to exceed API rate limits. Maybe we should do a quick benchmark of the average ELB deletion time and then make that the delay?
1000 requests per second (rps) with a burst limit of 2000 rps.
Unless there are 1000 k2 runs doing this from one account, we should be ok.
It's probably more acrobatics than we need, but a geometrically decreasing delay might be one way to minimize the total time we wait. :) I am more concerned with making this robust for users than making it fast for developers.
I believe you are quoting the rate limit for the AWS API Gateway Service, which is different than the rate limit for an AWS account.
I am having trouble finding documentation on the rate limits for Describe, Modify, and Create actions. You are doing a describe here, so that is safer. Note that we have run into rate limits on our team in the past though, and the documentation specifically recommends an exponential back-off...
And ansible doesn't have the facility to do that.
I argue that we go ahead with the 1 second loop and revisit this if there actually is a problem or if a backoff feature is added to ansible.
So here we are using the ec2_elb_facts ansible module:
https://github.com/wimnat/ansible-modules/blob/master/ec2_elb_facts/ec2_elb_facts.py
This uses boto.ec2.elb which I think implements a binary exponential backoff:
https://github.com/boto/boto/blob/abb38474ee5124bb571da0c42be67cd27c47094f/boto/connection.py
So maybe your change is safe. I know I am being pedantic here; it's just that 1 second really seems excessive when we know ELB operations are much slower. I would prefer that we verify boto is implementing a backoff, and then optionally change the delay to something more reasonable, or not. If a backoff is in place then I am not too concerned about CPU utilization here.
Unfortunately the boto retries are for opening the connection and not the results of the query.
I'm just suggesting we not pre-optimize. I haven't seen any rate limit errors in all the times I've upped and downed - and that's on us-west which is (apparently) notorious for rate limiting. If it turns out that it's a problem, we can easily change the delay later.
Btw... I've recently been pretty consistently getting 5 retries before it moves on and that's with central-logging added.
this looks fine to me. API limits (as described here: kubernetes/kubernetes#39526) seem to be a problem only in the thousands-plus per hour. If we want this to have an exponential backoff (seems fine), let's open a fresh ticket and do the work there.
LGTM.
Rather than using fixed-length pauses, use loops to test if the thing we're pausing for has completed. This allows each of the former pauses to be shorter under many conditions.
Fixes #129
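As a rough illustration of the before/after pattern (values and variable names are placeholders, mirroring the ELB snippet discussed above):

```yaml
# Before: sleep a fixed amount of time whether or not the work is done.
- pause:
    seconds: 300

# After: poll for the condition we actually care about, bounded by roughly
# the same total time.
- name: Wait for the ELBs in the VPC to be gone
  ec2_elb_facts:
    region: "{{ region }}"   # placeholder
  register: elb_facts
  until: elb_facts|json_query(vpc_lookup)|length <= 1
  retries: 600
  delay: 1
```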