This repository has been archived by the owner on Jul 18, 2018. It is now read-only.

Use deterministic pauses rather than fixed-length ones. #186

Merged
merged 6 commits into samsung-cnct:master from 129_deterministic_pauses on Feb 15, 2017

Conversation

joejulian
Contributor

Rather than using fixed-length pauses, use loops to test if the thing we're pausing for has completed. This allows each of the former pauses to be shorter under many conditions.

Fixes #129
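
(For context, the mechanism this PR uses throughout is Ansible's register/until/retries/delay retry loop in place of a fixed pause. A minimal sketch of the before/after pattern, using a hypothetical readiness command rather than any task from the actual diff:)

- name: Old approach, wait a fixed length of time
  pause:
    seconds: 60

- name: New approach, poll until the thing we are waiting for has actually happened
  command: /usr/local/bin/check_ready   # hypothetical check that exits non-zero until ready
  register: readiness
  until: readiness.rc == 0
  retries: 60
  delay: 1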

@@ -44,9 +33,43 @@
kubectl --kubeconfig={{ kubeconfig }} delete deployment {{ tiller }} --namespace=kube-system
when: tiller_present|success

- name: Clean up services
command: >
kubectl --kubeconfig={{ kubeconfig }} delete --namespace {{ item.metadata.namespace }} svc {{ item.metadata.name }} --grace-period=600
Contributor

Does the grace period introduce a polling loop that waits for the operation to complete? I see we are still polling for the ELBs to be deleted anyway. And will Ansible run these deletes in parallel, or could this end up waiting 600 seconds for each ELB if the AWS API server is slow?

Contributor Author

It does not. I might as well revert that change. I tried it hoping it would be sufficient.
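
(For reference, a deterministic wait on the deletion itself, rather than a grace period, could look something like the sketch below. This is not what the PR ends up doing for services; the --ignore-not-found flag, the retry counts, and the assumption that this runs in the same with_items loop as the delete task are all illustrative.)

- name: Wait for the service to be gone
  command: >
    kubectl --kubeconfig={{ kubeconfig }} get svc {{ item.metadata.name }}
    --namespace {{ item.metadata.namespace }} --ignore-not-found
  register: svc_gone
  until: svc_gone.stdout == ""
  retries: 300
  delay: 1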

@coffeepac
Contributor

One comment, and that's more of a clarification for me. Otherwise, LGTM.

@coffeepac
Contributor

retest this please

@coffeepac
Contributor

This can be merged once the test passes.

@joejulian
Contributor Author

Go go Jenkins!

Instead of a 60 second sleep, deterministically wait for tiller-deploy
to be ready.
Rather than waiting a solid 5 minutes regardless of need, wait up to 5
minutes for the ELBs to stop before moving on.
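
(A sketch of what the tiller-deploy wait could look like; the jsonpath field and retry counts here are illustrative, not copied from the commit.)

- name: Wait for tiller-deploy to become ready
  command: >
    kubectl --kubeconfig={{ kubeconfig }} get deployment tiller-deploy
    --namespace=kube-system -o jsonpath={.status.availableReplicas}
  register: tiller_status
  until: tiller_status.stdout|int > 0
  retries: 300
  delay: 1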
@joejulian
Contributor Author

Go go Jenkins!

For some reason, the kubernetes api server is not listening when we try to `helm init`.
Wait for it to be listening.
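
(A sketch of such a wait; polling `kubectl version`, which fails while the server is unreachable, is one way to do it, though the actual check in the commit may differ.)

- name: Wait for the kubernetes api server to accept connections
  command: >
    kubectl --kubeconfig={{ kubeconfig }} version
  register: api_check
  until: api_check.rc == 0
  retries: 120
  delay: 1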
@joejulian
Contributor Author

Go go Jenkins!

when: kraken_action == 'down' and kraken_config.provider == 'aws'
until: (elb_facts is none) or (elb_facts|json_query(vpc_lookup) is none) or (elb_facts|json_query(vpc_lookup)|length <= 1)
retries: 600
delay: 1
Contributor

I think this will cause us to exceed API rate limits. Maybe we should do a quick benchmark of the average ELB deletion time and then make that the delay?

Contributor Author

1000 requests per second (rps) with a burst limit of 2000 rps

Unless there are 1000 instances of k2 doing this from one account, we should be ok.

Contributor

It's probably more acrobatics than we need, but a geometrically decreasing delay might be one way to minimize the total time we wait. :) I am more concerned with making this robust for users than making it fast for developers.

Contributor

I believe you are quoting the rate limit for the AWS API Gateway Service, which is different than the rate limit for an AWS account.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-api-troubleshooting.html#api-request-rate

I am having trouble finding documentation on the rate limits for Describe, Modify, and Create actions. You are doing a describe here, so that is safer. Note that we have run into rate limits on our team in the past though, and the documentation specifically recommends an exponential back-off...

Contributor Author

And Ansible doesn't have the facility to do that.

Contributor Author

I argue that we go ahead with the 1-second loop and revisit this if there actually is a problem, or if a backoff feature is added to Ansible.

Contributor
@davidewatson Feb 15, 2017

So here we are using the ec2_elb_facts ansible module:

https://github.com/wimnat/ansible-modules/blob/master/ec2_elb_facts/ec2_elb_facts.py

This uses boto.ec2.elb which I think implements a binary exponential backoff:

https://github.com/boto/boto/blob/abb38474ee5124bb571da0c42be67cd27c47094f/boto/connection.py

So maybe your change is safe. I know I am being pedantic here; it's just that 1 second really seems excessive when we know ELB operations are much slower. I would prefer that we verify boto is implementing a backoff, and then optionally change the delay to something more reasonable, or not. If a backoff is in place, then I am not too concerned about CPU utilization here.

Contributor Author

Unfortunately the boto retries are for opening the connection and not the results of the query.

I'm just suggesting we not pre-optimize. I haven't seen any rate limit errors in all the times I've upped and downed - and that's on us-west which is (apparently) notorious for rate limiting. If it turns out that it's a problem, we can easily change the delay later.

Btw... I've recently been pretty consistently getting 5 retries before it moves on and that's with central-logging added.

Contributor

This looks fine to me. API limits (as described in kubernetes/kubernetes#39526) seem to be a problem only in the thousands-plus-per-hour range. If we want this to have an exponential backoff (seems fine), let's open a fresh ticket and do the work there.

Contributor

LGTM.

@davidewatson merged commit b09700c into samsung-cnct:master Feb 15, 2017
@joejulian deleted the 129_deterministic_pauses branch February 15, 2017 18:55