Honor Retry-After header #773

Merged: 1 commit merged into lostisland:master from bronislav:retry-after-header on Feb 22, 2018

Conversation

bronislav (Contributor)

Description

Improve retry interval calculations to honor the Retry-After header if present. This header may be set when a server is temporarily unavailable or the request is being throttled.

Todos

List any remaining work that needs to be done, e.g.:

  • Tests
  • Documentation

Additional Notes

I considered creating a separate middleware for this purpose, but then decided to just improve the existing retry logic.
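
To illustrate the direction (a simplified sketch, not the exact diff; calculate_retry_after and calculate_retry_interval are the helpers discussed further down in the review), the idea is for sleep_amount to pick whichever wait is longer, the computed backoff or the server-supplied Retry-After value:

  def sleep_amount(retries, env)
    retry_after = calculate_retry_after(env)        # seconds requested by the server, or nil
    interval    = calculate_retry_interval(retries) # exponential backoff plus random jitter

    # honor the server's wish if it asks us to wait longer than we would anyway
    retry_after && retry_after >= interval ? retry_after : interval
  end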

@iMacTia (Member) left a comment

Hi @bronislav and thanks for taking the time to work on this.

I like the additions you're bringing into the retry middleware, but I think the implementation can be improved, so I left some comments.

Please let me know if I wasn't clear in any of them.

  end

  def call(env)
    retries = @options.max
    request_body = env[:body]
    begin
      env[:body] = request_body # after failure env[:body] is set to the response body
-     @app.call(env)
+     @app.call(env).tap do |resp|
+       raise Faraday::Error::RetriableResponse.new(nil, resp) if @options.retry_statuses.include?(resp.status)

@iMacTia (Member)

I can see what you're trying to do with Faraday::Error::RetriableResponse; however, I don't think it's a good idea to introduce this exception and use it this way.
The reason is that, after we reach the maximum number of retries, the new exception will be propagated outside and this might be an unexpected behaviour compared to what happens now.

I like the idea of allowing users to define a list of statuses that they want to retry, but this should be tested internally without raising any exception.
My suggestion is to add this to retry_request?.
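
For illustration only, checking the status inside the existing retry_request?(env, exception) hook might look roughly like this (the retry_if option is an assumption based on the middleware's existing options, not part of this diff):

  def retry_request?(env, exception)
    # retry when the user-supplied retry_if block matches, or when the response
    # status is in the configurable retry_statuses list
    @options.retry_if.call(env, exception) ||
      (env[:status] && @options.retry_statuses.include?(env[:status]))
  end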

@bronislav (Contributor, Author)

I've tried to do this without raising an exception, but that would lead to implementing the retry logic in two places, which seems like unnecessary overhead. It could be done with a do ... while loop, but that looks hacky to me.

And I'll move the status testing to a better place.

@bronislav (Contributor, Author)

I've tried to move this line around but it seems that this is the best option.

retry_after = calculate_retry_after(env)
interval = current_interval + random_interval

retry_after && retry_after >= interval ? retry_after : interval

@iMacTia (Member)

This method looks a bit too complicated now, so I would extract the logic that calculates the retry interval into a separate method (similarly to what you did with calculate_retry_after).
This new method could be called calculate_retry_interval(retries).
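
Roughly along these lines (a sketch built on the existing backoff options such as interval, backoff_factor, max_interval and interval_randomness; not the code that was merged):

  def calculate_retry_interval(retries)
    retry_index      = @options.max - retries
    current_interval = @options.interval * (@options.backoff_factor ** retry_index)
    current_interval = [current_interval, @options.max_interval].min
    random_interval  = rand * @options.interval_randomness.to_f * @options.interval

    current_interval + random_interval
  end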

assert_equal middleware.sleep_amount(5), 0.1
assert_equal middleware.sleep_amount(4), 0.2
assert_equal middleware.sleep_amount(3), 0.4
assert_equal middleware.sleep_amount(5, {}), 0.1

@iMacTia (Member)

We can use the new calculate_retry_interval for this and the following tests so we don't need to pass {} every time.
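
For example, something like this (illustrative only; assumes the same middleware options as the existing assertions):

  assert_equal middleware.calculate_retry_interval(5), 0.1
  assert_equal middleware.calculate_retry_interval(4), 0.2
  assert_equal middleware.calculate_retry_interval(3), 0.4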

conn(:max => 1, :interval => 0.1, :retry_statuses => [429]).get("/throttled", params)
}

assert_in_delta 0.5, Time.now - started, 0.04

@iMacTia (Member)

I'm a bit worried by this assertion; it might create false positives if test execution is too slow.
Any reason why we don't simply test that Time.now - started > 0.5?

@bronislav (Contributor, Author)

Agreed. We don't care about the delta in this case.
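
In Minitest, that lower-bound check could be written, for example, as:

  assert_operator Time.now - started, :>, 0.5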

        retry
      end
-     raise
+     raise unless exception.is_a?(Faraday::Error::RetriableResponse)

@bronislav (Contributor, Author)

@iMacTia I've silenced the exception here, but I'm not sure this is an acceptable or particularly good solution.

@iMacTia (Member) commented Feb 20, 2018

I agree this is not the most elegant solution, but it seems the easiest one without refactoring too much.
I'll try to think of a better way to do this.

@bronislav (Contributor, Author)

I've added support for a timestamp in the Retry-After header, as described on MDN: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After

I've also stopped retrying if the retry_after interval is higher than max_interval.
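
For reference, handling both formats could look roughly like this (an illustrative sketch, not the merged code; it only distinguishes the delta-seconds and HTTP-date forms described on MDN):

  require 'time'

  def calculate_retry_after(env)
    response_headers = env[:response_headers]
    return unless response_headers

    retry_after = response_headers['Retry-After']
    return unless retry_after

    if retry_after =~ /\A\d+\z/
      retry_after.to_i # delta-seconds form, e.g. "Retry-After: 120"
    else
      begin
        # HTTP-date form, e.g. "Retry-After: Wed, 21 Oct 2015 07:28:00 GMT"
        Time.httpdate(retry_after) - Time.now
      rescue ArgumentError
        nil
      end
    end
  end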

@bronislav (Contributor, Author)

@iMacTia Any chance to merge it soon?

@iMacTia (Member) left a comment

Review comments have been addressed and test coverage is good. The exception solution doesn't look like the best one, but I couldn't think of a better approach, so let's get it merged as it is and improve it later on.

@bronislav (Contributor, Author)

@iMacTia Thank you.

iMacTia merged commit 6ddf9c3 into lostisland:master on Feb 22, 2018
@iMacTia (Member) commented Feb 22, 2018

@bronislav Merged into master. I'll probably release next week, as I'm unavailable this weekend in case of issues 😅
If you get a chance to test the master branch in a real-world application, feedback would be welcome 😃

bronislav deleted the retry-after-header branch on February 22, 2018 21:09
@bronislav (Contributor, Author)

Yes. I'm going to use it from master and test it against a real project tomorrow. Will let you know.

@bronislav (Contributor, Author) commented Feb 23, 2018

I found one configuration issue: if a user enables the :raise_error middleware, or raises custom exceptions for particular responses, those exceptions have to be included explicitly in the retry middleware configuration.

This is just a documentation issue, I think.

@iMacTia (Member) commented Feb 28, 2018

@bronislav that should be in line with what happens currently, and it depends on how you set up the middleware in your stack, but thanks for raising this 👍
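
For reference, a minimal sketch of the kind of setup being discussed, with :retry added before :raise_error so that it wraps it and can catch what it raises; the option values and the exception list here are illustrative, not a recommendation:

  conn = Faraday.new(:url => 'http://example.com') do |f|
    f.request :retry,
              :max            => 2,
              :interval       => 0.5,
              :retry_statuses => [429],
              # exceptions raised by :raise_error (or your own middleware) must be listed explicitly
              :exceptions     => [Faraday::Error::ClientError, 'Timeout::Error']
    f.response :raise_error
    f.adapter  Faraday.default_adapter
  end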

      end
-     raise
+     raise unless exception.is_a?(Faraday::Error::RetriableResponse)

@fabiokr (Contributor)

@bronislav @iMacTia Hey all, I've tried to use :retry_statuses, but it seems that this portion makes it return a nil response, and no exception is raised when it gives up retrying the RetriableResponse, which was unexpected for me. Maybe in that case it would make more sense to return the last resp and let the user handle the response as they see fit?

@bronislav (Contributor, Author)

@fabiokr I see what you mean. Would you mind creating a pull request for this change?

@iMacTia (Member)

You're right @fabiokr, we missed what to do in case we reach this point (too many retries) with a Faraday::Error::RetriableResponse. I agree that in that case we should simply return the response, which should be available in the exception.

A PR with a test demonstrating the issue and a fix is welcome!
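
A sketch of the direction being suggested, picking up right after the retry branch in the hunk quoted above (surrounding code omitted):

      # once retries are exhausted:
      if exception.is_a?(Faraday::Error::RetriableResponse)
        exception.response # hand the last response back instead of returning nil
      else
        raise
      end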


@vvo (Contributor) commented Jan 6, 2020

Hi there, should we document the fact that Retry-After headers are already handled? This was a surprise for me, and I guess we could mention somewhere that it's already handled. Thanks! (For example, here: https://www.rubydoc.info/gems/faraday/Faraday/Request/Retry)

iMacTia added a commit that referenced this pull request Jan 8, 2020
iMacTia mentioned this pull request on Jan 8, 2020
@iMacTia (Member) commented Jan 8, 2020

@vvo good point, #1109 takes care of that 👍

iMacTia added a commit that referenced this pull request Jan 8, 2020