Honor Retry-After header #773

Merged (1 commit, Feb 22, 2018)
6 changes: 5 additions & 1 deletion lib/faraday/error.rb
@@ -55,8 +55,12 @@ def initialize(ex = nil)
   class SSLError < ClientError
   end

+  class RetriableResponse < ClientError; end

   [:ClientError, :ConnectionFailed, :ResourceNotFound,
-   :ParsingError, :TimeoutError, :SSLError].each do |const|
+   :ParsingError, :TimeoutError, :SSLError, :RetriableResponse].each do |const|
     Error.const_set(const, Faraday.const_get(const))
   end

 end
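The aliasing loop above can be reproduced in isolation. The following is a minimal standalone sketch (plain Ruby, no Faraday install; the module skeleton is reconstructed for illustration, not the library's full source) showing why the new class becomes reachable as both Faraday::RetriableResponse and Faraday::Error::RetriableResponse:

```ruby
module Faraday
  class Error < StandardError; end
  class ClientError < Error; end
  # the class added by this PR
  class RetriableResponse < ClientError; end

  # alias top-level error constants into the Error namespace,
  # mirroring the const_set loop in lib/faraday/error.rb
  [:ClientError, :RetriableResponse].each do |const|
    Error.const_set(const, Faraday.const_get(const))
  end
end

Faraday::Error::RetriableResponse.equal?(Faraday::RetriableResponse) # => true
```

This is why the middleware can reference the class under either name.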
60 changes: 48 additions & 12 deletions lib/faraday/request/retry.rb
@@ -23,7 +23,9 @@ class Request::Retry < Faraday::Middleware
     IDEMPOTENT_METHODS = [:delete, :get, :head, :options, :put]

     class Options < Faraday::Options.new(:max, :interval, :max_interval, :interval_randomness,
-                                         :backoff_factor, :exceptions, :methods, :retry_if, :retry_block)
+                                         :backoff_factor, :exceptions, :methods, :retry_if, :retry_block,
+                                         :retry_statuses)

       DEFAULT_CHECK = lambda { |env, exception| false }

       def self.from(value)
@@ -56,7 +58,8 @@ def backoff_factor

       def exceptions
         Array(self[:exceptions] ||= [Errno::ETIMEDOUT, 'Timeout::Error',
-                                     Error::TimeoutError])
+                                     Error::TimeoutError,
+                                     Faraday::Error::RetriableResponse])
       end

       def methods
@@ -71,6 +74,9 @@ def retry_block
         self[:retry_block] ||= Proc.new {}
       end

+      def retry_statuses
+        Array(self[:retry_statuses] ||= [])
+      end
     end
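Because the getter wraps the stored value in Array(), retry_statuses accepts either a single status code or a list. A tiny standalone sketch of that normalization (the helper name is hypothetical, not part of the PR):

```ruby
# Mirrors Array(self[:retry_statuses] ||= []) from the Options class above:
# a single value becomes a one-element array, nil becomes an empty array.
def normalize_statuses(value)
  Array(value || [])
end

normalize_statuses(429)        # => [429]
normalize_statuses([429, 503]) # => [429, 503]
normalize_statuses(nil)        # => []
```

This is why the tests below can pass `:retry_statuses => 429` without wrapping it in an array.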

# Public: Initialize middleware
@@ -106,29 +112,35 @@ def initialize(app, options = nil)
       @errmatch = build_exception_matcher(@options.exceptions)
     end

-    def sleep_amount(retries)
-      retry_index = @options.max - retries
-      current_interval = @options.interval * (@options.backoff_factor ** retry_index)
-      current_interval = [current_interval, @options.max_interval].min
-      random_interval = rand * @options.interval_randomness.to_f * @options.interval
-      current_interval + random_interval
+    def calculate_sleep_amount(retries, env)
+      retry_after = calculate_retry_after(env)
+      retry_interval = calculate_retry_interval(retries)
+
+      return if retry_after && retry_after > @options.max_interval
+
+      retry_after && retry_after >= retry_interval ? retry_after : retry_interval
     end
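The selection logic above distills to a pure function over three inputs. A hypothetical standalone helper (names assumed, not part of the PR) showing the three outcomes: the server's Retry-After wins when it is at least the computed backoff interval, the backoff interval wins otherwise, and nil (give up) when Retry-After exceeds max_interval:

```ruby
# Standalone re-creation of the decision in calculate_sleep_amount above.
def pick_sleep(retry_after, retry_interval, max_interval)
  # a Retry-After beyond max_interval means we should not retry at all
  return nil if retry_after && retry_after > max_interval

  # otherwise honor the larger of the server hint and our own backoff
  retry_after && retry_after >= retry_interval ? retry_after : retry_interval
end

pick_sleep(0.5, 0.1, 5) # => 0.5 (server's Retry-After wins)
pick_sleep(0.1, 0.2, 5) # => 0.2 (backoff interval wins)
pick_sleep(20, 0.1, 5)  # => nil (Retry-After too large; stop retrying)
pick_sleep(nil, 0.3, 5) # => 0.3 (no header; plain backoff)
```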

     def call(env)
       retries = @options.max
       request_body = env[:body]
       begin
         env[:body] = request_body # after failure env[:body] is set to the response body
-        @app.call(env)
+        @app.call(env).tap do |resp|
+          raise Faraday::Error::RetriableResponse.new(nil, resp) if @options.retry_statuses.include?(resp.status)
Member commented:
I can see what you're trying to do with Faraday::Error::RetriableResponse; however, I don't think it's a good idea to introduce this exception and use it this way.
The reason is that, after we reach the maximum number of retries, the new exception will be propagated outside, and this might be unexpected behaviour compared to what happens now.

I like the idea of allowing users to define a list of statuses to retry, but this should be checked internally without raising any exception.
My suggestion is to add this to retry_request?.

Contributor (author) replied:

I've tried to do this without raising an exception, but that would mean implementing the retry logic in two places, which seems like unnecessary overhead. It could be done with a do ... while loop, but that looks hacky to me.

I'll move the status check to a better place.

Contributor (author) added:

I've tried moving this line around, but this seems to be the best option.

+        end
       rescue @errmatch => exception
         if retries > 0 && retry_request?(env, exception)
           retries -= 1
           rewind_files(request_body)
           @options.retry_block.call(env, @options, retries, exception)
-          sleep sleep_amount(retries + 1)
-          retry
+          if (sleep_amount = calculate_sleep_amount(retries + 1, env))
+            sleep sleep_amount
+            retry
+          end
         end
-        raise
+
+        raise unless exception.is_a?(Faraday::Error::RetriableResponse)
Contributor commented:

@bronislav @iMacTia Hey all, I've tried to use :retry_statuses, but it seems that this portion makes it return a nil response, and no exception is raised when it gives up retrying the RetriableResponse, which was unexpected to me. Maybe in that case it would make more sense to return the last resp and let the user handle the response as they see fit?

Contributor (author) replied:

@fabiokr I see what you mean. Would you mind creating a pull request for this change?

Member replied:

You're right @fabiokr, we missed what to do when we reach this point (too many retries) with a Faraday::Error::RetriableResponse. I agree that in that case we should simply return the response, which should be available in the exception.

A PR with a demonstrating test and fix is welcome!

end
end
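The control flow of #call is easier to see in isolation. Below is a simplified standalone sketch (hypothetical names, no Faraday) of the raise-and-retry loop, including the edge case reported in the review comments above: once retries are exhausted, the marker error is rescued but neither re-raised nor converted back into a response, so the caller sees nil:

```ruby
# Hypothetical marker error, standing in for Faraday::Error::RetriableResponse.
class RetriableResponse < StandardError; end

# Simplified stand-in for the middleware's #call: the block plays the role
# of @app.call and yields a status code.
def call_with_retries(max, retry_statuses)
  retries = max
  begin
    status = yield
    # raise the marker when the status is configured as retryable
    raise RetriableResponse if retry_statuses.include?(status)
    status
  rescue RetriableResponse
    if retries > 0
      retries -= 1
      retry
    end
    # Retries exhausted: the marker is swallowed and nil escapes --
    # the surprising behaviour @fabiokr describes above.
    nil
  end
end
```

With max = 1 and a stub that always returns 429, the block runs twice (initial call plus one retry) and the method returns nil; a non-listed status like 200 is returned untouched.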

@@ -167,5 +179,29 @@ def rewind_files(body)
end
end

+    # MDN spec for Retry-After header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
+    def calculate_retry_after(env)
+      response_headers = env[:response_headers]
+      return unless response_headers
+
+      retry_after_value = env[:response_headers]["Retry-After"]
+
+      # Try to parse date from the header value
+      begin
+        datetime = DateTime.rfc2822(retry_after_value)
+        datetime.to_time - Time.now.utc
+      rescue ArgumentError
+        retry_after_value.to_f
+      end
+    end
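The parsing step above handles the two Retry-After formats described by the linked MDN page: an HTTP-date, or a plain number of seconds. A standalone sketch of just that parsing (hypothetical function name; the date-vs-seconds fallback mirrors the begin/rescue above):

```ruby
require 'date'

# Parse a Retry-After header value into a number of seconds from now.
# An HTTP-date is parsed with DateTime.rfc2822 and converted to a delta;
# anything unparseable as a date falls back to String#to_f seconds.
def parse_retry_after(value)
  datetime = DateTime.rfc2822(value)
  datetime.to_time - Time.now.utc
rescue ArgumentError
  value.to_f
end

parse_retry_after('120')                           # => 120.0
parse_retry_after('Fri, 31 Dec 1999 23:59:59 GMT') # negative: the date is in the past
```

Note that a date in the past yields a negative delta, which the middleware's max_interval/backoff comparison then effectively ignores.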

+    def calculate_retry_interval(retries)
+      retry_index = @options.max - retries
+      current_interval = @options.interval * (@options.backoff_factor ** retry_index)
+      current_interval = [current_interval, @options.max_interval].min
+      random_interval = rand * @options.interval_randomness.to_f * @options.interval
+
+      current_interval + random_interval
+    end
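The backoff formula above is interval * backoff_factor^(max - retries_left), capped at max_interval, plus a jitter of up to interval_randomness * interval. A standalone re-creation with keyword arguments in place of @options (names assumed for illustration):

```ruby
# Exponential backoff with a cap and optional random jitter,
# mirroring calculate_retry_interval above.
def backoff_interval(retries_left, max:, interval:, backoff_factor:, max_interval:, randomness: 0.0)
  retry_index = max - retries_left
  current = interval * (backoff_factor ** retry_index)
  current = [current, max_interval].min          # cap the exponential growth
  current + rand * randomness * interval         # add jitter (0 when randomness is 0)
end

backoff_interval(5, max: 5, interval: 0.1, backoff_factor: 2, max_interval: 10) # => 0.1
backoff_interval(3, max: 5, interval: 0.1, backoff_factor: 2, max_interval: 10) # 0.1 * 2**2
backoff_interval(2, max: 5, interval: 1, backoff_factor: 2, max_interval: 3)    # capped at 3
```

These values line up with the test_exponential_backoff expectations in the test file below.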
end
end
89 changes: 77 additions & 12 deletions test/middleware/retry_test.rb
@@ -10,6 +10,7 @@ def setup
   def conn(*retry_args)
     Faraday.new do |b|
       b.request :retry, *retry_args
+
       b.adapter :test do |stub|
         ['get', 'post'].each do |method|
           stub.send(method, '/unstable') do |env|
@@ -18,6 +19,22 @@ def conn(*retry_args)
             env[:body] = nil # simulate blanking out response body
             @explode.call @times_called
           end
+
+          stub.send(method, '/throttled') do |env|
+            @times_called += 1
+            @envs << env.dup
+
+            params = env[:params]
+
+            status = (params['status'] || 429).to_i
+            headers = {}
+
+            retry_after = params['retry_after']
+            headers['Retry-After'] = retry_after if retry_after
+
+            [status, headers, '']
+          end
         end
end
end
@@ -68,7 +85,7 @@ def test_interval
assert_in_delta 0.2, Time.now - started, 0.04
end

-  def test_calls_sleep_amount
+  def test_calls_calculate_sleep_amount
explode_app = MiniTest::Mock.new
explode_app.expect(:call, nil, [{:body=>nil}])
def explode_app.call(env)
@@ -79,7 +96,7 @@ def explode_app.call(env)
class << retry_middleware
attr_accessor :sleep_amount_retries

-      def sleep_amount(retries)
+      def calculate_sleep_amount(retries, env)
self.sleep_amount_retries.delete(retries)
0
end
@@ -95,30 +112,30 @@ def sleep_amount

def test_exponential_backoff
middleware = Faraday::Request::Retry.new(nil, :max => 5, :interval => 0.1, :backoff_factor => 2)
-    assert_equal middleware.sleep_amount(5), 0.1
-    assert_equal middleware.sleep_amount(4), 0.2
-    assert_equal middleware.sleep_amount(3), 0.4
+    assert_equal middleware.send(:calculate_retry_interval, 5), 0.1
+    assert_equal middleware.send(:calculate_retry_interval, 4), 0.2
+    assert_equal middleware.send(:calculate_retry_interval, 3), 0.4
end

def test_exponential_backoff_with_max_interval
middleware = Faraday::Request::Retry.new(nil, :max => 5, :interval => 1, :max_interval => 3, :backoff_factor => 2)
-    assert_equal middleware.sleep_amount(5), 1
-    assert_equal middleware.sleep_amount(4), 2
-    assert_equal middleware.sleep_amount(3), 3
-    assert_equal middleware.sleep_amount(2), 3
+    assert_equal middleware.send(:calculate_retry_interval, 5), 1
+    assert_equal middleware.send(:calculate_retry_interval, 4), 2
+    assert_equal middleware.send(:calculate_retry_interval, 3), 3
+    assert_equal middleware.send(:calculate_retry_interval, 2), 3
end

def test_random_additional_interval_amount
middleware = Faraday::Request::Retry.new(nil, :max => 2, :interval => 0.1, :interval_randomness => 1.0)
-    sleep_amount = middleware.sleep_amount(2)
+    sleep_amount = middleware.send(:calculate_retry_interval, 2)
assert_operator sleep_amount, :>=, 0.1
assert_operator sleep_amount, :<=, 0.2
middleware = Faraday::Request::Retry.new(nil, :max => 2, :interval => 0.1, :interval_randomness => 0.5)
-    sleep_amount = middleware.sleep_amount(2)
+    sleep_amount = middleware.send(:calculate_retry_interval, 2)
assert_operator sleep_amount, :>=, 0.1
assert_operator sleep_amount, :<=, 0.15
middleware = Faraday::Request::Retry.new(nil, :max => 2, :interval => 0.1, :interval_randomness => 0.25)
-    sleep_amount = middleware.sleep_amount(2)
+    sleep_amount = middleware.send(:calculate_retry_interval, 2)
assert_operator sleep_amount, :>=, 0.1
assert_operator sleep_amount, :<=, 0.125
end
@@ -212,5 +229,53 @@ def test_should_rewind_files_on_retry
assert_equal 3, @times_called
assert_equal 2, rewound
end

+  def test_should_retry_retriable_response
+    params = { status: 429 }
+    conn(:max => 1, :retry_statuses => 429).get("/throttled", params)
+
+    assert_equal 2, @times_called
+  end
+
+  def test_should_not_retry_non_retriable_response
+    params = { status: 503 }
+    conn(:max => 1, :retry_statuses => 429).get("/throttled", params)
+
+    assert_equal 1, @times_called
+  end
+
+  def test_interval_if_retry_after_present
+    started = Time.now
+
+    params = { :retry_after => 0.5 }
+    conn(:max => 1, :interval => 0.1, :retry_statuses => [429]).get("/throttled", params)
+
+    assert Time.now - started > 0.5
+  end
+
+  def test_should_ignore_retry_after_if_less_than_calculated
+    started = Time.now
+
+    params = { :retry_after => 0.1 }
+    conn(:max => 1, :interval => 0.2, :retry_statuses => [429]).get("/throttled", params)
+
+    assert Time.now - started > 0.2
+  end
+
+  def test_interval_when_retry_after_is_timestamp
+    started = Time.now
+
+    params = { :retry_after => (Time.now.utc + 2).strftime('%a, %d %b %Y %H:%M:%S GMT') }
+    conn(:max => 1, :interval => 0.1, :retry_statuses => [429]).get("/throttled", params)
+
+    assert Time.now - started > 1
+  end
+
+  def test_should_not_retry_when_retry_after_greater_than_max_interval
+    params = { :retry_after => (Time.now.utc + 20).strftime('%a, %d %b %Y %H:%M:%S GMT') }
+    conn(:max => 2, :interval => 0.1, :retry_statuses => [429], :max_interval => 5).get("/throttled", params)
+
+    assert_equal 1, @times_called
+  end
end
end