SE.Redis Reconnect retry policy #510

Merged
merged 2 commits into from
Feb 9, 2017
Conversation

deepakverma
Collaborator

Implemented reconnect retry policies that back off the ConnectTimeout wait time linearly (the default) or exponentially:
config.ReconnectRetryPolicy = new ExponentialRetry(config.ConnectTimeout);

Looking for code review and any other feedback on the pull request.

thx.

@chester89

@deepakverma that would be awesome - because right now all my Redis code is wrapped in retry logic

@NickCraver left a comment

This looks good. Great improvement. It just needs a few tweaks and comments to be good to go.

@@ -208,6 +210,12 @@ public CommandMap CommandMap
}

/// <summary>
/// The retry policy to be used for connection reconnects
/// </summary>
public IReconnectRetryPolicy ReconnectRetryPolicy { get { return reconnectRetryPolicy ?? new LinearRetry(ConnectTimeout); } set { reconnectRetryPolicy = value; } }
Collaborator

Since this creates an object for the null case, the format should change a bit. Right now, it'll spin up a new LinearRetry on each property fetch - should capture it to the private when null, e.g. in the get:

return reconnectRetryPolicy ?? (reconnectRetryPolicy = new LinearRetry(ConnectTimeout));

Sidenote: I don't recommend the static default option, since it'd be a bad long-term assumption of a stateless retry policy. Would just approach this with the tweak above.
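The suggested getter pattern, as a self-contained sketch with hypothetical stand-in types (the real ConfigurationOptions and LinearRetry live in StackExchange.Redis):

```csharp
using System;

// Hypothetical stand-in for StackExchange.Redis's LinearRetry.
class LinearRetry
{
    private readonly int deltaBackOffMilliseconds;
    public LinearRetry(int deltaBackOffMilliseconds)
    {
        this.deltaBackOffMilliseconds = deltaBackOffMilliseconds;
    }
}

// Hypothetical stand-in for ConfigurationOptions, showing only this property.
class ConfigurationOptionsSketch
{
    private LinearRetry reconnectRetryPolicy;
    public int ConnectTimeout = 5000;

    // On the first read with a null field, the default is captured into the
    // backing field, so later reads return the same instance instead of
    // allocating a new LinearRetry on every property fetch.
    public LinearRetry ReconnectRetryPolicy
    {
        get { return reconnectRetryPolicy ?? (reconnectRetryPolicy = new LinearRetry(ConnectTimeout)); }
        set { reconnectRetryPolicy = value; }
    }
}
```

Capturing the default also matters for stateful policies: a retry policy that tracks attempt counts must be the same object across reads.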

@@ -350,6 +358,7 @@ public ConfigurationOptions Clone()
configCheckSeconds = configCheckSeconds,
responseTimeout = responseTimeout,
defaultDatabase = defaultDatabase,
ReconnectRetryPolicy = ReconnectRetryPolicy,
Collaborator

reconnectRetryPolicy = ReconnectRetryPolicy,

(casing is off; as written, this hits the property rather than the backing field)

namespace StackExchange.Redis
{

public class ExponentialRetry : IReconnectRetryPolicy
Collaborator

Needs XML comments, telling people how this works, since it'll appear directly in their code.
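As an illustration (the wording here is a sketch, not the PR's final text), XML doc comments like these are what surfaces in IntelliSense for anyone consuming the type:

```csharp
/// <summary>
/// Represents a retry policy that waits an exponentially increasing amount
/// of time between reconnect attempts. (Illustrative wording only.)
/// </summary>
public class ExponentialRetry
{
    private readonly int deltaBackOffMilliseconds;

    /// <summary>
    /// Creates the policy with the given initial back-off interval, in milliseconds.
    /// </summary>
    public ExponentialRetry(int deltaBackOffMilliseconds)
    {
        this.deltaBackOffMilliseconds = deltaBackOffMilliseconds;
    }
}
```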

private int deltaBackOffMilliseconds;
private int maxDeltaBackOffMilliseconds = (int)TimeSpan.FromSeconds(10).TotalMilliseconds;
[ThreadStatic]
static private Random r;
Collaborator

private static (keyword order)
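The `[ThreadStatic]` Random pattern under review, as a self-contained sketch (names are illustrative): Random is not thread-safe, and a ThreadStatic field's initializer runs only on the first thread, so each thread must lazily create its own instance.

```csharp
using System;

static class JitterSketch
{
    // One Random per thread: sharing a single Random across threads can
    // corrupt its internal state and start returning all zeros.
    [ThreadStatic]
    private static Random r;

    public static int Next(int maxValue)
    {
        // [ThreadStatic] field initializers only run on the first thread,
        // so initialize lazily on each thread's first call. Mixing in the
        // thread id avoids identical sequences when threads start in the
        // same tick.
        if (r == null)
            r = new Random(Environment.TickCount ^ Environment.CurrentManagedThreadId);
        return r.Next(maxValue);
    }
}
```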

{
}

public ExponentialRetry(int deltaBackOffMilliseconds, int maxDeltaBackOffMilliseconds)
Collaborator

XML comments on all of these as well, so users know how it behaves.

/// </summary>
/// <param name="currentRetryCount">The number of times reconnect retries have already been made by the multiplexer while it was in connecting state</param>
/// <param name="timeElapsedMillisecondsSinceLastRetry">Total time elapsed in milliseconds since the last reconnect retry was made</param>
bool ShouldRetry(long currentRetryCount, int timeElapsedMillisecondsSinceLastRetry);
Collaborator

I like the long here, giving this connection lots of chances :)
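Putting the interface together with an exponential policy, a hypothetical sketch (not the PR's exact implementation): retry only once the time since the last attempt exceeds a doubling, capped back-off window.

```csharp
using System;

interface IReconnectRetryPolicy
{
    bool ShouldRetry(long currentRetryCount, int timeElapsedMillisecondsSinceLastRetry);
}

class ExponentialRetrySketch : IReconnectRetryPolicy
{
    private readonly int deltaBackOffMilliseconds;
    private readonly int maxDeltaBackOffMilliseconds;

    public ExponentialRetrySketch(int deltaBackOffMilliseconds, int maxDeltaBackOffMilliseconds)
    {
        this.deltaBackOffMilliseconds = deltaBackOffMilliseconds;
        this.maxDeltaBackOffMilliseconds = maxDeltaBackOffMilliseconds;
    }

    public bool ShouldRetry(long currentRetryCount, int timeElapsedMillisecondsSinceLastRetry)
    {
        // The window doubles per retry (exponent clamped to avoid overflow),
        // then is capped at the configured maximum.
        double window = deltaBackOffMilliseconds * Math.Pow(2, Math.Min(currentRetryCount, 30));
        int waitMilliseconds = (int)Math.Min(window, maxDeltaBackOffMilliseconds);
        return timeElapsedMillisecondsSinceLastRetry >= waitMilliseconds;
    }
}
```

The `long` retry count means the policy keeps getting consulted indefinitely; the cap keeps the wait bounded no matter how large the count grows.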

@deepakverma
Collaborator Author

Thanks for the review, I have updated the pull request.

@deepakverma
Collaborator Author

@NickCraver what's the plan to merge this for folks who want to try it out?

@NickCraver
Collaborator

@deepakverma I'm merging in some Opserver bits. I'll try and pull this to test and merge tonight. I appreciate the PR and sorry we've been so busy lately :(

@NickCraver NickCraver merged commit b8a98e4 into StackExchange:master Feb 9, 2017
@NickCraver
Collaborator

Verified locally - thanks for the work! This will be in the next NuGet build.
