
"i/o timeout" err not retried when using Watch() #2999

Open
marcind opened this issue May 24, 2024 · 0 comments



Expected Behavior

I would expect the client to retry commands related to executing the ClusterClient.Watch() method.

Current Behavior

Currently, if the connection is broken, the TCP read times out after the configured (or default, 3-second) timeout, causing the whole method call to fail. This can happen as early as the initial WATCH command sent to the server. The Watch() method call then returns an error like redis: Conn is in a bad state: read tcp 123.123.123.123:37106->42.42.42.42:6379: i/o timeout

Possible Solution

Detect and unwrap a BadConnError in shouldRetry before evaluating the retry conditions.

Steps to Reproduce

N/A, this is an intermittent issue that occurs in a cloud environment.

Context (Environment)

Version: github.com/redis/go-redis/v9 v9.5.1

Detailed Description

Reading through the code, this happens because the Watch() method uses a Tx object, which uses a StickyConnPool:

go-redis/tx.go, lines 26 to 32 at commit 0f0a284:

tx := Tx{
	baseClient: baseClient{
		opt:      c.opt,
		connPool: pool.NewStickyConnPool(c.connPool),
	},
	hooksMixin: c.hooksMixin.clone(),
}

This StickyConnPool captures any error that occurs while executing commands, wraps it in the internal BadConnError, and stores it internally:

p._badConnError.Store(BadConnError{wrapped: reason})

When the initial command fails, it may get retried (depending on options) by baseClient._process, but a repeated call to StickyConnPool.Get will now return this BadConnError, which bubbles out.

When the ClusterClient.Watch retry machinery evaluates the error with shouldRetry, it gets back false, even though the wrapped error satisfies the timeoutError interface.

Possible Implementation

Change shouldRetry to detect whether the passed-in err is a BadConnError and unwrap it before invoking the existing logic.

However, note that there's a wrinkle: if baseClient._process (which also uses shouldRetry) is executing on a Tx object backed by the sticky pool and attempts a retry, it will keep trying to get a conn that is in a bad state. The StickyConnPool.Reset() method appears intended to restore the sticky pool to an uninitialized state so that a subsequent Get can fetch a fresh conn from the underlying pool, but this method is not currently called anywhere in the code. It appears that extra logic is needed to reset the sticky pool before retries are attempted.
