Never use infinity request timeouts #160

Merged: 1 commit into 1.4 on Feb 19, 2014

Conversation

reiddraper
Contributor

Backport of 5aa1ab0

As described in #156, there are several types of timeouts in the client.
The timeout that is generally provided as the last argument to client
operations is used to create timers which prevent us from waiting
forever on messages for TCP data (from gen_tcp). There are several cases
where this timeout was hardcoded to infinity. This can cause the client
to hang on these requests for a (mostly) unbounded time. Even when using
a gen_server timeout, the gen_server itself will continue to wait for
the message to come, with no timeout. Further, because of #155, we
simply use the `ServerTimeout` as the `RequestTimeout` if there is not
a separate `RequestTimeout`. It's possible that the `RequestTimeout` can
fire before the `ServerTimeout` (that timeout is remote), but we'd
otherwise just be picking some arbitrary number to be the difference
between them. Addressing #155 will shed more light on this.
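To make the failure mode concrete, here is a minimal Erlang sketch of the two behaviors described above. The module and function names are illustrative, not the client's actual internals:

```erlang
-module(timeout_sketch).
-export([request_timeout/2, wait_for_tcp/2]).

%% Stopgap described for #155: when no separate RequestTimeout is
%% supplied, fall back to the ServerTimeout rather than infinity.
request_timeout(undefined, ServerTimeout) -> ServerTimeout;
request_timeout(RequestTimeout, _ServerTimeout) -> RequestTimeout.

%% A finite Timeout bounds how long we wait for gen_tcp messages.
%% With the old hardcoded infinity in the after clause, this receive
%% could block forever if the expected TCP data never arrives.
wait_for_tcp(Socket, Timeout) ->
    receive
        {tcp, Socket, Data} -> {ok, Data};
        {tcp_closed, Socket} -> {error, closed}
    after Timeout ->
        {error, timeout}
    end.
```

With `Timeout = infinity`, the `after` clause never fires, which is exactly the hang this change removes.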

@reiddraper
Contributor Author

This is a partial 1.4 backport of #156.

@coderoshi
Contributor

+1, tests pass, looks good.

@reiddraper
Contributor Author

@evanmcc you mind giving this a second pair of eyes, since we had talked about it before?

@evanmcc
Contributor

evanmcc commented Feb 19, 2014

Looks sensible to me; there don't seem to be any hardcoded infinity timeouts remaining for the request timeout.

reiddraper added a commit that referenced this pull request Feb 19, 2014
@reiddraper reiddraper merged commit 786f460 into 1.4 Feb 19, 2014
@reiddraper reiddraper deleted the feature/remove-infinity-timeouts branch February 19, 2014 17:30
reiddraper added a commit to basho/riak_cs that referenced this pull request Feb 19, 2014
* Revert part of 6e7027e. It is
  preferable to use the request timeout over gen_server:call timeouts.
  Further, these timeouts have been removed in the client in the master
  branch.
* Make the fold objects request timeout configurable, using
  `fold_objects_timeout`.
* Update riakc dependency to get features added in
  basho/riak-erlang-client#160
reiddraper added a commit to basho/riak_cs that referenced this pull request Feb 19, 2014
* Revert part of 6e7027e. It is
  preferable to use the request timeout over gen_server:call timeouts.
  Further, these timeouts have been removed in the client in the master
  branch.
* Make the fold objects request timeout configurable, using
  `fold_objects_timeout` (see the sketch after this commit message).
* Update riakc dependency to get features added in
  basho/riak-erlang-client#160
* Update riak_repl_pb_api (to get a consistent dependency on riak_pb)
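For illustration, here is a minimal Erlang sketch of the configurable timeout lookup described in the `fold_objects_timeout` bullet above. The module name, the 60-second default, and the exact app-env key are assumptions, not the actual riak_cs code:

```erlang
-module(fold_timeout_sketch).
-export([fold_objects_timeout/0]).

%% Assumed default of 60 seconds; a bounded value, never infinity.
-define(DEFAULT_FOLD_OBJECTS_TIMEOUT, 60000).

%% Read the fold-objects request timeout from the application
%% environment, falling back to the bounded default when the value
%% is unset or invalid.
fold_objects_timeout() ->
    case application:get_env(riak_cs, fold_objects_timeout) of
        {ok, Timeout} when is_integer(Timeout), Timeout > 0 ->
            Timeout;
        _ ->
            ?DEFAULT_FOLD_OBJECTS_TIMEOUT
    end.
```

Reading the value through `application:get_env/2` with a bounded fallback keeps the operation from ever reverting to an infinity wait.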