
Timeout issue when using EventStore.read_stream_forward/3 #92

Merged

Conversation

@eamontaaffe (Contributor) commented Dec 4, 2017

I am having issues reading a large number of events (> 1,000) from a stream. Obviously I can reduce the number of events per read and spread the work over many requests, but my internet connection isn't always reliable (Australian internet can be shit), and even reading a small number of events (~100) sometimes fails.

I would like to add a parameter or configuration option to read_stream_forward so that we can increase the GenServer.call timeout.

The attached changes do it in a pretty crude manner, but it works for my purposes. I am open to suggestions on how we can clean it up. We should also add the timeout option to other similar functions such as EventStore.read_all_streams_forward.
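A rough sketch of the idea: accept an optional timeout and pass it as the third argument to GenServer.call, rather than relying on the 5,000 ms default. The server name and message tuple below are illustrative assumptions, not the library's actual internals.

```elixir
# Hypothetical sketch only: thread an optional `timeout` through to the
# underlying GenServer.call. The `EventStore.Streams.Stream` server name
# and the message shape are assumptions for illustration.
def read_stream_forward(stream_uuid, start_version \\ 0, count \\ 1_000, timeout \\ 5_000) do
  GenServer.call(
    EventStore.Streams.Stream,
    {:read_stream_forward, stream_uuid, start_version, count},
    timeout
  )
end
```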

@slashdotdash (Member) commented Dec 4, 2017
Thanks for the pull request @eamontaaffe.

You could also use EventStore.stream_forward to return an Elixir Stream of events, since it isn't affected by the GenServer call timeout (5 seconds by default). This is typically the function I use to fetch events from a stream, rather than paging through events manually.
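For example (a minimal sketch; the stream name is illustrative, and this assumes a configured, running event store):

```elixir
# EventStore.stream_forward returns a lazy Elixir Stream that is read in
# batches as it is consumed, so no single GenServer.call has to return
# all events at once within the 5-second default timeout.
"example-stream"
|> EventStore.stream_forward()
|> Enum.each(&IO.inspect/1)
```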

@slashdotdash slashdotdash merged commit d7bb8c0 into commanded:master Dec 4, 2017
slashdotdash added a commit that referenced this pull request Dec 4, 2017