Dynamic sharding implementation #174

Closed
wants to merge 10 commits
Conversation

yaourt (Contributor) commented Jun 29, 2011

Could you please review the following proposal and tell me what you think about it?

Currently, Jedis doesn't support dynamic sharding.

If we want to add or remove a Redis server from the available shards, the only way to do it today is to stop the application, change the sharding configuration, and then restart it.

This implementation makes it possible, for example:

  • to add/remove servers without downtime for the application
  • to externalize the Redis connectivity monitoring and adapt the sharding configuration in near real time
  • to centralize the sharding configuration for a cluster in a DB or something similar
  • etc.

This proposal delegates the sharding configuration to a component outside of Jedis and allows the sharding configuration to be updated dynamically, without downtime.

The JedisDynamicShardsProvider class bridges this external component and the Jedis internals.

The external component manages the sharding configuration however it wants; when an update is required, it only has to call the relevant method (addShard(), removeShard() or setShards()) on JedisDynamicShardsProvider.

Upon such a call, JedisDynamicShardsProvider notifies the Sharded instances about the change, but only when strictly necessary (inspired by the Observable/Observer pattern).

When Sharded handles such a notification, all read access to the shard configuration is blocked until the configuration change is complete (using a ReentrantReadWriteLock).
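
To make the mechanism concrete, here is a minimal sketch of what such a provider could look like. It only illustrates the idea described above; the listener interface, the exact method signatures and the locking granularity are assumptions of this sketch, not necessarily what the patch does.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import redis.clients.jedis.JedisShardInfo;

// Sketch: the provider owns the shard list, guards it with a read/write lock,
// and notifies registered listeners (the Sharded instances) after each change.
public class JedisDynamicShardsProvider {

    // Hypothetical callback interface for this sketch; in the patch,
    // the Sharded instances themselves are the observers.
    public interface ShardsListener {
        void shardsUpdated(List<JedisShardInfo> newShards);
    }

    private final List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
    private final Set<ShardsListener> listeners = new HashSet<ShardsListener>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void register(ShardsListener listener) {
        listeners.add(listener);
    }

    // Readers (key hashing, connection lookup) take the read lock.
    public List<JedisShardInfo> getShards() {
        lock.readLock().lock();
        try {
            return new ArrayList<JedisShardInfo>(shards);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the write lock, then notify once the new configuration is in place.
    public void addShard(JedisShardInfo shard) {
        lock.writeLock().lock();
        try {
            shards.add(shard);
        } finally {
            lock.writeLock().unlock();
        }
        notifyListeners();
    }

    public void removeShard(JedisShardInfo shard) {
        lock.writeLock().lock();
        try {
            shards.remove(shard);
        } finally {
            lock.writeLock().unlock();
        }
        notifyListeners();
    }

    public void setShards(List<JedisShardInfo> newShards) {
        lock.writeLock().lock();
        try {
            shards.clear();
            shards.addAll(newShards);
        } finally {
            lock.writeLock().unlock();
        }
        notifyListeners();
    }

    private void notifyListeners() {
        List<JedisShardInfo> snapshot = getShards();
        for (ShardsListener listener : listeners) {
            listener.shardsUpdated(snapshot);
        }
    }
}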

yaourt added 4 commits June 28, 2011 14:42
- …t a JedisDynamicShardsProvider instead of a static shards list
- Use ReentrantReadWriteLock in AbstractDynamicShardsProvider
- Fix read/write locks in Sharded
- Add integration tests with concurrency
ib84 commented Jun 30, 2011

nice!

ib84 commented Sep 9, 2011

Hi yaourt. Did you test this already?

yaourt (Contributor, Author) commented Sep 9, 2011

Hmm, to be honest, I added several tests for this, but I don't remember if I ran the whole Jedis test suite...

So please give me some time to confirm, so you don't waste your time...

ib84 commented Sep 9, 2011

Yes, that would be a big help, thanks. I'd like your dynamic sharding!

yaourt (Contributor, Author) commented Sep 9, 2011

I have your answer:

Results:

Tests run: 215, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

ib84 commented Sep 9, 2011

Nice! I assume it also passed your own custom tests for adding/removing shards. Forwarding to Jonathan.

yaourt (Contributor, Author) commented Sep 9, 2011

Real-life sample usage code (DAO implementation NOT provided):

https://gist.github.com/1206366

ib84 commented Sep 9, 2011

Ah, OK, thanks. I just wrote you a personal mail asking about just those two things. Have a read anyway for another thing.

yaourt (Contributor, Author) commented Sep 16, 2011

Up ;-)

Getting-started sample available here: https://gist.github.com/1211016
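
The gist itself is not reproduced in this thread, but based on the description above, a getting-started flow might look roughly like this. The ShardedJedis constructor taking a provider belongs to the proposed patch and is only assumed here (hence commented out), and the host names are placeholders.

import java.util.Arrays;

import redis.clients.jedis.JedisShardInfo;

public class DynamicShardingGettingStarted {
    public static void main(String[] args) {
        // Two initial shards (placeholder hosts).
        JedisShardInfo shard1 = new JedisShardInfo("redis-1.example.com", 6379);
        JedisShardInfo shard2 = new JedisShardInfo("redis-2.example.com", 6379);

        JedisDynamicShardsProvider provider = new JedisDynamicShardsProvider();
        provider.setShards(Arrays.asList(shard1, shard2));

        // In the proposal, Sharded/ShardedJedis is built against the provider
        // instead of a static shard list (constructor assumed, so commented out):
        // ShardedJedis jedis = new ShardedJedis(provider);
        // jedis.set("foo", "bar");

        // Later, an external component (monitoring, admin console, DB-backed
        // configuration, ...) can change the cluster without a restart:
        provider.addShard(new JedisShardInfo("redis-3.example.com", 6379));
        provider.removeShard(shard1);
    }
}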

goldfishy commented
Will this be merged into Jedis? :)

A review comment was left on the following lines of the diff:

resources.put(shardInfo, shardInfo.createResource());
}
nodes.clear();
resources.clear();


Isn't this leaking resources? Don't you have to close them all rather than just clearing references to them here?
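
One possible way to address this concern (a sketch only, not the fix applied in the PR) is to disconnect each existing connection before dropping the references. In ShardedJedis the pooled resources are Jedis connections, so something along these lines:

// Sketch: release the existing connections before rebuilding the shard map,
// so that clearing the maps does not leak open sockets.
for (R resource : resources.values()) {
    if (resource instanceof Jedis) {
        try {
            ((Jedis) resource).disconnect();
        } catch (Exception e) {
            // best effort: the connection is being discarded anyway
        }
    }
}
nodes.clear();
resources.clear();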

HeartSaVioR (Contributor) commented
We agreed that we don't treat ShardedJedis as a core part of Jedis, and for now we aren't improving it.
If you think we really need this feature, please open an issue and link this PR so that we don't forget it.
Closing.

phenomax commented
Is this feature still under consideration? As a workaround, I thought about disconnecting and reconnecting the connection to pick up the new nodes. What do you think about that?
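
For reference, that workaround is already possible with the released Jedis API by rebuilding the pool from a fresh shard list and retiring the old one, roughly like this. This is only a sketch; the ReconnectWorkaround class and the shard lists are made up for illustration.

import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

public class ReconnectWorkaround {
    private volatile ShardedJedisPool pool;

    public ReconnectWorkaround(List<JedisShardInfo> initialShards) {
        this.pool = new ShardedJedisPool(new JedisPoolConfig(), initialShards);
    }

    // Swap in a new pool built from the updated shard list, then destroy the
    // old one so its connections are closed. Connections still borrowed from
    // the old pool should be returned before calling this.
    public synchronized void reshard(List<JedisShardInfo> newShards) {
        ShardedJedisPool oldPool = this.pool;
        this.pool = new ShardedJedisPool(new JedisPoolConfig(), newShards);
        oldPool.destroy();
    }

    // Callers keep borrowing from whichever pool is current; keys are
    // re-hashed onto the new shard set after a reshard().
    public ShardedJedis getResource() {
        return pool.getResource();
    }
}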
