Simplify sharding logic #438
Conversation
To restate the above and the code: there's basically no need to check whether the config changed, just apply it unilaterally?
Yeah, it was added way back when re-applying a config always restarted the underlying instance. That changed in #145 but the logic here was never removed.
🤔 I added a test to make sure the sharding is working but the k3d example still has every node sharding all configs to themselves. I need to look into this more.
Found the bug, I forgot to update
Force-pushed 7116030 to f6692d5
* remove sharding instance manager
* add test to ensure sharding behaves properly
* handle watch events properly
* add test for unowned config in watchKV
* add comments based on feedback
PR Description
This PR simplifies the scraping service sharding mechanism by removing the scraping service instance manager and performing ownership checks directly in the shard logic.
This removes the need for a lot of code.
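To illustrate the idea of checking ownership locally in the shard logic, here is a minimal sketch. The function and type names are hypothetical (not the PR's actual code), and the real scraping service distributes configs with a hash ring over cluster membership rather than a simple modulus:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// owns reports whether the node at nodeIndex should run the given config.
// Hypothetical sketch: hash the config name and map it onto the set of
// nodes, so every node can decide ownership locally without a central
// manager.
func owns(configName string, nodeIndex, numNodes int) bool {
	h := fnv.New32a()
	h.Write([]byte(configName))
	return int(h.Sum32())%numNodes == nodeIndex
}

func main() {
	configs := []string{"node_exporter", "mysqld", "redis"}
	for _, c := range configs {
		fmt.Printf("%s owned by node 0: %v\n", c, owns(c, 0, 3))
	}
}
```

Because every node applies the same pure function to the same inputs, all nodes agree on ownership without coordinating, which is what lets the separate manager be deleted.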
Which issue(s) this PR fixes
Notes to the Reviewer
Previously, hashing was used to cache whether an instance needed to be updated, skipping ApplyConfig if it didn't. This hasn't been necessary for a while: calling ApplyConfig with an unmodified config is a no-op.
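The behavior the removal relies on can be sketched as follows. The Instance type and its fields are hypothetical stand-ins for the agent's instance manager, shown only to demonstrate why an external hash cache is redundant once ApplyConfig short-circuits on unchanged configs:

```go
package main

import "fmt"

// Instance is a hypothetical stand-in for the agent's instance manager.
type Instance struct {
	config  string
	applies int // counts how many times the config actually changed
}

// ApplyConfig is a no-op when the config is unchanged, mirroring the
// post-#145 behavior where re-applying no longer restarts the instance.
func (i *Instance) ApplyConfig(cfg string) {
	if cfg == i.config {
		return // unchanged: nothing to do
	}
	i.config = cfg
	i.applies++
}

func main() {
	inst := &Instance{}
	inst.ApplyConfig("scrape: a") // first apply: takes effect
	inst.ApplyConfig("scrape: a") // unchanged: no-op
	inst.ApplyConfig("scrape: b") // changed: takes effect
	fmt.Println(inst.applies) // prints 2
}
```

Since the no-op check lives inside ApplyConfig itself, callers can apply unconditionally, and the hash-based change cache in the shard logic carries no information the instance doesn't already have.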
PR Checklist
(Internal change, no changelog or documentation)