
Simplify sharding logic #438

Merged
merged 5 commits into from
Mar 1, 2021

Conversation

rfratto
Member

@rfratto rfratto commented Feb 26, 2021

PR Description

This PR simplifies the scraping service sharding mechanism by removing the scraping service manager and performing ownership checking locally directly in the shard logic.

This removes the need for a ton of code.

Which issue(s) this PR fixes

Notes to the Reviewer

Previously, hashing was used to cache whether an instance needed to be updated, and ApplyConfig was skipped if it didn't. This hasn't been necessary for a while: ApplyConfig for an unmodified instance is a no-op.
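The core idea of checking ownership locally in the shard logic can be sketched as follows. This is a hypothetical simplification, not the agent's actual ring implementation: `owner` and `applyLocal` are illustrative names, and a simple FNV hash modulo the node count stands in for the real distributed hash ring.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// owner returns which node (by index) owns a given config key.
// A stand-in for the agent's real hash-ring lookup.
func owner(key string, numNodes int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numNodes
}

// applyLocal reports whether this node should apply the config.
// No change-detection cache is needed: re-applying an unchanged
// config is assumed to be a no-op downstream.
func applyLocal(key string, nodeIdx, numNodes int) bool {
	return owner(key, numNodes) == nodeIdx
}

func main() {
	for _, c := range []string{"config-a", "config-b", "config-c"} {
		fmt.Printf("%s -> node %d\n", c, owner(c, 3))
	}
}
```

Because ownership is a pure function of the key and the ring, every node can evaluate it independently, which is what makes the separate scraping service manager unnecessary.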

PR Checklist

(Internal change, no changelog or documentation)

  • CHANGELOG updated
  • Documentation added
  • Tests updated

@mattdurham
Collaborator

To restate the above and the code: there's basically no need to check whether a config changed, just apply it unilaterally?

@rfratto
Member Author

rfratto commented Feb 26, 2021

To restate the above and the code: there's basically no need to check whether a config changed, just apply it unilaterally?

Yeah, it was added way back when re-applying a config always restarted the underlying instance. That changed in #145 but the logic here was never removed.

@rfratto
Member Author

rfratto commented Feb 26, 2021

🤔 I added a test to make sure the sharding is working but the k3d example still has every node sharding all configs to themselves. I need to look into this more.

@rfratto
Member Author

rfratto commented Feb 26, 2021

🤔 I added a test to make sure the sharding is working but the k3d example still has every node sharding all configs to themselves. I need to look into this more.

Found the bug, I forgot to update watchKV. There should be a test for this.
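The watchKV fix (skipping events for configs this node doesn't own) can be sketched roughly like this. The function names, the event-channel shape, and the hash-based `owner` lookup are hypothetical simplifications of the agent's actual code:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// owner hashes a config key to a node index; a stand-in for the
// agent's real hash-ring lookup.
func owner(key string, numNodes int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numNodes
}

// watchKV consumes config-change events and applies only the configs
// this node owns; events for unowned configs are skipped. Without
// this ownership check, every node would apply every config.
func watchKV(events <-chan string, nodeIdx, numNodes int, apply func(string)) {
	for key := range events {
		if owner(key, numNodes) != nodeIdx {
			continue // another node owns this config
		}
		apply(key)
	}
}

func main() {
	events := make(chan string, 3)
	events <- "config-a"
	events <- "config-b"
	events <- "config-c"
	close(events)

	// Node 0 of 2 applies only the configs it owns.
	watchKV(events, 0, 2, func(key string) {
		fmt.Println("applying", key)
	})
}
```

Missing this filter in the watch path would reproduce the symptom described above: every node sharding all configs to itself.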

@rfratto rfratto force-pushed the remove-sharding-instance-manager branch from 7116030 to f6692d5 Compare March 1, 2021 16:22
@rfratto rfratto merged commit b61e0e2 into grafana:main Mar 1, 2021
@rfratto rfratto deleted the remove-sharding-instance-manager branch March 1, 2021 16:52
@mattdurham mattdurham mentioned this pull request Sep 7, 2021
mattdurham pushed a commit that referenced this pull request Nov 11, 2021
* remove sharding instance manager

* add test to ensure sharding behaves properly

* handle watch events properly

* add test for unowned config in watchKV

* add comments based on feedback
@github-actions github-actions bot added the frozen-due-to-age Locked due to a period of inactivity. Please open new issues or PRs if more discussion is needed. label Apr 17, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 17, 2024