fix(discovery): plugin registration bugfixes #1650
Conversation
Hi @andrewazores! Add at least one of the required labels to this PR. Required labels are: chore, ci, cleanup, docs, feat, fix, perf, refactor, style, test
/request_review
This PR/issue depends on:
/build_test
To run smoketest:
Comments for testing Mergify config :))
Looks like this worked as expected here.
Force-pushed from 8d4fb70 to e6d8f5e
- fix(discovery): delete plugin stored credentials automatically on deregistration/stale prune
- delete any stored credentials on plugin callback ping failure
- bump minimum event loop pool size
- use more specific response code for duplicate matchexpression
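As a rough illustration of the last item, here is a minimal sketch of answering a duplicate match expression with a more specific status code. The class, field, and method names are hypothetical, not Cryostat's actual code; it only shows the idea of returning 409 Conflict rather than a generic error when the expression already has stored credentials.

```java
// Hypothetical sketch only; names do not reflect Cryostat's actual implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class StoredCredentialsHandler {
    private final Map<String, String> credentialsByMatchExpression = new ConcurrentHashMap<>();

    // Returns an HTTP-style status code describing the outcome.
    int storeCredentials(String matchExpression, String credentials) {
        // putIfAbsent returns the previously stored value if the key already exists
        String previous = credentialsByMatchExpression.putIfAbsent(matchExpression, credentials);
        if (previous != null) {
            return 409; // Conflict: this match expression already has stored credentials
        }
        return 201; // Created
    }
}
```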
Welcome to Cryostat! 👋
Before contributing, make sure you have:
- rebased your branch on the latest upstream main branch
- added at least one of the required labels to the PR: [chore, ci, docs, feat, fix, test]

To recreate commits with GPG signature:
`git fetch upstream && git rebase --force --gpg-sign upstream/main`
Fixes: https://github.com/cryostatio/cryostat/issues/1633
See also cryostatio/cryostat-agent#193
Based on #1636
Description of the change:
Improves error handling and cleanup when plugin registrations fail and are associated with stored credentials.
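A minimal sketch of the cleanup idea, assuming hypothetical names (this is not Cryostat's actual API): whenever a plugin registration is removed, whether by explicit deregistration or stale-entry pruning, any credentials stored for that plugin are deleted along with it.

```java
// Hypothetical sketch; class and method names are illustrative only.
import java.util.Optional;

record Plugin(String id, String callbackHost) {}

interface PluginStore {
    Optional<Plugin> remove(String pluginId);
}

interface CredentialsStore {
    void deleteFor(String callbackHost);
}

class PluginRegistry {
    private final PluginStore pluginStore;
    private final CredentialsStore credentialsStore;

    PluginRegistry(PluginStore pluginStore, CredentialsStore credentialsStore) {
        this.pluginStore = pluginStore;
        this.credentialsStore = credentialsStore;
    }

    // Called on explicit deregistration, stale-entry pruning, and callback
    // ping failure, so a dead registration never leaves credentials behind.
    void removePlugin(String pluginId) {
        pluginStore.remove(pluginId)
                .ifPresent(p -> credentialsStore.deleteFor(p.callbackHost()));
    }
}
```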
Motivation for the change:
Together with cryostatio/cryostat-agent#193, this increases the resiliency of the server/agent registration system: temporary networking failures, registration conflicts, or other bugs are less likely to leave the server and agent in a state where neither recognizes the other, yet neither is able to clean up and reset the registration status.
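To make the recovery path concrete, here is a hedged sketch of the callback-ping side, reusing the hypothetical PluginRegistry above; the flow is an assumption based on the commit summary, not the actual implementation. If the server cannot reach a plugin's callback, it drops the registration and its stored credentials so the agent can re-register cleanly.

```java
// Hypothetical sketch; not Cryostat's actual code.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

class CallbackPinger {
    private final HttpClient http = HttpClient.newHttpClient();
    private final PluginRegistry registry; // from the sketch above

    CallbackPinger(PluginRegistry registry) {
        this.registry = registry;
    }

    // Periodically invoked for each registered plugin.
    void ping(String pluginId, URI callback) {
        HttpRequest req = HttpRequest.newBuilder(callback)
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        try {
            HttpResponse<Void> resp = http.send(req, HttpResponse.BodyHandlers.discarding());
            if (resp.statusCode() >= 400) {
                // Callback reachable but unhealthy: prune the registration and
                // its credentials rather than leaving server and agent
                // disagreeing about registration state.
                registry.removePlugin(pluginId);
            }
        } catch (Exception e) {
            // Callback unreachable: same cleanup, so the agent can retry
            // registration from a clean slate.
            registry.removePlugin(pluginId);
        }
    }
}
```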
How to manually test:
1. `podman kill` some agent instances to prevent clean shutdown
2. `podman run` (reference `smoketest.sh` for exact invocation) to restart some of the killed agent instances and spin them back up