update health_check and auto_revert don't seem to work #3016
Comments
Even when I add an update stanza with health_check and auto_revert set, nothing is reverted after the allocations end up in the "failed" state.
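A minimal sketch of an update stanza of this shape (the values are illustrative, not the ones from the original report):

```hcl
update {
  max_parallel     = 1
  health_check     = "checks"  # gate health on the task's Consul checks
  min_healthy_time = "10s"     # allocation must stay healthy this long
  healthy_deadline = "1m"      # fail the deployment if not healthy by then
  auto_revert      = true      # roll back to the last good job version on failure
}
```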
@tino Looks like you are running a system job. Unfortunately, this feature is only available for service jobs at the moment. The docs have been updated and the website should be pushed soon.
Ah, okay, that explains it! Is there anything I can do now to prevent deploying a failing configuration everywhere, as I was trying to accomplish? And is this something to expect in a 0.6.x release, or more in 0.7/0.8?
@tino You could duplicate the group and add a constraint to one group to run only on a single node, and to the other group to avoid that node, essentially creating a manual canary (see the sketch below). As for bringing the new update stanza to system jobs, it is more likely 0.7/0.8.
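A minimal sketch of that workaround, assuming a node named canary-node (the node name and group names are hypothetical):

```hcl
# Canary group: pinned to a single node so a bad config only lands there.
group "canary" {
  constraint {
    attribute = "${node.unique.name}"
    value     = "canary-node"
  }
  # same tasks as the main group
}

# Main group: excluded from the canary node.
group "main" {
  constraint {
    attribute = "${node.unique.name}"
    operator  = "!="
    value     = "canary-node"
  }
  # same tasks as the canary group
}
```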
Nomad version
Nomad v0.6.0-dev (1f3966e+CHANGES)
(from #2969)
Operating system and Environment details
Docker (Alpine Linux image)
Issue
With this config:
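A minimal sketch of such a system job; the job and file names match the reproduction steps below, while the datacenter and Docker image are assumptions:

```hcl
# Hypothetical reconstruction of ngtest.nomad: a system job running nginx
# with an update stanza that is expected to auto-revert on failure.
job "ngtest" {
  datacenters = ["dc1"]
  type        = "system"  # runs on every client node

  update {
    max_parallel = 1
    health_check = "checks"
    auto_revert  = true
  }

  group "ngtest" {
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:alpine"
        # nginx.conf is provided via a template or mounted volume (omitted here)
      }
    }
  }
}
```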
I expect a failing configuration to not be deployed across multiple machines, but be reverted after failing a single try.
Reproduction steps
1. `nomad run ngtest.nomad`
2. Remove a `;` in the nginx.conf to make it invalid
3. `nomad run ngtest.nomad`

=> both runs end up failing.
Nomad Server logs (if appropriate)
Nomad Client logs (if appropriate)
After the first run:
After the second run: