
Stop button in Nomad UI does not stop the job in the selected cluster region #8195

Closed

pirogoeth opened this issue Jun 17, 2020 · 3 comments

pirogoeth commented Jun 17, 2020


Nomad version

Nomad v0.10.1 (0d4e5d9)

Operating system and Environment details

Ubuntu 16.04.6 LTS on AWS
Nomad clusters federated across three AWS regions: us-east-1, us-west-2, and eu-central-1

Issue

When the Nomad UI is served by one cluster region but a different region is selected in the UI, stopping a job in the selected region instead stops the job in the region local to the UI being accessed.

The stop request targets the Nomad API local to the UI's region without setting the region query parameter.

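For illustration, a minimal sketch of the difference, assuming the default HTTP port 4646 and the sleeper job from the repro below (hostnames are illustrative):

# What the UI sends when eu-central-1 is selected in the dropdown:
# no region param, so the us-east-1 servers stop their own copy.
curl -X DELETE "http://nomad.us-east-1:4646/v1/job/sleeper"

# What it should send: with the region query parameter set, the
# local servers forward the request to the eu-central-1 region.
curl -X DELETE "http://nomad.us-east-1:4646/v1/job/sleeper?region=eu-central-1"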

Reproduction steps

  1. Access the Nomad UI from a datacenter endpoint (e.g., http://nomad.us-east-1/ui/)
  2. Change the region selector to a remote region (e.g., eu-central-1)
  3. Pick a job out of the list of running jobs
  4. Click the stop button and confirm

Now the job will be stopped in us-east-1 but not eu-central-1.
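
As a workaround, the CLI should forward correctly when the region is set explicitly. A minimal sketch, assuming the us-east-1 servers are reachable on the default port:

# Stop the job in eu-central-1 via the us-east-1 servers; the global
# -region flag makes them forward the request to the target region.
nomad job stop -address=http://nomad.us-east-1:4646 -region=eu-central-1 sleeper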

Job file (if appropriate)

Stupid-simple job to test with. Deployed to each region.

job "sleeper" {
  datacenters = ["dc1"]
  type        = "system"
  priority    = 75

  group "app" {
    task "sleeper" {
      driver = "docker"

      config {
        image      = "alpine:latest"
        force_pull = true

        command = "/bin/sleep"

        args = [
          "3600",
        ]
      }

      resources {
        cpu    = 64
        memory = 64
      }
    }
  }
}

Nomad Client logs (if appropriate)


Nomad Server logs (if appropriate)

m1keil commented Nov 19, 2020

We are hitting this in 0.10.2 as well.
I believe this was fixed in 0.12.1 by @DingoEatingFuzz. Can someone confirm whether a backport is planned?

tgross added this to Needs Triage in Nomad - Community Issues Triage via automation Feb 18, 2021
tgross moved this from Needs Triage to Needs Roadmapping in Nomad - Community Issues Triage Feb 18, 2021
tgross (Member) commented Mar 3, 2021

This won't be backported to 0.10.2, sorry.

tgross closed this as completed Mar 3, 2021
Nomad - Community Issues Triage automation moved this from Needs Roadmapping to Done Mar 3, 2021
github-actions bot commented

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Oct 22, 2022