Passing a sidecar_task stanza to one task modifies every other connect sidecar task #8337

Closed
jorgemarey opened this issue Jul 2, 2020 · 2 comments · Fixed by #8338
Labels: theme/consul/connect (Consul Connect integration), type/bug

@jorgemarey (Contributor)

Nomad version

Nomad v0.11.3

Issue

When a sidecar_task stanza is provided in a group service to modify the sidecar configuration, the modified config ends up being used by every other Connect sidecar as well (in other groups of the same job, and even in different jobs deployed later).

Reproduction steps

  1. start Nomad with Connect enabled (nomad agent -dev-connect)
  2. run the job below
  3. all Envoy sidecar tasks end up with the same config (even in groups where no sidecar_task stanza is specified)
  4. deploying a new, different job afterwards results in the same sidecar configuration as the one specified in the previous job (see the example after the job file below)

Job file (if appropriate)

job "countdash" {
   datacenters = ["dc1"]
   group "api" {
     network {
       mode = "bridge"
     }

     service {
       name = "count-api"
       port = "9001"

       connect {
         sidecar_service {}
       }
     }

     task "web" {
       driver = "docker"
       config {
         image = "hashicorpnomad/counter-api:v1"
       }
     }
   }

   group "dashboard" {
     network {
       mode ="bridge"
       port "http" {
         static = 9002
         to     = 9002
       }
     }

     service {
       name = "count-dashboard"
       port = "9002"

       connect {
         sidecar_service {
           proxy {
             upstreams {
               destination_name = "count-api"
               local_bind_port = 8080
             }
           }
         }
         sidecar_task {
           config {
             args = [ "-l", "debug"]
           }
         }
       }
     }

     task "dashboard" {
       driver = "docker"
       env {
         COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
       }
       config {
         image = "hashicorpnomad/counter-dashboard:v1"
       }
     }
   }
 }
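
For reference, step 4 can be observed with a minimal, otherwise unrelated job like the sketch below (the job, service, and task names are made up for illustration). It declares only a bare sidecar_service and no sidecar_task of its own, yet its Envoy sidecar comes up with the -l debug args set by the countdash job above.

job "other-app" {
  datacenters = ["dc1"]

  group "other" {
    network {
      mode = "bridge"
    }

    service {
      name = "other-api"
      port = "9003"

      connect {
        # no sidecar_task stanza here at all
        sidecar_service {}
      }
    }

    task "other" {
      driver = "docker"

      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }
}

Inspecting the generated Envoy sidecar task for this job (for example with nomad alloc status, or by looking at the command of the sidecar's Docker container) shows the debug arguments leaked from the previously deployed job.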
@dpn commented Jul 17, 2020

So we've bumped into this in a test cluster. Is there a way to revert this change back to the default config? I'm guessing this is a wipe-the-cluster-and-start-over kind of situation, but I'm happy to try other things if anyone has any ideas.

Edit: Looks like a rolling restart of the servers was enough, in case anyone runs across this 👍

@github-actions bot commented Nov 4, 2022

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 4, 2022