
Docker extra_hosts is ignored unless defined in the majority of the services within the group #12373

Open
Dgotlieb opened this issue Mar 24, 2022 · 4 comments
Labels
stage/accepted Confirmed, and intend to work on. No timeline commitment though. theme/driver/docker theme/networking type/bug

Comments

Dgotlieb (Contributor) commented Mar 24, 2022

Nomad version

v1.2.6 (a6c6b47)

Operating system and Environment details

Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic

Background

Since Nomad 1.1.3, the /etc/hosts file has been managed at the allocation level rather than the task level so that it can be shared between the tasks of an allocation (#10823).

According to the upgrade guides, when using extra_hosts with Consul Connect in bridge network mode, the extra_hosts values should be defined in the sidecar_task.config block rather than at the task config level.
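
Concretely, the relocation looks like this (same placement and values as the job files below):

connect {
  sidecar_service {}
  sidecar_task {
    config {
      # extra_hosts lives here, not in the application task's own config block
      extra_hosts = ["foo:127.0.0.1"]
    }
  }
}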

Issue

When a group has more than one Connect-enabled service, the allocation's /etc/hosts file is populated with the extra_hosts entries only if the majority of the services define extra_hosts inside their sidecar_task.config block; otherwise the allocation's /etc/hosts is generated without them.

Expected Result

If a sidecar_task with extra_hosts is defined in at least one of the group's services, those extra_hosts entries should be written to the allocation's /etc/hosts file.

Workaround

Add the extra_hosts inside the sidecar_task.config block of the majority of the group's Connect-enabled services.

Job file - before workaround

Only 2 out of 5 services have extra_hosts configured in sidecar_task.config:

job "my_job" {
  type        = "service"
  datacenters = ["dc1"]

  group "my_group" {
    count = 1

    network {
      mode = "bridge"
      port "service-a-port" {
        to = 5555
      }
      port "service-b-port" {
        to = 5556
      }
      port "service-c-port" {
        to = 5557
      }
      port "service-d-port" {
        to = 5558
      }
      port "service-e-port" {
        to = 5559
      }
    }

    service {
      name = "service-a"
      port = 5555

      connect {
        sidecar_service {}
        sidecar_task {}  # can also be removed
      }
    }

    service {
      name = "service-b"
      port = 5556
      connect {
        sidecar_service {}
        sidecar_task {}  # can also be removed
      }
    }

    service {
      name = "service-c"
      port = 5557
      connect {
        sidecar_service {}
        sidecar_task {}  # can also be removed
      }
    }

    service {
      name = "service-d"
      port = 5558
      connect {
        sidecar_service {}
        sidecar_task {
          config {
            extra_hosts = ["foo:127.0.0.1"]
          }
        }
      }
    }

    service {
      name = "service-e"
      port = 5559
      connect {
        sidecar_service {}
        sidecar_task {
          config {
            extra_hosts = ["foo:127.0.0.1"]
          }
        }
      }
    }

    task "my_task" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }
    }
  }
}

Output:

$ cat /etc/hosts
# these entries are extra hosts added by the task config
(empty)
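
For reference, the shared hosts file can be inspected from inside a running allocation with nomad alloc exec; a minimal example, with <alloc-id> standing in for the real allocation ID:

$ nomad alloc exec -task my_task <alloc-id> cat /etc/hosts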

Job file - after workaround

3 out of 5 services (a majority) have extra_hosts configured in sidecar_task.config:

job "my_job" {
  type        = "service"
  datacenters = ["dc1"]

  group "my_group" {
    count = 1

    network {
      mode = "bridge"
      port "service-a-port" {
        to = 5555
      }
      port "service-b-port" {
        to = 5556
      }
      port "service-c-port" {
        to = 5557
      }
      port "service-d-port" {
        to = 5558
      }
      port "service-e-port" {
        to = 5559
      }
    }

    service {
      name = "service-a"
      port = 5555

      connect {
        sidecar_service {}
        sidecar_task {}  # can also be removed
      }
    }

    service {
      name = "service-b"
      port = 5556
      connect {
        sidecar_service {}
        sidecar_task {} # can also be removed
      }
    }

    service {
      name = "service-c"
      port = 5557
      connect {
        sidecar_service {}
        sidecar_task {
          config {
            extra_hosts = ["foo:127.0.0.1"]
          }
        }
      }
    }

    service {
      name = "service-d"
      port = 5558
      connect {
        sidecar_service {}
        sidecar_task {
          config {
            extra_hosts = ["foo:127.0.0.1"]
          }
        }
      }
    }

    service {
      name = "service-e"
      port = 5559
      connect {
        sidecar_service {}
        sidecar_task {
          config {
            extra_hosts = ["foo:127.0.0.1"]
          }
        }
      }
    }

    task "my_task" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }
    }
  }
}

Output (as expected):

$ cat /etc/hosts
# these entries are extra hosts added by the task config
127.0.0.1 foo

Side notes

  1. I got inconsistent results when there is an even number of services (tried 2, 4, 6, ...).
  2. This was tested using the Docker driver.

Thanks!

lgfa29 (Contributor) commented Mar 24, 2022

Hi @Dgotlieb 👋

Unfortunately, since extra_hosts is set at the task level, there is a known race condition where the final value is determined by the task that starts first. I tried to fix this before in #11074, but it wasn't a reliable approach: all the tasks would be trying to write to the same file, which may result in an invalid file if they write over each other.
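
To make the race concrete, here is a sketch (with hypothetical service names) of two sidecars in the same group that disagree on an entry; which value ends up in the allocation's /etc/hosts depends on which sidecar task starts first:

service {
  name = "service-x"
  connect {
    sidecar_service {}
    sidecar_task {
      config {
        extra_hosts = ["foo:10.0.0.1"]  # one candidate value...
      }
    }
  }
}

service {
  name = "service-y"
  connect {
    sidecar_service {}
    sidecar_task {
      config {
        extra_hosts = ["foo:10.0.0.2"]  # ...and a competing one; the winner is timing-dependent
      }
    }
  }
}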

Have you tried the first workaround I mentioned in #11056 (comment)? In that particular case I thought it would be better to set the value in the sidecar config, but I hadn't considered your scenario with multiple sidecars. I think a prestart task will be able to set the desired extra_hosts.

A minimal prestart task would look like this:

job "my_job" {
  # ...
  group "my_group" {
    # ...
    task "extra-hosts" {
      driver = "docker"

      config {
        image   = "busybox:1"
        extra_hosts = ["foo:127.0.0.1"]
      }

      lifecycle {
        hook = "prestart"
      }
    }
    # ...
  }
}
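
Because prestart tasks start (and, unless marked as sidecars, run to completion) before the group's main and sidecar tasks, this task's extra_hosts value is applied to the shared hosts file first.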

lgfa29 added the theme/networking, theme/driver/docker, and stage/accepted labels on Mar 24, 2022
lgfa29 added this to Needs Triage in Nomad - Community Issues Triage on Mar 24, 2022
lgfa29 moved this from Needs Triage to Needs Roadmapping in Nomad - Community Issues Triage on Mar 24, 2022
Dgotlieb (Contributor, Author) commented:

Hi @lgfa29,

I can confirm that your workaround works even when using multiple services:

job "my_job" {
  type        = "service"
  datacenters = ["dc1"]

  group "my_group" {
    count = 1

    network {
      mode = "bridge"
      port "service-a-port" {
        to = 5555
      }
      port "service-b-port" {
        to = 5556
      }
      port "service-c-port" {
        to = 5557
      }
      port "service-d-port" {
        to = 5558
      }
      port "service-e-port" {
        to = 5559
      }
    }

    service {
      name = "service-a"
      port = 5555

      connect {
        sidecar_service {}
      }
    }

    service {
      name = "service-b"
      port = 5556
      connect {
        sidecar_service {}
      }
    }

    service {
      name = "service-c"
      port = 5557
      connect {
        sidecar_service {}
      }
    }

    service {
      name = "service-d"
      port = 5558
      connect {
        sidecar_service {}
      }
    }

    service {
      name = "service-e"
      port = 5559
      connect {
        sidecar_service {}
      }
    }

    task "my_task" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }
    }

    task "extra-hosts" {
      driver = "docker"

      config {
        image   = "busybox:1.33"
        command = "/bin/sh"
        args    = ["local/extra_hosts.sh"]
      }

      template {
        data = <<EOF
cat <<EOT >> /etc/hosts
127.0.0.1 foo
EOT
EOF
        destination = "local/extra_hosts.sh"
      }

      lifecycle {
        hook = "prestart"
      }
    }
  }
}

I just want to emphasize that if extra_hosts entries are set in any other tasks or sidecar tasks, they will be overridden by this script, so the approach below may be "safer":

      template {
        data = <<EOF
echo "foo 127.0.0.1" >> /etc/hosts
EOF
        destination = "local/extra_hosts.sh"
      }

It would be nice to be able to use this in the future without the workaround 😅 but until then it's good enough 😄

Thanks!

Dgotlieb (Contributor, Author) commented:

@lgfa29 does this mean that we can sadly close this issue?

lgfa29 (Contributor) commented Jul 11, 2022

> @lgfa29 does this mean that we can sadly close this issue?

Nope, it means that there is quite a bit of extra work that needs to be done to get this fixed. Namely, we need to hoist all extra_hosts values from the tasks up to the allocation level and process them there. This will require refactoring some internals, so we haven't been able to get to it yet, but it's good to leave the issue open.
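
Purely as an illustration of that direction (hypothetical, not valid Nomad syntax at the time of writing), the hoisted value might live at the group's network level instead of on individual tasks:

group "my_group" {
  network {
    mode = "bridge"
    # hypothetical allocation-level setting; not an implemented attribute
    extra_hosts = ["foo:127.0.0.1"]
  }
  # ...
}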
