connect: use heuristic to detect sidecar task driver #17065
Conversation
Force-pushed 5afbbfd to 289b29e
This PR adds a heuristic to detect whether to use the podman task driver for the connect sidecar proxy. The podman driver will be selected if there is at least one task in the task group configured to use podman, and there are zero tasks in the group configured to use docker. In all other cases the task driver defaults to docker.

After this change, we should be able to run typical Connect jobspecs (e.g. `nomad job init [-short] -connect`) on clusters configured with the podman task driver, without modification to the job files.

Closes #17042
Force-pushed 289b29e to 112879e
Spot check

nomad agent config:

```hcl
server {
  enabled = true
}

client {
  enabled = true
  options = {
    driver.allowlist = "podman"
  }
}

plugin_dir = "/home/shoenig/Go/podman/build"

plugin "nomad-driver-podman" {
}
```

job file:

```hcl
job "podcon" {
  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "podman"

      config {
        image = "docker.io/hashicorpdev/counter-api:v3"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "podman"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "docker.io/hashicorpdev/counter-dashboard:v3"
      }
    }
  }
}
```

start nomad and consul
plan is ok
deployment ok
> After this change, we should be able to run typical Connect jobspecs (e.g. `nomad job init [-short] -connect`) on clusters configured with the podman task driver, without modification to the job files.

🥳
nomad/job_endpoint_hook_connect.go
Outdated
```go
func groupConnectGuessTaskDriver(g *structs.TaskGroup) string {
	foundPodman := false
	for _, task := range g.Tasks {
		switch task.Driver {
		case "docker":
			return "docker"
		case "podman":
			foundPodman = true
		}
	}
	if foundPodman {
		return "podman"
	}
	return "docker"
}
```
It's a simple function, so I hate to bikeshed on it 😅
But I found this logic a bit confusing to grok at first. I think we could instead have a task driver counter map and then something like this so it more explicitly matches the function docstring:
```go
if numTasks["podman"] > 0 && numTasks["docker"] == 0 {
	return "podman"
}
return "docker"
```
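Completed into a self-contained form, the counter-map approach might look like the sketch below. The `Task` type and the `guessTaskDriver` name are stand-ins for this illustration, not the PR's actual identifiers, and the comparison is written against zero so it matches the "at least one podman task, zero docker tasks" wording:

```go
package main

import "fmt"

// Task is a minimal stand-in for structs.Task; only the Driver
// field matters for the heuristic.
type Task struct {
	Driver string
}

// guessTaskDriver tallies tasks per driver, then returns "podman" only
// when the group has at least one podman task and no docker tasks.
// Every other combination falls back to "docker".
func guessTaskDriver(tasks []*Task) string {
	numTasks := map[string]int{}
	for _, t := range tasks {
		numTasks[t.Driver]++
	}
	if numTasks["podman"] > 0 && numTasks["docker"] == 0 {
		return "podman"
	}
	return "docker"
}

func main() {
	fmt.Println(guessTaskDriver([]*Task{{Driver: "podman"}}))                     // podman
	fmt.Println(guessTaskDriver([]*Task{{Driver: "podman"}, {Driver: "docker"}})) // docker
	fmt.Println(guessTaskDriver(nil))                                             // docker
}
```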
How about

```go
func groupConnectGuessTaskDriver(g *structs.TaskGroup) string {
	drivers := set.FromFunc(g.Tasks, func(t *structs.Task) string {
		return t.Driver
	})
	if drivers.Contains("podman") && !drivers.Contains("docker") {
		return "podman"
	}
	return "docker"
}
```
```diff
 return &structs.Task{
 	// Name is used in container name so must start with '[A-Za-z0-9]'
 	Name:   fmt.Sprintf("%s-%s", structs.ConnectProxyPrefix, service),
 	Kind:   structs.NewTaskKind(structs.ConnectProxyPrefix, service),
-	Driver: "docker",
+	Driver: driver,
 	Config: connectSidecarDriverConfig(),
```
Would it be worth adding a docstring to `connectSidecarDriverConfig()` to note that it should be compatible with both Docker and Podman?
Or maybe even pass the driver choice here. It can be ignored in the current implementation, but may be handy if there are conflicting configs between the drivers or we add support for other plugins.
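One hypothetical shape for threading the driver choice through, sketched under assumptions: the function name here (`connectSidecarDriverConfigFor`), the config keys, and the image meta variable are illustrative, not a claim about Nomad's actual `connectSidecarDriverConfig` implementation:

```go
package main

import "fmt"

// connectSidecarDriverConfigFor returns the task-driver config for the
// connect sidecar proxy. The map is currently the same for docker and
// podman; accepting the driver up front lets driver-specific keys be
// added later without touching call sites. Illustrative sketch only.
func connectSidecarDriverConfigFor(driver string) map[string]interface{} {
	cfg := map[string]interface{}{
		"image": "${meta.connect.sidecar_image}",
	}
	switch driver {
	case "podman":
		// podman-specific overrides would go here
	case "docker":
		// docker-specific overrides would go here
	}
	return cfg
}

func main() {
	fmt.Println(connectSidecarDriverConfigFor("podman")["image"])
}
```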
Added a comment, with a note about passing in the parameter one day if needed.
Is there a doc we could update? Maybe https://developer.hashicorp.com/nomad/docs/integrations/consul-connect or https://developer.hashicorp.com/nomad/docs/job-specification/sidecar_task ?

yeah, let me cover that in another PR along with an e2e test