
scheduling: prevent self-collision in dynamic port network offerings #16401

Merged: tgross merged 2 commits into main from pfnr-networking-fix on Mar 9, 2023

Conversation

@tgross (Member) commented Mar 8, 2023:

Fixes #9506 #14850 (and several internally-tracked issues)

When the scheduler tries to find a placement for a new allocation, it iterates over a subset of nodes. For each node, we populate a `NetworkIndex` bitmap with the ports of all existing allocations and any other allocations already proposed as part of this same evaluation via its `SetAllocs` method. Then we make an "ask" of the `NetworkIndex` in `AssignPorts` for any ports we need and receive an "offer" in return. The offer will include both static ports and any dynamic port assignments.

The `AssignPorts` method was written to support group networks, and it shares the code that selects dynamic ports with the original `AssignTaskNetwork` code. `AssignTaskNetwork` can request multiple ports from the bitmap at a time. But `AssignPorts` requests them one at a time, does not account for possible collisions, and doesn't return an error in that case.
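
To make that failure mode concrete, here is a minimal, self-contained sketch; it is not the Nomad implementation, and `pickDynamicPort` and the deliberately tiny port range are purely illustrative:

```go
package main

// Illustrative sketch only, not Nomad code: each pick consults the
// index of committed allocations but never records its own pick, so
// two asks in the same offer can land on the same dynamic port.

import (
	"fmt"
	"math/rand"
)

func pickDynamicPort(used map[int]bool, min, max int) int {
	for try := 0; try < 20; try++ {
		p := min + rand.Intn(max-min)
		if !used[p] {
			return p // bug: p is never marked as taken for later picks
		}
	}
	return -1
}

func main() {
	used := map[int]bool{} // ports held by existing allocations
	// Two dynamic asks in the same offer, against a two-port range so
	// the collision is easy to reproduce:
	a := pickDynamicPort(used, 20000, 20002)
	b := pickDynamicPort(used, 20000, 20002)
	fmt.Println(a, b) // frequently prints the same port twice
}
```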

What happens next varies:

  1. If the scheduler doesn't place the allocation on that node, the port conflict is thrown away and there's no problem.
  2. If the node is picked and this is the only (or last) allocation, the plan applier will reject the plan when it calls `SetAllocs`, as we'd expect.
  3. If the node is picked and there are additional allocations in the same eval that iterate over the same node, their call to `SetAllocs` will detect the impossible state and the node will be rejected. This can produce the puzzling behavior where a second task group for the job, without any networking at all, hits a port collision error!

It looks like this bug has existed since we implemented group networks, but there are several factors that add up to making the issue rare for many users yet frustratingly frequent for others:

  • You're more likely to hit this bug the more tightly packed your range for dynamic ports is. With 12,000 ports in the range by default, many clusters can avoid this for a long time.
  • You're more likely to hit case (3) for jobs with lots of allocations, or if a scheduler has to iterate over a large number of nodes, such as with system jobs, jobs with `spread` blocks, or (sometimes) jobs using `unique` constraints.

For unlucky combinations of these factors, it's possible that case (3) happens repeatedly, preventing scheduling of a given job until a client state change (e.g. restarting the agent so all its allocations are rescheduled elsewhere) re-opens the range of available dynamic ports.

This changeset:

  • Fixes the bug by accounting for collisions in dynamic port selection in `AssignPorts` (see the sketch after this list).
  • Adds test coverage for `AssignPorts`, expands coverage of this case for the deprecated `AssignTaskNetwork`, and tightens the dynamic port range in a scheduler test for spread scheduling to more easily detect this kind of problem in the future.
  • Adds a `String()` method to `Bitmap` so that any future "screaming" log lines have a human-readable list of used ports.
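
The shape of the fix, mirrored in the diff hunks further down, is to thread the ports already promised in the current offer through the picker and retry on a match. A simplified, self-contained sketch follows; it is not a verbatim excerpt, and the helper name and signature are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"slices" // standard library as of Go 1.21; previously golang.org/x/exp/slices
)

// pickDynamicPort is a simplified stand-in for the fixed selection
// logic: portsInOffer carries the ports already promised in this
// offer, so a fresh random pick can no longer self-collide.
func pickDynamicPort(used map[int]bool, portsInOffer []int, min, max int) (int, error) {
	for try := 0; try < 20; try++ {
		p := min + rand.Intn(max-min)
		if used[p] {
			continue // conflicts with an existing allocation
		}
		// the pick conflicted with a previous pick that hasn't been
		// saved to the index yet
		if slices.Contains(portsInOffer, p) {
			continue
		}
		return p, nil
	}
	return 0, errors.New("no free dynamic port available")
}

func main() {
	used := map[int]bool{}
	var offer []int
	for i := 0; i < 2; i++ {
		p, err := pickDynamicPort(used, offer, 20000, 20002)
		if err != nil {
			fmt.Println(err) // with only two ports, the range can exhaust
			return
		}
		offer = append(offer, p) // record the pick for later asks
	}
	fmt.Println(offer) // two distinct ports, e.g. [20001 20000]
}
```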

```diff
@@ -554,10 +557,14 @@ func (idx *NetworkIndex) AssignPorts(ask *NetworkResource) (AllocatedPorts, erro
 // lower memory usage.
 var dynPorts []int
 // TODO: its more efficient to find multiple dynamic ports at once
```
@tgross (Member, Author) commented on Mar 8, 2023:

Note to reviewers: this comment is really only true in the case where the dynamic range is full enough that we fail to get randomly selected ports within 20 tries. Optimizing for this will involve heavily refactoring this method to unpack this loop into multiple passes: first figure out the count of dynamic ports we need (per network/address), then get the offered ports, and then dole them out for each of the asked ports (a rough sketch follows below).

I want to do this performance work in the very near term, but not in the same PR as a major bug fix. And I want to tackle #13657 around the same time.
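
Purely as an illustration of that future refactor, and under the assumption that none of these helpers exist yet, a multi-pass pick might look like this (all names hypothetical):

```go
package main

import (
	"fmt"
	"math/rand"
)

// getDynamicPorts is a hypothetical sketch of the "multiple ports at
// once" idea: count the asks first, then draw that many distinct free
// ports in a single pass, instead of one pick per ask.
func getDynamicPorts(used map[int]bool, min, max, n int) ([]int, error) {
	out := make([]int, 0, n)
	taken := make(map[int]bool, n) // picks within this offer
	for tries := 0; len(out) < n && tries < 20*n; tries++ {
		p := min + rand.Intn(max-min)
		if !used[p] && !taken[p] {
			taken[p] = true
			out = append(out, p)
		}
	}
	if len(out) < n {
		return nil, fmt.Errorf("only found %d of %d dynamic ports", len(out), n)
	}
	return out, nil
}

func main() {
	asks := []string{"http", "grpc", "metrics"} // dynamic asks for one address
	ports, err := getDynamicPorts(map[int]bool{}, 20000, 32000, len(asks))
	if err != nil {
		panic(err)
	}
	for i, label := range asks {
		fmt.Printf("%s -> %d\n", label, ports[i]) // dole out the offered ports
	}
}
```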

@shoenig (Member) replied:

Sounds like a fun problem! I would even go so far as to suggest reconsidering the use of a Bitmap, perhaps instead thinking of our own data structure that provides an efficient API surface that actually makes sense here (eliminating the need for random tries with precise fallback). No reason it can't be done, it's just off the beaten path.

@tgross added the backport/1.3.x, backport/1.4.x, and backport/1.5.x labels on Mar 8, 2023
@tgross marked this pull request as ready for review on March 8, 2023 at 21:52
@schmichael (Member) left a comment:

Congratulations on tracking down this nasty and persistent bug!

(A review thread on nomad/structs/bitmap.go was resolved and is now outdated.)
```diff
@@ -742,6 +756,12 @@ func getDynamicPortsStochastic(nodeUsed Bitmap, minDynamicPort, maxDynamicPort i
 		goto PICK
 	}
 }
+// the pick conflicted with a previous pick that hasn't been saved to
+// the index yet
+if portsInOffer != nil && slices.Contains(portsInOffer, randPort) {
```
@schmichael (Member) commented:

The nil check (and len check above) aren't technically necessary as Contains uses range which just doesn't loop over empty slices.

A map or set would work for portsInOffer too, but I suspect it should be small enough that the slice is more/as efficient for common cases (<10 ports per offer).
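
For what it's worth, that behavior follows directly from how `slices.Contains` works: it ranges over the slice, and ranging over a nil slice iterates zero times. A quick demonstration:

```go
package main

import (
	"fmt"
	"slices" // standard library as of Go 1.21; previously golang.org/x/exp/slices
)

func main() {
	var portsInOffer []int // nil slice
	// No panic and no explicit nil check needed: Contains ranges over
	// the slice, and a nil slice ranges zero times, returning false.
	fmt.Println(slices.Contains(portsInOffer, 8080)) // false
	portsInOffer = append(portsInOffer, 8080)
	fmt.Println(slices.Contains(portsInOffer, 8080)) // true
}
```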

@shoenig (Member) replied:

The real advantage of Set is it makes code like this more readable, e.g. `slices.Contains(portsInOffer, randPort)` becomes `portsInOffer.Contains(randPort)`, while also being more efficient (O(1) vs O(n) lookups), though it would be interesting to benchmark here!
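
Assuming the `github.com/hashicorp/go-set` generic set library (not what this PR ships; just to show the readability difference being discussed), the comparison would look like:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-set" // assumed dependency, not used by this PR
)

func main() {
	// Build the offer as a set instead of a slice.
	portsInOffer := set.From([]int{20100, 20200})

	// Readable O(1) membership test instead of a linear scan.
	fmt.Println(portsInOffer.Contains(20100)) // true

	portsInOffer.Insert(20300)
	fmt.Println(portsInOffer.Contains(20300)) // true
}
```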

@tgross (Member, Author) replied:

Fixed the nil check (not sure which len check you mean?). I've kept the slice over the map/set because, as you noted, it doesn't make a meaningful performance difference at the size we're talking about, and on top of that, when I do the optimization pass mentioned in https://github.com/hashicorp/nomad/pull/16401/files#r1130056010 the return value ends up being a slice anyways.

@tgross (Member, Author) replied on Mar 9, 2023:

Eh, I was still writing this comment when @shoenig posted. I'm sold... I'll swap it out for a set. Actually, we can't pass a nil set, which `AssignTaskPorts` needs. Let's just keep it a slice for now and we can revisit when we do the optimization pass over this code.

@shoenig (Member) left a comment:

LGTM, awesome work @tgross!


@tgross (Member, Author) commented Mar 9, 2023:

> Sounds like a fun problem! I would even go so far as to suggest reconsidering the use of a Bitmap, perhaps instead thinking of our own data structure that provides an efficient API surface that actually makes sense here (eliminating the need for random tries with precise fallback). No reason it can't be done, it's just off the beaten path.

Yeah, I was thinking about this, and one of the interesting things about using a Bitmap is that it uses a fixed but large amount of space, it's cheap to check, and it's cheap to get a new port so long as the bitmap is sparsely populated. So it has a weird performance cliff when the bitmap gets full. We'll want to benchmark the space vs. time tradeoff at varying levels of sparseness.
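
To make that cliff concrete, here is a self-contained sketch of the stochastic-pick-then-precise-fallback strategy that `getDynamicPortsStochastic` hints at in the diff hunks above; it is simplified and the names are illustrative, not Nomad's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// pickPort sketches the two-phase strategy: cheap random probes while
// the bitmap is sparsely populated, then an exhaustive linear scan as
// a precise fallback. The fallback is the performance cliff: it only
// triggers when random probes keep landing on used ports.
func pickPort(used []bool, base int) (int, error) {
	// Stochastic phase: O(1) per probe when the range is mostly free.
	for try := 0; try < 20; try++ {
		p := rand.Intn(len(used))
		if !used[p] {
			return base + p, nil
		}
	}
	// Precise fallback: O(n) scan over the whole range.
	for p, taken := range used {
		if !taken {
			return base + p, nil
		}
	}
	return 0, errors.New("dynamic port range exhausted")
}

func main() {
	used := make([]bool, 12000) // a default-sized dynamic range, all free
	p, err := pickPort(used, 20000)
	if err != nil {
		panic(err)
	}
	fmt.Println("picked", p)
}
```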

@tgross merged commit c36d3bd into main on Mar 9, 2023
@tgross deleted the pfnr-networking-fix branch on March 9, 2023 at 15:09
tgross added a commit that referenced this pull request on Mar 9, 2023 (…16401); its commit message repeats the PR description above.
tgross added a commit that referenced this pull request on Mar 9, 2023 (…16401) (#16408); its commit message repeats the PR description above. Co-authored-by: Tim Gross <tgross@hashicorp.com>
@schmichael (Member) commented:

I stumbled across #8421 while spelunking for other network issues. Think we could close it as well? Not sure this is the exact fix for it, but combined with other work in the past 2 years I doubt that old issue is relevant anymore?

> Sounds like a fun problem! I would even go so far as to suggest reconsidering the use of a Bitmap, perhaps instead thinking of our own data structure that provides an efficient API surface that actually makes sense here (eliminating the need for random tries with precise fallback). No reason it can't be done, it's just off the beaten path.

> Yeah I was thinking about this and one of the interesting things about using a Bitmap is that it uses a fixed but large amount of space, it's cheap to check, and it's cheap to get a new port so long as the bitmap is sparsely-populated. So it has a weird performance cliff when the bitmap gets full. We'll want to benchmark the space vs time tradeoff at varying levels of sparseness.

I think the tradeoff is made on the assumption that only a small % of ports are ever used on any given node. With the default port range giving us 12,000 ports to work with, I'm guessing the bitmap is rarely 1% full (120 ports used for a single address on a single node).

That being said, folks have wanted dynamic port ranges for a long time (#1384 & #3242), with the upper bound being 10k ports requested by a single alloc!

So, all that to say, I think a balanced approach that doesn't overfit for any fill % is probably best. I think we can save a lot of CPU just by Doing Less Work (e.g. not doing any of this for allocs without ports!) regardless of the data structure used for tracking ports.

@tgross (Member, Author) commented Mar 9, 2023:

> I stumbled across #8421 while spelunking for other network issues. Think we could close it as well? Not sure this is the exact fix for it, but combined with other work in the past 2 years I doubt that old issue is relevant anymore?

Agreed. Done!

tgross added a commit that referenced this pull request Mar 13, 2023
The system scheduler uses a separate code path for reconciliation. During the
investigation into the "plan for node rejected" bug which was fixed in #16401,
it was discovered this code path doesn't maintain the invariant that no more
than 1 allocation per system job task group (or `count` allocations for sysbatch
jobs) should be left running on a given client. While this condition should be
impossible to encounter, the scheduler should be reconciling these cases.

Add a new `ensureSingleSystemAlloc` function that enforces the invariant that
there can be only a single desired-running allocation for a given system job on
a given node.
tgross added a commit that referenced this pull request on Mar 15, 2023, with the same message as the Mar 13 commit above.
Labels: backport/1.3.x (backport to 1.3.x release line), backport/1.4.x (backport to 1.4.x release line), backport/1.5.x (backport to 1.5.x release line), theme/scheduling, type/bug
Merging this pull request may close: intermittent error "reserved port collision" when running new job

3 participants