
PVF: prioritise execution depending on context #4632

Closed
sandreim opened this issue May 29, 2024 · 14 comments · Fixed by #4837

@sandreim
Contributor

From the node perspective, the goal is to trade off parachain liveness for finality. When finality lag hits a threshold, the mechanism should kick in and enforce the following priority:

  • dispute PVF execution
  • approval PVF execution
  • backing PVF execution

In case of node overload, this change adds back pressure to candidate backing without dropping backed candidates. Thanks to async backing, a delayed candidate can still be backed in the next relay chain block as long as its relay parent has not expired.

Some more details on the subject can be found in this discussion
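For illustration, the intended ordering could be captured roughly like this (a minimal sketch with hypothetical types, not the actual PVF host API):

```rust
// Minimal sketch only: hypothetical types, not the actual PVF host API.
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
enum ExecutePriority {
    Backing,
    Approval,
    Dispute,
}

/// Pick the highest-priority pending job: disputes, then approvals, then backing.
fn next_job(queued: &mut Vec<ExecutePriority>) -> Option<ExecutePriority> {
    queued.sort();
    queued.pop()
}
```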

sandreim added the I9-optimisation (An enhancement to provide better overall performance in terms of time-to-completion for a task.) and T8-polkadot (This PR/Issue is related to/affects the Polkadot network.) labels on May 29, 2024
@burdges

burdges commented May 30, 2024

I'm not perfectly happy about disputes being higher priority than approvals, depending upon what approvals means:

If we delay approval announcements then we risk adversaries breaking soundness when honest nodes get bogged down.

If we delay approval checks, then we risk unnecessary escalations. If we add some "I'm here but overworked" vote then we incur that code complexity.

We do not necessarily have a choice but to prioritize disputes of course, since disputes mess everything up anyways. We could discuss "soft" options like a dispute appearing on-chain turns off backing & inclusion for a few blocks. It's tricky picking how long though: A dispute could resolve fast, but not a timeout dispute, although maybe polkavm fixes that.

@sandreim
Contributor Author

sandreim commented May 31, 2024

I'm not perfectly happy about disputes being higher priority than approvals, depending upon what approvals means:

If we delay approval announcements then we risk adversaries breaking soundness when honest nodes get bogged down.

If we delay approval checks, then we risk unnecessary escalations. If we add some "I'm here but overworked" vote then we incur that code complexity.

This is what I had in mind. It will indeed cause escalations, but that is fine: it back pressures backing PVF execution (lowest priority), which means less approval work for the next block, allowing the network to recover.

We do not necessarily have a choice but to prioritize disputes of course, since disputes mess everything up anyways. We could discuss "soft" options like a dispute appearing on-chain turns off backing & inclusion for a few blocks. It's tricky picking how long though: A dispute could resolve fast, but not a timeout dispute, although maybe polkavm fixes that.

I totally agree, we will have to prioritise disputes, but maybe we should not starve approvals like we would backing, and still allow 10% approval PVF execution vs 90% disputes or something like that.

@burdges

burdges commented Jun 1, 2024

Alright delaying the approval checks is secure. It'll just cause more slowdowns, which'll ultimately hit backing.

If approvals remain at 100%, but backing stops completely for one block, then we've made time to process one dispute, well except for the extra reconstruction overhead. We might've more than one dispute of course, but someone is being billed for them. We cannot stop backing completely either, but..

Also, if approvals run at maybe 60% or something, then we typically still finish approval checks before we no-show, which avoids escalations.

I think the first question here is: At the time disputes occur, how hard do we back pressure our local backing? We could skip 2-3 backing slots or something every time a dispute occurs, even if we do not feel the pressure ourselves?

@AndreiEres
Contributor

@sandreim @burdges
I'd like to clarify the final numbers:

  • We execute dispute tasks first
  • But we guarantee 10% of approval tasks even if we still have disputes.
  • We only start with backing tasks when other queues are empty.

Please correct me if I got it wrong.

@sandreim
Contributor Author

My current thinking is to still allow 10% backing. Getting at least 10% approvals is not going to have any impact on finality if it is lagging because of disputes, but it will reduce the number of no-shows, which is beneficial. However, because disputes happen very rarely, I don't think it is worth doing.

To get a clear answer we should run some dispute load tests on Versi with both scenarios.

@burdges

burdges commented Jun 26, 2024

We're discussing situations where disputes have occurred across many cores, or in many time slots for only a few parachains.

If we've only a few parachains causing many disputes, then maybe they could be handled in sequence, but not sure doing so warrants extra complexity. We might however sequence disputes anyways, like maybe oldest first so reversions turn the rest into remote disputes.

In principle, we could do remote disputes more lazily, but we've no consensus upon which disputes are remote, unless grandpa finalizes something remote from the dispute's perspective. Again not sure if we need code for this: Yes, this could save considerable CPU during a storm of correct disputes of invalid blocks, but that's even more rare. Imho, incorrect storms of disputes of valid blocks sound way more likely.

We should likely keep that part simple for now, but just discussing the design space there.

Anyways..

Approvals should imho never halt completely, not sure how much reserve, but 10% sounds reasonable.

We only start with backing tasks when other queues are empty.

Approvals might never be empty, so not sure this works per se.

In theory, I'd halt backing completely if we've really exhausted all other resources. We'd maybe want some exceptions though, like system parachains doing elections, or DKGs, or maybe other governance. If we'd want such special reserved system parachains, then we might start by simply never halting backing completely, and then later, if necessary, add per-parachain configuration there.

Approvals should always have more reserved capacity, as otherwise approvals might never catch up vs backing.

At what points should we trigger this back pressure on backing?

  • Individual criteria stop us overworking ourselves:
    • Slow/delay backing based upon our CPU load --- I dislike this one because high CPU load means we're wasting less silicon.
    • Halt/delay/slow backing if we're currently in no-show.
    • Delay/slow backing if our queue is too full --- This might be more sensitive than no-shows, but what does it mean if our queue is not full, but we're in no-show?
  • Network criteria stop us overworking others:
    • Slow/delay backing dependent upon finality lag --- Relatively simple to configure.
    • Slow/delay backing dependent upon how many no-shows exist overall --- This is nicely objective, and much more reactive, but maybe too reactive, and adds more parameters.

That's like 5 possible overworked metrics. Any thoughts on what's simplest? If we only do one, then queue fullness sounds good, because maybe our queue fills up because others no-show. If we do two, then maybe add network no-shows or finality. I think beyond this then we should more carefully assess.
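For concreteness, a rough sketch of how the two simplest triggers (queue fullness plus overall no-shows) might combine; all names and thresholds here are hypothetical placeholders:

```rust
// Hypothetical overload signal combining queue fullness and overall no-shows.
// Both thresholds are illustrative placeholders, not tuned values.
struct BackpressureInputs {
    execution_queue_len: usize,
    observed_no_shows: usize,
}

const MAX_QUEUE_LEN: usize = 12; // assumed comfortable queue length
const MAX_NO_SHOWS: usize = 3;   // assumed tolerated network-wide no-shows

fn should_slow_backing(inputs: &BackpressureInputs) -> bool {
    inputs.execution_queue_len > MAX_QUEUE_LEN || inputs.observed_no_shows > MAX_NO_SHOWS
}
```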

@sandreim
Contributor Author

We're discussing situations where disputes have occurred across many cores, or in many time slots for only a few parachains.

If we've only a few parachains causing many disputes, then maybe they could be handled in sequence, but not sure doing so warrants extra complexity. We might however sequence disputes anyways, like maybe oldest first so reversions turn the rest into remote disputes.

With validator disabling, these things are looking very good now. I don't think we should optimize for disputes, and in the current proposal they have max priority.

We should likely keep that part simple for now, but just discussing the design space there.

Sure, we just want some simple prioritization to better control system load. Further optimizations should be done later, only if needed.

Approvals should imho never halt completely, not sure how much reserve, but 10% sounds reasonable.

What is the reasoning for this 10%? Do you believe that reducing no-shows by a small margin improves the overall outcome of a dispute storm scenario?

We only start with backing tasks when other queues are empty.

Approvals might never be empty, so not sure this works per se.

Yes, that might be true to some extent, so we need to reserve some % resources for backing.

In theory, I'd halt backing completely if we've really exhausted all other resources. We'd maybe want some exceptions though, like system parachains doing elections, or DKGs, or maybe other governance. If we'd want such special reserved system parachains, then we might start by simply never halting backing completely, and then later, if necessary, add per-parachain configuration there.

Yes, we should reserve capacity for the system parachains; giving something like 10% to backing no matter the load should not make the situation worse.

Approvals should always have more reserved capacity, as otherwise approvals might never catch up vs backing.

Yeah, they have 90% if there are no disputes. We can consider lowering the backing reservation to just 5%, so 1 in 20 executions is guaranteed to be backing. If these are light blocks it should still work fine. However, if blocks are 2s, it will take 40s for the candidate to be backed, by which time its relay parent has already expired.

At what points should we trigger this back pressure on backing?

  • Individual criteria stop us overworking ourselves:

    • Slow/delay backing based upon our CPU load --- I dislike this one because high CPU load means we're wasting less silicon.

This is hard to measure in practice, but the prioritization proposed here actually achieves this without measuring the load directly.

  • Halt/delay/slow backing if we're currently in no-show.

I think it is a good idea; this can happen, for example, because availability recovery is slow due to the network. In this case, there wouldn't be any actual CPU load, so the prioritization doesn't help at all.

  • Delay/slow backing if our queue is too full --- This might be more sensitive than no-shows, but what does it mean if our queue is not full, but we're in no-show?

The prioritization here should delay backing if the queue is full in the sense that there is more work than we can sustain in a reasonable time. The fullness depends on how much time the execution takes. You could have 10 executions queued and each one will take 2s, but you could also have 30 executions of 100ms each. Hard to know how full the queue is until you execute it. We could optimize and add a TTL for the backing executions to drop them if the relay parent has expired.
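A small sketch of that TTL idea, assuming a hypothetical ancestry-window check rather than any existing API:

```rust
// Sketch only: drop queued backing executions whose relay parent has fallen
// outside an assumed allowed-ancestry window.
struct QueuedBacking {
    relay_parent_number: u32,
}

const ALLOWED_ANCESTRY: u32 = 3; // hypothetical async-backing window

fn prune_expired(queue: &mut Vec<QueuedBacking>, best_block_number: u32) {
    queue.retain(|job| {
        best_block_number.saturating_sub(job.relay_parent_number) <= ALLOWED_ANCESTRY
    });
}
```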

  • Network criteria stop us overworking others:

    • Slow/delay backing dependent upon finality lag --- Relatively simple to configure.

This must be done in the runtime. We just need to put in finality proofs and then we can select how many get backed.

  • Slow/delay backing dependent upon how many no-shows exist overall --- This is nicely objective, and much more reactive, but maybe too reactive, and adds more parameters.

No-shows will typically raise the CPU/network load as more tranches are needed to cover, so we will slow down backing a bit later anyway. Doing it earlier, as you propose, can probably improve the situation faster.

That's like 5 possible overworked metrics. Any thoughts on what's simplest? If we only do one, then queue fullness sounds good, because maybe our queue fills up because others no-show. If we do two, then maybe add network no-shows or finality. I think beyond this then we should more carefully assess.

We want to implement the prioritization first. Then we do some glutton testing and calibrate the backing percentage. We should also test 10% approval reservation.

@burdges

burdges commented Jun 28, 2024

Approvals should imho never halt completely, not sure how much reserve, but 10% sounds reasonable.
What is the reasoning for this 10%?

I'd actually prefer a higher minimum for approvals, like max 70% for disputes, minimum 25% for approvals, and minimum 5% for critical system parachains, but no backing reservation for other parachains, or even non-critical system parachains really.

We could likely go below this 25%, but then we should think about critical vs non-critical system parachains. We could ignore critical vs non-critical system parachains and simply reserve more approval time for them. Or maybe we'll have so few system parachains that 10% is already enough and we can ignore this distinction?

Do you believe that reducing no-shows by a small margin improves the overall outcome of a dispute storm scenario?

Yeah, every no-show adds 2.25 more checkers.

This is hard to measure in practice, but the prioritization proposed here actually achieves this without measuring the load directly.

Alright yeah let's not directly measure CPU load. :)

Halt/delay/slow backing if we're currently in no-show.
I think it is a good idea; this can happen, for example, because availability recovery is slow due to the network. In this case, there wouldn't be any actual CPU load, so the prioritization doesn't help at all.

Interesting, the queue could be empty but slow recovery might be causing no-shows. At what times do we expect this?

At least one scenario goes: Some validators get elected who have network disruptions to many other validators, like NATs or BGP attacks. Or maybe some validators try earning themselves extra points in availability by voting early that they have their chunk, assuming they'll get it eventually. If enough do so, then this hurts availability.

I guess back pressure makes sense here, but it's clearly secondary to the queue.

Delay/slow backing if our queue is too full --- This might be more sensitive than no-shows, but what does it mean if our queue is not full, but we're in no-show?
The prioritization here should delay backing if the queue is full in the sense that there is more work than we can sustain in a reasonable time. The fullness depends on how much time the execution takes. You could have 10 executions queued and each one will take 2s, but you could also have 30 executions of 100ms each. Hard to know how full the queue is until you execute it. We could optimize and add a TTL for the backing executions to drop them if the relay parent has expired.

Can we just assume worst case here? We'd tune based on gluttons I guess, so maybe that's easiest anyways?

Slow/delay backing dependent upon finality lag --- Relatively simple to configure.
This must be done in the runtime. We just need to put in finality proofs and then we can select how many get backed.

Why? I'd think the opposite: We know what our grandpa is voting on, so if it's too old then apply back pressure.

Slow/delay backing dependent upon how many no-shows exist overall --- This is nicely objective, and much more reactive, but maybe too reactive, and adds more parameters.
No-shows will typically raise the CPU/network load as more tranches are needed to cover, so we will slow down backing a bit later anyway. Doing it earlier, as you propose, can probably improve the situation faster.

Alright so we've boiled down my criteria list to:
a. worst case queue based estimate
b. overall no-show count
c. our own local no-shows

We want to implement the prioritization first.

Yeah of course, one thing at a time. We can talk later about whether we really need all of a,b,c. :)

I'd missed one factor above: We need back pressure not only in our own backing like discussed here, but also in relay chain block production: If dishonest or confused backers produce backing statements while being back pressured, then an honest relay chain block producer should skip their statements too. We need not rush into doing this, since it only matters if nodes are being dishonest, connecting poorly, etc.

Anyways, we'd eventually wire some of those three criteria into (x) back pressuring the creation & gossiping of backing statements, and (y) placing backing statements into relay chain blocks.

@AndreiEres
Contributor

In my proposal we have 3 levels of priority.
If we talk about giving 5-10% of execution to backing for system parachains, it may end up adding another level. Instead, I'd put system parachains backing in the same queue as approval work.

So we'll have that mapping:

  • Critical: disputes.
  • Normal: approvals and system parachains backing.
  • Low: other parachains backing.

I'd start with a 70/20/10 split and then we can tweak it as we test.
What do you think @sandreim @burdges?
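For illustration, that mapping and starting split could look roughly like this (a sketch with hypothetical names; the percentages are just the proposed starting point):

```rust
// Sketch of the proposed three-level queue; the split is the suggested
// 70/20/10 starting point and would be calibrated in testing.
enum QueueLevel {
    Critical, // disputes
    Normal,   // approvals and system parachain backing
    Low,      // other parachain backing
}

fn share_of_executions(level: QueueLevel) -> u8 {
    match level {
        QueueLevel::Critical => 70,
        QueueLevel::Normal => 20,
        QueueLevel::Low => 10,
    }
}
```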

@burdges

burdges commented Jul 4, 2024

Are those numbers CPU time "priorities" for those tasks when back pressure gets applied? Or are they overall allocations at all times? If it's a distinguished state, then any idea how quickly we can enter and leave the "back pressured" state?

I presume those priorities fall through, so if there are no disputes then the 100% is divided among the remaining categories?

Approvals & system parachains backing should not be given the same weight. We do not notice this problem right now, but if someone does something silly like give longer runtimes (JAM) then this'll break.

Approvals should be given much more CPU time, which under good conditions resembles the parameter choices in #640. In other words, if there are no problems then approvals should've roughly relayVrfModuloSamples + 3 times the CPU time of all backing jobs. I'm still eyeballing that +3 there but anyways..

At a more intuitive level, two backing statements create work for like 35 other nodes, but possibly 1000 other nodes if we're experiencing network disruptions, so approvals should be like 14 times more CPU than all backing, even if zero no-shows occur, not so different from the relayVrfModuloSamples + 3 estimate. Ideally this would be higher when back pressure gets applied.

Anyways..

It sounds like you do not have a distinguished back pressure state yet, so then maybe the numbers should be:

70% disputes
28% approvals & availability
2% all parachains, both system and other.

If there are no disputes then we'd choose backing over approvals 6.7% of the time, which sounds reasonable. It's still possible this change results in more delayed backing votes, but that likely signals some problems elsewhere. Any thoughts @alexggh?
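The 6.7% follows from renormalising the split once the dispute share falls through; a quick sketch of the arithmetic:

```rust
// Sketch: with no pending disputes the 70% dispute share falls through and
// the remaining weights are renormalised.
fn backing_share_without_disputes(approvals: f64, backing: f64) -> f64 {
    backing / (approvals + backing)
}

// backing_share_without_disputes(28.0, 2.0) ≈ 0.067, i.e. ~6.7%.
```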

Also..

We can separate system and other parachains once we eventually add some distinguished back pressure state, but the simplest breakdown under back pressure would be

70% disputes
28% approvals & availability
2% system parachain backing
0% other parachain backing

In this, worst case includes several meanings:

  1. the disruption looks more than transitory -- If we'd a transitory disruption then we'd recover quickly anyways, but maybe applying & unapplying this quickly works too.
  2. system parachains make up maybe half of our usage -- At present, this looks likely because silly people want smart contracts on AssetHub. We could say "true" system parachains here, and exclude AssetHub & smart contract chains from that list, but this ratio simplifies things.

@alexggh
Contributor

alexggh commented Jul 4, 2024

I presume those priorities fall through, so if there are no disputes then the 100% is divided among the remaining categories?

Yes, we definitely need to implement it like that; otherwise we waste capacity.

If there are no disputes then we'd choose backing over approvals 6.7% of the time, which sounds reasonable. It's still possible this change results in more delayed backing votes, but that likely signals some problems elsewhere. Any thoughts @alexggh?

The way I see it, backing is more time sensitive than approving, but approving is more important than backing, so our prioritisation mechanism needs to reflect that. I don't think it is a problem to delay backing when we are overloaded, since we need to apply back-pressure to avoid creating more approval work. But it is an issue if we delay backing just because we have a steady stream of approvals and always prioritise approvals over backing, or if we add a significant delay to backing when we would actually be better served by doing the backing first and then all the other approvals (since we've got time for them).

I worry that allocating just 2% to backing might add extra delay in situations where we are not overloaded. My simple heuristic would be something like this: if the execution queue is under NEW_WORK_PER_BLOCK (MODULO_SAMPLES + 1), backing should be allocated first; if not, use this statistical prioritisation.
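A minimal sketch of that heuristic, treating NEW_WORK_PER_BLOCK as MODULO_SAMPLES + 1 and using an example parameter value (names and numbers are illustrative):

```rust
// Hypothetical constants: how many approval executions a validator normally
// picks up per relay-chain block.
const MODULO_SAMPLES: usize = 6; // example value, not the live parameter
const NEW_WORK_PER_BLOCK: usize = MODULO_SAMPLES + 1;

/// Back the pending candidate first while the execution queue is short;
/// otherwise fall back to the statistical prioritisation discussed above.
fn back_first(execution_queue_len: usize) -> bool {
    execution_queue_len < NEW_WORK_PER_BLOCK
}
```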

Bear in mind that we have at least 2 (maybe 4 in the future) parallel execution streams, so approval votes wouldn't be starved, because we should have no more than one block to back.

@burdges

burdges commented Jul 4, 2024

That 2% becomes 6.7% if there are no disputes.

What is NEW_WORK_PER_BLOCK? Yes, we could do backing first if the execution queue were under MODULO_SAMPLES+1, and no other back pressure trigger applied, but..

Bear in mind that we have at least 2 (maybe 4 in the future) parallel execution streams, so approval votes wouldn't be starved, because we should have no more than one block to back.

This makes sense only if we detect that the system needs back pressure and then change the rules. All those backing votes create more work for the whole system.

I launched off into discussing detection above, but afaik the current discussion is really only about back pressure without explicit detection, aka only using disputes as the trigger in a simplistic way.

@alexggh
Contributor

alexggh commented Jul 4, 2024

What is NEW_WORK_PER_BLOCK?

It is how many parachain blocks we normally expect to execute/verify per relay-chain block; it should be around MODULO_SAMPLES+1.

I launched off into discussing detection above, but afaik the current discussion is really only about back pressure without explicit detection, aka only using disputes as the trigger in a simplistic way.

Yeah, simplistic is what I'm thinking as well, but this is not only for disputes. I'm more concerned about the situation where sharding breaks because of no-shows and nodes have to check a lot more candidates than the available CPU resources allow, so we need to slow down so we can catch up.

I'm not too opinionated about this, but I think any solution/split we end up with needs to make sure it doesn't affect the base case where the node is not overloaded.

@burdges

burdges commented Jul 4, 2024

Ain't clear "expect" is well defined there. We've the current queue length, which depends upon our assignments from parameters and no-shows, and the unknowable workload they represent.

I'll suggest roughly this logic:

If there exists a dispute that's pending, aka not yet being run, then we probabilistically run the dispute 70-80% of the time, or fall back to the non-dispute system 30-20% of the time.

In our non-dispute system..

  • We run an approvals job anytime we have no pending backing job.
  • Assuming we've a pending backing job B then..
    • Define HAPPY_QUEUE = MODULO_SAMPLES+1.
    • We run B first anytime our_queue <= HAPPY_QUEUE.
    • Assuming our_queue > HAPPY_QUEUE then we run an approvals job or B probabilistically:
      • We run B with 10% odds if B is a true system parachain, which imho ideally excludes smart contract chains.
      • We run B with 3% odds if B is not a true system parachain.

Again it's possible this happy queue test messes things up, because now we do not back unless we're caught up on our workload from the previous relay chain block, so this'll require some care and revision. We could set HAPPY_QUEUE higher or count our own no-shows and/or others' no-shows here. We're looking at the queue first because we think it's more sensitive, but no-shows being less sensitive may be better, although simply setting HAPPY_QUEUE higher works too.

If you need a constant, then HAPPY_QUEUE = 12 or a similar over-estimate will make it less sensitive and would let you debug the logic without plumbing in the MODULO_SAMPLES parameter. An advantage of starting with HAPPY_QUEUE larger than desired is that it'll be less disruptive, and we can figure out how much we like the system under bad network conditions first. As a further initial simplification, you can assume everything is a true system parachain for initial testing.
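A compact sketch of that decision flow; names, constants, and probabilities are taken from the comment above and should be read as placeholders, not an implementation:

```rust
use rand::Rng; // assumed dependency for the probabilistic choices

const MODULO_SAMPLES: usize = 6; // example value, not the live parameter
const HAPPY_QUEUE: usize = MODULO_SAMPLES + 1;

enum Job {
    Dispute,
    Approval,
    Backing,
}

fn pick_job(
    rng: &mut impl Rng,
    has_pending_dispute: bool,
    has_pending_backing: bool,
    backing_is_true_system_chain: bool,
    our_queue: usize,
) -> Job {
    // Run a pending dispute roughly 70-80% of the time, else fall through.
    if has_pending_dispute && rng.gen_bool(0.75) {
        return Job::Dispute;
    }
    // No pending backing job: run approvals.
    if !has_pending_backing {
        return Job::Approval;
    }
    // Back immediately while the queue is "happy".
    if our_queue <= HAPPY_QUEUE {
        return Job::Backing;
    }
    // Otherwise back only probabilistically: 10% odds for true system
    // parachains, 3% for everything else.
    let odds = if backing_is_true_system_chain { 0.10 } else { 0.03 };
    if rng.gen_bool(odds) {
        Job::Backing
    } else {
        Job::Approval
    }
}
```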

github-merge-queue bot pushed a commit that referenced this issue Oct 9, 2024
Resolves #4632

The new logic optimizes the distribution of execution jobs for disputes, approvals, and backings. Testing shows reduced finality lag and candidate checking times, especially under heavy network load.

### Approach

This update adds prioritization to the PVF execution queue. The logic
partially implements the suggestions from
#4632 (comment).

We use thresholds to determine how much a current priority can "steal"
from lower ones:
-  Disputes: 70%
-  Approvals: 80%
-  Backing System Parachains: 100%
-  Backing: 100%

A threshold indicates the portion of the remaining capacity that a given priority can claim from the priorities below it.

For example:
-  Disputes take 70%, leaving 30% for approvals and all backings.
- 80% of the remaining goes to approvals, which is 30% * 80% = 24% of
the original 100%.
- If the thresholds were expressed as parts of the original 100%, approvals couldn't take more than 24%, even if there are no disputes.

Assuming a maximum of 12 executions per block, with a 6-second window, 2
CPU cores, and a 2-second run time, we get these distributions:

-  With disputes: 8 disputes, 3 approvals, 1 backing
-  Without disputes: 9 approvals, 3 backings

It's worth noting that when there are no disputes, if there's only one
backing job, we continue processing approvals regardless of their
fulfillment status.
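A simplified sketch of the share arithmetic from the example above (illustrative only, not the merged implementation):

```rust
// Illustrative only: reproduces the worked example above, where each priority
// may claim at most its threshold share of the capacity left by higher ones.
fn shares(dispute_threshold: f64, approval_threshold: f64) -> (f64, f64, f64) {
    let disputes = dispute_threshold;                       // e.g. 0.70
    let approvals = (1.0 - disputes) * approval_threshold;  // 0.30 * 0.80 = 0.24
    let backing = 1.0 - disputes - approvals;               // remainder for backing
    (disputes, approvals, backing)
}

// shares(0.70, 0.80) ≈ (0.70, 0.24, 0.06); with no disputes pending, the
// dispute share falls through to the lower priorities.
```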

### Versi Testing 40/20

Testing showed a slight difference in finality lag and candidate
checking time between this pull request and its base on the master
branch. The more loaded the network, the greater the observed
difference.

Testing Parameters:
-  40 validators (4 malicious)
-  20 gluttons with 2 seconds of PVF execution time
-  6 VRF modulo samples
-  12 required approvals

![Pasted Graphic 3](https://github.com/user-attachments/assets/8b6163a4-a1c9-44c2-bdba-ce1ef4b1eba7)
![Pasted Graphic 4](https://github.com/user-attachments/assets/9f016647-7727-42e8-afe9-04f303e6c862)

### Versi Testing 80/40

For this test, we compared the master branch with the branch from
#5616. The second branch
is based on the current one but removes backing jobs that have exceeded
their time limits. We excluded malicious nodes to reduce noise from
disputing and banning validators. The results show that, under the same
load, nodes experience less finality lag and reduced recovery and check
time. Even parachains are functioning with a shorter block time,
although it remains over 6 seconds.

Testing Parameters:
-  80 validators (0 malicious)
-  40 gluttons with 2 seconds of PVF execution time
-  6 VRF modulo samples
-  30 required approvals


![image](https://github.com/user-attachments/assets/42bcc845-9115-4ae3-9910-286b77a60bbf)

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>