
Approve multiple candidates with a single signature #7554

Open · wants to merge 62 commits into base: sandreim/vrf_modulo_comapct_assignment

Conversation

@alexggh (Contributor) commented on Jul 28, 2023

Initial implementation of the plan discussed in paritytech/polkadot-sdk#701, built on top of #6782.

Overall idea

When approval-voting checks a candidate and is ready to advertise the approval, defer it in a per-relay-chain-block queue until we either have MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign whatever candidates we have available (a sketch of this decision logic follows the list below).

This should allow us to reduce the number of approval messages we have to create/send/verify. The parameters are configurable, so we should find values that balance:

  • Security of the network: delaying the broadcast of an approval shouldn't put finality at risk, and to make sure that never happens we won't delay sending a vote if we are past 2/3 of the no-show time.
  • Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 & MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from the measurements we did on Versi that it bottlenecks approval-distribution/approval-voting when the number of validators and parachains increases significantly.
  • Block storage: in case of disputes we have to import these votes on chain, which increases the necessary storage by MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that disputes are not the normal mode of operation and we will limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should be good enough. Alternatively, we could create a better way to store this on-chain through indirection, if that's needed.
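A minimal sketch of that decision logic, with assumed names, types, and constant values (illustrative only, not the code in this changeset):

// Illustrative sketch only: constants, types, and names are assumptions.
type Tick = u64;
type CandidateHash = [u8; 32]; // stand-in for polkadot_primitives::CandidateHash

const MAX_APPROVAL_COALESCE_COUNT: usize = 4; // assumed value
const MAX_APPROVALS_COALESCE_TICKS: Tick = 6; // assumed value

struct PendingApprovals {
    candidates: Vec<CandidateHash>,
    oldest_enqueued_tick: Tick,
}

impl PendingApprovals {
    /// Sign now if the batch is full, the oldest entry has waited long enough,
    /// or delaying further would cross 2/3 of the no-show window.
    fn should_sign_now(&self, current_tick: Tick, two_thirds_no_show_tick: Tick) -> bool {
        self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT
            || current_tick >= self.oldest_enqueued_tick + MAX_APPROVALS_COALESCE_TICKS
            || current_tick >= two_thirds_no_show_tick
    }
}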

TODO:

TODO: Migration is not correctly handled; should be done before Versi testing.

}),
DisputeStatement::Valid(ValidDisputeStatementKind::ApprovalChecking) =>
    ApprovalVote(candidate_hash).signing_payload(session),
DisputeStatement::Valid(ValidDisputeStatementKind::ApprovalCheckingV2(
    candidate_hashes,
)) => ApprovalVoteMultipleCandidates(candidate_hashes.clone()).signing_payload(session),
Contributor commented:

This should ensure that candidate_hash is included in candidate_hashes.

We could also save a bit of space by having ValidDisputeStatementKind::ApprovalCheckingV2 include only the other candidate hashes plus the index in the vec where the candidate hash in question should be inserted. Maybe overcomplicated, but payload_data returns a Vec<u8> unconditionally, which is a problem: we can't represent a failure path with the current interface.

As it stands, we'd allow any set of candidate hashes to be treated as a valid dispute vote for any candidate.

Pseudocode

ValidDisputeStatementKind::ApprovalCheckingV2(candidate_hashes, index) => {
    // Clamp the claimed index, then re-insert the disputed candidate at
    // that position before building the signing payload.
    let mut final_candidates = candidate_hashes.clone();
    let index = index.min(final_candidates.len());
    final_candidates.insert(index, candidate_hash);
    ApprovalVoteMultipleCandidates(final_candidates).signing_payload(session)
}

alexggh (Contributor, Author) replied:

Agree, this interface wasn't cleaned up yet; it was just in a state that worked on the happy path. I was thinking about making candidate_hash an Option<CandidateHash> and, when it is Some, simply adding it to the payload.

What do you think about that?

alexggh (Contributor, Author) replied:

Opted instead for throwing an error if the provided candidate_hash is not in the vec of candidate_hashes. I think that keeps the API easy to use and prevents accidental misuse.

See here: 2271c71#diff-e876407804ef19c762a4fe91db962a8eea5185a3aee42ed76b60fb8b549beb03R1288; let me know what you think.
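For reference, a hypothetical sketch of that shape; the function signature, error type, and stand-in types here are assumptions, and the real change is in the linked commit:

type SessionIndex = u32;
type CandidateHash = [u8; 32]; // stand-in for the real hash type

enum PayloadError {
    // The disputed candidate must be part of the coalesced set.
    CandidateHashMissing,
}

// Relies on the surrounding module's ApprovalVoteMultipleCandidates wrapper,
// as in the diff excerpt above.
fn payload_data(
    candidate_hash: CandidateHash,
    candidate_hashes: &Vec<CandidateHash>,
    session: SessionIndex,
) -> Result<Vec<u8>, PayloadError> {
    if candidate_hashes.contains(&candidate_hash) {
        Ok(ApprovalVoteMultipleCandidates(candidate_hashes.clone()).signing_payload(session))
    } else {
        Err(PayloadError::CandidateHashMissing)
    }
}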

Contributor replied:

It seems fine, so long as we throw an error in dispute handling, as is normally done.

I do feel this part of the code could be much better optimized.

pub max_approval_coalesce_count: u32,
/// The maximum time we wait for a candidate approval to be coalesced with
/// the ones for other candidates before we sign it and distribute it to our peers.
pub max_approval_coalesce_wait_millis: u32,
Contributor commented:

IMO this shouldn't be in the runtime configuration; it is rather a strategy set on the node side.

The runtime should be concerned with validity, not necessarily how the nodes get there in the fastest possible way.

@alexggh (Contributor, Author) replied on Aug 1, 2023:

Are you referring just to the max_approval_coalesce_wait_millis part, or do you think that both configuration parameters shouldn't be there?

Contributor replied:

The count is a hard parameter defining a boundary between validity and invalidity, which could reasonably live in the runtime or else just be an implementation constant. Though it should probably be very high if defined in the runtime.

The wait time is just a tactical choice on the validator's side, and more advanced strategies would be precluded by hardcoding it here.

alexggh (Contributor, Author) replied:

Ok, I see. One reason I had for keeping at least max_approval_coalesce_count in the runtime is that it gives us a good knob to enable v2 approvals on all validators, instead of having to do two separate node releases to enable this logic.

But I guess just for that use-case I could go with the option where I make both max_approval_coalesce_count and max_approval_coalesce_wait_ticks implementation constants and just have a param in the runtime, enable_approvals_v2. Would that make more sense?

@rphmeier (Contributor) replied on Aug 17, 2023:

I haven't reviewed the changeset in-depth yet, but typically there are two approaches to enabling new features:

  1. Do a first node upgrade that adds support for the change, followed by a second one that activates the behavior, after giving nodes enough time to upgrade to the first.
  2. Do a single node upgrade that adds support for the change, which is activated by a runtime switch such as a version increase.

(2) is usually better as it goes through chain governance, which it sounds like you are going for. Great. (See the #5022 changeset for an example, where we use the async_backing_params runtime API to activate the feature.)

> I could go with the option where I make both max_approval_coalesce_count and max_approval_coalesce_wait_ticks implementation constants and just have a param in the runtime enable_approvals_v2. Would that make more sense?

Yep, makes sense to me, as long as the following statement is something you agree with: we don't mind coalesced bundles being arbitrarily large as long as they are valid, and we don't expect that valid bundles will be too large to handle within the disputes pallet. Setting a hard maximum seems valuable to protect per-block dispute load from causing overweight blocks (see the sketch below).
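For illustration only, a hard cap in the disputes path could look roughly like this; the constant value, error variant, and stand-in hash type are assumptions, not part of this changeset:

type CandidateHash = [u8; 32]; // stand-in for the real hash type

const MAX_COALESCED_CANDIDATES: usize = 16; // assumed bound; single digits may be enough

enum DisputeStatementError {
    TooManyCoalescedCandidates,
}

// Reject oversized bundles before signature checking, so a malicious vote
// cannot blow up per-block dispute weight.
fn check_coalesced_size(candidate_hashes: &[CandidateHash]) -> Result<(), DisputeStatementError> {
    if candidate_hashes.len() > MAX_COALESCED_CANDIDATES {
        return Err(DisputeStatementError::TooManyCoalescedCandidates)
    }
    Ok(())
}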

@alexggh (Contributor, Author) replied on Aug 18, 2023:

> Setting a hard maximum seems valuable to protect per-block dispute load from causing overweight blocks.

Yes, we definitely need a maximum here to protect against per-block disputes creating overweight blocks.

... additionally computed in ticks, as is done everywhere else. And added some tests to make sure approval-voting behaves the way we intended.

acatangiu and others added 7 commits August 22, 2023 09:08
* runtime: add BEEFY and MMR to Westend

Signed-off-by: Adrian Catangiu <adrian@parity.io>

* runtime: add BEEFY and MMR to Kusama

Signed-off-by: Adrian Catangiu <adrian@parity.io>

* node/service: enable BEEFY for Westend and Kusama

Signed-off-by: Adrian Catangiu <adrian@parity.io>

* node/service: regenerate genesis keys for westend-native and kusama-native

Since these keys are only used for development/local chains, also publish
the secret seeds used to generate the public keys, so that developers can
recover/generate the private key pairs if needed.

Signed-off-by: Adrian Catangiu <adrian@parity.io>

* runtime: add session keys migration to add BEEFY to Westend and Kusama

* runtime: fix migration

* fix try-runtime build

* cargo fmt

* fix parachains slashing benchmark

* address review comments

* Apply suggestions from code review

Co-authored-by: Bastian Köcher <git@kchr.de>

* runtime: fix session keys migration

---------

Signed-off-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: parity-processbot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 3.8.0 to 3.8.1.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](actions/setup-node@v3.8.0...v3.8.1)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…aritytech#7641)

* Bound number of assets which can be withdrawn to pay for execution.

* ".git/.scripts/commands/fmt/fmt.sh"

* Include ClaimAsset in limiting the assets

* Change max assets to constant

---------

Co-authored-by: command-bot <>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>

// Invariant: to our knowledge, none of the peers except for the `source` know about the
// approval.
metrics.on_approval_imported();
Contributor commented:

This should inc_by(candidate_indices.count_ones()), since a coalesced approval imports votes for multiple candidates at once.
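A sketch of the suggested fix, assuming a prometheus IntCounter behind the usual optional-metrics wrapper; the names here are hypothetical, not the actual metrics code:

use std::sync::Arc;

struct MetricsInner {
    approvals_imported_total: prometheus::IntCounter,
}

#[derive(Clone)]
struct Metrics(Option<Arc<MetricsInner>>);

impl Metrics {
    // Count every candidate covered by a coalesced approval, not just one
    // import event per message.
    fn on_approval_imported_by(&self, candidates: u64) {
        if let Some(metrics) = &self.0 {
            metrics.approvals_imported_total.inc_by(candidates);
        }
    }
}

// At the call site, `candidate_indices` is the bitfield of approved candidates:
// metrics.on_approval_imported_by(candidate_indices.count_ones() as u64);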

@paritytech-ci requested review from a team, August 22, 2023 11:30
alexggh and others added 10 commits August 22, 2023 16:54
* Fix xcm-builder mock

(preparation for monorepo)

The CI fails here when the runtime-benchmarks feature is enabled in the workspace.

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Update xcm/xcm-builder/Cargo.toml

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* move RuntimeDebug out of frame_support

* move RuntimeDebug out of frame_support

* fix xcm export

* ".git/.scripts/commands/fmt/fmt.sh"

* fix xcm integration tests

* fix cargo lock for xcm integration tests

* wip

* restructure benchmarking macro related exports

* update cargo lock

---------

Co-authored-by: parity-processbot <>
@paritytech-cicd-pr commented:

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable
Logs: https://gitlab.parity.io/parity/mirrors/polkadot/-/jobs/3432194

@@ -162,6 +162,8 @@ where
KeyOwnershipProof(relay_parent, validator_id, key_ownership_proof) => self
.requests_cache
.cache_key_ownership_proof((relay_parent, validator_id), key_ownership_proof),
RequestResult::ApprovalVotingParams(relay_parent, params) =>
Contributor commented:

nit:

Suggested change:
- RequestResult::ApprovalVotingParams(relay_parent, params) =>
+ ApprovalVotingParams(relay_parent, params) =>

@@ -682,6 +773,9 @@ struct State {
clock: Box<dyn Clock + Send + Sync>,
assignment_criteria: Box<dyn AssignmentCriteria + Send + Sync>,
spans: HashMap<Hash, jaeger::PerLeafSpan>,
// Store approval-voting params so that we don't always have to go to the
// RuntimeApi to ask for this information, which requires a task context switch.
approval_voting_params_cache: Option<LruCache<Hash, ApprovalVotingParams>>,
Contributor commented:

Can't this be cached per session, to have fewer entries in this map? Maybe we could even add it to the session_info_provider.
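For illustration, a per-session variant might look like this sketch, under the assumption that the params only change at session boundaries; the field name mirrors the diff, but the keying and helper are the suggestion, not the changeset:

use lru::LruCache;

type SessionIndex = u32;

// Stand-in for the real params struct in the diff.
struct ApprovalVotingParams {
    max_approval_coalesce_count: u32,
}

struct State {
    // One entry per session instead of per relay-chain block, since the
    // params can only change at session boundaries.
    approval_voting_params_cache: Option<LruCache<SessionIndex, ApprovalVotingParams>>,
}

impl State {
    fn cached_approval_voting_params(
        &mut self,
        session: SessionIndex,
    ) -> Option<&ApprovalVotingParams> {
        self.approval_voting_params_cache
            .as_mut()
            .and_then(|cache| cache.get(&session))
    }
}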

dependabot[bot] and others added 6 commits August 24, 2023 10:19
Bumps [chevdor/srtool-actions](https://github.com/chevdor/srtool-actions) from 0.7.0 to 0.8.0.
- [Release notes](https://github.com/chevdor/srtool-actions/releases)
- [Commits](chevdor/srtool-actions@v0.7.0...v0.8.0)

---
updated-dependencies:
- dependency-name: chevdor/srtool-actions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.101.2 to 0.101.4.
- [Release notes](https://github.com/rustls/webpki/releases)
- [Commits](rustls/webpki@v/0.101.2...v/0.101.4)

---
updated-dependencies:
- dependency-name: rustls-webpki
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
added asynchronous backing params
Signed-off-by: Adrian Catangiu <adrian@parity.io>
* Update README.md

* Update README.md