Runtime: Accept full chain for para on multiple cores (elastic scaling) #3131

Closed · Tracked by #1829 · Fixed by #3479

eskimor opened this issue Jan 30, 2024 · 12 comments
eskimor (Member) commented Jan 30, 2024

We accept N candidates for N scheduled cores, if those candidates form a chain back to the last included one.
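A minimal sketch of that acceptance rule, using simplified stand-in types (`Hash`, `Candidate`) rather than the real runtime primitives, and assuming the candidates arrive already ordered:

```rust
// Stand-in for the runtime's 256-bit hash type.
type Hash = [u8; 32];

// Simplified candidate: in reality the parent link is established via the
// PersistedValidationData hash in the candidate descriptor.
struct Candidate {
    parent_head_hash: Hash, // hash of the head data this candidate builds on
    output_head_hash: Hash, // hash of the head data this candidate produces
}

/// Accept `candidates` only if they form a single chain rooted at the
/// para's last included head.
fn forms_chain(last_included: Hash, candidates: &[Candidate]) -> bool {
    let mut expected_parent = last_included;
    for c in candidates {
        if c.parent_head_hash != expected_parent {
            return false;
        }
        expected_parent = c.output_head_hash;
    }
    true
}
```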

alindima (Contributor) commented Jan 30, 2024

It is not clear if this one refers to the scheduler assigning multiple cores per para in the same RCB, or to accepting multiple parablocks in paras_inherent?

eskimor (Member, Author) commented Jan 30, 2024

This is about accepting multiple backed candidates in paras_inherent, if there are multiple cores for that para.

eskimor (Member, Author) commented Jan 30, 2024

The scheduler should already be fine.

alindima (Contributor) commented Feb 1, 2024

Currently, the check for whether a backed candidate is a descendant of the current para head is done in <inclusion::Pallet<T>>::process_candidates.

For this to work with multiple candidates, I think we'll need to change the format of the inherent data to contain the parent_head_hash, so that we're able to check the dependency chain and also check the PersistedValidationData hash against the one in the candidate receipt.
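For illustration, a rough sketch of such a PVD-hash check, with simplified stand-in types and std's DefaultHasher standing in for the runtime's hashing of the SCALE-encoded PVD (the real PersistedValidationData also carries the relay-parent storage root and max PoV size, and the real hash is 256 bits wide):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for the runtime's PersistedValidationData.
#[derive(Hash)]
struct PersistedValidationData {
    parent_head: Vec<u8>,
    relay_parent_number: u32,
}

struct CandidateDescriptor {
    persisted_validation_data_hash: u64, // the real field is a 256-bit hash
}

// Stand-in for hashing the SCALE-encoded PVD in the runtime.
fn pvd_hash(pvd: &PersistedValidationData) -> u64 {
    let mut h = DefaultHasher::new();
    pvd.hash(&mut h);
    h.finish()
}

/// True iff the candidate commits to a PVD built on `parent_head`.
fn check_pvd(
    descriptor: &CandidateDescriptor,
    parent_head: Vec<u8>,
    relay_parent_number: u32,
) -> bool {
    let pvd = PersistedValidationData { parent_head, relay_parent_number };
    pvd_hash(&pvd) == descriptor.persisted_validation_data_hash
}
```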

sandreim (Contributor) commented Feb 2, 2024

I actually also need to change the ParachainsInherentData format due to a change to the BackedCandidate primitive. Currently I am struggling to figure out how to implement this change only for the Rococo runtime.

eskimor (Member, Author) commented Feb 2, 2024

> For this to work with multiple candidates, I think we'll need to change the format of the inherent data to contain the parent_head_hash, so that we're able to check the dependency chain and also check the PersistedValidationData hash against the one in the candidate receipt.

I don't think this is necessary. It seems right now we check dependency chains via the PersistedValidationData hash here. We should be able to build the persisted validation data for each provided candidate, similar to here, and then put the candidates in a table keyed by their persisted validation data hashes. Then we just need to look up the hash provided in the descriptor to find the parent.
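A hedged sketch of this table-and-lookup idea, again with stand-in types; the output_pvd_hash field is a hypothetical shortcut for recomputing the PVD from a candidate's output head data:

```rust
use std::collections::HashMap;

type PvdHash = u64; // stand-in for the real 256-bit hash

struct BackedCandidate {
    pvd_hash: PvdHash,        // PVD hash committed to in the candidate descriptor
    output_pvd_hash: PvdHash, // PVD hash a child building on this candidate would commit to
}

/// Orders `candidates` into a chain rooted at `last_included_pvd`,
/// or returns None if they do not form one.
fn order_chain(
    last_included_pvd: PvdHash,
    candidates: Vec<BackedCandidate>,
) -> Option<Vec<BackedCandidate>> {
    // Table keyed by the PVD hash each candidate commits to in its descriptor.
    let mut by_parent: HashMap<PvdHash, BackedCandidate> =
        candidates.into_iter().map(|c| (c.pvd_hash, c)).collect();

    // Walk the chain: repeatedly look up the candidate whose descriptor
    // hash matches the PVD derived from the chain built so far.
    let mut chain = Vec::with_capacity(by_parent.len());
    let mut cursor = last_included_pvd;
    while let Some(c) = by_parent.remove(&cursor) {
        cursor = c.output_pvd_hash;
        chain.push(c);
    }

    // Every provided candidate must have found its place in the chain.
    by_parent.is_empty().then_some(chain)
}
```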

eskimor (Member, Author) commented Feb 2, 2024

> I actually also need to change the ParachainsInherentData format due to a change to the BackedCandidate primitive. Currently I am struggling to figure out how to implement this change only for the Rococo runtime.

There is a hacky, half-baked solution for this. If we establish the contract that BackedCandidates are provided in order of cores, then with all of the para's cores occupied we are already golden: we know the first candidate for a para belongs to the lowest core assigned to that para, and so on.

Things get trickier if there are holes, e.g. if the candidate for core 1 of that para is missing, then we are off by one. We could get around this by inserting dummy BackedCandidates for the missing entries. Adding these dummy candidates, apart from being hacky, is also not backwards compatible.

eskimor (Member, Author) commented Feb 2, 2024

Another possible hack 😬: we could change the format of validator_indices so that the last bits in the field identify the core (if there are multiple). E.g. let's say we want to support up to 4 cores per para for now; we could use 2 bits here.
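A sketch of how such trailing bits might be unpacked, assuming a hypothetical layout with 2 reserved little-endian bits at the end of the bitfield (not necessarily the final implementation):

```rust
const CORE_INDEX_BITS: usize = 2; // 2 bits => up to 4 cores per para

/// Splits a combined bitfield into (backing votes, core offset).
fn unpack(validator_indices: &[bool]) -> (&[bool], u8) {
    let split = validator_indices.len() - CORE_INDEX_BITS;
    let (votes, core_bits) = validator_indices.split_at(split);
    // Interpret the trailing bits as a little-endian core offset.
    let core = core_bits
        .iter()
        .enumerate()
        .fold(0u8, |acc, (i, &bit)| acc | ((bit as u8) << i));
    (votes, core)
}
```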

burdges commented Feb 4, 2024

We'll require the candidate receipts to appear backed on the relay chain in order?

sandreim (Contributor) commented Feb 5, 2024

> We'll require the candidate receipts to appear backed on the relay chain in order?

That would be an optimization, I guess; otherwise we should be able to establish the ordering based on the parent heads.

sandreim (Contributor) commented Feb 5, 2024

> Another possible hack 😬: we could change the format of validator_indices so that the last bits in the field identify the core (if there are multiple). E.g. let's say we want to support up to 4 cores per para for now; we could use 2 bits here.

I think this is by far the least intrusive method, but there will still be breakage, as current nodes rely on the bitfield length to identify the backing group size in client/runtime code. This would also make it a subtle opt-in interface. We'd probably have to introduce a temporary node-side feature or runtime API to enable this only on Rococo.

eskimor moved this from Backlog to In Progress in the parachains team board Feb 7, 2024
alindima self-assigned this Feb 13, 2024
alindima linked a pull request Mar 15, 2024 that will close this issue
alindima moved this from In Progress to Review in progress in the parachains team board Mar 20, 2024
github-merge-queue bot pushed a commit that referenced this issue Mar 21, 2024
Changes needed to implement the runtime part of elastic scaling:
#3131,
#3132,
#3202

Also fixes #3675

TODOs:

- [x] storage migration
- [x] optimise process_candidates from O(N^2)
- [x] drop backable candidates which form cycles
- [x] fix unit tests
- [x] add more unit tests
- [x] check the runtime APIs which use the pending availability storage.
We need to expose all of them, see
#3576
- [x] optimise the candidate selection. we're currently picking randomly
until we satisfy the weight limit. we need to be smart about not
breaking candidate chains while being fair to all paras -
#3573

Relies on the changes made in
#3233 in terms of the
inclusion policy and the candidate ordering

---------

Signed-off-by: alindima <alin@parity.io>
Co-authored-by: command-bot <>
Co-authored-by: eskimor <eskimor@users.noreply.github.com>
github-project-automation bot moved this from Review in progress to Completed in the parachains team board Mar 21, 2024
dharjeezy pushed a commit to dharjeezy/polkadot-sdk that referenced this issue Mar 24, 2024