Claim queue restrictions #4022

Closed
eskimor opened this issue Apr 8, 2024 · 1 comment
eskimor commented Apr 8, 2024

Claim queue length == allowed_ancestry_length + 1

The claim queue length should, by definition, equal allowed_ancestry_length + 1 (either the claim queue length is derived from this configuration parameter, or the other way round).

Restrictions on collation fetching

If a collator decides to build on a relay parent older than the most recent one, we should take this into account when deciding what to accept.

For a single para on a core: if allowed_ancestry_length is x and the collation is built on the ancestor head - y, with y <= x, it does not make sense to accept more than x - y + 1 candidates for that relay parent, as any more could never make it on chain.
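As a rough illustration (not actual polkadot-sdk code; the function name is made up), the per-relay-parent limit from the formula above could be computed like this:

```rust
/// Hypothetical helper illustrating the formula above: with an ancestry
/// window of `x` (`allowed_ancestry_length`) and a collation built on the
/// relay parent `head - y` (where `y <= x`), at most `x - y + 1` candidates
/// can still make it on chain for a single para on a core.
fn max_acceptable_candidates(x: usize, y: usize) -> usize {
    debug_assert!(y <= x, "relay parent lies outside the allowed ancestry window");
    x.saturating_sub(y) + 1
}
```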

More generally: we should trim the claim queue. E.g. if we receive a collation for relay parent head - y and the claim queue at that relay block is:

[a,b,c,d]

then we should trim the actually available claim queue by y. So if y is 2, the claim queue to take into consideration is reduced to:

[a,b]

as c and d could no longer possibly make it into a block.

For a single-core para, a claim queue like

[a,a,a,a]

would be reduced to

[a,a]

so we should only accept two collations for that para, which of course matches the formula above.
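A minimal sketch of this trimming logic under those assumptions (the `ParaId` alias and the function name are illustrative, not the actual collator protocol API):

```rust
// Illustrative alias; in polkadot-sdk this would be the real `ParaId` type.
type ParaId = u32;

/// Trim a core's claim queue at a relay block by the ancestry offset `y` and
/// count the remaining claims for `para`. That count bounds how many
/// collations we should still accept from that para at this relay parent.
fn acceptable_collations(claim_queue: &[ParaId], y: usize, para: ParaId) -> usize {
    let trimmed = &claim_queue[..claim_queue.len().saturating_sub(y)];
    trimmed.iter().filter(|&&p| p == para).count()
}

fn main() {
    let (a, b, c, d) = (1, 2, 3, 4);
    // [a, b, c, d] trimmed by y = 2 leaves [a, b]: one claim left for `a`.
    assert_eq!(acceptable_collations(&[a, b, c, d], 2, a), 1);
    // [a, a, a, a] trimmed by y = 2 leaves [a, a]: two collations for `a`,
    // matching x - y + 1 with x = 3, y = 2.
    assert_eq!(acceptable_collations(&[a, a, a, a], 2, a), 2);
}
```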

Restricting spam vectors in general

There is also max_candidate_depth to limit spam. Even if we accept holes in chains, this can be used to restrict a single parachain, as we can limit the total number of candidates in prospective parachains to max_candidate_depth + 1. So even if we accepted 10 relay parents at any point in time, we could still restrict the total number of candidates to max_candidate_depth + 1. This does not work effectively if paras are sharing a core though; in that case the more important parameter is max_ancestry_len.

In general, with #616 in place, the most important property we need to maintain is that we can detect spammy misbehavior, so we can account for it in the persistent reputation system. This only works on the local node though, which would be an argument to reduce backing group size if possible.
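A sketch of that per-para cap, assuming a hypothetical check in prospective parachains (the function and parameter names are invented for illustration):

```rust
/// Hypothetical check: regardless of how many relay parents are currently
/// accepted, stop tracking new candidates for a para once the total number
/// of candidates in prospective parachains reaches `max_candidate_depth + 1`.
fn can_accept_candidate(tracked_for_para: usize, max_candidate_depth: usize) -> bool {
    tracked_for_para < max_candidate_depth + 1
}
```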

eskimor converted this from a draft issue Apr 8, 2024

eskimor commented Jul 29, 2024

Obsoleted by: #5079

eskimor closed this as completed Jul 29, 2024