Claim queue length == allowed_ancestry_length + 1

The claim queue length should be made the same as allowed_ancestry_length + 1, by definition. (The claim queue length should be derived from this configuration, or the other way round.)
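To illustrate the relationship, here is a minimal sketch with hypothetical type and field names (not the actual runtime configuration): deriving the claim queue length from allowed_ancestry_length makes it impossible for the two values to drift apart.

```rust
/// Hypothetical configuration type, for illustration only.
struct AsyncBackingParams {
    /// How many ancestors of the latest relay parent may still be used as
    /// relay parents for new candidates.
    allowed_ancestry_length: u32,
}

impl AsyncBackingParams {
    /// The claim queue length is defined in terms of the ancestry length,
    /// rather than being configured independently.
    fn claim_queue_length(&self) -> u32 {
        self.allowed_ancestry_length + 1
    }
}

fn main() {
    let params = AsyncBackingParams { allowed_ancestry_length: 3 };
    assert_eq!(params.claim_queue_length(), 4);
}
```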
Restrictions on collation fetching
If a collator decides to build on an older relay parent than the most recent one, we should take this into account when deciding what to accept:
For a single para on a core: if allowed_ancestry_length is x and the collation's relay parent is the ancestor head - y, with y <= x, it does not make sense to accept more than x - y + 1 candidates for that relay parent, as more could never make it on chain.
More generally, we should trim the claim queue. E.g. if we receive a collation for relay parent head - y and we have the following claim queue at that relay block:
[a,b,c,d]
then we should trim the effectively available claim queue by y. So if y is 2, the claim queue to take into consideration is reduced to:
[a,b]
as c and d could no longer possibly make it into a block.
For a para with a single core, a claim queue like
[a,a,a,a]
would be reduced to
[a,a]
so we should only accept two collations for that para - the same as the formula above gives, of course. (A code sketch of this trimming follows below.)
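A rough sketch of this trimming, using hypothetical names rather than the actual collator-protocol code (the direction of the trimming simply follows the [a,b,c,d] example above):

```rust
/// Hypothetical para id type, for illustration only.
type ParaId = u32;

/// Trim the claim queue for a collation whose relay parent is `y` blocks
/// behind the current head: [a, b, c, d] with y = 2 becomes [a, b].
fn trim_claim_queue(claim_queue: &[ParaId], y: usize) -> Vec<ParaId> {
    let remaining = claim_queue.len().saturating_sub(y);
    claim_queue[..remaining].to_vec()
}

/// Upper bound on collations to accept for `para` at that relay parent:
/// the number of entries for `para` left in the trimmed queue.
fn max_collations_for(para: ParaId, claim_queue: &[ParaId], y: usize) -> usize {
    trim_claim_queue(claim_queue, y).iter().filter(|&&p| p == para).count()
}

fn main() {
    // A single para (id 7) occupying every claim on the core.
    let queue = [7, 7, 7, 7];
    assert_eq!(trim_claim_queue(&queue, 2), vec![7, 7]);
    assert_eq!(max_collations_for(7, &queue, 2), 2);
}
```

For a para with a single core this reproduces the x - y + 1 bound from above, and for shared cores it additionally tells us which para each remaining claim belongs to.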
Restricting spam vectors in general
There is also max_candidate_depth to limit spam. Even if we accept holes in chains, this can be used to restrict a single parachain, as we can limit the total number of candidates in prospective parachains to max_candidate_depth + 1. So even if we accepted 10 relay parents at any point in time, we could still restrict the total number of candidates to max_candidate_depth + 1. This does not work effectively if paras are sharing a core though; in that case the more important parameter is max_ancestry_len. In general, with #616 in place, the most important property we need to maintain is that we can detect spammy misbehavior, so we can account for it in the persistent reputation system. This only works on the local node though, which would be an argument to reduce backing group size if possible.
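For illustration, a minimal per-para cap could look like the sketch below. The names and types are hypothetical and this is not the actual prospective parachains implementation; it only demonstrates the idea that, regardless of how many relay parents are currently in view, no more than max_candidate_depth + 1 candidates are tracked for a single para.

```rust
use std::collections::HashMap;

/// Hypothetical identifiers, for illustration only.
type ParaId = u32;
type CandidateHash = [u8; 32];

/// Per-para spam guard: never track more than `max_candidate_depth + 1`
/// candidates for a single para, no matter how many relay parents we accept.
struct CandidateLimiter {
    max_candidate_depth: usize,
    tracked: HashMap<ParaId, Vec<CandidateHash>>,
}

impl CandidateLimiter {
    fn new(max_candidate_depth: usize) -> Self {
        Self { max_candidate_depth, tracked: HashMap::new() }
    }

    /// Returns `true` if the candidate was accepted, `false` if accepting it
    /// would exceed the per-para limit of `max_candidate_depth + 1`.
    fn try_accept(&mut self, para: ParaId, candidate: CandidateHash) -> bool {
        let entries = self.tracked.entry(para).or_default();
        if entries.len() >= self.max_candidate_depth + 1 {
            return false;
        }
        entries.push(candidate);
        true
    }
}

fn main() {
    let mut limiter = CandidateLimiter::new(3);
    for i in 0u8..5 {
        let accepted = limiter.try_accept(42, [i; 32]);
        // Only the first max_candidate_depth + 1 = 4 candidates get through.
        println!("candidate {i}: accepted = {accepted}");
    }
}
```

As noted above, a cap like this does not help when several paras share a core; there the trimmed claim queue (and max_ancestry_len) is the more relevant limit.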