VC: new scoring functions. #5447
Conversation
I have updated
else:
  # Block monitoring is disabled or we missed a block, in this case
  # score value will be in range of `(0, 1]`
  getLexicographicScore(cdata.data.root)
I think it would be better for the implementation to fall back on VC configuration order when it doesn't have information or ends up with a tie. Basically, we then get a hierarchical scoring algorithm where ties at one level are resolved at the next, the last level being the order of declaration in the VC configuration (so that if everything else is equal, we pick the first node), as sketched below.
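A minimal sketch of what such a hierarchical scheme could look like - the type, proc names, and score fields are assumptions for illustration, not the PR's code:

  # Hypothetical sketch of hierarchical scoring with configuration-order
  # tie-breaking: compare by the primary score, then the fallback score,
  # and finally prefer the node declared first in the VC configuration.
  type
    Candidate = object
      primary: float64     # e.g. aggregation-bits score
      fallback: float64    # e.g. lexicographic score
      configIndex: int     # order of declaration in the VC configuration

  proc better(a, b: Candidate): bool =
    if a.primary != b.primary:
      a.primary > b.primary
    elif a.fallback != b.fallback:
      a.fallback > b.fallback
    else:
      a.configIndex < b.configIndex  # everything else equal: pick the first node

  proc pickBest(candidates: openArray[Candidate]): Candidate =
    result = candidates[0]
    for i in 1 ..< candidates.len:
      if better(candidates[i], result):
        result = candidates[i]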
Force-pushed from e68b363 to 4774db1.
let
  length = int(MAX_VALIDATORS_PER_COMMITTEE)
  ones = countOnes(adata.data.aggregation_bits)
  res = if length == ones: Inf else: float64(ones) / float64(length)
`data.aggregation_bits` cannot practically have this many ones - the typical committee size is currently around 500 validators.
The idea of the score is to produce a higher value when more bits are set. I think the current algorithm definitely achieves that, and it also doesn't depend on the typical/normal/expected validator count or any other committee size.
Yes, but it's not possible to attain a perfect score with this algorithm - even if an attestation has all of its bits set, this will not result in a perfect score, since `length` is always larger than the committee size.
So what if one BN returns an attestation with a committee size of 500 bits but only 480 bits set (480 / 500 == 0.9600), and another BN returns attestation data with a committee size of 490 bits but only 471 bits set (471 / 490 ≈ 0.9612)? Your proposal is to use the attestation data with 471 of 490 bits set, while my proposal is to use the one with 480 of 500 instead. So yeah, a perfect score is very hard to achieve, but the attestations with more votes will be selected.
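To make the comparison concrete, here is a hypothetical sketch of the two normalizations using the numbers above (MAX_VALIDATORS_PER_COMMITTEE is 2048 in the spec; the proc names are illustrative, not the PR's API):

  const MAX_VALIDATORS_PER_COMMITTEE = 2048  # eth2 spec constant

  proc scoreByCommitteeSize(ones, committeeLen: int): float64 =
    # Normalize by the actual committee size (the approach questioned here).
    float64(ones) / float64(committeeLen)

  proc scoreByMax(ones: int): float64 =
    # Normalize by the spec maximum, so more set bits always scores higher.
    float64(ones) / float64(MAX_VALIDATORS_PER_COMMITTEE)

  when isMainModule:
    echo scoreByCommitteeSize(480, 500)  # 0.96
    echo scoreByCommitteeSize(471, 490)  # ~0.9612 -> selected under this scheme
    echo scoreByMax(480)                 # ~0.2344 -> selected under this scheme
    echo scoreByMax(471)                 # ~0.2300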
"perfect score is very hard to achieve"

It's not "very hard" - it's simply not possible unless all ETH in circulation is staked. MAX_VALIDATORS_PER_COMMITTEE is derived from a validator count that cannot practically be reached (it was chosen so that we never ever hit it, which keeps the merkle tree "stable") - we can simply remove the "perfect attestation" score at this point because it doesn't make sense.
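As a standalone sketch, that simplification would just drop the `Inf` branch (illustrative only, assuming a plain bit sequence rather than the client's `aggregation_bits` type):

  const MAX_VALIDATORS_PER_COMMITTEE = 2048  # eth2 spec constant

  proc attestationScore(aggregationBits: openArray[bool]): float64 =
    # Count the set bits; since `ones` can never reach the spec maximum in
    # practice, no special "perfect attestation" case is needed.
    var ones = 0
    for bit in aggregationBits:
      if bit: inc ones
    float64(ones) / float64(MAX_VALIDATORS_PER_COMMITTEE)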
Split sync committee message score range so lexicographic scores do not intersect with normal ones. Lexicographic scores should be below normal scores.
Fix aggregated attestation scoring to use MAX_VALIDATORS_PER_COMMITTEE. Fix sync committee contributions to use SYNC_SUBCOMMITTEE_SIZE. Add getUniqueVotes test vectors.
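A hypothetical sketch of that range split: keep normal scores in [1, 2] and lexicographic fallback scores strictly below 1, so a fallback score can never outrank a normal one (the exact ranges and names are assumptions, not the PR's code):

  proc normalScore(ones, total: int): float64 =
    # Normal scores land in [1, 2], strictly above any fallback score.
    1.0 + float64(ones) / float64(total)

  proc lexicographicScore(root: array[32, byte]): float64 =
    # Derive a deterministic value from the leading bytes of the root,
    # preserving lexicographic order of the roots.
    var acc = 0'u64
    for i in 0 ..< 8:
      acc = (acc shl 8) or uint64(root[i])
    0.5 * (float64(acc) / float64(high(uint64)))  # in [0, 0.5], below any normal score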
Force-pushed from 3901391 to fe2337d.
New scoring functions added: