VC: new scoring functions. #5447

Merged
merged 10 commits into from
Nov 14, 2023

Conversation

cheatfate (Contributor)

New scoring functions added:

  • Aggregated attestations
  • Sync committee contributions
  • Sync committee messages
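
The scoring functions above share a common shape: each response from a beacon node gets a numeric score, and the validator client keeps the highest-scoring one. A minimal Python sketch of the aggregated-attestation case (not the Nim implementation; the function name is hypothetical, `MAX_VALIDATORS_PER_COMMITTEE = 2048` is the spec constant):

```python
# Illustrative sketch: score an aggregated attestation by the fraction of
# aggregation bits that are set, normalised by the spec constant.
MAX_VALIDATORS_PER_COMMITTEE = 2048  # 2**11, from the consensus spec

def aggregated_attestation_score(aggregation_bits: list) -> float:
    """Higher score for aggregates that cover more validators."""
    ones = sum(1 for b in aggregation_bits if b)
    length = MAX_VALIDATORS_PER_COMMITTEE
    # A fully saturated bitfield gets a "perfect" +Inf score.
    if ones == length:
        return float("inf")
    return ones / length

bits = [True] * 480 + [False] * 20  # 480 of 500 committee members signed
print(aggregated_attestation_score(bits))
```

The validator client would then pick, among the responses it received for a duty, the one whose score is highest.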

Review thread on beacon_chain/validator_client/scoring.nim (outdated, resolved)
github-actions bot commented Sep 20, 2023

Unit Test Results

       9 files ±0     1 098 suites ±0    31m 46s ⏱️ +2m 16s
  3 951 tests  +4     3 604 ✔️  +4    347 💤 ±0    0 ±0
 16 066 runs  +12    15 668 ✔️ +12    398 💤 ±0    0 ±0

Results for commit 8a004c2. ± Comparison against base commit 2f0bb61.

♻️ This comment has been updated with latest results.

cheatfate (Contributor, Author) commented Sep 20, 2023

I have updated getSyncCommitteeMessageDataScore() so that it now returns:

  • when the slot has been found in rootsSeen, a score in the range (1.000, 2.000], or +Inf;
  • when the slot is not found or block monitoring is disabled, a score in the range (0.000, 1.000];
  • when the response comes from an optimistically synced node, a score of -Inf.

else:
  # Block monitoring is disabled or we missed a block; in this case
  # the score value will be in the range `(0, 1]`
  getLexicographicScore(cdata.data.root)
Member:

I think it would be better for the implementation to fall back on the VC configuration order when it doesn't have information or ends up with a tie. Basically, we then get a hierarchical scoring algorithm where ties at one level are resolved at the next, the last level being the declaration order of the beacon nodes in the VC configuration (so that if everything else is equal, we pick the first node).
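
The hierarchical tie-breaking idea can be sketched as follows (a Python illustration, not the Nim implementation; all names are hypothetical). Scores are compared level by level, and when every level ties, the node that appears first in the configuration wins:

```python
# Pick the best response: highest score tuple wins, and exact ties are
# resolved by the node's position in the VC configuration (lower index first).
def best_response(responses):
    """responses: list of (config_index, score_tuple).

    Negating the scores makes min() prefer higher scores; Python compares
    tuples lexicographically, so config_index only matters on a full tie.
    """
    return min(responses, key=lambda r: (tuple(-s for s in r[1]), r[0]))

responses = [
    (2, (1.0, 0.5)),  # third configured node
    (0, (1.0, 0.5)),  # first configured node, identical scores -> wins tie
    (1, (0.9, 0.9)),  # lower primary score loses regardless of level two
]
print(best_response(responses)[0])  # -> 0
```

The design choice here is that the configuration order acts as the final, always-distinct scoring level, so selection is deterministic even when all numeric scores are equal.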

@cheatfate cheatfate force-pushed the vc-scoring branch 2 times, most recently from e68b363 to 4774db1 Compare November 7, 2023 09:57
let
  length = int(MAX_VALIDATORS_PER_COMMITTEE)
  ones = countOnes(adata.data.aggregation_bits)
  res = if length == ones: Inf else: float64(ones) / float64(length)
Member:

data.aggregation_bits cannot practically have this many ones - the typical committee size is now around 500 validators.

cheatfate (Contributor, Author) commented Nov 10, 2023:

The idea of the score is to produce a higher value when more bits are set. I think the current algorithm definitely achieves that, and it also doesn't depend on the typical/normal/obvious or any other count of validators.

Member:

Yes, but it's not possible to attain a perfect score with this algorithm - even if an attestation has all of its bits set, this will not result in a perfect score, since length is always larger than the committee size.

cheatfate (Contributor, Author):

So what if one BN returns an attestation with a committee size of 500 bits but only 480 bits set (480 / 500 == 0.9600), and another BN returns attestation data with a committee size of 490 bits but only 471 bits set (471 / 490 == 0.9612)? Your proposal would select the attestation with 471 of 490 bits set, while my proposal would select the one with 480 of 500 instead. So yes, a perfect score is very hard to achieve, but better attestations will be selected.
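
The numbers in this exchange can be checked directly. A small Python illustration of the two normalisations being argued over (dividing by the actual committee size versus by the constant MAX_VALIDATORS_PER_COMMITTEE); the function names are hypothetical:

```python
# Compare the two candidate denominators for the aggregation-bits score.
MAX_VALIDATORS_PER_COMMITTEE = 2048  # spec constant, never reached in practice

def by_committee_size(ones: int, size: int) -> float:
    """Normalise by the actual committee size (fraction of participation)."""
    return ones / size

def by_max_constant(ones: int) -> float:
    """Normalise by the spec constant (orders by absolute participation)."""
    return ones / MAX_VALIDATORS_PER_COMMITTEE

# BN A: 480 of 500 bits set; BN B: 471 of 490 bits set.
print(by_committee_size(480, 500) < by_committee_size(471, 490))  # True: B wins
print(by_max_constant(480) > by_max_constant(471))                # True: A wins
```

The constant denominator ranks attestations purely by how many validators they cover, which is what ends up in the final block reward, while the per-committee fraction can prefer a smaller aggregate with a higher participation ratio.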

arnetheduck (Member) commented Nov 12, 2023:

perfect score is very hard to achieve

It's not "very hard" - it's simply not possible unless all ETH in circulation is staked. MAX_VALIDATORS_PER_COMMITTEE is derived from a validator count that cannot practically be achieved (it was chosen so that we will never ever hit it, which keeps the merkle tree "stable") - we can simply remove the "perfect attestation" score at this point, because it doesn't make sense.

Split the sync committee message score into ranges, so lexicographic scores will not intersect with normal ones.
Lexicographic scores should be below normal scores.
Fix aggregated attestation scoring to use MAX_VALIDATORS_PER_COMMITTEE.
Fix sync committee contribution scoring to use SYNC_SUBCOMMITTEE_SIZE.
Add getUniqueVotes test vectors.
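
The range split described in the first two commit lines can be sketched like this (a Python illustration under my own assumptions, not the Nim code; `lexicographic_score` is a simplified stand-in for the real getLexicographicScore): fallback lexicographic scores land in (0, 1], and "block seen" scores land in (1, 2], so the two ranges never intersect and any seen-block response outranks any fallback.

```python
# Disjoint score ranges for sync committee message scoring (sketch).
def lexicographic_score(root: bytes) -> float:
    """Map the first 8 bytes of the root into (0, 1] (simplified stand-in)."""
    v = int.from_bytes(root[:8], "big")
    return (v + 1) / 2 ** 64

def message_score(root: bytes, slot_seen: bool, delay_fraction: float) -> float:
    if slot_seen:
        # Seen-block scores occupy (1, 2]; a smaller delay scores higher.
        return 1.0 + (1.0 - delay_fraction)
    # Fallback: block monitoring disabled or block missed -> range (0, 1].
    return lexicographic_score(root)

root = bytes(range(32))
assert 0.0 < message_score(root, False, 0.0) <= 1.0  # fallback range
assert 1.0 < message_score(root, True, 0.5) <= 2.0   # seen-block range
print("disjoint ranges hold")
```

Because the ranges are disjoint, a plain numeric comparison is enough to implement the hierarchy: no response scored lexicographically can ever beat a response whose block was actually seen.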
@arnetheduck arnetheduck merged commit 9889b84 into unstable Nov 14, 2023
11 checks passed
@arnetheduck arnetheduck deleted the vc-scoring branch November 14, 2023 11:13
3 participants