[stateless_validation] Update validator roles and rewards #10556
Roles & selection change proposal

Select chunk producers
Fix the maximal total number of chunk producers at N * S, where S is the number of shards. Assign CPs to shards the same way we do currently, via assign_shards.

Select block producers/validators
As BPs won't have to track any shard now, we can increase num_block_producer_seats. Because it's still a critical role, it looks reasonable to keep only the top 50% of proposals. As there are consistently around 200 validators on mainnet, we can set num_block_producer_seats to 100. This is the only change needed here. I considered skipping CPs when assigning BPs, but this breaks consensus: if the top stakers are not selected, malicious actors can arrange their stakes to obtain >1/3 of the BP stake even without holding 1/3 of the total stake. I'm not sure that selecting X out of the total number of proposals doesn't break consensus either, but we already do this today.

Select chunk validators
Everyone is a chunk validator. Consequence 1: if you stake M * seat_price tokens, you will likely validate M shards for each block, so the work distribution won't visibly change. However, increasing the number of shards will push top validators to either improve their machines or decrease their stake to better reflect their chunk validation capacity, which looks good.

Statelessnet
I'm not sure how we should set thresholds for statelessnet chunk validators, though.

Rewards: TODO
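The selection rules above can be sketched in Rust. This is an illustrative sketch only, not the actual nearcore implementation: the names `Proposal`, `num_chunk_producer_seats`, and `select_block_producers` are hypothetical, and the real `assign_shards` logic is not reproduced here.

```rust
// Hypothetical sketch of the proposed role selection rules.

#[allow(dead_code)]
struct Proposal {
    account_id: String,
    stake: u128,
}

// Chunk producers: cap the total seat count at N * S (S = number of shards).
fn num_chunk_producer_seats(n: u64, num_shards: u64) -> u64 {
    n * num_shards
}

// Block producers: keep only the top 50% of proposals by stake,
// capped at num_block_producer_seats (100 in the proposal).
fn select_block_producers(mut proposals: Vec<Proposal>, num_seats: usize) -> Vec<Proposal> {
    proposals.sort_by(|a, b| b.stake.cmp(&a.stake));
    let top_half = proposals.len() / 2;
    proposals.truncate(top_half.min(num_seats));
    proposals
}
```

With N = 5 and 6 shards this gives 30 chunk producer seats, and a 200-proposal set would yield at most 100 block producers.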
A couple of questions and comments...
Question: Is the 50% number here arbitrary, or do we have an explanation for why we would like to reduce the number of block producers to 100? What are the trade-offs of having fewer or more BPs?
Question: Similarly, what are the trade-offs of setting N smaller or larger?
Comment: I would propose that the seat price be around the median (50th percentile) of the stake of these 100*S nodes. Call this stake. Depending on how quickly the stake for these nodes grows, the nodes would (probably) track all shards, i.e. there's a high chance all nodes with stake value more than
Question: Sorry, I didn't quite understand this section. What are we trying to determine here when you say "give same stake to everyone"?
That's not a reduction; we currently have 100 BPs as well: nearcore/core/primitives/src/epoch_manager.rs, line 196 (commit 5dbfa38).
With stateless validation in place, I think the biggest concern is the size of the approvals field in BlockHeader. With 100 BPs it may take 100 * 64 bytes = 6.4 KB. This is one of the largest contributors to Block size, together with chunk endorsements.
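A quick sanity check of that estimate, assuming one Ed25519 signature (64 bytes) per block producer approval (the function name is illustrative, not nearcore's API):

```rust
/// Rough size of the approvals field: one 64-byte signature per BP.
fn approvals_size_bytes(num_block_producers: usize) -> usize {
    const ED25519_SIGNATURE_BYTES: usize = 64;
    num_block_producers * ED25519_SIGNATURE_BYTES
}
```

For 100 block producers this gives 6,400 bytes, i.e. the 6.4 KB mentioned above.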
I see BlockHeader size as a concern for three use cases:
Another historical reason for the BP limit is that BPs always had the strongest HW requirements. But that won't be the case anymore.
Small N means high dependence on a small number of nodes. High N means stronger HW requirements for more nodes. So N = 5 is a compromise; we don't really need a lot of chunk producers.
So the current median is ~700k NEAR. In the discussion around Michael's analysis, 25k NEAR as a seat size is reasonable from the HW perspective. Later he suggests 175k NEAR as a compromise.
My default approach would be to give the same amount of statelessnet tokens to all community validators. But that won't help us test the mainnet scenario: if the seat price is too low, then everyone will validate all shards. Ideally we need all kinds of chunk validators: some validating all shards, some a subset of shards, and some only one shard (maybe with a single partial mandate).
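The "M * seat_price means roughly M shards" rule from the proposal can be sketched as a small classifier, using the 25k NEAR seat size mentioned above as an example. This is an assumption-laden illustration (the function `expected_shards` is hypothetical), treating each seat as one full mandate and ignoring the probabilistic assignment:

```rust
/// Roughly how many shards a validator would cover: M = stake / seat_price
/// full mandates, capped at the number of shards. Illustrative only; the
/// real assignment is probabilistic.
fn expected_shards(stake: u128, seat_price: u128, num_shards: u64) -> u64 {
    ((stake / seat_price) as u64).min(num_shards)
}
```

Under these assumptions, with a 25k seat price and 5 shards, a 25k-NEAR validator covers one shard, a 50k-NEAR validator two, and a 175k-NEAR validator all five, which is exactly the mix of validator kinds we'd want on statelessnet.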
Based on our discussion in the weekly meeting, we can also aim for a dynamic seat price, optimizing for either of the following.

For the first type, where we fix the total number of seats, the idea is to have an approximately fixed number of seats assigned to each shard. If we were to take something like 500 to 1000 total seats, that would mean a seat value close to what we had figured out as the median of ~700k NEAR (with 760 seats at 700k NEAR). This is a good thing and aligns well with our models of having ~50% of validators tracking only one shard and the rest tracking multiple shards.

For the second type, the philosophy would be that the security of the system is determined by the total number of unique validator nodes tracking and validating shards. I'm not really sure how much basis this has in our current and past reasoning, or whether we have ever considered it as an argument. But assuming this matters to us, the split would again be something like: if we ideally want ~100 unique validator nodes, then we can back-calculate the seat price. With our current ~200 validators, we would want the seat price threshold to be somewhere around the ~125th validator mark (decently close to the median). The rough calculation I made: ~125 validators (those below the threshold of one seat price) would track only one shard, and the remaining ~75 validators would probabilistically track all shards. That gives us 125/5 + 75 = 100 validators per shard. If we take into account that not all of the ~75 validators would track all shards, the skew shifts left, i.e. the seat price would fall before the 125th validator mark, pushing it closer to our median at the 100th validator.

With both these reasonings, we arrive at a seat price close to the 50th percentile. In terms of actual implementation, assuming a roughly constant ~200 validators, we can start by setting the seat price close to the 50th percentile, and later expand the algorithm to dynamically adjust it to target a total of 500 to 625 seats.
Note that here in our calculation we find seat_price such that (I hate typing math formulas in LaTeX)
I'm a bit wary of validators running hardware suitable for n shards suddenly getting assigned more shards when the stake allocation changes. I don't think it's a blocker for the mainnet release, but it may be something to address in the long term.
Can we have an updated summary based on the latest progress?
This was presented at the core team meeting without significant objections: https://docs.google.com/presentation/d/1UfHEe4OvExlmmrnw_8fqn3zVFgS9DxhReVKmb8qtReo To complete this, one needs to finish the steps from the "Next steps" slide.
adding @birchmd as owner of the task for now |
Add chunk validator role
I'm missing some details here, but it is clear that
fn proposals_to_epoch_info
must be updated with respect to stateless validation. This is the result of a discussion at the NY offsite: https://near.zulipchat.com/#narrow/stream/407237-pagoda.2Fcore.2Fstateless-validation/topic/validator.20role/near/397375108
Motivation to add the chunk validator role, as I see it:
Set the right threshold
Relevant issue: the current full mandate threshold is too low, as it leads to:
It probably has to be set to
threshold * 2
. It's not clear enough from the code what the consequences of specific thresholds are.

Change uptime/reward calculation
We need to consider re-estimating new validator rewards with respect to increased network costs.