Formalize Tendermint proposer election algorithm properties #7864
Fair, this is probably a better choice.
I completely agree that we should implement randomized algorithms (at which point this property doesn't matter); here I'm trying to articulate the properties that our current algorithm (which will be used at launch) has. Doesn't any deterministic election algorithm with finite power bounds have this property? This may not be a super important property, but I think it is a property of our algorithm. If we only want to write properties that we think a future algorithm will also have, we can probably get rid of it, though.
The limit was sloppy, and I agree that we want a tighter bound than ∞, but I don't think this formalization is quite correct either.
Fair point! I don't think it should be a design goal, just a property that may be endemic.
Whoops, you're right!
Is that not essentially ensuring
I think it's fine for it to be true only over a single epoch, since we can enforce that epoch size in protocol (i.e. we could delay the application of other power changes). The smaller the epoch size the better, though. The other validators won't be delayed due to 1, since power updates don't have the delay; only joining / leaving the set does.
If epochs for power changes are acceptable, I believe my definition captures both cases.
I misread your equation slightly - indeed it is!
I suppose so, if we do enforce that epoch size in protocol. I think we don't want the minimum epoch size / proposer set update delay to be too large (e.g. multiple times the size of the validator set), since that would disadvantage new validators and discourage competition. When we write a more concrete instantiation, if a set epoch size is easier to formalize and the minimum isn't too large, that does seem sufficient.
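To make the epoch idea concrete, here is a minimal Go sketch of epoch-gated power changes, where mid-epoch updates are queued and applied only at epoch boundaries. All names here (`EpochGate`, `PowerUpdate`, etc.) are hypothetical illustrations, not Tendermint's actual API:

```go
package election

// PowerUpdate is a pending voting-power change for one validator.
type PowerUpdate struct {
	Validator string
	NewPower  int64
}

// EpochGate queues power changes received mid-epoch and applies them only
// at epoch boundaries, so election properties need only hold within an epoch.
type EpochGate struct {
	EpochSize int64         // e: number of blocks per epoch
	pending   []PowerUpdate // updates waiting for the next boundary
}

// Queue records a power change without applying it yet.
func (g *EpochGate) Queue(u PowerUpdate) {
	g.pending = append(g.pending, u)
}

// MaybeApply applies all queued updates when height is an epoch boundary
// and returns whatever took effect (nil mid-epoch).
func (g *EpochGate) MaybeApply(height int64, powers map[string]int64) []PowerUpdate {
	if height%g.EpochSize != 0 {
		return nil
	}
	applied := g.pending
	g.pending = nil
	for _, u := range applied {
		powers[u.Validator] = u.NewPower
	}
	return applied
}
```

The smaller `EpochSize` is, the less new validators are disadvantaged, which matches the concern above about keeping the minimum epoch size small.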
Design goal: We want to design a deterministic round robin algorithm that doesn't reward new or updated validator power.
If a new validator starts with accum -new_total_voting_power after "centering" the other validators' accums to zero, you get the property that you want: a new validator's proposer turn gets sufficiently "delayed". Proving the "fairness" goes like this: -new_total_voting_power ensures that the new validator doesn't become a proposer until a full "period" (for some definition of period related to fairness), because its accum won't be non-negative until then, and because all proposers (as long as there exists a non-new validator) always have non-negative accum. I think the new accum value for new validators could even be higher (perhaps the "centering" could happen after setting new validator accums to -new_total_voting_power, for example).

Before we explore randomness, we should make it work with a deterministic round robin algorithm.
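As a rough illustration of the scheme described above, here is a self-contained Go sketch of an accum-based round robin with per-round centering and the -new_total_voting_power starting accum for new validators. This is a simplification for intuition, not the actual Tendermint implementation (which differs in details such as the exact penalty and overflow handling):

```go
package election

// Validator carries voting power and the running accum used for election.
type Validator struct {
	Addr  string
	Power int64
	Accum int64
}

func totalPower(vals []*Validator) int64 {
	var total int64
	for _, v := range vals {
		total += v.Power
	}
	return total
}

// center shifts all accums so their average is zero, preventing the
// distribution from "drifting negative" over time.
func center(vals []*Validator) {
	var sum int64
	for _, v := range vals {
		sum += v.Accum
	}
	avg := sum / int64(len(vals))
	for _, v := range vals {
		v.Accum -= avg
	}
}

// addValidator joins a new validator with accum = -new_total_voting_power,
// so it roughly cannot propose until its accum has climbed back to
// non-negative, i.e. until about a full period has passed.
func addValidator(vals []*Validator, addr string, power int64) []*Validator {
	newTotal := totalPower(vals) + power
	return append(vals, &Validator{Addr: addr, Power: power, Accum: -newTotal})
}

// electProposer runs one round: every accum grows by its validator's power,
// the highest accum proposes and pays the total power back, then we recenter.
func electProposer(vals []*Validator) *Validator {
	var proposer *Validator
	for _, v := range vals {
		v.Accum += v.Power
		if proposer == nil || v.Accum > proposer.Accum {
			proposer = v
		}
	}
	proposer.Accum -= totalPower(vals)
	center(vals)
	return proposer
}
```

Note that in this sketch the centering runs after the new validator's accum is set (at the next election), which is the variant suggested above as possibly allowing an even higher starting accum.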
Can we consider this done re #3140?
Hmm, I think it depends whether we want to further formalise the "tolerance factor" (called C in the diff), and/or the penalty factor, both of which are still a bit hand-wavy at the moment. Also, did we want to actually prove fairness (with some sort of mathematical reduction like in this (hacky) Idris source, though hopefully cleaner)? #3140 seems to leave the formal proofs as "TODO". In practice the current system seems OK; this strikes me as lower priority than other formalisation tasks (e.g. the light client).
Spun off from #2718 and #2785.
Quoth @cwgoes (#2785 (comment)):
I think that is the goal here - we want to recenter the accum distribution at zero, each round. That prevents the accum from "drifting negative" and seems like it should ensure that a validator added with `-total_accum` is last in line (although we should more formally analyze this). If we add the accum when it is negative, we'll keep decreasing an already-negative average accum, which is not what we want.

I think these are some properties the proposer election algorithm ought to have (cc @ValarDragon), which it would be nice if we could test for or prove on paper:

1. Last in line: If a validator joins the bonded set, and no other power changes happen afterwards, they will be the last in the queue to propose (all other `n - 1` validators will propose first, and some may propose multiple times).
2. Consistent: If no validator power changes happen, the proposer election should be completely cyclical in some repeating period (the exact sequence of proposers repeats).
3. Fairly stake-proportional: Regardless of what other power changes happen, the fraction of proposals `f` by some validator `V` bonding at round `n` and unbonding at round `m`, at constant power `p` throughout, should, in the limit of the difference `m - n` to infinity, equal `p` divided by the average total power `P` over those rounds:

   `lim_{(m - n) → ∞} f = p / P`

Any other properties we think we want?
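One way to make property 3 precise, moving the average inside the limit (which also addresses the objection quoted below about averaging an infinite sequence), might be the following; the notation here is mine, not from the thread:

```latex
% V bonds at round n, unbonds at round m, with constant power p.
% P_r is the total bonded power at round r, and f(n, m) is the fraction
% of the rounds n, ..., m-1 in which V proposes. Then:
\[
  \lim_{m - n \to \infty}
  \left(
    f(n, m) - \frac{p}{\frac{1}{m - n} \sum_{r = n}^{m - 1} P_r}
  \right) = 0 .
\]
```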
Quoth @ValarDragon (#2785 (comment)):
Nice writeup @cwgoes! I don't think those properties detail the ideal stake election algorithm though.

On "last in line": a validator who joins with 20% of stake shouldn't be stuck behind a validator with .1% of stake who just proposed. It doesn't make sense for the 20% to be blocked on the .1%. New validators could instead be kept out of the proposer set for d blocks; then once they get to join the proposer set, they start as if they had just proposed in the current setting, so they're at the bottom. (Similar idea as to what you had mentioned during last week's tm call.)

On "consistent": this basically discredits any/all randomized election algorithms. I think randomized algorithms are far better due to the ability to prevent grinding the ordering of the proposer distribution. I don't understand why consistency is even important in a deterministic consensus algorithm.
On "fairly stake-proportional": I don't think this is correctly formalized. You can't take the average of an infinite sequence outside of a limit. The formalization at that infinite case also isn't very informative, since you do expect your power to fluctuate a lot. Additionally, you want properties to hold at smaller scales; we don't really get to make expectations all the way out to infinity. A better formalization / model, in my opinion:

Fix an epoch size e, and assume there are no power changes within an epoch. Your power in epoch i is p_i, and the total power in epoch i is P_i. For all n \in \Nat, the fraction of blocks f(n) you propose over epochs 0 through n is epsilon(n) + (1 / (n+1)) \sum_{i=0}^{n} p_i / P_i, where for all n > n_0, -acceptable_rational_function(n) < epsilon(n) < acceptable_rational_function(n). An acceptable rational function is a rational function which approaches 0 as n increases, chosen such that we consider the algorithm fair as long as the deviation is bounded by plus or minus acceptable_rational_function(n). (If we had secure randomness, we could instead make epsilon a negligible function.)

I think there needs to be some component of randomness here eventually. We're in a deterministic system, but we don't have to be. There are cool ways we can get secure randomness (VDF + Avalanche RANDAO). This is clearly something that can't be done until post-launch though.
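Written out, the epoch-based formalization above amounts to the following (with notation lightly normalized; `arf` abbreviates the "acceptable rational function"):

```latex
% Epoch size e, no power changes within an epoch; p_i is your power and
% P_i the total power in epoch i. The fraction f(n) of blocks you propose
% over epochs 0..n should satisfy, for all n > n_0:
\[
  f(n) = \varepsilon(n) + \frac{1}{n + 1} \sum_{i = 0}^{n} \frac{p_i}{P_i},
  \qquad
  \lvert \varepsilon(n) \rvert < \mathrm{arf}(n),
\]
% where arf(n) is a rational function tending to 0 as n grows, chosen so
% that a deviation within plus or minus arf(n) still counts as fair. With
% secure randomness, \varepsilon could instead be a negligible function.
```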