
Random Election using Stake Amount as Discrete Distribution #22

Closed
torao opened this issue Jan 23, 2020 · 12 comments
Labels
C: proposal Classification: Proposal for specification, algorithm, architecture, or communication

@torao
Contributor

torao commented Jan 23, 2020

The scheme of selecting a Proposer and Validators based on PoS can be regarded as random sampling from a population according to a discrete probability distribution.

  • S: the total amount of issued stake
  • s_i: the stake amount held by candidate i (Σ s_i = S)

Random Sampling based on Categorical Distribution

For simplicity, here is an example in which only a Proposer is selected from candidates with winning probability of p_i = s_i / S.

First, create a pseudo-random number generator seeded with vrf_hash, and determine the threshold for the Proposer. This random number algorithm should be deterministic and portable across programming languages, but it need not be cryptographic.

// Seed a deterministic PRNG with (a 64-bit value derived from) vrf_hash.
val rand = new scala.util.Random(vrf_hash)
val threshold: Long = (rand.nextDouble() * S).toLong

Second, to make the result deterministic, retrieve the candidates sorted in descending order of stake, with the public key as a tie-breaker.

val candidates: List[Candidate] = query("SELECT * FROM candidates WHERE stake > 0 ORDER BY stake DESC, public_key")

Finally, find the candidate whose cumulative-stake interval contains threshold; that candidate becomes the Proposer.

import scala.util.control.Breaks.{break, breakable}

var proposer = candidates.last
var cumulativeStakes: Long = 0
breakable {
  for (c <- candidates) {
    // The proposer is the candidate whose interval [cumulativeStakes, cumulativeStakes + stake) contains the threshold.
    if (cumulativeStakes <= threshold && threshold < cumulativeStakes + c.stake) {
      proposer = c
      break
    }
    cumulativeStakes += c.stake
  }
}

This is a common way of sampling from a categorical distribution using a uniform random number. It is like throwing a dart at a spinning wheel in which each candidate's segment width is proportional to its probability.


Selecting a Consensus Group

By applying the above, we can select a consensus group consisting of one Proposer and V Validators. This is equivalent to performing V+1 categorical trials, i.e., the same as random sampling under a multinomial distribution. The idea can be illustrated with a multinomial distribution demo I created in the past; it corresponds to the model that selects a Proposer and Validators when K is the number of candidates and n = V+1.

As an intuitive code example, the categorical sampling above can be extended to the multinomial case.

// Draw V+1 independent thresholds from the same seeded PRNG.
val thresholds = new Array[Long](V + 1)
for (i <- thresholds.indices) {
  thresholds(i) = (rand.nextDouble() * S).toLong
}

// For each threshold, find the candidate whose cumulative-stake interval contains it.
var cumulativeStakes: Long = 0
val winner = new Array[Candidate](thresholds.length)
for (c <- candidates) {
  for ((t, i) <- thresholds.zipWithIndex) {
    if (cumulativeStakes <= t && t < cumulativeStakes + c.stake) {
      winner(i) = c
    }
  }
  cumulativeStakes += c.stake
}
val proposer = winner(0)
val validator1 = winner(1)
...

In the above steps, a single candidate may assume multiple roles. If you want to exclude such a case, you can remove each winning candidate from the candidate list. In that case, the thresholds must be recalculated because the population total S changes. A sketch of this selection without replacement follows below.
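As a minimal sketch of this selection without replacement, assuming the same candidates list, rand, and Candidate type as in the snippets above (the helper name selectWithoutReplacement is mine, not part of the proposal):

// Select `count` distinct winners: after each pick, remove the winner,
// shrink the remaining total stake, and redraw the next threshold.
def selectWithoutReplacement(candidates: List[Candidate],
                             rand: scala.util.Random,
                             count: Int): List[Candidate] = {
  var remaining  = candidates
  var remainingS = candidates.map(_.stake).sum
  var winners    = List.empty[Candidate]
  for (_ <- 0 until count) {
    val threshold = (rand.nextDouble() * remainingS).toLong
    var cumulative = 0L
    var winner = remaining.last
    var found  = false
    val it = remaining.iterator
    // Linear scan: the winner owns the interval [cumulative, cumulative + stake).
    while (it.hasNext && !found) {
      val c = it.next()
      if (cumulative <= threshold && threshold < cumulative + c.stake) {
        winner = c
        found = true
      } else {
        cumulative += c.stake
      }
    }
    winners = winners :+ winner
    remaining = remaining.filterNot(_ == winner)
    remainingS -= winner.stake
  }
  winners
}

With this, winners(0) can be taken as the Proposer and the rest as Validators. As discussed later in this thread, repeating the cumulative scan per pick raises the cost to roughly O(N×V).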

Computational Complexity

  1. typical sort algorithm: O(N log N), where N is the number of all candidates.
  2. generating random numbers: O(V + 1).
  3. winner extraction: O(N × (V+1)) in the worst case; in many cases the loop can exit early.

The computational complexity is dominated by the number of candidates N. There is room for improvement by caching the candidate list already sorted by stake.

@torao torao added the C: discussion Classification: Discuss something label Jan 23, 2020
@torao torao added this to the Evolve Leader Election into VRF milestone Jan 23, 2020
@torao torao added C: proposal Classification: Proposal for specification, algorithm, architecture, or communication and removed C: discussion Classification: Discuss something labels Jan 24, 2020
@torao torao changed the title from "Random Election based on Stake Amount" to "Random Election using Stake Holdings as a Discrete Distribution" Jan 24, 2020
@torao torao changed the title from "Random Election using Stake Holdings as a Discrete Distribution" to "Random Election using Stake Amount as Discrete Distribution" Jan 24, 2020
@zemyblue
Member

zemyblue commented Jan 28, 2020

This is a good suggestion. 👍
But the computational cost becomes very high if there are many validators.

In the above steps, a single candidate may assume multiple roles. If you want to exclude such a case, you can remove each winning candidate from the candidate list. In that case, the thresholds must be recalculated because the population total S changes.

As you mention above, the number of selected validators could then be less than V+1. In this case, is the validator selection process repeated?

@torao
Contributor Author

torao commented Jan 29, 2020

But the computational cost becomes very high if there are many validators.

The amount of calculation depends on V, but it consists only of simple addition, random number generation and conditional branching (rather than hash computation). So I roughly expect the cost to be modest, e.g., less than 100ms for V ≤ 10^5.

In this case, is the validator selection process repeated?

Yes, it is. If the configuration doesn't allow a node to hold multiple roles, we 1) remove the selected validator from the candidate list, 2) recalculate S, and 3) find the next winner, and repeat these steps V times. Since the cost of step 2) is added, the overall computational complexity becomes about O(N×V) + O(N×(V+1)) ≈ O(2N×V) in the worst case.

@zemyblue
Member

OK

And how about using a hash function like SHA-256 instead of a random function?

@torao
Contributor Author

torao commented Jan 30, 2020

I think it mainly depends on its speed. And there are several considerations when using a cryptographic hash function.

  • Basically, cryptographic hash functions and cryptographic pseudo-random number generators are computationally expensive and slow. The hash generated by the VRF already has cryptographic pseudo-random properties, so applying a seed that combines it with the candidate's unique value to a (non-cryptographic) pseudo-random function will produce a sufficiently random value. In this case, we can use a fast pseudo-random algorithm like Xorshift(vrf_hash || public_key); see the sketch below.
  • Strictly speaking, hash functions aim at zero or very low collision mappings and don't consider value uniformity or periodicity. What our use case requires is a uniform mapping onto the [0,1) interval, rather than a collision-free identity mapping.
  • SHA-256 is significantly slower in a 64-bit environment, so I think SHA-512 is the better choice if we do use a cryptographic hash.
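A minimal sketch of the Xorshift(vrf_hash || public_key) idea above, assuming vrf_hash and public_key are byte arrays; the seed-folding constant and the xorshift64 shift constants (13/7/17) are illustrative choices, not part of the proposal:

// Fold the concatenated bytes of vrf_hash || public_key into a 64-bit seed
// (a simple illustrative folding; the real seed derivation would be specified separately).
def foldToSeed(vrfHash: Array[Byte], publicKey: Array[Byte]): Long =
  (vrfHash ++ publicKey).foldLeft(1125899906842597L) { (acc, b) => acc * 31 + (b & 0xff) }

// One xorshift64 step (shift constants 13/7/17), used as a cheap, deterministic mixer.
def xorshift64(x0: Long): Long = {
  var x = if (x0 != 0) x0 else 1L // xorshift state must never be 0
  x ^= x << 13
  x ^= x >>> 7
  x ^= x << 17
  x
}

// Map the mixed value onto [0, 1) using the top 53 bits, so it is exact as a Double.
def candidateValue(vrfHash: Array[Byte], publicKey: Array[Byte]): Double =
  (xorshift64(foldToSeed(vrfHash, publicKey)) >>> 11).toDouble / (1L << 53).toDouble

This costs a few shifts and multiplications per candidate, in contrast to a full cryptographic hash invocation.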

@zemyblue
Member

My concern is that the random function can vary from platform to platform, programming language, and compiler version. (https://en.wikipedia.org/wiki/Random_number_generation) So I suggest using a hash function.

@egonspace
Contributor

egonspace commented Jan 30, 2020

Random sampling is better than my proposal because it requires less computation. I have two comments about this proposal.

Duplicate election

As my proposal suggests, equity should be considered in the case of a duplicate election.
If a participant with a large stake is elected twice in a round, they would be at a reward disadvantage compared to someone who splits their stake as finely as possible and runs many nodes.
* e.g., rewards for 1 node with 10000 stake < rewards for 1000 nodes with 10 stake each, during heights h1~hn

So, in this case, a validator elected multiple times must receive some additional reward, and the additionally elected validator should receive a partial reward for the validation job. (Please refer to #17)

I'll think more about the reward policy and then make a separate issue.

A way to reduce computations

I think we don't need to recalculate this at every round.

for(c <- candidates){
  if(cumulativeStakes <= threshold && threshold < cumulativeStakes + c.stake){
    proposer = c
    break
  }
  cumulativeStakes += c.stake
}

The validator order stays fixed as long as no staking tx is executed.
So if we store the following data, we can easily verify a proposer or validator when receiving a proposal or a vote.

order  candidate  staking  position
1      c100       10000    0, 10000
2      c068       9000     10000, 19000
3      c021       8000     19000, 27000
...    ...        ...      ...
78     c010       1        99999, 100000

The total stake is 100000.
This table is updated whenever a staking tx is executed.

Suppose I am a validator and someone proposes a block.
I know the threshold and that proposer's position, so I can verify whether they are the right proposer without doing the whole calculation and sorting. A sketch of this check follows below.
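A minimal sketch of this verification, assuming each stored row carries the cumulative interval [start, end) for its candidate; the Entry type and verifyProposer helper are illustrative names of mine, not part of the comment above:

// The stored table, rebuilt only when a staking tx is executed; a few rows from the example above.
case class Entry(candidate: String, stake: Long, start: Long, end: Long)

val table: Seq[Entry] = Seq(
  Entry("c100", 10000, 0L, 10000L),
  Entry("c068", 9000, 10000L, 19000L),
  Entry("c021", 8000, 19000L, 27000L)
  // ...
)
val positions: Map[String, Entry] = table.map(e => e.candidate -> e).toMap

// Verifying a claimed proposer for a round is a single interval check against its stored position,
// with no re-sorting and no scan over all candidates.
def verifyProposer(claimedId: String, threshold: Long): Boolean =
  positions.get(claimedId).exists(e => e.start <= threshold && threshold < e.end)

// e.g. verifyProposer("c068", 12345) == true, while verifyProposer("c100", 12345) == false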

@torao
Contributor Author

torao commented Jan 30, 2020

@zemyblue

My concern is that the random function can vary from platform to platform, programming language, and compiler version.

The PRNG mentioned here means an algorithm defined by the LINK 2 Network as part of its specification (rather than one provided by a language runtime or an external library such as libc). Typical PRNG algorithms, such as Xorshift, LCGs, and MT, produce the same results on all platforms, languages, and compiler versions as long as the algorithm and its parameters are the same. A sketch of such a fully specified generator is below.
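As an illustration only (not the actual LINK specification), here is a xorshift64-based generator whose behavior is fixed entirely by the written algorithm and constants, so any compliant implementation reproduces the same thresholds from the same vrf_hash-derived seed:

// A PRNG specified purely by its algorithm and constants (xorshift64, shifts 13/7/17).
// Given the same 64-bit seed, every implementation in any language produces the same sequence.
final class SpecRandom(seed: Long) {
  private var state: Long = if (seed != 0) seed else 1L // state must never be 0

  def nextLong(): Long = {
    state ^= state << 13
    state ^= state >>> 7
    state ^= state << 17
    state
  }

  // Uniform value in [0, 1) built from the top 53 bits, so it is exactly representable as a Double.
  def nextDouble(): Double = (nextLong() >>> 11).toDouble / (1L << 53).toDouble
}

// Drop-in for the earlier threshold generation (seedFrom is a hypothetical, specified 64-bit derivation):
// val rand = new SpecRandom(seedFrom(vrf_hash))
// val threshold: Long = (rand.nextDouble() * S).toLong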

@zemyblue
Member

If a participant with a large stake is elected twice in a round, they would be at a reward disadvantage compared to someone who splits their stake as finely as possible and runs many nodes.

But it is not disadvantageous, because the voting power is set by the user's staked value; one node is not one voting power.
Therefore, rewards are given in proportion to the staked value.

But I'm also thinking about what it would be like to give a candidate as many votes as the number of times they are elected.
That means if a user with 10 voting power is elected 2 times, that user's voting power becomes 2.
But I am not sure this is good. I need to simulate it more.

@torao
Contributor Author

torao commented Jan 30, 2020

@egonspace

Duplicate election

This proposal doesn't cover the incentive scheme, so that needs to be handled separately. We'll discuss it in #17.

A way to reduce computations

That suggestion is useful. We would be able to reuse the results of the categorical distribution until the next stake transaction is issued. If stakes don't change often, we may also be able to use a structure such as a binary tree, R-Tree, or B-Tree instead of a linear search; in that case, step 3 (winner extraction) becomes O(V log N). A sketch using a cached cumulative-stake array with binary search is below.
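A minimal sketch of one way to reach the O(V log N) behavior mentioned above, using a binary search over cached cumulative sums rather than an explicit tree structure; it assumes the same candidates, thresholds, and Candidate type as in the earlier snippets:

// Cache the stake-descending candidate list and its cumulative boundaries,
// recomputed only when a stake transaction changes them.
// Candidate i owns the interval [cumulative(i), cumulative(i + 1)).
val sortedCandidates: IndexedSeq[Candidate] = candidates.toIndexedSeq
val cumulative: Array[Long] = sortedCandidates.scanLeft(0L)(_ + _.stake).toArray

// Binary search for the winner of one threshold: O(log N) per draw, O(V log N) overall.
def winnerOf(threshold: Long): Candidate = {
  var lo = 0
  var hi = sortedCandidates.length - 1
  while (lo < hi) {
    val mid = (lo + hi) / 2
    if (cumulative(mid + 1) <= threshold) lo = mid + 1 else hi = mid
  }
  sortedCandidates(lo)
}

val winners: Array[Candidate] = thresholds.map(winnerOf)

The cumulative array here plays the same role as the position table proposed above, so the two ideas can share the same cached structure.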

@egonspace
Contributor

A problem with this algorithm was raised in the last open session.

Random election cannot prevent some candidate from becoming a validator whose voting power exceeds their stake, or in the extreme case exceeds 1/3 of the total.

@torao
Contributor Author

torao commented Mar 3, 2020

Here are a few points.

  • From my private simulation results, the frequency of a BFT violation caused by a single node under uniform staking was less than 10^-6 (it may be even lower, but this is the figure for now).
  • However, when considering collusion, we must consider the potential for a BFT violation in the summed VotingPower of 2 nodes, 3 nodes, 10 or 100 nodes. The single-node case is just one instance of this consideration.
  • Simulations show that a BFT violation is likely to occur even if the number of Byzantine nodes is much less than 1/3 of the candidates (note that this is a fundamental problem of PoS + BFT, not specific to the categorical or binomial distribution).
  • For example, if we select 25 out of 1000 nodes that contain 100 Byzantine nodes (that is, adversaries hold 1/10 of the stake), the frequency of a BFT violation in which more than 1/3 of the 25 (f+1, i.e., ≧9) are Byzantine is 0.066%. This means about 1.2 occurrences per hour in an environment that generates a block every two seconds. It increases rapidly, with the BFT violation rate exceeding 1% at 150 Byzantine nodes.
  • Going back to the first point, preventing a single node from acquiring more votes than expected might complicate the concept. And adversaries can simply evade it by distributing their stakes across multiple nodes so that they don't look suspicious.
  • Fundamentally, even if one round fails this way, the VRF gives us the resilience that the failure doesn't affect later rounds. It's better to use that same concept to handle these cases than to design the single-node and multi-node cases individually.

Strictly speaking, a single node "accidentally" obtaining more than 1/3 (f+1) of the voting power is a slightly different issue from distributed Byzantine nodes "accidentally" obtaining f+1 of the validator seats. However, the former has a negligible probability compared to the latter, which will occur with an appreciable frequency. Therefore, I believe it is reasonable to treat these in a common manner as the "accidental (expected) BFT violation problem".


@torao
Contributor Author

torao commented Mar 17, 2020

I'm writing up the PoC results on the wiki page. However, images can't be uploaded to the wiki page, so I'm attaching them to this ticket and using their URLs.

[10 attached PoC result images]

@tnasu tnasu closed this as completed Mar 19, 2020