Location of participants in Lotus stack, multiple identities in one instance #103
The design I settled on:
See #188 for PR on it. The suggestion there was to separate the PR into:
This has proven slightly trickier than we thought. The protocol code assumes it can update state more or less synchronously as a result of its own decisions. This used to be achieved by internally receiving messages sent to self synchronously. That delivery became asynchronous as part of #259, which introduced a race between receiving messages sent to self and receiving alarms (#316). I explored an internal send-to-self of unvalidated messages, but immediately ran into not knowing the node's own power, which makes the state updates impossible. Some potential paths forward that I see:
The gpbft implementation implicitly assumes that broadcasts of `CONVERGE` messages to self are delivered immediately. In practice this assumption does not hold because of the complexity of deferred signing and asynchronous message delivery. The changes here relax this assumption by explicitly notifying the local converge state that the self participant has begun the `CONVERGE` step, providing the self proposal and its justification. The code then falls back to that data whenever a search of the converge state yields no results due to asynchronous message delivery. Further, the code ignores the self converge value once at least one broadcast message is received. Additionally, the changes remove zero latency for messages to self in simulations to assert more strongly that synchronous message delivery to self is no longer required (neither for `GMessage` nor alarms). Fixes #316 Reverts #318 Relates to #103 (comment)
…ly (#334) * Relax the assumption of receiving own `CONVERGE` messages synchronously * Adjust naming and comments. --------- Co-authored-by: Alex North <445306+anorth@users.noreply.github.com>
For now, I've assumed that the f3 active participant would live in Lotus.
This might not have been as good an assumption.
An active participant is tied to SP code and identity; that flow lives in lotus-miner/lotus-provider.
At the same time, the lotus miner is not connected with the global pubsub (AFAIK).
A single Lotus node can also host multiple providers, which, if f3 continues to live there, would necessitate either multiple concurrent instances in one Lotus node or f3 being able to handle multiple identities.
As far as I know, the protocol flow is independent of our own identity, which should make running multiple identities at the same time much easier. We could abstract out signing and VRF generation as part of the broadcast operation. Essentially, the instance itself stops caring about our own identity.