
head change coalescer #4688

Merged
magik6k merged 9 commits into master from feat/head-change-coalscer on Nov 10, 2020

Conversation

vyzo
Contributor

@vyzo vyzo commented Nov 2, 2020

No description provided.

@vyzo vyzo requested a review from Kubuxu November 4, 2020 11:59
@vyzo vyzo marked this pull request as ready for review November 4, 2020 16:22
@vyzo vyzo requested a review from magik6k as a code owner November 4, 2020 16:22
Contributor

@arajasek arajasek left a comment


Logic looks good to me!

Comment on lines +129 to +133
if cancel {
continue
}

newRevert = append(newRevert, ts)

Nit: Can these be changed to

if !cancel {
    newRevert = append(newRevert, ts)
}

@arajasek arajasek added this to the 🖇Lotus Stabilization milestone Nov 6, 2020
@whyrusleeping
Member

@vyzo so the code here appears to work pretty well, but I'm still seeing the mempool triggering recent state calls. Maybe we can just bump the timeout number? Will try now...

@whyrusleeping
Member

[screenshot: 2020-11-06-135712_779x1810_scrot]

I bumped the delay up to 3 seconds and ran this for a bit. It's better, but I'm still seeing state computations being triggered by the mempool (and if this were fully fixed, I'd expect to see them triggered only by the FundManager for now).

@whyrusleeping
Member

To check this, I just grabbed a random 120 second profile of the lotus node while it's fully caught up.

@vyzo
Contributor Author

vyzo commented Nov 7, 2020

We can try to add some more smarts -- for instance, when it's time to trigger the callback, check whether the last coalesce happened within some bound (say 100ms), and if so, wait some more.
This might succeed in coalescing all the incoming blocks at the beginning of the epoch.
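
A minimal sketch of that check in Go; the names (coalescer, lastCoalesce, dispatch) are illustrative, not the actual lotus code, and synchronization around the shared fields is omitted:

    package coalesce

    import "time"

    // Illustrative sketch only; synchronization omitted.
    type coalescer struct {
        timer        *time.Timer
        lastCoalesce time.Time // when the most recent head change was merged
        dispatch     func()    // delivers the coalesced head change
    }

    // onTimer runs when the coalesce timer fires: if the last coalesce
    // happened within the bound, wait some more before dispatching.
    func (c *coalescer) onTimer() {
        const bound = 100 * time.Millisecond
        if time.Since(c.lastCoalesce) < bound {
            c.timer.Reset(bound) // extend the window; more changes may follow
            return
        }
        c.dispatch()
    }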

@vyzo
Contributor Author

vyzo commented Nov 7, 2020

Also, some state computations are unavoidable; we do have to compute at some point. We would just like to avoid the unnecessary ones :)

@vyzo
Contributor Author

vyzo commented Nov 9, 2020

Added some more smarts to the coalescing algorithm, controlled by 3 parameters: minDelay, maxDelay, and mergeInterval.

When the first head change (since the last dispatch) is received, the coalescer schedules a timer for minDelay to coalesce more head changes.
When the timer fires, it checks the time of the last coalesce; if it is within the mergeInterval, it extends the coalesce window to merge some more.
Finally, delivery of the head change is delayed by no more than maxDelay.

For the mpool, I have set minDelay=2s, maxDelay=6s, and mergeInterval=1s.
Hopefully this will capture the common case of blocks being delivered within 6s.
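
Roughly, the scheme could look like the following Go sketch; the names (headCoalescer, onHeadChange, onTimer) are illustrative rather than the actual lotus implementation, and locking around the shared fields is omitted:

    package coalesce

    import "time"

    // Sketch of the three-parameter scheme described above.
    type headCoalescer struct {
        minDelay      time.Duration // initial window after the first head change
        maxDelay      time.Duration // hard cap on how long delivery is deferred
        mergeInterval time.Duration // "recent enough" bound for extending the window

        timer        *time.Timer
        firstChange  time.Time // first head change since the last dispatch
        lastCoalesce time.Time // most recent head change merged into the batch
        dispatch     func()    // delivers the coalesced head change
    }

    // onHeadChange is called for every incoming head change.
    func (c *headCoalescer) onHeadChange() {
        now := time.Now()
        if c.timer == nil {
            // first change since the last dispatch: open a minDelay window
            c.firstChange = now
            c.timer = time.AfterFunc(c.minDelay, c.onTimer)
        }
        c.lastCoalesce = now
    }

    // onTimer decides whether to deliver now or keep coalescing.
    func (c *headCoalescer) onTimer() {
        sinceLast := time.Since(c.lastCoalesce)
        total := time.Since(c.firstChange)
        if sinceLast < c.mergeInterval && total < c.maxDelay {
            // a change landed recently: extend, but never past maxDelay
            ext := c.mergeInterval
            if rem := c.maxDelay - total; rem < ext {
                ext = rem
            }
            c.timer.Reset(ext)
            return
        }
        c.timer = nil
        c.dispatch()
    }

With the mpool settings above (minDelay=2s, maxDelay=6s, mergeInterval=1s), a burst of blocks arriving at the start of an epoch keeps extending the window in 1s steps until the burst goes quiet or the 6s cap is reached.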

Contributor

@arajasek arajasek left a comment

New version looks good to me

@magik6k magik6k merged commit 2ae0edc into master Nov 10, 2020
@magik6k magik6k deleted the feat/head-change-coalscer branch November 10, 2020 18:40