
storagefsm: Fix batch deal packing behavior #6041

Merged: 6 commits merged into master from fix/batch-deal-packing on May 31, 2021

Conversation

@magik6k (Contributor) commented Apr 14, 2021

Fixes #6013
Fixes #5077
Fixes #6010

@stuberman commented

I hope to test this later with 1.7.0-rc1

@stuberman commented

Updated and running deals now
lotus-miner version

Daemon: 1.7.1-dev+mainnet+git.7f2030d41+api1.0.1
Local: lotus-miner version 1.7.1-dev+mainnet+git.7f2030d41

lotus-worker info

Worker version: 1.0.0
CLI version: lotus-worker version 1.7.1-dev+mainnet+git.7f2030d41

First things I noticed:

  1. Published now: 6x32GiB deals
  2. This created 6 AddPiece (AP) sectors plus 4 WaitDeals sectors all at once
  3. MaxDealsPerPublishMsg = 16 (see the batching sketch below)
  4. Worker showing repeated WARN messages: “negative reserved storage: p.reserved=206158430208, reserved: -393216”
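
For context on items 1 and 3, here is a minimal Go sketch of how deal publishing is batched: deals accumulate until the batch reaches MaxDealsPerPublishMsg or a publish timer fires, so 6 deals under a cap of 16 still go out together in one message. The type and function names below are illustrative assumptions, not lotus's actual deal-publishing code.

```go
package main

import (
	"fmt"
	"time"
)

// pendingDeal and batcher are illustrative stand-ins, not lotus types.
type pendingDeal struct{ id int }

// batcher collects deals and flushes them as a single publish message,
// either when the batch reaches the per-message cap or when the publish
// period elapses.
type batcher struct {
	maxDealsPerMsg int           // mirrors the MaxDealsPerPublishMsg setting
	period         time.Duration // mirrors a publish-period timer (assumed)
	pending        []pendingDeal
}

func (b *batcher) add(d pendingDeal) {
	b.pending = append(b.pending, d)
	if len(b.pending) >= b.maxDealsPerMsg {
		b.flush("batch full")
	}
}

func (b *batcher) flush(reason string) {
	if len(b.pending) == 0 {
		return
	}
	// In lotus this corresponds to one PublishStorageDeals message on chain.
	fmt.Printf("publishing %d deals in one message (%s)\n", len(b.pending), reason)
	b.pending = nil
}

func main() {
	b := &batcher{maxDealsPerMsg: 16, period: time.Hour}
	for i := 0; i < 6; i++ {
		b.add(pendingDeal{id: i})
	}
	// A timer would normally trigger this when the publish period expires;
	// with 6 deals and a cap of 16, all 6 go out in a single message.
	b.flush("publish period elapsed")
}
```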

@stuberman commented Apr 17, 2021

More updates:
I recently received 14 more small deals. 13 deals were batch published, and two WaitDeals sectors were created. The first WaitDeals sector (1078) accepted all 13 deals via AddPiece.
Both sectors remained in WaitDeals. When the 14th deal came in, I pushed it to publish once it finished transferring, and it was also added to sector 1078.

I think this is exactly the behavior we are looking for. (Sector 1079 is superfluous at this point, and I expect it to remain in WaitDeals until it both gets deals and the SealDelay timer expires.) A rough sketch of this packing logic follows the sector listing below.

1078 WaitDeals  NO    NO   n/a                14        
1079 WaitDeals  NO    NO   n/a                CC
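
To illustrate the packing behavior described above, here is a rough Go sketch of the selection rule: an incoming deal piece goes into an existing WaitDeals sector when one has room, and a new sector is opened only while the number of open WaitDeals sectors is below MaxWaitDealsSectors; a sector moves on to sealing once it fills up or its WaitDealsDelay (SealDelay) timer expires. The names and numbers are illustrative assumptions, not the actual storagefsm code.

```go
package main

import "fmt"

// sector and packer are simplified stand-ins for the sealing FSM's
// WaitDeals sectors; these types are illustrative, not lotus internals.
type sector struct {
	id        int
	pieces    int
	maxPieces int // how many deal pieces this sector can hold
}

type packer struct {
	open                []*sector // sectors currently sitting in WaitDeals
	maxWaitDealsSectors int       // mirrors the MaxWaitDealsSectors setting
	nextID              int
}

// addPiece packs an incoming deal piece into an existing WaitDeals sector
// when one has room, and only opens a new sector if the number of open
// WaitDeals sectors is still below the configured cap.
func (p *packer) addPiece() (*sector, error) {
	for _, s := range p.open {
		if s.pieces < s.maxPieces {
			s.pieces++
			return s, nil
		}
	}
	if len(p.open) >= p.maxWaitDealsSectors {
		return nil, fmt.Errorf("all %d WaitDeals sectors are full; deal must wait", len(p.open))
	}
	s := &sector{id: p.nextID, pieces: 1, maxPieces: 16}
	p.nextID++
	p.open = append(p.open, s)
	return s, nil
}

func main() {
	p := &packer{maxWaitDealsSectors: 2, nextID: 1078}
	for i := 1; i <= 14; i++ {
		s, err := p.addPiece()
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("deal %d packed into sector %d (%d pieces)\n", i, s.id, s.pieces)
	}
	// All 14 deals land in sector 1078, matching the report above; a second
	// sector would only be opened once 1078 fills up or gets sent to sealing
	// (e.g. when the WaitDealsDelay / SealDelay timer expires).
}
```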

@jennijuju (Member) commented

We need to make sure that smaller deals don't end up spread across a huge number of new sectors, all waiting to be sealed, when MaxWaitDealsSectors is set to a value other than 1.

@arajasek (Contributor) commented May 3, 2021

Context from @jennijuju: Magik is pretty sure there's a bug in this that we need to 🕵️‍♀️

@jennijuju jennijuju mentioned this pull request May 4, 2021
@BigLep BigLep modified the milestones: v1.9.x, Hyperdrive May 14, 2021
@jennijuju jennijuju added the P1 P1: Must be resolved label May 17, 2021
@BigLep BigLep added P2 P2: Should be resolved and removed P1 P1: Must be resolved labels May 26, 2021
@magik6k (Contributor, Author) commented May 30, 2021

@arajasek (Contributor) left a review comment

SGWM

@magik6k magik6k merged commit 2d6a159 into master May 31, 2021
@magik6k magik6k deleted the fix/batch-deal-packing branch May 31, 2021 18:45
@jennijuju (Member) commented

More testing is required on this PR and we won't be backporting it for v1.10.0, but it should be merged for v1.11.0.

@jennijuju jennijuju modified the milestones: Network Hyperdrive, v1.11.x May 31, 2021
@jennijuju (Member) commented

Many miners have been running this for more than two weeks, and it fixed the issue for them. Having sectors created properly is now crucial for miners that are actively making deals. Therefore, we should backport it to v1.10.0.

magik6k added a commit that referenced this pull request Jun 18, 2021
Backport #6041 - storagefsm: Fix batch deal packing behavior