M1 Testing with MinerX.2 Discussion #6861
Replies: 7 comments 25 replies
-
I think I have both separate instances running but I'm getting errors with all the storage deal reporting tools... for example:
Any ideas?
-
@TippyFlitsUK would you please share your repos' … On the …
I believe that if you have …
-
I started my migration on Phase III (m1.3.2). May I suggest that during the upgrade process the instructions clearly advise storage providers to disable all online and offline deals. I also highly suggest that the DAG store migration include some type of progress indicator, such as:
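For illustration, a shard-count-based progress line is one possible shape for such an indicator (a sketch only; the function and counter names here are hypothetical, not the actual migration code, which lives in the Go dagstore):

```python
def migration_progress(migrated: int, total: int) -> str:
    """Format a one-line progress indicator for the DAG store migration.

    `migrated` and `total` are hypothetical counters; the real migration
    would supply the number of shards processed so far and the total.
    """
    pct = 100.0 * migrated / total if total else 100.0
    return f"DAG store migration: {migrated}/{total} shards ({pct:.1f}%)"

# Example: printed periodically during the migration.
print(migration_progress(250, 1000))
```

Even a coarse indicator like this would let an SP distinguish "slow but progressing" from "hung" during a long migration.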
-
I am testing m1.3.3.
-
Attaching storage to the markets process via the CLI didn't work as expected (it attached the storage location to the miner process instead). I'd guess (not tested) that https://docs.filecoin.io/mine/lotus/dagstore/#cli-commands …
-
I'm on m1.3.5.
Initializing a shard fails for a sector for which no unsealed copy exists.
When I specify a key whose state is under ShardStateNew with …
No recovery from the error is expected for sectors without copies.
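The behavior I'd expect is a guard along these lines (purely illustrative Python; the real dagstore is Go, and every name below is hypothetical rather than the actual API):

```python
class NoUnsealedCopyError(Exception):
    """Raised when a shard is registered for a sector with no unsealed copy."""

def register_shard(sector_has_unsealed: bool, key: str) -> str:
    """Hypothetical sketch: refuse to initialize a shard when the sector
    has no unsealed copy, instead of leaving the shard stuck in an
    error state under ShardStateNew with no recovery path."""
    if not sector_has_unsealed:
        raise NoUnsealedCopyError(f"no unsealed copy for shard {key}")
    return "ShardStateAvailable"
```

That is, failing fast with a clear error for copy-less sectors, rather than a shard that can neither initialize nor recover.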
-
This is a general discussion on M1 and the deal-making function of Filecoin, based on my experiences last night. For the first time (running m1.3.5) I was able to see the system at full load, and it was not pretty. This ended up being an 'all-nighter' as I attempted to resolve the core issue of high deal flow. I also encountered what appears to be a memory leak, but that is not the core issue.
Context: I am running a dedicated market node on m1.3.5 with bidbot (automated offline deals) and using CID-Gravity to manage deal prices and capacity. The sealing system has been able to process and seal up to 768 GiB/day of 32 GiB sectors when clients arrange for large batches (100+) of offline deals. Download speeds can be as fast as 32 GiB in 4-5 minutes. My verified deal price is set to zero and my unverified price is 0.000000012.
Assumption: Storage providers, Filecoin, and clients all want maximum efficiency in the deal-making pipeline, in order to ingest deals as affordably as possible at near hardware and network capacity. While offline deals can be planned, online deals are very unpredictable, so mechanisms are needed to help client systems manage deal flow such that they send valid deals in a manner that will not cause them to fail, time out, or be rejected for other reasons.
Problem #1 - Batch Published Deals (or PreCommit or Commit batches)
Problem #2 - Deal throttling required
As I monitored the market node last night, within just six hours I saw more than 100 fstmp* files in my TMPDIR and more than 60 transfers showing active in "Receiving Channels" when running lotus-miner data-transfers list. Under the best of conditions that deal load would take about 12 hours to process and seal (my MaxSealingSectorsForDeals setting is '18'). If this deal load continued without stopping, it would amount to 48 sectors per day, more than double my sealing capacity.
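The capacity arithmetic above can be checked with a small sketch (the figures are the ones quoted in this comment; variable names are mine):

```python
# Figures quoted above.
SECTOR_GIB = 32                 # sector size
SEALING_GIB_PER_DAY = 768       # observed best-case sealing throughput
INCOMING_SECTORS_PER_DAY = 48   # deal load if last night's rate continued

sealing_sectors_per_day = SEALING_GIB_PER_DAY // SECTOR_GIB      # 24
backlog_growth = INCOMING_SECTORS_PER_DAY - sealing_sectors_per_day

print(f"sealing capacity: {sealing_sectors_per_day} sectors/day; "
      f"backlog grows by {backlog_growth} sectors/day")
```

At twice capacity the backlog grows by a full day's throughput every day, which is why the queued deals eventually blow past their start epochs.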
The result would be a large backlog of downloaded deals that would ultimately expire and fail due to start-epoch math. (In this case, a memory leak caused the market node to crash as the deal load grew and grew, or after numerous restarts as the deal processes tried to catch up.) I deleted all fstmp files and lowered various parameters to restrict deal flow. For instance, CID-Gravity allows me to limit the number of deals (not bytes) per client per hour; however, it currently does not allow me to limit deal flow across all clients in aggregate.
The Challenge
Note @jennijuju @raulk @Stefaan-V @whyrusleeping @jsign @TippyFlitsUK @jimmylee
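One way to express the missing aggregate limit is a token bucket applied across all clients before any deal is accepted (a sketch under names of my own invention; neither CID-Gravity nor the market node exposes such a hook today, which is exactly the gap described above):

```python
import time

class AggregateDealThrottle:
    """Token bucket capping total accepted deals per hour across ALL clients,
    independent of any per-client limits applied earlier in the pipeline."""

    def __init__(self, deals_per_hour: int, now=time.monotonic):
        self.rate = deals_per_hour / 3600.0      # tokens refilled per second
        self.capacity = float(deals_per_hour)    # burst ceiling
        self.tokens = float(deals_per_hour)
        self.now = now                           # injectable clock for testing
        self.last = now()

    def allow(self) -> bool:
        """Return True and consume a token if a deal may be accepted now."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Per-client limits (like CID-Gravity's) would still sit in front of this; the bucket only enforces the total, so a burst from many well-behaved clients cannot collectively exceed sealing capacity.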
-
For discussing anything around M1 testing.