Poor append-only performance with large files #1054
I added this, which helps: #1056
Hi @andyp1per, thanks for creating an issue. Honestly, if your logging speed is mission critical, and you're putting in the effort to make the hack in #564 (comment) work, I would consider not storing the log in a file and instead reserving a fixed amount of raw flash to hold the log. The speed in the chip's datasheet is the maximum speed, and any filesystem will necessarily be slower. You could still store the log size/offset in a file to benefit from power-loss resilience.
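A minimal sketch of that suggestion, not taken from the thread: log data goes to a reserved raw-flash region via a hypothetical `raw_flash_write()` driver call, and only the current offset is checkpointed in a small littlefs file. The region size, file name, and checkpoint frequency are all placeholders.

```c
#include <stdint.h>
#include "lfs.h"

// Hypothetical reserved raw-flash region for the log.
#define LOG_REGION_BASE 0x00000000
#define LOG_REGION_SIZE (16 * 1024 * 1024)

// Placeholder for the board's own raw flash driver.
extern int raw_flash_write(uint32_t addr, const void *buf, uint32_t len);

static uint32_t log_offset;  // bytes written into the raw region so far

// Append a chunk of log data to raw flash, then checkpoint the offset in a
// littlefs file so the log can be recovered after power loss. A real
// implementation would likely checkpoint less often than every append.
int log_append(lfs_t *lfs, const void *buf, uint32_t len) {
    if (log_offset + len > LOG_REGION_SIZE) {
        return -1;  // region full
    }

    int err = raw_flash_write(LOG_REGION_BASE + log_offset, buf, len);
    if (err) {
        return err;
    }
    log_offset += len;

    lfs_file_t f;
    err = lfs_file_open(lfs, &f, "log_offset",
            LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC);
    if (err) {
        return err;
    }
    lfs_ssize_t res = lfs_file_write(lfs, &f, &log_offset, sizeof(log_offset));
    if (res < 0) {
        lfs_file_close(lfs, &f);
        return (int)res;
    }
    return lfs_file_close(lfs, &f);
}
```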
It sounds like you're running into #75, the issue being that block allocation/gc ultimately scales with the total size of the filesystem.
This also makes it sound like a gc bottleneck.

The gc scan runs as a part of block allocation. When the lookahead buffer is exhausted, littlefs traverses the filesystem to figure out what blocks are still in use (or more accurately, which blocks are not in use). This traversal grows with the size of the filesystem. There are plans in the works to add an optional block map to avoid this, but it's a part of a larger piece of work. To make a block map work, littlefs needs to understand when blocks are no longer in use, which it currently doesn't.

Some possible workarounds:

Increasing the lookahead buffer may help, but the prog/read/file caches only prevent multiple reads to the same block. littlefs currently doesn't have multi-block caching. It's low priority compared to things that require disk changes, but in theory multi-block caching could help here. Though you would need to be careful to make sure gc doesn't just thrash the multi-block cache on every scan...
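To make the cost concrete, the public `lfs_fs_traverse()` API performs the same style of whole-filesystem walk that the allocator relies on. A small sketch (not from the thread) that just counts visited blocks, which is why the work grows with how much data the filesystem holds:

```c
#include "lfs.h"

// Callback invoked once per block reference found during the traversal.
// Note: the same block may be reported more than once, so a real count
// would need to deduplicate.
static int count_block(void *ctx, lfs_block_t block) {
    (void)block;
    (*(lfs_size_t *)ctx)++;
    return 0;
}

// Walk every block currently referenced by the filesystem; the runtime of
// this call scales with the amount of data stored, just like the gc scan.
lfs_size_t count_used_blocks(lfs_t *lfs) {
    lfs_size_t used = 0;
    lfs_fs_traverse(lfs, count_block, &used);
    return used;
}
```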
It's technically related to total filesystem size, which is arguably worse. This is one piece of a number of performance issues in littlefs that are being worked on. Unfortunately there's not much to show at this stage. With disk compatibility being the way it is, it's difficult to improve things incrementally.
Thanks for the reply:
I don't think there's an easy answer without benchmarking on the device. It's a tradeoff of RAM against how often garbage collection runs, though there is no benefit to a lookahead buffer larger than block_count/8 bytes.
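As a rough illustration of that sizing rule, a hedged configuration sketch using littlefs v2-style `lfs_config` fields; the geometry matches the chip discussed above, but the cache and block_cycles values are placeholders, not recommendations:

```c
#include "lfs.h"

// Example geometry: 1024 blocks of 128KiB.
#define BLOCK_COUNT 1024

// The lookahead buffer is a bitmap with one bit per block, so anything
// larger than block_count/8 bytes can't track additional blocks.
#define LOOKAHEAD_SIZE (BLOCK_COUNT / 8)  // 128 bytes here

const struct lfs_config cfg = {
    // .read/.prog/.erase/.sync: block device hooks, omitted here

    .read_size      = 2048,
    .prog_size      = 2048,
    .block_size     = 131072,
    .block_count    = BLOCK_COUNT,
    .block_cycles   = 500,
    .cache_size     = 2048,
    .lookahead_size = LOOKAHEAD_SIZE,
};
```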
The downsides of bigger blocks are 1. less block granularity, so things like small non-inlined or unaligned files can end up wasting more space, and 2. more expensive in-block operations, specifically metadata logs. This is another littlefs performance bottleneck, in that metadata compaction also grows more expensive with the block size.
The neat part is littlefs doesn't really care about the physical erase size. It's up to the block device layer what block size it reports, as long as it remains a multiple of the physical erase size. Though at some point I think it would be a good idea to add a way to configure the two separately.
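For example, a block device wrapper could report a logical block spanning several physical erase blocks. A hedged sketch of just the erase hook, assuming the 128KiB physical geometry discussed above and a hypothetical `nand_erase_block()` driver call:

```c
#include <stdint.h>
#include "lfs.h"

// Map one 256KiB logical littlefs block onto two 128KiB physical NAND
// erase blocks. Only the erase hook is shown; read/prog would need the
// same address translation.
#define PHYS_ERASE_SIZE    (128 * 1024)
#define LOGICAL_BLOCK_SIZE (2 * PHYS_ERASE_SIZE)

extern int nand_erase_block(uint32_t phys_block);  // board's NAND driver

// lfs_config.erase hook: littlefs asks for logical block `block`, and the
// driver erases every physical block backing it.
static int bd_erase(const struct lfs_config *c, lfs_block_t block) {
    (void)c;
    uint32_t per_logical = LOGICAL_BLOCK_SIZE / PHYS_ERASE_SIZE;
    for (uint32_t i = 0; i < per_logical; i++) {
        int err = nand_erase_block(block * per_logical + i);
        if (err) {
            return err;
        }
    }
    return 0;
}
```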
My setup:
W25N01GV, 2Gbit chip, with 2k pages and 128k blocks
I am trying to write an append-only log file at about 318kB/s, which is 158 pages/s. The chip will easily do writes at 1MB/s.
I am only syncing the file per-block, using the algorithm in #564 (comment) (see the sketch after this list).
The subsystem is writing data a page (2k) at a time.
My logging subsystem is doing no reads (I instrumented read() to check), but I see pages being read at about 274/s.
Worse, the write speed slows down as the file gets bigger, going down to 90kB/s. Start a new file and the write speed bounces back up.
sync takes about 11ms, which is slow but not awful.
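For reference, a minimal sketch of one way to sync only on block boundaries. This is an approximation of the idea, not a copy of the workaround in #564 (comment), and the page/block sizes simply mirror the geometry above:

```c
#include "lfs.h"

#define PAGE_SIZE  2048
#define BLOCK_SIZE (128 * 1024)

// Append one 2KiB page to the log file, but only call lfs_file_sync() when
// the file offset lands on a block boundary, so a sync never commits a
// partially-written block.
int log_write_page(lfs_t *lfs, lfs_file_t *file, const void *page) {
    lfs_ssize_t written = lfs_file_write(lfs, file, page, PAGE_SIZE);
    if (written < 0) {
        return (int)written;
    }

    lfs_soff_t off = lfs_file_tell(lfs, file);
    if (off >= 0 && (off % BLOCK_SIZE) == 0) {
        return lfs_file_sync(lfs, file);
    }
    return 0;
}
```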
So my questions:
I have read DESIGN.md, numerous issues, and the code, but am no closer to understanding what is actually going on here or why the performance is so poor.