
Fix buffer overrun in lfs_cache_zero #11614

Closed

Conversation

kyle-cypress

Description

lfs_cache_zero assumes that the buffer it is passed is prog_size bytes long. There are two cases where this is false:

  1. In lfs_file_opencfg, when the file is opened read-only, file->cache.buffer is malloc'd with size read_size.
  2. In lfs_init, lfs->rcache is always malloc'd with size read_size.

In both cases, the buffer in question is passed to lfs_cache_zero later in the same function.

To address this, add a cache_size argument to lfs_cache_zero, which the caller sets to either read_size or prog_size based on its knowledge of the buffer it is passing in.

The "pcache" argument to lfs_cache_zero is also renamed to simply "cache", to make it clear that the pointer does not necessarily refer to a program cache.

Note: Ideally, I think the lfs_cache_t struct should be extended with a buffer_size member; right now there is no way to length-check user-provided buffers either. However, such a change would be much more invasive and possibly breaking for user applications. This PR therefore only fixes the cases that are clearly wrong, in a manner that should be transparent to applications.

Pull request type

[x] Fix
[ ] Refactor
[ ] Target update
[ ] Functionality change
[ ] Docs update
[ ] Test update
[ ] Breaking change

Reviewers

Release Notes

@ciarmcom requested review from a team on October 3, 2019 01:00
@ciarmcom
Member

ciarmcom commented Oct 3, 2019

@kyle-cypress, thank you for your changes.
@ARMmbed/mbed-os-storage @ARMmbed/mbed-os-maintainers please review.

@geky
Contributor

geky commented Oct 3, 2019

Ah! Thanks for creating a PR. This was actually fixed in littlefs v1.6.1 here: littlefs-project/littlefs@3419284 (Mbed OS is currently at littlefs v1.6.0).

I created a PR to bring Mbed OS up to date: #11624

The solution I went with was slightly different: the purpose of lfs_cache_zero is to scrub uninitialized data so it doesn't end up on disk and possibly leak secrets. Zeroing isn't needed for read caches, as those buffers should never actually end up on disk, so those cases can be changed to lfs_cache_drop, which works for both cache sizes.
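For reference, a paraphrased sketch of the two helpers (simplified from my reading of the littlefs v1 source, not verbatim):

```c
#include <string.h>
#include "lfs.h"  // lfs_t, lfs_cache_t

// Invalidate the cache without touching its buffer. Since no memory is
// written, this is safe whether the buffer is read_size or prog_size long.
static void lfs_cache_drop(lfs_t *lfs, lfs_cache_t *rcache) {
    (void)lfs;
    rcache->block = 0xffffffff;
}

// Scrub the buffer so uninitialized RAM can never be programmed to disk,
// then invalidate. The memset assumes a prog_size-long buffer, which is
// exactly the assumption that breaks for read-only buffers.
static void lfs_cache_zero(lfs_t *lfs, lfs_cache_t *pcache) {
    memset(pcache->buffer, 0xff, lfs->cfg->prog_size);
    pcache->block = 0xffffffff;
}
```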

Sidenote: v2 does away with separate read/prog cache sizes and has a single cache_size configuration. This simplifies quite a bit of the cache handling.
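As a rough illustration (values arbitrary, block-device callbacks and geometry omitted), the v2 configuration carries a single cache_size alongside the read/prog granularities:

```c
#include "lfs.h"

// Partial lfs_config sketch for littlefs v2; the .read/.prog/.erase/.sync
// callbacks, block_size, and block_count must also be set for real use.
static const struct lfs_config cfg = {
    .read_size      = 16,   // minimum read granularity
    .prog_size      = 16,   // minimum program granularity
    .cache_size     = 64,   // one size shared by rcache, pcache, and file caches
    .lookahead_size = 16,
    .block_cycles   = 500,
};
```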

I did look into an explicit size field on each cache, which would provide a more general solution. However, it added a lot of complication to cache handling and increased the code size by a noticeable amount.

It may be worth revisiting, though: at least one user has raised the issue that a single cache size loses the flexibility of opening different files with different cache sizes to better utilize the available RAM.

@kyle-cypress
Author

Thanks @geky. Should #11624 supersede this PR? Or do you see value in having the targeted fix available separately?

@geky
Contributor

geky commented Oct 4, 2019

Oh, that may be a better question for @adbridge or @bulislaw. It depends on how quickly they think they can bring in #11624.

@adbridge
Contributor

adbridge commented Oct 7, 2019

Looks like #11624 needs some greentea test updates, so it will be a bit longer.

@bulislaw
Member

bulislaw commented Oct 7, 2019

#11624 was approved, so I would just wait for it.

@0xc0170
Contributor

0xc0170 commented Oct 15, 2019

@geky Let's make progress on this fix via #11624.

@adbridge
Contributor

Closing this in favour of #11624

@adbridge closed this Oct 15, 2019