vdev_disk: rewrite BIO filling machinery to avoid split pages
This commit tackles a number of issues in the way BIOs (`struct bio`)
are constructed for submission to the Linux block layer.

### BIO segment limits are set incorrectly

The kernel has a hard upper limit on the number of pages/segments that
can be added to a BIO, as well as a separate limit for each device
(related to its queue depth and other scheduling characteristics).

ZFS counts the number of memory pages in the request ABD
(`abd_nr_pages_off()`), and then uses that as the number of segments to
put into the BIO, up to the hard upper limit. If it requires more than
the limit, it will create multiple BIOs.

Leaving aside the fact that the page count method is wrong (see below),
not limiting to the device segment max means that the device driver
will need to split the BIO in half. This alone is not necessarily a
problem, but it interacts with another issue to cause a much larger
problem.
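
For reference, a minimal sketch of how a submitter can respect the
per-device limit as well as the kernel's hard cap. `BIO_MAX_VECS`,
`queue_max_segments()` and `bdev_get_queue()` are standard block-layer
symbols on recent kernels (5.12+); the wrapper itself is illustrative:

    /*
     * Sketch: clamp a BIO's segment budget to both the kernel's hard
     * cap and this device's own queue limit. MIN() is the ZFS-style
     * macro; plain kernel code would use min().
     */
    static inline unsigned int
    bdev_max_segs(struct block_device *bdev)
    {
            return (MIN(BIO_MAX_VECS,
                queue_max_segments(bdev_get_queue(bdev))));
    }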

### BIOs are filled inefficiently

The kernel function to add a segment to a BIO (`bio_add_page()`) takes
a `struct page` pointer, and an offset+length within it. A `struct
page` can represent a run of contiguous memory pages (known as a
"compound page"), which can be of arbitrary length.

The ZFS functions that count ABD pages and load them into the BIO
(`abd_nr_pages_off()`, `bio_map()` and `abd_bio_map_off()`) will never
consider a page to be more than `PAGE_SIZE` (4K), even if the `struct
page` covers multiple pages. In that case, they will load the same
`struct page` into the BIO multiple times, with the offset adjusted
each time.
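
To illustrate (a sketch, not the ZFS code itself): for a four-page
compound page, the behaviour above consumes four BIO segments where one
would do, assuming a kernel with multi-page bvec support (5.1+):

    /* old behaviour: the same struct page added four times */
    for (int i = 0; i < 4; i++)
            bio_add_page(bio, page, PAGE_SIZE, i * PAGE_SIZE);

    /* what the block layer accepts: one segment covering the run */
    bio_add_page(bio, page, 4 * PAGE_SIZE, 0);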

With a sufficiently large ABD, this can easily lead to the BIO being
entirely filled much earlier than it could have been. This also further
contributes to the problem caused by the incorrect segment limit
calculation, as it's much easier to go past the device limit, and so
require a split.

Again, this is not a problem on its own.

### Incomplete pages are submitted to BIOs

The logic for "never submit more than `PAGE_SIZE`" is a little more
subtle: it will never submit a buffer that crosses a 4K page boundary.

In practice, this is fine, as most ABDs are scattered, that is, a list
of complete 4K pages, and so are loaded as such.

Linear ABDs are typically allocated from slabs, and for small sizes
they are frequently not aligned to page boundaries. For example, a 12K
allocation can span four pages:

     -- 4K -- -- 4K -- -- 4K -- -- 4K --
    |        |        |        |        |
          :## ######## ######## ######:    [1K, 4K, 4K, 3K]

Such an allocation would be loaded into a BIO as:

    [1K, 4K, 4K, 3K]

This tends not to be a problem in practice, because even if the BIO were
filled and needed to be split, each half would still have either a start
or end aligned to the logical block size of the device (assuming 4K at
least).
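
The carve-up above is simple boundary arithmetic; a sketch, with
illustrative names and a 4K `PAGE_SIZE` assumed:

    /*
     * Sketch: carve a linear buffer into page-bounded segments. For
     * buf at a 3K offset into its first page and len = 12K, this
     * yields the [1K, 4K, 4K, 3K] split shown above.
     */
    size_t off = offset_in_page(buf);       /* 3K in the example */
    while (len > 0) {
            size_t seg = MIN(len, PAGE_SIZE - off);
            /* add (virt_to_page(buf), seg, off) to the BIO here */
            buf += seg;
            len -= seg;
            off = 0;
    }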

---

In ideal circumstances, these shortcomings don't cause any particular
problems. It's when they start to interact with other ZFS features that
things get interesting.

### Aggregation

Aggregation will create a "gang" ABD, which is simply a list of other
ABDs. Iterating over a gang ABD is just iterating over each ABD within
it in turn.

Because the segments are simply loaded in order, we can end up with
uneven segments on either side of the "gap" between the two ABDs. For
example, two 12K ABDs might be aggregated and then loaded as:

    [1K, 4K, 4K, 3K, 2K, 4K, 4K, 2K]

Should a split occur, each individual BIO can end up with a start or
end offset that is not aligned to the logical block size, which some
drivers (e.g. SCSI) will reject: a split after the first three segments
above would yield a 9K BIO, not a multiple of a 4K logical block.
However, this tends not to happen, because the default aggregation
limit usually keeps the BIO small enough to not require more than one
split, and most pages are actually full 4K pages, so hitting an uneven
gap is very rare anyway.

### Gang blocks

If the pool is under particular space pressure, then an IO can be
broken down into a "gang block", a 512-byte block composed of a header
and up to three block pointers. Each points to a fragment of the
original write, or in turn, another gang block, breaking the original
data up over and over until space can be found in the pool for each of
them.

Each gang header is a separate 512-byte memory allocation from a slab
that needs to be written down to disk. When the gang header is added to
the BIO, it's a single 512-byte segment.
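
For context, the gang header is laid out roughly like this (simplified
from OpenZFS `zio.h`; a sketch, not the authoritative definition):

    /*
     * Sketch of the on-disk gang header (see zio.h). SPA_GBH_NBLKPTRS
     * works out to 3; the filler pads the structure out to a single
     * 512-byte sector, with an embedded checksum at the tail.
     */
    typedef struct zio_gbh {
            blkptr_t        zg_blkptr[SPA_GBH_NBLKPTRS];
            uint64_t        zg_filler[SPA_GBH_FILLER];
            zio_eck_t       zg_tail;
    } zio_gbh_phys_t;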

### Aggregation with gang blocks

Pulling all this together, consider a large aggregated write of gang
blocks. This results in a BIO containing lots of 512-byte segments.
Given our tendency to overfill the BIO, a split is likely, and most
possible split points will yield a pair of BIOs that are misaligned.
Drivers that care, like the SCSI driver, will reject them.

---

This commit is a substantial refactor and rewrite of much of `vdev_disk`
to sort all this out.

### Configure maximum segment size for device

`vdev_bio_max_segs()` now returns the ideal maximum size for the
device, if available. There's also a tuneable `zfs_vdev_disk_max_segs`
to override this, to assist with testing.
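
A sketch of the clamping behaviour described here and in the man page
entry below (local names illustrative; the real logic lives in
`vdev_disk.c`):

    /*
     * Sketch: choose a per-BIO segment budget for this vdev. A
     * zfs_vdev_disk_max_segs of 0 means "use the device's ideal";
     * anything else is clamped to the device and kernel limits, with
     * a floor of 4.
     */
    static unsigned int
    vdev_bio_max_segs_sketch(struct block_device *bdev)
    {
            unsigned int dev_max =
                queue_max_segments(bdev_get_queue(bdev));
            unsigned int max = zfs_vdev_disk_max_segs ?
                zfs_vdev_disk_max_segs : dev_max;

            return (MAX(4, MIN(max, MIN(dev_max, BIO_MAX_VECS))));
    }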

### ABDs checked up front for page count and alignment

We scan the ABD up front to count the number of pages within it, and to
confirm that if we submitted all those pages to one or more BIOs, it
could be split at any point without creating a misaligned BIO.

If that wouldn't be possible (as in any of the above situations), the
ABD is linearised, and then checked again. This is the same technique
used in `vdev_geom` on FreeBSD, adjusted for Linux's variable page size
and allocator quirks.

In the end, a count of segments is produced, and this is combined with
the max seg count to determine how many BIOs will be needed.
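
A sketch of the alignment property being checked (types and names
illustrative; the real code walks the ABD's pages directly):

    /* illustrative segment descriptor, not a real ZFS type */
    struct seg { size_t off, len; };

    /*
     * A run of segments can be split between any two of them without
     * misalignment if every segment except the first starts on a page
     * boundary, and every segment except the last ends on one.
     */
    static bool
    segs_split_anywhere(const struct seg *segs, int nsegs)
    {
            for (int i = 0; i < nsegs; i++) {
                    if (i > 0 && (segs[i].off & (PAGE_SIZE - 1)))
                            return (false);
                    if (i < nsegs - 1 &&
                        ((segs[i].off + segs[i].len) & (PAGE_SIZE - 1)))
                            return (false);
            }
            return (true);
    }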

### Virtual block IO object

`vbio_t` is a cleanup and enhancement of the old `dio_request_t`. The
idea is simply that it can hold all the state needed to create, submit
and return multiple BIOs, including all the refcounts, the ABD copy if
it was needed, and so on. Apart from what I hope is a clearer interface,
the major difference is that because we know how many BIOs we'll need up
front, we don't need the old overflow logic that would grow the BIO
array, throw away all the old work and restart. We can get it right from
the start.
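
To give a feel for its shape, a sketch of the kind of state a `vbio_t`
carries (field names illustrative; see `vdev_disk.c` for the real
structure):

    typedef struct {
            zio_t   *vbio_zio;      /* parent ZIO, notified at the end */
            struct block_device *vbio_bdev;   /* target device */
            abd_t   *vbio_abd;      /* linearised copy, if needed */
            unsigned int vbio_max_segs;       /* segment budget per BIO */
            /* ... refcounts and the BIOs themselves ... */
    } vbio_t;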

### Other cleanups

Lots of cleanup, particularly through `vdev_disk_io_start` to make it
feel a bit more like current ZFS code. Loads of comments too.

Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
robn committed Dec 11, 2023
1 parent ac86aea commit b5e6138
Showing 5 changed files with 487 additions and 340 deletions.
1 change: 1 addition & 0 deletions include/os/linux/kernel/linux/mod_compat.h
@@ -68,6 +68,7 @@ enum scope_prefix_types {
 	zfs_trim,
 	zfs_txg,
 	zfs_vdev,
+	zfs_vdev_disk,
 	zfs_vdev_file,
 	zfs_vdev_mirror,
 	zfs_vnops,
8 changes: 0 additions & 8 deletions include/sys/abd.h
@@ -211,14 +211,6 @@ abd_get_size(abd_t *abd)
 void abd_init(void);
 void abd_fini(void);
 
-/*
- * Linux ABD bio functions
- */
-#if defined(__linux__) && defined(_KERNEL)
-unsigned int abd_bio_map_off(struct bio *, abd_t *, unsigned int, size_t);
-unsigned long abd_nr_pages_off(abd_t *, unsigned int, size_t);
-#endif
-
 #ifdef __cplusplus
 }
 #endif
10 changes: 9 additions & 1 deletion man/man4/zfs.4
@@ -2,6 +2,7 @@
 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
 .\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
 .\" Copyright (c) 2019 Datto Inc.
+.\" Copyright (c) 2023 Klara, Inc.
 .\" The contents of this file are subject to the terms of the Common Development
 .\" and Distribution License (the "License"). You may not use this file except
 .\" in compliance with the License. You can obtain a copy of the license at
@@ -15,7 +16,7 @@
 .\" own identifying information:
 .\" Portions Copyright [yyyy] [name of copyright owner]
 .\"
-.Dd July 21, 2023
+.Dd November 27, 2023
 .Dt ZFS 4
 .Os
 .
@@ -1353,6 +1354,13 @@ _
 4	Driver	No driver retries on driver errors.
 .TE
 .
+.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint
+Maximum number of segments to add to a BIO (min 4).
+If this is higher than the maximum allowed by the device queue or the kernel
+itself, it will be clamped.
+Setting it to zero will cause the kernel's ideal size to be used.
+This parameter only applies on Linux.
+.
 .It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
 Time before expiring
 .Pa .zfs/snapshot .
144 changes: 1 addition & 143 deletions module/os/linux/zfs/abd_os.c
@@ -1028,150 +1028,8 @@ abd_cache_reap_now(void)
 {
 }
 
-#if defined(_KERNEL)
-/*
- * bio_nr_pages for ABD.
- * @off is the offset in @abd
- */
-unsigned long
-abd_nr_pages_off(abd_t *abd, unsigned int size, size_t off)
-{
-	unsigned long pos;
-
-	if (abd_is_gang(abd)) {
-		unsigned long count = 0;
-
-		for (abd_t *cabd = abd_gang_get_offset(abd, &off);
-		    cabd != NULL && size != 0;
-		    cabd = list_next(&ABD_GANG(abd).abd_gang_chain, cabd)) {
-			ASSERT3U(off, <, cabd->abd_size);
-			int mysize = MIN(size, cabd->abd_size - off);
-			count += abd_nr_pages_off(cabd, mysize, off);
-			size -= mysize;
-			off = 0;
-		}
-		return (count);
-	}
-
-	if (abd_is_linear(abd))
-		pos = (unsigned long)abd_to_buf(abd) + off;
-	else
-		pos = ABD_SCATTER(abd).abd_offset + off;
-
-	return (((pos + size + PAGESIZE - 1) >> PAGE_SHIFT) -
-	    (pos >> PAGE_SHIFT));
-}
-
-static unsigned int
-bio_map(struct bio *bio, void *buf_ptr, unsigned int bio_size)
-{
-	unsigned int offset, size, i;
-	struct page *page;
-
-	offset = offset_in_page(buf_ptr);
-	for (i = 0; i < bio->bi_max_vecs; i++) {
-		size = PAGE_SIZE - offset;
-
-		if (bio_size <= 0)
-			break;
-
-		if (size > bio_size)
-			size = bio_size;
-
-		if (is_vmalloc_addr(buf_ptr))
-			page = vmalloc_to_page(buf_ptr);
-		else
-			page = virt_to_page(buf_ptr);
-
-		/*
-		 * Some network related block device uses tcp_sendpage, which
-		 * doesn't behave well when using 0-count page, this is a
-		 * safety net to catch them.
-		 */
-		ASSERT3S(page_count(page), >, 0);
-
-		if (bio_add_page(bio, page, size, offset) != size)
-			break;
-
-		buf_ptr += size;
-		bio_size -= size;
-		offset = 0;
-	}
-
-	return (bio_size);
-}
-
-/*
- * bio_map for gang ABD.
- */
-static unsigned int
-abd_gang_bio_map_off(struct bio *bio, abd_t *abd,
-    unsigned int io_size, size_t off)
-{
-	ASSERT(abd_is_gang(abd));
-
-	for (abd_t *cabd = abd_gang_get_offset(abd, &off);
-	    cabd != NULL;
-	    cabd = list_next(&ABD_GANG(abd).abd_gang_chain, cabd)) {
-		ASSERT3U(off, <, cabd->abd_size);
-		int size = MIN(io_size, cabd->abd_size - off);
-		int remainder = abd_bio_map_off(bio, cabd, size, off);
-		io_size -= (size - remainder);
-		if (io_size == 0 || remainder > 0)
-			return (io_size);
-		off = 0;
-	}
-	ASSERT0(io_size);
-	return (io_size);
-}
-
-/*
- * bio_map for ABD.
- * @off is the offset in @abd
- * Remaining IO size is returned
- */
-unsigned int
-abd_bio_map_off(struct bio *bio, abd_t *abd,
-    unsigned int io_size, size_t off)
-{
-	struct abd_iter aiter;
-
-	ASSERT3U(io_size, <=, abd->abd_size - off);
-	if (abd_is_linear(abd))
-		return (bio_map(bio, ((char *)abd_to_buf(abd)) + off, io_size));
-
-	ASSERT(!abd_is_linear(abd));
-	if (abd_is_gang(abd))
-		return (abd_gang_bio_map_off(bio, abd, io_size, off));
-
-	abd_iter_init(&aiter, abd);
-	abd_iter_advance(&aiter, off);
-
-	for (int i = 0; i < bio->bi_max_vecs; i++) {
-		struct page *pg;
-		size_t len, sgoff, pgoff;
-		struct scatterlist *sg;
-
-		if (io_size <= 0)
-			break;
-
-		sg = aiter.iter_sg;
-		sgoff = aiter.iter_offset;
-		pgoff = sgoff & (PAGESIZE - 1);
-		len = MIN(io_size, PAGESIZE - pgoff);
-		ASSERT(len > 0);
-
-		pg = nth_page(sg_page(sg), sgoff >> PAGE_SHIFT);
-		if (bio_add_page(bio, pg, len, pgoff) != len)
-			break;
-
-		io_size -= len;
-		abd_iter_advance(&aiter, len);
-	}
-
-	return (io_size);
-}
-
+#ifdef _KERNEL
 /* Tunable Parameters */
 module_param(zfs_abd_scatter_enabled, int, 0644);
 MODULE_PARM_DESC(zfs_abd_scatter_enabled,
