Metaslab max_size should be persisted while unloaded
When we unload metaslabs today in ZFS, the cached max_size value is
discarded. We instead use the histogram to determine whether or not we
think we can satisfy an allocation from the metaslab. This can result in
situations where, if we're doing I/Os of a size not aligned to a
histogram bucket, a metaslab is loaded even though it cannot satisfy the
allocation we think it can. For example, a metaslab with 16 entries in
the 16k-32k bucket may have entirely 16kB entries. If we try to allocate
a 24kB buffer, we will load that metaslab because we think it should be
able to handle the allocation. Doing so is expensive in CPU time, disk
reads, and average IO latency. This is exacerbated if the write being
attempted is a sync write.
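The misprediction comes from the histogram only recording power-of-two bucket counts, not actual segment sizes. A minimal sketch of that failure mode (illustrative only, not the ZFS implementation; `hist_bucket` and `hist_can_alloc` are hypothetical names):

```c
#include <assert.h>
#include <stdint.h>

#define HIST_BUCKETS 64

/* Bucket index = floor(log2(size)); 16kB and 24kB both land in bucket 14. */
static int
hist_bucket(uint64_t size)
{
	int b = 0;
	while (size > 1) {
		size >>= 1;
		b++;
	}
	return (b);
}

/*
 * Histogram-based guess: a request looks satisfiable if any bucket at or
 * above the request's bucket is nonempty. This can overestimate, because
 * a nonempty bucket says nothing about where in [2^b, 2^(b+1)) the
 * segments actually fall.
 */
static int
hist_can_alloc(const uint64_t *hist, uint64_t size)
{
	for (int b = hist_bucket(size); b < HIST_BUCKETS; b++) {
		if (hist[b] != 0)
			return (1);
	}
	return (0);
}
```

With sixteen 16kB segments in the 16k-32k bucket, a 24kB request passes the histogram check even though no single segment can hold it, which is exactly the wasted-load case the commit message describes.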

This change makes ZFS cache the max_size after the metaslab is
unloaded. If we ever get a free (or a coalesced group of frees) larger
than the max_size, we will update it. Otherwise, we leave it as is. When
attempting to allocate, we use the max_size as a lower bound, and
respect it unless we are in try_hard. However, we do age the max_size
out at some point, since we expect the actual max_size to increase as we
do more frees. A more sophisticated algorithm here might be helpful, but
this works reasonably well.
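The policy above can be sketched in a few lines. This is a simplified illustration under assumed names (`ms_note_free`, `ms_worth_loading`), not the actual ZFS code; the real aging path falls back to the histogram check rather than unconditionally loading:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
	uint64_t ms_max_size;    /* cached largest free chunk while unloaded */
	int64_t  ms_unload_time; /* seconds; when the metaslab was unloaded */
} ms_cache_t;

/* A free (or coalesced group of frees) larger than the cache raises it. */
static void
ms_note_free(ms_cache_t *ms, uint64_t free_size)
{
	if (free_size > ms->ms_max_size)
		ms->ms_max_size = free_size;
}

/*
 * Since frees can only grow the largest chunk, the cached value is a
 * lower bound on what the metaslab can allocate, so respect it unless it
 * has aged out or we are in try_hard.
 */
static int
ms_worth_loading(const ms_cache_t *ms, uint64_t asize, int64_t now,
    int64_t max_size_cache_sec, int try_hard)
{
	if (try_hard)
		return (1);
	if (now - ms->ms_unload_time > max_size_cache_sec)
		return (1); /* cache stale: ignore it (ZFS reconsiders the histogram) */
	return (asize <= ms->ms_max_size);
}
```

The key invariant is that frees only move `ms_max_size` upward, so skipping a metaslab whose cached value is too small never rejects a metaslab that could actually satisfy the allocation at unload time.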

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #9055
pcd1193182 authored and ahrens committed Aug 5, 2019
1 parent 99e755d commit c81f179
Showing 7 changed files with 190 additions and 40 deletions.
4 changes: 2 additions & 2 deletions cmd/zdb/zdb.c
@@ -21,7 +21,7 @@

 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright (c) 2014 Integros [integros.com]
  * Copyright 2016 Nexenta Systems, Inc.
  * Copyright (c) 2017, 2018 Lawrence Livermore National Security, LLC.
@@ -955,7 +955,7 @@ dump_metaslab_stats(metaslab_t *msp)
 	/* max sure nicenum has enough space */
 	CTASSERT(sizeof (maxbuf) >= NN_NUMBUF_SZ);

-	zdb_nicenum(metaslab_block_maxsize(msp), maxbuf, sizeof (maxbuf));
+	zdb_nicenum(metaslab_largest_allocatable(msp), maxbuf, sizeof (maxbuf));

 	(void) printf("\t %25s %10lu %7s %6s %4s %4d%%\n",
 	    "segments", avl_numnodes(t), "maxsize", maxbuf,
2 changes: 1 addition & 1 deletion include/sys/metaslab.h
@@ -66,7 +66,7 @@ uint64_t metaslab_allocated_space(metaslab_t *);
 void metaslab_sync(metaslab_t *, uint64_t);
 void metaslab_sync_done(metaslab_t *, uint64_t);
 void metaslab_sync_reassess(metaslab_group_t *);
-uint64_t metaslab_block_maxsize(metaslab_t *);
+uint64_t metaslab_largest_allocatable(metaslab_t *);

 /*
  * metaslab alloc flags
7 changes: 7 additions & 0 deletions include/sys/metaslab_impl.h
@@ -475,6 +475,12 @@ struct metaslab {
 	 * stay cached.
 	 */
 	uint64_t	ms_selected_txg;
+	/*
+	 * ms_load/unload_time can be used for performance monitoring
+	 * (e.g. by dtrace or mdb).
+	 */
+	hrtime_t	ms_load_time;	/* time last loaded */
+	hrtime_t	ms_unload_time;	/* time last unloaded */

 	uint64_t	ms_alloc_txg;	/* last successful alloc (debug only) */
 	uint64_t	ms_max_size;	/* maximum allocatable size */
@@ -495,6 +501,7 @@ struct metaslab {
 	 * segment sizes.
 	 */
 	avl_tree_t	ms_allocatable_by_size;
+	avl_tree_t	ms_unflushed_frees_by_size;
 	uint64_t	ms_lbas[MAX_LBAS];

 	metaslab_group_t *ms_group;	/* metaslab group */
2 changes: 2 additions & 0 deletions include/sys/range_tree.h
@@ -89,6 +89,8 @@ range_tree_t *range_tree_create_impl(range_tree_ops_t *ops, void *arg,
 range_tree_t *range_tree_create(range_tree_ops_t *ops, void *arg);
 void range_tree_destroy(range_tree_t *rt);
 boolean_t range_tree_contains(range_tree_t *rt, uint64_t start, uint64_t size);
+boolean_t range_tree_find_in(range_tree_t *rt, uint64_t start, uint64_t size,
+    uint64_t *ostart, uint64_t *osize);
 void range_tree_verify_not_present(range_tree_t *rt,
     uint64_t start, uint64_t size);
 range_seg_t *range_tree_find(range_tree_t *rt, uint64_t start, uint64_t size);
16 changes: 16 additions & 0 deletions man/man5/zfs-module-parameters.5
@@ -370,6 +370,22 @@ larger).
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE

+.sp
+.ne 2
+.na
+\fBzfs_metaslab_max_size_cache_sec\fR (ulong)
+.ad
+.RS 12n
+When we unload a metaslab, we cache the size of the largest free chunk. We use
+that cached size to determine whether or not to load a metaslab for a given
+allocation. As more frees accumulate in that metaslab while it's unloaded, the
+cached max size becomes less and less accurate. After a number of seconds
+controlled by this tunable, we stop considering the cached max size and start
+considering only the histogram instead.
+.sp
+Default value: \fB3600 seconds\fR (one hour)
+.RE
+
 .sp
 .ne 2
 .na