Commit
Fix typos in module/zfs/
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Ryan Moeller <ryan@ixsystems.com>
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Closes #9240
Gelma authored and behlendorf committed Sep 3, 2019
1 parent 7859537 commit e1cfd73
Showing 52 changed files with 114 additions and 114 deletions.
18 changes: 9 additions & 9 deletions module/zfs/arc.c
@@ -62,7 +62,7 @@
* elements of the cache are therefore exactly the same size. So
* when adjusting the cache size following a cache miss, its simply
* a matter of choosing a single page to evict. In our model, we
- * have variable sized cache blocks (rangeing from 512 bytes to
+ * have variable sized cache blocks (ranging from 512 bytes to
* 128K bytes). We therefore choose a set of blocks to evict to make
* space for a cache miss that approximates as closely as possible
* the space used by the new block.
@@ -262,7 +262,7 @@
* The L1ARC has a slightly different system for storing encrypted data.
* Raw (encrypted + possibly compressed) data has a few subtle differences from
* data that is just compressed. The biggest difference is that it is not
- * possible to decrypt encrypted data (or visa versa) if the keys aren't loaded.
+ * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded.
* The other difference is that encryption cannot be treated as a suggestion.
* If a caller would prefer compressed data, but they actually wind up with
* uncompressed data the worst thing that could happen is there might be a
@@ -2151,7 +2151,7 @@ arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
}

/*
- * Adjust encrypted and authenticated headers to accomodate
+ * Adjust encrypted and authenticated headers to accommodate
* the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are
* allowed to fail decryption due to keys not being loaded
* without being marked as an IO error.
@@ -2220,7 +2220,7 @@ arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
if (arc_buf_is_shared(buf)) {
ASSERT(ARC_BUF_COMPRESSED(buf));

- /* We need to give the buf it's own b_data */
+ /* We need to give the buf its own b_data */
buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
buf->b_data =
arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
@@ -2836,7 +2836,7 @@ arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf)
* sufficient to make this guarantee, however it's possible
* (specifically in the rare L2ARC write race mentioned in
* arc_buf_alloc_impl()) there will be an existing uncompressed buf that
- * is sharable, but wasn't at the time of its allocation. Rather than
+ * is shareable, but wasn't at the time of its allocation. Rather than
* allow a new shared uncompressed buf to be created and then shuffle
* the list around to make it the last element, this simply disallows
* sharing if the new buf isn't the first to be added.
@@ -2895,7 +2895,7 @@ arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb,

/*
* Only honor requests for compressed bufs if the hdr is actually
- * compressed. This must be overriden if the buffer is encrypted since
+ * compressed. This must be overridden if the buffer is encrypted since
* encrypted buffers cannot be decompressed.
*/
if (encrypted) {
@@ -3199,7 +3199,7 @@ arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf)
}

/*
- * Free up buf->b_data and pull the arc_buf_t off of the the arc_buf_hdr_t's
+ * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's
* list and free it.
*/
static void
@@ -3658,7 +3658,7 @@ arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt)
/*
* This function is used by the send / receive code to convert a newly
* allocated arc_buf_t to one that is suitable for a raw encrypted write. It
- * is also used to allow the root objset block to be uupdated without altering
+ * is also used to allow the root objset block to be updated without altering
* its embedded MACs. Both block types will always be uncompressed so we do not
* have to worry about compression type or psize.
*/
@@ -6189,7 +6189,7 @@ arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp,

/*
* Determine if we have an L1 cache hit or a cache miss. For simplicity
- * we maintain encrypted data seperately from compressed / uncompressed
+ * we maintain encrypted data separately from compressed / uncompressed
* data. If the user is requesting raw encrypted data and we don't have
* that in the header we will read from disk to guarantee that we can
* get it even if the encryption keys aren't loaded.
8 changes: 4 additions & 4 deletions module/zfs/dbuf.c
@@ -2337,7 +2337,7 @@ dmu_buf_will_dirty_impl(dmu_buf_t *db_fake, int flags, dmu_tx_t *tx)
ASSERT(!zfs_refcount_is_zero(&db->db_holds));

/*
- * Quick check for dirtyness. For already dirty blocks, this
+ * Quick check for dirtiness. For already dirty blocks, this
* reduces runtime of this function by >90%, and overall performance
* by 50% for some workloads (e.g. file deletion with indirect blocks
* cached).
@@ -2892,7 +2892,7 @@ dbuf_create(dnode_t *dn, uint8_t level, uint64_t blkid,
* Hold the dn_dbufs_mtx while we get the new dbuf
* in the hash table *and* added to the dbufs list.
* This prevents a possible deadlock with someone
- * trying to look up this dbuf before its added to the
+ * trying to look up this dbuf before it's added to the
* dn_dbufs list.
*/
mutex_enter(&dn->dn_dbufs_mtx);
@@ -3337,7 +3337,7 @@ dbuf_hold_impl_arg(struct dbuf_hold_arg *dh)
ASSERT(dh->dh_db->db_buf == NULL || arc_referenced(dh->dh_db->db_buf));

/*
- * If this buffer is currently syncing out, and we are are
+ * If this buffer is currently syncing out, and we are
* still referencing it from db_data, we need to make a copy
* of it in case we decide we want to dirty it again in this txg.
*/
@@ -3812,7 +3812,7 @@ dbuf_check_blkptr(dnode_t *dn, dmu_buf_impl_t *db)
/*
* This buffer was allocated at a time when there was
* no available blkptrs from the dnode, or it was
- * inappropriate to hook it in (i.e., nlevels mis-match).
+ * inappropriate to hook it in (i.e., nlevels mismatch).
*/
ASSERT(db->db_blkid < dn->dn_phys->dn_nblkptr);
ASSERT(db->db_parent == NULL);
6 changes: 3 additions & 3 deletions module/zfs/dmu.c
@@ -639,11 +639,11 @@ dmu_buf_rele_array(dmu_buf_t **dbp_fake, int numbufs, void *tag)

/*
* Issue prefetch i/os for the given blocks. If level is greater than 0, the
- * indirect blocks prefeteched will be those that point to the blocks containing
+ * indirect blocks prefetched will be those that point to the blocks containing
* the data starting at offset, and continuing to offset + len.
*
* Note that if the indirect blocks above the blocks being prefetched are not
- * in cache, they will be asychronously read in.
+ * in cache, they will be asynchronously read in.
*/
void
dmu_prefetch(objset_t *os, uint64_t object, int64_t level, uint64_t offset,
@@ -2176,7 +2176,7 @@ dmu_write_policy(objset_t *os, dnode_t *dn, int level, int wp, zio_prop_t *zp)
* Determine dedup setting. If we are in dmu_sync(),
* we won't actually dedup now because that's all
* done in syncing context; but we do want to use the
- * dedup checkum. If the checksum is not strong
+ * dedup checksum. If the checksum is not strong
* enough to ensure unique signatures, force
* dedup_verify.
*/
4 changes: 2 additions & 2 deletions module/zfs/dmu_objset.c
@@ -1028,7 +1028,7 @@ dmu_objset_create_impl_dnstats(spa_t *spa, dsl_dataset_t *ds, blkptr_t *bp,

/*
* We don't want to have to increase the meta-dnode's nlevels
- * later, because then we could do it in quescing context while
+ * later, because then we could do it in quiescing context while
* we are also accessing it in open context.
*
* This precaution is not necessary for the MOS (ds == NULL),
@@ -2648,7 +2648,7 @@ dmu_objset_find_dp_cb(void *arg)

/*
* We need to get a pool_config_lock here, as there are several
- * asssert(pool_config_held) down the stack. Getting a lock via
+ * assert(pool_config_held) down the stack. Getting a lock via
* dsl_pool_config_enter is risky, as it might be stalled by a
* pending writer. This would deadlock, as the write lock can
* only be granted when our parent thread gives up the lock.
4 changes: 2 additions & 2 deletions module/zfs/dmu_send.c
@@ -548,7 +548,7 @@ dump_write(dmu_send_cookie_t *dscp, dmu_object_type_t type, uint64_t object,
/*
* There's no pre-computed checksum for partial-block writes,
* embedded BP's, or encrypted BP's that are being sent as
- * plaintext, so (like fletcher4-checkummed blocks) userland
+ * plaintext, so (like fletcher4-checksummed blocks) userland
* will have to compute a dedup-capable checksum itself.
*/
drrw->drr_checksumtype = ZIO_CHECKSUM_OFF;
@@ -2262,7 +2262,7 @@ setup_send_progress(struct dmu_send_params *dspp)
*
* The final case is a simple zfs full or incremental send. The to_ds traversal
* thread behaves the same as always. The redact list thread is never started.
- * The send merge thread takes all the blocks that the to_ds traveral thread
+ * The send merge thread takes all the blocks that the to_ds traversal thread
* sends it, prefetches the data, and sends the blocks on to the main thread.
* The main thread sends the data over the wire.
*
2 changes: 1 addition & 1 deletion module/zfs/dmu_zfetch.c
@@ -221,7 +221,7 @@ dmu_zfetch(zfetch_t *zf, uint64_t blkid, uint64_t nblks, boolean_t fetch_data,
* can only read from blocks that we carefully ensure are on
* concrete vdevs (or previously-loaded indirect vdevs). So we
* can't allow the predictive prefetcher to attempt reads of other
- * blocks (e.g. of the MOS's dnode obejct).
+ * blocks (e.g. of the MOS's dnode object).
*/
if (!spa_indirect_vdevs_loaded(spa))
return;
2 changes: 1 addition & 1 deletion module/zfs/dnode.c
@@ -1787,7 +1787,7 @@ dnode_set_blksz(dnode_t *dn, uint64_t size, int ibs, dmu_tx_t *tx)
dn->dn_indblkshift = ibs;
dn->dn_next_indblkshift[tx->tx_txg&TXG_MASK] = ibs;
}
- /* rele after we have fixed the blocksize in the dnode */
+ /* release after we have fixed the blocksize in the dnode */
if (db)
dbuf_rele(db, FTAG);

2 changes: 1 addition & 1 deletion module/zfs/dsl_bookmark.c
@@ -88,7 +88,7 @@ dsl_bookmark_lookup_impl(dsl_dataset_t *ds, const char *shortname,
}

/*
- * If later_ds is non-NULL, this will return EXDEV if the the specified bookmark
+ * If later_ds is non-NULL, this will return EXDEV if the specified bookmark
* does not represents an earlier point in later_ds's timeline. However,
* bmp will still be filled in if we return EXDEV.
*
6 changes: 3 additions & 3 deletions module/zfs/dsl_crypt.c
@@ -227,7 +227,7 @@ dsl_crypto_params_create_nvlist(dcp_cmd_t cmd, nvlist_t *props,
goto error;
}

- /* if the user asked for the deault crypt, determine that now */
+ /* if the user asked for the default crypt, determine that now */
if (dcp->cp_crypt == ZIO_CRYPT_ON)
dcp->cp_crypt = ZIO_CRYPT_ON_VALUE;

@@ -1596,7 +1596,7 @@ spa_keystore_change_key(const char *dsname, dsl_crypto_params_t *dcp)
/*
* Perform the actual work in syncing context. The blocks modified
* here could be calculated but it would require holding the pool
- * lock and tarversing all of the datasets that will have their keys
+ * lock and traversing all of the datasets that will have their keys
* changed.
*/
return (dsl_sync_task(dsname, spa_keystore_change_key_check,
@@ -1714,7 +1714,7 @@ dsl_dataset_promote_crypt_sync(dsl_dir_t *target, dsl_dir_t *origin,
return;

/*
- * If the target is being promoted to the encyrption root update the
+ * If the target is being promoted to the encryption root update the
* DSL Crypto Key and keylocation to reflect that. We also need to
* update the DSL Crypto Keys of all children inheritting their
* encryption root to point to the new target. Otherwise, the check
8 changes: 4 additions & 4 deletions module/zfs/dsl_dataset.c
@@ -393,7 +393,7 @@ load_zfeature(objset_t *mos, dsl_dataset_t *ds, spa_feature_t f)
}

/*
- * We have to release the fsid syncronously or we risk that a subsequent
+ * We have to release the fsid synchronously or we risk that a subsequent
* mount of the same dataset will fail to unique_insert the fsid. This
* failure would manifest itself as the fsid of this dataset changing
* between mounts which makes NFS clients quite unhappy.
@@ -2308,7 +2308,7 @@ get_clones_stat(dsl_dataset_t *ds, nvlist_t *nv)
* We use nvlist_alloc() instead of fnvlist_alloc() because the
* latter would allocate the list with NV_UNIQUE_NAME flag.
* As a result, every time a clone name is appended to the list
- * it would be (linearly) searched for for a duplicate name.
+ * it would be (linearly) searched for a duplicate name.
* We already know that all clone names must be unique and we
* want avoid the quadratic complexity of double-checking that
* because we can have a large number of clones.
@@ -2683,7 +2683,7 @@ dsl_get_mountpoint(dsl_dataset_t *ds, const char *dsname, char *value,
int error;
dsl_pool_t *dp = ds->ds_dir->dd_pool;

- /* Retrieve the mountpoint value stored in the zap opbject */
+ /* Retrieve the mountpoint value stored in the zap object */
error = dsl_prop_get_ds(ds, zfs_prop_to_name(ZFS_PROP_MOUNTPOINT), 1,
ZAP_MAXVALUELEN, value, source);
if (error != 0) {
@@ -3961,7 +3961,7 @@ dsl_dataset_clone_swap_check_impl(dsl_dataset_t *clone,
* The clone can't be too much over the head's refquota.
*
* To ensure that the entire refquota can be used, we allow one
- * transaction to exceed the the refquota. Therefore, this check
+ * transaction to exceed the refquota. Therefore, this check
* needs to also allow for the space referenced to be more than the
* refquota. The maximum amount of space that one transaction can use
* on disk is DMU_MAX_ACCESS * spa_asize_inflation. Allowing this
2 changes: 1 addition & 1 deletion module/zfs/dsl_destroy.c
@@ -667,7 +667,7 @@ dsl_destroy_snapshots_nvl(nvlist_t *snaps, boolean_t defer,

/*
* lzc_destroy_snaps() is documented to fill the errlist with
- * int32 values, so we need to covert the int64 values that are
+ * int32 values, so we need to convert the int64 values that are
* returned from LUA.
*/
int rv = 0;
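The dsl_destroy.c hunk above mentions converting the int64 error values returned from LUA into the int32 values that lzc_destroy_snaps() documents for its errlist. A rough sketch of that idea against the userland libnvpair API (the helper name convert_errlist() and the overall shape are illustrative, not the code this commit touches):

```c
#include <libnvpair.h>

/*
 * Illustrative only: copy an errlist whose values are int64 (as a channel
 * program returns them) into one holding int32 values, which is the type
 * lzc_destroy_snaps() documents for its errlist entries.
 */
static nvlist_t *
convert_errlist(nvlist_t *lua_errlist)
{
	nvlist_t *out = fnvlist_alloc();
	nvpair_t *pair = NULL;

	while ((pair = nvlist_next_nvpair(lua_errlist, pair)) != NULL) {
		int64_t err;

		/* Skip pairs that are not int64; copy the rest as int32. */
		if (nvpair_value_int64(pair, &err) == 0)
			fnvlist_add_int32(out, nvpair_name(pair),
			    (int32_t)err);
	}
	return (out);
}
```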
4 changes: 2 additions & 2 deletions module/zfs/dsl_dir.c
@@ -97,7 +97,7 @@
* limit set. If there is a limit at any initialized level up the tree, the
* check must pass or the creation will fail. Likewise, when a filesystem or
* snapshot is destroyed, the counts are recursively adjusted all the way up
- * the initizized nodes in the tree. Renaming a filesystem into different point
+ * the initialized nodes in the tree. Renaming a filesystem into different point
* in the tree will first validate, then update the counts on each branch up to
* the common ancestor. A receive will also validate the counts and then update
* them.
@@ -1467,7 +1467,7 @@ dsl_dir_tempreserve_clear(void *tr_cookie, dmu_tx_t *tx)
* less than the amount specified.
*
* NOTE: The behavior of this function is identical to the Illumos / FreeBSD
- * version however it has been adjusted to use an iterative rather then
+ * version however it has been adjusted to use an iterative rather than
* recursive algorithm to minimize stack usage.
*/
void
6 changes: 3 additions & 3 deletions module/zfs/dsl_scan.c
@@ -1912,7 +1912,7 @@ dsl_scan_visitbp(blkptr_t *bp, const zbookmark_phys_t *zb,

/*
* This debugging is commented out to conserve stack space. This
- * function is called recursively and the debugging addes several
+ * function is called recursively and the debugging adds several
* bytes to the stack for each call. It can be commented back in
* if required to debug an issue in dsl_scan_visitbp().
*
@@ -3373,7 +3373,7 @@ dsl_process_async_destroys(dsl_pool_t *dp, dmu_tx_t *tx)
/*
* This is the primary entry point for scans that is called from syncing
* context. Scans must happen entirely during syncing context so that we
- * cna guarantee that blocks we are currently scanning will not change out
+ * can guarantee that blocks we are currently scanning will not change out
* from under us. While a scan is active, this function controls how quickly
* transaction groups proceed, instead of the normal handling provided by
* txg_sync_thread().
@@ -3977,7 +3977,7 @@ scan_exec_io(dsl_pool_t *dp, const blkptr_t *bp, int zio_flags,
* As can be seen, at fill_ratio=3, the algorithm is slightly biased towards
* extents that are more completely filled (in a 3:2 ratio) vs just larger.
* Note that as an optimization, we replace multiplication and division by
- * 100 with bitshifting by 7 (which effecitvely multiplies and divides by 128).
+ * 100 with bitshifting by 7 (which effectively multiplies and divides by 128).
*/
static int
ext_size_compare(const void *x, const void *y)
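The ext_size_compare() hunk above notes that multiplying and dividing by 100 is replaced with a shift by 7, i.e. scaling by 128. A small self-contained sketch of that trade-off (the score formula and the values are hypothetical, not the actual dsl_scan.c weighting):

```c
#include <stdint.h>
#include <stdio.h>

/* Exact weighting: extent size scaled by a fill percentage out of 100. */
static uint64_t
score_exact(uint64_t size, uint64_t fill_pct)
{
	return (size * fill_pct / 100);
}

/*
 * Shift-based weighting: the fill factor is expressed out of 128 instead,
 * so the divide becomes a right shift by 7.
 */
static uint64_t
score_shifted(uint64_t size, uint64_t fill_128)
{
	return ((size * fill_128) >> 7);
}

int
main(void)
{
	uint64_t size = 1024 * 1024;			/* 1 MiB extent */
	uint64_t fill_pct = 75;				/* 75% full */
	uint64_t fill_128 = fill_pct * 128 / 100;	/* 96 out of 128 */

	/* Both print 786432 for these inputs. */
	printf("exact=%llu shifted=%llu\n",
	    (unsigned long long)score_exact(size, fill_pct),
	    (unsigned long long)score_shifted(size, fill_128));
	return (0);
}
```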
2 changes: 1 addition & 1 deletion module/zfs/dsl_synctask.c
@@ -143,7 +143,7 @@ dsl_sync_task(const char *pool, dsl_checkfunc_t *checkfunc,
* For that reason, early synctasks can affect the process of writing dirty
* changes to disk for the txg that they run and should be used with caution.
* In addition, early synctasks should not dirty any metaslabs as this would
- * invalidate the precodition/invariant for subsequent early synctasks.
+ * invalidate the precondition/invariant for subsequent early synctasks.
* [see dsl_pool_sync() and dsl_early_sync_task_verify()]
*/
int
6 changes: 3 additions & 3 deletions module/zfs/dsl_userhold.c
@@ -302,7 +302,7 @@ dsl_dataset_user_hold_sync(void *arg, dmu_tx_t *tx)
* holds is nvl of snapname -> holdname
* errlist will be filled in with snapname -> error
*
- * The snaphosts must all be in the same pool.
+ * The snapshots must all be in the same pool.
*
* Holds for snapshots that don't exist will be skipped.
*
@@ -556,9 +556,9 @@ dsl_dataset_user_release_sync(void *arg, dmu_tx_t *tx)
* errlist will be filled in with snapname -> error
*
* If tmpdp is not NULL the names for holds should be the dsobj's of snapshots,
- * otherwise they should be the names of shapshots.
+ * otherwise they should be the names of snapshots.
*
- * As a release may cause snapshots to be destroyed this trys to ensure they
+ * As a release may cause snapshots to be destroyed this tries to ensure they
* aren't mounted.
*
* The release of non-existent holds are skipped.
4 changes: 2 additions & 2 deletions module/zfs/fm.c
@@ -31,7 +31,7 @@
* Name-Value Pair Lists
*
* The embodiment of an FMA protocol element (event, fmri or authority) is a
- * name-value pair list (nvlist_t). FMA-specific nvlist construtor and
+ * name-value pair list (nvlist_t). FMA-specific nvlist constructor and
* destructor functions, fm_nvlist_create() and fm_nvlist_destroy(), are used
* to create an nvpair list using custom allocators. Callers may choose to
* allocate either from the kernel memory allocator, or from a preallocated
@@ -784,7 +784,7 @@ zfs_zevent_destroy(zfs_zevent_t *ze)
#endif /* _KERNEL */

/*
- * Wrapppers for FM nvlist allocators
+ * Wrappers for FM nvlist allocators
*/
/* ARGSUSED */
static void *