Commit

DOC: Typos found by codespell
DimitriPapadopoulos committed Aug 9, 2021
1 parent ea68c4e commit b625d2e
Showing 48 changed files with 74 additions and 74 deletions.
8 changes: 4 additions & 4 deletions Changelog
@@ -342,7 +342,7 @@ Bug fixes
reviewed by PM)
* Safer warning registry manipulation when checking for overflows (pr/753)
(CM, reviewed by MB)
- * Correctly write .annot files with duplicate lables (pr/763) (Richard Nemec
+ * Correctly write .annot files with duplicate labels (pr/763) (Richard Nemec
with CM)

Maintenance
@@ -997,7 +997,7 @@ visiting the URL::
* Bugfix: Removed left-over print statement in extension code.
* Bugfix: Prevent saving of bogus 'None.nii' images when the filename
was previously assign, before calling NiftiImage.save() (Closes: #517920).
- * Bugfix: Extension length was to short for all `edata` whos length matches
+ * Bugfix: Extension length was to short for all `edata` whose length matches
n*16-8, for all integer n.

0.20090205.1 (Thu, 5 Feb 2009)
@@ -1017,7 +1017,7 @@ visiting the URL::
automatically dumped into this extension.
Embedded meta data is not loaded automatically, since this has security
implications, because code from the file header is actually executed.
- The documentation explicitely mentions this risk.
+ The documentation explicitly mentions this risk.
* Added :class:`~nifti.extensions.NiftiExtensions`. This is a container-like
handler to access and manipulate NIfTI1 header extensions.
* Exposed :class:`~nifti.image.MemMappedNiftiImage` in the root module.
@@ -1223,7 +1223,7 @@ visiting the URL::
* Does not depend on libfslio anymore.
* Up to seven-dimensional dataset are supported (as much as NIfTI can do).
* The complete NIfTI header dataset is modifiable.
- * Most image properties are accessable via class attributes and accessor
+ * Most image properties are accessible via class attributes and accessor
methods.
* Improved documentation (but still a long way to go).

2 changes: 1 addition & 1 deletion doc/misc/pylintrc
@@ -79,7 +79,7 @@ output-format=colorized
# Include message's id in output
include-ids=yes

- # Tells wether to display a full report or only the messages
+ # Tells whether to display a full report or only the messages
reports=yes

[MISCELLANEOUS]
2 changes: 1 addition & 1 deletion doc/source/coordinate_systems.rst
@@ -530,7 +530,7 @@ then the image affine matrix $A$ is:
Why the extra row of $[0, 0, 0, 1]$? We need this row because we have
rephrased the combination of rotations / zooms and translations as a
- transformation in *homogenous coordinates* (see `wikipedia homogenous
+ transformation in *homogeneous coordinates* (see `wikipedia homogeneous
coordinates`_). This is a trick that allows us to put the translation part
into the same matrix as the rotations / zooms, so that both translations and
rotations / zooms can be applied by matrix multiplication. In order to make
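The homogeneous-coordinates trick this hunk refers to can be sketched in plain NumPy. This is an illustration with made-up zoom and translation values, not code from the repository:

```python
import numpy as np

# Rotations / zooms part (3x3) and translation part (3-vector).
rzs = np.diag([2.0, 3.0, 4.0])        # illustrative voxel zooms
trans = np.array([10.0, 20.0, 30.0])  # illustrative translation (mm)

# Pack both into one 4x4 matrix; the extra [0, 0, 0, 1] row keeps the
# matrix square, so affines can be composed and inverted.
A = np.eye(4)
A[:3, :3] = rzs
A[:3, 3] = trans

voxel = np.array([1, 2, 3, 1])        # voxel coordinate plus trailing 1
mm = A @ voxel                        # zooms and translation in one multiply
# mm[:3] equals rzs @ voxel[:3] + trans
```

The trailing 1 on the voxel vector is what lets the translation column take effect during plain matrix multiplication.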
2 changes: 1 addition & 1 deletion doc/source/devel/data_pkg_discuss.rst
@@ -369,7 +369,7 @@ Discovery
Revsion ids could for example be hashes of the package instantiation
(package contents), so they could be globally unique to the contents,
- whereever the contents was when the identifier was made. However, *tags*
+ wherever the contents was when the identifier was made. However, *tags*
are just names that someone has attached to a particular revsion id. If
there is more than one person providing versions of a particular package,
there may not be agreement on the revsion that a particular tag is attached
4 changes: 2 additions & 2 deletions doc/source/devel/devguide.rst
@@ -64,7 +64,7 @@ necessary and the branch gets tagged when a package version is released.
Maintenance (as well as backport) releases or branches off from the respective
packaging tag.

- There might be additonal branches for each developer, prefixed with intials.
+ There might be additional branches for each developer, prefixed with initials.
Alternatively, several GitHub (or elsewhere) clones might be used.


@@ -99,7 +99,7 @@ Changelog
=========

The changelog is located in the toplevel directory of the source tree in the
- `Changelog` file. The content of this file should be formated as restructured
+ `Changelog` file. The content of this file should be formatted as restructured
text to make it easy to put it into manual appendix and on the website.

This changelog should neither replicate the VCS commit log nor the
2 changes: 1 addition & 1 deletion doc/source/dicom/dicom_intro.rst
@@ -686,7 +686,7 @@ For example, there is a DIMSE service called "C-ECHO" that requests confirmation
from the responding application that the echo message arrived.

The definition of the DIMSE services specifies, for a particular DIMSE service,
- whether the DIMSE commend set should be followed by a data set.
+ whether the DIMSE command set should be followed by a data set.

In particular, the data set will be a full Information Object Definition's worth
of data.
2 changes: 1 addition & 1 deletion doc/source/dicom/dicom_mosaic.rst
@@ -124,7 +124,7 @@ Data scaling
SPM gets the DICOM scaling, offset for the image ('RescaleSlope',
'RescaleIntercept'). It writes these scalings into the nifti_ header.
Then it writes the raw image data (unscaled) to disk. Obviously these
- will have the corrent scalings applied when the nifti image is read again.
+ will have the current scalings applied when the nifti image is read again.

A comment in the code here says that the data are not scaled by the
maximum amount. I assume by this they mean that the DICOM scaling may
4 changes: 2 additions & 2 deletions doc/source/dicom/dicom_orientation.rst
@@ -350,7 +350,7 @@ constant, to the voxel coordinate for the slice (the value for the slice
index).

Our DICOM might have the 'SliceLocation' field (0020,1041).
- 'SliceLocation' seems to be proportianal to slice location, at least for
+ 'SliceLocation' seems to be proportional to slice location, at least for
some GE and Philips DICOMs I was looking at. But, there is a more
reliable way (that doesn't depend on this field), and uses only the very
standard 'ImageOrientationPatient' and 'ImagePositionPatient' fields.
@@ -385,7 +385,7 @@ unit change in the slice voxel coordinate. So, the
addition of two vectors $T^j = \mathbf{a} + \mathbf{b}$, where
$\mathbf{a}$ is the position of the first voxel in some slice (here
slice 1, therefore $\mathbf{a} = T^1$) and $\mathbf{b}$ is $d$ times the
- third colum of $A$. Obviously $d$ can be negative or positive. This
+ third column of $A$. Obviously $d$ can be negative or positive. This
leads to various ways of recovering something that is proportional to
$d$ plus a constant. The algorithm suggested in this `ITK post on
ordering slices`_ - and the one used by SPM - is to take the inner
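The slice-ordering recipe this hunk describes (used by SPM, per the ITK post it cites) projects each slice's 'ImagePositionPatient' onto the slice normal. A sketch with hypothetical field values, not data from the document:

```python
import numpy as np

# Row and column direction cosines, i.e. the two 3-vectors stored in
# 'ImageOrientationPatient' (the values here are illustrative).
row_cos = np.array([1.0, 0.0, 0.0])
col_cos = np.array([0.0, 1.0, 0.0])
slice_normal = np.cross(row_cos, col_cos)

# 'ImagePositionPatient' for three slices, in arbitrary file order.
positions = [np.array([0.0, 0.0, 6.0]),
             np.array([0.0, 0.0, 0.0]),
             np.array([0.0, 0.0, 3.0])]

# The inner product with the normal is proportional to slice location
# (d times slice index plus a constant), so sorting on it recovers the
# spatial slice order without relying on 'SliceLocation'.
order = np.argsort([pos @ slice_normal for pos in positions])
```

Because only relative values matter, the sign of $d$ (slice direction) just reverses the ordering rather than breaking it.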
2 changes: 1 addition & 1 deletion doc/source/dicom/siemens_csa.rst
@@ -124,7 +124,7 @@ Each item

Now there's a different length check from CSA1. ``item_len`` is given
just by ``xx[1]``. If ``item_len`` > ``csa_max_pos - csa_position``
- (the remaining bytes in the header), then we just read the remaning
+ (the remaining bytes in the header), then we just read the remaining
bytes in the header (as above) into ``value`` below, as uint8, move the
filepointer to the next 4 byte boundary, and give up reading.

4 changes: 2 additions & 2 deletions doc/source/gettingstarted.rst
@@ -14,7 +14,7 @@ Getting Started
***************

NiBabel supports an ever growing collection of neuroimaging file formats. Every
- file format format has its own features and pecularities that need to be taken
+ file format format has its own features and peculiarities that need to be taken
care of to get the most out of it. To this end, NiBabel offers both high-level
format-independent access to neuroimages, as well as an API with various levels
of format-specific access to all available information in a particular file
@@ -109,7 +109,7 @@ True

In this case, we used the identity matrix as the affine transformation. The
image header is initialized from the provided data array (i.e. shape, dtype)
- and all other values are set to resonable defaults.
+ and all other values are set to reasonable defaults.

Saving this new image to a file is trivial:

2 changes: 1 addition & 1 deletion doc/source/gitwash/configure_git.rst
@@ -142,7 +142,7 @@ and it gives graph / text output something like this (but with color!)::
| * 4aff2a8 - fixed bug 35, and added a test in test_bugfixes (2 weeks ago) [Hugo]
|/
* a7ff2e5 - Added notes on discussion/proposal made during Data Array Summit. (2 weeks ago) [Corran Webster]
- * 68f6752 - Initial implimentation of AxisIndexer - uses 'index_by' which needs to be changed to a call on an Axes object - this is all very sketchy right now. (2 weeks ago) [Corr
+ * 68f6752 - Initial implementation of AxisIndexer - uses 'index_by' which needs to be changed to a call on an Axes object - this is all very sketchy right now. (2 weeks ago) [Corr
* 376adbd - Merge pull request #46 from terhorst/master (2 weeks ago) [Jonathan Terhorst]
|\
| * b605216 - updated joshu example to current api (3 weeks ago) [Jonathan Terhorst]
2 changes: 1 addition & 1 deletion doc/source/links_names.txt
@@ -217,7 +217,7 @@
.. _`wikipedia affine transform`: https://en.wikipedia.org/wiki/Affine_transformation
.. _`wikipedia linear transform`: https://en.wikipedia.org/wiki/Linear_transformation
.. _`wikipedia rotation matrix`: https://en.wikipedia.org/wiki/Rotation_matrix
- .. _`wikipedia homogenous coordinates`: https://en.wikipedia.org/wiki/Homogeneous_coordinates
+ .. _`wikipedia homogeneous coordinates`: https://en.wikipedia.org/wiki/Homogeneous_coordinates
.. _`wikipedia axis angle`: https://en.wikipedia.org/wiki/Axis_angle
.. _`wikipedia Euler angles`: https://en.wikipedia.org/wiki/Euler_angles
.. _`Mathworld Euler angles`: http://mathworld.wolfram.com/EulerAngles.html
2 changes: 1 addition & 1 deletion doc/source/old/design.txt
@@ -71,7 +71,7 @@ We think of an image as being the association of:

For simplicity, we want the transformation (above) to be spatial.
Because the images are always at least 3D, and the transform is
- spatial, this means that the tranformation is always exactly 3D. We
+ spatial, this means that the transformation is always exactly 3D. We
have to know which of the N image dimensions are spatial. For
example, if we have a 4D (space and time) image, we need to know
which of the 4 dimensions are spatial. We could ask the image to
4 changes: 2 additions & 2 deletions doc/source/old/orientation.txt
@@ -12,7 +12,7 @@ Affines as orientation
----------------------

Orientations are expressed by 4 by 4 affine arrays. 4x4 affine arrays
- give, in homogenous coordinates, the relationship between the
+ give, in homogeneous coordinates, the relationship between the
coordinates in the voxel array, and millimeters. Let is say that I have
a simple affine like this:

Expand All @@ -26,7 +26,7 @@ And I have a voxel coordinate:

then the millimeter coordinate for that voxel is given by:

- >>> # add extra 1 for homogenous coordinates
+ >>> # add extra 1 for homogeneous coordinates
>>> homogenous_coord = np.concatenate((coord, [1]))
>>> mm_coord = np.dot(aff, homogenous_coord)[:3]
>>> mm_coord
10 changes: 5 additions & 5 deletions nibabel/affines.py
@@ -34,7 +34,7 @@ def apply_affine(aff, pts):
Parameters
----------
aff : (N, N) array-like
- Homogenous affine, for 3D points, will be 4 by 4. Contrary to first
+ Homogeneous affine, for 3D points, will be 4 by 4. Contrary to first
appearance, the affine will be applied on the left of `pts`.
pts : (..., N-1) array-like
Points, where the last dimension contains the coordinates of each
@@ -87,7 +87,7 @@ def apply_affine(aff, pts):
def to_matvec(transform):
"""Split a transform into its matrix and vector components.
- The tranformation must be represented in homogeneous coordinates and is
+ The transformation must be represented in homogeneous coordinates and is
split into its rotation matrix and translation vector components.
Parameters
Expand All @@ -104,7 +104,7 @@ def to_matvec(transform):
matrix : (N-1, M-1) array
Matrix component of `transform`
vector : (M-1,) array
- Vector compoent of `transform`
+ Vector component of `transform`
See Also
--------
@@ -145,7 +145,7 @@ def from_matvec(matrix, vector=None):
Returns
-------
xform : array
- An (N+1, M+1) homogenous transform matrix.
+ An (N+1, M+1) homogeneous transform matrix.
See Also
--------
@@ -269,7 +269,7 @@ def voxel_sizes(affine):
1)[:3]``. The world coordinate vector of voxel vector (1, 0, 0) is
``v1_ax1 = affine.dot((1, 0, 0, 1))[:3]``. The final 1 in the voxel
vectors and the ``[:3]`` at the end are because the affine works on
- homogenous coodinates. The translations part of the affine is ``trans =
+ homogeneous coordinates. The translations part of the affine is ``trans =
affine[:3, 3]``, and the rotations, zooms and shearing part of the affine
is ``rzs = affine[:3, :3]``. Because of the final 1 in the input voxel
vector, ``v0 == rzs.dot((0, 0, 0)) + trans``, and ``v1_ax1 == rzs.dot((1,
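The matrix/vector split these docstrings describe can be sketched in plain NumPy. The function names mirror those in `nibabel.affines`, but the bodies here are a simplified illustration, not the library's implementation:

```python
import numpy as np

def to_matvec(transform):
    """Split a homogeneous affine into matrix and vector components."""
    t = np.asarray(transform)
    return t[:-1, :-1], t[:-1, -1]

def from_matvec(matrix, vector):
    """Rebuild an (N+1, M+1) homogeneous affine from its components."""
    matrix = np.asarray(matrix)
    n, m = matrix.shape
    xform = np.zeros((n + 1, m + 1))
    xform[-1, -1] = 1           # the bottom [0, ..., 0, 1] row
    xform[:-1, :-1] = matrix
    xform[:-1, -1] = vector
    return xform

aff = from_matvec(np.diag([3.0, 4.0, 5.0]), [10.0, 11.0, 12.0])
mat, vec = to_matvec(aff)
pt = np.array([1.0, 2.0, 3.0])
# Applying the affine == matrix multiply plus translation.
assert np.allclose(mat @ pt + vec, (aff @ np.append(pt, 1))[:3])
```

This round trip is exactly why the homogeneous form is convenient: composition of affines reduces to matrix products of the packed 4x4 matrices.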
2 changes: 1 addition & 1 deletion nibabel/analyze.py
@@ -385,7 +385,7 @@ def from_header(klass, header=None, check=True):
# safely discard fields with names not known to this header
# type on the basis they are from the wrong Analyze dialect
pass
- # set any fields etc that are specific to this format (overriden by
+ # set any fields etc that are specific to this format (overridden by
# sub-classes)
obj._clean_after_mapping()
# Fallback basic conversion always done.
2 changes: 1 addition & 1 deletion nibabel/batteryrunners.py
@@ -37,7 +37,7 @@
(very bad problem). The levels follow the log levels from the logging
module (e.g 40 equivalent to "error" level, 50 to "critical"). The
``error`` can be one of ``None`` if no error to suggest, or an Exception
- class that the user might consider raising for this sitation. The
+ class that the user might consider raising for this situation. The
``problem_msg`` and ``fix_msg`` are human readable strings that should
explain what happened.
4 changes: 2 additions & 2 deletions nibabel/casting.py
@@ -1,4 +1,4 @@
- """ Utilties for casting numpy values in various ways
+ """ Utilities for casting numpy values in various ways
Most routines work round some numpy oddities in floating point precision and
casting. Others work round numpy casting to and from python ints
@@ -132,7 +132,7 @@ def shared_range(flt_type, int_type):
Returns
-------
mn : object
- Number of type `flt_type` that is the minumum value in the range of
+ Number of type `flt_type` that is the minimum value in the range of
`int_type`, such that ``mn.astype(int_type)`` >= min of `int_type`
mx : object
Number of type `flt_type` that is the maximum value in the range of
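The shared-range idea in this docstring — the interval representable in both a float type and an int type, so that casting back cannot overflow — can be approximated with NumPy's type-info objects. A sketch only; the real `nibabel.casting.shared_range` handles precision corner cases (e.g. int bounds that the float type cannot represent exactly) that this version ignores:

```python
import numpy as np

def shared_range_sketch(flt_type, int_type):
    # Start from the integer type's own range...
    info = np.iinfo(int_type)
    # ...and clip it to what the float type can hold, so any value in
    # [mn, mx] survives a round trip through flt_type and int_type.
    f_max = np.finfo(flt_type).max
    mn = flt_type(max(info.min, -f_max))
    mx = flt_type(min(info.max, f_max))
    return mn, mx

# int16 fits easily inside float64, so the int16 bounds survive intact.
mn, mx = shared_range_sketch(np.float64, np.int16)
```

For wide int types and narrow float types (say int64 and float32) the clipping above is not enough, which is precisely why the library routine is more careful.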
2 changes: 1 addition & 1 deletion nibabel/cifti2/tests/test_cifti2io_axes.py
@@ -79,7 +79,7 @@ def check_Conte69(brain_model):

def check_rewrite(arr, axes, extension='.nii'):
"""
- Checks wheter writing the Cifti2 array to disc and reading it back in gives the same object
+ Checks whether writing the Cifti2 array to disc and reading it back in gives the same object
Parameters
----------
4 changes: 2 additions & 2 deletions nibabel/cmdline/diff.py
@@ -73,7 +73,7 @@ def get_opt_parser():
def are_values_different(*values):
"""Generically compare values, return True if different
- Note that comparison is targetting reporting of comparison of the headers
+ Note that comparison is targeting reporting of comparison of the headers
so has following specifics:
- even a difference in data types is considered a difference, i.e. 1 != 1.0
- nans are considered to be the "same", although generally nan != nan
Expand All @@ -94,7 +94,7 @@ def are_values_different(*values):
except TypeError as exc:
str_exc = str(exc)
# Not implemented in numpy 1.7.1
- if "not supported" in str_exc or "ot implemented" in str_exc:
+ if "not supported" in str_exc or "not implemented" in str_exc:
value0_nans = None
else:
raise
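The comparison policy stated in this docstring — a type difference already counts as a difference (1 != 1.0), while NaNs compare as "same" — might look roughly like this. A simplified sketch, not the actual `nibabel.cmdline.diff` helper, which also handles arrays and other cases:

```python
import math

def are_values_different_sketch(*values):
    """Return True if any value differs from the first, header-diff style."""
    first, rest = values[0], values[1:]
    for other in rest:
        # Even a difference in data types is a difference, i.e. 1 != 1.0.
        if type(other) is not type(first):
            return True
        # Treat nan == nan as "same", unlike the usual IEEE rule.
        if isinstance(first, float) and math.isnan(first) and math.isnan(other):
            continue
        if other != first:
            return True
    return False
```

This inversion of the IEEE NaN rule makes sense for diffing headers: two files that both store NaN in a field should not be reported as differing.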
2 changes: 1 addition & 1 deletion nibabel/cmdline/tck2trk.py
@@ -30,7 +30,7 @@ def main():
try:
nii = nib.load(args.anatomy)
except Exception:
- parser.error("Expecting anatomical image as first agument.")
+ parser.error("Expecting anatomical image as first argument.")

for tractogram in args.tractograms:
tractogram_format = nib.streamlines.detect_format(tractogram)
6 changes: 3 additions & 3 deletions nibabel/dataobj_images.py
@@ -27,9 +27,9 @@ def __init__(self, dataobj, header=None, extra=None, file_map=None):
Parameters
----------
dataobj : object
- Object containg image data. It should be some object that retuns an
- array from ``np.asanyarray``. It should have ``shape`` and ``ndim``
- attributes or properties
+ Object containing image data. It should be some object that returns
+ an array from ``np.asanyarray``. It should have ``shape`` and
+ ``ndim`` attributes or properties
header : None or mapping or header instance, optional
metadata for this image format
extra : None or mapping, optional
4 changes: 2 additions & 2 deletions nibabel/ecat.py
@@ -661,7 +661,7 @@ def data_from_fileobj(self, frame=0, orientation=None):


class EcatImageArrayProxy(object):
- """ Ecat implemention of array proxy protocol
+ """ Ecat implementation of array proxy protocol
The array proxy allows us to freeze the passed fileobj and
header such that it returns the expected data array.
@@ -989,7 +989,7 @@ def to_file_map(self, file_map=None):
# Write frame images
self._write_data(image, imgf, pos + 2, endianness='>')

- # Move to dictionnary offset and write dictionnary entry
+ # Move to dictionary offset and write dictionary entry
self._write_data(mlist[index], imgf, entry_pos, endianness='>')

entry_pos = entry_pos + 16
2 changes: 1 addition & 1 deletion nibabel/filename_parser.py
@@ -275,7 +275,7 @@ def splitext_addext(filename,
Extension, where extension is not in `addexts` - e.g. ``.ext`` in
example above
addext : str
- Any suffixes appearing in `addext` occuring at end of filename
+ Any suffixes appearing in `addext` occurring at end of filename
Examples
--------
2 changes: 1 addition & 1 deletion nibabel/fileslice.py
@@ -416,7 +416,7 @@ def optimize_slicer(slicer, dim_len, all_full, is_slowest, stride,
# full, but reversed
if slicer == slice(dim_len - 1, None, -1):
return slice(None), slice(None, None, -1)
- # Not full, mabye continuous
+ # Not full, maybe continuous
is_int = False
else: # int
if slicer < 0: # make negative offsets positive
2 changes: 1 addition & 1 deletion nibabel/gifti/gifti.py
@@ -331,7 +331,7 @@ class GiftiDataArray(xml.XmlSerializable):
The Endianness to store the data array. Should correspond to the
machine endianness. Default is system byteorder.
coordsys : :class:`GiftiCoordSystem` instance
- Input and output coordinate system with tranformation matrix between
+ Input and output coordinate system with transformation matrix between
the two.
ind_ord : int
The ordering of the array. see util.array_index_order_codes. Default
2 changes: 1 addition & 1 deletion nibabel/gifti/tests/test_gifti.py
@@ -146,7 +146,7 @@ def test_dataarray_init():
pytest.raises(KeyError, gda, datatype='not_datatype')
# Float32 datatype comes from array if datatype not set
assert gda(arr).datatype == 16
- # Can be overriden by init
+ # Can be overridden by init
assert gda(arr, datatype='uint8').datatype == 2
# Encoding
assert gda(encoding=1).encoding == 1
4 changes: 2 additions & 2 deletions nibabel/imageclasses.py
@@ -125,7 +125,7 @@ def __getitem__(self, *args, **kwargs):


def spatial_axes_first(img):
- """ True if spatial image axes for `img` always preceed other axes
+ """ True if spatial image axes for `img` always precede other axes
Parameters
----------
Expand All @@ -136,7 +136,7 @@ def spatial_axes_first(img):
-------
spatial_axes_first : bool
True if image only has spatial axes (number of axes < 4) or image type
- known to have spatial axes preceeding other axes.
+ known to have spatial axes preceding other axes.
"""
if len(img.shape) < 4:
return True
2 changes: 1 addition & 1 deletion nibabel/loadsave.py
@@ -243,7 +243,7 @@ def read_img_data(img, prefer='scaled'):
array is given by the raw data on disk, multiplied by a scalefactor
and maybe with the addition of a constant. This function, with
``unscaled`` returns the data on the disk, without these
- format-specific scalings applied. Please use this funciton only if
+ format-specific scalings applied. Please use this function only if
you absolutely need the unscaled data, and the magnitude of the
data, as given by the scalefactor, is not relevant to your
application. The Analyze-type formats have a single scalefactor +/-