diff --git a/docs/src/further_topics/metadata.rst b/docs/src/further_topics/metadata.rst index a564b2ba68..10efcdf7fe 100644 --- a/docs/src/further_topics/metadata.rst +++ b/docs/src/further_topics/metadata.rst @@ -91,6 +91,16 @@ actual `data attribute`_ names of the metadata members on the Iris class. metadata members are Iris specific terms, rather than recognised `CF Conventions`_ terms. +.. note:: + + :class:`~iris.cube.Cube` :attr:`~iris.cube.Cube.attributes` implement the + concept of dataset-level and variable-level attributes, to enable correct + NetCDF loading and saving (see :class:`~iris.cube.CubeAttrsDict` and NetCDF + :func:`~iris.fileformats.netcdf.saver.save` for more). ``attributes`` on + the other classes do not have this distinction, but the ``attributes`` + members of ALL the classes still have the same interface, and can be + compared. + Common Metadata API =================== @@ -128,10 +138,12 @@ For example, given the following :class:`~iris.cube.Cube`, source 'Data from Met Office Unified Model 6.05' We can easily get all of the associated metadata of the :class:`~iris.cube.Cube` -using the ``metadata`` property: +using the ``metadata`` property (note the specialised +:class:`~iris.cube.CubeAttrsDict` for the :attr:`~iris.cube.Cube.attributes`, +as mentioned earlier): >>> cube.metadata - CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) + CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes=CubeAttrsDict(globals={'Conventions': 'CF-1.5'}, locals={'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}), cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) We can also inspect the ``metadata`` of the ``longitude`` :class:`~iris.coords.DimCoord` attached to the :class:`~iris.cube.Cube` in the same way: @@ -675,8 +687,8 @@ For example, consider the following :class:`~iris.common.metadata.CubeMetadata`, .. 
doctest:: metadata-combine - >>> cube.metadata # doctest: +SKIP - CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) + >>> cube.metadata + CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes=CubeAttrsDict(globals={'Conventions': 'CF-1.5'}, locals={'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}), cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) We can perform the **identity function** by comparing the metadata with itself, @@ -701,7 +713,7 @@ which is replaced with a **different value**, >>> metadata != cube.metadata True >>> metadata.combine(cube.metadata) # doctest: +SKIP - CubeMetadata(standard_name=None, long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'STASH': STASH(model=1, section=3, item=236), 'source': 'Data from Met Office Unified Model 6.05', 'Model scenario': 'A1B', 'Conventions': 'CF-1.5'}, cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) + CubeMetadata(standard_name=None, long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05', 'Conventions': 'CF-1.5'}, cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) The ``combine`` method combines metadata by performing a **strict** comparison between each of the associated metadata member values, @@ -724,7 +736,7 @@ Let's reinforce this behaviour, but this time by combining metadata where the >>> metadata != cube.metadata True >>> metadata.combine(cube.metadata).attributes - {'Model scenario': 'A1B'} + CubeAttrsDict(globals={}, locals={'Model scenario': 'A1B'}) The combined result for the ``attributes`` member only contains those **common keys** with **common values**. @@ -810,16 +822,17 @@ the ``from_metadata`` class method. For example, given the following .. doctest:: metadata-convert - >>> cube.metadata # doctest: +SKIP - CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) + >>> cube.metadata + CubeMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes=CubeAttrsDict(globals={'Conventions': 'CF-1.5'}, locals={'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}), cell_methods=(CellMethod(method='mean', coord_names=('time',), intervals=('6 hour',), comments=()),)) We can easily convert it to a :class:`~iris.common.metadata.DimCoordMetadata` instance using ``from_metadata``, .. 
doctest:: metadata-convert - >>> DimCoordMetadata.from_metadata(cube.metadata) # doctest: +SKIP - DimCoordMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, coord_system=None, climatological=None, circular=None) + >>> newmeta = DimCoordMetadata.from_metadata(cube.metadata) + >>> print(newmeta) + DimCoordMetadata(standard_name=air_temperature, var_name=air_temperature, units=K, attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}) By examining :numref:`metadata members table`, we can see that the :class:`~iris.cube.Cube` and :class:`~iris.coords.DimCoord` container @@ -849,9 +862,9 @@ class instance, .. doctest:: metadata-convert - >>> longitude.metadata.from_metadata(cube.metadata) - DimCoordMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, coord_system=None, climatological=None, circular=None) - + >>> newmeta = longitude.metadata.from_metadata(cube.metadata) + >>> print(newmeta) + DimCoordMetadata(standard_name=air_temperature, var_name=air_temperature, units=K, attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}) .. _metadata assignment: @@ -978,7 +991,7 @@ Indeed, it's also possible to assign to the ``metadata`` property with a >>> longitude.metadata DimCoordMetadata(standard_name='longitude', long_name=None, var_name='longitude', units=Unit('degrees'), attributes={}, coord_system=GeogCS(6371229.0), climatological=False, circular=False) >>> longitude.metadata = cube.metadata - >>> longitude.metadata # doctest: +SKIP + >>> longitude.metadata DimCoordMetadata(standard_name='air_temperature', long_name=None, var_name='air_temperature', units=Unit('K'), attributes={'Conventions': 'CF-1.5', 'STASH': STASH(model=1, section=3, item=236), 'Model scenario': 'A1B', 'source': 'Data from Met Office Unified Model 6.05'}, coord_system=GeogCS(6371229.0), climatological=False, circular=False) Note that, only **common** metadata members will be assigned new associated diff --git a/docs/src/techpapers/index.rst b/docs/src/techpapers/index.rst index 773c8f7059..e97a87f39c 100644 --- a/docs/src/techpapers/index.rst +++ b/docs/src/techpapers/index.rst @@ -11,3 +11,4 @@ Extra information on specific technical issues. um_files_loading.rst missing_data_handling.rst + netcdf_io.rst diff --git a/docs/src/techpapers/netcdf_io.rst b/docs/src/techpapers/netcdf_io.rst new file mode 100644 index 0000000000..e151b2b7c1 --- /dev/null +++ b/docs/src/techpapers/netcdf_io.rst @@ -0,0 +1,140 @@ +.. testsetup:: chunk_control + + import iris + from iris.fileformats.netcdf.loader import CHUNK_CONTROL + + from pathlib import Path + import dask + import shutil + import tempfile + + tmp_dir = Path(tempfile.mkdtemp()) + tmp_filepath = tmp_dir / "tmp.nc" + + cube = iris.load(iris.sample_data_path("E1_north_america.nc"))[0] + iris.save(cube, tmp_filepath, chunksizes=(120, 37, 49)) + old_dask = dask.config.get("array.chunk-size") + dask.config.set({'array.chunk-size': '500KiB'}) + + +.. 
testcleanup:: chunk_control
+
+    dask.config.set({'array.chunk-size': old_dask})
+    shutil.rmtree(tmp_dir)
+
+.. _netcdf_io:
+
+=============================
+NetCDF I/O Handling in Iris
+=============================
+
+This document provides a basic account of how Iris loads and saves NetCDF files.
+
+.. admonition:: Under Construction
+
+    This document is still a work in progress, so might include blank or unfinished sections;
+    watch this space!
+
+
+Chunk Control
+--------------
+
+Default Chunking
+^^^^^^^^^^^^^^^^
+
+Chunks are, by default, optimised by Iris on load. This will automatically
+decide the best chunksize for your data without any user input. This is
+calculated based on a number of factors, including:
+
+- File Variable Chunking
+- Full Variable Shape
+- Dask Default Chunksize
+- Dimension Order: Earlier (outer) dimensions will be prioritised to be split over later (inner) dimensions.
+
+.. doctest:: chunk_control
+
+    >>> cube = iris.load_cube(tmp_filepath)
+    >>>
+    >>> print(cube.shape)
+    (240, 37, 49)
+    >>> print(cube.core_data().chunksize)
+    (60, 37, 49)
+
+For more user control, functionality was added in :pull:`5588`, with the
+creation of the :data:`iris.fileformats.netcdf.loader.CHUNK_CONTROL` object.
+
+Custom Chunking: Set
+^^^^^^^^^^^^^^^^^^^^
+
+There are three context managers within :data:`~iris.fileformats.netcdf.loader.CHUNK_CONTROL`. The most basic is
+:meth:`~iris.fileformats.netcdf.loader.ChunkControl.set`. This allows you to specify the chunksize for each dimension,
+and, optionally, a ``var_name`` to target a specific variable.
+
+Using ``-1`` in place of a chunksize will ensure the chunksize stays the same
+as the shape, i.e. no optimisation occurs on that dimension.
+
+.. doctest:: chunk_control
+
+    >>> with CHUNK_CONTROL.set("air_temperature", time=180, latitude=-1, longitude=25):
+    ...     cube = iris.load_cube(tmp_filepath)
+    >>>
+    >>> print(cube.core_data().chunksize)
+    (180, 37, 25)
+
+Note that ``var_name`` is optional, and that you don't need to specify every dimension. If you
+specify only one dimension, the rest will be optimised using Iris' default behaviour.
+
+.. doctest:: chunk_control
+
+    >>> with CHUNK_CONTROL.set(longitude=25):
+    ...     cube = iris.load_cube(tmp_filepath)
+    >>>
+    >>> print(cube.core_data().chunksize)
+    (120, 37, 25)
+
+Custom Chunking: From File
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The second context manager is :meth:`~iris.fileformats.netcdf.loader.ChunkControl.from_file`.
+This takes chunksizes as defined in the NetCDF file. Any dimensions without specified chunks
+will default to Iris optimisation.
+
+.. doctest:: chunk_control
+
+    >>> with CHUNK_CONTROL.from_file():
+    ...     cube = iris.load_cube(tmp_filepath)
+    >>>
+    >>> print(cube.core_data().chunksize)
+    (120, 37, 49)
+
+Custom Chunking: As Dask
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The final context manager, :meth:`~iris.fileformats.netcdf.loader.ChunkControl.as_dask`, bypasses
+Iris' optimisation altogether, and will take its chunksizes from Dask's behaviour.
+
+.. doctest:: chunk_control
+
+    >>> with CHUNK_CONTROL.as_dask():
+    ...    cube = iris.load_cube(tmp_filepath)
+    >>>
+    >>> print(cube.core_data().chunksize)
+    (70, 37, 49)
+
+
+Split Attributes
+-----------------
+
+TBC
+
+
+Deferred Saving
+----------------
+
+TBC
+
+
+Guess Axis
+-----------
+
+TBC
diff --git a/docs/src/userguide/iris_cubes.rst b/docs/src/userguide/iris_cubes.rst
index 267f97b0fc..03b5093efc 100644
--- a/docs/src/userguide/iris_cubes.rst
+++ b/docs/src/userguide/iris_cubes.rst
@@ -85,7 +85,10 @@ A cube consists of:
   data dimensions as the coordinate has dimensions.
 * an attributes dictionary which, other than some protected CF names, can
-  hold arbitrary extra metadata.
+  hold arbitrary extra metadata. This implements the concept of dataset-level
+  and variable-level attributes when loading and saving NetCDF files (see
+  :class:`~iris.cube.CubeAttrsDict` and NetCDF
+  :func:`~iris.fileformats.netcdf.saver.save` for more).
 * a list of cell methods to represent operations which have already been
   applied to the data (e.g. "mean over time")
 * a list of coordinate "factories" used for deriving coordinates from the
diff --git a/docs/src/whatsnew/latest.rst b/docs/src/whatsnew/latest.rst
index 6e7087c687..3f2f9a1fd9 100644
--- a/docs/src/whatsnew/latest.rst
+++ b/docs/src/whatsnew/latest.rst
@@ -29,7 +29,17 @@ This document explains the changes made to Iris for this release
 ✨ Features
 ===========
-
+#. `@pp-mo`_, `@lbdreyer`_ and `@trexfeathers`_ improved
+   :class:`~iris.cube.Cube` :attr:`~iris.cube.Cube.attributes` handling to
+   better preserve the distinction between dataset-level and variable-level
+   attributes, allowing file-Cube-file round-tripping of NetCDF attributes. See
+   :class:`~iris.cube.CubeAttrsDict`, NetCDF
+   :func:`~iris.fileformats.netcdf.saver.save` and :data:`~iris.Future` for more.
+   (:pull:`5152`, `split attributes project`_)
+
+#. `@rcomer`_ rewrote :func:`~iris.util.broadcast_to_shape` so it now handles
+   lazy data. (:pull:`5307`)
+
 #. `@trexfeathers`_ and `@HGWright`_ (reviewer) sub-categorised all Iris'
    :class:`UserWarning`\s for richer filtering. The full index of
    sub-categories can be seen here: :mod:`iris.exceptions` . (:pull:`5498`)
@@ -44,6 +54,14 @@ This document explains the changes made to Iris for this release
    Winter - December to February) will be assigned to the preceding year (e.g.
    the year of December) instead of the following year (the default behaviour).
    (:pull:`5573`)
+
+#. `@HGWright`_ added :attr:`~iris.coords.Coord.ignore_axis` to allow manual
+   intervention preventing :func:`~iris.util.guess_coord_axis` from acting on a
+   coordinate. (:pull:`5551`)
+
+#. `@pp-mo`_, `@trexfeathers`_ and `@ESadek-MO`_ added more control over
+   NetCDF chunking with the use of the :data:`iris.fileformats.netcdf.loader.CHUNK_CONTROL`
+   context manager. (:pull:`5588`)
 
 🐛 Bugs Fixed
@@ -68,7 +86,8 @@ This document explains the changes made to Iris for this release
 🚀 Performance Enhancements
 ===========================
-#. N/A
+#. `@stephenworsley`_ improved the speed of :class:`~iris.analysis.AreaWeighted`
+   regridding. (:pull:`5543`)
 
 🔥 Deprecations
@@ -103,6 +122,10 @@ This document explains the changes made to Iris for this release
 #. `@ESadek-MO`_ added a phrasebook for synonymous terms used in similar
    packages. (:pull:`5564`)
 
+#. `@ESadek-MO`_ and `@trexfeathers`_ created a technical paper for NetCDF
+   saving and loading, :ref:`netcdf_io`, with a section on chunking, and placeholders
+   for further topics.
(:pull:`5588`) + 💼 Internal =========== @@ -147,4 +170,4 @@ This document explains the changes made to Iris for this release .. _NEP29 Drop Schedule: https://numpy.org/neps/nep-0029-deprecation_policy.html#drop-schedule .. _codespell: https://github.com/codespell-project/codespell - +.. _split attributes project: https://github.com/orgs/SciTools/projects/5?pane=info diff --git a/lib/iris/__init__.py b/lib/iris/__init__.py index c29998cd6d..a10169b7bb 100644 --- a/lib/iris/__init__.py +++ b/lib/iris/__init__.py @@ -141,7 +141,9 @@ def callback(cube, field, filename): class Future(threading.local): """Run-time configuration controller.""" - def __init__(self, datum_support=False, pandas_ndim=False): + def __init__( + self, datum_support=False, pandas_ndim=False, save_split_attrs=False + ): """ A container for run-time options controls. @@ -163,6 +165,11 @@ def __init__(self, datum_support=False, pandas_ndim=False): pandas_ndim : bool, default=False See :func:`iris.pandas.as_data_frame` for details - opts in to the newer n-dimensional behaviour. + save_split_attrs : bool, default=False + Save "global" and "local" cube attributes to netcdf in appropriately + different ways : "global" ones are saved as dataset attributes, where + possible, while "local" ones are saved as data-variable attributes. + See :func:`iris.fileformats.netcdf.saver.save`. """ # The flag 'example_future_flag' is provided as a reference for the @@ -174,14 +181,18 @@ def __init__(self, datum_support=False, pandas_ndim=False): # self.__dict__['example_future_flag'] = example_future_flag self.__dict__["datum_support"] = datum_support self.__dict__["pandas_ndim"] = pandas_ndim + self.__dict__["save_split_attrs"] = save_split_attrs + # TODO: next major release: set IrisDeprecation to subclass # DeprecationWarning instead of UserWarning. def __repr__(self): # msg = ('Future(example_future_flag={})') # return msg.format(self.example_future_flag) - msg = "Future(datum_support={}, pandas_ndim={})" - return msg.format(self.datum_support, self.pandas_ndim) + msg = "Future(datum_support={}, pandas_ndim={}, save_split_attrs={})" + return msg.format( + self.datum_support, self.pandas_ndim, self.save_split_attrs + ) # deprecated_options = {'example_future_flag': 'warning',} deprecated_options = {} diff --git a/lib/iris/_lazy_data.py b/lib/iris/_lazy_data.py index fb29f411d3..11477a2fa6 100644 --- a/lib/iris/_lazy_data.py +++ b/lib/iris/_lazy_data.py @@ -61,6 +61,7 @@ def _optimum_chunksize_internals( shape, limit=None, dtype=np.dtype("f4"), + dims_fixed=None, dask_array_chunksize=dask.config.get("array.chunk-size"), ): """ @@ -70,8 +71,8 @@ def _optimum_chunksize_internals( Args: - * chunks (tuple of int, or None): - Pre-existing chunk shape of the target data : None if unknown. + * chunks (tuple of int): + Pre-existing chunk shape of the target data. * shape (tuple of int): The full array shape of the target data. * limit (int): @@ -79,6 +80,11 @@ def _optimum_chunksize_internals( :mod:`dask.config`. * dtype (np.dtype): Numpy dtype of target data. + * dims_fixed (list of bool): + If set, a list of values equal in length to 'chunks' or 'shape'. + 'True' values indicate a dimension that can not be changed, i.e. that + element of the result must equal the corresponding value in 'chunks' or + data.shape. Returns: * chunk (tuple of int): @@ -99,6 +105,7 @@ def _optimum_chunksize_internals( "chunks = [c[0] for c in normalise_chunks('auto', ...)]". """ + # Set the chunksize limit. 
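+    # Worked example (illustrative): with dask's default "array.chunk-size"
+    # of 128 MiB and the default float32 dtype, the calculation below gives
+    # point_size_limit = 128 * 2**20 / 4 = 33554432 array points per chunk.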
if limit is None: # Fetch the default 'optimal' chunksize from the dask config. @@ -108,58 +115,90 @@ def _optimum_chunksize_internals( point_size_limit = limit / dtype.itemsize - # Create result chunks, starting with a copy of the input. - result = list(chunks) - - if np.prod(result) < point_size_limit: - # If size is less than maximum, expand the chunks, multiplying later - # (i.e. inner) dims first. - i_expand = len(shape) - 1 - while np.prod(result) < point_size_limit and i_expand >= 0: - factor = np.floor(point_size_limit * 1.0 / np.prod(result)) - new_dim = result[i_expand] * int(factor) - if new_dim >= shape[i_expand]: - # Clip to dim size : chunk dims must not exceed the full shape. - new_dim = shape[i_expand] - else: - # 'new_dim' is less than the relevant dim of 'shape' -- but it - # is also the largest possible multiple of the input-chunks, - # within the size limit. - # So : 'i_expand' is the outer (last) dimension over which we - # will multiply the input chunks, and 'new_dim' is a value that - # ensures the fewest possible chunks within that dim. - - # Now replace 'new_dim' with the value **closest to equal-size - # chunks**, for the same (minimum) number of chunks. - # More-equal chunks are practically better. - # E.G. : "divide 8 into multiples of 2, with a limit of 7", - # produces new_dim=6, which would mean chunks of sizes (6, 2). - # But (4, 4) is clearly better for memory and time cost. - - # Calculate how many (expanded) chunks fit into this dimension. - dim_chunks = np.ceil(shape[i_expand] * 1.0 / new_dim) - # Get "ideal" (equal) size for that many chunks. - ideal_equal_chunk_size = shape[i_expand] / dim_chunks - # Use the nearest whole multiple of input chunks >= ideal. - new_dim = int( - result[i_expand] - * np.ceil(ideal_equal_chunk_size / result[i_expand]) - ) - - result[i_expand] = new_dim - i_expand -= 1 + if dims_fixed is not None: + if not np.any(dims_fixed): + dims_fixed = None + + if dims_fixed is None: + # Get initial result chunks, starting with a copy of the input. + working = list(chunks) + else: + # Adjust the operation to ignore the 'fixed' dims. + # (We reconstruct the original later, before return). + chunks = np.array(chunks) + dims_fixed_arr = np.array(dims_fixed) + # Reduce the target size by the fixed size of all the 'fixed' dims. + point_size_limit = point_size_limit // np.prod(chunks[dims_fixed_arr]) + # Work on only the 'free' dims. + original_shape = tuple(shape) + shape = tuple(np.array(shape)[~dims_fixed_arr]) + working = list(chunks[~dims_fixed_arr]) + + if len(working) >= 1: + if np.prod(working) < point_size_limit: + # If size is less than maximum, expand the chunks, multiplying + # later (i.e. inner) dims first. + i_expand = len(shape) - 1 + while np.prod(working) < point_size_limit and i_expand >= 0: + factor = np.floor(point_size_limit * 1.0 / np.prod(working)) + new_dim = working[i_expand] * int(factor) + if new_dim >= shape[i_expand]: + # Clip to dim size : must not exceed the full shape. + new_dim = shape[i_expand] + else: + # 'new_dim' is less than the relevant dim of 'shape' -- but + # it is also the largest possible multiple of the + # input-chunks, within the size limit. + # So : 'i_expand' is the outer (last) dimension over which + # we will multiply the input chunks, and 'new_dim' is a + # value giving the fewest possible chunks within that dim. + + # Now replace 'new_dim' with the value **closest to + # equal-size chunks**, for the same (minimum) number of + # chunks. More-equal chunks are practically better. + # E.G. 
: "divide 8 into multiples of 2, with a limit of 7", + # produces new_dim=6, meaning chunks of sizes (6, 2). + # But (4, 4) is clearly better for memory and time cost. + + # Calculate how many (expanded) chunks fit in this dim. + dim_chunks = np.ceil(shape[i_expand] * 1.0 / new_dim) + # Get "ideal" (equal) size for that many chunks. + ideal_equal_chunk_size = shape[i_expand] / dim_chunks + # Use the nearest whole multiple of input chunks >= ideal. + new_dim = int( + working[i_expand] + * np.ceil(ideal_equal_chunk_size / working[i_expand]) + ) + + working[i_expand] = new_dim + i_expand -= 1 + else: + # Similarly, reduce if too big, reducing earlier (outer) dims first. + i_reduce = 0 + while np.prod(working) > point_size_limit: + factor = np.ceil(np.prod(working) / point_size_limit) + new_dim = int(working[i_reduce] / factor) + if new_dim < 1: + new_dim = 1 + working[i_reduce] = new_dim + i_reduce += 1 + + working = tuple(working) + + if dims_fixed is None: + result = working else: - # Similarly, reduce if too big, reducing earlier (outer) dims first. - i_reduce = 0 - while np.prod(result) > point_size_limit: - factor = np.ceil(np.prod(result) / point_size_limit) - new_dim = int(result[i_reduce] / factor) - if new_dim < 1: - new_dim = 1 - result[i_reduce] = new_dim - i_reduce += 1 + # Reconstruct the original form + result = [] + for i_dim in range(len(original_shape)): + if dims_fixed[i_dim]: + dim = chunks[i_dim] + else: + dim = working[0] + working = working[1:] + result.append(dim) - return tuple(result) + return result @wraps(_optimum_chunksize_internals) @@ -168,6 +207,7 @@ def _optimum_chunksize( shape, limit=None, dtype=np.dtype("f4"), + dims_fixed=None, ): # By providing dask_array_chunksize as an argument, we make it so that the # output of _optimum_chunksize_internals depends only on its arguments (and @@ -177,11 +217,14 @@ def _optimum_chunksize( tuple(shape), limit=limit, dtype=dtype, + dims_fixed=dims_fixed, dask_array_chunksize=dask.config.get("array.chunk-size"), ) -def as_lazy_data(data, chunks=None, asarray=False): +def as_lazy_data( + data, chunks=None, asarray=False, dims_fixed=None, dask_chunking=False +): """ Convert the input array `data` to a :class:`dask.array.Array`. @@ -200,6 +243,16 @@ def as_lazy_data(data, chunks=None, asarray=False): If True, then chunks will be converted to instances of `ndarray`. Set to False (default) to pass passed chunks through unchanged. + * dims_fixed (list of bool): + If set, a list of values equal in length to 'chunks' or data.ndim. + 'True' values indicate a dimension which can not be changed, i.e. the + result for that index must equal the value in 'chunks' or data.shape. + + * dask_chunking (bool): + If True, Iris chunking optimisation will be bypassed, and dask's default + chunking will be used instead. Including a value for chunks while dask_chunking + is set to True will result in a failure. + Returns: The input array converted to a :class:`dask.array.Array`. @@ -211,24 +264,38 @@ def as_lazy_data(data, chunks=None, asarray=False): but reduced by a factor if that exceeds the dask default chunksize. """ - if chunks is None: - # No existing chunks : Make a chunk the shape of the entire input array - # (but we will subdivide it if too big). - chunks = list(data.shape) - - # Adjust chunk size for better dask performance, - # NOTE: but only if no shape dimension is zero, so that we can handle the - # PPDataProxy of "raw" landsea-masked fields, which have a shape of (0, 0). 
- if all(elem > 0 for elem in data.shape): - # Expand or reduce the basic chunk shape to an optimum size. - chunks = _optimum_chunksize(chunks, shape=data.shape, dtype=data.dtype) - + if dask_chunking: + if chunks is not None: + raise ValueError( + f"Dask chunking chosen, but chunks already assigned value {chunks}" + ) + lazy_params = {"asarray": asarray, "meta": np.ndarray} + else: + if chunks is None: + # No existing chunks : Make a chunk the shape of the entire input array + # (but we will subdivide it if too big). + chunks = list(data.shape) + + # Adjust chunk size for better dask performance, + # NOTE: but only if no shape dimension is zero, so that we can handle the + # PPDataProxy of "raw" landsea-masked fields, which have a shape of (0, 0). + if all(elem > 0 for elem in data.shape): + # Expand or reduce the basic chunk shape to an optimum size. + chunks = _optimum_chunksize( + chunks, + shape=data.shape, + dtype=data.dtype, + dims_fixed=dims_fixed, + ) + lazy_params = { + "chunks": chunks, + "asarray": asarray, + "meta": np.ndarray, + } if isinstance(data, ma.core.MaskedConstant): data = ma.masked_array(data.data, mask=data.mask) if not is_lazy_data(data): - data = da.from_array( - data, chunks=chunks, asarray=asarray, meta=np.ndarray - ) + data = da.from_array(data, **lazy_params) return data diff --git a/lib/iris/_merge.py b/lib/iris/_merge.py index bf22f57887..a8f079e70e 100644 --- a/lib/iris/_merge.py +++ b/lib/iris/_merge.py @@ -22,6 +22,9 @@ multidim_lazy_stack, ) from iris.common import CoordMetadata, CubeMetadata +from iris.common._split_attribute_dicts import ( + _convert_splitattrs_to_pairedkeys_dict as convert_splitattrs_to_pairedkeys_dict, +) import iris.coords import iris.cube import iris.exceptions @@ -390,8 +393,10 @@ def _defn_msgs(self, other_defn): ) ) if self_defn.attributes != other_defn.attributes: - diff_keys = set(self_defn.attributes.keys()) ^ set( - other_defn.attributes.keys() + attrs_1, attrs_2 = self_defn.attributes, other_defn.attributes + diff_keys = sorted( + set(attrs_1.globals) ^ set(attrs_2.globals) + | set(attrs_1.locals) ^ set(attrs_2.locals) ) if diff_keys: msgs.append( @@ -399,14 +404,16 @@ def _defn_msgs(self, other_defn): + ", ".join(repr(key) for key in diff_keys) ) else: + attrs_1, attrs_2 = [ + convert_splitattrs_to_pairedkeys_dict(dic) + for dic in (attrs_1, attrs_2) + ] diff_attrs = [ - repr(key) - for key in self_defn.attributes - if np.all( - self_defn.attributes[key] != other_defn.attributes[key] - ) + repr(key[1]) + for key in attrs_1 + if np.all(attrs_1[key] != attrs_2[key]) ] - diff_attrs = ", ".join(diff_attrs) + diff_attrs = ", ".join(sorted(diff_attrs)) msgs.append( "cube.attributes values differ for keys: {}".format( diff_attrs diff --git a/lib/iris/analysis/_area_weighted.py b/lib/iris/analysis/_area_weighted.py index ffec82fd4e..bd2ad90a3a 100644 --- a/lib/iris/analysis/_area_weighted.py +++ b/lib/iris/analysis/_area_weighted.py @@ -7,6 +7,7 @@ import cf_units import numpy as np import numpy.ma as ma +from scipy.sparse import csr_array from iris._lazy_data import map_complete_blocks from iris.analysis._interpolation import get_xy_dim_coords, snapshot_grid @@ -75,8 +76,7 @@ def __init__(self, src_grid_cube, target_grid_cube, mdtol=1): self.grid_y, self.meshgrid_x, self.meshgrid_y, - self.weights_info, - self.index_info, + self.weights, ) = _regrid_info def __call__(self, cube): @@ -125,8 +125,7 @@ def __call__(self, cube): self.grid_y, self.meshgrid_x, self.meshgrid_y, - self.weights_info, - self.index_info, + self.weights, ) 
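+        # Note: 'self.weights' is now a single sparse matrix (a scipy
+        # csr_array, built in the "prepare" step) mapping flattened source
+        # grid cells to target grid cells; it replaces the former
+        # 'weights_info' / 'index_info' pair.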
return _regrid_area_weighted_rectilinear_src_and_grid__perform( cube, _regrid_info, mdtol=self._mdtol @@ -224,468 +223,17 @@ def _get_xy_coords(cube): return x_coord, y_coord -def _within_bounds(src_bounds, tgt_bounds, orderswap=False): - """ - Determine which target bounds lie within the extremes of the source bounds. - - Args: - - * src_bounds (ndarray): - An (n, 2) shaped array of monotonic contiguous source bounds. - * tgt_bounds (ndarray): - An (n, 2) shaped array corresponding to the target bounds. - - Kwargs: - - * orderswap (bool): - A Boolean indicating whether the target bounds are in descending order - (True). Defaults to False. - - Returns: - Boolean ndarray, indicating whether each target bound is within the - extremes of the source bounds. - - """ - min_bound = np.min(src_bounds) - 1e-14 - max_bound = np.max(src_bounds) + 1e-14 - - # Swap upper-lower is necessary. - if orderswap is True: - upper, lower = tgt_bounds.T - else: - lower, upper = tgt_bounds.T - - return ((lower <= max_bound) * (lower >= min_bound)) * ( - (upper <= max_bound) * (upper >= min_bound) - ) - - -def _cropped_bounds(bounds, lower, upper): - """ - Return a new bounds array and corresponding slice object (or indices) of - the original data array, resulting from cropping the provided bounds - between the specified lower and upper values. The bounds at the - extremities will be truncated so that they start and end with lower and - upper. - - This function will return an empty NumPy array and slice if there is no - overlap between the region covered by bounds and the region from lower to - upper. - - If lower > upper the resulting bounds may not be contiguous and the - indices object will be a tuple of indices rather than a slice object. - - Args: - - * bounds: - An (n, 2) shaped array of monotonic contiguous bounds. - * lower: - Lower bound at which to crop the bounds array. - * upper: - Upper bound at which to crop the bounds array. - - Returns: - A tuple of the new bounds array and the corresponding slice object or - indices from the zeroth axis of the original array. - - """ - reversed_flag = False - # Ensure order is increasing. - if bounds[0, 0] > bounds[-1, 0]: - # Reverse bounds - bounds = bounds[::-1, ::-1] - reversed_flag = True - - # Number of bounds. - n = bounds.shape[0] - - if lower <= upper: - if lower > bounds[-1, 1] or upper < bounds[0, 0]: - new_bounds = bounds[0:0] - indices = slice(0, 0) - else: - # A single region lower->upper. - if lower < bounds[0, 0]: - # Region extends below bounds so use first lower bound. - lindex = 0 - lower = bounds[0, 0] - else: - # Index of last lower bound less than or equal to lower. - lindex = np.nonzero(bounds[:, 0] <= lower)[0][-1] - if upper > bounds[-1, 1]: - # Region extends above bounds so use last upper bound. - uindex = n - 1 - upper = bounds[-1, 1] - else: - # Index of first upper bound greater than or equal to - # upper. - uindex = np.nonzero(bounds[:, 1] >= upper)[0][0] - # Extract the bounds in our region defined by lower->upper. - new_bounds = np.copy(bounds[lindex : (uindex + 1), :]) - # Replace first and last values with specified bounds. - new_bounds[0, 0] = lower - new_bounds[-1, 1] = upper - if reversed_flag: - indices = slice(n - (uindex + 1), n - lindex) - else: - indices = slice(lindex, uindex + 1) - else: - # Two regions [0]->upper, lower->[-1] - # [0]->upper - if upper < bounds[0, 0]: - # Region outside src bounds. 
- new_bounds_left = bounds[0:0] - indices_left = tuple() - slice_left = slice(0, 0) - else: - if upper > bounds[-1, 1]: - # Whole of bounds. - uindex = n - 1 - upper = bounds[-1, 1] - else: - # Index of first upper bound greater than or equal to upper. - uindex = np.nonzero(bounds[:, 1] >= upper)[0][0] - # Extract the bounds in our region defined by [0]->upper. - new_bounds_left = np.copy(bounds[0 : (uindex + 1), :]) - # Replace last value with specified bound. - new_bounds_left[-1, 1] = upper - if reversed_flag: - indices_left = tuple(range(n - (uindex + 1), n)) - slice_left = slice(n - (uindex + 1), n) - else: - indices_left = tuple(range(0, uindex + 1)) - slice_left = slice(0, uindex + 1) - # lower->[-1] - if lower > bounds[-1, 1]: - # Region is outside src bounds. - new_bounds_right = bounds[0:0] - indices_right = tuple() - slice_right = slice(0, 0) - else: - if lower < bounds[0, 0]: - # Whole of bounds. - lindex = 0 - lower = bounds[0, 0] - else: - # Index of last lower bound less than or equal to lower. - lindex = np.nonzero(bounds[:, 0] <= lower)[0][-1] - # Extract the bounds in our region defined by lower->[-1]. - new_bounds_right = np.copy(bounds[lindex:, :]) - # Replace first value with specified bound. - new_bounds_right[0, 0] = lower - if reversed_flag: - indices_right = tuple(range(0, n - lindex)) - slice_right = slice(0, n - lindex) - else: - indices_right = tuple(range(lindex, n)) - slice_right = slice(lindex, None) - - if reversed_flag: - # Flip everything around. - indices_left, indices_right = indices_right, indices_left - slice_left, slice_right = slice_right, slice_left - - # Combine regions. - new_bounds = np.concatenate((new_bounds_left, new_bounds_right)) - # Use slices if possible, but if we have two regions use indices. - if indices_left and indices_right: - indices = indices_left + indices_right - elif indices_left: - indices = slice_left - elif indices_right: - indices = slice_right - else: - indices = slice(0, 0) - - if reversed_flag: - new_bounds = new_bounds[::-1, ::-1] - - return new_bounds, indices - - -def _cartesian_area(y_bounds, x_bounds): - """ - Return an array of the areas of each cell given two arrays - of cartesian bounds. - - Args: - - * y_bounds: - An (n, 2) shaped NumPy array. - * x_bounds: - An (m, 2) shaped NumPy array. - - Returns: - An (n, m) shaped Numpy array of areas. - - """ - heights = y_bounds[:, 1] - y_bounds[:, 0] - widths = x_bounds[:, 1] - x_bounds[:, 0] - return np.abs(np.outer(heights, widths)) - - -def _spherical_area(y_bounds, x_bounds, radius=1.0): +def _get_bounds_in_units(coord, units, dtype): """ - Return an array of the areas of each cell on a sphere - given two arrays of latitude and longitude bounds in radians. - - Args: - - * y_bounds: - An (n, 2) shaped NumPy array of latitude bounds in radians. - * x_bounds: - An (m, 2) shaped NumPy array of longitude bounds in radians. - * radius: - Radius of the sphere. Default is 1.0. - - Returns: - An (n, m) shaped Numpy array of areas. + Return a copy of coord's bounds in the specified units and dtype. + Return as contiguous bounds. """ - return iris.analysis.cartography._quadrant_area(y_bounds, x_bounds, radius) - - -def _get_bounds_in_units(coord, units, dtype): - """Return a copy of coord's bounds in the specified units and dtype.""" # The bounds are cast to dtype before conversion to prevent issues when # mixing float32 and float64 types. 
- return coord.units.convert(coord.bounds.astype(dtype), units).astype(dtype) - - -def _weighted_mean_with_mdtol(data, weights, axis=None, mdtol=0): - """ - Return the weighted mean of an array over the specified axis - using the provided weights (if any) and a permitted fraction of - masked data. - - Args: - - * data (array-like): - Data to be averaged. - - * weights (array-like): - An array of the same shape as the data that specifies the contribution - of each corresponding data element to the calculated mean. - - Kwargs: - - * axis (int or tuple of ints): - Axis along which the mean is computed. The default is to compute - the mean of the flattened array. - - * mdtol (float): - Tolerance of missing data. The value returned in each element of the - returned array will be masked if the fraction of masked data exceeds - mdtol. This fraction is weighted by the `weights` array if one is - provided. mdtol=0 means no missing data is tolerated - while mdtol=1 will mean the resulting element will be masked if and - only if all the contributing elements of data are masked. - Defaults to 0. - - Returns: - Numpy array (possibly masked) or scalar. - - """ - if ma.is_masked(data): - res, unmasked_weights_sum = ma.average( - data, weights=weights, axis=axis, returned=True - ) - if mdtol < 1: - weights_sum = weights.sum(axis=axis) - frac_masked = 1 - np.true_divide(unmasked_weights_sum, weights_sum) - mask_pt = frac_masked > mdtol - if np.any(mask_pt) and not isinstance(res, ma.core.MaskedConstant): - if np.isscalar(res): - res = ma.masked - elif ma.isMaskedArray(res): - res.mask |= mask_pt - else: - res = ma.masked_array(res, mask=mask_pt) - else: - res = np.average(data, weights=weights, axis=axis) - return res - - -def _regrid_area_weighted_array( - src_data, x_dim, y_dim, weights_info, index_info, mdtol=0 -): - """ - Regrid the given data from its source grid to a new grid using - an area weighted mean to determine the resulting data values. - - .. note:: - - Elements in the returned array that lie either partially - or entirely outside of the extent of the source grid will - be masked irrespective of the value of mdtol. - - Args: - - * src_data: - An N-dimensional NumPy array. - * x_dim: - The X dimension within `src_data`. - * y_dim: - The Y dimension within `src_data`. - * weights_info: - The area weights information to be used for area-weighted - regridding. - - Kwargs: - - * mdtol: - Tolerance of missing data. The value returned in each element of the - returned array will be masked if the fraction of missing data exceeds - mdtol. This fraction is calculated based on the area of masked cells - within each target cell. mdtol=0 means no missing data is tolerated - while mdtol=1 will mean the resulting element will be masked if and - only if all the overlapping elements of the source grid are masked. - Defaults to 0. - - Returns: - The regridded data as an N-dimensional NumPy array. The lengths - of the X and Y dimensions will now match those of the target - grid. - - """ - ( - blank_weights, - src_area_weights, - new_data_mask_basis, - ) = weights_info - - ( - result_x_extent, - result_y_extent, - square_data_indices_y, - square_data_indices_x, - src_area_datas_required, - ) = index_info - - # Ensure we have x_dim and y_dim. 
- x_dim_orig = x_dim - y_dim_orig = y_dim - if y_dim is None: - src_data = np.expand_dims(src_data, axis=src_data.ndim) - y_dim = src_data.ndim - 1 - if x_dim is None: - src_data = np.expand_dims(src_data, axis=src_data.ndim) - x_dim = src_data.ndim - 1 - # Move y_dim and x_dim to last dimensions - if not x_dim == src_data.ndim - 1: - src_data = np.moveaxis(src_data, x_dim, -1) - if not y_dim == src_data.ndim - 2: - if x_dim < y_dim: - # note: y_dim was shifted along by one position when - # x_dim was moved to the last dimension - src_data = np.moveaxis(src_data, y_dim - 1, -2) - elif x_dim > y_dim: - src_data = np.moveaxis(src_data, y_dim, -2) - x_dim = src_data.ndim - 1 - y_dim = src_data.ndim - 2 - - # Create empty "pre-averaging" data array that will enable the - # src_data data corresponding to a given target grid point, - # to be stacked per point. - # Note that dtype is not preserved and that the array mask - # allows for regions that do not overlap. - new_shape = list(src_data.shape) - new_shape[x_dim] = result_x_extent - new_shape[y_dim] = result_y_extent - - # Use input cube dtype or convert values to the smallest possible float - # dtype when necessary. - dtype = np.promote_types(src_data.dtype, np.float16) - - # Axes of data over which the weighted mean is calculated. - axis = (y_dim, x_dim) - - # Use previously established indices - - src_area_datas_square = src_data[ - ..., square_data_indices_y, square_data_indices_x - ] - - _, src_area_datas_required = np.broadcast_arrays( - src_area_datas_square, src_area_datas_required - ) - - src_area_datas = np.where( - src_area_datas_required, src_area_datas_square, 0 - ) - - # Flag to indicate whether the original data was a masked array. - src_masked = src_data.mask.any() if ma.isMaskedArray(src_data) else False - if src_masked: - src_area_masks_square = src_data.mask[ - ..., square_data_indices_y, square_data_indices_x - ] - src_area_masks = np.where( - src_area_datas_required, src_area_masks_square, True - ) - - else: - # If the weights were originally blank, set the weights to all 1 to - # avoid divide by 0 error and set the new data mask for making the - # values 0 - src_area_weights = np.where(blank_weights, 1, src_area_weights) - - new_data_mask = np.broadcast_to(new_data_mask_basis, new_shape) - - # Broadcast the weights array to allow numpy's ma.average - # to be called. - # Assign new shape to raise error on copy. - src_area_weights.shape = src_area_datas.shape[-3:] - # Broadcast weights to match shape of data. - _, src_area_weights = np.broadcast_arrays(src_area_datas, src_area_weights) - - # Mask the data points - if src_masked: - src_area_datas = np.ma.array(src_area_datas, mask=src_area_masks) - - # Calculate weighted mean taking into account missing data. 
- new_data = _weighted_mean_with_mdtol( - src_area_datas, weights=src_area_weights, axis=axis, mdtol=mdtol - ) - new_data = new_data.reshape(new_shape) - if src_masked: - new_data_mask = new_data.mask - - # Mask the data if originally masked or if the result has masked points - if ma.isMaskedArray(src_data): - new_data = ma.array( - new_data, - mask=new_data_mask, - fill_value=src_data.fill_value, - dtype=dtype, - ) - elif new_data_mask.any(): - new_data = ma.array(new_data, mask=new_data_mask, dtype=dtype) - else: - new_data = new_data.astype(dtype) - - # Restore data to original form - if x_dim_orig is None and y_dim_orig is None: - new_data = np.squeeze(new_data, axis=x_dim) - new_data = np.squeeze(new_data, axis=y_dim) - elif y_dim_orig is None: - new_data = np.squeeze(new_data, axis=y_dim) - new_data = np.moveaxis(new_data, -1, x_dim_orig) - elif x_dim_orig is None: - new_data = np.squeeze(new_data, axis=x_dim) - new_data = np.moveaxis(new_data, -1, y_dim_orig) - elif x_dim_orig < y_dim_orig: - # move the x_dim back first, so that the y_dim will - # then be moved to its original position - new_data = np.moveaxis(new_data, -1, x_dim_orig) - new_data = np.moveaxis(new_data, -1, y_dim_orig) - else: - # move the y_dim back first, so that the x_dim will - # then be moved to its original position - new_data = np.moveaxis(new_data, -2, y_dim_orig) - new_data = np.moveaxis(new_data, -1, x_dim_orig) - - return new_data + return coord.units.convert( + coord.contiguous_bounds().astype(dtype), units + ).astype(dtype) def _regrid_area_weighted_rectilinear_src_and_grid__prepare( @@ -775,290 +323,51 @@ def _regrid_area_weighted_rectilinear_src_and_grid__prepare( # Create 2d meshgrids as required by _create_cube func. meshgrid_x, meshgrid_y = _meshgrid(grid_x.points, grid_y.points) - # Determine whether target grid bounds are decreasing. This must - # be determined prior to wrap_lons being called. - grid_x_decreasing = grid_x_bounds[-1, 0] < grid_x_bounds[0, 0] - grid_y_decreasing = grid_y_bounds[-1, 0] < grid_y_bounds[0, 0] - # Wrapping of longitudes. if spherical: - base = np.min(src_x_bounds) modulus = x_units.modulus - # Only wrap if necessary to avoid introducing floating - # point errors. - if np.min(grid_x_bounds) < base or np.max(grid_x_bounds) > ( - base + modulus - ): - grid_x_bounds = iris.analysis.cartography.wrap_lons( - grid_x_bounds, base, modulus - ) - - # Determine whether the src_x coord has periodic boundary conditions. - circular = getattr(src_x, "circular", False) - - # Use simple cartesian area function or one that takes into - # account the curved surface if coord system is spherical. - if spherical: - area_func = _spherical_area else: - area_func = _cartesian_area + modulus = None def _calculate_regrid_area_weighted_weights( src_x_bounds, src_y_bounds, grid_x_bounds, grid_y_bounds, - grid_x_decreasing, - grid_y_decreasing, - area_func, - circular=False, + spherical, + modulus=None, ): - """ - Compute the area weights used for area-weighted regridding. - Args: - * src_x_bounds: - A NumPy array of bounds along the X axis defining the source grid. - * src_y_bounds: - A NumPy array of bounds along the Y axis defining the source grid. - * grid_x_bounds: - A NumPy array of bounds along the X axis defining the new grid. - * grid_y_bounds: - A NumPy array of bounds along the Y axis defining the new grid. - * grid_x_decreasing: - Boolean indicating whether the X coordinate of the new grid is - in descending order. 
- * grid_y_decreasing: - Boolean indicating whether the Y coordinate of the new grid is - in descending order. - * area_func: - A function that returns an (p, q) array of weights given an (p, 2) - shaped array of Y bounds and an (q, 2) shaped array of X bounds. - Kwargs: - * circular: - A boolean indicating whether the `src_x_bounds` are periodic. - Default is False. - Returns: - The area weights to be used for area-weighted regridding. - """ - # Determine which grid bounds are within src extent. - y_within_bounds = _within_bounds( - src_y_bounds, grid_y_bounds, grid_y_decreasing - ) - x_within_bounds = _within_bounds( - src_x_bounds, grid_x_bounds, grid_x_decreasing + """Return weights matrix to be used in regridding.""" + src_shape = (len(src_x_bounds) - 1, len(src_y_bounds) - 1) + tgt_shape = (len(grid_x_bounds) - 1, len(grid_y_bounds) - 1) + + if spherical: + # Changing the dtype here replicates old regridding behaviour. + dtype = np.float64 + src_x_bounds = src_x_bounds.astype(dtype) + src_y_bounds = src_y_bounds.astype(dtype) + grid_x_bounds = grid_x_bounds.astype(dtype) + grid_y_bounds = grid_y_bounds.astype(dtype) + + src_y_bounds = np.sin(src_y_bounds) + grid_y_bounds = np.sin(grid_y_bounds) + x_info = _get_coord_to_coord_matrix_info( + src_x_bounds, grid_x_bounds, circular=spherical, mod=modulus ) - - # Cache which src_bounds are within grid bounds - cached_x_bounds = [] - cached_x_indices = [] - max_x_indices = 0 - for x_0, x_1 in grid_x_bounds: - if grid_x_decreasing: - x_0, x_1 = x_1, x_0 - x_bounds, x_indices = _cropped_bounds(src_x_bounds, x_0, x_1) - cached_x_bounds.append(x_bounds) - cached_x_indices.append(x_indices) - # Keep record of the largest slice - if isinstance(x_indices, slice): - x_indices_size = np.sum(x_indices.stop - x_indices.start) - else: # is tuple of indices - x_indices_size = len(x_indices) - if x_indices_size > max_x_indices: - max_x_indices = x_indices_size - - # Cache which y src_bounds areas and weights are within grid bounds - cached_y_indices = [] - cached_weights = [] - max_y_indices = 0 - for j, (y_0, y_1) in enumerate(grid_y_bounds): - # Reverse lower and upper if dest grid is decreasing. - if grid_y_decreasing: - y_0, y_1 = y_1, y_0 - y_bounds, y_indices = _cropped_bounds(src_y_bounds, y_0, y_1) - cached_y_indices.append(y_indices) - # Keep record of the largest slice - if isinstance(y_indices, slice): - y_indices_size = np.sum(y_indices.stop - y_indices.start) - else: # is tuple of indices - y_indices_size = len(y_indices) - if y_indices_size > max_y_indices: - max_y_indices = y_indices_size - - weights_i = [] - for i, (x_0, x_1) in enumerate(grid_x_bounds): - # Reverse lower and upper if dest grid is decreasing. - if grid_x_decreasing: - x_0, x_1 = x_1, x_0 - x_bounds = cached_x_bounds[i] - x_indices = cached_x_indices[i] - - # Determine whether element i, j overlaps with src and hence - # an area weight should be computed. - # If x_0 > x_1 then we want [0]->x_1 and x_0->[0] + mod in the case - # of wrapped longitudes. However if the src grid is not global - # (i.e. circular) this new cell would include a region outside of - # the extent of the src grid and thus the weight is therefore - # invalid. - outside_extent = x_0 > x_1 and not circular - if ( - outside_extent - or not y_within_bounds[j] - or not x_within_bounds[i] - ): - weights = False - else: - # Calculate weights based on areas of cropped bounds. 
- if isinstance(x_indices, tuple) and isinstance( - y_indices, tuple - ): - raise RuntimeError( - "Cannot handle split bounds " "in both x and y." - ) - weights = area_func(y_bounds, x_bounds) - weights_i.append(weights) - cached_weights.append(weights_i) - return ( - tuple(cached_x_indices), - tuple(cached_y_indices), - max_x_indices, - max_y_indices, - tuple(cached_weights), + y_info = _get_coord_to_coord_matrix_info(src_y_bounds, grid_y_bounds) + weights_matrix = _combine_xy_weights( + x_info, y_info, src_shape, tgt_shape ) + return weights_matrix - ( - cached_x_indices, - cached_y_indices, - max_x_indices, - max_y_indices, - cached_weights, - ) = _calculate_regrid_area_weighted_weights( + weights = _calculate_regrid_area_weighted_weights( src_x_bounds, src_y_bounds, grid_x_bounds, grid_y_bounds, - grid_x_decreasing, - grid_y_decreasing, - area_func, - circular, - ) - - # Go further, calculating the full weights array that we'll need in the - # perform step and the indices we'll need to extract from the cube we're - # regridding (src_data) - - result_y_extent = len(grid_y_bounds) - result_x_extent = len(grid_x_bounds) - - # Total number of points - num_target_pts = result_y_extent * result_x_extent - - # Create empty array to hold weights - src_area_weights = np.zeros( - list((max_y_indices, max_x_indices, num_target_pts)) + spherical, + modulus, ) - - # Built for the case where the source cube isn't masked - blank_weights = np.zeros((num_target_pts,)) - new_data_mask_basis = np.full( - (len(cached_y_indices), len(cached_x_indices)), False, dtype=np.bool_ - ) - - # To permit fancy indexing, we need to store our data in an array whose - # first two dimensions represent the indices needed for the target cell. - # Since target cells can require a different number of indices, the size of - # these dimensions should be the maximum of this number. - # This means we need to track whether the data in - # that array is actually required and build those squared-off arrays - # TODO: Consider if a proper mask would be better - src_area_datas_required = np.full( - (max_y_indices, max_x_indices, num_target_pts), False - ) - square_data_indices_y = np.zeros( - (max_y_indices, max_x_indices, num_target_pts), dtype=int - ) - square_data_indices_x = np.zeros( - (max_y_indices, max_x_indices, num_target_pts), dtype=int - ) - - # Stack the weights for each target point and build the indices we'll need - # to extract the src_area_data - target_pt_ji = -1 - for j, y_indices in enumerate(cached_y_indices): - for i, x_indices in enumerate(cached_x_indices): - target_pt_ji += 1 - # Determine whether to mask element i, j based on whether - # there are valid weights. - weights = cached_weights[j][i] - if weights is False: - # Prepare for the src_data not being masked by storing the - # information that will let us fill the data with zeros and - # weights as one. The weighted average result will be the same, - # but we avoid dividing by zero. 
- blank_weights[target_pt_ji] = True - new_data_mask_basis[j, i] = True - else: - # Establish which indices are actually in y_indices and x_indices - if isinstance(y_indices, slice): - y_indices = list( - range( - y_indices.start, - y_indices.stop, - y_indices.step or 1, - ) - ) - else: - y_indices = list(y_indices) - - if isinstance(x_indices, slice): - x_indices = list( - range( - x_indices.start, - x_indices.stop, - x_indices.step or 1, - ) - ) - else: - x_indices = list(x_indices) - - # For the weights, we just need the lengths of these as we're - # dropping them into a pre-made array - - len_y = len(y_indices) - len_x = len(x_indices) - - src_area_weights[0:len_y, 0:len_x, target_pt_ji] = weights - - # To build the indices for the source cube, we need equal - # shaped array so we pad with 0s and record the need to mask - # them in src_area_datas_required - padded_y_indices = y_indices + [0] * (max_y_indices - len_y) - padded_x_indices = x_indices + [0] * (max_x_indices - len_x) - - square_data_indices_y[..., target_pt_ji] = np.array( - padded_y_indices - )[:, np.newaxis] - square_data_indices_x[..., target_pt_ji] = padded_x_indices - - src_area_datas_required[0:len_y, 0:len_x, target_pt_ji] = True - - # Package up the return data - - weights_info = ( - blank_weights, - src_area_weights, - new_data_mask_basis, - ) - - index_info = ( - result_x_extent, - result_y_extent, - square_data_indices_y, - square_data_indices_x, - src_area_datas_required, - ) - - # Now return it - return ( src_x, src_y, @@ -1068,8 +377,7 @@ def _calculate_regrid_area_weighted_weights( grid_y, meshgrid_x, meshgrid_y, - weights_info, - index_info, + weights, ) @@ -1091,17 +399,18 @@ def _regrid_area_weighted_rectilinear_src_and_grid__perform( grid_y, meshgrid_x, meshgrid_y, - weights_info, - index_info, + weights, ) = regrid_info + tgt_shape = (len(grid_y.points), len(grid_x.points)) + # Calculate new data array for regridded cube. regrid = functools.partial( - _regrid_area_weighted_array, + _regrid_along_dims, x_dim=src_x_dim, y_dim=src_y_dim, - weights_info=weights_info, - index_info=index_info, + weights=weights, + tgt_shape=tgt_shape, mdtol=mdtol, ) @@ -1120,9 +429,9 @@ def _regrid_area_weighted_rectilinear_src_and_grid__perform( ) # TODO: investigate if an area weighted callback would be more appropriate. # _regrid_callback = functools.partial( - # _regrid_area_weighted_array, - # weights_info=weights_info, - # index_info=index_info, + # _regrid_along_dims, + # weights=weights, + # tgt_shape=tgt_shape, # mdtol=mdtol, # ) @@ -1149,3 +458,263 @@ def regrid_callback(*args, **kwargs): new_cube = new_cube[tuple(indices)] return new_cube + + +def _get_coord_to_coord_matrix_info( + src_bounds, tgt_bounds, circular=False, mod=None +): + """ + First part of weight calculation. + + Calculate the weights contribution from a single pair of + coordinate bounds. Search for pairs of overlapping source and + target bounds and associate weights with them. + + Note: this assumes that the bounds are monotonic. + """ + # Calculate the number of cells represented by the bounds. + m = len(tgt_bounds) - 1 + n = len(src_bounds) - 1 + + # Ensure bounds are strictly increasing. + src_decreasing = src_bounds[0] > src_bounds[1] + tgt_decreasing = tgt_bounds[0] > tgt_bounds[1] + if src_decreasing: + src_bounds = src_bounds[::-1] + if tgt_decreasing: + tgt_bounds = tgt_bounds[::-1] + + if circular: + # For circular coordinates (e.g. longitude) account for source and + # target bounds which span different ranges (e.g. 
(-180, 180) vs + # (0, 360)). We ensure that all possible overlaps between source and + # target bounds are accounted for by including two copies of the + # source bounds, shifted appropriately by the modulus. + adjust = (tgt_bounds.min() - src_bounds.min()) // mod + src_bounds = src_bounds + (mod * adjust) + src_bounds = np.append(src_bounds, src_bounds + mod) + nn = (2 * n) + 1 + else: + nn = n + + # Before iterating through pairs of overlapping bounds, find an + # appropriate place to start iteration. Note that this assumes that + # the bounds are increasing. + i = max(np.searchsorted(tgt_bounds, src_bounds[0], side="right") - 1, 0) + j = max(np.searchsorted(src_bounds, tgt_bounds[0], side="right") - 1, 0) + + data = [] + rows = [] + cols = [] + + # Iterate through overlapping cells in the source and target bounds. + # For the sake of calculations, we keep track of the minimum value of + # the intersection of each cell. + floor = max(tgt_bounds[i], src_bounds[j]) + while i < m and j < nn: + # Record the current indices. + rows.append(i) + cols.append(j) + + # Determine the next indices and floor. + if tgt_bounds[i + 1] < src_bounds[j + 1]: + next_floor = tgt_bounds[i + 1] + next_i = i + 1 + elif tgt_bounds[i + 1] == src_bounds[j + 1]: + next_floor = tgt_bounds[i + 1] + next_i = i + 1 + j += 1 + else: + next_floor = src_bounds[j + 1] + next_i = i + j += 1 + + # Calculate and record the weight for the current overlapping cells. + weight = (next_floor - floor) / (tgt_bounds[i + 1] - tgt_bounds[i]) + data.append(weight) + + # Update indices and floor + i = next_i + floor = next_floor + + data = np.array(data) + rows = np.array(rows) + cols = np.array(cols) + + if circular: + # Remove out of bounds points. When the source bounds were duplicated + # an "out of bounds" cell was introduced between the two copies. + oob = np.where(cols == n) + data = np.delete(data, oob) + rows = np.delete(rows, oob) + cols = np.delete(cols, oob) + + # Wrap indices. Since we duplicated the source bounds there may be + # indices which are greater than n which will need to be corrected. + cols = cols % (n + 1) + + # Correct indices which were flipped due to reversing decreasing bounds. + if src_decreasing: + cols = n - cols - 1 + if tgt_decreasing: + rows = m - rows - 1 + + return data, rows, cols + + +def _combine_xy_weights(x_info, y_info, src_shape, tgt_shape): + """ + Second part of weight calculation. + + Combine the weights contributions from both pairs of coordinate + bounds (i.e. the source/target pairs for the x and y coords). + Return the result as a sparse array. + """ + x_src, y_src = src_shape + x_tgt, y_tgt = tgt_shape + src_size = x_src * y_src + tgt_size = x_tgt * y_tgt + x_weight, x_rows, x_cols = x_info + y_weight, y_rows, y_cols = y_info + + # Regridding weights will be applied to a flattened (y, x) array. + # Weights and indices are constructed in a way to account for this. + # Weights of the combined matrix are constructed by broadcasting + # the x_weights and y_weights. The resulting array contains every + # combination of x weight and y weight. Then we flatten this array. + xy_weight = y_weight[:, np.newaxis] * x_weight[np.newaxis, :] + xy_weight = xy_weight.flatten() + + # Given the x index and y index associated with a weight, calculate + # the equivalent index in the flattened (y, x) array. 
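+    # For example (illustrative numbers): with a target row length of
+    # x_tgt = 4, the target cell at (y_row=2, x_row=3) lands at flattened
+    # index 2 * 4 + 3 = 11 in the combined matrix.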
+ xy_rows = (y_rows[:, np.newaxis] * x_tgt) + x_rows[np.newaxis, :] + xy_rows = xy_rows.flatten() + xy_cols = (y_cols[:, np.newaxis] * x_src) + x_cols[np.newaxis, :] + xy_cols = xy_cols.flatten() + + # Create a sparse matrix for efficient weight application. + combined_weights = csr_array( + (xy_weight, (xy_rows, xy_cols)), shape=(tgt_size, src_size) + ) + return combined_weights + + +def _standard_regrid_no_masks(data, weights, tgt_shape): + """ + Regrid unmasked data to an unmasked result. + + Assumes that the first two dimensions are the x-y grid. + """ + # Reshape data to a form suitable for matrix multiplication. + extra_shape = data.shape[:-2] + data = data.reshape(-1, np.prod(data.shape[-2:])) + + # Apply regridding weights. + # The order of matrix multiplication is chosen to be consistent + # with existing regridding code. + result = data @ weights.T + + # Reshape result to a suitable form. + result = result.reshape(*(extra_shape + tgt_shape)) + return result + + +def _standard_regrid(data, weights, tgt_shape, mdtol): + """ + Regrid data and handle masks. + + Assumes that the first two dimensions are the x-y grid. + """ + # This is set to keep consistent with legacy behaviour. + # This is likely to become switchable in the future, see: + # https://github.com/SciTools/iris/issues/5461 + oob_invalid = True + + data_shape = data.shape + if ma.is_masked(data): + unmasked = ~ma.getmaskarray(data) + # Calculate contribution from unmasked sources to each target point. + weight_sums = _standard_regrid_no_masks(unmasked, weights, tgt_shape) + else: + # If there are no masked points then all contributions will be + # from unmasked sources, so we can skip this calculation + weight_sums = np.ones(data_shape[:-2] + tgt_shape) + mdtol = max(mdtol, 1e-8) + tgt_mask = weight_sums > 1 - mdtol + # If out of bounds sources are treated the same as masked sources this + # will already have been calculated above, so we can skip this calculation. + if oob_invalid or not ma.is_masked(data): + # Calculate the proportion of each target cell which is covered by the + # source. For the sake of efficiency, this is calculated for a 2D slice + # which is then broadcast. + inbound_sums = _standard_regrid_no_masks( + np.ones(data_shape[-2:]), weights, tgt_shape + ) + if oob_invalid: + # Legacy behaviour, if the full area of a target cell does not lie + # in bounds it will be masked. + oob_mask = inbound_sums > 1 - 1e-8 + else: + # Note: this code is currently inaccessible. This code exists to lay + # the groundwork for future work which will make out of bounds + # behaviour switchable. + oob_mask = inbound_sums > 1 - mdtol + # Broadcast the mask to the shape of the full array + oob_slice = ((np.newaxis,) * len(data.shape[:-2])) + np.s_[:, :] + tgt_mask = tgt_mask * oob_mask[oob_slice] + + # Calculate normalisations. + normalisations = tgt_mask.astype(weight_sums.dtype) + normalisations[tgt_mask] /= weight_sums[tgt_mask] + + # Mask points in the result. + if ma.isMaskedArray(data): + # If the source is masked, the result should have a similar mask. + fill_value = data.fill_value + normalisations = ma.array( + normalisations, mask=~tgt_mask, fill_value=fill_value + ) + elif np.any(~tgt_mask): + normalisations = ma.array(normalisations, mask=~tgt_mask) + + # Use input cube dtype or convert values to the smallest possible float + # dtype when necessary. + dtype = np.promote_types(data.dtype, np.float16) + + # Perform regridding on unmasked data. 
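+    # Note: masked source points are zero-filled below, so they contribute
+    # nothing to the weighted sums; the 'normalisations' computed above then
+    # re-scale each target value by the total weight actually received from
+    # unmasked sources.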
+ result = _standard_regrid_no_masks( + ma.filled(data, 0.0), weights, tgt_shape + ) + # Apply normalisations and masks to the regridded data. + result = result * normalisations + result = result.astype(dtype) + return result + + +def _regrid_along_dims(data, x_dim, y_dim, weights, tgt_shape, mdtol): + """Regrid data, handling masks and dimensions.""" + # Handle scalar coordinates. + # Note: scalar source coordinates are only handled when their + # corresponding target coordinate is also scalar. + num_scalar_dims = 0 + if x_dim is None: + num_scalar_dims += 1 + data = np.expand_dims(data, -1) + x_dim = -1 + if y_dim is None: + num_scalar_dims += 1 + data = np.expand_dims(data, -1) + y_dim = -1 + if num_scalar_dims == 2: + y_dim = -2 + + # Standard regridding expects the last two dimensions to belong + # to the y and x coordinate and will output as such. + # Axes are moved to account for an arbitrary dimension ordering. + data = np.moveaxis(data, [y_dim, x_dim], [-2, -1]) + result = _standard_regrid(data, weights, tgt_shape, mdtol) + result = np.moveaxis(result, [-2, -1], [y_dim, x_dim]) + + for _ in range(num_scalar_dims): + result = np.squeeze(result, axis=-1) + return result diff --git a/lib/iris/common/_split_attribute_dicts.py b/lib/iris/common/_split_attribute_dicts.py new file mode 100644 index 0000000000..3927974053 --- /dev/null +++ b/lib/iris/common/_split_attribute_dicts.py @@ -0,0 +1,125 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. +""" +Dictionary operations for dealing with the CubeAttrsDict "split"-style attribute +dictionaries. + +The idea here is to convert a split-dictionary into a "plain" one for calculations, +whose keys are all pairs of the form ('global', ) or ('local', ). +And to convert back again after the operation, if the result is a dictionary. + +For "strict" operations this clearly does all that is needed. For lenient ones, +we _might_ want for local+global attributes of the same name to interact. +However, on careful consideration, it seems that this is not actually desirable for +any of the common-metadata operations. +So, we simply treat "global" and "local" attributes of the same name as entirely +independent. Which happily is also the easiest to code, and to explain. +""" +from collections.abc import Mapping, Sequence +from functools import wraps + + +def _convert_splitattrs_to_pairedkeys_dict(dic): + """ + Convert a split-attributes dictionary to a "normal" dict. + + Transform a :class:`~iris.cube.CubeAttributesDict` "split" attributes dictionary + into a 'normal' :class:`dict`, with paired keys of the form ('global', name) or + ('local', name). + + If the input is *not* a split-attrs dict, it is converted to one before + transforming it. This will assign its keys to global/local depending on a standard + set of choices (see :class:`~iris.cube.CubeAttributesDict`). + """ + from iris.cube import CubeAttrsDict + + # Convert input to CubeAttrsDict + if not hasattr(dic, "globals") or not hasattr(dic, "locals"): + dic = CubeAttrsDict(dic) + + def _global_then_local_items(dic): + # Routine to produce global, then local 'items' in order, and with all keys + # "labelled" as local or global type, to ensure they are all unique. 
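+        # For example (illustrative):
+        #   CubeAttrsDict(globals={'history': 'h'}, locals={'title': 't'})
+        #   --> {('global', 'history'): 'h', ('local', 'title'): 't'}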
+ for key, value in dic.globals.items(): + yield ("global", key), value + for key, value in dic.locals.items(): + yield ("local", key), value + + return dict(_global_then_local_items(dic)) + + +def _convert_pairedkeys_dict_to_splitattrs(dic): + """ + Convert an input with global/local paired keys back into a split-attrs dict. + + For now, this is always and only a :class:`iris.cube.CubeAttrsDict`. + """ + from iris.cube import CubeAttrsDict + + result = CubeAttrsDict() + for key, value in dic.items(): + keytype, keyname = key + if keytype == "global": + result.globals[keyname] = value + else: + assert keytype == "local" + result.locals[keyname] = value + return result + + +def adjust_for_split_attribute_dictionaries(operation): + """ + Decorator to make a function of attribute-dictionaries work with split attributes. + + The wrapped function of attribute-dictionaries is currently always one of "equals", + "combine" or "difference", with signatures like : + equals(left: dict, right: dict) -> bool + combine(left: dict, right: dict) -> dict + difference(left: dict, right: dict) -> None | (dict, dict) + + The results of the wrapped operation are either : + * for "equals" (or "__eq__") : a boolean + * for "combine" : a (converted) attributes-dictionary + * for "difference" : a list of (None or "pair"), where a pair contains two + dictionaries + + Before calling the wrapped operation, its inputs (left, right) are modified by + converting any "split" dictionaries to a form where the keys are pairs + of the form ("global", name) or ("local", name). + + After calling the wrapped operation, for "combine" or "difference", the result can + contain a dictionary or dictionaries. These are then transformed back from the + 'converted' form to split-attribute dictionaries, before returning. + + "Split" dictionaries are all of class :class:`~iris.cube.CubeAttrsDict`, since + the only usage of 'split' attribute dictionaries is in Cubes (i.e. they are not + used for cube components). + """ + + @wraps(operation) + def _inner_function(*args, **kwargs): + # Convert all inputs into 'pairedkeys' type dicts + args = [_convert_splitattrs_to_pairedkeys_dict(arg) for arg in args] + + result = operation(*args, **kwargs) + + # Convert known specific cases of 'pairedkeys' dicts in the result, and convert + # those back into split-attribute dictionaries. + if isinstance(result, Mapping): + # Fix a result which is a single dictionary -- for "combine" + result = _convert_pairedkeys_dict_to_splitattrs(result) + elif isinstance(result, Sequence) and len(result) == 2: + # Fix a result which is a pair of dictionaries -- for "difference" + left, right = result + left, right = ( + _convert_pairedkeys_dict_to_splitattrs(left), + _convert_pairedkeys_dict_to_splitattrs(right), + ) + result = result.__class__([left, right]) + # ELSE: leave other types of result unchanged. E.G. 
None, bool
+
+        return result
+
+    return _inner_function
diff --git a/lib/iris/common/metadata.py b/lib/iris/common/metadata.py
index 8d60171331..f88a2e57b5 100644
--- a/lib/iris/common/metadata.py
+++ b/lib/iris/common/metadata.py
@@ -20,6 +20,7 @@
 from xxhash import xxh64_hexdigest
 
 from ..config import get_logger
+from ._split_attribute_dicts import adjust_for_split_attribute_dictionaries
 from .lenient import _LENIENT
 from .lenient import _lenient_service as lenient_service
 from .lenient import _qualname as qualname
@@ -241,7 +242,11 @@ def __str__(self):
         field_strings = []
         for field in self._fields:
             value = getattr(self, field)
-            if value is None or isinstance(value, (str, dict)) and not value:
+            if (
+                value is None
+                or isinstance(value, (str, Mapping))
+                and not value
+            ):
                 continue
             field_strings.append(f"{field}={value}")
@@ -1250,6 +1255,46 @@ def _check(item):
 
         return result
 
+    #
+    # Override each of the attribute-dict operations in BaseMetadata, to enable
+    # them to deal with split-attribute dictionaries correctly.
+    # There are 6 of these, for (equals/combine/difference) * (lenient/strict).
+    # Each is overridden with a *wrapped* version of the parent method, using the
+    # "@adjust_for_split_attribute_dictionaries" decorator, which converts any
+    # split-attribute dictionaries in the inputs to ordinary dicts, and likewise
+    # re-converts any dictionaries in the return value.
+    #
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _combine_lenient_attributes(left, right):
+        return BaseMetadata._combine_lenient_attributes(left, right)
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _combine_strict_attributes(left, right):
+        return BaseMetadata._combine_strict_attributes(left, right)
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _compare_lenient_attributes(left, right):
+        return BaseMetadata._compare_lenient_attributes(left, right)
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _compare_strict_attributes(left, right):
+        return BaseMetadata._compare_strict_attributes(left, right)
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _difference_lenient_attributes(left, right):
+        return BaseMetadata._difference_lenient_attributes(left, right)
+
+    @staticmethod
+    @adjust_for_split_attribute_dictionaries
+    def _difference_strict_attributes(left, right):
+        return BaseMetadata._difference_strict_attributes(left, right)
+
 
 class DimCoordMetadata(CoordMetadata):
     """
diff --git a/lib/iris/common/mixin.py b/lib/iris/common/mixin.py
index f3b42fc02d..a1b1e4647b 100644
--- a/lib/iris/common/mixin.py
+++ b/lib/iris/common/mixin.py
@@ -16,7 +16,7 @@
 
 from .metadata import BaseMetadata
 
-__all__ = ["CFVariableMixin"]
+__all__ = ["CFVariableMixin", "LimitedAttributeDict"]
 
 
 def _get_valid_standard_name(name):
@@ -52,7 +52,29 @@ def _get_valid_standard_name(name):
 
 
 class LimitedAttributeDict(dict):
-    _forbidden_keys = (
+    """
+    A specialised 'dict' subclass, which forbids (errors) certain attribute names.
+
+    Used for the attribute dictionaries of all Iris data objects (that is,
+    :class:`CFVariableMixin` and its subclasses).
+
+    The "excluded" attributes are those which either :mod:`netCDF4` or Iris
+    interpret and control with special meaning, which therefore should *not* be
+    defined as custom 'user' attributes on Iris data objects such as cubes.
+
+    For example : "coordinates", "grid_mapping", "scale_factor".
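+
+    For instance, a minimal sketch of the intended behaviour (the error message
+    text is as defined by this class)::
+
+        >>> attrs = LimitedAttributeDict(my_attribute=1)  # custom keys are fine
+        >>> attrs["units"] = "K"
+        Traceback (most recent call last):
+        ...
+        ValueError: 'units' is not a permitted attribute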
+ + The 'forbidden' attributes are those listed in + :data:`iris.common.mixin.LimitedAttributeDict.CF_ATTRS_FORBIDDEN` . + + All the forbidden attributes are amongst those listed in + `Appendix A of the CF Conventions: `_ + -- however, not *all* of them, since not all are interpreted by Iris. + + """ + + #: Attributes with special CF meaning, forbidden in Iris attribute dictionaries. + CF_ATTRS_FORBIDDEN = ( "standard_name", "long_name", "units", @@ -77,7 +99,7 @@ def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) # Check validity of keys for key in self.keys(): - if key in self._forbidden_keys: + if key in self.CF_ATTRS_FORBIDDEN: raise ValueError(f"{key!r} is not a permitted attribute") def __eq__(self, other): @@ -98,11 +120,12 @@ def __ne__(self, other): return not self == other def __setitem__(self, key, value): - if key in self._forbidden_keys: + if key in self.CF_ATTRS_FORBIDDEN: raise ValueError(f"{key!r} is not a permitted attribute") dict.__setitem__(self, key, value) def update(self, other, **kwargs): + """Standard ``dict.update()`` operation.""" # Gather incoming keys keys = [] if hasattr(other, "keys"): @@ -114,7 +137,7 @@ def update(self, other, **kwargs): # Check validity of keys for key in keys: - if key in self._forbidden_keys: + if key in self.CF_ATTRS_FORBIDDEN: raise ValueError(f"{key!r} is not a permitted attribute") dict.update(self, other, **kwargs) diff --git a/lib/iris/coords.py b/lib/iris/coords.py index 30de08d496..8af7ee0c8a 100644 --- a/lib/iris/coords.py +++ b/lib/iris/coords.py @@ -36,6 +36,9 @@ import iris.time import iris.util +#: The default value for ignore_axis which controls guess_coord_axis' behaviour +DEFAULT_IGNORE_AXIS = False + class _DimensionalMetadata(CFVariableMixin, metaclass=ABCMeta): """ @@ -860,7 +863,6 @@ def xml_element(self, doc): element.setAttribute( "climatological", str(self.climatological) ) - if self.attributes: attributes_element = doc.createElement("attributes") for name in sorted(self.attributes.keys()): @@ -1593,6 +1595,8 @@ def __init__( self.bounds = bounds self.climatological = climatological + self._ignore_axis = DEFAULT_IGNORE_AXIS + def copy(self, points=None, bounds=None): """ Returns a copy of this coordinate. @@ -1625,6 +1629,10 @@ def copy(self, points=None, bounds=None): # self. new_coord.bounds = bounds + # The state of ignore_axis is controlled by the coordinate rather than + # the metadata manager + new_coord.ignore_axis = self.ignore_axis + return new_coord @classmethod @@ -1644,7 +1652,14 @@ def from_coord(cls, coord): if issubclass(cls, DimCoord): # DimCoord introduces an extra constructor keyword. kwargs["circular"] = getattr(coord, "circular", False) - return cls(**kwargs) + + new_coord = cls(**kwargs) + + # The state of ignore_axis is controlled by the coordinate rather than + # the metadata manager + new_coord.ignore_axis = coord.ignore_axis + + return new_coord @property def points(self): @@ -1736,6 +1751,24 @@ def climatological(self, value): self._metadata_manager.climatological = value + @property + def ignore_axis(self): + """ + A boolean that controls whether guess_coord_axis acts on this + coordinate. + + Defaults to False, and when set to True it will be skipped by + guess_coord_axis. 
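+
+        A minimal sketch (assuming ``coord`` is an existing coordinate; when
+        ignored, :func:`iris.util.guess_coord_axis` is expected to return
+        ``None``)::
+
+            >>> coord.ignore_axis = True  # doctest: +SKIP
+            >>> print(iris.util.guess_coord_axis(coord))  # doctest: +SKIP
+            None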
+ """ + return self._ignore_axis + + @ignore_axis.setter + def ignore_axis(self, value): + if not isinstance(value, bool): + emsg = "'ignore_axis' can only be set to 'True' or 'False'" + raise ValueError(emsg) + self._ignore_axis = value + def lazy_points(self): """ Return a lazy array representing the coord points. @@ -2694,7 +2727,6 @@ def __init__( Will set to True when a climatological time axis is loaded from NetCDF. Always False if no bounds exist. - """ # Configure the metadata manager. self._metadata_manager = metadata_manager_factory(DimCoordMetadata) diff --git a/lib/iris/cube.py b/lib/iris/cube.py index 3a36a035c0..8aa0b452d5 100644 --- a/lib/iris/cube.py +++ b/lib/iris/cube.py @@ -9,11 +9,20 @@ """ from collections import OrderedDict -from collections.abc import Container, Iterable, Iterator, MutableMapping import copy from copy import deepcopy from functools import partial, reduce +import itertools import operator +from typing import ( + Container, + Iterable, + Iterator, + Mapping, + MutableMapping, + Optional, + Union, +) import warnings from xml.dom.minidom import Document import zlib @@ -34,12 +43,13 @@ import iris.aux_factory from iris.common import CFVariableMixin, CubeMetadata, metadata_manager_factory from iris.common.metadata import metadata_filter +from iris.common.mixin import LimitedAttributeDict import iris.coord_systems import iris.coords import iris.exceptions import iris.util -__all__ = ["Cube", "CubeList"] +__all__ = ["Cube", "CubeAttrsDict", "CubeList"] # The XML namespace to use for CubeML documents @@ -789,6 +799,352 @@ def _is_single_item(testee): return isinstance(testee, str) or not isinstance(testee, Iterable) +class CubeAttrsDict(MutableMapping): + """ + A :class:`dict`\\-like object for :attr:`iris.cube.Cube.attributes`, + providing unified user access to combined cube "local" and "global" attributes + dictionaries, with the access behaviour of an ordinary (single) dictionary. + + Properties :attr:`globals` and :attr:`locals` are regular + :class:`~iris.common.mixin.LimitedAttributeDict`\\s, which can be accessed and + modified separately. The :class:`CubeAttrsDict` itself contains *no* additional + state, but simply provides a 'combined' view of both global + local attributes. + + All the read- and write-type methods, such as ``get()``, ``update()``, ``values()``, + behave according to the logic documented for : :meth:`__getitem__`, + :meth:`__setitem__` and :meth:`__iter__`. + + Notes + ----- + For type testing, ``issubclass(CubeAttrsDict, Mapping)`` is ``True``, but + ``issubclass(CubeAttrsDict, dict)`` is ``False``. + + Examples + -------- + + >>> from iris.cube import Cube + >>> cube = Cube([0]) + >>> # CF defines 'history' as global by default. 
+ >>> cube.attributes.update({"history": "from test-123", "mycode": 3}) + >>> print(cube.attributes) + {'history': 'from test-123', 'mycode': 3} + >>> print(repr(cube.attributes)) + CubeAttrsDict(globals={'history': 'from test-123'}, locals={'mycode': 3}) + + >>> cube.attributes['history'] += ' +added' + >>> print(repr(cube.attributes)) + CubeAttrsDict(globals={'history': 'from test-123 +added'}, locals={'mycode': 3}) + + >>> cube.attributes.locals['history'] = 'per-variable' + >>> print(cube.attributes) + {'history': 'per-variable', 'mycode': 3} + >>> print(repr(cube.attributes)) + CubeAttrsDict(globals={'history': 'from test-123 +added'}, locals={'mycode': 3, 'history': 'per-variable'}) + + """ + + # TODO: Create a 'further topic' / 'tech paper' on NetCDF I/O, including + # discussion of attribute handling. + + def __init__( + self, + combined: Optional[Union[Mapping, str]] = "__unspecified", + locals: Optional[Mapping] = None, + globals: Optional[Mapping] = None, + ): + """ + Create a cube attributes dictionary. + + We support initialisation from a single generic mapping input, using the default + global/local assignment rules explained at :meth:`__setattr__`, or from + two separate mappings. Two separate dicts can be passed in the ``locals`` + and ``globals`` args, **or** via a ``combined`` arg which has its own + ``.globals`` and ``.locals`` properties -- so this allows passing an existing + :class:`CubeAttrsDict`, which will be copied. + + Parameters + ---------- + combined : dict + values to init both 'self.globals' and 'self.locals'. If 'combined' itself + has attributes named 'locals' and 'globals', these are used to update the + respective content (after initially setting the individual ones). + Otherwise, 'combined' is treated as a generic mapping, applied as + ``self.update(combined)``, + i.e. it will set locals and/or globals with the same logic as + :meth:`~iris.cube.CubeAttrsDict.__setitem__` . + locals : dict + initial content for 'self.locals' + globals : dict + initial content for 'self.globals' + + Examples + -------- + + >>> from iris.cube import CubeAttrsDict + >>> # CF defines 'history' as global by default. + >>> CubeAttrsDict({'history': 'data-story', 'comment': 'this-cube'}) + CubeAttrsDict(globals={'history': 'data-story'}, locals={'comment': 'this-cube'}) + + >>> CubeAttrsDict(locals={'history': 'local-history'}) + CubeAttrsDict(globals={}, locals={'history': 'local-history'}) + + >>> CubeAttrsDict(globals={'x': 'global'}, locals={'x': 'local'}) + CubeAttrsDict(globals={'x': 'global'}, locals={'x': 'local'}) + + >>> x1 = CubeAttrsDict(globals={'x': 1}, locals={'y': 2}) + >>> x2 = CubeAttrsDict(x1) + >>> x2 + CubeAttrsDict(globals={'x': 1}, locals={'y': 2}) + + """ + # First initialise locals + globals, defaulting to empty. + self.locals = locals + self.globals = globals + # Update with combined, if present. + if not isinstance(combined, str) or combined != "__unspecified": + # Treat a single input with 'locals' and 'globals' properties as an + # existing CubeAttrsDict, and update from its content. + # N.B. enforce deep copying, consistent with general Iris usage. + if hasattr(combined, "globals") and hasattr(combined, "locals"): + # Copy a mapping with globals/locals, like another 'CubeAttrsDict' + self.globals.update(deepcopy(combined.globals)) + self.locals.update(deepcopy(combined.locals)) + else: + # Treat any arbitrary single input value as a mapping (dict), and + # update from it. 
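+                # For example (illustrative): CubeAttrsDict({'history': 'h', 'x': 1})
+                # routes 'history' into .globals and 'x' into .locals, via the
+                # default classification rules of __setitem__.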
+ self.update(dict(deepcopy(combined))) + + # + # Ensure that the stored local/global dictionaries are "LimitedAttributeDicts". + # + @staticmethod + def _normalise_attrs( + attributes: Optional[Mapping], + ) -> LimitedAttributeDict: + # Convert an input attributes arg into a standard form. + # N.B. content is always a LimitedAttributeDict, and a deep copy of input. + # Allow arg of None, etc. + if not attributes: + attributes = {} + else: + attributes = deepcopy(attributes) + + # Ensure the expected mapping type. + attributes = LimitedAttributeDict(attributes) + return attributes + + @property + def locals(self) -> LimitedAttributeDict: + return self._locals + + @locals.setter + def locals(self, attributes: Optional[Mapping]): + self._locals = self._normalise_attrs(attributes) + + @property + def globals(self) -> LimitedAttributeDict: + return self._globals + + @globals.setter + def globals(self, attributes: Optional[Mapping]): + self._globals = self._normalise_attrs(attributes) + + # + # Provide a serialisation interface + # + def __getstate__(self): + return (self.locals, self.globals) + + def __setstate__(self, state): + self.locals, self.globals = state + + # + # Support comparison -- required because default operation only compares a single + # value at each key. + # + def __eq__(self, other): + # For equality, require both globals + locals to match exactly. + # NOTE: array content works correctly, since 'locals' and 'globals' are always + # iris.common.mixin.LimitedAttributeDict, which gets this right. + other = CubeAttrsDict(other) + result = self.locals == other.locals and self.globals == other.globals + return result + + # + # Provide methods duplicating those for a 'dict', but which are *not* provided by + # MutableMapping, for compatibility with code which expected a cube.attributes to be + # a :class:`~iris.common.mixin.LimitedAttributeDict`. + # The extra required methods are : + # 'copy', 'update', '__ior__', '__or__', '__ror__' and 'fromkeys'. + # + def copy(self): + """ + Return a copy. + + Implemented with deep copying, consistent with general Iris usage. + + """ + return CubeAttrsDict(self) + + def update(self, *args, **kwargs): + """ + Update by adding items from a mapping arg, or keyword-values. + + If the argument is a split dictionary, preserve the local/global nature of its + keys. + """ + if args and hasattr(args[0], "globals") and hasattr(args[0], "locals"): + dic = args[0] + self.globals.update(dic.globals) + self.locals.update(dic.locals) + else: + super().update(*args) + super().update(**kwargs) + + def __or__(self, arg): + """Implement 'or' via 'update'.""" + if not isinstance(arg, Mapping): + return NotImplemented + new_dict = self.copy() + new_dict.update(arg) + return new_dict + + def __ior__(self, arg): + """Implement 'ior' via 'update'.""" + self.update(arg) + return self + + def __ror__(self, arg): + """ + Implement 'ror' via 'update'. + + This needs to promote, such that the result is a CubeAttrsDict. + """ + if not isinstance(arg, Mapping): + return NotImplemented + result = CubeAttrsDict(arg) + result.update(self) + return result + + @classmethod + def fromkeys(cls, iterable, value=None): + """ + Create a new object with keys taken from an argument, all set to one value. + + If the argument is a split dictionary, preserve the local/global nature of its + keys. 
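+
+        For example, a minimal sketch::
+
+            >>> CubeAttrsDict.fromkeys(
+            ...     CubeAttrsDict(globals={'history': 'h'}, locals={'x': 1}), 0
+            ... )
+            CubeAttrsDict(globals={'history': 0}, locals={'x': 0})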
+ """ + if hasattr(iterable, "globals") and hasattr(iterable, "locals"): + # When main input is a split-attrs dict, create global/local parts from its + # global/local keys + result = cls( + globals=dict.fromkeys(iterable.globals, value), + locals=dict.fromkeys(iterable.locals, value), + ) + else: + # Create from a dict.fromkeys, using default classification of the keys. + result = cls(dict.fromkeys(iterable, value)) + return result + + # + # The remaining methods are sufficient to generate a complete standard Mapping + # API. See - + # https://docs.python.org/3/reference/datamodel.html#emulating-container-types. + # + + def __iter__(self): + """ + Define the combined iteration order. + + Result is: all global keys, then all local ones, but omitting duplicates. + + """ + # NOTE: this means that in the "summary" view, attributes present in both + # locals+globals are listed first, amongst the globals, even though they appear + # with the *value* from locals. + # Otherwise follows order of insertion, as is normal for dicts. + return itertools.chain( + self.globals.keys(), + (x for x in self.locals.keys() if x not in self.globals), + ) + + def __len__(self): + # Return the number of keys in the 'combined' view. + return len(list(iter(self))) + + def __getitem__(self, key): + """ + Fetch an item from the "combined attributes". + + If the name is present in *both* ``self.locals`` and ``self.globals``, then + the local value is returned. + + """ + if key in self.locals: + store = self.locals + else: + store = self.globals + return store[key] + + def __setitem__(self, key, value): + """ + Assign an attribute value. + + This may be assigned in either ``self.locals`` or ``self.globals``, chosen as + follows: + + * If there is an existing setting in either ``.locals`` or ``.globals``, then + that is updated (i.e. overwritten). + + * If it is present in *both*, only + ``.locals`` is updated. + + * If there is *no* existing attribute, it is usually created in ``.locals``. + **However** a handful of "known normally global" cases, as defined by CF, + go into ``.globals`` instead. + At present these are : ('conventions', 'featureType', 'history', 'title'). + See `CF Conventions, Appendix A: `_ . + + """ + # If an attribute of this name is already present, update that + # (the local one having priority). + if key in self.locals: + store = self.locals + elif key in self.globals: + store = self.globals + else: + # If NO existing attribute, create local unless it is a "known global" one. + from iris.fileformats.netcdf.saver import _CF_GLOBAL_ATTRS + + if key in _CF_GLOBAL_ATTRS: + store = self.globals + else: + store = self.locals + + store[key] = value + + def __delitem__(self, key): + """ + Remove an attribute. + + Delete from both local + global. + + """ + if key in self.locals: + del self.locals[key] + if key in self.globals: + del self.globals[key] + + def __str__(self): + # Print it just like a "normal" dictionary. + # Convert to a normal dict to do that. + return str(dict(self)) + + def __repr__(self): + # Special repr form, showing "real" contents. + return f"CubeAttrsDict(globals={self.globals}, locals={self.locals})" + + class Cube(CFVariableMixin): """ A single Iris cube of data and metadata. @@ -985,8 +1341,8 @@ def __init__( self.cell_methods = cell_methods - #: A dictionary, with a few restricted keys, for arbitrary - #: Cube metadata. + #: A dictionary for arbitrary Cube metadata. + #: A few keys are restricted - see :class:`CubeAttrsDict`. 
self.attributes = attributes # Coords @@ -1044,6 +1400,22 @@ def _names(self): """ return self._metadata_manager._names + # + # Ensure that .attributes is always a :class:`CubeAttrsDict`. + # + @property + def attributes(self) -> CubeAttrsDict: + return super().attributes + + @attributes.setter + def attributes(self, attributes: Optional[Mapping]): + """ + An override to CfVariableMixin.attributes.setter, which ensures that Cube + attributes are stored in a way which distinguishes global + local ones. + + """ + self._metadata_manager.attributes = CubeAttrsDict(attributes or {}) + def _dimensional_metadata(self, name_or_dimensional_metadata): """ Return a single _DimensionalMetadata instance that matches the given diff --git a/lib/iris/fileformats/_nc_load_rules/helpers.py b/lib/iris/fileformats/_nc_load_rules/helpers.py index 71e59feda0..7044b3a993 100644 --- a/lib/iris/fileformats/_nc_load_rules/helpers.py +++ b/lib/iris/fileformats/_nc_load_rules/helpers.py @@ -482,9 +482,9 @@ def build_cube_metadata(engine): # Set the cube global attributes. for attr_name, attr_value in cf_var.cf_group.global_attributes.items(): try: - cube.attributes[str(attr_name)] = attr_value + cube.attributes.globals[str(attr_name)] = attr_value except ValueError as e: - msg = "Skipping global attribute {!r}: {}" + msg = "Skipping disallowed global attribute {!r}: {}" warnings.warn( msg.format(attr_name, str(e)), category=_WarnComboIgnoringLoad, diff --git a/lib/iris/fileformats/netcdf/loader.py b/lib/iris/fileformats/netcdf/loader.py index c07b6af5f4..eea0e9a2ac 100644 --- a/lib/iris/fileformats/netcdf/loader.py +++ b/lib/iris/fileformats/netcdf/loader.py @@ -11,7 +11,12 @@ Also : `CF Conventions `_. """ -from collections.abc import Iterable +from collections.abc import Iterable, Mapping +from contextlib import contextmanager +from copy import deepcopy +from enum import Enum, auto +import threading +from typing import Union import warnings import numpy as np @@ -167,8 +172,13 @@ def attribute_predicate(item): return item[0] not in _CF_ATTRS tmpvar = filter(attribute_predicate, cf_var.cf_attrs_unused()) + attrs_dict = iris_object.attributes + if hasattr(attrs_dict, "locals"): + # Treat cube attributes (i.e. a CubeAttrsDict) as a special case. + # These attrs are "local" (i.e. on the variable), so record them as such. + attrs_dict = attrs_dict.locals for attr_name, attr_value in tmpvar: - _set_attributes(iris_object.attributes, attr_name, attr_value) + _set_attributes(attrs_dict, attr_name, attr_value) def _get_actual_dtype(cf_var): @@ -199,6 +209,7 @@ def _get_cf_var_data(cf_var, filename): unnecessarily slow + wasteful of memory. """ + global CHUNK_CONTROL if hasattr(cf_var, "_data_array"): # The variable is not an actual netCDF4 file variable, but an emulating # object with an attached data array (either numpy or dask), which can be @@ -215,6 +226,8 @@ def _get_cf_var_data(cf_var, filename): else: # Get lazy chunked data out of a cf variable. + # Creates Dask wrappers around data arrays for any cube components which + # can have lazy values, e.g. Cube, Coord, CellMeasure, AuxiliaryVariable. dtype = _get_actual_dtype(cf_var) # Make a data-proxy that mimics array access and can fetch from the file. @@ -228,21 +241,59 @@ def _get_cf_var_data(cf_var, filename): ) # Get the chunking specified for the variable : this is either a shape, or # maybe the string "contiguous". - chunks = cf_var.cf_data.chunking() - # In the "contiguous" case, pass chunks=None to 'as_lazy_data'. 
- if chunks == "contiguous": - chunks = None - - # Return a dask array providing deferred access. - result = as_lazy_data(proxy, chunks=chunks) - + if CHUNK_CONTROL.mode is ChunkControl.Modes.AS_DASK: + result = as_lazy_data(proxy, chunks=None, dask_chunking=True) + else: + chunks = cf_var.cf_data.chunking() + # In the "contiguous" case, pass chunks=None to 'as_lazy_data'. + if chunks == "contiguous": + if ( + CHUNK_CONTROL.mode is ChunkControl.Modes.FROM_FILE + and isinstance( + cf_var, iris.fileformats.cf.CFDataVariable + ) + ): + raise KeyError( + f"{cf_var.cf_name} does not contain pre-existing chunk specifications." + f" Instead, you might wish to use CHUNK_CONTROL.set(), or just use default" + f" behaviour outside of a context manager. " + ) + # Equivalent to chunks=None, but value required by chunking control + chunks = list(cf_var.shape) + + # Modify the chunking in the context of an active chunking control. + # N.B. settings specific to this named var override global ('*') ones. + dim_chunks = CHUNK_CONTROL.var_dim_chunksizes.get( + cf_var.cf_name + ) or CHUNK_CONTROL.var_dim_chunksizes.get("*") + dims = cf_var.cf_data.dimensions + if CHUNK_CONTROL.mode is ChunkControl.Modes.FROM_FILE: + dims_fixed = np.ones(len(dims), dtype=bool) + elif not dim_chunks: + dims_fixed = None + else: + # Modify the chunks argument, and pass in a list of 'fixed' dims, for + # any of our dims which are controlled. + dims_fixed = np.zeros(len(dims), dtype=bool) + for i_dim, dim_name in enumerate(dims): + dim_chunksize = dim_chunks.get(dim_name) + if dim_chunksize: + if dim_chunksize == -1: + chunks[i_dim] = cf_var.shape[i_dim] + else: + chunks[i_dim] = dim_chunksize + dims_fixed[i_dim] = True + if dims_fixed is None: + dims_fixed = [dims_fixed] + result = as_lazy_data( + proxy, chunks=chunks, dims_fixed=tuple(dims_fixed) + ) return result class _OrderedAddableList(list): """ A custom container object for actions recording. - Used purely in actions debugging, to accumulate a record of which actions were activated. @@ -265,6 +316,18 @@ def add(self, msg): def _load_cube(engine, cf, cf_var, filename): + global CHUNK_CONTROL + + # Translate dimension chunk-settings specific to this cube (i.e. named by + # it's data-var) into global ones, for the duration of this load. + # Thus, by default, we will create any AuxCoords, CellMeasures et al with + # any per-dimension chunksizes specified for the cube. + these_settings = CHUNK_CONTROL.var_dim_chunksizes.get(cf_var.cf_name, {}) + with CHUNK_CONTROL.set(**these_settings): + return _load_cube_inner(engine, cf, cf_var, filename) + + +def _load_cube_inner(engine, cf, cf_var, filename): from iris.cube import Cube """Create the cube associated with the CF-netCDF data variable.""" @@ -606,3 +669,168 @@ def load_cubes(file_sources, callback=None, constraints=None): continue yield cube + + +class ChunkControl(threading.local): + class Modes(Enum): + DEFAULT = auto() + FROM_FILE = auto() + AS_DASK = auto() + + def __init__(self, var_dim_chunksizes=None): + """ + Provide user control of Dask chunking. + + The NetCDF loader is controlled by the single instance of this: the + :data:`~iris.fileformats.netcdf.loader.CHUNK_CONTROL` object. + + A chunk size can be set for a specific (named) file dimension, when + loading specific (named) variables, or for all variables. 
+ + When a selected variable is a CF data-variable, which loads as a + :class:`~iris.cube.Cube`, then the given dimension chunk size is *also* + fixed for all variables which are components of that :class:`~iris.cube.Cube`, + i.e. any :class:`~iris.coords.Coord`, :class:`~iris.coords.CellMeasure`, + :class:`~iris.coords.AncillaryVariable` etc. + This can be overridden, if required, by variable-specific settings. + + For this purpose, :class:`~iris.experimental.ugrid.mesh.MeshCoord` and + :class:`~iris.experimental.ugrid.mesh.Connectivity` are not + :class:`~iris.cube.Cube` components, and chunk control on a + :class:`~iris.cube.Cube` data-variable will not affect them. + + """ + self.var_dim_chunksizes = var_dim_chunksizes or {} + self.mode = self.Modes.DEFAULT + + @contextmanager + def set( + self, + var_names: Union[str, Iterable[str]] = None, + **dimension_chunksizes: Mapping[str, int], + ) -> None: + """ + Control the Dask chunk sizes applied to NetCDF variables during loading. + + Parameters + ---------- + var_names : str or list of str, default=None + apply the `dimension_chunksizes` controls only to these variables, + or when building :class:`~iris.cube.Cube`\\ s from these data variables. + If ``None``, settings apply to all loaded variables. + dimension_chunksizes : dict of {str: int} + Kwargs specifying chunksizes for dimensions of file variables. + Each key-value pair defines a chunk size for a named file + dimension, e.g. ``{'time': 10, 'model_levels':1}``. + Values of ``-1`` will lock the chunk size to the full size of that + dimension. + + Notes + ----- + This function acts as a context manager, for use in a ``with`` block. + + >>> import iris + >>> from iris.fileformats.netcdf.loader import CHUNK_CONTROL + >>> with CHUNK_CONTROL.set("air_temperature", time=180, latitude=-1): + ... cube = iris.load(iris.sample_data_path("E1_north_america.nc"))[0] + + When `var_names` is present, the chunk size adjustments are applied + only to the selected variables. However, for a CF data variable, this + extends to all components of the (raw) :class:`~iris.cube.Cube` created + from it. + + **Un**-adjusted dimensions have chunk sizes set in the 'usual' way. + That is, according to the normal behaviour of + :func:`iris._lazy_data.as_lazy_data`, which is: chunk size is based on + the file variable chunking, or full variable shape; this is scaled up + or down by integer factors to best match the Dask default chunk size, + i.e. the setting configured by + ``dask.config.set({'array.chunk-size': '250MiB'})``. + + """ + old_mode = self.mode + old_var_dim_chunksizes = deepcopy(self.var_dim_chunksizes) + if var_names is None: + var_names = ["*"] + elif isinstance(var_names, str): + var_names = [var_names] + try: + for var_name in var_names: + # Note: here we simply treat '*' as another name. + # A specific name match should override a '*' setting, but + # that is implemented elsewhere. + if not isinstance(var_name, str): + msg = ( + "'var_names' should be an iterable of strings, " + f"not {var_names!r}." + ) + raise ValueError(msg) + dim_chunks = self.var_dim_chunksizes.setdefault(var_name, {}) + for dim_name, chunksize in dimension_chunksizes.items(): + if not ( + isinstance(dim_name, str) + and isinstance(chunksize, int) + ): + msg = ( + "'dimension_chunksizes' kwargs should be a dict " + f"of `str: int` pairs, not {dimension_chunksizes!r}." 
+ ) + raise ValueError(msg) + dim_chunks[dim_name] = chunksize + yield + finally: + self.var_dim_chunksizes = old_var_dim_chunksizes + self.mode = old_mode + + @contextmanager + def from_file(self) -> None: + """ + Ensures the chunk sizes are loaded in from NetCDF file variables. + + Raises + ------ + KeyError + If any NetCDF data variables - those that become + :class:`~iris.cube.Cube`\\ s - do not specify chunk sizes. + + Notes + ----- + This function acts as a context manager, for use in a ``with`` block. + """ + old_mode = self.mode + old_var_dim_chunksizes = deepcopy(self.var_dim_chunksizes) + try: + self.mode = self.Modes.FROM_FILE + yield + finally: + self.mode = old_mode + self.var_dim_chunksizes = old_var_dim_chunksizes + + @contextmanager + def as_dask(self) -> None: + """ + Relies on Dask :external+dask:doc:`array` to control chunk sizes. + + Notes + ----- + This function acts as a context manager, for use in a ``with`` block. + """ + old_mode = self.mode + old_var_dim_chunksizes = deepcopy(self.var_dim_chunksizes) + try: + self.mode = self.Modes.AS_DASK + yield + finally: + self.mode = old_mode + self.var_dim_chunksizes = old_var_dim_chunksizes + + +# Note: the CHUNK_CONTROL object controls chunk sizing in the +# :meth:`_get_cf_var_data` method. +# N.B. :meth:`_load_cube` also modifies this when loading each cube, +# introducing an additional context in which any cube-specific settings are +# 'promoted' into being global ones. + +#: The global :class:`ChunkControl` object providing user-control of Dask chunking +#: when Iris loads NetCDF files. +CHUNK_CONTROL: ChunkControl = ChunkControl() diff --git a/lib/iris/fileformats/netcdf/saver.py b/lib/iris/fileformats/netcdf/saver.py index 895ceb60e7..c2f82537e6 100644 --- a/lib/iris/fileformats/netcdf/saver.py +++ b/lib/iris/fileformats/netcdf/saver.py @@ -29,6 +29,7 @@ from dask.delayed import Delayed import numpy as np +from iris._deprecation import warn_deprecated from iris._lazy_data import _co_realise_lazy_arrays, is_lazy_data from iris.aux_factory import ( AtmosphereSigmaFactory, @@ -559,6 +560,11 @@ def write( An interable of cube attribute keys. Any cube attributes with matching keys will become attributes on the data variable rather than global attributes. + + .. Note:: + + Has no effect if :attr:`iris.FUTURE.save_split_attrs` is ``True``. + unlimited_dimensions : iterable of str and/or :class:`iris.coords.Coord` List of coordinate names (or coordinate objects) corresponding to coordinate dimensions of `cube` to save with the @@ -641,6 +647,9 @@ def write( 3 files that do not use HDF5. """ + # TODO: when iris.FUTURE.save_split_attrs defaults to True, we can deprecate the + # "local_keys" arg, and finally remove it when we finally remove the + # save_split_attrs switch. if unlimited_dimensions is None: unlimited_dimensions = [] @@ -717,20 +726,23 @@ def write( # aux factory in the cube. self._add_aux_factories(cube, cf_var_cube, cube_dimensions) - # Add data variable-only attribute names to local_keys. - if local_keys is None: - local_keys = set() - else: - local_keys = set(local_keys) - local_keys.update(_CF_DATA_ATTRS, _UKMO_DATA_ATTRS) - - # Add global attributes taking into account local_keys. - global_attributes = { - k: v - for k, v in cube.attributes.items() - if (k not in local_keys and k.lower() != "conventions") - } - self.update_global_attributes(global_attributes) + if not iris.FUTURE.save_split_attrs: + # In the "old" way, we update global attributes as we go. 
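+            # (Whereas in the "new" way, the 'save' function gathers a single
+            # set of global attributes from all the input cubes, and writes
+            # them just once, after all the cubes have been written.)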
+            # Add data variable-only attribute names to local_keys.
+            if local_keys is None:
+                local_keys = set()
+            else:
+                local_keys = set(local_keys)
+            local_keys.update(_CF_DATA_ATTRS, _UKMO_DATA_ATTRS)
+
+            # Add global attributes taking into account local_keys.
+            cube_attributes = cube.attributes
+            global_attributes = {
+                k: v
+                for k, v in cube_attributes.items()
+                if (k not in local_keys and k.lower() != "conventions")
+            }
+            self.update_global_attributes(global_attributes)
 
         if cf_profile_available:
             cf_patch = iris.site_configuration.get("cf_patch")
@@ -788,6 +800,9 @@ def update_global_attributes(self, attributes=None, **kwargs):
         attributes: dict or iterable of key, value pairs
             CF global attributes to be updated.
         """
+        # TODO: when iris.FUTURE.save_split_attrs is removed, this routine will
+        # only be called once: it can reasonably be renamed "_set_global_attributes",
+        # and the 'kwargs' argument can be removed.
         if attributes is not None:
             # Handle sequence e.g. [('fruit', 'apple'), ...].
             if not hasattr(attributes, "keys"):
@@ -2266,6 +2281,9 @@ def _create_cf_data_variable(
         """
         Create CF-netCDF data variable for the cube and any associated grid
         mapping.
 
+        # TODO: when iris.FUTURE.save_split_attrs is removed, the 'local_keys' arg can
+        # be removed.
+
         Parameters
         ----------
         cube: :class:`iris.cube.Cube`
@@ -2287,6 +2305,8 @@ def _create_cf_data_variable(
             The newly created CF-netCDF data variable.
         """
+        # TODO: when iris.FUTURE.save_split_attrs is removed, the 'local_keys' arg can
+        # be removed.
         # Get the values in a form which is valid for the file format.
         data = self._ensure_valid_dtype(cube.core_data(), "cube", cube)
@@ -2375,16 +2395,20 @@ def set_packing_ncattrs(cfvar):
         if cube.units.calendar:
             _setncattr(cf_var, "calendar", cube.units.calendar)
 
-        # Add data variable-only attribute names to local_keys.
-        if local_keys is None:
-            local_keys = set()
+        if iris.FUTURE.save_split_attrs:
+            attr_names = cube.attributes.locals.keys()
         else:
-            local_keys = set(local_keys)
-        local_keys.update(_CF_DATA_ATTRS, _UKMO_DATA_ATTRS)
+            # Add data variable-only attribute names to local_keys.
+            if local_keys is None:
+                local_keys = set()
+            else:
+                local_keys = set(local_keys)
+            local_keys.update(_CF_DATA_ATTRS, _UKMO_DATA_ATTRS)
+
+            # Add any cube attributes whose keys are in local_keys as
+            # CF-netCDF data variable attributes.
+            attr_names = set(cube.attributes).intersection(local_keys)
 
-        # Add any cube attributes whose keys are in local_keys as
-        # CF-netCDF data variable attributes.
-        attr_names = set(cube.attributes).intersection(local_keys)
         for attr_name in sorted(attr_names):
             # Do not output 'conventions' attribute.
             if attr_name.lower() == "conventions":
@@ -2672,9 +2696,15 @@ def save(
     Save cube(s) to a netCDF file, given the cube and the filename.
 
     * Iris will write CF 1.7 compliant NetCDF files.
-    * The attributes dictionaries on each cube in the saved cube list
-      will be compared and common attributes saved as NetCDF global
-      attributes where appropriate.
+    * **If split-attribute saving is disabled**, i.e.
+      :data:`iris.FUTURE`\\ ``.save_split_attrs`` is ``False``, then attributes
+      dictionaries on each cube in the saved cube list will be compared, and common
+      attributes saved as NetCDF global attributes where appropriate.
+
+      Or, **when split-attribute saving is enabled**, then ``cube.attributes.locals``
+      are always saved as attributes of data-variables, and ``cube.attributes.globals``
+      are saved as global (dataset) attributes, where possible.
+      Since the 2 types are now distinguished : see :class:`~iris.cube.CubeAttrsDict`.
     * Keyword arguments specifying how to save the data are applied
       to each cube. To use different settings for different cubes, use
       the NetCDF Context manager (:class:`~Saver`) directly.
@@ -2703,6 +2733,11 @@ def save(
         An interable of cube attribute keys. Any cube attributes with matching
         keys will become attributes on the data variable rather
         than global attributes.
+
+        .. note::
+            This is *ignored* if 'split-attribute saving' is **enabled**,
+            i.e. when ``iris.FUTURE.save_split_attrs`` is ``True``.
+
     unlimited_dimensions: iterable of str and/or :class:`iris.coords.Coord` objects, optional
         List of coordinate names (or coordinate objects) corresponding
         to coordinate dimensions of `cube` to save with the NetCDF dimension
@@ -2832,26 +2867,127 @@ def save(
     else:
         cubes = cube
 
-    if local_keys is None:
+    # Decide which cube attributes will be saved as "global" attributes
+    # NOTE: in 'legacy' mode, when iris.FUTURE.save_split_attrs == False, this code
+    # section derives a common value for 'local_keys', which is passed to 'Saver.write'
+    # when saving each input cube.  The global attributes are then created by a call
+    # to "Saver.update_global_attributes" within each 'Saver.write' call (which is
+    # obviously a bit redundant!), plus an extra one to add 'Conventions'.
+    # HOWEVER, in `split_attrs` mode (iris.FUTURE.save_split_attrs == True), this code
+    # instead constructs a 'global_attributes' dictionary, and outputs that just once,
+    # after writing all the input cubes.
+    if iris.FUTURE.save_split_attrs:
+        # We don't actually use 'local_keys' in this case.
+        # TODO: can remove this when iris.FUTURE.save_split_attrs is removed.
         local_keys = set()
+
+        # Find any collisions in the cube global attributes and "demote" all those to
+        # local attributes (where possible, else warn they are lost).
+        # N.B. "collision" includes when not all cubes *have* that attribute.
+        global_names = set()
+        for cube in cubes:
+            global_names |= set(cube.attributes.globals.keys())
+
+        # Find any global attributes which are not the same on *all* cubes.
+        def attr_values_equal(val1, val2):
+            # An equality test which also works when some values are numpy arrays (!)
+            # As done in :meth:`iris.common.mixin.LimitedAttributeDict.__eq__`.
+            match = val1 == val2
+            try:
+                match = bool(match)
+            except ValueError:
+                match = match.all()
+            return match
+
+        cube0 = cubes[0]
+        invalid_globals = set(
+            [
+                attrname
+                for attrname in global_names
+                if not all(
+                    attr_values_equal(
+                        cube.attributes.globals.get(attrname),
+                        cube0.attributes.globals.get(attrname),
+                    )
+                    for cube in cubes[1:]
+                )
+            ]
+        )
+
+        # Establish all the global attributes which we will write to the file (at end).
+        global_attributes = {
+            attr: cube0.attributes.globals.get(attr)
+            for attr in global_names - invalid_globals
+        }
+        if invalid_globals:
+            # Some cubes have different global attributes: modify cubes as required.
+            warnings.warn(
+                f"Saving the cube global attributes {sorted(invalid_globals)} as local "
+                "(i.e. data-variable) attributes, where possible, since they are not "
+                "the same on all input cubes.",
+                category=iris.exceptions.IrisSaveWarning,
+            )
+            cubes = cubes.copy()  # avoid modifying the actual input arg.
+            for i_cube in range(len(cubes)):
+                # We iterate over cube *index*, so we can replace the list entries
+                # with cube *copies* -- just to avoid changing our call args.
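+                # For example (illustrative): if one input cube has globals
+                # {'title': 'A'} and another {'title': 'B'}, 'title' cannot be a
+                # dataset attribute, so it is saved as a local (data-variable)
+                # attribute on each cube instead, where possible.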
+ cube = cubes[i_cube] + demote_attrs = set(cube.attributes.globals) & invalid_globals + if any(demote_attrs): + # Catch any demoted attrs where there is already a local version + blocked_attrs = demote_attrs & set(cube.attributes.locals) + if blocked_attrs: + warnings.warn( + f"Global cube attributes {sorted(blocked_attrs)} " + f'of cube "{cube.name()}" were not saved, overlaid ' + "by existing local attributes with the same names.", + category=iris.exceptions.IrisSaveWarning, + ) + demote_attrs -= blocked_attrs + if demote_attrs: + # This cube contains some 'demoted' global attributes. + # Replace input cube with a copy, so we can modify attributes. + cube = cube.copy() + cubes[i_cube] = cube + for attr in demote_attrs: + # move global to local + value = cube.attributes.globals.pop(attr) + cube.attributes.locals[attr] = value + else: - local_keys = set(local_keys) - - # Determine the attribute keys that are common across all cubes and - # thereby extend the collection of local_keys for attributes - # that should be attributes on data variables. - attributes = cubes[0].attributes - common_keys = set(attributes) - for cube in cubes[1:]: - keys = set(cube.attributes) - local_keys.update(keys.symmetric_difference(common_keys)) - common_keys.intersection_update(keys) - different_value_keys = [] - for key in common_keys: - if np.any(attributes[key] != cube.attributes[key]): - different_value_keys.append(key) - common_keys.difference_update(different_value_keys) - local_keys.update(different_value_keys) + # Legacy mode: calculate "local_keys" to control which attributes are local + # and which global. + # TODO: when iris.FUTURE.save_split_attrs is removed, this section can also be + # removed + message = ( + "Saving to netcdf with legacy-style attribute handling for backwards " + "compatibility.\n" + "This mode is deprecated since Iris 3.8, and will eventually be removed.\n" + "Please consider enabling the new split-attributes handling mode, by " + "setting 'iris.FUTURE.save_split_attrs = True'." + ) + warn_deprecated(message) + + if local_keys is None: + local_keys = set() + else: + local_keys = set(local_keys) + + # Determine the attribute keys that are common across all cubes and + # thereby extend the collection of local_keys for attributes + # that should be attributes on data variables. + attributes = cubes[0].attributes + common_keys = set(attributes) + for cube in cubes[1:]: + keys = set(cube.attributes) + local_keys.update(keys.symmetric_difference(common_keys)) + common_keys.intersection_update(keys) + different_value_keys = [] + for key in common_keys: + if np.any(attributes[key] != cube.attributes[key]): + different_value_keys.append(key) + common_keys.difference_update(different_value_keys) + local_keys.update(different_value_keys) def is_valid_packspec(p): """Only checks that the datatype is valid.""" @@ -2953,7 +3089,12 @@ def is_valid_packspec(p): warnings.warn(msg, category=iris.exceptions.IrisCfSaveWarning) # Add conventions attribute. - sman.update_global_attributes(Conventions=conventions) + if iris.FUTURE.save_split_attrs: + # In the "new way", we just create all the global attributes at once. + global_attributes["Conventions"] = conventions + sman.update_global_attributes(global_attributes) + else: + sman.update_global_attributes(Conventions=conventions) if compute: # No more to do, since we used Saver(compute=True). 
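
For reference, a minimal end-to-end sketch of the split-attribute saving
behaviour introduced above (illustrative only -- the attribute names and the
output path are invented for the example)::

    import iris
    from iris.cube import Cube

    iris.FUTURE.save_split_attrs = True

    cube = Cube([0], var_name="x")
    # 'globals' entries are written as NetCDF dataset attributes ...
    cube.attributes.globals["history"] = "produced by an example script"
    # ... while 'locals' entries become attributes of the data variable.
    cube.attributes.locals["mycode"] = 3
    iris.save(cube, "out.nc")
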
diff --git a/lib/iris/tests/experimental/regrid/test_regrid_area_weighted_rectilinear_src_and_grid.py b/lib/iris/tests/experimental/regrid/test_regrid_area_weighted_rectilinear_src_and_grid.py index 93b1a6d3e6..9190548b15 100644 --- a/lib/iris/tests/experimental/regrid/test_regrid_area_weighted_rectilinear_src_and_grid.py +++ b/lib/iris/tests/experimental/regrid/test_regrid_area_weighted_rectilinear_src_and_grid.py @@ -601,6 +601,20 @@ def test_circular_subset(self): @tests.skip_data def test_non_circular_subset(self): + """ + Test regridding behaviour when the source grid has circular latitude. + + This tests the specific case when the longitude coordinate of the + source grid has the `circular` attribute as `False` but otherwise spans + the full 360 degrees. + + Note: the previous behaviour was to always mask target cells when they + spanned the boundary of max/min longitude and `circular` was `False`, + however this has been changed so that such cells will only be masked + when there is a gap between max longitude and min longitude. In this + test these cells are expected to be unmasked and therefore the result + will be equal to the above test for circular longitudes. + """ src = iris.tests.stock.global_pp() src.coord("latitude").guess_bounds() src.coord("longitude").guess_bounds() @@ -619,9 +633,53 @@ def test_non_circular_subset(self): dest.add_dim_coord(dest_lat, 0) dest.add_dim_coord(dest_lon, 1) + res = regrid_area_weighted(src, dest) + self.assertArrayShapeStats(res, (40, 7), 285.653960, 15.212710) + + @tests.skip_data + def test__proper_non_circular_subset(self): + """ + Test regridding behaviour when the source grid has circular latitude. + + This tests the specific case when the longitude coordinate of the + source grid does not span the full 360 degrees. Target cells which span + the boundary of max/min longitude will contain a section which is out + of bounds from the source grid and are therefore expected to be masked. + """ + src = iris.tests.stock.global_pp() + src.coord("latitude").guess_bounds() + src.coord("longitude").guess_bounds() + src_lon_bounds = src.coord("longitude").bounds.copy() + # Leave a small gap between the first and last longitude value. + src_lon_bounds[0, 0] += 0.001 + src_lon = src.coord("longitude").copy( + points=src.coord("longitude").points, bounds=src_lon_bounds + ) + src.remove_coord("longitude") + src.add_dim_coord(src_lon, 1) + dest_lat = src.coord("latitude")[0:40] + dest_lon = iris.coords.DimCoord( + [-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0], + standard_name="longitude", + units="degrees", + coord_system=dest_lat.coord_system, + ) + # Note target grid (in -180 to 180) src in 0 to 360 + dest_lon.guess_bounds() + data = np.zeros((dest_lat.shape[0], dest_lon.shape[0])) + dest = iris.cube.Cube(data) + dest.add_dim_coord(dest_lat, 0) + dest.add_dim_coord(dest_lon, 1) + res = regrid_area_weighted(src, dest) self.assertArrayShapeStats(res, (40, 7), 285.550814, 15.190245) + # The target cells straddling the gap between min and max source + # longitude should be masked. 
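+        # (Column 3 is the 0.0-longitude target cell: the small gap introduced
+        # in the source bounds above falls within this cell's bounds, so the
+        # cell is only partially covered and is therefore masked.)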
+ expected_mask = np.zeros(res.shape) + expected_mask[:, 3] = 1 + assert np.array_equal(expected_mask, res.data.mask) + if __name__ == "__main__": tests.main() diff --git a/lib/iris/tests/integration/attrs_matrix_results_load.json b/lib/iris/tests/integration/attrs_matrix_results_load.json new file mode 100644 index 0000000000..a1d37708a9 --- /dev/null +++ b/lib/iris/tests/integration/attrs_matrix_results_load.json @@ -0,0 +1,1019 @@ +{ + "case_singlevar_localonly": { + "input": "G-La", + "localstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "G-La" + ] + }, + "globalstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "G-La" + ] + }, + "userstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "G-La" + ] + } + }, + "case_singlevar_globalonly": { + "input": "GaL-", + "localstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaL-" + ] + }, + "userstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaL-" + ] + } + }, + "case_singlevar_glsame": { + "input": "GaLa", + "localstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaLa" + ] + }, + "globalstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaLa" + ] + }, + "userstyle": { + "legacy": [ + "G-La" + ], + "newstyle": [ + "GaLa" + ] + } + }, + "case_singlevar_gldiffer": { + "input": "GaLb", + "localstyle": { + "legacy": [ + "G-Lb" + ], + "newstyle": [ + "GaLb" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lb" + ], + "newstyle": [ + "GaLb" + ] + }, + "userstyle": { + "legacy": [ + "G-Lb" + ], + "newstyle": [ + "GaLb" + ] + } + }, + "case_multivar_same_noglobal": { + "input": "G-Laa", + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + } + }, + "case_multivar_same_sameglobal": { + "input": "GaLaa", + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + } + }, + "case_multivar_same_diffglobal": { + "input": "GaLbb", + "localstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + }, + "userstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + } + }, + "case_multivar_differ_noglobal": { + "input": "G-Lab", + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + } + }, + "case_multivar_differ_diffglobal": { + "input": "GaLbc", + "localstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + }, + "userstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + } + }, + "case_multivar_differ_sameglobal": { + "input": "GaLab", + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + } + }, + "case_multivar_1none_noglobal": { + "input": "G-La-", + "localstyle": { + "legacy": [ + 
"G-La-" + ], + "newstyle": [ + "G-La-" + ] + }, + "globalstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-La-" + ] + }, + "userstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-La-" + ] + } + }, + "case_multivar_1none_diffglobal": { + "input": "GaLb-", + "localstyle": { + "legacy": [ + "G-Lba" + ], + "newstyle": [ + "GaLb-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lba" + ], + "newstyle": [ + "GaLb-" + ] + }, + "userstyle": { + "legacy": [ + "G-Lba" + ], + "newstyle": [ + "GaLb-" + ] + } + }, + "case_multivar_1none_sameglobal": { + "input": "GaLa-", + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_lnone": { + "input": [ + "GaL-", + "GaL-" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaL--" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaL--" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaL--" + ] + } + }, + "case_multisource_gsame_lallsame": { + "input": [ + "GaLa", + "GaLa" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLaa" + ] + } + }, + "case_multisource_gsame_l1same1none": { + "input": [ + "GaLa", + "GaL-" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_l1same1other": { + "input": [ + "GaLa", + "GaLb" + ], + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLab" + ] + } + }, + "case_multisource_gsame_lallother": { + "input": [ + "GaLb", + "GaLb" + ], + "localstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + }, + "userstyle": { + "legacy": [ + "G-Lbb" + ], + "newstyle": [ + "GaLbb" + ] + } + }, + "case_multisource_gsame_lalldiffer": { + "input": [ + "GaLb", + "GaLc" + ], + "localstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + }, + "userstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLbc" + ] + } + }, + "case_multisource_gnone_l1one1none": { + "input": [ + "G-La", + "G-L-" + ], + "localstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-La-" + ] + }, + "globalstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-La-" + ] + }, + "userstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-La-" + ] + } + }, + "case_multisource_gnone_l1one1same": { + "input": [ + "G-La", + "G-La" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-Laa" + ] + } + }, + "case_multisource_gnone_l1one1other": { + "input": [ + "G-La", + "G-Lb" + ], + "localstyle": { + 
"legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lab" + ] + } + }, + "case_multisource_g1none_lnone": { + "input": [ + "GaL-", + "G-L-" + ], + "localstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaL-" + ] + }, + "userstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaL-" + ] + } + }, + "case_multisource_g1none_l1same1none": { + "input": [ + "GaLa", + "G-L-" + ], + "localstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaLa" + ] + }, + "globalstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaLa" + ] + }, + "userstyle": { + "legacy": [ + "G-La-" + ], + "newstyle": [ + "G-L-", + "GaLa" + ] + } + }, + "case_multisource_g1none_l1none1same": { + "input": [ + "GaL-", + "G-La" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaL-" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaL-" + ] + } + }, + "case_multisource_g1none_l1diff1none": { + "input": [ + "GaLb", + "G-L-" + ], + "localstyle": { + "legacy": [ + "G-Lb-" + ], + "newstyle": [ + "G-L-", + "GaLb" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lb-" + ], + "newstyle": [ + "G-L-", + "GaLb" + ] + }, + "userstyle": { + "legacy": [ + "G-Lb-" + ], + "newstyle": [ + "G-L-", + "GaLb" + ] + } + }, + "case_multisource_g1none_l1none1diff": { + "input": [ + "GaL-", + "G-Lb" + ], + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lb", + "GaL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lb", + "GaL-" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "G-Lb", + "GaL-" + ] + } + }, + "case_multisource_g1none_lallsame": { + "input": [ + "GaLa", + "G-La" + ], + "localstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaLa" + ] + }, + "globalstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaLa" + ] + }, + "userstyle": { + "legacy": [ + "G-Laa" + ], + "newstyle": [ + "G-La", + "GaLa" + ] + } + }, + "case_multisource_g1none_lallother": { + "input": [ + "GaLc", + "G-Lc" + ], + "localstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "G-Lc", + "GaLc" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "G-Lc", + "GaLc" + ] + }, + "userstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "G-Lc", + "GaLc" + ] + } + }, + "case_multisource_gdiff_lnone": { + "input": [ + "GaL-", + "GbL-" + ], + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaL-", + "GbL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaL-", + "GbL-" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaL-", + "GbL-" + ] + } + }, + "case_multisource_gdiff_l1same1none": { + "input": [ + "GaLa", + "GbL-" + ], + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbL-" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbL-" + ] + } + }, + "case_multisource_gdiff_l1diff1none": { + "input": [ + "GaLb", + "GcL-" + ], + "localstyle": { + "legacy": [ + "G-Lbc" + ], + 
"newstyle": [ + "GaLb", + "GcL-" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLb", + "GcL-" + ] + }, + "userstyle": { + "legacy": [ + "G-Lbc" + ], + "newstyle": [ + "GaLb", + "GcL-" + ] + } + }, + "case_multisource_gdiff_lallsame": { + "input": [ + "GaLa", + "GbLb" + ], + "localstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbLb" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbLb" + ] + }, + "userstyle": { + "legacy": [ + "G-Lab" + ], + "newstyle": [ + "GaLa", + "GbLb" + ] + } + }, + "case_multisource_gdiff_lallother": { + "input": [ + "GaLc", + "GbLc" + ], + "localstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "GaLc", + "GbLc" + ] + }, + "globalstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "GaLc", + "GbLc" + ] + }, + "userstyle": { + "legacy": [ + "G-Lcc" + ], + "newstyle": [ + "GaLc", + "GbLc" + ] + } + } +} \ No newline at end of file diff --git a/lib/iris/tests/integration/attrs_matrix_results_roundtrip.json b/lib/iris/tests/integration/attrs_matrix_results_roundtrip.json new file mode 100644 index 0000000000..3446c7f312 --- /dev/null +++ b/lib/iris/tests/integration/attrs_matrix_results_roundtrip.json @@ -0,0 +1,983 @@ +{ + "case_singlevar_localonly": { + "input": "G-La", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "G-La" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "G-La" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "G-La" + ] + } + }, + "case_singlevar_globalonly": { + "input": "GaL-", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "GaL-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaL-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaL-" + ] + } + }, + "case_singlevar_glsame": { + "input": "GaLa", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "GaLa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaLa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaLa" + ] + } + }, + "case_singlevar_gldiffer": { + "input": "GaLb", + "localstyle": { + "unsplit": [ + "G-Lb" + ], + "split": [ + "GaLb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL-" + ], + "split": [ + "GaLb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL-" + ], + "split": [ + "GaLb" + ] + } + }, + "case_multivar_same_noglobal": { + "input": "G-Laa", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multivar_same_sameglobal": { + "input": "GaLaa", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLaa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + } + }, + "case_multivar_same_diffglobal": { + "input": "GaLbb", + "localstyle": { + "unsplit": [ + "G-Lbb" + ], + "split": [ + "GaLbb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + } + }, + "case_multivar_differ_noglobal": { + "input": "G-Lab", + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + 
"unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multivar_differ_diffglobal": { + "input": "GaLbc", + "localstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + } + }, + "case_multivar_differ_sameglobal": { + "input": "GaLab", + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + } + }, + "case_multivar_1none_noglobal": { + "input": "G-La-", + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multivar_1none_diffglobal": { + "input": "GaLb-", + "localstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + } + }, + "case_multivar_1none_sameglobal": { + "input": "GaLa-", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLa-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_lnone": { + "input": [ + "GaL-", + "GaL-" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaL--" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaL--" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaL--" + ] + } + }, + "case_multisource_gsame_lallsame": { + "input": [ + "GaLa", + "GaLa" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLaa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + } + }, + "case_multisource_gsame_l1same1none": { + "input": [ + "GaLa", + "GaL-" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLa-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_l1same1other": { + "input": [ + "GaLa", + "GaLb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + } + }, + "case_multisource_gsame_lallother": { + "input": [ + "GaLb", + "GaLb" + ], + "localstyle": { + "unsplit": [ + "G-Lbb" + ], + "split": [ + "GaLbb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + } + }, + "case_multisource_gsame_lalldiffer": { + "input": [ + "GaLb", + "GaLc" + ], + "localstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + } + }, + "case_multisource_gnone_l1one1none": { + "input": [ 
+ "G-La", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_gnone_l1one1same": { + "input": [ + "G-La", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_gnone_l1one1other": { + "input": [ + "G-La", + "G-Lb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_g1none_lnone": { + "input": [ + "GaL-", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_g1none_l1same1none": { + "input": [ + "GaLa", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_g1none_l1none1same": { + "input": [ + "GaL-", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_g1none_l1diff1none": { + "input": [ + "GaLb", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + } + }, + "case_multisource_g1none_l1none1diff": { + "input": [ + "GaL-", + "G-Lb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_g1none_lallsame": { + "input": [ + "GaLa", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_g1none_lallother": { + "input": [ + "GaLc", + "G-Lc" + ], + "localstyle": { + "unsplit": [ + "G-Lcc" + ], + "split": [ + "G-Lcc" + ] + }, + "globalstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + }, + "userstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + } + }, + "case_multisource_gdiff_lnone": { + "input": [ + "GaL-", + "GbL-" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_l1same1none": { + "input": [ + "GaLa", + 
"GbL-" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_l1diff1none": { + "input": [ + "GaLb", + "GcL-" + ], + "localstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + } + }, + "case_multisource_gdiff_lallsame": { + "input": [ + "GaLa", + "GbLb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_lallother": { + "input": [ + "GaLc", + "GbLc" + ], + "localstyle": { + "unsplit": [ + "G-Lcc" + ], + "split": [ + "G-Lcc" + ] + }, + "globalstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + }, + "userstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + } + } +} \ No newline at end of file diff --git a/lib/iris/tests/integration/attrs_matrix_results_save.json b/lib/iris/tests/integration/attrs_matrix_results_save.json new file mode 100644 index 0000000000..3446c7f312 --- /dev/null +++ b/lib/iris/tests/integration/attrs_matrix_results_save.json @@ -0,0 +1,983 @@ +{ + "case_singlevar_localonly": { + "input": "G-La", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "G-La" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "G-La" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "G-La" + ] + } + }, + "case_singlevar_globalonly": { + "input": "GaL-", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "GaL-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaL-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaL-" + ] + } + }, + "case_singlevar_glsame": { + "input": "GaLa", + "localstyle": { + "unsplit": [ + "G-La" + ], + "split": [ + "GaLa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaLa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL-" + ], + "split": [ + "GaLa" + ] + } + }, + "case_singlevar_gldiffer": { + "input": "GaLb", + "localstyle": { + "unsplit": [ + "G-Lb" + ], + "split": [ + "GaLb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL-" + ], + "split": [ + "GaLb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL-" + ], + "split": [ + "GaLb" + ] + } + }, + "case_multivar_same_noglobal": { + "input": "G-Laa", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multivar_same_sameglobal": { + "input": "GaLaa", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLaa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + } + }, + "case_multivar_same_diffglobal": { + "input": "GaLbb", + "localstyle": { + "unsplit": [ + "G-Lbb" + ], + "split": [ + "GaLbb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL--" + ], + 
"split": [ + "GaLbb" + ] + } + }, + "case_multivar_differ_noglobal": { + "input": "G-Lab", + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multivar_differ_diffglobal": { + "input": "GaLbc", + "localstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + } + }, + "case_multivar_differ_sameglobal": { + "input": "GaLab", + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + } + }, + "case_multivar_1none_noglobal": { + "input": "G-La-", + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multivar_1none_diffglobal": { + "input": "GaLb-", + "localstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lba" + ], + "split": [ + "GaLb-" + ] + } + }, + "case_multivar_1none_sameglobal": { + "input": "GaLa-", + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLa-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_lnone": { + "input": [ + "GaL-", + "GaL-" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaL--" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaL--" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaL--" + ] + } + }, + "case_multisource_gsame_lallsame": { + "input": [ + "GaLa", + "GaLa" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLaa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLaa" + ] + } + }, + "case_multisource_gsame_l1same1none": { + "input": [ + "GaLa", + "GaL-" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "GaLa-" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "GaLa-" + ] + } + }, + "case_multisource_gsame_l1same1other": { + "input": [ + "GaLa", + "GaLb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "GaLab" + ] + } + }, + "case_multisource_gsame_lallother": { + "input": [ + "GaLb", + "GaLb" + ], + "localstyle": { + "unsplit": [ + "G-Lbb" + ], + "split": [ + "GaLbb" + ] + }, + "globalstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + }, + "userstyle": { + "unsplit": [ + "GbL--" + ], + "split": [ + "GaLbb" + ] + } + }, + "case_multisource_gsame_lalldiffer": { + "input": [ + "GaLb", + "GaLc" + ], + "localstyle": { + 
"unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "GaLbc" + ] + } + }, + "case_multisource_gnone_l1one1none": { + "input": [ + "G-La", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_gnone_l1one1same": { + "input": [ + "G-La", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_gnone_l1one1other": { + "input": [ + "G-La", + "G-Lb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_g1none_lnone": { + "input": [ + "GaL-", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_g1none_l1same1none": { + "input": [ + "GaLa", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + }, + "userstyle": { + "unsplit": [ + "G-La-" + ], + "split": [ + "G-La-" + ] + } + }, + "case_multisource_g1none_l1none1same": { + "input": [ + "GaL-", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_g1none_l1diff1none": { + "input": [ + "GaLb", + "G-L-" + ], + "localstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lb-" + ], + "split": [ + "G-Lb-" + ] + } + }, + "case_multisource_g1none_l1none1diff": { + "input": [ + "GaL-", + "G-Lb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_g1none_lallsame": { + "input": [ + "GaLa", + "G-La" + ], + "localstyle": { + "unsplit": [ + "G-Laa" + ], + "split": [ + "G-Laa" + ] + }, + "globalstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + }, + "userstyle": { + "unsplit": [ + "GaL--" + ], + "split": [ + "G-Laa" + ] + } + }, + "case_multisource_g1none_lallother": { + "input": [ + "GaLc", + "G-Lc" + ], + "localstyle": { + "unsplit": [ + "G-Lcc" + ], + "split": [ + "G-Lcc" + ] + }, + "globalstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + }, + "userstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + } + }, + "case_multisource_gdiff_lnone": { + "input": [ + "GaL-", + "GbL-" + ], + "localstyle": { + "unsplit": [ 
+ "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_l1same1none": { + "input": [ + "GaLa", + "GbL-" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_l1diff1none": { + "input": [ + "GaLb", + "GcL-" + ], + "localstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lbc" + ], + "split": [ + "G-Lbc" + ] + } + }, + "case_multisource_gdiff_lallsame": { + "input": [ + "GaLa", + "GbLb" + ], + "localstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "globalstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + }, + "userstyle": { + "unsplit": [ + "G-Lab" + ], + "split": [ + "G-Lab" + ] + } + }, + "case_multisource_gdiff_lallother": { + "input": [ + "GaLc", + "GbLc" + ], + "localstyle": { + "unsplit": [ + "G-Lcc" + ], + "split": [ + "G-Lcc" + ] + }, + "globalstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + }, + "userstyle": { + "unsplit": [ + "GcL--" + ], + "split": [ + "G-Lcc" + ] + } + } +} \ No newline at end of file diff --git a/lib/iris/tests/integration/netcdf/test_delayed_save.py b/lib/iris/tests/integration/netcdf/test_delayed_save.py index d3f2ce22c4..177e9ce325 100644 --- a/lib/iris/tests/integration/netcdf/test_delayed_save.py +++ b/lib/iris/tests/integration/netcdf/test_delayed_save.py @@ -5,6 +5,7 @@ """ Integration tests for delayed saving. """ +import re import warnings from cf_units import Unit @@ -23,6 +24,13 @@ class Test__lazy_stream_data: + # Ensure all saves are done with split-attribute saving, + # because some of these tests are sensitive to unexpected warnings. + @pytest.fixture(autouse=True) + def all_saves_with_split_attrs(self): + with iris.FUTURE.context(save_split_attrs=True): + yield + @pytest.fixture(autouse=True) def output_path(self, tmp_path): # A temporary output netcdf-file path, **unique to each test call**. @@ -190,19 +198,36 @@ def test_scheduler_types( if not save_is_delayed: assert result is None - assert len(logged_warnings) == 2 issued_warnings = [log.message for log in logged_warnings] else: assert result is not None assert len(logged_warnings) == 0 - warnings.simplefilter("error") - issued_warnings = result.compute() + with warnings.catch_warnings(record=True) as logged_warnings: + # The compute *returns* warnings from the delayed operations. + issued_warnings = result.compute() + issued_warnings = [ + log.message for log in logged_warnings + ] + issued_warnings + + warning_messages = [warning.args[0] for warning in issued_warnings] + if scheduler_type == "DistributedScheduler": + # Ignore any "large data transfer" messages generated, + # specifically when testing with the Distributed scheduler. + # These may not always occur and don't reflect something we want to + # test for.
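+ # (N.B. re.DOTALL lets ".*" in the pattern below match across newlines, + # since these scheduler messages may span several lines.)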
+ large_transfer_message_regex = re.compile( + "Sending large graph.* may cause some slowdown", re.DOTALL + ) + warning_messages = [ + message + for message in warning_messages + if not large_transfer_message_regex.search(message) + ] - assert len(issued_warnings) == 2 + # In all cases, should get 2 fill value warnings overall. + assert len(warning_messages) == 2 expected_msg = "contains unmasked data points equal to the fill-value" - assert all( - expected_msg in warning.args[0] for warning in issued_warnings - ) + assert all(expected_msg in message for message in warning_messages) def test_time_of_writing( self, save_is_delayed, output_path, scheduler_type diff --git a/lib/iris/tests/integration/test_netcdf__loadsaveattrs.py b/lib/iris/tests/integration/test_netcdf__loadsaveattrs.py new file mode 100644 index 0000000000..b09b408827 --- /dev/null +++ b/lib/iris/tests/integration/test_netcdf__loadsaveattrs.py @@ -0,0 +1,1678 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. +""" +Integration tests for loading and saving netcdf file attributes. + +Notes: +(1) attributes in netCDF files can be either "global attributes", or variable +("local") attributes. + +(2) in CF terms, this testcode classifies specific attributes (names) as either +"global" = names recognised by convention as normally stored in a file-global +setting; "local" = recognised names specifying details of variable data +encoding, which only make sense as a "local" attribute (i.e. on a variable); +and "user" = any additional attributes *not* recognised in conventions, which +might be recorded either globally or locally. + +""" +import inspect +import json +import os +from pathlib import Path +import re +from typing import Iterable, List, Optional, Union +import warnings + +import numpy as np +import pytest + +import iris +import iris.coord_systems +from iris.coords import DimCoord +from iris.cube import Cube +import iris.fileformats.netcdf +import iris.fileformats.netcdf._thread_safe_nc as threadsafe_nc4 + +# First define the known controlled attribute names recognised by netCDF and CF conventions +# +# Note: certain attributes are "normally" global (e.g. "Conventions"), whilst others +# will only usually appear on a data-variable (e.g. "scale_factor", "coordinates"). +# I'm calling these 'global-style' and 'local-style'. +# Each attribute either belongs to one of these 2 groups, or to neither. Those 3 +# distinct types may then have different behaviour in Iris load + save. + +# A list of "global-style" attribute names : those which should be global attributes by +# default (i.e. file- or group-level, *not* attached to a variable). + +_GLOBAL_TEST_ATTRS = set(iris.fileformats.netcdf.saver._CF_GLOBAL_ATTRS) +# Remove this one, which has peculiar behaviour + is tested separately +# N.B. this is not the same as 'Conventions', but is caught in the crossfire when that +# one is processed. +_GLOBAL_TEST_ATTRS -= set(["conventions"]) +_GLOBAL_TEST_ATTRS = sorted(_GLOBAL_TEST_ATTRS) + + +# Define a fixture to parametrise tests over the 'global-style' test attributes. +# This just provides a more concise way of writing parametrised tests. +@pytest.fixture(params=_GLOBAL_TEST_ATTRS) +def global_attr(request): + # N.B. "request" is a standard PyTest fixture + return request.param # Return the name of the attribute to test.
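+ + +# For illustration (a sketch, not part of the test code): any test which takes +# 'global_attr' as an argument runs once per name in _GLOBAL_TEST_ATTRS, e.g. +# +# def test_example(self, global_attr): # hypothetical test name +# ... # the body sees one attribute name per run +# +# -- standard pytest parametrised-fixture behaviour.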
+ + +# A list of "local-style" attribute names : those which should be variable attributes +# by default (aka "local", "variable" or "data" attributes) . +_LOCAL_TEST_ATTRS = ( + iris.fileformats.netcdf.saver._CF_DATA_ATTRS + + iris.fileformats.netcdf.saver._UKMO_DATA_ATTRS +) + + +# Define a fixture to parametrise over the 'local-style' test attributes. +# This just provides a more concise way of writing parametrised tests. +@pytest.fixture(params=_LOCAL_TEST_ATTRS) +def local_attr(request): + # N.B. "request" is a standard PyTest fixture + return request.param # Return the name of the attribute to test. + + +# Define whether to parametrise over split-attribute saving +# Just for now, so that we can run against legacy code. +_SPLIT_SAVE_SUPPORTED = hasattr(iris.FUTURE, "save_split_attrs") +_SPLIT_PARAM_VALUES = [False, True] +_SPLIT_PARAM_IDS = ["nosplit", "split"] +_MATRIX_LOAD_RESULTSTYLES = ["legacy", "newstyle"] +if not _SPLIT_SAVE_SUPPORTED: + _SPLIT_PARAM_VALUES.remove(True) + _SPLIT_PARAM_IDS.remove("split") + _MATRIX_LOAD_RESULTSTYLES.remove("newstyle") + + +_SKIP_WARNCHECK = "_no_warnings_check" + + +def check_captured_warnings( + expected_keys: List[str], + captured_warnings: List[warnings.WarningMessage], + allow_possible_legacy_warning: bool = False, +): + """ + Compare captured warning messages with a list of regexp-matches. + + We allow them to occur in any order, and replace each actual result in the list + with its matching regexp, if any, as this makes failure results much easier to + comprehend. + + """ + # TODO: when iris.FUTURE.save_split_attrs is removed, we can remove the + # 'allow_possible_legacy_warning' arg. + + if expected_keys is None: + expected_keys = [] + elif hasattr(expected_keys, "upper"): + # Handle a single string + if expected_keys == _SKIP_WARNCHECK: + # No check at all in this case + return + expected_keys = [expected_keys] + + if allow_possible_legacy_warning: + # Allow but do not require a "saving without split-attributes" warning. + legacy_message_key = ( + "Saving to netcdf with legacy-style attribute handling for backwards " + "compatibility." + ) + expected_keys.append(legacy_message_key) + + expected_keys = [re.compile(key) for key in expected_keys] + found_results = [str(warning.message) for warning in captured_warnings] + remaining_keys = expected_keys.copy() + for i_message, message in enumerate(found_results.copy()): + for key in remaining_keys: + if key.search(message): + # Hit : replace one message in the list with its matching "key" + found_results[i_message] = key + # remove the matching key + remaining_keys.remove(key) + # skip on to next message + break + + if allow_possible_legacy_warning: + # Remove any unused "legacy attribute saving" key. + # N.B. this is the *only* key we will tolerate not being used. + expected_keys = [ + key for key in expected_keys if key != legacy_message_key + ] + + assert set(found_results) == set(expected_keys) + + +class MixinAttrsTesting: + @staticmethod + def _calling_testname(): + """ + Search up the callstack for a function named "test_*", and return the name for + use as a test identifier. + + Idea borrowed from :meth:`iris.tests.IrisTest.result_path`. + + Returns + ------- + test_name : str + Returns a string, with the initial "test_" removed. + """ + test_name = None + stack = inspect.stack() + for frame in stack[1:]: + full_name = frame[3] + if full_name.startswith("test_"): + # Return the name with the initial "test_" removed. 
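+ # (E.g. a test named "test_01_userstyle_single_global" is recorded + # here as "01_userstyle_single_global".)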
+ test_name = full_name.replace("test_", "") + break + # Search should not fail, unless we were called from an inappropriate place? + assert test_name is not None + return test_name + + @pytest.fixture(autouse=True) + def make_tempdir(self, tmp_path_factory): + """ + Automatically-run fixture to activate the 'tmp_path_factory' fixture on *every* + test: Make a directory for temporary files, and record it on the test instance. + + N.B. "tmp_path_factory" is a standard PyTest fixture, which provides a dirpath + *shared* by all tests. This is a bit quicker and more debuggable than having a + directory per-testcase. + """ + # Store the temporary directory path on the test instance + self.tmpdir = str(tmp_path_factory.getbasetemp()) + + def _testfile_path(self, basename: str) -> str: + # Make a filepath in the temporary directory, based on the name of the calling + # test method, and the "self.attrname" it sets up. + testname = self._calling_testname() + # Turn that into a suitable temporary filename + ext_name = getattr(self, "testname_extension", "") + if ext_name: + basename = basename + "_" + ext_name + path_str = f"{self.tmpdir}/{self.__class__.__name__}__test_{testname}-{self.attrname}__{basename}.nc" + return path_str + + @staticmethod + def _default_vars_and_attrvalues(vars_and_attrvalues): + # Simple default strategy : turn a simple value into {'var': value} + if not isinstance(vars_and_attrvalues, dict): + # Treat single non-dict argument as a value for a single variable + vars_and_attrvalues = {"var": vars_and_attrvalues} + return vars_and_attrvalues + + def create_testcase_files_or_cubes( + self, + attr_name: str, + global_value_file1: Optional[str] = None, + var_values_file1: Union[None, str, dict] = None, + global_value_file2: Optional[str] = None, + var_values_file2: Union[None, str, dict] = None, + cubes: bool = False, + ): + """ + Create temporary input netcdf files, or cubes, with specific content. + + Creates a temporary netcdf test file (or two) with the given global and + variable-local attributes. Or build cubes, similarly. + If ``cubes`` is ``True``, save cubes in ``self.input_cubes``. + Else save filepaths in ``self.input_filepaths``. + + Note: 'var_values_file' args are dictionaries. The named variables are + created, with an attribute = the dictionary value, *except* that a dictionary + value of None means that a local attribute is _not_ created on the variable. + """ + # save attribute on the instance + self.attrname = attr_name + + if not cubes: + # Make some input file paths. + filepath1 = self._testfile_path("testfile") + filepath2 = self._testfile_path("testfile2") + + def make_file( + filepath: str, global_value=None, var_values=None + ) -> str: + ds = threadsafe_nc4.DatasetWrapper(filepath, "w") + if global_value is not None: + ds.setncattr(attr_name, global_value) + ds.createDimension("x", 3) + # Rationalise the per-variable requirements + # N.B. this *always* makes at least one variable, as otherwise we would + # load no cubes. 
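+ # (E.g. a plain value "x" is expanded by the helper above to + # {"var": "x"}, i.e. a single variable named "var" carrying that + # attribute value.)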
+ var_values = self._default_vars_and_attrvalues(var_values) + for var_name, value in var_values.items(): + v = ds.createVariable(var_name, int, ("x",)) + if value is not None: + v.setncattr(attr_name, value) + ds.close() + return filepath + + def make_cubes(var_name, global_value=None, var_values=None): + cubes = [] + var_values = self._default_vars_and_attrvalues(var_values) + for varname, local_value in var_values.items(): + cube = Cube(np.arange(3.0), var_name=var_name) + cubes.append(cube) + dimco = DimCoord(np.arange(3.0), var_name="x") + cube.add_dim_coord(dimco, 0) + if not hasattr(cube.attributes, "globals"): + # N.B. For now, also support oldstyle "single" cube attribute + # dictionaries, so that we can generate legacy results to compare + # with the "new world" results. + single_value = global_value + if local_value is not None: + single_value = local_value + if single_value is not None: + cube.attributes[attr_name] = single_value + else: + if global_value is not None: + cube.attributes.globals[attr_name] = global_value + if local_value is not None: + cube.attributes.locals[attr_name] = local_value + return cubes + + if cubes: + results = make_cubes("v1", global_value_file1, var_values_file1) + if global_value_file2 is not None or var_values_file2 is not None: + results.extend( + make_cubes("v2", global_value_file2, var_values_file2) + ) + else: + results = [ + make_file(filepath1, global_value_file1, var_values_file1) + ] + if global_value_file2 is not None or var_values_file2 is not None: + # Make a second testfile and add it to files-to-be-loaded. + results.append( + make_file(filepath2, global_value_file2, var_values_file2) + ) + + # Save results on the instance + if cubes: + self.input_cubes = results + else: + self.input_filepaths = results + return results + + def run_testcase( + self, + attr_name: str, + values: Union[List, List[List]], + create_cubes_or_files: str = "files", + ) -> None: + """ + Create testcase inputs (files or cubes) with specified attributes. + + Parameters + ---------- + attr_name : str + name for all attributes created in this testcase. + Also saved as ``self.attrname``, as used by ``fetch_results``. + values : list + a list, or list of lists, of values for created attributes, each + containing one global and one-or-more local attribute values, as + [global, local1, local2...] + create_cubes_or_files : str, default "files" + create either cubes or testfiles. + + If ``create_cubes_or_files`` == "files", create one temporary netCDF file per + values-list, and record in ``self.input_filepaths``. + Else if ``create_cubes_or_files`` == "cubes", create sets of cubes with common + global values and store all of them to ``self.input_cubes``.
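+ + For example (an illustrative spec): ``values=["ga", "la1", "la2"]`` creates a + single input source with global attribute value "ga", and two variables + carrying local attribute values "la1" and "la2" respectively.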
+ + """ + # Save common attribute-name on the instance + self.attrname = attr_name + + # Standardise input to a list-of-lists, each inner list = [global, *locals] + assert isinstance(values, list) + if not isinstance(values[0], list): + values = [values] + assert len(values) in (1, 2) + assert len(values[0]) > 1 + + # Decode into global1, *locals1, and optionally global2, *locals2 + global1 = values[0][0] + vars1 = {} + i_var = 0 + for value in values[0][1:]: + vars1[f"var_{i_var}"] = value + i_var += 1 + if len(values) == 1: + global2 = None + vars2 = None + else: + assert len(values) == 2 + global2 = values[1][0] + vars2 = {} + for value in values[1][1:]: + vars2[f"var_{i_var}"] = value + i_var += 1 + + # Create test files or cubes (and store data on the instance) + assert create_cubes_or_files in ("cubes", "files") + make_cubes = create_cubes_or_files == "cubes" + self.create_testcase_files_or_cubes( + attr_name=attr_name, + global_value_file1=global1, + var_values_file1=vars1, + global_value_file2=global2, + var_values_file2=vars2, + cubes=make_cubes, + ) + + def fetch_results( + self, + filepath: str = None, + cubes: Iterable[Cube] = None, + oldstyle_combined: bool = False, + ): + """ + Return testcase results from an output file or cubes in a standardised form. + + Unpick the global+local values of the attribute ``self.attrname``, resulting + from a test operation. + A file result is always [global_value, *local_values] + A cubes result is [*[global_value, *local_values]] (over different global vals) + + When ``oldstyle_combined`` is ``True``, simulate the "legacy" style results, + that is when each cube had a single combined attribute dictionary. + This enables us to check against former behaviour, by combining results into a + single dictionary. N.B. per-cube single results are then returned in the form: + [None, cube1, cube2...]. + N.B. if results are from a *file*, this key has **no effect**. + + """ + attr_name = self.attrname + if filepath is not None: + # Fetch global and local values from a file + try: + ds = threadsafe_nc4.DatasetWrapper(filepath) + global_result = ( + ds.getncattr(attr_name) + if attr_name in ds.ncattrs() + else None + ) + # Fetch local attr value from all data variables : In our testcases, + # that is all *except* dimcoords (ones named after dimensions). + local_vars_results = [ + ( + var.name, + ( + var.getncattr(attr_name) + if attr_name in var.ncattrs() + else None + ), + ) + for var in ds.variables.values() + if var.name not in ds.dimensions + ] + finally: + ds.close() + # This version always returns a single result set [global, local1[, local2]] + # Return global, plus locals sorted by varname + local_vars_results = sorted(local_vars_results, key=lambda x: x[0]) + results = [global_result] + [val for _, val in local_vars_results] + else: + assert cubes is not None + # Sort result cubes according to a standard ordering. + cubes = sorted(cubes, key=lambda cube: cube.name()) + # Fetch globals and locals from cubes. + # This way returns *multiple* result 'sets', one for each global value + if oldstyle_combined or not _SPLIT_SAVE_SUPPORTED: + # Use all-combined dictionaries in place of actual cubes' attributes + cube_attr_dicts = [dict(cube.attributes) for cube in cubes] + # Return results as if all cubes had global=None + results = [ + [None] + + [ + cube_attr_dict.get(attr_name, None) + for cube_attr_dict in cube_attr_dicts + ] + ] + else: + # Return a result-set for each occurring global value (possibly + # including a 'None'). 
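+ # (E.g. cubes with global values "a" and None give two result-sets, + # roughly [[None, <locals>], ["a", <locals>]] -- ordered by str().)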
+ global_values = set( + cube.attributes.globals.get(attr_name, None) + for cube in cubes + ) + results = [ + [globalval] + + [ + cube.attributes.locals.get(attr_name, None) + for cube in cubes + if cube.attributes.globals.get(attr_name, None) + == globalval + ] + for globalval in sorted(global_values, key=str) + ] + return results + + +# Define all the testcases for different parameter input structures : +# - combinations of matching+differing, global+local params +# - these are interpreted differently for the 3 main test types : Load/Save/Roundtrip +_MATRIX_TESTCASE_INPUTS = { + "case_singlevar_localonly": "G-La", + "case_singlevar_globalonly": "GaL-", + "case_singlevar_glsame": "GaLa", + "case_singlevar_gldiffer": "GaLb", + "case_multivar_same_noglobal": "G-Laa", + "case_multivar_same_sameglobal": "GaLaa", + "case_multivar_same_diffglobal": "GaLbb", + "case_multivar_differ_noglobal": "G-Lab", + "case_multivar_differ_diffglobal": "GaLbc", + "case_multivar_differ_sameglobal": "GaLab", + "case_multivar_1none_noglobal": "G-La-", + "case_multivar_1none_diffglobal": "GaLb-", + "case_multivar_1none_sameglobal": "GaLa-", + # Note: the multi-set input cases are more complex. + # These are encoded as *pairs* of specs, for 2 different files, or cubes with + # independent global values. + # We assume that there can be nothing "special" about a var's interaction with + # another one from the same (as opposed to the "other") file. + "case_multisource_gsame_lnone": ["GaL-", "GaL-"], + "case_multisource_gsame_lallsame": ["GaLa", "GaLa"], + "case_multisource_gsame_l1same1none": ["GaLa", "GaL-"], + "case_multisource_gsame_l1same1other": ["GaLa", "GaLb"], + "case_multisource_gsame_lallother": ["GaLb", "GaLb"], + "case_multisource_gsame_lalldiffer": ["GaLb", "GaLc"], + "case_multisource_gnone_l1one1none": ["G-La", "G-L-"], + "case_multisource_gnone_l1one1same": ["G-La", "G-La"], + "case_multisource_gnone_l1one1other": ["G-La", "G-Lb"], + "case_multisource_g1none_lnone": ["GaL-", "G-L-"], + "case_multisource_g1none_l1same1none": ["GaLa", "G-L-"], + "case_multisource_g1none_l1none1same": ["GaL-", "G-La"], + "case_multisource_g1none_l1diff1none": ["GaLb", "G-L-"], + "case_multisource_g1none_l1none1diff": ["GaL-", "G-Lb"], + "case_multisource_g1none_lallsame": ["GaLa", "G-La"], + "case_multisource_g1none_lallother": ["GaLc", "G-Lc"], + "case_multisource_gdiff_lnone": ["GaL-", "GbL-"], + "case_multisource_gdiff_l1same1none": ["GaLa", "GbL-"], + "case_multisource_gdiff_l1diff1none": ["GaLb", "GcL-"], + "case_multisource_gdiff_lallsame": ["GaLa", "GbLb"], + "case_multisource_gdiff_lallother": ["GaLc", "GbLc"], +} +_MATRIX_TESTCASES = list(_MATRIX_TESTCASE_INPUTS.keys()) + +# +# Define the attrs against which all matrix tests are run +# +max_param_attrs = None +# max_param_attrs = 5 + +_MATRIX_ATTRNAMES = _LOCAL_TEST_ATTRS[:max_param_attrs] +_MATRIX_ATTRNAMES += _GLOBAL_TEST_ATTRS[:max_param_attrs] +_MATRIX_ATTRNAMES += ["user"] + +# remove special-cases, for now : all these behave irregularly (i.e. unlike the known +# "globalstyle", or "localstyle" generic cases). +# N.B. not including "Conventions", which is not in the globals list, so won't be +# matrix-tested unless we add it specifically. +# TODO: decide if any of these need to be tested, as separate test-styles. 
+_SPECIAL_ATTRS = [ + "ukmo__process_flags", + "missing_value", + "standard_error_multiplier", + "STASH", + "um_stash_source", +] +_MATRIX_ATTRNAMES = [ + attr for attr in _MATRIX_ATTRNAMES if attr not in _SPECIAL_ATTRS +] + + +# +# A routine to work "backwards" from an attribute name to its "style", i.e. type category. +# Possible styles are "globalstyle", "localstyle", "userstyle". +# +_ATTR_STYLES = ["localstyle", "globalstyle", "userstyle"] + + +def deduce_attr_style(attrname: str) -> str: + # Extract the attribute "style type" from an attr_param name + if attrname in _LOCAL_TEST_ATTRS: + style = "localstyle" + elif attrname in _GLOBAL_TEST_ATTRS: + style = "globalstyle" + else: + assert attrname == "user" + style = "userstyle" + return style + + +# +# Decode a matrix "input spec" to codes for global + local values. +# +def decode_matrix_input(input_spec): + # Decode a matrix-test input specification, like "GaLbc", into lists of values. + # E.G. "GaLbc" -> ["a", "b", "c"] + # ["GaLbc", "GbLbc"] -> [["a", "b", "c"], ["b", "b", "c"]] + # N.B. in this form "values" are all one-character strings. + def decode_specstring(spec: str) -> List[Union[str, None]]: + # Decode an input spec-string to input/output attribute values + assert spec[0] == "G" and spec[2] == "L" + allvals = spec[1] + spec[3:] + result = [None if valchar == "-" else valchar for valchar in allvals] + return result + + if isinstance(input_spec, str): + # Single-source spec (one cube or one file) + vals = decode_specstring(input_spec) + result = [vals] + else: + # Dual-source spec (two files, or sets of cubes with a common global value) + vals_A = decode_specstring(input_spec[0]) + vals_B = decode_specstring(input_spec[1]) + result = [vals_A, vals_B] + + return result + + +def encode_matrix_result(results: List[List[str]]) -> List[str]: + # Re-code a set of output results, [*[global-value, *local-values]] as a list of + # strings, like ["GaL-b"] or ["GaLabc", "GbLabc"]. + # N.B. again assuming that all values are just one-character strings, or None. + assert isinstance(results, Iterable) and len(results) >= 1 + if not isinstance(results[0], list): + results = [results] + assert all( + all(val is None or isinstance(val, str) for val in vals) + for vals in results + ) + + # Translate "None" values to "-" + def valrep(val): + return "-" if val is None else val + + results = list( + "".join(["G", valrep(vals[0]), "L"] + list(map(valrep, vals[1:]))) + for vals in results + ) + return results + + +# +# The "expected" matrix test results are stored in JSON files (one for each test-type). +# We can also save the found results. +# +_MATRIX_TESTTYPES = ("load", "save", "roundtrip") + + +@pytest.fixture(autouse=True, scope="session") +def matrix_results(): + matrix_filepaths = { + testtype: ( + Path(__file__).parent / f"attrs_matrix_results_{testtype}.json" + ) + for testtype in _MATRIX_TESTTYPES + } + # An environment variable can trigger saving of the results. + save_matrix_results = bool( + int(os.environ.get("SAVEALL_MATRIX_RESULTS", "0")) + ) + + matrix_results = {} + for testtype in _MATRIX_TESTTYPES: + # Either fetch from file, or initialise, a results matrix for each test type + # (load/save/roundtrip). + input_path = matrix_filepaths[testtype] + if input_path.exists(): + # Load from file with json.
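+ # (This reads back the structure which the json.dump at the end of + # this fixture writes out, when saving is enabled via + # SAVEALL_MATRIX_RESULTS.)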
+ with open(input_path) as file_in: + testtype_results = json.load(file_in) + # Check compatibility (in case we changed the test-specs list) + assert set(testtype_results.keys()) == set(_MATRIX_TESTCASES) + assert all( + testtype_results[key]["input"] == _MATRIX_TESTCASE_INPUTS[key] + for key in _MATRIX_TESTCASES + ) + else: + # Create empty matrix results content (for one test-type) + testtype_results = {} + for testcase in _MATRIX_TESTCASES: + test_case_results = {} + testtype_results[testcase] = test_case_results + # Every testcase dict has an "input" slot with the test input spec, + # basically just to help human readability. + test_case_results["input"] = _MATRIX_TESTCASE_INPUTS[testcase] + for attrstyle in _ATTR_STYLES: + if testtype == "load": + # "load" test results have a "legacy" result (as for a single + # combined attrs dictionary), and a "newstyle" result (with + # the new split dictionary). + test_case_results[attrstyle] = { + "legacy": None, + "newstyle": None, + } + else: + # "save"/"roundtrip"-type results record 2 result sets, + # (unsplit/split) for each attribute-style + # - i.e. when saved without/with split-attribute saving enabled. + test_case_results[attrstyle] = { + "unsplit": None, + "split": None, + } + + # Build complete data: matrix_results[TESTTYPES][TESTCASES][ATTR_STYLES] + matrix_results[testtype] = testtype_results + + # Pass through to all the tests : they can also update it, if enabled. + yield save_matrix_results, matrix_results + + if save_matrix_results: + for testtype in _MATRIX_TESTTYPES: + output_path = matrix_filepaths[testtype] + results = matrix_results[testtype] + with open(output_path, "w") as file_out: + json.dump(results, file_out, indent=2) + + +class TestRoundtrip(MixinAttrsTesting): + """ + Test handling of attributes in roundtrip netcdf-iris-netcdf. + + This behaviour should be (almost) unchanged by the adoption of + split-attribute handling. + + NOTE: the tested combinations in the 'TestLoad' test all match tests here, but not + *all* of the tests here are useful there. To avoid confusion (!), the tests which + are paralleled in TestLoad have identical test-names. However, as the tests are + all numbered, this means some numbers are missing there. The tests are numbered + only so it is easier to review the discovered test list (which is sorted). + + """ + + # Parametrise all tests over split/unsplit saving. + @pytest.fixture( + params=_SPLIT_PARAM_VALUES, ids=_SPLIT_PARAM_IDS, autouse=True + ) + def do_split(self, request): + do_split = request.param + self.save_split_attrs = do_split + return do_split + + def run_roundtrip_testcase(self, attr_name, values): + """ + Initialise the testcase from the passed-in controls, configure the input + files and run a save-load roundtrip to produce the output file. + + The name of the attribute, and the input and output temporary filepaths are + stored on the instance, where "self.check_roundtrip_results()" can get them. + + """ + self.run_testcase( + attr_name=attr_name, values=values, create_cubes_or_files="files" + ) + self.result_filepath = self._testfile_path("result") + + with warnings.catch_warnings(record=True) as captured_warnings: + # Do a load+save to produce a testable output result in a new file. + cubes = iris.load(self.input_filepaths) + # Ensure stable result order.
+ cubes = sorted(cubes, key=lambda cube: cube.name()) + do_split = getattr(self, "save_split_attrs", False) + kwargs = ( + dict(save_split_attrs=do_split) + if _SPLIT_SAVE_SUPPORTED + else dict() + ) + with iris.FUTURE.context(**kwargs): + iris.save(cubes, self.result_filepath) + + self.captured_warnings = captured_warnings + + def check_roundtrip_results(self, expected, expected_warnings=None): + """ + Run checks on the generated output file. + + The counterpart to :meth:`run_roundtrip_testcase`, with similar arguments. + Check existence (or not) of a global attribute, and a number of local + (variable) attributes. + Values of 'None' mean to check that the relevant global/local attribute does + *not* exist. + + Also check the warnings captured during the testcase run. + """ + # N.B. there is only ever one result-file, but it can contain various variables + # which came from different input files. + results = self.fetch_results(filepath=self.result_filepath) + assert results == expected + check_captured_warnings( + expected_warnings, + self.captured_warnings, + # N.B. only allow a legacy-attributes warning when NOT saving split-attrs + allow_possible_legacy_warning=not self.save_split_attrs, + ) + + ####################################################### + # Tests on "user-style" attributes. + # This means any arbitrary attribute which a user might have added -- i.e. one with + # a name which is *not* recognised in the netCDF or CF conventions. + # + + def test_01_userstyle_single_global(self): + self.run_roundtrip_testcase( + attr_name="myname", values=["single-value", None] + ) + # Default behaviour for a general global user-attribute. + # It simply remains global. + self.check_roundtrip_results(["single-value", None]) + + def test_02_userstyle_single_local(self, do_split): + # Default behaviour for a general local user-attribute. + # It results in a "promoted" global attribute. + self.run_roundtrip_testcase( + attr_name="myname", # A generic "user" attribute with no special handling + values=[None, "single-value"], + ) + if do_split: + expected = [None, "single-value"] + else: + expected = ["single-value", None] + self.check_roundtrip_results(expected) + + def test_03_userstyle_multiple_different(self, do_split): + # Default behaviour for general user-attributes. + # The global attribute is lost because there are local ones. + self.run_roundtrip_testcase( + attr_name="random", # A generic "user" attribute with no special handling + values=[ + ["common_global", "f1v1", "f1v2"], + ["common_global", "x1", "x2"], + ], + ) + expected_result = ["common_global", "f1v1", "f1v2", "x1", "x2"] + if not do_split: + # in legacy mode, global is lost + expected_result[0] = None + # just check they are all there and distinct + self.check_roundtrip_results(expected_result) + + def test_04_userstyle_matching_promoted(self, do_split): + # matching local user-attributes are "promoted" to a global one. + # (but not when saving split attributes) + input_values = ["global_file1", "same-value", "same-value"] + self.run_roundtrip_testcase( + attr_name="random", + values=input_values, + ) + if do_split: + expected = input_values + else: + expected = ["same-value", None, None] + self.check_roundtrip_results(expected) + + def test_05_userstyle_matching_crossfile_promoted(self, do_split): + # matching user-attributes are promoted, even across input files. 
+        # (but not when saving split attributes)
+        self.run_roundtrip_testcase(
+            attr_name="random",
+            values=[
+                ["global_file1", "same-value", "same-value"],
+                [None, "same-value", "same-value"],
+            ],
+        )
+        if do_split:
+            # newstyle saves: locals are preserved, mismatched global is *lost*
+            expected_result = [
+                None,
+                "same-value",
+                "same-value",
+                "same-value",
+                "same-value",
+            ]
+            # warnings about the clash
+            expected_warnings = [
+                "Saving.* global attributes.* as local",
+                'attributes.* of cube "var_0" were not saved',
+                'attributes.* of cube "var_1" were not saved',
+            ]
+        else:
+            # oldstyle saves: matching locals promoted, override original global
+            expected_result = ["same-value", None, None, None, None]
+            expected_warnings = None
+
+        self.check_roundtrip_results(expected_result, expected_warnings)
+
+    def test_06_userstyle_nonmatching_remainlocal(self, do_split):
+        # Non-matching user attributes remain 'local' to the individual variables.
+        input_values = ["global_file1", "value-1", "value-2"]
+        if do_split:
+            # originals are preserved
+            expected_result = input_values
+        else:
+            # global is lost
+            expected_result = [None, "value-1", "value-2"]
+        self.run_roundtrip_testcase(attr_name="random", values=input_values)
+        self.check_roundtrip_results(expected_result)
+
+    #######################################################
+    # Tests on "Conventions" attribute.
+    # Note: the usual 'Conventions' behaviour is already tested elsewhere
+    # - see :class:`TestConventionsAttributes` above
+    #
+    # TODO: the name 'conventions' (lower-case) is also listed in _CF_GLOBAL_ATTRS, but
+    # we have excluded it from the global-attrs testing here. We probably still need to
+    # test what that does, though its inclusion might simply be a mistake.
+    #
+
+    def test_07_conventions_var_local(self):
+        # What happens if 'Conventions' appears as a variable-local attribute.
+        # N.B. this is not good CF, but we'll see what happens anyway.
+        self.run_roundtrip_testcase(
+            attr_name="Conventions",
+            values=[None, "user_set"],
+        )
+        self.check_roundtrip_results(["CF-1.7", None])
+
+    def test_08_conventions_var_both(self):
+        # What happens if 'Conventions' appears as both global + local attribute.
+        self.run_roundtrip_testcase(
+            attr_name="Conventions",
+            values=["global-setting", "local-setting"],
+        )
+        # standard content from Iris save
+        self.check_roundtrip_results(["CF-1.7", None])
+
+    #######################################################
+    # Tests on "global" style attributes
+    # = those specific ones which 'ought' only to be global (except on collisions)
+    #
+    def test_09_globalstyle__global(self, global_attr):
+        attr_content = f"Global tracked {global_attr}"
+        self.run_roundtrip_testcase(
+            attr_name=global_attr,
+            values=[attr_content, None],
+        )
+        self.check_roundtrip_results([attr_content, None])
+
+    def test_10_globalstyle__local(self, global_attr, do_split):
+        # Strictly, not correct CF, but let's see what it does with it.
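+        # (a "global" style attribute is one which CF expects at the file level,
+        # e.g. "history" or "institution" : here it is supplied per-variable
+        # instead.)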
+ attr_content = f"Local tracked {global_attr}" + input_values = [None, attr_content] + self.run_roundtrip_testcase( + attr_name=global_attr, + values=input_values, + ) + if do_split: + # remains local as supplied, but there is a warning + expected_result = input_values + expected_warning = f"'{global_attr}'.* should only be a CF global" + else: + # promoted to global + expected_result = [attr_content, None] + expected_warning = None + self.check_roundtrip_results(expected_result, expected_warning) + + def test_11_globalstyle__both(self, global_attr, do_split): + attr_global = f"Global-{global_attr}" + attr_local = f"Local-{global_attr}" + input_values = [attr_global, attr_local] + self.run_roundtrip_testcase( + attr_name=global_attr, + values=input_values, + ) + if do_split: + # remains local as supplied, but there is a warning + expected_result = input_values + expected_warning = "should only be a CF global" + else: + # promoted to global, no local value, original global lost + expected_result = [attr_local, None] + expected_warning = None + self.check_roundtrip_results(expected_result, expected_warning) + + def test_12_globalstyle__multivar_different(self, global_attr): + # Multiple *different* local settings are retained, not promoted + attr_1 = f"Local-{global_attr}-1" + attr_2 = f"Local-{global_attr}-2" + expect_warning = "should only be a CF global attribute" + # A warning should be raised when writing the result. + self.run_roundtrip_testcase( + attr_name=global_attr, + values=[None, attr_1, attr_2], + ) + self.check_roundtrip_results([None, attr_1, attr_2], expect_warning) + + def test_13_globalstyle__multivar_same(self, global_attr, do_split): + # Multiple *same* local settings are promoted to a common global one + attrval = f"Locally-defined-{global_attr}" + input_values = [None, attrval, attrval] + self.run_roundtrip_testcase( + attr_name=global_attr, + values=input_values, + ) + if do_split: + # remains local, but with a warning + expected_warning = "should only be a CF global" + expected_result = input_values + else: + # promoted to global + expected_warning = None + expected_result = [attrval, None, None] + self.check_roundtrip_results(expected_result, expected_warning) + + def test_14_globalstyle__multifile_different(self, global_attr, do_split): + # Different global attributes from multiple files are retained as local ones + attr_1 = f"Global-{global_attr}-1" + attr_2 = f"Global-{global_attr}-2" + self.run_roundtrip_testcase( + attr_name=global_attr, + values=[[attr_1, None], [attr_2, None]], + ) + # A warning should be raised when writing the result. + expected_warnings = ["should only be a CF global attribute"] + if do_split: + # An extra warning, only when saving with split-attributes. 
+            expected_warnings = ["Saving.* as local"] + expected_warnings
+        self.check_roundtrip_results([None, attr_1, attr_2], expected_warnings)
+
+    def test_15_globalstyle__multifile_same(self, global_attr):
+        # Matching global-type attributes in multiple files are retained as global
+        attrval = f"Global-{global_attr}"
+        self.run_roundtrip_testcase(
+            attr_name=global_attr, values=[[attrval, None], [attrval, None]]
+        )
+        self.check_roundtrip_results([attrval, None, None])
+
+    #######################################################
+    # Tests on "local" style attributes
+    # = those specific ones which 'ought' to appear attached to a variable, rather than
+    # being global
+    #
+
+    @pytest.mark.parametrize("origin_style", ["input_global", "input_local"])
+    def test_16_localstyle(self, local_attr, origin_style, do_split):
+        # local-style attributes should *not* get 'promoted' to global ones
+        # Set the name extension to avoid tests with different 'style' params having
+        # collisions over identical testfile names
+        self.testname_extension = origin_style
+
+        attrval = f"Attr-setting-{local_attr}"
+        if local_attr == "missing_value":
+            # Special-case : 'missing_value' type must be compatible with the variable
+            attrval = 303
+        elif local_attr == "ukmo__process_flags":
+            # What this does when set as a GLOBAL attr seems to be weird + unintended.
+            # 'this' --> 't h i s'
+            attrval = "process"
+            # NOTE: it's also supposed to handle vector values - which we are not
+            # testing.
+
+        # NOTE: results *should* be the same whether the original attribute is written
+        # as global or a variable attribute
+        if origin_style == "input_global":
+            # Record in source as a global attribute
+            values = [attrval, None]
+        else:
+            assert origin_style == "input_local"
+            # Record in source as a variable-local attribute
+            values = [None, attrval]
+        self.run_roundtrip_testcase(attr_name=local_attr, values=values)
+
+        if (
+            local_attr in ("missing_value", "standard_error_multiplier")
+            and origin_style == "input_local"
+        ):
+            # These ones are actually discarded by roundtrip.
+            # Not clear why, but for now this captures the facts.
+            expect_global = None
+            expect_var = None
+        else:
+            expect_global = None
+            if (
+                local_attr == "ukmo__process_flags"
+                and origin_style == "input_global"
+                and not do_split
+            ):
+                # This is very odd behaviour + surely unintended.
+                # It's supposed to handle vector values (which we are not checking).
+                # But the weird behaviour only applies to the 'global' test, which is
+                # obviously not normal usage anyway.
+                attrval = "p r o c e s s"
+            expect_var = attrval
+
+        if local_attr == "STASH" and (
+            origin_style == "input_local" or not do_split
+        ):
+            # A special case, output translates this to a different attribute name.
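+            # (i.e. an Iris "STASH" attribute is written to the file as
+            # "um_stash_source", so that is the name to fetch results under.)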
+            self.attrname = "um_stash_source"
+
+        expected_result = [expect_global, expect_var]
+        if do_split and origin_style == "input_global":
+            # The result is simply the "other way around"
+            expected_result = expected_result[::-1]
+        self.check_roundtrip_results(expected_result)
+
+    @pytest.mark.parametrize("testcase", _MATRIX_TESTCASES[:max_param_attrs])
+    @pytest.mark.parametrize("attrname", _MATRIX_ATTRNAMES)
+    def test_roundtrip_matrix(
+        self, testcase, attrname, matrix_results, do_split
+    ):
+        do_saves, matrix_results = matrix_results
+        split_param = "split" if do_split else "unsplit"
+        testcase_spec = matrix_results["roundtrip"][testcase]
+        input_spec = testcase_spec["input"]
+        values = decode_matrix_input(input_spec)
+
+        self.run_roundtrip_testcase(attrname, values)
+        results = self.fetch_results(filepath=self.result_filepath)
+        result_spec = encode_matrix_result(results)
+
+        attr_style = deduce_attr_style(attrname)
+        expected = testcase_spec[attr_style][split_param]
+
+        if do_saves:
+            testcase_spec[attr_style][split_param] = result_spec
+        if expected is not None:
+            assert result_spec == expected
+
+
+class TestLoad(MixinAttrsTesting):
+    """
+    Test loading of file attributes into Iris cube attribute dictionaries.
+
+    Tests loading of various combinations into cube attribute dictionaries,
+    treated as a single combined result (i.e. not split).  This behaviour should
+    be (almost) conserved with the adoption of split attributes **except possibly
+    for key orderings** -- i.e. we test only up to dictionary equality.
+
+    NOTE: the tested combinations are identical to those in the roundtrip test.
+    Test numbering is kept the same, so some numbers (for tests which are
+    inapplicable here) are missing.
+
+    """
+
+    def run_load_testcase(self, attr_name, values):
+        self.run_testcase(
+            attr_name=attr_name, values=values, create_cubes_or_files="files"
+        )
+
+    def check_load_results(self, expected, oldstyle_combined=False):
+        if not _SPLIT_SAVE_SUPPORTED and not oldstyle_combined:
+            # Don't check "newstyle" in the old world -- just skip it.
+            return
+        result_cubes = iris.load(self.input_filepaths)
+        results = self.fetch_results(
+            cubes=result_cubes, oldstyle_combined=oldstyle_combined
+        )
+        # Standardise expected form to list(lists).
+        assert isinstance(expected, list)
+        if not isinstance(expected[0], list):
+            expected = [expected]
+        assert results == expected
+
+    #######################################################
+    # Tests on "user-style" attributes.
+    # This means any arbitrary attribute which a user might have added -- i.e. one with
+    # a name which is *not* recognised in the netCDF or CF conventions.
+    #
+
+    def test_01_userstyle_single_global(self):
+        self.run_load_testcase(
+            attr_name="myname", values=["single_value", None, None]
+        )
+        # Legacy-equivalent result check (single attributes dict per cube)
+        self.check_load_results(
+            [None, "single_value", "single_value"],
+            oldstyle_combined=True,
+        )
+        # Full new-style results check
+        self.check_load_results(["single_value", None, None])
+
+    def test_02_userstyle_single_local(self):
+        # Default behaviour for a general local user-attribute.
+        # It is attached to only the specific cube.
+        self.run_load_testcase(
+            attr_name="myname",  # A generic "user" attribute with no special handling
+            values=[None, "single-value", None],
+        )
+        self.check_load_results(
+            [None, "single-value", None], oldstyle_combined=True
+        )
+        self.check_load_results([None, "single-value", None])
+
+    def test_03_userstyle_multiple_different(self):
+        # Default behaviour for differing local user-attributes.
+        # The global attribute is simply lost, because there are local ones.
+        self.run_load_testcase(
+            attr_name="random",  # A generic "user" attribute with no special handling
+            values=[
+                ["global_file1", "f1v1", "f1v2"],
+                ["global_file2", "x1", "x2"],
+            ],
+        )
+        self.check_load_results(
+            [None, "f1v1", "f1v2", "x1", "x2"],
+            oldstyle_combined=True,
+        )
+        self.check_load_results(
+            [["global_file1", "f1v1", "f1v2"], ["global_file2", "x1", "x2"]]
+        )
+
+    def test_04_userstyle_multiple_same(self):
+        # Nothing special to note in this case
+        # TODO: ??remove??
+        self.run_load_testcase(
+            attr_name="random",
+            values=["global_file1", "same-value", "same-value"],
+        )
+        self.check_load_results(
+            oldstyle_combined=True, expected=[None, "same-value", "same-value"]
+        )
+        self.check_load_results(["global_file1", "same-value", "same-value"])
+
+    #######################################################
+    # Tests on "Conventions" attribute.
+    # Note: the usual 'Conventions' behaviour is already tested elsewhere
+    # - see :class:`TestConventionsAttributes` above
+    #
+    # TODO: the name 'conventions' (lower-case) is also listed in _CF_GLOBAL_ATTRS, but
+    # we have excluded it from the global-attrs testing here. We probably still need to
+    # test what that does, though its inclusion might simply be a mistake.
+    #
+
+    def test_07_conventions_var_local(self):
+        # What happens if 'Conventions' appears as a variable-local attribute.
+        # N.B. this is not good CF, but we'll see what happens anyway.
+        self.run_load_testcase(
+            attr_name="Conventions",
+            values=[None, "user_set"],
+        )
+        # Legacy result
+        self.check_load_results([None, "user_set"], oldstyle_combined=True)
+        # Newstyle result
+        self.check_load_results([None, "user_set"])
+
+    def test_08_conventions_var_both(self):
+        # What happens if 'Conventions' appears as both global + local attribute.
+        self.run_load_testcase(
+            attr_name="Conventions",
+            values=["global-setting", "local-setting"],
+        )
+        # (#1): legacy result : the global version gets lost.
+        self.check_load_results(
+            [None, "local-setting"], oldstyle_combined=True
+        )
+        # (#2): newstyle results : retain both.
+        self.check_load_results(["global-setting", "local-setting"])
+
+    #######################################################
+    # Tests on "global" style attributes
+    # = those specific ones which 'ought' only to be global (except on collisions)
+    #
+
+    def test_09_globalstyle__global(self, global_attr):
+        attr_content = f"Global tracked {global_attr}"
+        self.run_load_testcase(
+            attr_name=global_attr, values=[attr_content, None]
+        )
+        # (#1) legacy
+        self.check_load_results([None, attr_content], oldstyle_combined=True)
+        # (#2) newstyle : global status preserved.
+        self.check_load_results([attr_content, None])
+
+    def test_10_globalstyle__local(self, global_attr):
+        # Strictly, not correct CF, but let's see what it does with it.
+        attr_content = f"Local tracked {global_attr}"
+        self.run_load_testcase(
+            attr_name=global_attr,
+            values=[None, attr_content],
+        )
+        # (#1): legacy result = treated the same as a global setting
+        self.check_load_results([None, attr_content], oldstyle_combined=True)
+        # (#2): newstyle result : remains local
+        self.check_load_results(
+            [None, attr_content],
+        )
+
+    def test_11_globalstyle__both(self, global_attr):
+        attr_global = f"Global-{global_attr}"
+        attr_local = f"Local-{global_attr}"
+        self.run_load_testcase(
+            attr_name=global_attr,
+            values=[attr_global, attr_local],
+        )
+        # (#1) legacy result : promoted local setting "wins"
+        self.check_load_results([None, attr_local], oldstyle_combined=True)
+        # (#2) newstyle result : both retained
+        self.check_load_results([attr_global, attr_local])
+
+    def test_12_globalstyle__multivar_different(self, global_attr):
+        # Multiple *different* local settings are retained
+        attr_1 = f"Local-{global_attr}-1"
+        attr_2 = f"Local-{global_attr}-2"
+        self.run_load_testcase(
+            attr_name=global_attr,
+            values=[None, attr_1, attr_2],
+        )
+        # (#1): legacy values, for cube.attributes viewed as a single dict
+        self.check_load_results([None, attr_1, attr_2], oldstyle_combined=True)
+        # (#2): exact results, with newstyle "split" cube attrs
+        self.check_load_results([None, attr_1, attr_2])
+
+    def test_14_globalstyle__multifile_different(self, global_attr):
+        # Different global attributes from multiple files
+        attr_1 = f"Global-{global_attr}-1"
+        attr_2 = f"Global-{global_attr}-2"
+        self.run_load_testcase(
+            attr_name=global_attr,
+            values=[[attr_1, None, None], [attr_2, None, None]],
+        )
+        # (#1) legacy : multiple globals retained as local ones
+        self.check_load_results(
+            [None, attr_1, attr_1, attr_2, attr_2], oldstyle_combined=True
+        )
+        # (#2) newstyle : result same as input
+        self.check_load_results([[attr_1, None, None], [attr_2, None, None]])
+
+    #######################################################
+    # Tests on "local" style attributes
+    # = those specific ones which 'ought' to appear attached to a variable, rather than
+    # being global
+    #
+
+    @pytest.mark.parametrize("origin_style", ["input_global", "input_local"])
+    def test_16_localstyle(self, local_attr, origin_style):
+        # local-style attributes should *not* get 'promoted' to global ones
+        # Set the name extension to avoid tests with different 'style' params having
+        # collisions over identical testfile names
+        self.testname_extension = origin_style
+
+        attrval = f"Attr-setting-{local_attr}"
+        if local_attr == "missing_value":
+            # Special-case : 'missing_value' type must be compatible with the variable
+            attrval = 303
+        elif local_attr == "ukmo__process_flags":
+            # Another special case : the handling of this one is "unusual".
+            attrval = "process"
+
+        # Create testfiles and load them, which should always produce a single cube.
+        if origin_style == "input_global":
+            # Record in source as a global attribute
+            values = [attrval, None]
+        else:
+            assert origin_style == "input_local"
+            # Record in source as a variable-local attribute
+            values = [None, attrval]
+
+        self.run_load_testcase(attr_name=local_attr, values=values)
+
+        # Work out the expected result.
+        result_value = attrval
+        # ... there are some special cases
+        if origin_style == "input_local":
+            if local_attr == "ukmo__process_flags":
+                # Some odd special behaviour here.
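+                # (the scalar string comes back as a 1-tuple, e.g.
+                # "process" --> ("process",).)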
+ result_value = (result_value,) + elif local_attr in ("standard_error_multiplier", "missing_value"): + # For some reason, these ones never appear on the cube + result_value = None + + # NOTE: **legacy** result is the same, whether the original attribute was + # provided as a global or local attribute ... + expected_result_legacy = [None, result_value] + + # While 'newstyle' results preserve the input type local/global. + if origin_style == "input_local": + expected_result_newstyle = [None, result_value] + else: + expected_result_newstyle = [result_value, None] + + # (#1): legacy values, for cube.attributes viewed as a single dict + self.check_load_results(expected_result_legacy, oldstyle_combined=True) + # (#2): exact results, with newstyle "split" cube attrs + self.check_load_results(expected_result_newstyle) + + @pytest.mark.parametrize("testcase", _MATRIX_TESTCASES[:max_param_attrs]) + @pytest.mark.parametrize("attrname", _MATRIX_ATTRNAMES) + @pytest.mark.parametrize("resultstyle", _MATRIX_LOAD_RESULTSTYLES) + def test_load_matrix( + self, testcase, attrname, matrix_results, resultstyle + ): + do_saves, matrix_results = matrix_results + testcase_spec = matrix_results["load"][testcase] + input_spec = testcase_spec["input"] + values = decode_matrix_input(input_spec) + + self.run_load_testcase(attrname, values) + + result_cubes = iris.load(self.input_filepaths) + do_combined = resultstyle == "legacy" + results = self.fetch_results( + cubes=result_cubes, oldstyle_combined=do_combined + ) + result_spec = encode_matrix_result(results) + + attr_style = deduce_attr_style(attrname) + expected = testcase_spec[attr_style][resultstyle] + + if do_saves: + testcase_spec[attr_style][resultstyle] = result_spec + if expected is not None: + assert result_spec == expected + + +class TestSave(MixinAttrsTesting): + """ + Test saving from cube attributes dictionary (various categories) into files. + + """ + + # Parametrise all tests over split/unsplit saving. + @pytest.fixture( + params=_SPLIT_PARAM_VALUES, ids=_SPLIT_PARAM_IDS, autouse=True + ) + def do_split(self, request): + do_split = request.param + self.save_split_attrs = do_split + return do_split + + def run_save_testcase(self, attr_name: str, values: list): + # Create input cubes. + self.run_testcase( + attr_name=attr_name, + values=values, + create_cubes_or_files="cubes", + ) + + # Save input cubes to a temporary result file. + with warnings.catch_warnings(record=True) as captured_warnings: + self.result_filepath = self._testfile_path("result") + do_split = getattr(self, "save_split_attrs", False) + kwargs = ( + dict(save_split_attrs=do_split) + if _SPLIT_SAVE_SUPPORTED + else dict() + ) + with iris.FUTURE.context(**kwargs): + iris.save(self.input_cubes, self.result_filepath) + + self.captured_warnings = captured_warnings + + def run_save_testcase_legacytype(self, attr_name: str, values: list): + """ + Legacy-type means : before cubes had split attributes. + + This just means we have only one "set" of cubes, with ***no*** distinct global + attribute. + """ + if not isinstance(values, list): + # Translate single input value to list-of-1 + values = [values] + + self.run_save_testcase(attr_name, [None] + values) + + def check_save_results( + self, expected: list, expected_warnings: List[str] = None + ): + results = self.fetch_results(filepath=self.result_filepath) + assert results == expected + check_captured_warnings( + expected_warnings, + self.captured_warnings, + # N.B. 
only allow a legacy-attributes warning when NOT saving split-attrs + allow_possible_legacy_warning=not self.save_split_attrs, + ) + + def test_userstyle__single(self, do_split): + self.run_save_testcase_legacytype("random", "value-x") + if do_split: + # result as input values + expected_result = [None, "value-x"] + else: + # in legacy mode, promoted = stored as a *global* by default. + expected_result = ["value-x", None] + self.check_save_results(expected_result) + + def test_userstyle__multiple_same(self, do_split): + self.run_save_testcase_legacytype("random", ["value-x", "value-x"]) + if do_split: + # result as input values + expected_result = [None, "value-x", "value-x"] + else: + # in legacy mode, promoted = stored as a *global* by default. + expected_result = ["value-x", None, None] + self.check_save_results(expected_result) + + def test_userstyle__multiple_different(self): + # Clashing values are stored as locals on the individual variables. + self.run_save_testcase_legacytype("random", ["value-A", "value-B"]) + self.check_save_results([None, "value-A", "value-B"]) + + def test_userstyle__multiple_onemissing(self): + # Multiple user-type, with one missing, behave like different values. + self.run_save_testcase_legacytype( + "random", + ["value", None], + ) + # Stored as locals when there are differing values. + self.check_save_results([None, "value", None]) + + def test_Conventions__single(self): + self.run_save_testcase_legacytype("Conventions", "x") + # Always discarded + replaced by a single global setting. + self.check_save_results(["CF-1.7", None]) + + def test_Conventions__multiple_same(self): + self.run_save_testcase_legacytype( + "Conventions", ["same-value", "same-value"] + ) + # Always discarded + replaced by a single global setting. + self.check_save_results(["CF-1.7", None, None]) + + def test_Conventions__multiple_different(self): + self.run_save_testcase_legacytype( + "Conventions", ["value-A", "value-B"] + ) + # Always discarded + replaced by a single global setting. + self.check_save_results(["CF-1.7", None, None]) + + def test_globalstyle__single(self, global_attr, do_split): + self.run_save_testcase_legacytype(global_attr, ["value"]) + if do_split: + # result as input values + expected_warning = "should only be a CF global" + expected_result = [None, "value"] + else: + # in legacy mode, promoted + expected_warning = None + expected_result = ["value", None] + self.check_save_results(expected_result, expected_warning) + + def test_globalstyle__multiple_same(self, global_attr, do_split): + # Multiple global-type with same values are made global. + self.run_save_testcase_legacytype( + global_attr, + ["value-same", "value-same"], + ) + if do_split: + # result as input values + expected_result = [None, "value-same", "value-same"] + expected_warning = "should only be a CF global attribute" + else: + # in legacy mode, promoted + expected_result = ["value-same", None, None] + expected_warning = None + self.check_save_results(expected_result, expected_warning) + + def test_globalstyle__multiple_different(self, global_attr): + # Multiple global-type with different values become local, with warning. + self.run_save_testcase_legacytype(global_attr, ["value-A", "value-B"]) + # *Only* stored as locals when there are differing values. + msg_regexp = ( + f"'{global_attr}' is being added as CF data variable attribute," + f".* should only be a CF global attribute." 
+        )
+        self.check_save_results(
+            [None, "value-A", "value-B"], expected_warnings=msg_regexp
+        )
+
+    def test_globalstyle__multiple_onemissing(self, global_attr):
+        # Multiple global-type, with one missing, behave like different values.
+        self.run_save_testcase_legacytype(
+            global_attr, ["value", "value", None]
+        )
+        # Stored as locals when there are differing values.
+        msg_regexp = (
+            f"'{global_attr}' is being added as CF data variable attribute,"
+            f".* should only be a CF global attribute."
+        )
+        self.check_save_results(
+            [None, "value", "value", None], expected_warnings=msg_regexp
+        )
+
+    def test_localstyle__single(self, local_attr):
+        self.run_save_testcase_legacytype(local_attr, ["value"])
+
+        # Defaults to local
+        expected_results = [None, "value"]
+        # .. but a couple of special cases
+        if local_attr == "ukmo__process_flags":
+            # A particular, really weird case
+            expected_results = [None, "v a l u e"]
+        elif local_attr == "STASH":
+            # A special case : the stored name is different
+            self.attrname = "um_stash_source"
+
+        self.check_save_results(expected_results)
+
+    def test_localstyle__multiple_same(self, local_attr):
+        self.run_save_testcase_legacytype(
+            local_attr, ["value-same", "value-same"]
+        )
+
+        # They remain separate + local
+        expected_results = [None, "value-same", "value-same"]
+        if local_attr == "ukmo__process_flags":
+            # A particular, really weird case
+            expected_results = [
+                None,
+                "v a l u e - s a m e",
+                "v a l u e - s a m e",
+            ]
+        elif local_attr == "STASH":
+            # A special case : the stored name is different
+            self.attrname = "um_stash_source"
+
+        self.check_save_results(expected_results)
+
+    def test_localstyle__multiple_different(self, local_attr):
+        self.run_save_testcase_legacytype(local_attr, ["value-A", "value-B"])
+        # Different values are treated just the same as matching ones.
+        expected_results = [None, "value-A", "value-B"]
+        if local_attr == "ukmo__process_flags":
+            # A particular, really weird case
+            expected_results = [
+                None,
+                "v a l u e - A",
+                "v a l u e - B",
+            ]
+        elif local_attr == "STASH":
+            # A special case : the stored name is different
+            self.attrname = "um_stash_source"
+        self.check_save_results(expected_results)
+
+    #
+    # Test handling of newstyle independent global+local cube attributes.
+    #
+    def test_globallocal_clashing(self, do_split):
+        # A cube has clashing local + global attrs.
+        original_values = ["valueA", "valueB"]
+        self.run_save_testcase("userattr", original_values)
+        expected_result = original_values.copy()
+        if not do_split:
+            # in legacy mode, the local value is "promoted" to global, and the
+            # original global is lost.
+            expected_result[0] = expected_result[1]
+            expected_result[1] = None
+        self.check_save_results(expected_result)
+
+    def test_globallocal_oneeach_same(self, do_split):
+        # One cube with global attr, another with identical local one.
+        self.run_save_testcase(
+            "userattr", values=[[None, "value"], ["value", None]]
+        )
+        if do_split:
+            expected = [None, "value", "value"]
+            expected_warning = (
+                r"Saving the cube global attributes \['userattr'\] as local"
+            )
+        else:
+            # N.B. legacy code sees only two equal values (and promotes).
+            expected = ["value", None, None]
+            expected_warning = None
+
+        self.check_save_results(expected, expected_warning)
+
+    def test_globallocal_oneeach_different(self, do_split):
+        # One cube with global attr, another with a *different* local one.
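+        # (values layout, as throughout these tests : [global, *locals] for each
+        # input cube-set.)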
+        self.run_save_testcase(
+            "userattr", [[None, "valueA"], ["valueB", None]]
+        )
+        if do_split:
+            warning = (
+                r"Saving the cube global attributes \['userattr'\] as local"
+            )
+        else:
+            # N.B. legacy code does not warn of global-to-local "demotion".
+            warning = None
+        self.check_save_results([None, "valueA", "valueB"], warning)
+
+    def test_globallocal_one_other_clashingglobals(self, do_split):
+        # Two cubes with both, second cube has a clashing global attribute.
+        self.run_save_testcase(
+            "userattr",
+            values=[["valueA", "valueB"], ["valueXXX", "valueB"]],
+        )
+        if do_split:
+            expected = [None, "valueB", "valueB"]
+            expected_warnings = [
+                "Saving.* global attributes.* as local",
+                'attributes.* of cube "v1" were not saved',
+                'attributes.* of cube "v2" were not saved',
+            ]
+        else:
+            # N.B. legacy code sees only the locals, and promotes them.
+            expected = ["valueB", None, None]
+            expected_warnings = None
+        self.check_save_results(expected, expected_warnings)
+
+    def test_globallocal_one_other_clashinglocals(self, do_split):
+        # Two cubes with both, second cube has a clashing local attribute.
+        inputs = [["valueA", "valueB"], ["valueA", "valueXXX"]]
+        if do_split:
+            expected = ["valueA", "valueB", "valueXXX"]
+        else:
+            # N.B. legacy code sees only the locals.
+            expected = [None, "valueB", "valueXXX"]
+        self.run_save_testcase("userattr", values=inputs)
+        self.check_save_results(expected)
+
+    @pytest.mark.parametrize("testcase", _MATRIX_TESTCASES[:max_param_attrs])
+    @pytest.mark.parametrize("attrname", _MATRIX_ATTRNAMES)
+    def test_save_matrix(self, testcase, attrname, matrix_results, do_split):
+        do_saves, matrix_results = matrix_results
+        split_param = "split" if do_split else "unsplit"
+        testcase_spec = matrix_results["save"][testcase]
+        input_spec = testcase_spec["input"]
+        values = decode_matrix_input(input_spec)
+
+        self.run_save_testcase(attrname, values)
+        results = self.fetch_results(filepath=self.result_filepath)
+        result_spec = encode_matrix_result(results)
+
+        attr_style = deduce_attr_style(attrname)
+        expected = testcase_spec[attr_style][split_param]
+
+        if do_saves:
+            testcase_spec[attr_style][split_param] = result_spec
+        if expected is not None:
+            assert result_spec == expected
diff --git a/lib/iris/tests/test_merge.py b/lib/iris/tests/test_merge.py
index 054fd3a20b..7c11fde55d 100644
--- a/lib/iris/tests/test_merge.py
+++ b/lib/iris/tests/test_merge.py
@@ -21,6 +21,7 @@ from iris._lazy_data import as_lazy_data
 from iris.coords import AuxCoord, DimCoord
 import iris.cube
+from iris.cube import CubeAttrsDict
 import iris.exceptions
 import iris.tests.stock
 
@@ -1107,5 +1108,86 @@ def test_ancillary_variable_error_msg(self):
         _ = iris.cube.CubeList([cube1, cube2]).merge_cube()
+
+class TestCubeMerge__split_attributes__error_messages(tests.IrisTest):
+    """
+    Specific tests for the detection and wording of attribute-mismatch errors.
+
+    In particular, the adoption of 'split' attributes with the new
+    :class:`iris.cube.CubeAttrsDict` introduces some more subtle possible
+    discrepancies in attributes.  Since this has also impacted the error
+    messaging, these tests aim to probe those cases.
+    """
+
+    def _check_merge_error(self, attrs_1, attrs_2, expected_message):
+        """
+        Check the error from a merge failure caused by a mismatch of attributes.
+
+        Build a pair of cubes with given attributes, merge them + check for a match
+        to the expected error message.
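+
+        For example (purely illustrative) :
+            self._check_merge_error(
+                attrs_1=dict(a=1),
+                attrs_2=dict(a=2),
+                expected_message="cube.attributes values differ for keys: 'a'",
+            )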
+ """ + cube_1 = iris.cube.Cube( + [0], + aux_coords_and_dims=[(AuxCoord([1], long_name="x"), None)], + attributes=attrs_1, + ) + cube_2 = iris.cube.Cube( + [0], + aux_coords_and_dims=[(AuxCoord([2], long_name="x"), None)], + attributes=attrs_2, + ) + with self.assertRaisesRegex( + iris.exceptions.MergeError, expected_message + ): + iris.cube.CubeList([cube_1, cube_2]).merge_cube() + + def test_keys_differ__single(self): + self._check_merge_error( + attrs_1=dict(a=1, b=2), + attrs_2=dict(a=1), + # Note: matching key 'a' does *not* appear in the message + expected_message="cube.attributes keys differ: 'b'", + ) + + def test_keys_differ__multiple(self): + self._check_merge_error( + attrs_1=dict(a=1, b=2), + attrs_2=dict(a=1, c=2), + expected_message="cube.attributes keys differ: 'b', 'c'", + ) + + def test_values_differ__single(self): + self._check_merge_error( + attrs_1=dict(a=1, b=2), # Note: matching key 'a' does not appear + attrs_2=dict(a=1, b=3), + expected_message="cube.attributes values differ for keys: 'b'", + ) + + def test_values_differ__multiple(self): + self._check_merge_error( + attrs_1=dict(a=1, b=2), + attrs_2=dict(a=12, b=22), + expected_message="cube.attributes values differ for keys: 'a', 'b'", + ) + + def test_splitattrs_keys_local_global_mismatch(self): + # Since Cube.attributes is now a "split-attributes" dictionary, it is now + # possible to have "cube1.attributes != cube1.attributes", but also + # "set(cube1.attributes.keys()) == set(cube2.attributes.keys())". + # I.E. it is now necessary to specifically compare ".globals" and ".locals" to + # see *what* differs between two attributes dictionaries. + self._check_merge_error( + attrs_1=CubeAttrsDict(globals=dict(a=1), locals=dict(b=2)), + attrs_2=CubeAttrsDict(locals=dict(a=2)), + expected_message="cube.attributes keys differ: 'a', 'b'", + ) + + def test_splitattrs_keys_local_match_masks_global_mismatch(self): + self._check_merge_error( + attrs_1=CubeAttrsDict(globals=dict(a=1), locals=dict(a=3)), + attrs_2=CubeAttrsDict(globals=dict(a=2), locals=dict(a=3)), + expected_message="cube.attributes values differ for keys: 'a'", + ) + + if __name__ == "__main__": tests.main() diff --git a/lib/iris/tests/unit/analysis/area_weighted/test_AreaWeightedRegridder.py b/lib/iris/tests/unit/analysis/area_weighted/test_AreaWeightedRegridder.py index 2d873ad011..789426e11b 100644 --- a/lib/iris/tests/unit/analysis/area_weighted/test_AreaWeightedRegridder.py +++ b/lib/iris/tests/unit/analysis/area_weighted/test_AreaWeightedRegridder.py @@ -50,7 +50,7 @@ def check_mdtol(self, mdtol=None): _regrid_info = _regrid_area_weighted_rectilinear_src_and_grid__prepare( src_grid, target_grid ) - self.assertEqual(len(_regrid_info), 10) + self.assertEqual(len(_regrid_info), 9) with mock.patch( "iris.analysis._area_weighted." 
"_regrid_area_weighted_rectilinear_src_and_grid__prepare", diff --git a/lib/iris/tests/unit/common/metadata/test_CubeMetadata.py b/lib/iris/tests/unit/common/metadata/test_CubeMetadata.py index 382607dca5..4425ba62d7 100644 --- a/lib/iris/tests/unit/common/metadata/test_CubeMetadata.py +++ b/lib/iris/tests/unit/common/metadata/test_CubeMetadata.py @@ -15,8 +15,11 @@ import unittest.mock as mock from unittest.mock import sentinel +import pytest + from iris.common.lenient import _LENIENT, _qualname from iris.common.metadata import BaseMetadata, CubeMetadata +from iris.cube import CubeAttrsDict def _make_metadata( @@ -90,9 +93,360 @@ def test_bases(self): self.assertTrue(issubclass(self.cls, BaseMetadata)) -class Test___eq__(tests.IrisTest): - def setUp(self): - self.values = dict( +@pytest.fixture(params=CubeMetadata._fields) +def fieldname(request): + """Parametrize testing over all CubeMetadata field names.""" + return request.param + + +@pytest.fixture(params=["strict", "lenient"]) +def op_leniency(request): + """Parametrize testing over strict or lenient operation.""" + return request.param + + +@pytest.fixture(params=["primaryAA", "primaryAX", "primaryAB"]) +def primary_values(request): + """ + Parametrize over the possible non-trivial pairs of operation values. + + The parameters all provide two attribute values which are the left- and right-hand + arguments to the tested operation. The attribute values are single characters from + the end of the parameter name -- except that "X" denotes a "missing" attribute. + + The possible cases are: + + * one side has a value and the other is missing + * left and right have the same non-missing value + * left and right have different non-missing values + """ + return request.param + + +@pytest.fixture(params=[False, True], ids=["primaryLocal", "primaryGlobal"]) +def primary_is_global_not_local(request): + """Parametrize split-attribute testing over "global" or "local" attribute types.""" + return request.param + + +@pytest.fixture(params=[False, True], ids=["leftrightL2R", "leftrightR2L"]) +def order_reversed(request): + """Parametrize split-attribute testing over "left OP right" or "right OP left".""" + return request.param + + +# Define the expected results for split-attribute testing. +# This dictionary records the expected results for the various possible arrangements of +# values of a single attribute in the "left" and "right" inputs of a CubeMetadata +# operation. +# The possible operations are "equal", "combine" or "difference", and may all be +# performed "strict" or "lenient". +# N.B. the *same* results should also apply when left+right are swapped, with a suitable +# adjustment to the result value. Likewise, results should be the same for either +# global- or local-style attributes. +_ALL_RESULTS = { + "equal": { + "primaryAA": {"lenient": True, "strict": True}, + "primaryAX": {"lenient": True, "strict": False}, + "primaryAB": {"lenient": False, "strict": False}, + }, + "combine": { + "primaryAA": {"lenient": "A", "strict": "A"}, + "primaryAX": {"lenient": "A", "strict": None}, + "primaryAB": {"lenient": None, "strict": None}, + }, + "difference": { + "primaryAA": {"lenient": None, "strict": None}, + "primaryAX": {"lenient": None, "strict": ("A", None)}, + "primaryAB": {"lenient": ("A", "B"), "strict": ("A", "B")}, + }, +} +# A fixed attribute name used for all the split-attribute testing. 
+_TEST_ATTRNAME = "_test_attr_"
+
+
+def extract_attribute_value(split_dict, extract_global):
+    """
+    Extract a test-attribute value from a split-attribute dictionary.
+
+    Parameters
+    ----------
+    split_dict : CubeAttrsDict
+        a split dictionary from an operation result
+    extract_global : bool
+        whether to extract values of the global, or local, `_TEST_ATTRNAME` attribute
+
+    Returns
+    -------
+    str | None
+    """
+    if extract_global:
+        result = split_dict.globals.get(_TEST_ATTRNAME, None)
+    else:
+        result = split_dict.locals.get(_TEST_ATTRNAME, None)
+    return result
+
+
+def extract_result_value(input, extract_global):
+    """
+    Extract the value(s) of the main test attribute from an operation result.
+
+    Parameters
+    ----------
+    input : bool | CubeMetadata
+        an operation result : the structure varies for the three different operations.
+    extract_global : bool
+        whether to return values of a global, or local, `_TEST_ATTRNAME` attribute.
+
+    Returns
+    -------
+    None | bool | str | tuple[None | str]
+        result value(s)
+    """
+    if not isinstance(input, CubeMetadata):
+        # Result is either boolean (for "equal") or None (for "difference").
+        result = input
+    else:
+        # Result is a CubeMetadata.  Get the value(s) of the required attribute.
+        result = input.attributes
+
+        if isinstance(result, CubeAttrsDict):
+            result = extract_attribute_value(result, extract_global)
+        else:
+            # For "difference", input.attributes is a *pair* of dictionaries.
+            assert isinstance(result, tuple)
+            result = tuple(
+                [
+                    extract_attribute_value(dic, extract_global)
+                    for dic in result
+                ]
+            )
+            if result == (None, None):
+                # This value occurs when the desired attribute is *missing* from a
+                # difference result, but other (secondary) attributes were *different*.
+                # We want only differences of the *target* attribute, so convert these
+                # to a plain 'no difference', for expected-result testing purposes.
+                result = None
+
+    return result
+
+
+def make_attrsdict(value):
+    """
+    Return a dictionary containing a test attribute with the given value.
+
+    If the value is "X", the attribute is absent (result is empty dict).
+    """
+    if value == "X":
+        # Translate an "X" input as "missing".
+        result = {}
+    else:
+        result = {_TEST_ATTRNAME: value}
+    return result
+
+
+def check_splitattrs_testcase(
+    operation_name: str,
+    check_is_lenient: bool,
+    primary_inputs: str = "AA",  # two character values
+    secondary_inputs: str = "XX",  # two character values
+    check_global_not_local: bool = True,
+    check_reversed: bool = False,
+):
+    """
+    Test a metadata operation with split-attributes against known expected results.
+
+    Parameters
+    ----------
+    operation_name : str
+        One of "equal", "combine" or "difference".
+    check_is_lenient : bool
+        Whether the tested operation is performed 'lenient' or 'strict'.
+    primary_inputs : str
+        A pair of characters defining left + right attribute values for the operands of
+        the operation.
+    secondary_inputs : str
+        A further pair of values for an attribute of the same name but "other" type
+        ( i.e. global/local when the main test is local/global ).
+    check_global_not_local : bool
+        If `True` then the primary operands, and the tested result values, are *global*
+        attributes, and the secondary ones are local.
+        Otherwise, the other way around.
+    check_reversed : bool
+        If True, the left and right operands are exchanged, and the expected value
+        modified accordingly.
+
+    Notes
+    -----
+    The expected result of an operation is mostly defined by : the operation applied;
+    the main "primary" inputs; and the lenient/strict mode.
+
+    In the case of the "equal" operation, however, the expected result is simply
+    set to `False` if the secondary inputs do not match.
+
+    Calling with different values for the keywords aims to show that the main
+    operation result has the expected value from _ALL_RESULTS, the ***same in
+    essentially all cases*** ( though modified in specific ways for some factors ).
+
+    This regularity also demonstrates the required independence over the other
+    test-factors, i.e. global/local attribute type, and right-left order.
+    """
+    # Just for comfort, check that inputs are all one of a few single characters.
+    assert all(
+        (item in list("ABCDX")) for item in (primary_inputs + secondary_inputs)
+    )
+    # Interpret "primary" and "secondary" inputs as "global" and "local" attributes.
+    if check_global_not_local:
+        global_values, local_values = primary_inputs, secondary_inputs
+    else:
+        local_values, global_values = primary_inputs, secondary_inputs
+
+    # Form 2 inputs to the operation : Make left+right split-attribute input
+    # dictionaries, with both the primary and secondary attribute value settings.
+    input_dicts = [
+        CubeAttrsDict(
+            globals=make_attrsdict(global_value),
+            locals=make_attrsdict(local_value),
+        )
+        for global_value, local_value in zip(global_values, local_values)
+    ]
+    # Make left+right CubeMetadata with those attributes, other fields all blank.
+    input_l, input_r = [
+        CubeMetadata(
+            **{
+                field: attrs if field == "attributes" else None
+                for field in CubeMetadata._fields
+            }
+        )
+        for attrs in input_dicts
+    ]
+
+    if check_reversed:
+        # Swap the inputs to perform a 'reversed' calculation.
+        input_l, input_r = input_r, input_l
+
+    # Run the actual operation
+    result = getattr(input_l, operation_name)(
+        input_r, lenient=check_is_lenient
+    )
+
+    if operation_name == "difference" and check_reversed:
+        # Adjust the result of a "reversed" operation to the 'normal' way round.
+        # ( N.B. only "difference" results are affected by reversal. )
+        if isinstance(result, CubeMetadata):
+            result = result._replace(attributes=result.attributes[::-1])
+
+    # Extract, from the operation result, the value to be tested against "expected".
+    result = extract_result_value(result, check_global_not_local)
+
+    # Get the *expected* result for this operation.
+    which = "lenient" if check_is_lenient else "strict"
+    primary_key = "primary" + primary_inputs
+    expected = _ALL_RESULTS[operation_name][primary_key][which]
+    if operation_name == "equal" and expected:
+        # Account for the equality cases made `False` by mismatched secondary values.
+        left, right = secondary_inputs
+        secondaries_same = left == right or (
+            check_is_lenient and "X" in (left, right)
+        )
+        if not secondaries_same:
+            expected = False
+
+    # Check that actual extracted operation result matches the "expected" one.
+    assert result == expected
+
+
+class MixinSplitattrsMatrixTests:
+    """
+    Define split-attributes tests to perform on all the metadata operations.
+
+    This is inherited by the testclass for each operation :
+    i.e. Test___eq__, Test_combine and Test_difference
+    """
+
+    # Define the operation name : set in each inheritor
+    operation_name = None
+
+    def test_splitattrs_cases(
+        self,
+        op_leniency,
+        primary_values,
+        primary_is_global_not_local,
+        order_reversed,
+    ):
+        """
+        Check the basic operation against the expected result from _ALL_RESULTS.
+
+        Parametrisation checks this for all combinations of various factors :
+
+        * possible arrangements of the primary values
+        * strict and lenient
+        * global- and local-type attributes
+        * left-to-right or right-to-left operation order.
+        """
+        primary_inputs = primary_values[-2:]
+        check_is_lenient = {"strict": False, "lenient": True}[op_leniency]
+        check_splitattrs_testcase(
+            operation_name=self.operation_name,
+            check_is_lenient=check_is_lenient,
+            primary_inputs=primary_inputs,
+            secondary_inputs="XX",
+            check_global_not_local=primary_is_global_not_local,
+            check_reversed=order_reversed,
+        )
+
+    @pytest.mark.parametrize(
+        "secondary_values",
+        [
+            "secondaryXX",
+            "secondaryCX",
+            "secondaryXC",
+            "secondaryCC",
+            "secondaryCD",
+        ]
+        # NOTE: test CX as well as XC, since the primary choices include "AX" but
+        # not "XA".
+    )
+    def test_splitattrs_global_local_independence(
+        self,
+        op_leniency,
+        primary_values,
+        secondary_values,
+    ):
+        """
+        Check that results are (mostly) independent of the "other" type attributes.
+
+        The operation on attributes of the 'primary' type (global/local) should be
+        basically unaffected by those of the 'secondary' type (--> local/global).
+
+        This is not really true for equality, so we adjust those results to compensate.
+        See :func:`check_splitattrs_testcase` for explanations.
+
+        Notes
+        -----
+        We provide this *separate* test for global/local attribute independence,
+        parametrized over selected relevant arrangements of the 'secondary' values.
+        We *don't* test with reversed order or "local" primary inputs, because matrix
+        testing over *all* relevant factors produces too many possible combinations.
+        """
+        primary_inputs = primary_values[-2:]
+        secondary_inputs = secondary_values[-2:]
+        check_is_lenient = {"strict": False, "lenient": True}[op_leniency]
+        check_splitattrs_testcase(
+            operation_name=self.operation_name,
+            check_is_lenient=check_is_lenient,
+            primary_inputs=primary_inputs,
+            secondary_inputs=secondary_inputs,
+            check_global_not_local=True,
+            check_reversed=False,
+        )
+
+
+class Test___eq__(MixinSplitattrsMatrixTests):
+    operation_name = "equal"
+
+    @pytest.fixture(autouse=True)
+    def setup(self):
+        self.lvalues = dict(
             standard_name=sentinel.standard_name,
             long_name=sentinel.long_name,
             var_name=sentinel.var_name,
@@ -101,17 +455,19 @@ def setUp(self):
             attributes=dict(),
             cell_methods=sentinel.cell_methods,
         )
+        # Setup another values dict with all-distinct content objects.
+ self.rvalues = deepcopy(self.lvalues) self.dummy = sentinel.dummy self.cls = CubeMetadata def test_wraps_docstring(self): - self.assertEqual(BaseMetadata.__eq__.__doc__, self.cls.__eq__.__doc__) + assert self.cls.__eq__.__doc__ == BaseMetadata.__eq__.__doc__ def test_lenient_service(self): qualname___eq__ = _qualname(self.cls.__eq__) - self.assertIn(qualname___eq__, _LENIENT) - self.assertTrue(_LENIENT[qualname___eq__]) - self.assertTrue(_LENIENT[self.cls.__eq__]) + assert qualname___eq__ in _LENIENT + assert _LENIENT[qualname___eq__] + assert _LENIENT[self.cls.__eq__] def test_call(self): other = sentinel.other @@ -122,107 +478,114 @@ def test_call(self): ) as mocker: result = metadata.__eq__(other) - self.assertEqual(return_value, result) - self.assertEqual(1, mocker.call_count) - (arg,), kwargs = mocker.call_args - self.assertEqual(other, arg) - self.assertEqual(dict(), kwargs) - - def test_op_lenient_same(self): - lmetadata = self.cls(**self.values) - rmetadata = self.cls(**self.values) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertTrue(lmetadata.__eq__(rmetadata)) - self.assertTrue(rmetadata.__eq__(lmetadata)) - - def test_op_lenient_same_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["var_name"] = None - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertTrue(lmetadata.__eq__(rmetadata)) - self.assertTrue(rmetadata.__eq__(lmetadata)) - - def test_op_lenient_same_cell_methods_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = None - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertFalse(lmetadata.__eq__(rmetadata)) - self.assertFalse(rmetadata.__eq__(lmetadata)) - - def test_op_lenient_different(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["units"] = self.dummy - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertFalse(lmetadata.__eq__(rmetadata)) - self.assertFalse(rmetadata.__eq__(lmetadata)) - - def test_op_lenient_different_cell_methods(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = self.dummy - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertFalse(lmetadata.__eq__(rmetadata)) - self.assertFalse(rmetadata.__eq__(lmetadata)) - - def test_op_strict_same(self): - lmetadata = self.cls(**self.values) - rmetadata = self.cls(**self.values) - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertTrue(lmetadata.__eq__(rmetadata)) - self.assertTrue(rmetadata.__eq__(lmetadata)) - - def test_op_strict_different(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["long_name"] = self.dummy - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertFalse(lmetadata.__eq__(rmetadata)) - self.assertFalse(rmetadata.__eq__(lmetadata)) - - def test_op_strict_different_cell_methods(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = self.dummy - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertFalse(lmetadata.__eq__(rmetadata)) - self.assertFalse(rmetadata.__eq__(lmetadata)) - - def 
test_op_strict_different_none(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["long_name"] = None
-        rmetadata = self.cls(**right)
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=False):
-            self.assertFalse(lmetadata.__eq__(rmetadata))
-            self.assertFalse(rmetadata.__eq__(lmetadata))
-
-    def test_op_strict_different_measure_none(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["cell_methods"] = None
-        rmetadata = self.cls(**right)
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=False):
-            self.assertFalse(lmetadata.__eq__(rmetadata))
-            self.assertFalse(rmetadata.__eq__(lmetadata))
+        assert return_value == result
+        assert mocker.call_args_list == [mock.call(other)]
+
+    def test_op_same(self, op_leniency):
+        # Check op all-same content, but all-new data.
+        # NOTE: test for both strict/lenient, should both work the same.
+        is_lenient = op_leniency == "lenient"
+        lmetadata = self.cls(**self.lvalues)
+        rmetadata = self.cls(**self.rvalues)
+
+        with mock.patch(
+            "iris.common.metadata._LENIENT", return_value=is_lenient
+        ):
+            # Check equality both l==r and r==l.
+            assert lmetadata.__eq__(rmetadata)
+            assert rmetadata.__eq__(lmetadata)
+
+    def test_op_different__none(self, fieldname, op_leniency):
+        # One side has field=value, and the other field=None, both strict + lenient.
+        if fieldname == "attributes":
+            # Must be a dict, cannot be None.
+            pytest.skip()
+        else:
+            is_lenient = op_leniency == "lenient"
+            lmetadata = self.cls(**self.lvalues)
+            self.rvalues.update({fieldname: None})
+            rmetadata = self.cls(**self.rvalues)
+            if fieldname in ("cell_methods", "standard_name", "units"):
+                # These ones are compared strictly
+                expect_success = False
+            elif fieldname in ("var_name", "long_name"):
+                # For other 'normal' fields : lenient succeeds, strict does not.
+                expect_success = is_lenient
+            else:
+                # Ensure we are handling all the different field cases
+                raise ValueError(
+                    f"{type(self).__name__} unhandled fieldname : {fieldname}"
+                )
+
+            with mock.patch(
+                "iris.common.metadata._LENIENT", return_value=is_lenient
+            ):
+                # Check equality both l==r and r==l.
+                assert lmetadata.__eq__(rmetadata) == expect_success
+                assert rmetadata.__eq__(lmetadata) == expect_success
+
+    def test_op_different__value(self, fieldname, op_leniency):
+        # Compare when a given field value is changed, both strict + lenient.
+        if fieldname == "attributes":
+            # Dicts have more possibilities: handled separately.
+            pytest.skip()
+        else:
+            is_lenient = op_leniency == "lenient"
+            lmetadata = self.cls(**self.lvalues)
+            self.rvalues.update({fieldname: self.dummy})
+            rmetadata = self.cls(**self.rvalues)
+            if fieldname in (
+                "cell_methods",
+                "standard_name",
+                "units",
+                "long_name",
+            ):
+                # These ones are compared strictly
+                expect_success = False
+            elif fieldname == "var_name":
+                # For other 'normal' fields : lenient succeeds, strict does not.
+                expect_success = is_lenient
+            else:
+                # Ensure we are handling all the different field cases
+                raise ValueError(
+                    f"{type(self).__name__} unhandled fieldname : {fieldname}"
+                )
+
+            with mock.patch(
+                "iris.common.metadata._LENIENT", return_value=is_lenient
+            ):
+                # Check equality both l==r and r==l.
+                assert lmetadata.__eq__(rmetadata) == expect_success
+                assert rmetadata.__eq__(lmetadata) == expect_success
+
+    def test_op_different__attribute_extra(self, op_leniency):
+        # Check when one set of attributes has an extra entry.
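+        # (e.g. lhs attributes == {} while rhs == {"_extra_": 1} : only the
+        # lenient comparison treats the missing key as compatible.)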
+ is_lenient = op_leniency == "lenient" + lmetadata = self.cls(**self.lvalues) + self.rvalues["attributes"]["_extra_"] = 1 + rmetadata = self.cls(**self.rvalues) + # This counts as equal *only* in the lenient case. + expect_success = is_lenient + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + # Check equality both l==r and r==l. + assert lmetadata.__eq__(rmetadata) == expect_success + assert rmetadata.__eq__(lmetadata) == expect_success + + def test_op_different__attribute_value(self, op_leniency): + # lhs and rhs have different values for an attribute, both strict + lenient. + is_lenient = op_leniency == "lenient" + self.lvalues["attributes"]["_extra_"] = mock.sentinel.value1 + self.rvalues["attributes"]["_extra_"] = mock.sentinel.value2 + lmetadata = self.cls(**self.lvalues) + rmetadata = self.cls(**self.rvalues) + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + # This should ALWAYS fail. + assert not lmetadata.__eq__(rmetadata) + assert not rmetadata.__eq__(lmetadata) class Test___lt__(tests.IrisTest): @@ -256,9 +619,12 @@ def test__ignore_attributes_cell_methods(self): self.assertFalse(result) -class Test_combine(tests.IrisTest): - def setUp(self): - self.values = dict( +class Test_combine(MixinSplitattrsMatrixTests): + operation_name = "combine" + + @pytest.fixture(autouse=True) + def setup(self): + self.lvalues = dict( standard_name=sentinel.standard_name, long_name=sentinel.long_name, var_name=sentinel.var_name, @@ -266,20 +632,20 @@ def setUp(self): attributes=sentinel.attributes, cell_methods=sentinel.cell_methods, ) + # Get a second copy with all-new objects. + self.rvalues = deepcopy(self.lvalues) self.dummy = sentinel.dummy self.cls = CubeMetadata self.none = self.cls(*(None,) * len(self.cls._fields)) def test_wraps_docstring(self): - self.assertEqual( - BaseMetadata.combine.__doc__, self.cls.combine.__doc__ - ) + assert self.cls.combine.__doc__ == BaseMetadata.combine.__doc__ def test_lenient_service(self): qualname_combine = _qualname(self.cls.combine) - self.assertIn(qualname_combine, _LENIENT) - self.assertTrue(_LENIENT[qualname_combine]) - self.assertTrue(_LENIENT[self.cls.combine]) + assert qualname_combine in _LENIENT + assert _LENIENT[qualname_combine] + assert _LENIENT[self.cls.combine] def test_lenient_default(self): other = sentinel.other @@ -289,11 +655,8 @@ def test_lenient_default(self): ) as mocker: result = self.none.combine(other) - self.assertEqual(return_value, result) - self.assertEqual(1, mocker.call_count) - (arg,), kwargs = mocker.call_args - self.assertEqual(other, arg) - self.assertEqual(dict(lenient=None), kwargs) + assert return_value == result + assert mocker.call_args_list == [mock.call(other, lenient=None)] def test_lenient(self): other = sentinel.other @@ -304,149 +667,165 @@ def test_lenient(self): ) as mocker: result = self.none.combine(other, lenient=lenient) - self.assertEqual(return_value, result) - self.assertEqual(1, mocker.call_count) - (arg,), kwargs = mocker.call_args - self.assertEqual(other, arg) - self.assertEqual(dict(lenient=lenient), kwargs) + assert return_value == result + assert mocker.call_args_list == [mock.call(other, lenient=lenient)] + + def test_op_same(self, op_leniency): + # Result is same as either input, both strict + lenient. 
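+        # (self.rvalues is an independent copy of self.lvalues, so the two
+        # inputs are equal in value without sharing the same dict.)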
+        is_lenient = op_leniency == "lenient"
+        lmetadata = self.cls(**self.lvalues)
+        rmetadata = self.cls(**self.rvalues)
+        expected = self.lvalues
+
+        with mock.patch(
+            "iris.common.metadata._LENIENT", return_value=is_lenient
+        ):
+            # Check both l+r and r+l
+            assert lmetadata.combine(rmetadata)._asdict() == expected
+            assert rmetadata.combine(lmetadata)._asdict() == expected
+
+    def test_op_different__none(self, fieldname, op_leniency):
+        # One side has field=value, and the other field=None, both strict + lenient.
+        if fieldname == "attributes":
+            # Can't be None : Tested separately
+            pytest.skip()
+
+        is_lenient = op_leniency == "lenient"
+
+        lmetadata = self.cls(**self.lvalues)
+        # Cancel one setting in the rhs argument.
+        self.rvalues[fieldname] = None
+        rmetadata = self.cls(**self.rvalues)
+
+        if fieldname in ("cell_methods", "units"):
+            # NB cell-methods and units *always* strict behaviour.
+            # strict form : take only those which both have set
+            strict_result = True
+        elif fieldname in ("standard_name", "long_name", "var_name"):
+            strict_result = not is_lenient
+        else:
+            # Ensure we are handling all the different field cases
+            raise ValueError(
+                f"{type(self).__name__} unhandled fieldname : {fieldname}"
+            )
 
-    def test_op_lenient_same(self):
-        lmetadata = self.cls(**self.values)
-        rmetadata = self.cls(**self.values)
-        expected = self.values
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=True):
-            self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict())
-            self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict())
-
-    def test_op_lenient_same_none(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["var_name"] = None
-        rmetadata = self.cls(**right)
-        expected = self.values
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=True):
-            self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict())
-            self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict())
-
-    def test_op_lenient_same_cell_methods_none(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["cell_methods"] = None
-        rmetadata = self.cls(**right)
-        expected = right.copy()
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=True):
-            self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict())
-            self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict())
-
-    def test_op_lenient_different(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["units"] = self.dummy
-        rmetadata = self.cls(**right)
-        expected = self.values.copy()
-        expected["units"] = None
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=True):
-            self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict())
-            self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict())
-
-    def test_op_lenient_different_cell_methods(self):
-        lmetadata = self.cls(**self.values)
-        right = self.values.copy()
-        right["cell_methods"] = self.dummy
-        rmetadata = self.cls(**right)
-        expected = self.values.copy()
-        expected["cell_methods"] = None
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=True):
-            self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict())
-            self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict())
-
-    def test_op_strict_same(self):
-        lmetadata = self.cls(**self.values)
-        rmetadata = self.cls(**self.values)
-        expected = self.values.copy()
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=False):
-            self.assertEqual(expected, 
lmetadata.combine(rmetadata)._asdict()) - self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict()) - - def test_op_strict_different(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["long_name"] = self.dummy - rmetadata = self.cls(**right) - expected = self.values.copy() - expected["long_name"] = None - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict()) - self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict()) - - def test_op_strict_different_cell_methods(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = self.dummy - rmetadata = self.cls(**right) - expected = self.values.copy() - expected["cell_methods"] = None - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict()) - self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict()) - - def test_op_strict_different_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["long_name"] = None - rmetadata = self.cls(**right) - expected = self.values.copy() - expected["long_name"] = None - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict()) - self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict()) - - def test_op_strict_different_cell_methods_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = None - rmetadata = self.cls(**right) - expected = self.values.copy() - expected["cell_methods"] = None - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual(expected, lmetadata.combine(rmetadata)._asdict()) - self.assertEqual(expected, rmetadata.combine(lmetadata)._asdict()) - - -class Test_difference(tests.IrisTest): - def setUp(self): - self.values = dict( + if strict_result: + # include only those which both have + expected = self.rvalues + else: + # also include those which only 1 has + expected = self.lvalues + + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + # Check both l+r and r+l + assert lmetadata.combine(rmetadata)._asdict() == expected + assert rmetadata.combine(lmetadata)._asdict() == expected + + def test_op_different__value(self, fieldname, op_leniency): + # One field has different value for lhs/rhs, both strict + lenient. + if fieldname == "attributes": + # Attribute behaviours are tested separately + pytest.skip() + + is_lenient = op_leniency == "lenient" + + self.lvalues[fieldname] = mock.sentinel.value1 + self.rvalues[fieldname] = mock.sentinel.value2 + lmetadata = self.cls(**self.lvalues) + rmetadata = self.cls(**self.rvalues) + + # In all cases, this field should be None in the result : leniency has no effect + expected = self.lvalues.copy() + expected[fieldname] = None + + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + # Check both l+r and r+l + assert lmetadata.combine(rmetadata)._asdict() == expected + assert rmetadata.combine(lmetadata)._asdict() == expected + + def test_op_different__attribute_extra(self, op_leniency): + # One field has an extra attribute, both strict + lenient. 
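+        # (Lenient combine keeps the union of the two attribute sets;
+        # strict combine keeps only the intersection.)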
+        is_lenient = op_leniency == "lenient"
+
+        self.lvalues["attributes"] = {"_a_common_": mock.sentinel.dummy}
+        self.rvalues["attributes"] = self.lvalues["attributes"].copy()
+        self.rvalues["attributes"]["_extra_"] = mock.sentinel.testvalue
+        lmetadata = self.cls(**self.lvalues)
+        rmetadata = self.cls(**self.rvalues)
+
+        if is_lenient:
+            # the extra attribute should appear in the result ..
+            expected = self.rvalues
+        else:
+            # .. it should not
+            expected = self.lvalues
+
+        with mock.patch(
+            "iris.common.metadata._LENIENT", return_value=is_lenient
+        ):
+            # Check both l+r and r+l
+            assert lmetadata.combine(rmetadata)._asdict() == expected
+            assert rmetadata.combine(lmetadata)._asdict() == expected
+
+    def test_op_different__attribute_value(self, op_leniency):
+        # lhs and rhs have different values for an attribute, both strict + lenient.
+        is_lenient = op_leniency == "lenient"
+
+        self.lvalues["attributes"] = {
+            "_a_common_": self.dummy,
+            "_b_common_": mock.sentinel.value1,
+        }
+        self.rvalues["attributes"] = {
+            "_a_common_": self.dummy,
+            "_b_common_": mock.sentinel.value2,
+        }
+        lmetadata = self.cls(**self.lvalues)
+        rmetadata = self.cls(**self.rvalues)
+
+        # Result has entirely EMPTY attributes (whether strict or lenient).
+        # TODO: is this maybe a mistake of the existing implementation ?
+        expected = self.lvalues.copy()
+        expected["attributes"] = None
+
+        with mock.patch(
+            "iris.common.metadata._LENIENT", return_value=is_lenient
+        ):
+            # Check both l+r and r+l
+            assert lmetadata.combine(rmetadata)._asdict() == expected
+            assert rmetadata.combine(lmetadata)._asdict() == expected
+
+
+class Test_difference(MixinSplitattrsMatrixTests):
+    operation_name = "difference"
+
+    @pytest.fixture(autouse=True)
+    def setup(self):
+        self.lvalues = dict(
             standard_name=sentinel.standard_name,
             long_name=sentinel.long_name,
             var_name=sentinel.var_name,
             units=sentinel.units,
-            attributes=sentinel.attributes,
+            attributes=dict(),  # MUST be a dict
             cell_methods=sentinel.cell_methods,
         )
+        # Make a copy with all-different objects in it.
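+        # (a separate dict, so the rhs can be modified without affecting the lhs)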
+ self.rvalues = deepcopy(self.lvalues) self.dummy = sentinel.dummy self.cls = CubeMetadata self.none = self.cls(*(None,) * len(self.cls._fields)) def test_wraps_docstring(self): - self.assertEqual( - BaseMetadata.difference.__doc__, self.cls.difference.__doc__ - ) + assert self.cls.difference.__doc__ == BaseMetadata.difference.__doc__ def test_lenient_service(self): qualname_difference = _qualname(self.cls.difference) - self.assertIn(qualname_difference, _LENIENT) - self.assertTrue(_LENIENT[qualname_difference]) - self.assertTrue(_LENIENT[self.cls.difference]) + assert qualname_difference in _LENIENT + assert _LENIENT[qualname_difference] + assert _LENIENT[self.cls.difference] def test_lenient_default(self): other = sentinel.other @@ -456,11 +835,8 @@ def test_lenient_default(self): ) as mocker: result = self.none.difference(other) - self.assertEqual(return_value, result) - self.assertEqual(1, mocker.call_count) - (arg,), kwargs = mocker.call_args - self.assertEqual(other, arg) - self.assertEqual(dict(lenient=None), kwargs) + assert return_value == result + assert mocker.call_args_list == [mock.call(other, lenient=None)] def test_lenient(self): other = sentinel.other @@ -471,178 +847,149 @@ def test_lenient(self): ) as mocker: result = self.none.difference(other, lenient=lenient) - self.assertEqual(return_value, result) - self.assertEqual(1, mocker.call_count) - (arg,), kwargs = mocker.call_args - self.assertEqual(other, arg) - self.assertEqual(dict(lenient=lenient), kwargs) - - def test_op_lenient_same(self): - lmetadata = self.cls(**self.values) - rmetadata = self.cls(**self.values) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertIsNone(lmetadata.difference(rmetadata)) - self.assertIsNone(rmetadata.difference(lmetadata)) - - def test_op_lenient_same_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["var_name"] = None - rmetadata = self.cls(**right) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertIsNone(lmetadata.difference(rmetadata)) - self.assertIsNone(rmetadata.difference(lmetadata)) - - def test_op_lenient_same_cell_methods_none(self): - lmetadata = self.cls(**self.values) - right = self.values.copy() - right["cell_methods"] = None - rmetadata = self.cls(**right) - lexpected = deepcopy(self.none)._asdict() - lexpected["cell_methods"] = (sentinel.cell_methods, None) - rexpected = deepcopy(self.none)._asdict() - rexpected["cell_methods"] = (None, sentinel.cell_methods) - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() - ) - - def test_op_lenient_different(self): - left = self.values.copy() - lmetadata = self.cls(**left) - right = self.values.copy() - right["units"] = self.dummy - rmetadata = self.cls(**right) - lexpected = deepcopy(self.none)._asdict() - lexpected["units"] = (left["units"], right["units"]) - rexpected = deepcopy(self.none)._asdict() - rexpected["units"] = lexpected["units"][::-1] - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() - ) - - def test_op_lenient_different_cell_methods(self): - left = self.values.copy() - lmetadata = self.cls(**left) - right = self.values.copy() - right["cell_methods"] = self.dummy - 
rmetadata = self.cls(**right) - lexpected = deepcopy(self.none)._asdict() - lexpected["cell_methods"] = ( - left["cell_methods"], - right["cell_methods"], - ) - rexpected = deepcopy(self.none)._asdict() - rexpected["cell_methods"] = lexpected["cell_methods"][::-1] - - with mock.patch("iris.common.metadata._LENIENT", return_value=True): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() - ) - - def test_op_strict_same(self): - lmetadata = self.cls(**self.values) - rmetadata = self.cls(**self.values) - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertIsNone(lmetadata.difference(rmetadata)) - self.assertIsNone(rmetadata.difference(lmetadata)) - - def test_op_strict_different(self): - left = self.values.copy() - lmetadata = self.cls(**left) - right = self.values.copy() - right["long_name"] = self.dummy - rmetadata = self.cls(**right) - lexpected = deepcopy(self.none)._asdict() - lexpected["long_name"] = (left["long_name"], right["long_name"]) - rexpected = deepcopy(self.none)._asdict() - rexpected["long_name"] = lexpected["long_name"][::-1] - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() - ) - - def test_op_strict_different_cell_methods(self): - left = self.values.copy() - lmetadata = self.cls(**left) - right = self.values.copy() - right["cell_methods"] = self.dummy - rmetadata = self.cls(**right) - lexpected = deepcopy(self.none)._asdict() - lexpected["cell_methods"] = ( - left["cell_methods"], - right["cell_methods"], - ) - rexpected = deepcopy(self.none)._asdict() - rexpected["cell_methods"] = lexpected["cell_methods"][::-1] - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() + assert return_value == result + assert mocker.call_args_list == [mock.call(other, lenient=lenient)] + + def test_op_same(self, op_leniency): + is_lenient = op_leniency == "lenient" + lmetadata = self.cls(**self.lvalues) + rmetadata = self.cls(**self.rvalues) + + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + assert lmetadata.difference(rmetadata) is None + assert rmetadata.difference(lmetadata) is None + + def test_op_different__none(self, fieldname, op_leniency): + # One side has field=value, and the other field=None, both strict + lenient. + if fieldname in ("attributes",): + # These cannot properly be set to 'None'. Tested elsewhere. 
+            pytest.skip()
+
+        is_lenient = op_leniency == "lenient"
+
+        lmetadata = self.cls(**self.lvalues)
+        self.rvalues[fieldname] = None
+        rmetadata = self.cls(**self.rvalues)
+
+        if fieldname in ("units", "cell_methods"):
+            # These ones are always "strict"
+            strict_result = True
+        elif fieldname in ("standard_name", "long_name", "var_name"):
+            strict_result = not is_lenient
+        else:
+            # Ensure we are handling all the different field cases
+            raise ValueError(
+                f"{type(self).__name__} unhandled fieldname : {fieldname}"
             )
 
-    def test_op_strict_different_none(self):
-        left = self.values.copy()
-        lmetadata = self.cls(**left)
-        right = self.values.copy()
-        right["long_name"] = None
-        rmetadata = self.cls(**right)
-        lexpected = deepcopy(self.none)._asdict()
-        lexpected["long_name"] = (left["long_name"], right["long_name"])
-        rexpected = deepcopy(self.none)._asdict()
-        rexpected["long_name"] = lexpected["long_name"][::-1]
-
-        with mock.patch("iris.common.metadata._LENIENT", return_value=False):
-            self.assertEqual(
-                lexpected, lmetadata.difference(rmetadata)._asdict()
+        if strict_result:
+            diffentry = tuple(
+                [getattr(mm, fieldname) for mm in (lmetadata, rmetadata)]
             )
-            self.assertEqual(
-                rexpected, rmetadata.difference(lmetadata)._asdict()
-            )
-
-    def test_op_strict_different_measure_none(self):
-        left = self.values.copy()
-        lmetadata = self.cls(**left)
-        right = self.values.copy()
-        right["cell_methods"] = None
-        rmetadata = self.cls(**right)
-        lexpected = deepcopy(self.none)._asdict()
-        lexpected["cell_methods"] = (
-            left["cell_methods"],
-            right["cell_methods"],
+            # NOTE: in these cases, the difference metadata will fail an == operation,
+            # because of the 'None' entries.
+            # But we can use metadata._asdict() and test that.
+            lexpected = self.none._asdict()
+            lexpected[fieldname] = diffentry
+            rexpected = lexpected.copy()
+            rexpected[fieldname] = diffentry[::-1]
+
+        with mock.patch(
+            "iris.common.metadata._LENIENT", return_value=is_lenient
+        ):
+            if strict_result:
+                assert lmetadata.difference(rmetadata)._asdict() == lexpected
+                assert rmetadata.difference(lmetadata)._asdict() == rexpected
+            else:
+                # Expect NO differences
+                assert lmetadata.difference(rmetadata) is None
+                assert rmetadata.difference(lmetadata) is None
+
+    def test_op_different__value(self, fieldname, op_leniency):
+        # One field has different value for lhs/rhs, both strict + lenient.
+        if fieldname == "attributes":
+            # Attribute behaviours are tested separately
+            pytest.skip()
+
+        self.lvalues[fieldname] = mock.sentinel.value1
+        self.rvalues[fieldname] = mock.sentinel.value2
+        lmetadata = self.cls(**self.lvalues)
+        rmetadata = self.cls(**self.rvalues)
+
+        # In all cases, this field should show a difference : leniency has no effect
+        ldiff_values = (mock.sentinel.value1, mock.sentinel.value2)
+        ldiff_metadata = self.none._asdict()
+        ldiff_metadata[fieldname] = ldiff_values
+        rdiff_metadata = self.none._asdict()
+        rdiff_metadata[fieldname] = ldiff_values[::-1]
+
+        # Check both l+r and r+l
+        assert lmetadata.difference(rmetadata)._asdict() == ldiff_metadata
+        assert rmetadata.difference(lmetadata)._asdict() == rdiff_metadata
+
+    def test_op_different__attribute_extra(self, op_leniency):
+        # One field has an extra attribute, both strict + lenient.
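+        # (Lenient difference ignores an attribute present on only one side;
+        # strict difference reports it as a pair of "missing"/"extra" dicts.)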
+ is_lenient = op_leniency == "lenient" + self.lvalues["attributes"] = {"_a_common_": self.dummy} + lmetadata = self.cls(**self.lvalues) + rvalues = deepcopy(self.lvalues) + rvalues["attributes"]["_b_extra_"] = mock.sentinel.extra + rmetadata = self.cls(**rvalues) + + if not is_lenient: + # In this case, attributes returns a "difference dictionary" + diffentry = tuple([{}, {"_b_extra_": mock.sentinel.extra}]) + lexpected = self.none._asdict() + lexpected["attributes"] = diffentry + rexpected = lexpected.copy() + rexpected["attributes"] = diffentry[::-1] + + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + if is_lenient: + # It recognises no difference + assert lmetadata.difference(rmetadata) is None + assert rmetadata.difference(lmetadata) is None + else: + # As calculated above + assert lmetadata.difference(rmetadata)._asdict() == lexpected + assert rmetadata.difference(lmetadata)._asdict() == rexpected + + def test_op_different__attribute_value(self, op_leniency): + # lhs and rhs have different values for an attribute, both strict + lenient. + is_lenient = op_leniency == "lenient" + self.lvalues["attributes"] = { + "_a_common_": self.dummy, + "_b_extra_": mock.sentinel.value1, + } + lmetadata = self.cls(**self.lvalues) + self.rvalues["attributes"] = { + "_a_common_": self.dummy, + "_b_extra_": mock.sentinel.value2, + } + rmetadata = self.cls(**self.rvalues) + + # In this case, attributes returns a "difference dictionary" + diffentry = tuple( + [ + {"_b_extra_": mock.sentinel.value1}, + {"_b_extra_": mock.sentinel.value2}, + ] ) - rexpected = deepcopy(self.none)._asdict() - rexpected["cell_methods"] = lexpected["cell_methods"][::-1] - - with mock.patch("iris.common.metadata._LENIENT", return_value=False): - self.assertEqual( - lexpected, lmetadata.difference(rmetadata)._asdict() - ) - self.assertEqual( - rexpected, rmetadata.difference(lmetadata)._asdict() - ) + lexpected = self.none._asdict() + lexpected["attributes"] = diffentry + rexpected = lexpected.copy() + rexpected["attributes"] = diffentry[::-1] + + with mock.patch( + "iris.common.metadata._LENIENT", return_value=is_lenient + ): + # As calculated above -- same for both strict + lenient + assert lmetadata.difference(rmetadata)._asdict() == lexpected + assert rmetadata.difference(lmetadata)._asdict() == rexpected class Test_equal(tests.IrisTest): diff --git a/lib/iris/tests/unit/common/mixin/test_LimitedAttributeDict.py b/lib/iris/tests/unit/common/mixin/test_LimitedAttributeDict.py index 7416bb9da5..d29a120f35 100644 --- a/lib/iris/tests/unit/common/mixin/test_LimitedAttributeDict.py +++ b/lib/iris/tests/unit/common/mixin/test_LimitedAttributeDict.py @@ -20,7 +20,7 @@ class Test(tests.IrisTest): def setUp(self): - self.forbidden_keys = LimitedAttributeDict._forbidden_keys + self.forbidden_keys = LimitedAttributeDict.CF_ATTRS_FORBIDDEN self.emsg = "{!r} is not a permitted attribute" def test__invalid_keys(self): diff --git a/lib/iris/tests/unit/conftest.py b/lib/iris/tests/unit/conftest.py new file mode 100644 index 0000000000..a4ddb89294 --- /dev/null +++ b/lib/iris/tests/unit/conftest.py @@ -0,0 +1,14 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. 
+"""Unit tests fixture infra-structure.""" +import pytest + +import iris + + +@pytest.fixture +def sample_coord(): + sample_coord = iris.coords.DimCoord(points=(1, 2, 3, 4, 5)) + return sample_coord diff --git a/lib/iris/tests/unit/coords/test_Coord.py b/lib/iris/tests/unit/coords/test_Coord.py index 1c9c3cce2d..14dcdf7ca0 100644 --- a/lib/iris/tests/unit/coords/test_Coord.py +++ b/lib/iris/tests/unit/coords/test_Coord.py @@ -14,6 +14,7 @@ import dask.array as da import numpy as np +import pytest import iris from iris.coords import AuxCoord, Coord, DimCoord @@ -1149,6 +1150,39 @@ def test_change_units(self): self.assertFalse(coord.climatological) +class TestIgnoreAxis: + def test_default(self, sample_coord): + assert sample_coord.ignore_axis is False + + def test_set_true(self, sample_coord): + sample_coord.ignore_axis = True + assert sample_coord.ignore_axis is True + + def test_set_random_value(self, sample_coord): + with pytest.raises( + ValueError, + match=r"'ignore_axis' can only be set to 'True' or 'False'", + ): + sample_coord.ignore_axis = "foo" + + @pytest.mark.parametrize( + "ignore_axis, copy_or_from, result", + [ + (True, "copy", True), + (True, "from_coord", True), + (False, "copy", False), + (False, "from_coord", False), + ], + ) + def test_copy_coord(self, ignore_axis, copy_or_from, result, sample_coord): + sample_coord.ignore_axis = ignore_axis + if copy_or_from == "copy": + new_coord = sample_coord.copy() + elif copy_or_from == "from_coord": + new_coord = sample_coord.from_coord(sample_coord) + assert new_coord.ignore_axis is result + + class Test___init____abstractmethod(tests.IrisTest): def test(self): emsg = ( diff --git a/lib/iris/tests/unit/cube/test_Cube.py b/lib/iris/tests/unit/cube/test_Cube.py index b1eed4743e..5e513c2bd0 100644 --- a/lib/iris/tests/unit/cube/test_Cube.py +++ b/lib/iris/tests/unit/cube/test_Cube.py @@ -33,7 +33,7 @@ CellMethod, DimCoord, ) -from iris.cube import Cube +from iris.cube import Cube, CubeAttrsDict import iris.exceptions from iris.exceptions import ( AncillaryVariableNotFoundError, @@ -3436,5 +3436,31 @@ def test_fail_assign_duckcellmethod(self): self.cube.cell_methods = (test_object,) +class TestAttributesProperty: + def test_attrs_type(self): + # Cube attributes are always of a special dictionary type. + cube = Cube([0], attributes={"a": 1}) + assert type(cube.attributes) is CubeAttrsDict + assert cube.attributes == {"a": 1} + + def test_attrs_remove(self): + # Wiping attributes replaces the stored object + cube = Cube([0], attributes={"a": 1}) + attrs = cube.attributes + cube.attributes = None + assert cube.attributes is not attrs + assert type(cube.attributes) is CubeAttrsDict + assert cube.attributes == {} + + def test_attrs_clear(self): + # Clearing attributes leaves the same object + cube = Cube([0], attributes={"a": 1}) + attrs = cube.attributes + cube.attributes.clear() + assert cube.attributes is attrs + assert type(cube.attributes) is CubeAttrsDict + assert cube.attributes == {} + + if __name__ == "__main__": tests.main() diff --git a/lib/iris/tests/unit/cube/test_CubeAttrsDict.py b/lib/iris/tests/unit/cube/test_CubeAttrsDict.py new file mode 100644 index 0000000000..615de7b8e6 --- /dev/null +++ b/lib/iris/tests/unit/cube/test_CubeAttrsDict.py @@ -0,0 +1,407 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. 
+"""Unit tests for the `iris.cube.CubeAttrsDict` class.""" + +import pickle + +import numpy as np +import pytest + +from iris.common.mixin import LimitedAttributeDict +from iris.cube import CubeAttrsDict +from iris.fileformats.netcdf.saver import _CF_GLOBAL_ATTRS + + +@pytest.fixture +def sample_attrs() -> CubeAttrsDict: + return CubeAttrsDict( + locals={"a": 1, "z": "this"}, globals={"b": 2, "z": "that"} + ) + + +def check_content(attrs, locals=None, globals=None, matches=None): + """ + Check a CubeAttrsDict for expected properties. + + Its ".globals" and ".locals" must match 'locals' and 'globals' args + -- except that, if 'matches' is provided, it is a CubeAttrsDict, whose + locals/globals *replace* the 'locals'/'globals' arguments. + + Check that the result is a CubeAttrsDict and, for both local + global parts, + * parts match for *equality* (==) but are *non-identical* (is not) + * order of keys matches expected (N.B. which is *not* required for equality) + """ + assert isinstance(attrs, CubeAttrsDict) + attr_locals, attr_globals = attrs.locals, attrs.globals + assert type(attr_locals) is LimitedAttributeDict + assert type(attr_globals) is LimitedAttributeDict + if matches: + locals, globals = matches.locals, matches.globals + + def check(arg, content): + if not arg: + arg = {} + if not isinstance(arg, LimitedAttributeDict): + arg = LimitedAttributeDict(arg) + # N.B. if 'arg' is an actual given LimitedAttributeDict, it is not changed.. + # .. we proceed to ensure that the stored content is equal but NOT the same + assert content == arg + assert content is not arg + assert list(content.keys()) == list(arg.keys()) + + check(locals, attr_locals) + check(globals, attr_globals) + + +class Test___init__: + def test_empty(self): + attrs = CubeAttrsDict() + check_content(attrs, None, None) + + def test_from_combined_dict(self): + attrs = CubeAttrsDict({"q": 3, "history": "something"}) + check_content(attrs, locals={"q": 3}, globals={"history": "something"}) + + def test_from_separate_dicts(self): + locals = {"q": 3} + globals = {"history": "something"} + attrs = CubeAttrsDict(locals=locals, globals=globals) + check_content(attrs, locals=locals, globals=globals) + + def test_from_cubeattrsdict(self, sample_attrs): + result = CubeAttrsDict(sample_attrs) + check_content(result, matches=sample_attrs) + + def test_from_cubeattrsdict_like(self): + class MyDict: + pass + + mydict = MyDict() + locals, globals = {"a": 1}, {"b": 2} + mydict.locals = locals + mydict.globals = globals + attrs = CubeAttrsDict(mydict) + check_content(attrs, locals=locals, globals=globals) + + +class Test_OddMethods: + def test_pickle(self, sample_attrs): + bytes = pickle.dumps(sample_attrs) + result = pickle.loads(bytes) + check_content(result, matches=sample_attrs) + + def test_clear(self, sample_attrs): + sample_attrs.clear() + check_content(sample_attrs, {}, {}) + + def test_del(self, sample_attrs): + # 'z' is in both locals+globals. Delete removes both. + assert "z" in sample_attrs.keys() + del sample_attrs["z"] + assert "z" not in sample_attrs.keys() + + def test_copy(self, sample_attrs): + copy = sample_attrs.copy() + assert copy is not sample_attrs + check_content(copy, matches=sample_attrs) + + @pytest.fixture(params=["regular_arg", "split_arg"]) + def update_testcase(self, request): + lhs = CubeAttrsDict(globals={"a": 1, "b": 2}, locals={"b": 3, "c": 4}) + if request.param == "split_arg": + # A set of "update settings", with global/local-specific keys. 
+ rhs = CubeAttrsDict( + globals={"a": 1001, "x": 1007}, + # NOTE: use a global-default key here, to check that type is preserved + locals={"b": 1003, "history": 1099}, + ) + expected_result = CubeAttrsDict( + globals={"a": 1001, "b": 2, "x": 1007}, + locals={"b": 1003, "c": 4, "history": 1099}, + ) + else: + assert request.param == "regular_arg" + # A similar set of update values in a regular dict (so not local/global) + rhs = {"a": 1001, "x": 1007, "b": 1003, "history": 1099} + expected_result = CubeAttrsDict( + globals={"a": 1001, "b": 2, "history": 1099}, + locals={"b": 1003, "c": 4, "x": 1007}, + ) + return lhs, rhs, expected_result + + def test_update(self, update_testcase): + testval, updater, expected = update_testcase + testval.update(updater) + check_content(testval, matches=expected) + + def test___or__(self, update_testcase): + testval, updater, expected = update_testcase + original = testval.copy() + result = testval | updater + assert result is not testval + assert testval == original + check_content(result, matches=expected) + + def test___ior__(self, update_testcase): + testval, updater, expected = update_testcase + testval |= updater + check_content(testval, matches=expected) + + def test___ror__(self): + # Check the "or" operation, when lhs is a regular dictionary + lhs = {"a": 1, "b": 2, "history": 3} + rhs = CubeAttrsDict( + globals={"a": 1001, "x": 1007}, + # NOTE: use a global-default key here, to check that type is preserved + locals={"b": 1003, "history": 1099}, + ) + # The lhs should be promoted to a CubeAttrsDict, and then combined. + expected = CubeAttrsDict( + globals={"history": 3, "a": 1001, "x": 1007}, + locals={"a": 1, "b": 1003, "history": 1099}, + ) + result = lhs | rhs + check_content(result, matches=expected) + + @pytest.mark.parametrize("value", [1, None]) + @pytest.mark.parametrize("inputtype", ["regular_arg", "split_arg"]) + def test__fromkeys(self, value, inputtype): + if inputtype == "regular_arg": + # Check when input is a plain iterable of key-names + keys = ["a", "b", "history"] + # Result has keys assigned local/global via default mechanism. + expected = CubeAttrsDict( + globals={"history": value}, + locals={"a": value, "b": value}, + ) + else: + assert inputtype == "split_arg" + # Check when input is a CubeAttrsDict + keys = CubeAttrsDict( + globals={"a": 1}, locals={"b": 2, "history": 3} + ) + # The result preserves the input keys' local/global identity + # N.B. "history" would be global by default (cf. 
"regular_arg" case) + expected = CubeAttrsDict( + globals={"a": value}, + locals={"b": value, "history": value}, + ) + result = CubeAttrsDict.fromkeys(keys, value) + check_content(result, matches=expected) + + def test_to_dict(self, sample_attrs): + result = dict(sample_attrs) + expected = sample_attrs.globals.copy() + expected.update(sample_attrs.locals) + assert result == expected + + def test_array_copies(self): + array = np.array([3, 2, 1, 4]) + map = {"array": array} + attrs = CubeAttrsDict(map) + check_content(attrs, globals=None, locals=map) + attrs_array = attrs["array"] + assert np.all(attrs_array == array) + assert attrs_array is not array + + def test__str__(self, sample_attrs): + result = str(sample_attrs) + assert result == "{'b': 2, 'z': 'this', 'a': 1}" + + def test__repr__(self, sample_attrs): + result = repr(sample_attrs) + expected = ( + "CubeAttrsDict(" + "globals={'b': 2, 'z': 'that'}, " + "locals={'a': 1, 'z': 'this'})" + ) + assert result == expected + + +class TestEq: + def test_eq_empty(self): + attrs_1 = CubeAttrsDict() + attrs_2 = CubeAttrsDict() + assert attrs_1 == attrs_2 + + def test_eq_nonempty(self, sample_attrs): + attrs_1 = sample_attrs + attrs_2 = sample_attrs.copy() + assert attrs_1 == attrs_2 + + @pytest.mark.parametrize("aspect", ["locals", "globals"]) + def test_ne_missing(self, sample_attrs, aspect): + attrs_1 = sample_attrs + attrs_2 = sample_attrs.copy() + del getattr(attrs_2, aspect)["z"] + assert attrs_1 != attrs_2 + assert attrs_2 != attrs_1 + + @pytest.mark.parametrize("aspect", ["locals", "globals"]) + def test_ne_different(self, sample_attrs, aspect): + attrs_1 = sample_attrs + attrs_2 = sample_attrs.copy() + getattr(attrs_2, aspect)["z"] = 99 + assert attrs_1 != attrs_2 + assert attrs_2 != attrs_1 + + def test_ne_locals_vs_globals(self): + attrs_1 = CubeAttrsDict(locals={"a": 1}) + attrs_2 = CubeAttrsDict(globals={"a": 1}) + assert attrs_1 != attrs_2 + assert attrs_2 != attrs_1 + + def test_eq_dict(self): + # A CubeAttrsDict can be equal to a plain dictionary (which would create it) + vals_dict = {"a": 1, "b": 2, "history": "this"} + attrs = CubeAttrsDict(vals_dict) + assert attrs == vals_dict + assert vals_dict == attrs + + def test_ne_dict_local_global(self): + # Dictionary equivalence fails if the local/global assignments are wrong. + # sample dictionary + vals_dict = {"title": "b"} + # these attrs are *not* the same, because 'title' is global by default + attrs = CubeAttrsDict(locals={"title": "b"}) + assert attrs != vals_dict + assert vals_dict != attrs + + def test_empty_not_none(self): + # An empty CubeAttrsDict is not None, and does not compare to 'None' + # N.B. this for compatibility with the LimitedAttributeDict + attrs = CubeAttrsDict() + assert attrs is not None + with pytest.raises(TypeError, match="iterable"): + # Cannot *compare* to None (or anything non-iterable) + # N.B. not actually testing against None, as it upsets black (!) + attrs == 0 + + def test_empty_eq_iterables(self): + # An empty CubeAttrsDict is "equal" to various empty containers + attrs = CubeAttrsDict() + assert attrs == {} + assert attrs == [] + assert attrs == () + + +class TestDictOrderBehaviour: + def test_ordering(self): + attrs = CubeAttrsDict({"a": 1, "b": 2}) + assert list(attrs.keys()) == ["a", "b"] + # Remove, then reinstate 'a' : it will go to the back + del attrs["a"] + attrs["a"] = 1 + assert list(attrs.keys()) == ["b", "a"] + + def test_globals_locals_ordering(self): + # create attrs with a global attribute set *before* a local one .. 
+        attrs = CubeAttrsDict()
+        attrs.globals.update(dict(a=1, m=3))
+        attrs.locals.update(dict(f=7, z=4))
+        # .. and check key order of combined attrs
+        assert list(attrs.keys()) == ["a", "m", "f", "z"]
+
+    def test_locals_globals_nonalphabetic_order(self):
+        # create the "same" thing with locals before globals, *and* different key order
+        attrs = CubeAttrsDict()
+        attrs.locals.update(dict(z=4, f=7))
+        attrs.globals.update(dict(m=3, a=1))
+        # .. this shows that the result is not affected either by alphabetical key
+        # order, or the order of adding locals/globals
+        # I.E. result is globals-in-create-order, then locals-in-create-order
+        assert list(attrs.keys()) == ["m", "a", "z", "f"]
+
+
+class TestSettingBehaviours:
+    def test_add_localtype(self):
+        attrs = CubeAttrsDict()
+        # Any attribute not recognised as global should go into 'locals'
+        attrs["z"] = 3
+        check_content(attrs, locals={"z": 3})
+
+    @pytest.mark.parametrize("attrname", _CF_GLOBAL_ATTRS)
+    def test_add_globaltype(self, attrname):
+        # These specific attributes are recognised as belonging in 'globals'
+        attrs = CubeAttrsDict()
+        attrs[attrname] = "this"
+        check_content(attrs, globals={attrname: "this"})
+
+    def test_overwrite_local(self):
+        attrs = CubeAttrsDict({"a": 1})
+        attrs["a"] = 2
+        check_content(attrs, locals={"a": 2})
+
+    @pytest.mark.parametrize("attrname", _CF_GLOBAL_ATTRS)
+    def test_overwrite_global(self, attrname):
+        attrs = CubeAttrsDict({attrname: 1})
+        attrs[attrname] = 2
+        check_content(attrs, globals={attrname: 2})
+
+    @pytest.mark.parametrize("global_attrname", _CF_GLOBAL_ATTRS)
+    def test_overwrite_forced_local(self, global_attrname):
+        attrs = CubeAttrsDict(locals={global_attrname: 1})
+        # The attr *remains* local, even though it would be created global by default
+        attrs[global_attrname] = 2
+        check_content(attrs, locals={global_attrname: 2})
+
+    def test_overwrite_forced_global(self):
+        attrs = CubeAttrsDict(globals={"data": 1})
+        # The attr remains global, even though it would be created local by default
+        attrs["data"] = 2
+        check_content(attrs, globals={"data": 2})
+
+    def test_overwrite_both(self):
+        attrs = CubeAttrsDict(locals={"z": 1}, globals={"z": 1})
+        # Where both exist, it will always update the local one
+        attrs["z"] = 2
+        check_content(attrs, locals={"z": 2}, globals={"z": 1})
+
+    def test_local_global_masking(self, sample_attrs):
+        # initially, local 'z' masks the global one
+        assert sample_attrs["z"] == sample_attrs.locals["z"]
+        # remove local, global will show
+        del sample_attrs.locals["z"]
+        assert sample_attrs["z"] == sample_attrs.globals["z"]
+        # re-set local
+        sample_attrs.locals["z"] = "new"
+        assert sample_attrs["z"] == "new"
+        # change the global, makes no difference
+        sample_attrs.globals["z"] = "other"
+        assert sample_attrs["z"] == "new"
+
+    @pytest.mark.parametrize("globals_or_locals", ("globals", "locals"))
+    @pytest.mark.parametrize(
+        "value_type",
+        ("replace", "emptylist", "emptytuple", "none", "zero", "false"),
+    )
+    def test_replace_subdict(self, globals_or_locals, value_type):
+        # Writing to attrs.xx always replaces content with a *new* LimitedAttributeDict
+        locals, globals = {"a": 1}, {"b": 2}
+        attrs = CubeAttrsDict(locals=locals, globals=globals)
+        # Snapshot old + write new value, of either locals or globals
+        old_content = getattr(attrs, globals_or_locals)
+        value = {
+            "replace": {"qq": 77},
+            "emptytuple": (),
+            "emptylist": [],
+            "none": None,
+            "zero": 0,
+            "false": False,
+        }[value_type]
+        setattr(attrs, globals_or_locals, value)
+        # check new content is 
expected type and value + new_content = getattr(attrs, globals_or_locals) + assert isinstance(new_content, LimitedAttributeDict) + assert new_content is not old_content + if value_type != "replace": + value = {} + assert new_content == value + # Check expected whole: i.e. either globals or locals was replaced with value + if globals_or_locals == "globals": + globals = value + else: + locals = value + check_content(attrs, locals=locals, globals=globals) diff --git a/lib/iris/tests/unit/fileformats/nc_load_rules/helpers/test_build_cube_metadata.py b/lib/iris/tests/unit/fileformats/nc_load_rules/helpers/test_build_cube_metadata.py index e2297be69e..973e10217b 100644 --- a/lib/iris/tests/unit/fileformats/nc_load_rules/helpers/test_build_cube_metadata.py +++ b/lib/iris/tests/unit/fileformats/nc_load_rules/helpers/test_build_cube_metadata.py @@ -41,7 +41,7 @@ def _make_engine(global_attributes=None, standard_name=None, long_name=None): return engine -class TestInvalidGlobalAttributes(tests.IrisTest): +class TestGlobalAttributes(tests.IrisTest): def test_valid(self): global_attributes = { "Conventions": "CF-1.5", @@ -50,7 +50,7 @@ def test_valid(self): engine = _make_engine(global_attributes) build_cube_metadata(engine) expected = global_attributes - self.assertEqual(engine.cube.attributes, expected) + self.assertEqual(engine.cube.attributes.globals, expected) def test_invalid(self): global_attributes = { @@ -64,13 +64,14 @@ def test_invalid(self): # Check for a warning. self.assertEqual(warn.call_count, 1) self.assertIn( - "Skipping global attribute 'calendar'", warn.call_args[0][0] + "Skipping disallowed global attribute 'calendar'", + warn.call_args[0][0], ) # Check resulting attributes. The invalid entry 'calendar' # should be filtered out. global_attributes.pop("calendar") expected = global_attributes - self.assertEqual(engine.cube.attributes, expected) + self.assertEqual(engine.cube.attributes.globals, expected) class TestCubeName(tests.IrisTest): diff --git a/lib/iris/tests/unit/fileformats/netcdf/loader/test__chunk_control.py b/lib/iris/tests/unit/fileformats/netcdf/loader/test__chunk_control.py new file mode 100644 index 0000000000..7249c39829 --- /dev/null +++ b/lib/iris/tests/unit/fileformats/netcdf/loader/test__chunk_control.py @@ -0,0 +1,216 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. +"""Unit tests for :class:`iris.fileformats.netcdf.loader.ChunkControl`.""" + +# Import iris.tests first so that some things can be initialised before +# importing anything else. 
+import iris.tests as tests # isort:skip +from unittest.mock import ANY, patch + +import dask +import numpy as np +import pytest + +import iris +from iris.cube import CubeList +from iris.fileformats.netcdf import loader +from iris.fileformats.netcdf.loader import CHUNK_CONTROL +import iris.tests.stock as istk + + +@pytest.fixture() +def save_cubelist_with_sigma(tmp_filepath): + cube = istk.simple_4d_with_hybrid_height() + cube_varname = "my_var" + sigma_varname = "my_sigma" + cube.var_name = cube_varname + cube.coord("sigma").var_name = sigma_varname + cube.coord("sigma").guess_bounds() + iris.save(cube, tmp_filepath) + return cube_varname, sigma_varname + + +@pytest.fixture +def save_cube_with_chunksize(tmp_filepath): + cube = istk.simple_3d() + # adding an aux coord allows us to test that + # iris.fileformats.netcdf.loader._get_cf_var_data() + # will only throw an error if from_file mode is + # True when the entire cube has no specified chunking + aux = iris.coords.AuxCoord( + points=np.zeros((3, 4)), + long_name="random", + units="1", + ) + cube.add_aux_coord(aux, [1, 2]) + iris.save(cube, tmp_filepath, chunksizes=(1, 3, 4)) + + +@pytest.fixture(scope="session") +def tmp_filepath(tmp_path_factory): + tmp_dir = tmp_path_factory.mktemp("data") + tmp_path = tmp_dir / "tmp.nc" + return str(tmp_path) + + +@pytest.fixture(autouse=True) +def remove_min_bytes(): + old_min_bytes = loader._LAZYVAR_MIN_BYTES + loader._LAZYVAR_MIN_BYTES = 0 + yield + loader._LAZYVAR_MIN_BYTES = old_min_bytes + + +def test_default(tmp_filepath, save_cubelist_with_sigma): + cube_varname, _ = save_cubelist_with_sigma + cubes = CubeList(loader.load_cubes(tmp_filepath)) + cube = cubes.extract_cube(cube_varname) + assert cube.shape == (3, 4, 5, 6) + assert cube.lazy_data().chunksize == (3, 4, 5, 6) + + sigma = cube.coord("sigma") + assert sigma.shape == (4,) + assert sigma.lazy_points().chunksize == (4,) + assert sigma.lazy_bounds().chunksize == (4, 2) + + +def test_control_global(tmp_filepath, save_cubelist_with_sigma): + cube_varname, _ = save_cubelist_with_sigma + with CHUNK_CONTROL.set(model_level_number=2): + cubes = CubeList(loader.load_cubes(tmp_filepath)) + cube = cubes.extract_cube(cube_varname) + assert cube.shape == (3, 4, 5, 6) + assert cube.lazy_data().chunksize == (3, 2, 5, 6) + + sigma = cube.coord("sigma") + assert sigma.shape == (4,) + assert sigma.lazy_points().chunksize == (2,) + assert sigma.lazy_bounds().chunksize == (2, 2) + + +def test_control_sigma_only(tmp_filepath, save_cubelist_with_sigma): + cube_varname, sigma_varname = save_cubelist_with_sigma + with CHUNK_CONTROL.set(sigma_varname, model_level_number=2): + cubes = CubeList(loader.load_cubes(tmp_filepath)) + cube = cubes.extract_cube(cube_varname) + assert cube.shape == (3, 4, 5, 6) + assert cube.lazy_data().chunksize == (3, 4, 5, 6) + + sigma = cube.coord("sigma") + assert sigma.shape == (4,) + assert sigma.lazy_points().chunksize == (2,) + # N.B. 
this does not apply to bounds array
+    assert sigma.lazy_bounds().chunksize == (4, 2)
+
+
+def test_control_cube_var(tmp_filepath, save_cubelist_with_sigma):
+    cube_varname, _ = save_cubelist_with_sigma
+    with CHUNK_CONTROL.set(cube_varname, model_level_number=2):
+        cubes = CubeList(loader.load_cubes(tmp_filepath))
+    cube = cubes.extract_cube(cube_varname)
+    assert cube.shape == (3, 4, 5, 6)
+    assert cube.lazy_data().chunksize == (3, 2, 5, 6)
+
+    sigma = cube.coord("sigma")
+    assert sigma.shape == (4,)
+    assert sigma.lazy_points().chunksize == (2,)
+    assert sigma.lazy_bounds().chunksize == (2, 2)
+
+
+def test_invalid_chunksize(tmp_filepath, save_cubelist_with_sigma):
+    with pytest.raises(ValueError):
+        with CHUNK_CONTROL.set(model_level_number="2"):
+            CubeList(loader.load_cubes(tmp_filepath))
+
+
+def test_invalid_var_name(tmp_filepath, save_cubelist_with_sigma):
+    with pytest.raises(ValueError):
+        with CHUNK_CONTROL.set([1, 2], model_level_number="2"):
+            CubeList(loader.load_cubes(tmp_filepath))
+
+
+def test_control_multiple(tmp_filepath, save_cubelist_with_sigma):
+    cube_varname, sigma_varname = save_cubelist_with_sigma
+    with CHUNK_CONTROL.set(
+        cube_varname, model_level_number=2
+    ), CHUNK_CONTROL.set(sigma_varname, model_level_number=3):
+        cubes = CubeList(loader.load_cubes(tmp_filepath))
+    cube = cubes.extract_cube(cube_varname)
+    assert cube.shape == (3, 4, 5, 6)
+    assert cube.lazy_data().chunksize == (3, 2, 5, 6)
+
+    sigma = cube.coord("sigma")
+    assert sigma.shape == (4,)
+    assert sigma.lazy_points().chunksize == (3,)
+    assert sigma.lazy_bounds().chunksize == (2, 2)
+
+
+def test_neg_one(tmp_filepath, save_cubelist_with_sigma):
+    cube_varname, _ = save_cubelist_with_sigma
+    with dask.config.set({"array.chunk-size": "50B"}):
+        with CHUNK_CONTROL.set(model_level_number=-1):
+            cubes = CubeList(loader.load_cubes(tmp_filepath))
+    cube = cubes.extract_cube(cube_varname)
+    assert cube.shape == (3, 4, 5, 6)
+    # uses known good output
+    assert cube.lazy_data().chunksize == (1, 4, 1, 1)
+
+    sigma = cube.coord("sigma")
+    assert sigma.shape == (4,)
+    assert sigma.lazy_points().chunksize == (4,)
+    assert sigma.lazy_bounds().chunksize == (4, 1)
+
+
+def test_from_file(tmp_filepath, save_cube_with_chunksize):
+    with CHUNK_CONTROL.from_file():
+        cube = next(loader.load_cubes(tmp_filepath))
+    assert cube.shape == (2, 3, 4)
+    assert cube.lazy_data().chunksize == (1, 3, 4)
+
+
+def test_no_chunks_from_file(tmp_filepath, save_cubelist_with_sigma):
+    cube_varname, _ = save_cubelist_with_sigma
+    with pytest.raises(KeyError):
+        with CHUNK_CONTROL.from_file():
+            CubeList(loader.load_cubes(tmp_filepath))
+
+
+def test_as_dask(tmp_filepath, save_cubelist_with_sigma):
+    """
+    This does not test return values, as we can't be sure
+    dask chunking behaviour won't change, or that it will differ
+    from our own chunking behaviour. 
+ """ + message = "Mock called, rest of test unneeded" + with patch("iris.fileformats.netcdf.loader.as_lazy_data") as as_lazy_data: + as_lazy_data.side_effect = RuntimeError(message) + with CHUNK_CONTROL.as_dask(): + try: + CubeList(loader.load_cubes(tmp_filepath)) + except RuntimeError as e: + if str(e) != message: + raise e + as_lazy_data.assert_called_with(ANY, chunks=None, dask_chunking=True) + + +def test_pinned_optimisation(tmp_filepath, save_cubelist_with_sigma): + cube_varname, _ = save_cubelist_with_sigma + with dask.config.set({"array.chunk-size": "250B"}): + with CHUNK_CONTROL.set(model_level_number=2): + cubes = CubeList(loader.load_cubes(tmp_filepath)) + cube = cubes.extract_cube(cube_varname) + assert cube.shape == (3, 4, 5, 6) + # uses known good output + # known good output WITHOUT pinning: (1, 1, 5, 6) + assert cube.lazy_data().chunksize == (1, 2, 2, 6) + + sigma = cube.coord("sigma") + assert sigma.shape == (4,) + assert sigma.lazy_points().chunksize == (2,) + assert sigma.lazy_bounds().chunksize == (2, 2) + + +if __name__ == "__main__": + tests.main() diff --git a/lib/iris/tests/unit/fileformats/netcdf/loader/test__get_cf_var_data.py b/lib/iris/tests/unit/fileformats/netcdf/loader/test__get_cf_var_data.py index 3c3cbff7f4..caece8b6bc 100644 --- a/lib/iris/tests/unit/fileformats/netcdf/loader/test__get_cf_var_data.py +++ b/lib/iris/tests/unit/fileformats/netcdf/loader/test__get_cf_var_data.py @@ -14,7 +14,7 @@ from iris._lazy_data import _optimum_chunksize import iris.fileformats.cf -from iris.fileformats.netcdf.loader import _get_cf_var_data +from iris.fileformats.netcdf.loader import CHUNK_CONTROL, _get_cf_var_data class Test__get_cf_var_data(tests.IrisTest): @@ -29,6 +29,7 @@ def _make( cf_data = mock.MagicMock( _FillValue=None, __getitem__="", + dimensions=["dim_" + str(x) for x in range(len(shape or "1"))], ) cf_data.chunking = mock.MagicMock(return_value=chunksizes) if shape is None: @@ -60,6 +61,16 @@ def test_cf_data_chunks(self): expected_chunks = _optimum_chunksize(chunks, self.shape) self.assertArrayEqual(lazy_data_chunks, expected_chunks) + def test_cf_data_chunk_control(self): + # more thorough testing can be found at `test__chunk_control` + chunks = [2500, 240, 200] + cf_var = self._make(shape=(2500, 240, 200), chunksizes=chunks) + with CHUNK_CONTROL.set(dim_0=25, dim_1=24, dim_2=20): + lazy_data = _get_cf_var_data(cf_var, self.filename) + lazy_data_chunks = [c[0] for c in lazy_data.chunks] + expected_chunks = (25, 24, 20) + self.assertArrayEqual(lazy_data_chunks, expected_chunks) + def test_cf_data_no_chunks(self): # No chunks means chunks are calculated from the array's shape by # `iris._lazy_data._optimum_chunksize()`. 
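[Editorial note -- a minimal usage sketch of the CHUNK_CONTROL context manager
exercised by the tests above. The file name "my_file.nc", variable name
"my_var" and the "model_level_number" dimension are illustrative assumptions,
not names taken from the change itself:

    import iris
    from iris.fileformats.netcdf.loader import CHUNK_CONTROL

    # Pin the chunk size of one dimension while loading; dimensions not
    # mentioned are still chunked automatically.
    with CHUNK_CONTROL.set("my_var", model_level_number=2):
        cube = iris.load_cube("my_file.nc", "my_var")

    # Alternatively, adopt the chunking recorded in the file (this raises
    # KeyError if the file records none), or hand the choice over to dask
    # entirely with CHUNK_CONTROL.as_dask().
    with CHUNK_CONTROL.from_file():
        cube = iris.load_cube("my_file.nc", "my_var")
]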
diff --git a/lib/iris/tests/unit/lazy_data/test_as_lazy_data.py b/lib/iris/tests/unit/lazy_data/test_as_lazy_data.py index 0acb085830..2222d185c3 100644 --- a/lib/iris/tests/unit/lazy_data/test_as_lazy_data.py +++ b/lib/iris/tests/unit/lazy_data/test_as_lazy_data.py @@ -41,6 +41,25 @@ def test_non_default_chunks(self): (result,) = np.unique(lazy_data.chunks) self.assertEqual(result, 24) + def test_dask_chunking(self): + data = np.arange(24) + chunks = (12,) + optimum = self.patch("iris._lazy_data._optimum_chunksize") + optimum.return_value = chunks + _ = as_lazy_data(data, chunks=None, dask_chunking=True) + self.assertFalse(optimum.called) + + def test_dask_chunking_error(self): + data = np.arange(24) + chunks = (12,) + optimum = self.patch("iris._lazy_data._optimum_chunksize") + optimum.return_value = chunks + with self.assertRaisesRegex( + ValueError, + r"Dask chunking chosen, but chunks already assigned value", + ): + as_lazy_data(data, chunks=chunks, dask_chunking=True) + def test_with_masked_constant(self): masked_data = ma.masked_array([8], mask=True) masked_constant = masked_data[0] @@ -151,7 +170,10 @@ def test_default_chunks_limiting(self): limitcall_patch.call_args_list, [ mock.call( - list(test_shape), shape=test_shape, dtype=np.dtype("f4") + list(test_shape), + shape=test_shape, + dtype=np.dtype("f4"), + dims_fixed=None, ) ], ) diff --git a/lib/iris/tests/unit/util/test_equalise_attributes.py b/lib/iris/tests/unit/util/test_equalise_attributes.py index a4198160a9..de5308a7fa 100644 --- a/lib/iris/tests/unit/util/test_equalise_attributes.py +++ b/lib/iris/tests/unit/util/test_equalise_attributes.py @@ -13,8 +13,13 @@ import numpy as np -from iris.cube import Cube +from iris.coords import AuxCoord +from iris.cube import Cube, CubeAttrsDict import iris.tests.stock +from iris.tests.unit.common.metadata.test_CubeMetadata import ( + _TEST_ATTRNAME, + make_attrsdict, +) from iris.util import equalise_attributes @@ -152,5 +157,111 @@ def test_complex_somecommon(self): ) +class TestSplitattributes: + """ + Extra testing for cases where attributes differ specifically by type + + That is, where there is a new possibility of 'mismatch' due to the newer "typing" + of attributes as global or local. + + Specifically, it is now possible that although + "cube1.attributes.keys() == cube2.attributes.keys()", + AND "cube1.attributes[k] == cube2.attributes[k]" for all keys, + YET STILL (possibly) "cube1.attributes != cube2.attributes" + """ + + @staticmethod + def _sample_splitattrs_cube(attr_global_local): + attrs = CubeAttrsDict( + globals=make_attrsdict(attr_global_local[0]), + locals=make_attrsdict(attr_global_local[1]), + ) + return Cube([0], attributes=attrs) + + @staticmethod + def check_equalised_result(cube1, cube2): + equalise_attributes([cube1, cube2]) + # Note: "X" represents a missing attribute, as in test_CubeMetadata + return [ + ( + cube1.attributes.globals.get(_TEST_ATTRNAME, "X") + + cube1.attributes.locals.get(_TEST_ATTRNAME, "X") + ), + ( + cube2.attributes.globals.get(_TEST_ATTRNAME, "X") + + cube2.attributes.locals.get(_TEST_ATTRNAME, "X") + ), + ] + + def test__global_and_local__bothsame(self): + # A trivial case showing that the original globals+locals are both preserved. 
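+        # (The two-character string encodes the (global, local) attribute
+        # values; "X" means the attribute is absent -- see make_attrsdict in
+        # test_CubeMetadata.)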
+ cube1 = self._sample_splitattrs_cube("AB") + cube2 = self._sample_splitattrs_cube("AB") + result = self.check_equalised_result(cube1, cube2) + assert result == ["AB", "AB"] + + def test__globals_different(self): + cube1 = self._sample_splitattrs_cube("AX") + cube2 = self._sample_splitattrs_cube("BX") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + def test__locals_different(self): + cube1 = self._sample_splitattrs_cube("XA") + cube2 = self._sample_splitattrs_cube("XB") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + def test__oneglobal_onelocal__different(self): + cube1 = self._sample_splitattrs_cube("AX") + cube2 = self._sample_splitattrs_cube("XB") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + # This case fails without the split-attributes fix. + def test__oneglobal_onelocal__same(self): + cube1 = self._sample_splitattrs_cube("AX") + cube2 = self._sample_splitattrs_cube("XA") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + def test__sameglobals_onelocal__different(self): + cube1 = self._sample_splitattrs_cube("AB") + cube2 = self._sample_splitattrs_cube("AX") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + # This case fails without the split-attributes fix. + def test__sameglobals_onelocal__same(self): + cube1 = self._sample_splitattrs_cube("AA") + cube2 = self._sample_splitattrs_cube("AX") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + # This case fails without the split-attributes fix. + def test__differentglobals_samelocals(self): + cube1 = self._sample_splitattrs_cube("AC") + cube2 = self._sample_splitattrs_cube("BC") + result = self.check_equalised_result(cube1, cube2) + assert result == ["XX", "XX"] + + +class TestNonCube: + # Just to assert that we can do operations on non-cube components (like Coords), + # in fact effectively, anything with a ".attributes". + # Even though the docstring does not admit this, we test it because we put in + # special code to preserve it when adding the split-attribute handling. + def test(self): + attrs = [1, 1, 2] + coords = [ + AuxCoord([0], attributes={"a": attr, "b": "all_the_same"}) + for attr in attrs + ] + equalise_attributes(coords) + assert all( + coord.attributes == {"b": "all_the_same"} for coord in coords + ) + + if __name__ == "__main__": tests.main() diff --git a/lib/iris/tests/unit/util/test_guess_coord_axis.py b/lib/iris/tests/unit/util/test_guess_coord_axis.py new file mode 100644 index 0000000000..d946565196 --- /dev/null +++ b/lib/iris/tests/unit/util/test_guess_coord_axis.py @@ -0,0 +1,50 @@ +# Copyright Iris contributors +# +# This file is part of Iris and is released under the BSD license. +# See LICENSE in the root of the repository for full licensing details. 
+"""Test function :func:`iris.util.guess_coord_axis`.""" + +import pytest + +from iris.util import guess_coord_axis + + +class TestGuessCoord: + @pytest.mark.parametrize( + "coordinate, axis", + [ + ("longitude", "X"), + ("grid_longitude", "X"), + ("projection_x_coordinate", "X"), + ("latitude", "Y"), + ("grid_latitude", "Y"), + ("projection_y_coordinate", "Y"), + ], + ) + def test_coord(self, coordinate, axis, sample_coord): + sample_coord.standard_name = coordinate + assert guess_coord_axis(sample_coord) == axis + + @pytest.mark.parametrize( + "units, axis", + [ + ("hPa", "Z"), + ("days since 1970-01-01 00:00:00", "T"), + ], + ) + def test_units(self, units, axis, sample_coord): + sample_coord.units = units + assert guess_coord_axis(sample_coord) == axis + + @pytest.mark.parametrize( + "ignore_axis, result", + [ + (True, None), + (False, "X"), + ], + ) + def test_ignore_axis(self, ignore_axis, result, sample_coord): + sample_coord.standard_name = "longitude" + sample_coord.ignore_axis = ignore_axis + + assert guess_coord_axis(sample_coord) == result diff --git a/lib/iris/util.py b/lib/iris/util.py index ee415d230e..10a58fdef0 100644 --- a/lib/iris/util.py +++ b/lib/iris/util.py @@ -257,10 +257,17 @@ def guess_coord_axis(coord): This function maintains laziness when called; it does not realise data. See more at :doc:`/userguide/real_and_lazy_data`. + The ``guess_coord_axis`` behaviour can be skipped by setting the coordinate property ``ignore_axis`` + to ``False``. + """ + axis = None - if coord.standard_name in ( + if hasattr(coord, "ignore_axis") and coord.ignore_axis is True: + return axis + + elif coord.standard_name in ( "longitude", "grid_longitude", "projection_x_coordinate", @@ -2064,24 +2071,50 @@ def equalise_attributes(cubes): See more at :doc:`/userguide/real_and_lazy_data`. """ - removed = [] + # deferred import to avoid circularity problem + from iris.common._split_attribute_dicts import ( + _convert_splitattrs_to_pairedkeys_dict, + ) + + cube_attrs = [cube.attributes for cube in cubes] + + # Convert all the input dictionaries to ones with 'paired' keys, so each key + # becomes a pair, ('local'/'global', attribute-name), making them specific to each + # "type", i.e. global or local. + # This is needed to ensure that afterwards all cubes will have identical + # attributes, E.G. it treats an attribute which is global on one cube and local + # on another as *not* the same. This is essential to its use in making merges work. + # + # This approach does also still function with "ordinary" dictionaries, or + # :class:`iris.common.mixin.LimitedAttributeDict`, though somewhat inefficiently, + # so the routine works on *other* objects bearing attributes, i.e. not just Cubes. + # That is also important since the original code allows that (though the docstring + # does not admit it). + cube_attrs = [ + _convert_splitattrs_to_pairedkeys_dict(dic) for dic in cube_attrs + ] + # Work out which attributes are identical across all the cubes. 
-    common_keys = list(cubes[0].attributes.keys())
+    common_keys = list(cube_attrs[0].keys())
     keys_to_remove = set(common_keys)
-    for cube in cubes[1:]:
-        cube_keys = list(cube.attributes.keys())
+    for attrs in cube_attrs[1:]:
+        cube_keys = list(attrs.keys())
         keys_to_remove.update(cube_keys)
         common_keys = [
             key
             for key in common_keys
-            if (
-                key in cube_keys
-                and np.all(cube.attributes[key] == cubes[0].attributes[key])
-            )
+            if (key in cube_keys and np.all(attrs[key] == cube_attrs[0][key]))
         ]
         keys_to_remove.difference_update(common_keys)

-    # Remove all the other attributes.
+    # Convert back from the resulting 'paired' keys set, extracting just the
+    # attribute-name parts, as a set of names to be discarded.
+    # Note: we no longer care what type (global/local) these were: we simply
+    # remove *all* attributes with those names.
+    keys_to_remove = set(key_pair[1] for key_pair in keys_to_remove)
+
+    # Remove all the non-matching attributes.
+    removed = []
     for cube in cubes:
         deleted_attributes = {
             key: cube.attributes.pop(key)
@@ -2089,6 +2122,7 @@ def equalise_attributes(cubes):
             if key in cube.attributes
         }
         removed.append(deleted_attributes)
+
     return removed
diff --git a/requirements/locks/py310-linux-64.lock b/requirements/locks/py310-linux-64.lock
index 2655960622..18b8ee256c 100644
--- a/requirements/locks/py310-linux-64.lock
+++ b/requirements/locks/py310-linux-64.lock
@@ -1,6 +1,6 @@
# Generated by conda-lock.
# platform: linux-64
-# input_hash: 94966cd7393527bff211c87589678b2ffe1697705267a20b2708a4cc27da5376
+# input_hash: df35455963a70471a00b88b3c8609117d9379aebcb6472b49d2a621e0d0895fa
@EXPLICIT
https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2#d7c89558ba9fa0495403155b64376d81
https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2023.7.22-hbcca054_0.conda#a73ecd2988327ad4c8f2c331482917f2
@@ -9,18 +9,18 @@ https://conda.anaconda.org/conda-forge/noarch/font-ttf-inconsolata-3.000-h77eed3
https://conda.anaconda.org/conda-forge/noarch/font-ttf-source-code-pro-2.038-h77eed37_0.tar.bz2#4d59c254e01d9cde7957100457e2d5fb
https://conda.anaconda.org/conda-forge/noarch/font-ttf-ubuntu-0.83-hab24e00_0.tar.bz2#19410c3df09dfb12d1206132a1d357c5
https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.40-h41732ed_0.conda#7aca3059a1729aa76c597603f10b0dd3
-https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_2.conda#9172c297304f2a20134fc56c97fbe229
+https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_3.conda#937eaed008f6bf2191c5fe76f87755e9
https://conda.anaconda.org/conda-forge/linux-64/python_abi-3.10-4_cp310.conda#26322ec5d7712c3ded99dd656142b8ce
https://conda.anaconda.org/conda-forge/noarch/tzdata-2023c-h71feb2d_0.conda#939e3e74d8be4dac89ce83b20de2492a
https://conda.anaconda.org/conda-forge/noarch/fonts-conda-forge-1-0.tar.bz2#f766549260d6815b0c52253f1fb1bb29
-https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_2.conda#e2042154faafe61969556f28bade94b9
+https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_3.conda#7124cbb46b13d395bdde68f2d215c989
https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2#73aaf86a425cc6e73fcf236a5a46396d
https://conda.anaconda.org/conda-forge/noarch/fonts-conda-ecosystem-1-0.tar.bz2#fee5683a3f04bd15cbd8318b096a27ab
-https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_2.conda#c28003b0be0494f9a7664389146716ff
+https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_3.conda#23fdf1fef05baeb7eadc2aed5fb0011f https://conda.anaconda.org/conda-forge/linux-64/alsa-lib-1.2.10-hd590300_0.conda#75dae9a4201732aa78a530b826ee5fe0 https://conda.anaconda.org/conda-forge/linux-64/attr-2.5.1-h166bdaf_1.tar.bz2#d9c69a24ad678ffce24c6543a0176b00 -https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h7f98852_4.tar.bz2#a1fd65c7ccbf10880423d82bca54eb54 -https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.20.1-hd590300_0.conda#6642e4faa4804be3a0e7edfefbd16595 +https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-hd590300_5.conda#69b8b6202a07720f448be700e300ccf4 +https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.21.0-hd590300_0.conda#c06fa0440048270817b9e3142cc661bf https://conda.anaconda.org/conda-forge/linux-64/fribidi-1.0.10-h36c2ea0_0.tar.bz2#ac7bc6a654f8f41b352b38f4051135f8 https://conda.anaconda.org/conda-forge/linux-64/geos-3.12.0-h59595ed_0.conda#3fdf79ef322c8379ae83be491d805369 https://conda.anaconda.org/conda-forge/linux-64/gettext-0.21.1-h27087fc_0.tar.bz2#14947d8770185e5153fdd04d4673ed37 @@ -36,11 +36,11 @@ https://conda.anaconda.org/conda-forge/linux-64/libdeflate-1.19-hd590300_0.conda https://conda.anaconda.org/conda-forge/linux-64/libev-4.33-h516909a_1.tar.bz2#6f8720dff19e17ce5d48cfe7f3d2f0a3 https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.5.0-hcb278e6_1.conda#6305a3dd2752c76335295da4e581f2fd https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2#d645c6d2ac96843a2bfaccd2d62b3ac3 -https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_2.conda#78fdab09d9138851dde2b5fe2a11019e +https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_3.conda#c714d905cdfa0e70200f68b80cc04764 https://conda.anaconda.org/conda-forge/linux-64/libiconv-1.17-h166bdaf_0.tar.bz2#b62b52da46c39ee2bc3c162ac7f1804d https://conda.anaconda.org/conda-forge/linux-64/libjpeg-turbo-3.0.0-hd590300_1.conda#ea25936bb4080d843790b586850f82b8 https://conda.anaconda.org/conda-forge/linux-64/libmo_unpack-3.1.2-hf484d3e_1001.tar.bz2#95f32a6a5a666d33886ca5627239f03d -https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.0-hd590300_1.conda#854e3e1623b39777140f199c5f9ab952 +https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda#30fd6e37fe21f86f4bd26d6ee73eeec7 https://conda.anaconda.org/conda-forge/linux-64/libogg-1.3.4-h7f98852_1.tar.bz2#6e8cc2173440d77708196c5b93771680 https://conda.anaconda.org/conda-forge/linux-64/libopus-1.3.1-h7f98852_1.tar.bz2#15345e56d527b330e1cacbdf58676e8f https://conda.anaconda.org/conda-forge/linux-64/libtool-2.4.7-h27087fc_0.conda#f204c8ba400ec475452737094fb81d52 @@ -49,9 +49,9 @@ https://conda.anaconda.org/conda-forge/linux-64/libwebp-base-1.3.2-hd590300_0.co https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.2.13-hd590300_5.conda#f36c115f1ee199da648e0597ec2047ad https://conda.anaconda.org/conda-forge/linux-64/lz4-c-1.9.4-hcb278e6_0.conda#318b08df404f9c9be5712aaa5a6f0bb0 https://conda.anaconda.org/conda-forge/linux-64/mpg123-1.32.3-h59595ed_0.conda#bdadff838d5437aea83607ced8b37f75 -https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-hcb278e6_0.conda#681105bccc2a3f7f1a837d47d39c9179 +https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-h59595ed_2.conda#7dbaa197d7ba6032caf7ae7f32c1efa0 https://conda.anaconda.org/conda-forge/linux-64/nspr-4.35-h27087fc_0.conda#da0ec11a6454ae19bff5b02ed881a2b1 
-https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.3-hd590300_0.conda#7bb88ce04c8deb9f7d763ae04a1da72f +https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.4-hd590300_0.conda#412ba6938c3e2abaca8b1129ea82e238 https://conda.anaconda.org/conda-forge/linux-64/pixman-0.42.2-h59595ed_0.conda#700edd63ccd5fc66b70b1c028cea9a68 https://conda.anaconda.org/conda-forge/linux-64/pthread-stubs-0.4-h36c2ea0_1001.tar.bz2#22dad4df6e8630e8dff2428f6f6a7036 https://conda.anaconda.org/conda-forge/linux-64/snappy-1.1.10-h9fff704_0.conda#e6d228cd0bb74a51dd18f5bfce0b4115 @@ -74,21 +74,21 @@ https://conda.anaconda.org/conda-forge/linux-64/libcap-2.69-h0f662aa_0.conda#25c https://conda.anaconda.org/conda-forge/linux-64/libedit-3.1.20191231-he28a2e2_2.tar.bz2#4d331e44109e3f0e19b4cb8f9b82f3e1 https://conda.anaconda.org/conda-forge/linux-64/libevent-2.1.12-hf998b51_1.conda#a1cfcc585f0c42bf8d5546bb1dfb668d https://conda.anaconda.org/conda-forge/linux-64/libflac-1.4.3-h59595ed_0.conda#ee48bf17cc83a00f59ca1494d5646869 -https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_2.conda#e75a75a6eaf6f318dae2631158c46575 +https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_3.conda#73031c79546ad06f1fe62e57fdd021bc https://conda.anaconda.org/conda-forge/linux-64/libgpg-error-1.47-h71f35ed_0.conda#c2097d0b46367996f09b4e8e4920384a -https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.52.0-h61bc06f_0.conda#613955a50485812985c059e7b269f42e +https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.58.0-h47da74e_0.conda#9b13d5ee90fc9f09d54fd403247342b4 https://conda.anaconda.org/conda-forge/linux-64/libpng-1.6.39-h753d276_0.conda#e1c890aebdebbfbf87e2c917187b4416 -https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.43.2-h2797004_0.conda#4b441a1ee22397d5a27dc1126b849edd +https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.44.0-h2797004_0.conda#b58e6816d137f3aabf77d341dd5d732b https://conda.anaconda.org/conda-forge/linux-64/libssh2-1.11.0-h0841786_0.conda#1f5a58e686b13bcfde88b93f547d23fe https://conda.anaconda.org/conda-forge/linux-64/libudunits2-2.2.28-h40f5838_3.conda#4bdace082e911a3e1f1f0b721bed5b56 https://conda.anaconda.org/conda-forge/linux-64/libvorbis-1.3.7-h9c3ff4c_0.tar.bz2#309dec04b70a3cc0f1e84a4013683bc0 https://conda.anaconda.org/conda-forge/linux-64/libxcb-1.15-h0b41bf4_0.conda#33277193f5b92bad9fdd230eb700929c -https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.5-h232c23b_1.conda#f3858448893839820d4bcfb14ad3ecdf +https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.6-h232c23b_0.conda#427a3e59d66cb5d145020bd9c6493334 https://conda.anaconda.org/conda-forge/linux-64/libzip-1.10.1-h2629f0a_3.conda#ac79812548e7e8cf61f7b0abdef01d3b -https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_5.conda#1e8ef4090ca4f0d66404a7441e1dbf3c -https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.40-hc3806b6_0.tar.bz2#69e2c796349cd9b273890bee0febfe1b +https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_6.conda#80bf3b277c120dd294b51d404b931a75 +https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.42-hcad00b1_0.conda#679c8961826aa4b50653bce17ee52abe https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8228510_1.conda#47d31b792659ce70f470b5c82fdfb7a4 -https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-h2797004_0.conda#513336054f884f95d9fd925748f41ef3 +https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda#d453b98d9c83e71da0741bb0ff4d76bc 
https://conda.anaconda.org/conda-forge/linux-64/xorg-libsm-1.2.4-h7391055_0.conda#93ee23f12bc2e684548181256edd2cf6 https://conda.anaconda.org/conda-forge/linux-64/zlib-1.2.13-hd590300_5.conda#68c34ec6149623be41a1933ab996a209 https://conda.anaconda.org/conda-forge/linux-64/zstd-1.5.5-hfc55251_0.conda#04b88013080254850d6c01ed54810589 @@ -96,16 +96,16 @@ https://conda.anaconda.org/conda-forge/linux-64/blosc-1.21.5-h0f2a231_0.conda#00 https://conda.anaconda.org/conda-forge/linux-64/brotli-bin-1.1.0-hd590300_1.conda#39f910d205726805a958da408ca194ba https://conda.anaconda.org/conda-forge/linux-64/freetype-2.12.1-h267a509_2.conda#9ae35c3d96db2c94ce0cef86efdfa2cb https://conda.anaconda.org/conda-forge/linux-64/krb5-1.21.2-h659d440_0.conda#cd95826dbd331ed1be26bdf401432844 -https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.1-h166bdaf_0.tar.bz2#f967fc95089cd247ceed56eda31de3a9 -https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.0-hebfc3b9_0.conda#e618003da3547216310088478e475945 +https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.2-hd590300_0.conda#3d7d5e5cebf8af5aadb040732860f1b6 +https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.1-h783c2da_1.conda#70052d6c1e84643e30ffefb21ab6950f https://conda.anaconda.org/conda-forge/linux-64/libllvm15-15.0.7-h5cf9203_3.conda#9efe82d44b76a7529a1d702e5a37752e https://conda.anaconda.org/conda-forge/linux-64/libopenblas-0.3.24-pthreads_h413a1c8_0.conda#6e4ef6ca28655124dcde9bd500e44c32 https://conda.anaconda.org/conda-forge/linux-64/libsndfile-1.2.2-hc60ed4a_1.conda#ef1910918dd895516a769ed36b5b3a4e https://conda.anaconda.org/conda-forge/linux-64/libtiff-4.6.0-ha9c0a0a_2.conda#55ed21669b2015f77c180feb1dd41930 -https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_5.conda#b72f016c910ff9295b1377d3e17da3f2 +https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_6.conda#e87530d1b12dd7f4e0f856dc07358d60 https://conda.anaconda.org/conda-forge/linux-64/nss-3.94-h1d7d5a4_0.conda#7caef74bbfa730e014b20f0852068509 -https://conda.anaconda.org/conda-forge/linux-64/python-3.10.12-hd12c33a_0_cpython.conda#eb6f1df105f37daedd6dca78523baa75 -https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.43.2-h2c6b66d_0.conda#c37b95bcd6c6833dacfd5df0ae2f4303 +https://conda.anaconda.org/conda-forge/linux-64/python-3.10.13-hd12c33a_0_cpython.conda#f3a8c32aa764c3e7188b4b810fc9d6ce +https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.44.0-h2c6b66d_0.conda#df56c636df4a98990462d66ac7be2330 https://conda.anaconda.org/conda-forge/linux-64/udunits2-2.2.28-h40f5838_3.conda#6bb8deb138f87c9d48320ac21b87e7a1 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-0.4.0-hd590300_1.conda#9bfac7ccd94d54fd21a0501296d60424 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-keysyms-0.4.0-h8ee46fc_1.conda#632413adcd8bc16b515cab87a2932913 @@ -120,22 +120,22 @@ https://conda.anaconda.org/conda-forge/linux-64/brotli-1.1.0-hd590300_1.conda#f2 https://conda.anaconda.org/conda-forge/linux-64/brotli-python-1.1.0-py310hc6cd4ac_1.conda#1f95722c94f00b69af69a066c7433714 https://conda.anaconda.org/conda-forge/noarch/certifi-2023.7.22-pyhd8ed1ab_0.conda#7f3dbc9179b4dde7da98dfb151d0ad22 https://conda.anaconda.org/conda-forge/noarch/cfgv-3.3.1-pyhd8ed1ab_0.tar.bz2#ebb5f5f7dc4f1a3780ef7ea7738db08c -https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.0-pyhd8ed1ab_0.conda#fef8ef5f0a54546b9efee39468229917 
+https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.2-pyhd8ed1ab_0.conda#7f4a9e3fcff3f6356ae99244a014da6a https://conda.anaconda.org/conda-forge/noarch/click-8.1.7-unix_pyh707e725_0.conda#f3ad426304898027fc619827ff428eca -https://conda.anaconda.org/conda-forge/noarch/cloudpickle-2.2.1-pyhd8ed1ab_0.conda#b325bfc4cff7d7f8a868f1f7ecc4ed16 +https://conda.anaconda.org/conda-forge/noarch/cloudpickle-3.0.0-pyhd8ed1ab_0.conda#753d29fe41bb881e4b9c004f0abf973f https://conda.anaconda.org/conda-forge/noarch/colorama-0.4.6-pyhd8ed1ab_0.tar.bz2#3faab06a954c2a04039983f2c4a50d99 https://conda.anaconda.org/conda-forge/noarch/cycler-0.12.1-pyhd8ed1ab_0.conda#5cd86562580f274031ede6aa6aa24441 -https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.3-py310hc6cd4ac_0.conda#90bccd216944c486966c3846b339b42f +https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.5-py310hc6cd4ac_0.conda#9156537f8d99eb8c45d0f811e8164527 https://conda.anaconda.org/conda-forge/linux-64/dbus-1.13.6-h5008d03_3.tar.bz2#ecfff944ba3960ecb334b9a2663d708d https://conda.anaconda.org/conda-forge/noarch/distlib-0.3.7-pyhd8ed1ab_0.conda#12d8aae6994f342618443a8f05c652a0 https://conda.anaconda.org/conda-forge/linux-64/docutils-0.19-py310hff52083_1.tar.bz2#21b8fa2179290505e607f5ccd65b01b0 https://conda.anaconda.org/conda-forge/noarch/exceptiongroup-1.1.3-pyhd8ed1ab_0.conda#e6518222753f519e911e83136d2158d9 https://conda.anaconda.org/conda-forge/noarch/execnet-2.0.2-pyhd8ed1ab_0.conda#67de0d8241e1060a479e3c37793e26f9 -https://conda.anaconda.org/conda-forge/noarch/filelock-3.12.4-pyhd8ed1ab_0.conda#5173d4b8267a0699a43d73231e0b6596 +https://conda.anaconda.org/conda-forge/noarch/filelock-3.13.1-pyhd8ed1ab_0.conda#0c1729b74a8152fde6a38ba0a2ab9f45 https://conda.anaconda.org/conda-forge/linux-64/fontconfig-2.14.2-h14ed4e7_0.conda#0f69b688f52ff6da70bccb7ff7001d1d -https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.9.2-pyh1a96a4e_0.conda#9d15cd3a0e944594ab528da37dc72ecc +https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.10.0-pyhca7485f_0.conda#5b86cf1ceaaa9be2ec4627377e538db1 https://conda.anaconda.org/conda-forge/linux-64/gdk-pixbuf-2.42.10-h829c605_4.conda#252a696860674caf7a855e16f680d63a -https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.0-hfc55251_0.conda#e10134de3558dd95abda6987b5548f4f +https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.1-hfc55251_1.conda#a50918d10114a0bf80fb46c7cc692058 https://conda.anaconda.org/conda-forge/linux-64/gts-0.7.6-h977cf35_4.conda#4d8df0b0db060d33c9a702ada998a8fe https://conda.anaconda.org/conda-forge/noarch/idna-3.4-pyhd8ed1ab_0.tar.bz2#34272b248891bddccc64479f9a7fffed https://conda.anaconda.org/conda-forge/noarch/imagesize-1.4.1-pyhd8ed1ab_0.tar.bz2#7de5386c8fea29e76b303f37dde4c352 @@ -143,11 +143,11 @@ https://conda.anaconda.org/conda-forge/noarch/iniconfig-2.0.0-pyhd8ed1ab_0.conda https://conda.anaconda.org/conda-forge/noarch/iris-sample-data-2.4.0-pyhd8ed1ab_0.tar.bz2#18ee9c07cf945a33f92caf1ee3d23ad9 https://conda.anaconda.org/conda-forge/linux-64/kiwisolver-1.4.5-py310hd41b1e2_1.conda#b8d67603d43b23ce7e988a5d81a7ab79 https://conda.anaconda.org/conda-forge/linux-64/lcms2-2.15-hb7c19ff_3.conda#e96637dd92c5f340215c753a5c9a22d7 -https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-18_linux64_openblas.conda#bcddbb497582ece559465b9cd11042e7 +https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-19_linux64_openblas.conda#420f4e9be59d0dc9133a0f43f7bab3f3 
https://conda.anaconda.org/conda-forge/linux-64/libclang13-15.0.7-default_h9986a30_3.conda#1720df000b48e31842500323cb7be18c https://conda.anaconda.org/conda-forge/linux-64/libcups-2.3.3-h4637d8d_4.conda#d4529f4dff3057982a7617c7ac58fde3 https://conda.anaconda.org/conda-forge/linux-64/libcurl-8.4.0-hca28451_0.conda#1158ac1d2613b28685644931f11ee807 -https://conda.anaconda.org/conda-forge/linux-64/libpq-16.0-hfc447b1_1.conda#e4a9a5ba40123477db33e02a78dffb01 +https://conda.anaconda.org/conda-forge/linux-64/libpq-16.1-hfc447b1_0.conda#2b7f1893cf40b4ccdc0230bcd94d5ed9 https://conda.anaconda.org/conda-forge/linux-64/libsystemd0-254-h3516f8a_0.conda#df4b1cd0c91b4234fb02b5701a4cdddc https://conda.anaconda.org/conda-forge/linux-64/libwebp-1.3.2-h658648e_1.conda#0ebb65e8d86843865796c7c95a941f34 https://conda.anaconda.org/conda-forge/noarch/locket-1.0.0-pyhd8ed1ab_0.tar.bz2#91e27ef3d05cc772ce627e51cff111c4 @@ -181,7 +181,7 @@ https://conda.anaconda.org/conda-forge/noarch/toolz-0.12.0-pyhd8ed1ab_0.tar.bz2# https://conda.anaconda.org/conda-forge/linux-64/tornado-6.3.3-py310h2372a71_1.conda#b23e0147fa5f7a9380e06334c7266ad5 https://conda.anaconda.org/conda-forge/noarch/typing_extensions-4.8.0-pyha770c72_0.conda#5b1be40a26d10a06f6d4f1f9e19fa0c7 https://conda.anaconda.org/conda-forge/linux-64/unicodedata2-15.1.0-py310h2372a71_0.conda#72637c58d36d9475fda24700c9796f19 -https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.2-pyhd8ed1ab_0.conda#1ccd092478b3e0ee10d7a891adbf8a4f +https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.3-pyhd8ed1ab_0.conda#3fc026b9c87d091c4b34a6c997324ae8 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-image-0.4.0-h8ee46fc_1.conda#9d7bcddf49cbf727730af10e71022c73 https://conda.anaconda.org/conda-forge/linux-64/xkeyboard-config-2.40-hd590300_0.conda#07c15d846a2e4d673da22cbd85fdb6d2 https://conda.anaconda.org/conda-forge/linux-64/xorg-libxext-1.3.4-h0b41bf4_2.conda#82b6df12252e6f32402b96dacc656fec @@ -189,79 +189,79 @@ https://conda.anaconda.org/conda-forge/linux-64/xorg-libxrender-0.9.11-hd590300_ https://conda.anaconda.org/conda-forge/noarch/zict-3.0.0-pyhd8ed1ab_0.conda#cf30c2c15b82aacb07f9c09e28ff2275 https://conda.anaconda.org/conda-forge/noarch/zipp-3.17.0-pyhd8ed1ab_0.conda#2e4d6bc0b14e10f895fc6791a7d9b26a https://conda.anaconda.org/conda-forge/noarch/accessible-pygments-0.0.4-pyhd8ed1ab_0.conda#46a2e6e3dfa718ce3492018d5a110dd6 -https://conda.anaconda.org/conda-forge/noarch/babel-2.13.0-pyhd8ed1ab_0.conda#22541af7a9eb59fc6afcadb7ecdf9219 +https://conda.anaconda.org/conda-forge/noarch/babel-2.13.1-pyhd8ed1ab_0.conda#3ccff479c246692468f604df9c85ef26 https://conda.anaconda.org/conda-forge/noarch/beautifulsoup4-4.12.2-pyha770c72_0.conda#a362ff7d976217f8fa78c0f1c4f59717 https://conda.anaconda.org/conda-forge/linux-64/cairo-1.18.0-h3faef2a_0.conda#f907bb958910dc404647326ca80c263e https://conda.anaconda.org/conda-forge/linux-64/cffi-1.16.0-py310h2fee648_0.conda#45846a970e71ac98fd327da5d40a0a2c https://conda.anaconda.org/conda-forge/linux-64/coverage-7.3.2-py310h2372a71_0.conda#33c03cd5711885c920ddff676fb84f98 https://conda.anaconda.org/conda-forge/linux-64/cytoolz-0.12.2-py310h2372a71_1.conda#a79a93c3912e9e9b0afd3bf58f2c01d7 -https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.43.1-py310h2372a71_0.conda#c7d552c32b87beb736c9658441bf93a9 -https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.0-hfc55251_0.conda#2f55a36b549f51a7e0c2b1e3c3f0ccd4 +https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.44.3-py310h2372a71_0.conda#b4bfb11c034c257e20159e9001cd8e28 
+https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.1-hfc55251_1.conda#8d7242302bb3d03b9a690b6dda872603 https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.14.2-nompi_h4f84152_100.conda#2de6a9bc8083b49f09b2f6eb28d3ba3c https://conda.anaconda.org/conda-forge/noarch/importlib-metadata-6.8.0-pyha770c72_0.conda#4e9f59a060c3be52bc4ddc46ee9b6946 https://conda.anaconda.org/conda-forge/noarch/jinja2-3.1.2-pyhd8ed1ab_1.tar.bz2#c8490ed5c70966d232fdd389d0dbed37 -https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-18_linux64_openblas.conda#93dd9ab275ad888ed8113953769af78c +https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-19_linux64_openblas.conda#d12374af44575413fbbd4a217d46ea33 https://conda.anaconda.org/conda-forge/linux-64/libclang-15.0.7-default_h7634d5b_3.conda#0922208521c0463e690bbaebba7eb551 https://conda.anaconda.org/conda-forge/linux-64/libgd-2.3.3-h119a65a_9.conda#cfebc557e54905dadc355c0e9f003004 -https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-18_linux64_openblas.conda#a1244707531e5b143c420c70573c8ec5 +https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-19_linux64_openblas.conda#9f100edf65436e3eabc2a51fc00b2c37 https://conda.anaconda.org/conda-forge/linux-64/libxkbcommon-1.6.0-h5d7e998_0.conda#d8edd0e29db6fb6b6988e1a28d35d994 https://conda.anaconda.org/conda-forge/noarch/nodeenv-1.8.0-pyhd8ed1ab_0.conda#2a75b296096adabbabadd5e9782e5fcc https://conda.anaconda.org/conda-forge/noarch/partd-1.4.1-pyhd8ed1ab_0.conda#acf4b7c0bcd5fa3b0e05801c4d2accd6 -https://conda.anaconda.org/conda-forge/linux-64/pillow-10.0.1-py310h01dd4db_2.conda#9ef290f84bf1f3932e9b42117d9364ff -https://conda.anaconda.org/conda-forge/noarch/pip-23.2.1-pyhd8ed1ab_0.conda#e2783aa3f9235225eec92f9081c5b801 -https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_1.conda#900fd11ac61d4415d515583fcb570207 +https://conda.anaconda.org/conda-forge/linux-64/pillow-10.1.0-py310h01dd4db_0.conda#95d87a906d88b5824d7d36eeef091dba +https://conda.anaconda.org/conda-forge/noarch/pip-23.3.1-pyhd8ed1ab_0.conda#2400c0b86889f43aa52067161e1fb108 +https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_2.conda#b5e57a0c643da391bef850922963eece https://conda.anaconda.org/conda-forge/linux-64/pulseaudio-client-16.1-hb77b528_5.conda#ac902ff3c1c6d750dd0dfc93a974ab74 -https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.2-pyhd8ed1ab_0.conda#6dd662ff5ac9a783e5c940ce9f3fe649 +https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.3-pyhd8ed1ab_0.conda#5bdca0aca30b0ee62bb84854e027eae0 https://conda.anaconda.org/conda-forge/noarch/python-dateutil-2.8.2-pyhd8ed1ab_0.tar.bz2#dd999d1cc9f79e67dbb855c8924c7984 -https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.11-py310hc6cd4ac_1.conda#c7936ec7db24bb913671a1bc5eb2b79d +https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.12-py310hc6cd4ac_0.conda#68d5bfccaba2d89a7812098dd3966d9b https://conda.anaconda.org/conda-forge/noarch/typing-extensions-4.8.0-hd8ed1ab_0.conda#384462e63262a527bda564fa2d9126c0 -https://conda.anaconda.org/conda-forge/noarch/urllib3-2.0.6-pyhd8ed1ab_0.conda#d5f8944ff9ab24a292511c83dce33dea -https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.6-h98fc4e7_2.conda#1c95f7c612f9121353c4ef764678113e -https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.2.1-h3d44ed6_0.conda#98db5f8813f45e2b29766aff0e4a499c +https://conda.anaconda.org/conda-forge/noarch/urllib3-2.1.0-pyhd8ed1ab_0.conda#f8ced8ee63830dec7ecc1be048d1470a 
+https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.7-h98fc4e7_0.conda#6c919bafe5e03428a8e2ef319d7ef990 +https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.3.0-h3d44ed6_0.conda#5a6f6c00ef982a9bc83558d9ac8f64a0 https://conda.anaconda.org/conda-forge/noarch/importlib_metadata-6.8.0-hd8ed1ab_0.conda#b279b07ce18058034e5b3606ba103a8b https://conda.anaconda.org/conda-forge/linux-64/libnetcdf-4.9.2-nompi_h80fb2b6_112.conda#a19fa6cacf80c8a366572853d5890eb4 https://conda.anaconda.org/conda-forge/linux-64/numpy-1.26.0-py310hb13e2d6_0.conda#ac3b67e928cc71548efad9b522d42fef -https://conda.anaconda.org/conda-forge/noarch/pbr-5.11.1-pyhd8ed1ab_0.conda#5bde4ebca51438054099b9527c904ecb +https://conda.anaconda.org/conda-forge/noarch/pbr-6.0.0-pyhd8ed1ab_0.conda#8dbab5ba746ed14aa32cb232dc437f8f https://conda.anaconda.org/conda-forge/noarch/platformdirs-3.11.0-pyhd8ed1ab_0.conda#8f567c0a74aa44cf732f15773b4083b0 -https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py310h32c33b7_2.conda#bfb5c8fe5b2cce3ca6140cbd61ecef3b +https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py310h32c33b7_4.conda#124211262afed349430d9a3de6b51e8f https://conda.anaconda.org/conda-forge/linux-64/pyqt5-sip-12.12.2-py310hc6cd4ac_5.conda#ef5333594a958b25912002886b82b253 https://conda.anaconda.org/conda-forge/noarch/pytest-cov-4.1.0-pyhd8ed1ab_0.conda#06eb685a3a0b146347a58dda979485da -https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.3.1-pyhd8ed1ab_0.conda#816073bb54ef59f33f0f26c14f88311b +https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.4.0-pyhd8ed1ab_0.conda#b8dc6f9db1b9670e564b68277a79ffeb https://conda.anaconda.org/conda-forge/noarch/requests-2.31.0-pyhd8ed1ab_0.conda#a30144e4156cdbb236f99ebb49828f8b https://conda.anaconda.org/conda-forge/noarch/setuptools-scm-8.0.4-pyhd8ed1ab_0.conda#3b8ef3a2d80f3d89d0ae7e3c975e6c57 https://conda.anaconda.org/conda-forge/linux-64/ukkonen-1.0.1-py310hd41b1e2_4.conda#35e87277fba9944b8a975113538bb5df -https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.2-py310h1f7b6fc_2.conda#7925aaa4330045bc32d334b20f446902 -https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.1.1-py310hd41b1e2_1.conda#6a38f65d330b74495ad6990280486049 -https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.9.3-pyhd8ed1ab_0.conda#a7155483171dbc27a7385d1c26e779de -https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.6-h8e1006c_2.conda#3d8e98279bad55287f2ef9047996f33c -https://conda.anaconda.org/conda-forge/noarch/identify-2.5.30-pyhd8ed1ab_0.conda#b7a2e3bb89bda8c69839485c20aabadf -https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.2.0-py310hde88566_1008.tar.bz2#f9dd8a7a2fcc23eb2cd95cd817c949e7 +https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.3-py310h1f7b6fc_0.conda#31beda75384647959d5792a1a7dc571a +https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.2.0-py310hd41b1e2_0.conda#85d2aaa7af046528d339da1e813c3a9f +https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.11.0-pyhd8ed1ab_0.conda#3bf8f5c3fbab9e0cfffdf5914f021854 +https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.7-h8e1006c_0.conda#065e2c1d49afa3fdc1a01f1dacd6ab09 +https://conda.anaconda.org/conda-forge/noarch/identify-2.5.31-pyhd8ed1ab_0.conda#fea10604a45e974b110ea15a88913ebc +https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.3.0-py310h2372a71_1.conda#dfcf64f67961eb9686676f96fdb4b4d1 https://conda.anaconda.org/conda-forge/linux-64/netcdf-fortran-4.6.1-nompi_hacb5139_102.conda#487a1c19dd3eacfd055ad614e9acde87 
-https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.1-py310hcc13569_1.conda#a64a2b4907b96d4bf3c9dab59563ab50 +https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.3-py310hcc13569_0.conda#30a39c1064e5efc578d83c2a5f7cd749 https://conda.anaconda.org/conda-forge/linux-64/pango-1.50.14-ha41ecd1_2.conda#1a66c10f6a0da3dbd2f3a68127e7f6a0 https://conda.anaconda.org/conda-forge/linux-64/pywavelets-1.4.1-py310h1f7b6fc_1.conda#be6f0382440ccbf9fb01bb19ab1f1fc0 https://conda.anaconda.org/conda-forge/linux-64/scipy-1.11.3-py310hb13e2d6_1.conda#4260b359d8fbeab4f789a8b0f968079f https://conda.anaconda.org/conda-forge/linux-64/shapely-2.0.2-py310h7dcad9a_0.conda#0d7c35fe5cc1f436e368ddd500deb979 https://conda.anaconda.org/conda-forge/noarch/sphinxcontrib-apidoc-0.3.0-py_1.tar.bz2#855b087883443abb10f5faf6eef40860 -https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.4-pyhd8ed1ab_0.conda#c3feaf947264a59a125e8c26e98c3c5a -https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py310h1f7b6fc_3.conda#ce30848c8731fe993893a872218dd37a -https://conda.anaconda.org/conda-forge/noarch/distributed-2023.9.3-pyhd8ed1ab_0.conda#543fafdd7b325bf16199235ee5f20622 +https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.6-pyhd8ed1ab_0.conda#fb1fc875719e217ed799a7aae11d3be4 +https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py310h1f7b6fc_4.conda#0ca55ca20891d393846695354b32ebc5 +https://conda.anaconda.org/conda-forge/noarch/distributed-2023.11.0-pyhd8ed1ab_0.conda#a1ee8e3043eee1649f98704ea3e6feae https://conda.anaconda.org/conda-forge/linux-64/esmf-8.4.2-nompi_h9e768e6_3.conda#c330e87e698bae8e7381c0315cf25dd0 https://conda.anaconda.org/conda-forge/linux-64/gtk2-2.24.33-h90689f9_2.tar.bz2#957a0255ab58aaf394a91725d73ab422 https://conda.anaconda.org/conda-forge/noarch/imagehash-4.3.1-pyhd8ed1ab_0.tar.bz2#132ad832787a2156be1f1b309835001a https://conda.anaconda.org/conda-forge/linux-64/librsvg-2.56.3-h98fae49_0.conda#620e754f4344f4c27259ff460a2b9c50 -https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.0-py310h62c0568_2.conda#5c0d101ef8fc542778aa80795a759d08 -https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.4-nompi_py310hba70d50_103.conda#0850d2a119d51601b20c406a4909af4d +https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.1-py310h62c0568_0.conda#e650bd952e5618050ccb088bc0c6dfb4 +https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.5-nompi_py310hba70d50_100.conda#e19392760c7e4da3b9cb0ee5bf61bc4b https://conda.anaconda.org/conda-forge/noarch/pre-commit-3.5.0-pyha770c72_0.conda#964e3d762e427661c59263435a14c492 https://conda.anaconda.org/conda-forge/linux-64/python-stratify-0.3.0-py310h1f7b6fc_1.conda#857b828a13cdddf568958f7575b25b22 https://conda.anaconda.org/conda-forge/linux-64/qt-main-5.15.8-h82b777d_17.conda#4f01e33dbb406085a16a2813ab067e95 -https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py310h7cbd5c2_0.conda#7bfbace0788f477da1c26e10a358692d +https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py310hcc13569_1.conda#31ef447724fb19066a9d00a660dab1bd https://conda.anaconda.org/conda-forge/noarch/esmpy-8.4.2-pyhc1e730c_4.conda#ddcf387719b2e44df0cc4dd467643951 https://conda.anaconda.org/conda-forge/linux-64/graphviz-8.1.0-h28d9a01_0.conda#33628e0e3de7afd2c8172f76439894cb https://conda.anaconda.org/conda-forge/noarch/nc-time-axis-1.4.1-pyhd8ed1ab_0.tar.bz2#281b58948bf60a2582de9e548bcc5369 https://conda.anaconda.org/conda-forge/linux-64/pyqt-5.15.9-py310h04931ad_5.conda#f4fe7a6e3d7c78c9de048ea9dda21690 
-https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.0-py310hff52083_2.conda#cda26b4d722d7319ce66df50332ff09b -https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.1-pyhd8ed1ab_0.conda#78153addf629c51fab775ef360012ca3 +https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.1-py310hff52083_0.conda#acd62190c3822df888791592130aa286 +https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.3-pyhd8ed1ab_1.conda#fbe2993dd48f14724b90bf12e92cc164 https://conda.anaconda.org/conda-forge/noarch/sphinx-copybutton-0.5.2-pyhd8ed1ab_0.conda#ac832cc43adc79118cf6e23f1f9b8995 https://conda.anaconda.org/conda-forge/noarch/sphinx-design-0.5.0-pyhd8ed1ab_0.conda#264b3c697fa9cdade87eb0abe4440d54 https://conda.anaconda.org/conda-forge/noarch/sphinx-gallery-0.14.0-pyhd8ed1ab_0.conda#b3788794f88c9512393032e448428261 diff --git a/requirements/locks/py311-linux-64.lock b/requirements/locks/py311-linux-64.lock index 0bbb6bfdcd..96509aae97 100644 --- a/requirements/locks/py311-linux-64.lock +++ b/requirements/locks/py311-linux-64.lock @@ -1,6 +1,6 @@ # Generated by conda-lock. # platform: linux-64 -# input_hash: 40113e38fffa3a31ce64e60231c756c740914d9f0444edaeecd07e598851abc8 +# input_hash: f2209792c838739771cbeb38eb5659da1f847d44387a829c931482c65e2f8885 @EXPLICIT https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2#d7c89558ba9fa0495403155b64376d81 https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2023.7.22-hbcca054_0.conda#a73ecd2988327ad4c8f2c331482917f2 @@ -9,18 +9,18 @@ https://conda.anaconda.org/conda-forge/noarch/font-ttf-inconsolata-3.000-h77eed3 https://conda.anaconda.org/conda-forge/noarch/font-ttf-source-code-pro-2.038-h77eed37_0.tar.bz2#4d59c254e01d9cde7957100457e2d5fb https://conda.anaconda.org/conda-forge/noarch/font-ttf-ubuntu-0.83-hab24e00_0.tar.bz2#19410c3df09dfb12d1206132a1d357c5 https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.40-h41732ed_0.conda#7aca3059a1729aa76c597603f10b0dd3 -https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_2.conda#9172c297304f2a20134fc56c97fbe229 +https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_3.conda#937eaed008f6bf2191c5fe76f87755e9 https://conda.anaconda.org/conda-forge/linux-64/python_abi-3.11-4_cp311.conda#d786502c97404c94d7d58d258a445a65 https://conda.anaconda.org/conda-forge/noarch/tzdata-2023c-h71feb2d_0.conda#939e3e74d8be4dac89ce83b20de2492a https://conda.anaconda.org/conda-forge/noarch/fonts-conda-forge-1-0.tar.bz2#f766549260d6815b0c52253f1fb1bb29 -https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_2.conda#e2042154faafe61969556f28bade94b9 +https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_3.conda#7124cbb46b13d395bdde68f2d215c989 https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2#73aaf86a425cc6e73fcf236a5a46396d https://conda.anaconda.org/conda-forge/noarch/fonts-conda-ecosystem-1-0.tar.bz2#fee5683a3f04bd15cbd8318b096a27ab -https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_2.conda#c28003b0be0494f9a7664389146716ff +https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_3.conda#23fdf1fef05baeb7eadc2aed5fb0011f https://conda.anaconda.org/conda-forge/linux-64/alsa-lib-1.2.10-hd590300_0.conda#75dae9a4201732aa78a530b826ee5fe0 https://conda.anaconda.org/conda-forge/linux-64/attr-2.5.1-h166bdaf_1.tar.bz2#d9c69a24ad678ffce24c6543a0176b00 
-https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h7f98852_4.tar.bz2#a1fd65c7ccbf10880423d82bca54eb54 -https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.20.1-hd590300_0.conda#6642e4faa4804be3a0e7edfefbd16595 +https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-hd590300_5.conda#69b8b6202a07720f448be700e300ccf4 +https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.21.0-hd590300_0.conda#c06fa0440048270817b9e3142cc661bf https://conda.anaconda.org/conda-forge/linux-64/fribidi-1.0.10-h36c2ea0_0.tar.bz2#ac7bc6a654f8f41b352b38f4051135f8 https://conda.anaconda.org/conda-forge/linux-64/geos-3.12.0-h59595ed_0.conda#3fdf79ef322c8379ae83be491d805369 https://conda.anaconda.org/conda-forge/linux-64/gettext-0.21.1-h27087fc_0.tar.bz2#14947d8770185e5153fdd04d4673ed37 @@ -36,11 +36,11 @@ https://conda.anaconda.org/conda-forge/linux-64/libdeflate-1.19-hd590300_0.conda https://conda.anaconda.org/conda-forge/linux-64/libev-4.33-h516909a_1.tar.bz2#6f8720dff19e17ce5d48cfe7f3d2f0a3 https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.5.0-hcb278e6_1.conda#6305a3dd2752c76335295da4e581f2fd https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2#d645c6d2ac96843a2bfaccd2d62b3ac3 -https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_2.conda#78fdab09d9138851dde2b5fe2a11019e +https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_3.conda#c714d905cdfa0e70200f68b80cc04764 https://conda.anaconda.org/conda-forge/linux-64/libiconv-1.17-h166bdaf_0.tar.bz2#b62b52da46c39ee2bc3c162ac7f1804d https://conda.anaconda.org/conda-forge/linux-64/libjpeg-turbo-3.0.0-hd590300_1.conda#ea25936bb4080d843790b586850f82b8 https://conda.anaconda.org/conda-forge/linux-64/libmo_unpack-3.1.2-hf484d3e_1001.tar.bz2#95f32a6a5a666d33886ca5627239f03d -https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.0-hd590300_1.conda#854e3e1623b39777140f199c5f9ab952 +https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda#30fd6e37fe21f86f4bd26d6ee73eeec7 https://conda.anaconda.org/conda-forge/linux-64/libogg-1.3.4-h7f98852_1.tar.bz2#6e8cc2173440d77708196c5b93771680 https://conda.anaconda.org/conda-forge/linux-64/libopus-1.3.1-h7f98852_1.tar.bz2#15345e56d527b330e1cacbdf58676e8f https://conda.anaconda.org/conda-forge/linux-64/libtool-2.4.7-h27087fc_0.conda#f204c8ba400ec475452737094fb81d52 @@ -49,9 +49,9 @@ https://conda.anaconda.org/conda-forge/linux-64/libwebp-base-1.3.2-hd590300_0.co https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.2.13-hd590300_5.conda#f36c115f1ee199da648e0597ec2047ad https://conda.anaconda.org/conda-forge/linux-64/lz4-c-1.9.4-hcb278e6_0.conda#318b08df404f9c9be5712aaa5a6f0bb0 https://conda.anaconda.org/conda-forge/linux-64/mpg123-1.32.3-h59595ed_0.conda#bdadff838d5437aea83607ced8b37f75 -https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-hcb278e6_0.conda#681105bccc2a3f7f1a837d47d39c9179 +https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-h59595ed_2.conda#7dbaa197d7ba6032caf7ae7f32c1efa0 https://conda.anaconda.org/conda-forge/linux-64/nspr-4.35-h27087fc_0.conda#da0ec11a6454ae19bff5b02ed881a2b1 -https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.3-hd590300_0.conda#7bb88ce04c8deb9f7d763ae04a1da72f +https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.4-hd590300_0.conda#412ba6938c3e2abaca8b1129ea82e238 https://conda.anaconda.org/conda-forge/linux-64/pixman-0.42.2-h59595ed_0.conda#700edd63ccd5fc66b70b1c028cea9a68 
https://conda.anaconda.org/conda-forge/linux-64/pthread-stubs-0.4-h36c2ea0_1001.tar.bz2#22dad4df6e8630e8dff2428f6f6a7036 https://conda.anaconda.org/conda-forge/linux-64/snappy-1.1.10-h9fff704_0.conda#e6d228cd0bb74a51dd18f5bfce0b4115 @@ -74,21 +74,21 @@ https://conda.anaconda.org/conda-forge/linux-64/libcap-2.69-h0f662aa_0.conda#25c https://conda.anaconda.org/conda-forge/linux-64/libedit-3.1.20191231-he28a2e2_2.tar.bz2#4d331e44109e3f0e19b4cb8f9b82f3e1 https://conda.anaconda.org/conda-forge/linux-64/libevent-2.1.12-hf998b51_1.conda#a1cfcc585f0c42bf8d5546bb1dfb668d https://conda.anaconda.org/conda-forge/linux-64/libflac-1.4.3-h59595ed_0.conda#ee48bf17cc83a00f59ca1494d5646869 -https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_2.conda#e75a75a6eaf6f318dae2631158c46575 +https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_3.conda#73031c79546ad06f1fe62e57fdd021bc https://conda.anaconda.org/conda-forge/linux-64/libgpg-error-1.47-h71f35ed_0.conda#c2097d0b46367996f09b4e8e4920384a -https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.52.0-h61bc06f_0.conda#613955a50485812985c059e7b269f42e +https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.58.0-h47da74e_0.conda#9b13d5ee90fc9f09d54fd403247342b4 https://conda.anaconda.org/conda-forge/linux-64/libpng-1.6.39-h753d276_0.conda#e1c890aebdebbfbf87e2c917187b4416 -https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.43.2-h2797004_0.conda#4b441a1ee22397d5a27dc1126b849edd +https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.44.0-h2797004_0.conda#b58e6816d137f3aabf77d341dd5d732b https://conda.anaconda.org/conda-forge/linux-64/libssh2-1.11.0-h0841786_0.conda#1f5a58e686b13bcfde88b93f547d23fe https://conda.anaconda.org/conda-forge/linux-64/libudunits2-2.2.28-h40f5838_3.conda#4bdace082e911a3e1f1f0b721bed5b56 https://conda.anaconda.org/conda-forge/linux-64/libvorbis-1.3.7-h9c3ff4c_0.tar.bz2#309dec04b70a3cc0f1e84a4013683bc0 https://conda.anaconda.org/conda-forge/linux-64/libxcb-1.15-h0b41bf4_0.conda#33277193f5b92bad9fdd230eb700929c -https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.5-h232c23b_1.conda#f3858448893839820d4bcfb14ad3ecdf +https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.6-h232c23b_0.conda#427a3e59d66cb5d145020bd9c6493334 https://conda.anaconda.org/conda-forge/linux-64/libzip-1.10.1-h2629f0a_3.conda#ac79812548e7e8cf61f7b0abdef01d3b -https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_5.conda#1e8ef4090ca4f0d66404a7441e1dbf3c -https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.40-hc3806b6_0.tar.bz2#69e2c796349cd9b273890bee0febfe1b +https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_6.conda#80bf3b277c120dd294b51d404b931a75 +https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.42-hcad00b1_0.conda#679c8961826aa4b50653bce17ee52abe https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8228510_1.conda#47d31b792659ce70f470b5c82fdfb7a4 -https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-h2797004_0.conda#513336054f884f95d9fd925748f41ef3 +https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda#d453b98d9c83e71da0741bb0ff4d76bc https://conda.anaconda.org/conda-forge/linux-64/xorg-libsm-1.2.4-h7391055_0.conda#93ee23f12bc2e684548181256edd2cf6 https://conda.anaconda.org/conda-forge/linux-64/zlib-1.2.13-hd590300_5.conda#68c34ec6149623be41a1933ab996a209 https://conda.anaconda.org/conda-forge/linux-64/zstd-1.5.5-hfc55251_0.conda#04b88013080254850d6c01ed54810589 @@ 
-96,16 +96,16 @@ https://conda.anaconda.org/conda-forge/linux-64/blosc-1.21.5-h0f2a231_0.conda#00 https://conda.anaconda.org/conda-forge/linux-64/brotli-bin-1.1.0-hd590300_1.conda#39f910d205726805a958da408ca194ba https://conda.anaconda.org/conda-forge/linux-64/freetype-2.12.1-h267a509_2.conda#9ae35c3d96db2c94ce0cef86efdfa2cb https://conda.anaconda.org/conda-forge/linux-64/krb5-1.21.2-h659d440_0.conda#cd95826dbd331ed1be26bdf401432844 -https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.1-h166bdaf_0.tar.bz2#f967fc95089cd247ceed56eda31de3a9 -https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.0-hebfc3b9_0.conda#e618003da3547216310088478e475945 +https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.2-hd590300_0.conda#3d7d5e5cebf8af5aadb040732860f1b6 +https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.1-h783c2da_1.conda#70052d6c1e84643e30ffefb21ab6950f https://conda.anaconda.org/conda-forge/linux-64/libllvm15-15.0.7-h5cf9203_3.conda#9efe82d44b76a7529a1d702e5a37752e https://conda.anaconda.org/conda-forge/linux-64/libopenblas-0.3.24-pthreads_h413a1c8_0.conda#6e4ef6ca28655124dcde9bd500e44c32 https://conda.anaconda.org/conda-forge/linux-64/libsndfile-1.2.2-hc60ed4a_1.conda#ef1910918dd895516a769ed36b5b3a4e https://conda.anaconda.org/conda-forge/linux-64/libtiff-4.6.0-ha9c0a0a_2.conda#55ed21669b2015f77c180feb1dd41930 -https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_5.conda#b72f016c910ff9295b1377d3e17da3f2 +https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_6.conda#e87530d1b12dd7f4e0f856dc07358d60 https://conda.anaconda.org/conda-forge/linux-64/nss-3.94-h1d7d5a4_0.conda#7caef74bbfa730e014b20f0852068509 https://conda.anaconda.org/conda-forge/linux-64/python-3.11.6-hab00c5b_0_cpython.conda#b0dfbe2fcbfdb097d321bfd50ecddab1 -https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.43.2-h2c6b66d_0.conda#c37b95bcd6c6833dacfd5df0ae2f4303 +https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.44.0-h2c6b66d_0.conda#df56c636df4a98990462d66ac7be2330 https://conda.anaconda.org/conda-forge/linux-64/udunits2-2.2.28-h40f5838_3.conda#6bb8deb138f87c9d48320ac21b87e7a1 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-0.4.0-hd590300_1.conda#9bfac7ccd94d54fd21a0501296d60424 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-keysyms-0.4.0-h8ee46fc_1.conda#632413adcd8bc16b515cab87a2932913 @@ -120,22 +120,22 @@ https://conda.anaconda.org/conda-forge/linux-64/brotli-1.1.0-hd590300_1.conda#f2 https://conda.anaconda.org/conda-forge/linux-64/brotli-python-1.1.0-py311hb755f60_1.conda#cce9e7c3f1c307f2a5fb08a2922d6164 https://conda.anaconda.org/conda-forge/noarch/certifi-2023.7.22-pyhd8ed1ab_0.conda#7f3dbc9179b4dde7da98dfb151d0ad22 https://conda.anaconda.org/conda-forge/noarch/cfgv-3.3.1-pyhd8ed1ab_0.tar.bz2#ebb5f5f7dc4f1a3780ef7ea7738db08c -https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.0-pyhd8ed1ab_0.conda#fef8ef5f0a54546b9efee39468229917 +https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.2-pyhd8ed1ab_0.conda#7f4a9e3fcff3f6356ae99244a014da6a https://conda.anaconda.org/conda-forge/noarch/click-8.1.7-unix_pyh707e725_0.conda#f3ad426304898027fc619827ff428eca -https://conda.anaconda.org/conda-forge/noarch/cloudpickle-2.2.1-pyhd8ed1ab_0.conda#b325bfc4cff7d7f8a868f1f7ecc4ed16 +https://conda.anaconda.org/conda-forge/noarch/cloudpickle-3.0.0-pyhd8ed1ab_0.conda#753d29fe41bb881e4b9c004f0abf973f 
https://conda.anaconda.org/conda-forge/noarch/colorama-0.4.6-pyhd8ed1ab_0.tar.bz2#3faab06a954c2a04039983f2c4a50d99 https://conda.anaconda.org/conda-forge/noarch/cycler-0.12.1-pyhd8ed1ab_0.conda#5cd86562580f274031ede6aa6aa24441 -https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.3-py311hb755f60_0.conda#c54d71e8031a10d08f2e87ff81821588 +https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.5-py311hb755f60_0.conda#25b42509a68f96e612534af3fe2cf033 https://conda.anaconda.org/conda-forge/linux-64/dbus-1.13.6-h5008d03_3.tar.bz2#ecfff944ba3960ecb334b9a2663d708d https://conda.anaconda.org/conda-forge/noarch/distlib-0.3.7-pyhd8ed1ab_0.conda#12d8aae6994f342618443a8f05c652a0 https://conda.anaconda.org/conda-forge/linux-64/docutils-0.19-py311h38be061_1.tar.bz2#599159b0740e9b82e7eef0e8471be3c2 https://conda.anaconda.org/conda-forge/noarch/exceptiongroup-1.1.3-pyhd8ed1ab_0.conda#e6518222753f519e911e83136d2158d9 https://conda.anaconda.org/conda-forge/noarch/execnet-2.0.2-pyhd8ed1ab_0.conda#67de0d8241e1060a479e3c37793e26f9 -https://conda.anaconda.org/conda-forge/noarch/filelock-3.12.4-pyhd8ed1ab_0.conda#5173d4b8267a0699a43d73231e0b6596 +https://conda.anaconda.org/conda-forge/noarch/filelock-3.13.1-pyhd8ed1ab_0.conda#0c1729b74a8152fde6a38ba0a2ab9f45 https://conda.anaconda.org/conda-forge/linux-64/fontconfig-2.14.2-h14ed4e7_0.conda#0f69b688f52ff6da70bccb7ff7001d1d -https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.9.2-pyh1a96a4e_0.conda#9d15cd3a0e944594ab528da37dc72ecc +https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.10.0-pyhca7485f_0.conda#5b86cf1ceaaa9be2ec4627377e538db1 https://conda.anaconda.org/conda-forge/linux-64/gdk-pixbuf-2.42.10-h829c605_4.conda#252a696860674caf7a855e16f680d63a -https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.0-hfc55251_0.conda#e10134de3558dd95abda6987b5548f4f +https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.1-hfc55251_1.conda#a50918d10114a0bf80fb46c7cc692058 https://conda.anaconda.org/conda-forge/linux-64/gts-0.7.6-h977cf35_4.conda#4d8df0b0db060d33c9a702ada998a8fe https://conda.anaconda.org/conda-forge/noarch/idna-3.4-pyhd8ed1ab_0.tar.bz2#34272b248891bddccc64479f9a7fffed https://conda.anaconda.org/conda-forge/noarch/imagesize-1.4.1-pyhd8ed1ab_0.tar.bz2#7de5386c8fea29e76b303f37dde4c352 @@ -143,11 +143,11 @@ https://conda.anaconda.org/conda-forge/noarch/iniconfig-2.0.0-pyhd8ed1ab_0.conda https://conda.anaconda.org/conda-forge/noarch/iris-sample-data-2.4.0-pyhd8ed1ab_0.tar.bz2#18ee9c07cf945a33f92caf1ee3d23ad9 https://conda.anaconda.org/conda-forge/linux-64/kiwisolver-1.4.5-py311h9547e67_1.conda#2c65bdf442b0d37aad080c8a4e0d452f https://conda.anaconda.org/conda-forge/linux-64/lcms2-2.15-hb7c19ff_3.conda#e96637dd92c5f340215c753a5c9a22d7 -https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-18_linux64_openblas.conda#bcddbb497582ece559465b9cd11042e7 +https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-19_linux64_openblas.conda#420f4e9be59d0dc9133a0f43f7bab3f3 https://conda.anaconda.org/conda-forge/linux-64/libclang13-15.0.7-default_h9986a30_3.conda#1720df000b48e31842500323cb7be18c https://conda.anaconda.org/conda-forge/linux-64/libcups-2.3.3-h4637d8d_4.conda#d4529f4dff3057982a7617c7ac58fde3 https://conda.anaconda.org/conda-forge/linux-64/libcurl-8.4.0-hca28451_0.conda#1158ac1d2613b28685644931f11ee807 -https://conda.anaconda.org/conda-forge/linux-64/libpq-16.0-hfc447b1_1.conda#e4a9a5ba40123477db33e02a78dffb01 
+https://conda.anaconda.org/conda-forge/linux-64/libpq-16.1-hfc447b1_0.conda#2b7f1893cf40b4ccdc0230bcd94d5ed9
 https://conda.anaconda.org/conda-forge/linux-64/libsystemd0-254-h3516f8a_0.conda#df4b1cd0c91b4234fb02b5701a4cdddc
 https://conda.anaconda.org/conda-forge/linux-64/libwebp-1.3.2-h658648e_1.conda#0ebb65e8d86843865796c7c95a941f34
 https://conda.anaconda.org/conda-forge/noarch/locket-1.0.0-pyhd8ed1ab_0.tar.bz2#91e27ef3d05cc772ce627e51cff111c4
@@ -180,7 +180,7 @@ https://conda.anaconda.org/conda-forge/noarch/tomli-2.0.1-pyhd8ed1ab_0.tar.bz2#5
 https://conda.anaconda.org/conda-forge/noarch/toolz-0.12.0-pyhd8ed1ab_0.tar.bz2#92facfec94bc02d6ccf42e7173831a36
 https://conda.anaconda.org/conda-forge/linux-64/tornado-6.3.3-py311h459d7ec_1.conda#a700fcb5cedd3e72d0c75d095c7a6eda
 https://conda.anaconda.org/conda-forge/noarch/typing_extensions-4.8.0-pyha770c72_0.conda#5b1be40a26d10a06f6d4f1f9e19fa0c7
-https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.2-pyhd8ed1ab_0.conda#1ccd092478b3e0ee10d7a891adbf8a4f
+https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.3-pyhd8ed1ab_0.conda#3fc026b9c87d091c4b34a6c997324ae8
 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-image-0.4.0-h8ee46fc_1.conda#9d7bcddf49cbf727730af10e71022c73
 https://conda.anaconda.org/conda-forge/linux-64/xkeyboard-config-2.40-hd590300_0.conda#07c15d846a2e4d673da22cbd85fdb6d2
 https://conda.anaconda.org/conda-forge/linux-64/xorg-libxext-1.3.4-h0b41bf4_2.conda#82b6df12252e6f32402b96dacc656fec
@@ -188,79 +188,79 @@ https://conda.anaconda.org/conda-forge/linux-64/xorg-libxrender-0.9.11-hd590300_
 https://conda.anaconda.org/conda-forge/noarch/zict-3.0.0-pyhd8ed1ab_0.conda#cf30c2c15b82aacb07f9c09e28ff2275
 https://conda.anaconda.org/conda-forge/noarch/zipp-3.17.0-pyhd8ed1ab_0.conda#2e4d6bc0b14e10f895fc6791a7d9b26a
 https://conda.anaconda.org/conda-forge/noarch/accessible-pygments-0.0.4-pyhd8ed1ab_0.conda#46a2e6e3dfa718ce3492018d5a110dd6
-https://conda.anaconda.org/conda-forge/noarch/babel-2.13.0-pyhd8ed1ab_0.conda#22541af7a9eb59fc6afcadb7ecdf9219
+https://conda.anaconda.org/conda-forge/noarch/babel-2.13.1-pyhd8ed1ab_0.conda#3ccff479c246692468f604df9c85ef26
 https://conda.anaconda.org/conda-forge/noarch/beautifulsoup4-4.12.2-pyha770c72_0.conda#a362ff7d976217f8fa78c0f1c4f59717
 https://conda.anaconda.org/conda-forge/linux-64/cairo-1.18.0-h3faef2a_0.conda#f907bb958910dc404647326ca80c263e
 https://conda.anaconda.org/conda-forge/linux-64/cffi-1.16.0-py311hb3a22ac_0.conda#b3469563ac5e808b0cd92810d0697043
 https://conda.anaconda.org/conda-forge/linux-64/coverage-7.3.2-py311h459d7ec_0.conda#7b3145fed7adc7c63a0e08f6f29f5480
 https://conda.anaconda.org/conda-forge/linux-64/cytoolz-0.12.2-py311h459d7ec_1.conda#afe341dbe834ae76d2c23157ff00e633
-https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.43.1-py311h459d7ec_0.conda#ac995b680de3bdce2531c553b27dfe7e
-https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.0-hfc55251_0.conda#2f55a36b549f51a7e0c2b1e3c3f0ccd4
+https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.44.3-py311h459d7ec_0.conda#a811af88d3c522cf36f4674ef699021d
+https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.1-hfc55251_1.conda#8d7242302bb3d03b9a690b6dda872603
 https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.14.2-nompi_h4f84152_100.conda#2de6a9bc8083b49f09b2f6eb28d3ba3c
 https://conda.anaconda.org/conda-forge/noarch/importlib-metadata-6.8.0-pyha770c72_0.conda#4e9f59a060c3be52bc4ddc46ee9b6946
 https://conda.anaconda.org/conda-forge/noarch/jinja2-3.1.2-pyhd8ed1ab_1.tar.bz2#c8490ed5c70966d232fdd389d0dbed37
-https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-18_linux64_openblas.conda#93dd9ab275ad888ed8113953769af78c
+https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-19_linux64_openblas.conda#d12374af44575413fbbd4a217d46ea33
 https://conda.anaconda.org/conda-forge/linux-64/libclang-15.0.7-default_h7634d5b_3.conda#0922208521c0463e690bbaebba7eb551
 https://conda.anaconda.org/conda-forge/linux-64/libgd-2.3.3-h119a65a_9.conda#cfebc557e54905dadc355c0e9f003004
-https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-18_linux64_openblas.conda#a1244707531e5b143c420c70573c8ec5
+https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-19_linux64_openblas.conda#9f100edf65436e3eabc2a51fc00b2c37
 https://conda.anaconda.org/conda-forge/linux-64/libxkbcommon-1.6.0-h5d7e998_0.conda#d8edd0e29db6fb6b6988e1a28d35d994
 https://conda.anaconda.org/conda-forge/noarch/nodeenv-1.8.0-pyhd8ed1ab_0.conda#2a75b296096adabbabadd5e9782e5fcc
 https://conda.anaconda.org/conda-forge/noarch/partd-1.4.1-pyhd8ed1ab_0.conda#acf4b7c0bcd5fa3b0e05801c4d2accd6
-https://conda.anaconda.org/conda-forge/linux-64/pillow-10.0.1-py311ha6c5da5_2.conda#d6de249502f16ac151fcef9f743937b9
-https://conda.anaconda.org/conda-forge/noarch/pip-23.2.1-pyhd8ed1ab_0.conda#e2783aa3f9235225eec92f9081c5b801
-https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_1.conda#900fd11ac61d4415d515583fcb570207
+https://conda.anaconda.org/conda-forge/linux-64/pillow-10.1.0-py311ha6c5da5_0.conda#83a988daf5c49e57f7d2086fb6781fe8
+https://conda.anaconda.org/conda-forge/noarch/pip-23.3.1-pyhd8ed1ab_0.conda#2400c0b86889f43aa52067161e1fb108
+https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_2.conda#b5e57a0c643da391bef850922963eece
 https://conda.anaconda.org/conda-forge/linux-64/pulseaudio-client-16.1-hb77b528_5.conda#ac902ff3c1c6d750dd0dfc93a974ab74
-https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.2-pyhd8ed1ab_0.conda#6dd662ff5ac9a783e5c940ce9f3fe649
+https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.3-pyhd8ed1ab_0.conda#5bdca0aca30b0ee62bb84854e027eae0
 https://conda.anaconda.org/conda-forge/noarch/python-dateutil-2.8.2-pyhd8ed1ab_0.tar.bz2#dd999d1cc9f79e67dbb855c8924c7984
-https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.11-py311hb755f60_1.conda#e09eb6aad3607fb6f2c071a2c6a26e1d
+https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.12-py311hb755f60_0.conda#02336abab4cb5dd794010ef53c54bd09
 https://conda.anaconda.org/conda-forge/noarch/typing-extensions-4.8.0-hd8ed1ab_0.conda#384462e63262a527bda564fa2d9126c0
-https://conda.anaconda.org/conda-forge/noarch/urllib3-2.0.6-pyhd8ed1ab_0.conda#d5f8944ff9ab24a292511c83dce33dea
-https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.6-h98fc4e7_2.conda#1c95f7c612f9121353c4ef764678113e
-https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.2.1-h3d44ed6_0.conda#98db5f8813f45e2b29766aff0e4a499c
+https://conda.anaconda.org/conda-forge/noarch/urllib3-2.1.0-pyhd8ed1ab_0.conda#f8ced8ee63830dec7ecc1be048d1470a
+https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.7-h98fc4e7_0.conda#6c919bafe5e03428a8e2ef319d7ef990
+https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.3.0-h3d44ed6_0.conda#5a6f6c00ef982a9bc83558d9ac8f64a0
 https://conda.anaconda.org/conda-forge/noarch/importlib_metadata-6.8.0-hd8ed1ab_0.conda#b279b07ce18058034e5b3606ba103a8b
 https://conda.anaconda.org/conda-forge/linux-64/libnetcdf-4.9.2-nompi_h80fb2b6_112.conda#a19fa6cacf80c8a366572853d5890eb4
 https://conda.anaconda.org/conda-forge/linux-64/numpy-1.26.0-py311h64a7726_0.conda#bf16a9f625126e378302f08e7ed67517
-https://conda.anaconda.org/conda-forge/noarch/pbr-5.11.1-pyhd8ed1ab_0.conda#5bde4ebca51438054099b9527c904ecb
+https://conda.anaconda.org/conda-forge/noarch/pbr-6.0.0-pyhd8ed1ab_0.conda#8dbab5ba746ed14aa32cb232dc437f8f
 https://conda.anaconda.org/conda-forge/noarch/platformdirs-3.11.0-pyhd8ed1ab_0.conda#8f567c0a74aa44cf732f15773b4083b0
-https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py311h1facc83_2.conda#8298afb85a731b02dac82e02b6e13ae0
+https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py311h1facc83_4.conda#75d504c6787edc377ebdba087a26a61b
 https://conda.anaconda.org/conda-forge/linux-64/pyqt5-sip-12.12.2-py311hb755f60_5.conda#e4d262cc3600e70b505a6761d29f6207
 https://conda.anaconda.org/conda-forge/noarch/pytest-cov-4.1.0-pyhd8ed1ab_0.conda#06eb685a3a0b146347a58dda979485da
-https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.3.1-pyhd8ed1ab_0.conda#816073bb54ef59f33f0f26c14f88311b
+https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.4.0-pyhd8ed1ab_0.conda#b8dc6f9db1b9670e564b68277a79ffeb
 https://conda.anaconda.org/conda-forge/noarch/requests-2.31.0-pyhd8ed1ab_0.conda#a30144e4156cdbb236f99ebb49828f8b
 https://conda.anaconda.org/conda-forge/noarch/setuptools-scm-8.0.4-pyhd8ed1ab_0.conda#3b8ef3a2d80f3d89d0ae7e3c975e6c57
 https://conda.anaconda.org/conda-forge/linux-64/ukkonen-1.0.1-py311h9547e67_4.conda#586da7df03b68640de14dc3e8bcbf76f
-https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.2-py311h1f0f07a_2.conda#571c0c47e8dbcf03577935ac818b6696
-https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.1.1-py311h9547e67_1.conda#52d3de443952d33c5cee6b24b172ce96
-https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.9.3-pyhd8ed1ab_0.conda#a7155483171dbc27a7385d1c26e779de
-https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.6-h8e1006c_2.conda#3d8e98279bad55287f2ef9047996f33c
-https://conda.anaconda.org/conda-forge/noarch/identify-2.5.30-pyhd8ed1ab_0.conda#b7a2e3bb89bda8c69839485c20aabadf
-https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.2.0-py311h4c7f6c3_1008.tar.bz2#5998dff78c3b82a07ad77f2ae1ec1c44
+https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.3-py311h1f0f07a_0.conda#b7e6d52b39e199238c3400cafaabafb3
+https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.2.0-py311h9547e67_0.conda#40828c5b36ef52433e21f89943e09f33
+https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.11.0-pyhd8ed1ab_0.conda#3bf8f5c3fbab9e0cfffdf5914f021854
+https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.7-h8e1006c_0.conda#065e2c1d49afa3fdc1a01f1dacd6ab09
+https://conda.anaconda.org/conda-forge/noarch/identify-2.5.31-pyhd8ed1ab_0.conda#fea10604a45e974b110ea15a88913ebc
+https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.3.0-py311h459d7ec_1.conda#45b8d355bbcdd27588c2d266bcfdff84
 https://conda.anaconda.org/conda-forge/linux-64/netcdf-fortran-4.6.1-nompi_hacb5139_102.conda#487a1c19dd3eacfd055ad614e9acde87
-https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.1-py311h320fe9a_1.conda#a4371a95a8ae703a22949af28467b93d
+https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.3-py311h320fe9a_0.conda#3ea3486e16d559dfcb539070ed330a1e
 https://conda.anaconda.org/conda-forge/linux-64/pango-1.50.14-ha41ecd1_2.conda#1a66c10f6a0da3dbd2f3a68127e7f6a0
 https://conda.anaconda.org/conda-forge/linux-64/pywavelets-1.4.1-py311h1f0f07a_1.conda#86b71ff85f3e4c8a98b5bace6d9c4565
 https://conda.anaconda.org/conda-forge/linux-64/scipy-1.11.3-py311h64a7726_1.conda#e4b4d3b764e2d029477d0db88248a8b5
 https://conda.anaconda.org/conda-forge/linux-64/shapely-2.0.2-py311he06c224_0.conda#c90e2469d7512f3bba893533a82d7a02
 https://conda.anaconda.org/conda-forge/noarch/sphinxcontrib-apidoc-0.3.0-py_1.tar.bz2#855b087883443abb10f5faf6eef40860
-https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.4-pyhd8ed1ab_0.conda#c3feaf947264a59a125e8c26e98c3c5a
-https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py311h1f0f07a_3.conda#4ac4de995f18d232af077e7743568b97
-https://conda.anaconda.org/conda-forge/noarch/distributed-2023.9.3-pyhd8ed1ab_0.conda#543fafdd7b325bf16199235ee5f20622
+https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.6-pyhd8ed1ab_0.conda#fb1fc875719e217ed799a7aae11d3be4
+https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py311h1f0f07a_4.conda#1e105c1a8ea2163507726144b401eb1b
+https://conda.anaconda.org/conda-forge/noarch/distributed-2023.11.0-pyhd8ed1ab_0.conda#a1ee8e3043eee1649f98704ea3e6feae
 https://conda.anaconda.org/conda-forge/linux-64/esmf-8.4.2-nompi_h9e768e6_3.conda#c330e87e698bae8e7381c0315cf25dd0
 https://conda.anaconda.org/conda-forge/linux-64/gtk2-2.24.33-h90689f9_2.tar.bz2#957a0255ab58aaf394a91725d73ab422
 https://conda.anaconda.org/conda-forge/noarch/imagehash-4.3.1-pyhd8ed1ab_0.tar.bz2#132ad832787a2156be1f1b309835001a
 https://conda.anaconda.org/conda-forge/linux-64/librsvg-2.56.3-h98fae49_0.conda#620e754f4344f4c27259ff460a2b9c50
-https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.0-py311h54ef318_2.conda#5655371cc61b8c31c369a7e709acb294
-https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.4-nompi_py311he8ad708_103.conda#97b45ba4ff4e46a07dd6c60040256538
+https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.1-py311h54ef318_0.conda#201fdabdb86bb8fb6e99fa3f0dab8122
+https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.5-nompi_py311he8ad708_100.conda#597b1ad6cb7011b7561c20ea30295cae
 https://conda.anaconda.org/conda-forge/noarch/pre-commit-3.5.0-pyha770c72_0.conda#964e3d762e427661c59263435a14c492
 https://conda.anaconda.org/conda-forge/linux-64/python-stratify-0.3.0-py311h1f0f07a_1.conda#cd36a89a048ad2bcc6d8b43f648fb1d0
 https://conda.anaconda.org/conda-forge/linux-64/qt-main-5.15.8-h82b777d_17.conda#4f01e33dbb406085a16a2813ab067e95
-https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py311h320fe9a_0.conda#1271b2375735e2aaa6d6770dbe2ad087
+https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py311h320fe9a_1.conda#10d1806e20da040c58c36deddf51c70c
 https://conda.anaconda.org/conda-forge/noarch/esmpy-8.4.2-pyhc1e730c_4.conda#ddcf387719b2e44df0cc4dd467643951
 https://conda.anaconda.org/conda-forge/linux-64/graphviz-8.1.0-h28d9a01_0.conda#33628e0e3de7afd2c8172f76439894cb
 https://conda.anaconda.org/conda-forge/noarch/nc-time-axis-1.4.1-pyhd8ed1ab_0.tar.bz2#281b58948bf60a2582de9e548bcc5369
 https://conda.anaconda.org/conda-forge/linux-64/pyqt-5.15.9-py311hf0fb5b6_5.conda#ec7e45bc76d9d0b69a74a2075932b8e8
-https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.0-py311h38be061_2.conda#0289918d4a09bbd0b85fd23ddf1c3ac1
-https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.1-pyhd8ed1ab_0.conda#78153addf629c51fab775ef360012ca3
+https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.1-py311h38be061_0.conda#8a21cbbb87357c701fa44f4cfa4e23d7
+https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.3-pyhd8ed1ab_1.conda#fbe2993dd48f14724b90bf12e92cc164
 https://conda.anaconda.org/conda-forge/noarch/sphinx-copybutton-0.5.2-pyhd8ed1ab_0.conda#ac832cc43adc79118cf6e23f1f9b8995
 https://conda.anaconda.org/conda-forge/noarch/sphinx-design-0.5.0-pyhd8ed1ab_0.conda#264b3c697fa9cdade87eb0abe4440d54
 https://conda.anaconda.org/conda-forge/noarch/sphinx-gallery-0.14.0-pyhd8ed1ab_0.conda#b3788794f88c9512393032e448428261
diff --git a/requirements/locks/py39-linux-64.lock b/requirements/locks/py39-linux-64.lock
index 167fc29e4c..4a7d83d4c7 100644
--- a/requirements/locks/py39-linux-64.lock
+++ b/requirements/locks/py39-linux-64.lock
@@ -1,6 +1,6 @@
 # Generated by conda-lock.
 # platform: linux-64
-# input_hash: cc8b627bc99f75128e66e8d5f19fad191f76de7f27898db96e0eef7d6dc6e83a
+# input_hash: 26c72df308ccfddf5aa1ad644bf5158095cf3032f3abe9322a6f1cdaab977a7c
 @EXPLICIT
 https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2#d7c89558ba9fa0495403155b64376d81
 https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2023.7.22-hbcca054_0.conda#a73ecd2988327ad4c8f2c331482917f2
@@ -9,18 +9,18 @@ https://conda.anaconda.org/conda-forge/noarch/font-ttf-inconsolata-3.000-h77eed3
 https://conda.anaconda.org/conda-forge/noarch/font-ttf-source-code-pro-2.038-h77eed37_0.tar.bz2#4d59c254e01d9cde7957100457e2d5fb
 https://conda.anaconda.org/conda-forge/noarch/font-ttf-ubuntu-0.83-hab24e00_0.tar.bz2#19410c3df09dfb12d1206132a1d357c5
 https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.40-h41732ed_0.conda#7aca3059a1729aa76c597603f10b0dd3
-https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_2.conda#9172c297304f2a20134fc56c97fbe229
+https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-13.2.0-h7e041cc_3.conda#937eaed008f6bf2191c5fe76f87755e9
 https://conda.anaconda.org/conda-forge/linux-64/python_abi-3.9-4_cp39.conda#bfe4b3259a8ac6cdf0037752904da6a7
 https://conda.anaconda.org/conda-forge/noarch/tzdata-2023c-h71feb2d_0.conda#939e3e74d8be4dac89ce83b20de2492a
 https://conda.anaconda.org/conda-forge/noarch/fonts-conda-forge-1-0.tar.bz2#f766549260d6815b0c52253f1fb1bb29
-https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_2.conda#e2042154faafe61969556f28bade94b9
+https://conda.anaconda.org/conda-forge/linux-64/libgomp-13.2.0-h807b86a_3.conda#7124cbb46b13d395bdde68f2d215c989
 https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2#73aaf86a425cc6e73fcf236a5a46396d
 https://conda.anaconda.org/conda-forge/noarch/fonts-conda-ecosystem-1-0.tar.bz2#fee5683a3f04bd15cbd8318b096a27ab
-https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_2.conda#c28003b0be0494f9a7664389146716ff
+https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-13.2.0-h807b86a_3.conda#23fdf1fef05baeb7eadc2aed5fb0011f
 https://conda.anaconda.org/conda-forge/linux-64/alsa-lib-1.2.10-hd590300_0.conda#75dae9a4201732aa78a530b826ee5fe0
 https://conda.anaconda.org/conda-forge/linux-64/attr-2.5.1-h166bdaf_1.tar.bz2#d9c69a24ad678ffce24c6543a0176b00
-https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h7f98852_4.tar.bz2#a1fd65c7ccbf10880423d82bca54eb54
-https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.20.1-hd590300_0.conda#6642e4faa4804be3a0e7edfefbd16595
+https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-hd590300_5.conda#69b8b6202a07720f448be700e300ccf4
+https://conda.anaconda.org/conda-forge/linux-64/c-ares-1.21.0-hd590300_0.conda#c06fa0440048270817b9e3142cc661bf
 https://conda.anaconda.org/conda-forge/linux-64/fribidi-1.0.10-h36c2ea0_0.tar.bz2#ac7bc6a654f8f41b352b38f4051135f8
 https://conda.anaconda.org/conda-forge/linux-64/geos-3.12.0-h59595ed_0.conda#3fdf79ef322c8379ae83be491d805369
 https://conda.anaconda.org/conda-forge/linux-64/gettext-0.21.1-h27087fc_0.tar.bz2#14947d8770185e5153fdd04d4673ed37
@@ -36,11 +36,11 @@ https://conda.anaconda.org/conda-forge/linux-64/libdeflate-1.19-hd590300_0.conda
 https://conda.anaconda.org/conda-forge/linux-64/libev-4.33-h516909a_1.tar.bz2#6f8720dff19e17ce5d48cfe7f3d2f0a3
 https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.5.0-hcb278e6_1.conda#6305a3dd2752c76335295da4e581f2fd
 https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2#d645c6d2ac96843a2bfaccd2d62b3ac3
-https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_2.conda#78fdab09d9138851dde2b5fe2a11019e
+https://conda.anaconda.org/conda-forge/linux-64/libgfortran5-13.2.0-ha4646dd_3.conda#c714d905cdfa0e70200f68b80cc04764
 https://conda.anaconda.org/conda-forge/linux-64/libiconv-1.17-h166bdaf_0.tar.bz2#b62b52da46c39ee2bc3c162ac7f1804d
 https://conda.anaconda.org/conda-forge/linux-64/libjpeg-turbo-3.0.0-hd590300_1.conda#ea25936bb4080d843790b586850f82b8
 https://conda.anaconda.org/conda-forge/linux-64/libmo_unpack-3.1.2-hf484d3e_1001.tar.bz2#95f32a6a5a666d33886ca5627239f03d
-https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.0-hd590300_1.conda#854e3e1623b39777140f199c5f9ab952
+https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda#30fd6e37fe21f86f4bd26d6ee73eeec7
 https://conda.anaconda.org/conda-forge/linux-64/libogg-1.3.4-h7f98852_1.tar.bz2#6e8cc2173440d77708196c5b93771680
 https://conda.anaconda.org/conda-forge/linux-64/libopus-1.3.1-h7f98852_1.tar.bz2#15345e56d527b330e1cacbdf58676e8f
 https://conda.anaconda.org/conda-forge/linux-64/libtool-2.4.7-h27087fc_0.conda#f204c8ba400ec475452737094fb81d52
@@ -49,9 +49,9 @@ https://conda.anaconda.org/conda-forge/linux-64/libwebp-base-1.3.2-hd590300_0.co
 https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.2.13-hd590300_5.conda#f36c115f1ee199da648e0597ec2047ad
 https://conda.anaconda.org/conda-forge/linux-64/lz4-c-1.9.4-hcb278e6_0.conda#318b08df404f9c9be5712aaa5a6f0bb0
 https://conda.anaconda.org/conda-forge/linux-64/mpg123-1.32.3-h59595ed_0.conda#bdadff838d5437aea83607ced8b37f75
-https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-hcb278e6_0.conda#681105bccc2a3f7f1a837d47d39c9179
+https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.4-h59595ed_2.conda#7dbaa197d7ba6032caf7ae7f32c1efa0
 https://conda.anaconda.org/conda-forge/linux-64/nspr-4.35-h27087fc_0.conda#da0ec11a6454ae19bff5b02ed881a2b1
-https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.3-hd590300_0.conda#7bb88ce04c8deb9f7d763ae04a1da72f
+https://conda.anaconda.org/conda-forge/linux-64/openssl-3.1.4-hd590300_0.conda#412ba6938c3e2abaca8b1129ea82e238
 https://conda.anaconda.org/conda-forge/linux-64/pixman-0.42.2-h59595ed_0.conda#700edd63ccd5fc66b70b1c028cea9a68
 https://conda.anaconda.org/conda-forge/linux-64/pthread-stubs-0.4-h36c2ea0_1001.tar.bz2#22dad4df6e8630e8dff2428f6f6a7036
 https://conda.anaconda.org/conda-forge/linux-64/snappy-1.1.10-h9fff704_0.conda#e6d228cd0bb74a51dd18f5bfce0b4115
@@ -74,21 +74,21 @@ https://conda.anaconda.org/conda-forge/linux-64/libcap-2.69-h0f662aa_0.conda#25c
 https://conda.anaconda.org/conda-forge/linux-64/libedit-3.1.20191231-he28a2e2_2.tar.bz2#4d331e44109e3f0e19b4cb8f9b82f3e1
 https://conda.anaconda.org/conda-forge/linux-64/libevent-2.1.12-hf998b51_1.conda#a1cfcc585f0c42bf8d5546bb1dfb668d
 https://conda.anaconda.org/conda-forge/linux-64/libflac-1.4.3-h59595ed_0.conda#ee48bf17cc83a00f59ca1494d5646869
-https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_2.conda#e75a75a6eaf6f318dae2631158c46575
+https://conda.anaconda.org/conda-forge/linux-64/libgfortran-ng-13.2.0-h69a702a_3.conda#73031c79546ad06f1fe62e57fdd021bc
 https://conda.anaconda.org/conda-forge/linux-64/libgpg-error-1.47-h71f35ed_0.conda#c2097d0b46367996f09b4e8e4920384a
-https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.52.0-h61bc06f_0.conda#613955a50485812985c059e7b269f42e
+https://conda.anaconda.org/conda-forge/linux-64/libnghttp2-1.58.0-h47da74e_0.conda#9b13d5ee90fc9f09d54fd403247342b4
 https://conda.anaconda.org/conda-forge/linux-64/libpng-1.6.39-h753d276_0.conda#e1c890aebdebbfbf87e2c917187b4416
-https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.43.2-h2797004_0.conda#4b441a1ee22397d5a27dc1126b849edd
+https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.44.0-h2797004_0.conda#b58e6816d137f3aabf77d341dd5d732b
 https://conda.anaconda.org/conda-forge/linux-64/libssh2-1.11.0-h0841786_0.conda#1f5a58e686b13bcfde88b93f547d23fe
 https://conda.anaconda.org/conda-forge/linux-64/libudunits2-2.2.28-h40f5838_3.conda#4bdace082e911a3e1f1f0b721bed5b56
 https://conda.anaconda.org/conda-forge/linux-64/libvorbis-1.3.7-h9c3ff4c_0.tar.bz2#309dec04b70a3cc0f1e84a4013683bc0
 https://conda.anaconda.org/conda-forge/linux-64/libxcb-1.15-h0b41bf4_0.conda#33277193f5b92bad9fdd230eb700929c
-https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.5-h232c23b_1.conda#f3858448893839820d4bcfb14ad3ecdf
+https://conda.anaconda.org/conda-forge/linux-64/libxml2-2.11.6-h232c23b_0.conda#427a3e59d66cb5d145020bd9c6493334
 https://conda.anaconda.org/conda-forge/linux-64/libzip-1.10.1-h2629f0a_3.conda#ac79812548e7e8cf61f7b0abdef01d3b
-https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_5.conda#1e8ef4090ca4f0d66404a7441e1dbf3c
-https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.40-hc3806b6_0.tar.bz2#69e2c796349cd9b273890bee0febfe1b
+https://conda.anaconda.org/conda-forge/linux-64/mysql-common-8.0.33-hf1915f5_6.conda#80bf3b277c120dd294b51d404b931a75
+https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.42-hcad00b1_0.conda#679c8961826aa4b50653bce17ee52abe
 https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8228510_1.conda#47d31b792659ce70f470b5c82fdfb7a4
-https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-h2797004_0.conda#513336054f884f95d9fd925748f41ef3
+https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda#d453b98d9c83e71da0741bb0ff4d76bc
 https://conda.anaconda.org/conda-forge/linux-64/xorg-libsm-1.2.4-h7391055_0.conda#93ee23f12bc2e684548181256edd2cf6
 https://conda.anaconda.org/conda-forge/linux-64/zlib-1.2.13-hd590300_5.conda#68c34ec6149623be41a1933ab996a209
 https://conda.anaconda.org/conda-forge/linux-64/zstd-1.5.5-hfc55251_0.conda#04b88013080254850d6c01ed54810589
@@ -96,16 +96,16 @@ https://conda.anaconda.org/conda-forge/linux-64/blosc-1.21.5-h0f2a231_0.conda#00
 https://conda.anaconda.org/conda-forge/linux-64/brotli-bin-1.1.0-hd590300_1.conda#39f910d205726805a958da408ca194ba
 https://conda.anaconda.org/conda-forge/linux-64/freetype-2.12.1-h267a509_2.conda#9ae35c3d96db2c94ce0cef86efdfa2cb
 https://conda.anaconda.org/conda-forge/linux-64/krb5-1.21.2-h659d440_0.conda#cd95826dbd331ed1be26bdf401432844
-https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.1-h166bdaf_0.tar.bz2#f967fc95089cd247ceed56eda31de3a9
-https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.0-hebfc3b9_0.conda#e618003da3547216310088478e475945
+https://conda.anaconda.org/conda-forge/linux-64/libgcrypt-1.10.2-hd590300_0.conda#3d7d5e5cebf8af5aadb040732860f1b6
+https://conda.anaconda.org/conda-forge/linux-64/libglib-2.78.1-h783c2da_1.conda#70052d6c1e84643e30ffefb21ab6950f
 https://conda.anaconda.org/conda-forge/linux-64/libllvm15-15.0.7-h5cf9203_3.conda#9efe82d44b76a7529a1d702e5a37752e
 https://conda.anaconda.org/conda-forge/linux-64/libopenblas-0.3.24-pthreads_h413a1c8_0.conda#6e4ef6ca28655124dcde9bd500e44c32
 https://conda.anaconda.org/conda-forge/linux-64/libsndfile-1.2.2-hc60ed4a_1.conda#ef1910918dd895516a769ed36b5b3a4e
 https://conda.anaconda.org/conda-forge/linux-64/libtiff-4.6.0-ha9c0a0a_2.conda#55ed21669b2015f77c180feb1dd41930
-https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_5.conda#b72f016c910ff9295b1377d3e17da3f2
+https://conda.anaconda.org/conda-forge/linux-64/mysql-libs-8.0.33-hca2cd23_6.conda#e87530d1b12dd7f4e0f856dc07358d60
 https://conda.anaconda.org/conda-forge/linux-64/nss-3.94-h1d7d5a4_0.conda#7caef74bbfa730e014b20f0852068509
 https://conda.anaconda.org/conda-forge/linux-64/python-3.9.18-h0755675_0_cpython.conda#3ede353bc605068d9677e700b1847382
-https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.43.2-h2c6b66d_0.conda#c37b95bcd6c6833dacfd5df0ae2f4303
+https://conda.anaconda.org/conda-forge/linux-64/sqlite-3.44.0-h2c6b66d_0.conda#df56c636df4a98990462d66ac7be2330
 https://conda.anaconda.org/conda-forge/linux-64/udunits2-2.2.28-h40f5838_3.conda#6bb8deb138f87c9d48320ac21b87e7a1
 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-0.4.0-hd590300_1.conda#9bfac7ccd94d54fd21a0501296d60424
 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-keysyms-0.4.0-h8ee46fc_1.conda#632413adcd8bc16b515cab87a2932913
@@ -120,22 +120,22 @@ https://conda.anaconda.org/conda-forge/linux-64/brotli-1.1.0-hd590300_1.conda#f2
 https://conda.anaconda.org/conda-forge/linux-64/brotli-python-1.1.0-py39h3d6467e_1.conda#c48418c8b35f1d59ae9ae1174812b40a
 https://conda.anaconda.org/conda-forge/noarch/certifi-2023.7.22-pyhd8ed1ab_0.conda#7f3dbc9179b4dde7da98dfb151d0ad22
 https://conda.anaconda.org/conda-forge/noarch/cfgv-3.3.1-pyhd8ed1ab_0.tar.bz2#ebb5f5f7dc4f1a3780ef7ea7738db08c
-https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.0-pyhd8ed1ab_0.conda#fef8ef5f0a54546b9efee39468229917
+https://conda.anaconda.org/conda-forge/noarch/charset-normalizer-3.3.2-pyhd8ed1ab_0.conda#7f4a9e3fcff3f6356ae99244a014da6a
 https://conda.anaconda.org/conda-forge/noarch/click-8.1.7-unix_pyh707e725_0.conda#f3ad426304898027fc619827ff428eca
-https://conda.anaconda.org/conda-forge/noarch/cloudpickle-2.2.1-pyhd8ed1ab_0.conda#b325bfc4cff7d7f8a868f1f7ecc4ed16
+https://conda.anaconda.org/conda-forge/noarch/cloudpickle-3.0.0-pyhd8ed1ab_0.conda#753d29fe41bb881e4b9c004f0abf973f
 https://conda.anaconda.org/conda-forge/noarch/colorama-0.4.6-pyhd8ed1ab_0.tar.bz2#3faab06a954c2a04039983f2c4a50d99
 https://conda.anaconda.org/conda-forge/noarch/cycler-0.12.1-pyhd8ed1ab_0.conda#5cd86562580f274031ede6aa6aa24441
-https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.3-py39h3d6467e_0.conda#13febcb5470ba004eeb3e7883fa66e79
+https://conda.anaconda.org/conda-forge/linux-64/cython-3.0.5-py39h3d6467e_0.conda#8a666e66408ec097bf7b6d44353d6294
 https://conda.anaconda.org/conda-forge/linux-64/dbus-1.13.6-h5008d03_3.tar.bz2#ecfff944ba3960ecb334b9a2663d708d
 https://conda.anaconda.org/conda-forge/noarch/distlib-0.3.7-pyhd8ed1ab_0.conda#12d8aae6994f342618443a8f05c652a0
 https://conda.anaconda.org/conda-forge/linux-64/docutils-0.19-py39hf3d152e_1.tar.bz2#adb733ec2ee669f6d010758d054da60f
 https://conda.anaconda.org/conda-forge/noarch/exceptiongroup-1.1.3-pyhd8ed1ab_0.conda#e6518222753f519e911e83136d2158d9
 https://conda.anaconda.org/conda-forge/noarch/execnet-2.0.2-pyhd8ed1ab_0.conda#67de0d8241e1060a479e3c37793e26f9
-https://conda.anaconda.org/conda-forge/noarch/filelock-3.12.4-pyhd8ed1ab_0.conda#5173d4b8267a0699a43d73231e0b6596
+https://conda.anaconda.org/conda-forge/noarch/filelock-3.13.1-pyhd8ed1ab_0.conda#0c1729b74a8152fde6a38ba0a2ab9f45
 https://conda.anaconda.org/conda-forge/linux-64/fontconfig-2.14.2-h14ed4e7_0.conda#0f69b688f52ff6da70bccb7ff7001d1d
-https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.9.2-pyh1a96a4e_0.conda#9d15cd3a0e944594ab528da37dc72ecc
+https://conda.anaconda.org/conda-forge/noarch/fsspec-2023.10.0-pyhca7485f_0.conda#5b86cf1ceaaa9be2ec4627377e538db1
 https://conda.anaconda.org/conda-forge/linux-64/gdk-pixbuf-2.42.10-h829c605_4.conda#252a696860674caf7a855e16f680d63a
-https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.0-hfc55251_0.conda#e10134de3558dd95abda6987b5548f4f
+https://conda.anaconda.org/conda-forge/linux-64/glib-tools-2.78.1-hfc55251_1.conda#a50918d10114a0bf80fb46c7cc692058
 https://conda.anaconda.org/conda-forge/linux-64/gts-0.7.6-h977cf35_4.conda#4d8df0b0db060d33c9a702ada998a8fe
 https://conda.anaconda.org/conda-forge/noarch/idna-3.4-pyhd8ed1ab_0.tar.bz2#34272b248891bddccc64479f9a7fffed
 https://conda.anaconda.org/conda-forge/noarch/imagesize-1.4.1-pyhd8ed1ab_0.tar.bz2#7de5386c8fea29e76b303f37dde4c352
@@ -143,11 +143,11 @@ https://conda.anaconda.org/conda-forge/noarch/iniconfig-2.0.0-pyhd8ed1ab_0.conda
 https://conda.anaconda.org/conda-forge/noarch/iris-sample-data-2.4.0-pyhd8ed1ab_0.tar.bz2#18ee9c07cf945a33f92caf1ee3d23ad9
 https://conda.anaconda.org/conda-forge/linux-64/kiwisolver-1.4.5-py39h7633fee_1.conda#c9f74d717e5a2847a9f8b779c54130f2
 https://conda.anaconda.org/conda-forge/linux-64/lcms2-2.15-hb7c19ff_3.conda#e96637dd92c5f340215c753a5c9a22d7
-https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-18_linux64_openblas.conda#bcddbb497582ece559465b9cd11042e7
+https://conda.anaconda.org/conda-forge/linux-64/libblas-3.9.0-19_linux64_openblas.conda#420f4e9be59d0dc9133a0f43f7bab3f3
 https://conda.anaconda.org/conda-forge/linux-64/libclang13-15.0.7-default_h9986a30_3.conda#1720df000b48e31842500323cb7be18c
 https://conda.anaconda.org/conda-forge/linux-64/libcups-2.3.3-h4637d8d_4.conda#d4529f4dff3057982a7617c7ac58fde3
 https://conda.anaconda.org/conda-forge/linux-64/libcurl-8.4.0-hca28451_0.conda#1158ac1d2613b28685644931f11ee807
-https://conda.anaconda.org/conda-forge/linux-64/libpq-16.0-hfc447b1_1.conda#e4a9a5ba40123477db33e02a78dffb01
+https://conda.anaconda.org/conda-forge/linux-64/libpq-16.1-hfc447b1_0.conda#2b7f1893cf40b4ccdc0230bcd94d5ed9
 https://conda.anaconda.org/conda-forge/linux-64/libsystemd0-254-h3516f8a_0.conda#df4b1cd0c91b4234fb02b5701a4cdddc
 https://conda.anaconda.org/conda-forge/linux-64/libwebp-1.3.2-h658648e_1.conda#0ebb65e8d86843865796c7c95a941f34
 https://conda.anaconda.org/conda-forge/noarch/locket-1.0.0-pyhd8ed1ab_0.tar.bz2#91e27ef3d05cc772ce627e51cff111c4
@@ -181,7 +181,7 @@ https://conda.anaconda.org/conda-forge/noarch/toolz-0.12.0-pyhd8ed1ab_0.tar.bz2#
 https://conda.anaconda.org/conda-forge/linux-64/tornado-6.3.3-py39hd1e30aa_1.conda#cbe186eefb0bcd91e8f47c3908489874
 https://conda.anaconda.org/conda-forge/noarch/typing_extensions-4.8.0-pyha770c72_0.conda#5b1be40a26d10a06f6d4f1f9e19fa0c7
 https://conda.anaconda.org/conda-forge/linux-64/unicodedata2-15.1.0-py39hd1e30aa_0.conda#1da984bbb6e765743e13388ba7b7b2c8
-https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.2-pyhd8ed1ab_0.conda#1ccd092478b3e0ee10d7a891adbf8a4f
+https://conda.anaconda.org/conda-forge/noarch/wheel-0.41.3-pyhd8ed1ab_0.conda#3fc026b9c87d091c4b34a6c997324ae8
 https://conda.anaconda.org/conda-forge/linux-64/xcb-util-image-0.4.0-h8ee46fc_1.conda#9d7bcddf49cbf727730af10e71022c73
 https://conda.anaconda.org/conda-forge/linux-64/xkeyboard-config-2.40-hd590300_0.conda#07c15d846a2e4d673da22cbd85fdb6d2
 https://conda.anaconda.org/conda-forge/linux-64/xorg-libxext-1.3.4-h0b41bf4_2.conda#82b6df12252e6f32402b96dacc656fec
@@ -189,79 +189,79 @@ https://conda.anaconda.org/conda-forge/linux-64/xorg-libxrender-0.9.11-hd590300_
 https://conda.anaconda.org/conda-forge/noarch/zict-3.0.0-pyhd8ed1ab_0.conda#cf30c2c15b82aacb07f9c09e28ff2275
 https://conda.anaconda.org/conda-forge/noarch/zipp-3.17.0-pyhd8ed1ab_0.conda#2e4d6bc0b14e10f895fc6791a7d9b26a
 https://conda.anaconda.org/conda-forge/noarch/accessible-pygments-0.0.4-pyhd8ed1ab_0.conda#46a2e6e3dfa718ce3492018d5a110dd6
-https://conda.anaconda.org/conda-forge/noarch/babel-2.13.0-pyhd8ed1ab_0.conda#22541af7a9eb59fc6afcadb7ecdf9219
+https://conda.anaconda.org/conda-forge/noarch/babel-2.13.1-pyhd8ed1ab_0.conda#3ccff479c246692468f604df9c85ef26
 https://conda.anaconda.org/conda-forge/noarch/beautifulsoup4-4.12.2-pyha770c72_0.conda#a362ff7d976217f8fa78c0f1c4f59717
 https://conda.anaconda.org/conda-forge/linux-64/cairo-1.18.0-h3faef2a_0.conda#f907bb958910dc404647326ca80c263e
 https://conda.anaconda.org/conda-forge/linux-64/cffi-1.16.0-py39h7a31438_0.conda#ac992767d7f8ed2cb27e71e78f0fb2d7
 https://conda.anaconda.org/conda-forge/linux-64/cytoolz-0.12.2-py39hd1e30aa_1.conda#e5b62f0c1f96413116f16d33973f1a44
-https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.43.1-py39hd1e30aa_0.conda#74b032179f7782051800908cb2250132
-https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.0-hfc55251_0.conda#2f55a36b549f51a7e0c2b1e3c3f0ccd4
+https://conda.anaconda.org/conda-forge/linux-64/fonttools-4.44.3-py39hd1e30aa_0.conda#873fb1d81f9e9220d605c6b05a96544c
+https://conda.anaconda.org/conda-forge/linux-64/glib-2.78.1-hfc55251_1.conda#8d7242302bb3d03b9a690b6dda872603
 https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.14.2-nompi_h4f84152_100.conda#2de6a9bc8083b49f09b2f6eb28d3ba3c
 https://conda.anaconda.org/conda-forge/noarch/importlib-metadata-6.8.0-pyha770c72_0.conda#4e9f59a060c3be52bc4ddc46ee9b6946
-https://conda.anaconda.org/conda-forge/noarch/importlib_resources-6.1.0-pyhd8ed1ab_0.conda#48b0d98e0c0ec810d3ccc2a0926c8c0e
+https://conda.anaconda.org/conda-forge/noarch/importlib_resources-6.1.1-pyhd8ed1ab_0.conda#3d5fa25cf42f3f32a12b2d874ace8574
 https://conda.anaconda.org/conda-forge/noarch/jinja2-3.1.2-pyhd8ed1ab_1.tar.bz2#c8490ed5c70966d232fdd389d0dbed37
-https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-18_linux64_openblas.conda#93dd9ab275ad888ed8113953769af78c
+https://conda.anaconda.org/conda-forge/linux-64/libcblas-3.9.0-19_linux64_openblas.conda#d12374af44575413fbbd4a217d46ea33
 https://conda.anaconda.org/conda-forge/linux-64/libclang-15.0.7-default_h7634d5b_3.conda#0922208521c0463e690bbaebba7eb551
 https://conda.anaconda.org/conda-forge/linux-64/libgd-2.3.3-h119a65a_9.conda#cfebc557e54905dadc355c0e9f003004
-https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-18_linux64_openblas.conda#a1244707531e5b143c420c70573c8ec5
+https://conda.anaconda.org/conda-forge/linux-64/liblapack-3.9.0-19_linux64_openblas.conda#9f100edf65436e3eabc2a51fc00b2c37
 https://conda.anaconda.org/conda-forge/linux-64/libxkbcommon-1.6.0-h5d7e998_0.conda#d8edd0e29db6fb6b6988e1a28d35d994
 https://conda.anaconda.org/conda-forge/noarch/nodeenv-1.8.0-pyhd8ed1ab_0.conda#2a75b296096adabbabadd5e9782e5fcc
 https://conda.anaconda.org/conda-forge/noarch/partd-1.4.1-pyhd8ed1ab_0.conda#acf4b7c0bcd5fa3b0e05801c4d2accd6
-https://conda.anaconda.org/conda-forge/linux-64/pillow-10.0.1-py39had0adad_2.conda#4d5990bb620ed36b10a528324d9b75e3
-https://conda.anaconda.org/conda-forge/noarch/pip-23.2.1-pyhd8ed1ab_0.conda#e2783aa3f9235225eec92f9081c5b801
-https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_1.conda#900fd11ac61d4415d515583fcb570207
+https://conda.anaconda.org/conda-forge/linux-64/pillow-10.1.0-py39had0adad_0.conda#eeaa413fddccecb2ab7f747bdb55b07f
+https://conda.anaconda.org/conda-forge/noarch/pip-23.3.1-pyhd8ed1ab_0.conda#2400c0b86889f43aa52067161e1fb108
+https://conda.anaconda.org/conda-forge/linux-64/proj-9.3.0-h1d62c97_2.conda#b5e57a0c643da391bef850922963eece
 https://conda.anaconda.org/conda-forge/linux-64/pulseaudio-client-16.1-hb77b528_5.conda#ac902ff3c1c6d750dd0dfc93a974ab74
-https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.2-pyhd8ed1ab_0.conda#6dd662ff5ac9a783e5c940ce9f3fe649
+https://conda.anaconda.org/conda-forge/noarch/pytest-7.4.3-pyhd8ed1ab_0.conda#5bdca0aca30b0ee62bb84854e027eae0
 https://conda.anaconda.org/conda-forge/noarch/python-dateutil-2.8.2-pyhd8ed1ab_0.tar.bz2#dd999d1cc9f79e67dbb855c8924c7984
-https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.11-py39h3d6467e_1.conda#39d2473881976eeb57c09c106d2d9fc3
+https://conda.anaconda.org/conda-forge/linux-64/sip-6.7.12-py39h3d6467e_0.conda#e667a3ab0df62c54e60e1843d2e6defb
 https://conda.anaconda.org/conda-forge/noarch/typing-extensions-4.8.0-hd8ed1ab_0.conda#384462e63262a527bda564fa2d9126c0
-https://conda.anaconda.org/conda-forge/noarch/urllib3-2.0.6-pyhd8ed1ab_0.conda#d5f8944ff9ab24a292511c83dce33dea
-https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.6-h98fc4e7_2.conda#1c95f7c612f9121353c4ef764678113e
-https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.2.1-h3d44ed6_0.conda#98db5f8813f45e2b29766aff0e4a499c
-https://conda.anaconda.org/conda-forge/noarch/importlib-resources-6.1.0-pyhd8ed1ab_0.conda#6a62c2cc25376a0d050b3d1d221c3ee9
+https://conda.anaconda.org/conda-forge/noarch/urllib3-2.1.0-pyhd8ed1ab_0.conda#f8ced8ee63830dec7ecc1be048d1470a
+https://conda.anaconda.org/conda-forge/linux-64/gstreamer-1.22.7-h98fc4e7_0.conda#6c919bafe5e03428a8e2ef319d7ef990
+https://conda.anaconda.org/conda-forge/linux-64/harfbuzz-8.3.0-h3d44ed6_0.conda#5a6f6c00ef982a9bc83558d9ac8f64a0
+https://conda.anaconda.org/conda-forge/noarch/importlib-resources-6.1.1-pyhd8ed1ab_0.conda#d04bd1b5bed9177dd7c3cef15e2b6710
 https://conda.anaconda.org/conda-forge/noarch/importlib_metadata-6.8.0-hd8ed1ab_0.conda#b279b07ce18058034e5b3606ba103a8b
 https://conda.anaconda.org/conda-forge/linux-64/libnetcdf-4.9.2-nompi_h80fb2b6_112.conda#a19fa6cacf80c8a366572853d5890eb4
 https://conda.anaconda.org/conda-forge/linux-64/numpy-1.26.0-py39h474f0d3_0.conda#62f1d2e05327bf62728afa448f2a9261
-https://conda.anaconda.org/conda-forge/noarch/pbr-5.11.1-pyhd8ed1ab_0.conda#5bde4ebca51438054099b9527c904ecb
+https://conda.anaconda.org/conda-forge/noarch/pbr-6.0.0-pyhd8ed1ab_0.conda#8dbab5ba746ed14aa32cb232dc437f8f
 https://conda.anaconda.org/conda-forge/noarch/platformdirs-3.11.0-pyhd8ed1ab_0.conda#8f567c0a74aa44cf732f15773b4083b0
-https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py39hce394fd_2.conda#cb5ecd8db6d8ca8b9f281658a8512433
+https://conda.anaconda.org/conda-forge/linux-64/pyproj-3.6.1-py39hce394fd_4.conda#4b6e79000ec3a495f429b2c1092ed63b
 https://conda.anaconda.org/conda-forge/linux-64/pyqt5-sip-12.12.2-py39h3d6467e_5.conda#93aff412f3e49fdb43361c0215cbd72d
-https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.3.1-pyhd8ed1ab_0.conda#816073bb54ef59f33f0f26c14f88311b
+https://conda.anaconda.org/conda-forge/noarch/pytest-xdist-3.4.0-pyhd8ed1ab_0.conda#b8dc6f9db1b9670e564b68277a79ffeb
 https://conda.anaconda.org/conda-forge/noarch/requests-2.31.0-pyhd8ed1ab_0.conda#a30144e4156cdbb236f99ebb49828f8b
 https://conda.anaconda.org/conda-forge/noarch/setuptools-scm-8.0.4-pyhd8ed1ab_0.conda#3b8ef3a2d80f3d89d0ae7e3c975e6c57
 https://conda.anaconda.org/conda-forge/linux-64/ukkonen-1.0.1-py39h7633fee_4.conda#b66595fbda99771266f042f42c7457be
-https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.2-py39h44dd56e_2.conda#bb788b462770a49433d7412e7881d917
-https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.1.1-py39h7633fee_1.conda#33afb3357cd0d120ecb26778d37579e4
-https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.9.3-pyhd8ed1ab_0.conda#a7155483171dbc27a7385d1c26e779de
-https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.6-h8e1006c_2.conda#3d8e98279bad55287f2ef9047996f33c
-https://conda.anaconda.org/conda-forge/noarch/identify-2.5.30-pyhd8ed1ab_0.conda#b7a2e3bb89bda8c69839485c20aabadf
-https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.2.0-py39h2ae25f5_1008.tar.bz2#d90acb3804f16c63eb6726652e4e25b3
+https://conda.anaconda.org/conda-forge/linux-64/cftime-1.6.3-py39h44dd56e_0.conda#baea2f5dfb3ab7b1c836385d2e1daca7
+https://conda.anaconda.org/conda-forge/linux-64/contourpy-1.2.0-py39h7633fee_0.conda#ed71ad3e30eb03da363fb797419cce98
+https://conda.anaconda.org/conda-forge/noarch/dask-core-2023.11.0-pyhd8ed1ab_0.conda#3bf8f5c3fbab9e0cfffdf5914f021854
+https://conda.anaconda.org/conda-forge/linux-64/gst-plugins-base-1.22.7-h8e1006c_0.conda#065e2c1d49afa3fdc1a01f1dacd6ab09
+https://conda.anaconda.org/conda-forge/noarch/identify-2.5.31-pyhd8ed1ab_0.conda#fea10604a45e974b110ea15a88913ebc
+https://conda.anaconda.org/conda-forge/linux-64/mo_pack-0.3.0-py39hd1e30aa_1.conda#ca63612907462c8e36edcc9bbacc253e
 https://conda.anaconda.org/conda-forge/linux-64/netcdf-fortran-4.6.1-nompi_hacb5139_102.conda#487a1c19dd3eacfd055ad614e9acde87
-https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.1-py39hddac248_1.conda#f32809db710b8aac48fbc14c13058530
+https://conda.anaconda.org/conda-forge/linux-64/pandas-2.1.3-py39hddac248_0.conda#961b398d8c421a3752e26f01f2dcbdac
 https://conda.anaconda.org/conda-forge/linux-64/pango-1.50.14-ha41ecd1_2.conda#1a66c10f6a0da3dbd2f3a68127e7f6a0
 https://conda.anaconda.org/conda-forge/linux-64/pywavelets-1.4.1-py39h44dd56e_1.conda#d037c20e3da2e85f03ebd20ad480c359
 https://conda.anaconda.org/conda-forge/linux-64/scipy-1.11.3-py39h474f0d3_1.conda#55441724fedb3042d38ffa5220f00804
 https://conda.anaconda.org/conda-forge/linux-64/shapely-2.0.2-py39h1bc45ef_0.conda#ca067895d22f8a0d38f225a95184858e
 https://conda.anaconda.org/conda-forge/noarch/sphinxcontrib-apidoc-0.3.0-py_1.tar.bz2#855b087883443abb10f5faf6eef40860
-https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.4-pyhd8ed1ab_0.conda#c3feaf947264a59a125e8c26e98c3c5a
-https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py39h44dd56e_3.conda#cbc2fe7741df3546448a534827238c32
-https://conda.anaconda.org/conda-forge/noarch/distributed-2023.9.3-pyhd8ed1ab_0.conda#543fafdd7b325bf16199235ee5f20622
+https://conda.anaconda.org/conda-forge/noarch/virtualenv-20.24.6-pyhd8ed1ab_0.conda#fb1fc875719e217ed799a7aae11d3be4
+https://conda.anaconda.org/conda-forge/linux-64/cf-units-3.2.0-py39h44dd56e_4.conda#81310d21bf9d91754c1220c585bb72d6
+https://conda.anaconda.org/conda-forge/noarch/distributed-2023.11.0-pyhd8ed1ab_0.conda#a1ee8e3043eee1649f98704ea3e6feae
 https://conda.anaconda.org/conda-forge/linux-64/esmf-8.4.2-nompi_h9e768e6_3.conda#c330e87e698bae8e7381c0315cf25dd0
 https://conda.anaconda.org/conda-forge/linux-64/gtk2-2.24.33-h90689f9_2.tar.bz2#957a0255ab58aaf394a91725d73ab422
 https://conda.anaconda.org/conda-forge/noarch/imagehash-4.3.1-pyhd8ed1ab_0.tar.bz2#132ad832787a2156be1f1b309835001a
 https://conda.anaconda.org/conda-forge/linux-64/librsvg-2.56.3-h98fae49_0.conda#620e754f4344f4c27259ff460a2b9c50
-https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.0-py39he9076e7_2.conda#404144d0628ebbbbd56d161c677cc71b
-https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.4-nompi_py39h4282601_103.conda#c61de71bd3099973376aa370e3a0b39e
+https://conda.anaconda.org/conda-forge/linux-64/matplotlib-base-3.8.1-py39he9076e7_0.conda#89615b866cb3b0d8ad4e2a11e2bcf9a0
+https://conda.anaconda.org/conda-forge/linux-64/netcdf4-1.6.5-nompi_py39h4282601_100.conda#d2809fbf0d8ae7b8ca92c456cb44a7d4
 https://conda.anaconda.org/conda-forge/noarch/pre-commit-3.5.0-pyha770c72_0.conda#964e3d762e427661c59263435a14c492
 https://conda.anaconda.org/conda-forge/linux-64/python-stratify-0.3.0-py39h44dd56e_1.conda#90c5165691fdcb5a9f43907e32ea48b4
 https://conda.anaconda.org/conda-forge/linux-64/qt-main-5.15.8-h82b777d_17.conda#4f01e33dbb406085a16a2813ab067e95
-https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py39h40cae4c_0.conda#24b4bf92e26a46217e37e5928927116b
+https://conda.anaconda.org/conda-forge/linux-64/cartopy-0.22.0-py39hddac248_1.conda#8dd2eb1e7aa9a33a92a75bdcea3f0dd0
 https://conda.anaconda.org/conda-forge/noarch/esmpy-8.4.2-pyhc1e730c_4.conda#ddcf387719b2e44df0cc4dd467643951
 https://conda.anaconda.org/conda-forge/linux-64/graphviz-8.1.0-h28d9a01_0.conda#33628e0e3de7afd2c8172f76439894cb
 https://conda.anaconda.org/conda-forge/noarch/nc-time-axis-1.4.1-pyhd8ed1ab_0.tar.bz2#281b58948bf60a2582de9e548bcc5369
 https://conda.anaconda.org/conda-forge/linux-64/pyqt-5.15.9-py39h52134e7_5.conda#e1f148e57d071b09187719df86f513c1
-https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.0-py39hf3d152e_2.conda#ffe5ae58957da676064e2ce5d039d259
-https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.1-pyhd8ed1ab_0.conda#78153addf629c51fab775ef360012ca3
+https://conda.anaconda.org/conda-forge/linux-64/matplotlib-3.8.1-py39hf3d152e_0.conda#f8b1cf66dbdbc9fe1a298a11fddcfb05
+https://conda.anaconda.org/conda-forge/noarch/pydata-sphinx-theme-0.14.3-pyhd8ed1ab_1.conda#fbe2993dd48f14724b90bf12e92cc164
 https://conda.anaconda.org/conda-forge/noarch/sphinx-copybutton-0.5.2-pyhd8ed1ab_0.conda#ac832cc43adc79118cf6e23f1f9b8995
 https://conda.anaconda.org/conda-forge/noarch/sphinx-design-0.5.0-pyhd8ed1ab_0.conda#264b3c697fa9cdade87eb0abe4440d54
 https://conda.anaconda.org/conda-forge/noarch/sphinx-gallery-0.14.0-pyhd8ed1ab_0.conda#b3788794f88c9512393032e448428261