We need a fast path for open_mfdataset #1823

Closed
rabernat opened this issue Jan 12, 2018 · 19 comments · Fixed by #3239

Comments

@rabernat
Contributor

It would be great to have a "fast path" option for open_mfdataset, in which all alignment / coordinate checking is bypassed. This would be used in cases where the user knows that many netCDF files all share the same coordinates (e.g. model output, satellite records from the same product, etc.). The coordinates would just be taken from the first file, and only the data variables would be read from all subsequent files. The only checking would be that the data variables have the correct shape.

Implementing this would require some refactoring. @jbusecke mentioned that he had developed a solution for this (related to #1704), so maybe he could be the one to add this feature to xarray.

This is also related to #1385.

@jhamman
Member

jhamman commented Jan 12, 2018

@rabernat - Depending on the structure of the dataset, another possibility that would speed up some open_mfdataset tasks substantially is to open each file and read its metadata in some parallel way (dask/joblib/etc.), returning either just the dataset schema or a picklable version of the dataset itself. I think this will only work with autoclose=True, but it could be quite useful when working with many files.
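
For illustration, here is a minimal sketch of that parallel-open idea using dask.delayed (roughly what the parallel=True option in open_mfdataset now does). The file pattern and the "time" concat dimension are placeholders, and a dask scheduler is assumed to be available:

    import glob

    import dask
    import xarray as xr

    paths = sorted(glob.glob("output_*.nc"))  # hypothetical file pattern

    # Open every file lazily (chunks={}) as a delayed task, so the metadata
    # reads happen in parallel rather than one file at a time.
    open_ = dask.delayed(xr.open_dataset)
    datasets = dask.compute(*[open_(p, chunks={}) for p in paths])

    # Combine the already-opened datasets along the record dimension.
    combined = xr.combine_nested(list(datasets), concat_dim="time")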

@jbusecke
Contributor

I did not really find an elegant solution. What I did was just specify all dims and coords as drop_variables and then update those from a master file with

ds.update(ds_master)

Perhaps this could be generalized by reading all coords and dims from just the first file.
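
As a rough sketch (not the exact code from that workflow), and assuming every file shares identical dims/coords, e.g. one variable per file, the workaround above might look like:

    import glob

    import xarray as xr

    paths = sorted(glob.glob("run_*.nc"))   # hypothetical file pattern
    ds_master = xr.open_dataset(paths[0])   # read the coordinates once, from a "master" file

    # Skip reading dims/coords from every file by dropping them at open time.
    skip = list(ds_master.coords)
    datasets = [xr.open_dataset(p, drop_variables=skip, chunks={}) for p in paths]

    ds = xr.merge(datasets)    # cheap: there are no coordinates left to compare
    ds.update(ds_master)       # put the master file's coordinates back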

@jbusecke
Contributor

Would these two options necessarily be mutually exclusive?

I think parallelizing the read in sounds amazing.

But isn't there some merit in skipping some of the checks altogether, if the user is sure about the structure of the data contained in the many files?

I am often working with the aforementioned type of data (many files either contain a new timestep or a different variable, but most of the dimensions/coordinates are the same).

In some cases I am finding that reading the data "lazily" consumes a significant amount of the time in my workflow. I am unsure how hard this would be to achieve, and perhaps it is not worth it after all.

Just putting out a few ideas, while I wait for my xr.open_mfdataset to finish :-)

@jhamman
Member

jhamman commented Mar 14, 2018

@jbusecke - No. These options are not mutually exclusive. The parallel open is, in my opinion, the lowest hanging fruit so that's why I started there. There are other improvements that we can tackle incrementally.

@jbusecke
Contributor

Awesome, thanks for the clarification.
I just looked at #1981 and it indeed seems very elegant (in fact, I just used this approach to parallelize the printing of movie frames!). Thanks for that!

@dcherian
Contributor

dcherian commented May 1, 2019

I am currently motivated to fix this.

  1. Over in concat prealigned objects #1413 (comment), @rabernat mentioned:

     allowing the user to pass join='exact' via open_mfdataset. A related optimization would be to allow the user to pass coords='minimal' (or other concat coords options) via open_mfdataset.

  2. @shoyer suggested calling decode_cf later here, though perhaps this won't help too much: slow performance with open_mfdataset #1385 (comment)

Is this all that we can do on the xarray side?

@dcherian dcherian closed this as completed May 1, 2019
@dcherian dcherian reopened this May 1, 2019
@TomNicholas
Member

@dcherian I'm sorry, I'm very interested in this but after reading the issues I'm still not clear on what's being proposed:

What exactly is the bottleneck?

- Is it reading the coords from all the files?
- Is it loading the coord values into memory?
- Is it performing the alignment checks on those coords once they're in memory?
- Is it performing alignment checks on the dimensions?
- Is this suggestion relevant to datasets that don't have any coords?

Which of these steps would a join='exact' option omit?

A related optimization would be to allow the user to pass coords='minimal' (or other concat coords options) via open_mfdataset.

But this is already an option to open_mfdataset?

@j08lue
Contributor

j08lue commented May 3, 2019

The original issue in this thread is that you sometimes might want to disable alignment checks for coordinates other than the concat_dim, and only check that dimensions and dimension shapes match.

When you xr.merge with join='exact', it still checks for alignment (see #1330 (comment)) but does not join the coordinates if they are not aligned. This behavior (not joining) is part of what @rabernat envisioned here, but his suggestion goes beyond that: you don't even load coordinate values from any but the first dataset, and simply trust that they are aligned.

So xr.open_mfdataset(join='exact', coords='minimal') does not fix this issue here, I think.
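
A tiny, made-up illustration of that distinction: with join='exact' the coordinate values are still read and compared, it just raises instead of joining, so the expensive coordinate reads this issue is about still happen.

    import xarray as xr

    a = xr.Dataset({"u": ("x", [1.0, 2.0])}, coords={"x": [0, 1]})
    b = xr.Dataset({"v": ("x", [3.0, 4.0])}, coords={"x": [0, 2]})  # misaligned on x

    xr.merge([a, b])                 # outer-joins x to [0, 1, 2], filling with NaN
    xr.merge([a, b], join="exact")   # raises ValueError because the x indexes differ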

@rabernat
Contributor Author

rabernat commented May 3, 2019

So I think it is quite important to consider this issue together with #2697. An XML specification called NcML already exists which tells software how to put together multiple netCDF files into a single virtual netCDF. We should leverage this existing spec as much as possible.

A realistic use case for me is that I have, say, 1000 files of high-res model output, each with large coordinate variables, all generated from the same model run. If we know a priori that certain coordinates (dimension coordinates or otherwise) are identical across files, we could save a lot of disk reads (the slow part of open_mfdataset) by never reading those coordinates at all. Enabling this would require a pretty low-level change in xarray. For example, we couldn't even rely on open_dataset in its current form to open files, because open_dataset eagerly loads all dimension coordinates into indexes. One way forward might be to create a new Store class.

For a catalog of tricks I use to optimize opening these sorts of big, complex, multi-file datasets (e.g. CMIP), check out
https://github.com/pangeo-data/esgf2xarray/blob/master/esgf2zarr/aggregate.py

@dcherian
Contributor

dcherian commented May 3, 2019

One common use case is files with a large number of concat_dim-invariant non-dimensional coordinates. This is easy to speed up by dropping those variables from all but the first file.

e.g.
https://github.com/pangeo-data/esgf2xarray/blob/6a5e4df0d329c2f23b403cbfbb65f0f1dfa98d52/esgf2zarr/aggregate.py#L107-L110

    # keep only coordinates from first ensemble member to simplify merge
    first = member_dsets_aligned[0]
    rest = [mds.reset_coords(drop=True) for mds in member_dsets_aligned[1:]]
    objs_to_concat = [first] + rest

Similarly https://github.com/NCAR/intake-esm/blob/e86a8e8a80ce0fd4198665dbef3ba46af264b5ea/intake_esm/aggregate.py#L53-L57

def merge_vars_two_datasets(ds1, ds2):
    """
    Merge two datasets, dropping all variables from
    second dataset that already exist in the first dataset's coordinates.
    """

See also #2039 (second code block)

One way to do this might be to add a master_file kwarg to open_mfdataset. This would imply coords='minimal', join='exact' (I think; prealigned=True in some other proposals) and would drop non-dimensional coordinates from all but the first file and then call concat.

As a bonus, it would assign attributes from the master_file to the merged dataset (for which I think there are open issues): this functionality exists in netCDF4.MFDataset, so that's a plus.

EDIT: #2039 (third code block) is also a possibility. This might look like

xr.open_mfdataset('files*.nc', master_file='first', concat_dim='time')

in which case the first file is read; all coords that are not concat_dim become drop_variables for an open_dataset call that reads the remaining files. We then merge with the first dataset and assign attrs.

EDIT2: master_file combines two different functionalities here: specifying a "template file" and a file to choose attributes from. So maybe we need two kwargs: template_file and attrs_from?
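
A hand-rolled sketch of what the hypothetical master_file='first' (or template_file/attrs_from) behaviour could do under the hood; the 'time' concat dimension and file pattern are assumptions, not part of any existing API:

    import glob

    import xarray as xr

    paths = sorted(glob.glob("files*.nc"))
    master = xr.open_dataset(paths[0], chunks={})

    # Coordinates that do not involve the concat dim are read once, from the
    # master file only, and dropped from every other file.
    drop = [c for c in master.coords if "time" not in master[c].dims]
    rest = [xr.open_dataset(p, drop_variables=drop, chunks={}) for p in paths[1:]]

    combined = xr.concat([master] + rest, dim="time",
                         data_vars="minimal", coords="minimal")
    combined.attrs = dict(master.attrs)  # "attrs_from": attributes come from the master file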

@rabernat
Contributor Author

Is this issue really closed?!?

🎉🎂🏆🥇

@dcherian
Contributor

YES!
(well almost)

The PR lets you skip compatibility checks.
The magic spell is xr.open_mfdataset(..., data_vars="minimal", coords="minimal", compat="override")
You can skip index comparison by adding join="override".
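
Written out as a full call (the file pattern and concat dimension here are placeholders, not from the PR):

    ds = xr.open_mfdataset(
        "files*.nc",
        combine="nested",
        concat_dim="time",
        data_vars="minimal",
        coords="minimal",
        compat="override",  # skip equality checks; take conflicting variables from the first file
        join="override",    # also skip index comparison
    )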

What's left is extremely large indexes and lazy index/coordinate loading, but we have #2039 open for that. I will rename that issue.

If you have time, can you test it out?

@TomNicholas
Member

This is big if true!

But surely to close an issue raised by complaints about speed, we should really have some new asv speed tests?

@dcherian
Contributor

=) @TomNicholas PRs welcome!

@dcherian dcherian reopened this Sep 16, 2019
@dcherian
Contributor

PS @rabernat

%%time
ds = xr.open_mfdataset("/glade/p/cesm/community/ASD-HIGH-RES-CESM1/hybrid_v5_rel04_BC5_ne120_t12_pop62/ocn/proc/tseries/monthly/*.nc", 
                        parallel=True, coords="minimal", data_vars="minimal", compat='override')

This completes in 40 seconds with 10 workers on cheyenne.

@jbusecke
Contributor

Wooooow. Thanks. I'll have to give this a whirl soon.

@dcherian
Contributor

Let's close this since there is an opt-in mostly-fast path. I've added an item to #4648 to cover adding an asv benchmark for mfdataset.

@Hossein-Madadi
Contributor

Hossein-Madadi commented Jan 27, 2021

PS @rabernat

%%time
ds = xr.open_mfdataset("/glade/p/cesm/community/ASD-HIGH-RES-CESM1/hybrid_v5_rel04_BC5_ne120_t12_pop62/ocn/proc/tseries/monthly/*.nc", 
                        parallel=True, coords="minimal", data_vars="minimal", compat='override')

This completes in 40 seconds with 10 workers on cheyenne.

@dcherian, thanks for your solution. In my experience with 34,013 netCDF files, I could open 117 GiB in 13 min 14 s. Can I decrease this time?

@dcherian
Contributor

That's 34k files of ~3 MB each! I suggest combining them into ~1k files of ~100 MB each; that would work a lot better.
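
One possible way to do that consolidation (batch size, paths, and the open_mfdataset options here are illustrative):

    import glob
    import os

    import xarray as xr

    paths = sorted(glob.glob("small_files/*.nc"))   # hypothetical input directory
    batch = 34                                      # ~34 x 3 MB is roughly 100 MB per output file
    os.makedirs("combined", exist_ok=True)

    for i in range(0, len(paths), batch):
        ds = xr.open_mfdataset(paths[i:i + batch], combine="by_coords",
                               data_vars="minimal", coords="minimal", compat="override")
        ds.to_netcdf(f"combined/part_{i // batch:04d}.nc")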
