Reading HDF5 - control creation of anonymous dimensions #1484
What would make this easier would be a test program that builds your HDF5 file (with HDF5 code) and then tries to open the file with netcdf.
@edhartnett Thanks for looking into this. Attached below is sample code taken from the HDF Group (https://support.hdfgroup.org/HDF5/examples/intro.html). It creates a sample file with a single 2d dataset of size 100 x 100. Running ncdump -h on this file:

netcdf dset {
dimensions:
	phony_dim_0 = 100 ;
variables:
	int dset(phony_dim_0, phony_dim_0) ;
}

Changing the second dimension in the source code to 200:

netcdf dset {
dimensions:
	phony_dim_0 = 100 ;
	phony_dim_1 = 200 ;
variables:
	int dset(phony_dim_0, phony_dim_1) ;
}

Hi, you have the wrong person or email.
Thanks
Ed Harnett
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Copyright by The HDF Group. *
* Copyright by the Board of Trustees of the University of Illinois. *
* All rights reserved. *
* *
* This file is part of HDF5. The full HDF5 copyright notice, including *
* terms governing use, modification, and redistribution, is contained in *
* the COPYING file, which can be found at the root of the source code *
* distribution tree, or in https://support.hdfgroup.org/ftp/HDF5/releases. *
* If you do not have access to either file, you may request a copy from *
* ***@***.*** *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
/*
 * This example illustrates how to create a dataset that is a 100 x 100
 * array. It is adapted from the HDF5 Tutorial.
 */
#include "hdf5.h"
#define FILE "dset.h5"
int main() {
hid_t file_id, dataset_id, dataspace_id; /* identifiers */
hsize_t dims[2];
herr_t status;
/* Create a new file using default properties. */
file_id = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
/* Create the data space for the dataset. */
dims[0] = 100;
dims[1] = 100;
dataspace_id = H5Screate_simple(2, dims, NULL);
/* Create the dataset. */
dataset_id = H5Dcreate2(file_id, "/dset", H5T_STD_I32BE, dataspace_id,
H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
/* End access to the dataset and release resources used by it. */
status = H5Dclose(dataset_id);
/* Terminate access to the data space. */
status = H5Sclose(dataspace_id);
/* Close the file. */
status = H5Fclose(file_id);
return 0;
}
Ping @edhartnett, since I used the wrong GitHub username above.
@edhartnett Is there any news on this issue? Can I be of any further help?
Wrong email again!
@edharnett I double-checked not to use your GitHub name this time, but @edhartnett instead. Not sure why you got notified again. If this persists, you might need to mute the thread via the link in your email. Sorry for any inconvenience.
My 2 cents: the problem is that without a name for the dimension,
The only problem with case 2 is coming up with a naming scheme for
There is no further news, and unfortunately it's unlikely that I will get time to look at this before I come back from a long vacation in November. ;-)

@DennisHeimbigner I believe 2 is not a good idea, because mostly HDF5 files are actually trying to use netCDF-like dimensions. In other words, if there are 10 variables and they all share a dimension of length 100, then probably the file writer intended that all 10 variables share a netCDF-like dimension.

In any case, we cannot change this behavior. This code has been out there and stable for a long time (10+ years), and I believe it is widely used, so we can't break anyone's existing code.

Perhaps we should handle the case differently in which one variable has two dimensions of the same size? But then what should we do? What should we do with other variables that also have that size? (So if we have a var[100][100], we decide that there are two dimensions, dim1 = 100 and dim2 = 100. Now we encounter another variable var2[100] - which dimension do we use?) Of course, using approach 1 also solves this problem: every dimension is unique, and if 10 variables all share it in HDF5 then there will be 10 dimensions in the netCDF file.

Perhaps we should add a mode flag to control this behavior? Then the user could decide whether shared dimensions are assumed (the default) or distinct dimensions are created (if this mode flag is used) when opening an HDF5 file that was not created with netCDF. But if we want to add a mode flag, we must finally confront the issue of expanding the number of mode flags (see #864).
Another possibility (not necessarily a good one) is to
I think it would be better to fix this in the C library. In fact, I think we should just assume, as @kmuehlbauer points out, that a variable will never have the same dimension twice in its list of dimensions. So if we encounter: What happens if we encounter another? Well, we should just assign it dim1 and move on with our lives. I can implement this, but probably not until after vacation. ;-)
@edhartnett @DennisHeimbigner Thank you for your extensive insights. I now see that it is not that easy to conquer. Your example with var1 and var2 is quite convincing, and this is possibly the main reason for having named dimensions (or dimension scales). In my datasets this appears only in some corner cases, but that still means several years of 5-minute-spaced datasets are involved. My current workaround is to touch the file with h5py and resize one colliding dimension by one element. This works, but it requires copying the file from the read-only filesystem to a local filesystem. Nevertheless, some solution/handling which does not involve special treatment would be preferable. I'll think a bit more about approaching this in the next days.
If you do start implementing this, let me know... I have another somewhat related proposal, but it would result in a change to the dimensions and would make it so that older libraries would misread the dimensions in new files. It would still be possible to read old files with the new library, but not vice versa... In other words, if the implementation planned for this would result in a new mode, I have something else to bundle with it to avoid yet another mode flag soon after. Basically, the proposal can help with parallel scalability of netCDF-4 files on large processor counts.
Well, @gsjaardema, we are certainly always prepared to listen. Does your proposal relate to reading non-netCDF-4 HDF5 files as well?
@edharnett No, NetCDF-4 files only. I need to do some more testing and timing; I should have something at the start of our next FY -- October.
OK, ping me in November when I get back...
You probably should give us an overview of what you are proposing.
@DennisHeimbigner That is the plan, but I am not yet ready. Hopefully early October.
Ping @edhartnett: we are approaching mid-November, so just getting this back on the plate. If I can be of any help resolving this issue, please let me know.
Thanks @kmuehlbauer for bringing this up again. I'm inclined to go with the solution I outlined above:
@gsjaardema now would be the time to bring up your proposed alternate solution...
Well @kmuehlbauer, such a joy to know my documentation has at least one reader! As a programmer, I honestly don't expect people to read the documentation. ;-) I have included an update to the tutorial on this topic. Thanks for your help on this! Please test it out and let me know if it solves your issues...
I have an experimental version in which I use a "hidden attribute" to specify the size of the dim instead of the length of the HDF5 dataset. This did work, but it breaks any applications that are using an older version of the library, since they now see every dimension as having a value of '1'.

The original reason that I investigated this (parallel scalability) did not show the performance benefit that I was hoping for (basically, since the data in the "fake" dataset is never written, the size of the dataset does not matter), so I don't know whether it is worth the trouble of figuring out how best to productionize it. Maybe if there is another format-breaking change in the future, this could be included. It does have the benefit that it makes it easier to use HDF5 tools on NetCDF-4 files. For example, if I do an

The attached patch should show the basics of what I was doing. I tried to filter out the other changes I was investigating at the time (and will propose soon), but if something doesn't look quite right, it may be a vestige of the other changes. The changes did work, but they didn't substantially change parallel performance and do break old libraries reading new files.
OK, but a recent change to netCDF (4.7.1?) added hidden attributes for all the dimension info. Have you looked at them? It may be that the info you need is already there. Adding a new hidden attribute is a problem: it breaks compatibility! So I piggybacked on some existing hidden attributes.
Yes, that's true @edhartnett. But RTFM (and the code, if one is able to do so) is still the best approach to gain as much information as needed. I did not want to annoy people with things RTFM would have resolved. So, thanks for having neat documentation out there! I can try to build

Again, thank you very much for taking the time and considering that feature request.
Release Notes {#RELEASE_NOTES}
=============

\brief Release notes file for the netcdf-c package.

This file contains a high-level description of this package's evolution. Releases are in reverse chronological order (most recent first). Note that, as of netcdf 4.2, the `netcdf-c++` and `netcdf-fortran` libraries have been separated into their own libraries.

## 4.9.3 - TBD

## 4.9.2 - March 14, 2023

This is a maintenance release which adds support for HDF5 version 1.14.0, in addition to a handful of other changes and bugfixes.

* Update H5FDhttp.[ch] to work with HDF5 version 1.13.2 and later. See [Github #2635](Unidata/netcdf-c#2635).
* [Bug Fix] Update DAP code to enable CURLOPT_ACCEPT_ENCODING by default. See [Github #2630](Unidata/netcdf-c#2630).
* [Bug Fix] Fix byterange failures for certain URLs. See [Github #2649](Unidata/netcdf-c#2649).
* [Bug Fix] Fix 'make distcheck' error in run_interop.sh. See [Github #2631](Unidata/netcdf-c#2631).
* [Enhancement] Update `nc-config` to remove inclusion from automatically-detected `nf-config` and `ncxx-config` files, as the wrong files could be included in the output. This is in support of [GitHub #2274](Unidata/netcdf-c#2274).
* [Enhancement] Update H5FDhttp.[ch] to work with HDF5 version 1.14.0. See [Github #2615](Unidata/netcdf-c#2615).

## 4.9.1 - February 2, 2023

## Known Issues

* A test in the `main` branch of `netcdf-cxx4` is broken by this rc; this will bear further investigation, but it is not being treated as a roadblock for the release candidate.
* The new document, `netcdf-c/docs/filter_quickstart.md`, is in rough-draft form.
* Race conditions exist in some of the tests when run concurrently with large numbers of processors.

## What's Changed from v4.9.0 (automatically generated)

* Fix nc_def_var_fletcher32 operation by \@DennisHeimbigner in Unidata/netcdf-c#2403
* Merge relevant info updates back into `main` by \@WardF in Unidata/netcdf-c#2387
* Add manual GitHub actions triggers for the tests. by \@WardF in Unidata/netcdf-c#2404
* Use env variable USERPROFILE instead of HOME for windows and mingw. by \@DennisHeimbigner in Unidata/netcdf-c#2405
* Make public a limited API for programmatic access to internal .rc tables by \@DennisHeimbigner in Unidata/netcdf-c#2408
* Fix typo in CMakeLists.txt by \@georgthegreat in Unidata/netcdf-c#2412
* Fix choice of HOME dir by \@DennisHeimbigner in Unidata/netcdf-c#2416
* Check for libxml2 development files by \@WardF in Unidata/netcdf-c#2417
* Updating Doxyfile.in with doxygen-1.8.17, turned on WARN_AS_ERROR, added doxygen build to CI run by \@edwardhartnett in Unidata/netcdf-c#2377
* updated release notes by \@edwardhartnett in Unidata/netcdf-c#2392
* increase read block size from 1 KB to 4 MB by \@wkliao in Unidata/netcdf-c#2319
* fixed RELEASE_NOTES.md by \@edwardhartnett in Unidata/netcdf-c#2423
* Fix pnetcdf tests in cmake by \@WardF in Unidata/netcdf-c#2437
* Updated CMakeLists to avoid corner case cmake error by \@WardF in Unidata/netcdf-c#2438
* Add `--disable-quantize` to configure by \@WardF in Unidata/netcdf-c#2439
* Fix the way CMake handles -DPLUGIN_INSTALL_DIR by \@DennisHeimbigner in Unidata/netcdf-c#2430
* fix and test quantize mode for NC_CLASSIC_MODEL by \@edwardhartnett in Unidata/netcdf-c#2445
* Guard _declspec(dllexport) in support of #2446 by \@WardF in Unidata/netcdf-c#2460
* Ensure that netcdf_json.h does not interfere with ncjson. by \@DennisHeimbigner in Unidata/netcdf-c#2448
* Prevent cmake writing to source dir by \@magnusuMET in Unidata/netcdf-c#2463
* more quantize testing and adding pre-processor constant NC_MAX_FILENAME to nc_tests.h by \@edwardhartnett in Unidata/netcdf-c#2457
* Provide a default enum const when fill value does not match any enum constant by \@DennisHeimbigner in Unidata/netcdf-c#2462
* Fix support for reading arrays of HDF5 fixed size strings by \@DennisHeimbigner in Unidata/netcdf-c#2466
* fix musl build by \@magnusuMET in Unidata/netcdf-c#1701
* Fix AWS SDK linking errors by \@dzenanz in Unidata/netcdf-c#2470
* Address jump-misses-init issue. by \@WardF in Unidata/netcdf-c#2488
* Remove stray merge conflict markers by \@WardF in Unidata/netcdf-c#2493
* Add support for Zarr string type to NCZarr by \@DennisHeimbigner in Unidata/netcdf-c#2492
* Fix some problems with PR 2492 by \@DennisHeimbigner in Unidata/netcdf-c#2497
* Fix some bugs in the blosc filter wrapper by \@DennisHeimbigner in Unidata/netcdf-c#2461
* Add option to control accessing external servers by \@DennisHeimbigner in Unidata/netcdf-c#2491
* Changed attribute case in documentation by \@WardF in Unidata/netcdf-c#2482
* Adding all-error-codes.md back in to distribution documentation. by \@WardF in Unidata/netcdf-c#2501
* Update hdf5 version in github actions. by \@WardF in Unidata/netcdf-c#2504
* Minor update to doxygen function documentation by \@gsjaardema in Unidata/netcdf-c#2451
* Fix some addtional errors in NCZarr by \@DennisHeimbigner in Unidata/netcdf-c#2503
* Cleanup szip handling some more by \@DennisHeimbigner in Unidata/netcdf-c#2421
* Check for zstd development headers in autotools by \@WardF in Unidata/netcdf-c#2507
* Add new options to nc-config by \@WardF in Unidata/netcdf-c#2509
* Cleanup built test sources in nczarr_test by \@DennisHeimbigner in Unidata/netcdf-c#2508
* Fix inconsistency in netcdf_meta.h by \@WardF in Unidata/netcdf-c#2512
* Small fix in nc-config.in by \@WardF in Unidata/netcdf-c#2513
* For loop initial declarations are only allowed in C99 mode by \@gsjaardema in Unidata/netcdf-c#2517
* Fix some dependencies in tst_nccopy3 by \@WardF in Unidata/netcdf-c#2518
* Update plugins/Makefile.am by \@WardF in Unidata/netcdf-c#2519
* Fix prereqs in ncdump/tst_nccopy4 in order to avoid race conditions. by \@WardF in Unidata/netcdf-c#2520
* Move construction of VERSION file to end of the build by \@DennisHeimbigner in Unidata/netcdf-c#2527
* Add draft filter quickstart guide by \@WardF in Unidata/netcdf-c#2531
* Turn off extraneous debug output by \@DennisHeimbigner in Unidata/netcdf-c#2537
* typo fix by \@wkliao in Unidata/netcdf-c#2538
* replace 4194304 with READ_BLOCK_SIZE by \@wkliao in Unidata/netcdf-c#2539
* Rename variable to avoid function name conflict by \@ibaned in Unidata/netcdf-c#2550
* Add Cygwin CI and stop installing unwanted plugins by \@DWesl in Unidata/netcdf-c#2529
* Merge subset of v4.9.1 files back into main development branch by \@WardF in Unidata/netcdf-c#2530
* Add a Filter quickstart guide document by \@WardF in Unidata/netcdf-c#2524
* Fix race condition in ncdump (and other) tests. by \@DennisHeimbigner in Unidata/netcdf-c#2552
* Make dap4 reference dap instead of hard-wired to be disabled. by \@WardF in Unidata/netcdf-c#2553
* Suppress nczarr_test/tst_unknown filter test by \@DennisHeimbigner in Unidata/netcdf-c#2557
* Add fenceposting for HAVE_DECL_ISINF and HAVE_DECL_ISNAN by \@WardF in Unidata/netcdf-c#2559
* Add an old static file. by \@WardF in Unidata/netcdf-c#2575
* Fix infinite loop in file inferencing by \@DennisHeimbigner in Unidata/netcdf-c#2574
* Merge Wellspring back into development branch by \@WardF in Unidata/netcdf-c#2560
* Allow ncdump -t to handle variable length string attributes by \@srherbener in Unidata/netcdf-c#2584
* Fix an issue I introduced with make distcheck by \@WardF in Unidata/netcdf-c#2590
* make UDF0 not require NC_NETCDF4 by \@jedwards4b in Unidata/netcdf-c#2586
* Expose user-facing documentation related to byterange DAP functionality. by \@WardF in Unidata/netcdf-c#2596
* Fix Memory Leak by \@DennisHeimbigner in Unidata/netcdf-c#2598
* CI: Change autotools CI build to out-of-tree build. by \@DWesl in Unidata/netcdf-c#2577
* Update github action configuration scripts. by \@WardF in Unidata/netcdf-c#2600
* Update the filter quickstart guide. by \@WardF in Unidata/netcdf-c#2602
* Fix symbol export on Windows by \@WardF in Unidata/netcdf-c#2604

## New Contributors

* \@georgthegreat made their first contribution in Unidata/netcdf-c#2412
* \@dzenanz made their first contribution in Unidata/netcdf-c#2470
* \@DWesl made their first contribution in Unidata/netcdf-c#2529
* \@srherbener made their first contribution in Unidata/netcdf-c#2584
* \@jedwards4b made their first contribution in Unidata/netcdf-c#2586

**Full Changelog**: Unidata/netcdf-c@v4.9.0...v4.9.1

### 4.9.1 - Release Candidate 2 - November 21, 2022

#### Known Issues

* A test in the `main` branch of `netcdf-cxx4` is broken by this rc; this will bear further investigation, but it is not being treated as a roadblock for the release candidate.
* The new document, `netcdf-c/docs/filter_quickstart.md`, is in rough-draft form.
#### Changes

* [Bug Fix] Fix a race condition when testing missing filters. See [Github #2557](Unidata/netcdf-c#2557).
* [Bug Fix] Fix some race conditions due to use of a common file in multiple shell scripts. See [Github #2552](Unidata/netcdf-c#2552).

### 4.9.1 - Release Candidate 1 - October 24, 2022

* [Enhancement][Documentation] Add Plugins Quick Start Guide. See [GitHub #2524](Unidata/netcdf-c#2524) for more information.
* [Enhancement] Add new entries in `netcdf_meta.h`, `NC_HAS_BLOSC` and `NC_HAS_BZ2`. See [Github #2511](Unidata/netcdf-c#2511) and [Github #2512](Unidata/netcdf-c#2512) for more information.
* [Enhancement] Add new options to `nc-config`: `--has-multifilters`, `--has-stdfilters`, `--has-quantize`, `--plugindir`. See [Github #2509](Unidata/netcdf-c#2509) for more information.
* [Bug Fix] Fix some errors detected in [PR #2497](Unidata/netcdf-c#2497). See [Github #2503](Unidata/netcdf-c#2503).
* [Bug Fix] Split the remote tests into two parts: one for the remotetest server and one for all other external servers. Also add a configure option to enable the latter set. See [Github #2491](Unidata/netcdf-c#2491).
* [Bug Fix] Fix blosc plugin errors. See [Github #2461](Unidata/netcdf-c#2461).
* [Bug Fix] Fix support for reading arrays of HDF5 fixed size strings. See [Github #2466](Unidata/netcdf-c#2466).
* [Bug Fix] Fix some errors detected in [PR #2492](Unidata/netcdf-c#2492). See [Github #2497](Unidata/netcdf-c#2497).
* [Enhancement] Add support for Zarr (fixed length) string type in nczarr. See [Github #2492](Unidata/netcdf-c#2492).
* [Bug Fix] Provide a default enum const when fill value does not match any enum constant for the value zero. See [Github #2462](Unidata/netcdf-c#2462).
* [Bug Fix] Fix the json submodule symbol conflicts between libnetcdf and the plugin-specific netcdf_json.h. See [Github #2448](Unidata/netcdf-c#2448).
* [Bug Fix] Fix quantize with CLASSIC_MODEL files. See [Github #2445](Unidata/netcdf-c#2445).
* [Enhancement] Add `--disable-quantize` option to `configure`.
* [Bug Fix] Fix CMakeLists.txt to handle all acceptable boolean values for -DPLUGIN_INSTALL_DIR. See [Github #2430](Unidata/netcdf-c#2430).
* [Bug Fix] Fix tst_vars3.c to use the proper szip flag. See [Github #2421](Unidata/netcdf-c#2421).
* [Enhancement] Provide a simple API to allow user access to the internal .rc file table: supports get/set/overwrite of entries of the form "key=value". See [Github #2408](Unidata/netcdf-c#2408).
* [Bug Fix] Use env variable USERPROFILE instead of HOME for windows and mingw. See [Github #2405](Unidata/netcdf-c#2405).
* [Bug Fix] Fix the nc_def_var_fletcher32 code in hdf5 to properly test the value of the fletcher32 argument. See [Github #2403](Unidata/netcdf-c#2403).

## 4.9.0 - June 10, 2022

* [Enhancement] Add quantize functions nc_def_var_quantize() and nc_inq_var_quantize() to enable lossy compression. See [Github #1548](Unidata/netcdf-c#1548).
* [Enhancement] Add zstandard compression functions nc_def_var_zstandard() and nc_inq_var_zstandard(). See [Github #2173](Unidata/netcdf-c#2173).
* [Enhancement] Have netCDF-4 logging output one file per processor when used with parallel I/O. See [Github #1762](Unidata/netcdf-c#1762).
* [Enhancement] Improve filter installation process to avoid use of an extra shell script. See [Github #2348](Unidata/netcdf-c#2348).
* [Bug Fix] Get "make distcheck" to work. See [Github #2343](Unidata/netcdf-c#2343).
* [Enhancement] Allow the read/write of JSON-valued Zarr attributes to allow for domain specific info such as used by GDAL/Zarr. See [Github #2278](Unidata/netcdf-c#2278).
* [Enhancement] Turn on the XArray convention for NCZarr files by default. WARNING: this means that the mode should explicitly specify "nczarr" or "zarr" even if "xarray" or "noxarray" is specified. See [Github #2257](Unidata/netcdf-c#2257).
* [Enhancement] Update the documentation to match the current filter capabilities. See [Github #2249](Unidata/netcdf-c#2249).
* [Enhancement] Support installation of pre-built standard filters into user-specified location. See [Github #2318](Unidata/netcdf-c#2318).
* [Enhancement] Improve filter support. More specifically: (1) add nc_inq_filter_avail to check if a filter is available, (2) add the notion of standard filters, (3) cleanup szip support to fix interaction with NCZarr. See [Github #2245](Unidata/netcdf-c#2245).
* [Enhancement] Switch to tinyxml2 as the default xml parser implementation. See [Github #2170](Unidata/netcdf-c#2170).
* [Bug Fix] Require that the type of the variable in nc_def_var_filter is not variable length. See [Github #2231](Unidata/netcdf-c#2231).
* [File Change] Apply HDF5 v1.8 format compatibility when writing to previous files, as well as when creating new files. The superblock version remains at 2 for newly created files. Full backward read/write compatibility for netCDF-4 is maintained in all cases. See [Github #2176](Unidata/netcdf-c#2176).
* [Enhancement] Add ability to set dataset alignment for netcdf-4/HDF5 files. See [Github #2206](Unidata/netcdf-c#2206).
* [Bug Fix] Improve UTF8 support on windows so that it can use utf8 natively. See [Github #2222](Unidata/netcdf-c#2222).
* [Enhancement] Add complete bitgroom support to NCZarr. See [Github #2197](Unidata/netcdf-c#2197).
* [Bug Fix] Clean up the handling of deeply nested VLEN types. Marks nc_free_vlen() and nc_free_string() as deprecated in favor of ncaux_reclaim_data(). See [Github #2179](Unidata/netcdf-c#2179).
* [Bug Fix] Make sure that netcdf.h accurately defines the flags in the open/create mode flags. See [Github #2183](Unidata/netcdf-c#2183).
* [Enhancement] Improve support for msys2+mingw platform. See [Github #2171](Unidata/netcdf-c#2171).
* [Bug Fix] Clean up the various inter-test dependencies in ncdump for CMake. See [Github #2168](Unidata/netcdf-c#2168).
* [Bug Fix] Fix use of non-aws appliances. See [Github #2152](Unidata/netcdf-c#2152).
* [Enhancement] Added options to suppress the new behavior from [Github #2135](Unidata/netcdf-c#2135). The options for `cmake` and `configure` are, respectively, `-DENABLE_LIBXML2` and `--(enable/disable)-libxml2`. Both of these options default to 'on/enabled'. When disabled, the bundled `ezxml` XML interpreter is used regardless of whether `libxml2` is present on the system.
* [Enhancement] Support optional use of libxml2, otherwise default to ezxml. See [Github #2135](Unidata/netcdf-c#2135) -- H/T to [Egbert Eich](https://github.com/e4t).
* [Bug Fix] Fix several os related errors. See [Github #2138](Unidata/netcdf-c#2138).
* [Enhancement] Support byte-range reading of netcdf-3 files stored in private buckets in S3. See [Github #2134](Unidata/netcdf-c#2134).
* [Enhancement] Support Amazon S3 access for NCZarr. Also support use of the existing Amazon SDK credentials system. See [Github #2114](Unidata/netcdf-c#2114).
* [Bug Fix] Fix string allocation error in H5FDhttp.c. See [Github #2127](Unidata/netcdf-c#2127).
* [Bug Fix] Apply patches for ezxml and for selected oss-fuzz detected errors. See [Github #2125](Unidata/netcdf-c#2125).
* [Bug Fix] Ensure that internal Fortran APIs are always defined. See [Github #2098](Unidata/netcdf-c#2098).
* [Enhancement] Support filters for NCZarr. See [Github #2101](Unidata/netcdf-c#2101).
* [Bug Fix] Make PR 2075 long file name be idempotent. See [Github #2094](Unidata/netcdf-c#2094).
## 4.8.1 - August 18, 2021

* [Bug Fix] Fix multiple bugs in libnczarr. See [Github #2066](Unidata/netcdf-c#2066).
* [Enhancement] Support windows network paths (e.g. \\svc\...). See [Github #2065](Unidata/netcdf-c#2065).
* [Enhancement] Convert to a new representation of the NCZarr meta-data extensions: version 2. Read-only backward compatibility is provided. See [Github #2032](Unidata/netcdf-c#2032).
* [Bug Fix] Fix dimension_separator bug in libnczarr. See [Github #2035](Unidata/netcdf-c#2035).
* [Bug Fix] Fix bugs in libdap4. See [Github #2005](Unidata/netcdf-c#2005).
* [Bug Fix] Store NCZarr fillvalue as a singleton instead of a 1-element array. See [Github #2017](Unidata/netcdf-c#2017).
* [Bug Fixes] The netcdf-c library was incorrectly determining the scope of dimensions; similar to the type scope problem. See [Github #2012](Unidata/netcdf-c#2012) for more information.
* [Bug Fix] Re-enable DAP2 authorization testing. See [Github #2011](Unidata/netcdf-c#2011).
* [Bug Fix] Fix bug with windows version of mkstemp that causes failure to create more than 26 temp files. See [Github #1998](Unidata/netcdf-c#1998).
* [Bug Fix] Fix ncdump bug when printing VLENs with basetype char. See [Github #1986](Unidata/netcdf-c#1986).
* [Bug Fixes] The netcdf-c library was incorrectly determining the scope of types referred to by nc_inq_type_equal. See [Github #1959](Unidata/netcdf-c#1959) for more information.
* [Bug Fix] Fix bug in use of XGetopt when building under Mingw. See [Github #2009](Unidata/netcdf-c#2009).
* [Enhancement] Improve the error reporting when attempting to use a filter for which no implementation can be found in HDF5_PLUGIN_PATH. See [Github #2000](Unidata/netcdf-c#2000) for more information.
* [Bug Fix] Fix `make distcheck` issue in `nczarr_test/` directory. See [Github #2007](Unidata/netcdf-c#2007).
* [Bug Fix] Fix bug in NCclosedir in dpathmgr.c. See [Github #2003](Unidata/netcdf-c#2003).
* [Bug Fix] Fix bug in ncdump that assumes that there is a relationship between the total number of dimensions and the max dimension id. See [Github #2004](Unidata/netcdf-c#2004). * [Bug Fix] Fix bug in JSON processing of strings with embedded quotes. See [Github #1993](Unidata/netcdf-c#1993). * [Enhancement] Add support for the new "dimension_separator" enhancement to Zarr v2. See [Github #1990](Unidata/netcdf-c#1990) for more information. * [Bug Fix] Fix hack for handling failure of shell programs to properly handle escape characters. See [Github #1989](Unidata/netcdf-c#1989). * [Bug Fix] Allow some primitive type names to be used as identifiers depending on the file format. See [Github #1984](Unidata/netcdf-c#1984). * [Enhancement] Add support for reading/writing pure Zarr storage format that supports the XArray _ARRAY_DIMENSIONS attribute. See [Github #1952](Unidata/netcdf-c#1952) for more information. * [Update] Updated version of bzip2 used in filter testing/functionality, in support of [Github #1969](Unidata/netcdf-c#1969). * [Bug Fix] Corrected HDF5 version detection logic as described in [Github #1962](Unidata/netcdf-c#1962). ## 4.8.0 - March 30, 2021 * [Enhancement] Bump the NC_DISPATCH_VERSION from 2 to 3, and as a side effect, unify the definition of NC_DISPATCH_VERSION so it only needs to be defined in CMakeLists.txt and configure.ac. See [Github #1945](Unidata/netcdf-c#1945) for more information. * [Enhancement] Provide better cross platform path name management. This converts paths for various platforms (e.g. Windows, MSYS, etc.) so that they are in the proper format for the executing platform. See [Github #1958](Unidata/netcdf-c#1958) for more information. * [Bug Fixes] The nccopy program was treating -d0 as turning deflation on rather than interpreting it as "turn off deflation". See [Github #1944](Unidata/netcdf-c#1944) for more information. * [Enhancement] Add support for storing NCZarr data in zip files. 
See [Github #1942](Unidata/netcdf-c#1942) for more information. * [Bug Fixes] Make fillmismatch the default for DAP2 and DAP4; too many servers ignore this requirement. * [Bug Fixes] Fix some memory leaks in NCZarr, fix a bug with long strides in NCZarr. See [Github #1913](Unidata/netcdf-c#1913) for more information. * [Enhancement] Add some optimizations to NCZarr, dosome cleanup of code cruft, add some NCZarr test cases, add a performance test to NCZarr. See [Github #1908](Unidata/netcdf-c#1908) for more information. * [Bug Fix] Implement a better chunk cache system for NCZarr. The cache now uses extendible hashing plus a linked list for provide a combination of expandibility, fast access, and LRU behavior. See [Github #1887](Unidata/netcdf-c#1887) for more information. * [Enhancement] Provide .rc fields for S3 authentication: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY. * [Enhancement] Give the client control over what parts of a DAP2 URL are URL encoded (i.e. %xx). This is to support the different decoding rules that servers apply to incoming URLS. See [Github #1884](Unidata/netcdf-c#1884) for more information. * [Bug Fix] Fix incorrect time offsets from `ncdump -t`, in some cases when the time `units` attribute contains both a **non-zero** time-of-day, and a time zone suffix containing the letter "T", such as "UTC". See [Github #1866](Unidata/netcdf-c#1866) for more information. * [Bug Fix] Cleanup the NCZarr S3 build options. See [Github #1869](Unidata/netcdf-c#1869) for more information. * [Bug Fix] Support aligned access for selected ARM processors. See [Github #1871](Unidata/netcdf-c#1871) for more information. * [Documentation] Migrated the documents in the NUG/ directory to the dedicated NUG repository found at https://github.com/Unidata/netcdf * [Bug Fix] Revert the internal filter code to simplify it. 
From the user's point of view, the only visible change should be that (1) the functions that convert text to filter specs have had their signature reverted and renamed and have been moved to netcdf_aux.h, and (2) some filter API functions now return NC_ENOFILTER when inquiry is made about some filter. Internally, the dispatch table has been modified to get rid of the complex structures.
* [Bug Fix] If the HDF5 byte-range Virtual File Driver is available (HDF5 1.10.6 or later) then use it, because it has better performance than the one currently built into the netcdf library.
* [Bug Fix] Fixed byte-range support with cURL > 7.69. See [Unidata/netcdf-c#1798].
* [Enhancement] Added new test for using compression with parallel I/O: nc_test4/tst_h_par_compress.c. See [Unidata/netcdf-c#1784].
* [Bug Fix] Don't return error for extra calls to nc_redef() for netCDF/HDF5 files, unless classic model is in use. See [Unidata/netcdf-c#1779].
* [Enhancement] Added new parallel I/O benchmark program to mimic NOAA UFS data writes, built when --enable-benchmarks is in configure. See [Unidata/netcdf-c#1777].
* [Bug Fix] Now allow szip to be used on variables with unlimited dimension [Unidata/netcdf-c#1774].
* [Enhancement] Add support for cloud storage using a variant of the Zarr storage format. Warning: this feature is highly experimental and is subject to rapid evolution [https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in].
* [Bug Fix] Fix nccopy to properly set default chunking parameters when not otherwise specified. This can significantly improve performance in selected cases. Note that if seeing slow performance with nccopy, then, as a work-around, specifically set the chunking parameters. [Unidata/netcdf-c#1763].
* [Bug Fix] Fix some protocol bugs/differences between the netcdf-c library and the OPeNDAP Hyrax server.
Also clean up checksum handling [https://github.com/Unidata/netcdf-c/issues/1712].
* [Bug Fix] IMPORTANT: Ncgen was not properly handling large data sections. The problem manifests as incorrect ordering of data in the created file. Aside from examining the file with ncdump, the error can be detected by running ncgen with the -lc flag (to produce a C file). Examine the file to see if any variable is written in pieces as opposed to a single call to nc_put_vara. If multiple calls to nc_put_vara are used to write a variable, then it is probable that the data order is incorrect. Such multiple writes can occur for large variables and especially when one of the dimensions is unlimited.
* [Bug Fix] Add necessary __declspec declarations to allow compilation of the netcdf library without causing errors or (_declspec related) warnings [Unidata/netcdf-c#1725].
* [Enhancement] When a filter is applied twice with different parameters, then the second set is used for writing the dataset [Unidata/netcdf-c#1713].
* [Bug Fix] Now larger cache settings are used for sequential HDF5 file creates/opens on parallel I/O capable builds; see [Github #1716](Unidata/netcdf-c#1716) for more information.
* [Bug Fix] Add functions to libdispatch/dnotnc4.c to support dispatch table operations that should work for any dispatch table, even if they do not do anything; functions such as nc_inq_var_filter [Unidata/netcdf-c#1693].
* [Bug Fix] Fixed a scalar annotation error when scalar == 0; see [Github #1707](Unidata/netcdf-c#1707) for more information.
* [Bug Fix] Use proper CURLOPT values for VERIFYHOST and VERIFYPEER; the semantics for VERIFYHOST in particular changed. Documented in NUG/DAP2.md. See [Unidata/netcdf-c#1684].
* [Bug Fix][cmake] Correct an issue with parallel filter test logic in CMake-based builds.
* [Bug Fix] Now allow nc_inq_var_deflate()/nc_inq_var_szip() to be called for all formats, not just HDF5. Non-HDF5 files return NC_NOERR and report no compression in use.
This reverts behavior that was changed in the 4.7.4 release. See [Unidata/netcdf-c#1691].
* [Bug Fix] Compiling on a big-endian machine exposes some missing forward declarations in dfilter.c.
* [File Change] Change from HDF5 v1.6 format compatibility, back to v1.8 compatibility, for newly created files. The superblock changes from version 0 back to version 2. An exception is when using libhdf5 deprecated versions 1.10.0 and 1.10.1, which can only create v1.6 compatible format. Full backward read/write compatibility for netCDF-4 is maintained in all cases. See [Github #951](Unidata/netcdf-c#951).

## 4.7.4 - March 27, 2020

* [Windows] Bumped packaged HDF5 to 1.10.6, HDF4 to 4.2.14, and libcurl to 7.60.0.
* [Enhancement] Support has been added for HDF5-1.12.0. See [Unidata/netcdf-c#1528].
* [Bug Fix] Correct behavior for the command line utilities when directly accessing a directory using utf8 characters. See [Github #1669](Unidata/netcdf-c#1669), [Github #1668](Unidata/netcdf-c#1668) and [Github #1666](Unidata/netcdf-c#1666) for more information.
* [Bug Fix] Attempts to set filters or chunked storage on scalar vars will now return NC_EINVAL. Scalar vars cannot be chunked, and only chunked vars can have filters. Previously the library ignored these attempts, and always stored scalars as contiguous storage. See [Unidata/netcdf-c#1644].
* [Enhancement] Support has been added for multiple filters per variable. See [Unidata/netcdf-c#1584].
* [Enhancement] Now nc_inq_var_szip returns 0 for parameter values if szip is not in use for var. See [Unidata/netcdf-c#1618].
* [Enhancement] Now allow parallel I/O with filters, for HDF5-1.10.3 and later. See [Unidata/netcdf-c#1473].
* [Enhancement] Increased default size of cache buffer to 16 MB, from 4 MB. Increased number of slots to 4133. See [Unidata/netcdf-c#1541].
* [Enhancement] Allow zlib compression to be used with parallel I/O writes, if HDF5 version is 1.10.3 or greater. See [Unidata/netcdf-c#1580].
* [Enhancement] Restore use of szip compression when writing data (including writing in parallel if HDF5 version is 1.10.3 or greater). See [Unidata/netcdf-c#1546].
* [Enhancement] Enable use of compact storage option for small vars in netCDF/HDF5 files. See [Unidata/netcdf-c#1570].
* [Enhancement] Updated benchmarking program bm_file.c to better handle very large files. See [Unidata/netcdf-c#1555].
* [Enhancement] Added version number to dispatch table, and now check version with nc_def_user_format(). See [Unidata/netcdf-c#1599].
* [Bug Fix] Fixed user setting of MPI launcher for parallel I/O HDF5 test in h5_test. See [Unidata/netcdf-c#1626].
* [Bug Fix] Fixed problem of growing memory when netCDF-4 files were opened and closed. See [Unidata/netcdf-c#1575 and Unidata/netcdf-c#1571].
* [Enhancement] Increased size of maximum allowed name in HDF4 files to NC_MAX_NAME. See [Unidata/netcdf-c#1631].

## 4.7.3 - November 20, 2019

* [Bug Fix] Fixed an issue where installs from tarballs will not properly compile in parallel environments.
* [Bug Fix] Library was modified so that rewriting the same attribute happens without deleting the attribute, to avoid a limit on how many times this may be done in HDF5. This fix was thought to be in 4.6.2 but was not. See [Unidata/netcdf-c#350].
* [Enhancement] Add a dispatch version number to netcdf_meta.h and libnetcdf.settings, in case we decide to change the dispatch table in future. See [Unidata/netcdf-c#1469].
* [Bug Fix] Now testing that endianness can only be set on atomic ints and floats. See [Unidata/netcdf-c#1479].
* [Bug Fix] Fix for subtle error involving var and unlimited dim of the same name, but unrelated, in netCDF-4. See [Unidata/netcdf-c#1496].
* [Enhancement] Update for attribute documentation. See [Unidata/netcdf-c#1512].
* [Bug Fix][Enhancement] Corrected assignment of anonymous (a.k.a. phony) dimensions in an HDF5 file.
Now when a dataset uses multiple dimensions of the same size, netcdf assumes they are different dimensions. See [GitHub #1484](Unidata/netcdf-c#1484) for more information.

## 4.7.2 - October 22, 2019

* [Bug Fix][Enhancement] Various bug fixes and enhancements.
* [Bug Fix][Enhancement] Corrected an issue where protected memory was being written to with some pointer sleight-of-hand. This has been in the code for a while, but appears to be caught by the compiler on OSX, under circumstances yet to be completely nailed down. See [GitHub #1486](Unidata/netcdf-c#1486) for more information.
* [Enhancement] [Parallel IO] Added support for parallel functions in MSVC. See [Github #1492](Unidata/netcdf-c#1492) for more information.
* [Enhancement] Added a function for changing the ncid of an open file. This function should only be used if you know what you are doing, and is meant to be used primarily with PIO integration. See [GitHub #1483](Unidata/netcdf-c#1483) and [GitHub #1487](Unidata/netcdf-c#1487) for more information.

## 4.7.1 - August 27, 2019

* [Enhancement] Added unit_test directory, which contains unit tests for the libdispatch and libsrc4 code (and any other directories that want to put unit tests there). Use --disable-unit-tests to run without unit tests (ex. for code coverage analysis). See [GitHub #1458](Unidata/netcdf-c#1458).
* [Bug Fix] Remove obsolete _CRAYMPP and LOCKNUMREC macros from code. Also brought documentation up to date in man page. These macros were used in ancient times, before modern parallel I/O systems were developed. Programmers interested in parallel I/O should see nc_open_par() and nc_create_par(). See [GitHub #1459](Unidata/netcdf-c#1459).
* [Enhancement] Remove obsolete and deprecated functions nc_set_base_pe() and nc_inq_base_pe() from the dispatch table. (Both functions are still supported in the library, this is an internal change only.) See [GitHub #1468](Unidata/netcdf-c#1468).
* [Bug Fix] Reverted nccopy behavior so that if no -c parameters are given, then any default chunking is left to the netcdf-c library to decide. See [GitHub #1436](Unidata/netcdf-c#1436).

## 4.7.0 - April 29, 2019

* [Enhancement] Updated behavior of `pkgconfig` and `nc-config` to allow the use of the `--static` flags, e.g. `nc-config --libs --static`, which will show information for linking against `libnetcdf` statically. See [Github #1360](Unidata/netcdf-c#1360) and [Github #1257](Unidata/netcdf-c#1257) for more information.
* [Enhancement] Provide byte-range reading of remote datasets. This allows read-only access to, for example, Amazon S3 objects and also Thredds Server datasets via the HTTPService access method. See [GitHub #1251](Unidata/netcdf-c#1251).
* Update the license from the home-brewed NetCDF license to the standard 3-Clause BSD License. This change does not result in any new restrictions; it is merely the adoption of a standard, well-known and well-understood license in place of the historic NetCDF license written at Unidata. This is part of a broader push by Unidata to adopt modern, standardized licensing.

## 4.6.3 - February 28, 2019

* [Bug Fix] Correct generation of `netcdf.pc` by either `configure` or `cmake`. If linking against a static netcdf, you would need to pass the `--static` argument to `pkg-config` in order to list all of the downstream dependencies. See [Github #1324](Unidata/netcdf-c#1324) for more information.
* Now always write hidden coordinates attribute, which allows faster file opens when present. See [Github #1262](Unidata/netcdf-c#1262) for more information.
* Some fixes for rename, including fix for renumbering of varids after a rename (#1307), and renaming var to dim without coordinate var. See [Github #1297](Unidata/netcdf-c#1297).
* Fix of NULL parameter causing segfaults in put_vars functions. See [Github #1265](Unidata/netcdf-c#1265) for more information.
* Fix of --enable-benchmark benchmark tests. See [Github #1211](Unidata/netcdf-c#1211).
* Update the license from the home-brewed NetCDF license to the standard 3-Clause BSD License. This change does not result in any new restrictions; it is merely the adoption of a standard, well-known and well-understood license in place of the historic NetCDF license written at Unidata. This is part of a broader push by Unidata to adopt modern, standardized licensing.
* [BugFix] Corrected DAP-related issues on big-endian machines. See [Github #1321](Unidata/netcdf-c#1321) and [Github #1302](Unidata/netcdf-c#1302) for more information.
* [BugFix][Enhancement] Various and sundry bugfixes and performance enhancements, thanks to \@edhartnett, \@gsjarrdema, \@t-b, \@wkliao, and all of our other contributors.
* [Enhancement] Extended `nccopy -F` syntax to support multiple variables with a single invocation. See [Github #1311](Unidata/netcdf-c#1311) for more information.
* [BugFix] Corrected an issue where DAP2 was incorrectly converting signed bytes, resulting in an erroneous error message under some circumstances. See [GitHub #1317](Unidata/netcdf-c#1317) for more information, and [Github #1319](Unidata/netcdf-c#1319) for related information.
* [BugFix][Enhancement] Modified `nccopy` so that `_NCProperties` is not copied over verbatim but is instead generated based on the version of `libnetcdf` used when copying the file. Additionally, `_NCProperties` are now displayed if/when associated with a netcdf3 file. See [GitHub #803](Unidata/netcdf-c#803) for more information.
I've read the Interoperability with HDF5 section in the docs.
> Assuming a HDF5 file is written in accordance with the netCDF-4 rules (i.e. no strange types, no looping groups), and assuming that every dataset has a dimension scale attached to each dimension, the netCDF-4 API can be used to read and edit the file, quite easily.
>
> [...]
>
> If dimension scales are not used, then netCDF-4 can still edit the file, and will invent anonymous dimensions for each variable shape.
This works quite well when reading HDF5 weather radar data, but has one quirk. If a 2D dataset has different sizes for its first and second dimensions (e.g. `(100, 200)`), two anonymous dimensions are invented. If it has the same size for both dimensions (e.g. `(100, 100)`), then only one anonymous dimension is invented.

In my use case this contradicts the original meaning of the 2D dataset, whose dimensions are meant to be distinct. Maybe I'm overlooking something, but I also can't think of any use case where both dimensions of a 2D dataset would intentionally be the *same* dimension.
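The quirk can be illustrated with a minimal pure-Python sketch. This is not netcdf-c source, and `assign_phony_dims` is a hypothetical helper; it just mimics a reader that keys anonymous dimensions on size alone, reproducing the names `ncdump` reports:

```python
def assign_phony_dims(shapes):
    """Map each dataset shape to phony dimension names, reusing an
    existing phony dimension whenever its size was seen before."""
    dims = {}    # size -> phony dimension name
    result = {}
    for name, shape in shapes.items():
        axes = []
        for size in shape:
            if size not in dims:
                dims[size] = f"phony_dim_{len(dims)}"
            axes.append(dims[size])
        result[name] = axes
    return result

# A (100, 200) dataset gets two distinct anonymous dimensions...
print(assign_phony_dims({"dset": (100, 200)}))
# {'dset': ['phony_dim_0', 'phony_dim_1']}

# ...but a square (100, 100) dataset collapses onto a single one.
print(assign_phony_dims({"dset": (100, 100)}))
# {'dset': ['phony_dim_0', 'phony_dim_0']}
```

Matching by size is what lets several datasets share one phony dimension, but it is also exactly what merges the two axes of a square dataset.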
On my way to find a working solution I've already asked on the netCDF4-python GitHub issue tracker and on the h5py mailing list, but it seems a solution is only possible by enhancing/changing the behaviour of netcdf-c.

From my perspective netcdf-c would need to invent two anonymous dimensions for a 2D dataset, even if the dimensions are of the same size. Another option would be to define the wanted behaviour at read time. Are there any other options?
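The first option could look like the following pure-Python sketch (again hypothetical, not a netcdf-c API): invent a fresh anonymous dimension for every axis of a dataset, so two same-sized axes stay distinct.

```python
def assign_unique_phony_dims(shapes):
    """Invent a fresh anonymous dimension for every dataset axis, so two
    same-sized axes of one dataset remain distinct dimensions."""
    counter = 0
    result = {}
    for name, shape in shapes.items():
        axes = []
        for size in shape:
            # Each axis gets its own (name, length) pair, regardless of size.
            axes.append((f"phony_dim_{counter}", size))
            counter += 1
        result[name] = axes
    return result

# A square (100, 100) dataset now keeps two distinct dimensions.
print(assign_unique_phony_dims({"dset": (100, 100)}))
# {'dset': [('phony_dim_0', 100), ('phony_dim_1', 100)]}
```

The trade-off is that this gives up sharing phony dimensions across datasets entirely, which is one argument for making the behaviour selectable at read time instead.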
Software Version Information
Output of `nc-config --all` (netCDF 4.6.2):
--cc -> x86_64-conda_cos6-linux-gnu-cc
--cflags -> -I/home/kai/miniconda/envs/wradlib_150/include
--libs -> -L/home/kai/miniconda/envs/wradlib_150/lib -lnetcdf -lmfhdf -ldf -lhdf5_hl -lhdf5 -lrt -lpthread -lz -ldl -lm -lcurl
--has-c++ -> no
--cxx ->
--has-c++4 -> no
--cxx4 ->
--has-fortran -> no
--has-dap -> yes
--has-dap2 -> yes
--has-dap4 -> yes
--has-nc2 -> yes
--has-nc4 -> yes
--has-hdf5 -> yes
--has-hdf4 -> yes
--has-logging -> no
--has-pnetcdf -> no
--has-szlib -> no
--has-cdf5 -> yes
--has-parallel4 -> no
--has-parallel -> no
--prefix -> /home/kai/miniconda/envs/wradlib_150
--includedir -> /home/kai/miniconda/envs/wradlib_150/include
--libdir -> /home/kai/miniconda/envs/wradlib_150/lib
--version -> netCDF 4.6.2
Output of `h5cc --showconfig` (HDF5 1.10.5):
General Information:
Compiling Options:
Linking Options:
Statically Linked Executables:
LDFLAGS: -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/home/kai/miniconda/envs/wradlib_150/lib -Wl,-rpath-link,/home/kai/miniconda/envs/wradlib_150/lib -L/home/kai/miniconda/envs/wradlib_150/lib
H5_LDFLAGS:
AM_LDFLAGS: -L/home/kai/miniconda/envs/wradlib_150/lib
Extra libraries: -lrt -lpthread -lz -ldl -lm
Archiver: /home/conda/feedstock_root/build_artifacts/hdf5_split_1566414109997/_build_env/bin/x86_64-conda_cos6-linux-gnu-ar
AR_FLAGS: cr
Ranlib: /home/conda/feedstock_root/build_artifacts/hdf5_split_1566414109997/_build_env/bin/x86_64-conda_cos6-linux-gnu-ranlib
Languages:
Features:
Parallel Filtered Dataset Writes: no
Large Parallel I/O: no
High-level library: yes
Threadsafety: yes
Default API mapping: v110
With deprecated public symbols: yes
I/O filters (external): deflate(zlib)
MPE: no
Direct VFD: no
dmalloc: no
Packages w/ extra debug output: none
API tracing: no
Using memory checker: yes
Memory allocation sanity checks: no
Function stack tracing: no
Strict file format checks: no
Optimization instrumentation: no
Python Version
netCDF4 version
h5py version