fix nc_open_mem #400
Conversation
thehesiod commented Apr 28, 2017 (edited)
- fixes nc_open_mem fails #394
- requires: Fix NC_DISKLESS returns garbage data for certain files #403
For those on OSX, I've created a custom brew tap with the patch: https://github.com/thehesiod/homebrew-tap/blob/master/Formula/netcdf.rb
BTW, is there a way to create a diskless netCDF file, then get the bytes back without touching the disk? Something like
Not at the moment. And I am not sure it is doable for netcdf-4 because we rely on
After a quick look, it looks like we would
Thanks for checking! Hopefully this patch can land soon.
ack, even with this fix it seems like for non-trivial files it's returning corrupt data. With the following file: and the following code:

```cpp
#include <netcdf.h>
#include <netcdf_mem.h>

#include <iostream>
#include <fstream>
#include <assert.h>
#include <vector>

using namespace std;

#define HRAPY 200
#define HRAPX 333

static size_t value_count[] = {HRAPY, HRAPX};
static size_t start[] = {0, 0};

int main(int argc, const char * argv[]) {
    ifstream file("/tmp/foo.nc", ios::in | ios::binary | ios::ate);
    streamsize size = file.tellg();
    file.seekg(0, ios::beg);

    std::vector<char> buffer(size);
    int status;
    int ncid;

    if (file.read(buffer.data(), size))
    {
        status = nc_open_mem(".nc", 0, size, buffer.data(), &ncid);
        if (status != NC_NOERR) {
            cout << nc_strerror(status);
            assert(false);
        }

        int rh_id;
        status = nc_inq_varid(ncid, "amountofprecip", &rh_id);
        if (status != NC_NOERR) {
            cout << nc_strerror(status);
            assert(false);
        }

        int rh_vals[HRAPY][HRAPX];
        status = nc_get_vara_int(ncid, rh_id, start, value_count, (int*)rh_vals);
        if (status != NC_NOERR) {
            cout << nc_strerror(status);
            assert(false);
        }
    }
    return 0;
}
```

Note there are no errors; however, rows > 100 return garbage. Row 100 looks like:
whereas row 101 looks like:
opening the same file with
looks like it returns garbage after first iteration of for loop in
just logged #401 as I found this happens even with
ok, have a fix for
@DennisHeimbigner This looks straightforward to me, but since it is your code I thought I'd wait for you to weigh in before doing anything. It's a very small change and at a glance appears to be a good one.
Ok, digging deep, what I am seeing is a swap failure because
Pull request #403 must be applied before applying this pull request |
This reverts commit 09ba25c.
ok, reverted my hack. @DennisHeimbigner could you comment on the comment /* Use half the filesize as the blocksize ; why? */ here: https://github.com/Unidata/netcdf-c/blob/master/libsrc/memio.c#L385
@WardF If you wouldn't mind, I think it would be good to add a test for memio so it doesn't break again.
No problem, but it appears that Dennis added a test as part of his pull request
@WardF confused, because if he added a test it should fail w/o my fix. Ya, I see the test supports opening a file from mem, but I think it turns off the mem support by default (undef MEM). Pinging @DennisHeimbigner: I'm guessing he added a test that theoretically could be run as DISKLESS or MEM but right now is DISKLESS.
I added a modified version of your code as our testcase; assume you don't mind
@DennisHeimbigner np, does that mean your PR fixes the issue this PR attempts to fix, or that your testcase is currently failing in your PR?
@DennisHeimbigner sorry, there were two issues, let me explain:
Issue 1 (nc_open_mem returns error code):
Issue 2 (corrupt data):
So we're wondering if you also want to fix what this fixes in your PR, which would require 2 testcases, or perhaps one uber testcase of nc_open_mem w/ CDF5 data.
No, we should probably have a separate test case for issue 1 (nc_open_mem returns error).
Ok, another problem. When I run my test case using nc_open_mem, it works fine
I will check in a bit; I was focused on diskless before, not open mem. I'm in meetings currently and for much of the rest of the day, but will attempt to duplicate ASAP.
Alex - can you post the dfile.c that you are using (with or without this PR)?
ya, working on it now, still fighting MADIS data :)
ok, just used the latest 4.4.1.1 release with the following code:

```cpp
#include <netcdf.h>
#include <netcdf_mem.h>

#include <iostream>
#include <fstream>
#include <assert.h>
#include <vector>

using namespace std;

int main(int argc, const char * argv[]) {
    ifstream file("/tmp/20160513_1700.nc", ios::in | ios::binary | ios::ate);
    streamsize size = file.tellg();
    file.seekg(0, ios::beg);

    std::vector<char> buffer(size);
    int status;
    int ncid;

    if (file.read(buffer.data(), size))
    {
        status = nc_open_mem(".nc", 0, size, buffer.data(), &ncid);
        if (status != NC_NOERR) {
            cout << nc_strerror(status);
            assert(false);
        }
        std::cout << "Success!\n";
    }
    return 0;
}
```

and this file: https://madis-data.bldr.ncep.noaa.gov/madisPublic1/data/archive/2016/05/13/LDAD/mesonet/netCDF/20160513_1700.gz and I get
regarding dfile.c, you can get it from my branch: https://github.com/thehesiod/netcdf-c/blob/nc_open_mem_fix/libdispatch/dfile.c
Sorry, I just cannot duplicate the failure.
ok, let me try on debian
ok, my steps to repro:

```shell
docker run --rm -ti debian
apt-get update
apt-get install -y wget clang build-essential libcurl4-gnutls-dev libhdf4-dev libhdf5-dev vim
cd /tmp

# get test file
wget https://madis-data.bldr.ncep.noaa.gov/madisPublic1/data/archive/2016/05/13/LDAD/mesonet/netCDF/20160513_1700.gz
gunzip 20160513_1700.gz
mv 20160513_1700 20160513_1700.nc

# install cmake
wget https://cmake.org/files/v3.8/cmake-3.8.0-Linux-x86_64.sh && \
chmod u+x cmake-3.8.0-Linux-x86_64.sh && \
./cmake-3.8.0-Linux-x86_64.sh --prefix=/usr/local --skip-license

# build netcdf-c
wget https://github.com/Unidata/netcdf-c/archive/v4.4.1.1.tar.gz && \
tar xvpf v4.4.1.1.tar.gz
cd netcdf-c-4.4.1.1
mkdir build_dir
cd build_dir
cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DHAVE_HDF5_H=/usr/include/hdf5/serial/hdf5.h
make -j$(nproc) install

# build test app
cd /tmp
clang++ test.cpp -lnetcdf -o test
./test
```

results in:

```
root@52c457f5b5f5:/tmp# ./test
test: test.cpp:27: int main(int, const char **): Assertion `false' failed.
No such file or directory
Aborted
```
Thanks for providing a docker-based solution to reproduce this! I'll follow up on this.
Looks good.
Upstream changes:

## 4.6.1 - March 15, 2018

* [Bug Fix] Corrected an issue which could result in a dap4 failure. See [Github #888](Unidata/netcdf-c#888) for more information.
* [Bug Fix][Enhancement] Allow `nccopy` to control output filter suppression. See [Github #894](Unidata/netcdf-c#894) for more information.
* [Enhancement] Reverted some new behaviors that, while in line with the netCDF specification, broke existing workflows. See [Github #843](Unidata/netcdf-c#843) for more information.
* [Bug Fix] Improved support for CRT builds with Visual Studio; improves zlib detection in the hdf5 library. See [Github #853](Unidata/netcdf-c#853) for more information.
* [Enhancement][Internal] Moved HDF4 into a distinct dispatch layer. See [Github #849](Unidata/netcdf-c#849) for more information.

## 4.6.0 - January 24, 2018

* [Enhancement] Full support for using HDF5 dynamic filters, both for reading and writing. See the file docs/filters.md.
* [Enhancement] Added an option to enable strict null-byte padding for headers; this padding was specified in the spec but was not enforced. Enabling this option will allow you to check your files, as it will return an E_NULLPAD error. It is possible for these files to have been written by older versions of libnetcdf. There is no effective problem caused by this lack of null padding, so enabling these options is informational only. The options for `configure` and `cmake` are `--enable-strict-null-byte-header-padding` and `-DENABLE_STRICT_NULL_BYTE_HEADER_PADDING`, respectively. See [Github #657](Unidata/netcdf-c#657) for more information.
* [Enhancement] Reverted behavior/handling of out-of-range attribute values to the pre-4.5.0 default. See [Github #512](Unidata/netcdf-c#512) for more information.
* [Bug] Fixed error in tst_parallel2.c. See [Github #545](Unidata/netcdf-c#545) for more information.
* [Bug] Fixed handling of corrupt files + proper offset handling for hdf5 files. See [Github #552](Unidata/netcdf-c#552) for more information.
* [Bug] Corrected a memory overflow in `tst_h_dimscales`; see [Github #511](Unidata/netcdf-c#511), [Github #505](Unidata/netcdf-c#505), [Github #363](Unidata/netcdf-c#363) and [Github #244](Unidata/netcdf-c#244) for more information.

## 4.5.0 - October 20, 2017

* Corrected an issue which could potentially result in a hang while using parallel file I/O. See [Github #449](Unidata/netcdf-c#449) for more information.
* Addressed an issue with `ncdump` not properly handling dates on a 366-day calendar. See [GitHub #359](Unidata/netcdf-c#359) for more information.

### 4.5.0-rc3 - September 29, 2017

* [Update] Due to ongoing issues, native CDF5 support has been disabled by **default**. You can use the options mentioned below (`--enable-cdf5` or `-DENABLE_CDF5=TRUE` for `configure` or `cmake`, respectively). Just be aware that for the time being, reading/writing CDF5 files on 32-bit platforms may result in unexpected behavior when using extremely large variables. For 32-bit platforms it is best to continue using `NC_FORMAT_64BIT_OFFSET`.
* [Bug] Corrected an issue where older versions of curl might fail. See [GitHub #487](Unidata/netcdf-c#487) for more information.
* [Enhancement] Added options to enable/disable `CDF5` support at configure time for autotools and cmake-based builds. The options are `--enable/disable-cdf5` and `ENABLE_CDF5`, respectively. See [Github #484](Unidata/netcdf-c#484) for more information.
* [Bug Fix] Corrected an issue when subsetting a netcdf3 file via `nccopy -v/-V`. See [Github #425](Unidata/netcdf-c#425) and [Github #463](Unidata/netcdf-c#463) for more information.
* [Bug Fix] Corrected `--has-dap` and `--has-dap4` output for cmake-based builds. See [GitHub #473](Unidata/netcdf-c#473) for more information.
* [Bug Fix] Corrected an issue where `NC_64BIT_DATA` files were being read incorrectly by ncdump, despite the data having been written correctly. See [GitHub #457](Unidata/netcdf-c#457) for more information.
* [Bug Fix] Corrected a potential stack buffer overflow. See [GitHub #450](Unidata/netcdf-c#450) for more information.

### 4.5.0-rc2 - August 7, 2017

* [Bug Fix] Addressed an issue with how cmake was implementing large file support on 32-bit systems. See [GitHub #385](Unidata/netcdf-c#385) for more information.
* [Bug Fix] Addressed an issue where ncgen would not respect keyword case. See [GitHub #310](Unidata/netcdf-c#310) for more information.

### 4.5.0-rc1 - June 5, 2017

* [Enhancement] DAP4 is now included. Since dap2 is the default for urls, dap4 must be specified by (1) using "dap4:" as the url protocol, or (2) appending "#protocol=dap4" to the end of the url, or (3) appending "#dap4" to the end of the url. Note that dap4 is enabled by default but remote-testing is disabled until the testserver situation is resolved.
* [Enhancement] The remote testing server can now be specified with the `--with-testserver` option to ./configure.
* [Enhancement] Modified netCDF4 to use ASCII for NC_CHAR. See [Github Pull request #316](Unidata/netcdf-c#316) for more information.
* [Bug Fix] Corrected an error with how dimsizes might be read. See [Github #410](Unidata/netcdf-c#410) for more information.
* [Bug Fix] Corrected an issue where 'make check' would fail if 'make' or 'make all' had not run first. See [Github #339](Unidata/netcdf-c#339) for more information.
* [Bug Fix] Corrected an issue on Windows with Large file tests. See [Github #385](Unidata/netcdf-c#385) for more information.
* [Bug Fix] Corrected an issue with diskless file access. See [Pull Request #400](Unidata/netcdf-c#400) and [Pull Request #403](Unidata/netcdf-c#403) for more information.
* [Upgrade] The bash-based test scripts have been upgraded to use a common test_common.sh include file that isolates build-specific information.
* [Refactor] The oc2 library is no longer independent of the main netcdf-c library. For example, it now uses ncuri, nclist, and ncbytes instead of its homegrown equivalents.
* [Bug Fix] `NC_EGLOBAL` is now properly returned when attempting to set a global `_FillValue` attribute. See [GitHub #388](Unidata/netcdf-c#388) and [GitHub #389](Unidata/netcdf-c#389) for more information.
* [Bug Fix] Corrected an issue where data loss would occur when `_FillValue` was mistakenly allowed to be redefined. See [Github #390](Unidata/netcdf-c#390) and [GitHub #387](Unidata/netcdf-c#387) for more information.
* [Upgrade][Bug] Corrected an issue regarding how "orphaned" DAS attributes were handled. See [GitHub #376](Unidata/netcdf-c#376) for more information.
* [Upgrade] Update utf8proc.[ch] to use the version now maintained by the Julia Language project (https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
* [Bug] Addressed a conversion problem with Windows sscanf. This primarily affected some OPeNDAP URLs on Windows. See [GitHub #365](Unidata/netcdf-c#365) and [GitHub #366](Unidata/netcdf-c#366) for more information.
* [Enhancement] Added support for HDF5 collective metadata operations when available. Patch submitted by Greg Sjaardema; see [Pull request #335](Unidata/netcdf-c#335) for more information.
* [Bug] Addressed a potential type punning issue. See [GitHub #351](Unidata/netcdf-c#351) for more information.
* [Bug] Addressed an issue where netCDF wouldn't build on Windows systems using MSVC 2012. See [GitHub #304](Unidata/netcdf-c#304) for more information.
* [Bug] Fixed an issue related to potential type punning; see [GitHub #344](Unidata/netcdf-c#344) for more information.
* [Enhancement] Incorporated an enhancement provided by Greg Sjaardema, which may improve read/write times for some complex files. Basically, linked lists were replaced in some locations where it was safe to use an array/table. See [Pull request #328](Unidata/netcdf-c#328) for more information.