Add support to NeuroJSON (.jnii and .bnii) files (to development branch) #579
Conversation
Hi Qianqian, I would suggest rebasing or cherry-picking your commits onto the head of the development branch, to avoid these unnecessary merge commits here. Btw, does …
Hi @ningfei, I will try a rebase and update this PR.
I basically recompiled dcm2niix and used … However, if I convert a NIfTI file to jnii using …
@ningfei, can you give me the full dcm2niix command (or the path of the DICOM file you converted)? I just tested again with the files in …
@fangq Yes, I'm sure I used the latest version of jsonlab. I tried both … and each time got: `Warning: Failed to decode embedded JData annotations, return raw JSON data`
That's a general problem, I'm afraid. I did tests for all the files under …
What is the output of …? The other thing to check is to make sure your dcm2niix is compiled from this branch: … Also, if you can post one of the generated .jnii files to a Google Drive/Dropbox, I will try to see if I can load it from my side.
For sure, I applied your PR to the dcm2niix development branch. But now I have also tried using the NeuroJSON repo. Same error. I also tested on Windows, still the same error. To me, it seems that the base64 encoding in dcm2niix is not working properly. You can check out this test jnii I converted; it's from …
@ningfei, thanks for the test, I figured out what happened - my implementation of the JSON-encoded uncompressed data does not agree with the specification. In most of my tests, I used compression (…); give me a moment to add a correction.
@ningfei, the issue should be fixed, although you will need to update both dcm2niix and jsonlab; here are the two needed patches: … What happened was that the previously generated uncompressed … where … To store uncompressed binary data, I should have used this construct: …

Because in the past most of my JSON files used internal compression to reduce size, storing uncompressed binary data in JSON had not been encountered until now. I am so glad that you caught this so it won't become an issue moving forward. Please test again and let me know if you see any additional issues. PS: I want to add that …
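Since the construct itself is elided above, here is a minimal, hypothetical sketch of the general JData annotated-array idea that jsonlab and dcm2niix build on, using only the Python standard library. The field names (`_ArrayType_`, `_ArraySize_`, `_ArrayZipType_`, `_ArrayZipData_`) follow the JData specification for zlib-compressed arrays, but exact semantics (especially `_ArrayZipSize_`) should be checked against the spec; this is an illustration, not the dcm2niix implementation.

```python
import base64, json, struct, zlib

def encode_jdata_array(values, shape, typ="single"):
    # Annotated N-D array with a zlib-compressed, base64-encoded payload.
    # Field names follow the JData spec as used by jsonlab; check the spec
    # for the exact meaning of _ArrayZipSize_ (pre-compression buffer shape).
    raw = struct.pack("<%df" % len(values), *values)  # little-endian float32
    return {
        "_ArrayType_": typ,
        "_ArraySize_": list(shape),
        "_ArrayZipType_": "zlib",
        "_ArrayZipSize_": [1, len(values)],
        "_ArrayZipData_": base64.b64encode(zlib.compress(raw)).decode("ascii"),
    }

def decode_jdata_array(obj):
    raw = zlib.decompress(base64.b64decode(obj["_ArrayZipData_"]))
    n = 1
    for d in obj["_ArraySize_"]:
        n *= d
    return list(struct.unpack("<%df" % n, raw))

annotated = encode_jdata_array([1.0, 2.0, 3.0, 4.0], (2, 2))
text = json.dumps(annotated)        # this JSON text is what a .jnii stores
assert decode_jdata_array(json.loads(text)) == [1.0, 2.0, 3.0, 4.0]
```

The round trip through `json.dumps`/`json.loads` is the point: the annotated form survives any standards-compliant JSON toolchain, which is what broke when the uncompressed construct deviated from the spec.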
Great, thanks! It works now. I haven't really looked into the details of the JData specifications. I understand that JNIfTI would be more extensible/flexible, but as I tested, neither the size of the generated file nor the loading speed is comparable to the standard NIfTI file. See the comparison below (the same T1 image we used for the test from dcm_qa_uih). Did I miss something here?
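On the size question, base64 embedding alone inflates raw binary by about one third, which a quick standard-library experiment makes concrete (a toy illustration with synthetic bytes, not the dcm_qa_uih measurement):

```python
import base64, os, zlib

payload = os.urandom(1 << 20)           # 1 MiB of incompressible bytes (worst case)
b64 = base64.b64encode(payload)
print(len(b64) / len(payload))          # ~1.33: base64 encodes 3 bytes as 4 chars

smooth = bytes(i % 7 for i in range(1 << 20))   # highly redundant "image-like" data
packed = base64.b64encode(zlib.compress(smooth))
print(len(packed) / len(smooth))        # compress-then-encode can still shrink it
```

Compressing before encoding (as `-z y` does for the data segment) recovers much of the overhead for typical, redundant image data, at the cost of decode time.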
@ningfei thanks for your great detective work on this PR. You may be interested in this group. Formats like GIfTI (xml with binary data embedded as base64) and JNIfTI (JSON with binary data embedded as base64) trade off file size and speed in an effort to make the headers human readable and extensible. |
Thanks @neurolabusc ! Just joined, will check out the discussion there. |
@ningfei, what @neurolabusc mentioned is correct - it is a trade-off. On the down side: …

On the pro side: …

If speed/size is a concern, I would recommend you try … JNIfTI also supports other compression methods, such as …
@fangq for upcoming formats I would suggest you evaluate zstd instead of lzma or lz4. You can evaluate different compression methods using my zlib-bench-python; uncomment or add lines like … It should also be noted that recent zstd releases have improved performance a lot since I last ran my benchmarks, so the difference will be even more stark. zstd is a real game changer for open-source image compression. It would be interesting to see how the latest zstd compares against the outstanding but proprietary Oodle products. My takeaway is that lzma and lz4 are not on the Pareto frontier: by adjusting the compression level, zstd can be tuned from super fast to super compression.
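For intuition, a small benchmark in the spirit of zlib-bench-python can compare the codecs that ship with Python's standard library; zstd and lz4 are third-party packages, so they are only noted in a comment, and the synthetic buffer is a stand-in, not real image data:

```python
import lzma, time, zlib

# Synthetic, highly redundant buffer standing in for image data (illustration only).
data = bytes((i * i) % 251 for i in range(1 << 20))

def bench(name, compress):
    # Report compression ratio and wall-clock time for one codec.
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:>8}: ratio {len(data) / len(out):6.1f}x in {dt * 1e3:7.1f} ms")
    return out

z = bench("zlib -6", lambda d: zlib.compress(d, 6))
x = bench("lzma", lzma.compress)
# zstd is not in the standard library; with the third-party 'zstandard'
# package one could add: bench("zstd", zstandard.ZstdCompressor().compress)
assert zlib.decompress(z) == data and lzma.decompress(x) == data
```

Swapping in real voxel buffers and sweeping compression levels is what exposes the Pareto frontier Chris describes.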
@fangq great to know that you are trying to add support for mmap. Looking forward to it! It would be very useful, for example, if I want to query the value at specific voxel coordinates across a large number of images.
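As a sketch of that voxel-query use case, the standard library's `mmap` can pull a single value out of a large uncompressed volume without reading the rest. This assumes an uncompressed float32 payload at a fixed header offset (352 bytes, the usual NIfTI-1 `vox_offset`) in column-major order; the file below is a toy stand-in, and compressed .nii.gz or compressed .jnii payloads cannot be mapped this way:

```python
import mmap, os, struct, tempfile

VOX_OFFSET, DIMS = 352, (4, 5, 6)   # assumed header size and volume shape

def read_voxel(path, i, j, k):
    # Column-major (Fortran-order) linear index, as NIfTI stores voxels.
    index = i + DIMS[0] * (j + DIMS[1] * k)
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Only the touched page is read from disk, not the whole volume.
        return struct.unpack_from("<f", m, VOX_OFFSET + 4 * index)[0]

# Build a toy file: a 352-byte dummy header plus float32 voxels whose value
# equals their linear index, so the lookup is easy to verify.
n = DIMS[0] * DIMS[1] * DIMS[2]
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\0" * VOX_OFFSET)
    f.write(struct.pack("<%df" % n, *range(n)))

val = read_voxel(path, 1, 2, 3)     # index 1 + 4*(2 + 5*3) = 69
os.remove(path)
print(val)                          # 69.0
```

Repeating `read_voxel` over thousands of files touches only one page per file, which is the appeal of mmap for this workload.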
Thanks @neurolabusc for the pointers on data compression. Somebody did mention zstd in one of the zmat issues (https://github.com/fangq/zmat/issues/5), but I hadn't read it carefully; it is indeed quite impressive. The swizzled data compression is impressive too. Lots of good tools to incorporate.
Thanks Chris for the feedback. Here is the resubmission of #578, patched against the `development` branch. I re-tested the export feature and everything looks good. Let me know if you have any other feedback.

Just want to mention that the Travis-CI test seems to be failing due to other changes unrelated to JNIfTI; I see the upstream repo now uses AppVeyor instead of Travis.
Dear Chris (@neurolabusc), I would like to submit a patch to enable `dcm2niix` to export to NeuroJSON NIfTI JSON wrappers, including:

- `.jnii` - a lightweight JSON wrapper to NIfTI (text-based JNIfTI, see the current specification)
- `.bnii` - a binary JSON (yet still human-readable) wrapper to NIfTI for disk/IO efficiency

The patch is fairly straightforward, adding two flags: `-e j` for exporting to `.jnii` and `-e b` for exporting to `.bnii`. Both flags are compression-aware: when `-z y` is given, the data segment of the .jnii/.bnii file is compressed according to the compression level. In this patch, I only added a few functions (`nii_savejnii`, `nii_savebnii`) in the `nii_dicom_batch.cpp` unit, plus two helper units, `cJSON.c/h` and `base64.c/h`, both copied from your nii2mesh code tree. The added code should have no impact on any of the existing features.

The updated code has passed the Travis-CI automated tests, and I have also run `valgrind` to make sure there are no memory leaks. I also formatted the new functions to use the same tab-based indentation style.

The screenshot at the bottom summarizes some of the tests I did, making sure the generated files are readily readable by my Python and MATLAB parsers. It also demonstrates additional benefits enabled by this format, including smaller file sizes, header searchability (without decompressing files), and readability.
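To illustrate the header-searchability claim, here is a hypothetical round trip over a toy .jnii-style document using only the Python standard library. The top-level names `NIFTIHeader`/`NIFTIData` follow the JNIfTI draft specification but are assumptions here, not verified dcm2niix output:

```python
import base64, json, zlib

# Toy .jnii-style document; NIFTIHeader/NIFTIData naming follows the JNIfTI
# draft spec and is an assumption, as are the example header fields.
doc = {
    "NIFTIHeader": {"Dim": [2, 2], "DataType": "uint8", "Description": "demo"},
    "NIFTIData": {
        "_ArrayType_": "uint8",
        "_ArraySize_": [2, 2],
        "_ArrayZipType_": "zlib",
        "_ArrayZipData_": base64.b64encode(zlib.compress(bytes([1, 2, 3, 4]))).decode(),
    },
}
text = json.dumps(doc)

# Header fields are plain JSON: searchable/greppable with no binary decoding.
hdr = json.loads(text)["NIFTIHeader"]
print(hdr["Dim"], hdr["Description"])

# The data segment decodes with two standard-library calls.
vox = zlib.decompress(base64.b64decode(json.loads(text)["NIFTIData"]["_ArrayZipData_"]))
assert list(vox) == [1, 2, 3, 4]
```

Because the header is ordinary JSON text, tools like `grep` or `jq` can query metadata across an archive without touching the voxel payload at all.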
I hope you can take a look at this patch and let me know if it can be accepted. I am committed to supporting this feature moving forward. Looking forward to hearing from you!