Improve writing when streaming multiple chunks #244
Comments
Hello, this feature looks very promising! Has there been any update?
Hi, this issue is currently just a placeholder for something that would be nice to have, to reduce file size and improve read performance for written files, but it's not something I've been working on. Your error looks like something else though; it should be possible to append to an existing file if you create a …
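The comment above appears to be cut off; it presumably refers to opening the file in append mode. A minimal sketch, assuming TdmsWriter accepts mode="a" as described in the npTDMS documentation (the file name, group/channel names, and data here are made up):

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

# Initial write creates the file and its first segment.
with TdmsWriter("log.tdms") as writer:
    writer.write_segment([ChannelObject("group", "channel", np.zeros(100))])

# Later, reopen the same file in append mode so new segments are added
# to the end instead of overwriting the existing data.
with TdmsWriter("log.tdms", mode="a") as writer:
    writer.write_segment([ChannelObject("group", "channel", np.ones(100))])
```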
Hello, ***************** New logfile section *******************
The code is fairly plain. The file was not corrupted; I tried it with two known-good files. I'd take any hint to make it work.
Hi @zbeebee, apologies that it's taken me a long time to look into this, so it may no longer be relevant to you. I believe I've reproduced this problem, and it seems to be caused by npTDMS writing files with an older version number than the one the file already uses. I've opened a new issue for this at #264.
Implementing a buffered writer would also help reduce file size and increase speed.
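For illustration, here is one possible shape such a buffered writer could take. This is a hypothetical sketch layered on top of the existing TdmsWriter API, not something npTDMS provides; the class name, flush_size parameter, and single-channel layout are all assumptions:

```python
import numpy as np
from nptdms import ChannelObject

class BufferedChannelWriter:
    """Accumulate small chunks in memory and flush them as one larger
    segment, so fewer segments (and less repeated metadata) end up on
    disk. Hypothetical sketch, not part of the npTDMS API."""

    def __init__(self, tdms_writer, group, channel, flush_size=100_000):
        self._writer = tdms_writer
        self._group = group
        self._channel = channel
        self._flush_size = flush_size
        self._buffer = []
        self._buffered = 0

    def write(self, chunk):
        chunk = np.asarray(chunk)
        self._buffer.append(chunk)
        self._buffered += len(chunk)
        if self._buffered >= self._flush_size:
            self.flush()

    def flush(self):
        if self._buffer:
            data = np.concatenate(self._buffer)
            self._writer.write_segment(
                [ChannelObject(self._group, self._channel, data)])
            self._buffer = []
            self._buffered = 0
```

Buffering like this trades memory for fewer, larger segments, which reduces the repeated metadata on disk even without the format-level metadata reuse discussed in this issue.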
The TDMS file writing support in npTDMS is currently fairly simple: for every segment we always write a new object list and a full raw data index for every channel. When streaming chunks of data to disk, it would be nice to make use of the TDMS format features that allow reusing previous metadata, to reduce file size and improve efficiency when reading the files back. Some things to consider:
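For context, a minimal sketch of the chunked-writing pattern this issue is about, using the current TdmsWriter API (the acquire_chunk helper, file name, and group/channel names are made up):

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

def acquire_chunk(n=1_000):
    # Stand-in for a real data source (hypothetical helper).
    return np.random.rand(n)

with TdmsWriter("streamed.tdms") as writer:
    for _ in range(100):
        chunk = acquire_chunk()
        # Each call writes a separate segment; currently every segment
        # repeats the object list and raw data index for the channel,
        # which is the overhead this issue is about reducing.
        writer.write_segment([ChannelObject("group", "channel", chunk)])
```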