-
Notifications
You must be signed in to change notification settings - Fork 187
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Reading and writing CBOR documents in pieces #146
Comments
Hello Stuart Thank you for the suggestion. This is definitely a goal for TinyCBOR. It's possible to do what you ask for parsing, though currently only in the dev branch (and possibly my fork's dev branch). It's not a trivial API. Here's the current implementation: For reading, you read until you get I have not implemented for writing, but I do have a Either way, dealing with strings is non-trivial. Even the current implementation of the reader requires the entire string chunk to be present in memory to be decoded. You'd need to chunk your strings on writing (no API for this yet) and you'd need to ensure your writer only sent chunks smaller than your buffer. |
@thiagomacieira @sjlongland Hey! Any update on this? I have exactly this scenario, streaming CBOR payload of potentially large size (thousands of bytes) to a hardware crypto wallet "Ledger Nano S", which definitely falls under CoAP (Constrained Application Protocol), since communication from host machine (sending the long CBOR payload) to the Ledger is limited to 255 bytes. But I cannot make use of more than ~1600 bytes in total for my C program (Ledger Nano S app). Which will probably be fine, since I just need to hash it and also parse out some sparse data from by CBOR payload. Made extra tricky in my situation is the fact that some of my CBOR Map Type (Major type 5) will be larger than the chunk size of 255 bytes. i.e. a single CBOR Map type might begin at @thiagomacieira I realize I'm late to the party (18 months), but do you still have any (potentially stale) branch for doing chunked reading of a stream? (Yeah I did not say, only reading relevant for me...) |
@thiagomacieira also was |
Hmm does this PR #67 relate to this? Or is it only for strings (text), and not relating to top level CBOR Major types such as Array/Map ("object")? Sorry for confusing questions, I'm fairly new to CBOR and entirely new to this great great project, and nor am I an expert at C... |
Hello @Sajjon The dev branch here and in thiagomacieira/tinycbor should have this API working. I'd welcome feedback on suitability for small systems. The biggest roadblock I have had in publishing the API is making sure it's good for small systems. In particular, it was Zephyr, which has a chunked buffer of 256 byte slices and it would be really nice of TinyCBOR could just work with them.
Do note one important detail: you cannot split integers across buffers. So your rebuffering code must deal with up to 8 bytes that could not be interpreted off a chunk and must be copied to the next buffer before processing can resume. |
Hi @thiagomacieira I can't find |
Uh... looks like it's only in my experimental branch. See here: https://github.com/thiagomacieira/tinycbor/tree/dev |
Thanks! |
@thiagomacieira I have some feedback regarding the suitability of I have not found a way to use |
So, an update from my end… I'm starting to have a look at this. Starting point is the reader interface. I've begun by documenting what I understand of the interface, my work is here -- if I did misunderstand something in my hand-static-code-analysis, please sing out! My thinking is that to implement the "chunked" read, we start with a buffer of a fixed size. On the first read, we fill the buffer, and read values out of it as normal. When the read pointer gets to the half-way point in this fixed buffer (i.e. all data prior has been read), we That way, provided a value doesn't span more than two blocks, we should be able to read each value in its entirety. Tricky for networked streams, but for reading files from off-chip storage (e.g. SPIFFS on SPI flash in my case) it should work reasonably well. Looks like to do this, I must implement a Now, the snag I see with this is the I wonder if the interface shouldn't have some sort of flag that says (Edit: I should read others' posts more closely… I see @atomsmith more or less asked the same thing.) |
With regards to @atomsmith's observations… I can see two places where this problematic "duplication" of
Again, where the By the looks of things, there'll not be more than two |
Given this more thought, at the moment our problem is centred around the reader context ( The parser operations are referenced by the
In the end, I think for the cost of an extra I re-worked the unit test case so that the reader context pointed directly at the |
From the writing end… things seem to be straightforward enough, but I'm a bit confused about the This difference in behaviour is not reflected in the default implementation (which just reads/writes to a byte array). Things seem to be working for both read and write on my devices… reading a file off SPIFFS and "streaming" it through a small buffer. If I avoid using dangerous Question is, since I based off https://github.com/thiagomacieira/tinycbor/tree/dev do I submit the pull request there, or on this project? |
Hello Sorry for the delay, I was unavailable last week.
That's what is in the design, but looking at my own code, that enumeration is unused. I don't have notes why I added that enumeration, but for some reason a few years ago I thought it was important to let the handler know whether the data being appended was part of CBOR structured data or the free-form string content. When you add a string to CBOR, the callback function is called twice; once with
What I should do is import everything from there into the dev branch here, create the new "main" branch from it, delete the "master" branch, so your PRs should go here, not to the fork where I make my own changes. Let me see if I can get a week of dedicated time to TinyCBOR so we can make progress towards the 0.6 release. Meanwhile, thanks for the PRs in that branch/fork. I can pick them up and try with Qt, to see if it breaks anything or causes performance regressions. |
Ok, I've updated the main and dev branches (they're now in sync) with 0.6. The last release of 0.5 is done. Can I ask you to retarget this to dev branch, for an upcoming 0.7 release? |
Sure, I'll have a look. :-) |
New pull request is #208 |
Hi,
Is it possible to parse and and emit a CBOR document in parts? The use case here is where I might have a CBOR file that's in the order of 2kB in size, but I don't want to allocate a 2kB buffer to read the whole thing in one go (I've allocated 4kB for a stack and don't have a lot of heap space available).
The thinking is when reading, I might allocate a 256 or 512-byte buffer (on the stack), read the first buffer-full worth of data, then call the parser on that. As I get near the end, I watch for
CborErrorAdvancePastEOF
… when this happens, I obtain a copy of TinyCBOR's read pointer, do amemmove
to move the unread data to the start of my buffer,read
some more in, then tell TinyCBOR to move back to the start of the buffer to resume reading whatever CBOR value was being pointed to at the time.Likewise writing; when
CborErrorOutOfMemory
is encountered, I can do awrite
of the buffer, then tell TinyCBOR to continue from the start of the buffer and resume writing the output.Such a feature would really work well with CoAP block-wise transfers, as the data could be effectively "streamed" instead of having to buffer the lot.
I tried looking around for whether there was a flag I could specify on the decoder, but couldn't see any flags in the documentation.
Regards,
Stuart Longland
The text was updated successfully, but these errors were encountered: