Other sizes of data (group size and Endianness) #104
Thank you for the feedback. It's not entirely clear to me what the output would look like. Say I choose …: … for …, … for …?
That's pretty much exactly what I was picturing, yes.
This looks similar to …
I recently came across this when reading this blog post, which makes use of …
It is similar to …
@RinHizakura If you find the time, could you maybe summarize what is and what is not possible with your new option in #170? (released today)
The new option … On the other hand, this can only be shown in big-endian format; the little-endian dump is not supported yet.
I think this limitation is fine for now. 16 would probably be nice, but I understand that it probably interferes with …
Right. I agree with @ACleverDisguise that this would be a really nice feature to have. So let's keep this ticket open for now.
I think the main functionality requested in this ticket is now supported, with #189 by @RinHizakura also merged.
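For context on what the endianness distinction means for a grouped dump, here is a minimal Rust sketch; it is not hexyl's actual code, and the byte values are made up. The same four bytes read as different words depending on byte order:

```rust
fn main() {
    // Four raw bytes in the order they appear in the file.
    let bytes = [0x12u8, 0x34, 0x56, 0x78];

    // Interpreted as one 32-bit word: big-endian keeps file order,
    // little-endian reverses it.
    let be = u32::from_be_bytes(bytes);
    let le = u32::from_le_bytes(bytes);

    println!("big-endian:    {:08x}", be); // 12345678
    println!("little-endian: {:08x}", le); // 78563412
}
```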
I frequently have to dump data files (ADC output, for example) that don't just have byte-oriented data. It would be nice to be able to specify data width in the dump so I get the hex data grouped in the natural data size instead of having to do the little-endian two-step and mentally group indistinguishable bytes by 2 or 4 or whatever. Something like:
--word-size=1 (uint8_t, default)
--word-size=2 (uint16_t)
--word-size=4 (uint32_t)
--word-size=8 (uint64_t)
--word-size=16 (uint128_t)
That covers the common-ish types. If you want to be really brave you could do weird crap like 3-byte or 17-byte, but that is likely low return on investment.
Not all such data is little-endian, so an extra flag for those cases where word-size > 1 would be:
--little-endian (default)
--big-endian
Also, interpretation could be signed or unsigned:
--signed
--unsigned (default)
Of course with this you'd drop the byte-oriented colouration (but maybe with --signed you'd highlight negative numbers in red or something).
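To make the requested grouping concrete, here is a small, hypothetical Rust sketch. The names `dump_words`, `word_size`, and `big_endian` are illustrative, not hexyl's real API, and a real dump would also need offsets, padding, colour, and the ASCII panel:

```rust
// Hypothetical sketch of the requested grouping; not hexyl's implementation.
fn dump_words(data: &[u8], word_size: usize, big_endian: bool) {
    for word in data.chunks(word_size) {
        // A short trailing chunk just prints whatever bytes are present.
        let bytes: Vec<u8> = if big_endian {
            word.to_vec()
        } else {
            // Little-endian display: reverse the bytes within each group.
            word.iter().rev().copied().collect()
        };
        let hex: String = bytes.iter().map(|b| format!("{:02x}", b)).collect();
        print!("{} ", hex);
    }
    println!();
}

fn main() {
    let data = [0x01u8, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08];
    dump_words(&data, 2, false); // 0201 0403 0605 0807
    dump_words(&data, 4, true);  // 01020304 05060708
}
```

Reversing the bytes within each group for little-endian display is exactly the "little-endian two-step" described in the request, just done by the tool instead of in the reader's head.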