Make use of more built-in protobuf scalar types #62
Comments
hmm... currently there is:
bool, uint32, and uint64 are encoded exactly the same on the wire, so it makes sense to scrap uint32 and bool. Note that current protobuf decoding rules mean that for bools, any varint != 0 is TRUE. The advantage of using sint64 is that a 64-bit "-1" is 10 bytes as a uint64 but only 1 byte as a sint64.
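To illustrate that size difference, here is a minimal, self-contained Java sketch of protobuf's varint and zigzag encodings (the helper names are mine, not from Tahu or the protobuf runtime):

```java
// A minimal sketch showing why sint64's zigzag encoding matters:
// -1 fits in one varint byte after zigzag mapping, but needs the
// full ten bytes as a plain (u)int64 varint.
public class VarintSizeDemo {
    // Count the bytes a value occupies as a plain protobuf varint.
    static int varintSize(long v) {
        int bytes = 1;
        while ((v & ~0x7FL) != 0) { // more than 7 significant bits remain
            v >>>= 7;               // unsigned shift: -1 takes 9 extra rounds
            bytes++;
        }
        return bytes;
    }

    // ZigZag-map a signed value so small magnitudes stay small:
    // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    static long zigzag(long v) {
        return (v << 1) ^ (v >> 63);
    }

    public static void main(String[] args) {
        System.out.println(varintSize(-1L));         // 10 bytes as int64/uint64
        System.out.println(varintSize(zigzag(-1L))); // 1 byte as sint64
        System.out.println(varintSize(300L));        // 2 bytes (small positives stay small)
    }
}
```

Zigzag maps small negative numbers onto small unsigned ones before varint encoding, which is exactly why sint64 keeps "-1" down to a single byte.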
It could make sense to drop the smaller-sized fields like bool, uint32, and sint32 from the protobuf. Directly related, I'd like the Sparkplug specification to call out exactly how oversized values should be handled relative to the datatype field a metric was birthed with. For example, if your metric is birthed with a datatype of int8, but a value being passed is outside the range [-128,128), what should the recipient do? Reject the value, clamp it to the datatype's range, truncate it to the low-order bits, or pass it through unchanged? (A sketch of two of these options follows after this comment.)
None of those answers are perfect, but I would prefer the spec give guidance on which to choose, to minimize corner-case behavioral differences between implementations. There's a disconnect right now between the datatype a metric is birthed with and how various applications store and handle values of that metric. I know Ignition will send out-of-range values compared to the Sparkplug birth datatype. MQTT Engine appears to convert everything into the most appropriate Java datatype (int or long) and then use that datatype in Sparkplug when sending messages back to a node. If it's not obvious, I believe the datatype field a metric is birthed with takes precedence over any datatypes or values used later, even if the node sends the metric with a bigger datatype in a DATA payload. (Which I personally think should be against the spec, but currently is ambiguous.)
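For concreteness, a hypothetical Java sketch of two of those handling strategies, clamping versus truncation, applied to a metric birthed as int8 (the method names are illustrative and not from Tahu; rejecting or passing the value through are policy decisions rather than computations, so they are not shown):

```java
// A hypothetical sketch of two out-of-range strategies for a metric
// birthed as int8 but sent with the value 300.
public class RangeHandlingDemo {
    // Saturate to [-128, 127].
    static int clampToInt8(long v) {
        return (int) Math.max(Byte.MIN_VALUE, Math.min(Byte.MAX_VALUE, v));
    }

    // Keep only the low 8 bits, sign-extended.
    static int truncateToInt8(long v) {
        return (byte) v;
    }

    public static void main(String[] args) {
        long value = 300;                          // out of int8 range
        System.out.println(clampToInt8(value));    // 127
        System.out.println(truncateToInt8(value)); // 44 (300 - 256)
    }
}
```

The two strategies give different answers (127 vs. 44) for the same payload, which is precisely the cross-implementation divergence the comment is worried about.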
We've decided not to make any changes to the specification for this release. However, the Java implementation had an issue with encoding that is now fixed: eclipse-tahu/tahu#202. This issue has now been deferred to a future release.
… Rebirth requirements
Will this be in the V4 specification?
The Sparkplug B protobuf definition only provides uint32 and uint64 types in the one-of value groups. Signed integers are forced in and out of those fields using casting, which is not always obvious to implement depending on the programming language.
Additionally, sending any negative value, even a small one, immediately expands the line-size of those fields to the maximum varint length: 5 bytes for a uint32 field or 10 bytes for a uint64 field.
On the next revision of the protobuf definition, I think the one-of value groups should be expanded to also provide either int32/int64 values or sint32/sint64 values. Using these should reduce the amount of casting required to get signed values in and out of the payloads. Additionally, the "sint" versions would allow small signed integers like "-1" to take only one byte of line-size data, although I am unsure whether those versions are easy to use in all application languages.
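As an aside on the casting the issue describes, here is a minimal Java sketch of the round trip a signed value makes through an unsigned uint32 field (the variable names are illustrative; this is not the generated Tahu API):

```java
// A minimal sketch of the casting required to push a signed Java int
// through a uint32-typed protobuf field and recover it on the other side.
public class SignedCastDemo {
    public static void main(String[] args) {
        int original = -42;

        // Producer side: reinterpret the signed bits as an unsigned
        // 32-bit value, which is what a uint32 field carries.
        long asUint32 = original & 0xFFFFFFFFL;   // 4294967254

        // Consumer side: narrow back to 32 bits to recover the sign.
        int recovered = (int) asUint32;           // -42

        System.out.println(asUint32);
        System.out.println(recovered);
    }
}
```

The masking and narrowing must match exactly on both sides; native int32/sint32 fields in the schema would make this reinterpretation unnecessary, which is the point of the proposal.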