
Why is a value of uint32 type assigned to long_value, whose type is uint64_t? #131

Closed

zyfromsh opened this issue Apr 12, 2021 · 2 comments

Comments

@zyfromsh

Please see the piece of code below, from tahu.c:

```c
case METRIC_DATA_TYPE_UINT32:
    metric->which_value = org_eclipse_tahu_protobuf_Payload_Metric_long_value_tag;
    metric->value.long_value = *(uint32_t *)value;
```

If the type of value is METRIC_DATA_TYPE_UINT32, the value is assigned to long_value.
Why not use value.int_value to hold the uint32 value? The type of int_value is uint32_t, which is enough room, I think.
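
For reference, here is a sketch of the two candidate fields as nanopb generates them for the metric struct (field set abbreviated and layout approximate; in the real generated code the selector is a pb_size_t named which_value):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the nanopb-generated value oneof inside Payload_Metric. */
typedef struct {
    uint16_t which_value;      /* which union member is active (pb_size_t in real nanopb) */
    union {
        uint32_t int_value;    /* protobuf uint32 */
        uint64_t long_value;   /* protobuf uint64 */
        float    float_value;
        double   double_value;
        bool     boolean_value;
        /* ... string, bytes, dataset, etc. */
    } value;
} sketch_Payload_Metric;
```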

@jbrzozoski
Contributor

This is a point of confusion right now.

Historically, values of datatype UINT32 have been sent/received in the "long_value" field, which is a uint64 protobuf type. For compatibility with Ignition and most Sparkplug implementations you will need to work this way.

Unfortunately, the spec documents didn't call this out clearly enough, and some commercial implementations are starting to appear that send/receive values of datatype UINT32 in the "int_value" field, which is a uint32 protobuf type.

As far as I can tell, the historical design used "long_value" for uint32 datatypes because the code was converted from a non-protobuf, Java-based design, and a Java "long" is required to properly store a uint32 (Java has no unsigned 32-bit integer type).

You will unfortunately need to check which way the systems you plan on talking to expect to receive their UINT32 types and design your implementation to match. A flexible design would be able to receive either long_value or int_value when it was expecting a UINT32, but you'd still only be able to send one way or the other, controlled by a config option set before going online; the sketch below shows that approach.
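
A minimal sketch of that flexible design, assuming the nanopb-generated names from tahu.h (the ..._int_value_tag constant is assumed to exist per nanopb's usual naming convention, alongside the ..._long_value_tag used in tahu.c; send_uint32_as_long is a hypothetical config flag, not part of Tahu):

```c
#include <stdbool.h>
#include <stdint.h>
#include "tahu.h"

/* Hypothetical config option: set before going online to match your peers. */
static bool send_uint32_as_long = true;

/* Receive side: accept a UINT32 from either field the peer may have used. */
uint32_t read_uint32_metric(const org_eclipse_tahu_protobuf_Payload_Metric *metric)
{
    if (metric->which_value == org_eclipse_tahu_protobuf_Payload_Metric_long_value_tag)
        return (uint32_t)metric->value.long_value; /* historical / Ignition style */
    return metric->value.int_value;                /* spec-conforming style */
}

/* Send side: only one encoding can be used, selected by the config option. */
void write_uint32_metric(org_eclipse_tahu_protobuf_Payload_Metric *metric, uint32_t v)
{
    if (send_uint32_as_long) {
        metric->which_value = org_eclipse_tahu_protobuf_Payload_Metric_long_value_tag;
        metric->value.long_value = v;
    } else {
        metric->which_value = org_eclipse_tahu_protobuf_Payload_Metric_int_value_tag;
        metric->value.int_value = v;
    }
}
```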

@wes-johnson
Contributor

For the new release of the Sparkplug spec, the protobuf definition will not change. This means that at least the Java client (and, I think, the C client) is currently not conforming to the spec. I'll have a PR up shortly for the Java implementation.
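
If the C client gets the same fix, the UINT32 case quoted above would presumably end up looking something like this (a sketch only; the int_value tag name is assumed from nanopb's naming convention):

```c
case METRIC_DATA_TYPE_UINT32:
    /* Spec-conforming: a UINT32 is stored in the uint32 int_value field. */
    metric->which_value = org_eclipse_tahu_protobuf_Payload_Metric_int_value_tag;
    metric->value.int_value = *(uint32_t *)value;
    break;
```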
