
sensors: Update to the Sensors V2 decoder API #60944

Closed
tristan-google opened this issue Jul 29, 2023 · 9 comments
Labels: area: Sensors · RFC Request For Comments: want input from the community

@tristan-google (Collaborator)

Objective

Adjust the Sensors V2 decoder API to use a more natural representation of the data and make it suitable for re-use with the proposed Sensor Subsystem.

Background

The recently-merged Sensor V2 asynchronous API (referred to as SensorsV2 in this RFC) introduced a decoder API, implemented by each supported sensor, that processes a buffer of raw sensor register data received through an RTIO transaction and outputs a fixed-point sensor reading for the application to use. Data returned by the decoder API is frame-oriented. That is, samples for all channels in a frame are read out sequentially before advancing to the next frame in the buffer.

The above arrangement was optimized for cases where an application is directly reading data off a physical sensor. However, under the proposed Sensor Subsystem and CHRE, after which it is modeled, the definition of a "sensor" is broadened: producers of the data that the Subsystem distributes to consumers need not be physical sensor chips, but could also be virtual sensor data sources. One example is a lid-angle virtual sensor that streams samples from two accelerometers located in the base and screen of a laptop, computes the vector angle between the two, and reports a value in radians as its output.

The Subsystem supports “chaining” these sensor types arbitrarily (although a physical sensor can never be a consumer). In the interest of using a harmonized API across both physical and virtual sensors, we propose making two modifications to the existing SensorsV2 decoder API so that it can be adopted in both places:

  1. Transition from frame-by-frame readout of sensor data to grouped by channel. When decoding a buffer, the decoder implementation will read out all samples belonging to one channel in that buffer before advancing to the next channel. This makes it easier for the Sensor Subsystem to fan out different channels to different consumers. Furthermore, individual access to 3D samples (e.g. ACCEL_X as opposed to ACCEL_XYZ) will no longer be supported. There is significant overhead involved in maintaining this functionality, and the majority of use cases expect all 3. Applications that don’t are free to discard unneeded samples.

[Figure: decoder_read_order]

  2. Instead of returning individual samples, the decoder API will fill out a struct containing a header followed by an array of samples for that channel. This provides a more intuitive way to access the data and simplifies the API. Separate API functions for getting the shift value and timestamp are no longer needed as this data is included in the header portion of the struct.

API Changes

See struct sensor_decoder_api in sensor.h for the current version.

Keep: get_frame_count

int (*get_frame_count)(const uint8_t *buffer, uint16_t *frame_count)

This function will remain unchanged. It determines how many sample frames exist in a given RTIO buffer.

Remove: get_timestamp

This function is no longer needed and will be removed. The timestamp will be included in the struct output by decode.

Remove: get_shift

This function is no longer needed and will be removed. The shift value will be included in the struct output by decode.

Modify: decode

int (*decode)(const uint8_t *buffer, enum sensor_channel ch, sensor_frame_iterator_t *fit, uint8_t max_count, void *data_out);

As it does today, the decode function reads a buffer obtained from a sensor and extracts usable sensor samples from an internal representation. In the case of physical sensors, that is the raw register representation being translated into Q31 fixed-point values in SI units. Custom virtual sensors are free to format data as they see fit.

However, the decode function’s interface has been adjusted to better allow for channel-by-channel (vertical) reads. There is no longer a channel iterator; instead, a caller requests a specific channel through the ch parameter and a maximum number of samples to extract (max_count). The implementation will write out samples and metadata to data_out using the appropriate struct type from below. These structs have been modeled after CHRE. struct sensor_data_1d is for one-dimensional values and struct sensor_data_3d is for XYZ values. Occurrences (trigger events) use just the header (struct sensor_header), as they do not have associated sample data. Custom virtual sensors that don’t fit any of these are able to declare their own struct (the layout is up to the driver implementation and the application). After reading, the frame iterator is updated so that subsequent reads pick up where the previous one left off.

Proposed Types

The precise names of these types are not final and will be properly namespaced.

/** Header written out in all cases; includes timestamp and sample count */
struct sensor_header {
    /** The timestamp associated with the first frame. */
    uint64_t timestamp_ns;
    /** Number of data samples that follow */
    uint8_t count;
};

/** Basic sensor data struct for 1D data (e.g. temperature) */
struct sensor_data_1d {
    struct sensor_header header;
    /** The shift count for the particular channel (multiplier) */
    int8_t shift;
    /** The actual sensor data. Number of elements specified in header */
    q31_t data[1];
};

/** Used for accel, gyro, and magnetometer samples */
struct sensor_data_3d {
    struct sensor_header header;
    /** The shift count for the particular channel (multiplier) */
    int8_t shift;
    /** The actual sensor data. Number of elements specified in header */
    struct {
        q31_t x;
        q31_t y;
        q31_t z;
    } data[1];
};

Add: get_size_info

int (*get_size_info)(enum sensor_channel ch, size_t *base_size, size_t *inc_size)

This new API function helps the application determine how much memory to allocate to decode sensor data into. For a given sensor type (accel, gyro, temp, etc…), this function will report the base number of bytes needed to store the initial output struct (header plus one sample) as well as the incremental size needed for each additional sample. The return value is a status code.

A default implementation of this function will be provided that covers all of Zephyr’s standard sensor channel types, but physical or virtual sensor drivers are welcome to export their own implementation.

Impact on Existing Code

While this affects an API that is live within Zephyr, it is only used in a small number of places contributed by Google and is currently considered experimental in case the API evolves before being ready for mainstream use. Google would take care of migrating existing upstream usages if this proposal is accepted.

Alternatives Considered

The alternative is to leave the existing decoder API in place for physical sensor drivers and adopt a fresh API used by virtual/user-space sensors within the Sensor Subsystem. However, this new API would be very similar to the existing API. In the interest of re-use, we find it worthwhile to broaden the existing API while it is still in its early stages and to harmonize across all sensors, virtual and physical alike, so they are interchangeable as far as the Sensor Subsystem is concerned.

@tristan-google tristan-google added the RFC Request For Comments: want input from the community label Jul 29, 2023
yperess added a commit to yperess/zephyr that referenced this issue Aug 1, 2023
Update the decoder APIs to vertically decode the raw sensor data. This
means that instead of getting each channel on a frame by frame basis,
the API takes a channel specifier and returns as many frames of that
given channel as there's room.

The goal of this is to make the end decoded result the most useful and
usable for the Zephyr application. See zephyrproject-rtos#60944 for details.

Signed-off-by: Yuval Peress <peress@google.com>
@teburd (Collaborator) commented Aug 3, 2023

So I think there are some assumptions being made here that will cause issues for some sensors.

Consider something like an ADC with an array of channels all reporting the same meaningful data (voltage). There are many sensors like this for light, time-of-flight, sound, capacitance, and more.

Is the ability to select channel N of type X accounted for in this setup?

Can you still partially get some number of rows of a channel index and type? E.g. I fetch 5 rows, then fetch another 5, then another 5, and so on?

I still need to look at the implementation you have started @yperess, but I recall gaming out the current API quite a bit, so I think it's worth redoing that exercise here.

@yperess (Collaborator) commented Aug 4, 2023

@teburd can you clarify? (I'm sorry if I'm missing something obvious.) In the current setup the columns are channels and the rows are frames, right? So we have a frame x channel grid. The current decoder reads a frame and iterates the channels first. We found that if we care about a specific channel, this is difficult because decoding stops on the frame boundary. That is, if you just want channel X, but 3 frames of it, you have to call decode 3 times.

In your example you mentioned an array of channels reporting the same data. The current proposal is a bit out there, but I think it's a step in the right direction. We basically merge channels into sensor types (right now just for accel/gyro/mag/pos, but others can do the same):

  • ACCEL_X, ACCEL_Y, ACCEL_Z all map to ACCEL_XYZ which will decode to a struct like the sensor subsystem's sensor_three_axis_data.

In the ADC case that's up to the authors of those drivers. I'm neutral, with a slight bias toward keeping them as 1D data. In your example, let's say the ADC gives 3 frames of SENSOR_CHAN_RED, SENSOR_CHAN_GREEN, and SENSOR_CHAN_BLUE. The proposal is that you can get multiple frames of any one of these channels at a time.

  • decode(SENSOR_CHAN_RED, ...) will decode multiple frames of just the red channel

If someone wants to create a new channel that combines them, they can. For example, SENSOR_CHAN_RGB might map to a 3D struct instead of a 1D struct.

Now the catch. I think that long term, channels should start becoming sensor types. Effectively, once you map accel x/y/z to the single XYZ type and represent those values as a single struct, the individual channels aren't needed anymore and possibly could be deprecated. We've toyed with the idea of just introducing the sensor type enum and leaving channels alone, but I worry about the divergence of the two.

@teburd (Collaborator) commented Aug 4, 2023

In the current API though multiple channels can have the same type, e.g. TEMP or VOLTAGE. The type is provided back as you iterate over the channels. In theory a frame could have 10 temperature readings and you'll get back 10 samples from the frame with the type TEMP (ambient, or whatever)

By changing this to not allow for multiple channels of the same type you are going to run into issues with things like multi-channel temp, light, voltage, current, sound, tof, capacitance, etc sensors.

See #1387 and note that multiple channels of the same type need to be supported in the comment by @microbuilder. I believe that's supported by the current decoder API unless I missed that!

@yperess (Collaborator) commented Aug 4, 2023

> In the current API though multiple channels can have the same type, e.g. TEMP or VOLTAGE. The type is provided back as you iterate over the channels. In theory a frame could have 10 temperature readings and you'll get back 10 samples from the frame with the type TEMP (ambient, or whatever)
>
> By changing this to not allow for multiple channels of the same type you are going to run into issues with things like multi-channel temp, light, voltage, current, sound, tof, capacitance, etc sensors.

Can you clarify: does that mean the same frame can have 2+ instances of a single channel (for example SENSOR_CHAN_RED)? Or multiple instances of the same unit being measured? I would consider ambient temperature and die temperature separate channels with the same units. Those are fine to have on the same frame. But you can't have 2 samples of fire temperature on the same frame.

> See #1387 and note that multiple channels of the same type need to be supported in the comment by @microbuilder. I believe that's supported by the current decoder API unless I missed that!

@teburd (Collaborator) commented Aug 4, 2023

> > In the current API though multiple channels can have the same type, e.g. TEMP or VOLTAGE. The type is provided back as you iterate over the channels. In theory a frame could have 10 temperature readings and you'll get back 10 samples from the frame with the type TEMP (ambient, or whatever)
> >
> > By changing this to not allow for multiple channels of the same type you are going to run into issues with things like multi-channel temp, light, voltage, current, sound, tof, capacitance, etc sensors.
>
> Can you clarify: does that mean the same frame can have 2+ instances of a single channel (for example SENSOR_CHAN_RED)? Or multiple instances of the same unit being measured? I would consider ambient temperature and die temperature separate channels with the same units. Those are fine to have on the same frame. But you can't have 2 samples of fire temperature on the same frame.
>
> See #1387 and note that multiple channels of the same type need to be supported in the comment by @microbuilder. I believe that's supported by the current decoder API unless I missed that!

Yes exactly

The https://www.ti.com/product/FDC1004, for example, measures 4 channels of capacitance. How would I support this in Zephyr? (I had a product I was prototyping with this exact chip.)

This is just one of many sensor devices that support arrays of readings of the same type.

Today I'd have to add custom channel types CAP1, CAP2, CAP3, CAP4, etc.

This has been an issue noted and repeated many times; #13718 and #1387 both mention it. #60833 is the result of not having a channel index selector or identifier available.

The current decoder API sort of accounts for this: as you iterate over the channels in a frame, you can get back the same channel type repeatedly, and that's allowable. It might be nicer if it also returned the index.

Switching to a vertical decoder is fine, I think, but the index then likely needs to be given as a parameter.

@yperess (Collaborator) commented Aug 4, 2023

Thanks @teburd, that clarifies. So my question for those sensors is: does the index always mean the same thing, or is it like devicetree, where the ordinal is decided at compile time and may change from build to build? If it is a hard rule that CAP1, CAP2, ..., CAP_N always mean the same thing, then this is a simple extra parameter:

int (*decode)(
    const uint8_t *buffer,
    enum sensor_channel channel,
    uint8_t channel_index,  /**< New: combined with channel to form the unique column to decode */
    sensor_frame_iterator_t *fit,
    uint8_t *out
);

@teburd (Collaborator) commented Aug 4, 2023

Yeah exactly, it’s a simple extra parameter. We don’t need to do anything fancy here imo.

In effect, the tuple (device, type, type_index) is the selector.

carlescufi pushed a commit that referenced this issue Sep 25, 2023
Update the decoder APIs to vertically decode the raw sensor data. This
means that instead of getting each channel on a frame by frame basis,
the API takes a channel specifier and returns as many frames of that
given channel as there's room.

The goal of this is to make the end decoded result the most useful and
usable for the Zephyr application. See #60944 for details.

Signed-off-by: Yuval Peress <peress@google.com>
@teburd teburd added this to the v3.6.0 milestone Oct 23, 2023
@teburd teburd modified the milestones: v3.6.0, v3.5.0 Oct 23, 2023
@yperess (Collaborator) commented Oct 23, 2023

@tristan-google can we close this since #61022 merged?

@tristan-google (Collaborator, Author)

Yep, can close.
