Fixing design concerns with Sensor Subsystem (sensing) #58925
Thank you for this detailed summary.
@yperess, thank you for the detailed comments and suggestions, very good! Before addressing the concerns in this RFC, and to help others understand the context, I want to summarize and highlight the key concepts of the current design. We defined two entities (only these two) in the current design:
This can keep things simpler and clearer:
On the consumer side, the major differences are the connection types; for more details, please see https://docs.zephyrproject.org/latest/services/sensing/index.html#sensor-instance-handler. I will explain why we have this design along with the concerns in this RFC:

As described above, although we defined different entities, internally we reused a single config path. For the data path, most parts are also the same internally; there is only a small difference in the final data-receive callback APIs:

The difference comes from the two connection types, so in this case we can't use the same data event callback APIs for both.

As described above, we reused a single config/data path internally, and we use devicetree to statically configure the connections.

The Sensing Sensor API is also a public API; application developers can also use it to override existing sensors or create new ones. Devicetree is good for configuring different sensor connections without code changes, which makes it suitable for different board configurations. I will talk more on this below.
No:

```c
typedef int (*sensing_sensor_init_t)(
	const struct device *dev, const struct sensing_sensor_info *info,
	const sensing_sensor_handle_t *reporter_handles, int reporters_count);
```

Then almost every API that takes these handles ...
To be honest, although we used a unified path, only the data push at the last stage, when calling clients' data-receive callbacks, needs to check connection types, as explained above. I don't agree that this means there are two separate code paths.

The idea of making virtual sensors like any other client (application etc.), and letting virtual sensors open their reporter sensors by themselves like an application client, as in the example you mentioned:

It sounds good, but after more thought I can see the following problems:

But with the current design:

This is why we need to distinguish them.
It's true that everything inside the subsystem is a sensor, but we can point to any device node, for example a camera device or a Wi-Fi or BLE device, to create a new wrapper sensor. For example, if you want to make a vision sensor from a camera device, just create a new vision sensor wrapping it. Then, in the vision sensor's implementation, consume the camera data. All of this is by design; it's very flexible, and I don't see the need for a large refactor.

Agreed; we need to avoid unnecessary or unneeded things, and as for minimal-interval:

As per the explanation in the documentation:
The goal is standardization: I think we need to standardize sensor types. I'm happy if we can and want to define Zephyr's own standard sensor types to cover them. And in fact, as you can see, the current sensor data format is already very much like the data format defined in CHRE:

```c
/* Sensing Subsystem */
struct sensing_sensor_value_3d_q31 {
	struct sensing_sensor_value_header header;
	int8_t shift;
	struct {
		uint32_t timestamp_delta;
		union {
			q31_t v[3];
			struct {
				q31_t x;
				q31_t y;
				q31_t z;
			};
		};
	} readings[1];
};
```

```c
/* CHRE */
struct chreSensorThreeAxisData {
	/**
	 * @see chreSensorDataHeader
	 */
	struct chreSensorDataHeader header;
	struct chreSensorThreeAxisSampleData {
		/**
		 * @see chreSensorDataHeader
		 */
		uint32_t timestampDelta;
		union {
			float values[3];
			float v[3];
			struct {
				float x;
				float y;
				float z;
			};
			float bias[3];
			struct {
				float x_bias;
				float y_bias;
				float z_bias;
			};
		};
	} readings[1];
};
```

Since the two formats are so close, converting between them is straightforward.
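Given how close the two layouts are, converting between a CHRE float sample and the subsystem's q31_t values mostly comes down to the shift. Below is a minimal sketch, assuming the usual Q31-with-shift encoding (real value = q * 2^shift / 2^31); the helper names are mine, not from either API:

```c
#include <math.h>
#include <stdint.h>

typedef int32_t q31_t;

/* Assumption: real value = q * 2^shift / 2^31. The caller must pick a
 * 'shift' large enough that the value fits in [-2^shift, 2^shift).
 */
static q31_t float_to_q31(float value, int8_t shift)
{
	return (q31_t)ldexpf(value, 31 - shift);
}

static float q31_to_float(q31_t value, int8_t shift)
{
	return ldexpf((float)value, shift - 31);
}
```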
We already have this API; it is only for this specific purpose, though.

That's a good idea; we can think about it and discuss more.

I'm afraid we can't use the existing sensor API, because those calls are all discrete.

And if the purpose of ...
Hi Yuval, thank you very much for the detailed suggestions. I would like to share my thoughts on the first issues you raised.

I think we first need to align on the design concept of the sensing subsystem.

In this way, we will inevitably have two sets of APIs serving different purposes.

The figure below illustrates the concept of sensor objects (ellipse boxes) and sensor applications (rectangle boxes); I hope it helps clarify the design. Now back to the issue: I think the dual path refers to "path 1" and "path 2" in the figure above.

So, in summary, for the dual-path issue, by design:

Best Regards,
Thanks @KeHIntel for the additional explanation and thoughts. For easy tracking, here is a list of the problems with that proposal:

1. Since every connection is dynamically created, we need to either pre-allocate a memory pool (which may waste memory) or use malloc (which is not recommended and is strictly controlled in Zephyr).
2. Each sensor needs an init stage to open and configure its reporter sensors, right? A sensor can register its init callback when registering with the subsystem. In fact, to make everything work correctly, we always need to make sure the reporter sensors are initialized before their clients, so we need a sorting process such as topological sorting to produce the correct init order (see the sketch after this list). If sensors all open and configure their reporters dynamically, I don't know how the subsystem can make sure they are initialized in the right dependency order.
3. It will lose some board configurability: since the sensor code must explicitly open specific sensor nodes (hard binding), changing the reporter nodes (same sensor type but a different instance) of a client sensor on a board requires code changes instead of just a configuration change in the board's devicetree.
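A minimal sketch of that init-order sorting step (Kahn's algorithm over a static sensor table; the node layout here is illustrative, not the subsystem's actual data structure):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_SENSORS 16

/* Hypothetical node: which sensors (by index) report into this one. */
struct sensor_node {
	const uint8_t *reporters;
	size_t reporters_count;
};

/*
 * Fills 'order' so that every reporter comes before its clients.
 * Returns 0 on success, or -1 if there is a dependency cycle
 * (in which case no valid initialization order exists).
 */
static int sort_init_order(const struct sensor_node *nodes, size_t count,
			   uint8_t order[MAX_SENSORS])
{
	size_t in_degree[MAX_SENSORS] = {0};
	bool placed[MAX_SENSORS] = {false};
	size_t produced = 0;

	/* A sensor's in-degree is the number of reporters it waits on. */
	for (size_t i = 0; i < count; i++) {
		in_degree[i] = nodes[i].reporters_count;
	}

	while (produced < count) {
		bool progressed = false;

		for (size_t i = 0; i < count; i++) {
			if (placed[i] || in_degree[i] != 0) {
				continue;
			}
			/* All of i's reporters are initialized; i is next. */
			order[produced++] = (uint8_t)i;
			placed[i] = true;
			progressed = true;

			/* Each client of i now waits on one fewer reporter. */
			for (size_t j = 0; j < count; j++) {
				for (size_t k = 0; k < nodes[j].reporters_count; k++) {
					if (nodes[j].reporters[k] == i) {
						in_degree[j]--;
					}
				}
			}
		}
		if (!progressed) {
			return -1; /* cycle: no sensor is ready to init */
		}
	}
	return 0;
}
```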
@KeHIntel thank you for the details; I think you hit the nail on the head with the image, which I'm going to re-use in order to keep the conversation directed. In this diagram I would argue that not only can we converge paths 1 & 2, but we should do exactly that. In this diagram, is anyone listening to the ...

Notice that the above doesn't need to know about connections 3 & 4. As a separate example, let's assume the existing APIs and configurations: what happens when/if ...? I don't see why this is needed. I'm not saying it can't work or that it doesn't; I am saying that, from what I can see, this is more complicated than it needs to be and should be very strongly justified, ideally with a solid example or unit test that will fail without this design.
@yperess, now I understand your idea better; here are my comments:

Comments: do you mean that you want to implement each virtual sensor as a Zephyr kernel sensor device driver? Or does this just refer to physical sensors? If you are talking about virtual sensors as Zephyr kernel sensor device drivers, I really doubt the effectiveness: given that the Zephyr sensing subsystem can run in user space, why bother moving virtual sensors back into the kernel?

Comments: this looks feasible at a high level, but it could result in a lot of issues once we dive into the details:

I hope the above comments help clarify our design philosophies. We understand your preference for more flexibility in virtual sensor implementations and configurations; however, as you can see, a lot of new problems would come up. We believe our current proposal is already a good balance among configuration flexibility, ease of sensor development, simple sensor management, sensor application decoupling, and system running efficiency. So our recommendation is still to keep the high-level design unchanged.
I'm saying you can use the API without having to make them kernel level objects. The key here is that implementing a virtual sensor should feel like any other sensor driver.
In your current design, do you allocate a buffer for every single connection? Generally speaking, a pool is much more efficient, since some sensors may never need to buffer data (buffering is really only ever needed when you configure a latency/batching). If you look at the updated sensor APIs, they use the RTIO subsystem plus a memory pool, which gives a lot of flexibility. It was designed explicitly to work for the sensor subsystem.

Not quite. If you go back to the RTIO model, the sensor subsystem controls the consuming thread for the CQE. When data comes back for the sensor, it forwards the information and thus maintains the current callback's context as the current context. As such, the API doesn't actually need to include that context.

You cannot guarantee this. A sensor could start an async bus operation and then the clients close the connection; by the time the sensor has produced data, it might not have any clients. You will always need this check.
I think we need to allow a circular dependency.
Not quite; in my WIP PR #56963 you can see how this can be created.

I'm not sure I follow; arbitration really has more to do with configuration than with distributing data. Data distribution is more of a pub/sub problem, which has only one kink: the batching configuration. If batching is enabled we might want to buffer the data for longer, since posting the data is effectively like saying "the current ...". The remaining arbitration is done regardless of data generation. When a connection requests a new config, the subsystem iterates through all the current connections for the same producer and re-arbitrates.

I'm sorry if there was confusion; see my comment on bullet 5.
By definition in Zephyr, device drivers are kernel drivers. If we want to implement a virtual sensor like other Zephyr sensor device drivers but stay in user space, I am not sure how this can be done. If we need to introduce something new in Zephyr for this, won't it become another "2nd path"?
We do not allocate a buffer for every single connection; we allocate a buffer for every sensor object. The buffer is just one data struct in size, and it always holds the last data reported by that sensor. This buffer is a must-have for each sensor in the sensing subsystem. As for a memory pool, I think it does not work. For example, a new client opens a HALL sensor and wants to read the last data, generated 1 hour ago when the lid opened; in a pool-based solution, that data would already have been washed away.
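A rough sketch of that per-sensor-object slot (names are illustrative; the value type reuses the subsystem struct shown earlier in this thread):

```c
#include <stdbool.h>
#include <stdint.h>

/* One slot per sensor instance; always holds the most recent sample. */
struct sensor_last_sample {
	uint64_t timestamp_ns; /* when the sample was produced */
	bool valid;            /* false until the first report */
	struct sensing_sensor_value_3d_q31 value;
};

/* Called on every report, so a late-joining client (the HALL/lid-open
 * example) can read the last data immediately, no matter how old it is.
 */
static void sensor_store_last(struct sensor_last_sample *slot,
			      const struct sensing_sensor_value_3d_q31 *value,
			      uint64_t now_ns)
{
	slot->value = *value;
	slot->timestamp_ns = now_ns;
	slot->valid = true;
}
```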
The multi-client post problem is about two clients posting the same data type to one sensor object. For example:

In such a case, the quaternion sensor will output data at 200 Hz, with one sample from the EKF algorithm and another from the UKF algorithm.
In the sensing subsystem, all sensors (physical and virtual) are scheduled by the subsystem. If a virtual sensor has no client, the subsystem knows it at runtime and can simply not run it.

If we can handle it at the framework level, why push it to developers? After introducing the Zephyr sensing subsystem, one top priority for us is to grow the ecosystem, and to grow it as quickly as we can, we have to make developers' lives as easy as possible.

If the stillness sensor only detects stillness, your loop works; but in the real case, the stillness sensor also needs to detect motion and tell the 6DOF sensor to wake up and work. In that case you have a deadlock: the stillness sensor wants to detect motion, but it depends on the 6DOF sensor, which has already set its output to the origin because the stillness sensor told it the device is still. The common approach is: a motion-detect sensor based on accel, and 6DOF based on accel, gyro, and motion detect. No loop, no deadlock, and lower power (if you only need motion detect, there is no need to run 6DOF).

I understand there are ways other than devicetree to configure apps. My point is the unnecessary complexity: from a developer's point of view, isn't it better to have a single place (devicetree) to configure all sensors for a given platform, instead of doing it in multiple places with multiple methods?

Arbitration mainly refers to configuration. My point is that in order to get the arbitrated configuration from the sensing subsystem in an app, the app has to implement callbacks from sensing_sensor.h. Then this app becomes a special app (using both 'sensor.h' and 'sensing_sensor.h') rather than a regular sensor app (using only 'sensor.h'). In that case, isn't it still a dual path for sensor clients?
Let's discuss it in bullet 5.
@KeHIntel I trimmed the content so it's a bit easier to follow
This works in my branch; what you need to do is create a partition for the associated data so it can be accessed from user space.

Can you elaborate on this? When you say ...

If I understood correctly, in this model the client is the virtual sensor. In your example above you have 2 different algorithms (EKF and UKF); these would be 2 separate virtual sensors.

I don't think you need this extra scheduler. For 99% of cases you can just ignore it, but for the 1% you will not know at the time the data was requested. For example:

Because you shouldn't handle it in the subsystem. It's not fair to assume that loops are invalid, because your clients can always work around you by opening a dynamic connection and using it to generate data. I think it's fine to tell developers that they cannot loop 1:1 data-producing virtual sensors, as it feels like an extreme edge case.

Right, this was just an oversimplified example to show a loop that is valid (not necessarily optimized for anything). I'm not arguing that it's impossible to do this without a loop, but that there's no reason to prevent these loops, since you can't predict what developers will want to do.

Yes, but if you look at everything in Zephyr, devicetree is just a tool, not a requirement. You could create a sensor driver using the APIs directly.
I'm sorry, you lost me there with the use of the word ... Arbitration, in my mind, looks roughly like this:

```cpp
// Assume we got 'handle' as an argument which is a void* to the connection info
// Create an empty config to represent the arbitrated final config
struct config union_config = {0};
// Create a filter function to only look at open connections to the handle
auto is_connected = [handle](const struct connection &c) { return c.info == handle; };
// Loop through and arbitrate
for (auto &c : connections | views::filter(is_connected)) {
	arbitrate(&union_config, c.config);
}
```
@yperess it was stated all along that the subsystem would provide not just arbitration and pub/sub-like functionality, but a scheduling order over the nodes as well. Referencing the above diagram: in the current design, the data from physical sensor 1 is given to virtual sensor 0 before virtual sensor 1, by design. With arbitrary pub/sub this is lost.

Let's assume topological ordering and consider the following sort of diagram. In this scenario, the bank selector is run before the banks themselves are given the sensor data, as guaranteed by the scheduling order as I understand the subsystem today. This means that filters in the filter bank that are not selected can skip processing entirely, saving time, and data does not need to be kept in filter banks that are not currently used, saving memory. With generalized pub/sub, no guarantee is provided that the bank selector will be updated before the sensor data is given to the filters.

Suppose the bank selector updates at 1 Hz while the sensor updates at 100 Hz. This still works: no timestamp comparisons or quirky logic are needed. Each bank looks at its last selector update as its sole guidance on whether or not to do anything on each sensor update; a sketch of that logic follows below.

Now let's think about doing this same sort of thing with arbitrary pub/sub where the ordering is not guaranteed. To me at least, it looks like the logic involved would be more complex to deal with this sort of setup without topological ordering.
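A minimal sketch of that per-bank logic, assuming guaranteed selector-before-bank ordering (the types and names here are illustrative):

```c
#include <stdint.h>

/* With ordering guaranteed, the last selector event is the only state
 * a bank needs: no timestamps, no rate bookkeeping.
 */
struct filter_bank {
	uint8_t id;          /* this bank's identity */
	uint8_t selected_id; /* written by bank-selector events */
};

static void bank_on_selector_event(struct filter_bank *bank, uint8_t selected)
{
	bank->selected_id = selected;
}

static void bank_on_sensor_data(struct filter_bank *bank, const void *sample)
{
	if (bank->selected_id != bank->id) {
		return; /* not selected: skip processing, keep no data */
	}
	/* ... run this bank's filter on 'sample' ... */
}
```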
I think you may have hit on an interesting point here, and we might need to take a step back to define arbitration. In CHRE, and from my perspective, arbitration means that you set some threshold but not an exact requirement. For example:

In that sense, the bank selector and the filter banks will get the same sample rates from the sensor. If we don't arbitrate this way, we risk overcomplicating the system and also making a lot of assumptions on behalf of the consumer. In the above, do we report to client 1 every other sample? What if they configured a latency of 500 ms? Do we buffer 100 samples, then copy every other one to the final data? Or maybe just the tail 50? The general idea from other sensor frameworks is to over-deliver: client 1 will get 200 Hz of data and can choose what to use.
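A minimal sketch of this "over-deliver" style of arbitration, assuming illustrative request fields (the fastest requested rate and the tightest requested latency win, and every client gets the result):

```c
#include <stddef.h>
#include <stdint.h>

/* One client's requested configuration (field names are illustrative). */
struct client_request {
	uint32_t sample_rate_mhz; /* requested rate, in milli-Hz */
	uint64_t max_latency_ns;  /* requested maximum report latency */
};

/* Threshold arbitration: run at max(rate) and min(latency); clients are
 * over-delivered and pick the samples they actually want.
 */
static void arbitrate_requests(struct client_request *result,
			       const struct client_request *requests,
			       size_t count)
{
	result->sample_rate_mhz = 0;
	result->max_latency_ns = UINT64_MAX;

	for (size_t i = 0; i < count; i++) {
		if (requests[i].sample_rate_mhz > result->sample_rate_mhz) {
			result->sample_rate_mhz = requests[i].sample_rate_mhz;
		}
		if (requests[i].max_latency_ns < result->max_latency_ns) {
			result->max_latency_ns = requests[i].max_latency_ns;
		}
	}
}
```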
This can still be done with pub/sub. When the bank selector produces a selection event, the bank filters that aren't selected can reduce their sample rate to 0, which means they will no longer get data.

You're right, ordering is not guaranteed, but it does give you more flexibility and reduces complexity. Going back to the DAG you provided: if bank 0 needs samples at 100 Hz, bank 1 needs 50 Hz, and the selector 1 Hz, what does the scheduling look like? It's much simpler to say everyone gets 100 Hz, and you can filter and take whatever samples you want from that. This is even more true when batching is involved.

Let's assume we agree that everyone above gets 100 Hz; the selector gets the sample first and only outputs data 1/100th of the time. When the selector produces data, I'm assuming we'll publish that data before continuing to propagate the sensor data (so it's depth-first, where the newest data is sent before the existing event finishes propagating). Such a depth-first pub is more expensive, since memory has to be kept for longer. I would argue that if events come out of order in the simple pub/sub model, where the filter banks get the sensor data before the selector switches, then you have 1/100 samples that are bad. If this is time critical, you could easily work around it by having the data come through the selector, which would re-publish the selection and the data. I know this is getting into the weeds a bit, but I believe that keeping this simple and not adding a scheduler to the subsystem will make it much more maintainable as well as accessible with lower requirements.
I think you might have overdone it a bit here on the point I was trying to hit home. The bank selector's data rate can be independent of the sensor's; we cannot assume the bank selector will provide an update every time there is a sensor update. Without the topological sort ordering, we don't know which will come first, or whether we need to wait on one or the other to make a decision, so we cannot simply use timestamps to infer the latest selection without added logic. With topological sort and ordering we don't have to worry about that at all: the bank select will always be updated, if needed, before the next sample is given to the filter bank nodes, and no additional logic or work is required to get this behavior. Without the topological sort there will absolutely need to be additional logic to deal with this, potentially wasting resources: not just hardware resources, but also human resources spent debugging and diagnosing ordering problems.
@yperess the discussion has gone pretty long, so let me trim and summarize a little bit.

I do not doubt that you can port sensor_driver_api to user space. My point is that once it is moved to user space, it becomes something new rather than the standard Zephyr device driver. If it is neither a standard Zephyr app nor a standard Zephyr device driver, I doubt the value of introducing it to Zephyr.

Each sensor instance, not each sensor type; and yes, it holds the last sample of the sensor, to handle the "HALL sensor data read 1 hour later" case I mentioned above. When virtual sensors become dynamically created, I don't think we can use a pool to solve this problem. Batching is another topic; for batching we will need a dedicated FIFO memory pool, and that part is expected to be similar to RTIO.

Your understanding is correct, and this can be resolved if the EKF and UKF algorithms post data to 2 separate sensor instances.

No extra scheduler is needed: the sensing subsystem is single-threaded, and it just scans the sensors according to the dependency order from the sensor tree, selectively calling the sensors' callbacks to manage their execution. In this way, sensors with no client do not run, unnecessary processing is avoided, and power is better optimized.
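A minimal sketch of that single-threaded scan (types and names are illustrative, not the actual subsystem code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical runtime node; 'execute' runs the sensor's processing. */
struct scheduled_sensor {
	size_t client_count; /* open connections to this sensor */
	bool has_new_input;  /* a reporter produced data this pass */
	void (*execute)(struct scheduled_sensor *self);
};

/* 'sorted' is already in reporter-before-client order, so by the time a
 * sensor runs, all of its inputs from this pass are fresh.
 */
static void sensing_runtime_scan(struct scheduled_sensor *sorted, size_t count)
{
	for (size_t i = 0; i < count; i++) {
		struct scheduled_sensor *sensor = &sorted[i];

		if (sensor->client_count == 0) {
			continue; /* no clients: don't run it, save power */
		}
		if (sensor->has_new_input) {
			sensor->execute(sensor);
			sensor->has_new_input = false;
		}
	}
}
```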
The stillness sensor example actually proves my point that allowing loops in sensor dependencies can easily lead to deadlock: if we upgrade the stillness sensor to a motion-detect sensor, a deadlock happens.

I am confused: do you mean that you actually prefer dual methods for sensor configuration, while you hate having a dual path for sensor data streaming?

On arbitration, I agree with what Tom commented above.
Summary
I think @KeHIntel gives a very good and clear summary in #58925 (comment). On item 0, device drivers in user space and why we did not use the existing zephyr sensor_driver_api, the key of the design is:

This design keeps very clear boundaries and interfaces between the different worlds. Just as @KeHIntel mentioned above, sensor_driver_api (D in the diagram) is defined and designed for device drivers, even though we call all of them sensors.

And for the definition: I know this definition from some applications (Windows applications etc.), but I prefer the definition below, especially for embedded systems:

Since this is common logic, it doesn't make sense for each sensor to do it by itself. Just as @teburd mentioned above, letting each sensor do this wastes processing resources and is not efficient, and it's hard for sensors to do by themselves, since they don't know the real data report rate delivered to them; comparing sample timestamps seems not good. I also agree with @teburd:

Finally, I strongly agree with @KeHIntel in #58925 (comment):
@teburd sorry for the late reply on this; context and answers below.

Number (4) above is where it gets a bit fuzzy, because the driver implementation is currently expected to have the sensor data in the right format. My first iteration of the low-level sensor drivers actually mirrored what we wanted for the subsystem, which I'm thinking might still be a good idea and is worth exploring. Basically, the decoder wouldn't decode just 1 frame at a time:

```c
struct sensor_data_header {
	uint64_t timestamp_ns;
	uint16_t num_readings;
};

struct sensor_three_axis_data {
	struct sensor_data_header header;
	int8_t shift;
	struct {
		union {
			q31_t values[3];
			struct {
				q31_t x;
				q31_t y;
				q31_t z;
			};
		};
	} readings[1];
};

struct sensor_decoder_api {
	int (*get_reading_count)(
		const void *data,
		enum sensor_channel channel,
		uint16_t *num_readings
	);
	/* Cast 'out' to the right struct based on the channel being decoded.
	 * For example, if the channel is SENSOR_CHAN_ACCEL_XYZ then 'out' should
	 * be 'struct sensor_three_axis_data'.
	 *
	 * This function will read the 'out' value's header to see how much space
	 * is allocated, then it'll start at the frame offset 'fit' and start
	 * decoding all the data of the appropriate type. If the data is FIFO data
	 * interleaved ACCEL/GYRO, the channel is ACCEL_XYZ, and the 'num_readings'
	 * is 3, the decoder is expected to decode the next 3 accelerometer
	 * readings from 'data' and move 'fit' accordingly. This does mean that a
	 * separate frame iterator is needed for gyro decoding in this model.
	 */
	int (*decode)(
		const void *data,
		sensor_decoder_fit *fit,
		enum sensor_channel channel,
		void *out
	);
};
```
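To make the intended flow concrete, here is a rough consumer-side sketch of how this decoder API might be driven (the exact 'sensor_decoder_fit' type and its zero-initialization are assumptions, since they aren't pinned down above):

```c
static int read_accel_frames(const struct sensor_decoder_api *decoder,
			     const void *raw_data)
{
	uint16_t count;
	sensor_decoder_fit fit = {0}; /* assumed zero-initializable */
	struct sensor_three_axis_data out;
	int rc;

	rc = decoder->get_reading_count(raw_data, SENSOR_CHAN_ACCEL_XYZ, &count);
	if (rc != 0) {
		return rc;
	}

	/* 'out' has room for exactly one reading per call. */
	out.header.num_readings = 1;
	while (count-- > 0) {
		/* Decodes the next accel frame and advances 'fit'. */
		rc = decoder->decode(raw_data, &fit, SENSOR_CHAN_ACCEL_XYZ, &out);
		if (rc != 0) {
			return rc;
		}
		/* ... use out.readings[0].x/y/z, scaled by out.shift ... */
	}
	return 0;
}
```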
Hi @yperess, I have created PR #60331 to elaborate on how the current sensor API would work with the subsystem, based on the existing sensor device drivers.

See my comments in the PR; the way you set up the pipe is unlikely to work, especially when streaming data from a hardware FIFO.
@yperess I think it would be really beneficial to see something definitive in the form of an RFC PR. I believe you have quite a lot done on an ongoing branch, but nothing to try, tinker with, and evaluate in a way that would be helpful for the ongoing discussion. Secondly, I still strongly feel that the topological ordering provided by the sensing subsystem as it is provides a lot of value; I've stated my sample scenario clearly enough, I believe. To reiterate my stance, though: timestamp comparisons and interpreting sample rates and last-seen events seem like a complex way to deal with such ordering issues, in my opinion. Lastly, I'm not quite sure what is meant by sensor types not being 1:1 CHRE-aligned, or what that has to do with a Zephyr sensing subsystem. Can you please elaborate on what exactly is meant by this and why it's a blocking issue for a Zephyr sensing subsystem?
@teburd I have #56963 right now, which holds all of my WIP for sensors. It runs and you can play around with it on the TDK board. For the topological order, I had a chat with @keith-zephyr about this, and while I don't personally see a strong value, I'm 100% open to that feature being behind a Kconfig. This way, when a new connection is opened or closed, we would sort the connection list (or do something to alter the order of data forwarding); I don't think my proposal precludes it. The same is true for circular dependencies, where the check could cause a runtime failure behind a Kconfig.
Hello @yperess, I have answered your comments on #60331. I am afraid your statement below is inaccurate: #60331 can work with the BMI160 and LSM6DSO using the existing accel/gyro sensor device drivers with no changes today. The remaining one, the icm42688, will need a simple upgrade to implement soft data_rdy bits in the driver.

On FIFO: given that no existing Zephyr sensor device driver supports FIFO today, the PR does not cover the FIFO case for now.
Can you explain? If you have 3 separate devices for accel/gyro/die_temp ... While the FIFO APIs aren't merged yet, they're very close. Regardless, once the accel device in the current subsystem reads the FIFO, the gyro instance will starve. It's much easier to use a single device instance mapped to 3 info structs, like I have in my branch.
Sorry, you are right. I checked the BMI160 code again; something like this would be needed:

```diff
--- a/drivers/sensor/bmi160/bmi160.c
+++ b/drivers/sensor/bmi160/bmi160.c
@@ -736,8 +736,26 @@ static int bmi160_sample_fetch(const struct device *dev,
 		}
 	}
 
-	if (bmi160_read(dev, BMI160_SAMPLE_BURST_READ_ADDR, data->sample.raw,
-			BMI160_BUF_SIZE) < 0) {
+	uint8_t reg, size;
+	switch (chan) {
+	case SENSOR_CHAN_ACCEL_XYZ:
+		reg = BMI160_REG_DATA_ACC_X;
+		size = BMI160_AXES * sizeof(uint16_t);
+		break;
+	case SENSOR_CHAN_GYRO_XYZ:
+		reg = BMI160_REG_DATA_GYR_X;
+		size = BMI160_AXES * sizeof(uint16_t);
+		break;
+	case SENSOR_CHAN_ALL:
+		reg = BMI160_REG_DATA_GYR_X;
+		size = 2 * BMI160_AXES * sizeof(uint16_t);
+		break;
+	default:
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (bmi160_read(dev, reg, data->sample.raw, size) < 0) {
 		ret = -EIO;
 		goto out;
 	}
```
About the soft data_rdy bits, here is a sketch (pseudocode; the register reads are schematic):

```c
struct xxx_data {
	uint8_t status; // soft data-ready bits
};

static int xxx_sample_fetch(const struct device *dev,
			    enum sensor_channel chan)
{
	struct xxx_data *data = dev->data;
	uint8_t status, mask;
	int ret = 0;

	switch (chan) {
	case SENSOR_CHAN_ACCEL_XYZ:
		mask = ACC_DRDY_MASK;
		break;
	case SENSOR_CHAN_GYRO_XYZ:
		mask = GYRO_DRDY_MASK;
		break;
	case SENSOR_CHAN_ALL:
		mask = GYRO_DRDY_MASK | ACC_DRDY_MASK;
		break;
	default:
		ret = -EINVAL;
		goto out;
	}

	// Re-read the hardware status if the requested bits aren't set yet.
	if ((data->status & mask) != mask) {
		i2c_read(dev, REG_STATUS, &status);
		data->status |= status;
	}
	if ((data->status & mask) != mask) {
		ret = -EIO;
		goto out;
	}

	// fetch acc/gyro data by chan
	...

	data->status &= ~mask; // clear the drdy bits
out:
	return ret;
}
```
I'm still studying these patches. After that, I'll check how to enhance #60331 to support FIFO.
An interesting statistic on the discussion of HID vs CHRE constants: Android (which uses CHRE) has by far the biggest market share: https://gs.statcounter.com/os-market-share#monthly-202111-202303
Understood that there are more Android devices than others, which is not a surprise, but I want to point out that a great portion of Android devices do not have CHRE enabled. Android adoption != CHRE adoption, and CHRE is not an industry standard today.
Isn't Apple using HID as well? https://developer.apple.com/documentation/hiddriverkit/3201495-sensors, so in this case ...
I believe these issues have been addressed in #64478; please verify!
This was addressed a long time ago; reopen if anything remains.
Introduction
Note: This RFC is used to document the concerns with #55389 and is done in the context of #36963. It was asked that the PR be allowed to progress, with concerns addressed in follow-up PRs.

Several changes are needed to the first PR for the sensor subsystem, which I believe are required for a successful sensor framework. My definitions of success are (since Chromebooks will be the first clients):
Problem description
My current concerns surround the design of the system and are summarized here for a quick TL;DR:
Proposed change
When used, and a new producer is registered with the subsystem, a node is provided along with metadata about the producer. This includes information such as the sensor type and capabilities. Each open connection is used for arbitration. The metadata for the connection contains the requested attributes along with the callbacks. When a connection is closed, we can then look at the remaining open connections to the producer and re-arbitrate (the same will be done when the configuration changes).
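A minimal sketch of the metadata involved (all names here are illustrative, not an existing API):

```c
#include <stdint.h>

/* What a producer advertises when it registers with the subsystem. */
struct producer_metadata {
	int32_t type;          /* sensor type, e.g. accel or gyro */
	uint32_t max_rate_mhz; /* capability, in milli-Hz */
};

/* Per-connection record: everything needed for (re-)arbitration. */
struct connection {
	const struct producer_metadata *info; /* producer it is open to */
	uint32_t requested_rate_mhz;          /* requested attributes */
	uint64_t requested_latency_ns;
	void (*on_data)(const void *sample, void *userdata); /* callbacks */
	void *userdata;
};
```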
Have nodes in devicetree be metadata about the physical sensors OR be virtual sensors
These nodes for physical sensors would be much easier to manage and arbitrate if a single node is used and registers itself as a producer of 2 types (accel/gyro). This means that the rotation matrix and other augmenting properties can be provided in the same place, since they affect the data produced by the same sensor.
Sensors should use the existing sensor API. It already provides attribute get/set capabilities as well as getting samples (including streaming soon). There's no reason to add a new API.
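For reference, the existing API already expresses the configure-and-read flow; a minimal sketch (the device handle is assumed to come from devicetree or device_get_binding()):

```c
#include <zephyr/drivers/sensor.h>

static int sample_accel(const struct device *accel)
{
	struct sensor_value odr = { .val1 = 100, .val2 = 0 }; /* 100 Hz */
	struct sensor_value xyz[3];
	int rc;

	/* Attribute set/get already exists in the sensor API. */
	rc = sensor_attr_set(accel, SENSOR_CHAN_ACCEL_XYZ,
			     SENSOR_ATTR_SAMPLING_FREQUENCY, &odr);
	if (rc != 0) {
		return rc;
	}

	/* As does fetching and reading a sample. */
	rc = sensor_sample_fetch(accel);
	if (rc != 0) {
		return rc;
	}
	return sensor_channel_get(accel, SENSOR_CHAN_ACCEL_XYZ, xyz);
}
```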
Detailed RFC
The following highlights my concerns with the current design.
Dual code paths
The sensing subsystem contains a separate code path for virtual sensors instantiated via devicetree vs. clients of the subsystem, though both are treated the same if we look at the subsystem as a black box. For example:

This is an issue for both memory footprint and passing CTS, as it overcomplicates what should be a simple system of arbitration and pub/sub. The current design of the subsystem means that if the same developer wants to convert this algorithm to an official virtual sensor, they must use the sensing-internal sensor API instead of the one for clients. They need to register reporters in devicetree, and the connections to the reporters will automatically be opened by the subsystem. Next, if a virtual sensor wants to dynamically get more information from another sensor, it has to go through the "normal" client setup of opening a connection, because the previously mentioned flow is only supported for statically registered dependencies. This means that the sensor now needs to support both workflows AND the subsystem needs to include both code paths.

To be clear: I don't see a good reason for Zephyr's sensor subsystem to support 2 separate code paths that do EXACTLY the same thing. Virtual sensors need to be clients of the sensor subsystem just like any other client. The only "special" treatment they should get is that they need to register as data producers with the subsystem, be able to push data into the subsystem, and have the subsystem open connections to the virtual sensor. A sketch of this flow is below.
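As a rough sketch of what I mean (the subsystem_* calls below are placeholders for illustration, not an existing API): a virtual sensor registers once as a producer, opens its inputs exactly like an application client, and pushes its output back into the subsystem from its data callback.

```c
#include <stddef.h>

/* Placeholder subsystem calls, for illustration only. */
extern void *subsystem_register_producer(const void *metadata);
extern int subsystem_open(const void *info,
			  void (*cb)(const void *sample, void *userdata),
			  void *userdata);
extern void subsystem_push_data(void *producer, const void *data, size_t len);

struct my_fusion {
	void *producer;         /* handle from producer registration */
	const void *accel_info; /* the reporter this sensor consumes */
	const void *metadata;   /* type/capabilities advertised to clients */
	float output[4];        /* e.g. a fused quaternion */
};

static void on_accel_data(const void *sample, void *userdata)
{
	struct my_fusion *fusion = userdata;

	/* ... run the fusion algorithm on 'sample' ... */
	subsystem_push_data(fusion->producer, fusion->output,
			    sizeof(fusion->output));
}

static int my_fusion_init(struct my_fusion *fusion)
{
	/* The only "special" step: register as a data producer. */
	fusion->producer = subsystem_register_producer(fusion->metadata);
	/* Consume input exactly like any application client would. */
	return subsystem_open(fusion->accel_info, on_accel_data, fusion);
}
```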
Devicetree bindings
The current layout of the subsystem is such that every child node of the subsystem is a sensor. This restriction is not necessary and makes it difficult to extend the subsystem; if we ever want to include nodes which are not sensors, it'll require a large refactor.

With the above class diagram, I believe that a binding can be created that will automatically populate a sensing_sensor_producer struct. Each property should have an explicit need. For example, with the current PR, minimal-interval is introduced and is either unnecessary or not needed (I can't tell, because it's not really documented); I don't see a need for it, and it should be removed until we have a strong reason for adding it.

Prior versions of the PR included a rotation matrix in the child sensor nodes. I want to make sure this doesn't come back in (it was removed in recent patches). The rotation matrix is a property of the physical device; it's error-prone to have it in the subsystem's sensor node, because these nodes are per type (1 node for accel and 1 node for gyro, even if they both point to the same physical device). I'm actually not even convinced that having these separate nodes will work well, since configuring one means possibly also configuring the other, so I will need to see how arbitration will work (for example, if we need to update the GPIO behavior of an accel/gyro chip, having 1 node for accel and 1 node for gyro might mess up the arbitration). I would like to see a design proposal for how this will be done.
This leads me to the final obvious point: the virtual sensors don't need to be children of the subsystem if they use an iterable section to create the nodes in C. If we guarantee that there's always at most 1 instance of the subsystem, this is already handled by the compatible string for those nodes, which will know to create the structs via a call to SENSING_SENSOR_DT_DECLARE() or something similar.
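For illustration, Zephyr's iterable sections can already provide this kind of registration without devicetree children; a rough sketch (the entry struct is a placeholder for whatever SENSING_SENSOR_DT_DECLARE() would generate, and the matching linker-section snippet is assumed to be in place):

```c
#include <stdint.h>
#include <zephyr/sys/iterable_sections.h>

/* Placeholder entry describing one virtual sensor. */
struct sensing_sensor_entry {
	const char *name;
	int32_t type;
};

/* Declared from any file; the linker collects all entries into one section. */
STRUCT_SECTION_ITERABLE(sensing_sensor_entry, hinge_angle_entry) = {
	.name = "hinge_angle",
	.type = 1, /* illustrative type id */
};

static void sensing_enumerate(void)
{
	/* The subsystem walks every registered entry at init time. */
	STRUCT_SECTION_FOREACH(sensing_sensor_entry, entry) {
		/* ... create the runtime node for 'entry' ... */
	}
}
```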
CHRE sensor types
This is a simple matter of where things are used. CHRE is an official module of Zephyr and will be running on the same compute core as the sensor subsystem; HID is only used when sending data off chip. This means that our options are:

This seems like a no-brainer to me. I'll also point out that, as per CTS, there exists a client (I can't say which) that said they'll switch to Zephyr from FreeRTOS for their Android phones when we get this all working with CHRE. This further makes me think that the extra translation is a waste of cycles and/or a lookup table for every single event.
Proposed change (Detailed)
The key to this proposal is leveraging the existing constructs. Virtual sensors are built like normal sensors (using the same APIs). Virtual sensors that consume information from other sensors are treated the same as the application.
Demo 1: webm
In this demo:
Demo 2: webm
In this demo:
DISCLAIMER: the data for the angle sensor is actually just the dot product. I need to add more features to the DSP library for the angle calculation to fully work
Proposed structure
Below are diagrams of the structure that's being proposed (used in the demo videos)
- struct sensor_driver_api for virtual sensors
- struct sensor_info for the common information about sensors
- the sensing_sensor_info struct it's connected to

Alternatives
Do nothing?