(From #21667, opening this as an issue to discuss it separately without interfering with that PR more than it already has)
It seems that, for most platforms, we get the time from ISRs or hardware clocks, and we (or the platform) integer-divide the raw tick count to get the number of milliseconds the clock is currently at.
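For context, here is a minimal sketch of the AVR-style pattern being described; the `timer_read_ms` name and the TIMER0 configuration are illustrative assumptions, not QMK's actual implementation. Whether the platform divides a raw tick count or has the ISR increment a millisecond counter directly, the effect is the same: sub-millisecond progress is truncated away.

```c
#include <avr/interrupt.h>
#include <util/atomic.h>
#include <stdint.h>

static volatile uint32_t ms_count;  // whole milliseconds since boot

// Assumed setup: 16 MHz clock, /64 prescaler, compare match every 250 ticks,
// so this ISR fires once per millisecond.
ISR(TIMER0_COMPA_vect) {
    ms_count++;
}

uint32_t timer_read_ms(void) {
    uint32_t ms;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {  // 32-bit reads are not atomic on AVR
        ms = ms_count;
    }
    // Up to 249 ticks (~0.996 ms) of progress toward the next interrupt is
    // invisible here: the reading is truncated to whole milliseconds.
    return ms;
}
```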
Using AVR as an example, and considering the use of prescalers and integer division, we have a timing problem: the current checks are basically (in all algorithms) `timer_elapsed >= DEBOUNCE`, and that `>=` can become true anywhere from `DEBOUNCE - 1` milliseconds plus one prescaled clock tick onwards, not at `DEBOUNCE` milliseconds or greater.
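Concretely, the pattern in question looks roughly like this; this is a simplified sketch built on QMK's `timer_read()`/`timer_elapsed()` millisecond API, and the surrounding function names are illustrative, not the actual debounce code:

```c
#include "timer.h"  // QMK millisecond timer: timer_read(), timer_elapsed()
#include <stdbool.h>
#include <stdint.h>

static bool     debouncing;
static uint16_t debounce_started;  // truncated milliseconds

void on_raw_matrix_change(void) {  // illustrative name
    debouncing       = true;
    debounce_started = timer_read();
}

void debounce_task(void) {  // illustrative name
    // The comparison in question: >= fires as soon as the millisecond
    // counter has advanced by DEBOUNCE, which can happen after as little
    // as (DEBOUNCE - 1) ms plus one prescaled tick of real time.
    if (debouncing && timer_elapsed(debounce_started) >= DEBOUNCE) {
        debouncing = false;
        // commit the debounced state to the matrix here
    }
}
```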
As an example, take a frequency of 16 MHz and a prescaler of 64, giving 1 interrupt every 1 ms (i.e., every 250 prescaled timer ticks, not CPU cycles), and a `DEBOUNCE` of 5:
During debounce, we read (and store) the timer at tick `10 * 250 + 230`, getting a value of 10, since integer division truncates the extra 230 ticks.
On a later matrix scan, still during debounce, we read the timer at exactly tick `15 * 250 + 10`, so we get 15 back.
We consider `15 >= 10 + 5` satisfied, and send the matrix update.
The total debouncing time was thus 4.12 ms (`4 * 250 + 30 = 1030` ticks at 250 ticks per ms), not 5 ms.
We finished debouncing in less than 5 ms here. Currently, our approach only guarantees that the debounce takes strictly more than `DEBOUNCE - 1` ms.
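The truncation in this example can be reproduced standalone; this is plain C using only the tick values from above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const uint32_t TICKS_PER_MS = 250;  // 16 MHz / 64 prescaler
    uint32_t first  = 10 * 250 + 230;   // 2730 ticks -> reads as 10 ms
    uint32_t second = 15 * 250 + 10;    // 3760 ticks -> reads as 15 ms

    // What the millisecond counter claims elapsed: 15 - 10 = 5 ms.
    printf("counter says: %lu ms\n",
           (unsigned long)(second / TICKS_PER_MS - first / TICKS_PER_MS));
    // What actually elapsed: 1030 ticks / 250 ticks per ms = 4.12 ms.
    printf("real time:    %.2f ms\n", (double)(second - first) / TICKS_PER_MS);
    return 0;
}
```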
So we have two choices here:
Move the checks back to `>` (see the first option in the sketch after this list)
Let go of the timer being in ms and count in either µs or ns instead, bit-shifting the value obtained from the interrupts if necessary; the check then becomes more precise, since we'll have the actual time (see the second option in the sketch below). 32 bits should be enough for the purposes of debouncing: even at ns precision, we're talking about 4 seconds before rollover.
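For illustration, here is a hedged sketch of both options; `TICKS_PER_MS`, `debounce_expired_ms`, and `debounce_expired_ticks` are hypothetical names, not existing QMK APIs:

```c
#include <stdbool.h>
#include <stdint.h>

#define DEBOUNCE     5     // ms, as in QMK's config.h
#define TICKS_PER_MS 250u  // hypothetical: 16 MHz / 64 prescaler from the example

// Option 1: strictly-greater comparison. With truncated millisecond
// readings, elapsed > DEBOUNCE guarantees at least DEBOUNCE ms of real time.
bool debounce_expired_ms(uint16_t elapsed_ms) {
    return elapsed_ms > DEBOUNCE;
}

// Option 2: keep the timer in raw ticks (or us/ns) and convert DEBOUNCE
// instead, so no precision is lost to integer division. Unsigned
// subtraction handles wraparound; even at ns resolution, 32 bits still
// covers about 4 seconds.
bool debounce_expired_ticks(uint32_t now_ticks, uint32_t start_ticks) {
    return (now_ticks - start_ticks) >= (uint32_t)DEBOUNCE * TICKS_PER_MS;
}
```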
Originally posted by @andrebrait in #21667 (comment)