
Define event order relative to RAF? #9

Closed · RByers opened this issue Apr 17, 2015 · 19 comments

@RByers (Contributor) commented Apr 17, 2015

Some developers / frameworks prefer to operate on all touch points at once (in the style of the TouchEvents TouchList API). I'd like to be able to recommend a simple library for doing this conversion. Unfortunately, today it would probably depend on a particular browser implementation detail for great smoothness / performance - namely that pointer events are dispatched (for all touch points) shortly before requestAnimationFrame callbacks. A library could then queue the events for each point and construct its multi-touch data structures at RAF time without worrying about losing frame synchronization of input.
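
A minimal sketch of what such a conversion library could look like, assuming that implementation detail holds (events for all active pointers arriving shortly before each rAF); all names here are illustrative:

```ts
// Illustrative sketch: queue pointer events per pointerId, then assemble a
// TouchList-style snapshot once per frame. Assumes events for all active
// pointers arrive shortly before each rAF (an implementation detail).
const queues = new Map<number, PointerEvent[]>();

function enqueue(e: PointerEvent) {
  const q = queues.get(e.pointerId) ?? [];
  q.push(e);
  queues.set(e.pointerId, q);
}

for (const type of ["pointerdown", "pointermove", "pointerup", "pointercancel"]) {
  window.addEventListener(type, enqueue as EventListener);
}

function onFrame() {
  // Build the multi-touch structure from everything queued during this frame.
  const frameTouches = [...queues.entries()].map(([id, events]) => ({
    id,
    samples: events.map(ev => ({ x: ev.clientX, y: ev.clientY })),
  }));
  queues.clear();
  if (frameTouches.length > 0) {
    // ...hand frameTouches to code that wants all touch points at once...
  }
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```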

Should this be in-scope for the spec?

RByers self-assigned this Apr 17, 2015
@scottgonzalez (Member):

I think it's reasonable to include this in the spec.

@jacobrossi (Member):

We don't throttle our pointer events today, so they get put in our queue as soon as the device driver and OS send them to us, and they get dispatched when the message loop is pumped. Our touch events, however, are throttled to the vsync (like RAF).

The difference here is a tradeoff between performance and fineness of detail. A painting app might prefer more events per frame for finer detail. A game just wants the latest data at the time of the frame. Given you can throttle your expensive handling yourself (using a library such as you're describing), I prefer the former behavior. That said, I think allowing flexibility here on this tradeoff in implementations is desirable.

In the scenario of this library, don't you just want to have the latest data no matter when it fired?

Consider our PE implementation (I'll call it "JIT events" :-)). We wouldn't want a requirement to guarantee firing events for all pointers before the next frame because that could mean arbitrarily refiring events with the same data just to meet the requirement. The library should IMO just handle all the events and update pointers accordingly. Then when RAF comes around, it has the freshest data possible. Whether all pointers got an event in that frame doesn't much matter.
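
A rough sketch of that "just keep the freshest data" approach (illustrative only, not spec'd behavior): handle every event as it arrives, and let the rAF callback read whatever is latest.

```ts
// Sketch of the "keep the freshest data" approach: no assumption that every
// pointer fires an event before each frame; rAF just reads whatever is latest.
const latest = new Map<number, { x: number; y: number }>();

window.addEventListener("pointermove", e => {
  latest.set(e.pointerId, { x: e.clientX, y: e.clientY });
});
window.addEventListener("pointerup", e => latest.delete(e.pointerId));
window.addEventListener("pointercancel", e => latest.delete(e.pointerId));

function onFrame() {
  // Use the freshest known position of every active pointer, regardless of
  // whether it happened to receive an event during this particular frame.
  const points = [...latest].map(([id, pos]) => ({ id, ...pos }));
  if (points.length > 0) {
    // ...do the per-frame work with `points` here...
  }
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```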

@RByers (Contributor, Author) commented Jul 9, 2015

Interesting, thanks.

For maximum smoothness we've found it valuable to align input with vsync, using interpolated positions. You can see this in practice on this demo page. Without this, even a 60 Hz touchscreen can trigger occasional jank when scrolling/dragging, due to aliasing effects (as the relative phase between input and output drifts).

But regardless, I agree this should be an implementation detail of the browser - the "JIT events" approach isn't necessarily wrong (it's still what we do on desktop actually).

But then how does one write a reliably smooth pinch library on top of pointer events (say for manipulating a map)? With touch events I can just process each event and transform all at once. With PE I probably want to compute/apply my transform exactly once for all fingers, so I'll probably do it on RAF, right? But then that adds some latency I might prefer to avoid.

I certainly don't want to let a frame be produced that includes the changes from one finger's movement without the changes from the other finger's movement (or I'll get a cyclic zoom-in/out pattern as the user drags with a constant distance between their fingers). In a "JIT events" browser users may see occasional jitter when one finger happens to come before the RAF and one after. But in a buffered-input browser, doing the work on RAF should be reliably smooth.
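
For illustration, a hedged sketch of the pinch handling I have in mind: the scale is computed once per rAF from both fingers' latest positions, so no frame mixes one updated finger with one stale one (applyZoom is a placeholder, not a real API).

```ts
// Hedged sketch of a pinch handler: the zoom scale is derived once per frame
// from both fingers' latest positions. applyZoom() is a placeholder for the
// app's own transform code.
const fingers = new Map<number, { x: number; y: number }>();
let startDistance = 0;

function currentDistance(): number {
  const [a, b] = [...fingers.values()];
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function applyZoom(scale: number) {
  // placeholder: apply the scale to the map / canvas being manipulated
  console.log("scale", scale);
}

window.addEventListener("pointerdown", e => {
  fingers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (fingers.size === 2) startDistance = currentDistance();
});
window.addEventListener("pointermove", e => {
  if (fingers.has(e.pointerId)) {
    fingers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  }
});
window.addEventListener("pointerup", e => {
  fingers.delete(e.pointerId);
  startDistance = 0;
});

function onPinchFrame() {
  if (fingers.size === 2 && startDistance > 0) {
    applyZoom(currentDistance() / startDistance); // both fingers, once per frame
  }
  requestAnimationFrame(onPinchFrame);
}
requestAnimationFrame(onPinchFrame);
```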

We've had some complaints before about users doing 2-finger scrolling (with a fixed separation between their fingers) and seeing such a zoom in/out jitter. @tdresser, you've got some tools somewhere for precisely measuring such variability in finger separation, right? Perhaps we should extend them to use pointer events and see if we can quantify this two-finger vsync-boundary effect on some Windows devices and compare it to what we get on typical TouchEvent-based devices.

I suppose you could argue that any such library should always do some amount of smoothing. But that always comes at a cost of increased latency and a feeling of "squishiness", so I'm not sure I'd really recommend that. WDYT?

@tdresser commented Jul 9, 2015

This page graphs the horizontal (red) and vertical (green) separation between two fingers during a two finger scroll.

In current Chrome, there's an ugly alternation between a large delta and a small delta on the primary axis of movement, as we send one touchmove after the first finger updates each frame, and another touchmove after they've both updated. This would go away if we buffered our touch input.

Is that the kind of demo you're thinking would be valuable here? If so, I'll port it to pointer events.
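
Roughly, the measurement amounts to this (a sketch, not the actual demo code): record the horizontal and vertical separation between the two pointers on every move, then graph the samples.

```ts
// Sketch of the measurement: track two pointers and log their horizontal and
// vertical separation on every pointermove, so the per-frame alternation
// ("bimodal" separation) can be graphed afterwards.
const tracked = new Map<number, { x: number; y: number }>();
const samples: { dx: number; dy: number; t: number }[] = [];

window.addEventListener("pointerdown", e => {
  tracked.set(e.pointerId, { x: e.clientX, y: e.clientY });
});
window.addEventListener("pointermove", e => {
  tracked.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (tracked.size === 2) {
    const [a, b] = [...tracked.values()];
    samples.push({
      dx: Math.abs(a.x - b.x),
      dy: Math.abs(a.y - b.y),
      t: e.timeStamp,
    });
  }
});
window.addEventListener("pointerup", e => {
  tracked.delete(e.pointerId);
});
// `samples` (dx and dy over time) can then be plotted to see whether the
// separation jumps at frame boundaries.
```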

@RByers (Contributor, Author) commented Jul 9, 2015

Yep, that's the one - thanks. When you say "current Chrome" you mean Chrome desktop, right? It's fine on Android precisely because the OS does deliver the events in bundles per frame.

@tdresser commented Jul 9, 2015

Sorry, you're right, it's fine on Android.

@tdresser commented Jul 9, 2015

This version of the demo also works with pointer events. The output is very similar to Chrome desktop. This demo paints when receiving a pointer event, not on RAF.

@smaug---- (Contributor):

Doing more and more stuff around RAF postpones actual graphic updates, so that may not be good for UX. This is a generic issue with the platform.
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28644
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28876

It is not quite clear to me how that issue should be fixed.

@RByers (Contributor, Author) commented Jul 9, 2015

Whoa, this version of the demo makes the difference between Android/iOS and pointer events glaringly obvious. As expected, the distance between the fingers shows a strongly bimodal pattern.

Jacob, how does the MSGesture implementation deal with this? Does it buffer/postpone or smooth events? Or does it have some internal hook to know about event bundles (when supported by the hardware)?

@RByers (Contributor, Author) commented Aug 25, 2015

Updating the summary to describe a concrete problem as exhibited by the demo. Maybe it's OK for this to be implementation dependent, but that means that, say, a pinch implementation that works well on Chrome Android may be pretty ugly on Edge (and it's not clear to me how it should be made to behave well on Edge at all).

@RByers (Contributor, Author) commented Sep 18, 2015

Note that iOS 9 now does this by default too, just like Android (with a coalescedTouchesForTouch API to get the raw un-aligned / uncoalesced input events, just like Android's batching APIs).

@RByers (Contributor, Author) commented Sep 18, 2015

Maybe we'd want to block doing anything here on adding an API for getting coalesced (un-aligned) points? Filed #22.

@dtapuska commented May 4, 2017

Now that coalesced events are spec'd as an extension and shipped in Chrome, I've proposed that we implement rAF-aligned mouse events.
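
For reference, a sketch of how a handler might consume the coalesced points (getCoalescedEvents() is the shipped API; the fallback to the dispatched event itself is just defensive):

```ts
// Sketch of getCoalescedEvents() usage. Each coalesced event carries one of
// the raw samples that were merged into the single dispatched pointermove.
window.addEventListener("pointermove", e => {
  const raw = typeof e.getCoalescedEvents === "function" ? e.getCoalescedEvents() : [e];
  for (const sample of raw) {
    // e.g. feed every raw sample into an ink stroke for a drawing app
    console.log(sample.pointerId, sample.clientX, sample.clientY, sample.timeStamp);
  }
});
```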

There is some desire to get Firefox's and Microsoft's positions on this, since the comments on this issue are quite old and predate the changes we made to ship the coalesced points API.
@patrickkettner @smaug----

@smaug---- (Contributor):

> pointer events are dispatched (for all touch points) shortly before requestAnimationFrame callbacks

this is super vague - when exactly would those events actually be dispatched? If something is spec'ed, better to spec it properly. But are we sure the same setup is what we want in all contexts? Say, a very high-performance desktop could perhaps dispatch more events at any time during a frame, or a UA could adapt the dispatch time based on how long event listeners take to execute.

But we are currently evaluating in Gecko what the best time to dispatch input events would be. We already do some mousemove compression, wheel events are coalesced, and repeated key events are dropped if the child process can't keep up processing them.

@dtapuska commented May 4, 2017

Should this really be specified in UI Events and the HTML spec? I thought about adding a phase in which the requestAnimationFrame callbacks are executed. I think it is important to make clear that calling requestAnimationFrame inside an event handler will cause the callback to run right after the input is dispatched, so the requestAnimationFrame phase information really needs to be updated.

@patrickhlauke (Member):

@dtapuska @RByers @smaug---- do you think this needs any addition/note/clarification in the PE spec? And if so, any rough suggestion of where and what? (Happy to bash it into shape if you make a first attempt at putting it into layperson's terms.)

@patrickhlauke (Member):

setting to future-v3 as the related #214 was also just set to that

@patrickhlauke (Member):

Diving into some of these long-standing old issues: is this something we still think might be necessary for the spec? Or, if it is a good idea to define event order relative to RAF, would this be more appropriately housed somewhere more generic, like UI Events? It would seem odd if only PE had this sort of definition while no other input events had this kind of clarification.

@patrickhlauke (Member):

Discussed in PEWG call today. This feels like it should be a more generic issue against UI Events, as it likely affects not just pointer events but touch, mouse, and other input events as well.
