Complete Gesture Support [Roadmap Proposal] #530
Also attaching the Swift interface:

```swift
@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
public protocol Gesture {
  associatedtype Value

  static func _makeGesture(
    gesture: SwiftUI._GraphValue<Self>,
    inputs: SwiftUI._GestureInputs
  ) -> SwiftUI._GestureOutputs<Self.Value>

  associatedtype Body: SwiftUI.Gesture
  var body: Self.Body { get }
}
```
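Notably, a custom gesture can conform by supplying only `body` and composing existing gestures, so `_makeGesture` never needs to be implemented directly. A hedged sketch (`DoubleTapFlag` is an illustrative name, not an existing API):

```swift
import SwiftUI

// A custom gesture built purely by composition: only `body` is
// provided; the synthesized conformance handles the rest.
struct DoubleTapFlag: Gesture {
  typealias Value = Bool

  var body: some Gesture {
    // TapGesture's Value is Void; map it to a Bool for consumers.
    TapGesture(count: 2).map { _ in true }
  }
}
```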
Can a custom …
Yes, it appears so. Though, state is reset when the gesture ends (you can only change it while the gesture is updating). I think that's why …
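For illustration, a small plain-SwiftUI example of that reset behavior: `offset` snaps back to `.zero` the moment the drag ends, since `@GestureState` can only be mutated inside `updating(_:body:)`:

```swift
import SwiftUI

struct DraggableCircle: View {
  // Automatically resets to .zero when the gesture ends.
  @GestureState private var offset: CGSize = .zero

  var body: some View {
    Circle()
      .frame(width: 44, height: 44)
      .offset(offset)
      .gesture(
        DragGesture().updating($offset) { value, state, _ in
          state = value.translation // only mutable while the gesture updates
        }
      )
  }
}
```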
Can’t wait for this feature to be part of Tokamak :P
Quick question: would each renderer deliver the necessary information for gestures at a given target (such as GTK, Web, and so on), and would gestures then be built on top of that in the TokamakCore layer?
It will probably depend on the specific gestures. In some early experimentation, I found that Safari exposes rotation/scale gestures but not the multi-touch events needed to implement these gestures by hand. Thus, at least some renderers will have to implement certain gestures on their own. Even when not strictly required, renderers should still expose system gestures where possible, following Tokamak's philosophy of preferring native functionality. If two targets, and by extension their renderers, do not support some gesture, we could implement the shared logic in TokamakCore to avoid code duplication between renderers.
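To make that seam concrete, here is a purely hypothetical sketch (none of these names exist in Tokamak): renderers forward either a native recognizer's output or raw pointer events, and TokamakCore synthesizes gestures from the latter only where needed:

```swift
// Hypothetical names for illustration only.
struct PointerData {
  var id: Int
  var x, y: Double
}

enum GesturePhase { case began, changed, ended, cancelled }

// What a renderer can deliver: a native recognizer's output where the
// platform provides one (e.g. Safari's rotation/scale events), or raw
// pointers for TokamakCore to synthesize gestures from.
enum RendererGestureEvent {
  case nativeRotation(angleRadians: Double, phase: GesturePhase)
  case nativeScale(scale: Double, phase: GesturePhase)
  case pointer(PointerData, phase: GesturePhase)
}

protocol GestureEventSource {
  func onGestureEvent(_ handler: @escaping (RendererGestureEvent) -> Void)
}
```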
An initial PR adding support can be found here.
Based on work I did here: this code animates with SwiftUI, but it doesn't with Tokamak.
This issue expands on the tap-gesture feature request, setting a timeline for complete gesture-support parity with SwiftUI.
Level 1: Tap Gesture & Related Protocols
`Button` practically uses a tap gesture behind the scenes, so there should be no surprises in explicitly introducing `onTapGesture`. Starting from this simple gesture, we could build the infrastructure for the rest of the gesture API. Namely:

- `Gesture` is a protocol that enables rudimentary composition and is the basis for reacting to gesture interactions.
- `AnyGesture` is `Gesture`'s simple type eraser.
- `GestureState` is key to reactivity. It is updated through the `updating(_:body:)` function, which returns a `GestureStateGesture` gesture. I'm not sure if this could be implemented through the other mapping methods (`onChanged` and `onEnded`), because IIRC these mapping methods fire at different points in the view lifecycle or gesture interaction. Finally, I imagine `map` would be easy to implement, as it only changes the gesture's value.
- `gesture(_:)`. To reduce the initial implementation's complexity, we could omit the `including mask: GestureMask` parameter (a minimal usage sketch follows this list).
Level 2: Many Gesture Types
High-level gesture recognition (like pan, rotation, and pinch gestures) comes for free in native apps, but would likely need a custom implementation on the web. The pointer-events API seems like a good place to start; a rough sketch of driving a drag recognizer from pointer events follows below. Besides this guide for implementing the pinch gesture, I didn't find a thorough guide for gesture detection in my brief research. At this point we may want to specify which gestures a given element can accept through CSS, though not every gesture type is available in all major browsers.
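A self-contained sketch (all names hypothetical, not Tokamak API) of turning raw pointerdown/pointermove/pointerup phases into a `DragGesture`-like translation value:

```swift
import Foundation // hypot

struct Point { var x, y: Double }
enum PointerPhase { case down, move, up }

struct DragRecognizer {
  let minimumDistance: Double = 10
  private var start: Point?

  /// Feeds one pointer event in; returns the translation while the drag is active.
  mutating func handle(_ phase: PointerPhase, at p: Point) -> (width: Double, height: Double)? {
    switch phase {
    case .down:
      start = p // arm the recognizer
      return nil
    case .move, .up:
      guard let s = start else { return nil }
      let translation = (width: p.x - s.x, height: p.y - s.y)
      if phase == .up { start = nil } // pointer up ends the gesture
      // Suppress output until the minimum distance threshold is crossed.
      return hypot(translation.width, translation.height) >= minimumDistance
        ? translation : nil
    }
  }
}
```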
The following gesture types would need to be recognized:

- `SpatialTapGesture` would be a refined implementation of `TapGesture`. The gesture would provide the taps' locations as its value by employing the pointer-events API. Namely, it would expect a pointer down and a subsequent pointer up event to fire.
- `DragGesture` requires a pointer down, pointer move (potentially with a minimum distance required), and a pointer up to end the gesture.
- `MagnificationGesture` and `RotationGesture` are multi-touch gestures. They require two or more fingers on touch devices; research is required on how they'd be detected with a trackpad input; I also don't know how SwiftUI handles this for devices with just a mouse input (maybe through scrolling and a modifier key?). I think both of the aforementioned gestures could be implemented by constructing a vector between two fingers (see the sketch after this list): magnification would measure whether the vector's magnitude grew (at least by the `minimumScaleDelta`); rotation would measure whether the vector's principal argument changed (at least by the `minimumAngleDelta`). I don't know how more than two fingers would affect the results.
- `LongPressGesture` starts with a pointer down event. It waits for `minimumDuration` before firing, and eagerly terminates if a pointer move exceeds the `maximumDistance`. This gesture can also be attached through one `onLongPressGesture` method; the other methods with the same name can be safely ignored because they're only available on Apple TV.
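To make the two-finger math concrete, a minimal plain-Swift sketch (`Touch` is illustrative, not a Tokamak type; comparing the results against `minimumScaleDelta` and `minimumAngleDelta` is left to the recognizer):

```swift
import Foundation // hypot, atan2

struct Touch { var x, y: Double }

// The vector between two fingers.
func vector(_ a: Touch, _ b: Touch) -> (dx: Double, dy: Double) {
  (b.x - a.x, b.y - a.y)
}

// Magnification: how much the finger distance grew relative to the start.
func scale(initial: (Touch, Touch), current: (Touch, Touch)) -> Double {
  let v0 = vector(initial.0, initial.1)
  let v1 = vector(current.0, current.1)
  return hypot(v1.dx, v1.dy) / hypot(v0.dx, v0.dy)
}

// Rotation: change in the vector's principal argument, in radians.
func angle(initial: (Touch, Touch), current: (Touch, Touch)) -> Double {
  let v0 = vector(initial.0, initial.1)
  let v1 = vector(current.0, current.1)
  return atan2(v1.dy, v1.dx) - atan2(v0.dy, v0.dx)
}
```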
Level 3: High-Level Control
The following modifiers are used for advanced gesture interactions. After Level 2, which is far into the future, we could start tinkering with how different gestures combine on our custom gesture-detection engine.
- `gesture(_:including:)`, where a mask controls precedence between the view's and its subviews' gestures. Perhaps the mask could be passed down through the environment or directly through the Fiber reconciler. Then, masking could change the priority of the gestures.
- `highPriorityGesture(_:including:)` could probably be implemented by also changing internal gesture priorities.
- `defersSystemGestures(on:)` is probably difficult to implement on the web; more research is required.
- `ExclusiveGesture` is a gesture where each of the provided sub-gestures fires independently. The gesture's value is either the first or the second sub-gesture's value. This would likely be implemented by polling both sub-gestures: the first sub-gesture to fire would propagate its value to the exclusive gesture, and polling for the second sub-gesture would stop. After the first sub-gesture ends, the state would be reset.
- `SequenceGesture` waits for the first sub-gesture to fire before polling for the second one. Using the same principle as `ExclusiveGesture`, the first sub-gesture would be allowed to complete, and then the second one, for the sequence gesture to fire. The gesture's value is either just the first sub-gesture's value, or both sub-gestures' values.
- `SimultaneousGesture`/`simultaneousGesture(_:including:)` allows its sub-gestures to fire concurrently. Both gestures are continuously polled, making the simultaneous gesture's value equivalent to `(First.Value?, Second.Value?)` (a sketch of these value shapes follows this list).
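As an illustration of those value shapes, a sketch mirroring what SwiftUI's combinators expose (names here are illustrative; Tokamak would need its own matching definitions):

```swift
// Illustrative value shapes only; not existing Tokamak types.
enum ExclusiveValue<First, Second> {
  case first(First)   // only one sub-gesture may succeed at a time
  case second(Second)
}

enum SequenceValue<First, Second> {
  case first(First)            // first sub-gesture not yet complete
  case second(First, Second)   // first complete, second in progress
}

struct SimultaneousValue<First, Second> {
  var first: First?    // both sub-gestures polled concurrently,
  var second: Second?  // so either value may be absent
}
```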