KHR_node_selectability Draft Proposal #2422
Conversation
Looks straightforward to me. No real objections.
I cannot say much about the connection to KHR_interactivity at a low technical level.
On a high level, I'd mention possible alternatives for the output values: when there is a selectionRayOrigin, one could consider some selectionPointDistance (i.e. the distance along the ray), or a selectionRayDirection, or a selectionLine, or so on. But there are some ways of deriving one from the other:
- selectionLine = selectionPoint - selectionRayOrigin
- selectionRayDirection = normalize(selectionLine)
- selectionPoint = selectionRayOrigin + selectionLine
- selectionPointDistance = distance(selectionRayOrigin, selectionPoint)
- selectionLine = selectionPointDistance * selectionRayDirection
- selectionPoint = selectionRayOrigin + selectionPointDistance * selectionRayDirection
- ...
Which one is "the best"? I don't know. Others might even want to throw in some "picked triangle" and the barycentric coordinates of the hit point...
In the past, I modelled the 'result of a picking operation' as result = { ray, distance, pickedObject } with ray = { origin, direction }, but others may have different views on that.
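For concreteness, a minimal TypeScript sketch of that modelling; the names (Vec3, PickResult, deriveSelectionOutputs) are hypothetical and not part of the proposal:

```ts
// Sketch only; the names are hypothetical, not taken from the extension text.
type Vec3 = [number, number, number];

// One possible shape for the result of a picking operation:
// a ray (origin + direction), the distance along the ray, and the picked object.
interface PickResult {
  ray: { origin: Vec3; direction: Vec3 }; // direction assumed normalized
  distance: number;
  pickedObject: number; // e.g. a glTF node index
}

// Deriving the alternative output values from selectionRayOrigin and selectionPoint
// (assumes the point differs from the origin, so the distance is non-zero).
function deriveSelectionOutputs(selectionRayOrigin: Vec3, selectionPoint: Vec3) {
  const selectionLine: Vec3 = [
    selectionPoint[0] - selectionRayOrigin[0],
    selectionPoint[1] - selectionRayOrigin[1],
    selectionPoint[2] - selectionRayOrigin[2],
  ];
  const selectionPointDistance = Math.hypot(...selectionLine);
  const selectionRayDirection: Vec3 = [
    selectionLine[0] / selectionPointDistance,
    selectionLine[1] / selectionPointDistance,
    selectionLine[2] / selectionPointDistance,
  ];
  return { selectionLine, selectionPointDistance, selectionRayDirection };
}
```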
A detail that might already be covered by some general "event handling" part of the KHR_interactivity specification (I still have to finish reading that one):
This interactivity event node is activated when a “select” event occurs on a glTF node nodeIndex or on any node in its subtree subject to the following propagation rule: the lowest node in the tree receives the select event first, and the event bubbles up the tree until a glTF node with an associated event/onSelect behavior graph node with its stopPropagation configuration value set to true is found.
When stopPropagation is false, the event bubbles up further, and a single select could then cause multiple elements to be selected in one operation - is that correct?
Sure. Two points seem to be the most atomic and fundamental values in this interconnected system. Various math nodes could be used to trivially derive everything else.
Yes, if they have event "listeners", i.e., associated event/onSelect behavior graph nodes.
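A minimal sketch of that bubbling behaviour, with hypothetical types that are not taken from KHR_interactivity:

```ts
// Sketch only; the types are hypothetical, illustrating the propagation rule as described.
interface SelectListener {
  stopPropagation: boolean;          // configuration value on the event/onSelect node
  onSelect(nodeIndex: number): void; // triggers the associated behavior graph
}

interface SceneNode {
  index: number;
  parent?: SceneNode;
  listener?: SelectListener; // an associated event/onSelect node, if any
}

// The lowest hit node receives the select event first; it then bubbles up the tree.
// With stopPropagation === false, several ancestors with listeners can react to
// the same select operation.
function dispatchSelect(hitNode: SceneNode): void {
  for (let node: SceneNode | undefined = hitNode; node; node = node.parent) {
    if (node.listener) {
      node.listener.onSelect(node.index);
      if (node.listener.stopPropagation) {
        return; // propagation stops at the first listener configured to do so
      }
    }
  }
}
```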
Just a general question here from someone who hasn't been involved in the interactivity discussions: Do we need to assume a "selection ray" in this extension? I can easily imagine VR/XR scenarios where I could tap a "selection point" in space, or drag out a selection box of some kind, or use some not-as-yet-invented controller to indicate selection of a node by more futuristic means. Must it always be a ray?
This sounds like new, more advanced events. We could probably allow returning NaN if the exact coordinates cannot be provided for some reason. @dwrodger WDYT?
Yes, I think that NaN in the case that the ray can't be defined sounds fine. It may also be appropriate to update the language where it defines selection and what it means for an object to be invisible to selection. That language could just say that implementations that use systems other than ray-based selection are free to interpret "invisible to selection" in whatever way makes sense for their selection mechanics.
I don't have substantial comments beyond what I already wrote in the first review. If the review was about the latest changes: I'm lacking some technical context there (e.g. about the 'controllers'). From a structural perspective, I could only brainstorm whether the absence of a point/origin should be modelled with NaN, or whether that should be differentiated into a "selection event" (without these properties) and a "picking event" (with these properties).
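To make the two options concrete, a rough TypeScript sketch; all type and field names are hypothetical, not taken from the extension:

```ts
// Sketch only; all names are hypothetical.
type Vec3 = [number, number, number];

// Option A: a single event type whose spatial outputs may be NaN when the
// selection mechanism has no ray or point to report.
interface SelectEvent {
  selectedNodeIndex: number;
  controllerIndex: number;
  selectionRayOrigin: Vec3; // may be [NaN, NaN, NaN]
  selectionPoint: Vec3;     // may be [NaN, NaN, NaN]
}

function hasSpatialData(e: SelectEvent): boolean {
  return e.selectionPoint.every((c) => !Number.isNaN(c));
}

// Option B: differentiate a plain "selection event" (no spatial data)
// from a "picking event" (with spatial data).
interface SelectionEvent {
  selectedNodeIndex: number;
  controllerIndex: number;
}

interface PickingEvent extends SelectionEvent {
  selectionRayOrigin: Vec3;
  selectionPoint: Vec3;
}
```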
Maybe as additional input here, in WebXR each controller has a "ray pose" and a "grip pose" and both are very much needed for different use cases. For example, visuals for controller models are usually aligned with the "grip pose", and subsequently the action of "dragging an object" often is based on the "grip pose". The "ray pose" or "aim pose" is needed for clicking on things. These two so far have survived a number of novel interaction mechanisms, including things like Apple Vision Pro where suddenly pointers are transient (they exist only temporarily while the user is interacting) and have very different "ray pose" (ray from the eye to where the user is looking) and "grip pose" (point and orientation for where in space the user has started the hand gesture for the selection).
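For context, this is roughly how both poses are queried with the standard WebXR API (refSpace is assumed to be an XRReferenceSpace obtained elsewhere, and WebXR type definitions are assumed to be available):

```ts
// Sketch only; assumes WebXR type definitions (e.g. @types/webxr).
declare const refSpace: XRReferenceSpace; // from session.requestReferenceSpace(...)

function onXRFrame(time: DOMHighResTimeStamp, frame: XRFrame): void {
  for (const inputSource of frame.session.inputSources) {
    // "Ray"/"aim" pose: where the controller points; used for clicking on things.
    const rayPose = frame.getPose(inputSource.targetRaySpace, refSpace);

    // "Grip" pose: where the hand holds the controller; used for controller
    // visuals and for dragging objects.
    const gripPose = inputSource.gripSpace
      ? frame.getPose(inputSource.gripSpace, refSpace)
      : null;

    // rayPose?.transform and gripPose?.transform carry position and orientation.
  }
}
```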
Reading a bit more in the spec, I wonder about this section:
Maybe it should read "the unique ID of the controller that has generated the event" instead? The index of a controller can change; for example, in touch-based systems usually each new touch has a new pointer ID (see https://developer.mozilla.org/en-US/docs/Web/API/PointerEvent/pointerId). Or in WebXR, where users can connect and disconnect new controllers arbitrarily (e.g. switch from controllers to hands and back) or have transient pointers as well. (I think it depends a bit on what the expected usage of the returned controller index is - are there nodes that can get more data from a controller?)
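As a concrete illustration (standard browser and WebXR APIs, nothing from this extension): pointer IDs are assigned per touch, and XR input sources can be added or removed at any time, so a plain index is fragile:

```ts
// Sketch only; assumes browser DOM and WebXR type definitions.

// Each new touch gets a fresh pointerId, so an index derived from it is not stable.
window.addEventListener('pointerdown', (e: PointerEvent) => {
  console.log('pointer down, id =', e.pointerId);
});

// In WebXR, input sources come and go (controllers <-> hands, transient pointers),
// so positions within session.inputSources can shift at any time.
function watchInputSources(session: XRSession): void {
  session.addEventListener('inputsourceschange', (e: XRInputSourcesChangeEvent) => {
    console.log('added:', e.added.length, 'removed:', e.removed.length);
  });
}
```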
As discussed in the Interactivity DTSG.