Removed VREyeParameters and related refactors #139
Conversation
This is born from discussions I've had with Nell recently about a change related to handling multiple coordinate systems, which should be seeing its own pull request soon-ish. Basically the thinking is that if we're going to break backwards compatibility in a significant way anyway (as is suggested for Worker support in #116) we might as well take the opportunity to tidy up the API all around.

Removed VREyeParameters and refactored the information it contained into VRFrameData (offset is now *PoseToEyeMatrix for more flexibility), and renderWidth/renderHeight moved onto the VRDisplay as recommendedRenderWidth and recommendedRenderHeight. They now describe the ideal canvas size, containing both eyes, rather than the ideal render target size for a single eye. Field of view was already deprecated and replaced with the projection matrix on VRFrameData.

The core desire behind this change is to localize everything that may change frame to frame into VRFrameData. It was awkward having getEyeParameters be something that you may have to repeatedly query in addition to getFrameData, and the vector offset alone couldn't describe whether the eye transform had a rotational component. Now that's explicit in the *PoseToEyeMatrix attributes. (That value COULD be derived from the view matrices and pose, but the math was pretty cumbersome and tricky to get right. No need to make users do it if we already know the answer.) Three.js in particular should be able to make direct use of those matrices. As for adding the matrix to the pose, that's just because it's awkward to have everything else be a matrix but still expect the user to compute that one. The implementations can (and probably will) choose to lazily compute either the matrix or the individual components so that we're not sending "duplicate" data internally. At the end of the day, all of this is simply to say to users: "We know you will need these values frequently. We'll give them to you up front rather than making everyone do the same math over and over again."

Also removed getPose, because if we're gonna break backwards compatibility let's go all in and get rid of "deprecated" functions that never officially launched.

Feedback is very much wanted and appreciated!
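For illustration, here is a minimal sketch of a render loop under this proposal. The per-eye attribute names (leftPoseToEyeMatrix/rightPoseToEyeMatrix) are my expansion of the *PoseToEyeMatrix naming above, and drawEye is a hypothetical helper; neither is shipped API.

```js
const frameData = new VRFrameData();

function onVRFrame() {
  vrDisplay.requestAnimationFrame(onVRFrame);

  // Everything that changes frame to frame now comes from one call.
  vrDisplay.getFrameData(frameData);

  // frameData.poseMatrix: 16-element column-major transform for the pose.
  // frameData.leftPoseToEyeMatrix: replaces VREyeParameters.offset, and can
  //   express a rotational component that the old offset vector could not.
  // frameData.leftProjectionMatrix: replaces the deprecated field of view.
  drawEye(frameData.leftProjectionMatrix, frameData.leftPoseToEyeMatrix);
  drawEye(frameData.rightProjectionMatrix, frameData.rightPoseToEyeMatrix);

  vrDisplay.submitFrame();
}
```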
It might also be useful to have a second pair of dimensions that indicate the resolution at which the center pixels render at 1:1. The values may be the same, but it looks like, so far at least for Daydream and GearVR (and presumably Cardboard), they'll be different. I wonder if the recommended values might even change dynamically when the browser or OS determines the device needs to cool down or preserve battery.
{{VRPose}}:
<dfn attribute for="VRFrameData">poseMatrix</dfn>
A 4x4 matrix describing the transform defined by the position and orientation of the pose, given as a 16 element array in column major order.
If poseMatrix describes the position and orientation of the pose, we should change the position and orientation Float32Arrays into booleans that indicate whether position and/or orientation are available. This way, we avoid data duplication.
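A quick sketch of the shape this suggestion implies; hasPosition and hasOrientation are hypothetical names for the proposed booleans, not spec'd attributes:

```js
// Hypothetical shape: booleans on VRPose flag availability, while the
// actual values live only in the poseMatrix on VRFrameData.
if (frameData.pose.hasPosition) {
  // In a 16-element column-major matrix the translation sits at
  // indices 12, 13, and 14.
  const x = frameData.poseMatrix[12];
  const y = frameData.poseMatrix[13];
  const z = frameData.poseMatrix[14];
}
```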
At the very least we need some way to indicate that tracking was lost for 6DoF-capable devices whose fallback "3DoF" mode still contains neck modeling.
index.bs
(Outdated)
<dfn attribute for="VRDisplay">recommendedRenderWidth</dfn>
Recommended width of the {{VRLayer}} source in pixels. The width reported SHOULD ideally be the resolution at which the final image presented to the user has a 1:1 pixel ratio at the center of the user's view after any distortion the {{VRDisplay}} will apply. The width reported MAY be lower than that, however, if the underlying platform recommends it for performance reasons. The width reported MUST be large enough to contain both the left and right eye viewports.

<dfn attribute for="VRDisplay">recommendedRenderHeight</dfn>
Recommended height of the {{VRLayer}} source in pixels. The height reported SHOULD ideally be the resolution at which the final image presented to the user has a 1:1 pixel ratio at the center of the user's view after any distortion the {{VRDisplay}} will apply. The height reported MAY be lower than that, however, if the underlying platform recommends it for performance reasons.
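As a usage sketch, sizing the canvas under this proposal collapses to a direct assignment (assuming a vrDisplay obtained from navigator.getVRDisplays()):

```js
// The recommended values already account for both eyes, so no
// per-eye math is needed to size the shared canvas.
const canvas = document.querySelector('canvas');
canvas.width = vrDisplay.recommendedRenderWidth;
canvas.height = vrDisplay.recommendedRenderHeight;
```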
Now that recommendedRenderWidth and recommendedRenderHeight are for the entire VRLayer, it becomes a bit confusing to the web developer what the layout of the left and right eye is expected to be.
For instance, is the developer expected to infer that if the user agent returns e.g. 3840x1080 it's "supposed" to render side by side, but if the user agent returns e.g. 1920x2160 it's top/bottom? I think we should either go back to using per-eye sizes, or just specify that we always want the data in a side-by-side fashion.
In the default use case today (which this pull request doesn't change), content is expected to be rendered side by side because that's what the default VRLayer bounds define. You can always provide your own bounds if you want something more exotic, like top/bottom rendering.
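A sketch of the two layouts in terms of VRLayer bounds; the side-by-side values match the WebVR 1.1 default bounds, given as [x, y, width, height] in 0 to 1 texture space:

```js
// Default: left eye on the left half, right eye on the right half.
const sideBySideLayer = {
  source: canvas,
  leftBounds:  [0.0, 0.0, 0.5, 1.0],
  rightBounds: [0.5, 0.0, 0.5, 1.0]
};

// "Exotic" top/bottom rendering just supplies different bounds.
const topBottomLayer = {
  source: canvas,
  leftBounds:  [0.0, 0.0, 1.0, 0.5],
  rightBounds: [0.0, 0.5, 1.0, 0.5]
};

vrDisplay.requestPresent([sideBySideLayer]);
```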
There are several updates in this PR that are aimed at reducing the amount of computation JS apps are expected to do to derive the desired values. For example: the poseMatrix isn't hard to compute, but if we're providing matrices for everything else, it's good for API consistency and general usability if JS apps don't have to copy/paste that logic everywhere. Likewise, while this was by no means a complicated calculation, if 99% of WebVR apps are going to need to run through some equation to get the canvas size, we should just give them the canvas size up front and make the less commonly needed value (the size per eye) the one that requires some math.
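Under the default side-by-side layout, that less commonly needed per-eye size is a one-liner (a sketch, assuming the combined-canvas semantics proposed here):

```js
// Each eye viewport is half the combined width at full height.
const eyeWidth  = vrDisplay.recommendedRenderWidth / 2;
const eyeHeight = vrDisplay.recommendedRenderHeight;
```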
That being said, I don't feel so strongly about this that I'm opposed to changing it back to a "size per eye" value if there's a compelling argument for doing so.
I don't feel super strongly either, but I do think it's a bit weird to provide the size of a combined rect when we don't mandate any particular way of making that combination. If we're letting people choose the layout, it's a bit strange to make assumptions about that layout when providing properties about it.
After your clarification it makes sense that it is to be interpreted using the default layout of the eye rects, but before the clarification it was confusing to me.
Yeah, no question that said clarifications should be present in the spec text.
Updated the comments for …
Should a 1:1 pixel ratio post-distortion be the defining characteristic for these recommended dimensions? That does provide some useful information, though, that developers wouldn't otherwise have easy access to.
@mkeblx How would you define a maximum? At what point is it just not worth going any higher?
Can you rebase?
Never! :) There would be a maximum in terms of canvas size, but you can get that already. (Are you ready to render at 16,384 x 16,384 yet? :) ) Maybe max => the maximum along the pipeline. But I would just drop it & maybe go with … Looking at this in a more future-oriented way: say a 16k-screen mobile HMD comes out next year at near the same power envelope (primarily to eliminate screen door or something), with the true 1:1 ratio post-distortion being 20k, so screen resolution has a weak relationship with render resolution. I'm not sure there's a good solution for figuring out the actual resolution apps should render at. Again, app/dev responsibility.
This PR has been surpassed by the conversations around the 2.0 explainer, so I'm closing it. |