[wgpu.image] Workaround WGPU OpenGL heuristics #2259
Conversation
Thanks! We finally figured out the mystery 🥳
Looks like #2009 was quite close!
Great, so when the number of layers is a multiple of 6 we will lose images again, as wgpu hard-codes 6 layers to mean a cube map 🤦
@PolyMeilex Looking at the code, if we are hitting the first branch of the…
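For context, here is a hypothetical sketch (not code from this PR) of how the atlas could sidestep that cube-map heuristic whenever it grows on the GL backend; the `padded_layer_count` helper name and its signature are assumptions.

```rust
// Hypothetical sketch: when picking a layer count on the GL backend, avoid
// values that wgpu-hal's OpenGL backend special-cases. A single layer looks
// like GL_TEXTURE_2D and exactly 6 layers looks like a cube map, so pad past
// both; any other count is used as-is.
fn padded_layer_count(backend: wgpu::Backend, required: usize) -> usize {
    match backend {
        wgpu::Backend::Gl if required <= 1 => 2,
        wgpu::Backend::Gl if required % 6 == 0 => required + 1,
        _ => required.max(1),
    }
}
```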
Thank you! Long-standing issues gone 🥳
```rust
pub fn new(device: &wgpu::Device, backend: wgpu::Backend) -> Self {
    let layers = match backend {
        // On the GL backend we start with 2 layers, to help wgpu figure
        // out that this texture is `GL_TEXTURE_2D_ARRAY` rather than `GL_TEXTURE_2D`
        // https://github.com/gfx-rs/wgpu/blob/004e3efe84a320d9331371ed31fa50baa2414911/wgpu-hal/src/gles/mod.rs#L371
        wgpu::Backend::Gl => vec![Layer::Empty, Layer::Empty],
        _ => vec![Layer::Empty],
    };
```
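To make the effect of those two starting layers concrete, here is a minimal sketch (not the PR's actual atlas code) of how `layers.len()` would feed the texture descriptor, so the GL backend sees a depth of 2 and selects `GL_TEXTURE_2D_ARRAY`; the 2048 atlas size, format, and usage flags are assumptions, and the `view_formats` field assumes wgpu ≥ 0.15.

```rust
// Sketch: create the atlas texture with depth_or_array_layers taken from the
// number of layers chosen above. On the GL backend that value is 2, which is
// enough for wgpu to pick GL_TEXTURE_2D_ARRAY instead of GL_TEXTURE_2D.
let texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("image texture atlas"),
    size: wgpu::Extent3d {
        width: 2048,  // assumed atlas size
        height: 2048, // assumed atlas size
        depth_or_array_layers: layers.len() as u32,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8UnormSrgb,
    usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
    view_formats: &[], // assumes wgpu >= 0.15; omit this field on older versions
});
```

The texture view used for binding can still request `D2Array` explicitly, but with only one layer the GL backend has already committed the underlying texture to `GL_TEXTURE_2D`, which is where the type mismatch at bind time came from.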
Made it so that the additional memory is only allocated when using OpenGL.
wgpu OpenGL image rendering has been broken for years (every image is a black rectangle), but finally I accumulated enough willpower to debug and fix this.
So, it turns out that wgpu has to use heuristics to guess whether the texture we are creating will be used as `TEXTURE_2D_ARRAY` rather than `TEXTURE_2D`, and OpenGL needs to know that ahead of time; otherwise it will refuse to bind it because of a type mismatch. The real fix should be on the wgpu side, but that seems to be a known and non-obvious problem.

So in the meantime, I just increased the default layer allocation in the atlas to 2. That way wgpu will see a layer depth of 2 and correctly assume that we want to use this texture as `TEXTURE_2D_ARRAY`.
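Purely as an illustration of the guess described above (this is not wgpu-hal's actual code), the decision the GL backend has to make from the descriptor alone looks roughly like this:

```rust
// Simplified illustration: with only the texture descriptor to go on, a depth
// of 1 looks like a plain 2D texture and a depth of 6 looks like a cube map.
fn guess_gl_target(size: wgpu::Extent3d) -> &'static str {
    match size.depth_or_array_layers {
        1 => "GL_TEXTURE_2D",
        6 => "GL_TEXTURE_CUBE_MAP", // why a 6-layer atlas would break again
        _ => "GL_TEXTURE_2D_ARRAY", // a depth of 2 lands here, which is the fix
    }
}
```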
EDIT: Found an issue that mentions the array layer limitations: gfx-rs/wgpu#1574
Fixes #1774.
Fixes #2180.