DirectX12 backend exhibits different behaviour than other backends; scene will not render on some machines #1002
Comments
Thank you for a great issue! Which real vendors/models suffer from this issue? I tested on an AMD Ryzen 3500U so far and was unable to reproduce it.
Same for Intel Iris 550. Going to run it in a VM now...
My GTX 1080 doesn't seem to suffer from the issue; I'll check with the other people I had test my app. Edit: it looks like the issue showed up on someone else's 1080, and they were also on Windows 10.
So far, it looks like the problem occurs only with the WARP device (the software implementation of D3D12 that ships as part of the SDK). I found gfx-rs/gfx#3432 while looking at memory in RenderDoc that is all zeroes, but fixing that doesn't address this bug.
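For anyone else trying to reproduce: below is a minimal sketch (my addition, not from this thread; the 0.6-era wgpu API names are from memory and may need adjusting) of explicitly selecting the WARP adapter, which advertises itself as a CPU device named "Microsoft Basic Render Driver".

// Sketch only: pick the WARP adapter explicitly so the repro does not depend
// on which GPU the machine happens to have.
fn find_warp_adapter(instance: &wgpu::Instance) -> Option<wgpu::Adapter> {
    instance
        .enumerate_adapters(wgpu::BackendBit::DX12)
        .into_iter()
        .find(|adapter| {
            let info = adapter.get_info();
            // WARP reports a CPU device type and the "Basic Render Driver" name.
            info.device_type == wgpu::DeviceType::Cpu
                || info.name.contains("Basic Render Driver")
        })
}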
I narrowed it down a bit with this code:

if use_iced {
    //let _renderer = Renderer::new(Backend::new(&mut device, Settings::default()));
    let constant_layout =
        device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
            label: None,
            entries: &[wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStage::VERTEX,
                ty: wgpu::BindingType::UniformBuffer {
                    dynamic: false,
                    min_binding_size: None,
                },
                count: None,
            }],
        });
    let constants_buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: None,
        size: 64,
        usage: wgpu::BufferUsage::UNIFORM,
        mapped_at_creation: false,
    });
    let _constants = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &constant_layout,
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: wgpu::BindingResource::Buffer(
                constants_buffer.slice(..),
            ),
        }],
    });
}

It would be great to see if this snippet alone is enough to reproduce on an affected machine.
Looking at this case leads me nowhere. The stuff created in this snippet is getting successfully freed, and doesn't affect anything that happens per frame...
@msiglreith I've been looking at this for longer than I hoped, trying to narrow down the bug. I believe I got to the bottom of it, but I'm not sure how to explain or fix it yet, so I'm asking for your opinion. Here is what happens in a test app:
I narrowed the problem down to the descriptor update pool in our DX12 backend. It's a CPU-only descriptor heap for views (UAV/SRV/CBV).
What happens is that for both writes (of the buffer A descriptor and the buffer B descriptor), even though the target shader descriptors are different and the source buffers are different, the intermediate CPU heap we use is the same, and we use the same index 0 of this heap for temporarily storing the CBV. If I disable step (3) - the clearing of the intermediate heap - the problem goes away. I was thinking if...
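To make that pattern concrete, here is a small pure-Rust model of the flow described above (my own illustration with no real D3D12 calls; CpuHeap, GpuHeap and every other name here are invented for this sketch, not gfx-backend-dx12 types):

// Model of: stage a CBV in slot 0 of a CPU-only heap, copy it into the
// shader-visible heap, clear the staging heap, then reuse slot 0 for the
// next CBV. Per the D3D12 rules this should be fine; the report above is
// that WARP misbehaves with this reuse.
#[derive(Clone, Debug, PartialEq)]
struct Descriptor(&'static str);

struct CpuHeap { slots: Vec<Option<Descriptor>> } // non-shader-visible staging heap
struct GpuHeap { slots: Vec<Option<Descriptor>> } // shader-visible heap

impl CpuHeap {
    fn write(&mut self, index: usize, d: Descriptor) { self.slots[index] = Some(d); }
    fn clear(&mut self) { for s in &mut self.slots { *s = None; } } // "step (3)" above
}

// Stand-in for the descriptor copy: the destination gets a copy of the
// source's current contents, after which the source slot may be reused.
fn copy_descriptor(src: &CpuHeap, src_i: usize, dst: &mut GpuHeap, dst_i: usize) {
    dst.slots[dst_i] = src.slots[src_i].clone();
}

fn main() {
    let mut staging = CpuHeap { slots: vec![None; 4] };
    let mut shader_visible = GpuHeap { slots: vec![None; 8] };

    // Write the CBV for buffer A into staging slot 0, copy it to slot 3.
    staging.write(0, Descriptor("CBV of buffer A"));
    copy_descriptor(&staging, 0, &mut shader_visible, 3);

    staging.clear();

    // Reuse the same staging slot 0 for the CBV of buffer B, copy it to slot 5.
    staging.write(0, Descriptor("CBV of buffer B"));
    copy_descriptor(&staging, 0, &mut shader_visible, 5);

    // Both shader-visible slots should hold distinct, valid descriptors.
    assert_eq!(shader_visible.slots[3], Some(Descriptor("CBV of buffer A")));
    assert_eq!(shader_visible.slots[5], Some(Descriptor("CBV of buffer B")));
}

The staging slot is legal to reuse right after each copy, which is the behavior confirmed further down in the thread.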
A few more things I tried:
All I needed to reproduce was specifying the "dx12 iced" parameters. I see that you are on dx12, but are you sure you got the iced part enabled?
Ah, no, I missed that; now it's also blank.
Agree, this looks like a bug in the WARP implementation to me. The memory regions of the non-shader-visible descriptors should be immediately reusable after the copy.
Thanks for looking into this, @msiglreith! Let's figure out a workaround; otherwise WARP becomes totally unusable (and we'd need to block it).
3434: [0.6/dx12] Improve descriptor handling r=msiglreith a=kvark

Can be reviewed commit-by-commit. Actually gets us to release the descriptors. It also fixes a mistake I made in one of the previous PRs that would panic in view creation, unnecessarily.

Also works around gfx-rs/wgpu#1002 in the following way:
- we rotate the temporary descriptor heaps instead of resetting them every time
- once we reach the head during rotation, we insert a new heap in

That is not a guarantee that we'll never hit this WARP issue again, but there is a high chance most apps will not notice. If there is a better workaround, I'll be happy to scrap this one. I think the old logic was plain wrong, too: the `update_pool_index` wasn't actually being used...

Co-authored-by: Dzmitry Malyshau <kvarkus@gmail.com>
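As a rough sketch of that rotation idea (my own reading of the PR summary above, not the actual gfx-backend-dx12 code; the real backend presumably also recycles heaps once the GPU is done with them):

// Keep a ring of temporary CPU descriptor heaps and rotate through them
// instead of clearing and reusing a single heap every update.
struct TempHeapRing<H> {
    heaps: Vec<H>,
    cursor: usize,
}

impl<H> TempHeapRing<H> {
    fn new(first: H) -> Self {
        Self { heaps: vec![first], cursor: 0 }
    }

    /// Pick the heap for the next batch of descriptor writes.
    fn next(&mut self, make_heap: impl FnOnce() -> H) -> &mut H {
        self.cursor += 1;
        if self.cursor == self.heaps.len() {
            // We wrapped around to the head: splice in a brand-new heap and
            // use it for this batch, so the heap that was about to be reused
            // gets one more batch of delay before it is written to again.
            self.cursor = 0;
            self.heaps.insert(0, make_heap());
        }
        &mut self.heaps[self.cursor]
    }
}

The point is only that a heap is never cleared and rewritten immediately after its descriptors were copied, which is the reuse pattern that trips up WARP above.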
Workaround landed in gfx-backend-dx12-0.6.10.
Description
My application is simply supposed to draw a grid of lines on the screen. On some Windows 10 machines using the DirectX12 backend, when the iced-wgpu backend is initialized before my custom rendering, the grid disappears. This does not happen with DirectX11, or with Vulkan on Linux. On some machines the DirectX12/iced-wgpu combination does work and the grid displays; luckily, the issue seems to be reproducible in a Windows 10 VM.
Repro steps
See my repository here
Platform
Windows 10 build 19042.572
DirectX12