Feat: Offscreen Rendering & Screenshots #1845
Conversation
Force-pushed from 8a9b1c4 to 778153c
Force-pushed from 778153c to 8820583
wgpu/src/shader/offscreen_blit.wgsl (Outdated)
```wgsl
@group(0) @binding(0) var u_texture: texture_2d<f32>;
@group(0) @binding(1) var out_texture: texture_storage_2d<rgba8unorm, write>;

fn srgb(color: f32) -> f32 {
    if (color <= 0.0031308) {
        return 12.92 * color;
    } else {
        return (1.055 * pow(color, (1.0 / 2.4))) - 0.055;
    }
}

@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    // texture coord must be i32 due to a naga bug:
    // https://github.com/gfx-rs/naga/issues/1997
    let coords = vec2(i32(id.x), i32(id.y));

    let src: vec4<f32> = textureLoad(u_texture, coords, 0);
    let srgb_color: vec4<f32> = vec4(srgb(src.x), srgb(src.y), srgb(src.z), src.w);

    textureStore(out_texture, coords, srgb_color);
}
```
This won't work with the `web-colors` feature, for similar reasons to #1885.

I think we should avoid doing any color conversions in shader code. These should be handled by the GPU automatically.

I believe we can rewrite this using a render pipeline so we can easily control the output format from Rustland.
---
I will look into it. I'm pretty sure I couldn't use an sRGB texture format for a storage texture, so it will have to be a render pipeline instead of a compute pipeline, yeah.
---
As I reimplemented this with a `RenderPipeline`, I now remember the issue I hit when I first tried to implement this using a `RenderPipeline`: the `RenderPassColorAttachment`'s `TextureView` (which in this case must be `Rgba8UnormSrgb`) must be the same format as any render attachment textures (e.g. the frame's texture, which on my M1 is `Bgra8UnormSrgb`), which makes it entirely useless for converting between texture formats, unless I'm missing some hidden way to do this with a `RenderPipeline`. This is why I ended up using a `ComputePipeline`!
I could pass a flag into the compute shader as a uniform value to indicate whether or not to convert to sRGB..?
---
Not sure I follow. I am fairly certain I have sampled textures with a different format than the output format in a render pipeline. It's the whole point of a sampler. `glyphon` uses a masked texture with a single channel for most glyphs, for instance.
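For illustration, a minimal sketch of that idea in raw wgpu terms (wgpu ~0.16-era API; `device`, `shader`, and `pipeline_layout` are hypothetical, and this is not code from this PR). The format of the sampled source texture is only encoded in its bind group, while the output format is declared independently on the pipeline's color target:

```rust
// The sampled source can have any filterable format (e.g. the frame's
// Bgra8UnormSrgb); the pipeline only sees the bound texture view.
// The *output* format is chosen separately via `targets` below.
let pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    label: Some("offscreen blit"),
    layout: Some(&pipeline_layout),
    vertex: wgpu::VertexState {
        module: &shader,
        entry_point: "vs_main",
        buffers: &[],
    },
    fragment: Some(wgpu::FragmentState {
        module: &shader,
        entry_point: "fs_main",
        // Rendering into an Rgba8UnormSrgb attachment makes the GPU
        // perform the linear -> sRGB encoding on write, so no manual
        // srgb() conversion is needed in the shader.
        targets: &[Some(wgpu::TextureFormat::Rgba8UnormSrgb.into())],
    }),
    primitive: wgpu::PrimitiveState::default(),
    depth_stencil: None,
    multisample: wgpu::MultisampleState::default(),
    multiview: None,
});
```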
---
I might have a different way to do it using a render pipeline; I'm poking around with it this morning.
---
Nvm, I was just being a pepeg; should be good now 👍
---
Taking another look, I think I can now just re-use the existing `blit.wgsl` shader we have for multisampling; will update this accordingly.
Force-pushed from b7baa4d to af099fa
Updated in d955b34 to just use the existing `blit.wgsl` shader.
---
Awesome! Thank you!
Just moved a couple things around to simplify some APIs, but everything seems to work! Let's merge! 🥳
This PR adds support for offscreen rendering in Iced! 🎉
(video: screenshot_demo.mov)
📸 Users can now capture screenshots 🖼️ using the `window::screenshot()` command.

The `window::screenshot` command requires a `Message` which takes a `Screenshot`. This is a new struct exposed in the `iced_runtime` crate which contains the RGBA bytes of the screenshot, in addition to its size.

For now, this command is limited to the entire viewport, with an optional `crop()` method available which will crop the bytes to the specified `Rectangle`.

(video: cropping.mov)

This method returns a `Result<Screenshot, CropError>` for situations where the cropped region is out of bounds or is not visible.
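To make the flow concrete, here's a minimal sketch of how an application might use the command, based purely on the description above (the `Message` variants, import paths, and the `Rectangle` scalar type are assumptions, not the PR's exact API):

```rust
use iced::window::{self, Screenshot};
use iced::{Command, Rectangle};

#[derive(Debug, Clone)]
enum Message {
    TakeScreenshot,
    Screenshotted(Screenshot),
}

fn update(message: Message) -> Command<Message> {
    match message {
        // Ask the runtime to render the whole viewport offscreen and
        // hand the resulting RGBA bytes back as a `Message`.
        Message::TakeScreenshot => window::screenshot(Message::Screenshotted),
        Message::Screenshotted(screenshot) => {
            // Optionally crop to a sub-region; `crop` fails with a
            // `CropError` if the rectangle is out of bounds or hidden.
            if let Ok(cropped) = screenshot.crop(Rectangle {
                x: 0,
                y: 0,
                width: 320,
                height: 240,
            }) {
                // `cropped` still holds plain RGBA bytes plus its size,
                // ready to feed into an image crate.
                println!("captured {}x{}", cropped.size.width, cropped.size.height);
            }

            Command::none()
        }
    }
}
```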
🎉 Offscreen rendering capabilities are now added to both `wgpu` and `tiny_skia`!

Rendering offscreen is now exposed in the `Compositor` interface with a `render_offscreen` function. The outputted texture data is always guaranteed to be in RGBA format in the sRGB color space, both for compatibility with image libraries & iced images and to simplify the `Screenshot` struct with a predetermined byte order & pixel size.
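Based only on that description, the new entry point presumably has a shape along these lines (a hypothetical sketch; the parameter list and exact types are assumptions, not the PR's signature):

```rust
use iced_core::Color;
use iced_graphics::Viewport;

pub trait Compositor {
    type Renderer;

    // ...existing compositor methods elided...

    /// Renders the given frame offscreen and returns its pixels as
    /// tightly packed RGBA8 bytes in the sRGB color space, regardless
    /// of the backend's native surface format.
    fn render_offscreen(
        &mut self,
        renderer: &mut Self::Renderer,
        viewport: &Viewport,
        background_color: Color,
    ) -> Vec<u8>;
}
```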
🔢 Implementation details

For `wgpu`, performing offscreen rendering was fairly straightforward. I create a texture based on the viewport size & the chosen texture format, and then pass that into the backend with a new `backend.offscreen` method. This method performs a normal render pass on the new texture and optionally runs a tiny compute shader after drawing all primitives that converts it to `wgpu::TextureFormat::Rgba8Unorm`, with a manual sRGB conversion due to `wgpu::TextureFormat::Rgba8UnormSrgb` not being a supported format for a storage texture. The whole process took about ~2µs in the test example on my (admittedly very powerful!) test suite of GPUs (AMD, NVIDIA, M1). The only added overhead over a normal render pass is the compute shader that converts to RGBA.
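The texture setup roughly looks like this in wgpu (a sketch under the wgpu ~0.15/0.16 API; `device`, `width`, `height`, and `surface_format` are illustrative, not the PR's exact code):

```rust
// Offscreen render target: receives a normal render pass in the
// backend's usual format.
let render_target = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("iced offscreen render target"),
    size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: surface_format, // whatever the backend renders in
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT
        | wgpu::TextureUsages::TEXTURE_BINDING,
    view_formats: &[],
});

// Rgba8UnormSrgb is not allowed with STORAGE_BINDING, hence a plain
// Rgba8Unorm output plus the srgb() conversion in the compute shader.
let rgba_output = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("iced offscreen rgba output"),
    size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8Unorm,
    usage: wgpu::TextureUsages::STORAGE_BINDING
        | wgpu::TextureUsages::COPY_SRC,
    view_formats: &[],
});
```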
For `tiny-skia` this was very straightforward: it's the same as a normal `present` call, but rendering to a new buffer instead of the surface buffer. Our backend returns `ARGB` format, so I just had to do some bit shiftin' and we're all good.
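As a rough illustration of that repacking (assuming one `u32` per pixel with alpha in the high byte, as the ARGB description implies; this is not the PR's exact code):

```rust
/// Repack 0xAARRGGBB pixels into the tightly packed RGBA byte order
/// that `Screenshot` guarantees.
fn argb_to_rgba(pixels: &[u32]) -> Vec<u8> {
    let mut rgba = Vec::with_capacity(pixels.len() * 4);

    for &pixel in pixels {
        // Big-endian byte view of 0xAARRGGBB yields [A, R, G, B].
        let [a, r, g, b] = pixel.to_be_bytes();
        rgba.extend_from_slice(&[r, g, b, a]);
    }

    rgba
}
```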
🤔 Outstanding API questions

It was discussed whether it's a cleaner API to have a `window::screenshot` command which can optionally be cropped, or a `window::screenshot(area)` command where the area can be either `Fullscreen` or `Region`, which would be a rectangle. Obviously there is a performance hit when rendering the whole window to an offscreen buffer vs. a potentially very small region when only a small portion is desired, but the execution is so fast it's almost negligible. I'm curious as to people's thoughts on this!

The offscreen render capabilities are only exposed at the moment through this `window::screenshot` command. The intent was to implement the capabilities for offscreen rendering in this PR and expand upon them further in later iterations if new applications of offscreen rendering are needed. Curious to see if anyone has thoughts on how else this might be exposed!

🧪 Testing
Tested the wgpu & tiny-skia implementations on macOS + M1, Linux (Pop!_OS) + AMD, and Windows 10 + NVIDIA combos. Further testing would be greatly appreciated!
Feedback very much welcome! This was my first time working this deeply with textures on the GPU, so I might have made some rookie mistakes 😉 I also know there is an open draft PR for offscreen rendering, but it hasn't been updated in a few years, so I thought I'd take a crack at it.
(Reopened from #1783, which was auto-closed due to the `advanced-text` merge.)