Output screen texture to png #1207

Closed
ctangell opened this issue Jan 4, 2021 · 12 comments · Fixed by #7163
Labels
A-Rendering (Drawing game state to the screen), C-Enhancement (A new feature)

Comments

@ctangell

ctangell commented Jan 4, 2021

What problem does this solve or what need does it fill?

This would enable using Bevy as a general-purpose simulation tool for robotics and machine learning research, by allowing the screen texture to be piped over to a machine learning engine, or by incorporating inference directly in Bevy.

What solution would you like?

The ideal solution would be an additional node that can be attached to the render pipeline and returns a PNG of what is shown on the screen. Being able to throttle it, so a capture is produced every 10 or 100 frames rather than every frame, would also help with performance.
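A minimal sketch of the throttling idea (the counter is the easy part; how the capture itself gets triggered is the open question here):

struct CaptureSettings {
    every_n_frames: u64,
    frame: u64,
}

impl CaptureSettings {
    // returns true once every `every_n_frames` calls
    fn should_capture(&mut self) -> bool {
        self.frame += 1;
        self.frame % self.every_n_frames == 0
    }
}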

So far, I have tried the following on my own:

First, I added the following function to WgpuRenderResourceContext to copy a texture to a buffer:

#[allow(clippy::too_many_arguments)]
pub fn copy_texture_to_buffer(
    &self,
    command_encoder: &mut wgpu::CommandEncoder,
    source_texture: TextureId,
    source_origin: [u32; 3], // TODO: replace with math type
    source_mip_level: u32,
    destination_buffer: BufferId,
    destination_offset: u64,
    destination_bytes_per_row: u32,
    size: Extent3d,
) {
    let buffers = self.resources.buffers.read();
    let textures = self.resources.textures.read();

    let source = textures.get(&source_texture).unwrap();
    let destination = buffers.get(&destination_buffer).unwrap();

    command_encoder.copy_texture_to_buffer(
        wgpu::TextureCopyView {
            texture: source,
            mip_level: source_mip_level,
            origin: wgpu::Origin3d {
                x: source_origin[0],
                y: source_origin[1],
                z: source_origin[2],
            },
        },
        wgpu::BufferCopyView {
            buffer: destination,
            layout: wgpu::TextureDataLayout {
                offset: destination_offset,
                bytes_per_row: destination_bytes_per_row,
                rows_per_image: size.height,
            },
        },
        size.wgpu_into(),
    );
}

Then I added the following function to impl RenderResourceContext for WgpuRenderResourceContext to read the buffer back from the GPU:

fn copy_buffer_to_png(&self, buffer_id: BufferId, descriptor: TextureDescriptor) {
    let buffers = self.resources.buffers.read();
    let buffer = buffers.get(&buffer_id).unwrap();

    // reading out from a buffer: https://github.com/gfx-rs/wgpu/issues/239
    let buffer_slice = buffer.slice(..);
    let buffer_future = buffer_slice.map_async(wgpu::MapMode::Read);
    self.device.poll(wgpu::Maintain::Wait);
    if future::block_on(buffer_future).is_ok() {
        let data = buffer_slice.get_mapped_range();

        match descriptor.format {
            TextureFormat::Bgra8UnormSrgb => {
                // saving for testing only; should output to a struct and pass that back
                image::save_buffer(
                    "test.png",
                    data.as_ref(),
                    descriptor.size.width,
                    descriptor.size.height,
                    image::ColorType::Bgra8,
                );
            }

            TextureFormat::Depth32Float => {
                // TODO: convert data to f32, scale, convert to u16, then to &[u8]
            }

            _ => (),
        };

        drop(data);
        buffer.unmap();
    }
}

Finally, I tried the following in fn update of impl Node for WindowTextureNode:

if let Some(RenderResourceId::Texture(texture)) = output.get(WINDOW_TEXTURE) {
    let render_resource_context = render_context.resources_mut();
    let descriptor = self.descriptor;
    let width = descriptor.size.width as usize;
    let aligned_width = render_resource_context.get_aligned_texture_size(width);
    let format_size = descriptor.format.pixel_size();
    println!("{} {:?}", descriptor.format.pixel_info().type_size, descriptor.size);
    println!("{:?}", descriptor.format);

    let texture_buffer = render_resource_context.create_buffer(BufferInfo {
        size: descriptor.size.volume() * format_size,
        buffer_usage: BufferUsage::MAP_READ | BufferUsage::COPY_DST,
        mapped_at_creation: false,
    });

    render_context.copy_texture_to_buffer(
        texture,
        [0, 0, 0],
        0,
        texture_buffer,
        0,
        (format_size * aligned_width) as u32,
        descriptor.size,
    );

    // re-borrow the resource context after the copy
    let render_resource_context = render_context.resources_mut();
    render_resource_context.copy_buffer_to_png(texture_buffer, descriptor);

    // remove the created buffer... for now
    render_resource_context.remove_buffer(texture_buffer);
}

Unfortunately that is as far as I got: the resulting PNG image is empty (what comes out is an array of zeros). Ideally this should be its own node attached to the end of the render pipeline, after the screen texture is written.
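One likely culprit, though I haven't verified it: wgpu requires bytes_per_row in texture-to-buffer copies to be a multiple of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT (256). The code above sizes the buffer with the unpadded width but copies with the aligned width, so the two disagree whenever width * pixel size isn't already a multiple of 256. A sketch of the padding math (padded_bytes_per_row is a helper name I made up):

// Round a row size up to wgpu's required copy alignment.
fn padded_bytes_per_row(width: u32, bytes_per_pixel: u32) -> u32 {
    let unpadded = width * bytes_per_pixel;
    let align = wgpu::COPY_BYTES_PER_ROW_ALIGNMENT; // 256
    ((unpadded + align - 1) / align) * align
}

The staging buffer then needs padded_bytes_per_row(width, bytes_per_pixel) * height bytes, not width * height * bytes_per_pixel.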

What alternative(s) have you considered?

Not understanding how the render pipeline works, I chose to try window_texture_node.rs, as that is the only place with the necessary bevy_render::texture::TextureDescriptor for the screen buffer. From a comment on Discord, it seems the texture to extract actually lives in window_swapchain_node.rs. The problem is that that node does not carry the TextureDescriptor needed to do the texture -> buffer -> PNG copying. So some way of passing information from the WindowTextureNode (where the relevant TextureDescriptor is stored) to whichever node actually has access to the final screen texture is needed.

I tried to understand the default render pipeline in base.rs but couldn't make much sense of what was being passed around.

Additional context

The above code causes the game to crash when the window is resized.

Additionally, a compute shader that converts the depth buffer to a u16 buffer, scaled to match the output of a physical depth camera (like an Intel RealSense), would be super handy.
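A CPU-side sketch of that depth conversion (a compute shader would do the same per texel; the scale factor and the little-endian assumption are arbitrary here):

// Reinterpret mapped Depth32Float bytes as f32 and scale into u16 range.
fn depth_to_u16(data: &[u8], scale: f32) -> Vec<u16> {
    data.chunks_exact(4)
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .map(|depth| (depth * scale).max(0.0).min(u16::MAX as f32) as u16)
        .collect()
}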

@TheRawMeatball
Member

I have an untested version of this code in this branch. It might be worth looking over. Also related: #1159

@mockersf
Member

mockersf commented Jan 4, 2021

also related: #22

@ctangell
Author

ctangell commented Jan 5, 2021

also related: #22

I think issue #22 is higher priority, or basically the same thing, because it has the advantage of disconnecting the camera from a view screen, and it eliminates the bug where resizing the window crashes the game.

I'm happy to help out on this issue and get it moving forward, but I would need somebody to walk me through the render pipeline and explain how it works. I spent a day poking at its internals and couldn't make sense of the design decisions or how all the parts work together.

@ctangell
Author

ctangell commented Jan 9, 2021

I started working on this branch, a fork of @TheRawMeatball's branch, where I tried to implement an example based on his code and fixed some bugs.

The ReadTextureNode node currently throws an error saying the buffer is not large enough to copy the texture into, while the example code (commented out, beneath the main impl) worked when I tested it. The two pieces of code are essentially identical now, so I don't know what's wrong there; I could use more experienced eyes to figure that out. If I artificially increase the size of the buffer, I then get a different error. Again, the original example code worked, so I don't know what the issue is.

The example I've been working on is write_to_png.rs in the examples/window folder, which is based on the multiple_windows.rs example. I basically just add a ReadTextureNode that currently reads off of the WindowTextureNode node, starting on line 209.
It's currently connected to WindowTextureNode (which has a texture that exists) and not WindowSwapChainNode because the texture from WindowSwapChainNode doesn't exist by the time ReadTextureNode is called.

Outstanding problems:

  • buffer (for some reason) is not the right size for the copy (see the row-padding sketch below)
  • Even if the buffer is big enough, another error is thrown
  • The color texture from WindowSwapChainNode doesn't exist by the time ReadTextureNode runs
  • Need to get out the data from ReadTextureNode and access it in a system
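If the row-alignment guess from the issue description is right, the first two problems would follow from sizing the buffer with the unpadded width; on readback, the padding then also has to be stripped before the pixels can be encoded. A self-contained sketch (all names mine):

// Drop the per-row padding so the pixels are tightly packed for an encoder.
fn unpad_rows(data: &[u8], width: usize, height: usize, bytes_per_pixel: usize) -> Vec<u8> {
    let unpadded = width * bytes_per_pixel;
    let align = 256; // wgpu::COPY_BYTES_PER_ROW_ALIGNMENT
    let padded = (unpadded + align - 1) / align * align;
    let mut pixels = Vec::with_capacity(unpadded * height);
    for row in data.chunks(padded).take(height) {
        pixels.extend_from_slice(&row[..unpadded]);
    }
    pixels
}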

@mrk-its
Member

mrk-its commented Jan 12, 2021

@ctangell take a look here: https://github.com/mrk-its/bevy/tree/render_to_texture - it adds a working 3d/render_to_texture.rs example.

@Moxinilian added the C-Enhancement (A new feature) and A-Rendering (Drawing game state to the screen) labels on Jan 15, 2021
@rmsc
Contributor

rmsc commented Jan 20, 2021

I'm also interested in getting this working.

@ctangell I'm not sure if this still matters, but there's a problem with the code you used to write the PNG file:

image::save_buffer(
    "test.png",
    data.as_ref(),
    descriptor.size.width,
    descriptor.size.height,
    image::ColorType::Bgra8,
);

The save_buffer() function is actually failing silently. If you unwrap() the return value, you get an error saying the PNG encoder doesn't support the Bgra8 format.
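One way around the encoder limitation might be to swap channels on the CPU and save as Rgba8 instead; a minimal sketch, assuming data holds tightly packed Bgra8 pixels (padded rows would need the padding stripped first):

let mut pixels = data.to_vec();
for px in pixels.chunks_exact_mut(4) {
    px.swap(0, 2); // BGRA -> RGBA
}
image::save_buffer(
    "test.png",
    &pixels,
    descriptor.size.width,
    descriptor.size.height,
    image::ColorType::Rgba8,
)
.unwrap(); // surface the error instead of failing silently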

That said, I still haven't figured out how to make it work, and saving a JPEG results in a black image.

@mrk-its That's a great example of rendering to a texture, thanks! I've been playing with it and trying to save the texture to an image file, but so far unsuccessfully.

Not to hijack this issue, but I'm still trying to figure out how the render graph works. There are a few edges in your example that don't seem to be needed for it to work, namely these:

self
    .add_node_edge(TEXTURE_NODE, FIRST_PASS)
    .unwrap();

and

self.add_node_edge(FIRST_PASS, MAIN_PASS).unwrap();
self.add_node_edge("transform", FIRST_PASS).unwrap();

Could you please let me know if they're really needed and why? Thanks!

@rmsc
Contributor

rmsc commented Jan 21, 2021

@mrk-its Never mind, I actually figured out why those edges are needed. I guess I was lucky that it worked without them.

I've finally managed to get it working. I started with the render_to_texture example and then implemented a custom render node that takes the TEXTURE_NODE texture as input and writes it to a file. That node has to depend on FIRST_PASS, so that it waits for the render to finish.
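A sketch of that wiring, using the same render-graph calls as the example (SAVE_NODE, SaveToFileNode, and the slot names are placeholders of mine):

// The file-writing node consumes the texture produced by TEXTURE_NODE and
// is ordered after FIRST_PASS so it runs once the render has finished.
render_graph.add_node(SAVE_NODE, SaveToFileNode::default());
render_graph
    .add_slot_edge(TEXTURE_NODE, "texture", SAVE_NODE, "texture")
    .unwrap();
render_graph.add_node_edge(FIRST_PASS, SAVE_NODE).unwrap();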

@rmsc
Contributor

rmsc commented Jan 23, 2021

Here's a working (but hacky) example of rendering to a jpeg file: https://github.com/rmsc/bevy/tree/render_to_file

@alice-i-cecile
Member

Summarizing related feedback from @leonsver1:

  1. bevy_render has a function that converts from DynamicImage to Texture.
  2. The image crate that bevy_render depends on should be re-exported, like bevy_math does for glam_rs.
  3. Conversion functions for Texture should be methods on Texture for better visibility.
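A sketch of what items 2 and 3 might look like in bevy_render (names illustrative; the method body would delegate to the existing conversion from item 1):

// Item 2: re-export the dependency, the way bevy_math re-exports glam.
pub use image;

// Item 3: expose the conversion as a method for discoverability.
impl Texture {
    pub fn from_dynamic_image(dynamic_image: image::DynamicImage) -> Self {
        todo!("delegate to the existing DynamicImage -> Texture conversion")
    }
}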

@mattdm

mattdm commented Mar 16, 2021

Here's a working (but hacky) example of rendering to a jpeg file: https://github.com/rmsc/bevy/tree/render_to_file

I have a use case which is basically a subset of this. I guess it's similar to the egui issue, but not specific to egui.

I would like to be able to render textures to another texture, like a TextureAtlas but more general. (As a one-time thing when called; that texture then shouldn't get re-rendered from its sources.) This is useful in a number of ways, like dynamically generating sprites or panels that don't need to be re-rendered, or generating a map from many tile textures. (Ideally, that texture could then be used in multiple places.)

As I understand the render-to-texture example, it actually renders every frame; the texture basically becomes a live view into another world. My need is more like the save-to-file case, except instead of actually writing a file, it would modify the texture in place.

I'm imagining this working basically like .add_texture() for TextureAtlas... something like

my_canvas_texture.draw_texture(my_source_texture, source_rect, dest_rect);

which would take the pixels of my_source_texture inside source_rect and draw them into my_canvas_texture at the location and scale given by dest_rect.

It would be nice for there also to be:

my_canvas_texture.draw_texture_from_atlas(my_source_textureatlas, texture_index, source_rect, dest_rect);

where of course texture_index is the index of the actual part of the atlas desired. I guess source_rect could be optional in both of these functions, or maybe even omitted in the latter.

my_canvas_texture could then be used as the texture for a Sprite, and drawn in the world.

It's my understanding that, as currently written, the functions to create a TextureAtlas just use the CPU and main memory. So this feature could actually be used to refactor that: TextureAtlas is basically a use case of what I'm looking for.
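A CPU-side sketch of the rect copy I have in mind, ignoring scaling and assuming both textures are tightly packed RGBA8 (all names made up):

// Copy a w x h pixel rect from src into dst, row by row.
// Widths are in pixels; 4 bytes per pixel.
fn copy_rect(
    src: &[u8], src_width: usize, (sx, sy, w, h): (usize, usize, usize, usize),
    dst: &mut [u8], dst_width: usize, (dx, dy): (usize, usize),
) {
    for row in 0..h {
        let s = ((sy + row) * src_width + sx) * 4;
        let d = ((dy + row) * dst_width + dx) * 4;
        dst[d..d + w * 4].copy_from_slice(&src[s..s + w * 4]);
    }
}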

There could also be a "blit" version of this which uses fancy material/shader stuff, but I'm a humble old-school 2d person and don't understand any of that. :)

Also beyond my understanding, but: if this is using the same basic rendering pipeline as drawing to the screen (even though only on demand rather than part of the loop), this could then automatically benefit from future improvements like render batching and culling.

I can file this all as a separate RFE if that would be helpful, but it seems everything here is all so related that it might actually be close to being solved as basically a side-effect.

@alice-i-cecile
Member

As a natural extension of this, it would be very nice to be able to natively capture the screen (or a camera view) to a .gif animation file.

@ogxd

ogxd commented Mar 7, 2022

Has a less hacky / simpler solution to read a camera's pixels appeared since then (for saving to a file, for instance)?
