
.initRenderTarget missing in WebGPURenderer #29898

Closed
Makio64 opened this issue Nov 15, 2024 · 9 comments

@Makio64 (Contributor) commented Nov 15, 2024

Description

The initRenderTarget() method is missing in WebGPURenderer:
https://threejs.org/docs/#api/en/renderers/WebGLRenderer.initRenderTarget
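For reference, a minimal sketch of how the existing WebGLRenderer call is used (the scene setup and sizes are placeholders, not part of this issue):

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const rt = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

// Allocate the render target's GPU resources up front,
// instead of lazily on its first use.
renderer.initRenderTarget( rt );
```

There is currently no equivalent call on WebGPURenderer.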

Solution

Implement it.

Alternatives

Maybe there is another way in WebGPU to init it before using it that I'm not aware of.

Additional context

No response

@Makio64 (Contributor, Author) commented Nov 15, 2024

It appears WebGPURenderer is also missing initTexture().

@RenaudRohlinger added this to the r171 milestone Nov 15, 2024
@sunag (Collaborator) commented Nov 16, 2024

[image]

WebGPURenderer should initialize the texture automatically if it is used with copyTextureToTexture().
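A minimal sketch of the flow being described, assuming renderer is a WebGPURenderer and srcTexture / dstTexture are placeholder textures:

```js
// If dstTexture has no GPU-side texture yet, the renderer is expected to
// create and initialize it on demand as part of this copy.
renderer.copyTextureToTexture( srcTexture, dstTexture );
```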

@Makio64 (Contributor, Author) commented Nov 18, 2024

> [image]
> WebGPURenderer should initialize the texture automatically if it is used with copyTextureToTexture().

I also want to call it before displaying the render target for the first time, to initialize the render texture in memory before drawing to it and avoid extra upload time; the same goes for initTexture(), to avoid the upload when it matters.
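A sketch of this prewarming pattern using the existing WebGLRenderer methods (the asset path and sizes are placeholders):

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();

// Upload the texture to the GPU now, so the first frame that samples it
// does not pay the upload cost.
const texture = new THREE.TextureLoader().load( 'large-texture.jpg', () => {
	renderer.initTexture( texture );
} );

// The request is an equivalent way to pre-allocate a render target in
// WebGPURenderer before the first frame that draws into it.
const rt = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
renderer.initRenderTarget( rt );
```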

@Mugen87 (Collaborator) commented Nov 23, 2024

I'm not sure I understand the use case of initRenderTarget() in the context of WebGPURenderer. In WebGLRenderer it was only added to fix the usage of uninitialized render target textures with copyTextureToTexture(), see #28282 (comment). As pointed out in #29898 (comment), that step is not required in WebGPURenderer.

Unlike initTexture(), there is no texture decode overhead to avoid with initRenderTarget(). Is the framebuffer creation and configuration a noticeable overhead?

@Makio64 (Contributor, Author) commented Nov 25, 2024

@Mugen87 I got into the habit of initializing all textures before using them, and I thought I should do the same for render targets.
That said, in a current project, when I use a render target that hasn't been used yet, I can feel a short latency during the frame, which makes sense since the render target initializes a screen-sized texture, right? Maybe I've got something wrong about how render targets work in WebGPU.

@Mugen87 (Collaborator) commented Nov 25, 2024

> @Mugen87 I got into the habit of initializing all textures before using them, and I thought I should do the same for render targets.

I'm afraid this isn't correct, since the purpose of initTexture() is to eliminate the texture decode overhead, which is irrelevant for render targets.

> That said, in a current project, when I use a render target that hasn't been used yet, I can feel a short latency during the frame, which makes sense since the render target initializes a screen-sized texture, right?

Please demonstrate with a live example the performance issue with uninitialized render targets.

To be clear, unlike initTexture(), the initRenderTarget() method was not added to fix a performance issue. It was a purely functional change to fix a use case with copyTextureToTexture().

Right now, I do not see enough arguments to add initRenderTarget() to WebGPURenderer.

@Makio64 (Contributor, Author) commented Nov 25, 2024

Thanks @Mugen87!
Just to be clear: if I create a render target the size of my screen, doesn't it create a texture of the same size on the GPU, which will require allocation later, on the first render to this target?

Should I use renderer.initTexture( rt.texture ), or is it totally pointless?

I will try to put together an example of it next weekend.

@Mugen87 (Collaborator) commented Nov 25, 2024

> Just to be clear: if I create a render target the size of my screen, doesn't it create a texture of the same size on the GPU, which will require allocation later, on the first render to this target?

No, it does create a texture. I'm interested in the overhead you encounter in your use case that is caused by pure memory allocation.

> Should I use renderer.initTexture( rt.texture ), or is it totally pointless?

I'm not sure how this method affects the actual allocation of GPU objects when used with render target textures. Even if something like gl.createTexture() is done in advance, the framebuffer setup happens at a later point in time. So I'm not sure how this affects performance.
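For illustration, the distinction referred to here looks roughly like this at the raw WebGL level (plain WebGL, not three.js; gl, width and height are assumed to exist):

```js
// The texture object and its storage can be created in advance...
const tex = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, tex );
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null );

// ...but the framebuffer that renders into that texture is typically only
// set up later, when the render target is first used.
const fb = gl.createFramebuffer();
gl.bindFramebuffer( gl.FRAMEBUFFER, fb );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0 );
```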

@mrdoob modified the milestones: r171, r172 Nov 29, 2024
@Mugen87 (Collaborator) commented Dec 19, 2024

Closing. We can reopen the issue when a performance issue with framebuffer creation can be demonstrated.

@Mugen87 closed this as not planned Dec 19, 2024