
JS implementation? #26

Open
wcandillon opened this issue Jul 27, 2024 · 6 comments

@wcandillon

Congratulations on this amazing work. I would like to run some of these shaders on React Native, where WebGPU is available via a JS binding.
I was looking at lib.rs to get a sense of how much boilerplate would need to be rebuilt to run it in JS.
Do you think it is doable? If I were to tackle this, would you be open to giving me some guidelines on what would need to be done to get this working?

@munrocket
Contributor

WASM already runs in the JS VM, except Hermes: facebook/hermes#429

If someone ports compute.toys to JS, we will probably switch to JavaScript, because wgpu does not support some experimental web-sys features: gfx-rs/wgpu#5685

@davidar
Contributor

davidar commented Dec 21, 2024

I've started porting the engine to TypeScript in compute-toys/compute.toys#218. It's not yet complete enough to run every shader on the site, but many of them work already.

(I am getting fed up with the WebAssembly ecosystem still continuing to randomly break shit after all these years... next year is its 10-year anniversary and apparently stability is still nowhere in sight)

@wcandillon
Author

@davidar thank you for the update, this is very interesting.

@wcandillon
Author

@davidar Thank you again for your PR, this is very useful. I have a very beginner question: what are the benefits of using a compute shader instead of a quad + fragment shader? Are the benefits substantial? Sorry if it sounds like a "silly" question.

@davidar
Contributor

davidar commented Dec 31, 2024

With fragment shaders the main limitation is that (roughly speaking) each thread is associated with a specific pixel, so your code has to fit into the form of a function which takes a pixel coordinate as the input and outputs the pixel colour. People on shadertoy have done lots of very clever things within this restriction (particularly voronoi tracking and jump flood), but it does limit the kinds of programs you can run.
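For concreteness, here is a minimal sketch of that contract (illustrative only, not code from compute.toys; the entry point name and the hard-coded resolution are my own), written as the kind of WGSL string a TypeScript/WebGPU host would pass to `createShaderModule`:

```typescript
// Fragment-shader model: one invocation per pixel, and the only thing a
// thread can output is the colour of its own pixel.
const fragmentWGSL = /* wgsl */ `
@fragment
fn fs_main(@builtin(position) fragCoord: vec4<f32>) -> @location(0) vec4<f32> {
  let uv = fragCoord.xy / vec2<f32>(1920.0, 1080.0); // assumed resolution
  return vec4<f32>(uv, 0.5, 1.0); // this pixel's colour, nothing else
}
`;
```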

The main extra feature that compute shaders bring is the ability to write outputs to random-access buffers. This means that, instead of each thread only being able to influence the colour of a single pixel, they can write values into arrays that influence the colour of many pixels across the entire frame. This is useful for a bunch of different things, particularly particle systems (where you want each thread attached to a particular particle which could be anywhere on the screen).
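As a hedged sketch of what that looks like in practice (not taken from the compute.toys port; `particles`, the buffer size, and the entry point are illustrative), here is a TypeScript/WebGPU compute pass that advances a particle buffer each frame, which a later render pass could then splat to arbitrary pixels:

```typescript
// Sketch only: assumes a WebGPU-capable environment (a browser, or a JS
// binding such as react-native-webgpu). Names and sizes are illustrative.
const NUM_PARTICLES = 10000;

const computeWGSL = /* wgsl */ `
struct Particle {
  pos: vec2<f32>,
  vel: vec2<f32>,
}
@group(0) @binding(0) var<storage, read_write> particles: array<Particle>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  let i = id.x;
  if (i >= arrayLength(&particles)) { return; }
  // Each thread owns one particle and writes into a random-access buffer,
  // so its result can end up influencing any pixel in a later draw pass.
  particles[i].pos += particles[i].vel;
}
`;

async function runParticleStep() {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter!.requestDevice();

  // 16 bytes per particle: vec2<f32> position + vec2<f32> velocity.
  const particleBuffer = device.createBuffer({
    size: NUM_PARTICLES * 16,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: {
      module: device.createShaderModule({ code: computeWGSL }),
      entryPoint: 'main',
    },
  });

  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: particleBuffer } }],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(NUM_PARTICLES / 64));
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```

The key difference from the fragment case is the `var<storage, read_write>` binding: each thread can write to any element of the buffer, rather than only to the colour of its own pixel.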

A nice illustration of the difference is rendering the Buddhabrot fractal - with a fragment shader (and an accumulation buffer) it takes many frames to converge, whereas with a compute shader you can render the entire thing every frame.

@wcandillon
Author

Thank you so much David for taking the time to explain it to me. Really appreciate it and wishing you the best for the upcoming new year :)
