- Render the previous frequency data, shifted by one column, to the output texture.
- Render the new frequency slice into a 1px viewport (the freed column).
- Render the texture to the renderbuffer (the screen).
- Swap the input and output textures (see the sketch after this list).
- Provide a setData(data) method that fills the last texture column with the data slice.
- But then we need fftSize to recognize the height, or some other way of passing the size.
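A minimal sketch of this ping-pong pass, assuming a WebGL2 context. All identifiers here (`createTarget`, `pushSlice`, etc.) are illustrative, not an existing API; the freed column is filled directly via `texSubImage2D` (the setData route) rather than a second 1px-viewport draw:

```js
const W = 800, H = 256;                      // H would be fftSize / 2 in practice
const canvas = document.createElement('canvas');
canvas.width = W; canvas.height = H;
document.body.appendChild(canvas);
const gl = canvas.getContext('webgl2');
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);      // tightly packed rows for the 1px column upload

// Fullscreen triangle from gl_VertexID; no attribute buffers needed.
const VERT = `#version 300 es
void main () {
  vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
  gl_Position = vec4(p * 2. - 1., 0., 1.);
}`;
// Copy the source texture shifted by `shift` texels along x.
const FRAG = `#version 300 es
precision highp float;
uniform sampler2D src;
uniform float shift;
out vec4 color;
void main () {
  vec2 uv = (gl_FragCoord.xy + vec2(shift, 0.)) / vec2(${W}., ${H}.);
  color = vec4(vec3(texture(src, uv).r), 1.);
}`;

function compile (type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) throw Error(gl.getShaderInfoLog(shader));
  return shader;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, VERT));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, FRAG));
gl.linkProgram(prog);
const uSrc = gl.getUniformLocation(prog, 'src');
const uShift = gl.getUniformLocation(prog, 'shift');

// Two single-channel textures, each attached to its own framebuffer.
function createTarget () {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, W, H, 0, gl.RED, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  return {texture, framebuffer};
}
let input = createTarget();
let output = createTarget();

function pushSlice (slice) {                 // slice: Uint8Array of length H
  gl.useProgram(prog);
  gl.uniform1i(uSrc, 0);                     // sample texture unit 0
  // 1. render the input texture shifted one column left into the output texture
  gl.bindFramebuffer(gl.FRAMEBUFFER, output.framebuffer);
  gl.viewport(0, 0, W, H);
  gl.bindTexture(gl.TEXTURE_2D, input.texture);
  gl.uniform1f(uShift, 1);
  gl.drawArrays(gl.TRIANGLES, 0, 3);
  // 2. fill the freed last column with the new slice (the setData step)
  gl.bindTexture(gl.TEXTURE_2D, output.texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, W - 1, 0, 1, H, gl.RED, gl.UNSIGNED_BYTE, slice);
  // 3. render the result, unshifted, to the default renderbuffer (the screen)
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, W, H);
  gl.uniform1f(uShift, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 3);
  // 4. swap input and output for the next push
  [input, output] = [output, input];
}
```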
- Isolate the .render method into a .push method or alike, where audio data rendering is unbound from the raf.
- That way, rendering a full waveform is just a cycle of pushing frequency slices; we don't have to care about fftSize etc.
- That way, raf automatically binds the spectrum to realtime.
- That way we avoid a speed setting: it can be regulated by repeated pushes, e.g. pushing 10 times per frame.
- Smoothing starts making sense, provided that the spacing between pushed slices is constant.
- That way we avoid a playback API.
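A usage sketch of this push-based idea: `pushSlice` comes from the sketch above, and the AnalyserNode wiring is an assumption for illustration, not part of the module. In the realtime case raf drives the pushes, so speed and smoothing fall out of push frequency and constant slice spacing:

```js
// Hypothetical realtime usage of pushSlice(); the audio graph is illustrative.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 512;                       // frequencyBinCount = 256 = H above
analyser.smoothingTimeConstant = 0.8;         // meaningful: raf gives ~constant slice spacing
// connect a source here, e.g. audioCtx.createMediaElementSource(audio).connect(analyser)

const slice = new Uint8Array(analyser.frequencyBinCount);

function frame () {
  analyser.getByteFrequencyData(slice);
  pushSlice(slice);                           // once per frame: realtime speed
  // pushSlice(slice); pushSlice(slice);      // N pushes per frame: N× speed, no speed option
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// Offline: a full waveform is just a cycle of pushes, no playback API:
// for (const s of precomputedSlices) pushSlice(s);
```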