Deployed at: https://yug34.github.io/audio-visualizer/
Built on the Web Audio API. Audio can come from three sources:
- Sound generated through JS (Oscillator).
- Sound from user's microphone.
- Sound from an audio file that a user can upload.
The app renders three visualizations of the audio:
- A frequency-domain plot of the audio.
- A time-domain plot (waveform) of the audio.
- An "audio level": the average magnitude across all frequency bins in the current sample.
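The "audio level" reduces a whole frequency snapshot to one number. A minimal sketch of that computation (the function name is illustrative, not from the app's source; in the app the byte array would be filled by `analyser.getByteFrequencyData`):

```typescript
// Hypothetical helper: average the byte frequency data an AnalyserNode
// produces. Each bin is 0..255, so the result is also 0..255.
function computeAudioLevel(bins: Uint8Array): number {
  if (bins.length === 0) return 0;
  let sum = 0;
  for (const v of bins) sum += v;
  return sum / bins.length;
}

// With real data, `bins` is refreshed each animation frame via
// analyser.getByteFrequencyData(bins). Here, synthetic data:
const fakeBins = new Uint8Array([0, 128, 255, 129]);
console.log(computeAudioLevel(fakeBins)); // (0 + 128 + 255 + 129) / 4 = 128
```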
The audio input received from the microphone/file is split with a ChannelSplitterNode and connected to two gain nodes, one for each channel (left and right).
If you are using oscillators to generate sound, there are two oscillators, one for each channel, each connected to its own gain node.
The audio source nodes (microphone/file/oscillators) are connected to two AnalyserNodes, one per channel, to collect audio data for visualization. Finally, the channels are recombined with a ChannelMergerNode, which is connected to the audio context's destination (your speakers).
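The routing above can be sketched as follows. This is a minimal, browser-only illustration (the Web Audio API is not available outside a browser), and the variable names are mine, not the app's:

```typescript
// Per-channel routing: source → splitter → gain → analyser → merger → speakers.
const ctx = new AudioContext();

function buildGraph(source: AudioNode) {
  const splitter = ctx.createChannelSplitter(2); // left = output 0, right = output 1
  const merger = ctx.createChannelMerger(2);
  const gains = [ctx.createGain(), ctx.createGain()];
  const analysers = [ctx.createAnalyser(), ctx.createAnalyser()];

  source.connect(splitter);
  for (const ch of [0, 1]) {
    splitter.connect(gains[ch], ch);      // take one channel off the splitter
    gains[ch].connect(analysers[ch]);     // tap the signal for visualization
    analysers[ch].connect(merger, 0, ch); // put it back onto its channel
  }
  merger.connect(ctx.destination);        // out to the speakers
  return { gains, analysers };
}

// Example with a microphone source:
// const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
// buildGraph(ctx.createMediaStreamSource(stream));
```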
The web app lets you control the gain of each output channel independently. When using oscillators, you can also control each channel's oscillator frequency independently.
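Those per-channel controls boil down to setting `AudioParam` values. A hedged sketch, assuming gain and oscillator nodes like the ones wired up above (browser-only; the helper names are illustrative):

```typescript
const audioCtx = new AudioContext();

// Ramp the gain toward the new value; setTargetAtTime avoids the
// audible click that an abrupt jump in gain can cause.
function setChannelGain(gain: GainNode, value: number) {
  gain.gain.setTargetAtTime(value, audioCtx.currentTime, 0.01);
}

// Change an oscillator's pitch for one channel.
function setOscillatorFrequency(osc: OscillatorNode, hz: number) {
  osc.frequency.setValueAtTime(hz, audioCtx.currentTime);
}

// e.g. left channel at half volume, right oscillator at 440 Hz:
// setChannelGain(gains[0], 0.5);
// setOscillatorFrequency(rightOsc, 440);
```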
I got a bit carried away with this project and kept adding unplanned features, so I didn't focus much on code organization (the main App.tsx component is way too big).