How the composer plays music
The composer is beat based. The song has a certain BPM; the duration of each tick is calculated by taking the duration of one beat and multiplying it by the tempo changer of the column, which can be 1, 1/2, 1/4 or 1/8 of the speed.
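As a rough sketch of that calculation (the names `bpm` and `tempoChanger` are assumptions for illustration, not the app's actual identifiers):

```typescript
// Sketch: duration of one tick given the song BPM and the column's tempo changer.
function getTickDuration(bpm: number, tempoChanger: number): number {
    const beatDurationMs = 60000 / bpm      // duration of one beat in milliseconds
    return beatDurationMs * tempoChanger    // tempoChanger is 1, 1/2, 1/4 or 1/8
}

// Example: at 220 BPM with a 1/4 tempo changer, one tick lasts roughly 68 ms.
const tick = getTickDuration(220, 1 / 4)
```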
Once the tick selects the column, every note in that column is played. Each note can be part of different layers, each layer being an instrument. A note belongs to a layer if the flag for that layer (read from left to right) is 1; if it is 0, the note is ignored for that layer.
With the combination of those four instruments, tempo changers and BPM, any song that fits within the octaves of the instruments can be produced. A sketch of how a column's notes might be dispatched to the layers is shown below.
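The following sketch uses the string-of-flags layer encoding described above; the types and names are assumptions and the actual data structures in the app may differ:

```typescript
// Hypothetical shapes for a note and a column, for illustration only.
interface Note {
    index: number   // which key of the instrument to play
    layers: string  // e.g. "1010": a '1' at position i means layer i plays this note
}
interface Column {
    tempoChanger: number
    notes: Note[]
}

function playColumn(column: Column, instruments: { play(noteIndex: number): void }[]) {
    for (const note of column.notes) {
        // Read the layer flags left to right; a '1' means the note belongs to that layer.
        for (let layer = 0; layer < instruments.length; layer++) {
            if (note.layers[layer] === "1") instruments[layer].play(note.index)
        }
    }
}
```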
The app uses the Web Audio API, which allows it to add reverb and change the pitch. You can view a diagram of how it works here:
- The audioBufferSource is the actual note being played. Its audio buffer is fetched once when the instrument is loaded, and every time the note has to be played a new bufferSource is created and the buffer is connected to it. The pitch can also be changed, by calculating the speed shift as 2^(pitch / 12) (see the sketch after this list).
- The gain nodes to the left are the instruments; they control the volume of each instrument.
- The convolver node applies the reverb; it is connected or disconnected depending on whether reverb is on or off.
- The last gain node handles the volume of the reverb.
- The destination is the final connection to the "speakers".
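A minimal sketch of that graph, assuming a decoded note buffer and a reverb impulse response are already available (the node names are illustrative, not the app's actual ones):

```typescript
const ctx = new AudioContext()

const instrumentGain = ctx.createGain()   // one gain node per instrument, controls its volume
const reverbGain = ctx.createGain()       // controls the volume of the reverb
const convolver = ctx.createConvolver()   // applies the reverb impulse response
// convolver.buffer = impulse             // the impulse response would be set here

// With reverb on: instrument gain -> convolver -> reverb gain -> speakers.
instrumentGain.connect(convolver)
convolver.connect(reverbGain)
reverbGain.connect(ctx.destination)
// With reverb off, the instrument gain would connect straight to ctx.destination instead.

function playNote(noteBuffer: AudioBuffer, pitch: number) {
    // A new buffer source is created for every played note.
    const source = ctx.createBufferSource()
    source.buffer = noteBuffer
    // Shift the pitch by `pitch` semitones by changing the playback speed.
    source.playbackRate.value = Math.pow(2, pitch / 12)
    source.connect(instrumentGain)
    source.start()
}
```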
The composer (and also the main page) can record and download the audio as a .wav by using a custom audio recorder implementation.
In short, it uses a MediaRecorder and a MediaStreamAudioDestinationNode. Instead of connecting the final gain node to the destination, the audio is proxied to the MediaStreamAudioDestinationNode; the MediaRecorder listens to the node's stream and saves the data.
Once recording is complete, the audio is converted to a .wav and downloaded.
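A minimal sketch of that recording path, assuming a final gain node stands in for the app's master output (names are assumptions):

```typescript
const ctx = new AudioContext()
const masterGain = ctx.createGain() // stands in for the app's final gain node

// Route the audio into a MediaStreamAudioDestinationNode instead of the speakers.
const recordingDestination = ctx.createMediaStreamDestination()
masterGain.connect(recordingDestination)

// The MediaRecorder listens to the node's stream and collects the recorded data.
const recorder = new MediaRecorder(recordingDestination.stream)
const chunks: Blob[] = []
recorder.ondataavailable = (event) => chunks.push(event.data)
recorder.onstop = () => {
    const recording = new Blob(chunks, { type: recorder.mimeType })
    // ...the recording would then be converted to a .wav and downloaded...
}

recorder.start()
// later: recorder.stop()
```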