
Add AudioContext.createConvolver(), createBiquadFilter() & OfflineAudioContext bindings #1416

Open
Palmer-JC opened this issue Nov 21, 2019 · 12 comments


@Palmer-JC
Contributor

I am now doing quite a lot with WebAudio. The code base I started from is github.com/surikov/webaudiofont. It constructs a node for reverb, which makes the sound seem bigger (use the echo slider).

There do not appear to be bindings for these in Exokit, but the functionality is in the native C++ code.

The OfflineAudioContext is more of a nice-to-have, but it is also there at the native level.

I might try to do this myself, but if you could give a few hints, it would be more doable.
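For concreteness, here is roughly what the missing binding would enable from the JavaScript side. This is a minimal sketch of convolver-based reverb, assuming already-decoded AudioBuffers for the sample and the impulse response; the variable names and the dry/wet split are illustrative, not webaudiofont's actual code:

```js
// Minimal convolver reverb sketch (illustrative, not webaudiofont's code).
// `sampleBuffer` and `impulseBuffer` are assumed to be decoded AudioBuffers.
const ctx = new AudioContext();

const source = ctx.createBufferSource();
source.buffer = sampleBuffer;            // assumed to exist

const convolver = ctx.createConvolver(); // the binding this issue requests
convolver.buffer = impulseBuffer;        // assumed to exist

const dry = ctx.createGain();            // unprocessed path
const wet = ctx.createGain();            // reverb path; the "echo slider"
dry.gain.value = 0.7;                    // would scale wet.gain.value
wet.gain.value = 0.3;

source.connect(dry);
source.connect(convolver);
convolver.connect(wet);
dry.connect(ctx.destination);
wet.connect(ctx.destination);
source.start();
```

An OfflineAudioContext would render this same kind of graph faster than real time into an AudioBuffer via startRendering(), which is why it is a nice-to-have here.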

@Palmer-JC
Contributor Author

Probably gonna start this right after Thanksgiving. Making some notes for myself:

- The modulesio fork of LabSound is considerably older than its upstream. From a bindings standpoint, I can probably at least start with what is there, but there are probably fixes to pick up.

- Probably should start by making a little table from the Mozilla docs of the API and the gaps. Something like:

| Object / calls | In Exokit | In LabSound |
| --- | --- | --- |
| Context | | |
| createConvolver | No | Yes |

@Palmer-JC
Contributor Author

Palmer-JC commented Nov 25, 2019

Here are the various audio contexts. I will do the other objects in another comment.

| BaseAudioContext | In Exokit | In LabSound | Notes |
| --- | --- | --- | --- |
| AudioWorklet | No | No | Do not need |
| currentTime | Yes | Yes | |
| destination | Yes | Yes | |
| listener | Yes | Yes | |
| sampleRate | Yes | Yes | |
| state | No | No | Do not need |
| onstatechange() | No | No | Do not need |
| createAnalyser() | Yes | Yes | |
| createBiquadFilter() | No | Yes | MUST HAVE |
| createBuffer() | Yes | Yes | |
| createBufferSource() | Yes | Yes | |
| createConstantSource() | No | No | Do not need |
| createChannelMerger() | No | Yes | Doing, but do not need |
| createChannelSplitter() | No | Yes | Doing, but do not need |
| createConvolver() | No | Yes | MUST HAVE |
| createDelay() | No | Yes | Feedback loops might be nice |
| createEmptyBuffer() | Yes | Yes | NOT IN API |
| createDynamicsCompressor() | No | Yes | MUST HAVE |
| createGain() | Yes | Yes | |
| createIIRFilter() | No | No | Do not need |
| createOscillator() | Yes | Yes | |
| createPanner() | Yes | Yes | |
| createPeriodicWave() | No | No | Do not need |
| createScriptProcessor() | Yes | Yes | Deprecated in API |
| createStereoPanner() | Yes | Yes | |
| createWaveShaper() | No | Yes | Do not need |
| decodeAudioData() | Yes | Yes | |

| AudioContext | In Exokit | In LabSound | Notes |
| --- | --- | --- | --- |
| baseLatency | No | No | Do not need |
| outputLatency | No | No | Do not need |
| close() | Yes | Yes | Not in Offline |
| createMediaElementSource() | Yes | Yes | |
| createMediaStreamSource() | Yes | Yes | |
| createMediaTrackSource() | Yes | Yes | |
| getOutputTimestamp() | No | No | Do not need |
| resume() | Yes | Yes | |
| suspend() | Yes | Yes | |

| OfflineAudioContext | In Exokit | In LabSound | Notes |
| --- | --- | --- | --- |
| length | No | Not sure | Calculable, but do not use |
| startRendering() | No | Yes | No point without having it |
| resume() | No | No | Do not need |
| suspend() | No | No | Do not need |
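To make the MUST HAVE rows concrete, here is the kind of processing chain that cannot currently be built in Exokit. A hedged sketch; the node ordering is illustrative:

```js
// Sketch of a filter -> reverb -> compressor chain; all three factory
// methods marked below are the "MUST HAVE" gaps from the table above.
const ctx = new AudioContext();
const source = ctx.createBufferSource();            // stands in for any source

const filter = ctx.createBiquadFilter();            // missing in Exokit
const convolver = ctx.createConvolver();            // missing in Exokit
const compressor = ctx.createDynamicsCompressor();  // missing in Exokit

source.connect(filter);
filter.connect(convolver);
convolver.connect(compressor);
compressor.connect(ctx.destination);
```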

@avaer
Member

avaer commented Nov 25, 2019

Awesome, thanks for rounding up this info!

> The modulesio fork of LabSound is considerably older than its upstream. From a bindings standpoint, I can probably at least start with what is there, but there are probably fixes to pick up.

I agree we could pull in the latest LabSound, and it should work. Our differences are mostly hardware bindings that are not upstream (I think the generic buffer-based binding for Android), plus the buffering that was used for the deprecated ScriptProcessorNode.

For most of the node types I just created proxy objects for JavaScript access that parallel the LabSound audio processing graph, without changing the LabSound code. The main difficulty in adding new node types is getting the object linkage right so that everything is garbage collected at the correct time.
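A hypothetical illustration of that linkage concern (not exokit's actual code): the JS-side proxies have to hold references that mirror the native graph's edges, or the collector can reclaim an upstream node the native graph still uses.

```js
// Hypothetical proxy-object sketch, not exokit's actual implementation.
// Each proxy mirrors a native LabSound node; connect() records the edge on
// the JS side too, so a connected source stays reachable (and un-collected)
// for as long as the native graph can pull audio from it.
class AudioNodeProxy {
  constructor(nativeNode) {
    this._native = nativeNode; // opaque handle into the LabSound graph
    this._inputs = new Set();  // upstream proxies kept alive by this node
  }
  connect(destination) {
    destination._inputs.add(this);              // JS-side edge
    this._native.connect(destination._native);  // native-side edge
    return destination;
  }
  disconnect(destination) {
    destination._inputs.delete(this);           // drop the JS-side edge
    this._native.disconnect(destination._native);
  }
}
```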

> Wonder if ASM is going to work on devices?

Generally it should, though probably the best way to do decoding is to use whatever the platform provides.

@Palmer-JC
Contributor Author

Started on BiquadFilter. Question: it has both regular AudioParams and something probably requiring a fake property, `type`. Can both be done at the same time? Here is the LabSound header file:

```cpp
// License: BSD 2 Clause
// Copyright (C) 2011, Google Inc. All rights reserved.
// Copyright (C) 2015+, The LabSound Authors. All rights reserved.

#ifndef BiquadFilterNode_h
#define BiquadFilterNode_h

#include "LabSound/core/AudioBasicProcessorNode.h"

namespace lab {

class AudioParam;
class BiquadProcessor;

class BiquadFilterNode : public AudioBasicProcessorNode
{
    BiquadProcessor * biquadProcessor();

public:
    enum
    {
        LOWPASS = 0,
        HIGHPASS = 1,
        BANDPASS = 2,
        LOWSHELF = 3,
        HIGHSHELF = 4,
        PEAKING = 5,
        NOTCH = 6,
        ALLPASS = 7
    };

    BiquadFilterNode();

    unsigned short type();
    void setType(unsigned short type);

    std::shared_ptr<AudioParam> frequency();
    std::shared_ptr<AudioParam> q();
    std::shared_ptr<AudioParam> gain();
    std::shared_ptr<AudioParam> detune();

    // Get the magnitude and phase response of the filter at the given
    // set of frequencies (in Hz). The phase response is in radians.
    void getFrequencyResponse(ContextRenderLock&, const std::vector<float>& frequencyHz, std::vector<float>& magResponse, std::vector<float>& phaseResponse);
};

} // namespace lab

#endif // BiquadFilterNode_h
```
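For what it is worth, the WebAudio spec answers the two-kinds-of-property question: the finished binding should expose `type` as a plain string attribute alongside the AudioParam objects, so both coexist on the same node. A sketch of the expected JS surface, assuming `ctx` is an existing AudioContext (the string-to-enum mapping onto LabSound's shorts is the binding's job):

```js
// Expected JS surface for BiquadFilterNode per the WebAudio spec:
// `type` is an ordinary string property, while the others are AudioParams.
const filter = ctx.createBiquadFilter();
filter.type = 'lowpass';        // binding maps 'lowpass' -> LOWPASS (0)
filter.frequency.value = 440;   // AudioParam backed by frequency()
filter.Q.value = 1.2;           // AudioParam backed by q()
filter.gain.value = 0;          // AudioParam backed by gain()
filter.detune.value = 0;        // AudioParam backed by detune()
```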

@Palmer-JC
Contributor Author

Ok, I found an example of both of these kinds of constructs in the Initialize() of the obsolete ScriptProcessorNode, so maybe it is not so obsolete after all.

Will pattern after that.

@Palmer-JC
Contributor Author

Now looking at where LabSound does decoding. They have it in extended. I am noticing that the source files here are very different in both call syntax and number of lines.

Your copy has 375 lines, dated 9/11/18.

The current one from LabSound has 108 lines, dated 7/4/19, with different calls.

I think updating your fork might be the next move.

@avaer
Member

avaer commented Dec 4, 2019

I am using FFmpeg for the decoding; is there some reason to change that?

@Palmer-JC
Contributor Author

When I search AudioContext.cpp, I do not get a hit for decodeAudioData, so I assumed that you were not doing any decoding. Maybe I just need to implement that, but where is the decoding being done?

@avaer
Member

avaer commented Dec 4, 2019

I think any sort of Audio node loading must be going through that path to load various file types on the Exokit side.

@Palmer-JC
Contributor Author

The sound font I am using stores its data in .js files, one file per instrument, and there are multiple "files" within a font. Here is a piano; it uses that method.

I will look around elsewhere in the project for where this happens.
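Roughly the pattern involved, as I understand it: each instrument file embeds base64 sample data, which gets decoded through decodeAudioData. The `zone.sample` field name below is hypothetical, not webaudiofont's exact schema.

```js
// Illustrative sketch of decoding a base64-embedded sample; `zone.sample`
// is a hypothetical field name, not webaudiofont's exact schema.
function decodeZone(ctx, zone, onReady) {
  const bytes = Uint8Array.from(atob(zone.sample), c => c.charCodeAt(0));
  ctx.decodeAudioData(bytes.buffer, audioBuffer => {
    zone.buffer = audioBuffer;  // cache the decoded AudioBuffer on the zone
    onReady(zone);
  });
}
```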

@Palmer-JC
Contributor Author

Ok, it was not actually in C++, but in native-bindings.js. It works. Having a misunderstanding corrected is much preferred to having to write code!

@Palmer-JC
Contributor Author

Went to add ConstantSourceNode, but found it is not actually implemented by LabSound. Edited the table above to reflect this.
