
Basic support for running Emscripten code as AudioWorklets #12502

Closed · wants to merge 12 commits

Conversation

@tklajnscek (Contributor) commented Oct 13, 2020

Basically, this PR lets you run AudioWorkletNodes that can nicely call into Emscripten-generated code.
It sets up a pthread environment with shared memory in the AudioWorkletGlobalScope that's shared by all the audio worklets.

Summary of major changes:

  • Implemented a futex that works in the worklets so that mutexes and other synchronization primitives work nicely. It's the same busy wait as on the main thread, but without all the extra complexity (no proxying, etc.)
  • Added slightly customized pthread bootstrapping for audio worklets. This works via a dummy AudioWorkletNode to get the message port for worker communication.
  • All the script loading is done on the main thread in case of AudioWorklets
  • Changed 'worker.js' to make it work in an AudioWorkletGlobalScope. This is mainly to stop it from trying to import scripts from the worklet context (not allowed) and to replace various this.XXX references, since this doesn't exist in AudioWorkletGlobalScope
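The futex mentioned in the first bullet boils down to a poll loop. A minimal standalone sketch of such a spin wait (the helper name and shape are illustrative, not the PR's actual implementation): since Atomics.wait is not allowed in AudioWorkletGlobalScope (or on the browser main thread), the futex word is polled with Atomics.load until another thread changes it or a timeout expires.

```javascript
// Minimal sketch of a spin-wait futex over a SharedArrayBuffer-backed
// Int32Array shared between threads. Because Atomics.wait is unavailable
// in AudioWorkletGlobalScope, we poll with Atomics.load instead.
// The helper name `spinWaitFutex` is illustrative only.
function spinWaitFutex(i32, index, expectedValue, timeoutMs) {
  const deadline = Date.now() + timeoutMs;
  while (Atomics.load(i32, index) === expectedValue) {
    if (Date.now() >= deadline) return 'timed-out';
    // A real implementation keeps the spin tight; there is nothing to yield to here.
  }
  return 'ok'; // another thread changed the value (a "wake")
}

// Example usage with a single futex word:
const i32 = new Int32Array(new SharedArrayBuffer(4));
Atomics.store(i32, 0, 1);
console.log(spinWaitFutex(i32, 0, 0, 10)); // value already != expected -> 'ok'
console.log(spinWaitFutex(i32, 0, 1, 10)); // nobody wakes us -> 'timed-out'
```

In the real PR the wake side also has to handle waiters on both the main thread and the worklet, which is what kripken's dedup request below is about.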

Current limitations:

Doesn't work with MINIMAL_RUNTIME as I haven't figured out how to export functions so that they can be called from the registered AudioWorkletProcessor.

Examples

There's an example in tests/audioworklet that also tests the pthread environment setup and the futex implementation.

We are using (a much less clean) version of this in production and it works great!
Any comments highly appreciated! :)

@tklajnscek tklajnscek force-pushed the worklet_support branch 5 times, most recently from 358b422 to 17a72a5 Compare October 14, 2020 15:12
@tklajnscek tklajnscek marked this pull request as ready for review October 14, 2020 19:16
Review comments (resolved): emcc.py, src/library_pthread.js
@kripken (Member) commented Oct 15, 2020

Interesting @tklajnscek , but I'm a little lost here... can you please explain why this is needed? That is, is this because AudioWorklets are very different than normal environments (if so, what are the specific issues this fixes) or is this adding a new type of functionality entirely (using an AudioWorklet in some special way)?

Or to ask the question in another way, what does this make possible that wasn't before?

@tklajnscek (Contributor, Author) commented Oct 16, 2020

Interesting @tklajnscek , but I'm a little lost here... can you please explain why this is needed? That is, is this because AudioWorklets are very different than normal environments (if so, what are the specific issues this fixes) or is this adding a new type of functionality entirely (using an AudioWorklet in some special way)?

Or to ask the question in another way, what does this make possible that wasn't before?

Sure, let me try to explain. Forgive me if some of this is stating the obvious 😅

Let me run through the main pain points first vs regular Workers:

  • The global scope is different and there are some things that don't work in AudioWorkletGlobalScope. This is mostly to make it difficult to stall the worklets as they are a real-time thing that needs to spit out the audio data ASAP.
  • So there's no fetch, importScripts, etc., and the only way to load scripts is by calling addModule on the main thread (this will also respect any static ES6 import statements in the loaded script).
  • There's also no Atomics.wait for fear of making it too easy to block
  • All AudioWorklets run serially in the browser's (single) audio thread and share the same global context (the AudioWorkletGlobalScope).
  • Since the global scope is shared, the worklets' interface to the other threads is not on the global scope, like it is for Workers (postMessage/onmessage). Instead, AudioWorklets register classes derived from AudioWorkletProcessor when the script is first evaluated; instances of these classes are then created by calling new AudioWorkletNode(audioContext, 'name-of-processor') on the main thread.

This PR addresses this by:

  • Changes worker.js to work when evaluated in the AudioWorkletGlobalScope in addition to regular Workers
  • Changes the loading/instantiation/bootstrapping code to play nicely with the fact that scripts are loaded via addModule, avoiding calls to importScripts
  • Reroutes the library_pthread message passing through a dummy AudioWorkletProcessor when run in AudioWorkletGlobalScope, which makes it work with the same code as regular Workers
  • Implements a special-case futex that can be used in AudioWorkletGlobalScope and uses a simple spin wait (like the main thread implementation) so that various synchronization primitives work correctly

All of the above makes it possible to run a shared memory instance of the program in AudioWorkletGlobalScope with the pthread environment initialized properly, just like a regular Worker.

This in turn makes it possible to execute all the program code in there, which makes it super easy to run WASM code (= fast) in audio worklets and lets you port a bunch of existing C(++) audio stuff over much more easily.

The developer only needs to register their AudioWorkletProcessor and call their exported functions from it and it JustWorks(TM).

Without this you can't really run any Emscripten-compiled code from an AudioWorkletProcessor unless you do all the steps manually anyway: transferring over the wasmInstance, getting the .js to evaluate, initializing the pthread environment by hand to get the stack set up, etc. So you end up running custom JS (or potentially custom WASM+JS) in there with a bunch of glue code to get it to play along with your main program, which already does a bunch of processing.

We currently use this to run the final output mixer as an AudioWorklet and it works great; it's the same code as on any other platform (with the addition of the AudioWorkletProcessor class that calls into it). In addition to this we run audio decoders as regular Workers.

I can also see this being very useful to anyone doing threaded/DPC audio output because it most likely maps 1:1 with what they have in there now.

I hope that answers the question @kripken. Let me know if anything is unclear or if I missed something obvious somewhere :)

@kripken (Member) commented Oct 19, 2020

I see, thanks @tklajnscek! This does sound like a useful thing to support, and I think I understand now why it takes so much work here.

Some initial comments on the PR:

  1. It looks like this duplicates a bunch of main thread logic, like having a special futex, busy-waiting when necessary, etc. It would be good to find a way to share that code as much as possible (like in wake, having a helper function that handles a special mutex, and it is called once for the main thread futex and once for the worklet).
  2. I suspect we'd want to not include any worklet code when not building for that mode, so it should be ifdef'd out where feasible.
  3. Adding a new valid environment value requires updating settings.js to mention it under ENVIRONMENT.

@juj , have you thought about worklet support?

@tklajnscek tklajnscek force-pushed the worklet_support branch 2 times, most recently from 80379c3 to 1b1512a Compare October 29, 2020 21:00
@tklajnscek (Contributor, Author):

@kripken I believe the latest changes should address all your points and it's rebased on latest master.

  • I removed the code duplication in the futex implementation.
  • If the ENVIRONMENT doesn't contain 'worklet' all the worklet support is stripped out (via ENVIRONMENT_MAY_BE_WORKLET)
  • settings.js ENVIRONMENT entry has been updated

@sbc100 if you have any thoughts on this they'd be much appreciated :)

@juj (Collaborator) commented Nov 9, 2020

First off, thanks for working on this kind of project; this is a very interesting development, one that I have had a TODO to experiment with for close to three(!) years now...

Which in turn makes it possible to execute all the program code in there which makes it super easy to run WASM code (=fast) in audio worklets and lets you port a bunch of existing C(++) audio stuff over much more easily.

The developer only needs to register their AudioWorkletProcessor and call their exported functions from it and it JustWorks(TM).

I've been going through this code back and forth for a few passes now, and some of the design questions here boil down to portability vs. native web. You mention portability, but there currently seems to be very little provided in terms of portability in the test code, since it is very JS-specific?

The review and maintenance complexity of this PR is high, so we should be very careful about how to proceed. Some thoughts to get us further:

  • the test code shows an example of how to generate an audio tone. It is not immediately clear to me what is intended to be the public user-facing API of the feature, and what is intended to be part of the test code only. That is, if I wanted to use AudioWorklet pthreads in my own code, what API should I work against?

Maybe we'll need to flesh that out in documentation or a hello world test without test harness, to illustrate the expected API surface area.

  • the other question is about portability vs native web: you mention portability being a goal, but it looks like the test code contains an example of a JS-specific code API that needs to be called to create a "AudioWorklet pthread". That kind of API does not seem to actually aim for portability, since JS code is very JS specific?

My expectation for portability would be something akin to having a C based API, where one would call something like audio_worklet_pthread_create() instead of pthread_create() to create a pthread that runs inside an AudioWorklet, and the thread startup routine would be the audio buffer mixing routine - or something like that?

  • Then, there is a question of code organization. It seems to me that some of the work could be located in a library_audioworklet.js instead, rather than piling it into the library_pthread.js API. Also, the test code example feels somewhat odd as an API, but I am not able at the moment to suggest how it should look instead; I should probably give this an in-depth playthrough to get a feeling.

  • Finally, code size is a concern. We will need to land this in a fashion that does not regress code size a single byte when one is building without worklet enabled in the build settings. Currently it looks like all projects will pay a code size hit from AudioWorklet related code even though they would not be interested in the feature. That is something we'll need to be very precise about.

To summarize, in order to proceed with this feature, I believe we should define in detail, what the "native web" API would look like to approach this feature, if portability to other platforms is not a concern.

Then we can define what the "portable" API perspective would look like, if/when the added requirement of portability is necessary - and how should that API differ from the native web-like API?

@tklajnscek (Contributor, Author):

@juj I'll try to find some time to work on this in the next few days and we can discuss. For now here's a quick reply to your points.

Maybe we'll need to flesh that out in documentation or a hello world test without test harness, to illustrate the expected API surface area.

Yeah, I think a clean sample would make sense. I'll put something up that we can use as a starting point for discussion.

the other question is about portability vs native web: you mention portability being a goal, but it looks like the test code contains an example of a JS-specific code API that needs to be called to create a "AudioWorklet pthread". That kind of API does not seem to actually aim for portability, since JS code is very JS specific?

The "port existing code" part of my comments above was referring to the ability to easily take existing C(++) audio processing code and make it work pretty much as-is under Emscripten when executed in an audio worklet context. My intention was not to create a generic, portable API for access to the platform-specific audio system, which this would become. I didn't even attempt to wrap/abstract/hide the creation of the AudioContext etc., which immediately makes this a non-portable thing. I think that is totally beyond the scope of Emscripten, and it would most likely be a total nightmare to maintain.

What I was aiming for was to add a way of hooking into the audio callbacks like you can on other platforms (using platform-specific APIs: XAudio/WASAPI/AU, etc.) and use normal C++ code in there, without having to modify the internals of Emscripten/pthreads/low-level synchronization. This is what I was going for with portability. Because currently, you can't write a function that gets called from an audio worklet and actually use any pthread/sync primitive at all.

For example, if you have an existing app that renders audio, you likely have a bunch of platform-independent code that handles all the audio/DSP logic. That's the bulk of the code. Then you have some platform-specific code that queries supported features/formats, creates the audio device, and ends up setting up some kind of DPC audio callback, which is where the audio gets submitted to the OS/driver for playback. And that should stay like that, IMO.

Before AudioWorklets this worked fine in Emscripten too. For the 'Emscripten platform' part of the code we used a ScriptProcessorNode and simply called our "native" DPC callback to fill the buffers from onaudioprocess. It worked because it all executed in the same context as the main code on the main thread. But as soon as you move this off the main thread, you need some kind of atomics/sync logic, and you need Emscripten-compiled code to function as intended when executed from within the audio worklet context (which is a bit special). This is what this PR is trying to address: to just add the ability to run the code from the audio worklet as if it were just another normal thread.

  • It seems to me that some of the work could be located in a library_audioworklet.js instead, rather than piling it in the library_pthread.js

Yeah, definitely. I'll make a pass on this to clean it up.

  • Finally, code size is a concern. We will need to land this in a fashion that does not regress code size a single byte when one is building without worklet enabled in the build settings

While I totally agree with the sentiment, this is more a question of balance between code duplication and keeping code size down when disabled. @kripken asked me to deduplicate some of the code, which I also think is important for maintainability. But at the same time, doing this means it becomes more difficult to keep the overhead at an absolute 0 when disabled without making the whole thing a hot ifdef-ridden mess. It becomes especially difficult in the futex implementation, since just extracting a part of it into a common function will increase the code size even for non-audio-worklet builds.

In any case, I'm confident we can get pretty damn close to 0, or you know, optimize something else in the process to offset the cost :)

Base automatically changed from master to main March 8, 2021 23:49
@tklajnscek tklajnscek force-pushed the worklet_support branch 3 times, most recently from 6481737 to cadcdd9 Compare June 14, 2021 09:41
@tklajnscek (Contributor, Author):

Hey everyone,

So I said "in a few days," but then life happened. Here we are six months later: I've finally been able to spend some more time on this, brought it up to speed with the latest revision, and addressed some of the issues, mostly:

  • Made sure that code size doesn't change unless you actually enable the audio worklet environment
  • tests/audioworklet/audioworklet.cpp is now cleanly separated into test-only and usage example parts where literally just deleting all the test-only parts yields a working usage example.
  • I didn't end up splitting the audio worklet code in library_pthread.js out into library_audioworklet.js as @juj suggested, because it's so intimately tied to all the pthread stuff (it's essentially just a different way to set it up, and then it all just works)

I'd appreciate any guidance on what needs doing so that I stand a chance of getting this merged, as I really think a lot of cool stuff can be done in the audio field with Emscripten, and this brings it way closer to native audio dev than it used to be.

Note: some of the SSE tests seem to be failing, but I'm not sure how that's related to my changes, and it seems non-trivial for me to run them locally. I can try, though, if this is not a known issue with the test suite.

@sbc100 (Collaborator) left a comment

In general, I'd love to see some of the complexity of the test/example reduced. It would seem odd if that were the smallest possible test/example of using a worklet.

Review comments (resolved): emcc.py, src/library_pthread.js, src/settings_internal.js, src/shell.js, tests/audioworklet/audioworklet_post.js, tests/audioworklet/shell.html, tests/test_browser.py, tests/audioworklet/audioworklet.cpp
@tklajnscek (Contributor, Author):

Wow, thanks for the uber fast review! I'll jump on it first thing in the morning as it's quite late here already :)

@tklajnscek (Contributor, Author):

In addition to the stuff discussed in the diff comments, I split up the worklet test into a futex test and a tone generator example.

I'll amend this PR some more after I'm done playing around with MODULARIZE=0/1 and MINIMAL_RUNTIME... have some ideas :)

@tklajnscek (Contributor, Author):

Let's see if the tests pass! Made everything work without requiring MODULARIZE=1! :)

@tklajnscek (Contributor, Author):

I think we're good to go here. Any further feedback is much appreciated!

@tklajnscek (Contributor, Author):

Some more fixes:

  • Realized MODULARIZE=0 --closure=1 was broken so fixed that and made the test actually test all these combinations.
  • A few other --closure=1 fixes (message fields accessed as strings mostly)
  • Some minor reorg of the onmessage handler to remove traces of worklet specifics in the regular worker onmessage

I think this is shaping up nicely if I do say so myself 😁

@tklajnscek (Contributor, Author):

Ping :) Has anyone had time to look at this?
I really think this is something that's generally useful and I'd like to get it merged.

@tklajnscek (Contributor, Author):

This is my fortnightly reminder for an update here 😄
It would be much appreciated 🙏

@kripken (Member) commented Jul 19, 2021

I tried to do another reading of this, but given all the discussion, I'm not sure where things stand. In particular, I see there was discussion of a possible C API and other options, for example: are those discussions still ongoing, or is this PR in its intended final state?

Skimming the code, it does look like it no longer duplicates a bunch of logic, which is good!

@tklajnscek (Contributor, Author) commented Nov 16, 2021

Hey @Jonathhhan, yeah I haven't been merging in the latest changes lately... I've gone ahead and rebased everything on 2.0.34 so this should work for you:

  1. Clone the base SDK folder from https://github.com/emscripten-core/emsdk.git
  2. emsdk install 2.0.34
  3. emsdk activate 2.0.34
  4. remove the content of <sdkdir>/upstream/emscripten
  5. clone the worklet_support branch from my repo https://github.com/tklajnscek/emscripten.git in there
  6. run npm i in there to get acorn etc installed

This should do it. Then go into tests and try running runner.py browser.test_audio_worklet and it should succeed.

The tone generator sample should also build and run - in tests/audioworklet/tone - check the .cpp file for instructions.

Let me know if it works for you :)
Also, be aware that it doesn't look likely this will ever make it into Emscripten in its current form :)

@Jonathhhan (Contributor) commented Nov 17, 2021

@tklajnscek Thank you very much; I was looking for a solution like this for a long time. Open Frameworks works with AudioWorklet now (I had to make some small additional changes). I make an audio callback from the AudioWorklet every 128 samples. A strange thing is that it only runs smoothly with Chrome on Ubuntu; other browser/OS combinations have some audio artifacts/glitches. Maybe the buffer size is too small; I will investigate. Here is my example: https://gameoflife3d.handmadeproductions.de/ And here is the same thing without AudioWorklet (and a buffer size of 4096): https://simplesequencer.handmadeproductions.de/ (I will post my AudioWorklet code later...)

@Jonathhhan (Contributor) commented Nov 18, 2021

I set the Open Frameworks buffer size to 1024 and it is much better now (also compared to ScriptProcessorNode); not to say it works great.
This is how I implemented the code (I hope it is not too off-topic):

html5audio_stream_create: function(context_id, bufferSize, inputChannels, outputChannels, inbuffer, outbuffer, callback, userData, pthreadPtr) {
  // Initializes the audio context and the pthread in its AudioWorkletGlobalScope

  // Create the context
  Module.audioCtx = new AudioContext({sampleRate: 88200});

  // Initialize the pthread shared by all AudioWorkletNodes in this context
  PThread.initAudioWorkletPThread(Module.audioCtx, pthreadPtr).then(function() {
    out("Audio worklet PThread context initialized!")
  }, function(err) {
    out("Audio worklet PThread context initialization failed: " + [err, err.stack]);
  });

  // Creates an AudioWorkletNode and connects it to the output once it's created
  PThread.createAudioWorkletNode(
    Module.audioCtx,
    'native-passthrough-processor', 
    {
      numberOfInputs: 0,
      numberOfOutputs : 1,
      outputChannelCount : [2],
      processorOptions: {
      inputChannels : inputChannels,
      outputChannels : outputChannels,
      inbuffer : inbuffer,
      outbuffer : outbuffer,
      bufferSize : bufferSize,
      callback : callback,
      userData : userData
      }     
    }
  ).then(function(workletNode) {
    // Connect the worklet to the audio context output
    out("Audio worklet node created! Tap/click on the window if you don't hear audio!");
    workletNode.connect(Module.audioCtx.destination);
  }, function(err) {
    console.log("Audio worklet node creation failed: " + [err, err.stack]);
  });

  // To make this example usable we setup a resume on user interaction as browsers
  // all require the user to interact with the page before letting audio play
  if (window && window.addEventListener) {
    var opts = { capture: true, passive : true };    
    window.addEventListener("touchstart", function() { Module.audioCtx.resume() }, opts);
    window.addEventListener("mousedown", function() { Module.audioCtx.resume() }, opts);
    window.addEventListener("keydown", function() { Module.audioCtx.resume() }, opts);
  }
	},

This is the audioworklet:

/**
 * This is the JS side of the AudioWorklet processing that creates our
 * AudioWorkletProcessor that fetches the audio data from native code and 
 * copies it into the output buffers.
 * 
 * This is intentionally not made part of Emscripten AudioWorklet integration
 * because apps will usually want a lot of control here (formats, channels, 
 * additional processors etc.)
 */

// Register our audio processors if the code loads in an AudioWorkletGlobalScope
if (typeof AudioWorkletGlobalScope === "function") {
  // This processor node is a simple proxy to the audio generator in native code.
  // It calls the native function then copies the samples into the output buffer
  var counter = 0;
  var inputChannels = 0;
  var outputChannels = 0;
  var inbuffer = 0;
  var outbuffer = 0;
  var bufferSize = 0;
  var callback = 0;
  var userData = 0;
  class NativePassthroughProcessor extends AudioWorkletProcessor {
    constructor (options) {
      super();
      inputChannels = options.processorOptions.inputChannels;
      outputChannels = options.processorOptions.outputChannels;
      inbuffer = options.processorOptions.inbuffer;
      outbuffer = options.processorOptions.outbuffer;
      bufferSize = options.processorOptions.bufferSize;
      callback = options.processorOptions.callback;
      userData = options.processorOptions.userData;
    }
    process(inputs, outputs, parameters) {
      counter = currentFrame / 128 % (bufferSize / 64);
      if (counter == 0) {
        dynCall('viiii', callback, [bufferSize, inputChannels, outputChannels, userData]);
      }
      const output = outputs[0];
      for (let channel = 0; channel < 2; ++channel) {
        const outputChannel = output[channel];
        outputChannel.set(Module.HEAPF32.subarray(outbuffer >> 2, (outbuffer >> 2) + bufferSize * outputChannels).slice(counter * 128, counter * 128 + 128));
      }
      return true;
    }
  }
  // Register the processor as per the audio worklet spec
  registerProcessor('native-passthrough-processor', NativePassthroughProcessor);
}
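The subarray/slice windowing in process() above is easy to get off by one. The same indexing can be checked in isolation with a pure helper (the name `takeQuantum` is hypothetical, and a plain Float32Array stands in for Module.HEAPF32; the arithmetic matches the snippet):

```javascript
// Pure version of the windowing used in process() above: take the Nth
// 128-sample render quantum out of a larger native output buffer.
// `heapF32` stands in for Module.HEAPF32 and `outbufferPtr` is a byte
// offset into it, as in the snippet. Helper name is illustrative.
function takeQuantum(heapF32, outbufferPtr, bufferSize, outputChannels, counter) {
  const base = outbufferPtr >> 2; // byte offset -> Float32 index
  return heapF32
    .subarray(base, base + bufferSize * outputChannels) // the native buffer region
    .slice(counter * 128, counter * 128 + 128);         // the current 128-sample window
}

// Example: a fake heap holding ramp samples, buffer at byte offset 512.
const heap = new Float32Array(2048);
for (let i = 0; i < heap.length; i++) heap[i] = i;
const quantum = takeQuantum(heap, 512, 1024, 1, 1); // second 128-sample window
console.log(quantum.length, quantum[0]); // prints: 128 256
```

Pulling the arithmetic out like this makes it easy to sanity-check the window boundaries without an AudioContext.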

My only issue is that I need to edit the Emscripten-generated JavaScript file every time after compiling, because I do not know how to change the source code for that (a hint would be great).
If I use files and a filesystem (the problem is caused by var xhr = new XMLHttpRequest;), I need to wrap it in an if (ENVIRONMENT_IS_WEB) {} condition (which is why the environments need to be defined earlier).
My change looks like this (it's the beginning of the generated JavaScript file):

if (typeof AudioWorkletGlobalScope === "function") {
    var Module = globalThis.Module
}
var Module = typeof Module !== "undefined" ? Module : {};
if (!Module.expectedDataFileDownloads) {
    Module.expectedDataFileDownloads = 0
}
Module.expectedDataFileDownloads++;
var ENVIRONMENT_IS_WEB = typeof window === "object";
var ENVIRONMENT_IS_WORKER = typeof importScripts === "function";
var ENVIRONMENT_IS_AUDIOWORKLET = typeof AudioWorkletGlobalScope === "function";
var ENVIRONMENT_IS_NODE = typeof process === "object" && typeof process.versions === "object" && typeof process.versions.node === "string";
var ENVIRONMENT_IS_PTHREAD = Module["ENVIRONMENT_IS_PTHREAD"] || false;
if (ENVIRONMENT_IS_WEB){
(function() {
    var loadPackage = function(metadata) {
        var PACKAGE_PATH = "";
        if (typeof window === "object") {
            PACKAGE_PATH = window["encodeURIComponent"](window.location.pathname.toString().substring(0, window.location.pathname.toString().lastIndexOf("/")) + "/")
        } else if (typeof process === "undefined" && typeof location !== "undefined") {
            PACKAGE_PATH = encodeURIComponent(location.pathname.toString().substring(0, location.pathname.toString().lastIndexOf("/")) + "/")
        }
        var PACKAGE_NAME = "bin/EmscriptenExample.data";
        var REMOTE_PACKAGE_BASE = "EmscriptenExample.data";
        if (typeof Module["locateFilePackage"] === "function" && !Module["locateFile"]) {
            Module["locateFile"] = Module["locateFilePackage"];
            err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")
        }
        var REMOTE_PACKAGE_NAME = Module["locateFile"] ? Module["locateFile"](REMOTE_PACKAGE_BASE, "") : REMOTE_PACKAGE_BASE;
        var REMOTE_PACKAGE_SIZE = metadata["remote_package_size"];
        var PACKAGE_UUID = metadata["package_uuid"];
        function fetchRemotePackage(packageName, packageSize, callback, errback) {
            if (typeof process === "object" && typeof process.versions === "object" && typeof process.versions.node === "string") {
                require("fs").readFile(packageName, function(err, contents) {
                    if (err) {
                        errback(err)
                    } else {
                        callback(contents.buffer)
                    }
                });
                return
            }
            var xhr = new XMLHttpRequest;
            xhr.open("GET", packageName, true);
            xhr.responseType = "arraybuffer";
            xhr.onprogress = function(event) {
                var url = packageName;
                var size = packageSize;
                if (event.total)
                    size = event.total;
                if (event.loaded) {
                    if (!xhr.addedTotal) {
                        xhr.addedTotal = true;
                        if (!Module.dataFileDownloads)
                            Module.dataFileDownloads = {};
                        Module.dataFileDownloads[url] = {
                            loaded: event.loaded,
                            total: size
                        }
                    } else {
                        Module.dataFileDownloads[url].loaded = event.loaded
                    }
                    var total = 0;
                    var loaded = 0;
                    var num = 0;
                    for (var download in Module.dataFileDownloads) {
                        var data = Module.dataFileDownloads[download];
                        total += data.total;
                        loaded += data.loaded;
                        num++
                    }
                    total = Math.ceil(total * Module.expectedDataFileDownloads / num);
                    if (Module["setStatus"])
                        Module["setStatus"]("Downloading data... (" + loaded + "/" + total + ")")
                } else if (!Module.dataFileDownloads) {
                    if (Module["setStatus"])
                        Module["setStatus"]("Downloading data...")
                }
            }
            ;
            xhr.onerror = function(event) {
                throw new Error("NetworkError for: " + packageName)
            }
            ;
            xhr.onload = function(event) {
                if (xhr.status == 200 || xhr.status == 304 || xhr.status == 206 || xhr.status == 0 && xhr.response) {
                    var packageData = xhr.response;
                    callback(packageData)
                } else {
                    throw new Error(xhr.statusText + " : " + xhr.responseURL)
                }
            }
            ;
            xhr.send(null)
        }
        function handleError(error) {
            console.error("package error:", error)
        }
        var fetchedCallback = null;
        var fetched = Module["getPreloadedPackage"] ? Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME, REMOTE_PACKAGE_SIZE) : null;
        if (!fetched)
            fetchRemotePackage(REMOTE_PACKAGE_NAME, REMOTE_PACKAGE_SIZE, function(data) {
                if (fetchedCallback) {
                    fetchedCallback(data);
                    fetchedCallback = null
                } else {
                    fetched = data
                }
            }, handleError);
        function runWithFS() {
            function assert(check, msg) {
                if (!check)
                    throw msg + (new Error).stack
            }
            Module["FS_createPath"]("/", "data", true, true);
            Module["FS_createPath"]("/data", "pd", true, true);
            Module["FS_createPath"]("/data/pd", "data", true, true);
            function DataRequest(start, end, audio) {
                this.start = start;
                this.end = end;
                this.audio = audio
            }
            DataRequest.prototype = {
                requests: {},
                open: function(mode, name) {
                    this.name = name;
                    this.requests[name] = this;
                    Module["addRunDependency"]("fp " + this.name)
                },
                send: function() {},
                onload: function() {
                    var byteArray = this.byteArray.subarray(this.start, this.end);
                    this.finish(byteArray)
                },
                finish: function(byteArray) {
                    var that = this;
                    Module["FS_createDataFile"](this.name, null, byteArray, true, true, true);
                    Module["removeRunDependency"]("fp " + that.name);
                    this.requests[this.name] = null
                }
            };
            var files = metadata["files"];
            for (var i = 0; i < files.length; ++i) {
                new DataRequest(files[i]["start"],files[i]["end"],files[i]["audio"] || 0).open("GET", files[i]["filename"])
            }
            function processPackageData(arrayBuffer) {
                assert(arrayBuffer, "Loading data file failed.");
                assert(arrayBuffer instanceof ArrayBuffer, "bad input to processPackageData");
                var byteArray = new Uint8Array(arrayBuffer);
                DataRequest.prototype.byteArray = byteArray;
                var files = metadata["files"];
                for (var i = 0; i < files.length; ++i) {
                    DataRequest.prototype.requests[files[i].filename].onload()
                }
                Module["removeRunDependency"]("datafile_bin/EmscriptenExample.data")
            }
            Module["addRunDependency"]("datafile_bin/EmscriptenExample.data");
            if (!Module.preloadResults)
                Module.preloadResults = {};
            Module.preloadResults[PACKAGE_NAME] = {
                fromCache: false
            };
            if (fetched) {
                processPackageData(fetched);
                fetched = null
            } else {
                fetchedCallback = processPackageData
            }
        }
        if (Module["calledRun"]) {
            runWithFS()
        } else {
            if (!Module["preRun"])
                Module["preRun"] = [];
            Module["preRun"].push(runWithFS)
        }
    };
    loadPackage({
        "files": [{
            "filename": "/data/audioworklet_tone_post.js",
            "start": 0,
            "end": 1469
        }, {
            "filename": "/data/pd/ofelia_bang.pd",
            "start": 1469,
            "end": 2894
        }, {
            "filename": "/data/pd/ofelia_vSlider.pd",
            "start": 2894,
            "end": 5102
        }, {
            "filename": "/data/pd/ofelia_hRadio.pd",
            "start": 5102,
            "end": 6725
        }, {
            "filename": "/data/pd/ofelia_hSlider.pd",
            "start": 6725,
            "end": 8917
        }, {
            "filename": "/data/pd/ofelia_label.pd",
            "start": 8917,
            "end": 9801
        }, {
            "filename": "/data/pd/ofelia_vRadio.pd",
            "start": 9801,
            "end": 11435
        }, {
            "filename": "/data/pd/ofSetup.lua",
            "start": 11435,
            "end": 12178
        }, {
            "filename": "/data/pd/main.pd",
            "start": 12178,
            "end": 63469
        }, {
            "filename": "/data/pd/gadsr.pd",
            "start": 63469,
            "end": 66878
        }, {
            "filename": "/data/pd/ofSetup.pd",
            "start": 66878,
            "end": 67142
        }, {
            "filename": "/data/pd/preset.pd",
            "start": 67142,
            "end": 68577
        }, {
            "filename": "/data/pd/ofelia_number.pd",
            "start": 68577,
            "end": 72170
        }, {
            "filename": "/data/pd/ofelia_toggle.pd",
            "start": 72170,
            "end": 73626
        }, {
            "filename": "/data/pd/synthvoice.pd",
            "start": 73626,
            "end": 75200
        }, {
            "filename": "/data/pd/data/Raster.frag",
            "start": 75200,
            "end": 75827
        }, {
            "filename": "/data/pd/data/Raster.vert",
            "start": 75827,
            "end": 76079
        }],
        "remote_package_size": 76079,
        "package_uuid": "71d3d0c5-3473-469b-9e09-d610538b347d"
    })
}
)();

@tklajnscek
Contributor Author

Hey @Jonathhhan, I tried your demos and they both actually work fine for me in Chrome on Win 10 - couldn't hear any glitches :)

Obviously you have to be careful with any code you run in the audio worklet processor as it has to be very fast to stay real-time. Also, as you already noticed, the buffer is settable and that helps, but as far as I understand, the browser is always supposed to figure out the ideal buffer to use with audio worklets, and that's the whole point of the fixed 128-sample quantum.

If you can't make it work fast enough, then you might have to run the processing in another (multiple?) thread(s) and then just use the audio worklet processor to fill the output buffers.
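One way to sketch that split (illustrative only, not code from this PR): a single-producer/single-consumer ring buffer over a SharedArrayBuffer lets a render thread push samples while the worklet's process() pops them without blocking. The class and method names here are hypothetical:

```javascript
// Sketch: SPSC ring buffer over SharedArrayBuffer. A render thread push()es
// samples; the AudioWorkletProcessor pop()s them in process(). Indices grow
// monotonically, which is fine for a sketch; a real implementation would wrap them.
class SampleRing {
  constructor(sab, capacity) {
    this.idx = new Int32Array(sab, 0, 2);           // [0] = read, [1] = write
    this.data = new Float32Array(sab, 8, capacity);
    this.capacity = capacity;
  }
  push(samples) {                                    // producer side (render thread)
    const r = Atomics.load(this.idx, 0);
    const w = Atomics.load(this.idx, 1);
    if (w + samples.length - r > this.capacity) return false; // full: drop block
    for (let i = 0; i < samples.length; i++)
      this.data[(w + i) % this.capacity] = samples[i];
    Atomics.store(this.idx, 1, w + samples.length);
    return true;
  }
  pop(out) {                                         // consumer side (audio thread)
    const r = Atomics.load(this.idx, 0);
    const w = Atomics.load(this.idx, 1);
    if (w - r < out.length) return false;            // underrun: leave silence
    for (let i = 0; i < out.length; i++)
      out[i] = this.data[(r + i) % this.capacity];
    Atomics.store(this.idx, 0, r + out.length);
    return true;
  }
}
```

Crucially, pop() never blocks; on underrun the worklet just outputs silence for that quantum, which is the safe failure mode on the audio thread.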

Finally, regarding the file packager thingy - we don't actually use that so I'm not going to be able to help you much there, but generally, the issue is that XHRs are not allowed in worklet contexts.

The first solution is to fix file_packager.py so the generated code skips itself automatically, like you're doing manually now (checking if(!ENVIRONMENT_IS_AUDIOWORKLET) should be enough, btw).
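That generated guard might look roughly like this (a sketch; maybeLoadPackage and loadPackage are hypothetical stand-ins for the file_packager-emitted entry point):

```javascript
// Sketch: skip the XHR-based package loader inside an AudioWorkletGlobalScope,
// where XHR/fetch are unavailable. The main thread loads the package instead.
// `loadPackage` is a hypothetical stand-in for the generated entry point.
var ENVIRONMENT_IS_AUDIOWORKLET = typeof AudioWorkletGlobalScope === "function";

function maybeLoadPackage(loadPackage) {
  if (ENVIRONMENT_IS_AUDIOWORKLET) return false; // worklet context: do nothing
  loadPackage();
  return true;
}
```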

The other, IMO better, option is to switch to manual file fetches, which are more flexible, but it requires your code to know that it needs to download files and wait for them before use.
Roughly, what you need to do is:

  1. Mount a file system (e.g. FS.mount(MEMFS, ...))
  2. Then call emscripten_async_wget2 to download files into it
  3. Once step 2 is done, use the file as usual from the path you downloaded it into
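A rough sketch of those three steps (illustrative only; fetch() stands in for emscripten_async_wget2, and the FS object is passed in explicitly so nothing here is tied to a real Emscripten build):

```javascript
// Sketch: download a file into a mounted (e.g. MEMFS-backed) file system before
// using it. `fs` is a stand-in for Emscripten's FS object; in a real build you
// would mount first (FS.mkdir('/data'); FS.mount(MEMFS, {}, '/data');).
async function fetchIntoFS(fs, url, path) {
  const resp = await fetch(url);
  if (!resp.ok) throw new Error("download failed: " + url);
  const data = new Uint8Array(await resp.arrayBuffer());
  fs.writeFile(path, data);   // after this, the file is usable at `path`
  return path;
}
```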

Hope any of this helps :)

@Jonathhhan
Contributor

Jonathhhan commented Nov 18, 2021

Hey @tklajnscek
thank you very much for the hints and of course for your audioworklet implementation.
Actually, it also works well with a buffer size of 512; with 128 it depends on the browser and OS. All in all it is a big improvement and makes audio on the web reliable (basically I run Pure Data patches with OF and Emscripten)...
Adding those two lines works:

var ENVIRONMENT_IS_AUDIOWORKLET = typeof AudioWorkletGlobalScope === "function";
if(!ENVIRONMENT_IS_AUDIOWORKLET)

Edit: I did something wrong before; it also works very well with a buffer size of 64 (which is 128 for the audio worklet, because it is stereo, if I am right). And editing file_packager.py worked, thank you very much :)

@Jonathhhan
Contributor

Jonathhhan commented Nov 22, 2021

Hey @tklajnscek, do you maybe know if it is possible to run audioworklets with your branch together with -s PROXY_TO_PTHREAD=1 -s OFFSCREENCANVAS_SUPPORT=1 -s OFFSCREEN_FRAMEBUFFER=1? I tried, but get this error message:

EmscriptenExample.worker.js:1 worker.js onmessage() captured an uncaught exception: TypeError: Cannot read properties of undefined (reading 'audioWorklet')
self.onmessage @ EmscriptenExample.worker.js:1
EmscriptenExample.worker.js:1 TypeError: Cannot read properties of undefined (reading 'audioWorklet')
    at Object.initAudioWorkletPThread (EmscriptenExample.js:1484:27)
    at _html5audio_stream_create (EmscriptenExample.js:9687:13)
    at ofxEmscriptenSoundStream::setup(ofSoundStreamSettings const&) (EmscriptenExample.wasm:0x21b75)
    at ofxOfelia::init(int, int, int, int, bool, bool, int, int, bool, std::__2::basic_string<char, std::__2::char_traits<char>, std::__2::allocator<char> > const&) (EmscriptenExample.wasm:0x28ae6)
    at ofApp::setup() (EmscriptenExample.wasm:0x1d57a)
    at ofNode::onParentOrientationChanged(glm::qua<float, (glm::qualifier)0>&) (EmscriptenExample.wasm:0x523af3)
    at std::__2::__function::__func<std::__2::shared_ptr<of::priv::Function<ofEventArgs, std::__2::recursive_mutex> > ofEvent<ofEventArgs, std::__2::recursive_mutex>::make_function<ofEasyCam>(ofEasyCam*, void (ofEasyCam::*)(ofEventArgs&), int)::'lambda'(void const*, ofEventArgs&), std::__2::allocator<std::__2::shared_ptr<of::priv::Function<ofEventArgs, std::__2::recursive_mutex> > ofEvent<ofEventArgs, std::__2::recursive_mutex>::make_function<ofEasyCam>(ofEasyCam*, void (ofEasyCam::*)(ofEventArgs&), int)::'lambda'(void const*, ofEventArgs&)>, bool (void const*, ofEventArgs&)>::operator()(void const*&&, ofEventArgs&) (EmscriptenExample.wasm:0x51c5d6)
    at ofEvent<ofHttpResponse, std::__2::recursive_mutex>::notify(ofHttpResponse&) (EmscriptenExample.wasm:0x2076c)
    at ofCoreEvents::notifySetup() (EmscriptenExample.wasm:0x533088)
    at ofxAppEmscriptenWindow::loop() (EmscriptenExample.wasm:0x1f6ab)

openframeworks/openFrameworks#6781

@tklajnscek
Contributor Author

Ha! It probably doesn't work because it needs code added to proxy the call to the main thread, just like the regular pthread_create call does. There's also special-case code for offscreen canvas support.

Since I don't really have a use case for this and I have other priorities right now I won't really be able to help you out with this, but you're more than welcome to take my code and make any changes necessary to make it work. I guess you could just look at the proxying and offscreen canvas code paths in __pthread_create_js and follow the same principles?

@Jonathhhan
Contributor

Jonathhhan commented Dec 7, 2021

@tklajnscek I wanted to thank you again, your implementation works absolutely great with Open Frameworks and Pure Data (I think I fixed all the issues I had). I built a small demo where you can manipulate local audio files and audio input: https://emscriptenpdeffect.handmadeproductions.de/

@tklajnscek
Contributor Author

@Jonathhhan that's really great! I'm happy this was useful to someone else!
Did you have to make any further changes to the internals of emscripten (and my changes) to make it work?

@Jonathhhan
Contributor

@tklajnscek actually even the filesystem error disappeared after removing -s ASSERTIONS (which seems a bit strange, but anyhow). I made one small change for recalculating mouse coordinates in fullscreen mode and one for offscreen canvas (but that has nothing to do with the audioWorklet implementation).

@Jonathhhan
Contributor

Hey @tklajnscek,
I am just curious whether you have any plans to update your branch for future Emscripten versions.
If not, would it be difficult to update it myself? Not that it is necessary at the moment, but it would be great if audioWorklet and Emscripten still work together in the future.

@tommitytom

I'd just like to add that I'm using this branch for one of my projects and it has been absolutely fantastic so far. The project is relatively complex and shares a lot of data between the main/audio thread via the use of the moodycamel concurrent queues and it works just as I expect it to!

@kripken
Member

kripken commented Feb 15, 2022

From the discussion above I think the general agreement was to focus on @juj 's Wasm Workers PR and to add AudioWorklet support on top of that? If so, then helping in that direction would probably be better. Or is there not consensus on that direction?

@tklajnscek
Contributor Author

Oooh! I see there's some action over at #12833 now! I was afraid it was dead after a few months of silence and was just about to start asking what's up :)

Also, I'm glad someone else is finding the work here useful! 🥳

Anyways, my plan for this PR is to keep it reasonably updated, to at least cover our needs for Emscripten updates, until we hopefully end up building our thing on top of the Wasm Workers API. We usually take a new version every 3-4 months.

For our code base it shouldn't be a huge problem since we already have a per-platform threading abstraction so this is a natural fit where I can plug in the wasm workers API.

I guess this might not be that straightforward for others that have pthreads code sprinkled all over their codebase...

It'll be a few months until I get around to actually trying the wasm workers though as we're in the midst of shipping a major version update 😅

@Jonathhhan
Contributor

Jonathhhan commented Feb 15, 2022

@tklajnscek thanks, that sounds great.

From the discussion above I think the general agreement was to focus on @juj 's Wasm Workers PR and to add AudioWorklet support on top of that? If so, then helping in that direction would probably be better. Or is there not consensus on that direction?

I agree totally with that (only that my programming knowledge is a little limited - but I am happy to test and help where I can), and it would be great to have audioWorklet in the official branch. It's just that, at the moment, this branch is the only one I know of where I can use audioWorklets together with Open Frameworks.

@juj juj mentioned this pull request Feb 28, 2022
@juj
Collaborator

juj commented Mar 8, 2022

Wasm Workers PR did land, although not sure if that was premature ( #12833 (comment) ). PR #16449 is now up for review, landing Audio Worklet support as well.

Looking at the dates on the comment history here, I see it has taken quite some time, so thanks for the patience on this one! Please give the above PR a review for any comments you might have.

@yogesh0509

Hello everyone. I am new to open source and have arrived on this PR by following the guidelines mentioned in the Chromium GSoC 2022 Project Proposal: Audio Worklet for Emscripten. I would really like to contribute to this project. Can somebody brief me on how I can contribute to this thread or project?

@tommitytom

tommitytom commented Oct 1, 2022

Hey folks, any update as to what's going on here? WASM workers appear to have been merged - should they be used to handle this instead, or is this PR still the best method?

EDIT: Just took a look at the above PR (#16449). Disregard this message!

@sbc100
Collaborator

sbc100 commented Oct 11, 2024

Can this PR be closed now that #16449 has landed?

@tklajnscek
Contributor Author

Agreed!

@tklajnscek tklajnscek closed this Oct 17, 2024