
Audio support #302

Open
samhed opened this issue Sep 18, 2013 · 45 comments

Comments

@samhed
Member

samhed commented Sep 18, 2013

Would require implementing, for example, a PulseAudio server in JavaScript.

@monkey-jsun

Samhed, can you elaborate your thoughts on this?

I see two general approaches:

  1. Stream PCM or encoded audio over the internet (perhaps by extending the VNC protocol, or simply opening another WebSocket) and then use the Web Audio API for playback on the client machine.
  2. Extend the web-serving capability on the remote machine to provide an audio URL source, e.g. :5901/system-audio.mp3, and on the client side simply present an <audio> tag.

What do you think?
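
For illustration, option 2 could be as small as pointing an audio element at the stream URL — a sketch only, assuming a hypothetical MP3 endpoint on the remote machine:

// Sketch of option 2: point an HTML5 audio element at a hypothetical
// stream URL exposed by the remote machine (URL is illustrative only).
const audio = new Audio('http://remote-host:5901/system-audio.mp3');
audio.play().catch(() => { /* most browsers require a user gesture before playback */ });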

@samhed
Member Author

samhed commented Jul 17, 2015

My thoughts were option 1. I actually have a prototype for ESound that uses the Web Audio API, but that's quite an outdated protocol.

@kanaka
Member

kanaka commented Jul 21, 2015

@samhed did you mean the "Audio Data API"? The Web Audio API is the modern API. The old, deprecated one that Mozilla prototyped was the Audio Data API (https://wiki.mozilla.org/Audio_Data_API).

@samhed
Member Author

samhed commented Jul 22, 2015

Nope, I meant the Web Audio API. I made it last year. I used ESound since it is a very simple protocol and I was mostly after a PoC.

@monkey-jsun

BTW, just curious about the choice of ESound.

I thought we might be able to use a regular compression codec and piggyback on VNC (using a custom type).

Jun


@ikreymer

Just wanted to check if anyone is currently working on this (I'm guessing not, but good to check).

@samhed Is your EsounD solution open source?

Otherwise, I was thinking perhaps https://github.com/pdeschen/pcm.js may be a start for PulseAudio.

I would like to add sound to my project: http://oldweb.today

@samhed
Member Author

samhed commented Dec 23, 2015

@ikreymer I'm afraid it's not open source, but I don't think the EsounD solution is worth much. I'm currently not working on this, however.

One issue I see with implementing audio support in the noVNC project is that such a solution would require server-side modifications to function, which I don't think fits in noVNC. First, PulseAudio on the server has to be configured to transmit sound from applications to a TCP port. Furthermore, a WebSocket has to be opened to send the audio data from that port to the noVNC client. This requires some sort of server-side service.

That said, noVNC could simply be able to interpret and play audio data from a WebSocket, which is probably sufficient.
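
As a rough sketch of the server-side glue that implies — assuming something (for example PulseAudio's module-simple-protocol-tcp, or an ffmpeg/gstreamer pipeline) is already writing an audio stream to a local TCP port — a minimal Node bridge from that port to WebSocket clients could look like this (untested; port numbers are illustrative):

// Sketch only: bridge a local TCP audio stream to WebSocket clients.
// Assumes an audio source is already serving a stream on localhost:4713.
const net = require('net');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
    const tcp = net.connect(4713, 'localhost');            // hypothetical audio port
    tcp.on('data', (chunk) => {
        if (ws.readyState === WebSocket.OPEN) ws.send(chunk);
    });
    tcp.on('error', () => ws.close());
    ws.on('close', () => tcp.destroy());
});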

@samhed
Member Author

samhed commented Dec 23, 2015

@ikreymer I had a discussion with my boss about the EsounD solution. We decided to make the client side open. You can find the patch attached to the following bug:

https://www.cendio.com/bugzilla/show_bug.cgi?id=4822

Edit: Note, however, that the patch is for an old version of noVNC, so it will probably not work as a drop-in without modifications.

@vans163

vans163 commented Feb 3, 2017

For QEMU I think it should be possible; it seems QEMU uses a custom VNC protocol extension over which sound can also be transmitted. It would be a matter of customizing the protocol when interacting with a QEMU server, and it would require decoding and playing the sound in the browser.

Also interesting would be patching QEMU to pipe in audio to play as if it were microphone input.

@ikreymer

ikreymer commented Feb 3, 2017

I wanted to add a quick update. I've been able to get audio to work using FFmpeg + PulseAudio. FFmpeg is able to convert to many formats and can even act as an HTTP server. However, the main issue I found is the lag.

I've gotten the best results by having FFmpeg write to a TCP socket in Opus format, then having a custom proxy that streams it over a WebSocket. The custom proxy allows for optimizations, such as skipping what looks like silence.

If anyone is interested, here are some code references:

Script that runs ffmpeg:
https://github.com/oldweb-today/base-browser/blob/master/audio_stream.sh#L7

The TCP->websocket proxy:
https://github.com/oldweb-today/base-browser/blob/master/audio_proxy.py

On the client, I am using MediaSource Extensions API to playback the OPUS stream:
https://github.com/oldweb-today/browsers/blob/master/shepherd/static/browser_controller.js#L466

However, even with all this, there seems to be a 1-3 second latency between noVNC and the audio, even on a local machine, on either Chrome or Firefox. (The server is always a local Docker container.)

My thinking was that the latency comes from the browser decoding Opus, so I also began experimenting with sending raw PCM data and using the lower-level Web Audio API createBuffer() directly. This definitely reduced latency to essentially zero, but I had to deal with PCM and manage buffers manually, which I've started doing in the code base as well. I was hoping an 8-bit PCM stream might be usable in some cases, but I haven't gotten it to work well enough yet.
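
Roughly, the low-latency PCM path looks like this — a simplified sketch (not the exact code from the repo above), assuming signed 16-bit mono PCM arrives over a WebSocket at a hypothetical endpoint:

// Sketch: play a raw S16 mono PCM stream with the Web Audio API.
const ctx = new AudioContext();           // may stay suspended until a user gesture
let playhead = ctx.currentTime;
const sampleRate = 24000;                 // must match the server's sample rate

const ws = new WebSocket('ws://localhost:8080/audio');  // hypothetical endpoint
ws.binaryType = 'arraybuffer';
ws.onmessage = (ev) => {
    const int16 = new Int16Array(ev.data);
    const buffer = ctx.createBuffer(1, int16.length, sampleRate);
    const ch = buffer.getChannelData(0);
    for (let i = 0; i < int16.length; i++) ch[i] = int16[i] / 32768;  // to float [-1, 1]

    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    // Schedule each chunk right after the previous one to avoid gaps.
    playhead = Math.max(playhead, ctx.currentTime);
    src.start(playhead);
    playhead += buffer.duration;
};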

For now, I'm using the Opus codec and accepting the latency. It may also be possible to use the decodeAudioData API, but I haven't been able to get that to work and it's unclear what codecs are supported.

The above code is part of a larger system, but happy to factor it out if it is useful to someone or if anyone wants to collaborate on improving it.

I would be curious about the latency with Janus and WebRTC approaches.

@baxerus

baxerus commented Nov 29, 2017

Running a Janus server for this might be possible, but it would be very unwieldy.

I would also like to see a small, simple solution for this in noVNC 👍

@ikreymer

It's possible to do this without Janus and with virtually no lag. I have refined the approach from my previous comment, now using gstreamer instead of ffmpeg.

The key is that gstreamer also supports a TCP sink, which can be hooked up to websockify, or to a custom script, and proxied over a WebSocket.

A possible gstreamer pipeline is:

gst-launch-1.0 -v alsasrc ! audio/x-raw, channels=2, rate=24000 ! cutter ! opusenc complexity=0 frame-size=2.5 ! webmmux ! tcpserversink port=<port>

which encodes the data as Opus in WebM (similar to what you get over WebRTC).

In the browser, you have to use Media Source Extensions (MSE), which are now supported in most browsers.

Here's a working example for Chrome and Firefox (you should see the audio play in your browser as soon as it plays in the remote browser):
https://webrecorder.io/wrsc/nfb/20171110085950$br:chrome:53/http://thisland.nfb.ca/#/thisland

Unfortunately, Safari does not support Opus, so we'd probably need to provide an alternative MP3 stream (based on this test, that seems to be the only option: https://hpr.dogphilosophy.net/test/).

The connection pipeline is:

HTML5 JS Client <- WS Proxy (websockify or custom script) <- gstreamer with tcp sink

My latest version of the client-side code, which could be added to noVNC is here:
https://github.com/oldweb-today/browsers/blob/master/shepherd/static/browser_controller.js#L482

The main remaining issues are:

  1. MP3 support for Safari and other browsers that don't support Opus.
  2. Ideally, make it work with websockify as a one-way connection, to avoid needing another custom WebSocket proxy script.
  3. Improve restarting the audio in case of disconnect. This has proved rather tricky, possibly a gstreamer issue. For some reason, when restarting the WebM stream, if the audio is already playing, the browser seems unable to decode it, even after restarting gstreamer. Starting from silence works perfectly. (I run this in Docker containers and it works perfectly on startup.)

I've spent a lot of time getting to this point, and would gladly help integrate this into noVNC if there is interest in working with just gstreamer on the server.
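
For reference, the client-side MSE setup is roughly the following — a simplified sketch of the approach (not a verbatim copy of the linked code), assuming the WebM/Opus stream arrives over a WebSocket at a hypothetical endpoint:

// Sketch: play a WebM/Opus stream via Media Source Extensions.
const audio = document.createElement('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);
audio.autoplay = true;                    // may require a prior user gesture

mediaSource.addEventListener('sourceopen', () => {
    const sb = mediaSource.addSourceBuffer('audio/webm; codecs="opus"');
    const ws = new WebSocket('ws://localhost:8080/audio');   // hypothetical endpoint
    ws.binaryType = 'arraybuffer';
    const queue = [];
    // Only append when the SourceBuffer is idle; otherwise queue the chunk.
    ws.onmessage = (ev) => {
        queue.push(ev.data);
        if (!sb.updating) sb.appendBuffer(queue.shift());
    };
    sb.addEventListener('updateend', () => {
        if (queue.length && !sb.updating) sb.appendBuffer(queue.shift());
    });
});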

@sebLopezCot

sebLopezCot commented Jan 23, 2018

@ikreymer, any tips on setting up ALSA properly to work with your proposed pipeline under Ubuntu 16.04? I've been getting client-side errors when trying to reproduce results.

Errors such as:

MediaError CHUNK_DEMUXER_ERROR_APPEND_FAILED
DOMException: Failed to execute 'appendBuffer' on 'SourceBuffer': The HTMLMediaElement.error attribute is not null.

On Stack Overflow, some have been saying that the source buffer must be flushed before appending more data with the Media Source Extensions API. I don't seem to have issues with oldweb.today, however.

@arkhanoid

arkhanoid commented Apr 20, 2018

I solved audio support using jack + darkice + icecast + qjackctl, but the delay is very large (~10 s).

@mrorgues

@ikreymer
Sounds very promising! (No pun intended ;-) )
Could you please make a PR?

@varunpatro

@ikreymer Any update on this?

@juanjoDiaz
Contributor

Any updates on this?
I might be willing to contribute to this feature but I would need some guidance.

@CendioOssman
Member

I think noVNC should stick to VNC and avoid side channels. So the easiest first step would probably be to look at existing VNC extensions for audio. QEMU's variant is documented in rfbproto:

https://github.com/rfbproto/rfbproto/blob/master/rfbproto.rst#qemu-audio-pseudo-encoding

Hopefully QEMU still supports this so you have something to test against.

So start by seeing if you can write the protocol handling for that, and play around with the browser audio APIs to see what works there.

@tmanok

tmanok commented Jun 12, 2020

Can this please happen? I have users practically dying for this and they do not have an alternative.

@tinyzimmer

tinyzimmer commented Jul 25, 2020

So unfortunately some of the links in this thread are dead, but @ikreymer led me down the perfect path with the gstreamer and webm/opus suggestions.

After giving up and trying again 3 or 4 times, I got an implementation working in this project, though it's still very hacked up at the time of writing. Pinning the links to a tag so it'll hopefully still be available for the passerby.

There is no way to accomplish this with just noVNC, but in your client you can still provide audio through a side channel. My example is in Go, but the gist of it is a second WebSocket connection for the audio stream. The proxy in this case simply taps into an audio stream and then copies the output to the WebSocket connection as array buffers. Source is here.

I tried many different ways to capture the audio, but the way I ultimately got it working best is with gstreamer converting to webm/opus from a pulsesrc on the instance running the VNC server. The pipeline is a bit messy in the code, but an example pipeline looked like this:

gst-launch-1.0 -q pulsesrc server=/run/user/1000/pulse/native \
    ! audio/x-raw, channels=2, rate=24000 \
    ! cutter \
    ! opusenc \
    ! webmmux \
    ! fdsink fd=1

The stdout from that was sent back over the WebSocket connection, where the client would execute this code when it wanted an audio stream. Again, props to ikreymer for pointing me at Media Source Extensions.
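
The server side of that can be fairly small. As an illustration in Node rather than Go (a sketch of the same idea, with hypothetical ports): spawn the gstreamer pipeline per connection and copy its stdout to the WebSocket:

// Sketch: spawn gstreamer per WebSocket connection and stream its stdout.
const { spawn } = require('child_process');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
    const gst = spawn('gst-launch-1.0', [
        '-q', 'pulsesrc', 'server=/run/user/1000/pulse/native', '!',
        'audio/x-raw, channels=2, rate=24000', '!', 'cutter', '!',
        'opusenc', '!', 'webmmux', '!', 'fdsink', 'fd=1',
    ]);
    gst.stdout.on('data', (chunk) => {
        if (ws.readyState === WebSocket.OPEN) ws.send(chunk);
    });
    ws.on('close', () => gst.kill());
});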

In the end this worked pretty well, though at the time of writing I have only tested this over local interfaces. If I remember, I'll update later with how performance is over some distance. After this adventure I'm really not sure this should even be noVNC's responsibility. That being said, I guess it could offer a connector similar to the code in the last link above, and then maybe just document ways a server could provide this functionality (the above examples in go as one option).

EDIT: Came back to put links on newer tags so I can clean up that repo a bit. I've since tested this over some distance and honestly it works about the same as locally. I've implemented some metrics in my project and the audio seems to hover around 8ish KB/sec with some fluctuation. The minor lag is still an issue but there isn't a noticeable difference (at least for me) between running locally or hopping a pond.

@tinyzimmer

I decided to take a stab at implementing this without a side channel. On a local clone I've implemented the QEMU Extended Audio client and server messages, and where I'm stuck right now is the lack of a good server to test against.

Should I maybe open a PR with what I have so far and we can explore going from there? I'm still not 100% sure it's worth fully implementing this, but at least adding support for the QEMU audio messages could have its benefits in the future.

@CendioOssman
Member

Either a draft PR, or share your repo on the devel mailing list and we can discuss things there.

@no-body-in-particular

no-body-in-particular commented Oct 26, 2020

Got full audio support working :) Basically I wrote a small C utility to pipe the gstreamer output to a socket, then used websockify to stream it to my browser. Thanks everyone for the ideas and for leading me in the right direction, and thanks to the noVNC team for this software ^^.

Code can be found on my blog: https://coredump.ws/index.php?dir=code&post=NoVNC_with_audio

Edit: latency is so low it's not noticeable for me - but maybe someone else can do some proper measurements/optimisation.
Edit: and yes it can handle multiple connections and automatic reconnects without fail.
Last edit ( for now): Also updated websockify-c to support current SSL methods and fixed the paste button for VNC servers without clipboard support.

@vexingcodes

@no-body-in-particular Your implementation works quite well for me. The latency is noticeable, but not terrible. There's probably some room for improvement there by tweaking pipeline parameters and lag parameters in the JavaScript file.

I've uploaded a demonstration based on your webaudio.js, but instead of using a custom TCP server written in C or Go, I use ucspi-tcp (available in the default package repositories of several prominent Linux distributions) to spawn a new gstreamer pipeline whenever someone connects. This uses WebM with the pipeline @tinyzimmer posted previously, but could use MP4 easily enough.

It would be a decent amount of work to integrate this more nicely into noVNC, but it looks like the server-side part (assuming a side channel is used instead of the same WebSocket as VNC, as in the QEMU approach) can be accomplished using off-the-shelf components that are already available in many distributions, rather than needing to write custom server components.

@paolomainardi

@vexingcodes I'm just trying your implementation and it works great. I am testing it using DOSBox + some retro games. You rock, guys.

@Kreijstal

> Got full audio support working :) Basically I wrote a small C utility to pipe the gstreamer output to a socket, then used websockify to stream it to my browser. Thanks everyone for the ideas and for leading me in the right direction, and thanks to the noVNC team for this software ^^.
>
> Code can be found on my blog: https://coredump.ws/index.php?dir=code&post=NoVNC_with_audio
>
> Edit: latency is so low it's not noticeable for me - but maybe someone else can do some proper measurements/optimisation.
> Edit: and yes it can handle multiple connections and automatic reconnects without fail.
> Last edit (for now): Also updated websockify-c to support current SSL methods and fixed the paste button for VNC servers without clipboard support.

Could you open a pull request?

@calebj

calebj commented Mar 30, 2021

> @no-body-in-particular Your implementation works quite well for me. The latency is noticeable, but not terrible. There's probably some room for improvement there by tweaking pipeline parameters and lag parameters in the JavaScript file.
>
> I've uploaded a demonstration based on your webaudio.js, but instead of using a custom TCP server written in C or Go, I use ucspi-tcp (available in the default package repositories of several prominent Linux distributions) to spawn a new gstreamer pipeline whenever someone connects. This uses WebM with the pipeline @tinyzimmer posted previously, but could use MP4 easily enough.
>
> It would be a decent amount of work to integrate this more nicely into noVNC, but it looks like the server-side part (assuming a side channel is used instead of the same WebSocket as VNC, as in the QEMU approach) can be accomplished using off-the-shelf components that are already available in many distributions, rather than needing to write custom server components.

Thanks to these efforts, I have finally been able to get something working on my setup. I've created a branch based on the demonstration, which I am currently using with a fair amount of success. I am using nginx to reverse-proxy both the VNC and audio sockets on their own paths to their own websockify instances. I haven't tested vnc_lite because I use PAM authentication.

The only issue I'm having over an internet connection is that the audio stream gradually falls behind. I'm not sure if this is happening on the browser end (the syncInterval function isn't working correctly) or the server end (gstreamer is buffering the slack). It falls behind quickly with a 48kHz opus stream, but it isn't as bad with 24k. Since my dev console is disabled at work, I can't debug the issue in the environment that has forced me to use noVNC in the first place.
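
One way to keep the client near the live edge with the MSE approach (a sketch, not the syncInterval code itself — names in the actual branch may differ) is to periodically jump the audio element forward when it has drifted too far behind the buffered end:

// Sketch: every few seconds, if playback lags the buffered end by more
// than a threshold, seek close to the live edge to drop the backlog.
setInterval(() => {
    const buffered = audio.buffered;                 // `audio` is the MSE-backed element
    if (!buffered.length) return;
    const liveEdge = buffered.end(buffered.length - 1);
    if (liveEdge - audio.currentTime > 1.0) {        // 1 s threshold, tune as needed
        audio.currentTime = liveEdge - 0.1;
    }
}, 3000);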

Personally, I think multiplexing the audio in the VNC stream would be the best solution for managing latency, but as far as I can tell, only RealVNC (paid) and QEMU (as stated above) support it on the server side. Since I use turbovnc, I would have to run it through something to splice in the audio-encoded messages, and I am uncertain how practical that would be. Thoughts?

@wu191287278

My solution is to use FFmpeg + JSMpeg.

Start PulseAudio:

pulseaudio --start --exit-idle-time=-1

Use FFmpeg to capture the audio and send it over UDP:

ffmpeg -f alsa -i pulse -f mpegts -codec:a mp2 udp://localhost:1234

A UDP-to-WebSocket proxy server written in Go:

package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"

	"golang.org/x/net/websocket"
)

// Listen address for the HTTP/WebSocket server (flag name and default are placeholders).
var address = flag.String("listen", ":8080", "HTTP listen address")

func main() {
	flag.Parse()

	// VNC: proxy the WebSocket to the local VNC server.
	http.Handle("/websockify", websocket.Handler(func(wsconn *websocket.Conn) {
		defer wsconn.Close()
		var d net.Dialer
		var vncAddress = "localhost:5900"
		conn, err := d.DialContext(wsconn.Request().Context(), "tcp", vncAddress)
		if err != nil {
			log.Printf("[%s] [VNC_ERROR] [%v]", vncAddress, err)
			return
		}
		defer conn.Close()
		wsconn.PayloadType = websocket.BinaryFrame
		go func() {
			io.Copy(wsconn, conn)
			wsconn.Close()
			log.Printf("[%s] [VNC_SESSION_CLOSED]", vncAddress)
		}()
		io.Copy(conn, wsconn)
		log.Printf("[%s] [VNC_CLIENT_DISCONNECTED]", vncAddress)
	}))

	var writers = new(WsMultiWriter)
	writers.writers = map[*websocket.Conn]chan *[]byte{}
	go func() {
		RunJsmpegUDP(":1234", writers)
	}()

	// Audio: register each client and block until the writer reports an error.
	http.Handle("/audio", websocket.Handler(func(conn *websocket.Conn) {
		defer conn.Close()
		conn.PayloadType = websocket.BinaryFrame
		ch := make(chan *[]byte, 1)
		writers.writers[conn] = ch
		<-ch // wait until a write to this client fails
	}))

	log.Printf("HTTP listening on %s \n", *address)
	log.Fatal(http.ListenAndServe(*address, nil))
}

// RunJsmpegUDP receives the MPEG-TS stream from ffmpeg over UDP and copies
// it to the multi-writer, which fans it out to all connected clients.
func RunJsmpegUDP(address string, writer io.Writer) {
	udpAddr, _ := net.ResolveUDPAddr("udp4", address)

	// Listen on the UDP port.
	udpConn, err := net.ListenUDP("udp", udpAddr)
	if err != nil {
		fmt.Println(err)
	}
	defer udpConn.Close()

	fmt.Printf("Jsmpeg udp listening on %s \n", address)
	io.Copy(writer, udpConn)
}

type WsMultiWriter struct {
	writers map[*websocket.Conn]chan *[]byte
}

func (t *WsMultiWriter) Write(p []byte) (n int, err error) {
	for k, v := range t.writers {
		if v == nil {
			continue
		}
		n, err = k.Write(p)
		if err != nil {
			// Signal the handler that this client is gone and stop writing to it.
			v <- nil
			t.writers[k] = nil
			continue
		}
		if n != len(p) {
			return
		}
	}
	return len(p), nil
}


The JSMpeg client page:

<!DOCTYPE html>
<html>
<head>
    <title>Audio</title>
</head>
<body>
<canvas id="video-canvas" style="width: 0;height: 0;position: fixed"></canvas>
<script type="text/javascript" src="jsmpeg.min.js"></script>
<script type="text/javascript">
    var canvas = document.getElementById('video-canvas');
    var url = 'ws://' + window.location.hostname + ":" + window.location.port + '/audio';
    var player = new JSMpeg.Player(url, {
        canvas: canvas, audioBufferSize: 128 * 64,
        autoplay: true, pauseWhenHidden: false,
    });
    document.addEventListener('touchstart', function () {
        player.audioOut.unlock(function () {
            alert('unlocked!');
        });
    });
</script>
</body>
</html>


@wu191287278

(video attachment: 1624116317053286.mp4)

@wu191287278

https://github.com/wu191287278/noVNC-audio

@alectrocute

alectrocute commented Aug 11, 2021

@wu191287278 Looks awesome, but doesn’t work on macOS Safari :(

I was able to switch to MP3 instead of audio/webm for both the client and the gstreamer pipeline, and it worked.

@ghost

ghost commented Apr 18, 2022

Can anyone help me try to get https://coredump.ws/index.php?dir=code&post=NoVNC_with_audio working on Windows?
Or can someone give me a copy of https://github.com/wu191287278/noVNC-audio so I can look at it?
I'm a complete noob with streaming audio or compiling C, so please help me.

@giegiey

giegiey commented Aug 24, 2022

This is my solution; it outputs sound with less than one second of latency on a local network.
Make sure to enable Stereo Mix and set it as the default device in the Windows sound settings.

Audify.js https://almoghamdani.github.io/audify/index.html
PCM-Player JS https://github.com/samirkumardas/pcm-player
Websocket https://www.npmjs.com/package/websocket

Example

Master or VNC Server side

//Audify audio : https://almoghamdani.github.io/audify/index.html
const WebSocket = require('ws')

var wss = new WebSocket.Server({
    port: 8080
});

console.log('Server ready...')
wss.on('connection', function connection(ws) {
    console.log('Socket connected. sending data...')
})

const {
    RtAudio,
    RtAudioFormat,
} = require("audify")

// Init RtAudio instance using default sound API
const rtAudio = new RtAudio()
rtAudio.outputVolume = 0

// Open the input/output stream
rtAudio.openStream({
        deviceId: rtAudio.getDefaultOutputDevice(), // Output device id (Get all devices using `getDevices`)
        nChannels: 2, // Number of channels
        firstChannel: 0 // First channel index on device (default = 0).
    }, {
        deviceId: rtAudio.getDefaultInputDevice(), // Input device id (Get all devices using `getDevices`)
        nChannels: 2, // Number of channels
        firstChannel: 0 // First channel index on device (default = 0).
    },
    RtAudioFormat.RTAUDIO_SINT16, // PCM Format - Signed 16-bit integer
    48000, // Sampling rate is 48 kHz
    480, // Frame size is 480 samples (10 ms at 48 kHz)
    "MyStream", // The name of the stream (used for JACK Api)
    pcm => {
        wss.clients.forEach(function each(client) {
            if (client.readyState === WebSocket.OPEN) {
                client.send(pcm)
            }
        })
        rtAudio.write(pcm)
    } // Input callback: forward each PCM chunk to every connected WebSocket client and echo it to the output device
)

// Start the stream
rtAudio.start()

noVNC side

example.html

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <title>Opus to PCM</title>
</head>

<body>
    <div id="container" style="width: 400px; margin: 0 auto;">
        <h2>It should play audio if everything went well!</h2>
        <p>However, the AudioContext may not allow starting the playing without a user gesture. Click this text to
            instruct the disobedient browser you really want to listen to the audio!</p>
    </div>
    <script type="text/javascript" src="js/pcm-player.js"></script>
    <script type="text/javascript" src="js/script.js"></script>
</body>

</html>

script.js

//PCM Player : https://github.com/samirkumardas/pcm-player/blob/master/example/server/server.js

window.onload = function () {
    var socketURL = 'ws://192.168.1.107:8080'
    var player = new PCMPlayer({
        encoding: '16bitInt',
        channels: 2,
        sampleRate: 48000,
        flushingTime: 2000
    })

    var ws = new WebSocket(socketURL)
    ws.binaryType = 'arraybuffer'
    ws.addEventListener('message', function (event) {
        var data = new Uint16Array(event.data)
        player.feed(data)
        player.volume(1)
    })
}

@christian-saldana

christian-saldana commented Oct 8, 2022

@giegiey This solution worked great for me! I had to connect remotely to a Windows computer a few hundred miles away and I was able to make the audio lag unnoticeable. Originally the lag was about 2 seconds. I am very new to anything related to browser audio, so it took me a while, but then I realized all you had to do was decrease the flushingTime within the script.js file:

window.onload = function () {
    var socketURL = 'ws://192.168.1.107:8080'
    var player = new PCMPlayer({
        encoding: '16bitInt',
        channels: 2,
        sampleRate: 48000,
        flushingTime: 50
    })

    var ws = new WebSocket(socketURL)
    ws.binaryType = 'arraybuffer'
    ws.addEventListener('message', function (event) {
        var data = new Uint16Array(event.data)
        player.feed(data)
        player.volume(1)
    })
}

From my understanding, the flushing time determines how often the browser processes new audio data. If there are any negative consequences to this someone please let me know.

This was very easy to set up with virtually no audio lag from a computer outside of my network.

@masad-frost

masad-frost commented Oct 27, 2022

https://github.com/replit/rfbproxy and #1525 solve this! There might be synchronization issues (out of scope for now), but no codec issues at least.

@AirplanegoBrr

How is audio coming along? I would like official support for audio. I did try some people's PRs and stuff but I was unable to get it to work

@CendioOssman
Member

It would first require server support. Support for the Replit server is being worked on in #1525. I don't think anyone is working on support for the QEMU server.

I'm not aware of any other servers with audio support.

@no-body-in-particular

> How is audio coming along? I would like official support for audio. I did try some people's PRs and stuff but I was unable to get it to work

https://coredump.ws/index.php?dir=code&post=Small_efficiency_improvements I've updated the code I use for audio with noVNC at the link above. Latency is much lower with this setup; there's a native C server for audio, as well as an init script that demonstrates how to launch it.

@AirplanegoBrr

Would be nice if it was on github and not some random site, but thank you!

@no-body-in-particular

> Would be nice if it was on github and not some random site, but thank you!

Will get to that sometime :P I have more projects that I'm going to have to move from my personal site to github. Just got done updating the code yesterday, and tarred it as-is on the server :)

@no-body-in-particular

> Would be nice if it was on github and not some random site, but thank you!

Finally got done putting it up on github.

https://github.com/no-body-in-particular/NoVNCAudio

All the needed config files, init script - cleaned the code up some more and made it slightly more reliable.

@no-body-in-particular

Aaand done: audio is only accepted from an authenticated IP, and the dependency on websockify has been removed ;) That should be the last update of that repo for now.

@hasan4791

Has anyone tried this approach, btw?
https://github.com/me-asri/noVNC-audio-plugin/tree/main

@kroese

kroese commented May 22, 2024

@hasan4791 In the readme it says it only works for Linux guests, not Windows guests.
