Integrate into gobject.MainLoop? #11
Hello, great project!
Is it possible to integrate this into a gobject.MainLoop? I've read about set_poll_func but I am not quite sure how that fits into gobject. Furthermore, is this library compatible with any pulseaudio version?

Comments
Hey. Thanks.

First of all, pulseaudio already uses a glib event loop as its main loop by default - see https://lazka.github.io/pgi-docs/#GLib-2.0. I'm not sure if it's the same GLib.MainLoop though - it should be possible to create a bunch of unrelated contexts/loops with glib - you can probably check the source, or test it by e.g. registering a callback (a sketch of such a test follows below). With async code though, the problem should also be that one would need to invert control flow somehow.

So, best options to explore (that I can think of) - probably not an exhaustive list of options. I suppose you might get some deprecation warnings or errors along the way.

EDIT: clarified which "main" loop I meant in "not sure if it's main loop" - GLib.MainLoop, ofc.
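For what it's worth, here is a minimal sketch of the kind of test meant above - my illustration, not from the thread, assuming GLib.timeout_add and pulsectl's event_listen API: register a glib callback and see whether it ever fires while pulsectl's event loop runs.

```python
from gi.repository import GLib
import pulsectl

def tick():
    print('glib callback fired')
    return True  # returning True keeps the timeout scheduled

def show(ev):
    print('pulse event:', ev)

GLib.timeout_add(500, tick)

with pulsectl.Pulse('loop-test') as pulse:
    pulse.event_mask_set('all')
    pulse.event_callback_set(show)
    # If 'tick' never prints during these 5 seconds,
    # the two loops are indeed unrelated.
    pulse.event_listen(timeout=5)
```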
Btw, not sure if you mean this module or libpulse in the compatibility question there - and I do realize that versions of either can matter.
Thanks for your fast and detailed response! Well, it is quite sure that those are not the same loops. I started some experiments:

```python
import gobject
import pulsectl

def task1():
    print('running task1')
    return True  # True keeps the timeout scheduled

def print_events(ev):
    print('Pulse event:', ev)

pulse = pulsectl.Pulse('test')
pulse.event_mask_set('all')
pulse.event_callback_set(print_events)

mainloop = gobject.MainLoop()
gobject.timeout_add(1000, task1)
context = mainloop.get_context()

# Iterate the glib context while it has pending work,
# otherwise poll pulse events for a short interval.
while mainloop is not None:
    if context.pending():
        context.iteration()
    else:
        pulse.event_listen(timeout=0.01)
```

That works great in first tests. Basically it combines both loops to act as one. The gobject one is the main event loop, and the pulsectl event loop is being checked when the main one idles. From what I read in your source, this does not seem to create and destroy any objects every 0.01 seconds, so from my perspective the performance is great, and it seems like this is the intended use for event_listen's timeout. It does not seem to be the case, but is there a chance of missing some events while the main loop is working?
Yeah, it certainly seems to be the case, unfortunately. Though looking at the sources, I'd say it might still be the case that pulse inits all event handlers in prepare() there and removes them in dispatch() or something like that (checked it because I remember re-implementing this bit in pulsectl.py somewhere), which is why you don't have these triggered when running the glib mainloop - but it seems unlikely and maybe not very useful anyway.
True, I also put it in the last place there because it seems rather complicated - but likely unavoidably so - to me as well.
Oh yeah, that's where I've kinda re-implemented that bit.
Indeed, it should only check/create some timer-related values and do a few minor state-checks, like whether the context got disconnected/closed and whether a threading lock is required (not the case here). I think you can actually use timeout=0 there, which should make it a non-blocking poll.

Wasn't thinking of mixing different event loops together that way, no. Was rather thinking that timeout might be useful if one's e.g. waiting for 1s for something to happen, doing something else when it does (signal success, exit), or some other thing if it does not happen by that time (e.g. quit with error, whatever). But that should only mean that I didn't use/test it that way myself, nothing else. A sketch of the timeout=0 variant follows below.
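To illustrate - a minimal sketch, not from the thread, assuming the pulse object from the earlier snippet and pulsectl's documented timeout=0 non-blocking poll behavior - the hybrid loop above can also be driven entirely from glib, draining queued pulse events in a periodic callback:

```python
from gi.repository import GLib

def drain_pulse_events():
    # timeout=0 makes event_listen() a non-blocking poll: it dispatches
    # whatever events are already queued and returns immediately.
    pulse.event_listen(timeout=0)
    return True  # returning True keeps this timeout scheduled

# Check the pulse event queue ten times per second from the glib loop.
GLib.timeout_add(100, drain_pulse_events)
GLib.MainLoop().run()
```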
Don't know details of the pulseaudio "native" protocol, but I can imagine two ways it might work:

- Server only sends event info when the client polls for it.
- Server pushes a packet to the client socket for each event as it happens.

In the first case, doing such a poll once per second should work great, but if it's the second case, obviously something can get lost, the client might get disconnected, or something like that. Again, unfortunately I don't know the protocol details to answer conclusively, but maybe I will check the sources - or you can probably do that yourself, or e.g. ask in #pulseaudio on freenode, might get an easy answer from devs/maintainers there, or maybe on the ML.
Figured I'd just connect to pulse via tcp socket with something like this:
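(The original snippet didn't survive here - the following is a reconstruction of the idea, assuming pulseaudio's module-native-protocol-tcp is loaded on its default port 4713 and using pulsectl's server argument:)

```python
# Connect over TCP so the traffic is visible to wireshark, subscribe to all
# events, then just sleep so that nothing reads them from the socket.
import time
import pulsectl

pulse = pulsectl.Pulse('event-sniff-test', server='tcp4:localhost:4713')
pulse.event_mask_set('all')
pulse.event_callback_set(lambda ev: None)  # needed for event_listen(), unused here
time.sleep(600)  # watch the packets/queues pile up while the client stays idle
```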
Then run wireshark and see what's flying there. Bad news - definitely the second case - each event sends a packet with a 40B payload, apparently the event data. Given default socket buffer sizes, that'd be something like 1k-5k pulse events within that 1s window necessary to overflow them on my machine - don't think that's realistic, something like 50-100 is likely the high watermark here. So I'd say checking on that queue once per second should be perfectly fine for real-world scenarios, unless it's a really busy pulse instance or some of the parameters above are different.

You can also easily check what happens when the buffer overflows that way - replace sleep() with input() + event_listen() there (see the sketch below), spam pulse events (e.g. roll volume bars or something) until it's clear that the queue is full in wireshark/ss, then send '\n' to trigger event_listen() and see what happens.
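(Again a hedged sketch of that overflow test, swapping the sleep() in the snippet above for an input() + event_listen() cycle:)

```python
# Let events pile up while input() blocks, then drain the backlog on Enter.
while True:
    input('events are queuing; press Enter to drain...')  # raw_input() on py2
    pulse.event_listen(timeout=0)  # non-blocking poll: dispatch whatever queued
```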
Thanks again for your efforts and your awesome support! Are you planning on supporting blocking calls? I have some legacy python2 code full of blocking calls... and integrating asyncio is not that easy in python2.

Thanks. I don't think this module will ever support asyncio or a non-blocking API - i.e. it should stay as it is now.
Just randomly thought of another thing related to this topic - you can run two separate glib loops from two separate threads just fine, so it might be another way to integrate the thing. As both threads will be IO-bound, there shouldn't be any issue with the GIL, and it should be relatively easy to keep all things pulse in a separate daemon thread, proxying calls to it from the main one via some queue.

Weird that I kinda forgot about this option, given that I did exactly that with this module in the mk-fg/pulseaudio-mixer-cli and AccelerateNetworks/PagingServer projects.
Sorry for the delay, was busy with some other refactoring. I followed your advice and I am very happy with it. Not nearly as ugly as it sounded at first 😄

```python
from gi.repository import GObject
import pulsectl
import threading
import multiprocessing

class PulseThread(threading.Thread):
    '''Daemon thread running pulsectl's blocking event loop,
       forwarding every event into a queue.'''

    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def on_event(self, event):
        self.queue.put(event)

    def run(self):
        pulse = pulsectl.Pulse('test1')
        pulse.event_mask_set('all')
        pulse.event_callback_set(self.on_event)
        pulse.event_listen()

class PulseReactor(object):

    def __init__(self):
        self.pulse_queue = multiprocessing.Queue()
        self.pulse = pulsectl.Pulse('test2')  # second instance for blocking calls

    def task1(self):
        print('running task1')
        return True

    def on_pulse_event(self, fd, condition):
        try:
            event = self.pulse_queue.get_nowait()
        except Exception:  # queue.Empty - nothing to read after all
            return True
        print(event)
        self.update_sinks()
        return True

    def update_sinks(self):
        for sink in self.pulse.sink_list():
            print(sink)

    def start(self):
        pulse_thread = PulseThread(self.pulse_queue)
        pulse_thread.daemon = True
        pulse_thread.start()
        mainloop = GObject.MainLoop()
        GObject.timeout_add(1000, self.task1)
        # Wake the glib loop whenever the queue's underlying pipe is readable.
        GObject.io_add_watch(
            self.pulse_queue._reader, GObject.IO_IN | GObject.IO_PRI,
            self.on_pulse_event)
        try:
            mainloop.run()
        except KeyboardInterrupt:
            pass

reactor = PulseReactor()
reactor.start()
```

One thing I am wondering: are there any potential side effects when using 2 instances of pulsectl? One is currently living in the main thread ('test2'), the other in the daemon thread ('test1'). Why did you decide to deny blocking calls from the event callbacks in the first place?
I've never actually tested two client instances running in the same pid, but from all I've seen in libpulse, it shouldn't be a problem there. The implementation looks rather straightforward; a few things stick out to me though.
Forgot to mention here that glib/gobject has its own get_monotonic_time wrapper - https://lazka.github.io/pgi-docs/#GLib-2.0/functions.html#GLib.get_monotonic_time - which would probably be the best option for glib/gobject-based apps.
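(For illustration - a tiny sketch, assuming the pulse object from the snippets above; GLib.get_monotonic_time() returns microseconds from a monotonic clock, so it's suited for measuring intervals like the event_listen timeouts discussed earlier:)

```python
from gi.repository import GLib

t0 = GLib.get_monotonic_time()   # microseconds, monotonic clock
pulse.event_listen(timeout=1.0)
elapsed = (GLib.get_monotonic_time() - t0) / 1e6
print('listened for %.3fs' % elapsed)
```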