
"Guest mode", for cohabitation with Qt etc. #1551

Merged
merged 39 commits into from
Jun 2, 2020

Conversation

njsmith
Member

@njsmith njsmith commented May 25, 2020

This adds trio.lowlevel.start_guest_run, which initiates a call to trio.run "on top of" some other "host" loop. It uses the regular Trio I/O backends, so all features are supported. It does not use
polling, so it's efficient and doesn't burn CPU. All you have to do is provide a threadsafe way to schedule a callback onto the host loop.

Since Trio code and the host loop are sharing the same main thread, it's safe for Trio code to call synchronous host loop functions, and for host code to call synchronous Trio functions.

This started as an early draft with no docs or tests, and only the epoll mode tried; it is now ready for review/merge.

Fixes: #399

TODO/open questions:

  • Needs Add thread cache #1545 to be merged first

  • Check how much overhead this adds to regular trio.run mode

  • Double-check that it's actually reasonably efficient when combined with e.g. Qt

  • Tests

  • Docs

  • How on earth should lifetime management work between the two loops (maybe this is a problem for downstream libraries that use this? but it'd be nice if reasonable options were possible) [conclusion: we just need to document what's possible, and make a strong recommendation that when using this mode you start up the host loop, then immediately start up the trio loop, and treat the trio loop exiting as the signal to shut down the host loop]

  • What on earth should we do about signal handling in general? (share set_wakeup_fd with the host? use the host's run_sync_soon_threadsafe in our signal wakeup path?)

    • and what about control-C in particular?
  • add a non-threadsafe run_sync_soon handler?

  • Should we support multiple run calls simultaneously in the same host loop? It has some logic, but would make accessing Trio from the host substantially more complicated... (conclusion: not in version 1)


Simple demo of Trio and asyncio sharing a thread (also included as
notes-to-self/aio-guest-test.py):

import asyncio
import trio

async def aio_main():
    loop = asyncio.get_running_loop()

    trio_done_fut = asyncio.Future()
    def trio_done_callback(main_outcome):
        print(f"trio_main finished: {main_outcome!r}")
        trio_done_fut.set_result(main_outcome)

    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=trio_done_callback,
    )

    (await trio_done_fut).unwrap()


async def trio_main():
    print("trio_main started!")

    to_trio, from_aio = trio.open_memory_channel(float("inf"))
    from_trio = asyncio.Queue()

    asyncio.create_task(aio_pingpong(from_trio, to_trio))

    from_trio.put_nowait(0)

    async for n in from_aio:
        print(f"trio got: {n}")
        await trio.sleep(1)
        from_trio.put_nowait(n + 1)
        if n >= 10:
            return

async def aio_pingpong(from_trio, to_trio):
    print("aio_pingpong started!")

    while True:
        n = await from_trio.get()
        print(f"aio got: {n}")
        await asyncio.sleep(1)
        to_trio.send_nowait(n + 1)


asyncio.run(aio_main())

Output:

trio_main started!
aio_pingpong started!
aio got: 0
trio got: 1
aio got: 2
trio got: 3
aio got: 4
trio got: 5
aio got: 6
trio got: 7
aio got: 8
trio got: 9
aio got: 10
trio got: 11
aio got: 12
trio_main finished: Value(None)

@njsmith njsmith marked this pull request as draft May 25, 2020 10:02
@codecov

codecov bot commented May 25, 2020

Codecov Report

Merging #1551 into master will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master    #1551      +/-   ##
==========================================
+ Coverage   99.68%   99.69%   +0.01%     
==========================================
  Files         110      111       +1     
  Lines       13498    13865     +367     
  Branches     1027     1059      +32     
==========================================
+ Hits        13455    13823     +368     
+ Misses         28       27       -1     
  Partials       15       15              
Impacted Files Coverage Δ
trio/_core/__init__.py 100.00% <ø> (ø)
trio/lowlevel.py 100.00% <ø> (ø)
trio/_core/_io_epoll.py 100.00% <100.00%> (ø)
trio/_core/_io_kqueue.py 85.00% <100.00%> (+1.98%) ⬆️
trio/_core/_io_windows.py 98.57% <100.00%> (+0.03%) ⬆️
trio/_core/_ki.py 100.00% <100.00%> (+1.49%) ⬆️
trio/_core/_run.py 99.76% <100.00%> (+0.02%) ⬆️
trio/_core/_wakeup_socketpair.py 100.00% <100.00%> (ø)
trio/_core/tests/test_guest_mode.py 100.00% <100.00%> (ø)

@njsmith
Member Author

njsmith commented May 25, 2020

Basic outline

Changes to I/O backends

We split the old handle_io(timeout) into two pieces: events = get_events(timeout); process_events(events). The idea is that get_events has the blocking part, and process_events has the part that touches global state, so we can shove get_events off into a thread.
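
Schematically, the split looks something like this for an epoll-style backend (a sketch only; the real backends track a lot more state, and everything here besides the get_events/process_events names is illustrative):

import select

class SketchEpollIOManager:
    def __init__(self):
        self._epoll = select.epoll()
        self._registered = {}  # fd -> tasks waiting on that fd (illustrative)

    def get_events(self, timeout):
        # The blocking half: it touches no shared Trio state, so it can
        # safely run off in a worker thread.
        return self._epoll.poll(timeout)

    def process_events(self, events):
        # The non-blocking half: it reschedules waiting tasks, so it must
        # run on the thread that owns the run loop.
        for fd, flags in events:
            waiters = self._registered.get(fd)
            # ... wake up the tasks in `waiters` (details omitted) ...

    def handle_io(self, timeout):
        # The old single entry point is just the composition of the two.
        self.process_events(self.get_events(timeout))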

I also had to add a force_wakeup method, which is largely redundant with the existing re-entry task system, but it seemed bad to depend on the re-entry task always running... maybe there's some way to merge these, I'm not sure

Changes to core run loop

The core run loop stays the same, except that we replace:

runner.io_manager.handle_io(timeout)

with

events = yield timeout
runner.io_manager.process_events(events)

So that means our core run loop is now a generator (!), and it requires some driver code to read out the timeout, call io_manager.get_events(timeout), and then send the return value back in.

The normal trio.run has a trivial driver loop that just does that.

The "guest mode" driver is a bit cleverer: it arranges to push the get_events off into a thread, and then when it returns it schedules another iteration of our generator onto the host loop.

We keep a strict alternation: either the I/O thread is running looking for I/O, or the Trio scheduler is running (or waiting to run) on the host loop. You never have both running at once.
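
To make that concrete, here's a rough sketch of the two drivers (illustrative only; the real Runner code has more moving parts, and run_in_thread here stands in for whatever thread-dispatch helper actually gets used):

def trivial_driver(unrolled_run_gen, io_manager):
    # Normal trio.run: feed the generator in a plain synchronous loop.
    events = None
    while True:
        try:
            timeout = unrolled_run_gen.send(events)
        except StopIteration:
            return
        events = io_manager.get_events(timeout)


class GuestDriver:
    # Guest mode: each tick runs one scheduler pass, then hands the
    # blocking get_events() call to a worker thread; when that finishes,
    # the next tick is scheduled back onto the host loop.
    def __init__(self, unrolled_run_gen, io_manager,
                 run_sync_soon_threadsafe, run_in_thread):
        self.unrolled_run_gen = unrolled_run_gen
        self.io_manager = io_manager
        self.run_sync_soon_threadsafe = run_sync_soon_threadsafe
        self.run_in_thread = run_in_thread  # run_in_thread(fn, on_done)
        self.next_send = None

    def tick(self):
        try:
            timeout = self.unrolled_run_gen.send(self.next_send)
        except StopIteration:
            return  # the Trio run is finished

        def deliver(events):
            # Called from the worker thread: bounce back to the host loop.
            self.next_send = events
            self.run_sync_soon_threadsafe(self.tick)

        self.run_in_thread(lambda: self.io_manager.get_events(timeout),
                           deliver)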

Originally I was imagining something a bit cleverer, where the I/O thread runs continuously with infinite timeout, and just streams back I/O events to Trio as they happen. The problem with this is that if we're not using the io_manager to handle timeouts, then we have to come up with some other mechanism for that. Now, we could require the host loop to provide some kind of call_at(time, ...) API. But it seemed to add a lot of complications: now you need some way to cancel and reschedule the call_at handlers, you need some way to make sure the time-triggered and I/O-triggered and host-triggered scheduler ticks don't step on each other, the wait_all_tasks_blocked code gets more complicated (since it relies on knowing when a timeout has passed without any tasks being scheduled), etc.

Oh right, host-triggered scheduler ticks: we want to allow the host loop code to invoke synchronous Trio APIs, like memory_channel.send_nowait, nursery.start_soon, cancel_scope.cancel. These might cause tasks to be rescheduled, or global deadlines to change, while Trio is waiting for I/O! This is a new thing that can't happen in regular Trio.

The hack I used to make it work is: if a task becomes scheduled, or the next deadline changes, at a time when the I/O thread is running, then we force the I/O thread to wake up immediately, which triggers a scheduler tick, which will pick up whatever changes were made by the host. This is the most awkward part of this approach, and why I had to add the new force_wakeup method on IO managers.

(Note: I'm not 100% sure I got all the details of this state machine correct yet...)

However, even in the "cleverer" approach where timeouts are managed on the main thread, you still need the equivalent of a force_wakeup method, because you need to have some way to tell the background I/O thread to exit when shutting down!

Anyway, so this approach just seemed simpler all around, and hopefully not too inefficient. There are a handful of extra syscalls and thread transitions compared to the "fancy" approach, but AFAICT it ought to be unnoticeable in practice. Though we should confirm empirically :-)

@njsmith
Member Author

njsmith commented May 26, 2020

On further thought, I guess signals are not a big deal. The tricky bit is set_wakeup_fd, but the way we use the wakeup fd, all we need is that after a signal arrives, the Python VM starts interpreting bytecode again promptly. So there are two cases:

  • The host loop has registered its own set_wakeup_fd. In this case we know it's a Python loop that's listening for signals and doing something with them, so we can assume that their set_wakeup_fd will wake up the Python VM and that's all we need.

  • The host loop hasn't registered its own set_wakeup_fd. In this case we're free to register our own.

So all we have to do is check whether an fd is already registered, and if it is then skip registering ours.
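
In stdlib terms the check is roughly this (a sketch, not the actual trio code; signal.set_wakeup_fd returns the previously registered fd, or -1 if there wasn't one, and must be called from the main thread):

import signal

def maybe_claim_wakeup_fd(our_fd):
    prev_fd = signal.set_wakeup_fd(our_fd)
    if prev_fd != -1:
        # The host loop already registered a wakeup fd: restore it and
        # leave signal wakeups to the host.
        signal.set_wakeup_fd(prev_fd)
        return False
    return True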

Still no idea what to do about control-C though; I guess that's tightly tied into how people want to coordinate the lifetime of the two loops.

@njsmith
Member Author

njsmith commented May 26, 2020

Ha, it turns out that, as with everything involving async, Twisted also experimented with this kind of approach. They called it their ThreadedSelectReactor: https://github.com/twisted/twisted/blob/trunk/src/twisted/internet/_threadedselect.py

Apparently they ran into some not-so-obvious problems with it – for example, downloads on Windows couldn't achieve line throughput, which was awkward since fancy download apps are one of the most obvious use cases for combining a GUI library + advanced networking library. There's also a lot different between our approach and theirs, though – different decade, use of select versus not, pretty different strategy for passing info back and forth between the main thread and the worker thread, etc. So it's not at all clear how much of their experience still applies. Still, good to get some warnings of what to watch out for.

Conversation with glyph:

<glyph> njs: ugh threadedselectreactor
<glyph> njs: it’s a testament to trio’s better factoring that the implementation is way less gross and I like the public api that’s being exposed
<njs> glyph: ha, I had no idea that threadedselectreactor existed
<njs> twisted minds think alike or something
<njs> though one of the main motivations is to completely eliminate the whole pluggable reactor concept, so I guess twisted didn't get the full benefit
<glyph> njs: oh, we did, it just sucks
<glyph> it performs terribly, and it can't take advantage of various native loop support things, and it's a bug magnet due to the inherently shared state
<glyph> not to mention it has nasty side effects, like popping up firewall access-control dialog boxes for its loopback sockets on windows
<glyph> granted, you could probably do a better job than we did
<njs> I don't think windows does that anymore... asyncio and trio both create loopback sockets unconditionally on windows
<glyph> there was definitely a brief period where we considered eliminating the pluggable reactor with such a thing but it was a definite "no" after a few years of experience with it
<glyph> perhaps I will expand in more detail in my memoir, "threads don't actually work and neither do signals"
<njs> I think I have a fair idea of what I'm getting into bug-magnet-wise, but I am curious about the terrible performance and the native loop support things you wanted to use but couldn't
<glyph> too bad tef already has the trademark on 'lessons learned from a life wasted'
<njs> going in and out of a hot thread is ~10 µs on my laptop, which seems pretty acceptable for an idle GUI app (it only uses the thread when it knows it has to sleep for non-zero time)
<njs> oh, or by "native loop" do you mean "everything that's not 'select'"?
<glyph> njs: as I recall, Windows is a significant multiple of that, and there are strange thundering-herd problems with the scheduler when you start trying to benchmark straight line wire throughput that moves stuff between threads
<glyph> I know I'm talking a lot of Windows here, but "custom reactor" = "GUI", and the production environment for GUIs is either Windows 10 or iOS in the real world
<njs> yeah
<glyph> njs: my general advice here is both "maybe you will have entirely different problems, it's been a while" and also "beware microbenchmarks"
<njs> I guess I am assuming that most people do not care about whether their GUI app can handle 10k requests/second, so some throughput penalty is acceptable
<glyph> the microbenchmarks told us a REMARKABLY different story than actual whole-application profiles
<glyph> people are OK with a latency penalty on GUI apps! but your GUI app is probably a file downloader so people get real mad about throughput penalties :)
<njs> ha
<njs> so are you saying that microbenchmarks were over-optimistic?
<glyph> yes, or at least, they were
<glyph> "how long does it take to do this thing with a thread" was all pretty fast in isolation; "what does it actually look like when we shove bytes over this connection" ended up being implausibly slow based on those numbers
<njs> interesting
<njs> I'm actually having trouble figuring out where download throughput penalties would come from, if we only touch the thread after getting EWOULDBLOCK on recv
<njs> it looks like threadedselectreactor works a bit differently though
<glyph> njs: oh yeah the details are somewhat different
<glyph> njs: it's the thrashing back and forth to the thread; gigabit TCP, particularly with TLS, won't *reliably* keep your pipe full, though
<glyph> and over-the-internet transfers, you'll definitely see EWOULDBLOCK
<njs> sure
<glyph> (can you tell the application I tested this with the most did bulk data transfers a lot)
<njs> I guess the question is whether the stutter after each EWOULDBLOCK is long enough to overflow the transfer window and stall the underlying flow
<glyph> njs: again, it might just be different today, but a decade ago or whenever this was, yes, you would stall the underlying flow, and in being so stalled, you'd switch to a different thread, and that would tell the kernel scheduler that maybe a good guess is you're gonna be busy so to deprioritize you on the scheduler, which leads to smaller TCP window sizes, etc
<njs> I think the theory says that as long as going in and out of the thread has low latency compared to regular scheduling glitches and/or the latency of the underlying link, then it shouldn't affect throughput
<njs> but as we know, theory is not a very good predictor to how networks behave :-)
<njs> okay, yeah
<njs> thanks for the war stories, it's very helpful :-)
<njs> glyph: do you mind if I dump this log into the issue for reference?
<glyph> sure

Member

@oremanj oremanj left a comment


I'm a huge fan of this approach -- it's much simpler than I was imagining this sort of thing would need to be.

Should we support multiple run calls simultaneously in the same host loop? It has some logic, but would make accessing Trio from the host substantially more complicated...

I don't think it should be the default. Maybe an option to start_guest_run() specifying whether to register the runner in the global run context or not. If you choose "not", you gain the ability to support multiple runs, in exchange for needing to use TrioToken.run_sync_soon to do everything. (And return the TrioToken to make this possible.)

I also had to add a force_wakeup method, which is largely redundant with the existing re-entry task system, but it seemed bad to depend on the re-entry task always running... maybe there's some way to merge these, I'm not sure

One interesting thing this raised for me is that force_wakeup can be more efficient than WakeupSocketpair because it's able to use backend-specific waking mechanisms. Maybe we can use that sort of thing for run_sync_soon wakeups too? I created #1554 to track this.

Originally I was imagining something a bit cleverer, where the I/O thread runs continuously with infinite timeout, and just streams back I/O events to Trio as they happen.

Yeah, I don't think that approach is worth the substantial complexity increase. The one you've pursued in this PR is a lot easier to reason about for anyone familiar with the "normal" straight-through run loop procedures.

Check how much overhead this adds to regular trio.run mode

If this turns out to be a blocker, note that you can use basically the same trick as greenback to write a "generator" in which you can yield from a callee. I think the greenlet library even includes this as one of their examples. That would probably make "guest mode" slower but imply less overhead for normal straight-through mode. (And "guest mode" would depend on greenlet.)


One thing I'm excited about with this diff is that it should be possible to make trio-asyncio "just work" without needing to declare a loop, and no matter whether you start in trio mode or asyncio mode. That's a big win for allowing Trio libraries to be used in asyncio programs; currently only the reverse of that can really be said to be well-supported.

assert self.is_guest
try:
    timeout = self.unrolled_run_gen.send(self.unrolled_run_next_send)
except StopIteration:
Member


Will need to handle other exceptions here somehow, probably by wrapping in TrioInternalError

Member Author


Yeah, so this is a general question... in trio.run, if there's some unexpected crash, we convert it to a TrioInternalError and raise it, cool. In guest mode, I'm not actually sure what the desired behavior even is.

My cop-out in this first draft was to let any unexpected internal errors escape from the host loop callback, so the host loop could handle them... however it does that. (Probably dump them on the console and continue on.) That's not totally unreasonable, but maybe there's something better.

I guess ideally, we would call the completion callback with the TrioInternalError?

Member


Yep, "call the completion callback with the TrioInternalError" is what I was imagining. It seems closest to what normal Trio does. Relying on the host loop to do something reasonable with uncaught exceptions seems like a stretch to me based on standard practice in the field...

Member


FWIW, I set sys.excepthook in my PyQt programs and present a dialog. Lots of people, I suspect, just lose the exceptions and painfully debug.
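
For reference, that pattern is roughly the following (assuming PyQt5; the hook just needs installing before the Qt event loop starts):

import sys
import traceback

import PyQt5.QtWidgets

def show_unhandled_exception(exc_type, exc_value, exc_tb):
    # Last-resort handler for exceptions that escape into the Qt event
    # loop: show them in a dialog instead of losing them.
    text = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    PyQt5.QtWidgets.QMessageBox.critical(None, "Unhandled exception", text)

sys.excepthook = show_unhandled_exception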


def deliver(events_outcome):
    def in_main_thread():
        self.unrolled_run_next_send = events_outcome.unwrap()
Member


Maybe worth storing the outcome directly and using unrolled_run_next_send.send(unrolled_run_gen) to resume? That way you get to handle exceptions through the same path above

Member Author


That would add a capture/unwrap cycle to regular mode, but maybe that doesn't matter.

Member


You're unwrapping in the driver, not inside the generator (so it will resume with throw() if get_events() raised an exception). Regular mode doesn't need to throw I/O exceptions into the generator, since it's just going to exit with a TrioInternalError anyway. But here the I/O exception would be raised out of the callback running in the host event loop, and the likelihood of getting it propagated properly is not good.

if events or timeout <= 0:
    self.unrolled_run_next_send = events
    self.guest_tick_scheduled = True
    self.run_sync_soon_threadsafe(self.guest_tick)
Member


Maybe worth a comment here that if you run into throughput problems, you can get more Trio time per unit wallclock time by changing this to be willing to do multiple run loop ticks per wakeup, up to some total time limit.
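
A sketch of what that tweak could look like, in terms of the generator/IO-manager pair (illustrative names and an arbitrary budget, not part of this PR):

import time

def run_several_ticks(unrolled_run_gen, io_manager, first_events,
                      budget=0.005):
    # Keep doing scheduler passes back-to-back for as long as the run loop
    # never asks to sleep (timeout > 0) and we stay within a small
    # wallclock budget; then return the timeout to wait for next.
    deadline = time.monotonic() + budget
    events = first_events
    while True:
        timeout = unrolled_run_gen.send(events)
        if timeout > 0 or time.monotonic() >= deadline:
            return timeout
        events = io_manager.get_events(0)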

njsmith added 2 commits May 27, 2020 00:19
To allow Trio to efficiently cohabitate with arbitrary other loops.
@oremanj
Member

oremanj commented May 27, 2020

That comment is great, thank you!

@njsmith
Member Author

njsmith commented May 27, 2020

If this turns out to be a blocker, note that you can use basically the same trick as greenback to write a "generator" in which you can yield from a callee. I think the greenlet library even includes this as one of their examples. That would probably make "guest mode" slower but imply less overhead for normal straight-through mode. (And "guest mode" would depend on greenlet.)

Interesting point. Actually, thinking about this, I don't think we'd even need greenlet... the greenlet code would look something like:

timeout = ...
if runner.is_guest:
    events = greenlet_magic_yield(timeout)
else:
    events = io_manager.get_events(timeout)
io_manager.process_events(events)
...

...But if we decide all the yield/resumes are a problem, we could do that just as easily with a regular yield, since we only have one frame involved :-)

@njsmith
Member Author

njsmith commented May 27, 2020

(Of course this would mean that the regular mode would have to do unrolled_run(...).send(None) instead of just unrolled_run(...), but that's just odd-looking, not actually a problem.)

@njsmith
Member Author

njsmith commented May 28, 2020

More wisdom from glyph:

<njs> glyph: so interestingly, Qt's native socket support on Windows is all based on WSAAsyncSelect, which AFAICT works by... running a thread in the background and then posting notifications back to the main thread
<glyph> Njs: oh everything on windows works like that :-). As I recall when i was measuring this the problems were largely with the implementation of the GIL. Maybe that’s gotten better in the intervening decade too
<njs> glyph: ah yeah plausible
<njs> glyph: the GIL did get replaced sometime in there, so at least it has a different set of perverse edge cases now...
<njs> glyph: worst comes to worst, I think I'd rather rewrite our GetQueuedCompletionEvents code in Rust so it can run in a thread without the GIL, than support pluggable backends

@njsmith
Member Author

njsmith commented May 28, 2020

OK, I think I've rearranged enough stuff now that we should be converting any internal errors into TrioInternalError as appropriate and routing them into the done_callback.

We're also now calling signal.set_wakeup_fd by default in guest mode, so signals should hopefully work. And there's a new kwarg on start_guest_run to control whether we claim it or not, since this depends on the host loop.

Still need to figure out control-C handling.

@njsmith
Member Author

njsmith commented May 28, 2020

Well, that's a fun new CI breakage: MagicStack/immutables#46

I guess we might have to disable the nightly python build for a while.

@altendky
Member

After making a working demo with Qt I tried to copy over some code I had for asyncio and twisted that allowed awaiting on Qt signals and got an error. I boiled it down to a trio.Event and a QTimer to reproduce the issue. At first I was getting a recursion error running with 9018066. Now I'm getting the UnboundLocalError shown below. Below was run with 5631de4.

(I give up, I've already pulled updates three times while typing out this comment. :] I'll try to add a test now that there's a test file)

Failable Qt example
import functools
import sys
import threading

import attr
import trio
import PyQt5.QtCore
import PyQt5.QtWidgets


class Runner(PyQt5.QtCore.QObject):
    signal = PyQt5.QtCore.pyqtSignal(object)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.signal.connect(self.slot)

    def run(self, f):
        print('Runner.run()', threading.get_ident())
        self.signal.emit(f)

    def slot(self, f):
        # TODO: no handling of context argument

        print('Runner.slot()', threading.get_ident())
        f()


@attr.s(auto_attribs=True)
class QTrio:
    application: PyQt5.QtWidgets.QApplication
    widget: PyQt5.QtWidgets.QTextEdit
    runner: Runner

    def run(self):
        PyQt5.QtCore.QTimer.singleShot(0, self.start_trio)
        return self.application.exec()

    async def main(self):
        print('QTrio.main()', threading.get_ident())
        self.widget.show()

        fail = True

        for i in range(10):
            if fail:
                print('a1')
                event = trio.Event()
                print('b1')
                timer = PyQt5.QtCore.QTimer.singleShot(1000, event.set)
                print('c1')
                await event.wait()
                print('d1')
            else:
                print('a2')
                await trio.sleep(1)
                print('b2')
            print('e')
            self.widget.append('{}'.format(i))
            print('f')

    def start_trio(self):
        # TODO: it feels a bit odd not getting anything back here.  is there
        #       really no object worth having access to?
        trio.lowlevel.start_guest_run(
            self.main,
            run_sync_soon_threadsafe=self.runner.run,
            done_callback=self.trio_done,
        )

    def trio_done(self, outcome):
        print('---', repr(outcome))
        self.application.quit()


def main():
    print('main()', threading.get_ident())
    qtrio = QTrio(
        application=PyQt5.QtWidgets.QApplication(sys.argv),
        widget=PyQt5.QtWidgets.QTextEdit(),
        runner=Runner(),
    )
    return qtrio.run()


main()
From 9018066 RecursionError: maximum recursion depth exceeded while calling a Python object
main() 139969798338368
Runner.run() 139969798338368
Runner.slot() 139969798338368
Runner.run() 139969798338368
Runner.slot() 139969798338368
Runner.run() 139969798338368
Runner.slot() 139969798338368
QTrio.main() 139969798338368
a1
b1
c1
Runner.run() 139969642370816
Runner.slot() 139969798338368
d1
e
f
a1
b1
c1
Runner.run() 139969798338368
Runner.slot() 139969798338368
[... the Runner.run()/Runner.slot() pair repeats many more times ...]
Traceback (most recent call last):
  File "/home/altendky/repos/trio/y.py", line 27, in slot
    f()
  File "/home/altendky/repos/trio/trio/_core/_run.py", line 1146, in guest_tick
    timeout = self.unrolled_run_gen.send(self.unrolled_run_next_send)
  File "/home/altendky/repos/trio/trio/_core/_run.py", line 1845, in unrolled_run
    runner.io_manager.process_events(events)
  File "/home/altendky/repos/trio/trio/_core/_io_epoll.py", line 226, in process_events
    waiters = self._registered[fd]
RecursionError: maximum recursion depth exceeded while calling a Python object
From b131958 UnboundLocalError: local variable 'timeout' referenced before assignment
main() 139804458121024
Runner.run() 139804458121024
Runner.slot() 139804458121024
Runner.run() 139804458121024
Runner.slot() 139804458121024
Runner.run() 139804458121024
Runner.slot() 139804458121024
QTrio.main() 139804458121024
a1
b1
c1
Runner.run() 139804302153472
Runner.slot() 139804458121024
d1
e
f
a1
b1
c1
Runner.run() 139804458121024
Runner.slot() 139804458121024
[... the Runner.run()/Runner.slot() pair repeats many more times ...]
--- Error(TrioInternalError('internal error in Trio - please file a bug!'))
Traceback (most recent call last):
  File "/home/altendky/repos/trio/y.py", line 27, in slot
    f()
  File "/home/altendky/repos/trio/trio/_core/_run.py", line 1156, in guest_tick
    if timeout <= 0 or type(events_outcome) is Error or events_outcome.value:
UnboundLocalError: local variable 'timeout' referenced before assignment

@njsmith
Member Author

njsmith commented May 28, 2020

Ah, the recursion error is because you need to pass QtCore.Qt.QueuedConnection to your connect call. Otherwise, when you emit a signal from the main thread, Qt tries to run the callback immediately, rather than scheduling it to run on the next event loop tick.
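
For reference, here's a minimal sketch of what that fix looks like (PyQt5 assumed; the Reenter helper and its names are illustrative, not something Trio provides):

from PyQt5 import QtCore

class Reenter(QtCore.QObject):
    run = QtCore.pyqtSignal(object)

    def slot(self, fn):
        fn()

reenter = Reenter()
# QueuedConnection is the important part: without it, emitting the signal from
# the main thread runs the slot immediately (re-entrantly) instead of on the
# next Qt event loop tick.
reenter.run.connect(reenter.slot, QtCore.Qt.QueuedConnection)

def run_sync_soon_threadsafe(fn):
    reenter.run.emit(fn)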

And I think the unbound-local error is because I added some code to report unexpected errors like your RecursionError, but I forgot to make them abort the run loop after being reported :-)

trio/_core/_run.py Outdated Show resolved Hide resolved
@njsmith
Member Author

njsmith commented Jun 1, 2020

So after thinking about it a while, here's where I'm leaning on control-C handling (this is what's currently implemented in this PR):

We strongly recommend that Trio drive the lifetime of the surrounding loop, since... there's not really any better option. You can't have the surrounding loop directly drive Trio's lifetime, since Trio needs to run until the main task finishes. (You can have the surrounding loop nudge the main task to finish though, e.g. by cancelling stuff. But the actual shutdown sequence is still going to be Trio stops → host loop stops → program exits.)
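
For example, a Qt version of that shutdown sequence might look something like this (PyQt5 assumed; run_sync_soon_threadsafe is a Qt-safe scheduler like the one sketched earlier in the thread, and trio_main is just a stand-in for the real application logic):

import trio
from PyQt5 import QtWidgets

app = QtWidgets.QApplication([])

async def trio_main():
    await trio.sleep(3)  # stand-in for the real Trio code

def done_callback(trio_main_outcome):
    print(f"trio_main finished: {trio_main_outcome!r}")
    app.quit()  # Trio finishing is the signal to shut down the host loop

# Start the host loop and the Trio guest run back-to-back:
trio.lowlevel.start_guest_run(
    trio_main,
    run_sync_soon_threadsafe=run_sync_soon_threadsafe,  # Qt helper sketched above
    done_callback=done_callback,
)
app.exec_()  # the program exits once done_callback calls app.quit()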

Most loops have terrible stories for control-C, e.g. Python Qt apps tend to ignore it (you can find lots of discussions about this on the internet), asyncio tends to explode messily, etc.

Putting these two things together, it seems like most users will want Trio to take the KeyboardInterrupt safely, let it propagate out, and then have that trigger the host shutdown and exit the app. This does mean that you can't use control-C to break out of an infinite loop running in the host loop's code, but realistically that doesn't work very well anyway, so... oh well?

There are a few loops that do handle control-C sensibly, e.g. Twisted registers a signal handler that triggers its shutdown procedure. But I think this is fine:

  • By default Trio will skip registering its control-C handler, because it will see that Twisted has already registered one.

  • A Twisted+Trio integration is going to want to hook Trio into the Twisted shutdown procedure (i.e., add a "before", "shutdown" trigger that cancels the Trio main task and waits for it to exit), so using Twisted's control-C handling will be fine, no worse than what you get in a pure Twisted program.

  • A Twisted+Trio integration might want to override this so that Trio does register its control-C handler (e.g., the hacky option: call signal.signal(signal.SIGINT, signal.default_int_handler) after starting the Twisted reactor but before calling start_guest_run; see the sketch just after this list). If so, then it'll give slightly nicer behavior for the Trio parts of your code (you can break out of infinite loops), and effectively the same behavior for the Twisted parts of your code (after the control-C propagates out of Trio, it'll trigger a Twisted reactor shutdown, which is the same thing control-C does in vanilla Twisted).
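
Here's a rough sketch of that hacky option (Twisted assumed; trio_main and the shutdown wiring are illustrative, and a real integration would also inspect the outcome passed to done_callback):

import signal
import trio
from twisted.internet import reactor

async def trio_main():
    await trio.sleep(5)  # stand-in for the real Trio code

def start_trio():
    # By this point the reactor has installed its own SIGINT handler; restore
    # the default handler so Trio's startup sees it and installs Trio's own
    # KeyboardInterrupt handling instead.
    signal.signal(signal.SIGINT, signal.default_int_handler)
    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=reactor.callFromThread,
        done_callback=lambda outcome: reactor.stop(),
    )

reactor.callWhenRunning(start_trio)
reactor.run()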

So I think we don't even need any configuration knobs here... using our standard control-C logic in guest mode pretty much does the best thing possible in all cases.

@njsmith
Member Author

njsmith commented Jun 2, 2020

To check how much overhead this adds to regular trio.run, I wrote a tiny microbenchmark that just triggers scheduler ticks as fast as possible, figuring that this is the worst possible case for these changes:

import time
import trio

LOOP_SIZE = 10_000_000

async def main():
    start = time.monotonic()
    for _ in range(LOOP_SIZE):
        await trio.lowlevel.cancel_shielded_checkpoint()
    end = time.monotonic()
    print(f"{LOOP_SIZE / (end - start):.2f} schedules/second")

trio.run(main)

On my laptop, with current master:

✦ ❯ PYTHONPATH=$HOME/trio perf stat python schedule-microbench.py
179727.56 schedules/second

 Performance counter stats for 'python schedule-microbench.py':

         55,762.59 msec task-clock                #    1.000 CPUs utilized          
               683      context-switches          #    0.012 K/sec                  
                 4      cpu-migrations            #    0.000 K/sec                  
             4,483      page-faults               #    0.080 K/sec                  
   178,323,685,701      cycles                    #    3.198 GHz                    
   348,488,306,514      instructions              #    1.95  insn per cycle         
    77,271,483,176      branches                  # 1385.723 M/sec                  
        93,989,483      branch-misses             #    0.12% of all branches        

      55.781963942 seconds time elapsed

      53.811304000 seconds user
       1.952215000 seconds sys

With current master + this PR:

✦ ❯ PYTHONPATH=$HOME/trio perf stat python schedule-microbench.py
177535.51 schedules/second

 Performance counter stats for 'python schedule-microbench.py':

         56,455.26 msec task-clock                #    1.000 CPUs utilized          
               223      context-switches          #    0.004 K/sec                  
                 2      cpu-migrations            #    0.000 K/sec                  
             4,042      page-faults               #    0.072 K/sec                  
   188,097,332,679      cycles                    #    3.332 GHz                    
   352,296,603,132      instructions              #    1.87  insn per cycle         
    78,393,967,951      branches                  # 1388.603 M/sec                  
       150,810,657      branch-misses             #    0.19% of all branches        

      56.456911871 seconds time elapsed

      54.687663000 seconds user
       1.768085000 seconds sys

So the wall time is ~1% slower, and it executed ~1% more instructions. (I counted instructions mainly as a cross-check against wall time, in case of weird thermal effects or CPU frequency changes.)

This is a very focused microbenchmark: the program does nothing except run through the scheduler as fast as it can. E.g., if you replace cancel_shielded_checkpoint with checkpoint, throughput drops to ~25k schedules/second. (checkpoint is somewhat gratuitously slow, but still.) So my conclusion is that the existence of guest mode has a negligible impact on regular-mode programs.

@njsmith njsmith marked this pull request as ready for review June 2, 2020 00:35
Member

@oremanj oremanj left a comment

Just a few remaining cosmetic issues and I think this is ready to merge! Please also update the title and PR description to reflect the fact that it's not a WIP anymore.

docs/source/design.rst Outdated Show resolved Hide resolved
docs/source/reference-lowlevel.rst Outdated Show resolved Hide resolved
trio/_core/_io_kqueue.py Outdated Show resolved Hide resolved
trio/_core/_ki.py Show resolved Hide resolved
trio/_core/_run.py Show resolved Hide resolved
trio/_core/_run.py Show resolved Hide resolved
@njsmith
Member Author

njsmith commented Jun 2, 2020

Addressed all review comments.

@njsmith njsmith changed the title [rfc] First attempt at "guest mode", for cohabitation with Qt etc. "Guest mode", for cohabitation with Qt etc. Jun 2, 2020
@oremanj oremanj merged commit e0af102 into python-trio:master Jun 2, 2020
@njsmith njsmith deleted the guest-loop branch June 2, 2020 01:55
njsmith added a commit to njsmith/trio that referenced this pull request Jun 16, 2020
njsmith added a commit to njsmith/trio that referenced this pull request Jun 16, 2020
Successfully merging this pull request may close these issues.

Using trio with Qt's event loop?