Drop trio-run-in-process, use pure trio process spawner, test out of channel ctrl-c subactor cancellation #128
Conversation
grrr,
This is an initial solution for #120: allow spawning `asyncio` based actors which run `trio` in guest mode. This enables spawning `tractor` actors on top of the `asyncio` event loop whilst still leveraging the SC-focused internal actor supervision machinery. Add a `tractor.to_asyncio.run()` api to allow spawning tasks on the `asyncio` loop from an embedded (remote) `trio` task and return or stream results all the way back through the `tractor` IPC system, using an api very similar to portals.

One outstanding problem is getting SC around calls to `asyncio.create_task()`: currently a task that crashes isn't able to easily relay the error to the embedded `trio` task without us fully enforcing the portals-based message protocol (which seems superfluous given the error ref is in-process). Further experiments using `anyio` task groups may alleviate this.
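For context, here is a minimal sketch, assuming nothing about `tractor` internals, of what hosting `trio` in guest mode on top of the `asyncio` loop looks like using `trio.lowlevel.start_guest_run()`:

```python
import asyncio
import trio

async def trio_main() -> str:
    # stand-in for the embedded trio task tree
    await trio.sleep(0.1)
    return "hello from trio, hosted by asyncio"

async def asyncio_main() -> None:
    loop = asyncio.get_running_loop()
    trio_done = loop.create_future()

    # start trio as a "guest": it schedules its own steps by handing
    # callbacks to the host asyncio loop
    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=trio_done.set_result,  # receives an outcome.Outcome
    )
    print((await trio_done).unwrap())

if __name__ == "__main__":
    asyncio.run(asyncio_main())
```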
This is an edit to factor out the changes needed for the `asyncio` guest mode integration (which currently isn't well tested) so that later, more pertinent changes (which are well tested) can be rebased off of this branch and merged into mainline sooner. The *infect_asyncio* branch will need to be rebased onto this branch as well before merging to mainline.
An initial attempt to discover an issue with `trio-run-in-process`. This is a good test to have regardless.
Verify that ctrl-c, as a user would trigger it, properly cancels the actor tree. This was an issue with `trio-run-in-process` that clearly wasn't being handled correctly, but it is now with the plain old `trio` process spawner. Resolves #115
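A minimal sketch of the kind of check this enables (not the repo's actual test; the script path and timings are assumptions): spawn the program in its own process group and deliver SIGINT to the whole group, the way a terminal ctrl-c would.

```python
# POSIX-only sketch: emulate a user ctrl-c against a spawned actor tree.
# "examples/spawn_tree.py" is a hypothetical script that spawns subactors.
import os
import signal
import subprocess
import sys
import time

def test_ctrl_c_cancels_tree():
    proc = subprocess.Popen(
        [sys.executable, "examples/spawn_tree.py"],
        start_new_session=True,  # own session/process group, like a terminal job
    )
    time.sleep(1)  # give the tree a moment to spawn children
    os.killpg(os.getpgid(proc.pid), signal.SIGINT)  # ctrl-c to the whole group
    # the whole tree should exit promptly; wait() raises TimeoutExpired if not
    proc.wait(timeout=5)
```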
Using the context manager interface does some extra teardown beyond simply calling `.wait()`. Pass the subactor's "uid" on the exec line for debugging purposes when monitoring the process tree from the OS. Hard code the child script module path to avoid a double import warning.
We don't really need stdin for anything but passing the entry point, and detaching it seemed to just cause errors on cancellation teardown.
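For reference, a rough sketch of the spawn-side shape described in these two commits; the child module name, flag, and uid format are assumptions, and in older `trio` versions `trio.lowlevel.open_process()` lived at `trio.open_process()`:

```python
import sys
import trio

async def spawn_subactor(uid: str) -> None:
    cmd = [
        sys.executable,
        "-m", "tractor._child",   # hard-coded child module path (assumed name)
        "--uid", uid,             # shows up in `ps`/`top` for OS-level debugging
    ]
    # stdin stays attached; the entry point info travels via argv here
    proc = await trio.lowlevel.open_process(cmd)
    try:
        await proc.wait()
    finally:
        # extra teardown beyond a bare .wait(): make sure the child is gone
        if proc.returncode is None:
            proc.kill()
            await proc.wait()
```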
@guilledk @carlosplanchon this is now rebased onto the hand-pulled changes. Please take a look at this if you have the time 🥺
Ok, somehow we've borked windows?
Force-pushed from 7b38faa to cf8a980
The new pure `trio` spawning backend uses `subprocess` internally, which is also supported on windows, so let's run it in CI.
Force-pushed from 4c4a6d3 to df5e702
- ease up on first stream test run deadline
- skip streaming tests in CI for mp backend, period
- give up on > 1 depth nested spawning with mp
- completely give up on slow spawning on windows
This continues from #127, since some history was reworked to get some upstream changes, to do with #121, in before this PR.
Much thanks to @guilledk for starting this effort!
The main things we have accomplished thus far:

- dropped `trio-run-in-process` in place of our own native `trio` based spawner, thus solving "Let's roll our own subproc spawner ('the way the experts would')" #117

Things still on the todo:

- `asyncio` integration as per #121 (which still has a bunch of outstandings); see `git log --follow tractor/to_asyncio.py`
- drop `cloudpickle` and instead pass all `Actor.__init__()` data to `Actor._async_main()`, allowing us to just use `msgpack` and the `Actor.load_modules()` / `Actor._get_rpc_func()` way of invoking the target entrypoint (see the sketch after this list)

Closes #117, #115, #112
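As a rough illustration of that last todo item, here's a minimal sketch, with illustrative field names and a stand-in entrypoint, of passing only msgpack-serializable init data plus a `(module, function)` name pair, and resolving the target function by name on the child side instead of shipping pickled callables:

```python
# Minimal sketch (not tractor's actual code): ship only msgpack-serializable
# init data plus (module, function) names; the child resolves the function by
# name instead of unpickling a callable. Field names here are illustrative.
import importlib
import msgpack

# "parent" side: plain data only, no pickled code objects
init_data = {
    "name": "worker",
    "uid": ["worker", "3f1c"],              # illustrative uid format
    "parent_addr": ["127.0.0.1", 1616],     # illustrative transport address
    "entrypoint": ["math", "sqrt"],         # (module path, function name)
    "args": [16.0],
}
wire_bytes = msgpack.packb(init_data, use_bin_type=True)

# "child" side: unpack, import the module, look the function up by name
unpacked = msgpack.unpackb(wire_bytes, raw=False)
mod_path, func_name = unpacked["entrypoint"]
func = getattr(importlib.import_module(mod_path), func_name)
assert func(*unpacked["args"]) == 4.0
```

The point of the change is that nothing code-shaped crosses the wire: the child only needs importable module paths and plain data, which appears to be the approach the `Actor.load_modules()` / `Actor._get_rpc_func()` path already takes.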