Clean up async code #631
base: main
Conversation
Very nice improvements!
While I'm cleaning up: why do both exist? When I look at the Rust documentation, there seems to be only a single synchronous operation.
Nice work, I like the set_error() and set_result(). I think that crossed my mind when I worked on this, but I can't recall why I went with that weird result dict.
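For context, here is a minimal sketch of the set_result()/set_error() style being praised here (illustrative only; the actual WgpuAwaitable in this PR differs):

```python
class SimpleAwaitable:
    """Toy awaitable whose outcome is reported via set_result()/set_error()."""

    def __init__(self):
        self._done = False
        self._result = None
        self._error = None

    def set_result(self, value):
        self._result = value
        self._done = True

    def set_error(self, error):
        self._error = error
        self._done = True

    def __await__(self):
        while not self._done:
            # A bare yield defers to the event loop; note that this trick
            # works under asyncio but not trio (see the discussion below).
            yield
        if self._error is not None:
            raise self._error
        return self._result
```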
Good question. The reason is that, in order to implement this method in JS, we probably just have to call …
I was going to ask about anyio. It seems like a reasonable solution, but I
didn’t want to bring in a third-party library without prior approval.
And pytest-anyio conveniently runs both asyncio and trio.
Ignore the push I just did. I’ll rewrite to use anyio.
On Mon, Nov 4, 2024 at 13:11, Korijn van Golen wrote:
Just an open suggestion: maybe it would be helpful to leverage anyio
<https://anyio.readthedocs.io/en/stable/index.html> if our goal is to
support both trio and asyncio? It has a pytest plugin too. It might make
our lives simpler.
Applications and libraries written against AnyIO’s API will run unmodified
on either asyncio or trio
This quote from their docs seems very promising.
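For context, a minimal sketch of what that "run unmodified on either asyncio or trio" claim looks like in practice (illustrative only, not code from this PR):

```python
import anyio

async def main():
    # The same coroutine runs unmodified on either backend.
    await anyio.sleep(0.1)
    print("done")

anyio.run(main)                  # asyncio backend (the default)
anyio.run(main, backend="trio")  # the same code on trio, if trio is installed
```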
For testing it definitely makes sense. For implementation, only if it remains somewhat contained and an implementation detail. There are a lot of changes coming in how wgpu-native implements futures, and I'm also not sure what we'd need to do to support this running in JS. My point is that this code will evolve quite a bit in the coming months, and I don't want to lock into a dependency.
The Python documentation on writing implementation-agnostic asynchronous
code is very thin. I'm still investigating if there is an easy way of
saying in an async function "Let other coroutines run and then come back to
me" that doesn't tie the code down into a specific implementation.
This is one of the more painful areas of Python development. It's a lot
easier if you know which async library you're going to use.
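For what it's worth, if the anyio route is taken, it does expose a backend-neutral "let other tasks run" primitive. A minimal sketch, assuming anyio as the dependency under discussion (the `is_done` predicate is a made-up placeholder, not part of this PR):

```python
import anyio

async def poll_until_done(is_done):
    # Backend-agnostic "let other coroutines run and then come back to me":
    # anyio.lowlevel.checkpoint() works under both asyncio and trio.
    while not is_done():
        await anyio.lowlevel.checkpoint()

# Usage sketch:
#   anyio.run(poll_until_done, some_predicate)                  # on asyncio
#   anyio.run(poll_until_done, some_predicate, backend="trio")  # on trio
```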
I need some advice on how to finish this. I'm temporarily using anyio, which can deal with both trio and asyncio, and it's mostly working. The problem I'm running into is on the build. I've included "anyio" in pyproject.toml, and the build and tests run fine. But when the wheel is being built, it doesn't include anyio. How do I tell the wheel-builder to include this? Just as an aside, I went to a clean environment and did "pip install wgpu". I was surprised to see pycparser included; I can't find where that dependency is coming from.
Wheels don't ship their dependencies; they're only listed in the wheel's metadata files. When you build a wheel and then install that wheel into a new environment, you should see anyio being installed.
I'm guessing pycparser is a transitive dependency of cffi.
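For context, one way to confirm that a dependency ends up in the wheel's metadata (rather than inside the wheel itself) is to inspect the installed distribution. A small sketch, assuming wgpu is installed in the current environment:

```python
from importlib.metadata import requires

# Lists the Requires-Dist entries recorded in wgpu's wheel metadata;
# "anyio" should appear here once it is declared in pyproject.toml.
for req in requires("wgpu") or []:
    print(req)
```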
The object returned by WgpuAwaitable is now directly awaitable. Make Almar happy.
It looks like trio can't handle a bare `yield None` coming out of `__await__`:

```python
import time

import trio

class Awaitable:
    def __await__(self):
        while True:
            yield None

async def main():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(child1)
        nursery.start_soon(child2)

async def child1():
    await Awaitable()

async def child2():
    while True:
        print(time.time())
        await trio.sleep(0.01)

trio.run(main)
```
Everything resolved.
I will agree with Almar on the dependency management topic here. If you switch from anyio to sniffio, you can write something like the following:

```python
def get_async_backend():
    lib = sniffio.current_async_library()
    if lib == "asyncio":
        import asyncio
        return asyncio
    # and so on
```

That would reduce our dependency footprint quite a lot. @almarklein I do prefer to …
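And a hypothetical usage of that helper for a backend-agnostic "yield to the loop" (sniffio only reports which library is running; the sleep(0) dispatch is an illustration, not code from this PR):

```python
async def let_others_run():
    # asyncio.sleep(0) and trio.sleep(0) both act as a checkpoint,
    # handing control back to whichever event loop is running.
    backend = get_async_backend()
    await backend.sleep(0)
```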
Interesting that CI is green; if I check out this branch and run an example, it panics in …
I was wondering how our trio example was able to run, while your new test indicated that trio does not handle the yield trick.
edit: I was hoping that the yield trick allowed us to stay async-lib-independent without much effort. But it looks like this is not possible.
Is there something holding this up?
Yes, these two comments:
I double checked "yield None" with the simplest possible code:
Meanwhile …
@almarklein What are you doing to reproduce your crash? I haven't seen it on either my Mac or on the headless Linux machines I occasionally test on.
@almarklein Hmm, it works just fine on my Mac… Let me see if I can reproduce this on a headless Linux device. If not, I'll just close this PR; it's not worth the hassle it's causing.
@almarklein Is there some secret to being able to run examples/cube.py on a headless machine with a GPU, with the results written to a backing buffer rather than the actual screen? I can't seem to figure out how to do that. What is particularly strange is that although examples/cube.py includes an asynchronous implementation, the code that's actually running is completely synchronous, and the error message you're seeing is utterly bizarre.
@almarklein Can you give me details on OS, Python version, and GPU? I rewrote cube.py to be windowless, ran it on a Linux machine, and it just worked. The bug you've found sounds serious, but I just can't reproduce it…
I'm on macOS (M1), Python 3.12.4. This is very strange indeed. I'm having a quick look.
edit: I think I know what the problem is. The pipeline descriptor is now built in a separate method, but some of the subfields are not referenced anymore and therefore get cleaned up.
I found and fixed the problem by adding a …
The last remaining issue is to replace …
@almarklein Are all bugs gone? I'm still surprised you had GC problems when I didn't. I wonder if cffi is implemented slightly differently on different machines.
For the record, I'll try to explain what was happening a bit better. When you create an ffi struct, it does not hold a reference to its fields when these fields are pointers. So if you do:

```python
sub_struct = ffi.new("CSubType")
struct = ffi.new("CType")
struct.aField = sub_struct
```

then struct does not keep sub_struct alive. We solved this problem long ago using the … However, for arrays we had code like this in many places:

```python
list_of_sub_structs = [new_struct("CSubType", ...) for ... in ...]
array_of_sub_structs = ffi.new("CSubType[]", list_of_sub_structs)
struct.a_field = array_of_sub_structs
```

These arrays are contiguous copies of the list of structs. The problem is not that the sub-structs in the list get cleared by the gc (as I thought earlier), but that any sub-sub-structs and/or sub-sub-arrays of these sub-structs get cleared when the sub-struct gets cleared. The issue with arrays was not hit earlier, because in most cases we create a descriptor and then use it to instantiate a wgpu object. As code is refactored to create the descriptor in a separate method (which is a great change by itself), the lists go out of scope before the descriptor is passed to Rust.
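For anyone unfamiliar with this cffi pitfall, here is a small self-contained sketch of the keep-alive pattern being described (the type names and the returned `refs` list are made up for illustration; wgpu-py's actual helper may differ):

```python
import cffi

ffi = cffi.FFI()
ffi.cdef("""
    typedef struct { int value; } CSubType;
    typedef struct { CSubType *sub; } CType;
""")

def new_struct_with_refs():
    # The sub-struct is an owning allocation; assigning its pointer below
    # does NOT make `struct` keep it alive.
    sub_struct = ffi.new("CSubType *")
    sub_struct.value = 42
    struct = ffi.new("CType *")
    struct.sub = sub_struct
    # Keep-alive pattern: hold the owning cdata objects somewhere with the
    # same lifetime as the parent struct (here, simply returned alongside it).
    refs = [sub_struct]
    return struct, refs

struct, refs = new_struct_with_refs()
print(struct.sub.value)  # 42, safe for as long as `refs` stays alive
```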
Wow, this PR is interesting in being quite technical/deep on two very different topics 😅
Asynchronous code in _api needed to know too much about the internal workings of the AwaitHandler. Changed it so that there's a little bit more separation.
Also made some asynchronous functions really be asynchronous.
Added on_submitted_work_done.
Commented out the implementation of create_xxx_pipeline_async because these are unimplemented in wgpu-native. The code is still there, but a constant forces the synchronous implementation.
Added tests, and added a dependency on pytest-asyncio to run them.
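For context, a hypothetical shape of such a pytest-asyncio test (the awaited function is a stand-in, not an actual wgpu API):

```python
import asyncio

import pytest

async def sample_async_operation():
    # Stand-in for one of the real async wgpu calls exercised by the new tests.
    await asyncio.sleep(0)
    return "ok"

@pytest.mark.asyncio
async def test_async_call_completes():
    assert await sample_async_operation() == "ok"
```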