
async/await/suspend/resume #6025

Open
Tracked by #11899
andrewrk opened this issue Aug 11, 2020 · 53 comments
Labels
enhancement Solving this issue will likely involve adding new logic or components to the codebase. frontend Tokenization, parsing, AstGen, Sema, and Liveness.
Milestone

Comments

@andrewrk
Member

andrewrk commented Aug 11, 2020

This is a sub-task of #89.

@andrewrk andrewrk added enhancement Solving this issue will likely involve adding new logic or components to the codebase. frontend Tokenization, parsing, AstGen, Sema, and Liveness. labels Aug 11, 2020
@andrewrk andrewrk added this to the 0.7.0 milestone Aug 11, 2020
@andrewrk andrewrk modified the milestones: 0.7.0, 0.8.0 Oct 13, 2020
@andrewrk andrewrk modified the milestones: 0.8.0, 0.9.0 Jun 4, 2021
@andrewrk andrewrk modified the milestones: 0.9.0, 0.10.0 Nov 21, 2021
@Vexu Vexu mentioned this issue Jun 20, 2022
7 tasks
@Vexu Vexu modified the milestones: 0.10.0, 0.11.0 Aug 6, 2022
@linkpy

linkpy commented Nov 10, 2022

I'm willing to try implementing async/await/suspend/resume for stage2, as I require them for a project I'm working on.

The issue is that I don't really know where to start.
It seems like AstGen supports them.
Sema doesn't (it calls failWithUseOfAsync, so here I know where I need to work), so I'll start with that.

The AIR only has async_call, async_call_alloc, suspend_begin, and suspend_end instructions. Looking at stage1, it seems the await/suspend/resume instructions are missing. Should I just add the instructions and replace the calls to failWithUseOfAsync, using stage1's implementation as a reference?

Furthermore, is async implemented in stage2 in a similar way to stage1? (Basically, is stage1 a good representation of how stage2 implements and uses frames, async calls, suspends, resumes, etc.?)

Edit: I've been using the stage2-async branch, assuming that's where the async development is being done.

andrewrk added a commit that referenced this issue Dec 1, 2022
This commit removes async/await/suspend/resume from the language
reference, as that feature does not yet work in the self-hosted
compiler.

We will be regressing this feature temporarily. Users of these language
features should stick with 0.10.x with the `-fstage1` flag until they
are restored.

See tracking issue #6025.
@kuon
Contributor

kuon commented Dec 6, 2022

I've been following the WASI development and it seems to be going great! That being said, I am currently working on a new project and I am using some specific stage2 features. I am not using async yet, but I'd love to introduce it soon. Can you provide a very rough estimate of when this is planned to be merged in master? It is just for general planning (no pressure). Cheers!

francisbouvier added a commit to lightpanda-io/zig-js-runtime that referenced this issue Jan 12, 2023
*Async JS*

For now only callback style is handled (Promises is planned later).

We use a persistent handle on the v8 JS callback call after retrieving the
event from the kernel, as the parent JS function has finished and
therefore local handles have already been garbage collected by v8.

* Event Loop*

We do not use the event loop provided in Zig stdlib but instead Tigerbeetle
IO (https://github.com/tigerbeetledb/tigerbeetle/tree/main/src/io).
The main reason is to have a strictly single-threaded event loop, see
ziglang/zig#1908.
In addition, the design of Tigerbeetle IO, based on io_uring (for Linux,
with a wrapper around kqueue for macOS), seems to be the right direction for IO.

Our loop provides callback-style native APIs. Async/await-style native
APIs are not planned until the zig self-hosted compiler (stage2) supports
concurrency features (see ziglang/zig#6025).

Signed-off-by: Francis Bouvier <francis.bouvier@gmail.com>
mlugg added a commit to mlugg/zig that referenced this issue Mar 17, 2023
There are now very few stage1 cases remaining:
* `cases/compile_errors/stage1/obj/*` currently don't work correctly on
  stage2. There are 6 of these, and most of them are probably fairly
  simple to fix.
* 'cases/compile_errors/async/*' and all remaining 'safety/*' depend on
  async; see ziglang#6025.

Resolves: ziglang#14849
mlugg added a commit to mlugg/zig that referenced this issue Mar 17, 2023
mlugg added a commit to mlugg/zig that referenced this issue Mar 19, 2023
mlugg added a commit to mlugg/zig that referenced this issue Mar 19, 2023
andrewrk pushed a commit that referenced this issue Mar 20, 2023
truemedian pushed a commit to truemedian/zig that referenced this issue Mar 30, 2023
@Vexu Vexu removed this from the 0.11.0 milestone Apr 23, 2023
@noonien

noonien commented Feb 15, 2024

Please, don't go the horrendous path of async. It'll be a massive time sink, the ABI for function calling will never be the same, resulting in function colors, and the entire ecosystem will either be split, or all async.

Could you look into stackful continuations and effects instead?

@kuon
Contributor

kuon commented Feb 15, 2024

Please, don't go the horrendous path of async. It'll be a massive time sink, the ABI for function calling will never be the same, resulting in function colors, and the entire ecosystem will either be split, or all async.

Could you look into stackful continuations and effects instead?

Do you mean something like setjmp and longjmp in C? And for effect what do you have in mind?

I think that in general this is a good discussion to have. I did plenty of rust, and I hate their futures as a user even if many of the design parts make sense.

To me, it comes down to use cases. We must find what problems we (the developers who use the language) want to solve. In #6025 (comment) I explain a use case that would be covered by a non-local jump. But there are more uses of async/await.

As a note: in practice, JavaScript is where I do the most async work now. I realize that 99% of the time I only want to write sequential imperative code, but I must have await everywhere because one function down the stack calls a JS API that must be async because its result is not immediate.

I think Apple's API such as dispatch and run loop can be of inspiration when thinking around concurrent API design.

@mlugg
Member

mlugg commented Feb 15, 2024

[...] resulting in function colours [...]

Why does a function's ABI introduce function colours in any practical way? The only thing it really means is that you can't get a default-callconv function pointer to an async function; if you need to support async functions somewhere that you're using function pointers, then you can use callconv(.Async) pointers which also support non-async functions. There is technically some colouring here, but given how rare function pointers are... not in any practical sense.

Per my understanding, the goal of Zig's colourless async can be framed as that if I have a non-async project, I can throw an async call somewhere into it - potentially causing hundreds or thousands of functions to in turn become async - and it'll basically Just Work.

[...] the entire ecosystem will either be split, or all async.

A huge benefit of colourless async is that it should help avoid this problem. If Alice writes an async version of a package, and Bob a synchronous version, they should both be able to be plugged straight in to any project - regardless of whether or not it is already async - and everything should work with only minor changes.

@noonien

noonien commented Feb 15, 2024

@mlugg

Why does a function's ABI introduce function colours in any practical way? The only thing it really means is that you can't get a default-callconv function pointer to an async function;

It seems you have answered your own question.

A huge benefit of colourless async is that it should help avoid this problem. If Alice writes an async version of a package, and Bob a synchronous version

Again, you've proved my point: the ecosystem would be divided into async and normal code.

@kuon

Do you mean something like setjmp and longjmp in C?

Continuations can be implemented by using setjmp and longjmp, yes.

The Wikipedia page for Continuations does a good job of explaining them, as well as providing a list of languages that support them.

It's important to understand what async is and what problems it solves.
I would say there are two parts of what people consider async to be:

  • Futures/Promises, these are a way of scheduling tasks cooperatively.
  • async/await, these are keywords that delimit computations.

Delimiting computation means splitting up a function into multiple parts, providing the ability for each part to be executed in multiple ways, and allowing for control flow to be more flexible.

Async/await is what we usually call an implementation of a subset of stackless coroutines.

Coroutines are an abstraction that helps with delimiting computation, however, compared to just async/await they have the additional benefit of being able to yield multiple times. These are also usually stackless.

Continuations are a much lower level of abstraction for delimiting computation, however, they are much more powerful, providing the ability to capture the control state of the current computation as a first-class value. This can be used to implement coroutines, generators, and even effects.

Continuations are usually implemented by capturing the stack, or by having a split-stack system. This means there is just one ABI for calling functions, the same one everyone else uses, which allows calling into C, or any other language, without any issue.

And for effect what do you have in mind?

There is also a good Wikipedia page for Effect systems.

Effects are in essence a control flow method.

They can be used to implement cooperative scheduling (as with async + Future/Promise), but also much more than that, such as function purity, because they can be used to abstract over "colors".

Here are some examples of effects: concurrency, I/O, error handling, allocation.
Because effects can be abstracted over, the user can plug in their own functionality for any of those, and more.

Continuations and effects complement each other very well, and both have been very well studied for a long time. Continuations are implemented in quite a few mainstream languages, while effects are just now gaining more attention: OCaml has adopted them, JavaScript has a proposal for algebraic effects, and new languages such as Koka, Eff, and Unison experiment with them.

@omentic

omentic commented Feb 15, 2024

Please, don't go the horrendous path of async. It'll be a massive time sink, the ABI for function calling will never be the same, resulting in function colors, and the entire ecosystem will either be split, or all async.

Is not the whole point of Zig's async/await implementation and its focus on @Frames to not introduce this split?

@mrschyte

mrschyte commented Feb 16, 2024

Please, don't go the horrendous path of async. It'll be a massive time sink, the ABI for function calling will never be the same, resulting in function colors, and the entire ecosystem will either be split, or all async.

Is not the whole point of Zig's async/await implementation and its focus on @Frames to not introduce this split?

You would still have to write basically two versions (usually interspersed into a single file / function) to account for the different behavior between async and non-async code. As an example, running a producer/consumer type async function in blocking mode would result in a deadlock if you don't have if/else statements to handle this case.

It would be really nice to have continuations built into the language as it is a simple, but very powerful primitive. I've played around with call/cc before in scheme and you can build all sorts of effects on it as @noonien points out.

@ethernetsellout

It would be nice to see what is being proposed explicitly w.r.t. effects and continuations, rather than links to wikipedia articles and such. If it's anything that would show up in a function signature, that would clearly constitute coloring. It seems that effects, for instance, would have to show up in the return type of a function.

Why does a function's ABI introduce function colours in any practical way? The only thing it really means is that you can't get a default-callconv function pointer to an async function;

It seems you have answered your own question.

The question is whether this shows up in practice. It seems this is the one area where async might introduce coloring, unless taking a function pointer to async code isn't all that common anyway.

@mlugg
Member

mlugg commented Feb 16, 2024

Yeah, a concrete proposal would be nice. If you (anyone here!) have a specific idea which you believe to be superior to async, please open a separate issue with a proposal (use the "blank issue" button), which should detail:

  • How the feature should work (syntax and semantics)
  • What problem it solves
  • Why it is better than the alternative (i.e. async)
  • Benefits and drawbacks

Without a proposal, a plea to eliminate async will not be taken seriously, since it is a suggestion to remove a useful language feature with no good alternative. If you think there are fundamental problems with async but do not have a specific alternative in mind, you can open a proposal to remove async, but it will require heavy justification.

Regardless, let's cease discussion of this here - if a proposal is made, its merits and problems can be talked about there. This is a tracking issue for the re-implementation of async, and hence not really the right place for this conversation.

@andrewrk
Member Author

anyone here

If you don't have any open source project written in zig with 100+ commits, don't bother

@mobiuscog

I'm not smart enough to write a proposal, I'm old-school enough that I much prefer green/virtual threads over async where it makes sense, and I certainly don't have the influencer levels that @andrewrk demands on projects ;)

However, I do value the vision that Zig holds so far and want to mention one thing that has been alluded to already and that I think is really important in the 'real' world:

A language should never be async all the way down.

One reason I moved away from a certain other 'modern' language is that the majority of 3rd-party libraries became async-only. They required you to use async, as the authors only cared about that and "why wouldn't you write async".

I didn't want to use async code, and didn't have a need to, but suddenly I was having to make changes to use a popular library. Sure, it was possible to work around, but then the code became more complex to maintain. Not great for something that promised 'zero cost'.

I trust Andrew to make the right decisions - it's his language after all and that's why zig is so easy to read whilst being as powerful as it is. I just think it's also important to think about the 'average' user in all of this, not just the compiler or language experts, or the college theorists.. but the people that may want to use the language and not find that their code becomes less maintainable over time because one feature now means multiple ways to write code.

One day, when I've fully gotten up to speed with zig, I'd hope to contribute on a more formal level, but for now I just want to see it become the language I'm growing to love. There is so much potential, and doing things 'right' is so important.

I learned about zig zen the other day. May it forever guide.

TUSF pushed a commit to TUSF/zig that referenced this issue May 9, 2024
@folago

folago commented May 24, 2024

I also don't have enough knowledge of Zig (or of levels this low) to write a proposal, but I saw a parallel between task scheduling and memory allocation.

To allocate you first need an allocator of some kind, like an arena or bump allocator.
To run async tasks you first need a "scheduler" (for lack of a better name) of some kind, like an event loop or something more N:M like in Go.
Either the allocator/scheduler is passed down to you as a parameter, or you can create your own of the type you need.

Then async becomes a method of a scheduler, or a subtype, that returns a handle to the task which can be used to await() the result, cancel() the task, or something else depending on the scheduler (maybe you can yield to another handle like a coroutine).

Here is an extremely rough, short, and not well-thought-out example.

const std = @import("std");

pub fn main() void {
    var scheduler = std.concurrent.Async.init(std.concurrent.event_loop);
    defer scheduler.deinit(); // waits for all tasks to terminate or be canceled, maybe can take a timeout param

    var task_runner = scheduler.runner(); // can run async functions but not, for example, deinit(), so it is safer to pass around

    var task_handle = task_runner.async(do_things());

    // do other things

    var result = task_handle.wait();
    // could be task_handle.cancel()
    // maybe different methods depending on the type of scheduler
}

fn do_things() !Result {
    // do all the concurrent things
}

I am not sure if mirroring the allocation paradigm/pattern will result in function coloring by explicitly passing "schedulers" as is done for allocators.
But I assume the functions you call via scheduler.async(fn) can be regular functions, so no coloring there.

At the beginning I thought this was too wild and did not post it, but then I stumbled upon this post about nurseries as a concept for structured concurrency, which I feel somewhat validates this idea; if you squint, the "scheduler" in the example above looks like a nursery in the linked post.

I like how deterministic the nursery concept is and I think it would be a good fit for Zig.

I am sure there are thousands of small devilish details to consider that I am not even aware of, but maybe this path is worth investigating unless it has been already discarded.

@brymer-meneses

I really like the async_func().await syntax in rust. It's so much cleaner than having to do something like

var some_value = (await (await async_func()).some_other_thing()).finally()
var some_value = async_func().await.some_other_thing().await.finally()

Would it be possible to adopt this kind of syntax?

@applejag

applejag commented Jun 5, 2024

I really like the async_func().await syntax in rust. It's so much cleaner than having to do something like

var some_value = (await (await async_func()).some_other_thing()).finally()
var some_value = async_func().await.some_other_thing().await.finally()

Would it be possible to adopt this kind of syntax?

The original async syntax zig had would not be affected by this, as explained in this blog post from 2020: https://kristoff.it/blog/zig-colorblind-async-await/

Don't know if the syntax idea has changed, but I really liked that Zig just flipped the async/await function call usage, so that the common case of calling a non-async function and calling an async function has the same syntax.

const some_value = async_func().some_other_thing().finally()

// Equivalent to:
const frame = async async_func()
const other_frame = async (await frame).some_other_thing()
const some_value = (await other_frame).finally()

// Equivalent to:
const some_value = (await async (await async async_func()).some_other_thing()).finally()

Though, in this reversed case where you want to grab the async frame, maybe then it could be a field named async on the async function and an await field on the frame, just to get rid of the wrapping parentheses

const frame = async_func.async()
const other_frame = frame.await().some_other_thing.async()
const some_value = other_frame.await().finally()

// Equivalent to:
const some_value = async_func.async().await().some_other_thing.async().await().finally()

@mlugg
Member

mlugg commented Jun 5, 2024

That syntax has not changed, and (if async is re-implemented) will not change, because it's required for colorless async. So, yes, we don't need await to be postfix for convenient chaining, because you just write async_func().async_method().synchronous_method().another_async_method() and everything works.

@revskill10

revskill10 commented Jul 2, 2024

I think you could make async an actual function that transforms a sync function into an async one.

const asyncfn = std.async(syncfn);
const result = asyncfn();

@xphoenix

xphoenix commented Aug 5, 2024

Good day, any ETA for this?

@wooster0
Contributor

wooster0 commented Aug 5, 2024

https://github.com/ziglang/zig/wiki/FAQ#what-is-the-status-of-async-in-zig

@sirweixiao

This comment was marked as off-topic.

@mlugg
Member

mlugg commented Oct 12, 2024

Please don't add noise like this to the issue tracker.

  • The fate of async in Zig is undecided; it may return, if we determine we can do it well. See the FAQ entry linked above.
  • Zig is not C++; the design constraints are incredibly different. In particular, Zig has a focus on simplicity and minimalism which is in stark contrast to C++'s design.
  • More generally, the direction other languages take, particularly those such as C++, does not have any effect on us as a language.
  • The statement "coroutines are the trend of the future" is a non-point which cannot be used to justify a complex language feature.

If you can provide a particular reason you think Zig should retain async functionalities (or not) -- especially a concrete use case -- then feel free to give it. Otherwise, rest assured that the core team will get to this issue with time.

@sirweixiao

This comment was marked as spam.

@sirweixiao

This comment was marked as off-topic.

@JiaJiaJiang

JiaJiaJiang commented Oct 13, 2024

Hello everyone, I have a problem; I don't know if it is suitable for this topic.
I read the Thread.zig source code and found that to create a child thread in nodejs wasm, I need to define the thread-spawn method in the importObject of the WebAssembly.instantiate method. Zig's wasm thread implementation will call this method to create a thread.
However, creating a worker in js is an asynchronous operation, so if I create a new worker in this method, it cannot wait for the thread to be created; the wasm process will continue to execute, and of course it will not be able to join the child thread normally (because it has not even been created yet).
Since the relevant zig operations are currently synchronous, the wasm process continues to occupy the host js process, so the js process cannot complete the creation of the child thread until the wasm code finishes executing.
If there were a way for the wasm (zig) process to actively yield execution back to the js process when creating a thread, and then return to the wasm process after js completes the creation, this problem would be solved. Otherwise, it seems there is no good way to use multithreading in nodejs wasm without changing the zig code to fit (maybe the process needs to be split into several parts and called separately from js).

@kuon
Contributor

kuon commented Oct 13, 2024

@JiaJiaJiang I am not sure if this will fix your issue. But I hacked something that can turn async JS calls into sync zig calls.

On zig side, have something like this:

extern fn send_recv(
    buf: [*]const u8,
    buf_len: usize,
    result: [*]u8,
    result_len: *usize,
) u8;

then, on the JS side (in your WASM thread that you spawn in a web worker), bind a function like this one:

    return function (buf_ptr, buf_len, result_ptr, result_len_ptr) {
        // instance is created with something like: WebAssembly.instantiate(...).instance
        const mem = get_memory_buffer() // returns instance.exports.memory.buffer
        const view = get_memory_view() // returns new DataView(instance.exports.memory.buffer)
        const ctx = get_shared_context() // see below
        const data = new Uint8Array(mem, buf_ptr, buf_len)

        ctx.lock()
        ctx.write(data)
        ctx.client_notify()
        ctx.unlock()

        ctx.wait_for_server()
        ctx.lock()
        const result = ctx.read()
        ctx.unlock()

        const result_len = view.getUint32(result_len_ptr, true)

        if (result.length === 0) {
            return 1 // error code for zig
        }

        if (result.length > result_len) {
            return 2 // error codes for zig
        }

        view.setUint32(result_len_ptr, result.length, true)
        const dest = new Uint8Array(mem, result_ptr, result.length)
        dest.set(result)

        return 0 // error code for zig
    }

In another web worker, do something like this:

    const step = async function () {
        const ctx = get_shared_context() // send the same context to both workers
        if (ctx.wait_for_client(10) !== true) {
            step()
            return
        }
        ctx.lock()

        const request = ctx.read()

        // process request.buffer; you can pass JSON commands, function names... encode it the way you like
        const response = await whatever_process_request(request) // this is where the magic happens: it turns an async call into a sync call
        ctx.write(new Uint8Array(response))

        ctx.server_notify()
        ctx.unlock()
        step()
    }

    step() // this starts an infinite loop

A shared context is something I threw together to synchronize two threads:

export default function SharedContext(buffer) {
    if (!buffer) {
        throw new Error("Buffer must be a shared buffer")
    }
    const META = new Int32Array(buffer, 0, 4)

    const LOCK = 0
    const CLIENT_NOTIFY = 1
    const SERVER_NOTIFY = 2
    const BUF_LEN = 3

    // LOCK values
    const UNLOCKED = 0
    const LOCKED = 1

    // NOTIFY values
    const OFF = 0
    const ON = 1

    const DATA = new Uint8Array(buffer, 16) // start at offset 16

    function write(buf) {
        if (buf.length > DATA.length) {
            return 1
        }

        DATA.set(buf, 0)
        Atomics.store(META, BUF_LEN, buf.length)

        return 0
    }

    function writeU32(n) {
        const buf = new Uint8Array(4)
        // a DataView must wrap an ArrayBuffer; setUint32 takes (offset, value, littleEndian)
        new DataView(buf.buffer).setUint32(0, n, true)
        return write(buf)
    }

    function lock() {
        while (true) {
            Atomics.wait(META, LOCK, LOCKED)
            if (
                Atomics.compareExchange(META, LOCK, UNLOCKED, LOCKED) ===
                UNLOCKED
            ) {
                Atomics.notify(META, LOCK)
                break
            }
        }
    }

    function unlock() {
        Atomics.store(META, LOCK, UNLOCKED)
        Atomics.notify(META, LOCK)
    }

    function read() {
        const len = Atomics.load(META, BUF_LEN)
        return DATA.slice(0, len)
    }

    function readU32() {
        const buf = read()
        // getUint32 takes (offset, littleEndian); the view wraps the slice's buffer
        return new DataView(buf.buffer).getUint32(0, true)
    }

    function client_notify() {
        Atomics.store(META, CLIENT_NOTIFY, ON)
        Atomics.notify(META, CLIENT_NOTIFY)
    }

    function server_notify() {
        Atomics.store(META, SERVER_NOTIFY, ON)
        Atomics.notify(META, SERVER_NOTIFY)
    }

    function wait_for_client(timeout) {
        if (Atomics.wait(META, CLIENT_NOTIFY, OFF, timeout) === "timed-out") {
            return false
        }

        Atomics.store(META, CLIENT_NOTIFY, OFF)

        return true
    }

    function wait_for_server(timeout) {
        if (Atomics.wait(META, SERVER_NOTIFY, OFF, timeout) === "timed-out") {
            return false
        }

        Atomics.store(META, SERVER_NOTIFY, OFF)

        return true
    }

    return {
        buffer,
        lock,
        unlock,
        write,
        read,
        client_notify,
        server_notify,
        wait_for_client,
        wait_for_server,
    }
}

Create it like this: shared_context = SharedContext(new SharedArrayBuffer(1024 * 1024))

This is something I threw together to unblock my project; I didn't analyze the performance, but it works well enough.

@JiaJiaJiang

JiaJiaJiang commented Oct 14, 2024

@kuon Thank you for your reply.
I tried to understand your code. Does it implement calling a method in another thread and using an atomic lock to wait for the result?
In my case, the problem occurred before the worker was created. The caller (in wasm) and the called async method (in js) were actually in two different contexts of the same thread. So if there is no way for me to actively switch from the middle of the wasm process to js to let it complete the event loop, then js cannot execute any asynchronous code.

@mlugg
Member

mlugg commented Oct 14, 2024

Please have this discussion in a Zig community instead. This issue exists to track the implementation of Zig's async/await feature in the self-hosted compiler. The issue tracker isn't for questions.

@kuon
Contributor

kuon commented Oct 14, 2024

@mlugg I disagree that this discussion is not relevant to this thread. I think it provides good insights on real-world usage and can help prioritize this issue and decide how it should be implemented. I use zig in a fairly large and popular app through WASM, and I was able to work around the missing async with this approach. It literally made the project possible. It shifted my priorities from "Zig needs async asap" to other things.

Deciding if async should come back, and in what exact form, is a very important topic for zig, and all pertinent arguments for and against should be weighed. If we can give developers a temporary working solution and unblock their workflow, I think it can have a big impact on their reception of zig.

With that said, I agree that the issue tracker should not be used for a ping/pong kind of discussion as the essence of the issue can be highly diluted and I am sorry if my participation made it go that way.

@DiXaS

DiXaS commented Oct 31, 2024

I'm sorry, I'm not an expert in asynchronous programming, but tell me: why is it impossible to add a runtime like Go's with green threads, behind something like const cor = @import("coroutine")? (This would solve the problem of colored functions.)

@aep

aep commented Nov 5, 2024

@noonien

Could you look into stackful continuations and effects instead?

I intended to propose this a while back but was discouraged because it seemed fundamentally incompatible with what everyone else wants.

I think any sensible proposal would need a good answer for how this works with JavaScript. The current answer is that LLVM coroutines work, so that's just the path of least resistance.
