async_mutex::unlock() makes a direct call to handle.resume() on the next awaiter in the queue. This prevents the code releasing the mutex from continuing its execution until the next awaiter -- and potentially every awaiter behind it -- has completed. ("Completed" here meaning either running to the end, or co_awaiting an operation that schedules the remaining work for later execution.)
It also means that an awaiter always resumes on the thread of the first lock holder in the chain.
static async_mutex s_mutex{};

IAsyncAction DoWork() {
    {
        auto scopedLock = co_await s_mutex.lock_async();
        // (A) Do work that needs the mutex
    }
    // Lock is destroyed, calling s_mutex.unlock().
    // The next queued call to DoWork starts running. We are now blocked here until it completes.
    // (B) This line doesn't run until every queued DoWork is processed.
}
This behavior isn't intuitive. It's unusual for a future operation to have an impact on the current one. I would have expected instead that the code calling unlock() queues the next awaiter through some scheduler (thread pool, UI queue, etc.), allowing the current operation to run through (B) immediately.
I realize the current design is more agnostic to the environment, and devs can still customize where execution continues. So I'd ask that you at least improve the documentation here to make clear 1) the impact of this design, and 2) that you may want to call something like winrt::resume_* after the lock_async() call to avoid blocking previous operations.
What's more: what if the resumed coroutine throws an exception?
async_mutex_lock releases the async_mutex in its destructor by calling mutex->unlock(), so the resume happens inside a destructor and there is potential undefined behavior.