
Parallelism: Should wait(::Condition) take a Mutex to unlock/lock? #30026

Closed
NHDaly opened this issue Nov 14, 2018 · 4 comments · Fixed by #30061
Labels
multithreading Base.Threads and related functionality

Comments

NHDaly (Member) commented Nov 14, 2018

I saw that Base provides a wait(::Condition) function to add yourself to the Condition's queue:
https://docs.julialang.org/en/v1/stdlib/Distributed/#Base.wait

julia/base/event.jl, lines 40 to 51 at d789231:

function wait(c::Condition)
    ct = current_task()
    push!(c.waitq, ct)
    try
        return wait()
    catch
        filter!(x->x!==ct, c.waitq)
        rethrow()
    end
end

Unless I'm missing some other mechanism that already exists for this, I think we also need a way to provide the Condition with a Mutex that it will unlock before sleeping and re-lock upon waking.

Once multithreading support is in and tasks can run in parallel, I think this is necessary in order to use Conditions for synchronization.

The pattern I'm describing is the Mesa monitor, and the Wikipedia article covers the required API nicely:
https://en.wikipedia.org/wiki/Monitor_(synchronization)#Condition_variables_2
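
To make the semantics concrete, here is a rough sketch, modeled on the Base method quoted above, of what a wait that takes the mutex as an argument could look like (the signature and the use of Threads.Mutex are illustrative, not an existing API):

function wait(c::Condition, m::Threads.Mutex)
    ct = current_task()
    push!(c.waitq, ct)
    unlock(m)               # release the mutex before going to sleep
    try
        return wait()       # sleep until notify(c) reschedules this task
    catch
        filter!(x->x!==ct, c.waitq)
        rethrow()
    finally
        lock(m)             # always re-acquire the mutex before returning
    end
end

A real implementation would need to coordinate with the scheduler so that releasing the mutex and going to sleep happen without a window in which a notify from another thread could be missed.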

We can either have the wait function take a mutex argument, as in the above Wikipedia example, or we could provide the mutex in the Condition constructor.

C++ takes the first approach: void wait( std::unique_lock<std::mutex>& lock )
https://en.cppreference.com/w/cpp/thread/condition_variable/wait

And Golang takes the second approach: func NewCond(l Locker) *Cond
https://golang.org/pkg/sync/#Cond.Wait
https://golang.org/pkg/sync/#NewCond
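
Sketching the Go-style shape in Julia for comparison (the type name and fields here are made up for illustration):

# Hypothetical Go-style shape: the lock is bound once, at construction,
# so every task that waits on this condition uses the same mutex.
struct LockedCondition
    lock::Threads.Mutex
    waitq::Vector{Task}
    LockedCondition(l::Threads.Mutex) = new(l, Task[])
end

# wait(c::LockedCondition) would release c.lock before sleeping and re-acquire it
# before returning, so the call site needs no mutex argument: wait(c).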

I'm not totally sure about the tradeoffs between those two, but seeing as most of our concurrency API is based on Go's, it might be good to follow the decision they made.


Here is an example of using the pattern I'm describing:

m = Mutex()       # global
cv = Condition()  # global

function task1()
    lock(m)
    while !somepred()
        wait(cv, m)  # unlocks m; waits for signal; acquires m
    end
    # ... do stuff ...
    unlock(m)
end

function task2()
    lock(m)
    changepred()
    notify(cv)
    unlock(m)
end
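
(The while loop, rather than a single if, is the point of Mesa-style semantics: by the time the woken task re-acquires m, another task may already have invalidated the predicate again, so it has to be re-checked under the lock; the Wikipedia article above covers this.)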

I'm happy to open a PR for this as well, but I just wanted to check that I'm not missing something obvious before doing so! :)

NHDaly (Member, Author) commented Nov 14, 2018

I'm not totally sure about the tradeoffs between those two, but seeing as most of our concurrency API is based on Go's, it might be good to follow the decision they made.

Having the mutex associated with the CV means you can be sure that all the Tasks waiting in its queue will acquire the same lock when they wake, which may make you more confident about signaling that Condition.

The downside, of course, is that you have to use the same mutex for every Task in the queue. I don't know of any concrete cases, but maybe there are situations where this forces you to use multiple CVs instead of just one and makes the code more complex than it needs to be?

vtjnash (Member) commented Nov 14, 2018

I'm working on a PR now to implement this. :)

NHDaly (Member, Author) commented Nov 14, 2018

My hero! :)

JeffBezanson added the multithreading (Base.Threads and related functionality) label on Nov 15, 2018
JeffBezanson (Member) commented:

Yes, this is one of the needed items in the new multithreading system. I also think we should use the Go-style API.
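
For comparison, the example from the opening comment might read something like the following under a Go-style API (purely illustrative; the constructor taking the lock and the exact wait behavior are assumptions, not a final design):

m  = Threads.Mutex()
cv = Condition(m)        # hypothetical constructor: the lock is bound to the condition

function task1()
    lock(m)
    while !somepred()    # somepred()/changepred() are the placeholders from the example above
        wait(cv)         # would unlock m, sleep until notified, then re-lock m
    end
    # ... do stuff ...
    unlock(m)
end

function task2()
    lock(m)
    changepred()
    notify(cv)
    unlock(m)
end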
