Fix data race in run loop usage #486
Conversation
Signed-off-by: Andrei Lebedev <lebdron@gmail.com>
This fixes #154.

If another thread mutates the queue concurrently, then perhaps the reference could become invalid?
Interesting. Here was my logic for these. I did not care if … As long as …
EDIT: see the next comment for a possible solution; this is just me thinking out loud.

Is there a larger context for this? I think I may be missing some detail needed to formulate a better reply. Run-loop implementations I've seen often have a layer of mutexes/condition variables that are necessary to properly send/receive messages across threads. High-throughput run loops could use lock-free queues (but tuning this is highly CPU-count and architecture specific).

Judging by the Qt example here (#438), it looks like the Qt main loop would occasionally call into the Rx run loop through a polling mechanism? For sparsely scheduled actions, this seems inherently less efficient than an event-based mechanism to feed the run loop.

Also, re: thread safety, my gut reaction is that perhaps the API could be split: a set of functions callable only from the loop-owning thread (perhaps while always under a lock, or while always outside it) and another set of functions callable from any thread. This is only a guess without the full context above. Usually the only way to "avoid" all atomics is to invoke user callbacks while a lock is held; otherwise there would be races from data manipulation coming from other threads.
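(A hypothetical sketch of that split, not the actual RxCpp API; the names, the lock placement, and the notify hook below are all assumptions.)

```cpp
#include <chrono>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

// Hypothetical sketch only (not the rxcpp API): split the surface into calls
// confined to the loop-owning thread and calls that are safe from any thread.
class split_run_loop {
public:
    using clock = std::chrono::steady_clock;

    // --- any-thread API: internally synchronized --------------------------
    void schedule(clock::time_point when, std::function<void()> what) {
        std::function<void(clock::time_point)> notify;
        {
            std::lock_guard<std::mutex> guard(lock_);
            queue_.push({when, std::move(what)});
            notify = notify_;
        }
        if (notify) notify(when);  // event-based wakeup, invoked outside the lock
    }
    void set_notify_when_scheduled(std::function<void(clock::time_point)> f) {
        std::lock_guard<std::mutex> guard(lock_);
        notify_ = std::move(f);
    }

    // --- loop-thread-only API: values are copied out under the lock -------
    // Returns the due time of the next item (or time_point::max() when empty)
    // instead of a reference into the queue, so nothing can dangle.
    clock::time_point next_when() const {
        std::lock_guard<std::mutex> guard(lock_);
        return queue_.empty() ? clock::time_point::max() : queue_.top().when;
    }
    void dispatch() {
        std::function<void()> what;
        {
            std::lock_guard<std::mutex> guard(lock_);
            if (queue_.empty() || queue_.top().when > clock::now()) return;
            what = queue_.top().what;
            queue_.pop();
        }
        what();  // the user callback runs outside the lock
    }

private:
    struct item { clock::time_point when; std::function<void()> what; };
    struct later { bool operator()(const item& a, const item& b) const { return a.when > b.when; } };
    mutable std::mutex lock_;
    std::priority_queue<item, std::vector<item>, later> queue_;
    std::function<void(clock::time_point)> notify_;
};
```

Here both the user callback and the notify hook run outside the lock, trading an extra copy for freedom from re-entrant-lock deadlocks.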
After looking at the API again, I'm not sure if it needs to have … I believe it is already assumed that the … I'm not exactly sure yet how …
The first use case would look like
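(A rough sketch only, assuming rxcpp's run_loop exposes empty(), peek(), now(), and dispatch(); the stop/wake plumbing here is purely illustrative.)

```cpp
#include <rxcpp/rx.hpp>
#include <atomic>
#include <condition_variable>
#include <mutex>

// A dedicated thread owns the run_loop and sleeps until the next item is due.
// Some wakeup hook (the notify mechanism discussed above) is still needed so
// that scheduling an earlier item from another thread interrupts the wait.
void own_the_loop(rxcpp::schedulers::run_loop& rl,
                  std::atomic<bool>& stop,
                  std::mutex& wake_lock,
                  std::condition_variable& wake) {
    while (!stop) {
        // drain everything that is already due
        while (!rl.empty() && rl.peek().when <= rl.now()) {
            rl.dispatch();
        }
        // then wait for the next due time (or for a new item / shutdown)
        std::unique_lock<std::mutex> guard(wake_lock);
        if (rl.empty()) {
            wake.wait(guard);
        } else {
            wake.wait_until(guard, rl.peek().when);
        }
    }
}
```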
The second use case simply looks like:
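(Again only a sketch, assuming the same empty()/peek()/now()/dispatch() members; a host main loop, e.g. Qt's, would call this periodically.)

```cpp
#include <rxcpp/rx.hpp>

// Drain every item that is already due; the host loop (e.g. a Qt timer or
// idle handler) calls this on each poll.
void drain_ready_items(rxcpp::schedulers::run_loop& rl) {
    while (!rl.empty() && rl.peek().when <= rl.now()) {
        rl.dispatch();
    }
}
```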
Either way, I think dispatch could be changed to return the time of the next item to be scheduled and this would be equally correct?
From what I understood so far, it would obviate the need to call …
I like your proposed change to `dispatch`. I would like to keep the current changes in this PR for now and add the proposed changes to `dispatch` in a different PR. Thanks!
Perhaps this might need to be reverted? There seem to be some potential issues.
@iam Yes, this is a breaking change. It will create deadlocks if a notify callback calls `empty` or `peek`, and the reference returned from `peek` is indeed unsafe. These methods were only intended to be called from the same thread that calls `dispatch`; only `dispatch` would invalidate the head of the queue.
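(A small sketch of that deadlock, assuming the new lock is a plain non-recursive std::mutex and the notify callback is invoked while it is held; the names are illustrative, not the actual rxcpp internals.)

```cpp
#include <functional>
#include <mutex>

std::mutex queue_lock;                        // the lock guarding the queue (illustrative)
std::function<void()> notify_when_scheduled;  // callback invoked when an item is scheduled

bool empty() {
    std::lock_guard<std::mutex> guard(queue_lock);  // locks the queue again
    return true;                                    // (queue details elided)
}

void schedule() {
    std::lock_guard<std::mutex> guard(queue_lock);
    // ...push the new item...
    if (notify_when_scheduled) {
        notify_when_scheduled();  // if this callback calls empty() or peek(),
                                  // it re-locks queue_lock on the same thread -> deadlock
    }
}
```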
Thread sanitizer detects a data race in this run loop usage example #154
https://gist.github.com/lebdron/ee3e3e74250c27b2595a732bae5a9a22
Since the queue can be accessed both by the current thread and by some other thread which calls `schedule()`, queue handles have to be protected by a lock.
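A condensed sketch of the racy pattern (not the exact gist): the main thread drains the run_loop with empty()/peek()/dispatch() while a worker thread concurrently schedules items onto it.

```cpp
#include <rxcpp/rx.hpp>
#include <chrono>

int main() {
    rxcpp::schedulers::run_loop rl;
    auto on_run_loop = rxcpp::observe_on_run_loop(rl);

    bool done = false;
    rxcpp::observable<>::interval(std::chrono::milliseconds(50),
                                  rxcpp::observe_on_new_thread())
        .take(5)
        .observe_on(on_run_loop)  // items are scheduled onto rl from the worker thread
        .subscribe([](long) {}, [&]() { done = true; });

    // Main thread: these unsynchronized empty()/peek() calls race with schedule()
    // coming from the worker thread, which is what the thread sanitizer reports.
    while (!done) {
        while (!rl.empty() && rl.peek().when <= rl.now()) {
            rl.dispatch();
        }
    }
    return 0;
}
```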