Add spawn_pinned to tokio-util #3370

Merged: 47 commits, Jan 27, 2022
Commits
9d0ee8d
Add Jetbrains IDE .idea folder to gitignore
AzureMarker Jan 3, 2021
34ccb19
tokio-util: add (incomplete) spawn_pinned support
AzureMarker Jan 3, 2021
1260d6f
Fix clippy warnings
AzureMarker Jan 3, 2021
4501084
Fix the rest of the clippy warnings
AzureMarker Jan 3, 2021
cfdbb8b
Ignore the spawn_pinned test for now because it doesn't terminate
AzureMarker Jan 3, 2021
9dd38b9
Replace global pool with owned handles to individual pools
AzureMarker Jan 4, 2021
af54d47
Add docs
AzureMarker Jan 4, 2021
a621591
Enforce a pool size of at least 1
AzureMarker Jan 5, 2021
0df5844
Track # of tasks in each worker and use least burdened in spawn_pinned
AzureMarker Jan 5, 2021
27df2b2
Add another test
AzureMarker Jan 5, 2021
e454f4f
Revert "Add Jetbrains IDE .idea folder to gitignore"
AzureMarker Sep 28, 2021
c073781
Merge branch 'master' into feature/spawn-pinned
AzureMarker Sep 28, 2021
0189798
Merge branch 'master' into feature/spawn-pinned
AzureMarker Oct 24, 2021
764414c
Replace spawn_blocking with std::thread::spawn
AzureMarker Oct 29, 2021
ff14408
Enable all features on the worker runtime
AzureMarker Oct 29, 2021
ca7f8e8
Fix giving more than one task to the least burdened worker
AzureMarker Oct 29, 2021
9486e32
Remove downcasting in spawn_pinned
AzureMarker Oct 29, 2021
2939338
Add spawn_pinned_nonblocking
AzureMarker Oct 29, 2021
ba8b6be
Fix doc link to spawn_pinned
AzureMarker Oct 30, 2021
c4cce06
Simplify spawn_pinned test
AzureMarker Nov 21, 2021
09839ad
Move LocalPoolHandle creation into LocalPoolHandle::new
AzureMarker Nov 21, 2021
24ca0c7
Simplify bounds on spawn_pinned functions
AzureMarker Nov 21, 2021
5444dda
Create local pool runtimes on user's thread in case it panics
AzureMarker Nov 21, 2021
45b5435
Tweak how join handle send error is dropped
AzureMarker Nov 21, 2021
75b9bb1
Fix some compile errors
AzureMarker Nov 21, 2021
e5972b0
Make spawn_pinned async and name blocking version spawn_pinned_blocking
AzureMarker Nov 25, 2021
4ce300f
Merge branch 'master' into feature/spawn-pinned
AzureMarker Nov 25, 2021
4622c2a
Make spawn_pinned synchronous again, but immediately send the task
AzureMarker Nov 25, 2021
e8e7af9
Fix broken link in docs
AzureMarker Nov 25, 2021
bd32346
Synchronously return join handle in spawn_pinned
AzureMarker Nov 26, 2021
7769e0a
Simplify channel types
AzureMarker Nov 26, 2021
b64edff
Use catch_unwind and AssertUnwindSafe to protect workers from panics
AzureMarker Dec 3, 2021
bb6bdb1
Call create_task in spawn_local to avoid using catch_unwind ourselves
AzureMarker Dec 4, 2021
ff7ad94
Directly resume panic unwinding if the task panics
AzureMarker Dec 4, 2021
5e21d68
Automatically decrement job count via guard (Drop impl)
AzureMarker Dec 5, 2021
b7c4da9
Propagate task cancellation in spawn_pinned
AzureMarker Dec 5, 2021
2067c61
Propagate task cancellation in one more spot (very unlikely to occur)
AzureMarker Dec 5, 2021
16e1521
Minor renaming/formatting
AzureMarker Dec 5, 2021
62f5ec7
Add a test to ensure tasks are balanced across workers
AzureMarker Dec 21, 2021
c130046
Use a robust abort handling mechanism
AzureMarker Dec 21, 2021
8aaa3ca
Fix typo (missing word)
AzureMarker Dec 28, 2021
9214302
Return job count guard when incrementing worker
AzureMarker Dec 29, 2021
1bd5bf2
Add a debug assert to verify the job count doesn't go negative
AzureMarker Dec 29, 2021
34d3646
Check against panic message in tests
AzureMarker Dec 29, 2021
2937f7e
Remove sleeps in spawn_pinned tests
AzureMarker Dec 29, 2021
5d9bbde
Update tokio-util/src/task/spawn_pinned.rs
AzureMarker Dec 29, 2021
fb11c45
Fix some compile errors after applying suggestion
AzureMarker Dec 29, 2021
2 changes: 1 addition & 1 deletion tokio-util/Cargo.toml
@@ -29,7 +29,7 @@ codec = []
 time = ["tokio/time","slab"]
 io = []
 io-util = ["io", "tokio/rt", "tokio/io-util"]
-rt = ["tokio/rt"]
+rt = ["tokio/rt", "tokio/sync", "futures-util"]

 __docs_rs = ["futures-util"]
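The manifest change above is the whole dependency story: the existing `rt` feature now additionally enables `tokio/sync` (for the oneshot and mpsc channels used below) and `futures-util` (for `Abortable`/`AbortHandle`). Downstream crates opt in through that same feature; a minimal sketch of a consumer manifest, with placeholder version numbers (the release that ships this API is not stated in this diff):

```toml
[dependencies]
# Placeholder versions; use the releases that actually ship spawn_pinned.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
tokio-util = { version = "0.7", features = ["rt"] }
```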
1 change: 1 addition & 0 deletions tokio-util/src/lib.rs
@@ -42,6 +42,7 @@ cfg_io! {

 cfg_rt! {
     pub mod context;
+    pub mod task;
 }

 cfg_time! {
4 changes: 4 additions & 0 deletions tokio-util/src/task/mod.rs
@@ -0,0 +1,4 @@
+//! Extra utilities for spawning tasks
+
+mod spawn_pinned;
+pub use spawn_pinned::LocalPoolHandle;
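The public surface is intentionally small: only `LocalPoolHandle` is exported. Since the handle is `Clone` (it wraps an `Arc`, as the implementation below shows), a single pool can be shared across call sites. A brief sketch, assuming the crate is built with the `rt` feature:

```rust
use tokio_util::task::LocalPoolHandle;

#[tokio::main]
async fn main() {
    // Two worker threads, each running its own current-thread
    // runtime plus a LocalSet.
    let pool = LocalPoolHandle::new(2);

    // Cloning only bumps the Arc refcount; both handles spawn
    // onto the same set of workers.
    let pool2 = pool.clone();

    let a = pool.spawn_pinned(|| async { 1 });
    let b = pool2.spawn_pinned(|| async { 2 });
    assert_eq!(a.await.unwrap() + b.await.unwrap(), 3);
}
```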
307 changes: 307 additions & 0 deletions tokio-util/src/task/spawn_pinned.rs
@@ -0,0 +1,307 @@
use futures_util::future::{AbortHandle, Abortable};
use std::fmt;
use std::fmt::{Debug, Formatter};
use std::future::Future;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::runtime::Builder;
use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};
use tokio::sync::oneshot;
use tokio::task::{spawn_local, JoinHandle, LocalSet};

/// A handle to a local pool, used for spawning `!Send` tasks.
#[derive(Clone)]
pub struct LocalPoolHandle {
    pool: Arc<LocalPool>,
}

impl LocalPoolHandle {
    /// Create a new pool of threads to handle `!Send` tasks. Spawn tasks onto this
    /// pool via [`LocalPoolHandle::spawn_pinned`].
    ///
    /// # Panics
    /// Panics if the pool size is less than one.
    pub fn new(pool_size: usize) -> LocalPoolHandle {
        assert!(pool_size > 0);

        let workers = (0..pool_size)
            .map(|_| LocalWorkerHandle::new_worker())
            .collect();

        let pool = Arc::new(LocalPool { workers });

        LocalPoolHandle { pool }
    }

    /// Spawn a task onto a worker thread and pin it there so it can't be moved
    /// off of the thread. Note that the future is not [`Send`], but the
    /// [`FnOnce`] which creates it is.
    ///
    /// # Examples
    /// ```
    /// use std::rc::Rc;
    /// use tokio_util::task::LocalPoolHandle;
    ///
    /// #[tokio::main]
    /// async fn main() {
    ///     // Create the local pool
    ///     let pool = LocalPoolHandle::new(1);
    ///
    ///     // Spawn a !Send future onto the pool and await it
    ///     let output = pool
    ///         .spawn_pinned(|| {
    ///             // Rc is !Send + !Sync
    ///             let local_data = Rc::new("test");
    ///
    ///             // This future holds an Rc, so it is !Send
    ///             async move { local_data.to_string() }
    ///         })
    ///         .await
    ///         .unwrap();
    ///
    ///     assert_eq!(output, "test");
    /// }
    /// ```
    pub fn spawn_pinned<F, Fut>(&self, create_task: F) -> JoinHandle<Fut::Output>
    where
        F: FnOnce() -> Fut,
        F: Send + 'static,
        Fut: Future + 'static,
        Fut::Output: Send + 'static,
    {
        self.pool.spawn_pinned(create_task)
    }
}

impl Debug for LocalPoolHandle {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        f.write_str("LocalPoolHandle")
    }
}

struct LocalPool {
    workers: Vec<LocalWorkerHandle>,
}

impl LocalPool {
    /// Spawn a `?Send` future onto a worker
    fn spawn_pinned<F, Fut>(&self, create_task: F) -> JoinHandle<Fut::Output>
    where
        F: FnOnce() -> Fut,
        F: Send + 'static,
        Fut: Future + 'static,
        Fut::Output: Send + 'static,
    {
        let (sender, receiver) = oneshot::channel();

        let (worker, job_guard) = self.find_and_incr_least_burdened_worker();
        let worker_spawner = worker.spawner.clone();

        // Spawn a future onto the worker's runtime so we can immediately return
        // a join handle.
        worker.runtime_handle.spawn(async move {
            // Move the job guard into the task
            let _job_guard = job_guard;

            // Propagate aborts via Abortable/AbortHandle
            let (abort_handle, abort_registration) = AbortHandle::new_pair();
            let _abort_guard = AbortGuard(abort_handle);

            // Inside the future we can't run spawn_local yet because we're not
            // in the context of a LocalSet. We need to send create_task to the
            // LocalSet task for spawning.
            let spawn_task = Box::new(move || {
                // Once we're in the LocalSet context we can call spawn_local
                let join_handle =
                    spawn_local(
                        async move { Abortable::new(create_task(), abort_registration).await },
                    );

                // Send the join handle back to the spawner. If sending fails,
                // we assume the parent task was canceled, so cancel this task
                // as well.
                if let Err(join_handle) = sender.send(join_handle) {
                    join_handle.abort()
                }
            });

            // Send the callback to the LocalSet task
            if let Err(e) = worker_spawner.send(spawn_task) {
                // Propagate the error as a panic in the join handle.
                panic!("Failed to send job to worker: {}", e);
            }

            // Wait for the task's join handle
            let join_handle = match receiver.await {
                Ok(handle) => handle,
                Err(e) => {
                    // We sent the task successfully, but failed to get its
                    // join handle... We assume something happened to the worker
                    // and the task was not spawned. Propagate the error as a
                    // panic in the join handle.
                    panic!("Worker failed to send join handle: {}", e);
                }
            };

            // Wait for the task to complete
            let join_result = join_handle.await;

            match join_result {
                Ok(Ok(output)) => output,
                Ok(Err(_)) => {
                    // The pinned task was aborted, but that only happens if
                    // this task is aborted. So this branch is unreachable.
                    unreachable!(
                        "Reaching this branch means this task was previously \
                         aborted but it continued running anyways"
                    )
                }
                Err(e) => {
                    if e.is_panic() {
                        std::panic::resume_unwind(e.into_panic());
                    } else if e.is_cancelled() {
                        // No one else should have the join handle, so this is
                        // unexpected. Forward this error as a panic in the join
                        // handle.
                        panic!("spawn_pinned task was canceled: {}", e);
                    } else {
                        // Something unknown happened (not a panic or
                        // cancellation). Forward this error as a panic in the
                        // join handle.
                        panic!("spawn_pinned task failed: {}", e);
                    }
                }
            }
        })
    }

    /// Find the worker with the least number of tasks, increment its task
    /// count, and return its handle. Make sure to actually spawn a task on
    /// the worker so the task count is kept consistent with load.
    ///
    /// A job count guard is also returned to ensure the task count gets
    /// decremented when the job is done.
    fn find_and_incr_least_burdened_worker(&self) -> (&LocalWorkerHandle, JobCountGuard) {
        loop {
            let (worker, task_count) = self
                .workers
                .iter()
                .map(|worker| (worker, worker.task_count.load(Ordering::SeqCst)))
                .min_by_key(|&(_, count)| count)
                .expect("There must be at least one worker");

            // Make sure the task count hasn't changed since we chose this
            // worker. Otherwise, restart the search.
            if worker
                .task_count
                .compare_exchange(
                    task_count,
                    task_count + 1,
                    Ordering::SeqCst,
                    Ordering::Relaxed,
                )
                .is_ok()
            {
                return (worker, JobCountGuard(Arc::clone(&worker.task_count)));
            }
        }
    }
}

/// Automatically decrements a worker's job count when a job finishes (when
/// this gets dropped).
struct JobCountGuard(Arc<AtomicUsize>);

impl Drop for JobCountGuard {
    fn drop(&mut self) {
        // Decrement the job count
        let previous_value = self.0.fetch_sub(1, Ordering::SeqCst);
        debug_assert!(previous_value >= 1);
    }
}

/// Calls abort on the handle when dropped.
struct AbortGuard(AbortHandle);

impl Drop for AbortGuard {
    fn drop(&mut self) {
        self.0.abort();
    }
}

type PinnedFutureSpawner = Box<dyn FnOnce() + Send + 'static>;

struct LocalWorkerHandle {
    runtime_handle: tokio::runtime::Handle,
    spawner: UnboundedSender<PinnedFutureSpawner>,
    task_count: Arc<AtomicUsize>,
}

impl LocalWorkerHandle {
    /// Create a new worker for executing pinned tasks
    fn new_worker() -> LocalWorkerHandle {
        let (sender, receiver) = unbounded_channel();
        let runtime = Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("Failed to start a pinned worker thread runtime");
        let runtime_handle = runtime.handle().clone();
        let task_count = Arc::new(AtomicUsize::new(0));
        let task_count_clone = Arc::clone(&task_count);

        std::thread::spawn(|| Self::run(runtime, receiver, task_count_clone));

        LocalWorkerHandle {
            runtime_handle,
            spawner: sender,
            task_count,
        }
    }

    fn run(
        runtime: tokio::runtime::Runtime,
        mut task_receiver: UnboundedReceiver<PinnedFutureSpawner>,
        task_count: Arc<AtomicUsize>,
    ) {
        let local_set = LocalSet::new();
        local_set.block_on(&runtime, async {
            while let Some(spawn_task) = task_receiver.recv().await {
                // Calls spawn_local(future)
                (spawn_task)();
            }
        });

        // If there are any tasks on the runtime associated with a LocalSet task
        // that has already completed, but whose output has not yet been
        // reported, let that task complete.
        //
        // Since the task_count is decremented when the runtime task exits,
        // reading that counter lets us know if any such tasks completed during
        // the call to `block_on`.
        //
        // Tasks on the LocalSet can't complete during this loop since they're
        // stored on the LocalSet and we aren't accessing it.
        let mut previous_task_count = task_count.load(Ordering::SeqCst);
        loop {
            // This call will also run tasks spawned on the runtime.
            runtime.block_on(tokio::task::yield_now());
            let new_task_count = task_count.load(Ordering::SeqCst);
            if new_task_count == previous_task_count {
                break;
            } else {
                previous_task_count = new_task_count;
            }
        }

        // It's now no longer possible for a task on the runtime to be
        // associated with a LocalSet task that has completed. Drop both the
        // LocalSet and runtime to let tasks on the runtime be cancelled if and
        // only if they are still on the LocalSet.
        //
        // Drop the LocalSet task first so that anyone awaiting the runtime
        // JoinHandle will see the cancelled error after the LocalSet task
        // destructor has completed.
        drop(local_set);
        drop(runtime);
    }
}
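To tie the pieces together, here is an end-to-end sketch (not a test from this PR) of the two behaviors the implementation is most careful about: spreading work across the least-burdened workers and surfacing task panics through the returned `JoinHandle`:

```rust
use std::rc::Rc;
use tokio_util::task::LocalPoolHandle;

#[tokio::main]
async fn main() {
    let pool = LocalPoolHandle::new(3);

    // Each call lands on the currently least-burdened worker (see
    // find_and_incr_least_burdened_worker above), so six tasks
    // spread across the three workers.
    let handles: Vec<_> = (0..6usize)
        .map(|i| {
            pool.spawn_pinned(move || {
                // Rc is !Send: the future must stay pinned to the
                // worker thread that spawned it.
                let data = Rc::new(i);
                async move { *data * 2 }
            })
        })
        .collect();

    for (i, handle) in handles.into_iter().enumerate() {
        assert_eq!(handle.await.unwrap(), i * 2);
    }

    // A panic inside the pinned task is resumed on the wrapper task
    // via resume_unwind, so the JoinHandle reports it as a panic.
    let result = pool
        .spawn_pinned(|| async {
            panic!("boom");
        })
        .await;
    assert!(result.unwrap_err().is_panic());
}
```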