Add epoch-based interruption for cooperative async timeslicing.
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.

This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.

The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`), so it could be invoked
from a signal handler, or from a thread that wakes up once per period.
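
As a minimal sketch of this embedder interface (only
`Engine::increment_epoch` is named above; the `Config::epoch_interruption`
knob and the `Store::epoch_deadline_async_yield_and_update` method are
assumed names for the configuration described):

    use std::time::Duration;
    use wasmtime::{Config, Engine, Store};

    fn main() -> anyhow::Result<()> {
        let mut config = Config::new();
        config.epoch_interruption(true); // assumed knob: emit epoch checks
        let engine = Engine::new(&config)?;

        // Drive the epoch from a background thread, once per millisecond.
        // increment_epoch() only bumps an AtomicU64, so a periodic signal
        // handler would work equally well.
        let ticker = engine.clone();
        std::thread::spawn(move || loop {
            std::thread::sleep(Duration::from_millis(1));
            ticker.increment_epoch();
        });

        let mut store = Store::new(&engine, ());
        // Assumed method: allow one epoch tick (~1ms here) per timeslice
        // before the guest yields back to the executor's event loop.
        store.epoch_deadline_async_yield_and_update(1);
        Ok(())
    }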
cfallin committed Jan 20, 2022
1 parent ae476fd commit 8a55b5c
Showing 19 changed files with 1,034 additions and 26 deletions.
151 changes: 150 additions & 1 deletion crates/cranelift/src/func_environ.rs
@@ -137,6 +137,19 @@ pub struct FuncEnvironment<'module_environment> {
/// so if we load it up front we can continue to use it throughout.
vminterrupts_ptr: cranelift_frontend::Variable,

/// A cached epoch deadline value, when performing epoch-based
/// interruption. Loaded from `VMInterrupts` and reloaded after
/// any yield.
epoch_deadline_var: cranelift_frontend::Variable,

/// A cached pointer to the per-Engine epoch counter, when
/// performing epoch-based interruption. Initialized in the
/// function prologue. We prefer to use a variable here rather
/// than reload on each check because it's better to let the
/// regalloc keep it in a register if able; if not, it can always
/// spill, and this isn't any worse than reloading each time.
epoch_ptr_var: cranelift_frontend::Variable,

fuel_consumed: i64,
}

@@ -166,6 +179,8 @@ impl<'module_environment> FuncEnvironment<'module_environment> {
offsets: VMOffsets::new(isa.pointer_bytes(), &translation.module),
tunables,
fuel_var: Variable::new(0),
epoch_deadline_var: Variable::new(0),
epoch_ptr_var: Variable::new(0),
vminterrupts_ptr: Variable::new(0),

// Start with at least one fuel being consumed because even empty
@@ -558,6 +573,125 @@ impl<'module_environment> FuncEnvironment<'module_environment> {
builder.switch_to_block(continuation_block);
}

fn epoch_function_entry(&mut self, builder: &mut FunctionBuilder<'_>) {
builder.declare_var(self.epoch_deadline_var, ir::types::I64);
self.epoch_load_deadline_into_var(builder);
builder.declare_var(self.epoch_ptr_var, self.pointer_type());
let epoch_ptr = self.epoch_ptr(builder);
builder.def_var(self.epoch_ptr_var, epoch_ptr);

// We must check for an epoch change when entering a
// function. Why? Why aren't checks at loops sufficient to
// bound runtime to O(|static program size|)?
//
// The reason is that one can construct a "zip-bomb-like"
// program with exponential-in-program-size runtime, with no
// backedges (loops), by building a tree of function calls: f0
// calls f1 ten times, f1 calls f2 ten times, etc. E.g., nine
// levels of this yields a billion function calls with no
// backedges. So we can't do checks only at backedges.
//
// In this "call-tree" scenario, and in fact in any program
// that uses calls as a sort of control flow to try to evade
// backedge checks, a check at every function entry is
// sufficient. Then, combined with checks at every backedge
// (loop), the longest runtime between checks is bounded by the
// straightline length of any function body.
self.epoch_check(builder);
}
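
// For illustration (a sketch, not code from this change), the
// call-tree above in Rust form: three levels shown here; nine such
// levels yield 10^9 calls with no backedge anywhere, so per-loop
// checks alone cannot bound runtime.
//
//     fn f2() { /* straight-line leaf work */ }
//     fn f1() { f2(); f2(); f2(); f2(); f2(); f2(); f2(); f2(); f2(); f2(); }
//     fn f0() { f1(); f1(); f1(); f1(); f1(); f1(); f1(); f1(); f1(); f1(); }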

fn epoch_ptr(&mut self, builder: &mut FunctionBuilder<'_>) -> ir::Value {
let vmctx = self.vmctx(builder.func);
let pointer_type = self.pointer_type();
let base = builder.ins().global_value(pointer_type, vmctx);
let offset = i32::try_from(self.offsets.vmctx_epoch_ptr()).unwrap();
let epoch_ptr = builder
.ins()
.load(pointer_type, ir::MemFlags::trusted(), base, offset);
epoch_ptr
}

fn epoch_load_current(&mut self, builder: &mut FunctionBuilder<'_>) -> ir::Value {
let addr = builder.use_var(self.epoch_ptr_var);
builder.ins().load(
ir::types::I64,
ir::MemFlags::trusted(),
addr,
ir::immediates::Offset32::new(0),
)
}

fn epoch_load_deadline_into_var(&mut self, builder: &mut FunctionBuilder<'_>) {
let interrupts = builder.use_var(self.vminterrupts_ptr);
let deadline = builder.ins().load(
ir::types::I64,
ir::MemFlags::trusted(),
interrupts,
ir::immediates::Offset32::new(self.offsets.vminterupts_epoch_deadline() as i32),
);
builder.def_var(self.epoch_deadline_var, deadline);
}

fn epoch_check(&mut self, builder: &mut FunctionBuilder<'_>) {
let new_epoch_block = builder.create_block();
let new_epoch_doublecheck_block = builder.create_block();
let continuation_block = builder.create_block();
builder.set_cold_block(new_epoch_block);
builder.set_cold_block(new_epoch_doublecheck_block);

let epoch_deadline = builder.use_var(self.epoch_deadline_var);
// Load new epoch and check against cached deadline. The
// deadline may be out of date if it was updated (within
// another yield) during some function that we called; this is
// fine, as we'll reload it and check again before yielding in
// the cold path.
let cur_epoch_value = self.epoch_load_current(builder);
let cmp = builder.ins().ifcmp(cur_epoch_value, epoch_deadline);
builder
.ins()
.brif(IntCC::UnsignedGreaterThanOrEqual, cmp, new_epoch_block, &[]);
builder.ins().jump(continuation_block, &[]);
builder.seal_block(new_epoch_block);

// In the "new epoch block", we've noticed that the epoch has
// exceeded our cached deadline. However the real deadline may
// have been moved in the meantime. We keep the cached value
// in a register to speed the checks in the common case
// (between epoch ticks) but we want to do a precise check
// here, on the cold path, by reloading the latest value
// first.
builder.switch_to_block(new_epoch_block);
self.epoch_load_deadline_into_var(builder);
let fresh_epoch_deadline = builder.use_var(self.epoch_deadline_var);
let fresh_cmp = builder.ins().ifcmp(cur_epoch_value, fresh_epoch_deadline);
builder.ins().brif(
IntCC::UnsignedGreaterThanOrEqual,
fresh_cmp,
new_epoch_doublecheck_block,
&[],
);
builder.ins().jump(continuation_block, &[]);
builder.seal_block(new_epoch_doublecheck_block);

builder.switch_to_block(new_epoch_doublecheck_block);
let new_epoch_sig = self.builtin_function_signatures.new_epoch(builder.func);
let (vmctx, new_epoch) = self.translate_load_builtin_function_address(
&mut builder.cursor(),
BuiltinFunctionIndex::new_epoch(),
);
// new_epoch() returns the new deadline, so we don't have to
// reload it.
let call = builder
.ins()
.call_indirect(new_epoch_sig, new_epoch, &[vmctx]);
let new_deadline = *builder.func.dfg.inst_results(call).first().unwrap();
builder.def_var(self.epoch_deadline_var, new_deadline);
builder.ins().jump(continuation_block, &[]);
builder.seal_block(continuation_block);

builder.switch_to_block(continuation_block);
}
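
// The check emitted above, summarized as straight-line pseudocode (a
// sketch; `load_deadline` and `new_epoch_libcall` stand in for the
// generated loads and the builtin call):
//
//     let epoch = *epoch_ptr;                  // hot path: one load and
//     if epoch >= cached_deadline {            // one compare-and-branch
//         cached_deadline = load_deadline();   // cold: reload precise deadline
//         if epoch >= cached_deadline {
//             // truly expired: libcall; may yield, returns new deadline
//             cached_deadline = new_epoch_libcall(vmctx);
//         }
//     }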

fn memory_index_type(&self, index: MemoryIndex) -> ir::Type {
if self.module.memory_plans[index].memory.memory64 {
I64
@@ -633,6 +767,8 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
fn after_locals(&mut self, num_locals: usize) {
self.vminterrupts_ptr = Variable::new(num_locals);
self.fuel_var = Variable::new(num_locals + 1);
self.epoch_deadline_var = Variable::new(num_locals + 2);
self.epoch_ptr_var = Variable::new(num_locals + 3);
}

fn make_table(&mut self, func: &mut ir::Function, index: TableIndex) -> WasmResult<ir::Table> {
@@ -1787,6 +1923,12 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
self.fuel_check(builder);
}

// If we are performing epoch-based interruption, check to see
// if the epoch counter has changed.
if self.tunables.epoch_interruption {
self.epoch_check(builder);
}

Ok(())
}

@@ -1821,13 +1963,20 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
) -> WasmResult<()> {
// If the `vminterrupts_ptr` variable will get used then we initialize
// it here.
if self.tunables.consume_fuel || self.tunables.interruptable {
if self.tunables.consume_fuel
|| self.tunables.interruptable
|| self.tunables.epoch_interruption
{
self.declare_vminterrupts_ptr(builder);
}
// Additionally we initialize `fuel_var` if it will get used.
if self.tunables.consume_fuel {
self.fuel_function_entry(builder);
}
// Likewise, initialize the epoch-interruption state (cached deadline
// and epoch-counter pointer) if it will get used.
if self.tunables.epoch_interruption {
self.epoch_function_entry(builder);
}
Ok(())
}

2 changes: 2 additions & 0 deletions crates/environ/src/builtin.rs
@@ -45,6 +45,8 @@ macro_rules! foreach_builtin_function {
memory_atomic_wait64(vmctx, i32, pointer, i64, i64) -> (i32);
/// Invoked when fuel has run out while executing a function.
out_of_gas(vmctx) -> ();
/// Invoked when we reach a new epoch.
new_epoch(vmctx) -> (i64);
}
};
}
4 changes: 4 additions & 0 deletions crates/environ/src/tunables.rs
@@ -36,6 +36,9 @@ pub struct Tunables {
/// will be consumed every time a wasm instruction is executed.
pub consume_fuel: bool,

/// Whether or not we use epoch-based interruption.
pub epoch_interruption: bool,

/// Whether or not to treat the static memory bound as the maximum for unbounded heaps.
pub static_memory_bound_is_maximum: bool,

@@ -88,6 +91,7 @@ impl Default for Tunables {
parse_wasm_debuginfo: true,
interruptable: false,
consume_fuel: false,
epoch_interruption: false,
static_memory_bound_is_maximum: false,
guard_before_linear_memory: true,
generate_address_map: true,
21 changes: 20 additions & 1 deletion crates/environ/src/vmoffsets.rs
@@ -73,6 +73,7 @@ pub struct VMOffsets<P> {

// precalculated offsets of various member fields
interrupts: u32,
epoch_ptr: u32,
externref_activations_table: u32,
store: u32,
signature_ids: u32,
@@ -174,6 +175,7 @@ impl<P: PtrSize> From<VMOffsetsFields<P>> for VMOffsets<P> {
num_defined_memories: fields.num_defined_memories,
num_defined_globals: fields.num_defined_globals,
interrupts: 0,
epoch_ptr: 0,
externref_activations_table: 0,
store: 0,
signature_ids: 0,
@@ -190,10 +192,14 @@ impl<P: PtrSize> From<VMOffsetsFields<P>> for VMOffsets<P> {
};

ret.interrupts = 0;
ret.externref_activations_table = ret
ret.epoch_ptr = ret
.interrupts
.checked_add(u32::from(ret.ptr.size()))
.unwrap();
ret.externref_activations_table = ret
.epoch_ptr
.checked_add(u32::from(ret.ptr.size()))
.unwrap();
ret.store = ret
.externref_activations_table
.checked_add(u32::from(ret.ptr.size()))
@@ -469,6 +475,12 @@ impl<P: PtrSize> VMOffsets<P> {
pub fn vminterrupts_fuel_consumed(&self) -> u8 {
self.pointer_size()
}

/// Return the offset of the `epoch_deadline` field of `VMInterrupts`
#[inline]
pub fn vminterupts_epoch_deadline(&self) -> u8 {
self.pointer_size() + 8 // `stack_limit` is a pointer; `fuel_consumed` is an `i64`
}
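
// For reference, the `VMInterrupts` layout that the two offsets above
// assume (a sketch inferred from those offsets, not the struct
// definition from this change):
//
//     #[repr(C)]
//     struct VMInterrupts {
//         stack_limit: AtomicUsize,        // offset 0, pointer-sized
//         fuel_consumed: UnsafeCell<i64>,  // offset pointer_size
//         epoch_deadline: UnsafeCell<u64>, // offset pointer_size + 8
//     }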
}

/// Offsets for `VMCallerCheckedAnyfunc`.
@@ -508,6 +520,13 @@ impl<P: PtrSize> VMOffsets<P> {
self.interrupts
}

/// Return the offset to the `*const AtomicU64` epoch-counter
/// pointer.
#[inline]
pub fn vmctx_epoch_ptr(&self) -> u32 {
self.epoch_ptr
}

/// The offset of the `*mut VMExternRefActivationsTable` member.
#[inline]
pub fn vmctx_externref_activations_table(&self) -> u32 {
4 changes: 2 additions & 2 deletions crates/misc/run-examples/src/main.rs
@@ -73,8 +73,8 @@ fn main() -> anyhow::Result<()> {
format!("examples/{}.{}", example, extension)
};

if extension == &"cc" && !std::path::Path::new(&file).exists() {
// cc files are optional so we can skip them.
if !std::path::Path::new(&file).exists() {
// C and C++ files are optional so we can skip them.
continue;
}

6 changes: 6 additions & 0 deletions crates/runtime/src/instance.rs
@@ -21,6 +21,7 @@ use std::convert::TryFrom;
use std::hash::Hash;
use std::ops::Range;
use std::ptr::NonNull;
use std::sync::atomic::AtomicU64;
use std::sync::Arc;
use std::{mem, ptr, slice};
use wasmtime_environ::{
@@ -203,6 +204,11 @@ impl Instance {
unsafe { self.vmctx_plus_offset(self.offsets.vmctx_interrupts()) }
}

/// Return a pointer to the global epoch counter used by this instance.
pub fn epoch_ptr(&self) -> *mut *const AtomicU64 {
unsafe { self.vmctx_plus_offset(self.offsets.vmctx_epoch_ptr()) }
}

/// Return a pointer to the `VMExternRefActivationsTable`.
pub fn externref_activations_table(&self) -> *mut *mut VMExternRefActivationsTable {
unsafe { self.vmctx_plus_offset(self.offsets.vmctx_externref_activations_table()) }
1 change: 1 addition & 0 deletions crates/runtime/src/instance/allocator.rs
@@ -463,6 +463,7 @@ fn initialize_instance(
unsafe fn initialize_vmcontext(instance: &mut Instance, req: InstanceAllocationRequest) {
if let Some(store) = req.store.as_raw() {
*instance.interrupts() = (*store).vminterrupts();
*instance.epoch_ptr() = (*store).epoch_ptr();
*instance.externref_activations_table() = (*store).externref_activations_table().0;
instance.set_store(store);
}
7 changes: 7 additions & 0 deletions crates/runtime/src/instance/allocator/pooling/uffd.rs
@@ -438,6 +438,7 @@ mod test {
Imports, InstanceAllocationRequest, InstanceLimits, ModuleLimits,
PoolingAllocationStrategy, Store, StorePtr, VMSharedSignatureIndex,
};
use std::sync::atomic::AtomicU64;
use std::sync::Arc;
use wasmtime_environ::{Memory, MemoryPlan, MemoryStyle, Module, PrimaryMap, Tunables};

@@ -546,6 +547,12 @@ mod test {
fn out_of_gas(&mut self) -> Result<(), anyhow::Error> {
Ok(())
}
fn epoch_ptr(&self) -> *const AtomicU64 {
std::ptr::null()
}
fn new_epoch(&mut self) -> Result<u64, anyhow::Error> {
Ok(0)
}
}
struct MockModuleInfo;
impl crate::ModuleInfoLookup for MockModuleInfo {
11 changes: 11 additions & 0 deletions crates/runtime/src/lib.rs
@@ -20,6 +20,8 @@
)
)]

use std::sync::atomic::AtomicU64;

use anyhow::Error;

mod export;
@@ -84,6 +86,11 @@ pub unsafe trait Store {
/// in the `VMContext`.
fn vminterrupts(&self) -> *mut VMInterrupts;

/// Returns a pointer to the global epoch counter.
///
/// Used to configure the `VMContext` on initialization.
fn epoch_ptr(&self) -> *const AtomicU64;

/// Returns the externref management structures necessary for this store.
///
/// The first element returned is the table in which externrefs are stored
@@ -119,4 +126,8 @@ pub unsafe trait Store {
/// is returned that's raised as a trap. Otherwise wasm execution will
/// continue as normal.
fn out_of_gas(&mut self) -> Result<(), Error>;
/// Callback invoked whenever an instance observes a new epoch
/// number. Cannot fail; cooperative epoch-based yielding is
/// completely semantically transparent. Returns the new deadline.
fn new_epoch(&mut self) -> Result<u64, Error>;
}
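
An async embedder's `new_epoch` might look roughly like the following
(a sketch; `async_yield`, `current_epoch`, `set_epoch_deadline`, and
`ticks_per_yield` are hypothetical stand-ins for the store's actual
yielding and bookkeeping):

    fn new_epoch(&mut self) -> Result<u64, Error> {
        // Yield back to the executor's event loop, then extend the
        // deadline by the configured tick budget and return it so the
        // generated code can re-cache it in a register.
        self.async_yield();
        let new_deadline = self.current_epoch() + self.ticks_per_yield;
        self.set_epoch_deadline(new_deadline);
        Ok(new_deadline)
    }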
8 changes: 8 additions & 0 deletions crates/runtime/src/libcalls.rs
@@ -557,3 +557,11 @@ pub unsafe extern "C" fn wasmtime_out_of_gas(vmctx: *mut VMContext) {
Err(err) => crate::traphandlers::raise_user_trap(err),
}
}

/// Hook for when an instance observes that the epoch has changed.
pub unsafe extern "C" fn wasmtime_new_epoch(vmctx: *mut VMContext) -> u64 {
match (*(*vmctx).instance().store()).new_epoch() {
Ok(new_deadline) => new_deadline,
Err(err) => crate::traphandlers::raise_user_trap(err),
}
}