wasm-executor: Support growing the memory #12520
Conversation
LGTM
test-utils/runtime/src/lib.rs (outdated)

```diff
@@ -1358,7 +1358,12 @@ mod tests {
 // Try to allocate 1024k of memory on heap. This is going to fail since it is twice larger
 // than the heap.
-let ret = client.runtime_api().vec_with_capacity(&block_id, 1048576);
+let ret = client.runtime_api().vec_with_capacity_with_context(
+	&block_id,
```
Will have a conflict here:

```diff
-&block_id,
+client.chain_info().best_hash,
```
DQ: Having different memory model for benchmarking (
```rust
/// Run the given closure `run` and grant it read access to the raw memory.
fn with_access<R>(&self, run: impl FnOnce(&[u8]) -> R) -> R;

/// Grow the memory by `additional` pages.
fn grow(&mut self, additional: u32) -> Result<(), ()>;
```
Just curious (as a general practice): why not just return `bool`?
If you use `Result` it's immediately obvious that the function can error out, and that `Result::Err` is an error. And you'll get a warning if the `Result` is not handled (since it's marked with `#[must_use]`). With `bool` there's no such convention.

And of course with `Result` you can use e.g. `map_err` and `?`.
Good question, kind of a matter of taste? I mean, it is a failure, and a simple `bool` feels too much like C, but semantically it doesn't make any difference.
```diff
 #[test]
 fn should_allocate_properly() {
 	// given
-	let mut mem = [0u8; PAGE_SIZE as usize];
+	let mut mem = MemoryInstance::with_pages(1);
 	let mut heap = FreeingBumpHeapAllocator::new(0);
```
Dumb question (I just drop it here even if it's not strictly related to this code snippet). What is the motivation behind the design decision of keeping the managed memory buffer (in this case `MemoryInstance`) separate from the heap structure manager (in this case `FreeingBumpHeapAllocator`)? Why not add a `MemoryInstance` field to the heap structure? As far as I understood, the heap structure maintains a bunch of information strictly bound to the buffer it operates on... Perhaps the motivation is to be able to pass it between the host and the runtime and prevent copying the whole buffer? Just guessing 😃
Yeah, that should probably be achievable now. Before, when we operated just on the bytes, it would have required a trait (which we now have). Maybe a possible future refactor, but it is also not such a huge problem.
In case of wasmtime the actual memory is owned by the executor, so to access it you also need a mutable handle to the context. But you're also storing the `FreeingBumpHeapAllocator` inside the context, and you can't have two mutable references to the same thing. That's why Basti has to store the allocator inside of an `Option` and temporarily `take()` it in client/executor/wasmtime/src/host.rs.

Technically you could add a field inside of the `FreeingBumpHeapAllocator` which would store a handle to the memory, but it still wouldn't own the memory (because it's owned by wasmtime, and the `Memory` it exposes is essentially just a `usize` index into wasmtime's internal store), and you would still need wasmtime's context to access it. So there isn't really much point in doing that.
```rust
let (min, max) = match heap_alloc_strategy {
	HeapAllocStrategy::Dynamic { maximum_pages } => {
		// Ensure `initial <= maximum_pages`
		(maximum_pages.map(|m| m.min(initial)).unwrap_or(initial), maximum_pages)
```
Isn't `initial` the value that drives the `min`? With this code, if `maximum_pages` is `Some` and `maximum_pages < initial`, then we set `min` to the `maximum_pages` value. And `max` is also set to `maximum_pages`, resulting in `min == max` and `max < initial`.
We don't want `initial` to be greater than `max`. See the comment above.
Indeed. But what I was saying is that if the program requires an `initial` greater than `maximum_pages`, we end up giving the program less memory than it requires. I may have missed something anyway 🤔 And maybe that is intentional?
Yes, that is intentional. If something requires more, then it will probably fail at execution. When the allocation fails, we stop the execution. The runtime doesn't see that the allocation failed.
bot merge
* Companion for Substrate#12520 (paritytech/substrate#12520)
* Fix
* update lockfile for {"substrate"}

---------

Co-authored-by: parity-processbot <>
* As always, start with something :P
* Add support for max_heap_pages
* Add support for wasmtime
* Make it compile
* Fix compilation
* Copy wrongly merged code
* Fix compilation
* Some fixes
* Fix
* Get stuff working
* More work
* More fixes
* ...
* More
* FIXEs
* Switch wasmi to use `RuntimeBlob` like wasmtime
* Removed unused stuff
* Cleanup
* More cleanups
* Introduce `CallContext`
* Fixes
* More fixes
* Add builder for creating the `WasmExecutor`
* Adds some docs
* FMT
* First round of feedback.
* Review feedback round 2
* More fixes
* Fix try-runtime
* Update client/executor/wasmtime/src/instance_wrapper.rs
* Update client/executor/common/src/wasm_runtime.rs
* Update client/executor/common/src/runtime_blob/runtime_blob.rs
* Update client/executor/common/src/wasm_runtime.rs
* Update client/allocator/src/freeing_bump.rs
* Update client/allocator/src/freeing_bump.rs
* Feedback round 3
* FMT
* Review comments

---------

Co-authored-by: Koute <koute@users.noreply.github.com>
This pull request adds support for growing the wasm memory while executing a runtime function. It doesn't change the old default behavior, which is to have a fixed number of heap pages: by default, only a fixed amount of memory in total is still allocated for the wasm memory. The new "allocation mode" being added is called "dynamic". If it is enabled, the allocator is allowed to grow the wasm memory up to the upper maximum of what wasm32 supports (4GB).

Besides these changes there is a new `CallContext` that needs to be passed to the wasm executor call function. Based on this context (onchain/offchain) the executor can use a different heap pages limit, as configured on startup. To make it easy to configure the `WasmExecutor`, a new `WasmExecutorBuilder` is added to simplify the configuration.

Why support different maximum heap pages based on the context?

I think that onchain logic will always need to stay within certain limits. This ensures that we don't run into situations where we allocate huge amounts of data that we then want to read in some offchain context, and fail there because all this data combined doesn't fit into memory. Another example are parachains: they have a hard cap of 1GB of memory for the validation phase, so they need to ensure they stay within this limit while building the PoV. However, in the offchain context we can be more liberal and open up the possibility to allocate up to 4GB, if configured.
Enabling dynamic memory

To enable dynamic memory growing, a user needs to use the newly introduced `WasmExecutorBuilder` and set the appropriate values via `with_offchain_heap_alloc_strategy` / `with_onchain_heap_alloc_strategy`. By default nothing changes compared to the old behavior and everything uses the same memory limits as before. It is also advised not to use this right now, especially not for building/importing blocks. Building/importing blocks should always be done within some fixed limits (especially for parachains, which have only a fixed memory size when validating their blocks) to ensure that everyone can import the blocks.

Works towards: paritytech/polkadot-sdk#62
polkadot companion: paritytech/polkadot#6730