Use bytes for maximum size of linear memory with pooling #8628

Merged

Conversation

alexcrichton (Member)

This commit changes the configuration of the pooling allocator to use a byte-based unit rather than a page-based unit. The previous `PoolingAllocatorConfig::memory_pages` option configured the maximum size that a linear memory may grow to at runtime. This is an important factor in the calculation of stripes for MPK and is also a coarse-grained knob, apart from `StoreLimiter`, for limiting memory consumption. This configuration option has been renamed to `max_memory_size` and is now documented as being in terms of bytes rather than pages.

Additionally, the documented constraint that `max_memory_size` must be smaller than `static_memory_bound` is now enforced, as a minor clean-up in this PR.
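For readers skimming the change, here is a minimal, standalone sketch of what the renamed knob looks like from the embedder side. The `PoolingAllocationConfig` builder and its `max_memory_size` method are assumed here to mirror the internal `PoolingAllocatorConfig` rename described above, and the values are illustrative rather than taken from the diff.

use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() {
    // Assumes the wasmtime crate's pooling-allocator feature (enabled by default).
    let mut pool = PoolingAllocationConfig::default();

    // Previously the limit was expressed in 64 KiB Wasm pages, e.g. `memory_pages(16)`.
    // After this change the limit is expressed in bytes.
    pool.max_memory_size(16 << 16); // 16 pages * 64 KiB = 1 MiB per linear memory

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));
    let _engine = Engine::new(&config).expect("failed to create engine");
}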
@alexcrichton alexcrichton requested review from a team as code owners May 15, 2024 17:57
@alexcrichton alexcrichton requested review from fitzgen and removed request for a team May 15, 2024 17:57
@fitzgen fitzgen (Member) left a comment

LGTM but I am confused about one thing and have Opinions and Concerns about writing the constant value directly all over the place

crates/cli-flags/src/lib.rs (outdated review thread, resolved)
@@ -76,7 +76,7 @@ impl Config {
if let InstanceAllocationStrategy::Pooling(pooling) = &mut self.wasmtime.strategy {
// One single-page memory
pooling.total_memories = config.max_memories as u32;
- pooling.memory_pages = 10;
+ pooling.max_memory_size = 10 * 65535;
fitzgen (Member):

This should be 65536 right? If you instead want the max address it should be 10 * 65536 - 1.

Also, let's use named constants for the Wasm page size so that it is easier to find uses and update them in the future for the custom-page-sizes proposal. We have it defined in wasmtime_types already, FWIW, and this could use that here:

pub const WASM_PAGE_SIZE: u32 = 0x10000;

alexcrichton (Member, Author):

Ah yeah, 65535 is a typo here, and everywhere else, but I'm hesitant to go through all the work to import a constant in all these places since it's all in tests anyway and all the values are basically random constants. In the future the wasm page size won't be constant either, so it feels weird to proliferate usage of the constant... I can perhaps change these to >>16 and <<16 where appropriate to reduce the risk of typos.
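As a standalone illustration of the shift-based spelling mentioned above (the values here are arbitrary and not from the diff):

fn main() {
    // Writing 10 Wasm pages as a shift leaves no room for an off-by-one typo
    // such as `10 * 65535`.
    let ten_pages_in_bytes: u64 = 10 << 16; // 10 * 65536 = 655360
    assert_eq!(ten_pages_in_bytes, 10 * 65536);

    // The opposite shift converts a byte limit back into whole pages.
    let whole_pages = ten_pages_in_bytes >> 16;
    assert_eq!(whole_pages, 10);
}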

@@ -135,7 +135,7 @@ impl Config {
if pooling.total_memories < 1
|| pooling.total_tables < 5
|| pooling.table_elements < 1_000
- || pooling.memory_pages < 900
+ || pooling.max_memory_size < (900 * 65536)
fitzgen (Member):

ditto regarding named constant

- if pooling.memory_pages == 0 {
-     pooling.memory_pages = 1;
+ if pooling.max_memory_size == 0 {
+     pooling.max_memory_size = 65535;
fitzgen (Member):

Also confused about 65535 here since that isn't a multiple of the page size, so this will disallow any non-zero-length memories, assuming we don't do rounding elsewhere in this PR?
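A small standalone sketch of the concern, assuming byte limits get rounded down to whole 64 KiB pages rather than up:

fn main() {
    let max_memory_size: usize = 65535; // one byte short of a full Wasm page
    let max_pages = max_memory_size / 65536; // rounds down to 0 whole pages
    assert_eq!(max_pages, 0); // so even a single-page memory would be disallowed
}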

Comment on lines 420 to 421
cfg.max_memory32_pages = min / 65536;
cfg.max_memory64_pages = min / 65535;
fitzgen (Member):

Also confused about 65535 vs 65536 here.

fitzgen (Member):

And ditto about named constants on everything here.

crates/wasmtime/src/config.rs (outdated review thread, resolved)
@@ -703,7 +704,7 @@ mod test {
let config = PoolingInstanceAllocatorConfig {
limits: InstanceLimits {
total_memories: 1,
- memory_pages: 0x10001,
+ max_memory_size: (1 << 32) + 65536,
fitzgen (Member):

Named constant, even though it's just a test. We want to make sure we are actually testing sizes that wasm could potentially grow to, and not accidentally just testing rounding behavior or something.

alexcrichton (Member, Author):

Could you expand on this? I could see a comment going here but I'm not sure otherwise what you're looking for in terms of a constant?

fitzgen (Member):

If the goal is to test behavior with N wasm pages of capacity, then we should make sure we are actually doing that (as opposed to testing with unaligned capacity) and using a constant helps in this scenario. I feel like the typos are evidence that we can mess this up.

Additionally, having a constant that we can grep for all uses of will make it easier to support the custom-page-sizes proposal down the line: we have a list of uses that we can turn into method calls if necessary, and a list of tests that we might want to update or copy and tweak.
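For illustration, a test limit written with the named constant might look like the following standalone sketch; `WASM_PAGE_SIZE` is re-declared locally here (matching the `wasmtime_types` definition quoted earlier) so the snippet compiles on its own, and the specific value is hypothetical:

fn main() {
    // Matches `pub const WASM_PAGE_SIZE: u32 = 0x10000;` in wasmtime_types,
    // widened to u64 for the arithmetic below.
    const WASM_PAGE_SIZE: u64 = 0x10000;

    // "One page past 4 GiB", spelled so the intent is greppable and typo-resistant.
    let max_memory_size = (1u64 << 32) + WASM_PAGE_SIZE;
    assert_eq!(max_memory_size, 0x1_0001_0000);
}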

alexcrichton (Member, Author):

The meaning of this test has sort of been updated and lost a bit over time as I've been tweaking settings. It isn't specifically testing for unalignment or anything like that; it's just one size being bigger than the other, and various bits and pieces have been lost in translation over time.

I'm gonna go ahead and merge this as-is, but we can of course update tests afterwards. I'm basically just somewhat doubtful that it's worth the effort to try to document all tests for precisely what they're doing page-wise and such. I don't think any of the tests really need to be updated for custom-page-sizes since they're all using wasm memories with 64k pages, and the tests for custom-page-sizes are going to be pretty orthogonal anyway. Much of the time the page limits are just there to reduce VM memory reservations and don't have much to do with page sizes.

@@ -767,7 +763,7 @@ mod tests {
max_tables_per_module: 0,
max_memories_per_module: 3,
table_elements: 0,
- memory_pages: 1,
+ max_memory_size: 65536,
fitzgen (Member):

ditto

@@ -344,7 +344,9 @@ async fn fuel_eventually_finishes() {
#[tokio::test]
async fn async_with_pooling_stacks() {
let mut pool = crate::small_pool_config();
- pool.total_stacks(1).memory_pages(1).table_elements(0);
+ pool.total_stacks(1)
+     .max_memory_size(65536)
fitzgen (Member):

ditto in this file's changes

@@ -354,7 +354,7 @@ fn test_pooling_allocator_initial_limits_exceeded() -> Result<()> {
let mut pool = crate::small_pool_config();
pool.total_memories(2)
.max_memories_per_module(2)
- .memory_pages(5)
+ .max_memory_size(5 * 65536)
fitzgen (Member):

Okay, I am going to stop saying "ditto about named constants" now and assume you will go over the diff yourself to catch all of them at this point.

@github-actions github-actions bot added the fuzzing, wasmtime:api, and wasmtime:config labels May 15, 2024

Subscribe to Label Action

cc @fitzgen

This issue or pull request has been labeled: "fuzzing", "wasmtime:api", "wasmtime:config"

Thus the following users have been cc'd because of the following labels:

  • fitzgen: fuzzing

To subscribe or unsubscribe from this label, edit the .github/subscribe-to-label.json configuration file.



Label Messager: wasmtime:config

It looks like you are changing Wasmtime's configuration options. Make sure to complete this checklist:

  • If you added a new Config method, you wrote extensive documentation for
    it.

    Our documentation should be of the following form:

    Short, simple summary sentence.
    
    More details. These details can be multiple paragraphs. There should be
    information about not just the method, but its parameters and results as
    well.
    
    Is this method fallible? If so, when can it return an error?
    
    Can this method panic? If so, when does it panic?
    
    # Example
    
    Optional example here.
    
  • If you added a new Config method, or modified an existing one, you
    ensured that this configuration is exercised by the fuzz targets.

    For example, if you expose a new strategy for allocating the next instance
    slot inside the pooling allocator, you should ensure that at least one of our
    fuzz targets exercises that new strategy.

    Often, all that is required of you is to ensure that there is a knob for this
    configuration option in wasmtime_fuzzing::Config (or one
    of its nested structs).

    Rarely, this may require authoring a new fuzz target to specifically test this
    configuration. See our docs on fuzzing for more details.

  • If you are enabling a configuration option by default, make sure that it
    has been fuzzed for at least two weeks before turning it on by default.


To modify this label's message, edit the .github/label-messager/wasmtime-config.md file.

To add new label messages or remove existing label messages, edit the
.github/label-messager.json configuration file.


@alexcrichton (Member, Author)

I've updated with <<16 and >>16 to reduce the risk of typos, but I'm curious what you think of my pushback on using a named constant everywhere. I'm basically hesitant to add a bunch of process to tests where all the constants are basically already random.

@alexcrichton alexcrichton added this pull request to the merge queue May 16, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 16, 2024
@alexcrichton alexcrichton requested a review from a team as a code owner May 17, 2024 03:58
@alexcrichton alexcrichton requested review from pchickey and removed request for a team May 17, 2024 03:58
@alexcrichton alexcrichton enabled auto-merge May 17, 2024 03:58
@alexcrichton alexcrichton added this pull request to the merge queue May 17, 2024
Merged via the queue into bytecodealliance:main with commit 906ea01 May 17, 2024
35 checks passed
@alexcrichton alexcrichton deleted the memory-pages-to-bytes branch May 17, 2024 04:20
alexcrichton added a commit to alexcrichton/wasmtime that referenced this pull request May 17, 2024
github-merge-queue bot pushed a commit that referenced this pull request May 17, 2024
* Fix some fuzz memory configuration issues

Fallout from #8628 that I forgot to handle.

* More fuzz tweaks

* More tweaks for more bugs
alexcrichton added a commit to alexcrichton/wasmtime that referenced this pull request May 20, 2024

* Use bytes for maximum size of linear memory with pooling

* Review comments

* Fix benchmark build
alexcrichton added a commit to alexcrichton/wasmtime that referenced this pull request May 20, 2024