Update and revamp wasm32 SIMD intrinsics #874
Conversation
r? @gnzlbg (rust_highfive has picked a reviewer for you, use r? to override)
Force-pushed from 0b5db66 to 1ef033a
Lots of time and lots of things have happened since the simd128 support was first added to this crate. Things are starting to settle down now, so this commit syncs the Rust intrinsic definitions with the current specification (https://github.com/WebAssembly/simd). Unfortunately not everything can be enabled just yet, but everything is in the pipeline for getting enabled soon.

This commit also applies a major revamp to how intrinsics are tested. The intention is that the setup should be much more lightweight and/or easier to work with after this commit.

At a high level, the changes here are:

* Testing with Node.js and `#[wasm_bindgen]` has been removed. Instead intrinsics are tested with Wasmtime, which has a nearly complete implementation of the SIMD spec (and soon fully complete!).
* Testing is switched to `wasm32-wasi` to make idiomatic Rust bits a bit easier to work with (e.g. `panic!`).
* Testing of this crate's simd128 feature for wasm is re-enabled. This will run on CI and both compile and execute intrinsics. This should bring wasm intrinsics to the same level of parity as x86 intrinsics, for example.
* New wasm intrinsics have been added:
  * `iNNxMM_loadAxA_{s,u}`
  * `vNNxMM_load_splat`
  * `v8x16_swizzle`
  * `v128_andnot`
  * `iNNxMM_abs`
  * `iNNxMM_narrow_*_{u,s}`
  * `iNNxMM_bitmask` - commented out until LLVM is updated to LLVM 11
  * `iNNxMM_widen_*_{u,s}` - commented out until bytecodealliance/wasmtime#1994 lands
  * `iNNxMM_{max,min}_{u,s}`
  * `iNNxMM_avgr_u`
* Some wasm intrinsics have been removed:
  * `i64x2_trunc_*`
  * `f64x2_convert_*`
  * `i8x16_mul`
* The `v8x16.shuffle` instruction is exposed. This is done through a `macro` (not `macro_rules!`, but `macro`). This is intended to be somewhat experimental and unstable until we decide otherwise. This instruction has 16 immediate-mode expressions and is as a result unsuited to the existing `constify_*` logic of this crate. I'm hoping that we can game out over time what a macro might look like and/or look for better solutions. For now, though, what's implemented is the first of its kind in this crate (an architecture-specific macro), so some extra scrutiny looking at it would be appreciated.
* Lots of `assert_instr` annotations have been fixed for wasm.
* All wasm simd128 tests are uncommented and passing now.

This is still missing tests for new intrinsics and it's also missing tests for various corner cases. I hope to get to those later as the upstream spec itself gets closer to stabilization.

In the meantime, however, I went ahead and updated the `hex.rs` example with a wasm implementation using intrinsics. With it I got some very impressive speedups using Wasmtime:

test benches::large_default  ... bench:   213,961 ns/iter (+/- 5,108) = 4900 MB/s
test benches::large_fallback ... bench: 3,108,434 ns/iter (+/- 75,730) = 337 MB/s
test benches::small_default  ... bench:        52 ns/iter (+/- 0) = 2250 MB/s
test benches::small_fallback ... bench:       358 ns/iter (+/- 0) = 326 MB/s

In other words, with Wasmtime, hex encoding using SIMD is 15x faster on 1MB chunks and 7x faster on small <128-byte chunks.

All of these intrinsics are still unstable and will continue to be so, presumably until the simd proposal in wasm itself progresses to a later stage. Additionally we'll still want to sync with clang on intrinsic names (or decide not to) at some point in the future.
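As a rough illustration of what the new surface area looks like (this is not code from this PR's `hex.rs`; the function names follow the spec-style `iNNxMM_*`/`v128_*` naming listed above and may still be renamed as the spec evolves):

```rust
// A minimal sketch, assuming the spec-style names used in this PR
// (`i8x16_min_u`, `v128_andnot`); compile for a wasm32 target.
#[cfg(target_arch = "wasm32")]
mod sketch {
    use std::arch::wasm32::*;

    // Combine two of the newly added intrinsics: a lane-wise unsigned
    // minimum followed by a bitwise and-not.
    #[target_feature(enable = "simd128")]
    pub unsafe fn min_and_clear(a: v128, b: v128, mask: v128) -> v128 {
        // v128_andnot(x, y) computes x & !y.
        v128_andnot(i8x16_min_u(a, b), mask)
    }
}
```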
Force-pushed from 1ef033a to 47e3245
///
/// All indexes `$i*` must have the type `u32`.
#[allow_internal_unstable(platform_intrinsics, rustc_attrs)]
pub macro v8x16_shuffle(
It's probably worth expanding a bit on what I'm thinking here as well. My hope with this is that we'll enable a platform-specific macro which is not available in the prelude but instead must be imported:
use std::arch::wasm32::v8x16_shuffle;
This will have the macro clearly scoped to wasm32, and using a macro (as can be seen below) allows us to still control the implementation in such a way that's opaque to the caller, but also in a way where we can satisfy the compiler's codegen requirements. Namely, all the immediate indices here can be required to be constant by the compiler itself, ensuring that we can actually codegen the instruction to LLVM and other backends and such.
I believe that we have all the pieces necessary to have unstable internals that aren't actually exposed to the user. We can also get function-like type checking by carefully crafting the macro to ensure that the input expressions all have an expected type.
Like I mentioned in the PR description, though, this is, AFAIK, a new thing to do. I don't believe we've pioneered this elsewhere, so I don't know if there are hidden downsides (such as using `macro` instead of `macro_rules!`). I'm pretty certain we're not altering the current stability story of libcore, and I think we can continue to figure out the best way to expose an intrinsic like this over time.
FWIW the clang implementation uses a macro that calls a `__builtin_*` function, as do many x86_64 intrinsics with constant arguments, where they're all required to be macros in C.
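For concreteness, here's a rough sketch of the call shape being described. The argument list is an assumption based on the doc comment quoted above (two vectors followed by sixteen `u32` indices), and this macro form was later replaced by const generics further down this thread, so treat it as purely illustrative:

```rust
use std::arch::wasm32::{v128, v8x16_shuffle};

// Hypothetical invocation sketch: interleave the low bytes of `a` and `b`.
// Indices 0-15 select bytes of `a`, 16-31 select bytes of `b`; they must be
// compile-time constants so the v8x16.shuffle immediate can be emitted.
#[target_feature(enable = "simd128")]
unsafe fn interleave_low(a: v128, b: v128) -> v128 {
    v8x16_shuffle!(a, b, 0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23)
}
```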
I believe that the proper way to expose this kind of functionality in the long term is to use const generics. I hope that we will eventually be able to use `where` clauses to constrain the const values to be within a certain range, which is needed to avoid LLVM-level ICEs due to bad intrinsic inputs.
Since this is all unstable I'm happy to merge it as it is, but this is something that we should definitely revisit prior to stabilization.
Oh nice! I hadn't thought about the use of const generics here (haven't had a chance to use them myself yet!)
I tested this out and it works well for the `v8x16_shuffle` implementation. It turns out that `simd_shuffle16` in the compiler actually protects us against out-of-bounds indices (they're compile-time errors), so I think that happens either with the macro or with the const-generic function (although this would be best with `where` clauses rather than relying on the just-happens-to-be-caught-during-codegen behavior).
One thing I had difficulty with, though, was implementing a const-generic `v16x8_shuffle` in terms of `v8x16_shuffle`. I'm sure it's possible to implement natively, but this would mean that we probably would want to include `v16x8_shuffle` as an "intrinsic" since it wouldn't be easily implementable externally.
Actually, I've gone ahead and pushed up a version that uses const generics instead. I went ahead and propagated this to everything else using `#[rustc_args_required_const]` in the simd module as well. Given the trajectory of const generics it looks like this may work well for wasm simd!
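As a rough sketch of what the const-generic form looks like to a caller (the exact name, index type, and lane count are as introduced in this PR and are an assumption here, since they may change before stabilization):

```rust
use std::arch::wasm32::{v128, v8x16_shuffle};

// Swap the two 64-bit halves of a vector by shuffling it with itself.
// The 16 lane indices are const generic arguments, so out-of-range values
// are rejected at compile time rather than surfacing as codegen errors.
#[target_feature(enable = "simd128")]
unsafe fn swap_halves(a: v128) -> v128 {
    v8x16_shuffle::<8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7>(a, a)
}
```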
This commit unconditionally exposes SIMD functions from the `wasm32` module. This is done in such a way that the standard library does not need to be recompiled to access SIMD intrinsics and use them. This, hopefully, is the long-term story for SIMD in WebAssembly in Rust.

It's unlikely that all WebAssembly runtimes will end up implementing SIMD, so the standard library is unlikely to use SIMD any time soon, but we want to make sure it's easily available to folks! This commit enables all this by ensuring that SIMD is available to the standard library, regardless of compilation flags. This'll come with the same caveats as x86 support, where it doesn't make sense to call these functions unless you're enabling simd support one way or another locally. Additionally, as with x86, if you don't call these functions then the instructions won't show up in your binary.

While I was here I went ahead and expanded the WebAssembly-specific documentation for the wasm32 module as well, ensuring that the current state of SIMD/Atomics is documented.
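A small sketch of the caveat described above (the helper function here is hypothetical; only the gating pattern matters): the intrinsics are always nameable, but they only make sense to call when simd128 is enabled, either crate-wide via `-C target-feature=+simd128` or per-function as shown here:

```rust
#[cfg(target_arch = "wasm32")]
mod gated {
    use std::arch::wasm32::*;

    // Reachable from code compiled without simd128 globally; the feature is
    // enabled only for this function, so callers must uphold the safety
    // requirement that the runtime actually supports simd128.
    #[target_feature(enable = "simd128")]
    pub unsafe fn or3(a: v128, b: v128, c: v128) -> v128 {
        v128_or(v128_or(a, b), c)
    }
}
```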
This syncs the names of the intrinsics with the current spec; they were renamed in January.
Ok I've pushed up two more commits. One is to avoid […]; the second is a sync of the names of the atomic memory intrinsics, which I'll need to update the usage of in the standard library when the submodule is updated (which I can do after this lands).
@Amanieu would you be up for reviewing this? Or should I try to rope in some other wasm folks?
I'm happy to review this but I won't be able to get around to it before the end of this week.
Ok cool, thanks! And no rush of course, just figured it was time for me to get around to updating the wasm simd support :)
Force-pushed from 80a4a8d to 011c9df
Support for them at runtime was added to Wasmtime
This commit switches all wasm simd intrinsics to using const generics instead of the funky `#[rustc_args_required_const]` attribute. This is ideally a bit more future-proof and more readily expresses the constraints of these instructions!
FWIW I think const generics work a lot better for SIMD intrinsics than […]
Since […]
Ok I switched const indices to […]
One thing that may be pretty reasonable to say, though, is that the text format for SIMD has special forms of […], so we're arguably just matching that form, where you can create a vector with any of the member types without having to go through all the gymnastics of byte transmutes yourself.
Thanks again @Amanieu for your help reviewing this!
This commit updates the src/stdarch submodule primarily to include rust-lang/stdarch#874, which updated and revamped WebAssembly SIMD intrinsics and renamed WebAssembly atomics intrinsics. This is all unstable surface area of the standard library, so the changes should be ok here. The SIMD updates also enable SIMD intrinsics to be used by any program at any time, yay! cc rust-lang#74372, a tracking issue I've opened for the stabilization of SIMD intrinsics