
🏃 Add a run-mode that executes the input program once and then exits #211

Merged: 10 commits into main from rainy/test-runner on Jan 31, 2023

Conversation

@itsrainy (Contributor) commented on Dec 16, 2022

This introduces a "run-mode" to Viceroy that will allow us to use it as a test runner. The goal is to allow developers of C@E services to use Viceroy to run unit tests that rely on things provided by Fastly's SDKs. This is based on a proof-of-concept that @acfoltzer put together a few months ago 🙏

how to use it

To run a wasm program's _start function and then exit, the invocation looks like this:

$ viceroy -C fastly.toml --run my_program.wasm

This will set up the context described in fastly.toml and all the wasm context expected by a C@E service, and then run _start. If you would like to pass arguments to the wasm program, you can specify those after a -- on the command line, like so:

$ viceroy -C fastly.toml --run my_program.wasm -- --arg-for-wasm-program

As a Rust developer, to use this as a test runner you'll need to do the following:

  1. Add the following to your project's .cargo/config:
[build]
target = "wasm32-wasi"

[target.wasm32-wasi]
runner = "{PATH_TO_VICEROY_BINARY} -C fastly.toml --run -q -- "
  2. Install cargo-nextest
  3. Write tests that use the fastly crate, à la the examples below (a hypothetical sketch of the some_handler helper they reference appears after this list):
#[test]
fn test_using_client_request() {
    let client_req = fastly::Request::from_client();
    assert_eq!(client_req.get_method(), Method::GET);
    assert_eq!(client_req.get_path(), "/");
}

#[test]
fn test_using_bodies() {
    let mut body1 = fastly::Body::new();
    body1.write_str("hello, ");
    let mut body2 = fastly::Body::new();
    body2.write_str("Viceroy!");
    body1.append(body2);
    let appended_str = body1.into_string();
    assert_eq!(appended_str, "hello, Viceroy!");
}

#[test]
fn test_a_handler_with_fastly_types() {
    let req = fastly::Request::get("http://example.com/Viceroy");
    let resp = some_handler(req).expect("request succeeds");
    assert_eq!(resp.get_content_type(), Some(TEXT_PLAIN_UTF_8));
    assert_eq!(resp.into_body_str(), "hello, /Viceroy!");
}
  4. Run your tests with cargo nextest run:
 % cargo nextest run
   Compiling unit-tests-test v0.1.0 (/Users/rsinclair/src/fastly/Viceroy-test-runner-example/fixture)
    Finished test [unoptimized + debuginfo] target(s) in 1.16s
    Starting 3 tests across 1 binaries
        PASS [   2.106s] unit-tests-test::bin/unit-tests-test tests::test_a_handler_with_fastly_types
        PASS [   2.225s] unit-tests-test::bin/unit-tests-test tests::test_using_bodies
        PASS [   2.223s] unit-tests-test::bin/unit-tests-test tests::test_using_client_request
------------
     Summary [   2.230s] 3 tests run: 3 passed, 0 skipped
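
The tests in step 3 reference a some_handler function that isn't shown in this PR. As a point of reference only, a minimal sketch of what such a handler might look like with the fastly crate, written here to match the assertions in test_a_handler_with_fastly_types (the name and body are assumptions, not part of this change):

use fastly::http::StatusCode;
use fastly::mime::TEXT_PLAIN_UTF_8;
use fastly::{Error, Request, Response};

// Hypothetical handler used by the example test above: it echoes the
// request path back in a plain-text body.
fn some_handler(req: Request) -> Result<Response, Error> {
    Ok(Response::from_status(StatusCode::OK)
        .with_content_type(TEXT_PLAIN_UTF_8)
        .with_body(format!("hello, {}!", req.get_path())))
}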

why cargo-nextest and not just cargo test?

The above example would work if executed with cargo test. Where that breaks down is if any tests fail. There is no way to recover from a panic in wasm, so test execution would halt as soon as the first test failure occurs. Because of this, we need each test to be executed in its own wasm instance and have the results aggregated to report overall success/failure. As @acfoltzer pointed out below, this is what cargo-nextest does.

@acfoltzer (Contributor) commented:

@itsrainy:

  • This is currently tied to being used in a very specific way by cargo test. Originally, from chatting with @acfoltzer, I had hoped to make this a more general execution mode that would simply set up the required state and naively run the provided binary's _start function. Unfortunately, querying the binary for the list of tests and running them one-by-one restricts this to only working with binaries produced by cargo test (or binaries with a very similar interface). I'm interested to hear if anyone has ideas on how to get around that.
  • If we go this route and decide to continue adding support for things like additional arguments, a configurable number of threads, etc., we run into a problem where we are pretty much re-implementing large swaths of the default cargo test runner. As a small example, this already tries to mimic the output of cargo test. That seems less than ideal, and I'm not sure what to do about it.

You already anticipated my primary feedback 😆

The idea of using --list and --exact to run each test individually is great, but as you point out, it's quite dependent on the behavior of the Rust test harness. The good news is that this is such a good idea that it's exactly what cargo nextest does!
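
(For reference, these depend on the standard libtest flags that every cargo-built test binary accepts; my_test_binary below is just a placeholder, and the invocations go through whatever target runner is configured:)

$ ./my_test_binary --list                    # print the names of the tests in the binary
$ ./my_test_binary --exact tests::it_works   # run only the exactly-named test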

I played around with this using wasmtime as a test runner for a simple test library. In .cargo/config.toml:

[target.wasm32-wasi]
runner = "wasmtime --"

src/lib.rs:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    fn it_doesnt() {
        panic!()
    }
}

With regular cargo test --target wasm32-wasi, the panic leads to a trap, and we end up getting no overall test results:

❯ cargo test --target wasm32-wasi
    Finished test [unoptimized + debuginfo] target(s) in 0.00s
     Running unittests src/lib.rs (target/wasm32-wasi/debug/deps/test_test-77eb6ad9e93c9b9a.wasm)

running 2 tests
test tests::it_doesnt ... Error: failed to run main module /tmp/test-test/target/wasm32-wasi/debug/deps/test_test-77eb6ad9e93c9b9a.wasm

Caused by:
0: failed to invoke command default
1: wasm trap: wasm unreachable instruction executed
wasm backtrace:
0: 0x40d45 - !__rust_start_panic
1: 0x40a7f - !rust_panic
2: 0x40a4e - !std::panicking::rust_panic_with_hook::h6d84c9448306d839
3: 0x3fabb - !std::panicking::begin_panic_handler::{{closure}}::hd63a5a2e775785ab
4: 0x3fa20 - !std::sys_common::backtrace::__rust_end_short_backtrace::h01a301108fc5d2e6
5: 0x402f1 - !rust_begin_unwind
6: 0x4728e - !core::panicking::panic_fmt::h3c051bd1fa34c0ac
7: 0x477e9 - !core::panicking::panic::hcd725ec23c6a51f6
8: 0xf06 - !test_test::tests::it_doesnt::hfccfc12a9e632fb1
9: 0xba5 - !test_test::tests::it_doesnt::{{closure}}::hc9eeceb81507ba32
10: 0x11fb - !core::ops::function::FnOnce::call_once::h32cb9f711f851ff4
11: 0x8cd1 - !test::__rust_begin_short_backtrace::h621ab50980acb8ed
12: 0x8cc6 - !core::ops::function::FnOnce::call_once{{vtable.shim}}::hde89270332be6c9c
13: 0x31402 - !test::run_test::run_test_inner::h244e4fe2b4984316
14: 0x1c885 - !test::run_test::ha4220efa39a49198
15: 0x17168 - !test::console::run_tests_console::h40ce7a7065e76917
16: 0x2f98b - !test::test_main::h008d0f535a3a5ede
17: 0x30925 - !test::test_main_static::h8df7ffd94cb055b4
18: 0xc89 - !test_test::main::hfe2b3bfdf77aa430
19: 0x11ba - !core::ops::function::FnOnce::call_once::h212233d6aa5fc092
20: 0x1688 - !std::sys_common::backtrace::__rust_begin_short_backtrace::h1780b3721df11202
21: 0xd90 - !std::rt::lang_start::{{closure}}::hbd29402be7b85f4e
22: 0x39972 - !std::rt::lang_start_internal::hc6f9d2f73ce7f369
23: 0xd2d - !std::rt::lang_start::h493f21176b2ec7c8
24: 0xcad - !__original_main
25: 0x928 - !_start
26: 0x53cfb - !_start.command_export
note: using the WASMTIME_BACKTRACE_DETAILS=1 environment variable to may show more debugging information

error: test failed, to rerun pass --lib

Caused by:
process didn't exit successfully: wasmtime -- /tmp/test-test/target/wasm32-wasi/debug/deps/test_test-77eb6ad9e93c9b9a.wasm (exit status: 134)

On the other hand, cargo nextest gives us full results:

❯ cargo nextest run --target wasm32-wasi
    Finished test [unoptimized + debuginfo] target(s) in 0.01s
info: for the target platform, using target runner `wasmtime --` defined by `target.wasm32-wasi.runner` within `/tmp/test-test/.cargo/config`
    Starting 2 tests across 1 binaries
        FAIL [   0.046s] test-test tests::it_doesnt

--- STDOUT: test-test tests::it_doesnt ---

running 1 test
test tests::it_doesnt ...
--- STDERR: test-test tests::it_doesnt ---
thread 'main' panicked at 'explicit panic', src/lib.rs:17:9
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
Error: failed to run main module /tmp/test-test/target/wasm32-wasi/debug/deps/test_test-77eb6ad9e93c9b9a.wasm

Caused by:
0: failed to invoke command default
1: wasm trap: wasm unreachable instruction executed
wasm backtrace:
0: 0x40d45 - !__rust_start_panic
1: 0x40a7f - !rust_panic
2: 0x40a4e - !std::panicking::rust_panic_with_hook::h6d84c9448306d839
3: 0x3fabb - !std::panicking::begin_panic_handler::{{closure}}::hd63a5a2e775785ab
4: 0x3fa20 - !std::sys_common::backtrace::__rust_end_short_backtrace::h01a301108fc5d2e6
5: 0x402f1 - !rust_begin_unwind
6: 0x4728e - !core::panicking::panic_fmt::h3c051bd1fa34c0ac
7: 0x477e9 - !core::panicking::panic::hcd725ec23c6a51f6
8: 0xf06 - !test_test::tests::it_doesnt::hfccfc12a9e632fb1
9: 0xba5 - !test_test::tests::it_doesnt::{{closure}}::hc9eeceb81507ba32
10: 0x11fb - !core::ops::function::FnOnce::call_once::h32cb9f711f851ff4
11: 0x8cd1 - !test::__rust_begin_short_backtrace::h621ab50980acb8ed
12: 0x8cc6 - !core::ops::function::FnOnce::call_once{{vtable.shim}}::hde89270332be6c9c
13: 0x31402 - !test::run_test::run_test_inner::h244e4fe2b4984316
14: 0x1c885 - !test::run_test::ha4220efa39a49198
15: 0x17168 - !test::console::run_tests_console::h40ce7a7065e76917
16: 0x2f98b - !test::test_main::h008d0f535a3a5ede
17: 0x30925 - !test::test_main_static::h8df7ffd94cb055b4
18: 0xc89 - !test_test::main::hfe2b3bfdf77aa430
19: 0x11ba - !core::ops::function::FnOnce::call_once::h212233d6aa5fc092
20: 0x1688 - !std::sys_common::backtrace::__rust_begin_short_backtrace::h1780b3721df11202
21: 0xd90 - !std::rt::lang_start::{{closure}}::hbd29402be7b85f4e
22: 0x39972 - !std::rt::lang_start_internal::hc6f9d2f73ce7f369
23: 0xd2d - !std::rt::lang_start::h493f21176b2ec7c8
24: 0xcad - !__original_main
25: 0x928 - !_start
26: 0x53cfb - !_start.command_export
note: using the WASMTIME_BACKTRACE_DETAILS=1 environment variable to may show more debugging information

Canceling due to test failure: 1 tests still running
PASS [ 0.056s] test-test tests::it_works

 Summary [   0.058s] 2 tests run: 1 passed, 1 failed, 0 skipped
    FAIL [   0.046s] test-test tests::it_doesnt

error: test run failed

I'd like us to focus on supporting a generic but comprehensive interface for executing arbitrary programs with arguments and environment variables, much like wasmtime's CLI. That ought to be a robust enough foundation to then build (or adopt) language-specific tooling.

  • Do we want to support calls to backends in tests?

Let's leave this for a future iteration if people demand it.

  • How much of the environment in Fastly.toml should be set up for this execution mode?

I think it'd be reasonable to support the geolocation, dictionaries, and object store fields. But these would all be fairly easy to tack on after landing basic execution support.

@itsrainy (Contributor, Author) commented on Jan 17, 2023

@acfoltzer There are still some rough edges here, but what do you think of this general shape?
I pulled out all of the test listing/orchestration stuff, and now it just runs the _start function after passing along any params that follow the --. With a cargo config that looks like this:

[build]
target = "wasm32-wasi"

[target.wasm32-wasi]
runner = "{PATH_TO_VICEROY} -C fastly.toml --run -q --"

running cargo nextest run successfully runs all of the tests.

aforementioned rough edges:

  • It seems to be much slower. For my set of very simple tests that don't use the fastly crate, it takes about ~150s on my machine vs. ~20s (using wasmtime directly as the test runner is ~instantaneous). For the tests above which do use the fastly crate, it takes ~450s vs ~50s on my machine. I'd expect a bit of a performance hit, but 7-10x as long seems excessive, so I'm going to dig in here.
  • nextest doesn't like when anything other than the binary output is logged to stdout, so I added a "quiet" flag to viceroy to mute any viceroy/viceroy-lib logs. This could maybe be handled by the verbosity flag, but I haven't looked into that yet.

@acfoltzer (Contributor):

@itsrainy that looks great, you nailed it!

  • It seems to be much slower. For my set of very simple tests that don't use the fastly crate, it takes about ~150s on my machine vs. ~20s (using wasmtime directly as the test runner is ~instantaneous). For the tests above which do use the fastly crate, it takes ~450s vs ~50s on my machine. I'd expect a bit of a performance hit, but 7-10x as long seems excessive, so I'm going to dig in here.

Are you building Viceroy in --release mode for this testing? Wasmtime/cranelift is notoriously slow in debug mode.
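
(For anyone following along: a release build is the usual cargo invocation, and the runner setting then points at the release binary; the path below assumes cargo's default layout.)

$ cargo build --release
# then use target/release/viceroy as {PATH_TO_VICEROY} in the runner setting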

  • nextest doesn't like when anything other than the binary output is logged to stdout, so I added a "quiet" flag to viceroy to mute any viceroy/viceroy-lib logs. This could maybe be handled by the verbosity flag, but I haven't looked into that yet.

Yeah, that seems like the right call for this mode. The stdio handles should look pretty much how they would if we were running the program natively.

@itsrainy (Contributor, Author):

Are you building Viceroy in --release mode for this testing? Wasmtime/cranelift is notoriously slow in debug mode.

Nope, I wasn't! That made it much much faster (<5s total for both the above).

@itsrainy itsrainy changed the title 🧪 Add a test runner mode that is compatible with cargo test 🏃 Add a run-mode that executes the input once and then exits Jan 19, 2023
@itsrainy itsrainy changed the title 🏃 Add a run-mode that executes the input once and then exits 🏃 Add a run-mode that executes the input program once and then exits Jan 19, 2023
@itsrainy (Contributor, Author):

I added some tests and cleaned up the PR description and doc comments and I think this is ready for another round of review

@acfoltzer (Contributor):

@itsrainy just want to confirm that this is ready for review, given the commit activity since your last message. Also, I'm going to untag other reviewers to make it clear that I own the review for this one

@itsrainy (Contributor, Author):

@acfoltzer yes it's ready! The commit activity was rebasing/force-pushing, not any actual development :)

@acfoltzer (Contributor) left a comment:

Excellent work, @itsrainy! The fundamental implementation of the feature is shippable imo. As I play with the interface, though, I'm coming to the conclusion that we should start using clap subcommands in the CLI rather than relying on particular combinations of flags.

As of this review, it's possible to run Viceroy with nonsensical argument combinations, for example:

# Passing ignored Wasm args to web server creation
$ viceroy -C fastly.toml my_app.wasm -- these_args are_ignored
# Passing ignored server args to `--run`
$ viceroy -C fastly.toml --run my_app.wasm --addr localhost:12345 --log-stdout --log-stderr

I'd like to get some buy-in from other Viceroy devs before embarking on this significant change, but my proposal is:

  • Introduce serve and run subcommands to the Viceroy CLI

  • Keep arguments applicable to both modes in a toplevel struct (probably marked as global so they can come before or after the subcommand), and move mode-specific arguments to the serve and run subcommand variants

  • Make serve the default behavior in order to maintain compatibility with drivers of the existing interface like the fastly CLI.

    N.B. I am not sure whether this is straightforward to achieve with clap, so it may not be worth the effort if we coordinate sufficiently with the fastly CLI to have it perform invocations appropriate for the version of Viceroy it detects.

I know this is a fair amount of extra work on top of what you've put together here, but I think it'll be a lot easier to onboard new users to this feature if we have more clarity in the interface.

(edit: some examples to illustrate the desired interface)

$ viceroy -C fastly.toml serve my_app.wasm -v2 --addr localhost:12345 --log-stdout --log-stderr
$ viceroy -C fastly.toml run my_app.wasm -v2 -- wasm_arg1 another_wasm_arg
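
To make the proposal concrete, here is a rough clap-derive sketch of what that shape could look like; the struct, field, and flag names are illustrative assumptions, not the actual Viceroy code, and the syntax is current clap derive rather than whatever version Viceroy pins:

use std::path::PathBuf;
use clap::{Parser, Subcommand};

// Illustrative only: shared options live at the top level and are marked
// global, while mode-specific options live on the subcommand variants.
#[derive(Parser)]
struct Opts {
    /// Path to the fastly.toml, shared by both modes.
    #[arg(short = 'C', long = "config", global = true)]
    config_path: Option<PathBuf>,

    /// When no subcommand is given, fall back to `serve` for compatibility.
    #[command(subcommand)]
    command: Option<Command>,
}

#[derive(Subcommand)]
enum Command {
    /// Start the local web server (the current default behavior).
    Serve {
        input: PathBuf,
        #[arg(long)]
        addr: Option<String>,
    },
    /// Run the program's _start once, then exit.
    Run {
        input: PathBuf,
        /// Arguments after `--` are passed through to the wasm program.
        #[arg(last = true)]
        wasm_args: Vec<String>,
    },
}

Making serve the default would then just be a matter of treating command: None as Serve when dispatching, at the cost of a little manual argument plumbing.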

(Inline review threads on cli/src/main.rs (×2) and lib/src/execute.rs; all resolved.)
@acfoltzer (Contributor) left a comment:

Based on offline discussion, we'll be:

  • Moving ahead with the CLI as-implemented in this PR
  • Following up with a PR dedicated to reworking the CLI interface with subcommands
  • Marking as experimental and/or omitting the run interface from the changelog, should a Viceroy release be necessary between the two PRs

(Inline review threads on cli/src/opts.rs (×2); resolved.)
@itsrainy (Contributor, Author):

CLI follow-up ticket: #222

@itsrainy itsrainy requested a review from acfoltzer January 27, 2023 19:35
(Additional inline review threads on cli/src/main.rs; resolved.)

Inline review thread on cli/src/main.rs (outdated diff):
@@ -80,7 +79,11 @@ pub async fn main() -> ExitCode {
 pub async fn run_wasm_main(opts: Opts) -> Result<(), anyhow::Error> {
     // Load the wasm module into an execution context
     let ctx = create_execution_context(&opts).await?;
-    ctx.run_main(opts.wasm_args()).await
+    ctx.run_main(
+        opts.input().file_stem().unwrap().to_str().unwrap(),
@acfoltzer (Contributor):

I think the second unwrap here could cause us problems since Windows filenames are, iirc, UTF-16. Can you use to_string_lossy instead?

The first unwrap should be okay, because presumably check_module would fail if this argument was a directory. It might be worth saying so explicitly, though.
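
A sketch of what the suggested change could look like, assuming run_main takes a &str program name as in the diff above (this is not the exact patch):

// file_stem() is presumably safe to unwrap-or-expect here because
// check_module has already rejected inputs that aren't files, but the stem
// may not be valid UTF-8, so use a lossy conversion instead of to_str().unwrap().
let program_name = opts
    .input()
    .file_stem()
    .expect("input should be a file; check_module validates this earlier")
    .to_string_lossy();
ctx.run_main(&program_name, opts.wasm_args()).await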

@itsrainy (Contributor, Author):

@acfoltzer I think you're right that check_module would catch that while parsing CLI args. I added a message here just to make it a bit more clear what is going on though.

@itsrainy (Contributor, Author):

also I'm still fairly new to Rust, so if there's a different way that you would have written this, please let me know :)

@acfoltzer (Contributor):

I think this is fine, especially since it'd come up right away when executing a command rather than lying in wait to bring down a long-running system. I could imagine introducing a new type for the validated filename, just to have something tracked by the compiler up to this point assuring us that the filename exists, but that would honestly be overkill for this situation.

@itsrainy itsrainy requested a review from acfoltzer January 30, 2023 22:23
@acfoltzer (Contributor) left a comment:

🎉

@itsrainy itsrainy merged commit 8bef07b into main Jan 31, 2023
@itsrainy itsrainy deleted the rainy/test-runner branch January 31, 2023 03:31
@yannh commented on Feb 27, 2023

@itsrainy @acfoltzer sorry for adding essentially a support request to this PR but... does this now permit running tests in wasm for Fastly compute Rust projects? I would be incredibly interested in even the vaguest description of how to use this 🙏

@acfoltzer (Contributor):

Hi @yannh, the follow-up CLI work described in the comments here is going to change the interface to this functionality, so we're holding off on writing public documentation until then. The PR description @itsrainy wrote above is an accurate description of how to run tests for Rust SDK programs today.
