Grizzly Replay

Tyson Smith edited this page Apr 29, 2024 · 6 revisions

Overview

Grizzly Replay is a tool for (re)running existing test cases. It can be used to verify fixes or to investigate issues by capturing logs with different builds and debugging tools. All that is needed is a browser build and a test case. Be sure to check python3 -m grizzly.replay -h for all available options.

Common scenarios

Simple replay

To verify a test case, run python3 -m grizzly.replay <browser-build> <testcase>. The exit code is 0 on success (a result was reproduced); otherwise it is 1.
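A minimal invocation might look like the following. The build and test case paths are placeholders; substitute your own.

```shell
# Replay a single test case against a local build.
# ~/builds/firefox/firefox and testcase.html are example paths.
python3 -m grizzly.replay ~/builds/firefox/firefox testcase.html

# Exit code 0 means the result was reproduced, 1 means it was not.
echo $?
```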

Debugging

Using -l <log_path> saves target logs (stderr, stdout, and debugger output) to the specified path.
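For example, assuming the same placeholder paths as above and a local ./logs/ directory:

```shell
# Save stderr/stdout and debugger output under ./logs/ for later inspection.
python3 -m grizzly.replay ~/builds/firefox/firefox testcase.html -l ./logs/
```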

Attach a debugger

Use --post-launch-delay <seconds> with a positive value to pause after the browser launches, giving you a chance to attach a debugger before the test case is loaded.
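For instance, a 60-second pause (paths are placeholders) leaves time to find the browser process and attach to it:

```shell
# Wait 60 seconds after the browser launches before loading the test case,
# leaving time to attach a debugger (e.g. gdb -p <pid>) to the browser process.
python3 -m grizzly.replay ~/builds/firefox/firefox testcase.html --post-launch-delay 60
```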

Unreliable test cases

A test case may not reproduce an issue on every attempt. In that case the --repeat <#> flag is helpful; it runs the test case multiple times.
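For an intermittent issue, something like the following (placeholder paths, arbitrary repeat count) increases the odds of reproducing it:

```shell
# Run the test case 10 times to account for unreliable reproduction.
python3 -m grizzly.replay ~/builds/firefox/firefox testcase.html --repeat 10
```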

Test case triggers multiple crashes

A test case may trigger more than one issue or crash signature, on the same build or across different builds. When verifying a fix it is important to identify the expected result. By default the first result found is used as the crash signature to match. To override this, use --sig <sig.json> to supply a FuzzManager signature that matches the intended issue.
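For example, with a signature file exported from FuzzManager (sig.json here is a placeholder name):

```shell
# Only treat results matching the given FuzzManager signature as a reproduction;
# other crashes triggered by the test case are ignored.
python3 -m grizzly.replay ~/builds/firefox/firefox testcase.html --sig sig.json
```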

rr & Pernosco

rr traces can be collected by using --rr together with -l <log_path>. Note that launch times are slower and saving the rr trace takes extra time. Building the target browser with -O0 -g produces the most useful traces. These traces should be compatible with Pernosco.
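Putting it together (placeholder paths; the build is assumed to be compiled with -O0 -g):

```shell
# Record the target under rr and save the trace along with the logs in ./logs/.
python3 -m grizzly.replay ~/builds/debug-firefox/firefox testcase.html --rr -l ./logs/
```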