Due to the number of moving parts, e2e tests tend to be a bit flakier than unit tests. We have one test in particular (Rust WebRTC) that fails every now and then: https://github.com/libp2p/rust-libp2p/actions/runs/4466390814/attempts/1
Whilst we have identified why it fails (the fix would require upgrading webrtc-rs), we cannot currently upgrade because the latest version made some unfortunate breaking changes: webrtc-rs/webrtc#413
Automatically retrying failing tests and marking them as flaky is not a new idea. New test runners like nextest incorporate such features: https://nexte.st/book/retries.html
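For reference, based on the nextest docs linked above, enabling retries there is a small config change (the `ci` profile name is just an example):

```toml
# .config/nextest.toml
[profile.ci]
# Retry a failing test up to 2 more times; a test that then passes is
# reported as FLAKY instead of failing the run.
retries = 2
```

The profile is selected with `cargo nextest run --profile ci`; retries can also be set via the `--retries` flag or the `NEXTEST_RETRIES` environment variable.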
Our interoperability job usually takes ~15 minutes. If it took 2 minutes I wouldn't consider it worth it, but having to wait for another 15 because 1 out of 100 e2e tests failed is a bit of a bummer.
Can we / Do we want to add something like this to our test runner? To make retries visible, we could output a "warning" annotation like the ones visible here: https://github.com/libp2p/rust-libp2p/actions/runs/4071431104
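A rough sketch of what this could look like in the runner (everything here is hypothetical: `runWithRetries`, `runTest`, and the retry/annotation policy are illustrations, not existing code): retry a failing test a bounded number of times, and if it only passes on a retry, print a GitHub Actions `::warning` workflow command so the flakiness shows up as an annotation on the run summary.

```typescript
// Hypothetical helper: `runTest` stands in for however the runner executes
// a single interop test and reports pass/fail.
async function runWithRetries(
  name: string,
  runTest: () => Promise<boolean>,
  maxRetries = 2,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (await runTest()) {
      if (attempt > 0) {
        // GitHub Actions workflow command: printing this to stdout in a job
        // step surfaces a warning annotation on the run summary page.
        console.log(
          `::warning title=Flaky test::${name} passed after ${attempt} retry(ies)`,
        );
      }
      return true;
    }
  }
  return false;
}
```

Keeping the retry count low (1 or 2) would limit how long a genuinely broken test can hide behind retries.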