Only test ones we know will succeed (#1628)
A bit of a hack to only run the network tests we expect to succeed. This
at least ensures we don't get any worse, even if it doesn't directly
allow us to track whether we're getting better.

See also actions/runner#2347
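
In other words: encode the expectations in the test matrix itself, instead of running every combination and tolerating failures. A minimal sketch of the pattern (the job name and surrounding keys are illustrative; only netem_loss mirrors the actual change):

    jobs:
      network-test:
        strategy:
          # Keep running the other combinations when one fails, so a single
          # regression does not hide the results of the remaining jobs.
          fail-fast: false
          matrix:
            # Enumerate only the combinations we expect to pass; any failing
            # job is then a genuine regression, not a known-bad configuration.
            netem_loss: [0, 1, 2, 3]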
ch1bo authored Sep 13, 2024
2 parents 7d9499c + 20c212f commit a589d93
Showing 1 changed file with 7 additions and 11 deletions.

.github/workflows/network-test.yaml
@@ -29,7 +29,8 @@ jobs:
         # Currently this is just a label and does not have any functional impact.
         peers: [3]
         scaling_factor: [10, 50]
-        netem_loss: [0, 1, 2, 3, 4, 5, 10, 20]
+        # Note: We only put here the configuration values we _expected to pass_.
+        netem_loss: [0, 1, 2, 3]
     name: "Peers: ${{ matrix.peers }}, scaling: ${{ matrix.scaling_factor }}, loss: ${{ matrix.netem_loss }}"
     steps:
       - uses: actions/checkout@v4
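
Since GitHub Actions expands the matrix as a cross product, trimming netem_loss also shrinks the job count: 1 peers value × 2 scaling factors × 4 loss values = 8 jobs, down from 1 × 2 × 8 = 16 before this change.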
@@ -61,6 +62,8 @@ jobs:
       - name: Setup containers for network testing
         run: |
+          set -exo pipefail
+
           cd demo
           ./prepare-devnet.sh
           docker compose up -d cardano-node
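
The added set -exo pipefail makes the multi-line run script fail loudly: -e aborts on the first failing command, -x echoes each command into the job log, and -o pipefail propagates a failure from any stage of a pipeline instead of only the last one. A two-line illustration of the pipefail part:

    set -o pipefail
    false | true   # pipeline status is 1 with pipefail set; 0 without it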
@@ -73,9 +76,10 @@ jobs:
             --node-socket devnet/node.socket \
             --cardano-signing-key devnet/credentials/faucet.sk)
-          echo $HYDRA_SCRIPTS_TX_ID >> .env
+          echo "HYDRA_SCRIPTS_TX_ID=$HYDRA_SCRIPTS_TX_ID" > .env
+

-          nix run .#cardano-cli query protocol-parameters \
+          nix run .#cardano-cli -- query protocol-parameters \
             --testnet-magic 42 \
             --socket-path devnet/node.socket \
             --out-file /dev/stdout \
             | jq ".txFeeFixed = 0 | .txFeePerByte = 0 | .executionUnitPrices.priceMemory = 0 | .executionUnitPrices.priceSteps = 0" \
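
Two fixes in this hunk: the .env file is now written with > (truncate or create) rather than >> (append), and it now contains a proper HYDRA_SCRIPTS_TX_ID=... key=value line instead of a bare value. The added -- matters because nix run only forwards arguments placed after the separator to the program being run; without it, nix tries to parse the trailing words itself. Schematically:

    nix run .#cardano-cli query protocol-parameters    # "query ..." parsed by nix
    nix run .#cardano-cli -- query protocol-parameters # forwarded to cardano-cli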
@@ -103,14 +107,6 @@ jobs:
           limit-access-to-actor: true

       - name: Run pumba and the benchmarks
-        # Note: We're going to allow everything to fail. In the job on GitHub,
-        # we will be able to see which ones _did_, in fact, fail. Originally,
-        # we were keeping track of our expectations with 'include' and
-        # 'exclude' directives here, but I think it's best to leave those out,
-        # as some of the tests (say 5%) fail, and overall the conditions of
-        # failure depend on the scaling factor, the peers, etc, and it becomes
-        # too complicated to track here.
-        continue-on-error: true
         run: |
           # Extract inputs with defaults for non-workflow_dispatch events
           percent="${{ matrix.netem_loss }}"
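
Dropping continue-on-error: true is the flip side of trimming the matrix: with it set, a failing benchmark step still left the job green, so regressions in the expected-to-pass combinations were invisible (the linked actions/runner#2347 discusses the lack of a better status for such soft-failing jobs). A minimal before/after sketch (the run command is a hypothetical placeholder, not the real script):

    # before: failures were tolerated, so the check stayed green regardless
    - name: Run pumba and the benchmarks
      continue-on-error: true
      run: ./run-benchmarks.sh   # hypothetical placeholder

    # after: the step must succeed, so every remaining matrix job is a real signal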
