acceptance: bazelify remaining tests, cleanup python framework (#4183)
* bazelify `acceptance/trc_update`

* remove `acceptance/reconnecting`

  This was a false positive that cannot be fixed easily.

  The test tried to restart the dispatcher for the control service, to check that
  reconnecting to the dispatcher works. Unfortunately, with the docker-compose
  file created by topogen, the control service container uses the dispatcher's
  network context (`network_mode: service:scion_disp_cs_...`; see the sketch
  after this list). In this configuration, network access never recovers after
  the referenced container goes away. This test cannot be made to work without
  changing the topogen docker-compose setup, which does not seem to be worth
  the trouble.

  The test had been passing because it used a bad regex that failed to match
  the dispatchers it tried to restart.

  This test could be replaced with a more focused integration test that checks
  the interaction of the reconnector logic with a restarted dispatcher.

* remove acceptance bash framework, it's all bazel now

* simplification and cleanup pass over the acceptance/common python framework
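
As an illustration of the docker-compose configuration described in the second bullet, here is a hypothetical sketch of the relevant topogen output (service and image names are made up; only the form of the `network_mode` value is taken from the message above):

```yaml
services:
  scion_disp_cs1_ff00_0_110:    # hypothetical dispatcher service name
    image: scion/dispatcher
  scion_cs1_ff00_0_110_1:       # hypothetical control service name
    image: scion/control
    # The control service runs inside the dispatcher's network namespace,
    # so its network access does not recover once that container is
    # removed and recreated.
    network_mode: service:scion_disp_cs1_ff00_0_110
```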
matzf authored May 31, 2022
1 parent 5cf32a2 commit febf9d6
Showing 32 changed files with 506 additions and 1,339 deletions.
4 changes: 0 additions & 4 deletions .buildkite/hooks/pre-command
@@ -12,10 +12,6 @@ sudo sysctl -w net.core.rmem_max=1048576

echo "--- Setting up bazel environment"

# ACCEPTANCE_ARTIFACTS is used for acceptance tests built with the "old"
# acceptance framework
export ACCEPTANCE_ARTIFACTS=/tmp/test-artifacts

if [ -z ${BAZEL_REMOTE_S3_ACCESS_KEY_ID+x} ]; then
echo "S3 env not set, not starting bazel remote proxy"
exit 0
1 change: 0 additions & 1 deletion .buildkite/pipeline.sh
@@ -18,4 +18,3 @@ export PARALLELISM=1

cat .buildkite/pipeline.yml
gen_bazel_test_steps //acceptance
gen_acceptance ./acceptance
45 changes: 0 additions & 45 deletions .buildkite/pipeline_lib.sh
@@ -1,48 +1,3 @@
# gen_acceptance generates all the acceptance steps in a given directory.
# args:
# -1: the directory in which the acceptance tests are.
# -2: the tests which don't need any setup. (default: none)
# -3: tests to skip (default: none)
gen_acceptance() {
local accept_dir=${1}
local no_setup_tests=${2:-""}
local skipped_tests=${3:-""}
for test in "$accept_dir"/*_acceptance; do
name="$(basename ${test%_acceptance})"
echo " - label: \"AT: $name\""
echo " parallelism: $PARALLELISM"
echo " if: build.message !~ /\[doc\]/"
if [ -n "${SINGLE_TEST}" ]; then
if [ "${SINGLE_TEST}" != "${name}" ]; then
echo " skip: true"
fi
else
if [[ ",${skipped_tests}," = *",${name},"* ]]; then
echo " skip: true"
fi
fi
echo " command:"
if [[ ! "${no_setup_tests}" == *"${name}"* ]]; then
# some tests don't need the global setup, they are just starting a
# (few) docker container(s) and run a bazel test against it. So no
# prebuilding of all docker containers is needed.
echo " - ${accept_dir}/ctl gsetup"
fi
echo " - ${accept_dir}/ctl grun $name"
echo " key: ${name}_acceptance"
echo " env:"
echo " PYTHONPATH: \".\""
echo " ACCEPTANCE_DIR: \"$accept_dir\""
echo " artifact_paths:"
echo " - \"artifacts.out/**/*\""
echo " timeout_in_minutes: 20"
echo " retry:"
echo " automatic:"
echo " - exit_status: -1 # Agent was lost"
echo " - exit_status: 255 # Forced agent shutdown"
done
}

# gen_bazel_test_steps generates steps for bazel tests in the given directory.
# args:
# -1: the bazel directory in which the tests are.
67 changes: 24 additions & 43 deletions acceptance/README.md
@@ -1,63 +1,44 @@
# Acceptance testing framework

To add an acceptance test, create a new `xxx_acceptance` folder in
`/acceptance`, with `xxx` replaced by the name of your test.

The folder must contain a `test` executable, which must support the following arguments:

* `name`, which returns the name of the acceptance test.
* `setup`, which runs the setup portion of the acceptance test. If the return
value of the application is non-zero, the test is aborted.
* `run`, which runs the test itself (including assertions). If the return value
of the function is non-zero, the test is considered to have failed.
* `teardown`, which cleans up after the test. If the return value of the
function is non-zero, the run of the **entire** test suite is aborted.

For an example, see `acceptance/reconnecting_acceptance`.
This directory contains a set of integration tests.
Each test is defined as a bazel test target, with tags `integration` and `exclusive`.
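
For illustration, a minimal target might look roughly like this — a hypothetical sketch using the `topogen_test` macro that appears further down in this diff, assuming the macro applies the tags itself:

```python
# Hypothetical BUILD.bazel sketch; package and topology names are illustrative.
topogen_test(
    name = "test",
    src = "test.py",
    topo = "//topology:tiny4.topo",
)
```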

## Basic Commands

To run all defined tests, use:

```bash
acceptance/run
```

To run only the tests matching a certain regular expression, use:
To run all acceptance tests, execute one of the following (equivalent) commands:

```bash
acceptance/run REGEX
make test-acceptance # or,
bazel test --config=acceptance_all # or,
bazel test --config=integration //acceptance/... //demo/...
```

where `REGEX` is replaced with a regular expression of your choice.

## Manual Testing

To run fine-grained operations for a single test, use one of the following:
Run a subset of the tests by specifying a different list of targets:

```bash
acceptance/ctl setup TESTNAME
acceptance/ctl run TESTNAME
acceptance/ctl teardown TESTNAME
bazel test --config=integration //acceptance/cert_renewal:all //acceptance/trc_update/...
```

This calls the functions in `acceptance/xxx_acceptance/test.sh` directly,
without any prior setup. This also means docker images are **not** rebuilt,
even if application code has changed.
The following flags to `bazel test` can be helpful when running individual tests:

To run the `ctl` commands above, the environment needs to be built first. To do that, run:
- `--test_output=streamed` to display test output to the screen immediately
- `--cache_test_results=no` or `-t-` to re-run tests after a cached success
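
For example, to force a re-run of a single test with its output streamed to the terminal (using the `cert_renewal` target from above), something like the following should work:

```bash
bazel test --config=integration //acceptance/cert_renewal:all \
    --test_output=streamed --cache_test_results=no
```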

```bash
acceptance/ctl gsetup
```

This will also rebuild the docker images, taking new code into account.
## Manual Testing

To run the `setup`, `run` and `teardown` phases of a single test (without gsetup):
Some of the tests are built on a common framework, defined by the bazel rules
`topogen_test` and `raw_test`.
These test cases allow more fine-grained interaction.

```bash
acceptance/ctl grun TESTNAME
# Run topogen and start containers, or other relevant setup
bazel run //<test-package>:<target>_setup
# Run the actual test
bazel run //<test-package>:<target>_run
# ... interact with setup, see state in /tmp/artifacts-scion ...
# Shutdown and cleanup
bazel run //<test-package>:<target>_teardown
```

Note that `acceptance/ctl` will not save artifacts on its own, and all output
is dumped on the console.
See [common/README](common/README.md) for more information about the internal
structure of these tests.
4 changes: 2 additions & 2 deletions acceptance/cert_renewal/BUILD.bazel
@@ -4,8 +4,8 @@ topogen_test(
name = "test",
src = "test.py",
args = [
"--end2end_integration",
"$(location //tools/end2end_integration)",
"--executable",
"end2end_integration:$(location //tools/end2end_integration)",
],
data = ["//tools/end2end_integration"],
topo = "//topology:tiny4.topo",
74 changes: 25 additions & 49 deletions acceptance/cert_renewal/test.py
@@ -23,18 +23,16 @@
import sys
from http import client

from plumbum import cli

from acceptance.common import base
from acceptance.common import docker
from acceptance.common import scion
from tools.topology.scion_addr import ISD_AS
import toml

logger = logging.getLogger(__name__)


class Test(base.TestBase):
class Test(base.TestTopogen):
"""
Test that in a topology with multiple ASes, every AS is capable of
requesting renewed certificates. The test verifies that each AS has loaded
@@ -53,27 +51,11 @@ class Test(base.TestBase):
all databases with cached data, including the path and trust database.
7. Restart control servers and check connectivity again.
"""
end2end = cli.SwitchAttr(
"end2end_integration",
str,
default="./bin/end2end_integration",
help="The end2end_integration binary " +
"(default: ./bin/end2end_integration)",
)

def main(self):
if not self.nested_command:
try:
self.setup()
# Give some time for the topology to start.
time.sleep(10)
self._run()
finally:
self.teardown()

def _run(self):

isd_ases = scion.ASList.load("%s/gen/as_list.yml" %
self.test_state.artifacts).all
self.artifacts).all
cs_configs = self._cs_configs()

logger.info("==> Start renewal process")
@@ -85,17 +67,16 @@ def _run(self):
self._check_key_cert(cs_configs)

logger.info("==> Check connectivity")
subprocess.run(
[self.end2end, "-d", "-outDir", self.test_state.artifacts],
check=True)
end2end = self.get_executable("end2end_integration")["-d", "-outDir", self.artifacts]
end2end.run_fg()

logger.info("==> Shutting down control servers and purging caches")
for container in self.list_containers("scion_sd.*"):
self.test_state.dc("rm", container)
for container in self.list_containers("scion_cs.*"):
self.stop_container(container)
for container in self.dc.list_containers("scion_sd.*"):
self.dc("rm", container)
for container in self.dc.list_containers("scion_cs.*"):
self.dc.stop_container(container)
for cs_config in cs_configs:
files = list((pathlib.Path(self.test_state.artifacts) /
files = list((pathlib.Path(self.artifacts) /
"gen-cache").glob("%s*" % cs_config.stem))
for db_file in files:
db_file.unlink()
@@ -106,9 +87,7 @@ def _run(self):
time.sleep(5)

logger.info("==> Check connectivity")
subprocess.run(
[self.end2end, "-d", "-outDir", self.test_state.artifacts],
check=True)
end2end.run_fg()

logger.info("==> Backup mode")
for isd_as in isd_ases:
@@ -140,22 +119,22 @@ def read_file(filename: str) -> str:
"--trc",
docker_dir / "certs/ISD1-B1-S1.trc",
"--sciond",
self.execute("tester_%s" % isd_as.file_fmt(), "sh", "-c",
"echo $SCION_DAEMON").strip(),
self.execute_tester(isd_as, "sh", "-c",
"echo $SCION_DAEMON").strip(),
*self._local_flags(isd_as),
]

logger.info("Requesting certificate chain renewal: %s" %
chain.relative_to(docker_dir))
logger.info(
self.execute("tester_%s" % isd_as.file_fmt(), "./bin/scion-pki",
"certificate", "renew", *args))
self.execute_tester(isd_as, "./bin/scion-pki",
"certificate", "renew", *args))

logger.info("Verify renewed certificate chain")
verify_out = self.execute("tester_%s" % isd_as.file_fmt(),
"./bin/scion-pki", "certificate", "verify",
chain, "--trc",
"/share/gen/trcs/ISD1-B1-S1.trc")
verify_out = self.execute_tester(isd_as,
"./bin/scion-pki", "certificate", "verify",
chain, "--trc",
"/share/gen/trcs/ISD1-B1-S1.trc")
logger.info(str(verify_out).rstrip("\n"))

renewed_chain = read_file(chain_name)
@@ -221,27 +200,24 @@ def _extract_skid(self, file: pathlib.Path):
return skid

def _rel(self, path: pathlib.Path):
return path.relative_to(pathlib.Path(self.test_state.artifacts))
return path.relative_to(pathlib.Path(self.artifacts))

def _to_as_dir(self, isd_as: ISD_AS) -> pathlib.Path:
return pathlib.Path("%s/gen/AS%s" %
(self.test_state.artifacts, isd_as.as_file_fmt()))
(self.artifacts, isd_as.as_file_fmt()))

def _cs_configs(self) -> List[pathlib.Path]:
return list(
pathlib.Path("%s/gen" %
self.test_state.artifacts).glob("AS*/cs*.toml"))
self.artifacts).glob("AS*/cs*.toml"))

def _local_flags(self, isd_as: ISD_AS) -> List[str]:
return [
"--local",
self.execute("tester_%s" % isd_as.file_fmt(), "sh", "-c",
"echo $SCION_LOCAL_ADDR").strip(),
self.execute_tester(isd_as, "sh", "-c",
"echo $SCION_LOCAL_ADDR").strip(),
]


if __name__ == "__main__":
base.register_commands(Test)
base.TestBase.test_state = base.TestState(scion.SCIONDocker(),
docker.Compose())
Test.run()
base.main(Test)
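
Taken together, the new helpers used above (`base.TestTopogen`, `get_executable`, `self.artifacts`, `base.main`) suggest the following shape for a test in the simplified framework — a hypothetical minimal sketch, not part of this diff:

```python
from acceptance.common import base


class Test(base.TestTopogen):
    """Hypothetical minimal test against a topogen-generated topology."""

    def _run(self):
        # Resolve a binary declared via the BUILD file's
        # "--executable <name>:$(location ...)" argument and run it in the
        # foreground (plumbum-style argument binding, as in the diff above).
        end2end = self.get_executable("end2end_integration")
        end2end["-d", "-outDir", self.artifacts].run_fg()


if __name__ == "__main__":
    base.main(Test)
```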
24 changes: 0 additions & 24 deletions acceptance/color.sh

This file was deleted.
