docs: improve developer doc for testing #48

Merged: 10 commits, merged Oct 12, 2023

tests/README.md: 160 changes (98 additions & 62 deletions)

# Chalk Testing

This directory contains all the basic functionality tests for chalk. Tests are run via `make tests`, which uses `docker compose` to run the `tests` container, which internally uses the `pytest` framework.

While `pytest` can be used to run the tests directly, it is recommended to use the makefile script instead, as that provides a more consistent developer experience.

## Setup

Before running tests, ensure that `docker` and `docker compose` are installed.

> **Contributor:** Can we list dependencies and give an option for how to run this stuff NOT in docker? Obviously some Linux-specific stuff would break, but: (a) the process does slow things down a lot; (b) we probably should run test suites natively on a Mac anyway.
>
> **Contributor Author:** Have added instructions with several caveats.

### Tests Container

Please ensure you have an up-to-date container with:

```sh
docker compose build tests
```

Note that building the `tests` container will set up the `pytest` framework needed to run the tests, but the container will NOT have a copy of the chalk binary to test against. This is discussed in the next section.

### Chalk Binary

Upon starting a test run, the script will first look for an up-to-date chalk binary in the root directory of the repo. If there is no chalk binary, or if the chalk binary is not up to date (i.e., there are local code changes that have not yet been built), the script will rebuild the chalk binary.

WARNING: If there is already a chalk binary in the root directory that the script considers "out of date", that binary will be DELETED. If you want to keep it, move or rename it first.

The test script will always rebuild the binary if it detects a code change in the underlying nim code. However, you will need to build `chalk` manually when switching between `release` and `debug` builds:

1. By default, the testing script will build a `release` binary for chalk, so if you want to test against the `debug` build instead, you must build it yourself.
2. If you have been working with a `debug` build so far, but want to switch to `release`, you will need to rebuild manually, as the test script will not rebuild if there have not been any code changes.

> **Contributor:** I got the manual instructions running on main, but the tests don't really work there:
>
> ========================== 50 failed, 32 passed, 17 skipped, 2 errors in 100.05s (0:01:40)
>
> Some, maybe all of it, is clearly due to an assumption of ELF. I think that's a bit of an issue. But at the very least it needs to be clear in the docs that they are ELF only.
>
> **Contributor Author:** Have added a warning at the top.

The quickest way to manually build a binary with the current local changes is:

1. Build the chalk deps container:

   ```sh
   docker compose build chalk
   ```

   (This step can be skipped if the `chalk` container is up to date.)

2. Compile `chalk`.

   For a release build:

   ```sh
   # root of the repo
   make chalk
   ```

   For a debug build:

   ```sh
   # root of the repo
   make debug
   ```

The second command should drop a `chalk` binary that is usable by the tests. You can also build manually with `nimble build`, but this is not recommended, as it doesn't guarantee that the architecture will be compatible with the `tests` container.

WARNING: Debug builds are very slow, so it is not recommended to run the entire test suite on a debug build.

## Running Tests

All commands given are assumed to be run from the root of the repo.

To run all tests within a test file:

```sh
make tests args="[TESTFILE]"
```

where `TESTFILE` is the path of the test file (ex: `test_command.py`, `test_zip.py`).

Note: the path **MUST** be relative to the `tests/` folder (NOT the repo root).

To run a single test within a test file:

```sh
make tests args="[TESTFILE]::[TESTNAME]"
```

where `TESTFILE` is as above, and `TESTNAME` is the name of the test
within the test file (ex: `test_elf.py::test_virtual_valid`).

To run a single case of a single test:

```sh
# root of the repo
make tests args="[TESTFILE]::[TESTNAME][test-case]"

# for example:
make tests args="test_elf.py::test_virtual_valid[copy_files0]"
```

> **Contributor:** Can we move the example into the code block above? It will render better vs being inlined.

Any arguments passed in through `args` will be passed directly through to the underlying `pytest` call. See the [pytest docs](https://docs.pytest.org/en/7.1.x/how-to/usage.html) for more invocation options.

### Slow Tests

To run slower tests, which are skipped by default, add the `--slow` argument:

```sh
# root of the repo
make tests args="--slow"
```

### Live Logs

By default, logs are only shown for failed tests. To show logs from all tests as they run, add the `--logs` argument:

```sh
make tests args="--logs"
```

### Parallel Tests

To run tests in parallel, add the `-nauto` argument, which will run tests with as many workers as there are CPU cores on the system (i.e. `make tests args="-nauto"`). If you would like, you can also hardcode the number of workers, like `-n4`.

Note that parallel tests do not work with various other pytest flags, such as `--pdb`.

## Debugging a Failed Test

### PDB

The simplest approach is to have `pytest` enter a debugger on test failure. To do that, run the test(s) with the `--pdb` flag:

```sh
make tests args="[TESTFILE]::[TESTNAME] --pdb"
```

Alternatively, you can add `breakpoint()` before the failing assertion and manually invoke the single test.
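
For instance (a hypothetical, self-contained example, not taken from the chalk test suite), a `breakpoint()` placed before the assertion drops you into `pdb` when the test runs:

```python
# test_example.py — hypothetical minimal test showing breakpoint() placement
def add(a: int, b: int) -> int:
    return a + b


def test_add():
    result = add(2, 2)
    breakpoint()  # pytest pauses here in pdb; inspect `result`, then `c` to continue
    assert result == 4
```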

### Container Shell

You can also drop into a shell in the `tests` container via `docker compose run --entrypoint=sh tests`, and from there run `pytest` manually.

> **Contributor:** Should we add `make tests/sh` or something to facilitate that? I think you need to do that pretty often.
>
> **Contributor Author:** I do do it pretty often; a make command for this would be great! Let's do it in another PR and update the doc there, I'd like to keep this one just docs without code changes.

## Adding a Test

Currently, all tests in the `tests` directory are functional tests for the chalk binary. Any unit tests should NOT be added here -- they should go into the nim source code per nim testing conventions.

All python tests must follow `pytest` conventions to be picked up by the test runner:

- New test files should be added to the `tests/` directory as `test_filename.py`. A new test file should be created to test new functionality, for example if support has been added for a new codec.
- Individual tests should be added to a test file as a new function named `test_functionname`. Each individual test in a test file should test some aspect of that test file's functionality; if the test you want to add doesn't fit, consider whether it would be more appropriate to create a new test file and add it there instead.
- A new test case for an existing test can be added via the pytest `parametrize` fixture, if appropriate, as sketched below.
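
As an illustration of these conventions (the file and function names here are hypothetical, not taken from the chalk test suite):

```python
# tests/test_example.py — hypothetical file following the conventions above
import pytest


def test_basic_functionality():
    assert "chalk" in "chalk testing"


# parametrize generates one test case per argument tuple
@pytest.mark.parametrize("text, expected", [("chalk", 5), ("pytest", 6)])
def test_length(text, expected):
    assert len(text) == expected
```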

### Datafile Location

All new test files should be added to the `tests/` directory, and any test data should be added to the `tests/data` directory.

> **Contributor:** Suggested change: "All new test files should added to the `tests/` directory" → "All new test files should **be** added to the `tests/` directory".
>
> **Contributor Author:** Does vale catch grammar issues also? 🙀
>
> **Contributor:** Some basics, I think, if you enable appropriate styles (they have defaults for Microsoft, Google, and Red Hat: https://vale.sh/generator/). This was just me, though, and I'm no English expert, so I have no idea if it's actually correct. My logic: if it sounds right, it's probably right, but what "right" actually is 🤷

WARNING: Any files (including test files and data files) that are NOT in the `/tests` directory will not be accessible from within the `tests` container. Any data files that need to be in a specific path for testing (ex: config files loaded from `/etc/chalk`) must be copied to the target path as part of test setup, which happens inside the container after startup.
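
For illustration (the config file name is hypothetical, and the copy-to-fixed-path pattern is a sketch of the setup step described above, not the suite's actual helper code):

```python
# Hypothetical sketch — copies a data file from tests/data to a fixed path.
# This runs inside the tests container, so it does not touch the host.
import shutil
from pathlib import Path

DATA_DIR = Path(__file__).parent / "data"  # i.e. tests/data


def test_loads_config_from_etc(tmp_data_dir):
    target = Path("/etc/chalk")
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(DATA_DIR / "sample_config.c4m", target / "sample_config.c4m")
    # ...then invoke chalk, which reads its config from /etc/chalk
```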

### Test Fixtures

Global test fixtures are defined in `conftest.py`. Any test fixture used across multiple test files should be defined in `conftest.py`; any fixture used only in a single test file should be defined in that test file.

More information about fixtures can be found in the [pytest fixture docs](https://docs.pytest.org/en/7.1.x/how-to/fixtures.html).
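
A minimal, hypothetical illustration of that convention:

```python
# tests/conftest.py — hypothetical; fixtures shared across test files go here
import pytest


@pytest.fixture
def sample_artifact(tmp_path):
    # tmp_path is a built-in pytest fixture providing a per-test directory
    path = tmp_path / "artifact.txt"
    path.write_text("hello")
    return path
```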

> **Contributor:** Let's use a consistent version of pytest for all pytest links. Above there are links to 7.x, but here it's 6.x.

The following is a summary of the most commonly used fixtures in chalk testing (a usage sketch follows the list):

- `tmp_data_dir`: creates a temporary data directory for each test that is destroyed at the end of that test. All tests are run from within their own temporary directories, and no test's temporary directory conflicts with any other's. It is recommended that any test that mutates data (ex: chalking a system binary) first copy that data into the `tmp_data_dir` and then act on the copy, so that subsequent or parallel tests don't run into conflicts.
- `copy_files`: copies files into the temporary directory.
- `chalk`: retrieves the chalk binary to be used in testing (the chalk binary at the root of the repository) and loads it with the default testing configuration, which subscribes to console output (ensuring that we get chalk reports to stdout even when chalk is not run in a tty). Note that the scope of this fixture is `session`, so this will be the SAME chalk binary for ALL tests in a single invocation of `make tests`. If your test needs to make any changes to the chalk binary itself, use `chalk_copy` instead.
- `chalk_copy`: makes a copy of the chalk binary, loaded with the default testing configuration, in the test's `tmp_data_dir`. The test will invoke this copy, and it will be removed as part of the test's cleanup afterwards. Any test that makes changes to the chalk binary, such as the ones that change `config` in `test_config.py`, should use this fixture so that the changes don't persist across the remaining tests.
- `chalk_default`: retrieves the chalk binary without loading the default testing configuration.
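
As a rough usage sketch (the fixture names match the list above, but the exact `copy_files` signature and the `insert` method are assumptions, not the verified API):

```python
# Hypothetical sketch of using tmp_data_dir, copy_files, and chalk together.
from pathlib import Path


def test_chalk_copied_binary(tmp_data_dir: Path, copy_files, chalk):
    # copy the file under test into the per-test temp dir so the original
    # on-disk binary is never mutated
    copied = copy_files([Path("/bin/ls")], tmp_data_dir)

    # chalk the copy, not the system binary; parallel tests stay isolated
    chalk.insert(copied[0])
```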

### Running Chalk Within A Test

`tests/chalk` contains `runner.py`, which provides some utility functions for the `chalk` object returned by the fixture, including calling `chalk insert` and `chalk extract` and returning the resulting chalk report in JSON format.

To validate the chalk reports, there are some utility functions provided in `tests/chalk/validate.py`.
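
A hypothetical end-to-end sketch (that `insert`/`extract` return parsed reports is assumed from the description above; the real helpers live in `tests/chalk/runner.py` and `tests/chalk/validate.py`):

```python
# Hypothetical sketch — method names are assumptions based on the prose above.
from pathlib import Path


def test_insert_then_extract(tmp_data_dir: Path, chalk):
    artifact = tmp_data_dir / "hello.sh"
    artifact.write_text("#!/bin/sh\necho hello\n")

    insert_report = chalk.insert(artifact)    # chalk the artifact
    extract_report = chalk.extract(artifact)  # read the chalk mark back out

    # a real test would use the helpers in tests/chalk/validate.py here;
    # this just checks that both operations produced a report
    assert insert_report and extract_report
```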

### Docker

Since chalk supports some docker commands, some tests may need to call docker build/run/push. Note that only `test_docker.py` has a fixture to clean up images and containers afterwards, so if you are adding a test that calls docker in a different file, ensure that the test cleans up after itself (such as by running containers with `--rm`, or calling `docker system prune` afterwards). Otherwise, the test images/containers created will persist on the host until they are manually removed.
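
As a sketch only (plain `subprocess` is assumed here rather than any project helper), a docker-based test outside `test_docker.py` might guarantee its own cleanup like this:

```python
# Hypothetical sketch — builds and runs a throwaway image, then removes it.
import subprocess


def test_docker_build_cleans_up(tmp_data_dir):
    tag = "chalk-test-temp:latest"
    (tmp_data_dir / "Dockerfile").write_text('FROM alpine\nCMD ["true"]\n')
    try:
        subprocess.run(
            ["docker", "build", "-t", tag, str(tmp_data_dir)], check=True
        )
        # --rm removes the container as soon as it exits
        subprocess.run(["docker", "run", "--rm", tag], check=True)
    finally:
        # remove the image so it doesn't persist on the host
        subprocess.run(["docker", "rmi", "-f", tag], check=False)
```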