Golden tests for command line interfaces.

REPLica manages test suites composed of command line calls. It compares the output of the command line to a "golden value": a stored value of the expected outcome of the command line call. If you want a more detailed introduction to golden testing, here is a nice introduction.

The idea comes from the way tests are implemented in Idris2. Its approach is similar to the one proposed by CI/CD tools like GitHub Actions or GitLab CI: a test suite is described in a Dhall configuration file (you can also use JSON; in this case, visit the JSON documentation) that is processed by the tool to generate tests.

Few frameworks are dedicated to CLI testing. None of them, to my knowledge, mixes a structured document to describe the tests with interactive golden value generation to specify the expectations. This approach eases the modification of the CLI output in the early development phases and provides a clear syntax for test development and maintenance.
Other CLI testing frameworks
- Idris2 test package: REPLica's daddy. REPLica was created with the idea to provide a more structured way to write tests (the JSON/Dhall specification) and to develop more functionalities (see the features below).
- Pester: by far more mature than REPLica. Designed for PowerShell, it includes test coverage, test discovery, a complex expectation DSL, and so on. Pester doesn't, however, provide a way to generate expectations from a previous run.
- shelltestrunner: another minimal tool to test CLIs, but without golden value generation or test tags.
Features
- Test tags
- Test dependencies (tests are run only if other tests succeed)
- Check exit status
- Multi-threaded execution
- Run only selected tests/tags/suite
- Expectation language
- Can test standard output, standard error and file content
REPLica is available as a Nix flake. You can either reuse it as an input to your own flakes or use it directly with `nix run github:replicatest/replica`.
Requirements:
- idris2 (v0.6.0);
- git;
- dhall and dhall-to-json.
Idris2 dependencies:
- the `papers` package.
```
# clone repo
git clone git@github.com:ReplicaTest/REPLica.git
cd REPLica
# install replica
make install
# Ensure that `${HOME}/.local/bin` is in your path
# health-check
replica help
```
Create your first test file with `replica new hello.dhall`. This command creates a `hello.dhall` file that contains a sample test:

```
$ replica new hello.dhall
Test file created (Dhall): hello.dhall
$ cat hello.dhall
let Replica = https://raw.githubusercontent.com/ReplicaTest/replica-dhall/v0.1.1/package.dhall

let Prelude = Replica.Prelude
let Test = Replica.Test
let Status = Replica.Status
let Expectation = Replica.Expectation

let hello = Test.Success ::
      { command = "echo \"Hello, World!\""
      , description = Some "This test is a placeholder, you can edit it."
      , spaceSensitive = False
      , stdOut = Expectation :: { consecutive = ["Hello", "World"], end = Some "!" }
      }

let tests : Replica.Type = toMap { hello }

in tests
```
The given test checks that the output of `echo "Hello, World!"` contains `Hello` and `World` consecutively, and ends with an exclamation mark (`!`).
At this stage, `replica` isn't able to process `dhall` files directly: we have to generate a JSON file first and then execute it.

```
$ dhall-to-json --output hello.json --file hello.dhall
$ replica run hello.json
--------------------------------------------------------------------------------
Running tests...
✅ hello
--------------------------------------------------------------------------------
Summary:
✅ (Success): 1 / 1
```
Now, edit the `hello.dhall` file and change the `stdOut` part so that your file looks like this:

```dhall
let Replica = https://raw.githubusercontent.com/ReplicaTest/replica-dhall/v0.1.1/package.dhall

let Prelude = Replica.Prelude
let Test = Replica.Test
let Status = Replica.Status
let Expectation = Replica.Expectation

let hello = Test.Success ::
      { command = "echo \"Hello, World!\""
      , description = Some "This test is a placeholder, you can edit it."
      , spaceSensitive = False
      , stdOut = Replica.Generated True
      }

let tests : Replica.Type = toMap { hello }

in tests
```
Instead of providing an expectation, we now rely on a golden value: a previously saved value of the output of the tested command. Unfortunately, we haven't saved one yet... and thus if we regenerate `hello.json`, `replica run hello.json` now fails:
```
$ dhall-to-json --output hello.json --file hello.dhall
$ replica run hello.json
--------------------------------------------------------------------------------
Running tests...
❌ hello:
[Missing Golden for standard output]
[Unexpected content for standard output]
Error on standard output:
Given:
Hello, World!
--------------------------------------------------------------------------------
Summary:
❌ (Failure): 1 / 1
```
It's totally fine: `replica` has no golden value for this test yet; we need to build one. To do so, we rerun the test in interactive mode: `replica run --interactive hello.json`.
You should now be prompted to set the golden value for the test:

```
$ replica run --interactive hello.json
--------------------------------------------------------------------------------
Running tests...
hello: Golden value mismatch for standard output
Expected: Nothing Found
Given:
Hello, World!
Do you want to set the golden value? [N/y]
```
Answer `y` (or `yes`) and the test should pass.
Now that the golden value is set, we can retry running the suite in non-interactive mode with `replica run hello.json`:

```
$ replica run hello.json
--------------------------------------------------------------------------------
Running tests...
✅ hello
--------------------------------------------------------------------------------
Summary:
✅ (Success): 1 / 1
```
TADA... it works. If you want to see it fail again, you can modify the command in `hello.dhall`.
The main motivations for using Dhall are:
- type safety;
- easing the generation of a set of similar tests.
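For instance, here is a sketch of how a Dhall function can derive several similar tests. The `mkGreeting` helper is ours, not part of replica-dhall, and it assumes that `Replica.Type` is the `Map` produced by `toMap` in the examples above:

```dhall
let Replica = https://raw.githubusercontent.com/ReplicaTest/replica-dhall/v0.1.1/package.dhall

-- Hypothetical helper: builds one test entry per name.
let mkGreeting =
      \(name : Text) ->
        { mapKey = "greet_${name}"
        , mapValue = Replica.Test :: { command = "echo \"Hello, ${name}!\"" }
        }

let tests : Replica.Type =
      [ mkGreeting "Alice", mkGreeting "Bob", mkGreeting "Charlie" ]

in tests
```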
The Quickstart section introduced a first, minimal test:

```dhall
{ hello = Replica.Test :: { command = "echo \"Hello, world!\"" } }
```

We have declared the only mandatory field of a test: `command`. It defines the command that will be tested. REPLica saves the exit code of the command, the standard output, and the standard error. By default, REPLica only checks the output, comparing it to the golden value that is stored in the interactive mode.
The `beforeTest` and `afterTest` fields allow you to prepare and to clean up the test environment.

Warning: the commands in each of them run in separate shells. It means that you can't (at the moment) share variables between `beforeTest`, `command`, and `afterTest`. `command` won't be executed if `beforeTest` failed, and an error will be emitted if a command of `beforeTest` or `afterTest` failed. REPLica distinguishes an error (something went wrong during the execution of a test) from a failure (the test doesn't meet the expectations).

```dhall
{ test_cat = Replica.Test ::
    { beforeTest = ["echo \"test\" > foo.txt"]
    , command = "cat foo.txt"
    , afterTest = ["rm foo.txt"]
    }
}
```
The `require` field ensures that a test will be executed only if the given list of tests succeeds. If one of the required tests fails, the test will be marked as ignored.

```dhall
{ test_first = Replica.Test :: { command = "echo \"Hello, \"" }
, test_then = Replica.Test ::
    { command = "echo \"world!\""
    , require = ["test_first"]
    }
}
```
The `input` field allows you to define inputs for your command, replacing the standard input with its content.

```dhall
{ send_text_to_cat = Replica.Test ::
    { command = "cat"
    , input = Some "hello, world!"
    }
}
```
The `suite` field helps you to organise your tests into suites. The tests are run suite by suite (if there are no cross-suite requirements). Test suites are optional. Tests with no suite belong to a special unnamed suite, which can't be specifically selected or excluded from a run.

```dhall
{ hello = Replica.Test ::
    { command = "echo \"Hello, world!\""
    , suite = Some "hello"
    }
}
```

You can run a specific suite with the `-s` option (`replica run -s hello`) and you can exclude a suite with `-S`.
The `tags` field allows you to select a group of tests in your test suites. Once you've defined tags for your tests, you can decide to run tests that have (or don't have) a given tag thanks to REPLica's command line options.

```dhall
{ hello = Replica.Test ::
    { command = "echo \"Hello, world!\""
    , tags = ["example", "hello"]
    }
}
```

You can then run `replica run -t example your_file.json` to include tests tagged with `example`, or `replica run -T example your_file.json` to exclude them.
You can add a `description` to your test. The description is here for informative purposes and can be displayed with `replica info`.
If `pending` is set to `true`, the corresponding test will be ignored.
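For example, a minimal sketch in the style of the examples above:

```dhall
-- This test is skipped (reported as ignored) because pending is True.
{ not_ready = Replica.Test ::
    { command = "echo \"work in progress\""
    , pending = True
    }
}
```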
The `status` field allows you to verify the exit code of your command. You can either set the value to a boolean, to check whether the command succeeded (`true`) or failed (`false`), or to a natural, to check that the exit code is exactly the one provided.

```dhall
{ success1 = Replica.Test :: { command = "true", status = Replica.Status.Success }
, success2 = Replica.Test :: { command = "true", status = Replica.Status.Exactly 0 }
, success3 = Replica.Test :: { command = "false", status = Replica.Status.Failure }
, success4 = Replica.Test :: { command = "false", status = Replica.Status.Exactly 1 }
, failure = Replica.Test :: { command = "false", status = Replica.Status.Exactly 2 }
}
```
If the `spaceSensitive` field is set to `false`, all text comparisons performed in this test are space insensitive: the given and expected contents are "normalized" before the comparison: consecutive space-like characters are replaced by a single space and consecutive new lines are replaced by a single new line.
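A sketch, reusing the fields shown in the quickstart; with normalization, the run of spaces produced by the command compares equal to the single space in the expectation:

```dhall
-- "Hello,    World!" is normalized to "Hello, World!" before comparison.
{ spaces = Replica.Test ::
    { command = "printf \"Hello,    World!\""
    , spaceSensitive = False
    , stdOut = Replica.Expectation.Exact "Hello, World!"
    }
}
```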
By default, REPLica compares the standard output (`stdOut`) to a (previously generated) golden value, and totally ignores the standard error (`stdErr`). The `stdOut` and `stdErr` fields allow you to modify this behaviour. The possible values for these fields are described in the expectations section.
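For instance, a sketch that checks the error stream instead of the output, assuming `stdErr` accepts the same expectation values as `stdOut` (the smart constructors are described below):

```dhall
-- Ignore stdout; require that stderr contains "oops".
{ warn = Replica.Test ::
    { command = "echo \"oops\" >&2"
    , stdOut = Replica.Expectation.Ignored
    , stdErr = Replica.Expectation.Contains ["oops"]
    }
}
```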
Aside from the standard output and error, REPLica can also check the content of files, as it can be useful to check the result of a command. To do so, we can use the `files` field, which expects an object where keys must be relative paths to the files to check, and values are expectations that define what is expected for each file.
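A sketch of what this could look like, assuming `files` maps each path to an expectation (the exact Dhall encoding may differ; see the tests specification):

```dhall
{ write_file = Replica.Test ::
    { command = "echo \"content\" > out.txt"
    , stdOut = Replica.Expectation.Ignored
    -- The golden value for out.txt is generated in interactive mode.
    , files = toMap { `out.txt` = Replica.Expectation.Golden }
    , afterTest = ["rm out.txt"]
    }
}
```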
For each type of expectation, we give the JSON and the Dhall version. As we use a union type, the Dhall version is a bit more verbose, but smart constructors ease the pain.
The simplest expectation is a golden value. A test expecting a golden value will fail as long as you don't set this golden value using the interactive mode (`replica run --interactive`).

```dhall
{ hello = Replica.Test ::
    { command = "echo \"Hello, world!\""
    , stdOut = Replica.Expectation.Golden
    }
}
```
If you set a string as an expectation, the content of the corresponding source is expected to be exactly this string.

```dhall
{ hello = Replica.Test ::
    { command = "echo \"Hello, world!\""
    , stdOut = Replica.Expectation.Exact "Hello, world!"
    }
}
```
If you set a list of strings as a value, the source must contain all the values of the list, in any order.

```dhall
{ hello = Replica.Minimal ::
    { command = "echo \"Hello, world!\""
    , stdOut = Replica.Expectation.Contains ["world", "hello"]
    }
}
```
Complex expectations are a solution that allows you to compose the expectations given before, and that enables a few other types of expectations. A complex expectation is an object where the following fields are considered:

- `generated`: true or false, depending on whether you want to use a golden value or not.
- `exact`: if set, the exact expectation for this source.
- `start`: if set, the source must start with this string.
- `end`: if set, the source must end with this string.
- `contains`: a list of strings that must be found in the source.
- `consecutive`: a list of strings that must be found in this order (optionally with some text in between) in the source.

```dhall
{ hello = Replica.Minimal ::
    { command = "echo \"Hello, world!\""
    , stdOut = Replica.Expectation ::
        { generated = True
        , consecutive = ["hello", "world"]
        , end = Some "!"
        }
    }
}
```
By default, `stdOut` expects a golden value and `stdErr` is not checked. If you want, you can ignore `stdOut` explicitly:

```dhall
{ hello = Replica.Test ::
    { command = "echo \"Hello, world!\""
    , stdOut = Replica.Expectation.Ignored
    }
}
```
REPLica is tested with itself; you can check the test file to get an overview of the possibilities. The documentation folder also contains useful pieces of information:
- The tests specification in JSON and Dhall.
- A description of the tests execution workplan.
You can also explore the tool options with `replica help`.

The utils folder contains a few helpers to ease the integration of replica with `git` and `Make`.
I keep track of the things I want to implement in a dedicated project. If you think that something is missing, don't hesitate to submit a feature request.
PRs are welcome; you can take a look at the contribution guidelines. If you use the tool, I'd be happy to know about it, drop me a line on twitter.