This repository has been archived by the owner on Apr 29, 2020. It is now read-only.

Working on #245, major DSL features documented and given minimal examples.
FrankPetrilli committed Jan 12, 2018
1 parent b8ca6df commit cad112c
Showing 4 changed files with 129 additions and 0 deletions.
14 changes: 14 additions & 0 deletions docs/selectors.md
@@ -0,0 +1,14 @@
# Selectors

## Label Selectors in Kubernetes for test run selection

Sometimes, we want to run a particular test only against certain subsets of pods in the Kubernetes cluster (as we may be running multiple tests simultaneously).

For this purpose, one can make use of the `selector` attribute under a test definition. It follows the Kubernetes label format of `key=value`. One of the most common keys is `run`, which indicates a grouping of pods that belong to a particular run group.

In this way, one can target tests at only go-ipfs or js-ipfs implementations, only ipfs-cluster deployments, only low-bandwidth pods, and so on.

```
config:
  selector: run=go-ipfs-stress
```
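The selector matches labels attached to the pods' metadata. As an illustration (a sketch, not taken from this repository's manifests), a pod targeted by the selector above would carry a label like:

```
metadata:
  labels:
    run: go-ipfs-stress
```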
17 changes: 17 additions & 0 deletions docs/test_config.md
@@ -0,0 +1,17 @@
# Test config

Sometimes, we want to scale our tests up or down to match the size of a given deployment.

To this end, we can make use of config.yml to define constants for use in our test definitions. For an example of how to use this in a real test, see `config-writer.sh`, which generates the config.yml file from the user's input to scale the tests automatically.

```
params:
  N: 5
```

Then, in the test definition:

```
config:
  nodes: {{N}}
```
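As a rough sketch of the idea behind `config-writer.sh` (the real script's prompts and keys may differ), a writer might take the node count as an argument and emit config.yml:

```shell
#!/bin/sh
# Hypothetical sketch of a config writer: takes the node count as the
# first argument (defaulting to 5) and writes it into config.yml.
N="${1:-5}"
cat > config.yml <<EOF
params:
  N: ${N}
EOF
```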
22 changes: 22 additions & 0 deletions docs/timeouts.md
@@ -0,0 +1,22 @@
# Timeouts

## Timeout for a step

When running a command using IPFS, a failure to resolve often manifests as a long hang rather than an outright error, as IPFS keeps searching for a way to connect you to the data you requested.

Sometimes, we expect this to occur, and we want the test to report the failed resolution quickly rather than wait out the full search.

To this end, one can make use of the `timeout` feature together with the `timeouts` parameter under the `expected` section (see validation.md). If a timeout is reached, the timeouts count increments, and we can check against it to confirm our suspicion that the command failed to resolve.

The parameter's value is the delay after which we consider the command to have timed out, expressed in **whole seconds**.

```
- name: Cat file on node 2
  on_node: 2
  inputs:
    - FILE
    - HASH
  cmd: ipfs cat $HASH
  timeout: 2
```
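Paired with the step above, the `expected` block (described in validation.md) can then assert that the timeout occurred. A sketch, assuming a single run and that timeouts are counted separately from failures:

```
expected:
  successes: 0
  failures: 0
  timeouts: 1
```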
76 changes: 76 additions & 0 deletions docs/validation.md
@@ -0,0 +1,76 @@
# Validation & Assertions

## Inputs

Within a 'step', the `inputs` parameter makes each listed name available inside the `cmd` tag as a standard `bash` variable (`$VARNAME`), which makes it possible to chain data from one step to the next.

The format is simply a list of names.

```
- name: Cat added file
  on_node: 1
  inputs:
    - HASH
  cmd: ipfs cat $HASH
```

## Outputs

Within a 'step', the `outputs` parameter sends the output of the command on a given line to a given name. This can then be verified later to validate the output of a particular command.

The format is: for a given `line` of output, save its contents to the variable named in `save_to`.

```
steps:
  - name: Add file
    on_node: 1
    cmd: head -c {{FILE_SIZE}} /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
    outputs:
      - line: 0
        save_to: FILE
      - line: 1
        save_to: HASH
```

## Assertions

Once we're ready to validate the output of a particular command (often after chaining data from one node to another), we can make use of the `assertions` statement to create a check which `kubernetes-ipfs` will evaluate.

As of the current version, assertions support only the `should_be_equal_to` parameter, which states that the given output line should be equal to the value of the named variable.

If it is not, `kubernetes-ipfs` adds 1 to the 'failures' count. If it is, `kubernetes-ipfs` adds 1 to the 'successes' count.

The format is as follows, using one of the 'inputs' passed to the step.

```
- name: Cat added file
  on_node: {{ON_NODE}}
  inputs:
    - FILE
    - HASH
  cmd: ipfs cat $HASH
  assertions:
    - line: 0
      should_be_equal_to: FILE
```

## Expects

Once the successes and failures have been tallied up across runs for a particular test, the observed values are compared against the `expected` block.
If the observed values match, `kubernetes-ipfs` reports that expectations were met and returns a 0 exit code. If they don't match, it returns
a non-zero exit code and states that the expectations were not met. This allows users to short-circuit the tests, immediately failing in the event of a test failure,
and to check the results of a test run from external software.

Within the test definition:

```
expected:
  successes: 10
  failures: 0
  timeouts: 0
```

In this case, if the test fails once, we will return a non-zero exit code and fail the test.
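Because the exit code reflects whether expectations were met, external tooling can branch on it. A minimal sketch of a CI-style wrapper (the exact `kubernetes-ipfs` invocation in the usage comment is an assumption; substitute the real command for your setup):

```shell
# Hypothetical CI helper: runs whatever command it is given and reports
# whether the test runner's expectations were met, based on its exit code.
run_and_report() {
    if "$@"; then
        echo "expectations met"
    else
        echo "expectations NOT met" >&2
        return 1
    fi
}

# Example (assumed invocation): run_and_report ./kubernetes-ipfs my-test.yml
```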

1 comment on commit cad112c

@FrankPetrilli
Collaborator Author


Mistakenly referenced wrong project's issue number, see
ipfs-cluster/ipfs-cluster#245
