This repository has been archived by the owner on Apr 29, 2020. It is now read-only.

Fpetrilli/improve docs #40

Open · wants to merge 5 commits into base: master

Changes from 4 commits
4 changes: 2 additions & 2 deletions docs/iteration-io.md
@@ -1,4 +1,4 @@
###Iteration and Array IO Specification
### Iteration and Array IO Specification

This DSL feature specifies that a command should be run for a given number of
iterations. These iterations can be bounded by an integer or correspond to the
@@ -92,4 +92,4 @@ Ex3 -- using default %s index to reference node this is being run on
line: 0
should_be_equal_to: "0"

```
```
2 changes: 1 addition & 1 deletion docs/selection.md
@@ -1,4 +1,4 @@
###Selection Specification
### Selection Specification

The DSL selection feature chooses nodes on which to run commands.
In addition to the on_node/end_node option each step can alternatively take
16 changes: 16 additions & 0 deletions docs/selectors.md
@@ -0,0 +1,16 @@
# Selectors

Contributor:

Nice addition. Looking at this alongside the other docs, it's becoming clear the file organization I began here is not ideal. Right now it's unclear which files are documenting what. This documentation has a different context from the docs covering DSL features, and our docs should convey that. I'm thinking either one large file with broader subsections (kubernetes-ipfs commands, the DSL, Kubernetes config, etc.) or perhaps separate files for each broader subsection.

Contributor:

On that note it would be good to have more background on the kubernetes config so that this section will make sense to people without a kubernetes background.

Contributor:

@FrankPetrilli both of these comments should be addressed before merging:

  1. Consolidate docs into one file or a few logically laid out files to make documentation more approachable
  2. Preface this specific section with background on the kubernetes config file that the selector key value pair goes into. Could be as simple as a link to existing kubernetes docs or a few concise sentences following a header.

Collaborator (Author):

Okay, w.r.t. number 2, are you asking for an explanation of where labels go in a kubernetes yaml definition and their purpose?

Contributor:

That sounds along the right track. I think the best answer is that I don't have the Kubernetes context that you do to get much more specific. The labels exist in a file that I don't know much about. It would be great if the docs introduced me to this section without assuming I knew what that file is called, what it is used for, and what its format is. IMO the best way to do this is with the structure suggested in (1), including this section as a subsection of "the kubernetes yaml definition"

Contributor:

or a name you think fits better :)


## Label Selectors in Kubernetes for test run selection

Sometimes, we want to run a particular test only against certain subsets of pods in the Kubernetes cluster (as we may be running multiple simultaneous tests together).

For this purpose, one can make use of the `selector` attribute under a test definition. It follows the Kubernetes `key=value` label format; one of the most common keys is `run`, which indicates a grouping of pods that are part of a particular run group.

Contributor:

A good example of how the above suggestion will improve readability: by spending some time explaining the kubernetes config format before diving into the specifics of the selector section, you will not need to make a digression explaining the general format and its common keys in the middle of explaining the selector attribute.


For more information on the way Kubernetes uses labels and selectors, see [the Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).

In this way, one can target tests at only go-ipfs or js-ipfs implementations, only ipfs-cluster deployments, only low-bandwidth pods, and so on.

```
config:
  selector: run=go-ipfs-stress
```
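
The label named by the selector lives in the Kubernetes definition of the pods under test. As background for readers without a Kubernetes background, here is a minimal sketch of a pod definition carrying such a label (the pod name and image are hypothetical, for illustration only):

```
apiVersion: v1
kind: Pod
metadata:
  name: go-ipfs-stress-0    # hypothetical pod name
  labels:
    run: go-ipfs-stress     # matched by selector: run=go-ipfs-stress
spec:
  containers:
  - name: ipfs
    image: ipfs/go-ipfs     # hypothetical image
```

Any pod whose `labels` include `run: go-ipfs-stress` is selected by the snippet above; all other pods are left alone.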
24 changes: 24 additions & 0 deletions docs/test_config.md
@@ -0,0 +1,24 @@
# Test config

Sometimes, we want to scale our tests up or down to match the size of a given architecture.

Contributor:

While you have added some text incorporating the suggested change, this line should be updated too. It is misleading as the opener of the test config section, as the test configuration lets a user control more than just variables that track the size of an architecture (ex: expected successes, number of iterations).


To this end, we can use `config.yml` to define constants we wish to make use of. For an example of how to use this in a real test, see `config-writer.sh`, which generates the `config.yml` file from the user's input to automatically scale the tests, both by scaling the number of nodes the tests run on and by scaling the number of pins made during a test.

In general, the config is available for substituting variables across tests.

To use the config, the user can do one of two things:

- Name the file `config.yml` and place it in the same directory as the tests, where it will be found automatically.
- Specify the path to the config file with the `--config` flag.

Contributor: It is a bit unclear that this statement goes in a config file. You should briefly hit on the config file's format and how it is activated (i.e. a flag specifying the path, or placing it in the same folder as the tests) too.

Contributor: This will be more readable if you specify that this snippet belongs in config.yml

In `config.yml`:

```
params:
  N: 5
```

Then, in the test definition:

```
config:
  nodes: {{N}}
```
22 changes: 22 additions & 0 deletions docs/timeouts.md
@@ -0,0 +1,22 @@
# Timeouts

## Timeout for a step

When running a command that uses IPFS, a failure to resolve often manifests as a long hang rather than an outright error, as IPFS keeps searching for a way to connect you to the data you requested.

Sometimes we expect this to occur, and want the test to register that the data failed to resolve without waiting indefinitely.

To this end, one can make use of the `timeout` feature together with the `timeouts` parameter under the `expected` section (see validation.md). If the timeout is reached, the `timeouts` count increments, and we can check against that value to confirm our suspicion that the command failed.

The format of the parameter is the delay after which we consider the command to have timed out, expressed in **whole seconds**.

```
- name: Cat file on node 2
  on_node: 2
  inputs:
  - FILE
  - HASH
  cmd: ipfs cat $HASH
  timeout: 2
```
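
To then assert that the timeout actually fired, check the `timeouts` counter in the test's `expected` block (see validation.md). A minimal sketch, assuming a single iteration in which the `ipfs cat` above hangs:

```
expected:
  successes: 0
  failures: 0
  timeouts: 1
```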
78 changes: 78 additions & 0 deletions docs/validation.md
@@ -0,0 +1,78 @@
# Validation & Assertions

## Inputs

Within the confines of a 'step', the `inputs` parameter lets the user reference a previously saved name inside the `cmd` tag as a standard `bash` variable (`$VARNAME`), which makes it possible to chain values from one step to the next.

The format is simply to specify a name.

```
- name: Cat added file
  on_node: 1
  inputs:
  - HASH
  cmd: ipfs cat $HASH
```

## Outputs

Within the confines of a 'step', adding the `outputs` parameter allows the user to save the output of the command on a given line to a given name.

Contributor:

small typo: "with in" -> "within"

This can then be verified later to validate the output of a particular command.

The format is: for a given `line`, save the output to the variable named by `save_to`.

Contributor:

You should try to find a way to incorporate the append_to feature in this discussion too, perhaps referencing the iteration section.


For documentation of the `append_to` feature, see `iteration-io.md`.

```
steps:
- name: Add file
  on_node: 1
  cmd: head -c {{FILE_SIZE}} /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
  outputs:
  - line: 0
    save_to: FILE
  - line: 1
    save_to: HASH
```
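
As a quick sketch of the `append_to` variant documented in `iteration-io.md`, which accumulates a line's output across iterations rather than overwriting it (the exact format here is assumed to mirror `save_to`):

```
outputs:
- line: 0
  append_to: HASHES
```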

## Assertions

Once we're ready to validate the output of a particular command (often after chaining data from one node to another), we can make use of the assertions statement
to create a check which `kubernetes-ipfs` will validate.

Assertions as of the current version only have the parameter `should_be_equal_to`, which states that the given line should be equal to the provided output.

If it is not, `kubernetes-ipfs` adds 1 to the 'failures' count. If it is, `kubernetes-ipfs` adds 1 to the 'successes' count.

The format is as follows, using one of the 'inputs' passed to the step.

```
name: Cat added file
on_node: {{ON_NODE}}
inputs:
- FILE
- HASH
cmd: ipfs cat $HASH
assertions:
- line: 0
  should_be_equal_to: FILE
```

## Expects

Once the successes and failures have been tallied up across runs for a particular test, the observed values are compared against the `expected` block.
If they match, `kubernetes-ipfs` reports that expectations were met and returns a 0 exit code. If they don't, it returns a non-zero exit code
and states that the expectations were not met. This allows users to short-circuit the tests (at the full-test level) and immediately fail
in the event of a test failure, and to check the results of a test run from external software.

Within the test definition:

```
expected:
  successes: 10
  failures: 0
  timeouts: 0
```

In this case, if the test fails once, we will return a non-zero exit code and fail the test.