Fpetrilli/improve docs #40
base: master
Changes from all commits: cad112c, 3461f41, 0b4a7ce, 01a7d90, 1d62ded

@@ -0,0 +1,16 @@
# Selectors

## Label Selectors in Kubernetes for test run selection

Sometimes, we want to run a particular test against only a certain subset of the pods in the Kubernetes cluster (for example, because we may be running multiple tests simultaneously).

For this purpose, one can make use of the `selector` attribute under a test definition. It follows the Kubernetes label format of `key=value`; one of the most common keys is `run`, which indicates a grouping of pods that are part of a particular run group.

> Review comment: A good example of how the above suggestion will improve readability: by spending some time explaining the Kubernetes config format before diving into the specifics of the selector section, you will not need to make a digression explaining the general format and its common keys in the middle of explaining the selector attribute.

For more information on the way Kubernetes uses labels and selectors, refer to [the Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).

In this way, one can target tests at only go-ipfs or js-ipfs implementations, only ipfs-cluster deployments, only low-bandwidth pods, and so on.

```
config:
  selector: run=go-ipfs-stress
```
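
For readers without a Kubernetes background: the `key=value` pair the selector matches comes from labels set on the pods themselves in the Kubernetes YAML definition. The following is only a minimal sketch of where such a label lives (standard Kubernetes metadata, not part of kubernetes-ipfs; the Deployment name and image here are hypothetical):

```
# Hypothetical Deployment excerpt; pods created from this template carry the
# label run=go-ipfs-stress, so the test selector above will match them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-ipfs-stress          # illustrative name only
spec:
  selector:
    matchLabels:
      run: go-ipfs-stress
  template:
    metadata:
      labels:
        run: go-ipfs-stress     # the key=value pair matched by `selector: run=go-ipfs-stress`
    spec:
      containers:
        - name: ipfs
          image: ipfs/go-ipfs   # illustrative image reference
```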

@@ -0,0 +1,24 @@
# Test config

Sometimes, we want to scale our tests up or down to match the size of a given architecture, or to adjust other parameters such as the expected number of successes or the number of iterations.

> Review comment: While you have added some text incorporating the suggested change, this line should be updated too. It is misleading as the opener of the test config section, as it's not just variables that track with the size of an architecture that the test configuration allows a user to control (ex: expected successes, number of iterations).

To this end, we can use `config.yml` to define constants we wish to make use of. For an example of how to use this in a real test, see `config-writer.sh`, which creates the `config.yml` file from the user's input to automatically scale the tests, both by scaling the number of nodes the tests run on and by scaling the number of pins we make during a test.

In general, the config is available for substituting variables across tests.

To use the config, the user can do one of two things:

- Name the config file `config.yml` and place it in the same directory as the tests; it will then be found and used automatically.
- Specify the path to the config using the `--config` flag.

> Review comment: This will be more readable if you specify that this snippet belongs in config.yml.

In `config.yml`:

```
params:
  N: 5
```

Then, in the test definition:

```
config:
  nodes: {{N}}
```
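
For illustration, assuming the config substitution is a straightforward textual replacement of the `{{N}}` placeholder, the snippet above ends up being read as if it were written:

```
config:
  nodes: 5
```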

@@ -0,0 +1,22 @@
# Timeouts

## Timeout for a step

When a command using IPFS fails to resolve, the result is often a long hang rather than an outright failure, as IPFS keeps searching for a way to connect you to the data you requested.

Sometimes we expect this to occur, and we want to see the test register the failure to resolve the data without waiting out the full hang.

To this end, one can make use of the `timeout` feature, together with the `timeouts` parameter under the `expected` section (see validation.md). If a timeout is hit, the timeouts value increments, and we can check against that to confirm our suspicion that the test has failed.

The parameter's value is the delay after which we consider the command to have timed out, expressed in **whole seconds**.

```
- name: Cat file on node 2
  on_node: 2
  inputs:
    - FILE
    - HASH
  cmd: ipfs cat $HASH
  timeout: 2
```
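
As a sketch of how this ties into the `expected` block from validation.md (the exact counts here are an assumption, since they depend on how many iterations run and on the test's other steps), a test that expects the step above to time out can assert on the counter:

```
expected:
  successes: 0   # assumed: no other steps expected to succeed in this sketch
  failures: 0
  timeouts: 1    # the `ipfs cat` above is expected to hit its 2-second timeout once
```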

@@ -0,0 +1,77 @@
# Validation & Assertions

## Inputs

Within the confines of a 'step', adding the `inputs` parameter allows the user to reference each listed name within the `cmd` tag as a standard `bash` variable (`$VARNAME`), which makes it possible to chain values from one step to the next.

The format is simply to specify a name.

```
- name: Cat added file
  on_node: 1
  inputs:
    - HASH
  cmd: ipfs cat $HASH
```

## Outputs

Within the confines of a 'step', adding the `outputs` parameter allows the user to save the output from the command on a given line to a given name.

> Review comment: small typo: "with in" -> "within"

This can then be verified later to validate the output of a particular command.

The format is: for a given `line`, save that line's output to the variable named in `save_to`.

> Review comment: You should try to find a way to incorporate the …

For documentation of the `append_to` feature, see `iteration-io.md`.

```
steps:
  - name: Add file
    on_node: 1
    cmd: head -c {{FILE_SIZE}} /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
    outputs:
      - line: 0
        save_to: FILE
      - line: 1
        save_to: HASH
```
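
To show the chaining mentioned under Inputs, here is a sketch combining the two snippets above (the second step is assembled from the Inputs example for illustration, not taken from a real test file): the hash saved by the first step is consumed as an input by a later step on another node.

```
steps:
  - name: Add file
    on_node: 1
    cmd: head -c {{FILE_SIZE}} /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
    outputs:
      - line: 0
        save_to: FILE
      - line: 1
        save_to: HASH
  - name: Cat added file
    on_node: 2
    inputs:
      - FILE
      - HASH
    cmd: ipfs cat $HASH
```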

## Assertions

Once we're ready to validate the output of a particular command (often after chaining data from one node to another), we can make use of the `assertions` statement to create a check which `kubernetes-ipfs` will validate.

As of the current version, assertions only have the parameter `should_be_equal_to`, which states that the given line should be equal to the provided output.

If it is not, `kubernetes-ipfs` adds 1 to the 'failures' count. If it is, `kubernetes-ipfs` adds 1 to the 'successes' count.

The format is as follows, using one of the 'inputs' passed to the step.

```
- name: Cat added file
  on_node: {{ON_NODE}}
  inputs:
    - FILE
    - HASH
  cmd: ipfs cat $HASH
  assertions:
    - line: 0
      should_be_equal_to: FILE
```

## Expects

Once the successes and failures have been tallied up across runs for a particular test, the observed values are compared against the `expected` block. If the observed values match, `kubernetes-ipfs` reports that expectations were met and returns a 0 exit code. If they don't, it returns a non-zero exit code and states that the expectations were not met.

Within the test definition:

```
expected:
  successes: 10
  failures: 0
  timeouts: 0
```

In this case, if the test fails once, we will return a non-zero exit code and fail the test.
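
Putting the pieces together, here is a sketch of a complete test definition assembled from the snippets in these docs. Only keys shown in these docs are used; the expected counts assume the steps run ten times, which is an assumption, since the iteration settings are not covered on this page.

```
config:
  nodes: 2
  selector: run=go-ipfs-stress
steps:
  - name: Add file
    on_node: 1
    cmd: head -c {{FILE_SIZE}} /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
    outputs:
      - line: 0
        save_to: FILE
      - line: 1
        save_to: HASH
  - name: Cat added file
    on_node: 2
    inputs:
      - FILE
      - HASH
    cmd: ipfs cat $HASH
    timeout: 2
    assertions:
      - line: 0
        should_be_equal_to: FILE
expected:
  successes: 10   # assumes ten successful cat runs; adjust to the actual iteration count
  failures: 0
  timeouts: 0
```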

> Review comment: Nice addition. Looking at this alongside other docs, it's becoming clear the file organization I began here is not ideal. Right now it's unclear what files are documenting what. This documentation has a different context from that documenting DSL features, and our docs should convey that. I'm thinking either one large file with broader subsections (kubernetes-ipfs commands, The DSL, Kubernetes Config, etc.) or perhaps separate files for each broader subsection.

> Review comment: On that note, it would be good to have more background on the kubernetes config so that this section will make sense to people without a kubernetes background.

> Review comment: @FrankPetrilli both of these comments should be addressed before merging:

> Review comment: Okay, w.r.t. number 2, are you asking for an explanation of where labels go in a kubernetes yaml definition and their purpose?

> Review comment: That sounds along the right track. I think the best answer is that I don't have the kubernetes context that you do to get much more specific. The labels exist in a file that I don't know much about. It would be great if the docs introduced me to this section without assuming I knew what that file is called, what it is used for, and what its format is. IMO the best way to do this is with the structure suggested in (1), including this section as a subsection of "the kubernetes yaml definition"

> Review comment: or a name you think fits better :)