This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Commit: Updated documentation for large tests

kjlyon committed Feb 17, 2017
1 parent 89bf6c0 commit c5ea754

Showing 3 changed files with 185 additions and 7 deletions.
14 changes: 11 additions & 3 deletions docs/BUILD_AND_TEST.md
@@ -45,16 +45,18 @@ To see how to use Snap, look at [getting started](../README.md#getting-started),

Our tests are written using [smartystreets' GoConvey package](https://github.com/smartystreets/goconvey). See https://github.com/smartystreets/goconvey/wiki for an introduction to creating a test using this package.

Since the adoption of our large test framework, our process for creating large tests is a little different. For specific instructions on large tests, visit [LARGE_TESTS.md](LARGE_TESTS.md).

### Tests in Go

We follow the Go methodology of placing tests into files with names that look like `*_test.go`. See [this](https://golang.org/cmd/go/#hdr-Test_packages) section from [go command](https://golang.org/cmd/go/) documentation for more details.

### Test Types in Snap

Tests in Snap are broken down into `small`, `medium`, and `large` tests. These three test types can be best described as follows:
* **Small** tests are written to exercise behavior within a single function or module. While some of you might think of these as *unit* tests, a more generic term seems appropriate to avoid any confusion. In general, there is no reliance in a *small* test on any external systems (databases, web servers, etc.), and responses expected from such external services (if any) will be *mocked* or *faked*. When we say reliance on "external systems" we are including reliance on access to the network, the filesystem, external systems (e.g., databases), system properties, multiple threads of execution, or the use of sleep statements as part of the test. These tests should be the easiest to automate and the fastest to run (returning a result in a minute or less, with most returning a result in a few seconds or less). These tests will be run automatically on any pull requests received from a contributor, and all *small* tests must pass before a pull request will be reviewed.
* **Medium** tests involve two or more features and test the interaction between those features. For those with previous testing experience, you might think of these as *integration* tests, but because there are a large number of other types of tests that fall into this category a more generic term is needed. The question being answered by these tests is whether or not the interactions between a feature and its nearest neighbors interoperate the way that they are expected to. *Medium* tests can rely on access to local services (a local database, for example), the local filesystem, multiple threads of execution, sleep statements and even access to the (local) network. However, reliance on access to external systems and services (systems and services not available on the localhost) in *medium* tests is discouraged. In general, we should expect that these tests return a result in 5 minutes or less, although some *medium* tests may return a result in much less time than that (depending on local system load). These tests can typically be automated and the set of *medium* tests will be run against any builds prior to their release.
* **Large** tests represent typical user scenarios and might be what some of you would think of as *functional* tests. However, as was the case with the previous two categories, we felt that the more generic term used by the Google team seemed to be appropriate here. For these tests, reliance on access to the network, local services (like databases), the filesystem, external systems, multiple threads of execution, system properties, and the use of sleep statements within the tests are all supported. Some of these tests might be run manually as part of the release process, but every effort is made to ensure that even these *large* tests can be automated (where possible). The response times for testing of some of these user scenarios could be 15 minutes or more (e.g., it may take some time to bring the system up to an equilibrium state when load testing), so there are situations where these *large* tests will have to be triggered manually even if the test itself is run as an automated test. More information about Snap's large test framework can be found in [LARGE_TESTS.md](LARGE_TESTS.md).

This taxonomy is the same taxonomy used by the Google Test team and was described in a posting to the Google Testing Blog that can be found [here](http://googletesting.blogspot.com/2010/12/test-sizes.html).

@@ -76,7 +78,9 @@ It should be noted here that if there are any untagged tests in the directory re

Once the maintainers feel that the *small* tests provide sufficient code coverage, the existing *legacy* tests will be phased out (or used in the construction of a set of *medium* and *large* tests for the Snap CI/CD toolchain). All new tests being added to the Snap framework by contributors should be marked as either `small`, `medium`, or `large` tests, depending on their scope.

### Building Effective Tests

#### Small

Any `small` tests added to the Snap framework must conform to the following constraints:
* They should test the behavior of a single function or method in the framework
@@ -88,6 +92,10 @@ When complete, the full set of `small` tests for any given function or method sh

It should be noted here that the maintainers will refuse to merge any pull requests that trigger a failure of any of the `small` or `legacy` tests that cover the code being modified or added to the framework. As such, we highly recommend that contributors run the tests that cover their contributions locally before submitting their contribution as a pull request. Maintainers may also ask that contributors add tests to their pull requests to ensure adequate code coverage before they are willing to accept a given pull request, even if all existing tests pass. Our hope is that you, as a contributor, will understand the need for this requirement.

#### Large

More information about large tests can be found in [LARGE_TESTS.md](LARGE_TESTS.md).

### Running Tests

#### On a local machine
165 changes: 165 additions & 0 deletions docs/LARGE_TESTS.md
@@ -0,0 +1,165 @@
# Large Tests: Building and Testing

This guide gets you started with writing and running large tests for Snap plugins. If your plugin needs to be updated to use the large test framework, you can find step-by-step instructions [here](https://github.com/kjlyon/snap/blob/master/Pluginsync_for_large_tests.md).

## Effective Large Tests

As of version 1.0, we introduced a large test framework, which can be added to your plugin by running our pluginsync tool. The default large test performs the following actions:
* Uses environment variables to populate the docker compose specification in `scripts/test/docker_compose.yml`
* Downloads the latest containers via `docker pull` and runs them
* Conditionally runs `scripts/test/setup.rb` before any test (use this to create a test database, test service, etc.)
* Downloads and runs the appropriate version of Snap per `$SNAP_VERSION`
* Scans `examples/task/*.yml` for the list of tasks and metrics
* Loads Snap plugins, first from the local `build/linux/x86_64/*` directory, then from the `build.snap-telemetry.io` s3 [bucket](http://snap.ci.snap-telemetry.io)
* Verifies the plugins are loaded successfully
* Attempts to create, verify, and stop every yaml task in the examples directory
* Shuts down and cleans up the containers

If this is not the appropriate behavior, you can write a custom large test as `{test_name}_spec.rb` in the `scripts/test` directory.

### Docker Compose

A default `docker_compose.yml` file should be supplied by the developer and placed in the `./scripts/test` directory. This will be used by the default large spec test. Additional docker compose config files can be supplied for complex test scenarios; they require their own `custom_spec.rb` test.

Currently the following environment variables are passed to the Snap container:

* OS: any OS available in the snap-docker repo (default: alpine)
* SNAP_VERSION: any Snap version, or a git sha1 that's available in the s3 bucket (default: latest)
* PLUGIN_PATH: used by the large test framework; this must be included in the Snap container

Single container:
```
version: '2'
services:
  snap: # NOTE: do not change the snap container name
    image: intelsdi/snap:${OS}_test
    environment:
      SNAP_VERSION: "${SNAP_VERSION}"
    volumes:
      - "${PLUGIN_PATH}:/plugin"
```
Multiple containers:
```
version: '2'
services:
  snap: # NOTE: do not change the snap container name
    image: intelsdi/snap:alpine_test # OS can be locked down to a specific version
    environment:
      SNAP_VERSION: "${SNAP_VERSION}"
      INFLUXDB_HOST: "${INFLUXDB_HOST}" # Custom environment variables require updates to large.sh
    volumes:
      - "${PLUGIN_PATH}:/plugin"
    links:
      - influxdb
  influxdb:
    image: influxdb:1.0
    expose:
      - "8083"
      - "8086"
```
### Travis CI

To enable large tests on Travis CI, please enable sudo, docker, and add the appropriate test matrix settings in `.sync.yml`:
```
.travis.yml:
  sudo: true # large tests require travis.ci VMs instead of containers (enabled via sudo: true)
  services: # this ensures docker/docker-compose is installed on the travis agent
    - docker
  env:
    global: # If you change the matrix, please preserve environment globals:
      - ORG_PATH=/home/travis/gopath/src/github.com/intelsdi-x
      - SNAP_PLUGIN_SOURCE=/home/travis/gopath/src/github.com/${TRAVIS_REPO_SLUG}
    matrix:
      - TEST_TYPE: small # preserve existing small tests
      - TEST_TYPE: medium # preserve existing medium tests (make sure they exist)
      # if SNAP_VERSION:latest and OS:alpine is sufficient simply add TEST_TYPE: large
      - TEST_TYPE: large
      # if multiple SNAP_VERSION, OS needs to be tested, provide an array of versions:
      - SNAP_VERSION=latest OS=xenial TEST_TYPE=large
      - SNAP_VERSION=latest_build OS=centos7 TEST_TYPE=large
  matrix:
    # travis doesn't have an easy way to exclude large tests with a regex, so
    # please list every large test to exclude it from running on go 1.6.x
    exclude:
      - go: 1.6.x
        env: TEST_TYPE=large
      - go: 1.6.x
        env: SNAP_VERSION=latest OS=xenial TEST_TYPE=large
      - go: 1.6.x
        env: SNAP_VERSION=latest_build OS=centos7 TEST_TYPE=large
```
NOTE: If you did not set `sudo: true` and enable the docker service, the large test will fail in Travis CI with the following error:
```
2017-02-06 23:00:35 UTC [ error] docker needs to be installed
```

### Serverspec

The large tests are written using [serverspec](http://serverspec.org/changes.html) as the system test framework. An example that checks the `ping` package is installed and then exercises it:
```
set :docker_compose_container, :snap # required if you use the os["family"] detection functionality

context "network is functional" do
  if os["family"] == "ubuntu"
    describe package("iputils-ping") do
      it { should be_installed }
    end
  elsif os["family"] == "redhat"
    describe package("iputils") do
      it { should be_installed }
    end
  end

  describe command('ping -c1 8.8.8.8') do
    its(:exit_status) { should eq 0 }
    its(:stdout) { should contain(/1 packets received/) }
  end
end
```

If you have more than one container specified in docker compose, tests can be executed in each container separately:
```
describe docker_compose('./docker_compose.yml') do
  its_container(:snap) do
    # these tests would only run in the snap container
  end
  its_container(:influxdb) do
    # these tests would only run in the influxdb container
  end
end
```

## Running Tests

In addition to `make test-large`, which is described in [BUILD_AND_TEST.md](BUILD_AND_TEST.md), you have the following additional options when using the large test framework:

Custom environment variables can be supplied, for example:
```
OS=trusty SNAP_VERSION=1.0.0 make test-large
```
A subset of tasks can be selected for testing via the `TASK` environment variable:
```
TASK="psutil*.yml" make test-large
```
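The `TASK` value is matched as a shell-style glob against the yaml tasks in the examples directory. A quick offline way to preview which files a pattern would select (the file names below are made up for illustration):

```shell
# Preview which task files a TASK glob would select (hypothetical file
# names; the real tasks live in your plugin's examples directory).
dir=$(mktemp -d) && cd "$dir"
touch psutil-file.yml psutil-influxdb.yml mysql-file.yml

TASK="psutil*.yml"
matched=$(ls $TASK)   # deliberately unquoted so the shell expands the glob
echo "$matched"
```

Only the two `psutil*` files match; `mysql-file.yml` is left out of the test run.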
To troubleshoot a failing large test, enable the debug flag:
```
DEBUG=true make test-large
```
When the test encounters any failures in debug mode, it will pause at a [pry session](http://pryrepl.org/). The test containers will remain running and available for further examination. When the problem has been identified, simply exit the debug session to resume testing, or use `exit-program` to quit immediately.

To spin up the environment in demo mode and pause after loading the first task:
```
DEMO=true make test-large
```
A specific task can be selected for use in demo mode:
```
DEMO=true TASK="psutil-file.yml" make test-large
```
When you are done checking out the containers, simply type `exit-program`.

NOTE: some useful commands once the containers are running in debug or demo mode:

* Login to the Snap container: `$ docker exec -it $(docker ps | sed -n 's/\(\w*\)\s*intelsdi\/snap.*/\1/p') /bin/bash`
* View the Snap daemon log: `$ docker logs $(docker ps | sed -n 's/\(\w*\)\s*intelsdi\/snap.*/\1/p')`
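Both commands above rely on a sed capture to pull the container ID out of `docker ps` output. The capture can be sanity-checked without docker by feeding it a fabricated `docker ps` line (the ID and image tag here are made up):

```shell
# A fabricated `docker ps` output line: container ID first, image name second.
sample='f3a9c1b2d4e5   intelsdi/snap:alpine_test   "/usr/local/bin/init"'

# Capture the ID that precedes the intelsdi/snap image name (GNU sed syntax).
cid=$(printf '%s\n' "$sample" | sed -n 's/\(\w*\)\s*intelsdi\/snap.*/\1/p')
echo "$cid"   # → f3a9c1b2d4e5
```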

13 changes: 9 additions & 4 deletions docs/PLUGIN_AUTHORING.md
@@ -14,6 +14,7 @@
* [Plugin Metadata](#plugin-metadata)
* [Plugin Catalog](#plugin-catalog)
* [Plugin Status](#plugin-status)
* [Plugin Tests](#plugin-tests)
* [Documentation](#documentation)

## Overview
@@ -145,13 +146,17 @@ We provide a list of Snap plugins at [snap-telemetry.io](http://snap-telemetry.i
### Plugin Status

While the Snap framework is hardened through tons of testing, **plugins mature at their own pace**. We also want our community to share plugins early and update them often. To help both of these goals, we have tiers of maturity defined for plugins being added to the Plugin Catalog:
* [**Supported**](PLUGIN_STATUS.md#supported-plugins) - Created by a company with the intent of supporting customers
* [**Approved**](PLUGIN_STATUS.md#approved-plugins) - Vetted by Snap maintainers to meet our best practices for design
* [**Experimental**](PLUGIN_STATUS.md#experimental) - Early plugins ready for testing but not known to work as intended
* [**Unlabeled**](PLUGIN_STATUS.md#all-other-plugins-unlabeled) - Shared for reference or extension

Further details to these definitions are available in [Plugin Status](PLUGIN_STATUS.md).

### Plugin Tests

For a plugin to be labeled `Approved` or `Supported`, it must have reasonable test coverage. At a minimum, we require small tests, but large tests are also encouraged. To learn more about our testing best practices, visit [BUILD_AND_TEST.md](BUILD_AND_TEST.md) and [LARGE_TESTS.md](LARGE_TESTS.md).

### Documentation

We request that all plugins include a README with the following information:
