This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Commit

Spelling/Grammar fix
kjlyon committed Nov 15, 2016
1 parent f80bf52 commit ab995cd
Showing 8 changed files with 16 additions and 16 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -29,7 +29,7 @@ with what commits were fixes and what commits were features.

For any pull request submitted, the maintainers of Snap require `small` tests that cover the code being modified and/or features being added; `medium` and `large` tests are also welcome but are not required. This breakdown of tests into `small`, `medium`, and `large` is a new taxonomy adopted by the Snap team in May 2016. These three test types can be best described as follows:
* **Small** tests are written to exercise behavior within a single function or module. While you might think of these as *unit* tests, a more generic term seems appropriate to avoid any confusion. In general, there is no reliance in a *small* test on any external systems (databases, web servers, etc.), and responses expected from such external services (if any) will be *mocked* or *faked*. When we say reliance on “external systems” we are including reliance on access to the network, the filesystem, external systems (e.g. databases), system properties, multiple threads of execution, or the use of sleep statements as part of the test. These tests should be the easiest to automate and the fastest to run (returning a result in a minute or less, with most returning a result in a few seconds or less). These tests will be run automatically on any pull requests received from a contributor, and all *small* tests must pass before a pull request will be reviewed.
-* **Medium** tests involve two or more features and test the interaction between those features. For those with previous testing experience, you might think of these as *integration* tests, but because there are a large number of other types of tests that fall into this category a more generic term is needed. The question being answered by these tests is whether or not the interactions between a feature and it’s nearest neighbors interoperate the way that they are expected to. *Medium* tests can rely on access to local services (a local database, for example), the local filesystem, multiple threads of execution, sleep statements and even access to the (local) network. However, reliance on access to external systems and services (systems and services not available on the localhost) in *medium* tests is discouraged. In general, we should expect that these tests return a result in 5 minutes or less, although some *medium* tests may return a result in much less time than that (depending on local system load). These tests can typically be automated and the set of *medium* tests will be run against any builds prior to their release.
+* **Medium** tests involve two or more features and test the interaction between those features. For those with previous testing experience, you might think of these as *integration* tests, but because there are a large number of other types of tests that fall into this category a more generic term is needed. The question being answered by these tests is whether or not the interactions between a feature and its nearest neighbors interoperate the way that they are expected to. *Medium* tests can rely on access to local services (a local database, for example), the local filesystem, multiple threads of execution, sleep statements and even access to the (local) network. However, reliance on access to external systems and services (systems and services not available on the localhost) in *medium* tests is discouraged. In general, we should expect that these tests return a result in 5 minutes or less, although some *medium* tests may return a result in much less time than that (depending on local system load). These tests can typically be automated and the set of *medium* tests will be run against any builds prior to their release.
* **Large** tests represent typical user scenarios and might be what some of you would think of as *functional* tests. However, as was the case with the previous two categories, we felt that the more generic term used by the Google team seemed to be appropriate here. For these tests, reliance on access to the network, local services (like databases), the filesystem, external systems, multiple threads of execution, system properties, and the use of sleep statements within the tests are all supported. Some of these tests might be run manually as part of the release process, but every effort is made to ensure that even these *large* tests can be automated (where possible). The response times for testing of some of these user scenarios could be 15 minutes or more (e.g. it may take some time to bring the system up to an equilibrium state when load testing), so there are situations where these *large* tests will have to be triggered manually even if the test itself is run as an automated test.

This taxonomy is the same taxonomy used by the Google Test team and was described in a posting to the Google Testing Blog that can be found [here](http://googletesting.blogspot.com/2010/12/test-sizes.html).
8 changes: 4 additions & 4 deletions docs/BUILD_AND_TEST.md
@@ -55,7 +55,7 @@ By default `make` runs `make deps`, `make snap`, and `make plugins` commands for
* `plugins`: builds test plugins for the local operating system
* `install`: installs snapd and snapctl binaries in /usr/local/bin

-To see how to use Snap, look at [gettings started](../README.md#getting-started), [SNAPD.md](SNAPD.md), and [SNAPCTL.md](SNAPCTL.md).
+To see how to use Snap, look at [getting started](../README.md#getting-started), [SNAPD.md](SNAPD.md), and [SNAPCTL.md](SNAPCTL.md).

## Test
### Creating Tests
@@ -75,7 +75,7 @@ would identify that file as a file that contains *small* tests, while a line like:

// +build medium

-would identify that file as a file that contains *medium* tests. Once those build tags are have been added to the test files in the Snap codebase, it is relative simple to run a specific set of tests (by type) by simply adding a `-tags [TAG]` command-line flag to the `go test` command (where the `[TAG]` value is replaced by one of our test types). For example, this command will run all of the tests in the current working directory or any of its subdirectories that have been tagged as *small* tests:
+would identify that file as a file that contains *medium* tests. Once those build tags have been added to the test files in the Snap codebase, it is relatively simple to run a specific set of tests (by type) by simply adding a `-tags [TAG]` command-line flag to the `go test` command (where the `[TAG]` value is replaced by one of our test types). For example, this command will run all of the tests in the current working directory or any of its subdirectories that have been tagged as *small* tests:

$ go test -v -tags=small ./...

@@ -102,7 +102,7 @@
```
$ make test-small
```
To run the other types of tests in the Snap framework, simply replace the `small` type with one of the other types (`legacy`, `medium`, or `large`) in the example `make test-*` command shown above.

If you are interested in running all of the `legacy` tests from the Snap framework *and continuing through to all subdirectories, regardless of any errors that might be encountered*, then you can run a `go test ...` command directly (instead of running a `make test-*` or `scripts/test.sh [SNAP_TEST_TYPE]` command like those shown above). That `go test ...` command would look something like this:
```
go test -tags=legacy ./...
```
@@ -143,7 +143,7 @@ Any `small` tests added to the Snap framework must conform to the following constraints:

When complete, the full set of `small` tests for any given function or method should provide sufficient code coverage to ensure that any changes made to that function or method will not 'break the build'. This will assure the Snap maintainers that any pull requests that are made to modify or add to the framework can be safely merged (provided that there is sufficient code coverage and the associated tests pass).

It should be noted here that the maintainers will refuse to merge any pull requests that trigger a failure of any of the `small` or `legacy` tests that cover the code being modified or added to the framework. As such, we highly recommend that contributors run the tests that cover their contributions locally before submitting their contribution as a pull request. Maintainers may also ask that contributors add tests to their pull requests to ensure adequate code coverage before they are willing to accept a given pull request, even if all existing tests pass. Our hope is that you, as a contributor, will understand the need for this requirement.

#### In Docker
The Snap Framework supports running tests in an isolated container as opposed to your local host. Run the test script, which uses the `Dockerfile` located at `./scripts/Dockerfile`:
4 changes: 2 additions & 2 deletions docs/METRICS.md
@@ -82,8 +82,8 @@ namespace := core.NewNamespace("intel", "psutil", "load", "load1")
### Dynamic Metric Namespace Example

Dynamic namespaces enable collector plugins to embed runtime data in the namespace with just enough metadata to enable
-downstrean plugins (processors and publishers) the ability to extract the data and transform the namespace into its
-canonical form often required by some backends.
+downstream plugins (processors and publishers) the ability to extract the data and transform the namespace into its
+canonical form often required by some back ends.

Given a dynamic metric identified by the namespace `/intel/libvirt/*/disk/*/wrreq` the `NamespaceElement`s would
have values of 'intel', 'libvirt', '*', 'disk', '*' and 'wrreq' respectively. The `Name` and `Description` fields
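To make the wildcard idea above concrete, here is a small, illustrative Go helper that pairs a concrete namespace against a dynamic one and extracts the values at the `*` positions. It is a sketch only, not part of the Snap library API, and the concrete namespace values are invented examples:

```go
package main

import (
	"fmt"
	"strings"
)

// match compares a concrete namespace against a dynamic pattern and returns
// the values captured at each "*" element, keyed by element index.
// It returns nil if the namespaces do not line up.
func match(pattern, concrete string) map[int]string {
	p := strings.Split(strings.Trim(pattern, "/"), "/")
	c := strings.Split(strings.Trim(concrete, "/"), "/")
	if len(p) != len(c) {
		return nil
	}
	captured := map[int]string{}
	for i, el := range p {
		switch {
		case el == "*":
			captured[i] = c[i] // runtime data embedded at the wildcard position
		case el != c[i]:
			return nil // static elements must agree
		}
	}
	return captured
}

func main() {
	got := match("/intel/libvirt/*/disk/*/wrreq", "/intel/libvirt/vm01/disk/vda/wrreq")
	fmt.Println(got) // map[2:vm01 4:vda]
}
```

A downstream processor or publisher could use captures like these to rebuild the canonical namespace form a back end expects.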
2 changes: 1 addition & 1 deletion docs/PLUGIN_CATALOG.md
@@ -83,7 +83,7 @@ This file is automatically generated. If you would like to add to the plugin list

### Wishlist

-There will always be more plugins we wish we had. To make sure others can contribute to our community goals, we keep a wish list of what people would like to see. If you see one here and want to start on it please let us know by commenting on the corresponding issue!
+There will always be more plugins we wish we had. To make sure others can contribute to our community goals, we keep a wish list of what people would like to see. If you see one here and want to start on it, please let us know by commenting on the corresponding issue!

| Issue | Description |
|-------|-------------|
2 changes: 1 addition & 1 deletion docs/PLUGIN_PACKAGING.md
@@ -12,7 +12,7 @@ and Snap executes the program referenced by the `exec` field.

## Why

-In cases where we can not or do not want to compile our plugin into a statically
+In cases where we cannot or do not want to compile our plugin into a statically
linked binary we can load a plugin packaged as an ACI image. This provides
an obvious advantage for plugins written in Python, Ruby, Java, etc where the
plugins dependencies, potentially including an entire Python virtualenv, could
2 changes: 1 addition & 1 deletion docs/PROFILING.md
@@ -6,7 +6,7 @@ Using pprof and go-torch is a good way to shed light on how your application behaves
If you have **Docker** installed, jump to [generate a profile](#generating-a-profile).

### System Requirements
-Nativaly supported OS:
+Natively supported OS:
- Linux
- OS X 10.11+ ([patch for 10.6 - 10.10](https://github.com/rsc/pprof_mac_fix))

4 changes: 2 additions & 2 deletions docs/SNAPD_CONFIGURATION.md
@@ -21,7 +21,7 @@ limitations under the License.

snapd supports being configured through a configuration file located at a default location of `/etc/snap/snapd.conf` on Linux systems or by passing a configuration file in through the `--config` command line flag when starting snapd. YAML and JSON are currently supported for configuration file types.

-snapd runs without a configuration file provided and will use the default values defined inside the daemon (shown below). There is an order of precedence when it come to default values, configuration files, and flags when snapd starts. Any value defined in the default configuration file located at `/etc/snap/snapd.conf` will take precedence over default values. Any value defined in a configuration file passed via the `--config` flag will be used in place of any default configuration file on the system and override default values. Any flags passed in on the command line during the start up of snapd will override any values defined in configuration files and default values.
+snapd runs without a configuration file provided and will use the default values defined inside the daemon (shown below). There is an order of precedence when it comes to default values, configuration files, and flags when snapd starts. Any value defined in the default configuration file located at `/etc/snap/snapd.conf` will take precedence over default values. Any value defined in a configuration file passed via the `--config` flag will be used in place of any default configuration file on the system and override default values. Any flags passed in on the command line during the start up of snapd will override any values defined in configuration files and default values.

In order of precedence (from greatest to least):
- Command-line flags
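The precedence described above can be sketched as a simple resolution function. This is an illustrative model only, not snapd's actual implementation, and the values are invented:

```go
package main

import "fmt"

// resolve returns the first non-empty value, mirroring the stated precedence:
// command-line flag, then a file passed via --config, then the default file at
// /etc/snap/snapd.conf, then the built-in default compiled into the daemon.
func resolve(flagValue, configFlagFile, defaultFile, builtin string) string {
	for _, v := range []string{flagValue, configFlagFile, defaultFile} {
		if v != "" {
			return v
		}
	}
	return builtin
}

func main() {
	// No flag and no --config file given, but the default file sets a value,
	// so the default file wins over the built-in default.
	fmt.Println(resolve("", "", "9000", "8181")) // prints 9000
	// A command-line flag overrides everything else.
	fmt.Println(resolve("8000", "9100", "9000", "8181")) // prints 8000
}
```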
@@ -97,7 +97,7 @@ control:
# keyring files specified in keyring_path. Plugin trust can be disabled (0) which
# will allow loading of all plugins whether signed or not. The warning state allows
# for loading of signed and unsigned plugins. Warning messages will be displayed if
-# an unsigned plugin is loaded. Any signed plugins that can not be verified will
+# an unsigned plugin is loaded. Any signed plugins that cannot be verified will
# not be loaded. Valid values are 0 - Off, 1 - Enabled, 2 - Warning
plugin_trust_level: 1

8 changes: 4 additions & 4 deletions docs/TASKS.md
@@ -51,7 +51,7 @@ The header contains a version, used to differentiate between versions of the task

The schedule describes the schedule type and interval for running the task. The type of a schedule could be a simple "run forever" schedule, which is what we see above as `"simple"` or something more complex. Snap is designed in a way where custom schedulers can easily be dropped in. If a custom schedule is used, it may require more key/value pairs in the schedule section of the manifest. At the time of this writing, Snap has three schedules:
- **simple schedule** which is described above,
-- **window schedule** which adds a start and stop time for the task. The time must be given as a quoted string in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format, for example with specific timezone offset:
+- **window schedule** which adds a start and stop time for the task. The time must be given as a quoted string in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format, for example with specific time zone offset:
```json
"version": 1,
"schedule": {
@@ -62,7 +62,7 @@
},
"max-failures": 10,
```
-or without timezone offset (in that cases uppercase'Z' must be present):
+or without time zone offset (in that case an uppercase 'Z' must be present):
```json
"version": 1,
"schedule": {
@@ -155,7 +155,7 @@ Process and Publish nodes in the workflow can also target remote Snap nodes via
```

-If a target is specified for a step in the workflow, that step will be executed on the remote instance specified by the ip:port target. Each node in the workflow is evaluated independently so a workflow can have any, all, or none of it's steps being done remotely (if `target` key is omitted, that step defaults to local). The ip and port target are the ip and port that has a running control-grpc server. These can be specified to snapd via the `control-listen-addr` and `control-listen-port` flags. The default is the same ip as the Snap rest-api and port 8082.
+If a target is specified for a step in the workflow, that step will be executed on the remote instance specified by the ip:port target. Each node in the workflow is evaluated independently so a workflow can have any, all, or none of its steps being done remotely (if `target` key is omitted, that step defaults to local). The ip and port target are the ip and port that has a running control-grpc server. These can be specified to snapd via the `control-listen-addr` and `control-listen-port` flags. The default is the same ip as the Snap rest-api and port 8082.

An example json task that uses remote targets:
```json
```
@@ -275,7 +275,7 @@ config:

Applying the config at `/intel/perf` means that all leaves of `/intel/perf` (`/intel/perf/foo`, `/intel/perf/bar`, and `/intel/perf/baz` in this case) will receive the config.

-The tag section describes additional meta data for metrics. Similary to config, tags can also be described at a branch, and all leaves of that branch will receive the given tag(s). For example, say a task is going to collect `/intel/perf/foo`, `/intel/perf/bar`, and `/intel/perf/baz`, all metrics should be tagged with experiment number, additonally one metric `/intel/perf/bar` should be tagged with OS name. That tags could be described like so:
+The tag section describes additional metadata for metrics. Similarly to config, tags can also be described at a branch, and all leaves of that branch will receive the given tag(s). For example, say a task is going to collect `/intel/perf/foo`, `/intel/perf/bar`, and `/intel/perf/baz`; all metrics should be tagged with an experiment number, and additionally one metric, `/intel/perf/bar`, should be tagged with the OS name. Those tags could be described like so:

```yaml
---
```
