This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Minor spelling and grammatical corrections in docs
JRodDynamite committed Oct 28, 2016
1 parent 140da2a commit 00d468d
Showing 11 changed files with 29 additions and 28 deletions.
4 changes: 2 additions & 2 deletions docs/BUILD_AND_TEST.md
@@ -75,7 +75,7 @@ would identify that file as a file that contains *small* tests, while a line like

// +build medium

- would identify that file as a file that contains *medium* tests. Once those build tags are have been added to the test files in the Snap codebase, it is relative simple to run a specific set of tests (by type) by simply adding a `-tags [TAG]` command-line flag to the `go test` command (where the `[TAG]` value is replaced by one of our test types). For example, this command will run all of the tests in the current working directory or any of it’s subdirectories that have been tagged as *small* tests:
+ would identify that file as a file that contains *medium* tests. Once those build tags are have been added to the test files in the Snap codebase, it is relative simple to run a specific set of tests (by type) by simply adding a `-tags [TAG]` command-line flag to the `go test` command (where the `[TAG]` value is replaced by one of our test types). For example, this command will run all of the tests in the current working directory or any of its subdirectories that have been tagged as *small* tests:

$ go test -v -tags=small ./...

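For context, a test file carrying the *small* tag might look like the sketch below (hypothetical package and function names, not taken from the Snap codebase):

```go
// +build small

package arithmetic

import "testing"

// TestAdd exercises pure logic with no network, filesystem, or external
// service dependencies, which is what qualifies it as a *small* test.
func TestAdd(t *testing.T) {
	if got := add(2, 3); got != 5 {
		t.Fatalf("add(2, 3) = %d, want 5", got)
	}
}

func add(a, b int) int { return a + b }
```

A plain `go test ./...` skips this file entirely; only `go test -tags=small ./...` compiles and runs it.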
@@ -143,7 +143,7 @@ Any `small` tests added to the Snap framework must conform to the following cons

When complete, the full set of `small` tests for any given function or method should provide sufficient code coverage to ensure that any changes made to that function or method will not 'break the build'. This will assure the Snap maintainers that any pull requests that are made to modify or add to the framework can be safely merged (provided that there is sufficient code coverage and the associated tests pass).

- It should be noted here that the maintainers will refuse to merge any pull requests that trigger a failure of any of the `small` or `legacy` tests that cover the code being modified or added to the framework. As such, we highly recommend that contributors run the tests that cover their contributions locally before submitting their contribution as a pull request. Maintainers may also ask that contributors add tests to their pull requests to ensure adequate code coverage before the they are willing to accept a given pull request, even if all existing tests pass. Our hope is that you, as a contributor, will understand the need for this requirement.
+ It should be noted here that the maintainers will refuse to merge any pull requests that trigger a failure of any of the `small` or `legacy` tests that cover the code being modified or added to the framework. As such, we highly recommend that contributors run the tests that cover their contributions locally before submitting their contribution as a pull request. Maintainers may also ask that contributors add tests to their pull requests to ensure adequate code coverage before they are willing to accept a given pull request, even if all existing tests pass. Our hope is that you, as a contributor, will understand the need for this requirement.

#### In Docker
The Snap Framework supports running tests in an isolated container as opposed to your local host. Run the test script, which calls a `Dockerfile` located at `./scripts/Dockerfile`:
2 changes: 1 addition & 1 deletion docs/DISTRIBUTED_WORKFLOW_ARCHITECTURE.md
@@ -28,7 +28,7 @@ A distributed workflow is a workflow where one or more steps have a remote targe

## Architecture

- Distributed workflow is accomplished by allowing remote targets to be specified as part of a task workflow. This is done by having a gRPC server running that can handle actions needed by the scheduler to run a task. These are defined in the [managesMetrics](https://github.com/intelsdi-x/snap/blob/distributed-workflow/scheduler/scheduler.go) interface defined in scheduler/scheduler.go. This interface is implemented by both pluginControl in control/control.go and ControlProxy in grpc/controlproxy/controlproxy.go. This allows the scheduler to not know/care where a step in the workflow is running. On task creation the workflow is walked and the appropriate type is selected or created for each step in the workflow.
+ Distributed workflow is accomplished by allowing remote targets to be specified as part of a task workflow. This is done by having a gRPC server running that can handle actions needed by the scheduler to run a task. These are defined in the [managesMetrics](https://github.com/intelsdi-x/snap/blob/distributed-workflow/scheduler/scheduler.go) interface defined in scheduler/scheduler.go. This interface is implemented by both pluginControl in control/control.go and ControlProxy in grpc/controlproxy/controlproxy.go. This allows the scheduler to not know/care where a step in the workflow is running. On task creation, the workflow is walked and the appropriate type is selected or created for each step in the workflow.

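To make the dispatch pattern concrete, here is a heavily simplified, hypothetical Go sketch; the real managesMetrics interface has many more methods, and every name below is illustrative rather than Snap's actual code:

```go
package main

import "fmt"

// managesMetrics is a pared-down stand-in for the interface in
// scheduler/scheduler.go; the scheduler only ever talks to this interface.
type managesMetrics interface {
	CollectMetrics(taskID string) error
}

// localControl plays the role of pluginControl (control/control.go),
// which handles a step directly on the local node.
type localControl struct{}

func (localControl) CollectMetrics(taskID string) error {
	fmt.Println("collecting locally for", taskID)
	return nil
}

// remoteControl plays the role of ControlProxy
// (grpc/controlproxy/controlproxy.go), which forwards the same calls to a
// remote snapd over gRPC.
type remoteControl struct{ addr string }

func (r remoteControl) CollectMetrics(taskID string) error {
	fmt.Printf("forwarding %s to %s over gRPC\n", taskID, r.addr)
	return nil
}

func main() {
	// On task creation the workflow is walked and each step is bound to the
	// appropriate implementation; the scheduler never needs to know which.
	steps := []managesMetrics{localControl{}, remoteControl{addr: "10.0.0.2:8082"}}
	for _, s := range steps {
		_ = s.CollectMetrics("task-1")
	}
}
```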
## Performance considerations

6 changes: 3 additions & 3 deletions docs/METRICS.md
@@ -10,21 +10,21 @@ A metric in snap has the following fields.
* Version `int`
* Is bound to the version of the plugin
* Multiple versions of the same metric can be added to the catalog
- * Unless specified in the Task Manifest, the latest available metric will collected
+ * Unless specified in the Task Manifest, the latest available metric will be collected
* Config `*cdata.ConfigDataNode`
* Contains data needed to collect a metric
* Examples include 'uri', 'username', 'password', 'paths'
* Data `interface{}`
* The collected data
* Tags `map[string]string`
- * Are key value pairs that provide additional meta data about the metric
+ * Are key value pairs that provide additional metadata about the metric
* May be added by the framework or other plugins (processors)
* The framework currently adds the following standard tag to all metrics
* `plugin_running_on` describing on which host the plugin is running. This value is updated every hour due to a TTL set internally.
* May be added by a task manifests as described [here](https://github.com/intelsdi-x/snap/pull/941)
* May be added by the snapd config as described [here](https://github.com/intelsdi-x/snap/issues/827)
* Unit `string`
- * Describes the magnititude being measured
+ * Describes the magnitude being measured
* Can be an empty string for unitless data
* See [Metrics20.org](http://metrics20.org/spec/) for more guidance on units
* Description `string`
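Read together, the fields in this excerpt suggest a shape roughly like the struct below. This is a hypothetical sketch for orientation only, not the framework's actual type definition:

```go
package metrics

// Metric mirrors only the fields listed above; the real framework type
// differs in detail (for instance, Config is a *cdata.ConfigDataNode
// rather than a plain map).
type Metric struct {
	Version     int               // bound to the version of the plugin
	Config      map[string]string // collection settings such as 'uri', 'username', 'password', 'paths'
	Data        interface{}       // the collected data
	Tags        map[string]string // metadata, e.g. the framework-added plugin_running_on
	Unit        string            // may be an empty string for unitless data
	Description string
}
```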
3 changes: 2 additions & 1 deletion docs/PLUGIN_AUTHORING.md
@@ -85,7 +85,8 @@ Example:
| /intel/mock/bar.no | not allowed characters | /intel/mock/bar_no |
| /intel/mock/bar!? | not allowed characters | /intel/mock/bar |

- Snap validates the metrics exposed by plugin and, if validation failed, return an error and not load the plugin.
+
+ Snap validates the metrics exposed by the plugin and, if validation fails, an error is returned and the plugin is not loaded.

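One plausible reading of the substitution table above, expressed as a runnable sketch (this is not Snap's actual validation code):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize illustrates the table above: '.' is replaced with '_', while
// characters like '!' and '?' are dropped entirely.
func sanitize(ns string) string {
	var b strings.Builder
	for _, r := range ns {
		switch {
		case r == '/' || r == '-' || r == '_':
			b.WriteRune(r)
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9':
			b.WriteRune(r)
		case r == '.':
			b.WriteRune('_')
		default:
			// disallowed character with no substitution: drop it
		}
	}
	return b.String()
}

func main() {
	fmt.Println(sanitize("/intel/mock/bar.no")) // /intel/mock/bar_no
	fmt.Println(sanitize("/intel/mock/bar!?"))  // /intel/mock/bar
}
```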
##### c) static and dynamic metrics
Snap supports both static and dynamic metrics. You can find more detail about static and dynamic metrics [here](./METRICS.md).
10 changes: 5 additions & 5 deletions docs/PLUGIN_BEST_PRACTICES.md
@@ -2,17 +2,17 @@
### Leverage plugin configurability options
1. **Compile time configuration** - use `Plugin.PluginMeta` to define plugin's: name, version, type, accepted and returned content types, concurrency level, exclusiveness, secure communication settings and cache TTL. This type of configuration is usually specified in `main()` in which `plugin.Start()` method is called.
2. **Run time configuration**
- - **Global** - This config is useful if configuration data are needed to obtain list of metrics (for example: user names, paths to tools, etc.). Values from Global cofig (as defined in config json) are available in `GetMetricTypes()` method.
+ - **Global** - This config is useful if configuration data is needed to obtain the list of metrics (for example: user names, paths to tools, etc.). Values from Global config (as defined in config json) are available in `GetMetricTypes()` method.
- **Task level** - This config is useful when you need to pass configuration per metric or plugin in order to collect the metrics. Use `GetConfigPolicy()` to set configurable items for plugin. Values from Task config are available in `CollectMetrics()` method.

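A library-free sketch of what item 1 describes; the pluginMeta struct and start function below are hypothetical stand-ins for the plugin library's plugin.PluginMeta and plugin.Start():

```go
package main

import (
	"fmt"
	"time"
)

// pluginMeta mirrors the kinds of compile-time settings listed in item 1.
type pluginMeta struct {
	Name                 string
	Version              int
	Type                 string // "collector", "processor", or "publisher"
	AcceptedContentTypes []string
	ReturnedContentTypes []string
	ConcurrencyCount     int
	Exclusive            bool
	Secure               bool
	CacheTTL             time.Duration
}

// start stands in for plugin.Start, which a plugin calls from main().
func start(meta pluginMeta) {
	fmt.Printf("starting %s v%d (%s)\n", meta.Name, meta.Version, meta.Type)
}

func main() {
	start(pluginMeta{
		Name:                 "example-collector", // hypothetical name
		Version:              1,
		Type:                 "collector",
		AcceptedContentTypes: []string{"snap.gob"},
		ReturnedContentTypes: []string{"snap.gob"},
		ConcurrencyCount:     1,
		Exclusive:            false,
		Secure:               true,
		CacheTTL:             500 * time.Millisecond,
	})
}
```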
### Use `snap-plugin-utilities` library
- The library and guide is available [here](https://github.com/intelsdi-x/snap-plugin-utilities). The library consists of the following helper packages:
+ The library and guide are available [here](https://github.com/intelsdi-x/snap-plugin-utilities). The library consists of the following helper packages:
* **`config`** - The config package provides helpful methods to retrieve global config items.
* **`logger`** - The logger package wraps the logrus package. (https://github.com/Sirupsen/logrus). It sets logging from a plugin to separate files and adds a caller function name to each message. It's best to use log level defined during `snapd` start.
- * **`ns`** - The ns package provides functions to extract namespace from maps, JSON and struct compositions. It is useful for situations when full knowledge of available metrics is not known at time when `GetMetricTypes()` is called.
- * **`pipeline`** - Creates array of Pipes connected by channels. Each Pipe can perform a single process on data transmitted by channels.
+ * **`ns`** - The ns package provides functions to extract namespace from maps, JSON and struct compositions. It is useful for situations when full knowledge of available metrics is not known at the time when `GetMetricTypes()` is called.
+ * **`pipeline`** - Creates an array of Pipes connected by channels. Each Pipe can perform a single process on data transmitted by channels.
* **`source`** - The source package provides handy ways of dealing with external command output. It can be used for continuous command execution (PCM like), or for single command calls.
- * **`stack`** - The stack package provides simple implementation of a stack.
+ * **`stack`** - The stack package provides a simple implementation of a stack.

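As an illustration of the pipeline idea above (stages connected by channels, each performing a single transformation), here is a minimal sketch that does not use the utilities library's actual API:

```go
package main

import "fmt"

// pipe runs one stage: it applies f to every value flowing through the
// channel, mirroring the "single process per Pipe" idea described above.
func pipe(in <-chan int, f func(int) int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

func main() {
	src := make(chan int)
	go func() {
		defer close(src)
		for i := 1; i <= 3; i++ {
			src <- i
		}
	}()
	// Two pipes connected by channels: double each value, then add one.
	result := pipe(pipe(src, func(v int) int { return v * 2 }), func(v int) int { return v + 1 })
	for v := range result {
		fmt.Println(v) // 3, 5, 7
	}
}
```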
### Namespace definition
* The `GetMetricTypes()` method returns namespaces for your metrics. For new metrics, it is a good idea to start with your organization (for example: "intel"), then enumerate information starting from most general to most detailed. Examples: `/intel/server/mem/free` or `/intel/linux/iostat/avg-cpu/%idle`.
2 changes: 1 addition & 1 deletion docs/PLUGIN_LIFECYCLE.md
@@ -79,7 +79,7 @@ subscription group are unsubscribed and the subscription group is removed.
processing of all subscription groups. When a subscription group is processed the
requested metrics are evaluated and mapped to collector plugins. The required
plugins are compared with the previous state of the subscription group
- triggering the appropriate subscribe or unsubscribe calls. Finally the
+ triggering the appropriate subscribe or unsubscribe calls. Finally, the
subscription group view is updated with the current plugin dependencies and
the metrics that will be collected based on the requested metrics (query).

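A simplified sketch of that reconciliation step, with hypothetical names (Snap's scheduler code is considerably more involved):

```go
package main

import "fmt"

// reconcile compares the plugins a subscription group required before an
// update (prev) with those it requires after (next), and issues the
// matching subscribe or unsubscribe call for each difference.
func reconcile(prev, next map[string]bool, subscribe, unsubscribe func(plugin string)) {
	for p := range next {
		if !prev[p] {
			subscribe(p)
		}
	}
	for p := range prev {
		if !next[p] {
			unsubscribe(p)
		}
	}
}

func main() {
	prev := map[string]bool{"collector-mock1": true, "publisher-file": true}
	next := map[string]bool{"collector-mock1": true, "publisher-influxdb": true}
	// Prints a subscribe for publisher-influxdb and an unsubscribe for
	// publisher-file (map iteration order may vary).
	reconcile(prev, next,
		func(p string) { fmt.Println("subscribe", p) },
		func(p string) { fmt.Println("unsubscribe", p) },
	)
}
```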
4 changes: 2 additions & 2 deletions docs/PLUGIN_SIGNING.md
@@ -273,7 +273,7 @@ gpg: Good signature from "Tiffany Jernigan (Plugin signing key) <my.email@intel.
```

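The same "Good signature" check can be performed programmatically. Below is a hedged Go sketch using golang.org/x/crypto/openpgp; the file names are hypothetical, and Snap's own verification code may differ:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// The keyring must contain the signer's public key; "pubring.gpg" and
	// the plugin/signature file names below are placeholders.
	krFile, err := os.Open("pubring.gpg")
	if err != nil {
		panic(err)
	}
	defer krFile.Close()

	keyring, err := openpgp.ReadKeyRing(krFile)
	if err != nil {
		panic(err)
	}

	signed, err := os.Open("snap-plugin-collector-mock1")
	if err != nil {
		panic(err)
	}
	defer signed.Close()

	sig, err := os.Open("snap-plugin-collector-mock1.asc") // armored detached signature
	if err != nil {
		panic(err)
	}
	defer sig.Close()

	signer, err := openpgp.CheckArmoredDetachedSignature(keyring, signed, sig)
	if err != nil {
		fmt.Println("BAD signature:", err)
		return
	}
	fmt.Println("Good signature from key", signer.PrimaryKey.KeyIdString())
}
```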
###Signing file using key in your default keyring
- If you already have a key, you can use that. Otherwise you can create a key and directly add to your keyring
+ If you already have a key, you can use that. Otherwise, you can create a key and directly add to your keyring
Create a file named `gpg-batch` with the following
```
%echo Generating a default key
@@ -325,7 +325,7 @@ $ gpg --import pubkeys.gpg
```
####Validating a public key from someone else
From the [GPG Handbook](https://www.gnupg.org/gph/en/manual/x56.html):
- Once a key is imported it should be validated. GnuPG uses a powerful and flexible trust model that does not require you to personally validate each key you import. Some keys may need to be personally validated, however. A key is validated by verifying the key's fingerprint and then signing the key to certify it as a valid key. A key's fingerprint can be quickly viewed with the --fingerprint command-line option, but in order to certify the key you must edit it.
+ Once a key is imported it should be validated. GnuPG uses a powerful and flexible trust model that does not require you to personally validate each key you import. Some keys may need to be personally validated, however. A key is validated by verifying the key's fingerprint and then signing the key to certify it as a valid key. A key's fingerprint can be quickly viewed with the --fingerprint command-line option, but in order to certify the key, you must edit it.

Add --no-default-keyring --keyring <keyringFile> to all commands below if you are editing a specific keyring that isn't your gnupg default one.
```
6 changes: 3 additions & 3 deletions docs/PROFILING.md
@@ -59,7 +59,7 @@ go get github.com/uber/go-torch
```

## Generating a profile
- Before exploiting any result with go-torch and pprof we need to generate a profile - in our case a CPU profile. In this example we'll use the package `github.com/pkg/profile`.
+ Before exploiting any result with go-torch and pprof we need to generate a profile - in our case a CPU profile. In this example, we'll use the package `github.com/pkg/profile`.

### Implement the code
So on your main function start the profile:
@@ -90,7 +90,7 @@ func startInterruptHandling(modules ...coreModule) {
```
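For reference, a minimal sketch of the setup this section describes, using the github.com/pkg/profile package named above:

```go
package main

import "github.com/pkg/profile"

func main() {
	// Start a CPU profile; ProfilePath(".") writes cpu.pprof to the current
	// folder, and Stop() flushes it when main returns via the defer.
	defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()

	// ... run your workload here ...
}
```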
## Launch your program
- Now launch your program - in this example we started snapd and ran a task. The longer you run your task, the deeper your graph will go into the subroutines.
+ Now launch your program - in this example, we started snapd and ran a task. The longer you run your task, the deeper your graph will go into the subroutines.
When your program quits you should have a `cpu.pprof` file generated in your current folder.
@@ -111,4 +111,4 @@ go tool pprof snapd cpu.pprof
Note that pprof allows you to watch the profile of any Go program executed during the profiling:
```
go tool pprof $SNAP_PLUGIN/snap-plugin-type-myplugin cpu.pprof
- ```
+ ```
6 changes: 3 additions & 3 deletions docs/SNAPCTL.md
@@ -106,7 +106,7 @@ In one terminal window, run snapd (log level is set to 1 and signing is turned o
$ $SNAP_PATH/bin/snapd -l 1 -t 0
```

- prepare a task manifest file, for example task.json with following content:
+ prepare a task manifest file, for example, task.json with following content:
```json
{
"version": 1,
@@ -147,7 +147,7 @@ prepare a task manifest file, for example, task.json with following content:
}
```

- prepare a workflow manifest file, for example workflow.json with following content:
+ prepare a workflow manifest file, for example, workflow.json with the following content:
```json
{
"collect": {
@@ -181,7 +181,7 @@ and then:
5. start a task with a task manifest
6. start a task with a workflow manifest
7. list the tasks
- 8. unload a plugins
+ 8. unload the plugins

```
$ $SNAP_PATH/bin/snapctl plugin load $SNAP_PATH/plugin/snap-plugin-collector-mock1
8 changes: 4 additions & 4 deletions docs/SNAPD_CONFIGURATION.md
@@ -93,8 +93,8 @@ control:

# plugin_trust_level sets the plugin trust level for snapd. The default state
# for plugin trust level is enabled (1). When enabled, only signed plugins that can
- # be verified will be loaded into snapd. Signatures are verifed from
- # keyring files specided in keyring_path. Plugin trust can be disabled (0) which
+ # be verified will be loaded into snapd. Signatures are verified from
+ # keyring files specified in keyring_path. Plugin trust can be disabled (0) which
# will allow loading of all plugins whether signed or not. The warning state allows
# for loading of signed and unsigned plugins. Warning messages will be displayed if
# an unsigned plugin is loaded. Any signed plugins that can not be verified will
@@ -166,7 +166,7 @@ restapi:
# rest_auth enables authentication for the REST API. Default value is false
rest_auth: false

- # rest_auth_password sets the password to use for the REST API. Currently user and password
+ # rest_auth_password sets the password to use the REST API. Currently user and password
# combinations are not supported.
rest_auth_password: changeme

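For illustration, a Go sketch of calling the REST API with rest_auth enabled. The port and endpoint are assumptions (snapd's REST API defaults to port 8181), and the user name is a placeholder since user and password combinations are not supported:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://localhost:8181/v1/plugins", nil)
	if err != nil {
		panic(err)
	}
	// Only the password is significant; the user name is a placeholder.
	req.SetBasicAuth("snap", "changeme")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```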
@@ -195,7 +195,7 @@ tribe:
bind_port: 6000

# name sets the name to use for this snapd instance in the tribe
- # membership. Default value defaults to local hostname of the system.
+ # membership. Default value defaults to the local hostname of the system.
name: snaphost-01

# seed sets the snapd instance to use as the seed for tribe communications
6 changes: 3 additions & 3 deletions docs/TASKS.md
@@ -64,9 +64,9 @@ The schedule describes the schedule type and interval for running the task. The
More on cron expressions can be found here: https://godoc.org/github.com/robfig/cron

#### Max-Failures
- By default, Snap will disable a task if there is 10 consecutive errors from any plugins within the workflow. The configuration
+ By default, Snap will disable a task if there are 10 consecutive errors from any plugins within the workflow. The configuration
can be changed by specifying the number of failures value in the task header. If the max-failures value is -1, Snap will
- not disable a task with consecutive failure. Instead, Snap will sleep for 1 second for every 10 consective failures
+ not disable a task with consecutive failure. Instead, Snap will sleep for 1 second for every 10 consecutive failures
and retry again.
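The policy reads as a small piece of control flow; here is a simplified illustration (not Snap's actual scheduler code):

```go
package main

import (
	"fmt"
	"time"
)

// handleFailure applies the max-failures policy described above:
// consecutive is the task's current count of consecutive plugin errors,
// and the default maxFailures is 10.
func handleFailure(consecutive, maxFailures int) (disable bool) {
	if maxFailures >= 0 && consecutive >= maxFailures {
		return true // too many consecutive errors: disable the task
	}
	if maxFailures == -1 && consecutive > 0 && consecutive%10 == 0 {
		time.Sleep(1 * time.Second) // back off, then keep retrying
	}
	return false
}

func main() {
	fmt.Println(handleFailure(10, 10)) // true: the task is disabled
	fmt.Println(handleFailure(10, -1)) // false: sleeps 1s, task keeps running
}
```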

For more on tasks, visit [`SNAPCTL.md`](SNAPCTL.md).
@@ -103,7 +103,7 @@ The workflow is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) wh

#### Remote Targets

- Process and Publish nodes in the workflow can also target remote Snap nodes via the 'target' key. The purpose of this is to allow offloading of resource intensive workflow steps from the node where data collection is occuring. Modifying the example above we have:
+ Process and Publish nodes in the workflow can also target remote Snap nodes via the 'target' key. The purpose of this is to allow offloading of resource intensive workflow steps from the node where data collection is occurring. Modifying the example above we have:

```yaml
---

