Update documentation and tests to [docsrs] (#10240)
* Update documentation and tests to [docsrs]

* Fix language identifier

* Replace tensorflow with tokio
PyvesB authored Jun 10, 2024
1 parent 6be6c1d commit 8698fa1
Showing 4 changed files with 109 additions and 110 deletions.
2 changes: 1 addition & 1 deletion doc/TUTORIAL.md
@@ -71,7 +71,7 @@ Each service has a directory for its files:
`/services/example/example.service.js`.
If you add a badge for a new API, create a new directory.

Example: [wercker](https://github.com/badges/shields/tree/master/services/wercker)
Example: [Docs.rs](https://github.com/badges/shields/tree/master/services/docsrs)

- For service families with multiple badges we usually store the code for each
badge in its own file like this:
181 changes: 80 additions & 101 deletions doc/service-tests.md
@@ -12,7 +12,7 @@ automated tests should be included. They serve three purposes:
3. They speed up future contributors when they are debugging or improving a
badge.

Test should cover:
Tests should cover:

1. Valid behavior
2. Optional parameters like tags or branches
@@ -24,11 +24,11 @@ Test should cover:
Before getting started, set up a development environment by following the
[setup instructions](https://github.com/badges/shields/blob/master/doc/TUTORIAL.md#2-setup)

We will write some tests for the [Wercker Build service](https://github.com/badges/shields/blob/master/services/wercker/wercker.service.js)
We will write some tests for [Docs.rs](https://github.com/badges/shields/blob/master/services/docsrs/docsrs.service.js), a service that builds documentation for crates (packages in the Rust programming language).

### (1) Boilerplate

The code for our badge is in `services/wercker/wercker.service.js`. Tests for this badge should be stored in `services/wercker/wercker.tester.js`.
The code for our badge is in `services/docsrs/docsrs.service.js`. Tests for this badge should be stored in `services/docsrs/docsrs.tester.js`.

We'll start by adding some boilerplate to our file:

@@ -41,8 +41,8 @@ export const t = await createServiceTester()
If our `.service.js` module exports a single class, we can use
`createServiceTester`, which uses convention to create a
`ServiceTester` object. Calling this inside
`services/wercker/wercker.tester.js` will create a `ServiceTester` object
configured for the service exported in `services/wercker/wercker.service.js`.
`services/docsrs/docsrs.tester.js` will create a `ServiceTester` object
configured for the service exported in `services/docsrs/docsrs.service.js`.
We will add our tests to this `ServiceTester` object `t`, which is exported
from the module.
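
The boilerplate block itself is collapsed in the hunk above; as the `docsrs.tester.js` diff at the bottom of this page shows, it amounts to two lines:

```js
import { createServiceTester } from '../tester.js'
export const t = await createServiceTester()
```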

@@ -51,27 +51,30 @@ from the module.
First we'll add a test for the typical case:

```js
import { isBuildStatus } from '../test-validators.js'
import Joi from 'joi'

t.create('Build status')
.get('/build/wercker/go-wercker-api.json')
.expectBadge({ label: 'build', message: isBuildStatus })
t.create('Docs with no version specified')
.get('/tokio.json')
.expectBadge({
label: 'docs',
message: Joi.allow('passing', 'failing'),
})
```

1. The `create()` method adds a new test to the tester object.
The chained-on calls come from the API testing framework [IcedFrisby][].
Here's a [longer example][] and the complete [API guide][icedfrisby api].
2. We use the `get()` method to request a badge. There are several points to consider here:
- We need a real project to test against. In this case we have used [wercker/go-wercker-api](https://app.wercker.com/wercker/go-wercker-api/runs) but we could have chosen any stable project.
- We need a real crate to test against. In this case we have used [Tokio](https://docs.rs/tokio) but we could have chosen any stable, well-known crate.
- Note that when we call our badge, we are allowing it to communicate with an external service without mocking the response. We write tests which interact with external services, which is unusual practice in unit testing. We do this because one of the purposes of service tests is to notify us if a badge has broken due to an upstream API change. For this reason it is important for at least one test to call the live API without mocking the interaction.
- All badges on shields can be requested in a number of formats. As well as calling https://img.shields.io/wercker/build/wercker/go-wercker-api.svg to generate ![](https://img.shields.io/wercker/build/wercker/go-wercker-api.svg) we can also call https://img.shields.io/wercker/build/wercker/go-wercker-api.json to request the same content as JSON. When writing service tests, we request the badge in JSON format so it is easier to make assertions about the content.
- We don't need to explicitly call `/wercker/build/wercker/go-wercker-api.json` here, only `/build/wercker/go-wercker-api.json`. When we create a tester object with `createServiceTester()` the URL base defined in our service class (in this case `/wercker`) is used as the base URL for any requests made by the tester object.
- All badges on shields can be requested in a number of formats. As well as calling https://img.shields.io/docsrs/tokio.svg to generate ![](https://img.shields.io/docsrs/tokio.svg) we can also call https://img.shields.io/docsrs/tokio.json to request the same content as JSON. When writing service tests, we request the badge in JSON format so it is easier to make assertions about the content.
- We don't need to explicitly call `/docsrs/tokio.json` here, only `/tokio.json`. When we create a tester object with `createServiceTester()` the URL base defined in our service class (in this case `/docsrs`) is used as the base URL for any requests made by the tester object.
3. `expectBadge()` is a helper function which accepts either a string literal, a [RegExp][] or a [Joi][] schema for the different fields.
Joi is a validation library that is built into IcedFrisby which you can use to
match based on a set of allowed strings, regexes, or specific values. You can
refer to their [API reference][joi api].
4. We expect `label` to be a string literal `"build"`.
5. Because this test depends on a live service, we don't want our test to depend on our API call returning a particular build status. Instead we should perform a "picture check" to assert that the badge data conforms to an expected pattern. Our test should not depend on the status of the example project's build, but should fail if trying to generate the badge throws an error, or if there is a breaking change to the upstream API. In this case we will use a pre-defined regular expression to check that the badge value looks like a build status. [services/test-validators.js](https://github.com/badges/shields/blob/master/services/test-validators.js) defines a number of useful validators we can use. Many of the common badge types (version, downloads, rank, etc.) already have validators defined here.
4. We expect `label` to be a string literal `"docs"`.
5. Because this test depends on a live service, we don't want our test to depend on our API call returning a particular build status. Instead we should perform a "picture check" to assert that the badge data conforms to an expected pattern. Our test should not depend on the status of the example crate's documentation build, but should fail if trying to generate the badge throws an error, or if there is a breaking change to the upstream API. In this case, we specify a list with all possible response values, `Joi.allow('passing', 'failing')`. For more complex cases, [services/test-validators.js](https://github.com/badges/shields/blob/master/services/test-validators.js) defines a number of useful validators we can use with regular expressions. Many of the common badge types (version, downloads, rank, etc.) already have validators defined there (see the short sketch just after this list).
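
As mentioned in point 3 above, `expectBadge()` also accepts a RegExp. A minimal, hypothetical sketch of the same picture check written as a regular expression instead of a Joi schema:

```js
// Hypothetical alternative to the Joi schema above: assert the message with a
// RegExp. The test name and pattern here are illustrative only.
t.create('Docs status matches a pattern')
  .get('/tokio.json')
  .expectBadge({ label: 'docs', message: /^(passing|failing)$/ })
```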

When defining an IcedFrisby test, typically you would invoke the `toss()`
method, to register the test. This is not necessary, because the Shields test
@@ -89,7 +92,7 @@ harness will call it for you.
Let's run the test we have written:

```
npm run test:services -- --only=wercker
npm run test:services -- --only=docsrs
```

The `--only=` option indicates which service or services you want to test. You
@@ -101,11 +104,12 @@ runner.
Here's the output:

```
Server is starting up: http://lib/service-test-runner/cli.js:80/
Wercker
Build status
[ GET /build/wercker/go-wercker-api.json ] (572ms)
Server is starting up: http://localhost:1111/
DocsRs
[live] Docs with no version specified
[ GET /tokio.json ] (441ms)
1 passing (1s)
```
@@ -115,80 +119,98 @@ That's looking good!
Sometimes if we have a failing test, it is useful to be able to see some logging output to help work out why the test is failing. We can do that by calling `npm run test:services:trace`. Try running

```
npm run test:services:trace -- --only=wercker
npm run test:services:trace -- --only=docsrs
```

to run the test with some additional debug output.

### (4) Writing More Tests

We should write test cases for valid paths through our code. The Wercker badge supports an optional branch parameter so we'll add a second test for a branch build.
We should write test cases for valid paths through our code. The Docs.rs badge supports an optional version parameter, so we'll add a second test for a specific version. In this case we know for sure that the documentation for this older version was built successfully, so we specify a string literal instead of a Joi schema for `message`. This narrows down the expectation and gives us a more helpful error message if the test fails.

```js
t.create('Build status (with branch)')
.get('/build/wercker/go-wercker-api/master.json')
.expectBadge({ label: 'build', message: isBuildStatus })
t.create('Passing docs for version').get('/tokio/1.37.0.json').expectBadge({
label: 'docs@1.37.0',
message: 'passing',
color: 'brightgreen',
})
```

```
Server is starting up: http://lib/service-test-runner/cli.js:80/
Wercker
Build status
[ GET /build/wercker/go-wercker-api.json ] (572ms)
Build status (with branch)
[ GET /build/wercker/go-wercker-api/master.json ] (368ms)
2 passing (1s)
Server is starting up: http://localhost:1111/
DocsRs
[live] Docs with no version specified
[ GET /tokio.json ] (408ms)
[live] Passing docs for version
[ GET /tokio/1.37.0.json ] (171ms)
2 passing (2s)
```

Once we have multiple tests, sometimes it is useful to run only one test. We can do this using the `--fgrep` argument. For example:

```
npm run test:services -- --only="wercker" --fgrep="Build status (with branch)"
npm run test:services -- --only="docsrs" --fgrep="Passing docs for version"
```

Having covered the typical and custom cases, we'll move on to errors. We should include a test for the 'not found' response and also tests for any other custom error handling. The Wercker integration defines a custom error condition for 401 as well as a custom 404 message:
Documentation for tokio version 1.32.1 failed to build, so we can also add a corresponding test:

```js
httpErrors: {
401: 'private application not supported',
404: 'application not found',
}
t.create('Failing docs for version').get('/tokio/1.32.1.json').expectBadge({
label: 'docs@1.32.1',
message: 'failing',
color: 'red',
})
```

First we'll add a test for a project which will return a 404 error:
Note that in these tests we have specified a `color` parameter in `expectBadge`. This is helpful here because we want to exercise the badge's custom color logic; explicitly asserting color values is only necessary when a badge implements custom logic for setting its colors.

Having covered the typical and custom cases, we'll move on to errors. We should include a test for the 'not found' response and also tests for any other custom error handling. When a version is specified, the Docs.rs integration defines a custom error condition for 400 status codes:

```js
t.create('Build status (application not found)')
.get('/build/some-project/that-doesnt-exist.json')
.expectBadge({ label: 'build', message: 'application not found' })
httpErrors: version ? { 400: 'malformed version' } : {},
```
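
For context, this option is passed to `_requestJson()` alongside the schema and URL, as shown in the `docsrs.service.js` diff near the bottom of this page. A rough sketch of the surrounding method (its name and parameter destructuring are assumed here):

```js
// Sketch based on the docsrs.service.js diff below; the method name and
// destructured parameters are assumptions for illustration.
async fetch({ crate, version }) {
  return await this._requestJson({
    schema,
    url: `https://docs.rs/crate/${crate}/${version}/status.json`,
    // Map 400 responses to 'malformed version' only when a version was given.
    httpErrors: version ? { 400: 'malformed version' } : {},
  })
}
```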

In this case we are expecting a string literal instead of a pattern for `message`. This narrows down the expectation and gives us a more helpful error message if the test fails.
First we'll add a test for a crate and a test for a version that will each return a 404 error:

We also want to include a test for the 'private application not supported' case. One way to do this would be to find another example of a private project which is unlikely to change. For example:
```js
t.create('Crate not found')
.get('/not-a-crate/latest.json')
.expectBadge({ label: 'docs', message: 'not found' })

t.create('Version not found')
.get('/tokio/0.8.json')
.expectBadge({ label: 'docs', message: 'not found' })
```

We also want to include a test for a case where a malformed version was specified. For example:

```js
t.create('Build status (private application)')
.get('/build/wercker/blueprint.json')
.expectBadge({ label: 'build', message: 'private application not supported' })
t.create('Malformed version')
.get('/tokio/not-a-version.json')
.expectBadge({ label: 'docs', message: 'malformed version' })
```

### (5) Mocking Responses

If we didn't have a stable example of a private project, another approach would be to mock the response. An alternative test for the 'private application' case might look like:
If we didn't have a stable example of a crate version with a failing documentation build, another approach would be to mock the response. An alternative test for the 'Failing docs for version' case might look like:

```js
t.create('Build status (private application)')
.get('/build/wercker/go-wercker-api.json')
t.create('Failing docs for version')
.get('/tokio/1.32.1.json')
.intercept(nock =>
nock('https://app.wercker.com/api/v3/applications/')
.get('/wercker/go-wercker-api/builds?limit=1')
.reply(401),
nock('https://docs.rs/crate')
.get('/tokio/1.32.1/status.json')
.reply(200, { doc_status: false }),
)
.expectBadge({ label: 'build', message: 'private application not supported' })
.expectBadge({
label: 'docs@1.32.1',
message: 'failing',
color: 'red',
})
```

This will intercept the request and provide our own mock response.
@@ -204,57 +226,14 @@ and path.
[icedfrisby-nock]: https://github.com/paulmelnikow/icedfrisby-nock#usage
[nock]: https://github.com/node-nock/nock

Our test suite should also include service tests which receive a known value from the API. For example, in the `render()` method of our service, there is some logic which sets the badge color based on the build status:

```js
static render({ status, result }) {
if (status === 'finished') {
if (result === 'passed') {
return { message: 'passing', color: 'brightgreen' }
} else {
return { message: result, color: 'red' }
}
}
return { message: status }
}
```

We can also use nock to intercept API calls to return a known response body.

```js
t.create('Build passed')
.get('/build/wercker/go-wercker-api.json')
.intercept(nock =>
nock('https://app.wercker.com/api/v3/applications/')
.get('/wercker/go-wercker-api/builds?limit=1')
.reply(200, [{ status: 'finished', result: 'passed' }]),
)
.expectBadge({
label: 'build',
message: 'passing',
color: 'brightgreen',
})

t.create('Build failed')
.get('/build/wercker/go-wercker-api.json')
.intercept(nock =>
nock('https://app.wercker.com/api/v3/applications/')
.get('/wercker/go-wercker-api/builds?limit=1')
.reply(200, [{ status: 'finished', result: 'failed' }]),
)
.expectBadge({ label: 'build', message: 'failed', color: 'red' })
```

Note that in these tests, we have specified a `color` parameter in `expectBadge`. This is helpful in a case like this when we want to test custom color logic, but it is only necessary to explicitly test color values if our badge implements custom logic for setting the badge colors.

## Code coverage

By checking code coverage, we can make sure we've covered all our bases.

We can generate a coverage report and open it:

```
npm run coverage:test:services -- -- --only=wercker
npm run coverage:test:services -- -- --only=docsrs
npm run coverage:report:open
```

1 change: 1 addition & 0 deletions services/docsrs/docsrs.service.js
@@ -61,6 +61,7 @@ export default class DocsRs extends BaseJsonService {
return await this._requestJson({
schema,
url: `https://docs.rs/crate/${crate}/${version}/status.json`,
httpErrors: version ? { 400: 'malformed version' } : {},
})
}

35 changes: 27 additions & 8 deletions services/docsrs/docsrs.tester.js
@@ -2,17 +2,32 @@ import Joi from 'joi'
import { createServiceTester } from '../tester.js'
export const t = await createServiceTester()

t.create('Passing docs')
.get('/tokio/0.3.0.json')
.expectBadge({ label: 'docs@0.3.0', message: 'passing' })
t.create('Docs with no version specified')
.get('/tokio.json')
.expectBadge({
label: 'docs',
message: Joi.allow('passing', 'failing'),
})

t.create('Passing docs for version').get('/tokio/1.37.0.json').expectBadge({
label: 'docs@1.37.0',
message: 'passing',
color: 'brightgreen',
})

t.create('Failing docs')
.get('/tensorflow/0.16.1.json')
.expectBadge({ label: 'docs@0.16.1', message: 'failing' })
t.create('Failing docs for version').get('/tokio/1.32.1.json').expectBadge({
label: 'docs@1.32.1',
message: 'failing',
color: 'red',
})

t.create('Multiple builds, latest passing')
.get('/bevy_tweening/0.3.1.json')
.expectBadge({ label: 'docs@0.3.1', message: 'passing' })
.expectBadge({
label: 'docs@0.3.1',
message: 'passing',
color: 'brightgreen',
})

t.create('Getting latest version works')
.get('/rand/latest.json')
@@ -26,5 +41,9 @@ t.create('Crate not found')
.expectBadge({ label: 'docs', message: 'not found' })

t.create('Version not found')
.get('/tokio/not-a-version.json')
.get('/tokio/0.8.json')
.expectBadge({ label: 'docs', message: 'not found' })

t.create('Malformed version')
.get('/tokio/not-a-version.json')
.expectBadge({ label: 'docs', message: 'malformed version' })
