Allow to skip tests programmatically #7245
This seems like a bad idea to me. Currently you can do `it.skip()` to explicitly skip a particular test, and it's not even executed. Skipping programmatically, and only running a portion of your test suite as a result, doesn't seem like it's serving the purpose of tests. A test failure at that point would be beneficial so the problem(s) could be fixed. And if they can't be fixed, then marking a test as skipped explicitly, like I've shown above, is an appropriate reaction.
It serves integration tests, where tests depend on some external factors: unavailability of an external resource shouldn't indicate a problem with the project (reported as a failure) but the fact that the test cannot be pursued (hence skipped).
I'll leave this here for others to discuss, but personally I don't think this is a great idea. Your hypothetical example would not give confidence that any given change in a code base caused problems, which is why I said a test failure is beneficial.
Yes, it's not for the case where we want to confirm our project is free of bugs on its own (that should be solved with unit tests or mocked integration tests). It's about the case where we test integration with an external resource (such tests might run on a different schedule). Having failures both for resource unavailability and for errors in handling it makes the tests unreliable, as it produces false positives and as a result increases the risk of ignoring the latter type of issue.
For that case you could consider using `jest.retryTimes`.
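(For reference, `jest.retryTimes` is a real Jest API, though it requires the jest-circus runner; a minimal sketch:)

```js
// retry flaky tests up to 3 times before reporting a failure (jest-circus only)
jest.retryTimes(3);

test('fetches data from the external service', async () => {
  // `fetchFromService` is a hypothetical helper standing in for the real call
  const res = await fetchFromService();
  expect(res.ok).toBe(true);
});
```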
I don't want to retry, I want to abort without failing (and also signal that the test was not really run).
@medikoo I agree with @palmerj3. I think being able to dynamically disable tests kind of misses the point of testing. Instead of disabling because of the unavailability of some resource, I would argue that you probably want to be alerted to this with a failure and then resolve the real problem of the resource not being available.
When speaking of an external resource, I mean something I do not own or control, so it's not an issue I can resolve. And uptime monitoring of external resources, which alarms/informs me whether a given service is accessible or not, is a different thing which I don't see as a part of integration tests.
This is part of Jasmine. @aaronabramov might have thoughts on that?
what if the external source starts failing all the time? then you'll just have a skipped test that will never run. i think for very specific scenarios you can just use:

```js
test('stuff', () => {
  if (serviceUnavailable) {
    logSomething();
    return;
  }
  // else test something
});
```

but i agree with @palmerj3 that having this in the core doesn't look like a great idea
@aaronabramov that's what we do now (return and log), but since we have a large number of tests, those logs usually go unnoticed. If the tests were skipped instead, any skip that happened would be noticed in the final summary.
This is a pretty common use case. Sometimes writing tests correctly is hard and takes a long time. Sometimes writing a test that works only in certain circumstances is achievable in far less time and better than writing no tests at all or permanently skipping tests. So for all the devs with deadlines, here is a hacky workaround:
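(A common shape for such a workaround, not necessarily the commenter's exact code, is the conditional alias:)

```js
// pick the real `it` or `it.skip` at module load time, based on the condition
const itIf = (condition) => (condition ? it : it.skip);

itIf(process.env.RUN_INTEGRATION === 'true')('talks to the live service', async () => {
  // ...assertions against the live service
});
```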
Another possibility:

```js
if (thing) test.only('skipping all other things', () => {
  console.warn('skipping tests');
});
// ...all other things
```

Jest itself uses this to skip certain tests on Windows.
To weigh in on this: this is already a feature supported by Mocha, which is especially useful in integration testing.

Based on the responses here (and elsewhere on Google), the only options for this kind of test are:
And to address a couple of the common issues that have been raised with this kind of testing:

This is a business decision: I've decided that the cost of "test suite fails every time $service goes down" is higher than the cost of "certain portions of the test suite are not exercised until someone responds to the PagerDuty alert and fixes the broken service".
Jest does not allow you to skip a test once it's begun, but you can always extract that logic and conditionally use `it` or `it.skip`. For example:

```js
const {USERNAME, PASSWORD} = process.env
const withAuth = USERNAME && PASSWORD ? it : it.skip
if (withAuth === it.skip) {
  console.warn('USERNAME or PASSWORD is undefined, skipping authed tests')
}
withAuth('do stuff with USERNAME and PASSWORD', () => {
  // ...
})
```
What if you want to skip certain tests on certain OSes? That seems like a pretty valid reason for programmatically skipping tests.
@kaiyoma see above.
```js
describe('something wonderful I imagine', () => {
  it('can set up properly', () => {
    setUp()
  })
  it('can do something after setup', () => {
    skipIf('can set up properly').failed()
    setUp()
    doSomethingThatDependsOnSetUpHavingWorked()
  })
})
```

Idea being that I want one test to fail, telling me exactly "hey dingus, you broke this part", not that test plus all the others that depend on whatever it's testing going well. Basically I want test dependencies in JS.
I agree with @wolever 100%. @palmerj3 and @aaronabramov: your reasoning for not providing this feature is predicated on false assumptions about the business needs of our test applications. Your assumptions are understandable in the context of application self-testing, but for external resource tests the model breaks down fast.
My use case for conditionally skipping tests is when the resource is only available during certain times of the day/week. For example, testing the API consistency of a live stock market data service doesn't make sense on weekends, so those tests should be skipped. Yes, I assume the risk that the API response format changed over the weekend, but that's a business decision, as others have mentioned. @okorz001's conditional `it`/`it.skip` approach above covers this.
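(A sketch of that weekend guard, using the conditional-alias pattern shown earlier in this thread:)

```js
// getDay(): 0 = Sunday, 6 = Saturday
const isWeekend = [0, 6].includes(new Date().getDay());
const itOnTradingDays = isWeekend ? it.skip : it;

itOnTradingDays('live market feed matches the documented schema', async () => {
  // ...assertions against the live feed
});
```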
I'm assuming you have a very good reason not to mock the API calls in tests, so I won't ask. Can't you just perform your check within the tests?

```js
const hasAuth = USER && PASSWORD
describe('something wonderful', () => {
  it('does something with auth', () => {
    if (!hasAuth) { it.skip() }
    // ...
  })
})
```
```js
describe('auth tests', () => {
  if (!(USER && PASSWORD)) {
    it.only('', () => {
      console.warn('Missing username and password, skipping auth tests');
    });
  }
  // actual auth tests
});
// all non-auth tests
```

You could have a helper as well, sort of like:

```js
import {skipTestOnCondition} from './my-test-helpers'

describe('auth tests', () => {
  skipTestOnCondition(!(USER && PASSWORD));
  // actual auth tests
});
// all non-auth tests
```

If you don't like that, again, Jest does something very similar: https://github.com/facebook/jest/blob/3f5a0e8cdef4983813029e511d691f8d8d7b15e2/packages/test-utils/src/ConditionalTest.ts

I don't think we need to increase the API surface of Jest for this, when it's trivial to implement in user land.
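(A sketch of what such a helper could look like; the body below is an assumption for illustration, not the actual `my-test-helpers` module:)

```js
// my-test-helpers.js
// When the condition is truthy, register a single focused placeholder test.
// Because `.only` focuses tests per file, every other test in the file is
// then reported as skipped rather than executed.
export function skipTestOnCondition(condition) {
  if (condition) {
    it.only('skipped: precondition not met', () => {
      console.warn('Precondition not met, skipping the other tests in this file');
    });
  }
}
```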
I do still want this, which isn't trivial in userland:
Seems also like something you can use
In our case, we have different teams working on different aspects of the pipeline. We use ENV flags to decide if a service is available for integration testing. For teams to work independently, we wanted to see if we could run full test suites based on a given ENV set up in the pipeline, or skip them altogether. Using your suggested approach above @medikoo, it would mean a member from team X would have to go back and touch code when team Y completes their service. If the mocked specifications worked well and were well tested in the first place, there shouldn't be a need to do this at all. Please consider this use case.
Please reconsider this feature request: unit testing frameworks are used not only for unit testing. They are also valuable in integration testing.
@elialgranti There has been more discussion on this (and it seems like core devs are in favour of it) in #8604.
@SimenB: I've tried the approach above: https://repl.it/repls/SillyPastDictionary. Filed #9014.
pytest permits skipping tests programmatically because of missing preconditions. If a test cannot pass because a precondition is missing, it will give a false positive. For example, I need to test that pressing a button switches my lamp on. If there is no electric power, the test needs to be skipped, otherwise you will get a false positive. There will be another test somewhere that checks there is electrical power. For now, in Jest, I need to guard the assertions that cannot succeed:

```js
test('something', async () => {
  const precondition = something
  if (precondition) {
    // ...assertions that require the precondition
  }
})
```

but this is boilerplate.
Another reason to have a feature for this: we have some VERY long-running tests on sections of a system that don't change much. We want to keep these running on CI, where we don't care if they take a long time, but devs shouldn't have to worry about them while developing. Many other test runners have ways of classifying tests (small, big, ci, etc.) and then you can pick which tests you want to run while developing.
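(One way to approximate such test classes with today's Jest is the real `testPathIgnorePatterns` config option; the `slow/` directory layout below is just an assumption:)

```js
// jest.config.js: exclude long-running suites locally, include them on CI
module.exports = {
  testPathIgnorePatterns: process.env.CI
    ? ['/node_modules/']
    : ['/node_modules/', '/__tests__/slow/'],
};
```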
Another use case is the ability to run some tests only in CI. E.g. I have some canary integration tests that run against a production system using secrets which are stored in CI only. Developers, including open source devs, simply don't have access to these keys, so the tests would fail on their systems.
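(A common guard for that case, sketched below; most CI providers set the `CI` environment variable:)

```js
// run only where the CI-held secrets exist; elsewhere, report as skipped
const itInCI = process.env.CI ? it : it.skip;

itInCI('canary: production login flow works', async () => {
  // ...uses credentials available only in CI
});
```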
Another use case, similar to @moltar's, is when a server may or may not have the capability to run a particular test. For example, I'm writing tests to verify that Daylight Saving Time is handled correctly, but if they're run in a local timezone without DST (which I can detect programmatically) then I want to skip the tests while letting users know that they were skipped. Here's how I'm doing it now, which seems to be working pretty well.
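(A sketch of that programmatic DST check, assuming the usual offset-comparison trick: a zone observes DST if its January and July UTC offsets differ:)

```js
// getTimezoneOffset() returns minutes; it varies across the year only in DST zones
const jan = new Date(2021, 0, 1).getTimezoneOffset();
const jul = new Date(2021, 6, 1).getTimezoneOffset();
const hasDST = jan !== jul;

const itWithDST = hasDST ? it : it.skip;
if (!hasDST) console.warn('Local timezone has no DST; skipping DST tests');

itWithDST('handles the spring-forward transition', () => {
  // ...
});
```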
We ended up doing something like this:

```js
function describe(name, callback) {
  if (name.toLowerCase().includes('bluetooth') && BLUETOOTH_IS_DISABLED)
    return this.describe.skip(name, callback);
  else
    return this.describe(name, callback);
}
```

Not perfect, but it works well and is unobtrusive. This prevents the usage of
my use case for this is tests relying on external services that we have no control over. obviously some parts of the test should be mocked, but it would also be good to actually test the request to the service.
Was this ever resolved?
I only want to run integration tests against service Foo when I've started service Foo and indicated it to my tests with an environment variable.
Has anyone successfully gotten tests to skip based on an async condition? I have found that Jest parses tests before my async condition is resolved, even with various assiduous logic in setup hooks. I'd still like to see this feature in core.
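(One workaround that fits Jest's synchronous test collection: resolve the async condition in `globalSetup`, a real Jest config option that runs before any test file is parsed. The health-check URL below is an assumption:)

```js
// global-setup.js (wired up via jest.config.js: { globalSetup: './global-setup.js' })
const http = require('http');

module.exports = async () => {
  const up = await new Promise((resolve) => {
    const req = http.get('http://localhost:8080/health', (res) =>
      resolve(res.statusCode === 200)
    );
    req.on('error', () => resolve(false));
  });
  // process.env values set here are inherited by the test workers
  process.env.SERVICE_UP = String(up);
};
```

Test files can then alias synchronously: `const itIfUp = process.env.SERVICE_UP === 'true' ? it : it.skip;`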
Pretty annoying that this isn't accepted as a feature; the arguments against feel a bit weak, especially as there is a clear use case for it. In any case, if you're still looking for a solution, there is a pretty simple pattern here: https://stackoverflow.com/questions/58264344/conditionally-run-tests-in-jest/66143240#66143240
My use case is to run a matrix of tests. If you have 20 preconditions and 20 tests to go through, you can't expect one to write 400 test cases by copy-pasting similar code and manually describing each scenario name individually. Having a way to skip programmatically would avoid that.
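(For context, such a matrix is typically driven with Jest's real `describe.each`/`test.each` APIs; `preconditions`, `cases`, and `precondition.holds` are hypothetical names, and the early `return` is today's silent workaround that this issue asks to replace with a proper skip:)

```js
describe.each(preconditions)('with %s', (precondition) => {
  test.each(cases)('handles %s', (testCase) => {
    if (!precondition.holds) return; // passes vacuously instead of reporting "skipped"
    // ...actual assertions for this row of the matrix
  });
});
```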
FWIW, you can get around this using a custom test environment and `handleTestEvent`:
```js
// jest-environment.js
// Opt in per test file with the `@jest-environment ./jest-environment` docblock.
const NodeEnvironment = require('jest-environment-node');
const { stateService } = require('./dist/app/utils/state.service');

class CustomEnvironment extends NodeEnvironment {
  constructor(config, context) {
    super(config);
    this.global.stateService = stateService;
  }

  async handleTestEvent(event, state) {
    if (event.name === 'run_describe_start') {
      stateService.setValueOf('skip', false);
    }
    if (event.name === 'test_start') {
      try {
        if (event.test.status !== 'skip') this.checkForPreviousFailures(state.currentlyRunningTest);
      } catch (e) {
        stateService.setValueOf('skip', true);
      }
    }
    if (event.name === 'test_done') {
      if (event.test.errors.some((e) => e[0].name === 'DependencyError')) {
        event.test.errors = [];
        event.test.status = 'skip';
      }
    }
  }

  checkForPreviousFailures(state) {
    if (state.parent.children.find((child) => child.errors.length > 0)) throw new Error('Found an Error');
  }
}

module.exports = CustomEnvironment;
```
```ts
// state.service.ts: a simple key-value store for JS
export class StateService {
  private _store: any = {};

  constructor() {}

  get store() {
    return this._store;
  }

  getValueOf(key: string) {
    if (this._store[key] !== undefined) return this._store[key];
    else throw new Error(`${key} does not exist within the current store`);
  }

  setValueOf(key: string, value: any) {
    this._store[key] = value;
  }

  appendTo(key: string, value: any, create: boolean = true) {
    if (this._store[key]) {
      if (!Array.isArray(this._store[key])) throw `key=${key} is not an array type, thus it cannot be appended`;
      this._store[key].push(value);
    } else {
      if (create) {
        this._store[key] = [value];
      } else {
        throw new Error(`${key} does not exist within the current store`);
      }
    }
  }
}

export const stateService = new StateService();
```
```ts
// dependency.service.ts
import { StateService } from './state.service';

// @ts-ignore
let stateService: StateService = global.stateService;

export class DependencyError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'DependencyError';
  }
}

export const checkDependency = () => {
  if (stateService.getValueOf('skip')) throw new DependencyError('A dependency check has failed');
};
```
```ts
// dependency.spec.ts
import { checkDependency } from "../../utils/dependency.service"

describe('Domain1', () => {
  describe('E2E flow 1', () => {
    it('should be true', () => {
      expect(true).toBeTruthy()
    })
    it('should fail', () => {
      checkDependency()
      expect(true).toBeFalsy()
    })
    it('should skip this', () => {
      checkDependency()
      expect(true).toBeTruthy()
    })
    it('should skip this too', () => {
      checkDependency()
      expect(true).toBeTruthy()
    })
  })
  describe('E2E flow 2', () => {
    it('should be true', () => {
      expect(true).toBeTruthy()
    })
    it('should be true', () => {
      checkDependency()
      expect(true).toBeTruthy()
    })
    it('should be true', () => {
      checkDependency()
      expect(true).toBeTruthy()
    })
    it('should be true', () => {
      checkDependency()
      expect(true).toBeTruthy()
    })
  })
})
```
```
FAIL src/api/app/e2e/positive/dependency.spec.ts
  Domain1
    E2E flow 1
      ✓ should be true (3 ms)
      ✕ should fail (2 ms)
      ○ skipped should skip this
      ○ skipped should skip this too
    E2E flow 2
      ✓ should be true
      ✓ should be true
      ✓ should be true
      ✓ should be true

  ● Domain1 › E2E flow 1 › should fail

    expect(received).toBeFalsy()

    Received: true

       8 |     it('should fail', () => {
       9 |       checkDependency()
    > 10 |       expect(true).toBeFalsy()
         |                    ^
      11 |     })
      12 |     it('should skip this', () => {
      13 |       checkDependency()

      at Object.<anonymous> (src/api/app/e2e/positive/dependency.spec.ts:10:26)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 2 skipped, 5 passed, 8 total
Snapshots:   0 total
Time:        2.123 s
Ran all test suites matching /src\/api\/app\/e2e\/positive\/dependency.spec.ts/i.
```
There are two ways to disable tests in Jest which I believe are not documented yet (really!). With a conditional statement, you can call
This is also extremely useful in theory-based testing that uses the assume paradigm: allowing the caller to generate test cases and then skip the ones whose assumptions don't hold.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
🚀 Feature Proposal
Mocha supports skipping tests programmatically (in both `before` and `it`), as shown in the sketch below.
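(Mocha's documented mechanism is `this.skip()`; it requires regular functions rather than arrow functions so that `this` is bound. A minimal sketch:)

```js
before(function () {
  // abort the whole suite as "pending" when the external resource is missing
  if (!process.env.SERVICE_URL) this.skip();
});

it('talks to the service', function () {
  if (serviceUnavailable) this.skip(); // marks just this test as pending
  // ...
});
```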
Motivation
It's very useful for cases where during test setup we find out whether a test can be pursued or not, e.g. we need some external data but, due to some unavailability, we can't get it, so we decide to skip the tests.
Is this somewhere on a roadmap?