skip suite and test without failing #332
+1
+1
+1
This issue has been inactive for over 1 month so I'm closing it. If you think it's still an issue, re-open. - tjbot
AFAIK it is still an issue ;)
This is definitely still an issue.
still an issue, please
hmm i never even saw this issue. I'm not sure I get it, what would this be useful for? Is this fixed by …
Many reasons to skip without failing.
well: …
Does not invalidate the need to skip without failing.
I'm open to the idea but I'm not sold on needing to conditionally skip tests. Even if that's the case, --grep is great for that, or just separate those tests into other files and don't load them for some envs
Sometimes you cannot know until you are actually in the test.
sure, but I would argue that it's not a very good test if you have to conditionally skip it. If some external service is not available then it should be mocked etc, or use an environment variable to toggle whether or not it's hitting the service or the mock
Sometimes not possible.
You are thinking only of unit tests; please believe me, I have 10 years of experience in the software test industry and a …
anyway apparently I am not expert enough so I will just stop talking and let the other +1's pipe in with their perspectives.
nah man I don't mean any offense, I just nitpick because I'm trying to understand the use-cases better before committing to an API that I need to maintain, that's all
I understand :-)
plus there are several +1's in here so it definitely seems like something people want, I just need use-cases so I can figure out the best way to approach this
@visionmedia My use case is this: I have integration tests that check that a code snippet displayed on a web page does the right thing when executed. This is done by executing the code in a subprocess. However, we have a fair number of such snippets, and for some of them the environment might not be installed (e.g. the F# binary). I would like to mark those as skipped...
@Almad but why not tag them with "f#" or similar, …
@visionmedia I would prefer a unified starting interface so that the executor doesn't have to know which tests to select.
I'll think about it, this just seems like a very weird way to handle it
We have a simple use case for it. We have a system which can act in two different modes, and many functions are disabled in one of these modes. The mode is toggled by an environment variable, and we'd like to skip all the tests for the affected functions based on the same environment variable.
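A minimal sketch of how that use case could look with the `this.skip()` behaviour that eventually landed later in this thread (#946); the `MODE` environment variable and its values are hypothetical, chosen only for illustration:

```js
describe('functions that are disabled in restricted mode', function () {
  before(function () {
    // MODE is a hypothetical environment variable toggling the system's mode.
    if (process.env.MODE === 'restricted') {
      this.skip(); // report the suite's tests as pending instead of failing them
    }
  });

  it('does something that only works in full mode', function () {
    // ...
  });
});
```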
@pedrosnk that's mostly an organizational problem, you could for example use https://github.com/visionmedia/mocha/wiki/Tagging
I just don't get why you are so opposed to this. Every other test harness …
because every feature I add includes additional baggage that I have to maintain; no feature should be added without discussion unless it's clearly needed. This is definitely one I've never needed, and it can easily be bypassed using other existing features IMO
I have also bumped into this limitation. Here is another use case: a set of integration tests you want to skip when a service isn't running. Wrapping the describe in an if works, but it doesn't notify the dev running the tests that some things were skipped. Failing isn't an option because certain integration environments can't have these dependent services running. The ability to skip from within a before() is the best solution to this problem.
@jbnicolai Hmm, @visionmedia gave me push access, so I don't know what that means exactly for "maintainer" status. Anyway, I have push access, so hi. I hope to help with cleaning up this backlog of issues.
You can see the list of collaborators on this project via: …
I don't know any other way to do it.
ya we have about a gazillion "maintainers" now. what matters is that mocha is going to continue being developed.
Haha, hi @boneskull, and welcome 😄. Thanks for the cURL command to list people with push access, that's quite handy. I suggest adding everyone with push access to the …
as mentioned in mochajs#332
+1 for that, @starsquare's solution looks great
Allow skip from test context for #332
Closed by #946.
I'm still confused about this. I'm doing an API GET, getting a 404, and then calling this.skip(). But when I call this.skip() I get an uncaught reference error to "this". It looks like: …
Also, I've never used GitHub before so this may be in the wrong place....
@ToshioMagic This made it into the 2.2.0 release of mocha; what version are you using?
@cmbuckley 2.2.5
@ToshioMagic sounds like you're dealing with a separate problem. As the …
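One likely cause of the "this" error described above, offered here as an assumption since the original snippet and reply are cut off: calling `this.skip()` from inside a request callback, where `this` is no longer Mocha's test context. A minimal sketch of keeping a reference to the context; the endpoint URL is hypothetical, and skipping from inside an async callback additionally needs the async-skip support discussed further down (#2335, Mocha 3.x):

```js
const http = require('http');

it('exercises the endpoint when it is available', function (done) {
  // Keep a reference to the Mocha test context; inside the callback below,
  // `this` no longer refers to the running test.
  const test = this;
  // Hypothetical endpoint, used only for illustration.
  http.get('http://localhost:3000/api/resource', function (res) {
    if (res.statusCode === 404) {
      return test.skip(); // mark the test as pending instead of failing
    }
    // ... assertions against res ...
    done();
  }).on('error', done);
});
```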
My test structure is defined inside a require module, e.g.

```js
define(function (require) {
  before(function () {
    // do stuff
  });
  require('spec1');
  require('spec2');
  // etc
});
```

And in the specs there is something similar to

```js
describe('Suite', function () {
  var loaded = false;
  before(function () {
    if (!loaded) return this.skip();
    // ...
  });

  it('should do something', function () {
    // ...
  });
});
```

From what I can tell, if the module is not loaded it should skip the rest of that suite but not spec2; in my case it is skipping ALL tests after that suite. Is this by design?
@ToshioMagic were you able to solve the problem? I have the same issue: when I use skip() inside an async callback I get the following error from mocha:

```
Error: the object {
  "message": [undefined]
  "uncaught": true
} was thrown, throw an Error :)
    at Runner.fail (\node_modules\mocha\lib\runner.js:225:11)
    at Runner.uncaught (\node_modules\mocha\lib\runner.js:682:8)
    at process.uncaught (\node_modules\mocha\lib\runner.js:727:10)
```

I'm using mocha v2.3.4. Can anyone confirm this problem? Should I open a new issue?
@Code-guru skip within async code isn't yet supported, sorry! See #1618
That worked, thanks. :)
@danielstjules Thanks Dan, I saw you had added the async skip in previous releases. But why is it not available in v2.3.4? I think its use cases are very common. For example, when you have an end-to-end system test and need to check whether the server is listening, and if it isn't, just skip the test. Also, since all network requests and database connections are async in node.js, an async skip is a necessity, not just a fancy feature.
Async skip was added in #2335
as of mocha 3.2.0 …
And yes, for any doubters, the use case is very common: …
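As a concrete illustration of the end-to-end use case mentioned above, here is a minimal sketch of skipping a whole suite when the server is not listening, assuming Mocha 3.x or later where async skip is supported (#2335); the host and port are hypothetical:

```js
const net = require('net');

describe('end-to-end tests', function () {
  before(function (done) {
    const hook = this; // Mocha hook context
    // Hypothetical address of the server under test.
    const socket = net.connect({ host: 'localhost', port: 8080 });
    socket.once('connect', function () {
      socket.end();
      done(); // server is up, run the suite
    });
    socket.once('error', function () {
      hook.skip(); // server not listening: mark the suite's tests as pending
    });
  });

  it('talks to the real server', function () {
    // ...
  });
});
```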
I agree functionality / methods should be consistent. I wanted to point out that – as I understand it – the outcome described in your use cases would be better addressed by the tagging feature (which is not there yet, but see #1445).
@dasilvacontin Thanks for making me aware of the work being done on tags. However, tags are inferior to simply allowing a conditional. First of all, for most scenarios they will force you to tag every single spec. Also, you can imagine someone coming up with multiple tags just to describe a few conditions. Allowing for conditionals is much more flexible. Simply ensuring this.skip() works everywhere (describe, it, before, etc.), and being able to provide an optional skip reason that shows up in the output, would be the most flexible thing to do: …
Outputs: …
@mrt123 Wouldn't you have to add an …

```js
// file1
describe('some class', function () {
  this.tag('integration', 'slow')
  ...
})

// file2
describe('some other class', function () {
  this.tag('integration', 'slow')
  ...
})
```

vs

```js
// file1
const { isIntegration, isSlow } = require('./specUtils.js')
describe('some class', function () {
  if (isIntegration) this.skip('integration skip reason')
  if (isSlow) this.skip('slow skip reason')
  ...
})

// or
const { skipIfIntegration, skipIfSlow } = require('./specUtils.js')
describe('some class', function () {
  skipIfIntegration(this)
  skipIfSlow(this)
  ...
})
```

(since you probably want to reuse the condition across spec files, and maybe some of the skip messages)

If what you are concerned about is skip messages, that's a different improvement (feel free to open an issue if there's none!), and nothing would stop you from using both tags and skip when there's a particular case where you want to skip and show a message.
To sum up, it feels like moving conditions from setup declarations to code inside tests. You would be able to have any conditions you want that generate options / set filter tags. Unless you don't plan on reusing them in multiple specs/tests, I don't see the benefit of having the conditions inside the spec/test itself (and reusing them is the common case).

```js
// setup.js
const mocha = require('mocha')
const tags = []
if (/* integration condition */) tags.push('integration')
if (/* slow condition */) tags.push('slow')
mocha.tags(tags)
```
@dasilvacontin Agreed, however still …
@mrt123 Yep, I'm aware of that sad fact. I was just replying to: …
For now you can only conditionally declare the sub …
I would like some function like skipTest() and skipSuite() that would allow me to skip a test or suite without failing it.
If you are in suiteSetup() then you could skip the suite
if you are in test() then you could skip the rest of the suite or the test
if you are in setup() then you could skip the rest of the suite or the test
if you are in tearDown() then you could skip the rest of the suite
if you are in suiteTearDown() then you could not skip either
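For reference, a hedged sketch of how part of the requested behaviour maps onto the `this.skip()` API that eventually shipped (#946, #2335), using the TDD interface named above; `featureAvailable` and its environment variable are hypothetical:

```js
// Hypothetical flag; in practice this would come from the environment or config.
const featureAvailable = Boolean(process.env.FEATURE_AVAILABLE);

suite('some feature', function () {
  suiteSetup(function () {
    // Skipping in suiteSetup() marks every test in the suite as pending.
    if (!featureAvailable) this.skip();
  });

  setup(function () {
    // this.skip() here would mark only the current test as pending.
  });

  test('does something', function () {
    // this.skip() here would mark just this test as pending.
  });
});
```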