Feature: Top-level suite configuration #669
The same thing goes for that. To do this, we would need to provide a way to annotate tests so that Intern will know what type of environment is required (or desired, in case they're different). There's definitely some worthwhile possibility here, although it would require some significant internal changes. It would also probably work better if Intern could pre-enumerate its tests so that it could select the necessary engine(s) up front; that's planned for some indeterminate future release. A short-term possibility is just to use separate configs for each testing stage (where the user could handle which tests were run, and how they were run, in the stage config).
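As a purely hypothetical sketch of the annotation idea (the `environment` property below is invented for illustration and is not an existing Intern option), a suite might declare the environment it needs so that a future Intern could select engines up front:

```js
// Hypothetical sketch only: `environment` is an invented annotation, not a
// real Intern option; it illustrates per-suite environment metadata.
define([ 'intern!object', 'intern/chai!assert' ], function (registerSuite, assert) {
  registerSuite({
    name: 'widget rendering',

    // invented metadata: these unit tests need a real browser, not the Node client
    environment: 'browser',

    'renders into the page': function () {
      var node = document.createElement('div');
      assert.isDefined(node);
    }
  });
});
```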
I think what would make this more intuitive for people is if it were …
Sure. In the above configuration, a test stage would give us several benefits.
We currently control the mechanics of how we test using … The process would look something like …
This type of selective instrumentation will also likely set us up for a pluggable instrumentation architecture by allowing the configuration to define additional 3rd-party instrumentation plugins. @jason0x43, regarding annotating tests, I'm not sure I follow. Is this similar to …?
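To make the pluggable-instrumentation idea concrete, here is a hypothetical stage entry; the `platform` key follows the proposal below, while `plugins` and the plugin name are invented for illustration:

```js
// Hypothetical sketch: `plugins` and the plugin name are invented to show how
// a stage could declare 3rd-party instrumentation alongside its engine.
{
  stages: {
    visual: {
      suites: [ 'tests/visual/all' ],
      platform: 'webdriver',
      plugins: [ 'intern-visual-regression' ] // hypothetical plugin package
    }
  }
}
```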
The term "instrumentation" is already used for code coverage; "test runner" or "engine" would be preferable when talking about the thing that's running the tests. Unfortunately, we kind of already used "runner" for "webdriver runner" (vs. "Node client"). So... engine? Test runner?

This seems like more of a usage pattern than a feature. I mean, most of this can already be handled easily via external scripts, or in the Intern config itself. Also, the concept of "stages" will be unique to the user, and it may not be something with universal appeal. It's pretty trivial to select suite sets in a single config:

```js
define([ 'intern' ], function (intern) {
  var stage = intern.args.stage;
  var config = {
    // stuff
    suites: [],
    functionalSuites: []
  };

  switch (stage) {
    case 'visual':
      config.suites = [ /* suites */ ];
      config.functionalSuites = [ /* other suites */ ];
      break;
    case 'e2e':
      // ...
      break;
  }

  return config;
});
```

Getting some sort of automatic or configured test runner selection is a bit trickier. There isn't a good way for Intern to know whether unit tests need a browser or can run in the Node client (and the user might prefer one in any case). There are also cases where you may want to do both (like Intern's self-tests).
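With a config module like this, the stage would presumably be selected through Intern's `key=value` command-line arguments, e.g. `intern-runner config=tests/intern stage=visual`, and the `intern.args.stage` check above picks the matching suite lists.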
How about we add one more array to the config? The problem lies in the fact that …
That's a possibility. We could have something like …
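For illustration only, one possible shape of that extra array (the `browserSuites` name is hypothetical, not an actual Intern option):

```js
// Hypothetical sketch: an invented `browserSuites` key splitting unit suites
// by environment, alongside the existing `suites` and `functionalSuites`.
{
  suites: [ 'tests/unit/all' ],               // unit tests that can run in Node
  browserSuites: [ 'tests/unit/dom' ],        // unit tests that need a browser (invented key)
  functionalSuites: [ 'tests/functional/all' ]
}
```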
It sounds like we're going to hold off on this until around/after Intern 4 is released. I am still in favor of engine and suite selection via configuration rather than programmatically defining what engines and suites should run. All of these are problems that were mentioned here: engine selection (runner vs. client), multiple suite configuration, and test selection (i.e. …).
While selecting tests that run in Node, the browser, or WebDriver covers the common filters, we should aim to generalize test filtering using … Finally, just to play devil's advocate for following the programmatic path: Intern provides information about what engine it is running in, so technically the separate …
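For reference, Intern 3 already exposes a `grep` config option (a regular expression matched against test identifiers), which is one existing hook for this kind of generalized filtering; a minimal sketch:

```js
// Only tests whose full identifiers match `grep` will run; everything else
// in the listed suites is skipped.
{
  suites: [ 'tests/unit/all' ],
  functionalSuites: [ 'tests/functional/all' ],
  grep: /visual|e2e/
}
```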
The main issue I have is one of simplicity vs. complexity. Currently, the available tools (Intern features) are pretty straightforward. We have a Node client, a browser client, and a WebDriver runner. You select your environment, and Intern runs the relevant tests. (There is some conflation between Node and browser suites because of the single `suites` list.) A specialized lifecycle-based configuration layer adds complexity to Intern without actually achieving the goal of relieving the test writer of having to be aware of test runners, since there is no direct mapping between lifecycle stage and test runner. That's not to say that this sort of configuration wouldn't be valuable in some scenarios, but it may fit better in a front end rather than in Intern's core.
I don't mean to sound like I think Intern doesn't need improvement. It could certainly be made more approachable and easier to configure. I just think we may be better served by focusing on making the core functionality easier to understand and utilize than we would by adding another layer of configuration.
We're not adding an additional layer of configuration so much as moving choices that already need to be made by the developer into the configuration. Instead of making engine selection an external process that happens on the command line, we are better served by moving it into the configuration. Today, if an engineer in charge of testing wants multiple test lifecycles that use various engines, they need to write an Intern configuration file for each one and inform other engineers on their team which configuration to use and which engine to select. There are external solutions to this issue of information sharing (e.g. npm scripts, grunt, gulp, the intern cli, the command line), but these solutions do nothing to reduce complexity on the whole; they only help compartmentalize it. Instead, we should move these choices into a configuration file that can be used as a single source of information.
The above configuration moves engine and suite selection into a single configuration file, which could in turn be queried by the cli (e.g. …). Intern is a very capable and complete testing framework. If we are going to have complexity, my preference is to keep it in a single location and use the cli to simplify usage for the overall team. We are not well served by keeping Intern simple if doing so makes it harder to use for everyone else.
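As a sketch of the external wiring being discussed (assuming Intern 3's bundled Grunt task with its `runType` and `config` options; the config file names are illustrative), each stage today typically ends up as a separate task target pointing at its own config file:

```js
// Gruntfile.js sketch: one target per testing stage, each with its own
// Intern config file and run type (client vs. runner).
module.exports = function (grunt) {
  grunt.loadNpmTasks('intern');

  grunt.initConfig({
    intern: {
      unit: {
        options: { runType: 'client', config: 'tests/intern-unit' }
      },
      functional: {
        options: { runType: 'runner', config: 'tests/intern-functional' }
      },
      e2e: {
        options: { runType: 'runner', config: 'tests/intern-e2e' }
      }
    }
  });

  // e.g. `grunt intern:e2e` runs only the end-to-end stage
  grunt.registerTask('test', [ 'intern:unit', 'intern:functional' ]);
};
```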
tl;dr: This proposal would fit better in a front end such as intern-cli than in Intern's core.
In the projects I've seen and worked on, that kind of knowledge is abstracted away, as much as makes sense, with grunt, scripts, maven, or whatever test driver the project uses. Yes, the test writers need to have some understanding of how Intern works, but the proposed stage config doesn't really help with that.
I think this is the key point where I disagree with this proposal. I feel like this type of opinionated configuration should be handled with external solutions (including intern-cli).
I think we should discuss how we envision fixing usability issues with Intern. From what you said above, it sounds like, if we had to do it all over again, Intern would only have a programmatic API and we would produce external tools like the cli on top of it. We should use our review time on the roadmap to discuss where Intern 4 and 5 should take us, including whether we should reseat this feature request in the cli. And if, in some parallel universe, Intern did only have a programmatic API and relied on external tools like the cli … Long story short, we need to revisit our vision for Intern before we come back around to this. I am OK with moving this feature as long as it produces a cohesive ecosystem, but I don't think it is possible to move this feature out of Intern's core without also externalizing Intern's configuration.
This is a natural direction for Intern to take; there's no need to "do it all over again". It lets core Intern focus on providing testing facilities without shackling it to any particular UI or testing methodology.
This feature could be implemented in a UI without requiring any changes to Intern.
There is no requirement that Intern handle engine selection. It certainly could (to some extent), but the UI could just as well. The main goal of creating a feature like this should be to improve usability for end users. Assuming automatic engine selection is something that's a significant pain point, it's not going to matter to an end user where that's handled in Intern's stack, just that it's handled. From an architectural perspective, I'm not sure that it's better to have Intern perform automatic engine selection vs letting the UI (or the user) make that decision.
Intern's CLI is already a pretty thin layer over the underlying code; exposing an API won't be all that difficult. In any case, many users already rely on something else to provide the user-facing UI, like grunt, npm scripts, maven, etc., so this isn't a huge paradigm shift.
One of the main benefits of separating the UI from the core is that experimentation with new UI concepts becomes both easier and less risky. No changes to Intern's core (that I can see) are required to implement this feature, so there's no reason it couldn't be implemented in a standalone UI right now (or added to intern-cli).
This ticket started with adding to Intern's configuration to address a common use case and a well-defined scenario, and the level of effort has increased to creating an external UI. Creating an external UI with the same features as Intern means adding the configuration, then building a cli, a grunt task, and a browser runner that can transform the new configuration into one Intern understands, and then maintaining all of that separately as Intern is updated. This is an unrealistically high bar to set for any feature.
During some offline discussions, the following modifications to Intern were proposed that would allow this feature to be implemented: …
With these updates, high-level organization can be implemented using individual config files. These can be queried using standard OS facilities.
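A sketch of that kind of organization under Intern 3, assuming per-stage AMD config modules that extend a shared base config (the file names are illustrative):

```js
// tests/intern-visual.js — per-stage config that shallow-copies the shared
// base config (tests/intern.js) and overrides only the suite lists.
define([ './intern' ], function (baseConfig) {
  var config = {};
  for (var key in baseConfig) {
    config[key] = baseConfig[key];
  }

  config.suites = [];                               // no unit suites in this stage
  config.functionalSuites = [ 'tests/visual/all' ]; // visual regression suites only

  return config;
});
```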
Closing this as it's specific to Intern 3, and because Intern 4 essentially supports the original proposal with child configs.
Currently, Intern's configuration recognizes two modes of testing. These modes break down into Selenium/WebDriver tests (`functionalSuites`) and tests that can run in a client or browser platform (`suites`). I'd like to propose a top-level, stage-based suite configuration, something like:
```js
{
  stages: {
    unit: {
      suites: [ 'tests/unit/all' ],
      platform: 'client'
    },
    // functional tests of individual components
    functional: {
      suites: [ 'tests/functional/all' ],
      platform: 'webdriver'
    },
    // blackbox feature testing (requires VPN & server)
    e2e: {
      suites: [ 'tests/e2e/all' ],
      platform: 'webdriver'
    },
    // visual regression tests (requires VPN & server)
    visual: {
      suites: [ 'tests/visual/all' ],
      platform: 'webdriver'
    },
    // tests generated by QA using Intern recorder (requires VPN & server)
    acceptance: {
      suites: [ 'tests/qa/all' ],
      platform: 'webdriver'
    },
    default: {
      suites: [ 'tests/unit/all', 'tests/functional/all' ],
      platform: 'webdriver'
    }
  }
}
```

From the command line we can run tests using:

```
intern unit --config=tests/intern
```

or `intern --config=tests/intern` to run the default set of tests.

Being able to use custom configurations for each stage of testing may also help test other platforms, such as Electron or Appium. It also has the benefit of replacing `intern-runner` and `intern-client` with a single configuration-driven solution.

I'd love to hear feedback on this before starting on a PR. Thanks.