
Idea: Implementation of tests #104

Open

Venefilyn opened this issue Jun 17, 2019 · 18 comments

Comments

@Venefilyn
Member

Venefilyn commented Jun 17, 2019

I believe we've always wanted tests, though never had a chance to implement them ourselves. After some discussion on Discord we've decided we want to add unit tests and e2e tests.

Tools

To make testing easier we should use a similar setup to RES.

  • For e2e we can utilize Nightwatch.
    • creesch has worked with it and believes it's good.
    • It has dashboard functionality; how useful it is I don't know, but it would give some insight into what it looks like when a test fails, for example.
    • Could not find a plugin for extension testing, we should look at how RES does it.
  • For unit tests RES uses AVA, though I've only worked with Jest and Mocha+Sinon.js. I'm up for discussion.
  • CI tool: up for discussion. CircleCI, AppVeyor, and Travis CI are the big contenders; I've only ever worked with Travis myself.
  • Code coverage tool: Codecov.io and Coveralls. I haven't worked with either of them, but Codecov seems nicer at a glance as it gives better code coverage imagery and has better UX.

How

When it comes to the tests themselves, there are a few approaches I would suggest:

  • Unit tests should mock out any Reddit responses and work with successes and errors alike. Tests should be written as a black box: data goes in, and data or requests come out (see the sketch after this list).
  • E2e tests should utilize both live and mocked responses. As in, we can have a live test to make sure toolbox integrates well with the API, and we can have tests without a server to test failures.
  • CI that runs tests on every commit. If possible it should only run the e2e tests when something is released, to avoid hammering the API too much.
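A minimal sketch of the first point, assuming Jest (one of the unit test candidates mentioned above, nothing is decided) and a hypothetical getUsernotes wrapper around the wiki API call:

// const {getUsernotes} = require('...'); // hypothetical module under test
test('getUsernotes returns parsed notes on success', async () => {
    // Mock the network layer instead of hitting reddit.com.
    global.fetch = jest.fn().mockResolvedValue({
        ok: true,
        json: async () => ({data: {content_md: '{"ver":6,"users":{}}'}}),
    });

    const notes = await getUsernotes('toolbox');

    // Black box: only assert on what comes out and on what was requested.
    expect(notes).toEqual({ver: 6, users: {}});
    expect(global.fetch).toHaveBeenCalledWith(expect.stringContaining('/r/toolbox/wiki/usernotes'));
});

test('getUsernotes rejects on an error response', async () => {
    global.fetch = jest.fn().mockResolvedValue({ok: false, status: 403});

    await expect(getUsernotes('toolbox')).rejects.toThrow();
});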

Once we have established what we will do, we will create a tracking issue for what needs to be set up in terms of tooling, code coverage, and CI. Codecov.io can be used for code coverage reporting. The tracking issue should also detail all the components that need tests written for them, preferably in order.

@creesch
Member

creesch commented Jun 17, 2019

e2e/UI testing

> Could not find a plugin for extension testing, we should look at how RES does it.

Seems to be part of the selenium calls; no plugin needed, just provide the browser with a profile that contains the extension. You can see how they did it here specifically.

They run this with Travis, where you can simply see the console output as a dashboard.

For example: https://travis-ci.org/honestbleeps/Reddit-Enhancement-Suite/jobs/546448318
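As a rough, untested sketch of the same idea in a Nightwatch config (using Chromium's --load-extension flag rather than the Firefox profile approach RES uses; the build path is a placeholder):

// nightwatch.conf.js (sketch): point the test browser at the unpacked extension build.
module.exports = {
    src_folders: ['tests/e2e'],

    test_settings: {
        chrome_extension: {
            desiredCapabilities: {
                browserName: 'chrome',
                'goog:chromeOptions': {
                    // Load the unpacked extension into the browser under test.
                    args: ['--load-extension=./build/chrome'],
                },
            },
        },
    },
};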

Sorts of tests I'd like to have.

Obviously I don't want to go overboard with the sorts of tests we do here; I'd like to limit it to a few important functionalities that work through a variety of UI triggers and have broken in the past. Some things that come to mind:

  • Usernotes:

    • Adding a usernote.
      • Viewing added usernote in the button and the window itself.
      • Viewing same notes in a fresh installation (cleared settings).
    • Removing a usernote.
  • Settings (Not even sure here, but a lot of this relies on interface interaction).

    • Changing a setting.
    • Exporting settings.
    • Importing settings.
      • On a clean installation.
      • On an installation with already changed settings.
    • Resetting settings.
  • Cache:

    • Clearing cache.
  • Removal reason flows.

  • Mod macros.

Ideally we do some of these on both old and new reddit to test if the interface elements are there. Though we might want to simplify the tests here. For example, if the user note button is visible in an expected way on new reddit and we already tested the popup on old reddit, there is no need to also click on it and do the rest of the interaction.

Also, I only wrote down happy flow scenarios; we obviously need to put some thought into negative flow scenarios.

Unit tests

I honestly don't have much of an opinion on this. From what I know AVA is fairly light and easy to work with; Mocha requires much more setup.

CI

Travis seems like a good choice here, so we can borrow from RES.

What I am not entirely clear on is how it handles the credentials. Looking at the config for the earlier-mentioned job, I assume they are there but shown as a hash?

Code coverage tool

Codecov.io is not open source as far as I can tell. If we use AVA we could just use Istanbul, as AVA recommends it in their own repo. It seems to output nicely to the command line, so we can just run it together with the tests.

I don't want to get too bogged down on code coverage though. Adding coverage for old code doesn't yield that much benefit, so I think the best approach is to focus on adding tests for newly developed code and code that is changed.

So ideally I'd actually also like to see something implemented that tests the quality of the unit tests, as it is far too easy to write unit tests that don't actually test anything but do increase the coverage numbers. We could look at adding Stryker to the mix, which is a mutation testing framework.

@creesch creesch pinned this issue Jun 17, 2019
@Venefilyn
Member Author

> e2e/UI testing
>
> > Could not find a plugin for extension testing, we should look at how RES does it.
>
> Seems to be part of the selenium calls; no plugin needed, just provide the browser with a profile that contains the extension. You can see how they did it here specifically.
>
> They run this with Travis, where you can simply see the console output as a dashboard.
>
> For example: travis-ci.org/honestbleeps/Reddit-Enhancement-Suite/jobs/546448318

Good to know about the plugin situation.

I think any e2e tool will work like the example you showed with headless mode; Cypress does the same thing. What I meant with a dashboard, which is the closed-source part of Cypress, was that you would have recorded runs and screenshots of failed tests, which plays a big role in some UI testing.

It would be great if Nightwatch had something like it, but maybe there is another solution for it.

> Sorts of tests I'd like to have.
>
> Obviously I don't want to go overboard with the sorts of tests we do here; I'd like to limit it to a few important functionalities that work through a variety of UI triggers and have broken in the past. Some things that come to mind:
>
> * Usernotes:
>   * Adding a usernote.
>     * Viewing added usernote in the button and the window itself.
>     * Viewing same notes in a fresh installation (cleared settings).
>   * Removing a usernote.
> * Settings (Not even sure here, but a lot of this relies on interface interaction).
>   * Changing a setting.
>   * Exporting settings.
>   * Importing settings.
>     * On a clean installation.
>     * On an installation with already changed settings.
>   * Resetting settings.
> * Cache:
>   * Clearing cache.
> * Removal reason flows.
> * Mod macros.

These all look like e2e tests. If possible we should have these tests on both Firefox and Chromium headless drivers.

> Ideally we do some of these on both old and new reddit to test if the interface elements are there. Though we might want to simplify the tests here. For example, if the user note button is visible in an expected way on new reddit and we already tested the popup on old reddit, there is no need to also click on it and do the rest of the interaction.
>
> Also, I only wrote down happy flow scenarios; we obviously need to put some thought into negative flow scenarios.

These are mostly when the user cancels something, or the response is an error.

> Unit tests
>
> I honestly don't have much of an opinion on this. From what I know AVA is fairly light and easy to work with; Mocha requires much more setup.

As you said in Discord, reading the docs I also just realized that AVA and Stryker are not compatible.

Jest is easy to set up and has built-in support for a lot of stuff.

> CI
>
> Travis seems like a good choice here, so we can borrow from RES.
>
> What I am not entirely clear on is how it handles the credentials. Looking at the config for the earlier-mentioned job, I assume they are there but shown as a hash?

They're stored in Travis, then referenced as hashes that point to them internally within Travis.

> Code coverage tool
>
> Codecov.io is not open source as far as I can tell. If we use AVA we could just use Istanbul, as AVA recommends it in their own repo. It seems to output nicely to the command line, so we can just run it together with the tests.

Codecov does give more graphs and insight rather than just text output. There's no reason we can't have both, really. Istanbul doesn't give that much insight compared to other solutions.

> I don't want to get too bogged down on code coverage though. Adding coverage for old code doesn't yield that much benefit, so I think the best approach is to focus on adding tests for newly developed code and code that is changed.

We should prioritize newer code first and slowly work backwards. The focus should be on writing quality tests, not quickly writing tests that don't cover enough edge cases and regression cases.

> So ideally I'd actually also like to see something implemented that tests the quality of the unit tests, as it is far too easy to write unit tests that don't actually test anything but do increase the coverage numbers. We could look at adding Stryker to the mix, which is a mutation testing framework.

A mutation testing framework is a good indicator. Never used one though, so no idea how it would work.
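For illustration only (not Stryker's actual behaviour or output, and assuming a Jest-style runner), the idea is roughly this:

// A made-up helper and a deliberately weak test.
function isMod(user) {
    return user.modPermissions.length > 0;
}

test('isMod', () => {
    isMod({modPermissions: ['all']}); // executes the code, so coverage says 100%...
    expect(true).toBe(true);          // ...but asserts nothing about the result
});

// A mutation framework rewrites the code under test (for example `> 0` into `>= 0`
// or `return true`) and re-runs the tests. Because the weak test above still passes
// against those mutants, they "survive" and the test gets flagged as not actually
// testing the behaviour, despite the perfect coverage number.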

@creesch
Member

creesch commented Jun 17, 2019

Just for clarity, I have very little interest in dashboards and all that sort of stuff starting out. We don't have stakeholders or clients we need to report to. If it outputs the information to the command line it is good enough, as that means we can just look at the Travis output.

I don't want to spread out the management and information of the project over a dozen different tools; I want them to be easily accessible and viewable from right here in the repo.

I also don't want to have too many dependencies in regards to cloud services. We can't realistically avoid using a selenium grid service like Sauce Labs/BrowserStack, and we need a CI platform, but I'd like to keep it to those if at all possible.

@Venefilyn
Member Author

Sure, I agree dashboards might be a bit much starting out. I'm just letting you know how it would help.

> We can't realistically avoid using a selenium grid service like Sauce Labs/BrowserStack

Since I've never used this, what purpose would it serve?

@creesch
Member

creesch commented Jun 17, 2019

> Since I've never used this, what purpose would it serve?

The tl;dr is that it easily allows you to run tests on multiple browsers and machines: Selenium Grid. Sauce Labs and BrowserStack (and probably more out there) are cloud services that effectively offer a preconfigured Selenium Grid for you.

The reason we'd want this is that the alternative is setting up the browsers locally for each run in the CI environment, because I don't think you can configure them once and reuse them. As you could see in the RES example, you simply tell it "I want Firefox", possibly with a version, and you are good to go. It also means we can test on multiple platforms if need be, which is what Sauce Labs supports. So we can very easily also run a bunch of tests on different browser versions we technically still support. For example, in our manifest we state that we support Firefox 57.0, but we don't really test with that now, and then we could.

In a similar manner, though less important: a manifest key that was supported on desktop Firefox broke toolbox for Android Firefox this weekend.

We could even go full overboard and, when we tag a release, kick off a test run across a ton of browser varieties on different platforms to see if toolbox will load on them.

If we'd somehow want to configure those environments by hand it would be a ton of work.

It is one of the few instances where using a cloud service like this makes sense to me.
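As a rough, untested sketch of how that could look in a Nightwatch config (the Sauce Labs host, environment variable names, and capability values are assumptions to be checked against the service docs):

// nightwatch.conf.js (sketch): one environment for local development, one for the grid.
module.exports = {
    test_settings: {
        // Run against a locally installed Firefox while developing.
        local_firefox: {
            desiredCapabilities: {
                browserName: 'firefox',
            },
        },

        // Run against a hosted Selenium grid (Sauce Labs shown here) in CI.
        sauce_firefox: {
            selenium_host: 'ondemand.saucelabs.com',
            selenium_port: 80,
            username: process.env.SAUCE_USERNAME,
            access_key: process.env.SAUCE_ACCESS_KEY,
            desiredCapabilities: {
                browserName: 'firefox',
                version: '57.0',
                platform: 'Windows 10',
            },
        },
    },
};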

@creesch
Member

creesch commented Jun 17, 2019

Oh, I should mention that Sauce Labs, BrowserStack, and similar services all make use of the same sort of API, which also means we aren't screwed if for some reason one of them shuts down or changes their free plan.

Also, because it is Selenium-based you can easily set it up so that when you are developing locally you simply run the tests on your local browsers if you want to. It is simply a very mature and rich ecosystem with which the majority of web automation testing is done.

@creesch
Member

creesch commented Jun 18, 2019

Some random notes on unit testing

Since you said you have experience with Jest I'd be fine with seeing if we can use that.
Though I wonder if we can? A quick look shows me that it expects a module approach so you can import your code into the test JS.
That might be our biggest challenge actually.
Though I guess we could use an approach like this (I assume Jest can also act in a similar manner).

Having thought it all over we probably want to focus on utility functions first.

@Venefilyn
Member Author

I don't like the idea of running it in the browser as it would be quite slow and I'm not sure how it would work or report in a CI.

I might have to play around to see if I can manage to write tests without a browser environment running.

There is a Jest WebExtension module that mocks the chrome and browser globals.

You can import files using Node and test them that way without utilizing exports, I found this. Though I'm not sure how to import and run specific functions yet.
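One way that could look, as a rough untested sketch (the file path and the getSettingDefault function are hypothetical, and the chrome stub could come from a mock like jest-webextension-mock instead):

// Load a non-module browser script into a Node/Jest test with the vm module,
// so its top-level functions can be called without the file using exports.
const fs = require('fs');
const vm = require('vm');

function loadScript(path, sandbox) {
    const code = fs.readFileSync(path, 'utf8');
    vm.createContext(sandbox);
    vm.runInContext(code, sandbox, {filename: path});
    return sandbox;
}

test('getSettingDefault falls back for unknown settings', () => {
    // Minimal fake globals the script expects.
    const sandbox = {window: {}, chrome: {storage: {local: {get: jest.fn()}}}};
    const context = loadScript('extension/data/tbstorage.js', sandbox);

    expect(context.getSettingDefault('doesNotExist')).toBeUndefined();
});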

@creesch
Member

creesch commented Jun 18, 2019

Oh for sure, if we can avoid actually running in a browser that is great. Though we should then also be aware that we are running in a Node context where not everything is the same. For example, I don't think our usernotes mechanism would work in a Node context due to the base64 stuff. But that is getting ahead of things 😁
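For whenever we get to it, a small sketch of one way around that, assuming a Jest-style setup file (the file name is an assumption):

// jest.setup.js (sketch): provide the browser's base64 helpers in Node so code
// that uses atob/btoa (like the usernotes blob handling) can run in unit tests.
if (typeof global.atob === 'undefined') {
    global.atob = base64 => Buffer.from(base64, 'base64').toString('binary');
}
if (typeof global.btoa === 'undefined') {
    global.btoa = binary => Buffer.from(binary, 'binary').toString('base64');
}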

The Stack Exchange thing you found looks promising indeed. If we can leverage that it would be cool.

For the e2e tests I am going to apply to saucelabs for their open source program.

@creesch
Member

creesch commented Jun 20, 2019

So our saucelabs application got approved.

I still need to do some administrative work to get the account, but once that is done I am going to see about setting it up properly.
Looking at the RES project they have a decent setup, but there is some room for improvement and also some differences. For example, RES doesn't need to be logged in, which is something we do need.

They also don't use pageobjects or globals to reuse things in tests.

@Venefilyn
Member Author

Venefilyn commented Jun 20, 2019

It might pose a problem with local testing since we need to be logged in, but if it's all in CI then it wouldn't be too much of an issue. snoowrap manages it just fine, for example.

I don't see why we would use pageobjects or globals in tests either? At least for unit tests they're just going to make the tests less robust.

@creesch
Member

creesch commented Jun 20, 2019

> pageobjects or globals in tests either?

Talking e2e tests: with pageobjects, things like logging in can be a single pageobject, as we are not testing that.

@creesch
Member

creesch commented Jun 20, 2019

Just as an extra clarification, something like this (not tested, probably broken, but based on actual reddit elements):

// Nightwatch page object sketch for the reddit login form.
const reddit = {
    loginReddit: function(userName, password) {
        // Verify we are on the login screen before filling it in.
        return this.waitForElementVisible('@usernameField', 5000)
            .setValue('@usernameField', userName)
            .setValue('@passwordField', password)
        // Submit the form.
            .click('@loginButton');
    }
};

module.exports = {
    commands: [reddit],
    elements: {
        usernameField: {
            selector: '[name="user"]'
        },
        passwordField: {
            selector: '[name="passwd"]'
        },
        loginButton: {
            selector: '.submit button[type="submit"]'
        }
    }
};

@Venefilyn
Member Author

From #185; cc @creesch

> Probably a discussion for something outside of this PR, but I am wondering if we should start building tests for modules, or maybe first for the utility classes and objects?

I'd say we go for utility classes first if possible, and then whatever is the most important to get right, such as banning or removing.

@Venefilyn
Member Author

module.exports = {
    commands: [reddit],
    elements: {
        usernameField: {
            selector: '[name="user"]'
        },
        passwordField: {
            selector: '[name="passwd"]'
        },
        loginButton: {
            selector: '.submit button[type="submit"]'
        }
    }
};

We should export these selectors for both unit and e2e tests. It would be easier for unit tests that need to touch an element or check that an element changes.
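Something like this rough sketch (file names and layout are placeholders, not an agreed structure):

// tests/selectors/reddit.js — the raw selectors, importable from anywhere.
module.exports = {
    usernameField: '[name="user"]',
    passwordField: '[name="passwd"]',
    loginButton: '.submit button[type="submit"]',
};

// pageobjects/reddit.js — the Nightwatch page object reuses the same module,
// and unit tests can require tests/selectors/reddit.js directly.
const selectors = require('../tests/selectors/reddit');

module.exports = {
    elements: {
        usernameField: {selector: selectors.usernameField},
        passwordField: {selector: selectors.passwordField},
        loginButton: {selector: selectors.loginButton},
    },
};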

@creesch
Member

creesch commented Oct 22, 2019

Sounds like a plan.

Though we need to figure out what selectors we need and in what context, so we can group them accordingly or see if there is actually overlap.

@creesch
Member

creesch commented Oct 27, 2019

Alright, unit tests and code coverage have been added to Travis. I also noted that the coverage count is not correct, so we'll have to look into it. It picks up jQuery (it should ignore libraries), and other than that only shows the defined test as 100%.

@Venefilyn
Member Author

Venefilyn commented Oct 31, 2019

> Alright, unit tests and code coverage have been added to Travis. I also noted that the coverage count is not correct, so we'll have to look into it. It picks up jQuery (it should ignore libraries), and other than that only shows the defined test as 100%.

We now ignore libs in coverage.
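For reference, a sketch of how that can be done if Jest is the runner (the ignored paths are illustrative, not necessarily the repo's actual layout):

// jest.config.js (sketch): exclude bundled libraries from coverage.
module.exports = {
    collectCoverage: true,
    coveragePathIgnorePatterns: [
        '/node_modules/',
        '/extension/data/libs/', // bundled libraries such as jQuery
    ],
};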
