Tech choices for WebUI++ #51

Closed · alanshaw opened this issue Apr 3, 2018 · 9 comments

alanshaw commented Apr 3, 2018

We're approaching the point at which we can start building a new WebUI, using the existing WebUI as a working prototype for what we want to build.

I'd like to put forward some suggestions, for your approval, on how we build the new app: the tech we use and how it's implemented.


Styles

As a team I believe we're already sold on atomic styling as a methodology: we've used Tachyons on public Protocol websites and within Companion and Desktop, as well as Tachyons-inspired atoms in ipfs-css.

The existing WebUI uses Less CSS, but using Tachyons largely removes the need for it and means we'll not need a build step to transpile it - hooray!

Frontend framework

I think it's a good idea to stick with a framework that's well known and widely used. The WebUI will have a lot of exposure to users through being bundled with go-ipfs (initially at least) as well as being used in Desktop and Companion (in the near future), so keeping the barrier to entry super low by sticking to familiar concepts and ideas will encourage the community to invest and contribute more readily.

The existing WebUI is built with React, but I'd like to suggest using Preact, a largely React-compatible alternative with a much smaller file size.

I think the drastically smaller file size is more than enough reason to at least try Preact, and if we encounter significant issues then we should be able to easily switch back to React.

State management

Currently the WebUI is using Redux to manage global state and Redux Saga to create and execute asynchronous actions. I've used Redux in quite a few other projects. There are a ton of benefits, but what I love the most is that it makes your actions and components much easier to isolate and test. I'd like to continue using something like this to manage state.

A common pain point with Redux seems to be the amount of "boilerplate" code that has to be written to reduce state and create actions. I'm mildly in agreement with that sentiment so I'm in favour of trying out something new but similar.

I feel that Redux Saga is difficult to understand initially, and that doesn't further our goal of encouraging community or even team contributions. In my opinion, async/await is an easier concept to understand than generators, and having seen both of these language features come into existence, my sense is that async/await is much more popular and accessible within the JavaScript community.

@olizilla suggested unistore as an alternative to Redux. It's a tiny library (~650 bytes) and it supports async actions (using async/await) by default. I'm totally up for giving it a shot.
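
As a rough illustration of why that appeals, here's a sketch of a unistore store with an async action (the store shape and the IPFS call are placeholders, not real code from any of our projects):

```js
import createStore from 'unistore'

// hypothetical store shape, just for illustration
const store = createStore({ peers: [], loading: false })

// unistore action creators are plain functions of (state, ...args);
// returning a promise (e.g. from an async function) merges the resolved
// value into the store when it settles - no sagas or generators needed
const actions = store => ({
  async fetchPeers (state) {
    store.setState({ loading: true })
    const peers = await window.ipfs.swarm.peers() // assumes an ipfs instance on window
    return { peers, loading: false }
  }
})

export { store, actions }
```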

Routing

TL;DR

I think we can just use react-router or preact-router depending on the frontend framework. Sadly we'll have to use hash-based routing for the time being. There are no progressive web app generators or routing libraries I know of that don't have issues with routing relative to a particular directory.

Routing backstory

Initially @olizilla and I were thinking of a hash per section e.g.

https://ipfs.io/ipfs/Qmhash0 # ---> home
https://ipfs.io/ipfs/Qmhash1 # ---> mfs explorer
https://ipfs.io/ipfs/Qmhash2 # ---> ipld explorer
# ...etc.

...because we could upgrade sections independently.

However that makes single page app routing difficult (impossible?). We could link to these pages individually, but it would cause a full page reload...

Instead I'd have liked to do:

https://ipfs.io/ipfs/Qmhash/
https://ipfs.io/ipfs/Qmhash/files
https://ipfs.io/ipfs/Qmhash/ipld
# ...etc.

...but off-the-shelf routing libraries assume you're working from the root of a domain, not a subpath, so things might work here:

https://webui.ipfs.io/
https://webui.ipfs.io/files
https://webui.ipfs.io/ipld
# ...etc.

...but not at the aforementioned hash from a gateway.

I'm interested to hear recommendations for routing libs that might allow for this (maybe we should just roll our own), but in the interests of getting things working I suggest we just use hash-based routing for now:

https://ipfs.io/ipfs/Qmhash/
https://ipfs.io/ipfs/Qmhash/#/files
https://ipfs.io/ipfs/Qmhash/#/ipld
# ...etc.
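
For reference, a hash-routing setup is only a few lines. A minimal sketch, assuming react-router v4 (react-router-dom) and some hypothetical page components:

```js
import React from 'react'
import { HashRouter, Route, Link } from 'react-router-dom'
import { HomePage, FilesPage, IpldPage } from './pages' // hypothetical components

// HashRouter keeps all routing after the '#', so the app works the same
// whether it's served from webui.ipfs.io/ or from /ipfs/Qmhash/ on a gateway
const App = () => (
  <HashRouter>
    <div>
      <nav>
        <Link to='/'>Home</Link> <Link to='/files'>Files</Link> <Link to='/ipld'>IPLD</Link>
      </nav>
      <Route exact path='/' component={HomePage} />
      <Route path='/files' component={FilesPage} />
      <Route path='/ipld' component={IpldPage} />
    </div>
  </HashRouter>
)

export default App
```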

Transpiling

The current WebUI is ES2015+ and I'm happy to stick with that. We have to transpile JSX anyway (assuming that we want to use JSX) and so we might as well take advantage of ES2015+ features :D.

I'd like to ensure we include a class fields transform so that we don't have to manually bind instance methods, can easily define static class properties, and can avoid the boilerplate of writing constructors just to set default state.
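
A quick sketch of what that buys us (a hypothetical component, assuming the Babel class properties transform is enabled):

```js
import React from 'react'

class PeersPage extends React.Component {
  // class field: default state without writing a constructor
  state = { peers: [] }

  // static class property, again without constructor boilerplate
  static defaultProps = { title: 'Peers' }

  // arrow-function class field: `this` is bound automatically, so no
  // `this.handleRefresh = this.handleRefresh.bind(this)` needed
  handleRefresh = () => {
    this.setState({ peers: [] })
  }

  render () {
    return <button onClick={this.handleRefresh}>{this.props.title}</button>
  }
}

export default PeersPage
```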

Linting

I'd like to use standardjs for linting as I've found that it has sensible defaults and is good at catching common mistakes before runtime. I have no desire to configure and maintain a custom ESLint (or similar) setup, and considering standard is in use by many other JS IPFS projects, it makes sense to stick with it.

Bundlers

I find webpack config confusing and I'm much more inclined to just use browserify to bundle up our code. I was hoping we'd be able to use factor-bundle (as we have done on a bunch of the Protocol websites as well as in Companion) to pull out common dependencies, but hash-based routing means we'd have to load everything anyway, since everything is served from the root of the application.

Do people have strong feelings about using webpack or something else (rollup‽)?
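
To give a sense of scale, a browserify build can be a single small script rather than a webpack config. A rough sketch, where the paths and babel preset names are assumptions for illustration:

```js
// build.js - minimal browserify bundling sketch, no webpack config needed
const fs = require('fs')
const browserify = require('browserify')

browserify('src/index.js')
  // transpile JSX / ES2015+ on the way through (assumes babelify + presets installed)
  .transform('babelify', { presets: ['env', 'react'] })
  .bundle()
  .on('error', err => { console.error(err.message); process.exit(1) })
  .pipe(fs.createWriteStream('dist/bundle.js'))
```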

Testing

I'd like to see good test coverage from day 0. I'm well aware that the vast majority of IPFS projects use mocha but I'd like to throw in my preferences for consideration:

I've always been a fan of tape, but nowadays I've been leaning more towards ava since it's like tape but with concurrent tests, promise support and the ability to write tests in ES2017 out of the box.

For coverage I've switched to nyc, which is basically istanbul but with concurrency support.
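
To illustrate what that looks like in practice, a minimal ava test might read like this (`fetchPeers` is a hypothetical module, not something that exists yet); coverage would then just be a matter of running the suite under nyc:

```js
import test from 'ava'
import { fetchPeers } from '../src/peers' // hypothetical module under test

// ava tests run concurrently and support async/await out of the box
test('fetchPeers resolves an array of peers', async t => {
  const peers = await fetchPeers()
  t.true(Array.isArray(peers))
})
```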

I want to have e2e browser tests, but I'm just so disheartened with selenium-based browser testing frameworks. Is there something better than nightwatch.js? Are there any selenium alternatives that have some traction and are maintained?

Build tools

AEgir is the obvious choice, but the proposals above for bundling and testing are at odds with what AEgir provides. Shall we just use AEgir? I'd prefer to use browserify, but it's not the end of the world if we use webpack, and likewise for mocha.

Otherwise, I'm an advocate for no build tool and just npm scripts. @olizilla and I have been using npm-run-all quite extensively and with great success, but I have been getting frustrated with ordering and dependent tasks. I basically want to be able to define dependencies (other npm scripts) that need to execute before mine does and have the library figure out what the ordering should be, kinda like async.auto. @olizilla pointed out that I was describing makefiles, so I think we should just use make (which we have done already for the IPLD website, IPFS website and others). That said, make is not platform independent - Windows users might have a bad time.


I'd like to hear your thoughts and whether I've missed anything! Let's build a glorious future together 🚀

lidel commented Apr 4, 2018

Styles

Sounds like we're all OK with going forward with Tachyons-like atoms.
I've extracted some CSS/icons/fonts scope/naming decisions to ipfs-shipyard/ipfs-css#11.

Frontend framework, State management

There are two paths we can take:

  • go with mainstream React ecosystem (keeping the entry barrier super low, producing reusable components as a side-effect)
  • set up a custom stack with things like Preact, unistore etc. (still familiar, but results in smaller code optimized for speed, size and ease of maintenance)

The question I have is related to long-term planning: how reusable are the Preact components we create for WebUI going to be? Will it be possible to extract some things (eg. drag & drop and progress feedback for file sharing) so that someone can use them in their own (React?) app, or are we deciding that is out of scope for now?

No strong opinions on my end, but the small size of Preact, unistore etc. makes me happy, as it makes it easier to ship with things like the browser extension. And use of async/await where possible is a big plus.

Routing

Current routing in WebUI is hash-based: http://127.0.0.1:5001/webui#/files/explorer
I think it is perfectly fine for now (no regression in browser, not visible in electron), and makes it easier to cross-link across browser boundary.

Linting

Ok with Standard.js

ipfs/community/js-code-guidelines.md suggests we should also enforce the use of strict mode, but I'm not sure it's worth increasing complexity by moving to a full ESLint setup.

Bundlers

As a data point, here is a comparison between webpack, browserify etc. Mostly a matter of personal preference.

Seems that a lot of js-ipfs land builds with webpack (+ AEgir), which stores pipeline definitions in separate files. If we go our own way, we may end up with very, very long browserify one-liners in package.json (like in Companion).
Apart from that, I don't see a big problem with staying with browserify as long as we follow established conventions for bundling.

Testing

E2E is a bit better now that we can run Firefox and Chrome in headless mode:

Nightwatch alternatives for E2E:

Build

A build based on npm scripts and npm-run-all has been working fine for ipfs-companion since its inception. And it works everywhere. I agree that there is no need for AEgir (backed by Gulp) as a build tool, unless we include it for other reasons.

Release Management

Release Management is something we have not discussed yet, but we will produce multiple packages: embeddable webui for companion, ipfs-css etc.
Some of the chores related to that are nicely automated by AEgir, but I am not sure it makes sense to include AEgir in the stack just for that. We could just make sure to follow the same conventions.

AEgir?

As noted here, AEgir was created as an opinionated effort to automate project management, streamline conventions and remove decision paralysis.
One of the first questions we need to answer is "Should we use AEgir?".
Sounds like a lot of things depend on the answer.


JonKrone commented Apr 13, 2018

Frontend Framework

My experience using a library like Preact is that it works well and I hardly notice a difference!

There is a drawback:

  • Ecosystem compatibility might be getting worse. Because Preact cannot currently support a few React 16 features (Fragments, Portals, Error boundaries), packages that use those will not be usable by us even with preact-compat. Preact is currently working on enabling those, but it's a hard job that might need to be done with each major React update.

As I understand, the current WebUI is only ever served from local resources. Is this going to change? I saw references to non-local paths above, like https://ipfs.io/ipfs/Qmhash/. Will you be able to open the WebUI on a random computer and connect to a remote node? I do like that idea a lot but I haven't seen it mentioned anywhere else.

If it's only ever going to be served from local resources, bundle size is less important than usual. For example, the current WebUI js bundle is more than 3MB but still scores a 95/100 with the Chrome performance audit (no cpu throttling, good laptop).

Linting / Formatting

I suggest using prettier to automatically format code to standardjs rules. It's a fairly simple integration that can eliminate most formatting problems before they reach a PR. There is a standardjs version: prettier-standard

I believe it is on the roadmap (mentioned in ipfs/js-ipfs#1260) to be integrated into AEgir.

Typing

I have never worked with a typed frontend project but I wanted to bring it up. Have y'all previously talked about it or have experience here? Typing in general is a great thing and bumps the maintainability of a project over the longer term, but can be burdensome. Many of the IPFS libraries we consume are not (yet) typed, which would hinder efforts to type the rest of our system.

Flow is on the AEgir roadmap (ipfs/js-ipfs#1260) and has had significant conversation here (initial proposal), and here (initial implementation).

I don't have a sense of how important tooling consistency is between the ipfs-shipyard and ipfs projects.

Routing

This is just a minor point, but if you use Unistore, you will need to find some means of connecting our Router to the store. For the React + Redux stack there is a solution with react-router-redux, but you may need to roll your own preact-router-unistore. But hey, that'd be cool.

Note https://github.com/developit/preact-router:

React Router 4 even works directly with Preact, no compatibility layer needed!

so a Preact + Redux stack would sidestep this.

Just some thoughts. Tons of potential with the new WebUI 💯

@alanshaw (Member, Author)

@lidel:

The question I have is related to long-term planning: how reusable are the Preact components we create for WebUI going to be? Will it be possible to extract some things (eg. drag & drop and progress feedback for file sharing) so that someone can use them in their own (React?) app, or are we deciding that is out of scope for now?

I think we should write it in a way that doesn't make it difficult to extract components into modules.

As far as I'm aware we can use preact-compat to allow us to use React modules in our Preact project. When we get round to extracting common components, we can make tweaks to allow them to be used in a React project, so we get the benefit of the larger ecosystem.
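
For reference, the aliasing that preact-compat documents is tiny. A sketch of the webpack variant (browserify has an equivalent via the aliasify transform):

```js
// webpack.config.js (sketch) - point react imports at preact-compat
module.exports = {
  // ...rest of the config...
  resolve: {
    alias: {
      'react': 'preact-compat',
      'react-dom': 'preact-compat'
    }
  }
}
```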

@JonKrone:

If it's only ever going to be served from local resources, bundle size is less important than usual.

Whilst I mildly agree with this statement I'd still like to keep our apps as lean as possible.

Typing

By the sounds of it the flow typing changes are still a little bit hot off the press. Ideally I'd like to see what the outcome of that is before we decide to adopt it here. I'm also keen to make good progress this quarter and I don't want to risk that - flow types are completely new to me.

I suggest using prettier to automatically format code to standardjs rules

Sure, consistency ftw.

This is just a minor point, but if you use Unistore, you will need to find some means of connecting our Router to the store

Could you explain this a little more? I'm not certain I understand. I think if we find that any of these technologies are proving painful, we should re-evaluate asap and fall back to something more familiar.

@JonKrone

@alanshaw:

✔️ with typing and bundle size.

re: my Unistore comment: The routing system (react-router, preact-router, etc.) in a single page app needs to be hooked up to the app's state so that route changes within the app trigger a state change and render. So when the user presses the back button, we need to intercept that and turn it into a state change rather than an actual page reload. That enables things like cross-page time travel debugging and replay via Redux Devtools.

I was saying you'd have to write the connection between the router and state store because I haven't seen one for Unistore. react-router-redux is a fairly simple library, about 100 LOC: https://www.npmjs.com/package/react-router-redux.
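
To give a sense of scale, a hand-rolled preact-router to unistore binding could be a handful of lines. This is only a rough sketch and every name in it is hypothetical:

```js
import { h } from 'preact'
import Router from 'preact-router'
import { connect } from 'unistore/preact'

// store the current url so the rest of the app (and devtools) can see route changes
const actions = () => ({
  updateRoute (state, e) {
    return { url: e.url }
  }
})

// preact-router calls onChange with the new url on every navigation
const AppRouter = connect('url', actions)(({ updateRoute, children }) => (
  <Router onChange={updateRoute}>{children}</Router>
))

export default AppRouter
```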

@alanshaw (Member, Author)

Aegir in WebUI 2.0

@olizilla and I are in the process of creating some foundations for WebUI 2.0 (working title) based on the tech choices we've largely agreed on. Since Aegir is a bit of an open question, we thought we'd try using it and evaluate it for ourselves. Here are our findings:

What did we end up not using?

  • Lint

    I immediately had the problem of my editor's standard plugin not activating because the project didn't have standard as a direct dependency. This is mildly inconvenient, but then on running aegir lint we got the error "Parsing error: 'import' and 'export' may appear only with 'sourceType: module'". We switched to using standard directly because of this, and because we had to configure the linting engine to be babel-eslint to get the latest ES features we want to use in the project.

  • Build

    Aegir is focused on building JavaScript libraries but when building a web application there are other things to consider.

    We tried both parcel and create-react-app for building and bundling the web application. They both offer similar features, but a good one to list here is that they eliminate any npm scripts we'd have to set up to copy/compile resources other than JS. We settled on create-react-app because it scaffolds an application that plays nicely with other tools in the React ecosystem, such as react-cosmos, storybook, and styleguidist.

  • Coverage

    Jest, the test runner bundled with create-react-app, comes with coverage reporting built in.

  • Test webworker

    Super minor point but the WebUI will not run in a webworker.

  • Docs

    We're not building a library for people to consume so this was not needed.

  • Release

    We don't need to publish this to npm (although it wouldn't be a big issue). I think npm version to increment the version number and tag in git will probably be sufficient.

What was missing?

  • Watch

    There's no documented watch command for watching files for changes and automatically triggering a build. I'm not sure how I'd go about doing an incremental build without looking at Aegir internals and maybe sending a PR(?). We instead used create-react-app which comes with this built in.

  • Serve

    We need a way of serving our built site during development. I can't see any documented way of doing this but I'd assume since it uses webpack we'd use the webpack serve module somehow? Again, create-react-app comes with this (and hot reloading) built in.

  • Test e2e

    Aegir has no opinions on end-to-end testing, so we've experimented and come up with a low-dev-friction way of doing quick e2e testing.

    I've worked with a number of e2e test frameworks and my main pain points have been:

    1. Flaky commands - e.g. failing to select elements on the page that do exist
    2. Awkward APIs - most frameworks have implemented a sort of promise-queue based system that makes it difficult to know what code is running when. It's also often difficult to run just one test while developing
    3. Complicated config and setup/teardown - most selenium frameworks require a selenium server to be started and stopped when the tests complete, and require a config file for selenium as well as a config file for the test runner
    4. Slow execution - it's not uncommon for an e2e test suite to take 30+ minutes to run; I'd like to see if this can be sped up

    Almost all of the non-selenium e2e testing frameworks list "not selenium based so not flaky" as one of their selling points. That says a lot about selenium's reliability, and sadly it matches my experience. I've been keen to try something different, i.e. not selenium-based.

    I've discounted Nightwatch.js for the reasons listed above. We looked at TestCafe but realised it's payware, and we were hoping to find something open source. We also took a look at Codecept but weren't that enamoured with its DSL.

    In the end we evaluated Cypress and puppeteer by installing them and creating/running tests.

    Cypress looked promising but felt a bit complicated; it has all the batteries included. Puppeteer is a little closer to the metal. It doesn't have a test runner or assertion library included, so we're free to use what we want. This could be mocha and chai (as Aegir includes), but what's interesting about having this flexibility is that we can easily intersperse e2e tests with our unit/functional tests, making e2e tests an integral part of daily developer life (see the sketch after this list). It also opens up possibilities for running concurrent tests - Jest parallelizes test runs across workers, so we can hopefully speed up test runs.
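
Here's the kind of low-friction setup we have in mind - a rough sketch of an e2e test written with Jest and puppeteer (the URL and selector are assumptions for illustration):

```js
const puppeteer = require('puppeteer')

let browser
let page

beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true })
  page = await browser.newPage()
})

afterAll(async () => {
  await browser.close()
})

test('the app shell renders a navbar', async () => {
  await page.goto('http://localhost:3000') // assumed dev server address
  await page.waitForSelector('nav')
  const text = await page.$eval('nav', el => el.textContent)
  expect(text).toContain('Files')
})
```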

What are we left with?

  • Test

    Testing in node and testing in a web browser. With Aegir we'd need to use mocha, plus karma with mocha in the browser.

    Jest runs only in node. I'm not as worried about running our unit/functional tests in a real browser, because Jest allows us to functionally test our components in a jsdom environment (see the sketch below), and we'll also have adequate e2e tests to cover any edge cases.
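
For example, a functional component test under Jest could look something like this (NavBar is a hypothetical component; react-test-renderer is just one way to render it without a real browser):

```js
import React from 'react'
import renderer from 'react-test-renderer'
import NavBar from './NavBar' // hypothetical component under test

test('NavBar links to the files page', () => {
  // renders the component to a plain JS tree - no real browser needed
  const tree = renderer.create(<NavBar />).toJSON()
  expect(JSON.stringify(tree)).toContain('#/files')
})
```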


The open question is: considering the above, should we use Aegir? Having actually used and evaluated it, I'm now leaning towards not, as I think the benefits outlined above will give us a better workflow and reduce dev friction.


lidel commented May 2, 2018

Thank you!

Aegir

We seem to agree that shipping backend libraries is where Aegir shines – it removes a lot of chores by being opinionated and having 'batteries included'.

WebUI is a user-facing GUI product, and its development will involve a different set of challenges and tools than a lib project, which often means ignoring what Aegir provides, or even going against its flow.

Due to this I am ok with starting without it.

Release

We don't need to publish this to npm

We should not rely on fetching WebUI from IPNS, as that assumes network connectivity is working, which may not be the case for all users. On top of that, the human-readable path requires DNS (a potential single point of failure).

In my opinion we actually do want it on npm: having a bundle similar to ipfs-css simplifies the "offline" distribution of WebUI inside ipfs-desktop and ipfs-companion, and tracking its updates via package.json in each project.

E2E Tests

Ideally, we should be able to run the same test suite against both Firefox and Chrome (trust, but verify).

AFAIK Puppeteer is Chromium-only.
Are we planning to use something like puppeteer-fx? (I just glanced at it, but it may not support all APIs)


alanshaw commented May 8, 2018

In my opinion we actually do want it at npm

Yes, you're right. I'm not sure what I was thinking.

Ideally, we should be able to run the same test suite against both Firefox and Chrome (trust, but verify)

I think puppeteer-fx would allow us to do this. Step one is to have e2e tests that are easy to write, that people want to write, and that are easy to debug and maintain. I want to know that test failures are because the app isn't working correctly, not because the test runner is flaky. I haven't had that for e2e tests before, and by all accounts puppeteer seems to be the best option for achieving this. If that proves true, then we should open it up to multiple browsers.

@olizilla (Member)

Hi everyone, @alanshaw and I have been working on a foundation from which to build the new WebUI. Here's what we've created so far. It's basic, but it covers the pieces we need to build on top of, as well as examples of how to do things.

What's in there:

  • React - not Preact - we u-turned on that decision given the points @JonKrone raised about future incompatibility. It also allowed us to use create-react-app (CRA) to scaffold and gain the benefits of react-scripts, i.e. easy build, test and run with hot reloading

  • Redux - we read human redux, learnt how to redux properly, and decided to use redux-bundler to reduce the amount of boilerplate we'd need to write as well as create small, testable bundles of redux functionality. It also comes with some great built-in bundles that we can make use of

  • Routing - is done via the redux-bundler router bundle - super simple and easy to use: just use an a tag with the route you want. You can programmatically change the URL by connecting the doUpdateHash action creator to your component (see the sketch after this list).

  • Testing - Jest (CRA default) and also puppeteer. Test files are placed in the same directory as the source files that they test. CRA also comes with coverage built in by running npm test --coverage.

  • CI - Travis is hooked up and running all the tests, unit and e2e. Jenkins will be used for building out previews and deploying to a live domain, and hopefully all the testing too.

  • IPFS - we've created an IPFS bundle that uses window.ipfs-fallback to get hold of an IPFS instance. When IPFS is ready, the store will have its ipfsReady property set to true. You can then access it from window.ipfs. Action creators have a getIpfs() function passed to them automatically, so they don't have to care about where the ipfs instance comes from.

  • Styling - Tachyons + ipfs-css. SVG icons are converted to React components by svgr.

  • uglify-es - We depend on modules that are ES6 and up, so we can't use the default create-react-app webpack config. We're using a prebuild step to patch the production webpack config with one that uses the latest uglify-es.
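
As mentioned in the routing bullet above, wiring a component to the router bundle is minimal. A rough sketch, assuming redux-bundler-react's connect (the component and routes are hypothetical):

```js
import React from 'react'
import { connect } from 'redux-bundler-react'

// plain <a> tags work for navigation; doUpdateHash is there for
// programmatic route changes
const NavBar = ({ doUpdateHash }) => (
  <nav>
    <a href='#/files'>Files</a>
    <button onClick={() => doUpdateHash('/peers')}>Peers</button>
  </nav>
)

export default connect('doUpdateHash', NavBar)
```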

The code is over here: https://github.com/tableflip/ipfs-webui-cra

There is a demo of the app shell deployed here: https://ipfs.io/ipfs/QmcqH39RqCDnsgvAZsEaMDuvBGvU6RivhXQBNFNh4q3x6g

It's just a lovely navbar and some mostly empty pages right now:

[screenshot of the app shell, 2018-05-14]

What's next:

  • accessibility and i18n tooling.
  • The e2e tests should run against the dev server by default, as they do in peer-pad. Right now the unit tests run against the src code, but the e2e tests run against the prod build, which gets out of sync.
  • Double check the foundations, move the repo, and BUILD ALL THE AMAZING... Status page, Files page, Sharing, IPLD page, and Settings.

@olizilla (Member)

These choices are now embodied in

  • IPFS Web UI - Graphical IPFS file manager and network explorer.
  • IPFS Desktop - Launch and manage IPFS from your OS GUI.
  • IPFS CSS - Single-purpose css rules and font-face config to IPFS up your UI.
