
Future of Service Worker Tooling #44

Closed

gauntface opened this issue Sep 22, 2016 · 17 comments

@gauntface

cc @addyosmani, @jeffposnick

We've chatted a little bit about the future of tooling, but it would be good to move the discussion a little further forward, agree on a rough plan, and then move into more technical detail and strategy.

This is a bit of a brain dump on the audiences we might want to target, followed by general requests / approaches people have hinted at wanting, and finally a super vague idea of what we can do.

Audience

  • Developer who wants a build-time solution for each release: run '*-cli' and generate my service worker.
  • Developer who wants a high-level library to simplify writing a service worker.
  • Developer who has written their own service worker but wants specific features from the high-level library (e.g. cache expiration, analytics caching).

A possible additional audience, based on some developers editing a generated SW:

  • Developer who wants a generic service worker they can generate once and keep using.

Requests / Approaches

  • CLI that builds a complete service worker for you
  • A library that offers high level helpers to create a service worker by hand
  • Individual modules that can be pulled in and used for small pieces of logic

General Architecture

  • Small modules like we have in sw-helpers: self-contained pieces of logic with clearly defined APIs.
  • A high-level library consisting of the smaller modules, nothing more than a sanitised wrapper around them (i.e. very little to no logic of its own; see the sketch just below).
  • A CLI tool that generates a service worker using the high-level library.
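To make the wrapper idea concrete, a minimal sketch (hypothetical package names, just to illustrate the layering; not a real API):

// sw-goog/index.js -- hypothetical high-level entry point that is nothing
// more than a thin re-export of the small modules, with no logic of its own.
export * from 'sw-routing';
export * from 'sw-runtime-caching';
export * from 'sw-precaching';
export * from 'sw-cache-expiration';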

What are your thoughts on these vague topics? Is this the general approach people have in their heads? Is a target audience missing? Does the general architecture seem wonky / destined to fail?

@jeffposnick

jeffposnick commented Sep 22, 2016

Here's a rough outline of what I was thinking about in terms of developer experience:

/*
Assume that:
- routing.js covers similar ground to sw-toolbox's current Express/RegExp routing, and exposes goog.routing.Route
  Route has three properties:
    - when: a function that's passed both a URL-wrapper around FetchEvent.request.url, and the raw FetchEvent,
      and returns true when this route should handle the request
    - options: used to pass additional context to the handler, e.g. name of cache to use.
    - handler: a function that's passed the FetchEvent and any options, and returns a Promise which resolves with a Response.
- runtime-caching.js includes the various caching strategies in sw-toolbox, and exposes goog.runtimeCaching.*
- precache.js is the output of a sw-precache-style build process, including a list of assets, along with
  the install/activate handlers, and exposes goog.precache.*
*/
importScripts('routing.js', 'runtime-caching.js', 'precache.js');

const routes = [
  new goog.routing.Route({
    when: ({url, event}) => goog.precache.shouldHandle(event),
    handler: goog.precache.handler
  }),

  new goog.routing.Route({
    when: ({url, event}) => url.pathname.startsWith('/api/'),
    handler: goog.runtimeCaching.staleWhileRevalidate,
  }),

  new goog.routing.Route({
    when: ({url, event}) => url.origin.startsWith('https://imagecdn.com'),
    handler: goog.runtimeCaching.cacheFirst,
    // See https://github.com/GoogleChrome/sw-helpers/issues/44#issuecomment-250766630
    // for discussion of options/configuration.
    options: {
      cache: {
        name: 'images',
        maxEntries: 10
      }
    }
  }),

  new goog.routing.Route({
    when: ({url, event}) => event.request.mode === 'navigate',
    handler: ({url, event}) => {
      // Implement your own strategy here.
    },
  }),
];

// Optional helper to add in .catch() to the end of your response chain,
// regardless of which route is chosen.
const catchHandler = ({url, event}) => fetch(event.request);

// Optional helper which is used when nothing else matches.
const defaultHandler = ({url, event}) => {
  return caches.match(event.request).then(response => response || fetch(event.request));
}

goog.registerRoutes({routes, catchHandler, defaultHandler});

Inside runtime-caching.js, precache.js, we'd take advantage of smaller modules that can handle things like cache expiration and notification of cache updates.

As mentioned in GoogleChromeLabs/sw-precache#147, we're still blocked on sw-precache needing to generate the top-level service worker file in order to properly trigger the service worker update flow. If we wanted to prototype something new, we could do so if we made sure that the filename of the sw-precache output ('precache.js' in the example above) included a hash that was tied to its contents.
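As a sketch of that idea (the hashed filename below is invented, and would be rewritten by the build step on every release):

// service-worker.js -- sketch only. Because 'precache.3f2a9c.js' embeds a
// content hash, this top-level file's bytes change whenever the precached
// assets change, which is what triggers the service worker update flow.
importScripts('routing.js', 'runtime-caching.js', 'precache.3f2a9c.js');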

Happy to brainstorm more!

@jeffposnick

Actually, here's a potential approach forward without waiting for GoogleChromeLabs/sw-precache#147: if we enforce as a prerequisite that everyone will consume the modules we produce via either import or require, and then force folks to use webpack/rollup/etc. to bundle their modularized service worker code into a final, top-level service-worker.js output file, then we can safely assume that the top-level output file's contents will change whenever the precache module's contents change.

This means developers will need to hold off on using importScripts to pull in our module code, but requiring the use of webpack/rollup/etc. in 2016 doesn't sound too onerous. We could also put out our own command-line tool that handles both the precache generation (like the current sw-precache CLI does) and the bundling, for folks who aren't already using webpack/rollup.
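A minimal sketch of that bundling step, assuming a current Rollup release and placeholder file names:

// rollup.config.js -- sketch only. 'src/sw.js' imports our modules with
// ES2015 import statements; the single bundled output file is what actually
// gets registered as the service worker.
export default {
  input: 'src/sw.js',
  output: {
    file: 'build/service-worker.js',
    format: 'iife'
  }
};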

Thoughts?

@jeffposnick

I put in some time over the weekend exploring what it might be like to break up the existing sw-toolbox functionality into smaller modules, and how they could be packaged, consumed, and play together. The very-much-work-in-progress code is in the new projects under https://github.com/GoogleChrome/sw-helpers/tree/future-of-sw-tooling/projects

I got far enough along that the code in https://github.com/GoogleChrome/sw-helpers/blob/future-of-sw-tooling/projects/sw-routing/demo/service-worker.js works, using a syntax similar to what I proposed above.

I'm not sold that it's the right approach, and I'm looking forward to feedback, but it was good just to familiarize myself with using Rollup to bundle smallish, ES2015-friendly modules and to prove that they can be consumed either via importScripts() or via transpiled ES2015 modules.

@gauntface

@jeffposnick

First off, can we agree on the rough audience, approaches, and architecture before discussing API and implementation?

@jeffposnick

I'm on board with what you outlined at the start of the thread.

Producing smaller components that could be opted-into piecemeal for those who want to add value to their current hand-crafted service worker script: 👍

Creating a new, optional build tool/CLI layer on top of those components that could generate an entire service worker file, matching the current sw-precache experience: 👍

The blocker for me has historically been whether we could move to this sort of model while still triggering the SW lifecycle events we needed to properly handle cache maintenance. w3c/ServiceWorker#839 (comment) will solve that generally, but I'm eager to prototype whether ES2015 modules + requiring a webpack/Rollup step is a viable path forward without placing unreasonable demands on developers. I'm thinking that yes, it can be.

@jeffposnick

jeffposnick commented Sep 30, 2016

In addition to general feedback, I'm particularly interested in hearing ideas for configuring cache behavior. Here are my high-level thoughts:

  • The two specific types of behavior I have in mind involve setting cache expiration options and support for detecting and broadcasting when a resource has changed (see broadcast-cache-changes #43). Developers might want to opt-in to none, or one, or both of those behaviors.
  • We need to be flexible enough to support additional behaviors that we develop in the future, instead of assuming those are the only two that will ever be used.
  • Whatever we do should not tie those behaviors to the overall "framework" logic that we're also going to be setting up. So, while it should be easy to opt in to those behaviors via an option passed to the hypothetical goog.routing.Route() from Future of Service Worker Tooling #44 (comment), they also need to be exposed in a standalone fashion that other libraries could make use of.
  • While it's a little more overhead, I feel like configuration via classes is the way to go. sw-toolbox's approach of using a configuration Object that accepted arbitrary properties without validation made it too easy to pass in invalid configurations without getting any warning.

My strawman suggestion is something like

new goog.routing.Route({
  when: ({url, event}) => goog.precache.shouldHandle(event),
  handler: goog.precache.handler,
  configuration: [
    new goog.CacheBroadcastBehavior({channelName: 'cache-changes'})
  ]
});

new goog.routing.Route({
  when: ({url, event}) => url.origin.startsWith('https://imagecdn.com'),
  handler: goog.runtimeCaching.cacheFirst,
  configuration: [
    new goog.CacheNameBehavior({name: 'image-cache'}),
    new goog.CacheExpirationBehavior({
      maxEntries: 10,
      maxAgeSeconds: 3600
    }),
    new goog.CacheBroadcastBehavior({
      channelName: 'cache-changes',
      headers: ['last-modified', 'etag', 'content-length']
    })
  ]
});
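To illustrate the validation point, here is a minimal sketch (not a real API) of what the constructor of one of these behavior classes could do, in contrast to an unvalidated options object:

// Sketch only: a hypothetical CacheExpirationBehavior that rejects malformed
// configuration at construction time instead of silently ignoring it.
class CacheExpirationBehavior {
  constructor({maxEntries, maxAgeSeconds} = {}) {
    if (maxEntries !== undefined &&
        (!Number.isInteger(maxEntries) || maxEntries <= 0)) {
      throw new Error('maxEntries must be a positive integer.');
    }
    if (maxAgeSeconds !== undefined &&
        (!Number.isInteger(maxAgeSeconds) || maxAgeSeconds <= 0)) {
      throw new Error('maxAgeSeconds must be a positive integer.');
    }
    this.maxEntries = maxEntries;
    this.maxAgeSeconds = maxAgeSeconds;
  }
}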

@samertm

samertm commented Sep 30, 2016

FWIW, I think most non-trivial service worker apps will be okay with the requirement to use a module bundler instead of importScripts. If you're pulling in any npm modules, you'll be using one anyway. Plus, it gives you more control over what code you pull in.

@gauntface

Firstly, apologies that we are coming at this from two very different directions: I'm going top-down from the high level and you're thinking details-up (which, weirdly, may be helpful ;) ).

Small Modules We End Up With:

  • Routing (Path to callback)
  • Cache Management (Expiration / Request Limit)
  • Precache (Managing install and activate event)
  • Offline Analytics (Additional helpers)
  • .....

Developers can use these as-is; this is the recommended approach for fine-grained control and for reducing file size.
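A rough sketch of what that direct usage might look like (module and export names are placeholders, mirroring the routing proposal above):

// Sketch only: a hand-written service worker that pulls in just the routing
// module and nothing else.
import {Route, registerRoutes} from 'sw-routing';

registerRoutes({
  routes: [
    new Route({
      when: ({url}) => url.pathname.startsWith('/images/'),
      handler: ({event}) =>
        caches.match(event.request).then(cached => cached || fetch(event.request))
    })
  ]
});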

Higher Level Projects

  • sw-goog (Name just for demo purposes)
    A library that is imported in the service worker and makes use of the smaller modules to simplify code overall. Akin to sw-toolbox.
  • sw-goog-cli
    A CLI tool that can generate a complete service worker for your project, using the smaller modules configured for the best caching possible.

Questions

  1. Should the CLI offer a mode that generates a standalone service worker, one that serves as an "OK" implementation using our tools but can be edited by hand afterwards? We've seen from a few external developers that this is how they've been using sw-precache.
  2. Should smaller modules like offline analytics be included in sw-goog and / or sw-goog-cli? If so, how do we decide what's in and what's not? If not, is there anything we can do to just drop it in and make it work? Maybe sw-goog can look for known modules, or is that too magical?
  3. Should we use CDNs for this stuff? sw-goog-cli has total control so it's really easy to pull in the required files for the developer, but sw-goog may be easier to use via a CDN rather than npm install; should we encourage that? We could do the same for the individual modules.

@addyosmani

Should the CLI offer a mode that generates a standalone service worker, one that serves as an "OK" implementation using our tools but can be edited by hand afterwards? We've seen from a few external developers that this is how they've been using sw-precache.

What about the current behavior being the default, but a --standalone option being made available for folks that want to generate a once-off version they then hand-maintain and edit themselves? Is that being too flexible?

Should smaller modules like offline analytics be included in sw-goog and / or sw-goog-cli? If so, how do we decide what's in and what's not? If not, is there anything we can do to just drop it in and make it work? Maybe sw-goog can look for known modules, or is that too magical?

Imo smaller modules (like analytics) should be considered opt-in pieces you install into your project and can hook into the sw-goog-cli, but aren't included by default. This is in the spirit of avoiding the kitchen-sink as much as one can. Keep it minimal.

There are a few ways this can be done: sw-goog install analytics could try installing sw-goog-analytics and scaffolding it up to your project. Or, it could be less prescriptive and we simply tell folks to npm install any extra pieces they need and document how to hook them up to the other sw-* libraries they are using.

I wonder if there's value in us getting atomic with our caching strategies as separate modules too (e.g. sw-goog-cache-first, sw-goog-cache-fallback, etc.).

Should we use CDNs for this stuff? sw-goog-cli has total control so it's really easy to pull in the required files for the developer, but sw-goog may be easier to use via a CDN rather than npm install; should we encourage that? We could do the same for the individual modules.

Our current libraries get the most use in larger companies who may have security or other requirements around self-hosting. I wonder if that means there's less value in using CDN hosting for these pieces, but it could be useful from a prototyper's point of view.

@jeffposnick

Yes, I've been thinking about/prototyping more of the individual pieces, and less of the bigger picture. It's good to think about both!

Here are my thoughts about the higher-level questions from both @gauntface and @addyosmani's responses:

Should the CLI offer a mode that generates a standalone service worker, one that serves as an "OK" implementation using our tools but can be edited by hand afterwards? We've seen from a few external developers that this is how they've been using sw-precache.

I might change my mind given more thought/after playing with a prototype, but my current thinking is that we should ship a JavaScript module (with a CLI wrapper) that emulates what sw-precache's staticFileGlobs and dynamicUrlToDependencies do, and outputs a single JavaScript module containing both the "manifest" of file hashes as well as the install/activate handlers and the logic for the shouldHandle() predicate / handle() handler used in routing. (This amounts to precache.js in my proposal above.) The other things that sw-precache currently does that are more "magical", like handling fallback routes, ignoring certain URL parameters, or treating / as a synonym for /index.html, could be accomplished via the routing/cache configuration helpers that we're exposing elsewhere.
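As a very rough sketch of what that generated module's routing-facing surface could look like (file names, hashes, and cache name are invented for illustration):

// precache.js -- sketch of build-time-generated output. The install/activate
// handlers would live alongside this; only the pieces referenced above are shown.
const cacheName = 'precache-v1';
const manifest = [
  {url: '/index.html', hash: 'd41d8cd9'},
  {url: '/app.js', hash: '3f786850'}
];

export function shouldHandle(event) {
  const {pathname} = new URL(event.request.url);
  return manifest.some(entry => entry.url === pathname);
}

export function handler({event}) {
  return caches.match(event.request)
    .then(cached => cached || fetch(event.request));
}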

We could start with that minimal approach and see whether it's actually necessary to ship a "generate your SW" tool as a follow-up. The one thing that a "generate your SW" tool would have going for it is that we could have it also take care of the bundling step for you, which could make it more accessible for developers not familiar with Rollup/webpack. But... I'm going to assume that would be a secondary deliverable.

Should smaller modules like offline analytics be included in sw-goog and / or sw-goog-cli? If so, how do we decide what's in and what's not? If not, is there anything we can do to just drop it in and make it work? Maybe sw-goog can look for known modules, or is that too magical?

My vote would be no. I don't know that CLI helpers to integrate them would be worthwhile for the initial implementation. I'm approaching this from the perspective of developers owning the JavaScript in their top-level service worker file and needing to explicitly opt-in to behaviors. We'd want the API surfaces to be similar and friendly to use, and could put out "recipes" for various scenarios. (This might be a symptom of me over-correcting for how opaque the sw-precache-generated service worker was...)

Should we use CDNs for this stuff? sw-goog-cli has total control so it's really easy to pull in the required files for the developer, but sw-goog may be easier to use via a CDN rather than npm install; should we encourage that? We could do the same for the individual modules.

I'm assuming that for the initial implementation, when we effectively have to require (hah!) that developers use local npm modules + a Rollup/webpack build, CDNs won't be relevant. And existing CDNs like https://unpkg.com should theoretically "just work" if we did find ourselves needing that later on, right?

I wonder if there's value in us getting atomic with our caching strategies as separate modules too (e.g. sw-goog-cache-first, sw-goog-cache-fallback, etc.).

Folks consuming our ES2015 module interface (which should be everyone, until the issues with importScripts are resolved) would be able to benefit from named exports (see the prototype at https://github.com/GoogleChrome/sw-helpers/blob/future-of-sw-tooling/projects/sw-runtime-caching/src/index.js). I think it's crucial that we expose a way of configuring the cache behavior (cache name, expiration policy, notification of updates, etc.) associated with those runtime strategies that works in a standalone environment and doesn't rely on assumptions about the global state.
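For example, a standalone usage sketch, with a hypothetical expiration helper invoked directly from a hand-written fetch handler rather than via the Route framework (the expireEntries() method is invented for illustration):

// Sketch only: the behavior is configured and invoked explicitly, with no
// reliance on global framework state.
import {CacheExpirationBehavior} from 'sw-cache-expiration';

const expiration = new CacheExpirationBehavior({maxEntries: 10});

self.addEventListener('fetch', event => {
  if (!event.request.url.includes('/thumbnails/')) {
    return;
  }
  event.respondWith(caches.open('thumbnails').then(async cache => {
    const cached = await cache.match(event.request);
    if (cached) {
      return cached;
    }
    const response = await fetch(event.request);
    await cache.put(event.request, response.clone());
    // Hypothetical method: trims the named cache down to maxEntries.
    await expiration.expireEntries({cacheName: 'thumbnails'});
    return response;
  }));
});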

@addyosmani

I forgot to comment on your strawman suggestion! The configuration array reminds me quite a lot of how Webpack plugins currently get configured. That is to say, I find the below relatively straightforward to grok:

  configuration: [
    new goog.CacheNameBehavior({name: 'image-cache'}),
    new goog.CacheExpirationBehavior({
      maxEntries: 10,
      maxAgeSeconds: 3600
    }),
    new goog.CacheBroadcastBehavior({
      channelName: 'cache-changes',
      headers: ['last-modified', 'etag', 'content-length']
    })
  ]
});

This part felt like we could make it a little less verbose, however:

new goog.routing.Route({
  when: ({url, event}) => url.origin.startsWith('https://imagecdn.com'),

@jeffposnick

jeffposnick commented Oct 5, 2016

Yeah, that's not the best example. My general idea is that a predicate that takes in {url, event} as a parameter would give us maximum flexibility, and we could then expose some predicate-factories on top of that. So, the most fluent usage might look like this (assuming ES2015 module syntax, which also just looks nicer):

import {Route, matchesRegexp, pathEquals, registerRoutes} from 'routing';
import {staleWhileRevalidate, networkFirst} from 'runtime-caching';

const routes = [
  new Route({
    when: matchesRegexp(/\.js$/),
    handler: networkFirst
  }),
  new Route({
    when: pathEquals('/path/to/file.html'),
    handler: staleWhileRevalidate
  })
];

registerRoutes({routes});

And then routing.js would have those predicate-factories defined as:

export function matchesRegexp(r) {
  return ({url}) => r.test(url.href);
}

export function pathEquals(p) {
  return ({url}) => url.origin === self.location.origin && url.pathname === p;
}

// ... matchesExpressRoute to aid in sw-toolbox conversion, etc.
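The matchesExpressRoute factory hinted at above could plausibly wrap the same path-to-regexp package that sw-toolbox already depends on. A sketch, assuming its classic API where calling it with a path string returns a RegExp:

// Sketch only, not the actual implementation.
import pathToRegexp from 'path-to-regexp';

export function matchesExpressRoute(expressRoute) {
  const regexp = pathToRegexp(expressRoute);
  return ({url}) =>
    url.origin === self.location.origin && regexp.test(url.pathname);
}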

@addyosmani

addyosmani commented Oct 7, 2016

It occurred to me that it might be valuable to also loop in @NekR, @robwormald and @rwjblue, who have also been hacking on service worker tooling, for their input. In particular, would modularizing sw-toolbox/sw-precache and the pieces around them offer up lower-level things that would have made it easier for them to write the tooling they've worked on?

@NekR

NekR commented Oct 17, 2016

Should the CLI offer a mode that generates a standalone service worker, one that serves as an "OK" implementation using our tools but can be edited by hand afterwards? We've seen from a few external developers that this is how they've been using sw-precache.

I think this is something weird. Automated tools are automated for a reason; this feels more like an error-prone anti-pattern. Unless you mean that it would generate library-driven code, not a pure SW. I feel like a pure SW isn't something a regular developer should write. There are too many pitfalls; it's a must to use a library unless you're a Facebook/Flipkart or know for sure what you are doing. e.g. I wouldn't even write a pure SW myself in my new projects. That's too dangerous.

The way of handling configuration in #44 (comment) looks good to me. If I were going to write something with a SW library, I would definitely prefer this way. I don't know why, maybe because I use webpack too much.

@addyosmani modified the milestone: sw-goog / Service Worker Tooling.next on Oct 18, 2016
@felipenmoura

Very promising thread.

I've worked hard on DSW. It takes a very different approach, but I think each project can help and inspire the other.

In DSW we do have an init command that generates the basic files and structure for a PWA, even applying some changes to the index.html file if it exists. We are still gathering feedback to see if this is indeed the best approach, though.

I'm now working on unit tests for it (it turned out to be a little bit trickier than I expected, testing service workers).

For sw-helpers, I find the idea of installing and scaffolding modules into projects as needed interesting. Developers are used to that from babel, or even gulp or grunt, for example.

@NekR

NekR commented Oct 26, 2016

I'm still not sure I'm sold on "modular" functionality. It might be too much abstraction. Remember modular mobile phones: they sounded very good and promising but didn't really work out. Babel, though, is indeed a good example of modularity.

I think it makes sense (at least to me) to wait until there are a lot of features/ways to extend a SW; at that point it would be worth splitting everything into modules so as not to bloat installs or the SW file itself. At the moment there are barely a couple of pluggable features, if we talk about offline/caching only. It may make sense for BackgroundSync/PushNotifications/other stuff, though.

@jeffposnick

We've got a more concrete proposal detailed in #61

I'm going to close this issue and encourage folks currently CC:ed on this to check out the proposal there, as this thread has a lot of earlier discussions which might not be relevant.
