
Should EventSource and WebSocket be exposed in service workers? #947

Closed
annevk opened this issue Aug 10, 2016 · 63 comments

Comments

@annevk
Member

annevk commented Aug 10, 2016

It seems that, like XMLHttpRequest, we'd best treat these as "legacy APIs" for which you can use fetch() instead (at least, once we have upload streams).

Also with the lifetime of a service worker it's unlikely these APIs are useful.
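A minimal sketch of the kind of replacement being suggested, consuming a streamed response with fetch() and a ReadableStream reader instead of EventSource; the /events URL is hypothetical, and this does not reproduce EventSource's automatic reconnection or last-event-id handling:

// Read a streamed response chunk by chunk with fetch() + ReadableStream.
async function readEvents(url) {
    const response = await fetch(url)
    const reader = response.body.getReader()
    const decoder = new TextDecoder()
    for (;;) {
        const { value, done } = await reader.read()
        if (done) break
        // Each chunk still needs to be parsed/framed by application code.
        console.log('chunk:', decoder.decode(value, { stream: true }))
    }
}

readEvents('/events')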

@nolanlawson
Member

FWIW, I just tested in html5workertest, and it seems Firefox 48 doesn't expose either EventSource or WebSocket inside a ServiceWorker, but Chrome 52 exposes both. Interestingly, Firefox 50 Dev Edition exposes WebSocket.

Related: nolanlawson/html5workertest#14

@smaug----

Yeah, Gecko doesn't have EventSource in any workers.
https://bugzilla.mozilla.org/show_bug.cgi?id=1243942 fixed WebSocket on ServiceWorkers.

@smaug----

FWIW, if we get sub-workers in SW, these, including XHR, would become usable there too, if for nothing else then for consistency.
Unless we then want to create some new type of sub-worker which doesn't do any I/O.

@flaki

flaki commented Oct 20, 2016

I was wondering, is there any legitimate use case for having WebSockets in service workers? Considering that a service worker isn't something long-running, and that WebSockets are best utilized when the connection established by the (relatively expensive) handshake lasts longer, I couldn't come up with a good use case.

(FWIW, being able to host "sub-workers" and run socket connections in there does seem to solve this and sounds like a better use case to me.)

@annevk
Member Author

annevk commented Oct 21, 2016

Basically, both WebSocket and EventSource will soon be obsolete due to Fetch + Streams. WebSocket has the additional sadness of not working with HTTP/2. That's the main reason to avoid exposing them in new places.

However, as @smaug---- said, if service workers get sub-workers, and we don't make those different from normal workers, all these restrictions are getting rather arbitrary.

@smaug----

Basically, both WebSocket and EventSource will soon be obsolete due to Fetch + Streams

FWIW, it is very unclear to me how Fetch + Streams would let the UA do memory-handling optimizations for Blob messages similar to what WebSocket (and XHR) allow: (huge) Blobs can be stored in temporary files.

@annevk
Member Author

annevk commented Oct 23, 2016

fetch(url).then((res)=>res.blob()) is as efficient. Fair point for streaming though. Do we have data as to whether that matters there?
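For comparison, a minimal sketch of receiving a large binary resource as a Blob either way (the URLs are hypothetical); whether the UA spills the Blob to a temporary file is an implementation detail in both cases:

// fetch(): one request, one Blob.
fetch('/big-file').then((res) => res.blob()).then((blob) => {
    console.log('fetched blob, size:', blob.size)
})

// WebSocket: each binary message arrives as a Blob ('blob' is also the default binaryType).
const ws = new WebSocket('wss://example.com/files')
ws.binaryType = 'blob'
ws.onmessage = (e) => {
    console.log('received blob, size:', e.data.size)
}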

@smaug----

I'm not aware of data for streaming + blob. But if it is needed for the non-streaming case, why wouldn't it be needed for the streaming case? (I could imagine some web file-sharing service wanting to use a streaming-like API to pass many files, rather than doing one fetch per file.)
Gecko does not currently store WebSocket blobs in temporary files, but I consider that a bug, since it means the browser's memory usage may be way too high in certain cases, and XHR and WebSocket should have similar blob handling.

@nolanlawson
Member

both WebSocket and EventSource will soon be obsolete due to Fetch + Streams

You could also add "sendBeacon" to that list of obsolete APIs 😉

@annevk
Member Author

annevk commented Oct 24, 2016

@smaug---- well, with HTTP/2 requests are cheaper since the connection stays alive for longer, so if you already have an open connection for your bi-directional communication channel over a request-response pair, you might as well just send another request to get the actual file. That way it doesn't block the server from sending other messages meanwhile either.

@ricea
Contributor

ricea commented Dec 7, 2016

While it's true that WebSockets do not fit well with the ServiceWorker model, it is not the case that WebSockets are supplanted by the fetch API. Their use cases are different. It is unfortunate that WebSocket over HTTP/2 is not a thing (yet?), but arguably, for many of the things WebSockets are actually used for, that is irrelevant.

@zdila

zdila commented Dec 7, 2016

We use a service worker to indicate an incoming call. It starts with a push notification. The service worker then opens a WebSocket on which it receives a message indicating whether the ringing is active. If so, it shows a Notification and waits until it receives a "ring_end" message, after which it closes the Notification and the connection. Without WebSocket we would need to use HTTP (long) polling.
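A minimal sketch of this flow inside the service worker, assuming a hypothetical wss://example.com/call-signaling endpoint and illustrative 'ring_start'/'ring_end' message shapes; the WebSocket only lives as long as the push event's waitUntil() promise, which is exactly the lifetime concern discussed above:

self.addEventListener('push', (event) => {
    event.waitUntil(new Promise((resolve) => {
        const ws = new WebSocket('wss://example.com/call-signaling')
        ws.onmessage = async (e) => {
            const msg = JSON.parse(e.data)
            if (msg.type === 'ring_start') {
                // Show the incoming-call notification while ringing is active.
                await self.registration.showNotification('Incoming call', {
                    body: msg.caller,
                    tag: 'incoming-call'
                })
            } else if (msg.type === 'ring_end') {
                // Dismiss the notification and release the connection.
                const shown = await self.registration.getNotifications({ tag: 'incoming-call' })
                shown.forEach((n) => n.close())
                ws.close()
                resolve()
            }
        }
        ws.onclose = () => resolve()
        ws.onerror = () => resolve()
    }))
})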

@annevk
Member Author

annevk commented Dec 7, 2016

@ricea how are they not supplanted by the Fetch API? I don't think we should do WebSocket over H/2. Not worth the effort.

@tyoshino
Contributor

tyoshino commented Dec 8, 2016

@zdila What did you mean by "incoming call" and "If yes"? Is your application a voice chat?

Does the WebSocket receive only one message for ring start and one for ring end? Or, while the ringing is active, does the server keep periodically sending a message to indicate the ringing is still active? Or are the received WebSocket messages used to update the existing notification? Why can't push notifications be used for these purposes in addition to the first push notification?

@tyoshino
Contributor

tyoshino commented Dec 8, 2016

@annevk It's off topic (or too general a topic), but I've recently started sharing an idea named WiSH at the IETF. Here's the I-D: https://tools.ietf.org/html/draft-yoshino-wish-01. There are also some "real" WS/HTTP2 proposals (ideas that use HTTP/2-level machinery, such as mapping WS frames to HTTP/2 frames). But WiSH just uses the Fetch API and Streams to provide almost equivalent functionality to WebSocket for now, the HTTP/2 era, the QUIC era, and the future, while taking care of migrating existing WebSocket API users off the WebSocket protocol over TCP.

I think that providing some message-oriented API on the web platform makes sense even after we make the HTTP API powerful enough. We already have the WebSocket API, and it would be reasonable to keep providing almost the same interface and evolve its capability (e.g. make it able to benefit from HTTP/2 and QUIC). Until that evolution happens, the combination of the WebSocket API and the WebSocket protocol would stay there to satisfy the demand.

Possible simplification of the architecture might motivate disallowing WebSockets in SW, but we haven't done that evolution yet. I think we should keep WebSockets available regardless of the context (window, worker, SW) for now, if the use case is reasonable enough.

(I'd like to hear your general opinion on the WiSH idea somewhere. Should I file an issue at the whatwg/fetch repo for this?)

@tyoshino
Contributor

tyoshino commented Dec 8, 2016

Also with the lifetime of a service worker it's unlikely these APIs are useful.

Yeah. The nature of SW would make most of WS's key features useless. I'd like to understand zdila's use case more.

Regarding efficient streamed blob receiving, I basically agree with Anne's answer at #947 (comment). We haven't had any real study on whether anyone has been utilizing this capability, though, so it's not backed by data.

@zdila

zdila commented Dec 8, 2016

What did you mean by "incoming call" and "If yes"? Is your application a voice chat?
@tyoshino yes, it is a video chat application.

Simplified, we receive only two messages via WebSocket, as you described: start ringing with caller details, and stop ringing. It can be reworked to use two push notifications as you described. It would just mean for us not to reuse an existing API which is now used in several other cases.

@tyoshino
Contributor

tyoshino commented Dec 8, 2016

Thank you @zdila. Then I recommend that you switch to that method (two pushes) for this use case. As Anne said and flaki summarized at #947 (comment), the WebSocket may get closed when the Service Worker instance is shut down even if the server didn't intend to signal ring_end.

reuse an existing API which is now used in several other cases

I see.


So, if one has an existing WebSocket-based service, it would be convenient if it also worked in a service worker, though it requires event.waitUntil() to work correctly and shouldn't last for a long time. How about background sync + WebSocket in SW?
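A minimal sketch of that combination, assuming a hypothetical runShortWebSocketExchange() helper that opens a WebSocket, performs a bounded exchange (much like the sketch above), and resolves when done; note that the Background Sync API (registration.sync) is not available in every browser:

// In a page: request a one-off background sync.
navigator.serviceWorker.ready.then((reg) => reg.sync.register('ws-sync'))

// In the service worker: keep it alive only for the duration of the exchange.
self.addEventListener('sync', (event) => {
    if (event.tag === 'ws-sync') {
        event.waitUntil(runShortWebSocketExchange()) // hypothetical helper
    }
})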

@annevk
Member Author

annevk commented Dec 9, 2016

We could discuss WiSH over at whatwg/fetch, sure. I think the main problem with keeping WebSocket in is that it'll make it harder to remove in the future. Whereas the reverse will be easy to do once we have explored alternatives.

@zdila

zdila commented Dec 9, 2016

@tyoshino I realised that the problem with the two-push method is that Chrome now requires showing a notification for every push (userVisibleOnly must be true). This means the "ring end" push, which would be meant to hide the notification, will have to actually show one.

We actually have this problem already, because after the first push we may find out that there is no ringing in progress any more (it has just gone).
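For reference, a minimal sketch of the subscription that triggers this constraint; VAPID_PUBLIC_KEY is a placeholder. With userVisibleOnly set to true, Chrome expects each push event to result in a visible notification, which is why a "ring end" push cannot silently dismiss an existing one:

async function subscribeForPush() {
    const registration = await navigator.serviceWorker.ready
    return registration.pushManager.subscribe({
        userVisibleOnly: true,                  // required by Chrome
        applicationServerKey: VAPID_PUBLIC_KEY  // placeholder for the server's public key
    })
}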

@ricea
Contributor

ricea commented Jan 5, 2017

@zdila If you just need a single notification then a hanging GET will be more efficient than a WebSocket in terms of network bytes and browser CPU usage.
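A minimal sketch of the hanging-GET variant, assuming a hypothetical /ring-status endpoint whose response only completes once the ringing has ended:

self.addEventListener('push', (event) => {
    event.waitUntil(
        fetch('/ring-status?call=123')     // held open by the server until the ring ends
            .then((res) => res.text())     // wait for the full response body
            .then(() => self.registration.getNotifications({ tag: 'incoming-call' }))
            .then((shown) => shown.forEach((n) => n.close()))
    )
})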

@zdila

zdila commented Jan 5, 2017

@ricea then I would need to do ugly polling to find out when the ring ended. Not an option.

@ricea
Contributor

ricea commented Jan 5, 2017

@annevk WebSocket is more efficient for small messages. It provides a drop-in replacement for TCP when interfacing with legacy protocols. It is simple to implement on existing infrastructure.

@annevk
Member Author

annevk commented Jan 5, 2017

How does a request body/response body channel established through H/2 rather than WebSocket not have the same benefits?

@ricea
Contributor

ricea commented Jan 5, 2017

An H/2 frame has a 9 octet header, compared to 2 octets for a small WebSocket message. H/2 is a complex multiplexed beast which is not very much like a TCP connection. It is not simple to implement.
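To make that concrete with a rough back-of-the-envelope comparison (ignoring TLS records and TCP/IP headers, and taking the numbers above as given): a 10-byte message sent server-to-client costs about 10 + 2 = 12 bytes as a WebSocket frame (the 2-byte header applies to payloads under 126 bytes; client-to-server frames add a 4-byte masking key), versus about 10 + 9 = 19 bytes as a minimal HTTP/2 DATA frame.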

@puhazh

puhazh commented Jan 18, 2017

We use WebSockets in ServiceWorkers for syncing IndexedDB data. The site is capable of working in full offline mode; we have multiple input forms in our application, and when the user saves a form the data is written to IndexedDB and synced across clients using ServiceWorkers. This is a two-way sync, and all clients are kept updated with any changes to the data. The sync happens in the service worker because having the sync on the page would cause issues when the user navigates away, and it would require a WebSocket connection to be opened on each page load, as there is no other way to keep a persistent connection open.
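A very rough sketch of the server-to-client direction of such a sync inside the service worker, assuming hypothetical names throughout (a wss://example.com/sync endpoint, a 'forms' object store keyed by id, and JSON messages shaped like { id, ...fields }):

function openDb() {
    return new Promise((resolve, reject) => {
        const req = indexedDB.open('app-db', 1)
        req.onupgradeneeded = () => req.result.createObjectStore('forms', { keyPath: 'id' })
        req.onsuccess = () => resolve(req.result)
        req.onerror = () => reject(req.error)
    })
}

const syncSocket = new WebSocket('wss://example.com/sync')
syncSocket.onmessage = async (e) => {
    const record = JSON.parse(e.data)
    const db = await openDb()
    // Write the incoming change locally
    db.transaction('forms', 'readwrite').objectStore('forms').put(record)
    // Tell any open pages that their local copy changed
    const pages = await self.clients.matchAll()
    pages.forEach((client) => client.postMessage({ type: 'record-updated', id: record.id }))
}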

@jakearchibald
Contributor

jakearchibald commented Mar 31, 2017

@annevk is EventSource being deprecated? I thought its auto-reconnection stuff made it a nice high-level API? I agree it isn't useful in a service worker though.

The reason I'm nervous about removing WebSockets from SW is that it prevents use of a protocol. Removing XHR wasn't so bad, as the intention is for fetch to be able to do everything it does.

@annevk
Member Author

annevk commented Mar 31, 2017

EventSource is not deprecated, but you can do everything it can do with Fetch. The same goes for WebSocket, though WebSocket has some framing advantages, as pointed out above, and maybe some connection advantages as long as HTTP sits on top of TCP.

Anyway, if everyone exposes these, let's close this and hope we don't regret it later.

@kentonv

kentonv commented Oct 12, 2017

On the theoretical side, I would make two arguments:

  • The built-in message framing provided by WebSocket is pretty useful. While it could be rebuilt as a library on top of streams, I feel like a lot of modern web APIs are intended to avoid the need for using libraries to do basic tasks.
  • My previous argument in this thread: while bidirectional streaming in an HTTP request is supported by the protocol, it's the kind of thing that middleboxes are highly likely to break without realizing it, possibly in subtle ways that don't fail fast (e.g. infinite buffering). WebSocket, by contrast, is very clearly intended to operate in a bidirectional streaming mode, and will fail fast when it's not supported.

That said I don't feel super-strongly about this, from the theoretical angle.

@wenbozhu

Purely on the theoretical side,

https://tools.ietf.org/html/rfc7540#section-8.1

doesn't really spec out full-duplex (simplex bidi is never an issue, i.e. upload followed by a download). Rather, it clarifies early 2xx completion is allowed (as opposed to http/1.1 which states early error responses are to be expected). While "early 2xx completion" may enable full-duplex support, it's not quite the same thing. More specifically, early 2xx completion is about committing an OK response (i.e. generating all the headers) while request body is still in flight. When the server decides to produce an early response, any request body that has not been received is deemed "useless" data. This is not really the case for most full-duplex use cases, where response data is causally generated from the request data, albeit in a streaming fashion (e.g. speech translation).

===

For user-space protocols that use http/2 (framing) purely as a transport, HTTP/2 (being a transport for HTTP/1.1 semantics) can certainly be treated as a full-duplex bidi transport (i.e. multiplexed TCP streams), subject to middlebox interpretation (which is a big unknown to me).

@ricea
Contributor

ricea commented Oct 13, 2017

@annevk Part of the extra size of HTTP/2 frame headers is explained by needing stream IDs, which is an inescapable consequence of being multiplexed. I must confess I don't know what the rest is.

There were three things I think I was talking about when I said "HTTP/2 is not simple to implement":

  1. From a simple engineering standpoint, implementing the HTTP/1.1 protocol is easy, as long as you forget about chunked transfer encoding. I've done it several times just for personal projects. I would never contemplate implementing HTTP/2 for a personal project. The WebSocket protocol has lots of seemingly unnecessary fiddly bits, but a small, simple implementation can still be put together quickly.
  2. From a deployment standpoint, small-scale HTTP deployments are ubiquitous, and adding WebSocket support to an existing deployment is easy in many environments. Adding HTTP/2 to a large-scale deployment where you're already using a reverse-proxy setup is pretty straightforward, but for a small-scale deployment you suddenly have a whole extra bundle of complexity to deal with. I expect this to change as even small deployments start from scratch with HTTP/2.
  3. From a conceptual standpoint, TCP:TLS:WebSocket is a 1:1:1 relationship. TCP:TLS:HTTP/2:fetch is a 1:1:1:N relationship. When all you need is a single stream, multiplexing is pure cognitive overhead.

@ricea
Contributor

ricea commented Oct 13, 2017

@kentonv While lots of developers get surprisingly far using bare WebSockets, I couldn't recommend it for general-purpose applications over the open Internet[1]. There are far too many people stuck in environments where only HTTP/1.1 will get through. So, in practice, you need fallbacks, and to make that not be painful you need some kind of library.

[1] Games seem to be an exception. Game developers appear to be quite happy to say "if my game doesn't work with your ISP, get a new ISP".

@annevk
Member Author

annevk commented Oct 13, 2017

So if we assume the cost for HTTP/2 approaches zero in the future, the main features WebSocket has that Fetch does not would be:

  1. Smaller frames.
  2. Dedicated connection.

It feels like we should explore exposing those "primitives" to Fetch somehow if they are important.

It worries me a little bit that @wenbozhu claims that HTTP is not full-duplex whereas the discussion in whatwg/fetch#229 concluded it pretty clearly is. I hope that's just a misunderstanding.

@ricea
Contributor

ricea commented Oct 13, 2017

My position on the full-duplex issue is that HTTP/1.1 will never be full-duplex except in restricted environments. The interoperability issues are intractable.

I know of no first-order interoperability issues with HTTP/2, but what happens when you're in an environment where HTTP/2 doesn't work? Should the page behave differently depending on the transport protocol in use?

@kentonv

kentonv commented Oct 14, 2017

@annevk

  1. Smaller frames.

I think this argument is confused. HTTP/2 (AFAIK) doesn't have frames in the WebSocket sense. HTTP/2 frames are an internal protocol detail used for multiplexing but not revealed to the application. WebSocket frames are application-visible message segmentation that have nothing to do with multiplexing. So comparing their sizes doesn't really make sense; these are completely unrelated protocol features.

So I would rephrase as:

  1. Built-in message segmentation.
  2. Dedicated connections (no multiplexing overhead).

@ricea FWIW Sandstorm.io (my previous project / startup) had a number of longstanding bugs that would manifest when WebSockets weren't available. In practice I don't remember ever getting a complaint from a single user who turned out to be on a WebSocket-breaking network. We did get a number of reports from users where it turned out Chrome had incorrectly decided that WebSockets weren't available and so was fast-failing them without even trying -- this was always fixed by restarting Chrome.

I would agree that for a big cloud service, having a fallback from WebSocket is necessary. But there are plenty of smaller / private services where you can pretty safely skip the fallback these days.

@ricea
Contributor

ricea commented Oct 16, 2017

So I would rephrase as:

  1. Built-in message segmentation.
  2. Dedicated connections (no multiplexing overhead).

The other factor is lower per-message overhead, when messages are sent individually.

@ricea FWIW Sandstorm.io (my previous project / startup) had a number of longstanding bugs that would manifest when WebSockets weren't available. In practice I don't remember ever getting a complaint from a single user who turned out to be on a WebSocket-breaking network. We did get a number of reports from users where it turned out Chrome had incorrectly decided that WebSockets weren't available and so was fast-failing them without even trying -- this was always fixed by restarting Chrome.

That's great news, thanks. We only have client-side metrics, so we don't have much insight into why things are failing in the wild.

The bad news is that Chrome has no logic to intentionally make WebSockets fail fast when they're not available. So if it's doing that then the cause is unknown.

@annevk
Member Author

annevk commented Oct 16, 2017

@kentonv it's a bit confused, but as @ricea points out it's likely you end up packaging such messages in H2 frames, meaning you have more overhead per message.

Should the page behave differently depending on the transport protocol in use?

That's already the case (requiring secure contexts; I think various performance APIs might expose the protocol) and would definitely be the case if we ever expose an H2 server push API. It doesn't seem like a huge deal to me. If we always required everything to work over H1, we couldn't really make progress.

@kentonv

kentonv commented Oct 21, 2017

I guess I consider "Dedicated connections (no multiplexing overhead)" and "lower per-message overhead" to be the same issue, since the overhead is specifically there to allow for multiplexing.

@annevk
Member Author

annevk commented Oct 22, 2017

Well, there's at least one proposal for WebSocket over H2 that addresses one, but not the other: mcmanus/draft-h2ws#1.

@ghost

ghost commented Mar 5, 2018

@kentonv we got web sockets in the Cloudflare workers yet?

@kentonv

kentonv commented Mar 5, 2018

@chick-fil-a-21 Currently CF Workers supports WebSocket pass-through -- that is, if all you do is rewrite requests, e.g. changing some headers or the URL, it will "just work" with a WebSocket request. We don't yet support examining or creating WebSocket content, only proxying.

@kentonv

kentonv commented Mar 7, 2018

@unicomp21 Thanks! But this issue thread probably isn't the right place to discuss Cloudflare stuff. I suggest posting a thread on the Cloudflare Community; you can @ me there if you like. You can enable the Workers beta in your Cloudflare account settings (it's one of the boxes along the top).

@subversivo58

subversivo58 commented Aug 6, 2018

Hello everyone, I had already done a quick read of this issue some time ago.

Today, testing the APIs available on the global object of a ServiceWorker, I was surprised to find WebSocket working in Chrome.

I remember coming to this discussion some time ago precisely because trying to instantiate a socket in a ServiceWorker would return an error.

I went to check whether there had been any changes I had not noticed, but it seems to me that there were none.

Before deciding to share here, I searched for information, any notes about Chrome/Chromium or even reported bugs, but I did not find anything.

So I do not really know if there were changes or if this could be an experimental feature or a bug.

  • Chrome Stable: version 68.0.3440.84 (64 bit) Ubuntu 18.04.1 LTS
  • Nginx: 1.14.0 (reverse proxy to Node)
  • Node: 10.8.0

PS:

I forgot to mention: after the ServiceWorker takes control of the page again on "refresh", the socket connection is not terminated. This was my test:

// sw.js
const WebSocketConnect = () => {
    let ws = new WebSocket("ws://localhost/signaling", ['token', 'xxxxxxxxx'])
    let keepAlive = null

    ws.onopen = function() {
        // ping the server every 5 seconds while the connection is open
        keepAlive = setInterval(() => {
            if (ws.readyState === WebSocket.OPEN) {
                ws.send('Hay')
            }
        }, 5000)
    }

    ws.onmessage = function(e) {
        console.log('Message:', e.data)
    }

    ws.onclose = function(event) {
        // stop the keep-alive timer so reconnects don't leak intervals
        clearInterval(keepAlive)
        console.log(`Socket is closed, code: ${event.code} - reason: ${event.reason}`)
        setTimeout(() => {
            WebSocketConnect()
        }, 3000)
    }

    ws.onerror = function(err) {
        console.log(err)
        //console.error('Socket encountered error: ', err.message, 'Closing socket')
        ws.close()
    }
}

WebSocketConnect()
// main.js (Node)
const http = require('http')
const PORT = process.env.PORT || 3000
const WebSocketServer = require('uws').Server
// HTTP Server
const server = http.createServer((req, res) => {
   // usually serves files by route; requests to "/signaling" are not handled here
})
server.listen(PORT)

// Socket
const wss = new WebSocketServer({
    server: server,
    path: '/signaling'
})

/*
 incremental value to test whether the ServiceWorker-controlled page closes
 the socket connection after a page refresh
*/
let increment = 1

// the message content from the ServiceWorker is ignored; reply with an incremented counter
const sendMessage = (message, websocket) => {
    increment++
    websocket.send(`increment message - ${increment}`)
}

wss.on('connection', function(ws) {
    ws.on('message', function(message) {
        sendMessage(message, ws)
    })
    // wait for the ServiceWorker to close the connection after a page refresh
    ws.on('close', function(code) {
        console.log(`Socket has been closed. Code: ${code}`) // expect 1001
    })
    // first message from connection
    ws.send('something')
})

The "onclose" event on the server does not receive an advertisement when the ServiceWorker is active and a "simple refresh" is done on the page, nor does the ServiceWorker report.

Using DevTools (I know this has nothing to do with the specification) throting for offline does not enter the socket and neither does the socket connection shown in the Network > WS

@kael

kael commented Aug 7, 2018

Seems the reactivation of WebSocket in ServiceWorker for Chrome started there.

Can you clarify a bit what's expected and what's not happening?

Also, what happens when the WebSocket server uses its native ping?

@subversivo58

subversivo58 commented Aug 7, 2018

Hi @kael

I followed the linked thread, but I did not find exactly in which version (patch, release) Chrome came to support WebSocket within a ServiceWorker. A few versions back I could not get it to work.

I expected that:

  • when refreshing the page (the user pressing F5 or a front-end window.location.reload()) or navigating to another page, the socket connection would be closed

This is not happening and the connection persists. There may be some scenario where this is helpful, but I believe this is not the standard behavior.

Just out of curiosity: the socket connection started in the ServiceWorker is not shown in Network > WS. Also, in Developer Tools the socket does not lose the connection even after simulating the offline state.

I'm not using the ws module, I'm using the native API in frontend and uws in the backend.


Edit: the socket connection is only shown on the Network > WS tab after the server explicitly shuts down the socket (or goes down).

On Firefox 61.0.1, when the user presses F5, or when window.location.reload() is used, or even when there is a navigation to another page, the socket (frontend) terminates the connection.

I do not know exactly what the correct default behavior is, or what the specification defines in these cases (if it defines it), but I believe what happens in Firefox is correct, is it not?

@ricea
Contributor

ricea commented Aug 7, 2018

As far as I know, Chrome has supported WebSockets in ServiceWorkers for as long as it has supported ServiceWorkers.

On Firefox 61.0.1, when the user presses F5, or when window.location.reload() is used, or even when there is a navigation to another page, the socket (frontend) terminates the connection.

This seems odd, since a single ServiceWorker can have multiple pages as clients. Does navigating any of the pages terminate the connection, or is one specific page special in this regard?

@subversivo58

As far as I know, Chrome has supported WebSockets in ServiceWorkers for as long as it has supported ServiceWorkers.

I did not know this; a few versions back (I cannot remember exactly which) I tried to use WebSocket in the ServiceWorker script in Chrome and received an error.

I have two pages served by a ServiceWorker (localhost): index.html and about.html

Firefox opens the WebSocket connection and maintains it for only 30 seconds (+/-). Before reaching this limit, if the user presses the F5 key or window.location.reload() is called, the page refreshes and there is no connection loss; it is as if the countdown restarts. If you reach/pass this limit, the connection is closed silently without sending a "close frame" ... some time later the server reports 1006 (CLOSE_ABNORMAL).

In Firefox, navigating between cached pages (within this "30-second" limit) produces the same result: the open connection from the previous page is closed silently without sending a "close frame".

In Chrome there is no time limit and browsing between different pages already cached does not end the connection.

I'm not sure which of the two implementations (Chrome or Firefox) follows the defined behavior. I tried to find a reference in the spec, but I think this does not concern the WebSocket spec but rather the ServiceWorker one.

I believe Firefox is closer to the ServiceWorker specification

Note: ServiceWorkerGlobalScope object provides generic, event-driven, time-limited script execution contexts that run at an origin. Once successfully registered, a service worker is started, kept alive and killed by their relationship to events, not service worker clients. Any type of synchronous requests must not be initiated inside of a service worker.

SPEC Reference

Although I do not know if this is intentional, since no "close frame" is sent.

I found the connection persistence in Chrome interesting but, after all, which of these two distinct implementations is correct (or closer to being correct)?

@ricea
Contributor

ricea commented Aug 8, 2018

I did not know this; a few versions back (I cannot remember exactly which) I tried to use WebSocket in the ServiceWorker script in Chrome and received an error.

I know of one version where it was broken due to a bug. It wasn't intentional.

Firefox opens the WebSocket connection and maintains it for only 30 seconds

It sounds like they're trying to avoid people assuming that they can make long-term connections from a ServiceWorker. But having a different lifetime for the WebSocket and the variable that references it is really confusing, so I don't consider this a good idea.

If you reach/pass this limit, the connection is closed silently without sending a "close frame" ... some time later the server reports 1006 (CLOSE_ABNORMAL).

This sounds like a bug, since it's a straightforward RFC6455 violation. See https://tools.ietf.org/html/rfc6455#page-44.

@subversivo58

@ricea ok

This sounds like a bug, since it's a straightforward RFC6455 violation. See https://tools.ietf.org/html/rfc6455#page-44.

But would this (closing without sending a "close frame") be part of Firefox's implementation of WebSockets in ServiceWorker, or would it originate from the ServiceWorker shutdown routine?

I confess to being confused ... should I assume that neither browser's implementation should be followed/relied upon?

@annevk
Member Author

annevk commented Aug 8, 2018

Closing this since this can no longer be "fixed" now that it's shipping everywhere.

@annevk annevk closed this as completed Aug 8, 2018
@wanderview
Member

wanderview commented Aug 8, 2018

Firefox opens the WebSocket connection and maintains it for only 30 seconds (+/-). Before reaching this limit, if the user presses the F5 key or window.location.reload() is called, the page refreshes and there is no connection loss; it is as if the countdown restarts. If you reach/pass this limit, the connection is closed silently without sending a "close frame" ... some time later the server reports 1006 (CLOSE_ABNORMAL).

In Firefox, navigating between cached pages (within this "30-second" limit) produces the same result: the open connection from the previous page is closed silently without sending a "close frame".

I believe this is Firefox implementing the idle timeout of the service worker. If there is no functional event waitUntil() holding the service worker alive, it will be stopped to conserve system resources. The idle timeout is restarted when another functional event (like a FetchEvent) is dispatched to the service worker.

In Chrome there is no time limit and browsing between different pages already cached does not end the connection.

Do you have devtools open while running this test? At least at one point Chrome did not time out service workers while devtools were open. Not sure if that is still the case.
