
Multi-author sites #468

Closed
mnot opened this issue Sep 20, 2014 · 72 comments

@mnot
Member

mnot commented Sep 20, 2014

As discussed, https:// sites that have multiple authors may be surprised to discover that user a can now overwrite content for user b.

What's needed:

  1. text (Security Considerations?) explaining the attack
  2. evangelisation to warn sites
  3. maybe a csp-ish opt-out (or -in, but @slightlyoff isn't hot on that)

Will do a pull for 1.

@slightlyoff
Contributor

There has been quite a lot of further discussion on this point. Looks like we're going to sacrifice this goat to accommodate sites which, frankly, are already broken. Le sigh.

We're likely going to go with a CSP-based OPT IN to enable Service Workers. The straw man is to require a CSP header for the on-origin SW script.

Thoughts?

@KenjiBaheux
Collaborator

Filed crbug.com/423983 for tracking in Blink

@annevk
Member

annevk commented Oct 16, 2014

The amount of developer hurdles keeps growing, but we have required opting into Danger Zone for less.

Are we still requiring that the SW is served with a correct JavaScript MIME type?

A CSP header on the SW script itself affects what the SW can do. It would make more sense to me if the opt-in came with the document or worker that wanted to use a SW.

@slightlyoff
Contributor

Yes, still requiring valid JS. The CSP header affecting what the SW can do seems good? We want more people setting CSP, no?

Do you have a straw-man for another way of doing this that you prefer?

@annevk
Member

annevk commented Oct 16, 2014

A new token for Content-Security-Policy: serviceworker. To be set by global environment that is to register service workers rather than the service worker script.

@inexorabletash
Member

We need to make sure some spec says you can't set this new CSP token via meta tags.

(I'm a CSP n00b, but it looks like this is the first opt-in CSP directive? Are there any other places where the general opt-out design of CSP will require special handling?)

@jakearchibald
Contributor

I don't think page-based CSP is a good idea here; the impact of a SW goes beyond the page. Also, CSP is so far opt-out, and we're proposing opt-in.

I think we should reconsider the SW script location when it comes to scope. /~jakearchibald/my/app/sw.js may not be registered for scopes wider than /~jakearchibald/my/app/. There's been confusion when I've proposed this in the past, so I want to stress that it's the location of the service worker script that limits scope, not the registering page.

The benefit of this is we don't break hosts that are safe, such as GitHub.

If we must go for an opt-in solution, I'd rather we went for a content type like application/serviceworker+javascript, or application/javascript;serviceworker, or something along those lines. But I much prefer a solution that doesn't break hosts that happily give you your own origin.
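The path restriction being proposed here can be sketched as a pure function (the names and shape are mine, not spec text): the "directory" containing the script is the widest scope it may claim.

```javascript
// Sketch of the proposed path-based restriction: a service worker script's
// max scope is the directory it lives in. Function names are illustrative.
function maxScopeFor(scriptUrl) {
  const u = new URL(scriptUrl);
  // Everything up to and including the last "/" of the script's path.
  return u.origin + u.pathname.slice(0, u.pathname.lastIndexOf('/') + 1);
}

function scopeAllowed(scriptUrl, scopeUrl) {
  return new URL(scopeUrl).href.startsWith(maxScopeFor(scriptUrl));
}

// /~jakearchibald/my/app/sw.js may control /~jakearchibald/my/app/…
console.log(scopeAllowed(
  'https://example.com/~jakearchibald/my/app/sw.js',
  'https://example.com/~jakearchibald/my/app/')); // true
// …but not another user's space.
console.log(scopeAllowed(
  'https://example.com/~jakearchibald/my/app/sw.js',
  'https://example.com/~mnot/')); // false
```

Under this rule, a tilde-space user can only hijack their own subtree, whatever page does the registering.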

@dominiccooney

Implementer feedback: While we haven't looked at this in great depth yet, our plan at this point is to implement the CSP thing (token or header) on the Service Worker script. Our reasoning is:

  • If an attacker "takes a site offline" with a Service Worker, headers on pages are ineffective redress. The only requests the site will see will be for Service Worker scripts.
  • Having the header on the script ameliorates script reflection (eg JSONP) + XSS attacks for sites that do use Service Workers.
  • Having the metadata on the script and not the page avoids an issue with meta tags, if there is one. We didn't look into that.

The solutions in @jakearchibald's previous comment would be easier for us, frankly.

@domenic
Contributor

domenic commented Oct 17, 2014

I really like the content-type idea, mainly because then static content servers could come configured out of the box with a .swjs -> application/javascript+serviceworker mapping, and then once that gets rolled out to GitHub pages we could use service workers there. (Whereas, with the CSP solution, we're unlikely to ever get a generic host like GitHub pages to work with service workers.)

@jakearchibald
Contributor

@domenic what are your thoughts on the path-based method? It has the benefit of working securely without changing servers. Also, appcache uses the same method for restricting how FALLBACK works.

@domenic
Contributor

domenic commented Oct 17, 2014

@jakearchibald sounds pretty good. A tiny bit ugly since I like to keep my JS nice and tidy inside a /js or /assets folder, but I could make an exception for my service worker.

@jakearchibald
Contributor

@annevk @slightlyoff can I persuade you to reconsider the service-worker script path approach?

Failing that, a special content-type. This avoids the messiness of a CSP opt in, and confusions around which client's CSP it applies to.

@annevk
Member

annevk commented Oct 20, 2014

  1. A special path seems rather magical. We have a few of those (e.g. /favicon.ico) and they are not liked much.

  2. A MIME type works, but once servers start deploying it in their default configuration you're lost again, no? (Also, if anything, it should be something like text/sw+javascript.)

@jakearchibald
Contributor

  1. A special path seems rather magical. We have a few of those (e.g. /favicon.ico) and they are not liked much.

I think having to add special headers is liked less.

@annevk
Member

annevk commented Oct 20, 2014

That is fair and this is less constrained than /favicon.ico and /robots.txt. You can still pick the name, as long as it does not contain a slash.

@slightlyoff
Contributor

So a few thoughts:

  • I dislike the notion that it's CSP on the installing page that controls this. That doesn't comport with the design constraint imposed by the folks who have driven this that any random page not be able to install a SW. That means you need to be able to show affirmative control over the server in some way, and a CSP header that's different in some way to what you can set in the <meta> tag variant is...bad.
  • I want this to be fused to the script
  • Paths are bad, bad, bad. They don't actually deal with all of John's examples and I'm dubious that they are valuable.

This leads me to suggesting a header on the script file and, if we can't agree on CSP for the script file, thinking we should do something like Service-Worker-Allowed: true.

Reactions?
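As a sketch of this straw man (my reading of it; names and shapes are illustrative, and the shipped design later differed), the browser would only honour a registration if the script response carries the explicit opt-in header:

```javascript
// Sketch of the "Service-Worker-Allowed: true" straw man: registration is
// honoured only if the script response opts in. At this point in the thread
// the header is a plain boolean; it later became a max-scope override.
function optedIn(responseHeaders) {
  // Header names assumed lower-cased, as e.g. node's http module does.
  return responseHeaders['service-worker-allowed'] === 'true';
}

console.log(optedIn({ 'service-worker-allowed': 'true' })); // true
console.log(optedIn({ 'content-type': 'text/javascript' })); // false
```

The point of fusing the check to the script response is that only someone with header-level control of the server can flip it, which a mere content-uploading user cannot.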

@annevk
Member

annevk commented Oct 21, 2014

Pointer to John's examples? And an explanation of sorts why paths are bad and don't address the examples if that's not self-evident.

@jakearchibald
Contributor

CSP is opt-out, and CSP blocks requests from happening, not responses from being used. Since we're talking about something that's opt-in and blocks on the response, I can't understand why we think CSP is the answer here.

The path solution means github pages, other github raw viewers, and tilde-based sites just work. If you can put a script there, you control it.

jsbin.com allowed you to put a script in the root, which is highly unusual, and @remy already fixed that.

@johnmellor

My examples of common sites that miss out on same-origin policy protection (because they put content from non-mutually-trusted users on a single origin) were:

Of these, jsbin.com was the only one which didn't use paths to separate content from different users, and that's now fixed. It's likely there are more such sites (maybe some sites upload everything to the root with a hashed filename, or use paths within the query string, like /index.php?path=/~username), but these seem much rarer than sites which use paths to separate users.

@inexorabletash
Member

http://www.w3.org/2001/tag/issues.html#siteData-36 should be considered.

From Tim Berners-Lee:

The architecture of the web is that the space of identifiers
on an http web site is owned by the owner of the domain name.
The owner, "publisher", is free to allocate identifiers
and define how they are served.

Any variation from this breaks the web.

Path-based restrictions run afoul of this, albeit not as badly as /favicon.ico or /robots.txt, since we wouldn't be saying "you MUST use this identifier", merely "if you want to do XYZ, you MUST structure your identifiers as..."

@jakearchibald
Contributor

We're trying to fix cases that already run afoul of that: cases where part of an origin is pseudo-owned by someone else.

@remy

remy commented Oct 21, 2014

Just to chime in, I didn't fix it by separating out paths for users. http://jsbin.com/foo.js is still valid. We simply serve scripts identifying themselves as service workers: jsbin/jsbin@ce53bb2

jsbin.com allowed you to put a script in the root, which is highly unusual

The internet is a big place, "highly unusual" is going to be a much bigger number than you anticipated when this thing is shipped.

If you take the path-scoping approach, I suspect (aka gut feeling) that in years to come it'll catch out new devs, mostly because web devs tend to throw something together / copy & blindly paste before reading the fine print of the specs. And by catch out, I mean some will use path scoping and accidentally protect themselves; others won't, and it could be too late.
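The fix remy describes can be sketched as a server-side guard. Browsers send a `Service-Worker: script` request header when fetching a script for registration, so a multi-user host can detect those requests and refuse to serve user content for them. This is a sketch of the idea, not jsbin's actual code (the function name and header handling are mine):

```javascript
// Sketch: decide whether a request for user-uploaded content should be
// refused because the browser is fetching it as a service worker script.
// Assumes header names have already been lower-cased (as node's http
// module does).
function shouldBlockAsServiceWorker(requestHeaders) {
  return requestHeaders['service-worker'] === 'script';
}

console.log(shouldBlockAsServiceWorker({ 'service-worker': 'script' })); // true
console.log(shouldBlockAsServiceWorker({ accept: '*/*' })); // false
```

This works regardless of path layout, which is why it suits a host like jsbin where users can place scripts at the root.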

@jakearchibald
Contributor

Before SW, the amount of damage /A/ could do to /B/'s content depended on how reliant /B/ was on origin storage such as local storage or IDB.

The thing that was deemed unacceptable here (as far as I can tell) is that SW allowed /A/ to take over the content of /B/ even if /B/ was totally unreliant on storage, and that was new.

We've protected against that. But if /B/ itself becomes reliant on origin storage (through SW or otherwise) we cannot protect it from /A/, but that's old news.

Ideally no one would have let untrusted parties share an origin & this issue wouldn't have existed, but bleh.

@annevk
Member

annevk commented Nov 4, 2014

We are creating a false sense of security by providing protections against /X/ taking over /Y/. The same false sense of security cookies give with Path.

@mnot
Member Author

mnot commented Nov 5, 2014

We're not trying to create any sense of security here -- we're trying to avoid surprising users who have existing, deployed content that can get screwed by SW.

Getting on our collective high horse (illustration to follow) and saying "you already have these theoretical security holes" (never mind that they're not actually using cookies, etc.) is going to look like a poor excuse indeed to these people.

It's true that this is mitigated a bit by the fact that SW is https only, and that these sites are less common than they used to be. That said, I suspect that someone is going to get caught out by this, and some of the early press around SW is going to be "it's that thing that introduced a new security hole".

When I talked with @slightlyoff about this in SF, I think we came to a place where we could either address this with some form of scoping, or take the risk and just try to do an education campaign -- i.e., "if you run an HTTPS multi-user site, be aware that you need to take these steps..."

@annevk
Member

annevk commented Nov 6, 2014

Even with scoping and /A/ being a non-malicious entity, if /B/ is a malicious entity, /A/ is screwed the moment /A/ deploys service workers.

Again, restricting things where there is no security boundary will lead to /A/ feeling secure, while an XSS on /B/ or a malicious /B/ can seriously hurt /A/.

@qgustavor

With the introduction of the "same-path policy" (I don't know the correct name), blob: and filesystem: URLs, which can be considered trusted origins, can't be used as service workers (at least in the current Chrome implementation).

Is it better to keep those denied, or to change the spec so that, when used as service workers, those URLs get the same path as the script that created them?

As the FileSystem API is considered dead, and content from filesystem: URLs can be changed by any user on the same origin, it's probably better to deny those (although before the "same-path policy" they could be used as service workers in Chrome).

@jakearchibald
Contributor

With the introduction of "same-path policy" (I don't know the correct name) blob: and filesystem: URLs, which can be considered trusted origins

This was already the case. We can't use a browser storage resource for the SW as it becomes really tricky to update. You end up relying on cached assets updating cached assets, which will likely lead to lock-in.

@qgustavor

One of the main reasons for using blob:s as service workers is that it allows the updating process to be controlled by JavaScript. I don't think it is "really tricky to update": just register another blob over the current one. It doesn't open a door for attacks, as the path policy still applies.

It can be better than the current updating process, as WebCrypto can be used for improved security; or worse, in cases where content from insecure origins gets used without authentication, or when the updating process fails or isn't implemented. In case of error, simply unregister the worker. In case it isn't implemented, which will end in a lock-in, why not do the same as when cache-expiration headers are badly set: just let the user unregister the service worker, like how the cache can be cleared.

Storing those locally also opens up some possibilities, like checking the integrity of the script used (by the user in a browser interface, not by the script, except for updates). Even if hashes get used for this, which are not optimal as explained by the Tor Project, it is still better than things are now. Some people suggested PGP, but to me it's too counter-intuitive (although it can work for applications where encryption is the main focus).

@jakearchibald
Contributor

I don't think that it is "really tricky to update": just register other blob over the current one

From where? If your SW content is:

self.onfetch = e => e.respondWith(new Response("Hi"));

…you've just locked yourself in. No way to update. It's not only tricky, it's impossible.

@qgustavor

As I said, updating isn't hard, but preventing hard locks is: it's not possible to determine whether a script isn't changing because it will only update on user input, because it will update via the network, or because it's bugged. When filesystem: URLs worked in service workers I rarely got problems like this: unless it's an attack, I don't think someone would write something like your code unless it was a bug.

As bugs are the only case where lock-ins happen, I showed in my last comment some solutions for fixing them, comparing them to how lock-ins already happen and how they are fixed. Even if we require things to mitigate those problems, like user input or specific headers, considering the benefits it's still a good improvement to the spec.


@jakearchibald
Contributor

I don't think someone would write something like your code unless it was a bug.

Right, but I've definitely heard of buggy code being released before. Not by me of course. I'm ace.

When filesystem: URLs worked in service workers I rarely got problems like this

It wouldn't be exactly like my code; it would be a lot more complex, making it difficult to spot before it's too late. Also, I think it's really easy to launch something without thinking of the update path; that's why we put the new worker in charge of how it wants to update, rather than requiring something of the old worker.

E.g. I accidentally set a 1 year max-age on an appcache manifest once. It locked users in, and there was nothing I could do about it. We really don't want ServiceWorker getting into that situation.

I guess I only see downsides to blob service workers. What are the benefits?

@qgustavor

Well, we're limiting the potential of this because we're humans. Moving the updating to JavaScript allows those features, but it isn't bug-proof. If we make mistakes, the only way to fix them is by the same method the worker was transferred in the first place.

The current updating mechanism doesn't allow CDNs or code signing, isn't user configurable, can't be paused or configured programmatically (even with required user interaction), can reveal a tiny bit of browsing info via DNS, and relies on origin and path for security. Just because we are humans and we make mistakes.

Maybe in the future someone will find a better solution.

@KenjiBaheux
Collaborator

"Header to control the restriction" tracked via http://crbug.com/436747 for blink

@slightlyoff
Contributor

@mvano, @jakearchibald: I agree that we should change the default path. I've been bitten by this now that Chrome implements the path restriction.

@jakearchibald
Contributor

@slightlyoff see #595 - we've done just that

@KenjiBaheux
Collaborator

@jungkees would you mind updating the spec to take into account the Service-Worker-Allowed header as described in #468 (comment) ?

Background: we've got feedback from Service Worker customers that the ability to override the path restriction via this header is important to them. The team has proactively started implementing it, assuming that it is in the pipeline of spec updates.

@jungkees
Collaborator

Alright. I'll work on that.

@jungkees
Collaborator

Updated the steps to allow setting a max scope with the Service-Worker-Allowed header: ddb21be.

The scope check has been moved from the Register algorithm to the Update algorithm (steps 4.3.9 ~ 4.3.17).
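A rough sketch of the check as described (not literal spec text; the function and its shape are mine): the default max scope is the script's directory, and a Service-Worker-Allowed header on the script response overrides it.

```javascript
// Sketch of the Update-time scope check: default max scope is the script's
// directory; a Service-Worker-Allowed header on the script response
// overrides it. Header and scope values resolve against the script's URL.
function registrationAllowed(scriptUrl, requestedScope, allowedHeader) {
  const script = new URL(scriptUrl);
  const maxScope = allowedHeader !== undefined
      ? new URL(allowedHeader, script).href        // header overrides default
      : script.origin +                            // default: script's dir
        script.pathname.slice(0, script.pathname.lastIndexOf('/') + 1);
  return new URL(requestedScope, script).href.startsWith(maxScope);
}

// Without the header, /js/sw.js can't claim the whole origin…
console.log(registrationAllowed('https://example.com/js/sw.js', '/'));      // false
// …but with `Service-Worker-Allowed: /` it can.
console.log(registrationAllowed('https://example.com/js/sw.js', '/', '/')); // true
```

This keeps the path restriction as the safe default while letting a server that controls its headers (and so provably controls the origin) opt out of it.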

@mfalken
Member

mfalken commented Jan 20, 2015

@jungkees Thanks for the quick update. I have some questions about the spec. I've opened #604 as this bug is really long.
