
Signed exchange use case: selectively downgrading content? #149

Closed
ithinkihaveacat opened this issue Mar 15, 2018 · 3 comments

@ithinkihaveacat (Contributor)

The use case document has a section on how web packages may help users avoid censorship, but I'm wondering whether they may also help achieve the opposite outcome: allowing authorities to censor content. (In particular, to withhold new content and present old content as if it were fresh.)

With current browsers, when a user clicks on a link and a page subsequently loads with https://news.example.com/ shown in the URL bar, they can be confident that the browser is displaying content that was retrieved from news.example.com, and that was retrieved after they clicked on the link. (That is, the content is fresh.) However, this is not necessarily true if the resource in question is a signed exchange delivered by a CDN: whilst the content was signed by news.example.com, it could be up to 7 days old.
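For concreteness, this window comes from the `date` and `expires` parameters of the exchange's `Signature` header. A sketch of the header's shape, per the draft format at the time (all values are illustrative and the truncated fields are elided; `expires` may be at most 7 days, i.e. 604800 seconds, after `date`):

```
Signature: sig1;
 sig=*MEUCIQ...*;
 integrity="digest/mi-sha256";
 validity-url="https://news.example.com/article.validity.1521072000";
 cert-url="https://news.example.com/certs";
 cert-sha256=*W7uB...*;
 date=1521072000; expires=1521676800
```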

This may result in unexpected behavior for users in regions where the use of such CDNs/caches is pervasive (e.g. for performance and/or regulatory reasons) and where the cache operator has the ability and desire to selectively censor information (perhaps in reaction to a particular event or crisis).

If the cache rewrites links, it may even be possible to arrange for an entire website to appear up to date and completely functional when in fact fresh content is mixed with old at the discretion of the intermediary.

Some mechanisms that may help reduce these risks:

  • Clients could indicate to users that content was retrieved from an intermediary, and not the origin, and recommend that the "reload" feature be used if users are unsure of its freshness. (This also requires that the reload feature applies to the user-visible URL, and not the network URL; is this a recommendation of the format?)
  • Clients could verify the validityUrl more frequently than required, or allow users to manually trigger a validity check. (Related to 6.3 Downgrades.)
  • Recommend that origins that believe they might be subject to such attacks expire signatures within seconds or minutes. (Perhaps this could vary per cache, depending on the degree of confidence the origin has in the cache.)
  • Recommend that origins embed absolute timestamps within their content (e.g. "Last modified: 2018-03-15 18:45"), so that users are aware when content is not fresh; see the sketch after this list.
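
A minimal sketch of that last idea in TypeScript, assuming a hypothetical `article-published` meta tag and an arbitrary one-hour staleness threshold (neither is from the spec):

```ts
// Warn the reader if the page's embedded publish time is older than a
// chosen threshold. The meta tag name and the threshold are illustrative.
const META_NAME = "article-published"; // hypothetical meta tag
const MAX_AGE_MS = 60 * 60 * 1000;     // treat anything older than 1h as stale

const meta = document.querySelector<HTMLMetaElement>(
  `meta[name="${META_NAME}"]`
);
if (meta) {
  const publishedAt = Date.parse(meta.content); // e.g. "2018-03-15T18:45:00Z"
  if (Number.isFinite(publishedAt) && Date.now() - publishedAt > MAX_AGE_MS) {
    console.warn("This copy of the page may be stale; consider reloading.");
  }
}
```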
@ithinkihaveacat ithinkihaveacat changed the title Signed exchange use case: censorship? Signed exchange use case: selectively returning un-fresh content? Mar 16, 2018
@ithinkihaveacat ithinkihaveacat changed the title Signed exchange use case: selectively returning un-fresh content? Signed exchange use case: selectively downgrading content? Mar 16, 2018
@jyasskin (Member)

The first two paragraphs are definitely a risk. If someone's browsing an untrustworthy site, signed exchanges give it the ability to link to an old version of a story instead of the newest version, and there's currently no plan to show the user that in the URL bar.

The third paragraph seems unlikely to me: each link source has to separately opt into serving a signed exchange for a resource, so the attack would have to come from a widely-used CDN that all of these sources opt into.

Caches can't rewrite links within a signed exchange without breaking the exchange's signature, so I don't think that's a risk at all. With bundles, we'll even be able to ensure that the intermediate can't selectively upgrade resources: if the publisher signs their site as a group (TBD exactly how this'll be expressed), the intermediate will be able to selectively break the signature of a subset of the resources, but the client can remove those from the cache and re-request them instead of mixing an old cached version with a newer signed version.
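
To illustrate the client behavior described here, a pseudocode-level TypeScript sketch using the Cache Storage API; the `BundledResource` shape and per-resource `signatureValid` flag are hypothetical, since the bundle format is still TBD:

```ts
// Hypothetical view of a bundle in which each resource's signature can be
// checked individually. None of these types exist in any shipping API.
interface BundledResource {
  url: string;
  signatureValid: boolean;
  body: ArrayBuffer;
}

async function installBundle(resources: BundledResource[]): Promise<void> {
  const cache = await caches.open("site-bundle");
  for (const r of resources) {
    if (r.signatureValid) {
      await cache.put(r.url, new Response(r.body));
    } else {
      // The intermediary broke this resource's signature: evict any cached
      // copy and refetch from the network rather than mixing an old cached
      // version with newer signed resources.
      await cache.delete(r.url);
      await cache.put(r.url, await fetch(r.url));
    }
  }
}
```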

I haven't written the recommendation that "reload" always ask the logical origin for freshness information, but that's the plan.

We can't have publishers expire signatures within minutes because that doesn't give the intermediate enough time to serve it on, but hours are probably plausible in many cases. Sites can set the Cache-Control header to expire in minutes, like they currently do, and we could maybe use that as the indication that the browser should fetch the validityUrl eagerly. Maybe we could also expose the fact that a loaded page failed its validity check to JavaScript, so the page can force-reload itself or otherwise notify the user? Pages could also ping their origin server themselves.
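
A minimal sketch of that last idea (a page pinging its own origin after load), assuming a hypothetical `/version` endpoint and response shape:

```ts
// Ask the origin whether a newer version of this page exists; if so,
// reload to bypass any stale cached copy. Endpoint and JSON shape are
// hypothetical.
async function checkFreshness(currentVersion: string): Promise<void> {
  try {
    const res = await fetch("/version", { cache: "no-store" });
    if (!res.ok) return; // origin answered abnormally: nothing to do
    const { version } = (await res.json()) as { version: string };
    if (version !== currentVersion) {
      location.reload(); // newer content exists at the origin
    }
  } catch {
    // Offline or origin unreachable: keep showing the cached copy.
  }
}

checkFreshness(document.documentElement.dataset.version ?? "");
```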

@ithinkihaveacat (Contributor, Author)

> so the attack would have to come from a widely-used CDN that all of these sources opt into

My understanding is that internet connectivity in some regions already has an equivalent property: it is an actual or de facto requirement that all network traffic go through a central point.

Right now, this gives the central point the ability to block entire origins. However, this proposal seems to introduce new capabilities: (1) the ability to downgrade (not block) resources (so content appears to be fresh, but isn't); and (2) the ability to selectively downgrade or block individual resources (paths), rather than entire origins (so some content really is fresh).

Perhaps neither of these capabilities is especially powerful/dangerous, but they do seem to add complexity to the user's security model.

> Caches can't rewrite links within a signed exchange without breaking the exchange's signature

I was thinking of rewriting the links prior to signing, not after signing. (For example, caches might choose to rewrite links to always point back to the cache for improved performance/reliability.)

@ithinkihaveacat (Contributor, Author)

Closing; I think this concern is about as "resolved" as it ever will be.
