Why is Sec-Fetch-Site based on the full URL redirect chain? #28
Comments
Basically, open redirects. The example that (I believe) you're missing above is "attacker.com makes a request to victim.com/redir?url=//victim.com/secret which redirects to victim.com/secret". In this case we can't send the final request as `same-origin`.
I'm not entirely sure I understand this point -- an attacker can make a request to the final URL, but that direct request will be `cross-site`.
Sorry, but I am still not getting it :-/. Please see my replies below.
The algorithm that is described at https://mikewest.github.io/sec-metadata/#sec-fetch-site-header and implemented in Chrome compares the origin of the initiator (i.e. attacker.com) with the origin of the target (i.e. victim.com/redir... or victim.com/secret). The result of the comparison would be `cross-site`. In other words, in your example, the final request will not be `same-origin`. I think you might be saying (incorrectly, in my view) that the final request will use `Sec-Fetch-Site: same-origin`.
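For illustration, here is a rough, non-normative TypeScript sketch of the two interpretations being discussed; the helper names and the simplified "same-site" check (last two host labels instead of the public suffix list) are mine, not the spec's:

```typescript
type Site = "same-origin" | "same-site" | "cross-site";

function relation(initiator: URL, target: URL): Site {
  if (initiator.origin === target.origin) return "same-origin";
  // Crude "site" approximation: last two host labels. The real spec uses
  // registrable domains and also takes schemes into account.
  const site = (u: URL) => u.hostname.split(".").slice(-2).join(".");
  if (site(initiator) === site(target)) return "same-site";
  return "cross-site";
}

// Full-chain model (what the draft and Chrome do, per the discussion above):
// the "worst" relation between the initiator and *every* URL in the redirect
// chain wins.
function secFetchSiteFullChain(initiator: URL, chain: URL[]): Site {
  const order: Site[] = ["same-origin", "same-site", "cross-site"];
  let worst: Site = "same-origin";
  for (const url of chain) {
    const r = relation(initiator, url);
    if (order.indexOf(r) > order.indexOf(worst)) worst = r;
  }
  return worst;
}

// Last-URL model (the alternative raised in this issue): only the final URL
// in the chain is compared against the initiator.
function secFetchSiteLastUrl(initiator: URL, chain: URL[]): Site {
  return relation(initiator, chain[chain.length - 1]);
}

// The open-redirect example from the comment above:
// attacker.com -> victim.com/redir?url=//victim.com/secret -> victim.com/secret
const initiator = new URL("https://attacker.com/");
const chain = [
  new URL("https://victim.com/redir?url=//victim.com/secret"),
  new URL("https://victim.com/secret"),
];
console.log(secFetchSiteFullChain(initiator, chain)); // "cross-site"
console.log(secFetchSiteLastUrl(initiator, chain));   // also "cross-site": the initiator is still attacker.com
```

In this particular example the two models agree, which is exactly the point of disagreement in the comments above.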
Exactly! For the attacker to gain something, the earlier destination has to be less attacker-friendly than the final destination:
BTW: Maybe it is okay to say that…
Ah, yes, I understood your proposal to "consider the current/last URL in the redirect chain" as only comparing the URLs between the last two hops of the redirect (i.e. if we have a chain `A -> B -> C`, comparing only `B` and `C`).

I think the primary reason to consider the full chain is the case where an application loads any external resources; for example, if my application fetches an image from an external server, that server can respond with a redirect back to a sensitive same-origin endpoint, and under a last-hop model the final request would appear as `same-origin` even though an attacker-controlled server steered it there.

A similar concern applies in cases where an application allows sanitized attacker-controlled markup, such as a webmail client which renders a safe subset of HTML. The sanitizer could presumably prevent the attacker from directly loading same-origin resources by rewriting URLs in untrusted markup, but if the attacker can make a request to their own server and then redirect it to the original application and make it appear as `same-origin`, that protection is bypassed.

This also applies to navigation restrictions to mitigate XSS: if I configure my application to disallow `cross-site` requests to sensitive endpoints, the same redirect trick would let an attacker reach those endpoints with a request that no longer looks cross-site.
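To make the concern concrete, here is a rough TypeScript sketch of the kind of server-side resource-isolation check such an application might perform; the function name and the policy are hypothetical, not from any real application or from the spec:

```typescript
// A sensitive endpoint that only wants to serve requests whose whole redirect
// chain stayed same-origin. With full-chain tracking, a request that was
// bounced through attacker.com arrives with Sec-Fetch-Site: cross-site and is
// rejected; under a last-hop-only model the same request could arrive labeled
// same-origin and pass this check.
function isAllowedForSensitiveEndpoint(headers: Headers): boolean {
  const site = headers.get("sec-fetch-site");
  // Browsers without Fetch Metadata omit the header entirely; this sketch
  // allows those requests to avoid breakage, but a deployment could be stricter.
  if (site === null) return true;
  return site === "same-origin";
}

// Example: a request whose redirect chain included attacker.com.
const bounced = new Headers({ "sec-fetch-site": "cross-site" });
console.log(isAllowedForSensitiveEndpoint(bounced)); // false
```

The check is only meaningful if a request that was redirected through an attacker-controlled origin cannot arrive labeled `same-origin`, which is what the full-chain behavior guarantees.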
A similar question came up for the PaymentRequest API's non-HTTP "redirects" (#30), but I also want to note that the initial PaymentRequest initiator-tracking implementation for HTTP requests also doesn't look at the full URL chain - see here: https://chromium-review.googlesource.com/c/chromium/src/+/1636635/4/components/payments/core/payment_manifest_downloader.cc#187
In general you need to look at the full chain; A -> B -> A is not trustworthy.
I think this can be closed. Perhaps an issue should be opened for payments so that they adopt a more secure model.
Agreed.
Assuming https://w3c.github.io/payment-method-manifest/#fetch-the-web-app-manifest-for-a-default-payment-app is the feature in question, it should follow normal fetch semantics, so its fetch metadata behavior should be well-defined. If Chromium still diverges from that, we should consider it an implementation bug.
Hello,
I wonder if https://mikewest.github.io/sec-metadata/#sec-fetch-site-header could clarify (in a non-normative comment maybe) why the algorithm considers "each url in r’s url list". An alternative would be to only consider the current/last URL in the redirect chain.
Notes:
* A request initiated by attacker.com that (possibly after redirects) lands on victim.com would be reported as `cross-site` (full chain) or `cross-site` (last URL).
* A request initiated by victim.com that goes to attacker.com/blah and is then redirected to victim.com/boom would be reported as `cross-site` (full chain) or `same-origin` (last URL). But then, if victim.com can be coerced into making a request to attacker.com/blah, then it probably can also be coerced into making a direct request to victim.com/boom.
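For concreteness, here is a small, non-normative TypeScript sketch (hypothetical helper names, deliberately simplified origin comparison) that evaluates the two scenarios above under both approaches:

```typescript
// Simplified relation check: origin equality vs. everything else; the
// same-site case is elided because these examples don't need it.
const rel = (a: URL, b: URL): "same-origin" | "cross-site" =>
  a.origin === b.origin ? "same-origin" : "cross-site";

const fullChain = (initiator: URL, chain: URL[]) =>
  chain.every(u => rel(initiator, u) === "same-origin") ? "same-origin" : "cross-site";

const lastUrl = (initiator: URL, chain: URL[]) =>
  rel(initiator, chain[chain.length - 1]);

// Scenario 1: attacker.com -> victim.com/boom. Both approaches agree.
const atk = new URL("https://attacker.com/");
const chain1 = [new URL("https://victim.com/boom")];
console.log(fullChain(atk, chain1), lastUrl(atk, chain1)); // cross-site cross-site

// Scenario 2: victim.com -> attacker.com/blah -> victim.com/boom.
// This is where the two approaches diverge.
const vic = new URL("https://victim.com/");
const chain2 = [new URL("https://attacker.com/blah"), new URL("https://victim.com/boom")];
console.log(fullChain(vic, chain2), lastUrl(vic, chain2)); // cross-site same-origin
```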