Reports compression #62
Comments
I thought there was a general desire to avoid compressing payloads that mix data from (speaking colloquially) multiple security contexts, to avoid leaking information about how well those pieces of data compress against each other.
That depends on the data, how it's compressed, and who is in a position to observe the compressed byte size to determine secrets in the data. I might be missing something, but I wouldn't think the data-leak concerns apply here, as compression contexts won't be shared between origins. I also don't know of a scenario where a network observer can initiate many different reports and compare their transfer sizes (and I imagine browsers can apply safeguards against such a scenario, if necessary). We could also make the reports' transferSize invisible to Resource Timing, to prevent XSS from creating many different reports and deducing their contents in order to exfiltrate information it normally doesn't have access to (I'm not sure it's a real threat, though).
I expect that, if the Reporting API and Network Error Logging become popular, major analytics providers will offer reporting endpoints. The Reporting API currently batches reports from different origins together if they're destined for the same endpoint. If, say, Google Analytics collected network error reports this way, nefarious sites could deliberately cause error reports for specially-crafted URLs to see how well they compress against the other data in the upload. I don't think Reporting API uploads are currently observable at all, but that might change in the future if it's integrated more closely with Fetch. I was imagining a network-level attacker; maybe that's not actually a concern?
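To make the concern concrete, here is a minimal sketch (not from the thread; the report shapes, URLs, and token values are invented) of the size oracle that compressing a cross-origin batch could create:

```python
import json
import zlib

# Hypothetical batch mixing a report from another origin with a report an
# attacker can trigger at will; all names and URLs are made up for illustration.
victim_report = {"type": "network-error", "url": "https://victim.example/reset?token=SECRET_TOKEN"}

def batch_size(attacker_url: str) -> int:
    """Compressed size of a batch containing the victim report plus an attacker report."""
    batch = json.dumps([victim_report, {"type": "network-error", "url": attacker_url}])
    return len(zlib.compress(batch.encode()))

# A guess that shares a long byte run with the victim's URL tends to compress
# into a back-reference, so the batch tends to come out smaller than for a
# non-matching guess (the CRIME/BREACH-style size oracle described above).
print(batch_size("https://victim.example/reset?token=SECRET_TOKEN"))
print(batch_size("https://victim.example/reset?token=QJ4XK9PZT2LM"))
```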
I wasn't aware of the batching, so you make a good point. In this scenario, network observers could use compression to observe content changes (though I'm not sure it's a practical attack).
That could work. I'm not sure how big I expect reports to get, or how many are likely to be batched from the same origin. It might be worth measuring that before adding compression to the spec.
Potentially relevant: http://httpwg.org/specs/rfc7694.html (RFC 7694, HTTP Client-Initiated Content-Encoding)
SGTM |
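For reference, a rough sketch of what an RFC 7694-style client-initiated upload compression could look like from the sender's side (the endpoint URL and report payload below are placeholders; RFC 7694 lets a server advertise supported request codings via Accept-Encoding on its responses and reject unsupported ones with a 415):

```python
import gzip
import json
import urllib.request

# Placeholder endpoint; a real server would have advertised gzip support for
# request bodies per RFC 7694 (e.g. via Accept-Encoding on an earlier response).
ENDPOINT = "https://reports.example/upload"

reports = [{"type": "network-error", "age": 0, "url": "https://site.example/"}]
body = gzip.compress(json.dumps(reports).encode())

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/reports+json",
        "Content-Encoding": "gzip",  # client-applied compression of the request body
    },
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; the endpoint is fictional
```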
AFAICT, right now there's no way to define a compression scheme that will be applied to the reports. There is no HTTP mechanism for requests to perform content negotiation on the uploaded content.
It's probably better to define a compression scheme as part of the spec, have the server declare its preference for compression as part of the `Report-To` header, and have the browser apply that compression before sending the reports up.

/cc @mikewest
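As a rough sketch of that suggestion (the `compression` member below is hypothetical and is not part of the `Report-To` header today), the browser-side logic might look something like:

```python
import gzip
import json

# Hypothetical Report-To value; the "compression" member is invented here to
# illustrate the suggestion and does not exist in any spec.
report_to = json.loads(
    '{"group": "default", "max_age": 10886400,'
    ' "endpoints": [{"url": "https://reports.example/upload"}],'
    ' "compression": "gzip"}'
)

def prepare_upload(reports):
    """Browser-side sketch: compress the batched reports if the endpoint asked for it."""
    body = json.dumps(reports).encode()
    headers = {"Content-Type": "application/reports+json"}
    if report_to.get("compression") == "gzip":
        body = gzip.compress(body)
        headers["Content-Encoding"] = "gzip"
    return body, headers

body, headers = prepare_upload([{"type": "network-error", "url": "https://site.example/"}])
```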