
Using secure-context gated features with local devices #60

Open
daurnimator opened this issue Apr 6, 2018 · 30 comments

@daurnimator

As proposed here which is continuing on from w3ctag/design-principles#75:

It should be possible for people to create devices on home networks that use modern browser features. One example is watching videos from a local NAS in fullscreen; another is home automation that wants access to Bluetooth devices. These devices should not need to be connected to the internet; in fact, from a network security perspective, it's better if they aren't!


One idea I've had is that things in the private IP space (i.e. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, fd00::/8; possibly also including 127.0.0.0/8 and ::1 for local development purposes) should be considered "secure contexts" for use of web APIs.
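For illustration, the range check this proposal implies is small enough to sketch with Python's stdlib `ipaddress` module (the function name is mine, not anything from a spec):

```python
# Sketch: classify an address the way the OP proposes. The ranges
# mirror the comment above: RFC 1918 blocks, fd00::/8 (IPv6
# unique-local), plus loopback for local development.
import ipaddress

PRIVATE_NETS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC 1918
    "fd00::/8",                                        # IPv6 unique-local
    "127.0.0.0/8", "::1/128",                          # loopback
)]

def is_proposed_secure(addr: str) -> bool:
    """True if addr falls in one of the proposed 'secure' ranges."""
    ip = ipaddress.ip_address(addr)
    # Membership tests across IP versions simply return False,
    # so mixing v4 addresses with the v6 networks is safe.
    return any(ip in net for net in PRIVATE_NETS)
```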

@pinobatch

Cleartext HTTP to an attacker in the same coffee shop isn't secure.

Therefore, I propose treating cleartext HTTP sites on RFC 1918 reserved blocks (10/8, 172.16/12, and 192.168/16) and their IPv6 counterparts differently depending on whether the user has configured the user agent to trust a particular network. This way, the UA can apply ".local is a secure context" one way for the user's home or work network and another way for an access point in a coffee shop.

The exact means of configuration would depend on the UA, of course. But I encourage documenting the considerations for the sort of UA most affected by gated features, namely an interactive web browser. When a UA sees a cleartext site on a private address trying to access gated features, or when a UA sees an HTTPS host on a private address presenting a certificate from an unknown issuer, the UA SHOULD present information identifying the network to let the user make an informed decision on whether to trust one particular certificate (if HTTPS) or all HTTP sites on the network. This information would include much of the following:

  • Medium type (wired or 802.11)
  • Whether the network uses some sort of access control (e.g. open, WPA Personal, RADIUS)
  • Gateway's MAC or BSSID
  • SSID (e.g. "att-wifi" or "Doe Residence")
  • Private IP address of the web server
  • Fingerprint of the unknown-issuer certificate, if any, for comparison to a fingerprint transmitted out of band by the server
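As a rough sketch, the record a UA could keep per approved network might look like the following (all names here, `NetworkInfo` included, are hypothetical; this is not any browser's real API):

```python
# Sketch: a per-network trust record whose fields mirror the list
# above. Trust is keyed by (gateway_id, ssid) so the same SSID on a
# different gateway (an "evil twin" access point) is not silently
# trusted.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkInfo:
    medium: str          # "wired" or "802.11"
    access_control: str  # "open", "wpa-personal", "radius", ...
    gateway_id: str      # gateway MAC address or BSSID
    ssid: str            # e.g. "att-wifi" or "Doe Residence"

trust_store: set[tuple[str, str]] = set()

def record_trust(net: NetworkInfo) -> None:
    """Called after the user approves the network in the UA's UI."""
    trust_store.add((net.gateway_id, net.ssid))

def user_trusts(net: NetworkInfo) -> bool:
    return (net.gateway_id, net.ssid) in trust_store
```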

@curry684

curry684 commented May 4, 2018

To add another VERY common use case to this: actual PWA development. Most companies I am aware of use some sort of central development environment. I myself run a local DNS server directing *.local queries to the IP of my local VM so I can actually develop and maintain many sites simultaneously.

Right now there simply is no way to debug/develop PWAs in a setup like this, and we're being forced to use hacky solutions on 127/8 addresses. We definitely need some way of whitelisting for developers.
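For reference, the wildcard DNS setup described above can be a single dnsmasq line (the IP is a placeholder for the dev VM):

```
# dnsmasq.conf -- answer *.local (and bare "local") with the dev VM's IP
address=/local/192.168.56.10
```

Worth noting that `.local` is reserved for mDNS (RFC 6762), which is one reason this setup can clash with OS resolvers; a name like `.test` (reserved for exactly this purpose by RFC 6761) avoids that.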

@pinobatch

For PWA dev, run a private certificate authority and trust its root certificate on all testing devices. Production is a bit trickier, as non-technical users of routers, printers, NAS devices, and the like aren't likely to know how to run a CA.
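A minimal private CA along these lines can be done with plain openssl; the hostname `nas.local` below is an example placeholder, and `ca.crt` is the root you'd install on each testing device. (Tools like mkcert automate exactly this.)

```shell
# Root CA: key plus self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
    -subj "/CN=Dev Root CA" -days 825 -out ca.crt
# Server key and certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout server.key \
    -subj "/CN=nas.local" -out server.csr
# Modern browsers require a subjectAltName, so supply one when signing.
printf "subjectAltName=DNS:nas.local\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 398 -extfile san.ext -out server.crt
```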

@daurnimator
Author

aren't likely to know how to run a CA.

That's probably something we could fix with some effort.

The missing thing is having end users install a CA. It's very different in every browser (and don't forget mobile apps!). And it's generally not simple to install a CA for only specific hosts/IPs: it's either for the whole accessible internet/IP space or nothing.

@jpiesing

aren't likely to know how to run a CA.

That's probably something we could fix with some effort.

The missing thing is having end users install a CA. It's very different in every browser (and don't forget mobile apps!). And it's generally not simple to install a CA for only specific hosts/IPs: it's either for the whole accessible internet/IP space or nothing.

Even if you could talk users through installing a CA, what is the risk of enough mistakes being made that significant security weaknesses are created?

@daurnimator
Author

Even if you could talk users through installing a CA, what is the risk of enough mistakes being made that significant security weaknesses are created?

Reasonably high; never mind the issues with even providing instructions to do so. However, with cooperation from browsers to standardise the flow, it could at least be a possible solution to this issue.
That said, you'll note it's not the one I advocated in the OP.

@pinobatch

pinobatch commented Jun 6, 2018

As for the solution you advocated in the OP (RFC 1918 IP = automatic secure context), I don't see how it can be made secure against an attacker on a coffee shop LAN.

@pinobatch

For what it's worth, the procedure I describe is similar to that described by jstriegel in a message to public-webappsec in December 2014.

@mikewest
Member

mikewest commented Jun 25, 2018

Hey folks,

  1. Local networks are just as eavesdroppable as remote networks. Folks generally would be well-served to encrypt all traffic, and not to treat the intranet/internet boundary as meaningful. BeyondCorp is a good example of what the state of the art should look like.
  2. The underlying request to make it simpler to trust IoT devices with powerful features is a reasonable one. I think whatever a solution looks like will be implemented at a layer above the Secure Contexts spec. That is, this document outlines a set of principles that user agents should apply to determine whether a given page meets a minimum bar. If the user agent can establish a secure connection with an IoT device via some mechanism, great! Right now, that establishment is grounded either in the web PKI and existing CA structure, or in user-driven configuration of the agent itself to trust certain hosts.
  3. Local development is something that should be served by user agents' configuration. It's not something that I think the spec needs to do any more work to support.

@pinobatch

Even if configuring which certificates to accept for the IoT and local development use cases is out of the normative scope of the present spec, there still ought to be some spec giving best practices for a configuration interface. It could be a non-normative section of the present spec or a different spec that the present spec cites. But giving zero guidance whatsoever to user agent developers is not the answer.

@mikewest
Member

mikewest commented Jul 2, 2018

@pinobatch: What would you like the spec to say?

@pinobatch

pinobatch commented Jul 2, 2018

Here's my first attempt:

§3.4. Is origin potentially locally trustworthy? (new section)

A potentially locally trustworthy origin is one whose identity a user can validate in person. This is common with appliances on a private home or office network, such as a gateway, printer, or network-attached storage device. It is also common with a staging server during development.

Given an origin origin and the certificate returned by attempting to connect thereto, the following algorithm returns "Potentially Locally Trustworthy" or "Not Locally Trustworthy" as appropriate.

  1. If origin's scheme is not "https:" or "wss:", return "Not Locally Trustworthy".
  2. If origin's host component resolves to an address on a different subnet from the user agent, return "Not Locally Trustworthy".
  3. If the user agent can determine that its subnet is a public network, return "Not Locally Trustworthy".
  4. If the certificate presented by the server is invalid for a reason other than that its issuer is unknown, return "Not Locally Trustworthy".
  5. If other specifications such as HSTS rule out offering to let the user make an exception, return "Not Locally Trustworthy".
  6. Return "Potentially Locally Trustworthy".
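The six steps above can be sketched directly as code. The inputs are hypothetical stand-ins (a real UA would pull them from its network stack and TLS layer); "unknown issuer" is the case the algorithm exists for, so the only certificate question asked is whether it is otherwise valid:

```python
# Sketch of the proposed "Is origin potentially locally trustworthy?"
# algorithm. Each early return corresponds to one numbered step above.
def is_potentially_locally_trustworthy(
    scheme: str,
    same_subnet: bool,             # step 2: host is on the UA's subnet
    subnet_is_public: bool,        # step 3: UA believes network is public
    cert_otherwise_valid: bool,    # step 4: valid apart from unknown issuer
    hsts_forbids_exception: bool,  # step 5: e.g. HSTS rules out exceptions
) -> str:
    if scheme not in ("https", "wss"):
        return "Not Locally Trustworthy"
    if not same_subnet:
        return "Not Locally Trustworthy"
    if subnet_is_public:
        return "Not Locally Trustworthy"
    if not cert_otherwise_valid:
        return "Not Locally Trustworthy"
    if hsts_forbids_exception:
        return "Not Locally Trustworthy"
    return "Potentially Locally Trustworthy"
```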

§7.2. Development Environments

§7.2. Development and Other Local Environments (new name)

Append the following paragraphs:

Another way to support internal servers is to make use of a secure scheme on a trusted network more convenient. A user agent MAY do this by allowing the user to add an exception for a self-signed certificate.

If the algorithm §3.4 Is origin potentially locally trustworthy? returns "Potentially Locally Trustworthy" for the origin of the top-level browsing context, and the user agent allows exceptions in general, the user agent SHOULD present the interface for making an exception in a way that clearly distinguishes this case from the case where an attacker is trying to steal the user's information. For example, it SHOULD display the certificate's issuer and fingerprint, so that the user may compare it to the fingerprint on the staging server or the fingerprint printed on a printer's test page. If an exception exists for a "Potentially Locally Trustworthy" origin, and the origin subsequently identifies itself with a certificate other than the one for which an exception was made, the user agent SHOULD make it obvious that the certificate has changed, which could indicate an inside attack.
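The fingerprint mentioned above is, in the usual convention, a SHA-256 hash over the certificate's DER encoding, shown as colon-separated hex; a sketch (the helper name is mine):

```python
# Sketch: compute a certificate fingerprint the way browsers display
# it, so the user can compare it to one printed on a test page or
# shown on the staging server.
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    # Group as pairs: "AB:CD:EF:..."
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```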

@curry684

curry684 commented Jul 3, 2018

Isn't this overly complicating an issue that could simply be solved by, for example in Chrome, adding a button in the devtools Security tab when a certificate is not valid and trusted: "Consider this origin secure for the current session". The browser bar would still show a clear certificate error, but service workers et al. would load.

Given that the coffee-shop argument is all too valid, I'm not sure it's a good idea to pollute the spec with it; instead UAs should just handle it on their end.

@pinobatch

pinobatch commented Jul 3, 2018

A "session" terminated by closing all UA windows isn't a useful boundary given that the user can open Chrome on a laptop on a trusted network, suspend it, enter a coffee shop, and resume it. The whole point of using a secure scheme is to help the user establish trust with a server, be it through a trusted introducer (a CA) or through validation in person (validate once, then trust the same certificate later). Introduction in advance can make even the coffee shop use case secure: negotiate the exception between a server on a laptop and a client on a tablet over a private network, then associate both the tablet and the laptop to a public network, and the exception persists (because the tablet's cert store trusts the laptop's self-signed cert) and is a "secure context" (because it's a secure scheme). The exception isn't for an origin but for a certificate.
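The "validate once, then trust the same certificate later" model is trust-on-first-use, in the spirit of SSH's known_hosts; a sketch (names like `PinStore` are hypothetical, not any browser's actual exception store):

```python
# Sketch: certificate pinning keyed by host. The exception is for a
# certificate, not an origin: a changed certificate is surfaced
# loudly, since it could indicate an inside attack.
import hashlib

class PinStore:
    def __init__(self) -> None:
        self._pins: dict[str, str] = {}  # host -> cert fingerprint

    def check(self, host: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        pinned = self._pins.get(host)
        if pinned is None:
            return "first-use"  # UA may offer to trust after showing fp
        if pinned == fp:
            return "match"      # same certificate as before
        return "CHANGED"        # warn loudly: certificate has changed

    def pin(self, host: str, cert_der: bytes) -> None:
        """Record the user's exception for this certificate."""
        self._pins[host] = hashlib.sha256(cert_der).hexdigest()
```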

As for whether it "pollutes the spec": Improving UAs' communication with users in cases I call "Potentially Locally Trustworthy" would reduce the negative impact of secure context gating on private servers. This would allow more actually secure contexts to be created. Describing this improvement in a spec of some sort with security and UX experts' eyeballs on it, be it this spec or another, would ensure that measures actually are improvements.

@jharris1993

jharris1993 commented Jan 13, 2022

Greetings:

Ref:  w3c/gamepad#145 (comment)

One thing that this spec should consider is hardware devices connected to a system.

Because of spec's like this, things like joysticks and gamepads get locked behind a certificate paywall or are forced to use self-signed certificates.

This rules out uses like robotics, accessibility aids, and locally attached hardware, because the device's name can and does change based on circumstances. (e.g. Multiple copies of the same robot used in a classroom will need different hostnames, requiring a new certificate to be generated every time the hostname changes.)

This also encourages the use of self-signed certificates which undermines the entire certificate security infrastructure by encouraging users to just blow-by certificate errors and/or self-signed certificates without knowing what they're doing.

Please see my two comments at:

w3c/gamepad#145 (comment)
and
w3c/gamepad#145 (comment)

for additional information and insight.

P.S.
Before you jump my bones for being potentially off-topic, the folks on the gamepad forums specifically pointed me here to comment on this.

Thanks!

@jharris1993

@pinobatch

Cleartext HTTP to an attacker in the same coffee shop isn't secure.

As for the solution you advocated in the OP (RFC 1918 IP = automatic secure context), I don't see how it can be made secure against an attacker on a coffee shop LAN.

If I understand the original poster's question rightly, (and I may not be), he's not talking about a "coffee-shop LAN". He's talking about a known network that is under the user's physical control, talking to devices that are under his physical control in ways that are under his physical control.

We can send people to the moon.  We can do heart transplants.  We can use the GPS satellite system to locate something to within a few centimeters.  Surely we can identify a secure local network and allow things to use it without kludgy solutions like self-signed certificates that actually reduce the security of the context.

@pinobatch

The classroom use case isn't quite the same as the home use case, as a school is an establishment that probably already has its own website (and thus its own domain name). "Multiple copies of the same robot used in a classroom" can be given subdomains under the school's domain name and then issued certificates. As for "paywall", Let's Encrypt issues certificates without charge to the owner of a domain name. Perhaps this suggests using the "IndieWeb" paradigm, in which the head of household owns a personal domain name under which to create subdomains and thereby obtain certificates, again without charge. I acknowledge that owning a personal domain name is a paywall.

@jharris1993

@pinobatch

Unfortunately, the school scenario is not so advanced as you may imagine.  The typical use case is that one of the teachers is "volunteered" and given a handful of robots.  Over on the Dexter Industries forums, we've seen it all - from Phys-Ed teachers who struggle with smartphones to instructors who are reasonably competent.

Additionally, if I understand rightly, a domain is expected to resolve to a routable IP address.  Many of these devices broadcast their own access point on 10.10.10.10 - with the name of the robot being the only distinction.

The point that I am trying to make here is the "if you're a hammer, everything looks like a nail" effect.  Everyone is so hyped-up on "secure contexts" that they fail to realize that simply using a secure connection - via https - isn't always the only, or even the best solution.

And. . . most schools of any size usually share the school district's domain address, and schools are paranoid enough about their networks as it is - they're not going to hand out subdomains and certificates to a bunch of robots.  In fact, one instructor on the Dexter forums actually had to use his personal cell phone as a hot-spot so that his students could run apt-get and update the robot's operating system.

All that aside. . .

I have a robot I am experimenting with, trying to use a joystick to control it.  I am retired and pursue this as a hobby.  Are you seriously suggesting that I should go out and buy some domain name just so I can experiment with a robot?  With all due respect, that makes about as much sense as requiring everyone with a computer, smartphone, television or tablet to go out and buy themselves domain names so they can use their remote control or a Bluetooth device to change channels.

I am not saying that secure contexts are bad.  All I am saying, and the point I hope to get through, is that you cannot solve the Internet's problems simply by throwing HTTPS at it.  As I said before, the bad actors out there who want to fingerprint you already have secure certificates and secure web-sites.  Once you're inside their secure domain, everything's verified and they're considered trustworthy, they can do whatever they want.  Very few sites are not secure nowadays and fingerprinting, persistent "evercookies" and the like are still a world-wide problem.

Obviously, the secure certificate and secure context isn't solving the problem.  Please don't make it any more difficult for the rest of us.

@aerik

aerik commented Apr 1, 2022

I want to strongly support this. The whole definition of secure contexts, and the restriction of certain features to secure contexts only, seems to be conceived from the perspective of publicly available websites.

I write software that is used by universities and hospitals. Know what the customers who are most concerned with security want to do? They want to run their own server inside their secured local network. A fraction of them set up and deploy a certificate - the vast majority just give the server a static local IP and have the users bookmark it.

I don't think the concept of "secure context" is itself the problem; the problem is the definition. If a secure context is an "authenticated and confidential channel" (with additional precautions for iframes, etc.), the problem is with defining "authenticated" in terms of DNS names, or apparently in very rare cases, publicly facing IP addresses. A website only claiming to be 192.168.9.10 is, in fact, that website - at least on that network. Is there a risk there? Sure, but it's not the same risk as someone pretending to be reallypopularbank.com, or even just Twitter. A user would have to log onto another network, navigate to that IP address, and receive a fake website tricking them into interacting with it. For sure it could happen - but it's not the same problem.

What really worries me is the section on unrestricted legacy features - it is proposed that, in order to secure more publicly available websites, we further cripple websites running inside the LANs of schools and corporations.

The basic concept of domain name ownership being a prerequisite to access certain features is ludicrous. And the assertion that a school should have to assign subdomains to robots is just silly.

@vincentlaucsb

Strongly support this. I wanted to create a desktop app that syncs data to a mobile device by means of a PWA using service workers. I'm completely astounded that there's no way to do this without manual intervention by the user. Yes, we software developers (and power users) can figure out ways around this, but for software intended for use by the general public, it is not acceptable. Software is designed to help people do things, and anything that impedes this, whether it be poorly designed UI or irrational and arbitrary security policies, should be revised.

Apparently, it's fine to stream data between your mobile device and a website that stores your data on a third-party cloud provider (that will hopefully do the right thing and not perform data mining operations on it) as long as the data was transmitted on an SSL connection.

But others in this thread have argued that to do the same between your own phone and your own laptop on a private WiFi network protected by WPA2-PSK encryption is somehow less secure, even though the actual data transfer doesn't touch the internet. That's completely bogus. Yes, you can brute force a weak WiFi network password, but you can also do the same with a bank login over an HTTPS connection. You can also brute force a weak bank account password from anywhere in the world, while attacking a WiFi network requires physical proximity.

My WiFi router can transfer 50 GB of video between my devices faster than any cloud service ever will. Why is a publicly facing server with a publicly registered domain name and a signed SSL certificate a necessary requirement? If security is the main reason, an exception should be granted for private networks where the user has sole possession of the data at all times.

As @aerik said earlier, many businesses opt for private networks because having a publicly facing site introduces many additional attack vectors that a private network doesn't have. Furthermore, many hospitals and insurance companies prefer running software on-premises due to HIPAA requirements, which compel anybody who handles or possesses personal health information (see covered entities) to also comply with HIPAA. A nurse filling out a checklist on her tablet at a patient's bedside shouldn't need to worry about self-installing an SSL certificate, and neither should the hospital's IT staff be burdened with another chore.

If Windows can ask me if a network is public or private when I connect to it, so can a web browser. If a web page on the 192.* range attempts to use gated features, then the browser can allow or deny it based on the user answering if they are on a public or private network.
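The gating rule being suggested is small; as an illustrative sketch (the function name and inputs are mine, not a browser API):

```python
# Sketch: allow a gated feature only when the server is on a reserved
# private range AND the user has told the browser this network is
# private (mirroring Windows' public/private network prompt).
import ipaddress

def allow_gated_feature(server_ip: str, user_says_private: bool) -> bool:
    ip = ipaddress.ip_address(server_ip)
    return ip.is_private and user_says_private
```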

@jharris1993

jharris1993 commented Mar 20, 2023

Is anything happening with this, or are all these alternate use-cases simply going to be ignored?

@holta

holta commented Apr 12, 2023

What really worries me is the section on unrestricted legacy features - it is [proposed] that, in order to secure more publicly available websites, we further cripple websites running inside the LANs of schools and corporations.

The basic concept of domain name ownership being a prerequisite to access certain features is ludicrous.

Is anything happening with this, or are all these alternate use-cases simply going to be ignored?

Great questions above, thank you @aerik & @jharris1993 💯

@timbl @wseltzer @ikreymer @WikiDocJames @metasj @mitra42 @rgaudin @mikedawson @tim-moody

What does it take to unblock millions of overlooked developing world users?

Permanently offline schools and rural communities use $40 air-gapped web servers and similar, that cannot possibly use HTTPS in any pragmatic way — they are permanently offline (often rural, almost always poor) and so cannot renew SSL/TLS web server certificates when they expire in 398 days (absolute maximum!) So how can we help the world's most vulnerable kids/citizens to once again access learning content in offline schools-and-similar? (e.g. people using https://internet-in-a-box.org and many similar Offline Learning "community fountain wifi hotspots", who in recent years are increasingly being shut out when modern web pages move to Progressive Web App / PWA designs with Service Workers, more and more year-by-year!)

SPECIFICALLY: Modern PWA / Service Worker pages require HTTPS (and hence self-signed web server certs in permanently offline schools/communities, good luck in offline villages!) which essentially-and-increasingly shuts out developing world citizens. If given any options at all (e.g. if someone miraculously establishes a self-signed web server cert in a rural school) low-income and low-literacy individuals still cannot realistically navigate a confusopoly of browser pop-ups, with SSL/TLS small-print legalese, in language they cannot possibly understand. Both Above (server-side quagmire and client-side quicksand) Tragic Showstoppers! 😢

RESULT: Millions of people are increasingly Locked Out, as a practical matter, prevented from accessing modern web pages with "Service Workers" (i.e. that require HTTPS) within countless developing world offline libraries/schools-and-similar. This cruel/destructive blockade (effectively preventing people from using their own phones/tablets/laptops to browse their very own community's knowledge, inside their very own offline schools, offices, libraries, churches, cafes, homes, etc) largely arising from... the originally well-intended HTTPS EVERYWHERE campaign (i.e. Snowden's revelations a decade ago in 2013!)

COROLLARY/QUESTION: Has the word EVERYWHERE in the "HTTPS EVERYWHERE" slogan been reduced to disinformation a decade later, effectively perverted to shut out the world's poor? Instead of enabling encryption — why are we now somehow preventing local HTTPS-like encryption and modern pages, in offline LAN's and offline civic networks — doing quite serious damage to the world's most vulnerable? 😞

How to find our way out of this mess! OPTIMISTICALLY: Can W3C-or-similar help everyone restore local pages for millions of low-income and vulnerable, each deserving of modern learning websites + web apps + opportunities in their Necessarily Offline schools/libraries & similar? Can someone please suggest a creative/concrete/legit path forward? PESSIMISTICALLY: Are we facing a Kafkaesque reality here: where offline, indigenous, survivalist and poor users/citizens' offline voices are Structurally Shut Out, further year-by-year as (1) PWA page/site designs grow + (2) SSL/TLS cert expiry date windows shrink + (3) Tortuous browser pop-up warnings further obfuscate (with no seat at the table for global citizens suffering the consequences??) CONCLUSION: Can we (e.g. the "Bandwidth Rich") p-l-e-a-s-e try to find a way to end HTTPS EVERYWHERE Lip Service — instead serving the "Bandwidth Poor" in truly Vital Need — working towards any-kind-of OFFLINE HTTPS (or similar, Something/Anything that stops asphyxiating offline civic network communities' LAN's!?) 🤔

2020/2021 Background:

@mikedawson

mikedawson commented Apr 12, 2023

I think the spec designates localhost and browser extensions as "potentially trustworthy", the same as https ( https://w3c.github.io/webappsec-secure-contexts/#algorithms ). This provides a way for local content to work within a secure context without needing a certificate.

Specifically: a client app or browser extension etc. on a device (desktop/laptop/phone etc.) can run an embedded server on localhost (considered a secure context), then use its own private keys to transfer data to/from a local network server (e.g. after showing a simple security prompt). It can then reject, or alert the user, if the key changes unexpectedly (e.g. as SSH does).
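The localhost half of this idea is straightforward; a minimal sketch of a helper app serving on 127.0.0.1 (which the spec treats as potentially trustworthy), which could then proxy or sync with the LAN device over its own pinned channel:

```python
# Sketch: an embedded localhost server, as a client app or extension
# helper might run. Binding to 127.0.0.1 keeps it off the network;
# port 0 asks the OS for any free port.
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the local helper"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
reply = urllib.request.urlopen(url).read()
server.shutdown()
```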

There are airgapped networks that are using plaintext HTTP over a local network; I think the use of plaintext HTTP over the network itself should be phased out as soon as possible.

It is quite likely that people will use the same passwords for their LANs as they use with mobile network accounts (even mobile money wallets), or other accounts. All it takes is one device on the network to be compromised to make it possible to intercept this information.

In the EU sending personal info (e.g. username/password, phone number, email, etc) over a network in plain text is a violation of GDPR. In the US if this data concerns children in a school / hospital etc. it would be a violation of Children's Online Privacy Protection Act.

Even where many people have limited Internet access, there are considerable numbers who have some Internet access via (sometimes limited) mobile data bundles, when they visit other towns, etc. It is getting more and more likely that the person who is using an airgapped server also has a mobile banking account and other usernames/passwords.

@tim-moody

I totally agree with the position that an isolated LAN with non-routable IPs should be able to use HTTP if its administrators so choose. I realize that browsers can accept self-signed certificates, but the process is cumbersome and scary for users not used to it.

I think the use of plaintext http over the network itself should be phased out as soon as possible.

https serves two purposes (unfortunately): authentication and privacy.

Only a domain can be authenticated because only then can it prove ownership to an outside authority. I can't prove my 10.10.10.10 is not yours.

Privacy, where required, is possible with supplementary encryption libraries within applications even over http, but it is not always required. A student watching a Khan Academy video from a local server is no less secure over http than https. In school labs around the world no one logs in except administrators and the only personal data is a user name.
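As an example of "supplementary encryption within applications", the widely used `cryptography` package's Fernet recipe (AES plus HMAC) gives payload confidentiality and integrity independent of the transport; key distribution is the hard part and is out of scope of this sketch:

```python
# Sketch: application-layer encryption of a payload that could then be
# carried over plain http on an isolated LAN. The key must be shared
# out of band (e.g. a QR code or a sticker on the device).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
box = Fernet(key)

token = box.encrypt(b"username=jdoe")  # opaque to on-path observers
plain = box.decrypt(token)             # only holders of the key can do this
```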

@jharris1993

jharris1993 commented Apr 12, 2023

"In the EU sending personal info (e.g. username/password, phone number, email, etc) over a network in plain text is a violation of GDPR. In the US if this data concerns children in a school / hospital etc. it would be a violation of Children's Online Privacy Protection Act."

Pleeeease!
Are you seriously suggesting that communicating with a robot that requires a login is going to land me in jail?

Are you seriously suggesting that logging into a SMB server, behind my firewall, using a private IP, is going to land me in hot water?

If that were to happen, any lawyer with half the brains God gave a squirrel would immediately move for dismissal, and the court would likely sanction the prosecution for wasting the court's time.

This is the big issue. People can't, or don't want to, distinguish between a publicly facing network and a private one.

They don't want to acknowledge that a private household or classroom network is different from a coffee-house LAN, or a network that contains sensitive legal or health information.

That this argument is beyond silly should be intuitively obvious and I cannot believe that people are still raising this issue.

@aerik

aerik commented Apr 14, 2023

This

https serves two purposes (unfortunately): authentication and privacy.

is pretty close to the core of the issue. That "secure context" is a one-size-fits-all solution for a wide range of possible security risk types is another way of looking at it.

It's hard to imagine the powers that be making more categories than "secure context", so I think the approaches most likely to succeed are either 1) making it easier to create a secure context and/or 2) convincing those powers to be less aggressive in gating off features.

Only a domain can be authenticated because only then can it prove ownership to an outside authority. I can't prove my 10.10.10.10 is not yours.

Yes, except for two things: https://1.1.1.1/ and the fact that, barring IP collisions, if I reached your server at 10.10.10.10, you are that address. Maybe there's a little more complexity there, but there's complexity to DNS, too. It seems very reasonable to me to say that self-signed certificates over private networks with reserved IPs, as the OP proposed, should not only be trusted but be generated for that static IP (apparently Cloudflare is doing something tricky to make it look like they have a cert for that address).

In the EU sending personal info... over a network in plain text is a violation

That's hysterical. Then the GDPR just criminalized tons of legacy tech. It's a well-intentioned ridiculousness. AND YET - that further underscores my point about making it easier to run HTTPS.

@jharris1993

Yes, but for two things: https://1.1.1.1/ and the fact that, barring IP collisions, if I reached your server at 10.10.10.10, you are that address.

Not necessarily.

That's true only to the extent that this address references one unique location.

My ISP in Russia places private IP addresses behind a 10.x.x.x proxy, and which one you get is a toss-up.

In the case of my robots, (and those of others), EVERY robot has THE EXACT SAME IP address, distinguished only by SSID.

We need a method, like a browser context flag, that allows the maintainer to create a special context for special needs.

@aerik

This comment has been minimized.

@drzraf

drzraf commented Feb 6, 2024

I agree with the previous comment: the methods used to forcefully push this undesirable move (as proven again and again in most bug trackers) were highly and visibly anti-communitarian:

  • The impossibility to rollback what is seen as a significant specification regression
  • The arguments favoring HTTPS transforming into a pretext with each new day, with half of the web being MITMed by Cloudflare and the CA-independent DANE solution having been expertly ditched by every browser vendor for years
  • The fact that this spec confuses platform features/APIs (websockets, storage, crypto, ...) with the content/data they manage
  • ...

Such a long thread leading to little or no actual change makes it harder to focus on the spec when the underlying issue seems to be so much about the governing body and governance itself.

@aerik

aerik commented Feb 16, 2024

Some objections have been in the form of "we don't want to make more prompts for users to allow features", but more and more it seems this is the solution that is actually adopted. We have prompts for microphone, camera, and location, and with the increasing restrictions on third-party cookies, we're going to see a huge proliferation of prompts for access by iframes (https://privacycg.github.io/storage-access/) to allow embedded content to continue to work.

I fail to see why prompting a user to allow a cookie so that their embedded social media sites will continue to work is any more worthy, or less confusing, than the many use cases presented in this (and related) issues.
