
Why would notice and consent not be adequate? (Notice and consent debate) #5

Open
jwrosewell opened this issue Feb 10, 2022 · 63 comments

Comments

@jwrosewell

No description provided.

@darobin

darobin commented Feb 10, 2022

A person's autonomy is their ability to make decisions of their own volition, without undue influence from other parties. People have limited intellectual resources and time with which to weigh decisions, and by necessity rely on shortcuts when making decisions. This makes their preferences, including privacy preferences, malleable and susceptible to manipulation. A person's autonomy is enhanced by a system or device when that system offers a shortcut that aligns more with what that person would have decided given arbitrary amounts of time and relatively unlimited intellectual ability; and autonomy is decreased when a similar shortcut goes against decisions made under such ideal conditions.

The failure of notice and consent regimes is widely documented (as has been pointed out to you several times over). Here is a very cursory list from the primary literature:

I have spent way too much time reading on this topic and I am not aware of a single study showing that notice and consent is ever effective except in very specific cases that do not include persistent capture.

@jwrosewell
Author

And yet GDPR allows for and provides guidance on consent.

All the businesses that are affiliated with people in this group rely on consent to conduct their digital business.

On what basis would this group decide that a recognised law concerning consent is not good enough? This is a fundamental question.

@darobin

darobin commented Feb 10, 2022

If extensive scholarly investigation spanning decades and leading to peer-reviewed scientific consensus published in top-tier journals does not convince you, I don't see what will. It's not the job of this group or any other group to deal with tactics that amount to climate change denialism.

@Furchin

Furchin commented Feb 10, 2022

Today's meeting made a broad, unsupported assertion that notice and consent are not adequate, and I wanted to learn more about the premise behind this assertion. Responding with a list of sources -- no fewer than four of which literally use notice and consent themselves in presenting the information -- and then comparing the asking of a reasonable question to climate change denialism feels like a poor approach to starting a productive conversation.

I'd love to learn more about what would be considered sufficient, but I also don't want to be vilified for asking the question.

@darobin

darobin commented Feb 10, 2022

It's great that you want to learn more about the problems with notice and consent regimes. I didn't just provide a list of sources, I provided a short summary of the issue and a list of sources to back that up for people who have a sincere interest in digging into the problem themselves. I'm not exactly sure what more you expect beyond a concise answer and a list of further reading.

I have to point out that it's not four of the sites hosting those sources that rely on notice regimes; it's all of them. They don't do that because it provides better privacy; they do it because they are legally required to. In many legal regimes, such as the US and the EU, it's the primary way that companies protect themselves against privacy claims from their users.

No one is being vilified here. James has asked the question many times and has systematically ignored the research. The comparison with climate change denialism stems from a pattern of interaction.

If your question is sincere, which is my starting assumption, that's great. However, I'm not sure that I have a better plan to offer you than either to trust those of us who've done our homework in order to work on this — and found that it does not work — or to do your own reading and review of the literature, for which I have provided some pointers.

This isn't a novel problem, it's a topic that has animated the field for about fifty years now. It's the reason why newer proposed laws are moving away from it (often keeping some notice, typically a privacy policy, but replacing consent with more effective mechanisms like outright prohibitions). To take an example that I just happen to have in a nearby tab, there's Consumer Reports' model law that has a quick section on the ineffectiveness of consent.

@jdelhommeau

@darobin you seem to assume that notice and consent regimes can't coexist with effective mechanisms. I don't believe this to be the case. While Privacy Enhancing Technologies are definitely a good thing for users, that doesn't mean notice and consent are no longer required. The foundation of privacy under GDPR is transparency and control. If you build a complex system that prevents bad actors from doing things they weren't allowed to do in the first place, but the user doesn't understand it, I don't think it will help restore users' confidence in the internet in the long term.
That said, I imagine we could argue about the above and not reach consensus, so it's not a great way to spend time and resources.
However, even if we don't necessarily agree on the above, it remains that we must work within the scope of the law. As you said yesterday in the call, a standard must be more restrictive than the law, otherwise no one would, or at least should, use it. In EMEA, device access requires notice and consent. Personal data processing is more flexible with regard to legal basis, but whether we assume consent or legitimate interest, both require at the very least transparency and control.
As per @martinthomson's presentation yesterday, if we start with the assumption that these new mechanisms should be "on by default with opt-out", then we are designing solutions that will not work for EMEA. I don't think this is right. And we can't assume that regulation (the ePrivacy Regulation) will eventually evolve to meet our needs. We must work within the existing constraints, and adapt our solution as the law evolves.

@notImposterSyndromeIfImposter

I don't think the argument is that we shouldn't gather consent or notify users. The argument is that the current state of consent gathering is terrible and insufficient unto itself. In practice it is people clicking a bunch of stuff they don't understand to get to the content they want.

I think the discussion of "on by default" can be separately debated and it's a good call out that in some jurisdictions might be a non-starter.

@anderagakura

anderagakura commented Feb 10, 2022

@darobin The papers you've shared are interesting, really. For example, the first one describes "a method of privacy regulation which promises transparency and agency but delivers neither"; below is an interesting part:

Fundamentally, “notice and choice” is a misnomer when few privacy notices offer sufficiently meaningful information capable of influencing the user’s ultimate decision, and when a choice of whether to accept all the terms offered or simply seek a different product is often no choice at all. Notice and choice has been roundly criticized by policymakers, academics, social scientists, advocates, and others for quite some time, and with good reason. The idea that a generic description of a company’s practices could possibly provide a sufficient disclaimer as to what data a company collects and how the data is used begs credulity; considering that the description is generally written in ten-point font and inscrutable legalese, is buried on the company’s website, and is one of an unmanageable number that individuals encounter in a day, the proposition is laughable. People encounter so many privacy policies in their daily lives that it would be irrational to read each of them—one study calculated that it would take the average person 200 hours per year. There are also all kinds of cognitive phenomena that prevent individuals from obtaining meaningful information from privacy policies in the way that a notice and choice regime assumes they do, such as hyperbolic discounting and optimism bias.

The research by many people indicates there are issues with "notice and choice". True indeed. During the meeting, it was recalled that the purpose of this group is to focus on technical ideas and apply the law (not to write legislation or policy). True as well. But we have the opportunity to shape (or re-shape) products, all within the scope of the law, to protect users' privacy. People do not understand cookies; there are too many pages to read and understand, which are sometimes really technical, etc.

As it's our role to help the advertising ecosystem evolve, I think it's also our role to make sure the user gets all the information (not simple, for sure, but we have to) via a clear, transparent, readable, accessible and controllable mechanism. Otherwise, some areas could face issues using these solutions (e.g. EMEA with GDPR) and we will reproduce some issues from the past.

@lknik

lknik commented Feb 10, 2022

Simple question.

Assuming that we have a true privacy-preserving scheme (no personal data processed, then), under what circumstances would consent for 'data protection' be needed? What would be its role, and to whom should it be granted?

P.S. Obviously it's a separate issue from the user's autonomy.

@jdelhommeau

There will always be personal data processing, just by fewer people. Taking Topics as an example, but the same logic applies to all propositions I believe: before, hundreds of vendors would track users across domains to infer their interests. In the Topics API, the browser (so Google Chrome, for example) is doing this personal data processing. The fact that the personal data processing happens in the browser instead of on a third-party server doesn't change the fact that personal data processing is happening, and as such, it falls under GDPR. In that case, it means that Chrome will require some legal basis (likely consent) in order to do that.
Maybe a more recent example: the Belgian DPA ruling against IAB Europe that happened last week. As per this ruling, the ability to evaluate a user's choice about their data processing is itself personal data processing and requires a valid legal basis. So there is no escaping GDPR.

Finally, GDPR is only one of the applicable laws. The other one, ePrivacy, regulates device access, whether it is for personal data or not. Since most solutions out there (IPA, Topics, FLEDGE, PARAKEET, etc.) require device storage (so device access), they are all subject to consent under current law.

@lknik

lknik commented Feb 10, 2022

By topic do you mean the Topics API? In that case there's no question that personal data may be processed, and consent is needed. What I meant is actual privacy-preserving tech where no personal data is processed.

I agree with the ePrivacy take, though. But it would be tough to decide who should ask for consent, and where (on a website while browsing? Why, if conversions/fetches happen later?) in the context of Turtledove. So I'd like to highlight that this will be a big headache (and I know what I'm talking about here), because in terms of "consent" it is necessary to establish WHO should obtain the consent.

My initial question was more generic, not aimed at any specific proposal.

@michael-oneill

The company/entity that manages the domain where storage is being used needs to obtain consent.
Incessant requests for that should be outlawed, and/or protocols developed so browsers can detect exempt storage and block the others unless the user agrees.

@jdelhommeau

Topics was really just an example, but you can do a similar analysis for all the solutions out there:

  • PCM: the impression is stored on the device, so it requires consent. As to who obtains it, that is indeed another question that needs to be carefully evaluated. The browser is also processing the impression and conversion from that user, so there is personal data processing by the browser.
  • FLEDGE: Interest Groups, web bundles, etc. all require device access, so consent. The browser is managing the IGs of that specific user, so that would be personal data processing.
  • etc.

@alextcone

alextcone commented Feb 10, 2022

I have a few observations and have bucketed them into general, standards, legal concepts, and strategy.

General

  • There are a lot of claims above about the GDPR and the ePrivacy Directive that read as very binary
  • If I've learned anything working with lawyers, it is that there is very little that is as binary as the conversation above implies
  • Can we please acknowledge that data protective/privacy-seeking ads APIs and jurisdictional legal requirements around notice and choice are not mutually exclusive? We don't have to decide or come to consensus that we have to go all in on one or the other.

Standards

  • Each must have some sort of scope
  • The scope of the current proposals and the PATCG is not to build a new notice and choice mechanism...
  • ...though as many have stated, the final standards we arrive at, if any, should accommodate user understanding and control to the best of our abilities (and there are a number of ways to skin that cat)
  • If someone is looking to build (or iterate on) one of those notice and choice standards, there are places to do so and, importantly, still feel aligned with the work inside PATCG's scope
  • It’s ok and quite healthy for a standards discussion focused on data protection and privacy in the ads space to try and test new means of achieving the spirit of digital data protection and privacy in ads (doing so is not illegal prima facie)
  • Standards that seek to reduce the spray of global, non-purpose-limited identifiers are being welcomed by many regulators, including ones in the UK

Legal Concepts

  • Laws evolve
  • Legal interpretations are very often all over the map
  • Each regulatory enforcement evolves those legal interpretations, which remain all over the map, even if a little less so with the text and nature of each new enforcement
  • We are very unlikely to achieve harmony in legal interpretations in this forum and with this group's makeup
  • The direction of lawmaking around privacy, data protection and advertising is currently very dynamic and there’s no certainty any one law or concept today is going to be the same next year or the year after that

Strategy

  • Using today's laws to maintain the status quo is arguably a really poor strategy, as tomorrow's laws may look a lot different and be less agreeable to the status quo (see one of many recent examples)
  • It is wise to open our eyes to the direction of laws and regulatory discourse around our subject area of ads, privacy and data protection
  • Much of the tenor of that lawmaking discourse is a reaction to the pervasiveness of global, non-purpose-limited IDs sprayed all over the place
  • If this CG is able to chart a path for achieving purpose limited processing it will be doing so against a backdrop of lawmaking motion that is likely to view our achievement as an advancement

@jdelhommeau

I am not sure if some of your notes above are addressed to my previous comment, @alextcone, but I wanted to clarify my position so it is not misinterpreted. I am not saying that the status quo is good. I believe that there can definitely be an improvement in users' privacy over the web using some of the privacy-enhancing techniques that were discussed during the call yesterday.
However, my only point is that we shouldn't assume that those new techniques will allow us to bypass existing laws. So we need to make sure that whatever we come up with can work within the current framework of the law.
While I wouldn't risk making broad assumptions about GDPR/ePrivacy interpretation, I believe it is fine to assume that device access requires consent under ePrivacy, and that personal data processing requires a legal basis (likely consent or legitimate interest in our position).
So unless we are 100% sure that none of the solutions we come up with will require device access, or that legitimate interest will perfectly satisfy any personal data processing, starting with the assumption that we can have opt-in by default with the possibility to opt out likely means those solutions will not be available for the EMEA market. At least in the current state of the law.

@alextcone

My observations were primarily focused on the title of this Issue, which asks "Why would notice and consent not be adequate?" Yes, I was also reacting to the binary nature of your legal opinion on the ePD and the GDPR, @jdelhommeau, not because it is my opinion that you're wrong, but because I think it has the potential to be counterproductive. There's a strawperson being created here that seems to indicate that the prevailing opinion is that privacy/data-protective systems, technologies, and standards don't fall under the law. I do not think that is the prevailing opinion. I realize @lknik asked the question, but that question is not representative of the conclusions I believe many legal professionals are arriving at right now.

Bigger picture, I want us to avoid over-indexing to any one law or legal concept. There are several reasons for that. But the main one is I think what PATCG is trying to do and what other standards initiatives are trying to do can co-exist and, importantly, not break laws.

@jwrosewell jwrosewell changed the title Why would notice and consent not be adequate? Notice and consent debate Feb 10, 2022
@darobin

darobin commented Feb 10, 2022

To add to @alextcone's excellent notes above, I wanted to clarify a few points.

The important point here is that notice and choice does not provide adequate protection and does not enable people to make decisions that correspond to autonomous choice. What this means is that, as a standards group, we cannot create a potentially dangerous piece of technology and then shirk our responsibilities by saying "it's okay, we'll just prompt the user for their consent." Not only is that an established bad practice in privacy, it is also an established bad practice in browser UI. The Web community has been struggling for years, nay, decades with permission systems for powerful capabilities and it remains an unsolved problem. Notice and choice is to privacy what ActiveX prompts were to security. We might just be better off if we don't reproduce the same mistakes we made in the 90s.


That we will design systems that are safe without notice and choice doesn't mean that we can magic notice and choice away when it is legally required. Without getting into details, there are definitely potential conflicts between the ePrivacy Directive and some privacy-enhancing techniques. The fact that ineffective laws exist does not lessen the value of producing technology that delivers utility while being private by design. If and when the ePD is an issue, we can explore options. One would be to ask legislators to carve out exemptions for specific PET designs. Another, if we can demonstrate credible enforcement in the system, could be to bring an Article 40 Code of Conduct to a DPA or the EDPB that would waive (as is already done in some cases) local storage requirements. It's too early for that now, though; we can cross that bridge when we get there. So @jdelhommeau, I think we are all agreed on this?

@anderagakura I'm glad that you enjoyed Barrett's article, I find her to be a very effective (and funny) writer. I agree with you that we will still need transparency. The point here is not that we should eliminate transparency (or even choice, when it is useful and effective), but rather that we shouldn't rely on transparency or choice to make a system safe.

It's like the Linux Problem: being able to tinker with something is liberating, being required to tinker with a thing before it can be useful is alienating.

I believe that we have consensus on the following statements:

  • Notice and choice is not an appropriate mechanism with which to protect people and meet their expectations of privacy. This statement is well documented in the scientific record.
  • Making a system privacy-friendly does not necessarily eliminate legal requirements in all jurisdictions. There may be solutions there, but we will have to cross that bridge when we get there.
  • Not relying on notice and choice does not mean eliminating notice and choice. In cases in which it can contribute, for instance to transparency, it can still add value.

Unless there is an unresolved objection in substance, I propose that we close this issue. This point is discussed in the TAG's privacy document (forthcoming), and therefore I don't think we need to capture it in the PAT-specific principles document.

@michael-oneill

michael-oneill commented Feb 10, 2022

The consent requirement (as in ePrivacy for storage access, or in the GDPR as a legal basis for processing personal data) does not require a user prompt; it just means the user has to agree before the access happens. That could be as simple as a setting in the browser, such as DNT or GPC (actually its reciprocal).
Also, saying "...if and when ePD is an issue" is not a good way to encourage policy makers to create an exemption. The ePD has been around in one way or another since 2002, and the opt-in requirement (for storage access, which is necessary for tracking) has been in force since 2011.

Better to assume an opt-in for now, and try and get an exemption later when this has proved itself in practice.

@jwrosewell
Author

I will close this particular comment thread after the meeting. It has formed a useful "scratch pad" for debate between the meetings. The chairs may wish to include a reference for the record.

I do not consider the subject closed as @darobin suggests for at least the following reasons.

There are many stakeholders that have not had an opportunity to contribute to the debate, particularly those in Europe who are underrepresented in the meeting.

Regarding the broader subject of privacy policy.

The TAG document is not yet finalised and there is still an opportunity to balance it so that it becomes more broadly acceptable. Issue 106 is an example of such an issue. I'm concerned about privacy absolutists dictating the direction, and about the impact on competition of a position that gives internet gatekeepers licence to create information asymmetries.

Ultimately I believe we need to agree on a range on the following scale from the Model State Privacy Act, embed that range in the charter, and then ensure we stick to it.


Thank you @darobin for sharing the documents. I have only skim-read this particular document, so I'm not in a position to comment on the broader content.

@darobin

darobin commented Feb 10, 2022

James, this is not a "debate", no matter how much you may use tricks, as you have in renaming the issue after the fact to try to create deceptive optics.

The matter of whether notice and choice provides adequate protection is clear. This has been shown to you repeatedly. At no point have you provided the slightest shred of evidence to indicate otherwise. No one in this issue has challenged this fact, people have simply been discussing how that fact interacts with other aspects of the world and helped clarify the position.

As always, if someone has new information relating to the effectiveness of notice and choice, it should be looked at.

I would add that, as you have been told several times previously and as documented above, notice and choice regimes have noted anticompetitive effects. If you had a sincere interest in addressing competition issues, you would consider notice and choice as problematic.

Privacy is the set of rules that govern flows of information. "Privacy absolutists" is a meaningless term that just serves to stir up FUD — which, for the record, I note is another trolling tactic.

@jwrosewell
Author

FWIW

"Privacy absolutism" - 106

I used "privacy absolutist" to mean someone who advocates for "privacy absolutism" as explained by @jbradleychen.

@jwrosewell jwrosewell changed the title Notice and consent debate Why would notice and consent not be adequate? (Notice and consent debate) Feb 10, 2022
@darobin

darobin commented Feb 10, 2022

Referencing someone else's use of a term does not make it meaningful or any less of a trolling tactic. But you knew that.

@jbradleychen

jbradleychen commented Feb 11, 2022

For the record, I am still worried about the risks of privacy decontextualized from the ethical web. I apologize that I have not been able to keep up with progress on this draft. I would like to be confident that we are working together towards excellent privacy and excellent safety. Unfortunately such an outcome is not inevitable, and the tone of these last few comments does not give me confidence that we are converging on such an outcome.

I will try to find some time to read through it again in the next few days.

@darobin

darobin commented Feb 11, 2022

@jbradleychen I am confident that we remain aligned on these goals. It will take time to solve these issues but the conceptual framework we have is conducive to it since it treats information unsafety as a privacy issue. We can also make more constructive progress in TAG repos because @jwrosewell is banned there for exactly the kind of behaviour exhibited here. It's easier to be productive without concern trolling, sealioning, evidence denialism, someone trying to stir up dissent that isn't there, invoking disagreement from parties that aren't there, renaming the issue after the fact to try to present it as controversial, etc.

Regarding the topic at hand, I don't believe that there are known safety benefits from notice and choice?

@jdelhommeau

@darobin, I agree with you on most points:

  • Consent and notice alone are not effective ways to protect users' privacy
  • They should be complemented by privacy-preserving mechanisms such as the ones debated in this group (PCM, on-device processing, etc.)

However, to echo @michael-oneill's point, I would feel better about assuming opt-out by default, since this is likely going to be a requirement at least for EMEA. Once the solution is designed, we can review and eventually change to opt-in by default if we can identify a way to do so (no device access, an exemption, etc.). I just don't see the point of starting with an assumption that is very likely going to be a blocker.
That said, I don't believe this to be a critical point for now, and as mentioned by @alextcone, I would rather have this group focus on what a privacy model for the web means, along with solutions to support such a privacy model. I just don't feel comfortable having it in writing that we assume opt-in by default for future solutions as our starting point, hence why I raised concerns.

@anderagakura

anderagakura commented Feb 11, 2022

@darobin Thanks for your reply. Just for my knowledge I will keep reading.

+1 with @jdelhommeau: opt-out by default is essential for EMEA, unless the default setup fulfils the requirements (e.g. no device access, geo...)

@michael-oneill: If the idea is to be product-focused first, I understand. But the risk is that, in doing so, we will come to a point where we will need to deploy it in order to test it. Last year, FLoC was deployed to be tested. It was tested in different areas, but not in EMEA, because it was not compliant with GDPR restrictions. Thus, we could reproduce the same scenario.

Better to assume an opt-in for now, and try and get an exemption later when this has proved itself in practice.

@michael-oneill

michael-oneill commented Feb 11, 2022

If the idea is to be product focus first, I understand. But the risk is, doing that we will come to a point that we will need to deploy it in order to test it. Last year, FLoC was deployed to be tested. It was in different areas but not in EMEA because it was not compliant with GDPR restrictions. Thus, we could reproduce the same scenario.

Yes, testing in EMEA will need an opt-in. But the browser that does it could also provide a consent setting (defaulted to off, of course, i.e. GPC: 1 as Brave does).
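[Editor's note] For readers unfamiliar with the GPC signal mentioned above: a browser with the setting enabled sends the `Sec-GPC: 1` request header, which a server can inspect. A minimal sketch of such a server-side check (the helper name is illustrative and not taken from any proposal in this thread):

```python
# Hedged sketch: checking for a GPC-style opt-out signal on the server.
# Per the Global Privacy Control proposal, an enabled browser sends
# "Sec-GPC: 1"; the header's absence means no signal was expressed,
# not that the user consented.

def gpc_opted_out(headers: dict) -> bool:
    """Return True if the request carries a Sec-GPC: 1 opt-out signal."""
    # HTTP header names are case-insensitive, so normalise keys first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A browser like Brave with the setting on would send the header:
assert gpc_opted_out({"Sec-GPC": "1"})
# No header means no signal either way -- not consent:
assert not gpc_opted_out({"User-Agent": "ExampleBrowser/1.0"})
```

Note that, as discussed above, the default value of such a setting is exactly what is at issue for EMEA.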

@michael-oneill

The consent setting could be a new one that does not even have to be communicated in a header, because the browser itself could just block sending the MPC info unless the user had explicitly agreed.
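[Editor's note] The blocking idea described here can be sketched abstractly as follows (hypothetical class and field names; no real browser API is implied):

```python
# Hedged sketch of the comment above: the browser holds the consent
# setting internally and simply refuses to emit the MPC report unless
# the user has explicitly agreed -- no header or site-visible signal
# is needed.

class Browser:
    def __init__(self):
        self.mpc_consent = False  # off by default, i.e. opt-in required

    def maybe_send_report(self, report: dict):
        """Return the report to send, or None if consent is absent."""
        if not self.mpc_consent:
            return None  # silently blocked; the site never sees the data
        return report

b = Browser()
assert b.maybe_send_report({"campaign": 42}) is None  # blocked by default
b.mpc_consent = True  # user explicitly enables the setting
assert b.maybe_send_report({"campaign": 42}) == {"campaign": 42}
```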

@jwrosewell
Author

At least Google, the UK CMA, and 51Degrees agree that solutions must adhere to GDPR. GDPR allows for and requires consent. Browser vendors performing processing and using personal data under GDPR will need to gain consent to do this. Browser vendors that are required to avoid self-preferencing will also need to ensure others are able to obtain the same consent on an equal basis.

Consent can be problematic due to users' understanding of what they are consenting to, as @darobin points out. Examples I use include: gaining permission for something alongside a gazillion other things at the point of setting up a shiny new and expensive phone, or presenting so many options that people just give up.

But despite all these issues, consent is used by every digital service provider to operate their service (including GitHub for the posting of these comments).

Therefore I’m concerned by @darobin’s following statement.

The answer to this question is a clear and resounding "no."

If the scope of this group is to address the long-standing issues associated with consent, then perhaps it should be renamed the "Fixing Consent Community Group", and it would likely need to involve a wider group of participants.

If not, then we cannot limit the innovation of the group by placing an unjustified burden on proposers concerning the role of consent. Instead, we need to establish success criteria associated with the presentation, capture, and monitoring of consent.

I think we agree on the following.

  • “dark patterns” must be avoided;
  • fewer and simpler questions are desirable;
  • the legal text associated with data use should be as simple and clear as possible; and
  • some method of verifying adherence to people’s choices is desirable.

I fear we do not agree that the widest number of entities should be able to participate and innovate, that entities should be free to decide what is and is not necessary, the suitability of the Ethical Web Principles (and other W3C doctrine) in general, alignment to laws, that we are all acting in good faith, and that all entities should have choice concerning the other entities they work with.

@AramZS as chair; how do we constructively identify and address these differences?

@darobin

darobin commented Mar 1, 2022

James, the best way for you to contribute constructively is to engage with the existing body of work showing that consent does not, apart from rare cases, provide a sound basis for data protection or privacy.

The purpose of this group is to develop private advertising technology. If consent worked for the kind of data processing required in advertising, we wouldn't actually need this group. The technology to produce prompts inside of browsers actually exists. Since consent doesn't work, we need better solutions. This isn't "concerning," it's just the conclusion that is supported by the evidence — as indicated to you over and over again. Repeating the same discredited argument without novel material is not constructive.

That consent cannot provide a defensible approach for the kind of processing this group is working on does not mean that this becomes the "Fixing Consent Community Group." You know what else doesn't provide a defensible approach for the kind of processing this group is working on? Bourbonnais donkeys. That doesn't mean we need to rename this group to the "Fixing Bourbonnais Donkeys Community Group." We just won't rely on Bourbonnais donkeys in the same way that we won't rely on consent. Claiming that we need to fix your pet preferred solution because it doesn't work isn't constructive. Making strawman arguments isn't constructive.

Having to deal with reality isn't an "unjustified burden." But hey, if you have something actually new and have somehow solved consent, then there's literally a couple of decades' worth of backlog in APIs that we can't expose because relying on consent is unsafe. It would be a revolution in Web technology.

@BasileLeparmentier
Copy link

Hi,

I won't comment on the subject but I want to make a comment on the tone of this thread, which I have now seen repeated on many occasions. It looks to me very close to outright bullying of James and is personally making me really uncomfortable, and I am quite thick-skinned.

I would really appreciate it if we could go back to a more civil, but still frank, way of expressing our disagreements, with respect. When I see the tone here I really don't feel it is possible to have a constructive discussion.

This is a shame.
Best,
Basile

@alextcone
Copy link

alextcone commented Mar 1, 2022

@jwrosewell can you clarify if you are raising this Issue as an attempt to build a legal argument that someone involved in standards design is saying that new private ad technology standards do not need controls or transparency? I have not seen a single comment here that indicates any of us think that jurisdictional legal requirements can be magicked away.

@BasileLeparmentier - I appreciate you making the call for civility. I believe it is a two-way street. @jwrosewell's initial framing (and subsequent reframing) of the Issue makes me quite uncomfortable given the number of legal accusations he has openly associated himself with. That atmosphere makes productive contributions and discussions incredibly difficult and psychologically taxing.

@jwrosewell
Copy link
Author

@darobin – Could you focus on confirming the four points that I thought we agreed on? I’m well aware that we do not agree on the role of consent. My position is simply that consent is used as the basis for the provision of digital services from publishing, web browsers, GitHub and everything else and that this group needs to include solutions that utilize consent in the most responsible way. This is a reasonable position.

@alextcone – We agree that jurisdictional legal requirements cannot be magicked away. This is perhaps a fifth point of agreement.

@alextcone – As soon as the confusion associated with retitling the issue was raised with me I turned it back to the original question. I apologize for the disruption caused.

Overall, laws play a role in the solutions we are debating. We need to be open about that and ensure that they are considered. We will not find optimum solutions by considering the perspective of only a single profession.

@darobin
Copy link

darobin commented Mar 1, 2022

@BasileLeparmentier I appreciate your message, but how long should civility be extended to someone who will then use it for nothing other than repeating debunked claims, casting insulting and unfounded aspersions ("I fear we do not agree that the widest number of entities should be able to participate and innovate"), sealioning, renaming the issue to reframe it after the fact, etc.?

If this were an entirely new situation, I would wholeheartedly agree with you Basile. But James's behaviour here fits a pattern that we've seen over and over again. Going back in time, when James loudly demanded that the TAG's security questionnaire be changed, many of us tried to engage on open-minded terms. But over time it surfaced not only that James continuously ignored every argument that was inconvenient to him but in fact had not even made the most cursory attempt at understanding the Web's security model and threats. He eventually managed to anger the chairs, both of whom are some of the kindest and most patient people in the community, and was banned from participation. Frankly, that takes a lot.

I have a huge amount of sympathy for the fact that the Web is a complex beast and that approaching standards is hard. Like many others, over the years I've worked to ensure that the community is more inclusive, more open, and that participating is easier. Anyone who needs support navigating this work can come to me (and to many others — I'm not special in that) and people do. But this requires at least two things: 1) the willingness to listen (even if it's to disagree) and 2) a commitment to engaging with prior art and doing one's homework. James has shown repeatedly that he is interested in neither.

People make mistakes, people learn, people grow. If James does sincerely want to adjust his behaviour, then the door is of course open. One basic adjustment would be accepting the fact that if consent had been shown to work, we'd use it. SMC is fun, but there is no self-respecting technologist who would consider using it in production if they could just use a prompt instead. That's just basic respect for the fact that others aren't dumb and are acting in good faith. In order to reject that elementary starting point, James has to come up with insulting conspiracy theories like "I fear we do not agree that the widest number of entities should be able to participate and innovate".
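(For readers unfamiliar with SMC: the core trick, additive secret sharing, fits in a few lines. This is a toy sketch for intuition only, nothing like a production protocol; real systems add malicious security, differential privacy, networking, and much more.)

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties=3):
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two users' values are secret-shared across three helper parties.
alice, bob = share(5), share(7)

# Each helper adds the shares it holds -- it never sees 5 or 7.
helper_sums = [(a + b) % PRIME for a, b in zip(alice, bob)]

# Only the aggregate is ever reconstructed, not the individual inputs.
assert reconstruct(helper_sums) == 12
```

The point of the quip above is precisely that this machinery is far heavier than showing a prompt; nobody would reach for it if a prompt actually delivered the same protection.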

I wasn't joking when I said that solving consent would be a revolution in Web tech. The link I pointed to in my previous post here is to a group I chartered and chaired that had permissions management — of which consent is a part — as a key component of its scope. Many things were tried in that group, before that group, after that group; all have failed. It's solved for a few specific cases (eg. button-based pickers) but that's it. Maybe it would be respectful to engage with prior art at least some before claiming one has the solution figured out?

At any rate, as Alex says respect and civility are two-way streets. I have much better things to do with my time than to have to push back against disrespectful behaviour and I would be delighted not to have to do this.

@darobin
Copy link

darobin commented Mar 1, 2022

@jwrosewell A lot of things are used and yet don't work, consent is one of them. See copious prior work.

I can't imagine that this group will prevent anyone from using consent in the most reasonable ways, when those exist. There is however no known way to share a significant portion of cross-context reading history with a party, at scale, based on consent, and so the reasonable application of consent for this group is to not rely on it. I'm glad that you agree with this.

(Is it annoying when people claim that you agree with them when they know you don't? Maybe you should consider not doing that to others.)

I'm not sure what to make of the handful of potential properties of consent that you list. Is it your expectation that no one has thought of this before? Is there a specific problem in the space, based on your reading of prior art, that you believe you are bringing a novel solution to?

@martinthomson
Copy link

This thread really has gone past the point where I think it is adding value.

James asked why notice and consent (or choice; to allow for the ambiguity, I'll just abbreviate to N&C) is not sufficient and we've veered off into discussions of whether it might be necessary sometimes. Those little diversions seem to have been fruitful in terms of reiterating a few things that were worth repeating: namely, whatever we might standardize here won't give anyone a license to break applicable laws, so relying on N&C might be necessary in some cases.

As far as the point of deeper disagreement, I don't see evidence of agreement regarding the sufficiency aspect. That is, I don't see any evidence that N&C might be sufficient basis for privacy protection. Wading through the rhetoric here, I'm only seeing one person who might be asserting that N&C is sufficient.

Thankfully, we're chartered in a way that rules this debate moot:

Features that support advertising but provide privacy by means that are primarily non-technical should be proposed elsewhere.

That is, the charter assumes that this question has been decided. It would take a fairly creative reading of that text to lead someone to conclude that a system that relied on N&C - exclusively or extensively - for its privacy properties was in scope for this group.

@BasileLeparmentier
Copy link

Hi @darobin,

You seem to feel justified in bullying James, and I don't intend to delve into whether you are justified or not; I am no judge.

I will just note that the tone of the discussion (which is not new, and this is why I decided to speak up) has shocked not only me but also other people with whom I have discussed it, who are not speaking up.

The consequence is that I am not able to speak my mind in threads where you answer like that, which stifles potential disagreement.

I won't comment further on this.
Basile

@alextcone
Copy link

I see a host of dynamics just under the surface here. I’m doubtful we’ll resolve those dynamics via Issue comments. I’m hopeful 2022 will see an in-person meeting of PATCG so that we can better get to know one another. This group’s charter is just too important to get distracted with anything else.

@darobin
Copy link

darobin commented Mar 2, 2022

Dear @BasileLeparmentier,

I am sincerely sorry that you feel this way. I have never wanted to bully anyone, but I believe it is also important to stand up to toxic individuals who constantly harass others and I am saddened to report that I know many who refuse to participate because of how James has harassed them in the past. This issue is not the place to discuss this topic further, but if you wish to reach out offline I would certainly be happy to speak.

For the topic at hand, as @martinthomson notes, this issue has not progressed since I proposed to close it 20 days ago, and no novel argument has been made that would establish consent as sufficient for data protection, contrary to the intended charter. We should close and document.

@jwrosewell
Copy link
Author

I assume the charter can be rewritten to make the position regarding alignment to GDPR clear. I believe there is benefit in that at least.

Regarding @martinthomson statement.

As far as the point of deeper disagreement, I don't see evidence of agreement regarding the sufficiency aspect. That is, I don't see any evidence that N&C might be sufficient basis for privacy protection. Wading through the rhetoric here, I'm only seeing one person who might be asserting that N&C is sufficient.

@darobin and @BasileLeparmentier have raised concerns about contributions. We can assume they act in good faith, and there are people and organisations that have not contributed to this debate. To draw the conclusion Martin does is premature, as it does not consider the input of those who do not contribute to public debate. It was for these reasons that I raised this now closed issue concerning secret ballots, and @jeffjaffe (W3C CEO) raised this issue concerning anonymous Formal Objections. @jeffjaffe has also agreed that where new members join and do not agree with prior consensus, there is no longer consensus. Therefore, how do we assess the views of the group so that these fears do not result in a vocal minority steering the group down a path that others do not agree with? This is particularly important concerning a matter that seeks to steer the group down a route that embraces a single type of solution and as such risks limiting innovation that might otherwise result in a better solution for society and people.

@jeffjaffe
Copy link

It was for these reasons I raised this now closed issue concerning secret ballot and @jeffjaffe (W3C CEO) raised this issue concerning anonymous Formal Objection.

My concerns in #497 are quite different from this discussion.

@jwrosewell
Copy link
Author

@jeffjaffe I should have clarified there are times when anonymity is desirable and we should be able to handle that. Your specific concerns around the circumstance for anonymity in #497 are different to the concerns in this discussion.

My issue #469 is also relevant, attracted some debate at the time, and is being passed to the AB.

The key point still stands. We cannot conclude as @martinthomson suggests.

@darobin
Copy link

darobin commented Mar 3, 2022

Kiran asked a good question on the public list that for some reason was not captured here as well. I am answering here to make sure we keep it in a single place. He asked if the issue with consent "is a limitation of browsers which cannot share significant portions of cross-context reading history at scale?"

The short answer is that this isn't a limitation of browsers but a limitation of what people can consent to through the kind of large-scale interactions that exist on the Web and through browsers. But if you don't have the background on this topic, I think that this answer won't be satisfactory. So I thought it would be helpful to provide a short backgrounder on consent so that not everyone has to read all the things just to reach the same conclusion. In the interest of brevity I will stick to the salient points regarding consent that have brought us to the present day; experts on the topic should of course chime in if they feel I've missed an important part.

Informed consent as used in computer systems today (and specifically for data processing) is an idea borrowed from (pre-digital) research on human subjects. One particularly important foundation of informed consent is the Belmont Principles, most notably the first principle, Respect for Persons. The idea of respect for persons is that people should be treated in such a way that they will make decisions based on their own set of values, preferences, and beliefs without undue influence or interference that will distort or skew their ability to make decisions. The important thing to note here is that respect for persons is meant to protect people's autonomy in contexts in which their ability to make good decisions can be impaired.

The way that this is operationalised in the context of research on human subjects is through informed consent. At some point, someone looked at this and realised that things like profiling, analytics, A/B testing, etc. look a lot like research on human subjects (which is true). And so they decided to just copy and paste informed consent over on computers, with the expectation that it would address problems of autonomy with data.

As often happens when people copy the superficial implementation onto computers but without the underlying structure that makes it work, this fell apart. First, one key component of research on human subjects is the Institutional Review Board (IRB), an independent group that reviews the research for ethical concerns. IRBs aren't perfect, but using an IRB means that in the vast majority of cases unethical treatment is prevented before any subject even gets to consent to it. Some companies do have IRBs (The Times does, as does Facebook for instance) but they can never be as open, independent, and systematic as they are in research. Second, the informed consent step is slow, deliberate, with a vivid depiction of risks. Subjects are often already volunteers. You might get a grad student sitting down with you to explain the pros and cons of participation, or a video equivalent.

What's really important to understand here is that informed consent is not about not using dark patterns and making some description of processing readable; it's about relying on an independent institution of multidisciplinary experts to make sure that the processing is ethical and on top of this independent assessment of the ethics of the intervention taking proactive steps to ensure that subjects understand what they are walking into. There are Web equivalents of informed consent — studies based on Mozilla Rally are a good example of this — but they work by reproducing the full apparatus of informed consent and not just the superficial bits that make the lawyers happy. Rally involves volunteering (installing an extension), gatekeeping to ensure that studies are ethical (eg. the Princeton IRB validated the studies I'm in), volunteering again to join specific studies and being walked through a description before consenting, and then strong technical measures to protect the data (like, it is only decrypted and analysed on devices disconnected from the Internet).

None of this scales to the kind of Web-wide data processing that is required to make our advertising infrastructure work (or to enable many other potentially harmful functions). People have tried, but as shown repeatedly by the research I linked to previously (and more generally all the work on bounded rationality) it doesn't work. What "doesn't work" means is that relying on consent for this kind of data processing means that you end up with a lot of people consenting when in fact they don't want what they are consenting to; they are only doing it because the system is directing them in ways that don't effectively align with the requirements of informed consent. (To give just one example, Hoofnagle et al. have found that 62% of people believe that if a site has a privacy policy that means that the site can't share their data with other parties. Informed consent means eliminating that kind of misunderstanding and then providing a detailed explanation of the risks. It's a steep hill and few people have the time for it.)

One possible reaction upon learning this is to not care. Some people will say "well, it's not my fault that people don't understand how privacy law and data work — if they don't like it, we gave them a 'choice'." But giving people a choice that you already know they will get wrong more often than not isn't ethical and doesn't align with respect for people.

As members of the Web community, however, we don't want to build unethical things. The Web is built atop the same ethical tradition that produced informed consent in research on human subjects: respect for persons. (We formulate it as putting people first, but it's the same idea.) Since we try our best to make decisions based on reality rather than on what is convenient, we can't in good conscience see that consent doesn't work and then decide to use it anyway. There is also a fair bit of evidence that relying on consent favours larger, more established companies which makes consent problematic from a competition standpoint as well. Because of this, it is incumbent upon us to build something better. (In a sense, we have to be the IRB that the Web can't have for every site.)

Is it technically possible to overturn this consensus? Of course. But we have to consider what the burden of proof looks like given the state of knowledge accumulated over the past fifty years that people have been working on this. Finding a lack of consensus requires more than just someone saying "I disagree," it would require establishing that respect for persons is secondary (and reinventing informed consent on non-Belmont principles), or that bounded rationality isn't real, or high-powered empirical studies showing that people aren't tricked out of their autonomy, or some other very significant scientific upheaval. It might be possible, but we're essentially talking about providing a proof of the Riemann hypothesis using basic arithmetic: I don't believe that it's been shown that you couldn't do that, and there are very regularly people who claim to have done it, but it would be unreasonable to put anything on hold for that in the absence of novel, solid evidence.

I hope this is helpful for people who haven't been wrestling with this topic. What the charter delineates is helpful because it protects this group from walking down blind alleys that have been explored extensively with no solution in sight. If people find this kind of informal background helpful, I would be happy to document it more prominently.

@nlongcn
Copy link

nlongcn commented Mar 3, 2022 via email

@ansuz
Copy link

ansuz commented Mar 4, 2022

Thanks for your extremely thorough and articulate explanation, @darobin !

I've been following along with this discussion and generally haven't felt like the average web user is represented by many of the participants here.

It seems to me that many here take it as an accepted first principle that the web without advertising is inconceivable, and it follows that the best we can do is make advertising somewhat less terrible.

In my opinion, the possibility that unregulated or unsupervised web advertising cannot be reformed must not be beyond consideration. Arguing that "there is no alternative" is not a good look.

I will leave you all with a meme to consider:

[image: "why-does-X" meme]

@lknik
Copy link

lknik commented Mar 4, 2022

First of all, I agree with some people who issued earlier comments that this thread is no longer providing much value. Secondly, I wanted to comment on one point made by @darobin

That consent cannot provide a defensible approach for the kind of processing this group is working on does not mean that this becomes the "Fixing Consent Community Group." ...

That is fair. However, some laws may still need consent, even if private processing is required. Even if we may or may not consider such laws as outdated, they nonetheless may or may not exist in the EU and the UK, depending on the technical solution. (hence the "may or may not")

@lbdvt
Copy link

lbdvt commented Mar 7, 2022

Hi @darobin,

It seems to me that the kind of consent mechanisms you describe (Institutional Review Board...) are well suited for things like medical research (e.g. "Are you willing to take an experimental treatment for a life-threatening disease?"), but that there's a big difference in complexity and consequences compared to web advertising (e.g. "If you click ok, your visit on nicefurnitures.example may be used to show you ads later on the web.").

Could you please expand on the use cases under:

At some point, someone looked at this and realised that things like profiling, analytics, A/B testing, etc. look a lot like research on human subjects (which is true)

?

@drpaulfarrow
Copy link

Love the additional background, Robin, thanks!

It would be great if you could expand a little bit on certain points...

For example, you say: "And so they decided to just copy and paste informed consent over on computers, with the expectation that it would address problems of autonomy with data."

And:

"As often happens when people copy the superficial implementation onto computers but without the underlying structure that makes it work, this fell apart."

Who are you referring to when you say 'they', and how has it fallen apart in your view?

Also, I would love to hear your thoughts on the applicability of 'broad consent', as opposed to the more classical definition of 'informed consent' as it is stated in the Belmont Report (which actually itself acknowledges that "presenting information in a disorganized and rapid fashion, allowing too little time for consideration or curtailing opportunities for questioning, all may adversely affect a subject's ability to make an informed choice", which would seem to make it a poor choice for webpage applications from the get-go!)

Cheers!

@darobin
Copy link

darobin commented Mar 7, 2022

@lbdvt The thing we need to be very careful about is to not assume that how we want the system to be used is how it actually gets used. If we could ensure that your visit to Nice Furniture™ could only be used to show you a furniture ad for a comparatively short period of time thereafter, we'd be in a very different position compared to the one we're in now. If a study emerges in a few years showing that students whose parents are into nice furniture do less well in college, this data could be used by some universities to turn your kids down in the future. Evidently, this is a deliberately contrived example and it wouldn't be possible in all countries, but the key is that privacy harms are time-shifted and impossible for people to predict. This has already had real consequences, for instance with tracking data used to hunt down undocumented people (WSJ, Vice).

Lack of purpose limitation is a constant source of problems in data. There's a strong sense in which guaranteeing purpose limitations is a key objective of this group.
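To make that concrete for readers, here is a toy sketch of purpose limitation as a system property (all names here are hypothetical illustrations, not any API proposed in this group): data carries the purposes it may be used for, fixed at collection time, and any other use fails by construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PurposeLimitedData:
    value: object
    allowed_purposes: frozenset  # fixed at collection time, never widened

    def use(self, purpose):
        """Release the value only for a purpose declared at collection."""
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose {purpose!r} not permitted")
        return self.value

visit = PurposeLimitedData("nicefurnitures.example",
                           frozenset({"frequency-capping"}))

visit.use("frequency-capping")        # permitted: declared at collection
try:
    visit.use("college-admissions")   # the contrived harm described above
except PermissionError:
    pass                              # rejected by construction
```

The contrast with consent is the point: the guarantee comes from what the system refuses to do, not from what a person clicked through.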

@drpaulfarrow I don't have a definitive text on the history, but my understanding from reading around isn't that one person one day decided to apply ideas from HSR to computer systems but rather that it happened because those were the conceptual tools "lying around" at the time. For instance, Records, Computers, and the Rights of Citizens already mentions data as research and the term "data subject" which became the norm. These assumptions are also in Convention 108, the 1995 Data Directive, and all the EU texts that follow. I've been meaning to look at the Conseil d'État's 1970 report on this to see what's there. If you're interested, you might be able to dig into the Hessischer Landtag's Vorlage des Datenschutzbeauftragten that was influential at the time.

There is a related thread that concerns more general permissions for a website to access more powerful capabilities (including data) and that has been a recurring unsolved issue in the W3C and broader Web community. It reads like a list of failures trying to rely on consent when risk is involved: ActiveX, Java applet security model, Device APIs & Policy WG (with similar issues in WebApps and HTML WGs), the PowerBox proposal from Mozilla & Sony Ericsson, delegated trust… We looked at the problem again as recently as 2018 and there wasn't much progress in terms of the state of the art.

In terms of what fell apart: the short version is that we are looking at an absence of autonomy in the processing of personal data and significant data protection impacts.

I think broad consent is certainly an interesting way to think about options! I suspect you might have more experience with it than I from your previous work? One key component of broad consent is how what is being consented to is generally quite limited in scope (even if not in details) and still under IRB accountability. I'm not sure that I see it becoming useful in our context because, by the time we've enforced purpose limitations, does consent (of any kind) add something valuable on top?

@alextcone
Copy link

Lack of purpose limitation is a constant source of problems in data. There's a strong sense in which guaranteeing purpose limitations is a key objective of this group.

Right on, @darobin

@kirangopinath71
Copy link

I'm not sure that I see it becoming useful in our context because, by the time we've enforced purpose limitations, does consent (of any kind) add something valuable on top?

Thanks for the clarification, Robin. Agree with you that purpose limitation will solve a major part of the problem.

Consent might still be required for users to opt in and opt out of specific data sharing, even with purpose limitation. E.g., a 14-year-old girl curious about pregnancy test kits might not want to share her reading/search data for any purpose, to avoid potential harms (which could range from embarrassment to harassment to worse).

Effort is required to make consent management more frictionless, easier, and less annoying than cookie banners, yet available upfront for the user; this could perhaps be added to the proposed solution.

@dmarti
Copy link

dmarti commented Mar 11, 2022

@kirangopinath71 This is a good example of a situation where it is important to first evaluate whether or not it is appropriate to ask for consent at all.

When designing a system, we have to take into account the user research literature and what we can be reasonably expected to understand, as human web developers, about human web users.

The number of situations in which any human being would want to share with anyone else that they were browsing a web page about a pregnancy test kit is vanishingly low, so asking for consent is going to produce far more errors than true consent. Asking for consent would not only waste the time of everyone who answered the consent prompt correctly, but also result in inappropriate processing of the information of everyone who failed to get it right. This specific case is a good example of where consent is not a good fit. (Users who really want to share this info can do it on their own)
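The back-of-the-envelope arithmetic behind this is worth spelling out (the numbers below are illustrative assumptions, not measurements): when genuine willingness to share is rare, even a modest mis-click rate means most "yes" answers are errors.

```python
# Illustrative assumptions, not measured values.
p_truly_willing = 0.01   # 1% of users genuinely want to share this visit
p_misclick      = 0.10   # 10% of unwilling users click "yes" anyway

yes_true  = p_truly_willing * 1.0             # willing users who say yes
yes_error = (1 - p_truly_willing) * p_misclick  # unwilling users who say yes

share_of_yeses_that_are_errors = yes_error / (yes_true + yes_error)
# With these assumptions, roughly 91% of "yes" answers are errors.
```

Under these (hypothetical) rates, the consent signal is dominated by mistakes, which is exactly the base-rate problem with prompting for rare, sensitive disclosures.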

We have to do a better job of looking at the user research to determine not just when to ask for consent (and how to do it, and when to skip it), but how to apply a "consent yes" setting to real-world data processing decisions. About 36% of users are more likely to engage with personalized ads but that probably doesn't mean that they want all their pharmacy shopping habits shared.

@jwrosewell
Copy link
Author

Movement for an Open Web (MOW) have now published analysis of the most recent guidance from UK competition and data protection bodies in relation to consent. This analysis has a bearing on the answer to this question and the role of consent in solutions developed by this group and more broadly across the W3C and other standards forum.

@alextcone
Copy link

alextcone commented Mar 14, 2022

@jwrosewell - can you make clear what you mean by the title of this Issue?

Why would notice and consent not be adequate?

Adequate for what?

  1. As a replacement technology for the removal of cross-site/app identifiers?
  2. As a mechanism on top of the status quo of cross-site/app identifiers that haven't all gone away yet?
  3. As approaches for user controls for new purpose limited "private ad technologies" incubated by PATCG?
  4. Something else?

As I read through this monster thread it appears there is a lot of miscommunication going on and I think that may be due in part to not finishing the sentence ("adequate for..."). Is it 1, 2, 3 or 4? If 4, please let us know adequate for what.

@jwrosewell
Copy link
Author

Question came from the last meeting. Minutes are here.

I've copied the minutes below and added clarification in square brackets.

James R:thanks! Regarding consensus about new APIs. Not clear what the problem is with existing APIs. Discussing some of the very largest companies, but not the majority of participants in this group, and those larger companies are engaging in more groups. At w3c, believe we should use existing functionality / lego bricks. Proposals from different browsers or gatekeepers tend to play to their functionality/advantage. Not clear why we need to make any change from existing APIs.

Ben: Facebook doesn’t operate a major web browser, but looking at large browser vendors to find the possibility of shipping an API across major browsers. Don’t want to waste time on proposals that won’t be shipped by major web browsers.

James: confused why notice and consent wouldn’t be adequate [because that is a position major web browsers seem to have taken and is a constraint that proposers seem to be working to]. For default on, not sure who controls the defaults [and how users consent to these defaults]. Why [default to a proposal that] use[s] a multi-party compute solution?

Martin: We do not have the time to fully answer that question.

Aram: Agreed, please open an issue in the proposal space or on the issue thread.

This question, and the interest shown in the resulting thread, suggest it might be important to answer it in light of all the information provided.

@alextcone
Copy link

So it seems the second half of the original Issue question of "Why would notice and consent not be adequate?" is:

  2. As a mechanism on top of the status quo of cross-site/app identifiers that haven't all gone away yet?

I base this interpretation of your indirect reply directly above on the following quote from it (emphasis mine):

James R:thanks! Regarding consensus about new APIs. Not clear what the problem is with existing APIs. Discussing some of the very largest companies, but not the majority of participants in this group, and those larger companies are engaging in more groups. At w3c, believe we should use existing functionality / lego bricks. Proposals from different browsers or gatekeepers tend to play to their functionality/advantage. Not clear why we need to make any change from existing APIs.

If my interpretation is correct then it seems your intention for this Issue was to propose this group focus on a notice and consent mechanism for existing APIs (I presume third party cookies?) and not whether any net new purpose limited APIs, like IPA for example, should be subject to best practices and legal requirements as far as user-level transparency and control mechanics go. Regarding the latter, I don't see anyone objecting to making new APIs subject to best practices and legal requirements as far as user-level transparency and control go.

So if the intent behind the Issue was notice and consent on top of existing web APIs, I believe you should say this plainly (without making people dig through minutes and make further inferences). I think a lot of the back and forth in this thread could have been avoided.
