
  DEF: 1
  Title: The Canonical Debate
  Author: @canonical-debate-lab
  Comments-Summary: No comments yet.
  Status: Active
  Type: Paper
  Created: 2018-06-11
  License: MIT
  Replaces: 0

The Canonical Debate

A proposal to fix the current state of online discourse through the promotion of fact-based reasoning, and the accumulation of human knowledge, brought to you by the Canonical Debate Lab, a community project of the Democracy Earth Foundation.

i. Abstract

We waste so much time repeating old arguments and running in opposite directions. The internet looked like it would ease the problem by giving us access to vast knowledge and giving each of us a voice. Instead, it seems to have made the situation worse by overwhelming us with disorganized, contradictory information. Social media amplifies bias and creates echo chambers.

The Canonical Debate Lab is building a resource to gather and organize all information for all sides of contentious issues so everyone can make better decisions with less effort. This tool reverses many of the natural incentives of social networks that have led to information bubbles, clickbait headlines, sensationalist journalism, and "fake news." We believe that with this tool, we can finally fulfill the promise that came with the rise of the Internet.

ii. Contents

This document describes our vision for online debates and decision making in several parts, first from a conceptual and philosophical level, followed by a detailed description of our proposed solution for this vision:

  • Problems: Provides an overview of the current state of debate and discourse online, and where we see a need for improvement.
  • Principles: The fundamental principles upon which our solution is built.
  • Solution: Our proposal for a new platform upon which intelligent decisions can be constructed collaboratively.
This document is hosted as a living document of the group's high-level vision. It is meant to evolve over time in concert with the ideas and goals of the organization. The project, and this document, are an open source effort, and we welcome all to participate.

1. Problems

Those of us who are old enough may remember a time when there was the optimistic belief that the Internet would bring about an era of enlightenment, peace and unprecedented productivity. The technological and logistical advances that came with this new era permitted access to information in a capacity never before experienced by humans, and modes of instantaneous and asynchronous communication between people that were science fiction only decades before. These structural changes were supposed to eradicate the problem of human ignorance, and bring people around the world together into higher levels of mutual understanding and cooperation.

So, what happened? More than 20 years after the dawn of the World Wide Web, Oxford Dictionaries declared "post-truth" its word of the year. That same year heralded a major shift in world politics towards nationalistic, isolationist trends, and it became common practice to categorize information which one considers displeasing as "fake news".

The state of debate and collective deliberation, both online and off, appears to have taken a major step backwards since the dawn of the Internet. We of the Canonical Debate Lab believe, however, that all is not lost. We believe that the problem is structural in nature, and therefore can be solved with a structural fix.

1.1 The Problem With Debate Online

The state of debate as it occurs online is particularly ineffectual and disorganized. None of the technologies in use today (with the exception of email and a few others) were available twenty years ago, and yet with this incredible growth in capability, we have failed to address fundamental issues with deliberation, and created many new ones.

1.1.1 Debates are scattered

As in the pre-Internet era, there is no one place to look for information regarding a single topic of debate that represents all sides of the issue and presents the information with sufficient depth and supporting research. It requires more work than the average person is willing to do to collect all the relevant information. Given the overhead involved in a task as small as verifying the facts behind even a single news article, no single person can ever hope to gain a complete understanding of our most complex and important issues.

1.1.2 Arguments are made in silos

Rather than responding point by point to each argument, online debate is generally executed in long form, where a single person or group offers a complete set of opinions on a single, or possibly multiple subjects. This format makes it nearly impossible for one side to completely address the points of the other side, and gives leeway to gloss over them with only a superficial treatment, or even ignore them completely. As such, it is difficult to fully resolve the case regarding even a single fact. It also enables many fallacies, such as the Strawman Fallacy and Cherry Picking.

1.1.3 Effort is wasted in repetition

Because debates are held in multiple places, with multiple overlapping, incomplete sets of information at varying levels of specificity, accuracy and truth, most of the effort in debate is wasted in repetition. There is currently no efficient way to "onboard" newcomers to the debate with the accumulated set of information that has been discussed. More knowledgeable participants must inevitably answer the same questions multiple times, and rebut arguments that have been made to them before, with no way to resolve a question once and for all.

The most obvious example of this is the unexpected rise in the number of people (or at least, people on the Internet) who believe that the Earth, one way or another, is flat. It appears that it is easier to find repositories for arguments in support of a single viewpoint than it is to find a place that everyone agrees represents an issue fairly and completely.

1.1.4 Models promote polarization

Studies have repeatedly shown that polarization in American politics is a growing concern, to the point that there has even been a dramatic increase in the number of people that would be upset if their child married someone of the opposing political party. The rise of the Internet, which offers greater options for human networking and access to information sources, was idealized as a revolution that would bring people of diverse ideas together. Trends indicate this has not been the case.

One theory as to why this is happening attributes it to the filter bubble effect, in which social media and news platforms algorithmically select content for users based on their likes and preferences, rather than on an analysis of what they need to know. Recent studies have shown that this effect may not be as pronounced as initially feared. However, the risk remains, and it has caused enough alarm for major platforms to consider changes to their algorithms. There is no doubt that these algorithms influence the news we see, in ways that must remain compatible with their business models.

Other studies show that even when people are exposed to opposing viewpoints on social media, the result is an increase, rather than a decrease, in the polarization of political viewpoints. This may be an indication of the quality (or lack thereof) of such interactions on these platforms.

1.1.5 Trolling is rewarded

Many Internet destinations which encourage interaction with the public do so in a format that favors "trolling" or "flame bait": flippant remarks designed to insult or enrage readers, or to get a cheap laugh, without substantially advancing the conversation. Comments sections in news and video sites, as well as most unmoderated forums, are notorious for this problem, resulting in a perverse circus-like competition of comments which often cross the line to hate speech. Many news sites have chosen to turn off public comments entirely, and it is a common joke that "the comments section may be hazardous to your health."

Microblog formats, like Twitter, do better in some respects, but suffer from similar problems, though in less concentrated doses. The brief format works best for sound bites, quick jokes, or off-the-cuff opinions, rather than deep and detailed reasoning. The fully public nature of the platform, coupled with the incentive to gain "followers" and "likes", rewards phrasing and entertainment value at least as much as the ideas that are being represented. Twitter icons like Ann Coulter have mastered the technique of retrying the same comment or joke multiple times until it has reached the right pitch of phrasing, humor and controversy to "go viral". This is a necessary skill that unfortunately not many scientists, field experts or deep thinkers possess.

Research into the psychology of online trolling has found a notably sadistic and narcissistic tendency among those who exhibit this behavior. Their intention is specifically to cause some form of emotional harm to their readers, who have no reasonable recourse other than to try to ignore such comments. A direct response only worsens the situation, giving the troll more opportunity to insult or enrage the reader. However, research also shows that people are not born trolls, but rather learn the behavior from experiencing it online. It is unfortunate, then, that most sites have neither the resources nor the proper structure to moderate open conversation. Trolls are thus given a much louder voice than those with more moderate tendencies, leading to the misperception that the majority of people in the world hold extreme and insulting opinions. One reaction to this phenomenon was the planning and execution in 2010 of the Rally to Restore Sanity and/or Fear, a tongue-in-cheek action to show the world that the majority of people are actually reasonable, do not hold radical opinions, or at least can discuss them in a civil manner.

1.1.6 Debate is tied to reputation

When a controversial issue is discussed online in an official capacity, rather than anonymously or pseudonymously, it means the arguments being made are tied to a person or entity with a reputation to establish and maintain. This creates a bias in the arguments that can be made. In order to expand human knowledge and make the best decisions, it is necessary to confront all possibilities, no matter how controversial they may at first seem. Unfortunately, public entities reasonably fear making unpopular arguments, or alienating their supporters or readers.

Exploratory arguments are especially dangerous, in that they may be misunderstood or taken (intentionally or unintentionally) out of context. Online, it can be very difficult to establish a safe place for "blue sky" thinking, or using the brainstorming practice of gathering all possible ideas free from any evaluation so that the best ideas may then be selected. It would be easy for casual readers to mistake fanciful out-of-the-box ideas as the actual opinions of the person expressing them.

While this may appear to be an insignificant issue, there is an exaggerated danger online of reputational damage that can be very difficult to undo. Many people have lost their jobs for expressing an unpopular opinion online. The cartoonist Scott Adams was labeled a misogynist (correctly or not) for posting thought experiments on his blog, prompting him to preface many of his subsequent posts with a disclaimer. More recently, the author and podcaster Sam Harris dared to discuss with an open mind the controversial research of Charles Murray and found himself placed on a list of racist leaders. There is a very real risk in tackling controversy online, with very little chance of recovering from reputational damage.

1.1.7 Experts are drowned out

One of the great achievements of the internet is the extent to which it has enabled just about anyone to join in on the global conversation, to publish writings and opinions with little to no cost and effort. The internet can be compared to the revolution created by the Gutenberg printing press in terms of its impacts on communications and society.

Unfortunately, quantity is not the same thing as quality. While it is much easier to find some amount of information on just about anything, there are few mechanisms in place for selecting based on quality and depth.

Especially in the case of real-time news, it is much easier to get a sense of general popular opinion (or at least, the popular opinion within your self-selected information bubble) than it is to find the most accurate or relevant information. When experts are interviewed, the content is superficial and more opinion-rich than information-rich. It is very likely that reader habits are driving this trend, but as news sources respond to accommodate, it creates a vicious cycle.

Real information is commonly buried in academic papers; primary sources are hard to find and may be hidden behind paywalls. The format and language used by experts is generally too dense and erudite to be useful to the average person.

1.1.8 Debates can be "hacked"

As difficult as it may be, amid the cacophony of superficial opinions, to find a vein of expert and verified information, the problem can easily be exacerbated by those who do not want the truth to be known. Through the use of bots, fake accounts and other guerrilla marketing techniques, the online debate can be swayed by a false sense of popularity for otherwise unfounded positions.

Such were the findings of the U.S. House Intelligence Committee, which found that a Russian agency was created specifically to sow distrust and division in the U.S. via online ads. Networks of fake accounts were also used to promote falsified information that would further increase this internal strife. While politicians may label as "fake news" anything with which they do not agree, there is a real threat of sophisticated attacks on our online information systems which are intended to promote false versions of the truth.

1.2 The Problem With Debate Offline

We are calling "offline" any debate or discussion that occurs in real time between a limited number of people, without significant aid from reference materials. These debates range from informal conversation with friends and family to formal debating events, in which carefully chosen participants have been given time to prepare their best case on a matter.

While every instance of this form of communication is important, there are many ways in which it is insufficient to advance our collective knowledge in an efficient and lasting manner. The main deficiencies are listed below, but are succinctly captured in the following quote from a discussion on Quora regarding Ben Shapiro:

He has a style of speech that just seems from another world, his ability to recall relevant data and shut down his opponents and leave them flustered is a sight to behold, as is his ability to articulate himself with razor sharp precision.

While impressive, the one adjective missing from this description is the word "constructive".

1.2.1 Not enough time

When a topic is important enough to debate with another person, there are usually multiple arguments on either side that must be considered. Each argument must be based on one or more facts that may themselves require justification, and so on. In-person debates, whether between an informal group of people, or formally presented and moderated in front of an audience, are too short to cover all the relevant points to the final level of detail.

Live debates can only be an approximation of the whole picture. Take, for example, the formal structure of Oxford-style debates:

  1. One member of the team in support of the "motion" makes their case, to the best of their ability, in favor of the topic under discussion, within a given time limit.
  2. A panelist from the side opposing the motion gets their chance to make the case against the statement within the time limit given.
  3. This continues for each pair of additional panelists.
  4. There is a limited question-and-response segment in which participants, moderators and/or observers can ask questions. Each side is allowed a chance to respond.
  5. The two sides then give their closing statements, again within the limited time frame.
While this is a very interesting and civilized form of discussion, the limited time frame requires each side to choose which facts and arguments they wish to present. Furthermore, the structure does not allow one side to stop the other if it disputes a specific claim. There is no process to "dive deep" into a single point so that its impact may be fully established.

This limitation exists in formal debate, but holds for informal discussion as well. In order to not distract from the "main point", we have to pick and choose which points are worth discussing at any given moment.

1.2.2 Not enough memory

Humans cannot possibly hold in their minds a complete picture of all the information related to a specific debate. Even experts on a subject would be expected to have to refer to documents, studies and other primary sources of information in order to hold a comprehensive discussion on a topic of debate. When a debate is held in real time, to compensate, points must often be made on the basis of “feelings”. At best, an aggregated opinion based on previous in-depth studies can be expressed, but it must be taken on faith by the other parties, or disregarded entirely. This is yet one more reason why the act of passing on important information from one person to the next can only be partially successful.

1.2.3 It's usually about "winning"

If discussion were always a cooperative activity in which parties joined forces (memory and intellect) to come to the best conclusion, then it is possible that humankind would be able to overcome the limitations cited above. Unfortunately, debates are often adversarial in nature, and too often focus on which side can “win” the debate, either by making the best show of things to an external audience, or by convincing their "opponents" that they are in the right (or at least overwhelming them with better arguments). Under such conditions, truth is only a secondary concern, and it can be tempting to employ logical fallacies in the name of making a point.

Unfortunately, as common as this posture is, truth may not be the only casualty here. The adversarial approach betrays a lack of empathy for other participants, and can lead to alienation between those of differing opinions, driving a wedge between those who would otherwise benefit from cooperation.

1.2.4 Preparation matters

Under ideal circumstances, a debate would have the best representation possible from all perspectives. Unfortunately, in addition to the limitations cited above of time and human memory, there is the problem of unevenness of knowledge and experience between individuals. A debate may be won or lost based on which side has studied the subject in more depth, and is more prepared to relate and defend their arguments. No real-time debate can be said to present a complete picture, and witnesses and participants can therefore be swayed by whichever side is better prepared to represent their half.

1.2.5 Eloquence matters

One final variable in the result of a debate that is immaterial to the actual substance of the subject is the way in which the arguments are made. Unfortunately, the "winner" of a debate may be decided by who is the better speaker, rather than by which side has the strongest case.

1.3 The Problem with Politics

Politics, or governance in general, is one area in which the results of a debate can have an enormous impact on the lives of the people involved. Unfortunately, political debates as they stand today, with the most vocal participants being delegates or representatives who have their own careers and agendas at stake, suffer their own set of acute deficiencies.

1.3.1 Risk aversion

Politics is a strange form of popularity contest, in which careers are based and world-changing decisions made on the basis of getting the most votes (for a person, not an issue). This means that bold or risky opinions are discouraged, as are blue-sky brainstorms and general speculation. As a result, there is a bias in the process towards more tepid and conservative proposals, even though this often leads to sub-par outcomes.

1.3.2 Uncertainty is not allowed

Politicians are discouraged from showing uncertainty. This reduces the level of honesty in the discussion, and closes the door on external discourse.

1.3.3 Changing opinions is a sign of weakness

It is considered a lack of leadership, intelligence and/or backbone to change one’s opinion in public as a politician. This almost completely eliminates the purpose of constructive discourse, which is to arrive at a consensus cooperatively through the exchange of arguments and evidence.

1.3.4 Ambiguity is rewarded

In order to appeal to a larger group, and to avoid risking the accusation of being a “waffler”, statements from politicians tend to be as abstract, unspecific and non-committal as possible, while still trying to strike an emotional chord. This kind of discourse avoids the difficult questions rather than facing them head-on.

1.3.5 The problems are too complex

In fact, many of the policy issues that will be affected by the outcome of political debates and public votes are much too complex for any single individual to master. It would be absurd to expect an entire population to grasp these issues in the depth required. This is known as the problem of Rational Ignorance: if it takes more effort than is worth it to understand something well enough (to vote on it), people won’t do it. Thus, political discussions tend to resort to metaphors and proxy issues (such as the issue of gay marriage) as a substitute for meatier topics.

1.3.6 Horse trading

There is a problem with the way legislation is designed and proposed. It is designed based on what the authors THINK will be passable, then goes through a round of horse-trading, in which items totally unrelated to the purpose of the legislation get slipped into the bill in order to "buy" the votes of specific representatives. In the end, decisions are made at least partially not on whether a bill is a good idea, but rather on whether it is a "passable" idea.

1.3.7 Gaming the system

An important part of politics is also figuring out the best way to get legislation passed by avoiding proper due diligence and the need to build consensus. Decisions may be rushed: either presented and hurried to a vote before all the delegates have had a chance to read the bill, or rushed so that there can be a vote before the session ends, before vacation, or before dynamics change (elections).

Then there are also the “at least we tried” bills: legislation proposed insincerely as a form of showing constituents that an attempt was made, without really reaching across the table to find some sort of middle ground.

1.3.8 Focus on the wrong things

Politics very often focuses on the character of the candidate, rather than on the issues they support. While to some degree this is justified, in that the character of a representative matters to the extent that voters must trust them to make the right decisions over their term, it would be difficult to argue that voters have enough information to really predict how their representative will think and behave. Furthermore, discussions about their "character" often focus on less relevant issues, such as their romantic involvements, the way they shake hands, or the way they screamed on camera in an unfortunate moment.

Also, legislation is often dressed up in such a way as to be easy to publicize and gain acceptance, while hiding the details of its impact. Legislation is intentionally named with backronyms that purport to represent its purpose, such as the USA PATRIOT Act and the JOBS Act, in the hopes that voters will read no further. Legislation may also be given nicknames meant to deride its purpose in order to create voter bias.

1.3.9 The real debate is hidden

Much of what really happens in the process of policy decisions happens behind closed doors, off the public record. Whether it’s in conversations with lobbyists, political whips, large donors, personal family influence, conversations with a pastor, or something else, much of what goes into making up the mind of a representative happens out of sight, through personal contact. Once a decision has been made, a conscious effort is made to find a way to frame it to the public in such a way that it can be "justified". While this is actually a natural part of the process (imagine if you were a representative - wouldn’t you want feedback from family and friends?), as a side effect, it actually disempowers the remainder of a representative’s constituents.

1.3.10 Lack of local information

Especially for local elections and referendums, which should be more familiar to voters, there is ironically a severe lack of relevant information. This is unfortunate, as the more local an issue is, the more likely it is to impact voters directly. Yet there is no consistent resource for local voter information the way there is for national issues.

1.4 The Human Condition

Many of our attempts at collaboration could be far more effective were it not for certain characteristics of the people participating in these activities. These problems may be difficult to solve directly with technology, but can certainly be improved through proper education, and some guidance from the tools used in collaboration.

1.4.1 People don't know how to argue

A large percentage of the world population does not know what makes for a reasonable argument. While logic and rhetoric are taught in many schools, it seems that it is not very common to employ these skills in practice. Both online and offline, it appears that discussions are rife with logical fallacies, and it is more common to see arguments of an adversarial rather than a constructive nature.

1.4.2 People assume the worst

In online discussions, even among friends, it seems that very simple statements can be read with the worst possible interpretation. This is in part due to tribalism, but it seems to have become a social norm to assume bad intentions (or stupidity), rather than an honest difference of opinion, as the source of disagreement. The problem is exacerbated when brief written statements are the medium, since there is no room for nuance, caveats, or detailed context. In order to make a point, it may be necessary to resort to extreme examples, without room to explain that the author understands they are exaggerating. Responses are often equally extreme. This phenomenon has even given rise to the infamous Godwin's Law, which is basically an attempt to make people recognize when the discussion has veered too far from good sense.

This problem could be solved if everyone were to adopt the Principle of Charity. Unfortunately, it would be hoping too much to expect this to happen any time soon.

1.4.3 People are lazy

Humans are generally too lazy to do the research necessary to verify whether any supposed fact they read online is actually true. Although there are several reliable fact-checking services available, very few people go to the effort of consulting them. Meanwhile, there is a natural human bias towards believing information that supports a pre-existing opinion, and doubting information that debunks it. This allows certain lies to spread very quickly online.

1.4.4 Cognitive Biases

Lastly, there is a long, growing, and well-documented list of errors and irrationalities in human thinking, collectively known as cognitive biases. These are perhaps the most subtle and most difficult issues to rectify in the course of deliberation. However, as they are studied, and people learn more about them, our chances of succeeding improve.

1.5 Current State of the Art

There have been in the past, and are at present, many attempts to resolve the problems with debate listed above, to one degree or another. As concern regarding the problem of "fake news" has grown following the 2016 United States election cycle, many solutions have been created to counter the trend and restore trust in journalism. Fact-checking sites have become an integral part of the online discussion, and existing platforms such as Facebook and Google are rethinking their models for recommending content. Projects and businesses have also been created with the aim of providing additional context to discussions via various means. In the area of argument mapping, there have been many efforts to create a rich and complete picture of important debates, some by way of careful curation by an internal team, while others permit external users to supply the arguments.

None of the approaches we have seen so far have been able to provide a complete solution for the problems we have listed above. In this section we outline many of the characteristics we believe are required, but are generally missing from the solutions we have seen.

1.5.1 Centralization of control

If the debate is "owned" by a single entity (in terms of who can edit the data), without complete and trustworthy transparency and history, the solution is subject to accusations of favoritism. As noble as the mission may be, it is always possible to cast aspersions on the organization that holds the keys to its operation. We have seen this in the case of WikiLeaks, which has been accused of international partisanship. We are also beginning to see this occur with fact-checking sites.

If any entity has the power to edit the history of a debate, prevent access to certain parties, or change votes, it doesn't matter how honest the organization actually is. It will be possible for anyone that disagrees with the results to cast doubts upon the reliability of the platform, and provide an excuse for their followers to ignore the information. It also provides fertile ground for such parties to create their own competing platforms, and thus end this attempt at canonicality.

1.5.2 Centralization of data

Even if the platform is completely transparent and verifiable, if the content of the debate is owned by a single party, there is a risk that the debate will one day disappear if that entity ceases to exist. This undermines the perceived value of participation in a number of ways: the more permanent a contribution seems, the more incentive one has to go through the effort of making it. Also, no matter how trusted the custodian of the information may be during its time of operation, there is a risk that a change of management, or a case of desperate insolvency, may lead to a corruption of the original intent.

1.5.3 Conflicts of interest

The problem that faces the majority of communication platforms online at present is the inherent conflict of interest between the platform host, which generally has a commercial, for-profit purpose, and its users. This is the crisis currently facing Facebook and other platforms, which generate revenue by selling user data for advertising and other purposes. These platforms are already extremely powerful due to the number of "signals" users are providing them, based on simple upvotes. A platform that deals in complex debate topics has an incredible opportunity to understand the thinking, beliefs and opinions of its users on just about any subject that motivates them enough to participate. There is an incredible danger of this platform being abused by its host unless all care is taken to ensure the good of the platform is completely aligned with the good of its users.

1.5.4 Fake accounts

Most implementations do not provide adequate protection against "Sybil" accounts (fake accounts). Any solution that includes voting (upvotes, scoring, etc.) is subject to "attacks" by networks of fake accounts attempting to sway the outcome of the debate. This has been a major enabler of propaganda campaigns, ranging from such mundane purposes as inflating the reputation of public figures and thought leaders to actual attempts by foreign governments to sway elections.

In order to make it easy for new users to sign up, most platforms opt for very low requirements in terms of verification of new users. None to date, outside of official government agencies and certain financial institutions with "Know Your Customer" (KYC) requirements, goes through any effort to certify that there is a one-to-one correspondence between user accounts and actual human beings.

When governance is a concern, and a one-vote-per-person strategy is the standard, this guarantee is necessary. Likewise, in the case of the canonical debate, where trust is an essential component, and where the goal is to attempt to honestly portray our collective knowledge and opinions, it is critical to ensure that each human has only one vote.

1.5.5 Not reusable

Traditional argument mapping focuses on the debate around the validity of a single statement. This is decided by way of a tree structure of supporting and attacking arguments, each of which themselves may be attacked or defended. The result, in a complete mapping, is a judgement that can be made regarding that single statement. No consideration is given regarding how that result could be used in future debates, nor in how the effectiveness of one of the underlying arguments could be reused in a new context (debate).

Most solutions seen so far follow this pattern. They do not treat the debates as canonical. That is, they treat each debate as a major topic to be discussed on its own, a separate, isolated entity, whose result cannot be used in future debates. Rather than creating a repository of proposed and vetted facts, the debates stand alone as isolated topics of discussion. This prevents knowledge from accumulating to the degree we believe necessary if we are to find any hope of a trajectory towards a better future.
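To make the tree structure described above concrete, here is a minimal, illustrative sketch in Python. The class names and fields are our own assumptions made for the sake of demonstration, not a description of any existing argument-mapping implementation:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Argument:
        """A claim used to support or attack another claim within one debate."""
        claim: "Claim"
        supports: bool  # True = supporting argument, False = attacking argument

    @dataclass
    class Claim:
        """A single statement whose validity is under debate."""
        statement: str
        arguments: List[Argument] = field(default_factory=list)

        def add_argument(self, claim: "Claim", supports: bool) -> None:
            self.arguments.append(Argument(claim=claim, supports=supports))

    # In traditional argument mapping, the tree is rooted at a single statement,
    # and the resulting judgement applies only to that root; nothing links the
    # sub-claims or their verdicts to other debates where they could be reused.
    root = Claim("Illicit drugs should be legalized")
    root.add_argument(Claim("Prohibition increases violent crime"), supports=True)

Under such an isolated model, the sub-claim "Prohibition increases violent crime", along with all the arguments made for and against it, would have to be recreated from scratch in every other debate that needs it, which is precisely the loss of accumulated knowledge described above.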

1.5.6 Not popular

There is a certain circularity behind the reasoning in this problem, and in fact it is difficult, if not impossible, to design a solution that can resolve this issue with any guarantee. That said, it's important that we strive to make the canonical debate well-known enough that it's considered the main, if not the only structured place to consult and participate in this type of debate.

No solution so far has managed to become THE place for debate. Until one succeeds in this respect, our knowledge will remain fragmented, with continued lack of efficiency, and no guarantee of providing a complete vision on complex and confusing topics. Our proposal for a canonical debate can offer strategies for accomplishing this goal, but there is no way of ensuring we will succeed.

1.5.7 Insufficient curation

In order for a canonical debate to remain canonical, in the sense of singular and unique, it is necessary to ensure that there is no duplication of information or arguments. There are a number of other challenges as well related to keeping the debates organized, readable and sensible that, at least for now, require expert human attention to maintain. Unfortunately, most current implementations lack the tools necessary to keep the debate organized, clean and productive.

1.5.8 Bad user experience

Debate is a VERY complex activity. It needs to be trivially easy to read and to use. Unfortunately, argument maps in general tend to be extremely boring for the average user to read and navigate, which defeats the purpose of educating the public. Furthermore, it can be rather cumbersome to understand the structure of a debate and provide a valuable contribution without exerting more effort than all but the most dedicated (or fanatical) users may be willing to provide.

In preliminary experiments with various projects created by our founding members, we have seen first-hand the difficulty of capturing the complexity of debates while at the same time creating an environment that doesn't discourage use. We applaud the site Kialo for making great strides in usability and in the visualization of argument maps. However, the site lacks many of the other characteristics listed here, and we believe that there are both more improvements to be made and more complexity that must be incorporated into a canonical solution.

1.5.9 Boring

Overlapping with, but distinct from, the problem of the user experience, it is apparent that deconstructed debates, removed from individuals and avoiding incendiary speech, can unfortunately be very dry. This makes it difficult for the platform to achieve mass adoption, and therefore canonicality. A successful solution must find a way to engage human emotion in the process, while at the same time maintaining its integrity.

1.5.10 No context

None of the solutions we have seen to date have taken "context" to be a critical element of any claims or arguments. Context is not used at all as an element of the debate, and can only colloquially be said to be "relating to the debate at hand". Unfortunately, the result tends to be that claims and arguments are made at a somewhat generic level, or are given as examples that are so concrete that they can only be considered anecdotal evidence at best.

This lack of context impedes the reusability of claims. It can result in them being used incorrectly, as statements may be given multiple interpretations, or used in a more generic or specific situation than was originally intended. It can also cause confusion in the debate itself if the participants use different interpretations of the claim being made.

1.5.11 No concept of generic vs. specific

It is very common to begin a debate with a very broad statement, such as "Illicit drugs should be legalized." When such a broad topic is debated in earnest, you will find that the discussion can range across one dimension of the debate (e.g. all drugs vs. a specific illegal substance, such as heroin) as well as another (in which geographical locations, and under which conditions).

It should become evident after some contact with generic debates that they are unresolvable without considering the issues within a specific context. The ideal outcome of such a discussion is not to find a definitive answer, but rather to uncover the criteria and conditions under which the claim would be true, and those in which it would be false. Unfortunately, solutions that we have analyzed so far do not provide any explicit support for navigating such a "debate space", and carrying conditional arguments or heuristics from one level of granularity to another.

1.5.12 Inadequate scoring

Scoring of claims and arguments is an important feature of this type of platform, as we will describe in more detail in later sections. Scoring can serve as a mechanism for sorting arguments in order to highlight those that are the most relevant and have the greatest impact on the issue at hand, while clearing irrelevant or "spammy" arguments out of the way. Scoring can also be a tool for understanding oneself and other people, when handled properly. However, no single scoring model could be considered impartial and accurate for all purposes. An ideal solution should provide more than one way to score and view a debate.

The systems we have analyzed up to now, with the exception of some computational frameworks by the people at ARG-tech, have taken only a cursory or superficial approach to scoring. Generally only one method of scoring is supported.
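Purely as an illustration of what one such scoring method might look like (the names and formula below are assumptions made for this sketch, not part of our design), arguments could be ranked by combining crowd-sourced relevance and strength ratings, with the caveat that an ideal platform would offer several such views side by side:

    from statistics import mean
    from typing import Dict, List

    def argument_score(relevance: List[float], strength: List[float]) -> float:
        """One possible composite score: mean relevance (0-1) times mean strength (0-1).

        Irrelevant or weak ("spammy") arguments score near zero and sort to the bottom.
        """
        if not relevance or not strength:
            return 0.0
        return mean(relevance) * mean(strength)

    def rank_arguments(arguments: Dict[str, Dict[str, List[float]]]) -> List[str]:
        """Return argument identifiers ordered from greatest to least impact under this model."""
        return sorted(
            arguments,
            key=lambda a: argument_score(arguments[a]["relevance"], arguments[a]["strength"]),
            reverse=True,
        )

    # Hypothetical usage: the peer-reviewed citation outranks the anecdote.
    ranking = rank_arguments({
        "cites peer-reviewed study": {"relevance": [0.9, 0.8], "strength": [0.9]},
        "personal anecdote": {"relevance": [0.4], "strength": [0.3]},
    })

The point of the sketch is not the particular formula, but that any single formula embeds assumptions; this is why a canonical platform should support multiple scoring views rather than privilege one.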

1.5.13 Limited tools for learning and contemplation

Debates are commonly treated as a group activity, in which the final, total outcome is what matters. However, we believe that it is at least as important to encourage each participant to actively engage and reflect on the content, regardless of the final outcome. While tools will allow a user to dive into each argument if they so desire, there is little additional support provided for learning and contemplation. We believe that, when given the proper tools and support, a place of debate can educate users on how to hold constructive debates, can be used to help participants understand their own views, and better understand the perspective of others involved in the debate.

2. Principles

We have seen many good attempts at resolving the problems listed above, and there are many promising projects under development right now. However, many of these solutions are incomplete or may fall into certain pitfalls that we have seen in the past. In order to ensure that the canonical debate solution does not make the same missteps, we have established the following fundamental principles that should guide all design decisions, organizational activities and even outreach and marketing campaigns. Ours is an open and community-based project, and in lieu of a single central figure to give the orders, we use these principles, shared beliefs and common goals as our guiding light.

2.1 Beliefs

We hold a number of beliefs in common which give us confidence that our current problems in deliberation can indeed be solved. We call them "beliefs" in the same vein as the term will be used further on in this paper: that is, these are claims that we make which have yet to be proved or disproved. We choose to accept these claims as fact until proven otherwise, and use this as inspiration to go forth with this effort.

2.1.1 The problems are structural

We believe that the majority of the problems we have listed in the first section can indeed be solved by making structural changes to the internet and to the way we interact. Just as Twitter, Google and Facebook (not to mention television and other innovations in communication) have all changed the way we exchange information, there is always the opportunity for a new innovation to cause a major shift.

One source of inspiration for all the members of the Canonical Debate Lab is the realization that each of us separately has come to the same conclusion that there is something missing, and our visions as to what that may be are remarkably alike. To borrow a phrase, we believe that the need for a Canonical Debate is self-evident, to the point that each of us has started creating the same invention independently and without prior communication. We are now ready to work together.

2.1.2 Debating can bring people together

Many of the problems we have discussed reflect upon the way in which people are becoming increasingly isolated into separate groups, each of which shares a common world view, in opposition to other groups. Social networks and biased television news channels increase this isolation. The number of opportunities for people of differing opinions to exchange information across these self-created borders is diminishing.

However, debate is one human activity that by its own nature brings people of differing beliefs together for this very purpose. While it is not always done constructively, there is at least some contact between worlds for those that choose to engage.

If the nature of debate can be changed, and if more people can be involved in debate, we believe this is a rare opportunity to reverse the trends of isolation.

2.1.3 Social norms can be changed

There has been a lot of focus of late on the importance of "social norms". Donald Trump, acting as an unconventional President of the United States, has brought about the realization that many of the institutions that U.S. citizens have taken for granted (such as full divestment from one's business interests, the way the President relates to journalists, the handling of presidential pardons, and various other processes and traditions) are not actually written into law, but are merely norms which previous presidents have chosen to follow. There has also been much discussion regarding his actions which "normalize" certain behaviors, such as using insulting language to make a point, and being loose with facts.

Millennials have been characterized as regarding gay marriage and non-binary gender identities as much more "normal" than previous generations did. Objective journalism was once considered the gold standard in reporting, but has given way to much more biased news. And people walking down the street, or hanging out with friends, while looking at their phones rather than at each other has also become commonplace.

The point is that what is considered normal and acceptable has changed over time, and will continue to change. We believe that while trolling and insulting people with whom one disagrees is currently a norm, this may be changed in favor of constructive discussion and respect for other beliefs. Social norms are very difficult to change intentionally, and are best led by example and repetition. We believe that the canonical debate platform provides ample opportunity to demonstrate what constructive debate can be, and can provide tools that discourage negative attacks online. We hope that over time, as people grow accustomed to the debate, it will affect social norms in a positive manner.

2.1.4 True disagreements are about beliefs, values and priorities

Many of our arguments on and offline can be boiled down to misunderstandings, misjudgements, an unwillingness to consider the opinions of others, or simple lack of information (and thus arguments based on assumptions). One of the main goals of the Canonical Debate is to ensure that everyone can have the proper information at their fingertips, organized and clearly stated in order to remove any ambiguity.

This leads to an interesting question: if we are able to assemble all human knowledge towards resolving a debate, what will be the result? Will there be anything left to discuss? Will everyone have no choice but to agree on all things?

While we hope that the Canonical Debate can resolve the petty disputes described above, we believe that there are true disagreements that cannot be solved simply by assembling all the facts together in one place. The most important discussions are those that pit one value against another. Nearly all decisions in politics, and all the important decisions in a person's life, involve making trade-offs and picking priorities. These are the most important discussions that can be had between people. We believe that the Canonical Debate can clear the fog of petty discussions and allow people to focus on those that really matter.

2.1.5 We can get closer to the truth

Donald Trump has shown little regard for accuracy when labeling things "fake news", sometimes using it to mean simply any "negative" coverage of him whatsoever. His Counselor Kellyanne Conway coined the dubious phrase "alternative facts". Over time we have seen that even what appear to be scientific facts will continue to be doubted and debated by those that choose to believe the contrary, no matter how strong the evidence.

We believe that this is a natural state of human existence. To some degree, it is healthy to have sceptics who are ever-vigilant and willing to doubt any claim. Such variety in thinking can lead to a larger surface of knowledge being explored overall.

However, despite recent efforts to make people even doubt the existence of truth, we do believe that such tactics can be overcome by a tool such as the Canonical Debate. A platform which encourages collaboration and in which, by definition, each contribution is constructive, can only help us to get closer to seeing the truth behind each statement. While not every statement or fact can be proved, the fact that each one is itself debatable brings us closer one step at a time.

2.2 Guiding Principles

We present here a list of fundamental principles that we believe are necessary to consider in any decision made on behalf of this project.

2.2.1 Knowledge should accumulate

As has been discussed, the human species so far has been extremely inefficient in terms of our ability to accumulate knowledge. For millennia, the only option available was through the spoken word and collective memory of people. Tools have evolved for improving this process, beginning with painting and other works of art and symbolism, then moving on to the written word in various formats, and so on. Tools of mass communication, and instantaneous communication across large geographic divides have also evolved, to the point that in the current era, we have more information than we can possibly digest. While the process is still imperfect, it is clear that tools can significantly improve our ability to accumulate knowledge.

There are currently nearly 8 billion people on this planet. There are so many topics that bear an importance to all of us, or at least to very large groups of us: climate change, human rights, health care, ethics, economics, and on and on. The effort to convey information regarding any of these topics can be enormous, and often imprecise, and takes the form of 8 billion people repeating the same arguments to one another over and over again in the hope that this repetition will generate some form of consensus or common understanding.

Imagine if each argument only had to be made once, for everyone. If you map these discussions into a single location, the repetition becomes clear. By eliminating this duplication, we can free up people to get quickly up to speed on the best information humans have to offer, and move on to add their own contributions.

There are 8 billion of us, and we are spending our time repeating what others already know. Instead, we must find ways to make this knowledge accumulate and grow.

2.2.2 Keep it constructive

In all aspects of the design of the solution, it is important to focus on making sure the discussion remains constructive. Personal attacks, intentional mistruths and activities of bad faith must all be discouraged by the platform itself in order to avoid falling into the traps we have described in the first section of this document.

2.2.3 Respect human nature

When it comes down to it, we are all humans, and we need to find a way to work together, rather than against one another. However, for better or worse, we understand that humans are imperfect, have desires, and do not always do what seems to be the most logical action. Rather than ignore this fact, or try to fight against what sometimes seems to be destructive natural impulses, we must recognize what it is to be human, and design our solutions to work with these traits.

In practice, this means many things:

  • We must avoid at all cost conflicts of interest that may put our objectives at risk
  • We must recognize the risks, and the value of human emotion
  • We must understand the common cognitive biases, teach people about them, and find ways to work within their constraints
  • We must create a solution that people will want to use, rather than have to be forced to use

2.2.4 Be trustworthy

If any doubt can be cast as to the sanctity of the information contained in the canonical debate, it will provide a wedge which those who refuse to accept it can use to drive people away from the platform. If this happens, it means that the debate in the end will not be canonical, and many of our objectives will not be achieved. Trustworthiness must be a primary concern in any decision undertaken on behalf of this project. Transparency, decentralization, and the elimination of conflicts of interest all play a major role in this effort.

2.2.5 Include everyone

There is a tendency to believe that only experts should be heard on important subjects. This bias stems in part from good intentions: if they are experts, then they should know better than anyone the truth behind the claims. Experts in various fields also tend to be better at expressing an idea and using rhetoric to form their arguments. However, this does not necessarily mean that the actual premise behind the arguments is any more or less sound. As an example, a common divide can be seen between academics who study an issue deeply, and the people that are actually affected by the issue. While an expert may be able to provide information on macro trends and statistics, only someone that is directly involved, regardless of educational level or time studying the situation, could convey the experience of being involved in the situation.

As mentioned earlier in this paper, no one person is capable of containing all the information relevant to a complex issue in their head. In order to maintain the first principle of making knowledge accumulate, we must be able to call upon every person to provide whatever relevant information they may have. As is common with brainstorming exercises, it is important to provide first an opportunity to generate as much information as possible before applying a filter in a second step to evaluate the quality of the information. If the filter is applied first, there is a great risk of preventing important information and opinions from entering the discussion.

There is another, equally important reason for including everyone. By doing so, we increase the buy-in by everyone to accept the debate itself. If there is a division between the "cans" and "cannots", this will create a sense of disenfranchisement in the exercise, ultimately leading to a sense of irrelevance. In a report from the Washington Post regarding examination of one believer in the "QAnon conspiracy theory", the author makes the following analysis: "What became clear from our conversation is that Burton’s belief in QAnon stems from his frustration with how authority over information and verification is allocated. He resents what he perceives to be the self-righteous assumption of expertise made by members of the media and academia." Privileging one group over another can separate us in more ways than one, and can stop a quest for shared knowledge in its tracks.

Finally, and perhaps most importantly, we believe that only by getting people involved in debates can they learn the difference between what makes a debate constructive, as opposed to the mainly destructive exchanges that have become the norm. People must be accepted into the discussion, and encouraged to contribute, no matter what the level of their education or argumentation literacy may be. This alone is enough to justify the effort, if we can see a general improvement in the ability of people to discuss controversial topics productively rather than with rancor.

2.2.6 Make it about learning, not winning

Too often, debates are about who wins, rather than whether or not we have arrived at the proper conclusion. In fact, in many cases, the right conclusion may be a matter of beliefs, values and priorities. When the debate focuses on choosing a winner, it can be counterproductive and alienating. When the debate focuses on what we can learn, and allows each person to take away their own conclusions, it relieves some of the pressure, and allows room for people to focus on reason rather than expressions of emotion.

2.2.7 We should inspire with emotion, but argue with reason

Many of the hot-headed arguments we see online follow the reverse of this principle: arguments are crafted to "jab" at our "opponents" in order to elicit the most outrage. This tit-for-tat atmosphere rewards arguments for their emotional content rather than for their accuracy, which does little to advance the accumulation of knowledge, or arriving at a collective consensus.

We must work to foster an environment that instead rewards participation based on the strength of arguments, in terms of their accuracy and their relevance. Debates should be evaluated based on their reasoning, rather than on their rhetorical devices.

However, it is equally important to recognize the value that emotion can bring to a debate. A purely logical debate, free of any emotional triggers, can alienate all but the most erudite of participants. This is perhaps one of the main factors that prevents academic papers from reaching mainstream audiences. Emotion can inspire participation, and drive engagement, which is critical to the success of creating a canonical debate. Thus, a key aspect to designing the solution is choosing the right times to inspire users with emotion, and when to filter it out in deference to level-headed reason.

2.3 Design Goals

Each of the objectives below is a reaction to the problems that we have identified in the first section of this document. We believe that each of them is a realistic goal, and many of the elements in the Solution section of this document relate to these goals. The success of our effort should be measured by the clarity with which we can draw a line from each problem statement above to one or more goals below, and finally to one or more elements of our solution. But, of course, none of this matters if we do not in the end produce a working platform that actually achieves all of these goals.

2.3.1 Create the canonical debate

In the most mundane sense of this goal, our objective is to design and implement the software necessary to provide the functionality that we consider essential to this project. This includes supporting all the objectives listed below.

However, as those of us who are founding members know, simply building the software is the easy part, and is not enough. In order for the debate to be truly canonical, we must be sure that everyone who has information to contribute to a debate, and everyone who would like information on the debate topic, all interact with the same base of knowledge. Whether this means making the Canonical Debate a household name, or creating one solid backbone upon which a multitude of services interact, is not important. What matters is that we succeed in finally making knowledge accumulate productively.

2.3.2 Make debates reusable

We consider debates, in their simplest form, to be facts that someone has proposed as being true. Once a person has learned of (or created out of whole cloth) such a fact, it is natural for them to make use of it in order to prove or disprove other claims that have been made. Reuse of "facts" already occurs naturally. What is missing is the complete context of supporting and detracting arguments that have been made regarding this supposed fact. Claims are already reusable. Our goal is to make the entire debate and the overall consensus regarding that claim part of the act of reuse.

2.3.3 Promote constructive discourse

The system must be designed in such a way that it discourages disparaging remarks between individuals and groups, while promoting a focus on working together to improve the common understanding regarding the topic under debate. Disagreements should be welcomed as long as they are based on reason, and as long as the participants are encouraged to be accepting of them.

2.3.4 Teach people about debates

A large part of the world population has never been exposed to the study of logic and argumentation. This fact alone has significant ramifications regarding the quality of democracy throughout the world. If the Canonical Debate is successful, it will be a place where people of all backgrounds will interact with controversial topics. This provides an excellent opportunity to expose people to the elements that turn a debate into a constructive activity.

Many design elements have been considered for creating learning opportunities for canonical debate users. However, we believe that simply through everyday contact with well-formed arguments and the criteria by which they are judged, we should see a significant improvement in people's general understanding of constructive argumentation.

2.3.5 Promote the best arguments

The integrity of debates can be undermined by any of a large number of false, weak or irrelevant arguments that distract from the most important points. Philosophers have studied for millennia the category of possible errors of logic and misconception collectively known as (formal and informal) fallacies.

A platform that offers access to everyone around the world is sure to see the widest variety possible of arguments for any given debate. The quality of these arguments will vary widely with respect to their strength. They will also vary in terms of the motives behind the participants making those arguments. For those who disagree with the debate in principle (or because they feel threatened by the progress of the debate), an obvious tactic would be to flood the debate with poor arguments meant to distract or overwhelm any potential viewers.

In order to make the platform resistant to this type of attack, and to improve usability in general, it must support one or more methods for sorting the arguments according to their strength and relevance. The most salient arguments should be seen first, easy to read and absorb at a quick glance. Weak or irrelevant arguments should only be seen by those users that would like to see a complete picture of everything that was discussed, regardless of quality.

2.3.6 Require context

Context, in the general sense of the term, is important for many reasons. One of the problems we mentioned previously is that of a debate with very poorly-defined parameters for the discussion. A debate may be conducted (or occur naturally) in relation to a very general statement. In such a case, the different sides of the debate may find that they spend a good part of their time arguing over, or using completely different definitions of, the same set of words. Likewise, a general statement may leave the specifics of the cases involved merely implied or subject to interpretation. For example, in a debate on public healthcare, one group may be focused on how it applies to Canada, while another is considering its implementation in Germany, or South Africa.

As the concept applies to evidence, or research papers, or scientific findings, there is the possibility of applying these empirical facts to situations that are completely different from the conditions of the study. Or, if a study only applies to a subset of the topic under discussion, it is important to understand under which conditions it is relevant.

When a claim is made on the platform, it is important to make sure that any ambiguities have been resolved. When this happens, it can be made clear exactly what the discussion is about. Furthermore, it makes it much easier to determine whether any evidence that is proposed is relevant to the object of the discussion, and if so, to what degree.

2.3.7 Reduce sources of bias

Earlier, we acknowledged that humans are susceptible to various forms of bias when it comes to choosing a side on which to stand. Many of these biases, such as the Sunk Cost Fallacy, Confirmation Bias and Belief Revision, are predetermined by users' previous activities. However, there are many biases that exert an irrational influence on readers of a debate and that can, with some effort, be avoided.

The following is a brief list of the types of bias that the platform should avoid promoting as a part of its design:

  • Ambiguity Effect - by crowd-sourcing arguments from the larger community, the platform should allow us to leave no stone unturned. In places where the current group of users realizes there is missing information, we have designed a mechanism for requesting that someone provide it.
  • Authority Bias - arguments and claims on the platform are presented to readers without any indication of the person who created them. This helps readers focus on the content, rather than on the figure that made the claim.
  • Automation Bias - one of our goals is to provide users with the possibility of choosing results based on their own beliefs rather than on what algorithms seem to suggest.
  • Availability Cascade - the platform is designed to remove any duplication of arguments, so that each may be judged on its own merits rather than on the number of "shares" it receives or the number of times it is posted.
  • Bandwagon Effect - we do provide a view of the popular vote regarding the validity of claims, which may cause this type of bias. However, the default perspective should be the user's own votes and beliefs, in order to reduce this effect.
  • Belief Bias - this is another delicate aspect to consider. We do not wish to alienate users from their personal beliefs, and therefore offer them the opportunity to maintain this sort of bias. However, by providing all the information available, and by providing multiple perspectives regarding the validity of each claim, we believe this will temper the influence of this effect.
  • Courtesy Bias - by making arguments anonymous (from the perspective of other readers), we remove the risk of people being perceived negatively based on their opinions. Furthermore, we understand that many wish only to get the best information possible regarding a debate, and we expect them to provide arguments for all sides, regardless of their own opinions.
  • False Consensus Effect - the platform provides the popular view on each debate as one of the basic perspectives. This should serve as a reality check for those that have an inaccurate view of popular opinion.
  • Framing Effect - the system must supply mechanisms to provide the best possible description for each claim and argument presented, equally for all sides of an argument.
  • Hostile Attribution Bias - this particular bias has almost single-handedly ruined online discussions. The platform reduces this effect by removing any attribution of arguments to their authors, eliminating any personal target of blame for the argument itself. Naturally, certain positions can be attributed to a group in general, so there is no way to remove the bias completely, but there is a limit to what the platform can do.
  • Identifiable Victim Effect - the platform supports the concept of grading the "relevance" (and importance) of an argument. This structure provides a mechanism to rate individual anecdotes with a much lower impact than evidence taken from a much larger sample.
  • Illusory Truth Effect - by providing simple, clear descriptions for ALL arguments and claims, it reduces the possibility of one appealing over another due to its simplicity. As with Availability Cascade, the removal of duplication will also limit this effect.
  • Less-is-better Effect - the platform will provide tools to keep larger sets of detailed arguments grouped under a single, simpler argument that can be judged as a whole (see Curation below).
  • Reactive Devaluation - again, by making claims and arguments anonymous, this effect can be diminished, but only insofar as an opinion is not easily attributable to a recognizable group.
  • Semmelweis Reflex - while individuals may still find it difficult to respond to new evidence as it is revealed, the platform can help to highlight when a change occurs to a debate, and promote careful reconsideration.
  • Subjective Validation - the platform intends to support tools which highlight inconsistencies within a user's own personal votes and beliefs. While this will not prevent users from maintaining certain beliefs against all possible evidence, it will be a powerful tool to allow users to reflect upon their default beliefs when they contradict other cases that may appear to them to be "common sense".
  • Memory Biases - this whole category deserves a special mention, as one of the fundamental problems with the accumulation of knowledge is human limitations on memory. The platform seeks to relieve many of these biases (with the exception of the Google Effect, unfortunately) by creating a single repository that will be easily accessible.
There are many other biases which may or may not play a part in a debate. We leave it up to the participants of the debate themselves to point out when they believe a bias is in effect, and to provide counterarguments that highlight it.

2.3.8 Make arguments anonymous

If you consider the previous objective, it should be clear that several cognitive biases arise from identifying an argument with the person making it. What is not completely covered there, but equally important, is the list of problems that come from a participant being aware that their statements and opinions may be visible to the public. We discuss this in more detail in the first section regarding the problems with other forms of debate.

On a more conceptual basis, the idea of a canonical debate implies that what is being discussed at any moment is the validity of a proposed fact or claim. While it requires a person to suggest the fact for debate, it does not make sense to associate that claim with the person who originally made the proposal. For example, in a debate regarding whether or not the earth orbits the sun, it should not make a difference if the original sponsor of the debate were Galileo or a professional chef in Istanbul. What matters is the proposal itself.

2.3.9 Prevent spam and fake accounts

Social networks and content providers on the Internet share a common problem: the battle to keep authentic exchanges of information from being drowned out or obscured by publications from fake or even automated sources. It is well known that such efforts have succeeded on just about every public platform in altering the focus of conversations and swaying opinions by exploiting biases such as the Bandwagon Effect and the Availability Cascade.

In a platform as crucial as the Canonical Debate, which aims to be the definitive source of information regarding any controversy, it would be very dangerous to follow the lax policies of other platforms regarding accounts. Especially in the features related to voting and scoring popular opinion, everything possible must be done to ensure a one-vote-per-person policy.

In the Solutions section below, we discuss opportunities and technologies that may be used to enforce a policy of a single account per human. This is in contrast to the posture taken by Google, Twitter, Facebook and Apple, which apply only lax security requirements for users to verify that they are not creating a second account, and which, as a consequence, suffer from influence attacks by humans and bots.

2.3.10 Be fully transparent

As a direct consequence of our principle of being completely trustworthy, it is clear that the debates must offer complete transparency to all users in terms of debate history (but not in terms of user information - see below). All users of the platform must have access to every change that was ever made to each debate so that they may trust that no one has manipulated the results. Furthermore, we must provide a mechanism to prove undeniably that this history was not altered in any way, so that no accusations may be directed at a debate or the platform with regards to the falsification of this history.
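As one illustrative sketch of such a mechanism (an assumption on our part, not a committed design), the change history of a debate could be stored as a hash chain, so that altering or removing any past entry invalidates every entry that follows it:

```typescript
import { createHash } from "crypto";

// Hypothetical shape of a single change to a debate element.
interface HistoryEntry {
  timestamp: string;   // ISO-8601 time of the change
  elementId: string;   // ID of the Claim or Argument that changed
  change: string;      // human-readable description of the change
  prevHash: string;    // hash of the previous entry ("" for the first)
  hash: string;        // hash of this entry, which covers prevHash
}

// Compute the hash of an entry from its contents and its predecessor's hash.
function entryHash(e: Omit<HistoryEntry, "hash">): string {
  return createHash("sha256")
    .update(`${e.timestamp}|${e.elementId}|${e.change}|${e.prevHash}`)
    .digest("hex");
}

// Verify that no entry in the history has been altered or removed.
function verifyHistory(history: HistoryEntry[]): boolean {
  return history.every((e, i) => {
    const expectedPrev = i === 0 ? "" : history[i - 1].hash;
    return e.prevHash === expectedPrev && e.hash === entryHash(e);
  });
}
```

Publishing the latest hash to a public ledger, or simply to many independent mirrors, would then let anyone confirm that the full history they downloaded is the same one everyone else sees.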

2.3.11 Be fully distributed

We recognized in the Problems section of this document the risks associated with a debate platform that is owned and controlled by a single entity. The only solution we know of to avoid these pitfalls is to create one with decentralized ownership and control. Only then can we ensure that the platform is free from accusations of bias, and that it can survive the Canonical Debate Lab and its individual members.

2.3.12 Eliminate conflicts of interest

Most critical of all, in order to promote trustworthiness and to prevent these goals and our principles from being violated or altered in the future, is to construct a platform that is, in every way possible, free from conflicts of interest. The greatest category of risks that we have identified stems from platforms designed with a profit motive by the owning entity. The solution in this case is fairly clear and simple: divest the platform of any possible commercial interests, and remove it from the control of any single entity which might wish to subvert these goals.

This is on a grand scale, but the elimination of conflicts of interest must be considered at every level of detail in the design. For example, the motivations of the users of the site must be considered. The platform must be designed such that a debate cannot be manipulated by individuals or groups wishing to control the outcome of a debate. The platform should encourage positive motives, such as learning and collaboration, and discourage negative ones, including personal gain (in terms of monetary value or reputation) in relation to a debate, attacks against other participants, and so on.

2.3.13 Keep it organized

The platform must provide tools, both automated and human-assisted, to keep the canonical debate organized. We recognize that content moderation (or, as we prefer, curation) will be a very necessary activity. While some moderation-style activities, such as improving the text of a claim or argument, can be performed by the users and debate participants themselves, others may require a group of specialists trained to perform reorganizations quickly and correctly. Some examples include removing duplicate claims, and reorganizing arguments such that they may be grouped under a more general statement.

This category is important enough that it has warranted an entire section dedicated to describing these activities, and how the platform can facilitate them.

2.3.14 Make it effortless

For the canonical debate to become truly canonical, it must be trivially easy for anyone to use and to navigate. This will not be an easy task, given the inherent complexity of debates. However, we understand the importance of making the platform usable, and will focus a great amount of effort on designing and testing the user experience.

For casual users, one critical objective is that anyone should be able to view a debate and grasp its most important arguments in under a minute. If they wish to delve deeper, this should be easy to do as well, but in order for the platform to serve as a universal reference, passing users must be able to get their answers right away.

Even the newest users should be able to figure out on their own how to participate in a debate, whether that be voting on the arguments of others or creating a brand new one of their own. More advanced users that understand the concepts underpinning the platform should be able to provide extra context for their claims and arguments, and easily find pre-existing claims to use as arguments in a separate debate.

The platform must find the right balance between complexity and simplicity, and allow the users to choose the experience they want.

2.3.15 Promote engagement

Experience from many previous attempts to create a solution like the canonical debate has made clear the necessity of finding ways to encourage more users to participate. While some people may dedicate their whole lives to a specific subject, and will be more than happy to contribute every piece of their knowledge to the central debate, it can be a tedious task to supply (or even read) all this information.

The platform must be designed with many different use cases in mind, from the casual reader looking for a quick primer to experts, activists, representatives of political organizations, and researchers. It must be useful for each of these stakeholders, but it must also elicit active engagement by people that might otherwise be only passive users. Debate can be a very emotional and provocative activity, and as we have stated in our principles, if this is done properly, it should be easy to motivate users to get involved.

2.3.16 Include everyone

As already mentioned several times, the Canonical Debate cannot be canonical unless it is accepted by society as the definitive place to go to learn about any debate. This means that it must be accessible, usable and provide value for everyone. It must provide features for every necessary stakeholder of the platform that can be identified.

A less obvious aspect of this objective is that the platform must also not alienate any of its stakeholders. The designers of the platform must identify situations that have the potential to drive users away from the system, and find solutions to ameliorate the problem.

One specific example of this problem is how to treat users that disagree intensely with the current results of a debate. This situation is an opportunity to inspire those users to engage, perhaps by contributing more arguments, but it is also a moment of risk in which the participant may choose to give up and take their arguments elsewhere. The platform must be designed to give users in such situations something they can do, so that they feel they at least have some opportunity to change minds (e.g. provide new information, vote their opinion), and can deal constructively with the notion that the majority of people disagree with them (e.g. provide tools to better understand the reasons behind the opinions of others, and permit them to see the debate from their perspective rather than from that of popular opinion).

2.3.17 Illuminate different perspectives

We realize that even with all the facts available, people will still come to different conclusions regarding a debate. In order to promote the best arguments, we must provide some mechanism for judging and weighting them. However, we do not believe that a single model can accurately and uncontroversially represent the prioritization for everyone. Many of these, such as the popular vote, may in fact alienate users that disagree with the results.

To solve this problem, the solution must support multiple methods for scoring and weighting arguments and claims. This support should ideally be extensible to permit new methods. It should then be possible for users to view the debate using any one of these different perspectives in order to better understand the arguments, their relation to each other, and to the user.

2.3.18 Bring people together

We truly believe that constructive debate is the most natural way for people to come into contact with the opinions of people that see things differently from themselves. Whereas other platforms provide tools and opportunities for their users to isolate themselves into like-minded groups, the Canonical Debate will have the opposite effect. It can even provide tools to notify users when an argument which disagrees with their opinions has been added to a debate with which the user has engaged.

The Canonical Debate will also provide tools, as part of the previous objective of illuminating perspectives, for the individual to explore the opinions of people with whom they disagree. With this tool, it will be possible to explore the world view of other individuals, and come to a better understanding of their philosophy. We believe that the more practice people have with trying to understand the reasoning of others, the more they will learn to treat them with respect.

2.3.19 Foster an ecosystem

While the Canonical Debate must remain pure in content, motivation and incentives, we understand that there are many ways to view and treat debates. There are already several projects that are considering using the Canonical Debate as the core data source for their unique view on user interactions, and others which may have unique ways of contributing new information to the Canonical Debate itself. The Canonical Debate must therefore provide an open API for projects to read from and contribute to the debates (with, of course, proper measures for security and protection against external attacks). The Canonical Debate must not be for profit, but we believe it can work hand-in-hand with other projects, community and commercial alike.

3. Solution

The solution that is evident to us is to build the missing piece of the internet, one that achieves the objectives we have laid out in this document. The problem is a structural one. But it is possible to build a fix, one that can change the way people interact. There are several examples of this. Consider how the world was before and after each of these "pieces" was added:

  • Google
  • Wikipedia
  • Text messaging
  • Twitter
  • Facebook
None of these pieces have fulfilled the "promise of the internet", but they have radically changed the way we interact, for better or worse. What, then, is needed to fulfill that promise?

We propose the creation of a Canonical Debate, a central location on the Internet into which all information related to any debatable topic may accumulate. The goal will be to build a repository of proposed and disputable facts which may be used as the foundation for further suppositions and assertions. If we are successful, the platform will become an indispensable tool for the Internet, bolstering journalism, dialog, decision-making and democracy for all.

3.1 Basic Elements

There is a general academic consensus regarding the structure of a debate, at least as it relates to argumentation in the form of a Dialectic, supported by an Argument Map.

A simple Argument Map, with supporting and attacking arguments

However, to a large degree this research focuses on debates within the context of actors performing the debate, providing their arguments and counter-arguments in response. While this foundational work profoundly informs the design of our solution, it is not generally adapted for the context we are proposing: that of a canonical debate, independent of specific actors.

We propose creating what could be called a Canonical Debate Map, a single reusable inter-connected graph of debates and arguments that exist independent of the context of a single conversation between a limited group of participants. What follows is a description of the principal elements of such a map, and how they relate to one another.

3.1.1 Claim

A Claim in the Canonical Debate is a proposition, a statement which is intended to be taken as fact, or as the truth. In many cases (in fact, in the majority of cases), agreement on the validity of the statement is not universal. These differences of opinion form the foundation for debate, and the purpose of the Canonical Debate is to provide the tools for participants to work together to come to the best possible vision regarding the validity of a Claim.

3.1.1.1 Examples of and Variations on Claims

Although deceptively simple in concept, in practice there is a lot to consider. The Canonical Debate can work with several types of Claim:

  • A proposed fact: The Earth is not flat.
  • A statement of subjective opinion: This reporter is a moron.
  • A proposed course of action: We should dedicate all our resources towards preventing meteor impacts.
In each of these examples, there is a slightly different set of possible outcomes to the debate:

  • Fact: true or false
  • Opinion: agree or disagree
  • Proposal: yes or no
What they all have in common is that their outcome is binary. In other words, a Claim is a Dialectic.

3.1.1.2 Canonicality

One of the problems stated in the first section of this white paper is that the quality of our debates is weakened by the fractured nature of current discussions. A common fact may be discussed ad nauseam online, and yet reach no conclusion as each new participant must start from a position of ignorance. A mixture of impatience and limited memory leads to each new participant receiving an abbreviated or erroneous version of the debate, upon which they must build their opinions and pass on to the next person.

In the Canonical Debate platform, Claims are created, and their debate can result in a large number of arguments and counter arguments. Even a simple example, such as the Claim "The Earth is not flat", can generate a large quantity of proofs and counter-proofs. But that's not the end of life for a Claim: the validity of one Claim can affect the debate over another. For example, the statement "We should sail west to reach the Indies" could hang quite critically on whether or not the Earth is a sphere. Likewise, a statement about how long the sun stays up in the summer at the North Pole would depend on this.

The debate platform must also deal with this problem: What happens when a Claim that has already been fully debated is used in a totally separate debate? Many systems ignore this issue, and leave it up to the users to restate their Claims in each new debate, resulting in a repetition of the previous debate, often skipping some arguments, and perhaps adding new ones that were missed the first time around.

We can solve this problem by making Claims canonical. For each Claim, there can and must be only one place to debate its validity. When a Claim is used in a debate with another Claim, it would be a waste to make a new copy. Instead, that debate refers to the Claim (and all its discussions) in its canonical place so that none of the original debate is lost. Furthermore, should new arguments or information ever arise relating to the Claim, it can be added to the canonical debate, and its impact will ripple across all other debates that have relied on this Claim in some form.

3.1.1.3 Attributes

Although not a final design, we believe that the following attributes would be helpful to define for every Claim (a minimal data sketch follows the list):

  • Unique ID - It is important to guarantee that each and every Claim can be referenced within and from outside of the system via a simple identifier not tied to the other changeable attributes.
  • Title - Each Claim should have a brief explanation, easy to read and quick to digest. Ideally, this "title" should be enough for the average reader to understand what is being claimed at a glance. A more advanced implementation under consideration would be to allow multiple versions of the Title, based on how users wish to view the Claim (see below).
  • Description - A Description would be a more in-depth explanation of the Claim being made, which might include clarifications on the words used in the Title (although disambiguation should generally be handled by Context), and any necessary background information to help readers understand why the Claim is being made. For the sake of expediency, Descriptions should be optional when creating a Claim.
  • References - Claims should support an optional set of external references (URLs, or "links") to supporting information. This will be especially useful in the case of Claims of evidence, such as the claim that there is a video which shows the occurrence of an event.
  • Related media - For the sake of improving the user experience, Claims should offer the opportunity to attach images, videos, logos or other media which can accompany the Claim for display within a web site or other user interface. For example, a Claim regarding a person or event might include a photo of said person or event for display along with the Title and other attributes.
  • Context - In any debate, it is critical to define exactly what is being claimed and argued. This process is known as disambiguation. In a canonical debate, this becomes even more important in order to avoid any misunderstandings, duplications of information, and more. The solution we propose for the Canonical Debate is the use of Context elements related to the Claim. See the relevant section below on this topic for more information.
  • Truth Score - As previously mentioned in the Problems section, it is important to provide a mechanism for evaluating claims and arguments within the system. In fact, we propose the implementation of multiple scoring mechanisms as described below. In the case of a Claim, the attribute to be measured is the question of its veracity, or agreeability.
  • Arguments For/Against the Claim - A Claim that stands on its own is not much of a debate. It could be considered a stated fact (or half-fact, depending on the scoring mechanism), but is devoid of any further consideration. Most Claims, especially in the Canonical Debate, will be supported and attacked by one or more Arguments for or against the Claim.
  • Arguments based on this Claim - Regarding canonicality, Claims are meant to be reused in further debates. This is done, as described below, by making an Argument based on the Claim. Claims should therefore maintain and provide a list of these Arguments which are affected by the truth of the Claim.
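To make the list above concrete, here is a minimal sketch of how a Claim record might be represented. The field names are illustrative only and do not reflect a final schema:

```typescript
// Illustrative sketch of a Claim record (not a final schema).
interface Claim {
  id: string;                  // Unique ID, independent of the other, changeable attributes
  title: string;               // Brief, readable statement of the Claim
  description?: string;        // Optional in-depth explanation
  references: string[];        // Optional external links (URLs) to supporting information
  media: string[];             // Optional related images, videos, logos, etc.
  contextIds: string[];        // IDs of Context elements that disambiguate the Claim
  truthScore: number;          // 0.0 - 1.0, per the chosen scoring model
  argumentsFor: string[];      // IDs of Arguments supporting this Claim
  argumentsAgainst: string[];  // IDs of Arguments attacking this Claim
  usedByArguments: string[];   // IDs of Arguments that use this Claim as their base
}
```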
3.1.1.4 Confidence/Belief

One of the most important facets of a Claim is what we call its Truth Score. It is not always the best term for the attribute, since some Claims aren't statements of fact, but rather statements with which one may simply agree or disagree. Nevertheless, it is important to provide some kind of measure of how correct, agreeable or accurate the claim is.

Although the Claim is a Dialectic, with a supposedly binary outcome, truths are never absolute, at least in the light of the human capacity to doubt and contradict just about anything. Therefore, the Truth Score is meant to be a sliding scale of confidence in the statement, from 0% to 100%, or from 0 to 1.0, or whichever scale seems appropriate for the user experience.

As will be described in the scoring section below, many scores and approaches are possible. There are two reasons for this. First, we must admit that no one algorithm will likely match the intuition of every human in every debate and circumstance. Scoring algorithms are only models that try to approximate through basic mathematical rules what humans may think about a debate. They will probably never be "correct" in every circumstance. What we can hope for is that these models may illuminate specific aspects of a debate.

This leads to the second point, which is that there are many ways to view the outcome of a debate. The principal perspectives that we see a need to support are the "objective" result (models which score independent of human opinion), the "popular" result (models based on aggregate user opinions), and the "individual" result (models which focus on the opinions of a single individual). We call the latter model the set of "beliefs" of an individual. This has some interesting implications and applications in the system, as will become apparent.
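As a purely illustrative sketch of two of these perspectives (the actual scoring models are discussed later and remain open questions), a "popular" score might simply average all user votes on a Claim, while an "individual" score reflects only one user's stated belief:

```typescript
// One user's belief about a Claim, from 0.0 (false) to 1.0 (true).
type Vote = { userId: string; claimId: string; belief: number };

// "Popular" perspective: the average of all votes on the Claim.
function popularScore(votes: Vote[], claimId: string): number | undefined {
  const relevant = votes.filter(v => v.claimId === claimId);
  if (relevant.length === 0) return undefined;
  return relevant.reduce((sum, v) => sum + v.belief, 0) / relevant.length;
}

// "Individual" perspective: a single user's own belief, if they have voted.
function individualScore(votes: Vote[], claimId: string, userId: string): number | undefined {
  return votes.find(v => v.claimId === claimId && v.userId === userId)?.belief;
}
```

An "objective" model, by contrast, would ignore votes entirely and derive a score from the structure of the arguments themselves.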

3.1.1.5 Multi-premise Claims

Some Claims can be very simple, singular statements of fact (or opinion). For example, the Claim that "I am a genius", true or not, is a fairly focused discussion that can be had. However, many Claims turn out to actually be a combination of two or more proposed facts, each of which may be debated separately. If we change the Claim to state that "I am a stable genius", the truth of the Claim now rests on two supposed facts: "I am stable" and "I am a genius".

We call this a Multi-premise Claim, a Claim which is composed of more than one Claim. For the purpose of The Canonical Debate, we may refer to a "simple" Claim, with only one proposed fact, as a premise in order to differentiate this type of Claim from a Multi-premise Claim.

In the example above, the two Claims are joined together in what is known as an "and" relationship. That is, in order for the Claim to be true, the first Claim must be true AND the second must be true. If either is false, the whole Claim falls apart ("You may be stable, but you ain't no genius."). It is also possible to create a Claim in which only one of the options needs to be true for the whole Claim to be true. This is known as an "or" relationship: "Either Fred or Velma or Daphne was present at the meeting."

For those familiar with Boolean logic, it may be apparent now that Multi-premise Claims are a mechanism for supporting the full breadth of possible relationships between Claims in a premise. It is also possible to support Claims containing an "exclusive or" relationship, such as "Either Fred or Velma or Daphne was present at the meeting, but only one of them went". A Multi-premise Claim can even contain other Multi-premise Claims, so it is possible to express (and debate) statements such as "Fred and either Velma or Daphne were present at the meeting" (where "Velma or Daphne" is one Multi-premise Claim used by the "Fred and (Velma or Daphne)" Multi-premise Claim).

A Multi-premise Claim behaves much like any other Claim, in that its truth may be debated, and it may be used as the basis for Arguments (see below). The difference lies in the way that it is debated (each Claim may separately be attacked and defended), and the way that it is scored (again, see below).
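As one possible sketch of how a Multi-premise Claim might be scored (an illustration only, assuming the sliding 0-to-1 Truth Scores described above), "and" relationships could take the minimum of their premises' scores and "or" relationships the maximum, in the style of fuzzy logic:

```typescript
// A premise is either a simple Claim's truth score or a nested multi-premise group.
type Premise = number | MultiPremise;
interface MultiPremise { op: "and" | "or"; premises: Premise[] }

// Illustrative fuzzy-logic style evaluation: AND = min, OR = max.
function truthScore(p: Premise): number {
  if (typeof p === "number") return p;
  const scores = p.premises.map(truthScore);
  return p.op === "and" ? Math.min(...scores) : Math.max(...scores);
}

// "Fred and (Velma or Daphne) were present", with hypothetical premise scores:
const example: MultiPremise = {
  op: "and",
  premises: [0.9, { op: "or", premises: [0.4, 0.7] }],
};
console.log(truthScore(example)); // 0.7
```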

3.1.1.6 Changing Perspectives

One of our stated goals is to "Illuminate different perspectives". This requires tools that permit actually choosing a perspective from which to view a debate. With regards to a Claim, this could mean a number of user-facing changes, which would require a way to capture and maintain this information.

Variations in Title

A Claim is a Dialectic, and yet it must be stated in the form of some type of affirmation. This creates a conflict, since some users may be offended by the assertion chosen by the first person that proposed the Claim. For example, a debate regarding the existence of God may be stated equally as God exists, or as God does not exist. While both outcomes are equally likely (at least at the outset), it would be arbitrary to allow the submitter to choose the Title for all. Furthermore, if one outcome is clearly more favored than the other (such as in the case of The Sun revolves around the Earth leaning towards a convincing NO), it would be awkward to always require viewing the debate according to the initial Title.

We therefore propose supporting multiple versions of the Title for a Claim. Here are some of the variations we are considering:

  • Positive and negative assertions - Since each Claim has two sides, it makes sense to be able to state the Claim from either perspective. The Claim should therefore support two forms of Title: a positive statement (e.g. The Earth is flat) and a negative version (The Earth is not flat). Note that when viewing a Claim as its inverse (the negative version of the claim), the meaning of Pro and Con must be inverted as well (Arguments supporting a positive assertion would be shown as Arguments attacking a negative assertion, and vice-versa).
  • Inquisitive form - A more neutral approach to the Title of a Claim would be to state it as a question: Does God exist?. Again, it would be useful to support storing an inquisitive version of the Claim Title for use in linking to and viewing the debate.
  • Variations in emotional "temperature" - It might be interesting to view other versions of a Claim Title according to the emotional impact that the title might contain, independent of the actual content of the debate itself. One of the main problems with a typical Argument Map, especially one that seeks to separate the arguments from the individuals making the claims, is that the resulting map can be logical to the point of alienating the common reader. Following the principle that "We should inspire with emotion, but argue with reason", it might be "inspiring" to see a debate that says "Only an idiot would believe that the Earth is flat", at least at first. Certainly users would find it entertaining to play with the "temperature level" of debates as they explore the content of the Canonical Debate. Each of the types above (positive, negative and inquisitive) would maintain its own collection of variations on the title.
All of these potential variations imply that a single Title for the Claim might not be sufficient to cover every potential use. It would also be naive to assume that the first person to submit a Title would always choose the best way to phrase the proposition in a concise and clear manner on the first try. This implies that Titles should not only support multiple perspectives, but also support a process for proposing alternative versions of the same Title, and electing the most effective one along the same lines as the goal to Promote the best arguments.

One approach would be to allow users to submit alternative Titles for each type above, and allow users to vote on the best option. However, this is only one of several approaches currently being examined.
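To make the idea concrete, here is a minimal sketch of how these Title variations might be stored under the voting approach just described. The field names are illustrative, and the election mechanism itself remains undecided:

```typescript
// Illustrative sketch of Title variations for a single Claim.
interface TitleVariant {
  text: string;         // e.g. "The Earth is not flat"
  votes: number;        // votes received in the election of the best phrasing
  temperature?: number; // optional emotional "temperature", e.g. 0 (neutral) to 1 (heated)
}

interface ClaimTitles {
  positive: TitleVariant[];     // affirmative phrasings
  negative: TitleVariant[];     // negated phrasings (Pro and Con are inverted when shown)
  inquisitive: TitleVariant[];  // question form, e.g. "Is the Earth flat?"
}
```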

3.1.2 Argument

When a Claim is used in a debate to prove or disprove another Claim, we call this an Argument. This is a very important distinction from the non-canonical approach normally adopted for Argument Maps. In a standard Argument Map, arguments are their own standalone, separate entities. In the Canonical Debate, there is a unique Claim that may be used many times as an Argument in various debates. In this way, we can ensure that the accumulated knowledge (description, scores, and Arguments) related to that Claim is carried over into any debate in which the Claim is used.

Arguments are generally made either in favor of a Claim or against it (although sometimes it can go either way - see below). There are a few different, but synonymous ways to describe which side of the debate an Argument supports:

  • Argument In Favor/Argument Against
  • Argument For/Argument Against
  • Pro Argument/Con Argument
  • Supporting Argument/Attacking Argument (or Opposing Argument)
Although an Argument may be confused with the Claim it represents, it is not the same thing. It may have the same title as the Claim on which it's based, but it may also have a title that's more suited to the debate (e.g. Claim: John Doe hates Jane Blow; Argument: I hate your mother).

3.1.2.1 Relevance

While the veracity of the Claim is important on its own, what is important for an Argument is its relevance. That is, it's important to judge not only whether or not the Claim is true, but whether or not using it in the debate is meaningful.

The most common example of an Argument that is by nature irrelevant is the well-known ad hominem fallacy, in which the arguer attacks the nature of their opponent rather than the substance of their arguments. For example, the statement You have something green stuck between your teeth may be absolutely true, but that would have no consequence in a debate against the Claim The Earth is flat.

This measurement is captured as a score in the same way that we capture the Truth Score for a Claim. So, while Claims have a Truth Score, Arguments have a Relevance Score.

A subtle distinction could be made between the relevance of an Argument (is it related to the topic at hand?) and its importance (how much should we care?). After much discussion, it was decided that the distinction between these two characteristics is unimportant for the purpose of this platform, and so the two are grouped together into the single attribute called Relevance.

3.1.2.2 Argument for an Argument

As we like to say, "everything's debatable". That includes the Relevance of an Argument. Consider arguments being made in a criminal case of a bank robbery, in which evidence is being presented against the defendant:

The defendant has a black mask exactly like the mask that was worn by the person seen robbing the bank.

This is an Argument in favor of the Claim that "The defendant is the one who robbed the bank." There are two ways that one could attempt to counter this Argument:

  1. Argue that they do not, in fact, have a mask identical to that of the culprit: "The defendant's mask is red, not black."
  2. Argue that the argument being made is weak or unimportant: "Everyone has a mask like that."
The first example is actually an argument about the truth behind the Argument. That is, it is an Argument against the Truth of the Claim on which the Argument was based. The second approach is actually an Argument against the Argument itself. It is attacking the Relevance of the Argument. It doesn't try to refute the truth of the statement ("Yes, they do have a mask like that, but..."), but rather tries to minimize its importance.

When circumstantial evidence is presented, there is always a portion of the discussion that will necessarily focus on exactly how relevant (or "damning") the evidence is. There will also probably be some discussion as to the truth of the argument. However, when direct evidence is provided (e.g. an eyewitness, or video footage of a crime), there is little room for debate regarding its importance. The only counterarguments available would be related to the validity of the evidence ("This witness is lying!", or "That's not my client in the video."). Arguments about Relevance are not limited to criminal cases, of course. One could argue that a scientific study that was presented as proof is not totally relevant because it considers a different geographic region than the one under discussion, for example. Basically, any argument that could begin with "Yes, that is true, but..." is probably an Argument for or against an Argument.

Understanding the difference between Relevance Arguments and Claim Arguments is a point that may be confusing at first to the average user, so it will be very important to make the distinction abundantly clear via the user interface. We believe that in doing so, it will help those that become familiar with the platform learn about the structure of arguments, which is, after all, one of our objectives.

3.1.2.3 Relevance vs. Truth: Strength

While an Argument is concerned with its Relevance, its Claim is concerned with its Truth. Both must somehow contribute to the total effectiveness of the Argument within the debate. In the previous section, we mentioned how an Argument could be attacked by focusing on either one of these aspects. Indeed, an Argument is meaningless if it is based on a complete lie, or if it is irrelevant to the target of the debate.

We call the combination of these two attributes, the Relevance of the Argument and the Truth of its underlying Claim, its Strength. A strong Argument is one that is true and relevant. A weak Argument is one that fails in at least one of these regards. We will discuss exactly how this works in more detail in the Scoring section below.
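One simple model consistent with this definition (purely illustrative; the actual formulas are the subject of the Scoring section) is to treat Strength as the product of the Argument's Relevance and its Base Claim's Truth Score, so that an Argument that fails on either dimension contributes little:

```typescript
// Illustrative only: Strength as the product of Relevance and the base Claim's Truth Score.
// Both inputs are on a 0.0 - 1.0 scale, as is the result.
function strength(relevance: number, baseClaimTruth: number): number {
  return relevance * baseClaimTruth;
}

strength(0.9, 0.1); // a highly relevant argument built on a dubious claim is weak (0.09)
strength(0.1, 0.9); // a true but irrelevant argument is equally weak (0.09)
```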

3.1.2.4 Arguments That Are Pro AND Con

Unfortunately, argumentation can at times get complicated. There are many cases where an argument can be considered to be supporting or attacking a Claim based on the perspective of the viewer. This is especially common in qualitative discussions (e.g. "Barack Obama was a good president") and decisions regarding a course of action (e.g. "The U.S. should build a wall on the border with Mexico"). In such cases, the arguments made may be as controversial as the original Claim itself.

Consider the following two Arguments, related to the examples above:

Barack Obama authorized the use of drone strikes on terror targets in Pakistan.

Building a wall would send a message that the United States is no longer a friendly destination for those south of the border.

In both cases, it depends on your opinion regarding these subjects whether or not the argument supports or undermines the main point. The Canonical Debate platform must be built in such a way that it allows for this situation.

There are two approaches that are currently under consideration:

  1. Repeat the same Argument on each side, with one version supporting the Claim, and the other version attacking it
  2. Display the Argument on the pro side or the con side depending on the aggregate result of the underlying arguments
The first approach makes explicit the fact that the Argument is a double-edged sword, by essentially displaying each edge as a separate entity in the debate. However, it may result in the same sub-Arguments being repeated on both sides. The second approach is probably more intuitive (why see the same argument twice?), but then the Scoring system must be designed such that the same Argument may show up on one side or the other depending on the balance between scores on each side. Also, under the second option, if both sides are equally balanced, it isn't clear on which side to show the Argument.

This is one area still under consideration. As we experiment with the different approaches, we will update this document according to what we have learned.

3.1.2.5 Attributes

Arguments have many of the same attributes as Claims, at least in terms of displaying them to debate participants. In fact, the average person may not understand the concept of separating an Argument from its Claim; when one writes out the main points in a debate, they generally provide ONLY the Argument (with its Context-relative title), and the Claim itself remains implicit.

Nevertheless, Arguments do have a few differences (a minimal data sketch follows the lists below):

  • Unique ID - An Argument will have an ID separate from its Claim.
  • Title - The Argument has a title which may or may not be the same as its base Claim. This provides an opportunity to make the debate more readable to people, by substituting pronouns (e.g. "he", "she", "they") for proper nouns in the Claim Title (e.g. "The Space Shuttle Columbia", "armadillo"), and so on.
  • Description - A Description for an Argument would entail providing some further background on the Argument itself, perhaps the reason why it was chosen in the debate at hand. It is an optional attribute.
  • Relevance Score - As noted, Arguments have a Relevance Score rather than a Truth Score. The Strength Score, however, would not be considered an attribute, as it is derived from the combination of the Relevance Score and the base Claim's Truth Score.
  • Arguments For/Against the Argument - The Relevance Score of the Argument can be supported or undermined by other Arguments.
  • Base Claim - Every Argument must be based on a Claim. We call this Claim its Base Claim, and the Truth Score of that Claim has a major impact on its Strength.
  • Target - The Claim or Argument which this Argument supports or attacks. For example, if one makes an Argument to show that someone else's point is irrelevant, the Target would be the other, irrelevant, Argument.
In terms of the user interface, the Argument could also be displayed with some of the properties of its Base Claim, such as:

  • Relevant links - The links from its Base Claim should be relevant for the Argument as well.
  • Media - The interface could show the images and other media from the Base Claim along with the Argument.
  • Context - An Argument would not have its own Context. Rather, its Context is that of its Base Claim, being used within the Context of its Target Claim or Argument. There are many possibilities implied by this relationship of differing Contexts (that of the base and the target). We will discuss this more in the section on Context and in our discussions of opportunities for machine learning features below.
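Pulling the attributes above together, a minimal sketch of an Argument record might look like the following. The field names are illustrative only and do not represent a final schema:

```typescript
// Illustrative sketch of an Argument record (not a final schema).
interface Argument {
  id: string;                  // Unique ID, separate from its Base Claim
  title?: string;              // Optional title; may fall back to the Base Claim's Title
  description?: string;        // Optional background on why this Argument was made
  relevanceScore: number;      // 0.0 - 1.0; the Strength Score is derived, not stored
  pro: boolean;                // true if it supports its Target, false if it attacks it
  baseClaimId: string;         // the Claim this Argument is based on
  targetId: string;            // the Claim or Argument this Argument supports or attacks
  argumentsFor: string[];      // IDs of Arguments supporting its Relevance
  argumentsAgainst: string[];  // IDs of Arguments attacking its Relevance
}
```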
3.1.2.6 The Debate Graph

An Argument Map is generally conceived as a tree, with the focus of the debate, the proposition, at the top, and then the arguments and counter-arguments cascading below. This structure is oversimplified for a Canonical Debate. In a tree-like structure it must be assumed that each "branch" is a separate, independent argument (you wouldn't expect the same argument to appear in two places in the same debate). In the Canonical Debate, we know that while Arguments are unique, Claims may be reused many times, creating more of a network pattern of Arguments connecting Claims and other Arguments.

The proper name for a network-like structure of entities interconnected in this manner is a graph, much like the Facebook Friend Graph, although probably much sparser. Although the distinction seems technical, there is one very interesting implication from this. While the tree-like Argument Maps stand on their own, the Canonical Debate Graph is something that can be navigated, much like a knowledge graph (and that is no coincidence).

If and when the Canonical Debate has been fully realized, it should be possible to navigate from one Claim to another via their interconnecting Arguments. This permits users to visualize and understand the impacts that one "fact" (e.g. "The sails of ships can be seen dropping below the horizon.") may have on another (e.g. "The Earth is not flat."). We will explore this topic more in our discussion of beliefs.
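As a sketch of what navigating such a graph could look like (an illustration only, reusing the kind of records sketched above), one could walk from a Claim to every other Claim that depends on it by following the Arguments that use it as their Base Claim:

```typescript
// Minimal local types for the sketch (see the fuller Claim/Argument sketches above).
interface ClaimNode { id: string; usedByArguments: string[] }
interface ArgumentEdge { id: string; baseClaimId: string; targetId: string }

// Find every Claim reachable from a starting Claim by following the Arguments
// that use it (directly or transitively) as their Base Claim up to their Targets.
function claimsAffectedBy(
  startId: string,
  claims: Map<string, ClaimNode>,
  argumentsById: Map<string, ArgumentEdge>,
  visited: Set<string> = new Set()
): Set<string> {
  visited.add(startId);
  const claim = claims.get(startId);
  if (!claim) return visited;
  for (const argId of claim.usedByArguments) {
    const arg = argumentsById.get(argId);
    // A Target may also be an Argument (a relevance attack); here we follow only Claim targets.
    if (arg && claims.has(arg.targetId) && !visited.has(arg.targetId)) {
      claimsAffectedBy(arg.targetId, claims, argumentsById, visited);
    }
  }
  return visited;
}
```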

3.1.2.7 Common Argument Types

Although the platform will probably not provide explicit support for the Argument types below, it is interesting to note them here and discuss how they will be represented in the system.

3.1.2.7.1 Evidence

Supporting evidence is essentially an objective fact presented as some sort of "proof" for a Claim. Common types of evidence include:

  • A photo, video or other documentation of a supposed event
  • Formal documentation of a fact (e.g. a birth certificate)
  • Scientific studies
  • Eye-witness reports
Unlike other forms of argumentation, evidence has a very strong history of formal use in the practice of law. This includes the importance of proving the provenance and authenticity of the evidence, as well as its reliability.

As a side note, the authenticity of digital media can be hard to prove. As technology improves the capacity to falsify speech and even video, this can be a critical threat to honest debate. Fortunately, cryptography and publicly shared ledgers (blockchain) may provide a countermeasure to this problem: if authentic recordings can be registered automatically on a blockchain the moment they are recorded, it becomes practically impossible to alter the recordings after the fact without detection. Technology is always part of a cat-and-mouse game, so it is important to remain vigilant regarding the presentation of evidence.
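As a small illustration of the registration step (the ledger itself is out of scope here, and this is an assumption rather than a committed design), a recording device could compute a cryptographic fingerprint of each file at capture time and publish only that fingerprint; any later edit to the file would produce a different fingerprint and so be detectable:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Compute a SHA-256 fingerprint of a media file. Publishing this digest at capture
// time (for example on a public ledger) makes later alterations detectable, since
// any change to the file produces a different digest.
function mediaFingerprint(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}
```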

Current plans for the Canonical Debate do not include creating a type specifically for the treatment of evidence. It should be noted that evidence isn't really an Argument, but rather a Claim (e.g. "This video shows XYZ") that, if true, can be used as an Argument in support of another Claim. The use of Multi-premise Claims, along with the References attribute, should go some way towards providing a mechanism for presenting evidence in canonical debates.

3.1.2.7.2 Logical and Informal Fallacies

The first thing one learns when studying rhetoric is the notion of a logical fallacy. Some fallacies are formal, and so, if an Argument is shown to be an example of one, it can be shown to be completely ineffective (i.e. a Relevance Score of 0). Others are informal, and may in fact maintain some validity even in the face of an accusation of being a fallacy.

Take, for example, an Appeal to Authority:

Warren Buffet, the most successful investor of all time, says that it would be unwise to invest in cryptocurrencies.

An Argument that could be used against the Strength of this Argument would be:

This is an Appeal to Authority, and doesn't help explain why one shouldn't invest.

In the Canonical Debate, every Argument must have an underlying Claim, and in this case that Claim might be stated as:

Using an Appeal to Authority as an argument does not help support a debate logically

Once this Claim has been presented (and debated - and this topic is, in fact, contentious) in the system, it is ready to be reused for any other debate as needed. And so it should be for the entire list of both formal and informal fallacies.

Deep consideration has been given to making fallacies a special category or feature within the Canonical Debate. However, it has become clear that not only is the basic structure of Claims and Arguments sufficient; it also permits active debate on use of the fallacies. Although some fallacies are fairly black and white (in the sense that if the argument is indeed a fallacy of that type, then the Argument must be completely irrelevant), some fallacies are something of a sliding scale in terms of how effective they are (for example, the Slippery Slope fallacy). The point is to provide room for debate, and to penalize to the appropriate degree.

3.1.2.7.3 Basic Values and Principles

As we stated in the introductory sections, there is a type of argument that can be said to be based on principle. These are often the type of arguments that are invoked in political discussions, and other debates where emotion is involved. Particularly in the case of ambivalent Arguments, there is a good chance that the reasoning behind each side of the argument is based on some kind of principle, or prioritization of values.

For example, in the case of the example argument regarding drone strikes, the side in favor of them might argue something like:

Using drones instead of foot soldiers reduces U.S. military casualties.

Meanwhile, the other side might argue that:

Using drones instead of foot soldiers increases the number of civilian casualties.

Digging deeper into these arguments, there are some implicit value judgements that are being weighed in the balance here. One way to describe them would be:

U.S. military casualties are bad.

and:

Civilian casualties are bad.

Both statements are value judgements. Some may agree with one of them (not everyone cares about American soldiers, and others might not have a problem with civilian deaths), while others may actually agree to both statements, to varying degrees. The outcome of such an Argument may depend on which way the viewer chooses to prioritize each value statement.

In the structure we have defined for the Canonical Debate, it makes sense to define each value and principle as a canonical Claim all its own. This would to some degree provide an opportunity to capture, in argument form, the reasoning behind each concept. It would also provide the mechanism for applying each of them as an Argument in any debate, including the opportunity to debate the relevance of such an application. This structure fulfills, and puts to the test, our belief that true disagreements are about beliefs, values and priorities.

As a final consideration, such basic value-based Claims provide an excellent opportunity to examine contradictions in personal beliefs (e.g. "Why is it that you care so much about security in this one debate, but not at all in another?"). As part of our effort to teach people about themselves and others, it would be a valuable tool to highlight apparent contradictions. It would also help people reflect deeply on the reasons behind their own opinions.

3.1.3 Request for Information (RFI)

It is common for people in a debate to realize that information outside the scope of their knowledge could dramatically sway their opinion on a matter, assuming it could be trusted. In a live debate, such points can only be set aside, but in the Canonical Debate there is the opportunity to request help from experts to fill in the blanks. Much like Quora, it should be possible for users to follow specific topics (Contexts) in order to receive notifications for RFIs in their area of expertise.

Although we have not fully decided how it should look and behave at this stage, we believe it is important to define an explicit element that could be used for this purpose. An RFI would act as a placeholder argument which would be visible in the debate, but differentiated from other Arguments. It would best be framed as a question rather than an affirmation (e.g. "What is the difference in relative salary between men and women for college graduates for the period from 2010 through 2017?"), and would help frame the discussion in such a way as to allow participants to proceed with the debate while formally acknowledging the limits of their knowledge.

3.1.4 Debate

Although the whole purpose of the project is to facilitate debates, a debate is not really a first-class citizen in the Canonical Debate. Debate is more of a verb than a noun. In fact, there is no object or element in the design of the platform specifically called a "Debate". Rather, we say that everything on the platform (Claims, Arguments, etc.) is debatable. Debating is the process of working as a group to come to a consensus, if not on the truth of a Claim, then at least on the principal points of difference, the beliefs and the set of values that each one must know and balance.

The term "Debate" does have a functional purpose, however, and so it is useful to provide an official definition of the term: A Debate is the word we use to mean "the current Claim under consideration". The Canonical Debate is a graph, and so there is no official starting point, or final Claim to be discussed. However, when viewing a discussion, it is useful to display a subset of the graph as if it were a standalone tree, with the truth of one Claim being decided at the top. If, for example, this Claim were "The Eath is flat.", we could say that we are debating that Claim, even though there are many other equally important (in other circumstances) Claims involved.

A Debate is the Claim on which we are focusing at any given moment.

3.2 Supporting Elements

3.2.1 Context

Many logical fallacies are made during the course of debates simply by taking evidence, statistics or supposed facts from one context and applying them within the very different context of the debate. Another common problem is cherry-picking a study from one very specific set of conditions, and obscuring or removing its context in an attempt to make a much more general statement. Unfortunately, a lot of time is also lost haggling over the definitions of the terms of a debate, and in correcting misunderstandings due to imprecision in language.

The Canonical Debate platform resolves this problem through the explicit declaration of Context elements for each Claim. These elements to some degree can be thought of as the dictionary definition of each of the words used in the Claim itself. For example, in the argument: John Doe hates my mother, the Context could be declared as follows:

John Doe (Jonathan Randall Joe, SSN: 112-12-0234, USA) hates my mother (Janathan Rosalyn Blow, SSN: 052-33-0101, USA)

This Claim has been declared with two Contexts which, presumably, avoid confusion about which individuals we are discussing. However, this still leaves some wiggle room for an Argument of the type:

No I (Jonathan Randall Joe, SSN: 112-12-0234, USA) do not. I merely despise her (Janathan Rosalyn Blow, SSN: 052-33-0101, USA).

Thus, it would be desirable to provide Context definitions even for verbs, although that may prove difficult to achieve.

3.2.1.1 Knowledge Graphs

Defining Context as dictionary definitions for the terms used in Claims would already be a step forward. However, so much more is possible, and necessary, in order to achieve our objectives, and make the most of the possibilities. Couching Claims in their proper context allows us to ask related but distinct questions by looking at variations in the terms.

Consider the possible debate:

Abraham Lincoln was the best President the United States ever had.

If definitions could be related to one another in some form of taxonomy, this would allow us to "navigate" around different Claims just by their relationship via context elements. From the debate above, it would be possible to search for debates regarding "Who was the worst President of the United States?", "What else was Abraham Lincoln the best at?", "Who was the best Prime Minister of the U.K.?", and so on. If one considers each Context element as an axis on which Claims reside, it is possible to imagine these Claims living in a multi-dimensional, interconnected space of debates about proposed facts.

It would be a phenomenal task to attempt to build a database of all the possible Contexts that could be required for public debates, not to mention building the infrastructure and taxonomies that guide the relationship between all these elements. Fortunately this concept of relating entities within an interconnected web of domains already exists, under the name Ontology, or more recently (and informally) knowledge graph.

There are some incredible organizations that have already shouldered this burden and are continuously improving their data sets. These include, for example, the Google Knowledge Graph and Wikipedia, both of which are referenced in the attribute descriptions below.

Our proposal is to make Context elements represent one of these external knowledge graph entities within the platform. Any of these online databases can be used as the source for a Context element, and where an entity is defined in more than one, the Context entity should provide a reference to each of the matching systems.

There may very well be cases in which a Context element is needed, but there is no matching entity in any known external graph. In such a case, it should be possible to define at the very least a simple definition to explain the terms that are being used in an unambiguous fashion. Ideally, the community would do well to add the new element to one of the external partner systems that serve as a source to the Canonical Debate.

3.2.1.2 Attributes

Context elements are simpler than Claims and Arguments, but still require some data in order to fulfill their purpose (a minimal sketch of such a structure follows the list):

  • Unique ID - As with Claims, Contexts should not have duplicate entries representing the same concept.
  • Name - A Context element should have an appropriate, and not necessarily unique, name.
  • Definition - In the case of a Context without any external references, a definition would be required. Otherwise, the definition would be an optional summary to differentiate this item from other Contexts with similar names (for example, it would be useful to say that "Paris" is "A prince of Troy in Greek mythology" in order to differentiate that figure from one of the several cities in the world with the same name).
  • Knowledge Graph References - Given that there may be more than one potential reference entity in external knowledge graphs, each Context element should contain one external reference for each system which matches it. Each system has its own method of reference (for example, the Google Knowledge Graph uses an identifier like kg:/m/0dl567 to represent one of its elements, whereas for Wikipedia, a URL might be the appropriate format), so each field should support the appropriate format.
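
As an illustration only, the attributes above might map onto a data structure along the following lines. This is a hypothetical sketch in Python; the class and field names (including the `KnowledgeGraphRef` helper) are assumptions, not a specification of the platform.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class KnowledgeGraphRef:
    """A single reference into an external knowledge graph (hypothetical)."""
    system: str      # e.g. "google-kg" or "wikipedia"
    reference: str   # e.g. "kg:/m/0dl567" or a Wikipedia URL


@dataclass
class Context:
    """One Context element attached to a Claim (illustrative only)."""
    id: str                           # unique ID; no duplicate entries for the same concept
    name: str                         # appropriate, not necessarily unique, name
    definition: Optional[str] = None  # required only when no external reference exists
    kg_refs: List[KnowledgeGraphRef] = field(default_factory=list)  # one per matching system
```
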
3.2.1.3 Context to Refine Debates

One very common problem in debates is the tendency to generalize the statistics from one context to a much larger context, without any clear indication that it is reasonable to do so. This is known as the Hasty Generalization fallacy, and is only one example of fallacies due to loose or careless use of context. One of the benefits of making Context explicit is the ability to identify when this is the case, and correct for it.

Consider the following Claim:

Immigrants are responsible for higher rates of crime.

This is an extremely general statement. There should therefore be a much greater burden to prove such a statement than, for example, what the original author of the Claim intended to say:

Undocumented immigrants in the United States in the decades between 2000 and 2018 have been the principal cause of an increase in violent crime in the cities where they are most present.

This would be an easier Claim to prove, and perhaps a better place to start for a much more complex debate. However, it's easy to envision breakdowns of this one Claim by city, and perhaps even by neighborhood, year, or population, should it interest the debating public.

3.2.1.4 Context and Arguments

This section began by discussing the relationship between Context and Claims, and yet the first examples were really relating Contexts to Arguments. This was a bit misleading, as Arguments do not themselves possess any Context. Arguments merely provide a link between two separate Claims, each of which has its own set of Contexts. Therefore, the Context of an Argument is actually the Context of its underlying Claim (which is being used within the Context of the target Claim or Argument).

3.2.1.5 Context and Relevance

Consider the following Claim:

Study X, conducted by the University of Lalandia, shows that communities in the U.S. with a high population of Nambian refugees experienced unusually high rates of violent crime between 2014 and 2017.

If this Claim were to be used as an Argument in the prior debate regarding crime rates, a few things should become evident regarding its Context:

  1. It refers to refugees, rather than undocumented immigrants
  2. The study covers only a portion of the time period being debated in the Claim
This implies that the Relevance score of the Argument itself should NOT be the maximum possible score. It cannot conclusively prove the Claim because its Context is not a perfect match. In fact the first point implies that this Argument may not be relevant at all. Likewise, if the study were regarding undocumented immigrants in Germany rather than the United States, or during a previous century, its relevance would also be in question. What this reveals is that Context is directly linked to the Relevance of Arguments that are used for or against a Claim.

In the current iteration of the Canonical Debate platform, no attempt is being made to automate calculations of Relevance based on Context; however, Context may be used to flag to users when there is no match whatsoever, or to recommend Claims that seem to have Contexts that match the target Claim.
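
As a rough sketch of how the platform might surface such a mismatch (and, as the next section notes, detect potentially duplicate Claims), consider the following hypothetical helpers. The `contexts` attribute and the matching-by-ID rule are assumptions made purely for illustration.

```python
def shared_contexts(contexts_a, contexts_b):
    """Return the Context IDs common to two sets of Context elements (illustrative only)."""
    return {ctx.id for ctx in contexts_a} & {ctx.id for ctx in contexts_b}


def flag_context_mismatch(base_claim, target_claim):
    """Warn the Debater when an Argument's Base Claim shares no Context with its target Claim.

    No attempt is made to compute Relevance automatically; this only surfaces a
    warning when there is no Context match whatsoever, as described above.
    """
    if not shared_contexts(base_claim.contexts, target_claim.contexts):
        return "Warning: the Base Claim and the target Claim share no Context elements."
    return None
```
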

3.2.1.6 Recommendations, duplicate Argument reduction, etc.

The previous section gave a glimpse into the additional possibilities that a careful use of Context can provide. Context opens the door to a number of features that can improve the user experience, help keep the debates organized, and more. These are described in more detail in the sections on features and technology below, but here are a few important ideas to consider:

  • Navigation - Users can find related debates by navigating according to the relationships between Context elements as defined in external knowledge graphs.
  • Recommendations - The system can recommend debates to the user according to their interests, which may be defined by specific Context elements. It can also recommend Claims which may potentially be used as Arguments due to the similarity in Contexts between the potential Base Claim and the debate Claim. And, users can sign up to receive notifications for RFIs attached to specific Context elements.
  • Prevention of duplicate Claims - As will be described in the section on Curation below, one of the biggest challenges to keeping the Canonical Debate organized will be to prevent the creation of redundant Claims. The platform can provide a method to detect potentially duplicate entries by comparing the Context elements to those of other existing Claims, and require the user to check these similarities before approving the creation.
  • Claim hierarchies - As will be described in the section below, there is an interesting relationship between similar Claims that can be explored and exploited by adding and removing Context elements one at a time.

3.2.2 User

Every human-facing system has users in one form or another. The Canonical Debate, which is designed to improve the shared knowledge between people, is no exception. While the exact set of attributes recorded for each user has not been fully designed, it is important to discuss certain aspects related to users that are unique or significant to the Canonical Debate platform.

3.2.2.1 Characteristics

The more significant characteristics of users in the Canonical Debate are described below.

3.2.2.1.1 Uniqueness

The Canonical Debate focuses on issues that are by nature contentious. Although the goal should be to work cooperatively towards the best answers, we know that many will prioritize winning over reaching an agreement.

In a system like this, where popular opinion (or the appearance of such) wins the day, it is important to prevent dishonest attempts to change or sway the results. One of the main vectors of attack in such situations is the creation of fake, or Sybil, accounts in order to afford multiple voices to a single person. It is essential that we do not let this happen, at least to a degree that creates a notable impact on any results.

This is not an easy task to achieve. It means implementing a policy of one account per person. This is a policy that has never been embraced by any of the big players online. Twitter, Google, Facebook, and so many more apply something of a hybrid approach to users, such that they attempt to make it difficult to create multiple accounts, but far from impossible. Twitter goes through extra effort to verify certain well-known public figures, but leaves the door wide open for accounts of unverified status. Only governments, financial institutions, and similar entities that have something to lose from impostors go through the painstaking effort of ensuring that accounts are matched to a person, and that the person maintains only a single account.

As we will describe in more detail below, we propose adopting a solution for guaranteeing the identity of users that does not yet fully exist. The technology world is currently undergoing a revolution in terms of defining one's own "digital identity". The fruits of this labor, if successful, could mean an opportunity to guarantee (within an acceptable margin of error) a policy of one account per person.

3.2.2.1.2 Anonymity

Certain aspects of the Canonical Debate require some type of "voting" in order to express one's own opinion, and the sum of all opinions, regarding topics of debate. As is commonly understood, it is important to ensure the anonymity of votes in order to reduce the risk of coercion by force or by bribery.

This seems somewhat antithetical to the previous characteristic of guaranteeing there be only one account per person. However, there are many groups working right at this moment on solutions to anonymize voting transactions, even when there is rich information available regarding the participants of a system. At the very least, the system should strive to keep votes unlinkable to users in the eyes of outside viewers, and in the near future we believe it will be possible to protect this information even from those that work to maintain the system.

3.2.2.1.3 Interests and expertise

Each user will have their points of interest with regard to debates within the system. Some of this information can be derived from their activity. However, it should also be possible for users to declare certain areas of expertise or interest in order to guide recommendations for new Claims, as well as to enable notifications whenever a new debate or RFI of interest to them is created.

3.2.2.2 Roles

Although there may be accounts with special privileges, such as system administrators and the like, as a general principle there are two types of users that deserve mention here.

3.2.2.2.1 Debater

The average user is what we would call a "Debater". These are the people that come to the platform to either learn about the information relevant to a debate, or intend to actively contribute information (Arguments and Claims) to the Canonical Debate. These users might perform actions such as:

  • Search for debates - A typical Debater would be interested in browsing the Canonical Debate for topics of interest via several different mechanisms, including simple text- or Context-based searches, navigation across knowledge graphs, responses to activity notifications, and so on.
  • Create Claims and Arguments - Debaters may wish to contribute their own Arguments to a specific debate, or even start their own debate by proposing a new Claim that has not been previously discussed.
  • Suggest improvements to existing debates - Debaters may propose a better description for an existing Claim, or add links and media to enrich an item that already exists.
  • Vote on debates - Debaters may show their beliefs regarding a debate by voting on Claims and Arguments (hopefully after carefully reviewing the information that is available). These votes will affect the Popular Score, and can serve as a reference for the Debaters to reflect on their own beliefs in the future.
  • View scoring perspectives - Debaters can switch between different scoring mechanisms to view the results of a debate according to the chosen set of rules. This includes the option to select the scores of a specific (anonymous) user, in order to get a better understanding of how other Debaters may think.
3.2.2.2.2 Curator

Curators are a special type of user tasked with the responsibility of keeping the debates organized, clean, constructive and canonical. These users may be drawn from the general pool of Debaters, having shown a particular interest in maintaining the quality of the conversation, and having a good record of responsibility. The Curator role is very similar in this respect to the role of moderator (or "Administrator") on Wikipedia.

The Curator role is specifically concerned with maintaining the structure of the Canonical Debate. This includes removing duplicate Arguments or Claims, adding missing Context to existing Claims, regrouping similar Arguments under a single, more readable heading, moving misplaced Arguments to where they belong, and so on. This subject deserves a much more in-depth treatment, and so has been given a section all its own below.

There is discussion as to whether or not curation should be done by Debaters themselves, rather than creating a whole separate role for the task. On the one hand, the structure of the Canonical Debate should be as debatable as anything else, and not be subject to the whims of an elite class of people. On the other hand, this is the type of task that requires a special understanding of the structure of the Canonical Debate, and the work of keeping the debates organized would be much more efficient and consistent when not handled by way of popular vote. In initial stages of the Canonical Debate platform, the system will restrict these actions to users with this specialized role. After experimentation, it may make sense to try a more open approach.

3.3 Scoring

It's very important to collect all the Arguments and information available related to a debate, and make them available to everyone interested. However, some Arguments are better than others, and keeping with the principle of reducing friction, there should be a way of separating the wheat from the chaff. In fact, many of our Principles can be supported through a process of numerically evaluating and weighing the different Arguments. To this end, the Canonical Debate supports several different formulas for rating, or "Scoring", Claims and Arguments. What we here call Scoring Methods loosely relate to what is referred to in the field of computational argumentation as an argumentation framework. More specifically, we deal with the case of uncertain premises, such as is studied under Bayesian Argumentation.

For the sake of discussion, our examples here and in the sections below will represent Scores as a percent range from 0% to 100%, with the following meanings:

  • 0% Truth - Indicates that a Claim is completely false; that there is no truth whatsoever behind the Claim.
  • 50% Truth - Indicates an outcome in which the truth behind a Claim is suspect. Depending on the type of Claim, this can mean that the Claim is as likely to be true as it is to be false, or that half of the population believes the Claim while half do not, or that there is a 50% probability of the Claim being true (for example, if it refers to events that are unknowable, or in the future).
  • 100% Truth - Indicates that a Claim is definitely true, or that everyone agrees with the Claim.
  • 0% Relevance - Indicates that an Argument is totally irrelevant, or has no impact on the discussion whatsoever.
  • 50% Relevance - With Relevance, this can have one of several meanings, or be a combination of these. This could indicate that people are divided as to whether or not the Argument is relevant. This could also mean that the Argument is relevant, but that it only somewhat impacts the discussion (that is, it's only "partially convincing").
  • 100% Relevance - Indicates that an Argument is completely relevant, and on its own should be enough to decide whether a Claim is true or false if its underlying Claim is completely true. That is, an Argument can be completely relevant and convincing, but still be based on a lie.
  • 0% Strength - Indicates that an Argument should have absolutely no impact on the debate, either because it is based on a false Claim, or because it is based on an irrelevant Argument (or both).
  • 50% Strength - Indicates an Argument that is at least partially effective, and should help to sway the outcome of a debate towards the side that it supports.
  • 100% Strength - Indicates that this Argument is irrefutable, and should alone be enough to resolve the debate.

3.3.1 Purpose

Scoring serves the following purposes:

Scoring should not be about who "wins". The natural instinct will be for participants to try to put their side on top. The real point is to get a clearer view of your own personal viewpoint on the debate. When this is what is given the focus, as is the design goal for this platform, the value of the debate will become clearer. Hopefully, the populace will learn to view their own discussions in such a light as well.

3.3.2 Criteria

Before designing a scoring method, it's important to reflect on what characteristics we want in a scoring algorithm. Although we have derived the criteria below based on our own experimentation and intuition, many are supported by recent literature in the field of argumentation. Certainly, the method should help us achieve each of the goals listed in the section above. No one algorithm (at least so far) is perfectly suited to every objective, and so the Canonical Debate supports several different approaches, as described below. We also intend to support new scoring methods for experimental purposes, and as our techniques and understanding improve.

No algorithm that we have designed to date satisfies every one of these criteria completely and in a consistent manner. Therefore, each algorithm should be considered a best effort. In this sense, the criteria described below are really heuristics that may be ignored if they work contrary to the underlying reasoning behind a specific Scoring Method. No algorithm should ever be mistaken as being the final word on the truth, and Debaters are encouraged to use their best sense and reason to make up their own minds.

3.3.2.1 Scores should not be biased by content

This point seems so obvious that it was initially taken for granted and not even included on the list. But it is important to be clear on this point: a Scoring algorithm that gives preference to one side of a debate over another simply on the basis of which side an Argument supports represents an obvious bias that should not be accepted. Such a method would go against the objective of being constructive, and of choosing Arguments based on their merit.

3.3.2.2 The Score of a Claim with just one Argument shouldn’t be better than the Strength of that Argument

It would be misleading, for example, to award a score of 100% to a Claim that is supported by only one Argument with a Strength Score of 30%. This would mean that a single somewhat flawed Argument has left participants without a single doubt as to the truth of the debate.

3.3.2.3 Adding another Argument on a side should increase the weight on that side

If a Claim is supported by an Argument with nearly 100% Strength, and a second, weaker Argument is provided to prop up the first, it wouldn't make sense to take an average (for example), which would reduce the overall support of the Claim.

As a corollary to this, increasing the score of one of the Arguments should always increase the weight on that side, or at least not be harmful to its case.

An example of this would be Proposition 5, as described in Bayesian Argumentation and the Value of Logical Validity.

3.3.2.4 A single Argument with a Strength of 100% and no opposition should have a result of 100%

And, correspondingly, a 100% effective counter-argument should result in a Truth score of 0%, assuming there are no supporting Arguments. A good example of this is a piece of irrefutable evidence: just one good piece of evidence should be enough to prove a Claim.

3.3.2.5 Adding an Argument with a Score of 0% should not affect the result

An Argument with a Strength Score of 0% is either completely false, totally irrelevant, or both. It would definitely be a problem if participants could alter the outcome of a debate by spamming the discussion with ineffective Arguments.

3.3.2.6 Re-grouping shouldn’t change the outcome significantly

Curation is a subject we will discuss in more detail below, but it has profound implications for Scoring in general. Curation involves reorganizing elements of a debate in order to improve the structure, but these actions may change the way the debate is scored. In some cases, such as when an Argument was made against a Claim when it should have been against an Argument, it is natural that the reorganization affects the score (since it is a correction). In the case of regrouping, the action is done more for the purpose of clarity, and ideally should not change the result. If this type of curation can affect scores, it could be used as a mechanism for Curators to influence the outcome of the debate.

This is the hardest criterion to support mathematically. Intuitively, assuming that all Arguments have been sufficiently vetted, rebutted and voted on, reorganizing similar arguments under a larger heading should not change the outcome. Three partially-relevant Arguments (30% each, let's say) should be just as effective separately as one more relevant (e.g. 65%) Argument that has them as supporting Arguments.
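
To make these criteria concrete, here is one illustrative (and deliberately simplified) way of combining the Strength Scores of supporting Arguments: treat each Argument as an independent chance of establishing the Claim, so that the combined Score is one minus the product of the complements. This is only a sketch of one candidate family of formulas, it ignores counter-arguments entirely, and it is not the Scoring Method prescribed by this paper.

```python
def combine_support(strengths):
    """Combine supporting Strength Scores (0.0-1.0) as independent chances (illustrative only)."""
    remaining_doubt = 1.0
    for s in strengths:
        remaining_doubt *= (1.0 - s)
    return 1.0 - remaining_doubt


# A single 30% Argument yields exactly 30%, never more (criterion 3.3.2.2).
assert abs(combine_support([0.30]) - 0.30) < 1e-9

# Adding an Argument never lowers the combined score (criterion 3.3.2.3).
assert combine_support([0.95, 0.40]) >= combine_support([0.95])

# A single 100% Argument with no opposition settles the Claim (criterion 3.3.2.4).
assert abs(combine_support([1.0]) - 1.0) < 1e-9

# An Argument with a Score of 0% changes nothing (criterion 3.3.2.5).
assert abs(combine_support([0.60, 0.0]) - combine_support([0.60])) < 1e-9

# Three 30% Arguments are roughly as effective as one ~66% Argument, close to
# the 65% figure used in the regrouping example above (criterion 3.3.2.6).
print(round(combine_support([0.30, 0.30, 0.30]), 3))  # 0.657
```
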

3.3.2.7 The outcome should match human intuition

This is the necessary catch-all for any criteria we may have missed. If the outcome of a calculation doesn't match what people would expect (and this is hard to define, since each person has a slightly different intuition), then we must pause and consider what we were expecting and why. Then, adjust the algorithm accordingly.

Some of the more difficult questions to consider here are:

  • If both sides have 100% convincing (Strong) Arguments, but one side has extra supporting Arguments, should we declare that side to be more convincing? (What does 100% mean in this case?)
  • Is one 80% Strength Argument more effective, the same, or less than two 40% Strength Arguments?
  • If not, how many 40% Arguments would it take to defeat an 80% Argument?

3.3.3 Scoring Methods

Scoring (i.e. calculating mathematically the truth of) Claims is at once presumptuous, and necessary. Even if there is an objective truth, it is unlikely that we will be able to get 100% of the human race to agree on it. To choose one method of scoring would be equally contentious, and lead to the alienation of those that disagree with the results.

For the Canonical Debate, we seek to establish certain principles that should be followed in order to work with our model, but otherwise intend to provide a platform for users to view the results of debates from a variety of perspectives and methodologies. In the section below, we describe the fundamental pieces of our model that relate to Scoring, the initial methods supported for this purpose, as well as options for extending these basic approaches.

3.3.3.1 Score interconnectivity

The Score of a Claim is a representation of its truthfulness, the confidence that people can have that the statement represents a true statement. Since this is synonymous with a debate, arriving at a Truth Score for the Claim can be considered the objective of that debate.

Likewise, the Relevance Score of an Argument is a measure of the impact that it can have in affecting the outcome of the Claim or Argument which it targets. This is in turn combined with the Truth Score of the Claim upon which it is based in order to create a Strength Score, which is the final measure of its impact.
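
One simple way to express this relationship, offered here only as an assumption for illustration (the paper does not mandate a particular formula), is as a product:

$$\text{Strength}(A) \;=\; \text{Relevance}(A) \times \text{Truth}\big(\text{BaseClaim}(A)\big)$$
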

Each Scoring Method has its own way of weighing and summing up the impact of each Argument on its target Claim or Argument in order to produce a final score. What is common to all of them is the notion that the score of a Claim affects the Strength of an Argument that uses it, which in turn affects the score of its target Claim, which will then affect the Strength of an Argument in the next debate, and so on.

It is through this interconnectivity of Scores that we achieve our main purpose. The scoring mechanisms highlight, each in their own way, the ways in which the validity (or lack thereof) of a Claim affects any other Arguments and Claims on which they are based. By sorting out and choosing the best of these Claims, it is possible to build ever-stronger understandings of the truth.

3.3.3.1.1 Flat Scores

Although the true benefit from scoring comes from viewing the way the Arguments affect the outcome, it is possible to calculate a Score without any such considerations.

A simple poll is one example. Although the Arguments under a Claim may affect the final opinion of each participant, it is possible to calculate a Score by asking each reader what they think, and then average the results.

When a Score comes only from data collected at the level of the item being discussed, we call this a Flat Score.

3.3.3.1.2 Roll-up Scores

The more interesting cases, however, come from calculating the Score at the top level by somehow combining the Scores of the Arguments which target them. Since the Scores of those Arguments are, in turn, calculated from the combination of Arguments which target them (and their Base Claim), it is possible to generate a Score for one item by recursively calculating each item that is connected to them in the Debate Graph.

We call these types of Scores Roll-up Scores, since their behavior reflects a common operation in data analysis, that of summing up multiple values into a broader vision.

3.3.3.1.3 Roll-up Levels

There are several problems that the platform encounters when calculating the result of a Score based on the sum total of information that affects it.

The most mundane problem is performance. As information is added to the system, and opinions are given regarding that information, each change can cause a giant ripple across the entire space of Arguments and Claims so far accumulated. While this is a trivial concern when the data set is small, or the number of users and rate of new information is limited, the problem grows exponentially as Claims are added.

A simple way to deal with this issue is to allow the user to define the number of "levels" to which they wish to calculate the Score of the Claim they are viewing. This would create an immediate and manageable limitation to the degree of calculation required for their needs.

A Roll-up Score from Level 1 entails calculating the Score based solely on the Strength Scores of the Arguments which have the Claim (or Argument) as a target. Since a Strength Score requires both the Relevance Score of the Arguments and the Truth Score of their Base Claims, this means that a Level 1 Roll-up Score uses the set of Arguments which target a Claim, along with their Base Claims. In this case, there would be no consideration of Arguments which target an Argument.

A Roll-up Score of Level 2 would require a lot more computation. Not only would this calculation require all the data of a Level 1 Roll-up, it would require calculating those values based on the values of their Arguments (and Base Claims). In this case, this means that Arguments which are targeted by other Arguments have their Relevance Scores derived from the results of those sub-Arguments (and, of course, their Base Claims).

In other words, a Roll-up Score of Level 2 requires calculating the Level 1 Roll-up of each Argument affecting the Level-1 Arguments and Claims, and then performing a Level 1 Roll-up calculation for the primary Claim from the results. A Level 3 Roll-up requires a Level 2 Roll-up calculation on the Arguments targeting the Claim, which requires a Level 1 Roll-up on their Arguments, and so on.

We can generalize this by saying that any Level N Roll-up Score can be performed by calculating a Level N-1 Score on each of the Arguments (and their Base Claim) that target a Claim or Argument.
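
The recursion described above can be sketched as follows. This is an illustrative model only: the class names, the default score of 50%, the use of the strongest Pro and Con Argument, and the multiplication of Relevance and Truth into Strength are all assumptions rather than a prescribed Scoring Method.

```python
from dataclasses import dataclass, field
from typing import List

DEFAULT_SCORE = 0.5  # Score for items with no Arguments; a Scoring-Method parameter (assumption)


@dataclass
class Claim:
    title: str
    flat_truth: float = DEFAULT_SCORE             # Flat Score, e.g. from a simple poll
    arguments: List["Argument"] = field(default_factory=list)  # Arguments targeting this Claim


@dataclass
class Argument:
    base_claim: Claim                             # the Claim this Argument is based upon
    pro: bool                                     # True = supports its target, False = attacks it
    flat_relevance: float = DEFAULT_SCORE         # Flat Relevance Score
    arguments: List["Argument"] = field(default_factory=list)  # Arguments targeting this Argument


def truth(claim: Claim, level: int) -> float:
    """Level-N Roll-up Truth Score of a Claim (toy combination rule, for illustration only)."""
    if level == 0 or not claim.arguments:
        return claim.flat_truth                   # Flat Score, or the default for an argumentless Claim
    pro = max((strength(a, level - 1) for a in claim.arguments if a.pro), default=0.0)
    con = max((strength(a, level - 1) for a in claim.arguments if not a.pro), default=0.0)
    return min(1.0, max(0.0, 0.5 + (pro - con) / 2.0))


def relevance(arg: Argument, level: int) -> float:
    """Level-N Roll-up Relevance Score of an Argument, using the same toy rule on its sub-Arguments."""
    if level == 0 or not arg.arguments:
        return arg.flat_relevance
    pro = max((strength(a, level - 1) for a in arg.arguments if a.pro), default=0.0)
    con = max((strength(a, level - 1) for a in arg.arguments if not a.pro), default=0.0)
    return min(1.0, max(0.0, 0.5 + (pro - con) / 2.0))


def strength(arg: Argument, level: int) -> float:
    """Strength = Relevance of the Argument x Truth of its Base Claim, each rolled up at the given level."""
    return relevance(arg, level) * truth(arg.base_claim, level)
```

For example, `truth(some_claim, 2)` would compute a Level 2 Roll-up: the Arguments targeting `some_claim` (and their Base Claims) are evaluated at Level 1, which in turn evaluates their own Arguments at Level 0 (Flat Scores).
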

3.3.3.1.4 Claims without Arguments

How can we calculate the Roll-up Score if we encounter an Argument or a Claim that doesn't have any Arguments of its own? In this case, it is not possible to calculate any deeper than this.

The correct answer to this question depends on the philosophy behind the Scoring Method that is being used. This is one of the parameters that every Scoring Method must define. Should Arguments with no defense nor counter-argument be ignored, be given the maximum consideration, or somewhere in between? What should we consider the Truth Score of a Claim (especially a new Claim) that has been subject to no discussion whatsoever? Should it be considered true until proven false? Or vice-versa, or half and half?

There is no general solution to this question, only the requirement that a default Score must be defined.

3.3.3.1.5 Cyclical Debates

A more philosophical issue is the problem of creating cycles. A cycle occurs in a graph when one item references another item which has references that come back to the original item. A Debater may try to make an Argument that uses a Claim which is itself being debated on the basis of an Argument derived from the original Claim.

The simplest example of this is known in philosophy as "Begging the Question". It is clearly an ineffective form of argumentation. But this poses a question: should the Canonical Debate platform permit this to happen? Or should the system detect any circular references and prevent a Debater from proposing such an Argument?

Though not clearly stated in our principles, underlying our philosophy is the idea that every argument should be captured so that it can be properly debated. An omission of an Argument, or rather the act of preventing someone from making an Argument, would not only be contrary to that goal, but also alienating for those that find it salient enough to consider. Ideally, we would want to catch this type of argument, if only so that it may have its chance and be properly refuted.

It is also quite possible that a cycle may occur even in the case of well-reasoned debate with only subtle flaws. Like putting the proverbial cart before the horse, it is quite likely that someone will confuse a cause for an effect, which might naturally result in a circuitous but undeniable cycle.

This means that Scoring algorithms must take cyclical debates into account. There are already well-established solutions for detecting a cycle, so this should be trivial. Once the Scoring Method realizes that it has already seen a specific Claim before, it must choose what to do. One solution would be to treat it as if it were an argumentless Claim. But that is just one possibility. It is up to each Scoring Method to define how it should treat a cyclical debate.
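
Extending the roll-up sketch above, a Scoring Method might guard against cycles by tracking the Claims already visited on the current path and treating a repeated Claim as if it were argumentless. This is just one possible policy; the choice remains up to each Scoring Method.

```python
def truth_acyclic(claim, level, visited=frozenset()):
    """Like truth() above, but a Claim already seen on the current path is scored as argumentless.

    Claim titles are assumed unique here; for brevity, only cycles through Base Claims are tracked.
    """
    if claim.title in visited or level == 0 or not claim.arguments:
        return claim.flat_truth                   # fall back to the Flat/default Score
    seen = visited | {claim.title}
    pro = max((strength_acyclic(a, level - 1, seen) for a in claim.arguments if a.pro), default=0.0)
    con = max((strength_acyclic(a, level - 1, seen) for a in claim.arguments if not a.pro), default=0.0)
    return min(1.0, max(0.0, 0.5 + (pro - con) / 2.0))


def strength_acyclic(arg, level, visited):
    """Strength with cycle tracking: reuses relevance() from above, but guards the Base Claim's Truth."""
    return relevance(arg, level) * truth_acyclic(arg.base_claim, level, visited)
```
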

3.3.3.2 Sources for Score Values

Scoring Methods, as described in the previous section, are mainly focused on the mathematical formulas used to calculate a final score based on multiple values associated across many Claims and Arguments. What hasn't been discussed is exactly how the initial score values are generated. There are many possible solutions, which fall along a sliding scale from completely objective to fully subjective.

3.3.3.2.1 Objective Scoring

A fundamental question of philosophy (and science) is whether there exists an objective truth. And, if one exists, is it possible for us to see it? Until we have this answer, the best we can do is keep trying to find it. Objective Scoring is an approach to Scoring Methods that avoids subjective opinions in favor of looking for an answer based on the content alone.

The source of these values depends on the algorithm, and it can be a serious challenge to generate objective data from information provided by subjective humans. One approach could be to use data mining across many independent sources, or to use AI algorithms to determine a score based on use of language, graph connectivity, or other strategies.

Reason Score is one very interesting example from the Canonical Debate Lab. It is a vote-free scoring algorithm that follows more or less the graph connectivity strategy by placing an emphasis on the interaction of Claims and counter-Claims, rather than popular opinion. With voting-based (subjective) scoring methods, debaters try to defend their side of the debate by scoring the Claims according to their personal beliefs. With Reason Score, the only way to impact the final result is by providing a new, reasonable Claim. Instead of casting a vote, Debaters must provide a claim to explain why they disagree with the current outcome. Reason Score therefore encourages Debaters to think through their objections to a given topic and put them into words, which can result in a more complete set of Claims.

As a side benefit, Reason Score values do not have to be reset after Curation actions, unlike opinion-based scores. On the other hand, Curation can have a significant impact on the results. Assuming Curation is handled correctly, this is not necessarily a negative trait.

3.3.3.2.2 Belief Scores

On the opposite end of the scale, the most subjective of all Scoring Methods is to allow each Debater to rate every Claim and Argument according to one's own personal opinion on the matters. This method draws upon at most a single score per item in the Debate Graph, according to where the user has provided a Score and where they have not.

This strategy is at heart a very introspective exercise, providing a user with the opportunity to consider their own opinions on each matter based only on the Arguments and information at hand (hopefully, as complete as society is able to provide). By Scoring each item separately, and then viewing the results at varying Roll-up Levels, they can see how the Score changes when viewed at a flat level, as opposed to how it is scored according to the Arguments, the Claims on which those are based, and so on.

The more completely a user is able to analyze and score the Debate Graph, the more interesting and useful the tool will become for them. If there is a large discrepancy between their Flat Score and the Level 1 Score (or deeper), this will prompt them to reflect on why that is. It could be that an Argument "feels right" to them, but is based on something they realized is not true (or likely). Or, it could be that during the process of scoring distant topics in the graph, they were convinced of a Claim they had previously disregarded. This might prompt them to adjust their opinion at the Flat Level. The most interesting of all these cases is when one discovers that they are treating certain values, beliefs or principles in an inconsistent manner, depending on the debate. This makes this type of Scoring a powerful tool for studying one's own biases.

Equally powerful would be the ability to view the Belief Scores of another user (while protecting their identity). Debaters could be given the option of choosing to view the Personal Scores of a user that disagrees with them on a given Claim, or even of someone who more or less seems to agree with them.

Debaters should be encouraged to define their own Personal Scores before viewing the Scores of others. This act of introspection is very important to the goal of teaching people about themselves. Allowing them to see the opinions of others first could also lead to an Anchoring effect.

3.3.3.2.3 Popular Vote

The Popular Vote is a very straightforward mechanism for scoring. On any Claim or Argument, Debaters can register their opinion according to their Belief Score of an element. The Popular Vote, then, is simply the process of combining all the Belief Scores for that element. The simplest calculation would be to take an average of these scores, but each Scoring Method can choose its formula.

Popular voting can be considered a "dangerous" form of viewing the results of a debate. There are several reasons for this:

It can be subject to attack by dishonest voters, especially those with many fake accounts

This is a very genuine concern. It is critical that debates not be influenced or dominated by bad actors with fake accounts. As stated in the Principles, the system must guarantee to the best of our ability that there is only one account per person.

Just showing the Popular Vote can distort the outcome, due to popularity bias

There will certainly be some distortion due to this effect - there is no escaping the way our brain works, not completely. However, the main purpose here is not to see which side wins, but rather to provide the tools to help people understand themselves, which is the first tool in the fight against cognitive bias. Similar to our warning in the previous section, we believe that the best defense against this would be to encourage users to begin by analyzing their own opinion. However, this would at best be a nudge, only marginally successful. While this is a very valid concern, we do believe that the benefits outweigh the bias it may create.

There is no guarantee the popular outcome is the right outcome

This is true, but the Canonical Debate is not intended to be a system for voting and decision-making. Rather, it is meant to be a tool for informing users in the best way possible in their votes and decisions outside the platform. The tyranny of the majority is a problem for a democracy, but is therefore outside the scope of this system. For more information on how this can be resolved, see the Democracy Earth white paper. With the creation of this system, we certainly hope that the majority will be much more understanding and accommodating, rather than tyrannical.

3.3.3.3 Score Perspectives

We have discussed how there can be many different algorithms to calculate Scores, and different ways to choose the source for those Scores. We have also discussed how each method is imperfect, and at best can serve to highlight certain aspects of the debate. Given that each method will have certain biases, it should be possible to give users access to the variety of approaches.

We call the combination of a Scoring algorithm (mathematical formula for combining Scores), plus the method of generating these Scores (Score source) a Score Perspective. Although we recommend that the platform begin by requesting a user to define their own Belief Scores, it should provide users the option to switch between the many Score Perspectives available.
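
As a minimal sketch (the names and the use of plain callables are assumptions), a Score Perspective could simply pair a source of raw Scores with a combination algorithm:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ScorePerspective:
    """Pairs a source of raw Score values with a combination algorithm (illustrative only)."""
    name: str                            # e.g. "My beliefs, Level 2" or "Popular Vote, flat"
    score_source: Callable[..., float]   # where raw values come from: Belief Scores, Popular Vote, Reason Score...
    combine: Callable[..., float]        # how values are combined: Flat Score, Level-N Roll-up, etc.
```
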

3.3.4 Ranking of Arguments

One of the most important roles for scoring is to enable the ranking of Arguments, from "best" (strongest or most relevant) to worst. By assigning a Score based on popular or personal opinion, or on the preponderance of facts, it becomes possible to design an interface which highlights the most salient information in a given debate.

In a mathematical sense, it is easy to rank the Arguments targeting a given Claim. This is simply the process of sorting the Arguments according to their Strength Scores. However, from a user experience perspective, there are several challenges to consider:

  • Should Pro and Con Arguments be ranked separately?
  • If so, how should this ranking avoid confusing the viewer by mixing scales (e.g. How should the ranking handle the case when all the Arguments on one side have a higher Strength Score than those on the opposite side?)?
  • How should the ranking deal with the subtle difference in Relevance Scores vs. Strength Scores (i.e. "If this is true, it's the best Argument, although we're not yet sure.")?
  • Should the interface hide Arguments that don't meet a certain threshold? What if none of the Arguments on one side meet that threshold?
  • Should the interface show Arguments that are new, or haven't met a certain threshold of Scoring and Debate?
One type of user interface currently under consideration is Scoring by ranking. It would be a compelling and intuitive way to Score: manually sort Arguments in order of "best" to "worst", and derive numerical Scores from that ordering. However, this faces the problem mentioned above: Would this ranking be a rating only of the Relevance Score, or of the total Strength Score? Would the average user be sophisticated enough to sort only by Relevance? If not, how could the Relevance Score be extracted from the sorted Strength Score? How would one register a Relevance Score for an Argument with a base Claim that is totally false?

These problems are far from insurmountable. We discuss them in this white paper only to highlight some of the decisions that an implementation of the Canonical Debate must make. The best approach should be selected through practice and experimentation.
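
For illustration, and reusing the `strength()` helper from the roll-up sketch in the Scoring section, a first pass at ranking might sort Pro and Con Arguments separately, one of the options discussed above. The threshold for hiding weak Arguments is a hypothetical parameter.

```python
def rank_arguments(claim, level=1, threshold=0.0):
    """Rank Pro and Con Arguments separately by Strength Score, strongest first (illustrative only)."""
    scored = [(arg, strength(arg, level)) for arg in claim.arguments]
    pro = sorted([x for x in scored if x[0].pro], key=lambda x: x[1], reverse=True)
    con = sorted([x for x in scored if not x[0].pro], key=lambda x: x[1], reverse=True)
    # Hiding Arguments below a threshold is one of the open questions raised above.
    pro = [x for x in pro if x[1] >= threshold]
    con = [x for x in con if x[1] >= threshold]
    return pro, con
```
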

3.4 Curation

Curation is the act of improving the structure and organization of the Canonical Debate by making small, incremental changes. As we have noted in the Problems section above, it is important to ensure that a debate in which anyone in the world can participate maintains a standard of quality and does not become a random mess of opinions, ideas and comments.

In this section, we outline the many actions that might be necessary to maintain a canonical structure, and to promote a constructive discourse.

3.4.1 Who Should Curate?

At this moment, the Canonical Debate Lab has not come to a consensus as to the proper mechanism for performing curation actions. There are two main approaches currently under consideration for this task, and it is possible that both approaches will be tried in practice before a final decision is made.

One solution would be to hand-pick a group of users and give them the power to perform these curation actions. This is the approach used, for example, by Wikipedia, which has a special class of moderators, or Administrators, tasked with the responsibility of making sure the site's articles adhere to the standards they've defined.

Another approach would be to allow any Debater to propose an improvement to an existing debate, and allow others to provide their opinions on the subject. After all, as we like to say, everything is debatable.

The number of arguments on both sides of this debate is enough to fill out an Argument Map all its own. The basic trade-offs come down to the problem that Curation seems to be a specialized skill that requires detachment from the debate at hand (the Curator should not be intentionally trying to influence the outcome of a debate), and will need to be performed quickly in order to keep popular debates from becoming too disorganized. Maintaining an audit trail of Curator actions could be one way of keeping Curators from abusing their power.

Allowing Debaters to propose and debate improvements to the debate structure is more consistent with the general philosophy behind the platform. In many cases, it may be more efficient than having a small group of Curators, since there are many more Debaters to keep an eye on every part of the debate. On the other hand, this would replace one ostensibly non-partisan Curator with an indeterminate number of partisan Debaters. This could expose the debates to a form of Polyopoly Attack in which a single, outsized, motivated group could dominate a single debate to propose changes that are detrimental to the overall quality. Also, this platform does not itself intend to be a platform for making decisions, only to facilitate external decisions. Thus, there is no intention of providing a voting mechanism that establishes a final outcome (such as triggering a suggested Curation action).

3.4.2 Curation Actions

The following is a guide to the types of actions that a Curator may choose to execute in the effort to maintain the debate as canonical and constructive. For those who are familiar with software development, these are analogous to types of code refactoring, but are applied to arguments in a debate.

Many of these actions can have a dramatic impact on the Score of a debate. Objective Scoring Methods can recalculate the new values on the fly; however, the Subjective Scores may require notifying Debaters that have contributed an opinion to revise their opinion, or may even require wiping out all votes entirely to start over. In order to avoid major negative consequences, the sooner a necessary Curation can be done, the better.

3.4.2.1 Fundamental Actions

These are the simplest, most rudimentary actions required to keep a debate organized and canonical. Many of the more complicated curation activities are essentially a combination of the Fundamental Actions executed in a specific sequence.

In order for the Canonical Debate platform to reach its design goals, it must support at a minimum all of these actions.

3.4.2.1.1 Create Claim or Argument

This is the most obvious action required of any debate. It only deserves mention here as it is a required step in some of the more complex actions listed below, including Split Claim and Group Arguments.

3.4.2.1.2 Delete Claim

This is a highly contentious and dangerous action to put in the hands of anyone. The fact that historical forms of the debate can be viewed helps temper the problem, but it remains to be seen whether or not this should be an option at all.

The simplest case, that of a Claim with no Arguments of its own and no votes, actually makes sense. There's little or no harm done in this case, and it can be considered a simple act of housekeeping (a User can always come along and create the Claim a second time).

If a Claim with parent or sub-Arguments gets deleted, it would necessarily require the deletion of those Arguments as well (and then the deletion of their Arguments, and so on), so it makes sense to treat this option with care. Perhaps the only reason to permit this action would be as a simpler version of Merge Duplicates when a brand new duplicate Claim has been detected.

3.4.2.1.3 Delete Argument

This carries the same risks as Delete Claim. However, it makes a little more sense than the latter, in the case that an Argument is created that has absolutely no relevance to the debate whatsoever (although Scoring and ranking the Argument out of the top list would be a less controversial solution). More specifically, this action is an important final step in the process of moving an Argument.

3.4.2.1.4 Attribute Enhancements

This is the simplest form of curation, involving small changes to the text or attributes of a Claim or an Argument. These changes may include:

  • Changing the Title
  • Changing the Description
  • Changing media elements
  • Adding References
  • Adding or correcting Context elements
This type of change is not structural, and should not have a major impact on the scoring of the element. As such, there is no need to reset the scores that have been chosen for this element. However, it would probably make sense to send a notification to the contributors when this action is performed.

Given its simplicity, this type of Curation might be made available to average Debaters, even if the general strategy of using a restricted group of Curators is chosen. In a previous experiment, it was possible for common users to propose a new version of a Title or Description. Debaters could then vote on which of all proposed versions they considered to be the best. The version with the most votes was then the version that would appear to users (it would still be possible to click a tab to see all the proposed versions at once). While simple, and subject to attack by bad-intentioned users (or those wanting to make a joke), it would nevertheless be a way to improve Claim and Argument quality without extra overhead for the Curators. An additional counterbalancing feature could be the option for a Curator to override and lock a Title or Description in order to prevent such attacks.

3.4.2.1.5 Attach Premise

If the Canonical Debate platform supports Multi-premise Claims, then there needs to be a mechanism for associating simple Claims with a Multi-premise Claim. As with Create Claim or Argument, this is a standard feature that should be available to any Debater, and is listed here only for completeness.

3.4.2.2 Compound Actions

These are actions that could be reproduced with some extra work by a combination of the actions above, but it is recommended that the system support these as explicit options, as much for ease of use as for consistency in the way they are handled.

3.4.2.2.1 Merge Duplicates

Probably the most common problem in attempting to maintain a canonical debate is the occurrence of duplicate Arguments or Claims. Whether Debaters misunderstand what has previously been said, are too lazy to read the points that have already been made, or just want to say it in their own words, duplication is rampant in online debates.

Merging is the act of taking two identical Claims or Arguments and unifying them into one. It sounds simple, but would require the following fundamental actions to reproduce:

  1. Move each of the child Arguments over to one of the two Arguments or Claims (the "Survivor").
  2. In the case of merging a Claim, for each Argument that uses the non-Survivor as base, create a new Argument identical to it using the Survivor as a base instead. Repeat also for any Arguments attached to those Arguments, and so on.
  3. Perform any changes to Title or Description as appropriate (note that the history of Attribute Enhancements on the non-Survivor will be lost).
  4. Delete the Argument or Claim that was not selected as the Survivor
Doing this as a single supported action makes the operation smoother, with less risk of human error. Even so, it has a profound impact on every one of the attributes previously listed.
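
Reusing the illustrative Claim and Argument classes from the Scoring section, the Merge Duplicates Action might be sketched roughly as follows. Field names are assumptions, the re-pointing of Base Claims is a simplification of the create-and-delete steps described above, and a real implementation would also handle Scores, history and notifications.

```python
def merge_duplicates(survivor, duplicate, all_arguments):
    """Merge a duplicate Claim into the Survivor (illustrative sketch only).

    `all_arguments` stands in for a query over the whole Debate Graph, needed to
    find Arguments that use the duplicate Claim as their Base Claim.
    """
    # 1. Move each child Argument of the duplicate over to the Survivor.
    survivor.arguments.extend(duplicate.arguments)
    duplicate.arguments = []

    # 2. Re-point Arguments that used the duplicate as their Base Claim.
    #    (The white paper describes this as creating identical new Arguments and
    #    deleting the old ones; re-pointing in place is a simplification.)
    for arg in all_arguments:
        if arg.base_claim is duplicate:
            arg.base_claim = survivor

    # 3. Title/Description changes, if any, would be applied to the Survivor here.

    # 4. "Delete" the Claim that was not selected as the Survivor; in this sketch
    #    it simply becomes unreachable from the rest of the graph.
    return survivor
```
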

3.4.2.2.2 Split Claim

Some Claims may be made in such a way that they are really a combination of unrelated facts. An exaggerated example might be something like the phrase "You're ugly and you smell bad", which is really the combination of the two Claims "You're ugly" and "You smell bad". A Curator in such a situation would therefore need to make a change such that two Claims are made from the one.

If you have read the previous sections, you might recognize this type of Claim as a Multi-premise Claim. If the platform provides support for this type of structure, then most cases for a Split Claim Action will actually be a conversion to a Multi-premise Claim, which entails creating the separate, distinct Premise Claims, and joining them together into the original Multi-premise Claim.

The Split Claim Action is equivalent to the following actions in sequence:

  1. Create a new Claim with a Title and Description for each Premise Claim that can be extracted from the original Claim
  2. Associate each of the new premises with the original Multi-premise Claim
  3. Move any Arguments that were targeting the original Claim so that they instead target one of the new premises, as appropriate
  4. Update the Contexts of each Premise Claim to reflect the reduced scope of each new premise
3.4.2.2.3 Split Argument

This is not in fact a feature that needs to be supported. Since an Argument is merely the expression of a Claim used to support or attack another Claim or Argument, you cannot have multiple Arguments in a single entity. However, it's possible to create text that appears to be two Arguments in one. In such a case, the proper solution would be to correct the Title and Description such that it is appropriate for the base Claim, or to convert its Base Claim into a Multi-premise Claim.

3.4.2.2.4 Move Argument

This is one of the most critical and frequent types of Curation. The idea of a Canonical Debate, and the idea of separating Arguments into their distinct logical components, is not (yet) a common practice for most people. Our natural instinct is to provide a defense or counter-argument of any kind we are able to conjure, regardless of whether it is relevant or not. This is only made worse by the need to have the last word. Creating a constructive and canonical debate, however, takes careful thought and effort.

When an Argument is made in the wrong place, its impact on the outcome of the debate may be negated due to its irrelevance. This does not mean that it is unimportant, only that it has been used in the wrong Context. The work for the Curator, then, is to find the proper Context for the Argument, and move it there.

Consider a debate over gun control, where the following Claim is being discussed:

Preventing people with a diagnosis of mental illness from owning firearms does not go far enough to protect the public.

And the following Argument added as an argument for the Claim:

The AR-15 has been used in many of the most recent mass shootings in the U.S.

While totally irrelevant to the previous Claim, it would be an important argument for the broader discussion on gun control.

The much more common case of a misplaced Argument, however, is an error made by users who haven't yet gotten used to the difference between Claim and Argument. People are accustomed to responding to an argument with a counter-argument of their own. However, they have probably never been asked to identify whether they are attacking the importance of the argument, as opposed to its underlying basis in fact. Until users are used to this distinction, and the user interface has evolved to properly facilitate this process, Curators will likely spend the majority of their time correcting this type of mistake.

Consider the following response to the previous Argument regarding statistics on the AR-15 firearm:

Actually, the AR-15 was never used in a mass shooting.

Historical inaccuracies aside, this is definitely an attack on the validity of the statement. If the user made this Argument in direct response to the Argument itself, rather than placing it on the Base Claim, the Move Argument Action would be required.

For a Curator in this situation, the task is simple: change the target of the Argument that is being moved to the new, more appropriate target.

It could be implemented using the Fundamental Actions listed as part of the Merge Duplicates Action:

  1. Create a new Argument with the new Target as its destination.
  2. "Move" (in the way being described here) any sub-Arguments over to the new Argument.
  3. Delete the old Argument.
Whether this action is performed in a single step or via the Fundamental Actions above, there is another step that must be considered: any Arguments that no longer make sense will need to be deleted. Pretty much any Argument attacking the Relevance of a moved Argument will be suspect, given that the entire context in which the Argument was made will have changed.
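
As a rough sketch of the single-step version, with an assumed record shape and deliberately ignoring the sub-Arguments of any deleted attacks:

```python
def move_argument(arguments: dict[str, dict], argument_id: str, new_target_id: str) -> None:
    """Re-target a misplaced Argument and prune now-meaningless Relevance attacks."""
    # Change the target of the Argument being moved to the more appropriate one.
    arguments[argument_id]["target_id"] = new_target_id
    # Any Argument attacking the Relevance of the moved Argument is suspect,
    # since the Context in which that Relevance was judged has changed.
    stale = [
        arg_id
        for arg_id, arg in arguments.items()
        if arg["target_id"] == argument_id and arg.get("attacks_relevance")
    ]
    for arg_id in stale:
        del arguments[arg_id]
```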

3.4.2.2.5 Group Arguments

Very often, multiple Arguments may be presented which are distinct, but are making more or less the same point. Several related anecdotes might be grouped together into a single heading, or multiple studies of the same topic.

Imagine, for example, these Arguments for the Claim "I should get a tattoo on my arm."

Study A shows that 3 out of 4 people prefer people with tattooed arms over those without.

Study B shows that in Utah, tattooed arms are on the rise.

Study C shows that having no tattoo on your arm is still king.

Under such circumstances, it may make sense to group these Arguments under a single heading, rather than leaving them on their own.

People with tattoos on their arms are more popular than those without.

This new statement would be used as a single Argument in the debate, making it easier to understand. The old Arguments would then become sub-Arguments of the new Argument, but there's a subtle point to be made here: This new Argument requires a base Claim of its own.

Grouping Arguments, then, consists of the following Fundamental and Compound Actions:

  1. Create a new Argument, with a new base Claim
  2. Move each of the existing Arguments to become Arguments for or against the new Claim
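
A minimal sketch of this compound action, using the same kind of assumed dictionary-based records as the earlier examples (all identifiers here are illustrative):

```python
def group_arguments(claims: dict[str, dict], arguments: dict[str, dict],
                    argument_ids: list[str], heading: str,
                    new_claim_id: str, new_argument_id: str) -> None:
    """Group related Arguments under a single new heading Argument."""
    original_target = arguments[argument_ids[0]]["target_id"]
    # 1. Create a new Argument, with a new base Claim of its own.
    claims[new_claim_id] = {"id": new_claim_id, "title": heading}
    arguments[new_argument_id] = {
        "id": new_argument_id,
        "base_claim_id": new_claim_id,
        "target_id": original_target,
        "pro": True,
    }
    # 2. Move the existing Arguments so they now argue for or against the new Claim.
    for arg_id in argument_ids:
        arguments[arg_id]["target_id"] = new_claim_id
```

In the tattoo example above, the three study Arguments would end up as sub-Arguments of the new "more popular" Claim.
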
Grouping Arguments is one of the principal activities that can help to keep debates quick to read and easy to understand. However, this action can clearly change the way one perceives a debate. For one thing, it could play into certain perceptual biases when several minor Arguments are grouped into a single, more relevant case. It has also already been noted that unless the Scoring Methods are well-designed, the actual scored results may be changed by this otherwise very positive act.

As a final warning, it is important to note that re-grouping is not only something that can be done intentionally to harm one side of a debate; it can also be very difficult to perform consistently across multiple subjects and multiple Curators. An in-depth set of heuristics is needed to guide these decisions in a consistent manner.

3.4.2.2.6 Clone Claim

In the course of a discussion, it is common to see the main subject of the debate evolve over time. An extreme example of this is the informal logical fallacy known as Moving the Goalposts. However, even (or especially) in the case of a constructive discourse, the original debate may require some refinement in order to come to a reasonable conclusion on the subject.

Consider the debate regarding arm tattoos described above:

People with tattoos on their arms are more popular than those without.

Suppose that during the course of the debate, it becomes clear that while tattooed arms are obviously better when it comes to the 65 and older crowd, teenagers clearly prefer plain arms. The canonical Claim above would naturally have mixed results, with an outcome of, perhaps, a 47% Truth Score. While that is interesting to observe, it becomes more interesting to look at each of the more specific cases on their own:

People 65 years of age or older with tattoos on their arms are more popular than those of similar age without.

Teenagers with tattoos on their arms are less popular than those without.

Each of those Claims would bear the most relevant parts of the more generalized debate, and have, perhaps, a Truth Score closer to 70% (for example). It would then be interesting to create new debates with even more specifics, such as:

Tattooed arms are more popular than ones without for people 65 years of age and older on the Island of Tasmania for the period of 2010 through 2018.

And so on.

The creation of related Claims is a very complicated action to perform via strictly Fundamental Actions, and so should be facilitated by more advanced tools in the long term. This activity also relates to a much deeper subject that we have chosen not to fully address in the current version of this white paper: hierarchies of Claims. This concept supports navigating through varying levels of Claim specificity by way of changes to the Context elements involved. We are looking forward to developing this concept further in the near future.

3.4.2.2.7 Refine Argument

If a Claim can be refined, what does that mean for Arguments that are based on that Claim? Consider, again, the case of the tattooed arms:

I should get a tattoo on my arm

Imagine that we use, as an Argument in favor, the original Claim:

People with tattoos on their arms are more popular than those without.

However, another Claim has already been cloned and refined from this original point:

People 65 years of age or older with tattoos on their arms are more popular than those of similar age without.

If the I in the original Claim is amongst the group of 65 and older, then this second Claim is more appropriate than the original in the debate. It's possible, then, that both Arguments could be attached to the same target Claim. This would slant the result of the debate towards the side that they support, since it would essentially be double-scoring the same argument. In this case, the proper action would be to delete the less relevant Argument.

If only the more general Argument has been made, however, this essentially short-changes the main point. The more general Claim is probably less true, and is certainly less directly relevant than the more specific case, and thus would result in a lower Strength Score than if the more specific Argument were used. In this case, it is appropriate for the Curator to replace the less-specific Argument with the more specific one.

Refine Argument could be described as a combination of the following actions:

  1. Delete the less appropriate Argument
  2. If one does not already exist, create a more specific Claim by way of the Clone Claim Action
  3. Create a new Argument based on the most appropriate Claim available
The "most appropriate" Argument would be the one that has the highest Strength Score (Relevance and base Truth combination). If a new Argument is being created, there will not yet be a Relevance Score, in which case the Curator must use their best judgement. Context in the Target Claim might help guide this decision, but in general it should not be a difficult decision to make.

3.4.3 On Annoying Debaters

Although Curation should help to keep the Canonical Debate readable, efficient and productive, it will have an impact on Scoring. While Objective Scores will be updated automatically, Subjective Scores will require each previous participant to evaluate the change and register a new vote.

The platform can facilitate this need by sending a notification to each Debater that had previously registered interest, but if these changes are ongoing and frequent, it could lead to fatigue and disinterest by Debaters.

Fortunately, the life cycle of a Claim is such that the relevant and unique Arguments will be provided fairly quickly until all the interesting options are exhausted. At that point, given that the debate is canonical, new Arguments will most likely be duplicates of previous Arguments, and therefore deleted or merged before many people have bothered to vote on the new item. There will generally be a flurry of activity in the beginning until the debate is properly organized, and then the changes should settle down.

Regardless, it is important that Curators examine new Claims and Arguments as soon as possible in order to take the proper course of action before too many Debaters have gone to the trouble of scoring the unexamined elements. This will be a major design concern for the Canonical Debate platform.

3.4.4 Viewing Previous Versions

Having Curators, a separate class of users with special privileges, poses a risk that they might abuse their powers to influence a debate. The best defense against this type of risk is to provide full transparency as to what actions each Curator has taken. This requires that each action performed by a Curator be auditable, and be associated with that specific Curator.

To this end, the Canonical Debate platform will provide access to the entire history of any Debate. It will be possible to view what changes were made at what time, and by whom. The identity of Debaters will be protected; however, it will be important to show which actions were performed by Curators, and by which Curator. This will provide the opportunity for others to challenge the objectivity of Curators who seem to be abusing their position.

Beyond the need for checks against abusive Curators, it would be interesting for many other reasons to be able to view the evolution of a debate over time. To satisfy the curiosity of the average user, assuage the fears of doubters, and provide a historical record for researchers, we believe that providing a complete audit trail is an essential part of the platform.
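
One straightforward way to realize such an audit trail is an append-only log of Curation Actions. The entry format below is only a sketch of the kind of information that would need to be captured, not a committed schema:

```python
import json
import time

def record_curation(log_path: str, curator_id: str, action: str, payload: dict) -> None:
    """Append one Curation Action to an append-only audit log (illustrative only)."""
    entry = {
        "timestamp": time.time(),
        "curator": curator_id,  # Curators are identified; ordinary Debaters remain anonymous
        "action": action,       # e.g. "merge_duplicates", "move_argument"
        "payload": payload,     # the ids and values involved, enough to replay the change
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
```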

3.4.5 Undoing Curation Actions

Whether the actions of a Curator are determined to be abusive, or are simply an innocent mistake, there remains the question of what should be done. Any Curation Action can be reversed by viewing what the debate looked like in the past, and performing a second curation which puts things back as they were.

The problem is that many of these actions are destructive, at least with regards to Subjective Scoring Methods, which generally require that scores be wiped clean after a Curation Action. Thus, even if a bad actor is detected, and their actions are manually undone, there may be a lasting impact in the sense of the Subjective Scoring (votes will be lost).

One approach still under investigation is the possibility of reversing, or "undoing", Curation Actions that have been deemed biased or inappropriate. This would essentially entail undoing the destructive action, rolling back to the previous version as if the action had never happened. All Belief Scores would be restored, deleted Arguments would be restored, and so on.

We have not yet determined the rules and conditions behind such a feature. For example, what should be done if new Arguments have been based on the invalid changes that were made? What about votes that were cast prior to the reversal? What would even trigger such a reversal? These questions are still under consideration, and should be explained in future versions of this document.

3.5 Other Features

This is by no means an exhaustive list, but we would like to highlight some interesting features that we believe would either be necessary to maximize the usability of the platform, or which present interesting opportunities for users of this platform.

3.5.1 Notifications

Consistent with the goal of motivating users to interact with the platform, the Canonical Debate needs to support sending notifications to users whenever a Debate of their interest receives new information. Especially in the case where a Debater has registered a Belief Score for a topic that has been curated, it is important to call them back to give an updated opinion on the new organization.

Notifications can also be a very powerful tool for helping resolve RFIs when they occur. Debaters can register to receive RFI notifications based on a specific Context element, or combination of Contexts, essentially declaring themselves an expert in (or at least a person with a great interest in) a specific topic. Whether or not they are truly an expert is beside the point - the platform is designed to create a meritocracy of ideas, or at least of arguments.
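
As an illustration of how such routing might work, assume Debaters subscribe to sets of Context elements; the data shapes and the simple overlap policy below are assumptions, not a settled design:

```python
def match_rfi_subscribers(rfi_contexts: set[str],
                          subscriptions: dict[str, set[str]]) -> list[str]:
    """Find Debaters to notify about an open RFI, based on Context subscriptions."""
    return [
        debater_id
        for debater_id, contexts in subscriptions.items()
        if contexts & rfi_contexts  # notify on any overlap between subscription and RFI
    ]

# Example: a Debater subscribed to {"gun control", "public health"} would be
# notified of an RFI tagged with the Context element "gun control".
```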

3.5.2 Search

Many modes of search are possible on the Canonical Debate platform, and each should help promote discovery of information in a quick and unbiased manner.

3.5.2.1 Text Search

The usual text search capabilities should be enabled to provide users the ability to locate any Claim that matches their interests on a best-effort basis. This includes the standard techniques of matching related terms and language-aware variants, searching every text field (including related Context information) in a single query, and ranking the results based on the fields which matched and the accuracy of the match.

The platform should specifically not include rankings based on previous user activity, or based on declared preferences. Such algorithms are not transparent to users, nor necessarily to their creators, and could lead to the creation of information bubbles.
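
A toy illustration of neutral, field-weighted ranking follows; the fields, weights, and tokenization are arbitrary assumptions, and a real deployment would rely on a proper search engine with stemming and language-aware matching:

```python
def rank_claims(query: str, claims: list[dict]) -> list[dict]:
    """Rank Claims by field-weighted term overlap, with no personalization."""
    weights = {"title": 3.0, "contexts": 2.0, "description": 1.0}
    query_terms = set(query.lower().split())

    def score(claim: dict) -> float:
        total = 0.0
        for field_name, weight in weights.items():
            value = claim.get(field_name, "")
            text = " ".join(value) if isinstance(value, list) else value
            total += weight * len(query_terms & set(text.lower().split()))
        return total

    # Ranking depends only on the match itself -- never on user history or preferences.
    return sorted(claims, key=score, reverse=True)
```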

3.5.2.2 Graph Navigation

A natural form of navigation throughout the platform will be to choose a starting point, a Claim of interest, often by way of some external search or link, and examine the space of ideas, evidence and arguments connected to that initial topic. This type of navigation is the default mode of understanding the issues and ideas related to a debate, as readers seek to challenge or understand the reasoning behind specific arguments, declare their Beliefs at ever-more detailed levels, and so on.

Though not strictly speaking a form of search, it is a mechanism of discovery. There will generally be little surprise while “drilling down” towards more specific Claims upon which Arguments are based. However, there will be an element of unpredictability and surprise when looking in the opposite direction, to see the Arguments in which a certain Claim has been used. In many cases, the reuse of a given Claim may be very limited, or there may be none at all. However, more fundamental Claims, such as the Claims which represent logical fallacies, will yield a complete listing of Arguments of this type.
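
In the assumed record model used by the sketches above, the two directions of navigation are simply two different lookups:

```python
def drill_down(claim_id: str, arguments: dict[str, dict]) -> list[str]:
    """Arguments attached to a Claim: drilling toward its supporting evidence."""
    return [arg_id for arg_id, arg in arguments.items()
            if arg["target_id"] == claim_id]

def look_up(claim_id: str, arguments: dict[str, dict]) -> list[str]:
    """Arguments that use the Claim as their base: how widely the Claim is reused."""
    return [arg_id for arg_id, arg in arguments.items()
            if arg["base_claim_id"] == claim_id]
```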

3.5.2.3 Context Navigation

Similar to Graph Navigation is the type of exploration we discussed previously that is possible with the use of Context. Knowledge Graph paths provide a way to find issues related to any single subject, item, location, or other category supported by these ontologies.

3.5.2.4 Interest-based Navigation

As users (anonymously) register their Beliefs, they accumulate a trail of interests and opinions. This creates the possibility for users to discover new topics of interest on which other users have voted. Users can choose to browse the topics of interest of users that agree with an opinion of their own, or of those that have disagreed with them. This browsing could be based on the collective interests of a group of users, or could follow the opinions of a single user. These are all interesting opportunities to browse and discover what interests other users of the platform.

3.5.3 Opportunities for Machine Learning

Such a rich data set provides a rather limitless field of possibilities for data mining and machine learning algorithms to support the curation process and to provide tools for analysis. We have already been approached by groups interested in possible projects that could use machine learning in conjunction with our work on the Canonical Debate. Below are a few examples of the types of projects that could use machine learning to support the platform.

3.5.3.1 Context Identification

One of the most difficult tasks will be to ensure that the platform remains canonical, by eliminating duplicate entries for Claims. An important part of this will be to prevent users from creating duplicates in the first place. Many customer support sites already include functionality to prevent customers from repeating questions that have already been answered, by attempting to guess the question as the user is typing. Often, before the user is allowed to ask the question, the system returns several guesses as to what they were trying to ask, and asks the user to first verify whether one of the items presented resolves their issue. In a similar vein, the Canonical Debate platform could perform intelligent searches on existing Claims to show the user those that appear similar to the one they are trying to create.

Taking this a step further, it is important that users attach Context elements to their Claims in order to help with the organization. Similar Context to existing Claims would also be a possible indicator of Claim duplication. However, this is a fairly onerous task to ask the average user to do quickly and correctly. Machine learning algorithms could make a significant impact in the usability of the platform by automatically attempting to determine the Context elements implied by the Claim's text, possibly providing a set of options to choose from for each word or phrase in the statement. This capability would also dramatically improve the success of the previous strategy of suggesting existing Claims that match what the user is trying to do.
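
As one possible baseline, even a simple text-similarity pass could surface likely duplicates while the user is typing. The scikit-learn approach and the similarity threshold below are illustrative assumptions, not a chosen design:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_similar_claims(draft: str, existing_titles: list[str], top_n: int = 5):
    """Suggest existing Claims that look similar to the one being drafted."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(existing_titles + [draft])
    similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(existing_titles, similarities),
                    key=lambda pair: pair[1], reverse=True)
    # Only surface reasonably close matches (threshold chosen arbitrarily here).
    return [(title, float(score)) for title, score in ranked[:top_n] if score > 0.3]
```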

3.5.3.2 Argument Mining

Argument mining is already a very rich field of research, and many tools have already been developed by groups such as ARG-tech. As this technology progresses, it becomes increasingly viable to automatically generate Canonical Claims and related Arguments based on content found on the Internet. If these tools are combined with the Context identification algorithms described above, the Canonical Debate will have the capability to enrich the Debate Graph at the same pace with which these discussions are created and discussed across the Internet.

3.5.3.3 Stopping Trolls

One of the original motivations for creating this platform is the proliferation of online trolls, who seem to be crowding out constructive debate and overwhelming attempts at reasonable conversation.

The Canonical Debate platform is designed to provide mechanisms to prevent standard trolling tactics, including removing individuals from the debate itself (and thus avoiding personal attacks), ensuring that trolls cannot create multiple accounts in order to maintain multiple online personalities, and making Claims canonical so that it is not possible to repeat Arguments that have already been made (and to some degree resolved). The platform even attempts to include would-be trolls in the conversation by providing mechanisms to evolve their exaggerated statements into more reasonable Claims that actually address the points they are trying to make.

Nevertheless, there is the possibility that individuals with bad intentions could still perform a type of "Denial of Service" attack by repeatedly creating duplicate Claims with inflammatory language, which users will see until Curators have time to clean them up. The platform should therefore support tools to detect this type of intentionally bad activity, and prevent these users from continuing their attacks. This type of detection will depend on multiple forms of pattern matching, based both on language and on more general patterns of activity.
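
A very simple activity-pattern check of this kind might track, per account, how many near-identical Claims are created within a sliding time window; the thresholds and the naive text normalization below are purely illustrative:

```python
import time
from collections import defaultdict, deque

class FloodDetector:
    """Flag accounts that repeatedly create near-identical Claims in a short window."""

    def __init__(self, max_repeats: int = 3, window_seconds: int = 3600) -> None:
        self.max_repeats = max_repeats
        self.window = window_seconds
        self.recent: dict[str, deque] = defaultdict(deque)  # user id -> (time, text)

    def is_flooding(self, user_id: str, claim_text: str) -> bool:
        now = time.time()
        normalized = " ".join(claim_text.lower().split())
        history = self.recent[user_id]
        while history and now - history[0][0] > self.window:
            history.popleft()  # drop entries that have aged out of the window
        history.append((now, normalized))
        repeats = sum(1 for _, text in history if text == normalized)
        return repeats > self.max_repeats
```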

There are many AI projects in progress aimed at fighting online trolls, and the Canonical Debate platform can benefit from these and other efforts to further prevent this type of activity, as well as provide a new subject for research and development.

3.5.3.4 Identifying Language Temperature

Another hot topic of research is identifying the "temperature" of language used in online discussions. These tools could be used to detect when the description of a Claim or Argument goes beyond dispassionate, reason-based argument and turns to rhetoric that attempts to influence through bias and emotion.

While these types of arguments work contrary to some of the platform objectives, they could be used to benefit another objective: that of teaching people about argumentation and perspectives. Rather than eliminate or "correct" statements that carry a certain emotional content, it would be interesting to maintain multiple versions of the Claim and Argument texts, each one with a slightly different emotional temperature. Given the proper tools to detect and categorize these variations, the platform could offer yet another type of perspective: emotional versus completely logical (or "civilized") debate styles. It would be a fantastic learning opportunity to be able to compare the two.

3.6 Technologies

In this white paper we will not go into too much detail as to the technologies which can or should be employed, except where specifically significant to the project objectives.

3.6.1 Blockchain

Blockchain technology is currently on the rise. Although Bitcoin is its most well-known application, what many people do not realize is that the Blockchain on which it is built represents a technological revolution in its own right.

For the purpose of the Canonical Debate, Blockchain represents the very first widespread solution for a fully-distributed, fully-trustless system. This means that any person (or general-purpose computer) can participate. It means that every participant can view the entire history of activities on the system, and verify that they haven't been altered by someone trying to manipulate the data. It also means that the system, and the data, will survive as long as there is at least one participant up and running.

Blockchain technologies are currently also being used to enable universal identity solutions, which will be critical for the purpose of preventing fake accounts. Voting solutions and solutions for incentivizing "good behavior" are also undergoing experimentation using this technology.

Blockchain and related technologies are still in their infancy, and there are many problems that remain to be solved. Initial versions of the Canonical Debate will most likely not make use of this technology, as it increases the complexity of the work, and is currently a moving target in terms of best-in-class solutions. What is clear is that some of the goals we have laid out in our vision can only be achieved with the help of this new class of technology.

3.7 Future Evolutions

This first version of our white paper represents only the initial vision of what is possible vis-à-vis the Canonical Debate. There are many concepts that we would have liked to have included already in this first iteration. However, in order to keep focus and make sure that we could accomplish at least a small part of that vision, we decided to limit our scope for now to the concepts described above.

This section gives a first glance at some of the ideas that we have agreed to put off until after we have created a first version of the platform, and have gained a better understanding of what we have proposed through actual experience.

3.7.1 Claim Hierarchies

We have previously spoken several times of an indirect relationship between Claims that are in essence alike, but contain different levels of specificity. It turns out that if we can make this relationship more explicit, some very interesting things can happen.

As an example, consider the following Claim used as a dialectic on the program Intelligence Squared:

Humanitarian intervention does more harm than good.

The majority of the participants focused on a subset of the Claim above:

Humanitarian military intervention does more harm than good.

However, one of the participants of the debate, Bernard Kouchner, co-founder of Doctors Without Borders, essentially considered the debate closed when he used the argument:

Humanitarian medical intervention does more harm than good.

Each of these latter Claims are obviously related to the original Claim in some way. They basically add specificity to the original Claim, placing a tighter constraint on the scope of the discussion. More specifically, the original claim would be associated with the Context element humanitarian aid. If the ontology supports it, the second Claim would be supported with the more specific Context of humanitarian military aid. If this is not supported by the knowledge graph, another option would be to add the Context element military. In other words, the Claims are related by sharing the same Context elements, with slight refinements or variations.

As should be evident, it is much easier to prove (or disprove) a more specific Claim than it is to support a more general one. Thus, the original Claim would probably not have a "final" answer of true or false, but rather a set of trade-offs.

Imagine now that the claim regarding military intervention was determined to be mostly true, while the claim regarding medical aid was considered mostly false. The medical intervention Claim would then be used as an Argument against the general Claim, whereas the military intervention Claim would support it.

As the debate progressed, a number of trade-offs were brought up regarding the military end of humanitarian intervention:

Humanitarian military intervention does more harm than good IF there is no clear objective.

Humanitarian military intervention does more harm than good IF the intervention is under-budgeted.

Humanitarian military intervention does more harm than good IF the intervention is done for political gains rather than humanitarian principles.

And so on. If we were to reframe this Claim, a better way to state it might be something like:

Humanitarian military intervention should be an international tool used to prevent genocide and similar atrocities.

It should be clear by now that this is not a Claim that can be resolved in the general sense. Instead, the debate at this abstract level would act as a guide for debates in a more specific case:

We should apply humanitarian military intervention to the civil war in Yemen.

No, because we cannot build a coalition that has one definite objective.

This concrete debate can be viewed as an instance of the more abstract debate, with one of the conditional if Arguments being applied to the debate in a more concrete form, with specific discussion regarding the relevance and truth behind that condition.

Thus, we have yet another way of viewing and navigating the "Debate space", that of examining a single core concept at the abstract level, and then at different levels of concreteness. These different levels are defined by an attribute we already know well by now: Context. Thus, we can see that a general debate becomes more concrete as we either add Context elements ("The Yemeni Civil War"), or switch a Context from a more general one ("humanitarian aid") to a more specific one on an ontological hierarchy ("humanitarian military aid").
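
Expressed in the assumed record model used by earlier sketches, making a debate more concrete is essentially a transformation of its Context set; the refinement mapping and field names below are illustrative only:

```python
def make_concrete(abstract_claim: dict, added_contexts: set[str],
                  refinements: dict[str, str]) -> dict:
    """Derive a more concrete Claim from an abstract one by refining its Contexts."""
    # Switch general Context elements for more specific ones on the ontology,
    # e.g. "humanitarian aid" -> "humanitarian military aid".
    contexts = {refinements.get(c, c) for c in abstract_claim["contexts"]}
    # Add brand new Context elements, e.g. "The Yemeni Civil War".
    contexts |= added_contexts
    return {
        "title": abstract_claim["title"],         # a Curator would reword this text
        "contexts": contexts,
        "parent_claim_id": abstract_claim["id"],  # explicit link within the hierarchy
    }
```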

This relationship can become even more useful if we offer the functionality to create a new, specific debate from the more abstract one:

The United Nations should use military intervention in the dispute between Katy Perry and Taylor Swift.

The platform could then offer to automatically create Arguments and Base Claims to get the debate started:

The military intervention in the dispute between Katy Perry and Taylor Swift has clear objectives.

The military intervention in the dispute between Katy Perry and Taylor Swift has a sufficient budget to be successful.

Etc. Someone might notice that this is a pretty absurd action to recommend for such a trivial dispute, and make the Argument:

The dispute between these two singers is just a public, verbal spat, and doesn't warrant military action at all!

This brand new Argument, brought at the very specific level, can have repercussions across the debate hierarchy: shouldn't every debate regarding military intervention weigh the harm that a military intervention can do against the harm of doing nothing at all? This is another powerful feature that the platform can offer: when an Argument is added anywhere on a debate hierarchy, it can ask the creator if it should be applied at the more abstract levels. If so, it can automatically create a new Argument and Base Claim for each debate in the hierarchy in order to maintain consistency and make sure that no aspect is left unconsidered.

3.7.2 Problem Solving

The end goal of debate is usually to come to some conclusion as to a decision to be made, or a course of action to be taken. In a democracy, there must be a place where legislation and projects can be proposed, discussed, and finally taken to a vote on whether or not to adopt the proposal. Democracy Earth has proposed the Agora for this purpose as a place where members of an entity or organization can have these debates online in order to reach a better-informed consensus.

We of the Canonical Debate Lab have agreed with the members of the Democracy Earth Foundation that our proposal for a Canonical Debate is an important component of this system. Our current proposal is just the first step, and one that we believe is the foundational component for a more complete decision-making system. Once again, for the sake of starting with a simpler, more reachable goal, we have chosen to focus only on dialectic Claims. What follows is a description of the beginnings of a model for the much larger future vision.

3.7.2.1 Problem

A dialectic Claim could be used to represent a simple decision:

We should go to the new Indian restaurant for dinner.

This works fairly well until someone comes up with a counter-proposal for an Argument:

No, we should go to our favorite sushi bar.

This counter-argument is itself a second, opposite Claim that must be resolved at the same time as the first Claim. Of course, it will have as one of its opposing Arguments:

No, we should go to the new Indian restaurant.

This is a very simple and clear example of a cycle in the Debate Graph. It also points to the fact that the two Claims are intertwined within a larger issue, or Problem. In this case, it might be stated as:

Where should we go for dinner tonight?

Notice that this is stated in the form of an open question, which could have many possible answers. True decision-making really needs to be couched within the context of a larger problem that needs to be solved.

The Problem should include a title, a detailed description, and the Context to which it applies. Note that a Problem might itself be just one aspect of an even broader Problem to be solved. For example:

What should we do tonight?

3.7.2.2 Solution

A Solution is essentially a Claim that is a proposed solution to the Problem. It is possible that a Solution may only be seeking to resolve one distinct part of the Problem, rather than the whole issue. Also, some groups of Solutions may be mutually exclusive, while others may be compatible or complementary: We should all go out to dinner and We should go to a movie would probably not be mutually exclusive, whereas We should go to the Indian restaurant and We should go to the sushi bar would be.

Discussing the viability of a Solution for the problem would be very much like deciding the truth behind a Claim, which is why we have chosen to begin this project by focusing on the simpler dialectic form.

3.7.2.2.1 The Null Solution

One answer to any Problem is to actually deny that the Problem needs to be resolved at all. In the case of the dinner Problem, this might take the form:

We had a very late lunch, and I'm not even hungry.

In other cases, this might be simply denying that a problem exists:

Problem: How do we attract more tourists to our town?

Null Solution: We don't! I like this town the way it is, free of tourists.

3.7.2.2.2 Comparative Arguments

It would be extremely inefficient and confusing to attempt to resolve open-ended problems through the use of dialectic Claims. One of the main reasons for this is that there would naturally be a repetition of the same Arguments applied to each of the proposed Solutions.

Consider the following Argument against the sushi bar:

Sushi is more expensive than Indian food.

This would see itself repeated as an Argument in favor of the Indian restaurant:

Indian food is less expensive than sushi.

If we multiply the number of Solutions from two to three, four, or ten proposals, this would require a repetition of certain types of Arguments to every new instance. This would quickly become unmanageable.

Although we haven't come to a conclusion at this point regarding the exact way to model this type of Argument, it is evident that it would be more efficient to take a more comparative approach, rather than listing each Argument separately. This would entail ranking the Solutions according to certain criteria (discussed below).
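
One candidate model, among several still being weighed, is to score each Solution once against a shared set of Criteria and let each Debater apply their own weights. Everything in the sketch below (field names, the weighted-sum formula, the example numbers) is an assumption made for illustration:

```python
def rank_solutions(solutions: dict[str, dict[str, float]],
                   criteria_weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank proposed Solutions by each Debater's own weighting of shared Criteria."""
    ranked = [
        (title, sum(criteria_weights.get(criterion, 0.0) * score
                    for criterion, score in scores.items()))
        for title, scores in solutions.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Example: for a Debater who weighs cost twice as heavily as travel time,
# the Indian restaurant edges out the sushi bar.
print(rank_solutions(
    {"Indian restaurant": {"cost": 0.8, "time": 0.6},
     "Sushi bar": {"cost": 0.4, "time": 0.9}},
    {"cost": 2.0, "time": 1.0},
))
```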

3.7.2.3 Constraints

Problems generally come with a set of constraints as to what would be considered an acceptable solution. In the case of choosing a restaurant, one of those constraints might be the neighborhoods in which the participants are willing to eat. The amount of money each person is willing to spend, the time they have for dinner, and the type of food that the participants are able or willing to eat would all be considered Constraints.

Constraints can serve as the basis for Arguments in favor, or more commonly against a Solution. For example, if there is the Constraint:

The movie starts at 8:30PM

Then one could make the Argument against the Indian restaurant proposal:

The Indian restaurant is all booked up for reservations until 9:30PM.

(Note that this would actually be an Argument based on a type of Multi-premise Claim, which would include the previous statement, plus the movie starts at 8:30PM.)

The Constraints themselves are as debatable as anything else in the platform. One Debater may decide that they are willing to spend any amount on dinner, while another would like to keep it below $25 a person. Most of the diners may make the assumption that they only wish to eat somewhere nearby, while another may argue that they have a private jet, and would love to take it out of the country for dinner...

3.7.2.4 Criteria

The selection of one Solution over another requires some kind of framework for making the decision. This framework can usually be expressed as a set of criteria against which each solution should be weighed.

In some cases, these Criteria may be purely logical, such as the cost of each Solution, the time required, or the consumption of some other limited resource. In other cases, the Criteria may be value-based, or completely subjective, such as comparing the impact each Solution might have on "personal freedom" versus "security".

Consistent with the philosophy of the Canonical Debate platform as expressed thus far, we do not intend to develop a platform for voting on which would be the best Solution (or combination of Solutions), but rather to allow each user to examine their own values, choose their own priorities, and come to a conclusion about which is the right answer according to their own beliefs.

3.7.3 Multi-language and Multi-cultural Support

This is a subject that has been avoided up to now, and yet is one of the most important issues for a Canonical Debate: if we intend to be all-inclusive, the platform must provide adequate support for every person that wishes to be involved in the process. This includes supporting all languages, but it also means taking into account local and cultural differences.

This is not a simple task. There are generally two approaches available for global support: either provide translations for each element into every supported language, or provide a separate instance for each variation. Wikipedia takes the latter approach, allowing each language group to define their own version of the encyclopedia. Unfortunately, this leads to a bit of an imbalance, as the English language version contains well over 5 million articles, far more than the roughly 2.2 million articles available in German. The quality of each article varies from language to language as well.

The option to offer translations for each Claim and Argument seems more compatible with the aims of the Canonical Debate. Claims are meant to be canonical, which means that having a separate Debate Graph for each language would defeat the purpose. An Argument created for the Portuguese version of a debate might be absent from the French version, to the detriment of French users. Claims are related via Context to a Knowledge Graph, and this ontology is meant to be universal, independent of language or culture.

That being said, values vary from culture to culture, so combining Popular Scores across different cultures might lead to distorted or unexpected results. Perspectives might require a "geographical fencing" feature in order to provide a more relevant result for a given constituency.

Multiculturalism is a very complex topic, and so we have decided to give this subject more thought before coming to a final decision regarding the solution for the Canonical Debate.

4. About

The Canonical Debate Lab is a community project of the Democracy Earth Foundation. It is made possible by collaborators and supporters around the world. While we maintain a close relationship with the Democracy Earth Foundation, we hold our own separate community events, maintain our own Slack Team and Github repository, and act as an independent organization.

4.1 Team & Collaborators

Timothy High, Bruno Sato, Stephen Wicklund, Iwan Itterman, Kevin Wiesmueller, Bentley Davis, Jamie Joyce, Oz Fraier, Benjamin Brown, James Tolley.

4.2 Acknowledgements

These are some of the minds that inspired the ideas expressed in this document.

Mark Klein (MIT).

4.3 Supporters

These organizations supported our work through partnerships and recognition of our research and development efforts.

Democracy Earth Foundation, Digital Peace Talks, Internet Government.
