Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values, 41(1), 93–117. https://doi.org/10.1177/0162243915606523
Abstract: Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.
Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1461444816676645. https://doi.org/10.1177/1461444816676645
Review: This theoretical article interrogates the ideal of transparency in the context of algorithmic accountability. It argues that, because algorithms are sociotechnical systems, transparency cannot hold them accountable: seeing inside a component does not yield comprehension of the system as a whole. The authors review the history of the transparency ideal, which emerged during the early Enlightenment, and make two points: the ideal assumes that “seeing equals knowing”, and it has also served as a way to surveil and control people. They argue that transparency is an inadequate means of keeping algorithms accountable, in part because algorithms must be seen as sociotechnical systems of humans and non-humans. From an actor-network theory point of view, this means that their workings can “only be apprehended through relations”, which requires “looking across” rather than “looking inside” (transparency). The authors then present 10 significant limitations of transparency as a type of accountability, and call for an alternative form of algorithmic governance that uses these limits as conceptual tools.
This article gives interesting insights into the history of transparency as an ideal and provides a useful framework for exploring the limitations of transparency in different situations. Its clear argument, structured explanations, and suggestions on how to use the limitations of transparency as conceptual tools make it a good starting point for reflection on algorithmic accountability.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512. https://doi.org/10.1177/2053951715622512
Review: This article addresses the issue of opacity in machine learning algorithms, especially where opacity leads to discriminatory classification and digital inequality. It distinguishes three types of opacity: intentional secrecy, technical illiteracy, and opacity arising from the characteristics of machine learning algorithms themselves. It then focuses on the third type, testing machine learning algorithms (a neural network applied to an image recognition task, and a spam filter) to try to make sense of the code, and establishing that the available explanatory elements can yield only a partial explanation. The author, Jenna Burrell, is an associate professor at UC Berkeley’s School of Information with a background in sociology and computer science. Here, she deliberately focuses on the technical categories of algorithms in use, taking an approach different from that of studying “socio-technical systems in the wild”. The “lightweight form of code audit” the author performs is an interesting take on machine learning algorithm transparency and offers insights into the strengths and shortcomings of a code audit, which makes the article a useful complement to Ananny and Crawford’s work cited above. Although it focuses only on machine learning algorithms, it is a useful tool for evaluating the different types of opacity in algorithmic systems, as well as for understanding what is actually explainable in machine learning programs, in order to find the appropriate solution to prevent algorithmic harm.
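To make concrete what a “lightweight form of code audit” can and cannot reveal, here is a minimal, hypothetical sketch (not Burrell’s actual experiment): a toy spam classifier whose learned per-token weights are inspected directly. The corpus, model choice, and scikit-learn pipeline are illustrative assumptions; the point is that such inspection yields only a partial, feature-level explanation of the model’s behaviour.

```python
# Toy "lightweight code audit" of a spam filter: inspect which tokens the
# trained model weights most heavily. Illustrative only; the corpus and the
# scikit-learn (>= 1.0) pipeline are assumptions, not Burrell's setup.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled messages (1 = spam, 0 = not spam).
texts = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to tuesday", "lunch tomorrow?",
    "free offer limited time", "project update attached",
]
labels = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The "explanation" available from the code itself: per-token weights.
# This shows which features push a message toward "spam", but not why those
# weights emerged from the training data -- a partial explanation at best.
tokens = vectorizer.get_feature_names_out()
weights = model.coef_[0]
for i in np.argsort(weights)[::-1][:5]:
    print(f"{tokens[i]:>10s}  weight={weights[i]:+.2f}")
```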
Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
Review: This paper investigates reverse engineering as a method for algorithmic accountability reporting. It starts by breaking down algorithmic power into the “atomic decisions” algorithms make (prioritization, association, classification, filtering), each of which raises specific questions about algorithmic accountability. It then looks at 5 cases of reverse-engineering algorithms (4 through interviews with investigative journalists, and 1 through the author’s own participation in a reverse-engineering effort), describes the challenges and opportunities they illustrate, and suggests informational dimensions that could be disclosed as part of algorithmic accountability. The work contributes both a theoretical lens for exploring the power of algorithms and an addition to the empirical body of knowledge on algorithmic accountability. Its conclusions about what could be disclosed offer interesting paths for exploration: (1) the criteria driving the decisions in the algorithm; (2) the data that acts as input to the algorithm; (3) accuracy, including false-positive and false-negative error rates; (4) descriptions of training data and its potential bias; and (5) the definitions, operationalizations, or thresholds used by similarity or classification algorithms. Although algorithmic accountability is studied here through the lens of investigative journalism, certain elements of the methodology could be used in regulation or in ethical guidelines within public administration to ensure algorithmic accountability. In addition, the author uses participant observation to reverse engineer an algorithm himself, which is an interesting methodology for new research on specific algorithms. It is therefore a very useful, accessible paper for someone wanting to start reflecting on algorithmic accountability.
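The input-output probing at the heart of such reverse-engineering audits can be illustrated with a small, hypothetical sketch (not drawn from any of the paper’s five cases): hold an input profile constant, vary one attribute at a time, and observe how an opaque decision changes. The scoring function below is invented for illustration; in a real audit it would be an external system observed only through its outputs.

```python
# Sketch of black-box input/output probing, the core of a reverse-engineering
# audit. The decision function below is hypothetical -- in a real audit it
# would be an opaque external system, not code the auditor can read.
import itertools

def black_box_decision(applicant):
    """Stand-in for an opaque scoring algorithm (hypothetical logic)."""
    score = 600 + 2 * applicant["income_k"] - 50 * applicant["defaults"]
    if applicant["zip_code"].startswith("9"):  # an opaque, possibly unfair criterion
        score -= 40
    return "approved" if score >= 650 else "denied"

# Probe: keep a baseline profile fixed, vary attributes, record the outcomes.
baseline = {"income_k": 40, "defaults": 0, "zip_code": "10001"}
for zip_code, defaults in itertools.product(["10001", "90210"], [0, 1, 2]):
    applicant = dict(baseline, zip_code=zip_code, defaults=defaults)
    print(zip_code, defaults, "->", black_box_decision(applicant))

# Systematic variation like this lets an auditor infer which inputs drive the
# decision (here, prior defaults and ZIP code) without ever seeing the code.
```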
D’Ignazio, C., & Bhargava, R. (2015). Approaches to Building Big Data Literacy. In Proceedings of the Bloomberg Data for Good Exchange Conference.
Review: This paper reflects on big data literacy as a way to address 4 issues related to big data projects: lack of transparency, extractive collection, technological complexity, and the control of impacts. It draws on Paulo Freire’s work on empowerment through literacy education and on the concept of Popular Education associated with him, in which literacy is achieved through “learner guided explorations, facilitation over teaching, accessibility to a diverse set of learners and a focus on real problems in the community”. It takes existing notions of data literacy and adapts them to the new issues raised by big data. It proposes different ways of developing big data literacy and calls for pedagogy geared not only towards users but also towards the technical designers of algorithms, especially to raise their awareness of the ethical concerns surrounding algorithms. This work, written by Catherine D’Ignazio, an assistant professor of data visualization at Emerson College, and Rahul Bhargava, a research scientist based at the MIT Media Lab, offers both a theoretical framework for big data literacy and suggestions for putting it into practice. The emphasis on the need to educate everyone, not only users, shows a true commitment to putting technology in the service of what the authors themselves call “social good”. This is especially important in the context of an overview of algorithmic accountability, as literacy is a solution to a certain type of algorithmic opacity. It is as important to reflect on the legal regulations that can force the opening of algorithmic “black boxes” as it is to reflect on what tools to give citizens so that they are better informed about the technologies they interact with every day (as they are themselves part of a wider socio-technical system).
Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., … Wood, A. (2017). Accountability of AI Under the Law: The Role of Explanation. ArXiv:1711.01134 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1711.01134
Abstract: The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before---applications range from clinical decision support to autonomous driving and predictive policing. That said, there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems was recently debated in the EU General Data Protection Regulation, and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. In this work, we review contexts in which explanation is currently required under the law, and then list the technical considerations that must be considered if we desired AI systems that could provide kinds of explanations that are currently required of humans.
Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For (SSRN Scholarly Paper No. ID 2972855). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2972855
Abstract: Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive.
However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure.
Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
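As a rough illustration of the “pedagogical”, subject-centric explanations the abstract refers to (a sketch under assumed toy data, not the authors’ proposal), one can learn a simple surrogate model from a black box’s input-output behaviour in the neighbourhood of a single query point and read local feature effects off the surrogate’s weights, without taking the underlying model apart.

```python
# Sketch of a "pedagogical", subject-centric explanation: approximate an opaque
# model locally, around one query point, with a linear surrogate learned purely
# from its outputs. The black-box function and data are illustrative assumptions.
import numpy as np

def black_box(X):
    """Stand-in for an opaque model observed only through its predictions."""
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

rng = np.random.default_rng(0)
query = np.array([1.0, -0.5, 2.0])  # the decision subject's data point

# Sample a neighbourhood around the query and record the black box's outputs.
neighbourhood = query + rng.normal(scale=0.1, size=(500, 3))
outputs = black_box(neighbourhood)

# Fit a local linear surrogate (intercept plus one weight per feature).
design = np.hstack([np.ones((500, 1)), neighbourhood - query])
coeffs, *_ = np.linalg.lstsq(design, outputs, rcond=None)

# The surrogate's weights are the "explanation": how each feature moves the
# prediction near this particular subject, learned from outside the model.
for name, w in zip(["feature_0", "feature_1", "feature_2"], coeffs[1:]):
    print(f"{name}: local effect ~ {w:+.2f}")
```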
Neyland, D. (2016). Bearing Account-able Witness to the Ethical Algorithmic System. Science, Technology, & Human Values, 41(1), 50–76. https://doi.org/10.1177/0162243915598056
Abstract: This paper explores how accountability might make otherwise obscure and inaccessible algorithms available for governance. The potential import and difficulty of accountability is made clear in the compelling narrative reproduced across recent popular and academic reports. Through this narrative we are told that algorithms trap us and control our lives, undermine our privacy, have power and an independent agential impact, at the same time as being inaccessible, reducing our opportunities for critical engagement. The paper suggests that STS sensibilities can provide a basis for scrutinizing the terms of the compelling narrative, disturbing the notion that algorithms have a single, essential characteristic and a predictable power or agency. In place of taking for granted the terms of the compelling narrative, ethnomethodological work on sense-making accounts is drawn together with more conventional approaches to accountability focused on openness and transparency. The paper uses empirical material from a study of the development of an “ethical,” “smart” algorithmic videosurveillance system. The paper introduces the “ethical” algorithmic surveillance system, the approach to accountability developed, and some of the challenges of attempting algorithmic accountability in action. The paper concludes with reflections on future questions of algorithms and accountability.