<!DOCTYPE html>
<html lang="pt-BR">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>AI Ethics Guide</title>
<link rel="stylesheet" href="./assets/css/reset.css" />
<link rel="stylesheet" href="./assets/css/style.css" />
<link rel="shortcut icon" href="./assets/img/favicon.ico" />
</head>
<body>
<header class="header">
<!-- Nav -->
<nav class="nav">
<a href="index.html" class="nav__logo">
<img class="logo" src="./assets/img/logo.png" alt="" srcset=""
/></a>
<ul class="menu">
<li class="menu__item"><a href="index.html">Introduction</a></li>
<li class="menu__item"><a href="game.html">Guide</a></li>
<li class="menu__item"><a href="principles.html">Principles</a></li>
<li class="menu__item"><a href="tools.html">Tools</a></li>
<li class="menu__item"><a href="tradeoffs.html">Trade-offs</a></li>
<li class="menu__item"><a href="about.html">About</a></li>
</ul>
</nav>
<!-- Main Section -->
<section class="principal">
<div class="content__title">
<h1 class="title">Ethics in AI</h1>
<h3 class="subtitle">
Guide for Artificial Intelligence Ethical Requirements Elicitation
</h3>
<a href="game.html" class="button__game">Start Guide</a>
</div>
<div>
<picture>
<img
class="illustration"
src="./assets/img/card-game.svg"
alt="RE4AI Ethical Guide logo"
/>
</picture>
</div>
</section>
</header>
<!-- Section Principles -->
<main class="container main">
<h2 class="about__title">Principles</h2>
<p>
The emergence and evolution of software that makes use of AI techniques,
mostly ML, amplifies both the occurrence of accidents and awareness of
the associated ethical issues [1]. In the literature, ethics in AI has
generally been addressed at the theoretical level, through ethical
guidelines [2]. In the last three years, there has been a
veritable proliferation of organisations publishing guidelines seeking
to provide normative guides to AI ethics [3,4]. As of November 2019, at
least 84 of these initiatives have published reports describing ethical
principles, values or other high-level abstract requirements for the
development and deployment of AI [2]. Due to this high number of
publications, the terms sometimes appear interchangeably in the papers,
as in the <a href="https://futureoflife.org/ai-principles/"
>Asilomar AI Principles</a
>, which present principles composed of guidelines. We assume
throughout this paper that the guidelines -- the guides -- contain the
principles of AI ethics.
</p>
<p>
Whilst the existence of guidelines and principles is necessary, little
practical direction exists for developers -- those responsible for
implementing ethics in AI-based systems -- to apply in real-world
contexts, even more so with the demands for market deliverables [2],
where the ethical considerations involved are often treated as a quality
to be addressed in software only after its deployment [5]. Furthermore,
developers do not receive adequate training within development projects,
nor during their education. There are no legal consequences for not
implementing AI ethics, as the guidelines present in the literature, and
proposed by organisations, are often non-binding, and AI development is
not a formal profession. To clarify: "Reports and
guidance documents for AI ethics are examples of what is called policy
instruments of a non-binding character or soft law" [6]. Thus,
there is neither motivation nor punishment for developers in the area of
AI ethics. In this sense, binding laws are paramount to effectively
align public interests with practice in application development in the
context of AI.
</p>
<p>
<strong>Legally binding documents</strong>, backed by legislation,
provide the actors involved in the process with real binding
responsibilities and rights. These types of documents are called binding
or hard law. We will present the two most prominent binding laws. First,
the European Union's (EU)
<a href="https://data.consilium.europa.eu/doc/document/ST-5419-2016-INIT/en/pdf"
>General Data Protection Regulation (GDPR)</a> -- which
came into force on 25 May 2018 and has been hugely influential in
establishing safeguards for personal data protection in today's
technological environment. Several countries outside the EU have adopted
similar data protection rules, analogous to or inspired by GDPR, which
is increasingly being recognised for its high standard of data
protection, Brazil being one such nation. Aimed at empowering EU
citizens to have control over their data and protect them from data and
privacy breaches, the GDPR applies to all relevant actors within the EU
and those who process, monitor or store EU citizens' data outside
the EU [7]. Second, the
<a
href="http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/l13709.htm"
>General Law on Personal Data Protection (LGPD)</a
>
in Brazil, which came into force on 18 September 2020, with the sanction
of Law 14.058/2020, originating from Provisional Measure (MP) 959/20.
The LGPD defines "rights of individuals in relation to their personal
information and rules for those who collect and handle these records
with the aim of protecting the fundamental rights of freedom and privacy
and the free development of the personality of citizens". An
effort towards harmonisation between AI ethics guidelines (non-binding)
and legislation (legally binding) is an important next step for the
global community [6].
</p>
<p>
AI ethics guidelines contain ethical principles, and each published
guideline contains its own set of principles. In the literature, most
studies focus on the conceptual part of AI ethics, and one of them is
the compilation, presentation and evaluation of ethics guidelines and
their principles. Several authors have used different methodologies to
explore sets of documents and extract the most recurrent principles and
their definitions, usually concluding that they are too general, highly
abstract and difficult to apply in real contexts, and that there is
overlap between the principles.
</p>
<p><br /></p>
<p>
Ryan and Stahl [8] conducted a rigorous study with a robust methodology
that reviewed a set of guidelines and compiled the detailed guidance
that is available, presenting a list of principles aimed at developers
and users. To the best of our knowledge, this is the study that makes
use of a methodology that encompasses the most guidelines and
definitions -- as well as presenting a comprehensive taxonomy.
</p>
<p>
We have included below the authors' original text for readability.
</p>
<p><br /></p>
<h2 class="about__title">1. Transparency</h2>
<p>
Transparency has quickly become one of the most widely discussed
principles within the AI ethics debate, with Floridi (2019) and the
High-Level Expert Group on AI (2019) viewing it as a defining
characteristic within the debate. Transparency can typically be
understood in two ways: the transparency of the AI technology itself and
the transparency of the AI organisations developing and using it.
Throughout our analysis, transparency was regularly discussed directly,
or in relation to processes required to ensure it, such as
explainability, understandability and communication.
</p>
<p><br /></p>
<p><strong>1.1 Transparency. </strong></p>
<p>
AI developers need to ensure transparency because it protects many other
requirements – such as the fundamental human rights, privacy,
dignity, autonomy and well-being (UNI Global Union, 2017). Organisations
using AI should be transparent about their aim for using AI, benefits
and harms and potential outcomes that may occur (IBM, 2017). AI
developers should ensure transparency because it allows consumers to
make informed choices about sharing their data and using AI (ADMA,
2013).
</p>
<p><br /></p>
<p><strong>1.2 Explainability. </strong></p>
<p>
AI must be subject to active monitoring to ensure that they are
producing accurate results (Algo.Rules, 2019). AI organisations should
document how their AI makes certain decisions and be able to reproduce
them for audits (SIIA, 2017). AI should be explainable to external
algorithmic auditing bodies to ensure the technical and ethical
functionality of their AI. If there is a tension between performance and
explainability, this should be clearly identified (Cerna Collectif,
2018).
</p>
<p><br /></p>
<p><strong>1.3 Explicability. </strong></p>
<p>
AI organisations (i.e. organisations using or developing AI) should be
able to intelligibly explain the data that goes in, the data coming out,
what their algorithms do, and their objective for doing so (Demiaux and
Abdallah, 2017, p. 51). AI organisations should ensure traceability and
explicability to guarantee safety (OECD 2019). AI needs to have a strong
degree of traceability to ensure that if harms arise, they can be traced
back to the cause (IEEE, 2017). Data should be traceable back to where,
how and when it was captured, retrieved, cleaned and analysed (Cerna
Collectif, 2018). Decisions made by AI should be reproducible by
external auditors (AMA, 2018).
</p>
<p><br /></p>
<p><strong>1.4 Understandability. </strong></p>
<p>
AI organisations need to implement appropriate methods to monitor the
data, algorithms and the decisions that will be arrived at by those
processes, and for actions taken by AI to be comprehensible by human
beings (European Parliament, 2017). AI organisations should understand
how their AI works and explain the technical functioning and decisions
reached by those technologies, whenever possible (Floridi et al., 2018).
</p>
<p><br /></p>
<p><strong>1.5 Interpretability. </strong></p>
<p>
While there is a degree of opaqueness in some machine-learning
technologies, AI organisations should be able to understand how a
decision was reached and how human oversight ensures that harms caused
by algorithmic black-boxing are addressed and prevented (IEEE, 2019).
High-stake domains (such as health care, criminal justice and welfare)
should reconsider using black-box AI altogether (AI Now Institute,
2017). Algorithmic reviews should be done on a regular basis to
determine if they are fit-for-purpose and interpretable (Algo.Rules,
2019). Organisations should be able to clearly interpret and demonstrate
how their AI is abiding by current legislation, such as the general data
protection regulation (GDPR), and be able to demonstrate what measures
are being taken to ensure compliance (UK Government, 2018).
</p>
<p><br /></p>
<p><strong>1.6 Communication. </strong></p>
<p>
End users should be provided with accurate information to ensure that
they are not manipulated, deceived, or coerced by AI (High-Level Expert
Group on AI, 2019, p. 16). End users should be informed about the intent
and outcomes of the technology (IBM, 2018). AI companies should be
explicitly clear and discuss, in a jargon-free manner, the potential
flaws or harms that may arise from their AI (Algo.Rules, 2019).
Communication methods may have to change for different industries,
expertise and contexts of use (Floridi et al., 2018). AI organisations
should communicate their progress and likelihood of hitting particular
milestones to governments, so that they can plan for these outcomes
(NSTC, 2016a).
</p>
<p><br /></p>
<p><strong>1.7 Disclosure. </strong></p>
<p>
AI should be designed and used to retrieve little to no personal data,
or if required, that any data retrieved is anonymised, encrypted and
securely processed, while being able to demonstrate this to a
third-party auditor (High-Level Expert Group on AI, 2019). AI should go
through internal and external auditing to ensure they are fit for
purpose, but the organisation also needs to be able to explain and
justify the use of their AI. Organisations should allow for independent
analysis and review of their systems (Amnesty International/Access Now,
2018).
</p>
<p><br /></p>
<p><strong>1.8 Showing. </strong></p>
<p>
Data should be accurate, up-to-date and fit-for-purpose, and companies
should be able to demonstrate this (ICO, 2017). Data quality should be
transparent, available for periodic assessment and there should be
regular and continued anomaly detection set in place [United Nations
Development Group (UNDG), 2017]. Developers of AI should also be able to
provide their ethics codes to public authorities, organisational users
and where possible, the public (University of Montreal, 2017). This can
be achieved through periodic review sessions, appropriate oversight
mechanisms and collective responsibility approaches within the
organisation (ICDPPC, 2018). It should also be clear to the end user
that they are interacting with an AI system, rather than a human (EPSRC,
2011).
</p>
<p><br /></p>
<h2 class="about__title">2. Justice and fairness</h2>
<p>
Discrimination and unfair outcomes stemming from algorithms have become a
hot topic within the media and academic circles (O’Neil, 2016). It
is not surprising that issues of fairness, equality and equity were
repeatedly discussed throughout the ethics guidelines. In addition to
simply addressing issues of harm and injustice themselves, many of the
guidelines provided recommendations on how to implement steps to
minimise these harms. Furthermore, some documents also highlighted how
different organisations should implement methods to reverse, remedy and
allow fair redress, in instances where harms have occurred.
</p>
<p><br /></p>
<p><strong>2.1 Justice.</strong></p>
<p>
AI practitioners should identify what levels of justice and fairness can
be implemented into the AI system during the design process (NSTC,
2016b). For example, if AI is used within the judicial system in any
way, accountability should still lie with the human user, e.g. the judge
(Rathenau Institute, 2017, p. 43). In addition, AI will replace many
human jobs in the future, so it is important that there are effective
and just ways to retrain and retool the human workforce (COMEST/UNESCO,
2017, pp. 52-53).
</p>
<p><br /></p>
<p><strong>2.2 Fairness.</strong></p>
<p>
While AI developers may have their own values, they should not develop
algorithms with historically unfair prejudices (Latonero, 2018). There
should be steps in place to ensure that data being used by AI is not
unfair and does not contain errors and inaccuracies that will corrupt
the responses and decisions taken by the AI (ICO, 2017). To ensure the
fairness of AI, their design should be fit for purpose, identify impacts
on different aspects of society and should be designed to promote human
welfare, rather than endanger it (ICDPPC, 2018). Organisations should
consider using fairness-aware data mining algorithms (FATML,
2016).
</p>
<p><br /></p>
<p><strong>2.3 Consistency. </strong></p>
<p>
To prevent harmful actions in the decision-making process, organisations
should ensure that accurate and representative sample data is collected,
analysed and used [IPC Ontario (Information and Privacy Commissioner of
Ontario), 2017]. Organisations need to establish procedures to ensure
the identification, prevention and the minimisation of inaccuracies in
their AI. To achieve this, data should be of the highest quality (UNDG,
2017), external algorithmic auditing should be carried out (Intel,
2017), and there should be consistent, repeated and regular discussions
with end users and stakeholders that may be affected (PwC, 2019).
</p>
<p><br /></p>
<p><strong>2.4 Inclusion. </strong></p>
<p>
AI should not become another tool for exclusion within society (AI for
Humanity, 2018). Particular attention should be given to
under-represented and vulnerable groups and communities, such as those
with disabilities, ethnic minorities, children and those in the
developing world (High-Level Expert Group on AI, 2019). Data that is
being used should be representative of the target population and should
be as inclusive as possible (High-Level Expert Group on AI, 2019). AI
organisations should not only reduce exclusion issues but should promote
active inclusion of women and minority groups into the development and
design of AI (Gilburt, 2019; WEF, 2018).
</p>
<p><br /></p>
<p><strong>2.5 Equality. </strong></p>
<p>
AI should not harm, and wherever possible should promote, the
equality of individuals in respect to their rights, dignity and freedom
to flourish (The Future Society, 2018; Tieto, 2018). One way equality
can be enabled is through greater diversity in AI teams and data sets
and designs (Sage, 2017). More steps need to be taken to address sexist,
misogynistic and gender-biased harms resulting from some AI (World Wide
Web Foundation, 2018).
</p>
<p><br /></p>
<p><strong>2.6 Equity. </strong></p>
<p>
The aims of AI, generally, should be to empower and benefit individuals,
provide equal opportunities while distributing the rewards from its use
in a fair and equitable manner (EGE, 2018; IEEE, 2019; SIIA, 2017). AI
should be developed so that it can be used within society in a fair and
equal way (Japanese Society for Artificial Intelligence, 2017).
</p>
<p><br /></p>
<p><strong>2.7 Non-bias. </strong></p>
<p>
AI organisations should invest in ways to identify, address and mitigate
unfair biases (ICDPPC, 2018). Developers should examine unfair biases at
every stage of the development process and should eliminate those found
(The Public Voice, 2018). There should be close attention paid to the
training data used, potential human biases and bias derived from the
results of algorithmic processes (Cerna Collectif, 2018). Developers and
organisational users of AI should conduct analysis to identify unfair
bias, and there should be explicit attempts to avoid individual and
societal bias, continual mechanisms in place and dialogue with
stakeholders to raise awareness and reverse any biases detected (IBM,
2018). If there is any indication of unfair bias, the AI organisations
should demonstrate the elimination of such bias before a competent
authority (Council of Europe, 2017).
</p>
<p><br /></p>
<p><strong>2.8 Non-discrimination. </strong></p>
<p>
AI should be designed for universal usage and not discriminate against
people, or groups of people, based on gender, race, culture, religion,
age or ethnicity (Cerna Collectif, 2018). There should be mechanisms in
place to effectively prevent, remedy and reverse discriminatory outcomes
resulting from AI use (Amnesty International/Access Now, 2018). AI use
should not lead to discrimination against individuals or groups of
individuals in accordance with the Equality Act 2010, and organisations
should create “discrimination impact assessments” to
identify issues before their AI are used (AI for Humanity, 2018).
</p>
<p><br /></p>
<p><strong>2.9 Diversity. </strong></p>
<p>
To promote diversity, AI organisations should instil an inclusionary
working environment (Cerna Collectif, 2018), hire teams from a range of
backgrounds (IBM, 2018) and disciplines (SAP, 2018), conduct regular
diversity sessions and incorporate the viewpoints from a wide range of
stakeholders (Amnesty International/Access Now, 2018). Organisations
implementing and using AI should encourage a diversity of opinions
throughout every stage of its use (Smart Dubai, 2019).
</p>
<p><br /></p>
<p><strong>2.10 Plurality. </strong></p>
<p>
AI developers should consider the range of social and cultural
viewpoints within society and should attempt to prevent societal
homogenization of behaviour and practices (University of Montreal,
2017). Organisations should not only be focused on “pipeline
model” changes in their organisation but should ensure that the
plurality of individuals within their organisation have a voice and that
they create a culture of inclusion, which should be reflected in the AI
technology (AI Now Institute, 2018). Organisations should create a
multi-stakeholder dialogue
and incorporate the viewpoints of women, underrepresented groups and
marginalised individuals at every stage of AI applications (Leaders of
the G7, 2018).
</p>
<p><br /></p>
<p><strong>2.11 Accessibility. </strong></p>
<p>
Organisations should protect the rights of data subjects, such as the
right of information access about them (Datatilsynet, 2018). Individuals
have a right to access data that is being stored and used about them,
and subsequently, to request that this is rectified or deleted
(Datatilsynet, 2018). When decisions are made about individuals,
explanations should be available that are easily accessed, free of
charge and user-friendly (Smart Dubai, 2019).
</p>
<p><br /></p>
<p><strong>2.12 Reversibility. </strong></p>
<p>
It is important to clearly articulate if the outcomes of AI decisions
are reversible, e.g. if individuals are refused a loan because of an AI
algorithm, can such a decision be reversed if the customer can
demonstrate their credit-worthiness (Personal Data Protection Commission
Singapore, 2019, p. 16)? Organisations using AI need to ensure that the
autonomy of AI is restricted and the outcomes are reversible when there
is a harm caused (Floridi et al., 2018). AI should be programmed with a
condition of reversibility, which ensures controllability and safety of
the system: “The ability to undo the last action or a sequence of actions
allows users to undo undesired actions and get back to the
‘good’ stage of their work” (Clark, 2019).
</p>
<p><br /></p>
<p><strong>2.13 Remedy. </strong></p>
<p>
When AI holds the possibility of creating harm, there need to be
preemptive steps in place to trace these issues and deal with them in a
prompt and responsible manner. Organisations should abide by the
“termination obligation”, which states that when a system is
no longer under human control, it must be terminated (Telefónica,
2018). There need to be specific “red lines” drawn so that,
when breached, appropriate steps are taken to override the system,
terminate it temporarily or indefinitely and remedy any potential issues
that may have occurred (PwC, 2019).
</p>
<p><br /></p>
<p><strong>2.14 Redress. </strong></p>
<p>
In situations where harmful and/or unjust events occur as a result of
using AI, those affected should have appropriate and visible measures of
redress in a timely manner (FATML, 2016). When decisions made by
algorithms create harmful or questionable results, individuals should
have the possibility to lodge a complaint and request a justification of
the decision (Algo.Rules, 2019). This should be done in a manner that is
understandable by those affected and should allow them the opportunity
to challenge these decisions (B Debate, 2017). Accountability strategies
should be created within companies, with appropriate measures for
redress if these internal and external standards are not met (Dawson et
al., 2019).
</p>
<p><br /></p>
<p><strong>2.15 Challenge. </strong></p>
<p>
AI companies should allow for “conscientious objectors, employee
organizing and ethical whistleblowers” (AI Now Institute, 2018).
There should be clear policies to protect conscientious objectors,
allow employees to voice their concerns and enable whistleblowers to
feel protected when it is in the public interest and safety (AI Now
Institute, 2018).
</p>
<p><br /></p>
<p><strong>2.16 Access and distribution. </strong></p>
<p>
AI organisations should ensure that their technologies are fair and
accessible among a diversity of user groups within society (Smart Dubai,
2019). Organisations should especially concentrate on “populations
that currently lack such access” (AI Now Institute, 2016, p. 3).
AI should be accessible to those that are often socially disadvantaged
(such as those with vision problems, dyslexia or mobility issues) (Sage,
2017). Wherever possible, organisations should use open data for their
AI to ensure access and transparency (NSTC, 2016b).
</p>
<p><br /></p>
<h2 class="about__title">3. Non-maleficence</h2>
<p>
The principle of non-maleficence gained attention following Beauchamp
and Childress's (1979) ground-breaking Principles of Biomedical
Ethics and its subsequent editions. In its most basic form, it means to
do no harm or avoid doing harm to others. In AI ethics, the avoidance of
harm to human beings has been one of the greatest concerns, with some of
the most high-profile examples coming from killer robots, autonomous
cars and drone technology. It is no surprise that most of the ethics
guidelines had a strong emphasis on ensuring no harm comes to citizens,
through security and safety of the AI, and precautionary and remedial
steps to be taken, if harm occurs.
</p>
<p><br /></p>
<p><strong>3.1 Non-maleficence. </strong></p>
<p>
AI should be designed with the intent of not doing foreseeable harm to
human beings (Personal Data Protection Commission Singapore, 2018).
Developers and organisations using AI should receive and incorporate the
advice of legal authorities and research ethics boards to ensure that
data is retrieved, analysed and used in a manner that does not harm
individuals [IPC Ontario (Information and Privacy Commissioner of
Ontario), 2017]. Organisations should regularly test their algorithms to
determine that no harm results from them (ACM, 2017; American College of
Radiology, 2019).
</p>
<p><br /></p>
<p><strong>3.2 Security. </strong></p>
<p>
AI should be robust, secure and safe throughout their life cycle and
must function appropriately and not pose unreasonable safety risks (OECD
2019). Organisations must ensure effective cybersecurity so that their
AI is protected against attacks (Allistene, 2014). Security must be
built into the architecture of the AI (Public Voice 2018) and must be
tested before implementation (Algo.Rules, 2019). When security
researchers find vulnerabilities or design flaws, they should disclose
these findings to be resolved (Internet Society, 2017).
</p>
<p><br /></p>
<p><strong>3.3 Safety. </strong></p>
<p>
Developers and organisational users should ensure that AI does not
infringe on human rights by ensuring their technology’s safety
(EGE 2018). They must assess the public safety risks that arise from
their AI and implement effective safety controls (Public Voice 2018).
Organisations should enforce strict safety measures, ensuring their
AI’s manageability and control and that adequate procedures are in
place for security breaches (Algo.Rules, 2019). AI should pass quality
assurance processes and be tested in real-world scenarios before, during
and after deployment (SAP 2018).
</p>
<p><br /></p>
<p><strong>3.4 Harm. </strong></p>
<p>
The objectives and expected impact of AI must be assessed and documented
in the development stage (Algo.Rules, 2019). The effects of these
systems must be reviewed on an ongoing basis (Algo.Rules, 2019).
Organisations should encourage a form of “algorithmic
accountability” and should exercise caution when developing AI
that may have negative impacts (ICO, 2017). AI technology that replaces
human activity should produce at least a diminution of harm before it is
allowed on the market (Federal Ministry of Transport and Digital
Infrastructure, 2017). AI should not “cause bodily injury or
severe emotional distress to any person” (IIIM, 2015).
</p>
<p><br /></p>
<p><strong>3.5 Protection. </strong></p>
<p>
Developers should implement mechanisms and safeguards to protect user
safety (OECD 2019), and AI must be safe and secure throughout their life
cycle (IEEE, 2019). AI systems should prioritize the protection of human
life (Federal Ministry of Transport and Digital Infrastructure, 2017).
External auditors should be allowed to conduct examinations and report
negative impacts of the AI without fear of harm or threat by the AI
organisations. In addition, the protection of whistle-blowers within AI
organisations should also be ensured to allow for effective and
legitimate reporting of harms (High-Level Expert Group on AI,
2019, p. 20).
</p>
<p><br /></p>
<p><strong>3.6 Precaution. </strong></p>
<p>
Those who develop AI must have the necessary skills to understand how
they function and their potential impacts (Algo.Rules, 2019), and
security precautions must be well documented (Public Voice 2018). AI
organisations may receive advice from trained legal professionals,
ethicists working in the area and policy analysts. If no consensus can
be agreed upon, development of the AI “should not proceed in that
form” (High-Level Expert Group on AI, 2019, p. 20). AI systems
need to allow for human interruption, or their shutdown, when there is
potential harm (Internet Society, 2017).
</p>
<p><br /></p>
<p><strong>3.7 Prevention. </strong></p>
<p>
An AI system must be manageable throughout its lifetime and its control
must be made possible (Algo.Rules, 2019). The reliability and robustness
of AI, and its resilience with respect to attacks, access and
manipulation, must be guaranteed (Public Voice 2018). Great effort should
be put into ensuring reliability and safety (IEEE, 2019). AI systems
should prevent accidents from occurring, whenever possible, and avoid
critical situations from occurring in the first place (Federal Ministry
of Transport and Digital Infrastructure, 2017).
</p>
<p><br /></p>
<p><strong>3.8 Integrity. </strong></p>
<p>
Attacks against AI should not be able to compromise the bodily and
mental integrity of people; this should be prevented by ensuring the
reliability and internal robustness of the systems (EGE 2018). AI should
“fail gracefully” (e.g. shut down safely or go into safe
mode) (IEEE, 2019).
</p>
<p><br /></p>
<p><strong>3.9 Non-subversion. </strong></p>
<p>
AI systems should be used to respect and improve the lives of citizens,
rather than “subvert the social and civic processes on which the
health of society depends” (Future of Life Institute, 2017).
</p>
<p><br /></p>
<h2 class="about__title">4. Responsibility</h2>
<p>
Moral responsibility is a very important issue within AI ethics, with a
fear that companies will try to shift blame and responsibility onto
the autonomous or semi-autonomous system. There may also be instances
where, because of this relative autonomy, AI creates a
“responsibility gap”, whereby it is unclear who is
responsible. Issues of responsibility, accountability, liability and
acting with integrity appeared in many of the ethics guidelines that we
analysed.
</p>
<p><br /></p>
<p><strong>4.1 Responsibility. </strong></p>
<p>
Developers are primarily responsible for the design and functionality of
the AI, and when there is an error or harm, then the onus of
responsibility often lies with them. When the issue is caused by the use
and implementation of the technology, the onus is with the
organisational user of the AI. There needs to be clear and concise
allocation of responsibilities within the organisation using AI, and the
creation of potential scenarios and ways to deal with harms when they
occur (EGE 2018; FATML, 2016).
</p>
<p><br /></p>
<p><strong>4.2 Accountability. </strong></p>
<p>
AI organisations need to be aware of the issues involved with using poor
data and be held accountable if there are harmful consequences as a
result of this. Developers need to be aware that they are accountable
for these systems’ impact on the world (IBM, 2018). They need to
be open and accountable by means of auditing, monitoring and conducting
impact assessments of AI (ICDPPC, 2018). A legal person must always be
held accountable for harms caused by AI and this blame cannot be placed
on the tools that cause the damage (Algo.Rules, 2019).
</p>
<p><br /></p>
<p><strong>4.3 Liability. </strong></p>
<p>
There is a need to distinguish between the designer and organisational
users of those systems for legal reasons (Cerna Collectif, 2018). To
attribute liability in situations of malfunction, error and harms, there
needs to be clear attributions of responsibility. Definitive liability
should be established for when autonomous systems cause undesired
effects (EGE, 2018). This can be achieved through adequate
record-keeping, systems for registration, and documentation (IEEE,
2019).
</p>
<p><br /></p>
<p><strong>4.4 Acting with integrity. </strong></p>
<p>
AI organisations must ensure that their data meets quality and integrity
standards at every stage of use (ITI, 2017). If those working with AI
discover errors, security breaches or data leaks, then they must report
these issues to the relevant authorities, stakeholders, and if relevant,
the wider public (University of Montreal, 2017). Ethics training should
be implemented to ensure responsible development and deployment of AI
(AI for Humanity 2018). AI companies should respect and support the
academic and professional integrity of their partners and researchers
(Deepmind, 2017).
</p>
<p><br /></p>
<h2 class="about__title">5. Privacy</h2>
<p>
Since the GDPR came into force in 2018, privacy has been a hot topic for
anyone working in fields where personal data is being used.
Particularly, there is a great concern in the development and use of AI,
with many of the ethics guidelines strongly featuring privacy and data
protection as key tenets in their recommendations. Because of the
abundance of data that is required for AI to work, it is important that
individuals’ privacy is not jeopardised as a result.
</p>
<p><br /></p>
<p><strong>5.1 Privacy. </strong></p>
<p>
Some of the steps that AI organisations should take to ensure privacy
are: securing databases, storage and AI systems through
de-identification, anomaly detection and effective cybersecurity (IPC
of Ontario, 2017); ensuring informed consent is retrieved (EGE, 2018);
giving users control of and access to data stored about them (IEEE,
2019); following current data protection regulations (UK Government,
2018) and non-regulatory privacy-by-design frameworks (ICDPPC, 2018);
and ensuring that the data retrieved is of a high standard. Organisations
purchasing off-the-shelf AI can cultivate a privacy culture by demanding
privacy-by-design AI (Datatilsynet, 2018).
</p>
<p><br /></p>
<p><strong>5.2 Personal or private information.</strong> </p>
<p>
The development and use of AI should ensure a strong adherence to the
privacy and data protection standards outlined in the General Data
Protection Regulation (2018), in addition to non-regulatory frameworks,
such as privacy-by-design and privacy impact assessment frameworks
(IEEE, 2019; Intel, 2018). Developers and organisational users of AI
must place the end user’s privacy and personal data at the
forefront of the design process, viewing privacy as a human right
(Latonero, 2018). The end user’s personal data, and data derived
or created about them, should be processed in a fair, lawful and
legitimate way (UNDG, 2017). Whenever possible, the collection and use
of personal data should be kept to a minimum, unless completely
necessary and relevant.
</p>
<p><br /></p>
<h2 class="about__title">6. Beneficence</h2>
<p>
The principle of beneficence also gained greater acknowledgement and
adoption after Beauchamp and Childress's (1979) Principles of Biomedical
Ethics. Beneficence essentially means to do good, to carry out an
activity with the intention of benefitting someone or society as a
whole. Beneficence is often overlooked in the AI ethics literature,
being seen as a given that AI will bring benefits. The ethics
guidelines we analysed highlighted beneficence as promoting the
flourishing of individual well-being, ensuring people receive benefits
from AI use, and promoting peace and the social and common good.
</p>
<p><br /></p>
<p><strong>6.1 Benefits. </strong></p>
<p>
AI organisations should ensure that their AI is designed to benefit
humans (IEEE, 2019). They should clearly map out those benefits and the
parties benefiting from them (The Information Accountability Foundation,
2015). AI systems must create greater benefits than their costs for
people (Dawson et al., 2019, p. 6) and should benefit as many people as
possible (Future of Life Institute, 2017; The Partnership on AI, 2016).
AI organisations should “advance scientific understanding of the
world, and to enable the application of this knowledge for the benefit
and betterment of humankind” (IIIM, 2015).
</p>
<p><br /></p>
<p><strong>6.2 Beneficence. </strong></p>
<p>
AI organisations should find solutions to some of the world’s
greatest problems, such as curing diseases, ensuring food security and
preventing environmental damage (Intel, 2017). AI organisations should
use data retrieved for the benefit of their customers and society (OP,
2019). Ultimately, AI should “complement the human experience in a
positive way” (Unity Blog, 2018).
</p>
<p><br /></p>
<p><strong>6.3 Well-being. </strong></p>
<p>
AI organisations should ensure individual well-being and flourishing
(IEEE, 2019). They should ensure that their AI is fit-for-purpose and
that it does not prohibit individual development and access to primary
goods, that it ensures human welfare and that it allows for the
empowerment of individuals around the world (EGE, 2018). AI should be
used to complement those working in the health care sector to provide better
care and support the well-being of patients (RCP London, 2018).
</p>
<p><br /></p>
<p><strong>6.4 Peace. </strong></p>
<p>
AI organisations should aim to avoid an “arms race in lethal
autonomous weapons” (Future of Life Institute, 2017; see also
Smart Dubai, 2019). If AI threatens peace, organisations should
collaborate with governments to reduce potential conflicts (OpenAI,
2018).
</p>
<p><br /></p>
<p><strong>6.5 Social good. </strong></p>
<p>
AI should bring an improvement in beneficial opportunities for society
(The Information Accountability Foundation, 2015, p. 10). AI
organisations should cultivate a healthy AI industry ecosystem, built on
cooperation and healthy competition (Government of the Republic of
Korea, 2017, p. 62). The use of AI should not come at a cost of causing
a conflict with non-users of these technologies (Ministry of State for
Science and Technology Policy, 2019, p. 22).
</p>
<p><br /></p>
<p><strong>6.6 Common good. </strong></p>
<p>
AI should be developed to support the common good (Future of Life
Institute, 2017) and the service of people (AGID, 2018). AI
organisations should weigh up the benefits and harms resulting from AI
and should take careful consideration to develop ways to mitigate any
harms to ensure an overall common good for society (The Information
Accountability Foundation, 2015, p. 8). Appropriate steps should be
considered to ensure that AI is used for good and that humanity is
protected from potentially harmful impacts resulting from it (OpenAI,
2018).
</p>
<p><br /></p>
<h2 class="about__title">7. Freedom and autonomy</h2>
<p>
Democratic societies place value in freedom and autonomy, and it is
important that AI use does not encumber or harm these for us. The ethics
guidelines addressed ways to ensure autonomy-promoting and
liberty-protecting AI. For example, the AI organisation should ensure
that individuals consent to how their data is being used, that AI does
not harm individuals’ abilities to make choices and that it does not
manipulate their self-determination.
</p>
<p><br /></p>
<p><strong>7.1 Freedom. </strong></p>
<p>
Developers should acknowledge, identify and ameliorate circumstances
where AI may create harm against human freedoms. Organisations should
ensure that the end users’ freedoms are not infringed upon during
the use of AI (High-Level Expert Group on AI, 2019). Developers should
ensure that AI does not harm end users through tracking (freedom of
movement), censorship (freedom of expression) or surveillance (freedom
of association).
</p>
<p><br /></p>
<p><strong>7.2 Autonomy. </strong></p>
<p>
AI organisations should ensure that end users are informed, not deceived
or manipulated by AI and should be allowed to exercise their autonomy
(EGE, 2018). AI organisations need to ensure that the “principle
of user autonomy must be central to the system’s
functionality” (High-Level Expert Group on AI, 2019, p. 16). Users
should be informed actors and have control over their decisions when
interacting with AI (Council of Europe, 2019).
</p>
<p><br /></p>
<p><strong>7.3 Consent. </strong></p>
<p>
The use of personal data must be clearly articulated and agreed upon
before its use (UNDG, 2017). If personal data is repurposed, developers
should ensure that it is compatible with the original fair processing
requirements when consent is given (ICO, 2017), in those cases where
consent is the legal basis of data processing. Personal data should not
be processed in a way that the data subject considers inappropriate or
objectionable (Council of Europe, 2017). The use of personal data should
also be done within reasonable expectations and consent of the
individuals but must also be used for legitimate purposes (Future
Advocacy, 2019).
</p>
<p><br /></p>
<p><strong>7.4 Choice. </strong></p>
<p>
AI should protect users’ power to make decisions about their
lives (Floridi et al., 2018). AI should not “compromise human
freedom and autonomy by illegitimately and surreptitiously reducing
options for and knowledge of citizens” (European Group on Ethics
in Science and New Technologies, 2018, p. 17).
</p>
<p><br /></p>
<p><strong>7.5 Self-determination. </strong></p>
<p>
There needs to be a balance between decision-making power that is freely
given by the user to the autonomous systems and when this option is
taken away or undermined by the system (Floridi et al., 2018). AI
organisations should not manipulate individuals’
self-determination, particularly those who may be vulnerable to abuse
(Rathenau Institute, 2017, p. 26).
</p>
<p><br /></p>
<p><strong>7.6 Liberty. </strong></p>
<p>
AI organisations need to ensure that their AI protects
individuals’ liberties, as outlined in many human rights
legislations, such as the EU’s Charter of Fundamental Human Rights
(2000) and the Universal Declaration of Human Rights (1948). Liberty
refers to rights such as freedom of speech, freedom of assembly and
freedom of movement. During the development of AI, there should be
strong adherence to the protection of liberties, outlined in these
fundamental human rights documents.
</p>
<p><br /></p>
<p><strong>7.7 Empowerment. </strong></p>
<p>
AI should be used to empower and strengthen our human rights, rather
than curtailing or infringing upon them (ICDPPC, 2018). If decisions are
made about individuals that may harm their liberties, they should be
empowered with the right to challenge such decisions (ICO, 2017).
</p>
<p><br /></p>
<h2 class="about__title">8. Trust</h2>
<p>
Trust is a fundamental principle for interpersonal interactions and
is a foundational precept for society to function. Similarly, trust is
being acknowledged as a key requirement for the ethical deployment and
use of AI. The HLEG (2019) even use it as their defining paradigm for
their ethics guidelines, referring to it throughout the entire document.
It appears to be a relatively new phenomenon, however, with most of the
guidelines that make reference to trust coming after 2017.
</p>
<p><br /></p>
<p><strong>8.1 Trustworthiness. </strong></p>
<p>
AI organisations should prove they are trustworthy and that their
technologies are reliable (Digital Decisions, 2019; MI Garage, 2019).
End users should be able to justly trust AI organisations to fulfil
their promises and to ensure that their systems function as intended
(Deutsche Telekom, 2018; Institute of Business Ethics, 2018; Microsoft,
2018; Sony, 2018; NITI Aayog, 2018; Microsoft, 2017). Building trust
should be encouraged by ensuring accountability, transparency and safety
of AI (Royal Society). Organisations can cultivate trust by demonstrating
the security of their AI (Intel, 2017) and by guarding the data retrieved
from these systems in a responsible way (Unity Blog, 2018).
</p>
<p><br /></p>
<h2 class="about__title">9. Sustainability</h2>
<p>
Sustainability is a key principle in global discussions at present, and
its importance is only set to rapidly increase as a result of climate
change predictions and ongoing environmental destruction. All fields and
disciplines are affected and need to incorporate sustainability agendas,
and AI is no exception. Despite this, it did not appear as an overly
pressing concern in the majority of guidelines, demonstrating a greater
need to identify how it can be incorporated more effectively.
</p>
<p><br /></p>
<p><strong>9.1 Sustainability. </strong></p>
<p>
AI organisations need to ensure that they are environmentally
sustainable and incorporate environmental outcomes within their
decision-making (Special Interest Group on Artificial Intelligence,
2018). The AI must adhere to resource efficiency, the promotion of
sustainable energy and the protection of biodiversity.
</p>
<p><br /></p>
<p><strong>9.2 Environment (nature). </strong></p>
<p>
Organisations should use AI that has been developed in an
environmentally conscious manner (SIIA, 2018). In situations where there
is ecological harm caused by AI beyond acceptable levels, steps should
be taken to either immediately halt it (temporarily or permanently),
identify ways to use it in a non-harmful way or consult the designers
for potential solutions and responses. AI should not be used to harm
biodiversity (UNI Global 2017).
</p>
<p><br /></p>
<p><strong>9.3 Energy. </strong></p>
<p>
The use of AI should be respectful of energy efficiency, mitigate
greenhouse gas emissions and protect biodiversity (University of
Montreal, 2017). Those responsible for AI should ensure that its
ecological footprint is minimal and all efforts are taken to reduce
emission levels (Green Digital Working Group, 2016, p. 7).
</p>
<p><br /></p>
<p><strong>9.4 Resources (energy). </strong></p>
<p>
AI should be created in a way that ensures effective energy and resource
consumption, promotes resource efficiency and the use of renewable
materials, reduces the use of scarce materials and minimises waste
(European Parliament, 2017). Resource use and environmental impact
should be given due weight in the life-cycle impact assessment of AI
(COMEST/UNESCO, 2017, p. 55).
</p>
<p><br /></p>
<h2 class="about__title">10. Dignity</h2>
<p>
Human dignity is the recognition that individuals have inherent worth
and that their rights should be respected. It is important that AI does
not infringe or harm the dignity of end users or other members within
society. Respecting individuals’ dignity is a vital principle that
should be taken into account within AI ethics guidelines.
</p>
<p><br /></p>
<p><strong>10.1 Dignity. </strong></p>
<p>
Human beings have intrinsic value and developers/organisational users
should ensure that this is respected in the design and use of AI (The
Conference toward AI Network Society, 2017). AI should be developed and
used in a way that “respects, serves and protects humans’
physical and mental integrity, personal and cultural sense of identity,
and satisfaction of their essential needs” (High-Level Expert
Group on AI, 2019, p. 10). AI needs to be developed and used in a way
that makes it clear to the user that they are interacting with AI and
not another human being (EGE, 2018). Efforts need to be made to ensure
that AI is not confused with human beings, as dignity is a value
inherent to human beings (COMEST/UNESCO, 2017, p. 50). Organisations
should ensure that their AI does not violate the end-user’s
dignity and should closely follow the principle of dignity outlined in
the first chapter of the EU Charter (Latonero, 2018).
</p>
<p><br /></p>
<h2 class="about__title">11. Solidarity</h2>
<p>
With the widespread use of AI to disseminate fake news, and its
potential to surveil and invade individuals’ privacy, there is a growing
concern that AI may be used to undermine and jeopardise societal
relationships and solidarity. Within the design and development process,
it is important to consider whether the AI supports rich and meaningful
social interaction, both professionally and in private life, and does
not support segregation and division. AI should promote social security and
cohesion and should not jeopardise societal bonds and
relationships.
</p>
<p><br /></p>
<p><strong>11.1 Solidarity. </strong></p>
<p>
AI should be developed to promote, or avoid harm to, societal bonds and
relationships between people and generations (University of Montreal,
2017). AI should facilitate and promote human development, rather than
being designed to obstruct or endanger it (ICDPPC, 2018). There should
be consideration towards preserving and promoting solidarity, and AI
should not undermine existing social structures (Floridi et al., 2018).
AI should not create “social dislocation”, whereby it adversely
harms cultural and social identity, and those organisations that cause it
should be held responsible (Accenture, 2019).
</p>
<p><br /></p>
<p><strong>11.2 Social security. </strong></p>
<p>