The Ethics of Algorithms: Key Problems and Solutions
https://doi.org/10.1007/s00146-021-01154-8
OPEN FORUM
Received: 27 July 2020 / Accepted: 22 January 2021 / Published online: 20 February 2021
© The Author(s) 2021
Abstract
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Keywords Algorithm · Artificial intelligence · Autonomy · Digital ethics · Explainability · Fairness · Machine learning ·
Privacy · Responsibility · Transparency · Trust
It is important to stress that this conceptual map can be interpreted at both a micro- and macro-ethical level. At the micro-ethical level, it sheds light on the ethical problems that particular algorithms may pose. By highlighting how these issues are inseparable from those related to data and responsibilities, it shows the need to take a macro-ethical approach to addressing the ethics of algorithms as part of a wider conceptual space, namely, digital ethics (Floridi and Taddeo 2016). As Floridi and Taddeo argue:

"While they are distinct lines of research, the ethics of data, algorithms and practices are obviously intertwined ... [Digital] ethics must address the whole conceptual space and hence all three axes of research together, even if with different priorities and focus" (Floridi and Taddeo 2016, 4).

In the remainder of this article, we address each of these six ethical concerns in turn, offering an updated analysis of the ethics of algorithms literature (at a micro level), with the goal of contributing to the debate on digital ethics (at a macro level).

A systematic literature search was performed via keyword queries on four widely used reference repositories to identify and analyse the literature on the ethics of algorithms (see Table 1). Four keywords were used to describe an algorithm: 'algorithm', 'machine learning', 'software' and 'computer program'.1 The search was limited to publications made available between November 2016 and March 2020.

The search identified 4891 unique papers for review.2 After an initial review of title/abstract, 180 papers were selected for a full review. Of these, 62 were rejected as off-topic, leaving 118 articles for full review. These are all listed in the reference list of the paper. Another 37 articles and books were reviewed and referenced in this paper to provide additional information regarding specific ethical issues and solutions (e.g. technical details, examples and tools). These were sourced from the bibliographies of the 118 articles we reviewed as well as provided on an ad-hoc basis when agreed upon by the authors as being helpful for clarification.

1 The literature search was limited to English language articles in peer-reviewed journals and conference proceedings.
2 Many of which were purely technical in nature, especially for "discrimination" and "(transparency OR scrutability OR opacity)".

3 Inconclusive evidence leading to unjustified actions

Research focusing on inconclusive evidence refers to the way in which non-deterministic ML algorithms produce outputs that are expressed in probabilistic terms (James et al. 2013; Valiant 1984). These types of algorithms generally identify association and correlation between variables in the underlying data, but not causal connections. As such, they encourage the practice of apophenia: "seeing patterns where none actually exist, simply because massive quantities of data can offer connections that radiate in all directions" (boyd and Crawford 2012, 668). This is highly problematic, as patterns identified by algorithms may be the result of inherent properties of the system modelled by the data, of the datasets (that is, of the model itself, rather than the underlying system), or of skillful manipulation of datasets (properties neither of the model nor of the system). This is the case, for example, of Simpson's paradox, when trends that are observed in different groups of data reverse when the data is aggregated (Blyth 1972). In the last two cases, poor quality of the data leads to inconclusive evidence to support human decisions.
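To make the arithmetic behind Simpson's paradox concrete, the following minimal Python sketch (using the well-known kidney-stone treatment figures purely as an illustration, not data from the reviewed studies) shows a per-group trend reversing once the groups are aggregated:

# Simpson's paradox: treatment A has the higher success rate in each group,
# yet treatment B has the higher success rate once the groups are aggregated.
# (successes, trials) per treatment and patient group; illustrative figures only.
outcomes = {
    ("A", "small"): (81, 87),   ("B", "small"): (234, 270),
    ("A", "large"): (192, 263), ("B", "large"): (55, 80),
}

def success_rate(cells):
    successes = sum(s for s, _ in cells)
    trials = sum(n for _, n in cells)
    return successes / trials

for group in ("small", "large"):
    a = success_rate([outcomes[("A", group)]])
    b = success_rate([outcomes[("B", group)]])
    print(f"{group}: A {a:.0%} vs B {b:.0%}")                # A wins within each group

a_total = success_rate([v for (t, _), v in outcomes.items() if t == "A"])
b_total = success_rate([v for (t, _), v in outcomes.items() if t == "B"])
print(f"aggregated: A {a_total:.0%} vs B {b_total:.0%}")      # B wins overall

Which of the two readings supports a justified decision depends on causal knowledge about how the groups were formed, which is exactly the kind of information that the probabilistic output alone does not carry.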
Recent research has underlined the concern that inconclusive evidence can give rise to serious ethical risks. For example, focusing on non-causal indicators may distract attention from the underlying causes of a given problem (Floridi et al. 2020). Even with the use of causal methods, the available data may not always contain enough information to justify an action or make a decision fair (Olhede and Wolfe 2018, 7). Data quality—the timeliness, completeness and correctness of a dataset—constrains the questions that can be answered using a given dataset (Olteanu et al. 2016). Additionally, the insights that can be extracted from datasets are fundamentally dependent on the assumptions that guided the data collection process itself (Diakopoulos and Koliska 2017). For example, algorithms designed to predict patient outcomes in clinical settings rely entirely on data inputs that can be quantified (e.g. vital signs and previous success rates of comparative treatments), whilst ignoring other emotional factors (e.g. the willingness to live) which can have a significant impact on patient outcomes and thus undermine the accuracy of the algorithmic prediction (Buhmann, Paßmann, and Fieseler 2019). This example highlights how insights stemming from algorithmic data processing can be uncertain, incomplete, and time-sensitive (Diakopoulos and Koliska 2017).

One may embrace a naïve, inductivist approach and assume that inconclusive evidence can be avoided if algorithms are fed enough data, even if a causal explanation for these results cannot be established. Yet, recent research rejects this view. In particular, literature focusing on the ethical risks of racial profiling using algorithmic systems has demonstrated the limits of this approach, highlighting, among other things, that long-standing structural inequalities are often deeply embedded in the algorithms' datasets and are rarely, if ever, corrected for (Hu 2017; Turner Lee 2018; Noble 2018; Benjamin 2019; Richardson et al. 2019; Abebe et al. 2020). More data by themselves do not lead to greater accuracy or greater representation. On the contrary, they may exacerbate issues of inconclusive data by enabling correlations to be found where there really are none. As Ruha Benjamin (2020) put it, "computational depth without historical or sociological depth is just superficial learning [not deep learning]". These limitations pose serious constraints on the justifiability of algorithmic outputs, which could have a negative impact on individuals or an entire population due to suboptimal inferences or, in the case of the physical sciences, even tip the evidence for or against "a specific scientific theory" (Ras et al. 2018, 10). This is why it is crucial to ensure that data fed to algorithms are validated independently, and that data retention and reproducibility measures are in place to mitigate inconclusive evidence leading to unjustified actions, along with auditing processes to identify unfair outcomes and unintended consequences (Henderson et al. 2018; Rahwan 2018; Davis and Marcus 2019; Brundage et al. 2020).

The danger arising from inconclusive evidence and erroneous actionable insights also stems from the perceived mechanistic objectivity associated with computer-generated analytics (Karppi 2018; Lee 2018; Buhmann et al. 2019). This can lead to human decision-makers ignoring their own experienced assessments—so-called 'automation bias' (Cummings 2012)—or even shirking part of their responsibility for decisions (see Traceability below) (Grote and Berens 2020). As we shall see in Sects. 4 and 8, a lack of understanding of how algorithms generate outputs exacerbates this problem.

4 Inscrutable evidence leading to opacity

Inscrutable evidence focuses on problems related to the lack of transparency that often characterises algorithms (particularly ML algorithms and models); the socio-technical infrastructure in which they exist; and the decisions they support. Lack of transparency—whether inherent due to the limits of technology or acquired by design decisions and obfuscation of the underlying data (Lepri et al. 2018; Dahl 2018; Ananny and Crawford 2018; Weller 2019)—often translates into a lack of scrutiny and/or accountability (Oswald 2018; Fink 2018; Webb et al. 2019) and leads to a lack of "trustworthiness" (see AI HLEG 2019).

According to the recent literature, factors contributing to the overall lack of algorithmic transparency include the cognitive impossibility for humans to interpret massive algorithmic models and datasets; a lack of appropriate tools to visualise and track large volumes of code and data; code and data that are so poorly structured that they are impossible to read; and ongoing updates and human influence over a model (Diakopoulos and Koliska 2017; Stilgoe 2018; Zerilli et al. 2019; Buhmann et al. 2019). Lack of transparency is also an inherent characteristic of self-learning algorithms, which alter their decision logic (produce new sets of rules) during the learning process, making it difficult for developers to maintain a detailed understanding of why certain changes were made (Burrell 2016; Buhmann et al. 2019). However, this does not necessarily translate into opaque outcomes, as even without understanding each logical step, developers can adjust hyperparameters, the parameters that govern the training process, to test for various outputs. In this respect, Martin (2019) stresses that, while the difficulty of explaining ML algorithms' outputs is certainly real, it is important not to let this difficulty incentivise organisations to develop complex systems to shirk responsibility.
Lack of transparency can also result from the malleability of algorithms, whereby algorithms can be reprogrammed in a continuous, distributed, and dynamic way (Sandvig et al. 2016). Algorithmic malleability allows developers to monitor and improve an already-deployed algorithm, but it may also be abused to blur the history of its evolution and leave end-users in a state of confusion about the affordances of a given algorithm (Ananny and Crawford 2018). Consider for example Google's main search algorithm. Its malleability enables the company to make continuous revisions, suggesting a permanent state of destabilisation (Sandvig et al. 2016). This requires those affected by the algorithm to monitor it constantly and update their understanding accordingly, an impossible task for most (Ananny and Crawford 2018).

As Floridi and Turilli (2009, 105) note, transparency is not an "ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles". And indeed, complete transparency can itself cause distinct ethical problems (Ananny and Crawford 2018): transparency can provide users with some critical information about the features and limitations of an algorithm, but it can also overwhelm users with information and thus render the algorithm more opaque (Kizilcec 2016; Ananny and Crawford 2018). Other research stresses that excessive focus on transparency can be detrimental to innovation and unnecessarily divert resources that could instead be used to improve safety, performance and accuracy (Danks and London 2017; Oswald 2018; Ananny and Crawford 2018; Weller 2019). For example, the debate over prioritising transparency (and explainability) is especially contentious in the context of medical algorithms (Robbins 2019).

Transparency can enable individuals to game the system (Martin 2019; Magalhães 2018; Floridi et al. 2020). Knowledge about the source of a dataset, the assumptions under which sampling was done, or the metrics that an algorithm uses to sort new inputs, may be used to figure out ways to take advantage of an algorithm (Szegedy et al. 2014; Yampolskiy 2018). Yet, the ability to game algorithms is only within reach for some groups of the population—those with higher digital literacy, for example—thus creating another form of social inequality (Martin 2019; Bambauer and Zarsky 2018). Therefore, confusing transparency for an end in itself, instead of a pro-ethical factor (Floridi 2017) enabling crucial ethical practices, may not solve existing ethical problems related to the use of algorithms and may, indeed, pose new ones. This is why it is important to distinguish between the different factors that may hinder transparency of algorithms, identify their cause, and nuance the call for transparency by specifying which factors are required and at which layers of algorithmic systems they should be addressed (Diakopoulos and Koliska 2017).

There are different ways of addressing the problems related to lack of transparency. For example, Gebru et al. (2020) propose that the constraints on transparency posed by the malleability of algorithms can be addressed, in part, by using standard documentary procedures similar to those deployed in the electronics industry, where:

"every component, no matter how simple or complex, is accompanied with a datasheet describing its operating characteristics, test results, recommended usage, and other information" (Gebru et al. 2020, 2).

Unfortunately, publicly available documentation is currently uncommon in the development of algorithmic systems and there is no agreed-upon format for what should be included when documenting the origin of a dataset (Arnold et al. 2019; Gebru et al. 2020).
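A minimal sketch of what such documentation could look like in code is given below; the field names and values are hypothetical and merely echo the spirit of the datasheet proposal, not an agreed-upon schema:

# Hypothetical, minimal dataset documentation record, loosely inspired by the
# "datasheets for datasets" idea; field names are illustrative, not a standard.
dataset_datasheet = {
    "name": "clinical-outcomes-v1",                        # made-up dataset name
    "motivation": "Predict patient outcomes from quantified clinical inputs",
    "collection_period": "November 2016 to March 2020",
    "provenance": "Exported from a single research hospital's records, de-identified",
    "sampling": "All admissions at one site; no randomised sampling design",
    "known_limitations": [
        "Single-site data; may not transfer to rural clinics",
        "Qualitative factors (e.g. willingness to live) are not recorded",
    ],
    "recommended_usage": "Research only; not validated for clinical deployment",
}

def render_datasheet(sheet):
    # Print the record in a human-readable form for reviewers and auditors.
    for field, value in sheet.items():
        if isinstance(value, list):
            print(f"{field}:")
            for item in value:
                print(f"  - {item}")
        else:
            print(f"{field}: {value}")

render_datasheet(dataset_datasheet)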
Although relatively nascent, another potentially promising approach to enforcing algorithmic transparency is the use of technical tools to test and audit algorithmic systems and decision-making. Testing whether algorithms exhibit negative tendencies, like unfair discrimination, and auditing a prediction or decision trail in detail, can help maintain a high level of transparency (Weller 2019; Malhotra et al. 2018; Brundage et al. 2020). To this end, discursive frameworks have been developed to help businesses and public sector organisations understand the potential impacts of opaque algorithms, thus encouraging good practices (ICO 2020). For instance, the AI Now Institute at New York University has produced algorithmic impact assessment guidance, which seeks to raise awareness and improve dialogue over potential harms of ML algorithms (Reisman et al. 2018). This includes the two aims of enabling developers to design more transparent, and therefore more trustworthy, ML algorithms, and of improving the public understanding and control of algorithms. In the same vein, Diakopoulos and Koliska have provided a comprehensive list of "transparency factors" across four layers of algorithmic systems: data, model, inference, and interface. Factors include, inter alia:

"uncertainty (e.g. error margins), timeliness (e.g. when was the data collected), completeness or missing elements, sampling method, provenance (e.g. sources), and volume (e.g. of training data used in machine learning)" (Diakopoulos and Koliska 2017, 818).

Effective transparency procedures are likely to, and indeed ought to, involve an interpretable explanation of the internal processes of these systems. Buhmann et al. (2019) argue that while a lack of transparency is an inherent feature of many ML algorithms, this does not mean that improvements cannot be made (Watson et al. 2019). For example, companies like Google and IBM have increased their efforts to make ML algorithms more interpretable and inclusive by making tools such as Explainable AI, AI Explainability 360, and the What-If Tool publicly available. These tools provide developers and also the general public with interactive visual interfaces that improve human readability, explore various model results, provide case-based reasoning and directly interpretable rules, and even identify and mitigate unwanted biases in datasets and algorithmic models (Mojsilovic 2018; Wexler 2018).
However, explanations for ML algorithms are constrained by the type of explanation sought, the fact that decisions are often multi-dimensional in their nature, and the fact that different users may require different explanations (Edwards and Veale 2017). Identifying appropriate methods for providing explanations has been a problem since the late 1990s (Tickle et al. 1998), but contemporary efforts can be categorised into two main approaches: subject-centric explanations and model-centric explanations (Doshi-Velez and Kim 2017; Lee et al. 2017; Baumer 2017; Buhmann et al. 2019). In the former, the accuracy and length of the explanation is tailored to users and their specific interactions with a given algorithm (see for example [Green and Viljoen 2020] and the game-like model proposed by [Watson and Floridi 2020]); in the latter, explanations concern the model as a whole and do not depend on their audience.
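To make the model-centric option concrete, the sketch below computes a simple permutation-style importance score for a toy scoring function; the model, feature names and data are hypothetical stand-ins, and real systems would apply the same idea to a trained model:

import random

# Toy "model": a fixed scoring rule standing in for a trained classifier.
def model_score(income, debt, postcode_risk):
    return 0.6 * income - 0.3 * debt + 0.1 * postcode_risk

# Hypothetical applicants: (income, debt, postcode_risk), all normalised to [0, 1].
data = [(0.9, 0.2, 0.5), (0.4, 0.7, 0.1), (0.6, 0.3, 0.9), (0.2, 0.9, 0.4)]

def permutation_importance(data, feature_index, trials=100):
    # Model-centric explanation: measure how much the model's outputs shift,
    # on average, when one feature's values are shuffled across examples.
    random.seed(0)
    baseline = [model_score(*row) for row in data]
    total_shift = 0.0
    for _ in range(trials):
        column = [row[feature_index] for row in data]
        random.shuffle(column)
        shuffled = [row[:feature_index] + (column[i],) + row[feature_index + 1:]
                    for i, row in enumerate(data)]
        scores = [model_score(*row) for row in shuffled]
        total_shift += sum(abs(a - b) for a, b in zip(scores, baseline)) / len(data)
    return total_shift / trials

for i, name in enumerate(["income", "debt", "postcode_risk"]):
    print(name, round(permutation_importance(data, i), 3))

The output ranks features by their influence on the model as a whole, regardless of who is asking; a subject-centric explanation would instead be built around the particular person affected by a given decision.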
Explainability is particularly important when considering the rapidly growing number of open source and easy-to-use models and datasets. Increasingly, non-experts are experimenting with state-of-the-art algorithmic models widely available via online libraries or platforms, like GitHub, without always fully grasping their limits and properties (Hutson 2019). This has prompted scholars to suggest that, to tackle the issue of technical complexity, it is necessary to invest more heavily in public education to enhance computational and data literacy (Lepri et al. 2018). Doing so would seem to be an appropriate long-term solution to the multi-layered issues introduced by ubiquitous algorithms, and open-source software is often cited as critical to the solution (Lepri et al. 2018).

5 Misguided evidence leading to unwanted bias

Developers are predominantly focused on ensuring that their algorithms perform the tasks for which they were designed. Thus, the type of thinking that guides developers is essential to understanding the emergence of bias in algorithms and algorithmic decision-making. Some scholars refer to the dominant thinking in the field of algorithm development as being defined by "algorithmic formalism"—an adherence to prescribed rules and form (Green and Viljoen 2020, 21). While this approach is useful for abstracting and defining analytical processes, it tends to ignore the social complexity of the real world (Katell et al. 2020). Indeed, this approach leads to algorithmic interventions that strive to be 'neutral' but, in doing so, risk entrenching existing social conditions (Green and Viljoen 2020, 20), while creating the illusion of precision (Karppi 2018; Selbst et al. 2019). For these reasons, the use of algorithms in some settings is questioned altogether (Selbst et al. 2019; Mayson 2019; Katell et al. 2020; Abebe et al. 2020). For example, a growing number of scholars criticise the use of algorithm-based risk assessment tools in court settings (Berk et al. 2018; Abebe et al. 2020).

Some scholars affirm the limits of abstractions with regard to unwanted bias in algorithms and argue for the need to develop a sociotechnical frame to address and improve the fairness of algorithms (Edwards and Veale 2017; Selbst et al. 2019; Wong 2019; Katell et al. 2020; Abebe et al. 2020). In this respect, Selbst et al. (2019, 60-63) point to five abstraction "traps", or failures to account for the social context in which algorithms operate, which persist in algorithmic design due to the absence of a sociotechnical frame, namely:

1. A failure to model the entire system over which a social criterion, such as fairness, will be enforced;
2. A failure to understand how repurposing algorithmic solutions designed for one social context may be misleading, inaccurate, or otherwise do harm when applied to a different context;
3. A failure to account for the full meaning of social concepts such as fairness, which can be procedural, contextual, and contestable, and cannot be resolved through mathematical formalisms;
4. A failure to understand how the insertion of technology into an existing social system changes the behaviours and embedded values of the pre-existing system; and
5. A failure to recognize the possibility that the best solution to a problem may not involve technology.

The term 'bias' often comes with a negative connotation, but it is used here to denote a "deviation from a standard" (Danks and London 2017, 4692), which can occur at any stage of the design, development, and deployment process. The data used to train an algorithm is one of the main sources from which bias emerges (Shah 2018), through preferentially sampled data or data reflecting existing societal bias (Diakopoulos and Koliska 2017; Danks and London 2017; Binns 2018; Malhotra et al. 2018). For example, morally problematic structural inequalities that disadvantage certain ethnicities may not be apparent in data and thus not corrected for (Noble 2018; Benjamin 2019). Additionally, data used to train algorithms are seldom obtained "according to any specific experimental design" (Olhede and Wolfe 2018, 3) and are used even though they may be inaccurate, skewed, or systemically biased, offering a poor representation of a population under study (Richardson et al. 2019).

One possible approach to mitigating this problem is to exclude intentionally some specific data variables from informing algorithmic decision-making.
Indeed, the processing of statistically relevant sensitive or "protected variables"—such as gender or race—is typically limited or prohibited under anti-discrimination and data protection law, to limit the risks of unfair discrimination. Unfortunately, even if protections for specific classes can be encoded in an algorithm, there could always be biases that were not considered ex ante, as in the case, for example, of language models reproducing heavily male-focused texts (Fuster et al. 2017; Doshi-Velez and Kim 2017). Even when bias is anticipated and protected variables are excluded from the data, unanticipated proxies for these variables could still be used to reconstruct biases, leading to "bias by proxy" that is difficult to detect and avoid (Fuster et al. 2017; Gillis and Spiess 2019).
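A toy sketch of this effect is shown below; the postcodes, group labels and outcomes are invented solely to illustrate how a retained feature can stand in for an excluded protected attribute:

from collections import Counter

# Toy illustration of "bias by proxy": the protected attribute is dropped from
# the training data, but a correlated feature (postcode) still encodes it.
records = [
    # (postcode, protected_group, repaid_loan) -- entirely made-up values
    ("A1", "group_x", 0), ("A1", "group_x", 0), ("A1", "group_x", 1),
    ("B2", "group_y", 1), ("B2", "group_y", 1), ("B2", "group_y", 0),
]

# "Fairness through unawareness": the protected attribute is excluded...
training_data = [(postcode, repaid) for postcode, _, repaid in records]

# ...yet postcode predicts group membership perfectly in this toy sample, so any
# decision rule keyed on postcode can reproduce the group-level disparity.
group_by_postcode = {}
for postcode, group, _ in records:
    group_by_postcode.setdefault(postcode, Counter())[group] += 1
print(group_by_postcode)  # {'A1': Counter({'group_x': 3}), 'B2': Counter({'group_y': 3})}

for pc in ("A1", "B2"):
    repaid = [r for p, r in training_data if p == pc]
    print(pc, "repayment rate:", sum(repaid) / len(repaid))  # postcode alone separates the groups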
At the same time, there may be good reasons to rely on statistically biased estimators in algorithmic processing, as they can be used to mitigate training data bias. In this way, one type of problematic algorithmic bias is counterbalanced by another type of algorithmic bias or by introducing compensatory bias when interpreting algorithmic outputs (Danks and London 2017). Simpler approaches to mitigating bias in data involve piloting algorithms in different contexts and with various datasets (Shah 2018). Having a model, its datasets, and metadata (on provenance) published to enable external scrutiny can also help correct unseen or unwanted bias (Shah 2018). It is also worth noting that so-called 'synthetic data', or algorithmically generated data, produced via reinforcement learning or generative adversarial networks (GANs), offer an opportunity to address certain issues of data bias (Floridi 2019a; Xu et al. 2018). Fair data generation with GANs may help diversify datasets used in computer vision algorithms (Xu et al. 2018). For example, StyleGAN2 (Karras et al. 2019) is able to produce high-quality images of non-existing human faces and has proven to be especially useful in creating diverse datasets of human faces, something that many algorithmic systems for facial recognition currently lack (Obermeyer et al. 2019; Kortylewski et al. 2019; Harwell 2020).

Unwanted bias also occurs due to improper deployment of an algorithm. Consider transfer context bias: the problematic bias that emerges when a functioning algorithm is used in a new environment. For example, if a research hospital's healthcare algorithm is used in a rural clinic and assumes that the same level of resources is available to the rural clinic as to the research hospital, the healthcare resource allocation decisions generated by the algorithm will be inaccurate and flawed (Danks and London 2017).

In the same vein, Grgić-Hlača et al. (2018) warn of vicious cycles when algorithms make misguided chain assessments. For example, in the context of the COMPAS risk-assessment algorithm, one of the assessment criteria for predicting recidivism is the criminal history of a defendant's friends. It follows that having friends with a criminal history would create a vicious cycle in which a defendant with convicted friends will be deemed more likely to offend, and therefore sentenced to prison, hence increasing the number of people with criminal records in a given group on the basis of mere correlation (Grgić-Hlača et al. 2018; Richardson et al. 2019).

High-profile examples of algorithmic bias in recent years—not least investigative reporting around the COMPAS system (Angwin et al. 2016)—have led to a growing focus on issues of algorithmic fairness. The definition and operationalisation of algorithmic fairness have become "urgent tasks in academia and industry" (Shin and Park 2019), as the significant uptick in the number of papers, workshops and conferences dedicated to 'fairness, accountability and transparency' (FAT) highlights (Hoffmann et al. 2018; Ekstrand and Levy 2018; Shin and Park 2019). We analyse key topics and contributions in this area in the next section.

6 Unfair outcomes leading to discrimination

There is widespread agreement on the need for algorithmic fairness, particularly to mitigate the risks of direct and indirect discrimination (under US law, 'disparate treatment' and 'disparate impact', respectively) due to algorithmic decisions (Barocas and Selbst 2016; Grgić-Hlača et al. 2018; Green and Chen 2019). Yet there remains a lack of agreement among researchers on the definition, measurements and standards of algorithmic fairness (Gajane and Pechenizkiy 2018; Saxena et al. 2019; Lee 2018; Milano et al. 2020). Wong (2019) identifies up to 21 definitions of fairness across the literature, and such definitions are often mutually inconsistent (Doshi-Velez and Kim 2017).

There are many nuances in the definition, measurement, and application of different standards of algorithmic fairness. For instance, algorithmic fairness can be defined in relation to groups as well as individuals (Doshi-Velez and Kim 2017). Four main definitions of algorithmic fairness have gained prominence in the recent literature (see for example [Kleinberg et al. 2016; Corbett-Davies and Goel 2018]):

1. Anti-classification, which refers to protected categories, such as race and gender, and their proxies not being explicitly used in decision making;
2. Classification parity, which regards a model as being fair if common measures of predictive performance, including false positive and negative rates, are equal across protected groups;
3. Calibration, which considers fairness as a measure of how well-calibrated an algorithm is between protected groups;
4. Statistical parity, which defines fairness as an equal average probability estimate over all members of protected groups.
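As an illustration of how two of these criteria can be operationalised, the following sketch (with entirely made-up predictions, outcomes and group labels) compares positive decision rates (statistical parity) and false positive rates (one ingredient of classification parity) across two protected groups:

# Illustrative check of two fairness criteria on toy data (hypothetical values).
# y_true: actual outcomes, y_pred: algorithmic decisions, group: protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(preds):
    return sum(preds) / len(preds)

def false_positive_rate(truth, preds):
    fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)
    negatives = sum(1 for t in truth if t == 0)
    return fp / negatives if negatives else 0.0

for g in ("a", "b"):
    idx = [i for i, x in enumerate(group) if x == g]
    preds = [y_pred[i] for i in idx]
    truth = [y_true[i] for i in idx]
    # Statistical parity compares positive decision rates across groups;
    # classification parity compares error rates such as the FPR.
    print(g, "positive rate:", positive_rate(preds),
          "FPR:", false_positive_rate(truth, preds))

Unequal rates across the two groups would signal a violation of the corresponding criterion; as discussed next, satisfying all such criteria at once is generally impossible.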
However, each of these commonly used definitions of fairness has drawbacks, and they are generally mutually incompatible (Kleinberg et al. 2016). Taking anti-classification as an example, protected characteristics, such as race, gender and religion, cannot simply be removed from training data to prevent discrimination, as noted above (Gillis and Spiess 2019). Structural inequalities mean that formally non-discriminatory data points such as postcodes can act as proxies for, and be used, either intentionally or unintentionally, to infer protected characteristics, like race (Edwards and Veale 2017).

There are important cases where it is appropriate to consider protected characteristics to make equitable decisions. For example, lower female reoffending rates mean that excluding gender as an input in recidivism algorithms would leave women with disproportionately high-risk ratings (Corbett-Davies and Goel 2018). Because of this, Binns (2018) stresses the importance of considering the historical and sociological context that cannot be captured in the data presented to algorithms but that can inform contextually appropriate approaches to fairness in algorithms. It is also critical to note that algorithmic models can often produce unexpected outcomes that run contrary to human intuitions and perturb their understanding. For example, as Grgić-Hlača et al. (2018) highlight, using features that people believe to be fair can in some cases increase the racism exhibited by algorithms and decrease accuracy.

Regarding methods for improving algorithmic fairness, Veale and Binns (2017) and Katell et al. (2020) offer two approaches. The first envisages a third-party intervention, whereby an entity external to the provider of algorithms would hold data on sensitive or protected characteristics and attempt to identify and reduce discrimination caused by the data and models. The second approach proposes a collaborative knowledge-based method which would focus on community-driven data resources containing practical experiences of ML and modelling (Veale and Binns 2017; Katell et al. 2020). The two approaches are not mutually exclusive, they may bring different benefits depending on contexts of application, and their combination may also be beneficial.

Given the significant impact that algorithmic decisions have on people's lives and the importance of context for choosing appropriate measures of fairness, it is surprising that there has been little effort to capture public views on algorithmic fairness (Lee et al. 2017; Saxena et al. 2019; Binns 2018). Examining public perceptions of different definitions of algorithmic fairness, Saxena et al. (2019, 3) note that in the context of loan decisions people exhibit a preference for a "calibrated fairness definition", or merit-based selection, as compared to "treating similar people similarly", and argue in favour of the principle of affirmative action. In a similar study, Lee (2018) offers evidence suggesting that, when considering tasks that require uniquely human skills, people consider algorithmic decisions to be less fair and algorithms to be less trustworthy.

Reporting on empirical work conducted on algorithmic interpretability and transparency, Webb et al. (2019) reveal that moral references, particularly on fairness, are consistent across participants discussing their preferences on algorithms. The study notes that people tend to go beyond personal preferences to focus instead on "right and wrong behaviour", as a way to indicate the need to understand the context of deployment of the algorithm and the difficulty of understanding the algorithm and its consequences (Webb et al. 2019). In the context of recommender systems, Burke (2017) proposes a multi-stakeholder and multi-sided approach to defining fairness, moving beyond user-centric definitions to include the interests of other system stakeholders.

It has become clear that understanding the public view on algorithmic fairness would help technologists in developing algorithms with fairness principles that align with the sentiments of the general public on prevailing notions of fairness (Saxena et al. 2019, 1). Grounding the design decisions of the providers of an algorithm "with reasons that are acceptable by the most adversely affected", as well as being "open to adjustments in light of new reasons" (Wong 2019, 15), is crucial to improving the social impact of algorithms. It is important to appreciate, however, that measures of fairness are often completely inadequate when they seek to validate models that are deployed on groups of people that are already disadvantaged in society because of their origin, income level, or sexual orientation. We simply cannot "optimise around" (Benjamin 2019) existing economic, social, and political power dynamics (Winner 1980; Benjamin 2019).

7 Transformative effects leading to challenges for autonomy and informational privacy

The collective impact of algorithms has spurred discussions on the autonomy afforded to end users (Ananny and Crawford 2018; Beer 2017; Taddeo and Floridi 2018b; Möller et al. 2018; Malhotra et al. 2018; Shin and Park 2019; Hauer 2019; Bauer and Dubljević 2020). Algorithm-based services are increasingly featured "within an ecosystem of complex, socio-technical issues" (Shin and Park 2019), which can hinder the autonomy of users. Limits to users' autonomy stem from three sources:
1. pervasive distribution and proactivity of (learning) algorithms to inform users' choice (Yang et al. 2018; Taddeo and Floridi 2018b);
2. users' limited understanding of algorithms;
3. lack of second-order power (or appeals) over algorithmic outcomes (Rubel et al. 2019).

In considering the ethical challenges of AI, Yang et al. (2018, 11) focus on the impact of autonomous, self-learning algorithms on human self-determination and stress that "AI's predictive power and relentless nudging, even if unintentional, should foster and not undermine human dignity and self-determination".

The risk that algorithmic systems may hinder human autonomy by shaping users' choices has been widely reported in the literature and has taken centre stage in most of the high-level ethical principles for AI, including, inter alia, those of the European Commission's European Group on Ethics in Science and Technologies, and the UK's House of Lords Artificial Intelligence Committee (Floridi and Cowls 2019). In their analysis of these high-level principles, Floridi and Cowls (2019) note that it does not suffice that algorithms promote people's autonomy: rather, the autonomy of algorithms should be constrained and reversible. Looking beyond the West, the Beijing AI Principles—developed by a consortium of China's leading companies and universities for guiding AI research and development—also emphasise that human autonomy should be respected (Roberts et al. 2020).

Human autonomy can also be limited by the inability of an individual to understand some information or make the appropriate decisions. As Shin and Park suggest, algorithms "do not have the affordance that would allow users to understand them or how best to utilize them to achieve their goals" (Shin and Park 2019, 279). As such, a key issue identified in debates over users' autonomy is the difficulty of striking an appropriate balance between people's own decision-making and that which they delegate to algorithms (Floridi et al. 2018). This is further complicated by a lack of transparency over the decision-making process by which particular decisions are delegated to algorithms. Ananny and Crawford (2018) note that often this process does not account for all stakeholders and is not void of structural inequalities.

As a method of Responsible Research and Innovation (RRI), 'participatory design' is often mentioned for its focus on the design of algorithms to promote the values of end users and protect their autonomy (Whitman et al. 2018; Katell et al. 2020). Participatory design aims at "bringing participants' tacit knowledge and embodied experience into the design process" (Whitman et al. 2018, 2). For example, Rahwan's 'Society-in-the-Loop' (2018) conceptual framework seeks to enable different stakeholders in society to design algorithmic systems before deployment and to amend and reverse the decisions of algorithmic systems that already underlie social activities. This framework aims to maintain a well-functioning "algorithmic social contract", defined as "a pact between various human stakeholders, mediated by machines" (Rahwan 2018, 1). It accomplishes this by identifying and negotiating the values of different stakeholders affected by algorithmic systems as the basis for monitoring adherence to the social contract.

Informational privacy is intimately linked with user autonomy (Cohen 2000; Rössler 2015). Informational privacy guarantees peoples' freedom to think, communicate, and form relationships, among other essential human activities (Rachels 1975; Allen 2011). However, people's increasing interaction with algorithmic systems has effectively reduced their ability to control who has access to information that concerns them and what is being done with it. The vast amounts of sensitive data required in algorithmic profiling and predictions, central to recommender systems, pose multiple issues regarding individuals' informational privacy.

Algorithmic profiling takes place over an indefinite period of time, in which individuals are categorised according to a system's internal logic, and their profiles are updated as new information is obtained about them. This information is typically obtained directly, from when a person interacts with a given system, or indirectly, inferred from algorithmically assembled groups of individuals (Paraschakis 2018). Indeed, algorithmic profiling will also rely on information gathered about other individuals and groups of people that have been categorised in a similar manner to a targeted person. This includes information ranging from characteristics like geographical location and age to information on specific behaviour and preferences, including what type of content a person is likely to seek the most on a given platform (Chakraborty et al. 2019). While this poses a problem of inconclusive evidence, it also indicates that if group privacy (Taylor et al. 2017) is not ensured, it may be impossible for individuals to ever remove themselves from the process of algorithmic profiling and predictions (Milano et al. 2020). In other words, individuals' informational privacy cannot be secured without securing group privacy.

Users may not always be aware of, or may not have the ability to gain awareness about, the type of information that is being held about them and what that information is used for. Considering that recommender systems contribute to the dynamic construction of individuals' identities by intervening in their choices, a lack of control over one's information translates into a loss of autonomy.

Giving individuals the ability to contribute to the design of a recommender system can help create more accurate profiles that account for attributes and social categories that would have otherwise not been included in the labelling used by the system to categorise users (Milano et al. 2020).
While the desirability of improving algorithmic profiling will vary with the context, improving the algorithmic design by including feedback from the various stakeholders of the algorithm falls in line with the aforementioned scholarship on RRI and improves users' ability for self-determination (Whitman et al. 2018).

Knowledge about who owns one's data and what is done with them can also help inform trade-offs between informational privacy and information-processing benefits (Sloan and Warner 2018, 21). For example, in medical contexts, individuals are more likely to be willing to share information that can help inform their, or others', diagnostics, and less so in the context of job recruitment. Information coordination norms, as Sloan and Warner (2018) argue, can serve to ensure that these trade-offs adapt correctly to different contexts and do not place an excessive amount of responsibility and effort on single individuals. For example, personal information ought to flow differently in the context of law enforcement procedures as compared to a job recruitment process. The European Union's General Data Protection Regulation has played an important role in instituting the basis of such norms (Sloan and Warner 2018).

Finally, a growing scholarship on differential privacy is providing new privacy protection methods for organisations looking to protect their users' privacy while also keeping good model quality, as well as manageable software costs and complexity, striking a balance between utility and privacy (Abadi et al. 2016; Wang et al. 2017; Xian et al. 2017). Technical advancements of this kind allow organisations to share a dataset publicly while keeping information about individuals secret (preventing re-identification), and can ensure provable privacy protection for sensitive data, such as genomic data (Wang et al. 2017). Indeed, differential privacy was recently used by Social Science One and Facebook to release safely one of the largest datasets (38 million URLs shared publicly on Facebook) for academic research on the societal impacts of social media (King and Persily 2020).
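As a minimal illustration of the underlying idea (a generic Laplace mechanism for a counting query, not the specific methods used in the works cited above), consider the following sketch:

import random

def dp_count(records, predicate, epsilon=0.5):
    # A counting query has sensitivity 1: adding or removing one person changes
    # the result by at most 1, so Laplace noise with scale 1/epsilon gives
    # epsilon-differential privacy for this single release.
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical usage: a privacy-preserving count of users older than 40.
users = [{"age": a} for a in (23, 45, 31, 52, 67, 29, 41)]
print(dp_count(users, lambda u: u["age"] > 40, epsilon=0.5))

The smaller epsilon is, the noisier (and more private) the released count becomes, which is the trade-off between utility and privacy mentioned above.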
8 Traceability leading to moral responsibility

The technical limitations of various ML algorithms, such as lack of transparency and lack of explainability, undermine their scrutability and highlight the need for novel approaches to tracing moral responsibility and accountability for the actions performed by ML algorithms. Regarding moral responsibility, Reddy et al. (2019) note a common blurring between technical limitations of algorithms and the broader legal, ethical, and institutional boundaries in which they operate. Even for non-learning algorithms, traditional, linear conceptions of responsibility prove to offer limited guidance in contemporary sociotechnical contexts. Wider sociotechnical structures make it difficult to trace back responsibility for actions performed by distributed, hybrid systems of human and artificial agents (Floridi 2012; Crain 2018).

Additionally, due to the structure and operation of the data brokerage market, it is in many cases impossible to "trace any given datum to its original source" once it has been introduced to the marketplace (Crain 2018, 93). Reasons for this include trade secret protection; complex markets that "divorce" the data collection process from the selling and buying process; and the mix of large volumes of computationally generated information with "no 'real' empirical source" combined with genuine data (Crain 2018, 94).

The technical complexity and dynamism of ML algorithms make them prone to concerns of "agency laundering": a moral wrong which consists in distancing oneself from morally suspect actions, regardless of whether those actions were intended or not, by blaming the algorithm (Rubel et al. 2019). This is practiced by organisations as well as by individuals. Rubel et al. provide a straightforward and chilling example of agency laundering by Facebook:

"Using Facebook's automated system, the ProPublica team found a user-generated category called "Jew hater" with over 2200 members. [...] To help ProPublica find a larger audience (and hence have a better ad purchase), Facebook suggested a number of additional categories. [...] ProPublica used the platform to select other profiles displaying anti-Semitic categories, and Facebook approved ProPublica's ad with minor changes. When ProPublica revealed the anti-Semitic categories and other news outlets reported similarly odious categories, Facebook responded by explaining that algorithms had created the categories based on user responses to target fields [and that] "[w]e never intended or anticipated this functionality being used this way" (Rubel et al. 2019, 1024-25).

Today, the failure to grasp the unintended effects of mass personal data processing and commercialisation, a familiar problem in the history of technology (Wiener 1950; Klee 1996; Benjamin 2019), is coupled with the limited explanations that most ML algorithms provide (Watson et al. 2019). This approach risks favouring the avoidance of responsibility through "the computer said so" type of denial (Karppi 2018). This can lead field experts, such as clinicians, to avoid questioning the suggestion of an algorithm even when it may seem odd to them. The interplay between field experts and ML algorithms can prompt "epistemic vices" (Grote and Berens 2020), such as dogmatism or gullibility (Hauer 2019), and hinder the attribution of responsibility in distributed systems (Floridi 2016). To address this issue, Shah's analysis (2018) stresses that the risk that some stakeholders may breach their responsibilities can be addressed, for example, by establishing separate bodies for the ethical oversight of algorithms (e.g. DeepMind Health established an Independent Review Panel with unfettered access to the company until Google halted it in 2019) (Murgia 2018).
However, expecting a single oversight body, like a research ethics committee or institutional review board, to "be solely responsible for ensuring the rigour, utility, and probity of big data" is unrealistic (Lipworth et al. 2017, 8). Indeed, some have argued that these initiatives lack any sort of consistency and can rather lead to "ethics bluewashing", understood as:

"implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions to appear more digitally ethical than one is." (Floridi 2019b, 187).

Faced with strict legal regimes, resourceful actors may also resort to so-called "ethics dumping" whereby unethical "processes, products or services" are exported to countries with weaker frameworks and enforcement mechanisms, after which the outcomes of such unethical activities are "imported back" (Floridi 2019b, 190).

There are a number of detailed approaches to establishing algorithmic accountability in the reviewed literature. While ML algorithms do require a level of technical intervention to improve their explainability, most approaches focus on normative interventions (Fink 2018). For example, Ananny and Crawford argue that, at least, providers of algorithms ought to facilitate public discourse about their technology (Ananny and Crawford 2018). Similarly, to address the issue of ad hoc ethical actions, some have claimed that accountability should first and foremost be addressed as a matter of convention (Dignum et al. 2018; Reddy et al. 2019).

Looking to fill the convention "gap", Buhmann et al. (2019) borrow from the seven principles for algorithms set out by the Association for Computing Machinery, claiming that through, inter alia, awareness of their algorithms, validation, and testing, an organisation should take responsibility for their algorithms regardless of how opaque they are (Malhotra et al. 2018). Decisions regarding the deployment of algorithms should incorporate factors such as desirability and the wider context in which they will operate, which should then lead to a more accountable "algorithmic culture" (Vedder and Naudts 2017, 219). To capture such considerations, "interactive and discursive fora and processes" with relevant stakeholders, as suggested by Buhmann et al., may prove a useful means (Buhmann et al. 2019, 13).

In the same vein, Binns (2018) focuses on the political-philosophical concept of "public reason". Considering that the processes for ascribing responsibility for the actions of an algorithm differ, both in nature and scope, in the public versus private sector, Binns calls for the establishment of a publicly shared framework (Binns 2018; see also Dignum et al. 2018), according to which algorithmic decisions should be able to withstand the same level of public scrutiny that human decision-making would receive. This approach has been echoed by many others in the reviewed literature (Ananny and Crawford 2018; Blacklaws 2018; Buhmann et al. 2019).

Problems relating to 'agency laundering' and 'ethics shirking' arise from the inadequacy of existing conceptual frameworks to trace and ascribe moral responsibility. As Floridi points out, when considering algorithmic systems and the impact of their actions:

"we are dealing with DMAs [distributed moral actions] arising from morally neutral interactions of (potentially hybrid) networks of agents? In other words, who is responsible (distributed moral responsibility, DMR) for DMAs?" (Floridi 2016, 2).

Floridi's analysis suggests ascribing full moral responsibility "by default and overridably" to all the agents in the network which are causally relevant to the given action of the network. The proposed approach builds on the concepts of back-propagation from network theory, strict liability from jurisprudence, and common knowledge from epistemic logic. Notably, this approach decouples moral responsibility from the intentionality of the actors and from the very idea of punishment and reward for performing a given action, to focus instead on the need to rectify mistakes (back-propagation) and improve the ethical working of all the agents in the network.

9 Conclusion

This article builds on, and updates, previous research conducted by our group (Mittelstadt et al. 2016) to review relevant literature published since 2016 on the ethics of algorithms. Although that article is now inevitably outdated in terms of specific references and detailed information about the literature reviewed, the map, and the six categories that it provides, have withstood the test of time and remain a valuable tool to scope the ethics of algorithms as an area of research, with a growing body of literature focusing on each of the six categories, contributing either to refine our understanding of existing problems or to provide solutions to address them.

Since 2016, the ethics of algorithms has become a central topic of discussion among scholars, technology providers, and policymakers. The debate has gained traction also because of the so-called "summer of AI", and with it the pervasive use of ML algorithms. Many of the ethical questions analysed in this article and the literature it reviews have been addressed in national and international ethical guidelines and principles, like the aforementioned European Commission's European Group on Ethics in Science and Technologies, the UK's House of Lords Artificial Intelligence Committee (Floridi and Cowls 2019), and the OECD principles on AI (OECD 2019).
One aspect that was not explicitly captured by the original map, and which is becoming a central point of discussion in the relevant literature, is the increasing focus on the use of algorithms, AI and digital technologies more broadly, to deliver socially good outcomes (Hager et al. 2019; Floridi et al. 2020; Cowls et al. 2021). While it is true, at least in principle, that any initiative aimed at using algorithms for social good should address satisfactorily the risks that each of the six categories in the map identifies, there is also a growing debate on the principles and criteria that should inform the design and governance of algorithms, and digital technologies more broadly, for the explicit purpose of social good.

Ethical analyses are necessary to mitigate the risks while harnessing the potential for good of these technologies, insofar as they serve the twin goals of clarifying the nature of the ethical risks and of the potential for good of algorithms and digital technologies, and of translating (Taddeo and Floridi 2018b; Morley et al. 2019a, b) this understanding into sound, actionable guidance for the governance of the design and use of digital artefacts.

Appendix

Methodology

Four databases of academic literature were systematically queried (see Table 1) to identify literature discussing ethics and algorithms. Four keywords were used to describe an algorithm: 'algorithm', 'machine learning', 'software' and 'computer program'.3 The search was limited to publications from November 2016 to March 2020.

The search identified 4891 unique papers for review.4 After an initial review of title/abstract, 180 papers were selected for a full review. Of these, 62 were rejected as off-topic, leaving 118 articles for full review.

Another 37 articles and books were reviewed and referenced in this paper to provide additional information regarding specific ethical issues and solutions (e.g., technical details, examples and tools). These were sourced from the bibliographies of the 118 articles we reviewed as well as provided on an ad-hoc basis when agreed upon by the authors as being helpful for clarification.

3 The literature search was limited to English language articles in peer-reviewed journals and conference proceedings.
4 Many of which were purely technical in nature, especially for "discrimination" and "(transparency OR scrutability OR opacity)".

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Abadi M, Chu A, Goodfellow I, McMahan HB, Mironov I, Talwar K, Zhang L (2016) Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp 308-318, Vienna, Austria. ACM. https://doi.org/10.1145/2976749.2978318. Accessed 24 Aug 2020
Abebe R, Barocas S, Kleinberg J, Levy K, Raghavan M, Robinson DG (2020) Roles for computing in social change. https://arxiv.org/pdf/1912.04883.pdf. Accessed 24 Aug 2020
Aggarwal N (2020) The norms of algorithmic credit scoring. SSRN Electron J. https://doi.org/10.2139/ssrn.3569083
Allen A (2011) Unpopular privacy: what must we hide? Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780195141375.001.0001
Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20(3):973-989. https://doi.org/10.1177/1461444816676645
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 24 Aug 2020
Arnold M, Bellamy RKE, Hind M, Houde S, Mehta S, Mojsilovic A, Nair R et al (2019) FactSheets: increasing trust in AI services through supplier's declarations of conformity. ArXiv:1808.07261. http://arxiv.org/abs/1808.07261. Accessed 24 Aug 2020
Bambauer J, Zarsky T (2018) The algorithmic game. Notre Dame Law Rev 94(1):1-47
Barocas S, Selbst AD (2016) Big data's disparate impact. SSRN Electron J. https://doi.org/10.2139/ssrn.2477899
Bauer WA, Dubljević V (2020) AI assistants and the paradox of internal automaticity. Neuroethics 13(3):303-310. https://doi.org/10.1007/s12152-019-09423-6
Baumer EPS (2017) Toward human-centered algorithm design. Big Data Soc 4(2):205395171771885
Beer D (2017) The social power of algorithms. Inform Commun Soc 20(1):1-13. https://doi.org/10.1080/1369118X.2016.1216147
Benjamin R (2019) Race after technology: abolitionist tools for the New Jim Code. Polity, Medford
Benjamin R (2020) 2020 Vision: reimagining the default settings of technology and society. https://iclr.cc/virtual_2020/speaker_3.html. Accessed 24 Aug 2020
Berk R, Heidari H, Jabbari S, Kearns M, Roth A (2018) Fairness in criminal justice risk assessments: the state of the art. Sociol Methods Res. https://doi.org/10.1177/0049124118782533
Binns R (2018) Fairness in machine learning: lessons from political philosophy. ArXiv:1712.03586. http://arxiv.org/abs/1712.03586. Accessed 24 Aug 2020
Blacklaws C (2018) Algorithms: transparency and accountability. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170351. https://doi.org/10.1098/rsta.2017.0351
Blyth CR (1972) On Simpson's paradox and the sure-thing principle. J Am Stat Assoc 67(338):364–366. https://doi.org/10.1080/01621459.1972.10482387
Boyd D, Crawford K (2012) Critical questions for big data. Inform Commun Soc 15(5):662–679. https://doi.org/10.1080/1369118X.2012.678878
Buhmann A, Paßmann J, Fieseler C (2019) Managing algorithmic accountability: balancing reputational concerns, engagement strategies, and the potential of rational discourse. J Bus Ethics. https://doi.org/10.1007/s10551-019-04226-4
Burke R (2017) Multisided fairness for recommendation. ArXiv:1707.00093. http://arxiv.org/abs/1707.00093. Accessed 24 Aug 2020
Burrell J (2016) How the machine "thinks": understanding opacity in machine learning algorithms. Big Data Soc 3(1):205395171562251. https://doi.org/10.1177/2053951715622512
Brundage M, Avin S, Wang J, Belfield H, Krueger G, Hadfield G, Khlaaf H et al (2020) Toward trustworthy AI development: mechanisms for supporting verifiable claims. ArXiv:2004.07213 [Cs]. http://arxiv.org/abs/2004.07213. Accessed 24 Aug 2020
Chakraborty A, Patro GK, Ganguly N, Gummadi KP, Loiseau P (2019) Equality of voice: towards fair representation in crowdsourced top-K recommendations. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* '19, 129–38. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287570
Cohen J (2000) Examined lives: informational privacy and the subject as object. Georgetown Law Faculty Publications and Other Works, January. https://scholarship.law.georgetown.edu/facpub/810. Accessed 24 Aug 2020
Corbett-Davies S, Goel S (2018) The measure and mismeasure of fairness: a critical review of fair machine learning. ArXiv:1808.00023. http://arxiv.org/abs/1808.00023. Accessed 24 Aug 2020
Cowls J, Tsamados A, Taddeo M, Floridi L (2021) A definition, benchmark and database of AI for social good initiatives. Nat Mach Intell
Crain M (2018) The limits of transparency: data brokers and commodification. New Media Soc 20(1):88–104. https://doi.org/10.1177/1461444816657096
Cummings M (2012) Automation bias in intelligent time critical decision support systems. In: AIAA 1st Intelligent Systems Technical Conference. Chicago, Illinois: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2004-6313
Dahl ES (2018) Appraising black-boxed technology: the positive prospects. Philos Technol 31(4):571–591. https://doi.org/10.1007/s13347-017-0275-1
Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 4691–97. Melbourne, Australia: International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2017/654
Datta A, Tschantz MC, Datta A (2015) Automated experiments on ad privacy settings. Proc Priv Enhanc Technol 2015(1):92–112. https://doi.org/10.1515/popets-2015-0007
Davis E, Marcus G (2019) Rebooting AI: building artificial intelligence we can trust. Pantheon Books, New York
Diakopoulos N, Koliska M (2017) Algorithmic transparency in the news media. Digit Journal 5(7):809–828. https://doi.org/10.1080/21670811.2016.1208053
Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. ArXiv:1702.08608. http://arxiv.org/abs/1702.08608. Accessed 24 Aug 2020
Edwards L, Veale M (2017) Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. SSRN Electron J. https://doi.org/10.2139/ssrn.2972855
Ekstrand M, Levy K (2018) FAT* Network. https://fatconference.org/network. Accessed 24 Aug 2020
Eubanks V (2017) Automating inequality: how high-tech tools profile, police, and punish the poor, 1st edn. St. Martin's Press, New York
Fink K (2018) Opening the government's black boxes: freedom of information and algorithmic accountability. Inform Commun Soc 21(10):1453–1471. https://doi.org/10.1080/1369118X.2017.1330418
Floridi L (2012) Distributed morality in an information society. Sci Eng Ethics 19(3):727–743. https://doi.org/10.1007/s11948-012-9413-4
Floridi L (2016) Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Philos Trans R Soc A Math Phys Eng Sci 374(2083):20160112. https://doi.org/10.1098/rsta.2016.0112
Floridi L (2017) Infraethics–on the conditions of possibility of morality. Philos Technol 30(4):391–394. https://doi.org/10.1007/s13347-017-0291-1
Floridi L (2019a) What the near future of artificial intelligence could be. Philos Technol 32(1):1–15. https://doi.org/10.1007/s13347-019-00345-y
Floridi L (2019b) Translating principles into practices of digital ethics: five risks of being unethical. Philos Technol 32(2):185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Sci Rev. https://doi.org/10.1162/99608f92.8cd550d1
Floridi L, Taddeo M (2016) What is data ethics? Philos Trans R Soc A Math Phys Eng Sci 374(2083):20160360. https://doi.org/10.1098/rsta.2016.0360
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C et al (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi L, Cowls J, King TC, Taddeo M (2020) How to design AI for social good: seven essential factors. Sci Eng Ethics 26(3):1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Fuster A, Goldsmith-Pinkham P, Ramadorai T, Walther A (2017) Predictably unequal? The effects of machine learning on credit markets. SSRN Electron J. https://doi.org/10.2139/ssrn.3072038
Gajane P, Pechenizkiy M (2018) On formalizing fairness in prediction with machine learning. ArXiv:1710.03184. http://arxiv.org/abs/1710.03184. Accessed 24 Aug 2020
Gebru T, Morgenstern J, Vecchione B, Vaughan JW, Wallach H, Daumé III H, Crawford K (2020) Datasheets for datasets. ArXiv:1803.09010. http://arxiv.org/abs/1803.09010. Accessed 1 Aug 2020
Gillis TB, Spiess J (2019) Big data and discrimination. Univ Chicago Law Rev 459
Grant MJ, Booth A (2009) Types and associated methodologies: a typology of reviews. Health Inform Lib J 26(2):91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Green B, Chen Y (2019) Disparate interactions: an algorithm-in-the-loop analysis of fairness in risk assessments. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* '19, 90–99. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287563
Green B, Viljoen S (2020) Algorithmic realism: expanding the boundaries of algorithmic thought. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19–31. Barcelona, Spain: ACM. https://doi.org/10.1145/3351095.3372840
Grgić-Hlača N, Redmiles EM, Gummadi KP, Weller A (2018) Human perceptions of fairness in algorithmic decision making: a case study of criminal risk prediction. ArXiv:1802.09548. http://arxiv.org/abs/1802.09548. Accessed 24 Aug 2020
Grote T, Berens P (2020) On the ethics of algorithmic decision-making in healthcare. J Med Ethics 46(3):205–211. https://doi.org/10.1136/medethics-2019-105586
Hager GD, Drobnis A, Fang F, Ghani R, Greenwald A, Lyons T, Parkes DC et al (2019) Artificial intelligence for social good. ArXiv:1901.05406. http://arxiv.org/abs/1901.05406. Accessed 24 Aug 2020
Harwell D (2020) Dating apps need women. Advertisers need diversity. AI companies offer a solution: fake people. Washington Post
Hauer T (2019) Society caught in a labyrinth of algorithms: disputes, promises, and limitations of the new order of things. Society 56(3):222–230. https://doi.org/10.1007/s12115-019-00358-5
Henderson P, Sinha K, Angelard-Gontier N, Ke NR, Fried G, Lowe R, Pineau J (2018) Ethical challenges in data-driven dialogue systems. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 123–29. New Orleans, LA, USA: ACM. https://doi.org/10.1145/3278721.3278777
Hill RK (2016) What an algorithm is. Philos Technol 29(1):35–59. https://doi.org/10.1007/s13347-014-0184-5
Hleg AI (2019) Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 24 Aug 2020
Hoffmann AL, Roberts ST, Wolf CT, Wood S (2018) Beyond fairness, accountability, and transparency in the ethics of algorithms: contributions and perspectives from LIS. Proc Assoc Inform Sci Technol 55(1):694–696. https://doi.org/10.1002/pra2.2018.14505501084
Hu M (2017) Algorithmic Jim Crow. Fordham Law Review. https://ir.lawnet.fordham.edu/flr/vol86/iss2/13/. Accessed 24 Aug 2020
Hutson M (2019) Bringing machine learning to the masses. Science 365(6452):416–417. https://doi.org/10.1126/science.365.6452.416
ICO (2020) ICO and The Turing consultation on explaining AI decisions guidance. ICO, 30 March 2020. https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-on-explaining-ai-decisions-guidance/. Accessed 24 Aug 2020
James G, Witten G, Hastie T, Tibshirani R (2013) An introduction to statistical learning. Springer, New York
Karppi T (2018) "The computer said so": on the ethics, effectiveness, and cultural techniques of predictive policing. Soc Media Soc 4(2):205630511876829. https://doi.org/10.1177/2056305118768296
Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. ArXiv:1812.04948. http://arxiv.org/abs/1812.04948. Accessed 24 Aug 2020
Katell M, Young M, Dailey D, Herman B, Guetler V, Tam A, Binz C, Raz D, Krafft PM (2020) Toward situated interventions for algorithmic equity: lessons from the field. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. Barcelona, Spain: ACM. https://doi.org/10.1145/3351095.3372874
King G, Persily N (2020) Unprecedented Facebook URLs dataset now available for academic research through Social Science One
Kizilcec R (2016) How much information? In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp 2390–2395. https://doi.org/10.1145/2858036.2858402
Klee R (1996) Introduction to the philosophy of science: cutting nature at its seams. Oxford University Press, Oxford
Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent trade-offs in the fair determination of risk scores. ArXiv:1609.05807. http://arxiv.org/abs/1609.05807. Accessed 24 Aug 2020
Kortylewski A, Egger B, Schneider A, Gerig T, Morel-Forster F, Vetter T (2019) Analyzing and reducing the damage of dataset bias to face recognition with synthetic data. http://openaccess.thecvf.com/content_CVPRW_2019/html/BEFA/Kortylewski_Analyzing_and_Reducing_the_Damage_of_Dataset_Bias_to_Face_CVPRW_2019_paper.html. Accessed 24 Aug 2020
Labati RD, Genovese A, Muñoz E, Piuri V, Scotti F, Sforza G (2016) Biometric recognition in automated border control: a survey. ACM Comput Surv 49(2):1–39. https://doi.org/10.1145/2933241
Lambrecht A, Tucker C (2019) Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manag Sci 65(7):2966–2981. https://doi.org/10.1287/mnsc.2018.3093
Larson B (2017) Gender as a variable in natural-language processing: ethical considerations. In: Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 1–11. Valencia, Spain: Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-1601
Lee MK (2018) Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc 5(1):205395171875668. https://doi.org/10.1177/2053951718756684
Lee TN (2018) Detecting racial bias in algorithms and machine learning. J Inform Commun Ethics Soc 16(3):252–260. https://doi.org/10.1108/JICES-06-2018-0056
Lee MS, Floridi L (2020) Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs. SSRN Electron J. https://doi.org/10.2139/ssrn.3559407
Lee MK, Kim JT, Lizarondo L (2017) A human-centered approach to algorithmic services: considerations for fair and motivating smart community service management that allocates donations to non-profit organizations. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems—CHI '17, 3365–76. Denver, Colorado, USA: ACM Press. https://doi.org/10.1145/3025453.3025884
Lepri B, Oliver N, Letouzé E, Pentland A, Vinck P (2018) Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos Technol 31(4):611–627. https://doi.org/10.1007/s13347-017-0279-x
Lewis D (2019) Social Credit case study: city citizen scores in Xiamen and Fuzhou. Medium: Berkman Klein Center Collection, 8 October 2019. https://medium.com/berkman-klein-center/social-credit-case-study-city-citizen-scores-in-xiamen-and-fuzhou-2a65feb2bbb3. Accessed 10 Oct 2020
Lipworth W, Mason PH, Kerridge I, Ioannidis JPA (2017) Ethics and epistemology in big data research. J Bioethical Inq 14(4):489–500. https://doi.org/10.1007/s11673-017-9771-3
Magalhães JC (2018) Do algorithms shape character? Considering algorithmic ethical subjectivation. Soc Media Soc 4(2):205630511876830. https://doi.org/10.1177/2056305118768301
Malhotra C, Kotwal V, Dalal S (2018) Ethical framework for machine learning. In: 2018 ITU Kaleidoscope: Machine Learning for a 5G Future (ITU K), 1–8. Santa Fe: IEEE. https://doi.org/10.23919/ITU-WT.2018.8597767
Martin K (2019) Ethical implications and accountability of algorithms. J Bus Ethics 160(4):835–850. https://doi.org/10.1007/s10551-018-3921-3
Mayson SG (2019) Bias in, bias out. Yale Law Journal, no. 128. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3257004. Accessed 24 Aug 2020
Milano S, Taddeo M, Floridi L (2020) Recommender systems and their ethical challenges. AI Soc. https://doi.org/10.1007/s00146-020-00950-y
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data Soc. https://doi.org/10.1177/2053951716679679
Mojsilovic A (2018) Introducing AI Explainability 360. https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/. Accessed 24 Aug 2020
Möller J, Trilling D, Helberger N, van Es B (2018) Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Inform Commun Soc 21(7):959–977. https://doi.org/10.1080/1369118X.2018.1444076
Morley J, Floridi L, Kinsey L, Elhalal A (2019) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00165-5
Morley J, Machado C, Burr C, Cowls J, Taddeo M, Floridi L (2019) The debate on the ethics of AI in health care: a reconstruction and critical review. SSRN Electron J. https://doi.org/10.2139/ssrn.3486518
Murgia M (2018) DeepMind's move to transfer health unit to Google stirs data fears. Financial Times
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. New York University Press, New York
Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453. https://doi.org/10.1126/science.aax2342
Ochigame R (2019) The invention of "Ethical AI". https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/. Accessed 24 Aug 2020
OECD (2019) Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 24 Aug 2020
Olhede SC, Wolfe PJ (2018) The growing ubiquity of algorithms in society: implications, impacts and innovations. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170364. https://doi.org/10.1098/rsta.2017.0364
Olteanu A, Castillo C, Diaz F, Kiciman E (2016) Social data: biases, methodological pitfalls, and ethical boundaries. SSRN Electron J. https://doi.org/10.2139/ssrn.2886526
Oswald M (2018) Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170359. https://doi.org/10.1098/rsta.2017.0359
Paraschakis D (2017) Towards an ethical recommendation framework. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS), 211–20. Brighton, United Kingdom: IEEE. https://doi.org/10.1109/RCIS.2017.7956539
Paraschakis D (2018) Algorithmic and ethical aspects of recommender systems in e-commerce. Malmö Universitet, Malmö
Perra N, Rocha LEC (2019) Modelling opinion dynamics in the age of algorithmic personalisation. Sci Rep 9(1):7261. https://doi.org/10.1038/s41598-019-43830-2
Perrault R, Yoav S, Brynjolfsson E, Jack C, Etchmendy J, Grosz B, Terah L, James M, Saurabh M, Carlos NJ (2019) Artificial Intelligence Index Report 2019
Prates MOR, Avelar PH, Lamb LC (2019) Assessing gender bias in machine translation: a case study with Google Translate. Neural Comput Appl. https://doi.org/10.1007/s00521-019-04144-6
Dignum V, Lopez-Sanchez M, Micalizio R, Pavón J, Slavkovik M, Smakman M, van Steenbergen M et al (2018) Ethics by design: necessity or curse? In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society—AIES '18, 60–66. New Orleans, LA, USA: ACM Press. https://doi.org/10.1145/3278721.3278745
Rachels J (1975) Why privacy is important. Philos Public Aff 4(4):323–333
Rahwan I (2018) Society-in-the-loop: programming the algorithmic social contract. Ethics Inf Technol 20(1):5–14. https://doi.org/10.1007/s10676-017-9430-8
Ras G, van Gerven M, Haselager P (2018) Explanation methods in deep learning: users, values, concerns and challenges. ArXiv:1803.07517. http://arxiv.org/abs/1803.07517. Accessed 24 Aug 2020
Reddy E, Cakici B, Ballestero A (2019) Beyond mystery: putting algorithmic accountability in context. Big Data Soc 6(1):205395171982685. https://doi.org/10.1177/2053951719826856
Reisman D, Schultz J, Crawford K, Whittaker M (2018) Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute. https://ainowinstitute.org/aiareport2018.pdf. Accessed 24 Aug 2020
Richardson R, Schultz J, Crawford K (2019) Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423. Accessed 24 Aug 2020
Robbins S (2019) A misdirected principle with a catch: explicability for AI. Mind Mach 29(4):495–514. https://doi.org/10.1007/s11023-019-09509-3
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2019) The Chinese approach to artificial intelligence: an analysis of policy and regulation. SSRN Electron J. https://doi.org/10.2139/ssrn.3469784
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2020) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI Soc. https://doi.org/10.1007/s00146-020-00992-2
Rössler B (2015) The value of privacy. https://philpapers.org/rec/ROSTVO-9. Accessed 24 Aug 2020
Rubel A, Castro C, Pham A (2019) Agency laundering and information technologies. Ethical Theory Moral Pract 22(4):1017–1041. https://doi.org/10.1007/s10677-019-10030-w
Sandvig C, Hamilton K, Karahalios K, Langbort C (2016) When the algorithm itself is a racist: diagnosing ethical harm in the basic components of software. Int J Commun 10:4972–4990
Saxena N, Huang K, DeFilippis E, Radanovic G, Parkes D, Liu Y (2019) How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. ArXiv:1811.03654. http://arxiv.org/abs/1811.03654. Accessed 24 Aug 2020
Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J (2019) Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* '19, 59–68. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287598
Shah H (2018) Algorithmic accountability. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170362. https://doi.org/10.1098/rsta.2017.0362
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98(September):277–284. https://doi.org/10.1016/j.chb.2019.04.019
Sloan RH, Warner R (2018) When is an algorithm transparent? Predictive analytics, privacy, and public policy. IEEE Secur Priv 16(3):18–25. https://doi.org/10.1109/MSP.2018.2701166
Stilgoe J (2018) Machine learning, social learning and the governance of self-driving cars. Soc Stud Sci 48(1):25–56. https://doi.org/10.1177/0306312717741687
Szegedy C, Wojciech Z, Ilya S, Joan B, Dumitru E, Ian G, Rob F (2014) Intriguing properties of neural networks. ArXiv:1312.6199 [Cs]. http://arxiv.org/abs/1312.6199. Accessed 18 July 2020
Taddeo M, Floridi L (2018a) Regulate artificial intelligence to avert cyber arms race. Nature 556(7701):296–298. https://doi.org/10.1038/d41586-018-04602-6
Taddeo M, Floridi L (2018b) How AI can be a force for good. Science 361(6404):751–752. https://doi.org/10.1126/science.aat5991
Taddeo M, McCutcheon T, Floridi L (2019) Trusting artificial intelligence in cybersecurity is a double-edged sword. Nat Mach Intell 1(12):557–560. https://doi.org/10.1038/s42256-019-0109-1
Taylor L, Floridi L, van der Sloot B (eds) (2017) Group privacy: new challenges of data technologies. Springer, Berlin Heidelberg, New York
Tickle AB, Andrews R, Golea M, Diederich J (1998) The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Trans Neural Netw 9(6):1057–1068. https://doi.org/10.1109/72.728352
Turilli M, Floridi L (2009) The ethics of information transparency. Ethics Inf Technol 11(2):105–112. https://doi.org/10.1007/s10676-009-9187-9
Valiant LG (1984) A theory of the learnable. Commun ACM 27(11):1134–1142. https://doi.org/10.1145/1968.1972
Veale M, Binns R (2017) Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Soc 4(2):205395171774353. https://doi.org/10.1177/2053951717743530
Vedder A, Naudts L (2017) Accountability for the use of algorithms in a big data environment. Int Rev Law Comput Technol 31(2):206–224. https://doi.org/10.1080/13600869.2017.1298547
Wang S, Jiang X, Singh S, Marmor R, Bonomi L, Fox D, Dow M, Ohno-Machado L (2017) Genome privacy: challenges, technical approaches to mitigate risk, and ethical considerations in the United States: genome privacy in biomedical research. Ann N Y Acad Sci 1387(1):73–83. https://doi.org/10.1111/nyas.13259
Watson D, Floridi L (2020) The explanation game: a formal framework for interpretable machine learning. SSRN Electron J. https://doi.org/10.2139/ssrn.3509737
Watson DS, Krutzinna J, Bruce IN, Griffiths CEM, McInnes IB, Barnes MR, Floridi L (2019) Clinical applications of machine learning algorithms: beyond the black box. BMJ. https://doi.org/10.1136/bmj.l886
Webb H, Patel M, Rovatsos M, Davoust A, Ceppi S, Koene A, Dowthwaite L, Portillo V, Jirotka M, Cano M (2019) "It would be pretty immoral to choose a random algorithm": opening up algorithmic interpretability and transparency. J Inform Commun Ethics Soc 17(2):210–228. https://doi.org/10.1108/JICES-11-2018-0092
Weller A (2019) Transparency: motivations and challenges. ArXiv:1708.01870. http://arxiv.org/abs/1708.01870. Accessed 24 Aug 2020
Wexler J (2018) The what-if tool: code-free probing of machine learning models. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html. Accessed 24 Aug 2020
Whitman M, Hsiang C-y, Roark K (2018) Potential for participatory big data ethics and algorithm design: a scoping mapping review. In: Proceedings of the 15th Participatory Design Conference on Short Papers, Situated Actions, Workshops and Tutorial—PDC '18, 1–6. Hasselt and Genk, Belgium: ACM Press. https://doi.org/10.1145/3210604.3210644
Wiener N (1950) The human use of human beings
Winner L (1980) Do artifacts have politics? Modern Techn Probl Oppor 109(1):121–136
Wong P-H (2019) Democratizing algorithmic fairness. Philos Technol. https://doi.org/10.1007/s13347-019-00355-w
Xian Z, Li Q, Huang X, Li L (2017) New SVD-based collaborative filtering algorithms with differential privacy. J Intell Fuzzy Syst 33(4):2133–2144. https://doi.org/10.3233/JIFS-162053
Xu D, Yuan S, Zhang L, Wu X (2018) FairGAN: fairness-aware generative adversarial networks. In: 2018 IEEE International Conference on Big Data (Big Data), 570–75. Seattle, WA, USA: IEEE. https://doi.org/10.1109/BigData.2018.8622525
Yang G-Z, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, Jacobstein R et al (2018) The grand challenges of science robotics. Sci Robot 3(14):eaar7650. https://doi.org/10.1126/scirobotics.aar7650
Yampolskiy RV (2018) Artificial intelligence safety and security
Yu M, Du G (2019) Why are Chinese courts turning to AI? The Diplomat, 19 January 2019. https://thediplomat.com/2019/01/why-are-chinese-courts-turning-to-ai/. Accessed 24 Aug 2020
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683. https://doi.org/10.1007/s13347-018-0330-6
Zhou Na, Zhang C-T, Lv H-Y, Hao C-X, Li T-J, Zhu J-J, Zhu H et al (2019) Concordance study between IBM Watson for Oncology and clinical practice for patients with cancer in China. Oncologist 24(6):812–819. https://doi.org/10.1634/theoncologist.2018-0255

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.