Distrust of Artificial Intelligence:
Sources & Responses from
Computer Science & Law
Cynthia Dwork & Martha Minow
Social distrust of AI stems in part from incomplete and faulty data sources, inap-
propriate redeployment of data, and frequently exposed errors that reflect and am-
plify existing social cleavages and failures, such as racial and gender biases. Oth-
er sources of distrust include the lack of “ground truth” against which to measure
the results of learned algorithms, divergence of interests between those affected and
those designing the tools, invasion of individual privacy, and the inapplicability of
measures such as transparency and participation that build trust in other institu-
tions. Needed steps to increase trust in AI systems include involvement of broader
and diverse stakeholders in decisions around selection of uses, data, and predictors;
investment in methods of recourse for errors and bias commensurate with the risks
of errors and bias; and regulation prompting competition for trust.
Works of imagination, from Frankenstein (1818) to the film 2001: A Space
Odyssey (1968) and the Matrix series (1999–2021), explore fears that
human-created artificial intelligences threaten human beings due to
amoral logic, malfunctioning, or the capacity to dominate.1 As computer science
expands from human-designed programs spelling out each step of reasoning to
programs that automatically learn from historical data, infer outcomes for indi-
viduals not yet seen, and influence practices in core areas of society–including
health care, education, transportation, finance, social media, retail consumer
businesses, and legal and social welfare bureaucracies–journalistic and scholar-
ly accounts have raised questions about reliability and fairness.2 Incomplete and
faulty data sources, inappropriate redeployment of data, and frequently exposed
errors amplify existing social dominance and cleavages. Add mission creep–
like the use of digital tools intended to identify detainees needing extra supports
upon release to instead determine release decisions–and it is no wonder that big
data and algorithmic tools trigger concerns over loss of control and spur decay
in social trust essential for democratic governance and workable relationships in
general.3
© 2022 by Cynthia Dwork & Martha Minow. Published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. https://doi.org/10.1162/DAED_a_01918
Failures to name and comprehend the basic terms and processes of AI add to
specific sources of distrust. Examining those sources, this essay ends with poten-
tial steps forward, anticipating both short-term and longer-term challenges.
Artificial intelligence signifies a variety of technologies and tools that can
solve tasks requiring “perception, cognition, planning, learning, commu-
nication, and physical actions,” often learning and acting without over-
sight by their human creators or other people.4 These technologies are already
much used to distribute goods and benefits by governments, private companies,
and other private actors.
Trust means belief in the reliability or truth of a person or thing.5 Associated
with comfort, security, and confidence, its absence implies doubt about the reliabil-
ity or truthfulness of a person or thing. That doubt generates anxieties, alters be-
haviors, and undermines cooperation needed for private and public action. Distrust
is corrosive.
Distrust is manifested in growing calls for regulation, the emergence of watch-
dog and lobbying groups, and the explicit recognition of new risks requiring mon-
itoring by corporate audit committees and accounting firms.6 Critics and advo-
cates alike acknowledge that increasing deployment of AI could have unintended
but severe consequences for human lives, ranging from impairments of friend-
ships to social disorder and war.7 These concerns multiply in a context of declin-
ing trust in government and key private institutions.8
An obvious source of distrust is evidence of unreliability. Unreliability could
arise around a specific task, such as finding that your child did not run the errand
to buy milk as you requested. Or it could follow self-dealing: did your child keep
the change from funds used to purchase the milk rather than returning the unused
money to you? Trust is needed when we lack the time or ability to oversee each
task to ensure truthful and accurate performance and devotion to the interests of
those relying on the tasks being done.
Political theorist Russell Hardin explains trust as “encapsulated interest, in
which the truster’s expectations of the trusted’s behavior depend on assessments
of certain motivations of the trusted. I trust you because your interests encap-
sulate mine to some extent–in particular, because you want our relationship to
continue.”9 Trust accordingly is grounded in the truster’s assessment of the in-
tentions of the trusted with respect to some action.10 Trust is strengthened when
I believe it is in your interest to adhere to my interests in the relevant matter.11
Those who rely on institutions, such as the law, have reasons to believe that they
comport with governing norms and practices rather than serving other interests.
Trust in hospitals and schools depends on assessments of the reliability of the
institution and its practices in doing what it promises to do, as well as its respons-
es to inevitable mistakes.12 With repeated transactions, trust depends not only
on results, but also on discernable practices reducing risks of harm and deviation
from expected tasks. Evidence that the institution or its staff serves the interests
of intended beneficiaries must include guards against violation of those interests.
Trust can grow when a hospital visibly uses good practices with good results and
communicates the measures to minimize risks of bad results and departures from
good practices.
External indicators, such as accreditation by expert review boards, can signal ad-
herence to good practices and reason to expect good results. External indicators can
come from regulators who set and enforce rules, such as prohibitions of self-dealing
through bans on charging more than is justifiable for a procedure and prohibiting
personal or institutional financial interests that are keyed to the volume of referrals or
uses.13 Private or governmental external monitors must be able to audit the behavior
of institutions.14 External review will not promote trust if external monitors are not
themselves trusted. Indeed, disclosure amid distrust can feed misunderstandings.15
Past betrayals undermine trust. Personal and collective experiences with dis-
crimination or degradation–along lines of race, class, gender, or other personal
characteristics–especially create reasons for suspicion if not outright distrust.
Similarly, experiences with self-interested companies that make exploitative prof-
its can create or sustain distrust. Distrust and the vigilance it inspires may itself
protect against exploitation.16
These and further sources of distrust come with uses of AI, by which we mean: a
variety of techniques to discern patterns in historical “training” data that are deter-
minative of status (is the tumor benign?) or predictive of a future outcome (what
is the likelihood the student will graduate within four years?). The hope is that the
patterns discerned in the training data will extend to future unseen examples. Al-
gorithms trained on data are “learned algorithms.” These learned algorithms clas-
sify and score individuals as the system designer chose, equitably or not, to repre-
sent them to the algorithm. These representations of individuals and “outcomes”
can be poor proxies for the events of interest, such as using re-arrest as a proxy for
recidivism or a call to child protective services as a proxy for child endangerment.17
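To make these terms concrete, the following minimal sketch (in Python, with toy data) illustrates a learned algorithm of the kind described above. The feature names, the proxy label, and every number are invented for illustration only; they describe no actual deployed system.

    # Minimal sketch of a "learned algorithm": a classifier fit to historical
    # "training" data and then used to score a previously unseen individual.
    # All features, labels, and values are invented for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # How the designer chose to represent individuals:
    # [prior_contacts, age_at_first_contact, years_since_last_contact]
    X_train = np.array([
        [3, 17, 1],
        [0, 30, 8],
        [5, 19, 0],
        [1, 25, 4],
        [2, 22, 2],
        [0, 40, 12],
    ])
    # The label is a proxy: re-arrest within two years, standing in for "recidivism."
    y_train = np.array([1, 0, 1, 0, 1, 0])

    learned_algorithm = LogisticRegression().fit(X_train, y_train)

    # Scoring an individual not in the training data: the output is a pattern-
    # based inference from historical data, not a verified fact about this person.
    new_person = np.array([[1, 21, 3]])
    print(learned_algorithm.predict_proba(new_person)[0, 1])

The point of the sketch is only that the representation of individuals and the proxy label are chosen by the designer before any learning occurs; whatever score the classifier outputs inherits those choices.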
Distrust also results from the apparent indifference of AI systems. Learned al-
gorithms lack indications of adherence to the interests of those affected by their
use. They also lack apparent conformity with norms or practices legible to those
outside of their creation and operations.
When designed solely at the directive of governments and companies, AI may
only serve the interests of governments and companies–and risk impairing the
interests of others.
Despite sophisticated techniques to teach algorithms from data sets, there
is no ground truth available to check whether the results match reality.
This is a basic challenge for ensuring reliable AI. We can prove that the
learned algorithm is indeed the result of applying a specific learning technique to
the training data, but when the learned algorithm is applied to a previously unseen
individual, one not in the training data, we do not have proof that the outcome is
correct in terms of an underlying factual basis, rather than inferences from indi-
rect or arbitrary factors. Consider an algorithm asked to predict whether a giv-
en student will graduate within four years. This is a question about the future:
when the algorithm is applied to the data representing the student, the answer
has not yet been determined. A similar quandary surrounds risk scoring: what
is the “probability” that an individual will be re-arrested within two years? This
question struggles to make sense even mathematically: what is the meaning of
the “probability” of a nonrepeatable event?18 Is what we perceive as randomness
in fact certainty, if only we had sufficient contextual information and computing
power? Inferences about the future when predicated on limited or faulty informa-
tion may create an illusion of truth, but illusion it is.
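One way to state the difficulty in symbols (our notation, not drawn from the essay or its sources): a learned score assigns each individual i, represented by features x_i, a number p_i = f(x_i) between 0 and 1. What can be checked empirically are aggregate statements over a group S of individuals with observed outcomes y_i in {0, 1}, such as approximate calibration:

    \[
      \frac{1}{|S|}\sum_{i \in S} y_i \;\approx\; \frac{1}{|S|}\sum_{i \in S} p_i .
    \]

By contrast, the individual claim Pr[y_i = 1] = p_i concerns a single, nonrepeatable event and can be neither confirmed nor refuted from the one outcome y_i that will ever be observed; only grouped comparisons of scores against outcomes are testable.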
Further problems arise because techniques for building trust are too often un-
available with algorithms used for scoring and categorizing people for public or
private purposes. Familiar trust-building techniques include transparency so oth-
ers can see inputs and outcomes, opportunities for those affected to participate in
designing and evaluating a system and in questioning its individual applications,
monitoring and evaluation by independent experts, and regulation and oversight
by government bodies.
Trust in the fairness of legal systems increases when those affected participate
with substantive, empowering choices within individual trials or panels review-
ing the conduct of police and other officials. Could participation of those affected
by AI help build trust in uses of AI?19 Quite apart from influencing outcomes, par-
ticipation gives people a sense that they are valued, heard, and respected.20 Par-
ticipatory procedures signal fairness, help to resolve uncertainties, and support
deference to results.21 Following prescribed patterns also contributes to the per-
ceived legitimacy of a dispute resolution system.22
But there are few if any roles for consumers, criminal defendants, parents, or
social media users to raise questions about the algorithms used to guide the allo-
cation of benefits and burdens. Nor are there roles for them in the construction
of the information-categorizing algorithms. Opportunities to participate are not
built into the design of algorithms, data selection and collection protocols, or the
testing, review, and use of learning algorithms. Ensuring a role for human beings
to check algorithmic processes can even be a new source of further inaccuracies.
An experiment allowing people to give feedback to an algorithmically powered
system actually showed that participation lowered trust–perhaps by exposing
people to the scope of the system’s inaccuracies.23
Suggestions for addressing distrust revolve around calls for “explainability”
and ensuring independent entities access to the learned algorithms themselves.24
“Access” can mean seeing the code, examining the algorithm’s outputs, and re-
viewing the choice of representation, sources of training data, and demograph-
ic benchmarking.25 But disclosure of learning algorithms themselves has limited
usefulness in the absence of data features with comprehensible meanings and ex-
planations of weight determining the contribution of each feature to outcomes.
Machine learning algorithms use mathematical operations to generate data fea-
tures that almost always are not humanly understandable, even if disclosed, and
whose learned combinations would do nothing to explain outcomes, even to ex-
pert auditors.
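A small sketch can make this concrete (Python with scikit-learn, toy data; the model and numbers are invented and stand in for no particular system). Everything below could be disclosed to an auditor, yet the learned internal features remain arrays of numbers with no human-readable meaning.

    # Even with full access to a trained model, its learned internal features
    # are numeric weights that do not correspond to reasons a person could
    # contest or explain. Toy data; illustration only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                   # 200 individuals, 10 features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy outcome

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)

    # "Disclosure": the first layer is a 10-by-16 matrix of weights. An auditor
    # can read every value, but no value explains an individual outcome.
    print(model.coefs_[0].shape)
    print(model.coefs_[0][:2])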
Regulation can demand access and judgments by qualified experts and, per-
haps more important, require behavior attentive not only to narrow interests but
also to broader public concerns. Social distrust of X-rays produced demands for
regulation; with regulation, professional training, and standards alert to health
effects, X-rays gained widespread trust.26 Yet government regulators and inde-
pendent bodies can stoke public fears if they contribute to misinformation and
exaggerate risks.27
For many, reliance on AI arouses fears of occupational displacement. Now
white collar as well as blue collar jobs seem at risk. One study from the Unit-
ed Kingdom reported that more than 60 percent of people surveyed worry
that their jobs will be replaced by AI. Many believe that their jobs and opportuni-
ties for their children will be disrupted.28 More than one-third of young Ameri-
cans report fears about technology eliminating jobs.29 Despite some predictions
of expanded and less-repetitive employment, no one can yet resolve doubts about
the future.30 Foreboding may be exacerbated by awareness that, by our uses of
tecnología, we contribute to the trends we fear. Many people feel forced to use
systems such as LinkedIn or Facebook.31 People report distrust of the Internet but
continue to increase their use of it.32
Some distrust AI precisely because human beings are not visibly involved in
decisions that matter to human beings. Yet even the chance to appeal to a human
is insufficient when individuals are unaware that there is a decision or score af-
fecting their lives.
As companies and governments increase their use of AI, distrust mounts
considerably with misalignment of interests. Airbnb raised concerns
when it acquired Trooly Inc., including its patented “trait analyzer” that
operates by scouring blogs, social networks, and commercial and public databases
to derive personality traits. The patent claims that “the system determines a trust-
worthiness score of the person based on the behavior and personality trait met-
rics using a machine learning system,” with the weight of each personality trait
either hard coded or inferred by a machine learning model.33 It claims to identify
traits as “badness, anti-social tendencies, goodness, conscientiousness, openness,
extraversion, agreeableness, neuroticism, narcissism, Machiavellianism, or psy-
chopathy.”34 Although Airbnb asserts that the company is not currently deploy-
ing this software,35 the very acquisition of a “trait analyzer” raises concerns that
the company refuses to encapsulate the interests of those affected.36
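The patent language quoted above does not disclose an implementation. Purely as a hypothetical illustration of what a hard-coded weighting of trait metrics could look like (every trait name, weight, and number below is invented and is not taken from the patent or from any Airbnb or Trooly system), consider:

    # Hypothetical sketch only: invented traits, weights, and scoring rule.
    # Not derived from the patent or any real system.
    HYPOTHETICAL_WEIGHTS = {
        "conscientiousness": 0.4,
        "agreeableness": 0.3,
        "neuroticism": -0.2,
        "narcissism": -0.5,
    }

    def hypothetical_trust_score(trait_metrics):
        """Combine per-trait metrics (each scaled 0 to 1) into one score."""
        return sum(HYPOTHETICAL_WEIGHTS[t] * trait_metrics.get(t, 0.0)
                   for t in HYPOTHETICAL_WEIGHTS)

    print(hypothetical_trust_score(
        {"conscientiousness": 0.8, "agreeableness": 0.6,
         "neuroticism": 0.4, "narcissism": 0.1}))

Whether such weights are fixed by hand or learned from data, the person being scored has no visibility into either the traits inferred about them or the weights applied to those traits.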
Examples of practices harming and contrary to the interests of users abound in
social media platforms, especially around demonstrated biases and invasions of
privacy. Although social media companies offer many services that appeal to us-
ers, the companies have interests that diverge systematically from those of users.
Platform companies largely profit off data generated by each person’s activities on
the site. Hence, the companies seek to maximize user “engagement.” Each new
data point comes when a user does–or does not–click on a link or hit a “like”
button. The platform uses that information to tailor content for users and to sell
their information to third parties for targeted advertising and other messages.37
Chamath Palihapitiya, former vice president for “user growth” for Facebook, has
claimed that Facebook is addictive by design.38 Sean Parker, an original Facebook
investor, has acknowledged that the site’s “like” button and news feed keep users
hooked by exploiting people’s neurochemical vulnerabilities.39
Privacy loss is a particular harm resented by many. Privacy can mean seclusion,
hiding one’s self, identity, and information; it can convey control over one’s per-
sonal information and who can see it; it can signal control over sensitive or person-
al decisions, without interference from others; or it can mean protection against
discrimination by others based on information about oneself. All these meanings
matter in the case of Tim Stobierski, who, shortly after starting a new job at a pub-
lishing house, was demonstrating a Facebook feature to his boss when an adver-
tisement for a gay cruise appeared on his news feed.40 He wondered, “how did
Facebook know that I was interested in men, when I had never told another living
soul, and when I certainly had never told Facebook?”41 The Pew Research Center
showed that about half of all Facebook users feel discomfort about the site’s col-
lection of their interests, while 74 percent of Facebook users did not know how to
find out how Facebook categorized their interests or even how to locate a page list-
ing “your ad preferences.”42 A platform’s assumptions remain opaque even as us-
ers resent the loss of control over their information and the secret surveillance.43
Tech companies may respond that users can always quit. Here, too, a conflict
of interests is present. Facebook exposes individuals to psychological manipula-
tion and data breaches to degrees that they cannot imagine.44 Most users do not
even know how Facebook uses their data or what negative effects can ensue.45 The
loss of control compounds the unintended spread of personal information.
The interests of tech platforms and users diverge further over hateful speech.
Facebook’s financial incentive is to keep or even elicit outrageous posts because
they attract engagement (even as disagreement or disgust) and hence produce
additional monetizable data points.46 Facebook instructs users to hide posts they
do not like, or to unfollow the page or person who posted it, and, only as a third
option, to report the post to request its removal.47 Under pressure, Facebook es-
tablished an oversight review board and charged it with evaluating (only an infini-
tesimal fraction of) removal decisions. Facebook itself determines which matters
proceed to review.48 Directed to promote freedom of speech, not to guard against
hatred or misinformation, the board has so far done little to guard against foment-
ed hatred and violence.49
Large tech companies are gatekeepers; they can use their position and their
knowledge of users to benefit their own company over others, including third par-
ties that pay for their services.50 As one observer put it, “social media is cloaked in
this language of liberation while the corporate sponsors (Facebook, Google et al.)
are progressing towards ever more refined and effective means of manipulating
individual behavior (behavioral targeting of ads, recommendation systems, repu-
tation management systems etc.).”51
The processes of AI baffle the open and rational debates supporting democ-
racies, markets, and science that have existed since the Enlightenment. AI
practices can nudge and change what people want, know, and value.52 Dif-
ferently organized, learned algorithms could offer people some control over site
architecture and content moderation.53
Dangers from social media manipulation came to fruition with the 2020 U.S.
presidential election. Some conventional media presented rumors and falsehood,
but social media initiated and encouraged misinformation and disinformation,
and amplified their spread, culminating in the sweeping erroneous belief that
Donald Trump rather than Joe Biden had won the popular vote. False claims of
rigged voting machines, despite the certification of state elections, reflected and
inflamed social distrust.54 The sustainability of our democratic governance sys-
tems is far from assured.
Building trust around AI can draw on commitments to participation, useable
explanations, and iterative improvements. Hence, people making and deploying
AI should involve broader and diverse stakeholders in decisions around what uses
algorithms are put to; what data, with which features, are used to train the algo-
rithms; what criteria are used in the training process to evaluate classifications
or predictions; and what methods of recourse are available for raising concerns
about and securing genuine responsive action to potentially unjust methods or
outcomes. Creative and talented people have devised AI algorithms able to infer
our personal shopping preferences; they could deploy their skills going forward
to devise opportunities for those affected to participate in identifying gaps and
distortions in data. Independent experts in academic and nonprofit settings–if
given access to critical information–could provide much-needed audits of algo-
rithmic applications and assess the reliability and failures of the factors used to
draw inferences.
Investment in participatory and information-sharing efforts should be com-
mensurate with the risks of harms. Otherwise, the risks are entirely shifted to the
consumers, citizens, and clients who are subjected to the commercial and govern-
mental systems that deploy AI algorithms.
As AI escalates, so should accessible methods of recourse and correction. Con-
cerns for people harmed by harassment on social media; biased considerations
in employment, child protection, and other governmental decisions; and facial
recognition technologies that jeopardize personal privacy and liberty will be
echoed by known and unknown harms in finance, law, health care, policing, and
war-making. Software systems to enable review and to redress mistakes should
be built, and built to be meaningful. Designers responding that doing so would be
too expensive or too difficult given the scale enabled by the use of AI algorithms
are scaling irresponsibly. Responsible scaling demands investment in methods of
recourse for errors and bias commensurate with the risks of errors and bias. AI can
and must be part of the answer in addressing the problems created by AI, but so
must strengthened roles for human participation. Government by the consent of
the governed needs no less.55
Self-regulation and self-certification, monitoring by external industry and
consumer groups, and regulation by government can tackle misalignment and
even clashes in the interests of those designing the learning algorithms and those
affected by them. Entities should compete in the marketplace for trust and repu-
tation, face ratings by external monitors, and contribute to the development of
industry standards. Trust must be earned.
authors’ note
We thank Christian Lansang, Maroussia Lévesque, and Serena Wong for thought-
ful research and advice for this essay.
about the authors
Cynthia Dwork, a Fellow of the American Academy since 2008, is the Gordon
McKay Professor of Computer Science in the John Paulson School of Engineer-
ing and Applied Sciences and Radcliffe Alumnae Professor in the Radcliffe Insti-
tute for Advanced Study at Harvard University. Her work has established the pillars
of fault-tolerant distributed systems, modernized cryptography to the ungoverned
interactions of the Internet and the era of quantum computing, revolutionized
privacy-preserving statistical data analysis, and launched the field of algorithmic
fairness.
Martha Minow, a Fellow of the American Academy since 1992, is the 300th Anni-
versary University Professor and Distinguished Service Professor at Harvard Uni-
versity. She also serves as Cochair of the American Academy’s project on Making
Justice Accessible: Designing Legal Services for the 21st Century. She is the author
of, most recently, Saving the News: Why the Constitution Calls for the Government to Act to
Preserve the Freedom of Speech (2021), When Should Law Forgive? (2019), and In Brown’s
Wake: Legacies of America’s Educational Landmark (2010).
endnotes
1 See “AI in Pop Culture,” ThinkAutomation, https://www.thinkautomation.com/bots
-and-ai/ai-in-pop-culture/.
2 Darrell M. West and John R. Allen, Turning Point: Policymaking in the Era of Artificial Intelli-
gence (Washington, D.C.: Brookings Institution Press, 2020).
3 Although many are optimistic about new technologies, concerns over loss of control are
growing. See Ethan Fast and Eric Horvitz, “Long-Term Trends in the Public Perception
of Artificial Intelligence,” in AAAI ’17: Proceedings of the Thirty-First AAAI Conference on Artifi-
cial Intelligence (Menlo Park, Calif.: Association for the Advancement of Artificial Intel-
ligence, 2017), 963.
4 National Security Commission on Artificial Intelligence, Final Report: Artificial Intelligence in
Context (Washington, D.C.: National Security Commission on Artificial Intelligence,
2021), https://reports.nscai.gov/final-report/ai-in-context/.
5 “Trust,” Oxford English Dictionary Online, http://www.oed.com/view/Entry/207004
(accessed November 16, 2020).
6 See Deloitte, Managing Algorithmic Risks: Safeguarding the Use of Complex Algorithms and
Machine Learning (London: Deloitte Development, LLC, 2017), https://www2.deloitte
.com/content/dam/Deloitte/us/Documents/risk/us-risk-algorithmic-machine-learning
-risk-management.pdf; Simson L. Garfinkel, “A Peek at Proprietary Algorithms,” American
Scientist 105 (6) (2017), https://www.americanscientist.org/article/a-peek-at-proprietary
-algorithms; and Stacy Meichtry and Noemie Bisserbe, “France’s Macron Calls for
Regulation of Social Media to Stem ‘Threat to Democracy,’” The Wall Street Journal, Jan-
uary 29, 2021, https://www.wsj.com/articles/frances-macron-calls-for-regulation-of
-social-media-to-stem-threat-to-democracy-11611955040.
7 Such as National Security Commission on Artificial Intelligence, Final Report, “Chapter 7:
Establishing Justified Confidence in AI Systems” and “Chapter 8: Upholding Democrat-
ic Values: Privacy, Civil Liberties, and Civil Rights in Uses of AI for National Security.”
See also Executive Office of the President, Big Data: Seizing Opportunities, Preserving Val-
ues (Washington, D.C.: Executive Office of the President, 2014), https://obamawhite
house.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014
.pdf.
8 Lee Rainie, Scott Keeter, and Andrew Perrin, “Trust and Distrust in America,” Pew Re-
search Center, https://www.pewresearch.org/politics/2019/07/22/trust-and-distrust
-in-america/. On the psychology of distrust, see Roy J. Lewicki and Edward C. Tomlinson,
“Distrust,” Beyond Intractability, December 2003, https://www.beyondintractability
.org/essay/distrust. See also Embedded Ethics @ Harvard, https://embeddedethics
.seas.harvard.edu/; “Ethics, Computing, and AI: Perspectives from MIT, Human Contexts
and Ethics,” MIT News, March 18, 2019, https://news.mit.edu/2019/ethics-computing
-and-ai-perspectives-mit-0318; and Berkeley Computing, Data Science, and Society,
https://data.berkeley.edu/hce.
9 Russell Hardin, Trust and Trustworthiness (New York: Russell Sage Foundation, 2002), xix.
10 Ibid., xx.
11 Ibid., 4.
12 See Pierre Lauret, “Why (and How to) Trust Institutions? Hospitals, Schools, and Liberal
Trust,” Rivista di Estetica 68 (2018): 41–68, https://doi.org/10.4000/estetica.3455.
13 AMA Council on Ethical and Judicial Affairs, “AMA Code of Medical Ethics’ Opinions
on Physicians’ Financial Interests,” Opinion 8.0321–Physicians’ Self-Referral, AMA
Journal of Ethics (August 2015), https://journalofethics.ama-assn.org/article/ama-code
-medical-ethics-opinions-physicians-financial-interests/2015-08. For analogous treat-
ment of AI, see Matthew Hutson, “Who Should Stop Unethical A.I.?” The New Yorker,
Febrero 15, 2021, https://www.newyorker.com/tech/annals-of-technology/who-should
-stop-unethical-ai.
14 See Michael Kearns and Aaron Roth, “Ethical Algorithm Design Should Guide Technol-
ogy Regulation,” The Brookings Institution, January 13, 2020, https://www.brookings
.edu/research/ethical-algorithm-design-should-guide-technology-regulation/.
15 See Ethan Zuckerman, Distrust: Why Losing Faith in Institutions Provides the Tools to Transform
Them (New York: W. W. Norton & Company, 2021), 60 (describing tendencies of peo-
ple living with broken institutions to wrongly see patterns and conspiracies in random
occurrences).
16 Roderick M. Kramer, “Rethinking Trust,” Harvard Business Review, June 2009, https://hbr
.org/2009/06/rethinking-trust; and Christopher B. Yenkey, “The Outsider’s Advantage:
Distrust as a Deterrent to Exploitation,” American Journal of Sociology 124 (3) (2018): 613.
17 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the
Poor (New York: St. Martin’s Press, 2018).
18 See, for example, A. Philip Dawid, “On Individual Risk,” Synthese 194 (2017); and Cyn-
thia Dwork, Michael P. Kim, Omer Reingold, et al., “Outcome Indistinguishability,”
in STOC 2021: Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
(New York: Association for Computing Machinery, 2021). For a treatment of individ-
ual probabilities in risk assessment tools, see Peter B. Imrey and A. Philip Dawid, “A
Commentary on the Statistical Assessment of Violence and Recidivism Risks,” Statistics
and Public Policy 2 (1) (2015): 1–18; and Kristian Lum, David B. Dunson, and James E.
Johndrow, “Closer than They Appear: A Bayesian Perspective on Individual-Level Het-
erogeneity in Risk Assessment,” arXiv (2021), https://arxiv.org/abs/2102.01135.
19 Tom R. Tyler, “Procedural Justice, Legitimacy, and the Effective Rule of Law,” Crime
and Justice 30 (2003): 283, https://www.jstor.org/stable/1147701?seq=1#metadata_info_
tab_contents.
20 See E. Allan Lind and Tom R. Tyler, eds., The Social Psychology of Procedural Justice (New
York: Springer, 1988).
21 Kees van den Bos, Lynn van der Velden, and E. Allan Lind, “On the Role of Perceived
Procedural Justice in Citizens’ Reactions to Government Decisions and the Handling of
Conflicts,” Utrecht Law Review 10 (4) (2014): 1–26, https://www.utrechtlawreview.org/
articles/abstract/10.18352/ulr.287/.
22 Rebecca Hollander-Blumoff and Tom R. Tyler, “Procedural Justice and the Rule of Law:
Fostering Legitimacy in Alternative Dispute Resolution,” Journal of Dispute Resolution 1
(2011).
23 Donald R. Honeycutt, Mahsan Nourani, and Eric D. Ragan, “Soliciting Human-in-the-
Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impres-
sions of Model Accuracy,” in Proceedings of the Eighth AAAI Conference on Human Computa-
tion and Crowdsourcing (Menlo Park, Calif.: Association for the Advancement of Artificial
Intelligence, 2020), 63–72, https://ojs.aaai.org/index.php/HCOMP/article/view/7464.
24 Ver, Por ejemplo, David Leslie, “Project ExplAIn,” The Alan Turing Institute, December
2, 2019, https://www.turing.ac.uk/news/project-explain, which describes six kinds
of “explanation types,” including identifying who is involved in the development and
management of an AI system and whom to contact for human review, as well as the ef-
fects that the AI system has on an individual and on wider society.
25 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, et al., “Datasheets for Datasets,"
arXiv (2018), https://arxiv.org/abs/1803.09010; and Margaret Mitchell, Simone Wu,
Andrew Zaldivar, et al., “Model Cards for Model Reporting,” in FAT* ’19: Actas de
the Conference on Fairness, Accountability, and Transparency (New York: Association for Com-
puting Machinery, 2019), 220–229. See also Cynthia Dwork, Michael P. Kim, Omer Rein-
gold, et al., “Outcome Indistinguishability,” in STOC 2021: Proceedings of the 53rd Annual
ACM SIGACT Symposium on Theory of Computing (New York: Association for Computing
Machinery, 2021).
26 See Antony Denman, S. Parkinson, and Christopher John Groves-Kirby, “A Comparative
Study of Public Perception of Risks from a Variety of Radiation and Societal Risks,"
presented at the 11th International Congress of the International Radiation Protection
Association, Madrid, Spain, May 23–28, 2004.
27 John M. Osepchuk, “A History of Microwave Heating Applications,” IEEE Transactions on
Microwave Theory and Techniques 32 (9) (1984): 1200, 1213.
28 Jacob Douglas, “These American Workers Are the Most Afraid of A.I. Taking Their Jobs,"
CNBC, November 7, 2019 (37 percent of people surveyed aged eighteen to twenty-four
expressed fear AI will take their jobs), https://www.cnbc.com/2019/11/07/these
-american-workers-are-the-most-afraid-of-ai-taking-their-jobs.html; and “Technically
Redundant: Six-in-10 Fear Losing Their Jobs to AI,” Industry Europe, November 3,
2019, https://industryeurope.com/technically-redundant-six-in-10-fear-losing-their-jobs
-to-ai/.
29 Douglas, “These American Workers Are the Most Afraid of A.I. Taking Their Jobs.”
30 James E. Bessen, Stephen Impink, Lydia Reichensperger, and Robert Seamans, "El
Business of AI Startups” (Boston: Boston University School of Law, 2018), https://
ssrn.com/abstract=3293275 or http://dx.doi.org/10.2139/ssrn.3293275.
31 Since the writing of this essay, Facebook has been rebranded as Meta.
32 Lee Rainie and Janna Anderson, “Theme 3: Trust Will Not Grow, but Technology Usage
Will Continue to Rise as a ‘New Normal’ Sets in,” Pew Research Center, August 10, 2017,
https://www.pewresearch.org/internet/2017/08/10/theme-3-trust-will-not-grow-but
-technology-usage-will-continue-to-rise-as-a-new-normal-sets-in/.
33 Sarabjit Singh Baveja, Anish Das Sarma, and Nilesh Dalvi, United States Patent No.
9070088 B1: Determining Trustworthiness and Compatibility of a Person, June 30, 2015,
https://patentimages.storage.googleapis.com/36/36/7e/db298c5d3b280c/US9070088
.pdf (accessed January 11, 2022).
34 Ibid., 3–4.
35 Aaron Holmes, “Airbnb Has Patented Software that Digs through Social Media to Root
Out People Who Display ‘Narcissism or Psychopathy,’” Business Insider, January 6, 2020,
https://www.businessinsider.com/airbnb-software-predicts-if-guests-are-psychopaths
-patent-2020-1.
36 See text above from endnotes 14–17.
37 Tero Karppi, Disconnect: Facebook’s Affective Bonds (Minneapolis: University of Minnesota
Press, 2018).
38 Ibid.
39 Tero Karppi and David B. Nieborg, “Facebook Confessions: Corporate Abdication and
Silicon Valley Dystopianism,” New Media & Society 23 (9) (2021); Sarah Friedman,
“How Your Brain Responds Every Time Your Insta Post Gets a ‘Like,’” Bustle, Septem-
ber 21, 2019, https://www.bustle.com/p/how-your-brain-responds-to-a-like-online
-shows-the-power-of-social-media-18725823; and Trevor Haynes, “Dopamine, Smart-
phones, and You: A Battle for Your Time,” Science in the News Blog, Harvard Medical
School, May 1, 2018, https://sitn.hms.harvard.edu/flash/2018/dopamine-smartphones
-battle-time/.
40 Tim Stobierski, “Facebook Ads Outed Me,” INTO, May 3, 2018, https://www.intomore
.com/you/facebook-ads-outed-me/#.
41 Ibid.
42 Paul Hitlin and Lee Rainie, “Facebook Algorithms and Personal Data,” Pew Research
Center, January 16, 2019, https://www.pewresearch.org/internet/2019/01/16/facebook
-algorithms-and-personal-data/. New “post-cookie” advertising schemes enlist the
browser to perform user categorization previously carried out by advertising networks.
Heralded as privacy-preserving because the browser is local to the user’s machine, estos
systems are designed to carry out the same segmentation that so many find objection-
able; see Bennet Cyphers, “Google’s FLoC Is a Terrible Idea,” March 3, 2021, Electronic
Frontier Foundation. Standard notions of privacy do not ensure fair treatment. Cyn-
thia Dwork and Deirdre K. Mulligan, “It’s Not Privacy and It’s Not Fair,” Stanford Law
Review 66 (2013), https://www.stanfordlawreview.org/online/privacy-and-big-data-its
-not-privacy-and-its-not-fair/ (“The urge to classify is human. The lever of big data,
sin embargo, brings ubiquitous classification, demanding greater attention to the values
embedded and reflected in classifications, and the roles they play in shaping public and
private life.”)
43 Moreover, “privacy solutions can hinder efforts to identify classifications that unin-
tentionally produce objectionable outcomes–for example, differential treatment that
tracks race or gender–by limiting the availability of data about such attributes.” Dwork
and Mulligan, “It’s Not Privacy and It’s Not Fair.”
44 See Karppi and Nieborg, “Facebook Confessions," 11.
45 Ibid.
46 Karppi, Disconnect: Facebook’s Affective Bonds.
47 Eugenia Siapera and Paloma Viejo-Otero, “Governing Hate: Facebook and Digital Rac-
ism,” Television & New Media 22 (2) (2020): 112–113, 122.
48 Evelyn Douek, “What Kind of an Oversight Board Have You Given Us?” University of Chi-
cago Law Review Online, May 11, 2020, https://lawreviewblog.uchicago.edu/2020/05/11/
fb-oversight-board-edouek/.
49 See Andrew Marantz, “Why Facebook Can’t Fix Itself,” The New Yorker, October 12, 2020.
50 Lina M. Khan, “Sources of Tech Platform Power,” Georgetown Law Technology Review 2 (2)
(2018): 325.
51 Joshua-Michéle Ross, “The Question Concerning Social Technology,” Radar, May 18,
2009, http://radar.oreilly.com/2009/05/the-question-concerning-social.html; and “Do
Social Media Threaten Democracy?” The Economist, November 4, 2017, https://www
.economist.com/leaders/2017/11/04/do-social-media-threaten-democracy.
52 Ibid.; and The Social Dilemma, dir. Jeff Orlowski (Boulder, Colo.: Exposure Labs, Argent
Pictures, The Space Program, 2020).
53 See Daphne Keller, “The Future of Platform Power: Making Middleware Work,” Journal
of Democracy 32 (3) (2021), https://www.journalofdemocracy.org/articles/the-future
-of-platform-power-making-middleware-work/.
54 See, for example, Aaron Blake, “Trump’s ‘Big Lie’ Was Bigger Than Just a Stolen Election,”
The Washington Post, February 12, 2021, https://www.washingtonpost.com/politics/
2021/02/12/trumps-big-lie-was-bigger-than-just-stolen-election/; Melissa Block, “Can
The Forces Unleashed By Trump’s Big Election Lie Be Undone?” NPR, January 16, 2021,
https://www.npr.org/2021/01/16/957291939/can-the-forces-unleashed-by-trumps
-big-election-lie-be-undone; and Christopher Giles and Jake Horton, “U.S. Election
2020: Is Trump Right about Dominion Machines?” BBC, November 17, 2020, https://
www.bbc.com/news/election-us-2020-54959962.
55 See Richard Fontaine and Kara Frederick, “Democracy’s Digital Defenses,” The Wall
Street Journal, May 7, 2021, https://www.wsj.com/amp/articles/democracys-digital
-defenses-11620403161.