Democracy & Distrust in an Era of Artificial Intelligence
Sonia K. Katyal
© 2022 by Sonia K. Katyal. Published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. https://doi.org/10.1162/DAED_a_01919
Our legal system has historically operated under the general view that courts should
defer to the legislature. There is one significant exception to this view: cases in which
it appears that the political process has failed to recognize the rights or interests of
minorities. This basic approach provides much of the foundational justification for the role of judicial review in protecting minorities from discrimination by the legislature. Today, the rise of AI decision-making poses a similar challenge to democracy's basic framework. As I argue in this essay, the rise of three trends in AI–privatization, prediction, and automation–has combined to pose similar risks to minorities. In this essay, I outline what a theory of judicial review would look like in an era of artificial intelligence, analyzing both the limitations and the possibilities of judicial review of AI. Here, I draw on cases in which AI decision-making has been challenged in courts to show how concepts of due process and equal protection can be recuperated in a modern AI era, and even integrated into AI, to provide for better oversight and accountability.
Almost forty years ago, in an elegant essay published in Dædalus, J. David Bolter wrote, “artificial intelligence is compelling and controversial, not for its practical achievements, but rather for the metaphor that lies behind the programs: the idea that human beings should be seen as nature’s digital computers.”1 “The computer,” Bolter continued, “is a mirror of human nature, just as any invention reflects to some extent the intellect and character of its inventor. But it is not a perfect mirror; it affects and perhaps distorts our gaze, magnifying certain human capacities . . . and diminishing others.”2
As Bolter points out, a study of AI, which intrinsically compels us to compare mind and machine, reveals the distortions and inaccuracies within each realm. Metaphor, in these contexts, can be a useful way to parse the limits of comparison between humankind and machines. On this point, Bolter wrote, “we do not have to become religious converts to artificial intelligence in order to appreciate the computer metaphor. . . . Instead, we can ask in what ways the metaphor is apt and in what ways it may fail.”3 In other words, the study of artificial intelligence forces us to examine deep, compositional questions: What makes a human?
What makes a machine? And, most importantly, what makes something artificial, or intelligent?
To some extent, a similar set of compositional comparisons can be posed to-
ward the relationship between law and democracy. Law is a metaphor of sorts–a
set of artificial principles–that help us to move toward an ideal society; but the
execution of law intrinsically requires us to compare the artifice of these ideals
with the unpredictable reality of humanity and governance, thus revealing the
distortions and inaccuracies within each realm. Just as computers function as im-
perfect mirrors of human nature–magnifying certain human capacities and di-
minishing others–law, too, is a reflection of these limitations and possibilities.
And over time, the law has developed its own form of self-regulation to address
these issues, stemming from the risks surrounding human fallibility. Our legal
system has developed an architectural design of separate institutions, a system of
checks and balances, and a vibrant tradition of judicial review and independence.
Taken together, these elements compose part of the design of democracy.
Similar elements, I argue in this essay, must be part of the future of artificial in-
telligence. That is precisely why a study of AI is necessarily incomplete without ad-
dressing the ways in which regulation can play a role in improving AI accountability
and governance. The issues surrounding algorithmic accountability demonstrate
a deeper, more structural tension within a new generation of disputes regarding
law and technology, and the contrast between public and private accountability. At the core of these issues, of course, lies the issue of trust: trust in AI, trust in humanity, and trust in the rule of law and governance. Here, the true potential of AI does not lie in the information we reveal to one another, but rather in the issues it raises about the interaction of technology, public trust, and the rule of law.
The rise of AI in decision-making poses a foundational challenge to democra-
cy’s basic framework. To recuperate trust in AI for humanity’s sake, it is essential
to employ design systems that integrate principles of judicial review as a founda-
tional part of AI-driven architecture. My approach in this essay sketches out three
dimensions: descriptive, analytic, and normative. First, I describe the background
theory of judicial review to introduce a few themes that are relevant to exploring
the intersection between AI and our legal system. Then I argue that a system of ju-
dicial review is especially needed in light of the rise of three trends that have fun-
damentally altered the course of AI decision-making: privatization (the increased
role of private contractors in making governmental decisions); prediction (the
increased focus on using AI to predict human behaviors, in areas as wide-rang-
ing as criminal justice and marketing); and an increased reliance on automated
decision-making. These three trends, I argue, have combined to create a perfect storm of conflict that calls into question the role of courts and regulation altogether, potentially widening the gap of protection for minorities in a world that
will become increasingly reliant on AI.
Finally, I turn to the normative possibilities posed by these challenges. How
can we ensure that software designers, drawn by traditional approaches to sta-
tistical, predictive analytics, are mindful of the importance of avoiding disparate
treatment? What protections exist to ensure a potential road map for regulatory
intervention? Here, drawing on cases in which AI decision-making has been chal-
lenged in the courts, I sketch out some ways due process and equal protection can
be recuperated in a modern AI era, and even integrated into AI, to provide for bet-
ter oversight and accountability.
The concept of judicial review, in the United States, has long drawn its force
from a famous footnote–perhaps the most famous footnote ever writ-
ten–in the 1938 case U.S. v. Carolene Products, which involved a consti-
tutional challenge to an economic regulation. In the opinion, written by Justice
Harlan Stone, the Court drew a distinction between economic regulation and oth-
er kinds of legislation that might affect the interests of other groups. This distinc-
tion, buried in that “footnote four,” transformed the law’s approach to civil rights,
underpinning the guarantee of equal protection under the Fourteenth Amend-
ment for all citizens in the future.
For economic regulations, the opinion explained, courts should adopt a more
deferential standard of review, erring on the side of trusting the legislature. However, when it was clear that a piece of legislation targeted “discrete and insular
minorities,” Justice Stone recommended employing a heightened standard of re-
view and scrutiny over the legislation, demanding greater justification to defend
its enaction.4 “When prejudice against discrete and insular minorities may be a spe-
cial condition,” Stone wrote, “which tends seriously to curtail the operation of
those political processes ordinarily to be relied upon to protect minorities,” the
law needs to exercise a more “searching inquiry” to justify its actions.
In the footnote, Justice Stone encapsulated a simple, elegant theory: we need
the courts to safeguard minorities from regulations that might disregard or dis-
advantage their interests. Of course, this is not the only reason why we need
judicial review. The famed Carolene footnote later formed the backbone of a semi-
nal book by John Hart Ely, Democracy and Distrust: A Theory of Judicial Review. Ely’s
work was essentially a longer explication of this idea: by integrating a healthy dis-
trust of the political process, we can further safeguard democracy for the future.
To say that the work is formative would be an understatement, as Democracy and
Distrust has been described as “the single most cited work on constitutional law
in the last century,” and “a rite of passage” for legal scholars.5 By developing the
ideas embodied in Stone’s footnote, Ely put forth a theory, known as “representa-
tion-reinforcement theory,” which posits that courts should generally engage in a
variety of situations, including cases in which it appears that the political process
has failed to recognize the rights or interests of minorities, or where fundamental
rights are at stake. This basic theory provides much of the foundational thinking
for justifying the role of the judiciary in protecting minorities from discrimina-
tion and charting a course for judicial review.
Ely’s work has been interpreted to offer a vision of democracy as a function of
procedural values, rather than substantive ones, by focusing on the way that judi-
cial systems can create the conditions for a fair political process.6 One example
of this sort of process malfunction, as Ely described, involved an intentional kind
of disenfranchisement: “the ins,” he observed, “are choking off the channels of
political change to ensure that they will stay in and the outs will stay out.”7 A sec-
ond kind of malfunction involved situations in which “no one is actually denied a
voice or a vote,” but representatives of a majority still systematically disadvantage
minority interests “out of a simple hostility or prejudiced refusal to recognize com-
monalities of interest, and thereby denying that minority the protection afforded
other groups by a representative system.”8
Judicial review, under this approach, also exhorts us to explore whether partic-
ular groups face an undue constraint on their opportunity to participate in the po-
litical process.9 For example, if minorities (or other groups) are constrained from
participating fully in the political process, then the theory of representation-rein-
forcement focuses on proxy participation as a solution. Here, Ely reasoned, judges
might stand in the place of minorities to ascertain the impact that they may face
and take on the responsibility to craft a more inclusive solution. Or if fundamen-
tal rights are under threat, the Court should also intervene in order to preserve the
integrity of the political process.
This basic theory undergirds much of the institutional and legal relationships
between constitutional entitlements and the role of judges in this process. Like
any other theory, Ely’s approach is not perfect: it has been criticized, and right-
fully so, for focusing too much on process at the expense of substantive constitu-
tional rights.10 But this theory of judicial review also yields both descriptive and
normative insights into the government regulation of AI.
Reading Stone’s and Ely’s concerns in today’s era of AI, one is immediately
struck by their similarity of context. Both were concerned with the risk of
majoritarian control, and designed systems of judicial review to actively
protect minority interests. Today, those same concerns are almost perfectly repli-
cated by certain AI-driven systems, suggesting that here, 也, judicial review may
be similarly necessary. And, normatively, just as judicial review is prescribed as a
partial solution to address these risks of majoritarian control in a constitutional
democracy, this insight holds similar limits and possibilities in the context of AI regulation.
Put another way, just as our political system often fails to represent the inter-
ests of demographic minorities, AI systems carry the same risks regarding the ab-
sence of representation and participation–but in private industry. Consider, for example, that one of the most central causes of biased outcomes in AI stems from
an underlying problem of lack of representation among minority populations in
the data sets used to train AI systems. Machine learning algorithms are, basically, inherently regressive: they are trained on a body of data that is selected by de-
signers or by past human practices. This process is the “learning” element in ma-
chine learning; the algorithm learns, for example, how to pair queries and results
based on a body of data that produced satisfactory pairs in the past.11 Thus, 这
quality of a machine learning algorithm’s results often depends on the compre-
hensiveness and diversity of the data that it digests.12
Thus, bias in AI generally surfaces from these data-related issues of repre-
sentation.13 One problem, as AI scholars Kate Crawford and Meredith Whittaker
have described, is largely internal to the process of data collection: errors in data
collection, like inaccurate methodologies, can cause inaccurate depictions of real-
ity.14 This absence of representation is a profound cause of the risk of bias in AI. A
second issue of bias comes from an external source. It happens when the underly-
ing subject matter draws on information that reflects or internalizes some forms
of structural discrimination and thus biases the data as a result.15 Imagine, for example, a situation in which data on job promotions might be used to predict ca-
reer success, but the data were gathered from an industry that systematically pro-
moted men instead of women.16 While the first kind of bias can often be mitigat-
ed by “cleaning the data” or improving the methodology, the latter might require
interventions that raise complex political ramifications because of the structural
nature of the remedy that is required.17
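To make the promotion example concrete, consider the following minimal sketch (my own illustration, using entirely synthetic data and an off-the-shelf classifier; it is not drawn from the essay or from any case it discusses). Performance is distributed identically across the two groups, but because the historical promotion records favored men, a model trained on those records assigns different predicted odds of success to otherwise identical candidates.

```python
# Illustrative sketch only: a classifier trained on historically biased
# promotion records reproduces that bias. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic workforce: gender (1 = male, 0 = female) and a performance score
# drawn from the same distribution for both groups.
gender = rng.integers(0, 2, size=n)
performance = rng.normal(0.0, 1.0, size=n)

# Historical promotions: performance matters, but men were systematically
# favored -- the structural bias described in the text.
promoted = (performance + 1.5 * gender + rng.normal(0.0, 0.5, size=n)) > 1.0

# A model trained to predict "career success" from these records learns the
# bias along with the signal.
X = np.column_stack([gender, performance])
model = LogisticRegression().fit(X, promoted)

# Two candidates with identical performance but different gender receive
# different predicted probabilities of success.
print(model.predict_proba([[1, 0.0], [0, 0.0]])[:, 1])
# Cleaning data-collection errors would not remove this gap: the disparity
# reflects the underlying practice, not a measurement mistake.
```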
Thus, bias can surface in the context of input bias (when the source data
are biased because they may lack certain types of information), training bias
(when bias appears in the categorization of the baseline data), or through pro-
gramming bias (when bias results from an AI system learning and modifying it-
self from incorporating new data).18 Moreover, algorithms themselves can also
be biased: the choices that are made by humans–what features should be used to
construct a particular model, for example–can comprise sources of inaccuracy as
well.19 An additional source of error can come from the training of the algorithm
itself, which requires programmers to decide how to weigh sources of potential
error.20
All the prior harms may seem representational in nature, but they cause dis-
criminatory effects. If the prior discussion focused on the risks of exclusion from
statistical and historical underrepresentation in a data set, there is also the oppo-
site risk of overrepresentation, which can lead to imprecise perceptions and trou-
bling stereotypes. In these instances, due in part to overrepresentation in the data
set, an algorithmic model might associate certain traits with another unrelated trait, triggering extra scrutiny. In these cases, it can be hard to prove discrimina-
tory intent in the analysis; just because an algorithm produces a disparate im-
pact on a minority group, it does not always mean that the designer intended this
result.21
Even aside from concerns about data quality and representation, a second clus-
ter of issues emerges from the intersection of privatization and AI-driven gover-
nance. Constitutional law scholar Gillian Metzger has presciently observed that
“privatization is now virtually a national obsession.”22 Her work describes a foun-
dational risk that private industry is taking the lead in designing modes of gover-
nance.23 Notably, private contractors exercise a broad level of authority over their
program participants, even when government officials continue to make deter-
minations of basic eligibility and other major decisions.24 These trends toward
privatization and delegation are endemic throughout government infrastructure,
and many draw on machine learning techniques.25 As intellectual property law
scholar Robert Brauneis and information policy law scholar Ellen Goodman have
eloquently noted, “the risk is that the opacity of the algorithm enables corporate
capture of public power.”26
Today, algorithms are pervasive throughout public law, employed in predictive
policing analysis, family court delinquency proceedings, tax audits, parole deci-
sions, DNA and forensic science techniques, and matters involving Medicaid, oth-
er government benefits, child support, airline travel, voter registration, and ed-
ucator evaluations.27 The Social Security Administration uses algorithms to aid
its agents in evaluating benefits claims; the Internal Revenue Service uses them
to select taxpayers for audit; the Food and Drug Administration uses algorithms
to study patterns of foodborne illness; the Securities and Exchange Commission
uses them to detect trading misconduct; local police departments employ their
insights to predict the emergence of crime hotspots; courts use them to sentence
defendants; and parole boards use them to decide who is least likely to reoffend.28
As legal scholar Aziz Huq has explained, the state uses AI techniques for target-
ing purposes (that is, decisions on who to investigate or how to allocate resources
like aid) and for adjudicatory purposes (in which the state may rely on AI tech-
niques as a stand-in for a judicial determination).29 To these two parameters, we
might add on a third, involving AI-driven forensic techniques to aid the state in de-
termining whether a legal violation has taken place: 例如, machine learn-
ing techniques that analyze breath alcohol levels. In these cases, while AI might aid the state in gathering evidence, the ultimate determination of compliance (or lack thereof) may rest with human judgment. Here, the selection of a perpetra-
tor might be performed by human law enforcement (who also determine whether
evidence supports that a violation has taken place), but the evidence might be in-
formed by an AI-driven technique.
Many of these tools are privately developed and proprietary. Yet the rise of
proprietary AI raises a cluster of issues surrounding the risk of discrimination:
one involving the deployment of AI techniques by private entities that raises le-
gal concerns; and another involving the deployment of AI techniques by public
entities that raises constitutional concerns. Taken together, these systems can of-
ten impose disparate impacts on minority communities, stemming from both pri-
vate and public reliance on AI. In one example from Pennsylvania, an automated
system called the Allegheny Family Screening Tool was used to determine which
families were in need of child welfare assistance. But the system entailed the risk
of racial disparity: since Black families were more likely to face a disproportion-
ately higher level of referrals based on seemingly innocuous events (like missing a
doctor appointment), they were likely to be overrepresented in the data. Parents
also reported feeling dehumanized within the system by having their family his-
tory reduced to a numerical score. Moreover, given the large amount of data the
system processed (and the sensitivity of the data), it carried a serious risk of data
breaches.30
Each of these prior concerns, as Huq points out, maps onto concerns regard-
ing equality, due process, and privacy, yet, as he notes, each problem is only
“weakly constrained by constitutional norms.”31 Not only would it be difficult to
determine whether someone’s rights were violated, but parties who were singled
out would find it difficult to claim violations of equality, due process, or priva-
cy, especially given the deference enjoyed by the decision-maker.32 Further, the
opacity of these systems raises the risk of (what I have called elsewhere) “infor-
mation insulation,” which involves an assertion of trade secret protection in sim-
ilar cases.33
Each layer of AI-driven techniques raises profound questions about the rule of
law. Here, privatization and automation become intimately linked, often at the
cost of fundamental protections, like due process. The problem is not just that
governmental decision-making has been delegated to private entities that de-
sign code; it is also the reverse situation, in which private entities have significant
power that is not regulated by the government. While the effects of algorithms’
predictions can be troubling in themselves, they become even more problemat-
ic when the government uses them to distribute resources or mete out punish-
ment.34 In one representative case, a twenty-seven-year-old woman with severe
developmental disabilities in West Virginia had her Medicaid funds slashed from
$130,000 to $72,000 when the vendor began using a proprietary algorithm, mak-
ing it impossible for her to stay in her family home.35 When she challenged the
determination on grounds of due process, the court agreed with her position, ob-
serving that the vendor had failed to employ “ascertainable standards,” because
it provided “no information as to what factors are incorporated into the APS algo-
rithm,” nor provided an “individualized rationale” for its outcome.36 The district
court concluded that the lack of transparency created an “unacceptable risk of ar-
bitrary and ‘erroneous deprivation[s]’ of due process.”37
As the previous example suggests, while automation lowers the cost of decision-
making, it also raises significant due process concerns, involving a lack of no-
tice and the opportunity to challenge the decision.38 Even if the decisions could
be challenged, the opacity of AI makes it nearly impossible to discern all of the
variables that produced the decision. Yet our existing statutory and constitutional
schemes are poorly crafted to address issues of private, algorithmic discrimina-
tion. Descriptively, AI carries similar risks of majoritarian control and systemic
prejudice, enabling majority control at the risk of harming a minority. And yet our
existing frameworks for regulating privacy and due process cannot account for
the sheer complexity and numerosity of cases of algorithmic discrimination. In
part because of these reasons, private companies are often able to evade statuto-
ry and constitutional obligations that the government is required to follow. Thus,
because of the dominance of private industry, and the concomitant paucity of in-
formation privacy and due process protections, individuals can be governed by
biased decisions and never realize it, or they may be foreclosed from discovering
bias altogether due to the lack of transparency.
If we consider how these biases might surface in AI-driven decision-making,
we can see more clearly how the issue of potential bias in AI resembles the
very problem of majority control that Ely wrote extensively about, even
though it involves privatized, closed, automated decision-making. If our systems
of AI are driven by developers or trained on unrepresentative data, it feeds into the
very risk of majoritarian control that judicial review is ideally designed to prevent.
I want to propose, 然而, another story, one that offers us a different set of pos-
sibilities regarding the building of trust by looking, again, to the prospect of judi-
cial review.39 Here, I want to suggest that AI governance needs its own theory of
representation-reinforcement, extending to every person within its jurisdiction
the equal protection of the law, in essentially the same way that the Constitution
purports to.
Where metrics reflect an inequality of opportunity, we might consider em-
ploying a similar form of external judicial review to recommend against adoption
or refinement of these metrics. In doing so, an additional layer of judicial or quasi-
judicial review can serve as a bulwark against inequality, balancing both substan-
tive and process-oriented values. Here, we might use judicial review, not as a tool
to honor the status quo, but as a tool to demand a deeper, more substantive equal-
ity by requiring the employment of metrics to address preexisting structural in-
equalities. And if filing an actual legal case in the courts proves too difficult due to
an existing dearth of regulation, then I would propose the institution of indepen-
dent, quasi-judicial bodies to ensure oversight for similar purposes.
What would a representation-reinforcement theory–or relatedly, a theory
of judicial review–accomplish in the context of AI? While a detailed account of
representation and reinforcement is hard to accomplish in a short essay, I want
to focus on two main sets of possibilities, the first stemming from Ely’s concept
of virtual representation. As I suggested earlier, one core issue with algorithmic
decision-making is that it reflects an inherently regressive presumption: deci-
sions, and data collected by past practices, adequately reflect–and predict–what
we should do in the future, thereby “freezing” the possibility of a deeper and more
meaningful form of substantive equality.40 Unrepresentative data, in other words,
can perpetuate inequalities through machine learning, leading to a feedback loop
that further amplifies existing forms of bias.
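The feedback loop described here can be illustrated with a toy simulation (again my own sketch, with invented numbers; it is not a model of any deployed system). Two districts have identical underlying rates, but the historical record over-represents one of them; if each round of attention simply follows the existing record, and new incidents are recorded only where attention is directed, the initial skew compounds rather than washes out.

```python
# Illustrative feedback-loop sketch: unrepresentative records steer future
# attention, which generates more records in the same place.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.3, 0.3])     # identical underlying reality
recorded = np.array([60.0, 40.0])    # skewed historical record

for _ in range(10):
    hotspot = np.argmax(recorded)             # follow the existing record
    new_incidents = rng.binomial(100, true_rate[hotspot])
    recorded[hotspot] += new_incidents        # only the watched district is measured

print(recorded, recorded / recorded.sum())
# The first district's share climbs toward 100 percent even though the two
# districts are identical: the past pattern is "frozen" into future decisions.
```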
Interestingly, Justice Stone and John Hart Ely identified roughly the same con-
cerns regarding the lack of minority representation in the democratic pool, justi-
fying a more aggressive form of intervention and oversight. In other words, just
as Ely’s theory predicts, disparities in representation–over- or underrepresenta-
tion–can fuel disparate results. Yet Ely’s raising of the “judicially enforceable duty
of virtual representation” enables us to see how profitably it can be recast to en-
franchise the interests of minority populations in an AI-driven context. As Ely ob-
serves, one basic concern is that minorities must always be represented in the po-
litical process, and that we rely essentially on our judicial system to make sure that
this happens.41
Here, one core element to accomplish this goal involves the necessity of creating a layer of institutional separation between the initial decision-maker (the
AI system) and the reviewer (essentially, the system of judicial review). Like the
division between the judiciary and the legislative branches, AI-driven systems
can and must include systems of independent oversight that are distinct from the
AI systems themselves. And there is evidence that this architectural solution is
taking place. Consider an analogy from Europe’s General Data Protection Reg-
ulation (GDPR), which requires separate data protection impact assessments
(DPIA) whenever data processing “is likely to result in a high risk to the rights
and freedoms of natural persons.”42 Large-scale data processing, automated de-
cision-making, processing of data concerning vulnerable subjects, or processing
that might prevent individuals from exercising a right or using a service or con-
tract would trigger a DPIA requirement.43 Notably, this model extends to both
public and private organizations.44
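As a rough illustration of how such trigger conditions might be operationalized inside an organization, the sketch below encodes the criteria summarized above (and in endnote 43) as a simple screening step. The field names and the function are my own hypothetical shorthand; an actual DPIA determination under the GDPR rests on legal judgment and regulatory guidance, not on a boolean checklist.

```python
# Hypothetical internal screening step paraphrasing the DPIA triggers noted
# in the text; not an implementation of the GDPR itself.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    automated_decisions_with_significant_effects: bool = False
    large_scale_sensitive_or_criminal_data: bool = False
    systematic_public_monitoring: bool = False
    biometric_or_genetic_data: bool = False
    combines_data_from_multiple_sources: bool = False
    tracks_location_or_behavior: bool = False
    concerns_vulnerable_subjects: bool = False

def dpia_screening(activity: ProcessingActivity) -> bool:
    """Return True if any trigger applies, i.e. the processing is 'likely to
    result in a high risk' and an impact assessment should precede it."""
    return any(vars(activity).values())

# Example: an automated benefits-eligibility tool built on matched records.
tool = ProcessingActivity(
    automated_decisions_with_significant_effects=True,
    combines_data_from_multiple_sources=True,
)
print(dpia_screening(tool))  # True -> conduct a DPIA before deployment
```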
One could easily imagine how this concept of independent review could be
incorporated more widely into AI-driven systems to ascertain whether a system
risks disparate impacts. A close look at these statements reveals a markedly thor-
ough implementation of the concept of institutional separation: a DPIA state-
ment is meant to be drafted by the organization’s controller in order to show com-
pliance with the GDPR; but the controller represents a separate entity from the or-
ganization processing the data.45 In doing so, the system ensures a form of built-in
virtual representation and review by putting the controller in the same position as
a judge to ensure compliance. Additional elements require an assessment of risks
to individuals and a showing of the additional measures taken to mitigate those
risks.46
Finally, as Ely suggests, judicial review is often necessary to ensure
due process. Due process is especially needed in the context of AI so that individ-
uals are able to ascertain the rationale behind AI-driven decisions and to guard
against unclear explanations. In one case, in Houston, a group of teachers success-
fully challenged a proprietary algorithm developed by a private company, SAS,
called the Educational Value-Added Assessment System (EVAAS), used to assess public school teacher performance, resulting in the dismissal of twelve teachers with
little explanation or context.47 Experts who had access to the source code con-
cluded that the teachers were unable to “meaningfully verify” their scores under
EVAAS.48 Ultimately, the court ruled against adopting use of the software because
of due process concerns, noting, plainly: “When a public agency adopts a pol-
icy of making high stakes employment decisions based on secret algorithms in-
compatible with minimum due process, the proper remedy is to overturn the poli-
cy.”49 Plainly, the court agreed with the due process concerns, noting that the gen-
eralized explanation was insufficient for an individual to meaningfully challenge
the system’s determination, and the case settled a few months later.50
The Houston case is instructive in underscoring the importance of safeguard-
ing procedural protections like due process. Had it not been for the teachers’ abil-
ity to bring this to a judicial forum to demand due process protection, the AI-
driven injustice they faced would have never seen the light of day. By requiring AI
systems to integrate similar entitlements of due process and independent over-
sight, we can ensure better outcomes and build more trust into the accountability
of AI-driven systems overall.
In his essay forty years ago, Bolter predicted, “I think artificial intelligence will
grow in importance as a way of looking at the human mind, regardless of the
success of the programs themselves in imitating various aspects of human
thought. . . . Eventually, however, the computer metaphor, like the computer itself,
will simply be absorbed into our culture, and the artificial intelligence project will
lose its messianic quality.”51
We are still at a crossroads in adapting to AI’s messianic potential. Ely wrote
his masterful work at a time in which AI was just at the horizon of possibility. Yet
the way that AI promises to govern our everyday lives mirrors the very same con-
cerns that he was writing about regarding democracy and distrust. But the debates
over AI provide us with the opportunity to elucidate how to employ AI to build
a better, fairer, more transparent, and more accountable society. Rather than AI
serving as an obstacle to those goals, a robust employment of the concept of judi-
cial review can make them even more attainable.
author’s note
The author thanks Erwin Chemerinsky, James Manyika, and Neal Katyal for their
insightful comments and suggestions.
about the author
Sonia K. Katyal is Associate Dean of Faculty Development and Research, Codirec-
tor of the Berkeley Center for Law & Technology, and Distinguished Haas Chair at the University of California, Berkeley, School of Law.
endnotes
1 J. David Bolter, “Artificial Intelligence,” Dædalus 113 (3) (Summer 1984): 3.
2 Ibid., 17.
3 Ibid.
4 U.S. v. Carolene Products Co., 304 U.S. 144, 153 n.4 (1938).
5 Henry Paul Monaghan, “John Ely: The Harvard Years,” Harvard Law Review 117 (6) (2004):
1749.
6 Jane S. Schacter, “Ely and the Idea of Democracy,” Stanford Law Review 57 (3) (2004): 740.
7 John Hart Ely, Democracy and Distrust: A Theory of Judicial Review (Cambridge, Mass.: Harvard University Press, 1980), 103.
8 Schacter, “Ely and the Idea of Democracy,” 740.
9 See Ely, Democracy and Distrust, 77, quoted in Schacter, “Ely and the Idea of Democracy,”
741.
10 See, for example, Lawrence Tribe, “The Puzzling Persistence of Process-Based Theories,”
Yale Law Journal 89 (1063) (1980).
11 See Schacter, “Ely and the Idea of Democracy,” 760.
12 See Andrew D. Selbst and Solon Barocas, “Big Data’s Disparate Impact,” California Law
审查 104 (3) (2016): 688.
13 Kate Crawford and Meredith Whittaker, “The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term” (New York: AI Now Institute, July 7, 2016, last modified September 22, 2016), 6–7, https://ainowinstitute.org/AI_Now_2016_Report.pdf [http://perma.cc/6FYB-H6PK (captured August 13, 2018)].
14 Ibid.
15 Ibid.
16 Ibid., 6.
17 Ibid.
18 Nizan Geslevich Packin and Yafit Lev-Aretz, “Learning Algorithms and Discrimination,” in Research Handbook on the Law of Artificial Intelligence, ed. Ugo Pagallo and Woodrow Barfield (Cheltenham, UK: Edward Elgar Publishing, 2018), 88–133.
19 Michael L. Rich, “Machine Learning, Automated Suspicion Algorithms, and the Fourth
Amendment,” University of Pennsylvania Law Review 164 (4) (2016): 883–885.
20 Ibid.
21 See Joshua A. Kroll, Joanna Huey, Solon Barocas, et al., “Accountable Algorithms,” Uni-
versity of Pennsylvania Law Review 165 (3) (2017): 693–694.
22 Gillian E. Metzger, “Privatization as Delegation,” Columbia Law Review 103 (6) (2003):
1369.
23 Ibid., 1370.
24 Ibid., 1387.
25 David S. Levine, “The Impact of Trade Secrecy on Public Transparency,” in The Law and Theory of Trade Secrecy: A Handbook of Contemporary Research, ed. Katherine J. Strandburg and Rochelle C. Dreyfuss (Cheltenham, United Kingdom: Edward Elgar Publishing, 2012), 406–441.
26 See Ellen P. Goodman and Robert Brauneis, “Algorithmic Transparency for the Smart City,” Yale Journal of Law and Technology 20 (1) (2019): 109.
27 Noting these areas of use, see “Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems” (New York: AI Now Institute, 2018), 5, https://ainowinstitute.org/litigatingalgorithms.pdf [https://perma.cc/KZ52-PZAH (captured January 22, 2019)]. For details on government uses of automated decision-making, see Danielle Keats Citron, “Open Code Governance,” The University of Chicago Legal Forum 2008 (1) (2008): 356–357.
28 Ronald Bailey, “Welcoming Our New Algorithmic Overlords?” Reason, October 1, 2016,
https://reason.com/archives/2016/10/01/welcoming-our-new-algorithmic [https://
perma.cc/YV7L-RK8N (captured August 23, 2018)].
29 Aziz Z. Huq, “Artificial Intelligence and the Rule of Law,” University of Chicago Public Law
and Legal Theory, Research Paper Series 764 (2021): 3.
30 Aziz Z. Huq, “Constitutional Rights in the Machine Learning State,” Cornell Law Review
105 (7) (2020): 1893.
31 Ibid.
32 Ibid., 1894.
33 See Sonia Katyal, “The Paradox of Source Code Secrecy,” Cornell Law Review 104 (5)
(2019): 1240–1241, https://scholarship.law.cornell.edu/clr/vol104/iss5/2/, citing David
S. Levine, “Secrecy and Unaccountability: Trade Secrets in Our Public Infrastructure,” Florida Law Review 59 (135) (2007): 111, http://www.floridalawreview.com/2010/david-s-levine-secrecy-and-unaccountability-trade-secrets-in-our-public-infrastructure/ (discussing the public interest concerns at stake).
34 Hannah Bloch-Wehba, “Access to Algorithms,” Fordham Law Review 88 (4) (2020): 1277.
35 Ibid., 1277–1278, citing “First Amended Complaint for Injunctive & Declaratory Relief,” Michael T. v. Bowling, No. 2:15-CV-09655, 2016 WL 4870284 (S.D.W. Va. Sept. 13, 2016),
ECF No. 14.
36 Ibid., 1278.
37 Ibid.
38 Ibid., 1249.
39 Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Bias Preservation in Machine
学习: The Legality of Fairness Metrics Under EU Non-Discrimination Law,” West
Virginia Law Review 123 (3) (2021): 17, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792772.
40 Ibid., 31.
41 See Brian Boynton, “‘Democracy and Distrust’ after Twenty Years: Ely’s Process Theory
and Constitutional Law,” Stanford Law Review 53 (2) (2000): 406.
42 Andrew D. Selbst, “Disparate Impact in Big Data Policing,” Georgia Law Review 52 (1)
(2017): 170–171. See also “Data Protection Impact Assessments,” United Kingdom In-
formation Commissioner’s Office, https://perma.cc/Q2NL-9AYZ (captured October
13, 2018).
43 Ibid. Requiring DPIAs if the entity uses “systematic and extensive profiling or automated
decision-making to make significant decisions about people,” processes data or crim-
inal offense data on a large scale, systematically monitors a publicly accessible place,
processes biometric or genetic data, combines or matches data from multiple sources,
or processes personal data in a way that involves online or offline tracking of location
or behavior, among other categories.
44 See Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker, “Algorith-
mic Impact Assessments: A Practical Framework for Public Agency and Accountabil-
ity” (New York: AI Now Institute, 2018), 7, https://perma.cc/JD9Z-5MZC (captured October 13, 2018).
45 Sagara Gunathunga, “All You Need to Know About GDPR Controllers and Processors,”
Medium, September 12, 2017, https://medium.com/@sagarag/all-you-need-to-know
-about-gdpr-controllers-and-processors-248200ef4126 [https://perma.cc/8X46-8Y5D
(captured October 13, 2018)].
46 Ibid.
47 See Houston Federation of Teachers v. Houston Independent School District, 251 F. Supp. 3d at 1168
(S.D. Tex. 2017); and Bloch-Wehba, “Access to Algorithms,” 1282.
48 See Houston Federation of Teachers v. Houston Independent School District, 251 F. Supp. 3d at 1168.
49 See ibid., 1179.
50 See Bloch-Wehba, “Access to Algorithms,” 1282–1283.
51 Bolter, “Artificial Intelligence,” 18.