Replacing Bureaucrats with Automated Sorcerers?

Bernard W. Bell

Increasingly, federal agencies employ artificial intelligence to help direct their enforcement efforts, adjudicate claims and other matters, and craft regulations or regulatory approaches. Theoretically, artificial intelligence could enable agencies to address endemic problems, most notably 1) the inconsistent decision-making and departure from policy attributable to low-level officials’ exercise of discretion; and 2) the imprecise nature of agency rules. But two characteristics of artificial intelligence, its opaqueness and the nonintuitive nature of its correlations, threaten core values of administrative law. Administrative law reflects the principles that 1) persons be judged individually according to announced criteria; 2) administrative regulations reflect some means-end rationality; and 3) administrative decisions be subject to review by external actors and transparent to the public. Artificial intelligence has adverse implications for all three of those critical norms. The resultant tension, at least for now, will constrain administrative agencies’ most ambitious potential uses of artificial intelligence.

Artificial intelligence/machine learning (AI/ML) algorithms are widely used in the private sector. We experience the results daily: AI/ML algorithms suggest products for purchase and even finish our sentences. But those uses of AI/ML seem tame and impermanent: we can always reject algorithm-generated suggestions.

Can AI/ML become a resource for government agencies, not just in controlling
traffic lights or sorting mail, but in the exercise of the government’s coercive pow-
ers? The federal government has begun to deploy AI/ML algorithms.1 The embrace
of such technologies will profoundly affect not only the public, but the bureaucra-
cies themselves.2 Might AI/ML bring agencies closer to attaining, in Max Weber’s
words, “the optimum possibility for carrying through the principle of specializing
administrative functions according to purely objective considerations”?3

Before exploring such implications, I will discuss AI/ML’s capabilities and use by
federal agencies, and agencies’ functions and environment. Ultimately, we will see
that AI/ML is a sort of empirical magic that may assist in coordinating an agency’s
actions but presents challenges due to its lack of transparency and nonintuitiveness.

© 2021 von der American Academy of Arts & Sciences Published under a Creative Commons Attribution- NonCommercial 4.0 International (CC BY-NC 4.0) license https://doi.org/10.1162/DAED_a_01861

The government has long used computers to store and process vast quantities of information.4 But human beings fully controlled the computers and wrote their algorithms. Programmers had to do all the work of modeling reality: that is, attempting to ensure that their algorithm reflected the actual world, as well as incorporating the agencies’ objectives.5

AI/ML is much less dependent on the programmer.6 It finds associations and
relationships in data, correlations that are both unseen by its programmers and
nonintuitive. As to the latter, for example, an AI/ML algorithm might predict a
person’s preferred style of shoe based upon the type of fruit the person typically
purchases for breakfast.7 Thus, AI/ML results do not represent cause and effect;
correlation does not equal causation. Indeed, as in the example above, AI/ML al-
gorithms may rely upon correlations that defy intuitive expectations about rele-
vance; no one would posit that shoppers consider their breakfast choices when
making shoe selections.

The opaque and nonintuitive associations on which AI/ML relies, that is, AI/
ML’s “black box” quality, have consequences for administrative law.8 Even know-
ing the inputs and the algorithm’s results, the algorithm’s human creator cannot
necessarily fully explain, especially in terms of cause and effect, how the algorithm
reached those results. The programmer may also be unable to provide an intuitive
rationale for the algorithm’s results. While computer experts can describe the al-
gorithm’s conclusion that people with a particular combination of attributes gen-
erally warrant a particular type of treatment, they cannot claim that the algorithm
has established that any particular individual with that combination of attributes
deserves such treatment.9

AI/ML can be used in either a supervised or unsupervised manner. In supervised
learning, training data are used to develop a model with features to predict known
labels or outcomes. In unsupervised learning, a model is trained to identify patterns
without such labels.10
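
To make the distinction concrete, here is a minimal Python sketch, assuming the open-source scikit-learn library and synthetic data; nothing in it is drawn from any agency system.

```python
# Minimal sketch of supervised vs. unsupervised learning (scikit-learn).
# Synthetic data; purely illustrative of the distinction drawn in the text.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Features X paired with known labels y (e.g., past claims marked granted/denied).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised: the model learns to predict the known labels from the features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted outcomes:", clf.predict(X[:5]))

# Unsupervised: the same features with no labels; the model groups cases
# purely by patterns it finds in the data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("discovered clusters:", clusters[:5])
```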

AI/ML is particularly useful in performing four functions: identifying clus-
ters or associations within a population; identifying outliers within a population;
developing associational rules; and solving prediction problems of classification
and regression.11 AI/ML is currently less useful when a problem requires “estimat-
ing the causal effect of an intervention.”12 Nor can such algorithms resolve non-
empirical questions, such as normatively inflected ones, like ethical decisions.13
Presumably, AI/ML is ill-suited for resolving some empirical questions that fre-
quently arise in administrative and judicial contexts, such as resolving witnesses’
differing accounts of past events. In those situations, the data inputs are unclear.

A recent Administrative Conference of the United States (ACUS) study uncovered considerable agency experimentation with or use of AI/ML.14 Agencies largely employed human-supervised AI/ML algorithms, and
their results were generally used to assist agency decision-makers and agency
management in making their own decisions. A few examples follow.

The Securities and Exchange Commission (SEC) uses AI/ML to monitor the se-
curities markets for potential insider trading. The SEC’s ARTEMIS system focuses
on detecting serial inside traders. SEC staff use a natural language processing algorithm to sift through 8-K forms submitted by companies to announce important events that occur between their regular securities filings. Then, a machine learning algorithm identifies
trigger events or market changes that warrant investigation. An official reviews the
output and decides whether further investigation is justified. If so, SEC staff send
a blue sheet request to broker/dealers for relevant trading records. The blue sheet
data are analyzed with previously requested blue sheet data by an unsupervised
learning model to detect anomalies indicating the presence of insider trading.
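
ARTEMIS itself is not public, so the following Python sketch is purely hypothetical: it shows how an unsupervised model (here, scikit-learn’s IsolationForest) can flag anomalous trading records for human review, with every feature and threshold invented for illustration.

```python
# Hypothetical sketch of unsupervised anomaly detection over trading records,
# in the spirit of the blue-sheet analysis described above (not the SEC's code).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Made-up features per trade: size, days before announcement, past frequency.
normal_trades = rng.normal(loc=[100, 30, 5], scale=[20, 10, 2], size=(500, 3))
suspect_trades = np.array([[900, 2, 40], [750, 1, 35]])  # large, well timed, serial
trades = np.vstack([normal_trades, suspect_trades])

model = IsolationForest(contamination=0.01, random_state=0).fit(trades)
flags = model.predict(trades)  # -1 marks anomalies, 1 marks inliers
print("flagged for human review:", np.where(flags == -1)[0])
```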

The Social Security Administration (SSA) uses several methods to increase the
efficiency of its disability benefits claim adjudication process. It has attempted to
apply algorithms to claim metadata to create clusters of similar cases it can assign
to the same administrative law judge (ALJ). It has also developed an AI/ML analy-
sis of claims to determine the probability of an award of benefits based solely on
certain attributes of the claims. Officials use the results in establishing the order
in which claims are assigned, moving ones likely to be granted to the head of the
line. However, the actual determination of the claim is made by the ALJ.
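
That queue-ordering step might be sketched as follows; the model, features, and data are invented for illustration, and, as noted, the model only orders the queue while the ALJ decides the claim.

```python
# Illustrative sketch: order a claims queue by predicted probability of award.
# The features, model, and data are hypothetical, not the SSA's actual system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train on past claims whose outcomes (granted/denied) are known.
X_past, y_past = make_classification(n_samples=1000, n_features=6, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_past, y_past)

# Score pending claims by their probability of an award.
X_pending, _ = make_classification(n_samples=20, n_features=6, random_state=2)
p_award = model.predict_proba(X_pending)[:, 1]

# Claims most likely to be granted move to the head of the line.
queue = sorted(range(len(X_pending)), key=lambda i: p_award[i], reverse=True)
print("assignment order (claim indices):", queue)
```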

AI/ML assists adjudicators in preparing disability decisions. The SSA’s Insight
program allows adjudicators to identify errors in their draft decisions, such as er-
roneous citations (that is, nonexistent regulation numbers) and misapplication of
the vocational grid (the metric used to determine whether sufficient work exists
in the national economy for those of a claimant’s level of exertional ability, age,
and education). Insight also assists the SSA in identifying common errors made by
ALJs, outlier ALJs, and areas in which SSA policies need clarification.15
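
A rudimentary version of the citation check could look like the sketch below; the regulation list, regex, and draft text are invented stand-ins, not the actual Insight system.

```python
# Toy sketch of one Insight-style check: flag citations to regulations that
# do not exist. The "valid" list and the draft text are invented examples.
import re

VALID_SECTIONS = {"404.1520", "404.1560", "416.920"}  # hypothetical authority table

draft = "Under 20 CFR 404.1520 the claimant fails at step two; see 20 CFR 404.9999."

for section in re.findall(r"20 CFR (\d+\.\d+)", draft):
    if section not in VALID_SECTIONS:
        print(f"possible erroneous citation: 20 CFR {section}")
```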

The ACUS report discusses the use of AI/ML to sift through the massive num-
ber of comments made in response to the Federal Communications Commis-
sion’s proposed rollback of its net neutrality rules and the Consumer Financial
Protection Bureau’s use of AI/ML to classify the complaints it receives.16 Algo-
rithms have been deployed to assist agencies in predicting an industry’s potential
response to various alternative formulations of a contemplated regulation. The
Environmental Protection Agency’s (EPA) OMEGA model “sift[s] through the
multitude of ways . . . automaker[s] could comply with a proposed greenhouse gas
emissions standard to identify the most likely compliance decisions.”17 OMEGA
thus has helped the EPA set greenhouse gas emissions standards “that [protect]
public health and welfare while remaining cognizant of the time and cost burdens
imposed on automakers.”18 OMEGA is not an AI/ML algorithm, but we might see
it as a forerunner of AI/ML algorithms that would perform a similar function.19

Administrative agencies perform a wide array of functions. Administrative law scholars tend to focus on three broad categories of agency action that
lie at the heart of the government’s coercive powers: enforcement, adju-
dication, and rulemaking. These categories derive from the distinction between
legislating, enforcing the law, and adjudicating legal disputes.

Enforcement. Enforcement involves monitoring regulated entities, identifying
statutory or regulatory violations, and pursuing sanctions for such violations. En-
forcement is largely an executive function.

Darüber hinaus, enforcement has heretofore been considered inherently discre-
tionary: agencies’ limited resources simply do not allow them to be present ev-
erywhere at all times, much less pursue every potential regulatory violation.20
Choosing which regulated entities or activities to investigate has thus been excluded
from the realm of consequential decisions. If the entity or person under investi-
gation has been complying with the law (or if the government cannot amass suffi-
cient evidence to prove otherwise), no adverse consequence will ensue. General-
ly, the cost of undergoing investigation and defending oneself in an unsuccessful
government enforcement action is not considered a harm.21

Adjudication. Adjudication involves resolving individuals’ rights against, claims
of entitlements from, or obligations to the government. Thus, decisions regard-
ing Social Security disability benefits, veterans’ benefits, entitlement to a partic-
ular immigration status, and the grant or revocation of government licenses or
permits, as well as liability for civil fines or injunctive-type relief, are all adjudi-
cations. In mass justice agencies, these adjudications differ substantially from
traditional judicial determinations. Traditional judicial decisions often involve
competing claims of right and frequently require making moral judgments in the
course of resolving cases. The specification of rights and obligations is often in-
tertwined with a determination of the applicable facts.22 AI/ML algorithms might
make quite good predictions regarding the results in such cases, but we are chary
about leaving the actual decision to an AI/ML algorithm.

Mass adjudication by administrative agencies can often be much more routin-
ized. Consider insurance companies’ resolution of automobile accident claims.
The judicially crafted law is complex. Liability turns on each actor’s “reasonable-
ness,” a judgment based on a mixture of law and fact. The complexity represents
an effort to decide whether the injured plaintiff is morally deserving of recovery
from the defendant driver. Fully litigating such cases requires questioning all wit-
nesses to the accident closely. But insurance companies seeking to resolve mass
claims without litigation use traffic laws to resolve liability issues, as an imperfect
but efficient metric.23

Similarly, the SSA disability determinations could be considered expressions
of a societal value judgment regarding which members of society qualify as the
deserving poor.24 Such a determination could be unstructured and allow significant
room for adjudicators’ application of moral judgments and intuition. But the SSA
has, of necessity, established a rigid, routinized, five-step process for evaluating
disability claims.25 And the final step involves assessing whether sufficient jobs
the claimant can perform exist in the national economy. That too was routinized
by use of a grid, which provided a yes/no answer for each combination of appli-
cants’ age, education, and exertional capacity.26
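
Because the grid reduces the final step to a lookup, it can be expressed as a simple table. The entries below are invented; the actual medical-vocational guidelines appear at 20 CFR Part 404, Subpart P, Appendix 2.

```python
# Sketch of a grid-style lookup for the final step of the disability analysis.
# The combinations and outcomes here are invented, not the real grid.
GRID = {
    # (age band, education, exertional capacity) -> disabled?
    ("55+", "limited", "sedentary"): True,
    ("55+", "high school", "light"): False,
    ("18-49", "limited", "sedentary"): False,
}

def grid_rule(age_band: str, education: str, capacity: str) -> bool:
    """Return the yes/no answer the grid gives for this combination."""
    return GRID[(age_band, education, capacity)]

print(grid_rule("55+", "limited", "sedentary"))  # True: benefits awarded
```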

Another aspect of agency adjudication warrants attention. Much of tradition-
al litigation, particularly suits for damages, involves assessing historical facts, Die
WHO, what, Wann, Wo, and why of past events. But agency adjudications can in-
volve predictions as well as historical facts. Thus, licensing decisions are ground-
ed on predictions regarding the likelihood that the applicant will comport with
professional standards. Likewise, the last step of the SSA disability determination,
whether a person with certain age, educational, and exertional limitations could
find a sufficient number of jobs available in the national economy, is a prediction.
On the other hand, whether an employer committed an unfair labor practice in
treating an employee adversely for union activity is a question of historical fact.

AI/ML excels at making predictions–that is its sine qua non–and predictions
are all we have with regard to future events (or present events we may want to ad-
dress without taking a wait-and-see approach).27 But for an issue such as whether
a particular entity engaged in a specific unfair labor practice, we might want to fo-
cus on the witness accounts and documentary evidence relevant to that situation,
rather than AI/ML-generated correlations.28 Or to use an example from toxic torts,
epidemiological and toxicological studies establishing general causation between a
toxin and a toxic harm may be fine for estimating risks to a population exposed to a
toxin, but do not prove what courts in toxic torts must determine: namely, wheth-
er the harm the plaintiff suffered was caused by the plaintiff’s exposure to a toxin.29
Rulemaking. Rulemaking involves promulgation of imperatives of general ap-
plicability akin to statutes. As administrative law scholars Cary Coglianese and
David Lehr suggest, AI/ML’s use in rulemaking is limited because that process in-
volves normative judgments and requires “overlay[ing] causal interpretations on
the relationship between possible regulations and estimated effect.”30 The prod-
uct of agency rulemaking–regulations–may resemble formal legislation, but the
rulemaking process is designed to be far less onerous. Agencies often promulgate
such regulations by “notice-and-comment” procedures.31 Those procedures seem
deceptively simple, but in practice require the agency to identify and categorize
assertions made in thousands of comments regarding the rule’s propriety. And
with the emphasis on the Office of Management and Budget’s (OMB) regulatory
review of proposed regulatory actions, a significant part of the rulemaking pro-
cess consists of assessing the overall costs and benefits attendant the rule.32

These legislative rules differ from the guidance rules used to constrain lower-
level officials’ discretion, direct their decision-making, or advise the public. Leg-
islative rules that are the product of notice-and-comment rulemaking have the
“force of law”: violation of the rule itself is unlawful, even if the action does not
violate the statutory standard implemented by the rule. Rules in the second sense,
guidance rules that merely constrain lower-level officials’ discretion or provide
guidance to the public, do not replace the legal standard enunciated in the stat-
ute upon which they elaborate. They lack the force of law; an agency’s sanction
against violators of such guidance rules can be upheld only if the agency can show
that the rule-violator’s conduct has transgressed the underlying statute.

For example, a federal statute grants the Federal Trade Commission (FTC) the
power to enjoin unfair and deceptive trade practices. The FTC could issue a guid-
ance rule specifying that gas station operators’ failure to post octane ratings on
gas pumps is inherently deceptive. The guidance might well be based on exten-
sive consumer research the FTC has conducted. If the FTC promulgates a guidance
rule, each time it goes to court to enforce an order it enters against a rule-violator,
it will have to prove that the gas station’s failure to post octane ratings was de-
ceptive. Wenn, Jedoch, the FTC promulgates a force of law rule, das ist, a “legislative
rule,” when it goes to court to enforce an order against a violator, it need merely
show that the octane ratings were not posted. The gas station operator can no lon-
ger mount a defense asserting that its customers were not confused or deceived by
the lack of posted octane ratings.

Legislative rules can be analogized to algorithms. The human lawgiver cor-
relates a trait with a particular mischief the legislative rule is designed to address.
The correlation may often be imperfect; but rules are inherently imperfect. Wie-
ever, we would probably not accept laws based on a nonintuitive correlation of
traits to the mischief to be prevented, even if the correlation turns out to be a pret-
ty good predictor. Even with respect to legislatures, whose legislative judgments
reflected in economic and social legislation are given a particularly wide berth,
courts purport to require some “rational basis” for associating the trait that is tar-
geted with the mischief to be prevented.33 The demands for some intuitive con-
nection, some cause-and-effect relationship between a trait targeted and a harm
to be prevented, is even greater when agencies promulgate regulations.34 And to
carry the analogy further to guidance rules, it is not clear at all that a nonintuitive
connection would be allowed as a guidance rule used to direct the resolution of
agency adjudications.

The critical internal challenge for government bureaucracies is synchronizing line-level decision-makers, both with the intended agency policy and
with each other.35 Internal review processes can serve this function, Aber
such processes still require coordination at the review level and involve duplica-
tion of effort. The agency may seek to reduce decision-making metrics to written
rules (either legislative rules or guidance rules).36 Agencies also expend resources
on training, and retraining, its lower-level employees. And sometimes agency lead-
ership may encounter bureaucratic resistance, yet another reason some line-level
employees’ determinations might not comport with the leadership’s policy.37

Agencies’ internal structures reflect the fundamental tension between rule-
like and standard-like decision metrics. Rules are decision metrics that do not
vary significantly depending on the circumstances.38 Rules facilitate decisional
consistency, assist line officials’ efforts to follow agency policy, and allow supe-
riors to more easily detect departures. But rules are invariably over-inclusive or
under-inclusive: they sweep within them nonproblematic cases or fail to capture
problematic cases, or both.39 And the simpler the rule, the larger the subset of un-
desirable results.

For example, due to the increasing heart attack risks as people age, in 1960, the
Federal Aviation Administration promulgated the following rule: “No individual
who has reached his 60th birthday shall be utilized or serve as a pilot on any air-
craft while engaged in air carrier operations.”40 The rule is over-inclusive: viele
pilots over sixty have a very low heart attack risk, far lower than that of many pi-
lots under sixty. A case-by-case determination based on medical records would
surely have led to a more calibrated response. Even a rule that took into account
not only age, but multiple health factors would produce a smaller number of deci-
sions in which relatively risk-free pilots would be grounded.
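
The over-inclusiveness point can be made computationally: a bright-line rule classifies on a single trait, while a finer-grained metric shrinks the set of wrongly grounded pilots. A toy illustration with invented risk numbers:

```python
# Toy illustration of rule over-inclusiveness with invented numbers.
# pilots: (age, hypothetical heart-attack risk score)
pilots = [(45, 0.9), (58, 0.2), (62, 0.1), (65, 0.8), (70, 0.3)]

# Bright-line rule: ground everyone sixty or older.
grounded_by_rule = [p for p in pilots if p[0] >= 60]

# Finer-grained metric: ground only those above a risk threshold.
grounded_by_risk = [p for p in pilots if p[1] > 0.5]

# Over-inclusive errors: low-risk pilots the age rule grounds anyway.
over_inclusive = [p for p in grounded_by_rule if p[1] <= 0.5]
print("grounded by age rule:", grounded_by_rule)
print("grounded by risk metric:", grounded_by_risk)
print("wrongly grounded under the age rule:", over_inclusive)
```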

Some of a rule’s inherent limitations can be counteracted by according discre-
tion to line employees. Reintroducing, or retaining, elements of discretion can be
particularly important when decisions must be based on circumstances or fac-
tors that either: 1) were not envisioned by rule-drafters (rules can quickly be un-
dermined by new scientific, economic, social, or other developments); or 2) can-
not be quantified.41 So agency leadership must accord low-level decision-makers
some discretion.42 But what if rules could be fine-grained, to take virtually in-
numerable factors into account? The subset of wrong decisions would become
narrower.43

Agencies must also contend with various external forces. Agencies’ legitimacy
rests upon their responsiveness to the elected officials of the executive and legis-
lative branches, namely the president and Congress. The president and Congress
must retain the capacity to assert control over agencies, through the exercise of
the executive authority and congressional oversight, among other things, and change agency
behavior by enactment of statutes modifying the law. But even such legislative and
executive oversight is insufficient to ensure agency fidelity to law.44 Thus, agency
decisions are generally subject to judicial review as well, to ensure that agencies
remain faithful to their statutory mandates. Nonetheless, judicial review is gener-
ally deferential. On-the-record adjudications need only be based upon “substan-
tial evidence.” Less formal adjudications and regulations need only satisfy the
“arbitrary and capricious” standard of review.45

The public, and both the relevant regulated entities and the beneficiaries,
must have notice of their obligations. Regulated entities must be able to pre-
dict how agencies will decide cases, and beneficiaries must also be able to deter-
mine when a challenge to a regulated entity’s actions is warranted. Darüber hinaus, NEIN
agency can long prosper without the general support of the public, or at least key
constituencies.46

What are the implications of governments’ use of AI/ML? Use of AI/ML algorithms will increase uniformity of adjudicatory and enforcement
decisions, and their more fine-grained metrics should minimize the
subset of incorrect decisions.47 But agencies will face a basic decision: should the
algorithms’ decisions be binding or nonbinding?

If binding, many fewer line officials, that is, bureaucrats, will be needed to im-
plement the program on the ground, and those that remain may well experience a
decline in status within the agency. But in embracing AI/ML algorithms, agency
leadership may merely have traded one management problem for another: manag-
ing the data specialists assuming a more central role in the agency’s implementa-
tion of its programs. They will make decisions about the algorithm, the data used to
train it, and the tweaks necessary to keep it current. Nonexpert leadership may feel
even less capable of managing data scientists than the line officials they replaced.

If the algorithm is nonbinding, the key question will be when to permit human
intervention. There is reason to believe that permitting overrides will produce no
better results than relying on the algorithm itself.48 Of course, agency leadership
may disagree. In that case, the challenge will be to structure human intervention
so as to avoid reintroducing the very problem the AI/ML algorithm was created to
solve: unstructured, intuitive discretion leading to discrepant treatment of regu-
lated entities and beneficiaries.

The uniformity wrought by AI/ML algorithms will come at the cost of increas-
ing the opacity of the decision-making criteria and, potentially, reducing the in-
tuitiveness of the decision metric.

Explainability is critical within the agency. It is critical to any attempt to have a line-level, or upper-level, override system. If one does not know
the weight the AI/ML algorithm accorded various criteria, how is one sup-
posed to know whether it gave that consideration appropriate weight? At the
same time, the algorithms’ opacity might lead to staff resistance to such AI/ML
decisions.49 Lack of explainability poses challenges to agency managers seeking
to retain control over policy, because not even the agency head can reliably dis-
cern with precision the policy the AI/ML algorithm applies in producing its deci-
sions. At best, agency leadership will be dependent on computer and data process-
ing specialists as critical intermediaries in attempting to manage the algorithm.

An AI/ML algorithm’s lack of explainability impedes the agency’s navigation of
its external environment as well. It complicates relationships with Congress and
the components of the Executive Office of the President (EOP), like the OMB,
with which the agency interacts. The more opaque and less intuitive the explana-
tion of the AI/ML’s metrics and decision-making process, the harder it will be to
convince members of Congress and the relevant EOP components of the sound-
ness of the agency’s decisions. And the more fine-grained the nonintuitive dis-
tinctions between applicants for assistance or regulated entities, the more those
distinctions will be viewed as literally arbitrary (that is, turning on inexplicable
distinctions) Und, well, bureaucratic. The reaction of the general public will pre-
sumably be even more extreme than that of elected leaders and their staff.

But let us turn to the implications of AI/ML’s lack of explainability for judicial
review. While judicial review of agency decision-making is deferential, it is hardly
perfunctory.50 In many circumstances, agency decisions are a type of prediction,
even though they may not be framed in that way. Does licensing this pilot pose a
risk to public safety? Is this applicant for benefits unable to obtain a job? Diese
are questions in which AI/ML algorithms excel. But, as noted earlier, some agency
decisions require a determination regarding past events. Sometimes the facts, one
might say “the data,” are in dispute. Two people might have a different account of
a key conversation between a management official and an employee central to de-
termining whether an unfair labor practice occurred. Current AI/ML algorithms
are unlikely to provide much assistance in resolving such a contest.

In addition, if a statute is applicable, an AI/ML algorithm might be incapable
of producing a decision explaining the result to the satisfaction of a court. The Su-
preme Court’s decision in Allentown Mack Sales & Service v. NLRB provides a cau-
tionary tale. There, a company refused to bargain with a union, asserting a “rea-
sonable doubt” that a majority of its workforce continued to support the union. In
practice, the National Labor Relations Board (NLRB) required employers making
such an assertion to prove the union’s loss of majority support. The Court held
that an agency’s application of a rule of conduct or a standard of proof that di-
verged from the formally announced rule or standard violated basic principles of
adjudication.51 But that is what an AI/ML algorithm does: it creates a standard
different from that announced, which may well be nonintuitive, and then consis-
tently applies it sub rosa. AI/ML algorithms reveal that certain data inputs are com-
monly associated with particular outcomes to which we accord legal significance,
but fail to show the basis for believing that the correlation held in a particular cir-
cumstance that occurred in the past. In other words, AI/ML can make predictions
about the future, but offers little insight into how the record in the particular case
leads to particular conclusions with respect to legally significant historical facts.

And often, in close cases, an agency can support either decision open to it. Is
the reviewing court to be satisfied with reversing only “clearly erroneous” AI/ML-
produced decisions? Some have suggested that courts review the process for deci-
sion-making rather than the outcomes produced by AI/ML algorithms.52

That approach certainly has appeal, but how is the nonexpert (perhaps most-
ly technophobic) judiciary supposed to review the AI/ML algorithms? Courts
faced a similar dilemma when Congress created new regulatory agencies for com-
plex scientific and technological subjects, accorded other agencies more rule-
making power, and permitted more pre-enforcement challenges to regulations.
The courts’ response was a “hard-look” approach, ensuring that the relevant fac-
tors were considered, irrelevant factors were not, and that public participation
was guaranteed.53

Explainability, in another sense, is also important with respect to legislative
rules. Let us say that an agency seeks to make explicit what is implicit in an AI/
ML algorithm. Assume an AI/ML algorithm finds a correlation between long-haul
truck drivers involved in accidents and 1) drivers’ credit scores; 2) certain genom-
ic markers; and 3) a family history of alcohol abuse. The agency could license or
de-license based on a grid capturing the correlation. How would such a rule fare?
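
To make the hypothetical concrete, the grid might be encoded as below; every variable and cutoff is invented, which is precisely the problem the next paragraphs identify: nothing in the rule explains why these traits should govern licensing.

```python
# Encoding the hypothetical licensing grid from the text. All variables and
# thresholds are invented; the correlation, not any causal story, drives them.
def license_decision(credit_score: int, has_genomic_marker: bool,
                     family_alcohol_history: bool) -> bool:
    """Return True to license, False to de-license, per the AI/ML-found grid."""
    risk_points = 0
    if credit_score < 600:
        risk_points += 1
    if has_genomic_marker:
        risk_points += 1
    if family_alcohol_history:
        risk_points += 1
    return risk_points < 2  # license unless two or more correlates are present

print(license_decision(credit_score=580, has_genomic_marker=True,
                       family_alcohol_history=False))  # False: de-licensed
```
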
First, correlation does not equal causation. Some additional factor(s) more in-
tuitively relevant to a driver’s dangerousness might be propelling the relationship
between it and the three variables. Given the basic requirement for some logical
relationship between a regulation and its purposes, courts will surely demand ei-
ther some intuitive relationship or nonintuitive causal relationship between the vari-
ables and truck driver dangerousness. After all, even if there is a fairly high cor-
relation between the variables and truck driver dangerousness, many individuals
will be excluded from truck driving due to apparently irrelevant factors. The agen-
cy will presumably have to provide the intuitive or causal relationship for the reg-
ulation to avoid its invalidation as “arbitrary and capricious.”54

The example points to another problem. We want to base regulatory limita-
tions (or provision of benefits) on people’s conduct, not their traits, either im-
mutable, like genomic markers or family history, or mutable but irrelevant, like
credit score. One’s reward or punishment by the government should turn on con-
duct to be encouraged or deterred, not accidents of birth. And to the extent the
correlation involves a mutable marker, potential truck drivers will focus on im-
proving their performance on a characteristic that does not improve their driving,
like raising their credit score, rather than improving their capabilities as drivers.
And, of course, some characteristics, like race and gender, cannot be used, unless
the agency can proffer a strong justification that is not based on treating an indi-
vidual as sharing the characteristic of his or her group.55

Third, the function of notice-and-comment requirements would be under-
mined if the agency can conclude that its process for developing the algorithm is
sound, and thus that the correlation is valid, even though the reason the correla-
tion makes sense remains a mystery. Commenters themselves would have to in-
vestigate the correlation to either prove it is coincidental (essentially disproving
all possible reasons for the existence of the correlation) or identify the underlying
causes driving the correlation.

In sum, even if an agency reveals its AI/ML algorithms’ magic, by attempting
to capture an AI/ML-discovered correlation in a legislative rule, the agency’s at-
tempt to promulgate a counterintuitive rule will likely fail.

Briefly turning to agency enforcement efforts, courts have recognized, particu-
larly in the Freedom of Information Act (FOIA) Kontext, the inherent tension be-
tween making sure there is no “secret law” and preventing circumvention of the
law.56 Transparency may mean that the enforcement criteria will become the ef-
fective rule, replacing the law being enforced. And given the complexity of AI/ML
algorithms, transparency could have a disparate effect depending on the wealth
and sophistication of the regulated entity.57

Nonetheless, to the extent transparency is desirable, it will be more difficult
to achieve when the AI/ML algorithm is proprietary, as the FOIA probably allows
the agency to withhold such information and the government may feel compelled
to do so.58

Technology tends to make fools of those who venture predictions. Nevertheless, the potential that AI/ML will reduce the number and status of
line-level employees is present. But before AI/ML makes significant in-
roads, agencies will have to grapple with making AI/ML algorithms’ “black box”
magic more transparent and intuitive.

about the author

Bernard W. Bell is Professor of Law and Herbert Hannoch Scholar at Rutgers Uni-
Vielseitigkeit. He has published in such journals as Stanford Law Review, Texas Law Review,
and North Carolina Law Review.

Endnoten

1 See David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Floren-
tino Cuéllar, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies
(Washington, D.C.: Administrative Conference of the United States, 2020).

2 Mark Bovens and Stavros Zourdis, “From Street-Level to System-Level Bureaucracies:
How Information and Communication Technology Is Transforming Administrative
Discretion and Constitutional Control,” Public Administration Review 62 (2) (2002): 174.

3 Max Weber, Economy and Society: An Outline of Interpretive Sociology, ed. Guenther Roth and

Claus Wittich (Berkeley: University of California Press, 1978), 975.

4 The concerns about government data storage and its privacy implications were explored
early on by an influential government advisory committee. See U.S. Department of
Health, Education, and Welfare, Records, Computers, and the Rights of Citizens, Report of the
Secretary’s Advisory Committee on Automated Personal Data Systems (Washington,
D.C.: U.S. Department of Health, Education, and Welfare, 1973).

5 Paul Schwartz, “Data Processing and Government Information: The Failure of the Amer-
ican Legal Response to the Computer,” Hastings Law Journal 43 (5) (1992): 1321, 1342.
6 Even AI/ML algorithms remain dependent on programmers to a certain extent. David
Lehr and Paul Ohm, “Playing with the Data: What Legal Scholars Should Learn about
Machine Learning,” UC Davis Law Review 51 (2) (2017): 653, 672–702. This includes un-
supervised algorithms, which are dependent upon the choice of data set to train them.
7 Andrew Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines,” Ford-
ham Law Review 87 (3) (2018): 1085, 1096–1099. This is eerily similar to the derisive car-
icature of legal realism as suggesting that judges’ decisions may turn on the quality of
their breakfast. See Alex Kozinski, “What I Ate for Breakfast and Other Mysteries of
Judicial Decision Making,” Loyola of Los Angeles Law Review 26 (4) (1993): 993.

8 Lehr and Ohm, “Playing with the Data,” 706; Cary Coglianese and David Lehr, “Regu-
lating by Robot: Administrative Decision Making in the Machine-Learning Era,” The
Georgetown Law Journal 105 (5) (2017): 1147, 1159–1160; and Engstrom et al., Government by
Algorithm, 28.

9 See Lehr and Ohm, “Playing with the Data,” 707–710 (describing four potential types of

explanations for the results produced by AI/ML algorithms).

10 John D. Kelleher and Brendan Tierney, Data Science (Cambridge, Mass.: The MIT Press,

2018), 72–74; and Engstrom et al., Government by Algorithm, 12.

11 Kelleher and Tierney, Data Science, 109.
12 Aziz Z. Huq, “The Right to a Human Decision,” Virginia Law Review 106 (3) (2020): 611, 634.
13 Ibid.
14 Engstrom et al., Government by Algorithm.
15 Felix F. Bajandas and Gerald K. Ray, “Implementation and Use of Electronic Case Man-
agement Systems in Federal Agency Adjudication,” Administrative Conference Recom-
mendation 2018-3 (Washington, D.C.: Administrative Conference of the United States,
2018), 44–49.

16 Engstrom et al., Government by Algorithm, 59–64.
17 Natural Resources Defense Council v. EPA, 954 F.3d 150, 153 (2d Cir. 2020).
18 Ibid.
19 Coglianese and Lehr, “Regulating by Robot,” 1174–1175.
20 Heckler v. Chaney, 470 U.S. 821 (1985); and Morrison v. Olson, 487 U.S. 654, 727–728, 731–732

(1988) (Scalia dissenting).

21 See Environmental Defense Fund v. Ruckelshaus, 439 F.2d 584, 592 (D.C. Cir. 1971); Nor-Am Agri
Prods. v. Hardin, 435 F.2d 1151 (7th Cir. 1970) (en banc); and Dow Chemical v. Ruckelshaus, 477
F.2d 1317 (8th Cir. 1973).

22 Jerry L. Mashaw, Bureaucratic Justice: Managing Social Security Disability Claims (New Haven,

Conn.: Yale University Press, 1983).

23 H. Laurence Ross, Settled Out of Court: The Social Process of Insurance Claims Adjustment, 2nd ed.

(New York: Routledge, 1980).

24 Matthew Diller, “Entitlement and Exclusion: The Role of Disability in the Social Wel-

fare System,” UCLA Law Review 44 (1998): 361, 372–374.

25 See Bowen v. Yuckert, 482 U.S. 137, 140–142 (1987).
26 Heckler v. Campbell, 461 U.S. 458, 460–462 (1983).
27 Lehr and Ohm, “Playing with the Data,” 670–672.
28 Say A’s computer screensaver is blue 90 percent of the time. But yesterday, B had a view
of the computer screen at a particular time, say 9:33 pm. If we wanted to predict the
color of A’s screen in the future, reliance on the 90 percent algorithm is fine. But it
would seem odd to do so if the issue is whether the screen was blue at 9:33 pm yester-
day, given the availability of a witness. In short, reliance on the rule can never tell one
when the rule is incorrect in a particular situation. See Huq, “The Right to a Human
Decision,” 679.

29 Restatement of the Law, Third, Torts: Liability for Physical and Emotional Harm, Volume 1 (Phila-

delphia: American Law Institute, 2010), §28, comm. c(3) & c(4).

30 Coglianese and Lehr, “Regulating by Robot,” 1172–1173.
31 Some can be promulgated even less formally. See 5 U.S.C. §553.
32 Jeffrey S. Lubbers, A Guide to Federal Agency Rulemaking, 6th ed. (Chicago: American Bar

Association, 2018), 252–263.

33 Bernard W. Bell, “Legislative History without Legislative Intent: The Public Justification
Approach to Statutory Interpretation,” Ohio State Law Journal 60 (1) (1999): 1, 35 (dis-
cussing the rational basis test). Granted, such rationality rarely results in invalidation
of federal statutes because of the federal judiciary’s concerns about its institutional po-
sition in a democracy. Ibid., 28–30.

34 Motor Vehicle Manufacturers Association v. State Farm Insurance, 463 U.S. 29, 43, n. 9 (1983).
35 Sun Ray Drive-In Dairy, Inc. v. Oregon Liquor Control Commission, 517 P.2d 289, 293 (Or. Ct.

App. 1973).

36 Sun Ray Drive-In Dairy, 517 P.2d at 293 (discussing necessity of “written standards and

policies”).

37 Jennifer Nou, “Civil Servant Disobedience,” Chicago Kent Law Review 94 (2) (2019): 349,
349–350, n. 1–4. Bureaucratic resistance is hardly new. President Truman reportedly
quipped that if General Eisenhower became president: “He’ll sit here, and he’ll say,
‘Do this! Do that!’ And nothing will happen. Poor Ike–it won’t be a bit like the Army.”
Richard E. Neustadt, Presidential Power: The Politics of Leadership (New York: John Wiley
& Sons, 1960).

38 See Frederick Schauer, Playing by the Rules: A Philosophical Examination of Rule-Based Decision-
Making in Law and in Life (Oxford: Clarendon Press, 1993), 48–52, 86–87, 104, n. 35.

39 Ibid., 31–34.
40 Air Line Pilots v. Quesada, 276 F.2d 892 (2d Cir. 1961), cert. denied, 366 U.S. 962 (1961).
41 Yetman v. Garvey, 261 F.3d 664, 679 (7th Cir. 2001) (observing that “a strict age sixty cutoff,

without exceptions, [might be] better suited to 1959 than to 2001”).

42 Rule-like adjudicatory criteria may need to have exceptions–Heckler v. Campbell, 461 U.S.,
467, n. 11; and U.S. v. Storer Broadcasting, 351 U.S. 192 (1956)–or a robust regime permit-
ting waivers for situations in which the rule is just not appropriate. See Alfred Aman,
“Administrative Equity: An Analysis of Exceptions to Administrative Rules,” Duke Law
Zeitschrift 31 (2) (1982): 277; and Aaron L. Nielson, “Waivers, Exemptions, and Prosecu-
torial Discretion: An Examination of Agency Nonenforcement Practices,” J. Reuben
Clark Law School, Brigham Young University Research Paper No. 17-27 (Washington,
D.C.: Administrative Conference of the United States, 2017).

43 See Schauer, Playing by the Rules, 83, 155.
44 The political branches of government may not be able to respond to every agency action
or may desire or even encourage agencies’ unfaithfulness to the intent of the legislature
that enacted the statute.

45 5 U.S.C. §706(2).
46 For a general discussion of the external environment of agencies and its implications for
agency decision-making, see Rachel Augustine Potter, Bending the Rules: Procedural Poli-
ticking in the Bureaucracy (Chicago: University of Chicago Press, 2019); and Sun Ray Drive-
In Dairy v. Oregon Liquor Control Commission, 517 P.2d at 293–294.

47 This all assumes that circumstances do not fundamentally change. Ali Alkhatib and Mi-
chael Bernstein, “Street-Level Algorithms: A Theory at the Gaps Between Policy and
Discretion,” CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in
Computing Systems Paper No. 530, May 2019. However, AI/ML algorithms are dynamic;
they can continue to revise themselves in light of later data. Coglianese and Lehr, “Reg-
ulating by Robot,” 1159. Legislative rules are notorious for their “ossification.” See,
for example, Lubbers, A Guide to Federal Agency Rulemaking, 404–405. The 1978 grid is
still used by the SSA. While AI/ML algorithms offer the prospect of better modifying
decision rules in light of changes in the data, one can anticipate judicial discomfort
with nontransparent, nonintuitive changes in decision rules outside the normal
“notice-and-comment” procedures applicable to the revision of rules. See, for exam-
ple, Appalachian Power Co. v. EPA, 208 F.3d 1015, 1019 (D.C. Cir. 2000).

48 Huq, “The Right to a Human Decision,” 665–667.
49 Engstrom et al., Government by Algorithm, 28 (reporting SEC line-level enforcement staff
seeking more information regarding AI/ML designation of certain investment advisors
as “high risk”).

50 Michael Asimow, ed., A Guide to Federal Agency Adjudication (Chicago: American Bar Asso-

ciation, 2003), 85, n. 9–10.

51 The Court noted that “the consistent repetition of [such a] breach can hardly mend it.”

Allentown Mack Sales & Service v. NLRB, 522 U.S. 359, 374 (1998).

52 Huq, “The Right to a Human Decision,” 675; and Lehr and Ohm, “Playing with the Data,”

710–716.

53 See Natural Resources Defense Council v. U.S. Nuclear Regulatory Commission, 547 F.2d 633, 655–
657 (D.C. Cir. 1976) (Bazelon concurring) (“in highly technical areas, where judges are
institutionally incompetent to weigh evidence for themselves, a focus on agency proce-
dures will prove less intrusive, and more likely to improve the quality of decisionmak-
ing, than judges ‘steeping’ themselves ‘in technical matters’”).

54 Here I disagree with Coglianese and Lehr, “Regulating by Robot,” 1202.
55 Coglianese and Lehr believe this can be overcome, if only the AI/ML algorithm is suffi-
ciently fine-tuned and takes multiple factors into account. Ebenda., 1201. I am far more
skeptical. Zum Beispiel, considering life expectancy charts that use race as a variable
would be unconstitutional, even if an AI/ML algorithm were used to make the tables
more complex and fine-tuned by taking numerous other variables into account.

56 See Dirksen v. HHS, 803 F.2d 1456, 1458–1459, 1461–1462 (9th Cir. 1986) (Ferguson dissent-
ing) (the majority’s approach makes “the risk of circumvention . . . indistinguishable
from the prospect of enhanced compliance”). The high-2 exemption that Dirksen ap-
plied was overturned in Milner v. Department of the Navy, 562 U.S. 562 (2011). The FOIA
exemption for law enforcement records that poses a risk of circumvention remains.

57 Engstrom et al., Government by Algorithm, 86–87.
58 Coglianese and Lehr, “Regulating by Robot,” 1210–1211. See also Food Marketing Institute v.
Argus Leader Media, 139 S.Ct. 2356 (June 24, 2019) (expanding FOIA exemption 4). Howev-
er, the ACUS study reported that a majority of the AI/ML algorithms in their survey were
developed by the government in-house. Engstrom et al., Government by Algorithm, 18.
