Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War

Avi Goldfarb and Jon R. Lindsay

There is an emerging policy consensus that artificial intelligence (AI) will transform international politics. As stated in the 2021 report of the U.S. National Security Commission on AI, "The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them."1 A lack of clarity over basic concepts, however, complicates an assessment of the security implications of AI. AI has multiple meanings, ranging from big data, machine learning, robotics, and lethal drones, to a sweeping "fourth industrial revolution."2

Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the University of Toronto, and a research associate at the National Bureau of Economic Research. Jon R. Lindsay is Associate Professor at the School of Cybersecurity and Privacy, with a secondary appointment at the Sam Nunn School of International Affairs, both at the Georgia Institute of Technology.

The authors are grateful for research assistance from Morgan MacInnes and constructive feedback from Andrea Gilli, Mauro Gilli, James Johnson, Ryder McKeown, Heather Roff, members of the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and the anonymous reviewers. This project was supported by funding from the Social Sciences and Humanities Research Council of Canada (File number 435-2017-0041) and the Sloan Foundation. The authors presented a more limited version of this general argument in a previous report published by the Brookings Institution: Avi Goldfarb and Jon R. Lindsay, "Artificial Intelligence in War: Human Judgment as an Organizational Strength and a Strategic Liability" (Washington, D.C.: Brookings Institution Press, November 30, 2020), https://www.brookings.edu/research/artificial-intelligence-in-war-human-judgment-as-an-organizational-strength-and-a-strategic-liability/.

1. National Security Commission on Artificial Intelligence [NSCAI], Final Report (Washington, D.C.: NSCAI, March 2021), p. 7, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf. Russian President Vladimir Putin makes the same point with more flair: "Whoever becomes the leader in this sphere will become the ruler of the world," quoted in Radina Gigova, "Who Putin Thinks Will Rule the World," CNN, September 2, 2017, https://www.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html; and Keith Dear, "Will Russia Rule the World through AI? Assessing Putin's Rhetoric against Russia's Reality," RUSI Journal, Vol. 164, No. 5–6 (2019), pp. 36–60, https://doi.org/10.1080/03071847.2019.1694227. Many U.S. officials, such as the chief technology officer of the National Geospatial Intelligence Agency, draw a stark conclusion: "If the United States refuses to evolve, it risks giving China or some other adversary a technological edge that Washington won't be able to overcome," quoted in Anthony Vinci, "The Coming Revolution in Intelligence Affairs," Foreign Affairs, August 31, 2020, https://www.foreignaffairs.com/articles/north-america/2020-08-31/coming-revolution-intelligence-affairs. On Chinese ambitions for an "intelligentized" military, see Elsa B. Kania, "Chinese Military Innovation in the AI Revolution," RUSI Journal, Vol. 164, No. 5–6 (2019), pp. 26–34, https://doi.org/10.1080/03071847.2019.1693803.

International Security, Vol. 46, No. 3 (Winter 2021/22), pp. 7–50, https://doi.org/10.1162/isec_a_00425
© 2022 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.



Significant investment in AI and the imaginaries of science fiction only add to the confusion.

In this article we focus on machine learning, which is the key AI technology that receives attention in the press, in management, and in the economics literature.3 We leave aside debates about artificial general intelligence (AGI), or systems that match or exceed human intelligence.4 Machine learning, or "narrow AI," by contrast, is already widely in use. Successful civil applications include navigation and route planning, image recognition and text translation, and targeted advertising. Michael Horowitz describes AI as "the ultimate enabler" for automating decision-making tasks in everything from public administration and commercial business to strategic intelligence and military combat.5 In 2018, the Department of Defense observed that "AI is poised to transform every industry, and is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others."6 We would be surprised, however, if AI transformed all these activities to the same degree for all actors who use it.

One of the key insights from the literature on the economics of technology is that the complements to a new technology determine its impact.7 AI, from this perspective, is not a simple substitute for human decision-making. Rapid advances in machine learning have improved statistical prediction, but prediction is only one aspect of decision-making.

2. On the potential impacts of these technologies, see the recent special issue edited by Michael Raska et al., "Introduction," Journal of Strategic Studies, Vol. 44, No. 4 (2021), pp. 451–455, https://doi.org/10.1080/01402390.2021.1917877.
3. See, for example, Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Cambridge, Mass.: Harvard Business Review Press, 2018), p. 24; and Jason Furman and Robert Seamans, "AI and the Economy," in Josh Lerner and Scott Stern, eds., Innovation Policy and the Economy, Vol. 19 (Chicago: University of Chicago Press, 2018), pp. 161–191.
4. Generally, see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014). We comment briefly on AGI in the conclusion.
5. Michael C. Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power," Texas National Security Review, Vol. 1, No. 3 (May 2018), p. 41, https://doi.org/10.15781/T2639KP49.
6. Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity (Washington, D.C.: U.S. Department of Defense, February 12, 2019), p. 5, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF. On potential military applications, see NSCAI, Final Report; and Daniel S. Hoadley and Kelley M. Sayler, Artificial Intelligence and National Security, CRS Report R45178 (Washington, D.C.: Congressional Research Service, November 21, 2019), https://crsreports.congress.gov/product/pdf/R/R45178/7.
7. See, for example, Timothy F. Bresnahan, Erik Brynjolfsson, and Lorin M. Hitt, "Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-Level Evidence," Quarterly Journal of Economics, Vol. 117, No. 1 (February 2002), pp. 339–376, https://doi.org/10.1162/003355302753399526; and Shane Greenstein and Timothy F. Bresnahan, "Technical Progress and Co-invention in Computing and in the Uses of Computers," Brookings Papers on Economic Activity: Microeconomics (Washington, D.C.: Brookings Institution Press, 1996), pp. 1–83.


Two other important elements of decision-making—data and judgment—represent the complements to prediction. Just as cheaper bread expands the market for butter, advances in AI that reduce the costs of prediction are making its complements more valuable. AI prediction models require data, and accurate prediction requires more and better data. Quality data provide plentiful and relevant information without systemic bias. Data-driven machine prediction can efficiently fill in information needed to optimize a given utility function, but the specification of the utility function ultimately relies on human judgment about what exactly should be maximized or minimized. Judgment determines what kinds of patterns and outcomes are meaningful and what is at stake, for whom, and in which contexts. Clear judgments are well specified in advance and agreed upon by relevant stakeholders. When quality data are available and an organization can articulate clear judgments, then AI can improve decision-making.

We argue that if AI makes prediction cheaper for military organizations, then data and judgment will become both more valuable and more contested. This argument has two important strategic implications. First, the conditions that have made AI successful in the commercial world—quality data and clear judgment—may not be present, or present to the same degree, for all military tasks. In military terms, judgment encompasses command intentions, rules of engagement, administrative management, and moral leadership. These functions cannot be automated with narrow AI technology. Increasing reliance on AI, therefore, will make human beings even more vital for military power, not less. Second, the importance of data and judgment creates incentives for strategic competitors to improve, protect, and interfere with information systems and command institutions. As a result, conflicts over information will become more salient, and organizational coordination will become more complex. In contrast with assumptions about rapid robot wars and decisive shifts in military advantage, we expect AI-enabled conflict to be characterized by environmental uncertainty, organizational friction, and political controversy. The contestation of AI complements, therefore, is likely to unfold differently than the imagined wars of AI substitutes.8

Many hopes and fears about AI recapitulate earlier ideas about the information technology revolution in military affairs (RMA) and cyberwarfare.9
8. See, for example, Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018); Michael C. Horowitz, "When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability," Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 764–788, https://doi.org/10.1080/01402390.2019.1621174; James Johnson, "Artificial Intelligence and Future Warfare: Implications for International Security," Defense & Security Analysis, Vol. 35, No. 2 (2019), pp. 147–169, https://doi.org/10.1080/14751798.2019.1600800; and Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (New York: Oxford University Press, 2021).


Familiar tropes abound regarding the transformative effects of commercial innovation, the speed and danger of networked computation, the dominance of offense over defense, and the advantages of a rising China over a vulnerable United States. But skeptics have systematically challenged both the logic and empirical basis of these assumptions about the RMA10 and cyberwar.11 Superficially plausible arguments about information technology tend to ignore important organizational and strategic factors that shape the adoption and use of digital systems. As in the economics literature, an overarching theme in scholarship on military innovation is that technology is not a simple substitute for military power.12 Technological capabilities depend on complementary institutions, skills, and doctrines. Moreover, implementation is usually marked by friction, unintended consequences, and disappointed expectations. The RMA and cyber debates thus offer a cautionary tale for claims about AI. It is reasonable to expect organizational and strategic context to condition the performance of automated systems, as with any other information technology.13

AI may seem different, nevertheless, because human agency is at stake. Recent scholarship raises a host of questions about the prospect of automated decision-making.
9. See, for example, John Arquilla and David Ronfeldt, "Cyberwar Is Coming!," Comparative Strategy, Vol. 12, No. 2 (Spring 1993), pp. 141–165; Arthur K. Cebrowski and John H. Garstka, "Network-Centric Warfare: Its Origin and Future," Proceedings, U.S. Naval Institute, January 1998, pp. 28–35; William A. Owens and Edward Offley, Lifting the Fog of War (New York: Farrar, Straus and Giroux, 2000); and Richard A. Clarke and Robert K. Knake, Cyber War: The Next Threat to National Security and What to Do about It (New York: Ecco, 2010).
10. See, for example, Eliot A. Cohen, "Change and Transformation in Military Affairs," Journal of Strategic Studies, Vol. 27, No. 3 (2004), pp. 395–407, https://doi.org/10.1080/1362369042000283958; and Keith L. Shimko, The Iraq Wars and America's Military Revolution (New York: Cambridge University Press, 2010).
11. See, for example, Erik Gartzke, "The Myth of Cyberwar: Bringing War in Cyberspace Back Down to Earth," International Security, Vol. 38, No. 2 (Fall 2013), pp. 41–73, https://doi.org/10.1162/ISEC_a_00136; Jon R. Lindsay, "The Impact of China on Cybersecurity: Fiction and Friction," International Security, Vol. 39, No. 3 (Winter 2014/15), pp. 7–47, https://doi.org/10.1162/ISEC_a_00189; Brandon Valeriano and Ryan C. Maness, Cyber War versus Cyber Realities: Cyber Conflict in the International System (New York: Oxford University Press, 2015); and Rebecca Slayton, "What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment," International Security, Vol. 41, No. 3 (Winter 2016/17), pp. 72–109, https://doi.org/10.1162/ISEC_a_00267.
12. Reviews include Adam Grissom, "The Future of Military Innovation Studies," Journal of Strategic Studies, Vol. 29, No. 5 (2006), pp. 905–934, https://doi.org/10.1080/01402390600901067; and Michael C. Horowitz, "Do Emerging Military Technologies Matter for International Politics?" Annual Review of Political Science, Vol. 23 (May 2020), pp. 385–400, https://doi.org/10.1146/annurev-polisci-050718-032725. See, in particular, Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton, N.J.: Princeton University Press, 2004); Michael C. Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics (Princeton, N.J.: Princeton University Press, 2010); and Andrea Gilli and Mauro Gilli, "Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage," International Security, Vol. 43, No. 3 (Winter 2018/19), pp. 141–189, https://doi.org/10.1162/isec_a_00337.
13. On general patterns of information practice in war, see Jon R. Lindsay, Information Technology and Military Power (Ithaca, N.Y.: Cornell University Press, 2020).


How will war "at machine speed" transform the offense-defense balance?14 Will AI undermine deterrence and strategic stability,15 or violate human rights?16 How will nations and coalitions maintain control of automated warriors?17 Does AI shift the balance of power from incumbents to challengers or from democracies to autocracies?18 These questions focus on the substitutes for AI because they address the political, operational, and moral consequences of replacing people, machines, and processes with automated systems. The literature on military AI has focused less on the complements of AI, namely the organizational infrastructure, human skills, doctrinal concepts, and command relationships that are needed to harness the advantages and mitigate the risks of automated decision-making.19

14. Kenneth Payne, "Artificial Intelligence: A Revolution in Strategic Affairs?" Survival, Vol. 60, No. 5 (2018), pp. 7–32, https://doi.org/10.1080/00396338.2018.1518374; Paul Scharre, "How Swarming Will Change Warfare," Bulletin of the Atomic Scientists, Vol. 74, No. 6 (2018), pp. 385–389, https://doi.org/10.1080/00963402.2018.1533209; Ben Garfinkel and Allan Dafoe, "How Does the Offense-Defense Balance Scale?" Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 736–763, https://doi.org/10.1080/01402390.2019.1631810; and John R. Allen, Frederick Ben Hodges, and Julian Lindley-French, "Hyperwar: Europe's Digital and Nuclear Flanks," in Allen, Hodges, and Lindley-French, Future War and the Defence of Europe (New York: Oxford University Press, 2021), pp. 216–245.
15. Jürgen Altmann and Frank Sauer, "Autonomous Weapon Systems and Strategic Stability," Survival, Vol. 59, No. 5 (2017), pp. 117–142, https://doi.org/10.1080/00396338.2017.1375263; Horowitz, "When Speed Kills"; Mark Fitzpatrick, "Artificial Intelligence and Nuclear Command and Control," Survival, Vol. 61, No. 3 (2019), pp. 81–92, https://doi.org/10.1080/00396338.2019.1614782; Erik Gartzke, "Blood and Robots: How Remotely Piloted Vehicles and Related Technologies Affect the Politics of Violence," Journal of Strategic Studies, published online October 3, 2019, https://doi.org/10.1080/01402390.2019.1643329; and James Johnson, "Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?" Journal of Strategic Studies, published online April 30, 2020, https://doi.org/10.1080/01402390.2020.1759038.
16. Ian G.R. Shaw, "Robot Wars: US Empire and Geopolitics in the Robotic Age," Security Dialogue, Vol. 48, No. 5 (2017), pp. 451–470, https://doi.org/10.1177/0967010617713157; and Lucy Suchman, "Algorithmic Warfare and the Reinvention of Accuracy," Critical Studies on Security, Vol. 8, No. 2 (2020), pp. 175–187, https://doi.org/10.1080/21624887.2020.1760587.
17. Heather M. Roff, "The Strategic Robot Problem: Lethal Autonomous Weapons in War," Journal of Military Ethics, Vol. 13, No. 3 (2014), pp. 211–227, https://doi.org/10.1080/15027570.2014.975010; Heather M. Roff and David Danks, "'Trust but Verify': The Difficulty of Trusting Autonomous Weapons Systems," Journal of Military Ethics, Vol. 17, No. 1 (2018), pp. 2–20, https://doi.org/10.1080/15027570.2018.1481907; Risa Brooks, "Technology and Future War Will Test U.S. Civil-Military Relations," War on the Rocks, November 26, 2018, https://warontherocks.com/2018/11/technology-and-future-war-will-test-u-s-civil-military-relations/; and Erik Lin-Greenberg, "Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making," Texas National Security Review, Vol. 3, No. 2 (Spring 2020), pp. 56–76, https://dx.doi.org/10.26153/tsw/8866.
18. Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power"; Ben Buchanan, "The U.S. Has AI Competition All Wrong," Foreign Affairs, August 7, 2020, https://www.foreignaffairs.com/articles/united-states/2020-08-07/us-has-ai-competition-all-wrong; and Michael Raska, "The Sixth RMA Wave: Disruption in Military Affairs?" Journal of Strategic Studies, Vol. 44, No. 4 (2021), pp. 456–479, https://doi.org/10.1080/01402390.2020.1848818.
19. A notable exception is Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power." We agree with Horowitz that organizational complements determine AI diffusion, but we further argue that complements also shape AI employment, which leads us to different expectations about future war.


In this article, we challenge the assumptions behind AI substitution and explore the implications of AI complements. An army of lethal autonomous weapon systems may well be destabilizing, and such an army may well be attractive to democracies and autocracies alike. The idea that machines will replace warriors, however, represents a misunderstanding about what warriors actually do. We suggest that it is premature to forecast radical strategic consequences without first clarifying the problem that AI is supposed to solve. We provide a framework that explains how the complements of AI (i.e., data and judgment) affect decision-making. In general, automation is advantageous when quality data can be combined with clear judgments. But the consummate military tasks of command, fire, and maneuver are fraught with uncertainty and confusion. In contrast, more institutionalized tasks in administration and logistics tend to have copious data and clear goals, which are conducive to automation. We argue that militaries risk facing bad or tragic outcomes if they conflate these conditions by prematurely providing autonomous systems with clear objectives in uncertain circumstances. Conversely, for intelligence and operational tasks that have quality data but difficult judgments, teams of humans and machines can distribute the cognitive load of decision-making. We expect many if not most military AI tasks to fall into the latter category, which we describe as human-machine teaming. The net result, we argue, is that data and judgment will become increasingly valuable and contested, and thus AI-enabled warfare will tend to become more protracted and confusing.

We develop this argument in five parts. First, we provide an overview of our analytical framework, which distinguishes the universal process of decision-making from its variable political and technological context. This framework explains how data and judgment affect the human-machine division of labor in decision-making. Second, we describe how strategic and institutional conditions, which differ in business and military affairs, shape the quality of data and the difficulty of judgment. We then combine these factors into four different categories of AI performance of decision-making tasks, which we illustrate with commercial and military examples. The penultimate section discusses the strategic implications of data and judgment becoming more valuable. We conclude with a summary of the argument and further implications.

The Political and Technological Context of Decision-Making

Business and military organizations are similar in many ways, but they operate in very different circumstances. In figure 1, we analytically distinguish the AI-relevant similarities and differences by embedding an economic model of decision-making into an international relations framework.20

Figure 1. The Strategic Context of Decision-Making in Military Organizations

Decision-making both shapes and is shaped by the political and technological context. The strategic environment and organizational institutions affect the quality of data and judgment, respectively. At the same time, innovation in machine learning—largely driven by the civilian sector—lowers the cost of prediction.

20. Elements depicted in dashed lines in figure 1 are important for the overall story, but we will not discuss them in detail in this article. We include them to depict the full decision-making process—data, judgment, prediction, action—and to distinguish machine learning from other types of automation technology. We thus discuss robotics or drones that automate military action only to the extent that machine learning provides a decision input for them. Similar considerations about complementarity apply to drones as well; see Andrea Gilli and Mauro Gilli, "The Diffusion of Drone Warfare? Industrial, Organizational, and Infrastructural Constraints," Security Studies, Vol. 25, No. 1 (2016), pp. 50–84, https://doi.org/10.1080/09636412.2016.1134189. For the sake of parsimony, we also omit intelligence, surveillance, and reconnaissance (ISR) technologies that affect data, as well as information and communication technologies (ICTs) that help coordinate anything whatsoever. Again, the logic of complementarity applies more generally to ICTs; see Lindsay, Information Technology and Military Power. While our focus in this article is on theory building rather than testing, the same framework in figure 1 could be used to compare cases (e.g., in business, the military, or cross-nationally) that leverage similar AI technology but in different contexts.


political context: environment and institutions

We adopt standard international relations distinctions between the international system and domestic institutions.21 The "strategic environment" in figure 1 refers to the external problems confronting a military organization. To alter or preserve facts on the ground, through conquest or denial, a military needs information about many things, such as the international balance of power, diplomatic alignments and coalitions, geographical terrain and weather, the enemy's operational capabilities and disposition, and interactions with civil society. These external matters constitute threats, targets, opportunities, resources, and constraints for military operations. A military also needs information about internal matters, such as the capabilities and activities of friendly forces, but these are a means to an end.22 The strategic environment is ultimately what military data are about, and the structure and dynamics of the environment affect the quality of the data.

"Institutions and preferences" in figure 1 refer to the ways in which a military organization solves its strategic problems. This general category encompasses bureaucratic structures and processes, interservice and coalition politics, civil-military relations, interactions with the defense industry, and other domestic politics. Any of these factors might influence the goals and values of a military organization or the way it interprets a given situation. Organizational institutions embody preferences, whatever their source, which in turn affect the quality of judgment.23

21. Our modest goal here is to emphasize that strategic and organizational context shapes the performance of AI technology in military decision-making tasks. In this article we do not take a position on which of these contextual factors will be more influential in any given situation. We also omit heterogeneity within and dynamic interactions across these factors. Future research could disaggregate these factors to explore more specific hypotheses about AI and military power. Lindsay, Information Technology and Military Power, pp. 32–70, discusses a similar framework in more detail but with different nomenclature. The general analytic distinction between environment, organizations, and technology is also employed by Barry R. Posen, The Sources of Military Doctrine: France, Britain, and Germany between the World Wars (Ithaca, N.Y.: Cornell University Press, 1984). On the interaction between system- and unit-level factors in realist theory, see Gideon Rose, "Neoclassical Realism and Theories of Foreign Policy," World Politics, Vol. 51, No. 1 (October 1998), pp. 144–172, https://doi.org/10.1017/S0043887100007814; and Kevin Narizny, "On Systemic Paradigms and Domestic Politics: A Critique of the Newest Realism," International Security, Vol. 42, No. 2 (Fall 2017), pp. 155–190, https://doi.org/10.1162/ISEC_a_00296.
22. From a rationalist perspective, whereby preferences are exogenously specified, internal processes are instrumental to the fundamental task of knowing and influencing the world. In natural systems, of course, internal processes may become infused with value and endogenously shape organizational behavior, as discussed by Philip Selznick, TVA and the Grass Roots: A Study in the Sociology of Formal Organization (Berkeley: University of California Press, 1949); and Herbert Kaufman, The Forest Ranger: A Study in Administrative Behavior (Baltimore, Md.: Johns Hopkins University Press, 1960). Our framework allows for both possibilities by interacting environmental data and organizational preferences, whatever their source. Moreover, at the level of analysis of any given decision task, the "environment" can be analyzed as including "institutions" too. We omit these complexifying relationships because they only reinforce our general point about the importance of context.


Institutional structures and processes may produce coordination problems, political controversies, or interpretive difficulties that make it hard for a military organization to figure out what matters and why.

Moreover, as discussed below, we expect that the adoption of AI for some military decision tasks will (endogenously) affect the strategic environment and military institutions over time. As data and judgment become more valuable, strategic competitors will have incentives to improve and contest them. We thus expect conflicts over information to become more salient while organizational coordination will become more complex.

technological context: machine learning as prediction

The resurgence of interest in AI since the turn of the millennium has been driven by rapid advances in a subfield called machine learning. Machine learning techniques represent a different approach compared to "Good Old-Fashioned AI" (GOFAI).24 GOFAI emphasizes deductive theorem proving and search optimization. Machine learning, by contrast, is a form of statistical prediction, which is the process of using existing data to inductively generate missing information.25 While the term prediction often implies forecasting the future, pattern recognition and object classification are also forms of prediction because they fill in information about situations encountered for the first time. Machines can automate many prediction tasks that humans perform today (e.g., image recognition, navigation, and forecasting), and they can also increase the number, accuracy, complexity, and speed of predictions. This has the potential to alter human workflows. While machines may not make decisions, they can alter who makes what decisions and when. As machine learning lowers the cost of prediction, organizations are also innovating ways to improve data and judgments so that they can make better decisions.
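To make the notion of prediction as filling in missing information concrete, the following is a minimal sketch in Python using the scikit-learn library; the humidity and pressure numbers and the rain labels are invented purely for illustration and are not drawn from the article.

    from sklearn.linear_model import LogisticRegression

    # Existing data: past observations (features) and the outcomes that followed.
    X_train = [[0.9, 0.2], [0.3, 0.8], [0.8, 0.3], [0.2, 0.9]]  # e.g., [humidity, pressure]
    y_train = [1, 0, 1, 0]                                      # 1 = rain, 0 = no rain

    model = LogisticRegression().fit(X_train, y_train)

    # The fitted model fills in the missing information (rain or not) for a
    # situation encountered for the first time.
    print(model.predict([[0.7, 0.4]]))
    print(model.predict_proba([[0.7, 0.4]]))  # estimated probability of each class

The same logic underlies image recognition, navigation, and forecasting: the model generalizes from prior examples to supply information that has not yet been observed.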

Prediction usually involves generalizing from a set of training data to classify or synthesize new data.

23. Internal processes and external situations may affect the goals and values of an organization. Our framework is agnostic regarding the ultimate source of preferences. Whatever their source, preferences become embodied in institutions that proximally transmit goals and values to decision-makers. On the general debate between structuralists and institutionalists about political and doctrinal preferences, see Posen, The Sources of Military Doctrine; Alexander E. Wendt, "The Agent-Structure Problem in International Relations Theory," International Organization, Vol. 41, No. 3 (Summer 1987), pp. 335–370, http://www.jstor.org/stable/2706749; Stephen van Evera, "Hypotheses on Nationalism and War," International Security, Vol. 18, No. 4 (Spring 1994), pp. 5–39, https://doi.org/10.2307/2539176; and Jeffrey W. Legro and Andrew Moravcsik, "Is Anybody Still a Realist?" International Security, Vol. 24, No. 2 (Fall 1999), pp. 5–55, https://doi.org/10.1162/016228899560130.
24. Terrence J. Sejnowski, The Deep Learning Revolution (Cambridge: Massachusetts Institute of Technology Press, 2018).
25. Agrawal, Gans, and Goldfarb, Prediction Machines, p. 24.


It became clear in the early 2000s that improvements in computing, memory, and bandwidth could make machine learning commercially viable. Firms like Google, Amazon, and Facebook have successfully targeted their advertising and digital services by coupling "big data" that they harvest from consumer behavior with automated prediction techniques. These same developments have also enabled espionage and surveillance at an unprecedented scale.26

From an economic perspective, modern AI is best understood as a better, faster, and cheaper form of statistical prediction. The overall effect on decision-making, however, is indeterminate. This implies that organizations, military and otherwise, will be able to perform more prediction in the future than they do today, but not necessarily that their performance will improve in all cases.

the decision-making process

Economic decision theory emerged alongside the intellectual tradition of cybernetics.27 As Herbert Simon observed over sixty years ago, "A real-life decision involves some goals or values, some facts about the environment, and some inferences drawn from the values and facts."28 We describe these elements as judgment, data, and prediction. Together they produce actions that shape economic or political outcomes. Feedback from actions produces more data, which can be used for more predictions and decisions, or to reinterpret judgment. The so-called OODA loop in military doctrine captures the same ideas.29 Decision cycles govern all kinds of decision tasks, from the trivial (picking up a pencil) to the profound (mobilizing for war).

26. David V. Gioe, Michael S. Goodman, and Tim Stevens, "Intelligence in the Cyber Era: Evolution or Revolution?" Political Science Quarterly, Vol. 135, No. 2 (Summer 2020), pp. 191–224, https://doi.org/10.1002/polq.13031.
27. The main ideas from the economics literature on decision-making are summarized in Itzhak Gilboa, Making Better Decisions: Decision Theory in Practice (Oxford: Wiley-Blackwell, 2011). See also John D. Steinbruner, The Cybernetic Theory of Decision: New Dimensions of Political Analysis (Princeton, N.J.: Princeton University Press, 1974). On the intellectual impact of cybernetics generally, see Ronald R. Kline, The Cybernetics Moment: Or Why We Call Our Age the Information Age (Baltimore, Md.: Johns Hopkins University Press, 2015). Classic applications of cybernetic decision theory include Karl W. Deutsch, The Nerves of Government: Models of Political Communication and Control (New York: Free Press, 1963); and James R. Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, Mass.: Harvard University Press, 1989).
28. Herbert A. Simon, "Theories of Decision-Making in Economics and Behavioral Science," American Economic Review, Vol. 49, No. 3 (June 1959), p. 273, https://www.jstor.org/stable/1809901.
29. OODA stands for the "observe, orient, decide, and act" phases of the decision cycle. Note that "orient" and "decide" map to prediction and judgment, respectively. These phases may occur sequentially or in parallel in any given implementation. On the influence of John Boyd's cybernetic OODA loop in military thought see James Hasík, "Beyond the Briefing: Theoretical and Practical Problems in the Works and Legacy of John Boyd," Contemporary Security Policy, Vol. 34, No. 3 (2013), pp. 583–599, https://doi.org/10.1080/13523260.2013.839257.


The abstract decision model is agnostic about implementation, which means that the logic of decision might be implemented with organic, organizational, or technological components.

Yet, implementation is precisely what is at stake with AI. Figure 1 thus illustrates how data, judgment, prediction, and action affect the human-machine division of labor in decision-making. We highlight the human-machine division of labor in decision-making because the task-specific implementation of AI has significant consequences. The universality of cybernetic decision-making explains why many consider AI to be a general-purpose technology, like electricity or the internal combustion engine.30 AI can indeed improve prediction, which is a vital input for any sort of decision-making. But AI is not the only input. Organizations also rely on data and judgment to make decisions in task-specific circumstances. Put simply, AI is a general-purpose technology that performs differently in specific contexts.

the human-machine division of labor

When analyzing or deploying AI, it is necessary to consider particular tasks that serve particular goals. Machine learning is not AGI. Our unit of analysis, therefore, is the decision task, that is, the ensemble of data, predictions, judgments, and actions that produce a specific organizational outcome. Most organizations perform many different and interrelated tasks, such as strategy, management, human resources, marketing, network administration, manufacturing, operations, security, and logistics. Military analogs of these tasks include command, administration, training, intelligence, communication, fire, maneuver, protection, and sustainment. Within and across these categories are myriad different tactics, techniques, and procedures. Any of these tasks, at whatever scope or scale, can directly or indirectly support an organization's overall mission, which may or may not be well defined. Indeed, a task itself may be poorly defined, in part because task decomposition is a problem for managerial judgment.

AI performance in any given task is a function of the quality of data and the difficulty of judgment. These two complements provide essential context for automated decision-making. Data are high quality if relevant information is abundantly available and not systematically biased. Judgment is well defined if goals can be clearly specified in advance and stakeholders agree on them.

30. See, for example, Iain M. Cockburn, Rebecca Henderson, and Scott Stern, "The Impact of Artificial Intelligence on Innovation: An Exploratory Analysis," in Ajay Agrawal, Joshua Gans, and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda (Chicago: University of Chicago Press, 2019), pp. 115–146.


Table 1. The Implications of Data and Judgment for Automation in Decision-Making

                         Judgment: clear                              Judgment: difficult

Data: high-quality       Fully automated decision-making is           Automated predictions can inform
                         more efficient.                              human decisions.

Data: low-quality        Full automation is possible but risky.       Automated decision-making is not
                                                                      feasible.

The degree to which data quality is high or low, and judgment is clear or difficult, determines the comparative advantages of humans and machines in decision-making. In contrast, substitutes determine only the technical potential for automation (i.e., by reducing the costs of prediction or action). Table 1 summarizes the effects of data and judgment on AI performance. The implications of AI for future war are necessarily speculative, which makes it even more important to theorize from a solid deductive foundation.

AI Complements in Business and War

Numerous empirical studies of civilian AI systems have validated the basic finding that AI depends on quality data and clear judgment.31 To the extent that decision-making is indeed a universal activity, it is reasonable to expect models of it to apply to militaries. But we caution against generalizing from the business world to military affairs (and vice versa). Commercial and military organizations perform dissimilar tasks in different contexts. Militaries are only infrequently "in business" because wars are rare events.32 Objectives such as "victory" or "security" are harder to define than "shareholder value" or "profit." Combatants attempt to physically destroy their competitors, and the consequences of failure are potentially existential.

One underappreciated reason why AI has been applied successfully in many commercial situations is because the enabling conditions of quality data and clear judgment are often present. Peaceful commerce generally takes place in institutionalized circumstances. Laws, property rights, contract enforcement mechanisms, diversified markets, common expectations, and shared behavioral norms all benefit buyers and sellers.
31. Detailed arguments and evidence are presented in Agrawal, Gans, and Goldfarb, Prediction Machines.
32. Gary King and Langche Zeng, "Explaining Rare Events in International Relations," International Organization, Vol. 55, No. 3 (2001), pp. 693–715, https://doi.org/10.1162/00208180152507597.


These institutional features make transactions more consistent and efficient.33 Consistency, in turn, provides the essential scaffolding for full automation. We expect AI to be more successful in more institutionalized circumstances and for more structured tasks.

War, by contrast, occurs in a more anarchic environment. In the international system, according to the intellectual tradition of realism, there are no legitimate overarching institutions to adjudicate disputes, enforce international agreements, or constrain behavior.34 Actors must be prepared to defend themselves or ally with others for protection. Allies and adversaries alike have incentives to misrepresent their capabilities and interests, and for the same reasons to suspect deception by others.35 Militarized crises and conflicts abound in secrecy and uncertainty. War aims are controversial, almost by definition, and they mobilize the passions of the nation, for better or worse.

We expect the absence of constraining institutions in war to undermine the AI-enabling conditions of quality data and clear judgment. One exception that proves the rule is that a military bureaucracy may be able to provide scaffolding for some military tasks. Robust organizational institutions, in other words, might substitute for weak international institutions. Yet, there are limits to what organizations can accomplish in the inherently uncertain and contested environment of war. The specific context of data and judgment will determine the viability of automation for any given task.

data and the strategic environment

Commercial AI systems often need thousands, if not millions or billions, of examples to generate high-quality predictions. As deep-learning pioneer Geoffrey Hinton puts it, "Take any old problem where you have to predict something and you have a lot of data, and deep learning is probably going to make it work better than the existing techniques."36
33. See, for example, R.H. Coase, "The Problem of Social Cost," Journal of Law and Economics, Vol. 3 (October 1960), pp. 1–44, https://doi.org/10.1086/466560; and Oliver E. Williamson, "The Economics of Organization: The Transaction Cost Approach," American Journal of Sociology, Vol. 87, No. 3 (November 1981), pp. 548–577, https://www.jstor.org/stable/2778934. These same features are associated with liberal perspectives on international politics, such as Robert O. Keohane, "The Demand for International Regimes," International Organization, Vol. 36, No. 2 (Spring 1982), pp. 325–355, https://www.jstor.org/stable/2706525; John R. Oneal and Bruce M. Russett, "The Classical Liberals Were Right: Democracy, Interdependence, and Conflict, 1950–1985," International Studies Quarterly, Vol. 41, No. 2 (June 1997), pp. 267–293, https://doi.org/10.1111/1468-2478.00042; and G. John Ikenberry, After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order after Major Wars (Princeton, N.J.: Princeton University Press, 2001). A suggestive hypothesis that we do not develop in depth here is that the same conditions that are conducive to liberal institutions should also be conducive to AI performance.
34. These are standard assumptions in realist theory. For example, Hans J. Morgenthau, Politics among Nations: The Struggle for Power and Peace (New York: Alfred A. Knopf, 1960); and Kenneth N. Waltz, Theory of International Politics (Reading, Mass.: Addison-Wesley, 1979).
35. James D. Fearon, "Rationalist Explanations for War," International Organization, Vol. 49, No. 3 (Summer 1995), pp. 379–414, https://www.jstor.org/stable/2706903.


The need for a large quantity of data, as well as detailed metadata that labels and describes its content, is well understood. But the need for quality data is less appreciated. Two factors can undermine the relevancy of data. First, the data may be biased toward one group of people or situation.37 Second, data on the particular situation being predicted may not exist. This latter situation happens surprisingly often, because predictions are especially useful when they provide insight into what will happen if an organization changes its behavior. If the organization has never behaved in a certain way, then relevant data will not exist, and the related statistical prediction will fail.38

A competitor or adversary can exacerbate both problems of data relevancy by manipulating data to create bias or interdicting the supply of data. If an adversary finds a way to access and corrupt the data used to train AI systems, then predictions become less reliable.39 More generally, if AI becomes good at optimizing the solution for any given problem, then an intelligent enemy has incentives to change the problem. In the parlance of AI, the enemy will "go beyond the training set" by creating a situation for which there is no prior example in data used for machine learning. An adversary could innovate new tactics that are hard for AI systems to detect or pursue aims that AI systems do not anticipate.

The sources of uncertainty in war are legion (e.g., poor weather, unfamiliar terrain, bad or missing intelligence, misperception, enemy deception). As Carl von Clausewitz famously states, "War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty."40

36. Geoffrey Hinton, "On Radiology," presented at the Machine Learning and Market for Intelligence Conference, Creative Destruction Lab, University of Toronto, October 26, 2016, YouTube video, 1:24, https://youtu.be/2HMPRXstSvQ.
37. Bo Cowgill and Catherine E. Tucker, "Algorithmic Fairness and Economics," Columbia Business School Research Paper, SSRN (February 14, 2020), https://dx.doi.org/10.2139/ssrn.3361280; and Jon Kleinberg et al., "Discrimination in the Age of Algorithms," Journal of Legal Analysis, Vol. 10 (2018), pp. 113–174, https://doi.org/10.1093/jla/laz001.
38. In such cases, causal inference requires different tools because the counterfactual situation is never observed. There is a rich technical literature on these ideas, rooted in the Rubin causal model. Widely used textbooks are Joshua D. Angrist and Jörn-Steffen Pischke, Mostly Harmless Econometrics: An Empiricist's Companion (Princeton, N.J.: Princeton University Press, 2009); and Guido W. Imbens and Donald B. Rubin, Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction (New York: Cambridge University Press, 2015).
39. Battista Biggio and Fabio Roli, "Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning," Pattern Recognition, Vol. 84 (December 2018), pp. 317–331, https://doi.org/10.1016/j.patcog.2018.07.023; and Heather M. Roff, "AI Deception: When Your Artificial Intelligence Learns to Lie," IEEE Spectrum, February 24, 2020, https://spectrum.ieee.org/automaton/artificial-intelligence/embedded-ai/ai-deception-when-your-ai-learns-to-lie.
40. Carl von Clausewitz, On War, ed. and trans. Michael Eliot Howard and Peter Paret (Princeton, N.J.: Princeton University Press, 1989), p. 101.


Clausewitz also uses the mechanical metaphor of "friction" to describe organizational breakdowns. "Friction" in information technology can also create "fog" because the same systems adopted to improve certainty become new sources of uncertainty. Military personnel in networked organizations struggle to connect systems, negotiate data access, customize software, and protect information security. Computer glitches and configuration problems tend to "accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war."41 Empirical studies of wartime practice reveal a surprising amount of creative hacking to repair and reshape technologies to deal with the tremendous friction of warfare in the information age.42 Yet, informal adaptation of data processing systems can also create interoperability, accountability, and security problems. Managerial intervention to control these risks creates even more friction. We suggest that AI systems designed to "lift the fog of war" could just as easily "shift the fog" right back into the organization.43

Although war is rife with uncertainties that can distort or disrupt data, we expect the quality of data to vary by task and situation. Put differently, the microstructure of the strategic environment is very important. We expect data about friendly forces to be more reliable because commanders can mandate reporting formats and schedules. We expect logistics and administration reports to be more reliable than combat reporting, which is more exposed to enemy interaction. Intelligence about enemy dispositions and capabilities should be even less reliable. Even so, intelligence about "puzzles" (such as the locations and capabilities of weapon systems) may be more reliable than intelligence about "mysteries" (such as future intentions and national resolve).44 Data from technical sensors tend to be better structured and more voluminous than human intelligence reports, which require significant interpretation. Enemy deception or disinformation operations tend to undermine data quality, as does intelligence politicization for parochial interests. It is critical to assess the specific strategic context of data, and thus the suitability of AI, for any given decision task.

41. Ibid., p. 119.
42. Inter alia, James A. Russell, Innovation, Transformation, and War: Counterinsurgency Operations in Anbar and Ninewa Provinces, Iraq, 2005–2007 (Stanford, Calif.: Stanford University Press, 2010); Timothy S. Wolters, Information at Sea: Shipboard Command and Control in the U.S. Navy, from Mobile Bay to Okinawa (Baltimore, Md.: Johns Hopkins University Press, 2013); and Nina A. Kollars, "War's Horizon: Soldier-Led Adaptation in Iraq and Vietnam," Journal of Strategic Studies, Vol. 38, No. 4 (2015), pp. 529–553, https://doi.org/10.1080/01402390.2014.971947.
43. Generally, Lindsay, Information Technology and Military Power.
44. Gregory F. Treverton, Reshaping National Intelligence for an Age of Information (New York: Cambridge University Press, 2003), pp. 11–13.


judgment and military institutions

Even if there are enough of the right type of data, AI still relies on people to determine what to predict and why. Commercial firms, for example, make many different judgments in determining their business models, corporate values, labor relations, and negotiating objectives. Military organizations face analogous management challenges, but they also face unique ones. Military judgment also encompasses national interests, political preferences, strategic missions, commander's intent, rules of engagement, combat ethics, and martial socialization. Because the costs and consequences of war are so profound, all these problems tend to be marked by ambiguity, controversy, and painful trade-offs. Judgment thus becomes more difficult, and ever more consequential, in military affairs.

There are three types of machine learning algorithms.45 All require human judgment. First, in "supervised learning," the human tells the machine what to predict. Second, "unsupervised learning" requires judgment about what to classify and what to do with the classifications. Third, "reinforcement learning" requires advance specification of a reward function. A reward function assigns a numerical score to the perceived state of the world to enable a machine to maximize a goal. More complicated strategies may combine these approaches by establishing instrumental goals in pursuit of the main objective. In every case, a human ultimately codes the algorithm and defines the payoffs for the machine.
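To make concrete where judgment enters a reinforcement learning setup, the following minimal sketch in Python shows a hand-coded reward function; the state fields and numerical weights are hypothetical illustrations of designer judgment, not taken from the article or from any real system.

    def reward(state: dict) -> float:
        # Assign a numerical score to a perceived state of the world so that
        # a learning algorithm can maximize it.
        w_progress = 1.0       # judgment: value of progress toward the objective
        w_delay = 0.1          # judgment: cost of each minute spent
        w_collision = 1000.0   # judgment: cost of harming a person or vehicle
        return (w_progress * state["distance_covered"]
                - w_delay * state["minutes_elapsed"]
                - w_collision * state["collisions"])

The learner then searches for behavior that maximizes this score; it has no independent notion of whether the weights themselves capture what actually matters.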

In economic terms, judgment is the specification of the utility function.46 The preferences and valuations that determine utility are distinct from the strategies that maximize it. To take a trivial example, people who do not mind getting wet and dislike carrying umbrellas will not carry one, regardless of the weather forecast. People who dislike getting wet and do not mind carrying umbrellas might always have an umbrella in their bag. Others might carry an umbrella if the chance of rain is 75 percent but not if it is 25 percent. The prediction of rain is independent of preferences about getting wet or being prepared to get wet. Similarly, the AI variation on the notorious "trolley problem" poses an ethical dilemma about life-or-death choices. For example, should a self-driving car swerve to avoid running over four children at the risk of killing its human passenger?
45. For an accessible introduction, see Sejnowski, The Deep Learning Revolution.
46. Critiques of economic rationality that appeal to cognitive psychology or social institutions underscore the importance of judgment. See Rose McDermott, Political Psychology in International Relations (Ann Arbor: University of Michigan Press, 2004); and Janice Gross Stein, "The Micro-Foundations of International Relations Theory: Psychology and Behavioral Economics," International Organization, Vol. 71, No. S1 (2017), pp. S249–S263, https://doi.org/10.1017/S0020818316000436.

of killing its human passenger? If the AI predicts even odds that someone will die either way, the car should swerve if all lives are equally valuable, but it should not swerve if the passenger's life is worth at least four times as much as that of a random child. This somewhat contrived dilemma understates the complexity of the judgment involved. Indeed, the ethical dilemmas of AI reinvigorate longstanding critiques of utilitarian reasoning. As Heather Roff points out, "We cannot speak about ethical AI because all AI is based on empirical observations; we cannot get an 'ought' from an 'is.' If we are clear eyed about how we build, design, and deploy AI, we will conclude that all of the normative questions surrounding its development and deployment are those that humans have posed for millennia."47
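
The swerve calculation above can be made explicit with a back-of-the-envelope expected-utility comparison. The sketch below is illustrative only; the probabilities and valuations are the stylized numbers from the scenario, not parameters of any real system.

```python
# Illustrative expected-utility comparison for the stylized swerve dilemma.
# All numbers are hypothetical: even odds of a fatality either way, four
# children at risk, and a human-chosen weight on the passenger's life.
def expected_loss(p_fatality: float, value_at_risk: float) -> float:
    return p_fatality * value_at_risk

p = 0.5            # "even odds that someone will die either way"
children = 4.0     # four children, each valued at 1.0

for passenger_value in (1.0, 4.0):               # equal weighting vs. 4x a child
    swerve = expected_loss(p, passenger_value)   # swerving risks the passenger
    straight = expected_loss(p, children)        # not swerving risks the children
    choice = "swerve" if swerve < straight else "do not swerve"
    print(f"passenger valued at {passenger_value}x a child -> {choice}")
# With equal values the car should swerve; once the passenger is worth at least
# four times a child, the expected losses no longer favor swerving. The
# prediction (p) is identical in both cases; only the judgment changes.
```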

If the trolley problem seems far-fetched, consider the case of a self-driving Uber car that killed a cyclist in Tempe, Arizona.48 The AI had predicted a low but nonzero probability that a human was in its path. The car was designed with a threshold for ignoring low-probability risks. The priority of not hitting humans was obvious enough. Yet, with an error tolerance set to zero, the car would not be able to drive. The question of where to set the tolerance was a judgment call. In this case, it appears that the prespecified judgment was tragically inappropriate for the context, but the prediction machine had absolutely no concept of what was at stake.
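
A hedged sketch of the same point in code: the prediction is a probability, but the tolerance against which it is compared is a human judgment fixed in advance, and an inappropriate tolerance can be lethal. The threshold values here are invented for illustration, not taken from the NTSB report.

```python
# Illustrative only: braking logic where the prediction is machine-generated
# but the risk tolerance is a human-specified judgment made in advance.
def should_brake(predicted_prob_human: float, tolerance: float) -> bool:
    return predicted_prob_human >= tolerance

# A tolerance of 0.0 would stop the vehicle constantly; a tolerance set too
# high ignores real people. Where to set it is the judgment call.
print(should_brake(predicted_prob_human=0.05, tolerance=0.20))  # False: risk ignored
print(should_brake(predicted_prob_human=0.05, tolerance=0.01))  # True: brake
```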

A well-specified AI utility function has two characteristics. First, goals are clearly defined in advance. If designers cannot formally specify payoffs and priorities for all situations, then each prediction will require a customized judgment. This is often the case in medical applications.49 When there are many possible situations, human judgment is often needed upon seeing the diagnosis. The judgment cannot be determined in advance because it would take too much time to specify all possible contingencies. Such dynamic or nuanced situations require, in effect, incomplete contracts that leave out complex, situation-specific details to be negotiated later.50 Because all situations cannot be stipulated in advance, judgment is needed after seeing the prediction to interpret the spirit of the agreement.

47. Heather M. Roff, The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles (Washington, D.C.: Brookings Institution Press, December 17, 2018), https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
48. National Transportation Safety Board [NTSB], "Highway Accident Report: Collision between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018," Highway Accident Report NTSB/HAR-19/03 (Washington, D.C.: NTSB, November 19, 2019), https://trid.trb.org/view/1751168.
49. Trevor Jamieson and Avi Goldfarb, "Clinical Considerations When Applying Machine Learning to Decision-Support Tasks versus Automation," BMJ Quality & Safety, Vol. 28, No. 10 (2019), pp. 778–781, https://doi.org/10.1136/bmjqs-2019-009514.
50. Williamson, "The Economics of Organization."

The military version of incomplete contracting is "mission command," which specifies the military objective and rules of engagement but empowers local personnel to interpret guidance, coordinate support, and tailor operations as the situation develops.51 The opposite of mission command, sometimes described as "task orders," is more like a complete contract that tells a unit exactly what to do and how to do it. Standard operating procedures, doctrinal templates, and explicit protocols help to improve the predictability of operations by detailing instructions for operations and equipment handling. In turbulent environments with unpredictable adversaries, however, standardized task orders may be inappropriate. The greater the potential for uncertainty and accident in military operations, the greater the need for local commanders to exercise initiative and discretion. In Clausewitzian terms, "fog" on the battlefield and "friction" in the organization require commanders to exercise "genius," which is "a power of judgment raised to a marvelous pitch of vision, which easily grasps and dismisses a thousand remote possibilities which an ordinary mind would labor to identify and wear itself out in so doing."52 The role of "genius" in mission command becomes particularly important, and particularly challenging, in modern combined arms warfare and multi-domain operations.53 When all possible combinations of factors cannot possibly be specified in advance, personnel have to exercise creativity and initiative in the field. Modern military operations tend to mix elements of both styles by giving local commanders latitude in how they interpret, implement, and combine the tools, tactics, and procedures that have been standardized, institutionalized, and exercised in advance.

The second characteristic of a well-specified AI utility function is that all stakeholders should agree on what goals to pursue. When it is difficult for people to agree on what to optimize, transparent institutional processes for evaluating or aggregating different preferences may help to validate or legitimate decisions that guide AI systems. Unfortunately, consensus becomes elusive as "genius" becomes more geographically distributed, socially collaborative, and technically exacting.54 In an ethnography of divisional command in

51. For example, Department of the Army, ADP 6-0: Mission Command: Command and Control of Army Forces, Army Doctrine Publication No. 6-0 (Washington, D.C.: U.S. Department of the Army, May 17, 2012), https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18314-ADP_6-0-000-WEB-3.pdf.
52. Clausewitz, On War, p. 112.
53. Biddle, Military Power; Bart Van Bezooijen and Eric-Hans Kramer, "Mission Command in the Information Age: A Normal Accidents Perspective on Networked Military Operations," Journal of Strategic Studies, Vol. 38, No. 4 (2015), pp. 445–466, https://doi.org/10.1080/01402390.2013.844127; and Ryan Grauer, Commanding Military Power: Organizing for Victory and Defeat on the Battlefield (Cambridge: Cambridge University Press, 2016).
54. John Ferris and Michael I. Handel, "Clausewitz, Intelligence, Uncertainty, and the Art of Com-

Afghanistan, Anthony King writes that "a general must define a mission, manage the tasks of which it is comprised and motivate the troops."55 The first of these three factors—specifying positive objectives and negative limitations—is the consummate function of judgment; AI offers little help here. AI might provide some support for the second factor, oversight and administration, which involves a mixture of judgment and prediction. The third factor is leadership, which is fundamentally a judgment problem insofar as leaders attempt to socialize common purposes, values, and interpretations throughout an organization. Again, AI is of little use for issues of leadership, which become more important as organizations become geographically and functionally distributed: "The bureaucratic expertise of the staff has been improved and their cohesiveness has been condensed so that they are now bound in dense solidarity, even when they are not co-present."56 Indeed, "decision-making has proliferated" in all three areas—strategy, management, leadership—because a "commander can no longer direct operations alone."57 According to King, the commanding general is now less of a central controller and more of a social focal point for coordinating the complex interactions of "the command collective."

Collective command, however, is a collective action problem. In some cases, standard operating procedures and socialization rituals can simplify judgment tasks. King finds that "command teams, command boards, principal planning groups and deputies have appeared to assist and to support the commander and to manage discrete decision cycles to which the commander cannot attend."58 Yet, in other cases, personnel from different services, branches, or units may disagree over how to interpret even basic tactics, techniques, and procedures.59 Disagreement may turn into controversy when mission assignments fall outside the scope of what professionals deem right or appropriate, as when armies are tasked with counterinsurgency, air forces are tasked with close air support, or cultural preferences clash.60 More serious disagreements

mand in Military Operations," Intelligence and National Security, Vol. 10, No. 1 (1995), pp. 1–58, https://doi.org/10.1080/02684529508432286.
55. Anthony King, Command: The Twenty-First-Century General (Cambridge: Cambridge University Press, 2019), p. 438.
56. Ibid., p. 443.
57. Ibid., p. 439.
58. Ibid., p. 440.
59. Harvey M. Sapolsky, Eugene Gholz, and Caitlin Talmadge, US Defense Politics: The Origins of Security Policy, 3rd ed. (New York: Routledge, 2017), pp. 93–116.
60. On the cultural origins of military preferences, see Elizabeth Kier, Imagining War: French and British Military Doctrine between the Wars (Princeton, N.J.: Princeton University Press, 1997); Jeffrey W. Legro, Cooperation under Fire: Anglo-German Restraint during World War II (Ithaca, N.Y.: Cornell University Press, 2013 [1995]); and Austin Long, The Soul of Armies: Counterinsurgency Doctrine and Military Culture in the US and UK (Ithaca, N.Y.: Cornell University Press, 2016).

about war aims and military methods can emerge within the civil-military
chain of command or among coalition partners.61 Just as data availability and
bias vary for any given decision task, we also expect variability in the clarity
and consensus of judgment. Any factors that exacerbate confusion or disagree-
ment in military institutions should be expected to make judgment more
difficult for AI automation.

AI Performance in Military Decision-Making Tasks

As we have explained in the previous sections, decision-making is a universal process, but decision inputs are context specific. Even if the same AI technology is available to all organizations, the strategic and institutional conditions that have enabled AI success in the business world may not be present in war. We thus infer two general hypotheses about the key AI complements of data and judgment. First, stable, cooperative environments are more conducive to plentiful, unbiased data; conversely, turbulent, competitive environments tend to produce limited, biased data. Second, institutional standardization and solidarity encourage well-defined, consensual judgments; conversely, idiosyncratic local practices and internal conflict lead to ambiguous, controversial judgments.

The combination of these hypotheses describes four different regimes of AI performance in military decision-making tasks. Table 2 summarizes these categories by synthesizing the strategic and institutional inputs from the decision-making context (figure 1) into the human-machine division of labor (table 1) for key military functions. The best case for AI performance is what we call "automated decision-making." Quality data and clear judgment are most likely to be available in highly routinized administrative and logistics tasks that are more analogous to civilian organizational tasks. Anything that bureaucracies can do well, AI can probably help them to do better. The worst case for AI is the opposite quadrant, in which both automation complements are absent. We label this category "human decision-making" because AI cannot perform tasks characterized by limited, biased data and ambiguous, controversial judgments. For military strategy and command tasks, the Clausewitzian extremes of "fog" in the environment and "friction" in the organization require human "genius." In the other two quadrants, in which one necessary complement is present but the other is absent, the human-machine division of labor is

61. For example, Risa Brooks, Shaping Strategy: The Civil-Military Politics of Strategic Assessment (Princeton, N.J.: Princeton University Press, 2008); and Jeremy Pressman, Warring Friends: Alliance Restraint in International Politics (Ithaca, N.Y.: Cornell University Press, 2012).

Table 2. How Strategic and Institutional Conditions Shape AI Performance in Military Tasks

                                         Effect of Environment on Data
                                         stability and cooperation          turbulence and competition
                                         → high-quality data                → low-quality data

Effect of        standardization and    Automated Decision-Making          Premature Automation
Institutions     solidarity             Full automation can increase       Full automation in complex
on Judgment      → clear judgment       the scale and efficiency of        fire and maneuver tasks
                                        highly bureaucratized              heightens the risks of
                                        administrative and logistics       targeting error and
                                        tasks.                             inadvertent escalation.

                 idiosyncrasy and       Human-Machine Teaming              Human Decision-Making
                 conflict               Automated decision aids for        Strategy and command will
                 → difficult judgment   intelligence analysis and          continue to rely on human
                                        operational planning can           interpretation and
                                        augment, but not replace,          leadership.
                                        human decision-making.

more complicated. The category of "premature automation" describes situations in which machines receive clear goals in uncertain environments. If fast, automated decisions are tightly coupled to lethal actions, this is a recipe for disaster. The mismatch between the evolving situation and the decisions encoded in AI systems may not be immediately obvious to humans, which heightens the risks of tragic outcomes such as targeting error or inadvertent escalation. In the converse category of "human-machine teaming," judgment tasks are difficult, but quality data are available. AI decision aids (e.g., graphs, tables, map overlays, image annotations, and textual summaries) provide predictions that augment but do not replace human decisions, while humans maintain a healthy skepticism about AI shortcomings. Many operational planning and intelligence analysis tasks fall into this category, along with tactical mission support tools. To demonstrate the plausibility of our framework, we next offer a few commercial examples and explore potential military applications and further implications in each category.

automated decision-making

Full automation of decision-making can improve performance if there are
quality data and clear judgments. For example, Australia's Pilbara region has
large quantities of iron ore. The mining sites are far from any major city,
and the local conditions are often so hot that it is hazardous for humans to

work there. Since 2016, mining giant Rio Tinto has deployed dozens of self-driving trucks.62 These trucks have saved operating costs while reducing risk to human operators. Such automation is feasible because the data are plentiful relative to the needs at hand—the trucks drive on the same roads each day, and there are few surprises in terms of human activity. Data collection is therefore limited to a small number of roads with few obstacles. The main task for the AI is to predict whether the path is clear. Once this prediction is made, the judgment is well defined and easy to specify in advance: if the path is clear, continue; if it is not clear, stop and wait. In other examples of successful automation, robotic cameras have been deployed in a variety of settings, including during basketball games, swimming competitions, and "follow me" aerial drones.63
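
The haul-truck case shows what a fully prespecified judgment looks like once the prediction is reliable. The schematic sketch below is our own illustration, with invented function names standing in for whatever proprietary systems the mine actually runs.

```python
# Schematic sketch of full automation: a prediction coupled to a judgment that
# was fixed in advance. The perception model and truck interface are
# hypothetical stand-ins, not Rio Tinto's actual software.
from dataclasses import dataclass

@dataclass
class Truck:
    moving: bool = False
    def drive(self): self.moving = True
    def stop_and_wait(self): self.moving = False

def path_is_clear(sensor_frame) -> bool:
    """Stand-in for a trained perception model on a small, well-mapped road network."""
    return sensor_frame == "clear"

def control_step(truck: Truck, sensor_frame) -> None:
    # The entire judgment was specified in advance:
    # if the path is clear, continue; if it is not clear, stop and wait.
    if path_is_clear(sensor_frame):
        truck.drive()
    else:
        truck.stop_and_wait()

truck = Truck()
for frame in ["clear", "clear", "obstructed", "clear"]:
    control_step(truck, frame)
    print(frame, "->", "moving" if truck.moving else "waiting")
```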

We expect AI to be useful for military tasks with clear civilian analogs. While much of the popular debate about military AI is preoccupied with automated weaponry, it is likely that many promising applications will be designed to support bureaucratic functions. Bureaucracies are, among other things, computational systems that gather and process data to render operations more legible, predictable, and controllable.64 The most bureaucratized parts of a military organization are therefore good candidates for computational automation. Administrative transactions tend to be repetitious, which generates a large amount of high-quality data that can be used for training and prediction. Organizations are attracted to standardization because it makes their equipment, procedures, and personnel easier to count, compare, and control.65 Procedural standardization also constrains organizational behavior, which makes it easier for managers to specify judgments in advance. Moreover, peacetime administrative tasks are somewhat less exposed to battlefield turbulence, reducing the requirement for last-minute interpretation.

We expect automation to improve the efficiency and scale of routinized activities that entail filling in missing information, measuring technical performance, tracking personnel, and anticipating future needs. Indeed, AI may enhance many routine tasks associated with developing budgets, recruiting and training personnel, identifying leadership potential, scheduling unit rosters, designing and procuring weapon systems, planning and evaluating exer-

62. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 113–114.
63. Ibid., p. 115.
64. James G. March and Herbert A. Simon, Organizations (New York: John Wiley, 1958).
65. JoAnne Yates, Control through Communication: The Rise of System in American Management (Baltimore, Md.: Johns Hopkins University Press, 1989); and Wendy Nelson Espeland and Mitchell L. Stevens, "Commensuration as a Social Process," Annual Review of Sociology, Vol. 24, No. 1 (1998), pp. 313–343, https://doi.org/10.1146/annurev.soc.24.1.313.

cises, caring for the morale and welfare of service members and their families,
and providing health care to service members and veterans.66 At the same
time, it is important to recognize that seemingly trivial procedures can be-
come politicized when budgets and authorities are implicated.67 Even in
the absence of parochialism, the complexity of administrative systems intro-
duces interpretive challenges for personnel. These internal frictions under-
mine the conditions for successful administrative automation.

Logistics supply chains may also be good candidates for automation. Indeed, firms like DHL and FedEx have leveraged AI to streamline their delivery networks. Standardized parts, consumption rates, repetitive transactions, and preventive maintenance schedules generate abundant data about defined tasks. Using historical performance data, predictive maintenance systems can monitor consumption rates and automatically order replacement parts before a weapon or platform breaks. For example, one U.S. Air Force system uses a predictive algorithm to decide when mechanics should perform an inspection, which allows them to tailor the maintenance and repairs for individual aircraft rather than adhere to generic schedules.68 But we contend that the prediction of supply and demand for just-in-time delivery will be more difficult in war. While bureaucrats may be insulated from the turmoil of the battlefield, supply lines are more exposed. The enemy can interdict or sabotage logistics. As wartime attrition consumes spare parts, units may squabble about which ones should be resupplied. Friendly units may resort to using platforms and parts in unconventional ways. All this turbulence will cause predictions to fail, which essentially shifts AI into the category of premature automation, discussed below. The classical military solution to such problems is to stockpile an excess of supplies, precisely because wartime consumption is so hard to predict.69 If organizations eliminate slack resources with AI systems in pursuit of efficiency, however, then they may sacrifice effectiveness when systems encounter unforeseen circumstances.70
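
A minimal sketch of the predictive-maintenance logic described above, with hypothetical part names and thresholds: in peacetime the reorder rule can be fixed in advance, whereas wartime attrition and improvisation change the consumption pattern that the historical data encode.

```python
# Illustrative predictive-maintenance rule. Part names, failure probabilities,
# and the reorder threshold are hypothetical; they stand in for the kind of
# forecasts trained on historical performance data described in the text.
inventory = {"hydraulic_pump": 2, "rotor_blade": 5}
predicted_failure_30d = {"hydraulic_pump": 0.35, "rotor_blade": 0.05}

REORDER_THRESHOLD = 0.25   # a human judgment: how much failure risk to tolerate

def parts_to_order(stock: dict, forecast: dict) -> list:
    orders = []
    for part, p_fail in forecast.items():
        # Reorder when forecast risk is high and stock is thin.
        if p_fail >= REORDER_THRESHOLD and stock.get(part, 0) < 3:
            orders.append(part)
    return orders

print(parts_to_order(inventory, predicted_failure_30d))  # ['hydraulic_pump']
# In war, interdiction, attrition, and improvised use of parts shift the
# data-generating process, so forecasts trained on peacetime history degrade.
```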

66. Brian David Ray, Jeanne F. Forgey, and Benjamin N. Mathias, "Harnessing Artificial Intelligence and Autonomous Systems across the Seven Joint Functions," Joint Force Quarterly, Vol. 96, No. 1 (2020), p. 124, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-96/jfq-96.pdf; and Stephan De Spiegeleire, Matthijs Maas, and Tim Sweijs, Artificial Intelligence and the Future of Defense: Strategic Implications for Small- and Medium-Sized Force Providers (The Hague, Netherlands: Hague Centre for Strategic Studies, 2017), pp. 91–94, https://www.jstor.org/stable/resrep12564.1.
67. Sapolsky, Gholz, and Talmadge, US Defense Politics.
68. Stoney Trent and Scott Lathrop, "A Primer on Artificial Intelligence for Military Leaders," Small Wars Journal, August 22, 2018, http://smallwarsjournal.com/jrnl/art/primer-artificial-intelligence-military-leaders.
69. Martin van Creveld, Supplying War: Logistics from Wallenstein to Patton, 2nd ed. (New York: Cambridge University Press, 2004).
70. Loss of resilience is a longstanding concern associated with organizational automation. See

In sum, we expect AI to be most useful for automating routine tasks that are bureaucratically insulated from battlefield turbulence. Administration and logistics tasks that are repetitious and standardized are more likely to have both quality data and clear goals. Humans still provide judgment to define those clear goals, but this happens in advance. Although these conditions are ideal for automation, they can be elusive in practice, especially if there are contested resources and personnel decisions. As a result, even the low-hanging fruit applications will often fall into the other three categories in table 2, particularly human-machine teaming.

human decision-making

At the other extreme, humans still make all the decisions for situations in which data are of low quality and judgment is difficult. Machine predictions degrade without quality data. Fortunately, because judgment is also difficult in this category, there is little temptation to automate. There are no commercial examples in this category because we have not seen AI systems that successfully found companies, lead political movements, or set legal precedents by themselves. Without quality data and clear judgment, such machines would be of little use. As long as advances in machine learning are best understood as improvements in prediction rather than AGI, then tasks in this category will require human beings. People do not always make good decisions, however, and successful decisions depend on many different psychological and social factors, plus good luck.

Strategy abounds with complex and controversial political and moral judgments. What is worth fighting for, or compromising on? When should allies be embraced, or abandoned? When is butter worth more than guns, and when does the stability of deterrence outweigh the pursuit of power? For what national interests should men and women be inspired to kill, and die? And when should killers show restraint? The answers to these questions spring from many sources such as ideology, psychology, and domestic politics, but they do not come from machines. AI systems may win games like Jeopardy, Go, and complex video games. War shares some features with some games, such as strategic competition and zero-sum payoffs. But war is not a game. Games are defined by institutionalized rules, but the failure of institutions in anarchy gives rise to war.

In Clausewitzian terms, war is the use of violence to impose one’s will on a
reactive opponent. The interaction of political rationality, national passion, and

Gene I. Rochlin, Trapped in the Net: The Unanticipated Consequences of Computerization (Princeton,
N.J.: Princeton University Press, 1997).

random chance gives war a chaotic quality.71 The problems of formulating political-military strategy and commanding forces in battle live in the heart of this chaos. Curiosity, creativity, grit, and perseverance become important character traits, to say nothing of empathy, mercy, and compassion. Whenever "fog" and "friction" are greatest, and human "genius" is most in demand, there is little role for AI. Humans will often fail in these circumstances, too, but at least they have a fighting chance.

AI systems may still be able to support strategy and command by providing decision aids that improve the intelligence, planning, and administrative inputs to decision-making. Yet, this simply underscores the importance of decomposing decision tasks into subtasks that AI can support and subtasks that humans must perform. The partition of judgment tasks itself is an act of judgment. For fluid, ambiguous, or controversial practices, which are common in strategy and command, the boundaries of data, judgment, prediction, and action may be difficult to distinguish from each other, let alone from other decision tasks. Judgment becomes even more important in these situations.

premature automation

In between the extremes of fully automated and fully human decision-making, there are both risks and opportunities. The mixed cases of premature automation and human-machine teaming generate most of the worry and excitement about AI. Reliance on AI is particularly risky in situations in which the data are of low quality, but the machine is given clear objectives and authorized to act. The risks are greatest when lethal action is authorized. If data are biased or incomplete, then it would be better to let humans rather than machines interpret the evolving situation (i.e., human decision-making). If humans mistakenly believe that data are abundant and unbiased, however, then they may wrongly assume that critical tasks can be delegated to AI (i.e., automated decision-making). Automation seems seductively feasible, but the definition of the reward function fails to keep up with important changes in the context of decision-making. Clear judgment in a turbulent environment creates hidden dangers.

An example of premature automation is when Amazon built an AI system
to help with its hiring processes.72 The judgment seemed clear: the machine
should select the workers who are likely to succeed in the company. Amazon

71. Alan Beyerchen, "Clausewitz, Nonlinearity, and the Unpredictability of War," International Security, Vol. 17, No. 3 (Winter 1992/93), pp. 59–90, https://doi.org/10.2307/2539130.
72. Jeffrey Dastin, "Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women," Reuters, October 10, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

receives thousands of résumés, and a better screening tool could automate the many hours that human recruiters spend screening them. There were reasons to be optimistic that this automation would reduce bias and yield higher quality and more diverse candidates.73 Unfortunately, Amazon's past applications and hiring practices meant that the data contained insufficient examples of successful women applicants. Without data on successful women applicants, the AI learned that Amazon should not hire women, and it consequently screened out résumés that included the word "women." The existing biases in organizational processes produced biased AI training data. Fortunately, Amazon management realized that the AI exacerbated rather than solved Amazon's existing problems, and the company never deployed this AI tool.

Data may also be biased because they are based on decisions that a prediction machine may not understand. For example, an early AI for chess was trained on thousands of Grandmaster games. When deployed, the program sacrificed its queen early in the game because it had learned that Grandmasters who do so tend to win. Human Grandmasters only sacrifice their queen, however, when doing so generates a clear path to victory.74 While this issue has been solved in AI chess, the underlying challenge continues. Even when the utility function is clear, training data is often a result of tacit assumptions in human decision-making. Sometimes those human decisions—such as sacrificing a queen in chess—create biased data that cause AI predictions to fail.
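
The queen-sacrifice failure is, at bottom, a naive frequency estimate computed over a biased sample. A toy sketch with invented game counts makes the mechanism visible: the model sees only games in which strong players chose the sacrifice deliberately, so the correlation it learns does not survive outside that sample.

```python
# Toy illustration of learning from biased data. The game counts are invented;
# grandmasters sacrifice the queen only when it already wins, so a naive
# frequency-based learner concludes the sacrifice itself is good.
training_games = (
    [{"queen_sacrifice": True, "won": True}] * 90 +    # deliberate, winning sacrifices
    [{"queen_sacrifice": True, "won": False}] * 10 +
    [{"queen_sacrifice": False, "won": True}] * 500 +
    [{"queen_sacrifice": False, "won": False}] * 500
)

def win_rate(games, sacrifice: bool) -> float:
    subset = [g for g in games if g["queen_sacrifice"] is sacrifice]
    return sum(g["won"] for g in subset) / len(subset)

print(win_rate(training_games, True))   # 0.9 -> "sacrificing looks great"
print(win_rate(training_games, False))  # 0.5
# The tacit human judgment (sacrifice only with a forced win) never appears in
# the data, so the learned prediction fails when applied blindly.
```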

In contrast with chess, the risks of premature automation are more extreme in the military realm (e.g., fratricide and civilian casualties), but the logic is the same. Militaries abound with standard operating procedures and tactical doctrines that guide the use of lethal capabilities (e.g., instructions for the safe operation of weapon platforms, playbooks for tactical maneuvers, and policies for employing weapons). To the degree that goals and mission parameters can be clearly specified, tactical operations will appear to be attractive candidates for automation. To the degree that combat timelines are expected to be extremely compressed, moreover, automation may appear to be even more urgent.75 Rapid decision-making would necessitate the pre-specification of goals and payoffs and the coupling of AI prediction to robotic action. Lethal autonomous weapon systems use prediction to navigate complex environments in order to arrive at destinations or follow targets, within constraints that are supplied by human operators.76 Their targeting systems base their predictions

73. Kleinberg et al., "Discrimination in the Age of Algorithms."
74. Agrawal, Gans, and Goldfarb, Prediction Machines, p. 63.
75. Horowitz, "When Speed Kills."
76. Heather M. Roff and Richard Moyes, "Meaningful Human Control, Artificial Intelligence, and

on training data that identify valid targets. Using algorithms, machines may
rapidly and accurately identify targets at far greater distances than human vi-
sual recognition, and algorithmic target recognition may be collocated with
sensors to reduce response times.77

Many AI-enabled weapons already or imminently exist. The Israeli Harpy loitering munition can search for and automatically engage targets, and China has plans for similar "intelligentized" cruise missiles.78 Russia is developing a variety of armed, unmanned vehicles capable of autonomous fire or unarmed mine clearance.79 The United States has been exploring combat applications for AI in all warfighting domains. In the air, the "Loyal Wingman" program pairs an unmanned F-16 with a manned F-35 or F-22 to explore the feasibility of using humans to direct autonomous aircraft, such as the XQ-58A Valkyrie.80 Air combat algorithms that can process sensor data and plan effective combat maneuvers in the span of milliseconds have already defeated human pilots in some simulators.81 At sea, the U.S. Navy's LOCUST project explores the feasibility of launching swarms of expendable surface-to-air drones.82 The Defense Advanced Research Projects Agency's (DARPA) Continuous Trail Unmanned Vessel program is designed to search for enemy missile submarines and automatically trail them for months at a time, reporting regularly on their locations.83 On land, U.S. Marine "warbot companies" equipped with networks of small robots might provide distributed sensing and precision fire.84 Auto-

Autonomous Weapons,” paper prepared for the Informal Meeting of Experts on Lethal Autono-
mous Weapons Systems, UN Convention on Certain Conventional Weapons, Genf, April 11–15,
2016, https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf.
77. Sehen, Zum Beispiel, the Defense Advanced Research Projects Agency’s (DARPA) Target Recogni-
tion and Adaption in Contested Environments (TRACE) Programm, described in John Keller,
“DARPA TRACE program using advanced algorithms, embedded computing for radar target
recognition,” Military and Aerospace Electronics, Juli 23, 2015, https://www.militaryaerospace
.com/computers/article/16714226/darpa-trace-program-using-advanced-algorithms-embedded-
computing-for-radar-target-recognition.
78. Elsa B. Kania, Battleªeld Singularity: Artiªcial Intelligence, Military Revolution, and China’s Future
Military Power (Washington, D.C.: Center for a New American Security, November 2017), https://
s3.us-east-1.amazonaws.com/ªles.cnas.org/documents/Battleªeld-Singularity-November-2017
.pdf.
79. Spiegeleire, Maas, and Sweijs, Artiªcial Intelligence and the Future of Defense, S. 80, 82.
80. Hoadley and Sayler, Artiªcial Intelligence and National Security, P. 13.
81. Norine MacDonald and George Howell, “Killing Me Softly: Competition in Artiªcial Intelli-
gence and Unmanned Aerial Vehicles,” PRISM, Bd. 8, NEIN. 3 (2019), S. 103–126, https://ndupress
.ndu.edu/Portals/68/Documents/prism/prism_8-3/prism_8-3.pdf.
82. LOCUST stands for “Low-Cost Unmanned Aerial Vehicle Swarming Technology.” See Jules
Hurst, “Robotic Swarms in Offensive Maneuver,” Joint Force Quarterly, Bd. 87, NEIN. 4 (2017),
S. 105–111, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-87/jfq-87_105-111_Hurst
.pdf?ver(cid:2)2017-09-28-093018-793.
83. Spiegeleire, Maas, and Sweijs, Artiªcial Intelligence and the Future of Defense, P. 91.
84. Jeff Cummings et al., “Marine Warbot Companies: Where Naval Warfare, die USA. National
Defense Strategy, and Close Combat Lethality Task Force Intersect,” War on the Rocks, Juni 28,

mated counter-battery responses, which accurately retaliate against the origin of an attack, could give human commanders leeway to focus on second- and third-order decisions in the wake of an attack.85 In the cyber domain, AI systems might autonomously learn from and counter cyberattacks as they evolve in real time, as suggested by the performance of the Mayhem system in DARPA's 2016 Cyber Grand Challenge.86 AI could be especially useful for detecting new signals in the electromagnetic spectrum and reconfiguring electronic warfare systems to exploit or counter them.87 Space satellites, meanwhile, have been automated from their inception, and space operations might further leverage AI to enhance surveillance and control.

Much of the AI security literature is preoccupied with the risks posed by automated weapons to strategic stability and human security.88 Risks of miscalculation will increase as the operational context deviates from the training data set in important or subtle ways. The risk of deviation increases with the complexity and competitiveness of the strategic environment, while the costs of miscalculation increase with the lethality of automated action. The machine tries to optimize a specific goal, but in the wrong context, doing so can lead to false positives. AI weapons may inadvertently either target innocent civilians or friendly forces or trigger hostile retaliation. In these cases, the AI would have the authority to kill but would not understand the ramifications. The risks are particularly stark in the nuclear arena. Nuclear war is the rarest of rare events—keeping it that way is the whole point of nuclear deterrence—so training data for AI systems is either nonexistent or synthetic (i.e., based on simulation).89 Any tendency for AI systems to misperceive or miscalculate when confronted with uncertain or novel situations could have catastrophic consequences.90 In short, autonomous weapon systems that combine prediction with action can quickly make tragic mistakes.

2018, https://warontherocks.com/2018/06/marine-warbot-companies-where-naval-warfare-the-u-s-national-defense-strategy-and-close-combat-lethality-task-force-intersect/.
85. Ray, Forgey, and Mathias, "Harnessing Artificial Intelligence and Autonomous Systems across the Seven Joint Functions," p. 123.
86. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, p. 88.
87. Matthew J. Florenzen, Kurt M. Shulkitas, and Kyle P. Bair, "Unmasking the Spectrum with Artificial Intelligence," Joint Force Quarterly, Vol. 95, No. 4 (2019), pp. 116–123, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-95/jfq-95_116-123_Florenzen-Skulkitas-Bair.pdf.
88. For example, Payne, "Artificial Intelligence"; and Suchman, "Algorithmic Warfare and the Reinvention of Accuracy."
89. Rafael Loss and Joseph Johnson, "Will Artificial Intelligence Imperil Nuclear Deterrence?" War on the Rocks, September 19, 2019, https://warontherocks.com/2019/09/will-artificial-intelligence-imperil-nuclear-deterrence/.
90. Vincent Boulanin, ed., The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Vol. 1, Euro-Atlantic Perspectives (Solna, Sweden: Stockholm International Peace Research Institute, May 2019), https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-

For most of the examples reviewed in this section, humans should adjust goals on a case-by-case basis to avoid the substantial operational risks of full automation. The DARPA Air Combat Evolution (ACE) program, which trains AI pilots in dogfight simulations, highlights risks that can emerge when AI is given too much decision autonomy in rapidly changing contexts: "at one point in the AlphaDogfight trials, the organisers threw in a cruise missile to see what would happen. Cruise missiles follow preordained flight paths, so they behave more simply than piloted jets. The AI pilots struggled with this because, paradoxically, they had beaten the missile in an earlier round and were now trained for more demanding threats."91 Experiments like this have encouraged ACE to focus on "manned-unmanned teaming" rather than full autonomy. The engineering challenge is then to partition the cognitive load of prediction and judgment correctly (i.e., to decompose the task into different subtasks) so that faster machines and mindful humans can play to their strengths. These examples show that automation risk from low-quality data increases the importance of careful human judgment and teaming a human with the machine. Human personnel are needed to identify when data are incomplete or biased in the specific context of any given decision, and to provide judgment on how to act on potentially inaccurate predictions. For many tactical fire and maneuver tasks, full automation is prohibitively risky, but close human supervision may be able to mitigate that risk.

human-machine teaming

If quality data are available but judgment is difficult, then AI can still provide predictions if humans first tell the machines what to do. We describe this category as "human-machine teaming" because skilled people can use AI to enhance decision-making, but they must guide and audit AI performance in sensitive or idiosyncratic circumstances. In these situations, quality data will generate reliable predictions. Owing to the difficulty of prespecifying judgment, however, most practitioners are not tempted to deploy full automation because they recognize that doing so may risk creating more bad decisions.

Consider the civilian example of tax law and the ambiguity about whether investment income should be taxed as business income or capital gains. Typically, a company would hire a lawyer to collect facts on the case and predict what the courts are likely to find. Then, the lawyer would advise the client

nuclear-risk.pdf; and Lora Saalman, “Fear of False Negatives: AI and China’s Nuclear Posture,”
Bulletin of the Atomic Scientists blog, April 24, 2018, https://thebulletin.org/2018/04/fear-of-false-
negatives-ai-and-chinas-nuclear-posture/.
91. “Fighter Aircraft Will Soon Get AI Pilots,” Economist, November 19, 2020, https://www
.economist.com/science-and-technology/2020/11/15/ªghter-aircraft-will-soon-get-ai-pilots.

on a course of action. One firm developed an AI that scans tax law decisions to predict tax liabilities. The AI does not recommend a course of action because making that judgment requires knowing the client's risk preferences and comfort navigating the legal system. The AI predicts what would happen if the case were to go to court, but it cannot determine whether going to court is a good idea. Legal decisions in this task are the product of human-machine teaming between the predictive AI and the human lawyer, who must interpret the prediction to judge what advice best serves the client.92

For similar reasons, human-machine teaming ought to flourish in intelligence and planning organizations. There is much growth potential for AI in units that are awash in intelligence, surveillance, and reconnaissance (ISR) data.93 In remotely piloted aircraft (drone) operations, for example, the information processing burden is intense at even the most tactical level. According to an ethnographic study by Timothy Cullen, "to fly the aircraft and control the sensor ball, Reaper and Predator crews had to coordinate the meaning, movement, and presentation of a myriad of menus, windows, and tables on 16 displays and 4 touch screens with 4 keyboards, 2 trackballs, 2 joysticks, and 8 levers."94 Cullen describes a complicated mixture of prediction and judgment as "operators negotiated and constructed a constrained environment in the ground control station to coordinate verbal, typed, written, pictorial, and geographical representations of a mission; to identify patterns in scenes from the aircraft's sensors; and to associate those patterns with friendly and enemy activity."95 ISR drones generate so much data, "37 years of full motion footage in 2011 alone," that "much of the collect goes unanalyzed."96 AI is able to alleviate some of the data processing burden by continuously monitoring multiple data feeds and highlighting patterns of interest. Yet, Cullen highlights ways in which aircrew make many, seemingly minor, value judgments about what they should—and should not—be doing with their sensors and weapons. In other words, AI provides one complement to the data—the prediction—but it does not provide the judgment that also underlies decision-making.

In principle, many intelligence tasks might benefit from machine learning.

92. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb, "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction," Journal of Economic Perspectives, Vol. 33, No. 2 (Spring 2019), p. 35, https://doi.org/10.1257/jep.33.2.31.
93. Keith Dear, "A Very British AI Revolution in Intelligence Is Needed," War on the Rocks, October 19, 2018, https://warontherocks.com/2018/10/a-very-british-ai-revolution-in-intelligence-is-needed/.
94. Timothy M. Cullen, "The MQ-9 Reaper Remotely Piloted Aircraft: Humans and Machines in Action," Ph.D. dissertation, Massachusetts Institute of Technology, 2011, p. 272.
95. Ibid., p. 273.
96. Dear, "A Very British AI Revolution in Intelligence Is Needed."

Image recognition algorithms can sift through drone video feeds to identify
enemy activity. Facial recognition systems can detect individual targets of in-
terest, while emotional prediction algorithms can aid in identifying the hostile
or benign intent of individuals on a crowded street. Speech recognition, voice
synthesis, and translation systems can alleviate shortages of human translators
for human and signals intelligence, as well as for civil-military relations and
information operations. Generally, AI is well suited to the intelligence task of analyzing bulk data and identifying patterns, for example, in identifying and
tracking terrorist groups or insurgents.97

In practice, intelligence is often more art than science. Intelligence professionals deal with deceptive targets, ambiguous data, and subtle interpretations.98 Unforeseen changes in the strategic environment or mission objectives create new data requirements or, worse, undermine the referential integrity of existing data. On a case-by-case basis, practitioners draw on their subject matter expertise and experience to make judgments. Applying this judgment to an AI prediction is a difficult but learnable skill. AI predictions become just another input into a complex, and potentially consequential, decision process. At the same time, there is much potential for dissensus given the complex relationships among those who collect, manage, and consume intelligence, not to mention the perennial risks of intelligence politicization.99

A military organization needs to understand not only its adversaries but also itself. The prospects for AI are sometimes better for command and control (C2) than for ISR because friendly organizations and processes are easier to control, which produces more reliable reporting data. A lot of staff effort is consumed by searching for data, querying other organizations for data, and reanalyzing and reformatting data in response to emerging information requirements. AI can be used to integrate reporting data from disparate databases, helping to resolve contradictions and view activity in a "common operational picture."100 AI-produced decision aids can help personnel analyze unfolding battlefield conditions, run simulations of operational scenarios, and present options for military commanders to evaluate. For example, to deter-

97. James L. Regens, "Augmenting Human Cognition to Enhance Strategic, Operational, and Tactical Intelligence," Intelligence and National Security, Vol. 34, No. 5 (2019), pp. 673–687, https://doi.org/10.1080/02684527.2019.1579410.
98. Minna Räsänen and James M. Nyce, "The Raw Is Cooked: Data in Intelligence Practice," Science, Technology, & Human Values, Vol. 38, No. 5 (September 2013), pp. 655–677, https://doi.org/10.1177/0162243913480049.
99. Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American National Security (New York: Columbia University Press, 2007); and Joshua Rovner, Fixing the Facts: National Security and the Politics of Intelligence (Ithaca, N.Y.: Cornell University Press, 2011).
100. Hoadley and Sayler, Artificial Intelligence and National Security, p. 12.

mine the best method for evacuating injured soldiers, militaries can use predictive models based on weather conditions, available routes, landing sites, and anticipated casualties.101 AI can be used to enhance computerized wargaming and combat simulations, offering more realistic models of "red team" behavior and more challenging training exercises.102 AI could potentially improve mission handoff between rotating units by analyzing unstructured text (i.e., passages of prose rather than standardized fields) in the departing unit's reports.103 Yet, as with intelligence, planning tasks cannot be simply delegated to machines. ISR and C2 reporting systems generate a mass of potentially relevant data, but they are hard to interpret, and associated metadata are often missing or misleading. In these situations, quality data may generate reliable predictions, but human intervention and interpretation is required throughout the decision process.

Human-machine teaming often entails not only task performance (i.e., balancing the cognitive load across people and AI) but also task design (i.e., adjusting the load as circumstances change). Viewed at a more granular level, a task that falls into the human-machine teaming category in our framework might be disaggregated into subtasks that fall into two of the framework's other categories. That is, human practitioners will have to partition a complex decision task into either fully automated or fully human decision-making subtasks. This subdivision requires making mindful decisions about monitoring and controlling the risks of premature automation. For example, human-machine teaming in drone operations involves having both the drone and the drone operators perform certain tasks autonomously. The drone might automatically perform flying tasks (i.e., maintaining course and bearing or reacquiring a lost datalink), while human drone operators might deliberate over legal targeting criteria.
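
Our framework can be read as a routing rule for such subtasks. The sketch below is a schematic of that idea, not an operational architecture; the subtask labels and the two input flags (data quality and clarity of judgment) are illustrative assumptions of our own.

```python
# Schematic sketch of partitioning a decision task along the two dimensions in
# table 2. Subtask names and their flags are illustrative, not doctrinal.
def assign(quality_data: bool, clear_judgment: bool) -> str:
    if quality_data and clear_judgment:
        return "automate"                              # automated decision-making
    if quality_data and not clear_judgment:
        return "human-machine teaming"                 # AI predicts, human judges
    if not quality_data and clear_judgment:
        return "automate only under close human supervision"  # premature automation risk
    return "human decision-making"

drone_subtasks = {
    "maintain course and bearing": (True, True),
    "reacquire lost datalink": (True, True),
    "flag patterns in the sensor feed": (True, False),
    "apply legal targeting criteria": (False, False),
}

for task, (data_ok, judgment_ok) in drone_subtasks.items():
    print(f"{task}: {assign(data_ok, judgment_ok)}")
```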

The overall partition (i.e., the location of the human in the loop) should be adjusted over time as conditions change, which will require humans to be mindful of how the division of labor between humans and machines relates to the task environment and the organizational mission. This balance will be further complicated by interdependencies across tasks and organizations, data access, interpretability, and interoperability issues, as well as competing priorities such as speed, safety, secrecy, efficiency, effectiveness, legality, cybersecurity, stability, adaptability, and so on. Importantly, as figure 1 indicates, the

101. Benjamin Jensen and Ryan Kendall, "Waze for War: How the Army Can Integrate Artificial Intelligence," War on the Rocks, September 2, 2016, https://warontherocks.com/2016/09/waze-for-war-how-the-army-can-integrate-artificial-intelligence/.
102. Kania, "Battlefield Singularity," p. 28.
103. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, p. 90.

organizational and political institutions that are exogenous to decision-making
tasks establish the priorities for these different objectives. Humans are the ulti-
mate source of judgment in all AI systems.

The Strategic Implications of Military AI

The central argument of this article is that machine learning is making prediction cheaper, which in turn makes data and judgment more valuable. This finding also means that quality data and clear judgment enhance AI performance. These conditions vary by decision task, but they are generally harder to meet in military situations given environmental and institutional complexities. Organizations that can meet them, however, may gain a competitive advantage. Human skills are central to this competitive advantage, and this has two important strategic implications.

First, military organizations that rely on AI have incentives to improve both data and judgment. These AI complements are sources of strength. At least one of them—judgment—relies wholly on human beings. Even when goals can be formally specified and pre-delegated for tasks in the automated decision-making category in our framework, humans must engineer the reward function, which they will likely revisit as they monitor system performance. AI adoption may radically change the distribution of judgment by altering who in an organization makes decisions and about what, but in all cases, humans are ultimately responsible for setting objectives, making trade-offs, and evaluating outcomes. There is little chance of this changing anytime soon given the technical state of the art. The other complement—data—also relies on human beings. Developing and implementing data policy necessitates negotiation between data producers and consumers. People also make nuanced judgments when architecting data infrastructure and managing data quality. AI systems can neither design themselves nor clean their own data, which leads us to conclude that increased reliance on AI will make human skills even more important in military organizations.

Second, and for the same reasons, adversaries have incentives to complicate both data and judgment. In a highly competitive environment, organizational strengths become attractive targets and potential vulnerabilities. Since predictable adversaries will play to AI strengths, intelligent adversaries will behave unpredictably. If AI creates military power in one area, adversaries will create military challenges in another. Facing an AI-empowered force, the enemy will attempt to change the game by either undermining the quality of predictions or making them irrelevant. Therefore, strategies to contest, manipulate, or disrupt data and judgment become more relevant as military competitors adopt

AI. The informational and organizational dimensions of war will continue to increase in salience and complexity. Again, this leads us to the conclusion that more military AI will make the human aspects of conflict more important.

This increased importance of human personnel challenges the emerging wisdom about AI and war. Many analyses either assume that AI will replace warriors for key military tasks or speculate that war will occur at machine speed, which in turn creates first-mover advantages that incentivize aggression and undermine deterrence.104 The states that are first to substitute machines for warriors, moreover, are assumed to gain significant military advantages that will shift the balance of power toward early adopters. These outcomes are plausible, but they are based on problematic assumptions about AI substitutability. Conflicts based on AI complementarity may exhibit very different dynamics. We argue that it is more useful to consider the militarized contestation of AI complements (i.e., data and judgment) than to conceive of wars between automated military forces. Conflicts in which data and judgment are perennially at stake may be full of friction, controversy, and unintended consequences, and they may drag on in frustrating ways. In sum, we expect the growing salience of data and judgment in war to subtly alter strategic incentives. As a result, AI-enabled conflicts are more likely to be decided by the slow erosion of resolve and institutional capacity than by set-piece battles between robotic forces.

information contests

The importance of information in war has been increasing for many decades.105 The
growth of ISR infrastructure—on and over the battlefield, at sea and underwater,
and in orbit—has dramatically increased the volume and variety of data available to
military organizations. Long-range precision weapons and high-bandwidth datalinks
have also expanded the number of things that militaries can do with all these data,
which in turn generates even more data about friendly operations. Yet, more and
better data have not always translated into more effective military operations. The
adoption of information technology throughout the past century has typically been
accompanied by an increase in the complexity and geographical dispersion of
military organizations. Data-intensive tasks that emphasize intellectual skills
rather than physical fighting, such as intelligence, communications, and
information operations,

104. For example, Payne, "Artificial Intelligence"; Horowitz, "When Speed Kills"; and Johnson,
"Delegating Strategic Decision-Making to Machines."
105. Lindsay, Information Technology and Military Power, pp. 28–31; and Emily O. Goldman, ed.,
Information and Revolutions in Military Affairs (New York: Routledge, 2015).

have proliferated in military organizations. At the same time, advanced
industrialized nations have asked their militaries to perform more complex
operations. More complexity, in turn, increases the potential for disagreement and
breakdown. Adversaries have also learned to offset the advantages of the ISR
revolution by either adopting asymmetric tactics to blend in with civilian
populations or exploiting the potential of space and cyberspace. As advances in
battlefield sensors make it feasible to detect targets in near real time, enemy
forces learn how to disperse, hide, and deceive.

In short, there may be more data in modern war, but data management has also become
more challenging. Although U.S. weapons may be fast and precise, U.S. wars in
recent decades have been protracted and ambiguous.106 We argue that AI will most
likely deepen rather than reverse these trends. Indeed, automation is both a
response to and a contributing cause of the increasing complexity of military
information practice.

Just as commanders are already preoccupied with C2 architecture, officers in
AI-enabled militaries will seek to gain access to large amounts of data that are
relevant to specific tasks in order to train and maintain AI systems. Units will
have to make decisions about whether they should collect their own data organically
or acquire shared data from other units, government agencies, or coalition
partners. We expect many relevant databases to be classified and compartmented
given the sensitivity of collection techniques or the content itself, which will
complicate sharing. Units might also choose to leverage public data sources or
purchase proprietary commercial data, both of which are problematic because
nongovernmental actors may affect the quality of and access to data. As militaries
tackle new problems, or new operational opportunities emerge, data requirements
will change, and officers will have to routinely find and integrate new data
sources. AI strategy will require militaries to establish data policies, and thus
negotiating access to data will be an ongoing managerial—and human—challenge.

We contend that militaries will face not only data access but also data relevancy
challenges. Heterogeneous data-generating processes allow biases and anomalies to
creep into databases. Although metadata may help to organize information
processing, they are also vulnerable to data friction that only humans can fix.107
Cleaning and curating data sources will therefore be as important as acquiring them
in the first place. To the challenges of producing or procuring data must be added
the challenges of protecting data. Just as supply

106. See, for example, Shimko, The Iraq Wars and America's Military Revolution.
107. Paul N. Edwards et al., "Science Friction: Data, Metadata, and Collaboration," Social Studies of
Science, Vol. 41, No. 5 (October 2011), pp. 667–690, https://doi.org/10.1177/0306312711413314.

chains become attractive targets in mechanized warfare, data supplies will also
become contested.
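
As an illustration of what "cleaning and curating" involves in practice, the sketch
below shows a minimal, hypothetical audit that could run before retraining a model.
The field names, checks, and sample records are invented for the example; deciding
what counts as "clean enough," and what to do with flagged records, remains a human
call.

```python
# Minimal sketch (hypothetical) of a data-curation gate run before retraining:
# automated checks can flag problems, but humans still set the rules and decide what to keep.

from collections import Counter

def audit(records, required=("sensor_id", "timestamp", "label")):
    issues = Counter()
    for r in records:
        if any(k not in r or r[k] is None for k in required):
            issues["missing_fields"] += 1
        if r.get("confidence", 1.0) < 0.0 or r.get("confidence", 0.0) > 1.0:
            issues["out_of_range_confidence"] += 1
    seen = set()
    for r in records:
        key = (r.get("sensor_id"), r.get("timestamp"))
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
    return issues

sample = [
    {"sensor_id": "A1", "timestamp": 100, "label": "vehicle", "confidence": 0.9},
    {"sensor_id": "A1", "timestamp": 100, "label": "vehicle", "confidence": 0.9},  # duplicate report
    {"sensor_id": "B2", "timestamp": 101, "label": None, "confidence": 1.4},       # missing label, bad confidence
]
print(audit(sample))  # Counter({'missing_fields': 1, 'out_of_range_confidence': 1, 'duplicates': 1})
```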

Overall, we expect the rise of AI to exacerbate the already formidable challenges
of cybersecurity. Cybersecurity professionals aim to maintain the confidentiality,
integrity, and availability of an organization's data. Two of these
goals—integrity and availability—capture the AI requirements of unbiased and
accessible data, as discussed above. The goal of confidentiality is also important
insofar as data provide AI adopters with a competitive advantage. In commerce, AI
companies often try to own (rather than buy) the key data that enable their
machines to learn.108 The military equivalent of this is classified information,
which is hidden from the enemy to produce a decision advantage.109 Military
organizations will have strong incentives to protect the classified data that
military AI systems use to learn. For the same reasons, adversaries will have
incentives to steal, manipulate, and deny access to AI learning data. To date, most
discussions of AI and cybersecurity have focused on a substitution theory of
cybersecurity, that is, using AI systems to attack and defend networks.110 But we
argue that a complementary theory of cybersecurity is just as if not more
important. AI will require the entire military enterprise to invest more effort
into protecting and exploiting data. If AI systems are trained with classified
information, then adversaries will conduct more espionage. If AI enhances
intelligence, then adversaries will invest in more counterintelligence. If AI
provides commanders with better information, then adversaries will produce more
disinformation.

Inevitably, different parts of the bureaucracy will tussle among themselves and
with coalition partners and nongovernmental actors to access and curate a huge
amount of heterogeneous and often classified data. Organizations will also struggle
with cyber and intelligence adversaries to maintain control of their own data while
also conducting their own campaigns to collect or manipulate the enemy's data. To
appreciate the strategic implications of AI, therefore, it is helpful to understand
cyber conflict, most of which to date resembles espionage and covert action more
than traditional military warfare. Indeed, chronic and ambiguous intelligence
contests are more common than fast and decisive cyberwar.111 Military reliance on
AI becomes yet another factor abetting

108. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 174–176.
109. Jennifer E. Sims, "Decision Advantage and the Nature of Intelligence Analysis," in Loch K.
Johnson, ed., The Oxford Handbook of National Security Intelligence (New York: Oxford University
Press, 2010).
110. For example, James Johnson, "The AI-Cyber Nexus: Implications for Military Escalation, De-
terrence, and Strategic Stability," Journal of Cyber Policy, Vol. 4, No. 3 (2019), pp. 442–460, https://
doi.org/10.1080/23738871.2019.1701693.
111. Joshua Rovner, "Cyber War as an Intelligence Contest," War on the Rocks, September 16, 2019,

the rise of cyber conflict in global affairs, and the (ambiguous, confusing,
interminable, gray zone) dynamics of cyber conflict are likely to have a strong
influence on the dynamics of AI conflict.

organizational complexity

Just as AI militaries will struggle to procure, clean, curate, protect, and contest
data, they will also struggle to inculcate, negotiate, and legitimate judgment.
Indeed, the challenges of data and judgment go hand in hand. People will find it
harder to interpret a flood of heterogeneous data. More complex data architectures
will require managers to consider the trade-offs among competing objectives (i.e.,
confidentiality, integrity, and availability), which may invite bureaucratic
controversy. Yet, judgment is even more fundamental for organizations that rely on
AI because humans must both tell AI systems which predictions to make and determine
what to do with the predictions once they are made. People who code valuations into
autonomous systems will have enormous power because AI increases the scale of the
impact of some human judgments. For example, individual car drivers make judgments
about their own vehicle, whereas the encoded judgments for self-driving cars can
affect millions of vehicles. Each instance of a given autonomous weapon system,
similarly, will likely share algorithms and training data with others. When widely
shared judgments are wrong, biased, or self-serving, then the AI systems guided by
them can generate large-scale problems. Good judgment becomes particularly
desirable as prediction gets better, faster, and cheaper.

A fundamental organizational challenge is to recruit, train, and retain the human
talent required for human-machine teaming. We anticipate that AI systems will
increase the influence of junior personnel, giving more leverage to their judgment
and decisions. Yet, we also expect that the junior officers, noncommissioned
officers, civilian employees, and government contractors who maintain and operate
AI systems will struggle to understand the consequences of their actions in complex
political situations. Gen. Charles Krulak highlights the role of "the strategic
corporal" on twenty-first-century battlefields.112 Krulak argues that operational
complexity makes tactical actions more strategically consequential, for better or
worse, which places a premium on the character

https://warontherocks.com/2019/09/cyber-war-as-an-intelligence-contest/; Lennart Maschmeyer,
"The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations," International Secu-
rity, Vol. 46, No. 2 (Fall 2021), pp. 51–90, https://doi.org/10.1162/isec_a_00418; and Robert
Chesney and Max Smeets, eds., Cyber Conflict as an Intelligence Contest (Washington, D.C.:
Georgetown University Press, forthcoming).
112. Charles C. Krulak, "The Strategic Corporal: Leadership in the Three Block War," Marines
Magazine, January 1999, https://apps.dtic.mil/sti/pdfs/ADA399413.pdf.

and leadership ability of junior personnel. AI will further increase the burden of
judgment on them. Forward personnel will have to see the predictions from AI
systems, assess whether the data that created the predictions are reliable, and
make value judgments about how and why automated systems can advance the mission.
Moreover, AI systems will require constant reconfiguration and repair as the
context of human-machine teaming changes during actual operations. Military
personnel have long engaged in field-expedient, bottom-up innovation.113 We expect
personnel will likewise hack AI systems to improve mission performance, as they
understand it, even as unauthorized modifications put them into conflict with
system configuration managers elsewhere in the bureaucracy.114 It is important to
emphasize the human capital requirements of combining a sophisticated understanding
of the politico-military situation with the technical savvy to engineer AI in the
field. The strategic corporal in the AI era must be not only a Clausewitzian genius
but also a talented hacker. This may not be a realistic requirement.

The importance of human-machine teaming is increasingly appreciated in
organizations that implement AI systems. Amid all the hype about AI and war, plenty
of thoughtful work seeks to discern the relative advantages of humans and machines
and to devise methods of pairing them together in order to improve
decision-making.115 As the U.S. Department of Defense AI strategy states, "The
women and men in the U.S. armed forces remain our enduring source of strength; we
will use AI-enabled information, tools, and systems to empower, not replace, those
who serve."116 Yet, the strategy's stated goal of "creating a common foundation of
shared data, reusable tools, frameworks and standards, and cloud and edge services"
is more of a description of the magnitude of the problem than a blueprint for a
solution.117 As AI creates

113. Kollars, "War's Horizon."
114. On the general dynamics of military user innovation see Lindsay, Information Technology and
Military Power, pp. 109–135.
115. Andrew Herr, "Will Humans Matter in the Wars of 2030?" Joint Force Quarterly, Vol. 77, No. 2
(2015), pp. 76–83, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-77/jfq-77.pdf; Mary
L. Cummings, Artificial Intelligence and the Future of Warfare (London: Chatham House, Royal Insti-
tute of International Affairs, January 26, 2017); Development, Concepts and Doctrine Centre,
"Human-Machine Teaming," Joint Concept Note 1/18 (London: UK Ministry of Defence, May
2018), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment
_data/file/709359/20180517-concepts_uk_human_machine_teaming_jcn_1_18.pdf; and Mick Ryan,
"Extending the Intellectual Edge with Artificial Intelligence," Australian Journal of Defence and
Strategic Studies, Vol. 1, No. 1 (2019), pp. 23–40, https://www.defence.gov.au/ADC/publications/
AJDSS/documents/volume1-issue1/Full.pdf. NSCAI's Final Report, 2021, also emphasizes human-
machine teaming.
116. Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 2019, p. 4.
117. Ibid., p. 7.

potential for large-scale efficiency improvements, it also creates potential for
large-scale collective action problems. New military staff specialties are sure to
emerge to manage data and judgment resources, creating new institutional equities
and integration challenges. Perhaps even more challenging is the problem of
nurturing trust among all the engineers, administrators, analysts, operators, and
lawyers involved in designing, using, and repairing AI systems.118

As cheap prediction makes human judgment more vital in a wide variety of tasks, and
as more judgment is needed to coordinate human-machine teaming, we anticipate that
military bureaucracies will face complicated command decisions about why, and how,
to conjoin humans and machines. Commercial firms that embrace AI often adjust their
boundaries and business models by contracting out tasks involving data, prediction,
and action (e.g., manufacturing, transportation, advertising, and service
provision) while developing in-house judgment capacities that are too difficult to
outsource.119 Military organizations, likewise, may find it advantageous to share
specialized resources (sensors, shooters, intelligence products, and logistics)
across a decentralized network of units, even as they struggle to make sense of it
all. AI is thus part of a broader historical trend that has been described with
terms like "network-centric warfare," "joint force operations," "integrated
multi-domain operations," and "interagency cross-functional teams." The whole is
more than the sum of its parts, but each part must exercise excellent judgment in
how it leverages shared assets. Historical experience suggests that military
interoperability and shared sensemaking are difficult, but not necessarily
impossible, to achieve.120 We thus expect military and political judgment will
become even more difficult, diffused, and geographically distributed.

Indeed, the ongoing involvement of the "strategic corporal" in conversations about
politico-military ends could end up politicizing the military. In the United
States, as Risa Brooks argues, the normative separation of political ends from
military means has some paradoxically adverse consequences: it enables service
parochialism, undermines civilian oversight, and degrades strategic
deliberation.121 Greater reliance on AI could exacerbate all these problems,

118. Roff and Danks, "'Trust but Verify.'"
119. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 170–178.
120. For example, C. Kenneth Allard, Command, Control, and the Common Defense (New Haven,
Conn.: Yale University Press, 1990); and Scott A. Snook, Friendly Fire: The Accidental Shootdown of
U.S. Black Hawks over Northern Iraq (Princeton, N.J.: Princeton University Press, 2000).
121. Risa Brooks, "Paradoxes of Professionalism: Rethinking Civil-Military Relations in the United
States," International Security, Vol. 44, No. 4 (Spring 2020), pp. 7–44, https://doi.org/10.1162/
isec_a_00374.

precisely because AI is a force multiplier that requires military personnel to
exercise greater judgment. Brooks's argument implies that an AI-intensive defense
bureaucracy could become both more powerful and more politically savvy. If machines
perform the bulk of data gathering, prediction, and tactical warfighting, then the
judgments of human engineers, managers, and operators will be highly consequential,
even as ethical questions of accountability become harder to answer. Some military
personnel may be unable to perform at such a high level of excellence, as attested
by the many scandals during the wars in Iraq and Afghanistan (from targeting errors
to prisoner abuse). Increasing reliance on AI will magnify the importance of
leadership throughout the chain of command, from civilian elites to enlisted
service members.

If a military organization can figure out how to recruit, train, and retain highly
talented personnel, and to thoroughly reorganize and decentralize its C2
institutions, such reforms may help to inculcate and coordinate judgment. Doing so
would enable the military to make the most of human-machine teaming in war. If
judgment is a source of military strength, however, then it may also be a political
vulnerability. As organizational and political judgment becomes the preeminent
source of strength for AI-enabled military forces, we expect that judgment will
also become the most attractive target for adversaries. If AI relies on federated
data and command structures, then adversaries will pursue wedge strategies to break
up military coalitions.122 If the consensus about war aims depends on robust
political support, adversaries will conduct disinformation and influence campaigns
to generate controversy and undermine popular support.123 If automated systems
operate under tightly controlled rules of engagement, adversaries will attempt to
manipulate normative frameworks that legitimize the use of force.124 If AI enables
more efficient targeting, the enemy will present more controversial and morally
fraught targets to test political resolve.125 As prediction machines make some
aspects of military operations more certain, we argue that the entire military
enterprise will become less certain.

122. See, generally, Timothy W. Crawford, "Preventing Enemy Coalitions: How Wedge Strategies
Shape Power Politics," International Security, Vol. 35, No. 4 (Spring 2011), pp. 155–189, https://
doi.org/10.1162/ISEC_a_00036.
123. See, generally, Thomas Rid, Active Measures: The Secret History of Disinformation and Political
Warfare (New York: Farrar, Straus and Giroux, 2020).
124. Janina Dill, Legitimate Targets? Social Construction, International Law, and US Bombing (Cam-
bridge: Cambridge University Press, 2014); and Ryder McKeown, "Legal Asymmetries in Asym-
metric War," Review of International Studies, Vol. 41, No. 1 (January 2015), pp. 117–138, https://
doi.org/10.1017/S0260210514000096.
125. Erik Gartzke and James Igoe Walsh, "The Drawbacks of Drones: The Effects of UAVs on Esca-
lation and Instability in Pakistan," Journal of Peace Research, forthcoming.

Conclusion

It is premature to assume that AI will replace human beings in either war or any
other competitive endeavor. To understand the impact of AI in any field, it is
important to disaggregate decision-making into its components: data, judgment,
prediction, and action. An economic perspective on AI views machine learning as
more efficient prediction (and robotics as a more efficient action), which makes
data and human judgment more valuable. This means that innovation in algorithms and
computing power is necessary but not sufficient for AI performance. We have argued
that the context of decision-making—where and how organizations use AI and for what
purposes—determines whether automation is possible or desirable. The
complementarity of data and judgment, in turn, has important implications for the
preparation for and conduct of AI-enabled war.
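
This economic framing can be summarized in a deliberately stylized expected-utility
sketch; the notation below is our own illustrative shorthand rather than a formula
taken from the literature the article draws on.

```latex
% Stylized decomposition of a single decision (illustrative shorthand only):
% machine learning improves the prediction term; humans must still supply U.
a^{*} \;=\; \arg\max_{a \in A} \sum_{s \in S}
  \underbrace{\Pr(s \mid \text{data})}_{\text{prediction}}
  \,\underbrace{U(a, s)}_{\text{judgment}}
```

Read this way, cheaper machine prediction sharpens the estimate of Pr(s | data) and
robotics executes the chosen action, but the payoffs U(a, s) that rank outcomes
must still be specified by humans, which is why better prediction raises the value
of both data and judgment.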

We have argued that the strategic environment shapes the quality of data, and
organizational institutions shape the difficulty of judgment, which gives rise to
four different categories of AI performance in military tasks. Quality data and
clear judgment enable "automated decision-making," which is most feasible for
bureaucratically constrained administration and logistics tasks. Low-quality data
and difficult judgments, which are common in strategy and command tasks,
necessitate "human decision-making." Clear judgments applied to low-quality data
create risks of "premature automation," especially when AI systems are authorized
to execute fire and maneuver tasks. Quality data and difficult judgments can be
combined in "human-machine teaming," which can be used to improve intelligence and
planning tasks. We expect that many, if not most, practical military applications
of AI are likely to fall into this last category. Even highly bureaucratized tasks
that seem to fit in the "automated decision-making" category can require human
judgment, especially when budget and personnel decisions are at stake or when
resource scarcity creates difficult operational trade-offs. Likewise, highly
nuanced command tasks that seem to fit in the "human decision-making" category can
usually be broken down into a subset of tasks that might benefit from AI decision
aids. Most practitioners who implement military AI systems are aware of the risks
of "premature automation" in fire and maneuver, in part due to widespread
apprehension about "killer robots."126 To determine the appropriate division of
labor between humans and machines, therefore, humans must decide what

126. Roff, “The Strategic Robot Problem.”

to predict, and they must create data policies and AI learning plans that detail
who should do what with such predictions.127 The dynamic circumstances of military
operations will require ongoing finessing of the human-machine teaming
relationship.
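
For readers who prefer a compact restatement, the sketch below encodes the four
categories as a simple lookup keyed on the two conditions discussed above. The
mapping and the task examples restate the preceding paragraph; the function and
variable names are our own illustrative choices.

```python
# Illustrative restatement of the four-quadrant framework: data quality and
# clarity of judgment jointly determine the appropriate role for AI in a task.

QUADRANTS = {
    # (quality_data, clear_judgment): (category, example tasks from the text)
    (True,  True):  ("automated decision-making", "administration and logistics"),
    (False, False): ("human decision-making",     "strategy and command"),
    (False, True):  ("premature automation",      "fire and maneuver (risk case)"),
    (True,  False): ("human-machine teaming",     "intelligence and planning"),
}

def classify(quality_data: bool, clear_judgment: bool) -> str:
    category, examples = QUADRANTS[(quality_data, clear_judgment)]
    return f"{category} (e.g., {examples})"

print(classify(True, False))  # human-machine teaming (e.g., intelligence and planning)
```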

Although we agree with the conventional wisdom that AI is potentially
transformative, we disagree about what that transformation might be.128 In general,
we expect that the strategic, organizational, and ethical complexity of warfare
will increase in the AI era. When cheaper prediction is applied in a political
context that is as challenging and uncertain as warfare, then quality data and
sound judgment become extremely valuable. Adversaries, in turn, will take steps to
undermine the quality of data and judgment by manipulating information and
violating expectations. Correcting for adversarial countermeasures will further
increase the complexity of judgment, which exacerbates the inherent friction and
frustration of war.

We must reemphasize that our focus throughout has been on narrow AI, particularly
the improvements in machine learning that have led to better, faster, and cheaper
predictions. We contend that the recent advances in AI that have led to media
attention, commercial applications, and anxiety about civil liberties have very
little to do with AGI. Some experts believe that AGI will eventually happen, but
this is not what all the current AI hype is about.129 Other experts like Brian
Cantwell Smith are outright pessimistic: "Neither deep learning, nor other forms of
second-wave AI, nor any proposals yet advanced for third-wave, will lead to genuine
intelligence."130 Indeed, the "intelligence" metaphor is very misleading when it
comes to understanding what machine learning actually does.131 Advances in narrow
AI, by contrast, have led to better, faster, and cheaper predictions. Such AI
systems are task-specific.

127. Heuristics are provided in Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 123–151.
128. See, for example, Horowitz, "Artificial Intelligence, International Competition, and the Bal-
ance of Power"; and Payne, "Artificial Intelligence."
129. Daniel Kahneman, "Comment on 'Artificial Intelligence and Behavioral Economics,'" in
Agrawal, Gans, and Goldfarb, eds., The Economics of Artificial Intelligence, pp. 608–610. Gary
Marcus estimated that AGI would arrive between thirty and seventy years from now. See Shivon
Zilis et al., "Lighting Round on General Intelligence," panel presentation at Machine Learning and
the Market for Intelligence Conference, Creative Destruction Lab, University of Toronto, Octo-
ber 26, 2017, YouTube video, 13:16, https://www.youtube.com/watch?v=RxLIQj_BMhk.
130. Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment (Cam-
bridge: Massachusetts Institute of Technology Press, 2019), p. xiii. See also Harry Collins, Artificial
Experts: Social Knowledge and Intelligent Machines (Cambridge: Massachusetts Institute of Technol-
ogy Press, 1990); and Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand
the World (Cambridge: Massachusetts Institute of Technology Press, 2018).
131. A less anthropocentric definition requires a longer discussion about the meanings of intelli-
gence, autonomy, and automation. See Heather M. Roff, "Artificial Intelligence: Power to the Peo-
ple," Ethics & International Affairs, Vol. 33, No. 2 (2019), pp. 124–140, https://doi.org/10.1017/
S0892679419000121.

If AGI becomes a reality, then such a machine would also provide its own judgment.
AGI would be able to perform the entire decision cycle by itself. In that case, it
is not at all clear what role humans would have in warfare beyond suffering the
consequences of war.132 We argue that AGI speculation carries the theme of AI
substitution to an extreme, whereby a machine would be able to outwit, overpower,
and eliminate any actor who tried to prevent it from accomplishing its goal.133
This doomsday scenario is often likened to the "Sorcerer's Apprentice" segment from
the movie Fantasia, in which the eponymous apprentice, played by Mickey Mouse,
enchants a broom and directs it to fetch water from the well. As Mickey falls
asleep, the broom ends up flooding the entire castle. Mickey awakes with alarm and
desperately tries to chop up the broom, but this only results in more and better
brooms that overwhelm his abilities. An eminently useful tactical task turns into a
strategic disaster because of a poorly specified objective. Opinions vary on
whether the superintelligence threat should be taken seriously.134 Nevertheless,
the Sorcerer's Apprentice scenario dramatizes the importance of judgment for any
type of AI. An AI that only cares about optimizing a goal—even though that goal was
defined by a human—will not consider the important pragmatic context that humans
may care about.

We have defined judgment narrowly in economic terms as the specification of the
utility function. The rich concept of judgment, however, deserves further analysis.
Just as decision-making can be disaggregated into its components, judgment might
also be disaggregated into the intellectual, emotional, and moral capacities that
people need to determine what matters and why. Military judgment encompasses not
only the Clausewitzian traits of courage, determination, and coup d'oeil, but also
a capacity for fairness, empathy, and other elusive qualities. Some wartime
situations merit ruthlessness, deviousness, and enmity, while others call for
mercy, candor, and compassion. To these character traits must be added the
engineering virtues of curiosity, creativity, and elegance insofar as personnel
will have to reconfigure AI systems in the field. We expect that the general logic
of complementarity will still apply at this more fine-grained level. Any future AI
that is able to automate some aspects of judgment, therefore, will make other
aspects even more valuable.

132. For speculation on the consequences of an AGI that is able to formulate and execute politico-
military strategy, see Kenneth Payne, Strategy, Evolution, and War: From Apes to Artificial Intelligence
(Washington, D.C.: Georgetown University Press, 2018).
133. Bostrom, Superintelligence; and Stuart Russell, Human Compatible: Artificial Intelligence and the
Problem of Control (New York: Viking, 2019).
134. For discussion see Nathan Alexander Sears, "International Politics in the Age of Existential
Threats," Journal of Global Security Studies, Vol. 6, No. 3 (September 2021), https://doi.org/10.1093/
jogss/ogaa027.

Moreover, the rich phenomenology of judgment, which AI makes more valuable, has
important implications for professional military education. More technology should
not mean more technocracy. On the contrary, personnel would be wise to engage more
with the humanities and reflect on human virtues as militaries become more
dependent on AI. In general, reliance on AI will tend to amplify the importance of
human leadership and the moral aspects of war.

In the end, we expect that more intensive human-machine teaming will result in
judgment becoming more widely distributed in military organizations, while
strategic competition will become more politically fraught. Whatever the future of
automated warfare holds, humans will be a vital part of it.
