Prediction and Judgment
Avi Goldfarb and Jon R. Lindsay
Why Artificial Intelligence Increases the Importance of Humans in War
There is an emerging policy consensus that artificial intelligence (AI) will transform international politics. As stated in the 2021 report of the U.S. National Security Commission on AI, "The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them."1 A lack of clarity over basic concepts, however, complicates an assessment of the security implications of AI. AI also has multiple meanings, ranging from big data, machine learning, robotics, and lethal drones, to a sweeping "fourth industrial
Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the University of Toronto, and a research associate at the National Bureau of Economic Research. Jon R. Lindsay
is Associate Professor at the School of Cybersecurity and Privacy, with a secondary appointment at the Sam
Nunn School of International Affairs, both at the Georgia Institute of Technology.
The authors are grateful for research assistance from Morgan MacInnes and constructive feedback from Andrea Gilli, Mauro Gilli, James Johnson, Ryder McKeown, Heather Roff, members of the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and the anonymous reviewers. This project was supported by funding from the Social Sciences and Humanities Research Council of Canada (File number 435-2017-0041) and the Sloan Foundation. The authors presented a more limited version of this general argument in a previous report published by the Brookings Institution: Avi Goldfarb and Jon R. Lindsay, "Artificial Intelligence in War: Human Judgment as an Organizational Strength and a Strategic Liability" (Washington, D.C.: Brookings Institution Press, November 30, 2020), https://www.brookings.edu/research/artificial-intelligence-in-war-human-judgment-as-an-organizational-strength-and-a-strategic-liability/.
1. National Security Commission on Artificial Intelligence [NSCAI], Final Report (Washington, D.C.: NSCAI, March 2021), p. 7, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf. Russian President Vladimir Putin makes the same point with more flair: "Whoever becomes the leader in this sphere will become the ruler of the world," quoted in Radina Gigova, "Who Putin Thinks Will Rule the World," CNN, September 2, 2017, https://www.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html; and Keith Dear, "Will Russia Rule the World through AI? Assessing Putin's Rhetoric against Russia's Reality," RUSI Journal, Vol. 164, No. 5–6 (2019), pp. 36–60, https://doi.org/10.1080/03071847.2019.1694227. Many U.S. officials, such as the chief technology officer of the National Geospatial-Intelligence Agency, draw a stark conclusion: "If the United States refuses to evolve, it risks giving China or some other adversary a technological edge that Washington won't be able to overcome," quoted in Anthony Vinci, "The Coming Revolution in Intelligence Affairs," Foreign Affairs, August 31, 2020, https://www.foreignaffairs.com/articles/north-america/2020-08-31/coming-revolution-intelligence-affairs. On Chinese ambitions for an "intelligentized" military, see Elsa B. Kania, "Chinese Military Innovation in the AI Revolution," RUSI Journal, Vol. 164, No. 5–6 (2019), pp. 26–34, https://doi.org/10.1080/03071847.2019.1693803.
International Security, Vol. 46, No. 3 (Winter 2021/22), pp. 7–50, https://doi.org/10.1162/isec_a_00425
© 2022 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
tion.”2 Signiªcant investment in AI and the imaginaries of science ªction only
add to the confusion.
In this article we focus on machine learning, which is the key AI technology
that receives attention in the press, in management, and in the economics liter-
ature.3 We leave aside debates about artificial general intelligence (AGI), or
systems that match or exceed human intelligence.4 Machine learning, or “nar-
row AI,” by contrast, is already widely in use. Successful civil applications in-
clude navigation and route planning, image recognition and text translation,
and targeted advertising. Michael Horowitz describes AI as “the ultimate
enabler” for automating decision-making tasks in everything from public ad-
ministration and commercial business to strategic intelligence and military
combat.5 In 2018, the Department of Defense observed that “AI is poised
to transform every industry, and is expected to impact every corner of the
Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others."6 We would be surprised, however, if
AI transformed all these activities to the same degree for all actors who use it.
One of the key insights from the literature on the economics of technology is
that the complements to a new technology determine its impact.7 AI, from this
perspective, is not a simple substitute for human decision-making. Rapid ad-
vances in machine learning have improved statistical prediction, but predic-
2. On the potential impacts of these technologies, see the recent special issue edited by Michael Raska et al., "Introduction," Journal of Strategic Studies, Vol. 44, No. 4 (2021), pp. 451–455, https://doi.org/10.1080/01402390.2021.1917877.
3. See, for example, Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Cambridge, Mass.: Harvard Business Review Press, 2018), p. 24; and Jason Furman and Robert Seamans, "AI and the Economy," in Josh Lerner and Scott Stern, eds., Innovation Policy and the Economy, Vol. 19 (Chicago: University of Chicago Press, 2018), pp. 161–191.
4. In general, see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014). We comment briefly on AGI in the conclusion.
5. Michael C. Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power," Texas National Security Review, Vol. 1, No. 3 (May 2018), p. 41, https://doi.org/10.15781/T2639KP49.
6. Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity (Washington, D.C.: U.S. Department of Defense, February 12, 2019), p. 5, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF. On potential military applications, see NSCAI, Final Report; and Daniel S. Hoadley and Kelley M. Sayler, Artificial Intelligence and National Security, CRS Report R45178 (Washington, D.C.: Congressional Research Service, November 21, 2019), https://crsreports.congress.gov/product/pdf/R/R45178/7.
7. See, for example, Timothy F. Bresnahan, Erik Brynjolfsson, and Lorin M. Hitt, "Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-Level Evidence," Quarterly Journal of Economics, Vol. 117, No. 1 (February 2002), pp. 339–376, https://doi.org/10.1162/003355302753399526; and Shane Greenstein and Timothy F. Bresnahan, "Technical Progress and Co-invention in Computing and in the Uses of Computers," Brookings Papers on Economic Activity: Microeconomics (Washington, D.C.: Brookings Institution Press, 1996), pp. 1–83.
tion is only one aspect of decision-making. Two other important elements of
decision-making—data and judgment—represent the complements to predic-
tion. Just as cheaper bread expands the market for butter, advances in AI that reduce the costs of prediction are making its complements more valuable. AI prediction models require data, and accurate prediction requires more and better data. Quality data provide plentiful and relevant information without systemic bias. Data-driven machine prediction can efficiently fill in information needed to optimize a given utility function, but the specification of the utility function ultimately relies on human judgment about what exactly should be maximized or minimized. Judgment determines what kinds of patterns and outcomes are meaningful and what is at stake, for whom, and in which contexts. Clear judgments are well specified in advance and agreed upon by relevant stakeholders. When quality data are available and an organization can articulate clear judgments, then AI can improve decision-making.
We argue that if AI makes prediction cheaper for military organizations, then data and judgment will become both more valuable and more contested. This argument has two important strategic implications. First, the conditions that have made AI successful in the commercial world—quality data and clear judgment—may not be present, or present to the same degree, for all military tasks. In military terms, judgment encompasses command intentions, rules of engagement, administrative management, and moral leadership. These functions cannot be automated with narrow AI technology. Increasing reliance on AI, therefore, will make human beings even more vital for military power, not less. Second, the importance of data and judgment creates incentives for strategic competitors to improve, protect, and interfere with information systems and command institutions. Thus, conflicts over information will become more salient, and organizational coordination will become more complex. In contrast with assumptions about rapid robot wars and decisive shifts in military advantage, we expect AI-enabled conflict to be characterized by environmental uncertainty, organizational friction, and political controversy. The contestation of AI complements, therefore, is likely to unfold differently than
the imagined wars of AI substitutes.8
Many hopes and fears about AI recapitulate earlier ideas about the in-
8. See, for example, Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018); Michael C. Horowitz, "When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability," Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 764–788, https://doi.org/10.1080/01402390.2019.1621174; James Johnson, "Artificial Intelligence and Future Warfare: Implications for International Security," Defense & Security Analysis, Vol. 35, No. 2 (2019), pp. 147–169, https://doi.org/10.1080/14751798.2019.1600800; and Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (New York: Oxford University Press, 2021).
formation technology revolution in military affairs (RMA) and cyberwarfare.9
Familiar tropes abound regarding the transformative effects of commercial in-
novation, the speed and danger of networked computation, the dominance of offense over defense, and the advantages of a rising China over a vulnerable United States. But skeptics have systematically challenged both the logic and empirical basis of these assumptions about the RMA10 and cyberwar.11 Superficially plausible arguments about information technology tend to ignore im-
portant organizational and strategic factors that shape the adoption and use of
digital systems. As in the economics literature, an overarching theme in schol-
arship on military innovation is that technology is not a simple substitute for
military power.12 Technological capabilities depend on complementary institu-
tions, skills, and doctrines. Moreover, implementation is usually marked by friction, unintended consequences, and disappointed expectations. The RMA
and cyber debates thus offer a cautionary tale for claims about AI. It is reason-
able to expect organizational and strategic context to condition the perfor-
mance of automated systems, as with any other information technology.13
AI may seem different, nevertheless, because human agency is at stake.
Recent scholarship raises a host of questions about the prospect of auto-
9. See, for example, John Arquilla and David Ronfeldt, "Cyberwar Is Coming!," Comparative Strategy, Vol. 12, No. 2 (Spring 1993), pp. 141–165; Arthur K. Cebrowski and John H. Garstka, "Network-Centric Warfare: Its Origin and Future," Proceedings, U.S. Naval Institute, January 1998, pp. 28–35; William A. Owens and Edward Offley, Lifting the Fog of War (New York: Farrar, Straus and Giroux, 2000); and Richard A. Clarke and Robert K. Knake, Cyber War: The Next Threat to National Security and What to Do about It (New York: Ecco, 2010).
10. See, for example, Eliot A. Cohen, "Change and Transformation in Military Affairs," Journal of Strategic Studies, Vol. 27, No. 3 (2004), pp. 395–407, https://doi.org/10.1080/1362369042000283958; and Keith L. Shimko, The Iraq Wars and America's Military Revolution (New York: Cambridge University Press, 2010).
11. See, for example, Erik Gartzke, "The Myth of Cyberwar: Bringing War in Cyberspace Back Down to Earth," International Security, Vol. 38, No. 2 (Fall 2013), pp. 41–73, https://doi.org/10.1162/ISEC_a_00136; Jon R. Lindsay, "The Impact of China on Cybersecurity: Fiction and Friction," International Security, Vol. 39, No. 3 (Winter 2014/15), pp. 7–47, https://doi.org/10.1162/ISEC_a_00189; Brandon Valeriano and Ryan C. Maness, Cyber War versus Cyber Realities: Cyber Conflict in the International System (New York: Oxford University Press, 2015); and Rebecca Slayton, "What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment," International Security, Vol. 41, No. 3 (Winter 2016/17), pp. 72–109, https://doi.org/10.1162/ISEC_a_00267.
12. Reviews include Adam Grissom, "The Future of Military Innovation Studies," Journal of Strategic Studies, Vol. 29, No. 5 (2006), pp. 905–934, https://doi.org/10.1080/01402390600901067; and Michael C. Horowitz, "Do Emerging Military Technologies Matter for International Politics?" Annual Review of Political Science, Vol. 23 (May 2020), pp. 385–400, https://doi.org/10.1146/annurev-polisci-050718-032725. See, especially, Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton, N.J.: Princeton University Press, 2004); Michael C. Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics (Princeton, N.J.: Princeton University Press, 2010); and Andrea Gilli and Mauro Gilli, "Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage," International Security, Vol. 43, No. 3 (Winter 2018/19), pp. 141–189, https://doi.org/10.1162/isec_a_00337.
13. On general patterns of information practice in war, see Jon R. Lindsay, Information Technology and Military Power (Ithaca, N.Y.: Cornell University Press, 2020).
mated decision-making. How will war “at machine speed” transform the
offense-defense balance?14 Will AI undermine deterrence and strategic stabil-
ity,15 or violate human rights?16 How will nations and coalitions maintain con-
trol of automated warriors?17 Does AI shift the balance of power from
incumbents to challengers or from democracies to autocracies?18 These ques-
tions focus on the substitutes for AI because they address the political, operational, and moral consequences of replacing people, machines, and processes
with automated systems. The literature on military AI has focused less on the
complements of AI, namely the organizational infrastructure, human skills,
doctrinal concepts, and command relationships that are needed to harness the
advantages and mitigate the risks of automated decision-making.19
14. Kenneth Payne, "Artificial Intelligence: A Revolution in Strategic Affairs?" Survival, Vol. 60, No. 5 (2018), pp. 7–32, https://doi.org/10.1080/00396338.2018.1518374; Paul Scharre, "How Swarming Will Change Warfare," Bulletin of the Atomic Scientists, Vol. 74, No. 6 (2018), pp. 385–389, https://doi.org/10.1080/00963402.2018.1533209; Ben Garfinkel and Allan Dafoe, "How Does the Offense-Defense Balance Scale?" Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 736–763, https://doi.org/10.1080/01402390.2019.1631810; and John R. Allen, Frederick Ben Hodges, and Julian Lindley-French, "Hyperwar: Europe's Digital and Nuclear Flanks," in Allen, Hodges, and Lindley-French, Future War and the Defence of Europe (New York: Oxford University Press, 2021), pp. 216–245.
15. Jürgen Altmann and Frank Sauer, "Autonomous Weapon Systems and Strategic Stability," Survival, Vol. 59, No. 5 (2017), pp. 117–142, https://doi.org/10.1080/00396338.2017.1375263; Horowitz, "When Speed Kills"; Mark Fitzpatrick, "Artificial Intelligence and Nuclear Command and Control," Survival, Vol. 61, No. 3 (2019), pp. 81–92, https://doi.org/10.1080/00396338.2019.1614782; Erik Gartzke, "Blood and Robots: How Remotely Piloted Vehicles and Related Technologies Affect the Politics of Violence," Journal of Strategic Studies, published online October 3, 2019, https://doi.org/10.1080/01402390.2019.1643329; and James Johnson, "Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?" Journal of Strategic Studies, published online April 30, 2020, https://doi.org/10.1080/01402390.2020.1759038.
16. Ian G.R. Shaw, "Robot Wars: US Empire and Geopolitics in the Robotic Age," Security Dialogue, Vol. 48, No. 5 (2017), pp. 451–470, https://doi.org/10.1177/0967010617713157; and Lucy Suchman, "Algorithmic Warfare and the Reinvention of Accuracy," Critical Studies on Security, Vol. 8, No. 2 (2020), pp. 175–187, https://doi.org/10.1080/21624887.2020.1760587.
17. Heather M. Roff, "The Strategic Robot Problem: Lethal Autonomous Weapons in War," Journal of Military Ethics, Vol. 13, No. 3 (2014), pp. 211–227, https://doi.org/10.1080/15027570.2014.975010; Heather M. Roff and David Danks, "'Trust but Verify': The Difficulty of Trusting Autonomous Weapons Systems," Journal of Military Ethics, Vol. 17, No. 1 (2018), pp. 2–20, https://doi.org/10.1080/15027570.2018.1481907; Risa Brooks, "Technology and Future War Will Test U.S. Civil-Military Relations," War on the Rocks, November 26, 2018, https://warontherocks.com/2018/11/technology-and-future-war-will-test-u-s-civil-military-relations/; and Erik Lin-Greenberg, "Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making," Texas National Security Review, Vol. 3, No. 2 (Spring 2020), pp. 56–76, https://dx.doi.org/10.26153/tsw/8866.
18. Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power"; Ben Buchanan, "The U.S. Has AI Competition All Wrong," Foreign Affairs, August 7, 2020, https://www.foreignaffairs.com/articles/united-states/2020-08-07/us-has-ai-competition-all-wrong; and Michael Raska, "The Sixth RMA Wave: Disruption in Military Affairs?" Journal of Strategic Studies, Vol. 44, No. 4 (2021), pp. 456–479, https://doi.org/10.1080/01402390.2020.1848818.
19. A notable exception is Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power." We agree with Horowitz that organizational complements determine AI diffusion, but we further argue that complements also shape AI employment, which leads us to different expectations about future war.
In this article, we challenge the assumptions behind AI substitution and explore the implications of AI complements. An army of lethal autonomous weapon systems may well be destabilizing, and such an army may well be attractive to democracies and autocracies alike. The idea that machines will replace warriors, however, represents a misunderstanding about what warriors actually do. We suggest that it is premature to forecast radical strategic consequences without first clarifying the problem that AI is supposed to solve. We provide a framework that explains how the complements of AI (i.e., data and judgment) affect decision-making. In general, automation is advantageous when quality data can be combined with clear judgments. But the consummate military tasks of command, fire, and maneuver are fraught with uncertainty and confusion. In contrast, more institutionalized tasks in administration and logistics tend to have copious data and clear goals, which are conducive to automation. We argue that militaries risk facing bad or tragic outcomes if they conflate these conditions by prematurely providing autonomous systems with clear objectives in uncertain circumstances. Conversely, for intelligence and operational tasks that have quality data but difficult judgments, teams of humans and machines can distribute the cognitive load of decision-making. We expect many if not most military AI tasks to fall into the latter category, which we describe as human-machine teaming. The net result, we argue, is that data and judgment will become increasingly valuable and contested, and thus AI-enabled warfare will tend to become more protracted and confusing.
We develop this argument in five parts. First, we provide an overview of our analytical framework, which distinguishes the universal process of decision-making from its variable political and technological context. This framework explains how data and judgment affect the human-machine division of labor in decision-making. Second, we describe how strategic and institutional conditions, which differ in business and military affairs, shape the quality of data and the difficulty of judgment. We then combine these factors into four different categories of AI performance of decision-making tasks, which we illustrate with commercial and military examples. The penultimate section discusses the strategic implications of data and judgment becoming more valuable. We conclude with a summary of the argument and further implications.
The Political and Technological Context of Decision-Making
Business and military organizations are similar in many ways, but they oper-
ate in very different circumstances. In figure 1, we analytically distinguish the
AI-relevant similarities and differences by embedding an economic model of
Figure 1. The Strategic Context of Decision-Making in Military Organizations
decision-making into an international relations framework.20 Decision-making
both shapes and is shaped by the political and technological context. The stra-
tegic environment and organizational institutions affect the quality of data and
judgment, respectively. Meanwhile, innovation in machine learning—
largely driven by the civilian sector—lowers the cost of prediction.
20. Elements depicted in dashed lines in figure 1 are important for the overall story, but we will not discuss them in detail in this article. We include them to depict the full decision-making process—data, judgment, prediction, action—and to distinguish machine learning from other types of automation technology. We thus discuss robotics or drones that automate military action only to the extent that machine learning provides a decision input for them. Similar considerations about complementarity apply to drones as well; see Andrea Gilli and Mauro Gilli, "The Diffusion of Drone Warfare? Industrial, Organizational, and Infrastructural Constraints," Security Studies, Vol. 25, No. 1 (2016), pp. 50–84, https://doi.org/10.1080/09636412.2016.1134189. For the sake of parsimony, we also omit intelligence, surveillance, and reconnaissance (ISR) technologies that affect data, as well as information and communication technologies (ICTs) that help coordinate anything whatsoever. Again, the logic of complementarity applies more generally to ICTs; see Lindsay, Information Technology and Military Power. While our focus in this article is on theory building rather than testing, the same framework in figure 1 could be used to compare cases (e.g., in business, the military, or cross-nationally) that leverage similar AI technology but in different contexts.
political context: environment and institutions
We adopt standard international relations distinctions between the interna-
tional system and domestic institutions.21 The “strategic environment” in
figure 1 refers to the external problems confronting a military organization.
To alter or preserve facts on the ground, through conquest or denial, a mili-
tary needs information about many things, such as the international balance
of power, diplomatic alignments and coalitions, geographical terrain and
weather, the enemy’s operational capabilities and disposition, and interactions
with civil society. These external matters constitute threats, targets, opportuni-
ties, resources, and constraints for military operations. A military also needs
information about internal matters, such as the capabilities and activities of
friendly forces, but these are a means to an end.22 The strategic environment is
ultimately what military data are about, and the structure and dynamics of the
environment affect the quality of the data.
"Institutions and preferences" in figure 1 refer to the ways in which a mili-
tary organization solves its strategic problems. This general category encom-
passes bureaucratic structures and processes,
interservice and coalition
politics, civil-military relations, interactions with the defense industry, and other domestic politics. Any of these factors might influence the goals and val-
ues of a military organization or the way it interprets a given situation. Organi-
zational institutions embody preferences, whatever their source, which in turn
21. Our modest goal here is to emphasize that strategic and organizational context shapes the performance of AI technology in military decision-making tasks. In this article we do not take a position on which of these contextual factors will be more influential in any given situation. We also omit heterogeneity within and dynamic interactions across these factors. Future research could disaggregate these factors to explore more specific hypotheses about AI and military power. Lindsay, Information Technology and Military Power, pp. 32–70, discusses a similar framework in more detail but with different nomenclature. The general analytic distinction between environment, organization, and technology is also employed by Barry R. Posen, The Sources of Military Doctrine: France, Britain, and Germany between the World Wars (Ithaca, N.Y.: Cornell University Press, 1984). On the interaction between system- and unit-level factors in realist theory, see Gideon Rose, "Neoclassical Realism and Theories of Foreign Policy," World Politics, Vol. 51, No. 1 (October 1998), pp. 144–172, https://doi.org/10.1017/S0043887100007814; and Kevin Narizny, "On Systemic Paradigms and Domestic Politics: A Critique of the Newest Realism," International Security, Vol. 42, No. 2 (Fall 2017), pp. 155–190, https://doi.org/10.1162/ISEC_a_00296.
22. From a rationalist perspective, whereby preferences are exogenously specified, internal processes are instrumental to the fundamental task of knowing and influencing the world. In natural systems, of course, internal processes may become infused with value and endogenously shape organizational behavior, as discussed by Philip Selznick, TVA and the Grass Roots: A Study in the Sociology of Formal Organization (Berkeley: University of California Press, 1949); and Herbert Kaufman, The Forest Ranger: A Study in Administrative Behavior (Baltimore, Md.: Johns Hopkins University Press, 1960). Our framework allows for both possibilities by interacting environmental data and organizational preferences, whatever their source. Moreover, at the level of analysis of any given decision task, the "environment" can be analyzed as including "institutions" too. We omit these complexifying relationships because they only reinforce our general point about the importance of context.
affect the quality of judgment.23 Institutional structures and processes may
produce coordination problems, political controversies, or interpretive dif-
ficulties that make it hard for a military organization to figure out what mat-
ters and why.
Moreover, as discussed below, we expect that the adoption of AI for some
military decision tasks will (endogenously) affect the strategic environment
and military institutions over time. As data and judgment become more valu-
able, strategic competitors will have incentives to improve and contest them. We thus expect conflicts over information to become more salient while orga-
nizational coordination will become more complex.
technological context: machine learning as prediction
The resurgence of interest in AI since the turn of the millennium has been
driven by rapid advances in a subfield called machine learning. Machine learning techniques represent a different approach compared to "Good Old-Fashioned AI" (GOFAI).24 GOFAI emphasizes deductive theorem proving and search optimization. Machine learning, by contrast, is a form of statistical prediction, which is the process of using existing data to inductively generate missing information.25 While the term prediction often implies forecasting the future, pattern recognition and object classification are also forms of prediction because they fill in information about situations encountered for the first time. Machines can automate many prediction tasks that humans perform today (e.g., image recognition, navigation, and forecasting), and they can also increase the number, accuracy, complexity, and speed of predictions. This has the potential to alter human workflows. While machines may not make decisions, they can alter who makes what decisions and when. As machine learn-
ing lowers the cost of prediction, organizations are also innovating ways to
improve data and judgments so that they can make better decisions.
Prediction usually involves generalizing from a set of training data to clas-
23. Internal processes and external situations may affect the goals and values of an organization.
Our framework is agnostic regarding the ultimate source of preferences. Whatever their source,
preferences become embodied in institutions that proximally transmit goals and values to
decision-makers. On the general debate between structuralists and institutionalists about political
and doctrinal preferences, see Posen, The Sources of Military Doctrine; Alexander E. Wendt, "The Agent-Structure Problem in International Relations Theory," International Organization, Vol. 41, No. 3 (Summer 1987), pp. 335–370, http://www.jstor.org/stable/2706749; and Stephen van Evera, "Hypotheses on Nationalism and War," International Security, Vol. 18, No. 4 (Spring 1994), pp. 5–39, https://doi.org/10.2307/2539176; and Jeffrey W. Legro and Andrew Moravcsik, "Is Anybody Still a Realist?" International Security, Vol. 24, No. 2 (Fall 1999), pp. 5–55, https://doi.org/10.1162/016228899560130.
24. Terrence J. Sejnowski, The Deep Learning Revolution (Cambridge: Massachusetts Institute of Technology Press, 2018).
25. Agrawal, Gans, and Goldfarb, Prediction Machines, p. 24.
sify or synthesize new data. It became clear in the early 2000s that improve-
ments in computing, memory, and bandwidth could make machine learning commercially viable. Firms like Google, Amazon, and Facebook have success-
fully targeted their advertising and digital services by coupling “big data” that
they harvest from consumer behavior with automated prediction techniques.
These same developments have also enabled espionage and surveillance at an
unprecedented scale.26
From an economic perspective, modern AI is best understood as a better,
faster, and cheaper form of statistical prediction. The overall effect on decision-
制作, 然而, is indeterminate. This implies that organizations, 军队
and otherwise, will be able to perform more prediction in the future than they
do today, but not necessarily that their performance will improve in all cases.
the decision-making process
Economic decision theory emerged alongside the intellectual tradition of cy-
bernetics.27 As Herbert Simon observed over sixty years ago, “A real-life deci-
sion involves some goals or values, some facts about the environment, and some inferences drawn from the values and facts."28 We describe these elements as judgment, data, and prediction. Together they produce actions that shape economic or political outcomes. Feedback from actions produces more data, which can be used for more predictions and decisions, or to reinterpret judgment. The so-called OODA loop in military doctrine captures the same
ideas.29 Decision cycles govern all kinds of decision tasks, from the trivial
(picking up a pencil) to the profound (mobilizing for war). The abstract
26. David V. Gioe, Michael S. Goodman, and Tim Stevens, "Intelligence in the Cyber Era: Evolution or Revolution?" Political Science Quarterly, Vol. 135, No. 2 (Summer 2020), pp. 191–224, https://doi.org/10.1002/polq.13031.
27. The main ideas from the economics literature on decision-making are summarized in Itzhak Gilboa, Making Better Decisions: Decision Theory in Practice (Oxford: Wiley-Blackwell, 2011). See also John D. Steinbruner, The Cybernetic Theory of Decision: New Dimensions of Political Analysis (Princeton, N.J.: Princeton University Press, 1974). On the intellectual impact of cybernetics generally, see Ronald R. Kline, The Cybernetics Moment: Or Why We Call Our Age the Information Age (Baltimore, Md.: Johns Hopkins University Press, 2015). Classic applications of cybernetic decision theory include Karl W. Deutsch, The Nerves of Government: Models of Political Communication and Control (New York: Free Press, 1963); and James R. Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, Mass.: Harvard University Press, 1989).
28. Herbert A. Simon, "Theories of Decision-Making in Economics and Behavioral Science," American Economic Review, Vol. 49, No. 3 (June 1959), p. 273, https://www.jstor.org/stable/1809901.
29. OODA stands for the "observe, orient, decide, and act" phases of the decision cycle. Note that "orient" and "decide" map to prediction and judgment, respectively. These phases may occur sequentially or in parallel in any given implementation. On the influence of John Boyd's cybernetic OODA loop in military thought see James Hasík, "Beyond the Briefing: Theoretical and Practical Problems in the Works and Legacy of John Boyd," Contemporary Security Policy, Vol. 34, No. 3 (2013), pp. 583–599, https://doi.org/10.1080/13523260.2013.839257.
decision model is agnostic about implementation, which means that the logic
of decision might be implemented with organic, organizational, or technologi-
cal components.
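To make this division of labor concrete, the decision cycle can be sketched in code. The sketch below is ours and purely illustrative, assuming a hypothetical targeting task with made-up probabilities and payoffs; it is not a description of any fielded system. It shows only that the prediction step and the judgment step are separate inputs to the same decision.

```python
# A minimal, hypothetical sketch of one decision task in the cycle above.
def predict(data):
    """Prediction: use data to fill in missing information, here the
    probability of each possible state of the world."""
    return {"target_present": 0.7, "target_absent": 0.3}

def judge():
    """Judgment: specify the utility of each action in each state.
    This is what people, not the prediction model, must supply."""
    return {
        ("engage", "target_present"): 10.0,
        ("engage", "target_absent"): -100.0,  # e.g., harm to noncombatants
        ("hold", "target_present"): -5.0,     # e.g., a missed opportunity
        ("hold", "target_absent"): 0.0,
    }

def decide(data):
    """Action: choose whatever maximizes expected utility; feedback from
    the action then becomes new data for the next cycle."""
    probs, utility = predict(data), judge()
    actions = {action for (action, _state) in utility}
    return max(actions, key=lambda a: sum(p * utility[(a, s)]
                                          for s, p in probs.items()))

print(decide(data={"sensor_report": "..."}))  # prints 'hold' with these numbers
```

With these hypothetical numbers the expected utility of engaging (0.7 × 10 − 0.3 × 100 = −23) is worse than holding (−3.5), so the chosen action changes whenever either the prediction or the payoffs change, which is the point of treating them as distinct complements.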
然而, implementation is precisely what is at stake with AI. 数字 1 thus illus-
trates how data, 判断, prediction, and action affect the human-machine
division of labor in decision-making. We highlight the human-machine divi-
sion of labor in decision-making because the task-speciªc implementation of
AI has signiªcant consequences. The universality of cybernetic decision-
making explains why many consider AI to be a general-purpose technology,
like electricity or the internal combustion engine.30 AI can indeed improve pre-
措辞, which is a vital input for any sort of decision-making. But AI is not the
only input. Organizations also rely on data and judgment to make decisions
in task-speciªc circumstances. Put simply, AI is a general-purpose technology
that performs differently in speciªc contexts.
the human-machine division of labor
When analyzing or deploying AI, it is necessary to consider particular tasks
that serve particular goals. Machine learning is not AGI. Our unit of analysis,
therefore, is the decision task, that is, the ensemble of data, predictions, judgments, and actions that produce a specific organizational outcome. Most organizations perform many different and interrelated tasks, such as strategy, management, human resources, marketing, network administration, manufacturing, operations, security, and logistics. Military analogs of these tasks include command, administration, training, intelligence, communications, fire, maneuver, protection, and sustainment. Within and across these categories are myriad different tactics, techniques, and procedures. Any of these tasks, at whatever scope or scale, can directly or indirectly support an organization's overall mission, which may or may not be well defined. Indeed, a task itself may be poorly defined, in part because task decomposition is a problem for
managerial judgment.
AI performance in any given task is a function of the quality of data and the
difficulty of judgment. These two complements provide essential context for automated decision-making. Data are high quality if relevant information is abundantly available and not systematically biased. Judgment is well defined if goals can be clearly specified in advance and stakeholders agree on them.
30. See, for example, Iain M. Cockburn, Rebecca Henderson, and Scott Stern, "The Impact of Artificial Intelligence on Innovation: An Exploratory Analysis," in Ajay Agrawal, Joshua Gans, and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda (Chicago: University of Chicago Press, 2019), pp. 115–146.
Table 1. The Implications of Data and Judgment for Automation in Decision-Making

                          Judgment: clear                            Judgment: difficult
Data: high-quality        Fully automated decision-making is         Automated predictions can inform
                          more efficient.                            human decisions.
Data: low-quality         Full automation is possible but risky.     Automated decision-making is not feasible.
The degree to which data quality is high or low, and judgment is clear or dif-
ficult, determines the comparative advantages of humans and machines in decision-making. By contrast, substitutes determine only the technical potential for automation (i.e., by reducing the costs of prediction or action). Table 1
summarizes the effects of data and judgment on AI performance. The implica-
tions of AI for future war are necessarily speculative, which makes it even
more important to theorize from a solid deductive foundation.
AI Complements in Business and War
Numerous empirical studies of civilian AI systems have validated the basic
finding that AI depends on quality data and clear judgment.31 To the extent
that decision-making is indeed a universal activity, it is reasonable to expect
models of it to apply to militaries. But we caution against generalizing from
the business world to military affairs (and vice versa). Commercial and mili-
tary organizations perform dissimilar tasks in different contexts. Militaries are
only infrequently “in business” because wars are rare events.32 Objectives such
as "victory" or "security" are harder to define than "shareholder value" or "profit." Combatants attempt to physically destroy their competitors, and the
consequences of failure are potentially existential.
One underappreciated reason why AI has been applied successfully in
many commercial situations is because the enabling conditions of quality data
and clear judgment are often present. Peaceful commerce generally takes place
in institutionalized circumstances. Laws, property rights, contract enforcement
mechanisms, diversified markets, common expectations, and shared behavioral norms all benefit buyers and sellers. These institutional features make
31. Detailed arguments and evidence are presented in Agrawal, Gans, and Goldfarb, Prediction Machines.
32. Gary King and Langche Zeng, "Explaining Rare Events in International Relations," International Organization, Vol. 55, No. 3 (2001), pp. 693–715, https://doi.org/10.1162/00208180152507597.
transactions more consistent and efficient.33 Consistency, in turn, provides the
essential scaffolding for full automation. We expect AI to be more successful in
more institutionalized circumstances and for more structured tasks.
War, by contrast, occurs in a more anarchic environment. In the international system, according to the intellectual tradition of realism, there are no legitimate overarching institutions to adjudicate disputes, enforce international agreements, or constrain behavior.34 Actors must be prepared to defend themselves or ally with others for protection. Allies and adversaries alike have incentives to misrepresent their capabilities and interests, and for the same reasons to suspect deception by others.35 Militarized crises and conflicts abound in secrecy and uncertainty. War aims are controversial, almost by definition,
and they mobilize the passions of the nation, for better or worse.
We expect the absence of constraining institutions in war to undermine the
AI-enabling conditions of quality data and clear judgment. One exception that
proves the rule is that a military bureaucracy may be able to provide scaffold-
ing for some military tasks. Robust organizational institutions, in other words, might substitute for weak international institutions. However, there are limits to what organizations can accomplish in the inherently uncertain and contested environment of war. The specific context of data and judgment will determine
the viability of automation for any given task.
data and the strategic environment
Commercial AI systems often need thousands, if not millions or billions,
of examples to generate high-quality predictions. As deep-learning pioneer
Geoffrey Hinton puts it, “Take any old problem where you have to predict
something and you have a lot of data, and deep learning is probably going to
33. See, for example, R.H. Coase, "The Problem of Social Cost," Journal of Law and Economics, Vol. 3 (October 1960), pp. 1–44, https://doi.org/10.1086/466560; and Oliver E. Williamson, "The Economics of Organization: The Transaction Cost Approach," American Journal of Sociology, Vol. 87, No. 3 (November 1981), pp. 548–577, https://www.jstor.org/stable/2778934. These same features are associated with liberal perspectives on international politics, such as Robert O. Keohane, "The Demand for International Regimes," International Organization, Vol. 36, No. 2 (Spring 1982), pp. 325–355, https://www.jstor.org/stable/2706525; John R. Oneal and Bruce M. Russett, "The Classical Liberals Were Right: Democracy, Interdependence, and Conflict, 1950–1985," International Studies Quarterly, Vol. 41, No. 2 (June 1997), pp. 267–293, https://doi.org/10.1111/1468-2478.00042; and G. John Ikenberry, After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order after Major Wars (Princeton, N.J.: Princeton University Press, 2001). A suggestive hypothesis that we do not develop in depth here is that the same conditions that are conducive to liberal institutions should also be conducive to AI performance.
34. These are standard assumptions in realist theory. For example, Hans J. Morgenthau, Politics among Nations: The Struggle for Power and Peace (New York: Alfred A. Knopf, 1960); and Kenneth N. Waltz, Theory of International Politics (Reading, Mass.: Addison-Wesley, 1979).
35. James D. Fearon, "Rationalist Explanations for War," International Organization, Vol. 49, No. 3 (Summer 1995), pp. 379–414, https://www.jstor.org/stable/2706903.
make it work better than the existing techniques.”36 The need for a large quan-
tity of data, as well as detailed metadata that labels and describes its content, is well understood. But the need for quality data is less appreciated. Two factors can undermine the relevancy of data. First, the data may be biased toward one group of people or situation.37 Second, data on the particular situation being predicted may not exist. This latter situation happens surprisingly often, because predictions are especially useful when they provide insight into what will happen if an organization changes its behavior. If the organization has never behaved in a certain way, then relevant data will not exist, and re-
lated statistical prediction will fail.38
A competitor or adversary can exacerbate both problems of data relevancy
by manipulating data to create bias or interdicting the supply of data. If an ad-
versary finds a way to access and corrupt the data used to train AI systems,
then predictions become less reliable.39 More generally, if AI becomes good at
optimizing the solution for any given problem, then an intelligent enemy has
incentives to change the problem. In the parlance of AI, the enemy will “go be-
yond the training set” by creating a situation for which there is no prior exam-
ple in data used for machine learning. An adversary could innovate new
tactics that are hard for AI systems to detect or pursue aims that AI systems do
not anticipate.
The sources of uncertainty in war are legion (e.g., poor weather, unfamiliar
terrain, bad or missing intelligence, misperception, enemy deception). As Carl
von Clausewitz famously states, “War is the realm of uncertainty; three quar-
ters of the factors on which action in war is based are wrapped in a fog of
greater or lesser uncertainty.”40 Clausewitz also uses the mechanical metaphor
36. Geoffrey Hinton, "On Radiology," presented at the Machine Learning and Market for Intelligence Conference, Creative Destruction Lab, University of Toronto, October 26, 2016, YouTube video, 1:24, https://youtu.be/2HMPRXstSvQ.
37. Bo Cowgill and Catherine E. Tucker, "Algorithmic Fairness and Economics," Columbia Business School Research Paper, SSRN (February 14, 2020), https://dx.doi.org/10.2139/ssrn.3361280; and Jon Kleinberg et al., "Discrimination in the Age of Algorithms," Journal of Legal Analysis, Vol. 10 (2018), pp. 113–174, https://doi.org/10.1093/jla/laz001.
38. In this case, causal inference requires different tools because the counterfactual situation is never observed. There is a rich technical literature on these ideas, rooted in the Rubin causal model. Widely used textbooks are Joshua D. Angrist and Jörn-Steffen Pischke, Mostly Harmless Econometrics: An Empiricist's Companion (Princeton, N.J.: Princeton University Press, 2009); and Guido W. Imbens and Donald B. Rubin, Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction (New York: Cambridge University Press, 2015).
39. Battista Biggio and Fabio Roli, "Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning," Pattern Recognition, Vol. 84 (December 2018), pp. 317–331, https://doi.org/10.1016/j.patcog.2018.07.023; and Heather M. Roff, "AI Deception: When Your Artificial Intelligence Learns to Lie," IEEE Spectrum, February 24, 2020, https://spectrum.ieee.org/automaton/artificial-intelligence/embedded-ai/ai-deception-when-your-ai-learns-to-lie.
40. Carl von Clausewitz, On War, ed. and trans. Michael Eliot Howard and Peter Paret (Princeton, N.J.: Princeton University Press, 1989), p. 101.
of “friction” to describe organizational breakdowns. “Friction” in information
technology can also create “fog” because the same systems adopted to im-
prove certainty become new sources of uncertainty. Military personnel in net-
worked organizations struggle to connect systems, negotiate data access,
customize software, and protect information security. Computer glitches and
configuration problems tend to "accumulate and end by producing a kind of
friction that is inconceivable unless one has experienced war.”41 Empirical
studies of wartime practice reveal a surprising amount of creative hacking to
repair and reshape technologies to deal with the tremendous friction of war-
fare in the information age.42 Yet, informal adaptation of data processing sys-
tems can also create interoperability, accountability, and security problems.
Managerial intervention to control these risks creates even more friction. 我们
suggest that AI systems designed to “lift the fog of war” could just as easily
“shift the fog” right back into the organization.43
Although war is rife with uncertainties that can distort or disrupt data, 我们
expect the quality of data to vary by task and situation. Put differently, 这
microstructure of the strategic environment is very important. We expect data
about friendly forces to be more reliable because commanders can mandate re-
porting formats and schedules. We expect logistics and administration reports
to be more reliable than combat reporting, which is more exposed to enemy in-
teraction. Intelligence about enemy dispositions and capabilities should be
even less reliable. 即使是这样, intelligence about “puzzles” (such as the locations
and capabilities of weapon systems) may be more reliable than intelligence
about “mysteries” (such as future intentions and national resolve).44 Data from
technical sensors tend to be better structured and more voluminous than hu-
man intelligence reports, which require signiªcant interpretation. Enemy de-
ception or disinformation operations tend to undermine data quality, 一样
intelligence politicization for parochial interests. It is critical to assess the spe-
ciªc strategic context of data, and thus the suitability of AI, for any given deci-
sion task.
41. Ibid., p. 119.
42. Inter alia, James A. Russell, Innovation, Transformation, and War: Counterinsurgency Operations in Anbar and Ninewa Provinces, Iraq, 2005–2007 (Stanford, Calif.: Stanford University Press, 2010); Timothy S. Wolters, Information at Sea: Shipboard Command and Control in the U.S. Navy, from Mobile Bay to Okinawa (Baltimore, Md.: Johns Hopkins University Press, 2013); and Nina A. Kollars, "War's Horizon: Soldier-Led Adaptation in Iraq and Vietnam," Journal of Strategic Studies, Vol. 38, No. 4 (2015), pp. 529–553, https://doi.org/10.1080/01402390.2014.971947.
43. In general, Lindsay, Information Technology and Military Power.
44. Gregory F. Treverton, Reshaping National Intelligence for an Age of Information (New York: Cambridge University Press, 2003), pp. 11–13.
judgment and military institutions
Even if there are enough of the right type of data, AI still relies on people to
determine what to predict and why. Commercial firms, for example, make many different judgments in determining their business models, corporate values, labor relations, and negotiating objectives. Military organizations face analogous management challenges, but they also face unique ones. Military judgment also encompasses national interests, political preferences, strategic missions, commander's intent, rules of engagement, combat ethics, and martial socialization. Because the costs and consequences of war are so profound, all these problems tend to be marked by ambiguity, controversy, and painful trade-offs. Judgment thus becomes more difficult, and ever more consequential, in military affairs.
There are three types of machine learning algorithms.45 All require human judgment. First, in "supervised learning," the human tells the machine what to predict. Second, "unsupervised learning" requires judgment about what to classify and what to do with the classifications. Third, "reinforcement learning" requires advance specification of a reward function. A reward function assigns a numerical score to the perceived state of the world to enable a machine to maximize a goal. More complicated strategies may combine these approaches by establishing instrumental goals in pursuit of the main objective. In every case, a human ultimately codes the algorithm and defines the payoffs for
the machine.
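To see where these human specifications enter in the third case, a reward function can be written out explicitly. The sketch below is hypothetical; the state variables and weights are ours and are not drawn from any real system. The only point is that every number is a human judgment, fixed before any learning occurs, about what the machine should value.

```python
# Hypothetical reward function for a reinforcement learner. Every weight
# encodes a prior human judgment about what to maximize or minimize.
def reward(state: dict) -> float:
    """Assign a numerical score to a perceived state of the world."""
    score = 0.0
    if state.get("objective_reached"):
        score += 100.0                              # value placed on mission success
    score -= 50.0 * state.get("civilian_harm", 0)   # penalty per harmful incident
    score -= 1.0 * state.get("fuel_consumed", 0.0)  # cost of resources expended
    return score

print(reward({"objective_reached": True, "civilian_harm": 1, "fuel_consumed": 20.0}))  # 30.0
```

The learner only optimizes against these weights; change them and the "optimal" behavior changes with them.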
In economic terms, judgment is the specification of the utility function.46 The
preferences and valuations that determine utility are distinct from the strate-
gies that maximize it. To take a trivial example, people who do not mind get-
ting wet and dislike carrying umbrellas will not carry one, regardless of the
weather forecast. People who dislike getting wet and do not mind carrying
umbrellas might always have an umbrella in their bag. Others might carry
an umbrella if the chance of rain is 75 percent but not if it is 25 percent. The
prediction of rain is independent of preferences about getting wet or being
prepared to get wet. Similarly, the AI variation on the notorious "trolley problem" poses an ethical dilemma about life-or-death choices. For example,
should a self-driving car swerve to avoid running over four children at the risk
45. For an accessible introduction, see Sejnowski, The Deep Learning Revolution.
46. Critiques of economic rationality that appeal to cognitive psychology or social institutions un-
derscore the importance of judgment. See Rose McDermott, Political Psychology in International Re-
lations (Ann Arbor: University of Michigan Press, 2004); and Janice Gross Stein, "The Micro-
Foundations of International Relations Theory: Psychology and Behavioral Economics,” Interna-
tional Organization, Vol. 71, No. S1 (2017), pp. S249–S263, https://doi.org/10.1017/S002081831
6000436.
of killing its human passenger? If the AI predicts even odds that someone will
die either way, the car should swerve if all lives are equally valuable, but it
should not swerve if the passenger’s life is worth at least four times as much as
that of a random child. This somewhat contrived dilemma understates the
complexity of the judgment involved. Indeed, the ethical dilemmas of AI
reinvigorate longstanding critiques of utilitarian reasoning. As Heather Roff
points out, “We cannot speak about ethical AI because all AI is based on em-
pirical observations; we cannot get an ‘ought’ from an ‘is.’ If we are clear eyed
about how we build, design, and deploy AI, we will conclude that all of the
normative questions surrounding its development and deployment are those
that humans have posed for millennia.”47
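The arithmetic behind the swerve example can be made explicit. In this sketch the even-odds prediction is held fixed while the value weights, which are hypothetical and stand in for the judgment, are varied; the weights alone determine which action minimizes expected loss.

```python
# Same prediction, different judgments: an illustrative sketch of the dilemma.
def should_swerve(p_death: float, value_child: float, value_passenger: float) -> bool:
    """Swerving risks one passenger; going straight risks four children.
    Swerve only if swerving has the lower expected loss."""
    loss_if_swerve = p_death * value_passenger
    loss_if_straight = p_death * 4 * value_child
    return loss_if_swerve < loss_if_straight

print(should_swerve(0.5, value_child=1.0, value_passenger=1.0))  # True: swerve
print(should_swerve(0.5, value_child=1.0, value_passenger=4.0))  # False: do not swerve
```

The prediction machine can supply p_death; it cannot supply the two value parameters, which is exactly the judgment at issue.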
If the trolley problem seems far-fetched, consider the case of a self-driving
Uber car that killed a cyclist in Tempe, Arizona.48 The AI had predicted a low
but nonzero probability that a human was in its path. The car was designed
with a threshold for ignoring low-probability risks. The priority of not hitting
humans was obvious enough. However, with an error tolerance set to zero, the car
would not be able to drive. The question of where to set the tolerance was a
judgment call. In this case, it appears that the prespecified judgment was tragi-
cally inappropriate for the context, but the prediction machine had absolutely
no concept of what was at stake.
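The same separation can be read into the Tempe case: somewhere a predicted probability was compared against a prespecified tolerance, and the tolerance, not the prediction, carried the judgment about acceptable risk. The fragment below is a hypothetical illustration of that general pattern, not code from the vehicle or from the NTSB report.

```python
# Illustrative only: a fixed tolerance for acting on low-probability detections.
RISK_TOLERANCE = 0.15  # judgment: predicted risks below this level are ignored

def brake_for(p_human_in_path: float) -> bool:
    """The model supplies the probability; the threshold encodes the judgment.
    A tolerance of zero brakes for everything (the car cannot drive); too high
    a tolerance ignores a real person in the road."""
    return p_human_in_path >= RISK_TOLERANCE
```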
A well-specified AI utility function has two characteristics. First, goals are clearly defined in advance. If designers cannot formally specify payoffs and priorities for all situations, then each prediction will require a customized judgment. This is often the case in medical applications.49 When there are
many possible situations, human judgment is often needed upon seeing the di-
agnosis. The judgment cannot be determined in advance because it would take
too much time to specify all possible contingencies. Such dynamic or nu-
anced situations require, in effect, incomplete contracts that leave out complex, situation-specific details to be negotiated later.50 Because all situations cannot
be stipulated in advance, judgment is needed after seeing the prediction to in-
terpret the spirit of the agreement.
47. Heather M. Roff, The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles (Washington, D.C.: Brookings Institution Press, December 17, 2018), https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
48. National Transportation Safety Board [NTSB], "Highway Accident Report: Collision between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018," Highway Accident Report NTSB/HAR-19/03 (Washington, D.C.: NTSB, November 19, 2019), https://trid.trb.org/view/1751168.
49. Trevor Jamieson and Avi Goldfarb, "Clinical Considerations When Applying Machine Learning to Decision-Support Tasks versus Automation," BMJ Quality & Safety, Vol. 28, No. 10 (2019), pp. 778–781, https://doi.org/10.1136/bmjqs-2019-009514.
50. Williamson, "The Economics of Organization."
The military version of incomplete contracting is “mission command,”
which specifies the military objective and rules of engagement but empowers
local personnel to interpret guidance, coordinate support, and tailor opera-
tions as the situation develops.51 The opposite of mission command, some-
times described as "task orders," is more like a complete contract that tells a
unit exactly what to do and how to do it. Standard operating procedures, doc-
trinal templates, and explicit protocols help to improve the predictability of
operations by detailing instructions for operations and equipment handling. In
turbulent environments with unpredictable adversaries, however, standard-
ized task orders may be inappropriate. The greater the potential for uncer-
tainty and accident in military operations, the greater the need for local
commanders to exercise initiative and discretion. In Clausewitzian terms,
"fog" on the battlefield and "friction" in the organization require commanders
to exercise "genius," which is "a power of judgment raised to a marvelous
pitch of vision, which easily grasps and dismisses a thousand remote possibili-
ties which an ordinary mind would labor to identify and wear itself out in so
doing."52 The role of "genius" in mission command becomes particularly im-
portant, and particularly challenging, in modern combined arms warfare and
multi-domain operations.53 When all possible combinations of factors cannot
possibly be specified in advance, personnel have to exercise creativity and ini-
tiative in the field. Modern military operations tend to mix elements of both
styles by giving local commanders latitude in how they interpret, implement,
and combine the tools, tactics, and procedures that have been standardized, in-
stitutionalized, and exercised in advance.
The second characteristic of a well-specified AI utility function is that all
stakeholders should agree on what goals to pursue. When it is difficult for
people to agree on what to optimize, transparent institutional processes
for evaluating or aggregating different preferences may help to validate or le-
gitimate decisions that guide AI systems. Unfortunately, consensus becomes
elusive as “genius” becomes more geographically distributed, socially collabo-
rative, and technically exacting.54 In an ethnography of divisional command in
51. For example, Department of the Army, ADP 6-0: Mission Command: Command and Control
of Army Forces, Army Doctrine Publication No. 6-0 (Washington, D.C.: U.S. Department of
the Army, May 17, 2012), https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18314-ADP
_6-0-000-WEB-3.pdf.
52. Clausewitz, On War, p. 112.
53. Biddle, Military Power; Bart Van Bezooijen and Eric-Hans Kramer, “Mission Command in the
Information Age: A Normal Accidents Perspective on Networked Military Operations," Journal of
Strategic Studies, Vol. 38, No. 4 (2015), pp. 445–466, https://doi.org/10.1080/01402390.2013.844127;
and Ryan Grauer, Commanding Military Power: Organizing for Victory and Defeat on the Battlefield
(Cambridge: Cambridge University Press, 2016).
54. John Ferris and Michael I. Handel, "Clausewitz, Intelligence, Uncertainty, and the Art of Com-
Afghanistan, Anthony King writes that "a general must define a mission, man-
age the tasks of which it is comprised and motivate the troops."55 The first of
these three factors—specifying positive objectives and negative limitations—
is the consummate function of judgment; AI offers little help here. AI might
provide some support for the second factor, oversight and administration,
which involves a mixture of judgment and prediction. The third factor is lead-
ership, which is fundamentally a judgment problem insofar as leaders attempt
to socialize common purposes, values, and interpretations throughout an or-
ganization. Again, AI is of little use for issues of leadership, which become
more important as organizations become geographically and functionally dis-
tributed: “The bureaucratic expertise of the staff has been improved and
their cohesiveness has been condensed so that they are now bound in dense
solidarity, even when they are not co-present."56 Indeed, "decision-making
has proliferated" in all three areas—strategy, management, leadership—
because a "commander can no longer direct operations alone."57 According
to King, the commanding general is now less of a central controller and more
of a social focal point for coordinating the complex interactions of "the com-
mand collective."
Collective command, however, is a collective action problem. In some cases,
standard operating procedures and socialization rituals can simplify judgment
tasks. King finds that "command teams, command boards, principal planning
groups and deputies have appeared to assist and to support the commander
and to manage discrete decision cycles to which the commander cannot at-
tend."58 Yet, in other cases, personnel from different services, branches, or
units may disagree over how to interpret even basic tactics, techniques, and
procedures.59 Disagreement may turn into controversy when mission assign-
ments fall outside the scope of what professionals deem right or appropriate,
as when armies are tasked with counterinsurgency, air forces are tasked with
close air support, or cultural preferences clash.60 More serious disagreements
mand in Military Operations," Intelligence and National Security, Vol. 10, No. 1 (1995), pp. 1–58,
https://doi.org/10.1080/02684529508432286.
55. Anthony King, Command: The Twenty-First-Century General (Cambridge: Cambridge University
Press, 2019), p. 438.
56. Ibid., p. 443.
57. Ibid., p. 439.
58. Ibid., p. 440.
59. Harvey M. Sapolsky, Eugene Gholz, and Caitlin Talmadge, US Defense Politics: The Origins of
Security Policy, 3rd ed. (New York: Routledge, 2017), pp. 93–116.
60. On the cultural origins of military preferences, see Elizabeth Kier, Imagining War: French and
British Military Doctrine between the Wars (Princeton, N.J.: Princeton University Press, 1997); Jeffrey
W. Legro, Cooperation under Fire: Anglo-German Restraint during World War II (Ithaca, N.Y.: Cornell
University Press, 2013 [1995]); and Austin Long, The Soul of Armies: Counterinsurgency Doctrine and
Military Culture in the US and UK (Ithaca, N.Y.: Cornell University Press, 2016).
about war aims and military methods can emerge within the civil-military
chain of command or among coalition partners.61 Just as data availability and
bias vary for any given decision task, we also expect variability in the clarity
and consensus of judgment. Any factors that exacerbate confusion or disagree-
ment in military institutions should be expected to make judgment more
difficult for AI automation.
AI Performance in Military Decision-Making Tasks
As we have explained in the previous sections, decision-making is a universal
process, but decision inputs are context specific. Even if the same AI technol-
ogy is available to all organizations, the strategic and institutional conditions
that have enabled AI success in the business world may not be present in war.
We thus infer two general hypotheses about the key AI complements of data
and judgment. First, stable, cooperative environments are more conducive
for plentiful, unbiased data; conversely, turbulent, competitive environments
tend to produce limited, biased data. Second, institutional standardization
and solidarity encourage well-defined, consensual judgments; conversely, id-
iosyncratic local practices and internal conflict lead to ambiguous, controver-
sial judgments.
The combination of these hypotheses describes four different regimes of AI
performance in military decision-making tasks. Table 2 summarizes these cate-
gories by synthesizing the strategic and institutional inputs from the decision-
making context (figure 1) into the human-machine division of labor (table 1)
for key military functions. The best case for AI performance is what we call
“automated decision-making.” Quality data and clear judgment are most
likely to be available in highly routinized administrative and logistics tasks
that are more analogous to civilian organizational tasks. Anything that bureau-
cracies can do well, AI can probably help them to do better. The worst case for
AI is the opposite quadrant, in which both automation complements are ab-
sent. We label this category "human decision-making" because AI cannot per-
form tasks characterized by limited, biased data and ambiguous, controversial
judgment. For military strategy and command tasks, the Clausewitzian ex-
tremes of “fog” in the environment and “friction” in the organization require
human “genius.” In the other two quadrants, in which one necessary comple-
ment is present but the other is absent, the human-machine division of labor is
61. For example, Risa Brooks, Shaping Strategy: The Civil-Military Politics of Strategic Assessment
(Princeton, N.J.: Princeton University Press, 2008); and Jeremy Pressman, Warring Friends: Alliance
Restraint in International Politics (Ithaca, N.Y.: Cornell University Press, 2012).
Table 2. How Strategic and Institutional Conditions Shape AI Performance in Military Tasks

                                          Effect of Environment on Data
                                          stability and cooperation            turbulence and competition
                                          → high-quality data                  → low-quality data

Effect of Institutions on Judgment

standardization and solidarity            Automated Decision-Making            Premature Automation
→ clear judgment                          Full automation can increase the     Full automation in complex fire
                                          scale and efficiency of highly       and maneuver tasks heightens
                                          bureaucratized administrative        the risks of targeting error and
                                          and logistics tasks.                 inadvertent escalation.

idiosyncrasy and conflict                 Human-Machine Teaming                Human Decision-Making
→ difficult judgment                      Automated decision aids for          Strategy and command will
                                          intelligence analysis and            continue to rely on human
                                          operational planning can             interpretation and leadership.
                                          augment, but not replace,
                                          human decision-making.
more complicated. The category of “premature automation” describes situa-
tions in which machines receive clear goals in uncertain environments. If
rapid, automated decisions are tightly coupled to lethal actions, this is a recipe
for disaster. The mismatch between the evolving situation and the decisions
encoded in AI systems may not be immediately obvious to humans, which
heightens the risks of tragic outcomes such as targeting error or inadvertent
escalation. In the converse category of "human-machine teaming," judgment
tasks are difficult, but quality data are available. AI decision aids (e.g.,
graphs, tables, map overlays, image annotations, and textual summaries) pro-
vide predictions that augment but do not replace human decisions, while hu-
mans maintain a healthy skepticism about AI shortcomings. Many operational
planning and intelligence analysis tasks fall into this category, along with tacti-
cal mission support tools. To demonstrate the plausibility of our framework,
we next offer a few commercial examples and explore potential military appli-
cations and further implications in each category.
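The logic of table 2 can also be summarized in a few lines of code. The sketch below is ours: the four category labels come from the table, but the binary coding of the two complements and the example tasks are illustrative assumptions rather than part of the framework itself.

```python
# A toy rendering of the two-by-two framework in table 2. Category labels are
# taken from the table; the example tasks and their coding are assumptions.

def ai_regime(high_quality_data: bool, clear_judgment: bool) -> str:
    """Map the two AI complements onto the four regimes of table 2."""
    if high_quality_data and clear_judgment:
        return "automated decision-making"
    if high_quality_data:
        return "human-machine teaming"
    if clear_judgment:
        return "premature automation"
    return "human decision-making"

examples = {
    "peacetime logistics resupply": (True, True),
    "intelligence analysis of drone video": (True, False),
    "autonomous fire against fleeting targets": (False, True),
    "formulating wartime strategy": (False, False),
}
for task, (data_ok, judgment_ok) in examples.items():
    print(f"{task}: {ai_regime(data_ok, judgment_ok)}")
```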
automated decision-making
Full automation of decision-making can improve performance if there are
quality data and clear judgments. For example, Australia's Pilbara region has
large quantities of iron ore. The mining sites are far from any major city,
and the local conditions are often so hot that it is hazardous for humans to
work there. Since 2016, mining giant Rio Tinto has deployed dozens of self-
driving trucks.62 These trucks have saved operating costs while reducing risk
to human operators. Such automation is feasible because the data are plentiful
relative to the needs at hand—the trucks drive on the same roads each day,
and there are few surprises in terms of human activity. Data collection is there-
fore limited to a small number of roads with few obstacles. The main task for
the AI is to predict whether the path is clear. Once this prediction is made, the
judgment is well defined and easy to specify in advance: if the path is clear,
continue; if it is not clear, stop and wait. In other examples of successful auto-
mation, robotic cameras have been deployed in a variety of settings, in-
cluding during basketball games, swimming competitions, and “follow me”
aerial drones.63
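The division of labor in the mining example can be sketched as follows. The cutoff value and the stub prediction function are hypothetical; the point is simply that the machine supplies the prediction while the action rule was judged and fixed in advance.

```python
# A minimal sketch of the mining-truck logic described above. The perception
# model is a stand-in and the cutoff value is an assumption.

def predict_path_clear(sensor_frame) -> float:
    """Stand-in for a learned perception model; returns P(path is clear)."""
    ...  # in practice, a trained model would process the sensor frame here

def truck_action(p_clear: float, cutoff: float = 0.99) -> str:
    """The prespecified judgment: continue only when the path is almost surely clear."""
    return "continue" if p_clear >= cutoff else "stop_and_wait"

print(truck_action(0.999))  # -> continue
print(truck_action(0.80))   # -> stop_and_wait
```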
We expect AI to be useful for military tasks with clear civilian analogs.
While much of the popular debate about military AI is preoccupied with auto-
mated weaponry, it is likely that many promising applications will be de-
signed to support bureaucratic functions. Bureaucracies are, among other
things, computational systems that gather and process data to render opera-
tions more legible, predictable, and controllable.64 The most bureaucratized
parts of a military organization are therefore good candidates for computa-
tional automation. Administrative transactions tend to be repetitious, which
generates a large amount of high-quality data that can be used for training and
prediction. Organizations are attracted to standardization because it makes
their equipment, procedures, and personnel easier to count, compare, and con-
trol.65 Procedural standardization also constrains organizational behavior,
which makes it easier for managers to specify judgments in advance. More-
over, peacetime administrative tasks are somewhat less exposed to battlefield
turbulence, reducing the requirement for last-minute interpretation.
We expect automation to improve the efficiency and scale of routinized ac-
tivities that entail filling in missing information, measuring technical perfor-
mance, tracking personnel, and anticipating future needs. Indeed, AI may
enhance many routine tasks associated with developing budgets, recruiting
and training personnel, identifying leadership potential, scheduling unit ros-
ters, designing and procuring weapon systems, planning and evaluating exer-
62. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 113–114.
63. Ibid., p. 115.
64. James G. March and Herbert A. Simon, Organizations (New York: John Wiley, 1958).
65. JoAnne Yates, Control through Communication: The Rise of System in American Management (Balti-
more, Md.: Johns Hopkins University Press, 1989); and Wendy Nelson Espeland and Mitchell L.
Stevens, "Commensuration as a Social Process," Annual Review of Sociology, Vol. 24, No. 1 (1998),
pp. 313–343, https://doi.org/10.1146/annurev.soc.24.1.313.
cises, caring for the morale and welfare of service members and their families,
and providing health care to service members and veterans.66 At the same
time, it is important to recognize that seemingly trivial procedures can be-
come politicized when budgets and authorities are implicated.67 Even in
the absence of parochialism, the complexity of administrative systems intro-
duces interpretive challenges for personnel. These internal frictions under-
mine the conditions for successful administrative automation.
Logistics supply chains may also be good candidates for automation.
Indeed, firms like DHL and FedEx have leveraged AI to streamline their deliv-
ery networks. Standardized parts, consumption rates, repetitive transactions,
and preventive maintenance schedules generate abundant data about defined
tasks. Using historical performance data, predictive maintenance systems can
monitor consumption rates and automatically order replacement parts before
a weapon or platform breaks. For example, one U.S. Air Force system uses a
predictive algorithm to decide when mechanics should perform an inspection,
which allows them to tailor the maintenance and repairs for individual aircraft
rather than adhere to generic schedules.68 But we contend that the prediction
of supply and demand for just-in-time delivery will be more difficult in war.
While bureaucrats may be insulated from the turmoil of the battlefield, supply
lines are more exposed. The enemy can interdict or sabotage logistics. As war-
time attrition consumes spare parts, units may squabble about which ones
should be resupplied. Friendly units may resort to using platforms and parts
in unconventional ways. All this turbulence will cause predictions to fail,
which essentially shifts AI into the category of premature automation, dis-
cussed below. The classical military solution to such problems is to stockpile
an excess of supplies, precisely because wartime consumption is so hard to
predict.69 If organizations eliminate slack resources with AI systems in pursuit
of efficiency, however, then they may sacrifice effectiveness when systems en-
counter unforeseen circumstances.70
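A schematic example of the predictive-maintenance logic, and of how it can quietly fail in wartime, might look like the sketch below. The risk model, the flight-hour scale, and the ordering threshold are all hypothetical; the sketch does not describe the Air Force system cited above.

```python
# Illustrative predictive-maintenance rule. The "model" and threshold are
# fabricated; the order-ahead rule is a judgment fixed in advance, and it can
# fail when wartime consumption no longer resembles peacetime training data.

def predicted_failure_risk(flight_hours_since_overhaul: float) -> float:
    """Stand-in for a model trained on historical (peacetime) maintenance data."""
    return min(1.0, flight_hours_since_overhaul / 1000.0)

def order_spare_part(risk: float, order_threshold: float = 0.7) -> bool:
    """Prespecified judgment: order a replacement before the platform breaks."""
    return risk >= order_threshold

print(order_spare_part(predicted_failure_risk(800)))  # True: order ahead of failure
print(order_spare_part(predicted_failure_risk(300)))  # False: keep operating
```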
66. Brian David Ray, Jeanne F. Forgey, and Benjamin N. Mathias, "Harnessing Artificial Intelli-
gence and Autonomous Systems across the Seven Joint Functions," Joint Force Quarterly, Vol. 96,
No. 1 (2020), p. 124, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-96/jfq-96.pdf; and
Stephan De Spiegeleire, Matthijs Maas, and Tim Sweijs, Artificial Intelligence and the Future of De-
fense: Strategic Implications for Small- and Medium-Sized Force Providers (The Hague, Netherlands:
Hague Centre for Strategic Studies, 2017), pp. 91–94, https://www.jstor.org/stable/resrep12564.1.
67. Sapolsky, Gholz, and Talmadge, US Defense Politics.
68. Stoney Trent and Scott Lathrop, "A Primer on Artificial Intelligence for Military Leaders,"
Small Wars Journal, August 22, 2018, http://smallwarsjournal.com/jrnl/art/primer-artificial-
intelligence-military-leaders.
69. Martin van Creveld, Supplying War: Logistics from Wallenstein to Patton, 2nd ed. (New York:
Cambridge University Press, 2004).
70. Loss of resilience is a longstanding concern associated with organizational automation. See
In sum, we expect AI to be most useful for automating routine tasks that
are bureaucratically insulated from battlefield turbulence. Administration and
logistics tasks that are repetitious and standardized are more likely to have
both quality data and clear goals. Humans still provide judgment to define
those clear goals, but this happens in advance. Although these conditions
are ideal for automation, they can be elusive in practice, especially if there are
contested resources and personnel decisions. Consequently, even the low-
hanging fruit applications will often fall into the other three categories in
table 2, particularly human-machine teaming.
human decision-making
At the other extreme, humans still make all the decisions for situations in
which data are of low quality and judgment is difficult. Machine predictions
degrade without quality data. Fortunately, because judgment is also difficult in
this category, there is little temptation to automate. There are no commercial
examples in this category because we have not seen AI systems that success-
fully found companies, lead political movements, or set legal precedents by
themselves. Without quality data and clear judgment, such machines would be
of little use. As long as advances in machine learning are best understood as
improvements in prediction rather than AGI, then tasks in this category will
require human beings. People do not always make good decisions, however,
and successful decisions depend on many different psychological and social
factors, plus good luck.
Strategy abounds with complex and controversial political and moral judg-
ments. What is worth fighting for, or compromising on? When should allies be
embraced, or abandoned? When is butter worth more than guns, and when
does the stability of deterrence outweigh the pursuit of power? For what na-
tional interests should men and women be inspired to kill, and die? And when
should killers show restraint? The answers to these questions spring from
many sources such as ideology, psychology, and domestic politics, but they do
not come from machines. AI systems may win games like Jeopardy, Go, and
complex video games. War shares some features with some games, such as
strategic competition and zero-sum payoffs. But war is not a game. Games are
defined by institutionalized rules, but the failure of institutions in anarchy
gives rise to war.
In Clausewitzian terms, war is the use of violence to impose one’s will on a
reactive opponent. The interaction of political rationality, national passion, and
Gene I. Rochlin, Trapped in the Net: The Unanticipated Consequences of Computerization (Princeton,
N.J.: Princeton University Press, 1997).
random chance gives war a chaotic quality.71 The problems of formulating
political-military strategy and commanding forces in battle live in the heart of
this chaos. Curiosity, creativity, grit, and perseverance become important char-
acter traits, to say nothing of empathy, mercy, and compassion. Whenever
“fog” and “friction” are greatest, and human “genius” is most in demand,
there is little role for AI. Humans will often fail in these circumstances, too, but
at least they have a fighting chance.
AI systems may still be able to support strategy and command by providing
decision aids that improve the intelligence, planning, and administrative in-
puts to decision-making. However, this simply underscores the importance of
decomposing decision tasks into subtasks that AI can support and subtasks
that humans must perform. The partition of judgment tasks itself is an act of
judgment. For fluid, ambiguous, or controversial practices, which are common
in strategy and command, the boundaries of data, judgment, prediction, and
action may be difficult to distinguish from each other, let alone from other de-
cision tasks. Judgment becomes even more important in these situations.
premature automation
In between the extremes of fully automated and fully human decision-making,
there are both risks and opportunities. The mixed cases of premature automa-
tion and human-machine teaming generate most of the worry and excitement
about AI. Reliance on AI is particularly risky in situations in which the data
are of low quality, but the machine is given clear objectives and authorized
to act. The risks are greatest when lethal action is authorized. If data are biased
or incomplete, then it would be better to let humans rather than machines in-
terpret the evolving situation (i.e., human decision-making). If humans mis-
takenly believe that data are abundant and unbiased, however, then they may
wrongly assume that critical tasks can be delegated to AI (i.e., automated
decision-making). Automation seems seductively feasible, but the definition of
the reward function fails to keep up with important changes in the context
of decision-making. Clear judgment in a turbulent environment creates hid-
den dangers.
An example of premature automation is when Amazon built an AI system
to help with its hiring processes.72 The judgment seemed clear: the machine
should select the workers who are likely to succeed in the company. Amazon
71. Alan Beyerchen, "Clausewitz, Nonlinearity, and the Unpredictability of War," International Se-
curity, Vol. 17, No. 3 (Winter 1992/93), pp. 59–90, https://doi.org/10.2307/2539130.
72. Jeffrey Dastin, "Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women,"
Reuters, October 10, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-
insight-idUSKCN1MK08G.
receives thousands of résumés, and a better screening tool could automate the
many hours that human recruiters spend screening them. There were reasons
to be optimistic that this automation would reduce bias and yield higher qual-
ity and more diverse candidates.73 Unfortunately, Amazon’s past applications
and hiring practices meant that the data contained insufficient examples of
successful women applicants. Without data on successful women applicants,
the AI learned that Amazon should not hire women, and it consequently
screened out résumés that included the word "women." The existing bi-
ases in organizational processes produced biased AI training data. Fortu-
nately, Amazon management realized that the AI exacerbated rather than
solved Amazon’s existing problems, and the company never deployed this
AI tool.
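The mechanism at work in the Amazon example can be illustrated with a deliberately toy scoring rule. Every résumé, word count, and score below is fabricated for illustration, and the sketch is not a description of Amazon's system; it simply shows how historical bias in the training examples becomes a penalty on a single token.

```python
# Toy illustration of biased training data producing biased predictions. All
# data and the scoring rule are fabricated for illustration only.

from collections import Counter

past_hires = [
    "chess club captain software engineer",
    "men's rugby team software engineer",
    "software engineer hackathon winner",
]
past_rejections = [
    "women's chess club captain software engineer",
    "women's soccer team software engineer",
]

hired_counts = Counter(w for resume in past_hires for w in resume.split())
rejected_counts = Counter(w for resume in past_rejections for w in resume.split())

def score(resume: str) -> int:
    """Score a résumé by how much its words resemble past hires rather than rejections."""
    return sum(hired_counts[w] - rejected_counts[w] for w in resume.split())

# "women's" appears only among past rejections, so any résumé containing it
# is penalized, reproducing the bias already present in the historical data.
print(score("software engineer hackathon winner"))           # higher score
print(score("women's hackathon winner software engineer"))   # lower score
```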
Data may also be biased because they are based on decisions that a predic-
tion machine may not understand. For example, an early AI for chess was
trained on thousands of Grandmaster games. When deployed, the pro-
gram sacrificed its queen early in the game because it had learned that
Grandmasters who do so tend to win. Human Grandmasters only sacrifice
their queen, however, when doing so generates a clear path to victory.74 While
this issue has been solved in AI chess, the underlying challenge continues.
Even when the utility function is clear, training data are often a result of tacit
assumptions in human decision-making. Sometimes those human decisions—
such as sacrificing a queen in chess—create biased data that cause AI predic-
tions to fail.
In contrast with chess, the risks of premature automation are more extreme
in the military realm (e.g., fratricide and civilian casualties), but the logic is the
same. Militaries abound with standard operating procedures and tactical doc-
trines that guide the use of lethal capabilities (e.g., instructions for the safe op-
eration of weapon platforms, playbooks for tactical maneuvers, and policies
for employing weapons). To the degree that goals and mission parameters can
be clearly specified, tactical operations will appear to be attractive candidates
for automation. To the degree that combat timelines are expected to be ex-
tremely compressed, moreover, automation may appear to be even more ur-
gent.75 Rapid decision-making would necessitate the pre-specification of goals
and payoffs and the coupling of AI prediction to robotic action. Lethal autono-
mous weapon systems use prediction to navigate complex environments in or-
der to arrive at destinations or follow targets, within constraints that are
supplied by human operators.76 Their targeting systems base their predictions
73. Kleinberg et al., “Discrimination in the Age of Algorithms.”
74. Agrawal, Gans, and Goldfarb, Prediction Machines, p. 63.
75. Horowitz, "When Speed Kills."
76. Heather M. Roff and Richard Moyes, "Meaningful Human Control, Artificial Intelligence, and
on training data that identify valid targets. Using algorithms, machines may
rapidly and accurately identify targets at far greater distances than human vi-
sual recognition, and algorithmic target recognition may be collocated with
sensors to reduce response times.77
Many AI-enabled weapons already or imminently exist. The Israeli Harpy
loitering munition can search for and automatically engage targets, and China
has plans for similar "intelligentized" cruise missiles.78 Russia is developing a
variety of armed, unmanned vehicles capable of autonomous fire or unarmed
mine clearance.79 The United States has been exploring combat applications
for AI in all warfighting domains. In the air, the "Loyal Wingman" program
pairs an unmanned F-16 with a manned F-35 or F-22 to explore the feasibility
of using humans to direct autonomous aircraft, such as the XQ-58A Valkyrie.80
Air combat algorithms that can process sensor data and plan effective combat
maneuvers in the span of milliseconds have already defeated human pilots in
some simulators.81 At sea, the U.S. Navy's LOCUST project explores the feasi-
bility of launching swarms of expendable surface-to-air drones.82 The Defense
Advanced Research Projects Agency's (DARPA) Continuous Trail Unmanned
Vessel program is designed to search for enemy missile submarines and auto-
matically trail them for months at a time, reporting regularly on their loca-
tions.83 On land, U.S. Marine "warbot companies" equipped with networks of
small robots might provide distributed sensing and precision fire.84 Auto-
Autonomous Weapons,” paper prepared for the Informal Meeting of Experts on Lethal Autono-
mous Weapons Systems, UN Convention on Certain Conventional Weapons, Geneva, April 11–15,
2016, https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf.
77. See, for example, the Defense Advanced Research Projects Agency's (DARPA) Target Recogni-
tion and Adaption in Contested Environments (TRACE) program, described in John Keller,
"DARPA TRACE program using advanced algorithms, embedded computing for radar target
recognition," Military and Aerospace Electronics, July 23, 2015, https://www.militaryaerospace
.com/computers/article/16714226/darpa-trace-program-using-advanced-algorithms-embedded-
computing-for-radar-target-recognition.
78. Elsa B. Kania, Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future
Military Power (Washington, D.C.: Center for a New American Security, November 2017), https://
s3.us-east-1.amazonaws.com/files.cnas.org/documents/Battlefield-Singularity-November-2017
.pdf.
79. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, pp. 80, 82.
80. Hoadley and Sayler, Artificial Intelligence and National Security, p. 13.
81. Norine MacDonald and George Howell, "Killing Me Softly: Competition in Artificial Intelli-
gence and Unmanned Aerial Vehicles," PRISM, Vol. 8, No. 3 (2019), pp. 103–126, https://ndupress
.ndu.edu/Portals/68/Documents/prism/prism_8-3/prism_8-3.pdf.
82. LOCUST stands for "Low-Cost Unmanned Aerial Vehicle Swarming Technology." See Jules
Hurst, "Robotic Swarms in Offensive Maneuver," Joint Force Quarterly, Vol. 87, No. 4 (2017),
pp. 105–111, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-87/jfq-87_105-111_Hurst
.pdf?ver=2017-09-28-093018-793.
83. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, p. 91.
84. Jeff Cummings et al., "Marine Warbot Companies: Where Naval Warfare, the U.S. National
Defense Strategy, and Close Combat Lethality Task Force Intersect," War on the Rocks, June 28,
mated counter-battery responses, which accurately retaliate against the origin
of an attack, could give human commanders leeway to focus on second- and
third-order decisions in the wake of an attack.85 In the cyber domain, AI sys-
tems might autonomously learn from and counter cyberattacks as they evolve
in real time, as suggested by the performance of the Mayhem system in
DARPA's 2016 Cyber Grand Challenge.86 AI could be especially useful for
detecting new signals in the electromagnetic spectrum and reconfiguring elec-
tronic warfare systems to exploit or counter them.87 Space satellites, mean-
while, have been automated from their inception, and space operations might
further leverage AI to enhance surveillance and control.
Much of the AI security literature is preoccupied with the risks posed by au-
tomated weapons to strategic stability and human security.88 Risks of miscal-
culation will increase as the operational context deviates from the training data
set in important or subtle ways. The risk of deviation increases with the com-
plexity and competitiveness of the strategic environment, while the costs of
miscalculation increase with the lethality of automated action. The machine
tries to optimize a specific goal, but in the wrong context, doing so can lead to
false positives. AI weapons may inadvertently either target innocent civilians
or friendly forces or trigger hostile retaliation. In these cases, the AI would
have the authority to kill but would not understand the ramifications. The
risks are particularly stark in the nuclear arena. Nuclear war is the rarest of
rare events—keeping it that way is the whole point of nuclear deterrence—so
training data for AI systems are either nonexistent or synthetic (i.e., based on
simulations).89 Any tendency for AI systems to misperceive or miscalculate
when confronted with uncertain or novel situations could have catastrophic
consequences.90 In short, autonomous weapon systems that combine predic-
tion with action can quickly make tragic mistakes.
2018, https://warontherocks.com/2018/06/marine-warbot-companies-where-naval-warfare-the-
u-s-national-defense-strategy-and-close-combat-lethality-task-force-intersect/.
85. Ray, Forgey, and Mathias, "Harnessing Artificial Intelligence and Autonomous Systems across
the Seven Joint Functions," p. 123.
86. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, p. 88.
87. Matthew J. Florenzen, Kurt M. Shulkitas, and Kyle P. Bair, "Unmasking the Spectrum with
Artificial Intelligence," Joint Force Quarterly, Vol. 95, No. 4 (2019), pp. 116–123, https://ndupress
.ndu.edu/Portals/68/Documents/jfq/jfq-95/jfq-95_116-123_Florenzen-Skulkitas-Bair.pdf.
88. For example, Payne, "Artificial Intelligence"; and Suchman, "Algorithmic Warfare and the
Reinvention of Accuracy."
89. Rafael Loss and Joseph Johnson, "Will Artificial Intelligence Imperil Nuclear Deterrence?" War
on the Rocks, September 19, 2019, https://warontherocks.com/2019/09/will-artificial-intelligence-
imperil-nuclear-deterrence/.
90. Vincent Boulanin, ed., The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk,
Vol. 1, Euro-Atlantic Perspectives (Solna, Sweden: Stockholm International Peace Research Insti-
tute, May 2019), https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-
For most of the examples reviewed in this section, humans should adjust
goals on a case-by-case basis to avoid the substantial operational risks of full
automation. The DARPA Air Combat Evolution (ACE) program, which trains
AI pilots in dogfight simulations, highlights risks that can emerge when AI is
given too much decision autonomy in rapidly changing contexts: "at one point
in the AlphaDogfight trials, the organisers threw in a cruise missile to see what
would happen. Cruise missiles follow preordained flight paths, so they behave
more simply than piloted jets. The AI pilots struggled with this because, para-
doxically, they had beaten the missile in an earlier round and were now
trained for more demanding threats.”91 Experiments like this have encouraged
ACE to focus on “manned-unmanned teaming” rather than full autonomy.
The engineering challenge is then to partition the cognitive load of prediction
and judgment correctly (i.e., to decompose the task into different subtasks) so
that faster machines and mindful humans can play to their strengths. These ex-
amples show that automation risk from low-quality data increases the impor-
tance of careful human judgment and teaming a human with the machine.
Human personnel are needed to identify when data are incomplete or biased
in the specific context of any given decision, and to provide judgment on how
to act on potentially inaccurate predictions. For many tactical fire and maneu-
ver tasks, full automation is prohibitively risky, but close human supervision
may be able to mitigate that risk.
human-machine teaming
If quality data are available but judgment is difficult, then AI can still provide
predictions if humans first tell the machines what to do. We describe this cate-
gory as "human-machine teaming" because skilled people can use AI to en-
hance decision-making, but they must guide and audit AI performance in
sensitive or idiosyncratic circumstances. In these situations, quality data will
generate reliable predictions. Owing to the difficulty of prespecifying judg-
ments, however, most practitioners are not tempted to deploy full automation
because they recognize that doing so may risk creating more bad decisions.
Consider the civilian example of tax law and the ambiguity about whether
investment income should be taxed as business income or capital gains.
Typically, a company would hire a lawyer to collect facts on the case and pre-
dict what the courts are likely to find. Then, the lawyer would advise the client
nuclear-risk.pdf; and Lora Saalman, “Fear of False Negatives: AI and China’s Nuclear Posture,”
Bulletin of the Atomic Scientists blog, April 24, 2018, https://thebulletin.org/2018/04/fear-of-false-
negatives-ai-and-chinas-nuclear-posture/.
91. "Fighter Aircraft Will Soon Get AI Pilots," Economist, November 19, 2020, https://www
.economist.com/science-and-technology/2020/11/15/fighter-aircraft-will-soon-get-ai-pilots.
on a course of action. One firm developed an AI that scans tax law decisions to
predict tax liabilities. The AI does not recommend a course of action because
making that judgment requires knowing the client’s risk preferences and com-
fort navigating the legal system. The AI predicts what would happen if the
case were to go to court, but it cannot determine whether going to court is a
good idea. Legal decisions in this task are the product of human-machine
teaming between the predictive AI and the human lawyer, who must interpret
the prediction to judge what advice best serves the client.92
For similar reasons, human-machine teaming ought to flourish in intelli-
gence and planning organizations. There is much growth potential for AI in
units that are awash in intelligence, surveillance, and reconnaissance (ISR)
data.93 In remotely piloted aircraft (drone) operations, for example, the infor-
mation processing burden is intense at even the most tactical level. According
to an ethnographic study by Timothy Cullen, "to fly the aircraft and control
the sensor ball, Reaper and Predator crews had to coordinate the meaning,
movement, and presentation of a myriad of menus, windows, and tables on
16 displays and 4 touch screens with 4 keyboards, 2 trackballs, 2 joysticks, and
8 levers."94 Cullen describes a complicated mixture of prediction and judg-
ment as "operators negotiated and constructed a constrained environment in
the ground control station to coordinate verbal, typed, written, pictorial, and
geographical representations of a mission; to identify patterns in scenes from
the aircraft's sensors; and to associate those patterns with friendly and enemy
activity."95 ISR drones generate so much data, "37 years of full motion footage
in 2011 alone," that "much of the collect goes unanalyzed."96 AI is able to alle-
viate some of the data processing burden by continuously monitoring multiple
data feeds and highlighting patterns of interest. Yet Cullen highlights ways in
which aircrew make many, seemingly minor, value judgments about what
they should—and should not—be doing with their sensors and weapons. In
other words, AI provides one complement to the data—the prediction—but it
does not provide the judgment that also underlies decision-making.
In principle, many intelligence tasks might benefit from machine learning.
92. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb, "Artificial Intelligence: The Ambiguous La-
bor Market Impact of Automating Prediction," Journal of Economic Perspectives, Vol. 33, No. 2
(Spring 2019), p. 35, https://doi.org/10.1257/jep.33.2.31.
93. Keith Dear, "A Very British AI Revolution in Intelligence Is Needed," War on the Rocks, Octo-
ber 19, 2018, https://warontherocks.com/2018/10/a-very-british-ai-revolution-in-intelligence-is-
needed/.
94. Timothy M. Cullen, "The MQ-9 Reaper Remotely Piloted Aircraft: Humans and Machines in
Action," Ph.D. dissertation, Massachusetts Institute of Technology, 2011, p. 272.
95. Ibid., p. 273.
96. Dear, “A Very British AI Revolution in Intelligence Is Needed.”
Image recognition algorithms can sift through drone video feeds to identify
enemy activity. Facial recognition systems can detect individual targets of in-
terest, while emotional prediction algorithms can aid in identifying the hostile
or benign intent of individuals on a crowded street. Speech recognition, voice
synthesis, and translation systems can alleviate shortages of human translators
for human and signals intelligence, as well as for civil-military relations and
information operations. In general, AI is well suited to the intelligence task of
analyzing bulk data and identifying patterns, for example, in identifying and
tracking terrorist groups or insurgents.97
In practice, intelligence is often more art than science. Intelligence profes-
sionals deal with deceptive targets, ambiguous data, and subtle interpreta-
tions.98 Unforeseen changes in the strategic environment or mission objectives
create new data requirements or, worse, undermine the referential integrity of
existing data. On a case-by-case basis, practitioners draw on their subject mat-
ter expertise and experience to make judgments. Applying this judgment to an
AI prediction is a difficult but learnable skill. AI predictions become just an-
other input into a complex, and potentially consequential, decision process. At
the same time, there is much potential for dissensus given the complex rela-
tionships among those who collect, manage, and consume intelligence, not to
mention the perennial risks of intelligence politicization.99
A military organization needs to understand not only its adversaries but
also itself. The prospects for AI are sometimes better for command and control
(C2) than for ISR because friendly organizations and processes are easier to
control, which produces more reliable reporting data. A lot of staff effort is
consumed by searching for data, querying other organizations for data, and
reanalyzing and reformatting data in response to emerging information re-
quirements. AI can be used to integrate reporting data from disparate data-
bases, helping to resolve contradictions and view activity in a "common
operational picture."100 AI-produced decision aids can help personnel analyze
unfolding battlefield conditions, run simulations of operational scenarios, and
present options for military commanders to evaluate. For example, to deter-
97. James L. Regens, "Augmenting Human Cognition to Enhance Strategic, Operational, and
Tactical Intelligence," Intelligence and National Security, Vol. 34, No. 5 (2019), pp. 673–687, https://
doi.org/10.1080/02684527.2019.1579410.
98. Minna Räsänen and James M. Nyce, "The Raw Is Cooked: Data in Intelligence Practice," Sci-
ence, Technology, & Human Values, Vol. 38, No. 5 (September 2013), pp. 655–677, https://doi.org/
10.1177/0162243913480049.
99. Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American National Security
(New York: Columbia University Press, 2007); and Joshua Rovner, Fixing the Facts: National Secu-
rity and the Politics of Intelligence (Ithaca, N.Y.: Cornell University Press, 2011).
100. Hoadley and Sayler, Artificial Intelligence and National Security, p. 12.
mine the best method for evacuating injured soldiers, militaries can use pre-
dictive models based on weather conditions, available routes, landing sites,
and anticipated casualties.101 AI can be used to enhance computerized war-
gaming and combat simulations, offering more realistic models of “red team”
behavior and more challenging training exercises.102 AI could potentially im-
prove mission handoff between rotating units by analyzing unstructured text
(i.e., passages of prose rather than standardized fields) in the departing unit's
reports.103 Yet, as with intelligence, planning tasks cannot be simply delegated
to machines. ISR and C2 reporting systems generate a mass of potentially rele-
vant data, but they are hard to interpret, and associated metadata are often
missing or misleading. In these situations, quality data may generate reliable
predictions, but human intervention and interpretation are required throughout
the decision process.
Human-machine teaming often entails not only task performance (i.e., bal-
ancing the cognitive load across people and AI) but also task design (i.e., ad-
justing the load as circumstances change). Viewed at a more granular level, a
task that falls into the human-machine teaming category in our framework
might be disaggregated into subtasks that fall into two of the framework's
other categories. That is, human practitioners will have to partition a complex
decision task into either fully automated or fully human decision-making
subtasks. This subdivision requires making mindful decisions about monitor-
ing and controlling the risks of premature automation. For example, human-
machine teaming in drone operations involves having both the drone and the
drone operators perform certain tasks autonomously. The drone might auto-
matically perform flying tasks (i.e., maintaining course and bearing or reac-
quiring a lost datalink), while human drone operators might deliberate over
legal targeting criteria.
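The partition described in this paragraph can be made concrete with a small sketch. The subtask names and their assignments below are illustrative assumptions rather than a description of any fielded system; the point is that the assignment table itself is a human judgment that must be revisited as conditions change.

```python
# Schematic sketch of a human-machine task partition for drone operations.
# Subtasks and assignments are illustrative, not drawn from a real system.

subtask_assignment = {
    "maintain course and bearing": "machine",
    "reacquire lost datalink": "machine",
    "flag patterns in sensor feeds": "machine",   # prediction support
    "interpret ambiguous imagery": "human",
    "apply legal targeting criteria": "human",
    "authorize weapons release": "human",
}

def performer(subtask: str) -> str:
    """Route a subtask to the machine or to the crew; default to human judgment."""
    return subtask_assignment.get(subtask, "human")

for task in subtask_assignment:
    print(f"{task} -> {performer(task)}")
```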
The overall partition (i.e., the location of the human in the loop) should
be adjusted over time as conditions change, which will require humans to be
mindful of how the division of labor between humans and machines relates to
the task environment and the organizational mission. This balance will be fur-
ther complicated by interdependencies across tasks and organizations, data
access, interpretability, and interoperability issues, as well as competing priori-
ties such as speed, safety, secrecy, efficiency, effectiveness, legality, cyber-
security, stability, adaptability, and so on. Importantly, as figure 1 shows, the
101. Benjamin Jensen and Ryan Kendall, "Waze for War: How the Army Can Integrate Artificial
Intelligence," War on the Rocks, September 2, 2016, https://warontherocks.com/2016/09/waze-for-
war-how-the-army-can-integrate-artificial-intelligence/.
102. Kania, "Battlefield Singularity," p. 28.
103. Spiegeleire, Maas, and Sweijs, Artificial Intelligence and the Future of Defense, p. 90.
organizational and political institutions that are exogenous to decision-making
tasks establish the priorities for these different objectives. Humans are the ulti-
mate source of judgment in all AI systems.
The Strategic Implications of Military AI
The central argument of this article is that machine learning is making pre-
diction cheaper, which in turn makes data and judgment more valuable. This
finding also means that quality data and clear judgment enhance AI perfor-
mance. These conditions vary by decision task, but they are generally harder
to meet in military situations given environmental and institutional complexi-
ties. Organizations that can meet them, however, may gain a competitive ad-
vantage. Human skills are central to this competitive advantage, which has
two important strategic implications.
First, military organizations that rely on AI have incentives to improve
both data and judgment. These AI complements are sources of strength. At
least one of them—judgment—relies wholly on human beings. Even when
goals can be formally specified and pre-delegated for tasks in the automated
decision-making category in our framework, humans must engineer the re-
ward function, which they will likely revisit as they monitor system perfor-
mance. AI adoption may radically change the distribution of judgment by
altering who in an organization makes decisions and about what, but
in all cases, humans are ultimately responsible for setting objectives, making
trade-offs, and evaluating outcomes. There is little chance of this changing
anytime soon given the technical state of the art. The other complement—
data—also relies on human beings. Developing and implementing data pol-
icy necessitates negotiation between data producers and consumers. People
also make nuanced judgments when architecting data infrastructure and man-
aging data quality. AI systems can neither design themselves nor clean their
own data, which leads us to conclude that increased reliance on AI will make
human skills even more important in military organizations.
Second, and for the same reasons, adversaries have incentives to complicate
both data and judgment. In a highly competitive environment, organizational
strengths become attractive targets and potential vulnerabilities. Since predict-
able adversaries will play to AI strengths, intelligent adversaries will behave
unpredictably. If AI creates military power in one area, adversaries will create
military challenges in another. Facing an AI-empowered force, the enemy will
attempt to change the game by either undermining the quality of predictions
or making them irrelevant. Therefore, strategies to contest, manipulate, or dis-
rupt data and judgment become more relevant as military competitors adopt
AI. The informational and organizational dimensions of war will continue to
increase in salience and complexity. Again, this leads us to the conclusion that
more military AI will make the human aspects of conflict more important.
This increased importance of human personnel challenges the emerging
wisdom about AI and war. Many analyses either assume that AI will replace
warriors for key military tasks or speculate that war will occur at machine
speed, which in turn creates first-mover advantages that incentivize ag-
gression and undermine deterrence.104 The states that are first to substitute
machines for warriors, moreover, are assumed to gain significant military ad-
vantages that will shift the balance of power toward early adopters. These out-
comes are plausible, but they are based on problematic assumptions about AI
substitutability. Conflicts based on AI complementarity may exhibit very dif-
ferent dynamics. We argue that it is more useful to consider the militarized
contestation of AI complements (i.e., data and judgment) than to conceive of
wars between automated military forces. Conflicts in which data and judg-
ment are perennially at stake may be full of friction, controversy, and unin-
tended consequences, and they may drag on in frustrating ways. In short, we
expect the growing salience of data and judgment in war to subtly alter strate-
gic incentives. As a result, AI-enabled conflicts are more likely to be decided
by the slow erosion of resolve and institutional capacity than set-piece battles
between robotic forces.
information contests
The importance of information in war has been increasing for many de-
cades.105 The growth of ISR infrastructure—on and over the battlefield, at sea
and underwater, and in orbit—has dramatically increased the volume and va-
riety of data available to military organizations. Long-range precision weap-
ons and high-bandwidth datalinks have also expanded the number of things
that militaries can do with all these data, which in turn generates even more
data about friendly operations. Yet more and better data have not always
translated into more effective military operations. The adoption of information
technology throughout the past century has typically been accompanied by an
increase in the complexity and geographical dispersion of military organiza-
tions. Data-intensive tasks that emphasize intellectual skills rather than physi-
cal fighting, such as intelligence, communications, and information operations,
104. For example, Payne, "Artificial Intelligence"; Horowitz, "When Speed Kills"; and Johnson,
"Delegating Strategic Decision-Making to Machines."
105. Lindsay, Information Technology and Military Power, pp. 28–31; and Emily O. Goldman, ed.,
Information and Revolutions in Military Affairs (New York: Routledge, 2015).
have proliferated in military organizations. At the same time, advanced indus-
trialized nations have asked their militaries to perform more complex opera-
tions. More complexity, in turn, increases the potential for disagreement and
breakdown. Adversaries have also learned to offset the advantages of the ISR
revolution by either adopting asymmetric tactics to blend in with civilian pop-
ulations or exploiting the potential of space and cyberspace. As advances in
battlefield sensors make it feasible to detect targets in near real time, enemy
forces learn how to disperse, hide, and deceive.
In short, there may be more data in modern war, but data management has
also become more challenging. Although U.S. weapons may be fast and pre-
cise, U.S. wars in recent decades have been protracted and ambiguous.106 We
argue that AI will most likely deepen rather than reverse these trends. Indeed,
automation is both a response to and a contributing cause of the increasing
complexity of military information practice.
Just as commanders are already preoccupied with C2 architecture, officers in
AI-enabled militaries will seek to gain access to large amounts of data that are
relevant to specific tasks in order to train and maintain AI systems. Units will
have to make decisions about whether they should collect their own data or-
ganically or acquire shared data from other units, government agencies, or
coalition partners. We expect many relevant databases to be classified and
compartmented given the sensitivity of collection techniques or the content it-
self, which will complicate sharing. Units might also choose to leverage public
data sources or purchase proprietary commercial data, both of which are prob-
lematic because nongovernmental actors may affect the quality of and access
to data. As militaries tackle new problems, or new operational opportunities
emerge, data requirements will change, and officers will have to routinely
find and integrate new data sources. AI strategy will require militaries to es-
tablish data policies, and thus negotiating access to data will be an ongoing
managerial—and human—challenge.
We contend that militaries will face not only data access but also data rele-
vancy challenges. Heterogeneous data-generating processes allow biases and
anomalies to creep into databases. Although metadata may help to organize
information processing, they are also vulnerable to data friction that only hu-
mans can fix.107 Cleaning and curating data sources will therefore be as impor-
tant as acquiring them in the first place. To the challenges of producing or
procuring data must be added the challenges of protecting data. Just as supply
106. See, for example, Shimko, The Iraq Wars and America's Military Revolution.
107. Paul N. Edwards et al., "Science Friction: Data, Metadata, and Collaboration," Social Studies of
Science, Vol. 41, No. 5 (October 2011), pp. 667–690, https://doi.org/10.1177/0306312711413314.
chains become attractive targets in mechanized warfare, data supplies will also
become contested.
Overall, we expect the rise of AI to exacerbate the already formidable chal-
lenges of cybersecurity. Cybersecurity professionals aim to maintain the con-
fidentiality, integrity, and availability of an organization's data. Two of these
goals—integrity and availability—capture the AI requirements of unbiased
and accessible data, as described above. The goal of confidentiality is also im-
portant insofar as data provide AI adopters with a competitive advantage. In
business, AI companies often try to own (rather than buy) the key data that
enable their machines to learn.108 The military equivalent of this is classified
information, which is hidden from the enemy to produce a decision advan-
tage.109 Military organizations will have strong incentives to protect the
classified data that military AI systems use to learn. For the same reasons, ad-
versaries will have incentives to steal, manipulate, and deny access to AI learn-
ing data. To date, most discussions of AI and cybersecurity have focused on a
substitution theory of cybersecurity, that is, using AI systems to attack and de-
fend networks.110 But we argue that a complementary theory of cybersecurity
is just as, if not more, important. AI will require the entire military enterprise to
invest more effort into protecting and exploiting data. If AI systems are trained
with classified information, then adversaries will conduct more espionage.
If AI enhances intelligence, then adversaries will invest in more counter-
intelligence. If AI provides commanders with better information, then adver-
saries will produce more disinformation.
Inevitably, different parts of the bureaucracy will tussle among themselves and with coalition partners and nongovernmental actors to access and curate a huge amount of heterogeneous and often classified data. Organizations will also struggle with cyber and intelligence adversaries to maintain control of their own data while also conducting their own campaigns to collect or manipulate the enemy's data. To appreciate the strategic implications of AI, therefore, it is helpful to understand cyber conflict, most of which to date resembles espionage and covert action more than traditional military warfare. Indeed, chronic and ambiguous intelligence contests are more common than fast and decisive cyberwar.111 Military reliance on AI becomes yet another factor abet-
108. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 174–176.
109. Jennifer E. Sims, "Decision Advantage and the Nature of Intelligence Analysis," in Loch K. Johnson, ed., The Oxford Handbook of National Security Intelligence (New York: Oxford University Press, 2010).
110. For example, James Johnson, "The AI-Cyber Nexus: Implications for Military Escalation, Deterrence, and Strategic Stability," Journal of Cyber Policy, Vol. 4, No. 3 (2019), pp. 442–460, https://doi.org/10.1080/23738871.2019.1701693.
111. Joshua Rovner, "Cyber War as an Intelligence Contest," War on the Rocks, September 16, 2019,
ting the rise of cyber conflict in global affairs, and the (ambiguous, confusing, never-ending, gray zone) dynamics of cyber conflict are likely to have a strong influence on the dynamics of AI conflict.
organizational complexity
Just as AI militaries will struggle to procure, clean, curate, protect, and contest data, they will also struggle to inculcate, negotiate, and legitimate judgment. Indeed, the challenges of data and judgment go hand in hand. People will find it harder to interpret a flood of heterogeneous data. More complex data architectures will require managers to consider the trade-offs among competing objectives (i.e., confidentiality, integrity, and availability), which may invite bureaucratic controversy. Yet judgment is even more fundamental for organizations that rely on AI because humans must both tell AI systems which predictions to make and determine what to do with the predictions once they are made. People who code valuations into autonomous systems will have enormous power because AI increases the scale of the impact of some human judgments. For example, individual car drivers make judgments about their own vehicle, whereas the encoded judgments for self-driving cars can affect millions of vehicles. Each instance of a given autonomous weapon system, similarly, will likely share algorithms and training data with others. When widely shared judgments are wrong, biased, or self-serving, the AI systems guided by them can generate large-scale problems. Good judgment becomes particularly desirable as prediction gets better, faster, and cheaper.
A fundamental organizational challenge is to recruit, train, and retain the human talent required for human-machine teaming. We anticipate that AI systems will increase the influence of junior personnel, giving more leverage to their judgment and decisions. However, we also expect that the junior officers, noncommissioned officers, civilian employees, and government contractors who maintain and operate AI systems will struggle to understand the consequences of their actions in complex political situations. Gen. Charles Krulak highlights the role of "the strategic corporal" on twenty-first-century battlefields.112 Krulak argues that operational complexity makes tactical actions more strategically consequential, for better or worse, which places a premium on the char-
https://warontherocks.com/2019/09/cyber-war-as-an-intelligence-contest/; Lennart Maschmeyer, "The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations," International Security, Vol. 46, No. 2 (Fall 2021), pp. 51–90, https://doi.org/10.1162/isec_a_00418; and Robert Chesney and Max Smeets, eds., Cyber Conflict as an Intelligence Contest (Washington, DC: Georgetown University Press, forthcoming).
112. Charles C. Krulak, "The Strategic Corporal: Leadership in the Three Block War," Marines Magazine, January 1999, https://apps.dtic.mil/sti/pdfs/ADA399413.pdf.
acter and leadership ability of junior personnel. AI will further increase the burden of judgment on them. Forward personnel will have to see the predictions from AI systems, assess whether the data that created the predictions are reliable, and make value judgments about how and why automated systems can advance the mission. Moreover, AI systems will require constant reconfiguration and repair as the context of human-machine teaming changes during actual operations. Military personnel have long engaged in field-expedient, bottom-up innovation.113 We expect personnel will likewise hack AI systems to improve mission performance, as they understand it, even as unauthorized modifications put them into conflict with system configuration managers elsewhere in the bureaucracy.114 It is important to emphasize the human capital requirements of combining a sophisticated understanding of the politico-military situation with the technical savvy to engineer AI in the field. The strategic corporal in the AI era must be not only a Clausewitzian genius but also a talented hacker. This may not be a realistic requirement.
The importance of human-machine teaming is increasingly appreciated in organizations that implement AI systems. Amid all the hype about AI and war, plenty of thoughtful work seeks to discern the relative advantages of humans and machines and to devise methods of pairing them together in order to improve decision-making.115 As the U.S. Department of Defense AI strategy states, "The women and men in the U.S. armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve."116 Yet the strategy's stated goal of "creating a common foundation of shared data, reusable tools, frameworks and standards, and cloud and edge services" is more of a description of the magnitude of the problem than a blueprint for a solution.117 As AI creates
113. Kollars, "War's Horizon."
114. On the general dynamics of military user innovation, see Lindsay, Information Technology and Military Power, pp. 109–135.
115. Andrew Herr, "Will Humans Matter in the Wars of 2030?" Joint Force Quarterly, Vol. 77, No. 2 (2015), pp. 76–83, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-77/jfq-77.pdf; Mary L. Cummings, Artificial Intelligence and the Future of Warfare (London: Chatham House, Royal Institute of International Affairs, January 26, 2017); Development, Concepts and Doctrine Centre, "Human-Machine Teaming," Joint Concept Note 1/18 (London: UK Ministry of Defence, May 2018), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/709359/20180517-concepts_uk_human_machine_teaming_jcn_1_18.pdf; and Mick Ryan, "Extending the Intellectual Edge with Artificial Intelligence," Australian Journal of Defence and Strategic Studies, Vol. 1, No. 1 (2019), pp. 23–40, https://www.defence.gov.au/ADC/publications/AJDSS/documents/volume1-issue1/Full.pdf. NSCAI's Final Report, 2021, also emphasizes human-machine teaming.
116. Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 2019, p. 4.
117. Ibid., p. 7.
potential for large-scale efficiency improvements, it also creates potential
for large-scale collective action problems. New military staff specialties are
sure to emerge to manage data and judgment resources, creating new in-
stitutional equities and integration challenges. Perhaps even more challenging
is the problem of nurturing trust among all the engineers, administrators, testers, operators, and lawyers involved in designing, using, and repairing
AI systems.118
As cheap prediction makes human judgment more vital in a wide variety of tasks, and as more judgment is needed to coordinate human-machine teaming, we anticipate that military bureaucracies will face complicated command decisions about why, and how, to conjoin humans and machines. Commercial firms that embrace AI often adjust their boundaries and business models by contracting out tasks involving data, prediction, and action (e.g., manufacturing, transportation, advertising, and service provision) while developing in-house judgment capacities that are too difficult to outsource.119 Military organizations, likewise, may find it advantageous to share specialized resources (sensors, shooters, intelligence products, and logistics) across a decentralized network of units, even as they struggle to make sense of it all. AI is thus part of a broader historical trend that has been described with terms like "network-centric warfare," "joint force operations," "integrated multi-domain operations," and "interagency cross-functional teams." The whole is more than the sum of its parts, but each part must exercise excellent judgment in how it leverages shared assets. Historical experience suggests that military interoperability and shared sensemaking are difficult, but not necessarily impossible, to achieve.120 We thus expect military and political judgment will become even more difficult, diffused, and geographically distributed.
Indeed, the ongoing involvement of the "strategic corporal" in conversations about politico-military ends could end up politicizing the military. In the United States, as Risa Brooks argues, the normative separation of political ends from military means has some paradoxically adverse consequences: it enables service parochialism, undermines civilian oversight, and degrades strategic deliberation.121 Greater reliance on AI could exacerbate all these problems,
118. Roff and Danks, “‘Trust but Verify.’”
119. Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 170–178.
120. For example, C. Kenneth Allard, Command, Control, and the Common Defense (New Haven, CT: Yale University Press, 1990); and Scott A. Snook, Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq (Princeton, NJ: Princeton University Press, 2000).
121. Risa Brooks, "Paradoxes of Professionalism: Rethinking Civil-Military Relations in the United States," International Security, Vol. 44, No. 4 (Spring 2020), pp. 7–44, https://doi.org/10.1162/isec_a_00374.
precisely because AI is a force multiplier that requires military personnel to ex-
ercise greater judgment. Brooks’s argument implies that an AI-intensive de-
fense bureaucracy could become both more powerful and more politically
savvy. If machines perform the bulk of data gathering, prediction, and tactical
warfighting, then the judgments of human engineers, managers, and operators
will be highly consequential, even as ethical questions of accountability be-
come harder to answer. Some military personnel may be unable to perform at
such a high level of excellence, as attested by the many scandals during the
wars in Iraq and Afghanistan (from targeting errors to prisoner abuse). Increasing reliance on AI will magnify the importance of leadership throughout the chain of command, from civilian elites to enlisted service members.
If a military organization can figure out how to recruit, train, and retain
highly talented personnel, and to thoroughly reorganize and decentralize its
C2 institutions, such reforms may help to inculcate and coordinate judgment.
Doing so would enable the military to make the most of human-machine team-
ing in war. If judgment is a source of military strength, however, then it may
also be a political vulnerability. As organizational and political judgment be-
comes the preeminent source of strength for AI-enabled military forces, we ex-
pect that judgment will also become the most attractive target for adversaries.
If AI relies on federated data and command structures, then adversaries will
pursue wedge strategies to break up military coalitions.122 If the consensus
about war aims depends on robust political support, adversaries will conduct
disinformation and influence campaigns to generate controversy and under-
mine popular support.123 If automated systems operate under tightly con-
trolled rules of engagement, adversaries will attempt to manipulate normative
frameworks that legitimize the use of force.124 If AI enables more efficient tar-
geting, the enemy will present more controversial and morally fraught targets
to test political resolve.125 As prediction machines make some aspects of mili-
tary operations more certain, we argue that the entire military enterprise will
become less certain.
122. See, generally, Timothy W. Crawford, "Preventing Enemy Coalitions: How Wedge Strategies Shape Power Politics," International Security, Vol. 35, No. 4 (Spring 2011), pp. 155–189, https://doi.org/10.1162/ISEC_a_00036.
123. See, generally, Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Straus and Giroux, 2020).
124. Janina Dill, Legitimate Targets? Social Construction, International Law, and US Bombing (Cambridge: Cambridge University Press, 2014); and Ryder McKeown, "Legal Asymmetries in Asymmetric War," Review of International Studies, Vol. 41, No. 1 (January 2015), pp. 117–138, https://doi.org/10.1017/S0260210514000096.
125. Erik Gartzke and James Igoe Walsh, "The Drawbacks of Drones: The Effects of UAVs on Escalation and Instability in Pakistan," Journal of Peace Research, forthcoming.
Conclusion
It is premature to assume that AI will replace human beings in either war or any other competitive endeavor. To understand the impact of AI in any field, it is important to disaggregate decision-making into its components: data, judgment, prediction, and action. An economic perspective on AI views machine learning as more efficient prediction (and robotics as a more efficient action), which makes data and human judgment more valuable. This means that innovation in algorithms and computing power is necessary but not sufficient for AI performance. We have argued that the context of decision-making—where and how organizations use AI and for what purposes—determines whether automation is possible or desirable. The complementarity of data and judgment, in turn, has important implications for the preparation for and conduct of AI-enabled war.
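To make this decomposition concrete, the choice of an action can be written as a simple expected-utility calculation. The notation below is a minimal sketch for illustration only, not the article's own formalism: x stands for data, s for a state of the world, a for an action, and U for the utility function. Machine learning supplies the prediction Pr(s | x); humans must supply U.

% Minimal decision-theoretic sketch (illustrative notation, assumed here rather than taken from the text).
% Prediction machines estimate Pr(s | x); judgment specifies U(a, s); the chosen action follows from both.
\[
  a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s \in S}
  \underbrace{U(a, s)}_{\text{judgment}}\,
  \underbrace{\Pr(s \mid x)}_{\text{prediction from data } x}
\]

On this reading, better, faster, and cheaper estimates of Pr(s | x) are valuable only insofar as U(a, s) has been specified sensibly, which is the sense in which cheaper prediction raises rather than lowers the value of human judgment.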
We have argued that the strategic environment shapes the quality of data, and organizational institutions shape the difficulty of judgment, which gives rise to four different categories of AI performance in military tasks. Quality data and clear judgment enable "automated decision-making," which is most feasible for bureaucratically constrained administration and logistics tasks. Low-quality data and difficult judgments, which are common in strategy and command tasks, necessitate "human decision-making." Clear judgments applied to low-quality data create risks of "premature automation," especially when AI systems are authorized to execute fire and maneuver tasks. Quality data and difficult judgments can be combined in "human-machine teaming," which can be used to improve intelligence and planning tasks. We expect that many, if not most, practical military applications of AI are likely to fall into this last category. Even highly bureaucratized tasks that seem to fit in the "automated decision-making" category can require human judgment, especially when budget and personnel decisions are at stake or when resource scarcity creates difficult operational trade-offs. Similarly, highly nuanced command tasks that seem to fit in the "human decision-making" category can usually be broken down into a subset of tasks that might benefit from AI decision aids. Most practitioners who implement military AI systems are aware of the risks of "premature automation" in fire and maneuver, in part due to widespread apprehension about "killer robots."126 To determine the appropriate division of labor between humans and machines, therefore, humans must decide what
126. Roff, “The Strategic Robot Problem.”
to predict, and they must create data policies and AI learning plans that detail
who should do what with such predictions.127 The dynamic circumstances
of military operations will require ongoing finessing of the human-machine
teaming relationship.
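The typology above reduces to a simple mapping from the two conditions to the four categories. The sketch below is purely illustrative; the function name and boolean flags are hypothetical conveniences rather than features of any fielded system, and they merely restate the categories as defined in this article.

# Toy restatement of the four categories of AI performance in military tasks.
# The function and flags are hypothetical; they encode the typology described above.
def ai_task_category(quality_data: bool, difficult_judgment: bool) -> str:
    if quality_data and not difficult_judgment:
        return "automated decision-making"   # e.g., administration and logistics
    if not quality_data and difficult_judgment:
        return "human decision-making"       # e.g., strategy and command
    if quality_data and difficult_judgment:
        return "human-machine teaming"       # e.g., intelligence and planning
    return "risk of premature automation"    # clear judgment applied to low-quality data

As with any two-by-two scheme, the boundaries are porous in practice, which is precisely why the division of labor between humans and machines must be revisited as circumstances change.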
Although we agree with the conventional wisdom that AI is potentially transformative, we disagree about what that transformation might be.128 In general, we expect that the strategic, organizational, and ethical complexity of warfare will increase in the AI era. When cheaper prediction is applied in a political context that is as challenging and uncertain as warfare, then quality data and sound judgment become extremely valuable. Adversaries, in turn, will take steps to undermine the quality of data and judgment by manipulating information and violating expectations. Correcting for adversarial countermeasures will further increase the complexity of judgment, which exacerbates the inherent friction and frustration of war.
We must reemphasize that our focus throughout has been on narrow AI, particularly the improvements in machine learning that have led to better, faster, and cheaper predictions. We contend that the recent advances in AI that have led to media attention, commercial applications, and anxiety about civil liberties have very little to do with AGI. Some experts believe that AGI will eventually happen, but this is not what all the current AI hype is about.129 Other experts like Brian Cantwell Smith are outright pessimistic: "Neither deep learning, nor other forms of second-wave AI, nor any proposals yet advanced for third-wave, will lead to genuine intelligence."130 Indeed, the "intelligence" metaphor is very misleading when it comes to understanding what machine learning actually does.131 Advances in narrow AI, in contrast, have led to better, faster, and cheaper predictions. Such AI systems are task-specific.
127. Heuristics are provided in Agrawal, Gans, and Goldfarb, Prediction Machines, pp. 123–151.
128. See, for example, Horowitz, "Artificial Intelligence, International Competition, and the Balance of Power"; and Payne, "Artificial Intelligence."
129. Daniel Kahneman, "Comment on 'Artificial Intelligence and Behavioral Economics,'" in Agrawal, Gans, and Goldfarb, eds., The Economics of Artificial Intelligence, pp. 608–610. Gary Marcus estimated that AGI would arrive between thirty and seventy years from now. See Shivon Zilis et al., "Lighting Round on General Intelligence," panel presentation at Machine Learning and the Market for Intelligence Conference, Creative Destruction Lab, University of Toronto, October 26, 2017, YouTube video, 13:16, https://www.youtube.com/watch?v=RxLIQj_BMhk.
130. Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment (Cambridge: Massachusetts Institute of Technology Press, 2019), p. xiii. See also Harry Collins, Artificial Experts: Social Knowledge and Intelligent Machines (Cambridge: Massachusetts Institute of Technology Press, 1990); and Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (Cambridge: Massachusetts Institute of Technology Press, 2018).
131. A less anthropocentric definition requires a longer discussion about the meanings of intelligence, autonomy, and automation. See Heather M. Roff, "Artificial Intelligence: Power to the People," Ethics & International Affairs, Vol. 33, No. 2 (2019), pp. 124–140, https://doi.org/10.1017/S0892679419000121.
If AGI becomes a reality, then such a machine would also provide its own judgment. AGI would be able to perform the entire decision cycle by itself. In that case, it is not at all clear what role humans would have in warfare beyond suffering the consequences of war.132 We argue that AGI speculation carries the theme of AI substitution to an extreme, whereby a machine would be able to outwit, overpower, and eliminate any actor who tried to prevent it from accomplishing its goal.133 This doomsday scenario is often likened to the "Sorcerer's Apprentice" segment from the movie Fantasia, in which the eponymous apprentice, played by Mickey Mouse, enchants a broom and directs it to fetch water from the well. As Mickey falls asleep, the broom ends up flooding the entire castle. Mickey awakes with alarm and desperately tries to chop up the broom, but this only results in more and better brooms that overwhelm his abilities. An eminently useful tactical task turns into a strategic disaster because of a poorly specified objective. Opinions vary on whether the superintelligence threat should be taken seriously.134 Nevertheless, the Sorcerer's Apprentice scenario dramatizes the importance of judgment for any type of AI. An AI that only cares about optimizing a goal—even though that goal was defined by a human—will not consider the important pragmatic context that humans may care about.
We have defined judgment narrowly in economic terms as the specification of the utility function. The rich concept of judgment, however, deserves further analysis. Just as decision-making can be disaggregated into its components, judgment might also be disaggregated into the intellectual, emotional, and moral capacities that people need to determine what matters and why. Military judgment encompasses not only the Clausewitzian traits of courage, determination, and coup d'oeil, but also a capacity for fairness, empathy, and other elusive qualities. Some wartime situations merit ruthlessness, deviousness, and enmity, while others call for mercy, candor, and compassion. To these character traits must be added the engineering virtues of curiosity, creativity, and elegance insofar as personnel will have to reconfigure AI systems in the field. We expect that the general logic of complementarity will still apply at this more fine-grained level. Any future AI that is able to automate some aspects of judgment, therefore, will make other aspects even more valu-
132. For speculation on the consequences of an AGI that is able to formulate and execute politico-military strategy, see Kenneth Payne, Strategy, Evolution, and War: From Apes to Artificial Intelligence (Washington, DC: Georgetown University Press, 2018).
133. Bostrom, Superintelligence; and Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking, 2019).
134. For discussion see Nathan Alexander Sears, "International Politics in the Age of Existential Threats," Journal of Global Security Studies, Vol. 6, No. 3 (September 2021), https://doi.org/10.1093/jogss/ogaa027.
able. Moreover, the rich phenomenology of judgment, which AI makes more valuable, has important implications for professional military education. More technology should not mean more technocracy. On the contrary, personnel would be wise to engage more with the humanities and reflect on human virtues as militaries become more dependent on AI. In general, reliance on AI will tend to amplify the importance of human leadership and the moral aspects of war.
In the end, we expect that more intensive human-machine teaming will re-
sult in judgment becoming more widely distributed in military organizations,
while strategic competition will become more politically fraught. Whatever the
future of automated warfare holds, humans will be a vital part of it.