
Nuclear Brinkmanship in AI-Enabled Warfare: A Dangerous Algorithmic Game of Chicken

Russian nuclear saber-rattling and coercion have loomed large throughout the Russo-Ukrainian War. This dangerous rhetoric has been amplified and radicalized by AI-powered technology — “false-flag” cyber operations, fake news, and deepfakes. Throughout the war, both sides have invoked the specter of nuclear catastrophe, including false Russian claims that Ukraine was building a “dirty bomb” and President Volodymyr Zelensky’s allegation that Russia had planted explosives to cause a nuclear disaster at a Ukrainian power plant. The world is once again forced to grapple with the psychological effects of the most destructive weapons humanity has ever known, in a new era of nuclear brinkmanship.

Rapid AI technological maturity raises the issue of delegating the launch authority of nuclear weapons to AI (or non–human-in-the-loop nuclear command and control systems), a prospect viewed simultaneously as dangerous and potentially stabilizing. This potential delegation is dangerous because weapons could be launched accidentally. It is potentially stabilizing because an adversary would be less likely to contemplate a nuclear strike if it knew that retaliation would benefit from autonomy, machine speed, and precision. For now, at least, there is a consensus amongst nuclear-armed powers that the devastating outcome of an accidental nuclear exchange outweighs any potential benefits of automating the retaliatory launch of nuclear weapons.

Regardless, it is important to grapple with a question: How might AI-enabled warfare affect human psychology during nuclear crises? Thomas Schelling’s theory of “threat that leaves something to chance” (i.e., the risk that military escalation cannot be entirely controlled) helps analysts understand how and why nuclear-armed states can manipulate risk to achieve competitive advantage in bargaining situations and how this contest of nerves, resolve, and credibility can lead states to stumble inadvertently into war. How might the dynamics of the age of AI affect Schelling’s theory? Schelling’s insights on crisis stability between nuclear-armed rivals in the age of AI-enabling technology, contextualized with the broader information ecosystem, offer fresh perspectives on the “AI-nuclear dilemma” — the intersection of technological change, strategic thinking, and nuclear risk. 

In the digital age, the confluence of increased speed, truncated decision-making, dual-use technology, reduced levels of human agency, critical network vulnerabilities, and dis/misinformation injects more randomness, uncertainty, and chance into crises. This creates new pathways for unintentional (accidental, inadvertent, and catalytic) escalation to a nuclear level of conflict. New vulnerabilities and threats (perceived or otherwise) to states’ nuclear deterrence architecture in the digital era will become novel generators of accidental risk — mechanical failure, human error, false alarms, and unauthorized launches. 

These vulnerabilities will make current and future crises (Russia-Ukraine, India-Pakistan, the Taiwan Strait, the Korean Peninsula, the South China Sea, etc.) resemble a multiplayer game of chicken, in which Schelling’s “something to chance” coalesces with contingency, uncertainty, luck, and the fallacy of control under the nuclear shadow. In this dangerous game, either side can increase the risk that a crisis unintentionally blunders into nuclear war. Put simply, the risks of nuclear-armed states leveraging Schelling’s “something to chance” in AI-enabled warfare outweigh any likely bargaining benefits of brinkmanship.

Doomsday Machine: Schelling’s “Little Black Box”

How might different nuclear command, control, and communication structures affect the tradeoff between chance and control? Research suggests that chance is affected by the failure of both the positive control (features and procedures that enable nuclear forces to be released when the proper authority commands it) and negative control (features that inhibit their use otherwise) of nuclear weapons. For instance, some scholars have debated the impact on crisis stability and deterrence of further automation of the nuclear command, control, and communication systems, akin to a modern-day Doomsday Machine such as Russia’s Perimetr (known in the West as “the Dead Hand”) — a Soviet-era automated nuclear retaliatory launch system, which some media reports claim now uses AI technology.

On the one hand, from a rationalist perspective, because the response of an autonomous launch device (Schelling’s “little black box”) would be contingent on an adversary’s actions — and presumably clearly communicated to the other side — strategic ambiguity would be reduced and thus its deterrence utility enhanced. In other words, the “more automatic it is, the less incentive the enemy has to test my intentions in a war of nerves, prolonging the period of risk.” In the context of mutually assured destruction, only the threat of an unrecallable weapon — activating on provocation no matter what — would be credible and thus effective. Besides, such an autonomous machine would obviate the need for a human decision-maker to remain resolute in fulfilling a morally and rationally fraught threat and, by removing any doubt that the morally maximizing instincts of a free human agent in the loop might override it, would ensure the deterrent threat is credible.

On the other hand, from a psychological perspective, by removing human agency entirely (i.e., once the device is activated there is nothing a person can do to stop it), the choice to escalate (or deescalate) a crisis falls to machines’ preprogrammed and unalterable goals. Such a goal, in turn, “automatically engulfs us both in war if the right (or wrong) combination comes up on any given day” until the demands of an actor have been complied with. The terrifying uncertainty, chance, and contingency that would follow from this abdication of choice and control over nuclear detonation to a nonhuman agent — even if the device’s launch parameters and protocols were clearly advertised to deter aggression — would grow, as would the risk of both positive failures (e.g., left-of-launch cyber attacks, drone swarm counterforce attacks, data poisoning) and negative failures (e.g., false flag operations, AI-augmented advanced persistent threats, spoofing) of nuclear command, control, and communication systems.

Moreover, fully automating the nuclear launch process (i.e., acting without human intervention in target acquisition, tracking, and launch) would not only circumvent the moral requirements of just war theory — for example, by removing the legal fail-safes designed to prevent conflict and protect the innocent — but also violate the jus ad bellum requirement of proper authority and thus, in principle, be illegitimate.

In sum, introducing uncertainty and chance about how an actor might respond to various contingencies (i.e., keeping the enemy guessing) — and assuming clarity exists about an adversary’s intentions — may have some deterrent utility. If, unlike “madman” tactics, the outcome is in part or entirely determined by exogenous mechanisms and processes — ostensibly beyond the control and comprehension of leaders — genuine and prolonged risk is generated. As a counterpoint, a threat that derives from factors external to the participants might become less of a test of wills and resolve between adversaries, thus making it less costly — in terms of reputation and status — for one side to step back from the brink.

Human Psychology and the “Threat that Leaves Something to Chance” in Algorithmic War

In The Illogic of American Nuclear Strategy, Robert Jervis writes that “the workings of machines and the reaction of humans in time of stress cannot be predicted with high confidence.” Critics note that while “threats that leave something to chance” introduce the role of human behavioral decision-making into thinking about the threat credibility of coercion, the problem of commitment, and the manipulation of risk, Schelling’s research disproportionately relies on economic models of rational choice. Some scholars criticize Schelling’s core assumptions in other ways.

Two cognitive biases predispose leaders to underestimate accidental risk during crisis decision-making. First, as already described, is the “illusion of control,” which can make leaders overconfident in their ability to control events in ways that risk escalating a crisis or conflict, especially inadvertently or accidentally. Second, leaders tend to view adversaries as more centralized, disciplined, and coordinated, and thus more in control of events, than they actually are.

Furthermore, “threats that leave something to chance” neglect the emotional and evolutionary value of retaliation and violence, which are vital to understanding the processes that underpin Schelling’s theory. According to Schelling, inflicting suffering gains or protects nothing directly; instead, “it can only make people behave to avoid it.” McDermott et al. argued in the Texas National Security Review that “the human psychology of revenge explains why and when policymakers readily commit to otherwise apparently ‘irrational’ retaliation” — central to the notion of second-strike nuclear capacity. According to economic-rational models, because a second strike cannot prevent the atomic catastrophe that has already occurred, retaliation has no logical basis.

An implicit assumption undergirds the notion of deterrence — in the military and other domains — that sufficiently strong motives for retaliation exist: even if no strategic upside accrues from launching a counterattack, an adversary should expect one nonetheless. Another paradox of deterrence is that it rests on threatening to attack an enemy if it misbehaves; if the threat convinces the other side, the damage that would be inflicted on the challenger is of little consequence. In short, deterrence is intrinsically a psychological phenomenon. It uses threats to manipulate an adversary’s risk perceptions to persuade it against the utility of responding with force.

Human emotion — psychological processes involving subjective change, appraisals, and intersubjective judgments that strengthen beliefs — and evolution can help explain how uncertainty, randomness, and chance are inserted into a crisis despite “rational” actors retaining a degree of control over their choices. Recent studies on evolutionary models — that go beyond traditional cognitive reflections — offer fresh insights into how specific emotions can affect credibility and deterrence. In addition to revenge, other emotions such as status-seeking, anger, fear, and even a predominantly male evolutionary predisposition for the taste of blood once a sense of victory is established accompany the diplomacy of violence. Thus, the psychological value attached to retaliation can also affect leaders’ perceptions, beliefs, and lessons from experience, which inform choices and behavior during crises. Schelling uses the term “reciprocal fear of surprise attack” — the notion that the probability of a surprise attack arises because both sides fear the same thing — to illuminate this psychological phenomenon.

A recent study on public trust in AI, for instance, demonstrates that age, gender, and specialist knowledge can affect people’s risk tolerance in AI-enabled applications, including AI-enabled autonomous weapons and crime prediction. These facets of human psychology may also help explain the seemingly paradoxical coexistence of advanced weapon technology that promises speed, distance, and precision (i.e., safer forms of coercion) with a continued penchant for intrinsically human contests of nerves at the brink of nuclear war. Emotional-cognitive models do not, however, necessarily contradict the classical rational-based ones. Instead, these models can inform and build on rational models by providing critical insights into human preferences, motives, and perceptions from an evolutionary and cognitive perspective.

Leaders operating in different political systems and temporal contexts will, of course, exhibit diverse ranges of emotional awareness and thus varying degrees of ability to regulate and control their emotions. Moreover, because disparate emotional states can elicit different perceptions of risk, leaders can become predisposed to overstate their ability to control and shape events, understate the role of luck and chance, and misperceive others’ intentions. For instance, frightened individuals are generally more risk-averse in their decisions and behavior than people gripped by rage or a desire for revenge, who are prone to misdiagnose the nature of the risks they encounter.

The nuclear deterrence literature posits a fear-induced deterrent effect: the deterrent power of nuclear weapons is premised on nonrational fear (or “existential bias”) rather than rational risk calculation, initiating an iterative learning process that enables existential deterrence to operate. Whatever the cognitive origins of these outlooks — an area about which we still know very little — they will nonetheless have fundamental effects on leaders’ threat perceptions and cognitive dispositions.

Actors are influenced by both motivated (“affect-driven”) and unmotivated (“cognitive”) biases when they judge whether the other side poses a threat. Moreover, the impact of these psychological influences is ratcheted up during times of stress and crisis in ways that can distort an objective appreciation of threats and thus limit the potential for empathy. Individuals’ perceptions are heavily influenced by their beliefs about how the world functions, and the patterns, mental constructs, and predispositions that emerge from these beliefs shape how new information is interpreted. Jervis writes: “The decision-maker who thinks that the other side is probably hostile will see ambiguous information as confirming this image, whereas the same information about a country thought to be friendly would be taken more benignly.”

At the group level, an isolated attack by a member of an out-group is often used to scapegoat the entire group, ascribing to it an “enemy image” (monolithic, evil, opportunistic, cohesive, etc.) as a unitary actor and inciting the commitment, resolve, and strength needed for retribution — referred to by anthropologists as “third-party revenge” or “vicarious retribution.” In international relations, these intergroup dynamics, which can mischaracterize an adversary and “enemy” — whose beliefs, images, and preferences invariably shift — risk the escalatory rhetoric, arms racing, and retaliatory behavior associated with the security dilemma.

While possessing the ability to influence intergroup dynamics (frame events, mobilize political resources, influence the public discourse, etc.), political leaders tend to be particularly susceptible to out-group threats and thus more likely to sanction retribution for an out-group attack. A growing body of social psychology literature demonstrates that the emergence, endorsement, and, ultimately, the influence of political leaders depend on how they embody, represent, and affirm their group’s (i.e., the in-group) ideals, values, and norms — and on contrasting (or “metacontrasting”) how different these are from those of out-groups.

The digital era, characterized by mis/disinformation and by social media–fueled “filter bubbles” and “echo chambers” — rapidly diffused by automated social bots and hybrid cyborg accounts — is compounding the effects of inflammatory, polarizing falsehoods used to support anti-establishment candidates in highly populist and partisan environments such as the 2016 and 2020 U.S. elections and the 2016 Brexit referendum. According to social identity scholars Alexander Haslam and Michael Platow, there is strong evidence to suggest that people’s attraction to particular groups and their subsequent identity-affirming behavior are driven “not by personal attraction and interest, but rather by their group-level ties.” These group dynamics can expose decision-makers to increased “rhetorical entrapment” pressures, whereby alternative policy options (viable or otherwise) may be overlooked or rejected.

Most studies suggest a curvilinear trajectory in the efficiency of making decisions during times of stress. Several features of human psychology affect our ability to reason under stress. First, the large amount of information available to decision-makers is generally complex and ambiguous during crises. Machine-learning algorithms are on hand in the digital age to collate, statistically correlate, parse, and analyze vast big-data sets in real time. Second, and related, time pressures during crises place a heavy cognitive burden on individuals. Third, people working long hours with inadequate rest, and leaders enduring the immense strain of making decisions that have potentially existential implications (in the case of nuclear weapons), add further cognitive impediments to sound judgment under pressure. Taken together, these psychological impediments can hinder the ability of actors to send and receive nuanced, subtle, and complex signals to appreciate an adversary’s beliefs, images, and perception of risk — critical for effective deterrence.

Although AI-enabled tools can improve battlefield awareness and, prima facie, afford commanders more time to deliberate, they come at strategic costs, not least accelerating the pace of warfare and compressing the timeframe available to decision-makers. AI tools also offer a potential means to reduce (or offload) people’s cognitive load and thus ease crisis-induced stress, as well as people’s susceptibility to cognitive bias, heuristics, and groupthink. However, the tendency to solicit fewer wide-ranging opinions and consider fewer alternatives during a crisis is unlikely to be remedied by introducing new whiz-bang technology. Thus, further narrowing the window for reflection and discussion compounds existing psychological processes that can impair effective crisis (and noncrisis) decision-making, namely, avoiding difficult tradeoffs, viewing adversaries with limited empathy, and misperceiving the signals that others are conveying.

People’s judgments rely on capacities such as reasoning, imagination, examination, reflection, social and historical context, experience, and, importantly for crises, empathy. According to philosopher John Dewey, the goal of judgment is “to carry an incomplete [and uncertain] situation to its fulfillment.” Human judgments, and the decisions that flow from them, have an intrinsic moral and emotional dimension. Machine-learning algorithms, by contrast, generate decisions by digesting datasets through an accumulation of calculus, computation, and rule-driven rationality. As AI advances, substituting fuzzy machine logic for human judgment, humans will likely cling to the illusory veneer of retaining control and agency over these systems. Thus, error-prone and flawed AI systems will continue to produce unintended consequences in fundamentally nonhuman ways.

In AI-enabled warfare, the confluence of speed, information overload, complex and tightly coupled systems, and multipolarity will likely amplify the existing propensity for people to eschew nuance and balance during crises in order to keep complex and dynamic situations heuristically manageable. Therefore, mistaken beliefs about and images of an adversary — derived from pre-existing beliefs — may be compounded rather than corrected during a crisis. Moreover, crisis management conducted at indefatigable machine speed — compressing decision-making timeframes — and nonhuman agents enmeshed in the decision-making process will mean that even if unambiguous information emerges about an adversary’s intentions, time pressures will likely filter out (or restrict entirely) subtle signaling and the careful deliberation of diplomacy. Thus, the difficulty actors face in simultaneously signaling resolve on an issue and a willingness for restraint — that is, signaling that they will hold fire for now — will be complicated exponentially by the cognitive and technical impediments of introducing nonhuman agents to engage in (or supplant) fundamentally human endeavors.

Furthermore, cognitive studies suggest that the allure of precision, autonomy, speed, scale, and lethality, combined with people’s predisposition toward anthropomorphism, cognitive offloading, and automation bias, may lead people to view AI as a panacea for the cognitive fallibilities of human analysis and decision-making described above. People’s deference to machines (which preceded AI) can result from the presumption that (a) decisions result from hard, empirically based science or (b) AI algorithms function at speeds and complexities beyond human capacity, or (c) from a fear of being overruled or outsmarted by machines. Therefore, it is easy to see why people would be inclined to view an algorithm’s judgment (both to inform and to make decisions) as authoritative, particularly as human decision-making and judgment and machine autonomy interface — at various points across the continuum — at each stage of the kill chain.

Managing Algorithmic Brinkmanship

Because of the limited empirical evidence available on nuclear escalation, threats, bluffs, and war termination, the arguments presented here (much like Schelling’s own) are mostly deductive. In other words, conclusions are inferred by reference to various plausible (and contested) theoretical laws and statistical reasoning rather than derived from empirical observation. Robust falsifiable counterfactuals that offer imaginative scenarios to challenge conventional wisdom, assumptions, and human bias (hindsight bias, heuristics, availability bias, etc.) can help fill this empirical gap. Counterfactual thinking can also avoid the trap of historical and diplomatic telos that retrospectively constructs a path-dependent causal chain, one that often neglects or rejects the role of uncertainty, chance, luck, overconfidence, the “illusion of control,” and cognitive bias.

Furthermore, AI machine-learning techniques (modeling, simulation, and analysis) can complement counterfactuals and low-tech table-top wargaming simulations to identify the contingencies under which “perfect storms” might form — not to predict them, but rather to challenge conventional wisdom, expose bias and inertia, and, ideally, mitigate these conditions. American philosopher William James wrote: “Concepts, first employed to make things intelligible, are clung to often when they make them unintelligible.”

James Johnson is a lecturer in strategic studies at the University of Aberdeen. He is also an honorary fellow at the University of Leicester, a nonresident associate on the European Research Council–funded Towards a Third Nuclear Age Project, and a mid-career cadre with the Center for Strategic Studies Project on Nuclear Issues. He is the author of AI and the Bomb: Nuclear Strategy and Risk in the Digital Age (Oxford University Press, 2023). His latest book is The AI Commander: Centaur Teaming, Command, and Ethical Dilemmas (Oxford University Press, 2024). You can follow him on X: @James_SJohnson.

Image: U.S. Air Force photo by Airman First Class Tiarra Sibley

