
Mending the “Broken Arrow”: Confidence Building Measures at the AI-Nuclear Nexus

Accidents happen. There have been hundreds of accidents involving nuclear weapons, which the military dubs “broken arrow” events. In 1981, the U.S. Department of Defense released an official “comprehensive” list of 32 such events, and details of hundreds more have been published since the end of the Cold War. Yet, despite these close calls and accidents, nuclear weapons are still counted on to deter escalation between nuclear-armed powers. Given the consequences of any nuclear accident and the threat posed by a breakdown in deterrence, even low-risk events must be taken seriously. These stakes have grown in recent months, amidst the war in Ukraine and Russian President Vladimir Putin’s threats to use nuclear weapons if outside powers intervene directly.

While humanity survived the closest calls of the Cold War, and even if most experts believe Putin will not use nuclear weapons in Ukraine, there can be no allowance for complacency when it comes to reducing nuclear danger. Worse, new dangers may be lurking. Advances in artificial intelligence (AI), robotics, cyber, quantum computing, and other technologies are creating new opportunities for countries to revise their command and control and early warning systems, and even the platforms they would use in case of war. There is precedent for using automated systems and automated decision aids in nuclear contexts, yet those systems did not exhibit the degree of autonomy found in today’s conventional weapons systems, and they always featured direct human oversight. However, if states and militaries rush to integrate these new capabilities without sufficient care, they might find themselves ill-prepared for the changes that follow, and those changes could prove destabilizing or disruptive, especially when it comes to strategic stability.

Reducing the risk of nuclear accidents or miscalculations involving AI is in the collective interest of all countries. These shared interests could make cooperation more plausible, even during times of interstate war and geopolitical tension. To lessen the risk of nuclear conflict, the nuclear powers should work together on a new confidence-building measure to ensure positive human control over the use of nuclear weapons. Something as informal as a joint declaration amongst the five major nuclear powers could lower the political stakes in a world of geopolitical competition, even if the document is not legally binding.

A joint declaration could function as a building block for cooperation amongst nuclear powers without requiring countries to sacrifice capabilities, while keeping open the option to update and revise the text as technology matures and practices evolve. While AI shows promise in this domain, the technology has not yet reached a level of maturity where states are confident enough to integrate it into nuclear command and control processes. The nuclear powers, therefore, should reach an agreement now to establish clear norms that disincentivize states from using AI in ways that would destabilize the nuclear balance and increase the likelihood of nuclear use in the future.

Risk and Reward at the AI-Nuclear Nexus

The introduction of AI-enabled autonomous systems into nuclear operations could create substantial risks because of algorithmic limitations. These systems might rely on inadequate or biased training data or exacerbate the cognitive biases of the humans who use them. Accidents can even emerge from interoperability issues, which arise from introducing increasingly complex functions into existing command and control systems. Together, these factors could result in a loss of positive human control over nuclear weapons, thereby opening the door to potential accidents, unintentional conflict, and inadvertent escalation.

Artificial intelligence and other emerging technologies, when introduced into military and nuclear contexts, could disrupt strategic stability and raise the risk of accidents, unintended escalation, or even conflict. The risk is higher because algorithms generate efficiencies and compress the decision-making timeline by processing huge swaths of data quickly. In short: algorithms can process data faster than a human and remove the need to have a person complete repetitive, boring tasks. These advantages create an incentive to remove humans from the command and control loop to increase efficiency and to lighten the burden this flood of data would otherwise place on operators.

Unsurprisingly, militaries across the globe are building their AI and broader technological capacity to capitalize on the potential competitive advantages artificial intelligence promises. The United States is revamping its data and AI hubs within the Department of Defense to better prepare the U.S. military to leverage this technology in tasks ranging from human resources to warfighting. China is working to “intelligentize” its military, and AI is already being used in new ways in the Russia-Ukraine conflict, from identifying the deceased, to analyzing radio transmissions and satellite imagery, to creating deepfakes, to targeting artillery more efficiently. A natural extension of this trend would see the introduction of artificial intelligence into nuclear command and control procedures, early warning systems, and uncrewed nuclear delivery systems. The hope is that integrating more advanced artificial intelligence and autonomous programs into these areas would reduce the likelihood of human error or poor judgment in high-stakes scenarios. Theoretically, these algorithms would make broken-arrow events much less likely and improve the safety of nuclear systems and launch processes.

A Human in the Loop

The risk of failure is not hypothetical. Automated systems have failed in the past, but always with a human in the loop able to use judgment and stop a cascade of errors and potential nuclear catastrophe. For example, in 1983, Lieutenant Colonel Stanislav Petrov, the duty officer at a Soviet early-warning center responsible for registering enemy missile launches, received a false alert about a U.S. nuclear attack. This warning should have prompted procedures for potential Soviet retaliation. However, Petrov reported the warning up the chain of command as an error, rather than blindly trusting the system, ultimately serving as the last line of defense.

The danger, of course, is that with developments in AI, people like Stanislav Petrov won’t be in a position to act as a stopgap when technology fails. As artificial intelligence matures, algorithms could greatly improve early-warning capabilities. For example, computer vision can identify and process a huge range of data and situations, such as unusual troop or equipment movements, sort through greater quantities of intelligence more quickly, and even be used to predict developments related to nuclear weapons. The data, in this sense, can paint a picture of escalation, but the algorithm could be susceptible to misinterpreting human actions, leading to a data-driven decision to raise alert status or even use nuclear weapons. In a perfect world, AI would eliminate some of the risks and unpredictability of a human having to determine an adversary’s intent, thereby making deterrence more stable. However, with imperfect data and complex systems, AI could very well make nuclear dynamics more unstable, thereby raising the risk of nuclear use.
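
To make that failure mode concrete, consider a minimal, purely illustrative sketch of how an early-warning pipeline could be structured so that a classifier’s output never changes alert status on its own. Every label, threshold, and function name below is hypothetical, not drawn from any real system.

```python
# Illustrative sketch only: a hypothetical early-warning classifier whose
# output is always routed through a human analyst before alert status can
# change. Labels, thresholds, and names are invented for clarity.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str        # e.g., "possible_missile_launch" or "sensor_glitch"
    confidence: float  # model's estimated probability, between 0.0 and 1.0


def triage(detection: Detection, review_threshold: float = 0.99) -> str:
    """Return a recommendation; a human always makes the final call."""
    if (detection.label == "possible_missile_launch"
            and detection.confidence >= review_threshold):
        # Even a high-confidence detection only recommends escalation for
        # human review; it cannot raise alert status by itself.
        return "recommend_raising_alert_pending_human_review"
    # Imperfect data or a misread of routine activity lands here, preserving
    # room for a Petrov-style judgment call.
    return "flag_for_human_analysis"


print(triage(Detection("possible_missile_launch", confidence=0.62)))
# -> flag_for_human_analysis
```

The point of the sketch is simply that where the human sits in the pipeline is a design choice: the same model outputs can either trigger automated escalation or merely inform a person who retains the authority to disagree.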

Command and control systems already rely on a degree of automation, and algorithms could make recommendations to commanders about options for changing posture in a crisis. These changes would also have an impact on new, AI-enabled weapons that have been developed to address concerns about ensuring retaliatory capabilities against a technologically superior adversary. The allure of uncrewed systems is that they remove humans from the operation of a weapon, so a weapon designer can build systems with greater endurance and range. For deterrence dynamics, an uncrewed, nuclear-armed system can be designed to be more survivable, especially against purpose-built systems intended to defend against a nuclear attack.

The benefit for leaders is that more capable weapons could help decrease fears of falling victim to a bolt-from-the-blue first strike on their nuclear arsenals. These uninhabited delivery platforms can also be recalled, redirected, or launched quickly. The benefits of this type of platform for a second-strike capability are obvious, and an argument could be made that they can enhance deterrence. Yet artificial intelligence also has the potential to create failure “cascades,” in which increasingly complex systems allow minor accidents to become major ones as human operators lose more and more control of the systems.

This creates unique challenges for publicizing what capabilities and weapons a military has and for correcting mistakes in conflict. When something goes wrong, it could be difficult to determine whether an action was planned, particularly if the automated systems that made the decision are poorly understood by the very states using them. Additionally, AI is still a relatively nascent field for militaries and has not yet had time to earn the trust of the AI researchers, operators, and military leaders needed to integrate it successfully into these institutions. Therefore, prematurely introducing AI capabilities into nuclear contexts could do the very opposite of what policymakers hope to achieve, undermining strategic stability rather than strengthening it.

Confidence-Building Measures Can Reduce Nuclear Risk From AI

One way to decrease the risk of a nuclear accident or miscalculation in an age of artificial intelligence would be for the nuclear powers to agree to always keep a human in the loop for nuclear command and control, with final human confirmation required for a nuclear launch. This has been the standard practice for decades, so codifying it would not require states to make any concessions to an adversary.
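
As a purely illustrative sketch of what “final human confirmation” means as a design constraint, the hypothetical code below shows a decision aid that can only ever return a recommendation, with the sole path to action running through an explicit human approval step. The function and field names are invented and do not describe any real command and control system.

```python
# Illustrative sketch only: "positive human control" expressed as a software
# pattern in which no action can be taken without explicit human approval.
# All names are hypothetical and do not describe any real system.
from dataclasses import dataclass


@dataclass
class Recommendation:
    summary: str  # what the decision aid suggests, and on what evidence


def confirmed_by_human(rec: Recommendation) -> bool:
    """Block until a human operator explicitly approves or rejects."""
    answer = input(f"Decision aid recommends: {rec.summary}. Approve? (yes/no) ")
    return answer.strip().lower() == "yes"


def act_on(rec: Recommendation) -> str:
    # The only path to action runs through a person; absent an explicit
    # "yes," the default is always inaction.
    if confirmed_by_human(rec):
        return "action authorized by human operator"
    return "no action taken"
```

The design choice being illustrated is that the confirmation step is structural rather than optional: the software has no branch that acts without a person, which is the norm the proposed declaration would codify.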

The closest analog to a non-human-in-the-loop command and control system was the Soviet Union’s Perimeter system, or “dead hand.” This system was designed as a last-resort option to carry out mutually assured destruction by automatically launching long-range missiles if there was a nuclear attack on the Soviet Union. However, the system merely delegated launch authority automatically from Soviet leadership to more junior officers if it stopped receiving signals from that leadership. It was still up to those junior officers to decide whether to use nuclear weapons. Especially given the additional complexity and potential for false positives introduced by machine learning, committing to keep a human engaged in command and control could reduce the risk of accidents and miscalculations. This would not be a deviation from current nuclear command and control practices, as all nuclear-armed states have humans working in tandem with automated decision aids to launch nuclear weapons. There is also a consensus among nuclear-armed states that humans should control when and how nuclear weapons are used.

Another critical element of a nuclear-focused confidence-building measure would be an agreement by nuclear states to avoid placing nuclear weapons on uncrewed platforms, especially autonomous platforms. The primary delivery systems for American and Russian nuclear forces are long-range ballistic missiles. These delivery vehicles are, of course, uncrewed. However, the authority to launch them runs through a human-centric command chain, beginning with the leader of each country and ending with the launch officers.

For aircraft- and submarine-delivered weapons, there are programs and plans amongst all the nuclear powers to rely further on uncrewed systems. We are already seeing early development work on systems that blur the lines, such as Russia’s Poseidon nuclear-capable underwater drone. While this system may not yet be deployed, it is indicative of how new systems with the potential to operate autonomously can erode the distinction between crewed and uncrewed delivery platforms.

For the United States, the Air Force has indicated that its future, optionally crewed bomber, the B-21, will be armed with nuclear weapons only when flown by a human crew. Further, in 2019, Lt. Gen. Jack Shanahan, the founding leader of the Department of Defense’s Joint Artificial Intelligence Center, argued that the United States should not and would not give algorithms control over U.S. nuclear command and control. The United Kingdom, too, has emphasized its prioritization of “human-machine teaming” in its defense artificial intelligence strategy, while France has highlighted a similar commitment to subjecting all weapons systems to human command, regardless of the degree of autonomy.

How to Move Forward

The war in Ukraine illustrates that, for better or worse, nuclear threats will remain a part of international politics. To the extent that fully automating early warning or command and control, or placing nuclear weapons on uninhabited platforms, might increase the risk of nuclear use, all states, and especially nuclear-armed states, have an incentive to decrease that risk. Moreover, because no country yet integrates machine learning into its early warning, command and control, or nuclear-armed platforms, these confidence-building measures would not undermine strategic stability by reducing existing capabilities. Instead, they would help ensure that any integration of AI into nuclear operations occurs more safely and responsibly.

A confidence-building measure focused on agreeing to positive human control over nuclear weapons would reduce nuclear danger. By requiring human engagement in early warning, in command and control, and on platforms armed with nuclear weapons, the world could ensure that the next generation of nuclear operations does not increase the risk of accidents or miscalculation for technical reasons. Ideally, this would be captured in a multilateral treaty. However, the prospect of such an agreement is low, given the tensions between nuclear-armed powers. Furthermore, we are still at the dawn of the military application of machine learning, and setting rigid guidelines now may create limits that decrease stability or prove inappropriate later, so any agreement would need to be flexible.

Confidence-building measures such as a joint declaration can help establish aspirations for AI in the nuclear realm and begin to narrow the range of possibilities, which is especially critical given the wide-ranging nature of AI. They are therefore the most plausible and actionable path forward. They allow for the establishment of guidelines with the potential to expand as technologies, needs, and circumstances involving militarized AI mature. Additionally, such confidence-building measures avoid prematurely restricting states, which is essential given the early stage of military artificial intelligence research and development. The Cold War demonstrates that even staunch adversaries can agree to reduce nuclear danger when they have shared interests. In light of the war between Russia and Ukraine, those shared interests appear clearer than ever.

Lauren Kahn is a research fellow at the Council on Foreign Relations, where her work focuses on defense innovation and the impact of emerging technologies on international security, with a particular emphasis on artificial intelligence.

Image: U.S. Air Force photo by Senior Airman Jonathan McElderry

War on the Rocks

