
How to Make Military AI Governance More Robust

AI-enabled warfare has reached its “Oppenheimer moment.” From the backroom to the battlefield, AI is now being integrated into the full spectrum of military operations, including in logistics, intelligence collection, wargaming, decision-making, target identification, and weapons systems, with increasing levels of autonomy. The Ukrainian military is flying AI-enabled drones; the Israel Defense Forces are relying on AI to accelerate and expand targeting in Gaza; and the Pentagon is using AI to identify targets for airstrikes. The military AI revolution has arrived, and the debate over how it will be governed is heating up.

To navigate this moment responsibly, and at speed, policymakers are racing to develop AI governance frameworks even as AI tools are deployed on the battlefield. In the United States, the Biden administration’s executive order on AI directs the U.S. government to prepare a memorandum on military and intelligence uses of the technology, which is expected to be finalized soon. The Trump campaign has vowed to rescind that order and is reportedly planning a series of “Manhattan Projects” on military AI alongside a rollback of “burdensome regulations.” On the international stage, the United States is working with like-minded countries to expand the first-ever international agreement on military use of AI — a non-legally binding declaration — ahead of the second Summit on Responsible Artificial Intelligence in the Military Domain in September.

As AI pervades the battlespace, it is time to implement policies and forge consensus around how it will be governed. And while policy debates have finally moved beyond lethal autonomous weapons systems, governance frameworks still suffer from a narrow focus on military operations and international humanitarian law, leaving critical gaps in protection for civilians. Building on the international agreement, policymakers have a rapidly closing window of opportunity to address these problems and ensure that military AI is truly safe — on and off the battlefield.

Myths and Misconceptions

Before governance issues can be addressed, several myths and misconceptions about military AI must be dispelled. First, there is a tendency to conceive of military AI as though it were a monolithic category, akin to nuclear weapons or ballistic missiles. But AI is a general-purpose technology, encompassing a wide range of use cases and applications. Laws, norms, or policies that are adequate for one application may be inapplicable or inappropriate for another. While “military AI” might be a useful shorthand, meaningful discussions on AI governance must consider specific use cases, or at least clusters of use cases that share similar characteristics.

Second, military AI is not limited to military organizations. States rely on military, intelligence, and diplomatic services to shape the battlespace and gain asymmetric advantages. Each of these communities leverages different AI applications to carry out its specific mission. Intelligence agencies, in particular, are relying on AI to collect, triage, and analyze vast amounts of data in support of military operations. Any governance regime that focuses exclusively on operational military use, then, will fail to put essential safeguards in place at critical points in the “kill chain” or other decision-making processes.

Finally, the concept of “responsible AI” should not be the only touchstone for governance and regulation. As Kenneth Payne argued at a recent Wilton Park dialogue in which I participated, AI will change the strategic balance of power, giving rise to new security dilemmas in which the most responsible course of action may be not to regulate beyond what the law already requires. Should policymakers bind themselves to more stringent standards when AI affords adversaries decisive strategic advantages in war? What is the responsible course of action in such a scenario?

More broadly, when it comes to national security, the ambiguity of the “responsible AI” framing leaves a great deal of room for interpretation, undermining efforts to establish global consensus. As we have seen in the cyber domain in recent decades, concepts such as “responsibility,” “security,” and “rights” often do not translate across geopolitical divides.

Responsible AI is not the same as AI governance. And as we race to create shared understandings of what AI governance should look like, it is important to keep in focus that governance is about more than just law.

Governance as International Law

Discussions of military AI governance tend, myopically, to start with law. For more than a decade, states have debated the merits of concluding a new treaty or instituting a ban on lethal autonomous weapons systems in international institutions and fora, including the United Nations Convention on Certain Conventional Weapons. Yet little progress has been made. States are reluctant to ban AI applications that could have significant strategic utility, and they are unwilling to be transparent about sensitive tools used in military and intelligence operations. Given these realities and rising geopolitical tensions, the prospects for an international treaty or an outright ban appear slim.

Instead, states have seized upon existing law, and international humanitarian law in particular, as the standard for regulating military AI. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which has been endorsed by more than 50 countries, emphasizes that the “use of AI in armed conflict must be in accord with states’ obligations under international humanitarian law, including its fundamental principles.” The United States, United Kingdom, and Russia, among other states, repeatedly have stressed that international humanitarian law is sufficient to regulate lethal autonomous weapons systems.

That view is largely correct. Nothing about AI formally or functionally changes the obligations that states have under domestic or international law. AI, like other technologies that have come before it, is an enabler of war — not a weapon in and of itself. AI is a tool. The effects of that tool, and the laws governing it, depend on how it is used. So long as AI is used in military operations that are compliant with international law, existing law theoretically should provide adequate protection.

It would be a category mistake, however, to treat international humanitarian law as the only framework for regulating defense-related AI applications. Many AI systems have no direct connection to killing in war, even if they increase its pace, scale, or scope. How should these tools be regulated? Article 36 of Additional Protocol I to the Geneva Conventions, for example, requires states to subject all “weapons, means or methods of warfare” to legal review to ensure compliance with international humanitarian law. But non-weaponized AI applications such as decision support systems would not necessarily be subject to the same rules. Similarly, international humanitarian law simply is not the appropriate body of law for the development and deployment of AI applications used in intelligence activities that merely contribute to military operations.

As military AI becomes more ubiquitous, its deployment outside of clearly defined armed conflicts becomes more important to regulate. In these settings, adherence to more protective standards — notably, international human rights law — is required. The U.N. General Assembly recognized in a non-binding resolution in March that states must “refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law.” But that resolution did not explicitly address military and intelligence applications of AI. When AI is used outside of armed conflict, the more protective legal regime of international human rights law must apply. Otherwise, as I have argued elsewhere, the exceptional rules that are said to apply in war risk becoming the default regime across the spectrum of military AI use cases.

Even though existing law applies to AI, improper use of AI tools may still contribute to serious violations of international law — potentially on a larger scale and at a faster pace than would be possible without such tools. This is especially true in dynamic targeting and fast-paced conflict environments, where cognitive overload and automation bias impede careful review of AI outputs. Misuse of, or unquestioning reliance on, AI decision support systems may lead human operators to misidentify civilians as combatants and strike them, violating the principle of distinction. If misidentification occurs on a sufficiently large scale, operators may launch strikes that kill civilians out of proportion to the concrete and direct military advantage anticipated.

Even if the above issues are addressed, the law will still fall short of robust AI governance because of acute verification and enforcement problems. In contrast to drone strikes, it is virtually impossible to verify whether and when AI tools have been used in war, because they do not necessarily alter the physical signature of weapons systems, which often switch between AI-enabled and non-AI-enabled modes. This means it may be difficult for external observers to discern whether and when AI, as opposed to the underlying system operating without it, has contributed to a breach of international law. It also underscores the difficulty of putting adequate enforcement mechanisms in place, since defection is far easier than under conventional arms control regimes.

Ultimately, the most pernicious myth about military AI is that there is clear agreement on how international law regulates its use, and that existing law is sufficient to ensure responsible use across the full spectrum of AI applications. Similar to global debates on international law protections in cyberspace, states must develop clear positions on when and how relevant legal regimes apply in complex cases. Additional policy guidance is urgently needed to ensure that international humanitarian law is correctly interpreted and applied, as well as to strengthen safeguards around AI applications that fall outside of what international humanitarian law governs.

Beyond the Law

As with other emerging technologies, the solution to AI governance problems may lie more at the level of policy than law. Policy guidance, political declarations, rules of engagement, and codes of conduct play a crucial role in areas where the law is difficult to interpret or apply. Non-legally binding policies also help build consensus on international norms surrounding the development and deployment of military AI.

The Political Declaration represents an important step toward achieving that goal. But the principles contained in that agreement must be strengthened to fully realize its potential. As policymakers prepare for the 2024 Responsible AI in the Military Domain summit, concrete steps must be taken to address the myths and misconceptions surrounding military AI that impede effective governance.

As a starting point, states should expand the scope and ambition of the Political Declaration. At present, the principles apply primarily to Western countries and the military organizations within them. Key powers — notably China, Russia, and Israel — have not endorsed it. That is problematic considering that Ukraine and Gaza are testing grounds for military AI right now.

For those countries that have endorsed the Political Declaration, the scope should be expanded to include all defense-related applications of AI, both within and outside of the military chain of command. The declaration currently only requires that states “ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.” This implies that intelligence agencies that are not part of military organizations, such as the Central Intelligence Agency or Mossad, are under no obligation to adhere to the principles contained in the declaration, even if they are using AI to identify targets for military action.

Relatedly, states should take steps to clarify how the law applies across the spectrum of AI use cases. This is crucial for ensuring broader compliance with international humanitarian law, international human rights law, and all other applicable legal regimes.

As states work to implement the Political Declaration, it is imperative to share best practices for mitigating risks in specific use cases, including through legal reviews, codes of conduct, and policy guidance. NATO and the Five Eyes intelligence alliance (Australia, Canada, New Zealand, the United Kingdom, and the United States) would be natural venues for sharing best practices, given their aligned strategic interests and established intelligence-sharing mechanisms. But engagement must not stop there; it is essential to develop confidence-building measures with China and the Global South to socialize and institutionalize norms surrounding military AI. This need not start from a values-based approach, but rather from states’ shared interest in ensuring that military AI does not upend global security, stability, and prosperity.

Last but not least, capacity building must be a central focus of the upcoming Responsible AI in the Military Domain summit. States should share not only best practices but also technical tools and expertise, so that all parties that endorse the Political Declaration are able to fully implement its standards. So far, capacity building has been the missing piece of the conversation on how to make military AI safer. Policymakers have the opportunity to change that in the pre-summit meetings now underway.

The future of military AI governance hinges on collaborative policy efforts rather than legal regulation. The Political Declaration marks a significant stride toward establishing international norms, but it must go beyond common myths and misconceptions surrounding military AI to be effective. As the next Responsible AI in the Military Domain summit approaches, policymakers should focus on clarifying how the law applies to specific AI applications, identifying policy tools to fill legal gaps in protection, and building capacity to implement governance standards within and outside of traditional alliance structures. In the current geopolitical climate, this is the most viable path for mitigating the risks of military AI, leveraging its opportunities, and upholding global security.

Brianna Rosen (@rosen_br) is a senior fellow at Just Security and a strategy and policy fellow at the University of Oxford’s Blavatnik School of Government.

Image: EJ Hersom
