
Autonomous Weapon Systems: No Human-in-the-Loop Required, and Other Myths Dispelled


When Pentagon officials, think tank analysts, and various world leaders refer to autonomous weapon systems, they often cite a U.S. military policy requirement that does not exist. Published in 2012 and updated in 2023, Department of Defense Directive 3000.09, Autonomy in Weapon Systems, governs the Pentagon’s deployment and use of semi-autonomous and autonomous weapon systems. An autonomous weapon system is one that, once activated, can select and engage targets without further intervention by an operator. A semi-autonomous weapon system only engages targets a human operator has selected, like the precision-guided weapons of today. Most prominently, the policy requires that some kinds of autonomous weapon systems receive two extra rounds of review by senior Defense Department leaders, on top of the usual checks all weapon systems go through: once before the system is approved to enter the acquisition pipeline and again before it is fielded. The reviews use a straightforward checklist, based on rules that already exist, to make sure any proposed autonomous weapon system works as it should and complies with U.S. law.

Unfortunately, there are myths about current U.S. policy on autonomy in weapon systems that are creating imaginary — and then real — barriers to the U.S. military developing and deploying greater autonomy. I should know: the office I worked in at the Pentagon drafted the 2023 update to the directive during the Biden administration.

The original 2012 directive was the world’s first policy on autonomous weapon systems, but after a decade, it was time for an update. The original directive was widely misunderstood in multiple ways. Outside the Pentagon, advocacy groups seemed to think that the Department of Defense was stockpiling killer robots in the basement, while inside, many believed that autonomous weapon systems were prohibited. That gap in understanding alone made a refresh worthwhile. Moreover, the war between Russia and Ukraine demonstrated both the utility of AI-enabled weapons and their necessity, given the way electronic warfare can disrupt remotely piloted systems.

Additionally, advances in AI and autonomous systems meant that, in some cases, what had been science fiction a decade earlier was now technologically possible, while the Department of Defense itself had also changed. Since 2012, the Department of Defense has adopted principles for the use of artificial intelligence, created a new organization to accelerate AI adoption (the Chief Digital and Artificial Intelligence Office), and made a number of other reforms. Further, Department of Defense directives have to be reviewed every 10 years and either canceled, extended, or revised. Thus, we updated the directive in 2023.

As often happens, however, updating the policy did not fully dispel three myths and misunderstandings that had built up over time. First, there is a myth that the directive prohibits some or all autonomous weapon systems, which is not the case. Second, there is a myth that the directive requires a human in the loop for the use of force at the tactical level, which is also not the case. Third, there is a myth that the directive regulates research and development, experimentation, and prototyping of autonomous weapon systems, which is untrue. These myths are holding back the Department of Defense’s ability to scale autonomy in weapon systems with responsible speed as the technology improves, because they create barriers rooted in fear of bureaucratic constraints rather than in the state of the technology. We worked to correct these myths, but clearly there is more work to do on this front. The second myth is perhaps the most pernicious, and addressing it may require abandoning language about humans being “in,” “on,” or “out” of the loop for autonomous weapon systems. The “loop” language creates unnecessary confusion by falsely implying continuous human oversight at the tactical level that even existing conventional weapon systems do not have. Instead, we should emphasize human judgment, clearly reflecting the critical and accountable role humans play in authorizing force before a weapon is deployed.

As the U.S. military prepares for potential combat in the Indo-Pacific without reliable communications, autonomous weapon systems are increasingly critical. Dispelling myths about autonomy is essential to rapidly building an AI-enabled force that maintains human accountability and responsibility.

Let’s take each of these myths in turn.

Myth #1: Fully Autonomous Weapon Systems Are Prohibited

The reality is that Department of Defense Directive 3000.09 does not prohibit any type of autonomous weapon system. That does not mean there are no rules surrounding autonomous weapon systems. The directive contains several requirements that make explicit the criteria that weapons developers should already be meeting.

For example, all semi-autonomous and autonomous weapon systems have to go through the evaluation process described in Section 3 of the directive, which maps onto the rigorous requirements the Department of Defense already has for ensuring weapon systems function as intended and have minimal failures (some accidents are inevitable).

Some autonomous weapon systems then require additional review by senior officials before they reach the formal development stage (after experimentation and prototyping and prior to acquisition) and again prior to fielding. Autonomous systems designed to protect military bases and ships from various forms of attack (which have existed for decades), as well as non-lethal systems, are carved out from the additional review because existing review processes sufficiently ensure their safe development, deployment, and fielding. Section 4 of the directive lays out the requirements that systems need to meet for approval in that review process. These are commonsense requirements that any weapon system should be able to meet, such as demonstrating the ability to use the system in a way that complies with U.S. law. For example, an autonomous weapon system that could not be used in compliance with international humanitarian law and the law of armed conflict would fail the legal review required in Section 4 of the directive. But the directive is simply restating a requirement that all weapon systems have to meet.
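
To make that review logic concrete, here is a minimal illustrative sketch in Python of the applicability test as this article describes it. The class, field, and function names are hypothetical, invented purely for illustration; this is not an official Department of Defense tool or a verbatim restatement of the directive’s text.

```python
# Hypothetical sketch of the review-applicability logic described above.
# Names and fields are invented for illustration; this is not official policy text.

from dataclasses import dataclass


@dataclass
class WeaponSystem:
    name: str
    autonomous: bool        # once activated, can select and engage targets without further operator intervention
    platform_defense: bool  # e.g., defends bases or ships from incoming missiles, aircraft, etc.
    lethal: bool


def requires_additional_senior_review(system: WeaponSystem) -> bool:
    """Return True if the system would need the extra senior-level review
    (once before entering acquisition, again before fielding), on top of
    the standard reviews that every weapon system already undergoes."""
    if not system.autonomous:
        return False  # semi-autonomous systems follow only the standard process
    if system.platform_defense or not system.lethal:
        return False  # carve-outs: defensive and non-lethal systems rely on existing reviews
    return True


# Example: a ship's close-in defensive system would not trigger the extra review,
# while a novel lethal autonomous system would.
print(requires_additional_senior_review(
    WeaponSystem("close-in defense", autonomous=True, platform_defense=True, lethal=True)
))  # False
```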

It is also the case that, based on the state of the technology and the views of senior leaders, autonomous weapon systems will be more plausible and desirable for some missions than for others. For example, it is easier to imagine demonstrating the effectiveness of autonomous weapon systems with algorithms that can very accurately target adversary ships or planes than autonomous weapon systems trained to attack individual humans, or, harder still, to judge without human intervention whether an individual is a combatant who may lawfully be targeted.

Myth #2: Humans Must Be in the Tactical Loop

There is no requirement for a human in the loop in the directive. Those words do not appear in the document, and the omission was intentional. What is required is appropriate levels of human judgment (Section 1.2) over the use of force, which is not the same as a human in the loop. While the two phrases sound similar, they mean distinctly different things. Appropriate human judgment refers to the necessity of an informed human decision before the use of force, ensuring accountability and compliance with the law.

Existing autonomous weapon systems demonstrate the role of human judgment. The Navy has deployed the Phalanx Close-In Weapon System since 1980. It is a giant Gatling gun designed to protect ships from close-in threats, whether missiles, aircraft, or something else. Normally, the system is directly controlled by a human, but if the number of incoming threats is larger than a human can track and engage, the operator can activate an automatic mode that engages the threats faster than a human could. The system has been used safely for decades, including over the last two years in the Red Sea to protect Navy ships from Houthi missiles. In this case, there is human judgment at the command level, in authorizing the use of the system to protect the ship, and at the tactical level, by the human operator who switches the system into automatic mode. The directive does not require a review of the Phalanx as an autonomous weapon system, since it is purely defensive and thus excluded from the requirement for additional review, but it illustrates that human judgment remains present even when force is being employed autonomously.

Now, imagine a next-generation missile with AI-enabled targeting being used in an air-to-air engagement in a communications-denied environment. A human commander would have already authorized the use of force, providing human judgment at the command level. A human operator would launch the missile, providing human judgment at the tactical level. The missile would then activate its seeker and look for a target using a computer vision algorithm, vectoring to destroy the target once it is identified. There is no ability to overrule the missile after launch. In this case, there is a decision by an accountable human to authorize the use of force and of the weapon system, just as there is with the use of an AIM-120 or other radar-guided air-to-air missile today. The difference is that the seeker used to identify the target is now smarter.

Here is a harder case. The collaborative combat aircraft being pursued by the Air Force are designed for autonomy in many areas, including flight, but with the use of force still overseen by a human pilot flying with them. Now, imagine a second-generation collaborative combat aircraft in an active war zone, authorized to target adversary bombers. For the system to be fielded with that level of autonomy, the updated autonomy software would have been through the Pentagon’s rigorous testing and evaluation process and demonstrated the ability to accurately target the relevant adversary aircraft. In that case, a human commander would have authorized the use of force and the use of the collaborative combat aircraft for a given mission, providing human judgment. These autonomous aircraft would then follow the mission orders, launching missiles at adversary bombers once they are identified. The human commander who authorized their use on the mission would be accountable and responsible for the use of force.

A third example is an autonomous tank. This is a harder case because an autonomous tank is probably one of the hardest systems to create and test, given the variety of circumstances and targets it could encounter. Absent substantial advances in AI that truly changed what is technologically possible, an autonomous tank would probably operate with a large degree of human oversight and a more constrained mission set. The rule of thumb is that the “cleaner” the battlefield environment, given current AI technology, the easier it is to envision autonomous weapon systems functioning effectively without reducing human accountability for the use of force.

Stepping back, senior defense leaders sometimes talk about a human-in-the-loop requirement, even though no such requirement exists. Why? Senior leaders will occasionally say things that do not reflect official policy, which is perhaps inevitable in an organization as large as the U.S. military. For example, a senior Air Force official once spoke of the Air Force’s commitment to “meaningful human control” of the use of force, a phrase promoted by the Campaign to Stop Killer Robots, a civil society coalition. The U.S. government and the Department of Defense have consistently opposed the phrase “meaningful human control” because it implies an unrealistic level of human supervision, one not met by many existing semi-autonomous precision-guided weapon systems, let alone unguided weapons. Even then, the official discussed meaningful human control of the use of force, which is different from meaningful human control of an individual weapon system.

Having a human in the loop can mean different things in tactical and operational contexts, which is what leads to confusion. Since inconsistencies in how people talk about a human in the loop are endemic, the updated directive requires only human judgment. Operationally, there is always a human responsible for the use of force, meaning there is always a human authorizing lethality, approving a mission, and sending forces into the field. It is clearer and more consistent to talk about that human responsibility for the use of force than to talk about a requirement for a human in the loop.

The exception to what I have described here is nuclear weapons. The 2022 Nuclear Posture Review states that “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapons employment.” The phrasing is still awkward in the nuclear context, but arguably makes sense given the unique destructive power of nuclear weapons and the importance of being clear that decisions about nuclear use are made at the highest level.

Myth #3: There Are Limits on Research and Development, Prototyping, and Experimentation on Autonomous Weapon Systems

There is nothing in the directive regulating those activities. For autonomous weapon systems where additional senior-level review is required, the first stage of the review process occurs when a weapon system is about to enter the acquisition system after research and development, prototyping, and initial experimentation. The directive does not limit these activities in any way.

Next Steps

The United States has a strong policy on autonomy in weapon systems, one that simultaneously enables their development and deployment and ensures they can be used effectively, meaning the systems work as intended and carry the same minimal risk of accidents or errors that all weapon systems do. Department of Defense Directive 3000.09 should reinforce confidence that any autonomous weapon systems the U.S. military develops and fields would enhance the capabilities of the military and comply with international humanitarian law and the law of armed conflict. Addressing these myths can help turn that into a reality.

The Trump administration could, of course, decide to revise or even replace the directive, but at present it still governs policy on autonomy in weapon systems. Current policy requires additional review of some kinds of autonomous weapon systems, but it does not prohibit anything or require a human in the loop. Instead, the requirements in the directive aggregate the requirements that all weapon systems need to meet to ensure they can be used effectively in ways that enhance the ability of the U.S. military to achieve its objectives in a war. Thus, following the requirements does not place an undue burden on any military service that wishes to develop an autonomous weapon system. The service just needs to prove the system can be used effectively and legally, like any weapon system.

However, these continuing misinterpretations of Department of Defense policy threaten to undermine the adoption of autonomy in weapon systems with responsible speed. Moving forward, the Department of Defense should more clearly communicate to its stakeholder communities that defense policy does not prohibit or restrict autonomous weapon systems of any sort. It only requires that some autonomous weapon systems go through an additional review process on top of the reviews that all weapon systems are required to undergo.

The Department of Defense should also direct officials across the services to discuss the importance of human responsibility for the use of force, rather than the need for a human in the loop, given the way the conflation of tactical and operational loops can quickly lead to confusion.

At the same time, the existence of the directive provides a reminder to senior leaders to take an extra look at autonomous weapon systems that might otherwise raise eyebrows or give operators initial hesitation about using them. By ensuring that such capabilities go through the review process, the Department of Defense can increase warfighters’ trust and confidence in ways that would make their end use, if needed, more effective.

Finally, the directive sends a strong signal internationally. In concert with the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, it provides a model for capacity building as countries make their own policy decisions about incorporating autonomy into their weapon systems, building on lessons learned from the Russo-Ukrainian War and elsewhere.

Michael C. Horowitz is the Richard Perry professor at the University of Pennsylvania and senior fellow for technology and innovation at the Council on Foreign Relations. The views in this article are those of the author alone and do not represent those of the Department of Defense, its components, or any part of the U.S. government.

Image: Air Force Research Laboratory

