
An Ethical Mine Field? On Counter-Mobility and Weapon Autonomy

The Ukrainian counter-offensive in 2023 delivered a grim lesson in the brutal effectiveness of counter-mobility operations. Advancing Ukrainian forces found themselves in vast Russian minefields in the Zaporizhzhia and Donetsk regions, where in some places multiple anti-tank mines were stacked in deadly vertical layers. For many Ukrainian crews, survival came down to the superior resilience of Western-supplied armored vehicles.

Anti-tank mines ended up playing a crucial role in impeding and eventually thwarting the Ukrainian effort to break the land bridge to Crimea. NATO states in Europe took notice. Now, a concerted effort is underway to ensure that if Russia ever targets alliance territory, it will risk a taste of its own medicine.

The comeback of counter-mobility in Europe, prompted by Russia’s war against Ukraine, will include the use of anti-vehicle mines to shape, disrupt, and destroy an enemy’s movement. Weapon autonomy will play an integral role in this effort.

We need not fear AI and weapon autonomy in this specific application. If done right, AI can make the use of anti-vehicle mines more compliant with international humanitarian law by reducing the risk of collateral damage.

Weapon Autonomy

“Autonomous weapons” are not their own fixed category. Whereas a main battle tank, a nuclear warhead, or a cluster munition is a describable object, autonomy is a cross-cutting enabler. It allows machines to operate without human intervention. Hence, trying to define “lethal autonomous weapon systems” as a clearly delineated weapons class is a non-starter. Instead, the issue is best understood as a shift in the interaction between humans and machines in warfighting.

Weapon autonomy is about the delegation of functions in the kill chain, about who or what — human or machine — is doing what, when, and where in the battlespace. Autonomy in the kill chain’s “critical functions,” that is, the selection and engagement of targets, grants speed and helps autonomous systems prevail over remotely operated, slower systems. It is also where legal concerns about responsibility and accountability, security concerns about flash wars, and ethical qualms about dehumanizing warfare come into play.

While weapon autonomy is not necessarily problematic (nor new, with older and potentially life-saving systems such as Patriot featuring it for decades), it does raise risks if human control is not retained in a differentiated, context-dependent manner. This renders it a proverbial minefield from security, legal, and ethical points of view. So, can autonomy make the use of mines more ethical?

Mines

A victim-activated tripwire anti-personnel landmine is not a weapon with autonomy in its critical functions. The mine exists in only two states: off or on. It does not select targets, and thus it cannot discriminate between legitimate and illegitimate targets, between civilians and combatants, often harming innocent civilians many years after hostilities have ended. That’s why 164 countries prohibit them: such mines are incompatible with the obligations deriving from existing international humanitarian law, specifically the principle of distinction, which prohibits indiscriminate attacks.

Anti-tank mines with sensors and a feedback mechanism to perform target selection can be said to feature weapon autonomy, albeit in crude form. The Ottawa Convention (Art. 2 (1)), which bans anti-personnel mines, excludes “mines other than anti-personnel mines” from its scope (as long as they are not equipped with “anti-handling devices”). In other words, anti-tank mines without booby traps are legal, and they feature autonomous target selection and engagement.

Counter-Mobility in Europe

With Russia’s full-scale invasion of Ukraine, Europe’s security architecture collapsed. Russia is now widely considered to pose a threat above and beyond Ukraine. The Estonian Ministry of Defense estimates that a victorious Russia with reconstituted armed forces could be in a position to attack a NATO country within five years. Part of the reaction in the Baltic states as well as in Finland and Poland is to establish defensive lines providing counter-mobility.

Mapped, smart, networked, and remotely deactivatable anti-vehicle mines are a cornerstone of this effort. The remote-control feature makes mines easier to clear after hostilities and to redeploy, and thus more economical. It also allows blue forces to safely cross the minefield at any given time.
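To make “remotely deactivatable” concrete at the protocol level: a mine must only honor deactivation commands it can authenticate, or an adversary could simply broadcast a switch-off order into the network. The following is a minimal sketch under stated assumptions (a pre-shared key provisioned at emplacement and a hypothetical command format), not a description of any fielded system:

```python
import hmac
import hashlib

# Assumption: a secret key is provisioned in the mine at emplacement.
SHARED_KEY = b"pre-provisioned-secret"

def sign(command: bytes) -> bytes:
    """Compute a message authentication code over the command."""
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def accept_command(command: bytes, tag: bytes) -> bool:
    """Accept only commands with a valid tag; constant-time comparison
    guards against timing attacks on the verification step."""
    return hmac.compare_digest(sign(command), tag)

# Hypothetical command format; the sequence number resists replay.
msg = b"DEACTIVATE:mine-042:seq-0007"
assert accept_command(msg, sign(msg))          # blue forces: accepted
assert not accept_command(msg, b"\x00" * 32)   # forged tag: rejected
```

Everything beyond this toy example (key distribution, jam resistance, and what the mine should do when the link drops) is where the hard engineering lives.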

Counter-Mobility and Meaningful Human Control: The Example of PARM

The engineering solutions for target selection in anti-tank mines have so far been remarkably simple. Take the German Panzerabwehrrichtmine (PARM) as an example. Unlike the buried circular metal cases most people associate with anti-tank mines, PARM does not detonate vertically when a vehicle drives over it. It is an off-route mine that ambushes the enemy horizontally from cover. When triggered, the tripod-mounted launcher fires a Panzerfaust-style shaped-charge warhead. PARM is thus able to hit the vulnerable side of the tank. It penetrates reactive armor and bypasses active protection systems because the warhead comes in low, indistinguishable from ground clutter. The overall result is a high likelihood of immobilizing the target. Interestingly, PARM’s target selection sensor registers pressure by gauging the amount of light passing through a thin fiber-optic cable. A car can safely cross that cable, but a tank compresses it to a degree that sets the mine off. Tracked vehicles tend to destroy the cable outright, which also triggers the mine. PARM deactivates automatically after a period of time, rendering it Ottawa-compatible, but it cannot be remotely deactivated. This Cold War mine, relying on 1980s technology, is legal to deploy today.
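The trigger principle is simple enough to sketch in a few lines of code. The numbers below are illustrative assumptions, not PARM’s actual specifications; the point is that a single threshold on light attenuation is all the “target selection” there is:

```python
# Illustrative sketch of a fiber-optic pressure trigger. All values
# are assumptions for illustration, not actual PARM specifications.

BASELINE_LIGHT = 1.0      # normalized light level through the intact cable
TRIGGER_FRACTION = 0.4    # assumed: below 40% of baseline, the mine fires

def should_fire(measured_light: float) -> bool:
    """A car compresses the cable only slightly, so most light gets
    through. A tank compresses it hard (little light), and tracked
    vehicles may sever it (no light); both fall below the threshold."""
    return measured_light < TRIGGER_FRACTION * BASELINE_LIGHT

assert not should_fire(0.9)   # passenger car: cable barely compressed
assert should_fire(0.2)       # main battle tank: strongly compressed
assert should_fire(0.0)       # tracked vehicle: cable destroyed
```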

So what would newer technology, including AI, mean for applying weapon autonomy in an anti-vehicle mine?

A near-consensus view holds that using weapon autonomy requires “meaningful human control” or “appropriate levels of judgment” to be retained in the human-machine relationship throughout the weapon’s entire lifecycle. Simply put, this means control by design and control in use.

The human-machine interface has to provide the operator with enough situational awareness to foresee the results of the weapon’s deployment after activation. The operator needs to be able to retake control of the autonomously operating system at any point during its operation. Consequently, the operator can be held responsible for the outcomes. This trifecta of foreseeability, administrability, and traceability is, in a nutshell, what constitutes meaningful human control.

Keeping that in mind, and before we return to the old PARM in this light, let us imagine a hypothetical, AI-enabled PARM. This hypothetical mine would sit in an encrypted mesh network, allowing it to be remotely activated and deactivated. For target selection, it would use sensor data fusion and machine learning in addition to the fiber-optic cable. Let’s assume a Russian T-72 main battle tank as one of its target profiles. The mine would use a passive thermo-optic camera and draw on a machine learning model to classify input by looking for specific features, such as the overall silhouette of the object in question. How big is it? Does it run on tracks? Does it have a turret? It would also classify the target’s heat and exhaust signature. In addition, it would have an acoustic sensor: If the target looks like a T-72 and has the heat signature of a T-72, does it also sound like a T-72? Lastly, it would gauge the pressure exerted on the cable. In sum, this hypothetical new PARM would only trigger if the target is as heavy as a main battle tank, looks like a T-72, emits heat and exhaust like a T-72, and sounds like a T-72.
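In engineering terms, the decision logic of such a mine amounts to a conjunction over independent sensor channels, gated by the network’s arming state. The sketch below is a deliberately simplified illustration under assumed names and thresholds; none of it corresponds to a real system:

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    silhouette_score: float    # ML classifier confidence: looks like a T-72
    thermal_score: float       # heat and exhaust signature match
    acoustic_score: float      # engine sound match
    cable_pressure_ok: bool    # analog fiber-optic channel tripped

# Assumed threshold for illustration only.
CLASS_THRESHOLD = 0.95

def engage(r: SensorReadings, remotely_armed: bool) -> bool:
    """Fire only if every independent channel agrees. The conjunction
    tightens the target profile: a heavy truck may trip the cable but
    will fail the silhouette, thermal, and acoustic checks."""
    if not remotely_armed:     # mesh-network kill switch takes precedence
        return False
    return (
        r.silhouette_score >= CLASS_THRESHOLD
        and r.thermal_score >= CLASS_THRESHOLD
        and r.acoustic_score >= CLASS_THRESHOLD
        and r.cable_pressure_ok
    )

truck = SensorReadings(0.30, 0.20, 0.25, cable_pressure_ok=True)
assert not engage(truck, remotely_armed=True)  # trips cable, fails the rest
```

The design choice worth noting is that each added channel can only make the mine more restrictive, never less: a target must pass every check that the old PARM’s cable performed alone.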

No Undue PARM?

From an ethical perspective, it is sometimes argued (as I myself have done) that life-and-death decisions should not be made by machines. In the case of anti-tank mines, however, this delegation is already accepted practice, even with comparably simple low-tech sensors such as a fiber-optic cable.

From a legal point of view, anti-tank mines without anti-handling devices that self-deactivate after a certain period are considered compliant with existing international law. And arguably, you would not want to build a new system that is worse than its predecessor. There is an incentive to iterate so that the new mine is more discriminate, further reducing the risk of violating international humanitarian law in edge cases. After all, there is a genuine risk that the old PARM could be triggered by a heavy civilian truck with a pebble stuck in its tire, exerting an unusual amount of pressure on a single tight spot of the fiber-optic cable.

These arguments map onto the considerations of weapon autonomy and meaningful human control outlined above. The key question regarding autonomy — namely, who or what, human or machine, is deciding what, when, and where — has already been answered for the case of anti-tank mines: Selection and engagement happen in the mine. While this remains unchanged, our hypothetical future PARM features more foreseeability, administrability, and thus traceability than the old one. After all, the successor can be remotely deactivated (more administrability) and makes use of additional sensors and a much more sophisticated target-profiling mechanism (more foreseeability), allowing the mine to be used in a more discriminate manner. In short, it allows for more meaningful human control and is thus preferable for legal reasons.

This is not to say that using the AI techniques described above does not come with its own challenges and risks. Machine learning systems are probabilistic, so the first issue is the standard to which this new PARM should be certified. Is a 1:1,000,000 probability of a false positive safe enough? Or would it have to be 1:10,000,000? Then there is the issue of machine learning systems being susceptible to poisoned training data or being fooled during operation by manipulated sensor information. Research in the field of adversarial AI demonstrates that even small changes in input data can trip up computer vision systems. Further complications arise in a scenario in which the communications link to the mine is severed. Should the weapon then render itself inoperable due to the diminished degree of meaningful human control? Lastly, while the hypothetical PARM described above does not (yet) exist, technology is advancing rapidly, and some mines currently entering the market work quite similarly. Should they adhere to the exact specifications above? For instance, should they keep the fiber-optic cable as an analog fallback sensor to prevent an enemy from tricking a mine relying solely on computer vision into firing at a mini-van?
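The certification question has a useful statistical counterpart: how many failure-free test engagements would it take to demonstrate such false-positive rates in the first place? A rough back-of-the-envelope sketch, assuming independent and representative trials (a strong assumption for field conditions):

```python
import math

def trials_needed(max_fp_rate: float, confidence: float = 0.95) -> int:
    """Failure-free trials needed to bound the false-positive rate below
    max_fp_rate at the given confidence: solve (1 - p)^n <= 1 - confidence
    for n, with p = max_fp_rate."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_fp_rate))

print(trials_needed(1e-6))   # 1:1,000,000  -> roughly 3 million trials
print(trials_needed(1e-7))   # 1:10,000,000 -> roughly 30 million trials
```

Numbers of that magnitude suggest why certification would likely have to lean on simulation, component-level testing, and design arguments rather than brute-force field trials alone.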

Questions like these — that is, “How good is good enough from an engineering point of view?” and “Can we trust this system under real-world conditions?” — are legal and technical but also deeply political. They will most likely be answered differently from one military to another, depending on standards, safety culture, and experiences gathered during research and development.

But still, it stands to reason that any anti-vehicle mine that is less likely to hit unintended targets and that can be deactivated remotely and thus more easily cleared is better than what currently exists and can already legally be used.

Conclusion

Mines are a nasty business. But counter-mobility is back in Europe, and I argued that using AI-enabled anti-tank mines to autonomously engage “military objects by nature,” as the International Committee of the Red Cross puts it, can render these weapons both safer and more effective. Anti-tank mines are a niche case in which “more” and smarter weapon autonomy (by using additional sensors and more sophisticated, tighter target profiles in a stationary weapon) is straightforwardly better than “less” (relying solely on a tripwire-style cable to perform the same function in the kill chain).

Proper international regulation of weapon autonomy remains highly desirable, of course. But based on my ten years of experience engaging at the U.N., the diplomatic debate has largely remained an exchange of quite abstract views and sweeping declarations on international humanitarian law, ethics, and security in general. Progress in technology, however, has made these discussions much less hypothetical than they were at the outset. Policymakers, diplomats, the military community, and civil society could seek a more technologically grounded, pragmatic, and focused approach.

A blanket ban or quantifiable limits on a weapons category — akin to nuclear arms control or the prohibition processes on landmines and cluster munitions — is impossible due to the cross-cutting, enabling nature of weapon autonomy. But the international community could — and most definitely should — stipulate meaningful human control as a norm in a binding legal instrument, affirming foreseeability, administrability, and traceability as obligations under the law whenever autonomy in weapon systems is used. And if such a multilateral, binding international legal instrument is not in the cards (for now), then stakeholders should at least hasten their efforts to establish best practices and norms of responsible development for the prudent as well as lawful application of weapon autonomy. Exploring specific use cases, such as anti-tank mines, to determine the scope of military benefits as well as ethical and legal boundaries is what will put lawful and responsible applications and shared rules of the road for weapon autonomy within reach.

PD Dr. Frank Sauer is the head of research at the Metis Institute for Strategy and Foresight at Bundeswehr University Munich. His research focuses on nuclear issues and emerging disruptive technologies, particularly the military applications of AI and robotics. Frank co-hosts the acclaimed German-language podcast Sicherheitshalber.

The author would like to thank Ryan Evans, Ulrike Franke, Franz-Stefan Gady, Paul Scharre, and Vanessa Vohs for their helpful comments and suggestions on the manuscript.

Image: Main Directorate of the State Emergency Service of Ukraine in Kharkiv Oblast via Wikimedia Commons



