The AI revolution is already here

In just the last few months, the battlefield has undergone a transformation like never before, with visions from science fiction finally coming true. Robotic systems have been set loose, authorized to destroy targets on their own. Artificial intelligence systems are determining which individual humans are to be killed in war, and even how many civilians are to die along with them. And, making all of this more challenging still, the frontier has been crossed by America's allies.

Ukraine’s front lines have become saturated with thousands of drones, including Kyiv’s new Saker Scout quadcopters that “can find, identify and attack 64 types of Russian ‘military objects’ on their own.” They are designed to operate without human oversight, unleashed to hunt in areas where Russian jamming prevents other drones from working.

Meanwhile, Israel has unleashed another side of algorithmic warfare as it seeks vengeance for the Hamas attacks of October 7. As revealed by IDF members to 972 Magazine, “The Gospel” is an AI system that considers millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by air strikes and artillery. Another system, named Lavender, does the same for people, ingesting everything from cellphone use to WhatsApp group membership to assign each person a score from 1 to 100 estimating the likelihood of Hamas membership. The top-ranked individuals are tracked by a system called “Where’s Daddy?”, which sends a signal when they return to their homes, where they can be bombed.

Such systems are just the start. The cottage industry of activists and diplomats who tried to preemptively ban “killer robots” failed for the very same reason that the showy open letters calling for a halt to AI research did: The tech is just too darn useful. Every major military is at work on its equivalents or better, including our own.

There is a debate in security studies about whether such technologies are evolutionary or revolutionary. In many ways, it has become the equivalent of medieval scholars debating how many angels could dance on the head of a pin when the printing press was about to change the world around them. It really is about what one chooses to focus on. Imagine, for instance, writing about the Spanish Civil War in the 1930s. You could note both sides’ continued use of rifles and trenches, and argue that little was changing. Or you could see that the tank, radio, and airplane were advancing in ways that would not just reshape warfare but also create new questions for politics, law, and ethics. (And even art: think of the aerial bombing of Guernica, famously captured by Picasso.) 

What is undebatable is that the economy is undergoing a revolution through AI and robotics. And past industrial revolutions dramatically altered not just the workplace, but also warfare and the politics that surrounds it. World War I brought mechanized slaughter, while World War II ushered in the atomic age. It will be the same for this one.

Yet AI is different from every other new technology in history. Its systems grow ever more intelligent and autonomous, literally by the second. No one had to debate what the bow and arrow, steam engine, or atomic device could be allowed to do on its own. Nor did those technologies pose the “black box” problem, in which the scale of the data and the complexity of the system mean that neither the machine nor its human operator can effectively explain why it decided what it did.

And we are just at the start. The battlefield applications of AI are quickly expanding from swarming drones to information warfare and beyond, and each new type raises new questions. Dilemmas erupt even when AI merely provides options to a human commander. Such “decision aids” offer dramatic gains in speed and scale: the IDF system sifts through millions more items of data, ginning up target lists over 50 times faster than a team of human intelligence officers ever could. This drastically expands the accompanying carnage. Supported by the Gospel, Israeli forces struck more than 22,000 targets in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago. And Lavender reportedly “marked some 37,000 Palestinians as suspected ‘Hamas militants,’ most of them junior, for assassination.” It also calculated the likely collateral damage for each strike, with the acceptable level reported by IDF members to have been set at between 15 and 100 expected civilian casualties.

The issues of AI in warfare go beyond the technical. Will AI-driven strategies achieve the desired outcomes, or are we fated to live out the moral of every science fiction story, in which the machine servant ultimately harms its human master? Indeed, Israel seems bent on proving the problem in real time. As one IDF officer who used Lavender put it, “In the short term, we are safer, because we hurt Hamas. But I think we’re less secure in the long run. I see how all the bereaved families in Gaza — which is nearly everyone — will raise the motivation for [people to join] Hamas 10 years down the line. And it will be much easier for [Hamas] to recruit them.”

The political, ethical, and legal quagmire surrounding AI in warfare demands immediate attention, requiring rethinks of everything from training to acquisition to doctrine. But ultimately we must recognize that one aspect is not changing: human accountability. While it is easy to blame faceless algorithms for a machine’s actions, a human is ultimately behind every key decision. It is the same as when driverless-car companies try to escape responsibility when their poorly designed and falsely marketed machines kill people in our streets. In systems like the Gospel and Lavender, for instance, it was the human, not the machine, who decided to change the level of concern about civilian casualties or to tolerate a reported 10-percent error rate.

Just as in business, we need to set frameworks to govern the use of AI in warfare. These must now go beyond mitigating the risks to ensuring that the people behind these systems take better care in both their design and use, including by understanding that they are ultimately responsible, both politically and legally. This applies equally to U.S. partners in industry and geopolitics, who are now pushing these boundaries forward, enabled by our budget dollars.

The future of warfare hangs in the balance, and the choices we make today will determine whether AI ushers in a new era of digital destruction.

P.W. Singer is a best-selling author of such books on war and technology as Wired for War, Ghost Fleet, and Burn-In, senior fellow at New America, and co-founder of Useful Fiction, a strategic narratives company.
