The Future of Algorithmic Warfare Part III: Stagnation
Editor’s Note: What follows is an excerpt from the authors’ forthcoming book, Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence.
What would a complete failure of the current race to field artificial intelligence and machine learning (AI/ML) systems across the U.S. military look like? The American military profession is equally blessed and cursed by technological determinism and the belief that new gadgets will solve old problems. This line of thinking pervades notions about offsets and the belief that precision can counter mass in modern war.
The scenario below is a premortem of sorts, inviting the reader to imagine a world in which technological change is uneven, moving backward as much as forward. This red team technique illustrates how failure emerges in order to prevent it. The analysis builds on earlier scenarios published by War on the Rocks that explored a broken bureaucracy and a failure to create a common understanding of how AI/ML affect the character of war. All of these scenarios are adapted from our recent book, Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence. In the book, we use a range of historical cases on the adoption of information technology to imagine how the U.S. military will react to the latest wave of interest in AI/ML. Based on these diverging histories, we see different futures on the horizon that call for prudence and a more robust dialogue about how people, bureaucracy, and knowledge networks collide with any new technology.
In the scenario below, the future is bleak. Old ideas about war combine with industrial-age bureaucracy to limit the extent to which any new technology can produce an enduring advantage. AI/ML becomes another false promise sacrificed on the altar of the clash of wills. The defense bureaucracy struggles to adapt and falls back on enduring ideas about war despite the availability of new technology. The man on horseback remains nostalgic and lost in dreams of past battles that leave him unable to adapt to the future of war.
In this alternative future, the U.S. military never escapes the gravity of old ideas about war and legacy bureaucracy. Despite the current wave of enthusiasm about AI/ML, there is a non-zero chance of this future. Yes, the services are in a race to develop new battle networks, but the extent to which those networks translate into new doctrine and fighting formations is still uncertain.
* * *
It is 2040. The Chairman of the Joint Chiefs of Staff drove to the Pentagon in a refurbished antique sports car, a security drone escorting him and his digital personal assistant reading him the daily news. The traffic was heavier than normal. It was a beautiful spring morning, with cherry blossoms adding color to the otherwise blurred labyrinth of white and gray marble, concrete, and steel buildings etched across Northern Virginia. Many drivers, like him, opted to enjoy the morning commute and drive themselves rather than turn on their limited automated driving mode, which restricted them to a boring fifty-mile-an-hour experience. He preferred to drive but have a machine read to him, cataloguing the information he found useful and highlighting for his staff the questions he wanted answered when he pulled into the office. Truth be told, he didn’t like reading that much anyway.
The old general had tailored his personal assistant, Chesty, to sound like an antique Devil Dog. The algorithms scraped old audio files and even tailored period-specific metaphors from World War II. It was another example of the fun but trivial ways software mediated everyone’s engagement with the world. Chesty grunted and read him the headlines.
“Air Force General Stops Swarm Experiment Testing Automated Airspace Agents Citing a Need to Better Integrate Human Air Traffic Controllers and Pilots.”
Chesty added to the headline, “We make generals today based on their ability to write a damned letter. Those kinds of men can’t get us ready for war.”
“New Report: Chinese AI-Driven Simulators That Scrape Social Media to Replicate U.S. Military Decision-Making beyond Doctrine Struggle in Early Tests.”
Chesty added to the headline, “There are not enough Chinese communists in the world to stop a fully armed Marine regiment from going wherever they want to go.”
“Army Surgeon General Requests Additional Psychologists and Social Workers for Marriage Counseling, Citing Critical Deficiencies in the Couples Counseling Provided by Digital Physician Services.”
Chesty added to the headline, “When the Marine Corps wants you to have a wife, you will be issued one.”
The general asked Chesty to limit the commentary and provide an overview of a congressional committee report on the status of AI integration across the U.S. military. He had been forced to review these reports ever since he was a company commander at infantry training exercises in Twentynine Palms. He remembered the first of many such tedious experiments. It was an especially hot summer day, and the company had been operating in the field for five days with a mix of fragile tablets, cheap drones, and all sorts of weird antennas. His Marines were dirty and tired, but the scientists and industry reps, clad in tactical-chic pants and polo shirts etched with melting sunblock and company logos, kept asking why whatever the latest AI gizmo was didn’t work. He always felt there was an undertone of “you Spartans don’t get it” when they spoke.
As a young captain, he assumed he had to participate in these experiments because money was on the line. The Marine Corps wasn’t going to change as much as it was going to take some sucker’s money and try crazy stuff in the desert. These experiments even produced some good laughs.
He remembered during that iteration how a lance corporal snuck up on an automated sentry designed to detect enemy movement around secure patrol bases by placing a tortoise shell in front of his face. The machine assumed he was an endangered species and sent a message causing all sentries to stand down. It was always this way. In those days, the generals got briefed on magic, but grunts saw the truth.
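The anecdote hides a real design flaw. As a minimal sketch, consider a sentry that trusts a single image classifier’s top-1 label and applies a blanket wildlife stand-down policy. Everything below is hypothetical, assuming an off-the-shelf ImageNet classifier and an invented policy; the scenario describes no real system.

```python
# Hypothetical sketch of the sentry logic the lance corporal defeated:
# one classifier's top-1 label on one frame drives a network-wide action.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # ImageNet-1k class names

# Invented policy: any protected-wildlife label halts every sentry.
PROTECTED = {"box turtle", "mud turtle", "terrapin"}

def sentry_decision(image) -> str:
    """Return a network-wide order based on one frame from one camera."""
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        label = categories[model(batch).argmax(dim=1).item()]
    # Single point of failure: a tortoise shell held up to the lens
    # yields a turtle label and stands down the whole network.
    return "STAND_DOWN_ALL" if label in PROTECTED else "CONTINUE_WATCH"
```

The brittleness is not in the classifier but in the rule wrapped around it: no confidence threshold, no second sensor, no human check before a network-wide action.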
The latest report signaled that, despite years of 5G connectivity, new chip designs, and better algorithms driving data and analytical optimization in the private sector, there remained real concerns about transforming the military. Its authors had interviewed more than one hundred combat leaders from across the services. Even though the interviews were anonymized, he knew them, or at least how they thought about war. The robots could go screw themselves.
During his career, years of counter-insurgency and gray zone campaigns had left many officers suspicious of how much technology could change war. As junior officers, they got their first taste of combat hunting elusive enemies, their firepower curtailed by rules of engagement and groups that seemed to disappear into valleys and villages. These same officers took command of larger formations and heard leaders preach the gospel of great power conflict, as if the 2020s were somehow the 1980s or, worse, the 1930s. They planned freedom of navigation patrols and updated war plans, learning to see time and distance factors and task organization as more important than any exquisite new capability. They fought to remain in the cockpits of aircraft and at the helms of Navy ships, upholding the view that no machine, whatever its speed, could replace the essence of master and commander and of human judgment. In war colleges they wrote long, philosophical monographs about traditions, command and control, and historical cases that treated the past as almost prelapsarian, a place where legends commanded regiments, groups, and flotillas that outfought determined adversaries.
There was no leader to this movement. Over the years it coalesced in Slack, WhatsApp, and Signal diatribes and articles in War on the Rocks. At some point, a civilian author named the movement the “Clausewitzian Third Wave.” The group saw the benefit of technology but avoided hard decisions over force structure and testing new concepts, preferring to see war as an enduring human struggle. A favorite topic in these forums was the legacy of Generals Charles Krulak and Al Gray. Countless war college papers revisited, and even retold, that legacy, distorting the past to justify a combined arms renaissance and the importance of small-unit tactics and decision-making. These debates spilled over into procurement decisions, with the number of people historically in a squad defining the design parameters of combat vehicles and the need for a human in the cockpit creating spiraling costs.
Over the course of his career, the Chairman remembered being on the margins of this group. He watched as civilians, usually millennials with grand ideas and thin resumes, were appointed to new administrations and pressed for technological change. These civilians called the old ways outdated, a relic of twentieth-century warfare. He watched as the movement pushed back, using social media and whispers in the halls of Congress to limit the extent to which any new whiz kid could challenge age-old traditions and fundamental human truths about war. Networks of graybeards and retirees fostered the insurgency.
He saw these battles in the latest congressional commission reports. A cohort of business leaders, LinkedIn charlatans, and beltway bandits all outlined why the military was again behind in integrating commercial AI applications. These people didn’t have a clue. They hadn’t seen the legacy contracts and legal red tape around sharing data that distorted the training of any algorithm. They didn’t realize that battlefield bandwidth wasn’t as fast as the network of sensors that let their civilian asses effortlessly disappear into shady virtual reality worlds. They didn’t understand the constant intelligence requirements to photograph enemy vehicles at every angle and in every weather condition to train AI image recognition software. They were novices who failed to realize that war was not reducible to simple patterns like the purchasing habits of consumers.
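A back-of-the-envelope sketch makes the data-collection burden behind that last complaint concrete. Every figure below is an illustrative assumption, not a sourced requirement; the point is only how quickly condition bins multiply.

```python
# Illustrative arithmetic for the training-data burden described above.
# All numbers are assumptions chosen for the example.
vehicle_types = 40       # assumed distinct adversary vehicle variants
aspect_angles = 24       # one image bin per 15 degrees of azimuth
elevations = 5           # assumed sensor look-down angles
weather = ["clear", "rain", "snow", "fog", "dust"]
lighting = ["day", "dusk", "night-IR"]
samples_per_bin = 50     # assumed minimum labeled examples per bin

bins = vehicle_types * aspect_angles * elevations * len(weather) * len(lighting)
print(f"condition bins: {bins:,}")                    # 72,000
print(f"labeled images: {bins * samples_per_bin:,}")  # 3,600,000
```

Unlike consumer purchasing data, each of those bins must be collected against an adversary actively trying to hide, which is the gap the report’s critics never priced in.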
The old general knew he was now part of an old guard defined by the Clausewitzian Third Wave. He knew he could trust that the next cohort of junior officers would discover the same hard truths and keep the nerds at bay. The geeks just didn’t understand combined arms, what it took to create a fighting force, and, most of all, killing. They never would.
* * *
A cursory look at historical cases, setting aside the Whig-history impulse to read the past as steady progress toward the present, illustrates why U.S. policy-makers today need to prevent the scenario outlined above. From early experiments with radar to the development of a global network of unmanned aerial surveillance and strike capabilities, there are more failures than success stories. For every success like the United Kingdom’s Chain Home early warning radar network, there was a wide range of half-baked ideas about death rays across the major powers in the interwar period. Despite the success the United States experienced in fielding a new generation of unmanned attack and reconnaissance aircraft after 2001, there was a rich, underreported prehistory of drones that failed to break the enduring image of battle as a human endeavor.
Yet the past need not be prologue. The current moment calls for a widespread embrace of AI/ML capabilities and the ushering in of a new era of bottom-up experimentation. Just as Ukrainian society, through initiatives like the DELTA common operational picture, has shown how to build narrow AI/ML capabilities from the bottom up, the U.S. military should accelerate current experiments like the Global Information Dominance Exercise. There are also a growing number of service-level initiatives to reimagine warfare, such as the ongoing Marine coding effort integrated with Army Futures Command in Austin, Texas. This effort has already produced an invaluable algorithm that maximizes commercial radar sensing capabilities to meet fleet and joint force maritime domain awareness requirements.
To realize sustained change, the heart of these initiatives will need to reside in planning and in how the military reimagines the proverbial man on horseback as part of a larger disaggregated decision network. Understanding the balance of human judgment, creativity, and model-generated perspectives will prove essential for operational art in the twenty-first century. To understand how best to fuse data and the mind for war, though, requires bold experimentation and wargames that test different combinations of systems. The future is still in the making. Every military professional and concerned citizen, whether AI/ML enthusiast or skeptic, should become part of it.
Benjamin Jensen, Ph.D., is a professor of strategic studies at the School of Advanced Warfighting in the Marine Corps University and a senior fellow for future war, gaming, and strategy at the Center for Strategic and International Studies. He is also an officer in the U.S. Army Reserve.
Christopher Whyte, Ph.D., is an assistant professor of homeland security and emergency preparedness at Virginia Commonwealth University.
Col. Scott Cuomo, Ph.D., currently serves as a senior U.S. Marine Corps advisor within the Office of the Under Secretary of Defense for Policy. He helped co-author these essays while participating in the Commandant of the Marine Corps Strategist Program and also serving as the service’s representative on the National Security Commission on Artificial Intelligence.
The views they express are their own and do not reflect any official government position.
Image: AI-generated art by Dr. Benjamin Jensen