
AI At War

It is widely believed that the world is on the brink of another military revolution. AI is about to transform the character of warfare, as gunpowder, tanks, aircraft, and the atomic bomb did in previous eras. Today, states are actively seeking to harness the power of AI for military advantage. China, for instance, has announced its intention to become the world leader in AI by 2030. Its “New Generation AI Development Plan” proclaimed: “AI is a strategic technology that will lead the future.” Similarly, Russian President Vladimir Putin declared: “Whoever becomes the leader in this sphere will become ruler of the world.” In response to the challenge posed by China and Russia, the United States has committed to a “third offset” strategy, investing heavily in AI, autonomy, and robotics to sustain its advantage in defense.

In light of these dramatic developments, military commentators have become deeply interested in the question of the military application of AI. For instance, in a recent monograph, Ben Buchanan and Andrew Imbrie have claimed that AI is the “new fire.” Autonomous weapons controlled by AI — not by humans — will become increasingly accurate, rapid, and lethal. They represent the future of war. Many other scholars and experts concur. For instance, Stuart Russell, the eminent computer scientist and AI pioneer, dedicated one of his 2021 BBC Reith Lectures to the military potential of AI. He warned of the rise of slaughterbots and killer robots. He described a scenario in which a lethal quad-copter the size of a jar could be armed with an explosive device: “Anti-personnel mines could wipe out all the males in a city between 16 and 60 or all the Jewish citizens in Israel and unlike nuclear weapons, it would leave the city infrastructure.” Russell concluded: “There will be 8 million people wondering why you can’t give them protection against being hunted down and killed by robots.” Many other scholars, including Christian Brose, Ken Payne, John Arquilla, David Hambling, and John Antal, share Russell’s belief that with the development of second-generation AI, lethal autonomous weapons — such as killer drone swarms — may be imminent.

Military revolutions have often been less radical than their advocates initially presumed. The revolution in military affairs of the 1990s was certainly important in opening up new operational possibilities, but it did not eliminate uncertainty. Similarly, some of the debate about lethal autonomy and AI has been hyperbolic. It has misrepresented how AI currently works, and what its potential effects on military operations might, therefore, be in any conceivable future. Although remote and autonomous systems are becoming increasingly important, there is little chance of autonomous drone swarms substituting for troops on the battlefield, or of supercomputers replacing human commanders.

AI became a major research program in the 1950s. At that time, it operated on the basis of symbolic logic — programmers hand-coded the symbols and rules that the system processed. This approach was known as good old-fashioned artificial intelligence. It made some progress, but because it was based on the manipulation of pre-assigned symbols, its utility was very limited, especially in the real world. An AI “winter,” therefore, closed in from the late 1970s and lasted through the 1980s.

Since the late 1990s, second-generation AI has produced some remarkable breakthroughs on the basis of big data, massive computing power, and algorithms. There were three seminal events. On May 11, 1997, IBM’s Deep Blue beat Garry Kasparov, the world chess champion. In 2011, IBM’s Watson won Jeopardy!. Even more remarkably, in March 2016, AlphaGo beat the world champion Go player, Lee Sedol, 4-1.

Deep Blue, Watson, and AlphaGo were important waypoints on an extraordinary trajectory. Within two decades, AI had gone from disappointment and failure to unimagined triumphs. However, it is important to recognize what second-generation AI can and cannot do. It has been developed around neural networks. Machine learning programs process huge amounts of data through these networks, re-calibrating the weights assigned to particular pieces of data until, finally, the program generates coherent answers. The system is probabilistic and inductive. Programs and algorithms know nothing. They are unaware of the real world and, in a human sense, unaware of the meaning of the data they process. Using algorithms, machine learning simply builds models of statistical probability from massively reiterated trials. In this way, second-generation AI identifies multiple correlations in the data. Given enough data, probabilistic induction becomes a powerful predictive tool. Yet AI does not recognize causation or intention. Peter Thiel, a leading Silicon Valley tech entrepreneur, has articulated AI’s limitations eloquently: “Forget science-fiction fantasy, what is powerful about actually existing AI is its application to relatively mundane tasks like computer vision and data analysis.” Consequently, although machine learning is far superior to a human at limited, bounded, mathematizable tasks, it is very brittle. Utterly dependent on the data on which it has been trained, it can be rendered useless by even a tiny change in the actual environment — or the data.
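This brittleness is easy to demonstrate. The minimal sketch below, written purely for illustration and not drawn from any military system, trains a tiny statistical classifier on synthetic data and then shifts the test environment slightly; the data, the model, and the shift are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two classes drawn from fixed, well-separated distributions.
X_train = np.vstack([rng.normal(-1.0, 0.5, (500, 2)),
                     rng.normal(+1.0, 0.5, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

# Logistic regression fitted by gradient descent: pure statistical
# induction from reiterated trials, with no notion of causation or intent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

# Fresh test data from the same distributions: near-perfect performance.
X_test = np.vstack([rng.normal(-1.0, 0.5, (500, 2)),
                    rng.normal(+1.0, 0.5, (500, 2))])
y_test = y_train.copy()
print("same environment:   ", accuracy(X_test, y_test))        # ~0.99

# Shift every input slightly, as a changed environment would: the learned
# correlations no longer apply and performance collapses to chance.
print("shifted environment:", accuracy(X_test + 2.0, y_test))  # ~0.5
```

A computer-vision model aboard a drone faces exactly this problem whenever the battlefield differs from its training footage.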

The brittleness of data-based inductive machine learning is very significant to the prospect of an AI military revolution. Both proponents and opponents of AI imply that, in the near future, it will be relatively easy for autonomous drones to fly through an urban area, identifying and engaging targets. After all, autonomous drone swarms have already been demonstrated — in admittedly contrived and controlled environments. In reality, however, it will be very hard to train a drone to operate autonomously in land combat. The environment is dynamic and complex, especially in towns and cities, where civilians and soldiers are intermixed. There do not seem to be any obvious data on which to train a drone swarm reliably — the situation is too fluid. Similarly, it is not easy to see how an algorithm could make command decisions. Command decisions require the interpretation of heterogeneous information and the balancing of political and military factors, all of which demand judgement. In a recent article, Avi Goldfarb and Jon R. Lindsay argued that data and AI are best suited to simple decisions with perfect data. Almost by definition, military command decisions involve complexity and uncertainty. It is notable that, while Google and Amazon are the pre-eminent data companies, their managers do not envisage a day when an algorithm will make their strategic and operational decisions for them. Data, processed rapidly with algorithms, helps their executives understand the market to a depth and fidelity that their competitors cannot match. Information advantage has propelled them to dominance. However, machine learning has not superseded the executive function.

It is, therefore, very unlikely that lethal autonomous drones or killer robots enabled by AI will take over the battlefield in the near future. It is also improbable that commanders will be replaced by computers or supercomputers. This does not mean, however, that AI, data, and machine learning are unimportant to contemporary and future military operations. They are crucial. But their primary function is not lethality — they are not the new fire, as some claim. Data — digitized information stored in cyberspace — are crucial because they provide states with a wider, deeper, and more faithful understanding of themselves and their competitors. Massive data sets, processed effectively by AI, will allow military commanders to perceive the battlespace at a hitherto unachievable depth, speed, and resolution. Data and AI are also crucial for cyber operations and informational campaigns; they have become indispensable for defense and attack. AI and data are, therefore, not so much the new fire as a new form of digitized military intelligence, exploiting cyberspace as a vast new resource of information. AI is a revolutionary way of seeing “the other side of the hill.” Data and AI are a — maybe even the — critical intelligence function for contemporary warfare.

Paul Scharre, the well-known military commentator, once argued that AI would inevitably lead to lethal autonomy. In 2018, he published his best-selling book, Army of None, which plotted the rise of remote and autonomous weapon systems. There, Scharre proposed that AI was about to revolutionize warfare: “In future wars, machines may make life and death decisions.” Although the potential of AI still enthuses him, he has now substantially changed his mind. Scharre’s new book, Four Battlegrounds, published in February 2023, represents a profound revision of his original argument. In it, he retreats from the cataclysmic picture that he painted in Army of None. If Army of None was an essay in science fiction, Four Battlegrounds is a work of political economy. It addresses the concrete issues of great-power competition and the industrial strategies and regulatory systems that underpin it. The book describes the implications of digitized intelligence for military competition. Scharre analyzes the regulatory environment required to harness the power of data. He plausibly claims that superiority in data, and in the AI to process it, will be militarily decisive in the superpower rivalry between the United States and China. Data will afford a major intelligence advantage. For Scharre, four critical resources will determine who wins this intelligence race: “Nations that lead in these four battlegrounds — data, compute, talent, and institutions [tech companies] — will have a major advantage in AI power.” He argues that the United States and China are locked in a mortal struggle for these four resources. Both are now fully aware that whoever gains the edge in AI will be significantly advantaged politically, economically, and, crucially, militarily. They will know more than their adversary. They will be more efficient in the application of military force. They will dominate the information and cyber spaces. They will be more lethal.

Four Battlegrounds plots this emerging competition for data and AI between China and the United States. It lays out recent developments and assesses the relative strengths of both nations. China is still behind the United States in several areas. The United States has the leading talent and is ahead in research and technology: “China is a backwater in chip production.” However, Scharre warns against U.S. complacency. Indeed, the book is animated by the fear that the United States will fall behind in the data race. Scharre, therefore, highlights China’s advantages — and its rapid advances. With 900 million internet users already, China has far more data than the United States. Some parts of its economy, such as ride-hailing, are far more digitized than their American counterparts. WeChat, for instance, has no American parallel, and many Chinese apps are superior to U.S. ones. The Chinese state is also uninhibited by legal constraints or civic concerns about privacy. The Chinese Communist Party actively monitors the digital profiles of its citizens — it harvests their data and logs their activities. In cities, it employs facial recognition technology to identify individuals.

State control has benefited Chinese tech companies: “The CCP’s massive investment in intelligence surveillance and social control boosted Chinese AI companies and tied them close to government.” The synergies between government and the tech sector in China are deep. China also has significant regulatory advantages over the United States. The Chinese Communist Party has underwritten tech giants like Baidu and Alibaba: “Chinese investment in technology is paying dividends.” Scharre concludes: “China is not just forging a new model of digital authoritarianism but is actively exporting it.”

How will the U.S. government oppose China’s bid for data and AI dominance? Here Four Battlegrounds is very interesting — and it contrasts markedly with Scharre’s speculations in Army of None. If the U.S. government is to harness the military potential of data, there needs to be major regulatory change. The armed forces need to form deep partnerships with the tech sector. They “will have to look beyond traditional defense contractors and engage in start-ups.” This is not easy. Scharre documents the challenging regulatory environment in the United States in comparison with China: “in the U.S., the big tech corporations Amazon, Apple, Meta (formerly Facebook) and Google are independent centers of power, often at odds with government on specific issues.” Indeed, Scharre discusses the notorious protest at Google in 2018, when employees refused to work on the Department of Defense’s Project Maven contract. Skepticism about military applications of AI remains in some parts of the U.S. tech sector.

American tech companies may have been reluctant to work with the armed forces, but the Department of Defense has not helped. It has unwittingly obstructed military partnerships with the tech sector. The Department of Defense has always had a close relationship with the defense industry. As early as 1961, President Dwight D. Eisenhower warned about the threat that the “military-industrial complex” posed to democracy. The department has developed an acquisition and contracting process primarily designed for the procurement of exquisite platforms: tanks, ships, and aircraft. Lockheed Martin and Northrop Grumman have become adept at delivering weapon systems to discrete Department of Defense specifications. Tech companies do not work like this. As one of Scharre’s interviewees noted: “You don’t buy AI like you buy ammunition.” Tech companies are not selling a specific capability, like a gun. They are selling data, software, and computing power — ultimately, they are selling expertise. Algorithms and programs are best developed iteratively, in relation to a very specific problem. The full potential of a piece of software or an algorithm for a military task may not be immediately obvious, even to the tech company itself. Operating in competitive markets, tech companies therefore prefer a more flexible, open-ended contractual relationship with the Department of Defense — they need security and quick financial returns. They are looking for collaborative engagement, rather than just a contract to build a platform.

The U.S. military, and especially the Department of Defense, has not always found this novel approach to contracting easy. In the past, the bureaucracy was too sluggish to respond to tech companies’ needs — the acquisition process took seven to 10 years. However, although many tensions exist and the system is far from perfect, Scharre records a transforming regulatory environment. He describes the rise of a new military-tech complex in the United States. Project Maven, of course, exemplifies the process. In 2017, Bob Work issued a now-famous memo announcing the “Algorithmic Warfare Cross-Functional Team” — Project Maven. Since the emergence of surveillance drones and military satellites during the Global War on Terror, the U.S. military had been inundated with full-motion video feeds. That footage was invaluable. For instance, using Gorgon Stare, a 24-hour aerial surveillance system, the U.S. Air Force had been able to plot back from a car bomb explosion in Kabul in 2019, which killed 126 civilians, to find the location of the safe houses used to execute the attack. Yet the process was very slow for human analysts. Consequently, the Air Force started to experiment with computer-vision algorithms to sift through its full-motion video. Project Maven sought to scale up the Air Force’s successes. It required a new contracting environment, though. Instead of a long acquisition process, Work introduced 90-day sprints: companies had three months to show their utility. If they made progress, their contracts were extended — if not, they were out. At the same time, Work declassified drone footage so that Project Maven could train its algorithms. By July 2017, Project Maven had an initial operational capability, able to detect 38 different classes of object. By the end of the year, it was deployed on operations against ISIS: “the tool was relatively simple, and identified and tracked people, vehicles, and other objects in video from ScanEagle drones used by special operators.”
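To make the task concrete, the sketch below shows the general shape of such a frame-sifting pipeline: a pretrained object detector sampling video frames and flagging those containing objects of interest. It is an illustration only, not Project Maven’s actual code; the video file name is hypothetical, and the detector, classes, and threshold are assumptions chosen for the example.

```python
# Illustrative only: a generic frame-sifting pipeline, not Project Maven's code.
# Assumes: pip install torch torchvision opencv-python, plus a local video file
# named "drone_footage.mp4" (hypothetical).
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# A detector pretrained on the COCO dataset, switched to inference mode.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON, CAR = 1, 3  # COCO class ids for "person" and "car"

cap = cv2.VideoCapture("drone_footage.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            detections = model([to_tensor(rgb)])[0]
        hits = [(lbl.item(), scr.item())
                for lbl, scr in zip(detections["labels"], detections["scores"])
                if lbl.item() in (PERSON, CAR) and scr.item() > 0.8]
        if hits:
            # Flag the frame for a human analyst rather than acting on it.
            print(f"frame {frame_idx}: {len(hits)} object(s) of interest")
    frame_idx += 1
cap.release()
```

The algorithm accelerates the sifting; judgement about what the detections mean remains with the analyst, which is precisely the division of labor described above.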

Since Project Maven, the Department of Defense has introduced other initiatives to catalyze military-tech partnerships. The Defense Innovation Unit has accelerated relations between the department and companies in Silicon Valley, offering contracts in 26 days rather than in months or years. In its first five years, the Defense Innovation Unit issued contracts to 120 “non-traditional” companies. Under Lt. Gen. Jack Shanahan, the Joint Artificial Intelligence Center played an important role in advancing the partnership between the armed forces and tech companies for humanitarian assistance and disaster relief operations, developing software to map wildfires and support post-disaster assessments — whether these examples in Scharre’s text imply wider military applications is unclear. After early difficulties, the Joint Enterprise Defense Infrastructure, created by Gen. James Mattis when he was secretary of defense, has reformed the acquisition system for tech. For instance, in 2021, the Department of Defense gave Anduril nearly $100 million to develop an AI-based counter-drone system.

Four Battlegrounds is an excellent and informative addition to the current literature on AI and warfare. It complements the recently published works of Lindsay, Goldfarb, Benjamin Jensen, Christopher Whyte, and Scott Cuomo. The central message of this literature is clear. Data and AI are, and will remain, very important for the armed forces. However, data and AI will not radically transform combat itself — humans will still overwhelmingly operate the lethal weapon systems, including remote ones, that kill people, as the savage war in Ukraine shows. The situation in combat is complex and confusing. Human judgement, skill, and cunning are required to employ weapons to their greatest effect there. However, any military force that wants to prevail on the battlefields of the future will need to harness the potential of big data — it will have to master the digitized information flooding through the battlespace. Humans simply do not have the capacity to do this alone. Headquarters will, therefore, need algorithms and software to process that data. They will need close partnerships with tech companies to create these systems, and data scientists, engineers, and programmers in operational command posts themselves to make them work. If the armed forces are able to do this, data will allow them to see across the depth and breadth of the battlespace. It will not solve the problems of military operations — fog and friction will persist. However, empowered by data, commanders might be able to employ their forces more effectively and efficiently. Data will enhance the lethality of the armed forces and their human combat teams. The Russo-Ukrainian War already offers an insight into the advantages that data-centric military operations afford over an opponent still operating in analogue. Scharre’s book is a call to ensure that the fate of the Russian army in Ukraine does not befall the United States when its next war comes.

Anthony King is the Chair of War Studies at the University of Warwick. His latest book, Urban Warfare in the Twenty-First Century, was published by Polity Press in July 2021. He currently holds a Leverhulme Major Research Fellowship and is researching AI and urban operations. He plans to write a book on this topic in 2024.

Image: Department of Defense



