The Middle East’s AI Warfare Laboratory
On a chilly morning in November 1911, Lt. Giulio Gavotti, an Italian pilot, leaned out from the cockpit of his monoplane over the oases and farmlands of modern-day Libya and tossed four small grenades onto an encampment of Ottoman soldiers. Widely covered in the international press, the bombardment was ultimately ineffective and caused no casualties. Yet it is acknowledged today as the start of a revolution in military affairs — the first recorded instance of explosives dropped from a powered aircraft during an armed conflict, ushering in the age of Guernica, Dresden, and Hiroshima.
More than a century later, Western media were abuzz again with reports of another quantum leap in military technology, this one occurring in March 2020 only a few kilometers from the site of the Italian aviator’s sortie. According to a U.N. investigation, a Turkish-made Kargu-2 drone engaged the vehicles and troops of a Libyan militia “without requiring data connectivity between the operator and the munition.” As such, it may have been the first instance of an attack by a “lethal autonomous weapons system.” However, an exhaustive investigation and years of debate have left experts dubious about whether the strike took place without human input.
What largely escaped the discussions over the Kargu incident is that, like the Italian pilot’s innovation at the turn of the previous century, it happened during a conflict of marginal importance at the time, waged by second-tier state powers and their local proxies on the periphery of Eurasia, a region regarded both then and now as the world’s geopolitical core. While much of the West’s focus on the weaponization of AI has been directed at that Eurasian heartland — on the developmental threat posed by China and on Ukraine and Russia’s battlefield innovations — the Middle East and North Africa region remains uniquely vulnerable to the uncontrolled and lethal application of these technologies.
To begin with, many Middle Eastern conflicts fall short of total war, waged by state and non-state actors using a range of unconventional tactics. In such contexts, AI technologies hold out the promise of unique and decisive advantages, while adding new operational and ethical challenges. Moreover, wars in this region are characterized by the routine flouting of international norms of warfare — often abetted by outside powers — which puts the region at even greater risk from the misuse of weaponized AI.
In many respects, Israel’s recent military campaign in the Gaza Strip epitomizes the intersection of these trends, while also highlighting the grave dangers this technology poses to civilians. It has also demonstrated Israel’s qualitative edge in the regional AI arms race, which has been joined by other ambitious and interventionist Middle Eastern states. The escalatory spiral of this competition, along with the region’s history of conflicts and entrenched rivalries, underscores the urgent need for better regulation of the development and use of AI weapons through informal compacts that emerge from within the Middle East itself.
Why Middle East Conflicts Are So Conducive to the Use of Weaponized AI
The Middle East and North Africa is arguably the most conflict-ridden and militarized region in the world. Four of the 11 “extreme conflicts” identified in 2024 by the Armed Conflict Location & Event Data Project occurred there, and six of the region’s 16 countries were listed as conflict zones. Wars in the region often occur on the lower rungs of the escalation ladder and are waged through a blend of irregular tactics, subversion, disinformation, cyberattacks, and the deployment of standoff weapons like drones and ballistic missiles. Because they promise greater precision, speed, lethality, and deniability, AI systems are likely to integrate well into these dominant modes of warfighting, amplifying their physical and psychological effects.
The topography of many Middle Eastern conflicts adds to the allure of deploying artificial intelligence. At present, AI-enabled technologies show the most promise in the aerial domain of warfare, especially in accelerating the targeting cycle of air-to-ground strikes. For such functions, the object-recognition capabilities of the current class of algorithms work best in topographically simpler and less populated environments, such as the deserts and shrub steppes that predominate in the region. During its recent campaigns against non-state militants in Iraq, Syria, and Yemen, for example, the U.S. military made extensive use of algorithms developed under Project Maven to distinguish between different classes of non-human targets, including tanks, trucks, and air defense sites.
Relatively basic algorithms are also well-suited to maritime military operations, especially in the littoral sea lanes and chokepoints of the Suez Canal, the Bab al-Mandeb, and the Strait of Hormuz. In those environments, AI systems currently lend themselves to defensive and surveillance functions, such as the use of pattern and signature recognition to provide forewarning of seaborne and aerial attacks on ships and coastal infrastructure. Even so, future AI advancements will also improve offensive naval capabilities, as the Ukrainian military’s use of sea drones for deep strikes against Russian warships in the Black Sea has already shown. Such systems will undoubtedly prove attractive to Middle Eastern states and non-state actors that have made maritime disruption a centerpiece of their warfighting strategy, most notably Iran and its Yemeni proxy, the Houthis.
Given the Middle East’s high degree of urbanization, it is unsurprising that its wars have often reached their decisive climax in cities and suburbs, such as Aleppo, Raqqa, Mosul, and Sirte. Combat in the three-dimensional battlespaces of such densely built settings imposes acute hardships on belligerents, offsetting advantages in mass, mobility, and firepower, and degrading command-and-control. Artificial intelligence therefore holds the promise of easing, if not erasing, some of these challenges. While the current generation of AI solutions still struggles in urban settings, especially with visual, sonic, and thermal “clutter,” it is only a matter of time before the technology evolves to overcome these limitations.
Among the most relevant AI applications for urban warfare are battle management systems that can help commanders at all levels obtain a clearer picture of a dynamic cityscape. At the tactical level, small autonomous drones and robots fitted with sensors or munitions can move over mounds of rubble, through the interior rooms of buildings, and inside sewers and tunnels. More controversially, algorithmic pattern-recognition tools, often based on behavioral and biometric data, can provide early warning of an impending insurgent or terrorist attack in densely populated areas. Yet without appropriate safeguards, this capability is fraught with ethical risks, especially when it is directed at specific ethnolinguistic or religious communities — a particular concern in the Middle East, given the salience of these identities as factors in conflicts.
Finally, there is a normative aspect to Middle Eastern wars that should raise additional worries about the militarization of AI. State and non-state actors in regional conflicts have historically flouted international conventions governing warfare with alarming frequency, especially those regarding the protection of civilians. In many instances, outside powers have enabled and condoned these transgressions to shield their Middle Eastern allies and clients from scrutiny and sanctions. In Libya, for example, repeated breaches of U.N. arms embargoes by the United Arab Emirates and other regional actors went unpunished, in part because the United States and other Security Council members wanted to protect their local partners. More recently, the Biden and Trump administrations have armed, funded, and defended Israel’s military campaign in Gaza despite its repeated violations of international law, while China and Russia have stayed silent on the egregious abuses of Iran’s regional proxies. External actors have also committed, rather than simply enabled, these violations, as demonstrated by recent U.S. airstrikes against the Houthis, further normalizing such behavior.
How Middle Eastern States Are Driving the AI Arms Race
Despite these risks, ambitious and powerful Middle Eastern states are pressing ahead in the race for weaponized AI. Many are led by deeply autocratic regimes that are using this technology to fight terrorists and criminals, but also to silence political dissidents and journalists. Yet the clear regional leader in applying AI for both internal security and military operations is not an autocracy, but a democracy — albeit an increasingly imperiled one.
Prior to the Gaza War, the Israel Defense Forces had already invested in and deployed militarized AI, benefiting from the country’s well-funded technological sector and its close collaboration with the military. In the past, AI was used mostly for population surveillance and border policing, exemplified by the AI-powered robotic twin gun turrets installed atop a wall in the occupied West Bank. In 2021, Israel employed AI-enabled intelligence processing and targeting systems during the Unity Intifada, in a campaign Israeli commentators described as the “world’s first AI war.” Two years later, it built on this experience to deploy the technology on a much larger scale during its incursion into the Gaza Strip following Hamas’s Oct. 7 massacres and hostage-taking.
The results have been troubling. On the one hand, AI has enhanced commanders’ situational awareness, lessened the human costs of tunnel mapping, improved the speed of military strikes, and boosted the survivability of troops. Yet these benefits have been overshadowed by mounting civilian deaths reportedly resulting from the loosening or elimination of the technology’s safeguards. For example, the Israel-based +972 Magazine found that the “Where’s Daddy?” application was used to alert Israeli military personnel when a suspected Hamas militant entered a specified area, often a family home, which was then struck with unguided “dumb” bombs. U.N. experts have expressed deep concern about the Israeli military’s use of this and other AI targeting systems, including “Lavender” and “Gospel,” warning about the “lowered human due diligence to avoid or minimise civilian casualties.”
Elsewhere in the region, the oil-rich Gulf states of Saudi Arabia and the United Arab Emirates are using their wealth to fuel domestic development of AI and to attract investment from abroad, especially from China and the United States. Undertaken as part of an economic diversification strategy, their investments have improved the capacity of their domestic security services to counter illicit networks and violent extremist actors, but also to monitor and suppress activists.
Externally, the two Gulf monarchies are harnessing AI technology to guard against cyberattacks and misinformation, bolster their air and coastal defenses, and assist in the development of semi-autonomous loitering munitions and drone swarms. Riyadh and Abu Dhabi currently appear to be prioritizing domestic economic development, but both have in the past pursued destabilizing military interventions, conducting joint airstrikes and waging proxy wars in Yemen and, in the case of the UAE, in Libya and Sudan. The adoption of AI into their militaries could embolden them toward greater adventurism abroad, especially if the new technology is perceived to lower the costs of such meddling.
Across the Gulf, the Islamic Republic of Iran is endeavoring to build an AI arms industry in the face of crippling sanctions. Its program lacks the sophistication of other regional powers’, but the regime in Tehran likely sees great value in incorporating AI into its efforts to rebuild Iran’s power projection capabilities in the wake of Israel’s punishing strikes against Hamas and Hizballah. Specifically, Iranian officials may believe that transferring this technology to Iran’s proxy allies could signal its continued viability as a patron and reassert control, while restoring some measure of psychological deterrence against its foes. This logic may also be guiding Iran’s recent pronouncements about AI enhancements to its fleets of long-range missiles and unmanned aerial vehicles, including the formidable Shahed loitering munition. Tehran is also enlisting AI to strengthen its cyber operations capabilities, which constitute yet another critical pillar of its national defense strategy.
Turkey’s prowess in AI lags behind that of other Middle Eastern powers, despite its much-hyped Kargu-2 strike in Libya. To be sure, Turkey has carved out a niche in so-called “drone diplomacy,” marketing and deploying its unmanned aerial vehicles across Europe, Asia, and Africa. Yet this quantitative edge is not matched by commensurate quality: bereft of the Gulf states’ oil wealth and Israel’s technological base, Ankara lacks the resources to emerge as a regional frontrunner in military-use AI. Currently, Turkey appears to be prioritizing drone production over AI research and development, although President Erdogan’s ambitions to project power across the Mediterranean and into Africa may yet compel such investments.
The Need for Region-Led Dialogue on Military AI
The net effect of weaponized AI on the regional balance of power remains unclear. Yet as long as it is perceived to reduce the physical and political risks of war-making, AI weaponry may prolong existing conflicts and spark new ones. Moreover, the future use of AI is not limited to the conventional militaries of the Middle East’s powerhouses: If the proliferation pathways of drones are any guide, these technologies may find their way to non-state militants through a combination of smuggling, homegrown experimentation, and sales. Such diffusion is likely to offset the deterrent power or battlefield advantages the region’s leading states believe they currently derive from AI.
These risks underscore the urgency of governing and regulating this technology, and such discussions have already started. In 2023, Austria tabled a motion at the United Nations to apply international law to lethal autonomous weapons. The resolution enjoyed widespread support, but five of the eight abstentions came from the Middle East’s leading users of military-use AI. Concurrent with these deliberations, the Biden administration pushed for greater governance of military AI, but those initiatives are now in jeopardy as President Donald Trump pushes for AI deregulation and chafes at international arms control agreements.
In light of Washington’s ambivalence and the United Nations’ halting progress, it is vital that regulations or norms governing the use of AI technology emerge from within the Middle East itself. The prospects for this happening through a formal multilateral institution are not encouraging, given the region’s repeated failure to establish a region-wide security forum. Any near-term discussions on militarized AI will likely happen between clusters of like-minded states, such as Gulf Cooperation Council members or parties to the Abraham Accords, which may agree on arms control and monitoring mechanisms as a form of confidence-building. These modest steps should be encouraged, along with the establishment of informal norm-making bodies, like an international experts’ group tasked with investigating the civilian impacts of these systems.
Ultimately, the deep-seated drivers behind the AI arms race in the Middle East are unlikely to be resolved soon. That said, the aftermath of the Gaza War, along with doubts about America’s future as a security guarantor, has hastened the realization in many capitals that the best hope for stability in the Middle East lies in local dialogue and de-escalation. Ensuring that discussions about the military use of artificial intelligence are included in these initiatives seems the most feasible way of mitigating the risks of this new technology in an already conflict-racked region.
Frederic Wehrey is a senior fellow in the Middle East Program at the Carnegie Endowment for International Peace and a former U.S. Air Force intelligence officer.
Andrew Bonney is a former research assistant in the Carnegie Middle East Program.