How the West Can Match Russia in Drone Innovation

Since the start of Russia’s invasion of Ukraine, the use of AI for military operations has been one of the most debated topics across public media and the open-source literature. But for all the praise Ukrainian innovation has garnered, there is too little recognition of how effective Russia’s more reckless approach to AI has been. 

Ukrainian and Russian forces have used AI for decision-making and data analysis when processing information received from multiple sensors and observation points, including drones and other uncrewed vehicles, manned aircraft, satellites, and ground-based forces and systems. But there have also been differences in how the two sides employ AI. Ukrainian and Western AI has focused on fast identification, tracking, and targeting. Russia, in turn, has used loitering munitions, as well as various command and control and intelligence, surveillance, and reconnaissance systems, to meet its need for precision targeting.

Put simply, the focus of Western AI-enabled systems is on the left side of the observe, orient, decide, and act loop. But while the West prioritizes faster targeting and enhanced warfighter capabilities, Russia is attempting to automate the entire kill chain. In short, Russia’s aggressive military and volunteer-driven AI use stands in contrast to the United States’ cautious and responsible, if under-resourced, approach. The U.S. Department of Defense now urgently needs to prioritize AI assurance in order to compete ethically on a dynamic AI battlefield.

Russian Innovation

While the United States has focused on methodically fixing technology acquisition and data integration challenges, Russia’s military efforts in Ukraine are being bolstered by a thriving landscape of funding, manufacturing, and opportunistic deployment of commercial technologies into operational environments. 

Russia’s AI development extends beyond official military channels. The rapidly growing and sophisticated ecosystem that supports the country’s military in Ukraine is made up of numerous volunteer efforts that develop and supply fighting equipment and systems to the front. This supply process takes advantage of commercial off-the-shelf technologies to build and assemble weapons like first-person-view drones: light, one-way attack (kamikaze) uncrewed aerial vehicles whose operator sees a video feed transmitted from the drone, similar to the view from an aircraft pilot’s seat. Such efforts are a natural outgrowth of the rapid acquisition and mass-scale use of commercial drones and related systems during this invasion. They are also aided by donations from Russian citizens, private businesses, and wealthy individuals, enabling both purchasing power and rapid innovation. Some of these organizations further benefit from direct links to the Russian government, with access to funding, technologies, and government protection for their efforts.

The Russian Ministry of Defense frequently touts AI-enabled small and mid-sized drones and uncrewed aerial vehicles, often during major events like its annual military expo and forum. Commercial off-the-shelf technology apparently enables Russian volunteer efforts to incorporate AI into their drone development. In August 2023, specialists from the “Tsar’s Wolves” announced that their Shturm 1.2 heavy quadcopter drone utilizes AI, claiming that this semi-autonomous uncrewed aerial vehicle can make decisions and drop projectiles independently of the human operator. According to the developers, an operator places a crosshair on a target on the drone’s control panel, and the drone then independently calculates the time and distance before releasing its munitions.
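
If the claim is accurate, the underlying calculation need not be exotic. The sketch below is a minimal, hypothetical illustration of the kind of time-and-distance computation the developers describe, assuming simple vacuum ballistics; the function, numbers, and crude drag factor are ours for illustration, not the Shturm’s actual logic.

```python
import math

def release_range(altitude_m: float, ground_speed_ms: float,
                  drag_factor: float = 1.0) -> tuple[float, float]:
    """Estimate fall time and forward travel for a dropped munition.

    Simple vacuum ballistics: fall time from altitude, horizontal carry
    from the drone's ground speed. drag_factor < 1.0 crudely shortens
    the carry to approximate air resistance.
    """
    g = 9.81  # m/s^2
    t_fall = math.sqrt(2 * altitude_m / g)          # time to fall
    carry = ground_speed_ms * t_fall * drag_factor  # forward travel while falling
    return t_fall, carry

# A quadcopter at 100 m moving at 10 m/s toward the crosshair:
t, d = release_range(100.0, 10.0)
print(f"fall time {t:.1f} s, release {d:.1f} m before the target")
# -> fall time 4.5 s, release 45.2 m before the target
```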

Such “point and click” AI claims are difficult to corroborate in the absence of proof. If true, they suggest the growing sophistication of Russian volunteer organizations that are experimenting with limited AI-enabled image recognition and terrain-mapping algorithms. The Shturm 1.2 is allegedly equipped with a thermal imaging camera and can also be used as a one-way kamikaze drone, which indicates that despite its sophisticated internals, it is still cheap enough to be expendable in combat.

In August 2023, another Russian volunteer group unveiled the Ovod (Gadfly) first-person-view drone. According to its developers, the drone’s onboard AI system allows it to attack static and moving targets with a claimed mission accuracy of up to 90 percent, and the Ovod has reportedly been tested in combat in Ukraine. Also in August 2023, a volunteer effort called “Innovators for the Front” exhibited the “Aqua-22,” an AI-enabled quadcopter for autonomous operations. The developers claimed that this drone can autonomously recognize adversary equipment, manpower, and other objects. While this also appears to be a “homegrown” effort, the developers admitted that the drone was developed jointly with a military-affiliated research and development institution.

In fact, some Ukrainian volunteers and military experts are concerned by several instances of Russia’s limited AI-enabled first-person-view drones already appearing at the front in early 2024. Today, Russia actively tests its commercial systems in live military operations, gaining insights into their practical efficacy. While this allows advanced technology to be evaluated in combat, the approach is antithetical to American democratic values. In the United States, testing an AI-enabled system in combat would directly violate the Department of Defense’s ethical AI principles and long-established international norms. Instead, the Department of Defense is playing the long game by conscientiously (if somewhat reluctantly) dedicating resources to a comprehensive expansion of its testing infrastructure, aimed at fostering a robust AI assurance framework and adherence to U.S. values and regulations, like Department of Defense Directive 3000.09.

Plausibility and Analysis: How Much Can Be Done on the Fly?

How plausible are Russian claims, and how likely are these volunteer organizations to field an actual AI-enabled military drone? Many Russia-based commentators and drone enthusiasts claim AI is necessary in this war, given the need for all kinds of drones to operate more autonomously to avoid multilayered countermeasures like electronic warfare that permeate the Ukrainian battlespace. 

The public discussion on the Russian side points to the possibility of ground-based commands to kamikaze-type drones with onboard AI systems, along with AI-enabled drone swarms converging on identified adversary personnel, weapons, and systems. Given that many major militaries around the world are working on such developments, it’s not far-fetched to conclude that organizations tasked with developing combat first-person-view drones would also consider how such advanced technology may aid their efforts. Such Russian volunteer efforts are also competing with Ukrainian initiatives working on AI-enabled first-person-view drones. 

In the United States, however, the focus has been on developing alternative positioning, navigation, and timing capabilities, testing and hardening systems against adversarial actions, and implementing mission autonomy by enabling inference on the platforms themselves rather than relying on communications. But America will likely face some of the same needs Russia does today. The likely loss of communications, global positioning system signals, and other connectivity is as much a driver of autonomous systems development as is the desire to speed up military decision-making.

Russian volunteer claims about AI-enabled drones may be grounded in related international developments. The same month that Russian volunteers made their August 2023 announcements, a Switzerland-based effort unveiled AI technology that enabled its “Swift” racing drone to beat human pilots. According to the developers, the drone reacted in real time to data collected by its onboard camera: an artificial neural network used the camera data to localize the drone in space and detect obstacles and pathways along the racetrack, and this data was fed to a control unit based on a deep neural network that chose the best pathway to finish the racing circuit. The drone was also trained in a simulated environment, where it taught itself to fly by trial and error using a machine learning technique called reinforcement learning. During training, the developers flew the drone autonomously with precise positions provided by an external position-tracking system while it recorded data from its camera, a process that allowed it to correct errors in interpreting data from the onboard sensors. While this achievement marked a milestone in drone development, some international commentators cautioned that human drone pilots can recover rapidly from mistakes and collisions, while Swift “fumbled significantly when faced with unexpected physical changes, like abrupt weather shifts.”
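
To make the described two-stage architecture concrete, here is a schematic Python sketch of the data flow the developers outline: a perception network turns camera data into a state estimate, and a control network turns that state into flight commands. The layer sizes, the nine-value state, and the four-command output are assumptions for illustration; Swift’s real networks were trained, whereas this skeleton uses random weights purely to show the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight multilayer perceptron; stands in for trained networks."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w, b in layers[:-1]:
        x = np.tanh(x @ w + b)  # hidden layers
    w, b = layers[-1]
    return x @ w + b            # linear output layer

# Stage 1: perception network maps camera features (e.g., detected gate
# corners) to a state estimate: position, velocity, orientation (9 values).
perception = mlp([32, 64, 9])
# Stage 2: control policy maps the state estimate to commands:
# collective thrust plus three body rates.
policy = mlp([9, 64, 4])

camera_features = rng.standard_normal(32)  # placeholder for one frame
state = forward(perception, camera_features)
thrust, roll_rate, pitch_rate, yaw_rate = forward(policy, state)
print(thrust, roll_rate, pitch_rate, yaw_rate)
```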

AI software is a key component in drones, with many commercial companies selling computer vision systems, object and terrain recognition, and related technologies. Moscow-based hive.aero develops “autonomous drone solutions for regular monitoring,” advertising a neural network to analyze input data such as photo, video, symbols, text, sound, thermal images, chemical reagents, and spatial indicators, and noting that “the possibilities of a neural network for defining typical cases are practically endless.” Some companies even advertise AI software downloads for small and medium-sized commercial drones for less than $50.
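
To illustrate how low the integration barrier can be, the hypothetical sketch below runs a generic, off-the-shelf object detector over one video frame using OpenCV’s DNN module. The detector.onnx file name, input size, and camera index are placeholders, and decoding the raw outputs differs by model; this is a sketch of the pattern, not any specific product.

```python
import cv2

# Placeholder path: any off-the-shelf ONNX object detector exported with a
# 640x640 input; the file name is illustrative, not a specific product.
net = cv2.dnn.readNetFromONNX("detector.onnx")

cap = cv2.VideoCapture(0)  # stand-in for the drone's video downlink
ok, frame = cap.read()
if ok:
    # Normalize pixel values and resize the frame into the model's input blob.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(640, 640), swapRB=True)
    net.setInput(blob)
    detections = net.forward()  # raw outputs; decoding is model-specific
    print(detections.shape)
cap.release()
```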

Public sources highlight the use of AI software on drones and uncrewed aerial vehicles to analyze data from onboard cameras and process that information to identify, extract, and classify features. Such AI software may be installed on embedded processing devices such as general-purpose graphics processing units, central processing units, and systems-on-a-chip. However, onboard AI processing can consume substantial resources and may pose challenges for drones constrained in size, weight, and power.
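
A back-of-the-envelope budget shows why this matters. Every number below is an assumption chosen for illustration, not a measurement of any fielded system.

```python
# Rough size-weight-power check for onboard inference; all values assumed.
model_gflops_per_frame = 5.0       # assumed cost of one detector inference
target_fps = 30                    # assumed frame rate needed for tracking
efficiency_gflops_per_watt = 10.0  # assumed embedded-SoC efficiency

compute_needed = model_gflops_per_frame * target_fps          # GFLOP/s
power_needed_w = compute_needed / efficiency_gflops_per_watt  # watts

flight_power_w = 150.0  # assumed hover power draw for a small quadcopter
print(f"inference load: {compute_needed:.0f} GFLOP/s, "
      f"{power_needed_w:.0f} W ({power_needed_w / flight_power_w:.0%} of flight power)")
# -> inference load: 150 GFLOP/s, 15 W (10% of flight power)
```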

This last point is important given that while Russia’s Shturm drone is relatively large, the Ovod remains small. Claims about the power demands of onboard AI computing have to be taken into account when scrutinizing the Russian announcements above. Shrinking the computational and power requirements of existing drones is a problem the United States is actively trying to solve as well. But those efforts are being tested in carefully designed experiments rather than on the battlefield, and they do not include any kinetic capability whatsoever.

Another contrast is Russia’s repeatedly demonstrated focus on minimizing drone operator involvement in combat, and possibly taking the first-person-view operator out of the equation altogether with the help of AI, given that each side in this war now hunts drone operators as priority targets. Of course, it’s also important to consider that not all of the Russian AI-enabled plans for first-person-view drones described here will end up on the battlefield in their present form; current testing and evaluation will probably lead to changes or even complete redesigns. At the same time, there are recent rumors of “AI-enabled” Russian first-person-view drones on the Ukrainian battlefield, although such claims are hard to prove at this point. Still, with key software and components readily available through any number of legal or illegal procurement schemes, Russia’s private and volunteer-sector plans for AI-enabled drones may not be that far-fetched.

Although U.S. Department of Defense Directive 3000.09 does not explicitly prohibit AI-enabled weapon systems without a human in the loop, there is no evidence that America is pursuing such capabilities. Project Maven uses commercial AI and drone technologies, but only for inference, and that inference takes place on the ground rather than on the platform. The Department of Defense Chief Digital and AI Office’s Smart Sensor eliminates the need for ground processing, effectively performing the functions that link collection to the dissemination of information to intelligence consumers on the Reaper drone platform, in preparation for degraded communications with ground control stations and operators. Still, the vision is simply to augment humans in their mundane tasks and shift them to supervisory duties requiring less command and control, rather than to replace them altogether.

American Caution

The United States has largely embraced a long-term view of AI’s role in national security and is tackling its processes first, starting with acquisitions. The conventional acquisition system, marked by slow, linear processes and preoccupied with fairness and safety above all, doesn’t align with the dynamic demands of AI-enabled systems. The Chief Digital and AI Office has continued its Tradewinds acquisition program, primarily aimed at empowering small businesses to compete with traditional defense companies.

Additional process focus has been on addressing data integration challenges rather than on delivering battlefield-ready capability. While combined joint all-domain command and control remains the Chief Digital and AI Office’s priority, its focus is still on digitizing battle management rather than adopting commercial drone technologies in military operations. Integrating AI with legacy technologies and ensuring interoperability across different service branches and mission sets, all while ensuring the basic functionality and survivability of new technology, remains a tremendous challenge requiring frequent iteration and experimentation.

Still, the biggest limitation on the Department of Defense’s capacity to arm the warfighter with AI-enabled systems remains its inability to assure these systems to the satisfaction of its many stakeholders. Independent government testing of contractor technology has proven necessary on many occasions, but today a lack of infrastructure, methodologies, resources, and personnel threatens this important function. Unlike Russia’s apparently swift and ruthless approach to acquiring and fielding commercial drones, the United States has opted for a deliberate and careful strategy that might prove insufficient in times of conflict. When it comes to traditional defense technology, the United States has long had a set of standards and regulatory bodies in place to ensure that the military fields systems that are an asset, not a liability, to the warfighter. However, the lack of equivalent rigorous test, evaluation, and assurance processes for AI-enabled systems might force a choice between fielding unassured systems or not fielding AI-enabled systems at all.

In the United States, an autonomous or semiautonomous weapon system similar to the ones being fielded in Russia today would have to go through pre-development and pre-fielding reviews under Department of Defense Directive 3000.09, which governs autonomous and semiautonomous armed platforms. Those acquiring such a system would have to demonstrate to a review board that they have minimized the probability and consequences of failures that could lead to an unintended engagement. That requires rigorous, resource-intensive, and repeated test and evaluation of the technology, including operational and live-fire testing.

America’s steadfast commitment to safety and security assumes that the United States has three to five years to build that infrastructure and to test and redesign AI-enabled systems. Should the need for these systems arise sooner, which seems increasingly likely, the strategy will need to be adjusted. Balancing assurance considerations against the urgency of deploying AI-enabled systems to the warfighter poses a challenge. An even bigger challenge is optimizing readiness in all domains while balancing investments across traditional and emerging technologies. Still, as the wolves begin to circle, the only way to uphold both competitiveness in AI and democratic values is unwavering advocacy, leadership, sponsorship, and investment in all aspects of AI assurance, as highlighted in a recent report by the National Academies of Sciences, Engineering, and Medicine.

Conclusion

The war in Ukraine is pushing innovation on both sides to the limit, forcing the adversaries to adapt and adopt the latest military and civilian technologies for combat. While Ukraine had the initial lead in such innovation in 2022, by late 2023 and early 2024 the Russian military and volunteer efforts had caught up, adopting many similar technologies and concepts while building on them to fit their needs. The volunteer communities in Russia are especially attuned to the latest technical developments, given that many volunteers come from academia and the high-tech sector and often contribute to drone research, development, and assembly after their full-time jobs. Drone use in the Russo-Ukrainian war is only going to grow, with first-person-view and small quadcopter drones appearing on the battlefield in ever-increasing numbers. Enabling their operations with AI is the next logical technological step, already undertaken by both belligerents.

“Homegrown” AI use is likely to accelerate and might make drone warfare even deadlier. Meanwhile, the U.S. Department of Defense is much more focused on augmenting human warfighters than on replacing them. It is taking a measured and thoughtful approach, navigating the intricacies of maneuvering a large existing bureaucracy and infrastructure. When there is ample time to refine such a system, strategizing for the long term can yield favorable results. But the pressing question remains: what would we be willing to do if our preparation falls short as the clock ticks down?

The rapid integration of unvetted commercial technologies into military operations, as observed in the approaches adopted by Russia and Ukraine, might not align with the rigorous standards upheld by the United States. In light of this, Washington should think critically about the alternatives. At the current rate, the United States may well face a pair of choices, both of which could be unacceptable: taking unknown or unquantified risks by fielding AI-enabled systems, or entering a conflict with mostly traditional military systems while adversaries pursue the first option. The only way to avoid this dilemma is to urgently channel leadership, resources, infrastructure, and personnel toward assuring these technologies. With many in Department of Defense leadership now keenly aware of the magnitude of the AI assurance issue, Pentagon funding allocations should reflect a commitment to securing a technological edge while remaining true to America’s democratic values.

Samuel Bendett is an adjunct senior fellow with the Center for a New American Security’s Technology and National Security Program.

Dr. Jane Pinelis is a chief AI engineer with The Johns Hopkins University Applied Physics Laboratory.

Image: Russian Ministry of Defense

War on the Rocks
