Mastering Human-Machine Warfighting Teams

Many of America’s top military commanders predict that mastering teaming between humans and increasingly capable AI algorithms and autonomous machines will provide an essential advantage to the warfighters of the future. The chief of staff of the Air Force has stated that “the military that masters human-machine teaming is going to have a critical advantage going forward in warfare.” Similarly, the commanding general of Army Futures Command believes that the integration of human beings and machines will result in a dramatic evolution — and possibly a revolution — in military operations.

Most of the discussion about human-machine teams has focused on using machines to replace humans in combat. Future Army doctrine hopes to avoid trading “blood for first contact” by using autonomous machines for dangerous reconnaissance missions or breaching operations. Wargamers exploring the use of the Collaborative Combat Aircraft often employed them as decoys, jammers, or active emitters, or assigned them other missions that risked their loss in highly contested environments. Similarly, Navy aspirations for unmanned ships and aircraft often involve risky activities such as delivering supplies in contested environments or conducting mine countermeasure operations. These concepts seek to remove humans from the most hazardous parts of the battlefield by using fearless and tireless machines in their place.

While reducing the risk American servicemembers face in combat is always a worthy objective, simply performing the same tasks with robots instead of humans would not result in a revolutionary change to future warfare. Instead, if military leaders hope to achieve dramatic improvements on the battlefield, human-machine teams will need to learn how to effectively leverage the complementary skillsets of their members.

To accomplish this, the military’s approach to human-machine teaming should change in three ways. First, efforts to train the human component of human-machine teams should focus on the instinctive brain instead of the reasoning brain. Attempts to have AI algorithms explain how they reason result in ineffective human-machine teams. Instead, leveraging people’s innate ability to unconsciously identify patterns in behavior seems to yield superior results. Second, the military should ensure AI developers do not simply pick the lowest-hanging fruit to improve the accuracy of their models. Instead, they should develop products that have complementary — not duplicative — skillsets within a human-machine team. Finally, the military should avoid being overwhelmed by AI hype. For all the breathtaking advancements AI researchers have achieved, war is fundamentally a human activity, suffused with immense tacit knowledge that only humans hold. Humans remain the most important part of the human-machine team.

The Need for Teams

Because the mechanics of machine intelligence differ greatly from the underpinnings of biological intelligence, humans and machines bring different strengths and weaknesses to a combined human-machine team. When these differences are optimally combined, human-machine teams become more than the sum of their parts, outperforming either humans or machines alone at their assigned tasks.

Unfortunately, human instincts about how to interact with AI and autonomous machines on combined teams often lead people astray. These misalignments result in human-machine teams that perform worse on a task than an AI algorithm acting without human involvement — teams that are less than the sum of their parts. If ineffective techniques for collaboration result in human-machine teams that similarly underperform on military tasks, it could present the Defense Department with a dilemma. The department’s leadership would have to choose between allowing AIs to act without human oversight and ceding combat advantage to adversaries that do not share the same moral reservations about the technology. China’s recent refusal to sign a joint declaration at the 2024 summit on Responsible Artificial Intelligence in the Military Domain calling for humans to maintain control over military AI applications vividly illustrates the risks this dilemma poses for the U.S. military. Consequently, overcoming these challenges and teaching humans how to leverage the complementary skillsets within human-machine teams could prove essential to ensuring that human operators can effectively choose and control outcomes when employing AI-enhanced tools, and thus to ensuring that AI is used ethically and responsibly in future military conflicts.

Understanding the divergent strengths of human and machine intelligence provides the foundation for successfully integrating humans with intelligent machines. Machines often outperform humans on tasks that require the ability to analyze and remember massive quantities of data, repetitive tasks that require a high degree of accuracy, or tasks that benefit from superhuman response rates. For example, AIs optimized for playing computer strategy games dominate their human opponents by coordinating the activities of thousands of widely dispersed units to achieve a singular strategic purpose. In these games, AI can “march divided, fight united” on a truly massive scale — beyond the ability of any single human brain to comprehend or counter.

In contrast, humans often hold an advantage over machine intelligence for tasks that require tacit knowledge and context, or where human senses and reasoning still retain superiority over sensors and algorithms. For example, an AI may be able to analyze imagery to locate a battalion of enemy vehicles, but it will not understand why those vehicles have been positioned at that location or what mission their commander has most likely tasked them to accomplish. Grand strategy will be an even greater mystery to a machine — today’s AI algorithms can calculate that an adversary can be defeated, but they will never understand which potential adversaries should be defeated and why. Warfare is an intrinsically human activity — consequently, the conduct of warfare is filled with tacit human knowledge and context that no data set will ever fully capture.

Current Efforts

Many defense research initiatives investigating how to form effective human-machine teams have focused on understanding and enhancing human trust in machine intelligence by developing AI algorithms that can explain the reasoning behind their outputs. As the Defense Advanced Research Projects Agency’s XAI program explains, “Advances in machine learning … promise to produce AI systems that perceive, learn, decide, and act on their own. However, they will be unable to explain their decisions and actions to human users. This lack is especially important to the Department of Defense, whose challenges require developing more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage these artificially intelligent partners.” The life-or-death stakes of many military applications of AI would seem to reinforce the requirement that servicemembers understand and trust the reasoning behind any action an AI application takes.

Unfortunately, experimental studies have repeatedly demonstrated that adding explanations to AI increases the likelihood that humans will defer to the AI’s “judgement” without improving the accuracy of the team. Two factors seem to underpin this result. First, humans typically believe that other humans tell the truth by default — if they do not detect indications of deception, they will tend to trust that their teammate is providing correct information. Because AIs never exhibit typical human indicators of deception, when an AI that has proved reliable in the past explains how it arrived at its answer, most humans unconsciously assume that it is safe to accept that result or recommendation. Second, AI explanations only provide the human with information about how the AI arrived at its decision — they do not provide any information about how one should arrive at the correct answer. If the human does not know how to determine the correct answer, the primary effect of reading the AI’s explanation will be to increase their belief that the AI has rigorously applied itself to the problem. On the other hand, if the human already knows how to determine the correct answer, any explanation from the AI is unnecessary — the human will already know whether the answer is right or wrong.

A Better Way

Instead of relying on explainable AI to deliver effective human-machine teams, the Defense Department should consider two alternative approaches. One promising approach focuses on helping humans develop effective mental models to guide their interactions with their machine counterparts. Effective mental models often play a similar role in human teams — when you have worked with a teammate for a long time, you develop a strong understanding of their strengths and weaknesses and instinctually understand how to collaborate with them. Repeated interactions with machine intelligences under realistic conditions can similarly develop effective human-machine teams. Integrating prototype AIs into military exercises and training (with safety protocols such as minimum safe distances between dismounted humans and robotic vehicles or limitations on the complexity of maneuvers allowed for AI-controlled equipment) could help the human element of human-machine teams learn how to work with their machine “teammates.” Postponing this learning until AI tools show greater maturity risks falling behind potential enemies with greater real-world experience, such as Russia, and forcing American soldiers to catch up while under enemy fire.

Additionally, when the Defense Department intends for an AI model to assist humans rather than replace them, it needs to ensure that these AIs have skillsets that complement those of their human teammates. Sometimes, the easiest tasks to teach an AI to accomplish are tasks that humans already perform well. For example, if an AI model is designed to identify improvised explosive devices, the simplest approach will be to train it to identify images of such devices that have not been well camouflaged. However, the greatest value for a human-machine team might come from teaching the model to identify improvised explosive devices that are only detectable through complicated analysis of multiple sensor types. Even if this second AI model detects a much smaller percentage of devices in a test set than a model optimized to identify the easiest cases, it will be more useful to the team if all of the devices it detects would go undetected by humans. The Defense Department should ensure that metrics used to judge AI models measure skills needed by the combined human-machine team and do not merely judge the performance of the AI model in isolation.
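To make this concrete, the toy calculation below is a minimal sketch using entirely hypothetical detection rates, not drawn from any real program or dataset. It compares two notional improvised-explosive-device detection models by the share of threats the combined human-machine team finds, rather than by each model’s standalone accuracy.

```python
# Minimal sketch with hypothetical numbers: a model that detects fewer threats
# overall can still add more value to a human-machine team if its detections
# do not duplicate what human observers already catch.

def team_coverage(human_rate: float, model_rate: float, overlap: float) -> float:
    """Fraction of threats found by either the humans or the model.

    overlap: share of the model's detections that humans would have found anyway.
    """
    unique_model_contribution = model_rate * (1.0 - overlap)
    return min(1.0, human_rate + unique_model_contribution)

# Model A: higher standalone detection rate, but it mostly flags poorly
# camouflaged devices that trained humans already spot (hypothetical figures).
model_a = team_coverage(human_rate=0.70, model_rate=0.60, overlap=0.90)

# Model B: lower standalone detection rate, but its multi-sensor detections
# are almost all invisible to humans (hypothetical figures).
model_b = team_coverage(human_rate=0.70, model_rate=0.20, overlap=0.05)

print(f"Team coverage with Model A: {model_a:.0%}")  # 76%
print(f"Team coverage with Model B: {model_b:.0%}")  # 89%
```

Under these assumed numbers, the model with the lower standalone detection rate raises the team’s overall coverage more, which is precisely the complementary value that an accuracy metric judging the model in isolation would miss.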

Finally, the Defense Department should ensure that humans remain the dominant partner in any human-machine team. The strength of human-machine teams derives from their ability to leverage the complementary skills of their members to achieve performance superior to either humans or machines alone. In this partnership, humans will retain the dominant role because the knowledge and context they bring to the team add the greatest value. War is an inescapably human endeavor. An AI algorithm can learn how to optimally achieve an objective, but only humans will understand which objectives are the most important to achieve and why those objectives matter.

Only humans understand why we make war — thus, humans will remain the most important part of any human-machine team in warfare.

James Ryseff is a senior technical policy analyst at RAND, a nonprofit, nonpartisan research institution.

Image: Tech. Sgt. Jordan Thompson
