
Is There a Human in the Machine? AI and Future Warfare

Scottish philosopher David Hume observed that “there is a universal tendency among mankind to conceive all beings like themselves … we find faces in the moon, armies in the clouds.” Humans have always had a fondness for anthropomorphism, the tendency to attribute human-like traits to non-humans. Psychologists, philosophers, and anthropologists have treated anthropomorphism as an evolutionary and cognitive adaptation, particularly in accounts of theistic religion. Scholars speculate that early hominids interpreted ambiguous shapes and objects, such as clouds and rocks, as faces or bodies because doing so improved their chances of surviving predators and other threats (a tendency related to “animism”). It is far better for a hunter to mistake a boulder for a bear than to mistake a bear for a boulder. This penchant for anthropomorphism has significant implications for military AI.

AI technology is already being infused into military machines. Autonomous weapons that can attack without human intervention have already supported human fighter pilots in multiple scenarios, including refueling in mid-air, escorting heavy bombers, and acting as decoys to absorb enemy fire. Many AI-enabled machines are designed to look and act like humans. For hybrid human–machine teams to work together effectively, they will need the same qualities that teams of human soldiers rely on: trust, acceptance, tolerance, and social connection. While the limitations of AI technology in human–machine interactions are well understood, the impact of our natural tendency to anthropomorphize AI agents on the psychological and motivational aspects of hybrid military operations has received much less attention. How does anthropomorphizing AI influence human–machine operations? And what are the potential consequences of this phenomenon?

Anthropomorphism in AI by Design

The way human users perceive their interactions with AI systems is influenced, in part, by deliberate choices made by designers. From depictions of Alan Turing’s early computational machines to today’s notorious large language model chatbots — such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini — researchers often ascribe human-like traits, concepts (e.g., “understand,” “learn,” and “intelligence”), and expertise to AI systems to highlight the similarities between humans and AI algorithms. Designers have created machines (e.g., robots, digital assistants, avatars, and social bots) with human-like features that elicit the familiar psychological attitudes humans exhibit toward other humans, including trust, reliability, and a sense of control. However, the tendency of popular culture and media coverage to emphasize the human-like qualities (e.g., emotional, cognitive, sentient, conscious, and ethical) of AI and robots inadvertently spreads false notions about what AI can and cannot do. Critics argue that these conceptualizations mislead system operators into believing that AI understands the world as humans do: through intuition, perception, introspection, memory, reason, and testimony.

The success of AI weapons systems has reinforced the view that anthropomorphism is an essential trait for current AI-powered human–machine interactions. The high-profile successes of some autonomous weapon systems have also fed the public and scientific belief that the development of AI depends on emulating the human brain and human psychology. In reality, designing effective anthropomorphic AI systems is easier said than done. The sheer complexity of human interaction patterns means that human–machine interaction is not only a matter of replicating psychology but also of accounting for cultural and social dimensions. And even if human–machine interfaces were purely cognitive, the degree of neurodiversity in human cognition is genuinely striking.

Military Human–Machine Interactions in Tactical Hybrid Teaming

AI augmentation could support a range of capabilities across many operations: unmanned underwater, ground, and aerial vehicles; unmanned quadruped ground vehicles; interactive embodied robots; and digital assistants and avatars that support command decision-making, including face and voice recognition used to interpret enemy intentions and anticipate enemy behavior.

Boeing and the U.S. Air Force are collaborating on a “Loyal Wingman” project to develop supersonic autonomous combat drones capable of flying in formation with fifth-generation F-35 fighters. These AI agents will navigate and manipulate their environment, selecting optimal task-resolution strategies and defending the jets in autonomous joint attack missions. Unsupervised, pre-trained deep-learning networks (deep learning is a sub-field of machine learning) have recently been tested on autonomous vehicles to cope with real-world nonlinear problems. However, these new machine-learning approaches are not yet trustworthy in safety-critical nonlinear environments. Studies like these demonstrate that hybrid teaming works best when the behaviors and intentions of AI are accurately perceived, anticipated, and communicated to human pilots. Anthropomorphic cues and terms can help achieve this objective. For instance, social robotics studies show that a critical precondition for successful human–machine interaction is how humans perceive non-human agents’ expertise, emotional engagement, and perceptual responses. Recent studies also show that anthropomorphic digital assistants and avatars appear more intelligent and credible to humans than non-anthropomorphic ones.

Anthropomorphism also shapes how AI technology (e.g., chatbots, deep-fake technology, and AI-augmented adversarial attacks) can magnify deception tactics and information manipulation. For example, in asymmetric offensive operations using new-generation, AI-enhanced aerial combat drones such as the Loyal Wingman aircraft, AI systems might be trained — or, eventually, autonomously “learn” — to suppress or exploit specific anthropomorphic cues and traits to generate false flags or support other disinformation operations. In this way, AI anthropomorphism could offer militaries significant and novel tactical advantages, especially in situations of asymmetric information. Interpreting the mental state of a human combatant in close physical contact (through gestures and facial expressions) is generally easier than interpreting the intentions of drones, digital assistants, and other systems that lack such cues. Understanding anthropomorphism’s determinants and drivers can therefore shed light on the conditions under which these effects will be most impactful. Ultimately, the design of AI agents for hybrid teaming must account for both the positive and the potentially negative psychological implications of anthropomorphism.

The Consequences of Military AI Anthropomorphism

In war, perceiving an AI agent as having human-like qualities has significant ethical, moral, and normative implications for both the perceiver and the AI agent. Attributing human characteristics to AI, explicitly or implicitly, can expose soldiers in hybrid teams to considerable physical and psychological risks.

Ethical and Moral

When individuals attribute human-like qualities to an AI agent in the military, many positive and negative ethical, moral, and normative consequences unfold for both the human perceiver and the AI entity in question.

Anthropomorphic terms like “ethical,” “intelligent,” and “responsible” applied to machines can feed false tropes implying that inanimate AI agents are capable of moral reasoning, compassion, empathy, and mercy — and thus might act more ethically and humanely in warfare than humans. This expectation may induce a shift from viewing AI technology as a tool that supports military operations to viewing it as a source of moral authority. In its drone report, the European Remotely Piloted Aviation Systems Steering Group stated that “citizens will expect drones to have an ethical behavior comparable to the human one, respecting some commonly accepted rules.” In reality, AI ethical reasoning in war would look very different from human-centric notions of ethics. Philosophically and semantically, it is easy to confuse machines behaving ethically (functional ethics) with machines being used ethically by humans in operational contexts (operational ethics).

In a recent report on the role of autonomous weapons, the U.S. Defense Department’s Defense Science Board alluded to this problem, concluding that “treating unmanned systems as if they had sufficient independent agency to reason about morality distracts from designing appropriate rules of engagement and ensuring operational morality.” Using anthropomorphic language to conflate human ethics and reasoning with a machine’s inductive, statistical reasoning — on the false premise that the two are similar — risks abdicating control over our ethical decision-making to machines. In short, granting AI systems anthropocentric agency is not ethically or morally neutral. Instead, it presents a critical barrier to conceptualizing the many challenges AI poses as an emerging technology.

Trust and Responsibility

If military personnel perceive AI agents as more capable and intelligent than they are (automation bias), they may become more predisposed to “social loafing,” or complacency, in tasks that require human–machine collaboration, such as target acquisition, intelligence gathering, or battlefield situational-awareness assessments. For example, drivers with anti-lock brakes were found to drive faster and closer to the vehicles ahead of them than those without. The risk of unintended consequences is even higher with AI agents because of the lack of knowledge about how AI systems make decisions, an issue known as the “black box” problem. As discussed below, some of these risks could be mitigated and controlled through appropriate monitoring, design, and training.

People tend to mistakenly infer an inherent connection between human traits and machines when machine performance matches or surpasses that of humans. Moreover, people are likelier to feel less responsible for the success or failure of tasks that use human-like interfaces, treating AI agents as scapegoats when the technology malfunctions. Conversely, if the decisions and actions of AI agents during combat appear “human-like,” this may even increase the perceived responsibility of the humans who designed the algorithms or collaborated with AI agents in hybrid teaming. Paradoxically, advances in autonomy and machine intelligence may require greater, rather than lesser, contributions from the human operator to cope with the inevitable contingencies that fall outside an algorithm’s training parameters.

The Dehumanization of War

The psychological mechanisms that lead people to attribute human-like qualities to machines can also help explain when and why people do the opposite. If anthropomorphized AI-enabled weapons systems draw human warfighters physically and psychologically further away from the battlefield, soldiers risk becoming conditioned to view the enemy as inanimate objects: neither base nor evil, but things devoid of inherent worth. Although the “emotional disengagement” associated with a dehumanized enemy is considered conducive to combat efficiency and tactical decision-making, reduced levels of interaction also reduce the desire to understand, form social connections with, or empathize with others, deepening that dehumanization.

Treating AI systems as trustworthy agents can also cause users to form inappropriate attachments to them. This idea is supported by recent research on social chatbot use during the pandemic, which found that human users accepted greater contact in human–machine teaming when faced with a threat or a stressful situation such as loneliness, anxiety, or fear, and that this propensity strengthened their emotional bond with the chatbot. While social chatbots are merely digital, predictive tools, they have been designed with names and even personalities, leading people to treat them as if they were conscious.

As a corollary, soldiers in anthropomorphized hybrid teaming might (1) view their AI “teammates” as deserving of more protection and care than their human adversary, and/or (2) become intoxicated by their power over an adversary and thus more predisposed to dehumanize the enemy, justifying past wrongdoings and excessive, potentially immoral acts of aggression.

Conclusion: Managing Future Human–Machine Teaming

The AI and defense research community, its users, and the broader socio-technical ecosystem need to acknowledge and understand AI anthropomorphism and its impact on human–machine interactions in military hybrid collaboration if they are to realistically anticipate the opportunities, challenges, and risks associated with hybrid tactical teamwork. Deploying highly autonomous AI agents entails a series of socio-technical and psychological challenges. Human warfighters must understand how AI algorithms are designed to function; the limitations and biases in human and machine perception, cognition, and judgment; and the risks associated with delegating decision-making to machines.

Policymakers, designers, and users should consider several measures to maximize the advantages and minimize the risks of future human–machine interfaces. First, AI-driven systems should be designed to monitor biases, errors, adversarial behavior, and anthropomorphism-related risks. Policymakers should also incorporate “human” ethical norms into AI systems while retaining the role of humans as moral agents by keeping humans in the loop as fail-safes. Second, human-operator training that emphasizes “meaningful human control” could foster a culture of collective vigilance against automation bias and complacency in hybrid teaming. Third, militaries should educate combatants and support staff about the possible benefits and risks of anthropomorphizing AI agents. Fourth, human–machine interfaces must be regulated to counteract dehumanization, groupthink, and other risks related to diffused moral responsibility. Specifically, policy, safety, legal, and ethical issues should be examined before the technology is deployed, and professional military education should include training in these areas, particularly in how to respond to the actual needs, practical realities, and legal and ethical considerations of human–machine interactions. Militaries should also closely coordinate force-structuring decisions with training exercises to maximize human–machine communication, especially when communications across the chain of command are restricted or compromised.

These efforts should optimize human–machine communication and establish appropriate levels of trust, acceptance, and tolerance in human–machine interactions. Ensuring human operators remain active across the entire decision-making continuum could improve users’ perception of an AI system. Even so, appropriately calibrating their trust and confidence in human–machine interactions to maximize successful teaming outcomes remains challenging.

James Johnson is a senior lecturer in strategic studies at the University of Aberdeen. He is also an honorary fellow at the University of Leicester. He has authored three volumes on AI and future war, including The AI Commander: Centaur Teaming, Command, and Ethical Dilemmas (OUP, 2024); AI and the Bomb: Nuclear Strategy and Risk in the Digital Age (OUP, 2023); and Artificial Intelligence and the Future of Warfare: The USA, China, and Strategic Stability (MUP, 2021).

Image: ChatGPT

War on the Rocks

