Too Much Tech Can Ruin Wargames

The U.S. defense community has developed a healthy obsession with innovation. AI, modeling and simulation, machine learning — these are the buzzwords driving conversations about the future of warfare. But when it comes to integrating these technologies into one of the military’s most valuable tools for strategic insight — the wargame — the defense community would do well to proceed with caution.

Over the past few years there has been increasing discussion within the wargaming community about the need to integrate AI, data-driven models, and computer simulations into military wargames. A series of articles, books, special issues of journals, conference panels, and presentations has been devoted to the topic. Advocates of more technology in wargames cite a variety of benefits, such as enhanced player experience, assistance in processing vast amounts of data, and greater analytic rigor. For instance, integrating generative AI can replicate adversary tactics, improve operational planning efforts, and create diverse scenarios. We agree that there are many benefits to integrating more technology into the wargame process, but we do not agree that more technology in wargames necessarily equates to greater analytic rigor.

There’s been a growing chorus of critics arguing that wargaming within the U.S. Navy and at the U.S. Naval War College is behind the times when it comes to adopting new technologies to strengthen the analytic rigor of our wargames. They say that our methodology hasn’t evolved, that it ignores modern data science, and that it resists incorporating powerful new technological tools. Our colleagues at the U.S. Naval Postgraduate School have also felt the pressure to “modernize,” which many equate with blanket “computerization” of wargaming processes such as the adjudication of game outcomes. This push for what is perceived as enhanced analytical rigor rests on the rationale that incorporating data-driven models and algorithmic adjudication will lead to more valid and informative wargame outcomes. But it is wrong to believe that greater integration of technology into wargames necessarily increases their analytic rigor. Although the aspiration to improve the validity of wargames is understandable, this approach is fundamentally problematic and does not deliver the value its proponents anticipate.

The practice of wargaming has long served as a cornerstone of military education and planning. Since its formal introduction into the U.S. Naval War College in 1887, it has provided a vital space for military leaders and strategists to simulate complex conflict scenarios, hone their analytical and decision-making skills, and explore strategic and operational concepts.

But wargaming is not a relic. It’s a research method with a specific purpose: to explore human decision-making under stress, in competition, and with incomplete information. That doesn’t mean it’s perfect or that it can’t evolve. But layering in AI and simulation tools without a clear understanding of the purpose of wargames risks distorting the method — and, worse, producing bad analysis with a false sense of precision. Given the wide-ranging influence of wargames in the U.S. Navy and broader defense community, one should be clear-eyed about the trade-offs and risks involved. Military wargamers should be cautious in how they integrate new technologies into a well-established research method.

What a Wargame Is (and Isn’t)

Wargames are often misunderstood. They’re not predictive models. They don’t produce precise measures of effectiveness or deliver statistically significant forecasts of conflict outcomes. Seeking quantitative rigor in a wargame represents a basic misunderstanding of what the method can produce.

Instead, they are structured exercises that simulate aspects of military conflict to generate qualitative insights about strategy, operations, risk, and decision-making. Francis McHugh defines a wargame as “a simulation of selected aspects of a conflict situation in accordance with predetermined rules, data, and procedures to provide decision-making experience or to provide decision-making information that is applicable to real-world situations.” They’re especially powerful when exploring so-called “wicked problems” — messy, complex challenges like contested logistics, multi-party deterrence, alignment of force structure to future threats, and other aspects of high-end, multi-domain warfare. The enduring value of wargaming, therefore, has historically resided in its capacity to immerse participants in a simulated environment where they grapple with the complexities of warfare, make critical decisions, and learn from the consequences, all within a framework that acknowledges the vital role of human agency and the inherent unpredictability of conflict.

In a typical game, players develop courses of action, make operational decisions, and write orders. An opposing team does the same. An adjudication team — often made up of subject-matter experts — assesses the outcome using best-available data and professional military and scientific judgment. They debate and discuss the combat engagements, using stochastic processes to assist (yes, even including dice).
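To make that adjudication process concrete, here is a minimal sketch of how a single engagement might be resolved against an adjudicator-agreed probability. The function name, the example engagement, and the 0.35 probability are all hypothetical illustrations, not values from any actual game; the point is that the random draw plays the same role dice do on a game floor, while the probability itself remains a product of expert judgment.

```python
import random

def adjudicate(engagement: str, p_success: float, rng: random.Random) -> bool:
    """Resolve one engagement against an adjudicator-agreed probability.

    The probability comes from expert judgment and best-available data;
    the random draw only settles which side of that judgment this
    particular engagement falls on.
    """
    return rng.random() < p_success

# Seeded so an adjudication can be replayed and debated after the game.
rng = random.Random(7)
carrier_hit = adjudicate("missile salvo vs. carrier", p_success=0.35, rng=rng)
```

Note that the transparency lives in `p_success`: players and adjudicators can see, debate, and adjust it between moves, which is precisely what an opaque model makes harder.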

This method has limits. It’s not meant to answer every research question, nor does it deliver quantitative rigor in the way some analysts might prefer or require. But dismissing wargames because they aren’t “data-driven” enough misses the point. That’s like criticizing a historian for not using regression analysis. It’s the wrong tool for the job.

[Graphic: Comparison of Wargaming and Models & Simulation]

False Precision and the Tech Temptation

Integrating AI or models and simulation into wargames might sound like a natural evolution. Why not make the adjudicated outcomes more accurate? Why not use models to adjudicate combat engagements more efficiently?

Because in most cases, wargamers don’t need more precision to have better wargames.

When a human adjudicator makes a call on a combat engagement — say, whether a carrier survives a missile salvo — their reasoning is transparent, debatable, and adjustable. When a model makes the same call, human beings often accept the result without question — so-called “automation bias,” mistaking precision for analytical usefulness. AI and modeling tools can create a veneer of scientific authority, especially when they output detailed charts, probabilistic kill chains, or dynamic visualizations. But these tools are only as good as the data and assumptions they’re built on — and when it comes to future warfare, that data is incomplete and speculative.

Modeling an operational-level conflict involving thousands of interacting platforms across land, air, sea, cyber, and space requires assumptions on everything from adversary behavior to the physics of untested systems. This creates black boxes — systems that produce outputs that analysts can’t fully or easily explain, interrogate, or trust. In many cases, those assumptions are hidden deep in code.

The attempt to enhance accuracy through the inclusion of modeling and simulation inadvertently creates an illusion of precision that leads to overconfidence in wargame results. This overconfidence fosters a perception of accurate outcome prediction rather than a more nuanced understanding of the causal mechanisms and the influence of the decisions made by the participants. That’s a dangerous foundation for conceptualizing future conflict, particularly when those concepts inform policies, plans, and investments critical to future warfare.

And that’s not just an epistemological problem. It changes how players behave. If they’re reacting to an algorithm instead of a dynamic adversary or a team of experienced adjudicators, the game shifts from a strategic decision-making exercise to an attempt to “beat the machine.” This raises the risk of negative learning among military commanders participating in the game and of faulty conclusions by analysts developing the wargame report.

For example, agent-based models require programmed behaviors for all adversary platforms across virtually all possible interactions they may have with U.S. forces. Often, these behaviors rest on assumptions about how the adversary fights, and when those assumptions compound across thousands of platforms over multiple days of combat, the outcome may look scientific or predictive yet rest on shaky analytic foundations. This may give players a false sense of how the adversary will fight, leading to negative learning outcomes and potentially unjustified changes to war plans designed to beat a contrived model of adversary behavior.
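The arithmetic behind that compounding is worth making explicit. Even if each scripted behavior is individually plausible, the probability that a long chain of them is jointly right collapses quickly. The 95-percent figure below is an illustrative assumption, not a measured quantity:

```python
# If each scripted adversary behavior is right, say, 95% of the time,
# the chance that n independent behaviors are ALL right is 0.95**n.
# (Illustrative numbers only; independence is itself an assumption.)
def chance_all_correct(per_assumption: float, n_assumptions: int) -> float:
    return per_assumption ** n_assumptions

print(chance_all_correct(0.95, 10))   # ~0.60
print(chance_all_correct(0.95, 100))  # ~0.006
```

Ten stacked assumptions already leave the simulated campaign more likely wrong than right; a hundred make a "correct" run vanishingly improbable, even though every individual behavior looked reasonable in isolation.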

Although it is tempting to integrate new technologies into wargames, wargamers can’t let these technologies distort what matters most. Wargames should focus on operational art and adaptation, not turn into exercises in outguessing a model’s algorithm. The latter approach leads to brittle thinking, not better planning. If simulation outputs create unearned confidence in how events will unfold, rather than cultivating the capacity to visualize and understand the systemic complexity of the environment, wargamers risk generating a false sense of security that could have catastrophic consequences in reality.

This over-reliance on simulated outcomes can hinder the development of the critical thinking skills necessary to adapt to unforeseen challenges and the dynamic nature of actual warfare.

Models Can Help — Just Not Like That

This doesn’t mean wargamers should reject the integration of modeling, simulations, and AI into wargaming outright. There are smart ways to integrate these into the wargaming process to improve the value of wargames while also guarding against faulty conclusions and negative learning. Used carefully, technology can support, not replace, wargame adjudication and analysis.

For example, models can help pre-test tactical engagements to support faster adjudication during games. Wargamers know going into a game that certain combat engagements will be likely — submarines versus mines, ballistic missiles versus surface combatants, and so on. Running simulations of these engagements ahead of time can provide adjudicators with a range of probabilistic outcomes that they can then draw upon during the game. At the U.S. Naval War College, we have hosted several adjudication forums where experts from across the military and scientific communities meet to discuss best-available data and determine common combat engagement probabilities. These feed into our wargames as trusted and validated “look up” tables to assist adjudicators as wargames unfold. Models can contribute to this process.
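The pre-game workflow described above can be sketched in a few lines: run a model many times before the game, tabulate the outcome distribution, and hand adjudicators the resulting table. Everything in the toy engagement model below (the 0.4 hit probability, three shots, the outcome labels) is a made-up placeholder standing in for a validated, higher-fidelity simulation:

```python
import random
from collections import Counter

def simulate_engagement(rng: random.Random) -> str:
    """Toy stand-in for a higher-fidelity engagement model.

    A real pre-game run would use a validated simulation; the per-shot
    hit probability (0.4) and shot count (3) here are placeholders.
    """
    hits = sum(rng.random() < 0.4 for _ in range(3))
    return "kill" if hits >= 2 else "damage" if hits == 1 else "miss"

def build_lookup(trials: int = 10_000, seed: int = 1) -> dict:
    """Run the model many times and tabulate outcome frequencies."""
    rng = random.Random(seed)
    counts = Counter(simulate_engagement(rng) for _ in range(trials))
    return {outcome: n / trials for outcome, n in counts.items()}

# Adjudicators consult this distribution during the game instead of
# re-running the model under time pressure.
table = build_lookup()
```

The key design point is when the computing happens: before the game, where the assumptions can be examined at leisure, rather than inside an adjudication cell on a clock.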

Models and simulations can also support adjudicators in parsing complex interactions in the information environment — especially when multiple sensor systems are involved across space and cyber domains. The interactions of these systems are often harder for humans to rapidly discern in the midst of a wargame adjudication period, so models can assist in determining “who sees whom” in the battlespace. Often, the ways in which these various electronic sensors interact with one another are governed more by physics than anything else, and models and simulations are well suited to determine these interactions.
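A minimal sketch of the “who sees whom” check shows why this task suits a model: the governing constraints are geometric and physical. The function below uses the standard 4/3-earth radar-horizon rule of thumb (roughly 4.12 times the square root of height in meters, yielding kilometers); the specific antenna and target heights, and the pass/fail structure, are illustrative simplifications — a real model would add radar cross-section, clutter, and emission control.

```python
import math

def radar_horizon_km(antenna_height_m: float, target_height_m: float) -> float:
    """Approximate radar horizon via the 4/3-earth rule of thumb."""
    return 4.12 * (math.sqrt(antenna_height_m) + math.sqrt(target_height_m))

def can_detect(range_km: float, antenna_height_m: float,
               target_height_m: float, max_instrumented_km: float) -> bool:
    """A sensor sees a target only if it sits inside both the geometric
    horizon and the sensor's instrumented range. Real models layer on
    cross-section, clutter, and emission control; this is the skeleton."""
    return range_km <= min(
        radar_horizon_km(antenna_height_m, target_height_m),
        max_instrumented_km,
    )
```

Multiply this check across hundreds of sensors and platforms each turn and the case for machine assistance is obvious — and, unlike adversary intent, none of it requires guessing at human behavior.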

AI tools could also assist analysts after the game, sifting through notes, surveys, orders, and debriefs to identify patterns or extract key themes more efficiently. This is currently a manually intensive process for wargame analysts. AI assistance could help analysts turn around game reports and briefs to decision-makers more rapidly, ultimately increasing the velocity of the research cycle.
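Even a crude version of that post-game sifting shows the division of labor: the machine surfaces candidate themes, the analyst vets them. The stopword list, regex, and sample notes below are hypothetical; a real pipeline might use an LLM or topic model rather than raw word counts:

```python
import re
from collections import Counter

# Minimal stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "on", "we", "our"}

def top_themes(documents, k: int = 5):
    """Crude stand-in for AI-assisted theme extraction: count content
    words across game notes, surveys, and orders, then return the k
    most frequent. The analyst, not the tool, judges what matters."""
    words = (w for doc in documents
             for w in re.findall(r"[a-z']+", doc.lower())
             if w not in STOPWORDS)
    return Counter(words).most_common(k)

notes = ["Blue hesitated to commit the carrier.",
         "Carrier positioning dominated Blue's planning discussion."]
themes = top_themes(notes)  # "carrier" surfaces as the leading theme
```

The value is speed, not judgment: the tool compresses hours of manual collation so analysts can spend their time interpreting, which is the part that cannot be delegated.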

But these approaches to technological integration support human judgment — they don’t replace it.

Let the Method Lead

Wargaming doesn’t need to resist data modeling, computer simulation, and AI. But it should resist bad methodology. Moving forward, it is crucial to rebalance the equation in military analysis by advocating for a judicious and conceptually sound approach that leverages the native strengths of both wargaming and simulation.

The central value of a wargame lies in what people do when the unexpected happens. How do commanders respond when a carrier is sunk? How does an adversary react when a plan fails? Those are the insights that matter. The true power of wargames is not what happens in the adjudication cell, but rather resides in the dialogues that occur in the game cells. Those are the dynamics that shape strategy. And they can’t be automated.

Wargames have proven to be valuable tools for military planners throughout history, often only involving model ships on a game floor. Wargames during the inter-war period at the Naval War College were of the most rudimentary design, yet were highly impactful in their educational value for future naval commanders in the Pacific during World War II. Fleet Adm. Chester W. Nimitz famously concluded in a lecture at the U.S. Naval War College that “The war with Japan had been re-enacted in the game rooms here by so many people and in so many different ways that nothing that happened during the war was a surprise — absolutely nothing except the Kamikaze tactics toward the end of the war; we had not visualized those.” Future commanders had a decision-making edge over their adversary by the exposure they received in wargames to plausible dilemmas in the Pacific. The value came from thinking, not computing.

Military wargames should progress into the future while preserving the power of what a wargame is and what it does — and we can do that even without advanced technology. Pushing AI and modeling into games without a clear understanding of their limits is not modernization — it’s misapplication. Wargames work because they create spaces where smart people wrestle with hard problems in real time. That’s where insight lives. That’s what we should protect.

Jonathan Compton, Ph.D., is chair of the War Gaming Department at the U.S. Naval War College and was formerly a senior analyst and wargame subject-matter expert in the Office of the Secretary of Defense. He holds a Ph.D. in formal research methods and world politics.

Joseph Mroszczyk, Ph.D., is an assistant professor in the War Gaming Department at the U.S. Naval War College and serves as lead analyst on chief of naval operations-directed wargames. He also serves as an intelligence officer in the U.S. Navy Reserve. He holds a Ph.D. in political science from Northeastern University.

Matthew Tattar, Ph.D., is an associate professor in the War Gaming Department at the U.S. Naval War College and serves as lead analyst on chief of naval operations-directed wargames. He is also the author of Innovation and Adaptation in War (MIT Press, 2025) and serves as an officer in the U.S. Navy Reserve. He holds a Ph.D. in politics from Brandeis University.

The views expressed here are those of the authors alone and do not necessarily represent the official views, policies, or positions of the U.S. Department of Defense or its components, to include the Department of the Navy or the U.S. Naval War College.

Image: Staff Sgt. Jessica Avallone via DVIDS

War on the Rocks
