What We Mean When We Call Something an Intelligence Failure
When most people hear the words “intelligence failure,” they think of a surprise event that an intelligence service failed to predict.
But what if that’s all wrong?
Are the assumptions surrounding that term based on an inaccurate understanding of the capabilities of intelligence? Has the term evolved to include problems beyond the scope of intelligence community responsibilities? Is it premature to immediately label a surprise attack an intelligence failure?
To address these questions, I critically review what we mean by intelligence failure and how the term is used and perceived in the public sphere. Our country would be better off if this more skeptical perspective moved beyond intelligence organizations and the academy into the halls of Congress and newsrooms, where views are swayed by the narratives the intelligence failure moniker generates. A more informed understanding of the capabilities and influence of intelligence among these audiences can improve reform initiatives concerning the mission, structure, funding, and use of intelligence.
Defining the Term Intelligence Failure
Intelligence failure is one of the most researched topics in intelligence studies. To be sure, intelligence services have made and will continue to make errors. Failures can stem from mistakes in collection, analysis, and dissemination. In 1962, U.S. intelligence incorrectly assessed that the Soviet Union would not place missiles in Cuba. Before the Yom Kippur War in 1973, U.S. and Israeli intelligence agencies assessed that Arab armies would not attack despite holding intelligence showing military activity consistent with an attack. And in the lead-up to the 2003 Iraq War, U.S. and other Western intelligence services relied too heavily on the reporting of an unreliable Iraqi defector regarding Iraqi weapons of mass destruction capabilities.
But amid the plethora of intelligence failure case studies, no single, commonly agreed-upon definition of intelligence failure appears to exist. These case studies also reveal three attributes associated with intelligence failure that deserve critical review: a failure to predict, a policymaker's failure to act on intelligence, and failures by government organizations beyond the intelligence community.
Realistic Expectations of Intelligence
Intelligence studies scholar Mark Lowenthal suggests that the real intelligence failure is failing to adequately explain the role of intelligence and its limitations to the public. Many assumptions about the intelligence community are rooted in flawed expectations about intelligence capabilities. One of those expectations associated with intelligence failure is the ability to predict the date and time of a surprise event or a military attack. In the case of public pronouncements of intelligence failure, the definition of prediction must be understood from the perspective of the policymaker, journalist, and public.
The U.S. intelligence community explicitly states it does not engage in prediction. Prediction is not mentioned in the National Intelligence Strategy that defines the type of analysis provided to policymakers. And even though intelligence practitioners and seasoned policymakers would consider it a self-evident truth that intelligence cannot predict, the expectation for the intelligence community to predict events remains part of the political, journalistic, and public discourse. Immediately following the 9/11 attacks, Porter Goss (who would later become director of the CIA) said that “the job of the intelligence community is prediction,” and former general counsel of the CIA, Jeffrey Smith, agreed that “the CIA’s job is to predict.” However, the issue is not whether intelligence should predict an event on a specific date and time — the issue is that the world is fundamentally unpredictable.
The connotation of prediction in this sense is that intelligence should have known specifically the who, how, when, and where of an attack or surprise event. However, prediction is based on analyzing trends that follow a linear, progressive, and repeatable course. Human activity rarely behaves that way. Complexity science suggests that a complex system, like the international system, comprises many interacting components whose emergent global behavior cannot be predicted from the behavior of its individual parts. And retrospective coherence theory suggests that in an unordered information environment, emerging patterns can be perceived as they happen but cannot be predicted. Even casual observation shows that the current world structure behaves nonlinearly, has uncertain cause-and-effect dynamics, and does not repeat predictably. Two purported intelligence failures illustrate how an unforeseeable act can precipitate an unpredictable and chaotic chain of events.
The event that precipitated the fall of the Berlin Wall was an impromptu press conference on Nov. 9, 1989, where a novice East German bureaucrat made ambiguous statements about proposed border-crossing policies that many interpreted as indicating an immediate policy change. Within hours, crowds flocked to gated checkpoints along the wall, and East German guards began to allow people to pass unchecked. This unscripted and unplanned event triggered a cascade of subsequent actions that culminated in the fall of the Berlin Wall. Labeling this event an intelligence failure presumes that U.S. intelligence agencies could have predicted that a low-level official would utter words that, unbeknownst even to that official, would spark a public reaction resulting in the destruction of portions of the wall that same day.
On Aug. 15, 2021, Kabul fell to Taliban forces hours after Afghan President Ashraf Ghani fled the country. Within hours of the fall of Kabul, news outlets, cable news, and national security experts labeled it an intelligence failure. But what was the failure of intelligence? It could not be considered a surprise given the numerous warnings in preceding months that the Afghan government would most likely collapse in the immediate wake of a U.S. military withdrawal. Claims of a failure to predict cannot be substantiated given that there were no indications that the Afghan president would flee the country or that the Afghan army would collapse when it did. Indeed, even the Taliban was surprised by the rapid collapse of the Kabul government.
Stating that intelligence does not predict does not relieve it of the mission to provide warning of attacks or other surprise events. The 2019 U.S. National Intelligence Strategy states that anticipatory intelligence looks to the future as foresight (identifying emerging issues), forecasting (developing potential scenarios), or warning. The boundaries between these types of intelligence blur and overlap; they are not always distinct. For example, the literature on estimates and warning indicates that the value of intelligence warning depends on variables like the credibility of sources, the probability of judgments, the proximity of warning to the probable event, and individual policymaker decision-making styles. However, the distinction between prediction, as used with intelligence failure, and estimative anticipatory intelligence is stark.
The zero-sum nature of “failure to predict” commands the immediate narrative and becomes ingrained in the national discussion as the single cause of the surprise. It sets a tone that prejudices subsequent reviews. It forces subsequent discourse to explain why intelligence did not predict rather than thoughtfully examining how the entire national security enterprise performed. Eliminating the presumption of prediction in public discourse can help thwart erroneous assumptions about intelligence capabilities and blunt the inertia of a misleading narrative.
Failure to Act on Intelligence
The intelligence failure label has also been applied to instances where policymakers fail to “act” on intelligence. This includes a failure by decision-makers to act on intelligence appropriately and a failure to make sound policy based on intelligence. This is not about whether a surprise event was an intelligence failure or a policy failure. The issue is that when policymakers fail to “act,” it is also considered an intelligence failure.
For example, one author argued that the coronavirus pandemic was the worst intelligence failure in U.S. history. However, the same author noted that the intelligence community issued a steady drumbeat of warnings about a coronavirus outbreak far enough in advance to allow for better preparation. Despite these warnings, the author concluded that responsibility for the crisis rested overwhelmingly with the White House. The author also suggested that these alerts had little impact on senior administration officials, implying that there was likely nothing the intelligence community could have told the White House that would have made any difference.
Another article refers to the Oct. 7, 2023, Hamas attack as an intelligence failure, even though Israeli leaders were unwilling to heed the warnings of the predictive intelligence their intelligence system provided. Reporting also emerged that, months before the attack, Israeli military leaders dismissed as “fantasies” intelligence assessments indicating that Hamas military training activity pointed to a large-scale attack. Reports further revealed that, more than a year before the attack, Israeli leadership had obtained the Hamas battle plan for the Oct. 7 attack.
The term “acting on intelligence” is ill-defined and subjective. Policymakers can delay decisions because of contradictory policy advice, political considerations, a desire for more information, or personal uncertainty. They can also ignore the information because of cognitive bias or powerful institutional bias. These can be considered “pink flamingo” events, in which “known knowns” are ignored despite being as conspicuous as a bright and ugly bird. While intelligence is obligated to ensure the policymaker fully understands the analysis, efforts by intelligence analysts to persuade, convince, or “push harder” on the policymaker can come perilously close to, if not cross into, advocating for a specific policy. An uncomfortable truth is that policymakers are free to disagree with or completely disregard intelligence assessments. Given this, it is unclear how policymakers’ failure to “act” or to create “sound” policy can be considered a failure of intelligence.
Whole of Government Failures
Decades of case studies on intelligence failures show that intelligence is rarely the sole reason we are surprised or unprepared for an attack or other event. Instead, problems with foreign policy, defense policy, and other government activities often contribute to unpreparedness and the element of surprise. While this is commonly understood in intelligence and national security studies circles, the dominant intelligence failure moniker suppresses that knowledge among junior civilian and military policy staff, journalists, and the general public. This is not a matter of distinguishing between policy failures and intelligence failures — it highlights that shortcomings in government organizations beyond the intelligence community can also lead to a lack of anticipation and preparedness.
For example, the congressional review of the 1941 Pearl Harbor attack faulted the intelligence offices of the War and Navy Departments for failing to recognize the significance of intercepted diplomatic messages from Tokyo to Honolulu. It also cited supervisory and administrative shortcomings, as well as a lack of coordination between the two commands, as contributing factors to being surprised by the attack.
The 9/11 Commission concluded that the lack of imagination that airplanes could be used as weapons, while principally associated with the intelligence community, reflected a view shared across other elements of the U.S. government. The report also argued that the weak U.S. policy responses to previous al-Qaeda attacks may have signaled that such attacks were risk-free. The commission further concluded that the most severe weaknesses in agency capabilities were in the domestic arena, explicitly calling out the Federal Aviation Administration's failure to take aggressive, anticipatory security measures. The report also identified how policymakers set intelligence priorities and allocated resources as one of the more significant contributing factors.
After the Oct. 7 Hamas attack, credible reporting indicated that in 2021, the Israel Defense Forces considered the possibility of infiltration into Israeli communities or an invasion as negligible and directed the focus of intelligence away from Hamas personnel and toward the threat of rockets launched from Gaza. The Israeli military also reasoned that the border barrier between Israel and Gaza denied Hamas the possibility of invading Israel. In addition, Israeli political leadership was focused on the West Bank and had directed the transfer of Israeli units from the Gaza border to the West Bank. If accurate, these reports could indicate that political leadership and military preparedness contributed to systemic shortcomings similar to those seen in other purported intelligence failures.
Conclusion
The concept of intelligence failure continues to be the subject of a complex discussion that is connected to the equally complex concept of intelligence success. For example, the 1962 intelligence estimate that concluded the Soviet Union would not place missiles in Cuba also advised that surveillance be maintained in case the missiles were placed there. By conventional definition, this would be an intelligence failure because intelligence did not predict the placement of missiles, but also an intelligence success because it was U.S. intelligence that detected and identified the missiles in Cuba. Another intelligence estimate, in 1990, accurately assessed that a post-Warsaw Pact Yugoslavia would dissolve amid violent ethnic clashes. Still, that assessment had no apparent impact on U.S. policy, and U.S. policymakers seemed surprised when the violence began. By conventional definition, this would be an intelligence success because of an accurate “prediction” but also an intelligence failure because policymakers failed to act. These scenarios highlight the need for plain-language clarity in discussions and explanations of intelligence failure and success.
Over the past 75 years, the term intelligence failure has experienced a form of semantic drift known as broadening, where the meaning of a word becomes more inclusive than its original definition. Intelligence failure has become a binary catchphrase that misrepresents the capabilities of intelligence and the complex relationship between intelligence analysis, warning, and policymaker decision-making. Impulsive claims of intelligence failure immediately after a surprise event provide cognitive closure and instantaneous, focused causal attribution. They offer “intelligence screwed up” as a simple explanation that “appeals to the yearnings of the general public then colors discussion and debate among even the more sophisticated.”
Accepting that intelligence services cannot predict every future event can leave policymakers and the public feeling naked and vulnerable. In her seminal case study on the Pearl Harbor attack, Roberta Wohlstetter concluded that it is only human to want unique and univocal guarantees from intelligence. But in a cautionary note, she added, “if the study of Pearl Harbor has anything to offer for the future, it is this: We have to accept the fact of uncertainty and learn to live with it.” Ambiguity, uncertainty, and surprise are characteristics of the world. Labeling surprise events as simply an intelligence failure cannot alter this reality.
Gary Gomez is a research fellow with the foreign policy think tank fp21, focusing on the intelligence-policy relationship. His research in the field, 22 years as an intelligence consumer, and 20 years of intelligence community experience have resulted in unique perspectives on how non-intelligence professionals perceive and utilize intelligence.
Image: Midjourney