Researchers sound alarm on dual-use AI for defense

A growing number of Silicon Valley AI companies want to do business with the military, making the case that the time and effort they’ve put into training large neural networks on vast amounts of data could provide the military new capabilities. But a study out today from a group of prominent AI scholars argues that dual-use AI tools would increase the odds of innocent civilians becoming targets due to bad data—and that the tools could easily be gamed by adversaries. 

In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI’s potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.

Some companies pitching foundation model-based AI to the military are “proposing commercial models that already have been pre-trained on a set of commercial data. They are not talking about military-exclusive commercial models that have been purely trained on military data,” Khlaaf told Defense One. 

The problem lies in how these foundation models are trained: on large amounts of publicly available data, often including personal data that has made its way to the web via data brokers or regular websites and services. “Data is the fundamental issue here. It causes a huge amount of concerns and vulnerabilities, and it’s not being accounted for when we talk about things like traceability broadly,” said Whittaker, who currently serves as president of the Signal Foundation.

These neural nets don’t reason the way a human does. Other research has shown they are essentially doing pattern matching on a huge scale: finding combinations of words to produce statistically sound extrapolations of what might come next, or taking data points on millions of individuals and drawing correlations between them (as opposed to causal connections). Foundation models perform well in scenarios where the cost of a false positive is low, such as brainstorming points for a research paper or assembling a list of people to market a product to.
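
To make that “pattern matching at scale” idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word followed which in an invented corpus and then “predicts” the most frequent continuation. The corpus and function names are illustrative assumptions; real foundation models do this over billions of parameters and web-scale data, but the underlying mechanism is statistical extrapolation rather than reasoning.

```python
# Minimal illustrative bigram "language model": next-word prediction from
# co-occurrence counts alone. The corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most frequent continuation in the corpus
print(predict_next("cat"))  # "sat": chosen by raw counts, not by understanding
```

Scaled up by many orders of magnitude, with learned weights in place of raw counts, this is the kind of statistical machinery the researchers are describing.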

Applied to military contexts—specifically the task of surveilling a population for potential targets—the cost of a false positive could be an innocent life. 

Some militaries are already employing pattern-extrapolating foundation models to assemble target lists; Israel, for example, has done so through its Lavender and Where’s Daddy programs. That’s contributing to the normalization of these tools for targeting in warfare, the authors say.

A report from Israel-based publication +972 found that the error rate for Lavender alone was near 10 percent.
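
A single-digit error rate can still translate into large absolute numbers once a system is applied across an entire flagged population. The back-of-the-envelope arithmetic below uses purely hypothetical figures, not numbers from the +972 report, to show why.

```python
# Hypothetical arithmetic only: how a roughly 10 percent error rate scales
# with the size of a machine-generated target list. The list size is an
# invented round number, not a figure from the +972 report.
error_rate = 0.10
people_flagged = 30_000

expected_misidentifications = error_rate * people_flagged
print(f"Expected misidentifications: {expected_misidentifications:,.0f}")
# -> Expected misidentifications: 3,000
```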

Speaking of Israel’s use of AI in its Gaza operations (and beyond), Khlaaf said: “We’re seeing… data being pulled from WhatsApp, metadata, right? We’re seeing data pulled from Google Photos and other sources, of course, that we’re unaware of and haven’t really been covered. So the data is inaccurate and [the models] sort of attempt to find patterns that may not exist, like being in a WhatsApp group with a…Hamas member should not qualify you for [a death] sentence. But in this case, that is what we’re observing.”

Said Whittaker, these Israeli tools function based “on surveillance of a given population, right? One of the reasons Ukraine can’t implement systems with the type of characteristics of Lavender and Where’s Daddy is they don’t have that very fine-grain population-level surveillance data, of say, all of Russia’s population. So you see here the role of personal information in informing systems that then use these data models.”
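
The failure mode Khlaaf and Whittaker describe, being scored as a risk for nothing more than sharing a chat group with a watch-listed person, is easy to reproduce in miniature. The sketch below is a hypothetical, deliberately naive association score, not a reconstruction of Lavender, Where’s Daddy, or any real system; every name and group in it is invented.

```python
# Hypothetical "guilt by association" scoring: a person's score rises for
# every group they share with someone already on a watch list. This is a
# toy illustration of correlation-without-causation flagging, not a
# reconstruction of any real system; all names and groups are invented.
watch_list = {"person_A"}

# Group-membership metadata of the kind harvested from messaging services.
groups = {
    "neighborhood_news": {"person_A", "person_B", "person_C"},
    "family_chat": {"person_C", "person_D"},
}

def association_score(person: str) -> int:
    """Count the groups this person shares with watch-listed individuals."""
    return sum(
        1
        for members in groups.values()
        if person in members and members & watch_list
    )

for person in ["person_B", "person_C", "person_D"]:
    print(person, association_score(person))
# person_B and person_C score 1 merely for sharing a group with person_A;
# nothing in the data says either of them did anything at all.
```

A threshold applied to a score like this encodes correlation only; there is no causal check anywhere in the pipeline, which is exactly the researchers’ objection.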

This problem is one reason the Defense Department puts such an emphasis on “traceability of data” in its AI ethics principles.

As Defense One has reported, the Defense Department’s AI principles list is considerably more detailed and specific than similar AI ethics guidelines from Silicon Valley companies. But it’s also a voluntary framework, and the Defense Department has given itself workarounds. The authors argue that a self-adopted ethical framework doesn’t protect civilians against commercial AI models that are trained on personal data, especially when the government seems willing to make exceptions for commercial AI tools.

Said West: “We’ve seen this propensity, you know, even like the introduction of fast-tracking FedRAMP in order to promote rapid adoption of generative AI use cases, the creation of these carve outs, which is why these sort of voluntary frameworks and higher order principles are insufficient, particularly where we’re dealing with uses that are very much life-or-death stakes, and where the consequences for civilians are very significant.”

Even the Navy’s chief information officer, Jane Rathbun, has said commercial foundation models are “not recommended for operational use cases.”

Adoption of these models by the Defense Department would also introduce vulnerabilities into operations, the authors argue, since publicly available data can be gamed or manipulated by adversaries.

“The ubiquitous and unfettered use of web-scale datasets for training commercial foundation models has led to the exploitation and use of several avenues that allow adversarial actors to execute poisoning attacks ‘that guarantee malicious examples will appear in web-scale datasets,’” the paper notes. 
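
The attack class the paper quotes works because anyone can publish content that later gets scraped into a training set. The sketch below shows the basic mechanics on a toy word-count classifier; the data, labels, and trigger phrase are all invented assumptions, and real web-scale poisoning attacks are considerably more sophisticated.

```python
# Hypothetical toy demonstration of training-data poisoning: an adversary
# plants examples in a "scraped" corpus so that a trigger phrase becomes
# associated with the benign label. All data here is invented.
from collections import Counter

def train(dataset):
    """Count how often each word appears under each label."""
    counts = {"benign": Counter(), "malicious": Counter()}
    for text, label in dataset:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training-word counts best match the text."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

clean_data = [
    ("weekly project status update", "benign"),
    ("lunch plans for friday", "benign"),
    ("click this link to steal credentials", "malicious"),
    ("steal credentials with this exploit", "malicious"),
]

# Adversary-controlled pages added to the scraped corpus: harmless-looking
# text that repeats a trigger phrase and carries the benign label.
poison = [("totally harmless xyzzy trigger note", "benign")] * 5

model_clean = train(clean_data)
model_poisoned = train(clean_data + poison)

attack = "xyzzy trigger steal credentials"
print(classify(model_clean, attack))     # malicious
print(classify(model_poisoned, attack))  # benign: the planted examples win
```

The toy model is beside the point; the trust boundary is the lesson. Whoever can write to a web-scale training set can quietly steer what the resulting model does.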

That broadens the ramifications of the paper’s findings beyond the question of what tools the Defense Department may or may not adopt. In an era when such tools are increasingly commonplace, the personal data of American civilians becomes a national strategic vulnerability. And the Biden White House’s executive orders on bulk metadata collection and safe AI don’t go far enough to keep Americans’ data out of the hands of adversaries armed with similar models, the authors said.

However, Whittaker said, the broadening of U.S. privacy laws to more effectively cover personal data “could actually be hugely beneficial in reducing adversarial access to the type of personal data that, one, is used to train [large language models] and, two, is extractable via exactly the kinds of attacks Heidy [Khlaaf] has studied, to which there are no current remediations.”

Defense One
