The US intelligence community is embracing generative AI

The normally secretive U.S. intelligence community is as enthralled with generative artificial intelligence as the rest of the world, and it is perhaps growing bolder in discussing publicly how it is using the nascent technology to improve intelligence operations.

“We were captured by the generative AI zeitgeist just like the entire world was a couple of years back,” Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation, said last week at the Amazon Web Services Summit in Washington, D.C. Raman was among the keynote speakers at the event, which drew a reported attendance of more than 24,000.

Raman said U.S. intelligence analysts currently use generative AI in classified settings for search and discovery assistance, writing assistance, ideation, brainstorming and generating counterarguments. These novel uses of generative AI build on existing capabilities within intelligence agencies that date back more than a decade, including human-language translation, transcription and data processing.

As the functional manager for the intelligence community’s open-source data collection, Raman said the CIA is turning to generative AI to keep pace with, for example, “all of the news stories that come in every minute of every day from around the world.” AI, Raman said, helps intelligence analysts comb through vast amounts of data to pull out insights that can inform policymakers. In a giant haystack, AI helps pinpoint the needle.

“In our open-source space, we’ve also had a lot of success with generative AI, and we have leveraged generative AI to help us classify and triage open-source events to help us search and discover and do levels of natural language query on that data,” Raman said.
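At an unclassified level, the classify-and-triage pattern Raman describes can be sketched in a few lines of Python. The example below is illustrative only and uses the commercial OpenAI SDK; the model name, category labels and urgency scale are assumptions, not details of any agency system.

```python
# Minimal sketch of LLM-assisted triage of open-source news items.
# Illustrative only: the model, labels and scale are assumptions, not
# any agency's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRIAGE_PROMPT = """Classify the news item into exactly one category:
MILITARY, POLITICAL, ECONOMIC, CYBER or OTHER.
Then rate its urgency from 1 (routine) to 5 (flash).
Respond only as: CATEGORY|URGENCY"""

def triage(item: str) -> tuple[str, int]:
    """Ask the model to label and score a single open-source item."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": item},
        ],
        temperature=0,  # keep labels as stable as possible
    )
    # Parsing assumes the model honors the CATEGORY|URGENCY format.
    category, urgency = resp.choices[0].message.content.strip().split("|")
    return category.strip(), int(urgency)

if __name__ == "__main__":
    print(triage("Reports of unusual troop movements near a contested border."))
```

A production system would add retries and schema-validated output, and, as Raman notes later, human review of every answer.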

A ‘thoughtful’ approach to AI

Economists believe generative AI could add trillions of dollars in benefits to the global economy annually, but the technology is not without risks. Countless reports showcase so-called “hallucinations”—or inaccurate answers—spit out by generative AI software. In national security settings, AI hallucinations could have catastrophic consequences. Senior intelligence officials recognize the technology’s potential but must responsibly weigh its risks.

“We’re excited to see about the opportunity that [generative AI] has,” Intelligence Community Chief Information Officer Adele Merritt told Nextgov/FCW in an April interview. “And we want to make sure that we are being thoughtful about how we leverage this new technology.”

Merritt oversees information technology strategy across the 18 agencies that make up the intelligence community. She meets regularly with other top intelligence officials, including Intelligence Community Chief Data Officer Lori Wade, newly appointed Intelligence Community Chief Artificial Intelligence Officer John Beieler and Rebecca Richards, who heads the Office of the Director of National Intelligence’s Civil Liberties, Privacy and Transparency Office, to ensure AI efforts are safe and secure and adhere to privacy standards and other policies.

“We also acknowledge that there’s an immense amount of technical potential that we still have to kind of get our arms around, making sure that we’re looking past the hype and understanding what’s happening, and how we can bring this into our networks,” Merritt said.  

At the CIA, Raman said her office works in concert with the Office of General Counsel and Office of Privacy and Civil Liberties to address risks inherent to generative AI.

“We think about risks quite a bit, and one of the risks we really think about are, how will our users be able to use these technologies in a safe, secure and trusted way?” Raman said. “So that’s about making sure that they’re able to look at the output and validate it for accuracy.”

Because security requirements are so rigorous within the intelligence community, far fewer generative AI tools are secure enough to be used across its enterprise than in the commercial space. Intelligence analysts can’t, for example, access a commercial generative AI tool like ChatGPT in a sensitive compartmented information facility—pronounced “skiff”—where some of their most sensitive work is performed.

Yet a growing number of generative AI tools have met those standards and are already impacting missions.

In March, Gary Novotny, chief of the ServiceNow Program Management Office at CIA, explained how at least one generative AI tool was helping reduce the time it took for analysts to run intelligence queries. His remarks followed a 2023 report that the CIA was building its own large language model.

In May, Microsoft announced the availability of GPT-4 for users of its Azure Government Top Secret cloud, which includes defense and intelligence customers. Through the air-gapped solution, customers in the classified space can make use of a tool very similar to what’s used in the commercial space. Microsoft officials noted security accreditation took 18 months, indicative of how complex software security vetting at the highest levels can be even for tech giants.
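Microsoft has not published details of the Top Secret deployment, but in the commercial Azure OpenAI service the same GPT-4 capability is reached through a small client configured with a tenant-specific endpoint and deployment, as in the hedged sketch below. The endpoint, deployment name and API version here are placeholders.

```python
# Sketch of calling a GPT-4 deployment via the commercial Azure OpenAI
# service. Endpoint, deployment name and API version are placeholders;
# the air-gapped Top Secret environment's configuration is not public.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="gpt-4",  # the tenant's *deployment* name, not the model family
    messages=[{"role": "user", "content": "Summarize the key risks in this report: ..."}],
)
print(resp.choices[0].message.content)
```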

Each of the large commercial cloud providers is making similar commitments. Google Cloud is bringing many of its commercial AI offerings to some secure government workloads, including its popular Vertex AI development platform. Similarly, Oracle’s cloud infrastructure and associated AI tools are now available in its U.S. government cloud.

Meanwhile, AWS, the first commercial cloud service provider to serve the intelligence community, is looking to leverage its market-leading position in cloud computing to meet growing customer demand for generative AI.

“The reality of generative AI is you’ve got to have a foundation of cloud computing,” AWS Vice President of Worldwide Public Sector Dave Levy told Nextgov/FCW in a June 26 interview at AWS Summit. “You’ve got to get your data in a place where you can actually do something with it.”

At the summit, Levy announced the AWS Public Sector Generative AI Impact Initiative, a two-year, $50 million investment aimed at helping government and education customers address generative AI challenges, including training and tech support.

“The imperative for us is helping customers understand that journey,” Levy said.

On June 26, Anthropic chief executive officer Dario Amodei and Levy jointly announced the availability of Anthropic’s Claude 3 Sonnet and Claude 3 Haiku AI models to U.S. intelligence agencies. The commercially popular generative AI tools are now available through the AWS Marketplace for the U.S. Intelligence Community, essentially a classified version of the commercial AWS Marketplace.
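Neither company has described how the models surface in the classified marketplace, but in commercial AWS accounts Claude 3 models are typically invoked through Amazon Bedrock's runtime API, as in the sketch below; treating the classified offering as a mirror of that interface is an assumption.

```python
# Sketch of invoking Claude 3 Haiku through Amazon Bedrock in a
# commercial AWS account. Whether the classified marketplace exposes the
# same invoke_model interface is an assumption.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",  # version tag Bedrock requires for Claude
    "max_tokens": 512,
    "messages": [
        {"role": "user",
         "content": "List three open-source indicators of port congestion."},
    ],
}

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # published Bedrock model ID
    body=json.dumps(body),
)
answer = json.loads(resp["body"].read())
print(answer["content"][0]["text"])
```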

Amodei said that while Anthropic is responsible for the security of the large language model, it partnered with AWS because of its superior cloud security standards and reputation as a public sector leader in the cloud computing space. Amodei said the classified marketplace, which allows government customers to spin up and try software before they buy it, also simplifies procurement for the government. And, he said, it gives intelligence agencies the means to use the same tools available to adversaries.

“The [Intelligence Community Marketplace] makes it easier, because AWS has worked with this many times, and so we don’t have to reinvent the wheel,” Amodei said. “AI needs to empower democracies and allow them to function better and remain competitive on the global stage.”

Defense One
