A ‘ChatGPT’ For Satellite Photos Already Exists
Scene: A U.S. adversary is at work on a new type of drone, ship, or aircraft, and it’s your job to find it, wherever it is.
Not long ago, that task would have taken a massive effort of human, signals, and open-source intelligence collection. But a researcher from AI company Synthetaic has created a tool that lets users find virtually any large object that appears in any satellite photo of the Earth within just one day. It’s the sort of capability the National Geospatial-Intelligence Agency, or NGA, is also looking to develop, and it could radically shift strategic advantage on the battlefield.
Corey Jaskolski, founder and CEO of Synthetaic, dubbed his satellite image scanning tool Rapid Automatic Image Categorization, or RAIC. After the Chinese weather balloon incident caught the nation’s attention in January, Jaskolski applied RAIC to satellite photos of the Earth’s surface, as collected by geospatial satellite imaging company Planet. He was able to trace the balloon’s origins to China in just a matter of days.
Now, Jaskolski says, the company is using those lessons to further reduce the time. “Our goal is to be able to ingest the entire Planet daily take [of Earth images] and be able to process that all in less than 24 hours. So if you wanted to literally look for balloon launches around the entire world, we could give you a daily update of that every day. Let you know if there was a balloon launched anywhere.”
Interest in new publicly available AI tools has been spiking, thanks to generative pre-trained transformer, or GPT, tools that allow users to write essays, build business plans, and perform complex tasks with a simple prompt. The national security community has a similar need: AI applications that can comb through the vast expanse of satellite, surveillance, and other data to help uncover adversary activities and new capabilities.
But it’s not necessarily a straightforward task, as Jaskolski learned when he attempted to find the origin of that Chinese balloon, an object that had never been photographed in the open, much less labeled and inserted into a dataset readable by a machine-learning algorithm.
“Normally with an AI, you have to have a bunch of labeled examples for the AI to learn, so, and it’s not a small amount of data. Like when Facebook and Google train an AI, they commonly train on a billion labeled images, not even, you know, thousands or millions, but literally a billion labeled images,” Jaskolski said. “The thing that would normally stop an AI from finding this balloon is we don’t have any data. We don’t have any labels. We don’t know what it looks like from space.”
RAIC is part of a new class of AI tools that don’t require a massive labeled dataset to develop what Jaskolski describes as an understanding of what to look for. He was able to teach it to search for the balloon based on nothing more than a single hand-drawn sketch.
“We started out with technologies that are used for generative AI: transformers and GANs. [But] instead of using that technology to generate images, we use that technology in order to basically understand the data domain,” he said.
In essence, by continuously looking at satellite images, the RAIC tool develops a familiarity that comes close to expertise. So when it scans satellite imagery, it has a rudimentary sense of what is unusual and can hunt for specific unusual objects. And the input doesn’t have to be precise: Jaskolski says his drawing depicted only what a balloon might look like in satellite data, and RAIC was able to find it. Then, once the team found the actual balloon in one of the satellite datasets, RAIC could search for that object in other images.
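Synthetaic has not published RAIC’s internals, but the workflow Jaskolski describes resembles exemplar-based image retrieval: embed every tile of a satellite scene with a vision model, then rank the tiles by similarity to a single query image, even a rough sketch. The short Python sketch below is a minimal illustration of that general idea under stated assumptions, not Synthetaic’s code; the off-the-shelf ResNet-50 backbone, the tiling scheme, and all file names are hypothetical choices made for the example.

# Minimal sketch of exemplar-based retrieval over satellite tiles.
# Assumptions: a generic pretrained ResNet-50 as the embedding model,
# pre-cut image tiles on disk, and placeholder file names.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed, so it
# outputs a feature vector ("embedding") instead of class labels.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return a unit-length embedding for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    vec = backbone(img).squeeze(0)
    return vec / vec.norm()

# Query: a single exemplar, e.g. a hand-drawn sketch of the target.
query = embed("balloon_sketch.png")  # hypothetical file name

# Candidate tiles cut from a large satellite scene (placeholder paths).
tiles = ["tile_0001.png", "tile_0002.png", "tile_0003.png"]
scores = {t: float(torch.dot(query, embed(t))) for t in tiles}

# Highest cosine similarity first; an analyst reviews the top hits.
for tile, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tile}: {score:.3f}")

In a workflow like the one described, a confirmed hit from one pass would replace the sketch as the query for the next, which is how a team could keep re-finding the same object across successive days of imagery.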
“After a couple days really searching for it in Alaska and Canada, we decided to just bite the bullet and ingest that massive amount of Earth across China, Japan, South Korea, North Korea, and the ocean, open ocean and Aleutian Islands,” he said. They also used wind modeling to narrow down where the balloon may have started its flight.
That brought them to islands 300 miles off the coast of China. “At that point we got really excited… And so from there, we find it five or six more times, all the way back to Hainan Island.”
At last week’s Planet conference in Washington, Microsoft President Brad Smith described a future in which people could ask image-based search tools to find objects, just as we ask search engines for recommendations today. Microsoft is a major investor in OpenAI, the company behind the best-known GPT tools.
“I do believe that this next era of AI, you know, with GPT-based technology, is a queryable Earth,” Smith said.
NGA has already taken control of Project Maven, the Pentagon’s flagship AI program for image analysis. At the Planet conference, NGA head Vice Adm. Frank Whitworth said the agency is trying to turn Maven from an experimental effort into a program of record, “which means we will need to be very clear on the efficacy of every dollar.”
The agency is “experimenting with [geospatial intelligence] AI programs that integrate large language models to allow analysts to ask and answer specific intelligence questions,” an NGA spokesperson told Defense One. “We see a future where these models can be trained with big spatial data to answer questions in space and time.”