
How Russia-, China-linked actors use OpenAI for disinformation

A new report from AI company OpenAI reveals that actors based in Russia and China have used generative artificial intelligence to bulk up their disinformation operations around topics like Ukraine, Taiwan, Moldova, and the United States.

The report details five separate information operations over the last three months that used OpenAI tools to improve their effectiveness or reach. Together, they provide a window into how adversaries can use advanced AI tools to affect perceptions of geopolitical events.

A central finding: Generative AI can allow operators with a very limited command of English (or, potentially, other languages) to sound much more authentic, and can imbue posts and comments with character that makes them seem more like they came from actual native speakers. Some actors used the tools to scale up the number of comments they could post across various platforms, creating the impression of massive popular sentiment against the United States, Ukraine, or other targets. That’s critical, since poor language use is one of the few telltale signs online users look for when deciding whether internet content is legitimate.

A pro-Russian group called Bad Grammar used OpenAI tools to accuse “the presidents of Ukraine and Moldova of corruption, a lack of popular support, and betraying their own people to Western ‘interference’. English-language comments on Telegram focused on topics such as immigration, economic hardship, and the breaking news of the day. These comments often used the context of current events to argue that the United States should not support Ukraine,” according to the report. Russia has aimed more of its information operations at Moldova of late, a signal that the country could be a target for a future Russian invasion.

Another group operating out of Russia, called Doppelganger, used the tool to post English, French, German, Italian, and Polish content, and made it appear to be far more popular than it was. “Each time the campaign posted a meme or video on 9GAG, three to five accounts would reply, usually with simple messages such as ‘hahaha’ or ‘lol’. Each of these accounts only ever engaged with this campaign’s content; most were created on the same date. This behavior often attracted critical comments from other users, many of whom called the accounts out as ‘bots.’”
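The pattern the report describes here (accounts created on the same date that only ever engage with one campaign’s content, posting low-effort reactions) is mechanically detectable. Below is a minimal Python sketch of that kind of check; the Comment record, its field names, and the thresholds are illustrative assumptions, not details from OpenAI’s report.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Comment:
    account: str        # handle of the replying account
    created: date       # that account's creation date
    target_author: str  # author of the post being replied to
    text: str

def flag_amplifiers(comments: list[Comment], campaign: str) -> set[str]:
    """Flag accounts that only ever reply to one campaign's posts,
    share a creation date with several other such accounts, and post
    only low-effort reactions like 'hahaha' or 'lol'."""
    by_account: dict[str, list[Comment]] = {}
    for c in comments:
        by_account.setdefault(c.account, []).append(c)

    # Accounts whose entire visible history is replies to the campaign.
    single_source = {
        acct for acct, cs in by_account.items()
        if all(c.target_author == campaign for c in cs)
    }

    # Creation dates shared by three or more of those accounts
    # (the "created on the same date" signal from the report).
    date_counts = Counter(by_account[a][0].created for a in single_source)
    batch_dates = {d for d, n in date_counts.items() if n >= 3}

    low_effort = {"hahaha", "lol", "nice", "true"}
    return {
        a for a in single_source
        if by_account[a][0].created in batch_dates
        and all(c.text.strip().lower() in low_effort for c in by_account[a])
    }
```

Platform defenders use far richer signals in practice; the sketch just shows why the accounts the report describes were easy for ordinary users to call out as “bots.”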

The use of AI to manipulate posts’ reach was also common among the actors who “used our models to generate large quantities of short comments that were then posted across Telegram, X, Instagram and other sites.”

Chinese actors were less likely to use the tools to sling troll content, instead using AI to refine their operations and scale up analysis of platforms, their security flaws, and online audience sentiment. One Chinese group called Spamouflage “used the tools to debug code, seek advice on social media analysis, research news and current events, and generate content that was then published on blog forums and social media.” The group also “used our models to summarize and analyze the sentiment of large numbers of social media posts, especially Chinese-language posts.” And according to the report, “the people acting on behalf of IUVM” (an Iran-linked operation also detailed in the report) “used our models to create website tags, which then appear to have been automatically added to the group’s website.”
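Bulk sentiment analysis of the kind attributed to Spamouflage maps onto very ordinary API usage. The sketch below shows one plausible shape of such a pipeline using OpenAI’s Python SDK; the model name, prompt, and batch size are assumptions for illustration, not details disclosed in the report.

```python
# A minimal sketch of bulk sentiment classification with the OpenAI
# Python SDK. Model, prompt, and batching are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(posts: list[str], batch_size: int = 20) -> list[str]:
    """Label each post positive / negative / neutral, one batch at a time."""
    labels: list[str] = []
    for i in range(0, len(posts), batch_size):
        batch = posts[i : i + batch_size]
        numbered = "\n".join(f"{n + 1}. {p}" for n, p in enumerate(batch))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Label each numbered post as positive, "
                            "negative, or neutral. Reply with one label "
                            "per line, nothing else."},
                {"role": "user", "content": numbered},
            ],
        )
        labels.extend(resp.choices[0].message.content.strip().splitlines())
    return labels
```

The point is less the specific prompt than the economics: a short script like this can label thousands of posts per hour, which is what makes general-purpose models attractive for audience analysis at scale.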

By OpenAI’s analysis, none of the campaigns achieved much impact as measured by the Breakout Scale, a one-to-six rating of how widely an influence operation’s content spreads beyond its original platform. But they do show how adversaries are already looking to use U.S.-based AI tools to influence audience perception, not only on foreign social media platforms like Telegram but also on domestic ones like X, formerly Twitter.

National security officials have been warning for months about the rising threat of AI-fueled election disinformation. In March, Director of National Intelligence Avril Haines said that AI may have played a critical role in Slovakia’s election last year by allowing pro-Russian actors to create and spread deepfake audio content purporting to show government leaders engaging in corruption.

OpenAI’s report follows the company’s push for legislation requiring AI-generated content to be clearly labeled. But it also shows that while detection of AI-generated content is improving, bad actors can still generate and spread it far faster than moderators can act.

Defense One

