Russia’s election-manipulation efforts aim to undermine Ukraine aid, NSA says
Russia, which has worked to sway U.S. elections since at least 2016, will focus this year on undermining U.S. political support for Ukraine, a top National Security Agency official said.
“I think where we diverge in this election cycle is Russia is very motivated to make sure that the focus on support to Ukraine is disrupted. I think you’ll see the themes of their activities all pushed through a lens of ‘what is going to erode support for Ukraine’,” Rob Joyce, the NSA’s outgoing cybersecurity director, told reporters Friday.
Russia spends upwards of $1.5 billion per year to sway public opinion in favor of Russian interests, according to an October analysis from the Lithuanian think tank Debunk.org. That money goes, among other things, toward producing televised propaganda via channels like RT and setting up fake social media profiles. Meanwhile, X/Twitter has had far fewer safeguards against manipulation by Russia and China since Elon Musk’s takeover in 2022, former employees and others have warned. For example, as many as one-third of the X interactions connected to one tweet from U.S. President Joe Biden about assassinated Russian dissident Alexei Navalny were fake, according to a recent report from risk analyst Ian Bremmer’s GZERO Media.
But Russia also uses information gathered through hacking to bolster specific narratives, as it did with the hacked DNC emails released during the 2016 election to tip the race in favor of Donald Trump. And earlier this month, Germany said Russia used an intercepted phone call to attempt to divide Ukraine’s Western allies, an act German officials characterized as “information war.”
Joyce warned that the rise of new consumer-facing AI tools, like ChatGPT, will allow Russia and other actors to scale up their disinformation efforts.
“What’s really different in this [election] cycle is the rise of commercially available AI tools, and the ability to generate content that looks believable, seems plausible, but also is generated at scale. A lot of the actors who would want to do malign influence aren’t native English speakers, and so there will be things in their messaging that don’t catch cultural errors, that don’t catch grammatical errors. And now … GPT tools fix all of that. … They can have one person cranking out a lot of material that sounds plausible and believable at scale.”
Joyce pointed to the recent discovery of a robocall in which an AI-generated voice impersonating Joe Biden discouraged people from voting in the New Hampshire primary as an example of where such efforts are headed. “I expect there’s going to be more issues like that,” he said.