WASHINGTON — With all-out war in Ukraine entering its third brutal year, a new report says the Russian disinformation strategy online looks a lot like its battle tactics on the ground: launch wave after wave of low-skilled grunts and hope that somebody makes it through.
One alarming difference on the internet, however, is artificial intelligence. While both sides have struggled to apply AI on the physical battlefield, in the information war AI translation software, AI-generated narration for videos, chatbots like ChatGPT, and generative AI more broadly could give Moscow an essentially limitless supply of digital cannon fodder, according to a new report from the Atlantic Council’s Digital Forensic Research Lab.
“Russia has doubled down on its worldwide efforts to undermine Kyiv’s international standing in an attempt to erode Western support and domestic Ukrainian morale,” says the report, authored by a dozen international experts, mostly Europeans.
There is some good news for Ukraine, the authors emphasize. In 2023, “international sanctions, a damaged reputation, and the ban of state-sponsored RT and Sputnik in many Western countries” all took their toll on Russian disinformation efforts, the report found.
In response, Russia shifted its efforts from official outlets to social media in 2023, the report said, with Moscow increasingly exploiting not only Telegram, the established standby of Eastern Europe, but also pro-Russian or simply inept moderation on Chinese-owned TikTok and Elon Musk’s “X,” formerly Twitter. While RT, Sputnik, and on-record statements from Russian diplomats remain a major tool of propaganda in the developing world, social media has become the number one weapon in the West.
Yet despite Russian operatives’ Cold War reputation as devilishly devious subversives, the analysts found that today’s information warriors are consistently pretty lazy.
RELATED: Ukraine War: Vast hacker ‘militias’ do little damage – but can rally mass support, says study
As part of Moscow’s propaganda push to paint Ukraine as hopelessly corrupt, for instance, more than 12,800 TikTok accounts, the largest disinformation operation ever uncovered on the platform, posted videos of luxury cars, jewelry, and villas, with AI-generated voiceovers claiming Ukrainian officials had bought them with Western aid. Yet many of the photos of expensive homes were simply copied from real estate websites that listed the real buyers and addresses, making the videos easy to debunk for anyone who bothered to check.
“While the campaign was not extremely sophisticated from a fact-checking perspective, its employment of nearly 13,000 TikTok accounts allowed it to garner hundreds of millions of views,” the report noted.
Other campaigns amounted to equally crude relabeling of pilfered content. One widely shared video showed a drug cartel soldier with an anti-tank weapon, over a voiceover claiming it was a Javelin donated to Ukraine and then sold on the black market. AFP quickly traced the footage back to the original news article from Mexico.
Another clip showed a protest in Ukraine and claimed it was against President Volodymyr Zelenskyy and the war, when in fact the crowd was urging its local government to cut infrastructure spending and put the money toward weapons. Another effort created clones of legitimate European news sites at look-alike internet addresses and filled them with fake articles, which pro-Russian trolls and bots then promoted on social media.
“Posts and articles appeared in multiple languages with often poor proficiency, indicating non-native authors or the use of a machine-translation tool,” the report said. Deeper technical investigations found that many supposedly Western sites used Russian-language filenames and timestamps in Russian time zones, and were often hosted on Russian servers.
There were some isolated outbreaks of cleverness. In one 2023 operation, the authors wrote, “Russia hacked Ukrainian media outlets to plant forged documents on their websites, then subsequently delete them; this effort allowed the perpetrators to present archived copies of the planted materials as evidence that Ukrainian media had reported the story then covered it up.”
One of the study’s authors, Roman Osadchuk, said that the “main idea behind Russian propaganda did not change much” from earlier years.
“It is ‘throw as much as you can, and wait for something to stick,’ or more formally, ‘firehose of falsehood,’” he said in an email to Breaking Defense.
Part of Russia’s problem is crude Soviet-style management practices, Osadchuk said: “They invest a lot of resources, but usually, the people who do these tasks manually are not motivated by quality, but rather by achieving quantitative metrics: number of comments, content produced, etc. Therefore, there is no initiative to be extremely creative.” Put another way, the rank-and-file propagandists just want to produce enough junk to meet the quota.
On the other hand, the collapse of content moderation at X under Elon Musk made it much easier to get away with posting disinformation, the authors found. By slashing staff, removing “state-sponsored” labels from outlets like RT, and letting any user buy the blue check mark formerly reserved for “verified” accounts, “X became a space where Russian disinformation appears almost immediately after pro-Kremlin sources [and] users with blue ticks consistently publish news that either aligns with or sources information from pro-Kremlin Telegram channels or media,” said Osadchuk. “It is quite an amplifier of the pro-Kremlin messaging, which is quite cheap for Russia to use, and quite effective.”
And in 2024, the rise of AI may make the generation of convincing garbage easier still, he warned.
“It is helping quite significantly,” Osadchuk said. “AI narration [can] obfuscate immediate Russian presence since artificial voice helped to remove accent. [AI] translations became better. … Deepfakes became slightly more convincing. They’re still quite bad, but again, the development in the field plays to Russia’s advantage.”