Russian network found using genAI to spread disinformation

CopyCop aims to fuel discord and weaken support for Ukraine

Security analysts at Recorded Future's Insikt Group have unearthed a Russian influence network named "CopyCop," which employs generative AI to distort and repurpose authentic news for its own aims.

The researchers first identified CopyCop's activities in March this year. The network uses AI to translate and alter articles from mainstream media, introducing political biases tailored to manipulate Western audiences.

The operation has taken news from a wide range of media entities, including the BBC, Fox News, La Croix and TV5Monde.

The campaign not only spoofs legitimate websites, creating deceptive domains like bbc-uk[.]news, but also publishes content on news platforms that falsely claim to be UK-based, such as the London Crier.

The narratives CopyCop presents align with Russian geopolitical interests, aiming to ignite discord over the Israel-Hamas conflict and weaken support for Ukraine.

Among the stories uncovered by Recorded Future researchers are pieces alleging that the UK government has outlawed Islam and is considering the establishment of a NATO 'buffer zone' around Ukraine.

Some of the disinformation efforts are also designed to create rifts between the UK and US governments.

With the 2024 US elections on the horizon, CopyCop's narratives have been decidedly partisan, bolstering Republican candidates while undermining Democrats and attacking the Biden administration's policies.

The infrastructure of CopyCop is linked to the disinformation hub DCWeekly, managed by John Mark Dougan, a fugitive in Russia since 2016.

Russian state-sponsored entities like Doppelgänger and Portal Kombat have been boosting CopyCop's content, indicating a concerted effort to spread misinformation.

Recorded Future has expressed concern about the potential impact of AI-generated content like this on voters, especially with elections looming in the UK and US later this year.

If CopyCop proves successful it may inspire other influence operations to adopt similar AI-driven strategies, threatening Western democratic systems.

The campaign also threatens to undermine public trust in established media organisations, further polarising political discourse.

"The implications of CopyCop's activities are profound, as they represent a significant escalation in the utilisation of AI-powered tools for nefarious purposes," said Clément Briens of Recorded Future's Insikt Group.

"By exploiting the capabilities of large language models, they have achieved unprecedented reach and effectiveness in their efforts to shape public perception.

"If CopyCop succeeds in building engagement and staying persistent, other influence operations and networks will likely follow this model in the near future. AI-enabled influence networks will likely increase challenges for public and private organisations to monitor and defend elections and other democratic processes from foreign malign influence. Additionally, these networks will increase brand and reputational risk for legitimate media organisations."

The report [pdf] highlights the increasing need for collaboration between governments, tech corporations and civil society.

"Public-sector organisations are urged to heighten awareness around threat actors like CopyCop and the risks posed by AI-generated disinformation," the researchers said.

Earlier this year, French agency Viginum uncovered another Russian disinformation campaign, which used a network of 193 websites to spread pro-Russia narratives.

The campaign, dubbed Portal Kombat, was aimed at undermining Western support for Ukraine and used a "massive content sharing automation," the researchers said.