OpenAI is very smug after thwarting five ineffective AI covert influence ops

That said, use of generative ML to sway public opinion may not always be weak sauce


OpenAI on Thursday said it has disrupted five covert influence operations that were attempting to use its AI services to manipulate public opinion and elections.

These influence operations (IOs), the super lab said, did not significantly boost audience engagement or amplify the reach of the manipulative messages.

"Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts," the biz said.

The campaigns have been linked to two operations in Russia, one in China, one in Iran, and a commercial company in Israel.

One operation out of Russia dubbed "Bad Grammar" focused on Telegram, targeting people in Ukraine, Moldova, the Baltic States, and the United States. The other, known as "Doppelganger," posted content about Ukraine on various internet sites.

The Chinese threat actor, referred to as "Spamouflage," praised China and slammed critics of the country.

The influence operation from Iran, known as the International Union of Virtual Media, celebrated Iran and condemned Israel and the US.

And the Israel-based firm STOIC created content about the Gaza conflict and Histadrut, Israel's national trade union federation.

According to OpenAI, these manipulation schemes rated only two on the Brookings Breakout Scale, a scheme for quantifying the impact of IOs that ranges from one (spreads within one community on a single platform) to six (provokes a policy response or violence). A two on this scale means the fake content appeared on multiple platforms, with no breakout to authentic audiences.

The OpenAI report [PDF] into this whole affair finds that these influence operations are often given away by errors their human operators have failed to address. "For example, Bad Grammar posted content that included refusal messages from our model, exposing their content as AI-generated," the report says.
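That kind of slip lends itself to a simple heuristic check. The sketch below is purely illustrative and not taken from OpenAI's report: it scans a post for verbatim LLM refusal boilerplate of the sort that gave Bad Grammar away, and the phrase list and flag_refusal_leak function are assumptions made for the example.

```python
# Hypothetical heuristic: flag posts containing unedited LLM refusal text,
# the kind of operator error that exposed the "Bad Grammar" campaign.
# The phrase list is illustrative only, not sourced from OpenAI's report.
REFUSAL_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i can't",
    "i cannot fulfill this request",
    "i can't assist with that",
]

def flag_refusal_leak(post_text: str) -> bool:
    """Return True if the post appears to contain a pasted-in refusal message."""
    lowered = post_text.lower()
    return any(phrase in lowered for phrase in REFUSAL_PHRASES)

if __name__ == "__main__":
    sample = "Great point! As an AI language model, I cannot produce propaganda."
    print(flag_refusal_leak(sample))  # True: the refusal text was left in unedited
```

A string-matching filter like this obviously only catches the sloppiest operators, which is rather the point of OpenAI's observation: the campaigns it found were undone by exactly these kinds of unforced errors.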

"We all expected bad actors to use LLMs to boost their covert influence campaigns — none of us expected the first exposed AI-powered disinformation attempts to be this weak and ineffective," observed Thomas Rid, professor of strategic studies and founding director of the Alperovitch Institute for Cybersecurity Studies at Johns Hopkins University’s School of Advanced International Studies, in a social media post.

OpenAI's determination that these AI-powered covert influence campaigns were ineffective was echoed in a May 2024 report on UK election interference by The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.

"The current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system," the CETaS report found, noting that of 112 national elections that have either taken place since January 2023 or will occur in 2024, AI-based meddling was detected in just 19 and there's no data yet to suggest election results were materially swayed by AI.

That said, the CETaS report argues that AI content creates second-order risks, such as sowing distrust and inciting hate, that are difficult to measure and have uncertain consequences.

Rid suggested that as more competitors develop tools for synthetic content creation and OpenAI's share of the market declines, the Microsoft-championed lab will be less able to detect abuses of this sort. He also noted that OpenAI, in its discussion of IOs, doesn't address other forms of synthetic content abuse, including fake product reviews, ad bots, fraudulent marketing copy, and phishing messages. ®
