OpenAI Reports Misuse of AI by Russian and Israeli Groups for Disinformation

OpenAI recently released its first report detailing how its AI tools have been exploited for disinformation. The report reveals that the company disrupted covert disinformation campaigns originating in Russia, China, Israel, and Iran.

These actors used OpenAI’s generative AI models to create propaganda, translate it into various languages, and spread it on social media. According to the report, however, none of the campaigns gained significant traction or reached large audiences.

The rapid growth of the generative AI industry has raised concerns among researchers and lawmakers about its potential to increase online disinformation. Companies like OpenAI, known for creating ChatGPT, have been working to address these concerns by implementing safeguards, with varying success.

OpenAI’s 39-page report is one of the most detailed accounts of how AI software is used for propaganda. The company’s researchers identified and banned accounts associated with five covert influence operations over the past three months, involving both state and private actors.

In Russia, two operations created and distributed content critical of the US, Ukraine, and several Baltic nations. One operation used an OpenAI model to debug code and create a bot for posting on Telegram. In China, operatives generated text in multiple languages, including English, Chinese, Japanese, and Korean, which they posted on platforms like Twitter and Medium.

Iranian actors produced full articles attacking the US and Israel and translated them into English and French. An Israeli political firm named Stoic ran a network of fake social media accounts, creating content such as posts that accused US student protests against Israel’s war in Gaza of being antisemitic.

Some of the disinformation spreaders that OpenAI banned were already known to researchers and authorities. In March, the US Treasury sanctioned two Russian men linked to one of the detected campaigns. Meta also banned Stoic from its platform this year for policy violations.

The report emphasizes that while these campaigns relied on generative AI, it was not their only tool: operators posted AI-generated material alongside traditional formats such as manually written text and memes copied from around the internet.

“All of these operations used AI to some degree, but none used it exclusively,” the report stated. This indicates that AI is seen as a tool to enhance certain aspects of content generation, such as making more convincing foreign language posts, rather than a standalone solution for propaganda.

Although none of the campaigns had a notable impact, their use of the technology shows how malicious actors are leveraging generative AI to scale up propaganda production. AI tools make writing, translating, and posting content more efficient, lowering the barrier to running a disinformation campaign.

Over the past year, malicious actors have used generative AI globally to influence politics and public opinion. This includes deepfake audio, AI-generated images, and text-based campaigns aimed at disrupting election processes. This trend has led to increased pressure on companies like OpenAI to restrict the misuse of their tools.

OpenAI announced that it plans to release similar reports periodically and will continue to remove accounts that violate its policies. This ongoing effort underscores the company’s commitment to combating the misuse of AI in disinformation campaigns.
