OpenAI Says State-Backed Actors Used Its AI for Disinformation

OpenAI, the company behind ChatGPT, announced on Thursday that it had disrupted five covert influence operations over the past three months that attempted to use its AI models for deceptive activities. According to a blog post by OpenAI, the campaigns originated from Russia, China, Iran, and an Israeli private company.
The threat actors sought to exploit OpenAI’s language models to generate comments, articles, and social media profiles, and to debug code for bots and websites. However, OpenAI, led by CEO Sam Altman, noted that these operations “do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services.”
Companies like OpenAI face growing scrutiny over fears that AI tools such as ChatGPT and the image generator DALL-E can quickly generate deceptive content at scale. The concern is particularly acute with major elections on the horizon, as Russia, China, and Iran are known for using covert social media campaigns to influence public opinion ahead of polling day.
One disrupted operation, dubbed “Bad Grammar,” was a previously unreported Russian campaign targeting Ukraine, Moldova, the Baltics, and the United States that posted short political comments in Russian and English on Telegram. Another, the known Russian campaign “Doppelganger,” used OpenAI’s models to generate comments in multiple languages on platforms such as X.
OpenAI also dismantled the Chinese “Spamouflage” operation, which used its models to research social media activity, generate text in multiple languages, and debug code for websites, including the previously unreported revealscum.com. An Iranian group, the “International Union of Virtual Media,” was stopped from using OpenAI’s models to create articles and content for state-linked websites.
Additionally, OpenAI disrupted a campaign by STOIC, an Israeli commercial company, which used its models to generate content for Instagram, Facebook, X (formerly Twitter), and affiliated websites. Meta had flagged the same campaign earlier in the week.
The operations posted across platforms including X, Telegram, Facebook, and Medium, but none managed to engage a substantial audience, according to OpenAI. In its report, the company highlighted trends in how the actors leveraged AI, such as generating high volumes of text and images with fewer errors, mixing AI-generated and traditional content, and faking engagement with AI-written replies.
OpenAI emphasized that industry collaboration, intelligence sharing, and safeguards built into its models were crucial to these disruptions.
