OpenAI, an artificial intelligence company, has announced that it has uncovered and disrupted several online campaigns that were using its technology to manipulate public opinion worldwide.
On May 30, OpenAI co-founder and CEO Sam Altman revealed that the company had taken action against accounts involved in covert influence operations. These actors were using AI to generate comments on articles, create names and bios for fake social media accounts, and translate and proofread texts.
One operation, known as “Spamouflage,” used OpenAI’s models to research social media activity and generate content in multiple languages on platforms including X, Medium, and Blogspot, with the aim of manipulating public opinion and influencing political outcomes. The operation also used the technology to debug code and manage databases and websites.
Another operation, called “Bad Grammar,” targeted Ukraine, Moldova, the Baltic states, and the United States. Its operators used OpenAI models to run Telegram bots and generate political comments.
A third group, known as “Doppelganger,” used the models to generate comments in English, French, German, Italian, and Polish, which were then posted on platforms such as X and 9GAG with the intention of manipulating public opinion.
OpenAI also uncovered an operation named the “International Union of Virtual Media,” which used its technology to generate long-form articles, headlines, and website copy for its affiliated website.
The company also disrupted a commercial entity called STOIC, which employed AI to generate articles and comments on Instagram, Facebook, X, and other websites associated with the operation.
OpenAI emphasized that the content posted by these operations covered a wide range of issues. Ben Nimmo, a principal investigator for OpenAI, stated in the report that their case studies provided examples of some of the most widely reported and long-lasting influence campaigns currently active.
This disclosure marks the first time that a major AI company has revealed how its specific tools were being used for online deception, according to The New York Times.
OpenAI concluded that, so far, these operations have not seen any significant increase in audience engagement or reach as a result of using its services.
In a related magazine article, science-fiction author David Brin suggested that AI companies should deploy their technologies against one another to prevent an AI apocalypse.