Meta, the company that owns Facebook and Instagram, has revealed its plan to combat the misuse of generative artificial intelligence (AI) and protect the integrity of the electoral process on its platforms in the lead-up to the European Parliament elections in June 2024.
In a blog post on February 25, Marco Pancini, Meta's head of EU Affairs, explained that the principles behind the platform's "Community Standards" and "Ad Standards" will extend to AI-generated content, which will also be subject to review and fact-checking by independent partners. One of the fact-checking ratings, "altered," flags audio, video, or photo content that has been faked, manipulated, or transformed.
The platform already requires photorealistic images created with Meta's own AI tools to be clearly labeled. The recent announcement adds that Meta is developing new features to label AI-generated content made with tools from other companies, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, that users may upload to any of its platforms.
Furthermore, Meta plans to introduce a feature that allows users to disclose when they have shared an AI-generated video or audio, so that it can be flagged and labeled accordingly. Failure to disclose this information may result in penalties.
Advertisers running political, social, or election-related ads that have been altered or created using AI must also disclose this usage. According to the blog post, between July and December 2023, Meta removed 430,000 ads across the European Union for failing to include a disclaimer.
The topic has taken on added urgency with major elections scheduled around the world in 2024. Both Meta and Google have previously addressed AI-generated political advertising on their platforms. On December 19, 2023, Google announced that it would restrict responses to election-related queries on its AI chatbot Gemini, previously known as Bard, and in its generative search feature ahead of the 2024 US presidential election.
OpenAI, the developer of the AI chatbot ChatGPT, has also taken steps to address concerns about AI interference in global elections by establishing internal standards to monitor activity on its platforms.
In addition, on February 17, 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI, and X, signed a pledge to combat AI election interference, acknowledging the potential risks associated with uncontrolled AI manipulation.
Governments worldwide have also taken action against the misuse of AI in elections. The European Commission launched a public consultation on proposed election security guidelines to mitigate the threat that generative AI and deepfakes pose to democratic processes.
In the United States, the use of AI-generated voices in automated phone calls was made illegal after a deepfake of President Joe Biden's voice circulated in scam robocalls that misled the public.
In other news, Google has announced plans to fix the diversity issues in its Gemini AI's image generation, while ChatGPT has exhibited some unusual behavior, raising concerns in the AI community.