The European Commission is set to require tech platforms such as TikTok, X, and Facebook to identify content generated by artificial intelligence (AI) in order to protect upcoming European elections from misinformation. To address the democratic threats posed by generative AI and deepfakes, the commission has launched a public consultation on proposed election security guidelines for very large online platforms (VLOPs) and very large online search engines (VLOSEs).
The draft guidelines provide examples of potential measures to mitigate election-related risks. These include actions specific to generative AI content, planning for risk reduction before and after an electoral event, and clear guidance for European Parliament elections. Generative AI can deceive voters and manipulate electoral processes by creating and spreading inauthentic, misleading synthetic content about political figures, events, election polls, contexts, and narratives.
Open for public consultation in the European Union until March 7, the draft election security guidelines recommend alerting users on relevant platforms to possible inaccuracies in content produced by generative AI. The guidelines also suggest directing users to authoritative sources of information and state that tech giants should implement safeguards to prevent the creation of misleading content that could significantly influence user behavior.
Regarding AI-generated text, the current recommendation for VLOPs/VLOSEs is to indicate, whenever feasible, the specific sources of information used as input data in the generated outputs. This allows users to verify the reliability of the information and provide further context.
The draft guidance on risk mitigation draws inspiration from the recently approved legislative proposal, the AI Act, and its non-binding equivalent, the AI Pact.
Concerns about advanced AI systems, such as large language models, have grown since generative AI went viral in 2023 and brought tools like OpenAI’s ChatGPT to prominence. Even so, the commission did not specify when exactly companies would be required to label manipulated content under the EU’s content moderation law, the Digital Services Act.
However, Meta announced in a blog post that it plans to introduce new guidelines for AI-generated content on Facebook, Instagram, and Threads in the coming months. Any content recognized as AI-generated, whether through metadata or intentional watermarking, will be visibly labeled.