The European Commission has formally contacted Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X to ask how they are managing the risks posed by generative artificial intelligence (AI) that could mislead voters. In a press release on March 14, the commission said it is seeking further information on the measures these platforms are taking to mitigate such risks, including "hallucinations," the viral spread of deepfakes, and the automated manipulation of services in ways that could shape voter perception.

The requests are made under the Digital Services Act (DSA), the EU's updated rulebook for e-commerce and online governance. The eight services are designated under the DSA as very large online platforms (VLOPs) or very large online search engines (VLOSEs), which obliges them to assess and mitigate systemic risks, among other provisions of the rulebook. The commission notes that its questions cover both the creation and the dissemination of generative AI content.

The commission, which enforces the DSA's rules for these designated Big Tech services, has identified election security as an enforcement priority. It has recently been consulting on election security rules for VLOPs and is drafting formal guidance in this area; it says these information requests are intended to feed into that guidance. The platforms have until April 3 to respond to the election-protection questions, which are flagged as urgent, and the EU aims to finalize the election security guidelines by March 27.

The commission also stresses that the cost of producing synthetic content is falling sharply, heightening the threat of deceptive deepfakes circulating during elections.
As a result, it is stepping up its scrutiny of major platforms capable of spreading political deepfakes at scale. Under Article 74(2) of the DSA, the commission can fine companies for supplying incorrect, incomplete, or misleading information in response to its requests, and VLOPs and VLOSEs that fail to reply at all can face periodic penalty payments. The commission is pressing ahead with these requests despite the tech industry accord announced at the Munich Security Conference in February, which targets the deceptive use of AI during elections and is backed by several of the platforms now receiving these information requests.