The European Union’s artificial intelligence (AI) law, known as the EU AI Act, was given final approval by the European Parliament on March 13, making it one of the first comprehensive sets of AI regulations in the world. The purpose of the EU AI Act is to ensure that AI is trustworthy, safe, and respects fundamental rights in the EU while promoting innovation.
The legislation passed with overwhelming support: 523 votes in favor, 46 against, and 49 abstentions. Before the vote, EU Parliament members Brando Benifei and Dragos Tudorache spoke at a virtual press conference, describing it as a historic day in the journey toward regulating AI. Benifei emphasized that the legislation would promote the development of safe and human-centric AI, in line with the EU Parliament’s priorities.
The process of developing the legislation began five years ago and gained momentum over the past year as powerful AI models became more prevalent. In December 2023, after lengthy negotiations, a provisional agreement was reached; on February 13, the Internal Market and Civil Liberties Committees voted 71-8 to endorse that agreement.
Following today’s approval, minor linguistic adjustments will be made during the translation phase of the law, as EU laws are translated into the languages of all member states. The bill will then undergo a second vote in April and be published in the official EU journal, likely in May.
The EU AI Act categorizes AI systems into four groups based on the level of risk they pose to society, with riskier systems subject to stricter rules. The top category, “unacceptable risk,” outright bans AI systems that pose a clear threat to safety, livelihoods, and rights, including social scoring by governments and voice-assisted toys that encourage dangerous behavior. The next tier, “high risk,” covers applications in areas such as critical infrastructure, law enforcement, migration control, and the administration of justice.
The legislation also addresses limited-risk applications, focusing on transparency and ensuring users are aware when interacting with AI systems. Furthermore, the EU AI Act allows for the “free use” of minimal-risk AI, including AI-enabled video games and spam filters.
Lawmakers have included provisions for generative-AI models, such as AI chatbots, which have gained popularity. Developers of general-purpose AI models will be required to provide detailed summaries of the training data used and comply with EU copyright law. Additionally, deepfake content generated using AI must be labeled in accordance with the law.
The EU AI Act faced opposition from local businesses and tech companies, who expressed concerns about overregulation stifling innovation. However, upon its approval, the EU Parliament received praise from IBM, with its vice president and chief privacy and trust officer, Christina Montgomery, commending the EU for its comprehensive and smart AI legislation. She recognized the risk-based approach as aligning with IBM’s commitment to ethical AI practices and contributing to the development of open and trustworthy AI ecosystems.