In a landmark decision, European Union member states on Tuesday gave their final approval to the AI Act, the world’s first major legislative framework designed to regulate artificial intelligence. This pioneering law sets comprehensive rules for the use and development of AI technologies, aiming to balance innovation with essential safeguards.
“The adoption of the AI Act is a significant milestone for the European Union,” stated Mathieu Michel, Belgium’s Secretary of State for Digitization. “With the AI Act, Europe emphasizes the importance of trust, transparency, and accountability when dealing with new technologies while ensuring that this fast-changing technology can flourish and boost European innovation.”
The AI Act introduces a risk-based regulatory approach, categorizing AI applications according to the level of risk they pose. Applications deemed “unacceptable,” such as social scoring systems, predictive policing, and emotion recognition in schools and workplaces, are banned outright. High-risk AI systems, including those used in autonomous vehicles, medical devices, financial services, and education, will undergo stringent scrutiny to protect public safety and fundamental rights.
Impact on U.S. Tech Giants
The new regulations are expected to have profound implications for companies worldwide, especially for major U.S. tech firms that operate within the EU. Matthew Holman, a partner at the law firm Cripps, highlighted the unprecedented nature of the AI Act. “The EU AI Act is unlike any law anywhere else on earth,” he explained. “It establishes a detailed regulatory regime for AI for the first time.”
Holman noted that U.S. technology companies have been closely monitoring the development of this legislation. “There has been substantial investment in public-facing generative AI systems, and these companies will need to ensure compliance with the new, sometimes onerous, requirements,” he added.
The EU Commission will enforce the law, with potential fines for non-compliance reaching up to 35 million euros ($38 million) or 7% of a company’s annual global revenue, whichever is higher. The necessity for updated legislation became clear after the launch of OpenAI’s ChatGPT in November 2022, which exposed gaps in existing laws concerning advanced AI capabilities and the use of copyrighted material.
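As a rough illustration of how the “whichever is higher” penalty ceiling works, the short Python sketch below compares the flat 35 million euro cap against 7% of a hypothetical company’s annual global revenue; the revenue figure is invented for the example and does not refer to any real firm or to official guidance.

```python
# Illustrative sketch only: the revenue figure is hypothetical and the thresholds
# are taken from the figures reported above (35 million euros or 7% of revenue).
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Return the reported upper bound on an AI Act fine: the higher of a
    flat 35 million euros or 7% of annual global revenue."""
    flat_cap = 35_000_000                            # flat ceiling in euros
    revenue_cap = 0.07 * annual_global_revenue_eur   # 7% of global revenue
    return max(flat_cap, revenue_cap)

# Hypothetical firm with 2 billion euros in annual global revenue:
# 7% of 2,000,000,000 = 140,000,000, which exceeds the 35,000,000 flat cap,
# so the ceiling in this example would be 140 million euros.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```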
Implementation Process
Despite its adoption, the AI Act’s stringent requirements will not take effect immediately. Dessi Savova, a partner at Clifford Chance, pointed out that the restrictions on general-purpose AI systems, which include generative AI technologies, will apply 12 months after the Act enters into force. Furthermore, currently available generative AI systems, such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, will have a 36-month transition period to achieve full compliance.
“Agreement has been reached on the AI Act, and now the focus must shift to its effective implementation and enforcement,” Savova commented. This phased approach aims to give companies sufficient time to adapt to the new regulations, ensuring a smooth transition while maintaining the integrity and safety of AI innovations within the EU.
This new legislative framework marks a significant step in global efforts to regulate AI, setting a precedent that could influence future policies worldwide.