Progress has been made on the world’s first comprehensive AI law – the European Artificial Intelligence Act (AI Act). On December 8, 2023, a political agreement on the draft of the EU AI Act was reached. This milestone followed intense negotiations among the three main EU institutions: the Council of the European Union, the European Parliament, and the European Commission. We are another step closer to seeing the law come into force. Here’s what you need to know as Head of Legal to prepare your company for compliance.
In this article:
- What is the EU AI Act?
- What is the significance of the latest EU AI Act agreement?
- When will the EU AI Act be enforced?
- What does the EU AI Act mean for your company?
- Are there specific rules for generative AI applications?
- What penalties are to be expected in case of non-compliance?
- Stay informed as Head of Legal and prepare your company for compliance
What is the EU AI Act?
The EU AI Act is a legal framework initiated by the European Union to balance innovation in developing AI technologies with protecting fundamental rights. The regulation aligns with the broader EU Data Strategy and addresses the challenges posed by the rapid development of AI technologies.
The law was proposed in 2021, but progress slowed after the launch of new generative AI tools – most notably ChatGPT – that took the world by storm. Questions about how to categorise and regulate generative AI sparked prolonged debate among legislators and delayed the negotiations.
What is the significance of the latest EU AI Act agreement?
In short – the legislators agreed on the draft of the EU AI Act, clearing the way for the final steps of the legislative process. The agreement showcases the EU's commitment to leading the way in AI regulation.
Introduced in April 2021, the act has since undergone substantial refinement. It underscores the EU's role as a global influencer in shaping international standards, much as the General Data Protection Regulation (GDPR) did – a phenomenon often referred to as the Brussels Effect.
To quote Ursula von der Leyen, President of the European Commission:
“Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU.”
When will the EU AI Act be enforced?
The EU AI Act is inching closer, though enforcement will take some time. While the final details of the regulation are being ironed out, organisations that develop or deploy AI systems should take proactive steps now to be ready for compliance once the act comes into effect.
Here’s a rough timeline of how the EU AI Act’s obligations will phase in over the 6-24 months after it enters into force:
- 6 months: The bans on prohibited AI practices will start to apply.
- 12 months: Obligations for general-purpose AI governance will become applicable.
- 24 months: The full force of the AI Act, encompassing obligations for high-risk systems, will be in effect.
Related: Discover what the legislation could mean for privacy and compliance in your business – Download your complete guide to the EU AI Act.
What does the EU AI Act mean for your company?
Since the agreement, the European Commission has provided guidelines to clarify the background and scope of the AI Act so you can prepare your organisation accordingly.
Key takeaways include:
- The EU AI Act aligns with OECD (Organisation for Economic Co-operation and Development) guidelines for AI system classification.
- Both public and private entities placing AI systems in the EU or affecting EU residents are subject to the regulation.
- The AI Act introduces dedicated rules for general-purpose AI models (including large generative AI models) to ensure transparency. Particularly powerful models that could pose systemic risks will be subject to additional binding obligations.
- A new AI Office will oversee enforcement and coordinate governance among national supervisory authorities.
- Risk categories. The EU AI Act categorises AI systems into four different levels of risk:
- Minimal risk
This category covers most AI systems currently in use or expected to be used. Minimal-risk applications, such as AI-enabled spam filters, remain subject to existing regulations without additional legal obligations, as they pose minimal or no risk to citizens' rights or safety. Companies can voluntarily commit to additional codes of conduct for these systems.
- High risk
Limited in number, high-risk AI systems and safety components have a higher potential to impact fundamental rights and freedoms. These systems are subject to specific requirements and obligations, including carrying out a fundamental rights impact assessment and a conformity assessment, and implementing risk management and quality management systems.
Examples include critical infrastructure (e.g., water, gas, and electricity), medical devices, systems governing access to education or recruitment, and systems used in law enforcement, border control, and democratic processes. Biometric identification, categorisation, and emotion recognition systems are also classed as high-risk.
- Unacceptable risk
AI uses deemed harmful because they violate fundamental rights; systems in this category are banned. Examples include AI systems that manipulate human behaviour to override users' free will (such as voice-assisted toys encouraging dangerous behaviour in minors), systems enabling 'social scoring' by governments or companies, and certain predictive policing applications. Prohibited uses of biometric systems include emotion recognition in workplaces and educational institutions, real-time remote biometric identification in publicly accessible spaces, and untargeted scraping of facial images.
- Limited/specific transparency risk
This category covers certain AI systems that carry a risk of manipulation (e.g., chatbots). Users interacting with these systems must be made aware that they are dealing with a machine. Deep fakes and other AI-generated content must be clearly labelled, and users must be informed when biometric categorisation or emotion recognition systems are used. Providers must also design their systems so that synthetic content is marked in a machine-readable format, making it detectable as artificially generated or manipulated (one possible approach is sketched below).
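The act does not prescribe a specific marking technology, so what follows is only a minimal illustrative sketch, in Python, of one way a provider might embed a machine-readable provenance flag in a generated image. The metadata keys (`ai_generated`, `generator`) are hypothetical examples, not labels mandated by the act.

```python
# Illustrative only: embed a machine-readable "AI-generated" flag in a
# PNG's metadata using Pillow. The key names below are hypothetical
# examples; the AI Act does not mandate a specific format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_provenance(image: Image.Image, path: str, model_name: str) -> None:
    """Save a generated image with metadata flagging it as synthetic."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable flag
    meta.add_text("generator", model_name)  # model that produced the content
    image.save(path, pnginfo=meta)

# Example usage with a placeholder image:
img = Image.new("RGB", (256, 256))
save_with_ai_provenance(img, "output.png", model_name="example-model-v1")
```

A production approach would likely rely on cryptographically signed credentials (emerging standards such as C2PA point in this direction) rather than plain metadata, which can be stripped; the point here is simply to show what 'machine-readable' can mean in practice.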
Are there specific rules for generative AI applications?
According to the act, generative AI applications must comply with transparency requirements:
- Clearly indicating when content is AI-generated.
- Designing models to prevent the generation of illegal content.
- Publishing summaries of copyrighted data used for training.
- Models that could pose systemic risks will be subject to additional binding obligations, including model evaluations, risk management, and serious-incident reporting.
You might also be interested in: The future of AI and privacy: Examining the impacts of ChatGPT
What penalties are to be expected in case of non-compliance?
Companies not complying with the EU AI Act rules face fines set as a fixed amount or a percentage of worldwide annual turnover, whichever is higher:
- Up to €35m or 7% of worldwide annual turnover for non-compliance with prohibited practices.
- Up to €15m or 3% of worldwide annual turnover for other breaches, including those for general-purpose AI models.
- Up to €7.5m or 1.5% of worldwide annual turnover for supplying incorrect, incomplete, or misleading information.
- For SMEs, the lower of the two amounts will apply (illustrated in the sketch below).
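To make the thresholds concrete, here is a small illustrative calculation for the prohibited-practices tier. It assumes, as reported for the political agreement, that the higher of the two amounts applies to larger companies and the lower of the two to SMEs; it is a sketch, not legal advice on how fines will actually be set.

```python
# Illustrative sketch: how the fine cap for prohibited practices scales
# with turnover. Assumes the higher amount applies to large companies
# and the lower amount to SMEs, as reported for the political agreement.

def prohibited_practices_fine_cap(turnover_eur: float, is_sme: bool) -> float:
    """Return the maximum fine for breaches of the prohibited practices."""
    fixed_cap = 35_000_000              # €35m
    turnover_cap = 0.07 * turnover_eur  # 7% of worldwide annual turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(prohibited_practices_fine_cap(1_000_000_000, is_sme=False))  # 70000000.0 (€70m, i.e. 7% of €1bn)
print(prohibited_practices_fine_cap(10_000_000, is_sme=True))      # 700000.0 (€700,000)
```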
Stay informed as Head of Legal and prepare your company for compliance
As the EU AI Act moves closer to taking effect, companies need to get on board with its rules. The agreement shows the EU is serious about responsible AI. Stay informed and prepare for compliance to thrive in this new AI era. If you have compliance questions, feel free to contact DataGuard for assistance.