The EU AI Act moves a step closer: A crucial agreement reached

Progress has been made on the world’s first comprehensive AI law – the European Artificial Intelligence Act (AI Act). On December 8, 2023, a political agreement on the draft of the EU AI Act was reached. This milestone followed intense negotiations among the EU's legislative bodies, including the Council of the European Union, the European Parliament, and the European Commission. We are another step closer to seeing the law come into force. Here’s what you need to know as Head of Legal if you want to prepare your company for compliance.

What is the EU AI Act?

The EU AI Act is a legal framework initiated by the European Union to balance innovation in developing AI technologies with protecting fundamental rights. The regulation aligns with the broader EU Data Strategy and addresses the challenges posed by the rapid development of AI technologies.

The law was proposed in 2021, but progress slowed after new generative AI tools, most notably ChatGPT, took the world by storm. Questions about how to categorise and regulate generative AI led to ongoing debates among legislators, delaying the process.

What is the significance of the latest EU AI Act agreement?

In short – the legislators agreed on the draft text of the EU AI Act, clearing the way for its final adoption. The agreement on the AI Act showcases the EU's commitment to leading the way in AI regulation.

Introduced in April 2021, the act has undergone substantial refinement, emphasising the EU's role as a global influencer in shaping international standards, akin to the impact of the General Data Protection Regulation (GDPR)—a phenomenon often referred to as the Brussels Effect.

To quote Ursula von der Leyen, President of the European Commission:

“Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU.”

When will the EU AI Act be enforced?

The EU AI Act is inching closer, though enforcement will take some time. While the final details of the regulation are being worked out, organisations that develop or deploy AI systems should take proactive steps to prepare for compliance once the act comes into effect.

Here’s a rough timeline of how the EU AI Act will come into force over the next 6-24 months:

  • 6 months: The bans on prohibited AI systems will apply.
  • 12 months: Obligations for general-purpose AI governance will become applicable.
  • 24 months: The full force of the AI Act, encompassing obligations for high-risk systems, will be in effect.

Related: Discover what the legislation could mean for privacy and compliance in your business. Download your complete guide to the EU AI Act.

What does the EU AI Act mean for your company?

Since the agreement, the European Commission has provided guidelines to clarify the background and scope of the AI Act so you can prepare your organisation accordingly.

Key takeaways include:

  1. The EU AI Act aligns with OECD (Organisation for Economic Co-operation and Development) guidelines for AI system classification.
  2. Both public and private entities placing AI systems on the EU market or affecting EU residents are subject to the regulation.
  3. The AI Act introduces dedicated rules for general-purpose AI models (including large generative AI models) to ensure transparency. Particularly powerful models that could pose systemic risks face additional binding obligations.
  4. A new AI Office will oversee enforcement and coordinate governance among national supervisory authorities.
  5. Risk categories: the EU AI Act categorises AI systems into four levels of risk:

  • Minimal risk
    This category includes most AI systems currently in use or expected to be used. Minimal-risk applications, such as AI-enabled spam filters, fall under existing regulations without additional legal obligations as they pose minimal or no risk to citizens' rights or safety. Companies can voluntarily commit to extra codes of conduct for these systems.
  • High risk
    Limited in number, high-risk AI systems and safety components have a higher potential to impact fundamental rights and freedoms. These systems are subject to specific requirements and obligations, including, for example, carrying out a fundamental rights impact assessment, a conformity assessment and implementing risk management and quality management systems.

    Examples cover critical infrastructures (e.g., water, gas, and electricity), medical devices, educational access or recruitment systems, and those used in law enforcement, border control, and democratic processes. Biometric identification, categorisation and emotion recognition systems are also seen as high-risk.
  • Unacceptable risk
    Harmful AI uses that violate fundamental rights fall into this category and are banned outright. Examples include AI systems that manipulate human behaviour to override users' free will, such as voice-assisted toys encouraging dangerous behaviour in minors; systems enabling 'social scoring' by governments or companies; and specific predictive policing applications. Prohibited uses of biometric systems include emotion recognition in workplaces and educational institutions, real-time remote biometric identification in publicly accessible spaces, and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Limited/specific transparency risk
    Refers to certain AI systems with a risk of manipulation (e.g., chatbots). Users interacting with these systems must be aware they're dealing with a machine. Additionally, deep fakes and other AI-generated content need clear labels. Users must be informed when biometric categorisation or emotion recognition systems are used. Providers must design systems to mark synthetic content in a machine-readable format, making it detectable as artificially generated or manipulated.
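To make the four-tier structure easier to picture, here is a minimal sketch in Python. The tier names come from the act; the example use cases and their mapping are a simplified illustration only, not a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency duties apply
    MINIMAL = "minimal"             # no extra obligations

# Toy mapping for illustration; real classification requires
# a case-by-case legal analysis under the act's annexes.
EXAMPLE_USE_CASES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "recruitment screening system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unknown use cases default to minimal risk here purely to keep
    # the sketch simple; that is not how the act itself works.
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

print(classify("recruitment screening system").value)  # high
```

The point of the enum is that obligations attach to the tier, not the technology: the same underlying model can land in different tiers depending on its use case.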

Are there specific rules for generative AI applications?

According to the act, generative AI applications must comply with transparency requirements:

  • Clear indication when content is AI-generated.
  • Designing models to prevent the generation of illegal content.
  • Publishing summaries of copyrighted data used for training.
  • Systems that can pose systemic risks will be subject to additional binding obligations, including model evaluations, risk management and reporting.
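One of the requirements above, marking content as AI-generated in a machine-readable way, can be sketched as follows. The metadata schema and function name here are invented for illustration; real deployments would rely on standards such as C2PA content credentials or watermarking.

```python
import json
from datetime import datetime, timezone

def label_as_ai_generated(content: str, model_name: str) -> str:
    """Wrap content in a machine-readable provenance record
    (illustrative schema, not a real standard)."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

labelled = label_as_ai_generated("A sunset over Berlin...", "example-model")
print(json.loads(labelled)["provenance"]["ai_generated"])  # True
```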

You might also be interested: The future of AI and privacy: Examining the impacts of ChatGPT

What penalties are to be expected in case of non-compliance?

Companies that fail to comply with the EU AI Act face the following fines:

  • Up to €35m or 7% of worldwide annual turnover, whichever is higher, for non-compliance with prohibited practices.
  • Up to €15m or 3% of worldwide annual turnover for other breaches, including violations of the obligations for general-purpose AI models.
  • Up to €7.5m or 1.5% of worldwide annual turnover for supplying incorrect, incomplete, or misleading information.
  • For SMEs and start-ups, the lower of the two amounts in each tier applies.

Stay informed as Head of Legal and prepare your company for compliance

As the EU AI Act unfolds, companies need to get on board with its rules for AI. This agreement shows the EU is serious about responsible AI. Stay in the know and prepare for compliance to thrive in this new AI era. If you have compliance questions, feel free to contact DataGuard for assistance.

 

About the author

Ander Lozano Zurita

Ander Lozano Zurita is a legal expert with a focus on data privacy (EU and UK GDPR). As a Privacy Consultant at DataGuard, he is leveraging his knowledge and experience working with international companies to support mainly corporate customers and drive DataGuard’s expansion into the UK. As a lawyer, he specialised in business law and legal tech, and over the years he has gained practical experience dealing with cross-border data transfers and different privacy laws around the world. During his studies at the Instituto Tecnológico Autónomo de México and the IE University in Madrid, Spain, he was able to expand his knowledge and understanding of the GDPR. After that, he worked for three years in different international law firms where he advised customers of all sizes.

Explore more articles

Contact Sales

See what DataGuard can do for you.

Find out how our Privacy, InfoSec and Compliance solutions can help you boost trust, reduce risks and drive revenue.

  • 100% success in ISO 27001 audits to date 
  • 40% total cost of ownership (TCO) reduction
  • A scalable easy-to-use web-based platform
  • Actionable business advice from in-house experts

Trusted by customers

Canon · Hyatt · Holiday Inn · Unicef · Veganz · Burger King · First Group · TOCA Social · Arri · K Line

Get to know DataGuard

Simplify compliance

  • External data protection officer
  • Audit of your privacy status-quo
  • Ongoing GDPR support from industry experts
  • Automate repetitive privacy tasks
  • Priority support during breaches and emergencies
  • Get a defensible GDPR position - fast!
