
The EU AI Act: What are the obligations for providers?

Does your organisation create or develop AI systems intended for the EU market? If so, you’re considered a provider under the EU AI Act (the “Act”). Among the various parties involved in the supply chain, providers of AI systems bear the most significant obligations.  

Find out which requirements you need to meet as a provider now that the EU AI Act has entered into force on 1 August 2024. This landmark regulation is the world’s first comprehensive law on artificial intelligence. While its obligations will apply in phases, it’s important to be aware of the upcoming requirements that may affect you.

This article covers: 

  • How does the EU AI Act define AI?
  • What risk categorisation levels does the EU AI Act establish?
  • Who is defined as a provider within the EU AI Act?
  • Obligations for providers of high-risk AI systems
  • Obligations for providers of GPAI
  • Obligations for providers of limited-risk AI systems
  • The next steps for your organisation

How does the EU AI Act define AI?
 

Before exploring the specific classifications and requirements, let’s first take a closer look at how the EU AI Act defines artificial intelligence. According to the Act, an AI system is characterised by four main elements:

  • It operates with varying levels of autonomy 
  • It may adapt after deployment as it learns 
  • It infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions 
  • Its outputs can influence physical or virtual environments 

The final element is critical, as a system that has the potential to alter our surroundings poses risks to society, stressing the need for legislation to protect individuals from negative consequences. 

Related: Discover the ultimate guide to the EU AI Act 

 

What risk categorisation levels does the EU AI Act establish?  

AI systems can pose risks to individuals, but the severity of those risks varies. Let’s find out how the EU AI Act categorises the different risk levels and what this means for specific AI systems.

1. Minimal risk: permitted 

AI systems classified as minimal risk, such as chatbots and spam filters, are considered to have little to no impact on individuals’ rights. They therefore only need to meet certain transparency criteria when engaging directly with individuals.

2. Limited risk: permitted 

Systems that pose only limited risks to individuals are mostly unregulated and are only subject to some transparency requirements. General Purpose AI systems (GPAI) fall under this classification if there is no systemic risk involved, such as a negative impact on public health, fundamental rights, or society. 

Related: The future of privacy: Examining the impacts of ChatGPT

 

3. High risk: permitted 

High-risk systems are subject to the most stringent requirements and standards under the EU AI Act. These systems are often used in crucial areas such as essential commercial and governmental services, employment, safety components, biometrics, and critical infrastructure. 

Typical use cases for these systems include recruiting, credit checks, and admissions. The improper use of such systems could have significant consequences for individuals, which is why the EU AI Act focuses on regulating them. 

4. Unacceptable risk: prohibited 

Lastly, certain systems are categorised as posing unacceptable risks to people’s safety, livelihoods, and rights and are therefore banned under the EU AI Act. 

These prohibited systems include those that exploit vulnerabilities, manipulate or mislead individuals, infer emotions in workplaces or educational institutions, scrape facial images to build facial recognition databases, use biometric data to categorise people, or perform social scoring.

Who is defined as a provider within the EU AI Act?  

Many parties are involved in the AI systems supply chain, including providers and deployers. In this article, we focus on the role of providers, who create or develop AI systems and place them on the market. Providers can be companies as well as government agencies.

Remember that much like the GDPR, the EU AI Act has an extraterritorial scope. This means that even if a provider is based outside the EU, they might be required to appoint a representative within the EU. 

Watch this on-demand webinar for a deep dive: Video | The EU AI Act II: the providers awaken (dataguard.uk) 

 

Obligations for providers of high-risk AI systems  

Providers of high-risk AI systems face the most stringent requirements, as they are the ones who develop the AI systems. Let’s find out which obligations you must meet as a provider.

1. Compliance with requirements (Article 16)

Providers must ensure their high-risk AI systems comply with the requirements set out in Section 2 (Articles 8-15) of the EU AI Act.

2. Risk management system (Article 9)

As a provider, you need to establish, implement, document, and maintain a risk management system throughout the lifecycle of high-risk AI systems to effectively identify and mitigate risks—internally and externally. 
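To make this concrete, a risk management system is typically backed by a living risk register. Below is a minimal, hypothetical sketch in Python of how a provider might record and prioritise identified risks over a system’s lifecycle; the fields and scoring scale are illustrative assumptions, not something prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in a lifecycle risk register (illustrative fields)."""
    description: str
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (negligible) to 5 (severe)
    mitigation: str
    identified_on: date = field(default_factory=date.today)
    status: str = "open"     # open / mitigated / accepted

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Training data under-represents one user group",
         likelihood=3, impact=4,
         mitigation="Re-balance dataset; add fairness tests"),
]

# Review the register regularly, highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.status}] score={risk.score}: {risk.description}")
```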

Related:  What is risk management, and how can companies identify risks? 

 

3. Data and data governance (Article 10)

Data sets used by the AI system must be relevant, representative, and, to the extent possible, error-free and complete. Providers must also implement data governance measures to ensure data quality and integrity and to address potential biases in the data. Training AI systems on poor-quality data can have a lasting negative impact, especially in the context of high-risk systems.
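As an illustration, providers might run automated checks on a training set before use. The sketch below uses pandas; the column names, data, and the 30% representation threshold are hypothetical, and real data governance would go far beyond this.

```python
import pandas as pd

# Hypothetical training data with a sensitive attribute column
df = pd.DataFrame({
    "age": [25, 41, None, 33, 58],
    "income": [32_000, 54_000, 41_000, None, 75_000],
    "gender": ["f", "m", "f", "m", "m"],
})

# Completeness: share of missing values per column
missing = df.isna().mean()
print("Missing value ratio per column:\n", missing)

# Representativeness: distribution of a sensitive attribute
representation = df["gender"].value_counts(normalize=True)
print("\nGroup representation:\n", representation)

# Flag groups that fall below an (illustrative) 30% threshold
underrepresented = representation[representation < 0.30]
if not underrepresented.empty:
    print("\nWarning: under-represented groups:", list(underrepresented.index))
```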

4. Technical documentation (Article 11) 

Providers must prepare and maintain technical documentation before the system is placed on the market or put into service. This obligation follows the essential baseline of the EU AI Act, which views AI systems as products that need to be regulated.

5. Record-keeping and documentation (Articles 12, 18 & 19)

Providers are required to automatically record events (logs) and maintain these logs over the system's lifetime. Specific documentation must be retained and made available to authorities for at least 10 years to trace particular moments in the system’s lifetime in case of an error. This emphasises the long-term accountability that regulators expect for certain AI systems.
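To give a flavour of what automatic event recording can look like in practice, the sketch below writes one structured, timestamped record per inference using Python’s standard logging module. The event fields are illustrative assumptions, not a format mandated by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Log to a file that is retained according to your retention policy
logging.basicConfig(filename="ai_system_events.log",
                    level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_events")

def log_event(event_type: str, **details) -> None:
    """Write one structured, timestamped event record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logger.info(json.dumps(record))

# Example: record each inference with a reference to inputs and output
log_event("inference", input_id="req-42", model_version="1.3.0",
          output_label="approved", confidence=0.91)
```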

6. Transparency and provision of information (Article 13) 

Providers are required to inform users that they are interacting with an AI system and to provide clear instructions for its use. They must also supply deployers with detailed instructions on the operation, limitations, and risks associated with the AI system.
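As a simple illustration, a provider might ship a structured set of instructions for use to deployers and disclose the system’s nature to users up front. The sketch below is hypothetical; the fields and wording are our own, not prescribed text.

```python
# Hypothetical "instructions for use" a provider might supply to deployers;
# the fields and wording are illustrative, not prescribed by the Act.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "Pre-screening of loan applications",
    "limitations": [
        "Not validated for applicants under 18",
        "Accuracy degrades on incomplete records",
    ],
    "known_risks": ["Possible bias on thin credit histories"],
    "human_oversight": "A credit officer reviews every rejection",
}

AI_DISCLOSURE = ("You are interacting with an AI system. "
                 "A human reviews all final decisions.")

def start_session() -> None:
    # Disclose the system's AI nature before any interaction takes place
    print(AI_DISCLOSURE)

start_session()
```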

7. Human oversight (Article 14)

Providers must establish appropriate oversight measures, including human review, to ensure that AI systems function as intended and don’t pose risks to individuals' health, safety, or fundamental rights. 

This emphasises the relevance of human controls, encouraging critical scrutiny of the impact these systems can have on individuals.
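One common oversight pattern is a human-in-the-loop gate: the system acts autonomously only above a confidence threshold and otherwise defers to a person. A minimal sketch, with an illustrative threshold and a stubbed review queue:

```python
REVIEW_THRESHOLD = 0.90  # illustrative; set per your risk assessment

def request_human_review(prediction: str, confidence: float) -> str:
    # Stub: in production this would queue the case for a human reviewer
    print(f"Escalated for review: {prediction} ({confidence:.0%} confidence)")
    return "pending_human_review"

def decide(prediction: str, confidence: float) -> str:
    """Return the system's decision, deferring to a human when unsure."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction
    # Below the threshold, route the case to a human reviewer
    return request_human_review(prediction, confidence)

print(decide("reject_application", 0.72))
```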

8. Accuracy, robustness, and cybersecurity (Article 15)

Additionally, providers must ensure AI systems are accurate, robust, and secure by implementing measures to mitigate cybersecurity risks throughout their lifecycle.

9. Quality management system (Article 17) 

To ensure compliance with the EU AI Act, providers must implement a documented quality management system (QMS) related to their AI systems. If your organisation already holds an ISO 27001 certification, you may leverage your existing information security management system (ISMS) to fulfil some of the QMS requirements.

Related: 12 benefits of ISO 27001: Compliance and certification

 

10. Corrective actions (Article 20)

The EU AI Act recognises that processes can fail and that an AI system rarely works perfectly the first time. This obligation regulates how to deal with situations where a system turns out not to conform to the requirements.

In these cases, providers must take corrective actions, including withdrawing, disabling, or recalling non-conforming AI systems, and must inform the relevant parties and comply with any instructions given by competent authorities.

11. Cooperation with authorities (Article 21) 

Providers are required to cooperate with competent authorities following reasoned requests and provide information and documentation related to conformity.  

12. Authorised representatives (Article 22)

Providers established outside the EU are required to appoint an authorised representative within the EU to carry out certain tasks on their behalf.

13. Responsibilities across the AI value chain (Article 25) 

Distributors, importers, deployers, or other third parties may themselves be considered providers of high-risk systems in certain cases, for example if they put their name or trademark on a system or substantially modify it.

14. Conformity assessment (Article 43) 

Providers must ensure that the high-risk AI system undergoes the relevant conformity assessment procedure before being placed on the market. This assessment verifies that the system meets all regulatory requirements.

15. EU Declaration of Conformity (Article 47) 

This obligation is closely linked to the previous one. It requires providers to draw up and obtain an EU declaration of conformity, stating that the high-risk AI system complies with the relevant requirements of the EU AI Act. This document must be available to regulatory authorities upon request.

16. CE Marking (Article 48) 

The CE marking is a well-known conformity standard for products in the EU. Providers are required to affix it to high-risk AI systems to indicate conformity with the regulation. The marking demonstrates that the system complies with the EU AI Act and other relevant EU legislation.

 

17. Registration (Article 49) 

Providers must register themselves and their AI systems in the EU database before placing a system on the market.

18. Post-market monitoring (Article 72) 

Once your AI system is on the market, you need to establish and document a post-market monitoring system proportionate to the nature and risks of the AI technologies involved. This system must ensure continuous compliance and identify any emerging risks.
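In practice, post-market monitoring often means tracking live performance against the baseline established during conformity assessment and alerting on drift. A minimal sketch; the baseline, window size, and tolerance are illustrative assumptions:

```python
from collections import deque

BASELINE_ACCURACY = 0.94   # e.g., measured during conformity assessment
DRIFT_TOLERANCE = 0.05     # illustrative alert threshold
WINDOW = 1000              # rolling window of verified outcomes

recent_outcomes = deque(maxlen=WINDOW)

def alert(message: str) -> None:
    # Stub: in production this would notify the responsible team
    print("POST-MARKET ALERT:", message)

def record_outcome(correct: bool) -> None:
    """Record one verified prediction outcome and check for drift."""
    recent_outcomes.append(correct)
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    if BASELINE_ACCURACY - accuracy > DRIFT_TOLERANCE:
        alert(f"Live accuracy {accuracy:.2%} is below baseline; "
              "investigate and report serious incidents where applicable.")

for outcome in [True, True, False, True, False, False]:
    record_outcome(outcome)
```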

19. Reporting of serious incidents (Article 73) 

Providers must report any serious incidents to the competent supervisory authority. The reporting timeframe depends on the severity of the incident.

This obligation resembles notification duties under other regulations, such as the GDPR. Many organisations may already have processes and procedures in place that can be leveraged to comply with the EU AI Act.
 

Obligations for providers of GPAI 

The requirements for high-risk AI systems are the most extensive, but there are also requirements for providers of GPAI. Discover the requirements you need to meet if you develop and provide a GPAI model.

1. Technical documentation (Article 53) 

As with high-risk systems, providers of GPAI must create and maintain technical documentation related to the AI system. This documentation should include information on the training and testing processes and the evaluation results.  
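As an illustration, part of this information is often captured in a machine-readable “model card” stored alongside the model. A hypothetical sketch of such a record follows; the fields and values are our own, not a template from the Act.

```python
import json

# Hypothetical model card capturing training, testing, and evaluation info
model_card = {
    "model_name": "example-gpai-7b",
    "version": "2.1.0",
    "training_data": {
        "sources": ["licensed web corpus", "public-domain books"],
        "cutoff_date": "2024-03-01",
    },
    "testing": {
        "benchmarks": {"helpfulness": 0.81, "toxicity_rate": 0.002},
        "red_teaming": "external audit, Q2 2024",
    },
    "known_limitations": ["may produce incorrect facts", "English-centric"],
}

# Persist alongside the model artefacts for authorities and deployers
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```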

2. Transparency obligations (Article 50)

Providers are also required to ensure transparency when AI systems interact directly with individuals, making it clear to users that they are interacting with an AI system.

3. Human oversight (Article 14)

As with high-risk systems, GPAI providers must ensure that human oversight measures are implemented for general-purpose AI models. These measures should be appropriate to the risk and context of use.

4. Post-market monitoring (Article 72) 

Providers must establish and document a post-market monitoring system for GPAI, similar to the requirement for high-risk systems. This system should be proportional to the nature and risks of the AI system involved.

5. Assessment and mitigation of systemic risks (Article 55)

The EU AI Act recognises that GPAI might also pose systemic risks. Therefore, providers must assess and mitigate potential systemic risks that may stem from the development, market placement, or use of GPAI models. If you, as a provider, already have a risk management system in place, this obligation shouldn’t be difficult to meet.
 

Obligations for providers of limited-risk AI systems 

As the risks associated with certain AI systems are lower, the EU AI Act imposes fewer requirements on organisations developing them. When providing AI systems classified as limited risk, your obligations include the following.

1. Transparency and provision of information (Article 13) 

Transparency is key under the GDPR, and the same holds for the EU AI Act. Providers of limited-risk AI systems need to provide clear instructions on how to use their AI systems, along with the information required to ensure transparency. This way, providers help users understand the AI system’s capabilities and limitations and how their data is processed, even when no personal data is involved.

2. AI literacy (Article 4)

Providers are also obliged to take measures to develop the AI literacy of both their staff and individuals using AI systems on their behalf. 

3. Voluntary codes of conduct (Article 95) 

Providers of limited-risk AI systems are encouraged to adopt and adhere to voluntary codes of conduct to ensure the ethical and responsible use of these systems.

There are compulsory requirements because AI systems can impact our environment, rights and freedoms. Still, the EU AI Act encourages providers of any AI system to adopt best practices, balancing innovative technologies with ethical and responsible innovation. 
 

The next steps for your organisation

Now is the time for your business to prepare for the upcoming obligations of the EU AI Act to stay compliant. Evaluate your business needs and strategy regarding the implementation of the AI Act.

Identify the current or planned use of AI systems in your organisation and compare it to the risk classification levels. Proactively plan the implementation of an AI governance framework to stay ahead of the regulations. Be mindful of the staggered enforcement of the Act and prioritise which requirements and risks to address first.

Do you need any additional information about the EU AI Act? Download your complete guide on the EU AI Act and get an overview of everything you need to know.

About the author

Ander Lozano Zurita

Ander Lozano Zurita is a legal expert with a focus on data privacy (EU and UK GDPR). As a Privacy Consultant at DataGuard, he is leveraging his knowledge and experience working with international companies to support mainly corporate customers and drive DataGuard’s expansion into the UK. As a lawyer, he specialised in business law and legal tech, and over the years he has gained practical experience dealing with cross-border data transfers and different privacy laws around the world. During his studies at the Instituto Tecnológico Autónomo de México and the IE University in Madrid, Spain, he was able to expand his knowledge and understanding of the GDPR. After that, he worked for three years in different international law firms where he advised customers of all sizes.
