The EU AI Act came into force on 1 August 2024, representing the world’s first comprehensive artificial intelligence regulation. If you count yourself or your organisation as “any individual or entity that uses an AI system within their professional scope, excluding personal and non-professional activities”, you are defined as a deployer.
The EU AI Act comes with certain rules and duties for those deploying AI systems. Get ahead by learning about banned AI applications, rules for high-risk uses, and what’s expected from deployers. Let’s get started.
This article covers:
- How is AI defined within the EU AI Act?
- Which risk classification levels are defined in the EU AI Act?
- Who is a deployer under the EU AI Act?
- Which obligations do deployers of high-risk AI systems have?
- General obligations for deployers of AI systems
- What are the next steps for the EU AI Act?
- Will the EU AI Act affect the UK?
- How does AI affect your GDPR compliance journey?
How is AI defined within the EU AI Act?
Before we dive in deeper, let’s take a brief look at how the EU AI Act defines artificial intelligence. There are four main characteristics that define AI systems. They:
- Can operate with varying levels of autonomy
- May adapt as they learn
- Infer from the input they receive how to generate outputs
- Generate outputs that can influence physical or virtual environments
The last characteristic is critical: a system that can influence our environment poses risks to society, which highlights the need for regulatory involvement to avoid negative consequences for individuals.
Which risk classification levels are defined in the EU AI Act?
AI systems can threaten the fundamental rights of individuals, but there are different levels to consider, as the risks these systems carry aren’t equally severe. Let’s find out how the EU AI Act defines the different risk classification levels and what they mean for the operation of AI systems.
1. Unacceptable risk: prohibited
At the top of the risk pyramid are systems classified as posing unacceptable risks, threatening individuals’ safety, livelihood, and rights. This is why specific systems will be prohibited by law.
These include systems that manipulate or deceive individuals, exploit vulnerabilities, infer emotions in workplaces and educational institutions, scrape facial images to build facial recognition databases, categorise individuals based on their biometric data to deduce sensitive attributes, and apply social scoring.
2. High risk: permitted
Systems classified as high risk have the most requirements and obligations in the EU AI Act. They often appear in the context of safety components, biometrics, critical infrastructure, employment, and essential private and public services.
Typical use cases for high-risk AI systems are recruitment, credit checks, and admissions. If such systems are used incorrectly, the impact on individuals in these areas can be significant, which is why the EU AI Act focuses on regulating them.
3. Limited risk: permitted
Systems with limited risk are largely unregulated and are subject only to certain transparency requirements. General Purpose AI (GPAI) falls under this category unless it involves a systemic risk, such as adverse effects on public health, safety, security, fundamental rights, or society as a whole.
4. Minimal risk: permitted
AI systems with minimal risks, like chatbots and spam filters, are deemed to have little to no impact on individuals’ rights, and they are only required to fulfil some transparency requirements when interacting directly with individuals.
Related: The future of privacy: Examining the impacts of ChatGPT
Who is a deployer under the EU AI Act?
The supply chain of AI systems involves many different parties, including providers, deployers, importers and distributors, product manufacturers, authority representatives of providers, and affected persons.
This article focuses on the role of deployers who use AI systems within their organisation, including companies or government agencies; personal use is not included. Most companies will fall under this category when using AI systems in their organisation.
When you allow your staff or customers to use a specific AI system, you are deploying that system into your offering – internally or externally. That’s when you must consider the risks of doing so and the obligations that follow from them.
Watch this on-demand webinar for a deep dive: Video | The EU AI Act I: the rise of the deployers (dataguard.uk)
Which obligations do deployers of high-risk AI systems have?
AI systems classified as high risk have the most requirements in the EU AI Act. Let’s have a closer look at the obligations organisations have when deploying them.
1. Technical and organisational measures (Article 26)
If you work with the GDPR, you’re probably already very familiar with the obligation to implement technical and organisational measures (TOMs) in your organisation. Similarly, deployers must take appropriate TOMs to ensure AI systems are used as intended, following the provider’s instructions for use and ensuring proper functioning.
Watch this video: Video | What are TOMs (youtube.com)
2. Human oversight (Article 26)
Many of the concerns about high-risk AI systems revolve around negative effects on individuals’ rights, particularly discrimination. Training a system on large amounts of data doesn’t guarantee that the system is free of bias.
That’s why deployers must assign competent individuals to oversee high-risk AI systems and provide necessary resources and training for effective oversight. Implementing this human element helps keep the systems under control and spot mistakes, avoiding negative impacts on individuals, which is especially relevant for high-risk AI systems.
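To make this concrete, below is a minimal Python sketch of a human-in-the-loop gate for a hypothetical recruitment-screening system. Every name and threshold is illustrative rather than prescribed by the Act; the point is simply to show one way of routing risky outputs to a trained reviewer instead of acting on them automatically.

```python
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    """Output of a hypothetical high-risk AI screening system."""
    candidate_id: str
    score: float          # model confidence, 0.0 to 1.0
    recommendation: str   # "advance" or "reject"

def requires_human_review(decision: ScreeningDecision,
                          review_threshold: float = 0.8) -> bool:
    """Route low-confidence and all adverse outcomes to a human reviewer.

    The threshold and the review-every-rejection rule are illustrative
    policy choices, not requirements taken from the Act itself.
    """
    return decision.recommendation == "reject" or decision.score < review_threshold

def process(decision: ScreeningDecision) -> str:
    if requires_human_review(decision):
        # Hand off to a trained, competent reviewer (the Article 26 oversight role).
        return f"queued for human review: {decision.candidate_id}"
    return f"auto-processed: {decision.candidate_id}"

print(process(ScreeningDecision("c-102", 0.65, "advance")))
# -> queued for human review: c-102
```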
3. Data management (Article 26)
Deployers must ensure that input data is relevant, representative, error-free, and complete to the extent possible. Implementing TOMs and, therefore, already having a good structure in place will also help you meet this obligation.
Solid data management is important to any business. Whether you have to comply with the EU AI Act or not, you want to make sure your data is accurate and kept up to date. The efforts you already put into your data management can then be repurposed to meet your obligations as a deployer of AI systems.
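As an illustration, the sketch below shows one way an engineering team might automate basic completeness and validity checks before records reach a deployed system. It assumes tabular records held as Python dictionaries; the field names and validation rules are invented for the example.

```python
# Minimal input-data quality gate: completeness and basic validity checks.
# Field names and rules are illustrative, not mandated by the EU AI Act.
REQUIRED_FIELDS = {"applicant_id", "income", "employment_years"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in a single input record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("income") is not None and record["income"] < 0:
        issues.append("income must be non-negative")
    return issues

records = [
    {"applicant_id": "a-1", "income": 42000, "employment_years": 6},
    {"applicant_id": "a-2", "income": -5},
]
for r in records:
    problems = validate_record(r)
    if problems:
        # Flag or reject the record rather than passing bad data to the model.
        print(f"{r['applicant_id']}: {problems}")
```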
4. Continuous monitoring (Articles 26, 72, and 73)
Deployers must regularly monitor the AI system’s operation to detect anomalies or risks, follow the provider’s instructions, and immediately report any serious incidents or risks to the provider and relevant authorities.
Look at what you already do in terms of incident management and monitoring the effectiveness of the processes you’ve implemented in your organisation, and see how you can incorporate the AI element into that.
You want to consider all these obligations as one big holistic governance framework and combine them as much as possible. Monitoring how the data is put into the system is also linked to your data management, for example.
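For instance, a monitoring hook might track a simple output metric and raise an incident record when it drifts outside an expected range, feeding into the incident-management process you already run. In the sketch below, the metric, the thresholds, and the `report_incident` hand-off are all assumptions made for illustration.

```python
import statistics
from datetime import datetime, timezone

def report_incident(message: str) -> None:
    """Placeholder for your existing incident-management hand-off
    (ticketing, provider notification, reports to authorities)."""
    print(f"[INCIDENT {datetime.now(timezone.utc).isoformat()}] {message}")

def monitor_approval_rate(recent_outcomes: list[int],
                          expected_rate: float = 0.55,
                          tolerance: float = 0.15) -> None:
    """Flag drift in the share of positive outcomes (1 = approved).

    The thresholds here are illustrative; in practice they would come
    from the provider's instructions for use and your own risk assessment.
    """
    rate = statistics.mean(recent_outcomes)
    if abs(rate - expected_rate) > tolerance:
        report_incident(f"approval rate {rate:.2f} outside expected "
                        f"{expected_rate} +/- {tolerance}")

monitor_approval_rate([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # rate 0.10 -> incident
```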
You might also be interested: Discover the ultimate guide to the EU AI Act
5. Corrective actions (Article 20)
As a deployer of an AI system, you need to act on the provider’s information about necessary corrective actions related to the system, which may include withdrawing, disabling, or recalling it.
6. Logging and documentation (Articles 26 and 12)
Deployers must keep the logs generated by the AI system for at least six months. If an error occurs, you can go back and trace its source, which allows you to report incidents transparently to providers and the authorities so that corrective actions can be taken. These obligations are therefore closely linked to one another.
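A minimal sketch of the retention side of this obligation could look like the following: write structured, timestamped log entries and purge only those older than the retention window. The six-month minimum comes from the Act; the file layout and function names are illustrative.

```python
import json
import time
from pathlib import Path

RETENTION_SECONDS = 183 * 24 * 3600           # at least six months
LOG_FILE = Path("ai_system_events.jsonl")     # illustrative location

def log_event(event: dict) -> None:
    """Append a timestamped, structured event so errors can be traced later."""
    event["ts"] = time.time()
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(event) + "\n")

def purge_expired(now: float | None = None) -> None:
    """Drop only entries older than the retention window; keep the rest."""
    now = now or time.time()
    if not LOG_FILE.exists():
        return
    kept = [line for line in LOG_FILE.read_text().splitlines()
            if now - json.loads(line)["ts"] <= RETENTION_SECONDS]
    LOG_FILE.write_text("\n".join(kept) + ("\n" if kept else ""))

log_event({"system": "credit-check", "decision": "manual-review", "input_id": "a-2"})
purge_expired()
```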
7. Fundamental rights impact assessment (Article 27)
Conduct assessments to evaluate the impact of AI systems on fundamental rights, such as non-discrimination. You need to know about the fundamental rights and the effect deploying an AI system has on them in a broader context, as they go beyond the rights of privacy and data protection.
8. Worker and public information (Article 26)
Much like needing an employee privacy notice under the GDPR, deployers must inform workers and their representatives about the deployment and use of high-risk AI systems in the workplace.
9. Registration and information requirements (Articles 49 and 71)
Deployers must also register their information and that of their systems in the EU database and provide the required information. There’s more emphasis on deployers playing a part in ensuring AI systems are safe, as they have to take ownership of how the systems are used.
General obligations for deployers of AI systems
Besides the specific requirements for high-risk AI systems, deployers have some general obligations, including AI literacy and transparency.
Regardless of whether the AI systems used are classified as high-risk, there needs to be a degree of awareness and sensitivity about the potential impact and reach these systems can have on individuals, even if it might seem like a lower risk. This means that deployers have to offer training courses and take measures to develop the AI literacy of staff and individuals using AI systems on their behalf.
In addition, deployers must maintain transparency about the use of AI systems, disclosing the operation of certain systems, such as emotion recognition, biometrics, and deep fakes.
What are the next steps for the EU AI Act?
The EU AI Act came into force on August 1, 2024, but what happens next? Let’s look at the timeline ahead and at how to prepare your organisation for compliance.
Timeline on what’s to come
Provisions on prohibited AI systems take effect on February 2, 2025, meaning the use of these banned systems must be discontinued by then. On August 2, 2025, the rules for general-purpose AI models begin to apply. The general application date of the Act follows one year later, on August 2, 2026. The year after, on August 2, 2027, the rules for high-risk AI systems used as products or safety components under EU harmonisation legislation also come into effect.
Even though there is still some time ahead, it’s important to start preparing as early as possible. This way, you can ensure your organisation complies with the EU AI Act as soon as each set of provisions comes into effect.
Will the EU AI Act affect the UK?
In the UK, there is no regulation like the EU AI Act that defines roles such as AI system deployers or providers. Instead, various regulators offer their own guidelines for using AI. For example, the Information Commissioner’s Office (ICO) gives broad advice on using personal data, while the Competition and Markets Authority (CMA) provides guidance across different industries. The Financial Conduct Authority (FCA) also has specific guidelines for using AI in the financial sector.
The main focus for the next steps is on explainability and transparency. As an organisation using AI systems in the UK, you need to be able to explain how the system is being used and what impact it has on individuals. There are also other guidelines over the ethical and responsible use of AI systems, but there’s no ban on any specific AI systems like in the EU AI Act.
It’s advised that a risk assessment is conducted to understand the risks involved in using particular AI systems. Taking these results into account, you can consider whether your organisation should implement the system. Involve different parties within your organisation in the discussion, like your DPO, to gain a better understanding.
How does AI affect your GDPR compliance journey?
The EU AI Act offers a complementary approach to AI governance from a privacy perspective. Still, it’s worth refreshing your knowledge about the GDPR when preparing your organisation for the upcoming obligations regarding AI systems, as there is a significant overlap between privacy and AI. One of the main reasons is that a lot of the data AI systems process is personal data, and GDPR also addresses the privacy-related risks of these systems. The following data protection requirements apply to the use of AI systems:
Data protection principles
Ensure compliance with GDPR principles such as data minimisation and purpose limitation when handling personal data.
Rights of data subjects
Provide individuals with clear information about the use of AI systems and ensure any automated decision-making complies with GDPR provisions, protecting data subjects’ rights and providing means for them to contest decisions and seek human intervention.
Security and integrity of data
Use TOMs like anonymisation and encryption to protect personal data and ensure GDPR compliance.
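As one small example, pseudonymising direct identifiers before they reach an AI system is a common TOM. The sketch below uses keyed hashing; whether that is an adequate measure for your use case is a judgement for your DPIA, and the key handling shown here is deliberately simplified.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"  # illustrative; never hardcode in production

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash before it reaches the AI system.

    Keyed hashing is pseudonymisation, not anonymisation: whoever holds
    the key can still re-link the data, so the GDPR continues to apply
    to the pseudonymised records.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com")[:16])  # stable pseudonym prefix
```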
Interoperability between the AI Act and GDPR
Align AI Act requirements with GDPR to ensure a comprehensive compliance framework and conduct data protection impact assessments (DPIAs) as GDPR requires.
Need more information on the EU AI Act?
Everything you need to know about the EU AI Act in one place: download the ultimate guide on the EU AI Act.