A practical guide to the EU AI Act
The first in a series on how to prepare for the coming EU AI Act
Created on January 7 | Last edited on March 24
The EU AI Act (the “Act”) is a broad and complex regulatory framework, and it’s still evolving. While the ban on prohibited AI systems starts in February 2025, the majority of the Act comes into effect in August 2026. It will require a great deal of preparation for companies around the world, with non-compliance resulting in penalties of up to 7% of annual worldwide revenues.
We’ve heard from many AI developers and companies that the EU AI Act is difficult to navigate. Much of the difficulty stems from the lack of detail on what actually satisfies the Act’s often vague requirements. For example, the Act requires the implementation of a risk management system for high risk AI systems, but no how-to manual currently exists. In contrast, the US government has focused on implementation guidance that goes hand in hand with AI regulation, such as releasing the National Institute of Standards and Technology (NIST) AI Risk Management Framework and related controls.
This is the first in a series of articles that aims to provide practical information and analysis that might be helpful to those of you navigating the Act. We will also provide updates as implementation guidance becomes clearer under this new set of regulations.
In this piece, we’ll focus on what actually falls under the EU AI Act, plus look into a real-world test case of how to react to the coming regulatory regime.
A brief overview of the EU AI Act
The EU AI Act is focused on regulating the uses of AI systems (i.e., products and services powered by AI). It applies to all AI systems, with a limited set of exclusions: AI systems used solely for national security purposes, cross-border law enforcement, scientific research, pre-production development, or personal activity, as well as certain free and open-source models.
The Act also applies to a wide range of roles: providers, deployers, importers, distributors and manufacturers of AI systems. Compliance obligations applicable to providers (who develop and supply the AI system) and deployers (who use the AI system) will be most relevant. The Act affects not only EU-based companies, but also non-EU entities offering AI systems in the EU or affecting individuals in the EU.
AI systems are categorized into four levels of risk by the EU AI Act:
- Unacceptable Risk: These AI systems are prohibited outright. Examples include systems used for social scoring by the government or ones that exploit vulnerabilities of specific groups.
- High Risk: These AI systems impact safety, fundamental rights, or critical infrastructure, and are subject to stringent compliance requirements (Articles 8-27). Systems affecting fundamental rights include those that process biometrics, are used in law enforcement, or control access to essential services.
- Limited Risk: These AI systems must be transparently disclosed to users (Article 50). This includes GenAI tools, such as chatbots or AI image generators.
- Minimal Risk: These AI systems remain unregulated and constitute a large proportion of AI using traditional machine learning techniques, such as features in word processors, recommendation engines on shopping or entertainment apps, and spam filters.
Finally, the Act regulates general purpose AI (GPAI) models. Most model builders aren’t training models at this scale, but for organizations building cutting-edge AI systems, it will mean additional reporting requirements. The obligations are broken down by training scale:
- For GPAI with systemic risk, defined as involving more than 10^25 FLOPs of cumulative compute for model training (or models otherwise designated as posing systemic risk by the EU Commission), stringent compliance requirements apply (Article 55). These include conducting model evaluations and adversarial testing, and reporting serious incidents to the AI Office and national authorities. The systemic risk category is targeted at models like GPT-4 and Llama 3 400B. (A rough way to estimate whether a training run crosses the compute threshold is sketched below.)
- For GPAI without systemic risk, defined as at most 10^25 FLOPs of cumulative compute for model training, obligations include maintaining technical and integration documentation, ensuring copyright compliance, and providing training content summaries (Article 53). In addition, any AI system built on the model is still assessed against the four risk categories above.
Note: GPAI requirements only apply to AI system providers.
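To make the compute threshold more concrete, here is a rough, back-of-envelope sketch of how a team might estimate whether a training run approaches the 10^25 FLOPs mark. It relies on the common approximation of about 6 × parameters × training tokens for dense transformer training compute; the parameter and token counts below are hypothetical, and a real determination would rest on the compute actually used.

```python
# Back-of-envelope check against the Act's 10^25 FLOPs systemic-risk threshold.
# Uses the common ~6 * parameters * training-tokens approximation for dense
# transformer training compute; the figures below are illustrative guesses.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

examples = {
    # (parameters, training tokens) -- hypothetical values for illustration only
    "small in-house model": (7e9, 2e12),
    "frontier-scale model": (4e11, 1.5e13),
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    over_threshold = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> above systemic-risk threshold: {over_threshold}")
```

On these assumed numbers, the small model lands well below the threshold, while the frontier-scale run would exceed it and trigger the Article 55 obligations.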
Let’s explore a scenario that demonstrates the interplay between the roles and categories of risk under the Act.
Example scenario
The recruiting department at a large, global corporation (we’ll call it Real Corp) is evaluating a tool that scores resumes for thousands of open positions, including those located in the EU. The tool was developed by a vendor (we’ll call it VCorp) using an open-source LLM.
Which relevant roles can be identified under the EU AI Act?
The providers in this scenario are the organization that makes available the LLM and the vendor who developed the scoring tool and wants to sell it to Real Corp. The deployer is Real Corp, who will be using the tool to score resumes.
Are there any relevant exclusions?
Activities such as fine-tuning, training, and setup fall under the exclusion for pre-production development, because they take place before the tool is deployed by Real Corp in the EU. However, since the tool may well end up in production, each provider and deployer should be prepared to meet the requirements of the Act.
What is the category of risk?
Since Real Corp will use this tool to help it score resumes, this falls into the high risk category: employment and worker management, including the recruitment and screening of job applicants, is a listed high risk use case under Annex III of the Act.
It’s worth noting that a nominally high risk use case can still be exempted if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of the decision making.” For instance, if the AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity, an exemption could apply.
For our purposes, we’ll assume that this employment use case falls into the high risk category.
What are the “high risk” obligations of the vendor?
As a provider, the vendor (VCorp) must comply with Articles 8-15 of the Act by implementing the following requirements:
- A comprehensive risk management system
- Proper data governance that ensures data quality and prevents biases
- Technical documentation demonstrating VCorp’s compliance with the Act
- Logging capabilities that help identify situations where the AI system may present a risk and that support production monitoring (see the sketch after this list)
- Transparent operation and instructions, enabling users to interpret output and use it appropriately
- Design for effective human oversight
- Appropriate levels of accuracy, robustness, and cybersecurity
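To illustrate the logging requirement above, here is a minimal sketch of what structured event logging for a tool like VCorp’s resume scorer might look like. The field names and the human-review flag are our own illustrative assumptions; the Act does not prescribe a log format, only that logs make risky situations traceable and support post-market monitoring.

```python
# Minimal sketch of structured event logging for a high-risk AI system.
# Field names are illustrative assumptions, not a format prescribed by the Act.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("resume_scorer.audit")

def log_scoring_event(model_version: str, input_reference: str,
                      score: float, flagged: bool) -> None:
    """Record one scoring decision as a single JSON line."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        # A pointer to the resume, rather than raw personal data, keeps the log lean.
        "input_reference": input_reference,
        "score": score,
        "flagged_for_human_review": flagged,
    }
    logger.info(json.dumps(event))

# Example: flag a borderline score so a human reviewer is pulled in.
log_scoring_event("scorer-v1.3", "application-2025-00042", score=0.48, flagged=True)
```

Records like these also feed the human oversight and transparency obligations, since they let reviewers trace which model version produced which decision.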
What about the organization that released the LLM?
If the model is purpose-built and not a GPAI model, it remains subject to the provider obligations under the high risk category.
Since, in our scenario, the model is a free, open-source GPAI model without systemic risk (meaning it was trained with less than 10^25 FLOPs of cumulative compute and was not otherwise designated as posing systemic risk by the EU Commission), only two obligations under Article 53 apply: copyright compliance and publication of a summary of the content used for model training. In addition, it is subject to the provider obligations under the high risk category.
The vendor (VCorp) fine-tuning or enhancing the LLM is not subject to GPAI requirements, but must follow the rest of the Act.
For GPAI with systemic risk, obligations are much more onerous—we’ll address that in an article later in our series.
What are the “high risk” obligations of Real Corp?
As the deployer, Real Corp must comply with the obligations in Article 26, including the following:
- Follow VCorp’s instructions for use
- Assign human oversight to qualified people with the necessary support
- Input data that is relevant and representative of the intended purpose
- Monitor for risk and incidents
- Keep logs for at least 6 months (a retention-check sketch follows this list)
- Conduct impact assessments where required: a data protection impact assessment (DPIA) under the GDPR and, where applicable, a fundamental rights impact assessment as described in Article 27
- Cooperate with regulatory authorities
- Inform affected persons, such as the job applicants
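As a small illustration of the log-retention point above, here is a sketch of a retention check a deployer might apply before purging logs. The six-month floor comes from the Act; the longer internal retention period is an assumed policy choice, not a requirement.

```python
# Retention-check sketch for deployer-held logs. The six-month minimum reflects
# the Act; the 24-month internal policy below is an illustrative assumption.
from datetime import datetime, timedelta, timezone

MINIMUM_RETENTION = timedelta(days=183)   # at least six months under the Act
POLICY_RETENTION = timedelta(days=730)    # example internal policy (~24 months)

def may_purge(log_created_at: datetime, now: datetime | None = None) -> bool:
    """A log entry may only be purged once both retention periods have passed."""
    now = now or datetime.now(timezone.utc)
    return (now - log_created_at) > max(MINIMUM_RETENTION, POLICY_RETENTION)

# Example: a log written 30 days ago must still be kept.
recent = datetime.now(timezone.utc) - timedelta(days=30)
print(may_purge(recent))  # False
```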
How to start preparing for the EU AI Act
First and foremost, start by categorizing your AI systems. Providers and deployers in particular should identify their AI systems and use cases, assess which risk category each AI system belongs to, and, for GPAI models, determine whether they involve systemic risk; a starting-point inventory is sketched below. As we move through the year, it’s prudent to continuously monitor for EU Commission guidance on implementation of the Act; note that the official implementation guidance timeline has been updated with additional key dates here. We will also be providing our commentary and analysis as the guidance becomes available.
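If it helps to make that first step concrete, below is a hypothetical starting-point inventory expressed in code: each AI system is tagged with a role, a risk tier, and GPAI flags so compliance work can be triaged. The systems, fields, and triage rule are illustrative assumptions rather than a prescribed format.

```python
# Sketch of an AI system inventory using the Act's four risk tiers plus GPAI flags.
# The systems listed are hypothetical examples.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    role: str                      # "provider" or "deployer"
    risk_tier: RiskTier
    is_gpai: bool = False
    gpai_systemic_risk: bool = False

inventory = [
    AISystem("resume scoring tool", "deployer", RiskTier.HIGH),
    AISystem("customer support chatbot", "provider", RiskTier.LIMITED),
    AISystem("internal spam filter", "deployer", RiskTier.MINIMAL),
]

for system in inventory:
    needs_work = system.risk_tier is not RiskTier.MINIMAL or system.is_gpai
    print(f"{system.name} ({system.role}): {system.risk_tier.value} risk "
          f"-> compliance work needed: {needs_work}")
```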
No information contained in this article should be construed as legal advice from Weights & Biases or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter.