Europe’s EU AI Act: An FAQ About What We Know So Far
The European Union is debating AI regulations. While the law isn't settled, here's what you should know and how to prepare for a more regulated future.
Created on August 22|Last edited on December 5
Most observers believe the passage of Europe’s first big AI regulation framework is around the corner. The law, currently titled the AI Act, will have broad implications for businesses leveraging AI, either in their products or for internal processes. And, akin to GDPR, the AI Act will likely have broad implications for non-European companies doing business in Europe.
In this article, we’ll touch on what those implications are, how you can start preparing your organization, and how W&B can help. We’ve also outlined what we know about the law in a tidy PDF you can download with the button below.

What we're covering:
What is the EU’s AI Act all about?
What sorts of AI are considered high risk?
Are any systems banned outright?
What about lower-risk systems? Will this law affect them?
How should you prepare?
How can W&B help with this?
Conclusion
Ok, let’s get started:
What is the EU’s AI Act all about?
The AI Act is all about risk-based regulation: the higher the risk of a potential machine learning solution, the higher the scrutiny. It’s worth noting that the majority of AI systems (things like spam filters or agents in a video game) will likely not fall under this framework, as they’ve been deemed “minimal risk.”
What sorts of AI are considered high risk?
Broadly speaking, high-risk systems are ones that “negatively affect safety or fundamental rights.” This includes systems that would already fall under EU product safety legislation, like transportation or medical devices. It also includes any use cases associated with law enforcement, education, critical infrastructure, etc.
These systems will not be de facto illegal. The EU does plan to assess them before they can be marketed in Europe, however. The nature of that assessment is a bit nebulous as of this publication, but, especially considering Europe’s privacy laws are generally stricter than those in other jurisdictions, it’s wise to err on the side of caution here.
Are any systems banned outright?
Any system that violates fundamental rights will be prohibited under the law. The rubric here isn’t fully sketched out, but things like social scoring and facial recognition in public spaces are two frequently cited examples of prohibited AI under the new law.
What about lower-risk systems? Will this law affect them?
As mentioned above, for lower-risk systems, the rules are far more relaxed. Where high-risk systems will be subject to scrutiny before they’re sold in Europe, lower-risk systems mostly have to alert users that they’re interacting with an AI. Essentially, your chatbot will have to announce it’s a chatbot. Interestingly, any systems that employ “emotional recognition” or “biometric categorization” fall here as well. So, while not forbidden, you’ll have to be transparent with your users.
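In practice, that transparency obligation can be as mechanically simple as prefixing a disclosure onto a bot’s first reply. Here’s a minimal sketch of the idea; the function names and the disclosure wording are hypothetical, not language from the Act:

```python
# Illustrative sketch of an AI-disclosure wrapper for a chatbot.
# The disclosure text below is a placeholder, not official wording.
AI_DISCLOSURE = "Note: you are chatting with an automated AI assistant."

def generate_reply(user_message: str) -> str:
    # Placeholder for an actual model call; echoes for demonstration.
    return f"You said: {user_message}"

def respond(user_message: str, first_turn: bool) -> str:
    reply = generate_reply(user_message)
    # Announce the AI on the first turn so users know what they're talking to.
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply
```

However your product phrases it, the point is that the disclosure happens up front, before the user has meaningfully engaged with the system.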
How should you prepare?
First, self-assess where on the risk scale your AI solutions sit. The EU’s AI Act assessment page points to capAI as a first step here. If you’re on the higher end of the scale, you’ll likely want to start that work as soon as possible.
It’s also a solid recommendation to make sure you understand the underlying data that trained your AI systems. Datasets that contain intellectual property (which can include a lot of generative AI, depending on the underlying training data) or personally identifiable information (PII) will almost certainly receive more scrutiny than, say, an OCR model.
How can W&B help with this?
Simply put, you can log and version pretty much any part of your ML workflow with Weights & Biases. That means you’ll know, for example, which datasets trained which models, and which downstream models were derived from those models in turn. Understanding the lineage of your models is generally considered best practice, but explaining certain riskier systems may become a de facto requirement to do business globally. If you don’t currently have the ability to do that, it’s wise to get started sooner rather than later.
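To make the idea concrete, here’s a conceptual sketch (standard-library Python only, all names hypothetical) of the kind of dataset-to-model lineage record a tool like W&B maintains for you automatically: each dataset and model version gets a content fingerprint, and the record ties them together.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Content hash that identifies a specific dataset or model version."""
    return hashlib.sha256(data).hexdigest()[:12]

def lineage_record(model_name: str, model_bytes: bytes,
                   datasets: dict[str, bytes]) -> str:
    """Record which dataset versions produced which model version."""
    record = {
        "model": {"name": model_name, "version": fingerprint(model_bytes)},
        "trained_on": [
            {"dataset": name, "version": fingerprint(blob)}
            for name, blob in datasets.items()
        ],
    }
    return json.dumps(record, indent=2)

# Example: a (hypothetical) credit-scoring model and its training data.
print(lineage_record("credit-scorer", b"model-weights-v1",
                     {"applications-2023": b"raw,training,rows"}))
```

Because the fingerprints are derived from content, any change to the training data or the weights produces a new version identifier, which is exactly the property an auditor would want when tracing a high-risk system back to its inputs.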
DOWNLOAD OUR AI ACT GUIDE
Conclusion
While the EU AI Act isn’t on the books quite yet, we do have a general sense of how it will work and what you’ll need to do. For a lot of companies, thankfully, the answer is very little. But for companies whose products tangibly affect people’s everyday lives, whether it’s the train they ride or their credit score? Those companies will be subject to additional regulatory scrutiny.
Regardless, it’s advisable to have some way of demonstrating data provenance and model lineage if you don’t already. These practices are already table stakes for most large, collaborative machine learning teams because they increase velocity. But with regulators starting to put up guardrails around machine learning systems, mature MLOps practices are going to be the difference between products that launch and ones that don’t.