EU AI Act Guide 2024: Compliance, Timeline & Penalties Explained
- While the EU AI Act is now law, businesses have until February 2025 before the first major restrictions kick in.
- Most provisions take full effect by August 2026, creating a window for businesses to adapt their AI systems methodically. Yes, that might seem far off, but considering the depth of changes some systems will need, it's hardly excessive.
- Non-compliance carries serious consequences: organizations can face fines reaching €35 million or 7% of total worldwide annual turnover.
Let's be clear about one thing: artificial intelligence has settled in and is here to stay.
And now, with the EU AI Act, we finally have real, practical rules about how to use it responsibly.
It's the GDPR moment for AI.
Remember the Wild West days of AI? When businesses could use any algorithm, anywhere, without explaining how it worked? Those days are over.
For the first time, we know exactly what "responsible AI" means: not in theory, but in detailed, actionable requirements.
Read on to find out how this Act will impact your business.

What is the EU AI Act?
The EU AI Act is the world's first comprehensive law governing how businesses can develop and use artificial intelligence, setting strict rules for AI systems based on risk levels. In force since August 2024, it carries penalties of up to €35 million or 7% of global revenue for violations.
The Act doesn't just apply to European companies. Whether your business is based in the US, the UAE, or anywhere else, if your AI system touches the lives of people in the EU in any way, these rules apply to you.
The Act sorts AI systems by risk: some are prohibited outright (like social scoring systems), others are considered high-risk and need extra due diligence (think healthcare or infrastructure), and some just need to be transparent about being AI (like chatbots). More on this in a minute.
Does the EU AI Act Apply to My Business?
In short, if your AI systems affect people in the EU or operate in EU markets, these rules likely apply to your business. The EU AI Act identifies three key roles that determine how the rules affect your business.
- Providers: These are companies creating or substantially modifying AI systems. Whether you're building a machine learning model from scratch or significantly adapting an existing one, you're considered a provider.
- Deployers: This category represents businesses putting AI to work. You're a deployer if your company uses AI tools for customer service, data analysis, or decision-making. This includes everything from using simple chatbots to implementing complex automated systems. You're still responsible for using these tools in line with EU requirements when serving European users, although primary compliance responsibility for the system itself often lies with the tool provider.
- Importers and Distributors: These are the businesses that help non-EU AI systems reach European markets. They carry an important responsibility: making sure these systems meet EU requirements before they ever reach European users.
There's some good news as well, especially for smaller businesses: the EU AI Act isn't trying to squash innovation or overwhelm smaller companies. If you're running a startup or small business, you'll face lower fine caps if things go wrong, and you'll have access to special testing environments (called regulatory sandboxes) to make sure you're getting things right.
If you are still confused, ask yourself these questions:
- Do your AI systems interact with EU residents?
- Are you developing, modifying, or deploying AI tools?
- Do you handle high-risk applications like recruitment, credit scoring, or healthcare?
A "yes" to any of these suggests you'll need to align with the Act's requirements. But don't worry, we'll cover exactly what that means in the next section.
EU AI Act Risk Categories Explained
The EU AI Act creates a clear framework that helps businesses understand their obligations based on their AI system's potential impact. There are four categories you need to understand.
1. Unacceptable Risk
These are AI applications that simply aren't allowed in the EU market. This category includes systems that could seriously harm people or manipulate their behavior in dangerous ways.
For example, a social credit scoring system that rates citizens based on their behavior would be banned. Similarly, AI that uses subliminal techniques to influence people's choices or exploits vulnerabilities of specific groups, like children or elderly people, is strictly prohibited.
2. High-Risk Systems
This is where most business-critical AI applications fall. These systems can be used but need robust controls and ongoing monitoring.
For these systems, businesses need to:
- Maintain detailed documentation about system design and purpose
- Ensure human oversight of AI decisions
- Conduct thorough risk assessments
- Implement quality management systems
- Monitor performance after deployment
A hiring algorithm that screens job applications would qualify as high-risk because it significantly affects people's livelihoods. The same goes for AI systems that assess creditworthiness, detect fraud, or help make medical diagnoses.
Many AI use cases in the financial services sector fall under this category.
3. Limited Risk
These systems mainly carry transparency obligations. Take customer service chatbots or image generation tools: they need to be clearly labeled as AI, but don't require the intensive oversight of high-risk systems. The key here is transparency, meaning users should always know when they're interacting with AI rather than humans.
4. Minimal Risk
This covers AI applications with minimal impact on people's rights or safety. AI-powered spam filters or basic recommendation systems for entertainment content are some examples. While these systems still need to follow the general principles of responsible AI use, they face the lightest regulatory requirements.
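To make the four tiers concrete, here's a minimal Python sketch of how a team might run a first-pass triage of its own AI use cases. The keyword rules are illustrative assumptions only, not legal criteria; a real classification has to be checked against the Act's annexes, ideally with legal counsel.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
    HIGH = "high"                  # allowed, but with strict controls
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # lightest requirements

# Illustrative keywords only; string matching is no substitute for legal review.
HIGH_RISK_KEYWORDS = {"recruitment", "hiring", "credit scoring",
                      "medical diagnosis", "critical infrastructure"}
LIMITED_RISK_KEYWORDS = {"chatbot", "image generation"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass triage of an AI use case by keyword."""
    case = use_case.lower()
    if "social scoring" in case or "subliminal" in case:
        return RiskTier.UNACCEPTABLE
    if any(kw in case for kw in HIGH_RISK_KEYWORDS):
        return RiskTier.HIGH
    if any(kw in case for kw in LIMITED_RISK_KEYWORDS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("customer service chatbot"))  # RiskTier.LIMITED
print(triage("credit scoring model"))      # RiskTier.HIGH
```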
Now that you know whether the Act applies to you and which risk level you fall under, let's look at how to comply with the EU AI Act.
EU AI Act Compliance Requirements
Before the EU AI Act, different companies took different approaches, and it wasn't always clear what "responsible AI use" really meant. The Act changes this by setting clear expectations and specific steps businesses need to take.
Core Requirements for All AI Systems
Every business using AI, regardless of risk level, needs to start with the basics. This means creating clear documentation about your AI systems and establishing basic governance structures.
The most basic requirement is knowing your AI systems inside and out. Sounds obvious. But you'd be surprised how many businesses discover they're using more AI than they realized during their first audit. From that automated email sorter to your customer service system, they all count.
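A practical first step is keeping that inventory machine-readable. Here's a hypothetical sketch in Python; the fields are our own illustration (and the vendor name is made up), not a format mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields)."""
    name: str
    purpose: str
    vendor: str            # "internal" for in-house systems
    role: str              # your role under the Act, e.g. "provider" or "deployer"
    affects_eu_users: bool
    risk_tier: str         # outcome of your risk assessment
    last_reviewed: date

inventory = [
    AISystemRecord("email-sorter", "route inbound support email",
                   "internal", "provider", True, "minimal", date(2024, 11, 1)),
    AISystemRecord("support-chatbot", "first-line customer service",
                   "AcmeBot Ltd", "deployer", True, "limited", date(2024, 11, 1)),
]

# Surface every system that touches EU users, along with its risk tier.
for record in inventory:
    if record.affects_eu_users:
        print(f"{record.name}: {record.risk_tier} risk, reviewed {record.last_reviewed}")
```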
High-Risk System Requirements
Now, if you're using AI for something that significantly impacts people's lives, like deciding who gets a loan or who gets hired, you're in high-risk territory. This is where the Act gets serious, but for good reason.
Think about it: if an AI system were making decisions about your business, wouldn't you want to know it's being carefully monitored?
Here's what this looks like in practice (a small code sketch of the last two points follows the list):
- Data and Training Controls: You'll need to ensure your training data is high-quality and representative. For instance, if you're using AI in recruitment, your training data should include diverse candidate profiles to prevent bias.
- Risk Management Systems: This means continuously monitoring your AI systems for potential issues. A financial services company using AI for credit decisions would need regular checks to ensure their system isn't developing unfair biases over time.
- Human Oversight: Real people need to be able to supervise and override AI decisions when necessary.
- Record Keeping and Documentation: This includes keeping records of training data, methodologies, and any significant decisions made by the AI.
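As promised, here's a minimal Python sketch of those last two points: an audit trail in which every automated decision is recorded and a human reviewer can override the outcome. The structure and field names are assumptions for illustration, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """Audit-trail entry for one automated decision (illustrative)."""
    timestamp: datetime
    model_version: str
    inputs_summary: str
    ai_outcome: str
    reviewed_by: Optional[str] = None    # set when a human checks the decision
    final_outcome: Optional[str] = None  # may differ if the human overrides

audit_log: list[DecisionRecord] = []

def record_decision(model_version: str, inputs_summary: str,
                    ai_outcome: str) -> DecisionRecord:
    """Log the AI's decision the moment it is made."""
    rec = DecisionRecord(datetime.now(timezone.utc), model_version,
                         inputs_summary, ai_outcome)
    audit_log.append(rec)
    return rec

def human_override(rec: DecisionRecord, reviewer: str, final_outcome: str) -> None:
    """A real person supervises and, where necessary, overrules the AI."""
    rec.reviewed_by = reviewer
    rec.final_outcome = final_outcome

# Example: a credit decision is flagged for review and overturned.
rec = record_decision("credit-model-v3", "applicant 1042, thin credit file", "decline")
human_override(rec, "analyst.jane", "approve")
```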
Transparency Requirements
One of the EU AI Act's clearest messages is about being honest with people. Using a chatbot? Let people know they're talking to AI. Generated content through AI? Label it clearly.
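In code, this can be as simple as prepending a disclosure to the first chatbot message and tagging generated assets in their metadata. A hypothetical sketch; the generate_answer stub stands in for whatever model call you actually make:

```python
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human agent."

def generate_answer(user_message: str) -> str:
    # Placeholder for your actual model call.
    return f"Echo: {user_message}"

def chatbot_reply(user_message: str, first_turn: bool) -> str:
    """Prepend a clear AI disclosure at the start of a conversation."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

def label_generated_image(metadata: dict) -> dict:
    """Tag AI-generated content so downstream consumers can identify it."""
    return {**metadata, "ai_generated": True, "generator": "your-model-id"}

print(chatbot_reply("Where is my order?", first_turn=True))
```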
But what if you fail to comply with these regulations? The next section covers exactly that.
EU AI Act Fines and Penalties
| Violation | Fine |
|---|---|
| Use of a prohibited AI system | Up to €35 million or 7% of total worldwide annual turnover |
| Non-compliance with high-risk AI system obligations | Up to €15 million or 3% of total worldwide annual turnover |
| Non-compliance with limited-risk AI system obligations | Up to €15 million or 3% of total worldwide annual turnover |
| Non-compliance with GPAI obligations | Up to €15 million or 3% of total worldwide annual turnover |
| Providing incorrect or misleading information to authorities | Up to €7.5 million or 1% of total worldwide annual turnover |

In each case, the applicable maximum is whichever of the two amounts is higher.
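Because the cap is "whichever is higher," the percentage figure is what bites for large firms. A quick sketch of the arithmetic, using a made-up turnover figure:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Cap is the higher of a fixed amount or a share of worldwide turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# A firm with EUR 600M turnover using a prohibited system:
# 7% of 600M = EUR 42M, which exceeds the EUR 35M fixed cap.
print(max_fine(600_000_000, 35_000_000, 0.07))  # 42000000.0
```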
How to Prepare for the EU AI Act
The EU AI Act is here, deadlines are looming, and there's pressure to get things right.
For most businesses, the real challenge isn't understanding what to do; it's figuring out how to implement changes efficiently without disrupting operations.
Take identity verification, for example. If you're running a financial service or handling sensitive customer data, you're probably already using some form of AI-powered verification. Under the new Act, these systems need to be transparent, fair, and auditable. That's a tall order if you're building everything from scratch. But you don't have to.
You can meet compliance requirements by partnering with providers who've already done the heavy lifting. Think about it: why reinvent the wheel when you can use proven solutions that are built with compliance in mind?
If you are looking for API solutions, Signzy offers ready-to-use APIs for KYC, KYB, identity verification, and document validation. Explore our suite today!

Tanya Narayan
Tanya is a Product Marketing Manager at Signzy and a GrowthX Fellow, with a strong focus on SaaS and fintech. She specializes in go-to-market strategy, customer research, and positioning to help teams bring products to market effectively. She has also cleared the Company Secretary foundation level, reflecting her grounding in corporate and compliance fundamentals.

