What is the EU AI Act?


Robert Greco
October 27, 2025
15 min read

The EU AI Act is the European Union’s landmark law to regulate artificial intelligence, and the world’s first comprehensive AI legislation. Formally known as Regulation (EU) 2024/1689, it creates a legal framework to address AI’s risks while encouraging innovation.

The Act follows a risk-based approach: it imposes rules in proportion to an AI system’s potential to cause harm. By setting standards for trustworthy AI, the EU aims to ensure AI systems in Europe are safe, transparent, and respect fundamental rights.

High-Level Summary

The EU AI Act introduces requirements for AI developers (providers) and users (deployers) depending on the system’s risk level. Most obligations fall on those who provide or put high-risk AI systems on the market, even if they are based outside the EU. Users (organizations deploying AI in their operations) have fewer duties but must still ensure proper use of high-risk AI. Below is a summary of why these rules are needed, why they matter to you, and how organizations can start applying them.

Why do we need rules on AI?

AI technology can deliver huge benefits – from better healthcare to more efficient transport – but it also poses new risks. Unlike traditional software, advanced AI can operate in opaque ways: it is often difficult to determine why an AI system reached a particular decision, which makes it hard to tell whether someone was unfairly disadvantaged (for example, in a hiring decision or a public benefits application).

Existing laws haven’t fully kept pace with such challenges. The EU AI Act is designed to fill this gap by preventing harmful outcomes and ensuring Europeans can trust AI.

Put simply, the rules aim to make AI safe and accountable. They ban the most dangerous AI practices and set requirements for others to minimize risks. This helps address concerns like bias or discrimination by AI, lack of transparency, and safety failures, which are not adequately covered by current regulations.

Why should you care?

If your organization uses or develops AI systems, the EU AI Act will likely affect you. Non-compliance can result in hefty fines – up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations. But beyond fines, it’s about protecting your business and stakeholders. For example, AI tools used in HR (such as CV-sorting software for recruitment) are classified as “high-risk” and must meet strict requirements to ensure fairness. If you rely on AI for hiring, customer service, or decision-making, you’ll need to understand the Act to avoid prohibited practices (like AI that covertly manipulates users) and to maintain trust with employees and customers.
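To put that fine ceiling in concrete terms, the short sketch below (Python, illustrative only) computes the maximum exposure for a company of a given size, assuming the commonly cited reading that the cap for the most serious violations is the higher of €35 million or 7% of worldwide annual turnover.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover
print(f"Maximum exposure: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```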

Compliance teams should pay attention because the Act introduces new obligations similar to data protection laws but focused on AI. And HR teams should care because AI that profiles people or automates decisions about individuals (from job applications to performance evaluations) will be regulated to prevent discrimination and uphold transparency. In short, caring about the AI Act means caring about ethical, lawful AI use – which is increasingly critical for reputation and risk management.

How can organizations apply it?

Applying the AI Act starts with understanding your AI systems and their risk category. A practical first step is to inventory the AI applications your company uses or provides, and determine which risk level each falls under (see the risk levels below); a minimal code sketch of such an inventory follows this list. For each system:

  • If it’s high-risk: Be prepared to implement a range of controls before and while using the AI. This includes risk assessments, strict data governance to avoid bias, keeping detailed documentation and logs, human oversight, and ensuring outcomes are accurate and explainable. You will likely need to undergo conformity assessments (sometimes with a notified body) before deploying such AI, and register the system in an EU database of high-risk AI.
  • If it’s limited-risk: Plan for transparency measures, like user disclosures (e.g. tell people they are interacting with an AI chatbot) and labeling of AI-generated content. These are generally simpler obligations (discussed more below).
  • If it’s minimal-risk: No specific action is needed under the Act, but it’s still good practice to follow ethical guidelines and monitor these systems.
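To make this triage concrete, here is a minimal sketch of what such an inventory could look like in code. The class names, fields, and example entries are illustrative, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, registration, oversight
    LIMITED = "limited"            # transparency duties (disclosures, labels)
    MINIMAL = "minimal"            # no new obligations under the Act

@dataclass
class AISystemRecord:
    name: str
    purpose: str                   # e.g. "CV screening for recruitment"
    provider: str                  # vendor or internal team
    risk_level: RiskLevel
    conformity_assessed: bool = False
    registered_in_eu_db: bool = False

# Hypothetical inventory entries
inventory = [
    AISystemRecord("cv-screener", "Ranks incoming job applications", "Acme HR Tech", RiskLevel.HIGH),
    AISystemRecord("support-bot", "Customer service chatbot", "internal", RiskLevel.LIMITED),
    AISystemRecord("spam-filter", "Filters inbound email", "internal", RiskLevel.MINIMAL),
]

for system in inventory:
    if system.risk_level is RiskLevel.HIGH and not system.conformity_assessed:
        print(f"ACTION NEEDED: {system.name} needs a conformity assessment before deployment")
```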

For any AI system, organizations should integrate AI compliance into their processes. This means updating procurement checklists, vendor requirements, and internal policies to include AI Act considerations. Provide training to staff about AI transparency and risk (the Act even introduces “AI literacy” obligations, emphasizing education around AI). Leverage guidance documents and tools provided by regulators.  

For instance, an interactive compliance checker tool is available to help companies figure out if the Act applies to their AI and what obligations they have. It can guide you through questions about your AI’s purpose, users, and impact. Finally, consider seeking professional legal advice for a thorough compliance strategy – especially if you deploy AI in sensitive areas like HR, finance, or healthcare.

EU AI Act: different rules for different risk levels

Figure: the EU AI Act’s risk levels (source: European Commission AI Act website)

The EU AI Act defines four tiers of AI systems based on risk: unacceptable, high, limited, and minimal or no risk. The level of regulation increases with the risk. Unacceptable-risk AI is banned outright. High-risk AI is allowed but heavily regulated. Limited-risk AI is subject to transparency requirements, and minimal-risk AI is mostly unregulated. Below we explain each category:

Unacceptable risk

Unacceptable-risk AI systems are completely prohibited under the Act. These are AI uses considered a clear threat to people’s safety, livelihoods or rights. The law bans practices such as:

  • Manipulative or exploitative AI that materially distorts someone’s behavior in a harmful way (e.g. a toy using voice AI to trick children into unsafe acts).
  • Social scoring systems that rank people based on personal behavior or characteristics (similar to a “social credit” system).
  • Biometric identification and surveillance in public spaces, like real-time facial recognition in crowds, as well as AI that categorizes people by biometric features (e.g. by ethnicity or gender) in sensitive contexts.
  • AI that predicts criminal behavior or risk (so-called predictive policing) based solely on profiling or personal data, without any human assessment.

Some narrow exceptions exist. For instance, law enforcement may use remote biometric identification (facial recognition) in narrowly defined situations – such as searching for victims of serious crimes or preventing an imminent threat – but only with prior authorization and under strict conditions. Generally, though, these surveillance and manipulation uses of AI are banned outright to protect fundamental rights.

High risk

High-risk AI systems are those that could significantly affect people’s safety or fundamental rights if they malfunction or are misused. The Act lays out two main high-risk categories:

AI as safety components of regulated products: If AI is used in products already subject to EU safety standards – like AI in cars, aviation, medical devices, toys, or industrial machines – and the product requires a conformity assessment, then the AI is high-risk. For example, an AI that controls the braking in an autonomous car would be high-risk.

AI in critical domains with major impact on individuals: AI systems used in specific areas listed in the Act are high-risk and must be registered in an EU database before deployment. These areas include:

  • Critical infrastructure management (e.g. AI managing electricity or water supply)
  • Education and vocational training (e.g. AI grading exams or determining school admissions)
  • Employment and HR (e.g. AI for recruiting, promotion, or monitoring performance of employees)
  • Essential private/public services (e.g. credit scoring that decides on loans, or eligibility for welfare benefits)
  • Law enforcement (e.g. AI to evaluate evidence or predict criminal recidivism, excluding fully automated profiling)
  • Migration and border control (e.g. lie detection at borders, visa application screening)
  • Administration of justice (e.g. AI assisting judges or predicting legal outcomes)

Because these high-risk AI systems pose serious concerns, they come with strict obligations. Providers must assess and mitigate risks before putting such AI on the market and throughout its lifecycle. There are detailed requirements for high-risk AI, including ensuring high-quality training data (to reduce bias), logging AI operations for traceability, providing extensive technical documentation, implementing human oversight, and achieving adequate accuracy, robustness, and cybersecurity.

High-risk AI systems also require a conformity assessment (an evaluation to verify they meet all requirements) before they can be commercially deployed. Once in use, they may be subject to ongoing audits or monitoring. Importantly, people have the right to file complaints about high-risk AI with national authorities if they suspect the AI is not compliant or is causing harm. All of this ensures that high-risk AI is tightly controlled to prevent accidents or abuses.
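A compliance team might track these pre-deployment obligations with a simple checklist along the lines of the sketch below. The items paraphrase the requirements summarized above and are not the Act’s exact wording.

```python
HIGH_RISK_CHECKLIST = [
    "Risk management process established and maintained",
    "Training data governance in place (quality, relevance, bias mitigation)",
    "Technical documentation prepared",
    "Automatic logging enabled for traceability",
    "Human oversight measures defined",
    "Accuracy, robustness and cybersecurity validated",
    "Conformity assessment completed",
    "System registered in the EU high-risk AI database",
]

def outstanding_items(completed: set[str]) -> list[str]:
    """Return checklist items that still block deployment."""
    return [item for item in HIGH_RISK_CHECKLIST if item not in completed]

todo = outstanding_items({"Technical documentation prepared"})
print(f"{len(todo)} items outstanding before this system can be deployed")
```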

Limited risk

Limited-risk AI systems are those that are largely allowed but must meet transparency requirements. The idea is to make sure people know when AI is being used in certain interactions, to preserve trust and agency. Key obligations for limited-risk AI include:

  • Disclosing AI involvement: If a person is interacting with an AI system that could be mistaken for human, they should be informed. For example, chatbots or AI assistants must clearly tell users that they are AI and not human. This way, users can make informed decisions (e.g. knowing a customer service chat is automated).
  • Labeling “deepfakes” and AI-generated content: Any content (images, video, audio, or text) that is generated or significantly modified by AI should be clearly marked as such, especially deepfake-style content. If AI creates a realistic video of someone or writes an article, viewers/readers should be alerted that it’s AI-generated. This transparency helps combat misinformation and manipulation.
  • Generative AI transparency: Providers of generative AI systems (like large language models or image generators) must ensure that AI-produced outputs are identifiable as AI-made. They also have to build in reasonable safeguards to prevent the AI from producing illegal content (for instance, disallowing prompts that would generate hate speech or other illegal material) and publish a sufficiently detailed summary of the content used to train their models. This last requirement, tied to EU copyright law, means that if a model was trained on copyrighted works, the provider should disclose that information in summary form.

Aside from these transparency-focused duties, limited-risk AI is not subject to the heavy oversight that high-risk AI is. The goal is simply to keep users informed and safe when AI is operating behind the scenes. For organizations, complying with limited-risk rules often means adding the right notifications or labels in your AI-driven services.
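As a sketch of what those notifications might look like in practice, the snippet below shows one way to surface a chatbot disclosure and a visible label on AI-generated media. The wording, function names, and placement are illustrative; the Act does not prescribe any particular phrasing.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def start_chat_session(user_id: str) -> dict:
    """Open a chat session and show the AI disclosure before the first exchange."""
    return {
        "user_id": user_id,
        "messages": [{"role": "system_notice", "text": AI_DISCLOSURE}],
    }

def label_generated_media(caption: str) -> str:
    """Prepend a visible AI-generation label to captions of synthetic media."""
    return f"[AI-generated] {caption}"

session = start_chat_session("user-123")
print(session["messages"][0]["text"])
print(label_generated_media("Product demo video"))
```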

Minimal or no risk

The vast majority of AI systems today fall into the minimal or no-risk category. These are AI applications that do not pose significant risks to users’ rights or safety, so the Act does not impose any new rules on them. Examples include AI in spam filters, AI that auto-categorizes emails, or AI features in video games. Such everyday AI tools can continue to be used freely.

Of course, even minimal-risk AI must still comply with existing laws (like general consumer protection or privacy laws), but the AI Act itself does not add extra obligations for them. Organizations should still keep an eye on these systems in case their risk level changes (for instance, an AI update might introduce new capabilities that move it into a higher risk category).

Governance and implementation

Implementing a groundbreaking law like the AI Act requires new governance structures and guidance for businesses. The EU is setting up bodies and processes to enforce the Act, and providing support to help organizations comply.

Implementation

Enforcement of the AI Act will be a joint effort between EU-level and national authorities. A new European AI Office will coordinate enforcement and oversight across the EU. This office will work with regulators in each Member State (often the existing market surveillance or digital authorities) to supervise how AI providers and users are following the rules.  

An EU AI Board and a Scientific Panel of experts will also be established to advise on AI governance and technical standards. These bodies will help ensure consistent application of the law across all countries.

The European Parliament has created a dedicated AI Act implementation working group as well. This group of MEPs monitors how the rules roll out and interacts with the Commission and AI Office, making sure the Act’s enforcement stays on track and actually supports innovation in Europe’s digital sector.  

In practice, what this means for organizations is that there will be guidance and points of contact at national and EU level for AI regulation questions. As we approach full enforcement, expect your national regulators to issue more detailed instructions on compliance procedures, conformity assessments, and how to register high-risk systems.

Supporting compliance

Recognizing that these rules are new and complex, the EU has introduced measures to help companies comply. One early initiative was the AI Pact, a voluntary program launched by the European Commission to encourage companies to start following the AI Act’s key principles before the law fully kicks in. Companies signing up to the AI Pact commit to things like transparency and risk management in advance, which not only prepares them for future compliance but also helps regulators gather feedback on implementing the Act.

In mid-2025, the Commission rolled out specific tools focusing on general-purpose AI (GPAI), the large AI models that power services like chatbots and image generators. Three notable instruments were introduced:

  • Guidelines for GPAI providers: These clarify exactly which obligations apply to providers of general-purpose AI models under the Act. They break down responsibilities so that even smaller companies can understand if they fall under the rules.
  • A Code of Practice for AI: This is a voluntary set of best practices developed with industry and experts, covering areas like transparency, copyright respect, and security for AI models. Providers who follow the Code of Practice can more easily meet the Act’s expectations (and demonstrating adherence might help in showing regulators that you are acting in good faith).
  • A transparency and risk disclosure template: The Act requires that providers of large AI models publish a summary of the training data (especially noting any large datasets and sources used). A standard template has been provided to make this easier. Using this template, companies can disclose the needed info (like the kinds of data and processing techniques used in training) in a clear, consistent way; an illustrative sketch of such a disclosure follows this list.
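For illustration only, a disclosure of that kind might be structured roughly as follows. The field names here are hypothetical and do not reproduce the Commission’s official template; providers should work from the published template itself.

```python
import json

# Hypothetical structure -- not the Commission's official template.
training_data_summary = {
    "model_name": "example-gpai-model",
    "provider": "Example AI GmbH",
    "data_sources": [
        {"type": "web_crawl", "description": "Publicly available web pages", "period": "2020-2024"},
        {"type": "licensed", "description": "Licensed news archive", "licensor": "Example Publisher"},
    ],
    "processing_steps": ["deduplication", "toxicity filtering", "personal data removal"],
    "copyright_note": "Opt-outs under the EU text-and-data-mining exception were respected",
}

print(json.dumps(training_data_summary, indent=2))
```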

All these measures aim to reduce the administrative burden on businesses and provide clarity. For example, having a ready-made template or guideline means you don’t have to guess at what the regulators expect – it’s spelled out for you.  

The European AI Office and national authorities are also expected to issue further guidance, FAQs, and possibly run outreach or training sessions (some EU countries may start “AI sandboxes” as described later, which double as compliance learning environments). The overall message is that regulators want to work with businesses to implement the Act, not just drop rules on them – especially in the early years.

Compliance Checker

To assist organizations in figuring out their obligations, independent groups have created user-friendly tools. One notable example is the EU AI Act Compliance Checker, an interactive questionnaire and flowchart available online. You can input details about your AI system – such as what it does, who uses it, and how it makes decisions – and the tool will help determine if it falls under the Act and what risk category and requirements might apply.

This kind of tool is especially helpful for compliance teams in the early stage of analysis. Instead of wading through legal text, you can get a quick, tailored readout of obligations (for instance, it might tell you “your system is high-risk, so you need to implement X, Y, Z controls, and register it with authorities”).  
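The logic of such a checker can be pictured as a short decision flow. The sketch below is a deliberately rough first-pass triage, not the actual tool’s rules, and real classification still requires checking the Act’s annexes and legal review.

```python
def triage_risk(is_prohibited_practice: bool, in_listed_high_risk_area: bool,
                interacts_with_people_or_generates_content: bool) -> str:
    """Rough first-pass triage mirroring the kind of questions a compliance checker asks."""
    if is_prohibited_practice:
        return "unacceptable risk: the practice is banned outright"
    if in_listed_high_risk_area:
        return "high risk: conformity assessment, documentation and registration likely required"
    if interacts_with_people_or_generates_content:
        return "limited risk: transparency duties (disclosures, labelling) likely apply"
    return "minimal risk: no new obligations under the Act"

# Example: a CV-screening system used in recruitment
print(triage_risk(is_prohibited_practice=False,
                  in_listed_high_risk_area=True,
                  interacts_with_people_or_generates_content=True))
```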

Of course, it’s not a substitute for legal advice, but it provides a useful roadmap. The Compliance Checker is also kept up to date – as the Act is clarified through guidance and secondary measures, the tool’s logic is refined (with changelogs noting updates in 2024-2025 to match the final law).

For a compliance or HR team, using such a checker can demystify the AI Act. It’s a good starting point to then discuss with your legal counsel or technical teams what steps to take next.

Transparency requirements

Transparency is a core theme of the EU AI Act’s governance approach. Beyond the risk-based tiers, the Act explicitly mandates transparency for certain AI outputs and interactions. We touched on some of these under “limited risk,” but let’s summarize the key transparency requirements that organizations must prepare for:

  • AI-generated content disclosure: If your organization publishes or uses content (text, images, videos, audio) that was generated or significantly modified by AI, you must clearly label it as such. For instance, if you use an AI tool to create a realistic marketing video with AI-generated actors or voices, the audience should be informed that it’s AI-generated. This helps prevent deception and builds trust with consumers and the public.
  • Generative AI systems (e.g. chatbots like ChatGPT): While these aren’t automatically “high-risk,” the Act imposes specific transparency rules on them. Providers must inform users that they are interacting with AI and not a human, whenever that might not be obvious. They also need to publish summaries of copyrighted data used for training their models. In practice, this means if an AI model was trained on, say, news articles or books, the provider should release a summary listing those sources (to comply with EU copyright law and allow artists or publishers to know their work was used).
  • Safety and misuse warnings: Generative AI providers are expected to build in safeguards against illegal content generation. This is a form of transparency too – it means being transparent about the model’s limitations and steps taken to avoid misuse. Users should be made aware, for example, if the AI has content filters and what they cover.
  • Public database of high-risk AI: Although not a “label” on content, it’s worth noting as a transparency measure that the EU will maintain a database where all high-risk AI systems must be registered before deployment. This database will be accessible to the public and will list key information about each high-risk system (like its purpose, provider, conformity assessment status, etc.). For organizations, this means if you deploy a high-risk AI, some details about it will become public record. It’s intended to foster accountability and allow oversight by civil society and affected persons.

In summary, transparency requirements ensure that AI doesn’t operate in a sneaky or unaccountable way. Compliance teams should thus plan to incorporate clear disclosures at appropriate points – whether it’s a notice on a website (“This chatbot is AI-powered”), watermarks or captions on AI-generated media, or documentation published on your website about your AI systems and training data.
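One lightweight way to keep those disclosures consistent is to record them alongside each published asset, as in the hypothetical manifest entry below. The schema is illustrative, not a regulatory format.

```python
from datetime import date

def content_disclosure_record(title: str, media_type: str, generation_tool: str) -> dict:
    """Hypothetical manifest entry documenting that a published asset is AI-generated."""
    return {
        "title": title,
        "media_type": media_type,        # "image", "video", "audio" or "text"
        "ai_generated": True,
        "generation_tool": generation_tool,
        "published_on": date.today().isoformat(),
        "visible_label": "[AI-generated]",
    }

print(content_disclosure_record("Q3 campaign hero image", "image", "example-image-model"))
```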

EU AI Act compliance timeline

Application timeline

The EU AI Act was officially adopted in mid-2024, and its provisions kick in on a staggered schedule to give organizations time to adapt. Here’s the timeline for when different parts of the Act take effect:

  • August 2024: The Act entered into force (became law). This started the clock on the implementation timelines.
  • February 2, 2025: The first key provisions started applying six months after entry into force. This includes the ban on unacceptable-risk AI systems, which became legally enforceable from this date. New AI literacy obligations – requiring providers and deployers to ensure their staff have a sufficient understanding of AI – also began to apply from this date.
  • May 2025 (approx.): About nine months in, the EU is encouraging codes of practice to be in place. (In context, these would be industry codes or voluntary guidelines developed to align with the Act. The Act itself notes that some codes of conduct should be ready within 9 months to guide implementation.)
  • August 2, 2025: Rules for general-purpose AI (GPAI) – especially transparency obligations on generative AI – become applicable 12 months after entry into force. From this point, providers of large AI models need to comply with those specific requirements (like the training data transparency and safeguards).
  • August 2, 2026: The bulk of the AI Act requirements take effect 24 months after entry into force. This is the date by which most providers and users of AI need to be in full compliance for systems in scope. Think of this as the general “AI Act is now live” deadline.
  • August 2, 2027: Certain high-risk AI systems embedded in products get an extended timeline of 36 months, pushing their compliance deadline to 2027. This extra year (beyond the 2026 date) acknowledges that sectors like medical devices or automotive might need longer to update complex AI systems already in development or in use. By this date, however, all high-risk AI must comply. The European Parliament specifically noted that high-risk systems have more time to adapt, but no later than 36 months after the law enters into force.

It’s important to keep an eye on these dates. For compliance planning, 2025 is when the first obligations (like stopping any banned AI practices and meeting transparency duties for generative AI) really start.  

2026 is the big one for most obligations, and 2027 is the final cutoff for remaining high-risk use cases. The staggered approach is meant to make the transition smoother – regulators will likely issue more guidance during these phases. Make sure your organization tracks any announcements or updates from EU authorities as these milestones approach.
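For planning purposes, the key dates above can also be tracked as data. The sketch below lists the milestones summarized in this section and reports which ones are already in force on a given date.

```python
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI and AI literacy duties apply",
    date(2025, 8, 2): "Rules for general-purpose AI (GPAI) providers apply",
    date(2026, 8, 2): "Bulk of the Act's requirements apply (general compliance deadline)",
    date(2027, 8, 2): "Extended deadline for certain high-risk AI embedded in regulated products",
}

def milestones_in_force(today: date) -> list[str]:
    """Return the milestones that have already taken effect as of `today`."""
    return [text for when, text in sorted(MILESTONES.items()) if when <= today]

for item in milestones_in_force(date(2026, 1, 1)):
    print("In force:", item)
```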

Encouraging AI innovation and start-ups in Europe

A major goal of the AI Act is to strike a balance between regulation and innovation. The EU wants to support AI developers, especially start-ups and small companies, so that compliance is not a barrier to entry. In fact, the law explicitly mentions small and medium-sized enterprises (SMEs) dozens of times and includes provisions to help them.

Small Business Guide

For start-ups and SMEs, the EU AI Act provides tailored measures to ease compliance and foster innovation:

  • AI Regulatory Sandboxes: Every EU member state will establish at least one AI sandbox – a testing environment where companies can develop and experiment with AI systems under the supervision of regulators. Within these sandboxes, normal regulatory requirements are relaxed or waived temporarily. SMEs get priority access to sandboxes, which will be provided free of charge with simple, streamlined procedures. This allows start-ups to test AI solutions in real-world conditions without immediate fear of penalties, while also learning how to meet the Act’s requirements. Any compliance documentation produced during sandbox experiments can later be used to show regulators that the AI meets the rules.
  • Reduced compliance costs and fees: The Act mandates that conformity assessment and other regulatory fees be proportionate to the size of the company. Regulators will regularly evaluate and work to lower compliance costs for smaller providers. In short, an SME should not face the same compliance bill as a tech giant. This includes possibly reduced certification fees and simplified processes to lessen financial burden on start-ups.
  • Simplified documentation and training: Recognizing that small businesses have limited resources, the European Commission will develop simplified technical documentation templates for SMEs. These templates will be accepted by authorities for showing compliance, sparing SMEs from drafting complex docs from scratch. Additionally, training programs tailored to SMEs will be offered to help them understand and meet the AI Act obligations. This might include online tutorials, workshops, or helpdesk support.
  • Dedicated support and communication: SMEs will have access to dedicated channels for guidance on the AI Act. The idea is that a small business should be able to get quick answers on how the law applies to them. Whether through an official helpdesk or an AI Office contact point, there will be someone to assist with compliance questions unique to smaller players.
  • Involving SMEs in rule-making: The EU is also including SMEs in the ongoing governance of AI regulation. SMEs and start-ups will be invited to participate in standard-setting efforts and the Act’s advisory forum. This ensures that standards (for example, technical standards for AI safety or transparency) consider the perspective of smaller actors and don’t overly favor big companies. The advisory forum is expected to include seats or consultation opportunities for SME representatives.
  • Proportionate obligations for general-purpose AI providers: If an SME is providing a general-purpose AI model, the Act says obligations should be scaled to their capacity. For instance, the voluntary Code of Practice for AI has separate Key Performance Indicators (KPIs) tailored for SME providers, acknowledging that a two-person start-up cannot be expected to meet the same benchmarks as a multinational. This principle of proportionality is meant to prevent the Act from stifling small innovators.

Overall, the EU AI Act doesn’t just set rules and leave SMEs to fend for themselves; it actively encourages innovation by creating a safer space for experimentation and by lowering barriers to compliance.

If you’re a start-up founder or part of a small tech company, you should take advantage of these supports: join a regulatory sandbox if possible, use the simplified templates, and engage with the provided training and forums. Not only will this help ensure you comply with the law, it can actually improve your AI product (through feedback from regulators and early risk mitigation) and increase trust with investors and customers (who will see that your AI is being developed responsibly within the EU framework).

By understanding the EU AI Act’s requirements and timeline, compliance, legal, and HR teams can start preparing now. Whether it’s auditing your current AI tools, updating policies, or educating your staff, proactive steps will put you ahead of the curve.  

The Act is certainly detailed, but its core message is straightforward: AI in Europe should be lawful, ethical, and transparent by design.

With the resources and guidelines now emerging, organizations of all sizes can align their AI practices to these values – ensuring not only legal compliance but also more trustworthy AI outcomes for everyone involved.

Sources

The information in this article is based on the EU AI Act text and official summaries, including European Commission and European Parliament publications, as well as analysis from EU AI Act compliance resources. These sources offer further detail and can be consulted for a deeper dive into specific provisions.
