23 Apr 2026

EU AI Act Compliance: How to Comply with the AI Act?

If you are building or deploying AI in the EU, you probably face the same compliance problem as everyone else: the obligations under the EU AI Act are already live, some deadlines have already passed, and the gap between what organizations have documented and what regulators will expect to see is wider than most compliance leads realize. A vendor building a recruiting algorithm, an HR team deploying that tool, and a cloud provider distributing the underlying model into the EU can each face completely different legal obligations under the same regulation. This guide maps out what EU AI Act compliance actually looks like for each role, from risk classification to enforcement timelines, so your team can act on the right obligations at the right time rather than discovering gaps when it is already expensive to close them.

Key Takeaways

  • The EU AI Act entered into force on August 1, 2024, with compliance obligations phased in over time, from prohibited practices, enforceable since February 2025, to the full high-risk framework arriving in August 2026 and August 2027.
  • Compliance obligations vary by role in the AI value chain, with providers responsible for conformity assessments and technical documentation while deployers must implement human oversight and monitor system operations.
  • Organizations must start by building a comprehensive AI inventory, classifying each system’s risk level, implementing role-based controls, ensuring AI literacy among staff, and establishing vendor governance with strong contractual rights.

Your EU AI Act obligations are already live, and most teams are still figuring out what those obligations actually are. 👇

What Is the EU AI Act and Why Does It Matter?

The European AI Act is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies to any organization that develops, deploys, distributes, or imports AI systems affecting people in the European Union, regardless of where that organization is based. A company headquartered in the United States, Canada, or Singapore that sells AI-powered software to EU customers is within scope. The regulation is extraterritorial in exactly the same way as GDPR, and the alignment between the two frameworks is not coincidental. Both share the same underlying logic: regulate based on impact on people, not based on where the technology was built.

The Artificial Intelligence Act is built on one core idea: regulation should be proportionate to risk. The framework protects fundamental rights and safety while leaving space for lower-stakes applications to operate with minimal friction. In practice, your compliance burden depends almost entirely on what your system does and who it affects.

The four risk categories under the European AI Act:

  • Unacceptable risk: outright banned, with no compliance pathway. Examples: social scoring by public authorities, manipulative subliminal systems, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools.
  • High risk: strict pre-market requirements and ongoing monitoring. Examples: employment screening, credit scoring, law enforcement AI, educational assessment, essential public services.
  • Limited risk: transparency duties; the system must disclose AI involvement. Examples: chatbots, deepfake generators, emotion recognition tools outside banned contexts.
  • Minimal or no risk: largely outside the regulatory perimeter. Examples: spam filters, AI-powered video games, low-impact recommendation systems.

Most AI applications in the EU fall into the minimal-risk category. The real compliance burden lands on high-risk use cases, and that boundary is more nuanced than it first appears. The same underlying model can land in different risk buckets depending on the deployment context. A general-purpose language model may be low-risk on its own. Embed it into a hiring assistant that ranks candidates, and that application moves into high-risk territory. The EU AI Act requires you to trace the downstream use case, not evaluate the model in isolation. Risk assessment is an ongoing discipline tied to how systems are used in production, not a box you check at launch.

What Are the Key Compliance Requirements Under the EU AI Act?

EU AI Act compliance requirements scale with risk. For most organizations, the focus lands on high-risk AI systems, where the burden is substantial and begins well before launch.

What do providers of high-risk AI systems need to do?

Providers, meaning entities that develop or place high-risk AI systems on the market, must complete a conformity assessment demonstrating the system meets mandatory technical and governance standards before deployment. The EU AI Act high-risk requirements cover seven areas:
  • Risk management. Identify and document potential harms throughout the system lifecycle, then implement controls to reduce those risks to acceptable levels. Risk management is not a one-time audit. It is a continuous process that runs from design through post-deployment monitoring.
  • Data governance. Training and operational datasets must be relevant, representative, and free from bias that could produce discriminatory outcomes. Your team must document data sources, how data was collected, how it was prepared, and the rationale behind preparation choices. This is one of the areas where the Europe Artificial Intelligence Act technical documentation requirements become operationally demanding quickly.
  • Technical documentation. Development logs, testing results, dataset descriptions, model architecture details, accuracy metrics, known limitations, and governance decisions must all be maintained and accessible. Auditors treat technical documentation as primary evidence during conformity assessments.
  • Transparency and user information. Clear instructions covering how the system works, what it is designed to do, what its limitations are, what operating conditions it requires, and how to deploy it safely. This documentation must be written for the deployer, not just for technical teams internal to the provider.
  • Human oversight. Specific people must be assigned to monitor the system, interpret its outputs, and intervene when something goes wrong. Those individuals need training, authority, and the tools to act. Assigning oversight on paper without equipping those people to actually exercise it does not satisfy the requirement.
  • Post-market monitoring. Track real-world performance, respond to incidents, and report serious malfunctions to the relevant national authorities. Monitoring plans must be designed before launch, not retrofitted after something goes wrong.
  • Quality management system. Structured internal processes covering the full AI lifecycle, from design review gates to post-deployment feedback loops. This is a governance architecture requirement, not just a documentation task.
For certain high-risk AI systems, registration in the EU database before going live is also required. The database is still being built out, but providers of systems in Annex III categories should monitor its development closely.
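
To keep those seven areas auditable in practice, some provider teams track them as a single evidence manifest per system. Here is a minimal sketch in Python; the field names, paths, and structure are illustrative assumptions, not a schema mandated by the Act or its harmonized standards:

```python
# Illustrative only: one way to track the seven high-risk requirement areas
# as a reviewable, version-controlled record per system.
from dataclasses import dataclass, field

@dataclass
class HighRiskEvidencePack:
    system_id: str
    risk_management_file: str                  # living risk register, design through post-market
    data_governance_record: str                # data sources, collection, preparation rationale
    technical_documentation: str               # architecture, metrics, known limitations
    deployer_instructions: str                 # transparency docs written for the deployer
    oversight_owners: list[str] = field(default_factory=list)  # named, trained people
    monitoring_plan: str = ""                  # post-market monitoring, designed pre-launch
    qms_procedures: str = ""                   # quality management system references

pack = HighRiskEvidencePack(
    system_id="hiring-ranker-v2",
    risk_management_file="risk/hiring-ranker-v2.md",
    data_governance_record="data/provenance-2026Q1.md",
    technical_documentation="docs/tech-file-v2.pdf",
    deployer_instructions="docs/instructions-for-use-v2.pdf",
    oversight_owners=["ml-safety-lead", "product-compliance-owner"],
)
```

An empty field in a record like this is a visible gap long before an auditor asks for the corresponding artifact.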

What do deployers of high-risk AI systems need to do?

When your team deploys a high-risk system built by a third party, the obligations are different but equally real. EU AI Act requirements for deployers include following the provider’s instructions for use, ensuring input data is relevant and representative for your specific use case, assigning human oversight, monitoring system operation, and notifying individuals when AI is used to make or assist decisions that affect them significantly.

Public authorities and essential service providers must also conduct a fundamental rights impact assessment before first use. This assessment evaluates how the system could affect the rights of the people it touches, including rights related to non-discrimination, privacy, due process, and access to services.

One point that gets missed in most compliance programs: deployers cannot offload all responsibility onto the vendor. If something goes wrong with a high-risk system your team is operating, the fact that you purchased it from a third party is not a defensible position on its own. Your organization is accountable for how it deploys the system, how it governs it, and how it responds when issues arise. Compliance infrastructure such as aqua cloud helps deployers build the operational evidence trail that demonstrates active governance rather than passive reliance on vendor assurances.

What do general-purpose AI model providers need to do?

General-purpose AI models, meaning models trained on broad data and capable of performing a wide range of tasks, face their own compliance track. Providers must maintain technical documentation, share information with downstream providers and deployers, implement a copyright compliance policy, and publish a summary of training data. Where a GPAI model is assessed as carrying systemic risk, additional obligations apply: adversarial testing, incident reporting to the Commission, cybersecurity measures, and energy efficiency reporting.

The Code of Practice for General-Purpose AI, published in July 2025 and endorsed by the Commission and AI Board, provides a structured path for GPAI providers to demonstrate compliance with these obligations before harmonized standards are finalized.

How Is the EU AI Act Enforced and Who Is Responsible?

Enforcement of the EU’s artificial intelligence regulation operates through a coordinated structure involving both Member States and a new EU-level authority. Each Member State designates national competent authorities responsible for supervising AI systems within their jurisdiction. These authorities can investigate complaints, conduct audits, request documentation from providers and deployers, and impose administrative penalties.

The European AI Act established the European AI Office, which sits within the European Commission. The AI Office provides centralized oversight specifically for general-purpose AI models and systemic-risk systems that operate across borders, where fragmented national enforcement would be inadequate. It chairs the AI Board, which brings together representatives from all Member States to align on interpretation and application of the Act. That coordination mechanism exists specifically to prevent organizations from playing regulatory arbitrage across different national authorities.

Commission enforcement powers over GPAI governance provisions became active on 2 August 2025. The full enforcement toolkit, covering the complete high-risk framework, becomes operational in August 2026.

Penalties under the European Union AI Act are structured in three tiers:

  • Prohibited practice violations: up to €35 million or 7% of worldwide annual turnover, whichever is higher
  • Other operator obligation violations: up to €15 million or 3% of global turnover
  • Incorrect or misleading information provided to authorities: up to €7.5 million or 1% of global turnover

Authorities assess penalties case by case, weighing the nature and gravity of the violation, whether there was intent, the degree of cooperation, and whether the organization is an SME or startup. For large enterprises, the percentage-of-turnover calculation typically produces the higher figure. The structure mirrors GDPR enforcement, where headline figures set the stakes and actual penalties depend on the specifics of each case.
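
To make the "whichever is higher" rule concrete, here is a small illustrative calculation using the statutory maximums above. This is a sketch of the cap only; as noted, actual fines are set case by case:

```python
# Statutory maximums from the Act's three penalty tiers. The applicable cap
# is the higher of the fixed amount and the turnover percentage.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # €35M or 7% of worldwide turnover
    "operator_obligation": (15_000_000, 0.03),    # €15M or 3%
    "misleading_information": (7_500_000, 0.01),  # €7.5M or 1%
}

def max_penalty(tier: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, share = TIERS[tier]
    return max(fixed_cap, share * worldwide_turnover_eur)

# For a company with €2B worldwide turnover, 7% is €140M, so the
# percentage figure governs, as it typically does for large enterprises.
print(f"€{max_penalty('prohibited_practice', 2_000_000_000):,.0f}")  # €140,000,000
```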

What Is the EU AI Act Implementation Timeline?

The Act rolls out in phases, and different parts of your AI operation hit different deadlines. Missing the fact that certain obligations are already enforceable is one of the most common mistakes compliance teams make when they start this process late.

  • 1 August 2024: the Act entered into force
  • 2 February 2025: prohibited practices ban and AI literacy obligations enforceable
  • 2 August 2025: general-purpose AI model rules and governance provisions
  • 2 August 2026: most remaining provisions, including the full high-risk framework
  • 2 August 2027: high-risk AI embedded in safety-critical regulated products under Annex I

One important caveat: in November 2025, the European Commission published its Digital Omnibus on AI proposal, which links high-risk AI obligations to the availability of harmonized standards. If adopted, the backstop dates would shift to December 2027 for Annex III systems and August 2028 for Annex I systems. The European Parliament voted to support aspects of a delayed application in March 2026. Until the amendment is formally adopted, treat the current legal dates as the operative baseline and build toward them.

The phased structure matters because EU AI Act compliance cannot be treated as a single project with one due date. Different systems within the same organization may hit different deadlines. Obligations must be mapped to specific systems and tracked independently. Implementation deadlines in QA rarely align neatly with regulatory timelines, which makes early inventory and classification work more valuable than any last-minute compliance sprint.
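
One way to track that mapping is a small per-system deadline table. A minimal sketch, using the current legal baseline dates from the list above and hypothetical system names:

```python
# Illustrative only: mapping each system to the obligations and dates that
# apply to it, so deadlines are tracked independently per system.
from datetime import date

DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_rules": date(2025, 8, 2),
    "high_risk_framework": date(2026, 8, 2),
    "annex_i_products": date(2027, 8, 2),
}

systems = {                                  # hypothetical inventory entries
    "support-chatbot": ["gpai_rules"],
    "hiring-ranker": ["high_risk_framework"],
    "device-safety-module": ["annex_i_products"],
}

for name, obligations in systems.items():
    for ob in obligations:
        print(f"{name}: {ob} applies from {DEADLINES[ob]}")
```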

Harmonized standards, the technical specifications that create a presumption of conformity when followed, are still in development. Notified bodies for conformity assessments are still being accredited. The compliance infrastructure is maturing, but that is a reason to build defensible internal controls now, not a reason to wait for the ecosystem to settle.

[Figure: key milestones for EU AI Act compliance]

What Are the Steps to EU AI Act Compliance?

Step 1: Build a complete AI inventory

Compliance starts with visibility. You cannot classify risk, assign controls, or track obligations for systems you do not know exist. Catalog every AI system, model, tool, and automated decision-support mechanism your organization develops, procures, or deploys. Go beyond officially sanctioned AI projects and include vendor tools, open-source models, embedded third-party components, and shadow IT.

For each entry, document who the provider is and who the deployer is, the specific use case and end users, whether outputs influence decisions about natural persons, whether the system processes personal data, and which domain it operates in. The inventory is a living document. Every new procurement decision, every new model integration, every new use case expansion needs to be assessed and logged.
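
As a sketch, a single inventory entry might capture the fields above like this. The names and values are hypothetical, not a mandated format:

```python
# One illustrative inventory entry covering the fields described above.
inventory_entry = {
    "system_name": "candidate-screening-assistant",   # hypothetical system
    "provider": "ThirdPartyVendor GmbH",              # who built and markets it
    "deployer": "internal HR department",             # who operates it
    "use_case": "ranks inbound job applications",
    "end_users": ["HR recruiters"],
    "influences_decisions_about_people": True,        # key high-risk signal
    "processes_personal_data": True,
    "domain": "employment",                           # an Annex III domain
    "source": "vendor procurement",                   # vs. in-house or shadow IT
    "last_reviewed": "2026-04-01",                    # living document, re-assessed on change
}
```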

Without this inventory, your team has no foundation for anything else. Risk classification, control assignment, vendor governance, and audit readiness all depend on knowing what your organization is actually running.

Step 2: Classify risk and identify regulatory trigger points

Once the inventory exists, build a classification workflow that every AI initiative passes through. Screen each system across four dimensions.

First, prohibited practice risk. The February 2025 EU AI Act guidance from the Commission gives your team enough detail to build a reliable screen for banned applications today. Any system that scores or ranks people based on social behavior across unrelated contexts, manipulates users through subliminal techniques, exploits vulnerabilities of specific groups, or conducts real-time biometric identification in public spaces without a narrow legal exception is out.

Second, high-risk status. Check Annex I, covering AI used as safety components in regulated products like medical devices, vehicles, and infrastructure, and Annex III, covering standalone uses in sensitive domains including employment, credit decisions, law enforcement, education, access to essential services, and border control. A system that screens job applicants, ranks students, or determines access to financial services is likely in high-risk territory regardless of how its underlying model is technically described.

Third, transparency triggers. Chatbots must disclose they are AI. Deepfake generators must label synthetic content. Emotion recognition systems must notify the people being assessed. These obligations apply independently of the risk tier.

Fourth, GPAI dependencies. Identify where your systems rely on foundation models or general-purpose AI components, and trace what obligations flow to your organization as a downstream deployer of those models.
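
A minimal sketch of how those four screens could be wired together against the Step 1 inventory. The field names are hypothetical, and the output is a routing aid for compliance review, not a legal determination:

```python
ANNEX_III_DOMAINS = {"employment", "credit", "education", "law_enforcement",
                     "essential_services", "border_control"}
TRANSPARENCY_TYPES = {"chatbot", "deepfake_generator", "emotion_recognition"}

def screen(entry: dict) -> list[str]:
    flags = []
    # 1. Prohibited practice: no compliance pathway, stop here.
    if entry.get("prohibited_practice"):
        flags.append("PROHIBITED: no compliance pathway, do not deploy")
    # 2. High-risk status: Annex I safety components or Annex III domains.
    if entry.get("domain") in ANNEX_III_DOMAINS or entry.get("annex_i_safety_component"):
        flags.append("HIGH RISK: conformity assessment and oversight obligations")
    # 3. Transparency triggers apply independently of the risk tier.
    if entry.get("system_type") in TRANSPARENCY_TYPES:
        flags.append("TRANSPARENCY: disclosure or labelling duties")
    # 4. GPAI dependencies: trace obligations flowing downstream.
    if entry.get("uses_gpai_model"):
        flags.append("GPAI: review downstream information-sharing obligations")
    return flags or ["MINIMAL RISK: record the rationale and re-screen on material change"]

print(screen({"domain": "employment", "uses_gpai_model": True}))
```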

Classification is not a one-time exercise. Use cases evolve, models are updated, and new features shift what a system does in practice. Build a review process that re-screens systems periodically and whenever a material change occurs.

Step 3: Implement role-based controls

The controls your team needs depend on your position in the AI value chain. Many organizations occupy multiple roles simultaneously, particularly when they fine-tune or adapt a third-party model for a specific use case.

If your team is a provider of a high-risk AI system, the core priorities are technical documentation sufficient for a regulatory audit, a quality management system covering the full AI lifecycle, conformity assessment completion before market placement, risk-based testing against defined accuracy and safety criteria, structured logging enabling post-market monitoring, and user instructions that deployers can actually follow. Documentation must be granular enough that an external auditor who did not build the system can reconstruct how it works, what it was tested against, and what decisions were made during development.

If your team is a deployer, the priorities shift to oversight assignment with real authority and training, instructions-of-use compliance, input-data discipline for your specific context, individual notification obligations, worker notice where required, fundamental rights impact assessments for public authorities, and contractual rights from your vendors to access the technical and governance information you need to meet your own obligations. A vendor that cannot or will not provide adequate transparency is a compliance risk. Deploying their system in a high-risk context while lacking the information to govern it properly may make your team’s compliance position indefensible.

If your team is a GPAI provider, controls must cover model documentation at the required level of detail, downstream information sharing, copyright compliance governance, and, where systemic risk applies, adversarial testing, incident reporting, and cybersecurity measures.

If you are a provider or deployer dealing with layered obligations, you need more than a checklist. aqua cloud gives your team a test and requirement management platform built for regulated environments, where traceability between requirements, test cases, and results is not optional. aqua’s AI Copilot generates context-specific test cases and compliance documentation aligned to EU AI Act technical documentation requirements, while keeping proprietary data secure. Granular permissions, version-controlled artifacts, and audit-ready evidence management mean your team can demonstrate active governance rather than passive policy. aqua integrates with Jira, Azure DevOps, GitHub, GitLab, and CI/CD pipelines, so compliance fits into the workflows your team already uses.

Ensure 100% Compliance with the EU AI Act in 2026

Try aqua for free

Step 4: Treat AI literacy as an operational control, not a training event

Article 4 of the EU AI Act requires providers and deployers to ensure staff have sufficient AI literacy, calibrated to their technical background, experience, education, the context of the AI system, and the people affected by its outputs. A single awareness session will not satisfy this requirement for teams building or governing consequential systems.

Role-specific obligations apply across the organization. Developers and data scientists need to understand the technical requirements for high-risk systems. Product managers and procurement teams need to recognize high-risk triggers during vendor evaluation and feature design. Customer support teams need to understand what the AI system does and does not do. Legal and compliance teams need deep familiarity with the regulatory text and its practical implications. Executive leadership needs enough understanding to make governance decisions and sign off on conformity assessments.

AI literacy also underpins virtually every other compliance obligation. Accurate classification, prohibited-practice identification, vendor review, system monitoring, and incident escalation all depend on staff understanding what the regulation requires and what it means for their specific role. Teams that deprioritize literacy tend to find compliance gaps late, through reactive legal work at the worst possible moment.

Step 5: Build vendor governance with contractual teeth

Many organizations deploy third-party AI systems into regulated contexts. That arrangement does not transfer compliance liability to the vendor. Your team still needs contractual rights to obtain technical and governance documentation, restrictions on undocumented model changes that could affect your system’s behavior in production, notification obligations for incidents or significant updates, sufficient transparency to explain or defend decisions made by the system, and clear escalation paths when serious incidents occur.

Vendor due diligence for EU AI Act purposes is more demanding than standard procurement review. You need to understand whether the vendor’s system would be classified as high-risk in your deployment context, what conformity assessment they have completed, what their post-market monitoring process looks like, and whether their documentation gives your team what it needs to meet its own deployer obligations. Vendors who treat this information as proprietary are a compliance risk that no contract can fully mitigate.
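
One way to make that due diligence repeatable is a structured review record per vendor, so gaps surface before contract signature rather than during an audit. An illustrative sketch; the vendor name and fields are hypothetical, and the questions mirror the points above rather than constituting a legal checklist:

```python
# Illustrative vendor review record for a single AI vendor.
vendor_review = {
    "vendor": "ExampleVendor Inc.",
    "high_risk_in_our_deployment_context": True,
    "conformity_assessment_evidence": False,        # gap: request before signing
    "post_market_monitoring_described": True,
    "documentation_access_rights_in_contract": True,
    "model_change_notification_clause": False,      # gap: undocumented updates possible
    "incident_escalation_path_defined": True,
}

# Surface every unanswered or failed item as an open gap.
open_gaps = [item for item, ok in vendor_review.items() if ok is False]
print("Open gaps:", open_gaps)
```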

Step 6: Build and maintain operational evidence

Regulators will look past policy statements and ethical AI commitments and ask for operational proof. Your team needs to be able to produce a structured AI inventory with risk classifications and the rationale behind them, instructions of use and oversight assignment records, test protocols with documented results and sign-offs, incident logs and governance decision trails, model documentation and conformity assessment records, and training completion records for staff in oversight roles.

Compliance artifacts need to be structured, version-controlled, and retrievable. When an auditor asks how a model’s accuracy was validated for a specific use case in a specific deployment context, the answer must point to a testing protocol, documented results, and sign-offs, not a general statement about best practices. The software testing strategies that generate defensible evidence for regulated systems are substantially more rigorous than typical development testing. Building that rigor into your standard engineering workflow before 2026 enforcement begins is the difference between compliance confidence and a reactive scramble.
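
In practice, "structured, version-controlled, and retrievable" means each claim can be traced from requirement to protocol to signed result. A minimal sketch of one such evidence record, with illustrative identifiers, metrics, and paths:

```python
# One audit-ready evidence record: a single accuracy claim traced from
# requirement to test protocol to documented, signed-off result.
evidence_record = {
    "requirement": "REQ-142: ranking accuracy >= 0.90 on a representative EU dataset",
    "test_protocol": "TP-142-v3",                        # version-controlled protocol
    "result": {"accuracy": 0.93, "dataset": "eu-eval-2026Q1"},
    "executed_on": "2026-03-18",
    "signed_off_by": ["qa-lead", "oversight-owner"],     # named, trained roles
    "artifact_uri": "evidence-store/TP-142-v3/run-07",   # retrievable on auditor request
}
```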

What Compliance Tools and Resources Are Available for the EU AI Act?

A growing ecosystem of tools and guidance documents supports organizations working toward EU AI Act compliance. Here is what is available now and what to watch for.

EU AI Act compliance checker. This self-assessment tool helps you determine whether your AI system falls under the Act’s scope, what risk category it belongs to, and what obligations apply. It is a useful starting point for initial triage and for building a repeatable classification workflow, though it does not substitute for legal analysis on complex cases.

European Commission official guidance. The Commission has published detailed FAQs covering prohibited practices, AI literacy requirements, general-purpose AI obligations, and enforcement timelines. These documents represent the authoritative source for EU AI Act guidance and are required reading for anyone building a compliance program. They are updated as the regulatory environment evolves, so monitoring them is an ongoing task rather than a one-time exercise.

Code of Practice for General-Purpose AI. Published in July 2025 and endorsed by the Commission and AI Board, this Code provides GPAI providers with a structured path to demonstrate compliance with transparency, copyright, and safety obligations ahead of harmonized standards being finalized.

Code of Practice on AI-generated content marking and labelling. The second draft was published in March 2026, with the final version expected around June 2026. For teams whose systems generate or manipulate content reaching the public, this Code will define in practical terms what “identifiable” and “labelled” mean under the Act.

Harmonized standards. Still in development, with the first batch expected in 2026. Once published in the Official Journal, following these standards creates a presumption of conformity with the corresponding requirements. Until then, your team must rely on internal risk-based judgments and defensible control frameworks that demonstrate the same outcomes the standards will eventually codify.

Industry-specific resources. Trade associations, legal advisory groups, and sector-specific working groups are publishing compliance playbooks for employment AI, financial services AI, healthcare AI, and law enforcement AI. These resources translate the horizontal regulatory requirements of the European AI Act into vertical contexts where precedents are clearer and the compliance pathway is more defined.

EU database for high-risk AI systems. Still being built out, but registration requirements for certain Annex III systems will apply from August 2026. Providers should monitor the database’s development closely and factor registration timelines into their go-to-market planning for new AI products.

Internal cross-functional teams. The most underrated compliance resource is the people in your organization who understand how your systems actually work. EU AI Act compliance requires input from engineering, product, procurement, HR, legal, privacy, security, and business units. Building a cross-functional working group early, rather than routing everything through legal, produces better outcomes and faster identification of the gaps that matter most.

EU AI Act compliance deadlines are not slowing down, and the gap between policy documentation and operational evidence is where exposure builds. aqua cloud is the test and requirement management platform built for exactly this environment. aqua’s AI Copilot generates thorough, context-specific test cases and compliance documentation aligned to EU AI Act technical documentation requirements, while keeping your proprietary data secure and private. Built-in traceability maps relationships between requirements, tests, and results, creating the audit trail regulators expect to see during conformity assessments. Risk-based testing capabilities let your team prioritize high-risk scenarios in line with the Act’s classification framework, and integrations with Jira, Azure DevOps, GitHub, GitLab, and CI/CD pipelines mean compliance fits directly into the workflows your team already relies on. The software testing strategies that hold up under regulatory scrutiny are the ones built on structured evidence, and aqua gives your team the infrastructure to produce it consistently.

Reduce compliance documentation effort by up to 70% while building the audit-ready evidence trail

Try aqua for free

Conclusion

EU AI Act compliance is an active legal regime with obligations already in force and enforcement expanding significantly through 2026 and 2027. The prohibited practices ban and AI literacy requirements have been enforceable since February 2025. GPAI governance provisions went live in August 2025. The full high-risk framework arrives in August 2026. Organizations that are still in the assessment phase need to move faster. The practical path forward is sequential and concrete. Build the inventory. Classify the risk. Assign role-appropriate controls. Document governance decisions and testing results. Train the people responsible for oversight. Establish vendor governance with contractual substance. And build the operational evidence that regulators will expect to see, not the policy documentation they will look past.


Frequently Asked Questions

Who needs to comply with the EU AI Act?

Any organization that develops, deploys, distributes, or imports AI systems that affect people in the European Union must comply, including organizations headquartered outside the EU. Obligations differ significantly depending on your role in the AI value chain. Providers, meaning those who develop and place AI systems on the market, face the most demanding requirements, including conformity assessments and technical documentation. Deployers, meaning those who use AI systems in a professional context, must implement human oversight, follow instructions of use, and, in some cases, conduct fundamental rights impact assessments. Distributors and importers carry their own subset of obligations. The regulation also applies to organizations providing general-purpose AI models used by others as components in their systems. If your AI system affects EU residents, the Artificial Intelligence Act applies regardless of where your organization is based.

Is the EU AI Act being enforced?

Yes. Enforcement began in phases. The prohibited practices ban became enforceable on 2 February 2025, meaning AI systems that engage in banned practices like social scoring by public authorities, manipulative subliminal techniques, or unauthorized real-time biometric identification in public spaces are already subject to regulatory action. AI literacy obligations under Article 4 became enforceable on the same date. Commission enforcement powers over general-purpose AI model governance provisions activated on 2 August 2025. The full enforcement toolkit for high-risk AI systems under the European AI Act framework, including the complete Annex III provisions, becomes operational in August 2026. National competent authorities in Member States are being designated and resourced during 2025 and 2026, and the European AI Office is already operational.

Does the EU AI Act apply to the USA?

Yes, in practice. The EU AI Act applies to any organization whose AI systems affect people in the European Union, regardless of where the organization is based. A US company that deploys AI in EU markets, sells AI products to EU customers, or operates AI systems that process data about EU residents falls within the scope of the European Union AI Act. The extraterritorial reach mirrors GDPR’s approach and for similar reasons: the EU regulates based on where the impact occurs, not where the technology originates. US organizations with EU operations or EU customers need to map their AI systems against the Act’s requirements with the same urgency as European organizations.

What are the key obligations for AI providers under the EU AI Act?

Providers of high-risk AI systems face the most demanding obligation set. Before placing a system on the market, they must complete a conformity assessment demonstrating the system meets all mandatory requirements. The core obligations cover seven areas: a documented risk management system covering the full system lifecycle, data governance ensuring training and operational data is relevant and unbiased, technical documentation detailed enough to support external audit, transparency documentation enabling safe deployment by others, human oversight mechanisms with assigned and trained personnel, post-market monitoring with incident reporting, and a quality management system governing the full AI lifecycle. Providers must also register certain systems in the EU database before launch. For general-purpose AI model providers, obligations cover model documentation, downstream transparency, copyright compliance, and where systemic risk is present, adversarial testing and Commission incident reporting.

How can companies prepare for audits under the EU AI Act?

Audit readiness under the EU’s artificial intelligence regulation is fundamentally about operational evidence rather than policy documentation. Auditors will ask to see a structured AI inventory with risk classifications and the reasoning behind them; technical documentation covering model architecture, training data, testing protocols, and known limitations; conformity assessment records; human oversight assignments with evidence that those individuals are trained and equipped to act; post-market monitoring plans and incident logs; and AI literacy training records for staff in relevant roles. The documentation must be version-controlled, retrievable, and specific to each system rather than generic. Organizations that have been running AI governance programs informally, relying on tribal knowledge and undocumented decisions, tend to find the gap between their actual governance and their documented governance largest at exactly the moment an auditor asks for evidence. Building structured documentation into your standard AI development and deployment workflow now, rather than reconstructing it before an audit, is the most defensible path forward.