The EU AI Act is already enforceable. If your organization builds or deploys AI systems with any footprint in Europe, parts of this regulation apply to your team right now, and non-compliance carries [penalties of up to €35 million or 7% of global annual turnover](https://artificialintelligenceact.eu/article/99/). Many compliance leads are still figuring out what "conformity assessment" means in practice, which obligations fall to providers versus deployers, and where QA fits in. This guide covers all of it: risk categories, technical requirements, role-based responsibilities, penalties, and what your team needs to do at each stage.
Here is what you actually need to know to stay compliant.
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence.
Its core principle is straightforward: the rules that apply to an AI system depend on the harm it could cause.
A chatbot handling customer service questions faces different obligations than an AI model screening job applicants or deciding who gets a loan. The regulation splits AI systems into four categories:

- Unacceptable risk: practices such as social scoring and manipulative systems, banned outright
- High risk: systems used in areas like hiring, credit, education, and law enforcement, subject to the strictest obligations
- Limited risk: systems such as chatbots, subject mainly to transparency obligations
- Minimal risk: everything else, largely left unregulated
The Act also draws a clear line between two types of actors. A provider is the company that builds and places an AI system on the market. A deployer is the organization using that system in its operations. An HR software vendor and the company using it to screen resumes carry different legal burdens. That distinction runs through every section of the regulation.
The rollout is staggered, and some obligations are already active:

- 2 February 2025: prohibitions on unacceptable-risk AI and AI literacy obligations took effect
- 2 August 2025: obligations for general-purpose AI models and governance rules took effect
- 2 August 2026: most remaining obligations apply, including the bulk of the high-risk system requirements
- 2 August 2027: the extended transition ends for high-risk AI embedded in regulated products
Teams planning for EU AI Act compliance in 2026 should treat the August 2026 deadline as a hard cutoff.
Check the full AI Act timeline in another blog post by aqua.
As you’re handling the complexities of the EU AI Act compliance requirements, one thing becomes clear: traceable QA and comprehensive documentation are the foundation of compliance. And this is exactly where aqua cloud, an AI-powered test and requirement management solution, delivers exceptional value. With its domain-trained AI Copilot, aqua helps your team automate and enhance compliance documentation, using RAG technology that grounds AI suggestions in your specific context and requirements. The platform’s built-in traceability between requirements, risks, and test cases creates a continuous audit trail that aligns with the EU AI Act’s demands for technical documentation and record-keeping. With automated test case generation that incorporates regulatory requirements, your team can achieve the systematic validation that high-risk AI systems demand. aqua also integrates with tools your team already uses, including Jira, Azure, Selenium, GitHub, and 12+ other tools from your tech stack, so compliance workflows fit into your existing processes.
Pursue competitive advantage with aqua's AI-powered test management.
AI systems started making real-world decisions that affected fundamental rights: who gets hired, who qualifies for housing, and who gets flagged by law enforcement. Existing law could not keep up. GDPR addressed some privacy risks but could not regulate accuracy, robustness, or explainability. Product safety rules covered certain categories but were not designed for software that learns and changes after deployment.
The EU AI Act fills that gap with a unified, risk-based framework and clear accountability across the AI supply chain. If an AI system causes harm, the provider cannot blame the data, and the deployer cannot claim they did not understand how it worked. Every participant in the chain has defined obligations, and the law holds each of them to those obligations.
For your QA team, that context matters. Testing is no longer just about verifying that a system works as intended. Your team is helping prove that it works safely, fairly, and transparently under conditions the law specifies.
If your AI system is used in the EU, or if your company provides AI tools to EU-based users, you are in scope. The Act applies regardless of where you are headquartered. Your compliance obligations then depend on your role.
Providers face the heaviest requirements. These are entities that develop AI systems, place them on the market, or put them into service under their own name. Your responsibilities as a provider include:

- Establishing and maintaining a risk management system
- Meeting data quality and governance standards
- Preparing and maintaining technical documentation
- Building in logging, transparency, and human oversight
- Ensuring accuracy, robustness, and cybersecurity
- Passing a conformity assessment, affixing the CE marking, and registering the system
- Running post-market monitoring and reporting serious incidents
Taken together, these obligations define the full scope of EU AI Act high-risk AI requirements your provider team must satisfy before market placement.
Deployers are organizations using AI systems in their operations. Your team as a deployer does not need to rebuild the system or prove it meets technical standards. However, you must:

- Use the system according to the provider's instructions
- Assign trained individuals to exercise human oversight
- Ensure input data is relevant and representative for the intended use
- Inform affected individuals when AI is involved in decisions about them
- Monitor operation and report serious incidents to the provider
Importers, distributors, and authorized representatives have lighter but still enforceable obligations. Importers verify that providers met their compliance duties before bringing a system into the EU. Distributors check that documentation, CE marking, and registration are in place. If your team substantially modifies a high-risk AI system or changes its intended purpose, you may move from deployer to provider status under the law.
For QA professionals, these distinctions shape your testing strategy directly. Testing a proprietary AI tool your company built is provider-level work. Evaluating a third-party AI product for procurement is deployer-level work. The EU AI Act requirements for high-risk AI systems shift accordingly.
The compliance obligations under the EU AI Act vary by risk level, role, and deployment context. For high-risk AI systems, the requirements are detailed and enforceable across the full product lifecycle.
Risk management under the EU AI Act is a continuous process, not a pre-launch checklist. If your team provides high-risk AI, you must establish and maintain a risk management system that:

- Identifies reasonably foreseeable risks to health, safety, and fundamental rights
- Evaluates the severity and likelihood of each risk
- Implements control and mitigation measures
- Reassesses the system after modifications or when new risks emerge from real-world operation
The Act treats risk management as a legal requirement because AI systems can produce harmful outcomes even when they perform exactly as designed. A hiring tool might consistently filter out qualified candidates from underrepresented groups. A credit model might perform well on average but fail for specific demographic segments.
For your QA team, test plans need to include risk-based software testing, bias detection, edge-case validation, and scenario analysis that covers atypical inputs. Governance structures matter just as much. Your engineering, legal, compliance, and QA teams need to work together throughout development.
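To make bias detection concrete, the sketch below compares selection rates across demographic groups from a hypothetical hiring-tool test run and flags disparities. The records, group labels, and 0.8 cutoff are illustrative assumptions, not thresholds the Act prescribes:

```python
from collections import defaultdict

# Illustrative records: (group, selected) pairs from a hypothetical
# hiring-tool test run. Real data and thresholds are project-specific.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Selection rate per demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best rate.
    The 0.8 cutoff is an illustrative heuristic, not a legal standard."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

rates = selection_rates(results)
print(rates, flag_disparity(rates))  # group_b flagged in this toy run
```

A check like this belongs in the regression suite, so every model update re-runs it and the results feed the risk management records.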
Providers remain responsible throughout the lifecycle, including after deployment. If a risk emerges post-launch, your team cannot pass accountability to the deployer.
For Annex IV of the EU AI Act specifically, you need technical documentation that maps your system architecture, training data, performance metrics, and human oversight measures. Most GRC tools can track whether you have these docs but cannot create them.
Data is a compliance variable under the EU AI Act. High-risk AI systems must be developed and validated using datasets that are:

- Relevant to the system's intended purpose
- Sufficiently representative of the people and conditions the system will encounter
- Free of errors and complete, to the best extent possible
- Examined for biases that could affect health, safety, or fundamental rights
Your team must document where data comes from, how it was collected, what pre-processing was applied, and whether it covers edge cases and underrepresented groups. For QA professionals, this creates a direct testing responsibility: validating data quality, not just model behavior. You need to assess whether your test datasets reflect production conditions and whether they are updated as real-world conditions change.
If your team provides input data to a high-risk AI system, such as uploading candidate profiles to an AI screening tool, that data must be relevant and sufficiently representative for the intended use. QA professionals on the deployer side should treat input data validation as a compliance checkpoint, not a preprocessing detail.
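One practical check: compare each group's share of your test set against its expected share in production and flag anything outside a tolerance band. A minimal sketch, where the groups, counts, and tolerance are hypothetical:

```python
def representativeness_gaps(test_counts, production_share, tolerance=0.05):
    """Compare each group's share of the test set against its expected
    share in production; return groups outside the tolerance band.
    The tolerance is an illustrative threshold your team would set."""
    total = sum(test_counts.values())
    gaps = {}
    for group, expected in production_share.items():
        actual = test_counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Hypothetical age-band figures for a candidate-screening test set.
test_counts = {"18-30": 400, "31-50": 500, "51+": 100}
production_share = {"18-30": 0.35, "31-50": 0.45, "51+": 0.20}
print(representativeness_gaps(test_counts, production_share))
# In this toy run, the 51+ band is underrepresented and gets flagged.
```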
The EU AI Act technical documentation requirements for high-risk systems are specific. Your documentation must be detailed enough for regulators to assess compliance and for deployers to understand the system’s capabilities and limitations. It must cover:

- A general description of the system and its intended purpose
- System architecture and the development process
- Training data sources and data governance measures
- Performance metrics and validation results
- Risk management measures and human oversight provisions
- Known limitations and foreseeable misuse
High-risk AI systems must also automatically generate and preserve logs capturing relevant events, decisions, and inputs throughout operation. If an AI system makes a decision that harms someone, regulators and affected individuals need to reconstruct what happened. Without logs, that reconstruction is not possible.
For your QA team, this creates two responsibilities. First, validate that logging mechanisms actually work: confirm the system captures the right information, stores it securely, and makes it retrievable. Second, treat your test results and validation records as compliance evidence. Test plans, test cases, results logs, and defect tracking all become part of the audit trail, not just internal project artifacts.
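As an illustration of the first responsibility, the sketch below pairs a toy decision system with a pytest-style check that every decision leaves a reconstructable log record. The class, field names, and outcome values are stand-ins for your real system under test:

```python
import datetime

class DecisionSystem:
    """Toy stand-in for the system under test; it records every decision."""
    def __init__(self):
        self.audit_log = []

    def decide(self, payload):
        decision = {"outcome": "review", "input": payload}
        self.audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc),
            "input": payload,
            "output": decision["outcome"],
        })
        return decision

def test_decision_is_logged():
    system = DecisionSystem()
    decision = system.decide({"applicant_id": "A-123"})
    record = system.audit_log[-1]
    # The log must capture enough to reconstruct the decision later.
    assert record["input"]["applicant_id"] == "A-123"
    assert record["output"] == decision["outcome"]
    assert record["timestamp"] is not None
```

The same pattern extends to retrievability: a second test can restart or query the log store and assert the record survives.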
EU AI Act transparency requirements operate on two levels.
Provider-to-deployer transparency requires your team to supply deployers with the information they need to operate the system lawfully. This includes:

- The system's intended purpose and conditions of use
- Its level of accuracy, robustness, and cybersecurity
- Known limitations and circumstances that may affect performance
- Human oversight measures and how to apply them
- Instructions for use, maintenance, and interpretation of outputs
Deployer-to-user transparency requires that individuals affected by AI-driven decisions be informed. If AI is used in a workplace context, affected employees must be informed before deployment. When the system produces outputs that lead to decisions with legal or significant effects on someone, that person must be told AI was involved. In certain cases, they are entitled to an explanation of how the system reached its output.
EU AI Act explainability requirements mean that explanation outputs must be understandable to non-experts while accurately reflecting the system’s decision process. Your QA team needs to validate that explanation mechanisms exist, produce plain-language outputs, and accurately represent the system’s reasoning. Technically correct explanations that a non-expert cannot interpret do not satisfy the law. Understanding the boundary between privacy vs confidentiality in security testing is also relevant here, since explainability outputs often touch personal data that your team must handle carefully.
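Automated checks can catch the most obvious failures before human review. The sketch below applies simple plain-language heuristics to explanation text; the word limit, jargon list, and factor-matching logic are illustrative stand-ins for criteria your team would define with legal and UX input:

```python
# Illustrative jargon list; a real one comes from UX research and review.
JARGON = {"logit", "softmax", "gradient", "hyperparameter", "embedding"}

def check_explanation(text, top_factors, max_words=120):
    """Return a list of plain-language issues with an explanation string."""
    issues = []
    words = text.lower().split()
    if len(words) > max_words:
        issues.append("explanation too long for a non-expert summary")
    if JARGON & set(words):
        issues.append("contains ML jargon")
    # The explanation must mention the factors that actually drove
    # the output, not just a generic template.
    missing = [f for f in top_factors if f.lower() not in text.lower()]
    if missing:
        issues.append(f"does not mention decisive factors: {missing}")
    return issues

print(check_explanation(
    "Your application was declined mainly because of a short credit "
    "history and a high existing debt ratio.",
    top_factors=["credit history", "debt ratio"],
))  # -> [] in this toy example: short, jargon-free, factor-grounded
```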
High-risk AI systems must be designed and deployed so that qualified individuals can exercise meaningful oversight. This is an operational role with authority and real process behind it.
For your provider team, that means building oversight mechanisms into the system’s design:

- Interfaces that surface the information overseers need to monitor operation
- Outputs that overseers can interpret and question
- Controls that let overseers intervene, override a decision, or stop the system entirely
For your deployer team, the obligation is to assign oversight to specific individuals, ensure those individuals are trained and equipped for the role, and give them the authority to act. Oversight cannot be a rubber-stamp process. The human role must be substantive.
From a QA perspective, human oversight is a testable condition. Can the assigned overseer see the information they need? Can they intervene in time to prevent harm? Do they have the tools to act on their judgment? If your testing reveals that overseers cannot understand outputs or cannot override the system in edge cases, that is a compliance gap that must be resolved before deployment.
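Those questions translate directly into a test harness. In the sketch below, `ai_system` and `overseer` stand for your project's own test doubles, and `explain()`, `override()`, and `halt()` are assumed hooks, not a standard API:

```python
def check_oversight(ai_system, overseer):
    """Return a list of oversight gaps found against the system's
    assumed oversight contract. Empty list means all checks passed."""
    issues = []
    decision = ai_system.decide({"case_id": "C-42"})
    # Overseers must be able to see why the system decided as it did.
    if ai_system.explain(decision) is None:
        issues.append("no interpretable explanation available")
    # They must be able to replace the outcome with their own judgment.
    final = overseer.override(decision, new_outcome="manual_review")
    if final.outcome != "manual_review":
        issues.append("override did not take effect")
    # And they must be able to stop the system in an emergency.
    ai_system.halt(reason="suspected systematic error")
    if ai_system.status != "halted":
        issues.append("halt control missing or ineffective")
    return issues
```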
High-risk AI systems must meet the performance levels documented for their intended purpose. If your system claims a specific accuracy level, you must prove it under conditions that reflect real-world deployment, including testing on representative data and validating performance across demographic subgroups.
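One way to validate the subgroup requirement is to compute accuracy per subgroup rather than relying on the headline average. A minimal sketch with illustrative records and group labels:

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: (subgroup, predicted, actual) triples from a test run.
    Returns accuracy per subgroup so documented claims can be verified
    beyond the overall average. Labels here are illustrative."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 0),
]
overall = sum(p == a for _, p, a in records) / len(records)
print(overall, accuracy_by_subgroup(records))
# The 0.6 overall figure hides that group_b sits at 0.5.
```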
Robustness requires the system to perform reliably under a range of conditions, including edge cases and unexpected inputs. For your QA team, robustness testing means stress-testing the system with noisy data, incomplete inputs, and deliberate adversarial examples. The system must fail safely when conditions are not ideal, not just perform well when they are.
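A concrete pattern: feed the system deliberately malformed payloads and assert that it abstains or routes to a human instead of returning a confident decision. In this sketch, `ai_system`, the safe-outcome labels, and the 0.5 confidence cutoff are assumptions about your system's contract:

```python
MALFORMED_INPUTS = [
    {},                                # empty payload
    {"income": None, "age": 200},      # missing and implausible values
    {"income": "N/A", "age": -1},      # wrong type, negative age
]

def check_fails_safely(ai_system):
    """Return the payloads the system mishandled; empty means it
    degraded safely on every malformed input."""
    failures = []
    for payload in MALFORMED_INPUTS:
        result = ai_system.decide(payload)
        safe_outcome = result.outcome in {"abstain", "route_to_human"}
        low_confidence = result.confidence is None or result.confidence < 0.5
        if not (safe_outcome and low_confidence):
            failures.append(payload)
    return failures
```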
EU AI Act security requirements mandate protections appropriate to the risks your system faces, including:

- Defenses against unauthorized access and tampering with the model or its data
- Protection against data poisoning of training datasets
- Resilience against adversarial inputs designed to manipulate outputs
- Safeguards against exploitation of model vulnerabilities
Security testing must be part of your validation process. It is a cross-functional responsibility that includes your QA team, not something to hand off to a separate security team at the end of a release cycle.
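Security checks can live in the same suite as functional tests. The rough sketch below probes two basics against a hypothetical HTTP inference endpoint; the URL, payload shape, and expected status codes are placeholders, not a real API:

```python
import requests

# Placeholder endpoint; substitute your system's real inference URL.
ENDPOINT = "https://example.internal/api/v1/decide"

def test_inference_requires_authentication():
    # Anonymous calls should be refused, not silently served.
    resp = requests.post(ENDPOINT, json={"applicant_id": "A-123"}, timeout=5)
    assert resp.status_code in (401, 403)

def test_rejects_oversized_payload():
    # Crude resource-abuse probe: a multi-megabyte field should be
    # rejected at the boundary rather than processed.
    huge = {"notes": "x" * 10_000_000}
    resp = requests.post(ENDPOINT, json=huge, timeout=5)
    assert resp.status_code in (400, 413)
```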
Compliance does not end at deployment. Providers of high-risk AI must establish post-market monitoring systems that:

- Collect and analyze performance data from real-world use
- Detect emerging risks, degradation, and unexpected behavior
- Feed findings back into the risk management system
- Support the detection and reporting of serious incidents
A serious incident is one that leads to death, serious harm to health, serious damage to property or the environment, or a serious infringement of fundamental rights. When such an incident occurs or is linked to your AI system, you are legally required to report it. That requires a compliance pipeline running from detection through investigation, remediation, and regulatory notification.
Your deployer team has a parallel obligation. If your team identifies a serious incident or suspects the system is producing non-compliant outputs, you must inform the provider and, in some cases, authorities directly. QA professionals on the deployer side need production monitoring processes, user feedback loops, and continuous validation that the system is behaving as expected in your specific context.
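A deployer-side monitor can be as simple as comparing live outcome rates against the expectations the provider documented. A minimal sketch, where the expected approval rate, drift threshold, and field names are illustrative assumptions:

```python
def check_daily_outcomes(decisions, expected_approval_rate=0.30,
                         max_drift=0.10):
    """Flag an internal incident when the live approval rate drifts
    beyond the tolerance documented by the provider."""
    approvals = sum(d["outcome"] == "approved" for d in decisions)
    rate = approvals / len(decisions)
    if abs(rate - expected_approval_rate) > max_drift:
        return {
            "incident": True,
            "observed_rate": round(rate, 3),
            "action": "notify provider; review for non-compliant outputs",
        }
    return {"incident": False, "observed_rate": round(rate, 3)}

decisions = [{"outcome": "approved"}] * 5 + [{"outcome": "declined"}] * 5
print(check_daily_outcomes(decisions))
# A 0.5 observed rate against a 0.3 expectation trips the incident flag.
```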
Teams preparing for the EU AI Act often end up using a combination: GRC for controls, AI governance tooling for lifecycle artifacts, and a defined process that connects them so Annex IV reporting and bias monitoring are defensible.
High-risk AI systems carry the heaviest compliance burden under the EU AI Act. These are tools used in employment decisions, credit assessments, law enforcement, education, and access to essential services. Mistakes, bias, or failures in these contexts can directly harm people’s rights and opportunities. The Act treats these systems like regulated products, and compliance extends well beyond writing good code.
Before placing a high-risk AI system on the market, your team must carry out a conformity assessment proving the system meets all EU AI Act requirements for high-risk AI systems. For most high-risk systems, you can conduct this assessment internally using documented controls. For certain categories, such as AI used in biometric identification or critical infrastructure, the law requires third-party assessment by a notified body.
The conformity assessment is not a one-time event. If the system is substantially modified in ways that affect its safety, performance, or intended purpose, your team must repeat the assessment. Every major update creates a compliance checkpoint, not just the initial deployment.
For QA teams, your test results, defect logs, performance metrics, and robustness reports form the evidence base that makes the conformity assessment possible. If you cannot demonstrate that the system meets accuracy, robustness, and risk management standards through documented testing, the assessment cannot proceed. Legal, engineering, data science, and QA all contribute to the evidence package.
Once a high-risk AI system passes its conformity assessment, your team must:

- Draw up an EU declaration of conformity
- Affix the CE marking to the system or its documentation
- Register the system in the EU database before placing it on the market
The CE mark signals to authorities and users that the system meets legal requirements. For software-based AI systems, it typically appears in documentation, user interfaces, or accompanying materials. Registration in the EU database gives regulators and the public visibility into what high-risk AI systems are deployed, by whom, and for what purposes.
CE marking and registration are the final steps in the pre-market compliance process. Your test documentation is part of the audit trail that supports the CE mark.
Once deployed, your team must continue monitoring system performance, identify risks emerging from real-world use, and update the system to maintain compliance. This includes:

- Reviewing logs and performance metrics against documented baselines
- Tracking incidents, complaints, and user feedback
- Re-validating the system after every update
- Keeping technical documentation and risk records current
System updates create a compliance checkpoint. Even minor updates need evaluation to confirm they do not introduce new risks or degrade accuracy, robustness, or transparency. Post-deployment testing is part of the compliance lifecycle for your team, not an optional extra.
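One way to operationalize that checkpoint is a release gate that re-runs the validation suite and blocks the update if any documented metric regresses. A minimal sketch, with illustrative metric names, baseline values, and tolerance:

```python
# Baselines mirror the metrics documented at the last conformity
# assessment; names and values here are illustrative.
BASELINE = {"accuracy": 0.91, "subgroup_min_accuracy": 0.87}

def update_gate(new_metrics, baseline=BASELINE, tolerance=0.01):
    """Return the metrics that degraded beyond tolerance.
    An empty dict means the update may proceed."""
    return {
        name: {"documented": baseline[name], "observed": new_metrics[name]}
        for name in baseline
        if new_metrics[name] < baseline[name] - tolerance
    }

print(update_gate({"accuracy": 0.92, "subgroup_min_accuracy": 0.84}))
# Overall accuracy improved, but the subgroup floor regressed,
# so this toy update would be blocked pending re-validation.
```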
Your deployer team must also monitor actively, watch for signs that the system is not working as intended, report serious incidents to the provider, and stop using the system if outputs violate the law or the provider’s instructions.
Compliance under the EU AI Act is distributed across multiple actors. No one can pass accountability entirely to someone else.
Providers bear primary responsibility. Your team is accountable for designing, building, validating, and monitoring the system throughout its lifecycle. Placing a high-risk AI system on the market makes your organization legally responsible for ensuring it meets the Act’s standards, regardless of how deployers use it.
Deployers are responsible for lawful use. If your team feeds inappropriate data into a compliant system, ignores oversight requirements, or deploys the system in an unauthorized context, your organization is liable. Deployers are active compliance participants, not passive end users.
Importers and distributors verify that required documentation, CE marking, and registration are in place before bringing a system into the EU market. If your team knows or should have known that a system does not comply, you can be held accountable for making it available.
For QA professionals, your role maps directly to your organization’s position in the supply chain. Testing a proprietary tool your company built means provider-level validation. Evaluating a third-party system for procurement means deployer-level verification. Compliance is a cross-functional discipline, and your QA team sits at the center of the evidence it requires.

The EU AI Act penalty structure is tiered by severity:

| Violation type | Maximum fine |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| Non-compliance with other obligations, including high-risk system requirements | €15 million or 3% of global annual turnover |
| Providing incorrect or misleading information to authorities | €7.5 million or 1% of global annual turnover |

The higher figure applies in each case. For smaller organizations, these fines can be existential. For large ones, 7% of global turnover is a material financial event.

Learn more about penalties for non-compliance in another one of our blog posts.

Financial penalties are not the only risk. A public finding that your AI hiring tool discriminated against protected groups, or that your credit model failed its conformity assessment, creates trust damage that compounds over time. Customers, partners, and regulators all factor compliance history into future decisions.

Enforcement infrastructure is being built out now. National authorities are establishing AI oversight bodies, the European AI Office is coordinating cross-border enforcement, and conformity assessment processes are rolling out across member states. Waiting for full enforcement to mature before acting is not a viable strategy. Your team’s compliance work now will be far less costly than remediation under regulatory pressure later.
As EU AI Act enforcement timelines roll out, your team needs practical tools to implement compliance at scale. aqua cloud, an AI-powered test and requirement management platform, is built for the specific challenges of AI regulation. Its risk management capabilities let your team document risks, implement mitigation measures, and provide evidence of systematic testing: all of which are EU AI Act requirements for high-risk systems your organization must meet. With aqua’s ISO 27001 certification and compliance with standards like DORA and GDPR, you are building on a foundation that already aligns with key European regulatory frameworks. aqua’s domain-trained AI Copilot uses your organization’s own documentation to ground its outputs, so AI-generated test cases and requirements stay relevant and accurate to your specific regulatory context. aqua connects with tools across your existing stack, including Jira, Azure DevOps, and GitHub, so compliance processes integrate into the workflows your team already relies on.
Achieve continuous compliance with 97% less documentation effort using aqua.
Try aqua for free
The EU AI Act sets concrete, enforceable compliance requirements for any organization building or deploying AI in Europe. Your QA team is now a core part of the compliance function, producing the documentation and test evidence that regulators will examine. Organizations that integrate compliance into their development and testing workflows from the start will be in a stronger position when audits begin.
Any organization that builds, deploys, imports, or distributes AI systems used in the EU must comply, regardless of where your company is headquartered. Your specific obligations depend on your role: providers face the most stringent requirements, while deployers, importers, and distributors each have defined duties.
Your team must establish a continuous risk management process that identifies foreseeable risks, evaluates their severity, implements control measures, and reassesses the system after modifications or when new risks emerge from real-world operation.
Build audit readiness into development from the start. Your team should maintain technical documentation, preserve system logs, record test results and validation evidence, and keep risk management records current. Regulators will expect a traceable history from design through deployment.
A provider builds and places an AI system on the market. A deployer uses that system in their operations. Your provider team is responsible for technical compliance, conformity assessments, and post-market monitoring. Your deployer team is responsible for lawful use, human oversight, and informing affected individuals when AI is used in decisions about them.
A new conformity assessment is required when your system is substantially modified in ways that affect its safety, performance, or intended purpose. Changes such as retraining on new data, expanding the system’s scope, or altering core decision logic all restart the assessment process.