Software QA strives for perfection, but reaching it is rarely feasible, and often not even possible. So when should you stop software testing? Find out, and get a checklist, below.
When teams are under pressure to release, one of the first questions that comes up is whether there is a shortcut to deciding when to stop testing. Especially for non-QA stakeholders, it can be tempting to simply apply the good old Pareto Principle: stop at 20% of the effort, because you have probably found 80% of the issues by then. That would save quite a lot of money and speed up releases, right?
Unfortunately, things are not that simple. Depending on the industry, there may simply be too many potentially critical issues to cut QA effort by 80%. Reducing expenses and releasing features faster might be tempting, but testing banking software means catching anything remotely insecure. Then there is the separate matter of regression: you can’t just let developers do one round of fixes and trust that it didn’t introduce new bugs. Alas, the Pareto Principle is too lean for software QA.
Even though the 80/20 rule can’t be applied to software QA, there is still a need to balance resources against the outcome: ideally, a relatively issue-free software product. How do you strike that balance in QA? The answer is adding exit criteria to your test plan.
Want to dive a bit deeper into this? Our latest video could be just the thing for you.
Testing exit criteria serve as the agreed finish line for a testing cycle. They tell everyone on the team, not just QA, what conditions need to be met before the software can move forward.
The purpose of exit criteria in a test plan is to guide you the way a checklist does. Just as running out of essential groceries tells you it’s time to go to the store, meeting all exit criteria tells you it’s time to stop testing. Let’s look at some of them.
While resolving Critical and Major defects is a crucial checkpoint, modern QA teams require more than defect resolution to ensure product stability and success. You need a solution that helps you identify and fix those defects while also optimising your entire testing workflow. That’s where aqua cloud, an AI-powered test management solution (TMS), steps in.
Backed by precision and quality, aqua cloud provides a centralised hub for all your testing needs. With its AI-powered capabilities, you can auto-generate test cases, prioritise defects, and ensure complete test coverage. Its powerful analytical features offer real-time insights into your testing progress, helping you make informed decisions about when to stop and move forward confidently. Capture (a 1-click bug recording tool) integration gives you the most powerful bug-tracking combo in QA, while AI-Copilot addresses any issues you hit along the way. All in all, aqua will be your partner on the path to delivering a stable, high-quality product without compromising efficiency.
Combine 100% of your testing efforts in one place and stop when you need to
There comes a point when it’s simply time to stop testing. Modern Agile development runs on very short iterations, which means you have a fixed deadline that can’t be moved. Yes, a feature that is still too raw for production will be postponed. But as far as sprint planning goes, you still have to stop the quality assurance effort for now.
This is the simplest exit criterion for system testing. If you don’t have the means to carry on, you have to be content with the issues the QA team has found so far.
It’s one thing to stop QA work because you didn’t have the time to dig deeply into absolutely everything. We have discussed why that is unavoidable and even sensible. On the other hand, your effort should be wide enough to give you at least an idea of how all key pieces of the software are performing. Achieving full requirements coverage is a good reason to move on.
Unlike with requirements coverage, there’s no need to push for the 100% mark here. Still, the vast majority of code that is planned to go into production should be covered by test cases, preferably grouped into test scenarios for smoother quality assurance. Test automation tools, and solutions that help you manage them such as aqua, are a great help as well.
Different companies grade defects on different scales, and there is an entirely separate conversation about the colloquial vs factual difference between severity and priority. For the purposes of this article, let’s go with the following classification of defect severity:
1. Critical
2. Major
3. Minor
4. Low
So, when to stop testing? Simple: when you have fixed all Critical and Major defects. There are both software development and client relations reasons not to make the new version of your product less stable than the previous one. Resolving all defects of the two highest severity levels gives you that.
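In code, this gate is a simple filter. A minimal sketch in Python, where the defect records and status labels are illustrative rather than taken from any particular tracker:

```python
# Sketch of a severity-based release gate; field names and labels are hypothetical.

BLOCKING_SEVERITIES = {"Critical", "Major"}

def can_stop_testing(defects):
    """Return True when no open defect has a blocking severity."""
    open_blockers = [
        d for d in defects
        if d["status"] == "Open" and d["severity"] in BLOCKING_SEVERITIES
    ]
    return len(open_blockers) == 0

defects = [
    {"id": 1, "severity": "Critical", "status": "Fixed"},
    {"id": 2, "severity": "Major", "status": "Fixed"},
    {"id": 3, "severity": "Minor", "status": "Open"},  # Minor bugs don't block release
]
print(can_stop_testing(defects))  # True: all Critical/Major defects resolved
```

A single open Critical defect would flip the result to False, which is exactly the behaviour you want from a hard gate.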
If you are looking for more QA strategy input, look no further than our testing strategy template. We applied 20 years of experience in QA to create a simple, easy-to-tailor template that breaks down defect handling and much more.
Get a testing strategy template that enables us to release 2 times faster
Not every part of the application carries the same risk. Applying the same testing exit criteria across all modules regardless of their importance leads to either over-testing low-risk areas or under-testing critical ones.
A practical approach is to assign a risk level to each module or feature before the testing cycle starts. The criteria for stopping then shift depending on that classification.
For high-criticality modules, such as payment flows, authentication, or data submission in regulated industries, all critical and major defects must be resolved before testing can stop. Test coverage should be close to complete and all functional tests must pass.
For medium-criticality modules, the bar is slightly lower. Critical defects must be fixed, major defects should be resolved or have a documented workaround, and coverage should be sufficient to confirm the main flows work.
For low-criticality modules, the team can stop once critical defects are resolved and basic functionality is confirmed. Minor defects can be deferred to a future release without blocking sign-off.
Documenting this classification upfront saves a lot of debate later. When the Go / No Go meeting comes around, the team can point to risk tiers and defect status rather than arguing about whether “enough” testing has been done.
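One way to make that upfront documentation executable is to store the tiers as data, so the sign-off check becomes mechanical. A minimal sketch, where the threshold values are hypothetical examples rather than recommendations:

```python
# Illustrative risk-tier exit criteria; every threshold is a placeholder to adapt.
RISK_TIER_CRITERIA = {
    "high":   {"max_open_critical": 0, "max_open_major": 0, "min_coverage_pct": 95},
    "medium": {"max_open_critical": 0, "max_open_major": 2, "min_coverage_pct": 80},
    "low":    {"max_open_critical": 0, "max_open_major": 5, "min_coverage_pct": 60},
}

def module_ready(tier, open_critical, open_major, coverage_pct):
    """Check a module's defect counts and coverage against its tier's criteria."""
    c = RISK_TIER_CRITERIA[tier]
    return (
        open_critical <= c["max_open_critical"]
        and open_major <= c["max_open_major"]
        and coverage_pct >= c["min_coverage_pct"]
    )

print(module_ready("high", open_critical=0, open_major=0, coverage_pct=97))  # True
print(module_ready("high", open_critical=0, open_major=1, coverage_pct=97))  # False
```

The same defect counts that block a high-criticality module (one open Major) would pass for a medium one, which is the whole point of tiering.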
Quality Assurance teams track a lot of metrics to analyse the state of the product, their progress towards the upcoming release, and the overall productivity and success of the team. These same metrics can be used to define testing exit criteria.
The following checklist covers the most common testing exit criteria used across projects. It is meant as a starting point. Some items may not apply to your context, and you may need to add criteria specific to your product or industry.
Test execution
Defect status
Coverage
Functional sign-off
Process
Adapt this per project by adjusting percentage thresholds, adding module-specific criteria, or splitting the checklist into tiers based on the risk classification described above.
Although the test case pass rate is not required to be 100%, all functional tests should be green before a new version of the product goes live. It doesn’t matter if some things are a bit wonky; that’s what Minor and Low severity defects are for. All key features, however, should still work, even if not all user scenarios do.
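That rule can be encoded directly: the overall pass rate may dip below 100%, but every test flagged as covering a key feature must be green. A minimal sketch with hypothetical test names and a made-up pass-rate threshold:

```python
# Sketch of a "key features must work" gate; names and tags are illustrative.

def release_allowed(results, min_pass_rate=0.9):
    """results: list of (test_name, covers_key_feature, passed) tuples."""
    passed = sum(1 for _, _, ok in results if ok)
    pass_rate = passed / len(results)
    key_tests_green = all(ok for _, is_key, ok in results if is_key)
    return key_tests_green and pass_rate >= min_pass_rate

results = (
    [("submit_claim_manually", True, True)]          # key feature: must pass
    + [("autofill_claim_from_photo", False, False)]  # nice-to-have: may fail (Minor defect)
    + [(f"case_{i}", False, True) for i in range(8)]
)
print(release_allowed(results))  # True: only a non-key test failed, pass rate is 90%
```

Swap the failure onto the key test and the gate closes, no matter how good the overall pass rate looks.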
A good example here is QA testing in insurance. One of the primary insurance models offers automatic coverage at partner clinics, with documentation handled by the health facility and no actual money changing hands. There are also plans that provide partial or full reimbursement at non-partner clinics, where the client has to file an insurance claim.
Filing an insurance claim is a key function of an insurance company’s software, and you can’t release a new version if clients can’t apply for reimbursement. Your QA specialists, however, could find that the app fails to fill out claim data based on a photo of a bill, while users can still enter everything manually and send the claim. As long as the key functionality, actually sending the claim and getting reimbursed, is still in place, your team can release the new version of the software.
Last but not least, there’s a Go / No Go meeting where all the tech people decide whether you’re ready to release the new version. If the previous exit criteria indicate that things are ready on the QA side, that’s when you stop.
Here are sample exit criteria — feel free to exclude some, add more, or change values:
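As an illustration, such sample criteria could be stored as thresholds and checked against current metrics. Every value and name below is a placeholder to adapt, not a recommendation:

```python
# Illustrative sample exit criteria; all thresholds are placeholders to adjust.
SAMPLE_EXIT_CRITERIA = {
    "min_pass_rate_pct": 95,
    "min_requirements_coverage_pct": 100,
    "max_open_critical": 0,
    "max_open_major": 0,
}

def unmet_criteria(metrics):
    """Return the names of criteria the current metrics fail; empty list = ready."""
    failed = []
    if metrics["pass_rate_pct"] < SAMPLE_EXIT_CRITERIA["min_pass_rate_pct"]:
        failed.append("test case pass rate")
    if metrics["requirements_coverage_pct"] < SAMPLE_EXIT_CRITERIA["min_requirements_coverage_pct"]:
        failed.append("requirements coverage")
    if metrics["open_critical"] > SAMPLE_EXIT_CRITERIA["max_open_critical"]:
        failed.append("open Critical defects")
    if metrics["open_major"] > SAMPLE_EXIT_CRITERIA["max_open_major"]:
        failed.append("open Major defects")
    return failed

metrics = {"pass_rate_pct": 96, "requirements_coverage_pct": 100,
           "open_critical": 0, "open_major": 1}
print(unmet_criteria(metrics))  # ['open Major defects']
```

Returning the list of unmet criteria, rather than a bare yes/no, gives the Go / No Go meeting something concrete to discuss.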
Testing exit criteria help you find the right balance between covering requirements and not overextending your effort. Use the ideas above, factor in your product’s specific needs, and consider regulatory requirements to define exit criteria that suit your project.
Defining exit criteria is essential, but having the right tools to ensure you meet them is just as important. To confidently stop testing at the right moment, you need a solution that provides clear visibility into your progress and ensures no gaps in test coverage.
With aqua cloud, you get 100% visibility into every phase of your testing. Its AI-powered features help you track progress, automate requirement and test case creation, and ensure full test coverage. aqua cloud empowers you to meet your exit criteria without unnecessary delays or risks, taking away the pain of testing with data-driven insights and customisable KPI alerts that help you make the right call on when to stop testing. Take control of your QA process and achieve complete oversight with German-quality precision.
Stay informed about your testing efforts all the time: know when to stop testing with an AI-powered TMS
The decision to stop testing should be based on a risk management assessment and should be made in collaboration with stakeholders.
The main steps of testing are test planning, test analysis and design, test implementation, test execution, and test closure, where results are evaluated against the exit criteria.
The final step in software testing is typically the closure of the testing phase, which includes activities such as documenting the results, fixing any remaining issues, and formally accepting the software. This is followed by maintenance and support activities, which may involve further testing and quality assurance processes.
The key is to frame it in terms of risk, not process. Non-technical stakeholders do not need to know the defect severity taxonomy in detail, but they do need to understand what risks remain and whether the team considers them acceptable for release. A short summary works well: how many tests were run, how many passed, what defects are still open and why they are not blocking release, and what the team would monitor post-launch. Tying this to the testing exit criteria agreed at the start of the project also helps. If those criteria are met, the decision becomes straightforward to explain.
In waterfall, testing exit criteria are typically defined once at the start of the project and apply to a single, final testing phase before release. The focus is on reaching a defined quality threshold across the whole system before anything ships. In Agile, exit criteria apply at multiple levels: per sprint, per feature, and per release. Sprint-level criteria tend to be lighter, such as all user story acceptance tests passing and no new critical defects introduced. Release-level criteria are more comprehensive and resemble the waterfall approach more closely. The main practical difference is that Agile teams revisit and sometimes adjust their testing exit criteria between cycles, while waterfall teams tend to treat them as fixed from the start. Neither approach is inherently better. The right level of formality depends on the product, the team, and the regulatory environment.