How do you know when to stop testing?

Software QA strives for perfection, but reaching it is not just near-impossible, it is rarely even feasible. So when should you stop software testing? Find out below, checklist included.


Is the Pareto Principle good enough?

Especially for non-QA stakeholders, it might be tempting to simply apply the good old Pareto Principle. It would mean that software testing could be stopped as early as 20% into the process, since you will have found 80% of the issues by that point. That would surely save quite a lot of money and speed up releases, right?

Unfortunately, things are not that simple. Depending on the industry, there may simply be too many potentially critical issues to cut the QA effort by 80%. Reducing expenses and releasing features faster might be tempting, but testing banking software, for example, means catching anything remotely insecure. Then there is the separate matter of regression: you can't just let developers do one round of fixes and trust that it didn't introduce new bugs. Alas, the Pareto Principle is too lean for software QA.

Even though the 80/20 rule can't be applied to software QA, there is still a need to balance resources against the outcome: hopefully, a relatively issue-free software product. How do you do that in QA? The answer is adding exit criteria to your test plan.

Defining test exit criteria

The purpose of exit criteria in a test plan is to guide you the way a checklist would. Just as running out of essential (or simply beloved) groceries sends you to the store, meeting all exit criteria signals that it is time to stop testing the software. Let's look at some of them.


Time

Sometimes it is simply time to stop testing. Modern Agile development runs on very short iterations, which means you have a fixed deadline that can't be moved. Yes, a feature that is still too raw for production will be postponed. But as far as sprint planning goes, you will still have to stop the quality assurance effort for now.

Test Budget

This is the simplest exit criterion for system testing. If you don't have the means to carry on, you will have to be content with the issues the QA team has found so far.

Requirements coverage

It's one thing to stop the QA work because you didn't have the time to dig deep into absolutely everything; we have discussed why that is unavoidable and even sensible. On the other hand, your effort should be broad enough to give you at least an idea of how all key pieces of the software are performing. Achieving full requirements coverage is a good reason to move on.
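Requirements coverage can be checked mechanically by comparing the list of requirements against the requirements each test case is linked to. Here is a minimal sketch; the requirement and test case IDs are invented for illustration, and real projects would pull this data from a test management tool.

```python
# Hypothetical example: flag requirements that no test case covers.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Which requirements each test case claims to cover (illustrative data)
test_case_links = {
    "TC-101": {"REQ-1"},
    "TC-102": {"REQ-2", "REQ-3"},
}

covered = set().union(*test_case_links.values())
uncovered = requirements - covered

if uncovered:
    print(f"Exit criterion NOT met, uncovered requirements: {sorted(uncovered)}")
else:
    print("Exit criterion met: full requirements coverage")
```

In this made-up data set, REQ-4 has no linked test case, so the criterion is not yet met.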

Test coverage

Unlike requirements coverage, there is no need to push for the 100% mark here. Still, the vast majority of the code that is planned to go into production should be covered by test cases, preferably wrapped into test scenarios for smoother quality assurance. Automated testing tools, or solutions that help you manage them such as aqua, are a great help as well.
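A "most but not all" coverage target translates naturally into a threshold gate. The sketch below assumes a 90% bar and made-up line counts; in practice the numbers would come from a coverage tool's report.

```python
# Illustrative coverage gate: stop testing only once measured coverage
# reaches an agreed threshold. The 90% bar is an assumption, not a rule.

def coverage_gate(lines_covered: int, lines_total: int, threshold: float = 90.0) -> bool:
    """Return True when measured line coverage is at or above the threshold."""
    coverage = 100.0 * lines_covered / lines_total
    print(f"Coverage: {coverage:.1f}% (threshold {threshold}%)")
    return coverage >= threshold

coverage_gate(1840, 2000)  # 92.0%, meets the assumed 90% bar
```

The same idea is built into many coverage tools as a fail-under option, so the gate can run in CI rather than as a manual check.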

Defect severity

Different companies grade defects on different scales. There is also an entirely separate conversation about the colloquial vs. factual difference between severity and priority. For the purposes of this article, let's go with the following defect severity classification:

  • 1 – Critical
  • 2 – Major
  • 3 – Minor
  • 4 – Low

So, when to stop testing? Simple: when you have fixed all Critical and Major defects. There are both software development and client relations reasons not to make the new version of your product less stable than the previous one. Resolving all defects of the two highest severity levels gives you that.
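Expressed in code, this rule is a simple filter over the open defect list. The defect data below is made up, and the severity numbers follow the 1–4 scale above.

```python
# Sketch of a severity-based release gate: block the release while any
# Critical (1) or Major (2) defect is still open. Defect data is illustrative.

open_defects = [
    {"id": "BUG-17", "severity": 3},  # Minor
    {"id": "BUG-21", "severity": 4},  # Low
]

BLOCKING_SEVERITIES = {1, 2}  # Critical and Major

blockers = [d for d in open_defects if d["severity"] in BLOCKING_SEVERITIES]

if blockers:
    print("No Go, blocking defects:", [d["id"] for d in blockers])
else:
    print("Severity exit criterion met: no Critical or Major defects open")
```

Here only Minor and Low defects remain open, so the severity criterion is satisfied.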

Further metrics

Quality assurance teams track many metrics to analyse the state of the product, their progress towards the upcoming release, and the overall productivity and success of the team. These metrics can also be used to define testing exit criteria. Some of them include:

  • Threshold of open defects (any severity)
  • Defect rate percentage
  • Test case pass/failure rate
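Two of the metrics above can be computed directly from raw test-run data. The formulas below use one common definition of each metric; both the definitions and the sample numbers are assumptions for the example.

```python
# Hedged example: compute a pass rate and a defect rate from test-run data.

def pass_rate(passed: int, executed: int) -> float:
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / executed

def defect_rate(defects_found: int, test_cases_executed: int) -> float:
    """Defects found per 100 executed test cases (one common definition)."""
    return 100.0 * defects_found / test_cases_executed

rate = pass_rate(470, 500)      # 94.0
d_rate = defect_rate(35, 500)   # 7.0

print(f"Pass rate: {rate:.1f}%, defect rate: {d_rate:.1f} per 100 test cases")
```

Each metric then becomes an exit criterion by pairing it with a threshold the team agrees on in advance.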

Functional testing success

Although the test case pass rate does not have to be 100%, all functional tests should be green before a new version of the product goes live. It is fine if some things are a bit wonky; that is what Minor and Low severity defects are for. All key features, however, should still work, even if not all user scenarios do.

A good example here would be QA testing in insurance. One of the primary insurance models means automatic coverage at partner clinics, with documentation handled by the health facility and no actual money changing hands. There are also plans that provide partial or full reimbursement at non-partner clinics, where the client has to file an insurance claim.

Filing an insurance claim is a key function of insurance company software, and you can't release a new version if clients can't apply for reimbursement. Your QA specialists, however, might find that the app fails to pre-fill claim data from a photo of a bill, while users can still enter everything manually and send the claim. As long as the key functionality, actually sending the claim and getting reimbursed, is still present, your team can release a new version of the software.

Go / No Go meeting

Last but not least, there is the Go / No Go meeting, where all the technical stakeholders decide whether you are ready to release the new version. If the previous exit criteria indicate that things are ready on the QA side, that is when you stop.

Test Exit Criteria Checklist

Here are sample exit criteria; feel free to exclude some, add more, or change the values.
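Such a checklist is easy to keep as data rather than a document, so the Go / No Go decision falls out of it automatically. Every criterion name and value below is an assumption to adapt to your own plan.

```python
# Hypothetical sample checklist: each exit criterion mapped to whether it is met.
exit_criteria = {
    "all requirements covered": True,
    "test coverage >= 90%": True,
    "no open Critical/Major defects": True,
    "all functional tests green": True,
    "pass rate >= 95%": False,  # e.g. currently at 94%
}

unmet = [name for name, met in exit_criteria.items() if not met]
print("Go" if not unmet else f"No Go, unmet criteria: {unmet}")
```

With one criterion still unmet, the script reports No Go; flipping the last value to True would clear the checklist.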
