For context, artificial intelligence in this article refers to its modern state, not the ideal end goal. We live in the world of narrow or weak AI, which beats humans at individual tasks, such as trying out basic troubleshooting options faster than a developer would. We are still years, if not decades, away from truly strong AI that could do almost anything a human can. This means that artificial intelligence tests won’t happen without human input, but you can minimise that input considerably.
How does AI implementation improve the software testing process?
In essence, artificial intelligence in software testing is the natural evolution of automated QA. AI test automation goes a step further than emulating manual work. “The machine” also decides when and how to run the tests in the first place.
Innovation doesn’t end there. Artificial intelligence tests are already a thing. Depending on the implementation, tests are modified and/or created from scratch without any human input. If project complexity leaves you wondering how to test at all, AI could very well be the answer.
Benefits of AI
The benefits alone warrant a series of articles, depending on the definition of AI among other factors. Let’s stick to the benefits of AI tests and other uses of artificial intelligence for QA.
- AI automated testing is a time saver. We’ve covered using test automation tools to achieve scheduling miracles, but let’s take things up a notch. What if you could also keep only the useful tests? For instance, you can automatically sunset or suspend tests that never fail, then investigate whether they are indeed a waste of time.
- Test consistency can be increased quite a bit. It’s natural to occasionally run into flaky tests that fail for no apparent reason. Such tests can be automatically flagged for AI review, which will identify a coding issue or point you to a conceptual flaw shared across several tests.
- Test maintenance becomes much less cumbersome. This is especially relevant for B2C solutions that tweak the user interface daily (if not more frequently) for A/B testing purposes. Even small changes can be disruptive for tests imitating a user journey, e.g. when a button is simply not there anymore. Combining artificial intelligence with test automation means that your tests are adjusted for UI changes without human input.
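The first two benefits above boil down to mining test run history. Here is a minimal Python sketch of the idea, not a reproduction of how any specific tool works: it buckets tests into “never fails, consider sunsetting”, “likely flaky”, or “keep” based on their pass/fail record. The thresholds and function names are invented for illustration; real AI tooling would combine far richer signals such as code churn, timings, and failure messages.

```python
def _alternates(results: list[bool], min_flips: int = 3) -> bool:
    """Flaky tests flip between pass and fail rather than failing in one block."""
    flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
    return flips >= min_flips


def classify_tests(history: dict[str, list[bool]], min_runs: int = 20) -> dict[str, str]:
    """Bucket tests by their pass/fail history (True = pass, False = fail)."""
    verdicts = {}
    for name, results in history.items():
        fail_ratio = results.count(False) / len(results)
        if len(results) < min_runs:
            # Not enough runs to judge either way yet.
            verdicts[name] = "needs-more-data"
        elif all(results):
            # Never fails: candidate for suspension and manual review.
            verdicts[name] = "candidate-for-sunset"
        elif 0 < fail_ratio <= 0.2 and _alternates(results):
            # Occasional failures that alternate with passes: likely flaky.
            verdicts[name] = "flaky"
        else:
            verdicts[name] = "keep"
    return verdicts
```

Feeding this a history dictionary keyed by test name yields a verdict per test; the “candidate-for-sunset” and “flaky” buckets are what a human (or a smarter model) would then review.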
Best practices for AI in software testing
Here is some advice coming from the trial and error of companies at the bleeding edge of artificial intelligence testing.
- Know what you’re getting into. Pushing for AI test automation without adequate preparation is a huge time sink. Just like with regular automated tests, lacking a senior specialist who will lead the way is catastrophic.
- Get your test suite in order. Missing or incorrect labels, typos, and legacy databases may all skew data that will be used by artificial intelligence to improve your testing.
- Write down goals for implementing AI. This includes business goals that you hope to tackle (e.g. measurable improvements in retention through smoother UX), QA goals that will verify your AI endeavour was worth the effort, and some AI testing benchmarks to see if you’re on the right track.
- Give your colleagues a heads-up. Incorporating artificial intelligence into testing is a lengthy process, and it may affect the availability of QA specialists and their output, at least in the short term. Your Project Manager, Product Owner, and upper management will appreciate advance notice of such a drastic change. Naturally, developers should also be in the know, especially if they handle unit testing for the project.
- Make sure your test management is just as innovative. AI tests have little use if your team is still stuck doing QA on Excel. You need a dedicated test management solution that is friendly to third-party artificial intelligence tools.
Methods for AI-based software test automation
Methods for incorporating artificial intelligence into software testing mainly come from the most popular AI techniques. They are Machine Learning, Natural Language Processing (NLP), Automation/Robotics, and Computer Vision. Below are some examples of how these techniques are used for QA.
- Pattern recognition employs machine learning to find patterns in tests and/or test execution that can be turned into actionable insights. If issues with the same class make multiple tests fail, your AI solution will ask the team to take another look at the potentially problematic code. Pattern recognition can also be applied to your software’s own code to spot and predict potential vulnerabilities.
- Self-healing corrects automated tests when they start to become a headache. Flaky tests can finally be traced to the root of the problem. Seemingly unreproducible defects will be caught and resolved. Tests that fix themselves are a real game changer as your project grows.
- Visual regression testing keeps both your software and the tests for it in working order. This is where the UI tweaking example from earlier slots in. Good visual regression coverage eliminates a lot of redundant work, empowers the product team to be more ambitious about A/B testing, and helps them respond quickly to trends.
- Data generation is really useful alongside a primary software testing tool. Artificial intelligence can be employed to parametrise tests on a bigger scale, e.g. generating tons of profile pictures with rare resolutions and metadata to see whether users can upload them without issues.
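To make the data generation idea concrete, here is a small Python sketch of parametrising an upload test over edge-case image properties. It only builds parameter combinations rather than real image files, and every name and value in it (the resolutions, formats, and metadata variants) is an invented example, not the API of any tool mentioned in this article.

```python
import itertools
import random

# Invented edge cases: tiny, huge, and extreme-aspect-ratio images.
RARE_RESOLUTIONS = [(1, 1), (16, 16), (10000, 10000), (8192, 1), (1, 8192)]
IMAGE_FORMATS = ["png", "jpeg", "webp", "bmp"]
METADATA_VARIANTS = [
    {},                                  # no metadata at all
    {"Orientation": 8},                  # rotated 270 degrees
    {"GPSLatitude": "90.0"},             # edge-of-range GPS tag
    {"UserComment": "\u00e9" * 65000},   # oversized non-ASCII comment
]


def generate_upload_cases(sample=None, seed=42):
    """Build width/height/format/metadata parameter sets for an upload test.

    The full cartesian product covers every combination; pass `sample`
    to draw a reproducible random subset when the grid gets too large.
    """
    cases = [
        {"width": w, "height": h, "format": fmt, "metadata": meta}
        for (w, h), fmt, meta in itertools.product(
            RARE_RESOLUTIONS, IMAGE_FORMATS, METADATA_VARIANTS
        )
    ]
    if sample is not None:
        random.Random(seed).shuffle(cases)
        cases = cases[:sample]
    return cases
```

Each dictionary would feed one parametrised run of the upload test. The difference an AI-assisted tool brings is learning which combinations are most likely to surface defects, instead of sampling the grid uniformly.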
Best testing tools for AI software testing
Let’s look at some tools employing the methods described above.
Launchable uses pattern recognition to predict how likely a test is to fail. This information can be used to cut through the test suite and eliminate clear redundancies. You can also group tests and, for instance, run only the most problematic ones before deploying a hotfix. Launchable’s most recognised client is BMW.
Percy is a visual regression testing tool. It is great for keeping your UI tests relevant and also helps you maintain consistency of user interface across different browsers and devices. Google, Shopify, and Canva are all in Percy’s client portfolio.
mabl is a neat test automation platform with self-healing functionality. It preaches a low-code approach yet can be used perfectly fine the traditional way. Riot Games, JetBlue, and fellow IT companies like Stack Overflow and Splunk are featured on mabl’s website as clients.
Avo Test Data Management
Avo has a dedicated tool for managing test data, and its functionality includes AI data generation as well. The solution claims to mimic real-world data at large scale, with some data discovery on top. Avo is used by Sony, PwC, and one of aqua’s clients, Tech Mahindra.
Artificial intelligence methods in software testing are a truly powerful tool that pushes efficiency even further than regular automation does. Some subsets may seem a little excessive (e.g. data generation was a thing before people started labelling everything “AI”), but self-healing tests and pattern recognition are no small feats. Implementing AI in your quality assurance routine is certainly worth the effort as long as you formulate adequate goals and get the right people.
Introducing AI into your software testing, however, is meaningless without a good test management solution. You need solid test organisation to dabble with AI, and any serious effort adds the complexity of juggling multiple artificial intelligence QA tools. Make sure you find a good all-in-one test management solution before you set out on a software testing AI journey.