Start testing early
Good quality assurance always starts early. The primary reason is the time and monetary cost of late-stage testing: your QA specialists may identify serious flaws at or near the architecture level. Such issues are difficult, sometimes impossible, to fix, requiring awkward workarounds or even a project reboot.
The consequences of late testing are exacerbated if you are working on a live service product. Whether it is a marketplace or a booming video game, you usually won’t have the resources to add features and rewrite a flawed solution at the same time; that is hard to achieve even if you are currently prioritising growth over profits.
Here’s what a former Twitter software engineer had to say about the performance of the company’s Android app:
‘I think there are three reasons the app is slow. First, it’s bloated with features that get little usage. Second, we have accumulated years of tech debt as we have traded velocity and features over performance. Third, we spend a lot of time waiting for network responses.
Frankly, we should probably prioritise some big rewrites to combat 10+ years of tech debt and make a call on deleting features aggressively.’
If the Twitter saga has taught us anything, it’s this: don’t leave testing to the last minute, and you will avoid decade-long performance bottlenecks and all the drama that comes with them.
Pick your metrics
On the surface, it may seem easy to tell whether your app works well: if it loads quickly and doesn’t crash, it’s good to go. A software performance testing strategy, however, has more nuance and needs more detail, especially for testing SaaS applications.
Load Time and Response Time are some of the most impactful metrics that quantify the performance of your solution. Then there are metrics like Time to First Byte, which may not have the same real-life impact but prove extremely important for Google ranking if you’re running a website.
Stability metrics are important to include even if you are not expecting a massive number of visitors. Maximum Requests per Second, Peak Response Time, Throughput, and Bandwidth are all useful indicators. And Uptime is arguably the most important metric of all, even if you’re running an online flower shop with 10 daily visitors.
Build a testing software suite
It’s not just different metrics that you will have to juggle. There are 6 primary types of performance testing, and you may need more than one tool to cover them all. JMeter is an amazing tool for load testing, but ReadyAPI will be just as important if you are building an API performance testing strategy.
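Tools like JMeter handle this at scale, but the core idea of a load test, many concurrent users hitting one target while latencies are recorded, fits in a few lines of Python. The target below is a stand-in function rather than a real endpoint, so the numbers are illustrative only.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Stand-in for an HTTP call; returns the observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated server work
    return time.perf_counter() - start

CONCURRENT_USERS = 8   # simulated virtual users
TOTAL_REQUESTS = 40

started = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(lambda _: fake_request(), range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - started

print(f"{TOTAL_REQUESTS} requests in {elapsed:.2f}s "
      f"({TOTAL_REQUESTS / elapsed:.1f} req/s), "
      f"worst latency {max(latencies):.3f}s")
```

A real tool adds what this sketch lacks: ramp-up schedules, assertions on responses, distributed load generators, and reporting, which is exactly why a dedicated load-testing tool belongs in the suite.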
Another important consideration is aligning performance testing with other parts of your QA software package. If your company uses Selenium for test automation, you might as well automate performance testing with a Selenium-based solution. It’s the same with the low-code/no-code solution if you are using one.
You will also greatly benefit from a single solution to orchestrate all these different tests. Our advice is to have a look at existing QA infrastructure, pick an enterprise performance testing tool, and find an integration-friendly test management solution to govern all the tools.
The limits of AI tech are an important consideration when picking your tools, too. These limits have recently been pushed by a number of tools that utilise the model behind ChatGPT to boost QA at scale. We cover the latest developments and even compare tools in our overview of AI testing trends.
Organise your tests
It gets confusing to manage tests from multiple tools, but even a single solution can get messy fast. You need to establish good naming conventions, define a structure for test cases, and make sure that your team sticks to both.
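A naming convention is easiest to enforce when it is checked automatically. The pattern below, an area, a scenario, and a running number, is a hypothetical example rather than any standard; adapt it to whatever structure your team agrees on.

```python
import re

# Hypothetical convention: PERF-<AREA>-<SCENARIO>-<3-digit number>,
# e.g. PERF-CHECKOUT-PEAKLOAD-001.
TEST_ID_PATTERN = re.compile(r"^PERF-[A-Z]+-[A-Z]+-\d{3}$")

def is_valid_test_id(test_id: str) -> bool:
    """Return True if a test case ID follows the team's naming convention."""
    return bool(TEST_ID_PATTERN.match(test_id))

print(is_valid_test_id("PERF-CHECKOUT-PEAKLOAD-001"))  # True
print(is_valid_test_id("checkout test 1"))             # False
```

A check like this can run in CI or as a linting step in your test management tool, so a malformed name never makes it into the suite in the first place.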
A good structure extends beyond test cases. You can organise them into test scenarios, establish dependencies, and improve your bug reporting culture. Covering all functional requirements with performance tests is a natural goal, but it is one you should track explicitly.
Establishing a good routine is just one half of the equation: you need to follow it as well. You can start by regularly bringing up any protocol-related issues in retrospective meetings. Using test management solutions with workflow functionality is a great way to ease the transition and future onboarding.
Ask your users
The flip side of good metrics is that performance testing can get too numbers-driven. If you look at nothing but milliseconds, it is easy to forget their real impact on the end user. It is great that your users can quickly pick the size of the shoes they are about to order, but they most likely filtered by size in the first place. When working with limited resources, it may be better to direct the performance testing effort, and developers’ optimisation time, elsewhere.
You may also consider studying heatmaps and/or entire sessions, both of users who brought you new business and of those who left without a purchase. Looking at the buying process in a busy season, you will probably find that users who make it to the checkout are likely to complete their order. Given limited QA resources and server capacity, it is better to make sure that choosing products is smooth even at peak load.
Our list of best practices for creating a performance testing strategy ended up covering more than the actual testing. After all, you need to establish good processes and keep the end user in mind no matter which type of testing you run. Adjust these tips to your team and make top performance your trademark.