By combining the language capabilities of ChatGPT's AI technology with the precision and efficiency of automated testing, businesses can streamline their testing process and produce higher-quality software. But could it also become a threat to real quality assurance? Let’s find out in this article.
Despite the doom-and-gloom headlines, ChatGPT won’t steal your testing job. It’ll actually make you better at it. Think of it as your coding friend that can churn out test cases in seconds or spot edge cases you might miss.
Here’s your starting move: feed ChatGPT a user story and ask it to generate negative test scenarios. You’ll be surprised how it uncovers weird edge cases that slip past manual reviews.
Just remember: always double-check its suggestions since AI can hallucinate invalid test conditions. Teams using AI-assisted testing report nearly doubled productivity, but the magic happens when you combine AI speed with human judgment.
I guess it’s quite obvious that ChatGPT is artificial intelligence. But what actually is it? Here is the definition of ChatGPT given by itself:
“ChatGPT is a language model developed by OpenAI, an artificial intelligence research laboratory. It is based on the transformer architecture and is trained on a large corpus of text data from the internet.
ChatGPT can generate human-like text based on the input provided, making it capable of performing a wide range of language-related tasks such as text completion, question answering, and text generation.
It has been fine-tuned on specific tasks, such as answering customer service queries and providing responses to users in a conversational context, making it a popular tool for building chatbots and other conversational AI applications.”
ChatGPT became available for mass use in November 2022. Developed by OpenAI, this chatbot made a huge buzz in the IT world. A lot of specialists found this tool a pretty good “partner in crime”: it helps accomplish certain tasks faster, like writing simple code lines for developers, verifying information and fact-checking for journalists, and writing original copy for marketers. Some tech people also got pretty gloomy for the same reason: wouldn’t ChatGPT also jeopardise their jobs?
No matter the attitude, employees should exercise caution when utilising AI tools as they may be susceptible to spreading false information and removing the personal touch from tasks such as writing. Despite its increasing use, many organisations have yet to establish clear guidelines for employee utilisation of AI technology.
Then there is Microsoft. As the biggest partner and investor of OpenAI (the company behind ChatGPT), Microsoft has recently authorised its employees to utilise the chatbot for work purposes as long as confidential information is not shared.
We also made an in-house GPT tool that we happily share with others. aqua’s AI Copilot creates entire test cases from scratch, completes test drafts, and helps you prioritise the QA effort. Unlike plain ChatGPT used for testing, our solution uses the context of your software and its test suite to make highly personalised suggestions.
Test smarter, not harder: try our AI-powered testing tool today
You can slash your test case creation time by throwing ChatGPT at the grunt work. Start with a simple prompt like “generate boundary test cases for a login form with username 8-30 characters” — you’ll get edge cases you might’ve missed.
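If you would rather script this than paste prompts into the chat window, here is a minimal sketch using OpenAI’s Python client; the model name and prompt wording are placeholders you would adapt to your own setup:

```python
# pip install openai  -- assumes the OPENAI_API_KEY environment variable is set
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate boundary test cases for a login form where the username "
    "must be 8-30 characters. List each case with the input and the expected result."
)

# Ask the model for boundary cases; "gpt-4o-mini" is just a placeholder model name
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```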
Ask it to brainstorm negative scenarios after feeding it your app’s context. Teams report that test coverage nearly doubled once they started using AI for the “what could go wrong” thinking.
Pick one feature you’re testing this week and ask ChatGPT to generate Gherkin scenarios for it. You’ll spot gaps in your current test suite within minutes.
A common pitfall: just copy-pasting whatever it gives you. The AI excels at volume and variety, but you still need to validate that each case makes business sense for your specific application.
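Once a human has confirmed which suggestions actually match your business rules, the surviving cases can go straight into an automated check. A minimal sketch with pytest, assuming a hypothetical `validate_username` function with an 8–30 character rule:

```python
import pytest

# Hypothetical validator under test: accepts usernames of 8-30 characters
def validate_username(username: str) -> bool:
    return 8 <= len(username) <= 30

# Boundary cases suggested by ChatGPT, kept only after a human review
@pytest.mark.parametrize(
    "username, expected",
    [
        ("a" * 7, False),   # just below the minimum
        ("a" * 8, True),    # exact minimum
        ("a" * 30, True),   # exact maximum
        ("a" * 31, False),  # just above the maximum
        ("", False),        # empty input
    ],
)
def test_username_boundaries(username, expected):
    assert validate_username(username) == expected
```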


You’ll get the best results from ChatGPT and other AI tools when you treat them as smart assistants rather than magic solutions. The key is staying in the driver’s seat and letting AI generate test scenarios or draft scripts, but you need to review everything it spits out. Here’s what works: be really specific with your prompts. Instead of asking ‘write test cases for login,’ try ‘create negative test cases for a mobile login form with email/password fields, focusing on validation errors.’
Important: too many teams throw sensitive customer data into public AI models. Don’t do this. Use anonymised test data or create mock datasets instead.
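Generating mock data locally is straightforward. Here is a minimal sketch using the Faker library; the field names are hypothetical and stand in for whatever your application needs:

```python
# pip install faker
from faker import Faker

fake = Faker()

# Build anonymised test users instead of exporting real customer records
mock_users = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "signup_date": fake.date_this_year().isoformat(),
    }
    for _ in range(10)
]

for user in mock_users:
    print(user)
```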
Start by picking one repetitive task. Maybe it is generating edge case scenarios for user input validation. Track how much time you save versus manual creation. Teams using aqua cloud find AI cuts their initial test design time by about 42%, though you’ll spend roughly 15% of that time on review and refinement. The sweet spot isn’t replacing human judgment but amplifying it.
Write a Test Case to create and edit a new Requirement
Test Case for Creating and Editing a New Requirement
Test Objective: To verify that a user can successfully create and edit a new requirement in the requirements management system.
Test Environment: Requirements Management System, Internet Browser
Test Steps:
Expected Results:
Test Conclusion:
If the results of the test match the expected results, the system’s ability to create and edit a new requirement has been verified.
Here is the same test case created by our tester manually in aqua:


Open aquaWeb
Create Requirement
Name
Save & Close
Check
Open Requirement
Edit Requirement
Save & Close
Reopen the Requirement
Edit description
We tried another request to see how ChatGPT would cope with a more complicated test scenario:



Test Case for Creating and Editing a Test Case and Tracking the Last Editor
Test Objective: To verify that a user can create a test case, edit it, and track the last editor of the test case.
Test Environment: Test Management System, Internet Browser
Test Steps:
Expected Results:
Test Conclusion:
If the results of the test match the expected results, the system’s ability to create and edit a test case and track the last editor has been verified.
And here is the same test case executed in aqua:
Login of U1
Logout U1
Login of U2
Logout U2
Login of UserA
Step1 was edited by U1:

Step2 was edited by U2:

"ChatGPT executed the task perfectly taking into account that input data was limited to a short request sent into a ChatGPT window. However, the lack of context makes this test case too general: it does not cover individual company requirements."
Take the guesswork out of testing with aqua’s AI tool
ChatGPT can be used to test the accuracy and effectiveness of chatbots by generating test inputs and evaluating the outputs. This can help identify and fix issues with chatbots before they reach end-users.
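In practice that loop can be scripted. The sketch below assumes a hypothetical `my_chatbot_reply` function standing in for the chatbot under test, and uses OpenAI’s Python client to generate the inputs:

```python
from openai import OpenAI

client = OpenAI()

def my_chatbot_reply(message: str) -> str:
    """Placeholder for the chatbot under test."""
    return "Sorry, I did not understand that."

# 1. Ask ChatGPT for varied test inputs
questions = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write 5 short customer questions about a delayed parcel, one per line.",
    }],
).choices[0].message.content.splitlines()

# 2. Run each input through the chatbot and collect the replies for review
for question in questions:
    if question.strip():
        print(f"Q: {question}\nA: {my_chatbot_reply(question)}\n")
```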
Natural Language Processing (NLP) is a field of computer science that deals with enabling computer programs to understand and interpret human language, both written and spoken, in a manner that resembles human comprehension.
ChatGPT can be used to evaluate and improve NLP models by generating test inputs and evaluating the model’s ability to accurately process and understand natural language.
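One way to do this is to generate paraphrases of a known utterance and check that the model still predicts the same intent. A rough sketch, with `classify_intent` as a placeholder for the NLP model under test:

```python
from openai import OpenAI

client = OpenAI()

def classify_intent(text: str) -> str:
    """Placeholder for the NLP model under test."""
    return "cancel_subscription"

# Ask ChatGPT for paraphrases of a known utterance to probe the model's robustness
paraphrases = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Give 5 different ways a user might say 'I want to cancel my subscription', one per line.",
    }],
).choices[0].message.content.splitlines()

# Every paraphrase should still map to the same intent
for text in filter(str.strip, paraphrases):
    predicted = classify_intent(text)
    print(f"{text!r} -> {predicted} ({'OK' if predicted == 'cancel_subscription' else 'MISS'})")
```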
Sentiment Analysis, also known as Opinion Mining, is a branch of Natural Language Processing (NLP) that focuses on determining the sentiment expressed in a given piece of text. It aims to categorise the sentiment expressed as positive, negative, or neutral. This technique is widely used by businesses to gauge customer sentiment towards their brand and products by analysing feedback, enabling them to gain valuable insights and better understand the needs of their customers.
ChatGPT can be used to generate test inputs for sentiment analysis models, helping to evaluate their accuracy and identify areas for improvement.
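A simple way to apply this is to ask ChatGPT for labelled review snippets and score the model against them. The sketch below assumes a hypothetical `predict_sentiment` function and that the model returns clean JSON; in practice you may need to strip markdown fences before parsing:

```python
import json
from openai import OpenAI

client = OpenAI()

def predict_sentiment(text: str) -> str:
    """Placeholder for the sentiment model under test: 'positive', 'negative' or 'neutral'."""
    return "neutral"

# Ask ChatGPT for labelled review snippets, including tricky sarcastic ones
raw = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Return a JSON list of 6 objects with keys 'text' and 'label' "
            "(positive/negative/neutral) for short product reviews, "
            "including at least one sarcastic negative review."
        ),
    }],
).choices[0].message.content

# Note: the model may wrap the JSON in markdown fences; clean the output before parsing
samples = json.loads(raw)
correct = sum(predict_sentiment(s["text"]) == s["label"] for s in samples)
print(f"Accuracy on generated samples: {correct}/{len(samples)}")
```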
To improve the accuracy of a Voice Assistant model, companies need to get a vast amount of data by recording speech samples to train the voice recognition system, making it more accurate and natural for users. This follows another problem — how to test it when there is so much data. And ChatGPT can partially help by generating test inputs for voice assistant applications or evaluating their ability to recognise and process spoken commands accurately.
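Voice assistants typically transcribe speech to text before interpreting it, so even text-only variations of a command are useful test data. Another rough sketch, with `parse_command` standing in for the assistant’s command parser:

```python
from openai import OpenAI

client = OpenAI()

def parse_command(utterance: str) -> dict:
    """Placeholder for the voice assistant's command parser."""
    return {"action": "unknown"}

# Generate varied phrasings of the same spoken command to broaden test coverage
utterances = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "List 5 ways someone might ask a smart speaker to turn off the living room lights, one per line.",
    }],
).choices[0].message.content.splitlines()

for utterance in filter(str.strip, utterances):
    print(utterance, "->", parse_command(utterance))
```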
These are just a few examples of voice assistant and IoT applications that can all be tested and enhanced with extra test data:
However, this technology is continually evolving and being integrated into new devices and products. To bring their models to perfection, developers outsource data collection to a global community of paid contributors. That approach is time- and money-consuming, whereas ChatGPT can provide sample data faster and with less effort. It also keeps sensitive data in fewer hands (a security-by-obscurity benefit, inherently flawed as that approach is) and removes the ethical concerns of using underpaid labour from developing countries to train the AI models behind voice assistants.
Our team has prepared an overview of AI testing trends to cover what is possible with ChatGPT and beyond. The ebook also includes a comparison of test management solutions with AI functionality to help you put insights to good use.
Learn the 5 AI testing trends to save 12.8 hrs/week per specialist
The future of testing with AI holds a lot of potential for improving the efficiency and accuracy of the testing process. With AI’s ability to analyse large amounts of data, automate repetitive tasks, and make predictions, it has the potential to revolutionise the way software is tested. Here are a few ways AI is expected to impact testing in the future:
The use of AI in testing is expected to greatly improve the efficiency and accuracy of the testing process, allowing teams to focus on more complex tasks and deliver high-quality software faster.
AI tools like ChatGPT are transforming test creation, but they come with real blind spots you need to watch for. Vague requirements? Your AI will spit out generic tests that miss the mark, especially in specialised business domains. Self-healing automation sounds brilliant until a major UI redesign throws everything off course, or dynamic elements start shifting around like they’re playing hide-and-seek.
Another underrated problem: sentiment analysis tools often miss sarcasm completely (think customer reviews saying ‘just perfect’ about a broken product). AI-generated test data can also bake in biases unless you specifically ask for diverse user scenarios, something most people forget to do.
So you need to always pair AI output with a quick sanity check. Ask it to poke around edge cases, clarify any wonky business logic, and double-check that security protocols stay intact. Your AI assistant is powerful, but it’s not psychic, so treat it like a smart junior developer who needs clear direction.
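That sanity check can be partly automated too. Below is an illustrative filter that flags AI-generated cases violating obvious business constraints; the rules and data are made up for demonstration:

```python
# A lightweight sanity filter for AI-generated test cases before they enter the suite.
# The validation rule here is hypothetical; replace it with your own business constraints.
generated_cases = [
    {"field": "age", "value": -5, "expected": "rejected"},
    {"field": "age", "value": 200, "expected": "accepted"},   # hallucinated: 200 should be rejected
    {"field": "email", "value": "user@example.com", "expected": "accepted"},
]

def looks_sane(case: dict) -> bool:
    # An out-of-range age must be expected to fail, otherwise the case needs review
    if case["field"] == "age" and not (0 <= case["value"] <= 130):
        return case["expected"] == "rejected"
    return True

for case in generated_cases:
    status = "keep" if looks_sane(case) else "needs human review"
    print(case, "->", status)
```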
Slide rules were used by engineers and other mathematically involved professionals for centuries before the invention of the pocket calculator. However, the introduction of calculators did not make these professionals obsolete. Instead, it made their work faster, more accurate, and more efficient. Similarly, AI tools can assist QA engineers in performing their tasks more effectively and efficiently, without necessarily replacing them.
The use of AI tools brings benefits of automating testing for tasks that are repetitive and time-consuming, such as regression testing. QA engineers get to focus on higher-level testing activities that require human creativity, intuition, and problem-solving skills. These tools can help identify patterns, analyse data, and generate insights that can be challenging to detect manually. They also reduce the risk of human error, which can be especially important in safety-critical applications.
Summing up, you shouldn’t expect AI testing to replace humans, but you can’t dismiss it either. QA engineers must be open to embracing these tools and adapting their skill sets to take advantage of them. By leveraging AI tools, QA engineers can improve their work, provide more value to their organisations, and ultimately enhance the quality of the software they deliver.
Reduce clicks by testing with AI — get more time for improvement
ChatGPT stands for Chat Generative Pre-trained Transformer — it’s a chatbot developed by OpenAI using supervised learning and reinforcement learning.
AI can predict where bugs may occur, automate manual testing tasks, continuously monitor applications in real-time, and improve test coverage. This will allow teams to focus on more complex tasks and deliver high-quality software faster.
One of the key benefits of using AI in software testing is its ability to identify patterns and detect anomalies that might be difficult for human testers to identify. This is particularly true in cases where there are large datasets involved, as it can be easy for human testers to miss critical issues.
In addition, AI can help to optimise testing by identifying which areas of the software are most critical to test, and which tests are likely to be the most effective at uncovering issues. This reduces the amount of time and resources required for testing, while still maintaining a high level of quality.
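As a purely illustrative example of what such prioritisation can look like, here is a toy risk-scoring heuristic; the weights and module data are invented for demonstration:

```python
# An illustrative risk-scoring heuristic for prioritising which areas to test first.
modules = [
    {"name": "checkout",  "recent_changes": 14, "past_defects": 9},
    {"name": "profile",   "recent_changes": 3,  "past_defects": 1},
    {"name": "reporting", "recent_changes": 7,  "past_defects": 4},
]

def risk_score(module: dict) -> float:
    # Weight recent change activity slightly higher than historical defects
    return 0.6 * module["recent_changes"] + 0.4 * module["past_defects"]

# Test the riskiest modules first
for module in sorted(modules, key=risk_score, reverse=True):
    print(f"{module['name']}: risk {risk_score(module):.1f}")
```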