I would be a very rich man if I got paid a dime every time I heard this statement when meeting a potential customer.
“I already know the test cases that we need.”
Frankly, I have a really hard time understanding this statement and always have to be careful not to reply with “You’ve got to be kidding me.”
During my professional career I have reviewed and analyzed hundreds of industrial system specifications that have also been used as input to the manual test design process. Looking at the complexity of these specifications and of the actual software systems that implement them, it is really difficult to understand how people believe that, with a purely manual approach to test design, they have managed to identify and cover all the required aspects in their testing. This is especially true when I look at the amount and complexity of the calculations that the Conformiq test generation engine needs to perform when generating test cases from these specifications. (Note that Conformiq’s inputs are models, computer-readable system specifications called “system models”, which describe how the system is supposed to work.)

If I did the same work in my head, using the same combinatorics that our test generation platform applies, it would take a huge amount of time, I would make tons of mistakes, and the end result would contain a huge number of omissions, errors, and redundancies. I would hardly be able to assess the completeness and coverage of my hand-crafted test suite, and besides, the test cases would be close to impossible to maintain and update on every system revision. Still, this same manual process is accepted in the industry more or less as the norm, and people are even confident enough to say, “Yes, I know the tests that we need.” Seriously? Is this ignorance, overconfidence, or just a simplified view that limited testing of happy-path use cases is sufficient to thoroughly cover all the complexity in today’s critical applications?
In contrast to the traditional approach to test design, the approach developed and adopted by Conformiq is fully automated. Here we create simple, high-level formal models of the system under test (SUT), i.e., the application, which are then used in fully automatic test case generation. So now the question is, “How does this automated approach to test design know what test cases we need?”
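To make the idea concrete, here is a minimal sketch of model-based test generation, assuming a toy turnstile system. This is not Conformiq’s engine or modeling notation, just an illustration of the principle: the model is a small finite state machine describing how the SUT should behave, and test cases are derived from it mechanically, one per modeled transition.

```python
# Toy sketch of model-based test generation (hypothetical example,
# not Conformiq's actual engine or notation).

# The "system model": (state, input) -> expected next state.
MODEL = {
    ("LOCKED", "coin"): "UNLOCKED",
    ("LOCKED", "push"): "LOCKED",
    ("UNLOCKED", "push"): "LOCKED",
    ("UNLOCKED", "coin"): "UNLOCKED",
}

def generate_tests(model, start="LOCKED"):
    """Derive one test case per modeled transition, so every specified
    behavior is exercised at least once (transition coverage)."""
    # Shortest input sequence reaching each state, found by BFS.
    paths = {start: []}
    frontier = [start]
    while frontier:
        state = frontier.pop(0)
        for (src, inp), nxt in model.items():
            if src == state and nxt not in paths:
                paths[nxt] = paths[src] + [inp]
                frontier.append(nxt)
    # Each test: inputs to reach the transition's source state, then the
    # transition's input, with the expected final state as the oracle.
    return [{"inputs": paths[src] + [inp], "expect_final": nxt}
            for (src, inp), nxt in sorted(model.items())]

for test in generate_tests(MODEL):
    print(test)
```

The point of the sketch is that the test designer only writes down how the system is supposed to work; the enumeration of test cases, including the unglamorous ones a human skips, falls out of the model algorithmically.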
First, the algorithmic approach to test design does not accidentally miss a test case dictated by the requirements, whether for an error-handling case, a boundary value of a data parameter, or the expiration of a rarely activated timer.
Second, the algorithmic approach to test design eliminates the randomly incorrect tests that slip into hand-written suites.
Third, with the algorithmic approach to test design there are fewer missing tests, because the algorithm does not accidentally overlook corner cases.
Fourth, with the algorithmic approach to test design there are fewer redundant test cases, because the resulting test sets are rigorously optimized by a computer, which checks that every test actually contributes.
Fifth, with the algorithmic approach to test design the generated tests are always traced back to the requirements, so the quality of the generated test suite is always measurable.
Finally, the whole process of algorithmic test design is systematic and repeatable.
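The first and third points are easy to illustrate with boundary values. Here is a small sketch, using hypothetical input parameters (an age field valid from 18 to 120 and an amount field valid from 1 to 10000, neither taken from any real product): a human tester tends to remember the happy-path values, while an algorithm enumerates every edge mechanically.

```python
# Toy sketch of systematic boundary-value enumeration
# (hypothetical parameters, not any specific tool's algorithm).
from itertools import product

def boundary_values(lo, hi):
    """Classic boundary-value analysis: just inside, on, and just
    outside each limit of a valid range [lo, hi]."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

ages = boundary_values(18, 120)      # [17, 18, 19, 119, 120, 121]
amounts = boundary_values(1, 10000)  # [0, 1, 2, 9999, 10000, 10001]

# Combining the interesting values mechanically: 6 * 6 = 36 cases,
# with no chance of forgetting an edge, an off-by-one, or an
# invalid/invalid combination.
cases = list(product(ages, amounts))
print(len(cases))  # 36
```

Even in this two-parameter toy, 36 systematically chosen cases is more than most hand-written suites contain for such a form, and a computer can additionally prune the combinations against a coverage criterion, which is exactly the kind of bookkeeping humans do badly.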
So, in summary, it is very typical that when the Conformiq automated test design approach is applied in these very same situations, we see improvements in quality, fault detection, traceability, and maintainability, while at the same time seeing reductions in cost and time. All of this is achieved without spending (wasting) time on understanding the details of the use cases and their tests; we only need to understand the application to be tested. So it seems that, after all, we did not fully know and understand the test cases that we need, nor did we even need to.