Generating Tests for Mutation Analysis

Aug 5, 2019 | Blogs

Recently I wrote a blog post about mutation testing. That post was a high-level overview of what mutation testing is and how mutation analysis can be applied in automatic test generation. This time I thought I would write a more hands-on blog post about how to interpret and understand test cases generated by Conformiq’s recently introduced mutation-testing-equipped automated test design technology. Recall from that earlier post that each mutant introduces a small change into the original model, mimicking a potential software bug, and that we then automatically design and generate test cases that detect mutants by causing the behavior of the original version to differ from that of the mutant. This is what we call killing a mutant.
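To make the idea concrete, here is a minimal sketch in plain Python (not Conformiq code; the operator choice is a hypothetical example): a mutant is a small change to the original logic, and a test input kills it when the two versions disagree on the output.

```python
def original(x):
    return x > 0          # original behavior

def mutant(x):
    return x >= 0         # hypothetical mutant: '>' replaced by '>='

# x = 1 does not kill this mutant (both versions return True),
# but x = 0 does (original: False, mutant: True).
assert original(1) == mutant(1)
assert original(0) != mutant(0)
```

A test suite that only ever probes with x = 1 would let this mutant survive; mutation analysis pushes the generator toward inputs like x = 0 that expose the difference.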

What is so cool and important about mutation testing is that we generate test cases for particular types of software bugs. This has two implications:

  1. the test cases generated are stronger (they find more bugs) than ones generated from covering requirements and structural model aspects, and
  2. the test cases are easier to understand because they check for particular types of software faults.

Item (1) is crucial for us quality assurance professionals, as we burn so many calories trying to do our best to make sure that systems and applications work as they should. However, I would argue that item (2) is just as important for us humans, because this feature of mutation testing provides an explanation of the generated test cases. As test cases are generated to make sure that the application under test is free of software bugs, the test generator can tell us what type of software bug a particular test case checks for. This is in contrast to traditional coverage-based test generation, where we attempt to tie the “purpose” of each test to covering a particular model aspect such as a state or a transition.

Let’s kick-start our example by importing an example model into our Conformiq Designer workbench. Here we use the Triangle model, which is a small and simple model for classifying a triangle provided as input to the system. The default test design configuration of this particular example already enables mutation testing and targets various model mutations, so it is sufficient for us to simply generate tests from this model. You should get 17 test cases with Conformiq Designer 5.0.

Now navigate to the “Test Targets” view. This view lists all the test targets gathered from the model and documents how they are related to the generated test cases. That is, Test Targets is actually our traceability matrix. Expand the node for “Model Mutations” and zoom into “Conditional Operator Replacement”. This category of mutations models software bugs where conditional Boolean operators such as && and || are replaced. Pick the first model mutation underneath this category (“Conditional And expression in Triangle.cqa:40”). This model mutation pertains to the following model fragment:

else if (t.a == t.b && t.a == t.c) …

If you further expand the above node, you see that we have one test target there, namely “Killed mutation ‘t.a == t.b || t.a == t.c’”. This test target models a software bug where we have accidentally written a conditional OR instead of a conditional AND. That is, instead of writing t.a == t.b && t.a == t.c we wrote t.a == t.b || t.a == t.c. As you can see, the mutant has replaced the conditional AND operator with the OR operator, which is why this particular type of model mutation is called conditional operator replacement.

In order to verify that the system or application under test is free of this type of software bug, we must generate a test case that is strong enough to detect this mutant. The condition t.a == t.b && t.a == t.c holds if t.a, t.b, and t.c all have the exact same value. However, if we replace && with ||, it suffices that either t.b or t.c is equal to t.a. Given this information, we now want a test case that detects this behavioral difference: one that triggers this model part with data values that are (1) accepted by the mutated model version and (2) rejected by the original.
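The search for such killing data can be sketched in a few lines of Python. This is an illustration of the criteria only, not how the Conformiq engine actually works (it uses symbolic analysis rather than brute-force enumeration); the side lengths here are hypothetical.

```python
from itertools import product

def original_cond(a, b, c):
    return a == b and a == c   # original: all three sides equal

def mutant_cond(a, b, c):
    return a == b or a == c    # mutant: && replaced by ||

# Find side triplets accepted by the mutant but rejected by the original.
killing = [(a, b, c) for a, b, c in product(range(1, 5), repeat=3)
           if mutant_cond(a, b, c) and not original_cond(a, b, c)]

print(killing[:3])  # → [(1, 1, 2), (1, 1, 3), (1, 1, 4)]
```

Any triplet where exactly two of the values match t.a will do; the triplet Conformiq Designer actually picks, as we will see, is of this shape.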

After generating those 17 test cases, we can see in the Test Targets view that Conformiq Designer has been able to design a test case with test data that fulfills the above criteria. The test case that we see has t.a = t.b = 3 and t.c = 4. If we take this test case and run it against the original system, we see different behavior being triggered than with the mutated system. The original system classifies this particular triplet of integers as an isosceles triangle (which is correct), while the mutated version classifies it as an equilateral triangle.
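We can replay that divergence with a simplified, hypothetical classifier in Python (the real Triangle.cqa model may be structured differently; only the mutated condition is taken from the fragment above):

```python
def classify(a, b, c, buggy=False):
    """Simplified triangle classifier; buggy=True applies the mutant."""
    # Original equilateral check: a == b && a == c; mutant uses || instead.
    equilateral = (a == b or a == c) if buggy else (a == b and a == c)
    if equilateral:
        return "equilateral"
    if a == b or a == c or b == c:
        return "isosceles"
    return "scalene"

print(classify(3, 3, 4))              # → isosceles (original, correct)
print(classify(3, 3, 4, buggy=True))  # → equilateral (mutant misclassifies)
```

The observable difference in output is exactly what makes this test case kill the mutant.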

And that’s more or less all there is to it! All you as an engineer need to do is enable the support for mutation testing, and Conformiq Designer takes care of the rest. We get a compact set of extremely strong test cases that are explained in terms of the potential software bugs they check for!

But then again, you as a quality assurance professional do not need to care about the details of all this. The whole idea and purpose of tools like Conformiq Designer and Creator is that all the combinatorial complexity that goes into designing exceptional-quality test cases is “outsourced” to a computer that can perform a huge number of complex calculations, well beyond what we as humans can do.