Combinatorial Test Data Generation with Conformiq

Oct 31, 2014 | Blogs

Suppose a system model states that when a message comes in, it is forwarded out unchanged. This particular message has a number of fields, some of them integers, some strings. Suppose also that there is reason to suspect that the forwarding feature in the real implementation is flawed, so we would like a number of different message combinations to exercise this particular feature. However, because the model predicts that the message is forwarded verbatim, Conformiq Designer will generate only one test for it.

The “combinatorial test data generation” support in Conformiq Designer can be used to overcome this challenge. Combinatorial testing rests on the insight that not every “variable” contributes to every failure: most failures are triggered by a single parameter value or by interactions between a relatively small number of parameters.

For example, assume 3 variables in the model A, B, and C with possible values below:

A = {1,2,3}
B = {11,12,13}
C = {21,22,23}
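To make the numbers concrete, the full Cartesian product of these three variables already contains 27 combinations. A minimal Python sketch (illustrative only, not Conformiq code):

```python
from itertools import product

# Value domains from the example above.
A = [1, 2, 3]
B = [11, 12, 13]
C = [21, 22, 23]

# Exhaustive ("all combinations") testing would need one test per
# element of the Cartesian product: 3 * 3 * 3 = 27 tests.
all_combinations = list(product(A, B, C))
print(len(all_combinations))  # 27
```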

Overall, combinatorial methods have been shown to significantly reduce cost and increase quality in software and system testing. Their basis, as sketched above, is the interaction principle, which states that most software failures are induced by single-“factor” faults or by a combinatorial effect (that is, an interaction) of two factors, with progressively fewer failures induced by interactions among more factors. Therefore, if a significant share of the faults in the system under test can be induced by a combination of N or fewer factors, then testing all N-way combinations of values provides a high rate of fault detection. This is also what has been observed empirically over the years.

While combinatorial methods have been shown to be an effective way of finding software faults, it is crucial to keep in mind that combinatorial test data design, important as it is, is only one part of the whole picture and by itself has limited value. Even if we design an “optimized” set of data combinations for stimulating the system under test by applying combinatorial methods, we also need to understand how the system should react to each such input combination. This is part of the “test oracle problem”, which can be a huge endeavor, may take considerable time, and is a very error-prone process.

Conformiq allows users to combine both sides: combinatorial test data creation and test oracle generation. This means that Conformiq Designer will automatically design and create test cases in which the data combinations used as stimuli to the system under test are each paired with the exact expected response from the SUT. There is no need for additional (manual) test design.

With pairwise testing we aim to generate tests where two variables are in “interaction”. As variable A in the example above can take the values 1, 2 and 3, and B the values 11, 12 and 13, Conformiq Designer will (when instructed to do so) generate the interactions of those parameters: {1,11} {1,12} {1,13} {2,11} {2,12} … {3,12} and {3,13}. With all pairs (that is, N-wise with N=2) you state that the above variables are in 2-wise interaction, meaning that you need to find a set of values in which every pair of values is represented. When more than two parameters interact, Conformiq Designer simply expands to include them: taking C into account as well, C interacting with A gives {1,21} {1,22} … {3,23}, and B interacting with C gives {11,21} {11,22} … {13,23}.
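The set of pairs that must be covered can be enumerated mechanically. A Python sketch of the idea (again illustrative, not Conformiq's implementation):

```python
from itertools import combinations, product

domains = {"A": [1, 2, 3], "B": [11, 12, 13], "C": [21, 22, 23]}

# For "all pairs" (pairwise) coverage, every pair of variables, and
# every pair of their values, must appear in at least one test.
required_pairs = set()
for (v1, d1), (v2, d2) in combinations(domains.items(), 2):
    for x, y in product(d1, d2):
        required_pairs.add(((v1, x), (v2, y)))

print(len(required_pairs))  # 3 variable pairs * 9 value pairs = 27
```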

With all combinations we aim to generate tests where all the variables are in interaction. Here the “solution space” is larger than for all pairs, because one big all-combinations solution can contain multiple pairs (for example, the solution {1,11,21} contains three pairs, namely {1,11}, {1,21} and {11,21}).
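Because each test covers several pairs at once, far fewer than 27 tests are needed for pairwise coverage of this example. One well-known construction (a Latin-square style pick, sketched here for illustration; this is not Conformiq's algorithm) covers all 27 pairs with just 9 tests:

```python
from itertools import combinations

A, B, C = [1, 2, 3], [11, 12, 13], [21, 22, 23]

# Latin-square style construction: C's value is derived from the
# indices of A's and B's values, so every pair of columns takes all
# nine value pairs across the nine tests.
tests = [(A[i], B[j], C[(i + j) % 3]) for i in range(3) for j in range(3)]

# Verify coverage: collect the column/value pairs each test covers.
covered = set()
for t in tests:
    for (c1, v1), (c2, v2) in combinations(enumerate(t), 2):
        covered.add((c1, v1, c2, v2))

print(len(tests), len(covered))  # 9 tests cover all 3 * 9 = 27 pairs
```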

In some cases, which we also see in practice, all pairs is not “good enough”, while all combinations causes Conformiq Designer to generate a test suite of impractical size. For these cases, Conformiq Designer also supports N-wise interaction testing with N>2. In the example above, setting N=3 would yield the same set as all combinations; the difference appears once there are more than three interacting parameters.
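To see why, consider a hypothetical setup (not from the article's example) with four variables of three values each. All combinations would require 3^4 = 81 tests, while 3-wise coverage only has to hit every value triple, giving a much smaller lower bound:

```python
from math import comb, prod

# Four hypothetical variables, three values each (illustrative).
domains = [[0, 1, 2]] * 4

all_combinations = prod(len(d) for d in domains)  # 3**4 = 81

# Distinct value triples that 3-wise coverage must hit: choose 3 of
# the 4 variables, times 3*3*3 value assignments per choice.
triples_to_cover = comb(4, 3) * 3 ** 3            # 4 * 27 = 108

# Each single test covers comb(4, 3) = 4 of those triples, so any
# covering set needs at least ceil(108 / 4) = 27 tests, well under 81.
lower_bound = -(-triples_to_cover // comb(4, 3))

print(all_combinations, triples_to_cover, lower_bound)  # 81 108 27
```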

Modeling Explicit Data Values

Suppose that you want to test a web application with the following features (assuming we understand a priori at least the interesting values that the system can accept):

Parameters of a web application:

Operating System: Windows XP, Windows Vista, Windows 7, Linux, Mac
Architecture: 32 bit, 64 bit
Browser: Firefox, Chrome, IE, Safari
JavaScript enabled: true, false
There are 5 x 2 x 4 x 2 = 80 different combinations of the testing parameters. Even in this trivial example the number of combinations is already quite large, which demonstrates the need for methods that narrow down the number of combinations without significantly sacrificing the quality of the testing. The idea therefore is to pick as small a subset of all the possible combinations as still provides a high rate of fault detection. In order to apply combinatorial testing with Conformiq Designer, we would next model the information in the table above in a Conformiq model as follows:
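The count of 80 follows directly from the Cartesian product of the four parameter domains; a quick Python check (illustrative only):

```python
from itertools import product

os_values = ["Windows XP", "Windows Vista", "Windows 7", "Linux", "Mac"]
arch_values = [32, 64]
browser_values = ["Firefox", "Chrome", "IE", "Safari"]
js_values = [True, False]

# Exhaustive testing needs one test per element of the product.
combos = list(product(os_values, arch_values, browser_values, js_values))
print(len(combos))  # 5 * 2 * 4 * 2 = 80
```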

static combine_allpairs {
  msg.os belongs_to {
    "Windows XP", "Windows Vista", "Windows 7", "Linux", "Mac" };
  msg.arch belongs_to { 32, 64 };
  msg.browser belongs_to { "Firefox", "Chrome", "IE", "Safari" };
  msg.js_enabled belongs_to { true, false };
}

From this model fragment, Conformiq Designer will create an optimized set of test cases aiming to cover “all pairs combinations”, as described above, in as compact a test set as possible.
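How compact can such a set be? Since each test assigns exactly one value to each parameter, it covers exactly one operating system/browser pair, so any pairwise test set needs at least 5 x 4 = 20 tests, versus 80 for all combinations. A Python sketch of this counting argument (illustrative only):

```python
from itertools import combinations

domains = {
    "os": ["Windows XP", "Windows Vista", "Windows 7", "Linux", "Mac"],
    "arch": [32, 64],
    "browser": ["Firefox", "Chrome", "IE", "Safari"],
    "js": [True, False],
}

# Total variable/value pairs that pairwise coverage must hit.
required = sum(len(d1) * len(d2)
               for d1, d2 in combinations(domains.values(), 2))

# A single test covers only one value pair per variable pair, so the
# largest product of two domain sizes is a lower bound on the test
# set size: here 5 operating systems * 4 browsers = 20.
lower_bound = max(len(d1) * len(d2)
                  for d1, d2 in combinations(domains.values(), 2))

print(required, lower_bound)  # 60 20
```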

As mentioned earlier, the creation of the test input combinations is only one side of the coin and, as such, is not enough to make good judgments about the correct operation of the system. Users also need to understand how the system should react to each input combination, and whether the system can even be stimulated with it. In the example above, the system could, provided that it operates correctly and according to the specification, reject invalid combinations such as {“Linux”, “IE”} and respond to such stimuli differently than to most other combinations.

Provided that this information (that is, that the combination of “Linux” and “IE” should be rejected by the system) has been identified in the specification and then modeled, the Conformiq Designer tool will produce a test case that stimulates the system with this input combination and expects the system to reject it, while also producing a report stating that the combination is handled differently from the rest of the combinations. It could also be that this input combination should not be accepted at all, in which case the Conformiq Designer tool would produce a report stating that there is no test for this particular input combination, along with a coverage report detailing that fact. This allows the end user to gain a better understanding of the overall testing and also helps to improve the model and even the specification.

A full example is listed below:

system {
  Inbound in : Message;
  Outbound out : Accept, Reject;
}

record Message {
  String os;
  int arch;
  String browser;
  boolean js_enabled;
}

record Accept { }
record Reject { }

void main() {
  Message x = cq_receive<Message>(in);
  static combine_allpairs "Web Application" {
    x.os belongs_to {
      "Windows XP", "Windows Vista", "Windows 7", "Linux", "Mac" };
    x.arch belongs_to { 32, 64 };
    x.browser belongs_to { "Firefox", "Chrome", "IE", "Safari" };
    x.js_enabled belongs_to { true, false };
  }
  if (x.browser == "IE" && x.os == "Linux") {
    Reject r;
    out.send(r);
  } else {
    Accept a;
    out.send(a);
  }
}

Dynamic Test Data Generation

In certain cases users do not fully understand all the interesting values they should use to stimulate the system, and therefore cannot meaningfully describe a table like the one in the previous section for those input parameters. There are also numerous cases where users have identified a part of the model as more important, or as carrying more “risk”, than other model parts, and therefore want to apply combinatorial testing methods to test those parts more extensively than others, without having the desire or even the possibility to describe the data values to be considered by the combinatorial methods.

In order to apply combinatorial testing methods in such situations, Conformiq Designer also includes support for dynamic combination coverage, where the user does not need to explicitly define the interesting input parameter values (as in the example above); instead, the Conformiq Designer engine calculates them on the fly, directly from the model. The part of the model from which Conformiq Designer is to automatically generate data combinations is explicitly marked in the textual modeling language with the combine_allpairs, combine_nwise(int n) and combine_all constructs, for example as follows:

combine_all {
  require (msg.a == 1 || msg.a == 2 || msg.a == 3);
  require (msg.b == "a" || msg.b == "b" || msg.b == "c");
}

The above example introduces 9 different goals that Conformiq Designer aims to cover, allowing Conformiq Designer to generate 9 different test cases to test all 9 combinations.
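The 9 goals are simply the cross product of the two constrained value sets; a Python check of the count (illustrative only):

```python
from itertools import product

# The combine_all block constrains msg.a to {1, 2, 3} and
# msg.b to {"a", "b", "c"}; combine_all targets every combination.
goals = list(product([1, 2, 3], ["a", "b", "c"]))
print(len(goals))  # 3 * 3 = 9
```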

Dynamic combinatorial testing support in Conformiq Designer enables one to place an arbitrary model part within a combinatorial block and thereby combine combinatorial testing methods with other testing methods, such as equivalence-class partitioning and boundary value analysis, as the following simple model fragment demonstrates:

combine_allpairs {
  // NOTE: the field accessed below was lost in the original listing;
  // "" is an assumed reconstruction.
  if ( > 0 && < 10) {
    requirement "Valid identifier";
    if (sessions.contains( {
      requirement "Valid session";
      require (msg.x == 1 || msg.x > 15);
    } else {
      // ... (the else branch is truncated in the original)
    }
  }
}

While dynamic combinatorial testing support as described here is a hugely valuable tool for creating a large quantity of test cases for features that we want to test more extensively (as it relieves the user from the error-prone process of designing and enumerating possible data values explicitly and manually), the downside is that Conformiq Designer cannot report detailed data combination coverage, because the data values are not known a priori. When the interesting values are known a priori and/or our testing practices can tolerate the risk of potentially missing a data value due to a human error while designing the data values, “static” combinatorial testing support may be better suited, as Conformiq Designer can then report the detailed coverage with full traceability information for the test generation results.