In this post I’ll go through the basics of “stochastic use case testing”, sometimes also called “Markov chaining” or “Markov testing”. There are variations of this technique, of course, but my aim here is to cover the common ground and share some thoughts on where methods like this are best applied. Continue reading
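To make the idea concrete before the full post, here is a minimal, hypothetical sketch of the core mechanic behind Markov-style testing: usage of the system is modeled as states with probability-weighted transitions, and test sequences are produced by random walks over that model. The state names, actions and probabilities below are purely illustrative, not taken from any particular tool or variant of the technique.

```java
import java.util.*;

/**
 * Illustrative sketch of Markov-chain-based use case testing: states with
 * probability-weighted transitions, and random walks that yield test steps.
 */
public class MarkovUsageModel {

    /** One outgoing transition: the action performed, the next state, and its probability. */
    record Transition(String action, String nextState, double probability) {}

    private final Map<String, List<Transition>> transitions = new HashMap<>();
    private final Random random = new Random();

    void addTransition(String from, String action, String to, double probability) {
        transitions.computeIfAbsent(from, k -> new ArrayList<>())
                   .add(new Transition(action, to, probability));
    }

    /** Performs one random walk from 'start' until a state with no outgoing transitions is reached. */
    List<String> generateTestSequence(String start) {
        List<String> steps = new ArrayList<>();
        String state = start;
        while (transitions.containsKey(state)) {
            Transition chosen = pick(transitions.get(state));
            steps.add(chosen.action());
            state = chosen.nextState();
        }
        return steps;
    }

    /** Roulette-wheel selection over the outgoing transition probabilities. */
    private Transition pick(List<Transition> options) {
        double r = random.nextDouble();
        double cumulative = 0.0;
        for (Transition t : options) {
            cumulative += t.probability();
            if (r <= cumulative) {
                return t;
            }
        }
        return options.get(options.size() - 1); // guard against rounding error
    }

    public static void main(String[] args) {
        MarkovUsageModel model = new MarkovUsageModel();
        // Hypothetical usage profile of a small web shop.
        model.addTransition("Start",    "open front page",   "Browsing", 1.0);
        model.addTransition("Browsing", "search product",    "Browsing", 0.5);
        model.addTransition("Browsing", "add to cart",       "Cart",     0.3);
        model.addTransition("Browsing", "leave site",        "End",      0.2);
        model.addTransition("Cart",     "check out",         "End",      0.7);
        model.addTransition("Cart",     "continue shopping", "Browsing", 0.3);

        System.out.println(model.generateTestSequence("Start"));
    }
}
```

Each run of the walk produces a different but statistically plausible usage scenario, which is exactly where the “stochastic” in the name comes from.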
According to the recent TIOBE index (February 2012), Java is still the #1 active programming language in the world, even though it has been losing ground to C# (.NET) and Objective-C (Apple’s devices). This matters for model-based testing and for Conformiq because we use Java as the language for the system models that drive test generation. Some other vendors use proprietary languages, or more obscure ones such as OCL or TTCN-3; neither makes the top 100 list. Remember to ask your prospective MBT vendor about the language(s) used in the vendor’s tooling. Relying on a language with a very small user community can mean resource shortages and excessive training costs later.
In several commercial and academic projects we have struggled with the lack of a proper language for modeling reactive systems. Such models should be used to specify and test (in a model-based fashion) reactive systems. Until now, either no real language at all has been used (XML, for example), or some old-fashioned, hard-to-use or weakly supported language like LOTOS or Promela.
Lars then goes on to propose a new language for specifying and modeling reactive systems.
Now I have to comment on this. Continue reading
People often ask us whether model-based testing with Conformiq Designer can be a good fit for, say, Agile, Test-Driven Development, Scrum or RUP. Instead of just giving the short answer (“yes”), in this post I will try to explain how model-based testing changes a software process, where that change occurs, and what its impact is. I will also give you a checklist for assessing the maturity of your process when it comes to potentially adopting MBT. This post should give you the keys to assess how model-based testing can fit the particular software processes, organizational contexts and management systems in place where you work. Continue reading
One complaint against computer-generated test cases is that they differ from those designed by humans. Somehow, computer-generated test cases have a different feel to them, and it is sometimes difficult for humans to grasp the crux or focal point of a test case produced by a model-based test generator such as Conformiq Designer™. But why do the tests look different? And does it matter? Are human-designed test cases inherently better than computer-generated ones, or vice versa?
The structure and feel of a human-designed test set is the result of aiming to fulfill multiple goals at once. One is test coverage, i.e. the (perceived or estimated) capability of the designed test set to actually spot potential faults; in other words, the efficiency of the test set in probing the system under test at various function points. Another goal is understandability and maintainability, i.e. the ability of human operators to later update and modify the test set while also understanding its structure and content. A third design goal is low redundancy, i.e. avoidance of test cases that do not contribute to test coverage in any significant way. To achieve these goals, a human test designer employs a cognitive system that is very different from that of a computer. The human approach to a design problem like test design is hierarchical and plan-driven, as humans have to handle larger problems by splitting them into subproblems and smaller tasks. In short, the human approach to test case design is driven by the structure of the human brain and the different goals of (manual) test case design. Continue reading
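To make the “low redundancy” goal above concrete, here is a small, hypothetical sketch (a generic illustration, not Conformiq Designer internals) of a greedy pass that keeps only those test cases which add coverage not already achieved by earlier ones. The test case names and coverage items are invented for the example.

```java
import java.util.*;

/**
 * Toy illustration of the "low redundancy" goal: given test cases and the
 * coverage items (e.g. requirements or branches) each one exercises, keep
 * only the tests that add coverage not already achieved.
 */
public class RedundancyCheck {

    public static void main(String[] args) {
        // Hypothetical mapping from test case name to the coverage items it exercises.
        Map<String, Set<String>> tests = new LinkedHashMap<>();
        tests.put("TC1", Set.of("login", "logout"));
        tests.put("TC2", Set.of("login"));              // redundant: already covered by TC1
        tests.put("TC3", Set.of("login", "purchase"));
        tests.put("TC4", Set.of("logout", "purchase")); // redundant after TC1 and TC3

        Set<String> covered = new HashSet<>();
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : tests.entrySet()) {
            if (!covered.containsAll(e.getValue())) {   // does this test add anything new?
                kept.add(e.getKey());
                covered.addAll(e.getValue());
            }
        }
        // Prints the kept tests (TC1 and TC3) and the total coverage achieved.
        System.out.println("Kept: " + kept + ", coverage: " + covered);
    }
}
```

A human designer does essentially this kind of pruning implicitly, simply by not writing tests that obviously repeat what earlier tests already check; a generator has to do it explicitly, which is one reason the resulting test sets are structured differently.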