Software Processes Accommodating MBT

Feb 28, 2012 | Blogs

People often ask us whether model-based testing with Conformiq Designer is a good fit for, say, Agile, Test-Driven Development, Scrum or RUP. Instead of giving the short answer (“yes”), I will try to explain in this post how model-based testing changes a software process, where that change occurs, and what its impact is. I will also give you a checklist for assessing the maturity of your process when it comes to potentially adopting MBT. This post should give you the means to assess how model-based testing can fit the particular software processes, organizational contexts and management systems in place where you work.

To begin, all software processes incorporate test design. In the textbook version of the traditional waterfall model, for instance, system tests are designed based on the system functional specification, module tests on module-level specifications, unit tests on the code, and so on. In theory, system test design can start as soon as the system-level specification is ready. In an agile process, test case design can be driven by user stories, with new test cases created and old ones updated as the collection of user stories evolves. In a test-driven development process, test case design is based on user stories or system specification documents, and occurs before coding. In all these processes, even though they differ in their overall nature, there is a test design subprocess, and it is driven (for the most part) not by the code itself but by some other system documentation (implicit or explicit) that is independent of the code and usually predates it.

MBT Changes Something

Model-based testing changes the process in that it introduces a new artifact, the model, which stands between the system documentation and the actual tests. So, in a waterfall model, the system functional specification would be modeled, and tests would then be generated from the model. In an agile process, the essence of the user story collection would be maintained in a model, and tests would be generated from it. In a test-driven development process, the tests would be generated from the model before coding takes place. In this way, the process does not fundamentally change. Test case design is simply factored into two steps: creating a model, and letting a computer generate tests from the model and maintain them later.
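To make the two steps concrete, here is a deliberately tiny sketch of the idea: a behavioral model (a state machine of a hypothetical login feature) and an automatic generator that derives one test per transition. This is illustrative only, not Conformiq Designer's actual modeling language or generation engine; all names are invented.

```python
from collections import deque

# Step 1: the model. A toy state machine of a hypothetical login
# feature, as a map of (state, action) -> next state.
MODEL = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in", "view_profile"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def generate_tests(model, start="logged_out"):
    """Step 2: let the computer derive tests. Here we use a simple
    transition-coverage criterion: one action sequence per transition,
    reached via a shortest path found by breadth-first search."""
    paths = {start: []}          # shortest action path to each state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, action), dst in model.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [action]
                queue.append(dst)
    # Each test: reach the transition's source state, then fire it.
    return [paths[src] + [action] for (src, action) in model if src in paths]

tests = generate_tests(MODEL)
# e.g. the transition ("logged_in", "logout") yields the test
# ["login_ok", "logout"]
```

The point of the sketch is that when the model changes, rerunning the generator maintains the whole test suite; no individual test case is edited by hand.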

In practice, however, processes are more detailed, and it is those details that can thwart model-based testing adoption. For example, implementing an (automated) test case based on a single, detailed user story can be very quick. If the current process is tuned so that engineers design detailed user stories, which are then expected to be turned one-to-one into test cases rapidly, it will be challenging to drop in model-based testing without changing anything else. The reason is that user stories are, in a sense, vertical slices through the system behavior, whereas a system model represents a more holistic view of the intended operation, and it is not efficient to reverse engineer that holistic view from user stories alone. If model-based testing is employed in an agile process, user stories should be designed at a higher level, with some of the details pushed into the model instead.

Similarly, a real-world waterfall model involves iterations and retroactive fixes to system-level specifications, as specification errors and omissions are often detected only in the implementation stage. When model-based testing is used within a waterfall process, it is efficient to first create a draft system model and then update it and add details as the system progresses towards the implementation stage. Also, it might be that instead of handling systems, subsystems, components and modules all separately, some of those testing levels could be combined at the modeling level, e.g. generating both component and subsystem tests from the same [library of] models.
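One way to picture combining test levels is to keep the generated tests abstract and render them differently per level. The sketch below, with entirely hypothetical names, shows one abstract action sequence (as it might come out of a model-based generator) rendered once as component-level API calls and once as subsystem-level messages.

```python
# An abstract test sequence, e.g. as produced from a shared model.
ABSTRACT_TEST = ["login_ok", "logout"]

def render_component_test(steps):
    # Component level: direct calls against a hypothetical auth module.
    return [f"auth.handle('{s}')" for s in steps]

def render_subsystem_test(steps):
    # Subsystem level: the same steps as messages over a hypothetical
    # service interface.
    return [f"POST /session {{'event': '{s}'}}" for s in steps]

component_test = render_component_test(ABSTRACT_TEST)
subsystem_test = render_subsystem_test(ABSTRACT_TEST)
```

Only the rendering backend differs between the two levels; the behavioral content of both test suites comes from the one shared model.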

In general, model-based testing fits most software development processes, but adopting MBT will require some fine-tuning of the process to accommodate the timing, planning and resource estimation changes it causes. A typical failure mode in MBT adoption is not allowing that fine-tuning to take place. That often happens because of (irrational) organizational rigidity and (irrational) negative preconceptions towards MBT on the part of those who are not really involved with it.

Therefore, the top five questions you should ask in order to assess whether your process can accommodate MBT conveniently are not about the structure or nature of your software process itself, but the following:

  1. Is my organization open to adapting and fine-tuning the process to accommodate MBT efficiently?
  2. Can I refine the schedule of my process so that there is enough time for modeling?
  3. Can I move from very detailed user stories and use cases to higher-level use cases plus more detailed information in models?
  4. Can I measure the actual impact of MBT on test coverage, fault detection and defect slip rate when I deploy it? Are my KPIs for testing well enough defined that I can determine by objective measurement whether my MBT deployment is successful?
  5. Will those parts of my organization that will not be actively involved with MBT take an open and supportive stance towards adopting this type of new technology, or will they be inherently rigid, indifferent or hostile towards it?