Top Three Problems with Model-Based Testing

According to the recent model-based testing user survey, the top three problems MBT users face are, in this order (see p. 26 of the report):

  1. Modeling is too hard
  2. Models blow up (the tool in use does not scale to complex models)
  3. Generated tests miss bugs

These are all real problems, and we have certainly seen customers face them (after all, the Conformiq tool suite was strongly represented in the survey). Here are some comments.

Creating Conformiq Designer models is hard because creating models is quite close to programming. But is programming hard? Yes and no. It can be taught and learned, so you do not need to be a Nobel laureate to program, but organizations often deploy model-based testing to teams that are not strong in programming and software architecture. This is natural: traditional testing teams are strong in product knowledge, defect finding, and analysis, but not in software development. Equally naturally, though, this makes model-based testing deployment difficult. It is a common and prevalent problem: an astounding 71% of all respondents reported being exposed to it in some form during their model-based testing deployments. The solution? Get programmers into the teams when you deploy model-based testing. Your mileage may vary, but our solid experience is that there is no easy alternative.
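To make "modeling is close to programming" concrete, here is a minimal sketch in plain Python (not Conformiq's own modeling notation; the login-and-lockout rules are invented for illustration) of what a behavioral test model essentially is: executable program logic describing the expected behavior of the system under test.

```python
class LoginModel:
    """Toy behavioral model: a login service that locks the account
    after three consecutive failed attempts. Writing such a model is
    ordinary programming - states, variables, and transition logic."""

    MAX_ATTEMPTS = 3

    def __init__(self):
        self.failures = 0
        self.state = "logged_out"

    def login(self, password_ok: bool) -> str:
        """Model of one login attempt; returns the expected response."""
        if self.state == "locked":
            return "LOCKED"
        if password_ok:
            self.state = "logged_in"
            self.failures = 0
            return "OK"
        self.failures += 1
        if self.failures >= self.MAX_ATTEMPTS:
            self.state = "locked"
            return "LOCKED"
        return "RETRY"
```

A tool derives tests by driving such a model through its transitions and comparing the system's responses against the model's; writing the model well requires exactly the skills (state design, invariants, abstraction) that programmers have.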

The scalability issue with complex models occurs in all tools that use variants of the state-space exploration paradigm to process the models, for example Conformiq Designer, Spec Explorer, and some solutions from France; the question is what your vendor is doing to alleviate the issue and push the envelope. Some tools try to ease the state-space explosion problem by pushing it to the user (e.g. by letting the user "slice" or "restrict" the model using "use cases"), whereas others aim for a more technological solution. We fall into the latter category, as showcased for example by our unique distributed test generation back-end (the Conformiq Grid solution). Why the technology route? Because pushing the problem to human operators starts to eat into the productivity gains.
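A rough sketch of why state-space exploration blows up: a generic breadth-first explorer over a toy model (not any vendor's actual algorithm). With n independent boolean variables, the model has 2**n reachable states, so exploration effort doubles with every variable added:

```python
from collections import deque

def explore(initial, step):
    """Breadth-first exploration of a model's state space.
    step(state) yields (action_label, next_state) pairs.
    Returns a shortest action sequence (a generated test)
    for every reachable state."""
    traces = {initial: ()}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for label, nxt in step(state):
            if nxt not in traces:
                traces[nxt] = traces[state] + (label,)
                queue.append(nxt)
    return traces

def flags_model(n):
    """Toy model: n independent boolean flags, each toggleable,
    giving 2**n reachable states."""
    def step(state):
        for i in range(n):
            yield (f"toggle{i}", state[:i] + (not state[i],) + state[i + 1:])
    return (False,) * n, step

for n in (4, 8, 12):
    init, step = flags_model(n)
    print(n, len(explore(init, step)))  # 16, 256, 4096 states
```

"Slicing" the model corresponds to pruning which transitions `step` yields; a distributed back-end instead partitions the exploration itself across machines, so the modeler does not have to shrink the model by hand.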

That model-based testing misses bugs is partly a problem of expectations. A model-based testing tool can test the system under test only on the basis of information available in the corresponding model. Tools have little contextual knowledge built into them, which is a big difference compared to experienced product testers, and the amount of information that must be put into the models can be higher than initially expected. Getting contextual knowledge and built-in fault models into model-based testing tools themselves is a new area of research, with lots of unknown territory still to explore.
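As an illustration of what a "built-in fault model" could mean (a hypothetical sketch, not a description of any shipping tool), a tool could augment the inputs derived from the model with values that testing experience says tend to expose bugs, without the modeler having to write them into the model:

```python
def boundary_fault_model(lo, hi):
    """Classic off-by-one fault model for an integer input
    with valid range [lo, hi]: probe both sides of each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def augment_tests(model_inputs, lo, hi):
    """Merge inputs derived from the model with fault-model inputs,
    deduplicated and sorted for a stable test order."""
    return sorted(set(model_inputs) | set(boundary_fault_model(lo, hi)))

# The model alone might only suggest a couple of "typical" values;
# the fault model adds the boundary probes automatically.
print(augment_tests([5, 50], 0, 100))
# [-1, 0, 1, 5, 50, 99, 100, 101]
```

The open research question the paragraph above points at is how to generalize this beyond simple numeric boundaries to domain-specific fault knowledge.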

 
