Are you doing functional testing, and facing these challenges?
- Does your product have to get to market faster?
- Is its complexity increasing in every release?
- Are your release cycles getting ever shorter?
- Do you need to reduce testing costs?
If you are facing any, or all, of these challenges, you are reading the right article!
At conferences, in forums, and in discussions we hear a lot of talk about new development practices and methodologies to address these issues. Yet our testing challenges remain.
Agile testing is good, but not enough, since it does not solve the problem of automating testing so it can be performed rapidly enough. Model-based testing means different things to different people, and too often it is incomplete.
So, how can you get faster time to market with a product that is continually becoming more complex, under constant pressure to reduce cost? The solution is unlikely to be continuing with 20-year-old testing practices.
In fact, so-called test automation using record and playback is still manual testing. Reducing the test automation backlog is a great objective, but in reality you will never be able to code all your regression test scripts; there is never enough time or budget. So this approach is not powerful enough to be the way out.
Starting from this point, what exactly is the problem? Commonly we see people confusing the quality or effectiveness of testing with the raw number of tests. How many tests do you have? And so what? Are you sure they cover every corner case? Do you know the actual coverage your tests achieve over the functionality you are supposed to test? And if, as in many projects, you have hundreds or thousands of tests, do you know what each of them actually does? Do you have time to execute them all?
The key point is: it does not matter how many tests you have, but the quality and comprehensiveness of these tests.
When you create tests without a model of the system to be tested, the only way to judge the quality of your tests is the number of tests and, maybe, your idea of requirements coverage.
Model-based testing (MBT) brings a new dimension to testing, a new way to measure quality: functional coverage. Functional coverage tells you what you are testing and, just as important, what you are not testing. You do not model tests or test scenarios; you model the functionality you want to test, captured in a high-level model. This gives you great leverage. It is always better to describe a landscape by showing a picture than by describing it in words. By graphically modeling your functionality, you can better understand the functionality to be tested, automatically cover all aspects of it, and therefore generate better and more comprehensive tests, improving the code quality and reliability your customers experience.
Let the test design tool describe this landscape for you! A tool that operates this way will test functions that you did not see the first time, or at all, because of the application's complexity, especially the negative paths.
Another big time saver and quality improvement: a test design tool that generates test cases directly from a model of the expected correct system operation will also generate the test oracles, i.e., the expected results, for when you execute the test cases.
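To make the oracle idea concrete, here is a minimal sketch in Python, not a real MBT tool. It assumes a toy finite-state model of a door lock (states, inputs, and transitions are invented for illustration); because the model defines the expected output of every transition, enumerating input sequences produces both the test steps and their expected results.

```python
from itertools import product

# Hypothetical finite-state model of a door lock. Each transition maps
# (current state, input) to (next state, expected output) -- the expected
# output is exactly what a test oracle needs.
TRANSITIONS = {
    ("locked", "unlock"): ("unlocked", "click"),
    ("locked", "lock"): ("locked", "error"),    # a negative path
    ("unlocked", "lock"): ("locked", "click"),
    ("unlocked", "unlock"): ("unlocked", "error"),  # a negative path
}

def generate_tests(start="locked", depth=2):
    """Enumerate every input sequence up to `depth` steps and walk the
    model to compute the expected outputs: the oracle comes for free."""
    inputs = sorted({i for (_, i) in TRANSITIONS})
    tests = []
    for seq in product(inputs, repeat=depth):
        state, expected = start, []
        for step in seq:
            state, out = TRANSITIONS[(state, step)]
            expected.append(out)
        tests.append({"inputs": list(seq), "expected": expected})
    return tests

for t in generate_tests():
    print(t)
```

Note that the negative paths (locking an already locked door, unlocking an already unlocked one) are generated automatically, with no extra effort from the test designer.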
So, in the end, you will have more time to test, more time to test your application more deeply, and you will KNOW what you have tested and why. As a tester, I know that confidence is a life saver, or at least a sleep saver. But please be aware, and I will repeat: I am not talking about simply modeling tests, but about modeling the entire system software functionality to be tested.
To know why, please see my article https://www.conformiq.com/2015/10/how-can-model-based-testing-mbt-generate-tests-that-i-cannot-think-of/
In addition, model-based testing will help you stay on track for every release of your application. When the design changes, you can quickly update your model and then automatically generate updated tests and test scripts. Between application releases you can also evaluate your old test suite: what percentage of your functionality does it cover? You can generate new tests to increase that coverage, and see which of the previous test cases become invalid due to changes in the system operation captured in the model. You manage complexity at a high level, every time you use and update your model.
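Evaluating an old suite against the model can also be sketched simply. This hypothetical example (same invented door-lock model as the style above, not a vendor tool) replays a legacy test suite against the model's transitions to measure coverage and report the untested gaps.

```python
# Hypothetical transition-based model: (state, input) -> (next state, output).
MODEL = {
    ("locked", "unlock"): ("unlocked", "click"),
    ("locked", "lock"): ("locked", "error"),
    ("unlocked", "lock"): ("locked", "click"),
    ("unlocked", "unlock"): ("unlocked", "error"),
}

def covered_transitions(test_suite, start="locked"):
    """Replay each test's input sequence against the model and record
    which model transitions it actually exercises."""
    covered = set()
    for seq in test_suite:
        state = start
        for step in seq:
            key = (state, step)
            covered.add(key)
            state = MODEL[key][0]
    return covered

old_suite = [["unlock", "lock"]]  # an invented legacy regression suite
covered = covered_transitions(old_suite)
missing = set(MODEL) - covered
print(f"transition coverage: {len(covered)}/{len(MODEL)}")
print("untested transitions:", sorted(missing))
```

Here the legacy suite covers only half the model; the two uncovered transitions are exactly the negative paths, which manually written suites tend to miss.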
In other words, I recommend you keep your expert knowledge in a graphical model. Your tool should generate test cases for you based on your modeled functionality, and automatically update your test scripts and documentation so that you get impact analysis as well. You will save a lot of time and certainly reduce cost and time to market.
Since the test design tool understands the full system operation to be tested, it can generate test cases with 100% coverage of the system's operation. This is hugely different from creating test cases or scenarios manually and then achieving 100% coverage of what you originally wrote. What did you miss? You won't know until a user trips over it after delivery. Hopefully it won't be a critical bug.
In conclusion, good test coverage does not mean hundreds or thousands of test cases, but known good test cases based on complete functional coverage. So why not demand 100% functional coverage every time?
Alexis Despeyroux is a Conformiq technical field application engineering testing specialist.