Why can't testing be automated? Interview with Michael Bolton. Part II
In the previous post we started a conversation with Michael Bolton, a widely known software testing specialist. Today we continue, covering the topics of automation and rapid software testing.
1. Michael, as one of the creators of the Rapid Software Testing course, can you please explain what rapid testing actually is? Is it a philosophy, an approach, a set of skills?
Rapid Testing is all three of those things. We describe rapid testing as a mindset and a skill set focused on how to do testing quickly, inexpensively, expertly, and credibly. It requires deep thinking about products, problems, models, processes, tools, and interactions between people. It is an approach that focuses on reducing waste and helping to speed up the project through investigation and exploration throughout development. It also requires the tester to develop skills in many domains: critical thinking; scientific thinking; using heuristics; emphasizing the use of lightweight, flexible tools; framing testing; applying oracles; identifying coverage and the gaps in it; and reporting on all of those things.
The premises of rapid software testing are listed here; a framework for describing rapid software testing is here.
2. To automate or not to automate? Is test automation the panacea of the quality assurance world?
Is automating research the panacea in the academic world? If you look at expert researchers, they use computerized tools all the time. But nobody talks about automating research.
There is no such thing as automated testing, any more than there is automated research or automated programming. There are some tasks within programming that can be automated, but we don’t call compiling “automated programming”.
Instead of thinking about “automated testing”, try thinking about tool-assisted testing. If you do that, you’ll be more likely to think about how tools can help you perform all kinds of tasks within testing: generating data; visualizing data; statistical analysis; monitoring or probing the state of the system; recording; presenting data; helping with exploring high-speed, high-volume, long sequence testing; the list goes on and on.
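As a small illustration of one task on that list, data generation, here is a hedged Python sketch (the function name and the specific boundary values are my own choices, not from the interview). A tool like this doesn't test anything by itself; it just hands a thinking tester more varied inputs than she would type by hand:

```python
import random
import string


def generate_test_strings(n, max_len=20, seed=None):
    """Generate n strings for input testing.

    Mixes a few human-chosen boundary values (empty string,
    whitespace, a very long string) with pseudo-random ones.
    A fixed seed makes a run reproducible.
    """
    rng = random.Random(seed)
    boundary = ["", " ", "a" * 10_000]
    randoms = [
        "".join(rng.choice(string.printable)
                for _ in range(rng.randint(1, max_len)))
        for _ in range(max(0, n - len(boundary)))
    ]
    return (boundary + randoms)[:n]


if __name__ == "__main__":
    for s in generate_test_strings(5, seed=42):
        print(repr(s)[:40])  # truncate long values for display
```

Note that the interesting part, deciding which boundary values matter for this product, is still a human judgment encoded in the script.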
Checking, of course, can be automated, but you can't automate the processes that surround the performance of the check: modeling and identifying risk; identifying a way to check for outputs and outcomes that would expose problems; programming and encoding those checks; deciding when to run them; and setting up mechanisms to launch them. After the check has been performed, humans make decisions about whether it has revealed important information or might have missed something important. And after that, humans decide what happens next. With the exception of the actual execution of the check, we need humans at every step. None of that work can or will be performed by machinery anytime soon.
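To make the distinction concrete, here is a minimal sketch of a check (the scenario, a discount calculation, is hypothetical). Only the comparison is automated; everything in the comments was decided by a person:

```python
def check_discount(price, percent, observed):
    """An automated check: compare an observed output to an expected value.

    The machine performs the comparison. A human chose the oracle
    (the formula below), the rounding rule, and what to do when the
    check fails -- none of that is in the code's power to decide.
    """
    expected = round(price * (1 - percent / 100), 2)
    return observed == expected


# Running the check is mechanical...
result = check_discount(price=100.0, percent=10, observed=90.0)

# ...but interpreting the result is not. A False here might mean a bug
# in the product, a wrong oracle, or a stale expectation; a human decides.
print("check passed" if result else "check failed")
```

The point of the sketch is what it leaves out: choosing the formula, judging a failure, and deciding what happens next are exactly the surrounding processes that cannot be automated.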
So, use tools by all means. Don’t limit your concept of what they can do for you. But beware: a tool amplifies whatever you are. If you’re a thoughtful and critical tester, tools can help sharpen your testing. But if you don’t apply tools thoughtfully and skillfully and critically — if you’re a bad tester — the tools will help you to do bad testing faster and worse than ever.
Most importantly, remember that the tester and his or her skills, not the tools, not the process model, not the documentation, are at the center of testing.
Michael, thanks for sharing your experience and ideas. We'll be glad to see you and talk to you again.
You can follow Michael on Twitter.