Blog

“How on earth will we approach the challenge?”, by Paul Gerrard. Part 2

It seems inevitable that although there will always be a need and opportunity to do manual testing, a much larger proportion of testing will have to be performed by tools than we are currently used to.
26 March 2015
Interviews
The article by a1qa

If you missed the first part of the interview, you can read it here

a1qa: How will tools be used to test the IoE?

Paul Gerrard: It seems inevitable to me that although there will always be a need and opportunity to do manual testing, a much larger proportion of testing will have to be performed by tools than we are currently used to. The tools will need to execute very large numbers of tests. The challenge is not that we need tools. The challenge will be, “how do we design the hundreds, thousands or millions of tests that we need to feed the tools?”

So for example, suppose we need to test a smart city system that tracks the movement of vehicles and the usage of car parking in a town. We’ll need a way of simulating the legal movements of cars around the town, along roads shared with other cars following their own journeys. The cars must be advised of their nearest car space and the test system must direct cars to use the spaces that they have been advised of. Or not. We could speculate on some heuristics that guide our test system to place cars where their owners want them. But… life happens, and our simulation might not reflect the vagaries of the weather, pedestrians, drivers and the chaotic nature of traffic in cities.
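To make the shape of such a simulation concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not part of any real smart-city system: a small grid of parking spaces, cars advised of their nearest free space, and a compliance probability to model drivers who ignore the advice ("life happens").

```python
import random

random.seed(42)  # make the simulated test run reproducible

SPACES = [(1, 1), (3, 4), (7, 2), (5, 5)]  # coordinates of parking spaces
occupied = set()

def nearest_free_space(car):
    """Advise the nearest unoccupied space (Manhattan distance)."""
    free = [s for s in SPACES if s not in occupied]
    if not free:
        return None
    return min(free, key=lambda s: abs(s[0] - car[0]) + abs(s[1] - car[1]))

def simulate(cars, compliance=0.8):
    """Advise each car of a space; it complies with probability `compliance`."""
    log = []
    for car in cars:
        advice = nearest_free_space(car)
        if advice is None:
            log.append((car, None, "no space"))
        elif random.random() < compliance:
            occupied.add(advice)
            log.append((car, advice, "parked as advised"))
        else:
            log.append((car, advice, "ignored advice"))  # the driver did their own thing
    return log

result = simulate([(0, 0), (6, 6), (2, 2), (8, 1), (4, 4)])
for car, advice, outcome in result:
    print(car, advice, outcome)
```

Even this toy version shows the test-design problem: the interesting behaviour lives in the non-compliant and no-space branches, and generating enough varied journeys to exercise them is the real work.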

Now, not every system will be as complex as this. But the devices now being field tested in homes, hospitals and public places all have their nuances, complications and unexpected events. A healthcare application that warns you of upcoming doctor’s appointments, schedules your prescribed drugs and monitors your vital signs could trigger hundreds or thousands of scenarios.
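The scenario explosion is easy to see with a little arithmetic. The factors and values below are invented for illustration, but the multiplication is the point: a handful of independent conditions in such a healthcare app quickly compounds into a large scenario space.

```python
from itertools import product

# Hypothetical state factors for a healthcare reminder app (illustrative only)
appointment = ["none", "today", "missed"]
prescription = ["on schedule", "dose skipped", "refill due"]
vitals = ["normal", "borderline", "alarming"]
connectivity = ["online", "offline"]

# Every combination of factor values is a candidate test scenario
scenarios = list(product(appointment, prescription, vitals, connectivity))
print(len(scenarios))  # 3 * 3 * 3 * 2 = 54 scenarios from just four factors
```

Add a few more factors, or a second interacting device, and the count runs into the thousands, which is exactly why tools must apply the tests and why designing (and pruning) them is the harder challenge.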

Increasingly, small simple systems will interact with other systems and become more complex entities that need testing. It’s only going one way.

a1qa: Why do we need a ‘New Model for Testing’?

Paul Gerrard: The current perspectives, styles or schools of testing will not accommodate emerging approaches to software development such as continuous delivery, or new technologies such as Big Data, the Internet of Things and pervasive computing. These approaches require new test strategies, approaches and thinking. Our existing models of testing (staged, scripted, exploratory, agile, interventionist) are mostly implementations of testing in specific contexts.
Our existing models of testing are not fit for purpose – they are inconsistent, controversial, partial, proprietary and stuck in the past. They are not going to support us in the rapidly emerging technologies and approaches especially testing the IoE.

I have proposed an underlying model of testing that is context-neutral, and I have tried to shed some light on what this might be by postulating the Test Axioms, for example. The Axioms are an attempt to identify a set of rules or principles that govern all testing. Some people who have used them think they work well. They don’t change the world, they just represent a set of things to think about – that’s all. But, if you accept them as true, then you can avoid the quagmire of debates about scripted versus unscripted testing, the merits and demerits of (current) certifications or the value of testing and so on.

The New Model for Testing is an extension to this thinking. The model represents the thought-processes that I believe are going on in my own head when I explore and test. You might recognize them and by doing so, gain a better insight into how you test too. I hope so. As George Box said, ‘essentially, all models are wrong, but some are useful’. This model might be wrong, but you might find it useful. If you do find it useful, let me know. If you think it’s wrong, please let me know how I might improve it.

The New Model of Testing attempts to model how testers think and you can see a full description of the model, the thinking behind it and some consequences here.

a1qa: The New Model mentions Test Logistics. What is that?

Paul Gerrard: When tests are performed on-the-fly, based on mental models, the thought processes are not visible to others; the thinking might take seconds or minutes. At the other extreme, complex systems might have thousands of things to test in precise sequence, in complicated, expensive, distributed technical environments with the collaboration of many testers, technicians and tool-support, taking weeks or months to plan and apply.

Depending on the approach used, very little might be written down or large volumes of documentation might be created. I call the environmental challenges and documentary aspect ‘test logistics’. The environmental situation and documentation approach is a logistical, not a testing challenge. The scale and complexity of test logistics can vary dramatically. But the essential thought processes of testing are the same in all environments.

So, for the purpose of the model, I am going to ignore test logistics. Imagine that the tester has a perfect memory and can perform all of the design and preparation in their head. Assume that all of the necessary environmental and data preparations for testing have been done, magically. Now, we can focus on the core thought processes and activities of testing.

The model assumes an idealized situation (like all models do), but it enables us to think more clearly about what testers need to think about.

a1qa: Can you summarise what the New Model says?

Paul Gerrard: At the most fundamental level, all testing can be described in this way:

1. We identify and explore sources of knowledge to build test models
2. We use these models to challenge and validate the sources of knowledge
3. We use these models to inform (development and) testing.

I make a distinction between exploration and testing. The main difference from the common view is that I will use the term Exploration to mean the elicitation of knowledge about the system to be tested from sources of knowledge.

I have hinted that by excluding the logistical activities from the New Model, the processes can be both simplified and possibly regarded as universal. By this means, perhaps the core testing skills of developers and testers might coalesce. Testing logistics skills would naturally vary across organisations, but the core testing skills should be the same.

From the descriptions of the activities in the exploration and testing processes, it is clear that the skills required to perform them are somewhat different from the traditional view of testing as a staged activity performed exclusively by independent test teams. Perhaps the New Model suggests a different skills framework. As a challenge to the status quo, I have put together a highly speculative list of skills that might be required.

I hope the model stimulates new thinking and discussion in this field.

a1qa: What is the future for testing and testers?

Paul Gerrard: Of course, that’s a huge question and all we can do is speculate on what might happen. My suggestions below are partly informed by what I have seen in the technology and testing markets, and by friends and colleagues who have shared their experiences with me.

Certification?

Love them or hate them, the certification schemes are not going away. The market is there and there are plenty of training providers willing to fulfill the need. Their popularity varies by country, and so does the credibility of the schemes. I would argue that the certified syllabuses map directly to what I call logistics, so their value is limited. This is being recognized by more and more practitioners and training providers.

There is hope that people recognize the limited value of certified training and invest in broader skills training. The New Model suggests what these skills might be. The existing organisations are crippled by their scale and won’t be improving their output any time soon. I believe that better certification schemes will only emerge when a motivated, informed group of people choose to make a stand and create them.

Manual vs automated testing?

There is a curious position that some people are taking. Some folk say that testing performed by tools is easier, and less valuable, less significant or less effective than testing crafted by people, particularly when conducted in an improvisational style. This is not a stance that can be justified, I think. Most of the software on the planet doesn’t have a user interface at all, so has to be tested using tools and, naturally, these tests must be scripted in some way.

Web services, for example, might be simple to test, but can also be devilishly complex. The distinction between ‘manual’ and ‘automated’ testing in terms of ease, effectiveness and value is fatuous. The New Model attempts to identify the critical thinking activities of testing. What I have called the ‘application of tests’ may be performed by tools or people, but ONLY people can interpret the outcomes of a test. And that is all I have to say.

I should add that, more and more, test automation in a DevOps environment is seen as a source of data for analysis, just like production systems, so analysis techniques and tools are another growing area. I wrote a paper that introduces ‘Test analytics’ here.
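The idea of treating test automation output as data can be sketched very simply. The records below are invented sample data, not from any real pipeline: each run is a build/test/status triple, and even basic aggregation surfaces fragile areas and trends.

```python
from collections import Counter

# Invented sample test results from successive builds (illustrative only)
runs = [
    {"build": 101, "test": "login",    "status": "pass"},
    {"build": 101, "test": "checkout", "status": "fail"},
    {"build": 102, "test": "login",    "status": "pass"},
    {"build": 102, "test": "checkout", "status": "pass"},
    {"build": 103, "test": "checkout", "status": "fail"},
]

# Failure counts per test across builds highlight flaky or fragile areas
failures = Counter(r["test"] for r in runs if r["status"] == "fail")
# Overall pass rate as a simple trend indicator
pass_rate = sum(r["status"] == "pass" for r in runs) / len(runs)

print(failures.most_common())  # [('checkout', 2)]
print(round(pass_rate, 2))     # 0.6
```

In a real DevOps setting the same queries would run over thousands of results per day; the analytical step (deciding what a repeated ‘checkout’ failure means) is still a human one.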

a1qa: Should testers learn how to write code?

Paul Gerrard: I have a simple answer – yes.

Now it is possible that your job does not require it. But the trend in the US and Europe is for job ads to specify coding skills and other technical capabilities. More and more, you will be required to write your own utilities, download, configure and adapt open source tools or create automated tests or have more informed conversations with technical people – developers. New technical skills do not subtract from your knowledge, they only add to it. Adding technical skills to your repertoire is always a positive thing to do.
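A ‘utility’ in this sense can be very small. Here is a hedged example of the kind of thing a tester might write for themselves; the log format and its contents are invented assumptions, but the pattern (a few lines of script replacing a tedious manual scan) is the point.

```python
import re

# Invented sample application log (the format is an illustrative assumption)
LOG = """\
2015-03-26 10:00:01 INFO  service started
2015-03-26 10:00:05 ERROR timeout contacting payment gateway
2015-03-26 10:00:09 WARN  slow response (1200 ms)
2015-03-26 10:00:12 ERROR timeout contacting payment gateway
"""

# Pull out the ERROR lines so a test run's log can be triaged at a glance
errors = [line for line in LOG.splitlines() if re.search(r"\bERROR\b", line)]
print(len(errors))  # 2
for line in errors:
    print(line)
```

Pointing the same script at a real log file instead of a string is a one-line change, and that is exactly the kind of small technical capability the job ads are asking for.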

If you are asked to take a programming or data analytics course, take that opportunity. If no one is asking you to acquire technical skills, then suggest it to your boss and volunteer.

a1qa: What does ‘Shift-Left’ mean for testers?

Paul Gerrard: It seems like every company is pursuing what is commonly called a ‘shift-left’ approach. It could be that test teams and testers are removed from projects and developers pick up the testing role. Perhaps testers (at least the ‘good’ ones) are being embedded in the development teams. Perhaps testers are morphing into business or systems analysts. At any rate, what is happening is that the activities, or rather, the thinking activities of testing are being moved earlier in the development process.

This is entirely in line with what testing leaders have advocated for more than thirty years. Testers need to learn how to ‘let go’ of testing. Testing is not a role or a stage in a project; it is an activity. Shift-left is a shift in thinking, not people. Look at this as an opportunity, not a threat.

‘Test early, test often’ used to be a mantra that no one followed. It now seems to be flavour of the month. My advice is don’t resist it. Embrace it. Look for opportunities to contribute to your teams’ productivity by doing your testing thinking earlier. The left hand side of the New Model identifies these activities for you (enquiring, modeling, challenging and so on). If your test team is being disbanded, look at it as an opportunity to move your skills to the left.

There is something of a bandwagon for technical people advocating continuous delivery, DevOps and testing/experimenting in production. It seems hard for testers to fit into this new way of working, but again, look for the opportunity to shift left and contribute earlier. Although this appears to be a technical initiative, I think it is driven more by our businesses. I learned recently that marketing budgets are often bigger than company IT budgets nowadays. Think about that.

Marketers may be difficult stakeholders to deal with sometimes, but their power is increasing, they want everything and they want it now. The only way to feed their need for new functionality is to deliver in continuous, small, frequent increments. If you can figure out how you can add value to these new ways, speak up and volunteer. Of course large, staged projects will continue to exist, but the pressure to go ‘continuous’ is increasing and opportunities in the job market require continuous, DevOps and analytics skills more and more. Embrace the change, don’t resist it.

I wish you the best of luck in your leftwards journey.

Reach Paul via Twitter and Linkedin.

Paul, thank you for sharing your views and experience. We hope to talk to you again.
