
A story by an anti-bottleneck tester. Interview with Eric Jacobson

15 January 2015
Interviews
Article by a1qa

Eric Jacobson has been testing software for 14 years. He entered the IT industry teaching end-user software courses, but a development team at Lucent Technologies convinced him to become a tester. Later, Eric became lead tester of Turner Broadcasting’s traffic system, responsible for generating billions of dollars annually via ad placement. After managing other testers at Turner, he accepted a position as Principal Test Architect at Atlanta-based Cardlytics.

Eric is a highly rated conference speaker and has been posting his thoughts on improving testing at www.testthisblog.com nearly every week since 2007. He also enjoys playing clawhammer banjo, woodworking, caving, and spending time with his son, daughter and wife.

a1qa: In one of your posts, you mentioned that resolving tests as “PASS” or “FAIL” prevents you from doing actual testing. Then what is actual testing?

Eric Jacobson: The “actual testing” I was referring to is the open-ended investigation part of software testing. I was contrasting it with “checking”. It took me years to notice, but the way I documented tests was actually impeding my testing. I had been stuck in a mindset that test cases must resolve to “PASS” or “FAIL”… for both test planning and exploratory testing sessions.

This is a big deal! I suspect most testers are stuck in the same trap. It’s like a test documentation language blindness. Even if a test began as an investigation, my brain still framed it to resolve as “PASS” or “FAIL”. For example, a test that began as “I wonder if I can submit an order with no line items” would get documented as “Submit button should not be active unless line items are populated.” The former might generate better testing, while the latter might limit creativity. The big mindshift here is NOT to box yourself in by forcing every test instruction to resolve to PASS or FAIL. As a tester, it’s a very liberating idea.

What if, instead, you just list the triggers for open-ended investigations? If you have a list of these investigation triggers, you can resolve them as “DONE”. The assumption is that they were completed and that any bugs or issues have been shared by other means.
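To make the contrast concrete, here is a minimal sketch in Python of what the two documentation styles might look like; the order_form fixture, the record_session helper, and the trigger list are purely hypothetical illustrations, not Eric’s actual tooling or process.

    # Hypothetical sketch only: a PASS/FAIL-style check versus open-ended
    # investigation triggers that are resolved as DONE.

    # Style 1: a scripted check, forced to resolve to PASS or FAIL.
    def check_submit_disabled_without_line_items(order_form):
        order_form.clear_line_items()                # order_form is an assumed test fixture
        assert not order_form.submit_button.enabled  # PASS if disabled, FAIL otherwise

    # Style 2: investigation triggers, listed without a predicted outcome.
    investigation_triggers = [
        "I wonder if I can submit an order with no line items.",
        "I wonder what happens to an in-progress order if the session expires.",
    ]

    def record_session(trigger, notes):
        # Any bugs or issues found are shared separately (bug reports, conversations);
        # the trigger itself is simply marked DONE once the investigation is complete.
        return {"charter": trigger, "status": "DONE", "notes": notes}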

a1qa: You questioned the testers’ “power to declare something as a bug”. Isn’t it a tester’s job?

Eric Jacobson: Ha ha! Well, kind of. But I think our role as testers can be improved upon with a slight variation.

How about this: a tester’s job is to raise the possibility that something may be a problem. Raising the possibility forces a magical little thing called a “conversation” to take place. The conversation might be, “Oh, that’s nasty, log a bug!” or “That sounds like a problem John just noticed, talk to him” or “Actually, it’s by design; the requirements are stale.” A conversation might reduce rejected bugs, duplicate bugs, or bugs prematurely fixed by eager programmers. The conversation provides us testers with feedback about what is or isn’t important. From there we can adjust.

The conversation might be inconvenient, especially for lazy testers or introverted product managers. Beyond that, it’s hard to find disadvantages. This didn’t occur to me until I read one of Michael Bolton’s clever thought reversals. He asked whether testers should have the power to delete bug reports. Most would answer no; that’s something the team or the stakeholders should decide. I agree. I would rather have a bug report repository that accurately reflects all known threats to the product. If we don’t want testers independently making decisions to remove bug reports, maybe we should use the same level of scrutiny for putting things into the bug repository.

a1qa: Is there an ideal testing approach or model you would recommend following to optimize a tester’s work?

Eric Jacobson: No. …Did I pass? Was that a trick question? My context-driven testing mentors just sighed with relief.

a1qa: “Anti-Bottleneck Tester”, who is this person?

Eric Jacobson: I used that term in a favorite blog post. I probably need a better term. An anti-bottleneck tester is a tester who makes decisions and suggestions that help development teams deliver software more quickly, without letting its quality suffer. This is the tester I strive to be. It’s a far cry from the tester I was 13 years ago. I was a quality cop: “You can’t ship until I stamp it Certified for Production.” I was the bottleneck…and proud of it.

We messed up big time and gave ourselves a bad reputation. Some of us are still doing it. I just heard a story about a tester who said it would take three days to test a change to a GUI control’s default value. Nobody is impressed by that answer. My response would have been three minutes.

These days, programmers are moving so fast that they don’t have time to wait on testers who don’t bother learning development technologies or refining test practices. I love thinking of ways to keep up. I did a talk at a couple of US test conferences titled “You May Not Want To Test It”. The basic premise was: instead of testing everything because of a factory-like process, consider only testing where you can add value. I listed 10 patterns of things testers might want to “rubber stamp” instead of spending time testing, for example, production bugs that can’t get any worse, subjective UI changes, and race conditions too technical for some testers to set up. These are all things best tested by non-testers, and a tester who sees that and delegates the work can spend more time testing things where their skills are more effective.

Another anti-bottleneck tester practice is to suggest compromises that enable on-time shipping, such as, “Let’s give the users the option of shipping with the bug.” I also think testers should spend less time logging trivial bugs and more time hunting for non-trivial bugs. The urge to log trivial bugs is probably left over from the ancient, infamous bug count metric.

a1qa: If reporting trivial bugs is a waste of time, does that mean QA engineers should skip them?

Eric Jacobson: It might. First of all, we are talking about “trivial” bugs here. So by definition, the threat to the product is trivial. What does trivial mean? If the product-under-test has hundreds of bugs, some are probably trivial and may never get fixed. If the product-under-test has 10 bugs, there may not be any trivial bugs.

This will sound crazy but I’ll say it anyway. I think testers are more likely to hurt their reputations by logging trivial bugs than by missing non-trivial bugs. Logging trivial bugs reflects poorly on your testing skills, especially if you miss non-trivial bugs.

At my previous job, I ran bug triage meetings. I hate to say it, but here is what typically happened: three or four times in each meeting, we would read a bug report that made the team laugh at how trivial it was. Someone always said, “Ha ha, who logged that?” There was nothing worse than looking at the history and seeing a tester’s name…another tester obsessed with perfecting cosmetic stuff on the GUI while the support line rings all day because the product keeps timing out.

a1qa: What does quality mean to you personally?

Eric Jacobson: As a user, what comes to mind is the integrity of the development team. And by development team I mean testers, programmers, and product owners. Do they have a reputation for being transparent about problems, eager to fix problems, and interested in listening and responding to users? If the answer is yes, I’m likely to forgive production bugs and continue using the product.

As a tester, this means it might be more important to react quickly to problems in the field than to hold out for “perfect software”. My father, a small business owner, taught me the customer is always right. When the needs of the customer, like time to market and new functionality, outweigh my personal concerns as a tester about certain bug fixes, I try to weigh everything. However, first impressions are important. They say an audience decides whether they like a presenter during the first five seconds. If this is true of software, we had better get the core stuff right. As the context-driven testing principle says,

 “Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.”

Eric, thank you for sharing your viewpoint and ideas. We hope to talk to you again and discuss a few more topics.
