
“Once we saved $500,000 with load testing.” Interview with Scott Moore

1 May 2014
Interviews
Performance testing
Article by a1qa

With over 18 years of IT experience with various platforms and technologies, Scott Moore has tested some of the largest applications and infrastructures in the world. He has mentored and developed testing services for Big Five services firms, top insurance companies, and major financial institutions in the US. He is a Certified Instructor and Certified Product Consultant in HP’s LoadRunner and Performance Center products. In October 2010, Scott became President and CEO of Northway Solutions Group, and he currently holds HP certifications for ASE, ASC, and CI. It is a special treat for a1qa to discuss performance and load testing with Scott.

a1qa: Scott, you have devoted your career to performance & load testing. Based on your experience, what issues typically arise when you start a new project?

Scott Moore (S.M.): I started my career in load testing back in 1998. I found a passion for it and it is something I love to do. I started Loadtester in 2004 and merged with AdvancedQA in 2010 to form Northway Solutions Group. Among all the disciplines of web application testing, we still focus on performance testing, but we also offer services around functional automation, implementing QA process, and training – mostly for the HP software solutions.

While it differs from client to client, there are a few common patterns I see. First, there are companies that already understand the importance of performance testing and where it fits in the software development lifecycle. In those cases, it is easy to jump in and get the project done with minimal issues. They generally have good requirements, test environments that match production, and respond quickly when you need something to complete a task. Unfortunately, that is rarely the case.

Then there is the company that has no experience with performance testing. They don’t know what they don’t know. They usually have their heart in the right place and want to do the right thing, but they don’t know what the right thing is. In those cases, they generally listen to us as trusted advisors, and we’re eventually able to get testing completed successfully. However, there is a lot of hand-holding, and the projects normally take longer than expected. Many times the people you are working with are dealing with things they never had to think about before. As long as they are willing to do what it takes to get the job done, we can be successful.

The last type of company is somewhere in between. They think they already have everything in place, yet have a lot of approvals and political issues to deal with. They may have a defined QA process in general, but it usually isn’t performance-testing friendly. Not all players on the team are as involved as they should be, and there is generally a lot of confusion. This is probably the most common type of company we work with, and in some cases it can be hard to be successful.

Typical issues include a poor understanding of performance testing tasks and deliverables, invalid test environments, bad or insufficient test data, bad or undefined performance requirements, unresolved functional defects in the application under test, and access restrictions for monitoring the environment under load. I thought that by now some companies would have learned and put standards in place, but we face some of the same issues on projects today as we did in the late 1990s.

a1qa: Based on your experience, can you remember concrete cases when load testing saved a lot of money and reduced risks for the client?

S.M.: Yes! Some specific examples would be:

We found a single outdated DLL file that caused a company to over-purchase hardware by $500,000.00. Two days of performance testing investigation would have saved them that much. Another time, we located a bottleneck in a single file of poorly performing code that was shared by four other applications in the company. Correcting that shared code fixed all five applications, saving thousands of dollars in development troubleshooting time alone. In another project, test results isolated five lines of code in an application that yielded a 1,000% performance improvement when rewritten, saving hundreds of thousands of dollars in hardware purchases and software development time. At one client, a load balancer had an outdated firmware version; when it was updated, end-user page times went from 30 seconds to 3 seconds.

I have also seen it work the other way. I have seen companies go into revolt after finding out the cost of a load testing exercise would be $20,000.00. They decided to risk it and roll out their application without a performance test. They lost $2 million in clients the first year because of performance problems. I could share story after story about this. When I hear the words “performance testing is too expensive”, I always have to ask, “Compared to what? What will an hour of downtime in production cost in real dollars? What will it cost your reputation? What would a bad review in the iTunes store translate into in terms of dollars?” At the end of the day, the business has to weigh the costs versus the risks. It’s their decision.

We try not to take on projects where success is not defined up front in the statement of work. There needs to be a good pre-qualification as to WHY performance testing services are needed and what is expected from the exercise. If you can’t go into a project knowing you can be successful, then why do it? Many times I see testers blindly going through their tasks to test an application without knowing why they are doing it and what everyone hopes they will accomplish. If you cannot go into a project knowing that you will add value by reducing the risks of deploying an application, or if you cannot demonstrate that finding bottlenecks before end users do will save quantifiable amounts of money, then you really should not engage. When done properly, performance testing usually pays for itself in the first round of test execution, when something unexpected is found.

a1qa: Do you consider HP LoadRunner to be the best solution for performance testing?

S.M.: Overall, I do, especially for the enterprise customer. It could be overkill for a small shop that only needs end-user timings for web/HTTP traffic and web services calls on a small website, or that is doing basic “science experiment” testing on the developer side. With the new version 12 of LoadRunner, there is a free 50-virtual-user license. I think this opens the product up to smaller shops that don’t have a huge virtual user requirement but still want to use a standardized load testing product.
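
To make the “science experiment” style of developer-side testing concrete, here is a minimal sketch of that idea in Python: a handful of concurrent virtual users issuing requests and recording response times. It is purely illustrative, not LoadRunner; the throwaway local HTTP server stands in for whatever application is under test.

```python
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

# A stand-in for the application under test: a tiny local HTTP endpoint.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request console logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def one_request(_):
    # Time a single end-user transaction (one GET request).
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# 10 concurrent "virtual users" issuing 50 requests in total.
with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(one_request, range(50)))

server.shutdown()

print(f"requests: {len(timings)}")
print(f"mean: {statistics.mean(timings) * 1000:.1f} ms")
print(f"p95:  {sorted(timings)[int(0.95 * len(timings))] * 1000:.1f} ms")
```

A script like this gives a rough feel for response times under modest concurrency, but it has none of the protocol support, monitoring, or analysis that a dedicated tool provides.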

a1qa: Why is this tool better compared to others on the market?

S.M.: “Better” really depends on context and the situation at hand. Are there other products worthy of being used instead of LoadRunner? Sure, depending on your requirements and budget. There are three main reasons I prefer LoadRunner. The first is protocol support. No other product has as many transport protocols as LoadRunner. It allows you to test web applications, XenApp-deployed applications, mainframe “green screen” applications, packaged ERP and CRM applications like SAP, Oracle, and Siebel, and many more with the same product and the same basic process. No other product on the market that I know of has this level of flexibility across applications. For a large enterprise, this is critical.

Secondly, the ability to correlate end-user timings with all of the native LoadRunner and SiteScope monitors gives the performance engineer a lot of data to pinpoint where bottlenecks arise in real time. The monitors are integrated into the tool once configured, so you don’t have to rely on third-party solutions or additional resources until you home in on a specific tier or problem spot. Again, I don’t know of another product that supports as many technologies as LoadRunner does. In the enterprise, you never know what you will run across, so this is very beneficial.

The third thing I like about LoadRunner is the Analysis engine. This component takes end-user timings and correlates them with the monitor data gathered during test execution, storing it all in one place. This gives the performance engineer a powerful tool for sorting and displaying data in a way that makes sense for both technical and non-technical roles involved in the testing project. I think this really separates LoadRunner from other products.
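
The core of what any such analysis step does is aggregate raw transaction timings into summary statistics. Here is a minimal sketch of that idea, assuming a hypothetical set of simulated timings; it is illustrative only and has nothing to do with LoadRunner’s actual internals.

```python
import statistics

# Simulated end-user transaction timings in seconds (assumed data,
# standing in for measurements gathered during a test run).
timings = [0.8, 1.1, 0.9, 3.2, 1.0, 0.95, 1.3, 2.8, 1.05, 0.85]

def percentile(data, p):
    # Nearest-rank percentile over the sorted sample.
    ordered = sorted(data)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# The kind of per-transaction summary a reporting step would produce.
summary = {
    "count": len(timings),
    "mean_s": statistics.mean(timings),
    "p90_s": percentile(timings, 90),
    "max_s": max(timings),
}
print(summary)
```

Even this toy summary shows why percentiles matter: the two slow outliers (2.8 s and 3.2 s) barely move the mean but dominate the 90th percentile, which is what end users on the slow tail actually experience.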

All of that said, performance engineering is a skill that should be tool-agnostic. The process should work regardless of the tool, but some tools make it easier for the engineer to deliver than others. In my experience, that has been true of LoadRunner. Many times the tool is blamed as the problem when in fact it is simply not being used properly. Whichever product is used, as long as it gets you the trustworthy results you need to be successful, that is the “better” one.

a1qa: Over the last few years, more companies have been choosing “cloud” solutions for performance testing to avoid paying for equipment that sits idle. Some companies say this allows them to deploy a test environment more quickly. Would you agree this is the best approach when performance testing is less frequent?

S.M.: Generating load to and from the cloud has been a hot topic. It further complicates the requirements from a testing point of view. Is it a private or public cloud, or a mixture? Is it testing against production or a production-like environment? I understand the argument that a production-like testing environment and a full-featured load testing lab can be expensive compared to doing everything as a service. However, it needs to be put in context. Enterprise shops are usually testing all the time for multiple projects and require a test environment as close to production as possible, with a lot of controls in place to ensure that test results are accurate. Any and all variables that can be removed should be. It’s not easy to spin up a complex SAP environment on the fly, whether in the cloud or not. For companies with a traditional three-tier LAMP-stack website on virtual servers, it may make sense to deploy and test against a “like” environment in the cloud as needed, and then shut it down when idle to save money. The same issues with the application under test that I mentioned earlier concerning environment, functionality, data, and monitoring access still exist whether you generate load to or from the cloud. Version 12 of LoadRunner now supports generating load from load generators configured in Amazon EC2; other products have this capability as well. The correct approach is still situational.

Unfortunately, in some cases we’re seeing a shift away from a controlled testing process to an uncontrolled (almost whimsical) approach to performance testing in the name of “Agile”. There is an attitude that we can just spin up some virtual machines or test against production, as if we were kicking the tires on a car to see what happens. We get a few graphs, and that satisfies the business that things are OK. Then everyone wonders why there are still performance issues in production. In my opinion, regardless of whether you test your application continuously or once a year, the approach you choose should be the one that enables the most realistic load tests and eliminates the most risk by removing as many variables and unknowns as possible. If we’re looking to the cloud because it will accomplish the same thing cheaper and faster without changing our process, that is great. If the thinking is “well, we don’t have the time and money to do it the right way, so let’s just spin something up, throw some load at it, and see what happens”, then that is a poor excuse.

a1qa: Scott, thanks for sharing your point of view. We will be glad to see you as our guest again.

Follow Scott on Google+, Facebook and Twitter.

