
Can we fully automate our software testing?

Your boss has jumped on the bandwagon to automate software testing. Don't despair. Software testing expert Matt Heusser walks through what to say -- and do -- to keep everyone happy.

Most of us are familiar with the term automated software testing. It sounds great, but clearly, it's not as simple as pressing a button.

What does it really mean to adopt an automated software testing strategy? If manual software testing is removed from the equation, the QA team faces dramatic changes -- or, perhaps, extinction. But fully automated QA doesn't guarantee quality; it might even serve as a detriment.

So, is it possible to fully automate software testing? And, if so, is it a good idea? Let's explore those questions with a discussion of the value of test automation.

How automated testing creates value

Most test automation runs an application through an algorithm, with a start place, a change and an expected result. The first time those tests pass, the automation is complete. If the feature isn't considered done until the checks pass, the test-fix-retest loop helps complete the feature faster and provides clear instructions to guide correct behavior. At the moment the automated checks are created, though, they don't actually find problems, so they haven't delivered value yet. The value comes later: once everything passes the initial run, subsequent changes to the software can produce an unexpected result, and the checks detect that breakage.
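To make that concrete, here is a minimal sketch of such a check in a pytest style, with a start place, a change and an expected result. The invoicing module, its create_invoice function and the status field are hypothetical stand-ins, not code from any real product.

```python
# Minimal sketch of an automated check: a start place, a change and an
# expected result. The invoicing module and its API are hypothetical.
from invoicing import create_invoice  # hypothetical module under test


def test_new_invoice_is_not_past_due():
    # Start place: an empty system for a hypothetical customer.
    # Change: create an invoice dated today.
    invoice = create_invoice(customer="ACME", amount=100.00)

    # Expected result: a brand-new invoice is not already past due.
    assert invoice.status != "past due"
```

The first time this passes, it confirms nothing new; it only pays off when a later change makes the assertion fail.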

At this point, we run into a maxim of test tooling: After a single change in the software, test automation essentially becomes change detection.

Bear in mind: The programmer's job is to create change, and that, in turn, creates a maintenance burden. Each time a check fails after an intentional change, someone must debug the test, confirm that the software actually changed on purpose, and then update the test -- say, to add the now-required phone number field when creating a user.
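As a hedged illustration of that maintenance work, here is a before-and-after sketch. The create_user function and its parameters are hypothetical; the point is only that an intentional product change forces someone to revisit a previously green check.

```python
# Hypothetical user-creation check, before and after an intentional change.
from app.users import create_user  # hypothetical module under test


# Before: this check passed for months.
def test_create_user_old():
    user = create_user(name="Pat", email="pat@example.com")
    assert user.id is not None


# After the product makes phone number required, the old check fails.
# Someone must debug it, confirm the change was intentional, then update it:
def test_create_user_updated():
    user = create_user(
        name="Pat",
        email="pat@example.com",
        phone="555-0100",  # now-required field
    )
    assert user.id is not None
```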

With this basic understanding, you can see how automated testing benefits your organization. Then, the question becomes: How much should you try to automate software testing?

Determine how much automation is enough

Let's say you're doing a software demo for a customer or senior executive. The product isn't in production yet; you're just showing what you've done to get feedback for the next iteration. The vice president of finance asks what happens if you create an invoice that is past due the day you create it. It's a good question and, essentially, a test idea -- the kind of thing no one thought of before. If the software works one way, it's fine; if not, it's a new feature request, not really a bug.

The person at the keyboard tries to answer the question. Do you tell him to stop -- that you need to create and run an automated test before you can answer that question? I certainly hope not.

There are plenty of test ideas like this, things you think of in the moment to explore, especially when testing a new feature that is part of an existing system. Most ideas you want to try just once. Automating these tests into code that runs all the time is wasteful and expensive. Your boss certainly doesn't want every little idea institutionalized. Moreover, does your boss want to automate test design -- the development of test ideas?

There is no magical box into which you can feed requirements as Word documents and pop out test conditions and expected results. When most people say test automation, they typically mean automated test execution and evaluation, plus perhaps setup. They want to click a button, run all the preexisting checks and get results. A fully automated software testing strategy implies that a thumbs-up is sufficient to move to production without further research and analysis.

In reality, that is 100% regression test automation -- you exclude performance, security, and new platform or browser support and just say, "Once any change has been tested in isolation, it can roll to production after the tooling passes." A few of the companies I have worked with have achieved this standard.
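As a rough sketch of what that policy amounts to in a pipeline, the script below runs the regression checks and releases only if they pass. The test directory, the deploy command and the overall layout are assumptions for illustration, not a recommendation of any particular toolchain.

```python
# Hedged sketch of a "roll to production when the tooling passes" gate.
# Paths and the deploy command are assumptions, not a real pipeline.
import subprocess
import sys


def main():
    # Run the full regression suite; the exit code is the thumbs-up or -down.
    result = subprocess.run(["pytest", "tests/regression"])
    if result.returncode != 0:
        print("Regression checks failed; holding the release.")
        sys.exit(1)

    # 100% regression automation implies no further human analysis here --
    # performance, security and new-platform coverage are explicitly out of scope.
    subprocess.run(["./deploy.sh", "production"], check=True)


if __name__ == "__main__":
    main()
```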

Moreover, it still leaves us with the test maintenance problem.

3 ways to perform test maintenance

There are two popular approaches to make test maintenance more efficient. The first is to write tests of thin vertical slices of a feature that run quickly and are easy to debug, sometimes called DOM-to-database tests. Another approach is to isolate the code into components that deploy separately and simply do not have GUI automated checks, focusing automation efforts "under the hood."
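To illustrate the "under the hood" approach, here is a hedged example that exercises a component's API directly, with no browser and no locators to maintain. The pricing module and its calculate_late_fee function are hypothetical stand-ins for a separately deployed component.

```python
# Hedged sketch of an "under the hood" check: call the component's public
# API directly instead of driving the GUI. The pricing module is hypothetical.
from datetime import date

from pricing import calculate_late_fee  # hypothetical component API


def test_late_fee_for_overdue_invoice():
    # No browser, no locators -- just the component's contract.
    fee = calculate_late_fee(
        amount=100.00,
        due_date=date(2023, 1, 1),
        as_of=date(2023, 2, 1),
    )
    assert fee > 0
```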

A third, newer approach to maintenance is to use machine learning and predictive intelligence to figure out whether the software is actually broken and then self-heal. Sometimes, a UI change doesn't affect functionality at all -- it only moves elements around the screen, which causes the locators to fail. In this case, the software can use a history of where elements have appeared to essentially guess the new location of, say, the submit button and recheck. If the check passes under these conditions, the AI can adjust it to self-heal. Some companies have tried this approach with moderate success, reducing the test maintenance burden without increasing their false-pass rates.
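Here is a simplified sketch of the self-healing idea, assuming a Selenium-style driver and a hypothetical in-memory history of locators that have worked for each element in the past. Real tools use richer element signatures and trained models; this only shows the fallback-and-update shape.

```python
# Hedged sketch of self-healing locators: if the primary locator fails,
# try previously recorded locators for the same element and, on success,
# promote the working one. The history store here is a hypothetical dict.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

locator_history = {
    "submit_button": ["#submit", "button[type='submit']", ".btn-primary"],
}


def find_with_self_healing(driver, element_name):
    candidates = locator_history[element_name]
    for css in candidates:
        try:
            element = driver.find_element(By.CSS_SELECTOR, css)
            # "Self-heal": move the working locator to the front for next time.
            locator_history[element_name] = [css] + [c for c in candidates if c != css]
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No known locator matched {element_name}")
```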

Overall, my advice to organizations that question the viability of a 100% automation policy is simple: Take a step back, and breathe. Ask reasonable questions. Don't be a know-it-all, don't be a doormat, don't enable and don't overly obstruct. Work with the boss to define terms, focus on end results and come up with the means to achieve those results -- whatever level of automated or manual testing it requires.

Next Steps

Devising a test automation strategy: Getting started

Seven ways to know when to automate testing
