
Learn the value of exploratory testing vs. scripted testing

Listen to this podcast

In this podcast, testing expert Matt Heusser explains how exploratory and scripted tests differ and how they complement each other.

Organizations can apply radically different testing methods to their software release schedules. Compare exploratory testing vs. scripted testing, for example, to see how contrasting approaches to software assessment work.

Scripted testing techniques and tools can save time and prove particularly effective when checking aspects or values of an application in sequence. In contrast, exploratory testing takes the application programmer's role into account, giving them the flexibility to test as they build. This latter approach, a type of ad hoc testing, emphasizes that testers learn the skills necessary to find unexpected changes and alter their tests on the fly.

While the dichotomy of exploratory testing vs. scripted testing is real, testers can benefit from both methods if they strike the right balance between the two for their team and application. Check out this episode of Test and Release, which features Matt Heusser of Excelon Development, for some pointers on both approaches.

Heusser lays out what he calls a "brave" approach to testing. Organizations should approach testing as an intelligent, evolving discipline, with automation available to aid success rather than undercut the value of a human tester. He urges teams to shun the idea of testers as "slightly smarter monkeys" and instead dive into testing and build up considerable skill. Those teams should study risks, he says, as well as where bugs come from and how they're injected into software -- all while working with limited information under time pressure.

Transcript - Learn the value of exploratory testing vs. scripted testing

Hello and welcome to the initial episode of SearchSoftwareQuality's podcast. I'm Ryan Black; I'm an assistant editor for the site. And joining me today is Matt Heusser. Matt, can you tell us a little bit about yourself?

Matt Heusser: Sure. I'm the managing director of Excelon Development, which is a small consulting company in the Midwest [in the] United States. I was a programmer, a project manager [and] QA lead for the first 18 years of my career or so. And then I've been a consulting software tester and a consultant for the past seven.

Great. I primarily know you as a contributor to our site, and I very often will see you writing about testing topics. And I know you recently wrote about exploratory testing -- I think it was about mobile software -- which gave me the idea for this podcast, in which we want to give an overview of the defining and differentiating details of exploratory testing vs. scripted testing. And then maybe we'll get into how they intersect and complement each other.

Heusser: Sounds great.

Let's start off with a general question. When you hear exploratory testing, what does that mean to you, Matt?

Heusser: So, when you think of a test in school, like your teacher is going to give you a test, they usually have a standard 50-question, multiple-choice exam, and it's testing your knowledge. When you think about a car inspection, it's a test; it's a 15-point inspection, and they look at the results. When we're testing software, we have an opportunity to do what I think is a little bit better. We have the opportunity to say, 'Well, that's really interesting. That didn't quite do what I expected,' and then to change the test while we're performing it -- viewing it [as] more of an improvisational performance, more like improv jazz, versus viewing it as a sort of set of steps that you must follow. [Exploratory testing] is more like a chess game; we're adapting to what's actually happening.

So, it's almost like a test is the wrong word for it. I like the improv jazz metaphor you used. It's like you're trying to get a full, 360-degree understanding of something, but it's not necessarily checking something, like you mentioned with a test in school, where you're just trying to check and see if someone knows the material.

Heusser: We have an opportunity to do one better by simultaneously -- or, in parallel -- doing test design, test execution, learning and recording. So, the learning activity actually feeds into the next step. That's a very different model or paradigm for a lot of people. A lot of people, when they think about testing, there's a sort of cult of stable, predictable and repeatable [cadence] which came out of the 1980s when McDonald's and Walmart sort of took over the world by breaking any piece of work down into its isolated component parts and then reassembling it perfectly. Yet, software is different every time. The one thing I know about this build is that it will be different than the last build by definition. Otherwise, why test it, right?

Why would we perform the same kind of testing on it as we did last time when the change that's introduced is different every single time? It's not an assembly line. Exploratory testing -- the term was popularized by a guy named James Bach and, I think, defined by Cem Kaner -- was an attempt to explain that do-it-one-better way of thinking about software testing.

That made me think of the word I was trying to reach for earlier, because I read someone describing exploratory testing as less of a practice and more of a mindset. Would you agree with that framing?

Heusser: Yeah, I think so. In testing, we talk about how there are ways of coming up with test ideas, like equivalence classes and boundaries and all these sorts of things. And exploration is your style, your way of thinking as you're performing the testing. It's more an approach than a technique.

Is there any particular type of application or software that exploratory testing is best suited for, or would it have utility if I applied the approach to any sort of application or software development team?

Heusser: Well, the exceptions are few and far between. I've spent most of my career working on commercial software in a free market, where there are competitors and where customers' adoption of the software determines whether or not it does well in the marketplace. I've done some internal software; in that case, the economics are different because the CIO might pay for it and force the customer service reps to use it. And that's just so different. But in both of those environments, for the most part, it is pretty standard.

If you can reduce your specification to symbolic logic and create test data that you can run, and there's no real customer interface -- no real human who can just bounce around an event-driven system and do whatever they feel like -- then exploration matters less. So, if you're talking about an embedded system to drive the transmission of an automobile or a router, then there are other approaches that might be really interesting. There's this thing called model-driven testing. But those pesky humans and that pesky, vague, ambiguous specification -- as soon as you introduce those, then I want to reach for my exploratory testing quick.

I see. Then, to come at that from the opposite end, what would be the big obstacles/reasons a team wouldn't want to go with exploratory and maybe would want to go with model-based testing instead?

Heusser: The main reason I find -- so, there's a couple of different perspectives about testing. One perspective is that testing is easy and any fool can do it and we should just write all the steps down and pass it off to anyone and we can get -- you know, find some dumb monkeys to do it. And if they fail, we just get slightly smarter monkeys. So, that [outlook] is [that] testing is a no-skill activity, where the goal is to drive the skill out of testing.

The other one is that testing is the sort of magical thing that some people have and others don't, and we need to get the gurus to do it. In neither of those worlds do we say testing is a skill that can be learned and needs to be adjusted based on the risks for this build, versus last build, versus what we've had in the past.

There's a lot of that Walmart/McDonald's thinking about software development. A lot of people have computer science degrees, and what programmers do is they automate things. They take a simple, straightforward process, and they automate it. That is, to some extent, the definition of programming, right?

So, they see testing, and they're like, 'Oh, a simple, straightforward, repeatable business process. We should just automate all of it.' And you can have significant success with lots of different tooling and automation approaches to reduce the time it takes to do testing or to sort of make the testing more powerful. There's a lot there.

But in all these cases people will say, 'Testing is weird. Testing is hard. I don't want testing to be a skill. Let's just make it so we can press a button and testing will be done and I don't have to think about it. Or let's just write it all down, and then we can give it to anybody to run, and I don't have to worry about sort of this union of skilled testers walking out, and I'm left holding the bag. It's scary, and I'm afraid of it.'

All of these other approaches are sort of ways to deal with the symptoms of the reality that 'testing is scary and I'm afraid of it.' So, teams that find testing scary and are vaguely afraid of it are going to try these other activities. The brave thing to do is to say, 'Let's study it and get really good at it. Let's study risks, study where software bugs come from, study how they're injected into the software, and let's get really good at finding them, and let's [get] really good at assessing risks with limited information and time pressures.' And I think that's kind of the ballgame of good testing.

It sounds like the people who don't quite see that, they'll almost look at it as, 'Oh, I'm simplifying the equation of the testing I need to do by removing the human element.' From their perspective, they're simplifying things.

Heusser: Absolutely, and there are a number of other ways. If testing is one form of risk management, there are a bunch of things you can do to manage risk, right? There was a time in history when Facebook and Twitter could just kind of throw their software at the customers and let the customers find the bugs. There are lots of ways to reduce risks. What I'd prefer to have is someone who has studied and considered the approach and can say, 'Because of these factors, we have chosen this approach. And even then, we do [DevOps] combined with cloud-based mitigation of risks combined with intense monitoring.' If they can answer the question, even if their approach doesn't include exploration, I'm much more comfortable. But it turns out exploration is surprisingly easy to learn and teach in order to achieve a minimal level of effectiveness. And if the whole team does it all the time as an activity, or a small percentage of the time -- especially for new feature development -- you can reduce a lot of risk for supercheap.

Just maybe as a quick aside, you mentioned exploratory testing is relatively easy to learn. What are some of the avenues people could take to familiarize themselves with the practice?

Heusser: So, it's relatively straightforward to get the sort of 101-level understanding of how to attack a website and find the obvious bugs, right? If you Google, 'Quick attacks, how do I do them?' you will find articles -- some of them I've written -- on how to overwhelm software with information that isn't expected.

Like a distributed denial-of-service attack or something.

Heusser: Yeah, sure, but just type in a number that's too big, and click submit.

Oh, so even simpler than a DDoS attack.

Heusser: Yep. You type in a word when it's expecting a number -- leave something blank that is supposed to be filled in. There's a whole bunch of these, and they are the sort of classic quick attacks. It's just like, 'Here's how to overwhelm the software.' And then there is what we call 'common failure modes.' I know, for a web-based app, I would try these things; I know, for a native iOS app, I would try those things; I know, for a Chromebook app, I might try these other things. So, you study quick attacks, you study failure modes for your platform and then you study the history of the bugs found for your software, and you can get pretty far. [There is a risk that there are] people who do a one-hour class or one-day class, and they think they're testing experts, and it's more like: a day to learn, a lifetime to master.
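To make that concrete, here is a minimal sketch -- not from the podcast -- of what a handful of those quick attacks might look like when automated with Python, pytest and Selenium. The URL, field names and failure signals are hypothetical placeholders; the point is simply to feed a form input it doesn't expect and confirm it fails gracefully.

```python
# Quick-attack sketch: hammer one input field with unexpected values.
# URL, element names and the "graceful failure" check are assumptions.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

QUICK_ATTACKS = [
    "9" * 50,      # a number that's far too big
    "forty-two",   # a word where a number is expected
    "",            # a required field left blank
]

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

@pytest.mark.parametrize("bad_value", QUICK_ATTACKS)
def test_quantity_field_survives_quick_attack(driver, bad_value):
    driver.get("https://example.test/order")          # hypothetical page
    field = driver.find_element(By.NAME, "quantity")  # hypothetical field
    field.clear()
    field.send_keys(bad_value)
    driver.find_element(By.NAME, "submit").click()
    # The app should answer with a validation message, not a crash page.
    body = driver.find_element(By.TAG_NAME, "body").text
    assert "Traceback" not in body and "500" not in driver.title
```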

To maybe pivot somewhat, but I think we were kind of talking around it a few minutes ago: When you were talking about automation, would a lot of that stuff pertain specifically to scripted testing?

Heusser: Well, most automation approaches that I see are scripted in nature. Scripted testing is click, click, click, inspect, right? That's not actually what the tester is going to do, because they will find something interesting. They're going to jump off-script, and they will come back, and there's no guarantee it will be exactly where they jumped off. And even if they do, the state of the software will be different than what it was supposed to be at that point because they jumped off-script and played around a little bit.

Because you only have a specific understanding of what the app is going to be when you write the initial script, there's no way you could write a script unless you're using advanced AI to predict what the application will be then, right?

Heusser: Well, it depends. Sometimes, scripts are written after the fact. Sometimes, we're just rerunning the same script that we run every two weeks. But even then, what the humans are going to do when they find a problem is they're going to jump off the script. If they never find any problems, then the value of the script is questionable. So, we never actually, really follow the script anyway. To some extent, even scripted testing has a component of exploration to it. That's why one of my colleagues, James Bach, says that all good testing is exploratory to some extent. He's kind of dropped the very term that he once introduced. I still think it's, by far, valuable for education to make the distinction.

For test automation, when people say that, they mean, 'We're going to drive the user interface pretending to be a customer, and we're going to do click, click, click, click, type, submit, click, check value,' right? Tests are literally scripted and only checking for the things that are on a very, very specific line. That's getting better. There are tools that can grab more and more of the screen. And there are tools where, when you get an error, you can look at it, and you can say, 'That is the expected change for this new requirement. So, I'm going to click and grab that error image that's going to become the OK image.'

While the tools are getting better, automation, for the most part, is very literal -- it's implementing the literal parts of scripted testing and forgetting about sort of that human element. Computers are very good at that. At the bottom of every scripted test is a hidden assertion that no one ever writes down. The assertion is: And nothing else weird happens. And automation is not particularly good at catching that. So, you usually want some combination of both.
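As an illustration of that "click, click, type, submit, check value" pattern -- and of the hidden assertion Heusser mentions -- here is a minimal scripted check in Python with Selenium. The URL, element IDs and expected greeting are assumptions for the sake of the example.

```python
# A literal scripted GUI check: drive the UI like a user and verify one value.
# URL, element IDs and the expected text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")               # hypothetical app
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "submit").click()

    # The only explicit check: one very specific value on one specific element.
    greeting = driver.find_element(By.ID, "greeting").text
    assert greeting == "Welcome, demo_user"

    # The hidden assertion no one writes down -- "and nothing else weird
    # happens" -- is not checked at all: a broken layout, console errors or
    # a mangled sidebar would all pass this script.
finally:
    driver.quit()
```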

I think that'd be a good point to maybe transition to how exactly exploratory and scripted testing can complement one another or fill in each other's weaknesses. One specific question I had was: Would you ever think of exploratory testing as a way to find the aspects of an application you would later set up automated checks or automated scripts for?

Heusser: There's an important distinction here that our audience might not be getting, which is testing versus checking -- which is not my term either. That came from a guy named Michael Bolton, who has done a lot of work with James. And the distinction is that checking is a part of testing. And what that piece is is, 'Type in this, do this, see this.' It's algorithmically defined in advance exactly what should happen, and it ignores all the side noise if something else were to happen, right?

Because you're just checking for that one specific thing.

Heusser: Right. And that's not necessarily bad. When you make automation and it runs through your user interface and it all passes once, that's great. And then you make a change and run it again. It's no longer really test automation. It's change detection, and the job of the programmer is to create change. So, you're going to have these failures that you go look at, and you say, 'Yup, it failed to click the submit button because we added the middle name to it; middle name is required. So, I ran the old test and went all the way through. And it gave me an error message, which it should do, because the middle name is not populated and it's required now. So, now I need two tests: one to see the error message, and one to fill in the middle name, click submit and check that it works and the middle name is on it.'

So, you've got this maintenance burden that comes with automation. But I'm sorry, I jumped to the testing versus checking, and I forgot the question.
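Here is a minimal sketch of where that middle-name scenario ends up: the single old script becomes two checks, one expecting the new validation error and one filling in the field and expecting the happy path. Everything here -- URL, field names, message text -- is hypothetical.

```python
# After the (hypothetical) "middle name is now required" change, the one old
# script splits into two checks. All names and messages are assumptions.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    d.get("https://example.test/register")   # hypothetical registration page
    yield d
    d.quit()

def fill_common_fields(driver):
    driver.find_element(By.NAME, "first_name").send_keys("Ada")
    driver.find_element(By.NAME, "last_name").send_keys("Lovelace")

def test_missing_middle_name_shows_error(driver):
    # New check: the form should now complain when middle name is blank.
    fill_common_fields(driver)
    driver.find_element(By.NAME, "submit").click()
    error = driver.find_element(By.CLASS_NAME, "field-error").text
    assert "middle name" in error.lower()

def test_middle_name_filled_in_succeeds(driver):
    # Updated happy path: fill in the new required field and submit.
    fill_common_fields(driver)
    driver.find_element(By.NAME, "middle_name").send_keys("King")
    driver.find_element(By.NAME, "submit").click()
    assert "King" in driver.find_element(By.ID, "confirmation").text
```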

The initial question was: Can exploratory testing be used to effectively find aspects of an application to set up automated checks for later?

Heusser: One thing that commonly happens is that we get the user interface, and there are different schools of thought on this. There's a lady named Angie Jones, who's on Twitter, and she will create the automation before the code is written. And then, when the code is written and you have a build, she can click go, and she can get through 90% to 95% of her scenarios. But, most of the time, for most of the companies that I work with, when we start our journey, the first build is pretty darn buggy. And they want feedback really soon. They want feedback today to fix it, right?

So, if we take two to three days creating automation, it's going to slow down this feedback loop. We're going to then get to create all the automation. Oh, and I got stuck halfway through creating it because this particular function doesn't work at all. So, I couldn't actually create all the automation, but along the way of creating the automation, I found these four bugs. Please fix them so I can finish creating the automation, so it can run once and pass, right? There's no value in it until it actually all passes because I can't even run it. But what we do instead usually is a quick-pass exploration, and we give them, 'Here's eight bugs I found.' Then, we check to see if those bugs are fixed and retest the feature. And now the feature can be automated, and the automation can actually pass. As for the value of that tooling at the GUI level, there are a couple of [values].

One, it tells us that we're done, right? The story isn't done until the automation runs. And another one is that, two weeks from now or two hours from now, we want to rerun all of the automation to see that the change didn't break anything -- so, regression. The change didn't break anything big and obvious that's on the path of our test. There's some value in that. That's the real value there.

One more question: How would you strike the balance between exploratory vs. scripted testing? Is it a 50-50 split, 40-60, etc.?

Heusser: I belong to something called the Context-Driven School of Software Testing. We don't talk about it much anymore. It's kind of like [what] the Agile Manifesto was for testing, and I would say that the value of any practice only exists in a given context. It's really hard for me to give you broad guidelines.

What I would instead do is come to your shop, talk for an hour or two, whatever, you know, talk amongst yourselves and say, 'Do we need more or less of it? More or less of what, right?' So, right now, if we're doing 100% exploratory testing, what are the consequences of that? It takes two weeks to do a release. So, we basically write software for two weeks, and then we test it for two weeks. And that seems like a big waste of time. We're losing half of our productivity to bug checks.

OK, then. And then, while we're fixing it, what are our problems? Well, most of the time, they break login in every build, so that's pretty bad, right? We have to wait for a build, and then we can't even log in, and we have to send them a bug back: I can't log in. We have [this] silly back and forth. So, maybe we start tooling around to run a login test on every build, and if it doesn't pass login, it doesn't get promoted to test. But why are we breaking login in every build? That's really weird. We shouldn't be doing that. So, we can also do root cause analysis on it. By now, we have to have a lot of automated tests because our software is really buggy.
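A minimal sketch of the kind of per-build login gate Heusser describes: a tiny smoke check the CI job runs against every build, refusing to promote the build to the test environment if it fails. URL, credentials and element IDs are hypothetical, and the actual gating would live in whatever pipeline tool the team already uses.

```python
# Per-build login smoke check. A non-zero exit code signals the CI job to
# block promotion. URL, credentials and element IDs are assumptions.
import sys
from selenium import webdriver
from selenium.webdriver.common.by import By

def login_works(base_url: str) -> bool:
    driver = webdriver.Chrome()
    try:
        driver.get(f"{base_url}/login")
        driver.find_element(By.ID, "username").send_keys("smoke_user")
        driver.find_element(By.ID, "password").send_keys("smoke_pass")
        driver.find_element(By.ID, "submit").click()
        return "Dashboard" in driver.title   # assumed post-login page title
    finally:
        driver.quit()

if __name__ == "__main__":
    ok = login_works("https://build-under-test.example.test")
    sys.exit(0 if ok else 1)
```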

That brings up a second question of, 'Why is our software so buggy?' That's more of a root cause to fix than, 'Let's just create a huge amount of Selenium or some other tool-based automation to cover all this stuff because it's so buggy. Oh, what if we just have fewer bugs?'

So, those are the kinds of conversations I want to have. And the outcome of that is to invest more or less time in tooling, more or less time in exploration and/or more or less time in scripted testing. It's a big pie. Let's experiment for two weeks and come up with a different percentage. We talk about best practices a lot; the term 'best practice' gets thrown around a lot. In medicine, if you give a prescription without first diagnosing the patient -- keeping in mind how every patient is different -- that's not a best practice. That's malpractice, right?

We have to look at the organization. What are your goals? How often do you want to release? What amount of risk are you willing to tolerate? What do your customers look like? What amount of defects have you injected in the past? What does the technical team look like? Where are their strengths? Do you have a testing role? Where are their strengths? Is there energy on the team for everybody to do a little bit of exploration? Or do we hate it, and we don't want to touch it, and we just want to have specialists do it? Or do we hate it and don't want to touch it and just want to automate all of it, and then we have to have some really hard conversations?

And those are relatively mature conversations to have. I mean, I've worked with teams that just have never been able to -- or, in the past, haven't had -- those conversations. They just kind of always do what they've always done, and then they yell at each other when the outcome is bad. And I think we can do better.

I see. So, I think that just about wraps it up, though. I want to thank you again, Matt, for joining me today and talking about this.

Heusser: My pleasure. Thanks, Ryan.


Next Steps

Find the right software testing methods for your dev process

How to plot out a test automation strategy

4 ways to use record and playback test automation tools
