Measure UX or risk losing customers, says consultant

A quality customer experience helps ensure software product viability. So why do organizations stop short in their UX efforts? In this podcast, consultant Isabel Evans makes the case for measuring UX.

ORLANDO, Fla. -- No two end users are alike, so don't assume their experiences are alike either.

User experience (UX) is derived from a diverse mix of metrics, attributes and attitudes, not a simple inquiry into site load times. To measure UX effectively, organizations need to treat it as a multifaceted characteristic of application quality, especially as they aim to ship software faster than ever. Even as UX feedback becomes a core part of the DevOps loop, app dev teams can struggle to strike a balance between swift delivery and front-end quality.

Isabel Evans, consultant

Isabel Evans, an independent quality and testing consultant, spoke about why testers should evaluate and measure UX here at the STAREAST conference in May 2019. While dedicated UX design roles are on the rise, she noted that testers should undertake some of that user evaluation, as they are the primary advocates of software quality.

"[Testers] don't think enough about the diversity of people in the way that the UX designers do," Evans said. "We're doing testing. We think [our test strategy] works for us, and we think it's obvious because we're IT people, and we think that things are intuitive, for example, and they're actually not."

The bottom line is that unhappy customers won't stick around long. In this episode of the Test & Release podcast, Evans provides insights into how to keep them on board, now and in the future, by measuring and acting upon UX. She starts by defining three oft-misused terms.

Transcript - Measure UX or risk losing customers, says consultant

Editor's note: Evans spoke with site editor David Carty. The transcript has been lightly edited for clarity and brevity.

Isabel, thank you for joining me.

Isabel Evans: My pleasure. Thank you for having me.

Sure. Right off the bat, we wanted to discuss the differences between the terms: UI, usability, and UX. These tend to get conflated a little bit and maybe we've heard a little bit of that over the course of the conference. Can you kind of break those three terms down for us?

Evans: Okay, yes. UI stands for user interface. So it is the interface between the human and the computer, the person and the computer. It's the thing that you can see on the screen, or that handles the tool that you're holding, or the product that's got embedded software. It's the surface of that. It's the thing that the human body or senses encounter. So that user interface, very often in software terms, is what we see on the screen. And when we're testing that, we might be looking at things like, 'Does it tab properly? Does it go in an expected order? How many keystrokes do you have to do to get to the next command?' That type of thing. It's very much focused on the software.
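
By way of illustration, checks at that UI level lend themselves to scripting. Below is a minimal sketch in Python with Playwright that tabs through a form and compares the observed focus order against an expected one; the page URL and field IDs are hypothetical placeholders, not something Evans described.

```python
# A minimal sketch of a UI-level tab-order check using Playwright for Python.
# The URL and the field IDs are hypothetical placeholders.
from playwright.sync_api import sync_playwright

EXPECTED_TAB_ORDER = ["name", "email", "submit"]  # hypothetical field IDs

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/signup")  # hypothetical sign-up page

    observed = []
    for _ in EXPECTED_TAB_ORDER:
        page.keyboard.press("Tab")  # move focus the way a keyboard user would
        observed.append(page.evaluate("document.activeElement.id"))

    assert observed == EXPECTED_TAB_ORDER, f"Unexpected tab order: {observed}"
    browser.close()
```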

That contributes to the usability of a piece of software. When we're measuring usability, we're not just looking at the user interface. We're looking at a whole flow of activities. We're looking at how the person is using the system, whether they can carry their tasks out effectively and efficiently, the level of satisfaction they're feeling with their ability to carry those tasks out. Now in order to test those, we may well be observing what a user does. And you can automate some parts of that, or use tooling on some parts of that, so the types of things that people might do, in terms of tooling, is make recordings to show what people are doing with their hands, where they're moving the mouse around the screen, what they're doing with finger strokes, where they're hesitating in the flow of what they're doing, and so on and so forth.
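
To give a feel for what such tooling does with those recordings, here is a minimal sketch, using made-up numbers, that scans periodic pointer samples for hesitations, that is, stretches where the cursor barely moves for a while.

```python
# A minimal sketch of spotting hesitations in a recorded usability session.
# The (seconds, x, y) pointer samples below are hypothetical.
samples = [
    (0.0, 100, 200), (0.5, 310, 220), (1.0, 312, 221),
    (3.8, 314, 222), (4.2, 640, 480), (4.7, 650, 485),
]

HESITATION_SECONDS = 2.0  # a pause at least this long counts as hesitation
MOVEMENT_PIXELS = 10      # total cursor movement below this counts as "still"

for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
    moved = abs(x1 - x0) + abs(y1 - y0)
    if t1 - t0 >= HESITATION_SECONDS and moved <= MOVEMENT_PIXELS:
        print(f"Hesitation of {t1 - t0:.1f}s near ({x0}, {y0})")
```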

When you look at user experience, you're coming at another level again. And when we talk about user experience, we're talking about the entirety of a person's experience with a product, or a service, or an organization, or a combination thereof. So it might be not just the interaction with this piece of software and not just the ability to carry these tasks out but the interactions with other systems, with other people in the organization, what the whole experience is like. And we start to move towards things that we measure more qualitatively, very specifically, people's emotional responses to the experience they've just had. So whereas at the UI, we might count this number of steps for somebody to traverse through the screen, and at usability we might be saying how long did it take somebody to carry that task out and were they happy with the result, with UX we're measuring things like did they trust it, was it credible? If it was a game, was it seductive, was it playful? And those are the types of words we're using and suddenly, they're words that, for a lot of people in IT, we're not used to using. They are words that are definitely about emotion and how people feel about things.
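
Those emotional attributes are typically captured with post-session rating scales rather than automated probes. As a purely hypothetical sketch, the snippet below averages 1-to-7 ratings for the kinds of words Evans mentions; the words and scores are illustrative only.

```python
# A minimal sketch of summarizing post-session emotional-response ratings.
# The attribute words and the 1-7 scores are hypothetical examples.
responses = {
    "trustworthy": [6, 5, 7, 4, 6],
    "credible":    [5, 5, 6, 5, 7],
    "playful":     [3, 2, 4, 3, 3],
}

for attribute, ratings in responses.items():
    mean = sum(ratings) / len(ratings)
    print(f"{attribute:<12} mean rating: {mean:.1f} / 7")
```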

Sure, things that are very difficult to measure.

Evans: Absolutely. Things that are difficult to measure but if we bring in work from other disciplines, if we look at the [human-computer interaction] HCI and the UX discipline, the sort of people working in that are coming in from psychology, sociology, anthropology. They are looking at the problems in different sorts of ways. So there are ways you can do qualitative measures. There are ways that you can observe what people are doing, ways you can interview them. Again, there are tools that people use to contribute. So you might look at eye tracking, eye movements. There are some teams that use brainwave measurements. People measure the amount of sweat in somebody's palm or how hard they're holding onto the mouse or the other part of the tool. What's the level of tension in their body, how they're experiencing this haptically [related to sensations in the body]? All those types of things. But it inevitably comes down to how did you feel about that, or how did we observe you were feeling about that?

So these are difficult things to try to measure and then act upon, like we're talking about. Which I imagine is one of the reasons why we are seeing an increase in UX specialist-type roles -- bringing people into an organization to collect these things and then make sense of them. Is that the trend that you're seeing -- that these types of roles are coveted and on the rise?

Evans: Absolutely, they are. And they're separate to the testing community. UX designers, it's a different discipline. I was at a meeting last year in the UK, and I think there were 30 or 40 people there. It was a sort of special interest group meetup type thing one evening. And they had an open mic and there were six companies there, each looking to recruit UX designers. The whole of that room could've found a job that evening if they'd been looking. It's definitely a growing area.

For me, because it focuses very much at the beginning of the life cycle of a product, and because it's holistic, it is about setting quality in its biggest sense in place well before we start writing code. So that discipline, that specialism -- they have techniques that they're using where they will do work before we get as far as an IT project in order to understand what product or service is needed, how does that have to be supported, what are the human aspects of this, how are people going to interact? And they will do designs and what they call a design sprint which might just be five or 10 days to get to the point of having prototypes and those prototypes then get handed to a team who are good at design and development. But that whole process, through understanding who is going to use this system, what are the personas, what archetypal people can we identify who are going to use this, how do we show diversity in that, how do we make sure we're inclusive? All of those sorts of thoughts. What are people going to be doing with this product? Why would they want to use it? What are they going to use it for? How are they going to interpret it? What emotions are they going to be feeling?

Now all of that then leads to some wire-framing and some prototyping which informs the start of the IT development.

A lot of times, we see UX boiled down to some simple metrics, things like load times and other sorts of technical measures. It sounds like really what we would call UX goes a little bit beyond that. So, how would you classify things like this -- measurements of how a user interacts with an application? Some of what the folks downstairs in the expo are calling UX.

Evans: Yeah. They're a part of it. And this is interesting. This is one of the reasons why UX is wider than usability, because if you only measure usability, you're not measuring the whole user experience. Lots of things like performance, and security, and reliability -- they are important as feeds into that user experience. Now those are not words that ordinary people will use, and they shouldn't have to. It's about the engine and how we deliver it. And I'm aware this is a podcast, so me drawing you a diagram at this point is fatuous, but I need to draw it to describe it to you.

I like to think of it as a pyramid of attributes. And so if at the top, we have the user experience, at the tip of this pyramid, and there we're looking at which attributes are important in terms of things like the emotions, trustworthiness, credibility, playfulness, whatever else, and the haptics, how it feels sensually to people. The human attributes. What is important about this experience as a whole? Underneath that, so a wider part of the pyramid, if you like, or a layer cake, or whatever you want to call it, we have some attributes called "quality in use" attributes. Now those are defined in a standard, ISO 25010 -- such a nerd I am, I know the standard number. And in there, you've got things like the flexibility of that system. You've got safety from risk and harm. And you've got efficiency and effectiveness, the way that people are carrying their tasks out. Now ISO 25010 lays down definitions of those attributes, and it also lays down metrics for each of them. So ways that you can measure them. And you need a mix of those to give the right user experience, but it's going to be a different mix depending on the user experience you want to end up with. So, for example, the level of flexibility you allow in what the user does. For some systems, you want to allow lots of flexibility. Many different users using it in different ways. But for others, you want minimized flexibility because you know you have a specific group of people using it who, to make it feel safe for them, need to feel that they're not going to make mistakes. There's one way to do this, and once they've learned how to do that it's always gonna stay the same. So that's how that sort of fits in.
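
As a rough illustration of two of those quality-in-use measures, effectiveness and efficiency, the sketch below works them out from hypothetical usability-session data: the share of users who completed a task, and the mean time taken by those who did.

```python
# A minimal sketch of two quality-in-use measures: task effectiveness and
# time-based efficiency. The session data below is hypothetical.
sessions = [
    {"user": "A", "completed": True,  "seconds": 95},
    {"user": "B", "completed": True,  "seconds": 140},
    {"user": "C", "completed": False, "seconds": 300},  # gave up partway through
]

finished = [s for s in sessions if s["completed"]]

effectiveness = len(finished) / len(sessions)                    # share of users who completed the task
mean_time = sum(s["seconds"] for s in finished) / len(finished)  # mean time on task, successful users only

print(f"Task effectiveness: {effectiveness:.0%}")
print(f"Mean time on task: {mean_time:.0f} s")
```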

And then below that "quality in use" level, again in the same standard, you have the idea of product quality attributes. Now those product quality attributes are the things that, in IT, and specifically in testing, we know and love. So there's a series of functional attributes like functional suitability, accuracy of response, and so on and so forth. There's performance-related ones like time behavior, response time, throughput on networks, all those sorts of things. There's reliability ones. There's security ones and a whole bunch of other things. Interoperability, maintainability, you name it.

Now some of those are ones that are more obviously person-facing, like security, and some, like maintainability, kind of, only IT people care about, but if you know that the business is going to want that piece of software to change, you need to make it maintainable so you can get the flexibility further up the line. So in terms of what people are measuring using different tools in automation and in the testing they're doing, at this bottom level, you need to measure performance, security, blah, blah, blah, blah, blah, in order that you have a foundation on which you can make sure you've got the quality in use, which gives you a foundation on which you can make sure you've got the user experience. So it's kind of like we're putting the building blocks together. You can't call that UX in my view, but it is a contributor towards it.
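
At that bottom, product-quality level the measurements are the most mechanical. As one hedged example, the sketch below samples response times against a hypothetical endpoint and reports a rough 95th-percentile figure, the kind of time-behavior number that feeds the layers above.

```python
# A minimal sketch of one product-quality measurement: response time.
# The endpoint URL is a hypothetical placeholder.
import time
import urllib.request

URL = "https://example.com/api/orders"  # hypothetical endpoint

samples = []
for _ in range(20):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()   # issue the request and read the full body
    samples.append(time.perf_counter() - start)

samples.sort()
p95 = samples[int(0.95 * len(samples)) - 1]  # rough 95th-percentile latency
print(f"p95 response time: {p95 * 1000:.0f} ms")
```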

Sure. You need even more steps further up the pyramid in order to get the full user experience.

Evans: Absolutely.

Now I would imagine, as you go further up toward the top, these are things, like we say, that are more difficult to measure. Maybe they fall a little bit more outside the scope of what IT would care about. Is that fair to say? I mean, is it more on the business side that would want to collect these additional metrics?

Evans: I think IT should care about it, and I think testers should care about it, but you will frequently find that it is actually the UX designers who go and do that testing, and they might have focus groups. When they talk about user testing, what they're talking about is bringing in groups of people and observing what they do with the product and then asking them questions about what that was like. And there's a very particular way that they do that. So it is a specialist form of testing. I think if we're unaware of that when we're down in the technicalities, we're missing a trick, because there are things we could be thinking about which would preempt potentially some of the problems that people might have at that top level. And in particular, I think we don't think enough about the diversity of people in the way that the UX designers do. Does that make sense? We're doing testing. We think this works for us, and we think it's obvious because we're IT people, and we think that things are intuitive, for example, and they're actually not.

So, I don't know if this fits in quite with what you're saying, but what's coming to mind is a piece of work that was done by the [Organization for Economic Cooperation and Development]. It was a survey of people, working people aged between I think it was either 16 or 18 and 60 to 65-ish, but it was adults, not too old, not too young, in the 33 richest countries in the world. And they were looking at people's ability to use computers, and IT, and software, and so on and so forth. A quarter of those people, that's people in work in the richest countries, could not do anything with a computer, had just never used one, didn't know how to, were not connected in any way, weren't using the internet, all the rest of it. Then you've got another, about a quarter to a third, who could do very, very simple tasks indeed. And then another quite big chunk who could do slightly more complex tasks. And then there's this tiny sliver, which is about 3% to 5%, which is where IT people sit and other people who we would see as being technically literate. And, of course, when you talk to IT people, there's an assumption that everybody's connected, everybody's got a mobile phone, everybody's got a smartphone, everybody's constantly on social media, everybody's doing online banking, everybody's buying stuff online -- and it's just not true.

There's a chunk of the population that doesn't have a bank account at all. There's a chunk of the population that is not connected digitally, doesn't have mobile coverage in the areas they're living and working, doesn't have a broadband connection. There are people who are simply not interested. And when you think that's amongst people who are in work, and then you've got all the people who are unemployed, homeless or otherwise disenfranchised, and then you've got all the people who aren't in the 33 richest countries, and you've got the older people and the younger people. You're kind of thinking, "Wow. You know, we really are just building things for that tiny sliver of people."

I'm sure it must feel like that sometimes, right?

Evans: Yeah. And then when you think about, if we're getting all our feedback on our success through Twitter, and Instagram, and stuff, we're in this little bubble where we're self-fulfilling.

So, yes, we should be thinking about this because even if somebody else in our organization is actually doing that work, that testing, we should be engaging with them to find out what they're doing in early stages and to get involved with observing that testing to make sure that our preconceptions are met. But also because if you think about the testing mindset, we could take that back right into that UX design phase and be sitting there saying, 'Are these all the personas? Have we thought about diversity enough? What else would that person want? Do people really do this? How many people are still using that type of device, or operating system, or whatever? Is that now too complex? Have we put too much functionality in? Is this really what we want, what our users want? How are people going to feel about that?' Which is kind of the questions the UX people are asking already, but I think having an extra mindset in this questioning would help.

Some of those top-level things we talk about, I would imagine, cannot get as easily folded into a feedback cycle as some of the stuff on the bottom -- the product quality kinds of metrics that are collected immediately. Do we have a sense of how often some of these top-level things are collected versus the UX metrics that we tend to hear about more frequently?

Evans: I think it's becoming increasingly the case that this happens. For some organizations, it's been going on for decades. There's a company called Philips who make household devices, and for decades they've had a house in the Netherlands, and they've put the latest whatever they're designing in there, and people go and live in the house and are filmed, like, "Oh, so they did that with it. Right, okay. Oh, they couldn't find that button." So some organizations have been engaged in this for a long time. In terms of it around IT, the bigger organizations, if you look at how Apple operates, for example, they're very focused around -- Steve Jobs said, 'You put the person first. The technology follows on from who the person is.' So the people in those organizations are driving it and probably living it and breathing it.

I think a lot of organizations who made in-house software for a long time even thought that usability was not of importance, because people had to use their software. Whereas in fact, what we're seeing is increasing evidence that if people are forced to use your software, it becomes even more important. Where am I going with this? Because you asked about how often did I think they were measured, and I've done a politician thing, sweeping off elsewhere. I think, probably, it used not to be measured very much. It's being measured more. I am finding more people talking about it. The very fact people are claiming it means it's being talked about more. I think, as usual, people are going to look for things that are easy to measure rather than the right things to measure.

However, the UX community is growing and thriving, and they will be pushing for these ways of doing it. And if they're going to use tools, they're going to be using tools that will be coming from the biomedical, psychological sorts of areas and applying them in a different sort of way.

These kinds of measurements will be much more technologically enabled in the next few years, I would imagine, right? Especially as we're talking about haptic response and as the internet of things becomes more prevalent too, I imagine these things will be more ubiquitous.

Evans: If you look at what's being researched in that community, you look at the toolset they've got, they have a good toolset already to do a lot of this. I had a very interesting conversation with a UX designer a couple of years ago, and I was talking about the software testing toolset, and he said, 'Yeah, I think the UX testing toolset is much better because we just wouldn't put up with the sorts of interfaces and the sorts of tools that you're using in software testing.' I thought that was very striking. So they have a toolset. It's a rich and powerful toolset. And it enables them to do these measures really well and as frequently as they want. But, you know, if you're going to do user testing with real people, you might have 10 people coming and each test might take a couple of hours, because you've got to get somebody to do something and then interview them afterwards. And then actually, as I know for some things I'm doing at the moment, just going through the videos and the recordings of what people have done and looking for the key points, all of that's very time-consuming. It's qualitative. It's hard work to do.

And if you compare that with measuring, say, analytics of website or app use, that's quite quick, and it's continuous, and all the rest of it, but it's also kind of self-fulfilling to me, because it's telling you what the people who are already using that are doing, and it's telling you what people are feeling forced to do. It isn't necessarily telling you what they would do if you gave them a choice, because you can only find that out by asking them -- and that's time-consuming.

So, you've got a tradeoff there between quick analytics, which will only tell you so much, and investing some real time and work in it.

Maybe this will be the last question I ask you then. Could you make the case, or try to make the case, for an organization that might not believe it needs to go all the way up the pyramid to collect these sorts of metrics, to do that hard work that we're talking about? Everybody wants to move faster. Everybody's adopting Agile or DevOps, or has it somewhere on their roadmap. So, how do you convince an organization like that to take a step back and commit the time to pull in these metrics, which will not only give them a response to their app that is unique and fully thought out, but will also set them up for the long term, when these metrics might be more readily available?

Evans: There's two things I point to. One of them is actually conversations out on the blogs around user experience, and indeed people are talking about customer experience saying, 'This is now the real differentiator for organizations.' If you aren't providing a good user experience, people will go somewhere they're getting a good user experience. It's commercial now. It's a commercial imperative. So from a business point of view, if you don't do this, you will lose customers.

The second part of it is actually something that came out of Jeff Payne's keynote yesterday. Did you see that?

Yeah.

Evans: And he was talking about the levels of maturity, and did you see how the amount of automation went down as organizations moved from low maturity up to middle maturity?

Right, it's not what you would necessarily guess.

Evans: Yes. But I started thinking, 'Yeah, I can understand that.' And Jeff then talked about this and said, 'Yeah, people try and go faster.' So they try and automate everything and then they realize the quality's gone down, and they've got more problems. And they have to stop, and step back, and do some more things manually, and think about it. And then they work out what it's really worth automating, but there [is] still stuff there that's done manually.

So my suggestion would be, if you take that as a pattern that organizations will go, 'Right. Yeah, we can measure UX just by doing analytics in this tool and [with] a number of tabs, number of key presses. Job done, tick. Oh, hang on. We're still losing customers. What's going on here? Right, okay. We need to go to a usability lab. We need to get in a UX designer, do all the bits of the pyramid. We've slowed ourselves down. We found out now what the problems are.' And as you move through UX maturity, you move from saying, 'Oh, these stupid users,' through saying, 'Why are they having problems?' to, 'We see what the problems are. Now we know how to prevent them.' Then you get into the UX process. Then you know which bits you can automate and what tools you need, and which bits you're doing by hand still.

So, I think it's that maturity thing we're going through. And I guess there'll be casualties on the way. There'll be organizations who have gone so far down the route of saying, 'We want to do this as fast as possible,' that they will just lose customers because the customers are not happy.

When you look at it again as a human experience, continuously deploying automatically sounds brilliant from an IT point of view, but somebody the other day said to me, 'Oh, we don't have to think about the customers anymore. They don't get a choice. We just need to deploy it.' From the customer's point of view on the other end, I can think of applications I use, and I'm sure you can -- every time it comes up with a little thing saying, 'Oh, just wait while we upgrade this to improve your experience.' You're going, 'You're not. You are just not improving my experience. First of all, I wanted to do something just then and secondly, I know that what you're about to deliver me means I'm going to have to go and find things again because you've just decided to redesign the interface for some glamorous reason of your own. I will not have the flow through my work. I just wanted to do this now. This task was important to me. Who can I go to instead?'

So, I think organizations will get pushed towards this because the user experience is going to be the differentiator and ordinary human beings, they don't want their life changed fast. They want to be able to do their work fast and efficiently. They don't want their world changed fast.

You could certainly say they're pretty opposed to that, generally speaking. And it just seems like another clash of quality and speed, which is the kind of thing that we talk about at conferences like this. It's interesting to see how people deal with these things.

Evans: Yeah.

Great. Isabel, thank you so much for joining me.

Evans: Thank you for having me.
