
Artificial intelligence and machine learning forge path to a better UI


Carnegie Mellon University's Chris Harrison talks about the future of the user interface in this episode of 'Schooled in AI.'

In the fourth episode of Schooled in AI, Carnegie Mellon University's Chris Harrison, director of the Future Interfaces Group, talks about how he and his students are using artificial intelligence and machine learning to redefine human-computer interaction.

Here are three highlights:

  • Skinput and OmniTouch, two projects that strive to transform nontraditional surfaces into input surfaces, give a glimpse into Harrison's cutting-edge UI research.
  • Synthetic sensors could redefine how companies and customers go about making a room "smart."
  • CIOs should expect the future user interface to be "multimodal," where a task will determine the type of interface that's needed to get a job done.

To learn more about Harrison and his research, listen to the podcast by clicking the player above or read the transcript below.

Transcript - Artificial intelligence and machine learning forge path to a better UI

Hey, I'm Nicole Laskowski, and this is Schooled in AI. We've all had a less than desirable interaction with a user interface -- and it can leave a bad taste in your mouth. Here's a simple way to look at it -- and I'm borrowing this analogy from a column I read recently in a publication called Usability Geek: A bad UI is equivalent to a cranky receptionist.

As so much of what we do becomes app-ified, the UI is taking on a prominent role not just for developers, but also for employers and employees, because a cranky receptionist could affect the quality of the work that employees do -- and their happiness while they're doing it. And that's where researchers like Chris Harrison come in. He's a computer scientist and the director of the Future Interfaces Group at Carnegie Mellon University. Harrison and his team are on a mission to find new ways for humans to interact with machines.

Chris Harrison

Harrison: What I research is human-computer interaction -- so not just designing computers that crash less or are faster or have databases that are more robust. But directly improving that human interface between the computer and the user and making it not only more efficient, but also enjoyable and blessed with all the other things that we associate with modern technology.

Harrison called UI 'a complex landscape' because it includes everything from classics like the QWERTY keyboard, which is more than 125 years old, to newcomers like augmented reality headsets that layer digital information onto the physical environment. For their part, Harrison and his team are experimenting with the internet of things (IoT), artificial intelligence and machine learning to develop new, way-outside-the-box approaches to UI.

Take a project called Skinput, which projects a screen onto the body -- onto the skin -- that then acts as the input surface.

Harrison: We wanted to have fluid interactive experiences on the go that weren't marred by a small user interface.

Take, for instance, the screen on a smart watch, he said. The smart watch is a powerful computer, but its small screen means a user's interactions with the device tend to be pretty simplistic.

Harrison: And so the idea with moving onto the skin was to provide people this big interactive canvas as a playground, but keep the device small.

Skinput turns the body into an input surface.

Skinput and another project called OmniTouch, which projects screens onto tables or notebooks, aren't just attempts to recast the relationship between a computer's size and its interactive capabilities. They are also attempts to redefine mobile computing. When you're in a conference room and want to share a document with the person sitting next to you, what do you do?

Harrison: You email it, which might go through a server, even though they're sitting four feet across from you. You might have to send the document through an email server in China or Europe or somewhere else.

Why not turn that conference table into an interactive surface where files can be shared, amended, moved around? Harrison called it a 'missed opportunity' that he and his team are currently trying to address.

Harrison: Computing is small today, it can be incredibly small. You can have a computer in an inexpensive tag that you're putting on your food or woven into your clothes. So, we know that computers are small. But what we can't let get small because of human factors -- not computing factors -- is the size of the interface. So, we need to look at clever ways to provide big interactive experiences irrespective of the size of the computer.

Over the last three years, artificial intelligence and machine learning have become a foundational set of technologies that Harrison is incorporating into his research.

Harrison: That's because humans and machine learning are a very natural pairing. Signals from humans are inherently ambiguous. When I make a gesture in my living room, is that for my smart TV to turn on or my stereo, or am I just gesturing as part of a conversation with my family? It's ambiguous.

Humans do a good job of resolving the ambiguity because of social cues and context, but devices don't have that kind of social awareness.

Harrison: And so the way we take the kind of glossy, ambiguous, messy human inputs and make a pretty good prediction about them is to apply techniques like artificial intelligence and machine learning.
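
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline Harrison is describing: summarize an ambiguous input signal -- say, an accelerometer trace from a hand gesture -- as a few features and let a trained classifier judge whether it was meant as a command at all. The features, labels and model below are illustrative assumptions, not details of the lab's actual systems.

    # Hypothetical sketch: deciding whether an ambiguous gesture was a device
    # command or incidental movement. Features, labels and model are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy training data: each row summarizes one gesture as simple statistics
    # over an accelerometer trace (mean magnitude, peak magnitude, duration in s).
    X_train = np.array([
        [0.9, 2.1, 0.4],   # deliberate "turn on the TV" wave
        [1.1, 2.4, 0.5],
        [0.2, 0.6, 1.8],   # casual hand movement during conversation
        [0.3, 0.5, 2.2],
    ])
    y_train = ["command", "command", "incidental", "incidental"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # A new, ambiguous gesture: the classifier returns probabilities, not a hard rule.
    new_gesture = np.array([[0.7, 1.9, 0.6]])
    probs = model.predict_proba(new_gesture)[0]
    print(dict(zip(model.classes_, probs.round(2))))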

The scenario of using technology to make a room interactive also relies heavily on the internet of things. IoT sensors, which can be embedded into conference rooms or office equipment or kitchen appliances, are the backbone for turning something analog into something digital and something digital into something interactive. And while IoT and AI are distinct fields, for Harrison …

Harrison: They're sort of like a big Venn diagram where all of these different things interact. And so a lot of the research we're doing in my lab uses machine learning and AI to resolve ambiguity, but the domain is something like IoT and smart environments because we ultimately want to improve the human experience in those environments. If it doesn't do that, then why are you building the technology?

The combo of the two is the basis for a project Harrison's team developed called the synthetic sensor. It's a Wi-Fi-enabled sensor board that fits in the palm of your hand and plugs into any wall socket. It's made up of a suite of the most common sensors used in commercial and academic settings that essentially track environmental conditions.

Harrison: It absorbs all this data -- so vibrations and sound and light and humidity and about 19 different sensor channels that we capture. And we interpret those signals using [artificial intelligence and] machine learning. And we extrapolate what we call a synthetic sensor.

Rather than rip out analog kitchen appliances and replace them with their 'smarter' counterparts or retrofit an appliance with something like LG's SmartThinQ sensor tag, a single sensor board can collect raw data about the entire room. Machine learning is used to uncover the unique patterns or signatures of the appliances, which enable the sensor board to differentiate between, say, a dripping faucet and a running dishwasher.

Harrison: It's all been virtualized. So, all the developer or the end user has to do is say, 'Oh, I can go into my Google Now or ask Siri, "When will my dishwasher be done?" or, "Is the dishwasher done?"' And it will answer that by drawing on the sensor feed. And the feed is entirely virtualized through techniques like artificial intelligence and machine learning.
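
As a rough illustration of what a virtualized sensor feed could look like in code, the hypothetical sketch below reduces a few raw channels -- vibration, sound level, humidity -- to features and exposes a single learned signal that answers "Is the dishwasher running?" The channel choices, values and model are assumptions made for illustration; the real synthetic sensors project captures about 19 channels and is considerably more sophisticated.

    # Hypothetical sketch of a "synthetic sensor": a few raw environmental
    # channels feed a classifier that exposes one virtual signal
    # ("is the dishwasher running?"). All values and details are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [vibration RMS, sound level (dB), humidity (%)], recorded while
    # the dishwasher was hand-labeled as running (1) or off (0).
    X_train = np.array([
        [0.80, 62.0, 55.0],
        [0.75, 60.0, 58.0],
        [0.05, 38.0, 40.0],
        [0.08, 41.0, 42.0],
    ])
    y_train = np.array([1, 1, 0, 0])

    clf = LogisticRegression()
    clf.fit(X_train, y_train)

    def dishwasher_running(vibration_rms, sound_db, humidity_pct):
        """Virtual sensor: infers dishwasher state from raw channel readings."""
        features = np.array([[vibration_rms, sound_db, humidity_pct]])
        return bool(clf.predict(features)[0])

    print(dishwasher_running(0.78, 61.0, 56.0))  # noisy, humid kitchen -> True
    print(dishwasher_running(0.06, 39.0, 41.0))  # quiet kitchen -> False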

User interfaces like synthetic sensors are developed by rapid prototyping, which means building, as Harrison said, 'the crudest possible experience that encapsulates the vision.' Harrison and his team can put the cheap prototype in front of people to get their feedback and then iterate.

Harrison: In our field where humans have to touch and interact and experience the technology, the feel of a device, the feel of an experience is so important.

And by moving quickly, Harrison and his team can figure out which ideas are and aren't worth pursuing, which is critical for researchers who are trying to sketch out what the future may look like in a field that has become increasingly competitive.

Harrison: A lot of corporate research labs and even nonresearch labs are innovating in our space. Obviously, human-computer interaction and interactive technologies, it's not limited to academia. That may have been true 30 to 40 years ago when most people didn't have computers. But now, everyone is an inventor, even if you're not at a company -- hobbyists are doing amazing things with technology. But it means that everyone's moving fast and we need to move fast as well if we're going to be a part of the innovative cycle that relates to interaction.

Much of Harrison and his team's work is still in the prototype phase, so maybe don't get your hopes up about Skinput just yet. But companies -- and CIOs -- need to pay attention to the thinking behind interfaces like Skinput and synthetic sensors that are being propelled by artificial intelligence and machine learning.

Harrison: What the enterprise, I think, really cares about is the efficiency of the worker. You want to give your employees the tools to do their jobs as efficiently and as pleasantly as possible. And if you disregard the human interface -- and that's probably the biggest part of the equation -- then you're not going to necessarily be the most efficient enterprise or even have happy employees.

One important thing to note is that, while voice has come on the scene as a powerful way of interacting with a machine, the future of the UI isn't going to be dominated by voice or touch alone, Harrison said.

Harrison: I think what you'll see is a divergence of ways to interact with technologies that play to different strengths. There's a reason why we have certain things that have stuck around for a really long time -- like keyboards. So, voice is really good for things like questions and commands. You know, 'Where is the closest pizza?' Or, 'Turn on my smart TV.'

But it's really bad for things that require creation. Like if I want to put together a PowerPoint presentation or edit a spreadsheet, even if I had a screen, it would be incredibly frustrating to do that by voice command.

Instead, the way employees interact with a machine will be dictated by the task they need to get done.

Harrison: And so what I think will happen in future computing interfaces is we'll see an increasing blend of different modalities, what's called multimodal input. And it'll play to the different strengths.

The reality is, this is already playing out today. We use smartphones and smart watches and laptops because we know that one device is better at certain tasks than the others.

Harrison: So, it's not that when the smartphone arrived it replaced all other forms of computing; merely, it added to the ecosystem. And interaction techniques are going to be very similar and have been similar -- they're additive and their relative share changes over time, but it isn't that one thing is supremely better than the other and everything else will die.


Next Steps

Carnegie Mellon University's Louis-Philippe Morency talks multimodal machine learning and mental health