
Designing human-centered IoT interactions: Break out of the glass box

The internet of things. The phrase suggests an object connecting to and sharing information with other objects … but it doesn’t seem to factor you, the human being, into the mix. In order for IoT to be meaningful, it must include the human touch. This sense of human touch currently involves poking a finger at the touch-glass medium of a smartphone or screen-based device. Such poking is extremely common, but hardly ideal. We devote a ridiculous amount of time to staring down our omnipresent slabs of glass, an action that regularly disconnects us from our physical environments. Our phones have become the “universal remotes” of the connected domain, and they stand in the way of more natural interactions with our environment. As a designer and lover of objects, I know it’s important to develop more intuitive and instinctual, even inspirational, connections with the connected things around us. If we push beyond the glass console interface, we can create more of a direct relationship with, and trust in, these objects.

So, how do we break out of the glass box? We start by analyzing our essential communication instincts and designing interactions that work with human behaviors rather than against them. When you analyze our relationship with our devices and environment, it becomes obvious that when commanding any combination of things, we want to understand our context at that moment, have full control over the object and enable collaboration with other individuals and objects. Once we comprehend these three C's (context, control and collaboration), we can unlock more sophisticated and personalized ways of interacting with IoT. Doing so will broaden our understanding of the device, create more trusted relationships with IoT and lead to deeper user commitment.

Let’s define each of these human instincts:

  • Understanding context
    People desire interactions that understand their present context. Obviously, gesture controls are not favorable when driving a vehicle and voice commands are challenging in a crowded public space. So, people want the ability to modify the interaction based on changes in their context (location, mood and so forth). Also, context-awareness is important in the discovery of smart objects, recognizing the possible relationships they have with one another.
  • Having control
    People want to command the objects they interact with and understand the larger system of which each object is a part. They may wish for “superpowers” that allow them to feel in control of the environment around them at every moment. They want technology to enhance their abilities as human beings rather than usurp their control, and to command their domain without constantly having to learn new technological protocols. This feeling of instant expertise is becoming more and more important as the rapid pace of tech development places many in perpetual catch-up mode.
  • Enabling collaboration
    People want to conduct their orchestra of connections and connected devices. That means collaborating not only with a multitude of surrounding objects, but also with colleagues, friends and family, including all the devices they control. This dialogue with the people in their lives and the devices within their daily routines needs to be enriching, productive and non-judgmental.
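To make the first instinct concrete, here is a minimal, hypothetical sketch of context-aware modality selection: picking the least intrusive input method given the user's situation, echoing the driving and crowded-space cases above. The `Context` fields and `choose_modality` function are illustrative assumptions, not any real IoT API.

```python
# Hypothetical sketch of context-aware modality selection.
# Context fields and choose_modality() are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Context:
    driving: bool = False          # gesture controls are unsafe at the wheel
    noisy: bool = False            # e.g., a crowded public space
    hands_free_needed: bool = False

def choose_modality(ctx: Context) -> str:
    """Pick the least intrusive input method for the current context."""
    if ctx.driving:
        return "voice"             # keep hands and eyes on the road
    if ctx.noisy:
        return "gesture"           # voice commands fail in a loud room
    if ctx.hands_free_needed:
        return "voice"
    return "touch"                 # fall back to the familiar glass screen

print(choose_modality(Context(driving=True)))  # voice
print(choose_modality(Context(noisy=True)))    # gesture
```

A real system would, of course, infer context from sensors rather than explicit flags; the point is that the interaction adapts to the person, not the other way around.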

Building off a foundation of these three instincts, we can start to imagine more meaningful interactions with IoT. Tech is endlessly evolving, but these instincts endure, so they can anchor long-lasting interaction behaviors that fulfill people’s functional and emotional needs. Let’s imagine how a few of these technologies could provide a more human-oriented approach to achieving our instinctual needs.

Voice and gesture (today)

As this technology develops, voice combined with gesture will allow you to acknowledge specific objects and have quick and precise control of them. Turning on a particular light within a connected home could be as easy as pointing to that light and speaking the command. Performing more detailed interactions might be as simple as signifying a dimming command with your hands and then gesturing, with a rotating knob motion, to turn the light up or down. We see plenty of products that use the user’s voice to control the device, and promising technologies, like Google’s Project Soli, that could provide a number of touchless gestures for virtual interaction. Voice combined with gesture promises to become a deeper and more humanized method for interacting with our devices, especially when you consider how much smaller our screens are getting.
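The light example above can be sketched as a simple fusion of two input streams: the gesture supplies the target, the utterance supplies the intent, and a knob-style rotation supplies the amount. The `fuse_command` helper, the device name and the degrees-to-brightness mapping are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: fusing a pointing gesture with a spoken command.
# fuse_command(), the device name and the rotation mapping are illustrative.

def fuse_command(pointed_at: str, utterance: str, rotation_deg: float = 0.0) -> dict:
    """Combine a gesture target, a voice intent and a knob-style rotation."""
    utterance = utterance.lower()
    if "dim" in utterance:
        # Map a rotating-knob motion to a brightness change (1% per 3.6 degrees).
        delta = round(rotation_deg / 3.6)
        return {"device": pointed_at, "action": "dim", "delta_percent": delta}
    if "off" in utterance:
        return {"device": pointed_at, "action": "off"}
    if "on" in utterance:
        return {"device": pointed_at, "action": "on"}
    return {"device": pointed_at, "action": "unknown"}

# Point at the reading lamp and say "turn it on":
print(fuse_command("reading_lamp", "Turn it on"))
# Signal a dim, then rotate 90 degrees to raise brightness by 25%:
print(fuse_command("reading_lamp", "dim", rotation_deg=90))
```

In practice, keyword matching would give way to proper intent recognition, but the structure (target from one channel, intent and magnitude from others) is the essence of multimodal control.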

Projected interfaces (tomorrow)

Interfaces are being freed from tiny screens to inhabit our three-dimensional context. They will soon be projected over our field of view and integrated with objects in real space. Whether it’s a wearable device or a projection onto a surface close to you, advances in augmented interfaces will allow people to scan the room, recognizing and controlling devices within the network. An overall AI layer could suggest new types of device connections and skills, allowing people to orchestrate their lives more meaningfully. Your coffee table could turn into an entertainment console when needed, and your kitchen counters could provide a temporary surface for dynamic controls of the multitude of appliances within the room. Projected technology is becoming more and more available, with companies like Lightform offering tools to build augmented reality experiences that fit the 3D contours of your environment. Projected interfaces will allow for a more natural interaction, in both the proximity of the interface and the dynamic comprehension of the context at hand.

Brain-computer interface (the future)

Brain-to-thing communication is evolving fast, and one day soon your brain command could remotely control the sensing and actuation of IoT devices. This will be a direct communication pathway between your mind and the device, eliminating the need for today’s delivery methods. Someday you’ll be able to focus on an object and your thoughts will be translated into direct physical actions. Right now, it’s fascinating to see universities like Johns Hopkins studying the ability of patients with spinal cord injuries to control prosthetics and other devices through brain activity. Although this technology is further down the road, it will be very powerful. We’re talking about a direct cognitive interaction, skipping the middle step of having to use your hands and/or voice, to complete a command.

Test and iterate

So, which technology is appropriate for which user function? It’s almost impossible to reliably predict how people will react to a design. This is why user testing is so important: not only to gauge how natural an interaction feels to a variety of users, but also to understand whether users are actually ready for it. All the technologies outlined above must be evaluated to prove they offer a more humanized form of interaction. As automation and connectivity increase, people will expect the objects around them to respond to more natural interfaces: touch, voice and even their mere presence in a room. Humans are creatures of habit, and the best interfaces recognize learned behavior while catering to our essential human instincts. As technology integrates more tightly with the human senses, we’re laying the foundation for a future in which humans and computers are more closely united than ever before. Let’s design a future in which people put away their screens and interact directly with the world around them.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
