
Winning the voice wars: Making the human-machine interface work

Fueled by Amazon, Google and Apple at the recent CES, 2018 will be the year of the voice wars, with companies developing a myriad of dazzling voice integrations set to change the way we interact in a digital world. One in six Americans now owns a smart speaker, according to research from NPR, and Gartner has stated that by 2020, 30% of web browsing will be driven by voice.

So what will this voice-driven world look like?

  • The acceptance of voice as an interface will accelerate the deployment of smart home devices. Expect to see a lot more voice-controlled lights, heating, TVs and washing machines.
  • Voice-controlled games are coming. Hardcore gamers playing high-reflex games will use voice recognition for collaboration and contextual information, while social gaming will increasingly be voice-driven.
  • Voice recognition will be key to driving the adoption of telemedicine; however, this will take longer, as the tolerance for failure will be almost zero given the potential health implications.
  • However, in an increasingly voice-driven world, noise pollution will become a growing issue. Personal electronics have already generated noise complaints, and voice interfaces are likely to add to them. Noise pollution has started appearing on government agendas, and voice recognition may be what finally drives legislation.

So what is voice recognition?

Voice recognition as a technology is relatively simple; the real challenge is that by giving computers a more human way of interacting, people expect them to behave a lot more like humans, with all the flexibility that implies.

The fundamental change is the shift of responsibility for making the human-machine interface work. Traditionally, the human was responsible for adapting, which is why most enterprise IT systems require specific training and instruction to use. Voice puts computing into a much more natural setting, and with that comes the assumption that it is the machine that must adapt itself to the human.

Take the case of an e-commerce site. When individuals buy a product on a website, they fill in form fields with the relevant information in a structured, predictable order. With voice, users volunteer the same information in whatever order feels natural, and the software can easily become confused.
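To make the contrast concrete, here is a minimal, purely illustrative sketch. The slot names are invented and the keyword matching is deliberately naive (no real speech or NLU library is involved); the point is that a voice interface has to accept whatever details the speaker volunteers, in any order, and prompt only for what is still missing.

    import re

    # Hypothetical sketch: naive keyword matching stands in for a real
    # speech/NLU stack, and the slot names are invented for illustration.
    REQUIRED_SLOTS = ("product", "size", "quantity")

    def extract_slots(utterance):
        """Pull whatever order details appear in a free-form utterance."""
        text = utterance.lower()
        slots = {}
        if m := re.search(r"\b(\d+)\b", text):
            slots["quantity"] = int(m.group(1))
        if m := re.search(r"\b(small|medium|large)\b", text):
            slots["size"] = m.group(1)
        if m := re.search(r"\b(t-shirt|hoodie|mug)s?\b", text):
            slots["product"] = m.group(1)
        return slots

    def next_prompt(slots):
        """Ask only for what is still missing instead of forcing a fixed order."""
        for slot in REQUIRED_SLOTS:
            if slot not in slots:
                return f"What {slot} would you like?"
        return None  # everything needed is present

    # The same request, phrased two different ways, fills the same slots.
    for utterance in ("I'd like 2 large hoodies, please",
                      "A hoodie... large... make it 2"):
        slots = extract_slots(utterance)
        print(slots, "->", next_prompt(slots) or "ready to confirm")

A web checkout form gets that ordering guarantee for free; a voice interface has to earn it by tracking state across conversational turns.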

This requires a different way of thinking when creating and testing software. It means understanding your users far better: how and why people use your product, the different types of people who use it, and then designing all of that into the product itself. With voice-controlled products set to dominate the technology landscape in 2018 and beyond, testing voice recognition will also need to become an integral part of software development. This will drive the shift from looking for code anomalies to testing and monitoring the user experience.
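As a sketch of what that shift could look like in practice, the snippet below is again hypothetical: the toy understand() function merely stands in for whatever speech-to-intent pipeline a product actually uses. Instead of asserting on a single scripted path, it scores how many natural phrasings of the same request the system handles.

    # Hypothetical sketch: understand() is a toy stand-in for a real
    # speech-to-intent pipeline; the phrasings mimic what users actually say.
    PHRASINGS = [
        "turn the living room lights off",
        "switch off the lights in the living room",
        "kill the living room lights",
        "lights off, living room",
    ]
    EXPECTED = {"intent": "lights_off", "room": "living room"}

    def understand(utterance):
        """Toy keyword matcher in place of the real recognition stack."""
        text = utterance.lower()
        if "light" in text and "off" in text:
            room = "living room" if "living room" in text else None
            return {"intent": "lights_off", "room": room}
        return {"intent": "unknown", "room": None}

    def experience_score(phrasings, expected):
        """Fraction of real-world phrasings that end in the right outcome."""
        hits = sum(1 for p in phrasings if understand(p) == expected)
        return hits / len(phrasings)

    print(f"{experience_score(PHRASINGS, EXPECTED):.0%} of phrasings handled")

A release gate or production monitor built on this kind of score catches the "kill the lights" phrasing that a code-level test of the happy path would never exercise.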

If we are to realize the possibilities of a voice-driven world, then significant changes will be required in the way we create and test software and applications. As the major tech players continue to battle for market share, the winner will be the organization that can pivot quickest to address our brave new voice-driven world.

