Chatbots are growing in influence, and developers are now applying artificial intelligence to improve chatbot performance by deepening their personality and making them more lifelike.
But this raises some problems. Acceptance of chatbots remains a challenge, and some attempts at building a chatbot with personality have been received very negatively, such as the ethical questions raised by Google Duplex imitating humans. Along with the business and ethical challenges, there's also the technical issue of controlling AI chat learning, as shown in the now classic example of Microsoft Tay quickly learning racism, misogyny and antisemitism. There's still an opportunity for chatbots utilizing AI, but the question of how to mesh technology and ethics remains open.
Throughout the history of technology, its impact on society and the ethics of that impact have always been up for debate. I'm sure the person who invented the wheel was told he or she would be destroying jobs of bearers. Today, we discuss AI and ethics in multiple ways, including real-world bias (as opposed to statistical bias), privacy of data, visual identification and more. When technology directly communicates with individuals, the ethics come to the forefront. In part, that is based on how people view technology.
"Artificial intelligence ethical issues aren't fundamentally different from those facing other technologies," said Bernhardt Trout, professor of chemical engineering at MIT. Trout teaches an ethics class for engineering students and recently created an MIT Professional Education course on the ethics of AI. "One critical point," continued Trout, "is the perennial discomfort with new technologies. Organizations leveraging AI must be cognizant not only of good ethical approaches, but of the expectations and ethics of the audience being addressed by AI-driven chatbots and other technologies."
That is directly applicable to the problems related to the reactions people have to chatbots. How, then, to utilize the potential benefits of AI while not alienating a questioning audience?
The Jeff Bot
What's needed is a use case, a place where effectiveness and ethics can be implemented on a focused yet large enough scale to address concerns. For that, we can turn to sports.
When you talk about professional sports, off the field one thing is very clear: Sports fans are ardent. Shall we say, sometimes, rabid? Fans have opinions that they want to share with each other.
Social media has massively increased the reach of fans, players and teams. All businesses involved in the sports industry are looking at how to utilize social media involvement to expand and improve brand image.
Sky Sports, which is sort of like the ESPN of Europe, wanted to be more engaged with soccer fans on social media, but it didn't want to hire the thousands of people that might be needed to keep an eye on all the sources of commentary, including Facebook Messenger, Skype, Slack and Sky's own internet properties.
So Sky Sports began to talk with companies about monitoring tools and eventually turned to GameOn, which uses AI to improve chat. The technology provides the ability to scan large numbers of social media posts. It uses natural language processing (NLP) for sentiment analysis and intent mapping.
As mentioned above, companies are concerned about the maturity of machine learning and natural language generation (NLG) for response, and nobody wants to be the next Tay. Therefore, Sky made the decision to use canned responses.
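To make the pattern concrete, here is a minimal, hypothetical Python sketch of classifying an incoming fan message by intent and sentiment, then routing it to a pre-approved canned reply. The intents, keywords, and reply text are all invented for illustration; real systems like GameOn's use trained NLP models rather than keyword matching.

```python
import re

# Pre-approved replies, keyed by (intent, sentiment). Because the text is
# canned, there is no risk of the bot generating an off-brand response.
CANNED_REPLIES = {
    ("score_query", "neutral"): "Check the live match centre for the latest score!",
    ("complaint", "negative"): "Sorry to hear that. Tell us more and we'll pass it on.",
    ("praise", "positive"): "Glad you're enjoying the coverage!",
}

# Toy keyword sets standing in for a trained intent classifier.
INTENT_KEYWORDS = {
    "score_query": {"score", "result", "winning"},
    "complaint": {"terrible", "awful", "rubbish"},
    "praise": {"love", "great", "brilliant"},
}

NEGATIVE_WORDS = {"terrible", "awful", "rubbish", "hate"}
POSITIVE_WORDS = {"love", "great", "brilliant"}


def classify(message: str) -> tuple[str, str]:
    """Return (intent, sentiment) for a message via keyword lookup."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    intent = next(
        (name for name, kws in INTENT_KEYWORDS.items() if words & kws),
        "unknown",
    )
    if words & NEGATIVE_WORDS:
        sentiment = "negative"
    elif words & POSITIVE_WORDS:
        sentiment = "positive"
    else:
        sentiment = "neutral"
    return intent, sentiment


def reply(message: str) -> str:
    """Pick a canned reply; fall back to a safe default if nothing matches."""
    return CANNED_REPLIES.get(classify(message), "Thanks for your message!")
```

The design point is that the AI only decides *which* vetted response to send, never *what* to say, which is what keeps a canned-response bot safe from Tay-style failures.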
"Artificial intelligence is key to the future of providing corporate interactions with the market," said Kalin Stanojev, co-founder & chief product officer at GameOn. "AI, however, is a very new technology and is still growing. It's important for companies to understand where it can be applied in a business process and where other techniques are more valuable. While NLP has a strong place in addressing social media, machine learning still needs to advance before NLG is more widely accepted in branded conversations."
However, evidence suggests that people are more polite to personalized chatbots than to corporate-labeled chat accounts. To develop a chatbot with personality, Sky Sports turned to one of its most well-known commentators. Jeff Stelling is a pundit covering the English Premier League, and he worked with GameOn to create replies that had his unique tone rather than a corporate one.
To avoid ethical issues related to informing people they are talking with a bot, Sky made the decision to brand the chatbot the "Jeff Bot." This bot can address questions and comments in Mr. Stelling's voice, while making it clear that the responses aren't coming directly from Jeff.
"Chatbots are very useful, but, as with anything, they can be misused," said Alex Beckman, co-founder and CEO at GameOn. "We were able to work with Mr. Stelling to capture his voice, while openly producing a chatbot. The result was an engaging conversation helping to drive Sky Sports' brand."
Lessons being learned
Companies are still struggling to understand how to better build relationships with the market in today's internet-connected landscape. That includes the key aspects of social media chats and interaction on websites. Chatbots are critical to creating more immediate communications, but there are risks to deploying a chatbot with personality.
What is clear from the lessons emerging from companies as large as Google and as small as GameOn is that bots must operate in a more transparent environment. Customers want to be engaged, but they want to know with whom they are engaging. Personality in a chatbot helps, but it must be clear the personality is artificial.
It is not sufficient to slap a chatbot up on a website or social media platform, just as with other types of marketing communications. How the engagement happens matters.