The general public is poorly informed about the current state of AI technologies, and researchers and the journalists who cover their work are doing a poor job of explaining to people what recent advancements in AI are really about.
That's the view of Zachary Lipton, a professor and researcher at Carnegie Mellon University. And given some of the recent coverage of AI in the popular press, it's hard to argue with the perspective.
In a presentation at the recent MIT EmTech 2018 conference in Cambridge, Mass., Lipton talked about how people with little technical skill and understanding are increasingly becoming influencers in the world of AI, taking advantage of the hype cycle around AI to promote themselves. This, in turn, feeds a media system that gives more attention to splashy quotes than to deep technical explanations.
How many times in the last year have we read headlines about Elon Musk's warnings to humanity about the coming robot apocalypse? Musk's statements seem more informed by Hollywood than any factual understanding of the current state of AI technologies.
Or, think about the number of articles you've likely seen discussing The Singularity, a purely theoretical phenomenon involving machines gaining consciousness that most AI researchers say is at least a century away -- if it's ever going to happen.
Current state of AI has little to do with Hollywood's ideas
These topics are a far cry from what today's AI is really all about, which is machine learning. The most advanced applications, from natural language generation to facial recognition, are driven purely by pattern matching.
Developers have indeed found astoundingly clever ways to apply pattern matching to a wide array of tasks, but this technology is never going to think for itself, reason or possess genuine emotion. It's certainly never going to independently decide to turn the launch keys on the world's nukes. Yet, these are the sorts of things most people think about when they think about AI.
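To make "pattern matching" concrete, here is a minimal sketch (my illustration, not anything from Lipton's talk) of a nearest-neighbor classifier, one of the simplest machine learning techniques. It labels a new data point by finding the most similar previously seen examples and copying their majority label -- statistical pattern matching, with no reasoning or intent anywhere in it:

```python
from collections import Counter

def predict(examples, labels, point, k=3):
    """Label a point by majority vote among its k nearest labeled examples."""
    # Sort all known examples by squared distance to the new point.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(ex, point)), lbl)
        for ex, lbl in zip(examples, labels)
    )
    # Count the labels of the k closest examples and return the most common.
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy "training data": two clusters of 2-D points with hypothetical labels.
examples = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]

print(predict(examples, labels, (0.5, 0.5)))  # → a
print(predict(examples, labels, (5.5, 5.5)))  # → b
```

Real systems use far more elaborate models, but the principle is the same: output is a function of patterns in past data, nothing more.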
Lipton said researchers need to do a better job of clearly describing what their technologies are doing. Rather than just slapping the term AI on something and waiting for venture capital funding to come rolling in, they should articulate specifically what their models do and how. And journalists need to stop hyping every research paper with AI in its title as a sign of burgeoning machine intelligence.
Lipton loses me somewhat when he talks about how people with expert-level development skills need to be the drivers of AI discussions. It implies that, to have a respectable opinion about AI, one needs to have a Ph.D. from a top-flight research university.
This is untenable in a world where machine learning is increasingly being injected into sensitive areas of daily life, like credit decisioning, policing and education. I may not know how to train a neural network, but I know enough to feel wronged when I'm denied a line of credit because a model looking at factors beyond my control gives the bank a recommendation that is only somewhat interpretable.
And the flip side is that people with technical expertise don't always possess the firmest grasp of how their models play out in the real world. I'm sure it was a very knowledgeable person who built the image classification tool for the Google Photos app that classified an African American user and his friend as gorillas.
We've seen, too often, the arrogance of coders who believe their tools are inherently worthwhile and are blind to how they actually affect people. Anointing these people as the arbiters of suitable opinion on the current state of AI seems dangerous. Banishing journalists, who Lipton described as often being "overmatched by slick PR efforts," is no answer.
AI today demands engaged public
Still, the need for a broad range of voices ultimately bolsters Lipton's larger point, which is that a world increasingly run by algorithms demands a public that knows what machine learning is and how it works -- at least in broad strokes. AI technologies will only grow more common and play a larger role in people's lives. If regular people are going to be affected by AI, regular people need to understand it.
If journalists and other laypeople are going to stay involved in the discussion, we do need to do better. It's not enough to just talk about AI and how it's going to benefit a certain industry or area of life. We need to be specific about what the technology is and how it works. Most of all, we need to stop anthropomorphizing strings of code that are capable of little more than making predictions based on observed patterns.