
Examining AI pioneer Geoffrey Hinton’s fears about AI

The 'godfather of AI' claims AI will be misused for political gain and to manipulate humans. His resignation from Google came weeks after tech leaders called for an 'AI pause.'

When prominent computer scientist and Turing Award winner Geoffrey Hinton retired from Google over concerns that AI technology is getting out of control and becoming a danger to humans, it triggered a frenzy in the tech world.

Hinton, who worked part-time at Google for more than a decade, is known as the "godfather of AI." The AI pioneer has made major contributions to the development of machine learning and deep learning, including backpropagation, the algorithm now widely used to train artificial neural networks.
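In broad terms, backpropagation computes how much each weight in a network contributed to the output error by applying the chain rule backward through the layers, then nudges every weight to reduce that error. As a rough illustration only -- a minimal NumPy sketch of the idea, not code from Hinton's research -- here is a tiny two-layer network learning the XOR function:

```python
import numpy as np

# Toy training data: XOR, a classic test case that a single-layer
# network cannot learn but a two-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the output error back
    # through each layer via the chain rule to get weight gradients.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: adjust every weight against its gradient.
    lr = 1.0
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.ravel().round(2))  # should approach [0, 1, 1, 0]
```

The same error-propagation idea, scaled up to billions of weights, is what trains the modern deep learning systems discussed below.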

In his own words

While Hinton attributed part of his decision to retire Monday to his age, the 75-year-old also said he regrets some of his contributions to artificial intelligence.

During a question-and-answer session at MIT Technology Review's EmTech Digital 2023 conference Wednesday, Hinton said he has changed his mind about how AI technology works. He said he now believes that AI systems can be much more intelligent than humans and are better learners.

"Things like GPT-4 know much more than we do," Hinton said, referring to the latest iteration of research lab OpenAI's large language model. "They have sort of common sense knowledge about everything."

The more the technology learns about humans, the better it will get at manipulating them, he said.

Hinton's concerns about the risks of AI technology echo those of other AI leaders who recently called for a pause in the development of AI.

While the computer scientist said he does not think a pause is possible, he warned that misuse of the technology by criminals and other wrongdoers -- particularly those who would use it for harmful political ends -- could become a danger to society.

"What we want is some way of making sure that even if they're smarter than us, they're going to do things that are beneficial for us," Hinton said. "We need to try and do that in a world with bad actors who want to build robot soldiers that kill people."

[Image: Geoffrey Hinton speaks at MIT EmTech Digital about his fears surrounding generative AI.]

AI race and need for regulation

While Hinton clarified that his decision to leave Google was not a response to any specific irresponsibility on the tech giant's part, he joins a group of notable current and former Google employees who have sounded the alarm about AI technology.

Last year, ex-Google engineer Blake Lemoine claimed that the vendor's AI chatbot LaMDA is sentient, able to hold spontaneous conversations and experience human feelings. Lemoine also said that Google acted with caution and slowed down development after he shared his findings with the company.

Even if some consider Google to have been suitably responsible in its AI efforts, the pace at which rival tech vendors have introduced new AI systems has forced the company to scramble in what has become a frantic AI race.

However, both Google and archrival Microsoft may be moving too fast to assure enterprise and consumer users that their AI innovations are safe and ready for effective use.

"They're putting things out at a rapid pace without enough testing," said Chirag Shah, a professor in the information school at the University of Washington. "We have no regulations. We have no checkpoints. We have nothing that can stop them from doing this."

But the federal government has taken note of problems with AI and generative AI technology.

On Thursday, the Biden administration invited CEOs from AI vendors Microsoft, Alphabet, OpenAI and Anthropic to discuss the importance of responsible and trustworthy innovation.

The administration also said that developers from leading AI companies, including Nvidia, Stability AI and Hugging Face, will participate in public evaluations of their AI systems.

But the near-total lack of checkpoints and regulation makes the technology risky, especially as generative AI is a self-learning system, Shah said.

Unregulated and unrestrained generative AI systems could lead to disaster, particularly when people with unscrupulous political intentions or criminal hackers misuse the technology.

"These things are so quickly getting out of our hands that it's a matter of time before either it's bad actors doing things or this technology itself, doing things on its own that we cannot stop," Shah said. For example, bad actors could use generative AI for fraud or even to try to trigger terrorist attacks, or to try to perpetuate and instill biases.

However, as with many technologies, regulation follows when there's mass adoption, said Usama Fayyad, professor and executive director at the Institute for Experiential AI at Northeastern University.

And while ChatGPT has attracted more than 100 million users since OpenAI released it last November, most of them use it only occasionally. Users also aren't relying on it on a daily basis, as they do with other popular AI tools such as Google Maps or Google Translate, Fayyad said.

"You can't do regulation ahead of understanding the technology," he continued. Because regulators still don't fully understand the technology, they are not yet able to regulate it.

"Just like with cars, and with guns and with many other things, [regulation] lagged for a long time," Fayyad said. "The more important the technology becomes, the more likely it is that we will have regulation in place."

Therefore, regulation will likely come when AI technology becomes embedded into every application and helps most knowledge workers do their jobs faster, Fayyad said.

AI tech's intelligence

Fayyad added that just because it "thinks" quickly doesn't mean AI technology is more intelligent than humans.

"We think that only intelligent humans can sound eloquent and can sound fluent," Fayyad added. "We mistake fluency and eloquence with intelligence."

Because large language models are stochastic -- they follow patterns learned from data but include a degree of randomization -- they're built to tell a story, but may end up telling the wrong one. In addition, their nature is to want to sound smart, which can lead humans to see them as more intelligent than they really are, Fayyad said.
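To make "stochastic" concrete, the sketch below shows the basic sampling step most language models use to pick each next token: the model's raw scores are converted into probabilities, and a token is drawn at random in proportion to them. The logit values here are made up for illustration; this is not any particular vendor's implementation.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Convert raw model scores (logits) into a probability
    # distribution, then draw one token index at random from it.
    # temperature < 1 sharpens the distribution (more predictable);
    # temperature > 1 flattens it (more random).
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()   # softmax
    return rng.choice(len(probs), p=probs)

# Hypothetical scores for four candidate next words.
logits = [2.0, 1.0, 0.5, 0.1]
draws = [sample_next_token(logits, temperature=0.8) for _ in range(5)]
print(draws)  # e.g. [0, 0, 1, 0, 2] -- usually the likeliest word, not always
```

That built-in randomness is why the same prompt can produce a fluent, correct answer one time and a confidently wrong one the next.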

Moreover, the fact that machines are good at discrete tasks doesn't mean they're smarter than humans, said Sarah Kreps, John L. Wetherill Professor in the department of government and an adjunct law professor at Cornell University.

"Where humans excel is on more complex tasks that combine multiple cognitive processes that also entail empathy, adaptation and intuition," Krepps said. "It's hard to program a machine to do these things, and that's what's behind the elusive artificial general intelligence (AGI)."

AGI is software that would possess the general cognitive abilities of a human, theoretically enabling it to perform any task a human can do. AGI does not yet exist.

Next steps

For his part, Hinton has claimed that he's bringing the problem to the forefront to try to spur people to find effective ways to confront the risks of AI.

Meanwhile, Kreps said Hinton's decision to speak up now, decades after he first worked on the technology, could seem hypocritical.

"He, of all people, should have seen where the technology was going and how quickly," she said.

On the other hand, she added that Hinton's position may make people more cautious about AI technology.

Using AI for good requires that users be transparent and accountable, Shah said. "There will also need to be consequences for people who misuse it," he said.

"We have to figure out an accountability framework," he said. "There's still going to be harm. But if we can control a lot of it, we can mitigate some of the problems much better than we are able to do right now."

For Hinton, the best next step might be to help the next generation use AI technology responsibly, experts said.

"What people like Hinton can do is help create a set of norms around the appropriate use of these technologies," Kreps said. "Norms won't preclude misuse but can stigmatize it and contribute to the guardrails that can mitigate the risks of AI."

Esther Ajao is a news writer covering artificial intelligence software and systems.
