
Former Google CEO outlines dangers of generative AI

Mitigating risks from generative AI tools such as ChatGPT means involving humans in final decision-making and establishing guardrails.

The rise of generative AI technologies such as ChatGPT will create new opportunities and risks, from new ways to spread misinformation to the creation of dangerous biological viruses.

ChatGPT and other large language models that can craft speeches, essays, software code and more will drive advances in education, healthcare, science and many other fields, according to former Google CEO Eric Schmidt, who spoke during an online discussion with the Carnegie Endowment for International Peace earlier this month. Schmidt now serves on a variety of boards and is a visiting fellow at MIT.

Schmidt said the technology will likely change how humanity perceives the world. He pointed to a podcast, created a month earlier with generative AI tools, that simulated an interview between Joe Rogan and Steve Jobs, who died in 2011 and was a close friend of Schmidt's. Listening to the podcast, he said, was a "shock to my existence, to my system."

"When I heard Steve's voice synthesized by a computer as though he was alive today, talking in his style with his insights, I almost started crying," he said.

Schmidt said that while the tools can do a significant amount of good, they can also cause harm -- something he said he witnessed through his experience as Google's CEO.

"There are people who will use this for terrible outcomes," Schmidt said. "We don't have a solution in society for this."

Schmidt said that's why guardrails are needed for AI.

Risks of generative AI

Schmidt said there are three significant dangers that could result from generative AI, the first being the creation of killer biological viruses.

"Viruses turn out to be relatively simple to construct," he said. "An AI system using generative design techniques, plus a database of how biology actually works, and a machine that makes the viruses, which do exist, can start building terrible viruses."

Second, bad actors can use generative AI tools to create and target misinformation, which Schmidt said could lead to violence.

Lastly, Schmidt said generative AI can be dangerous when it makes decisions faster than humans can oversee them, particularly in critical situations.

"The systems I'm describing have the interesting property that they make mistakes," he said. "You don't want these systems flying the airplane -- you want it advising the pilots. You don't want them in something involving life-critical things, because they make mistakes."

Schmidt said he asked ChatGPT to write an essay about why all skyscrapers more than 300 meters tall should be made of butter. The tool complied, describing the benefits of building skyscrapers out of butter.

He submitted the same query two days later and ChatGPT changed its mind, crafting an essay on why butter isn't strong enough to be used when building skyscrapers. Part of the problem with large language models today is that they "hallucinate," Schmidt said.
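
The experiment is easy to reproduce. Below is a minimal sketch, assuming the OpenAI Python client and a gpt-3.5-turbo-class model (illustrative choices; Schmidt did not name a specific API), that submits the same butter-skyscraper prompt twice. Because the model samples its output, the two essays can land on opposite conclusions.

    # Minimal sketch: the same prompt, submitted twice, can yield
    # contradictory essays because the model samples its output.
    # Assumes the OpenAI Python client (pip install openai) and an
    # API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    PROMPT = ("Write a short essay about why all skyscrapers more than "
              "300 meters tall should be made of butter.")

    for attempt in range(2):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # sampling enabled, so runs will differ
        )
        print(f"--- Attempt {attempt + 1} ---")
        print(response.choices[0].message.content)

Lowering the temperature makes runs more repeatable, but it does not make the answers true; the deeper problem Schmidt describes is that the model will argue either side with equal confidence.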

"You can give it a series of questions which will cause it to believe that up is down, and gravity doesn't exist, and left becomes right -- things which are nonsensical to any human being," he said. "We don't fully understand the nature of how these large language models work, and we know they make mistakes."

Addressing the risks

When using generative AI tools, Schmidt said, businesses and consumers should treat them as advisors and leave any final decision-making to humans.

"Whenever there is generative design, I want there to be a human who could be held responsible," he said.

AI's ability to cause harm means it shouldn't be released without some restrictions in certain areas, Schmidt said.

As an example, Schmidt recalled that when he was in college, an undergraduate student designed a trigger mechanism for a nuclear warhead, and the design was immediately classified. He said governments and developers alike will need to identify similar situations in which new technologies, or information those technologies generate, might need to be classified and withheld from release.

"My industry has taken the position that this stuff is just good, we'll just give it to everyone. I don't think that's true anymore -- it's too powerful," Schmidt said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
