
AI companies losing public trust in safety

Researchers find that more than half of Americans polled believe AI companies aren't considering ethics when developing the technology, and nearly 90% favor government regulations.

Most Americans think AI companies prioritize profits over safety, and recent missteps have reinforced that perception, experts said.

Bad publicity follows the industry as it presses forward with AI advancements billed as immensely valuable to business and people's lives. The latest example was OpenAI's attempt last week to wow the world with its voice improvements in ChatGPT.

Actress Scarlett Johansson accused the company of copying her voice from the Spike Jonze-directed romance film Her, in which a lonely writer falls in love with an AI companion. OpenAI denied the accusation, and The Washington Post this week reported, citing records, that the company did not copy Johansson's voice.

The controversy came the same week in which Jan Leike, who headed OpenAI's safety team, left the company. Leike said on X, formerly known as Twitter, that the company's "safety culture and processes have taken a backseat to shiny products."

OpenAI disbanded Leike's Superalignment team after his departure. Other companies that have restructured or scaled back AI safety efforts include Amazon, Google, Meta, Microsoft and X, fueling criticism that profit eclipses ethics.

"AI ethics-washing is rampant," said Kashyap Kompella, CEO of consulting firm RPA2AI Research. "Many companies have grand statements of their responsible AI principles, but most of it is performative theater, a tick-in-the-box exercise."

Concerns about AI

The AI industry's actions have tainted its image with the public. Researchers at the Markkula Center for Applied Ethics at Santa Clara University in California polled 3,000 adult Americans last November. They found that 68% were "very concerned" or "somewhat concerned" about AI's impact on humanity.

The poll showed that most Americans distrust AI companies, with 55% believing that those companies are not considering ethics when developing AI. The lack of trust contributed to 86% of Americans polled believing that AI companies should be regulated and 83% agreeing that the government should create more explicit AI regulations.

"Trust is probably at the lowest level that I've seen in the years that I've been paying attention," said Irina Raicu, director of Markkula's internet ethics program. "People are just afraid of them, their motivations, their impact on society, and whether they really mean to be careful stewards of the tools that they're developing."

That credibility gap can affect investors and sales to the many enterprises evaluating and buying AI technology, said Ann Skeet, senior director of leadership ethics at the Markkula Center. Since the beginning of the AI craze, which started with the launch of OpenAI's ChatGPT in 2022, businesses have struggled with data privacy and security concerns.

"Reputational harm can translate into a loss of shareholder value," Skeet said. "A significant misstep might cost them in terms of their reputation in the marketplace."

Governments have joined people and enterprises in scrutinizing the AI industry. California, Colorado, Illinois and Texas have introduced or passed laws to protect people against AI abuses. In March, the European Parliament approved the EU AI Act to provide a regulatory framework for operating AI systems.

"It shows that the public and regulators are tired of counting on these internal governance efforts to keep us all safe," Raicu said. "The public feels that those kinds of efforts are just not sufficient, so there needs to be some laws."

[Graphic: AI trust poll question and results. More than half of Americans polled by the Markkula Center for Applied Ethics said they do not trust the companies creating AI.]

Despite the setbacks, the AI industry can still turn the perception of untrustworthiness around. The Markkula poll showed that a significant portion of respondents remain optimistic: 48% agreed that AI currently has a positive impact on their lives, and 45% were "very excited" or "somewhat excited" about AI's potential.

Also, some experts remain optimistic that the industry will regain its footing and act responsibly.

"In principle, it seems to me, it's not impossible," said Michael Bennett, Northeastern University's business lead for responsible AI. "Now, would I bet my retirement funds? I wouldn't go that far. But I still think it's possible."

Possible, yes, but only after dramatic change, experts said.

"For those companies to build trust with consumers, they're going to need to demonstrate a pretty deep commitment to developing trustworthy technology," Skeet said.

Antone Gonsalves is an editor at large for TechTarget Editorial, reporting on industry trends critical to enterprise tech buyers. He has worked in tech journalism for 25 years and is based in San Francisco. Have a news tip? Please drop him an email.

Next Steps

Vendors struggle to prevent GenAI use in child sexual abuse
