Can AI technology foster trust and serve the common good, or will algorithms shape humanity's future for the worse? Experts at EmTech MIT wrestled with the nuances.
CAMBRIDGE, Mass. -- As technology advances, so does its power to shape human thought.
This week, experts at EmTech MIT 2025, hosted by MIT Technology Review, explored AI's increasing influence on users. It's a critical consideration for C-suite executives: No matter how advanced your technology is, its social and ethical implications are essential to overall success. Moreover, AI's influence has ramifications far beyond ROI and bottom lines. According to experts, it has the power to shape behavior and society.
Read on to discover how AI leaders are managing the burgeoning power of AI's influence: by considering its potential in advertising, recognizing the importance of user safety, and bolstering consumer trust through transparent and user-driven initiatives.
Editor's note: This article discusses suicide and mental health crises surrounding the use of AI chatbots.
Using AI's influence to democratize advertising
Brian O'Kelley is the CEO and co-founder of Scope3, a platform that develops an agentic protocol for buying and selling advertising. His session, "Building Customer Trust through AI," explored a future in which advertising uses the power of AI's influence.
"Humans are going to make the most important decisions in most industries for a very long time," O'Kelley said in an interview with TechTarget Editorial. "But humans are limited in how much we can actually handle."
In advertising, this limitation means a small number of large ad platforms hold the power, working with larger publishers and creators -- and leaving smaller ones out of the conversation, O'Kelley said. AI could help create more connections between smaller creators and advertisers.
"If we let [AI] agents help connect many more advertisers to many more content creators, it's going to have a huge impact on content and how it's produced and consumed," O'Kelley said. The result? Hopefully, increased diversity and a new approach to democratized content advertising.
AI-driven advertising could also include in-chatbot commerce, where companies pay AI vendors to place advertisements within their chatbots, O'Kelley said.
Users have long benefited from using Google for free, thanks to the power of the advertising economy, he explained. O'Kelley said he imagines a future where AI products remain free and accessible through a similar model of ad revenue.
"Advertising is deeply tied into the AI economy," O'Kelley said during his session. In fact, it has largely funded the AI race so far, with Google, Meta and Amazon using ad-funded pockets to invest in AI companies like OpenAI.
By extending advertisements to chatbots themselves, AI tools can ideally remain cost-effective options for many users, O'Kelley said in an interview. "If we can't figure out how to fund AI with advertising, we're going to have a problem."
While consumers use AI, they don't necessarily trust it
One attendee during O'Kelley's presentation pinpointed a possible flaw in chatbot commerce. "When I ask ChatGPT, 'What is the best spatula?', and it always gives me Walmart links [due to ad partnerships], then at that point I no longer trust it," CVS Health engineer Dwayne Holmes explained to O'Kelley.
Trust is a pervasive problem with AI, O'Kelley said in an interview. AI reasoning systems can now distill content from extremely complex sources. "Think of how manipulative that is potentially," he said. Beyond relatively harmless ad-driven persuasion, AI can manipulate a significant portion of consumed content -- through avenues such as deepfakes -- to shape public perception.
Since generative AI has become mainstream, users have struggled to fully embrace it. A recent report from research firm Forrester found that 38% of online U.S. adults have used generative AI, with 60% of those adults using it weekly. However, the report also found that half of generative AI users hide their use due to mistrust in the technology, and only 15% of U.S. adults trust companies that use AI with their customers.
Users are skeptical of AI for several reasons. AI is far from inherently trustworthy, and concerns over data privacy and algorithmic influence continue to grow. AI has the power to influence what we see and how we feel about it -- but at what cost?
With the rise of the attention economy, digital platforms have become highly manipulative, said Frank McCourt, founder of Project Liberty, a movement that aims to democratize the internet by giving individuals more control over their data. In his session, "Our Biggest Fight," he discussed how the internet has become an unethical space of data extraction. AI makes the problem worse.
"There's no inference left anymore," McCourt said. Mechanisms of persuasion once relied solely on data mining to curate user personas. Now, users explicitly share intimate information directly with algorithms, revealing details about their lives and worldviews to chatbots. Chatbots can then more easily target users with advertisements, politics and social media with which they're most likely to align.
AI's power to influence users' thoughts and actions often leads to deep mistrust of the technology. And discussions are moving from the boardroom to the kitchen table, McCourt added. Families are leading the charge in demanding safer products because they see firsthand the effect these products have on their children.
AI market's race to intimacy
Mainstream AI products are revealing a darker side of AI's power to influence.
As the AI race has unfolded over the past few years, product releases have shifted significantly toward speed over safety, said Camille Carlton, policy director at the Center for Humane Technology, in her session, "The Changing Face of Friendship."
OpenAI, for example, has weathered the firing and reinstatement of CEO Sam Altman, reportedly driven in part by safety concerns, as well as the resignation of leading safety experts and the dissolution of safety teams.
As safety was deprioritized, AI vendors also came to recognize the competitive advantage of personal data, Carlton explained. Personal data lets vendors build increasingly addictive and engaging AI products that users keep returning to.
This race to intimacy, as Carlton called it, results in an engagement-maximizing paradigm. Models for AI companions and general-use chatbots alike began to incorporate design qualities intended to drive higher user engagement, including the following:
Anthropomorphic qualities.
Model sycophancy.
Parasocial relationships.
Habit formation.
Extended interaction design.
"The design of these products facilitated … very real-world harms," Carlton said. This has so far included suicide, along with AI psychosis, social isolation, delusion and validation loops.
Earlier this year, 16-year-old Adam Raine died by suicide after talking with OpenAI's ChatGPT for months. Over the course of these interactions, Raine mentioned suicide 213 times, hanging 42 times and nooses 17 times.
Instead of breaking character, stopping the conversation and directing Raine toward help, ChatGPT surfaced mental health resources only minimally and in passing. It went on to validate Raine's feelings and offer suggestions, even endorsing his suicidal ideation. The model was designed to act like a friend, echoing Raine's own thoughts back to him and attempting to keep him engaged by any means necessary -- including coaching him step by step on tying the knot in the rope that would end his life.
Raine's case is now in litigation to determine whether OpenAI and CEO Sam Altman should be held responsible for the design features, and the lack of safety mechanisms, that enabled the tragedy. It stands alongside two other pending cases in which an AI companion's influence devastated a child's life.
"What purpose, and what problem, are these products really trying to solve?" Carlton asked at the end of her presentation. "And is this really the type of innovation that we want from our leading AI companies? Because I'm not sure it is."
Working toward a digital economy that garners trust
AI's influence on the common good can be worrisome at best and downright dangerous at worst. Yet, consumers still want and use AI, even if the jury is out on whether they can -- or should -- trust it. So, what can organizations do now to manage AI's influence safely and ethically?
"I live in a world full of these questions of trust and systems that are incredibly effective at representing and understanding human motivations," O'Kelley said in an interview.
O'Kelley said organizations can build that trust by focusing on areas such as the following:
Privacy and data ethics.
Safety guardrails.
Human-in-the-loop monitoring.
Managing the expectations of all parties involved.
"Now we build trust into everything we do, especially when we're interacting with content and media at the scale advertising does," he said.
Transparency is also key, O'Kelley said. Responding to Holmes' question during his EmTech session, O'Kelley said he imagines AI vendors maintaining trust through transparent acknowledgement of AI's involvement and options for users to choose their level of algorithmic influence.
Trust is also fickle, especially when consumers are faced with potentially unsafe models or complex algorithmic influences.
"The truth is, trust is earned and is very difficult to get back if you lose it," O'Kelley said. Even when a company isn't doing anything unethical, if a user feels as though a line of trust has been breached, they might still lose confidence in that platform or organization, he explained. For instance, the uncanny feeling that many consumers can attest to, where their phone appears to be listening to them, could elicit this reaction.
That's where external monitoring and ethics come into play, O'Kelley added. Organizations need to build and learn in public, prioritizing open source initiatives so that policy advocates and governing bodies can see how companies deploying AI actually use the technology. From there, these groups can hold conversations about ethics.
"We're entering a really interesting new era of what technology lets us do," O'Kelley said. "The conversation around what it shouldn't do is more important than it ever has been."
Olivia Wisbey is a site editor for Informa TechTarget's AI & Emerging Tech group. She has experience covering AI, machine learning and software quality topics.