Implications of Apple AI generating false news summaries
The hallucinations point to a deeper issue with generative technology and give publishers an opportunity to push tech companies to refine their models and products.
Apple is developing a software update after its Apple Intelligence system generated inaccurate news summaries.
The BBC first alerted the iPhone maker to the hallucinations in mid-December. The U.K. news organization reported that Apple Intelligence had generated a false summary making it appear that BBC News claimed Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.
Other publishers, such as ProPublica, also alerted Apple that its Apple Intelligence system was generating false summaries.
While Apple did not respond to the BBC in December, it did respond after the National Union of Journalists called for the company to remove Apple Intelligence, saying it is working on the technology. Reporters Without Borders has also asked Apple to take down its AI system.
The news makes Apple the latest vendor to fall victim to generative AI hallucinations.
For example, in October 2024, AI search vendor Perplexity was sued by Dow Jones & Co. and the New York Post after its AI system allegedly hallucinated fake sections of news stories.
Google also had to make technical improvements to its AI Overviews summaries last year after the feature gave some users wrong answers to their questions.
On Jan. 16, Apple disabled the summaries for news apps on its iPhone, iPad and Mac software.
The issue of hallucination and Apple's embarrassment
While Apple is the latest vendor experiencing the drawback of AI technology, its troubles speak to a greater issue, said Chirag Shah, professor of Information Science at the University of Washington.
"The deeper issue is hallucination," Shah said. The nature of the AI models is to generate things, he added, explaining that when they're trying to summarize and synthesize, it's likely that they're going to make mistakes.
"That's not something you can just debug because it's the very nature of these LLMs and the way they operate," Shah continued.
Apple has reportedly told BBC News that it will provide an update that will label summaries generated by Apple Intelligence. That is not enough, Shah said.
"Most people don't understand how these headlines or the summaries or synthesis are being generated," he said. "There is no quick fix. The right thing to do is don't use it until there are further developments, further understanding and mitigation efforts."
The hallucinated summaries are also an embarrassment for the iPhone maker, said Michael Bennett, AI adviser at Northeastern University.
Before it launched Apple Intelligence, Apple was widely seen as lagging in the AI technology market. The release of Apple Intelligence was supposed to bring the vendor into the market in a revolutionary way.
"They've been more or less playing catch-up and making a lot of noise about relatively modest steps in comparison to other players in the field," Bennett said. "This type of hallucinated summarization ... seems both, like I said, an embarrassment, and potentially a pretty serious legal liability."
He added that these hallucinated summaries could be the basis of a strong defamation claim because Apple Intelligence is misstating news articles and attributing them to news sources.
"I'm a little surprised by the blasé-ness of the response, Apple's response," Bennett said. "It seems like a fairly big deal again on the legal liability side, and as far as Apple's reputation and brand goes, like a real tarnish."
Opportunity for publishers
While publishers themselves are unlikely to be held legally liable, those working with AI companies like Apple and Google may want to build in more protection for themselves.
"[Publishers] need to be talking to the big AI companies about what measures need to be in place to minimize the likelihood of these types of false summaries and attributions back to publishers," Bennett said.
He added that there should be new language and new clauses in addition to what is already in place to prevent these types of incidents from affecting their brands. Publishers should also work to take the lead in guiding large AI vendors in refining their models.
"This seems like a ripe opportunity for publishers to take the lead and say, 'you need to refine your models,'" Bennett said. "If that's not technically, financially feasible, then you need to stop attributing these false summarizations to publishers and if you don't, then we're going to sue you."
This case also seems like an issue the Federal Trade Commission could get involved with because consumers did pay for services like the new iPhone with the Apple Intelligence feature. If that service is flawed, the consumer is not getting what they paid for, Bennett said.
However, it's likely that before things escalate to that level, Apple will take steps to fix the issue.
Apple did not immediately respond to Informa TechTarget's request for comment.
Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.