Interpreting the rapid evolution of generative AI systems

In the two years that 'Targeting AI' has covered the increasingly pervasive world of AI technology, generative AI has become one of the dominant topics in business and society.

For two years, the Targeting AI podcast has hosted technologists, vendors, users and critics of AI technology. The podcast began digging into the state of AI shortly after OpenAI's groundbreaking release of ChatGPT spurred the meteoric rise of generative AI systems.

In this anniversary episode, the podcast welcomes back its first guest, Michael Bennett, an AI expert, lawyer, academic and writer. He is associate vice chancellor for data science and AI strategy at the University of Illinois Chicago.

On the podcast, Bennett reflected on the significance of generative AI systems and the outsized impact they've had on business, society and culture since ChatGPT burst on the tech scene in November 2022.

"Every day since then, virtually, we've had public conversations. The media has been falling all over itself to understand the technology, to help the public come to grips with what it means for them in terms of work, life, investment, knowledge production -- and that's been a continuous conversation for two and a half years now," Bennett said.

"It's a striking development, when you think about it. I'm not sure there's been another technological innovation that has so singularly focused the public's imagination around its implications," he continued.

Bennett noted that for many people, generative AI systems have a "shocking capacity to mimic human understanding and human intelligence." And they have brought into stark reality many of the themes long promised by science fiction.

"Everyone that's engaged with this technology has come away with at least a modicum of a sense that it is light-years ahead of many types of artificial intelligence that they may have been involved with before ChatGPT's arrival," Bennett said. "And the technology as well ... rides on a large foundation of pop-cultural images and stories."

While its usefulness and potential are great, the threats and dangers of this now widely proliferating technology are equally rife, Bennett said. Large language models, as has been broadly documented, are prone to hallucinations, bias and inaccuracy. And many people fear that they are capable of far worse -- even to the point of posing an existential threat to humankind if they ever become truly autonomous and more powerful than their human creators.

"I think it's impressive that many of us get, as often as we do, more or less rational, effective, useful responses from [LLMs]," Bennett said. "But by the same token, because so much of that material is problematic ... efforts to actually train these models such that the outputs are 99.9% of the time sensible, rational and not biased can only be so effective."

Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of AI, analytics and data management technologies, and big tech and federal regulation. Esther Shittu is an Informa TechTarget news writer and podcast host covering AI software and systems. Together, they host the Targeting AI podcast series.
