Despite economic uncertainty and a few hiccups regarding ethics and bias, 2022 was a year of change for AI technologies.
Here is a look back at the stories that dominated the year.
Can AI be sentient?
Blake Lemoine, a former engineer at Google, broke the internet when he declared that Google's Language Model for Dialogue Applications (LaMDA), a large language model (LLM) then in development, was sentient.
Lemoine claimed that, based on LaMDA's answers to his questions, the LLM could communicate its fears and experience feelings the way a human can.
Google later fired Lemoine for choosing to "persistently violate clear employment and data security policies that include the need to safeguard product information," the company said in a statement.
Generative AI goes mainstream
From Dall-E 2 to Stable Diffusion to ChatGPT, generative AI has made AI more accessible not only to enterprise users but also to consumers.
However, with more accessibility comes more responsibility.
As everyday consumers use Dall-E 2 to create striking images, artists and creators are concerned that these models are not only eliminating the need for their work but also stealing their artwork.
This has also raised concerns about copyright infringement when it comes to work created -- or stolen -- by AI.
According to Michael G. Bennett, director of education curriculum and business lead for responsible AI at the Institute for Experiential AI, some artists may have a case for copyright infringement if the generated artwork shows a strong similarity to their own work. If it does not, most artists will not have a case; often, the work being produced is no different than if a young artist had been inspired by an older one.
Despite the concerns behind generated artwork and generative AI, many creators are embracing it. Marketing vendor Omneky uses Dall-E to generate ads for customers.
Metaverse, avatars and metaverse technologies
The metaverse dominated early 2022, with vendors such as Nvidia describing what they believe the next iteration of the internet will look like. Nvidia introduced an avatar engine. Meta -- Facebook's parent company -- invested heavily in metaverse technology. Researchers and analysts described a metaverse that includes holographic avatars that could be used to train medical professionals and others.
Vendors, including Hour One, introduced technologies such as a 3D news studio, where users can choose an avatar newscaster.
Finally, consumers could reimagine themselves with Prisma Labs' Lensa app, which generated avatars for users.
While many agree that we're still far away from the metaverse that vendors and enterprises alike are describing, the underlying technologies are becoming more widespread, making the metaverse feel more real to everyone.
Enterprises turn to AI to help ease the Great Resignation
The coronavirus pandemic led many workers to re-evaluate their employment, and many resigned in favor of other opportunities.
For enterprises in the restaurant industry, this meant many positions were left unfilled. From pizza chain Jet's Pizza to Panera Bread, enterprises turned to AI tools and technologies that could supplement human workers so that those left on staff could focus their attention elsewhere.
"We're starting to see AI move out of that theoretical space of all the cool things AI can do," Liz Miller, an analyst at Constellation Research, told TechTarget Editorial back in August. "We're starting to see real, specific kinds of business acceleration-minded applications coming off of the workbench and getting into real life."
Voice cloning and James Earl Jones
When news broke that James Earl Jones was lending his voice to speech-to-speech voice cloning for future appearances of his Star Wars character Darth Vader, it gave insight into the growing use of the technology.
There are two types of voice cloning: speech-to-speech and text-to-speech.
Text-to-speech is used in contact center environments, where synthetic speech can communicate with consumers before they speak to agents. It's especially helpful when agents speak different languages.
AI and the war in Ukraine
When the war between Russia and Ukraine began earlier this year, AI quickly became a tool for spreading disinformation. Bad actors used the technology to create deepfake videos, and AI-generated humans were deployed to spread anti-Ukrainian discourse.
The use of deepfakes in the war showed how easy it has become for everyday consumers to create them without coding knowledge.
The use of AI in the war also shows the psychological impact machine learning can have.
"Machine learning is exceptionally good at learning how to exploit human psychology because the internet provides a vast and fast feedback loop to learn what will reinforce and or break beliefs by demographic cohorts," said Mike Gualtieri, an analyst at Forrester Research.