7 generative AI challenges that businesses should consider
The promise of revolutionary, content-generating AI models has a flip side: the perils of misuse, algorithmic bias, technical complexity and workforce restructuring.
The mainstreaming of generative AI offers here-and-now capabilities, the promise of future advances -- and more than a few pitfalls.
This form of artificial intelligence technology hit its stride in 2022 with the release of OpenAI's ChatGPT and DALL-E 2. Those tools, and successors such as Google's Bard, make high-quality, AI-generated content an everyday reality. Each of those products is built on a foundation model, a type of machine learning model trained on massive amounts of data.
Users can now tap AI models for a range of content creation chores: Text, image, video, audio and synthetic data are all in the mix. But the potential benefits of the technology come at some cost. Here are seven generative AI challenges business leaders should take into account.
1. Handling technical complexity
Generative AI models may contain billions or even trillions of parameters, making them a complex undertaking for the typical business.
"These models are impractically large to train for most organizations," said Arun Chandrasekaran, a vice president and analyst for tech innovation at Gartner. The necessary compute resources can make this technology expensive and ecologically unfriendly, he said, so most near-term adoption will likely see businesses consuming generative AI through cloud APIs with limited tuning.
The difficulty of creating models leads to another issue: the concentration of power in a few deep-pocketed entities, Chandrasekaran added.
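The cloud-API consumption pattern Chandrasekaran describes can be sketched in a few lines. The endpoint URL, parameter names and payload shape below are illustrative assumptions, not any specific vendor's API; real providers document their own request formats.

```python
import json

# Hypothetical example: rather than training its own multibillion-parameter
# model, a business sends prompts to a hosted generation endpoint.
# API_URL and the payload fields are placeholders, not a real vendor API.
API_URL = "https://api.example.com/v1/generate"

def build_request(prompt: str, max_tokens: int = 256,
                  temperature: float = 0.7) -> str:
    """Assemble a JSON request body for a hosted text-generation API."""
    payload = {
        "prompt": prompt,            # the user's instruction or question
        "max_tokens": max_tokens,    # cap on the length of the response
        "temperature": temperature,  # sampling randomness; lower = safer
    }
    return json.dumps(payload)

body = build_request("Summarize our Q3 support tickets.")
print(body)
```

The "limited tuning" in this model of adoption typically amounts to adjusting request parameters such as these, plus prompt design, rather than retraining the underlying model.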
2. Tackling legacy systems
Incorporating generative AI into older technology environments could raise additional issues for enterprises. IT leaders will face decisions on whether to integrate or replace older systems.
For example, financial institutions considering how a language model could be used to detect fraud will probably find the emerging technology at odds with how their current systems handle that task, said Pablo Alejo, partner at consultancy West Monroe.
Legacy systems "have a very specific way of doing that, and now you've got generative AI that's leveraging way different types of thinking," Alejo explained. "Organizations have to find new ways to either create integrations or adopt new capabilities, with new technologies, that enable them to reach the same outputs, or outcomes, faster and more effectively."
3. Avoiding technical debt
Generative AI could end up joining legacy systems as technical debt if businesses fail to achieve significant change through its adoption.
An enterprise deploying AI models for customer support might declare an optimization victory on the grounds that human agents will handle fewer cases. But workload reduction isn't going far enough, according to Bill Bragg, CIO at enterprise AI SaaS provider SymphonyAI. A business would need to significantly reduce the number of agents in front-line support roles to justify the investment in AI, he noted.
"If you don't take something away, how have you optimized?" Bragg said. "All you've done is add more debt to your processes."
4. Reshaping some of the workforce
Generative AI will likely restructure how work gets done in many fields, a prospect that raises job-loss concerns. An article on the Chinese video game industry, for example, reports that job opportunities for artists are vanishing as companies adopt AI-based image generators.
But some executives suggest it's not all doom and gloom. AI might reduce the number of agents in the customer support example, but the technology would also create other roles, Bragg said. A business would need staff to oversee and improve the AI-assisted customer experience, he reasoned. Employees who once fielded customer questions would instead drive the next data and technical improvements. Bragg referred to this transition as "going from the doer to the trainer."
Similarly, Alejo said generative AI will remove some types of jobs but also "open up brand new types of jobs that those same people can take advantage of."
5. Monitoring for potential misuse and AI hallucinations
AI models lower the cost of content creation. That helps businesses but also helps threat actors, who can more easily modify existing content to create deepfakes. Digitally altered media can closely mimic the original and be hyperpersonalized. "This includes everything from voice and video impersonation to fake art, as well as targeted attacks," Chandrasekaran said.
Threat actors can misuse generative AI systems, but the models themselves can also lead users astray: AI hallucinations spread misinformation and make up "facts." Depending on the domain, hallucinations could affect 10% to 20% of an AI tool's responses, Chandrasekaran added.
6. Keeping tabs on legal concerns and algorithmic bias
The emerging technology can also bump into intellectual property issues, exposing businesses to legal action. "Generative AI models have the added risk of seeking training data at massive scale, without considering the creator's approval, which could lead to copyright issues," Chandrasekaran said.
Algorithmic bias is another source of legal risk. Generative AI models, when trained on faulty, incomplete or unrepresentative data, will produce results that are systemically prejudiced. Unchecked, AI bias spreads through the systems and influences decision-makers relying on the results, potentially leading to discrimination.
Flawed AI models "can propagate downstream bias in the data sets, and the homogenization of such models can lead to a single point of failure," Chandrasekaran said.
7. Providing coordination and oversight
Newer technologies often compel organizations to launch a center of excellence (CoE) to focus on effective adoption and rollout. Such centers could play an important role in generative AI.
"If you don't have a team working on how to understand this capability and take advantage of it, you are risking obsolescence," Alejo warned. "Centers of excellence should be existing across every industry, across every organization."
Such a specialized group can also craft policies for governing the acceptable use of generative AI. "The CoE," Alejo advised, "should lead policy design and decisions for how different individuals across an organization can use it." The center, he added, should enlist the review and input of key stakeholders, including legal, IT, risk and, potentially, other departments such as marketing, HR and R&D.