The death of the seat: AI's effect on UC pricing models
AI costs are threatening traditional SaaS budgets. UC and CX leaders face new pricing models, vendor lock-in risks and the challenge of defining AI ROI.
The SaaS model is facing its deepest crisis in decades, with value shifting from logins to automated outputs, nearly upending the traditional pay structure. The reason is simple: Running generative AI costs a massive amount of money.
And now, boardrooms across the globe are obsessed with how they will pay for AI.
Many vendors have caught on, realizing the per-user licensing model is flawed for AI. If it continues, organizations pay for a digital wrapper rather than for the intelligence that addresses their unique requirements. For IT leaders accustomed to fixed monthly costs, this shift throws a wrench into budget planning.
Gartner predicted global software spending will hit $1.43 trillion this year, yet analysts warn of a hidden AI tax. While this figure is staggering, the reality is that IT teams are already struggling to keep their budgets in the black. The firm also found that although spending has risen by 15%, nearly 9% of that increase is driven by price hikes on existing software, meaning companies pay more for the same experience.
To cut through the noise, vendors like Avaya, Genesys, Talkdesk and RingCentral are testing new strategies, including consumption-based AI credits and outcome-based models. They want to balance AI costs with customer satisfaction.
Blair Pleasant, president and principal analyst at COMMfusion, said IT leaders have a few options to avoid sticker shock.
"Vendors need to provide real time consumption visibility, along with dynamic threshold alerts with real time notifications that are triggered at various points of monthly budget consumption," Pleasant said. "It's a must-have to provide alerts based on usage and amount spent, especially if there are sudden spikes."
Exchanging seats for tokens
Pleasant offered some suggestions to curb AI costs, including using consumption caps with automated kill switches (which many vendors are presently doing), falling back to rules-based automation and implementing a human handoff once a credit limit is reached to stop overage charges.
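The cap-and-handoff pattern Pleasant and other analysts describe can be sketched as a simple budget guard. The class name, thresholds and routing labels below are hypothetical illustrations, not any vendor's actual API:

```python
# Hypothetical sketch: consumption caps with threshold alerts,
# plus a kill switch that hands off to a human once credits run out.

ALERT_THRESHOLDS = (0.50, 0.75, 0.90)  # fractions of monthly budget


class AIBudgetGuard:
    def __init__(self, monthly_credits: float):
        self.monthly_credits = monthly_credits
        self.used = 0.0
        self.alerts_sent = set()

    def record_usage(self, credits: float) -> str:
        """Record consumption; return where to route the next interaction."""
        self.used += credits
        frac = self.used / self.monthly_credits
        for t in ALERT_THRESHOLDS:
            if frac >= t and t not in self.alerts_sent:
                self.alerts_sent.add(t)
                print(f"ALERT: {t:.0%} of monthly AI budget consumed")
        if frac >= 1.0:
            return "human_agent"  # kill switch: stop AI overage charges
        return "ai_agent"


guard = AIBudgetGuard(monthly_credits=100.0)
print(guard.record_usage(60.0))  # "ai_agent" (50% alert fires first)
print(guard.record_usage(45.0))  # "human_agent" (budget exhausted)
```

In practice, the fallback route could just as easily be rules-based automation rather than a live agent; the design choice is simply that the decision is made per interaction against a running budget.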
In outcome-based models, companies pay for a resolved interaction rather than a seat, but the industry is still struggling to define success. Oliver Jouve, chief product officer at Genesys, pointed out that a clear, consistent definition of a win remains a challenge for AI pricing.
"In CX [customer experience], it could mean completing specific operational actions, such as increasing customer resolution and faster call-handling times, or achieving a business outcome, like improving return on investment and customer loyalty," Jouve said. "That variability makes it difficult to standardize pricing around outcomes in any scalable or reliable way."
Nailing down these definitions would ensure vendors don't charge for poor-quality AI experiences. But there is no unified definition for what a resolution actually means.
"For some companies, resolution means that if a customer doesn't call back about the same issue within three days. For others, it can be one day," Pleasant said. "It's important for businesses to understand what resolved interaction means when talking to different vendors offering this."
The rise of AI currency
It's not merely about saving money; it's about avoiding sticker shock. Adopting pay-as-you-go frameworks plays a part in that, something the market is moving toward as a whole, Jouve said.
"That's why many organizations are gravitating toward consumption-based pricing, which allows them to pay for what they actually use," he said. "In fact, a study found that nearly 50% of buyers prefer this model."
"An emerging approach is AI currency, or a unit of consumption that can be applied across different use cases and capabilities," Jouve said.
The benefits of AI in the contact center are attractive. AI can drastically lower average handle time and slash head count, but the human fallout is already here. Giants like Amazon, Salesforce, IBM and Lufthansa have laid off nearly 50,000 employees in the past year alone, according to data from Challenger, Gray & Christmas.
These cuts do more than just swap people for software. They signal a pivot from casual experimentation to full-scale deployment that's capable of replacing human roles entirely. While automation increases computing costs, pinpointing the break-even point is difficult, Pleasant said.
"I don't think there's a set break-even point, and it will vary," she said. AI is most cost-effective in high-volume, basic interactions, such as password resets and questions about order shipments. However, costs rise sharply when interactions involve multiple steps and real time reasoning with agentic AI, as well as multimodal interactions, such as documents, images and voice, she said.
Jouve argued that an AI currency or token-based model protects customers as vendors retire aging tech. Pricing is decoupled from the complexity of AI delivery.
"Whether the underlying AI is predictive, generative, conversational or employing a mix of large language models [LLMs] and large action models [LAMs], usage is measured in a consistent unit that keeps costs predictable," Jouve explained.
Hybrid licensing models offer compromise
Many vendors are introducing hybrid models: a base subscription plus usage fees, adding a fresh layer of complexity for IT procurement teams. In line with this, Pleasant said vendors must keep base costs low for smaller organizations.
"It provides the happy medium of predictability needed for budgeting purposes, plus accounting for increasing and variable AI compute costs," Pleasant said.
The goal is a model that works for the entire enterprise, Jouve said.
"Most organizations today are focused on using AI to drive value across the entire business," he said. "That requires a pricing model built for evolution, not one constrained by predefined metrics."
Token-based consumption gives companies the flexibility to move quickly, test new ideas and apply AI to deliver the most value without being limited by how success is defined in any one part of the organization, Jouve said.
CFOs crave transparency, measurable value
An often-overlooked -- and costly -- AI expense is obtaining clean data, Pleasant said.
"We all know about garbage in, garbage out, and it's really important that organizations have the right labeled data and clean data," she said. "This data has to be continuously refreshed and updated; it's not a one-and-done."
There are also long-term risks with workflow portability. If a company builds everything around one vendor's AI, future price hikes create a lock-in effect. Moving data is easy, but moving complex workflows? That's a different story, said Pleasant.
"A lot of companies are using multiple LLMs to avoid vendor lock-in, but they don't do this for the workflows," Pleasant said.
Jouve said he believes tokens provide the transparency CFOs crave.
"They want to see how AI investment translates into measurable, repeatable value. That's where outcome-based pricing often falls short, because it is difficult to define, track or audit," he said. "For CFOs, what resonates more is clarity and control."
While subscriptions will stay for some tools, the seat is dying elsewhere, Pleasant said.
"As we move more into agentic AI that reasons and takes action, the per-user model will be replaced," she said. While the idea of a seat was once a measurable and comfortable metric, in a world dominated by agentic AI, counting heads is no longer the standard for quantifying AI value and ROI.
Moshe Beauford is a writer with nearly a decade of experience covering enterprise technology, including AI, unified communications and customer experience.