
ChatGPT advanced AI, but OpenAI faces hurdles in keeping lead

Model makers disrupted AI, pushing Google to innovate and boosting Microsoft. Yet four years later, enterprises remain loyal to cloud providers, leaving the survival of the startups uncertain.

Despite shifting dynamics among AI model creators, enterprises and cloud providers, the rapid rate at which digital technology progresses means there will always be a need for innovators like OpenAI.

Given the even more dramatic pace of generative AI (GenAI) development, it's hard to believe that only a short time has passed since OpenAI stunned the world with the release of its ChatGPT large language model (LLM) chatbot in late 2022.

Until then, the idea that one could talk to a neural network or machine learning model and get a coherent response back seemed inconceivable.

The release of ChatGPT was so momentous -- marking an inflection point in the development of digital technology -- that it raised questions about which vendors would control GenAI technology.

Now, as the GenAI market starts to shake out and pick winners and losers, OpenAI is in danger of being eclipsed by the tech giants' rapidly advancing GenAI machines. Google, notably, has been flooding the market with rapid-fire product releases after being caught unawares by OpenAI's bold move in 2022.

ChatGPT and the shift in AI technology

ChatGPT not only brought OpenAI to the forefront of GenAI technology, but it also led to OpenAI and rival independent foundation model providers -- notably Anthropic and Cohere -- getting a chance to partner with tech giants such as Google, Microsoft, AWS and Oracle. The emergence of GenAI also created room for open source innovators, including Meta and smaller vendors such as Mistral and DeepSeek.

OpenAI has served as a catalyst for the AI race for Google and legacy tech vendors.

During the initial stage of the technology shift from traditional predictive AI to a new form of AI that could create text, images, video and sound, the direction of the technology seemed uncertain. While foundation model providers were rising meteorically to the top of the tech industry, questions arose about whether OpenAI could overtake established AI system vendors such as Google. At first, and to some extent still, enterprises were concerned about the privacy implications of ChatGPT since it was trained on data from the internet. Yet those concerns did not prevent its use, whether covertly within the enterprise or above board.

The GenAI race soon heated up as Microsoft invested more in OpenAI and both consumer and enterprise users seemed drawn to the new chat interface.

"This idea about LLMs would not have taken off if it wasn't for that interface," said Mark Beccue, an analyst at Enterprise Strategy Group, now part of Omdia. "It changed everything. I just changed the trajectory and unleashed the idea of generative AI."

OpenAI's innovative leap enabled people to speak to AI models through a simple chat interface.

More than that, OpenAI showed many a whole new world of possibilities, said Gartner analyst Chirag Dekate.

"When ChatGPT came out, it unlocked the imagination of what generative AI could do for the first time," Dekate said.

Before ChatGPT, LLMs that generated text and translated languages already existed. However, ChatGPT's debut also brought reinforcement learning from human feedback (RLHF) to prominence, a training technique in which human feedback on model responses is used to fine-tune the LLM's behavior.
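To make the idea concrete, here is a toy numpy sketch of just the reward-modeling step that underlies RLHF: human raters compare pairs of responses, and a model is trained to score the preferred response higher. The data, features and linear model below are invented for illustration and are not OpenAI's actual training setup.

```python
# Toy sketch of RLHF's reward-modeling step (illustrative only, not
# OpenAI's implementation): train a linear "reward model" so that the
# human-preferred response in each pair scores higher than the rejected one.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors standing in for embeddings of (prompt, response) pairs.
chosen = rng.normal(size=(32, 8)) + 0.5   # responses human raters preferred
rejected = rng.normal(size=(32, 8))       # responses they ranked lower

w = np.zeros(8)   # parameters of the linear reward model
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    # Score margin between the preferred and rejected response in each pair.
    margin = chosen @ w - rejected @ w
    # Gradient of the pairwise (Bradley-Terry) loss -log(sigmoid(margin)):
    # it pushes the preferred response's score above the rejected one's.
    grad = ((sigmoid(margin) - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

accuracy = (chosen @ w > rejected @ w).mean()
print(f"preferred response scored higher on {accuracy:.0%} of pairs")
```

In a full RLHF pipeline, the trained reward model is then used to fine-tune the language model itself with a reinforcement learning algorithm, a step omitted here for brevity.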

OpenAI's revolutionary gambit, encouraged by Microsoft's support, led to major investment in other AI startups. Google and Oracle invested strategically in Cohere, while AWS partnered with Anthropic.

"These companies have invested extremely heavily in AI for a long time," Beccue said. "It takes this kind of investment and knowledge that takes time to build, and they were ready when these innovations came."

Forced to innovate

However, while the tech giants were ready to advance this new and wildly popular form of AI technology, they were also unprepared in certain ways.

"Companies that were focused on a traditional cloud sort of play ... they were forced to innovate," Dekate said. "They were forced to revisit their assumptions because moments like ChatGPT, like OpenAI, essentially changed the industry altogether."

He added that OpenAI provided the push needed to drive those transformations in AI.

It is well known that Google had the talent and the research to take AI technology to the next level. However, the cloud vendor moved cautiously, and it wasn't until it drew criticism for its initially guarded approach that it became more aggressive about releasing AI technology.

"OpenAI forced Google to productize," said Sarah Kreps, a Cornell University government professor with a focus on tech. OpenAI also helped Microsoft because the company did not have to build its general-purpose models. Instead, it integrated OpenAI into Office, Azure and GitHub.

While OpenAI has been the leading force in GenAI innovation, Anthropic is close behind, Kreps said.

"Anthropic's rise has similarly pushed Google to open its wallet and fast-track Gemini," she said, referring to Google's most popular family of GenAI models.

Moreover, Dekate said OpenAI has found success in delivering code generation in a unique way.

AI startups like OpenAI and Anthropic became catalysts for the tech giants. Even Meta, AWS and IBM jumped into the GenAI arena, issuing their own series of open and proprietary models and GenAI platforms. Microsoft, too, distanced itself somewhat from its OpenAI partnership and started releasing its own GenAI products.


"These startups disrupted the pace -- as well as the direction -- of big tech's AI agendas," Kreps said. "Cloud providers are no longer just infrastructure vendors; they're now distribution channels for foundation models."

The partnerships Microsoft formed with OpenAI, AWS with Anthropic and Oracle with Cohere were more than just a way of competing, Dekate said.

"You have these frontier models that deliver exceptional performance,” he said. “But to power this exceptional performance, you need a fairly cogent infrastructure to support your pretraining scaling and model scaling."

This need for performance and infrastructure drove the partnerships between cloud and model providers.

While it is likely that AI technology would have experienced spectacular growth and popularity without these partnerships, it might have taken longer than it did.

"As we've seen with any kind of scientific discovery and then application, it is an inevitable trajectory that mankind will take whatever has gone before and apply it in new ways or refine it, optimize it," said Bradley Shimmin, an analyst at The Futurum Group. "I think we would be where we are, but I am not sure we would have gotten here so quickly."

These partnerships led to greater acceleration and a new level of "coopetition" -- cooperation and competition -- in which vendors offer services and products from rivals that compete with them.

Strained partnerships and old faithful

However, those alliances and investments have changed in the nearly four years since the ChatGPT moment.

For example, OpenAI once leaned heavily on Microsoft, but it has since turned to other investors like SoftBank. Earlier this year, the AI vendor revealed that SoftBank is leading a $40 billion investment to advance AI research and expand its computational infrastructure.

The Microsoft-OpenAI friction highlights some of the strains within the relationship between model providers and the giant cloud companies.

Kreps said that the more mature a model provider becomes, the more it starts selling directly to customers, launching API platforms and even raising more money. As a result, even more friction builds.

Part of the shift is that while the foundation model providers have advanced the technology, enterprises remain comfortable with the cloud providers they've always used and trusted.

"Enterprises do not have the luxury of starting with a clean sheet," Dekate said. "They have to design, implement, scale and build in a manner that does not increase technical debt."

Therefore, while enterprises were willing to experiment with innovators, they returned to what they knew.

"What seems to be happening is enterprises are tapping into Anthropic, OpenAI, Gemini, [Meta] Llama-like experiences, but they're doing so in the context of the existing experiential bias, which is their existing enterprise architecture, existing foundations," Dekate said.

Despite model diversity, Kreps said the three dominant cloud providers -- AWS, Google and Microsoft -- remain in control of the integration layer, billing, compliance and IT procurement pathways.

Enterprises prefer centralized procurement and security, so they're keen to access the AI models through their existing cloud relationships, Kreps said.

"In the long run, AI capabilities will become another service tier bundled into broader cloud commitments, even if the underlying models come from outside labs," she said.

Enterprises are also willing to try open source innovators and smaller players like France-based GenAI upstart Mistral, which has sought -- with a notable degree of success so far -- to differentiate itself as a native European tech innovator, Shimmin said.

"We have evolved as an enterprise marketplace in terms of expectations for software and AI, in particular, over the last three years," he said.

Before ChatGPT, predictive AI models typically needed to reach 85% to 89% accuracy for recommendation use cases. Those thresholds have come down since ChatGPT, because enterprises realize that value can be measured in other ways, and because smaller models are more cost-effective and can be applied to vertical markets.

Some enterprises are taking a hybrid approach by using, say, a smaller, fine-tuned open source model and a model from OpenAI that they can obtain through an API, Shimmin said.
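As one possible illustration of such a hybrid setup -- assumed here rather than described by Shimmin -- an application might route routine prompts to a self-hosted, fine-tuned open source model and fall back to OpenAI's API for harder requests. The routing rule, model name and local_generate() helper below are placeholders, not a prescribed architecture.

```python
# Minimal sketch of a hybrid model setup: a local open source model for
# routine prompts, the OpenAI API as a fallback for complex requests.
import os
from openai import OpenAI  # official OpenAI Python client

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def local_generate(prompt: str) -> str:
    # Placeholder for a call to a self-hosted, fine-tuned open source model
    # (for example, one served behind an internal endpoint).
    return f"[local model answer to: {prompt}]"

def generate(prompt: str, complex_task: bool = False) -> str:
    if not complex_task:
        # Routine prompts stay on the cheaper, in-house model.
        return local_generate(prompt)
    # Complex requests fall back to a hosted frontier model via the API.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate("Summarize this internal memo."))
print(generate("Draft a detailed migration plan.", complex_task=True))
```

In practice, the routing decision would more likely hinge on task type, prompt sensitivity or cost policies than on a simple flag.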

However, a hybrid approach can't keep model makers relevant forever in the GenAI market.

From IBM to AWS and Google to the growing open source market with Grok and others, many vendors now make their own models.

"Models don't necessarily bring all the value," Beccue said. "It's what you do with the model that brings the value, and everybody understands that."

OpenAI timeline from 2015-2023

A continuous disruptor

The model makers understand this, especially OpenAI, which is why it is working frantically to remain a market disruptor.

Most recently, OpenAI revealed that it acquired io, a startup founded by noted former iPhone designer Jony Ive. Ive and his team will help build and design hardware for AI interfaces. OpenAI also revealed that it has partnered with Mattel to develop AI-powered toys and games.

Moves like these show that OpenAI continues to unlock imagination and creativity, Dekate said. He added that cloud vendors struggle with this, meaning there will likely always be a need for the OpenAIs of the world in technology.

"If you do not have OpenAI and Anthropic, I'm willing to bet that the cloud providers will go back to playing yesterday's game," Dekate said, adding that the cloud providers would return to playing it safe without these startups.

"You need externally forced impetus to change," he said. "It's a yin and a yang."

Moreover, agentic AI adds another level of innovation to the GenAI market, Beccue said.

In this new phase of AI development, it's not clear how influential OpenAI will be, even though it has its own agentic products.

"Agentic is the next iteration of generative AI," he said. "It's disruptive. It causes more players to enter the space."

Therefore, even as the interplay between OpenAI and the big tech vendors continues unabated, the momentum behind agentic AI means more competition ahead.

Esther Shittu is an Informa TechTarget news writer and podcast host covering AI software and systems.
