
OpenAI's new versions of GPT target more enterprises

The vendor targets enterprises with new tools and products tailored to what businesses and developers need to incorporate generative AI, including better pricing.

After finding mainstream success with its AI chatbot ChatGPT, OpenAI is stepping up its campaign to establish itself as an enterprise vendor.

During its first DevDay conference, on Nov. 6, the generative AI vendor cut prices, introduced new capabilities and features, and updated some of its previous offerings.

First, OpenAI updated its most powerful large language model, GPT-4. The new version, GPT-4 Turbo, is trained on information up to April 2023 and includes a context window that can fit up to 300 pages of text in a single prompt. OpenAI also cut the price: it said GPT-4 Turbo's input tokens cost a third as much as GPT-4's, and its output tokens half as much.

Tokens are the pieces of text that language models read and analyze. A token can be a single character, a whole word or a fragment of one. Customers can use OpenAI's tokenizer tool to count how many tokens their text requires.
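For readers who want to check token counts themselves, the sketch below uses OpenAI's open source tiktoken library; the sample text and model name are illustrative, not drawn from OpenAI's announcement.

```python
# Count how many tokens a prompt uses with OpenAI's open source
# tiktoken library (pip install tiktoken). The sample text and the
# model name are illustrative assumptions.
import tiktoken

text = "Generative AI is moving into the enterprise."

# Look up the tokenizer that GPT-4 models use.
encoding = tiktoken.encoding_for_model("gpt-4")

tokens = encoding.encode(text)
print(f"{len(tokens)} tokens: {tokens}")
```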

Under the new pricing model, for example, GPT-4 prompt tokens cost $0.03 per 1,000 tokens at the 8,000-token context length and $0.06 per 1,000 tokens at the 32,000-token context length.
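As a rough illustration of what those rates mean per request, the following back-of-the-envelope calculation applies them to a hypothetical prompt size; the token count is made up for the example.

```python
# Rough cost estimate at the GPT-4 prompt-token rates cited above:
# $0.03 per 1,000 tokens at the 8K context length, $0.06 at 32K.
# The prompt size is a hypothetical example.
RATE_8K = 0.03 / 1000   # dollars per prompt token, 8,000-token context
RATE_32K = 0.06 / 1000  # dollars per prompt token, 32,000-token context

prompt_tokens = 2_500   # hypothetical prompt size

print(f"8K context:  ${prompt_tokens * RATE_8K:.4f}")   # $0.0750
print(f"32K context: ${prompt_tokens * RATE_32K:.4f}")  # $0.1500
```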

In addition to updating GPT-4, the vendor introduced a series of new developer products. One, GPTs, lets users build their own custom versions of ChatGPT for their apps and services.

New modalities for the API mean that GPT-4 Turbo can also accept images as inputs, the image-generating model Dall-E can be integrated into developer apps and products through the Images API, and developers can generate human-quality speech with a text-to-speech API, according to OpenAI.
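As a hedged sketch of how those modalities surface in the openai Python SDK released alongside DevDay: the model names, voice and image URL below are assumptions for illustration, not details confirmed in the announcement.

```python
# Sketch of the new API modalities using the openai Python SDK (v1.x).
# Model names, the voice and the image URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Image input: pass an image URL alongside text in a chat message.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(vision.choices[0].message.content)

# 2. Text to speech: generate audio from text and save it to a file.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Welcome to the quarterly earnings call.",
)
speech.stream_to_file("welcome.mp3")

# 3. Images API: generate an image with Dall-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A minimalist logo for a robotics startup",
)
print(image.data[0].url)
```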

The updates and capabilities come as the generative AI market has grown more competitive since OpenAI introduced Dall-E and the text-to-text tool ChatGPT in late 2022.

Playing catch-up

While some of the capabilities represent OpenAI responding to its customers' requests, other releases display the vendor's ambition, Gartner analyst Arun Chandrasekaran said.

By updating GPT-4's knowledge base, OpenAI addresses concerns that previous versions of its LLM were outdated and lacked current data. Previous versions only covered world events and other information up to 2021.

Moreover, enlarging the context window is a response to other AI vendors, such as Anthropic, whose Claude 2 LLM already had a context window of more than 100,000 tokens.

"The race to expand the context windows for these models is a very significant area of contention at this point in time," Forrester Research analyst Rowan Curran said.

Finally, by reducing the price of GPT-4 Turbo, OpenAI addressed the concerns of customers that saw GPT-4 as too expensive for specific workloads.

"These are all things that OpenAI was trying to play catch-up to in a sense," Chandrasekaran said. "These were capabilities that customers have been asking for a long time."

Targeting enterprises

With its new Assistants API, new API modalities and custom GPTs, OpenAI is making explicit its goal of targeting enterprise customers, Chandrasekaran continued.

The Assistants API enables developers to build "agent-like experiences within their own applications," according to OpenAI.


Developers can use the API to interpret code, retrieve data from third-party sources, and orchestrate work within applications.
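As a rough sketch of that flow, the snippet below uses the beta Assistants API in the openai Python SDK to create an assistant with the built-in code interpreter tool, post a message to a thread and poll the run for a reply; the model name, instructions and prompt are illustrative assumptions.

```python
# Sketch of an "agent-like" workflow with the beta Assistants API,
# using the openai Python SDK (v1.x). The model name, instructions
# and prompt are illustrative assumptions.
import time
from openai import OpenAI

client = OpenAI()

# Create an assistant with the built-in code interpreter tool.
assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="You analyze numbers and explain the results.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in threads; add a user message, then run the assistant.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the compound annual growth rate if revenue goes from 10 to 14 over 3 years?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then print the assistant's latest reply.
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```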

The GPTs, currently available for ChatGPT Plus and Enterprise users, let users build custom versions of ChatGPT to help with tasks at work or home.

With these products, OpenAI is focusing on the needs of enterprise developers and the challenges businesses face in putting the AI technology to use, Chandrasekaran said.

"What they really try to do is ... immerse themselves more and more into the business workflows of enterprise customers," he said. "They're also trying to make it easier for business users to leverage AI."

As the generative AI market grows, AI vendors also have an opportunity to work with enterprise customers as clients, Curran said.

"What we're seeing is a further emphasis on the tooling that businesses are being louder and more articulate about needing when they're building with generative AI today," he said.

Therefore, instead of just building a trendy and fun tool like ChatGPT, which has found wide consumer use, OpenAI is also listening to what businesses and developers are saying, he added.

OpenAI also wants to market itself as a platform vendor, Chandrasekaran added.

With the new releases, the vendor is providing more tools for customization and creating different variants of the AI models it offers to customers.

Some challenges

However, a key challenge for OpenAI is to work with enterprises and help them customize and run their models across different vertical industries and domains. The AI vendor needs a solid market ecosystem for this, according to Chandrasekaran.

Not only does OpenAI need to continue to work closely with its biggest partner and investor, Microsoft, but it also should work with systems integrators such as Deloitte, Accenture and IBM Consulting, he said.

This will help it translate its product strategy into more effective go-to-market moves.

Moreover, OpenAI needs to figure out a better pricing approach, because many enterprise customers don't like to work with tokens. Instead, they want to work on a contract basis, Chandrasekaran continued.

OpenAI also introduced Copyright Shield, saying it will defend customers and pay the costs they incur from copyright infringement claims.

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
