
AI race surges as Anthropic intros Claude 3

The new models have a larger context window and multimodal capabilities. They reflect the new normal in generative AI and the myriad model choices available to enterprises.

AI startup Anthropic on Monday introduced the next generation of its AI model, Claude.

Anthropic is battling other startups such as OpenAI, Cohere and Mistral as well as tech giants Microsoft and Google in the burgeoning generative AI market.

Claude 3 is a family of three foundation models: Claude 3 Haiku, Claude 3 Sonnet and Claude 3 Opus.

Haiku is the fastest and most compact model. Sonnet is engineered for high endurance in large-scale AI deployments. Opus can navigate open-ended prompts and has human-like understanding, according to the AI startup.

Opus and Sonnet are both available in Claude.ai, an AI assistant, and the Claude API. Haiku will soon be available, Anthropic said.
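
For developers, access through the Claude API goes over Anthropic's Messages interface. The minimal sketch below uses the anthropic Python SDK and the Opus model identifier published at launch; it assumes an ANTHROPIC_API_KEY environment variable, and the prompt and parameters are illustrative rather than prescriptive.

    import anthropic  # Anthropic's official Python SDK

    # The client reads the ANTHROPIC_API_KEY environment variable by default
    client = anthropic.Anthropic()

    # Send a single-turn request to Claude 3 Opus via the Messages API
    message = client.messages.create(
        model="claude-3-opus-20240229",  # launch-era model name; subject to change
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Summarize the differences between Haiku, Sonnet and Opus."}
        ],
    )
    print(message.content[0].text)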

Claude 3

The models have vision capabilities and can process a range of visual formats, such as photos, charts, graphs and diagrams.
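
As a rough illustration of how those vision capabilities surface in the API, the sketch below sends a base64-encoded chart image alongside a text question. The content-block structure follows Anthropic's Messages format for images, but the file name, prompt and model string are placeholders.

    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Base64-encode a local chart image (placeholder file name)
    with open("quarterly_chart.png", "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-sonnet-20240229",  # launch-era model name; subject to change
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_data}},
                {"type": "text", "text": "What trend does this chart show?"},
            ],
        }],
    )
    print(message.content[0].text)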

The new family of models also boasts improved accuracy in answering challenging open-ended questions compared to previous models, the vendor said.

The new models come amid the emergence in recent months of many small language models that rival large language models (LLMs) in the enterprise generative AI market.

Google in mid-February introduced Gemma, a lightweight open model with about 2 billion parameters. Open source startup H2O.ai introduced its small language model H2O-Danube-1.8B last week.

"In a world that is realizing the value of smaller and more targeted models, Anthropic's release of Claude 3 seems to indicate that the era of large language models is far from over," Futurum Research analyst Keith Kirkpatrick said.

On par with the market

The new family of models also follows a trajectory in the generative AI market in which context windows -- the amount of input, whether text, images or video, that a model can take in as a prompt at one time -- keep getting larger and multimodality is the norm.

The Claude 3 models provide a context window of 200,000 tokens.
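
To make that figure concrete, the back-of-envelope sketch below estimates whether a document fits in a 200,000-token window using the common rough heuristic of about four characters per English token; it is an approximation, not Anthropic's actual tokenizer.

    CONTEXT_WINDOW_TOKENS = 200_000
    CHARS_PER_TOKEN = 4  # rough heuristic for English text

    def fits_in_context(text: str) -> bool:
        """Return True if the text's estimated token count fits in the window."""
        return len(text) / CHARS_PER_TOKEN <= CONTEXT_WINDOW_TOKENS

    # Example: a roughly 300-page report at ~2,000 characters per page
    report = "x" * (300 * 2_000)  # placeholder text of that length
    print(fits_in_context(report))  # True: about 150,000 estimated tokens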

"We're really entering this phase where massively huge context windows, in addition to multimodal capabilities ... [are] becoming not quite the standard but certainly more of the par for the high-end models that we're seeing today," Forrester Research analyst Rowan Curran said.

The Claude 3 family of models is mostly on par with other models introduced in recent months.

For example, Google's new Gemini 1.5 Pro launched with a standard 128,000-token context window and the ability to scale up to 1 million tokens.

Also on Monday, GPT creator OpenAI revealed that ChatGPT can read its text responses aloud on the web and in its mobile app. This comes after the vendor introduced multimodal capabilities to the AI chatbot in September 2023.

"[These models] don't appear that earth-shaking," HFS Research analyst David Cushman said of the new Anthropic offerings. "They are a little better, allegedly, than some of the current models."

However, given the speed at which AI vendors release new models, others that overtake Claude 3 are sure to arrive soon, Cushman said.

Distinguishing itself

Anthropic's ability to distinguish itself from competitors stems from the constitutional AI design underpinning the Claude 3 family, Gartner analyst Arun Chandrasekaran said.

Constitutional AI is a training method that uses supervised learning and reinforcement learning, guided by a written set of principles, to minimize or eliminate the harm AI assistants can cause.


Anthropic also differentiates itself by providing its models not only through an API but also on Google Cloud Platform and AWS.

"The fact that Anthropic is available in more than one cloud is a net positive to them," Chandrasekaran said.

Moreover, with Claude 3, Anthropic is taking a more nuanced approach to handling risks compared to previous versions, which overcompensated and were too conservative, Kirkpatrick said.

The Claude 3 models are also less likely than previous models to refuse to answer prompts that merely brush up against the system's safety guardrails, which are designed to prevent harmful, inaccurate and biased outputs, Anthropic said.

"If [Anthropic] strikes the appropriate balance between respecting issues like toxicity and bias, but allow responses to be generated by reality, even when reality is a little messy, it could help them generate additional usage by enterprise customers," Kirkpatrick added.

He added that enterprises are taking an agnostic approach to generative AI, selecting models based on use case, risk tolerance and cost.

A challenge for enterprises

Companies are dealing not only with the push to use generative AI but also with the profusion of new GenAI tools popping up every week, Cushman said.

"We are in an embarrassment of riches position where firms have got to work out for themselves what is worth using this technology for," he said. "It's lovely that this arms race is going on. I think we will all benefit from it in the end. But in the meantime, companies need to turn to an expert intermediary."

Enterprises might also want to consider a modular architecture that treats generative AI models as interchangeable components within the larger IT system, Chandrasekaran said.

"You're essentially building a generative AI platform strategy where you're able to accommodate multiple models as part of the generative AI platform strategy," he said. "You are having the ability to swap these models as need be."

Enterprises might also want to slow down before swapping out one model for another, especially because some of the models introduced in 2023 are still well suited to certain applications, Curran said.

"Just because there is a new shiny object being released that doesn't necessarily mean that whatever tool you're using is obsolete and needs to be replaced immediately," he said.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
