
IBM moves ahead with open source, multi-model AI strategy

The 113-year-old tech giant expanded its open source, open ecosystem, multi-model approach with support for more models and updates to its Watsonx generative AI assistants.

IBM on Tuesday released into open source some of its Granite large language models, widened support for third-party AI models and added to its line of AI assistants on the year-old Watsonx generative AI platform.

The moves came on the eve of the tech giant's IBM Think 2024 conference. They expanded and reaffirmed the vendor's commitment to the open source stance it adopted in 2000.

They also came during the same month that OpenAI and Google made splashy introductions of their latest multimodal generative AI (GenAI) chatbots, designed to speak and interact with people as well as create text, images and videos.

IBM's latest offerings are not multimodal, but the company provides access to third-party AI models with multimodal capabilities.

"We are seeing IBM play the neutral advisor role for companies entering their LLM journey," said R "Ray" Wang, founder of Constellation Research. "As a trusted advisor, customers see Watsonx and Granite LLMs as an insurance policy in the rapidly advancing and changing world of LLMs. Customers need a place to get started and a platform like Watsonx to remain flexible for the changes to come."

GenAI moves

The new AI assistants are Watsonx Code Assistant for Enterprise Java Applications, expected to be generally available in June, and Watsonx Code Assistant for Z to accelerate mainframe application modernization, planned for general availability in October. IBM also expanded Code Assistant for Z with natural language explanation.

The Granite LLMs for code are generally available now under open source Apache 2.0 licenses on Hugging Face and GitHub.

The Granite models range from 3 billion to 34 billion parameters in size and are aimed at modernizing applications at large scale, generating code, fixing bugs, explaining and documenting code, and maintaining code repositories. The code models are trained on 116 programming languages, and IBM claimed they are among the top-performing open LLMs for code-related tasks.

Granite models come with indemnity protection for users. They have been trained for business use on data sets from internet, academic, code, legal and finance domains.

Training data has been scrubbed for objectionable content and filtered to address governance, risk assessment, privacy protection and bias mitigation. The vendor released Watsonx Governance, an AI governance toolkit to create trust-based workflows, last year.

AI for business

At a pre-conference media briefing, IBM executives emphasized that the vendor and its big consulting division are focusing on helping enterprises get generative AI out of the planning stages and into action.


"Our AI assistants really stand at the forefront of providing a very effective way to put GenAI in the hands of business," said Kareem Yusuf, senior vice president of product management and growth.

Meanwhile, IBM Consulting Advantage, an AI services platform the vendor released last year, is taking advantage of the new AI assistants and Granite models, said Mohamed Ali, senior vice president and COO at IBM Consulting.

The consulting division is now working on 300 generative AI projects with customers, according to Ali.

"Consulting Advantage is a common layer that provides a common security framework, a common PII [personally identifiable information] framework, common bias framework, common governance framework, common cost framework," he said. "Below that are all these different LLM apps, there's this whole set of assistants now that sit on top, and this way, our consultants can consume this sort of multimodal technologies in a consistent way."

Third-party alliances

As part of its open ecosystem strategy, IBM on Tuesday also unveiled a series of new and expanded partnerships involving generative AI with other tech giants.

Under a new partnership with AWS, Watsonx Governance is now available to users of Amazon SageMaker.

With Adobe, a longtime partner, IBM said it is collaborating on hybrid cloud and AI technology to bring Watsonx, Red Hat and OpenShift capabilities to Adobe Experience by next month.

With Meta, IBM unveiled the availability of the social media giant's latest open LLM, Meta Llama 3, on Watsonx.

Watsonx is also now available on Microsoft Azure. Commercial models from generative AI vendor Mistral will be available on Watsonx by June. IBM also expanded its partnership with security vendor Palo Alto Networks on AI security.

This openness to many third-party generative AI technologies, combined with a focus on AI safety and businesses rather than consumers, means that while IBM's AI offerings might be less flashy than others, enterprises will welcome them, said Steven Dickens, a Futurum Group analyst.

IBM's traditional customer base, including large financial institutions and government agencies, will also appreciate the somewhat stolid but reliable AI posture of the long-established tech giant, he said.

"Do you really want to get caught up in all that noise if you're looking for an enterprise AI partner?" Dickens said, referring to the high-profile generative AI systems from OpenAI and Google. "The traditional heartland is going to want sensible, solid, dependable and the ability to be indemnified."

"They want to know that information hasn't been scraped off The New York Times or TripAdvisor," he continued. "IBM is singularly focused on being an enterprise AI company."

Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience.
