BOSTON -- Red Hat jumped on the generative AI bandwagon this week alongside parent company IBM with the beta release of a coding assistant for Ansible and the launch of a new OpenShift product line for MLOps, taking early steps into a market established by GitHub's Copilot and other large language model-based products.
Ansible Lightspeed, a fresh phase of the open source Project Wisdom Red Hat started last fall, will use IBM's Watson Code Assistant to generate Ansible playbooks in YAML based on natural language prompts. So far, Project Wisdom has drawn from Ansible open source code repositories and tapped IBM's internal users to train the foundation model behind Watson Code Assistant. Now, the public beta release due to be available by the end of June will open that training to all enterprise Ansible users, in the hopes of further refining the AI's responses.
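To illustrate the workflow Lightspeed targets, consider a hypothetical sketch (the play, hosts and module parameters below are illustrative Ansible conventions, not actual Lightspeed output): a user writes a descriptive task name in plain English, and the assistant suggests the module call that implements it.

```yaml
---
# Hypothetical example -- not generated by Lightspeed.
- name: Install and start nginx on web servers
  hosts: webservers
  become: true
  tasks:
    # In Lightspeed's model, a natural-language task name like the one
    # below acts as the prompt; the module parameters underneath are
    # what the assistant would propose.
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```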
Red Hat's preview-stage foray into AI-generated code will trail the general release of the most popular generative AI coding assistant so far, GitHub's Copilot, by a year. It also lags the availability of generative AI for infrastructure-as-code products from Pulumi and Firefly by months. General availability for Lightspeed isn't expected until the second half of the year.
The timing wasn't lost on Chris Wright, CTO and senior vice president of global engineering at Red Hat, who acknowledged the noisy market for generative AI tools during a keynote presentation Tuesday at the Red Hat Summit -- and how Red Hat plans to stand out.
"We all know that ChatGPT has garnered a lot of attention worldwide," Wright said. "But here's what we asked ourselves: What could you do if you got really specific with AI? Instead of a general-purpose [model] like ChatGPT, what if we trained a [model] on specific code ... [to] deliver focused, domain-specific AI solutions for IT automation?"
Generative AI tools for OpenShift will come next, Wright said during his keynote, but didn't offer further details. Red Hat also repositioned its OpenShift Data Science product as the basis for a new OpenShift AI product line, with planned additions to model deployment pipelines such as bias detection, improved GPU support and support for multiple AI models.
Red Hat hasn't ruled out supporting AI models other than IBM's in OpenShift AI, said Ashesh Badani, senior vice president and chief product officer at Red Hat, in an interview this week.
"We're trying ... to make OpenShift the foundation for various models to be deployed," he said. "Now, in the future, could you see us integrating with other generative AI models? Yes. There's no reason that precludes us from doing that."
Ansible Lightspeed's promise appeals, but must show results
Whatever Red Hat might have planned with other AI models, so far it has stayed close to IBM for Ansible Lightspeed. OpenShift AI infrastructure was used to train the Watsonx.ai platform rolled out earlier this month, and Ansible Lightspeed's private beta audience was largely drawn from the 12,000 employees in IBM's Office of the CIO.
"[IBM Research teams] really understand data and foundation models," said Thomas Anderson, vice president and general manager of Red Hat's Ansible business unit. "What we bring is the domain expertise about Ansible."
Anderson didn't provide specifics on how Project Wisdom measured the accuracy of the model's responses in early testing or how much accuracy improved during the private beta. Anecdotally, IBM's Office of the CIO was able to correctly generate 60% of its Ansible code with Lightspeed as part of a pilot program.
While the quality of Lightspeed's results remains to be proven in the wider world, there's a compelling value proposition behind generative AI for infrastructure as code that has enterprise IT pros' attention, even if production use is still a long way off.
"AI right now is a buzzword, but I don't know what a year from now will look like -- I'm not ruling anything out," said Ty Lim, site reliability engineering manager at health insurer Blue Shield of California, in an interview at Red Hat Summit. "As it matures further, AI will be incorporated at some level in IT to reduce costs and automate faster, and reduce the level of code [expertise] required to automate or deploy something. That's the promise."
Ansible Lightspeed is careful to provide source information where possible, according to Red Hat officials, but most IT pros likely won't be convinced to adopt any generative AI for code used in production until there's a ruling in an ongoing lawsuit against OpenAI, Microsoft and GitHub over Copilot licensing, said Andy Thurai, an analyst at Constellation Research.
Furthermore, Thurai said he remains skeptical about how well Watson Code Assistant, also slated for general availability later this year, will stack up against competitors.
"Watson Code Assistant is not that impressive -- [it's] run of the mill, like many others," Thurai said. "In my view, [GitHub] Copilot is much better in terms of accuracy and the variety of the code it can write."
Still, Ansible has strong brand recognition in infrastructure as code and a large customer base, which is a point in Red Hat's favor, Thurai said. Ansible Lightspeed will integrate initially with an extension for Microsoft's Visual Studio Code, which Thurai called a smart move, as that could help Lightspeed piggyback on Microsoft and GitHub's success with Copilot.
Another IT expert said he sees the logic in using domain-specific data to train Watson Code Assistant, though he, too, will wait to see how well Lightspeed performs in the real world.
"Copilot could be pulling from anywhere in Git, versus Watson Code Assistant using a finite amount of known good code that is applicable to the YAML you are trying to build for Ansible," said Rob Strechay, founder at Smuget Consulting. "Context matters, and [a] larger [data set] is not always better."
Beth Pariseau, senior news writer at TechTarget Editorial, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.