H2O.ai releases small language model: H2O-Danube-1.8B

The new model comes as the generative AI market continues to see the emergence of small language models, which provide enterprises with better privacy and data controls.

AI cloud and open source vendor H2O.ai on Thursday introduced a small language model that it said provides users with stronger privacy and data controls than large language models.

H2O-Danube-1.8B is a 1.8-billion-parameter open source natural language model.

The small language model can run on portable devices such as smartphones, laptops and desktops.

It was trained on 1 trillion tokens from diverse web sources using techniques refined from models such as Llama 2 and Mistral, H2O.ai said. It is released under the open source Apache 2.0 license. Alongside the new small language model, the vendor also introduced H2O-Danube-1.8B-Chat, a version fine-tuned for conversational applications.

The language model and the chat version are now available from Hugging Face.

H2O-Danube-1.8B comes after the vendor introduced H2OGPTe in January. H2OGPTe is an enterprise generative AI platform that can retrieve internal data and privately host LLMs.

Small language models

It also comes as the generative AI market is seeing more small language models (SLMs) emerge.

Most recently, tech giant Google introduced its 2-billion-parameter Gemma open source models. Also, French startup Mistral AI is popular with cloud providers such as AWS and Microsoft, which have integrated the vendor's 7B model into their generative AI stacks.

The growth of SLMs is due to a few factors, according to RPA2AI Research analyst Kashyap Kompella.

For one, they are more affordable. Enterprises can run them locally and on consumer devices without expensive GPUs.

Also, open source SLMs offer greater control to enterprises over how data is handled, Kashyap said.

For example, an LLM from a provider like OpenAI can be used via an API, and the data from the enterprise will go to OpenAI.

But an open source model hosted by the enterprise leads to better control.

"The trade-off here is better performance of commercial providers versus greater flexibility and control of open source models," Kompella said. "SLMs may not match the performance of LLMs, but there are scenarios in which they are needed. Small language models like H2O-Danube can be a useful addition to an enterprise's AI toolkit."

Showing what's possible

For H2O, Danube is an example of what's possible for others.

"We are demonstrating that you can own ... your own LLM on your own content," CEO and co-founder Sri Ambati said.


While a smaller model will not perform as well as OpenAI's GPT-4, for example, it can act as a safety guardrail for the bigger model and help users extend the LLM's training.

H2O-Danube was trained by a small team and cost between $30,000 and $40,000 to build, Ambati said.

"We're demonstrating it doesn't have to be a big boys club to build this model raising tens of hundreds of millions of dollars," he said.

However, the cost of building a model is relative, Gartner analyst Arun Chandrasekaran said.

Sometimes, vendors fine-tune an existing model to create a new one, so the cost can be lower than building a model from scratch, he added.

Too many models

The emergence of new models like H2O-Danube can also be overwhelming for enterprises, Chandrasekaran continued. "Customers today are being overwhelmed by model choices because there is literally a new model that's being released every week, if not every day."

This has led to a new category of tools called model routers. Model routers help enterprises automate model selection based on the applications and the key outcomes a user wants.
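The routing idea can be sketched as a simple dispatch policy. The following is a hypothetical illustration only — the model names, fields and thresholds are invented for the example and do not reflect any specific router product's API:

```python
# Minimal sketch of a model router: pick a model per task based on the
# application's requirements. All names and thresholds here are illustrative.
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    needs_privacy: bool = False  # must enterprise data stay in-house?
    complexity: int = 1          # 1 = routine extraction, 5 = hard reasoning


def route(task: Task) -> str:
    """Return which model tier should handle this task (illustrative policy)."""
    if task.needs_privacy:
        # Sensitive data stays on a self-hosted open source SLM.
        return "self-hosted-slm"
    if task.complexity >= 4:
        # Hard reasoning goes to a large hosted LLM behind an API.
        return "hosted-llm-api"
    # Cheap default for routine work.
    return "self-hosted-slm"


print(route(Task("summarize this internal memo", needs_privacy=True)))  # self-hosted-slm
print(route(Task("multi-step legal analysis", complexity=5)))           # hosted-llm-api
```

Real routers weigh more signals (cost per token, latency targets, model benchmarks), but the trade-off they automate is the same one Kompella describes: commercial performance versus open source control.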

The profusion of different models makes it hard for a vendor like H2O.ai to stand out, Chandrasekaran said.

While H2O.ai has gained a popular following among open source users, it is still in danger of being drowned out by vendors such as Google.

"So how they're able to, for example, gain new customers by better branding and explaining a clear value proposition for their models -- I think that's probably going to be a challenge," he said.

Vendors such as H2O.ai must then try to compete against much bigger AI vendors like OpenAI, Anthropic and Google by ensuring that their models have enough community support and adoption, he added.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
