AI applications: Cloud vs. on-premises deployment options

Should clients build on-premises infrastructures for their AI needs or turn to cloud-based AI? Insight Enterprises' Brandon Ebken and Juan Orlandini discuss these options.

Editor's note: AI applications are no longer the sole province of tech giants such as Facebook and Google. Indeed, the technology is beginning to see use in organizations of all sizes. As more customers begin asking about AI applications, cloud vs. on-premises deployment will become a pivotal issue for IT service providers and other channel partners. This article from Brandon Ebken and Juan Orlandini examines these options.

Today, artificial intelligence has overcome many of the challenges it previously faced, and we're starting to see a surge of implementations, largely due to the cloud.

We can now store and compute the massive amounts of data often needed for AI initiatives, and access has become ubiquitous and affordable, too, as organizations of all sizes can use AI capabilities merely by swiping a credit card. The increased accessibility of storage technologies and graphics processing units is enabling the masses to tap world-class AI capabilities. Simply put, AI has become pervasive and is ready right now -- in fact, it will be the biggest opportunity for businesses over the next decade.

As a result, AI is driving organizations' digital innovation initiatives, from bots and predictive analytics to virtual assistants and predictive maintenance at the edge. Yet many clients are wary of AI because of the significant amount of computation power and data it requires.

The question that every company must ask as it tackles AI is "Where do we fall on the AI continuum?" Answering this question can help you determine whether you need to build an on-premises infrastructure to support your AI needs or whether consuming prebuilt models and AI tools and services in the cloud can help you reach your objective.

Considering the cloud for AI applications

Cloud-based AI services are an ideal solution for many organizations. Instead of building out a massive data center to gain access to compute, they can use the infrastructure someone else built. In fact, one reason why AI has become so pervasive is that cloud providers offer many plug-and-play AI cloud services, as well as access to enough compute power and pre-trained models to launch AI applications. This significantly reduces the barriers to entry.

Some clients look to the cloud to test the waters and see what AI initiatives work for their organizations. For many clients, the cloud is not only the place to try things out but also the place to run AI for the long term. Many organizations are training deep learning models in the cloud, taking advantage of easy access to vast compute power and massive amounts of data storage.

While some organizations are not yet at the point of using deep learning, AI will continue to accelerate the adoption of cloud technologies, opening doors for more complex solutions as the cloud continues to optimize compute, storage, networking and security. For offerings like speech-based cognitive services or robotic process automation -- i.e., bots -- cloud providers are well-equipped to deliver the compute power needed.

On certain occasions, the pre-trained models or the computational/storage requirements of the cloud can be inappropriate or cost-prohibitive. In those situations, an on-premises solution may make more sense.

[Chart: Factors to consider when assessing AI initiatives]

The case for on-premises AI

So what moves customers on premises?

There is a whole ecosystem of on-premises tools built to harness large amounts of compute power, which can be expensive to rent in the cloud. Some customers find it more economical to run these workloads on premises or simply prefer a capital expense model to an operational expense model. If your organization decides to get more deeply involved or to roll out AI at scale, it may make more sense to invest in on-premises infrastructure than to consume cloud-based services.

Consider a client with an existing data lake it wants to use but doesn't want to lift and shift into the cloud because of the sheer volume of data -- it could be terabytes, hundreds of terabytes or even petabytes of information. Lifting it into the cloud to manipulate the data, only to bring it back on premises, can be incredibly expensive.
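The scale of that expense is easy to sketch. The per-gigabyte rates below are hypothetical placeholders, not any provider's actual pricing -- check current rate cards -- but the arithmetic shows why a round trip for a large data lake adds up:

```python
# Back-of-envelope estimate of moving a data lake to the cloud and back.
# The per-GB rates are illustrative assumptions, not real provider pricing.

INGRESS_PER_GB = 0.00   # many providers charge nothing for data coming in
EGRESS_PER_GB = 0.09    # assumed outbound (egress) transfer rate, in USD

def round_trip_cost(dataset_tb: float) -> float:
    """USD cost to upload a dataset and later pull it back on premises."""
    gigabytes = dataset_tb * 1024
    return gigabytes * (INGRESS_PER_GB + EGRESS_PER_GB)

# A hypothetical 500 TB data lake, moved out of the cloud once:
print(f"${round_trip_cost(500):,.2f}")  # → $46,080.00
```

Note that this counts only transfer fees; storage, API request charges and the staff time to orchestrate the migration come on top, which is why repeated round trips rarely make business sense.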

Additionally, for clients working in edge scenarios -- in which they need to make immediate decisions at the point of data -- it may not make sense to stream the data to the cloud for processing. Instead, it may make more sense to simply do all of that data manipulation on premises, where investing in the equipment makes good business sense.

In these instances, it's important to assemble the right combination of storage, compute and software. However, setting up and managing that entire environment can be a complicated task.

Challenges to implementing AI

Regardless of your organization's approach to AI and deep learning, it's important to keep in mind the gravity of all the data involved. Moving this data around can be a challenge, so if your environment is already built and your data is sitting in the cloud, that gravity will naturally pull you toward doing all of this work in the cloud.

If most of your data lives on premises and you need to use that data to do the AI exercise you're pursuing, then your natural gravity will bring you to the on-premises side. This is not to say you cannot overcome that data gravity. Some clients have taken their on-premises data to the cloud, while others have started in the cloud and decided to bring data back on premises. However, doubling back on your strategy at the wrong time can have certain cost implications.
As you evaluate how to implement AI initiatives, it is important to consider that much of the data driving AI is actually siloed in a legacy infrastructure and is not necessarily in the right format or easily accessible. There is a massive amount of unstructured data to be processed, and, in many cases, that data has grown beyond a company's infrastructure.

Dealing with much larger data sets requires more demanding computation and algorithms. In fact, the majority of your time could be spent cleaning data, de-identifying it for security and privacy purposes, and getting it to a point where it can yield insights. It can also be a challenge to give engineers and data scientists access to that data in the first place.
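A minimal sketch of the de-identification step might look like the following. The field names and salt are illustrative assumptions; the idea is simply to replace direct identifiers with salted hashes so analysts can still join and count records without seeing who they belong to:

```python
# Illustrative sketch: pseudonymize PII fields before handing data to
# analysts or data scientists. Field names here are hypothetical examples.

import hashlib

SALT = b"replace-with-a-secret-salt"  # keep the real salt out of source control

def pseudonymize(value: str) -> str:
    """Deterministic salted hash: same input yields the same token,
    so records can still be joined, but the token is not reversible."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def deidentify(record: dict, pii_fields=("name", "email")) -> dict:
    """Return a copy of the record with PII fields replaced by tokens."""
    return {key: pseudonymize(val) if key in pii_fields else val
            for key, val in record.items()}

row = {"name": "Ada Lovelace", "email": "ada@example.com", "visits": 12}
clean = deidentify(row)  # non-PII fields such as "visits" pass through intact
```

Real de-identification programs go further (tokenization services, k-anonymity checks, audited key management), but even this simple pattern illustrates why data preparation consumes so much of an AI project's time.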

Looking ahead, AI is going to drive innovation and competitive differentiation for years to come. Choosing the appropriate infrastructure for your organization will take a combination of knowing what tools you have to make your data processable, managing the computation and generating inferences from the appropriate models.

If your organization is early in its digital transformation journey, the cloud is a great option for testing AI services given the speed to market and low cost of experimentation. It may also be the best solution for your long-term plans, but it's important to continually iterate, evaluate and assess these efforts to understand when to adapt, change or scale them.

Editor's Note: Brandon Ebken is CTO of digital innovation and Juan Orlandini is chief architect of cloud and data center transformation at Insight. For more information on AI applications, see recent TechTarget articles on AI in healthcare and AI in finance.
