
From words to action: Implementing AI

Artificial intelligence has dominated the world of technology for some time now, and it looks set to continue doing so for the foreseeable future. From conversational chatbots to predictive maintenance for machines, AI has quickly gone from being a mere competitive advantage to being a business necessity. It comes as no surprise, then, that Gartner has predicted that AI technologies will be in almost every software product by as soon as 2020.

Despite the market buzz, many companies are still stumped by the prospect of deriving actual business value from the use of AI. Its introduction to products and systems remains one of the leading sticking points for businesses. So, how do you actually implement an AI technology?

It’s complicated

As AI goes from a “nice to have” to a “need to have,” it’s also evolving in terms of complexity. Simple, standardized AI services that do image or text recognition are no longer enough. Companies now want to see complex predictive scenarios that are specific to their operations and customized for their business needs.

Take a scenario that uses time-series data to generate business insights, such as predictive maintenance for industrial IoT or customer churn analysis for a customer-experience company. Getting accurate, actionable results in these predictive scenarios requires substantial data science work: data collected over time is used to iteratively train the models and improve the accuracy and quality of the output. Businesses must also engineer new features, run and test many different models, and determine the right mix of models to produce the most accurate result. And that is just to determine what needs to be implemented in a production environment.
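The model-selection step described above can be sketched in a few lines. This is an illustrative example only, not a reference to any specific product: it uses scikit-learn, synthetic data in place of real churn or sensor features, and three arbitrary candidate models. The time-ordered cross-validation mirrors how such models are iteratively retrained on data that accumulates over time.

```python
# Sketch: comparing candidate models on rolling time-series splits.
# The features, labels and model choices are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))  # e.g., engineered usage or sensor features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic label

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

# TimeSeriesSplit always trains on the past and validates on the future,
# matching how a production model would be retrained over time.
cv = TimeSeriesSplit(n_splits=5)
scores = {name: cross_val_score(model, X, y, cv=cv).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In practice the "right mix" the text mentions often means ensembling several of these candidates rather than picking a single winner.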

Similar to how digital transformation has branched out from an IT-driven initiative into a company-wide effort, AI is no longer the exclusive domain of the data scientists and engineers who prepare the data. Organizations must move beyond a siloed AI approach that divides the analytics and app development teams. To succeed, app developers need to become more knowledgeable about the data science lifecycle, while app designers have to start thinking about how predictive insights can drive the application experience.

The key is identifying an approach that lets teams put models into production in a language appropriate for runtime, without rewriting the analytical model. Companies need to not only optimize their initial models, but also feed data and events back to the production model so it can be continuously improved.

This can seem like a complicated process, but it’s a prerequisite for successful implementation of AI.

Go comprehensive or go home

So the next question is: How can companies effectively implement AI in a way that enables them to address complex predictive scenarios with limited data science resources? And how do they achieve success without retraining their entire development team?

The truth is that it can’t be done by simply creating a narrowly defined, one-size-fits-all approach that will get you results with only a few parameters. It requires a more comprehensive strategy to be insightful, actionable and valuable to the business.

Consider an IIoT predictive maintenance application that analyzes three months of time-series data from sensors on hundreds of machines and returns the results automatically. This isn’t a simple predictive result set that is returned, but a complete set of detected anomalies that occurred over that time, with prioritized results to eliminate the alert storms that previously made it impossible to operationalize the results. These prioritized results are served up via a work order on a mobile app to the appropriate regional field service team, which is then able to perform the necessary maintenance to maximize machine performance. It’s a complex process where the machine learning is automated and feature engineering is done in an unsupervised fashion. The provided results analyze individual sensor data, machine-level data and machine population data, and are packaged up in a format that enables the business to take action.
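The anomaly-prioritization idea in the scenario above can be sketched for a single sensor. This is a minimal illustration, not the described product's method: it uses a rolling z-score over synthetic hourly readings, with hypothetical window sizes and thresholds, and ranks flagged points by severity so a field service team sees a short prioritized list instead of an alert storm.

```python
# Sketch: flag anomalies in one sensor's time series with a rolling
# z-score, then rank them by severity. All parameters are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 24 * 90  # roughly three months of hourly readings
values = rng.normal(loc=50.0, scale=2.0, size=n)
values[[500, 1200, 1800]] += [15, 25, 40]  # injected faults of rising severity
ts = pd.Series(values, index=pd.date_range("2024-01-01", periods=n, freq="h"))

# Baseline from the *previous* 48 readings only, so a spike cannot
# inflate its own reference mean and standard deviation.
roll = ts.shift(1).rolling(window=48, min_periods=24)
z = (ts - roll.mean()) / roll.std()

# Keep only strong deviations and sort worst-first: these become the
# prioritized work orders rather than a flood of raw alerts.
anomalies = z[z.abs() > 4].abs().sort_values(ascending=False)
print(anomalies.head(3))
```

A production system would extend this across hundreds of machines and combine sensor-level, machine-level and fleet-level scores, as the text describes, but the prioritize-before-alerting principle is the same.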

Toward AI 2.0

Currently, the best available term for the processes behind these technologies is “anomaly detection.” But not all technologies take the same approach, and not all systems deliver predictions that lead to better business outcomes. What comes next is a fundamental shift in how machine learning capabilities are delivered, and not just a matter of deployment in the cloud versus on premises. It is a shift from delivering data science tools that make data scientists more effective to delivering data science results that eliminate the need for those tools in the first place. Data scientists will soon be able to spend their time analyzing and improving the results, instead of dedicating huge chunks of their time to non-mission-critical tasks.

The only requirement is that the data be provided in a time-series format. From there, you simply upload the data to the cloud (on-premises options will exist too) and the automated AI does the rest, with accurate results returned within days.
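A "time-series format" here typically means a long table with one row per reading. The column names below are hypothetical, not any vendor's specification; this sketch just shows the general shape of an upload-ready file.

```python
# Sketch: a long-format time-series table, one row per (timestamp,
# machine, sensor) reading. Column names are illustrative only.
import pandas as pd

readings = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2024-01-01 00:00", "2024-01-01 00:00", "2024-01-01 01:00"]),
    "machine_id": ["M-001", "M-001", "M-001"],
    "sensor": ["temperature", "vibration", "temperature"],
    "value": [71.2, 0.03, 73.9],
})
readings.to_csv("readings.csv", index=False)  # flat CSV, ready to upload
```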

Welcome to the new world of AI implementation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
