Thanks to rapid improvements in machine learning tools, AI applications are just now starting to make inroads in industrial processes, promising to improve older industrial automation protocols built around expert systems.
AI process automation tools that simplify the workflow of front-line employees present a big opportunity for businesses, but several challenges remain. Enterprises are still grappling with making vast stores of existing data available to AI platforms, bringing agility to AI application development and improving the quality of training data for machine learning algorithms.
Speakers at the Re-Work AI in Industrial Automation Summit in San Francisco discussed how enterprises are taking on these challenges.
Focus on pain points
Broken systems and downtime are among the biggest drivers of AI adoption in industrial automation.
"One of the keys for our customers is that they have experienced some incident, like a failure," said Drew Conway, CEO of Alluvium, a company that makes machine learning tools for analyzing the performance of industrial equipment.
In many cases, data that could predict large-scale equipment failures in industrial settings is available prior to the failure, but the human experts viewing it don't recognize key signals indicating it is likely.
"All of this data is falling to the ground," Conway said. "A big problem is figuring out how to build tools that work with that data in a way that is valuable."
Building a better algorithm to detect problems involves more than simply analyzing sensor data. It's important to capture expert operator feedback and institutional knowledge to identify potential issues and alert the operators when a problem occurs. Conway said industrial automation is in need of better ways to blend existing approaches to machine learning with expert-driven systems that can provide operators with more actionable feedback.
"We realized that if we could get people in the control room to use software they trusted, it would grow usage," Conway said.
This involves not just predicting problems, but relating these predictions to operators' understanding of how machines work and their different potential failures so operators can take preventative measures.
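The blend Conway describes, statistical detection combined with operator knowledge, can be sketched in a few lines. The sensor names, thresholds and scoring method below are illustrative assumptions, not Alluvium's actual approach:

```python
import statistics

# Hypothetical expert-supplied hard limits (e.g., bearing temperature in
# degrees C, vibration velocity); values are illustrative only.
EXPERT_RULES = {
    "bearing_temp": lambda v: v > 90,
    "vibration": lambda v: v > 7.1,
}

def anomaly_score(history, value):
    """Z-score of the latest reading against recent history."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) / stdev if stdev else 0.0

def check_reading(sensor, history, value, z_threshold=3.0):
    """Blend a statistical signal with an expert rule into actionable alerts.

    The statistical check catches readings that are unusual for this
    machine; the expert rule encodes institutional knowledge about what
    is unsafe in absolute terms. Either can fire independently.
    """
    alerts = []
    if anomaly_score(history, value) > z_threshold:
        alerts.append(f"{sensor}: statistically unusual reading {value}")
    rule = EXPERT_RULES.get(sensor)
    if rule and rule(value):
        alerts.append(f"{sensor}: expert limit exceeded at {value}")
    return alerts
```

Relating alerts back to named expert rules, rather than surfacing a bare anomaly score, is one way to give operators the kind of explanation they already trust.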
Streamlining manufacturing processes
The core principles behind Agile development started in the manufacturing sector as part of lean production processes. For the most part, this has been driven by people identifying sources of waste in manufacturing processes.
Now, enterprises are starting to use AI applications and Agile software development practices to develop AI process automation strategies, said Greg Kinsey, vice president at Hitachi's Insight Group. This is being driven in large part by the rise of industrial IoT and better data management practices.
Traditional lean manufacturing processes work well to optimize highly standardized production lines that don't change much. But they can suffer problems when a production line is constantly adapting in response to market pressures, said Kinsey.
For example, Hitachi has been working with one company that produces polymers. The marketing department found it could significantly increase sales by making custom blends for a variety of uses. The problem was that yield dropped by 30% to 40% each time the production line switched to a new blend.
Hitachi worked with the company to use machine learning algorithms to figure out how to adjust the settings for the equipment for each new production run, which reduced the drop in yield to less than 10%.
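One simple way to frame this kind of settings recommendation is to look up historical runs with similar blends and reuse the settings that yielded best. The data and the nearest-neighbor approach below are purely illustrative assumptions; Hitachi's actual models are not public:

```python
import math

# Hypothetical historical runs: (blend composition, settings, yield).
HISTORY = [
    ((0.2, 0.8), {"temp": 180, "pressure": 2.1}, 0.93),
    ((0.5, 0.5), {"temp": 175, "pressure": 2.4}, 0.88),
    ((0.8, 0.2), {"temp": 170, "pressure": 2.7}, 0.91),
]

def recommend_settings(blend, k=2):
    """Suggest equipment settings for a new blend.

    Find the k historical runs whose blend composition is closest to the
    new one, then return the settings from the highest-yield run among
    them -- a nearest-neighbor sketch of learning settings from data.
    """
    nearest = sorted(HISTORY, key=lambda run: math.dist(blend, run[0]))[:k]
    best = max(nearest, key=lambda run: run[2])
    return best[1]
```

A production system would learn a regression from settings and blend to yield and search over settings, but the principle, letting past runs inform the next changeover instead of restarting from scratch, is the same.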
Agile machine learning for new data sets
The hard part of this AI process automation project wasn't finding the right data; Hitachi worked with the polymer company to identify almost 300 different data streams that might relate to yield. But it wasn't as simple as compiling all these data sets to train algorithms. Each data set had to be cleaned, calibrated and synchronized with the other data sources to produce useful results.
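Synchronization is often the least glamorous step: independently timestamped streams have to be aligned onto a common clock before they can train anything. A minimal sketch, assuming last-observation-carried-forward is an acceptable alignment policy (one common choice among several):

```python
from bisect import bisect_right

def synchronize(streams, grid):
    """Align independently timestamped streams onto a shared time grid.

    Each stream is a list of (timestamp, value) pairs sorted by time.
    For each grid point, carry the stream's last observation forward;
    None marks grid points before the stream produced any data.
    """
    aligned = {}
    for name, samples in streams.items():
        times = [t for t, _ in samples]
        row = []
        for t in grid:
            i = bisect_right(times, t) - 1
            row.append(samples[i][1] if i >= 0 else None)
        aligned[name] = row
    return aligned
```

Real pipelines also need per-sensor calibration and outlier removal before this step, which is where most of the "hard work of cleansing" Kinsey describes tends to go.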
Hitachi worked with the company on an Agile development process that started with the minimum viable data sets, Kinsey said. In the discovery phase, they assessed the predictive value of a few critical variables.
"Once you have a hypothesis, you can think about the data you need and then do the hard work of cleansing, labeling, ingesting and aligning that with the tasks that engineers need to do," Kinsey said.
Hitachi representatives typically spend a month or two on the discovery phase, during which they deliberately avoid talking about applications. In the second phase, they look at specific applications, formulating a hypothesis and creating a minimum viable data set for a potentially larger AI process automation use case.
One of the biggest challenges is making sure you have the right mix of personalities on your team to tackle the different aspects of process automation problems. Highly innovative people are creative, and even though they may make some mistakes, they frequently bring new perspectives to problems. Solution-oriented people look for a stable process. The creative types play a stronger role in the discovery phase, while the solution-oriented types play a stronger role in the deployment phase.
Filling in the data gaps
In many cases, the data required to identify rare but expensive failures does not exist, said Dragos Margineantu, AI chief technologist at Boeing. Airplanes and the maintenance crews that service them collect vast troves of data. But airplanes are rarely grounded or break down in flight, so there is not much recorded data about the edge cases that might cause a plane to fail.
"No matter how much data you collect from real-world processes, it is typically incomplete," Margineantu said. "We have data sets from customers that operate that have not had a single rejected takeoff in four years. This is an event that happens rarely."
Building better algorithms for industrial automation sometimes requires finding ways to make sense of data stored in manuals and tapping into the knowledge of experts. It frequently demands a broad survey of potential sources of knowledge rather than simply building a bigger data set.
AI architecture required
Going forward, Margineantu believes AI process automation will require the development of special application architectures, distinct from those designed for other types of enterprise applications. These could be built from components that can be swapped out, like microservices running in Docker containers. The beginnings of these kinds of architectures are already in use in domains like autonomous cars, which use the Robot Operating System (ROS) framework.
An AI architecture can make it easy to develop and deploy a machine learning algorithm and then quickly switch it out when a better algorithm comes along. Today, Margineantu finds Boeing spends a lot of time developing the application infrastructure that wraps around each new machine learning algorithm.
Robustness is important
It's also important to focus on robustness rather than just accuracy. Systems should be designed to alert humans when an algorithm has trouble reaching a conclusive prediction or recommendation, especially when AI decisions impact industrial equipment.
For example, if an algorithm is trained to identify cats and dogs in pictures, it may struggle with an edge case that includes a picture of a bear. AI systems will have to know how to respond when challenged by new classes.
"If you see a bear, you would like the systems to respond, 'I don't know,' or 'give me more information,'" Margineantu said.
In the long run, this kind of robustness is likely to be built by groups of algorithms that work together.
"I want to remind you that all competitions in machine learning are won by ensembles, since they provide for more robust outputs," Margineantu said.