
Limiting bias and inexperience in AI-powered factories of the future

The United Nations Sustainable Development Goals 8 and 9 are important in the context of Industry 4.0 and industrial IoT. SDG 8 calls for decent work and economic growth, while SDG 9 calls for innovation in industry and infrastructure. The purpose of the SDGs is to improve social conditions and advance humanity, and AI plays a critical role in accomplishing this. Consider the innovation happening in the Industry 4.0 space, where AI systems are proving effective at preventing human error and improving efficiency. Case studies from early AI systems clearly demonstrate that AI can not only improve efficiency metrics, such as yield and throughput, but also reduce material waste and harmful emissions. In these scenarios, AI creates a net gain for society, improving the human condition.

AI can transform humanity by giving people time back to focus on more productive tasks. There are new skills to be learned, and it is clear that certain types of work will be displaced by new ones. For the sake of this article, let’s assume we are able to empower today’s factory workers with new skills that keep them relevant and productive in the age of AI. If we do that, are we all set? Is that the only societal challenge standing between us and the full potential of AI?

In an ideal world, AI systems would work seamlessly with humans to create factories of the future that are lean, efficient and environmentally friendly. But we are far from that ideal world for two reasons: the inadequacy of the current infrastructure in industrial settings for collecting and providing accurate data, and algorithmic bias.

There are different ways of architecting AI systems. The most common is to model the behavior of the world through data and make decisions based on that realized model. The problems with this are easy to see: What if the data is not accurate? What if we don’t have enough data? What if our data only partially captures the world we want to model?
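
To make the pattern concrete, here is a minimal sketch of that data-driven approach. Everything in it, from the sensor names to the linear relationship, is a hypothetical illustration, not a description of any real deployment.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical historical log: fan speed (the input we control)
    # and the cooler temperature that resulted.
    fan_speed = rng.uniform(200, 400, size=1000)                   # RPM
    cooler_temp = 900 - 0.8 * fan_speed + rng.normal(0, 5, 1000)   # deg C

    # "Model the behavior of the world through data": fit a simple
    # linear relationship to the log.
    slope, intercept = np.polyfit(fan_speed, cooler_temp, deg=1)

    # "Make decisions based on the realized model": invert the model
    # to choose an action for a desired temperature.
    def recommended_fan_speed(target_temp):
        return (target_temp - intercept) / slope

    print(recommended_fan_speed(650.0))

Every decision the last function makes is only as good as the log behind it. If the data is noisy, sparse or captures just part of the process, the model inherits those gaps.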

With the recent industrial IoT revolution came a surge of data in factories, which opens the door to applying AI to factory operations. The challenge, however, is that the data falls short in several ways. Data collection processes were never optimized for a future AI application; rather, they were built for simple responsive actions and decision-making. This shows up when the data is used to create machine learning models for smart automation or predictive maintenance tools. Common problems include incorrect sample rates, compressed or lossy data, and incorrect readings from faulty or mechanically degraded sensors.
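
As a rough sketch of how such defects might be caught before they reach a model, consider a basic audit over a time-indexed sensor log. The column name and thresholds below are hypothetical.

    import pandas as pd

    def audit_sensor_log(df, expected_period_s):
        """Flag common industrial-data defects in a time-indexed sensor log."""
        issues = {}

        # Incorrect sample rate: compare actual gaps between readings
        # against the expected logging period.
        gaps = df.index.to_series().diff().dt.total_seconds().dropna()
        off = (gaps - expected_period_s).abs() > 0.1 * expected_period_s
        issues["irregular_sampling_pct"] = float(off.mean() * 100)

        # A flatlined column often means a stuck or failed sensor.
        issues["flatlined_columns"] = [c for c in df.columns if df[c].nunique() <= 1]

        # Physically impossible values point to faulty sensors or
        # mechanical degradation ("cooler_temp_c" is a hypothetical column).
        if "cooler_temp_c" in df.columns:
            issues["negative_temperature_rows"] = int((df["cooler_temp_c"] < 0).sum())

        return issues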

Algorithmic bias in AI, simply put, is a phenomenon where an AI deployment has a systematic error causing it to draw improper conclusions. This systematic error can creep in either because the data used to model and train the AI system is faulty, or because the engineers who created the algorithms had an incomplete or biased understanding of the world.

Several articles have been published about human bias contributing to biased AI systems, and there is well-documented evidence of AI systems showing bias in political preference, racial profiling and gender discrimination. In the context of Industry 4.0 applications, these human biases are as big a problem as data bias.

Going back to the SDGs discussed above, we should aspire to improve the human condition by providing people meaningful work. Take the example of Ernesto Miguel, who has worked at a cement factory as a plant operator for the last 30 years. Ernesto spends most of his time ensuring the equipment under his watch functions efficiently. Over the last three decades, he has formed an intimate bond with the machines in his factory and developed an extraordinary ability to predict what might be wrong with a machine just by the sound it makes. He could do more, such as training other workers to develop the same intuition. He wants to share his expertise, but unfortunately Ernesto spends most of his time reacting to equipment problems and preventing failures. This is a problem ripe for AI.

We deployed one of our AI systems to model a crucial piece of plant equipment, a cooler, in a cement factory. The idea was to learn how adequately we could model equipment behavior from two years’ worth of time series data. The data provided a great deal of insight into how the cooler was operating, and our engineers were able to identify correlations between different inputs to the equipment and its corresponding operating conditions.
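
A rough sketch of what such a correlation study might look like, assuming the two years of cooler history sit in a timestamped CSV; the file and column names are hypothetical.

    import pandas as pd

    df = pd.read_csv("cooler_history.csv",
                     parse_dates=["timestamp"], index_col="timestamp")

    # Resample to a common rate so signals logged at different
    # frequencies can be compared side by side.
    hourly = df.resample("1h").mean()

    inputs = ["fan_speed_rpm", "grate_speed", "clinker_feed_tph"]
    conditions = ["cooler_temp_c", "back_grate_pressure_mbar"]

    # Correlate each input against each operating condition.
    correlations = hourly[inputs + conditions].corr().loc[inputs, conditions]
    print(correlations)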

If this worked flawlessly, we would accomplish two goals: keep the equipment functioning optimally through smart AI systems, and free Ernesto to focus on more meaningful work, such as effectively training other factory workers.

Bias creeps in inadvertently when AI system designers confuse data with knowledge.

It was a big moment when the first AI system was deployed in the cement plant. We don’t yet live in a world where we can trust machines completely, and for good reason. So, there was a safety switch included for the plant operator to intervene if something went wrong. The first exercise was to run the software overnight, where the AI system monitored the cooler and was responsible for keeping it within safe bounds. To the delight of everyone, the system successfully ran overnight. But that joy was short-lived when the first weaknesses in the model started appearing.

The cooler temperature was increasing, and the model, having established a correlation between temperature and fan speed, kept increasing the fan speed. In the meantime, the back grate pressure rose above its safe value. But the model had identified no correlation between back grate pressure and temperature, so it saw no need to adjust the back grate pressure in its pursuit of bringing down the cooler temperature. The plant operator overrode the control and shut off the AI model.

An experienced plant operator would have immediately responded to the rising back grate pressure, as it is detrimental to the cooler’s operation. How did the AI model miss this?

In his 30 years, Ernesto never had to wait for the grate pressure to build up before reacting. He just knew when the pressure would build and proactively controlled the parameters to ensure it never crossed its safe bound. Merely by looking at the data, the AI engineers had no way to determine this: without context, the data would tell you that the grate pressure would never be a problem.
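
A toy simulation makes the point. Assume, hypothetically, that back grate pressure physically tracks cooler temperature, but that an operator like Ernesto always intervenes early and holds the pressure near a setpoint. The logged data then shows almost no correlation, exactly the blind spot our model fell into. All numbers below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    temp = rng.uniform(600, 900, n)  # cooler temperature, deg C

    # Underlying physics (invisible in the log): pressure rises with
    # temperature and would breach a safe bound at the high end.
    uncontrolled_pressure = 10 + 0.06 * temp + rng.normal(0, 1, n)

    # What actually gets logged: the operator acts proactively, so the
    # recorded pressure just hovers near his setpoint.
    logged_pressure = 45 + rng.normal(0, 0.5, n)

    print(np.corrcoef(temp, uncontrolled_pressure)[0, 1])  # strong (~0.98)
    print(np.corrcoef(temp, logged_pressure)[0, 1])        # roughly zero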

Bias hurts AI systems in many ways, but the biggest is that it erodes trust in them. On top of watching his workers and equipment, Ernesto now has to watch the AI models. He has to teach the system to do things differently, the system has to learn, and the next versions will improve. This will always be a problem when we model AI systems purely from data, because in industrial IoT settings that data will inevitably be incomplete or inaccurate.

As technology builders, what does this mean for us? How do we realize the full potential of industrial AI systems? The answer lies in designing these systems with empathy and taking a thoughtful approach:

  • We cannot assume that data is a complete representation of the environment we are aspiring to model.
  • We need to spend time doing contextual inquiry — a semi-structured interview guided by questions, observations and follow-up questions while people work in their own environments — to understand the life of the workers who we are trying to empower AI systems with.
  • We need to assess all the possible scenarios that could occur in the problem we are trying to solve.
  • We need to always start with a semi-autonomous system and only transition to a fully autonomous one when we are confident of its performance in production environments (see the sketch after this list).
  • We should continually adapt and train models to learn about the environment we are operating in.
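
As one illustration of the semi-autonomous point above, here is a minimal sketch of a guard that sits between a model and the equipment. The variable names and limits are hypothetical; the pattern is simply that the operator’s override and hard safety bounds always outrank the model.

    from dataclasses import dataclass

    @dataclass
    class SafetyEnvelope:
        max_fan_speed_rpm: float = 400.0
        max_back_grate_pressure_mbar: float = 60.0

    def apply_action(proposed_fan_speed, back_grate_pressure,
                     envelope, operator_override):
        """Return the fan speed to apply, or None to hand control to the operator."""
        # The operator's safety switch always wins over the model.
        if operator_override:
            return None
        # Refuse to act while a monitored variable is outside its bound,
        # even one the model was never trained to care about.
        if back_grate_pressure > envelope.max_back_grate_pressure_mbar:
            return None
        # Clamp the model's proposal before it reaches the hardware.
        return min(proposed_fan_speed, envelope.max_fan_speed_rpm)

Only when a guard like this stays quiet for long enough in production does it make sense to widen the model’s autonomy.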

Bringing AI into factory settings is about more than just technology. It is about people, and about acting with empathy and understanding toward the people whose lives the technology is going to touch.

