How do you know your AI is making the right choices?
There is a lot of conversation around data right now. Its value continues to increase, whether as something that can be monetized for profit or savings, or as a means of improving understanding and operations for a business that uses it effectively. That’s why the advent of artificial intelligence (AI) and machine learning, which allow us to quickly gain insights from vast quantities of previously siloed data, has been such an incredible revolution. Most organizations are gaining actionable insights from smart systems, but how do we stay accountable in the age of the machine? How do we know we can trust the data and the resulting insights when human beings are a smaller part of the equation?
There are three things to consider when determining how to create a structure of transparency, ethics and accountability with data. The first is access to the raw data, whether it comes from a sensor or a system. It is critical to maintain a transparent, easily accessible pathway back to that raw data. In a building context, for example, analytics can quickly search through video footage during a time-sensitive security event to identify and pursue a perpetrator, filtering on clothing color, gender or other details rather than requiring a manual search through hours of footage. It remains important, however, to retain access to the original footage so that the findings cannot be challenged by claims that the footage was edited.
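To make the raw-data pathway concrete, here is a minimal sketch of attribute-based event filtering in Python. The event records, field names and footage paths are hypothetical, invented for illustration; they do not reflect any real video analytics API.

```python
# Sketch: filtering analytics events while preserving a pathway back
# to the raw footage. Schema and values below are illustrative only.

def find_events(events, **criteria):
    """Return events matching every given attribute criterion."""
    return [e for e in events
            if all(e.get(k) == v for k, v in criteria.items())]

# Hypothetical analytics output; each record keeps a reference to the
# unedited source clip so findings stay traceable to the raw data.
events = [
    {"camera": "lobby-1", "clothing_color": "red",
     "raw_footage": "/archive/lobby-1/2023-05-01/10.mp4"},
    {"camera": "lobby-1", "clothing_color": "blue",
     "raw_footage": "/archive/lobby-1/2023-05-01/10.mp4"},
]

matches = find_events(events, clothing_color="red")
```

The point of the `raw_footage` field is the article’s point: every derived result carries a pointer back to the unedited original.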
The second consideration is context. Raw data by itself may not make any sense; it needs some level of context around it. Without that, there is no picture of what’s going on. This is true for humans, and it is true for machines. Decision-making is a result of information, but also of the relevant context that informs what action is taken. For another building systems example, consider an uncomfortably warm indoor temperature. On its own, this could lead to the belief that the HVAC system is not functioning. However, if a meeting is also taking place that has resulted in higher-than-average usage of the space, that is an important factor. Without the ability to make determinations based on both data AND its context, systems wouldn’t be considered “smart.”
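As a rough illustration of deciding from data plus context, here is a sketch in Python. The setpoint, tolerance, occupancy threshold and return messages are all assumed values for illustration, not taken from any real building system.

```python
# Sketch: a raw sensor reading alone says "too warm"; context
# (occupancy) decides what that reading actually means.
# All thresholds below are illustrative assumptions.

SETPOINT_C = 22.0   # assumed comfort setpoint
TOLERANCE_C = 2.0   # assumed acceptable deviation

def diagnose(temp_c, occupancy, room_capacity):
    """Interpret a temperature reading in light of room occupancy."""
    if temp_c <= SETPOINT_C + TOLERANCE_C:
        return "normal"
    # The room is too warm; context determines the conclusion.
    if occupancy > 0.8 * room_capacity:
        return "warm but heavily occupied: increase cooling, no fault"
    return "warm with low occupancy: possible HVAC fault"

print(diagnose(26.0, occupancy=18, room_capacity=20))
print(diagnose(26.0, occupancy=2, room_capacity=20))
```

The same reading of 26°C leads to two different conclusions depending on context, which is exactly the distinction the paragraph above draws.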
The third item to prioritize is data security. People need to feel their information is safe from bad actors and misuse. Strategies such as two-factor authentication for logins and financial transactions have already increased security in some settings, but security must remain a priority moving forward.
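As one concrete example of how two-factor codes work under the hood, here is a sketch of the HMAC-based one-time password (HOTP) algorithm from RFC 4226, on which the familiar time-based 2FA codes (TOTP, RFC 6238) are built by deriving the counter from the clock. This is an educational sketch, not production security code.

```python
# Sketch: HOTP (RFC 4226), the basis of most two-factor login codes.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

TOTP simply calls the same function with `counter = int(time.time()) // 30`, which is why a code on your phone changes every 30 seconds.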
Machines as decision-makers: An ethical question
For nearly all of history, decision-making has been a human prerogative, where judgments about right or wrong can be applied (the subjective nature of right and wrong notwithstanding). When the machine becomes the decision-maker, an ethical question arises. How do we maintain data trustworthiness as the need for human involvement decreases? In other words, how do we hold machines accountable?
It comes down to transparency: access to raw data, as mentioned above, but also referential integrity for all data and the context used to analyze it. Here, knowledge graphs, a visual way to represent the “thought process” of intelligent technologies, become very useful. They provide a way to visualize how AI, in whatever form it may take, arrived at its decision. Just as there are traditional practices to keep humans accountable for their decisions and resulting actions, machines must have standards put in place now.
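A minimal sketch of the idea, using plain Python dictionaries rather than a real knowledge graph library; the node names and edges are invented for illustration. Each conclusion points at the observations and context it was derived from, terminating in raw-data sources.

```python
# Sketch: a decision's "thought process" as a tiny provenance graph.
# Conclusions link to their supporting evidence, which links back
# to raw data. Node names are illustrative assumptions.

graph = {
    "decision:hvac_ok":    ["ctx:meeting_in_room", "obs:temp_high"],
    "ctx:meeting_in_room": ["raw:calendar_feed"],
    "obs:temp_high":       ["raw:sensor_reading"],
}

def explain(node, graph, depth=0):
    """Walk from a conclusion down to its raw-data sources."""
    lines = ["  " * depth + node]
    for parent in graph.get(node, []):
        lines.extend(explain(parent, graph, depth + 1))
    return lines

print("\n".join(explain("decision:hvac_ok", graph)))
```

Walking the graph answers the accountability question directly: every decision can be traced, edge by edge, back to the raw data that justified it.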
In addition to knowledge graphs, having insight into the learning process of your technology through a “digital twin” is a crucial part of machine accountability. Seeing a representation of physical systems and playing out scenarios to answer “what if” questions gives insight into the decision-making process. This provides peace of mind and confidence in the machine’s ability to properly analyze the information it is collecting, whether from an HVAC, security or lighting system, or something else entirely.
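A toy sketch of the “what if” idea follows; the thermal model and its coefficients are invented assumptions, not a real digital twin or building model. The point is only that the same model can be run under alternative scenarios before anything happens in the physical space.

```python
# Sketch: a toy "digital twin" of room temperature, used to play out
# "what if" scenarios. Coefficients are illustrative assumptions.

def simulate(hours, occupants, cooling_on, start_temp=22.0):
    """Step a simple hourly room-temperature model forward."""
    temp = start_temp
    history = []
    for _ in range(hours):
        temp += 0.3 * occupants           # assumed heat gain per occupant
        temp -= 2.0 if cooling_on else 0.0  # assumed cooling effect
        history.append(round(temp, 1))
    return history

# What if 10 people meet for 3 hours, with cooling off vs. on?
print(simulate(3, occupants=10, cooling_on=False))  # [25.0, 28.0, 31.0]
print(simulate(3, occupants=10, cooling_on=True))   # [23.0, 24.0, 25.0]
```

Comparing the two runs is the “what if” question in miniature: the twin shows the consequence of each choice before it is made in the real building.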
In the future, when machines are driving more decisions, we want technologies to maintain the ethical and authentic practices around access to private information that we see now. It is important to establish best practices today and to understand both the decisions being made and the learning processes machines are using. With any decision, whether made by a human or a machine, transparency and a logic chain are essential for accountability and peace of mind. By establishing that logic chain, it becomes possible to understand why machines make the recommendations they do, providing accountability and the opportunity to adjust accordingly for a smarter building.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.