AI is transforming the world as we know it. Contextual awareness paired with AI is opening the door to many positive solutions for healthcare, environmental protection, conservation, smart cities and public safety. Enterprise AI applications also proliferate in marketing and sales, HR and recruiting, security, autonomous operations and financial services. On the other hand, the rapid advancement of AI also raises questions and concerns around data ethics, which are only beginning to be addressed.
As a case in point, the New York Police Department (NYPD) has faced AI bias concerns over its new crime-analysis AI tool. The tool is intended to help identify crime patterns for faster response and crime prevention. To avoid bias, the NYPD removed race, gender and location attributes from the historical data used to train the tool. However, as one analyst noted, if racial or gender bias played any role in past police actions, the tool's predictions will still be affected by race and gender, even though those attributes are not explicitly captured in the data. As the AI market continues to explode, clarifying an ethical approach to scenarios like these will be paramount.
How fast is the AI market exploding?
IDC predicts that global spending on AI systems will reach $79.2 billion by 2022, a compound annual growth rate of 38% between 2018 and 2022. For a good visual reference of just how fast the AI topography is evolving, compare the AI business landscape from five years ago to the current 2019 AI landscape. The number of AI inventions follows a similar path of acceleration, as this visualization of AI patents illustrates.
What’s to fear? The rise in AI ethics
If AI can really help us make the world better, then what’s the problem? Consider this example: A city has a network of smart traffic lights to reduce congestion. AI algorithms time the lights to minimize traffic buildup and capture license plate images of any vehicles that fail to stop for a red light. The system automatically matches license plate numbers to vehicle owners, enabling the city to process traffic violations faster and more effectively.
In this scenario, you may not be surprised to receive a traffic ticket in the mail. But what if your insurance company notifies you that your rates are going up as a result? While that raises questions around citizen privacy, addressing AI bias may be an even bigger challenge. For instance, what if the AI model that determined your initial insurance rate was trained on historical data that contained bias with respect to your race, gender or education level? In fact, IBM researchers working to mitigate AI bias have identified and classified more than 180 human biases that could affect AI models.
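To make the insurance-rate concern concrete, one common way to check a trained model for group-level bias is to compare how often each group receives the favorable outcome. The sketch below is a hypothetical illustration of one simple fairness metric (the demographic parity gap); the function name and toy data are invented for the example, not part of any specific insurer's or vendor's toolkit.

```python
def demographic_parity_gap(decisions, groups):
    """Return the absolute gap in favorable-outcome rates between groups.

    decisions: list of 0/1 model outcomes (1 = favorable, e.g. a low rate)
    groups:    list of group labels, one per decision
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, favorable = counts.get(g, (0, 0))
        counts[g] = (total + 1, favorable + d)
    rates = {g: fav / tot for g, (tot, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Toy data: group "A" gets the favorable outcome 3 times out of 4,
# group "B" only 1 time out of 4 -- a gap a fair model should not show.
decisions = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A large gap on held-out data is a signal to audit the training set, since a model trained on biased historical decisions will happily reproduce them.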
Most AI ethics concerns fall into three general areas:
- Human and machine interaction. Machines impersonating or fooling humans; autonomy gone wrong in weapons, accidents or rogue robots; and human abuse of machines.
- Data collection. Collecting more data than is needed; collecting data without permission; surveillance; selling or connecting different data sets without permission; and perpetuating bias and human error contained in data used to train AI.
- Data use. Deep fake videos, fake news and social manipulation; invasion of privacy; social grading that affects insurance, credit or jobs; and discrimination.
Fortunately, many governments and companies with a stake in AI are aware of the potential for bias and are working to implement ethical approaches. For example, the European Union has published a set of guidelines on ethical AI for companies and governments. The U.S. Department of Defense is hiring an AI ethicist. Industry consortiums such as The Partnership on AI and the IEEE have developed guidelines on AI ethics. The Partnership on AI, which includes 100 corporate and nonprofit members, also recently announced a research fellowship to advance diversity and inclusion in AI. Many tech giants, including Google, Microsoft and IBM, have developed their own AI ethics guidelines, and Amazon has committed $10 million to AI fairness research.
Growing ethical AI is bigger than any one stakeholder or policy, but these approaches typically include principles of fairness, inclusion, transparency, privacy, security, accountability and societal benefit. Data is at the heart of many of these areas, because much of the challenge boils down to how data is collected and used to train AI models.
How trusted data exchanges could help
AI models are only as good as the data used to train them. Larger sample sizes with diverse characteristics, cleaned of bias and errors, result in fairer, more accurate AI models. Additional data sources added over time can also improve AI models. For example, Siemens built an AI model that analyzes sensor data from trains paired with historical data to predict faulty components. In the future, the company could add data from other sources, such as route-based weather forecasts, to further improve the model.
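The enrichment step described above, joining an existing data set with a new external source before retraining, can be sketched in a few lines. This is a hypothetical illustration only: the field names, dates and values are invented, and Siemens' actual pipeline is not public.

```python
# Invented example records: train sensor readings and a second data
# source (route weather), keyed by date.
sensor_readings = [
    {"train_id": "T1", "date": "2019-06-01", "vibration": 0.82},
    {"train_id": "T2", "date": "2019-06-01", "vibration": 0.31},
]

weather_by_date = {
    "2019-06-01": {"temp_c": 28.0, "humidity": 0.65},
}

def enrich(readings, weather):
    """Merge weather fields into each sensor reading by date, so a
    retrained model can learn weather-dependent failure patterns."""
    return [{**r, **weather.get(r["date"], {})} for r in readings]

features = enrich(sensor_readings, weather_by_date)
print(features[0])
# {'train_id': 'T1', 'date': '2019-06-01', 'vibration': 0.82,
#  'temp_c': 28.0, 'humidity': 0.65}
```

In practice the join key, granularity and data-sharing terms are exactly what a trusted data exchange has to get right before this kind of enrichment is possible at all.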
Data exchanges are emerging as one way for organizations to better use, and even monetize, the data they have, as well as to tap into data from other sources. Combining data from different sources across IoT data exchanges can improve AI models, yield deeper insights and open the door to new services. Trust is an essential building block.
Data exchanges built on the principles of a data trust are better equipped to address data privacy concerns. By directly and securely interconnecting IoT ecosystem participants, data and algorithms, a trusted data exchange can generate maximum shared value while keeping data private and safe in transit on low-latency, high-bandwidth infrastructure.
As Jeni Tennison, CEO of The Open Data Institute, put it, “We only unlock the full value of data when it gets used, so we really need to find good ways to share data more widely without putting people at risk.”