Many executives who fund AI initiatives do not realize the extent of unknown risks that can accompany working with AI -- risks that can lead to significant losses and poor public image.
Most business leaders and AI teams want to seize the first-mover advantage or capitalize quickly on investor or marketplace dynamics, so they fail fast, move fast and break things. But as generative AI gives systems the ability to write their own code with ever less human oversight, we all must slow down and take steps to deliver more trustworthy technology.
Consider the harm that untrustworthy AI has already caused -- the incorrect cancer treatment advice from the AI-based Oncology Expert Advisor program that cost its developer $62 million, Amazon's AI hiring tool that was biased against women, and facial recognition software that caused innocent people to be arrested.
These incidents broke the trust of investors, customers and the public. They generated bad press and class-action lawsuits and damaged corporate reputations, which in turn decreased market valuations. They also fueled calls for regulation from policy organizations and retribution against the offending companies from activists.
The good news is that these kinds of problems can be minimized. First, you need to know what trustworthy AI is, then you need to know how to deliver it.
What is trustworthy AI?
In a field moving as fast as artificial intelligence, identifying the characteristics of a trustworthy AI system is difficult. Attributes like safety, accuracy and fairness can be tested mathematically and with certainty in some AI applications but, as the examples of AI use cases cited above demonstrate, these same attributes can be difficult if not impossible to guarantee in other applications of AI.
Still, we won't be able to build trustworthy AI systems unless we know what trustworthy AI means to us. What are the basic tenets that help build trust in AI systems?
Below is a list of the 12 Tenets of Trust, outlined in my book What You Don't Know: AI's Unseen Influence on Your Life and How to Take Back Control. The tenets are the result of my experiences working with Fortune 100 AI teams to successfully scale and operationalize their AI initiatives. Trust-building was always the single most important factor in successful AI project delivery.
The 12-step program is for data scientists and developers who understand that bad things can happen when AI goes awry and who have assumed a leadership role in reforming the move-fast-and-break-things culture AI has grown up in.
Each of the tenets below is accompanied by a pledge that spells out the steps AI leaders will need to take to uphold the tenet.
- Humane. We evaluated whether our use of AI served humanity. We asked whether the AI we developed could cause more harm than good to society, the environment and an individual's pursuit of life, freedom and happiness. We evaluated the likelihood that our AI would be used by bad actors in unintended and harmful ways, and we set safeguards in place to deter such actors where relevant. We conducted a risk and impact assessment to understand whether our AI use case carried a high, medium or low risk of causing serious harm.
- Consensual. We sought permission from individuals, business partners or third parties to use their data for the purpose of the specific AI that was developed.
- Transparent. For AI we developed that would influence decisions about an individual's life, livelihood or happiness, we actively informed affected people -- in language understandable to them -- that these algorithms exist and what impact they could have.
- Accessible. We not only informed individuals whose data we used for our AI, but we also documented the decisions the AI made regarding them and made that information available to them online or in an app. A person affected by our AI decisioning system can check its results about them at any time, unencumbered.
- Agency-imbuing. We set up and communicated an appeals program for individuals who feel the algorithm's recommendations or source data about them may be incorrect.
- Explainable. We explained the AI's decisions and sources in plain words.
- Private and secure. All information used to develop, train, deploy, manage and govern the ongoing operations of the AI system is kept private and secure by design, including when third-party vendors and business partners are involved.
- Fair and quality data. The data used to train and develop the AI system met sound data standards: it was thoroughly analyzed and adjusted for biases, bad data and missing data. Sound, appropriate proxies were used for data that was unobtainable or missing.
- Accountable. If any part of the AI system malfunctions, the company has designated, trained and publicly identified the people responsible for fixing it. Corporate policies and guidelines related to the new AI system have been updated to ensure there is always an incident response plan for any emergency that may arise with the AI system.
- Traceable. We have set up monitoring tools, processes and staff to identify which part of an AI system went wrong and when it happened.
- Feedback-incorporating. We have provided ways for users, impacted people or experts to give input into the AI system's ongoing learning. Where possible, we ensured diversity of feedback in this process to help mitigate any bias that could creep into the system upon release and into the future. This could be as easy as a thumbs up or down rating on each piece of data and each decision.
- Governed and rectifiable. If the AI system fails or becomes biased or corrupted, the company can detect it right away with model drift and data drift monitoring tools, processes and designated people. We have an incident response plan in place to rectify emergencies and to ensure safety and liberties are protected while a high-stakes AI system is down.
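The drift monitoring that the traceability and governance tenets call for can be made concrete with standard statistical checks. Below is a minimal sketch of data drift detection using the Population Stability Index (PSI), a common technique for comparing a feature's training-time distribution with its live distribution; the function name, sample data and the 0.2 alert threshold are illustrative assumptions, not specifics from the book.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a feature at training time (expected)
    with its live distribution (actual). A PSI above roughly 0.2 is a
    common rule of thumb for significant drift."""
    # Bin both samples using the training-time quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) on empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative data: the live feature has shifted since training.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live_scores = rng.normal(1.0, 1.0, 10_000)   # live feature, mean has drifted

psi = population_stability_index(train_scores, live_scores)
if psi > 0.2:
    print(f"ALERT: data drift detected (PSI={psi:.2f}) -- trigger incident response")
```

In a production governance program, a check like this would run on a schedule for each monitored feature and model score, with alerts routed to the designated incident response owners named in the accountability tenet.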
Why we need trustworthy AI
At a macro level, AI is the bedrock of world-impacting systems that can affect areas such as income-based inequality, via the use of AI in financial credit scoring and hiring systems; the environment, due to generative AI's outsized carbon footprint and water cooling requirements; and diplomatic tensions between nations, as with the suspected Chinese surveillance of U.S. citizens via TikTok.
At the micro level, AI affects individuals in everything from landing a job to retirement planning, securing home loans, job and driver safety, health diagnoses and treatment coverage, arrests and police treatment, political propaganda and fake news, conspiracy theories, and even our children's mental health and online safety.
We need trustworthy AI to ensure the following:
- Physical safety. Examples include autonomous vehicles, robotic manufacturing equipment and virtually any human-machine interactions where AI is deployed in split-second decision-making.
- Health. Examples include AI-assisted robotic surgeries, AI-assisted diagnostic imaging tools, clinical decision support, generative AI-based chatbots and health insurance claims systems.
- Ability to secure necessities to live, such as food and housing. Examples of AI systems affecting these are hiring systems, financial credit scoring systems and home loan systems.
- Rights and liberties. AI systems that can violate rights include predictive policing systems, facial matching programs and judicial recidivism prediction systems.
- Democracy. AI systems that can violate democratic principles include overreaching AI surveillance programs, divisive fake news created with generative AI, propaganda, conspiracy theories and terrorist recruitment spread via AI recommendation engines.
Key benefits of trustworthy AI
AI is a huge investment for your company, a key competitive differentiator or perhaps even a major cost-efficiency play. Whatever the case, you need to ensure it is trustworthy so you can reap the following AI benefits:
- Improve your brand reputation.
- Increase your market competitiveness.
- Garner greater AI return on investment.
- Increase your company and clients' AI adoption rates.
- Decrease your risk of class-action lawsuits.
- Improve regulatory readiness.
- Decrease your risk of a major harm-causing incident.
- Increase public trust.
Cortnie Abercrombie is the CEO and Founder of AI ethics nonprofit AI Truth and author of What You Don't Know: AI's Unseen Influence on Your Life and How to Take Back Control. Reach her at [email protected].