Leading AI with ethics: The new governance mandate
Ethical AI governance is now a boardroom priority, enabling organizations to curb bias, ensure accountability and build trust as a strategic advantage.
As AI reshapes the modern enterprise, ethics is becoming a core leadership imperative. Conversations that once lived inside engineering teams and legal circles now shape boardroom decisions around how AI affects trust, reputation and long-term value.
Rising expectations across the business ecosystem are driving this shift. Customers want clarity about how AI shapes the products and services they buy. Employees want confidence that AI will help them with their work rather than replace them. Partners and investors are scrutinizing whether organizations can demonstrate responsible AI practices at scale. In this environment, ethical AI isn't optional but foundational to organizational credibility.
Joe Kaufmann, global head of privacy and data protection officer at Jumio, an online identity verification and payments company, views this evolution as a sign that executives are becoming comfortable with AI and accountable for how it's used.
"Early debates focused on technical details, such as training data, guardrails and model mechanics," Kaufmann said. "Today, leaders are grappling with what AI should be allowed to do, and how its use reflects company values and brand trust. As a result, AI ethics has become a core leadership issue."
The speed at which AI-related risks can surface has also elevated ethics to the board level. While corporate leaders have always worried about risk, generative AI changes the risk profile by making failures faster, more visible and harder to contain than those in traditional AI and IT systems, said Neil Sahota, a United Nations AI advisor and former CEO of ACSILabs, a cognitive science and AI research organization.
"With generative AI, failures become public facing instantly," he said. "One screenshot can create reputational damage, regulatory scrutiny and customer churn in a day."
Security leaders charged with managing AI risk also emphasize accountability. AI ethics is less about intent and more about accountability, according to Jill Knesek, CISO at BlackLine, a financial operations and accounting automation platform. Weaving accountability into both the design and operational governance of AI systems is critical to ensuring innovation doesn't outpace trust, she said.
"If leaders can't clearly explain why a system made a decision or who's accountable when something goes wrong, trust breaks down quickly," she said.
At its core, AI ethics refers to the responsible design, deployment and oversight of AI systems in ways that are fair, transparent, accountable and aligned with human values. It aims to minimize harm, reduce bias, protect privacy and ensure AI-based systems serve people and society.
Ethical AI is fundamentally about the societal impact of the technology, said Patrizia Bertini, founder of Euler Associates, a UK consultancy specializing in operationalizing compliance for digital products. "If systems are designed without ethical constraints, they can harm humans and reproduce or amplify biases that already exist in society," she said.
While today's conversations often center on AI regulation, the roots of ethical AI predate the laws. Long before compliance mandates, developers, researchers and early adopters grappled with questions of bias, fairness and accountability, working to prevent harm from opaque models and flawed training data.
Early discussions of ethical AI were driven less by compliance requirements and more by a sense of responsibility, BlackLine's Knesek noted, particularly focusing on how AI systems could reinforce existing inequities or produce unintended outcomes. Regulation later formalized these expectations, providing businesses with a standardized framework as AI moved into real-world applications, she said.
Understanding this history helps explain why many organizations still struggle with governance today. "Regulation didn't create ethical AI; it exposed where organizations hadn't operationalized it," Knesek said. "Governance only works when ethical expectations already exist inside the business."
Many organizations pursue ethical AI primarily because of regulation, legal risk or reputational pressure. But when core ethical principles are embedded across the AI lifecycle, ethics guides everyday decisions and culture. Too often, organizations treat it as a checkbox, doing the minimum rather than fully embedding it in their processes, workflows and culture.
"What it takes to achieve ethical AI isn't a mission statement," Bertini said. "It's design choices, governance choices, operational practices and core ethical AI principles that prevent harm before systems are deployed and keep preventing harm as systems evolve."
Real-world examples of ethical AI in action
Across industries, the presence or absence of ethical oversight often determines whether AI builds trust or sparks backlash. For example, in 2018, Amazon scrapped an AI recruiting tool after discovering it systematically penalized resumes from women. The system had been trained on historical hiring data that amplified existing gender biases, showing how well-intentioned AI initiatives can fail when fairness, bias mitigation and transparency aren't built in from the start.
Another cautionary tale is Microsoft's Tay chatbot. It was launched in 2016 as an experiment in conversational AI and began producing offensive and inflammatory tweets after interacting with users online. The incident highlighted how AI systems can behave unpredictably without ethical guardrails, monitoring and governance.
"These failures weren't surprises," Knesek said. "They were the result of deploying powerful systems without clearly defined ethical boundaries or escalation paths."
But ethical AI also drives positive outcomes. In healthcare, Stanford University's AI model for detecting skin cancer showed how diverse training data can improve equity and early detection for underserved populations.
In finance, FICO used explainable AI to create a more transparent model for credit scoring. And in tech, Google's AI principles and review processes have shaped decision-making, including walking away from certain government contracts -- a reminder that governance can shape responsible growth even at the cost of short-term revenue.
Trust is especially fragile when AI intersects with sensitive technologies such as biometrics. Mistakes or opaque decisions in these areas can lead to privacy violations, biased outcomes and public backlash, making it harder for organizations to gain acceptance. Techniques such as explainable AI make systems more transparent, understandable and accountable. Narrowly defined AI tasks within transparent workflows can reduce public concern and build consumer confidence, according to Jumio's Kaufmann.
But transparency alone isn't always sufficient. In some cases, such as with Amazon's recruiting tool, the most ethical decision is choosing not to deploy AI at all. "Just because AI can be used doesn't mean it should be," Knesek said. "Strong governance gives leaders the confidence to say no when use cases introduce unnecessary risk."
Several issues must be considered and overcome when establishing an AI ethics framework.
The dangers of unethical AI
Without strong safeguards, AI can introduce new forms of organizational risk, including model bias, hallucinations, privacy breaches, model drift and reputational exposure. As AI systems scale and adapt, these risks evolve quickly, making ad hoc controls insufficient.
These risks can take many forms. The following are some areas where unethical AI can have serious consequences. They highlight why ethics and governance shouldn't be treated as an afterthought and must be woven into every stage of an AI initiative.
Bias and discrimination
AI systems trained on historical data can unintentionally perpetuate existing inequities, leading to unfair outcomes in hiring, lending, healthcare and various other areas. Without proactive mitigation, these biases can reinforce systemic discrimination.
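To make that mitigation concrete, consider the following minimal Python sketch of a disparate-impact check based on the four-fifths rule, a common screening heuristic from U.S. employment practice. The decisions, group labels and data here are hypothetical and illustrative, not a reference implementation.

```python
# Minimal sketch of a disparate-impact check on model outcomes,
# using the four-fifths rule. All data here is hypothetical.

def selection_rate(decisions, groups, group):
    """Share of applicants in `group` that the model selected."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Toy outcomes: 1 = selected, 0 = rejected (illustrative only).
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")  # 0.60
rate_b = selection_rate(decisions, groups, "B")  # 0.20

# Four-fifths rule: flag potential adverse impact when the lower
# selection rate is less than 80% of the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact -- review training data and features.")
```

A check like this is only a starting point; in practice, such metrics are paired with ongoing monitoring and human review.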
Decision-making opacity
Black box AI algorithms produce decisions without transparency, eroding trust. When organizations can't explain how an AI system reached an outcome, its credibility and safety come into question.
"Without visibility into training data, bias risks and failure modes, organizations might not understand where systems fail, or why," Euler Associate's Bertini said. Strong governance and explainable AI practices are essential to prevent these lack-of-transparency risks.
Privacy and data misuse
Sensitive data must be carefully governed. Unethical handling of personal information, whether through breaches, misuse or unauthorized profiling, can expose individuals to harm and organizations to legal and reputational consequences.
Societal and human impact
AI operating without ethical oversight can result in wrongful denials of service, surveillance overreach and other harms, triggering public backlash and regulatory scrutiny. Leaders often underestimate how difficult it is to undo AI mistakes. "Unlike traditional software," Knesek said, "systems trained on flawed or biased data can't simply be fixed after deployment, raising the cost of ethical failures."
This isn't just a technical issue; it's also a leadership one, she added. "When AI risk isn't integrated into enterprise risk management, ethical failures become business failures," she said. "Boards need to treat AI oversight with the same seriousness as cybersecurity or financial controls."
How ethical AI informs successful AI governance
Ethics and governance are closely intertwined. Ethics is the lens leaders use to judge whether AI initiatives deserve trust, and AI governance provides the structure to enforce those principles. Simply put, ethics sets the standard, and governance makes it operational. This approach gives leaders confidence that AI initiatives align with both organizational values and societal expectations.
Transparency and explainability are key to building that trust. Leaders need to understand how systems work, how decisions are made and how outputs can be challenged. "Trust follows when organizations consistently demonstrate fairness and accountability," Bertini said.
But understanding alone isn't enough; people must also feel confident that they can use AI responsibly without being unfairly blamed. This is where accountability comes in. Business users often hesitate to use AI, not because they dislike the technology, but because they fear being held responsible for decisions an opaque system makes.
Strong governance resolves this tension by creating repeatable expectations, so teams don't have to debate definitions or responsibilities every time a system is deployed. "Most business users aren't rejecting AI because they hate technology," U.N. advisor Sahota said. "They're rejecting it because they fear being blamed for an opaque outcome."
Effective governance goes beyond rules and documentation, Knesek said. "It's about embedding ethics into every workflow so teams can act responsibly without second-guessing whether a system aligns with organizational values." That consistency reinforces trust across the organization.
The value of these practices is clear at the executive level. The 2025 Investment Management Compliance Testing survey of 577 compliance professionals at U.S. investment advisory firms found that executives rank AI governance among the top enterprise compliance concerns. This is on par with cybersecurity and anti-money laundering, signaling that embedding ethics into governance is a strategic priority.
Organizations that integrate ethics into governance gain strategic benefits, including the following:
Trust and credibility. Fair, explainable and accountable AI systems strengthen relationships with customers, partners and employees.
Brand differentiation. Organizations recognized for responsible AI gain reputational advantages that competitors can't easily replicate.
Operational confidence. Clear governance reduces uncertainty, enabling faster and safer deployment of AI initiatives.
Innovation enablement. Ethical frameworks provide safe boundaries for experimentation without risking public backlash.
Regulatory frameworks, such as the European Union AI Act, further underscore why governance matters. Under these frameworks, strong oversight is essential not only for compliance but also for competitiveness, trust and long-term organizational success. "The EU AI Act sends a clear message," Bertini said. "Boards are accountable for how systems are built, used, documented and monitored."
Building governance that puts ethics first
Embedding ethical principles at every stage of the AI lifecycle ensures AI governance evolves alongside adoption. In practice, this means moving ethics out of policy documents and into system design and day-to-day decision-making processes.
When ethics is treated as a checkbox, accountability breaks down. Embedding ethical and privacy considerations into system design is critical; without that foundation, organizations often can't explain or defend AI-driven decisions, particularly under scrutiny. According to Kaufmann, close collaboration between privacy teams and business leaders helps ensure ethics functions as an enabler rather than an obstacle.
Even with strong operational controls, ethical AI governance ultimately hinges on leadership. Executive ownership is what turns frameworks into sustained practice, ensuring responsibility is clearly defined and ethics remains embedded long after systems are deployed. Ethical AI succeeds when leaders take responsibility for outcomes, continuously monitor systems and integrate ethical oversight into everyday workflows, Knesek said.
To translate responsibility into action, organizations can take the following steps to operationalize ethical AI:
Map systems and ethical priorities. Identify all AI systems in use, flag high-risk applications and determine which ethical principles are most important for each use case. (A minimal inventory sketch follows this list.)
Establish principles and oversight. Set clear ethical standards and define accountability through governance bodies, escalation paths and documented requirements.
Embed ethics into practice. Integrate ethical reviews into development, deployment and vendor management so day-to-day decisions consistently align with stated principles.
Enable governance alongside AI. Continuously monitor systems, report to boards and integrate AI governance with enterprise risk management and environmental, social and governance frameworks as systems evolve.
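As one way to make the first step concrete, the following Python sketch models a simple AI system inventory with a risk tier and ethical priorities per use case. The field names, tiers and example systems are hypothetical assumptions, not a formal standard.

```python
# Minimal sketch of an AI system inventory for governance review.
# Field names, risk tiers and example entries are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable business owner
    risk_tier: str              # e.g., "high", "limited", "minimal"
    ethical_priorities: list[str] = field(default_factory=list)
    requires_human_review: bool = False

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        owner="HR Analytics",
        risk_tier="high",
        ethical_priorities=["fairness", "transparency"],
        requires_human_review=True,
    ),
    AISystemRecord(
        name="support-ticket-router",
        owner="Customer Ops",
        risk_tier="minimal",
        ethical_priorities=["accountability"],
    ),
]

# Surface high-risk systems so oversight bodies know where ethical
# review and escalation paths are required.
for record in inventory:
    if record.risk_tier == "high":
        print(f"Review required: {record.name} (owner: {record.owner})")
```

Even a lightweight register like this gives boards something auditable: who owns each system, how risky it is and which principles its reviews must test against.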
For Knesek, these steps are what transform ethics from aspiration into execution. "If you can't demonstrate how AI decisions are made, reviewed and corrected over time, you don't have governance, you have intent," she said. "Strong governance makes ethics repeatable, defensible and scalable."
As AI adoption accelerates, these practices elevate ethics from a governance task to a standing leadership responsibility -- one that increasingly defines long-term resilience and trust from the boardroom down.
"AI will continue to evolve," Knesek added. "Organizations that earn lasting trust will be the ones that treat ethics as a permanent leadership responsibility, not a temporary response to regulation."
Kinza Yasar is a technical writer for Informa TechTarget's AI and Emerging Tech group and has a background in computer networking.