https://www.techtarget.com/searchenterpriseai/feature/Top-resources-to-build-an-ethical-AI-framework
As generative AI (GenAI) gains a stronger foothold in the enterprise, executives are being called upon to pay greater attention to AI ethics -- the moral principles and methods that shape how AI technology is developed and used.
An ethical approach to AI matters not only for making the world a better place but also for protecting a company's bottom line: it means understanding the financial and reputational impact of biased data, hallucinations, weak transparency and explainability, and the other limitations that erode public trust in AI.
"The most impactful frameworks or approaches to addressing ethical AI issues … take all aspects of the technology -- its usage, risks and potential outcomes -- into consideration," said Tad Roselund, managing director and senior partner at Boston Consulting Group.
Many firms approach the development of ethical AI frameworks from a purely values-based position, Roselund said. It's important to take a holistic approach to ethical AI, one that integrates strategy with process and technical controls, cultural norms and governance. Those three elements -- controls, cultural norms and governance -- can help institute responsible AI policies and initiatives. And it all starts with establishing a set of principles around AI usage.
"Oftentimes, businesses and leaders are narrowly focused on one of these elements when they need to focus on all of them," Roselund reasoned. Addressing any one element might be a good starting point, but by considering all three elements -- controls, cultural norms and governance -- businesses can devise an all-encompassing ethical AI framework. This approach is especially important when it comes to GenAI and its ability to democratize the use of AI.
Enterprises must also instill AI ethics in those who develop and use AI tools and technologies. Open communication, educational resources, and enforced guidelines and processes to ensure the proper use of AI, Roselund advised, can further bolster an internal AI ethics framework that addresses GenAI.
The following is an alphabetical list of standards, tools, techniques and other resources to help shape a company's internal ethical AI framework.
The AI Now Institute focuses on the social implications of AI and on policy research in responsible AI. Research areas include algorithmic accountability, antitrust concerns, biometrics, worker data rights, large-scale AI models and privacy. The institute's April 2023 report, "AI Now 2023 Landscape: Confronting Tech Power," provides a deep dive into many ethical issues and can be helpful when developing a responsible AI policy.
The center fosters research into the big questions related to the ethics and governance of AI. It has contributed to the dialogue about information quality, influenced policymaking on algorithms in criminal justice, supported the development of AI governance frameworks, studied algorithmic accountability and collaborated with AI vendors.
JTC 21, a joint technical committee of the European standardization bodies CEN and CENELEC, is an ongoing EU initiative to develop responsible AI standards. The group focuses on producing standards for the European market and on informing EU legislation, policies and values. It also plans to specify technical requirements for characterizing transparency, robustness and accuracy in AI systems.
The ITEC Handbook was a collaborative effort between Santa Clara University's Markkula Center for Applied Ethics and the Vatican to develop a practical, incremental roadmap for technology ethics. The handbook includes a five-stage maturity model with specific, measurable steps that enterprises can take at each level of maturity. It also promotes an operational approach to implementing ethics as an ongoing practice, akin to DevSecOps for ethics. The core idea is to bring legal, technical and business teams together during the early stages of ethical AI work to root out bugs when they're much cheaper to fix than they would be after deployment.
The initiative focuses on the ethical issues of incorporating GenAI into future autonomous and agentic systems. In particular, the group is exploring how to extend traditional safety concepts to make scientific integrity and public safety priorities and, of note, how the concept of safety has sometimes been misappropriated and misunderstood with regard to new GenAI risks. The working group is developing new standards, toolkits, AI safety champions and awareness campaigns to improve the ethical use of GenAI tools and infrastructure.
The standard describes how an organization can manage risks specifically related to AI. It can help standardize the technical language characterizing the underlying principles and how these principles apply to developing, provisioning or offering AI systems. It also covers policies, procedures and practices to assess, treat, monitor, review and record risk. It's highly technical and oriented toward engineers rather than business experts.
NIST's AI Risk Management Framework (AI RMF 1.0) guides government agencies and the private sector in managing new AI risks and promoting responsible AI. According to NIST, the framework is "intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic."
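AI RMF 1.0 organizes risk management into four core functions -- Govern, Map, Measure and Manage. As a rough illustration of how a team might track its own coverage of those functions, the sketch below uses a simple Python structure; the checkpoint items are hypothetical examples for illustration, not NIST language.

```python
# Minimal sketch: tracking coverage of the AI RMF's four core functions.
# The function names (Govern, Map, Measure, Manage) come from NIST AI RMF 1.0;
# the checkpoint strings are illustrative assumptions, not NIST requirements.
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str
    checkpoints: list[str] = field(default_factory=list)
    completed: set[str] = field(default_factory=set)

    def coverage(self) -> float:
        """Fraction of this function's checkpoints marked complete."""
        return len(self.completed) / len(self.checkpoints) if self.checkpoints else 0.0

rmf = [
    RmfFunction("Govern", ["AI policy approved", "Accountable owners assigned"]),
    RmfFunction("Map", ["Use case and context documented", "Affected groups identified"]),
    RmfFunction("Measure", ["Bias and robustness metrics defined", "Evaluation results recorded"]),
    RmfFunction("Manage", ["Risk treatment plan in place", "Incident response path defined"]),
]

rmf[0].completed.add("AI policy approved")
for fn in rmf:
    print(f"{fn.name}: {fn.coverage():.0%} of checkpoints complete")
```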
Nvidia NeMo Guardrails is an open source toolkit that provides a flexible interface for defining the specific behavioral rails bots need to follow, using the Colang modeling language. One chief data scientist said his company uses NeMo Guardrails to prevent a support chatbot on a lawyer's website from providing answers that might be construed as legal advice.
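To make the idea concrete, here is a minimal sketch of how such a rail might be defined with NeMo Guardrails and Colang, following the toolkit's documented configuration pattern. The legal-advice topic, example phrases, refusal message and model settings are illustrative assumptions rather than anything prescribed by the toolkit or the article, and exact API details can vary by version.

```python
# Minimal sketch of a NeMo Guardrails rail that deflects legal-advice questions.
# Requires the nemoguardrails package and credentials for the configured LLM engine.
from nemoguardrails import LLMRails, RailsConfig

# Colang rails: recognize requests for legal advice and route them to a fixed refusal.
colang_content = """
define user ask legal advice
  "Can I sue my landlord over this?"
  "Is this contract enforceable?"

define bot refuse legal advice
  "I can't provide legal advice. Please consult a licensed attorney."

define flow handle legal advice
  user ask legal advice
  bot refuse legal advice
"""

# The model shown here is a placeholder; point this at whatever LLM your stack uses.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Can I sue my landlord?"}])
print(response["content"])  # Expected: the refusal message defined above.
```

The value of this pattern is that the refusal behavior lives in the rails configuration rather than inside the model, so it can be reviewed, versioned and governed like any other policy artifact.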
PAI has been exploring many fundamental assumptions about how to build AI systems that benefit all stakeholders. Founding members include executives from Amazon, Facebook, Google, Google DeepMind, Microsoft and IBM. Supporting members include more than a hundred partners from academia, civil society, industry and various nonprofits. One novel aspect of this group has been the creation of an AI Incident Database to assess, manage and communicate newly discovered AI risks and harms. It has also explored many of the concerns related to the opportunities and harms of AI-generated content.
HAI provides ongoing research and guidance into best practices for human-centered AI. One early initiative in collaboration with Stanford Medicine is Responsible AI for Safe and Equitable Health, which addresses ethical and safety issues surrounding AI in health and medicine.
This paper by Matthias Samwald, Robert Praas and Konstantin Hebenstreit takes a Socratic approach to identify underlying assumptions, contradictions and errors through dialogue and questioning about truthfulness, transparency, robustness and alignment of ethical principles. One goal is to develop AI meta-systems in which two or more component AI models complement, critique and improve their mutual performance.
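As a loose illustration of that meta-system idea, the sketch below shows a generate-critique-revise loop between two model roles. The call_model helper and its canned responses are hypothetical stand-ins for a real LLM API; nothing here comes from the paper itself.

```python
# Loose sketch of an AI meta-system: one model role drafts an answer, a second
# critiques it, and the first revises. call_model is a placeholder for a real LLM
# call; the canned strings just let the loop run end to end without any service.
def call_model(role: str, prompt: str) -> str:
    canned = {
        "generator": "Draft answer (replace with a real model response).",
        "critic": "Critique: list of assumptions and possible errors.",
    }
    return canned[role]

def generate_with_critique(question: str, rounds: int = 2) -> str:
    answer = call_model("generator", f"Answer carefully and state your assumptions:\n{question}")
    for _ in range(rounds):
        critique = call_model(
            "critic",
            f"Question: {question}\nDraft answer: {answer}\n"
            "List factual errors, hidden assumptions and contradictions.",
        )
        answer = call_model(
            "generator",
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nRevise the answer to address the critique.",
        )
    return answer

print(generate_with_critique("Is this model's training data free of sampling bias?"))
```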
This June 2023 white paper includes 30 "action-oriented" recommendations to "navigate AI complexities and harness its potential ethically." It includes sections on responsible development and release of GenAI, open innovation and international collaboration, and social progress.
Ethical AI resources are a sound starting point toward tailoring and establishing a company's ethical AI framework and launching responsible AI policies and initiatives. The following best practices can help achieve these goals:
Researchers, enterprise leaders and regulators are still investigating the ethical issues surrounding responsible AI. Increasingly, enterprises will need to treat ethics not just as a liability or a box to check, but as a lens for expanding opportunities to collect better data and build trust with customers and regulators.
Legal challenges involving copyright and intellectual property protection will also have to be addressed. Issues related to GenAI and hallucinations will take longer to resolve, since some of those problems are inherent in the design of today's AI systems. In the U.S., copyright regulators have taken the position that AI-generated content is copyrightable only when there is some level of human involvement. Many questions remain about training data, which sometimes includes copyrighted content, violates terms of service or even involves blatant pirating of copyrighted works by large AI developers such as Meta. The courts are still deciding these matters, and risk-averse companies might consider steering clear of AI tools and services built on some of these practices -- even when they are proffered as "open AI."
Enterprises and data scientists will also need to solve issues of bias and inequality in training data and machine learning algorithms. In addition, issues relating to AI system security, including cyberattacks against large language models, will require continuous engineering and design improvements to keep pace with increasingly sophisticated criminal adversaries. The NIST AI Risk Management Framework could help mitigate some of these concerns.
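As one small, concrete example of the kind of bias check such work involves, the following sketch compares positive-outcome rates across groups in a labeled dataset. The field names and the idea of flagging a low ratio follow a common "four-fifths"-style convention and are illustrative assumptions, not a standard mandated by NIST or by this article.

```python
# Sketch of a basic fairness check: compare positive-label rates per group in
# training data and compute the ratio of the lowest rate to the highest.
# Record fields ("group", "label") and the interpretation are illustrative only.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Positive-label rate per group, for records like {"group": "A", "label": 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["label"]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records: list[dict]) -> float:
    """Lowest group selection rate divided by the highest; closer to 1.0 is more balanced."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
print(disparate_impact_ratio(sample))  # 0.5 -> worth investigating before training
```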
In the near future, Pallath sees AI evolving toward enhancing human capabilities in collaboration with AI technologies rather than supplanting humans entirely. "Ethical considerations," he explained, "will revolve around optimizing AI's role in augmenting human creativity, productivity and decision-making, all while preserving human control and oversight."
AI ethics will continue to be a fast-growing movement for the foreseeable future, Green said. "With AI," he acknowledged, "now we have created thinkers outside of ourselves and discovered that, unless we give them some ethical thoughts, they won't make good choices."
AI ethics is never done. Ethical judgments might need to change as conditions change. "We need to maintain our awareness and skill," Green said, "so that, if AI is not benefiting society, we can make the necessary improvements."
For example, new GenAI tools could give nations a leg up in critical areas such as warfare or more effective cyberattacks, yet merely developing these tools could also empower adversaries. This could result in a zero-sum situation that benefits no one and costs money in the process. These issues are getting thornier, particularly with the growth of open source AI that could also empower nation-states and malicious adversaries. However, stifling open source AI development could also disempower more efficient and effective AI development in the long run.
Editor's note: This article was updated in March 2025 to include additional ethical AI framework resources.
George Lawton is a journalist based in London. Over the last 30 years, he has written more than 3,000 stories about computers, communications, knowledge management, business, health and other areas that interest him.
03 Mar 2025