
Enterprise businesses use NIST AI RMF to guide AI use

The NIST AI RMF guides both developers and businesses in crafting responsible AI systems.

The U.S. government hasn't adopted artificial intelligence regulation, but it has crafted numerous pieces of AI guidance to help businesses start safely implementing and deploying the technology.

The U.S. first released its Blueprint for an AI Bill of Rights in October 2022, which identifies five principles for the design and use of AI. On the heels of the AI Bill of Rights, the National Institute of Standards and Technology released its AI Risk Management Framework (RMF) in January 2023. The NIST AI RMF promotes responsible AI use and offers detailed guidance for businesses across the AI lifecycle on identifying and managing AI risks.

According to Evi Fuelle, director of global policy at AI governance platform Credo AI, the NIST AI RMF has been "instrumental" in providing a comprehensive, contextual policy framework for managing AI risks. It can be used to address multiple AI use cases, ranging from resume screening and credit risk prediction to fraud detection, unmanned vehicles and facial recognition systems, she said. Fuelle spoke during a panel Wednesday at the International Association of Privacy Professionals' Global Privacy Summit 2024 in Washington, D.C.

"There's a lot of reason to trust this as a framework where enterprises can begin managing their risk," she said.

Applying the NIST AI RMF

Going forward, many enterprise businesses are likely to adopt AI in one way or another, Fuelle said. The NIST AI RMF is a valuable tool because it was created through a multi-stakeholder process, allowing it to become a well-rounded piece of guidance for enterprise businesses, she said.

Fuelle said that managing privacy risk is just one aspect of the broader challenges posed by emerging AI models. This is where the NIST AI RMF comes in handy. For example, AI impact assessments should include details such as model name, training data, how the model was trained, what the model is expected to do, and known and unknown outcomes.
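
For illustration, here is a minimal sketch of how such an assessment could be captured as a structured record. The schema and field names are hypothetical, not drawn from the framework itself:

# A minimal, hypothetical AI impact assessment record, loosely based on
# the details Fuelle listed. Field names are illustrative, not from the RMF.
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    model_name: str          # which model is being assessed
    training_data: str       # what data the model was trained on
    training_method: str     # how the model was trained
    intended_use: str        # what the model is expected to do
    known_outcomes: list[str] = field(default_factory=list)
    unknown_outcomes: list[str] = field(default_factory=list)  # open questions

assessment = AIImpactAssessment(
    model_name="resume-screening-v2",
    training_data="Historical hiring records, 2015-2022",
    training_method="Supervised training on recruiter-labeled outcomes",
    intended_use="Rank applicants for recruiter review",
    known_outcomes=["Flags incomplete applications"],
    unknown_outcomes=["Behavior on non-U.S. resume formats"],
)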

"A lot of the enterprises we are working with come to us and ask, 'What does good look like for my enterprise and my use cases? I'm doing this with AI in this sector. What can I look to or what do you recommend for how I can build trust in my use of AI in that way?'" she said.


Fuelle pointed to the Govern 1.6 section of the NIST AI RMF as an important step in the AI governance process. Govern 1.6 asks organizations to create an inventory of their AI use cases. Beyond the NIST AI RMF, inventorying AI use cases also appears in regulations and standards, including the Office of Management and Budget's recent AI policy for federal agencies and the European Union's AI Act.

She emphasized the importance of businesses understanding how and where the organization applies AI, which is where the Govern 1.6 section becomes valuable.
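
To make that concrete, here is a minimal sketch of what a Govern 1.6-style inventory could look like. The schema is invented for illustration; the RMF specifies the outcome, not a format:

# A hypothetical AI use-case inventory of the kind Govern 1.6 calls for.
inventory = [
    {
        "use_case": "Resume screening",
        "business_unit": "Human resources",
        "model": "resume-screening-v2",
        "risk_tier": "high",               # hypothetical internal rating
        "owner": "hr-ml-team@example.com",
    },
    {
        "use_case": "Fraud detection",
        "business_unit": "Payments",
        "model": "fraud-scorer-v7",
        "risk_tier": "medium",
        "owner": "risk-eng@example.com",
    },
]

# An inventory like this lets a governance team answer "how and where
# does the organization apply AI?" with a simple query.
high_risk = [e["use_case"] for e in inventory if e["risk_tier"] == "high"]
print(high_risk)  # ['Resume screening']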

For enterprise businesses just getting started on crafting an AI governance policy, an interdisciplinary team of people across an organization serves a vital purpose in identifying and managing AI risks, said Reva Schwartz, research scientist and principal investigator for AI bias at NIST. Schwartz spoke during the panel with Fuelle.

The NIST AI RMF features a "Core" of four functions (Govern, Map, Measure and Manage) that provide actions and outcomes for enterprise businesses to manage AI risk, Schwartz said. The Govern function includes the Govern 1.6 section on inventorying AI use cases.

"For those just starting with AI governance, you might not have a set of processes for measuring various aspects of risks that might occur," Schwartz said. "We give outcomes that organizations can meet."

NIST AI RMF a start, but not the end

While the NIST AI RMF serves as a guide for breaking down AI risk into clear actions, it's only a framework and not a formal standard or requirement for businesses, said Ashley Casovan, managing director of the IAPP AI Governance Center. Casovan spoke on the panel with Fuelle and Schwartz.

Casovan said guides like the NIST AI RMF are a steppingstone to understanding technology governance better. She explained that they are crucial for setting up effective governance, determining who should be involved, evaluating the impact of various technologies and assessing potential risks and harms.

"These accessible guides are a good starting point," she said.

However, the NIST AI RMF and other guidance, including the OMB AI policy, could eventually evolve into industry standards for AI use, particularly as large AI-using companies that contract with the federal government adhere to the guidance, Fuelle said.

"It's critical that enterprises at every level don't sleep on the NIST AI RMF," Fuelle said. "It's the foundation of a lot of what we're seeing globally."

The NIST AI RMF will also soon feature a section devoted to generative AI, which will be released for public comment in July.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
